K. Kokilepersaud, M. Prabhushankar, and G. AlRegib
In this work, we demonstrate that appropriately integrating granularity information into volumetric representation learning can improve seismic representations for downstream interpretation tasks. Representation learning approaches, which operate on intermediate outputs of neural networks before the downstream task, are seeing increased popularity within annotation-scarce domains because they train without explicit access to labeled data. Among these methods, contrastive learning approaches such as SimCLR have achieved state-of-the-art performance by enforcing a representation space in which similar pairs of data (positive pairs) project closer to each other than dissimilar pairs of data (negative sets). However, these methods generate positive pairs through various types of random augmentations rather than by enforcing any form of seismic intuition. A major issue is that such approaches have been shown to perform sub-optimally in settings with high granularity, where granularity refers to structures that are difficult to distinguish from their surrounding context, a condition common in seismic data. We propose a novel loss function that encourages both greater spread between sections located farther apart in a volume and additional spread between samples that are more similar to each other, thereby promoting the recognition of fine-grained details within a volume.
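To make the contrastive objective concrete, the following is a minimal PyTorch sketch of a SimCLR-style loss with a distance-based weighting on the negatives, so that sections farther apart in the volume are pushed farther apart in the embedding space. The function name, the `alpha` parameter, and the choice of the positionally nearest section as each anchor's positive are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def distance_weighted_contrastive_loss(z, positions, temperature=0.1, alpha=1.0):
    """Illustrative SimCLR-style loss with a positional weighting term.

    z:         (N, D) embeddings of seismic sections
    positions: (N,) inline/crossline indices of each section in the volume
    alpha:     strength of the distance-based weighting (hypothetical knob)
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                      # pairwise cosine similarities
    n = z.size(0)

    # Positional distance between every pair of sections within the volume.
    dist = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs().float()
    # Negatives that are farther away in the volume receive larger weights,
    # encouraging additional spread between distant sections.
    weights = 1.0 + alpha * dist / dist.max().clamp(min=1.0)

    mask = ~torch.eye(n, dtype=torch.bool, device=z.device)  # exclude self-pairs
    exp_sim = torch.exp(sim) * weights

    # Treat the positionally nearest other section as the positive for each anchor.
    pos_idx = (dist + torch.eye(n, device=z.device) * 1e9).argmin(dim=1)
    pos_sim = sim[torch.arange(n, device=z.device), pos_idx]

    denom = (exp_sim * mask).sum(dim=1)
    loss = -(pos_sim - torch.log(denom)).mean()
    return loss
```

In this sketch, the weighting term plays the role of the proposed spread: pairs of sections with larger positional separation contribute more strongly to the denominator and are therefore repelled more, while nearby, more similar sections form the positives.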