CLIP

Image-Based CLIP-Guided Essence Transfer

This paper proposes a novel method for transferring the semantic properties that constitute a high-level textual description from a target image to a source image, without changing the identity of the source. The method works in CLIP's image latent space, which is more stable and expressive than its textual latent space.
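
As a rough illustration (not the paper's actual objective), the sketch below uses OpenAI's `clip` package to compare images directly in CLIP's image latent space; the file paths and the alignment score are hypothetical placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed(path):
    # Encode an image into CLIP's image latent space (L2-normalized).
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(image)
    return feat / feat.norm(dim=-1, keepdim=True)

source = embed("source.jpg")  # identity to preserve (hypothetical path)
target = embed("target.jpg")  # essence to transfer (hypothetical path)
blend = embed("blend.jpg")    # candidate output (hypothetical path)

# One plausible guidance signal: the shift the edit induces in CLIP image
# space should align with the direction from the source to the target.
edit_direction = blend - source
essence_direction = target - source
alignment = torch.nn.functional.cosine_similarity(edit_direction, essence_direction)
print(f"CLIP-space alignment: {alignment.item():.3f}")
```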

No Token Left Behind: Explainability-Aided Image Classification and Generation

The paper presents a novel use of explainability to perform zero-shot tasks such as image classification and generation. It shows that CLIP guidance based on pure similarity scores between an image and a text prompt is unstable, since the scores can be driven by irrelevant or partial evidence, and that explainability signals can be used to stabilize them.
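
For context, the sketch below shows the plain similarity-score baseline that the paper identifies as unstable, using OpenAI's `clip` package for zero-shot classification; the image path and label prompts are placeholders, and the paper's explainability-based stabilization is not shown here.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # hypothetical path
labels = ["a photo of a dog", "a photo of a cat", "a photo of a horse"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # Cosine-similarity logits between the image and each text prompt.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```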

Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers (Oral)

The paper presents an explainability method that covers all types of Transformer attention, including the bi-modal and encoder-decoder varieties. The method achieves state-of-the-art results for CLIP, DETR, LXMERT, and more.
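
As a hedged sketch of the general idea, the snippet below aggregates gradient-weighted self-attention maps across layers into a token-relevance matrix; the function name and the hook-based collection of attentions and gradients are assumptions, and the paper's full rules for co-attention and encoder-decoder attention are omitted.

```python
import torch

def relevance_from_attention(attn_maps, attn_grads):
    """Simplified, hypothetical sketch for a self-attention stack only.

    attn_maps, attn_grads: lists of tensors shaped (heads, tokens, tokens),
    one per layer, e.g. collected with forward/backward hooks.
    """
    num_tokens = attn_maps[0].shape[-1]
    relevance = torch.eye(num_tokens)  # start from the identity (self-relevance)
    for attn, grad in zip(attn_maps, attn_grads):
        # Weight each attention map by its gradient, keep positive
        # contributions, and average over heads.
        cam = (grad * attn).clamp(min=0).mean(dim=0)
        # Accumulate across layers while keeping a residual (identity) path.
        relevance = relevance + cam @ relevance
    return relevance
```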