In recent years, the attention mechanism, which is naturally interpretable, has become one of the most common building blocks of neural networks, making it easy to produce intuitive explanations of model predictions. However, applications of such explanations beyond accountability and interpretability remain limited. In this talk, Hila will present her latest research on leveraging attention to significantly improve the accuracy and robustness of state-of-the-art large neural networks using limited computational resources. This is achieved by directly manipulating the attention maps according to intuitive objectives, and it can be applied to a variety of tasks ranging from object classification to image generation.
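To give a rough sense of what "manipulating attention maps based on intuitive objectives" can look like in practice, the sketch below fine-tunes a toy attention classifier with an auxiliary loss that discourages attention mass from falling outside a foreground mask, alongside the usual classification loss. This is an illustrative assumption, not the method presented in the talk; the model, the `attention_guidance_loss` form, and the mask input are all hypothetical.

```python
# A minimal, self-contained sketch (not the speaker's actual method):
# fine-tune a toy attention classifier with an auxiliary loss that steers
# its attention map toward a given foreground mask, in addition to the
# standard classification loss. Names, shapes, and the loss form are
# assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyAttentionClassifier(nn.Module):
    """A single-head attention-pooling classifier over patch tokens."""

    def __init__(self, dim: int = 32, num_classes: int = 10):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))  # learned [CLS]-like query
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, num_patches, dim)
        scores = self.key(tokens) @ self.query / tokens.shape[-1] ** 0.5
        attn = scores.softmax(dim=-1)  # (batch, num_patches)
        pooled = (attn.unsqueeze(-1) * self.value(tokens)).sum(dim=1)
        return self.head(pooled), attn


def attention_guidance_loss(attn: torch.Tensor, fg_mask: torch.Tensor) -> torch.Tensor:
    """Penalize attention mass that falls outside the foreground mask."""
    # fg_mask: (batch, num_patches), 1 for foreground patches, 0 otherwise.
    background_mass = (attn * (1.0 - fg_mask)).sum(dim=-1)
    return background_mass.mean()


if __name__ == "__main__":
    model = ToyAttentionClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch: patch tokens, labels, and a per-patch foreground mask.
    tokens = torch.randn(8, 16, 32)
    labels = torch.randint(0, 10, (8,))
    fg_mask = (torch.rand(8, 16) > 0.5).float()

    logits, attn = model(tokens)
    loss = F.cross_entropy(logits, labels) + 0.1 * attention_guidance_loss(attn, fg_mask)
    loss.backward()
    optimizer.step()
    print(f"total loss: {loss.item():.4f}")
```

The key design point in this kind of approach is that the attention objective is added on top of the task loss, so the explanation signal shapes where the model looks without requiring architectural changes or large amounts of extra compute.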