ToSA: Token Selective Attention for Efficient Vision Transformers
CoRR (2024)
Abstract
In this paper, we propose a novel token selective attention approach, ToSA,
which can identify tokens that need to be attended to as well as those that can
skip a transformer layer. More specifically, a token selector parses the
current attention maps and predicts the attention maps for the next layer,
which are then used to select the important tokens that should participate in
the attention operation. The remaining tokens simply bypass the next layer and
are concatenated with the attended ones to re-form a complete set of tokens. In
this way, we reduce the quadratic computation and memory costs, as fewer tokens
participate in self-attention, while maintaining the features for all the image
patches throughout the network, which allows the approach to be used for dense
prediction tasks. Our experiments show that by applying ToSA, we can
significantly reduce computation costs while maintaining accuracy on the
ImageNet classification benchmark. Furthermore, we evaluate on the dense
prediction task of monocular depth estimation on NYU Depth V2, and show that we
can achieve similar depth prediction accuracy using a considerably lighter
backbone with ToSA.
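The core mechanism described above — attend over a selected subset of tokens and let the rest bypass the layer, then re-form the full token set — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `tosa_layer`, the `keep_ratio` parameter, and the use of precomputed per-token importance scores are assumptions (in the paper, a learned token selector predicts next-layer attention maps; here the scores are simply given).

```python
import numpy as np

def tosa_layer(tokens, importance, keep_ratio=0.5):
    """Hypothetical sketch of ToSA-style token-selective attention.

    tokens:     (N, D) token features.
    importance: (N,) per-token importance scores (stand-in for the
                token selector's predicted attention maps).
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))

    # Keep the top-k important tokens for attention; the rest skip the layer.
    order = np.argsort(-importance)
    keep_idx = order[:k]

    selected = tokens[keep_idx]

    # Standard scaled dot-product self-attention over the selected subset
    # only, so the quadratic cost is in k, not N.
    logits = selected @ selected.T / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    attended = weights @ selected

    # Re-form the complete token set: attended tokens are written back in
    # place, skipped tokens pass through unchanged, so features for all
    # image patches are preserved for dense prediction heads.
    out = tokens.copy()
    out[keep_idx] = attended
    return out
```

Because the skipped tokens are carried through unchanged and merged back, downstream layers (and dense-prediction heads) still see a feature for every image patch, which is the property the abstract highlights.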