24. Input representation
Count each word by its number of occurrences; word order is ignored (Bag-of-Words model).
(Pos) I'm really loving this film.
(Neg) I hate this film because the film really …
Word      Sentence #1   Sentence #2
I                       1
I'm       1
because                 1
film      1             2
hate                    1
loving    1
really    1             1
the                     1
this      1             1
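To make the construction concrete, below is a minimal Python sketch (not part of the original slides) that reproduces the word-count table above; the regex tokenizer and lowercasing are assumptions, since the slide does not specify a tokenization rule.

```python
# Minimal sketch: Bag-of-Words counts for the two example sentences.
# Tokenization (regex on letters and apostrophes, lowercased) is an
# assumption; the slide only states that word order is ignored.
from collections import Counter
import re

docs = [
    "I'm really loving this film.",                   # (Pos) sentence #1
    "I hate this film because the film really ...",   # (Neg) sentence #2
]

def bag_of_words(text):
    # Split into word tokens; discard order, keep only occurrence counts.
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    return Counter(tokens)

counts = [bag_of_words(d) for d in docs]
vocab = sorted(set().union(*counts))

# Print one row per vocabulary word, one count column per sentence
# (0 means the word does not occur in that sentence).
for word in vocab:
    print(f"{word:10s} " + " ".join(f"{c[word]:2d}" for c in counts))
```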