Attention for RNN Seq2Seq Models (1.25x speed recommended)
30.2K views · 3 years ago
Next Video: Self-Attention for RNN (1.25x speed re...
Attention was originally proposed by Bahdanau et al. in 2015 and has since found much broader application in NLP and computer vision. This lecture covers only attention for RNN sequence-to-sequence models; viewers are assumed to already be familiar with RNN sequence-to-sequence models.
Slides: https://github.com/wangshusen/DeepLea...
Reference:
Bahdanau, Cho, & Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
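The attention mechanism the lecture covers can be summarized in one step: for each decoder state, score every encoder hidden state, normalize the scores with a softmax, and take the weighted sum as the context vector. Below is a minimal NumPy sketch of Bahdanau-style (additive) attention; the parameter names `W`, `U`, `v` and all shapes are illustrative assumptions, not taken from the lecture slides.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(encoder_states, decoder_state, W, U, v):
    """Return the context vector: a weighted sum of encoder states,
    with weights alpha_i = softmax(v^T tanh(W h_i + U s))."""
    scores = np.array([
        v @ np.tanh(W @ h + U @ decoder_state)  # one scalar score per h_i
        for h in encoder_states
    ])
    alphas = softmax(scores)  # attention weights, sum to 1
    context = sum(a * h for a, h in zip(alphas, encoder_states))
    return context, alphas

# Toy example: 3 encoder states of dim 4, decoder state of dim 4,
# hidden score dimension 5 (all sizes are arbitrary for illustration).
rng = np.random.default_rng(0)
H = [rng.standard_normal(4) for _ in range(3)]
s = rng.standard_normal(4)
W = rng.standard_normal((5, 4))
U = rng.standard_normal((5, 4))
v = rng.standard_normal(5)
context, alphas = additive_attention(H, s, W, U, v)
```

The decoder then consumes `context` alongside its own state at every step, which is what lets it "look back" at the whole input sequence instead of relying on a single fixed encoding.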
Published on 1400/01/27 (Iranian calendar).
30,269 views