Flamingo: Visual Language Model for Few-Shot Learning

Data Science Gems
694 views · 10 months ago
Flamingo is a family of Visual Language Models. It includes key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. Flamingo models are evaluated on open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
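To make the in-context few-shot setup concrete, here is a minimal sketch of how a Flamingo-style prompt interleaves images and text: a few (image, text) support examples followed by the query image, after which the model continues the text. The `<image>` placeholder token and the `build_prompt` helper are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of a Flamingo-style few-shot prompt. The "<image>"
# placeholder and the "Output:" formatting are assumptions for illustration.

def build_prompt(support_examples, query_image_tag="<image>"):
    """Concatenate (image, text) support pairs, then the query image."""
    parts = []
    for image_tag, caption in support_examples:
        parts.append(f"{image_tag} Output: {caption}")
    # The model generates the answer as a continuation after the final "Output:".
    parts.append(f"{query_image_tag} Output:")
    return " ".join(parts)

shots = [
    ("<image>", "A flamingo standing in shallow water."),
    ("<image>", "Two dogs playing in the snow."),
]
print(build_prompt(shots))
```

Switching from captioning to, say, visual question-answering only changes the text of the support examples; the model itself is not retrained, which is what the abstract means by adapting "simply by prompting the model with task-specific examples".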

In this video, I will talk about the following: What tasks can Flamingo models do? What is the architecture of Flamingo models? How do Flamingo models perform?

For more details, please look at https://arxiv.org/pdf/2204.14198.pdf

Alayrac, Jean-Baptiste, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc et al. "Flamingo: a visual language model for few-shot learning." Advances in Neural Information Processing Systems 35 (2022): 23716-23736.
Published on 1402/09/11.