Advanced Reasoning with Large Language Models with Chain of Thought Prompting | Paper explained!

The NLP Lab
Paper link: https://arxiv.org/abs/2201.11903

Abstract: We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.

#artificialintelligence #nlproc #nlp #deeplearning #ml
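To make the method concrete, below is a minimal sketch in Python of what chain of thought prompting looks like: a few worked exemplars whose answers spell out the intermediate reasoning steps are prepended to the target question, and the whole text is submitted to the model as a single prompt. The exemplar and the target question are the tennis-ball and cafeteria examples shown in the paper; the query_model call at the end is a hypothetical placeholder for whichever LLM API is actually used, not part of the paper.

# Minimal chain-of-thought prompting sketch.
# Each exemplar pairs a question with an answer that writes out the
# intermediate reasoning steps before stating the final answer.
COT_EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
        "6 tennis balls. 5 + 6 = 11. The answer is 11.",
    ),
]

def build_cot_prompt(question: str) -> str:
    """Prepend the chain-of-thought exemplars to the target question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in COT_EXEMPLARS]
    # Leave the final answer open so the model continues with its own reasoning.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch and "
        "bought 6 more, how many apples do they have?"
    )
    print(prompt)
    # In practice the prompt is sent to a large language model, e.g.:
    # answer = query_model(prompt)  # query_model is a hypothetical API call

The paper uses eight such exemplars on GSM8K with a 540B-parameter model to reach its reported state-of-the-art accuracy; only one is shown here to keep the sketch short.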
Published 2 years ago, on 1401/10/26.
11,751 views