Strategies to Monitor LLM Hallucinations | Webinar

NannyML
How to monitor LLMs deployed to production. We focus on state-of-the-art solutions for detecting hallucinations, split into two types:

1. LLM self-evaluation
2. Uncertainty Quantification

In the LLM self-evaluation part, we cover using an LLM (potentially the same one) to quantify the quality of an answer, and walk through state-of-the-art algorithms such as SelfCheckGPT and LLM-eval. A minimal sketch of this idea follows below.
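To make the self-evaluation idea concrete, here is a minimal, hypothetical SelfCheckGPT-style sketch (not code from the webinar): sample several answers to the same question and ask the same LLM whether each sample supports the original answer. It assumes an OpenAI-compatible client; the model name and judging prompt are placeholders.

from openai import OpenAI

client = OpenAI()

def self_consistency_score(question: str, answer: str, n_samples: int = 3) -> float:
    # Draw stochastic samples for the same question.
    samples = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you deploy
        messages=[{"role": "user", "content": question}],
        n=n_samples,
        temperature=1.0,
    )
    votes = []
    for choice in samples.choices:
        # Ask the (same) LLM to judge whether the sample supports the answer.
        judge = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Context: {choice.message.content}\n"
                    f"Sentence: {answer}\n"
                    "Is the sentence supported by the context? Answer Yes or No."
                ),
            }],
            temperature=0.0,
        )
        votes.append(judge.choices[0].message.content.strip().lower().startswith("yes"))
    # Fraction of samples that contradict the answer: higher means more likely hallucinated.
    return 1.0 - sum(votes) / len(votes)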

In the Uncertainty Quantification part, we discuss algorithms that leverage token probabilities to estimate the quality of model responses, from simple accuracy estimation to more advanced methods for estimating Semantic Uncertainty or any classification metric. A small sketch of the token-probability approach follows below.
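As an illustration of the token-probability approach (again a hypothetical sketch, not webinar code), one can request log-probabilities for the generated tokens and use their average as a rough confidence proxy. It assumes an OpenAI-compatible client; the model name is a placeholder.

import math
from openai import OpenAI

client = OpenAI()

def answer_with_confidence(question: str) -> tuple[str, float]:
    # Generate an answer and ask for per-token log-probabilities.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": question}],
        logprobs=True,
    )
    choice = resp.choices[0]
    logprobs = [t.logprob for t in choice.logprobs.content]
    # Mean token log-probability; exp() turns it into an average per-token probability.
    mean_logprob = sum(logprobs) / len(logprobs)
    return choice.message.content, math.exp(mean_logprob)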