
Video: Applied Generative AI: Haunting Hallucinations

A spooky Halloween special to share ways to lessen the scare factor of AI.

In episode 5 of Applied Generative AI with AIS, Vishwas Lele hosts a special Halloween edition on lessening the scare factor of AI. We don’t mean to spook you, but large language models can sometimes produce false or misleading information, known as hallucinations. Don’t let them haunt you…

What You’ll Learn:

  • Hallucinations Overview: What hallucinations are, their implications, and why they occur.
  • Examples: Common hallucination types, illustrated with sample prompts and responses.
  • Measuring: Methods for tracking hallucinations in LLMs using model-based and human evaluation (see the first sketch after this list).
  • Best Practices: How to structure prompts, use RAG patterns for grounding, and apply other techniques to improve accuracy (see the second sketch after this list).
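If you want to experiment before watching, here is a minimal sketch of the model-based evaluation idea: ask a "judge" model whether an answer is supported by a source passage. It assumes the OpenAI Python client (openai>=1.0) with an API key in the environment; the model name, rubric wording, and example strings are illustrative, not the episode's exact method.

```python
# Minimal LLM-as-judge sketch for flagging hallucinations.
# Assumptions: OpenAI Python client (openai>=1.0), OPENAI_API_KEY set,
# and an illustrative model name and grading rubric.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an answer for factual grounding.
Source passage:
{source}

Answer to grade:
{answer}

Reply with exactly one word: GROUNDED if every claim in the answer
is supported by the source passage, or HALLUCINATED otherwise."""

def judge_answer(source: str, answer: str, model: str = "gpt-4o-mini") -> str:
    """Ask a judge model whether `answer` is supported by `source`."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(source=source, answer=answer)}],
        temperature=0,  # deterministic grading
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    source = "AIS was an early access partner for Azure OpenAI."
    answer = "AIS helped design the GPT-4 architecture."
    print(judge_answer(source, answer))  # expected: HALLUCINATED
```

In practice, model-based scores like this are spot-checked with human evaluation, which is why the episode covers both.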
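And here is one possible shape of the RAG grounding pattern the episode mentions: retrieve relevant passages, then instruct the model to answer only from them. The retriever below is a toy keyword match for the sake of a self-contained example; a real system would use a vector index such as Azure AI Search. The document strings and model name are placeholders.

```python
# Toy RAG sketch: retrieve context, then ground the prompt in it.
# Assumptions: OpenAI Python client (openai>=1.0), OPENAI_API_KEY set,
# a keyword-overlap retriever standing in for a real vector index,
# and illustrative documents and model name.
from openai import OpenAI

client = OpenAI()

DOCUMENTS = [
    "AIS was an early access partner for Azure OpenAI.",
    "RAG grounds model answers in retrieved source passages.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

def grounded_answer(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        f"contain the answer, say you don't know.\n\nContext:\n{context}"
        f"\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grounded_answer("What was AIS's role with Azure OpenAI?"))
```

The key design point is the instruction to refuse when the context is insufficient, which is what keeps the model from filling gaps with hallucinated details.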

About the Series

How do you apply generative AI to solve real-world problems? In our new mini-series, we share insights on steps you can take to make GenAI work for you.

As an early access partner for Azure OpenAI and an experienced services firm for Azure Cognitive Services, AIS has tips and approaches to share. We’ll discuss strategies, use cases, engineering approaches, models, fine-tuning, and more.