AI Hallucinations: What they are, why they happen, and the right way to reduce the risk
About this listen
Let's talk about the AI elephant in the room: hallucinations. 🐘
Maybe hallucinations are the reason your company has been hesitant about AI.
But here's the thing, y'all: if you know what you're doing, hallucinations are largely manageable.
First, though, you gotta understand what they are, why they happen, and how to reduce the risk.
Let's get started cutting down hallucinations together.
AI Hallucinations: What they are, why they happen, and the right way to reduce the risk (Start Here Series Vol 5) -- An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
- AI Hallucinations Definition and Causes
- Large Language Models' Hallucination Mechanisms
- Hallucination Types: Fabricated Claims & Sources
- Model Improvements Reducing Hallucination Rate
- Context Window Impact on AI Accuracy
- AI Hallucinations in Legal and Enterprise Settings
- Four-Layer Method for Minimizing Hallucinations
- Custom Instructions and Retrieval-Augmented Generation (see the sketch after this list)
- Expert-Driven Verification and Agent Safety Practices
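One of the layers the episode covers is retrieval-augmented generation: instead of letting the model answer from memory, you hand it the relevant source text and tell it to stay inside it. As a rough, hypothetical illustration only (this is not code from the show, and the documents and scoring below are made-up placeholders), a minimal sketch in Python might look like this:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Idea: ground the model in retrieved source text so it has less room to hallucinate.
# The documents, query, and scoring here are illustrative placeholders.

from collections import Counter

documents = [
    "Our refund policy allows returns within 30 days of purchase with a receipt.",
    "Support hours are Monday through Friday, 9am to 5pm Central Time.",
    "Enterprise plans include single sign-on and a dedicated account manager.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count words shared between the query and the document."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def build_grounded_prompt(query: str) -> str:
    """Retrieve the best-matching document and pin the model's answer to it."""
    best_doc = max(documents, key=lambda d: score(query, d))
    return (
        "Answer using ONLY the source text below. "
        "If the answer is not in the source, say you don't know.\n\n"
        f"Source: {best_doc}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # The grounded prompt would then be sent to whatever model or API your team uses.
    print(build_grounded_prompt("How long do customers have to return an item?"))
```

In practice you'd swap the crude word-overlap scoring for real vector search and send the grounded prompt to your model of choice, but the shape stays the same: retrieve first, then constrain the answer to what was retrieved.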
Timestamps:
00:00 "Modulate's Velma: Smarter AI Insights"
03:18 "Reducing AI Hallucinations Explained"
08:36 "Minimizing AI Hallucinations with Skill"
12:30 "Model Retention and Recall Decline"
13:23 "AI Advances: Improved Accuracy and Recall"
19:24 "AI Hallucinations and Their Causes"
21:07 "Customizing AI Behavior Effectively"
24:47 "Connecting Data to Reduce Hallucinations"
28:47 "AI Oversight and Expert Input"
30:56 "Reducing AI Hallucinations Simplified"
Keywords:
AI hallucinations, large language models, next token prediction, AI error, human error, fabricated claims, reinforcement learning with human feedback, context window, context engineering
Send Everyday AI and Jordan a text message. (We can't reply unless you leave contact info.)
Human-Level Voice Intelligence, 100x Faster. Try Velma from Modulate today.