EP 32: AI Fraud Detection - Fighting Fire with Fire
About this listen
Over 50% of fraud now involves AI. FIDZY surveyed 562 fraud professionals globally and found that AI-powered fraud has become the norm, not the exception. We're talking about deepfakes, synthetic identities, and AI-powered phishing so sophisticated it's basically indistinguishable from legitimate communication. The counterpunch? 90% of banks are now using AI to fight back—fighting fire with fire.
Sam and Mac paint the threat landscape: deepfake calls that sound exactly like your bank's fraud department, spoofing your bank's actual phone number, with a perfect voice and a professional script asking for your PIN. California bank customers received dozens of these calls, and many fell for them because the technology is that convincing.
This is an arms race. Fraudsters use AI, banks use AI—there's no final victory. As bank AI gets smarter at detection, fraud AI evolves to evade those systems. It's like computer viruses and antivirus software—never-ending evolution and counter-evolution. The economic stakes are enormous: Deloitte estimates US banking losses from fraud could increase from $12.3 billion in 2023 to $40 billion by 2027, more than tripling in four years due to generative AI sophistication.
Human oversight remains essential: 88% of banking professionals say it is non-negotiable. AI identifies potential issues and surfaces them to analysts, but humans make the final call on complex cases. The benefit: 43% of institutions report increased efficiency, because AI handles high-volume, straightforward cases, freeing human experts for complex, nuanced cases that require judgment.