• The Anti-Silo: Information Technology—The Department That Can't Say Yes (Episode 4)
    Jan 22 2026
Technical debt in the United States costs organizations $2.41 trillion annually. But here's what that number obscures: IT departments have known about this debt for years. They've raised the alarm. They've documented the risks. And they've been consistently overruled by business stakeholders who don't speak their language.
The problem isn't that IT doesn't understand the business. It's that the business has never learned to understand IT—and now AI is making that translation failure catastrophic.

**The Scale of the Crisis:**
- $2.41 trillion annual cost of technical debt in the US alone (MIT Sloan)
- 75% of tech leaders will face moderate-to-high technical debt severity by 2026 (Forrester)
- 50%+ of business leaders say their infrastructure can't support the AI workloads they want to run (Microsoft)
- Only 23% of CIOs are confident they're investing in AI with built-in data governance (Salesforce)
- 282% surge in AI implementation since last year (Salesforce CIO Study)

**The Pressure IT Is Under:**
CIO.com published their analysis of IT leadership challenges just one week ago. The headline quote came from Barracuda's CIO:
[CLIP] "The biggest challenge I'm preparing for in 2026 is scaling AI enterprise-wide without losing control. AI requests flood in from every department."
That's the reality. Every department wants AI. Every department wants it now. And IT is the bottleneck everyone resents—until something breaks, at which point IT becomes the scapegoat everyone blames.

**Why AI Makes Technical Debt Exponentially Worse:**
CFO Dive reported on what they called a "tech debt tsunami" building amid the AI rush. A Forrester principal analyst explained:
[CLIP] "There's a massive amount of technical debt in IT infrastructures. It's really this perfect storm of technology growing, companies being far more distributed, and AI coming into the equation, which will make the problem exponentially worse."
AI isn't linear. Your legacy systems that "mostly work" become critical failure points when you try to layer AI on top of them.
DevPro Journal reframed the conversation: technical debt isn't actually technical debt. It's business risk.
[CLIP] "In the era of Large Language Models and machine learning, technical debt is actually data corruption. If your database schemas are inconsistent or your API endpoints are held together with tape, your expensive new AI features will yield hallucinations rather than insights."

**The Translation Gap:**
When IT says "technical debt," business hears "maintenance that costs money and delivers no visible value."
When IT says "infrastructure risk," business hears "IT trying to slow us down."
When IT says "we need to refactor before we scale AI," business hears "bureaucratic delay."
IT is trying to communicate probability and consequence—"if we don't fix this, there's a 40 percent chance of failure"—to stakeholders who think in certainty and outcome—"will this work or not?"
The result: IT's warnings get discounted as pessimism. Their risk assessments get overruled by business urgency. And when the predicted failures occur, IT gets blamed for not preventing what they warned against.

**The Governance Paradox:**
IT is asked to simultaneously:
- Accelerate AI adoption to meet business demands
- Maintain security and compliance standards
- Prevent shadow AI without blocking innovation
- Scale infrastructure while managing technical debt
- Document everything for audit and regulatory purposes
These demands conflict. Acceleration and governance exist in tension. And IT is expected to resolve that tension without adequate resources, authority, or organizational support.

**Two Metaphors for Business Communication:**

**The Poisoned Well (Data Quality):**
Your AI is only as good as the data it's trained on. If your data is contaminated—biased, incomplete, inconsistent, or outdated—then every AI system that drinks from that well produces poisoned outputs.
The Harvard Kennedy School's Misinformation Review found: "Training data often contain biases, omissions, or inconsistencies, which may embed systemic flaws into outputs."
But IT didn't create the data. Business units created the data through years of operational decisions—what to capture, what to ignore, how to categorize. Those decisions embedded biases that AI now amplifies.
IT can identify data quality issues. IT can flag bias patterns. But IT can't fix data quality alone—it requires collaboration with the business units that created and own that data. (A minimal audit sketch follows below.)

**The Eager Intern (Model Hallucination):**
AI hallucinations are a governance crisis that business stakeholders fundamentally misunderstand. They assume AI either works or doesn't work. They don't understand that AI can confidently produce completely fabricated outputs.
Imagine an intern who's desperate to please, never admits uncertainty, and will confidently make things up rather than say "I don't know." That's your AI model.
Recent incidents documented by Wikipedia (updated three days ago):
- October 2025...
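A minimal sketch of the kind of data-quality audit the "Poisoned Well" section describes, assuming tabular training data in a pandas DataFrame. The thresholds, column names, and checks below are illustrative assumptions, not recommendations from the episode.

```python
# Hypothetical data-quality audit sketch: flags the contamination patterns
# discussed above (missing values, inconsistent categories, stale records)
# before the data feeds an AI system. All thresholds are assumed.
from datetime import datetime, timezone

import pandas as pd

def audit_training_data(df: pd.DataFrame,
                        category_cols: list[str],
                        timestamp_col: str,
                        max_null_rate: float = 0.05,
                        max_staleness_days: int = 365) -> list[str]:
    """Return a list of human-readable data-quality findings."""
    findings = []

    # Incomplete: columns whose null rate exceeds the threshold.
    for col, rate in df.isna().mean().items():
        if rate > max_null_rate:
            findings.append(f"{col}: {rate:.1%} missing (threshold {max_null_rate:.0%})")

    # Inconsistent: categorical columns with case/whitespace variants
    # of the same label (e.g. 'Active' vs 'active ').
    for col in category_cols:
        raw = df[col].dropna().astype(str)
        if raw.nunique() != raw.str.strip().str.lower().nunique():
            findings.append(f"{col}: inconsistent category labels detected")

    # Outdated: share of records older than the staleness window.
    age_days = (datetime.now(timezone.utc)
                - pd.to_datetime(df[timestamp_col], utc=True)).dt.days
    stale = (age_days > max_staleness_days).mean()
    if stale > 0:
        findings.append(f"{timestamp_col}: {stale:.1%} of records older "
                        f"than {max_staleness_days} days")

    return findings
```

The sketch mirrors the episode's division of labor: IT can surface these findings, but only the business units that own the data can say which categories and retention windows are actually correct.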
    27 mins
  • The Anti-Silo: Middle Management—Where AI Strategy Goes to Die (Episode 3)
    Jan 21 2026
Gartner predicts that by 2026, 20 percent of organizations will use AI to eliminate more than half of their middle management positions. But here's what that headline misses: the organizations flattening their structures are also losing the only people who can translate C-suite AI mandates into operational reality.
Your middle managers aren't the problem. They're the last line of defense between your AI strategy and your shadow AI crisis—and you're about to fire them.

**The Scale of the Elimination:**
- 20% of organizations will use AI to eliminate 50%+ of middle management positions by 2026 (Gartner)
- IMD expects a 10-20% reduction in traditional middle-management positions by the end of 2026
- Largest reductions: reporting-heavy roles in finance, compliance, supply chain planning, and procurement

**But Here's What the Headlines Miss:**
A Prosci study surveying over 1,100 professionals found that 63 percent of organizations cite human factors as the primary challenge in AI implementation.
Not technology. Not budget. Human factors.
And guess who's supposed to manage those human factors? Middle management.
The same research found that mid-level managers are the most resistant group to AI adoption—followed by frontline employees. That finding has been weaponized to justify eliminating the management layer.
But resistance isn't random defiance. It's a signal. When middle managers resist AI initiatives, they're often responding to real problems:
- Unclear mandates from above
- Inadequate training
- Tools that don't integrate with existing workflows
- Accountability structures that hold them responsible for outcomes they can't control

**The Knowledge Inversion:**
There's a phenomenon happening that nobody's talking about directly: middle managers often know more about AI than their senior executives.
A Mindflow analysis found:
- 71% of middle managers actively use AI in their daily work
- Only 52% of senior leaders use AI regularly
- Nearly half of senior executives have never used an AI tool at all
This creates what researchers call a "knowledge inversion"—the people making strategic AI decisions have less hands-on experience than the people implementing them.
C-suite executives issue mandates based on vendor presentations and board pressure. Middle managers receive those mandates knowing—from direct experience—that the implementation will be more complex than leadership understands.
When middle managers raise concerns, they're perceived as resistant. When they propose alternatives, they're overruled by executives who lack the operational knowledge to evaluate their suggestions.

**The Accountability Trap:**
Middle managers are expected to:
- Drive AI adoption within their teams
- Manage shadow AI risks they can't see
- Implement governance protocols they didn't design
- Hit productivity targets that assume AI integration
- Maintain team morale through technological disruption
And they're expected to do all of this without clear authority over tool selection, budget allocation, or policy creation.
[CLIP] "This is the accountability trap: responsibility without authority, expectations without resources."
The Allianz Risk Barometer 2026—released this month—found that AI has surged to the number two global business risk, up from number ten in 2025. That's the biggest jump in their entire ranking.
Their analysis: "In many cases, adoption is moving faster than governance, regulation, and workforce readiness can keep up."
Who's responsible for workforce readiness? Middle management.
Who's blamed when adoption outpaces governance? Middle management.
Who has the authority to slow adoption until governance catches up? Not middle management.

**The Translation Failure:**
C-suite executives speak in strategy—competitive advantage, market position, ROI potential. Frontline employees speak in tasks—"how does this help me do my job?" Middle managers are supposed to translate between these languages.
But AI introduces a third language—technical complexity that neither strategic executives nor task-focused employees fully understand. Inference costs. Model drift. Hallucination rates. Prompt engineering. Fine-tuning requirements.
Most middle managers weren't trained in this language. They're expected to translate strategies they don't fully understand into implementations they can't technically evaluate.
Fast Company identified three functions that will define the future of middle management:
1. Orchestrating AI-human collaboration
2. Serving as agents of change through continuous AI-driven disruption
3. Coaching employees through constant reskilling and role evolution
These are sophisticated capabilities. But how many organizations are actually developing these capabilities in their management layer—versus simply expecting them to emerge?

**The Human-in-the-Loop Reality:**
"Human-in-the-Loop" has become the default reassurance in AI governance. It appears in policies, governance frameworks, and implementation plans. But its practical meaning is still ...
    26 mins
  • The Anti-Silo: The C-Suite Accountability Crisis (Episode 2)
    Jan 20 2026
Half of CEOs believe their jobs are on the line if AI doesn't pay off. Seventy-two percent now say they're the main decision maker on AI—double the number from last year. And yet Gartner predicts over 40 percent of agentic AI projects will be cancelled by 2027.
Not because the technology failed. Because accountability outpaced authority.
Your C-suite is spending billions on AI while fighting over who owns the outcome—and while they fight, the clock is ticking.

**The Scale of the Investment:**
Boston Consulting Group released their annual AI survey on January 15th. The findings are staggering:
- Companies plan to double their AI spending in 2026, accounting for 1.7 percent of revenues—more than twice the increase from 2025
- Ninety-four percent of chief executives say they'll continue investing in AI at current or higher levels even if the investments don't pay off in the next year
- Ninety percent of CEOs believe AI agents will produce measurable returns this year
- CEOs are committing 30 percent of their organization's AI investment to agentic AI alone

**The Confidence Gap:**
CEO confidence in AI is significantly higher in the East than in the West:
- India and Greater China: 75% of CEOs confident AI will deliver ROI
- Europe: 61%
- United States: 52%
- United Kingdom: 44%
Why the gap? BCG's analysis is revealing: "A larger share of Western CEOs say their organizations are investing in AI to avoid falling behind or because they feel pressure."
Western executives are investing out of fear—fear of competitive irrelevance—not conviction. They're spending billions because they're terrified of being left behind, not because they have a clear strategy for value creation.
IBM's 2025 CEO Study confirmed this pattern: 64 percent of CEOs acknowledge that the risk of falling behind drives them to invest in technologies before they have a clear understanding of the value those technologies bring.
[CLIP] "That's not strategy. That's panic buying at enterprise scale."

**The C-Suite Accountability Gap:**
The farther you get from the corner office, the less confident executives become. BCG found that confidence in AI's eventual payoff drops from 62 percent among CEOs to just 48 percent among non-tech executives outside the C-suite.
The CEO sees transformation. Everyone else sees uncertainty. This creates a dangerous dynamic:
- The CEO is championing AI initiatives that the rest of the leadership team doesn't believe in
- The CFO is skeptical of the ROI
- The CIO is worried about technical debt
- The CISO is concerned about attack surfaces
- The General Counsel is terrified of liability
The CEO interprets this skepticism as resistance to change. The other executives interpret the CEO's enthusiasm as reckless optimism. Nobody's wrong—but nobody's aligned.

**Role-Specific Strategic Fears:**

**The CEO's Fear: Competitive Irrelevance**
IMD's 2026 AI trends analysis warned: "Organizations that fail to reach AI-native operations by 2027 risk being structurally uncompetitive."
That's the CEO's nightmare: not that AI fails, but that competitors succeed while you hesitate.

**The CFO's Fear: Unquantifiable Risk**
CFOs are trained to evaluate investments through traditional ROI models—payback periods, margin impact, net present value. But AI doesn't fit those models.
BCG found that most AI projects need two to four years to demonstrate value. CFOs expect returns in under a year. That mismatch creates inevitable conflict. (A worked example of the mismatch follows below.)
CFO Brew quoted a finance leader: "CFOs must take an active role in AI governance. Although most view it as a technology 'system,' the necessary controls extend far beyond IT and cannot be managed by the CIO alone."

**The CIO's Fear: Accountability Without Authority**
Information Week's 2026 CIO trends analysis: "Enterprises rushed AI adoption without establishing who owns what. The technology moved faster than governance frameworks, leaving CIOs responsible for outcomes they can't fully control."
One CIO was blunt: "The CIO's job is to establish guardrails, to provide a framework—not to absorb the consequences of ungoverned decisions."
If marketing deploys a rogue AI tool, that's not an IT failure. If the CEO mandates a use case that bypasses governance, that's not an IT failure. But when something goes wrong, the board looks at IT first.

**The CISO's Fear: Invisible Attack Surface**
Digital Trends published analysis on "AI agent sprawl"—the uncontrolled expansion of AI agents across an organization.
Their comparison: this is the shadow IT problem of the 2010s, but with exponentially more risk. Marketing deploys customer service agents. Finance deploys automated reporting bots. HR tests recruiting assistants. Each deployment expands the attack surface without centralized visibility.

**The General Counsel's Fear: Undefined Liability**
Forrester predicts 60 percent of Fortune 100 companies will appoint a head of AI governance in 2026. That tells you how urgent the problem has become—and how absent the accountability structure has ...
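A worked illustration of the ROI mismatch described under "The CFO's Fear". Only the two-to-four-year payoff horizon comes from the BCG finding above; the investment figure, cash flows, and discount rate below are invented for illustration.

```python
# Hypothetical illustration: an AI project that looks like a loss on a
# one-year horizon but clears a standard NPV test over four years.
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the upfront (year-0) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -2_000_000                              # year-0 spend (assumed)
returns = [200_000, 900_000, 1_400_000, 1_600_000]   # years 1-4 (assumed)
rate = 0.10                                          # discount rate (assumed)

one_year = npv(rate, [investment, returns[0]])
four_year = npv(rate, [investment] + returns)

print(f"1-year NPV: {one_year:,.0f}")   # about -1,818,182: "failure" on a CFO's clock
print(f"4-year NPV: {four_year:,.0f}")  # about +1,070,282: value on the BCG timeline
```

The same project fails or succeeds depending on the evaluation horizon, which is exactly the conflict the episode describes.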
    26 mins
  • The Anti-Silo: Why Your AI Governance Is Failing Before It Starts (Episode 1)
    Jan 19 2026
Why 80% of Your Employees Are Building an AI Ecosystem You Can't See—And Why Your Org Chart Made It Inevitable

This episode launches The Anti-Silo—a seven-part series examining how organizational silos sabotage AI governance at every level, from the C-suite to frontline employees.
Here's the uncomfortable truth: your shadow AI problem isn't a technology failure. It's the predictable result of organizational structures that were never designed for the speed of intelligence.

The Shadow AI Crisis Is a Symptom, Not the Disease
The statistics are stark: 80% of employees are using unapproved AI tools daily. They're building workflows, automating decisions, and feeding proprietary data into systems your IT department has never reviewed.
But before you blame employees, ask yourself: how long does it take to get an AI tool approved through your official channels?
If the answer is "six months" while business needs can't wait six days, you've created the conditions for shadow AI. Employees aren't being reckless—they're being rational. When official pathways are too slow, people find unofficial ones.
The disease isn't employee behavior. The disease is siloed governance that moves at organizational speed while AI moves at AI speed.

The Three-Speed Problem
Every organization now operates across three incompatible timeframes:
AI Speed: New foundation models release weekly. Capabilities that didn't exist last month are commoditized this month. The technology itself assumes continuous adaptation.
Adaptation Speed: Teams modify workflows in agile sprints. Business units experiment with automation. Innovation happens at the edge, not the center.
Organizational Speed: Culture changes slowly. Regulations move through formal processes. Governance structures were designed for stability, not velocity.
In siloed organizations, these gears grind against each other. Prototypes sit in legal review until the technology becomes obsolete. By the time governance catches up, the business has moved on—often to shadow alternatives.

Why Digital Transformation Made It Worse
The "digital transformation" era optimized individual departments. Finance got better financial systems. HR got better HR systems. Marketing got better marketing systems.
But each transformation calcified the walls between departments. Every silo now has its own "system of record," its own data ontology, its own workflows optimized for departmental success.
AI governance requires exactly what this structure prevents: cross-functional data flows, integrated risk assessment, and coordinated decision-making.
When your AI system needs training data from marketing, validation criteria from legal, fairness metrics from HR, security review from IT, and accountability structures from compliance—who owns that workflow? In most organizations, the answer is "no one." Or worse: "everyone," which means the same thing.

The Linguistic Silo Problem
Even when departments want to collaborate, they often can't. Not because of politics—because of language.
Technical teams speak in model architectures and confidence intervals. Legal teams speak in liability and regulatory exposure. Business teams speak in revenue and market share. HR speaks in culture and talent management.
These aren't just different vocabularies. They're different ontologies—different ways of categorizing reality. When the data science team says "bias," they mean statistical deviation. When HR says "bias," they mean discriminatory impact. Same word, fundamentally different concepts.
Without translation layers between these linguistic silos, governance meetings become exercises in mutual incomprehension. Everyone leaves thinking they agreed—until implementation reveals they were having different conversations entirely.

Governance as Gate vs. Governance as Partner
Most organizations position governance as a gate at the end of the development lifecycle. Build first, get approval second.
This guarantees bottlenecks. It guarantees shadow AI. It guarantees that by the time governance reviews a system, so much has been invested that saying "no" becomes nearly impossible.
The Anti-Silo framework repositions governance as an integrated partner throughout the lifecycle. Not approval at the end—guidance from the beginning. Not gates that slow progress—guardrails that enable confident speed.

The Anti-Silo Framework: Six Structural Elements
1. Cross-Functional Governance Committee with Decision Authority
Not advisory. Not consultative. Actual authority to approve, reject, and set conditions. Membership must include Legal, IT, HR, business unit leaders, and executive sponsorship. Meeting cadence must match AI speed, not organizational speed—weekly or bi-weekly, not quarterly.
2. Governance Velocity Metrics
You measure time-to-market. You measure development velocity. Do you measure governance velocity? Time from concept to approved deployment. Time from risk identification to mitigation implementation (a measurement sketch follows below). If you don't measure governance speed, you ...
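A minimal sketch of the governance velocity metrics named in element 2, assuming each AI initiative logs a timestamp per review stage. The record fields and sample data are hypothetical.

```python
# Hypothetical governance-velocity metrics: if you can compute cycle time
# for development, you can compute it for governance.
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class GovernanceRecord:
    system: str
    concept: date      # idea first logged
    submitted: date    # entered governance review
    approved: date     # cleared for deployment

def governance_velocity(records: list[GovernanceRecord]) -> dict[str, float]:
    """Median days per stage: the numbers a committee should track weekly."""
    return {
        "concept_to_submission": median((r.submitted - r.concept).days for r in records),
        "submission_to_approval": median((r.approved - r.submitted).days for r in records),
        "concept_to_deployment": median((r.approved - r.concept).days for r in records),
    }

# Illustrative sample data (invented).
records = [
    GovernanceRecord("support-chatbot", date(2026, 1, 5), date(2026, 1, 12), date(2026, 2, 2)),
    GovernanceRecord("churn-model", date(2025, 11, 3), date(2025, 12, 1), date(2026, 1, 20)),
]
print(governance_velocity(records))
```

The design choice is the framework's own point: once governance speed is a number on a dashboard, it can be managed like any other velocity metric instead of being felt as an invisible delay.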
    23 mins
  • AI Governance Weekly Roundup: The Global South Pivot—Who Will Build the AI Future?
    Jan 18 2026
While the United States shut down USAID and debates whether to engage internationally at all, China secured the co-sponsorship of more than 140 countries for its AI capacity-building resolution at the United Nations.
One hundred forty countries. That's not a negotiation. That's a mandate.
And if you're an executive whose supply chains, markets, or regulatory exposure spans the Global South, you're about to discover whose rules govern AI in most of the world—and they won't be American rules.

**This week's roundup: The Global South pivot in AI governance—based on Lawfare analysis by Chinasa Okolo**

**The Numbers That Matter:**
**July 2024:** UN General Assembly unanimously adopted China's resolution on AI capacity-building
- 140+ countries co-sponsored it (including the United States)
- Passed by consensus—not controversial
**July 2025:** China unveiled its Global AI Governance Action Plan at the World AI Conference in Shanghai
- Premier Li Qiang announced the creation of a global AI cooperation organization (potentially headquartered in Shanghai)
- China is quietly building the infrastructure of global artificial intelligence influence

**What China Is Actually Doing:**
- Workshops in Shanghai and Beijing drawing participants from 40+ countries
- AI Capacity-Building Action Plan targeting developing nations
- Group of Friends for International Cooperation in AI Capacity-Building (regular meetings)
- AI-powered agriculture projects announced in Kenya and Nigeria
- Joint AI research facility with Brazil focused on agricultural development
- Nigerian government expressing support for Chinese AI governance initiatives
- Indonesia seeking Chinese assistance for AI in aquaculture, agriculture, and smart cities

**What the U.S. Is Actually Doing:**
**July 1, 2025:** Secretary of State Marco Rubio announced the official closure of USAID—the agency that historically served as the primary vehicle for U.S. digital development initiatives
**Current Status:**
- State Department's Global AI Research Agenda: non-operational
- Partnership for Global Inclusivity on AI (launched with major tech companies in 2024): unclear status
- Digital Connectivity and Cybersecurity Partnership: unclear status after the State Department dismissed diplomats from the Bureau of Cyberspace and Digital Policy (July 2025)
"The U.S. has systematically deconstructed the institutional capacity necessary for sustained international engagement."

**The Funding Gap:**
**EU Horizon Europe Africa Initiative III:** €500.5 million across 24 calls for proposals to strengthen African-European research partnerships
- €186.5M specifically for innovation and technology (including AI applications, fintech, data governance)
**U.S. Announced:** $15 million for AI capacity-building, plus $33M from a program that's now non-operational
Lawfare analysis: "American governmental engagement remains fragmented and inadequately funded."

**The Structural Problem:**
**November 2025:** State Department announced a partnership with Zipline (a drone delivery company)—up to $150M to expand AI-enabled medical supply deliveries across Africa
**The Catch:** A pay-for-performance model contingent on African governments signing $400 million in contracts
Okolo: "Garnering nearly half a billion in contracts may be unfeasible given the high debt burden across the continent that limits national spending on essential social services like health care."

**Compare China's Approach:**
- RAND Corporation analysis: China emphasizes respect for sovereignty and non-interference in domestic governance
- Partnerships don't attach the political or economic conditions Western partnerships require
- Predictable, long-term funding commitments
- Consistent capacity for rapid delivery on large-scale infrastructure (hydroelectric plants, shipping ports, railroads, airports)
- "China's long-term geoeconomic interests have trumped concerns around immediate financial returns"

**The Governance Gap for Executives:**
The Global South represents the majority of the world's population. These nations will comprise the majority of future AI users and developers.
The governance frameworks, technical standards, and ethical norms established through capacity-building partnerships will shape global AI development for decades.
If you're not tracking which governance model—Chinese or Western—is gaining traction in your key markets, you're flying blind into regulatory fragmentation that will affect:
- Data flows
- Algorithmic accountability
- Compliance requirements
- Market access

**The Accountability Structure:**
The Trump administration's AI Action Plan (January 2025) mandates that "American AI technologies, standards, and governance models are adopted worldwide."
**The Problem:** The U.S. lacks comprehensive federal AI legislation.
"The government demands global adoption of American standards while simultaneously withdrawing from multilateral mechanisms necessary for collaborative development."
Okolo calls this an "untenable proposition." Countries expected to embrace American governance models that don't ...
    20 mins
  • Harmonizing Velocity and Vigilance: Why Your AI Innovation Speed Is Creating Liability
    Jan 15 2026
Air Canada deployed a chatbot. The chatbot hallucinated a bereavement policy that didn't exist. A customer relied on it. When Air Canada refused to honor the fake policy, the customer sued.
Air Canada's defense? "The chatbot was a separate legal entity—the company wasn't responsible for what it said."
The tribunal's response was immediate and brutal: REJECTED. The airline is liable for all information on its website, regardless of whether a human or an AI generated it.
You cannot outsource liability to your software.

**The Governance Gap:** Why organizations are moving so fast on AI that they're creating the exact exposure that destroyed Air Canada's defense before it started.

**The Scale of the Problem:**
- By 2024, enterprise AI usage had increased 600%
- 77% of organizations admit they are unprepared to defend against the risks AI introduces
- 93% believe AI is essential—but 77% can't govern it
- That gap is where careers end and lawsuits are born

**What the Governance Gap Actually Is:**
The operational void that emerges when the speed of AI deployment exceeds your organization's ability to monitor and control it.
Your Agile development teams are running two-week sprints. Your compliance process was designed for quarterly reviews. The math doesn't work.
The result: **Shadow AI**—unsanctioned use of AI tools by employees seeking efficiency gains outside formal IT channels.

**What Shadow AI Introduces:**
- Leakage of proprietary data into public models that train on your inputs
- Embedding of unmonitored bias into decision-making workflows
- Violation of data sovereignty laws you didn't know applied
Your people aren't being malicious. They're being productive. They found tools that help them work faster. And your governance process feels like a bureaucratic roadblock adding weeks to everything.
So they bypass it. They use ChatGPT with customer data. They upload proprietary documents to AI services. They build automations with tools IT never approved.
Every single action creates liability you don't know about, can't monitor, and can't defend.

**The Regulatory Reality:**
The **EU AI Act** imposes fines of up to **€35 million or 7% of global annual turnover** for prohibited AI practices. Not profit. Turnover.
If you're a US company doing any business in Europe—selling to European customers, processing European data, even marketing to European audiences—you're covered. The EU AI Act is now the global baseline for multinational corporate governance whether you like it or not.

**Risk-Tiering System:**
**Unacceptable risk (prohibited entirely):**
- Social scoring systems
- Real-time biometric identification in public spaces
- Subliminal manipulation
- If your team is building anything in this category, the user story gets deleted. Period.
**High risk (requires conformity assessments):**
- AI in critical infrastructure, education, employment, credit scoring, law enforcement
- Requires high-quality data governance, documentation, and human oversight before release
- Your definition of done must include regulatory approval
- A feature isn't shippable until compliance signs off
**Limited risk (requires transparency):**
- Chatbots, emotion recognition, deepfakes
- Users must be informed they're interacting with AI
- This is exactly where Air Canada failed
**Minimal risk:**
- Spam filters, video games, most internal tools
- Standard development can proceed

**The Operational Problem:**
Your Agile teams are not trained to make these classifications. Your product owners don't know which tier applies to the feature they're building. Your sprint planning doesn't include regulatory risk assessment. (A first-pass triage sketch follows below.)
So features get built, shipped, deployed—and nobody knows whether they just created €35 million of exposure until the enforcement action arrives.

**The Pacing Problem:**
Agile methodologies prioritize working software over comprehensive documentation, responding to change over following a plan.
Traditional compliance is rooted in waterfall thinking: point-in-time audits, comprehensive reviews at fixed gates, usually just before a major release.
In AI, a pre-deployment audit is insufficient. An AI model can drift after deployment. It can develop biases as it encounters new real-world data. The thing you certified last month is not the thing running in production today.

**The Result:**
Either Shadow AI (teams adopt tools without approval to maintain speed) or Compliance Paralysis (innovation stalls indefinitely in review boards while competitors ship).
Neither outcome is acceptable. Both outcomes are common.

**The Three-Framework Solution:**
**1. NIST AI Risk Management Framework** (your vocabulary)
- Four core functions: Govern, Map, Measure, Manage
- A common language so technical and non-technical stakeholders can communicate
**2. ISO 42001** (your certifiable structure)
- The first international standard specifically designed for AI management systems
- Mandates AI impact assessments before deployment
- Data governance protocols ensuring quality and provenance
- Continuous monitoring to detect ...
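A sketch of how a delivery team might encode the Act's four tiers as a first-pass triage step in sprint planning, following the episode's summary of the tiers. The keyword matching is a deliberate simplification and no substitute for an actual conformity assessment or legal review.

```python
# Hypothetical first-pass EU AI Act triage for backlog items.
# Tier descriptions follow the episode's summary; keyword matching is an
# assumed simplification -- real classification needs counsel.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited entirely"
    HIGH = "conformity assessment required before release"
    LIMITED = "transparency obligations (disclose AI to users)"
    MINIMAL = "standard development can proceed"

PROHIBITED = {"social scoring", "real-time biometric identification",
              "subliminal manipulation"}
HIGH_RISK = {"critical infrastructure", "education", "employment",
             "credit scoring", "law enforcement"}
TRANSPARENCY = {"chatbot", "emotion recognition", "deepfake"}

def triage(use_case: str) -> RiskTier:
    text = use_case.lower()
    if any(k in text for k in PROHIBITED):
        return RiskTier.UNACCEPTABLE   # the user story gets deleted
    if any(k in text for k in HIGH_RISK):
        return RiskTier.HIGH           # "done" now includes compliance sign-off
    if any(k in text for k in TRANSPARENCY):
        return RiskTier.LIMITED        # the tier where Air Canada failed
    return RiskTier.MINIMAL

print(triage("customer service chatbot for bereavement fares"))  # RiskTier.LIMITED
```

Even a crude gate like this changes the operational problem the episode describes: the tier question gets asked at sprint planning instead of after the enforcement action arrives.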
    23 mins
  • The Year AI Hype Becomes AI Liability
    Jan 14 2026
Only 5% of law firm leaders trust their current AI quality controls. 95% are concerned about AI governance.
And here's the number that should terrify every executive: more than 80% of your employees are using unapproved AI systems right now—and 40% are doing it daily.
You have no visibility. No governance. No defense when it goes wrong.

**This week's roundup: Two major reports that change everything**

**MD Communications "What Lies Ahead 2026" Report**
- Surveyed 1,400 legal leaders globally (law firm leaders, partners, legal technologists, bar associations)
- 95% concerned about AI governance
- Only 5% trust current AI quality controls
- William Peake (Harneys): "Law firms' sluggish adoption rates suggest they're not quite getting it"
- International Bar Association: opaque AI systems risk undermining fundamental rights and the rule of law

**Council on Foreign Relations: Six Expert Perspectives on AI in 2026**

**AI Takeoff (Chris McGuire):**
- Claude Opus 4.5 (November 2025) can solve 5-hour software engineering problems (two years ago: only 2-minute tasks)
- Anthropic CEO: the "vast majority" of new Claude code is now written by Claude itself
- U.S. cloud providers projected to spend $600 billion on AI infrastructure in 2026 (double 2024 spending)

**The Shadow AI Crisis (Vinh Nguyen):**
- 80% of U.S. workers use unapproved AI systems
- 40% use them daily—bypassing security oversight entirely
- You can't govern what you can't see

**Three Shadow Dimensions:**
1. **Shadow Autonomy** - No visibility into what decisions AI is making inside your workflows
2. **Shadow Identity** - Can't validate whether the digital identities operating AI systems are legitimate or adversary-controlled (GenAI clones voices from 20 seconds of audio and defeats biometric checks)
3. **Shadow Code** - 80% of critical infrastructure enterprises (US, UK, Germany) have deployed AI-generated code into production—including medical devices and energy networks—despite 70% rating the security risk as moderate or high

**The Regulatory Timeline (Kat Duffy):**
**January 2026 (NOW):**
- Illinois requires employers to disclose AI-driven decisions
- China's amended Cybersecurity Law (the first to explicitly reference AI) becomes enforceable
**June 2026:**
- Colorado's comprehensive AI Act comes online
**August 2026:**
- EU AI Act high-risk requirements take full effect (penalties: up to €35M or 7% of global turnover)
- California AI Transparency Act mandates content labeling

**The Accountability Question:**
"Should AI agents be seen as 'legal actors' bearing duties—or 'legal persons' holding rights?"
In the U.S., where corporations already enjoy legal personhood, 2026 may be a banner year for lawsuits on exactly this point.
But the governance reality: AI cannot be held accountable, sign agreements, be sued, or prosecuted. Liability stays with humans. It stays with you.

**The Adoption Frame (Michael Horowitz):**
AI is a general-purpose technology—like electricity and the combustion engine. Debates about superintelligence and bubbles miss the point. "The AI word of 2026 should be 'adoption.'"
The question isn't whether AI is real or hype. The question is whether your governance structures can keep pace with adoption that's already happening—with or without your oversight.

**The Adversarial Reality:**
November 2025: Anthropic disclosed that a Chinese state-sponsored cyberattack leveraged AI agents to execute 80-90% of the operation independently—at speeds no human hackers could match.
Organizations without AI visibility are already being exploited by adversaries who have AI visibility into them.

**Six-Element Governance Framework:**
1. **AI System Inventory** - What AI is actually operating? Not what you approved—what's actually running (80% of the workforce is using unapproved systems). A minimal inventory sketch follows below.
2. **Shadow AI Detection** - Implement threat intelligence platforms monitoring AI use across the organization (you can't govern invisible systems)
3. **Identity Validation** - Continuous validation of machine identities (when AI clones voices from 20 seconds of audio, your identity protocols are obsolete)
4. **Governed Channels** - Create approved pathways for AI tool usage (if you don't give employees governed options, they'll use ungoverned ones)
5. **Code Review Protocols** - Mandatory production code reviews regardless of whether humans or AI wrote the code (80% of critical infrastructure is deploying AI-generated code without adequate security review)
6. **Board Reporting Structure** - How does AI governance status reach decision-makers? Track AI across four dimensions: innovation, adoption for competitive advantage, vendor relationships, global regulatory frameworks

**Seven-Day Action Framework:**
**Days 1-2:** Audit shadow AI—survey the workforce (anonymously if needed) to discover what AI tools are actually in use
**Days 3-4:** Map regulatory exposure—which 2026 deadlines hit you? Build a compliance calendar
**Days 5-6:** Establish governed AI channels—make sanctioned adoption easier than shadow adoption
**Day 7:** Brief the board on the visibility gap using the CFR framework (What AI innovating? What ...
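A minimal sketch of element 1, the AI system inventory: one record per system, approved or not, so shadow usage becomes countable. The field names and sample entries are hypothetical.

```python
# Hypothetical AI system inventory: track what is actually running,
# not just what was approved, so shadow systems become visible.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    owner: str                      # an accountable human, not a department
    vendor: str
    approved: bool                  # went through governed channels?
    data_classes: list[str] = field(default_factory=list)

# Illustrative entries (invented), e.g. gathered via the Days 1-2 survey.
inventory = [
    AISystem("ChatGPT (personal accounts)", owner="unknown", vendor="OpenAI",
             approved=False, data_classes=["customer PII?"]),
    AISystem("Contract summarizer", owner="j.doe@example.com", vendor="internal",
             approved=True, data_classes=["contracts"]),
]

shadow = [s for s in inventory if not s.approved or s.owner == "unknown"]
print(f"{len(shadow)}/{len(inventory)} systems outside governed channels")
```

The single number this produces is the visibility gap the episode says boards never see; element 6 is essentially a reporting structure for it.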
    15 mins
  • Change Management in the Age of AI
    Jan 13 2026
52% of companies accelerated AI adoption after COVID. But almost none accelerated their change management at the same rate.
The result? Organizations racing to deploy AI are ignoring the human side of change—creating unprecedented governance failures that expose executives to personal liability.
Consider Healthline Media: California fined them $1.55 million for improperly sharing sensitive health-related browsing data with advertisers and AI-driven personalization systems without valid consent. Someone had to know what was going on. But no one did anything.
It's a governance failure rooted in the change management they never did.

**This episode exposes why AI transformation fails when you ignore the human perspective:**

**The Technology-First Trap**
- 52% of companies accelerated AI adoption after COVID (PwC study)—but failed to accelerate change management
- Only 20% of public-sector transformations meet their objectives—primarily due to change management failures
- Organizations deploy AI as a purely technological problem—ignoring the sociological, organizational, and human dimensions
- Result: the technology works, the implementation succeeds technically—but 18 months later, regulatory action

**The Real Incident:**
A mid-size financial services firm deployed AI across operations—trading algorithms, customer service chatbots, risk assessment models. The technology worked. But when regulators asked "How do you govern AI decision-making?"—no answer. Not because the technology failed, but because organizational change never happened.
- IT deployed the systems
- Business units used them
- Legal never reviewed the governance implications
- HR never addressed workforce transition
- Nobody owned the change

**The TOP Framework Gap**
Research identifies Technology, Organization, and People—all three dimensions must be addressed. Most organizations focus exclusively on technology:
- Ignore organizational culture
- Dismiss individual skills, training, motivation
- 80% of CISOs report insufficient funding for robust cybersecurity—but funding isn't the real problem
- The real problem: investing in technology without investing in the organizational change to govern it

**Five Psychological Resistance Factors:**
When employees don't trust AI, they work around it—creating shadow AI nobody governs:
1. **Opacity** - AI as "black box"
2. **Emotionlessness** - AI as "unfeeling"
3. **Rigidity** - AI as "inflexible"
4. **Autonomy** - AI as "in control"
5. **Group membership** - AI as "non-human"
Resistance that isn't managed becomes governance gaps.

**The Organizational Structure Problem:**
- Need to break the silo mentality and move toward matrix-based structures
- Most organizations deploy AI within existing departmental boundaries—creating fragmented governance
- Average CISO tenure: 18-24 months—not enough time to implement real organizational change
- When CISOs turn over before change management is complete, transformation stalls—and governance gaps become permanent

**Your Personal Liability:**
Under current regulatory frameworks, "We deployed the technology" is not a defense. Regulators ask:
- How do you govern it?
- Who is accountable?
- Where is the human oversight?
The **EU AI Act** requires human oversight of high-risk AI systems—that's an organizational requirement, not a technological one:
- You need people trained to provide oversight
- You need processes that enable oversight
- You need a culture that values oversight over speed
**NIS 2** allows personal penalties for executives who fail to ensure adequate risk management.
**DORA** holds management bodies personally accountable.
The **SEC** examines board involvement in cybersecurity governance.
None of them ask: "Did you buy good technology?" They ask: "Did you build the organizational capability to govern it?"

**GDPR Violation:**
Fully automated decision-making with significant impact on individuals and without human input is illegal in the EU. If your AI is making decisions affecting people and you can't demonstrate human oversight, you have a legal problem, not just a governance problem. (A minimal oversight-gate sketch follows below.)

**The Three-Component Change Management Framework:**
1. **Creation** - Give employees the tools and motivation to engage with AI
   - Not training on how to use the technology—building understanding of why it matters
   - Research shows trust increases when users see AI as comprehensible and aligned with human values
   - Without this: resistance
2. **Reframing** - Challenge assumptions that obstruct AI adoption
   - Address legitimate concerns about AI alienating workers
   - Organizations that dismiss employee concerns don't overcome resistance—they drive it underground
   - Result: shadow AI and governance gaps
3. **Integration** - Channel AI initiatives through proper governance structures
   - Every AI deployment needs: accountability assignment, policy documentation, monitoring, board reporting
   - But governance structures only work if the organization has been transformed to support them
   - Change management isn't something you do before AI deployment—it's the foundation that makes governance possible

**The ...
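A minimal sketch of the oversight gate the GDPR passage implies: automated decisions with significant individual impact are held for a named human reviewer instead of taking effect directly. The workflow, field names, and examples are illustrative assumptions, not a compliance recipe.

```python
# Hypothetical human-oversight gate: significant automated decisions are
# held for human review, making the AI's output a recommendation rather
# than a decision (the organizational requirement the episode describes).
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    outcome: str
    significant_impact: bool        # e.g. credit denial, termination, claim denial
    reviewed_by: str | None = None  # named accountable human, if any

def apply_decision(decision: Decision) -> str:
    if decision.significant_impact and decision.reviewed_by is None:
        # Route to a human queue; nothing takes effect without sign-off.
        return f"HELD for human review: {decision.outcome} ({decision.subject})"
    return f"APPLIED: {decision.outcome} ({decision.subject})"

print(apply_decision(Decision("applicant-118", "deny credit", significant_impact=True)))
print(apply_decision(Decision("applicant-118", "deny credit", True, reviewed_by="m.chen")))
```

The gate is only the technology slice; the episode's point is that the trained reviewers, the process, and the culture behind `reviewed_by` are what change management has to build.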
    21 mins