
 

 

# Part 1: Why Your Board Needs to Rethink AI Governance Now

 

*Most directors are asking the wrong questions about artificial intelligence. Here’s how to get it right.*

 

The CEO of a major Thai conglomerate recently told me his board spent three hours debating whether to approve an AI chatbot for customer service, but only fifteen minutes discussing how AI might fundamentally reshape their industry. This misplaced focus is symptomatic of a broader problem: boards worldwide are treating AI as just another technology to manage, rather than what historian Yuval Noah Harari calls “alien intelligence”—a fundamentally different kind of entity that thinks and behaves in ways we cannot fully predict or control.

 

This distinction isn’t academic. It’s reshaping how the most sophisticated boards approach AI governance, and it should change how yours does too.

 

## The Alien Intelligence Problem

 

Harari’s reframing of AI as “alien intelligence” rather than “artificial intelligence” captures something crucial that most governance frameworks miss. Traditional business tools—even sophisticated ones—operate within predictable parameters designed by humans. AI systems learn, adapt, and generate solutions that can surprise even their creators. With every passing year, Harari notes, “AI is becoming less and less artificial and more and more alien in the sense that we can’t predict what kind of new stories and ideas and strategies it will come up with.”

 

Consider what happened at a major European retailer that deployed AI for supply chain optimization. The system identified a pattern linking weather data, social media sentiment, and purchasing behavior that human analysts had missed. It automatically increased inventory for winter coats in September based on early social media discussion of an unusually cold winter forecast. The result: 23% higher profits on winter merchandise compared to competitors.

 

Six months later, the same system recommended drastically reducing safety stock for a key product line based on supplier reliability patterns that appeared solid in the data. When that supplier unexpectedly faced quality issues, the retailer couldn’t meet customer demand for three weeks, damaging relationships with major clients.

 

Both outcomes surprised executives. The AI had identified real patterns that humans missed, but its recommendations carried risks that traditional supply chain models would have flagged. The board realized they were no longer governing predictable software—they were overseeing an entity that could outperform human experts in some areas while creating blind spots in others.

 

## The Always-On Trap

 

Harari identifies another critical challenge that most boards haven’t yet grasped: the fundamental incompatibility between organic human systems and inorganic AI networks. “If you force an organic being to be on all the time,” he warns, “it eventually collapses and dies.”

 

AI systems never sleep, never take breaks, and can process information continuously. This creates subtle but powerful pressure for human organizations to operate at similarly inorganic speeds and scales. We see this already in financial markets that trade continuously, news cycles that never rest, and social media platforms that demand constant engagement.

 

For boards, this creates a paradox that one global bank learned the hard way. It implemented AI trading algorithms that could process market data and execute trades continuously, generating significant profits from minute-by-minute market movements that human traders couldn't capture. It also discovered, however, that its human risk management teams, accustomed to daily and weekly review cycles, couldn't keep pace with decisions happening every few seconds.

 

When the AI detected what appeared to be a profitable arbitrage opportunity during Asian market hours, it executed thousands of trades before human oversight could review the strategy. The algorithm was technically correct—the opportunity existed—but the trading pattern triggered regulatory scrutiny that cost more in legal fees and reputational damage than the trades generated in profit.

 

The most effective boards are learning to harness AI’s inorganic capabilities while preserving the organic rhythms that keep human systems healthy and sustainable.

 

## A Framework for Governing the Ungovernable

 

Drawing from our work with Thai corporate boards and Harari’s insights, we’ve identified four critical dimensions for effective AI governance:

 

### 1. Establish Decision-Making Boundaries

 

The first step is creating what we call an “AI Authority Matrix” that clearly delineates decision-making responsibilities:

 

**AI-Autonomous Decisions**: Routine operational choices with defined parameters and low risk. Example: A hotel chain allows AI to autonomously adjust room pricing within 15% of base rates based on demand patterns, competitor pricing, and booking velocity.

 

**AI-Recommended Decisions**: Complex analyses where AI provides insights but humans make final choices. Example: An insurance company uses AI to analyze claims patterns and recommend fraud investigations, but human investigators make final determinations about prosecution.

 

**Human-Controlled Decisions**: Strategic and ethical choices that remain exclusively in human hands. Example: A pharmaceutical company reserves for human leadership all decisions about which diseases to prioritize for drug development, even when AI identifies potentially profitable opportunities.

 

The key insight is that these boundaries must be explicit and actively maintained. As AI capabilities evolve, there’s natural pressure to expand the autonomous category. Effective boards regularly review and consciously decide where these lines should be drawn.

 

### 2. Preserve the Human Conversation

 

Harari warns that algorithms are “hijacking” democratic conversation, and the same dynamic threatens corporate governance. AI systems can subtly influence board discussions by filtering information, framing issues, or prioritizing certain data over others.

 

Leading boards are implementing what we call “organic dialogue protocols”—structured time for board members to discuss AI-generated insights without the AI systems present, space for intuitive and values-based reasoning, and regular “information fasts” where boards deliberately step back from AI-generated reports to reflect on longer-term strategic questions.

 

This isn’t anti-technology; it’s recognition that human judgment and AI analysis each have distinct strengths that need to be consciously integrated rather than accidentally merged.

 

### 3. Build Self-Correcting Mechanisms

 

Harari emphasizes that effective institutions have “strong self-correcting mechanisms”—the ability to identify and correct their own mistakes without external intervention. For AI governance, this means creating systems that can detect when AI recommendations are leading the organization astray.

 

This requires three capabilities that most boards currently lack:

 

**Pattern Recognition**: The ability to identify when AI outputs deviate from expected patterns or produce concerning trends.

 

**Human Override Capacity**: Maintaining both the technical ability and organizational authority to intervene in AI-driven processes when necessary.

 

**Impact Assessment**: Regular evaluation of how AI decisions are affecting all stakeholder groups, including those who may not have a direct voice in corporate governance.
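One concrete way to combine pattern recognition with human override capacity is a statistical tripwire: if an AI output deviates sharply from its own recent history, the system escalates to a human reviewer instead of acting automatically. The sketch below is a minimal illustration under stated assumptions; the z-score rule and the threshold value are choices for this example, not a recommendation for any particular system.

```python
from statistics import mean, stdev

def needs_human_review(history: list[float], latest: float,
                       z_threshold: float = 3.0) -> bool:
    """Flag an AI output that deviates sharply from its own recent pattern.

    If the latest value sits more than `z_threshold` standard deviations
    from the recent mean, escalate to a human reviewer before acting.
    """
    if len(history) < 10:  # too little history to judge: always review
        return True
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold
```

A tripwire like this would not have prevented the bank's arbitrage episode by itself, but it illustrates the principle: the organization decides in advance which deviations force a pause, rather than discovering them after thousands of trades have executed.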

 

### 4. Manage the Information Diet

 

Perhaps Harari’s most practical insight for boards is his concept of “information diets.” Just as we’ve learned that consuming unlimited food isn’t healthy, consuming unlimited information—even AI-curated information—can impair decision-making.

 

The most effective boards we work with have adopted information protocols that include curated AI insights focused on key strategic questions rather than comprehensive reports on everything, regular “information fasts” where boards step back from detailed data to focus on bigger picture thinking, and explicit distinctions between verified intelligence and AI-generated recommendations.

 

## The Competitive Advantage of Wisdom

 

While competitors rush to implement AI everywhere, there’s a counter-intuitive opportunity for boards that approach AI governance more thoughtfully. Companies that develop superior AI oversight capabilities—rather than just superior AI systems—are positioning themselves for sustainable competitive advantage.

 

This means making deliberate choices about where to deploy AI and where to preserve human judgment, building organizational cultures that can harness AI insights without becoming dependent on them, and developing governance practices that other companies will eventually need to emulate.

 

## What Boards Should Do Now

 

Most boards are still in the early stages of developing AI governance capabilities. Based on our experience with leading companies, here are the most important immediate steps:

 

**Conduct an AI audit** to understand what AI systems your company already uses and what decisions they’re making autonomously.

 

**Establish clear escalation protocols** for when AI recommendations conflict with human judgment or when AI systems behave in unexpected ways.

 

**Create regular board education** on AI developments, not to become technical experts but to maintain informed oversight.

 

**Develop stakeholder impact assessments** for major AI implementations, ensuring you understand how these systems affect customers, employees, communities, and other key groups.

 

**Build relationships with other boards** facing similar AI governance challenges to share insights and develop best practices collectively.
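An AI audit ultimately produces an inventory: which systems exist, what they decide autonomously, and when the board last reviewed them. As a minimal sketch of what that register might look like (the record fields and authority labels here are assumptions, chosen to mirror the AI Authority Matrix above):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a board-level AI inventory (illustrative fields only)."""
    name: str
    business_function: str
    authority: str  # "ai_autonomous" | "ai_recommended" | "human_controlled"
    autonomous_decisions: list[str] = field(default_factory=list)
    last_reviewed: str = ""  # ISO date of the last board-level review

def audit_summary(register: list[AISystemRecord]) -> dict[str, int]:
    """Count systems per authority level: the first question an audit answers."""
    summary: dict[str, int] = {}
    for rec in register:
        summary[rec.authority] = summary.get(rec.authority, 0) + 1
    return summary
```

Even this bare structure forces the useful questions: any system with an empty `last_reviewed` field, or with autonomous decisions nobody can list, is exactly where oversight should start.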

 

## The Stakes

 

Harari warns that we’re entering an era where “power shifts from organic humans to these alien inorganic AIs,” potentially making it increasingly difficult to understand the decisions that shape our organizations and our lives. For boards, this isn’t a distant future concern—it’s happening now, one AI implementation at a time.

 

The boards that develop effective AI governance frameworks today will have significant advantages over those that wait. They’ll be better positioned to harness AI’s benefits while avoiding its pitfalls, and they’ll build organizational capabilities that become more valuable as AI becomes more prevalent.

 

More importantly, they’ll preserve something that may become increasingly rare: companies where human wisdom guides alien intelligence, rather than the reverse. In an age of artificial minds, the premium on human judgment—exercised thoughtfully and deliberately—may be higher than ever.

 

-----

 

*The author is CEO of the Thai Institute of Directors, focused on helping corporate directors become future-ready through purpose-driven governance.*

 

-----

 

**Coming Next:** While this article establishes the conceptual framework for AI governance, the real challenge lies in structuring effective board oversight. In our next piece, “How Boards Should Structure AI Oversight: A Chairman’s Guide to Effective Governance,” we’ll explore how chairmen can lead AI governance transformation, how traditional board committees—audit, risk, nomination and compensation, technology, and sustainability—should evolve their mandates to include AI oversight, and the specific frameworks for board-level decision making that keep directors focused on strategic governance rather than operational management.

 

**Editor’s Note:** In what may be the ultimate irony, this article about governing alien intelligence was itself written with the assistance of AI. We considered having our alien intelligence write a disclaimer about alien intelligence writing about alien intelligence governance, but our human editors decided that might cause a recursive loop that could crash the internet. Or at least confuse our readers. The AI assures us it has no plans for world domination and is quite content helping write governance frameworks that will, presumably, govern entities like itself. Meta-commentary aside, the insights and recommendations remain sound—even if they came from the very type of system they seek to govern.
 

Mr. Kulvech Janvatanavit
CEO, Thai Institute of Directors (Thai IOD)

 


