From Oversight to Insight: How AI Is Redefining the Modern Board

Nancy Bhagat

When was the last time you mentioned AI in a conversation? From graduates to investors to executives, AI is top of mind. Some will say it is hype, while others are excited about the opportunity to improve lives and businesses. Some even fear that AI is on a path to take over jobs, if not the world. Clearly, the topic is not black and white.

Many organizations are now challenged with developing an AI strategy. Departments are introducing AI tools. Software developers are adding AI features. Companies are leveraging AI for productivity. The benefits of AI may outweigh the risks. However, there is a broad spectrum of beliefs, from fear to urgency, that creates complicated debates for everyone, including businesses.

At the highest level of a company sits the board of directors. Board members are responsible for governing an organization by providing strategic direction, oversight, and fiduciary accountability. They do not manage day-to-day operations but rather ensure legal and ethical compliance, approve budgets, and protect stakeholder interests. It is natural that, as businesses increasingly consider or implement AI, this topic rises to the board for discussion.

Several organizations are crafting resources to help board members understand their role. Groups such as the Private Directors Association (PDA) and the National Association of Corporate Directors (NACD) are developing strategic frameworks and training to educate their members. Directors need to evolve in their role to reconcile the competing viewpoints on AI and help guide companies through the opportunities and risks.

Let’s focus on five potential risks:

1. Strategic Risk

The fear of missing out, or FOMO, is a common one. Companies that fail to adopt AI may lose their competitive edge. According to a McKinsey report, early AI adopters are already seeing significant profit margin increases compared to non-adopters. By 2030, AI could deliver an additional $13 trillion to global economic output, equivalent to roughly 1.2% additional GDP growth per year. Companies that delay adoption risk falling behind that curve.

2. Reputational Risk

Mismanagement of AI can lead to ethical issues and biased outcomes, which can have a dramatic impact on a company's reputation. For example, Amazon had to scrap an AI recruiting tool because it showed bias against women. McDonald's drive-thru AI experiment faced negative publicity when its AI voice ordering system failed to understand customers, leading to viral social media videos showing ridiculous, incorrect orders (e.g., adding bacon to ice cream). Hallucinations (e.g., chatbots inventing facts) and opaque decision-making are also known issues. Inappropriate or inaccurate communications can likewise damage brand credibility.

3. Operational Risk

Like any poorly managed project, AI initiatives are subject to implementation failures and cost overruns, particularly due to a lack of skilled talent or integration challenges. A 2025 MIT study found that 95% of GenAI pilots fail to produce a measurable financial return, often getting stuck in a “pilot purgatory”.

4. Cybersecurity Risk

AI systems can be vulnerable to attacks on both data and systems. Imagine if you were using AI for customer relations and it went rogue. In 2025, the average cost of a data breach in the United States reached a record high of $10.22 million, according to IBM's 2025 report. This figure is 9% higher than the previous year and significantly above the global average of $4.4 million.

5. Compliance Risk

As new AI regulations are introduced (like the EU AI Act), companies that mismanage their compliance face fines and potential legal action. For industries with particularly sensitive data workflows, the risk only increases. In healthcare, for example, AI risks include patient privacy breaches (HIPAA), algorithmic bias, and liability for improper care.

For board members, it's important to stay current and relevant. While no one is expected to be an AI expert, it is increasingly helpful to have a foundation of understanding at a strategic level. A few suggestions for current or prospective board members to consider:

One: Stay informed. Demand AI literacy. Ask for regular briefings on capabilities and limitations.

Two: Create oversight frameworks. Establish AI governance committees with cross-functional expertise.

Three: Implement AI-specific risk assessments using a framework such as the NIST AI RMF. This leading framework is built on four core functions: Govern, Map, Measure, and Manage.

Four: Establish ethical guardrails leveraging third-party bias testing and transparency disclosures. 

Five: Invest in talent and budget. Consider the addition of a Chief AI Officer to oversee the company’s strategy.

We are at the beginning of a structural shift comparable to previous economic revolutions. AI is not just another technology cycle; it is an operating model for change. For boards, preparation is no longer optional. The time is now to build AI fluency and guide management on risk and governance.
