AI Is No Longer a Side Project: What M&A Operators Need to See Before and After the Deal

If you’re an operating partner or deal professional, you’ve probably heard some version of:

“Don’t worry, we’re already using AI.”

“We’ve got an AI governance committee.”

“Our vendor will indemnify us if anything goes wrong.”

Those three sentences should not make you feel better.

AI has moved beyond “innovation theater.” It’s now embedded in revenue engines, underwriting models, pricing tools, hiring platforms, customer service, and back-office processes. For M&A professionals, that means AI is now part of enterprise value—and part of the risk surface you’re taking on every time you close a deal.

The question is no longer: Are they using AI?

The right question is: How? Where? With what guardrails? And at whose risk?

This article is about how to think like an operator in an AI world—before and after the ink dries.

1. Stop Asking “Do You Use AI?” and Start Asking “Where Is AI in the Critical Path?”

Almost every portfolio company will say they’re using AI somewhere: marketing copy, a chatbot, maybe a sales tool. That’s not helpful.

For M&A operators, the key is to find where AI actually touches the critical path of the business:

  • Revenue: Is AI scoring leads, setting prices, making credit decisions, or prioritizing outreach?
  • Operations: Is it routing tickets, automating workflows, monitoring systems, or reading contracts?
  • People: Is AI screening resumes, ranking candidates, or making promotion or compensation recommendations?
  • Product: Is AI embedded into the core platform delivered to customers?

Where AI is “nice to have,” you have more time and optionality.

Where AI is in the critical path, it becomes a due diligence and integration priority.

Operator’s question set:

  • “Show me the top 3 workflows where AI is already essential.”
  • “If we turned off all AI tomorrow, what breaks first—revenue, operations, or customer experience?”
  • “Who owns these AI-enabled processes today—not the tool, the outcome?”

2. AI Risk Isn’t Theoretical Anymore—It’s Operational

Many management teams still think of AI risk as abstract: hallucinations, generic bias, or “someday we might get regulated.”

From an operator’s seat, AI risk is much more concrete:

  • Bad decisions at scale – A flawed model can deny the right customers, misprice contracts, or quietly push the wrong candidates out of the hiring funnel.
  • Invisible bias – The team thinks they’ve “scrubbed” protected characteristics, yet the model learns to discriminate through proxies (schools, zip codes, etc.).
  • Data leakage & IP exposure – Shadow AI use (free tools, unsanctioned apps) silently moves sensitive information outside the company’s control.
  • Regulatory and litigation exposure – Employment claims, consumer protection actions, state AG investigations, or future AI-specific enforcement.

Most of these risks show up post-deployment, after the solution is in production and people begin relying on it. That’s why “we tested a POC and it worked great” is not enough.

Operator’s question set:

  • “What decisions has AI actually changed in the last 90 days?”
  • “How do you test for bias, error, or drift over time—not just at launch?”
  • “Who is responsible for shutting an AI system down if it misbehaves?”
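For operators who want to make the “drift over time” question concrete, one common check teams run is a Population Stability Index (PSI): compare the distribution of a model’s scores at launch against its scores today, and investigate when the gap crosses a threshold (a frequent rule of thumb is ~0.25). The sketch below is illustrative only; the variable names and sample scores are hypothetical, not taken from any specific tool.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of 0.0-1.0 scores.

    Near zero means the score distribution looks like it did at launch;
    larger values mean the model is seeing (or producing) something different.
    """
    eps = 1e-6  # floor for empty buckets so log() stays defined

    def bucket_shares(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int(s * bins), bins - 1)  # clamp 1.0 into the top bucket
            counts[idx] += 1
        total = len(scores)
        return [max(c / total, eps) for c in counts]

    b = bucket_shares(baseline)
    c = bucket_shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Hypothetical data: scores logged at launch vs. scores observed this quarter.
launch_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
today_scores  = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.95]

print(round(psi(launch_scores, launch_scores), 3))  # identical data: 0.0
print(round(psi(launch_scores, today_scores), 3))   # shifted data: clearly above 0.25
```

The point for diligence isn’t the specific statistic; it’s whether the team runs any check like this on a schedule, and who acts when the number moves.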

3. Due Diligence: You Can’t Assess Risk You Don’t Understand

There’s a dangerous pattern in deals right now:

  • The tech team says, “It’s just an LLM, nothing special.”
  • The legal team says, “We got an indemnity clause, we’re covered.”
  • The operators assume the risk is under control.

In reality, you can’t meaningfully assess AI risk if you stay at the slideware level. Someone on the diligence team (internal or external) needs to dive into the actual solution:

  • What data is being ingested? From where? Under what rights or consents?
  • What models are being used (proprietary, third-party, fine-tuned)? Under what licenses?
  • Where are outputs going? Do they feed other systems, or retrain models downstream?
  • Who is monitoring performance, bias, and unexpected behavior over time?

This doesn’t mean operators need to become engineers. It means you need people on your team who can sit with the engineers and actually follow the threads.

Operator’s question set:

  • “Walk me through this AI solution end-to-end like I’m a new hire responsible for it.”
  • “Which external providers do you depend on—and what happens if they change their terms?”
  • “If we wanted to unwind or replace this AI component, how hard would that be?”

4. Indemnities Don’t Replace Oversight

One of the most common myths is:

“If we’re using a third-party AI solution, we’re safe as long as the vendor indemnifies us.”

In practice:

  • Most vendors will narrow their indemnities as much as possible.
  • Even with good contractual protection, regulators and courts increasingly expect deployers (the company using the AI) to exercise care and oversight.
  • You can’t outsource your duty to monitor how a system behaves on your data, in your environment, with your customers and employees.

From an M&A and operating perspective, that means:

  • You still need governance: approved tools, clear rules, human-in-the-loop where it matters.
  • You still need logging, monitoring, and escalation pathways.
  • You still need to show you took reasonable steps to prevent harm.

Indemnities can help with who pays, but they do not erase who’s responsible.

Operator’s question set:

  • “What are we personally on the hook to monitor with this solution?”
  • “Where is the human-in-the-loop today, and where are we considering removing it?”
  • “If regulators or plaintiffs asked for our AI governance story, what would we actually show them?”

5. Integration: AI Is a Change Management Problem, Not Just a Tech One

Post-close, AI can be a huge unlock:

  • Faster onboarding of new teams and customers
  • Smarter and more consistent contract review
  • Streamlined support and back-office processes
  • Better visibility across the combined data landscape

But there are also hidden integration costs:

  • Conflicting tools and policies – One company has sanctioned solutions and guidelines; the other has shadow AI everywhere.
  • Duplication and drift – Multiple models solving similar problems, each trained differently, each with its own risk and behavior.
  • Talent friction – Some teams are excited by AI; others are resistant, frustrated, or afraid.

An integration plan that treats AI as “just another system” will miss the bigger picture: AI changes how people work, how decisions are made, and how value is created (or destroyed).

Operator’s question set:

  • “Which AI-enabled processes do we standardize first across the combined entity?”
  • “Where do we need clear, simple guardrails so people know what’s allowed?”
  • “What training or communication do leaders need so they don’t default to either blind fear or blind adoption?”

Where This Leaves M&A Operators

For operating partners and M&A professionals, AI doesn’t require you to become a data scientist.

It does require you to:

  1. Treat AI as part of core enterprise value, not a side experiment.
  2. Demand specificity—about use cases, data, risk, and accountability.
  3. Pull AI into your playbooks—from diligence checklists to integration plans and board reporting.
  4. Bring in the right expertise—legal, technical, and operational—early enough to shape decisions, not just paper over them.

AI is here, with or without governance. The operators who lean in now—ask better questions, design thoughtful guardrails, and connect AI to real-world outcomes—will quietly create a new edge in value creation.

And that’s exactly what the art after the deal is all about.

If you’d like to go deeper on this topic, I recently sat down with Rob Taylor, JD, Of Counsel and Head of the AI Triage Center at Carstens, Allen & Gourley, to talk through real-world AI risk, due diligence, and where the law is heading. You can listen to that episode of M&A+: The Art After the Deal at the link below.

And as always, if you’re facing a transition or integration where AI, contracts, and operations are colliding, my team at In2Edge is in the business of making sure the “after the deal” actually works.
