AI due diligence for private equity: separating margin from theater
By Bracken Fields
Why AI showed up on the diligence checklist
Private equity diligence used to cover quality of earnings, customer concentration, working capital, key person risk, and IT debt. AI was an optional bullet. That changed once portfolio companies started running AI tools at scale and once vendors started pasting "AI-powered" on every product page.
Today, if you are buying a lower-middle-market services or software-enabled business, you need a real read on AI before you close. Not because every deal needs an AI thesis, but because AI is now a source of margin expansion in some businesses and a source of hidden risk in others, and you cannot tell which is which from the CIM.
This is what AI due diligence for private equity looks like when somebody who has built and shipped these systems runs it.
What an AI assessment for PE firms covers that IT DD does not
A standard IT diligence walks the stack: licenses, infrastructure, security posture, ticket backlog, key vendor contracts. Useful. Not enough.
AI diligence is a different question. It asks: where can this business compress cost or grow revenue using AI in the next 12 to 24 months, and where is it exposed to risk it has not priced? Answering that takes three things:
- A view of the workflows where AI can produce real margin
- A view of the AI a target is already running, including the vendor stack
- A view of the data, integration, and risk posture that determines what is buildable post-close
Most diligence reports cover one of those. The good ones cover all three.
Where AI creates margin in lower-middle-market businesses
I spend my days as a CTO in a services business. The same places keep showing up where AI moves the needle on cost and throughput:
- Call review and QA at contact centers, intake teams, and customer service desks. Manual sampling covers 1 to 3 percent of calls. A well-tuned local model can cover 100 percent and free reviewers to coach the agents who need it.
- Document extraction and classification. Anywhere a back-office team is reading unstructured documents and typing fields into a system, there is structured-output money on the table.
- Sales motion support. Lead scoring from CRM history, draft outbound emails grounded in a real account brief, call summaries written into the CRM. The wins are measured in rep hours per week.
- Field operations and dispatch. Routing, scheduling, and technician triage from photos or videos.
- Engineering and product velocity in software-enabled businesses. Code generation, test scaffolding, and documentation. Real if measured well, theater if measured by lines of code.
The pattern is the same across all of these: a workflow that repeats, has inputs and outputs a person could write down, and currently consumes a meaningful share of labor or wait time.
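The sizing logic behind the call-review bullet above can be sketched in a few lines. Every input here, call volume, review time, loaded rate, and inference price, is an illustrative assumption, not a figure from any deal:

```python
# Back-of-envelope math for contact-center call QA.
# All numbers are illustrative assumptions.

def annual_review_cost(calls_per_week, coverage, review_min, loaded_rate_hr):
    """Annual labor cost to human-review a given share of calls."""
    reviewed = calls_per_week * coverage
    return reviewed * (review_min / 60) * loaded_rate_hr * 52

CALLS_WK = 5_000

# Status quo: manual QA samples 2% of calls at 10 min each, $40/hr loaded
manual = annual_review_cost(CALLS_WK, 0.02, 10, 40)

# With a model scoring every call (assumed $0.03/call to run), humans
# re-review only the 5% the model flags
automated = CALLS_WK * 52 * 0.03 + annual_review_cost(CALLS_WK, 0.05, 10, 40)

# The win is coverage, not headline cost: cost per call actually reviewed
per_call_manual = manual / (CALLS_WK * 0.02 * 52)   # roughly $6.67 per call
per_call_auto = automated / (CALLS_WK * 52)         # roughly $0.36 per call
```

The headline budget can go up while the per-call cost collapses, which is why the diligence framing should be coverage and per-unit cost, not just line-item spend.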
Where vendors are selling AI labels without product depth
Every SaaS contract review I do now has at least one AI feature line item. About half are real. The others are rebranded autocomplete or a thin wrapper around a public model with no product judgment underneath.
How to tell the difference inside diligence:
- Ask what happens when the AI is wrong. A real product has an explicit human-in-the-loop step, an audit log, and a way to correct the model. A wrapper has a "regenerate" button.
- Ask for the model behind the feature, the rubric or schema it uses, and the eval suite. If the answer is "we use OpenAI," that is the start of an answer, not the whole of one.
- Ask the people inside the company who use the feature. If frontline employees ignore the AI tab, the product depth is not there.
- Look at price. A 40 percent uplift attributed to an "AI add-on" with no change in delivery is a flag.
You will see vendor lock-in dressed up as innovation. Price it.
Data readiness and integration constraints
Most AI value in a portfolio company is gated by data, not by model choice. Diligence should answer:
- Is there a single source of truth for the workflows AI would touch, or three half-overlapping CRMs and a shared inbox?
- Can the systems be read from and written to via API, or is everything trapped in a vendor portal?
- What is the data quality on the records that matter? AI scoring built on dirty CRM data inherits the dirt.
- Are there contractual or regulatory restrictions on where data can move? HIPAA-covered records, PCI-scoped cardholder data, attorney-client privileged material, or call recordings from two-party-consent states all constrain the architecture.
A target with messy data is not disqualified. It does mean the AI value will lag a data cleanup. Price the cleanup into the model.
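The data-quality question above can be answered quickly with a field fill-rate profile. This is a minimal sketch, the records, field names, and the 80 percent threshold are all made-up examples, not a standard:

```python
# Quick data-quality read on the CRM fields an AI build would depend on.
# Records and field names are hypothetical.
from collections import Counter

def field_fill_rates(records, fields):
    """Share of records with a usable (non-empty) value per field."""
    filled = Counter()
    for rec in records:
        for f in fields:
            if rec.get(f) not in (None, "", "N/A"):
                filled[f] += 1
    n = len(records)
    return {f: filled[f] / n for f in fields}

crm = [
    {"industry": "HVAC", "revenue": 4_000_000, "last_touch": "2025-11-02"},
    {"industry": "", "revenue": None, "last_touch": "2024-01-15"},
    {"industry": "Plumbing", "revenue": 1_200_000, "last_touch": None},
    {"industry": "N/A", "revenue": 900_000, "last_touch": "2025-06-30"},
]

rates = field_fill_rates(crm, ["industry", "revenue", "last_touch"])
# Fields well below ~80% fill go on the cleanup list before any scoring build
```

An hour of this against the target's real export tells you more about buildability than a vendor architecture diagram.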
Customer support, sales, and back-office automation
Three areas show up most often in lower-middle-market diligence:
Customer support. Ticket triage, first-draft replies, knowledge base answers grounded in the company's own docs, call summarization, and QA scoring. The math is direct. Tickets per week times handle time times loaded labor rate, against a realistic capture rate.
Sales operations. Lead enrichment from public data, draft sequences grounded in real account research, meeting notes written back to the CRM, and deal-risk scoring from pipeline history. The biggest single win is usually CRM hygiene driven by an agent that reads the deals every Friday and flags the ones at risk.
Back office. Accounts payable matching, contract review, vendor onboarding, payroll exception handling, and document intake and indexing. Quiet work, but the hours add up and the error rate matters.
For each of these, the diligence question is the same. What is the current cost, what is the realistic capture rate inside 12 months, and what does it cost to build and run? An AI DD deliverable should include a sized estimate, not a vague "high opportunity."
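The support-ticket math described above, tickets per week times handle time times loaded rate, discounted by a capture rate and netted against build and run cost, looks like this. Every figure is an illustrative assumption:

```python
# Sizing one support workflow. All inputs are hypothetical.
tickets_wk = 1_200     # assumed weekly ticket volume
handle_min = 12        # assumed minutes of agent time per ticket
rate_hr = 35           # assumed loaded labor rate, $/hr

# Fully captured annual labor value of the workflow
gross = tickets_wk * 52 * (handle_min / 60) * rate_hr

realistic = gross * 0.35        # assume a 35% capture rate inside 12 months
net = realistic - 30_000        # less an assumed $30k/yr run cost

payback_months = 12 * 60_000 / net   # against an assumed $60k build
```

This is the shape of the "sized estimate" a deliverable should contain: a gross number, a defended capture rate, and a payback period, each with its assumptions written down.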
Risk review the lawyers will not cover
This is where AI DD earns its fee. The risk surface is wider than it looks.
- Hallucinations. Where is the company already shipping AI output to customers without a human review step? That is a settlement risk waiting to happen. Get the audit trail.
- Compliance. HIPAA, PCI, state-level privacy regimes, two-party-consent recording laws, FCRA if there is any consumer scoring, and the patchwork of new state AI disclosure rules. Healthcare, financial services, legal, and insurance carry a higher bar.
- Privacy. What data is being sent to which third-party model APIs? Does the vendor train on inputs by default? Are BAAs in place where they need to be? Has anyone read the OpenAI or Anthropic terms in the last six months?
- Auditability. Can the company show a regulator or an aggrieved customer what the model saw, what it produced, and who reviewed it?
- Vendor lock-in. Per-call and per-token pricing scales with success. A unit-economics model that works at 100 calls a day can break at 10,000. Check the contracts and the exit options.
- Concentration. A single AI vendor across many critical workflows is key person risk in disguise.
A clean AI DD report flags each of these by workflow and gives a real read on what remediation costs.
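The lock-in bullet above is easy to put in numbers: per-unit API pricing scales linearly with volume while a self-hosted model is roughly a fixed cost. The prices here are illustrative assumptions, not any vendor's actual rates:

```python
# Unit-economics check on per-call AI vendor pricing. Prices are hypothetical.
def annual_api_cost(calls_per_day, price_per_call):
    return calls_per_day * 365 * price_per_call

pilot = annual_api_cost(100, 0.05)       # ~$1,825/yr: rounds to zero in a model
scaled = annual_api_cost(10_000, 0.05)   # ~$182,500/yr: a real line item
local = 40_000   # assumed annual cost to run a tuned local model plus ops
```

The diligence question is not which number is smaller today but which curve the investment thesis puts the business on, and whether the contract allows an exit to the flat one.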
Post-close 90-day AI roadmap
A buyer should walk into close with a 90-day plan, not a wish list. The shape of a good post-close AI roadmap:
Days 1 to 30: ground truth. Confirm the diligence. Sit with the operators of the top three target workflows. Pull the real volume and cost numbers. Validate data access. Pick one workflow as the first build.
Days 31 to 60: ship one thing. A focused build on the highest-confidence workflow. A pilot, with a measured baseline, an acceptance rubric, and a written rollback path. The win is a working system in production with a real metric, not a slide.
Days 61 to 90: scale and harden. Bring the pilot to full coverage. Stand up the second build. Put governance in place: an AI policy, vendor review process, and an audit trail standard. Hand the operating team a dashboard they will read.
By day 90, you should have one production AI system, a second build in flight, a governance baseline, and a sized backlog for the next four quarters.
What an AI DD deliverable should include
When you commission AI diligence, expect the report to contain:
- An AI maturity read on the target across data, tooling, talent, and governance
- A workflow-by-workflow value map with sized opportunities and confidence levels
- A vendor stack review with depth-versus-theater calls and contract risks
- A risk register covering hallucinations, compliance, privacy, auditability, and concentration
- A buildability assessment for data, integration, talent, and timeline
- A 12-month investment plan with capex, opex, and headcount
- A 90-day post-close roadmap with named owners
That is the document you can hand to your operating partner on day one of ownership. Without it, AI shows up in board decks as a line you cannot price.
How Indy AI Consulting helps
I run AI diligence for PE funds and family offices on lower-middle-market services and software businesses. The work is hands-on, written for operators, and grounded in what I have built and shipped as a sitting CTO. I have done the contact-center QA build, the back-office extraction work, the sales agent build, and the local-model versus cloud-API tradeoffs. I can tell when a vendor pitch deck is real and when it is paint.
If you have a deal in front of you and want a real read on the AI thesis and the AI risk before you close, <a href="/contact">get in touch</a>. I am in Indianapolis and I work on deals across the Midwest and beyond. The first call is a working session, not a sales pitch.