Why AI Projects Fail in Private Equity, and How to Fix Adoption
Dr. Leigh Coney
June 15, 2025
3 minutes
70% of AI projects miss their ROI target. The reason isn't technical. People feel threatened, their habits get disrupted, and they don't trust the tools. Fixing this means understanding people, not buying better software.
By Dr. Leigh Coney, Founder of WorkWise Solutions
A major PE-backed firm rolled out a state-of-the-art AI tool to automate competitive intelligence. The technology worked perfectly. Six months later, 12% of the team used it. Licenses sat unused. Partners went back to manual research.
This isn't a technology failure. It's a people failure, and it hits PE portfolio companies hard. 70% of AI projects miss their ROI targets, and the cause is almost never technical. It's human.
Firms treat AI adoption like a software rollout. It's not. It's a behavior change problem.
Three Reasons People Don't Use It
1. Status threat. Senior professionals built their careers on pattern recognition, on synthesizing complex information faster than junior colleagues. When AI does the same thing, it doesn't feel like a helper. It feels like a replacement.
A Partner who spent 20 years building investment intuition doesn't want an algorithm suggesting portfolio allocations. The reaction is predictable: rejection, skepticism, quiet resistance. If a tool threatens someone's identity, they'll find reasons not to trust it.
2. Workflow disruption. High performers have deeply ingrained habits. A senior analyst doesn't "think" about how to build a financial model. Their hands move automatically. Introduce an AI tool that demands a different way of working, and you're not competing with the old process's efficiency. You're competing with 10,000 hours of muscle memory.
Learning a new system, even a better one, takes effort. People avoid it. They don't resist better tools. They resist different ones.
3. Trust deficit. When errors matter (a missed regulatory flag, a flawed valuation, a bad thesis), trust is everything. Generic AI tools give you answers without showing their work. For professionals trained to check every assumption, that's a dealbreaker.
A Managing Director won't stake their reputation on an answer they can't defend to the Investment Committee. No transparency, no trust. No trust, no adoption.
What's Hidden Below the Surface
In a 2023 study of 758 BCG consultants, AI use improved output quality by 43% for below-average performers but only 17% for top performers (Dell'Acqua, Mollick et al., Harvard Business School). That gap tells you AI adoption fails not because the tech doesn't work, but because firms don't design for the people who need it most.
Beyond individual psychology, AI runs into organizational antibodies.
Power structures shift. When automation makes expertise more accessible, who loses influence? When junior analysts can do work that used to need senior oversight, the hierarchy built on that oversight starts to crack.
Incentives don't line up. If KPIs reward hours billed or deals closed, why adopt a tool that makes your effort less visible?
Culture rejects foreign objects. Firms with "we've always done it this way" attitudes treat AI as a threat to their identity.
How to Fix It
Invisible integration. The best AI rollouts don't ask users to change workflows. They enhance the ones people already use. Our Pre-Screening Agent didn't replace the Investment Committee memo. It automated the data gathering so analysts could focus on interpretation. Users kept working in their familiar format. The AI ran underneath.
Start small. Begin with low-stakes tasks where errors are recoverable. Let users build trust over time. Only after they've seen the AI perform reliably on routine work do you bring it into high-stakes decisions. Trust is earned, not declared.
Let people take credit. Make sure humans, not the system, get credit for AI-assisted work. If the tool makes people look good instead of making them obsolete, they'll use it.
Technology is the easy part. Changing habits, navigating power dynamics, and building trust is the hard part. AI projects that ignore the human side join the 70% that fail. The ones that treat the people problem as a design constraint succeed.
Related Articles
AI Governance for Portfolio Companies: Beyond the Compliance Checkbox
Compliance-driven AI governance creates false security. Build a framework that protects portfolio companies while accelerating AI value creation.
Why Generic AI Is a Liability in High-Stakes Settings
Generic AI lacks the security and compliance rigor PE and alternative investment firms require. Learn why zero-retention architecture matters.
Ready to build AI adoption your team will actually use?
See our High-Stakes AI Blueprint for change management, or read about Dr. Leigh Coney's approach to human-centered AI.
Book a Discovery Sprint