
AI Agents for Private Credit: Where Covenant Tracking Actually Breaks

Author: Dr. Leigh Coney
Published: April 29, 2026
Reading Time: 9 minutes

Covenant tracking sounds like a perfect job for an AI agent. Read the credit agreement, pull the borrower financials, calculate the ratio, flag the breach. Demos make it look trivial. Production deployments make it look hard, because the agreements are not as standardized as the demos pretend, the EBITDA in the calculation is not the EBITDA the borrower reports, and the cure periods do not behave the way they read.


A direct lender I worked with last year was running a 60-position portfolio with one credit analyst doing covenant tracking. Every quarter, she pulled financials from each borrower, recalculated the leverage ratio, the fixed-charge coverage, and the minimum-EBITDA test, and updated the portfolio compliance log.

The work took her two weeks of every quarter. She did it well. The lender's covenant compliance reporting was accurate enough that LPs never asked questions.

Then she went on parental leave. The interim analyst who took over got the math right on every position. The covenant log was up to date. The lender was confident.

One position breached its leverage covenant a quarter later, and nobody caught it for six weeks. The breach had been visible in the data the interim analyst reviewed. He had calculated the ratio correctly. What he had missed was that the credit agreement defined leverage using a non-standard EBITDA definition that excluded a specific category of restructuring charges. The borrower had taken those charges in the previous quarter. The covenant calculation that had passed the math test failed the agreement-text test.

The lender lost two months of intervention time. The cost was material.

The Myth That Covenants Are Simple

Covenant tracking is the AI use case private credit teams ask about most. It looks like the obvious entry point. The work is repetitive, calculation-heavy, and runs on a quarterly cycle. The borrower sends the financials. The agent reads the credit agreement. The agent does the math. Done.

Demos by AI vendors make this look easy. The vendor has a sample credit agreement. The agent reads the leverage covenant clause. The agent reads the borrower's quarterly financial statement. The agent calculates the ratio. The dashboard turns green or red.

Production credit portfolios do not look like demos. The credit agreements are not the vendor's sample. They are 200-page documents with negotiated definitions, schedule references, and side-letter modifications. The borrower financials do not come in clean PDFs with consistent line items. They come in spreadsheets with custom charts of accounts, occasional errors, and one-time charges that may or may not qualify as EBITDA add-backs depending on which agreement you are looking at.

The agent that handles the demo well will produce wrong answers on the production portfolio. The math will look right. The math will be wrong, because the agent missed the parts of the agreement that change what gets calculated.

According to S&P Global Market Intelligence research on private credit, the median direct lending credit agreement contains 12 to 18 negotiated definitions that affect covenant calculations, and the median portfolio holds positions across 8 to 15 different agreement structures. No two positions calculate covenant compliance identically.

Where Covenant Agents Actually Break

Four specific places, every time.

1. Definitional drift across credit agreements. Every credit agreement defines EBITDA. The definitions are similar. They are not the same. One agreement allows specific restructuring charges as add-backs up to a 15% cap. Another allows them only with consent of the agent. A third defines them entirely differently and references a schedule that the original closing binder includes but the AI agent's data set does not.

A generic covenant agent assumes a standard EBITDA definition and runs the math. The number it produces matches what the credit team would calculate using a textbook definition. It does not match what the credit agreement actually requires.

The fix is an agent that reads each agreement's specific definitions on a position-by-position basis and stores them as parameters that flow into every quarterly calculation. The credit team validates the parameters at deal closing. After that, the agent uses position-specific math, not portfolio-average math.
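A minimal sketch of what per-position parameter storage could look like. Every field name, category, threshold, and the position ID below is illustrative, not drawn from any real agreement; a production schema would mirror the negotiated definitions in each credit agreement.

```python
from dataclasses import dataclass

@dataclass
class CovenantParams:
    """Agreement-specific definitions, extracted once at closing and
    validated by the credit team. Field names are illustrative."""
    position_id: str
    ebitda_addback_categories: set   # e.g. {"restructuring", "non_recurring"}
    addback_cap_pct_of_ebitda: float # cap as a fraction of pre-addback EBITDA
    leverage_covenant_max: float     # e.g. 5.25x
    cure_window_business_days: int   # e.g. 10

# The quarterly job looks parameters up per position rather than
# re-parsing the agreement text each cycle.
PARAMS = {
    "POS-017": CovenantParams(
        position_id="POS-017",
        ebitda_addback_categories={"restructuring", "non_recurring"},
        addback_cap_pct_of_ebitda=0.15,
        leverage_covenant_max=5.25,
        cure_window_business_days=10,
    ),
}
```

The point of the design is that the agreement is parsed once, reviewed by a human, and frozen; the quarterly calculation consumes only the validated parameter set.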

2. EBITDA add-back logic. The borrower's reported EBITDA is rarely the EBITDA the credit agreement uses for covenant calculations. Add-backs for synergies, restructuring charges, non-recurring items, owner compensation adjustments, and pro-forma adjustments for acquisitions all change the number.

A generic agent that pulls "EBITDA" from a borrower financial statement gets the wrong number. The right number requires understanding what add-backs the agreement permits, what add-backs the borrower has actually claimed, and what the running cap on add-backs is at this point in the agreement's life.

This is where most covenant agents quietly produce wrong answers. The math is right. The inputs are wrong.
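The add-back logic above can be sketched as a small function: start from reported EBITDA, admit only the add-back categories the agreement permits, and apply the cap. The figures and category names are invented for illustration.

```python
def covenant_ebitda(reported_ebitda, claimed_addbacks, permitted_categories, cap_pct):
    """Covenant EBITDA = reported EBITDA plus only the add-backs the
    agreement permits, subject to a cap expressed as a fraction of
    pre-addback EBITDA. A simplified sketch: real agreements often
    apply the cap over a trailing period, not a single quarter."""
    allowed = sum(
        amount for category, amount in claimed_addbacks.items()
        if category in permitted_categories
    )
    cap = reported_ebitda * cap_pct
    return reported_ebitda + min(allowed, cap)

# Borrower reports 10.0m EBITDA and claims 2.0m of add-backs, but only
# 1.2m falls in a permitted category, under a 15% cap (1.5m):
result = covenant_ebitda(
    10.0,
    {"restructuring": 1.2, "marketing": 0.8},
    {"restructuring"},
    0.15,
)
# 10.0 + min(1.2, 1.5) = 11.2, not the 12.0 the borrower's own
# adjusted-EBITDA figure would suggest.
```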

3. Cure period timing. Most credit agreements include cure rights. The borrower can cure a covenant breach by injecting equity, prepaying debt, or recalculating with adjusted figures within a defined cure window. The window is usually 5 to 15 business days, sometimes more.

A naive agent flags a covenant breach the day the financial statement arrives. That alert is technically correct and operationally premature. The credit team needs to know whether the breach is a cured event, an uncured event still within the cure window, or an uncured event past the window. The right alert distinguishes between these three states. The wrong alert treats them all as breaches and trains the credit team to ignore alerts.

Alert fatigue kills covenant agents faster than any other failure mode. If the credit team gets three alerts per week and two of them are noise, the third alert that actually matters gets dismissed.
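The three-state distinction can be made concrete. This sketch uses calendar days for simplicity, where a real implementation would count business days per the agreement's definition:

```python
from datetime import date
from enum import Enum

class BreachState(Enum):
    CURED = "cured"
    IN_CURE_WINDOW = "uncured, within cure window"
    PAST_CURE_WINDOW = "uncured, past cure window"

def classify_breach(breach_date: date, today: date,
                    cure_window_days: int, cured: bool) -> BreachState:
    """Graduated classification so alert severity matches actual
    urgency. Calendar days are used here as a simplification."""
    if cured:
        return BreachState.CURED
    elapsed = (today - breach_date).days
    if elapsed <= cure_window_days:
        return BreachState.IN_CURE_WINDOW
    return BreachState.PAST_CURE_WINDOW
```

Only the third state warrants an urgent alert; the first is informational and the second is a watch item. Collapsing them into one alert type is what produces the fatigue described above.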

4. Springing covenants. Some covenants only apply when a specific condition is met. Senior leverage covenants might spring into effect when revolver utilization exceeds a threshold. Maintenance covenants might apply only after the first acquisition. Coverage tests might be tested only at quarter-end.

A generic agent that calculates every covenant against every reporting period either produces breaches that are not breaches (because the covenant was not active) or misses breaches that are breaches (because the covenant just sprung into effect this quarter and the agent did not notice).

Springing covenants require the agent to maintain a state machine for each position: which covenants are currently active, which are dormant, what conditions would activate them. That state machine has to be updated every quarter based on the borrower's actual operating posture, not just a static read of the credit agreement.
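In the simplest case, that state machine reduces to a function from the borrower's current operating posture to the set of active covenants. The condition names and thresholds below are invented for illustration:

```python
def active_covenants(position_state: dict) -> set:
    """Return the covenants active this quarter given the borrower's
    operating posture. Conditions and thresholds are illustrative;
    each position would carry its own activation rules."""
    active = {"minimum_ebitda"}  # always-on maintenance covenant
    if position_state.get("revolver_utilization", 0.0) > 0.35:
        active.add("senior_leverage")        # springs above 35% utilization
    if position_state.get("acquisitions_completed", 0) >= 1:
        active.add("fixed_charge_coverage")  # applies after first acquisition
    return active
```

The inputs are refreshed from actual borrower reporting each quarter, which is what distinguishes this from a static read of the agreement: the same agreement text produces a different active set as the borrower's posture changes.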

Dr. Leigh Coney, Founder of WorkWise Solutions, notes: "Every covenant agent we have rebuilt for a private credit client failed at month four for the same reason. The vendor had configured it as if the entire portfolio used a single covenant model. The portfolio actually used eleven different models because eleven different agreements defined leverage and EBITDA differently. Until the agent stores agreement-specific parameters per position, it produces good-looking output that nobody can trust."

What Good Covenant Agents Actually Do

A covenant agent that survives in production looks different from the demo version. Six things distinguish it.

1. Per-position parameter storage. The agent reads each credit agreement at closing and extracts the specific definitions, caps, baskets, and cure provisions. Those parameters are stored per position, validated by the credit team, and applied to every subsequent calculation. The credit agreement is not a document the agent re-parses every quarter. It is a parameter set established once and updated only when the agreement is amended.

2. Add-back tracking with running caps. Every reported add-back gets logged against the running cap permitted under the agreement. When the cap is approached, the agent flags it before the borrower bumps against it. This catches a class of issue the credit team often discovers at the wrong time.

3. Cure-aware alerts. Alerts distinguish between potential breaches (early flags before the financial statement is final), uncured breaches inside the cure window, and uncured breaches past the cure window. The credit team gets graduated alerts that match the actual urgency rather than uniform alerts that train them to dismiss.

4. State-machine handling for springing covenants. The agent tracks the activation conditions for each covenant on each position. When conditions change (revolver utilization exceeds threshold, first acquisition completed, leverage above a specified level), the agent updates which covenants are now active and applies them prospectively.

5. Trajectory modeling, not just point-in-time tests. The agent does not just calculate whether the borrower passed the test this quarter. It models the trajectory and flags positions tracking toward breach in the next 60 to 90 days. The credit team gets time to amend, restructure, or intervene before a breach materializes.

6. Source-cited outputs. Every covenant calculation in the agent's output traces back to the specific provisions of the agreement and the specific lines of the borrower financial statement. When the credit team questions a result, the agent shows its work. This is the difference between a tool the credit team uses and a tool the credit team second-guesses.
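The trajectory modeling in point 5 can be as simple as extrapolating the recent leverage trend against the covenant level. This is a deliberately naive linear sketch; a production model would weigh seasonality and known one-offs.

```python
import math

def quarters_to_breach(leverage_history, covenant_max):
    """Linear extrapolation of the leverage ratio from the last two
    quarters. Returns the projected number of quarters until breach,
    0 if already at or past the covenant level, or None if the
    trajectory is flat or improving. A sketch, not a forecast model."""
    if len(leverage_history) < 2:
        return None
    slope = leverage_history[-1] - leverage_history[-2]
    if slope <= 0:
        return None  # deleveraging or flat: no projected breach
    headroom = covenant_max - leverage_history[-1]
    if headroom <= 0:
        return 0
    return math.ceil(headroom / slope)

# Leverage drifting 4.6x -> 4.9x against a 5.25x covenant:
# 0.35x of headroom at 0.3x per quarter flags a breach ~2 quarters out,
# which is the 60-to-90-day warning the credit team needs.
```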

For more on monitoring agents that handle this complexity, see our guide to the best AI agents for private credit firms in 2026, and the related solution at AI covenant monitoring for private credit.

Why Most Deployments Fail at Month Four

The pattern is consistent across the deployments I have seen.

Month one: enthusiasm. The agent is configured. The first portfolio sweep produces output that looks reasonable. The credit team is happy.

Month two: the first borrower reports come in. The agent calculates covenants. Most positions look fine. A few flags appear. The credit team investigates and finds the flags are noise (cure-window timing the agent did not understand, or add-back categorization the agent guessed wrong).

Month three: a real issue surfaces, but the agent does not catch it because the issue is in a position with a non-standard agreement structure. The credit analyst catches it manually during the regular review. Confidence in the agent erodes.

Month four: the credit team is back to manual covenant tracking. The agent runs in the background but nobody trusts its output without independent verification. The deployment has effectively failed even if the software is still running.

The fix is calibration, not better models. The agent that survives is the one that absorbed the credit team's specific portfolio over the first six to eight weeks, with the credit team validating outputs against their own analysis on every position. By month three, the agent's outputs match the credit team's own work on 95%+ of positions. The flagged 5% are real signals, not noise. That trust is the entire deployment.

The Practical Test

Before signing on with any AI vendor for covenant tracking, run this test. Pick three positions in the existing portfolio that have non-standard agreement structures. The kind of positions where the credit analyst made specific judgment calls about EBITDA add-backs, springing covenant activation, or cure period treatment last quarter.

Hand the credit agreements and the borrower financials to the vendor. Ask them to produce a quarterly covenant report.

If the report matches the credit analyst's work on all three positions, the vendor's agent is calibrated for non-trivial credit work. If the report misses on any of the three, the agent is built for the standard case. It will fail on the rest of the portfolio the same way it failed on the test, just slower.

Most vendors fail this test. The ones that pass are usually the ones that customize on a per-portfolio basis, which is more expensive and less marketable but considerably more useful.

For private credit teams thinking through the broader AI agent stack, the complete guide to AI for private credit covers the full set of agent types beyond covenant monitoring.


Ready to deploy a covenant agent that actually holds up in production?

See our covenant monitoring solution for private credit, or read about Dr. Leigh Coney's approach to high-stakes AI.
