AI & Credit

Humans and AI in the Credit Process: Who Owns What

Joshua Tackaberry

Founder & CEO · April 2026 · 9 min read

The question is not whether AI belongs in commercial lending. It is where the line sits.

AI is already in the building

Most commercial lenders have not made a formal decision about AI. But AI has made a decision about them. Document processing tools, automated spreading platforms, and portfolio monitoring software with machine learning components are already inside many lending operations, often adopted team by team, tool by tool, without a clear institutional policy about where the technology is trusted and where it is not.

That ambiguity is a risk. Not because AI in commercial lending is inherently dangerous, but because unclear ownership in a credit process creates gaps. Gaps create errors. Errors in credit have consequences.

The lenders getting this right are not the ones who have banned AI or the ones who have handed the process to it. They are the ones who have drawn the line deliberately.

A framework: four layers of the credit process

Commercial lending decisions move through four distinct layers. Where AI fits, and where it does not, becomes clear when you separate them.

Preparation. Gathering documents, extracting data, normalizing formats, spreading financials, populating the credit memo template. This is high-volume, low-judgment work. Errors here are costly not because they are hard to catch, but because they are easy to miss when the analyst is moving fast.

Analysis. Interpreting what the data means. Identifying trends in cash flow, stress testing coverage ratios, comparing the borrower against industry benchmarks. This requires both data and context. AI can do parts of this well. It cannot do all of it.

Judgment. Weighing factors that do not fit neatly into a model. Management quality. Relationship history. Market conditions specific to a submarket or industry. The reason a coverage ratio is thin this year. This is where commercial credit has always lived, and where it still lives.

Accountability. The final credit decision and the signature that goes with it. Regulators, examiners, and borrowers all expect a human to own this. No current AI system changes that expectation, and none should.

What AI owns well

Preparation is AI's domain. Extracting data from tax returns, rent rolls, operating statements, and entity documents. Normalizing inconsistent formats from different borrowers and accountants. Populating spreading templates. Flagging missing pages or inconsistent figures before the analyst touches the file.

AI is also well suited to ongoing portfolio monitoring. Tracking covenant compliance across a large book. Flagging accounts where financial performance is drifting relative to underwriting assumptions. Surfacing credits that need attention before the annual review cycle forces the issue.
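Drift monitoring of this kind is mechanically simple. As a minimal sketch (the loan fields, names, and the 10% tolerance are illustrative assumptions, not a prescribed rule), flagging credits whose coverage has slipped relative to underwriting might look like:

```python
from dataclasses import dataclass

# Hypothetical loan record; field names are illustrative, not from any
# specific loan system of record.
@dataclass
class Loan:
    loan_id: str
    underwritten_dscr: float  # debt service coverage assumed at origination
    current_dscr: float       # coverage from the latest reported financials

def flag_drift(loans, tolerance=0.10):
    """Return loans whose current DSCR has slipped more than `tolerance`
    (as a fraction) below the underwriting assumption. The output is a
    work queue for a human analyst, not a credit decision."""
    flagged = []
    for loan in loans:
        drift = (loan.underwritten_dscr - loan.current_dscr) / loan.underwritten_dscr
        if drift > tolerance:
            flagged.append((loan.loan_id, round(drift, 3)))
    return flagged

book = [
    Loan("CRE-101", underwritten_dscr=1.35, current_dscr=1.32),  # within tolerance
    Loan("CRE-102", underwritten_dscr=1.25, current_dscr=1.05),  # drifted ~16%
]
print(flag_drift(book))  # only CRE-102 is surfaced for review
```

The point of the sketch is the division of labor: the machine applies the same threshold to every credit on every reporting cycle, and a human decides what the drift means.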

In both cases, the value is speed and consistency, not intelligence. AI does not get tired at the end of a long spreading session. It does not skip a line in a K-1 because it is working on three deals at once. That reliability is genuinely valuable.

What humans must own

Credit character cannot be extracted from a document. A borrower who has managed through a downturn, maintained relationships with vendors, and been transparent with their lender brings something to the table that a debt service coverage ratio does not capture. A human has to assess that.

Exceptions are another human domain. The loan that does not fit the policy box but represents a sound credit. The guarantor whose net worth is concentrated in one asset but whose track record justifies the exposure. AI can flag the exception. It cannot decide whether the exception is warranted.

Relationship risk is also human territory. Commercial banking is not a transaction business. The lender who understands why a borrower is struggling in year three of a five-year loan, and who has the relationship to work through it constructively, is doing something no model can replicate.

The regulatory dimension

Banking regulators have been clear: AI can support credit decisions, but it cannot make them. The OCC, FDIC, and Federal Reserve have all issued guidance reinforcing that fair lending obligations, model risk management requirements, and supervisory accountability remain with the institution, not the tool.

This matters practically. If an AI-assisted decision results in a fair lending complaint, the examiner will ask who reviewed the output, who approved the decision, and what controls were in place. “The model said so” is not an answer. Institutions that cannot produce a clear human accountability trail for credit decisions are exposed.

Model risk management also applies. If AI is producing outputs that feed into credit decisions, those models need documentation, validation, and ongoing monitoring. Many community banks and debt funds are using AI tools without a model risk framework in place. That gap will not go unnoticed.

How to draw the line inside your institution

Start with a simple audit. List every point in your origination and portfolio management process where AI is currently involved, formally or informally. For each one, ask three questions: Is there a human reviewing this output before it influences a decision? Is there documentation of that review? If the output is wrong, who catches it and when?
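The audit above can be captured in a simple inventory. As a minimal sketch (the touchpoint names and fields are illustrative assumptions, not a prescribed schema), each AI touchpoint gets the same three questions, and any blank answer is a gap:

```python
from dataclasses import dataclass

@dataclass
class AITouchpoint:
    process_step: str
    human_review: bool       # is a human reviewing this output before it influences a decision?
    review_documented: bool  # is there documentation of that review?
    error_owner: str         # who catches a wrong output, and when? ("" if no one is named)

    def gaps(self):
        """Return the unanswered audit questions for this touchpoint."""
        issues = []
        if not self.human_review:
            issues.append("no human review before the output influences a decision")
        if not self.review_documented:
            issues.append("review is not documented")
        if not self.error_owner:
            issues.append("no named owner for catching errors")
        return issues

# Illustrative inventory of where AI touches the process.
inventory = [
    AITouchpoint("financial spreading", True, True, "analyst, at spread review"),
    AITouchpoint("covenant monitoring alerts", True, False, ""),
]

for tp in inventory:
    for gap in tp.gaps():
        print(f"{tp.process_step}: {gap}")
```

A touchpoint with an empty gap list is one where ownership is already clear; everything else is raw material for the policy.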

Then build the policy from what you find. The goal is not to eliminate AI from the process. It is to be intentional about where it operates autonomously and where it operates as a first pass for human review. Those are very different things, and conflating them is where institutions get into trouble.

The lenders who will benefit most from AI in commercial credit are not the ones who trust it the most. They are the ones who understand it well enough to use it precisely.

Joshua Tackaberry

Founder & CEO, CapitalMrkts
