
20 years in infra & renewables, and 2+ GW developed & financed
Kevin Feldman - 12 February 2026

A simple discipline that reveals whether a financial model communicates understanding or hides it.

Early in my career, I was an analyst in charge of the corporate financial model at an IPP.
One of the senior executives, old school and not an Excel user, asked me a simple question: “Can you print it so that I can review it?” He wanted it on paper, so he could sit down, go through it line by line, and understand how the business actually worked.
At the time, it felt archaic. In hindsight, it was one of the best modeling lessons I’ve had.
The “print test” has nothing to do with paper. It’s about whether a model can be understood without scrolling, clicking, or arcane knowledge of Excel formulas.
Being able to go through a model without looking at the underlying formulas/code, at least on a first read, forces discipline:
• a clear separation between inputs, calculations, and outputs
• a logical flow that does not require constant back-and-forth navigation across worksheets or within the same worksheet (what one of my seniors once called the “windshield wiper” effect)
• outputs that tell a coherent story at a glance
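To make the first point concrete, here is a hypothetical sketch of that discipline in Python rather than Excel. All names and numbers are invented for illustration: inputs live in one place, calculations read only inputs, and outputs read only calculations, with no arithmetic of their own.

```python
# Hypothetical sketch of inputs → calculations → outputs separation.
# Every name and figure below is illustrative, not from any real model.

# --- Inputs: stated once, in one place ---
INPUTS = {
    "capacity_mw": 100.0,
    "capacity_factor": 0.35,
    "ppa_price_per_mwh": 45.0,
    "opex_per_mw_year": 30_000.0,
}

# --- Calculations: read only the inputs, no hidden state ---
def calculations(i: dict) -> dict:
    mwh = i["capacity_mw"] * i["capacity_factor"] * 8760  # hours per year
    revenue = mwh * i["ppa_price_per_mwh"]
    opex = i["capacity_mw"] * i["opex_per_mw_year"]
    return {"annual_mwh": mwh, "revenue": revenue,
            "opex": opex, "ebitda": revenue - opex}

# --- Outputs: a readable summary, no further arithmetic ---
def outputs(c: dict) -> str:
    return (f"Generation: {c['annual_mwh']:,.0f} MWh\n"
            f"Revenue:    {c['revenue']:,.0f}\n"
            f"EBITDA:     {c['ebitda']:,.0f}")

print(outputs(calculations(INPUTS)))
```

A reader can review the outputs without opening the calculation logic, which is the spreadsheet equivalent of reading a printed model without pressing F2.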
If a file only works when navigated by the person who built it, it is not really a model. It is a calculator and, in practice, a black box to most readers. That invites error and clouds judgment.
Renewable energy models tend to accrete complexity over time:
• assumptions evolve as information is discovered during development
• financing logic gets layered on (e.g., tax equity, debt sculpting, reserves)
• sensitivities and scenarios multiply
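Debt sculpting, for instance, can be stated in a few lines even though it often sprawls across a model: debt service in each period is sized so that CFADS divided by debt service equals a target DSCR, and the supportable debt is the present value of that schedule. A toy sketch, with made-up cash flows and a single fixed rate:

```python
# Toy debt sculpting calculation. All figures are invented for illustration.
TARGET_DSCR = 1.30  # assumed target debt service coverage ratio
RATE = 0.06         # assumed fixed all-in debt interest rate

# Cash flow available for debt service, one value per period
cfads = [120.0, 130.0, 125.0, 140.0]

# Sculpted debt service: each period's DS is set so CFADS / DS == TARGET_DSCR
debt_service = [c / TARGET_DSCR for c in cfads]

# Supportable debt: present value of the sculpted debt service schedule
max_debt = sum(ds / (1 + RATE) ** (t + 1)
               for t, ds in enumerate(debt_service))

print(f"Supported debt: {max_debt:.1f}")
```

The point is not that models should move to Python; it is that when a mechanism this compact ends up smeared across a dozen worksheets, readability, not rigor, is what has been lost.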
Eventually, the model becomes “too important to touch” and too hard to read, understand, or even challenge. When that happens, it cannot really be trusted.
Yet these files support irreversible decisions: investment approvals, financing, and capital allocation. If economics cannot be communicated clearly, technical accuracy alone does not help much.
A financial model is not just a computation engine: it is a persuasive narrative rendered in numbers. It communicates:
• how value is created
• where risks sit
• what actually matters
The print test forces a simple question: If I had five minutes with a senior decision-maker, what would I put in front of them?
If the answer is “they need to scroll around and press F2 in Excel until it makes sense,” the model has failed in its role as a communication and storytelling tool.
Passing the print test doesn’t mean simplifying economics or reducing rigor. It means complexity is contained, not smeared across the file.
The best models I have seen, including the most sophisticated ones, are often the easiest to read. That is not a coincidence.
Even today, with large screens and powerful spreadsheets, I still ask:
• could this be printed and read without looking through formulas?
• does the structure of the model match the logic of the deal?
• would someone senior understand it without my narration?
If the answer is no, the model needs work.
In practice, many transactions still happen with imperfect models. Deal teams adapt, rebuild intuition outside the spreadsheet, and spend time explaining what the file should have communicated on its own.
But that extra friction is not harmless. It slows decisions, increases misunderstanding, and concentrates knowledge in too few people.
A model that passes the print test does not make decisions for you: it allows the right people to make them confidently.