I. Brain Damage

“I could own ten times as many assets. What limits me is the brain damage.”

We hear this from GPs constantly. Fund sizes are capped by human attention. The context needed to make a single investment decision is scattered across dozens of sources: the broker, the seller, the property manager, the tenants, the municipality, the lender. The GP’s team spends weeks pulling fragments together, normalizing them, constructing a coherent picture.

Much of that assembly is mechanical — normalizing rent rolls, cross-referencing lease terms, rebuilding trailing financials into a comparable format. But not all of it. The analyst who builds the model learns things about the deal that a summary can’t carry. Some judgment is embedded in the grunt work. The opportunity is in the mechanical majority, not the whole process.

The GP bundles deal sourcing, underwriting, asset management, disposition, and LP reporting into a single package. The LP pays 2 and 20 for the whole bundle because they cannot see which component generated alpha. Did the fund outperform because of superior sourcing? Better underwriting? Skilled asset management? Fortunate timing? The LP cannot disaggregate the contribution. So the GP captures rent on everything.

The LP also pays for deal flow, relationships, and operational capacity they don’t have. These are real. But the information asymmetry is what prevents the LP from knowing how much of the fee is justified by those things and how much is captured as rent. The GP controls the quarterly reporting narrative. The LP has roughly two hours per week per relationship. The information asymmetry is what makes the bundle uncontestable — and the cognitive throughput constraint is what keeps the information asymmetry in place.


II. The Securitization Parallel

This has happened before.

Before securitization, banks held every mortgage function on one balance sheet. They originated, funded, serviced, and custodied loans. The depositor had no visibility into any of it. The bank captured rent on the entire bundle because nobody could see which function generated value and which destroyed it.

Securitization began as a standardization exercise. The first contribution was not a secondary market — it was a conforming loan standard: a shared specification for how a mortgage should be documented, evaluated, and compared. That standard made each loan legible to parties who had never seen the borrower or the property. Legibility enabled pooling. Pooling enabled tranching. Tranching enabled price discovery. And price discovery enabled a secondary market that expanded mortgage origination by an order of magnitude.

Securitization 1.0 standardized the cash flows and unbundled the bank. Securitization 2.0 standardizes the decision-making and unbundles the capital allocator.

Securitization 1.0 required the assets to conform. Real estate equity decisions are heterogeneous by nature. No two deals look the same. AI resolves this by standardizing the processing, not the assets. It can ingest a rent roll from Baltimore, a lease abstract from Houston, and an offering memorandum from Miami — and produce standardized output in each case. The conforming standard emerges at the information layer, not the asset layer.
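The claim that the processing, not the assets, can be standardized is easy to sketch. Everything below is invented for illustration: the field names, the two source formats, and the mapping rules are hypothetical, and a real system would infer such mappings from the documents themselves rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class StandardUnit:
    """One tenant, expressed in a shared schema regardless of source."""
    tenant: str
    monthly_rent: float  # USD per month
    lease_end: str       # ISO date
    sq_ft: int

def normalize_baltimore_rent_roll(row: dict) -> StandardUnit:
    # Hypothetical source quotes annual rent; convert to monthly.
    return StandardUnit(
        tenant=row["Tenant Name"],
        monthly_rent=row["Annual Rent"] / 12,
        lease_end=row["Expiration"],
        sq_ft=row["SF"],
    )

def normalize_houston_lease_abstract(row: dict) -> StandardUnit:
    # Hypothetical source quotes rent per square foot per year.
    return StandardUnit(
        tenant=row["lessee"],
        monthly_rent=row["rent_psf_yr"] * row["area_sf"] / 12,
        lease_end=row["term_end"],
        sq_ft=row["area_sf"],
    )

# Two differently shaped inputs, one comparable output.
a = normalize_baltimore_rent_roll(
    {"Tenant Name": "Acme Corp", "Annual Rent": 120_000,
     "Expiration": "2026-03-31", "SF": 4_000})
b = normalize_houston_lease_abstract(
    {"lessee": "Bolt LLC", "rent_psf_yr": 30.0,
     "area_sf": 4_000, "term_end": "2026-06-30"})
print(a.monthly_rent, b.monthly_rent)  # both 10000.0
```

The comparability of the two output records, not any change to the underlying leases, is where the conforming standard lives.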

In 2017, I was part of the Morgan Stanley team that structured the first non-performing mortgage securitization in Europe since the crisis. In every prior deal, the servicer was a line item — interchangeable, commoditized. But non-performing loans require workout judgment. The sponsor had to present their recovery methodology directly to bond investors and convince the rating agency that their servicing expertise justified the structure. The servicer was the deal. The difference between a commodity role and an essential one was whether the people providing capital could see what you were doing.

But securitization 1.0 also demonstrated a failure mode — when origination separated from ownership, quality collapsed. Rating agencies — paid by the originators they were supposed to judge — became the weak link. The information intermediary served the agent rather than the truth, and the system collapsed. The incentive structure of whoever builds the information layer determines whether this pattern repeats.


III. What AI Actually Changes

Prior software created systems of record without transferring information control. The GP still decided what to enter, what to report, how to frame it. The principal got a dashboard built on agent-curated data.

AI is architecturally different. It processes the raw artifact — the rent roll, the lease, the broker’s email, the maintenance log — without requiring the agent to structure the input. The agent is no longer the bottleneck between raw reality and the system of record. This is most true today for structured documents: rent rolls, leases, financials, offering memos. It gets weaker for unstructured relationship information — calls, negotiations, soft reads on counterparties — where the agent’s interpretation still dominates.

A concrete example at the task level. A tenant representing 12 percent of NOI calls about a broken HVAC in July. The lease expires in eight months. A co-tenancy clause would trigger a 5 percent rent haircut if the tenant leaves. The $24,000 repair exceeds the approval threshold. The property manager is paid to close tickets. The context lives across scattered systems, emails, and a filing cabinet — all agent-controlled.

An AI layer pulls the lease terms, reconciles the maintenance notes, and links prior repairs to renewal outcomes. The PM's decision becomes auditable: you can see whether the repair was made, whether the tenant renewed, and whether the PM's judgment was any good. Compensation can shift from time spent toward outcomes achieved, and the PM's risk exposure moves from none to partial. The information the principal needed to write a better contract now exists in a system the principal can access.
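The scenario above reduces to back-of-envelope arithmetic. The 12 percent NOI share, 5 percent co-tenancy haircut, and $24,000 repair come from the example; the property's total NOI and the renewal-probability shift are assumptions invented here for the sketch.

```python
noi = 2_000_000            # assumed annual NOI (illustrative)
tenant_share = 0.12        # tenant contributes 12% of NOI
cotenancy_haircut = 0.05   # 5% rent cut on remaining income if tenant leaves
repair_cost = 24_000

loss_if_tenant_leaves = (
    noi * tenant_share                              # tenant's contribution gone
    + noi * (1 - tenant_share) * cotenancy_haircut  # haircut on the rest
)

# Assume a prompt repair raises renewal probability from 60% to 80%.
delta_renewal_prob = 0.80 - 0.60
expected_benefit = delta_renewal_prob * loss_if_tenant_leaves

print(round(loss_if_tenant_leaves))  # 328000
print(round(expected_benefit))       # 65600
```

Under these assumed numbers, the expected benefit of a prompt repair is well above the $24,000 cost, which is exactly the comparison an auditable record lets the principal check after the fact.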

That’s task-level monitoring. The GP-level version is acquisitions — standardizing how deals are evaluated so that LPs can compare the GP’s underwriting assumptions against outcomes and against peers. This is harder than PM monitoring, but it’s the same mechanism: transfer information control from the agent to the system, and the principal can write better contracts.

When this happens at scale, what becomes measurable eventually gets commoditized. Electronic trading took decades to compress dealer spreads. What remains illegible — taste, relationships, conviction under uncertainty — becomes scarcer and more valuable. Those hiding behind information assembly lose their cover.

The GP transition follows the same logic as securitization 1.0. First, LPs get system-controlled visibility — the GP bundle is intact but the trust premium starts to erode. Then, observable components get benchmarked and fee pressure begins. Then, commodity functions migrate to specialized providers and the GP narrows to judgment. This sequence is structurally logical, but each step requires LPs to actively change behavior — renegotiate terms, source specialists, manage multiple relationships — and LPs are famously slow to move. The direction is clearer than the timeline.


IV. Where This Might Be Wrong

The 2008 lesson. When specification replaces discretion, you remove a layer of distributed accountability. In 2008, a single flawed underwriting standard propagated the same error through every conforming loan simultaneously. If the monitoring system is biased, it makes the same bad call in every building at once.

Goodhart’s law. Any monitoring system becomes the target of optimization. Agents will optimize for measured outcomes, not actual outcomes. The thesis holds only if the monitoring side iterates faster than the gaming side — plausible in high-volume workflows, uncertain in complex ones. And the monitoring infrastructure itself is not immune: if GPs are the paying customers, the system may optimize for what GPs want to see rather than what LPs need to know. This is why LP-facing transparency has to be a structural byproduct of the system’s design, not an afterthought.

The judgment question. The framework predicts that the GP’s bundled fee compresses as observable components get repriced. But how much of the current fee is trust-aggregation rent versus genuine judgment? If the answer is 80/20, the repricing is dramatic. If it is 40/60, the repricing is real but moderate.
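The 80/20 versus 40/60 contrast is simple arithmetic on the management fee. The 2 percent fee comes from the text; the assumption that competition reprices the assembly component to a quarter of its former price is invented for this sketch.

```python
def repriced_fee(mgmt_fee_pct: float, assembly_share: float,
                 competitive_discount: float) -> float:
    """Fee after the assembly component is competed down; judgment is untouched."""
    assembly = mgmt_fee_pct * assembly_share
    judgment = mgmt_fee_pct * (1 - assembly_share)
    return judgment + assembly * (1 - competitive_discount)

# 80/20 split: most of the fee was assembly rent.
print(round(repriced_fee(2.0, 0.80, 0.75), 2))  # 0.8
# 40/60 split: most of the fee was judgment.
print(round(repriced_fee(2.0, 0.40, 0.75), 2))  # 1.4
```

Under that assumed discount, an 80/20 split compresses the fee from 2.0 to roughly 0.8 percent, while a 40/60 split only takes it to about 1.4 percent: real but moderate.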

Incumbent absorption. Large GPs may build the information tooling in-house, adopting the technology while preserving the bundled structure. If so, the change happens inside firms, not between them — the GP role narrows without being disaggregated.

Regulatory inertia. SEC rules, ERISA requirements, state pension fund mandates — these prescribe specific fund structures and GP/LP relationships. Regulatory frameworks were built around the current bundle. Even if the information problem is solved, regulatory inertia could slow disaggregation for years.