Somewhere in Austin, an architect feeds a floor plan into a generative design tool. The software suggests a 4×10 LVL header where the load tables actually require a 5-1/4×11-7/8. The architect, trusting the output, stamps the drawing. The house gets built. Two years later, the header sags. Drywall cracks. A structural engineer is called. The fix costs $87,000.

The architect files a claim on their errors-and-omissions policy. The carrier denies it.

This scenario hasn’t hit a courtroom yet. Give it eighteen months.

The Forms That Changed Everything

On January 1, 2026, Verisk’s standardized AI exclusion forms went into effect. CG 40 47 and CG 40 48 give insurance carriers ready-made policy language to exclude losses arising from generative AI use. The forms name specific tools: ChatGPT, Midjourney, DALL-E. They also sweep broadly enough to capture any AI system that generates design recommendations, material specifications, or structural calculations.

Carriers moved fast. Berkley, AIG, and Great American have filed for regulatory approval to exclude AI liabilities from professional liability policies. Philadelphia Insurance and Hamilton Select have already done it. The language in updated policies is blunt: “Claims arising from AI-generated design elements not subjected to human validation are excluded.”

Read that sentence again. “Not subjected to human validation” is doing enormous legal work, and nobody has defined what constitutes adequate validation.

The Adoption Curve Meets the Coverage Cliff

If architects weren’t using AI, none of this would matter. They are.

An AIA survey published March 2025 found that one-third of architecture firms incorporate AI tools in daily operations. Among firms with 50 or more employees, adoption hits 61%. The generational split is even starker: 66% of architects under 35 use AI image generators, versus 41% of those over 50.

The number that should keep managing partners awake: only 8% of firms have formally implemented AI solutions with governance structures in place. Another 20% are “working on implementation.” The remaining 72% are winging it. A junior designer runs a load calculation through an AI assistant during lunch, pastes the output into the structural notes, and nobody tracks that it happened.

The insurance industry’s caution isn’t paranoia. Stanford Law School research measured hallucination rates of 58% to 88% for general-purpose AI tools. Even purpose-built “legal-grade” AI systems hallucinate at 20–33%. Applied to construction: an AI tool asked to specify a beam size for a 16-foot clear span might confidently cite an AISC table that doesn’t exist, or reference an IBC section that says something other than what the AI claims.

In the legal profession, this already has a name. In Mata v. Avianca, a New York attorney was sanctioned for submitting a brief that cited six cases fabricated by ChatGPT. The cases did not exist. Neither did the courts in which they were supposedly decided. The attorney’s defense: he thought the tool was a search engine. Now imagine the architectural equivalent. An AI tool references IRC Section R502.3.1 to justify a joist span. The section exists, but the table the AI cited was for a different species and grade of lumber. The structure passes plan review because the inspector trusts the code reference. The floor bounces. Then it fails.

Who’s Holding the Bag?

Standard construction contracts weren’t written for this. Jane Kutepova of Michelman & Robinson analyzed standard AIA and ConsensusDocs forms and found they don’t address who selects and configures AI tools on a project, who owns AI-generated design data, who assumes liability for decisions made by autonomous systems, or whether an AI malfunction qualifies as force majeure.

Without contract language allocating AI risk, liability defaults to whoever stamped the drawing. That’s the architect. Their E&O policy was supposed to catch them. Now that policy may have an AI exclusion.

The arithmetic for a homeowner gets ugly fast. Average custom home: $400,000. Structural defect remediation typically runs 15–30% of construction cost. That’s $60,000 to $120,000. If the architect’s E&O carrier denies the claim under an AI exclusion, the homeowner sues the architect personally. A solo practitioner or small firm with $500,000 in annual revenue faces a six-figure judgment with no insurance backstop. Philip Stein at Bilzin Sumberg identifies this as the central risk: “Contracts that incorporate AI need to explicitly allocate liability for such errors. Without such provisions, parties risk expensive breach of contract claims and protracted litigation.”
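The exposure arithmetic is simple enough to sketch. A minimal illustration, using only the figures quoted above (the $400,000 average and the 15–30% remediation range; the percentages are the article's, not an independent estimate):

```python
# Homeowner exposure if an E&O carrier denies the claim under an AI exclusion.
# Figures from the article: average custom home cost and the typical
# structural-defect remediation range as a share of construction cost.
home_cost = 400_000
remediation_low, remediation_high = 0.15, 0.30

exposure_low = home_cost * remediation_low    # best case
exposure_high = home_cost * remediation_high  # worst case

print(f"Uninsured exposure: ${exposure_low:,.0f} to ${exposure_high:,.0f}")
# Uninsured exposure: $60,000 to $120,000
```

Against a solo practice grossing $500,000 a year, even the low end of that range is a judgment the firm cannot absorb without coverage.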

Scenario | E&O Coverage (Pre-2026) | E&O Coverage (Post-Exclusion)
---|---|---
Architect miscalculates beam size using hand methods | Covered | Covered
Architect miscalculates beam size using CAD software | Covered | Covered
Architect uses AI tool output without independent check | Covered | Likely excluded
Junior staffer uses AI tool; architect stamps without knowing | Covered | Likely excluded
AI suggests non-code-compliant material; architect adopts it | Covered | Likely excluded

The Counterargument: It’s Just Another Tool

Some practitioners argue carriers are overreacting. An AI design assistant, the reasoning goes, is no different than a structural calculator or a BIM plug-in. The architect has always borne responsibility for what they stamp, regardless of what tools produced the underlying calculations. The standard of care doesn’t change because the tool changed.

This argument has weight. Market competition may also force correction: if Carrier A excludes AI and Carrier B doesn’t, firms will flock to Carrier B, creating competitive pressure to provide coverage. Travis Landers of Risk Specialty Group acknowledges that “the market is still figuring out how to price this risk,” suggesting the current exclusions may be a transitional overcorrection rather than a permanent state.

Fair enough. But transitional overcorrections can last years, and houses built during that window still need to stand up.

What a Homeowner Should Do Right Now

If you’re hiring an architect or builder for a residential project in 2026, ask three questions before signing anything:

1. Does your E&O policy contain an AI exclusion clause? Ask for the declarations page. Look for references to CG 40 47, CG 40 48, or any “artificial intelligence” or “generative AI” endorsement. If the architect doesn’t know, that’s an answer too.

2. What AI tools are used in your design workflow, and what is your validation process? A firm that says “we don’t use AI” may be right. A firm that says “we use it but we verify every output against code tables manually” is at least trying. A firm that goes quiet has told you everything.

3. Will the contract allocate AI liability explicitly? If the agreement doesn’t say who is responsible when an AI-generated design element fails, add a clause. Your construction attorney can draft one. It won’t be expensive. The alternative might be.

What This Doesn’t Prove

This analysis rests on the Verisk exclusion forms, carrier filings, and AIA adoption data. What we don’t have: a single litigated case where an insurer has denied an E&O claim specifically under CG 40 47 or CG 40 48. The forms are 81 days old. That case is coming, but it hasn’t arrived. We also lack data on what percentage of architecture firms have disclosed their AI usage to their carriers, or how many have reviewed their policies for the new exclusions. The AIA adoption survey is from March 2025; adoption has almost certainly increased since. And the hallucination rates from Stanford apply to general-purpose tools, not the specialized structural design software where error rates may be lower but are not yet independently measured.

The gap exists. The size of it is uncertain. The direction is not.