Outsourcing the Thinking

AI-assisted
aiconsultinglessons

A client sent me a PRD to build against. The people behind it were technical and experienced - exactly the kind of people I trust to know what they need. The document itself was thorough in the way that makes implementation feel clean: well-sectioned, specific about acceptance criteria, detailed enough that I could start building without much back-and-forth. So I started building.

A month of implementation followed. Multiple rounds of refinement, each one tightening the alignment between what I’d built and what the document described. The work felt productive in that satisfying way where there’s a clear target and measurable progress toward it. I was building exactly what the PRD specified, and I could prove it.

The problem wasn’t visible because the artifact was doing its job perfectly - not the job of describing what the client needed, but the job of looking like someone had already thought that through.

The client eventually told me the PRD hadn’t really been reviewed. The requirements didn’t match what they actually needed. A month of faithful implementation against a document nobody had vetted. The people who provided it were apologetic, and I wasn’t angry - I’d implemented exactly what I was given. But a month of work had drifted quietly in the wrong direction, and none of us had noticed because the document was good enough to make checking feel unnecessary.

I suggested we set the PRD aside entirely: let me look at your current system, understand your actual workflows, and bridge from what I've already built to what you need. That worked - not because I'm especially good at reading minds, but because replacing the artifact with direct observation removed the thing that had been substituting for judgment. The gap between the PRD and reality wasn't a specification error. It was a thinking error. The document had done the thinking, and nobody had checked whether the thinking was right.

I recognized the pattern because I’d done the same thing to myself. Earlier, I’d used AI to draft a contract for a fractional CTO engagement - vesting language that looked like contract language should look, right format, right sections, plausible terms. I didn’t think about whether vesting was appropriate for this kind of engagement. The artifact looked like thinking had happened, so I treated it like thinking had happened. The consequences played out over months and contributed to parting ways with that client. I’d outsourced the thinking on my own contract terms.

I wrote about a faster version of this pattern in Finding the Edge of AI Trust - an AI debugging report that led me off a cliff in hours, with a sharp landing a day later. Here, a PRD shaped a month of work and a contract shaped an entire engagement. The pattern is identical - an AI-generated artifact that looks thorough enough to skip scrutiny - but the timescale determines how deep the consequences go. The fast version gets caught in a day. The slow version plays out for weeks or months before anyone realizes the artifact was doing the thinking instead of the humans.

Speed makes the fall dramatic. Slowness makes it invisible. The Edge post’s lesson was about momentum carrying me past the point of error before I could stop. This is about never feeling the fall at all - just a gradual drift, comfortable and productive, until the distance between where I am and where I should be is too large to ignore.

I keep wondering how many other artifacts in my work are substituting for judgment without me noticing. The PRD fix was simple enough - stop building against the document and look at the actual thing. But the difficulty isn’t knowing that. The difficulty is remembering to do it when the document is good enough to make scrutiny feel like a waste of time.