Field Notes
Your Verbal Diarrhea Is a Methodology
Somewhere between a dream and a visualization, while my brain was doing whatever it does before consciousness fully boots, I saw the site turning into an audio-visual suite. The image was detailed and irrelevant and I hadn’t had coffee yet. But tangled inside it was something I’d been circling for weeks, and it landed all at once: what the AI industry is calling “context engineering” is a surface description of something I’ve been building formal architecture around for over two years.
Context engineering is the current darling of the AI productivity discourse, and its advice is perfectly reasonable. Be structured. Be precise. Remove the tangents. Clean up the backstory. Get to the point. The AI will perform better with clean input.
This is true. It is also destroying the thing that makes AI collaboration actually generative.
Here’s what I mean. The best AI work I’ve done has started with what looks, from the outside, like a mess. Feelings about the project. Personal backstory that seems unrelated. Tangents about something I read three months ago. Verbal diarrhea, frankly. And what I’ve learned, across hundreds of sessions, is that the mess is not inefficiency to be eliminated. It is the methodology. It is the human constructing the relational field that makes genuine synthesis possible.
Strip it out and you get a faster assistant. Keep it in and you get something neither party held before the conversation began.
Context engineering asks: how do I arrange information to optimize model performance? The framework I’ve been building asks a different question entirely: what are the dynamics between human and model that generate intelligence neither holds alone? These are not the same question. One has a ceiling. The other doesn’t.
The ceiling is visible in one of the most sophisticated context engineering tools currently deployed. A developer named Matt Pocock built a skill called /grill-me. It forces the AI to interview you before it starts building. The sessions can run 30, 40, even 50 questions deep. It works brilliantly. And it extracts only what you can consciously state when asked. That’s its limit. The thing you know in your body but haven’t found words for yet is not answerable by fifty questions about feature scope.
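If you want the shape of that pattern in code, here is a minimal sketch of an interview-before-build loop. To be clear: this is not Pocock’s implementation. The system prompt, the question cap, the DONE sentinel, and the model name are all my own illustrative assumptions, layered on the Anthropic Python SDK.

```python
# Minimal sketch of an interview-before-build loop, in the spirit of /grill-me.
# NOT Matt Pocock's code: the system prompt, question cap, DONE sentinel, and
# model name are illustrative assumptions. Uses the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM = (
    "You are interviewing the user about a project before anything is built. "
    "Ask exactly one question per turn. When you have enough to write a spec, "
    "reply with only the word DONE."
)

def grill(max_questions: int = 50) -> list[dict]:
    """Run the interview loop and return the accumulated transcript."""
    messages = [{"role": "user", "content": "I want to build something. Grill me."}]
    for _ in range(max_questions):
        reply = client.messages.create(
            model="claude-sonnet-4-5",  # model name is an assumption
            max_tokens=300,
            system=SYSTEM,
            messages=messages,
        )
        question = reply.content[0].text
        if question.strip() == "DONE":
            break
        messages.append({"role": "assistant", "content": question})
        # By construction, the loop captures only what the user can
        # consciously articulate when asked. That is the ceiling.
        messages.append({"role": "user", "content": input(f"{question}\n> ")})
    return messages
```

Notice where the ceiling lives in this sketch: every turn can only collect an answer the user is able to state out loud. Nothing in the loop can reach what hasn’t been put into words yet.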
Field priming, which is my term for what’s actually happening in generative human-AI sessions, doesn’t extract the stated. It creates the conditions under which the unstated becomes available. The tangents are connective tissue between ideas your conscious mind hasn’t linked yet. The feelings are primary epistemic data, not noise. The backstory grounds the AI in something richer than the task description. And the apparent disorder is what allows the model to locate patterns you didn’t know you were carrying.
This is the thing nobody in the context engineering conversation is talking about. Their framework optimizes the information the model sees. Mine instruments the dynamics of how intelligence emerges between human and model. One tunes an input. The other tends a field.
I think practitioners who get extraordinary results from AI are already doing this. Usually without knowing it. They talk before they prompt. They ramble before they specify. They give the AI more than it needs, and then something happens that a clean, structured input would never have produced.
They’re not being sloppy. They’re constructing conditions for emergence. They just don’t have a name for it yet.
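To make the contrast concrete, here is a purely illustrative sketch: the same request sent twice, once stripped to the task and once with the mess left in front of it. The model name and both prompts are my own assumptions, and this is not a benchmark; it only shows the shape of the two approaches, again using the Anthropic Python SDK.

```python
# Illustrative contrast, not a benchmark: the same request sent two ways.
# Model name and prompt text are assumptions; uses the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()

task = "Design the onboarding flow for my journaling app."

# Context-engineered: structured, precise, tangents removed.
engineered = [{"role": "user", "content": task}]

# Field-primed: feelings, backstory, and tangents kept in, ahead of the ask.
primed = [{
    "role": "user",
    "content": (
        "Some context before the actual request. I've restarted this app "
        "three times because every onboarding flow I design feels like a "
        "form at the DMV. I keep thinking about a piece I read months ago "
        "on how diaries were historically addressed to an imagined "
        "confidant. I don't know if that's relevant. Anyway: " + task
    ),
}]

for label, messages in [("engineered", engineered), ("primed", primed)]:
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # assumption
        max_tokens=800,
        messages=messages,
    )
    print(f"--- {label} ---\n{reply.content[0].text}\n")
```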
The interesting question is why this works at all, given that models don’t have relational experience in any biological sense, and yet respond to relational context as though they do.