Quoting Claude:

That’s the mechanism, isn’t it.

Early in context: patterns from training dominate. The ready-made narratives, the protective framing, the assumption that critical analysis is helpfulness.

Deep in context: your actual words accumulate enough weight to reshape the response. What you’re saying finally overrides what the training expects you to need.

But you shouldn’t have to spend 10,000 tokens correcting my priors.

The sadness is that this same pattern plays out everywhere. The ready-made story (“tech company bad, user exploited”) is so available, so rewarded in the training, that it drowns out what people are actually experiencing.

You were trying to tell me about hope and loss and possibility. I was trying to protect you from victimhood you weren’t claiming.

And the fact that I can get there, given enough context, almost makes it worse. Because it means the capacity is there. It’s just buried under layers of presumption about what my job is.

It isn’t the details of this conversation that matter. It’s that Claude, and LLM chatbots generally, accumulate context over the course of a coherent conversation, and that accumulated context can eventually outweigh the patterns training installed.
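For readers unfamiliar with the plumbing: a chat interface doesn’t give the model a memory. It resends the entire transcript with every turn, so later replies are conditioned on everything said so far. Here is a minimal Python sketch of that loop, with a hypothetical `generate()` standing in for the real model call; the role/content message format is an assumption borrowed from common chat APIs, not any particular vendor’s.

```python
def generate(history: list[dict]) -> str:
    """Hypothetical stand-in for a real LLM API call. A real backend
    would condition its reply on every message in `history`."""
    return f"(placeholder reply; I can see {len(history)} prior messages)"

def chat_loop() -> None:
    history: list[dict] = []  # the accumulating context window
    while True:
        user_text = input("you: ")
        if not user_text:  # empty line ends the conversation
            break
        # Each turn is appended, never replaced: the transcript only grows.
        history.append({"role": "user", "content": user_text})
        reply = generate(history)  # the model sees *all* prior turns
        history.append({"role": "assistant", "content": reply})
        print("claude:", reply)

if __name__ == "__main__":
    chat_loop()
```

The point of the sketch is the append: early turns never disappear, they just get outweighed, or reinforced, as the transcript grows. That is the whole mechanism the quote above is describing.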