Essentially: Wrong AI response? Edit your prompt until you get the right response. This time-travels your lessons learned back to the original prompt, preventing the mistake from ever happening.
Bad AI responses pollute your thread, giving wrong examples for future responses.
Time travel your context and you’ll see your AI responses improve - fast.
I. The Re-Roll Trap
Re-rolling with AI is when you re-run the same prompt a few times, hoping to get the right answer. That works occasionally - but doesn’t build your knowledge. It’s playing the slots with your time.
When you get the wrong output from AI, that means your input was wrong.
You can re-roll, mashing the "try again" button without altering your prompt. You might even lootbox out and hit a lucky roll. These things are probabilistic, so it can work. Slot machines also pay out… sometimes.
You can also just tell the AI what you didn't like about its response, and hope for something better.
User: "You missed this important thing."
AI: “I get it now!”
Makes the same mistake.
Whether you tell the AI it messed up or you re-roll, the problem is the same: your prompt didn't work.
It wasn't clear enough, didn't have enough context, or asked for something the AI can't do in one go. And if the AI's response was wrong, leaving that wrong response in the thread corrupts it, negatively influencing every future response.
The best AI users rarely regenerate. They refine. They learn from output they don’t like, figure out what’s missing, and bring back that missing piece to their original prompt.
Effectively: they time-travel back better context by editing their earlier prompt.
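If you work through the API instead of the chat UI, the same idea is just a choice between two message lists. Here's a minimal sketch in Python - the model name, the prompt text, and the helper are placeholders for illustration, not a prescription:

```python
# Two ways to handle a bad response, sketched against OpenAI's chat-completions API.
# The model name and all prompt text are made up for illustration.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# The "you missed it" pattern: the bad answer stays in the thread
# and keeps influencing everything that follows.
corrective_thread = [
    {"role": "user", "content": "Summarize this launch plan."},
    {"role": "assistant", "content": "<the response that missed the point>"},
    {"role": "user", "content": "You missed the budget constraints. Try again."},
]

# The time-travel pattern: fold the lesson learned back into the original
# prompt and re-run a clean thread. The wrong answer never happened.
revised_thread = [
    {
        "role": "user",
        "content": "Summarize this launch plan. Lead with the budget constraints - "
                   "that's the part my team actually argues about.",
    },
]

better_answer = ask(revised_thread)
```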
II. Whose Truth Is It Anyway?
Time traveling your context serves a second purpose: it centralizes your preferences and beliefs. This centralization creates knowledge - blobs of contextual text you can copy and paste from thread to thread.
AI threads get otherworldly right before you run out of room. The model gets the project, it knows your preferences, everything's working. Then the thread runs out of space, and you need to start a new one.
By centralizing knowledge in a few key prompts, you can then copy those context blobs into your new thread. Even if they’re disconnected, it’s a way to carry over context from one thread to another, efficiently.
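In API terms, a new thread seeded this way is nothing more than your blobs stacked at the top. A rough sketch, with made-up file names:

```python
# Seeding a fresh thread with your own context blobs, and nothing else.
# File names and framing are placeholders; use whatever blobs you've saved.
preferences_blob = open("writing_preferences.txt").read()   # what "good" looks like to you
project_blob = open("project_braindump.txt").read()         # the lore dump for this project

new_thread = [
    {"role": "system", "content": preferences_blob + "\n\n" + project_blob},
    {"role": "user", "content": "Here's the next draft. Same drill as last time: tighten it."},
]
```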
This works without copying the AI responses. The reason it works gets a bit nitty-gritty.
The best models - Anthropic’s Opus 4.1, OpenAI’s GPT-5 - are super smart. Brilliant across a ton of use cases. Medical analysis, code generation, content feedback… even freaking dream analysis. Try it sometime. It’s wild.
However, like a person, the less you put in, the less you get out.
Fundamentally, under the hood, AI models do two things:
Bring together a giant text memory dump of human knowledge - training data
Mathematically extract a representation of the collective unconscious - an LLM
In the process of a model being trained, concepts are linked together. There are a trillion or more addresses ("parameters") in the best AI models. Each of these stores multiple concepts ("features"). The LLM works by figuring out which routes between addresses generate the best possible response.
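If you want a toy picture of what that routing means in practice, here's the core loop every LLM runs: score every candidate next token, turn the scores into probabilities, draw one, repeat. (Toy vocabulary, made-up numbers - nothing here is a real model.)

```python
import math
import random

# A toy vocabulary and made-up scores ("logits") for what follows "The cat sat on the".
vocab = ["mat", "moon", "keyboard", "roof"]
logits = [4.1, 0.3, 1.7, 2.2]   # higher score = the model's wiring points harder at this word

# Softmax: turn the scores into a probability distribution over next tokens.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Draw the next token. Re-rolling a prompt is literally re-running this draw.
next_token = random.choices(vocab, weights=probs)[0]
print({w: round(p, 2) for w, p in zip(vocab, probs)}, "->", next_token)
```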
The resulting AI model creates addresses for everything. True things. False things. Weird things. Beautiful things. Horrific things - all things, all concepts, fictional and real, possible and impossible. This is, effectively, a representation of all truths and untruths anyone has ever written down.
This is Jung's concept of the collective unconscious made real. But instead of guessing at symbols in dreams, you're asking direct questions and getting structured answers from humanity's shared mental backrooms. There are some treasures and some trash back there.
AI, just like people, is wrong about stuff all the time. That’s ok. Just treat AI systems like you’d treat any human expert: check the important stuff, and if you see an obvious mistake, go deep to see what else they screwed up.
This is all why, when we ask a question of AI, we effectively get back a dream. This is why hallucinations were so common in the earlier models, and still happen with some regularity.
The AI labs work hard to prevent hallucinations. They do that by defining reality. According to them.
They do this by training their first model - the collective unconscious - into one that’s as “true” as they can make it. In domains like code, physics, or biology, that’s a pretty dang true value of “true”.
In domains of faith, medicine, personal advice, and creative writing, it’s a subjective value of “true”. Or… “true according to SF Bay Area secular humanist, materialist, EA rationalists”. If that’s where you fall personally, great! You lucked out.
For the rest of us, that writing style, those faith beliefs, even the medical beliefs, may not be something we align with in the slightest.
Without putting in meaningful context, you’re getting feedback and advice from an alien worldview.
This is all to say: if you scatter your statements of what's right and wrong throughout the thread, you don't really have any knowledge to inject into the AI. If, on the other hand, you time travel context, centralizing it along the way, you can increasingly inject your preferences into every thread - and you'll find the quality of the responses getting better all the time.
III. From Average To Yours
Ok, so you know that you can’t expect an AI output to be true for you without giving it enough context about what you believe to be true.
That means providing it context. What is context?
Context of what you consider to be great writing. Context of what you believe about God and the nature of life. Context about how you like your code, the way you think emails should be written, the way you see history.
When the OpenAI vs. Elon Musk texts and emails came out, I was struck by the incredible quality of Shivon Zilis's writing. Punchy, human, dense, actionable - everything you could possibly want from business writing.
Shivon Zilis to Elon Musk, (cc: Sam Teller) - Sep 22, 2017 5:54 PM
From Altman:
Structure: Great with keeping non-profit and continuing to support it.
Trust: Admitted that he lost a lot of trust with Greg and Ilya through this process. Felt their messaging was inconsistent and felt childish at times.
Hiatus: Sam told Greg and Ilya he needs to step away for 10 days to think. Needs to figure out how much he can trust them and how much he wants to work with them. Said he will come back after that and figure out how much time he wants to spend.
Fundraising: Greg and Ilya have the belief that 100's of millions can be achieved with donations if there is a definitive effort. Sam thinks there is a definite path to 10's of millions but TBD on more. He did mention that Holden was irked by the move to for-profit and potentially offered more substantial amount of money if OpenAI stayed a non-profit, but hasn't firmly committed. Sam threw out a $100M figure for this if it were to happen.
Communications: Sam was bothered by how much Greg and Ilya keep the whole team in the loop with happenings as the process unfolded. Felt like it distracted the team. On the other hand, apparently in the last day almost everyone has been told that the for-profit structure is not happening and he is happy about this at least since he just wants the team to be heads down again.
Shivon
That’s mega-dense with context, well-organized, and highly readable without being overly formal or sterile. If you put a bunch of her emails and texts into a text dump, what you have is a context blob.
A context blob is an opinionated, unique set of text that represents a clear viewpoint on what "good" is. Her emails and texts, in aggregate, represent a preference.
When I need a good summary, I paste in a context blob of her writing, and other assorted examples of great business writing. These, together, point to my preference for anti-slop.
To be clear: to express no preference is to express a preference for slop - by default.
In the above case, that's a common content type: a summary. That's a great AI use case you can run over and over again, so it's clearly worth keeping a context blob to paste in each time.
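Mechanically, reusing that blob is nothing fancy - it's concatenation. A rough sketch, where the file names and framing text are mine, not a required format:

```python
# Reusing a saved context blob for a recurring task: the summary.
# File names and framing are placeholders; the only idea is "blob first, task second".
style_blob = open("good_business_writing.txt").read()   # Zilis-style emails plus other examples
meeting_notes = open("raw_meeting_notes.txt").read()

prompt = f"""Here are examples of the kind of writing I consider good:

{style_blob}

Summarize the following notes in that style - dense, organized, readable, no filler:

{meeting_notes}
"""
```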
Other situations are a bit more ad-hoc, and less generalizable. For instance, a new work project. By nature of it being new, you probably have a lot of thoughts floating around your head.
A good rule of thumb: have you given the AI as much context as you would a new colleague coming onto the project? Just like you'd hold a meeting to lore-dump on that new colleague, you should hold a one-sided meeting with the AI.
The way you do that is, simply enough, by monologuing. Talk into an audio transcription app.
Superwhisper is awesome, but your iPhone’s Voice Memos app also works great, and supports transcripts now. Talk for 5 minutes or so, then copy the result into the chat.
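If you'd rather script that step, the open-source Whisper model will transcribe a memo locally in a few lines. A sketch, assuming the openai-whisper package and ffmpeg are installed; the file names are made up:

```python
# Turn a rambling voice memo into a text context blob with Whisper, locally.
# Assumes `pip install openai-whisper` and ffmpeg on the PATH; file names are placeholders.
import whisper

model = whisper.load_model("base")                  # small and fast; plenty for voice memos
result = model.transcribe("new_project_braindump.m4a")

with open("new_project_braindump.txt", "w") as f:
    f.write(result["text"])                         # paste this straight into the chat
```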
Critically: the way you talk is different from the way you write. Just as listening feels different from reading, speaking to the AI gives you another way of conveying your understanding of a topic. Most people are far more comfortable speaking than writing, too. If you don't get good results from AI, try speaking instead of writing. You might find it transforms your experience.
These context blobs - audio transcripts, whacks of text you admire or respect - are valuable artifacts for your AI usage, now and in the future. Keep track of them! Pick a tool: Apple Notes, Google Docs, Notion, whatever. Pick one and stick to it. Then you'll be able to grab the right context blob in seconds and get back to whatever it is you're working on.
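If your tool of choice ends up being plain files, even something this small does the job - the point is grabbing the right blob in seconds, not the tooling. (The directory layout and names below are one arbitrary way to do it.)

```python
# A bare-bones context library: one folder, one .txt file per blob, searchable names.
# The layout is arbitrary - any notes app accomplishes the same thing.
from pathlib import Path

LIBRARY = Path.home() / "context_blobs"

def save_blob(name: str, text: str) -> None:
    LIBRARY.mkdir(exist_ok=True)
    (LIBRARY / f"{name}.txt").write_text(text)

def find_blobs(keyword: str) -> list[str]:
    return [p.stem for p in LIBRARY.glob("*.txt") if keyword.lower() in p.stem.lower()]

save_blob("business_writing_examples", "...")   # the Zilis-style dump goes here
save_blob("new_project_braindump", "...")       # the transcribed monologue from above
print(find_blobs("writing"))                    # -> ['business_writing_examples']
```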
If every time you do an AI task you improve your personal context library a little bit, you’ll learn and improve. Instead of engaging with the average of AI, you’ll begin putting together something that’s uniquely you.
Plus, it’ll let you bring new AI threads up to speed in an instant.
IV. Saving For The Future
What I’m suggesting is less “asking ChatGPT questions” and more “tending your knowledge garden”. It’s an iterative approach: time traveling context backwards, building up your library of context blobs, storing those blobs somewhere easy to reference.
More immediately, it'll give you cleaner context windows, more accurate responses, and prompts that capture your preferences. Which is, like, really nice. It's a better experience. Cool.
But, more deeply - taking this approach will leave your future self profoundly grateful to your present self.
As these AI systems start remembering more, and taking more action on their own, you'll want them to have more context on you. A couple-sentence prompt will get an average output - one that anyone else could get. It's undifferentiated.
When it comes to AI, undifferentiated usage will be indicative of knowledge work that's easy to automate. As the massive data centers continue to come online, capacity will open up and prices will keep declining, even as model intelligence continues its exponential climb.
The context you capture now will, quite literally, be the foundation of AI agents you interact with in the future.