The Context Spectrum: Moving Beyond 'Good' and 'Bad' Prompts
We are at an interesting time in generative AI, with more people adopting ChatGPT and a formerly obscure AI assistant, Claude, also growing in popularity. With that adoption, usage ranges from basic tasks to building fully featured applications. While all of this is interesting, what I find most interesting is each person's approach to using AI. Some people write minimal prompts and expect magic to happen, while others add so much detail that they largely undercut the value of using an AI agent in the first place. In the text that follows, I'm going to lay out how I think about prompting for generative AI and attempt to describe a spectrum running from the minimal requirements for an effective prompt to a fully context-rich prompt for projects that require it.
The context spectrum
The way I think of prompting is as a spectrum based on the context provided. On the minimal side, we have a basic text prompt, the equivalent of someone using ChatGPT like Google; on the other side, people make use of a context co-creation process to build things like an application. I'm not sure how many people think of it that way, which is part of the reason I feel compelled to share this with you.
I don’t see prompts as “good” versus “bad.” Instead, I think of it as a spectrum of context. On one end, you have the quick, minimal interactions, the kind where someone opens ChatGPT or Claude, tosses in a single line, and expects a masterpiece. On the other end, there’s a deeply collaborative process, almost like building something with a creative partner, where you and the AI co-design, iterate, and refine together.
Let’s walk through what that spectrum looks like in practice.
The Basic Interaction
This is where most people begin: you open a chat window, type a question, and hit enter. No context, no background, no defined goal — just curiosity meeting an AI model. And it’s not wrong; it’s how exploration starts. But at this stage, what you get back often feels generic. The responses mirror your input: low context in, low fidelity out. It’s like asking a stranger for directions without telling them where you’re coming from.
Moving Toward Structure: Role, Task, and Format
The next step up is the Role, Task, and Format (RTF) framework. Here, you start giving the AI a bit more scaffolding: you tell it who it is, what it needs to do, and how the output should look.
For example: “You are a product manager tasked with creating a one-page feature brief in this format.”
That one sentence transforms the entire interaction. Suddenly, the AI has a role to play, a direction to move in, and a format to shape its output. It’s not full context yet, but it’s a solid middle ground: enough structure to produce something coherent and purposeful.
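If you prompt this way often, the RTF pattern can be captured in a small helper. This is a hypothetical sketch of my own, not any official API; the `build_rtf_prompt` function and its field names are purely illustrative:

```python
def build_rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Assemble a Role, Task, and Format prompt from its three parts."""
    return (
        f"You are {role}. "
        f"Your task: {task}. "
        f"Format the output as: {fmt}."
    )

# The product-manager example from above, expressed with the helper.
# The specific format description here is an assumed example.
prompt = build_rtf_prompt(
    role="a product manager",
    task="create a one-page feature brief",
    fmt="a one-page brief with problem, solution, and success metrics sections",
)
```

The point of a helper like this is less the code itself and more the discipline: it forces you to fill in all three slots before you hit enter.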
Deepening the Dialogue: Follow-up and Refinement
Once you have a foundation, the next level is about interaction: asking follow-up questions (see reverse prompting), clarifying intent, and steering the response closer to what you envisioned. This stage is where prompting shifts from command to conversation.
You begin to realize that the AI isn’t just responding, it’s collaborating. Each follow-up sharpens the edges, clarifies the assumptions, and helps the AI build a better mental model of what you’re trying to do.
Adding Documentation
At this point, you start layering in reference materials — documents, notes, examples, or prior work that the AI can draw from. This approach moves beyond isolated prompts and into something closer to knowledge transfer.
It’s like giving a new teammate access to the shared drive before expecting them to contribute meaningfully. By supplying background materials, you help the AI ground its responses in your actual context, not just the generic patterns it has learned from the wider internet.
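One lightweight way to supply that background is to prepend labeled reference material to the prompt itself. The sketch below is a hypothetical illustration (the function name, delimiter style, and sample documents are my own, not a library convention):

```python
def prompt_with_context(question: str, documents: dict[str, str]) -> str:
    """Prepend labeled reference documents to a question so the model
    grounds its answer in the supplied material, not generic patterns."""
    sections = []
    for title, body in documents.items():
        sections.append(f"--- {title} ---\n{body}")
    context = "\n\n".join(sections)
    return (
        "Use only the reference material below to answer.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

# Example: grounding a question in two short (invented) project notes
docs = {
    "Team charter": "We ship a weekly build every Friday.",
    "Release notes": "v2.1 added offline mode.",
}
prompt = prompt_with_context("When does the next build ship?", docs)
```

The labels matter as much as the content: naming each document gives the model a way to cite where its answer came from, which makes its grounding easier to check.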
Context Co-Creation
The next level is what I call context co-creation — where you and the AI build shared understanding in real time. It’s not just about giving the model documents or examples; it’s about developing a working rhythm.
You might start with a process outline, feed that into the AI, then iteratively refine and expand together. This is the space where co-creation becomes tangible, where your thinking and the model’s synthesis combine to create something that neither of you could produce alone.
A great example of this is spec-driven AI development, where the AI is provided a process document and some project documentation, and is asked to use both to walk through a process that typically involves follow-up questions and task creation. From there, the user and the agent work through the process together, and the output is an application or other complex creation that has been co-created.
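To make that loop concrete, here is a minimal sketch of the state it maintains. The class, its fields, and the example documents are my own illustration of the pattern, not a real tool:

```python
from dataclasses import dataclass, field

@dataclass
class SpecSession:
    """Tracks a spec-driven co-creation loop: the process document,
    the project docs, the clarifying questions the agent raises,
    and the tasks derived once those questions are answered."""
    process_doc: str
    project_docs: list[str]
    open_questions: list[str] = field(default_factory=list)
    tasks: list[str] = field(default_factory=list)

    def ask(self, question: str) -> None:
        # The agent raises a clarifying question before committing to tasks
        self.open_questions.append(question)

    def resolve(self, question: str, task: str) -> None:
        # An answered question is converted into a concrete task
        self.open_questions.remove(question)
        self.tasks.append(task)

# One turn of the loop: a question first, then a task once it is answered
session = SpecSession(process_doc="spec-process.md", project_docs=["README.md"])
session.ask("Which database does the project target?")
session.resolve("Which database does the project target?", "Add a Postgres schema")
```

The structure encodes the key discipline of the approach: no task exists until a clarifying question has been asked and answered, which is what keeps the result genuinely co-created rather than guessed.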
Conclusion
Hopefully, walking through how I think about prompting will deepen your approach to interacting with AI agents, or expand how you think of them. I hope to have highlighted the potential for AI to enhance precision and efficiency across a range of projects. I'm curious how our interactions will evolve over time: will there be other factors that help provide context? Can we use context co-creation for things like art or writing a book? Will we look back at this time the way we look back at dial-up internet and flip phones? Only time will tell. In the meantime, I will continue to explore, and I encourage you to do the same.
Stan Wilson Jr