Most people new to AI approach prompting as a one-shot or two-shot exercise. You ask a question, maybe clarify once, and then either accept the answer or move on. That approach works for simple tasks, but it breaks down quickly when the problem is complex, ambiguous, or genuinely creative.
The real leverage appears when prompting becomes iterative and sustained. When the number of iterations, N, gets large, prompting stops being about getting an answer and becomes a process of steering. You correct assumptions, reject partial solutions, tighten constraints, and stay with the problem until the outcome converges. Some problems resolve quickly. Others take hours, days, or even weeks; not because the model is incapable, but because meaningful problems are rarely well defined at the start.
This shift from 1-shot to N-shot prompting changes the role of the human entirely. The human is no longer just a requester of outputs, but the steward of intent over time. This requires patience, persistence, and perseverance, three qualities (which I refer to as the “Three Ps”) that help you go the distance when striving towards AI goals. In 2025, I sent roughly 14,000 prompt messages, according to my “Your year with ChatGPT” recap, putting me in the top 1% of messages sent and the top 0.1% of users. Vanity metric aside, this reflects how central N-shot prompting, and the Three Ps, have become to the way I work.

In 2023, I was one of the first to coin the term Metaprompting (or Meta-prompting) and wrote a short ebook about it over a weekend; it is available on Amazon Kindle.
Context Is the Hidden Constraint
As iteration increases, context becomes fragile. Early decisions fade, intent drifts, and the model may begin optimizing for something adjacent to, but not aligned with, the original goal. Long-horizon prompting requires active context management. That includes periodically restating objectives, summarizing progress, resetting conversations, and re-grounding the model in first principles. Models are getting better at this on their own, including keeping a condensed summary of context in a small, persistent part of their memory to help stay on track. Agentic models working within a code repository can traverse file directories and read README.md files to stay at a high level before diving into the source code of a given file.
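The context-management pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation: older turns are folded into a running summary, and every request restates the objective and that summary so the model stays grounded. The `summarize` function is a stub; in practice the model itself would produce the summary.

```python
BUDGET = 6  # max turns kept verbatim (illustrative value)

def summarize(turns):
    # Stub: a real system would ask the model to condense these turns.
    return "Summary of earlier discussion: " + "; ".join(t["content"][:40] for t in turns)

def compact(history, summary):
    """Fold overflow turns into the running summary, keeping recent turns verbatim."""
    if len(history) <= BUDGET:
        return history, summary
    overflow, recent = history[:-BUDGET], history[-BUDGET:]
    prior = [{"role": "system", "content": summary}] if summary else []
    return recent, summarize(prior + overflow)

def build_prompt(objective, history, summary):
    """Re-ground the model: restate the objective and condensed context on every call."""
    msgs = [{"role": "system", "content": f"Objective: {objective}"}]
    if summary:
        msgs.append({"role": "system", "content": summary})
    return msgs + history
```

The key design choice is that the objective is repeated on every call rather than stated once, which counteracts the intent drift described above.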
At higher N, success depends less on clever phrasing and more on discipline. Prompting becomes closer to systems design than conversation. You become the engineer and manager of the outcome, and the AI is your friendly developer and worker.
Agentic Vibe Coding Inside the IDE
This dynamic is now amplified by agentic tools integrated directly into development environments. Tools like Cursor, VS Code with integrated models, Claude Code, Antigravity, and similar systems allow developers to work with AI agents directly inside the IDE. These agents can operate in different modes, from deliberate planning to rapid execution. They can be configured to ask for approval at each step, or to act autonomously by staging changes, committing code, and even pushing updates without explicit permission.
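The approval-versus-autonomy spectrum mentioned above can be modeled as a simple gate around each agent action. This is a hedged sketch, not the configuration API of any of the tools named; `Mode`, `run_step`, and `approve` are all illustrative names.

```python
from enum import Enum

class Mode(Enum):
    ASK = "ask"    # pause for human approval before each action
    AUTO = "auto"  # act autonomously (stage, commit, push without asking)

def run_step(action, mode, approve=lambda a: True):
    """Gate a single agent action behind the configured autonomy mode.
    `action` is a zero-arg callable; `approve` is the human-in-the-loop hook."""
    if mode is Mode.ASK and not approve(action):
        return ("skipped", None)
    return ("done", action())
```

In ASK mode every step routes through the human hook; in AUTO mode the same pipeline runs unattended, which is exactly why instruction quality matters so much more in that configuration.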
This changes the nature of coding. The bottleneck is no longer typing or syntax. It is instruction quality, context framing, and narrative coherence.
In practice, much of the real work now happens before any code is written. Developers increasingly define intent in structured markdown files. These documents describe goals, constraints, architecture, and reasoning paths. The model uses this narrative as a reference point while carrying out tasks. When done well, this dramatically reduces back-and-forth and keeps agentic behavior aligned over longer runs.
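A structured intent file of the kind described above might look like the following. The file contents and section names here are invented for illustration; the point is that the narrative is written once and prepended to every agent turn rather than re-explained conversationally.

```python
# Hypothetical intent document (sections and wording are illustrative, not a standard).
INTENT_MD = """\
# Goal
Add rate limiting to the public API.

# Constraints
- No new external dependencies.
- Must not change existing endpoint signatures.

# Architecture notes
Middleware lives in api/middleware/; follow the existing logging middleware as a template.
"""

def with_intent(task: str) -> list[dict]:
    """Prepend the intent narrative so every agent turn is grounded in it."""
    return [
        {"role": "system", "content": INTENT_MD},
        {"role": "user", "content": task},
    ]
```

Because the goals, constraints, and architecture travel with every request, the agent has a stable reference point over long runs, which is what keeps the back-and-forth short.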
Amplification, Not Automation
Agentic tools make it easier to move from idea to working prototype. They also make it easier to generate bad code, unnecessary abstractions, and bloated systems. The tools amplify intent, not judgment. Poor instructions scale just as effectively as good ones.
This is where N-shot thinking matters most. Vibe coding is not about letting agents run unchecked. It is about sustained steering. You allow autonomy where it helps, intervene where it matters, and continuously realign the system with the outcome you actually want.
The Real Bottleneck Has Moved
The distance between imagination and execution has collapsed. That part feels almost magical. But the fundamental constraint has not disappeared. It has shifted.
What now matters most is patience, clarity of intent, and the willingness to stay with a problem long enough to shape it properly. Prompting is no longer a transaction. It is a process. And when N gets large, the quality of that process determines everything that follows.
One concrete expression of this shift is ForexGPT Pro Terminal, a trading platform I built independently that includes an MCP-enabled chat interface.

Rather than treating chat as a passive UI, the integrated chat in ForexGPT Pro Terminal supports over 30 distinct actions carried out directly through prompts, including voice-driven interaction layered on top of structured context.
- For instance, in ForexGPT Pro Terminal, you can say “Buy 10,000 euro against the USD at market, and add a 30 pip stop and 40 pip limit” with your voice or prompt and it will send this order to the integrated brokerage demo account to execute.
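To make the example concrete, here is a toy parser that turns a spoken command like the one above into a structured order. This is not ForexGPT Pro Terminal's actual implementation; a production system would have the model emit a validated tool call rather than regex-parse free text, but the sketch shows the shape of the translation from utterance to order.

```python
import re

def parse_order(text: str) -> dict:
    """Toy parser for commands like
    'Buy 10,000 euro against the USD at market, and add a 30 pip stop and 40 pip limit'."""
    side = re.search(r"\b(buy|sell)\b", text, re.I)
    qty = re.search(r"([\d,]+)\s*(euro|eur|usd|gbp)", text, re.I)
    stop = re.search(r"(\d+)\s*pip stop", text, re.I)
    limit = re.search(r"(\d+)\s*pip limit", text, re.I)
    if not (side and qty):
        raise ValueError("could not parse order")
    return {
        "side": side.group(1).lower(),
        "units": int(qty.group(1).replace(",", "")),
        "instrument": "EUR/USD",  # hardcoded for this sketch
        "stop_pips": int(stop.group(1)) if stop else None,
        "limit_pips": int(limit.group(1)) if limit else None,
    }
```

The resulting dict is what a broker API or demo account would actually receive, which is why constraining the model to structured outputs matters for execution safety.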
Voice is not just a convenience feature here. It changes how intent is expressed and how quickly decisions move from thought to execution. As models become better at maintaining context and tools become more tightly integrated, speaking instructions and having them reliably translated into actions will become increasingly common. The interface fades, and what remains is intent, context, and execution flowing through a single conversational surface.
In the future, it will be commonplace to speak to your chat interface to trigger the same kinds of actions you would normally perform by clicking a link or button in a website or application, bypassing the need to log in and navigate complex menus. This will accelerate the economy at the micro scale, leading to macroeconomic impact over time.
About ForexGPT Pro Terminal: The Pro Terminal is an AI-native trading and market analysis platform designed around a Model Context Protocol (MCP) architecture. Instead of relying on free-form text generation, the system exposes a set of deterministic, validated tools to a context-aware language model. This allows market analysis, data retrieval, and demo trade management to be carried out in a structured and controlled way using natural language or voice commands. By constraining what the model can execute, the platform reduces hallucination risk and enforces clear execution boundaries. Real-time market data, professional-grade charting, news aggregation, and agentic workflows are integrated into a single workspace, with explicit state management to support multi-step reasoning, long-horizon tasks, and human oversight throughout.
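The "deterministic, validated tools" idea can be sketched generically. The tool name, schema format, and canned quote below are illustrative, not ForexGPT's actual API: the model can only invoke registered tools, and every argument is type-checked before anything executes, which is how hallucination risk is boxed in.

```python
# Registry of tools the model is allowed to call.
TOOLS = {}

def tool(name, schema):
    """Register a callable with a simple {arg: type} schema."""
    def wrap(fn):
        TOOLS[name] = (schema, fn)
        return fn
    return wrap

@tool("get_quote", {"pair": str})
def get_quote(pair):
    return {"pair": pair, "bid": 1.0841, "ask": 1.0843}  # canned demo data

def dispatch(name, args):
    """Reject unknown tools and badly-typed arguments instead of guessing."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    schema, fn = TOOLS[name]
    for arg, typ in schema.items():
        if not isinstance(args.get(arg), typ):
            raise TypeError(f"{name}: '{arg}' must be {typ.__name__}")
    return fn(**args)
```

The enforcement point is `dispatch`: the model proposes, but only validated calls against the registry ever run, giving the explicit execution boundaries the paragraph above describes.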

