GPT-5 for Coding
1. Instruction Following
GPT-5 models demonstrate significantly improved instruction following. However, this advancement comes with a caveat: the model struggles with vague or conflicting instructions.
- Key Idea: "The new GPT-5 models are significantly better at instruction following, but a side effect is that they can struggle when asked to follow vague or conflicting instructions, especially in your .cursor/rules or AGENTS.md files."
- Actionable Advice: Ensure all instructions are clear, unambiguous, and free from contradictions to prevent unintended behavior.
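For instance, a rules file that contradicts itself leaves the model with no consistent policy to follow. A hypothetical AGENTS.md excerpt and its fix:

```
# AGENTS.md (hypothetical excerpt)

Conflicting:
- Always ask before modifying any file.
- Apply fixes immediately without waiting for confirmation.

Unambiguous revision:
- Apply trivial fixes (typos, lint errors) immediately.
- Ask before any change that touches more than one file.
```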
2. Optimizing Reasoning Effort
GPT-5 inherently performs reasoning to solve problems. The effectiveness of this reasoning can be controlled to match the complexity of the task.
- Key Idea: "GPT-5 will always perform some level of reasoning as it solves problems. To get the best results, use high reasoning effort for the most complex tasks."
- Actionable Advice: For complex tasks, use a high reasoning effort.
- If the model "overthink[s] simple problems," consider being more specific in your prompt or choosing a lower reasoning level (medium or low).
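As a sketch, reasoning effort can be selected per request via the Responses API (this assumes the openai Python SDK and the model name "gpt-5"; check your SDK version for exact parameter support):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# High effort for a genuinely complex task; switch to "medium" or "low"
# if the model tends to overthink simple problems.
response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},
    input="Refactor the payment module to remove the circular import.",
)

print(response.output_text)
```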
3. Structuring Instructions with XML-like Syntax
Leveraging XML-like syntax is highly recommended for providing context and structure to instructions, especially in conjunction with tools like Cursor.
- Key Idea: "Together with Cursor, we found GPT-5 works well when using XML-like syntax to give the model more context."
- Example: Coding guidelines can be encapsulated within tags like <code_editing_rules>, with sub-categories such as <guiding_principles> and <frontend_stack_defaults>. This hierarchical structure helps the model understand and apply specific constraints or preferences (e.g., "Styling: TailwindCSS").
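A sketch of that structure (the tag names follow the example above; the rule text itself is illustrative):

```
<code_editing_rules>
  <guiding_principles>
    - Every component should be modular and reusable.
  </guiding_principles>
  <frontend_stack_defaults>
    - Styling: TailwindCSS
  </frontend_stack_defaults>
</code_editing_rules>
```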
4. Avoiding Overly Firm Language
Unlike previous models where forceful language might have been necessary, GPT-5 can over-interpret and over-apply such instructions, leading to counterproductive results.
- Key Idea: "With GPT-5, these instructions [e.g., 'Be THOROUGH,' 'Make sure you have the FULL picture'] can backfire as the model might overdo what it would naturally do."
- Example of Backfire: The model might become "overly thorough with tool calls to gather context," even when it's not efficient or necessary.
- Actionable Advice: Use less absolute or demanding language in prompts to allow the model to operate at its natural, optimized level of thoroughness.
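A hypothetical before/after illustrating the shift in tone:

```
Before (over-firm): Be THOROUGH. Make sure you have the FULL picture
before replying. Check EVERY file that could possibly be related.

After (calibrated): Gather the context you need to answer confidently,
then stop searching and respond.
```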
5. Incorporating Planning and Self-Reflection
For novel application development (zero-to-one), explicitly instructing the model to engage in planning and self-reflection before execution can significantly improve output quality.
- Key Idea: "If you’re creating zero-to-one applications, giving the model instructions to self-reflect before building can help."
- Example Framework (<self_reflection>): Rubric Creation: "First, spend time thinking of a rubric until you are confident." This rubric should have "5-7 categories" and is "critical to get right, but do not show this to the user."
- Internal Iteration: "Finally, use the rubric to internally think and iterate on the best possible solution to the prompt that is provided."
- Quality Control: The model is instructed that "if your response is not hitting the top marks across all categories in the rubric, you need to start again."
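Assembled from the quoted instructions above, such a prompt block might look like the following (the <self_reflection> wrapper and exact line breaks are a sketch):

```
<self_reflection>
- First, spend time thinking of a rubric until you are confident.
- Create a rubric with 5-7 categories. This rubric is critical to get
  right, but do not show this to the user.
- Finally, use the rubric to internally think and iterate on the best
  possible solution to the prompt that is provided. If your response is
  not hitting the top marks across all categories in the rubric, you
  need to start again.
</self_reflection>
```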
6. Controlling Agent Eagerness and Context Gathering
GPT-5's default behavior is thorough context gathering. Prompts can be used to precisely control this eagerness, including tool usage and user interaction.
- Key Idea: "GPT-5 by default tries to be thorough and comprehensive in its context gathering. Use prompting to be more prescriptive about how eager it should be, and whether it should parallelize discovery/tool calling."
- Actionable Advice: Specify a "tool budget."
- Indicate when to be more or less thorough.
- Define when to "check in with the user."
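A context-gathering policy along these lines could be expressed as follows (the tag name, budget, and wording are illustrative):

```
<context_gathering>
- Tool budget: at most 5 discovery calls; stop as soon as you can act.
- Parallelize independent searches rather than running them one by one.
- Check in with the user before any destructive or irreversible action.
</context_gathering>
```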