A doctor friend of mine told me how he's been playing with ChatGPT and Lovable to make apps. He has fun, but often gets frustrated because sometimes "the AI is just unable to fix it and it keeps saying I have". That turned into a fun little lesson where I explained some basics to him through a vibe-coded Replit app. Later, I used it again to illustrate those basics and my workflow with tools like Cursor, Claude Code and Codex to a few other engineers over a Discord call. Here's a summarised version of that call.
Disclaimer: there's always room for improvement, but these are just some strats that have been giving me a relatively high success rate with agentic coding.
For the most part, AI is often able to "one shot" features and fixes provided the context is clean and good. I put "one shot" in quotes because, for the sake of argument, I mean it only for the part where the AI actually writes the code. Additionally (surprise to no one), AI is good at following patterns. This means that if your context window is getting polluted with bad code, it will produce bad code. The inverse is equally true. So if you lay down some patterns, guide it with those and keep it targeted, you can actually achieve that productivity multiplier the gurus try to sell.
- Simply put, your conversation with the LLM flows and previous chats are appended. If this context is filled with 'bad messages' like "build $1M SAAS don't make mistakes", your future messages of "didn't work please fix" are not going to cut it. A simple fix for this is to just ask the AI to help break down what it has understood (or read the thinking). There's a high chance you'll also end up with a more solid plan if you do this back and forth. Claude Code v2.0 also has an `AskUserQuestionTool`, which I've had great success with.
- Keep your chat sessions targeted. I often start my sessions with just a conversation and ask for a high-level overview of how a certain part of the app/code is working. This is particularly useful for establishing the pattern to follow once the coding begins. If there are existing services, folder structures or even example methods I can attach, I do that myself to save on token and context consumption. Side note: if you need to install a package, you're better off doing that yourself and just letting the AI know that the package is installed.
- Like I said, AI is good at following patterns. So, if you need to implement a library or a package you're unsure about, you can save time and effort by finding an existing implementation of it online and using that to build an understanding. For instance, I once had to implement a headless note editor with custom styling, and Codex was able to "one shot" it because I built sufficient understanding and context before getting to the code part. This was done through conversations and code examples I found online that were "good enough".
- Don't be shy about starting a new conversation if your LLM is getting stuck on something over and over. Often, a new perspective does you good: the LLM won't get stuck in tool calls, and you'll have a cleaner context window. Claude is notorious for generating `.md` files for everything, and they're often quite useless, so I almost always just skip those generated `.md` files altogether.
- Do not be blind to the code generation. It goes without saying, but different models have different guardrails. For example, Theo did a test with Codex (on high) and Gemini 3 Pro where he asked both to create a calculator tool. Codex implemented one with basic arithmetic operations, whereas Gemini 3 Pro implemented a general-purpose tool call that is able to execute arbitrary functions. The difference is life and death: one is a calculator, and the other is a nuclear bomb that can act as a calculator.
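For the "targeted sessions" point, an opening message might look something like the following. The repo, file name and package are made up for illustration; the shape is what matters: ask for an overview first, attach the pattern yourself, and mention packages you've already installed.

```text
Before we write any code: give me a high-level overview of how the
checkout flow works in this repo. Which services and folders are
involved?

Here's an existing service to use as the pattern to follow:
[paste src/services/payments.ts]

I've already installed zod, so assume it's available. Don't change
anything yet.
```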
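To make the first point concrete, here's a minimal Python sketch of why 'bad messages' linger. The `build_request` helper and the message shape are illustrative assumptions, but real chat APIs work the same way: the entire message list is resent on every turn, so a vague early instruction stays in front of the model until you start a new conversation.

```python
# Minimal sketch of a chat session: on every turn, the FULL history
# is resent to the model, so early low-signal messages never go away.
# `build_request` is an illustrative helper, not a real API call.

def build_request(history, user_message):
    """Return the message list the model would see on this turn."""
    return history + [{"role": "user", "content": user_message}]

history = []
history = build_request(history, "build $1M SAAS don't make mistakes")
history = build_request(history, "didn't work please fix")

# The vague first message is still in the context the model reads:
print([m["content"] for m in history])
# Starting a fresh conversation is simply resetting history to [].
```

This is also why starting a new conversation works: it's the only way to actually drop the polluted turns rather than pile more messages on top of them.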
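The calculator example maps to a difference you can see in code. Below is an illustrative Python sketch, not the actual output from Theo's test: one tool can only evaluate basic arithmetic, while the other is a generic executor that merely happens to do math.

```python
import ast
import operator

# The "calculator" end of the spectrum: parse the expression and only
# accept numeric literals and basic arithmetic; everything else raises.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def safe_calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        raise ValueError("only basic arithmetic is allowed")
    return ev(ast.parse(expr, mode="eval").body)

# The "nuclear bomb" end: a general-purpose executor that happens to
# do math. eval() will run ANY expression the model hands it, e.g.
# "__import__('os').system(...)".
def generic_tool(expr: str):
    return eval(expr)

print(safe_calc("2 + 3 * 4"))  # 14
```

Both answer `2 + 3 * 4` the same way; the difference only shows up when the model passes in something that isn't arithmetic, which is exactly what you won't notice if you're blind to the generated code.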
Useful reading: Cognition Codemaps