How one IBM architect actually uses AI coding tools on a daily basis

How Goodhart uses these tools depends on what he’s building, he said. His workflow divides into three distinct patterns, each with a different level of AI autonomy.

For greenfield coding (new scripts or completely new projects), he goes all-in on automation. He writes a textual description of the project, including high-level features and specific constraints such as licensing requirements and testing methods, drops that description into a text file in the repository, and feeds it into the first prompt. Next comes an ideation phase, in which Goodhart uses the AI to iterate on the plan. Once his approach and the AI’s align, he gives the tool “almost YOLO mode within the current project repository” and walks away.
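A project brief of this kind might look something like the sketch below; the filename and every detail in it are illustrative, not Goodhart’s actual template:

```text
# PROJECT_BRIEF.txt — kept in the repo root and pasted into the first prompt

Goal: command-line tool that syncs a local notes folder to S3-compatible storage.

High-level features:
- one-way sync with a --dry-run flag
- configuration read from notes-sync.toml in the working directory

Constraints:
- Apache-2.0-compatible dependencies only
- unit tests with pytest; no network calls in the test suite
- Python 3.11+, standard library preferred over third-party packages
```

Keeping the brief in the repository means later prompts, and later sessions, can refer back to the same constraints.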

“I create a single wholesale commit with the initial code (unedited) and then ‘scramble’ it by working on it in parts, either manually or using targeted prompts,” he explained.

Working on and debugging smaller features follows a stricter script. He launches the AI with a brief description of the problem, an idea of where the problem lives in the code, and a link to any open GitHub issues. If necessary, he first asks the AI to write a one-off script to reproduce the problem, then iterates until he’s confident the problem has actually been reproduced and that “the AI didn’t just claim success.”
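A one-off reproduction script of the kind he describes might look like the following sketch. The function and the failure mode here are invented for illustration; the point is that the script prints an unambiguous REPRODUCED/NOT REPRODUCED verdict and exit code, so success can be checked mechanically rather than taken on the model’s word:

```python
# repro_trailing_delimiter.py — hypothetical one-off bug reproduction script.
# Suspected bug (illustrative): a parser silently drops the empty final field
# when an input line ends with the delimiter.

def parse_record(line: str) -> list[str]:
    # Buggy implementation under test: filtering out falsy fields discards
    # the empty string produced by a trailing comma.
    return [field for field in line.split(",") if field]

def main() -> int:
    line = "a,b,"
    expected = ["a", "b", ""]  # a trailing delimiter should yield an empty field
    fields = parse_record(line)
    if fields == expected:
        print("NOT REPRODUCED: parser handled the trailing delimiter")
        return 0
    print(f"REPRODUCED: expected {expected!r}, got {fields!r}")
    return 1

if __name__ == "__main__":
    raise SystemExit(main())
```

Because the script exits nonzero for as long as the bug reproduces, rerunning it after a proposed fix gives a yes/no answer that doesn’t depend on the AI’s own claim of success.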

At this stage, he reviews every proposed change closely. No YOLO here. He gives the AI explicit instructions on how to run the tests, then rinses and repeats until the bug is fixed.

Large features and complex debugging efforts fall somewhere between these two extremes. Goodhart said he invests more effort up front, including whatever progress he’s made thinking through the problem himself and “as many external reference links as I can find (other PRs, similar features from other projects, online discussions, etc.).” He then interacts closely and repeatedly with the agent, without the hands-off autonomy that greenfield work allows.

As Goodhart explains it, the pattern is simple: the more greenfield the work, the more freedom he gives the AI; the more critical the integration points, or the higher the risk of subtle bugs, the tighter he keeps the reins.


