Context Is Everything: Why AI Coders Aren’t Magic
Imagine throwing a new grad into a sprawling codebase with a decade of tech debt and a Jira ticket that reads like corporate lorem ipsum. They do not yet have any context on what the app does, how it works, or the reasoning behind it all. Would you expect the grad to be able to complete the work? Would you expect an AI to do any better? The reality is that the challenges that trip up a new grad are the same ones an AI runs into.
Human and AI Coders Face the Same Steps
The high-level steps:
- Identify and understand the change to be made
- Identify where in the code the change needs to be made
- Plan how best to make the code change such that:
  - It correctly implements the change asked for
  - It does not break existing functionality
  - It matches the existing coding style
  - Optional: it is easy to read, so that someone unfamiliar with the code can reason about what it does
- Implement the changes
- Test the changes
- If the testing fails, repeat steps above until the code can be proven correct
- Publish the changes
Let’s briefly detail some of these steps, specifically the ones that impact the code.
Step 1 - comprehending the problem. A well-stated problem contains the seed of its own solution. If the Jira issue is the only information given, then it has to contain everything needed to change the software. Ideally it answers two questions: what is the current behavior of the software, and what is the desired behavior? If we are familiar with the software, we can then determine what kind of change needs to be made: most commonly a bug fix or a new feature.
Example of a poorly written issue:
Title: RJX3001-x Noncomformance alert
Description: The current ZYG2 architecture exhibits PQR limitations that impact our FNT throughput metrics. The asynchronous DEQ model is generating latency spikes due to unoptimized JSON-PB payloads in the FGC schema.
A human coder would at least be able to ask clarifying questions. An AI would need additional tools to make sense of it, and even then it runs a greater risk of going off the rails than a human does.
Example that’s a bit clearer:
Title: Optimize the JSON serializer in the FGC module
Description: The current JSON serializer is too slow and causes latency spikes. On average a payload takes 800ms to serialize due to outdated dependencies. Update the dependencies of the JSON serializer so that it is faster.
This is a better example because it cuts down on the jargon, clearly explains the problem, and gives some definition of the expected improvement.
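A concrete number like "800ms per payload" is also something a coder, human or AI, can verify before and after the change. A rough sketch of such a check, using Python's standard `json` and `time` modules (the payload shape here is invented for illustration, not taken from the ticket):

```python
# Time a serializer on a representative payload to verify a latency
# claim like "a payload takes 800ms to serialize". The payload below
# is a made-up example; a real check would use a captured payload.
import json
import time

payload = {"records": [{"id": i, "value": str(i) * 10} for i in range(10_000)]}

start = time.perf_counter()
encoded = json.dumps(payload)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"serialized {len(encoded)} bytes in {elapsed_ms:.1f} ms")
```

Running this before the fix turns the ticket's claim into a reproducible baseline, and running it after turns "faster" into a measurable result.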
Step 2 - comprehending the code. We now need to identify what code is associated with the behavior to change. The better the structure of the code and the better its components are named the easier it is to identify where the change needs to be made. It’s even better if the problem description contains information about where the change should be made.
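When the ticket does not say where the change goes, the first move is often a plain text search for a relevant symbol. A naive sketch of that search (the paths and search term are illustrative, not from any real repo):

```python
# Step 2 in miniature: find every source line that mentions a symbol.
# A naive recursive search; real tools (grep, IDE indexes) do this
# faster, but the idea is the same.
from pathlib import Path

def find_symbol(root, symbol):
    """Return (path, line_number, line) for each line mentioning symbol."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if symbol in line:
                hits.append((path, lineno, line.strip()))
    return hits

# e.g. find_symbol("src", "serialize") to narrow down where the
# serializer lives before planning any change
```

Well-named components make this step almost trivial; poorly named ones force both humans and AIs to read far more code than the change itself requires.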
Step 3 - planning the change. Combine the understanding from steps 1 and 2 to create a plan. If anything from the previous steps is unclear, the plan is at best a guess.
Step 4 - implementing the change. The effort here should flow from the previous steps. This is where the AI coder most likely has the advantage, because it does not get fatigued or frustrated.
Step 5 - verifying the changes work. This could mean unit tests, integration tests, or at least running the code on the developer’s machine and manually observing the correct behavior.
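At its smallest, verification is a unit test that pins down the desired behavior. A sketch of what that looks like, where the function under test is a toy stand-in for whatever the ticket actually targets:

```python
# The smallest form of step 5: plain assertions that pin down the
# desired behavior. The serialize() function is a toy stand-in for
# the real code under change, not an actual serializer.

def serialize(record):
    """Toy serializer: render a dict as sorted key=value pairs."""
    return ",".join(f"{k}={v}" for k, v in sorted(record.items()))

def test_serialize():
    assert serialize({"b": 2, "a": 1}) == "a=1,b=2"
    assert serialize({}) == ""

test_serialize()  # raises AssertionError if the behavior is wrong
```

Tests like these are what turn the loop in the step list into something that can terminate: without them, "proven correct" is just a feeling, for humans and AIs alike.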
Coding doesn’t start with writing code. It starts with understanding. If a human can’t make sense of the problem or the code, there’s no reason to expect an AI will do better. Context in, value out. Otherwise, it’s just garbage in, garbage out.