I Don't Write Code Anymore. So What Do I Actually Do?

Engineering
Samuel Trstenský
Apr 30, 2025 8 min read

It was a year and a half ago when we started our first project with an AI-first approach. Sonnet 3.5 was the best coding model at the time (at least based on our limited experience back then), and we decided it was mature enough to delegate all the non-critical logic to. Complex logic still needed to be written manually, but on the vast majority of tasks the output was already of good enough quality to save developers a nontrivial amount of time. Some developers were skeptical, but luckily everyone at Brackets is curious by nature, and we all spent time exploring tooling, models, and approaches.

Fast forward to today: there is no project without an AI-heavy approach. Programming as we knew it is becoming more automated by the week, and the number of lines I write the old-school way is getting close to zero.

So if I don't code, what do I actually do?

New workflow in the AI era

I always enjoyed problem solving. Designing the approach, evaluating possible solutions, figuring out how to do it within the constraints of time, budget, and scope. Coding it afterwards was never my favourite part.

Two years ago, my work was 20% problem solving and 80% coding what had been brainstormed. Today it's closer to 50/50. And the ratio keeps shifting.

So what are the key things I do?

Brainstorming with client

We were developing a project for the European Association of Nuclear Medicine, and I remember that before every call with the client, my PM was reconsidering whether it was worth having me there. Somebody needed to code the solution. We didn't want to "waste" my time on calls.

I was always against it. I wanted to be on those calls, because I had insights the PM didn't. I knew how the database was structured. I knew there was a better and cheaper solution than what was being proposed. The PM couldn't know that. And why would they? It's not their job to know the schema.

This is what I consider the single best thing AI models gave us. Developers can finally redirect their problem-solving minds to actually solving client problems, rather than coding an already-solved problem (often worse than it could be, because the architect wasn't on the call).

Scope and project definition

Once the brainstorming sessions are done, I need to put the ideas and solutions into a project scope and definition. This is a crucial step. It needs to be crystal clear what was agreed on. This existed before AI and will always exist.

But in the past, remembering everything we agreed on wasn't always easy. And I hate taking notes during meetings. It pulls my focus away from the important part: the actual brainstorming.

For more than a year now, we've been using Fireflies in our meetings, which transcribes all our brainstorming sessions. Generating a project scope is then a matter of one prompt: "Take my last 3 meetings with X and create a project scope and definition." Our pre-prepared Claude skills handle the rest.
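For context, a Claude skill is essentially a folder containing a SKILL.md file: YAML frontmatter that tells the model when to use the skill, followed by instructions. A minimal, hypothetical sketch of a scope-writing skill might look like this (the content is illustrative, not our actual skill):

```markdown
---
name: project-scope
description: Turn meeting transcripts into a project scope and definition document.
---

Given one or more meeting transcripts:

1. List every decision that was explicitly agreed on, with who agreed to it.
2. Group decisions into scope items; flag anything ambiguous as an open question.
3. Produce a document with sections: Goals, In Scope, Out of Scope, Open Questions.
```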

Again, a lot of time saved. I focus only on what matters most: reviewing whether the proposition actually solves the problem, rather than writing the whole document myself. It often happens that AI puts too much weight on parts that aren't that important, while missing the things that matter most to the client. AI can't read the room in a brainstorming meeting. It doesn't pick up on the moment when the client's voice shifts, when they lean in on a topic, when they gloss over something they don't care about. That's a purely human skill, and that's why a human in the loop isn't optional. It's the whole point.

Brainstorm again

After the document is ready, we go through it with the client to make sure we're on the same page and all problems are addressed. This is an iterative process, and that's a good thing. It has to be iterative, because…

A good plan is everything

The next step is generating technical tasks from the project brief. That could happen in Jira, but I prefer markdown files directly in the project repository. It works much better for AI coding agents to have the full project scope (spec, PRD, …) directly within the codebase. Agents will understand it significantly better, which improves maintainability, and as a bonus, it becomes your documentation.
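As an illustration, a task file in this style might look something like the following. The structure is the point here; the feature, endpoints, and file paths are a made-up example:

```markdown
# Task 04 – Password reset flow

## Context
Covers spec section on authentication (see the project spec in the repo).

## Design
- POST /auth/reset-request sends a single-use, time-limited token by email
- POST /auth/reset consumes the token and updates the password hash

## Acceptance criteria
- [ ] Token expires after 30 minutes
- [ ] A used token is rejected on second use
- [ ] Tests cover both expiry and reuse
```

Because the file lives next to the code, the agent implementing the task and the agent reviewing it read the same source of truth, and the file doubles as documentation once the task is done.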

The core responsibility of the developer at this stage is to deeply review the generated tasks. Is the design correct? Is the database schema right? Are all the processes correctly covered? Do the tasks contain testing scenarios? You should never rely on the AI without proper supervision in place, otherwise you're risking a lot of trouble.

In the world of AI agents, a good plan is everything. It can be the difference between a 3-hour coding session and a 9-hour session full of tech debt.

And this isn't theoretical. We had two projects running in parallel. On the first one, I had a demo scheduled and wanted the MVP ready. There was no time to review the tasks deeply. The result? I spent the next day fixing issues for several hours. They weren't even bugs, just bad design decisions the agent made because the spec was loose. Since then, I always put the most effort into task definitions. It leads to minimal problems after implementation.

Coding

Ok, so we have the tasks. Now what? Our coding agents take over.

We've built an ecosystem of agents in Coder (if you want to know more about our setup, feel free to ping me on LinkedIn) that handle the heavy lifting. Tasks are implemented sequentially, one after another. We learned our lesson here: parallel tasks aren't worth it unless they're truly independent with zero interaction, which is rarely the case. Every task is then reviewed by a review agent, feedback is implemented, and a pull request is created.
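The loop described above (implement sequentially, review, fold feedback back in, open a PR) can be sketched in a few lines. This is a minimal illustration, not our actual Coder setup: `implement` and `review` are hypothetical stand-ins for the coding and review agents, which would call an LLM in practice.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    spec: str


def implement(task: Task) -> str:
    # Stand-in for the coding agent: would produce a real diff from the spec.
    return f"diff for {task.name}"


def review(diff: str) -> list[str]:
    # Stand-in for the review agent: returns a list of issues, empty if clean.
    return []


def run_pipeline(tasks: list[Task]) -> list[str]:
    """Implement tasks one after another; loop on each until review passes."""
    pull_requests = []
    for task in tasks:
        diff = implement(task)
        # Feed review feedback back into the coding agent until it comes back clean.
        while issues := review(diff):
            diff = implement(Task(task.name, task.spec + "\n" + "\n".join(issues)))
        pull_requests.append(f"PR: {task.name}")
    return pull_requests
```

The key design choice is the strictly sequential `for` loop: each task sees the codebase as the previous task left it, which is exactly the interaction that makes naive parallelism break down.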

Review by humans

That's it: pull requests are created, code is written, the feature is implemented. Now it's time for proper human review. We need to be sure the system is secure and everything is implemented as intended.

The good thing is, this is done at scale. I wake up in the morning and there are 20 open PRs waiting for me.

"AI writes fast, but accountability is still ours."

The truth is that while the quality of AI-generated code keeps improving, it's not flawless. Every pull request still needs human eyes. Someone who checks whether the code is secure, maintainable, and does what it's supposed to do. I have one golden rule we apply across the team: never ship something where you don't understand every single line of code. If you can't explain why it's there, it doesn't go to production. AI writes fast, but accountability is still ours.

From MVP to perfection

Once everything is implemented, we show the MVP to the client. And here's where the new workflow really compounds: because change requests are cheap now, we can iterate. In the past, reworking the codebase after the first demo could take weeks, so we tried to lock down as much as possible upfront. Now it's often more economical to ship a rough MVP fast and shape the details together with the client, round after round.

Same budget, more iterations. Or same number of iterations, faster delivery. Either way, the product that lands at the end is much closer to what the client actually wanted because they helped shape it, not just spec it.

This is where we loop back to "Brainstorm again." The cycle can repeat several times before the product is right.

So what does it mean for clients?

  1. Senior thinking, not senior typing. The same engineer who designs your system spends more time understanding your business and less time typing what's already been figured out. That mix shifted in your favour.
  2. More iterations for the same budget. Cheaper implementation means we can afford to try the second and third version of an idea before you sign off. The final product ends up closer to what you actually need, not what we agreed on in the first meeting.
  3. Faster from idea to working software. What used to take months now takes weeks. That changes what's worth attempting. Experiments that weren't economical two years ago are routine now.
  4. Proof, not promises. On a recent healthcare project, we spent three hours with the client defining the data model — constraints, edge cases, the messy parts. Claude Code wrote the implementation in 30 minutes. The thinking is where your money goes; the typing is nearly free.

A quick note before I close this off: it sounds like AI solves everything. It doesn't. We'll cover where it works and where it breaks in a separate article.

So yes, I uninstalled my code editor. The typing is nearly free now. The part that needs me is the conversation with you. That's where I'll be.

Tell us what you're thinking about.
We'll tell you what we think.