AI-Considerate Architecture

When working with AI coding tools, it’s best to keep naming explicit and structure small, tight, and repetitive. To paraphrase Rob Pike, a little copying is better than a little dependency. Here, it might be more accurate to say that a lot of duplication is better than any dependency, especially if it’s deliberate and well-scoped.

Core Definition

AI-considerate architecture is a structural and naming approach that:

  • Keeps context small and meaningful
  • Aligns naming with semantic intent
  • Respects the way AI tools hold and process code
  • Makes code generation, reading, and refactoring more predictable and safe

Augmented Intelligence

I’ve been writing code faster with AI than I ever could before. It’s vibecoding with less vibe and more intent. Vibecoding is when you let the AI express in code what you mean in natural language and just see where it lands. What I’ve found is that there’s an optimal way to think about and structure projects, one that yields better results than simply firing requests at my dear AI assistant. I’ve had to step back and build an environment where AI can understand and thrive in its own way: an architecture designed for code-generation agents.

Monoliths are out

Monoliths have had a resurgence in the past few years as a protest against over-modularization with microservices, but if we’re working with AI coding tools, then monoliths are out. I want to emphasize that it isn’t monoliths specifically but unstructured, bloated monoliths that break AI cognition. Monoliths can rarely maintain clean separation in a world of constantly changing business requirements, varying developer quality, and the entropy that accrues over time.

It’s difficult for Cursor, ChatGPT, or Claude Code to reason about a large codebase where many different pieces need to be tweaked to get a single result.

That’s not to say you can’t operate on a larger codebase, and it absolutely doesn’t mean you need to break everything down into microservices, but modularization and boundaries should be top of mind when designing software you intend to work on with AI coding tools. LLMs need tightly scoped code to work with. To be clear beyond doubt: the real axis here is structure, not deployment topology; it isn’t microservice vs. monolith, it’s whether the shape of the system helps or hinders reasoning.
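
As a sketch of what a tight boundary can look like (the Conversations feature and every name in it are hypothetical), a feature exposes one small public surface and keeps everything else internal:

import Foundation

// The entire public surface of a hypothetical Conversations feature.
public struct Conversation {
    public let id: UUID
    public var title: String
}

public protocol ConversationStoring {
    func load(_ id: UUID) async throws -> Conversation
    func save(_ conversation: Conversation) async throws
}

// Everything behind the protocol stays internal to the feature, so an agent
// editing the internals never needs the rest of the codebase in its context.

Callers outside the feature see only this surface, which keeps the blast radius of any single prompt inside one directory.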

Names matter more now

Generic terms are disadvantageous, but then again they always have been. Working with AI coding tools, I’ve found that even file names should reflect the scope or part of the project being worked on. I’ve been prefixing as much as I can to provide even more context to the LLMs. Where I would’ve previously gone with Store.swift, I’ll now name more clearly and go with ThingStore.swift. Store.swift doesn’t mean anything to a model unless it’s already scoped inside something called Conversations or Threads. Make the model’s job easier:

ConversationStore.swift
ThreadReducer.swift
MemoryClient.swift

An AI-Considerate naming convention
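
To make the convention concrete, here’s a minimal sketch (ThreadReducer and its types are hypothetical). The name alone tells the model the scope (Thread) and the role (Reducer), even when the file is read in isolation:

// ThreadReducer.swift: the prefix names the scope, the suffix names the role.
struct ThreadState {
    var title: String
    var isArchived = false
}

enum ThreadAction {
    case rename(String)
    case archive
}

func threadReducer(state: inout ThreadState, action: ThreadAction) {
    switch action {
    case .rename(let newTitle):
        state.title = newTitle
    case .archive:
        state.isArchived = true
    }
}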

Directories are boundaries

Think in fractals. The best way I’ve found to organize my AI-assisted projects is a feature-oriented design: I encapsulate features as nearly stand-alone units within a project. That leads to duplication, but with a disciplined eye that isn’t a terribly bad thing, though I won’t claim it’s the best thing either. As far as increased maintenance goes, it’s a tradeoff: the AI coding tools do most of the heavy lifting in code production and maintenance, but it demands added attention to detail and deeper scrutiny during code reviews. Duplication isn’t evil. It’s a tax for clarity. If AI has to choose between DRY and DUMB, pick dumb. Dumb is more predictable.
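
As a sketch, a feature-oriented tree might look like this (the names are hypothetical, extending the convention above); each directory is a self-contained context you can hand to an agent whole:

Conversations/
    ConversationStore.swift
    ConversationReducer.swift
    ConversationListView.swift
Threads/
    ThreadStore.swift
    ThreadReducer.swift
    ThreadListView.swift
Memory/
    MemoryClient.swift
    MemoryStore.swift

The two reducers will overlap. That overlap is the duplication tax, paid in exchange for boundaries the model can see.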

Code reviews are essential

AI tools will lie. They’ll skip tests. They’ll invent things that sound right. And if you don’t catch them, they’ll assume you liked it. There’s also a “box of chocolates, you never know what you’re gonna get”-ism to firing up Cursor, ChatGPT, or Claude Code. How you initially prompt has a large impact on their code production ability. They adapt to you. If you don’t catch incoherence, they assume it’s acceptable. The system adapts to your negligence.
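
A hypothetical example of the kind of initial prompt that sets the norms up front:

“In Conversations/, add archiving to ConversationStore.swift. Follow the existing ThingStore naming convention, keep all changes inside the Conversations directory, and don’t touch Threads/.”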

Commit religiously

Commit like you’re working with an intern who doesn’t remember what they just did. Git is your safety net. Commit often: ChatGPT, Claude, and Cursor can run amok on your codebase. Sometimes a small request produces so many changes that you’ll have no choice but to git reset and prompt again. That can happen even with a detailed prompt, but the fault most probably lies with the inputs: I’ve found that a single missed detail, or forgetting to emphasize project norms, can cause the AI coding tools to miss the mark.
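
A minimal checkpoint loop, assuming standard git (the commit message is only an example):

git add -A
git commit -m "checkpoint: before asking the agent to refactor ThreadReducer"
# ...prompt the agent, then review the diff...
# if it ran amok, roll back to the checkpoint and prompt again:
git reset --hard HEAD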

In closing, consider AI in your approach

You’re not writing code for humans or AI. You’re writing it for both, at once. Structure is interface. Naming is signal. And architecture is the prompt before the prompt.