You generated a beautiful service class. Clean code, proper types, good naming. Then you tried integrating it with your existing codebase.
It used Express patterns in a FastAPI project. It created a new database connection instead of using your connection pool. It imported a library you don't even have installed.
Sound familiar?
The problem isn't AI. It's context.
AI models are trained on millions of repositories. When you ask for a "user service," it draws from all of them. Express, Django, Spring Boot, Rails. Without knowing your specific stack, it picks the most statistically likely pattern. Which is probably not yours.
This is the mistake most developers make: they treat AI as a search engine that writes code. Describe what you want, get code back, copy-paste, debug for two hours.
There's a better way.
## Architecture documentation changes everything
What if AI knew your stack before it wrote a single line? What if it knew your naming conventions, your directory structure, your database patterns?
It can. You just have to tell it. In writing.
```markdown
# Architecture

- Backend: FastAPI + SQLAlchemy 2.0 (async)
- Frontend: React 18 + TypeScript + Vite
- Database: PostgreSQL 15+ with RLS
- Auth: JWT tokens, bcrypt hashing
- Key pattern: domain-driven, event-based
```

Five lines of markdown. That's the difference between AI generating Express middleware in your FastAPI project and AI generating code that actually fits.
## The before and after
Without architecture docs, AI generates something like this:

```python
class UserService:
    def __init__(self, db, notification_svc, audit_svc,
                 subscription_svc, scheduling_svc):
        # AI generated this. Five dependencies, no event bus,
        # direct calls everywhere. Classic.
        ...
```

With a simple ARCHITECTURE.md that documents your event-driven pattern:
```python
class UserService:
    def __init__(self, db: AsyncSession, event_bus: EventBus):
        # Two dependencies. Events handle the rest.
        # AI generated this too, but it had architecture docs.
        self.db = db
        self.event_bus = event_bus
```

Same AI. Same prompt. Different context. The second version fits your codebase because AI understood the pattern before writing code.
## Five files, 90 minutes, 90% accuracy
We measured this. Before documenting our conventions, AI-generated code was usable about 40% of the time. After writing five markdown files (ARCHITECTURE.md, DOMAIN_PATTERNS.md, API_CONVENTIONS.md, DATABASE_PATTERNS.md, and TESTING_STRATEGY.md), that number jumped to 90%.
The time investment: roughly 90 minutes. The payback: every single AI interaction from that point forward.
## Why prompts alone will never fix this
You could paste your entire architecture into every prompt. Some developers do. They write 600-word prompts that describe their stack, their patterns, their conventions.
It works. Once. Then you need the same context for the next prompt. And the next. And you forget one detail and AI goes rogue again.
Documentation files solve this permanently. They sit in your repo. Every AI interaction inherits them. You write them once and update them when patterns change.
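Many AI coding tools pick up repo-level docs automatically. If you drive a model through an API yourself, "every interaction inherits them" can be as simple as prepending whichever docs exist. A minimal sketch; the file names and `build_prompt` helper are assumptions for illustration:

```python
from pathlib import Path

# Hypothetical doc set; include whichever of these your repo actually has.
ARCH_DOCS = ["ARCHITECTURE.md", "DOMAIN_PATTERNS.md", "API_CONVENTIONS.md",
             "DATABASE_PATTERNS.md", "TESTING_STRATEGY.md"]


def build_prompt(task: str, repo_root: str = ".") -> str:
    """Prefix a request with every architecture doc found in the repo."""
    context = []
    for name in ARCH_DOCS:
        path = Path(repo_root) / name
        if path.exists():
            context.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(context + [f"## Task\n{task}"])
```

Write the 600 words once, in the files; the function (or your tool) does the pasting from then on.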
## The real fix
Stop blaming AI for inconsistent output. Start giving it the context it needs.
- Document your stack: which framework, which ORM, which patterns
- Document your conventions: directory structure, naming rules, where business logic lives
- Document your boundaries: what AI should generate, what it shouldn't touch
Your AI isn't broken. It's just guessing. Give it the answers and it stops guessing.