The scene you're living right now
You opened Claude Code three months ago. Or it was Cursor. Or Copilot. At first you thought it was cute, played around for a weekend, figured it would be just another wave. Then it became a habit. Today you code differently. Your productivity didn't double. It tripled on some tasks, and on others you simply ship things you wouldn't have shipped before.
Then came the meeting. Your CTO, your VP, your head of engineering: someone in leadership sits down with you and asks: how do we make this a company-wide practice? How do fifty people use this at the same time, the right way, without it becoming chaos?
You don't have a good answer. You have a textbook answer: do training, standardize prompts, write a guideline. But deep down you know that's not it. There's a technical layer missing, and nobody has properly talked about it yet.
The problem nobody told you about yet
When one person uses AI to code, it's magic. When fifty people use AI to code simultaneously, without coordination, it's industrial-scale technical debt forming. And the strange thing is that the company takes months to notice.
What's happening in the meantime:
- Each dev uses a different tool. One team is on Cursor, another on Claude Code, the Java folks prefer Copilot, someone tried Kiro last week. Different conventions, different prompts, different quality.
- Nobody knows what AI wrote. There's no audit trail. If a bug shows up in production, you can't tell whether a human dev or an agent wrote it. No traceability.
- The agent runs on the dev's machine, with access to everything. Environment variables, local database, secrets in .env, private repositories. The attack surface expanded silently.
- Costs grow without visibility. Each dev pays their own subscription. The company doesn't see the whole picture. How much does real AI usage cost per team? Per feature? Per bug fix? Nobody knows.
- Implicit lock-in. In three months your company has coupled processes to a specific tool. Switching providers became a project, not a decision.
None of this is catastrophic in the short term. But by the second half of the year it becomes the kind of problem that shows up at a board meeting with the word governance on the slide.
The missing layer: Software Factory
Software Factory is the name being given to the layer that orchestrates AI use in software engineering, at enterprise scale, with control and portability. It's not a tool you buy. It's an architecture you design, with components that can be open source, commercial, or built in-house.
The core idea: the AI agent is the factory worker. The factory is what you build around it. And it's the factory that determines whether AI will be productive, safe, and scalable, or a chaos of beautiful demos that nobody can put into production.
A well-designed Software Factory has four elements:
Orchestration
The layer that receives tasks, distributes them to agents, coordinates parallel execution, manages branches and merges. The heart of the factory.
Sandboxing
Each agent runs isolated, without access to credentials or networks beyond what's strictly necessary. Docker, Podman, microVMs, or ephemeral cloud environments.
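In practice, "isolated" can start as simply as a locked-down container. A minimal sketch in Python that builds such an invocation; the image name, mount paths, and entrypoint are hypothetical, while the Docker flags themselves are standard:

```python
# Sketch: constructing a locked-down `docker run` command for an agent sandbox.
# The `agent-sandbox` image and the paths are placeholders, not a real setup.

def sandbox_command(repo_path: str, image: str = "agent-sandbox:latest") -> list[str]:
    """Build a docker run invocation that denies network and credential access."""
    return [
        "docker", "run", "--rm",
        "--network=none",                    # no outbound network from the agent
        "--read-only",                       # immutable root filesystem
        "--cpus=2", "--memory=4g",           # resource caps per run
        "-v", f"{repo_path}:/workspace:rw",  # only the repo copy is writable
        "--env-file", "/dev/null",           # no host env vars (secrets) leak in
        image,
        "run-agent", "--task-file", "/workspace/.task.json",
    ]

cmd = sandbox_command("/tmp/repo-copy")
print(" ".join(cmd))
```

The exact flags matter less than the posture: the agent gets a copy of the repository and nothing else, and the sandbox is thrown away after each run.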
Observability
Structured logs of everything the agent did, cost per execution, time, output quality. The equivalent of APM, but for cognitive work.
Governance
Policies that define who can request what, what approvals are needed, how sensitive data is handled, compliance with LGPD, CVM, and regulated sectors.
There's a fifth element that's more cultural than technical, but it changes everything strategically: model agnosticism. The factory is designed so you can swap Claude for GPT, GPT for Gemini, commercial for self-hosted (Llama, Qwen, DeepSeek), without rewriting the process. The agent is an interchangeable part. The factory is yours.
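That agnosticism can be made concrete as a thin interface between orchestration and model. A minimal Python sketch, assuming a hypothetical stand-in provider; a real one would wrap a vendor SDK behind the same contract:

```python
# Sketch: a provider-agnostic completion interface. The Protocol is the
# factory's contract; each provider class is an interchangeable part.
# EchoProvider is a hypothetical stand-in, not a real SDK wrapper.

from typing import Protocol

class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in provider for local testing; a real one would call a model API."""
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"

def run_task(provider: ModelProvider, task: str) -> str:
    # Orchestration code only sees the Protocol, never a vendor SDK.
    return provider.complete(f"Implement: {task}")

print(run_task(EchoProvider(), "new endpoint"))
```

Swapping Claude for GPT, or commercial for self-hosted, then means writing one new provider class, not rewriting the process.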
How this works in practice
Imagine the concrete flow of a company that adopted Software Factory. An issue arrives on GitHub: implement a new endpoint. Instead of a dev opening the IDE and typing with AI assistance, here's what happens:
- The issue is routed to the factory, which provisions a clean sandbox with a copy of the repository.
- A planning agent reads the issue, analyzes the relevant code, and generates an implementation plan.
- An implementation agent works on an isolated branch, writes the code, runs the tests.
- A review agent (or human) evaluates the result. If approved, it becomes an automatic pull request.
- All of this is logged: prompts, decisions, cost, time, model used, who approved.
The human dev participates at the edges: architecture, critical decisions, final review. The middle of the work is executed by an orchestra of agents that the dev designed.
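The steps above can be sketched as a single pipeline function. Every agent here is a hypothetical placeholder; the point is the shape: sandbox, plan, implement, review, and an audit record at the end:

```python
# Sketch of the factory flow: plan -> implement -> review -> audit log.
# The agents dict holds callables; real ones would invoke sandboxed models.

import json
import time
import uuid

def run_factory_task(issue: dict, agents: dict) -> dict:
    run_id = str(uuid.uuid4())
    started = time.time()

    plan = agents["planner"](issue["body"])    # planning agent reads the issue
    diff = agents["implementer"](plan)         # works on an isolated branch
    verdict = agents["reviewer"](diff)         # review agent (or human)

    audit = {                                  # observability record per run
        "run_id": run_id,
        "issue": issue["id"],
        "approved": verdict,
        "duration_s": round(time.time() - started, 3),
        "model": agents.get("model", "unspecified"),
    }
    print(json.dumps(audit))                   # structured log, one line per run
    return audit

# Usage with trivial stand-in agents:
agents = {
    "planner": lambda body: f"plan for: {body}",
    "implementer": lambda plan: f"diff from: {plan}",
    "reviewer": lambda diff: True,
    "model": "stand-in",
}
run_factory_task({"id": 42, "body": "implement a new endpoint"}, agents)
```

Notice that the audit record alone already gives you most of the observability layer: cost, time, and model fields per execution, queryable like any other log.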
You don't need to become this tomorrow. But you need to start designing it now, because the first components (sandboxing, logs, prompt standardization) can already enter your pipeline without a revolution.
The market has already moved, and you may not have noticed
In April 2026, an American startup called Factory raised US$ 150 million at a US$ 1.5 billion valuation to sell exactly this: an enterprise Software Factory platform based on agents (they call them Droids) covering the entire development lifecycle. Customers: Morgan Stanley, EY, Palo Alto Networks, Nvidia.
In the same period, open source libraries emerged to build the same concept in an agnostic, self-hosted way. The whole ecosystem is pointing in the same direction: it's time to treat AI as infrastructure, not as an individual productivity tool.
Big techs have been internalizing this for months. What's reaching the market now is the possibility for any mid-sized company to build or contract its own factory, without having to become Google.
Why this matters even more in Brazil
A regulated Brazilian company can't casually throw sensitive data into American SaaS. LGPD, CVM, BACEN, ANPD, trade secrets in mining, clinical data in healthcare. The list of constraints is bigger than in any mature market in the northern hemisphere.
This means that directly adopting a platform like Factory.ai hosted in the US may not be an option for much of the corporate market here. And it also means that outsourcing the factory is not the same as outsourcing corporate ChatGPT. Here it's code being written on private repositories, with access to credentials and production data.
The reasonable read for a regulated Brazilian company is to build the factory internally, or with a local partner, using open source primitives for the orchestration layer and commercial or self-hosted models for the intelligence layer. This preserves sovereignty, meets compliance, and keeps the door open to switch model providers as the market evolves.
What changes for you, depending on the role
If you're a dev
Learn the layer above the tool. Knowing how to use Claude Code very well is a commodity in 2026. Knowing how to design and operate a factory becomes a career edge over the next 24 months. Study orchestration, sandboxing, evals.
If you're a tech lead or eng manager
Start designing your team's factory now, even if at minimal scale. Standardize prompts, define sandboxes, instrument cost. You don't need a commercial platform to start; you need architectural intent.
If you're a CTO or VP of Engineering
Stop buying point tools. Start designing the company's AI infrastructure strategy. It's not a productivity item. It's the next generation of your engineering platform. Whoever understands this first will have compound advantage over the next five years.
The takeaway
We're at the moment when individual AI adoption becomes enterprise adoption. It's the shift that separates those who use AI from those who operate with AI. The difference between who wins and who loses in this shift isn't the model you choose; it's the layer you build around the model.
If you had only heard of prompt engineering, copilot, and agent, now you have a fourth piece in the vocabulary: Software Factory. It's the missing piece for the conversation to make sense at scale.
I'll keep writing about this in the coming weeks. How to start. How to avoid the most common mistakes. How to decide between buying a platform and building in-house. How to think about compliance in the Brazilian context. If your company is thinking about this, talk to me.
Is your company thinking about this layer?
I'm having conversations with tech teams that are past the AI playground stage and want to design the next layer. To receive future articles and updates on the topic, sign up for the list.
Written by Luiz Filipe Couto, founder of PixFly. PixFly helps Brazilian companies adopt AI in software engineering through workshops, consulting, and assisted implementation.