The premise: most code is not written by your engineers
Three years ago this paper would have started by listing the AI tools changing software development. Today the assumption is ambient. Your marketing team has Lovable. Your operations team has Bolt. Your engineering team has Claude and Cursor. Every department in the company is now, in some measure, a software department. Most of the code being written in your company is not being written by your engineers.
This is the premise of this paper, and the premise the compliance program has to absorb. The traditional shape of an information security program assumes that code originates inside engineering, is reviewed inside engineering, and is shipped inside engineering. None of these are true now in the way they were five years ago.
What this changes about compliance
Five things, in rough order of how often we see them surface in conversations with CISOs.
Provenance moves out of git
The traditional audit trail starts in git: every change to production has a commit, an author, a review. That trail assumes the change originated as a commit. AI-assisted builds often originate in a chat session, become a generated artifact, get pushed to a deployment platform, and never see the inside of the org’s primary git infrastructure. The audit trail either has to follow the build to the new platforms, or the build has to be required to come back to the canonical infrastructure before shipping.
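One way to enforce the second option is a deploy-time check that whatever is about to ship corresponds to a commit in the canonical repository. A minimal sketch in TypeScript, assuming the pipeline knows the build's commit SHA and that the canonical remote is configured as `origin`:

```typescript
import { execSync } from "node:child_process";

// Returns true only if `sha` exists in the canonical repository and is
// reachable from at least one remote branch, i.e. the build came back to
// the org's primary git infrastructure before shipping.
function commitIsCanonical(sha: string, remote = "origin"): boolean {
  try {
    execSync(`git fetch ${remote} --quiet`);         // refresh remote refs and objects
    execSync(`git cat-file -e ${sha}^{commit}`);     // the commit object exists
    const branches = execSync(`git branch -r --contains ${sha}`, { encoding: "utf8" });
    return branches.trim().length > 0;               // ...and some remote branch contains it
  } catch {
    return false; // any failure means the build never landed in canonical git
  }
}

// Deploy-time gate: refuse to ship artifacts whose provenance is outside the org's git.
const buildSha = process.env.BUILD_SHA ?? "";
if (!commitIsCanonical(buildSha)) {
  console.error(`Refusing to deploy ${buildSha || "<unknown sha>"}: not found in canonical repository`);
  process.exit(1);
}
```

Builds that originated in a chat session and never landed in git fail this check, which is the forcing function.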
The reviewer is no longer the author’s peer
A software engineer reviewing a junior engineer’s diff shares a lot of context. A software engineer reviewing a marketing manager’s AI-generated diff shares almost none. The review function still works, but the reviewer is now carrying more of the load: what would have been implicit author understanding is now explicit reviewer work.
Boundary discipline becomes the control
Where production data is allowed to live, where AI-tool environments are allowed to reach, and what crosses the line between the two are now first-order compliance questions. In practice this is the question that determines whether an AI build can ever serve customers safely.
The supply chain expands
Every AI builder your company uses is, technically, a sub-processor in your privacy program if any data flows through it. The supply chain expanded. The vendor inventory expanded with it. So did the work to maintain DPAs and sub-processor lists.
The audit becomes a sampling problem
Auditors used to sample commits. They now have to sample sessions. The unit of evidence for a SOC 2 review, in our posture, is the session record: what was built, who was in the room, what was approved.
SOC 2 in an AI-build company
The Trust Services Criteria do not need to be replaced. They need to be reframed. Three areas where the framing shifts most.
CC6, Logical and physical access
The control objective is the same: the right people have the right access to the right systems. What changes is the surface. AI-tool environments have to be in scope. The marketing manager’s Lovable account that ships to production is, by construction, in scope. The line is not “this is not an engineering tool.” The line is “does this tool produce code that runs against company data?”
CC7, Operations and monitoring
Logging the production system is necessary and not sufficient. The build process, including the AI session that generated the build, has to be loggable enough that an incident investigator can trace from a production failure back to the moment a particular line was generated, by which tool, and accepted by whom. This is hard. It is also where the sessioned record (CC7’s most important evidence in the new posture) does most of its work.
CC8, Change management
A change-management policy that was written for engineers committing to git does not survive contact with marketing managers deploying via Vercel. The policy has to specify what counts as a change, what the review path is, and what the ship-without-engineer-allowed surface is. We provide a template at the end of this paper.
ISO 27001 / 27701 in an AI-build company
ISO 27001:2022 added explicit controls for information security in development (A.8.25-A.8.31). They are the cleanest starting point we have seen for the AI-build environment. A.8.25 covers secure development life cycle. A.8.27 covers secure system architecture. A.8.28 covers secure coding. Each of these reads, in 2026, like it was written with AI builders in mind, even though it wasn’t.
ISO 27701, the privacy extension, becomes load-bearing once any AI tool processes personal data. Sub-processor management, retention, and data subject rights all extend to the AI vendors your non-engineering teams have chosen. Most companies have significantly more 27701 sub-processors than they realize.
The sessioned record as audit artifact
The single most important architectural decision in the new posture is what becomes the system of record for builds. Three things have to be true of whatever you choose. It has to capture the AI session that produced the build. It has to capture the human review that approved the build. It has to be retained, integrity-preserved, and producible on auditor request.
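A concrete shape makes those three requirements easier to reason about. The interface below is one possible layout, not a standard; every field name is illustrative:

```typescript
// One possible shape for a build-session record. Field names are illustrative.
interface SessionRecord {
  sessionId: string;            // stable identifier for the AI session
  tool: string;                 // which builder produced the change, e.g. "Lovable" or "Cursor"
  initiatedBy: string;          // the human who drove the session (SSO identity)
  startedAt: string;            // ISO 8601 timestamps
  endedAt: string;
  transcriptUri: string;        // pointer to the full session transcript, stored immutably
  artifacts: {                  // what the session produced
    description: string;
    commitSha?: string;         // present once the build lands in canonical git
    deployTarget?: string;      // e.g. "vercel:marketing-site"
  }[];
  review: {                     // the human review that approved the build
    reviewer: string;
    decision: "approved" | "rejected" | "changes-requested";
    reviewedAt: string;
    notes?: string;
  } | null;                     // null means the build has not been approved to ship
  integrityHash: string;        // hash of transcript plus artifacts, for tamper evidence
  retainUntil: string;          // retention date derived from policy
}
```

Whatever the exact fields, the auditor’s question is whether a record like this exists for every shipped build and can be produced on request.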
What we have seen work
Two patterns. Pattern A: a centralized engineering log that ingests AI-tool transcripts via API, attaches review decisions from a workflow tool, and indexes both. Works for companies with the engineering bandwidth to build it. Pattern B: a service relationship (us, in our customers’ case, but not necessarily us) that records every AI-build assistance session under a service-level recording obligation, with customer-tenanted retention. Works for companies that don’t want to build their own pipeline.
Data flows across the build perimeter
Map them. Every CISO we have worked with in the last year has discovered, on doing this, that production data is reaching AI-tool environments their security program did not know about. The maps are usually not pretty. The fix is rarely a rip-and-replace. It is a perimeter reset.
The three perimeters worth drawing explicitly
The production perimeter: what systems can read customer data. The build perimeter: what systems can produce code that runs against the production perimeter. The tool perimeter: what systems can be reached from inside the build perimeter. The compliance question is whether the three are stacked the way you think they are. The table below lists the findings that surface most often when the maps get drawn; a sketch of the perimeter check follows it.
| Finding | Frequency | Severity |
|---|---|---|
| AI tool with production read access via copied secrets | High | Critical |
| Generated code deployed without human review | High | High |
| AI-tool DPA missing or sub-processor not listed | Medium | Medium |
| Production data pasted into LLM chat, ad hoc | Medium | High |
| AI-tool retention exceeds company retention policy | Medium | Medium |
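The stacking question becomes checkable once the perimeters are written down as data. A minimal sketch, with hypothetical system names, that flags any tool-perimeter system with a read path into production (the first finding in the table); the build perimeter gets the analogous check for unreviewed write paths:

```typescript
// The perimeters as explicit sets; system names are hypothetical.
const productionPerimeter = new Set(["postgres-prod", "stripe", "auth0"]); // can read customer data
const toolPerimeter = new Set(["lovable", "bolt", "cursor", "chatgpt"]);   // reachable from the build perimeter

// What each tool can actually reach today, taken from the data-flow map.
const observedReads: Record<string, string[]> = {
  lovable: ["postgres-prod"], // e.g. production read access via copied secrets
  bolt: [],
  cursor: [],
};

// Flag every tool-perimeter system with a read path into the production perimeter.
for (const [system, reads] of Object.entries(observedReads)) {
  if (!toolPerimeter.has(system)) continue;
  const violations = reads.filter((target) => productionPerimeter.has(target));
  if (violations.length > 0) {
    console.warn(`${system} reaches production systems: ${violations.join(", ")}`);
  }
}
```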
Policy controls: what to write down
The three documents we recommend writing first.
The AI-build acceptable use policy
Specifies which AI tools are approved, which data classes can be sent to which tools, what review is required before what kind of build can ship. One page; concrete; named tools. Not a procurement document.
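The same one-pager can also live as machine-readable configuration, so tooling enforces what the policy says. A sketch, with illustrative tool names, data classes, and review levels:

```typescript
// Illustrative acceptable-use policy as configuration. Tool names, data
// classes, and review levels are examples, not a standard.
type DataClass = "public" | "internal" | "customer-pii" | "secrets";
type ReviewLevel = "none" | "peer" | "engineer" | "engineer-plus-security";

interface ToolRule {
  allowedDataClasses: DataClass[]; // what may be pasted into or synced to the tool
  reviewToShip: ReviewLevel;       // minimum review before a build from this tool ships
}

// Anything not listed here is unapproved by default.
const approvedTools: Record<string, ToolRule> = {
  lovable: { allowedDataClasses: ["public", "internal"], reviewToShip: "engineer" },
  bolt:    { allowedDataClasses: ["public", "internal"], reviewToShip: "engineer" },
  cursor:  { allowedDataClasses: ["public", "internal"], reviewToShip: "peer" },
};
```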
The shipping-without-engineering policy
Specifies what surfaces non-engineers are allowed to ship to without a software engineer in the room. The complement also matters: surfaces where they cannot. Specific. Production customer data, identity, payments, regulated workflows: never. Internal-only marketing tools, public-facing static sites, analytics dashboards reading approved sources: yes.
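As configuration, this policy is an allow list whose default is “engineer required.” A sketch with hypothetical surface names:

```typescript
// Illustrative ship-without-engineer surface list; surface names are hypothetical.
const shipWithoutEngineer: Record<string, boolean> = {
  "internal-marketing-tools": true,
  "public-static-site": true,
  "analytics-dashboard-approved-sources": true,
  "production-customer-data": false,
  "identity": false,
  "payments": false,
  "regulated-workflows": false,
};

// Unknown surfaces default to requiring an engineer in the room.
function requiresEngineer(surface: string): boolean {
  return shipWithoutEngineer[surface] !== true;
}
```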
The session-record policy
Specifies what is recorded, retained, and produced on request. Where it lives. Who can read it. Who is alerted when someone reads it. The auditor will read this document first.
Tooling: what to actually deploy
Order of operations. The first three are unconditionally worth doing.
1. SSO every AI tool. If your marketing manager logs into Lovable with their personal Gmail, you have a deprovisioning problem and a sub-processor problem. SSO closes both.
2. Centralize secret management. AI tools are good at asking for credentials and bad at handling them. A vault that mediates is the difference between a leak and a near-miss.
3. Mediate the deploy step. Every shipping path goes through the same review surface, even if the build originated outside engineering. The press, in our terms, is the review surface. Yours might be a workflow tool, a CI pipeline gate, or a ticket. Pick one and make it mandatory.
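A minimal sketch of that gate, assuming the session-record shape from earlier and a hypothetical `findSessionRecord` lookup against whatever system of record you chose:

```typescript
// Minimal deploy gate: every shipping path calls this before anything reaches users.
// `findSessionRecord` is a hypothetical lookup against your system of record.
import { findSessionRecord } from "./session-records";

async function gateDeploy(deployTarget: string, buildSha: string): Promise<void> {
  const record = await findSessionRecord({ commitSha: buildSha });

  if (!record) {
    throw new Error(`No session record for ${buildSha}: provenance unknown, refusing to deploy`);
  }
  if (!record.review || record.review.decision !== "approved") {
    throw new Error(`Build ${buildSha} has no approved review, refusing to deploy to ${deployTarget}`);
  }

  console.log(`Deploying ${buildSha} to ${deployTarget}, approved by ${record.review.reviewer}`);
  // ...hand off to the actual deploy mechanism (Vercel, a CI job, a workflow tool).
}
```

The design point is that the check lives at the deploy step, not inside any particular build tool, so it holds regardless of which AI builder produced the change.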
A 30-day compliance retrofit
For companies starting from zero. Each step within a week is a working day, give or take.
Week 1. Inventory: list every AI builder in use, by team, with screenshots of what they ship to. Map data flows. Identify the worst three.
Week 2. SSO and secrets: enroll the tools in SSO, rotate credentials that have lived in screenshots and chats, centralize the vault.
Week 3. Policy: draft the three documents above. Have legal review. Have engineering review. Get the CEO to sign.
Week 4. Mediated deploy: pick one shipping surface (start with the highest-volume non-engineer team). Wire it through a review gate. Roll out to the second team in week five.