
Remember when open meant visible? When a bug in open-source code left breadcrumbs you could audit? When you could trace commits, contributors, timestamps, even heated 2:13 a.m. debates on tabs versus spaces?
That kind of openness created confidence in the code and made it possible to hold contributors accountable when issues arose. Today, as AI changes how code is created and shared, those familiar markers of trust and transparency are becoming harder to find.
Transparency was the open-source promise, the safety net: as Linus's Law puts it, "given enough eyeballs, all bugs are shallow."
Then AI walked in and replaced the safety net with a smoke screen and vibes. In this new landscape, developers often rely on gut feel and surface-level impressions of code quality rather than the clear, transparent review processes that open-source communities traditionally provided.
Now, as developers adopt no-code/low-code AI platforms—tools that let users create software or automate tasks with minimal or no traditional programming—they aren’t so much writing code as they are summoning it. Lines appear, logic compiles, and entire applications blink into existence in seconds.
AI code generation feels like magic mainly because, for the sake of convenience, we stop questioning how the trick actually works. The process behind the code becomes a black box, a science many have chosen not to examine too closely.
But every shortcut leaves traces, not to mention risks.
That’s why I’m proposing a new rule: if you vibe code it, vibe check it. Then, ensure your development environment is built on a modern, identity-first security foundation.
How AI-generated code is changing software development
In the past few years, AI code generation has done two things spectacularly well: supercharged development and dismantled accountability.
GitHub and open repositories gave us provenance. AI, on the other hand, gives us opaque models and plausible deniability should something go awry, and both pose a serious challenge for enterprise security teams.
In fact, CyberArk research has found that 92% of leaders have concerns about AI-generated code, with 78% believing AI-developed code will lead to a “security reckoning.” Similarly, nearly two-thirds admitted to losing sleep over the implications, stating it was “impossible” to govern the safe use of AI, as they lack visibility into where it’s being used. Meanwhile, 2025 GitGuardian research states that AI tools are 40% more likely to leak a secret.
And all of that’s before we even add AI agents to the equation.
Translation? We've lost visibility into the systems that should be the most transparent, making it harder to spot issues or understand how decisions are made. And because anyone can now spin up software with a prompt, our supply chains run less on logic and more, again, on faith in tools we can't fully see into.
Why human judgment is critical in AI code development
Every era of innovation eventually needs its own circuit breaker. For AI code, one of the primary breakers is human judgment.
Why? When discussing AI, you've no doubt heard about the importance of the "human-in-the-loop," but that can't be just any human. What we need are the right humans, in the right roles, who know what to look for so that AI-coding risks are caught before they make their way into production, or erase an entire production environment.
In other words, you need what I'm calling judgment-in-the-loop: a skilled developer takes a deliberate, discerning pass before pushing to prod.
A, you guessed it, "vibe check": a pause to assess whether the code feels right and makes sense before moving forward.
What does that entail? You should, at a minimum, ask these three questions:
- Does this code make sense?
- Should this code exist?
- If this code fails, can we trace who—or what—decided it was ready?
Whenever someone is about to push an AI-generated commit, a human must review it. Now, I understand concerns about efficiency bottlenecks, and AI tools are emerging that can help alleviate them. But human verification is still needed for today’s vibe coding.
That reflection might be the difference between reliable automation and an incident report caused by something catastrophic: an AI hallucination, a logic flaw exploited for fraud, or an erased codebase.
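To make that pause enforceable rather than aspirational, some teams wire it into the pipeline itself. Below is a minimal sketch, assuming a hypothetical team convention in which AI-assisted commits carry an "AI-Assisted: true" trailer and must also carry a human "Reviewed-by:" trailer before merging; the trailer names and the origin/main target branch are illustrative choices, not an established standard.

```python
"""
Minimal sketch of a judgment-in-the-loop gate for CI or a pre-push hook.
Assumption: the team labels AI-assisted commits with an "AI-Assisted: true"
trailer and records the human reviewer with a "Reviewed-by:" trailer.
"""
import subprocess
import sys

TARGET = "origin/main"  # assumed integration branch


def commits_ahead_of(target: str) -> list[str]:
    """List commit hashes on the current branch that are not yet on target."""
    out = subprocess.run(
        ["git", "rev-list", f"{target}..HEAD"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.split()


def commit_message(sha: str) -> str:
    """Return the full commit message, including trailers."""
    out = subprocess.run(
        ["git", "show", "-s", "--format=%B", sha],
        check=True, capture_output=True, text=True,
    )
    return out.stdout


def main() -> int:
    unreviewed = []
    for sha in commits_ahead_of(TARGET):
        body = commit_message(sha)
        ai_assisted = "AI-Assisted: true" in body  # assumed labeling convention
        human_reviewed = "Reviewed-by:" in body    # assumed review trailer
        if ai_assisted and not human_reviewed:
            unreviewed.append(sha)

    if unreviewed:
        print("Vibe check failed: AI-assisted commits lack a human Reviewed-by trailer:")
        for sha in unreviewed:
            print(f"  {sha}")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run from CI, the script simply refuses to let labeled AI-assisted work reach the integration branch without a named human attached to it.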
When the vibes don’t vibe: Cautionary tales from AI code in action
Even with the best intentions, AI-generated code can go sideways in unexpected ways. Consider the following three cautionary tales where the “vibe check” was missed, and what can happen as a result:
1. The hallucinated dependency
AI-generated code can include a reference to an outdated or insecure cryptographic library. Automated tests may still pass, but the hidden vulnerability can lead to security incidents later. The real problem? Because AI generated the code, it's often difficult to trace how or why that risky dependency was added.
2. The literalist agent
An AI agent automates refunds. It executes every instruction literally, without context or guardrails. A missing condition in the logic means it applies refunds twice, draining an account before anyone notices. Because the agent reused a payment API key stored in plain text, the issue spread across environments. The logic works perfectly, even if the judgment behind it doesn't.
3. The silent saboteur
An AI "cleans up" a codebase and deletes an error-handling function it deems unnecessary. When the next outage hits, teams scramble, unable to tell what failed first, because the AI has also "tidied up" the evidence. In optimizing the code, the AI removed a critical safeguard without understanding its importance and erased the very clues human teams needed to diagnose the failure.
To avoid these pitfalls, teams need to pair AI’s speed with a deliberate vibe check—so every line of code gets the scrutiny it deserves before it goes live.
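As one concrete, hedged illustration of what an automated first pass on that vibe check could look like, here is a short script that flags deny-listed dependencies (tale one) and obvious hard-coded credentials (tale two). The deny list, file paths, and regex are examples only; the script is meant to complement dedicated software composition analysis and secret-scanning tools, and the human review that follows, not to replace them.

```python
"""
Illustrative pre-merge "vibe check" pass: flag abandoned or risky dependencies
and obvious hard-coded secrets before a human reviews the change. The deny
list and secret patterns below are examples, not a complete policy.
"""
import pathlib
import re
import sys

# Example deny list (assumption): pycrypto is unmaintained; pycryptodome replaced it.
DENYLISTED_PACKAGES = {"pycrypto"}

# Naive, illustrative patterns for hard-coded credentials.
SECRET_PATTERNS = [
    re.compile(r"""(api[_-]?key|secret|token)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]""", re.I),
]


def check_requirements(path: pathlib.Path) -> list[str]:
    """Flag deny-listed packages in a pip-style requirements file."""
    findings = []
    if not path.exists():
        return findings
    for line in path.read_text().splitlines():
        name = re.split(r"[<>=\[;]", line.strip(), maxsplit=1)[0].lower()
        if name in DENYLISTED_PACKAGES:
            findings.append(f"{path}: deny-listed dependency '{name}'")
    return findings


def check_secrets(root: pathlib.Path) -> list[str]:
    """Flag lines in Python files that look like hard-coded credentials."""
    findings = []
    for py_file in root.rglob("*.py"):
        for lineno, line in enumerate(py_file.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{py_file}:{lineno}: possible hard-coded secret")
    return findings


if __name__ == "__main__":
    problems = check_requirements(pathlib.Path("requirements.txt")) + check_secrets(pathlib.Path("."))
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
```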
Building a modern SBOM for AI-generated code
With open-source code, trust is communal. You can trace dependencies and decisions more easily. AI, by contrast, trades transparency for convenience. Where the lineage of a given function used to end at a human name, it now stops at the AI model boundary.
Proprietary, web-based models and coding platforms are, in essence, black boxes. We don't necessarily know who trained them, what data they were trained on, or whose logic they borrow.
All we know is that the output works … until it doesn’t.
To reestablish accountability, the modern software bill of materials (SBOM) can’t stop at packages or versions. We now have to account for prompts, models, and interaction flows.
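To make that concrete, here is a rough sketch of what an AI-aware SBOM entry might capture. The field names are assumptions for the sake of discussion, not drawn from SPDX, CycloneDX, or any other published standard.

```python
"""
Sketch of an AI-aware SBOM record that captures generation context, not just
package identity. Field names and the example values are illustrative only.
"""
import hashlib
import json
from datetime import datetime, timezone


def ai_sbom_entry(component: str, version: str, model: str, prompt: str, reviewer: str) -> dict:
    """Build one component record that ties code to its model, prompt, and reviewer."""
    return {
        "component": component,
        "version": version,
        "generated_by": {
            "model": model,  # which model produced the code
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # hash, not the raw prompt
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
        "human_review": {
            "reviewer": reviewer,  # the accountable human in the loop
            "vibe_checked": True,
        },
    }


if __name__ == "__main__":
    entry = ai_sbom_entry(
        component="refund-service",
        version="1.4.2",
        model="example-code-model-v1",  # placeholder model name
        prompt="Generate a refund handler for the payments API",
        reviewer="jane.doe@example.com",
    )
    print(json.dumps(entry, indent=2))
```

The point is less the exact schema than the principle: record which model produced the code, under what prompt (hashed, to avoid leaking sensitive prompt content), and which human signed off.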
We also need to consider good taste as coders.
Here’s why that matters: recent industry research finds that AI routinely repeats human mistakes like over-commenting, rewriting existing logic, and settling for “good enough,” when it’s really not. AI mimics developer experience without truly having it.
However, as skilled developers know, good code functions with intent and shows restraint. It’s clean, and its creator understands why the code exists.
A vibe check helps reinstate and preserve those instincts, protects the craft of coding, and calibrates developers' internal compasses beyond mere speed and superficial fixes.
And, just as good judgment keeps code intentional, identity helps keep it accountable.
Identity security and traceability in AI code development
While taste and judgment matter, code provenance is still one of the ultimate survival tactics in AI-assisted development. Yet with AI, tying code back to a known source isn’t easy. The links often disappear, and you don’t have a community of contributors verifying what’s been added or changed.
That’s why trust in AI-generated code has to come from cryptographic proof. In other words, verified machine identities for workloads like microservices, containers, and AI agents. By building an identity-first development pipeline, teams can restore visibility while enabling code signing, author traceability, and intent pathing, so you know not just what’s shipped but why.
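As a small example of what that can look like in practice, the sketch below uses ordinary signed Git commits and `git verify-commit` to confirm that the commits being promoted carry signatures Git can verify. A full identity-first pipeline would go further, signing build artifacts and binding them to workload identities, but even this minimal check restores a verifiable link between shipped code and an accountable identity.

```python
"""
Minimal provenance check: verify that commits being promoted carry a
cryptographic signature Git can validate. Assumes developers sign commits
(e.g., `git commit -S`) and that verification keys are available locally.
"""
import subprocess
import sys


def is_signed(sha: str) -> bool:
    """Return True if Git can verify the commit's signature."""
    result = subprocess.run(
        ["git", "verify-commit", sha],
        capture_output=True, text=True,
    )
    return result.returncode == 0


def main(shas: list[str]) -> int:
    unsigned = [sha for sha in shas if not is_signed(sha)]
    if unsigned:
        print("Provenance check failed: unsigned commits:")
        for sha in unsigned:
            print(f"  {sha}")
        return 1
    print("All commits carry verifiable signatures.")
    return 0


if __name__ == "__main__":
    # Default to checking HEAD if no commit hashes are passed on the command line.
    sys.exit(main(sys.argv[1:] or ["HEAD"]))
```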
The real vibe check: Moving beyond AI magic to practical security
Policies can guide, and guardrails can help, but discernment—that tiny pause between build and deploy—that’s the real magic. And as AI advances, we must remember that it’s not just about how fast we can build, but how deliberately we choose to ship.
Matt Barker is vice president and global head of Workload Identity Architecture at CyberArk.