AI recommends non-existent package versions 28% of the time, security report finds


Report: sdtimes.com

Artificial intelligence is speeding up software development, but a new industry report suggests developers might want to pump the brakes before blindly trusting AI-generated code recommendations.

According to Sonatype's newly released 2026 State of Software Supply Chain report, AI tools are getting it wrong more often than many developers realize. The cybersecurity firm analyzed over 37,000 AI-driven upgrade suggestions for open-source software and discovered something troubling. Nearly 28% of the recommendations pointed developers toward package versions that simply don't exist.

Brian Fox, who co-founded Sonatype and serves as its chief technology officer, put it bluntly: AI can make good developers work faster, but it can also amplify their mistakes at an alarming rate. The problem stems from AI models making educated guesses when they lack access to real-time package registries or vulnerability databases. When the system doesn't actually know which software versions exist or which ones contain security flaws, it fills in the blanks with its best guess. Sometimes those guesses are completely wrong.

The consequences aren't trivial. Developers waste time chasing down phantom versions, build pipelines break unexpectedly, and teams start questioning whether automation is worth the headache. Even worse, AI might recommend a version that does exist but contains known vulnerabilities or malicious code. Without proper guardrails pulling from actual registry data and current threat intelligence, organizations are essentially automating plausible-sounding nonsense.
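The guardrail the report describes, checking suggestions against actual registry data, can be surprisingly cheap. As a minimal Python sketch: PyPI publishes package metadata at its JSON endpoint (`https://pypi.org/pypi/<name>/json`), so a pipeline can reject any AI-suggested version the registry has never published before acting on it. The function names here (`fetch_known_versions`, `is_real_version`) are illustrative, not from the report.

```python
import json
import urllib.request

# PyPI's JSON metadata endpoint for a given package
PYPI_URL = "https://pypi.org/pypi/{name}/json"


def fetch_known_versions(name: str) -> set[str]:
    """Fetch the set of versions the registry has actually published."""
    with urllib.request.urlopen(PYPI_URL.format(name=name)) as resp:
        data = json.load(resp)
    # Keys of "releases" are the published version strings
    return set(data["releases"])


def is_real_version(suggested: str, known_versions: set[str]) -> bool:
    """Reject AI-suggested versions that the registry doesn't know about."""
    return suggested in known_versions


# Example: validate a (hypothetical) AI suggestion before upgrading
# known = fetch_known_versions("requests")
# if not is_real_version("2.99.99", known):
#     print("hallucinated version -- do not upgrade")
```

The same check could consult a vulnerability feed before approving a version that does exist, which addresses the second failure mode the report flags.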

Separate research from IDC adds another layer of concern to this picture. Their data shows developers accept 39% of AI-generated code without making any changes. Katie Norton, who leads DevSecOps research at IDC, noted that when you combine this acceptance rate with Sonatype's hallucination findings, it becomes clear that AI recommendations need to be grounded in real supply chain data and organizational policies. Otherwise, faster development just means a bigger attack surface.

The Sonatype report covered more than just AI mishaps. The firm examined 1.2 million malicious packages and 1,700 vulnerability records, painting a broader picture of open-source security challenges. Open-source package downloads jumped 67% year-over-year across major repositories like Maven Central, PyPI, npm, and NuGet. Meanwhile, malicious open-source packages grew even faster at 75%.

Much of this traffic isn't coming from individual developers typing commands into terminals. It's automated systems running at massive scale. The report found that the three largest cloud providers alone generated over 108 billion download requests, accounting for 86% of all package pulls. These downloads often happen because of inefficient automation patterns like cold caches, throwaway CI runners, and builds that start from scratch every single time.

Fox emphasized that he's not advocating for slower development. Instead, he's pushing for smarter engineering practices that match the industrial scale at which modern teams operate. That means setting up durable caching, properly configuring proxies and mirrors, and avoiding pipeline designs that re-download the entire dependency tree with every build. These aren't glamorous improvements, but they keep shared infrastructure healthy, reduce environmental impact, and make builds more reliable.
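The inefficient patterns called out above (cold caches, throwaway runners, from-scratch builds) all come down to one missing primitive: a durable cache sitting in front of the registry. A minimal Python sketch of that pattern, with the function name and cache layout chosen purely for illustration, would look like this:

```python
import hashlib
import pathlib
import urllib.request


def cached_fetch(url: str, cache_dir: pathlib.Path) -> bytes:
    """Serve an artifact from a durable cache, hitting the registry only on a miss."""
    # Key the cache by a hash of the URL so any artifact address maps to one file
    key = hashlib.sha256(url.encode()).hexdigest()
    path = cache_dir / key
    if path.exists():
        # Warm cache: no registry traffic at all
        return path.read_bytes()
    # Cold cache: download once, then persist for every later build
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    cache_dir.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
    return data
```

In practice teams get this from a registry proxy such as a pull-through mirror rather than hand-rolled code, but the principle is the same: downloads that survive across builds instead of being repeated 108 billion times.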

The message is clear. AI can be a powerful development accelerator, but only when it's properly constrained by real data, current security intelligence, and sensible organizational policies. Speed without accuracy is just expensive chaos.