https://thenewstack.io/mcp-vulnerability-exposes-the-ai-untrusted-code-crisis/
It’s unclear why this is being treated as a new problem. Even before LLMs, code was approved not because a human wrote it, but because it had passed tests, scanners, reviews, manual QA checks, and everything else that could be “crammed” into CI and the SDLC. There were exceptions for certain people and certain projects, but they were exactly that: exceptions.
What has really changed is the sharply increased volume of code per programmer and the visual plausibility of generated code, which dulls reviewers’ vigilance. Both factors will only grow: producing code will keep getting easier.
This is why QA as part of the development process is becoming increasingly important.
I see several directions for solving the problem, but none of them is simple or universal:
- Establishing clear code-approval processes: even if code looks fine at first glance, it can still contain errors of any kind
- Improving documentation: the more context LLMs have, the lower the chance of errors
- Training all developers in Context Engineering: the better developers understand how to build context for LLMs, the lower the chance of errors
- Explaining the risks and where responsibility lies: developers must understand that they are responsible for the code they commit, regardless of who, or what, wrote it
- Automating code checks with classic methods: tests at all levels, linters, static analyzers, dead-code scanners (dead code is a real pain with LLM output), placeholder detection (a detector sketch follows this list)
- Involving QA early in the SDLC: this improves specs, gives developers QA-authored tests they can run themselves, and so on
- Using LLMs for code review as a first pass before a human looks at the diff (sketched after this list)
- Introducing Canary Deployment and/or Ring-based Deployment to reduce the expected cost of errors (a ring-assignment sketch follows the list)
- Introducing Feature Flags to disable new features quickly in critical projects (a kill-switch sketch follows the list)
- Improving Observability, monitoring, and alerting (an instrumentation sketch follows the list)
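
Some of these points are easier to show than to describe. First, placeholder detection from the automated-checks item: a minimal sketch of a CI step that fails the build when LLM-style stubs survive into a commit. The patterns and the `src` directory are assumptions to adapt to your codebase; for dead code, an off-the-shelf scanner such as vulture covers far more ground.

```python
# placeholder_check.py - minimal sketch of a placeholder detector for CI.
# The patterns below are assumptions: extend them for your codebase.
import pathlib
import re
import sys

# Markers that LLM-generated code tends to leave behind.
PATTERNS = [
    re.compile(r"\bTODO\b|\bFIXME\b"),
    re.compile(r"raise NotImplementedError"),
    re.compile(r"#\s*(placeholder|implement me|your code here)", re.IGNORECASE),
]

def scan(root: str = "src") -> int:
    """Print every placeholder hit under `root` and return the hit count."""
    hits = 0
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: {line.strip()}")
                hits += 1
    return hits

if __name__ == "__main__":
    # A non-zero exit code fails the CI step when placeholders are found.
    sys.exit(1 if scan() else 0)
```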
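
For the LLM-as-reviewer item, the shape is simple: feed the diff to a model with a strict reviewer prompt and surface the result to the human reviewer, who stays in the loop. The endpoint and the request/response schema below are hypothetical placeholders, not any real provider’s API; substitute your provider’s actual client.

```python
# llm_review.py - sketch of an LLM first-pass review step in CI.
import subprocess

import requests

# Hypothetical endpoint and schema: replace with your provider's client.
LLM_API_URL = "https://llm.example.internal/v1/complete"

PROMPT = (
    "You are a strict code reviewer. Point out bugs, dead code, "
    "placeholders, and security issues in this diff:\n\n{diff}"
)

def review_current_branch() -> str:
    # Review only what changed relative to the main branch.
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    response = requests.post(
        LLM_API_URL,
        json={"prompt": PROMPT.format(diff=diff)},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["text"]  # hypothetical response field

if __name__ == "__main__":
    print(review_current_branch())
```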
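
For canary and ring-based rollout, the core mechanism is assigning every user to a stable ring, so the same person always gets the same version while you widen exposure step by step. A minimal sketch, assuming three rings whose sizes you would tune to your own risk tolerance (the feature name is hypothetical):

```python
# rollout.py - sketch of stable ring assignment for gradual rollout.
import hashlib

# Cumulative traffic share per ring: 1% canary, 10% early, then everyone.
RINGS = [("canary", 0.01), ("early", 0.10), ("general", 1.00)]

def ring_for(user_id: str, feature: str) -> str:
    # Hash user+feature so each user lands in a stable per-feature bucket.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    for name, share in RINGS:
        if bucket < share:
            return name
    return RINGS[-1][0]

# Rings currently allowed to see the new code path.
ENABLED = {"new-checkout": {"canary", "early"}}

def use_new_version(user_id: str, feature: str) -> bool:
    return ring_for(user_id, feature) in ENABLED.get(feature, set())
```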
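
A Feature Flag in its simplest form is just a guarded branch with a battle-tested fallback, which is exactly what you want when the new path was written by a model. The env-var convention and the parser names below are assumptions; real setups usually use a flag service so the switch flips without a redeploy.

```python
# flags.py - minimal sketch of a feature-flag kill switch.
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    # FEATURE_NEW_PARSER=0 in the environment disables the new code path.
    raw = os.environ.get(f"FEATURE_{name.upper()}")
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "on", "yes"}

def legacy_parse(document: str) -> dict:
    return {"body": document, "parser": "legacy"}  # battle-tested fallback

def new_parse(document: str) -> dict:
    return {"body": document, "parser": "new"}  # freshly generated, less trusted

def parse(document: str) -> dict:
    if flag_enabled("new_parser"):
        return new_parse(document)
    return legacy_parse(document)
```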
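
Finally, Observability: the goal is to notice a bad change before users report it. A minimal sketch using the prometheus_client library (the metric names, port, and handler are assumptions); in practice you would pair the counters with an alert on the error ratio.

```python
# metrics.py - sketch of error-rate instrumentation for a new code path.
import time

from prometheus_client import Counter, start_http_server

REQUESTS = Counter("feature_requests_total", "Requests served by the new code path")
ERRORS = Counter("feature_errors_total", "Errors raised by the new code path")

def handle(payload: str) -> str:
    REQUESTS.inc()
    try:
        return payload.upper()  # stand-in for the real feature logic
    except Exception:
        ERRORS.inc()  # an alert on ERRORS / REQUESTS catches regressions fast
        raise

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle("ping")
        time.sleep(1)
```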