Untrusted Code
The first comment contains a link to the article “MCP Vulnerability Exposes the AI Untrusted Code Crisis” (https://thenewstack.io/mcp-vulnerability-exposes-the-ai-untrusted-code-crisis/).
It is unclear why this is treated as a new problem. Even before LLMs, code was approved not because a human wrote it, but because it passed tests, scanners, reviews, manual QA checks, and everything else that could be wired into CI and the SDLC. There were exceptions for certain people and specific projects, but they were just that: exceptions.
What has really changed is the sharply increased volume of code per programmer and the superficial plausibility of that code, which dulls vigilance. Both factors will only grow as producing code gets easier.
That is why QA as part of the development process is becoming ever more important.
I see several directions for addressing the problem, though none of them is simple or universal:
- establishing clear code approval processes: even if the code looks reasonable at first glance, it can still contain errors of any kind
- improving documentation: the more context the LLM has, the less likely errors become
- training all developers in Context Engineering: the better developers understand how to build context for an LLM, the less likely errors become
- explaining the risks and responsibilities to developers: they are accountable for the code they commit, regardless of who, or what, wrote it
- automating code checks with time-tested methods: tests at every level, linters, static analyzers, dead-code scanners (dead code is a real problem with LLM output), and stub detection (see the first sketch after this list)
- involving QA at early stages of the SDLC: this improves specifications, makes QA tests available to developers, and so on
- using LLMs for code review
- implementing Canary Deployment and/or Ring-based Deployment to reduce the expected losses from errors (see the second sketch below)
- implementing Feature Flags so new features can be switched off for critical projects (see the third sketch below)
- improving Observability, monitoring, and alerting
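To make the stub-detection point concrete, here is a minimal sketch of such a check. It is my own illustration, not something from the linked article: it flags Python functions whose body is just `pass`, `...`, or `raise NotImplementedError`, a shape LLMs often leave behind, and exits non-zero so a CI step can fail on it.

```python
import ast
import sys


def is_stub(func: ast.FunctionDef | ast.AsyncFunctionDef) -> bool:
    """True if the function body is an empty placeholder."""
    body = func.body
    # Ignore a leading docstring, if any.
    if (
        body
        and isinstance(body[0], ast.Expr)
        and isinstance(body[0].value, ast.Constant)
        and isinstance(body[0].value.value, str)
    ):
        body = body[1:]
    if len(body) != 1:
        return False
    stmt = body[0]
    if isinstance(stmt, ast.Pass):
        return True
    if (
        isinstance(stmt, ast.Expr)
        and isinstance(stmt.value, ast.Constant)
        and stmt.value.value is Ellipsis
    ):
        return True
    if isinstance(stmt, ast.Raise):
        exc = stmt.exc
        if isinstance(exc, ast.Call):  # raise NotImplementedError("...")
            exc = exc.func
        return isinstance(exc, ast.Name) and exc.id == "NotImplementedError"
    return False


def find_stubs(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    return [
        f"{path}:{node.lineno}: stub function {node.name!r}"
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and is_stub(node)
    ]


if __name__ == "__main__":
    findings = [msg for path in sys.argv[1:] for msg in find_stubs(path)]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # non-zero exit fails the CI step
```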
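Canary routing can be sketched just as briefly. The deterministic bucketing and the 5% share below are illustrative assumptions: a fixed slice of users gets the new version, so a defect hurts only that slice while monitoring watches the rollout.

```python
import hashlib


def in_canary(user_id: str, percent: int = 5) -> bool:
    """Deterministically bucket a user: the same user always gets the same version."""
    bucket = hashlib.sha256(user_id.encode()).digest()[0] % 100  # 0..99, roughly uniform
    return bucket < percent


def handle_v1(data: str) -> str:
    return f"v1:{data}"  # stable build serving the bulk of traffic


def handle_v2(data: str) -> str:
    return f"v2:{data}"  # new build under observation


def handle_request(user_id: str, data: str) -> str:
    handler = handle_v2 if in_canary(user_id) else handle_v1
    return handler(data)
```

Ring-based Deployment generalizes the same idea: instead of a random slice, successive rings (internal users, beta customers, everyone) receive the build in stages.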
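And a minimal feature-flag sketch, again with assumed names: the flag source is an environment variable for brevity, where a real setup would use a config store or flag service, and the new code path stays off by default so critical projects can keep it disabled.

```python
import os


def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a boolean flag from the environment, e.g. FLAG_NEW_PARSER=1."""
    raw = os.environ.get(f"FLAG_{name.upper()}")
    return default if raw is None else raw.strip().lower() in ("1", "true", "on")


def legacy_parser(data: str) -> list[str]:
    return data.split(",")  # the proven path stays live


def new_parser(data: str) -> list[str]:
    # Hypothetical rewrite whose correctness is still being verified.
    return [part.strip() for part in data.split(",")]


def parse(data: str) -> list[str]:
    if flag_enabled("new_parser"):  # off by default for everyone
        return new_parser(data)
    return legacy_parser(data)
```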