LLM Context and Microservices

Interestingly, the limits on context size and attention in LLMs push application development toward loose coupling and microservices. There are at least two reasons:

  1. Context size and attention: A large application cannot fit into a model's context window, but a single module or microservice can. In practice, quality degrades long before the context actually runs out, apparently because attention spreads unevenly across its parts. Different models degrade in different ways, but in general they all work better when the context is small.

  2. Maintaining control: When using an LLM for code generation, there is a strong temptation not to understand what it has written, and the larger the service, the stronger that temptation becomes. To avoid losing control of the code, it is much easier to keep each module's or service's codebase small and to constrain their interactions through strictly defined specifications.
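One way to make such a strictly defined boundary concrete is to pin it down in typed request/response structures plus a narrow interface, so that the spec is the only thing either side (or an LLM generating either side) needs to see. A minimal sketch; the invoice service, its types, and its rules are all hypothetical, not taken from any real system:

```python
from dataclasses import dataclass
from typing import Protocol


# The entire contract between two small services: two frozen
# data types and a one-method interface.
@dataclass(frozen=True)
class InvoiceRequest:
    customer_id: str
    amount_cents: int


@dataclass(frozen=True)
class InvoiceResult:
    invoice_id: str
    accepted: bool


class InvoiceService(Protocol):
    def create_invoice(self, request: InvoiceRequest) -> InvoiceResult: ...


# A toy implementation; in practice this would live in its own
# small codebase, generated and reviewed independently.
class InMemoryInvoiceService:
    def __init__(self) -> None:
        self._count = 0

    def create_invoice(self, request: InvoiceRequest) -> InvoiceResult:
        self._count += 1
        accepted = request.amount_cents > 0
        return InvoiceResult(invoice_id=f"inv-{self._count}", accepted=accepted)
```

The point is not the implementation but the boundary: anyone reviewing the system only has to verify that each small service honors this interface, not read every line inside it.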

And writing tests for them is easier, too. We seem to be approaching the moment when tests become more important than the code itself. ;)
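That shift can be illustrated as a contract test: a suite written purely against the specification, runnable against any implementation, whether human- or LLM-written. A minimal sketch, assuming a hypothetical greeting service with two made-up rules:

```python
# A contract test: the assertions encode the specification, and any
# implementation of the (hypothetical) greeting service must pass them.

def greet(name: str) -> str:
    """One possible implementation; the contract below, not this
    function body, is the source of truth."""
    if not name:
        raise ValueError("name must be non-empty")
    return f"Hello, {name}!"


def check_greet_contract(impl) -> None:
    # Rule 1: the output always contains the input name.
    assert "Alice" in impl("Alice")
    # Rule 2: empty input is rejected.
    try:
        impl("")
    except ValueError:
        pass
    else:
        raise AssertionError("empty name must be rejected")


check_greet_contract(greet)
```

If the implementation is ever regenerated from scratch, the contract test, not a diff of the old code, decides whether the new version is acceptable.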