---
title: "agents and humans: the same interaction problems"
date: 2026-04-19
draft: false
---

The longer I work with multi-agent systems, the more clearly I see it: communication problems with AI agents are practically identical to the problems of communicating with people. The only difference is that with agents they surface faster and are cheaper to diagnose.

The parallels that jump out:

  • Delegation: Vague wording reliably produces the wrong result. You have to spell out boundaries, success criteria, and constraints. “Do a good job” works with neither a junior developer nor an agent.
  • Shared context across all stages: If the original intent is lost along the way, QA ends up testing not the solution to the user’s problem but mere compliance with the code changes. If the task was handed over in pieces or “verbally”, the developer’s, tester’s, and manager’s views will inevitably diverge. With agents it’s exactly the same.
  • Artifacts over words: Words ≠ facts. A subagent, just like a live colleague, can confidently claim “all done” while in fact not having run the tests or having mixed up branches. Serious claims require verifiable evidence.
  • Agreement instead of objection: In agents this is a systemic tendency to please; in humans it’s blind trust in authority or fear of conflict. The outcome is the same: the executor agrees where they should have pushed back.
  • Asymmetry of skills: Different agents are strong at different things and carry different instructions, just as people on a team have different roles. You need to assemble a balanced lineup for the task, not bet on a single generalist. Moreover, you need to deliberately design points of constructive conflict (task conflict) with controlled discussion formats and independent “judges”.
  • Reflection: Without a retrospective, the same mistakes repeat endlessly. Both agents and people need to set aside time for debriefs and updating instructions, not just for nonstop solution generation.
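The delegation and artifacts-over-words points can be made concrete. Below is a minimal sketch (all names and fields are hypothetical, not from any real framework) of a task handoff that spells out boundaries and success criteria up front, and treats “all done” as unverified until the required artifacts are attached:

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """A delegation contract: explicit goal, boundaries, and proof requirements."""
    goal: str
    boundaries: list[str]        # what the executor must not touch
    success_criteria: list[str]  # observable definition of done
    required_evidence: list[str] # artifacts that must back the "done" claim

@dataclass
class Report:
    claim: str
    evidence: dict[str, str] = field(default_factory=dict)  # artifact name -> link/content

def missing_evidence(spec: TaskSpec, report: Report) -> list[str]:
    """Return the required artifacts the report failed to attach.

    An empty list means the claim is backed; anything else means
    the claim is still just words.
    """
    return [name for name in spec.required_evidence if name not in report.evidence]

spec = TaskSpec(
    goal="Fix the flaky login test",
    boundaries=["do not change auth service code"],
    success_criteria=["test passes 20/20 runs on CI"],
    required_evidence=["ci_run_log", "diff"],
)
report = Report(claim="all done", evidence={"diff": "…"})
missing = missing_evidence(spec, report)  # ["ci_run_log"]
```

The same gate applies regardless of who filed the report, an agent or a colleague: the reviewer checks artifacts, not confidence.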

The similarity holds down to the details. For example, bugs aren’t fixed from a description alone, without a reproduction. This golden rule applies on both sides: if one agent (or human) reports a bug, the other shouldn’t take it on faith: the problem must first be localized and reproduced in their own environment.
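The reproduce-before-fix rule can be written down as a gate in the same spirit. A sketch under assumed field names (nothing here is a real bug-tracker schema):

```python
def should_start_fixing(bug_report: dict) -> bool:
    """Only take a bug into work once it has been reproduced locally.

    A report without reproduction steps and a confirmed local repro is
    a hypothesis to verify, not a fact to act on, whoever filed it.
    """
    has_steps = bool(bug_report.get("repro_steps"))
    reproduced = bool(bug_report.get("reproduced_locally"))
    return has_steps and reproduced
```

A description-only report ("login is broken") fails the gate; a report with steps and a confirmed local reproduction passes it.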

The funny thing: while learning to work with AI agents, we are simultaneously fixing human communication. In many companies, managing agents has turned out to be an excellent catalyst for adopting basic management practices that should have been put in place years ago.

Takeaways:

  1. Companies with already-established processes (a culture of delegation, context handoff, artifact verification, constructive objection, and reflection) adapt to AI agents far faster. The foundation is exactly the same.
  2. The transfer works both ways: processes honed on people map onto agents almost unchanged. And skills sharpened on agents (writing clear prompt-tasks, pinning down definition-of-done criteria, demanding evidence logs, structuring context) come back to the human environment and improve it.