The current state of our LLM usage.
At the company:
- General and company-knowledge-base questions — an in-house Slack assistant with RAG, based on OpenAI GPT-4o + Zed
- Copilot — GitHub Copilot for developers, QA, and Ops/DevOps/SRE
- IDE — Cursor (and Windsurf for some developers), in test mode
- Pull requests — an in-house service that comments on PRs, based on OpenAI GPT-4o, prompted to find potential issues and to model healthy review feedback
- Calls — third-party services for transcription, summaries, formulating next steps, etc.
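The PR-commenting service above can be sketched roughly as follows. This is a minimal illustration assuming the standard OpenAI Chat Completions endpoint; the prompt text, function names, and the truncation limit are assumptions, not the actual in-house implementation.

```python
import json
import os
import urllib.request

# Illustrative system prompt -- the real in-house prompt is not shown here.
SYSTEM_PROMPT = (
    "You are a senior engineer reviewing a pull request. "
    "Point out potential bugs, risky changes, and unclear code, "
    "and phrase every comment as constructive, healthy feedback."
)


def build_review_messages(diff: str, max_chars: int = 20_000) -> list:
    """Build the chat messages for one PR review request.

    Large diffs are truncated so the request stays within the model's
    context window (the 20k-character limit is an assumption).
    """
    if len(diff) > max_chars:
        diff = diff[:max_chars] + "\n... [diff truncated]"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Review this diff:\n```diff\n" + diff + "\n```"},
    ]


def request_review(diff: str) -> str:
    """Send the diff to the Chat Completions endpoint, return the comment text."""
    payload = {"model": "gpt-4o", "messages": build_review_messages(diff)}
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In practice the reply would then be posted back to the PR via the GitHub API; that half is omitted here.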
Personally:
- General questions — ChatGPT, using Projects to separate topics and supply extra context
- Narrowly specialized questions — ChatGPT via custom GPTs
- Translation, error checking, style adjustments — Raycast with custom commands, based on Claude Sonnet, because it follows instructions better
- “Explain this”, “Check facts from the text”, “Summarize a video”, and other small automations — Raycast, custom commands based on OpenAI GPT-4o
- Commit messages, fixing console commands, and everything else CLI-related — aichat (github.com/sigoden/aichat)
- IDE — Cursor in “normal” mode, with full context control
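The commit-message workflow above can be sketched like this. It assumes aichat's basic stdin-pipe usage (`git diff --cached | aichat '<task>'`); the prompt wording and all function names are hypothetical.

```python
import subprocess

# Illustrative prompt; adjust the wording to taste.
PROMPT = (
    "Write a concise git commit message for this staged diff. "
    "One summary line under 72 characters, then an optional body."
)


def staged_diff() -> str:
    """Return the staged diff -- the same text `git diff --cached` prints."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout


def commit_message_command(prompt: str = PROMPT) -> list:
    """The aichat invocation: the diff goes to stdin, the task is the argument."""
    return ["aichat", prompt]


def generate_commit_message() -> str:
    """Pipe the staged diff through aichat and return its reply."""
    out = subprocess.run(
        commit_message_command(),
        input=staged_diff(),
        capture_output=True,
        text=True,
        check=True,
    )
    return out.stdout.strip()
```

The same flow works as a plain shell pipeline; wrapping it in a script just makes it easy to add truncation, retries, or `git commit -e -m "$msg"` on top.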