LLMs in the SDLC

A snapshot of how LLMs are used as of early 2025, both across the company and personally.

In the company:

  • General questions and company knowledge-base questions - a custom-built Slack assistant with RAG, based on OpenAI 4o + Zed; a minimal sketch of the retrieval flow follows this list
  • Code completion - GitHub Copilot for developers, QA, and Ops/DevOps/SRE
  • IDE - Cursor and Windsurf for individual developers, currently in trial mode
  • Pull requests - a custom service that comments on PRs, based on OpenAI 4o, prompted to point out potential issues and to model healthy review feedback; see the sketch after this list
  • Calls - third-party services for transcription, summarization, drafting next steps, etc.
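
The Slack assistant above is, at its core, a plain RAG loop: embed the question, pull the closest knowledge-base snippets, and let 4o answer from them. A minimal sketch under obvious assumptions (toy in-memory documents, an illustrative embedding model, no Slack or document-store integration):

```python
# Minimal RAG sketch: embed docs, retrieve by cosine similarity, answer with 4o.
# The documents, model names, and helpers here are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical knowledge-base snippets; the real base lives elsewhere.
DOCS = [
    "Vacation requests go through the HR portal and need manager approval.",
    "Production deploys are frozen on Fridays after 15:00.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

DOC_VECTORS = embed(DOCS)

def answer(question: str, top_k: int = 1) -> str:
    # Retrieve the most similar snippets by cosine similarity.
    q = embed([question])[0]
    sims = DOC_VECTORS @ q / (np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCS[i] for i in np.argsort(sims)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided company context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("When are deploys frozen?"))
```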
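
The PR commenter is similarly thin: it boils down to sending the diff to 4o with a review prompt and posting the answer back. A rough sketch of that core step, with the prompt wording and function names as assumptions and the GitHub API posting left out:

```python
# Sketch of a PR-review call: feed a diff to GPT-4o with a review prompt.
# Prompt text and names are assumptions; posting the comment back is omitted.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEW_PROMPT = (
    "You are a senior engineer reviewing a pull request. "
    "Point out potential bugs, risky changes, and missing tests. "
    "Phrase every remark as constructive, respectful feedback."
)

def review_diff(diff: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": f"Review this diff:\n\n{diff}"},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    sample_diff = "--- a/app.py\n+++ b/app.py\n+def divide(a, b):\n+    return a / b\n"
    print(review_diff(sample_diff))
```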

Personally:

  • General questions - ChatGPT, using Projects to separate topics and attach extra context
  • Specialized questions - ChatGPT via custom GPTs
  • Translation, error checking, and style changes - Raycast with custom commands, based on Claude Sonnet, which follows instructions better; a tiny sketch of such a command is below the list
  • “Explain this,” “Check facts from the text,” “Summarize video,” and other small automations - Raycast, custom commands based on OpenAI 4o
  • Creating commit messages, fixing console commands, and everything else CLI-related - github.com/sigoden/aichat; the underlying flow is sketched after this list
  • IDE - Cursor, in “normal” mode, with full context management
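
Each Raycast command is really just a fixed instruction plus the selected text sent to the model. A tiny illustrative sketch against the Anthropic API (model alias and prompt wording are my assumptions, not the actual command):

```python
# Sketch of a proofreading command: fixed instruction + selected text to Claude Sonnet.
# The model alias and prompt are assumptions for illustration only.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def proofread(text: str) -> str:
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        system="Fix grammar and spelling. Keep the author's tone. Return only the corrected text.",
        messages=[{"role": "user", "content": text}],
    )
    return resp.content[0].text

print(proofread("Their is a issue with this sentense."))
```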
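
The commit-message step is the same pattern on the CLI. In practice aichat handles the plumbing, so the sketch below only illustrates the idea (staged diff in, one-line message out) rather than aichat's own interface:

```python
# Illustration of the commit-message workflow, not aichat's actual interface:
# read the staged diff with git, ask a model for a one-line commit message.
# Prompt wording and helper names are assumptions.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def staged_diff() -> str:
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout

def commit_message(diff: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Write a single-line conventional commit message for this diff."},
            {"role": "user", "content": diff or "(empty diff)"},
        ],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    print(commit_message(staged_diff()))
```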