OpenAI wrote about how they ported Sora from iOS to Android.
Nothing fundamentally new, but it’s always interesting to see how the creators use their own tool.
If I had their LLM budgets, I’d be running 3–4 agents too :)
TLDR:
- Timeline & outcome: Oct 8 – Nov 5, 2025; ~5B tokens; released at 99.9% stability; built with an early version of GPT‑5.1‑Codex; #1 in the Play Store; 1M videos in the first 24 hours.
- Approach (Brooks’ Law): a small team (4 engineers) instead of “more people”; AI multiplies effectiveness, but requires coordination and high-quality review.
- How they worked with Codex: AI needs explicit rules and context (architecture, patterns, UX); AGENTS.md and auto-formatting; humans make architectural decisions, Codex fills in code within the given structure.
- Process: first understand the system and make a plan (a mini design doc), then implement step by step; long (over 24-hour) unattended sessions running off a saved plan; parallel sessions as a "distributed team".
- Cross-platform: porting logic from iOS to Android by reading real code (Swift → Kotlin); context from iOS/backend makes Codex more accurate; takeaway — AI needs maximum context.
- Conclusion: AI speeds up development, but raises the bar for discipline, architecture, and human oversight; the future belongs to engineers who can work with AI over long horizons.
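The AGENTS.md mentioned above is a repo-level instruction file that Codex reads before it starts working. A minimal sketch of what such a file might look like for an Android port — the rules, modules, and tools named here are hypothetical, not from OpenAI's actual file:

```markdown
# AGENTS.md — hypothetical example for an Android port

## Architecture
- UI in Jetpack Compose; one ViewModel per screen, state as immutable data classes.
- Networking mirrors the iOS client: port Swift models to Kotlin data classes 1:1.

## Conventions
- Run the auto-formatter (e.g. ktlint) before every commit; never hand-format.
- Follow the existing package structure; do not introduce new top-level modules.

## Review
- Humans own architectural decisions; flag any deviation from this file in the PR description.
```

The point of such a file is that every agent session starts from the same explicit context instead of rediscovering conventions from scratch.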
Unusual bits:
- they wrote some parts themselves first and then used them as a reference
- long autonomous sessions up to 24 hours
- it took them a while to recognize how important context-building and upfront planning were
- at the peak they used 3–4 specialized agents in parallel