The Accidental Orchestrator

For over two decades, I've written about software development for practitioners, covering coding, architecture, project management, and team dynamics. In recent years, my focus has shifted to AI and its role in software engineering. Despite the growing interest in AI tools like Claude Code, Copilot, and Cursor, I've struggled to find a structured approach for experienced developers to integrate these tools effectively. While there are plenty of tips and plenty of hype, there is little guidance on how to practice, teach, or improve agentic engineering, the discipline that combines AI agents with human expertise.

The debate around AI in software development often splits into two extremes: one side claims AI will render developers obsolete; the other insists it is just another tool. Neither view is accurate. AI doesn't replace human expertise; it raises the bar for what developers need to know. The gap between theoretical understanding and practical application is a major source of anxiety for engineers. Many know they should review AI-generated code, maintain the architecture, write tests, and stay in control of the codebase, but applying these principles in practice remains challenging.

This tension led me to experiment with agentic engineering by building a production system from scratch, with AI writing all the code. The goal was to test a structured approach to using AI tools while addressing the complexities of real-world development. I chose Monte Carlo simulations as the project, a decision rooted in my childhood fascination with the technique. My father, an epidemiologist, introduced me to the concept of using simulations to uncover patterns in chaotic data.

#ai #monte_carlo_simulations #drunken_sailor_problem #agentic_engineering #software_development