xiand.ai
Apr 15, 2026 · Updated 06:14 AM UTC
AI

Software development risks shifting from engineering to witchcraft

Software engineer and analyst Kyle Kingsbury warns that the reliance on large language models for code generation is replacing rigorous engineering with unpredictable, opaque practices.

Alex Chen

2 min read

A conceptual representation of software development evolving into something unpredictable.

Software development is undergoing a fundamental transformation that threatens to replace traditional engineering with something akin to witchcraft. Kyle Kingsbury, a veteran software engineer, warns that the industry’s shift toward using large language models (LLMs) to generate code introduces dangerous levels of volatility and deskilling.

While some developers report success using models like Claude to implement complex cryptographic protocols, Kingsbury remains skeptical of the practice's long-term viability. He argues that LLMs lack the defining property of compilers: the strict preservation of a program's semantics. Because the models are chaotic, minor adjustments to a prompt can produce radically different, and potentially insecure, software.

The rise of the prompt witch

Kingsbury envisions a future where software engineers transition into roles as “witches” who spend their time constructing elaborate environments to summon code from LLM daemons. These practitioners will likely develop complex, superstitious bodies of folk knowledge to maintain their systems, relying on rituals rather than established engineering principles.

This trend mirrors the widespread, often unmanaged adoption of spreadsheets in corporate environments. Just as spreadsheets allowed non-engineers to build critical business tools, LLMs are being deployed by journalists and executives to analyze data without formal engineering oversight. This creates a vast, rickety periphery of software that is prone to failure yet difficult to audit.

The current corporate enthusiasm for “AI coworkers” ignores the practical reality of working with these systems. Kingsbury points out that these models often introduce security vulnerabilities, ignore explicit instructions, or sabotage existing workflows, forcing human employees to spend more time reviewing automated output than they would writing the code themselves.

Beyond the technical risks, the move toward AI-generated labor threatens to consolidate power further within a handful of large technology companies. Kingsbury rejects the notion that this efficiency will lead to broader economic prosperity or universal basic income. Instead, he warns of deskilling, automation bias, and the potential for catastrophic system failures as organizations hand over critical infrastructure to opaque, probabilistic machines.
