AI and the State of Software Development

Large language models are transforming software development, but not evenly. They’re revolutionizing some parts of how we build software while barely making a dent in others. The pattern isn’t random. It reveals something fundamental about the nature of software work itself.

The uneven AI takeover in software development

If you write code, you’ve probably noticed AI tools getting eerily good at helping with implementation. Copilot can generate entire functions. Cursor can refactor complex codebases. Replit Agent can build and deploy working applications from a simple prompt.

But if you do product strategy, user research, or architecture planning, AI still feels mostly like an expensive search engine. It can summarize your meetings and generate decent first drafts, but it can’t make the hard judgment calls that define whether your product succeeds or fails.

This isn’t because the models aren’t smart enough. It’s because different parts of software development have fundamentally different characteristics. To understand this divide, we need to map the journey that every piece of software takes and see where AI thrives versus where it struggles.

Think about the last time you used any app. Maybe it was ordering food, booking a ride, or checking your bank balance. Before that app existed, someone had to figure out what people actually needed, design how it should work, and then build the thing that makes it happen. This journey from problem to solution reveals why AI’s impact is so uneven.

The process moves through distinct phases: Discovery and strategy (understanding what problems to solve), Product planning (deciding what to build first), System design (figuring out the architecture), User experience design (creating the interface), Implementation (writing the code), Testing (making sure it works), Deployment (shipping to users), and Analytics (measuring what happens next).

Today’s AI excels at the execution phases: writing code, running tests, deploying systems. But it struggles with the strategic phases that come before. A pattern emerges: AI dominates where work is structured and verifiable, but falters where judgment and context matter most.

This isn’t coincidence. There’s a fundamental axis that explains everything.

The axis: Left and right of code

Every software system is built by moving from ambiguity to clarity, from loose problem spaces to rigid system outputs. That transition happens at implementation, which is why code makes the natural dividing line.

Left of code encompasses the interpretive, strategic work that happens before implementation: discovery and strategy, product planning, system design, and user experience design.

Right of code encompasses the execution work that happens during and after implementation: writing code, testing, deployment, and analytics.

The left side deals with what and why. The right side deals with how and whether it works.

[Figure: the left/right of code axis]

People in different roles tend to operate on different sides of this divide. Product managers, designers, researchers, and leadership mostly work on the left side of code, shaping intent, strategy, direction, and abstraction. Developers, DevOps engineers, QA testers, and data analysts mostly work on the right side of code, implementing, validating, deploying, and iterating.

AI is not uniformly useful across this axis. It is remarkably effective on the right, where tasks are well defined and outputs are verifiable. On the left, it remains mostly shallow, limited not by model capacity, but by the nature of the work itself.

Axis side | Nature of work | Typical activities | Computational complexity
Left of code | Ambiguous, strategic, recursive, interpersonal | Roadmapping, product thinking, architecture tradeoffs, UX | Exponential branching possibility spaces
Right of code | Deterministic, scoped, repeatable, testable | Coding, debugging, testing, infrastructure deployment, analytics | Bounded search spaces with verifiable outputs

Understanding where AI sits today and why reveals where the next breakthroughs must come from.

AI delivers deep value right of code

AI is well suited to tasks that meet three conditions: the inputs are structured, the scope is bounded, and the outcome is verifiable.

Software implementation, especially at the application and infrastructure layer, satisfies all three. Large language models can write and refactor code across multiple files, generate test suites and CI/CD pipelines, modify infrastructure templates, and ship working code with verifiable outcomes.

The depth of value here is profound. The AI is not just accelerating workflow. It is absorbing execution. It closes loops. It owns deliverables. It produces side effects in reality.

This is why Copilot, Cursor, and Replit Ghostwriter feel magical. They operate in deterministic domains with feedback cycles the model can learn from.

Consider debugging a failing CI pipeline. The AI can parse error logs with structured syntax, identify the failure pattern against known templates, generate a fix with measurable success criteria, and validate the solution through automated testing.

The feedback loop is complete and verifiable, and the AI owns the outcome.
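
To make the shape of that loop concrete, here is a minimal sketch of a bounded fix-and-verify cycle. The helpers (run_pipeline, suggest_fix, apply_patch) are hypothetical stand-ins for a CI runner and a code-generating model, not any real tool's API.

```python
# Minimal sketch of the bounded fix-and-verify loop described above.
# run_pipeline, suggest_fix, and apply_patch are hypothetical stand-ins
# for a CI runner and a code-generating model, not a real API.
from dataclasses import dataclass

@dataclass
class PipelineResult:
    passed: bool
    log: str  # raw error output the model can parse

def repair_pipeline(run_pipeline, suggest_fix, apply_patch, max_attempts=3):
    """Try to turn a red pipeline green within a fixed budget of attempts."""
    result = run_pipeline()
    for _ in range(max_attempts):
        if result.passed:
            return True  # verifiable success: the pipeline itself is the judge
        patch = suggest_fix(result.log)   # model proposes a change from the structured log
        apply_patch(patch)                # side effect: the codebase is actually modified
        result = run_pipeline()           # re-run to verify; no human judgment required
    return result.passed
```

The specifics don't matter; what matters is that every step has a checkable outcome. The pipeline either passes or it doesn't, which is exactly the property left-of-code work lacks.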

AI stutters on the left side

The left side of code (strategy, research, architecture planning, product shaping) is not simply earlier than code. It is fundamentally harder for computational systems to navigate.

Organizational context is scattered across tools, conversations, and people, and it is too fuzzy for an LLM to synthesize into useful, definitive actions.

[Figure: fuzzy organizational context]

The challenges go deeper than just having “messier” data. Here’s why AI fundamentally struggles on the left:

Strategic decisions have no single right answer. There is no canonical answer to “what should go on the roadmap” or “what is the right architecture for this bet.” These are interpretive problems, not executable tasks. Strategic decisions exist in exponentially branching possibility spaces that don’t compress well into the patterns current AI systems recognize.

Context is poorly distributed. Meeting notes, Slack messages, and raw customer feedback are filled with ambiguity, social nuance, and historical baggage. Distinguishing useful signals from contextual noise requires memory and judgment that span far beyond what any context window can capture.

Product strategy is divergent and non-deterministic. Exploration is how good teams make good decisions. Premature summarization, AI’s default mode, flattens nuance too early. It resolves what should be suspended.

Organizations operate on tacit knowledge. The best decisions are shaped by prior losses, interpersonal trust, organizational dynamics, and timing. These are not expressed in documents. They are inferred in rooms. AI today does not inhabit rooms.

Strategic work is recursive with no clear completion state. A sentence in a strategy document may create or cancel months of work. Or it may be forgotten. Most left of code work has no done state. AI systems optimized for input/output cycles get lost in this recursive entropy.

GitHub Copilot Chat, when asked to suggest microservices architecture decisions, typically produces generic advice like “consider using event sourcing” without understanding the team’s technical debt, scaling constraints, or organizational readiness. The suggestions look sophisticated but lack the contextual judgment that makes architectural decisions valuable.

What separates valuable AI from impressive AI

This left/right divide explains why some AI tools feel transformative while others feel like expensive autocomplete. The most valuable AI tools share specific characteristics that align with the computational realities of each side.

On the right side, AI delivers value through direct action. When GitHub Copilot suggests a function to parse JSON data, developers can accept the code immediately. When Replit Agent builds a complete web application, it owns the entire deliverable from description to deployment.
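
As an illustration of what accepting a suggestion directly looks like, consider a snippet like the following (a hypothetical example, not an actual Copilot output). It is small, self-contained, and trivially verifiable, so it can be reviewed and merged in seconds:

```python
# Hypothetical example of the kind of suggestion a developer can accept as-is:
# small, self-contained, and easy to verify by running it.
import json
from typing import Any

def parse_json(text: str) -> Any:
    """Parse a JSON string into Python data, with a readable error on bad input."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as err:
        raise ValueError(f"Invalid JSON at line {err.lineno}: {err.msg}") from err
```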

On the left side, AI gets stuck at the suggestion phase. When Notion AI analyzes meeting notes, product managers still need to interpret priorities and navigate organizational context. When ChatGPT helps draft a product roadmap, it can’t weigh technical debt against competitive pressure or understand how the CEO’s unstated preferences shape feasible options.

The pattern holds across six dimensions: actionability (can you use it directly?), outcome ownership (does AI complete full tasks?), context fidelity (does AI understand relevant context?), judgment substitution (can AI make decisions?), loop closure (can AI complete full cycles?), and cross-functional flexibility (does AI work across domains?).

Right side AI consistently scores high on these dimensions because the work operates in bounded, verifiable spaces. Left side AI scores low because strategic work requires navigating ambiguous, context-dependent, socially driven environments that resist computational approaches.

Currently humans drive the left of code

Given these fundamental challenges, it’s no surprise that the left side of code remains largely human territory.

While AI automates the right side of code, the left side remains fundamentally human work. Product managers still need to interpret user feedback, make sense of conflicting stakeholder priorities, and decide which features actually matter. Designers still need to understand user psychology and navigate the tradeoffs between usability and business goals. Architects still need to weigh technical debt against delivery speed based on team context and organizational maturity.

These roles haven’t been disrupted by AI because the work involves judgment calls that don’t have verifiable right answers. As implementation becomes faster through AI assistance, these strategic decisions become more visible as the actual bottleneck in software development.

The result is that left side workers are doing the same interpretive and relationship work they always have, but its importance has become more apparent. When you can build a working prototype in hours instead of weeks, the quality of the initial strategy and design decisions matters more, not less.

This isn’t a temporary situation. The fundamental characteristics that make left side work difficult for AI (ambiguity, context dependence, social dynamics) aren’t going away anytime soon.

How humans and AI work together today

This fundamental divide shapes how humans and AI actually collaborate. The partnership looks completely different depending on which side of code you’re working on.

Every piece of work involves four steps: define the problem or goal, plan the approach, do the actual work, and verify the results. But who owns each step varies dramatically across the code axis.

Left side collaboration: human led

On the left, AI operates as a thin layer that helps with research and drafting, but humans own the critical thinking. A product manager might use AI to analyze user feedback or draft user stories, but they decide which insights matter and what features to prioritize based on business context AI doesn’t have.

Right side collaboration: AI led

On the right, humans operate more as supervisors and exception handlers. A developer might describe what they want to build, let AI generate and refine the code, then review and approve the final result. AI owns the bulk of the define, plan, and do work because the requirements are structured and the success criteria are verifiable.

This explains why left side AI tools feel like assistants while right side AI tools feel like collaborators. The fundamental power dynamic is different.

The organizational intelligence challenge ahead

But this current state isn’t permanent. The most successful AI applications today remain firmly right of code, where tasks are bounded, context is structured, and success criteria are verifiable. Moving leftward requires solving a fundamentally harder problem: organizational intelligence.

The challenge isn’t computational power or model sophistication. It’s that strategic work happens within complex social systems that resist the pattern-matching approaches that work so well for code.

Why current left-of-code AI delivers thin value

AI tools promising to automate product strategy, roadmap planning, and architectural decision-making consistently struggle to be more than expensive summarization engines. They treat organizations like data processing problems rather than complex social systems.

Real strategic context lives in hallway conversations, not structured documents. AI can suggest but can’t own the consequences of strategic decisions. It misses who actually has influence and when organizations are ready for change. Each strategic decision reshapes the context for future decisions in ways that resist computational modeling.

Without organizational intelligence, AI tools left of code become expensive summarization engines. They can process information but can’t drive decisions. They can suggest but can’t execute. They can analyze but can’t synthesize across the complex, ambiguous, socially driven contexts where strategic work actually happens.

What it would take for AI to move left

Moving AI leftward isn’t impossible, but it requires solving problems fundamentally different from the ones we’ve already cracked. The breakthroughs won’t come from bigger models or more training data. They’ll come from teaching AI to think more like humans do when navigating complex, ambiguous situations.

Think about how you make strategic decisions. You don’t just pick the “right” answer. You hold multiple possibilities in your head, weigh competing priorities, and make judgment calls based on incomplete information. You consider cascading effects: choosing to build the mobile app first might delay the web version, affecting Q3 sales targets, potentially influencing the next funding round.

This requires four fundamental capabilities that don’t exist in current AI systems:

AI needs to think in scenarios and consequences over time. Strategic thinking means holding multiple possibilities and understanding how they play out over months. Current AI systems are trained to pick the most likely next word, but strategic work requires maintaining several possible futures and modeling how decisions create cascading effects through complex systems.

AI needs to make judgment calls without clear rules. Strategic decisions involve weighing competing priorities where there’s no single correct choice. Do we optimize for user growth or revenue? Engineering velocity or technical excellence? Strategic AI needs to balance multiple conflicting objectives based on understanding what actually matters to your organization.

AI needs persistent organizational memory that grows with your company. Strategic context doesn’t fit in any context window. Every day brings new meetings, conversations, decisions, and organizational knowledge that shapes future choices. A product decision made six months ago constrains today’s options. A comment the CEO made in passing last quarter might be the key context for understanding why a strategy isn’t working.

AI needs to understand hierarchies, power, roles, and people. Strategic work happens through teams arguing, building consensus, and navigating informal power structures. AI needs to understand not just what people say in meetings, but who actually makes decisions, when teams are ready for change, and how ideas spread through organizations.

Early attempts at organizational intelligence

We’re starting to see companies build toward more sophisticated organizational intelligence:

Conversation intelligence: Tools like Granola record meetings so AI can absorb team discourse patterns, decision-making styles, and recurring concerns. The real value isn’t transcription but understanding how your organization actually communicates and decides.

Ambient context capture: Limitless builds devices to capture all verbal communication, including conversations away from laptops. This ambient intelligence could help AI understand the informal discussions that drive strategic decisions.

Documentation intelligence: Notion and Confluence embed AI to analyze all your docs, sheets, and strategic artifacts. Eventually, design tools like Figma could contribute to this organizational understanding, helping AI grasp not just what you’re building, but why and for whom.

The organizational graph: The breakthrough will come when AI can map your company’s influence networks, knowing who said what, who decides what, and how ideas flow through teams. This isn’t storing conversations but understanding organizational dynamics and power structures.
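
To make the organizational graph idea slightly more concrete, here is a rough, purely illustrative sketch of the kind of structure such a system might maintain. None of these types or fields come from an existing product; they only show that the graph is about people, influence edges, and the decisions they produce, not about storing transcripts.

```python
# Hypothetical sketch of an organizational graph: people as nodes, influence
# and decision rights as edges, and decisions linked to the conversations
# that produced them. All names and fields here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    role: str

@dataclass
class Decision:
    summary: str
    decided_by: str            # who actually made the call
    influenced_by: list[str]   # who shaped it, formally or informally
    source: str                # e.g. a roadmap review, or a hallway chat

@dataclass
class OrgGraph:
    people: dict[str, Person] = field(default_factory=dict)
    influence: dict[str, set[str]] = field(default_factory=dict)  # name -> people they sway
    decisions: list[Decision] = field(default_factory=list)

    def who_shapes(self, name: str) -> set[str]:
        """Everyone with a direct influence edge onto this person."""
        return {src for src, targets in self.influence.items() if name in targets}
```

The hard part, of course, is not the data structure but populating and maintaining it from the ambient, informal signals the tools above are trying to capture.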

Until AI can metabolize this organizational complexity without flattening it, the left side will remain fundamentally human territory. But the companies building toward this organizational intelligence are laying the groundwork for the next major shift in the left/right boundary.

Conclusion

AI is not coming for software work uniformly. It’s infiltrating the deterministic side first: the right side of code where tasks have clear inputs, bounded scope, and verifiable outcomes. Codebases, tests, CI pipelines, and infrastructure deployment are being transformed because they operate in structured domains with feedback loops AI can learn from.

The left side (strategy, research, product shaping, organizational alignment) remains fundamentally human territory. This isn’t a temporary gap waiting to be filled by more powerful models. It’s a reflection of the nature of strategic work itself: ambiguous, recursive, context-dependent, and deeply social.

But the boundary isn’t permanent. Once AI learns to metabolize organizational complexity without flattening it, once it can maintain uncertainty as a strategy rather than rushing to resolve it, once it can navigate the social dynamics that shape every strategic decision, the line will begin to shift leftward.

The companies building toward this organizational intelligence today (capturing conversations, mapping influence networks, modeling judgment under uncertainty) are laying the groundwork for the next major transformation in how software gets built.

Those who understand this boundary will position themselves correctly for when it moves. Those who don’t will find themselves automated by tools they didn’t see coming.

And when that shift happens, we may well be looking at AI cofounders instead of just ‘copilots’.

Interesting times indeed.
