Organized by A Ying
Boris Cherny, the creator of Claude Code, shared insights at a Sequoia conference that were incredibly rich; many of these perspectives were new to me. He clearly has a deep understanding of AI.
I’ll share my summary.
01 Code is no longer scarce
For a wide range of mainstream development scenarios, writing code by hand is becoming an inefficient way to work.
In the past, when delivering a feature, engineers would sit down, think through how to implement it, and then write the code line by line. During this process, the engineer’s greatest value lay in whether they could code, how well they coded, and how quickly they coded.
The way things work now is different.
For the same feature, what engineers do is more like: first clearly defining the requirements, breaking the task into parts and assigning them to agents, setting clear acceptance criteria, then checking whether the results generated by the agents are correct—if not, adjusting the prompts and running it again.
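That delegate-and-verify loop can be sketched in code. This is a caricature, not a real agent framework: `run_agent` and `passes_acceptance` are invented stand-ins for dispatching an agent and for the engineer-defined acceptance criteria.

```python
# Hypothetical sketch of the delegate-and-verify loop described above.
# run_agent() and passes_acceptance() are stand-ins, not a real API.

def run_agent(task: str, prompt: str) -> str:
    """Stand-in for dispatching a coding agent; returns its output."""
    return f"result of '{task}' given '{prompt}'"

def passes_acceptance(task: str, result: str) -> bool:
    """Stand-in for acceptance criteria (tests pass, lints clean, ...)."""
    return "result of" in result

def deliver_feature(tasks: list[str], max_retries: int = 3) -> dict[str, str]:
    """Break a feature into tasks, delegate each to an agent, verify the
    output, and retry with an adjusted prompt when acceptance fails."""
    results: dict[str, str] = {}
    for task in tasks:
        prompt = f"Implement: {task}"
        for _attempt in range(max_retries):
            result = run_agent(task, prompt)
            if passes_acceptance(task, result):
                results[task] = result
                break
            prompt += " (previous attempt failed acceptance; fix it)"
    return results
```

The engineer's work lives almost entirely in the two stand-ins: how the feature is decomposed into `tasks`, and how strict `passes_acceptance` is.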
AI can now handle most coding tasks. Of course, it’s not 100%—there are still large, complex codebases, obscure languages, or specialized environments where today’s models fall short.
Overall, the value of engineers has shifted from whether they can write code to whether they can break down tasks, clearly articulate goals, validate outcomes, and manage agents.
This change is actually very similar to the Industrial Revolution.
Before the Industrial Revolution, a blacksmith did everything themselves—from forging and shaping to polishing and assembling. A skilled blacksmith was naturally valuable.
Later, the assembly line appeared. Each worker was responsible for only one step, yet the overall output increased dozens or even hundreds of times compared to the handcraft era.
At that point, the most valuable person in the factory was no longer the craftsman who excelled at a single task, but the one who could design, manage, and keep the production line running smoothly.
Workers haven't disappeared, but their roles have changed.
Software engineering is now undergoing a similar turning point. Code itself is no longer scarce. The ability to write code is becoming a basic skill, much like knowing how to use PowerPoint.
What's truly scarce is whether you can break down vague requirements into clear tasks, whether you can pick the best option from the several solutions provided by an Agent, and whether you can get a group of AIs to collaborate to accomplish a single goal.
Many veteran engineers initially struggled to accept this. The act of writing code by hand has been a reason many people loved this field for the past several decades.
Giving this to a machine isn't just a change in how many people work—it's a reshaping of their identity.
But a trend is just a trend.
02 Like the Gutenberg printing press
Coding is transitioning from a specialized skill to a fundamental ability. This can be compared to printing technology in 15th-century Europe.
Before the invention of printing, only about 10% of people in Europe could read and write. These individuals were often employed by nobles who could not read to read and write on their behalf.
Then printing technology emerged. In 50 years, the number of books published in Europe exceeded the total of the previous thousand years, and book prices dropped by about 100 times. It took several more centuries for supporting systems—such as education and economic structures—to catch up before global literacy rates reached today’s 70%.
Boris believes that AI's impact on software is an accelerated version of the printing press revolution. Software will become fully democratized within decades, turning into something anyone can master.
Eventually, being able to create software will be as natural as sending a text message.
03 What ability is most important?
Once AI has lowered the barrier to writing code, what truly distinguishes a person’s ability is their product sense and genuine understanding of a specific domain.
For example, two people are developing a product for doctors: one is an engineer who codes quickly, and the other has worked for several years in a hospital’s information department.
In the past, engineers had a higher chance of bringing something to life because they could turn ideas into reality.
Now it’s the other way around. Anyone can turn an idea into reality. At this point, the person who truly understands the daily workflow of a hospital becomes even more valuable—because they know which features doctors will actually use and which only sound reasonable.
In other words, once AI levels the playing field for execution, the differences in judgment become more pronounced.
This directly redefines the meaning of the term "generalist."
In the past, when we talked about a generalist, we usually meant an engineer who could write iOS code, web code, and backend code. Such a generalist was essentially a full-stack engineer within engineering.
The generalist of the future is an interdisciplinary full-stack expert.
Some people understand product, design, and engineering all at once. Others understand product, data science, and engineering together. Such combinations were nearly impossible in the past, as each required extensive specialized training.
But now AI has lowered the entry barrier for each task, allowing one person to operate across multiple fields while still maintaining professional depth.
The Claude Code team is like this: engineering managers, PMs, designers, data scientists, finance staff, and user researchers—all of them write code.
Designers can now run their own interactive prototypes to show the team, rather than just delivering static designs and waiting for engineers to implement them.
Finance teams can now build their own analysis tools to run complex financial models without waiting in line for BI. Colleagues in user research have started running their own data, taking over tasks that previously required coordination with the data team.
Everyone’s depth of expertise remains. But with AI assistance, coding has become a shared language for everyone.
04 The moat of SaaS is eroding
Over the past decade, the SaaS industry has had several widely accepted axioms.
The first is the switching cost. Once a company uses your system, it gradually accumulates years, even decades, of data, configurations, fields, and permission relationships.
Just thinking about migrating all of that makes moving to another system daunting.
The second is workflow locking. All daily operations, cross-department collaboration, and approval nodes are built around this SaaS.
Switching systems isn't just about moving data—it's about tearing down and rebuilding the company's accumulated muscle memory from the past several years.
Together, these two factors formed the deepest moat in the past SaaS industry. But with a sufficiently powerful model, the logic of things begins to change.
First, consider the switching cost. In the past, moving from one SaaS platform to another required engineering teams to work overtime for months just to align fields and replicate data structures.
Now, feed both sides' interfaces and data structures directly into the model, letting it figure out the mapping relationships on its own, gradually climbing toward the optimal solution. What used to take months might now produce a usable version in just a few days.
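A toy sketch of what that migration step looks like once the model has proposed a mapping between the two schemas. The `FIELD_MAP` here is invented for illustration; in practice it is exactly the artifact the model would infer from both systems' interfaces and data structures.

```python
# Apply a model-proposed field mapping to migrate records from one
# SaaS schema to another. The mapping below is made up for illustration;
# it stands in for what the model would infer from both systems.

FIELD_MAP = {               # legacy flat field -> new nested field path
    "cust_name": "customer.name",
    "cust_tel":  "customer.phone",
    "amt_due":   "invoice.balance",
}

def migrate_record(old: dict) -> dict:
    """Rename flat legacy fields into the new system's nested schema."""
    new: dict = {}
    for old_key, path in FIELD_MAP.items():
        if old_key not in old:
            continue
        node = new
        *parents, leaf = path.split(".")
        for part in parents:               # walk/create nested objects
            node = node.setdefault(part, {})
        node[leaf] = old[old_key]
    return new
```

The hard part was never applying the mapping; it was producing it, which used to take engineers months of reading both systems' documentation.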
Now, looking at workflow lock-in, it’s even more interesting. In the past, workflows were able to lock in customers because these processes were inherently complex, opaque, and reliant on human intervention.
The unwritten rules in employees' minds about who needs to approve what and at which step things get stuck can't be directly transferred.
But models like Opus 4.7 excel precisely at understanding, breaking down, and rebuilding a complex process in a new environment—often creating a version that’s even smoother than the original.
So the moat built on data accumulation and process consolidation is breaking down.
This may be bad news for those building SaaS, but it’s a real window of opportunity for all SaaS customers and teams preparing to build the next generation of SaaS.
05 The best time for entrepreneurs
The number of startups that will truly disrupt the industry over the next 10 years could be 10 times greater than in the past 10 years.
The reason is actually not complicated.
Small teams can use AI to build products that match or even surpass those of large companies. For large companies, meanwhile, truly leveraging AI can turn what they have accumulated into a liability.
Why is that?
A company with over a decade of history has developed its own complete set of business processes, role divisions, collaboration habits, training systems, and KPI evaluations. These elements were once assets and barriers.
But truly embedding AI means reexamining everything: business processes must be restructured, all employees need retraining, and every step forward encounters significant internal resistance, requiring coordination across N departments and N layers of approval.
A three-person startup team has treated AI as the default foundation from day one. They have no legacy baggage to dismantle, no habits to change, and no one to convince. They settle the discussion today, build a demo tomorrow, and put it in front of users the day after.
This speed difference already existed before AI. Startups inherently had a speed advantage over large companies. But AI has amplified this gap many times over.
Why?
The stronger the AI, the greater the leverage an individual can exert within a given time frame. A small team that truly leverages AI today may produce as much as ten people did in the past, and tomorrow may produce as much as thirty.
But the organizational weight of large companies hasn't lightened; it has grown heavier from the effort of absorbing AI. The stronger AI becomes, the wider the gap grows between the acceleration of small teams and the drag on large corporations.
This is what Boris means by negative assets. It’s not that large companies lack money, people, or willingness—it’s that the very capabilities that once made them profitable are now getting in the way of AI realizing its true value.
06 MCP will not die
MCP will not die.
After Agent Skills became popular, many people felt that MCP was no longer necessary. The creator of OpenClaw held a similar view.
But Boris doesn’t see it that way. He believes MCP will become the software glue layer of the AI era.
In the past, software connected over the internet via APIs.
But the core issue with APIs is that they are designed for engineers. To use an API, you must first read the documentation, request a token, write code, map fields, and handle exceptions. In short, APIs are built for human developers.
MCP is different. A model can understand an MCP server and invoke it directly, with no programmer translating in between.
So Boris frames the API as a human developer interface and MCP as a model interface: one is designed for people, the other for models.
The pattern is familiar: in the mobile internet era, every service was expected to expose an API; in the AI era, every service will be expected to expose MCP.
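In practice, wiring a service in over MCP is a small configuration entry rather than an integration project. A minimal example, using the official filesystem server from the MCP project (the directory path is illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Once registered, the model discovers the server's tools and calls them itself; no one writes glue code against its endpoints.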
07 Computer Use is still important
Many people discussing Computer Use now feel this direction may not work out.
Their reasoning is fair: it burns too many tokens, runs slowly, and is unstable. It looks more like a flashy demo than something truly usable.
But Boris sees it on an entirely different level.
What he truly values is that Computer Use addresses one of the biggest pain points in AI deployment: in the real world, there are countless systems that lack both APIs and MCP.
Especially in the corporate world.
Once you've worked inside a large company, you realize that many of its core systems are very outdated: ERP, OA, financial systems, internal approval workflows, supply-chain backends, and assorted custom tools. Many lack APIs, documentation, or any automation hooks. They simply sit there, operated manually by countless employees every day.
Why not just create an API for them?
Because it's no longer feasible. The vendors that developed these systems may no longer exist. The IT department lacks both the motivation and the budget to refactor them.
The business units certainly can't afford to wait six months or a year. These systems will never wait for a perfect API to save them.
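In caricature, a Computer Use agent drives such a system the way an employee does: look at the screen, decide, act, repeat. A stubbed sketch of that loop follows; `screenshot`, `decide_action`, and the scripted screens are placeholders for a real vision model and OS automation layer, not an actual API.

```python
# Stubbed sketch of a Computer Use loop against a legacy system with no
# API. screenshot() and decide_action() are placeholders standing in for
# a real screen capture + vision model; the screens here are scripted.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    target: str = ""   # UI element, as perceived from pixels

def screenshot(step: int) -> str:
    """Placeholder: capture the legacy app's current screen state."""
    return "invoice form" if step == 0 else "confirmation dialog"

def decide_action(screen: str) -> Action:
    """Placeholder: the model chooses the next action from the screen."""
    if screen == "invoice form":
        return Action("click", "Submit")
    return Action("done")

def operate(max_steps: int = 10) -> list[Action]:
    """Observe-decide-act until the model reports the task is done."""
    trace: list[Action] = []
    for step in range(max_steps):
        action = decide_action(screenshot(step))
        trace.append(action)
        if action.kind == "done":
            break
    return trace
```

The loop itself is trivial; everything hard lives in the two placeholders, which is exactly where model capability is improving.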
In the short term, the major model providers will likely keep strengthening their Computer Use capabilities.
