Anthropic President Emphasizes Human Values in the AI Era at Stanford Talk

Summary: Anthropic President Daniela Amodei spoke at Stanford, emphasizing the importance of human values in the AI era. A literature major, she argued that empathy and communication cannot be replaced by technical metrics. Amodei said that AI supports rather than replaces human work, and warned against the over-reliance that leads people to stop thinking for themselves. She added that Anthropic's hiring prioritizes uniquely human qualities, not just technical skills.

Article author and source: Toutiao

Recently, Daniela Amodei, co-founder and president of Anthropic, gave a 50-minute speech at Stanford Graduate School of Business.

Unlike the leaders of many AI giants, she did not come from a computer science background; she graduated in English literature.

In her talk, she argued that even at the frontier of hard technology, the scarcest resource remains the oldest form of human wisdom.

Liberal arts generalists are more competitive than CS majors.

Daniela graduated in 2009 during the financial crisis. She jokingly describes her resume as a “history of eras”: from international development to a political aide on Capitol Hill, then to an early member of Stripe.

She believes that "career plans" are often narratives constructed after success; at the time, she applied only three filters: What am I good at? What am I passionate about? What can make a big impact?

And this generalist background has become a unique competitive advantage in the field of AI.

Faced with neural networks and scaling laws, her approach was incredibly simple: keep asking questions until she understood.

She deeply understands that her comparative advantage does not lie in writing code, but in understanding the “lanes”—knowing where technical experts are charging forward and where she is building bridges.

She said that if she could live her life again, she would still choose literature.

During her speech, Daniela strongly recommended the historical work "The Guns of August." In her view, the book examines how individual personalities accumulate layer by layer, ultimately leading to catastrophic global consequences.

This is no different from the strategic game she faces daily in the field of AI: how individual algorithmic decisions gradually escalate into consequences that reshape civilization.

According to her, when Anthropic hires, it places special emphasis not on computer science expertise, but on human qualities such as good communication skills, high emotional intelligence, kindness, curiosity, and a willingness to help others.

Daniela also shared that she is often asked by CEOs: "My daughter is a sophomore at Stanford; she was planning to study computer science—should she still do it?"

Her response was that software developers will still exist, but they won't write as much code.

The parts of a developer's work that involve communicating with product managers and collaborating with clients will expand, while the parts AI can more easily handle will shrink.

Before starting a business together, take a vacation and share a room.

Regarding how to choose a partner, Daniela offered a down-to-earth suggestion: “Before starting a business together, go on vacation and share a room.”

She said that if, after the vacation, you still want to be around that person, it's the right partnership.

At the end of 2020, Daniela left OpenAI with her brother Dario and five core team members to found Anthropic, a move often interpreted by outsiders as a "defection."

She defined this experience as “running toward” an organizational vision that inherently values safety and responsibility.

According to her, the seven co-founders of Anthropic share a deep network of trust: she is Dario’s younger sister, and the two have argued for 40 years—this level of honesty, where they can drop their masks and speak the harshest truths, has been the anchor that keeps the company steady amid the turbulent waves of AI.

She also proposed a simple test for alignment: the "drawing in the room" experiment.

If the co-founders each draw the company’s vision in separate rooms—one a unicorn, the other an echidna—that lack of alignment would be devastating.

AI complements work more than it replaces it.

Daniela explained that currently, AI primarily plays a role of "complementary skills" in the workplace—helping people do their jobs better rather than replacing them directly.

Cases of complete replacement are rare and primarily concentrated in the customer service sector.

She made a joke on stage: If you email Comcast, you probably won’t get a reply from a real person—but that was likely already true five years ago.

As of March this year, 49% of professions had at least one-quarter of their tasks completed using Claude, and highly experienced users not only attempted higher-value tasks but also achieved significantly higher success rates.

However, large-scale replacement has not yet occurred.

She believes that job displacement is only surface-level; the more fundamental issue is that when AI can perform a large volume of everyday productive work, the relationship between work, meaning, and social life needs to be reinterpreted.

These three things have been bundled together for the past several decades, and in the future, they may become decoupled. But Daniela didn’t provide an answer; she believes society needs to start practicing how to adapt to this change.

Learning or cheating? AI is causing people to give up thinking.

The most alarming part of the speech came from a survey of 80,000 users.

Daniela discovered a paradox: the places where people rely most on AI are often the very places they fear most.

Research reveals an unnamed but widespread anxiety: "My brain doesn't need to turn on anymore."

This feeling is different from the passive consumption of scrolling through short videos; it is an active regression—because AI is so convenient, humans are beginning to choose to give up seeking their own ideas.

Daniela frankly admitted, "Claude also makes mistakes, but people are starting to get used to believing it outright."

To this end, Anthropic is committed to developing a "Socratic questioning" learning model designed to engage, rather than shut down, the user's mind.

A sharp contrast: handing homework to ChatGPT to answer directly already has a ready-made name: cheating.

Using Claude’s learning mode is like having a personal tutor who understands you and knows why you chose this course.

The former shuts down the brain, while the latter activates it.

She believes that in the AI era, the line between "cheating" and "learning" is thin and deserves careful attention.

Bedside manner

When AI surpasses humans in diagnosis, programming, and management coaching, what remains uniquely human?

Daniela gave an incredibly warm answer: “Bedside manner.”

She used the medical profession as an analogy: while AI's diagnostic capabilities will inevitably surpass those of humans, it cannot give patients the feeling of being genuinely cared for by a doctor.

Medical literature suggests that patients with good relationships with their doctors indeed have better clinical outcomes. This is difficult to explain, but possible reasons include doctors making greater efforts to understand their patients’ conditions and perhaps ordering additional, unexpected tests.

The ability to understand and empathize—making people feel better—will be worth five times more after AI takes over intellectual tasks.

Even in a managerial role, she found that Claude could keenly identify management blind spots she hadn’t even realized she had, by analyzing past performance reviews—and even suggested, “You should get a coach.”

She also gave an easier-to-understand example.

She has two children, nearly five years old and nearly one year old. She said the best thing Claude has helped her with is guiding her through potty training—empathetic, practical, and even illustrated.

She said that every time she searches Google for "Is there something wrong with my child," the answer is always "yes"; Claude is more balanced and interactive, offering immense value to overwhelmed parents.

She said that, in her own experience, Claude provided the correct answer more often than her doctor on complex cases.

Even so, she would never take action without a licensed doctor.

Being a good person and doing good deeds leads to successful business.

Faced with the AI bubble theory and the risks of capital expenditure in 2026, Daniela describes the feeling of being at the heart of the storm with one word: harrowing.

Faced with great uncertainty, she left two pieces of advice for the next generation of entrepreneurs.

First, do what you truly care about.

She said it sounds so clichéd that she almost didn’t want to say it, but this advice truly shines when things aren’t going well, aren’t fun, or are painful.

You must be able to return to the beginning and remember why you started.

Second, doing business and doing good are not contradictory.

She believes this is a relatively new idea that has only emerged over the past five to ten years. She rejects the notion that “only cold, uncomfortable people can succeed in business.”

“There is a clear positive correlation between the desire to do good and running a successful business,” she said.
