Last night, Anthropic did not release a new Claude model. Instead, it launched something that looked decidedly "boring": The Anthropic Institute (hereinafter TAI).
Compared with Harness Engineering, tipped to be the hot topic of 2026, TAI takes on far more ambitious challenges. According to Anthropic's published research agenda (anthropic-institute-agenda), TAI focuses on four areas: economic diffusion, threats and resilience, AI systems in practical applications, and AI-driven research and development. TAI has also issued a global call for researchers to collaborate on these problems.

(Source: X @Anthropic Official)
In other words, Anthropic has set up an in-house team whose primary job is to study how humans and AI affect each other:
- How will AI impact employment and the economy?
- What new security risks will this introduce?
- Will human behavior and judgment change after genuinely using AI?
- When AI begins to assist in developing stronger AI, how should this acceleration be understood and constrained?
Many readers may see this as just another routine move by an AI company, but Lei Technology believes it could be the most significant thing Anthropic has done recently. TAI's positive impact on the AI industry and on humanity may prove comparable to Google's historic "Don't be evil" declaration for the internet industry. That is why Lei Technology calls this a "launch" no less momentous than a major model upgrade.
AI is profoundly impacting the economy: it's not just about jobs
TAI's primary research focus is Economic Diffusion.
Looking back at the first three industrial revolutions in human history, whether it was the spinning jenny, the roaring steam engine, or later electricity and assembly lines, they essentially replaced extremely cheap and repetitive physical labor. However, the fourth industrial revolution sparked by AI is fundamentally different—it directly enters the realm of human intellectual work, our most prized domain.
TAI, however, points out the core contradiction: the tools have been upgraded, yet workers' circumstances have actually worsened.
In its research agenda, TAI asks: if three people leveraging large models can one day accomplish what once required 300, what does such a company become?
Designers can use AI to instantly handle the most tedious layers and assets; programmers can use AI for Vibe Coding... Even if AI boosts productivity by 75%, that will not shrink the human workday from 8 hours (or even a "996" schedule) down to 2. Instead, humans may simply end up doing five times as much work.
What TAI cares about is this new logic: with AI, your workload multiplies several times over. To quantify it, TAI has introduced a new term, the Anthropic Economic Index. Anthropic says it will not merely publish obscure academic papers; it plans to dig out real data and present it clearly to the public: in exactly which industries is AI quietly replacing human jobs? Will newcomers be eliminated right out of the gate?
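The mechanics of the index are not public. As a rough illustration only, here is a minimal sketch of how anonymized usage logs might be rolled up into a per-occupation "automation share"; every record, field name, and category below is hypothetical:

```python
from collections import defaultdict

# Hypothetical usage records: (occupation, mode), where mode is
# "automation" (AI performs the task) or "augmentation" (AI assists a human).
records = [
    ("programmer", "automation"), ("programmer", "augmentation"),
    ("designer", "augmentation"), ("designer", "automation"),
    ("programmer", "automation"), ("paralegal", "automation"),
]

def automation_share(records):
    """Fraction of AI usage per occupation that fully automates the task."""
    totals, automated = defaultdict(int), defaultdict(int)
    for occupation, mode in records:
        totals[occupation] += 1
        if mode == "automation":
            automated[occupation] += 1
    return {occ: automated[occ] / totals[occ] for occ in totals}

print(automation_share(records))
# programmer: 2 of 3 uses automated, designer: 1 of 2, paralegal: 1 of 1
```

A real index would need to weight tasks by economic value and sample far more broadly; the point here is only the shape of the aggregation.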

(Source: AI-generated)
Moreover, TAI has brought this accounting into the real world. We all know that large models are insatiable “gold-consuming beasts”—every time we use AI to generate text, images, videos, or even make a simple query, we consume vast amounts of tokens. At the base level, tokens represent computational power; beneath that lie chips, storage, and electricity; and going even deeper, there are carbon emissions, capital, and more. Resources are always limited, and when society channels massive amounts of resources toward AI, other industries will inevitably be affected.
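To see how that chain of costs composes, here is a back-of-the-envelope sketch. Every constant below is an invented placeholder, not a measured figure for any real model or power grid:

```python
# Illustrative per-token accounting chain: tokens -> energy -> carbon -> cost.
# All three constants are placeholder assumptions, not measured values.
JOULES_PER_TOKEN = 0.3          # assumed inference energy per token
GRID_KG_CO2_PER_KWH = 0.4       # assumed grid carbon intensity
USD_PER_KWH = 0.10              # assumed electricity price

def footprint(tokens: int) -> dict:
    kwh = tokens * JOULES_PER_TOKEN / 3.6e6   # joules -> kWh
    return {
        "kwh": kwh,
        "kg_co2": kwh * GRID_KG_CO2_PER_KWH,
        "usd_electricity": kwh * USD_PER_KWH,
    }

# A billion tokens: roughly 83 kWh and 33 kg of CO2 under these assumptions.
print(footprint(1_000_000_000))
```

Whatever the true constants are, the structure is the same: token counts multiply down through compute, electricity, and carbon, which is why heavy AI demand ripples into other industries.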
In 2026, what everyone felt most directly was that AI-induced shortages of memory and storage drove broad price increases in consumer electronics, even making smartphone manufacturers less willing to launch new models. Yet at the same time, every smartphone maker is pinning its hopes on using AI to reshape product logic and extend the smartphone lifecycle, and OpenAI's native AI phone is already on the roadmap. As everyone benefits from AI, more industries are being profoundly affected, both positively and negatively.
TAI's "economic index" turns AI's impact on the economy from an abstract feeling into a data model: only a problem that is clearly understood can be solved.
The Ultimate Crisis: Humanity Is Outsourcing Its Brain
If losing one's job is being cut slowly with a dull knife, then AI's reshaping of how humans think is a direct wound.
The first casualty will inevitably be the internet. It is not hard to notice that today's internet is turning into a mountain of garbage: searching for travel guides once easily surfaced helpful posts warning about pitfalls and traps, but now the results are flooded with AI-generated content that looks polished and well formatted yet amounts to serious-sounding nonsense.
Worse still, AI has lowered the barrier to entry for gray-market activities to zero: scammers can use AI for deepfakes to spread false rumors, clone voices of loved ones to carry out telecom fraud, and destroy ordinary people’s lives by simply burning a few tokens.
TAI also noticed a deeper crisis: AI is quietly making humans increasingly "dumber."
Chinese users have photographed unfamiliar wild mushrooms and asked AI, "Can I eat this?", only for the AI to confidently identify a highly toxic mushroom as a delicious edible champignon. In another case, a child held up a mousetrap and asked AI what it was; the AI solemnly analyzed it as a "square, metal-structured discarded go-kart toy," prompting the child to touch it out of curiosity and get a finger caught in it.
These news stories sound like dark jokes, but they reveal a phenomenon: AI’s most defining trait isn’t intelligence—it’s “mysterious confidence.” AI can never achieve 100% accuracy; even Google Gemini’s latest model, at around 91% factual accuracy, represents a high level of performance. Yet many users, often unconsciously, have stopped thinking for themselves and habitually outsourced all decision-making to a string of code.
In response, TAI posed a thought-provoking question: When a large portion of society turns to just two or three large models for advice, what terrifying form of “homogenization” will occur in human patterns of thought and problem-solving? You think you’re using AI tools to boost productivity and cognitive ability, but in reality, you’re outsourcing your brain. In other words, if everyone begins relying on AI, humanity may lose its capacity for independent thinking, turning all human minds into identical replicas cast from the same mold.
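The cited ~91% figure makes the compounding problem easy to see: if every decision in a chain is delegated to a model with a fixed per-answer accuracy, the chance that the whole chain is right decays exponentially. A minimal illustration (the independence assumption is a deliberate simplification):

```python
def chain_reliability(per_answer_accuracy: float, decisions: int) -> float:
    """Probability that every one of `decisions` independent delegated
    answers is correct, assuming a fixed per-answer accuracy."""
    return per_answer_accuracy ** decisions

for n in (1, 5, 10, 20):
    print(n, round(chain_reliability(0.91, n), 3))
# 0.91^5 ≈ 0.624 and 0.91^10 ≈ 0.389: delegating ten decisions in a row
# to a 91%-accurate model leaves well under a coin flip's reliability.
```

This is exactly why habitual, unchecked outsourcing of judgment is riskier than any single answer suggests: the per-answer error rate looks small, but chains of delegated decisions do not.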
AI has dual uses; how can we prevent an intelligence explosion?
TAI also introduced a new concept: dual-use capabilities. The official explanation is: if an AI model becomes more capable in biology, it can not only be used to develop new drugs but also to create extremely lethal biological weapons; if an AI is highly skilled at writing code, it is not only a great programmer but also a hacker capable of easily infiltrating national networks.

(Source: Anthropic official)
What kind of chaos would ensue when this dual-purpose monster is widely integrated into the brains of autonomous vehicles, heavy robotic arms in factories, security systems, and drone swarms? On a phone, AI might pop up with, “Sorry, I made a mistake”; but in the real world, a one-second recognition error means a genuine workplace safety incident.
Moreover, large models can be updated every few weeks, while humans take years to amend regulations or improve insurance systems. The gap between these timelines represents a period of maximum vulnerability—a “naked” state with no protection. When AI-induced disasters occur, today’s society simply lacks the resilience to withstand them.
To address this, TAI established the Frontier Red Team. Its mission is simple to state: every day, devise new ways to attack and manipulate the AI agents Anthropic has built, in order to understand just how much damage these systems could cause in the real world, and to erect a defensive barrier before society's outdated systems collapse entirely.
Previously, human programmers dictated the pace of AI evolution, but today, advanced large models can read papers and write code on their own—and may soon be capable of developing the next generation of large models. As AI’s self-replication accelerates, technological advancement will soon outpace human understanding.
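That acceleration can be sketched as a toy feedback loop: if each unit of AI capability also speeds up AI R&D itself, progress compounds geometrically instead of growing linearly. All rates below are illustrative assumptions, not estimates of any real system:

```python
def progress_with_feedback(steps, base_rate, feedback):
    """Toy model: cumulative capability where each step's output
    multiplies the R&D rate by (1 + feedback). Purely illustrative."""
    capability, rate, history = 0.0, base_rate, []
    for _ in range(steps):
        capability += rate
        rate *= 1 + feedback   # AI helping AI: the rate itself grows
        history.append(capability)
    return history

no_feedback = progress_with_feedback(10, 1.0, 0.0)    # linear: 1, 2, ..., 10
with_feedback = progress_with_feedback(10, 1.0, 0.5)  # compounding
print(no_feedback[-1], round(with_feedback[-1], 1))   # 10.0 vs ~113.3
```

Even a modest feedback term makes the curve pull away from the linear baseline within a few steps, which is the sense in which "technological advancement outpaces human understanding."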

(Source: AI-generated)
To prepare for this potentially imminent singularity, TAI has introduced another new concept: running "fire drills" for an intelligence explosion.
In short, TAI is preparing to bring together top lab executives and government officials from around the world for a simulation: they aim to test in advance whether humanity has the capacity to hit the brakes before an intelligence explosion truly occurs.
Growing while governing: Anthropic seriously hits the brakes
At a time when the entire industry is charging ahead blindly, Anthropic’s move to establish TAI is truly admirable.
Across the hall, OpenAI constantly trends either because of executive departures and internal power struggles, or because of its messy legal battles with Musk. Meanwhile, many AI companies post dismal financial results yet desperately "game the rankings" while raising funds everywhere, leveraging inflated valuations to absorb social capital. The issues TAI wants to discuss have long been debated in the industry, but most AI giants respond with "who cares, just grow first." In this highly speculative atmosphere, Anthropic has hit the brakes, openly airing these unsavory messes and signaling a new stance on AI: growth alongside governance.
Anthropic is not a charity; it is not acting out of pure altruism, but playing a sophisticated business game. Today's deep-pocketed investors and governments are wary of AI mishaps: a model whose performance runs slightly higher or lower is fine, but what they fear most is it suddenly going rogue and causing a disaster that spirals out of control. Through TAI, Anthropic has crafted the persona of the "sane one," reassuring users and earning global trust.

(Source: AI-generated)
The TAI announcement explicitly states that all of TAI's research findings and early warnings will feed directly into Anthropic's core governance body, the Long-Term Benefit Trust (LTBT). The LTBT's mission is to closely monitor the company's business decisions, ensuring that every action Anthropic takes serves the long-term benefit of all humanity rather than short-term financial gain.
It is exactly like Google's famous motto, "Don't be evil": through TAI, Anthropic is telling the world that while others race to go faster, it is not only moving quickly but also researching how to stop safely.
Expecting tech giants to self-regulate is indeed absurd. But in an era where everyone is racing forward blindfolded with the accelerator welded down, a leading player voluntarily establishing TAI, a research institute that puts real money into economic indicators, intelligence-explosion simulations, and the study of human cognitive decline, deserves attention. That is why Lei Technology opened by saying that the launch of TAI matters more than a new Anthropic model.
Attachment: TAI official agenda, translated by Google Gemini
At the Anthropic Institute (TAI), we will use the vantage point of a frontier lab to study the impact of artificial intelligence on the world and share our findings with the public. Here, we share the questions driving our research agenda.
Our research agenda primarily focuses on the following four areas:
- Economic diffusion
- Threats and Resilience
- Artificial intelligence systems in practical applications
- AI-driven research and development
In the article "Core Perspectives on AI Safety," we noted that conducting effective safety research requires close engagement with state-of-the-art AI systems. The same principle applies to conducting effective research on the impacts of AI on security, the economy, and society.
At Anthropic, we have already begun to see fundamental changes in fields like software engineering. We are watching Anthropic's internal economy begin to shift, the systems we build face new threats, and early signs of AI accelerating the development of AI itself. To fully realize the benefits of AI progress, we aim to share as much of this information as possible. We are studying how these dynamics will affect the external world and how the public can help steer these transformations.
At TAI, we study the real-world impacts of artificial intelligence from the perspective of cutting-edge laboratories, then publish these findings to help external organizations, governments, and the public make more informed decisions about AI development.
We will share our research findings, data, and tools to make it easier for individual researchers and institutions to pursue these research topics. Specifically, we will share:
- Higher-frequency, more granular insights from the Anthropic Economic Index into AI's impact on and applications in the labor market, intended to serve as an early-warning signal for major shifts and disruptions.
- Research into which social sectors most need investment to build resilience against the new security risks posed by artificial intelligence.
- A more detailed look at how Anthropic is using new AI tools to accelerate progress, and the implications of potential recursive self-improvement in AI systems.
TAI will influence Anthropic’s decisions. This may manifest as the company sharing data it would otherwise not disclose (such as economic indicators) or releasing technology in different ways (such as cyber threat analyses that provide data support for initiatives like the "Glass Wing" project).
We anticipate that the research conducted by the TAI Institute will increasingly serve as a key reference for the Long-Term Benefit Trust (LTBT). The LTBT’s mission is to ensure that Anthropic continuously refines its actions to advance humanity’s long-term interests. This research agenda was developed in collaboration with the LTBT and employees across Anthropic’s teams.
This is a dynamic agenda, not set in stone. As evidence accumulates, we will continuously refine these issues, and new topics not covered today are likely to emerge. We welcome feedback on this agenda and will revise it based on insights gained through discussion.
If you're interested in helping us address these questions, we welcome you to apply to become an Anthropic researcher. This four-month program is guided by members of the TAI team, and you’ll have the opportunity to research one or more relevant issues. Learn more and apply for the next cohort here.
Our research agenda:
Last updated date: May 7, 2026
Economic diffusion
It is crucial to understand how the deployment of increasingly powerful AI systems is transforming the economy. We also need to develop the necessary economic data and forecasting capabilities to choose AI deployments that benefit the public.
To address the questions raised in this pillar, we will further refine the data in the Anthropic Economic Index. We will also explore additional methods to improve our models of how advanced artificial intelligence affects society, whether through job displacement, unprecedented economic growth, or other effects.
The Application and Diffusion of Artificial Intelligence
- Who is adopting artificial intelligence? AI development is concentrated among a few companies in a handful of countries, but its deployment is global. What determines whether a country, region, or city can access AI? If they can access it, how do they derive economic value from it? Which policies and business models can effectively change this dynamic? How do open-weight or free-weight models contribute to shifting this landscape?
- Artificial intelligence applications at the enterprise level: Why are companies adopting AI, and what are the consequences? How does AI change the scale at which a business or team can achieve maximum efficiency? How concentrated are AI applications across enterprises? How do changes in the concentration of AI adoption translate into profit margins and labor shares? If a team or company of three can now accomplish what previously required 300 people, how will the structure of industries change? Alternatively, if firms can more easily aggregate knowledge and this aggregation yields economies of scale, will we see larger, more expansive firms with greater incentives to systematically monitor employees?
- Is artificial intelligence a general-purpose technology? Does AI follow the pattern of previous "general-purpose technologies," adopting most rapidly in profitable commercial applications and most slowly in areas where social returns exceed private returns? Are there policies or decisions that can alter this trend?
Productivity and economic growth
- Productivity growth: What impact will artificial intelligence have on the pace of innovation and productivity growth across the entire economy?
- Share the gains: What pre-allocation or reallocation mechanisms can effectively broaden the distribution of benefits from AI development and deployment?
- Market transaction costs: How does artificial intelligence affect trading systems and transaction costs in markets? When does having an agent represent you improve market efficiency and fair outcomes, and when does it not?
Broad labor market impact
- Artificial Intelligence and Employment: How will AI transform employment across the economy? As AI automates existing economic processes, what new tasks and job roles may emerge? How will these changes differ across regions and countries? Our Anthropic Economic Index survey will provide monthly insights into how people perceive AI's impact on their jobs and their expectations for the future. We will also update the index to deliver more frequent and granular data.
- Can the pace of artificial intelligence adoption be regulated? Central banks use policy rates and forward guidance as "levers" to curb inflation. Similarly, could AI companies, in collaboration with governments at the industry level, employ analogous regulatory mechanisms to control the pace of AI adoption on an industry-by-industry basis? Would such an approach yield significant public benefits?
The Future of Work and the Workplace
- Workers' perspectives on their jobs: How do workers across various industries view occupational changes? How much influence do they have over these changes? Can the power of "workers" be preserved or transformed?
- Professional talent development system: Many industries rely on entry-level positions—such as paralegals, junior analysts, and assistant developers—to cultivate future senior professionals. If artificial intelligence replaces the types of work traditionally used to accumulate expertise, how will people initially gain the experience needed to become experts? What does this mean for the long-term pipeline of advanced talent in a given field?
- Learning for the Future: What should people learn today to prepare for the future? What careers will exist in the future? How will artificial intelligence transform the way we learn and develop professional skills?
- Paid work roles: If artificial intelligence significantly reduces the central role of paid work in human life, under what conditions can people reallocate their time and energy to other meaningful sources? What can we learn from historical or contemporary groups for whom work was scarce or unnecessary? How should society respond to this shift?
Threats and Resilience
AI systems often enhance multiple capabilities simultaneously, including dual-use capabilities. For instance, an AI system with improved biological capabilities is more likely to be used to create biological weapons. An AI system with strong computer programming skills is more likely to infiltrate computer systems. If we can better understand the threats that AI systems may exacerbate, society will be better equipped to respond to this evolving threat landscape.
We raise these questions to help build partnerships that enhance the world’s ability to respond to transformative AI and to establish early warning systems for emerging threats. Many of these questions will guide our cutting-edge red team research agenda.
Assess risk and dual-use capabilities:
- Dual-use technology: Powerful artificial intelligence is inherently dual-use—it can enhance tools for healthcare and education, but also be used for surveillance and repression. Can we build observability tools to understand whether and how this is happening?
- How to price risk appropriately: What effective, market-driven approaches can enhance society’s resilience to anticipated threats from AI systems? Can we develop new risk pricing methods, or create technological tools and human organizations, to build resilience before predictable threats—such as enhanced AI-powered cyberattack capabilities—emerge?
- Balance of offense and defense: Will the capabilities empowered by artificial intelligence fundamentally favor attackers in areas such as cyberspace and biosecurity? When AI is applied to more traditional domains, such as increasing integration with command and control systems, does it also benefit attackers? More broadly, how will artificial intelligence alter the nature of human conflict?
Implement risk mitigation measures:
- Crisis response planning: During the Cold War, the U.S. president had a direct hotline to the Kremlin for use in a nuclear crisis. If an AI system triggers a crisis, what geopolitical infrastructure would be needed? This infrastructure may need to exist not between nations, but between companies.
- Faster defense mechanisms: AI capabilities can make significant advances within months, while regulatory, insurance, and infrastructure responses often take years. How can we bridge this gap? Can defense mechanisms such as automated patching, AI-powered threat detection, or pre-deployed response capabilities keep pace with the speed and scale of AI-driven attacks? Or is this asymmetry structural? And how can we deploy these defense mechanisms as effectively as possible?
Intelligence capabilities for monitoring
- The impact of artificial intelligence on surveillance: How will AI change the way surveillance operates? Will it reduce surveillance costs, improve surveillance efficiency, or both?
Artificial intelligence systems in practical applications
Interactions between people, organizations, and artificial intelligence systems will be a major source of social change. Understanding how AI systems may transform the individuals and institutions that interact with them is a core focus of our social impact team. To study these changes, we are enhancing existing tools and developing new ones to support research, ranging from software that improves platform observability to tools for conducting large-scale qualitative surveys.
The impact of artificial intelligence on individuals and society:
- Collective epistemology: What happens to our epistemology when a large portion of the population relies on the same few models? Can we find ways to measure large-scale shifts in beliefs, writing styles, and problem-solving approaches caused by the shared use of artificial intelligence?
- Critical thinking: As AI systems become increasingly powerful and trustworthy, how can we detect and prevent the erosion of human critical thinking skills due to growing reliance on AI judgments?
- Interfaces: The interface of a technology shapes how people interact with it; television turns people into passive viewers, while computers make it easier for people to become active creators. What kind of interface can we build so that AI systems enhance and promote human autonomy?
- Managing human-machine collaboration systems: How can humans effectively manage teams composed of humans and AI systems? Conversely, how can AI systems manage teams made up of humans, AI, or a combination of both?
Identify the significant impacts brought by artificial intelligence:
- Behavioral impact: Just as social media has influenced changes in human behavior, artificial intelligence may also shape human actions. What monitoring or measurement methods can help researchers understand these dynamic changes?
- Promote research: Are there transparent mechanisms and tools that enable the general public—not just leading AI companies—to easily study real-world AI applications?
Understanding and Managing AI Models:
- System "Values": What "values" do AI systems express? How are these values related to the methods used to train the system? More specifically, how can we measure the impact of an AI system's "constitution" on its behavior after deployment? We will expand upon prior research addressing these questions.
- Governance of Autonomous Agents: Which aspects of existing laws, governance systems, and accountability mechanisms can be applied to autonomous AI agents? For example, how maritime law addresses abandoned vessels is relevant to how the law handles unmonitored autonomous agents. Conversely, are there aspects of existing laws that are already being applied to AI agents but should not be?
- Reliability of Agents: What aspects of autonomous AI agents can be adjusted to align with existing legal frameworks, governance systems, and accountability mechanisms? For example, can we ensure that AI agents possess a unique and reliable identity, even in the absence of direct human control?
- AI governing AI: How can we effectively use AI to govern AI systems? In which areas of AI regulation do humans have a comparative advantage, or are legally or normatively required to “be involved”?
- Agent Interaction: What norms emerge when AI agents interact with each other? How do different agents express their preferences, and how do these preferences influence other agents?
AI-driven research and development
As artificial intelligence systems become increasingly powerful, scientists are using them to conduct more and more research. This means an increasing number of scientific studies are being carried out in an autonomous or semi-autonomous manner with less human intervention. In the field of AI research, increasingly powerful systems may be used to develop their own subsequent versions. We sometimes refer to this model as "AI-driven AI research."
AI-driven AI research may be a "natural dividend" in building smarter, more powerful systems. Just as advances in coding capabilities gave rise to dual-use cyber capabilities, and advances in scientific capabilities may give rise to dual-use biological capabilities, progress in complex technical work could naturally lead to AI systems capable of self-developing AI systems.
AI-driven AI development carries significant potential risks. When evaluating possible measures, it is crucial for policymakers to understand the trends in the pace of AI development and whether AI research will begin to generate compounding effects.
Artificial intelligence used for artificial intelligence development
- Governance of AI Development: If AI systems are used to autonomously develop and improve themselves, how can humans effectively understand and control these systems? Ultimately, what will govern these systems?
- Intelligence Explosion Emergency Drill: How do we conduct an intelligence explosion emergency drill? How can we carry out a tabletop exercise to truly test the decision-making capabilities of laboratory leadership, the board, and government authorities?
- AI R&D telemetry: How do we measure the overall pace of AI research and development? What telemetry technologies and underlying infrastructure are required to collect this information? How can metrics related to AI R&D serve as early warning signals for recursive self-improvement?
- Controlling the Accelerated Development of AI: If an intelligence explosion is imminent, what intervention points could slow down or alter its pace? If human intervention is possible, which entities should wield this capability—governments, corporations, or others?
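[Translator's note] The telemetry question above can be grounded with a toy estimator. Given any throughput proxy for AI R&D (the series below is invented), a log-linear fit yields a doubling time; a shrinking doubling time would be one candidate early-warning signal for recursive self-improvement. A minimal sketch:

```python
import math

def doubling_time(series):
    """Least-squares slope of log2(y) over equally spaced time steps;
    returns the number of steps per doubling of the throughput proxy."""
    n = len(series)
    xs = range(n)
    ys = [math.log2(v) for v in series]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return 1 / slope

# Invented proxy that doubles every 2 steps: 1, ~1.41, 2, ~2.83, 4, ...
print(round(doubling_time([2 ** (t / 2) for t in range(8)]), 2))
```

A real telemetry system would have to choose defensible proxies (experiments run, papers reproduced, code merged) and handle noisy, non-exponential data; the fit above only shows the simplest version of the metric.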
The application of artificial intelligence in research and development—namely, AI-driven research in other fields:
- Technology tree: Artificial intelligence is accelerating the pace of development in certain scientific fields far more than others, depending on data availability, evaluation metrics, and the extent to which knowledge is tacit or constrained by institutional factors. How uneven is this gradient of development? What human problems are likely to be prioritized as a result of these changes in scientific progress?
- The rugged frontier: model capabilities are stronger in some areas than others. Fields with significant positive externalities—such as drug discovery and materials science—receive far less investment than their value warrants. Markets direct model improvements based on private returns, but can we enhance model performance to address social externalities?
This article is from the WeChat official account "Value Research Institute" (ID: jiazhiyanjiusuo), authored by Dingxi.
