Experts Discuss OpenClaw's Impact on AI Agents and Compute Challenges


Author: Chen Junda

Zhixidongxi reported on March 27 that today, at the Zhongguancun Forum, Zhang Peng, CEO of Zhipu; Yang Zhilin, CEO of Moonshot AI (moderating); Luo Fuli, head of Xiaomi’s MiMo large model; Xia Lixue, CEO of Wuwen Xinqiong; and Huang Chao, Assistant Professor at the University of Hong Kong, appeared together for a rare in-depth discussion on the future of open-source large models and agents.

This conversation begins with OpenClaw, the hottest topic today, with all panelists agreeing that agents are enabling large models to truly “get to work.” OpenClaw expands the capability boundaries of large models but also demands more from them. Zhipu is researching long-term planning and self-debugging capabilities, while Luo Fuli’s team is more focused on reducing costs and increasing speed through architectural innovation—even enabling model self-evolution.

Infrastructure must also keep pace with agents. Xia Lixue believes that current computing systems and software architectures are still designed for humans, not for agents—effectively limiting agent capabilities through human operational constraints. Therefore, we need to build Agentic Infra.

In the eyes of multiple guests, open source is one of the core drivers behind the development of large models and agents. Assistant Professor Huang Chao of the University of Hong Kong believes that the thriving open-source ecosystem is key to transitioning agents from mere “playthings” to genuine “workers.” Only through community collaboration can software, data, and technology fully evolve into agent-native forms, ultimately building a sustainable global AI ecosystem.

In addition, several guests discussed topics such as the price increases of large models, the surge in token usage, and the key words for AI over the next 12 months. Below are the core insights from this panel discussion:

1. Zhang Peng: As models grow larger, inference costs also increase accordingly. Zhipu’s recent price increase is essentially a return to reasonable commercial value; long-term low-price competition is detrimental to the industry’s development.

2. Zhang Peng: The surge in new technologies such as agents has increased token usage by 10 times, but actual demand may have grown by 100 times, leaving substantial unmet demand—making computing power a critical issue over the next 12 months.

3. Luo Fuli: From the perspective of foundational large model providers, OpenClaw ensures the lower bound of foundational large models while raising the upper bound. The task completion rate of domestic open-source models combined with OpenClaw is now very close to that of Claude.

4. Luo Fuli: DeepSeek has given domestic large model manufacturers courage and confidence. Some model architecture innovations, seemingly made as compromises for efficiency, have triggered real transformation, enabling the industry to achieve the highest level of intelligence under fixed computational resources.

5. Luo Fuli: The most important milestone in the next year of AGI development is "self-evolution." Self-evolution enables large models to explore like top scientists and is the only way to "create something new." Xiaomi has already increased its research efficiency tenfold by leveraging Claude Code and state-of-the-art models.

6. Xia Lixue: When the AGI era arrives, the infrastructure itself should be agents, autonomously managing the entire infrastructure and iterating based on the needs of AI clients to achieve self-evolution and self-improvement.

7. Xia Lixue: OpenClaw has ignited token usage. The current rate of token consumption feels like the early days of 3G, when users had only 100MB of monthly data.

8. Huang Chao: In the future, many software applications will not be designed for humans; software, data, and technology will evolve into an Agent-Native form, and humans may only need to use those “GUIs that make them happy.”

Here is the full transcript of this roundtable discussion:

01. OpenClaw is just a "scaffold"; large model token consumption is still in the 3G era.

Yang Zhilin: It’s a great honor to have such distinguished guests here today. Our speakers represent the model layer, compute layer, and agent layer. Today’s key themes are open source and agents.

Let’s start with the first question: let’s talk about OpenClaw, currently the most popular. What aspects of using OpenClaw or similar products do you find most imaginative or memorable in your daily use? From a technical perspective, how do you view the evolution of OpenClaw and related agents today?

Zhang Peng: I started playing with OpenClaw back when it was still called Clawbot. Since I’m a programmer by background, I enjoyed tinkering with it myself and gained some personal insights.

I believe OpenClaw’s biggest breakthrough and most exciting innovation is that it’s no longer exclusive to programmers or tech enthusiasts—ordinary users can now easily access the capabilities of cutting-edge models, especially in programming and agents.

So far, in my interactions with everyone, I prefer to refer to OpenClaw as a "scaffold." It provides a solid, convenient, yet highly flexible framework built on top of foundational models, allowing you to leverage novel features offered by various underlying models according to your own needs.

I used to think my ideas might be limited by my inability to code or lack of other related skills, but now with OpenClaw, I can finally accomplish them through simple conversations.

OpenClaw has had a tremendous impact on me, or rather, it has made me reconsider this matter.

Xia Lixue: Actually, when I first started using OpenClaw, I found it hard to adjust because I was used to interacting with large models, and after using it, I felt that OpenClaw responded quite slowly.

But then I realized one thing: it’s fundamentally different from previous chatbots—it’s like a “person” who can help me accomplish complex tasks. When I started giving it more complicated tasks, I found that it could handle them very well.

This experience has genuinely impressed me. Models began as token-by-token chat, and now they have evolved into agents—"lobsters"—that can complete tasks for you. This greatly expands the imaginative potential of AI.

At the same time, it also places much higher demands on the overall system’s capabilities. This is why, when I first used OpenClaw, I found it a bit sluggish. As a provider of infrastructure-level solutions, I see that OpenClaw brings both greater opportunities and challenges to the large-scale systems and ecosystems underlying AI.

The resources we currently have are insufficient to support such a rapidly growing era. At our company, for example, token usage has roughly doubled every two weeks since the end of January—a tenfold increase to date.

The last time I saw speeds like this was back when I was using a 3G phone and watching my data usage. I have a feeling that current token usage feels just like those days when I only had 100MB of mobile data per month.

In this scenario, we need to better optimize and integrate all our resources, enabling everyone—not just in the field of AI, but across society as a whole—to harness OpenClaw’s AI capabilities.

As a participant in the infrastructure space, I am deeply excited and moved by this era. I also believe there is still much room for optimization that we should continue to explore and experiment with.

02. OpenClaw raises the ceiling for domestic models; the breakthrough in interactive mode is highly significant.

Luo Fuli: I view OpenClaw as a highly revolutionary and disruptive event in the evolution of agent frameworks.

Actually, everyone I know who is doing very deep coding still chooses Claude Code as their first option. But I believe users of OpenClaw will feel that many of its design elements in the Agent framework are ahead of Claude Code. Recently, many updates to Claude Code have been moving closer to OpenClaw’s approach.

When I use OpenClaw, my experience is that this framework greatly extends my creativity. Claude Code initially only extended my ideas at my desktop; OpenClaw lets me extend my creativity anytime and anywhere.

OpenClaw brings two primary values. First, it is open source. Being open source greatly enables the entire community to deeply engage, value, and drive the evolution of this framework—an essential prerequisite.

I believe a major value of AI frameworks like OpenClaw is that they significantly raise the upper limit of models within China—models that are nearly on par with closed-source models but have not yet fully caught up.

In the vast majority of scenarios, you’ll find that its task completion rate—using a domestic open-source model combined with OpenClaw—is extremely close to that of Claude’s latest model. At the same time, it effectively maintains a high baseline by ensuring task completeness and accuracy through a Harness system and other design elements such as its Skills framework.

In summary, from the perspective of developers at foundational large model providers, OpenClaw ensures the baseline performance of foundational large models while raising their upper limit.

In addition, I believe another value it brings to the entire community is that it has sparked awareness, revealing that the Agent layer holds tremendous potential beyond large models.

I’ve also noticed that, beyond researchers, more people in the community are getting involved in the AGI revolution, increasingly engaging with more powerful agent frameworks like Harness and Scaffold. These individuals are, in effect, using these tools to automate parts of their own work, thereby freeing up time to focus on more imaginative endeavors.

Huang Chao: I think, first and foremost, from the perspective of interaction design, one reason OpenClaw has gone viral is that it offers a more "human-like" experience. We’ve been working on agents for about a year or two, but previous agents like Cursor and Claude Code felt more like "tools." OpenClaw, for the first time, integrates into instant messaging, giving users a sense closer to their ideal "personal JARVIS." I believe this represents a breakthrough in interaction design.

Another point it highlights for the broader community is that simple yet efficient frameworks like Agent Loop have once again been proven viable. At the same time, it prompts us to reconsider a key question: Do we need a single, all-powerful super-agent capable of doing everything, or would we benefit more from a better “little assistant”—like a lightweight operating system or scaffold?
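The "Agent Loop" pattern Huang Chao refers to is simple enough to sketch in a few lines. The sketch below is illustrative only: `call_model` and the tool table stand in for a real LLM call and tool registry, neither of which the panel specifies.

```python
# Minimal agent loop sketch: the model proposes an action, the runtime
# executes it, and the observation is appended to the context until the
# model declares it is done. `call_model` is a hard-coded placeholder.

def call_model(messages):
    # Placeholder for an LLM call; returns ("tool", name, arg) or ("final", text).
    # Here we hard-code a two-step trace purely for demonstration.
    if not any(m["role"] == "tool" for m in messages):
        return ("tool", "read_file", "notes.md")
    return ("final", "done")

TOOLS = {"read_file": lambda path: f"<contents of {path}>"}

def agent_loop(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        kind, *payload = call_model(messages)
        if kind == "final":
            return payload[0]
        name, arg = payload
        observation = TOOLS[name](arg)  # execute the requested tool
        messages.append({"role": "tool", "content": observation})
    return "step budget exhausted"
```

The appeal of the pattern is exactly what the panel notes: there is no elaborate planner, just a model, a tool table, and a loop with a step budget.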

The idea brought by OpenClaw is to create a small system—or “Lobster OS”—and its ecosystem, encouraging everyone to adopt a playful mindset, thereby unlocking all the tools within the ecosystem.

With the emergence of capabilities like Skills and Harness, more people can now design applications for systems like OpenClaw, empowering industries across the board. I believe this naturally aligns closely with the entire open-source ecosystem. In my view, these two points represent the greatest insights we’ve gained.

03. The new GLM model is specifically designed for practical tasks; the price increase reflects a return to its legitimate commercial value.

Yang Zhilin: I’d like to ask Zhang Peng. Recently, we saw Zhipu release the new GLM-5 Turbo model, and I understand there’s been a significant enhancement in Agent capabilities. Could you introduce how this new model differs from previous ones? Additionally, we’ve noticed a price increase strategy—what market signals does this reflect?

Zhang Peng: That’s a great question. We did indeed roll out an urgent update a couple of days ago, but this is actually just one phase of our overall development roadmap—we simply released it ahead of schedule.

The main goal is to shift from the original "simple conversation" to "actually getting work done"—something everyone has recently felt: large models are no longer just capable of chatting, but can truly help people get things done.

However, the capabilities required behind "getting things done" are extremely high. The model must independently plan long-term tasks, continuously iterate and debug, compress context, and potentially handle multimodal information. Therefore, its requirements on model capability differ significantly from those of traditional dialogue-oriented general-purpose models. GLM-5 Turbo has been specifically enhanced in these areas—particularly in enabling it to work continuously for extended periods, such as running for 72 hours without stopping—where we have invested substantial effort.

Additionally, users are very concerned about token consumption. Running complex tasks with a powerful model can consume a huge number of tokens. Casual users may not notice it immediately, but they will see their balance drop rapidly when they check their bills. We’ve optimized for this—when handling complex tasks, the model now achieves higher token efficiency. Overall, the model still uses a multi-task collaborative general architecture, but with enhanced capabilities in specific areas.

Raising prices is actually quite straightforward to explain. As mentioned earlier, it’s no longer as simple as asking a question and getting a direct answer—the underlying reasoning process is extremely lengthy. Many tasks require interacting with code and underlying infrastructure, along with continuous debugging and error correction, which consumes a tremendous amount of resources. The number of tokens needed to complete a complex task may be ten or even a hundred times greater than what was previously required to answer a simple question.

Therefore, the price needs to increase appropriately, as the model has become larger and inference costs have risen accordingly. We are returning to a normal commercial value, as long-term price competition is not beneficial for the industry’s overall development. This approach enables us to establish a healthy commercial feedback loop, continuously optimize our model capabilities, and provide you with better service.

04. Build a more efficient token factory—infrastructure itself should also be an Agent.

Yang Zhilin: Open-source models are becoming increasingly numerous and are beginning to form ecosystems, allowing various models to deliver greater value to users across different computing platforms. With the explosive growth in token usage, large models are transitioning from the training era to the inference era. I’d like to ask Li Xue: From an infrastructure perspective, what does the inference era mean for Wuwen?

Xia Lixue: We are an infrastructure provider born in the AI era, currently supporting companies such as Zhipu, Kimi, and MiMo, helping them operate their token factories more efficiently. We are also collaborating with numerous universities and research institutions.

So we’ve been thinking deeply about one thing: What kind of infrastructure is needed for the AGI era, and how can we progressively build and envision it? We are now fully prepared to address the challenges required at short-term, medium-term, and long-term stages.

The most immediate issue right now is what everyone just discussed—the massive increase in token volume driven by OpenClaw—which has created a greater need for system optimization. Price adjustments are, in fact, one way of responding to this demand.

We have consistently addressed this through integrated hardware and software solutions. For example, we have connected nearly all types of computing chips, unifying over a dozen different domestic chips and dozens of distinct computing clusters. This approach effectively alleviates the shortage of computing resources in AI systems—when resources are limited, the best strategy is to utilize every available resource and ensure each unit of computing power is applied optimally to maximize conversion efficiency.

At this stage, our goal is to build a more efficient token factory. We’ve made numerous optimizations, including achieving optimal alignment between models and hardware resources such as GPU memory, and exploring whether deeper synergies can emerge between the latest model architectures and hardware designs. However, solving today’s efficiency challenges has only established a standardized token factory.

In the age of Agents, we believe this is not enough. Because Agents are more like humans—you can hand them a task. I firmly believe that much of today’s cloud computing infrastructure was designed to serve programs and human engineers, not AI. It’s like building infrastructure with interfaces meant for humans, then adding a layer on top to connect Agents. This approach actually limits the potential of Agents by constraining them with human operational capabilities.

For example, an agent can think and initiate tasks at the millisecond level, but underlying capabilities like Kubernetes (K8s) aren’t designed for this, since human-initiated tasks typically occur on a minute-scale. Therefore, we need more advanced capabilities, which we call “Agentic Infra”—a “smart token factory”—and this is precisely what Wuwen Xinqiong is developing.

Looking further ahead, when the true AGI era arrives, we believe even the infrastructure itself should be agents. The factory we are building should also be capable of self-evolution and self-iteration, forming an autonomous organization. It would be like having a CEO—a single Agent, possibly OpenClaw—that manages the entire infrastructure, autonomously identifying needs and iterating on the infrastructure based on AI customers’ demands. Only then can AI systems effectively interoperate with one another. We are also exploring initiatives such as enabling better communication between agents and capabilities like Cache-to-Cache interaction.

So what we’ve always been thinking about is that the development of infrastructure and AI should not be isolated—where I simply fulfill requirements as they come—but should instead generate a rich chemical reaction. This is true hardware-software collaboration, and the synergy between algorithms and infrastructure—the very mission that Wuwen Xinqiong has always sought to achieve. Thank you.

05. Innovations that compromise on efficiency still hold value; DeepSeek has given domestic teams courage and confidence.

Yang Zhilin: Next, I’d like to ask Fuli. Xiaomi has made significant contributions to the community recently by releasing new models and open-sourcing the underlying technology. I’d like to ask you: What do you think are Xiaomi’s unique advantages in large models?

Luo Fuli: I think we could set aside the topic of Xiaomi’s unique advantages for now—I’d prefer to discuss the overall strengths of Chinese teams building large models. I believe this topic has broader value.

About two years ago, China’s foundation model teams had already made significant breakthroughs in overcoming the limitations of lower-end compute resources—particularly constrained NVLink interconnect bandwidth—through model architecture innovations that looked like compromises made for efficiency, such as MoE and MLA in the DeepSeek V2 and V3 series.

But later, we realized that these innovations sparked a transformation: achieving the highest level of intelligence under fixed computational resources. DeepSeek gave all domestic foundation model teams courage and confidence. Although our domestic chips—especially inference and training chips—are no longer bound by such severe limitations, it was precisely under those constraints that we were inspired to explore new model architectures with higher training efficiency and lower inference costs.

Recently introduced structures such as hybrid sparse attention and linear attention—DeepSeek’s NSA, Kimi’s KSA, and Xiaomi’s HySparse—represent next-generation model design innovations, distinct from the MoE generation and tailored specifically for the agent era.

Why do I find structural innovation so important? In fact, if users truly engage with OpenClaw, they’ll find it becomes easier and smarter the more they use it. One key prerequisite is the length of the reasoning context. Long context has been a topic we’ve discussed for a long time—but are there truly models today that perform exceptionally well, with strong performance and low inference costs, on long contexts?

Many models aren't incapable of handling 1M or 10M context lengths—they simply can't afford the high cost and slow speed of reasoning at those scales. Only by reducing costs and increasing speed can we entrust models with truly high-productivity tasks, enable them to perform more complex operations within long contexts, and even achieve model self-iteration.
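Luo Fuli's cost point can be made concrete with a back-of-envelope comparison. The numbers below are my own illustrative assumptions, not figures from the panel: the score computation in full attention scales quadratically in context length n, while linear-attention variants scale roughly linearly, so the cost gap widens on the order of n/d as contexts reach 1M tokens.

```python
# Back-of-envelope per-layer attention cost: full attention's QK^T scores
# cost ~2*n^2*d FLOPs, while kernelized linear attention costs ~2*n*d^2.
# Illustrative only; real kernels and constants differ.

def full_attention_flops(n, d):
    return 2 * n * n * d  # quadratic in sequence length n

def linear_attention_flops(n, d):
    return 2 * n * d * d  # linear in sequence length n

d = 128  # assumed per-head dimension
for n in (10_000, 1_000_000):
    ratio = full_attention_flops(n, d) / linear_attention_flops(n, d)
    print(f"n={n:>9,}: full/linear cost ratio ~ {ratio:,.1f}x")
```

At n = 1M and d = 128 the ratio is n/d, i.e. thousands of times—which is why "not incapable, just unaffordable" is an architecture problem rather than a capability problem.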

Self-iteration of a model refers to its ability to evolve itself within a complex environment by leveraging extremely long contexts. This evolution may pertain to the Agent framework itself or even to the model’s parameters—since I believe the context itself is essentially a form of parameter evolution. Therefore, how to design an architecture that supports long contexts and how to achieve efficient long-context inference during reasoning constitute a comprehensive competitive frontier.

In addition to the pre-training phase, where we focused on building architectures optimized for long-context efficiency—a topic we began exploring about a year ago—we are now iterating on an innovative paradigm during the post-training phase to achieve stable performance and high upper bounds on long-range tasks.

We are currently working on constructing more effective learning algorithms, collecting text that truly exhibits long-range dependencies in real-world scenarios with contexts of 1M, 10M, and 100M tokens, and integrating trajectory data generated by complex environments.

But in the longer term, driven by the rapid advancements in large models themselves and enhanced by Agent frameworks—as Li Xue mentioned—reasoning demand has already grown nearly tenfold over the past period. Could total token usage this year increase by as much as 100 times?

Here, another dimension of competition emerges—computing power, or inference chips, and even further down to energy. So I believe that if we all think about this together, I might learn even more from everyone. Thank you.

06. Agents have three key modules; the surge of multi-agent systems will bring significant impact.

Yang Zhilin: A very insightful sharing. Next, I’d like to ask Huang Chao: You’ve developed influential Agent projects like Nanobot and have a large community following. From the perspective of Agent harnessing or applications, what technical directions do you think are important and worth paying attention to next?

Huang Chao: I think if we abstract the technology behind agents, the key components are Planning, Memory, and Tool Use.

Let’s start with planning. The current issue is that for long-term tasks or very complex contexts—such as tasks involving 500 steps or more—many models struggle to make effective plans. I believe this is fundamentally because models may lack this kind of implicit knowledge, especially in complex vertical domains. In the future, it may be necessary to embed knowledge of various complex tasks directly into the models, which could be one promising direction.

Of course, Skill and Harness also mitigate some of the errors introduced by Planning, as they provide high-quality Skills that essentially guide the model toward completing more challenging tasks.

Let’s talk about Memory. Memory often gives the impression of suffering from inaccurate information compression and unreliable retrieval—especially under long-term tasks and complex scenarios, where its workload surges dramatically. Currently, projects like OpenClaw rely on the simplest form of Memory: file-system-based Markdown files shared across systems. In the future, Memory is likely to evolve toward a layered architecture and must become more general-purpose.

To be honest, the current Memory mechanism is hard to make universal—because coding scenarios, deep research scenarios, and multimodal scenarios differ greatly in data modalities; achieving effective retrieval and indexing of these Memories while maintaining efficiency is always a trade-off.
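The "simplest form of Memory" Huang Chao describes—plain Markdown files on a shared filesystem—might look like the sketch below. The class name, file layout, and bullet-point format are my own assumptions for illustration.

```python
# Sketch of file-based agent memory: one Markdown file per topic that any
# agent process can append to and re-read. Paths and format are illustrative.

from pathlib import Path

class MarkdownMemory:
    def __init__(self, root="memory"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def append(self, topic, note):
        # Notes accumulate as bullet points in <topic>.md.
        path = self.root / f"{topic}.md"
        with path.open("a", encoding="utf-8") as f:
            f.write(f"- {note}\n")

    def recall(self, topic, keyword=None):
        # Re-read the file; optionally filter by a substring match.
        path = self.root / f"{topic}.md"
        if not path.exists():
            return []
        notes = [line[2:].strip()
                 for line in path.read_text(encoding="utf-8").splitlines()]
        return [n for n in notes if keyword is None or keyword in n]

mem = MarkdownMemory(root="agent_memory_demo")
mem.append("project", "use linear attention")
mem.append("project", "ship friday")
```

The appeal is exactly the trade-off mentioned above: trivially shareable across agents and modalities, but retrieval is only as good as substring search, which is why layered, more general memory is the expected next step.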

Additionally, now that OpenClaw has significantly lowered the barrier to creating Agents, there may soon be more than just one “lobster.” I’ve also noticed that Kimi has introduced mechanisms like Agent Swarm—soon, everyone might have a whole “school of lobsters.”

Compared to a single lobster, the context explosion caused by a group of lobsters is easy to imagine, placing tremendous pressure on memory. Currently, there is no effective mechanism to manage the context generated by such “groups of lobsters,” especially in complex scenarios like coding or scientific discovery, where both the model and the entire agent architecture face significant strain.

Now let’s talk about Tool Use, specifically the Skill aspect. The current issues with Skills are similar to those that existed with MCP initially—MCP faced problems such as inconsistent quality and security risks. Today, Skills face the same challenges: although there appear to be many Skills available, high-quality ones are scarce, and low-quality Skills can significantly impair an Agent’s ability to complete tasks accurately. Additionally, there is the risk of malicious injection. Therefore, from the perspective of Tool Use, improving the entire Skill ecosystem may require community-driven efforts—even enabling Skills to autonomously evolve new capabilities during execution.

Overall, from Planning and Memory to Tool Use, these are current pain points for agents and potential directions for the future.

07. Keywords for the Next 12 Months: Ecosystem, Sustainable Tokens, Self-Evolution, and Computing Power

Yang Zhilin: It’s clear that both guests have discussed a shared issue from different perspectives— as task complexity increases, context length grows dramatically. On the model level, we can enhance native context length; on the Agent Harness level, mechanisms like Planning, Memory, and Multi-Agent systems can support more complex tasks within the constraints of specific model capabilities. I believe these two directions will generate increasingly powerful synergies in the near future, further enhancing task completion capabilities.

Finally, let’s end with an open-ended outlook. Please use one word to describe the trend in large model development over the next 12 months and your expectations. Let’s start with Huang Chao.

Huang Chao: Twelve months in the field of AI seems so far away—it’s hard to imagine what it will look like in twelve months.

Yang Zhilin: It originally said five years, but I changed it.

Huang Chao: Yes, hahaha. The word that comes to mind is “ecosystem.” Right now, OpenClaw has gotten everyone excited, but in the future, agents need to truly become “workers”—not just something people play with for a novelty. We need them to genuinely take root as tools for grunt work and become real coworkers.

This requires the effort of the entire ecosystem, especially open source—after opening up technological exploration and model technologies, the entire community must collaborate to build together—whether it’s iterating on models, enhancing the Skill platform, or developing various tools, all must be better designed to foster an ecosystem for lobsters.

A clear trend is whether future software will still be designed for humans. I believe that many future software applications may not be intended for humans at all—since humans require GUIs, while the future may be natively oriented toward agents. Interestingly, people may only interact with GUIs that bring them joy. Meanwhile, the entire ecosystem has shifted from GUI and MCP to a CLI model. This requires the ecosystem to transform software systems, data, and various technologies into Agent-Native forms, enabling richer overall development.

Luo Fuli: Narrowing the question to one year is very meaningful. By my own definition of AGI, I believe it can be achieved within five years. So, if I had to describe the most critical event in the next year of AGI’s journey in one sentence, it would be "self-evolution."

This term may sound a bit abstract, and people have mentioned it many times over the past year. But recently, I’ve gained a deeper understanding—or rather, developed a more practical and feasible approach to “self-evolution.” The reason is that, with powerful models, we’ve barely scratched the surface of what pre-trained models can achieve under the Chat paradigm, while the Agent framework unlocks their full potential. When we task models with longer-duration operations, we observe that they can learn and evolve on their own.

A simple experiment is to add a verifiable constraint to the existing agent framework and set up a loop, allowing the model to continuously iterate and optimize toward its goal—you’ll find that it consistently delivers better solutions. This self-evolution is already capable of running for one to two days, though it depends on the task’s complexity.

For example, in certain scientific research tasks, such as exploring better model architectures—where there are clear evaluation metrics like lower PPL—we have found that it can already autonomously optimize and execute for two to three days.
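The loop Luo Fuli describes—attach a verifiable metric, then let the model iterate toward it—can be sketched abstractly. Here `propose_variant` and `score` are placeholders for a real model call and a real evaluation metric (such as validation perplexity); only the keep-the-verified-improvement loop is the point.

```python
# Sketch of the "verifiable constraint + loop" pattern: a proposer suggests
# candidate solutions, a verifier scores each against a fixed metric (lower
# is better, like PPL), and only verified improvements are kept.

import random

def propose_variant(current):
    # Placeholder mutation; a real system would ask a model for a new design.
    return current + random.uniform(-1.0, 0.5)

def score(candidate):
    # Placeholder metric; stands in for e.g. validation perplexity.
    return candidate

def self_improve(initial, iterations=100, seed=0):
    random.seed(seed)
    best, best_score = initial, score(initial)
    for _ in range(iterations):
        cand = propose_variant(best)
        s = score(cand)
        if s < best_score:  # keep only candidates the verifier confirms
            best, best_score = cand, s
    return best_score

result = self_improve(10.0)
```

Because a candidate is accepted only when the verifier confirms an improvement, the best score is monotonically non-increasing—the property that lets such a loop run unattended for days, as described above.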

From my perspective, self-evolution is the only area capable of “creating something new.” It doesn’t replace human productivity as we know it; instead, it acts like top scientists, exploring what doesn’t yet exist in the world. A year ago, I would have thought this timeline would stretch three to five years, but recently I believe it should be shortened to one to two years. We may soon be able to overlay a powerful self-evolving agent framework on large models, achieving at least an exponential acceleration in scientific research.

Recently, I’ve noticed that the workflow of my peers working on large models is highly uncertain and highly creative. However, with Claude Code combined with state-of-the-art models, our research efficiency has improved by nearly tenfold. I’m eager to see this paradigm extend to broader disciplines and fields, which is why I believe “self-evolution” is crucial.

Xia Lixue: My keyword is "sustainable token." I see that the development of AI is still a long-term, ongoing process, and we hope it will have lasting vitality. From an infrastructure perspective, a major issue is that resources are ultimately limited.

Just as sustainability was discussed in the past, the critical question we see for a token factory is whether it can provide tokens sustainably, stably, and at scale, so that top-tier models can truly serve more downstream applications.

We need to broaden our perspective to the entire ecosystem—from energy and computing power to tokens and ultimately applications—to create a sustainable, economically iterative cycle. We are not only leveraging domestic computing resources but also exporting these capabilities overseas, enabling global resources to be interconnected and integrated.

I also feel that "sustainability" is essentially about building China's unique token economy. In the past, we talked about "Made in China," turning China's low-cost manufacturing capabilities into high-quality products exported worldwide.

What we are doing now is “AI Made in China”—sustainably converting China’s advantages in energy and other areas into high-quality tokens through token factories, exporting them globally to become the world’s token factory. This is the value I want to see China bring to the world through artificial intelligence this year.

Zhang Peng: I’ll keep it brief. While everyone is gazing at the stars, I’ll stay grounded. My keyword is “computing power.”

As mentioned just now, all these technologies and agent frameworks have increased everyone’s creativity and efficiency tenfold—but only if people can actually use them. You can’t pose a question and have the model ponder for ages without answering; that simply won’t work. Because of this, progress in many areas of research, and many initiatives people want to pursue, is being hindered.

A couple of years ago, I remember an academician saying at the Zhongguancun Forum: "No cards, no emotion; talking about cards hurts feelings"—the "cards" being GPUs. I feel we’ve reached a similar point today, but the situation is different. We’re now entering the inference phase, and demand is truly exploding—growing tenfold, even a hundredfold. You just mentioned usage has increased tenfold, but perhaps actual demand is a hundred times higher? There’s still massive unmet demand. What should we do? Let’s all think about solutions together.
