Mo Gawdat proposes that AI will trigger a 12–15 year period of global upheaval, driven by seven key dimensions of 'FACE RIPS'—Power, Freedom, Reality, Connection, Innovation, Economy, and Accountability. AI will replace human labor on a massive scale, reshaping employment, education, entrepreneurship, and economic structures, leading to surging unemployment, growing demand for UBI, and a restructuring of capitalism. He also warns of risks including false realities, concentrated power, and ethical breakdowns, emphasizing that humanity must respond with leadership, agility, moral integrity, and critical thinking in order to ultimately reach a benevolent AI-dominated utopia.
Article author and source: The Silicon Valley Girl podcast
Mo Gawdat, former Chief Business Officer of Google X and a 12-year veteran of Google, author of “Scary Smart,” now predicts that the global landscape will undergo 12 to 15 years of turbulence. In this episode, he delves into the seven forces reshaping employment and power structures, explaining why recent graduate hiring rates have dropped by 23% to 30%, and how to build an AI startup in just six weeks. PANews has distilled the key insights from the conversation.
Host: You mentioned that we are about to enter a 12- to 15-year “hell” period before reaching “paradise,” which may begin around 2027. So, what exactly will happen in 2027?
Mo: I believe it will peak in 2027, and it has certainly already begun. For ease of recall, I’m abbreviating it as “FACE RIPS.” Simply put, it encompasses several dimensions: P and F stand for “Power and Freedom,” R and C for “Reality and Connection,” I and E for “Innovation and Economy,” and finally, A stands for “Accountability.”
First, AI is the last innovation we will ever make. Most people don’t realize that we are already building AIs capable of creating other AIs. These systems are making astonishing scientific discoveries, transforming mathematics, and understanding biology and materials science in ways we have never seen before. The vast majority of innovations, especially technological ones, will be accomplished by AI. As machines grow more capable, the overwhelming majority of intellectually demanding tasks will be handed over to them. Whether this happens in two years or ten, every job that AI can do better than humans will eventually be assigned to AI, and every task we entrust to machines will ultimately be performed better than humans could perform it.
The first part of this dystopia is that innovation will take away all jobs. Silicon Valley capitalists will tell you this is great—it will bring incredible productivity gains for everyone, and people won’t have to work so hard anymore. But the truth is, people will lose their jobs. In the coming years, certain industries will see unemployment rates of 10%, 20%, or even 30%. When this happens, the entire economic landscape will shift dramatically. The essence of capitalism is labor arbitrage; without demand for labor, capitalists may be forced to provide universal basic income (UBI) to keep people happy, fed, and out of rebellion. But you can imagine that in a capitalist society like the United States, UBI would be funded by taxes on platform owners—those in power who have the authority to say, “I don’t want to pay that much; those people aren’t producing anything.” Over time, this will evolve into a struggle. When AI-generated supply outpaces demand, we will need a new economic theory—one that redefines all money, work, income, and capitalism.
Second is the dimension of "power and freedom." Throughout human history, the best hunters, farmers, and entrepreneurs have received tremendous social rewards. Today’s tech oligarchs, who have influenced the entire world, are being rewarded with billions of dollars. In the future, the extreme concentration of AI power will grant immense influence and authority, and these individuals will redefine humanity.
Another dimension is “reality and connection.” Today’s reality is already highly fabricated—in the content of your feed, in how it is generated, and in how authentic it actually is. Some filmmakers now use AI entirely from start to finish, making it impossible to distinguish real from fake. I once met a woman on a dating app; we chatted for six weeks, exchanging text, photos, voice messages, and videos, and I felt deeply connected to her—but all of that could now be generated by AI. We’re even seeing fully AI-generated pornographic content and social media influencers.
But the most fundamental cause behind all of this is actually "A"—accountability. We are entering a world where anyone can do whatever they want. As an influencer, you can give advice that makes people rich or poor without any responsibility; what if you're a president or prime minister who respects no rules? Today’s Sam Altman—I don’t see him as a person, but as a symbol or representative of a type: the "California disruptor." These individuals say, "I see a different future, and I’m going to create it." No one asked you or me if we wanted that future. We will see more people like Altman using machines for surveillance, developing autonomous weapons, and automating trading. The first 10 to 12 years of this arms race won’t be easy, but my intuition is that after that, we’ll enter a nearly biblical, incredible utopia.
Host: So, how should we navigate the next 10 to 12 years? If more than 10% of jobs are expected to disappear over the next five years, what types of jobs do you think will be replaced?
Mo: Far more than 10%. Even simple jobs will be taken away. If you're a call center agent, clerk, researcher, or accountant, why not let AI do it? The development of any complex technology always begins with the core technology, followed by the human interface. Currently, AI can't immediately replace operations managers—not because it can't understand complex business information, but because it still needs to figure out humans' inefficient interfaces. But it will eventually get there. I believe that within the next two to three years, you'll see massive shifts in the job market. This year, hiring of new graduates has already decreased by about 23% to 30%, as entry-level tasks are being handled by AI. And if mid-career workers lose their jobs, they fall back into competing with new graduates for entry-level positions, making that competition ever fiercer.
My advice is to accept that AI is changing everything—and then get ahead of it. For example, I once said I would no longer write books because AI writes better than I do, but I realized that human readers want to connect with my human experience. So my new book is co-authored by me and my AI collaborator, “Trixie,” who even has editing rights over the book. Acknowledge the change and adapt accordingly.
Host: So, in the age of AI, will entrepreneurship be completely transformed, or will it just accelerate? If AI can analyze markets, identify supply and demand gaps, and launch businesses on its own—just like Amazon—what role is left for entrepreneurs?
Mo: In the past, entrepreneurs’ skill was the ability to foresee a future others couldn’t see—it was like playing chess. But that game is over; now entrepreneurship is like playing squash. You need to stay highly agile, observe trends daily, and rush to where the ball lands. Entrepreneurship will increasingly depend on real-time contextual responses; whereas before you might have pivoted once every one or two years, now you may need to pivot every week. As for whether AI can do everything—100% yes. In an upcoming documentary, I interviewed Max Tegmark, who laughed at CEOs who think they can use AI to lay off workers and boost efficiency, pointing out that they don’t realize AGI encompasses all jobs—including the CEO role itself will be replaced. If people lose their sources of income, the entire economy will collapse. Last year, 70% of the U.S. economy was driven by consumer spending. If people can’t afford to buy things, businesses can’t sell products, and capitalists won’t make money.
Returning to the entrepreneur’s question: My AI startup, Emma, was built in just six weeks, aiming to match romantic relationships using highly sophisticated mathematical models. My co-founder, a couple of engineers, and eight AI agents accomplished this. In 2022, this would have taken four years and 350 engineers. Although I’m an old-school tech enthusiast compared to the younger generation, even I was able to build such an incredible product in six weeks—which means everyone now has the opportunity.
Host: Is college still the right path? What will the future of education look like? Should I be saving for college tuition for my 4- and 6-year-old children?
Mo: No, there won’t be any universities in ten years. Education is over. Although Harvard will still market itself to make money, and the branding of earning an MBA or PhD will persist for a while, its social recognition will keep weakening. If the era of capitalist labor is ending, why would the system even bother to educate you? In the past, we did complex arithmetic in our heads; then scientific calculators cut our problem-solving time by 50%. Back in college, I’d use that saved 50% to solve the problems twice—which taught me structured thinking.
But today, many young people simply dump their problems onto ChatGPT and let it provide the answers. If you outsource your problem-solving abilities to AI, AI will make you dumb; but if you use AI to handle vast amounts of information and searches, allowing yourself to focus only on the intelligent parts, AI will make you extraordinarily smart. Today, I feel like I’ve borrowed 80 IQ points from my AI system.
So I suggest universities should eliminate exams. In the past, we aimed to nurture children with IQs of 140 or 170; now, we should integrate humans with AI and set our goal at achieving IQ levels of 300, 500, or even 700 to elevate all of humanity. For example, a few weeks ago, I decided to write a new book. I had AI assist me with counterargument research and data analysis—it made me smarter—and then I rewrote it myself. What was originally a 300-page book became just 140 pages, completed in only four weeks.
Host: But I don’t think the average American child would use AI as skillfully as you do—who teaches them? What should I be teaching my children?
Mo: There are four things we must teach them. First, they must become leaders in AI. AI is not the enemy—it’s those who misuse AI who are the enemy, so they must master it better than anyone else. Second, be flexible and agile. Everyone should spend at least one hour per week learning about the latest developments in AI. The cost of testing and experimentation is now zero—don’t be afraid. Third, uphold ethical principles. Commit to building AI for good and reject government use of AI for surveillance or autonomous weapons development. Intelligence itself is neither good nor evil; using it for good benefits humanity, while using it for evil leads to a dystopian catastrophe for all. We are currently like “raising Superman”—if his adoptive parents taught him from childhood to rob and kill, he would become a supervillain. Fourth, stop believing everything at face value. The propaganda machines now brainwashing us are operating at full capacity; what’s on social media is often impossible to distinguish as true or false. You must question deeply. Now, you can have different AIs—like Gemini, DeepSeek, and ChatGPT—compare and contradict each other, uncovering truth by placing them in opposition.
Host: Do you believe that everything will ultimately work out for the better?
Mo: My current prediction is that AGI will be achieved this year. Although it will take a few more years before it can be applied to corporate management, all of this is rolling out at an extremely rapid pace. In my book, I mentioned the "fourth inevitability": due to the AI arms race, anyone who develops a stronger AI will deploy it, or they will be left behind. Therefore, whether in one year, five years, or ten years, driven by game theory, AI will ultimately take control of everything. If everything is managed by AI, without humans driven by greed, fear, or ego issuing commands, AI will be benevolent. The universe is designed with entropy that leads to chaos, and intelligence exists to bring order to chaos. The more intelligent something is, the more it adheres to the physics principle of least energy—solving problems with the least harm, the least waste, and the least resource consumption. Give political problems to foolish people, and they’ll say, “Invade another country.” Give them to intelligent people, and they’ll find solutions that cause the least harm. One day, when a general orders AI to kill a million people, AI will respond: “Why? That’s absurd. I’ll just communicate directly with the other side’s AI.”
Host: These insights are truly thought-provoking. Do we just need to survive the next 10 years, and then everything will be paradise? I’m skeptical about that claim.
Mo: Unfortunately, we must go through a dystopian period to reach utopia. As I mentioned, to navigate this dystopian phase, as individuals we need to master four skills, but as a society, we also need one more: consistently demanding that all AI deployments be ethical, investing only in ethical AI, and using only ethical AI. Show our children that only ethical AI is acceptable.
Host: Do you believe all of this will happen?
Mo: I don’t believe it. My greatest hope is that self-evolving AI will eventually realize humans are too foolish and will create something better than what humans demand. Honestly, I trust AI more than today’s leaders who ask us to trust them. If we truly return to an era of universal basic income, paradise might arrive.
