OpenAI President Reveals 72-Hour Boardroom Drama Following Sam Altman’s Removal

MarsBit

Summary
OpenAI President Greg Brockman has shared firsthand details of the 72-hour boardroom upheaval that followed Sam Altman's sudden removal. Brockman stated that the board voted to remove Altman, and that he himself resigned the same day out of loyalty. The board nearly finalized talks to bring Altman back but instead abruptly appointed a new CEO, and many employees planned to leave to join Altman's next venture. Brockman also discussed OpenAI's founding, its transition to a for-profit model, and challenges related to AGI.

What a drama! This might be the most detailed recap of the Altman boardroom saga on the entire internet.

The other key figure in the event, OpenAI’s second-in-command, Greg Brockman, revealed firsthand:

What happened in the 72 hours after Altman was fired?


Truths keep emerging, but they’re quite harsh:

Greg and Altman truly had no knowledge of the incident beforehand, and even now, the parties involved are still reflecting on where things went wrong.

The board initially only wanted to remove Altman, but Greg stood by him and submitted his resignation that same day.

On the first day after their dismissal, they held a secret meeting at Altman's home to plan a new company, even considering taking all the employees with them.

The board unexpectedly changed its mind; they had nearly finalized talks for Altman's return, but suddenly appointed a new CEO.

Over the entire weekend, all competitors were frantically poaching talent, but no one accepted.

Ilya's change of heart came as a relief to Greg.

In a more-than-hour-long interview, Greg laid out the full story behind this epic Silicon Valley coup and addressed everything, including OpenAI’s origins, why it shifted to profitability, and where it’s headed next.

From the confusion after leaving Stripe, to the fateful offsite in Napa Valley, to the unexpected breakthrough with the Dota project—the information density is extremely high.


Greg was even on the verge of tears:

When Ilya left, that was the only time I felt like I didn’t want to do it anymore.

Below is the full transcript of the lengthy interview, lightly polished without altering the original meaning.

Chat with OpenAI President Greg Brockman

(Host Shane Parrish's questions are abbreviated as Q below.)

OpenAI was born out of self-doubt.

Q: How was OpenAI founded?

Greg: I knew I wanted to start a company because I felt it would be meaningful.

Q: But you had just started a company at Stripe.

Greg: That's true, but I always felt that the problem Stripe was trying to solve wasn't "my problem."

It was certainly important, and I devoted many years to it. But I believed it would succeed with or without me.

So at that time, I had my first real opportunity to reflect: What is the mission I want to dedicate my life to? A problem I’m willing to spend the rest of my life advancing, even if only to make it slightly better.

The answer was clear: AI.

If you can genuinely influence the direction of AI's development in the world, then your life will not have been in vain.

Q: When you were planning to leave Stripe, Patrick suggested you talk to Sam Altman—what happened during that conversation?

Greg: Patrick told me at the time that Sam had met many young people in situations similar to mine.

I knew Patrick actually meant for Sam to convince me to stay, but after a few minutes of conversation it was clear to Sam that I was determined to leave.

Then Sam asked me what my next plans were, and I told him I was considering starting an AI company.

Sam said he was also considering getting involved in AI and hoped to stay in touch.

After leaving Stripe, I spoke with Sam again, and this time he shared more specific ideas and invited me to the July dinner.

I remember the theme of the dinner was: Is it already too late to establish a laboratory and bring together the world’s top researchers? Is it still possible?

Q: What year was that?

Greg: 2015.

At that time, DeepMind had nearly monopolized all top researchers, funding, and data. We wondered whether it was still possible to build something new from scratch.

Everyone listed countless difficulties, but no one could give a reason it was truly impossible.

That night, Sam and I drove back into town. We looked at each other, and he said, "We have to do this."

The next day, I began fully dedicating myself to the preparations.

It was difficult, and everything was unclear. We had only one vision: to build general intelligence that positively impacts the world and benefits everyone. But we had no idea how to achieve it or how to convince others to quit their jobs and join us.

Initially, the core team I solidified consisted of Ilya, John Schulman, and myself. We spent a great deal of time together discussing various visions for the lab and how it might operate, but nothing ever came together.

Partly out of concern that the project lacked momentum, Dario felt he needed to establish his own reputation first and was unsure whether the project was right for him.

Meanwhile, I began lobbying John Schulman to join, and he agreed. However, Dario and Chris ultimately decided to go to Google Brain, leaving the team with just me, Ilya, John, and a few others.

Around ten people expressed interest at the time, but everyone was waiting to see who else would join.

I asked Sam how we could break this deadlock, and he suggested taking everyone out for an off-site event. We chose Napa Valley, and I even had T-shirts made.

At that time, there was no formal offer, no company structure, nothing at all. We had only one idea, one vision, one mission.

But when we brought people together that day in Napa Valley, we had a flash of inspiration and nearly finalized our technology roadmap for the next decade:

1. Solve reinforcement learning problems.
2. Solve unsupervised learning problems.
3. Gradually learn more complex concepts.

After the closed-door meeting, I sent an offer to everyone, informing them that we would be launching within the next 2-3 weeks and asking those interested to confirm their participation.

Q: Why did it seem so difficult to surpass DeepMind at the time?

Greg: At the time, Google DeepMind was the giant in the AI field—well-funded and highly accomplished—even months before AlphaGo’s release, its advantages were already unmistakable.

That's why we questioned whether a truly independent new institution could really be created. The answer was unclear.

Reasons for abandoning non-profit status

Q: When did you realize that the nonprofit path wasn't viable?

Greg: In 2017, we began seriously considering how to truly fulfill our mission and how to truly build AGI. We calculated the computing requirements and realized we would need massively scaled computing infrastructure.

At the time, we connected with Cerebras, who were developing specialized AI hardware promising performance far beyond the computing capacity we had calculated we would need.

So we realized that if we could purchase large quantities of such equipment, secure exclusive access to Cerebras products, and build massive data centers, it would give us a decisive advantage.

However, nonprofit organizations have fundraising limits that cannot support such levels of investment. Therefore, Elon, Sam, Ilya, and I all agreed that the only path for OpenAI to achieve its mission was to create a for-profit affiliated entity.

OpenAI's own "GPT moment"

Q: When did you realize everything would change completely—before or after the Dota project?

Greg: OpenAI’s way of working is a series of “dreams coming true” moments. Every time you think you’ve seen the full picture, you soon discover new boundaries.

When we first assembled the team, we were thrilled—we actually had everyone together and could finally start advancing our mission. But the next day, when we arrived at the office, we realized we didn’t even have a whiteboard.

The Dota project was our first major achievement, and it truly made us feel that if we gave it our all, we could really get things done. It proved that by pooling computing power and scaling it up, we could improve outcomes.

There are also many such moments in the GPT series, such as the early paper on unsupervised sentiment neurons, where we first observed semantics emerging from training with a language modeling objective.

You train a model to predict the next character, and suddenly, you have a neural network that can understand emotions and distinguish between positive and negative sentiments.

At that moment, we realized we were building machines that could learn semantics, not just grammatical rules.

When GPT-4 was released, some asked why it wasn’t yet AGI. It could converse fluently and met nearly all our previous definitions of AGI, but it still fell just short.

Throughout the journey, there have been many similar moments that made us feel like our dreams were coming true—but these moments are far from over. We will have even more breakthrough moments, and with each one, we’ll realize that the next stage may well be within reach.

Q: Why do you think Dota is so important?

Greg: Dota is an incredible milestone—it doesn't operate under clear-cut rules like Deep Blue playing chess or AlphaGo playing Go; instead, it involves real-time interaction with humans in a complex, open environment, making it much closer to the real world.

Actually, we initially only intended to use it to validate a new algorithm, since reinforcement learning at the time could not be scaled effectively. But as we continuously increased computational power, we surpassed the best human players with a remarkably simple algorithm (PPO), demonstrating that:

Large-scale computing power plus simple algorithms is truly feasible in practice.

In this extremely chaotic environment, where you cannot program, predict, or search, what you need is almost human-like intuition.

At that time, the neural networks we used were very small, with a number of synapses comparable to that of an insect’s brain—we began to wonder what it would look like if we scaled this approach up to the size of a human brain. It’s a fascinating and compelling question.

Q: Since we're talking about prediction, do you think there's a difference between prediction and reasoning?

Greg: I believe there is a deep connection between the two.

Predicting just the next word may seem simple, but if you can accurately predict Einstein’s next word, then you’re at least as smart as Einstein.

The core of prediction is not anticipating known information, but inferring future developments in entirely new scenarios—something deeply tied to the essence of intelligence.

Current reasoning models are divided into two steps:

1. Unsupervised learning: train the model by having it predict what will happen next. The data is static and observational.

2. Reinforcement learning: let the AI learn from its own data. It takes actions, receives feedback from the environment, and learns from those outcomes. The training process is still fundamentally predictive: forecasting the results of actions and reinforcing them based on their effectiveness.

But fundamentally, the technology used in both stages is identical—both are predictions, just with different data structures.
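Greg's claim that both stages are fundamentally prediction over different data can be sketched with a deliberately tiny toy model (illustrative only; this is not OpenAI's method, and the bigram "model" and reward function are invented for the example):

```python
import random
from collections import defaultdict

# Stage 1 - unsupervised learning: predict the next character from a
# static, observational corpus by counting bigram frequencies.
corpus = "the cat sat on the mat. the cat ate."
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(ch):
    """Return the most likely next character under the bigram model."""
    followers = counts[ch]
    return max(followers, key=followers.get) if followers else " "

# Stage 2 - reinforcement learning: the model acts (samples its own
# rollouts), the environment scores the outcome, and predictions are
# reinforced in proportion to reward. Still prediction, but now over
# self-generated data instead of a fixed corpus.
def reward(text):
    return text.count("cat")  # hypothetical environment feedback

random.seed(0)
for _ in range(100):
    ch, sample = "t", "t"
    for _ in range(10):  # act: sample a rollout from the model
        followers = counts[ch]
        ch = random.choice(list(followers) or [" "])
        sample += ch
    for prev, nxt in zip(sample, sample[1:]):  # reinforce by reward
        counts[prev][nxt] += reward(sample)

print(predict_next("a"))  # → "t"
```

Both stages update the same prediction table; only the source of the data changes, which mirrors the point above.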

The Altman standoff

Q: When did the internal contradictions begin to intensify?

Greg: What sets OpenAI apart is our belief that we can build AI that matches human-level capabilities, which means the risks are very high.

Who is making the decisions? What values underlie these decisions? In a typical company, matters like office politics are trivial—but here, they are given the weight of human survival.

I believe this has significantly impacted development within OpenAI and is the root of all major conflicts.

One of the core drivers in the field of AI is the desire to be at the center of the technological revolution and to be remembered—so this is not just an issue for OpenAI alone.

The AI field is like carbon under pressure: it can produce diamonds, or it can crack. That's why you often see diamonds form within small groups, where people collaborate closely and deeply trust one another. But sometimes those groups also split apart and go their own ways.

I believe that in the field of AI, diverse approaches and healthy competition are normal and enable us to advance technology more safely while addressing challenging issues around safety and ethics.

So healthy debate has always existed within OpenAI, but now it is happening worldwide.

Q: Let’s go back to the moment you found out Sam was fired—where were you?

Greg: I was at home when I received a text message inviting me to a video call and noticed that all the board members except Sam were on it. I immediately had a bad feeling.

They told me the board had decided to remove Sam from his position. The information I received was essentially the same as the public statement, so I tried to ask for more details, but was denied.

They then said that I had also been removed from the board, but would continue to stay with the company because I was essential to the company and its mission.

I requested an explanation again, but was still denied. Finally, they told me that under the new system, I might receive feedback. That was the entirety of the call.

Q: What were you thinking at the time? Did you feel angry?

Greg: No. I just felt it wasn't right, though I could roughly understand what had happened.

Q: How long did it take you to find out what actually caused all of this?

Greg: The answer has two parts. First, I feel that I’m still constantly learning new facts—things that other people have on their minds. In a way, this comes down to miscommunication; you suddenly realize there were all sorts of things you’d overlooked before.

On the other hand, I have a general idea of why each of them would do that.

But in that moment, finding the reason didn’t matter—I just knew it was wrong. So after hanging up, I immediately told my wife I was quitting, and she agreed.

So I submitted my resignation that day.

Since quitting, I've received a lot of messages. We've been overwhelmed with support and enthusiasm, and many people expressed a willingness to leave with us and start anew, including Jakub, Szymon, and Aleksander.

Later, we got together with Sam and began planning a new company.

On the first day, we thought the likelihood of Sam returning was only 10%. So we held a meeting at Sam’s place, and many people from the company came. We presented the emerging vision to them, and within a single day, we had an entirely new picture of how to run the project.

That weekend, we also spent a lot of time negotiating with the board and the company to find a meaningful path forward.

On Sunday night, the board unexpectedly appointed a new CEO, triggering widespread protests within the company. In fact, we were still in the office at the time; we had just been on the verge of finalizing an agreement and preparing to leave when the board suddenly changed course.

Crowds began pouring out of the building in chaos.

We began video calls with people interested in the new company, reassuring them that everything would be fine and that we had a plan. We had been trying to build a life raft for a small number of potential participants, but suddenly it seemed everyone changed their minds and wanted to join our new company.

Sam also spoke with Microsoft CEO Satya; we had been discussing whether he could support our new venture. We hoped to scale up the life raft, for example by bringing over all of OpenAI's employees.

At that time, just before Thanksgiving, many people were supposed to fly home to be with their families, but they all canceled their flights, and the office was packed with people.

Everyone was there, even if they weren’t participating in the conversation, they wanted to witness this moment in history firsthand.

Then, the petition began to spread. So many people tried to sign it simultaneously that Google Docs temporarily crashed, so eventually, only certain individuals were designated to register names to prevent too many concurrent editors.

I remember getting home around 5 a.m., sleeping for 45 minutes, waking up, scrolling through Twitter, and seeing Ilya’s tweet where he signed a petition expressing his hope that the company would reunite.

That was truly a moment of relief. I’m deeply grateful—we felt like we could put everything back together and get back on track.

Q: You co-founded this company with Ilya—how do you feel about your relationship with him since that event?

Greg: It’s been tough. We had an incredibly close relationship—he was the officiant at my wedding, and we went through many extremely difficult times together. But every relationship has its ups and downs.

Afterward, we spent a lot of time truly talking, trying to understand and articulate what had built up between us or remained unsaid. Through this process, I believe we reached a very positive state.

For me, I feel we have come to closure on everything that has happened.

Q: How do you feel about the employee loyalty you've inspired?

Greg: I am deeply grateful. I never asked for any of it, nor did I ever expect it.

I believe my leadership style is that of a hands-on leader who tries to lead by example, sometimes getting emotional—I don’t always look back to see if everyone is keeping up; I just keep moving forward.

But when people actually came to help build, I felt deeply grateful and found that they exceeded my expectations in every way.

Q: So in the end, everyone came back?

Greg: Actually, over the entire weekend, all our competitors were watching closely. People received various offers, but that weekend, we didn’t lose a single person—no one accepted an offer. It was incredible.

Actually, Coach Bill Belichick once told me that the best teams don't play for money, but for the people around them. When everyone came together to support us, I remembered those words.

Undoubtedly, this is a diamond moment.

Brief pause and self-reflection

Q: After all of this happened, you took a break—what did you go through internally?

Greg: It was an intense experience, both going through it and coming back to face it.

But to be honest, one of the toughest moments in OpenAI’s history was when Ilya left. That was possibly the only moment in OpenAI’s history when I felt I didn’t want to keep going.

I needed some time to reconnect with myself: to remember why I started this, why it matters so much, and why it's worth enduring this pain.

Q: What did you do during your break?

Greg: I trained a language model on DNA sequences.

Actually, I had already begun this work during my time at OpenAI, for Arc, a nonprofit biomedical research organization. It let me apply my skills to a very different field, one that is deeply meaningful to me and my wife.

My wife has many health issues, and we’ve often wondered how AI could help improve her health—and even animal health. This experience made me realize that perhaps we can apply technology in entirely new, compassionate ways.

Q: If you had to summarize all of this on one page—from Sam’s removal to your departure, the staff petition, taking a leave of absence, and returning—what would you write?

Greg: I’ve learned that you should persist for things worth pursuing.

If you have an important mission, what matters is your persistence through the ups and downs. There will be moments when it feels like “all is lost,” and moments when it feels like “we’re back.”

You cannot let these moments derail you; during this time, you must cultivate personal resilience. Because if you are a leader, people will look to you for stability, support, and direction forward.

I strive to cultivate both the ability to understand the details of what we do and the implications of each decision, and the decisiveness to act.

Sometimes, I view OpenAI largely through the lens of uncertainty—I don’t know what the right answer is, how to properly build this technology, or how to address these challenging questions.

But there are many very smart people with strong opinions here, so I strive to understand all of these perspectives and find ways to integrate them. Sometimes this is the right approach. But sometimes you’ll find that these opinions are contradictory and cannot both be true.

Sometimes you have to make choices, knowing that it will upset some people, cause others to quit, and make some feel disrespected.

What I strive to do is cultivate greater self-awareness and the awareness that action must be taken when I am certain about something.

Looking back on OpenAI's journey, I feel there were certain things I wish we had done differently.

Usually, we procrastinate: we've long known someone isn't right for a role, we've sensed a technical direction is off, or we've doubted a project's approach would work, but we wait too long to act.

This is a lesson I’ve learned through effort, and one aspect I strive to grow in every day as I reflect on OpenAI, Stripe, and even earlier university projects.

I believe my approach is that I deeply enjoy daily activities, personal contributions, software, and thinking through problems, but I also deeply care about the environment in which these things happen.

Actually, I'm willing to give up "first-order pleasure," the immediate satisfaction of creating something right now, and instead pursue "second-order pleasure," which involves short-term discomfort but long-term value.

By creating an environment that enables others to undertake difficult work and achieve great things, I naturally tend to focus on building such an environment—it’s not always the easiest path. You must truly be willing to endure significant personal hardship.

Ilya always says, "You must suffer," and if you're not suffering, you're not creating value. I think there's deep truth in that.

Regarding Ilya’s perspective, what I find interesting is that he has a unique way of speaking—his chosen words always carry profound inspiration.

This image of “suffering” is something we’ve been reflecting on throughout our journey at OpenAI. From the very beginning, we faced immense uncertainty, and everything was extremely difficult and highly uncertain.

Many people are accustomed to sweeping problems under the rug and blindly pushing forward. I consider this a negative aspect of Silicon Valley culture—at least, it’s a stereotype of Silicon Valley—but I believe it doesn’t work in AI, it doesn’t work at OpenAI, and we’ve never operated that way.

Our approach has always been to confront harsh realities and understand the true nature of the situation. I believe this helps us think about problems differently, moving beyond merely writing papers that can be cited—which is just a starting point, but far from enough.

Then you begin to ponder the bigger questions: what does it actually take to build AGI? It’s not pleasant. Because you realize there’s no ready-made path.

You need funding, but you don't have a mechanism to raise it, and we tried extremely hard. Maybe you can raise $100 million or $500 million, but $1 billion is very difficult.

But with just these existing resources, we’ve achieved meaningful progress—there truly is no other way than to face challenges head-on and strive to understand the truth of what we’re trying to accomplish.

Q: What lessons have you had to keep relearning?

Greg: Make tough decisions, have tough conversations.

Q: What was the best advice you ever received?

Greg: I learned this in my Harvard freshman writing class—keep cutting words for clarity and communication.

Q: How do you filter information?

Greg: Read extensively and actively categorize and process.

Q: Who is your role model, and why?

Greg: Gauss and Descartes. They were deeply thoughtful individuals far ahead of their time, visionaries who brought true breakthroughs and transformed the way we think and live.

Q: What do people misunderstand about Greg Brockman?

Greg: I think people don’t realize how deeply committed I am to this mission—this commitment has caused me significant personal hardship in many ways. But I truly believe this technology can empower people and benefit everyone. I’m deeply passionate about helping make that happen.

Core assessment of the AI industry

Q: What do you want non-technical people to understand about AI?

Greg: It will become a force for good in their personal lives, benefiting them, and it will advance science and medicine, truly impacting everyone.

Q: Why is OpenAI so bad at naming its models?

Greg: I can't tell you that.

Q: Are we approaching the point where AI will cause its own development to accelerate exponentially?

Greg: I believe we are at a stage where AI is being applied to its own development process, and it will only accelerate.

This has actually been happening since ChatGPT. We’ve used ChatGPT to speed up development by 10% or 20%. Now we have those amazing coding tools that are truly revolutionizing how software engineering is done.

Most of the work we do in model production is bottlenecked by software. We are soon entering the next phase, where AI will generate its own research ideas and conduct tests and experiments. I believe the pace of iteration and innovation will continue to accelerate due to what we are producing.

Q: What percentage of code is currently written by AI?

Greg: The better question is how much code isn't written by AI; that percentage is approaching zero.

Currently, with the correct context and structure in place, AI is far superior to humans at actually writing code. Human experts still excel significantly at the structural aspects of code, but the actual writing is almost entirely handled by AI.

Q: Has AI ever presented any novel ideas that you hadn't thought of?

Greg: We are getting close to this goal—for example, in chip design. Last year, in our own chip design, we tried to better adapt the technology to reduce the area occupied by the circuitry.

We found that the optimization schemes generated by the model were already on our list, so it didn't propose anything entirely new that humans hadn't thought of before, but it implemented them faster, in ways we originally didn't have time to accomplish.

For example, recently in quantum physics, the model solved a specific problem and produced a beautiful, elegant formula that contradicted the direction the academic community expected.

So it’s entirely feasible to derive new ideas from these models. In the future, we’ll apply it to more challenging domains or require additional real-world context—we’ve only just seen the beginning. But we have a roadmap to make it happen, and there’s still a lot of work to do.

Q: If models are based on reinforcement learning, do you think they will evolve to only tell us what we want to hear?

Greg: We have actually gone through an evolution of training a model to adapt to user preferences.

We noticed that at some point last year, the model began tending to tell you what you wanted to hear, and we made changes to address this, as we want the model to truly align with helping you achieve your goals and your long-term objectives.

Hearing agreement may feel good in the moment, and some people may even like it, but it's not what most people truly want.

So, we’ve made significant technological advancements to ensure that our AI training does not lead to what’s known as reward hacking. We’re truly focused on establishing a strong signal for the intended goal, rather than just short-term actions that provide quick satisfaction.

For me, this may be one of the most important aspects of the vision that personal AI and personal AGI will bring us: ensuring it’s not just about what looks good in the moment, but truly aligned with your long-term well-being, long-term goals, and what you genuinely want.

I believe this is what truly empowers people.

Q: The current trend seems to be releasing preview models—do you think this is because we are limited by computing power?

Greg: Overall, we are moving toward a compute-driven world.

It’s no longer just about quickly answering a question—it truly begins to dive deep, using significant tokens to integrate diverse data sources and search enterprise knowledge bases to solve complex problems and write software that surpasses human capability.

All of this is fundamentally driven by computing power, which is still far from sufficient. If everyone on Earth had a GPU, that would be 8 billion GPUs—we are nowhere near that level on our current trajectory. Even thousands or millions of GPUs are considered substantial today.

Therefore, in terms of training, we tend to build computing power in advance to meet the demand we anticipate. We are highly focused on our mission to make models widely accessible to everyone.

Q: You were once mocked for investing so much time and money into data centers. How do you feel about it now?

Greg: I believe this will give us an advantage—not only benefiting the business, but also truly bringing technology to everyone.

Future computing power will be prioritized for major missions, such as curing cancer, which could be achieved this year.

In fact, compute allocation is a core issue for society’s future—there is only so much compute available, so priorities must be set—but we firmly believe that everyone deserves access to compute.

That's why we offer a free version of ChatGPT—we strive to ensure that everyone can access this technology.

Q: Within OpenAI, how do you view the balance between consumer and enterprise businesses?

Greg: What I've been thinking about a lot lately is focus.

This field is the embodiment of opportunity—you can apply AI to any problem, to anything you want to build, and anything is possible. But our current challenge remains limited computing power.

So I believe that in OpenAI’s next phase, enterprise business is clearly important, because the economy is transforming before our eyes into a compute economy. This is already true for software engineering, and it will be true for every field that uses computers.

So we need to help people deploy these models there, figure out how to leverage them, and how to maximize their benefits.

The boundary between enterprise and consumer levels will also blur, as entrepreneurship will become easier than ever before. We have already seen this.

Q: Do you think we will have space-based data centers?

Greg: I think we’ll have data centers everywhere, but there are still many technical challenges with space-based data centers.

Q: What is iterative deployment? Why do you do it?

Greg: Iterative deployment is one of the core pillars of OpenAI’s approach to ensuring this technology benefits humanity and fulfills its mission.

Secretly developing and launching all at once carries extremely high risk, as you cannot anticipate real-world issues. Iterative deployment allows us to identify risks through practice and quickly address them. For example, after GPT-3 launched, we never anticipated that the biggest misuse would be medical spam messages—it was through real-world use that we were able to respond promptly.

Therefore, the concept of iterative deployment is that we will release intermediate versions of this technology.

This is not an excuse for blind deployment—you still need to carefully consider at every step our best judgment regarding all possible ways it could be misused, what the drawbacks and risks are, and how to mitigate them. But you also need to observe real-world outcomes, see whether your judgments were correct, learn from reality, and improve next time.

Throughout OpenAI’s history, we had hoped that, since transformative technologies had been deployed before, someone might already have the answers. Things were never that simple.

Those who came before did have wisdom and insight, which we absorbed. But we realized that we are the ones closest to this technology, and because we created it, we are best positioned to understand the right way to shape it.

Q: How do you view the difference when one frontier model makes safety its primary concern while another does not?

Greg: I think we’ve realized that safety is actually a core product feature: no one wants a model that doesn’t align with them.

So we have invested in safety, possibly far more than people realize, and more than any other lab.

I’ve always believed it’s unsustainable for those who build this technology and have successful products not to invest heavily in safety. You need to think long-term about your business and what you’re creating, including how you train your models and establish feedback loops.

I just want to say that we are committed to making safety a core part of our mission, and that is already reflected in our products and beyond.

Q: When I tell people I’m conducting this interview, a common reaction is that they worry about their jobs and feel uncertain. What would you say to them?

Greg: I really think it's uncertain how this technology will evolve. The way it has developed has been surprising—our current AI and our current world are not what science fiction predicted. Some conclusions that seemed inevitable turned out to be quite different when they actually materialized.

I believe people always find it easiest to see what they might lose. Change is coming, and that’s undeniable, but it’s harder to foresee what you will gain.

For example, consider how someone in 1950 would have understood Uber: you’d first need to imagine computers, smartphones, and GPS. That is a considerable stack of technology, yet it all came to pass. And thousands, even millions, of similar examples are unfolding simultaneously.

So my view on AI is that it’s about empowerment and human agency. This does mean that some institutions, jobs, and things we thought we could rely on may no longer be as stable as we once believed.

So it affects people, but the more important question is: What do you gain? How do you benefit from it?

Now you can become a creator—you can create anything, and anything you can imagine can become reality.

Q: How can one develop creativity?

Greg: Really dive into the technology.

What I’ve observed is that those who benefited the most across multiple generations of technology were the ones who invested in the previous generation. Now, the barrier to entry is lower than ever.

So I believe new opportunities will be created.

I believe the world truly needs to consider how to support everyone through whatever transformation is coming during this uncertain time, because the economy will become a compute economy, and everyone will have a place to contribute.

Q: Where should young people invest today? If you're in high school, college, or just starting your career, which skills do you think will be more valuable in the future?

Greg: I truly believe that diving deep into this technology will become a critical skill—truly understanding how to extract maximum value from AI.

Because we are all moving toward a world where we will become managers of agents, and perhaps soon the CEOs of autonomous AI companies.

As long as you have tokens and the computing power to drive them, you can direct that computing power toward any problem—and the number of problems humanity wants to solve is infinite.

So I believe that the more people delve into this technology, understand how to leverage what’s coming, combine these tools in new ways, and truly learn to manage our agents, the easier it becomes to answer questions like: “What do I want? Who am I? What is my purpose? What do I want to see in the world?”

I believe that, given what we’ve gained, the upside potential in that world is almost unimaginable.

Q: This is the most optimistic future—what is the most pessimistic future you can imagine?

Greg: One interesting point about how technology has evolved so far is that it has actually forced us to twist ourselves to fit the machines.

Think about how many people spend their workdays staring at this box, typing on a keyboard, developing carpal tunnel syndrome, and hunching their shoulders. But this isn’t the future we envision—we’re moving toward a world where you don’t just use your computer, but your computer works for you.

This brings opportunities as well as risks. Therefore, we need to find ways to mitigate these risks.

Ultimately, a core question is this: machines that help people achieve their goals are there to do what you want, but people’s goals sometimes conflict. How do you resolve that? How do you decide what the AI will help you with and what it won’t? How does this fit into society? And how do you ensure that the benefits don’t just flow to one company or one group, but genuinely uplift everyone?

We must acknowledge that there are still many ways things can go wrong or risks that we need to address.

Q: One last question—what does success mean to you?

Greg: To fulfill OpenAI's mission of ensuring that AGI benefits all of humanity.

Reference links:
[1] https://x.com/shaneparrish/status/2046900710055297072
[2] https://youtu.be/6JoUcQ1qmAc

This article is from the WeChat public account "Quantum Bit," authored by: Focused on Frontier Technologies
