OpenAI co-founder and chief scientist Ilya Sutskever has announced his departure from the company, six months after a failed attempt to oust CEO Sam Altman. His resignation was followed by that of Jan Leike, co-lead of OpenAI's Superalignment team, which was dedicated to mitigating the long-term dangers of advanced AI. Leike cited disagreements with OpenAI's leadership over the company's core priorities, stating that safety culture and processes have taken a backseat to 'shiny products.' Following these resignations, OpenAI disbanded the Superalignment team, integrating its efforts into broader safety research initiatives. Jakub Pachocki, who had served as Director of Research, has been named the new Chief Scientist. The dissolution of the team and the departure of key figures have raised concerns about OpenAI's commitment to AI safety.
Times of India @timesofindia: A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company. #AI #ArtificialIntelligence #aistrategy https://t.co/a97epDXA3i
OpenAI leader resigns, accuses company of putting ‘shiny products’ above safety #OpenAI #ChatGPT https://t.co/eUV9ypCj3q
OpenAI putting ‘shiny products’ above safety, says departing researcher #OpenAI #AI #TechAI https://t.co/M5QgBzNbMD
Former OpenAI leader says safety has ‘taken a backseat to shiny products’ at AI company https://t.co/izVLqzS10T
OpenAI has effectively dissolved a team focused on ensuring the safety of possible future ultra-capable AI systems, following the departure of the group’s two leaders. https://t.co/V0MLTbEi8P
Financial Times @ft: A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company. #AI #ArtificialIntelligence #robotics https://t.co/a97epDXA3i
OpenAI disbands team devoted to artificial intelligence risks #AI #artificialintelligence #Cybersecurity #llm #machinelearning #OpenAI #supersmartAI https://t.co/djEwCXrkTu https://t.co/tsygb80wsq
Jan Leike, a former OpenAI leader who resigned from the company this past week, said that safety has “taken a backseat to shiny products” at the influential artificial intelligence company. https://t.co/ElUwWjgdz2
Major changes at @OpenAI: The team dedicated to addressing #AI existential risks has been dissolved. Co-founder Ilya Sutskever and Jan Leike, co-lead of the superalignment team, have both resigned. https://t.co/Zxq76bObFV
🧠Ilya Sutskever, co-founder and chief scientist of OpenAI, has left the company. His departure comes six months after a failed coup against CEO Sam Altman. Sutskever hints at a new project on the horizon. Full story ⬇️ https://t.co/WtCLExSOvJ
OpenAI on Friday confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence. https://t.co/IV11xfcwsz
A former OpenAI leader who resigned from the company this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence company. https://t.co/Klphsn75xB
Ilya Sutskever, co-founder of OpenAI, and Jan Leike, a leading researcher in AI superalignment, have abruptly resigned this week. 20 hours ago, Jan posted a thread stating that OpenAI isn't taking alignment seriously. Greg and Sam just replied. Here's everything you need to know:
➡️ OpenAI discontinues its team dedicated to studying the risks associated with 'rogue' AI, raising concerns about AI safety. 🤔🤖 https://t.co/QTy5YTXzSt
This is a lot of text, but doesn’t answer fundamental questions: - Did OpenAI give the superalignment team the compute it promised to? - How is shuttering that team consistent with a need for an “enormous amount of foundational work”? - Why are there such restrictive NDAs? https://t.co/z0eRpK44cG
OpenAI created the 'Superalignment' team to help keep 'superintelligent' AI safe, and now it has been disbanded: Report https://t.co/QucA0f27DI https://t.co/5kJ3FKvIH4
What happened to OpenAI’s long-term AI risk team? https://t.co/qTQwj0Ffy1
#OpenAI has confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence. https://t.co/Jlnt27lv1G
I still can't believe the Superalignment project at OpenAI is just gone. It's only been 10 months since it was announced. The world deserves an explanation from "Open"AI.
OpenAI has dissolved its long-term AI risk team It was assembled last year to safeguard against the existential threats posed by future powerful AI A huge win for the robots who seek to rule us https://t.co/NAydv2S3Vf
Ilya Sutskever, co-founder of OpenAI, and Jan Leike, a leading mind in AI alignment, have abruptly resigned this week. Just 20 hours ago, Jan dropped a tweet thread stating that OpenAI isn't taking alignment seriously. Here's everything you need to know:
#OpenAI dissolves safety team led by Ilya Sutskever and Jan Leike, integrating efforts for #AI system safety goals. https://t.co/6h1kQtMz1y
OpenAI witnesses key departures as AI safety researchers resign due to prioritization issues, leading to the dissolution of the 'Superalignment' team.
🤖🇺🇸 OpenAI prioritizing flashy products over safety, says ex-researcher. Departing safety lead Jan Leike reveals internal clashes on AI goals just before a crucial AI summit in Seoul. https://t.co/xktvzePNJZ
OpenAI’s AI safety team disbanded amid priority disagreements. https://t.co/Phon9IIuej
OpenAI dissolved its team dedicated to preventing rogue AI The company ended the project less than a year after it started. OpenAI has disbanded its “superalignment team” tasked with staving off the potential existential risks from artificial intelligence less than a year after… https://t.co/7EpJozm31S
OpenAI putting ‘shiny products’ above safety, says departing researcher https://t.co/LuEMGjvag8
Lot of absurd takes like this on the superalignment team leaving OpenAI. The more likely reason they left is not because Ilya and Jan saw some super advanced AI emerging that they couldn't handle but that they didn't and as the cognitive dissonance hit, OpenAI and other… https://t.co/TH36beJnTX
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC. https://t.co/Q4RxgdMKW5
Superalignment Team Dissolved at OpenAI "OpenAI is now integrating the group more deeply across its research efforts to help the company achieve its safety goals, the company told Bloomberg News." https://t.co/yKMLhhIxWe
OpenAI disbands team devoted to artificial intelligence risks https://t.co/y6VfOdWq3b
Cofounder Ilya Sutskever made headlines when he announced his exit, but he's far from the only OpenAI employee to jump ship. https://t.co/AGPsehh5IE
A former OpenAI leader who resigned from the company earlier this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence company. https://t.co/8sqqFkRqR0
OpenAI's safety leader has resigned from the company, stating that "over the past years, the safety culture has taken a backseat to shiny products." https://t.co/YInDacamz4
A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company https://t.co/WO7hX83Ts5
Ex-OpenAI team leader says safety has 'taken a backseat to shiny products' at the company https://t.co/jZTLgAdJQn
OpenAI chief scientist Ilya Sutskever is leaving https://t.co/FvTn7kAMkc
Wow. Jan Leike, who leads alignment at OpenAI, is resigning, and he explains why. 😮 TL;DR - Jan claims that: 1. OpenAI is getting close to AGI 🚨 2. AGI is dangerous for humanity ⚠️ 3. OpenAI is putting safety in the backseat in favor of new shiny products 🛑 Here is his full… https://t.co/14fXcMcdff
OpenAI Dissolves High-Profile Safety Team After Chief Scientist Sutskever’s Exit https://t.co/96KklGDtCa
Former OpenAI leader Jan Leike blasts company for letting 'safety culture' take a 'backseat to shiny products.' https://t.co/9F74thW1PZ
🚨 WHO IS PROTECTING HUMANITY AT OPENAI? THEY ALL RESIGNED… Ilya Sutskever, co-founder and former chief scientist, and Jan Leike, co-leader of the superalignment team, have resigned from OpenAI amid growing concerns over CEO Sam Altman's leadership and commitment to AI… https://t.co/JuoNmQLvK5
OpenAI no longer has a separate "superalignment" team tasked with ensuring that artificial general intelligence (AGI) doesn't turn on humankind. https://t.co/SMX0sV1sru
One of OpenAI's safety leaders quit on Tuesday. He just explained why. https://t.co/4U8rM49i2V
Why did @janleike depart @OpenAI this week? The former exec tweeted a long explanation, claiming that safety took "a backseat to shiny products," among other complaints. Click to read: https://t.co/CuD5G7k26i
OpenAI disbands safety team focused on risk of artificial intelligence causing ‘human extinction’ https://t.co/fMBfWDKSTy https://t.co/yyEWtIfiZL
Jan Leike, an OpenAI researcher who resigned after co-founder Ilya Sutskever's departure, posted that “safety culture and processes have taken a backseat to shiny products” at the company. Serious allegations about AI safety—what are your thoughts? 🤔 https://t.co/MnfvMpPZRh
A former OpenAI leader says safety has “taken a backseat to shiny products” at the AI company https://t.co/Cbx8GFpRFO
Jan Leike, an OpenAI researcher who resigned earlier this week after co-founder Ilya Sutskever's departure, posted that “safety culture and processes have taken a backseat to shiny products” at the company. Making serious allegations against OpenAI about AI safety. What are your… https://t.co/LDquW1YyQi
A former OpenAI leader who resigned from the company earlier this week said on Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company. https://t.co/bpdVP65fnA
HOLY CRAP! OpenAI’s head of alignment and superalignment lead, Jan Leike, has just resigned, writing the following: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point. I believe much… https://t.co/ugcGoXJsgs
Former OpenAI leader says safety has ‘taken a back seat to shiny products’ at influential AI company https://t.co/t5SuTtjtxe
OpenAI put ‘shiny products’ over safety, departing top researcher says https://t.co/KvZ5H0u9wJ
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation says. https://t.co/JUpMn9WeyF
🚨🇺🇸 OPENAI DISBANDS AI RISKS TEAM* OpenAI has dissolved its team focused on long-term AI risks less than a year after its formation. This follows the departures of key leaders Ilya Sutskever and Jan Leike, who criticized the company for prioritizing products over safety.… https://t.co/t018XhI4g2
🤖🇺🇸 OpenAI dissolves team focused on long-term AI risks, less than a year after its announcement. Leadership exits and resource struggles prompt concerns over the company’s safety-first approach. #AI #TechNews https://t.co/yZETQEWQX3
OpenAI Exec Who Quit Says Safety 'Took a Backseat to Shiny Products' ► https://t.co/Nmh3andT70
OpenAI researcher resigns, claiming safety has taken ‘a backseat to shiny products’ https://t.co/Oe9CNukqYN
The OpenAI team tasked with protecting humanity is no more https://t.co/2IxsuSKsEg
🤖🇺🇸 #AI Crisis: OpenAI's Safety Team Implodes Amid Leadership Fallout. Key minds like Ilya Sutskever and Jan Leike exit, citing lost faith in leadership and company priorities. What does this mean for AI safety? https://t.co/qSkfA7NyY1
OpenAI just dissolved its team dedicated to managing AI risks, like the possibility of it 'going rogue' https://t.co/cwDJqtBEGW
OpenAI Prioritizes 'Shiny Products' Over AI Safety, Ex-Researcher Says https://t.co/Bz2CTYYnWZ
Much respect to @janleike for this thread explaining why he quit as a leader of the AI safety team at @OpenAI: 'over the past years, safety culture and processes have taken a backseat to shiny products'. (Note that OpenAI employees must sign draconian non-defamation agreements,… https://t.co/y8aTCKw7id
OpenAI has disbanded its team focused on the long-term risks of AI - CNBC Jan Leike who co lead the team just posted this 👀 https://t.co/61yYKg8ILQ
🚨OpenAI dissolves team focused on long-term AI risks. #OpenAI #LLMs #GPT4 https://t.co/ypsAZnLU7V
OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it https://t.co/zSPsoWAJDE
NEW: Less than 1yr after OpenAI announced its team focused on long-term AI risks, the company has disbanded it, a source familiar confirmed to CNBC on Friday. The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever & Jan Leike, quit. https://t.co/ab0g0vthPa
The Ilya-Sam Drama Is Fundamentally About OpenAI's Mission As I suspected, OAI didn't have the compute to spare for "super alignment." Jan Leike, who resigned from Open AI, just confirmed it! The whole Sam-Ilya fight is one of focus - focus on AGI and superalignment OR focus on…
🤖🇺🇸Drama at OpenAI! Their "superalignment team" tasked with averting AI risks has disbanded. With key players exiting, and the company's AI ambitions facing a rough patch, there's a lot to unpack here! https://t.co/5a01fAkfd5
[Thread] Superalignment team co-lead explains why he has left, says OpenAI's safety culture and processes took a backseat to shiny products over the past years (@janleike) https://t.co/62hPNcGjm3 📫 Subscribe: https://t.co/OyWeKSRpIM https://t.co/ZY8HlfwtQa
This is a great thread from the ex-head of superalignment at OpenAI who resigned due to safety concerns. Worth a read. https://t.co/z8SVVKtlWc
Top OpenAI researcher resigns, saying company prioritized "shiny products" over AI safety. https://t.co/j0UR0z3zCU
Jan Leike, who resigned from OpenAI recently, describes why he left the AI lab. https://t.co/Kv65PAlhuY
Head of the “alignment” team (safety & reliability) at OpenAI is leaving. “I have been disagreeing with OpenAI leadership about…core priorities for quite some time, until we finally reached a breaking point…safety culture and processes have taken a backseat to shiny products.” https://t.co/qAxZSu39oY
OpenAI's Superalignment team is reportedly no more. At the same time, lead researcher Jan Leike says the team has struggled for months to secure enough resources to do its work. https://t.co/daPKSkpqxq
worthwhile read 👇 Jan co-headed the Superalignment team at OpenAI with Ilya, and has left yesterday, he goes into why: https://t.co/QcP3Aa1vDQ
Source: the Superalignment team was promised 20% of OpenAI's compute resources but requests for a fraction of that were often denied (@kyle_l_wiggers / TechCrunch) https://t.co/EX2r70YGTx 📫 Subscribe: https://t.co/OyWeKSRpIM https://t.co/eHDaCR068E
OpenAI Reportedly Dissolves Its Existential AI Risk Team https://t.co/VEnEK2eNzr https://t.co/oottMQplQw
OpenAI's former superalignment leader blasts company: 'safety culture and processes have taken a backseat' https://t.co/CGj5DgQ7gp
Jan Leike, who resigned as head of alignment at OpenAI this week, says "safety culture and processes have taken a backseat to shiny products". This is not good. https://t.co/S5yeDy1OMG
It's a bit scary when your head of safety resigns because he sees that AI safety isn't a priority, but shiny products are. OpenAI is probably the most culturally significant company at this point in time, and I doubt how much they're entirely aligned with humanity... https://t.co/fTK32IlDzQ
When OpenAI announced the superalignment team, it said it would be “dedicating 20% of the compute we’ve secured to date to this effort”. Yet co-lead Leike says the team was struggling for resources — curious if that means OpenAI reneged on its commitment. https://t.co/4HbFkQFlCL
NEW: OpenAI has dissolved its superalignment team that Ilya Sutskever used to lead. The group will be integrated across OAI's broader safety research efforts. Jan Leike, the frmr co-lead w/ Ilya, said the team was fighting for resources. w/ @rachelmetz https://t.co/YhxaWNywvS
Why Jan Leike resigned from OpenAI: - Disagrees with OpenAI leadership - Should focus on future model preparation/safety - Not on right trajectory - Team has struggled for resources - Safety culture and processes been deprioritized - OpenAI must become a safety-first AGI company https://t.co/STzEvqdpsz
Top OpenAI researcher resigns, saying company prioritized ‘shiny products’ over AI safety’ https://t.co/ui89U6XcbD
➡️ Disbandment decision! OpenAI's long-term AI risk team has been disbanded, indicating a shift in their approach. https://t.co/iCoKaItrym
OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups (@willknight / Wired) https://t.co/WSAxxQgk5R 📫 Subscribe: https://t.co/OyWeKSRpIM https://t.co/FoqgL4uFYV
OPENAI’S LONG-TERM AI RISK TEAM HAS DISBANDED: WIRED
SCOOP: The entire OpenAI team focused on the existential dangers of AI has either resigned or been absorbed into other research groups, WIRED has confirmed. https://t.co/vd3QSbc5co
The entire OpenAI team focused on the existential dangers of AI has either resigned or been absorbed into other research groups, WIRED has confirmed. https://t.co/E7AZs6cVwP
When Ilya Sutskever left OpenAI this week, the firm lost its last influential leader known to question CEO Sam Altman's push to deploy AI fast. https://t.co/lg025zUoPV
OpenAI Co-Founder Ilya Sutskever Departs, Sam Altman Hails Him As 'Guiding Light' https://t.co/b7WEnlV2VT
OpenAI co-founder and chief scientist Ilya Sutskever departs #OpenAI #AI #TechAI #LearningAI #GenerativeAI #DeepbrainAI #ArtificialIntelligence https://t.co/aKNFQOnDSJ
Jan Leike And Ilya Sutskever Abruptly Resign From OpenAI https://t.co/FQeEuWMjZi
Is OpenAI's superalignment team dead in the water after two key departures? https://t.co/YlVHYJPMuB
OpenAI co-founder Ilya Sutskever QUITS startup just months after sensationally kicking CEO Sam Altman out - then rehiring him https://t.co/4BDHc75Qr9 https://t.co/JsftldcqF1
OpenAI chief scientist Ilya Sutskever is officially leaving #DisruptiveTech https://t.co/kpn3YfUp4e
A large fraction of OpenAI's safety and governance researchers were fired or resigned in the last few months, including the two leads of the superalignment team, which was premised on making AGI safe. https://t.co/ItNlNgi8qK
Key player in on-again, off-again ouster of OpenAI CEO Sam Altman is leaving the company. https://t.co/w9HpW0x5uH
Ilya Sutskever has announced he is leaving OpenAI. He has played a key part in developing the conversational AI systems that are starting to change society https://t.co/F4hMBYrXUk
OpenAI co-founder who had key role in attempted firing of Sam Altman departs https://t.co/bsRbj0GNX0
Two veteran OpenAI employees, Ilya Sutskever and Jan Leike, have announced their resignation. https://t.co/SgFuoqf63d
Ilya Sutskever to xAI? OpenAI's Chief Scientist Ilya Sutskever is leaving the company. My sense is the writing was on the wall since Nov-23 when Ilya regrettably recommended the board remove Sam Altman. Despite that misstep, he is still one of the world's greatest AI talents.…
OpenAI co-founder Ilya Sutskever departs ChatGPT maker https://t.co/dQNMmx1Myy
I thought Superalignment was a positive bet by OpenAI, and I was happy when they committed to putting 20% of their compute towards it. I stopped thinking about that kind of approach because OAI already had competent people working on it. Several of them are now gone. It seems…
It's indeed disturbing to see the leaders of OpenAI's superalignment efforts leave. This is the second wave of mass departures from OpenAI's safety team (the first resulted in Anthropic)... 100% agree we can't trust any centralized actor to control AI and make it safe https://t.co/TZlrVhpwSc
Ilya Sutskever, the OpenAI co-founder and chief scientist who played a major role in an ultimately unsuccessful bid to push chief executive Sam Altman out of the business last year, is now on the way out of the company himself. https://t.co/Q0AZMx6D6m
OpenAI's leadership shakeup continues as another of Sam Altman’s chiefs follows Ilya Sutskever out the door within hours. https://t.co/YDxJKnINQT
OpenAI co-founder to depart ChatGPT maker https://t.co/Y5wujVJcxV
OpenAI’s chief scientist and co-founder Ilya Sutskever has left the startup just months after taking part in a failed attempt to oust CEO Sam Altman. https://t.co/97ZmnXrm9o
OpenAI chief scientist Ilya Sutskever who helped lead coup against CEO Sam Altman quits https://t.co/pS85U2DQR6 https://t.co/50mNvctP7I
Canadian Ilya Sutskever out as OpenAI's chief scientist https://t.co/Y3L2B1NWIX https://t.co/F0WovhHKHF
OpenAI chief scientist Ilya Sutskever is officially leaving. https://t.co/VVl7neu0E7
OpenAI Co-Founder & Chief Scientist Departs AI Firm https://t.co/9YmeAQ1aaQ
A clear and disturbing pattern is emerging in OpenAI: whether of their own accord or not, figures involved in AI safety are leaving the company. Ilya Sutskever (co-founder & Chief Scientist) and Jan Leike have joined William Saunders and Daniel Kokotajlo in departing the company.… https://t.co/39qm1OF0Kq https://t.co/C36AyP9HAY
Ilya Sutskever, OpenAI co-founder and chief scientist quits - Check Sam Altman's reaction @sama https://t.co/VYRREoD0kU
So now the leaders of OpenAI’s super alignment team @ilyasut and @janleike are both leaving the company… 😬 https://t.co/LoH6inBu6c
Both co-leads of OpenAI's much-publicised superalignment team have just resigned without explanation. This was the team tasked with avoiding catastrophic risk from the company's future AI systems. https://t.co/Dyffjg0HSF
OpenAI chief scientist Ilya Sutskever is leaving. But what did he see? https://t.co/aFQ50OacTe
OpenAI co-founder Ilya Sutskever parts ways with company, shares what's next https://t.co/0oy2zQNVwV
Ilya Sutskever leaves OpenAI, with Jakub Pachocki named as the new Chief Scientist - Ilya is leaving OpenAI to work on a personal project - Jakub has served as Director of Research, leading the development of GPT-4 and OpenAI Five https://t.co/Any4xPl11b
#OpenAI co-founder and chief scientist Ilya Sutskever is leaving the startup at the center of today’s artificial intelligence boom. https://t.co/bbuj3bSEiB
Another top OpenAI exec just announced he's out, hours after Ilya Sutskever said he's leaving the company https://t.co/fIgAGZjMAw
OpenAI co-founder and Chief Scientist Ilya Sutskever is leaving the company https://t.co/uz3rYHiRlz
Jan Leike, who was co-leading OpenAI's Superalignment team with Ilya Sutskever to "steer and control" more powerful AI, has also resigned from the company (The Verge) https://t.co/2cshg6DuXp 📫 Subscribe: https://t.co/OyWeKSRpIM https://t.co/D8AYHVzNx4
OpenAI’s co-founder and chief scientist Ilya Sutskever is leaving the company https://t.co/gt6kitFZbp
OpenAI co-founder Ilya Sutskever announces departure #ARYNews https://t.co/gpTJQ0gcVs
OpenAI cofounder Ilya Sutskever is leaving the company 6 months after the failed Sam Altman ouster https://t.co/bpWzHXaokR
OpenAI co-founder Ilya Sutskever announces departure from ChatGPT maker https://t.co/Swrko0ggUo
Ilya and Jan have both left OpenAI. https://t.co/ZDYAhTcdjG https://t.co/tkokFDFvF1
🚨 Jan Leike resigned from OpenAI after Ilya left the company. He was part of the superalignment team. How many more will leave after Ilya left? https://t.co/Ti5R74nebk
OpenAI's Chief AI Wizard Ilya Sutskever Has Left the Building https://t.co/q7zEtyKcDU
Jan Leike, who headed up the superalignment team at OpenAI with Ilya Sutskever, says he resigned hours after Ilya announced he’s left the company https://t.co/WWf5UIJMDR
OpenAI Chief Scientist Ilya Sutskever is leaving the artificial intelligence company https://t.co/Z33bv2P8PT
OpenAI Co-Founder Ilya Sutskever Is Leaving The Company 6 Months After Trying To Remove Sam Altman https://t.co/u0E1Mt9xw6
OpenAI co-founder and chief scientist Ilya Sutskever is leaving the startup at the centre of today's artificial intelligence boom. https://t.co/Qr1xSuKvQk