Current and former employees of OpenAI and Google DeepMind are speaking out against what they describe as a culture of recklessness and secrecy at OpenAI. They are advocating for a 'right to warn' for employees of advanced AI labs, urging better protections and greater transparency in the AI industry.
"A Right to Warn about Advanced Artificial Intelligence" @OpenAI needs to provide this! "the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public" https://t.co/inhBgan7H0
🤖🇺🇸 #AI whistleblowers call for 'right to warn' on risks! Ex-employees of OpenAI, Anthropic, and DeepMind launch a petition to enhance protections, allowing them to speak out on AI dangers. Could this revolutionize AI industry transparency? https://t.co/wDJwMrPVE3
BREAKING: An open letter signed by over a dozen current and former OpenAI and Google DeepMind employees calls for better protections for AI whistleblowers. It's also "endorsed" by three pioneering AI researchers. https://t.co/U7FZo0Wvey
"A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness" Safety culture starts at the top, these whistleblowers have identified very poor AI safety leadership at OpenAI. https://t.co/OvupUO6xIA
Breaking: a group of current and former OpenAI employees is speaking out about what they say is a culture of recklessness and secrecy at the company. They are asking for a “right to warn” for employees of frontier AI labs. https://t.co/KHz45Wi0t4
A group of nine current and former OpenAI staff, and two Google DeepMind staff, blow the whistle on what they say is a culture of recklessness and secrecy (@kevinroose / New York Times) https://t.co/GY1I4ehWmF