The U.S. Department of Homeland Security has established a new Artificial Intelligence Safety and Security Board to advise on the safe and secure use of AI technologies. The board has drawn criticism for its composition: it includes the CEOs of Delta Air Lines and Occidental Petroleum but no prominent open-source AI advocates such as Elon Musk or Mark Zuckerberg. Critics have argued that many members hold vested interests in rapid AI adoption, that the board lacks bipartisan and open-source representation, and that fewer than half of its 22 members have significant AI expertise; some have also objected to the inclusion of Rumman Chowdhury, Twitter's former head of AI ethics, whom detractors label "ultra-woke." The board is part of a broader initiative to foster the responsible development of AI, amid calls for effective regulation to ensure the technology delivers benefits rather than risks.
Safeguarding Critical Infrastructure: Tech Leaders Join Forces with Biden Administration for AI Safety #advisorypanel #AI #AIregulation #AIsafety #AIthreats #artificialintelligence #autonomousdrones #CEOs #criticalservices #cyberattacks https://t.co/loR3rLXmow https://t.co/A1RI0ZSoAz
Biden's AI Safety Board doesn't include @elonmusk Who does it include? CEOs of some companies who can't even spell AI, and also Rumman Chowdhury, Twitter's former head of AI Ethics, which @elonmusk fired for being ultra-woke 🤡 _ https://t.co/7ddGdyufgu
Artificial Intelligence Safety and Security Board. @sama antropic, google, nvidia... https://t.co/FEc78iS6HT
Funny how Biden’s AI safety board doesn’t include Elon Musk, but it does include the ultra-woke head of AI ethics he fired from Twitter.
Can’t decide whether it is funny, ridiculous or sad that the CEOs of Occidental Petroleum and Delta Airlines are on the new AI safety board. Less than half of the 22 members have any real AI knowledge. Also odd that it’s not even vaguely bipartisan, has zero open source AI… https://t.co/o5zO0fsqRy
The current administration brought all these Closed AI folks together to create an "AI Safety Board." Noticeably absent from this list are two of the most prominent leaders in the space - Zuck and Elon. This is absolutely terrifying, to say the least!! https://t.co/WxTVuHFdc3
US sets up board to advise on safe, secure use of AI #ARYNews https://t.co/DYOyojjCHu
🤖🇺🇸 Urgent Call to Action! The U.S. needs to nail it on AI safety regulation. An expert panel weighs in on how to make AI beneficial, not dangerous. Let's get this right! https://t.co/f58ZjodVy8
Seattle mayor joins tech leaders on Homeland Security’s new AI safety and security board https://t.co/iwn2xlddnu
Looking forward to serving on the @DHSgov AI Safety & Security Board as we work to strengthen American resilience in today’s rapidly evolving threat landscape. https://t.co/9nBvUbuENE
Why does the new Artificial Intelligence Safety and Security Board have so many members with clear vested interests in rapid, unregulated AI adoption, and so few who have AI expertise without those vested interests? https://t.co/U31GvDnGVM
Why is the CEO of Delta Airlines and Occidental Petroleum on the AI Safety board? Policy wonks unite! https://t.co/bLT6H7wGRf
The concerning part of this list isn’t that the CEO of Delta is considered an expert of AI. The concerning part is that supporters of open-source AI (Musk, Zuckerberg) did not make the list, https://t.co/Ey7Ei9O5BD
No Elon on the new Homeland Security AI Board. https://t.co/ZfD3n7pe5r
AWS is proud to serve as an inaugural member of the @DHSgov Artificial Intelligence Safety and Security Board. As one of the world’s leading developers and deployers of #AI tools and services, we support fostering the safe, secure, and responsible development of AI technology. We… https://t.co/tnAHiLaVjb
>the people who are leading the closed source commercializing AI are on the "Safety" and "Security" Board no conflicts of interests here at all https://t.co/iN4N9mBEfa
Given the potentially catastrophic consequences of unchecked AI, there is a clear need for international guardrails to ensure that it serves the common good, argues @UniLuzern’s Peter G. Kirchschläger. https://t.co/QK65XZo9ZH