OpenAI is developing a tool that it says can detect AI-generated images with 99% accuracy, as concerns about transparency in AI continue to grow. The announcement comes amid recent research from Stanford, MIT, and Princeton, which found that major AI companies, including OpenAI, score poorly on transparency. It also follows OpenAI's earlier AI-generated text detector, which was discontinued because it was too inaccurate. As the AI arms race continues, companies are increasingly building tools to detect AI-generated content.
As predicted, the arms race continues: AI companies are also building tools to detect AI-generated images. https://t.co/E53MoZUG7i https://t.co/HW25g1zg9K
Researchers at Stanford graded 10 major AI companies on the transparency of their flagship AI models. Every single one got an F, writes @_KarenHao. https://t.co/sSXG7HR5Xa
A new index created by US researchers suggests companies in the foundational AI model space are becoming less transparent about their creations. https://t.co/X06AB985Mk
AI Is Becoming More Powerful—but Also More Secretive https://t.co/0knb5nFQxp
Leading AI models are lacking in transparency, report claims https://t.co/X06AB98DBS
Stanford researchers have ranked 10 major A.I. models on how openly they operate. https://t.co/Gcnz3I67Wb
Major AI models not very transparent, says report: Artificial intelligence-based foundation models such as Meta's Llama 2 and OpenAI's GPT-4 are low on transparency, according to a global report on Thursday. https://t.co/c9yaiBSQ5c
AI models lack transparency: research https://t.co/Scudk77C5s
Generative AI is set to transform crisis management https://t.co/zN7dK93xdV
Stanford researchers unveil the Foundation Model Transparency Index, with 100 indicators; Llama 2 leads at 54%, GPT-4 is third at 48%, PaLM 2 is fifth at 40% (@kevinroose / New York Times) https://t.co/PIpHxQKmu5 https://t.co/IVeMSW4p8r
interesting research just dropped from Stanford/MIT/Princeton: the foundation model transparency index, constructed by analyzing public info from 10 key AI companies. nobody did great! @AIatMeta #1 w/ Llama 2, @huggingface #2 w/BLOOMZ, @OpenAI #3 w/GPT-4. https://t.co/bydkF0jBA6 https://t.co/72ItABXznZ
Stanford researchers issue AI transparency report, urge tech companies to reveal more https://t.co/o7SWn4scGV https://t.co/Pe7C9K2quO
Will be interesting to see how well this works when it’s eventually released; OpenAI had an AI-generated text detection effort but it was shuttered in July because it was too inaccurate. OpenAI Claims Tool to Detect AI-Generated Images Is 99% Accurate https://t.co/P1qTJVa3wO
OpenAI has said that it is building a tool that can detect images generated by artificial intelligence (AI) with 99% accuracy. https://t.co/YqaPwPzQE0
OpenAI is building a tool that it claims is 99% accurate at detecting images created by artificial intelligence https://t.co/hoPp8Y21e7