If Microsoft, Google, and OpenAI refuse to share any parameters of their generative artificial intelligence platforms, can we trust them?
The secrecy-by-default culture of "big AI" sets a dangerous precedent, given society's growing acceptance of generative AI and the possibility that the tech could fall prey to bad actors. That's the argument laid out by Baldur Bjarnason, who warned in a recent blog post that when AI is a black box, it leaves the companies using the tech vulnerable to a new form of "black-hat keyword manipulation."
“We’ve known . . .
Black Hot Fire Network
June 6, 2023