Non-profit AI research firm OpenAI has created an AI that can generate entire fictional stories from just a few human-seeded words.

Now the firm thinks they may have created a monster.

Called GPT-2, the AI predicts the most likely next word in a sentence, enabling it to form convincing, “human-like” text. It can do this because it was trained on a 40 GB dataset of internet text, which it draws on to string together cohesive sentences and build entire “stories”. Elon Musk, one of OpenAI’s founding members, was criticized along with the firm after it announced that it wouldn’t publicly release a “full version” of GPT-2; Musk made it clear on Twitter that he was no longer affiliated with OpenAI when GPT-2 was developed.
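For readers curious what “predicting the next likely word” means in practice, here is a minimal sketch using the small, publicly released version of GPT-2 through the Hugging Face transformers library. The prompt text and the top-5 cutoff are illustrative choices, not details from OpenAI’s announcement.

```python
# A minimal sketch of GPT-2-style next-word prediction, using the small,
# publicly released GPT-2 model via the Hugging Face "transformers" library
# (pip install transformers torch). Prompt and top-k are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (batch, sequence, vocabulary)

# The scores at the final position rank every candidate next token.
top = torch.topk(logits[0, -1], k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([token_id.item()])), float(score))
```

Repeating this step, with each chosen word fed back in as part of the input, is how the model strings together entire passages.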

Despite its success, OpenAI decided not to release the full version of GPT-2 to the public just yet. OpenAI’s Policy Director, Jack Clark, expressed apprehension that it could be exploited by “bad actors”.

Clark emphasized that OpenAI’s policy is “not enabling malicious or abusive uses of the technology”, and his concern is not without basis – GPT-2 writes well enough that it could be used to impersonate other writers, generate fake news items, and automate spam or abusive comments on social media.

In an era of fake news and “weaponized” social media, is OpenAI right not to release a fully functioning GPT-2? Should they never have developed such a potentially damaging AI in the first place? Let us know your thoughts in the comments!
