
OpenAI Holds Back From Releasing Its Artificial Intelligence-Enabled Writer as Open Source Over Fears of Creating Fake News

Artificial Intelligence News

At OpenAI, a team of researchers said that they were trying to develop an algorithm that could generate natural-sounding text based on wide-ranging research and language processing. But they soon recognized that it was capable of producing forged news stories, taking cues from the 8 million web pages it trawled to learn about language.
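To give a sense of what such a system does, the following is a minimal sketch, not OpenAI's own code, assuming a GPT-2-style language model accessed through the Hugging Face transformers library: given a short prompt, the model samples a fluent continuation word by word.

```python
# Minimal sketch: sampling a continuation from a GPT-2-style language model.
# Assumes the Hugging Face transformers library; not OpenAI's actual tooling.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k sampling keeps the generated text fluent but varied.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A prompt phrased like a news lead tends to yield output that reads like a news story, which is exactly the capability that prompted the researchers' caution.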

The researchers used content posted on the link-sharing site Reddit that had achieved a score of 3 or more, in order to make the sources more reliable. The system then draws on these sources to create stories, making up attributions and quotes to sound more convincing. But they found that the output occasionally included inaccuracies, with names and places being used incorrectly.

The research will now serve as a platform to show that artificial intelligence applications should be used carefully, and to open a debate on whether they should ever be used for tasks like news writing. Brandie Nonnecke, Director of Berkeley's CITRIS Policy Lab, noted that it is not a matter of whether bad actors will use AI to produce convincing fake news articles and deepfakes; they will. Platforms, however, must recognize their role in mitigating the reach and impact of such content.
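Returning to the data-collection step described above, the following is a minimal sketch of that kind of score-based filtering: keep only outbound links from posts scoring 3 or more. The input file and field names here are hypothetical and do not reflect OpenAI's actual pipeline.

```python
# Hypothetical sketch of score-based link filtering; field names and the
# input file are assumptions, not OpenAI's actual data pipeline.
import json

def collect_links(submissions_path, min_score=3):
    """Yield outbound URLs from Reddit-style submissions with score >= min_score."""
    with open(submissions_path, encoding="utf-8") as f:
        for line in f:
            post = json.loads(line)
            if post.get("score", 0) >= min_score and post.get("url"):
                yield post["url"]

if __name__ == "__main__":
    # Each line of the (hypothetical) file is one JSON submission record.
    for url in collect_links("reddit_submissions.jsonl"):
        print(url)
```

Using post score as a proxy for quality is a simple heuristic: it filters out much of the noise, but, as the researchers found, it does not prevent the model from misusing names, places, or quotes downstream.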

The era of platforms claiming immunity from liability over the sharing of content is over. Platforms must engage in assessments of how their systems can be manipulated and develop transparent and responsible mechanisms for recognizing and mitigating the spread of maliciously fake content.