Researchers at OpenAI have demonstrated a new artificial-intelligence model, named GPT-2, that can generate coherent paragraphs of text without any task-specific training. For instance, given the first line of a story, it will write the rest. Beyond text generation, the model can also perform basic reading comprehension, machine translation, summarization, and question answering.
OpenAI's GPT-2 is an unsupervised language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. According to the company, it is trained simply to predict the next word in 40 GB of internet text. The team says GPT-2 outperforms other language models trained on specific domains, such as news, Wikipedia, or books, without requiring those domain-specific training datasets. For language tasks such as question answering, reading comprehension, and summarization, GPT-2 learns directly from raw text and needs no task-specific training data. The model is chameleon-like, the company adds, readily adapting to the style and content of the input text. The team has also observed certain failures in the model, including world-modeling errors, repetitive text, and abrupt topic switches; the model likewise performs poorly on highly technical or complex content.
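The training objective described above, predicting the next word from the words that precede it, can be illustrated with a toy sketch. The example below is hypothetical and is not OpenAI's code: it uses simple bigram counts in place of GPT-2's 1.5-billion-parameter Transformer, purely to show the shape of the next-word-prediction task.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it — a toy stand-in
    for the next-word-prediction objective (GPT-2 itself learns this
    with a large Transformer network, not frequency counts)."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus (made up for this sketch).
corpus = [
    "the model writes the rest of the story",
    "the model predicts the next word",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # → "model", the most frequent follower
```

A real language model generalizes far beyond observed pairs, but the objective is the same: given the text so far, assign probabilities to what comes next.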
The OpenAI team anticipates GPT-2 being used in the development of AI writing assistants, more capable dialogue agents, unsupervised translation between languages, and enhanced speech recognition systems. They have also flagged potential abuses: the model could be used to produce misleading news articles and to automate the large-scale production of fake and phishing content on social media. Because of these concerns about the misuse of language-generation models, OpenAI has decided to release only a smaller version of GPT-2, along with its sampling code and a research paper, for researchers to experiment with.