Yes! AI-enabled software can now write its own articles, and you will have a pretty hard time telling them apart from articles written by humans.
Artificial intelligence is growing at a rapid rate. Breakthroughs and creative uses seem to make headlines weekly. Applications have been developed to do everything from legal analysis of contracts to brewing better beer to someday beating you in a debate. Just recently, OpenAI, a California-based nonprofit artificial intelligence lab, cautiously revealed the capabilities of its latest AI, which it calls GPT-2. The system can generate surprisingly convincing text to follow any sample you throw at it: a news article headline, the opening paragraph of a fictional tale, or a prompt for an essay on a specific topic.
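At its core, the "continue any prompt" trick is next-word prediction applied repeatedly. The toy sketch below illustrates that loop with a tiny bigram model; this is nothing like GPT-2's transformer architecture, and the corpus and function names here are invented purely for illustration.

```python
# Toy illustration of the idea behind language models like GPT-2:
# predict the next word given what came before, then repeat.
# A tiny bigram model, not a transformer; the corpus is made up.
from collections import Counter, defaultdict

corpus = ("the system can generate text . "
          "the system can follow any prompt . "
          "the model predicts the next word .").split()

# Record which word follows which in the training text.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def continue_text(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        followers = next_words.get(words[-1])
        if not followers:
            break  # dead end: the last word never appeared mid-corpus
        # Greedy decoding: append the most frequent follower of the last word.
        words.append(Counter(followers).most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the system"))
# → the system can generate text . the
```

GPT-2 does the same thing at a vastly larger scale: instead of bigram counts, a neural network trained on millions of web pages scores every possible next token, and sampling from those scores is what produces fluent, prompt-following prose.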
Say, for instance, you type in "the world will be defiled by sexbots". In response, the software will produce a well-written article, and trust me, you could easily mistake the computer's response for something you had written yourself. OpenAI has shared a number of samples to illustrate what it can do.
The organization has made its AI available to a few media outlets to test, and you can see some great and not-so-great examples in coverage from The Verge and The Register. So, first up: no, it's not perfect, and it can sometimes make mistakes, such as repeating itself and losing the plot. OpenAI noted that it can take a few tries to get a good result; the quality of its output depends on how familiar the model is with the subject matter in the prompt, and it can perform poorly on content it hasn't encountered before.
However, the organization says it can deliver better results than other AI models trained on specific datasets, like Wikipedia articles, without training on those same datasets itself. This is called zero-shot learning, and scoring highly on this front is a monumental achievement in AI development, because it suggests that GPT-2 is flexible enough to work competently across a wide range of use cases.
Trust me, I'm fascinated by the software's output. While it could be the change the world has been waiting for, it could also spell doom if it's misused or ends up with the wrong set of people. OpenAI further admitted that it doesn't yet know the software's full capabilities; it will keep feeding the model more data and watching its pros and cons.
To prevent it from falling into the wrong hands right away, OpenAI isn't sharing the dataset it used to train GPT-2, and it's only revealing part of the code behind the system for now. That may not stop malicious actors from trying to create similar AIs, though. If this development is anything to go by, the future of AI that can synthesize content indistinguishable from what we produce is bright, and unnervingly dark at the same time.