ChatGPT: artificial intelligence penetrates deeper into written texts

The newest development in artificial intelligence (AI) is called ChatGPT. GPT is the acronym for generative pre-trained transformer. Meaning: a self-learning program, fed with large amounts of text, that generates new texts which look just like human-written ones. What is the potential of this program, and what are the possible problems and threats?

OpenAI headquarters: Pioneer Building, San Francisco. Photo: HaeB, Wikimedia Commons

ChatGPT, just excellent

ChatGPT has been developed by OpenAI, founded by a number of American digital entrepreneurs, among them Elon Musk and Peter Thiel. The most recent version was introduced to the market as recently as November 2022, but it has already made its mark. For instance, check the major Wikipedia article on the subject. The model contains 175 billion (!) parameters and has been fed with 300 billion words, much more than the human brain can handle. It can be used on a PC through https://chat.openai.com/; this requires registration. And competitors have already popped up, like Microsoft's Bing search engine. In the same category are Neeva, Perplexity.ai and Bard (the latter developed by Google).
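
To get a feel for what 'generative' means in practice: the program repeatedly predicts a plausible next word, based on the texts it has seen. The real model does this with a transformer network and those 175 billion parameters; the toy Python sketch below, which is only an illustration and nothing like the actual program, does the same thing on a tiny scale with a simple word-pair table.

    import random

    # The 'training text': three tiny sentences.
    corpus = ("the program generates text . "
              "the program reads text . "
              "the text looks like human text .").split()

    # Record which word follows which one in the training text.
    follows = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        follows.setdefault(prev, []).append(nxt)

    # Generate: start with a word and keep picking a plausible next word.
    word, output = "the", ["the"]
    for _ in range(8):
        word = random.choice(follows.get(word, ["."]))
        output.append(word)
    print(" ".join(output))

The output already looks like the training text, and that resemblance is all the mechanism guarantees: nothing in it checks whether the result is true.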

The main strength of ChatGPT lies in creating texts that require little or no originality. Like a press release (although the 'author' will still have to check it for factual correctness!). A letter of application. A lifestyle coach's advice for a healthy life and personal growth. Even complete conversations. General texts intended to appeal to a large audience. Maybe boring, but quite effective for many purposes. And a development that is going to have a lot of consequences: Microsoft has acquired a 49% stake in OpenAI and is going to integrate the technology into its Office 365. All users of Microsoft Office will have this functionality at their disposal.
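
For readers who want to try this outside the chat window: below is a minimal sketch of how such a press-release draft could be requested programmatically. It assumes the official openai Python package (version 1.x) and an API key in the OPENAI_API_KEY environment variable; the model name and the prompt are purely illustrative.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; use whatever model is current
        messages=[{
            "role": "user",
            "content": "Draft a short press release announcing that our "
                       "company is opening a second office in Utrecht.",
        }],
    )

    print(response.choices[0].message.content)
    # As noted above: the 'author' still has to check the draft
    # for factual correctness before publishing it.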

Will ChatGPT replace journalists? Photo: journalists in training, Australian Department of Foreign Affairs and Trade, Wikimedia Commons.

Potential

ChatGPT has been trained on human communication; therefore, it will always answer in understandable terms. And it is very versatile: it can be used for the production of entire web texts, as well as poems and essays. But, according to science journalist Arno van der Hoog (link in Dutch), the program can also be used as a fact checker. Understandable, for in order to produce texts, the program has been fed with a great deal of information. Unfortunately, it performs poorly at a task that humans do well: selecting information for relevance. Therefore, for the production of reliable texts, the results will always require inspection by a human being.

AI programs like ChatGPT carry major potential for the automation of our news supply. For journalists are expensive, and computer programs are relatively cheap. It would seem a matter of course to replace expensive man-made texts with fairly trustworthy AI-generated ones. That would leave journalists more time to do what they are good at: no longer the tedious work of gathering background information, but real press coverage.

What are the sources?

But it will come as no surprise: computer-generated texts have already been published as if they were 'real' texts, saving on checks by human beings. And remember: ChatGPT has been fed so well with human-made texts that we cannot judge, just by looking at the makeup of a text, whether it was man-made or computer-generated. Even though computer-generated texts will often be rather general in nature, which could betray them. And ChatGPT isn't much good at indicating where it found its wisdom. We have to ask for its sources explicitly; and even then, it is up to the reader to judge whether the sources cited are the most relevant ones.
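
A small illustration of that last point, under the same assumptions as the sketch above (openai Python package, illustrative model name): the sources only appear if the prompt explicitly asks for them, and checking them remains the reader's job.

    from openai import OpenAI

    client = OpenAI()

    question = ("How many parameters does GPT-3 have? "
                "Name the sources you base your answer on.")
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    print(reply.choices[0].message.content)
    # The citations may look plausible and still be wrong or even
    # invented; each one has to be verified by hand.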

Even though the new instrument is quite seductive, ChatGPT texts will not necessarily be 'true'. The instrument was not constructed with that goal in mind. It generates texts, that's what it does. And in doing so it depends entirely on the texts that have been fed into it. Errors in the input will return as errors in the output. The program excels at holding a conversation; but the input hasn't been checked for truth, and therefore the output can be dubious as well. A lack of precision or truthfulness shows up more readily in lengthy texts; but then, our world prefers short texts.

Potential developments

Yet we could very well make AI programs dedicated to the generation of factually true texts. But the world doesn't seem to be interested in that. Programs like ChatGPT are primarily language programs: they produce intelligible texts and have been developed mainly with that goal in mind. Possibly, we would have to develop trustworthy AI programs from scratch.

From where we are now, the program could develop in a number of directions. Over time, the results could become ever more trustworthy, able to take the place of traditional search engines. Or developers could improve its ability to produce texts; then such programs could become a threat to journalism. In the future, will we primarily read texts written by the robot journalist, checked by humans (or not)? Or will journalism beat the machine after all, because real journalists can check and double-check, connect diverse facts and make discoveries?

Regulation

The present situation might call for a form of regulation. We cannot ban the technology, for its development goes its own way. But we can regulate it, by imposing demands on the process (which sources it uses as input) or on the output (checking the truth of the program's results). But the entire development is still in its infancy, and politicians have not yet shown any interest. Politicians, unite! Before AI-generated texts start being used as input in their turn, and we can no longer tell truths from lies.

Interesting? Then also read:
Biomedicines are coming
Chemistry vs. bacteria, # 54. If chemistry and biotech join forces
The energy transition is a digital transition too

