
GPT-3 and ChatGPT: the Next Step in the Natural Language Processing (NLP) Revolution

This month, OpenAI introduced ChatGPT, a new large language model based on the latest version of GPT-3, capable of writing code, rhyming poetry, drafting essays, and passing exams. The model is designed to respect ethical boundaries and generates fluent language with an authoritative tone.

The timing was perfect, as I had just explained how transformer models work in my course Advanced NLP at Maastricht University.

However, large language models can also provide nonsensical advice or non-factual information, leading to potential risks such as inaccurate legal or medical advice, or simply generating nonsense during a conversation. To use them responsibly, an understanding of their architecture and limitations is required.

In this post, I explain the workings, applications, limitations, and capabilities of these large language models. Are GPT-3 and ChatGPT the next step in the Natural Language Processing (NLP) revolution, or are they not?

With this, I hope to contribute to the discussion and understanding of these phenomenal new algorithms: what to use them for, and when to be careful.

Thanks to all the creative humans who provided excellent examples of the good, the bad, and the ugly (use) of the GPT models over the past weeks!

 

“Disclaimer: this post was written by a human being, without the help of any generative large language model. The author did use automated tools to check spelling, grammar, clarity, conciseness, formality, inclusiveness, punctuation conventions, sensitive geopolitical references, vocabulary, and synonym suggestions, and occasionally hit tab on good predictions by MS Word.”