The current whirlwind of interest in artificial intelligence is largely down to the sudden arrival of a new generation of AI-powered chatbots capable of startlingly human-like text-based conversations. The big change came last year, when OpenAI released ChatGPT. Overnight, millions gained access to an AI that produces responses so uncannily fluent that it has been hard not to wonder whether this heralds a turning point of some sort.
There has been no shortage of hype. Microsoft researchers given early access to GPT-4, the latest version of the system behind ChatGPT, argued that it has already demonstrated “sparks” of the long-sought machine version of human intellectual ability known as artificial general intelligence (AGI). One Google engineer even went so far as to claim that one of the company’s AIs, known as LaMDA, was sentient. The naysayers, meanwhile, insist that these AIs are nowhere near as impressive as they seem.
All of which can make it hard to know quite what you should make of the new AI chatbots. Thankfully, things quickly become clearer once you get to grips with how they work and, from there, the extent to which they “think” like us.
At the heart of all these chatbots is a large language model (LLM) – a statistical model, or mathematical representation of data, designed to predict which words are likely to follow one another in a piece of text.
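To make the statistical idea concrete, here is a toy sketch in Python that uses simple word-pair counts instead of a neural network – an illustrative simplification, with a made-up miniature corpus, not how a real LLM is built. It counts which word follows which, then turns the counts into probabilities:

```python
from collections import Counter, defaultdict

# A made-up miniature corpus (an assumption for illustration only).
corpus = "the cat sat on the mat . the cat saw the dog .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def next_word_probabilities(prev):
    """Estimate the probability of each word appearing after `prev`."""
    counts = follows[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'dog': 0.25}
```

An LLM plays the same numbers game, but with a neural network in place of the lookup table, which lets it take account of far more context than just the previous word.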
LLMs are created by feeding huge amounts of text to a class of algorithms called deep neural networks, which are loosely inspired by the brain. The models learn complex linguistic patterns by playing a simple game: repeatedly predicting the next word in a passage of text, and adjusting their internal settings whenever they guess wrong.
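As a rough illustration of that training game, here is a minimal next-word predictor in Python using the PyTorch library. The tiny corpus, network shape and training settings are all assumptions chosen for readability; real LLMs use transformer networks with billions of parameters trained on vast text collections, but the objective is the same.

```python
import torch
import torch.nn as nn

# Tiny illustrative corpus (an assumption; real models train on vast text).
text = "the cat sat on the mat . the cat saw the dog .".split()
vocab = sorted(set(text))
word_to_id = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([word_to_id[w] for w in text])

# A deliberately small neural network: embed the current word, then
# score every vocabulary word as a candidate for the next word.
model = nn.Sequential(
    nn.Embedding(len(vocab), 16),
    nn.Linear(16, len(vocab)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# The "game": for each word in the text, predict the word that follows.
inputs, targets = ids[:-1], ids[1:]
for _ in range(200):
    optimizer.zero_grad()
    logits = model(inputs)           # scores over the whole vocabulary
    loss = loss_fn(logits, targets)  # how badly did the model guess?
    loss.backward()
    optimizer.step()                 # nudge the weights to guess better

# After training, ask: which words are likely to follow "the"?
with torch.no_grad():
    probs = torch.softmax(model(torch.tensor([word_to_id["the"]])), dim=-1)
print({w: round(probs[0, word_to_id[w]].item(), 2) for w in vocab})
# Roughly {'cat': 0.5, 'mat': 0.25, 'dog': 0.25, ...}: the frequencies
# with which each word follows "the" in the corpus; words that never
# follow "the" get probabilities near zero.
```

Scaled up enormously, this same guessing game is what produces the fluent conversation that set off the current excitement.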