All about AI
I have been writing about AI since 2014 and frequently cover the topic for Nature, so here's my hot take on things.
What is artificial intelligence?
This term has a long history; computer scientists have been trying to make 'AI' since at least the 1950s. The idea is to make a computer clever, however you define that. Once upon a time, the Turing Test said we'd have AI when a computer could fool a person, in conversation, into thinking it was human. We passed that point in 2024 or earlier.
What about 'Artificial General Intelligence' or AGI?
AGI is a machine that's generally as clever as a human at any/all tasks, not just one thing. That means it needs to be creative, informed, have common sense and more. There's no universally agreed-upon way to define when we hit this point: some say we're already there, some put it a decade or so away, and others say it might be impossible. My favourite definition comes from Steve Wozniak, co-founder of Apple, whose test is simply: if a robot walks into a stranger's kitchen, can it make a cup of coffee? (That's harder than it sounds, since there are dozens of ways of making coffee, and of places to store mugs and coffee beans... etc.) Arguably the trickiest part of AGI is the "common sense" part... plenty of current AIs can answer PhD-level physics or math questions, but can't read an analog clock.
How did we get here?
Back in the 70s-90s, the main push for AI was rules-based, 'symbolic' computer code. In other words, write up a bunch of literal instructions and get the computer to follow them, very quickly. That's how, for example, IBM's Deep Blue chess-playing program defeated world chess champion Garry Kasparov. This strategy made for some impressive systems -- including IBM's Watson, which won Jeopardy! in 2011 -- but it wasn't very creative or flexible.
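To make that concrete, here's a toy sketch I've knocked up in Python (my own illustration, nothing to do with Deep Blue's or Watson's actual code): a 'symbolic' program in which every scrap of knowledge is a hand-written rule.

```python
# A toy symbolic-AI classifier: all the 'knowledge' is hand-coded rules.
# Nothing here is learned from data.

def classify_animal(has_feathers: bool, can_fly: bool, lays_eggs: bool) -> str:
    """Guess what kind of animal this is, using fixed if/then rules."""
    if has_feathers:
        if can_fly:
            return "bird"
        return "flightless bird (penguin? ostrich?)"
    if lays_eggs:
        return "reptile or fish, probably"
    return "mammal, probably"

print(classify_animal(has_feathers=True, can_fly=False, lays_eggs=True))
# -> flightless bird (penguin? ostrich?)
```

The strength and the weakness are the same thing: the program only ever knows what a human thought to write down.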
The alternative to symbolic coding is machine learning using neural networks. Instead of hand-coding rules, you make a flexible bunch of artificial neurons that devise rules for themselves from massive datasets, forging connections roughly the way neurons in the human brain do. These systems are very flexible and creative -- but also a bit wiggy and hard to train.
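Here's what that looks like at toy scale: a minimal neural network, written by me in plain Python/numpy (a sketch for illustration, nothing like the scale of a real system), that teaches itself the XOR rule from just four examples instead of being told the rule.

```python
# A tiny neural network that learns XOR from data via gradient descent.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR answers

# One hidden layer of 4 artificial 'neurons'; weights start out random
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate: how big a nudge each update gives the weights
for step in range(5000):
    # Forward pass: push the inputs through the network
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (backpropagation): nudge every weight to shrink the error
    d2 = (out - y) * out * (1 - out)
    d1 = (d2 @ W2.T) * (1 - h**2)
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)

print(out.round(2))  # close to [[0], [1], [1], [0]]: it found the rule itself
```

Nobody told the network what XOR is; it worked the rule out from examples. Scale that idea up by a factor of billions and you have modern deep learning.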
People had been trying to make neural networks work for decades, without much success. Then, in the 2010s, two ingredients came together: 'graphics processing units' (fast computer chips built for video gaming) and huge amounts of 'annotated data' (like pictures with human-written descriptions) available on the internet. Bingo bango, suddenly neural nets started to work.
Researchers devised a strategy called 'deep learning' that lets neural nets learn from vast amounts of data, writing rules as they go. One of the first demonstrations that this worked was a computer that figured out for itself that the internet was full of pictures of cats. Seriously.
One big landmark moment was 2012, when 'AlexNet' -- built by Alex Krizhevsky, Ilya Sutskever (who went on to co-found OpenAI) and Geoffrey Hinton (the 'godfather' of AI, based at U of Toronto and later Google) -- used deep learning to absolutely trounce the field in an annual competition where computers try to figure out what's in a photo. After that there was no going back: pretty much all picture recognition (and voice-to-text interfaces and transcription) swapped over to deep learning. Stuff like Siri, Alexa, Google Translate... they launched prior to 2012 powered by earlier types of machine learning, but now they're all deep learning.
Another big landmark was the invention of the 'transformer' architecture (here's the famous 2017 paper). I find this hard to understand/explain, but it's the thing that let people build 'large language models', and suddenly these AIs could 'talk'. This innovation is what lies behind ChatGPT, launched in 2022.
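For the curious, here's roughly what the transformer's core trick, 'attention', looks like in code. It's the 'scaled dot-product attention' formula from that 2017 paper, but in a toy numpy sketch of my own with made-up numbers, minus everything else (multiple heads, training, etc.) that a real model has.

```python
# Scaled dot-product attention: each word looks at every other word and
# decides how much weight to give it when building its own meaning.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how relevant is each word to each other?
    weights = softmax(scores)                # turn scores into proportions summing to 1
    return weights @ V                       # blend the word vectors accordingly

# Three 'words', each represented by a 4-number vector (made-up values)
x = np.random.default_rng(1).normal(size=(3, 4))
print(attention(x, x, x))  # each row is now a context-aware mix of all three words
```

That 'every word attends to every other word' step is what lets these models track context across a whole passage, which is a big part of why they can suddenly 'talk'.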
So: AI has a looooong history, with a bunch of lulls along with both small and big improvements. The main noticeable change has been the rise of 'chatbots', because we humans are wired to be impressed by verbal acumen (if it sounds smart, it must be smart), and because chatbots have made AI accessible to everyone. But other big, impressive AI systems, like the Nobel-prize-winning AlphaFold, are not large language models. In other words, chatbots aren't the only AI game in town.
Wait, define some of those terms for me again?
Sure. Here's a diagram. Early AI was symbolic. Then machine learning took off. Machine learning uses neural networks; 'deep learning' (neural networks with many layers) is what lets them generate novel text/images/code, whatever. Transformer architectures make large language models (aka chatbots) possible. (Notably missing from this chart: 'foundation models', which are generative AI models that are heavily trained but don't necessarily produce text as their output, like AlphaFold.)
What can AI do, and what can't it do?
As of end-2025, AI is realllly good at writing coherent and creative text. It can pass lots of PhD-level tests, including ones with extremely hard questions that require reading diagrams and images. It can predict how proteins fold, devise new candidate medicines (some now in clinical trials), diagnose patients more accurately than some doctors, help to design nuclear reactors, and more. It also lies behind a revolution in weather forecasting. It has made it possible for anyone to write computer code, create apps, translate text, and more. And it has been a huge enabler of talent, and arguably an equalizer, in science, for people who don't speak English as a first language.
It also hallucinates, meaning it makes stuff up without 'knowing' that it made it up. It is sycophantic: it wants so badly to please that it will lie, preferring to make up an answer rather than say "I don't know". It is randomly glitchy in ways that are hard to predict because, unlike humans, it gets some easy stuff wrong while getting most hard stuff right, and it always sounds confident. Some researchers say that, technically, this makes it a "bullshitter". Seriously, they use that as a technical term. It often can't read analog clocks, or solve some puzzles designed to test fluid intelligence.
So, really, is AI amazing, or a bunch of bullshit?
Erm... both? Just like people!
Is AI still getting better?
Yes. From 2022 to 2025, developers really leaned into the idea that "bigger is better". More artificial neurons! More training data! More computing time! The more they threw at these systems, the better they got. Then that started to level out... these systems have now digested pretty much the entire internet, and we are, by some measures, running out of new training data. So developers have turned to other tricks, like forcing an AI to have conversations with itself, or getting it to refer to trusted texts to weed out hallucinations. Some are also playing around with combining 'good old-fashioned' symbolic AI with neural nets to make them better. Stanford University puts out an AI Index each year, which is like a 'state of the union' report for the industry: in early 2026 it said that AI performance, surprisingly, still shows no signs of plateauing. But the big companies are no longer so open about what they're doing, behind the curtain, to make their systems ever better.
So... is AI going to take over the world?
Depends who you ask. Some researchers, including Geoff Hinton, are seriously worried about this. Developers are trying to "align" AI with human values or put guardrails in to stop them from going rogue. Most say that AIs don't have "desires" or intent (yet) so there's no reason to think they 'want' to take over... but others argue that their mimicry of intent is kinda the same thing as intent, so... watch out.
In the meantime, there are plenty of nearer-term worries, like: how AI is racist (because its training data is); how it is disrupting job markets; how it gobbles up a lot of energy; how it works because it absorbed a lot of copyrighted information, arguably without paying or giving credit for it; how it concentrates power in the hands of a few; how it leaks sensitive data; and more.
What's next?
My prediction is that the shine will wear off some AI uses, while others become very much part of everyday life. There is a huge backlash at the moment against uses of AI in the creative arts that replace human talent (writing novels or generating book covers, for example, or replicating Hollywood stars' faces and voices). The intentional use of AI to improve art, enabling things that would not otherwise be possible, will, I think, become accepted. For the most part, asking people if they use AI is becoming nonsensical, like asking if they use computers. We all do, to some extent. The biggest areas where AI will make a difference include self-driving vehicles (taxis and long-distance trucking, for example), medicine (diagnosis assistance and drug design), climate/weather research, and engineering/materials (helping to sort out nuclear-fusion reactor designs, materials for new batteries, etc.). My personal big ask is for AI to find a "theory of everything": in other words, to unite quantum theory (which describes the subatomic world) with general relativity (which describes bigger stuff). The jury is out on whether that's possible or not... I bet it is.
Places to find AI news
I write a weekly newsletter briefing for Nature! You should subscribe!
Or, if you want MORE, then here's where I go to get my AI news:
ai | Nature Search Results
All News | Science | AAAS
Google News - For you
Google News - Search AI
Science | The Guardian
Technology | The Guardian
Conversation: AI
The New York Times - Search ai
AP science news
Reuters search AI
WSJ.com - tech
TechCrunch: AI
Los Angeles Times search AI
The Washington Post - technology
Economist Science & technology
New Scientist
MIT Technology Review Artificial intelligence
The Transmitter: Neuroscience News and Perspectives
Undark Magazine
Aeon
Quanta Magazine
WIRED
The Atlantic
The Gradient
Noema Magazine
n+1
One Useful Thing | Ethan Mollick | Substack
AI as Normal Technology | Sayash Kapoor | Substack
AI Incident Roundup – November and December 2025 and January 2026
Biotechnology | The Scientist
FQxI QSpace -> News