Hannah Fry: 'AI can do some superhuman things – but so can forklifts'

New Scientist · Feb 18, 2026 · Collected from RSS

Summary

Mathematician Hannah Fry travels to the front lines of AI in her new BBC documentary AI Confidential with Hannah Fry. She talks to Bethan Ackerley about what the technology is doing to us – for better and for worse

Full Article

Image: BBC/Curious Films/Rory Langdon

The chances are that you think about artificial intelligence far more today than you did five years ago. Since ChatGPT was launched in November 2022, we have become accustomed to interacting with AIs in most spheres of life, from chatbots and smart home tech to banking and healthcare. But such rapid change brings unexpected problems – as mathematician and broadcaster Hannah Fry shows in AI Confidential With Hannah Fry, a new three-part BBC documentary in which she talks to people whose lives have been transformed by the technology. She spoke to New Scientist about how we should view AI, its role in modern mathematics – and why it will upend the global economy.

Bethan Ackerley: In the show, you explore what AI is doing to our relationships and sense of reality. Some of this stems from “AI sycophancy” – the idea that these tools give us what we want to hear, not what we need to hear. How does this happen?

Hannah Fry: Earlier models were extremely sycophantic. Everything you would write, they would be like, “Oh my God, you’re so amazing, you are the best writer I’ve ever experienced”. They are slightly better now, but there’s this fundamental contradiction. We want them to be helpful, encouraging and make us feel like we’re important, which are the things that you get from a really good human relationship. At the same time, a really good human relationship will say the difficult things out loud. If you put too much of that into the AI, it stops being helpful and starts being argumentative and not fun to be around.

There is also this huge swathe of people who have broken up with partners because they used it as a therapist and the AI said, “Get rid of him”. There are people who’ve given up their jobs. There are people who tried to use AI to make money and lost fortunes because they over-believed its abilities. Once you start including all those people, this is a really big group. I think all of us know someone who has been affected by social media bubbles and radicalisation. I think this is the new version of that.

Has witnessing these problems changed how you use AI?

What it has changed is the way that I prompt it. So now I regularly prompt it to, say, tell me the thing I’m not seeing, find my biases. Don’t be sycophantic, tell me the hard stuff.

If we don’t want AI to be like that, what do we want it to be like?

The answer probably depends on the situation. In scientific spaces, there are amazing examples – I’m thinking AlphaFold [an AI that predicts protein structures]. In mathematics, incredible advances are being made, where algorithms have an intelligence that isn’t like humans’. But I don’t think you can have a good reasoning model unless it has a conceptual overlap with things humans understand the world to be. So I think that needs to be more human-like.

It seems like every day there’s a news story about a mathematical problem that was unsolved for years, but has now been solved using AI. Does that make you excited?

I like to think of it as though there is this great map of mathematics, and that human mathematicians are in a particular territory and circle around it. They don’t always see connections to things close by. Amazing mathematicians have found bridges between two regions of the map, like the Taniyama-Shimura conjecture, where Japanese mathematicians found a bridge between two otherwise disconnected areas of mathematics.
Then, everything that we knew from over here applied over there and vice versa. I think AI is really good at saying, “Have a little look over here, it looks like fruitful territory that’s been under-explored”, and that is really, really exciting. What AI isn’t so good at is pushing the boundaries further. And what it’s really not good at… is full-on abstraction, of having broader, larger theories. The one people always say is, if you gave AI everything up until 1900, it wouldn’t come up with general relativity. So I’m still excited we’re in this very sweet spot where AI will make human mathematics faster, more efficient, more exciting, but it still needs us.

There are a lot of misconceptions about AI. Which one would you dispel, if you could?

People imagine it to be all-powerful, almost almighty. “The AI said this; the AI told me to buy these stocks.” There are certain situations where AI can do superhuman things, but so can forklifts. We’ve built tools that can do things humans can’t for a long time. It doesn’t mean they’re god-like or have untouchable knowledge.

You’re not going to give a forklift access to your bank account…

No! Exactly. I think that’s it – the framing of these things. Because they speak in language and talk to us, they feel like a creature. We don’t have that problem with Wikipedia. It would be better to think of this stuff as an Excel spreadsheet that’s really capable, rather than a creature.

Why do we tend to anthropomorphise AI?

Our bodies are tuned for cognitive social relationships. We’re the smart, social species. And this is a seemingly smart, seemingly social entity. Of course we put a character on it. There’s nothing in our past, in our design, that would make us do anything else.

Is there no way to guard against that anthropomorphic urge?

I think it’s unfair to put it in the hands of individuals really. It’s a little bit like saying junk food is freely available and it’s your responsibility to make sure you don’t have too much of it. The way these interfaces are designed, the conversations it has with you, we now have really good evidence that all of this leads to people falling into this trap more and more. And I think it’s only in the design of these systems that you’re ever going to be able to prevent people from falling down these rabbit holes.

There are many social problems that AI highlights, such as people being very isolated and lonely. But couldn’t AI help with these issues?

If you say, “OK, you cannot talk to any chatbots if you’re feeling lonely, let’s ban that”, then you still have lonely people. And, of course, it’d be amazing if there were abundant human relationships for everybody, but that doesn’t happen. So, given that this is the world that we’re in, I do think that there are some situations where talking to a chatbot can alleviate some of the worst issues around loneliness. But these are delicate topics. When you start to use technology to address really human questions, there’s an incredible fragility to it all.

Let’s talk about the far future. We often think about extreme scenarios with AI – say a superintelligent AI designed to make paperclips turns us all into paperclips. How helpful is it to think about that kind of doomsday scenario?

There was one point where I thought these crazy, far-out scenarios were a distraction from what really mattered, which is that decisions were being made by algorithms that affected people’s lives.
I’ve changed my mind in the last few years, because I think it’s only by worrying about things like that that you can build in technical safety mechanisms to prevent it from happening. So, worrying is not pointless, worrying genuinely has power. There are genuinely bad potential outcomes from AI, and the more honest we are about that, the more likely we are to be able to mitigate them. I want this to be like Y2K, you know? I want this to be the thing that we worried and worried about, and so we did the work to stop it from happening.

Do you think we’ll ever reach artificial general intelligence?

We don’t really have a clear definition of what AGI is. But if we’re taking AGI to mean at least as good as most humans on any task that involves a computer, then, yeah, we’re almost there, really. Some people take AGI to mean beyond human ability at every possible task. That I don’t know. But I think AGI is really not far away at all. I really think that in the next five to 10 years, we’re going to see seismic changes.

What kind of changes?

I think there’s going to be profound changes to the economic models that we’ve become accustomed to for the whole history of humanity. I think there’ll be really giant leaps forward in science, which I’m really excited about, in medicine design as well. The whole structure of our society is built on the idea that you exchange your labour and knowledge and human intelligence for money that you then use to buy stuff – I think that there’s some fragility to that.

AI will almost certainly change our relationship with work. What do we need to do to ensure that AI leads us to all work less, rather than some being out of work entirely?

I have an answer to this – I can just see how much trouble I’m going to get into if I say it out loud. OK, I’ll give you a version of it. There’s just some undeniable facts, right? So far, society has been based on exchanging labour for money. Our tax system is based on taxing income, not wealth. I think those two things are going to have to change.
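Fry’s habit of prompting against sycophancy – “tell me the thing I’m not seeing, find my biases” – can be set up as a standing instruction rather than typed each time. Below is a minimal sketch assuming the OpenAI Python SDK; the model name, prompt wording and user message are illustrative choices, not taken from the interview.

```python
# Minimal sketch: steering a chat model away from sycophancy via a system prompt.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set in the
# environment; the model name and prompt wording here are illustrative, not from the interview.
from openai import OpenAI

client = OpenAI()

ANTI_SYCOPHANCY = (
    "Do not flatter me or soften your assessment. "
    "Tell me the things I am not seeing, name my likely biases, "
    "and say the difficult things out loud, even if they are unwelcome."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY},
        {"role": "user", "content": "Here is my plan; give me your honest critique: ..."},
    ],
)

print(response.choices[0].message.content)
```

Placing the instruction in the system role means it applies to every turn of the conversation rather than to a single reply, which is closer to prompting “regularly” in the way Fry describes.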



Read Original at New Scientist

Related Articles

New Scientist · 2 days ago
Fish-based pet food may expose cats and dogs to forever chemicals

A survey of 100 commercial foods for dogs and cats revealed that PFAS chemicals appear in numerous brands and types, with fish-based products among those with the highest levels

New Scientist · 2 days ago
We've spotted the strongest microwave laser in the known universe

Colliding galaxies can create a beam of focused microwave radiation known as a maser, and astronomers have discovered the brightest one ever seen

New Scientist · 2 days ago
Fresh understanding of the causes of migraine reveals new drug targets

New insights into the causes of migraine are prompting a fresh look at a drug target that was sidelined 25 years ago

New Scientist · 2 days ago
Search for radio signals finds no hint of alien civilisation on K2-18b

Planet K2-18b, an apparent water world 124 light years away, has been seen as a promising location in the search for aliens, but telescopes on Earth failed to pick up any radio transmissions

New Scientist · 2 days ago
Ultra-processed foods could be making you age faster

We’ve been missing an important contributor to ageing, says columnist Graham Lawton. Ultra-processed foods are known to be associated with many chronic health problems, but studies have now shown they may also speed up ageing

New Scientist · 3 days ago
New fossils may settle debate over mysterious sail-backed spinosaurs

Spinosaurs have sometimes been portrayed as swimmers or divers, but a new species of these dinosaurs bolsters the idea that they were more like gigantic herons