Czech AI hope at Stanford: Our brains and ears can no longer discern what's real or fake

english.radio.cz · Feb 27, 2026 · Collected from GDELT

Published: Feb 27, 2026, 18:45 UTC

Full Article

Matyáš Boháček is hailed as one of the most promising figures in Czech AI. He has been described as a genius and a “wunderkind” of Czech science. In an interview with Radio Prague International, he shares a refreshingly optimistic view of the future of AI.

Matyáš Boháček coded his first website at the age of six - an online shop for his grandmother, with whom he enjoyed playing shopkeeper. Later, he developed apps for his classmates and, while still in high school, created a sign language translation app that caught the attention of the United Nations.

[Matyáš Boháček | Photo: Jan Jaskmanický, Czech Radio]

Today, he studies at Stanford University in California - a global hub of technological innovation - where he is working on one of the most discussed topics of our time: the risks and potential of artificial intelligence. One of his first projects there involved collaborating with renowned professor Hany Farid to create a virtual clone of CNN anchor Anderson Cooper - a deepfake so convincing that even Cooper’s colleagues could not tell it was not real.

Boháček is also conducting research at UC Berkeley and works as a researcher for Google DeepMind - a British-American AI lab and subsidiary of Alphabet Inc. Currently, he is visiting the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, for the autumn term. The young scientist, who goes by Maty, described what a typical week looks like amid his busy schedule.

“My week looks like this: I wake up, go to the gym or go for a run, and then I just hit the lab and I'm at the lab until the evening. That's life.”

It sounds like your whole life revolves around research into AI. What sparked this interest?

“The first encounter with AI that I had was a very direct task that I had to complete. Even before I got into AI, I worked on apps, programmes and websites as a kid.
I started at six, and then I sort of moved more specifically towards iOS and iPhone apps.

“I got this scholarship from Apple to travel to San Jose in California and spend a week at their conference. I was 14 at that point and I'm still not quite sure why my parents let a 14-year-old travel alone across the Atlantic to a conference. At the conference they had these labs, where you could talk directly to Apple engineers and get their insights and feedback on whatever you were working on at the time.

“I went to this one lab that was all about disability. The goal there was to make your apps and services more accessible to people with special needs and disabilities. And at this lab, there was a deaf person, whom I was immediately drawn to. I had a lot of questions. And so I asked him: how do you use your phone and your laptop? And he told me that written English, or the English that he had to use to interact with these devices, was not at all what he would have preferred. He would have preferred to use American Sign Language (ASL) to both sign and receive output from these devices. And so I said: I got you. Just give me your card and I'll get you set up with an app.

“I took his card, went back to Prague, and thought to myself: Okay, how do I build a translator for sign language? And that's how I ended up self-studying AI, because my conventional coding or programming didn't really meet the challenge. For sign language you can’t design a firm set of rules, which is done in conventional programming, as it's too variable. So, that's where AI comes into place.

“The idea was that deaf people would be able to use this system as a layer between themselves and whatever apps, services or websites they are using. For example, if you’re watching a TV show, captions are not really ideal.
Instead, you can imagine having a live avatar signing in the corner of the screen.”

So you achieved this through AI... But what exactly is the AI in it? And what is AI in general?

“That's actually a pretty difficult question that a lot of my colleagues, including myself, spend a lot of time arguing about. But a non-controversial definition would be: AI systems are synthetic systems that we code up or create as humans, that are able to observe patterns in data and to generalise problems in a way that is non-deterministic, and that can solve different problems based on some input.

“There are different flavours of this; you have AI systems that work with images or text, and then there are even robots that incorporate these sorts of principles or technologies.

“So, in this case, the sign language translating app follows the template perfectly. You want to solve a problem: converting sign language, or a video of someone signing, into text. But there are many ways to sign hello - you can move your hand in a similar sort of direction, but at different speeds etc. There’s a lot of variability to it. So you have a very complicated problem that you want to convert to text. And then you have to collect data, to train this model, to give it examples etc. And if it works correctly, it's going to learn from the data what it is that makes a sign mean hello or goodbye.”

[Deepfakes illustration | Photo: Pixabay/Radio Prague International]

“Deepfakes are fake or synthetic images, videos or audio that record an event or a speech that never happened. I'm specifically mentioning audio, video and images, because that’s the understanding that people have these days. Back in the day, deepfakes really referred only to video. But over time, it has really expanded to any sort of form of fake content.

“It's super easy to make deepfakes. All you need is access to the internet.
There are these apps or services where you just type a prompt description of a scene or a speech, and then the AI model creates that scene or speech even though it never happened.

“You can include your face, your voice, or faces and voices of people you've never even met. All you need is about 5 to 10 seconds of audio as a sample of the person's voice and one picture of their face. That's all you need to create a fake video of them saying whatever you want.”

Why is this dangerous or troublesome?

“Well, they can be fun. There are funny cameos you can make with this. But at the same time, you can use this to spread disinformation, to commit fraud, to create false evidence or incriminating documents. And to me, it just doesn't seem like the benefits outweigh the actual downsides.

“We have seen a lot of the negative aspects unfold, at least here in the US. In Europe, the adoption for fraud has not been as pervasive yet. But I think it's modelling a similar pattern and is just delayed compared to the US. So I think it will reach Europe, too.”

So deepfakes, not AI in general, have more negative than positive impact?

“In my opinion, right now, I see them having more downsides. We're entering, and in many ways we've already entered, an era where you can no longer trust what you see online. And it's really difficult to understand what actually happened, who actually shared it, and all these different parameters that are critical to navigating the online world.

“Even myself, as someone who spends a lot of my day looking at deepfakes, studying deepfakes and making deepfakes… Just yesterday, I got a call from someone and I was not sure if the person on the phone was AI or if it was a real person.
I had no idea if this was AI. You really need additional tools and support to be able to understand these things. Our own hardware - our brains and ears - can no longer discern what's real or fake, at least if it's done at the levels that are available to us right now.”

I was going to ask for tips on how to detect deepfakes, but I guess there really aren't any?

“No. If you asked me a year or two ago, I would have said that there are these tells, like six fingers or four fingers, or sometimes the AI models would produce inconsistent earrings etc. But, at this point, it's really not possible to detect it without additional tools, if it's done at the state-of-the-art level.”

To regulate, or not to regulate, that is the question

Do the dangers associated with deepfakes and AI make you want more regulation on who gets to use AI, and how? Or do you think that there should be complete liberty?

[Photo: Shutterstock]

“I definitely think that there should be some regulation. And perhaps more importantly, I think there should be accountability for the big companies that create these systems and enable anyone to use them without the consent of those who are actually depicted in the generated images, audio or video. In the US, there's something called Section 230, which gives companies like Google, Facebook and others a lot of leeway. Whatever content is generated and shared on their platforms is not considered something that they're responsible for. The responsibility lies with the actual user who made the content on their platforms. To some extent this makes sense, but I think there needs to be more nuance, and this needs to be updated to meet the current state of affairs in AI and media.

“I think that in Europe there have been good steps towards regulating AI. I will say that we need to be careful about regulating only certain domains or parts of the industry that actually have, pretty objectively I think, more negative than positive impact on society.
“However, I think we do want to make sure that the EU can be competitive against the US and China in AI and core research and capabilities. So, I think there's a balance that needs to be struck. But yes, the upshot is that I would like to see more regulation of this.”

A positive outlook not only for young people

We have shifted more towards AI as a whole. You mentioned that there are some positives aspe
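Boháček's point about sign language - that you can't write a fixed rule for a gesture that varies in speed and amplitude, so the system must learn what makes a sign from examples - can be illustrated with a deliberately tiny sketch. This is entirely hypothetical toy code, not his actual system: each "clip" is just a list of hand positions over time, and a single speed-invariant feature is averaged per sign from labelled examples, then used to label new clips by nearest match.

```python
# Toy illustration (hypothetical, not Boháček's system): learning a sign
# classifier from examples instead of writing fixed rules.

def wiggliness(clip):
    """Path length minus net displacement of the hand: near 0 for a straight
    'push' motion, large for an oscillating 'wave' - regardless of how fast
    the sign was performed or how many frames were recorded."""
    path = sum(abs(b - a) for a, b in zip(clip, clip[1:]))
    return path - abs(clip[-1] - clip[0])

def train(examples_by_sign):
    """'Training': learn one average feature value per sign from labelled clips."""
    return {sign: sum(map(wiggliness, clips)) / len(clips)
            for sign, clips in examples_by_sign.items()}

def classify(clip, model):
    """Label a new clip with the sign whose learned average feature is closest."""
    f = wiggliness(clip)
    return min(model, key=lambda sign: abs(model[sign] - f))

# Hypothetical labelled examples: two 'wave' clips and two 'push' clips,
# recorded at different speeds (hence different lengths).
model = train({
    "wave": [[0, 1, 0, 1, 0], [0, .5, 1, .5, 0, .5, 1, .5, 0]],
    "push": [[0, .5, 1], [0, .2, .4, .6, .8, 1]],
})
```

A real translator would of course learn thousands of features from video with a neural network, but the shape of the idea is the same: the variability that defeats hand-written rules is exactly what the model averages over during training.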

