
We endorse the need for global AI regulation: OpenAI's Chris Lehane

hindustantimes.com · Feb 20, 2026 · Collected from GDELT

Summary

Published: 20 February 2026, 04:15 UTC

Full Article

OpenAI supports the creation of global regulatory standards for artificial intelligence and believes democratic countries should lead the process, the company's chief global affairs officer Chris Lehane told Hindustan Times on Thursday, hours after CEO Sam Altman called for "something like the IAEA" to govern the technology.

"Yes, we are," Lehane said when asked whether OpenAI was endorsing the need for global regulation. "We believe you need new rules for a new thing, consistent with the idea of democratising access."

Major US technology companies have historically resisted binding regulation during their growth phases, and the current American administration has moved sharply in the opposite direction, revoking Biden-era AI safeguards and signalling a preference for industry self-governance. India, too, has a largely voluntary mechanism.

For the maker of the world's most widely used AI chatbot to publicly endorse global rules, at a summit in the Global South, represents a departure from that pattern.

Lehane described the current approaches as tentative, and markedly different from the IAEA model Altman had invoked in his keynote at the India AI Impact Summit. The International Atomic Energy Agency conducts inspections, sets binding standards, and operates under treaty obligations. What Lehane outlined was what he called a "nascent" network of national AI Safety Institutes, advisory bodies that test frontier models for catastrophic risk, gradually converging on shared standards among democratic nations.

"The US has one; we were one of the first frontier models to participate. The UK has one, Japan has one, Australia has an early version, and I think India is thinking of one," he said. "You could see these different entities coming together among democratic countries to create a standard."

But he said the eventual framework could be similar to international aviation regulation. "The FAA was created in the US, then basically replicated and adopted globally. The whole world works together, and that's why millions of flights take off and land safely every day, because of shared standards," he said.

The aviation analogy carries a history that parallels the present moment. Federal aviation regulation in the US began not with a government initiative but with an industry plea. In the 1920s, commercial flying was expanding rapidly, but fatal accidents were routine and standards non-existent. It was airline industry leaders who concluded that aviation could not reach its commercial potential without federal oversight, leading to the Air Commerce Act of 1926: regulation requested by the industry it would govern.

Asked whether OpenAI was trying to get ahead of the kind of regulatory crises that engulfed earlier technology companies, such as Facebook's Cambridge Analytica scandal, Google's antitrust battles and Twitter's alleged role in election interference, Lehane acknowledged the pattern. "I think some version of that is going to happen," he said. "We believe users and society need to build trust in this technology, and part of that is working through democratic processes to create the rules."

But he drew a sharp distinction between AI and the social media platforms that preceded it. "The private sector thinks AI is just cloud computing 2.0," he said. "The public sector thinks it's a supercharged social media. AI can be built into social media, but it is not inherently social media. AI is a productivity-driving technology, closer to electricity." Social media, he said, was "inherently an extractive technology."

Lehane also laid out what he called a "democratic vision" of AI, contrasting it with what he described as a "dark, doomer" philosophy that would concentrate the technology in a few hands. "Either it's designed to keep power with the powerful, or it ultimately takes us backwards," he said. He said India, with 100 million weekly ChatGPT users and what he called an "entrepreneurial spirit in the DNA," was central to that democratic alternative, comparing the country's current position to the US in the late 19th century, "poised to benefit from the industrial age."

He invoked the printing press and cited the immediate aftermath of its invention: Europe's fragmented politics allowed knowledge to spread, while imperial China censored what could be printed and "stayed relatively static for the next 500 years."

On sovereign AI (how a nation builds and controls its AI capability) and the differing restrictiveness of European, American and Indian models, Lehane offered flexibility without specifics. But he framed the underlying stakes bluntly: "The commonality is realising that this is a nation-building, general-purpose technology," he said. "It's not just about building the 'wheel' or 'electricity' itself. The more important question is: once you have electricity coming out of the wall, what are you doing with it?"

Every country, he said, would fall somewhere on a spectrum: some would want full infrastructure and their own models, some would prioritise local-language performance, others would focus on data localisation, and some would be comfortable sourcing AI externally and building applications on top. "We will be responsive and flexible to whatever approach a country takes because we are confident we will bring value that will be important to the success of both the country as a whole and its individual citizens," he said.

Lehane's remarks came on the same day OpenAI announced "OpenAI for India," a partnership with Tata Group to build sovereign AI infrastructure, and as the summit produced the New Delhi Frontier AI Impact Commitments, a set of voluntary pledges by AI companies that stopped short of enforceable frameworks.


