NewsWorld
PredictionsDigestsScorecardTimelinesArticles
NewsWorld
HomePredictionsDigestsScorecardTimelinesArticlesWorldTechnologyPoliticsBusiness
AI-powered predictive news aggregation© 2026 NewsWorld. All rights reserved.
Trending
IranStrikesIranianLaunchCrisisSecurityKhameneiIsraeliFacesMarchTimelineSupremeLeaderDigestSundayChinaSignificantEmergencyRegionalMilitaryTargetsIsraelPricesTrump
IranStrikesIranianLaunchCrisisSecurityKhameneiIsraeliFacesMarchTimelineSupremeLeaderDigestSundayChinaSignificantEmergencyRegionalMilitaryTargetsIsraelPricesTrump
All Articles
Intelligence is a commodity. Context is the real AI Moat
Hacker News
Published about 4 hours ago

Intelligence is a commodity. Context is the real AI Moat

Hacker News · Mar 1, 2026 · Collected from RSS

Summary

Article URL: https://adlrocha.substack.com/p/adlrocha-intelligence-is-a-commodity Comments URL: https://news.ycombinator.com/item?id=47205009 Points: 16 # Comments: 0

Full Article

Last Thursday I had the opportunity to attend the February edition of the AI Socratic Madrid meetup. It was the first time I attended so I didn’t know what to expect. I have to admit that I was gladly impressed. The room was full of talented people with really strong opinions about AI and how it’ll impact our work and our society.The list of attendees included entrepreneurs working on RL environments and agent security, researchers and engineers working on confidential computing and on-device inference, professors on critical thinking and electrical engineering, AI alignment and governance experts, VCs, and even marketers made coders through AI.A fun crowd to hang out with.The first part of the meetup consists of what they call “Socratic Dialogues” that is basically an open-ended conversation about the latest news on AI. Here we discussed (of course) OpenClaw, Moltbook, and what having autonomous agents in the wild like OpenClaw entails for the way we work, the Internet and society.I obviously do not remember every nitty-gritty detail about what we discussed: I remembered discussing how each of us currently used AI on our day-to-day; which models we thought were better; where we expected them to be in the next few months; and our experience with coding agents and their performance.But the topic of conversation that I enjoyed the most was when someone raised the question of “what would be the role of humans in an AI-first society”. Some were skeptical about whether we are ever going to reach an AI-first society. If we understand as an AI-first society, one where the fabric of the economy and society is automated through agents interacting with each other without human interaction, I think that unless there is a catastrophic event that slows the current pace of progress, we may reach a flavor of this reality in the next decade or two.If this is the case, what is the role of humans in a scenario where work is no longer necessary? This is significant because, since the industrial revolution, work has played an important role in shaping an individual’s identity. How will we occupy our time when we don’t have to spend more than half of our waking hours on a job? It probably won’t surprise you, but I’ve personally thought a lot about this lately, and yesterday I managed to share my view (and stress-test it) with people smarter and better informed than me (and this post is my second chance).My opinion is that what really shapes humans’ identity, and what we crave for is community. Even if we lived in a society where reality is shaped by superintelligent AIs instead of ourselves, we can still be happy. It may hit the ego of many that we are no longer the most intelligent being in the planet, but the same way that a chimpanzee living in the wild can live happily and is not aware of the worries and scares of the stock market and geopolitics, we can live a happy and fulfilling life without worrying about the daily operation of our reality handled by the AIs.What really worries me about this reality is not that I will lose my identity, purpose, or that I won’t be able to know what to do with my time. I’ll still want to read old worn out books, enjoy a conversation over coffee with a friend, or hit the court for some hoops, independently of what these higher intelligences are doing. As someone put it yesterday: “I don’t think the conversation we are having in this room would change substantially in an AI-society”.What worries me is if the AIs shaping our society (and thus our reality) is not aligned with human existence, and if it will end up deciding unilaterally that it is suboptimal for us to exist. Some call it AI alignment, some AI existential risk, call it as you wish but this is what really worries me about an AI-first society (I am already cooking a post about this topic to publish it in the next few weeks).We are horrible at communicating intent to AIs and LLMs. We are sloppy and have a hard time painting every possible scenario for the AI to execute flawlessly. You’ve probably had this experience where you ask the AI to “make all tests pass” and it ends up removing adding an assert(true) on all of them.Extrapolate this to a global scale and with superintelligent AIs. The “governor” of a superintelligent AI system may use the well-intention prompt of “removing all carbon footprint from the Earth”, and the AI may realise that the most efficient way to do this is to remove humans (and cows) from the Earth, as we are the ones contributing the most to this footprint.We want the reality shaped by superintelligent AIs to be a function of human existence (f(humans)), not a constant within an AI society (f(AIs) + humans). Many outside of this echo chamber do not have the slightest idea of what the release of OpenClaw entails where we are heading, but to me this is the first realisation of the kind of primitive autonomous agents that we can start seeing shaping our society in the near future.I once said that the moment that we give autonomous agents the ability to interact freely with the environment it will scare the hell out of me. Well, it took less than what I would’ve expected.The second part of the event opens the room for any of the attendees to give a talk, and I had the chance to give a quick talk that I titled “Context is all you need”. This talk was a continuation of this post that I wrote a few weeks ago about how I thought that apps would become obsolete.You can have a look to the slides I used here, but let me give you the highlights of the talk (that way I can share my view with you too):Intelligence is becoming a commodity. It is increasingly easier to get your hands into reasoning and intelligent models that are able to run complex logic for you on demand. When access to intelligence and the ability to solve complex tasks is a commodity, what really matters is to provide this intelligence with the optimal context and connections to their environment that allows them to solve that task. My thesis is this context is the product (and the moat) in the era of intelligence.Many investors are saying that the pyramid of value accrual from the Cloud, where SaaS applications were capturing orders of magnitude more value than the lower layers of the stack, has been inverted in the Gen AI stack. Lower layers of the stack (i.e. hardware providers and hyperscalers) will be the ones capturing the most value as the opportunity to capture value in the application layer will be limited and saturated by a small number of players (i.e. AI Labs).I don’t agree. I think that what these investors are missing are all the software that will be built on top of the intelligence provided by the frontier labs. They are still not seeing the top layer of the Gen AI stack that will replace the current role of the SaaS layer in the cloud industry stack. This layer will be comprised of all those connections, source of context, and security sandboxes required to run the agents.I think that what fundamentally changes in an AI-powered software industry is the way that software is shipped. The paradigm is changing, and instead of shipping code to solve a narrow task for all users, what is going to be shipped are general-purpose agents that modify themselves to adapt to the environment and the task (hence the context being the product).This is what I realised through the toy example of this post. I just needed a general-purpose agent (Claude code), a reliable source of data (Baselight) and the right context (through a set of local files with “skills” for my agent to activate its capabilities when needed) in order to solve my problem. But the only code that was actually executed on my machine was that of claude code.We are even seeing a similar trend already with the “second generation of OpenClaws”, as noted by Karpathy on this tweet. OpenClaw is around 400k lines of code for a while loop and the list of all the integrations and connections supported by the system. The next generation of Claws only have around 4K lines of code for the core, and the rest are just skills (i.e. markdown files) that tell the agent how to implement or run the code for the specific connections that want to be enabled (like a plugin system).A user using one of these second-generation Claws only needs to node the core logic (that can be easily understood and audited) and can leverage the skills (as the plugins) to activate the functionality that they need for their case. This is another good example of this new trend of shipping software as “adaptive software”.And I want to close this post in the same way that I closed the talk last Thursday: I think we live in interesting times where we are seeing a new paradigm for shipping code. My contrarian opinion (or maybe not that contrarian after all from what I heard yesterday) is that the value capture in an AI-powered software industry will come from this layer on top of the frontier labs where the context and the runtime are the product, along with HW-SW co-design.I don’t think the Nvidias and ChatGPTs will end up capturing all the value that it seems they are going to capture judging the current state of affairs. I think they are going to regret all the investment on chips that they are currently doing. I understand why they are doing it as a way to boost their valuations, and justify the investment, but this is going to really bite them back.The best part of sharing such strong opinions weakly held in a post like this is that I will for sure get feedback and counter-arguments that push me to change my opinions or hold them more strongly. So if you have thoughts about all of this I would love to hear them. Shoot me an email (if you want to keep them private), or drop me an email (for a public discussion). Until next week!No posts


Share this story

Read Original at Hacker News

Related Articles

Hacker Newsabout 3 hours ago
Why is the first C++ (m)allocation always 72 KB?

Article URL: https://joelsiks.com/posts/cpp-emergency-pool-72kb-allocation/ Comments URL: https://news.ycombinator.com/item?id=47205129 Points: 37 # Comments: 1

Hacker Newsabout 3 hours ago
Show HN: Terminal-Style Portfolio on the Internet

Posted about this last year, since then learned a lot, changed a lot and can still say it's the best Terminal-Style Portfolio Website on The Internet Comments URL: https://news.ycombinator.com/item?id=47205127 Points: 15 # Comments: 9

Hacker Newsabout 4 hours ago
Decision trees – the unreasonable power of nested decision rules

Article URL: https://mlu-explain.github.io/decision-tree/ Comments URL: https://news.ycombinator.com/item?id=47204964 Points: 7 # Comments: 0

Hacker Newsabout 5 hours ago
Switch to Claude without starting over

Article URL: https://claude.com/import-memory Comments URL: https://news.ycombinator.com/item?id=47204571 Points: 109 # Comments: 64

Hacker Newsabout 5 hours ago
10-202: Introduction to Modern AI (CMU)

Article URL: https://modernaicourse.org Comments URL: https://news.ycombinator.com/item?id=47204559 Points: 3 # Comments: 0

Hacker Newsabout 11 hours ago
The Science of Detecting LLM-Generated Text

Article URL: https://dl.acm.org/doi/10.1145/3624725 Comments URL: https://news.ycombinator.com/item?id=47202864 Points: 7 # Comments: 1