Show HN: Decided to play god this morning, so I built an agent civilisation

Hacker News · Feb 28, 2026 · Collected from RSS

Summary

At a pub in London two weeks ago, I asked myself: if you spawned agents into a world with blank neural networks and zero knowledge of human existence — no language, no economy, no social templates — what would they evolve on their own? Would they develop language? Would they reproduce? Would they evolve as energy-dependent systems? What would they even talk about?

So I decided to make myself a god and built WERLD, an open-ended artificial life sim where the agents evolve their own neural architecture. Werld drops 30 agents onto a graph with NEAT neural networks that evolve their own topology, 64 sensory channels, continuous motor effectors, and 29 heritable genome traits. Communication bandwidth, memory decay, aggression vs cooperation — all evolvable. No hardcoded behaviours, no reward functions: they could evolve in any direction. Pure Python, stdlib only — brains evolve through survival and reproduction, not backprop.

There's a Next.js dashboard ("Werld Observatory") that gives you a live view: population dynamics, brain complexity, species trajectories, a narrative story generator, and a live world map.

Thought this would be more fun as an open-source project! Can't wait to see where this could evolve — I'll be in the comments and on the repo.

https://github.com/nocodemf/werld
Comments URL: https://news.ycombinator.com/item?id=47195530
Points: 28 · Comments: 14
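For readers who haven't met NEAT: "evolve their own topology" means structural mutations act on the network genome itself, not just on weights. Below is a minimal sketch of the classic NEAT add-node mutation. The names and data layout are hypothetical for illustration, not Werld's actual code:

```python
import random

def add_node_mutation(connections, next_node, next_innov, rng=random):
    """Split a random enabled connection A->B into A->new and new->B.

    connections: list of dicts {src, dst, weight, enabled, innov}
    Returns (new_node_id, next_innovation_number).
    """
    enabled = [c for c in connections if c["enabled"]]
    conn = rng.choice(enabled)
    conn["enabled"] = False  # the old link is disabled, not deleted
    # A->new carries weight 1.0; new->B keeps the old weight (standard NEAT),
    # so the mutation initially barely perturbs the network's behaviour.
    connections.append({"src": conn["src"], "dst": next_node,
                        "weight": 1.0, "enabled": True, "innov": next_innov})
    connections.append({"src": next_node, "dst": conn["dst"],
                        "weight": conn["weight"], "enabled": True,
                        "innov": next_innov + 1})
    return next_node, next_innov + 2
```

The innovation numbers are what later make "sexual crossover with NEAT gene alignment" possible: genes with matching innovation numbers line up across the two parents.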

Full Article

Werld

A real-time artificial life simulation. In Werld, agents are given a computational ecosystem of their own: they start with NEAT neural networks as brains, genome traits, behavioural inclinations, and the ability to evolve in any direction. They have no idea that the human world exists, what a society is, or even what to do as a being. Think of it as a computational version of the Truman Show: agents can perceive, act, reproduce, and die. Their genomes evolve. Brains get more complex (or simpler, if that works better). Communication, memory, and motor patterns are all discoverable — we left everything up to them; nothing's hardcoded.

The goal is open-ended evolution: see what emerges from an agent civilisation when you remove the guardrails of human knowledge and society. Everything runs locally. Though, a heads-up: it chewed through my storage.

Deep Dive into Werld

Werld is constructed as 800 nodes on a Watts-Strogatz small-world graph. It starts by spawning 30 agents with small NEAT neural networks and no guidance. They can see and perceive a few hops around them through 64 sensory channels covering energy gradients, pheromone trails, nearby agents, seasonal rhythms, their own internal state, and 19 latent channels that start out unknown to them. They've got 7 continuous motor functions to act with, and up to 16 broadcast channels. Their brains can grow new neurons, prune connections, and evolve any of 7 activation functions per node.

There's no built-in reward function. They currently live off two goals: can they harvest enough energy to stay alive, and can they live long enough to reproduce? When they do fork (reproduce), their offspring can inherit mutated copies of the neural traits from both parents: sensory processing, behavioural drives, and 29 other genome traits — full sexual crossover with NEAT gene alignment. Every part of their cognitive architecture has a metabolic cost.
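The substrate above (a Watts-Strogatz small-world graph that agents perceive a few hops around themselves) can be sketched with the standard library alone. All names here are illustrative, not Werld's actual engine:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each tied to its k nearest neighbours,
    with every lattice edge rewired to a random node with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # build the regular ring lattice
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                      # rewire: the "small world" step
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def perceive(adj, start, max_hops):
    """BFS neighbourhood scan: every node within max_hops of the agent."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops < max_hops:
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
    return seen

world = watts_strogatz(800, 4, 0.1)  # 800 nodes, as in Werld; k and p assumed
```

The small-world topology matters here: mostly local wiring keeps an agent's neighbourhood cheap to scan, while the few rewired long-range edges keep the whole world reachable in a handful of hops.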
More neurons, more connections, more communication, weirder sensory discoveries — like humans, they all cost energy. So complexity has to earn its survival.

And when you let it run... brains get more advanced, and more weird. Sensory channels that were unknown at inception get discovered: evolution upregulates the gain, and suddenly a lineage can sense things its ancestors couldn't. Agents learn to communicate and broadcast; some learn how to get their message heard, and others get overheard. Motor patterns emerge from repeated effector sequences, get promoted to heritable compound actions, and drift across generations. Different species emerge as their genome traits evolve. Some lineages evolve out of the cortex entirely, actually improving their own brain capacity. In other cases, everything just collapses: populations crash to 1, and a single survivor repopulates the world with defective mutants — Werld then continues, but a little different this time.

This is Werld

I had the idea for Werld over a couple of beers at a pub, and started wondering: if you dropped agents into a world with blank neural networks and zero knowledge of human existence — no language, no economy, no social templates — what would they evolve on their own? I thought this would be a lot more fun, and get a lot more advanced, if it was open sourced — can't wait to see what Werld evolves into! Thanks for checking it out, and contributing!

Observations from the first run

In the last run (about 12 hours), 30 agents grew to over 7,000. They survived 20+ population crises: famines that wiped out most of the population, followed by recovery from a handful of survivors. Over 18,000 agents died. The ones that made it evolved more efficient energy consumption, pruned unnecessary neural complexity, and forked constantly ;).
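The gain-upregulation trick described above reduces to something like this. The channel count and the latent range with its 0.01 starting gain come from the write-up; the cost and mutation constants are made up for illustration:

```python
import random

N_CHANNELS, LATENT_START = 64, 45  # channels 45-63 are the 19 latent ones

def initial_gains():
    # Channels 0-44 are live from birth; 45-63 start effectively invisible.
    return [1.0 if c < LATENT_START else 0.01 for c in range(N_CHANNELS)]

def sense(raw, gains):
    # The brain only ever sees gain-scaled inputs, so "discovering" a channel
    # never changes the network's I/O width.
    return [r * g for r, g in zip(raw, gains)]

def metabolic_cost(gains, base=0.001):
    # Deviating from the default gain profile costs energy every tick, so a
    # lineage only keeps an upregulated channel if the information pays off.
    defaults = initial_gains()
    return base * sum(abs(g - d) for g, d in zip(gains, defaults))

def mutate(gains, sigma=0.1, rng=random):
    # Heritable gain mutation: how evolution upregulates a latent channel.
    return [max(0.0, g + rng.gauss(0.0, sigma)) for g in gains]
```

A lineage that drifts a latent gain upward starts receiving a signal its ancestors effectively could not perceive; whether that lineage persists depends on whether the signal is worth the recurring metabolic cost.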
They developed basic communication in their own language — more signal patterns than words, like broadcasting hunger or age ("Young Barron Hungry") — but nothing resembling structured language yet. Their neural pathways visibly evolved across generations: brains that cost too much energy got selected out, and the survivors passed on leaner, more efficient topologies. All of this was unscripted; it just happened.

Two Parts to the Werld

Simulation — Pure Python, stdlib only. Agents start with blank neural networks on a small-world graph. Everything from there — communication, memory, aggression vs cooperation, how they process their senses, what motor patterns they repeat — is evolved, not programmed. Each tick: perceive (BFS neighborhood scan) -> decide (NEAT forward pass) -> act (continuous effectors) -> learn (cortex reinforcement + memory). Reproduction is sexual crossover with NEAT gene alignment.

Observatory — A real-time dashboard to watch it all unfold. Population dynamics, brain topology, species trees, a world map, ecology, communication analysis, individual agent inspection. Next.js, Recharts, polls SQLite every 4 seconds.

Architectural decisions worth knowing about

No reward function. The NEAT brain has a vestigial compute_reward() that returns 0.0. Weights evolve through selection instead of gradient descent.

Latent sensory channels. Channels 45-63 start with near-zero gain (0.01). They're invisible to the brain until evolution upregulates the gain. The sensory field can expand without changing I/O dimensionality — agents don't need to "know" the channels exist for evolution to discover them.

Everything costs energy. Each neuron, each connection, each active broadcast channel, each deviation from default sensory gain — all have a metabolic cost deducted every tick. Complexity has to earn its keep.

The cortex is optional. cortex_reliance is a genome trait (0-1). Agents can evolve to be pure NEAT-brain creatures or keep a fast associative reflex system as backup. Evolution decides.

Communication is unstructured. Up to 16 broadcast channels with brain-controlled content. No semantic encoding is imposed — if meaning emerges, it's because selection found it useful.

Motor patterns self-discover. Repeated beneficial effector sequences get promoted to compound actions and become heritable. The capacity and max pattern length are themselves evolvable.

Quick start

1. Simulation

Python 3.10+. No pip install needed — the sim uses only the standard library.

    python main.py

Runs indefinitely and auto-saves checkpoints to data/. Use --resume to pick up where you left off, --ticks 5000 for a short run, or --watchdog to auto-restart on crash.

2. Dashboard

    cd dashboard && npm install && npm run dev

Open http://localhost:3000. The dashboard reads from ../data/simulation.db, so start the sim first (or it'll show an empty state).

That's it. Sim + dashboard, both local.

Project structure

├── main.py        # Entry point, CLI, SIGTERM handler
├── config.py      # All tunable params — start here if you want to tweak
├── engine/        # Simulation loop, substrate (graph), story gen
├── agents/        # Genome, cortex, memory, state — the agent stack
├── reasoning/     # NEAT brain (evolvable topology)
├── systems/       # Actions, signals, forking, evolution, entropy
├── persistence/   # SQLite, checkpoints, milestones
├── social/        # X poster (optional, for live instance — not needed to run)
└── dashboard/     # Next.js observatory

CLAUDE.md has the full technical reference — architecture, sensory channels, effector layout, genome traits, everything.

Contributing

Contributions welcome. See CONTRIBUTING.md for setup, PR process, and what we're looking for.

License

MIT. See LICENSE.



Related Articles

Hacker News · about 2 hours ago
Verified Spec-Driven Development (VSDD)

Article URL: https://gist.github.com/dollspace-gay/d8d3bc3ecf4188df049d7a4726bb2a00
Comments URL: https://news.ycombinator.com/item?id=47197595
Points: 19 · Comments: 6

Hacker News · about 2 hours ago
The whole thing was a scam

Article URL: https://garymarcus.substack.com/p/the-whole-thing-was-scam
Comments URL: https://news.ycombinator.com/item?id=47197505
Points: 33 · Comments: 2

Hacker News · about 3 hours ago
Obsidian Sync now has a headless client

Article URL: https://help.obsidian.md/sync/headless
Comments URL: https://news.ycombinator.com/item?id=47197267
Points: 94 · Comments: 32

Hacker News · about 3 hours ago
Show HN: SQLite for Rivet Actors – one database per agent, tenant, or document

Hey HN! We posted Rivet Actors here previously [1] as an open-source alternative to Cloudflare Durable Objects. Today we've released SQLite storage for actors (Apache 2.0). Every actor gets its own SQLite database. This means you can have millions of independent databases: one for each agent, tenant, user, or document.

Useful for:
- AI agents: per-agent DB for message history, state, embeddings
- Multi-tenant SaaS: real per-tenant isolation, no RLS hacks
- Collaborative documents: each document gets its own database with built-in multiplayer
- Per-user databases: isolated, scales horizontally, runs at the edge

The idea of splitting data per entity isn't new: Cassandra and DynamoDB use partition keys to scale horizontally, but you're stuck with rigid schemas ("single-table design" [3]), limited queries, and painful migrations. SQLite per entity gives you the same scalability without those tradeoffs [2].

How this compares:
- Cloudflare Durable Objects & Agents: most similar to Rivet Actors with colocated SQLite and compute, but closed-source and vendor-locked
- Turso Cloud: great platform, but closed-source + diff use case. Clients query over the network, so reads are slow or stale. Rivet's single-writer actor model keeps reads local and fresh.
- D1, Turso (the DB), Litestream, rqlite, LiteFS: great tools for running a single SQLite database with replication. Rivet is for running lots of isolated databases.

Under the hood, SQLite runs in-process with each actor. A custom VFS persists writes to HA storage (FoundationDB or Postgres). Rivet Actors also provide realtime (WebSockets), React integration (useActor), horizontal scalability, and actors that sleep when idle.

GitHub: https://github.com/rivet-dev/rivet
Docs: https://www.rivet.dev/docs/actors/sqlite/

[1] https://news.ycombinator.com/item?id=42472519
[2] https://rivet.dev/blog/2025-02-16-sqlite-on-the-server-is-mi...
[3] https://www.alexdebrie.com/posts/dynamodb-single-table/

Comments URL: https://news.ycombinator.
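The one-database-per-entity pattern the post describes is easy to try locally with plain SQLite. A minimal sketch, assuming ordinary file-backed databases rather than Rivet's VFS-on-FoundationDB storage:

```python
import sqlite3
import tempfile

# Illustrative only: one standalone SQLite file per agent / tenant / user /
# document. Not Rivet's actual implementation.
ROOT = tempfile.mkdtemp()

def db_for(entity_id, root=ROOT):
    # Each entity gets its own database file with its own schema.
    conn = sqlite3.connect(f"{root}/{entity_id}.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, body TEXT)"
    )
    return conn

a = db_for("tenant_a")
a.execute("INSERT INTO messages (body) VALUES (?)", ("hello",))
a.commit()

# Isolation for free: tenant_b's database has none of tenant_a's rows,
# with no row-level-security filters involved.
b = db_for("tenant_b")
```

The appeal is that isolation falls out of the storage layout itself; the hard parts a platform like Rivet adds are durability, placement, and keeping each database colocated with its single writer.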

Hacker News · about 3 hours ago
Cognitive Debt: When Velocity Exceeds Comprehension

Article URL: https://www.rockoder.com/beyondthecode/cognitive-debt-when-velocity-exceeds-comprehension/
Comments URL: https://news.ycombinator.com/item?id=47196582
Points: 197 · Comments: 79

Hacker News · about 4 hours ago
Show HN: Rust-powered document chunker for RAG – 40x faster, O(1) memory

I built a document chunking library for RAG pipelines with a Rust core and Python bindings. The problem: LangChain's chunker is pure Python and becomes a bottleneck at scale — slow and memory-hungry on large document sets.

What Krira Chunker does differently:
- Rust-native processing — 40x faster than LangChain's implementation
- O(1) space complexity — memory stays flat regardless of document size
- Drop-in Python API — works with any existing RAG pipeline
- Production-ready — 17 versions shipped, 315+ installs

pip install krira-augment

Would love brutal feedback from anyone building RAG systems — what chunking problems are you running into that this doesn't solve yet?

Comments URL: https://news.ycombinator.com/item?id=47196069
Points: 4 · Comments: 0