NewsWorld
AI-powered predictive news aggregation · © 2026 NewsWorld. All rights reserved.

New accounts on HN more likely to use em-dashes

Hacker News · Feb 25, 2026 · Collected from RSS

Summary

Article URL: https://www.marginalia.nu/weird-ai-crap/hn/
Comments URL: https://news.ycombinator.com/item?id=47152085
Points: 563 · Comments: 477

Full Article

I’ve had this sense that HN has gotten absolutely inundated with bots over the last few months. The most obvious giveaway is the frequency with which you see accounts posting brilliant insights like:

13 60 well and t6ctctfuvuh7hguhuig8h88gd to f6gug7h8j8h6fzbuvubt GB I be cugttc fav uhz cb ibub8vgxgvzdrc to bubuvtxfh tf d xxx h z j gj uxomoxtububonjbk P.l.kvh cb hug tf 6 go k7gtcv8j9j7gimpiiuh7i 8ubgor1662476506orАё

Beyond the accounts that are visibly glitching out, the vibe is also seriously off: lots of comments that are incredibly banal, or oddly off topic. It’s hard to put a finger on exactly how, but I had the idea of scraping /newcomments and /noobcomments to see if I could make sense of it. The first lists recently made comments; the second lists recent comments from newly registered accounts.

With some simple statistics, I quickly found that:

• Comments from newly registered accounts are nearly 10x more likely to use em-dashes, arrows, and other symbols in their text (17.47% vs 1.83% of comments). p = 7e-20
• Comments from newly registered accounts on HN are also more likely to mention AI and LLMs (18.67% vs 11.8% of comments). p = 0.0018

The sample size isn’t enormous, about 700 comments in each category, but these are pretty big differences. While regular humans sometimes use em-dashes, arrows, and the like, it’s hard to explain why new accounts would be 10x more prone to using them than established accounts.

Sources and data
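As a sanity check on the first of those numbers, a pooled two-proportion z-test on counts back-solved from the reported percentages gives a similarly extreme p-value. This is a minimal sketch, not the author's analysis: the exact counts and choice of test may differ, so the result will not match 7e-20 exactly.

```python
from math import sqrt, erfc

def two_prop_ztest(k1, n1, k2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p

# Counts back-solved from the article's percentages, ~700 comments per
# group: 17.47% of 700 is about 122 new-account comments containing
# em-dashes/arrows, vs 1.83% (about 13) for established accounts.
z, p = two_prop_ztest(122, 700, 13, 700)
print(f"z = {z:.2f}, p = {p:.1e}")
```

With a z-statistic near 10, the gap is far beyond anything sampling noise at n ≈ 700 per group would produce, which is the substance of the author's claim.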



Related Articles

Hacker News · about 2 hours ago
I don't know how you get here from "predict the next word."

Article URL: https://www.grumpy-economist.com/p/refine Comments URL: https://news.ycombinator.com/item?id=47162059 Points: 7 # Comments: 3

Hacker News · about 2 hours ago
Apple Needs to Copy Samsung's New Security Smartphone Screen ASAP

Article URL: https://www.wsj.com/tech/personal-tech/samsung-galaxy-s26-privacy-display-d5bce9ab Comments URL: https://news.ycombinator.com/item?id=47162002 Points: 10 # Comments: 4

Hacker News · about 3 hours ago
Self-improving software won't produce Skynet

Article URL: https://contalign.jefflunt.com/self-improving-software/ Comments URL: https://news.ycombinator.com/item?id=47161498 Points: 12 # Comments: 2

Hacker News · about 5 hours ago
Show HN: OpenSwarm – Multi‑Agent Claude CLI Orchestrator for Linear/GitHub

I built OpenSwarm because I wanted an autonomous “AI dev team” that can actually plug into my real workflow instead of running toy tasks. OpenSwarm orchestrates multiple Claude Code CLI instances as agents to work on real Linear issues. It:

• pulls issues from Linear and runs a Worker/Reviewer/Test/Documenter pipeline
• uses LanceDB + multilingual-e5 embeddings for long‑term memory and context reuse
• builds a simple code knowledge graph for impact analysis
• exposes everything through a Discord bot (status, dispatch, scheduling, logs)
• can auto‑iterate on existing PRs and monitor long‑running jobs

Right now it’s powering my own solo dev workflow (trading infra, LLM tools, other projects). It’s still early, so there are rough edges and a lot of TODOs around safety, scaling, and better task decomposition. I’d love feedback on:

• what feels missing for this to be useful to other teams
• failure modes you’d be worried about in autonomous code agents
• ideas for better memory/knowledge graph use in real‑world repos

Repo: https://github.com/Intrect-io/OpenSwarm

Happy to answer questions and hear brutal feedback.

Comments URL: https://news.ycombinator.com/item?id=47160980 · Points: 8 · Comments: 0

Hacker News · about 6 hours ago
Jane Street Hit with Terra $40B Insider Trading Suit

Article URL: https://www.disruptionbanking.com/2026/02/24/jane-street-hit-with-terra-40b-insider-trading-suit/ Comments URL: https://news.ycombinator.com/item?id=47160613 Points: 10 # Comments: 0

Hacker News · about 6 hours ago
Show HN: ZSE – Open-source LLM inference engine with 3.9s cold starts

I've been building ZSE (Z Server Engine) for the past few weeks — an open-source LLM inference engine focused on two things nobody has fully solved together: memory efficiency and fast cold starts.

The problem I was trying to solve: running a 32B model normally requires ~64 GB VRAM. Most developers don't have that. And even when quantization helps with memory, cold starts with bitsandbytes NF4 take 2+ minutes on first load and 45–120 seconds on warm restarts — which kills serverless and autoscaling use cases.

What ZSE does differently:

• Fits 32B in 19.3 GB VRAM (70% reduction vs FP16) — runs on a single A100-40GB
• Fits 7B in 5.2 GB VRAM (63% reduction) — runs on consumer GPUs
• Native .zse pre-quantized format with memory-mapped weights: 3.9s cold start for 7B, 21.4s for 32B — vs 45s and 120s with bitsandbytes, ~30s for vLLM
• All benchmarks verified on Modal A100-80GB (Feb 2026)

It ships with:

• OpenAI-compatible API server (drop-in replacement)
• Interactive CLI (zse serve, zse chat, zse convert, zse hardware)
• Web dashboard with real-time GPU monitoring
• Continuous batching (3.45× throughput)
• GGUF support via llama.cpp
• CPU fallback — works without a GPU
• Rate limiting, audit logging, API key auth

Install:

    pip install zllm-zse
    zse serve Qwen/Qwen2.5-7B-Instruct

For fast cold starts (one-time conversion):

    zse convert Qwen/Qwen2.5-Coder-7B-Instruct -o qwen-7b.zse
    zse serve qwen-7b.zse  # 3.9s every time

The cold start improvement comes from the .zse format storing pre-quantized weights as memory-mapped safetensors — no quantization step at load time, no weight conversion, just mmap + GPU transfer. On NVMe SSDs this gets under 4 seconds for 7B. On spinning HDDs it'll be slower.

All code is real — no mock implementations. Built at Zyora Labs. Apache 2.0. Happy to answer questions about the quantization approach, the .zse format design, or the memory efficiency techniques.

Comments URL: https://news.ycombinator.com/item?id=47160526 · Points: 18 · Comments: 1
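The mmap-based loading the ZSE post describes can be sketched in a few lines. This is illustrative only: the file name and byte layout below are made up and are not ZSE's actual .zse format; it just shows why mapping a pre-quantized blob is cheap compared with reading and converting weights up front.

```python
import mmap
import os
import tempfile

# Write a toy "pre-quantized" blob: a flat run of int8-sized bytes.
path = os.path.join(tempfile.mkdtemp(), "toy.weights")
n = 1_000_000
with open(path, "wb") as f:
    f.write(bytes(i % 256 for i in range(n)))

# "Cold start": map the file instead of reading and dequantizing it.
# Pages are faulted in lazily on access, so startup cost is near zero;
# a GPU upload could stream straight from this buffer.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first, last = mm[0], mm[n - 1]  # touching bytes pulls in pages
    mm.close()

print(first, last)
```

The operating system's page cache does the heavy lifting here, which is also why the post notes the speedup depends on storage: page faults against NVMe are fast, against a spinning disk they are not.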