NewsWorld
AI-powered predictive news aggregation · © 2026 NewsWorld. All rights reserved.
Clustered Story
Published about 10 hours ago

The Om Programming Language

Hacker News · Feb 25, 2026 · Collected from RSS

Summary

Article URL: https://www.om-language.com/
Comments URL: https://news.ycombinator.com/item?id=47154971
Points: 219 · Comments: 47

Full Article

Introduction

The Om language is:

- a novel, maximally simple, concatenative, homoiconic programming and algorithm notation language with:
  - minimal syntax, comprising only three elements
  - prefix notation, in which functions manipulate the remainder of the program itself
  - panmorphic typing, allowing programming without data types
  - a trivial-to-parse data transfer format
- Unicode-correct: any UTF-8 text (without a byte-order marker) defines a valid Om program
- implemented as a C++ library that is:
  - embeddable into any C++ or Objective-C++ program
  - extensible with new data types or operations

The Om language is not:

- complete. Although the intent is to develop it into a full-featured language, the software is currently at a very early "proof of concept" stage, requiring many additional operations (such as basic number and file operations) and optimizations before it can be considered useful for any real-world purpose. It has been made available in order to demonstrate the underlying concepts and to welcome others into early development.
- stationary. Om will likely undergo significant changes on its way to version 1.0.

License

This program and the accompanying materials are made available under the terms of the Eclipse Public License, Version 1.0, which accompanies this distribution. For more information about this license, please see the Eclipse Public License FAQ.

Using

The Om source code can be used for:

- building a stand-alone interpreter from a script-generated build project
- inclusion as a C++ header-only library

Downloading

The Om source code is downloadable from the Om GitHub repository:

- The development version (to which this documentation applies) can be obtained via Git clone or archive file.
- Released versions can be obtained via archive files from the GitHub tags page.
Dependencies

Programs

To run the scripts which build the dependency libraries and generate the build project, the following programs are required:

- CMake
- Mac OS X: Xcode
- Windows: Visual Studio; Cygwin (with bash, GNU make, ar, and ranlib)
- Ubuntu: the Build-Essential package (sudo apt-get install build-essential)

To build the documentation in the build project, the following additional programs are required:

- Doxygen
- Graphviz

To ensure that the correct programs are used, they should be listed in the command-line path in the following order:

1. Graphviz, Doxygen, and CMake
2. Cygwin ("[cygwin]/bin") (Windows only)
3. Any other paths

Libraries

The following libraries are required to build the Om code:

- ICU4C (the C++ implementation of the ICU library)
- Boost

Building

A build project, containing targets for building the interpreter, tests, and documentation, can be generated into "[builds directory path]/Om/projects/[project]" by running the appropriate "generate" script from the desired builds directory:

- "generate.sh" (Unix-based platforms)
- "generate.bat" (Windows, to be run from the Visual Studio command line)

Arguments include the desired project name (required), followed by any desired CMake arguments. By default, the script automatically installs all external dependency libraries (downloading and building as necessary) into "[builds directory path]/[dependency name]/downloads/[MD5]/build/[platform]/install". This behaviour can be overridden by passing the paths of pre-installed dependency libraries to the script:

- -D Icu4cInstallDirectory:Path="[absolute ICU4C install directory path]"
- -D BoostInstallDirectory:Path="[absolute Boost install directory path]"

Interpreter

The Om.Interpreter target builds the interpreter executable as "[Om build directory path]/executables/[platform]/[configuration]/Om.Interpreter". The interpreter:

- Accepts an optional command-line argument that specifies the desired UTF-8 locale string. The default value is "en_US.UTF-8".
- Reads input from the standard input stream, ending at the first unbalanced end brace, and writes output to the standard output stream as it is computed.

Test

The Om.Test target builds the test executable, which runs all unit tests, as "[Om build directory path]/executables/[platform]/[configuration]/Om.Test". These tests are also run when building the RUN_TESTS target (which is included when building the ALL_BUILD target).

Documentation

The Om.Documentation target builds this documentation into the following folders in "[Om build directory path]/documentation":

- "html": this HTML documentation. To view it in a browser, open "index.html".
- "xml": the XML documentation, which can be read by an integrated development environment to show context-sensitive documentation.

Including

Om is a header-only C++ library that can be incorporated into any C++ or Objective-C++ project as follows:

1. Add the Om "code" directory to the include path and include the desired files. Including any operation header file automatically adds the corresponding operation to the global system. Include "om.hpp" to include all Om header files.
2. Configure the project to link to the code dependencies as necessary, built with the correct configuration for the project. See the dependency "build.cmake" scripts for guidance.
3. Call the Om::Language::System::Initialize function prior to use (e.g. in the main function), passing in the desired UTF-8 locale string (e.g. "en_US.UTF-8").
4. Construct an Om::Language::Environment, populate it with any additional operator-program mappings, and call one of its Om::Language::Environment::Evaluate functions to evaluate a program.

For more in-depth usage of the library, see the Om code documentation.
Language

Syntax

An Om program is a combination of three elements (operator, separator, and operand), as follows:

- Operator: An operator has the following syntax (syntax diagram not reproduced here). Backquotes (`) in operators are disregarded if the following code point is not a backquote, operand brace, or separator code point.
- Separator: A separator has the following syntax (syntax diagram not reproduced here).
- Operand: An operand has the following syntax (syntax diagram not reproduced here).

Functions

The Om language is concatenative, meaning that each Om program evaluates to a function (one that takes a program as input and returns a program as output), and the concatenation of two programs (with an intervening separator, as necessary) evaluates to the composition of the corresponding functions.

Prefix Notation

Unlike other concatenative languages, the Om language uses prefix notation. A function takes the remainder of the program as input and returns a program as output (which is passed as input to the function on its left). Prefix notation has the following advantages over postfix notation:

- Stack underflows are impossible.
- Prefix notation more closely models function composition. Instead of storing a data stack in memory, the Om evaluator stores a composed partial function.
- The evaluator can read, parse, and evaluate the input stream in a single pass, sending results to the output stream as soon as they are evaluated. This cannot be done in a postfix, stack-based language, because any data on the stack must remain there: it may be needed by a later function.
- Functions can be optimized to read into memory only the data that is required; stack-based postfix languages have no knowledge of the function to apply until the data is already in memory, on the stack.
- Incoming data, such as events, becomes simple to handle at the language level: a program might evaluate to a function that acts as a state machine, processing any additional data appended to the program and transitioning to a new state, ready to process new data.
- An integrated development environment can provide hints to the user about the data that is expected by a function.

Evaluation

Only the terms (operators and operands) of a program are significant to functions: separators are discarded from the input and inserted between output terms in a "normalized" form (for consistent formatting and proper operator separation).

There are three fundamental types of functions:

- Identity: a function whose output program contains all the terms in the input program.
- Constant: a function whose output program contains a term, defined by the function, followed by all terms in the input program.
- Operation: a function that is named by an operator and defines a computation. An operation processes operands at the front of the input program as data for the computation, and pushes any terms generated by the computation onto the output program, until one of two things happens:
  - If the computation is completed, the rest of the input terms are pushed onto the output program.
  - If the computation cannot be completed (due to insufficient operands), the operator that names the operation is pushed onto the output program, followed by all remaining input terms.

Programs are evaluated as functions in the following way:

- The empty program evaluates to the identity function.
- Programs that contain only a single element evaluate to functions as follows:
  - Separator: evaluates to the identity function.
  - Operand: evaluates to a constant function that pushes the operand, followed by all input terms, onto the output program.
  - Operator: evaluates to the operation defined for the operator in the environment; if there is none, it evaluates to a constant function that pushes the operator, followed by all input terms, onto the output program.
- Programs that contain multiple elements can be considered a concatenation of sub-programs, each containing one of the elements. The concatenated program evaluates to the composition of the functions to which each sub-program evaluates.
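The evaluation rules above can be modeled with a short Python sketch. This is purely an illustrative model, not the actual Om C++ implementation; the `swap` operation and every name in it are invented for the example.

```python
# Illustrative model of Om evaluation: a program is a list of terms,
# each element evaluates to a function from term list to term list,
# and concatenation is function composition.

def identity(terms):
    # Identity: output contains all input terms unchanged.
    return terms

def constant(term):
    # Constant: pushes its term, followed by all input terms.
    return lambda terms: [term] + terms

def operation(operator, arity, compute):
    # Operation: consumes operands from the front of the input.
    # With insufficient operands, the operator itself is pushed instead.
    def apply(terms):
        if len(terms) < arity:
            return [operator] + terms
        return compute(*terms[:arity]) + terms[arity:]
    return apply

def evaluate(elements, environment, input_terms):
    # Compose the element functions so that the rightmost one
    # receives the input first, as in prefix notation.
    composed = identity
    for element in elements:
        outer, inner = composed, environment.get(element, constant(element))
        composed = lambda terms, f=outer, g=inner: f(g(terms))
    return composed(input_terms)
```

With a hypothetical `swap` operation of arity 2, `evaluate(["swap"], env, ["X", "Y", "Z"])` yields `["Y", "X", "Z"]`, while `evaluate(["swap"], env, ["X"])` has insufficient operands and leaves the operator in place, yielding `["swap", "X"]`.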
For example, the program "A B" is the concatenation of the programs "A", " ", and "B". The separator evaluates to the identity function and can be disregarded. The programs "A" and "B" evaluate to functions denoted A and B, respectively. The composed function handles input and output as follows:

- Function B receives the input, and its output becomes the input of function A.
- Function A receives that input, and its output becomes the output of the composed function.

Any programs may be concatenated together; however, note that concatenating programs "A" and "B" wi…
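The worked example above can be reproduced with ordinary function composition in Python (again an illustrative model, not the Om implementation, taking "A" and "B" to be operands that evaluate to constant functions):

```python
# "A" and "B" are operands, so each evaluates to a constant function
# that pushes its term in front of the input terms.
A = lambda terms: ["A"] + terms
B = lambda terms: ["B"] + terms

# The program "A B" evaluates to the composition of A and B:
# B receives the input first, and A receives B's output.
composed = lambda terms: A(B(terms))

print(composed([]))      # ['A', 'B']
print(composed(["C"]))   # ['A', 'B', 'C']
```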



Read Original at Hacker News

Related Articles

Hacker News · 3 days ago
The JavaScript Oxidation Compiler

Article URL: https://oxc.rs/
Comments URL: https://news.ycombinator.com/item?id=47117459
Points: 210 · Comments: 106

Hacker News · about 2 hours ago
Show HN: OpenSwarm – Multi‑Agent Claude CLI Orchestrator for Linear/GitHub

I built OpenSwarm because I wanted an autonomous "AI dev team" that can actually plug into my real workflow instead of running toy tasks. OpenSwarm orchestrates multiple Claude Code CLI instances as agents to work on real Linear issues. It:

- pulls issues from Linear and runs a Worker/Reviewer/Test/Documenter pipeline
- uses LanceDB + multilingual-e5 embeddings for long-term memory and context reuse
- builds a simple code knowledge graph for impact analysis
- exposes everything through a Discord bot (status, dispatch, scheduling, logs)
- can auto-iterate on existing PRs and monitor long-running jobs

Right now it's powering my own solo dev workflow (trading infra, LLM tools, other projects). It's still early, so there are rough edges and a lot of TODOs around safety, scaling, and better task decomposition. I'd love feedback on:

- what feels missing for this to be useful to other teams
- failure modes you'd be worried about in autonomous code agents
- ideas for better memory/knowledge graph use in real-world repos

Repo: https://github.com/Intrect-io/OpenSwarm

Happy to answer questions and hear brutal feedback.

Comments URL: https://news.ycombinator.com/item?id=47160980
Points: 8 · Comments: 0

Hacker News · about 3 hours ago
Jane Street Hit with Terra $40B Insider Trading Suit

Article URL: https://www.disruptionbanking.com/2026/02/24/jane-street-hit-with-terra-40b-insider-trading-suit/
Comments URL: https://news.ycombinator.com/item?id=47160613
Points: 10 · Comments: 0

Hacker News · about 3 hours ago
Show HN: ZSE – Open-source LLM inference engine with 3.9s cold starts

I've been building ZSE (Z Server Engine) for the past few weeks: an open-source LLM inference engine focused on two things nobody has fully solved together, memory efficiency and fast cold starts.

The problem I was trying to solve: Running a 32B model normally requires ~64 GB VRAM. Most developers don't have that. And even when quantization helps with memory, cold starts with bitsandbytes NF4 take 2+ minutes on first load and 45–120 seconds on warm restarts, which kills serverless and autoscaling use cases.

What ZSE does differently:

- Fits 32B in 19.3 GB VRAM (70% reduction vs FP16): runs on a single A100-40GB
- Fits 7B in 5.2 GB VRAM (63% reduction): runs on consumer GPUs
- Native .zse pre-quantized format with memory-mapped weights: 3.9 s cold start for 7B and 21.4 s for 32B, vs 45 s and 120 s with bitsandbytes and ~30 s for vLLM
- All benchmarks verified on Modal A100-80GB (Feb 2026)

It ships with:

- OpenAI-compatible API server (drop-in replacement)
- Interactive CLI (zse serve, zse chat, zse convert, zse hardware)
- Web dashboard with real-time GPU monitoring
- Continuous batching (3.45× throughput)
- GGUF support via llama.cpp
- CPU fallback: works without a GPU
- Rate limiting, audit logging, API key auth

Install:

    pip install zllm-zse
    zse serve Qwen/Qwen2.5-7B-Instruct

For fast cold starts (one-time conversion):

    zse convert Qwen/Qwen2.5-Coder-7B-Instruct -o qwen-7b.zse
    zse serve qwen-7b.zse  # 3.9s every time

The cold start improvement comes from the .zse format storing pre-quantized weights as memory-mapped safetensors: no quantization step at load time, no weight conversion, just mmap + GPU transfer. On NVMe SSDs this gets under 4 seconds for 7B. On spinning HDDs it'll be slower.

All code is real, with no mock implementations. Built at Zyora Labs. Apache 2.0. Happy to answer questions about the quantization approach, the .zse format design, or the memory efficiency techniques.

Comments URL: https://news.ycombinator.com/item?id=47160526
Points: 18 · Comments: 1

Hacker News · about 3 hours ago
Tech Companies Shouldn't Be Bullied into Doing Surveillance

Article URL: https://www.eff.org/deeplinks/2026/02/tech-companies-shouldnt-be-bullied-doing-surveillance
Comments URL: https://news.ycombinator.com/item?id=47160226
Points: 34 · Comments: 1

Hacker News · about 5 hours ago
Banned in California

Article URL: https://www.bannedincalifornia.org/
Comments URL: https://news.ycombinator.com/item?id=47159430
Points: 119 · Comments: 109