Small AI lab Noeum.ai debuts Noeum-1-Nano, an open-source model matching major nano-level LLMs with 600x less data.
VIENNA, AUSTRIA, January 21, 2026 /EINPresswire.com/ — Noeum.ai, a small independent AI research lab, today announced the public release of Noeum-1-Nano, an open-source Small Language Model (SLM) that challenges the industry assumption that high performance requires massive compute.
By utilizing a highly efficient “Mixture-of-Experts” (MoE) architecture, Noeum-1-Nano matches the capabilities of major nano-level LLMs despite being trained on just 18 billion tokens—roughly 600× less data than typical baselines. The release validates Noeum.ai’s efficiency-first thesis: that innovative techniques and intelligent design can deliver competitive reasoning without trillion-token budgets.
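The data-efficiency claim above is simple to sanity-check. The sketch below works through the back-of-envelope arithmetic: the 18-billion-token figure comes from the announcement, while the ~12-trillion-token baseline budget is an illustrative assumption chosen to match the upper end of the stated reduction range, not a figure from the release.

```python
# Back-of-envelope data-budget comparison implied by the release.
# noeum_tokens is stated in the announcement; baseline_tokens is an
# assumed budget for a typical large-scale pretraining run.
noeum_tokens = 18e9        # Noeum-1-Nano training tokens (stated)
baseline_tokens = 12e12    # assumed baseline budget (~12T tokens)

ratio = baseline_tokens / noeum_tokens
print(f"Data reduction: {ratio:.0f}x")  # -> Data reduction: 667x
```

A ~10.8T-token baseline would yield the "roughly 600×" headline figure; the exact multiple depends on which baseline model is used for comparison.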
“We proved that you don’t need a trillion tokens to build a reasoning engine,” said Bledar Ramo, Founder of Noeum.ai. “Noeum-1-Nano demonstrates that architectural discipline beats brute force, giving developers a model that punches significantly above its weight class without the massive compute bill. We are validating what truly improves reasoning and reliability before committing resources to larger-scale training.”
Key Highlights
– From-scratch training: No inherited pretrained weights; built entirely from the ground up.
– Data-efficient by design: ~20× to 667× less training data than common nano/small baselines.
– Fair reporting: Baseline benchmarks published with “thinking mode” disabled to ensure honest comparisons.
– Optional System-2 reasoning: A dedicated mode with a controllable reasoning budget.
– Replication-friendly: Includes reference inference scripts, evaluation tools, and configuration notes.
What is Being Released
– Noeum-1-Nano (Post-trained): The chat and reasoning model featuring the optional thinking mode.

– Noeum-1-Nano-Base: The raw pre-trained foundation model for completion and fine-tuning.
– Tooling: An all-in-one script to run the model locally, toggle think mode, sweep decoding settings, and log outputs for comparison.
Results at Nano Scale
Noeum-1-Nano utilizes a sparse MoE design (0.6B total parameters / ~0.2B active at runtime) to prioritize capability per unit of compute. This allows it to run efficiently on edge devices, consumer laptops, and cost-constrained inference environments.
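The total-versus-active parameter split can be sketched with toy numbers. In the example below, the expert count, expert size, and shared-parameter figure are illustrative assumptions, not Noeum-1-Nano's actual configuration; only the 0.6B-total / ~0.2B-active headline figures come from the release.

```python
def moe_param_counts(shared: float, n_experts: int,
                     expert_size: float, top_k: int):
    """Total vs. active parameter counts for a toy sparse-MoE stack.

    A router selects top_k of n_experts per token, so only those
    experts' weights (plus the shared parameters) are exercised.
    """
    total = shared + n_experts * expert_size
    active = shared + top_k * expert_size
    return total, active

# Assumed split: 0.1B shared (embeddings/attention) + 10 experts
# of 0.05B each, with the router activating 2 experts per token.
total, active = moe_param_counts(shared=0.1e9, n_experts=10,
                                 expert_size=0.05e9, top_k=2)
print(f"total={total / 1e9:.1f}B active={active / 1e9:.1f}B")
# -> total=0.6B active=0.2B
```

This is why a sparse model can carry 0.6B parameters' worth of capacity while paying roughly a 0.2B-parameter compute cost per token, which is what makes edge and laptop inference practical.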
To maintain scientific rigor, headline benchmark numbers are reported with thinking mode disabled. In these published results, Noeum-1-Nano achieves 77.5% accuracy on SciQ and an 81.2 F1 score on MRPC, ranking #1 on MRPC and BoolQ (complex yes/no reasoning) versus comparable nano-class models. Additional scores include PIQA 62.9% and ARC-Easy 47.1%.
Think Mode & System-2 Reasoning
Noeum-1-Nano includes an optional System-2 style mode designed for multi-step verification and self-correction. Noeum.ai has reported baseline benchmarks separately so users can evaluate raw capability first, then measure the reliability lift—and latency trade-offs—provided by deliberate reasoning.
Why This Matters (The European Angle)
While high-profile model development remains concentrated in the U.S. and China, Noeum.ai aims to strengthen the European ecosystem. By demonstrating how efficiency-first workflows allow smaller teams to iterate faster, the lab provides a blueprint for validating scaling recipes before committing to major compute—broadening participation without lowering evaluation standards.
What’s Next
With additional compute infrastructure and strategic partners, Noeum.ai plans to scale beyond nano models. Future comparisons will focus on a realistic-sized multilingual and multimodal system trained on 1–3 trillion tokens, prioritizing long-context efficiency and self-correcting reasoning pipelines.
The rule remains the same: iterate small, measure honestly, and scale only what survives evaluation.
Availability
Researchers and practitioners are invited to run the models, inspect the benchmark configurations, and reproduce results via the following links:
– Noeum-1-Nano (Chat/Reasoning): https://huggingface.co/noeum/noeum-1-nano
– Noeum-1-Nano-Base: https://huggingface.co/noeum/noeum-1-nano-base
– Project Site: https://noeum.ai
About Noeum.ai
Noeum.ai is an independent AI research and engineering lab based in Vienna, Austria. Founded by Bledar Ramo, who architected and trained the released models, the lab focuses on efficiency-first training, reproducible evaluation, and high-performance reasoning architectures.
Bledar Ramo
Noeum.ai
+43 681 20603110
contact@noeum.ai
Visit us on social media:
LinkedIn
Legal Disclaimer:
EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

