Open Source Research Division

137 Particles LABS

We don't trust specs; we trust silicon. Our research focuses on hardware-aware optimization, sovereign protocols, and high-performance NLP.

GitHub

The Performance Observatory

Real-world benchmarks on consumer hardware.

CASE STUDY: EMBEDDING EFFICIENCY (ACCURACY VS RAM)

MiniLM-L6-v2 (384 dim): 98.4% Efficiency Score, 80 MB RAM
Generic-Large (1024 dim): Marginal Gain (+1.2%), 1.2 GB RAM
Finding: A well-tuned 384-dim model hits the Pareto frontier for RAG. Larger vectors often burn RAM for negligible semantic gain.
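The exact scoring formula behind the Efficiency Score isn't published here; a minimal sketch of the trade-off it captures, assuming a hypothetical metric that discounts retrieval accuracy by log-scaled memory cost (the model names and numbers mirror the case study above, the accuracy values are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// Model holds benchmark inputs for one embedding model.
type Model struct {
	Name     string
	Dim      int
	Accuracy float64 // retrieval accuracy on the eval set, 0..1 (illustrative)
	RAMMB    float64 // resident memory at inference time, in MB
}

// EfficiencyScore is a hypothetical metric: accuracy divided by
// log2-scaled RAM cost. It illustrates why a 15x RAM increase for
// +1.2% accuracy falls off the Pareto frontier; it is not the
// lab's actual scoring formula.
func EfficiencyScore(m Model) float64 {
	return m.Accuracy / math.Log2(1+m.RAMMB)
}

func main() {
	models := []Model{
		{"MiniLM-L6-v2", 384, 0.871, 80},
		{"Generic-Large", 1024, 0.883, 1200}, // +1.2% accuracy, 15x RAM
	}
	for _, m := range models {
		fmt.Printf("%-14s dim=%4d  score=%.4f\n", m.Name, m.Dim, EfficiencyScore(m))
	}
}
```

Under any metric of this shape, the 384-dim model dominates: the denominator grows with memory while the numerator barely moves.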

Why we benchmark on "Metal"

Most benchmarks run in a vacuum. We run ours on Mac Minis, Gaming PCs, and older Xeons. We measure the "Quantization Tax"—the hidden cost of memory bandwidth bottlenecks on consumer cards.
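One crude way to see the bandwidth ceiling that drives the Quantization Tax is to time a streaming read over a buffer larger than cache. This is an illustrative sketch, not the benchmark suite's actual methodology; `measureBandwidth` and its GB/s estimate are assumptions for demonstration:

```go
package main

import (
	"fmt"
	"time"
)

// measureBandwidth streams a sizeMB buffer once and returns the
// effective read bandwidth in GB/s -- a rough proxy for the
// memory-bandwidth ceiling that dominates quantized-model
// inference on consumer hardware.
func measureBandwidth(sizeMB int) float64 {
	buf := make([]float32, sizeMB*1024*1024/4)
	for i := range buf {
		buf[i] = float32(i % 7) // touch every page before timing
	}
	var sink float32
	start := time.Now()
	for _, v := range buf {
		sink += v // sequential read, bandwidth-bound
	}
	elapsed := time.Since(start).Seconds()
	_ = sink // keep the loop from being considered dead code
	return float64(sizeMB) / 1024 / elapsed
}

func main() {
	fmt.Printf("effective read bandwidth: %.1f GB/s\n", measureBandwidth(256))
}
```

Two machines with identical FLOPS can report very different numbers here, which is exactly why paper specs alone don't predict token throughput.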

Our data drives the Quantum Gate's scheduling logic. We don't guess which model fits your hardware; we know.

View Full Benchmark Suite

Active Projects

Golang NLP

Prose (Fork)

Resurrecting the standard for Go-based NLP. Removing Python dependencies and optimizing tokenization for sub-millisecond execution.
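Sub-millisecond tokenization is mostly an allocation problem. A minimal sketch of the allocation-conscious style involved, assuming a tokenizer that records byte-offset spans into the original string instead of copying substrings; this is illustrative and not the fork's actual implementation:

```go
package main

import (
	"fmt"
	"unicode"
)

// tokenize splits text into word tokens by recording [start, end)
// byte offsets into the original string, avoiding a string copy
// per token. Hyphens and other punctuation act as separators in
// this simplified sketch.
func tokenize(text string) [][2]int {
	spans := make([][2]int, 0, len(text)/5) // rough token-count guess
	start := -1
	for i, r := range text {
		if unicode.IsLetter(r) || unicode.IsDigit(r) {
			if start < 0 {
				start = i
			}
		} else if start >= 0 {
			spans = append(spans, [2]int{start, i})
			start = -1
		}
	}
	if start >= 0 {
		spans = append(spans, [2]int{start, len(text)})
	}
	return spans
}

func main() {
	text := "Resurrecting Go-based NLP."
	for _, s := range tokenize(text) {
		fmt.Println(text[s[0]:s[1]])
	}
}
```

The single pass over runes handles multi-byte UTF-8 correctly because the range loop yields byte offsets, which is what the spans store.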

Specification

Unified Messaging

The "TCP/IP" of Generative AI. A bidirectional JSON schema with Source ID tracking to decouple apps from provider formats.

Safety

Sentinel

Granular file system journaling. An "Undo Button" for AI Agents that reverts edits at the keystroke level, not just the commit level.
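Keystroke-level revert falls out naturally from an append-only edit log. A minimal in-memory sketch of the idea, assuming a hypothetical Edit/Journal shape; this is not Sentinel's on-disk format:

```go
package main

import "fmt"

// Edit records a single keystroke-level change: replace the byte
// range [Off, Off+len(Old)) with New.
type Edit struct {
	Off      int
	Old, New string
}

// Journal is an append-only log of edits that can be reverted one
// at a time -- finer-grained than a VCS commit.
type Journal struct{ log []Edit }

// Apply performs an edit on doc and journals it.
func (j *Journal) Apply(doc string, e Edit) string {
	j.log = append(j.log, e)
	return doc[:e.Off] + e.New + doc[e.Off+len(e.Old):]
}

// Undo reverses the most recent journaled edit.
func (j *Journal) Undo(doc string) string {
	if len(j.log) == 0 {
		return doc
	}
	e := j.log[len(j.log)-1]
	j.log = j.log[:len(j.log)-1]
	return doc[:e.Off] + e.Old + doc[e.Off+len(e.New):]
}

func main() {
	var j Journal
	doc := "hello world"
	doc = j.Apply(doc, Edit{Off: 6, Old: "world", New: "agent"})
	fmt.Println(doc)
	fmt.Println(j.Undo(doc))
}
```

Storing the old text alongside the new makes every edit self-inverting, so a bad agent run can be unwound edit by edit instead of rolling back a whole commit.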