Embeddings: 6,144 bytes per item, approximate, changes when the model updates, needs a GPU. SBS: 64 bytes, exact, deterministic forever, runs on any CPU. 178 lookups to search a billion items — not a linear scan of millions of vectors.
96× smaller storage. 56× less energy. $0 GPU cost. Same answer every time you ask.
67 end-to-end tests passing. Buy credits. No subscriptions. Your vault stays forever.
Get a free API key to start encoding.
Get API Key — 10,000 free credits
One API call. Under 200ms. Same answer every time you ask. No model to train. No vectors to store. No GPU to rent.
Documents, code, logs, emails — anything with characters
Deterministic spectral address — 96× smaller than embeddings
No model drift. Run it tomorrow, next year — identical result
Flip-graph walk: O(N^¼). At 1M items = 31 lookups. At 1B = 178. Exact, not approximate.
Every level is explicitly encoded — nothing “emerges”. Each level adds signal that makes your users' search, matching, and discovery measurably better.
Your text receives a deterministic spectral fingerprint — 64 bytes, identical every time
Content is analysed, understood, and encoded in a single API call. All AI processing included. 96× smaller than embeddings.
For your product: Ship an instant search feature. No per-query AI cost. Zero model risk. Same result in 2026 and 2046.
5-axis meaning decomposition — what content IS and what it ISN'T
Content understood across intent, category, mechanism, context, and exclusions. Distinguishes "fatigue from exertion" vs "fatigue despite rest" automatically.
For your product: Your search distinguishes meaning, not just words. Users find what they actually need.
Differential encoding — how this item differs from its neighbours
Catches anomalies and relevance shifts. The document that CHANGED relative to context surfaces, not every mention.
For your product: Anomaly detection, relevance ranking, change tracking — built into the encoding itself.
Topic coherence — items cluster into knowledge folds automatically
Your data self-organises. Related content gravitates into the same fold. Cross-fold discovery surfaces non-obvious connections.
For your product: Auto-categorisation. Your users' data organises itself. Onboarding cost drops to near-zero.
Cross-fold crystallisation — how knowledge structures relate across your entire corpus
The flywheel. Every new item enriches every fold. The knowledge space crystallises over time — more data = better results for EVERY query.
For your product: Compounding moat. Your product improves with every customer's data. Switching cost is real — they can't export the crystallised structure.
Screenshot any card. Send it to your CTO, your CFO, your board. This is the entire business case.
Embeddings
6,144 bytes
OpenAI ada-002: 1,536 floats × 4 bytes each. 1M items = 6.1 GB. 1B items = 6.1 TB. Requires a dedicated vector database.
SBS
64 bytes
8 spectral bins × 8 bytes. 1M items = 64 MB. 1B items = 64 GB. Fits in RAM on one server.
96× smaller. Your entire vector DB bill disappears.
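The storage arithmetic above is checkable in a few lines. This sketch uses only the per-item sizes quoted in the cards (raw payload bytes; index overhead for either system is excluded):

```javascript
// Per-item sizes quoted above, in bytes.
const EMBEDDING_BYTES = 1536 * 4; // ada-002: 1,536 float32s
const SBS_BYTES = 8 * 8;          // SBS: 8 spectral bins × 8 bytes

const items = 1e9; // one billion items
const embeddingTB = (EMBEDDING_BYTES * items) / 1e12; // ≈ 6.1 TB
const sbsGB = (SBS_BYTES * items) / 1e9;              // 64 GB

console.log(`embeddings: ${embeddingTB} TB, SBS: ${sbsGB} GB`);
console.log(`ratio: ${EMBEDDING_BYTES / SBS_BYTES}×`); // 96×
```

Same numbers as the cards: 6.1 TB vs 64 GB at a billion items, a 96× ratio per item stored.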
Embeddings
~400 distance calculations
HNSW approximate nearest-neighbour: 400 comparisons × 1,536 dimensions = 614,400 FLOPs per query. Still approximate — misses 5–15% of true nearest matches. Index takes hours to build.
SBS
178 flip-graph lookups
O(N^¼) walk through 4D tetrahedra. Each flip = one orientation test (4 multiplies). At 1M: 31 lookups. At 100M: 100. At 1B: 178. Exact — zero misses.
863× fewer operations. Exact, not approximate. No index to rebuild.
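The lookup counts and the 863× figure both fall out of the quarter-power scaling. A quick check (the cards round the raw values — 1M items gives ~31.6, which is quoted as 31):

```javascript
// Expected flip-graph walk length for an N-item corpus: O(N^¼).
const walkLength = (n) => Math.pow(n, 0.25);

for (const n of [1e6, 1e8, 1e9]) {
  console.log(`${n.toExponential(0)} items → ~${walkLength(n).toFixed(1)} flips`);
}
// 1e6 → ~31.6, 1e8 → ~100.0, 1e9 → ~177.8

// Operations per query at 1B items, using the figures from the cards.
const embedFlops = 400 * 1536; // HNSW: 400 comparisons × 1,536 dims = 614,400
const sbsOps = 178 * 4;        // 178 flips × 4 multiplies per orientation test
console.log(Math.round(embedFlops / sbsOps)); // 863
```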
Embeddings
$50,000–$200,000/yr
A100 cloud: $2–$3/hr × 24/7 = $17K–$26K/yr minimum. Production cluster with redundancy: $50K–$200K/yr. Plus CUDA lock-in, driver updates, procurement.
SBS
$0
Pure DFT arithmetic runs on any CPU. Your laptop. A $500 server. A Raspberry Pi. An edge device. No CUDA. No drivers. No procurement cycle.
Delete your entire GPU line item. Redeploy that $200K.
Embeddings
~4.5 kWh
A100 GPU at 300W for inference + vector DB servers running 24/7 + cooling overhead. At scale: tonnes of CO₂ per year.
SBS
~0.08 kWh
CPU at 65 W × 1 ms per encode × 1M items ≈ 18 Wh of raw compute; the ~0.08 kWh headline allows for I/O and system overhead. No idle GPU. No vector DB baseline draw. No cooling premium.
56× less energy. Put this number in your ESG report.
Embeddings
$72,000/yr
Pinecone p2: ~$6,000/mo for 1B vectors. Self-hosted: 6.1 TB index across 8+ servers with 768 GB RAM each. Either way — expensive, complex, fragile.
SBS
$0 extra
64 GB total index. One server. Your existing Postgres or Supabase. No new vendor. No new infrastructure. No new ops burden.
$72K/yr saved on vector DB alone. No new vendor contract.
Embeddings
Re-embed everything
OpenAI superseded ada-002 with the text-embedding-3 models. New model = new vectors = re-embed your entire corpus. Weeks of compute. Broken similarity during migration. Explain the regression to your users.
SBS
Zero risk — mathematical proof
DFT is a theorem, not a trained model. Same input produces the same 64 bytes today, next year, and in 2046. No migration. No re-embedding. No search regressions. Ever.
Never re-embed. Never migrate. Never explain a regression.
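Determinism is easy to demonstrate with any DFT: the transform is a pure function of its input. This is a toy sketch — an 8-bin magnitude spectrum over character codes, purely illustrative, not the actual SBS encoding:

```javascript
// Toy 8-bin DFT magnitude spectrum of a string's character codes.
// Illustrative only — this is NOT the real SBS encoding.
function spectrum(text, bins = 8) {
  const x = Array.from(text, (c) => c.charCodeAt(0));
  const N = x.length;
  const out = [];
  for (let k = 0; k < bins; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phase = (-2 * Math.PI * k * n) / N;
      re += x[n] * Math.cos(phase);
      im += x[n] * Math.sin(phase);
    }
    out.push(Math.hypot(re, im));
  }
  return out;
}

const a = spectrum('Indemnity clause for late delivery');
const b = spectrum('Indemnity clause for late delivery');
console.log(a.every((v, i) => v === b[i])); // true — same input, same output, every run
```

No weights, no training data, no version to drift: the output is fixed by the input and the maths alone.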
Encode
1–5p
Gets faster as axioms freeze
Search
Free – 1.5p
First 1,000/mo free. Then volume-priced.
Surface
3–10p
Discovery worth more at scale
You build the product. SBS powers the intelligence underneath. Each market shows where embedding APIs fail and SBS compounds.
Click any market to see the engagement, retention, and cost case for your product.
Five steps from payment to production. No onboarding calls. No integration meetings. Your lead engineer can ship this before lunch.
Stripe checkout. Key provisioned automatically. No human in the loop. Works immediately.
Authorization: Bearer sbs_live_your_api_key
Send up to 100 items per request to /v2/batch. Each item gets a unique ID you provide. 12,000 contracts = 120 requests = under 30 seconds.
POST /v2/batch
{ "items": [{ "id": "doc-1", "content": "..." }, ...], "maxLevel": 4 }
→ 100 items encoded in ~2 seconds
Every item you encode is added to your private flip graph — searchable immediately. You also get the raw 64-byte encoding back to store in YOUR database for analytics, backup, or client-side filtering.
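Splitting a corpus into request bodies of up to 100 items is a few lines. The IDs and content here are placeholders:

```javascript
// Split items into /v2/batch request bodies of up to 100 items each.
function toBatches(items, size = 100) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push({ items: items.slice(i, i + size), maxLevel: 4 });
  }
  return batches;
}

// 12,000 contracts → 120 request bodies, as in the example above.
const docs = Array.from({ length: 12000 }, (_, i) => ({ id: `doc-${i}`, content: '...' }));
console.log(toBatches(docs).length); // 120
```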
Your users' queries hit /v2/search. Results in <3ms. The flip graph walks 178 tetrahedra at 1B items — not a linear scan. Exact results, zero misses.
POST /v2/search
{ "query": "penalty clause for delayed shipment", "maxResults": 10 }
→ 10 ranked results in 2.3ms
New content joins the flip graph and auto-folds into topic clusters (L3). Over time, the knowledge space crystallises (L4) — more data = better results for EVERY query. This is the compounding moat your competitors can't replicate.
Four endpoints. One API key. Full schema below.
// Encode any content — 64 bytes out, deterministic, no GPU
const res = await fetch('https://api.sbs/api/sbs/v3', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sbs_your_api_key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    action: 'ingest',
    text: 'Indemnity clause for late delivery under FOB terms'
  })
})
const { binary, omega, bigOmega, level, foldId } = await res.json()
// binary: "10110101" — 8-bit spectral address
// omega: [0.42, 0.87, 0.15, 0.63] — Kähler moduli (IS/NOT twist)
// bigOmega: 0.034 — volume form (crystallisation)
// level: 4 — deepest encoding achieved
// foldId: "shipping-law" — auto-detected topic cluster

| Field | Type | Description |
|---|---|---|
| binary | string | 8-bit spectral address |
| omega | number[4] | Spectral moduli — the encoding's position in 4D space |
| bigOmega | number | Volume form — 0 = uncrystallised, >0 = knowledge crystallised |
| level | number (0–4) | Deepest encoding level achieved (L0 surface → L4 portfolio) |
| foldId | string | Auto-detected topic cluster this item belongs to |
| similarity | number | Search only — spectral similarity score (0–1) |
| flipCount | number | Search only — tetrahedra traversed (not linear scans) |
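Because the raw encoding comes back to you, cheap client-side pre-filtering is possible before touching the API. A sketch using Hamming distance on the 8-bit `binary` address — illustrative only; the spectral `similarity` score from `/v2/search` is the real ranking:

```javascript
// Hamming distance between two 8-bit spectral addresses ("10110101"-style strings).
function hamming(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) d++;
  return d;
}

// Keep only locally stored items within maxDist bit-flips of a query's address.
const nearby = (items, queryBinary, maxDist = 2) =>
  items.filter((it) => hamming(it.binary, queryBinary) <= maxDist);

console.log(hamming('10110101', '10110001')); // 1
```

Useful for coarse pre-filtering in the browser or at the edge; anything that survives the filter goes to `/v2/search` for exact ranking.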
Authentication
Bearer token in Authorization header. Key provisioned on payment.
Rate Limits
Encode: 1K/sec. Search: 10K/sec. Batch: 100 items/req, 100 req/sec.
Namespaces
Content isolated per API key. Multi-tenant by default. One key per environment.
SDKs
Node.js, Python, Go — available post-checkout. Or raw HTTP: four endpoints, that's it.
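A client-side guard against the published rate limits can be sketched as a token bucket. Illustrative only — the server's limits are authoritative, and the clock is injectable purely so the behaviour is testable:

```javascript
// Minimal client-side token bucket (illustrative; server-side limits win).
class TokenBucket {
  constructor(ratePerSec, now = () => Date.now()) {
    this.rate = ratePerSec;
    this.capacity = ratePerSec;
    this.tokens = ratePerSec;
    this.now = now;
    this.last = now();
  }
  tryTake(n = 1) {
    const t = this.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(this.capacity, this.tokens + ((t - this.last) / 1000) * this.rate);
    this.last = t;
    if (this.tokens < n) return false;
    this.tokens -= n;
    return true;
  }
}

const encodeLimiter = new TokenBucket(1000); // encode limit: 1K/sec
```

Gate each `/v2/batch` or encode call on `encodeLimiter.tryTake()` and queue anything that returns `false`.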
Buy credits. Use them when you want. Stop when you want. Your vault stays forever. Credits never expire. The more you use it, the faster it gets.
No free tier. No sales calls. Same door for everyone.
Crystallise knowledge
Includes L0-L4, OWL decomposition, IS/NOT ledger, fold assignment, axiom freezing, persistence forever. Gets faster as axioms accumulate.
Find what you need
All search types: multi-level, flip search O(N^¼), compare, disambiguate. Gets faster as vault grows — sublinear algorithms + cached topology.
Discover what you didn't know to look for
Resonance check against entire vault. Cross-fold L4 mining. Gets smarter as folds crystallise. Price rises with vault value — discovery in a 100K vault is worth more.
Not a subscription. A deposit. Use them whenever.
Solo developer
200 encodes → £10
800 searches → Free
300 surfaces → £9
~£19/mo
Startup (10 devs)
15K encodes → £225
50K searches → £250
10K surfaces → £150
~£625/mo
Enterprise
700K encodes → £7,000
5M searches → £75,000
200K surfaces → £20,000
~£102K/mo
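The implied per-operation prices follow directly from each worked example (totals as quoted above; 1p = £0.01):

```javascript
// Implied unit price in pence, from a quoted monthly total and operation count.
const pence = (totalGBP, ops) => (totalGBP * 100) / ops;

console.log(pence(10, 200));       // 5p per encode (solo)
console.log(pence(9, 300));        // 3p per surface (solo)
console.log(pence(225, 15000));    // 1.5p per encode (startup)
console.log(pence(20000, 200000)); // 10p per surface (enterprise)
```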
96×
smaller than embeddings
<200ms
encode latency
64
bytes per signature
67/67
E2E tests passing
No onboarding calls. No integration meetings. The API key works the moment payment clears.
Stripe checkout. Pick a credit package. Your API key is provisioned automatically — no human in the loop. Credits never expire.
POST your content. Credits deducted per operation. Store encodings in YOUR database — Supabase, Postgres, anything. You own the data.
Build search, matching, discovery into your product. Your users see better results. You keep the margin. The flywheel starts.
Credits never expire. Your vault never deletes. Stop using = vault goes read-only (L0 search stays free). Resume = everything's there. No subscriptions. No lock-in. The maths works or it doesn't — we don't need to convince you.
Every API.SBS key is backed by BioCaptcha topology verification. Your vascular fingerprint — not a password — proves you're human. Zero PII stored. GDPR-proof by architecture.