
The Speed Paradox

Why we chose Bun and Elysia to power our real-time trading infrastructure, processing millions of box calculations and signal detections per day with sub-100ms latency.

The RTHMN Architecture

Two specialized Rust microservices handle the heavy computation: box calculation and signal detection. The Elysia server acts as the central orchestrator: receiving processed data, managing user sessions, and broadcasting to all connected clients.

[Architecture diagram: two external Rust services, a Box Server and a Signal Server, stream data over internal gRPC (binary) into the Bun + Elysia core. The Elysia server (Bun + TypeScript) manages user config (auth & settings) and a WebSocket hub that broadcasts to clients: web, mobile, and API.]
01

The Iteration Tax

In the world of algorithmic trading, there is an unspoken rule: performant code is written in C++ or Rust, and anything else is considered a toy. The logic is sound: garbage collection pauses and interpreter overhead introduce unpredictable latency spikes that can kill an arbitrage strategy.

But this view ignores a critical metric: Developer Velocity.

Writing a new strategy in C++ involves header files, complex build systems, memory management, and long compilation times. If you have a hypothesis about market movement, testing it might take hours. In a volatile market, by the time you've compiled your bot, the opportunity is gone.

We found ourselves in a bind. We needed the raw I/O throughput of a systems language, but we needed to iterate on strategies with the speed of a scripting language. We needed a paradox.

Our system, RTHMN, has three core jobs: calculate "boxes" (quantized price movements) from tick data, detect fractal patterns across millions of paths, and broadcast everything to connected clients in real-time. The first two are pure computation. The third is pure I/O. This distinction matters.

We split the work: two Rust microservices handle the heavy math (box calculation and signal detection), while a central Elysia server receives their output via gRPC, manages user sessions, and broadcasts to WebSocket clients. Each component does what it's best at.
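The split can be sketched with the message shapes that cross the boundary. This is a minimal sketch; the field names and the `enrich` helper are illustrative, not our actual protobuf schema:

```typescript
// Hypothetical shapes for the Rust -> Elysia -> client flow.
interface BoxUpdate {
  symbol: string;
  high: number;
  low: number;
  value: number;     // quantized box value
  timestamp: number; // unix ms
}

interface EnrichedUpdate extends BoxUpdate {
  userId: string;
  subscribed: boolean;
}

// The Elysia server's job in one function: take a raw update from the
// Rust services and attach user context before broadcasting it.
function enrich(
  update: BoxUpdate,
  userId: string,
  subscriptions: Set<string>
): EnrichedUpdate {
  return { ...update, userId, subscribed: subscriptions.has(update.symbol) };
}
```

The server never computes a box; it only decorates and routes what the Rust services produce.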

02

Why Bun Changes Everything

Bun isn't just another Node.js wrapper. It's a complete reimagining of the JavaScript runtime, written from scratch in Zig, a language designed for systems programming with explicit control over memory allocation.

Unlike Node.js, which uses V8 (Chrome's engine), Bun uses JavaScriptCore, the engine that powers Safari. JSC is optimized for instant startup and fast execution, which matters when you're processing thousands of WebSocket messages per second.

Zero-Overhead Runtime

Architecture Decision

The most radical thing about Bun is what it doesn't have. It doesn't have a build step. It runs TypeScript natively.

This means our build pipeline is effectively zero. We push code, and it runs: no Webpack, no Babel, no tsc. The distance between "idea" and "production" is measured in seconds.
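Concretely, a file like the one below runs as-is with `bun server.ts`. The names and data are illustrative; the point is that type annotations need no separate compile step:

```typescript
// server.ts — executed directly by Bun's built-in transpiler.
type Signal = { pattern: string; confidence: number };

// Return signals ordered by confidence, highest first.
const rank = (signals: Signal[]): Signal[] =>
  [...signals].sort((a, b) => b.confidence - a.confidence);

console.log(
  rank([
    { pattern: "fractal-3", confidence: 0.72 },
    { pattern: "fractal-5", confidence: 0.91 },
  ])
);
```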

Native WebSocket Performance

Critical For Broadcasting

Bun's WebSocket implementation is written in Zig, not JavaScript. This means WebSocket message parsing and serialization happens at near-native speed.

The Elysia server maintains hundreds of concurrent WebSocket connections to clients: web browsers, mobile apps, and third-party integrations. When our Rust services emit new box data or signals, Elysia broadcasts to all subscribed clients simultaneously. Every microsecond of overhead compounds.

gRPC Client Performance

Receiving From Rust

Our Rust services stream data via gRPC: binary Protocol Buffers over HTTP/2. Bun handles these incoming streams efficiently, parsing protobuf messages and routing them to the appropriate handlers.

The Elysia server is a consumer of high-frequency data streams. It doesn't compute boxes or signals; it receives them, enriches them with user context, and forwards them. This I/O-bound workload is exactly what Bun excels at.


Built-in SQLite

User Data & Sessions

Bun ships with SQLite compiled directly into the runtime. We use this for user preferences, session tokens, and subscription state. No external database driver, no connection pooling complexity, just native file I/O.

03

Elysia's Magic Trick

A fast runtime is meaningless if your framework is slow. Traditional frameworks like Express rely on dynamic routing and middleware chains that allocate objects on every request. This creates "garbage pressure," the enemy of consistent latency.

Elysia does something brilliant: Ahead-of-Time (AOT) compilation. When your server starts, Elysia analyzes all your routes and generates optimized handler functions. By the time the first request hits, the routing table is a flat lookup: no regex matching, no middleware chain traversal.

Static Analysis

The Secret Sauce

When you define a route in Elysia, it doesn't just register a callback. It analyzes your validation schema and generates an optimized handler function for that specific route before the first request hits.

For our box calculation endpoints, this means validation of incoming price data is essentially free. The schema is compiled to a tight loop that runs at JIT-compiled speed.

Type Inference

Developer Experience

Because Elysia understands your schema, it infers your TypeScript types automatically. We don't write separate type definitions. The code is the documentation, and the documentation is the validation.

When we add a new signal type or modify our box structure, the types flow through the entire codebase automatically. Refactoring is safe. Mistakes are caught at compile time.

Plugin Architecture

Composable By Design

Elysia's plugin system allows us to compose functionality without the overhead of traditional middleware. Our authentication, rate limiting, and WebSocket handlers are all plugins that get compiled into the main application at startup.

This means adding cross-cutting concerns doesn't add per-request overhead. The plugin code is inlined into the generated handlers.

04

Unconventional By Design

Let's be clear: using JavaScript for trading infrastructure is unusual. The industry standard is C++ for ultra-low-latency HFT or Python for quantitative research with execution in compiled languages.

We're not doing HFT. We're not competing on microsecond arbitrage. RTHMN is a pattern-detection system that identifies multi-timeframe fractal structures in price data. Our edge isn't in being ten nanoseconds faster; it's in detecting patterns that other systems miss.

For this use case, the bottleneck isn't execution speed; it's iteration speed: the ability to test a new pattern hypothesis, deploy it to production, and validate it against live data in minutes rather than hours.

Bun and Elysia give us the performance floor we need (sub-100ms p99 latency) while maximizing our ability to experiment. We can push dozens of strategy updates per day. Traditional stacks would limit us to weekly releases.

There's also a talent consideration. Our team writes TypeScript. Our frontend is TypeScript. Our mobile app is TypeScript. Having the server in the same language means any engineer can contribute to any part of the stack. No context switching, no specialized knowledge silos.

05

The Real Numbers


We've been running this stack in production for months. Here's what we've measured:

P99 Latency: ~45ms (end-to-end)
Messages/sec: ~12K (sustained)
Memory Usage: ~180MB (baseline)
Cold Start: ~25ms (full boot)

For comparison, our previous Node.js + Express implementation had P99 latency around 120ms and used ~512MB of memory. The Bun + Elysia stack is nearly 3x faster and almost 3x lighter.

But the real win isn't in the benchmarks; it's in deployment velocity. We went from weekly releases to continuous deployment. Every commit to main is live within minutes. That's the paradox resolved: we got both speed and velocity.

The Verdict

We are processing millions of box calculations and signal events per day on this stack. Our P99 latency is indistinguishable from our previous Go experiments, but our feature delivery speed has tripled.

Bun and Elysia aren't just "fast enough"; they're fast enough that we stopped thinking about performance and started thinking about product. That's the real win.

Sometimes the best engineering decision isn't the one that benchmarks the absolute fastest in a vacuum. It's the one that lets you move fast enough to win.
