Rust Development


Overview

Rust is the language we reach for when the requirements are demanding in ways that other languages handle imperfectly: when performance must be high and predictable, when memory usage must be controlled, when correctness must be guaranteed by the compiler rather than assumed at runtime, and when the system must operate reliably under conditions that would cause other implementations to degrade or fail.

These requirements appear across a wide range of the systems we build. The real-time trading execution engine where microsecond latency matters. The financial data processing pipeline that must handle millions of records per second without garbage collection pauses. The signal processing system that applies complex calculations to high-frequency data streams. The backend service that must handle thousands of concurrent connections with predictable resource consumption. The system integration layer where data corruption or unexpected failure has direct financial consequences.

Rust's ownership and borrowing system eliminates at compile time the entire class of memory safety errors — use-after-free, data races, null pointer dereferences, buffer overflows — that cause the majority of serious bugs in systems-level software. These errors are not caught at runtime and logged; they are prevented from existing. A Rust program that compiles has been checked by the compiler for the most common categories of serious correctness problems. This is a fundamentally different reliability proposition from languages where these errors are runtime possibilities.

The performance is genuine. Rust compiles to native machine code without a runtime or garbage collector. There are no GC pauses, no JVM warmup, no interpreter overhead. Rust programs routinely achieve performance comparable to C and C++. For the systems we build where performance is a primary requirement, Rust delivers that performance with a safety guarantee that C and C++ cannot provide.

Rust is our primary language for the systems-level and performance-critical components of the applications we build. It is where we implement the computation engines, the data processing pipelines, the real-time services, and the integration layers that determine the performance and reliability characteristics of the overall system.


Where We Use Rust

Trading execution and signal processing. The latency-sensitive core of trading systems. The execution engine that receives a trade signal and places an order in the minimum possible time. The real-time market data processor that ingests tick data, constructs bars, and feeds indicators without introducing latency. The signal processing pipeline that applies indicator calculations to high-frequency data streams and produces actionable signals. The risk calculation engine that evaluates portfolio exposure and enforces risk limits on every trade.

In trading systems, performance is a direct competitive advantage — a faster execution engine captures better fills, a lower-latency signal processor acts on information before it becomes stale, a more efficient risk calculator can enforce tighter real-time risk controls without adding unacceptable latency to the execution path. Rust's performance characteristics, combined with its predictable low-latency behaviour (no GC pauses), make it the appropriate language for these components.

Financial data processing. The processing of large volumes of financial data — tick data, bar data, factor values, fundamental data — at the throughput and with the reliability that production financial infrastructure requires. Multi-million record datasets processed for strategy backtesting. Real-time tick data normalisation and bar construction at exchange-level throughput. Factor calculation engines that compute hundreds of signals across a universe of thousands of instruments.

The throughput that Rust enables for data processing is meaningful: processing that would take minutes in Python takes seconds in Rust, enabling research iteration cycles that would otherwise be impractically slow, and enabling production data pipelines to handle data volumes that would overwhelm implementations in slower languages.

Backend services with demanding performance requirements. Web services and API backends where request throughput, response latency, or concurrent connection capacity are primary design requirements. The real-time market data API that serves hundreds of concurrent subscribers. The order book aggregation service that processes exchange updates and serves aggregated data to multiple consumers simultaneously. The portfolio analytics service that performs complex calculations on demand for many concurrent users.

Rust's async/await model with the Tokio async runtime provides the concurrency model for high-throughput async services — handling thousands of concurrent connections on modest hardware with predictable memory consumption and without the overhead of a thread-per-connection model.

System integrations with correctness requirements. Integration layers where data transformation errors or message handling failures have direct business consequences. The ERP integration that must transform and route financial records without corruption. The HL7 message processor where incorrect handling of patient data is a compliance failure. The financial reconciliation system where data errors propagate into financial reporting.

Rust's type system makes a class of integration errors that would be runtime exceptions in other languages into compile-time errors: safe Rust has no null, so absence must be represented as an Option and handled before the value is used, and array access is bounds-checked rather than silently reading adjacent memory. The result is integration code that handles these cases correctly because the compiler requires it to, rather than because the developer remembered to.
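A minimal sketch of the Option pattern described above, using a hypothetical record type — absence is part of the type, so "forgot the null check" fails to compile rather than failing at runtime:

```rust
// Illustrative only: LedgerRecord and its fields are hypothetical names.
#[derive(Debug)]
struct LedgerRecord {
    account_id: String,
    counterparty: Option<String>, // absence is explicit in the type
}

fn counterparty_label(record: &LedgerRecord) -> String {
    // The compiler requires both cases to be handled before the value is used.
    match &record.counterparty {
        Some(name) => name.clone(),
        None => format!("{}:unmatched", record.account_id),
    }
}

fn main() {
    let rec = LedgerRecord {
        account_id: "ACC-1".to_string(),
        counterparty: None,
    };
    println!("{}", counterparty_label(&rec)); // prints "ACC-1:unmatched"
}
```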

CLI tools and operational utilities. Command-line tools, data processing utilities, and operational scripts where Rust's compilation to a single static binary with no runtime dependencies is a significant deployment advantage. The data extraction tool that runs as a scheduled job, the migration utility that processes large datasets, the operational tool that performs administrative operations — all distributed as single executables without requiring a runtime environment on the target system.


What Rust Development Covers

Async programming with Tokio. The async runtime that powers Rust's concurrent service development — the Tokio executor that runs async tasks, the async I/O primitives that allow thousands of concurrent network operations without blocking threads, and the channels, mutexes, and other synchronisation primitives that coordinate concurrent tasks.

Tokio-based service architecture: async HTTP handlers with Axum, async database queries with sqlx, async Redis operations with the redis crate, async TCP and WebSocket connections. The select! macro for handling multiple concurrent async operations. Task spawning for background work that runs concurrently with request handling.

The correct use of async in Rust — understanding when async is appropriate (I/O-bound operations where waiting is the bottleneck) and when synchronous or CPU-bound computation should run in a blocking thread pool rather than on the async executor (to avoid blocking the executor and degrading throughput for other tasks). The spawn_blocking pattern that correctly offloads CPU-intensive work from the async executor.

Web services with Axum. Axum as the primary web framework for Rust HTTP services — the type-safe routing, the extractors that parse request data into typed Rust values, and the middleware layers that handle authentication, logging, rate limiting, and other cross-cutting concerns. The tower service ecosystem that Axum builds on, providing composable middleware components.

Handler functions that take typed extractors and return typed responses — the function signature that the compiler checks for correctness. State management with Arc<AppState> for shared application state across handlers. The error handling pattern that converts domain errors into appropriate HTTP responses.

Data serialisation with serde. serde is the foundation of Rust data serialisation — the derive macros that generate serialisation and deserialisation code for custom types, a zero-overhead abstraction that produces code as efficient as hand-written serialisation. JSON with serde_json for API request and response handling. Binary formats — MessagePack, CBOR, Bincode — for performance-critical internal data exchange. CSV parsing and generation with the csv crate's serde integration.

The serde attribute annotations that customise serialisation behaviour — field renaming for camelCase API conventions, optional field handling, flatten and skip attributes, and the custom serialisers for types that need non-standard serialisation.

Database access with sqlx. sqlx for async SQL database access with compile-time query checking — the macro that sends SQL queries to the database at compile time and verifies that the query is valid SQL and that the result columns can be mapped to the expected Rust types. The compile-time safety that prevents the SQL syntax errors and type mismatches that only surface at runtime with conventional database libraries and ORMs.

Parameterised queries with the query! and query_as! macros. Connection pool management with sqlx's built-in pool. Transaction management with the begin/commit/rollback pattern. Migration management with sqlx-cli.

High-performance numerical computation. Rust for the numerical kernels that strategy research and production trading systems depend on — the performance-critical inner loops of factor calculation, backtesting simulation, and risk computation.

The SIMD intrinsics available through the std::arch module for the computations where manual vectorisation is required for maximum throughput. Rayon for data-parallel computation — the parallel iterator that distributes computation across all available CPU cores with minimal boilerplate. The nalgebra and ndarray crates for linear algebra and multi-dimensional array operations.

Python interoperability through PyO3 — the Rust extension module that Python code can import and call. The pattern that keeps the research and orchestration code in Python while implementing the performance-critical computation in Rust, combining Python's analytical ecosystem with Rust's computational performance.

WebSocket and real-time communication. The tokio-tungstenite crate for WebSocket client and server implementation — the async WebSocket connection that receives real-time market data feeds, the WebSocket server that pushes live updates to browser clients, the persistent connection that maintains low-latency communication between services.

WebSocket connection management: reconnection logic that handles dropped connections without missing events, backpressure management that prevents unbounded message queues, and heartbeat detection that identifies stale connections.
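The schedule behind that reconnection logic can be sketched with a small, standard-library-only helper — exponential delay with a cap; the constants are illustrative:

```rust
use std::time::Duration;

// Illustrative constants: quarter-second base, thirty-second ceiling.
const BASE_MS: u64 = 250;
const MAX_MS: u64 = 30_000;

/// Delay before the `attempt`-th reconnect (attempt starts at 0),
/// doubling each time and saturating at the cap.
fn backoff(attempt: u32) -> Duration {
    let ms = BASE_MS.saturating_mul(2u64.saturating_pow(attempt));
    Duration::from_millis(ms.min(MAX_MS))
}

fn main() {
    for attempt in 0..8 {
        println!("attempt {attempt}: wait {:?}", backoff(attempt));
    }
    // In the connection loop: sleep for backoff(n) before reconnecting, and
    // reset n to 0 once the socket has been healthy for some interval.
}
```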

Error handling. Rust's error handling model — the Result<T, E> type that makes error handling explicit, the ? operator that propagates errors with minimal boilerplate, and the thiserror and anyhow crates that provide ergonomic error type definition and error context.

Domain-specific error types with thiserror: the typed error enums that represent the specific failure modes of each module, with Display and Error implementations derived by the crate. Error context with anyhow: the error wrapper that adds contextual information to errors as they propagate up the call stack. The conversion from domain errors to HTTP responses in web service handlers.

Testing. Rust's built-in test framework — the #[test] attribute, the assertion macros, the integration test infrastructure in the tests/ directory. Property-based testing with proptest for testing invariants across a wide range of generated inputs. Mock objects with mockall for testing components in isolation from their dependencies. tokio-test for testing async code.


Rust in the Architecture

Rust rarely operates alone. In the systems we build, Rust is the performance and reliability layer that works alongside C# business logic, Python research code, and TypeScript frontends.

Rust as the computation engine. The Rust library that Python calls via PyO3 for the performance-critical calculations in the research stack — the backtesting simulation, the factor calculation, the statistical analysis that Python's NumPy layer is too slow for at production data volumes. Python remains the research interface; Rust provides the computation throughput.

Rust as the real-time backend, C# as the integration layer. The Rust service that handles the real-time data processing, the high-frequency event stream, or the low-latency execution path — integrated with a C# service that handles the complex business logic, the ERP integrations, and the document generation that leverage .NET's ecosystem strengths.

Rust as the API service. The Axum-based API service that fronts a high-concurrency data access requirement — serving many concurrent clients from a Rust backend that handles the concurrency efficiently, alongside a Next.js frontend and a C# integration service.


Technologies Used

  • Rust (latest stable) — language and toolchain
  • Tokio — async runtime for concurrent services
  • Axum — web framework for HTTP services
  • sqlx — async SQL database access with compile-time query verification
  • serde / serde_json — data serialisation and deserialisation
  • tokio-tungstenite — WebSocket client and server
  • reqwest — async HTTP client
  • thiserror / anyhow — error handling
  • Rayon — data-parallel computation
  • nalgebra / ndarray — linear algebra and multi-dimensional arrays
  • PyO3 — Python interoperability for Rust extensions called from Python
  • redis-rs — async Redis client
  • Prometheus / metrics — application metrics instrumentation
  • tracing / tracing-subscriber — structured logging and distributed tracing
  • cargo — package management and build system
  • Docker — containerised deployment

The Compiler as a Collaborator

The thing that distinguishes working in Rust from working in most other languages is the relationship with the compiler. The Rust compiler is strict — it refuses to compile code with memory safety issues, data races, or null pointer dereferences. This strictness requires more time upfront: writing code that satisfies the borrow checker, thinking carefully about ownership, being explicit about lifetime boundaries. The payoff is software that arrives at correctness faster, fails less in production, and behaves predictably under conditions that cause other implementations to corrupt memory, crash, or produce wrong answers.

For the systems where these properties matter — the execution engine that cannot be allowed to corrupt a position record, the data pipeline that cannot silently produce incorrect factor values, the service that cannot crash under memory pressure — Rust's compiler strictness is not a cost. It is the point.


Performance and Correctness Where Both Are Required

We use Rust for the components where performance and correctness are both hard requirements that other languages handle imperfectly — the real-time systems, the high-throughput pipelines, the integration layers where data correctness has direct business consequences, and the services that must handle demanding concurrency requirements reliably.