Hey everyone, Tim Baker here, CTO at Hoops Finance. I've spent the last couple of decades as a network engineer and software dev, working everywhere from healthcare to Web3 startups building globally distributed supercomputers. Throughout that time, I've seen one constant: building reliable, high-performance financial infrastructure is hard, and getting clean, trustworthy data is even harder. That’s why we’re building Hoops. I'll be using these devlogs to pull back the curtain on some of our engineering challenges and share the journey as we build out our platform.
The Challenge
A few weeks ago, I was looking at our backend services, and honestly, I wasn't impressed. We had a handful of different workers all trying to pull market data, each with its own slightly different logic for pricing, pairs, and history. It was a classic case of logic drift, and it was getting fragile. Our arbitrage scanner was also... optimistic, let's say. It was great at finding theoretical paths that you couldn't actually execute in the wild due to liquidity constraints.
It was time for a full overhaul. I decided to document the journey of building a single, canonical system for all things market-related on Stellar.
Step 1: Taming the Chaos → A Single Source of Truth
The first problem was duplication. The same logic for naming a market pair or pricing an asset against USDC was scattered across multiple files. This is how bugs happen.
So, the first step was to centralize everything. We created two core files:
- `src/utils/types.ts`: A single home for all our shared interfaces. No more guessing what a `Market` object should look like.
- `src/utils/shared_market_functions.ts`: This became the heart of the system. Around 20 pure functions that handle everything: discovering pairs, converting prices to USD, calculating 24h volumes, and fetching both AMM and orderbook liquidity.

*Markets, all unified in one place with our new data API*
Now, whether a market update comes from a real-time trigger or a historical backfill, it runs through the exact same code. No more drift. We also standardized our precision rules: all prices are handled with 7 decimal places (using Postgres `NUMERIC` and careful TS typing), and canonical pairs are always ordered lexicographically (`ASSET1-ASSET2`). Simple rules, but they prevent a world of pain.
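To make those rules concrete, here's a minimal sketch of both conventions. The names (`canonicalPair`, `toPrice7`) are illustrative, not our actual exports:

```typescript
const PRICE_DECIMALS = 7;

// Rule 1: a canonical pair is always ordered lexicographically, so
// "USDC-XLM" and "XLM-USDC" can never both exist in the database.
function canonicalPair(assetA: string, assetB: string): string {
  return assetA < assetB ? `${assetA}-${assetB}` : `${assetB}-${assetA}`;
}

// Rule 2: every price carries exactly 7 decimal places. We keep prices
// as strings at the boundaries so nothing silently drifts through
// IEEE-754 floats on the way to a Postgres NUMERIC column.
function toPrice7(value: number | string): string {
  return Number(value).toFixed(PRICE_DECIMALS);
}

canonicalPair("XLM", "USDC"); // "USDC-XLM"
toPrice7("0.12345678901");    // "0.1234568"
```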
Step 2: Rebuilding History Without Breaking the Bank
Our old historical market data was inconsistent. The new system needed to be precise and scalable, so we landed on a hybrid approach for the `history_markets` table.
Here’s how it works:

- Real-time state: The main `markets` table is kept constantly up-to-date by database triggers firing on every new trade or offer change. It’s always fresh.
- Historical snapshots: The `history_markets` table appends data in two ways:
  - Hourly worker: A simple background worker (`markets_worker.ts`) runs every hour and snapshots any market that changed in that period. This is efficient and great for general time-series analysis.
  - Ledger-pinned: For perfect, high-fidelity reconstructions, we can call `snapshotMarketsHistory(ledger)` to capture the exact state of all markets for a given ledger sequence.
This gives us the best of both worlds: efficient, periodic history for charting and the ability to get surgically precise, on-ledger state when we need it for things like backtesting.
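As a rough sketch of how the two snapshot paths might look, here's the idea in TypeScript with `pg`. The table and column names (`updated_at`, `ledger_seq`, and so on) are assumptions for illustration, not our actual schema:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

// Hourly path: copy every market row that changed since the last run
// into history_markets, stamped with the snapshot time.
async function snapshotChangedMarkets(since: Date): Promise<number> {
  const result = await pool.query(
    `INSERT INTO history_markets (pair, price, volume_24h, snapshot_at)
     SELECT pair, price, volume_24h, now()
     FROM markets
     WHERE updated_at >= $1`,
    [since]
  );
  return result.rowCount ?? 0; // how many markets actually changed
}

// Ledger-pinned path: snapshot every market and stamp the exact ledger
// sequence, so the full state can be reconstructed for that ledger.
async function snapshotMarketsHistory(ledger: number): Promise<void> {
  await pool.query(
    `INSERT INTO history_markets (pair, price, volume_24h, ledger_seq, snapshot_at)
     SELECT pair, price, volume_24h, $1, now()
     FROM markets`,
    [ledger]
  );
}
```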
Step 3: The Pre-Migration Paranoia Check
Migrating a core system like this is terrifying. A single bad row or a missing index can cause chaos. So, I built a pre-flight check I called the "paranoia script": `test_migration.ts`.
Before we even thought about running `migrate:up`, we ran this script. It performs a full dry-run of the entire ingestion process in a read-only transaction (I'll sketch the pattern below).
Here’s what it does:
- It builds the TypeScript code.
- It simulates ingesting all trades and offers using the new shared functions.
- It validates the shape of every single generated row against our TS interfaces.
- It checks that the target database already has the correct schema, indexes, and triggers in place.
- It spits out a report.
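The core trick is simple: do all the writes inside a transaction and roll it back unconditionally at the end. Here's a stripped-down sketch of that pattern; `ingestAllMarkets` and `validateRow` are trivial stand-ins for the real shared functions:

```typescript
import { Pool, PoolClient } from "pg";

// Stand-ins for the real shared helpers (stubbed for this sketch).
async function ingestAllMarkets(_client: PoolClient): Promise<object[]> {
  return []; // the real function replays every trade and offer
}
function validateRow(_row: object): void {
  // the real function checks the row's shape against our TS interfaces
}

// Run the whole ingestion inside a transaction that is always rolled
// back, so the rehearsal can never persist anything.
async function dryRunMigration(pool: Pool): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const rows = await ingestAllMarkets(client);
    rows.forEach(validateRow); // throws (and aborts the run) on any bad row
    console.log(`✅ Ingested ${rows.length} market rows, all valid`);
  } finally {
    await client.query("ROLLBACK"); // discard everything, every time
    client.release();
  }
}
```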
Running it gave us the confidence to proceed. Here’s what the output looked like:
```
✅ Ingested ~150 market rows
✅ All rows pass TS validations
✅ History snapshots processed (1.2k+)
┌─ liquidity classification ─┐
│ non-zero: 85 | null: 62    │
└────────────────────────────┘
Top 10 pairs by 24h USD …
```
Seeing this, we knew we were good to go.
Step 4: Making the Arbitrage Engine… Actually Feasible
This was the fun part. The old engine was a bit naive. The new one is a two-stage pipeline of parallel workers built for one thing: finding profitable cycles that are actually executable.
The Journey:

- Adjacency build: The first worker, `adj_build_worker.ts`, constantly builds an adjacency map of all possible trade routes. It pulls from both AMMs and orderbooks, but it's smart about it: we use skyline pruning to discard any quote that another quote beats on both price and available capacity. There's no point exploring a path that offers a slightly better price but can only handle $5 of volume.
- Cycle scan: The second worker, `arb_scan_worker.ts`, takes this pruned map and performs a depth-bounded DFS to find profitable cycles.
- Flow clipping (the key insight): This is what makes the engine work. Instead of just adding up prices, it calculates the maximum feasible amount that can be pushed through an entire multi-hop path, given the liquidity constraints of each edge. Think of it like water flowing through pipes: the total flow is limited by the narrowest pipe in the series. This clips the potential trade size to a realistic number (see the sketch after this list).
The result? The engine now surfaces arbitrage opportunities with a `feasible_profit_usd` that accounts for slippage and capacity. It's the difference between a theoretical fantasy and a real-world edge.
Final Thoughts
This was a massive overhaul, but it established a foundation we can build on for years. The key takeaway for me was the power of unifying logic and then building specialized, efficient workers on top of that shared core. The old system was a tangled web; the new one is a clean, reliable assembly line.

Start building with Hoops Finance’s API
Because our repos are private, you can’t see the code itself. But you can see the direct result of all this work. The accuracy, speed, and depth of the data we’re now serving has taken a quantum leap forward.
For real-time and historical market data, check out our API at api.hoops.finance.
For details on the new schemas and endpoints, our docs are live at docs.hoops.finance.
Hope this deep dive was helpful. Happy coding!
Tim