Quote Service Rewrite: Clean Architecture for Long-Term Maintainability

πŸŽ„ Merry Christmas and Happy New Year! πŸŽ„

On this Christmas Day 2025, I’m taking a moment to reflect on the journey of building this Solana HFT trading system. As we celebrate with family and friends, I’m also planning the next major evolution of our architecture.

Wishing everyone a Merry Christmas and a prosperous Happy New Year! May 2026 bring successful trades, robust systems, and minimal bugs! πŸŽ‰

Today’s post is a bit differentβ€”instead of implementation details, I’m sharing the architectural rewrite plan for our quote-service. It’s a story of technical debt, lessons learned, and the path to sustainable architecture.


TL;DR

Planning a comprehensive rewrite of quote-service with clean architecture principles AND HFT integration:

  1. 70% Code Reduction: 50K lines β†’ 15K lines through proper separation of concerns
  2. Sub-10ms Cached Quotes: < 10ms HFT-critical latency (vs current 200ms uncached)
  3. 4x Better Test Coverage: 20% β†’ 80%+ with dependency injection and interfaces
  4. Dramatically Better Maintainability: Internal packages, clean architecture, single responsibility
  5. Service Separation: 3 services (quote, pool discovery, RPC proxy) vs 1 monolith
  6. Technology Decision: Go for speed (2-3 weeks), Rust RPC proxy for shared infrastructure
  7. HFT Pipeline Integration: Shredstream cache (300-800ms head start), FlatBuffers events (20-150x faster), NATS MARKET_DATA stream

The Core Insight: The current quote-service works, but it’s unmaintainable and not HFT-ready. We need to rebuild the foundation now before technical debt makes future changes impossible, AND we need to integrate with the HFT pipeline for sub-200ms end-to-end execution.


Table of Contents

  1. The Problem: Why Rewrite a Working System?
  2. Current Architecture: Design Flaws
  3. New Architecture: Clean Separation
  4. Go vs Rust Decision
  5. HTTP + gRPC: Combined vs Split
  6. HFT Integration Requirements ← NEW
  7. Clean Architecture Benefits
  8. Technology Stack Decisions
  9. Expected Improvements
  10. Conclusion: Building for the Future

The Problem: Why Rewrite a Working System?

It Works, But…

The current quote-service is feature-complete and functional:

  • βœ… Serves quotes via HTTP and gRPC
  • βœ… Supports 6 DEX protocols (Raydium, Meteora, Orca, Pump.fun, and others)
  • βœ… Real-time WebSocket updates
  • βœ… 99.99% availability with RPC pool
  • βœ… Redis crash recovery
  • βœ… Full observability (Grafana LGTM stack: Loki, Grafana, Tempo, Mimir)

So why rewrite?

Because β€œworks” is not enough for long-term success. The system has critical architectural flaws that make it:

  1. Difficult to maintain - 96KB cache.go file with 50+ methods
  2. Hard to test - Tightly coupled components, 20% test coverage
  3. Slow to extend - Adding features requires touching multiple files
  4. Risky to deploy - No confidence in changes due to poor testing
  5. Impossible to reason about - Mixed concerns everywhere

The Technical Debt Reality

Current Codebase Health:
β”œβ”€β”€ Lines of Code: 50,000+ (monolithic)
β”œβ”€β”€ Test Coverage: ~20% (hard to test)
β”œβ”€β”€ Files in cmd/: 20+ files (violates Go standards)
β”œβ”€β”€ Largest File: 96KB cache.go (unmaintainable)
└── Architectural Pattern: Big Ball of Mud ❌

This is a ticking time bomb. Every feature we add makes it worse. Every bug fix becomes harder. Eventually, we’ll reach a point where the system is too complex to understand and too risky to change.

The time to fix this is NOW, while we still can.


Current Architecture: Design Flaws

Flaw #1: Monolithic cache.go (96KB, 50+ methods)

The Problem:

// cache.go mixes EVERYTHING in one file:
type QuoteCache struct {
    router            *pkg.SimpleRouter      // Pool routing
    solClient         *sol.Client            // RPC client ❌
    wsPool            *subscription.WSPool   // WebSocket ❌
    oraclePriceFetcher *oracle.PriceFetcher  // Oracle
    cache             map[string]*CachedQuote // Actual cache
    poolLiquidity     map[string]float64     // Pool state ❌
    // ... 20 more fields
}

// 50+ methods that do everything:
func (c *QuoteCache) UpdateQuote()          // Quote refresh
func (c *QuoteCache) DiscoverPools()        // Pool discovery ❌
func (c *QuoteCache) ManageRPCPool()        // RPC management ❌
func (c *QuoteCache) HandleWebSocket()      // WebSocket ❌
// ... 46 more methods

Why This Is Bad:

  • Violates Single Responsibility Principle - Does 5 different things
  • Impossible to test in isolation - Too many dependencies
  • Cannot reason about code - 96KB file is too large to hold in your head
  • Changes have unpredictable side effects - Everything is interconnected

What Should Happen:

  • QuoteCache should ONLY cache quotes (1 responsibility)
  • Pool discovery β†’ Separate service
  • RPC management β†’ Rust RPC Proxy
  • WebSocket β†’ Pool discovery service

Flaw #2: RPC Logic Embedded in Service

The Problem:

pkg/sol/rpc_pool.go (1200+ lines)
β”œβ”€β”€ RPC pool management
β”œβ”€β”€ Health monitoring
β”œβ”€β”€ Rate limiting
β”œβ”€β”€ Failover logic
└── Cannot be reused by other services ❌

Why This Is Bad:

  • Code duplication - Scanner needs RPC pool, must copy-paste
  • Inconsistent behavior - Each service implements RPC differently
  • Wasted effort - Solving the same problem multiple times
  • Bugs multiply - Fix a bug in quote-service, scanner still broken

What Should Happen:

  • RPC pool management β†’ extracted into the shared Rust RPC Proxy
  • All services (quote, scanner, executor) call the proxy instead of maintaining their own pools
  • One implementation to maintain, one place to fix bugs

Flaw #3: Pool Discovery During Quote Serving

The Problem:

Every 30 seconds:
1. UpdateQuote() triggered
2. For each pair:
   β”œβ”€ QueryAllPools() ← Makes RPC calls! ❌
   β”œβ”€ Fetch pool state from blockchain (200ms)
   β”œβ”€ Calculate quote
   └─ Cache result

PROBLEM: Discovery blocks quote serving!

Why This Is Bad:

  • Slow - Discovery takes 200ms, blocks quote serving
  • Unreliable - RPC failures cause quote serving to fail
  • Wasteful - Discovering same pools every 30s
  • Tight coupling - Quote logic mixed with discovery logic

What Should Happen:

  • Separate pool-discovery-service (runs every 5 minutes)
  • Writes discovered pools to Redis
  • Quote-service just reads from Redis (0.5ms)
  • No blocking, no coupling
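
As a sketch of that read path: assuming pool-discovery writes JSON blobs under per-pair keys like pools:<base>:<quote> (the key layout and PoolMeta fields here are my placeholders, not the final schema), the quote-service side could look roughly like this:

// internal/quote-service/repository/pool_repository.go (sketch)
package repository

import (
    "context"
    "encoding/json"
    "fmt"

    "github.com/redis/go-redis/v9"
)

// PoolMeta is an illustrative shape for metadata written by pool-discovery.
type PoolMeta struct {
    Address   string  `json:"address"`
    Protocol  string  `json:"protocol"`
    BaseMint  string  `json:"base_mint"`
    QuoteMint string  `json:"quote_mint"`
    TVL       float64 `json:"tvl"`
}

type PoolRepository struct {
    rdb *redis.Client
}

// GetPoolsByPair reads pre-discovered pools from Redis. No RPC calls,
// so quote serving never blocks on discovery.
func (r *PoolRepository) GetPoolsByPair(ctx context.Context, base, quote string) ([]PoolMeta, error) {
    key := fmt.Sprintf("pools:%s:%s", base, quote) // assumed key layout
    raw, err := r.rdb.Get(ctx, key).Result()
    if err != nil {
        return nil, fmt.Errorf("read pool metadata: %w", err)
    }

    var pools []PoolMeta
    if err := json.Unmarshal([]byte(raw), &pools); err != nil {
        return nil, fmt.Errorf("decode pool metadata: %w", err)
    }
    return pools, nil
}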

Flaw #4: No Internal Packages

The Problem:

Current (WRONG):
go/cmd/quote-service/
β”œβ”€β”€ main.go
β”œβ”€β”€ cache.go
β”œβ”€β”€ grpc_server.go
β”œβ”€β”€ handler_*.go (10 files)
└── ... all logic in cmd/ ❌

Problems:
- Violates Go project layout standards
- Cannot import logic in other services
- Difficult to test (no interfaces)
- Everything is tightly coupled

What Should Happen:

Correct Structure:
go/
β”œβ”€β”€ cmd/quote-service/
β”‚   └── main.go (ONLY DI wiring, 100 lines)
β”‚
└── internal/quote-service/
    β”œβ”€β”€ domain/       # Interfaces + models
    β”œβ”€β”€ repository/   # Data access (Redis, cache)
    β”œβ”€β”€ calculator/   # Quote calculation
    β”œβ”€β”€ service/      # Business logic
    └── api/          # HTTP + gRPC handlers

Benefits:

  • βœ… Clean separation of concerns
  • βœ… Easy to test (inject mocks via interfaces)
  • βœ… Each package has ONE responsibility
  • βœ… Follows Go best practices
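
To show what β€œmain.go is only DI wiring” means in practice, here’s a rough sketch. The module path and constructor names are assumptions about the eventual internal packages, not an existing API:

// cmd/quote-service/main.go (sketch; constructors are illustrative)
package main

import (
    "context"
    "log"
    "os"
    "os/signal"

    // internal packages (module path is a placeholder)
    "example.com/hft/internal/quote-service/api"
    "example.com/hft/internal/quote-service/calculator"
    "example.com/hft/internal/quote-service/repository"
    "example.com/hft/internal/quote-service/service"
)

func main() {
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
    defer stop()

    // Wire concrete implementations into interfaces; no business logic here.
    poolRepo := repository.NewPoolRepository(os.Getenv("REDIS_URL"))
    calc := calculator.NewPoolCalculator()
    quoteCache := repository.NewInMemoryCache()

    svc := service.NewQuoteService(poolRepo, calc, quoteCache)

    // One process serves both HTTP :8080 and gRPC :50051 (shared cache).
    if err := api.NewServer(svc).Run(ctx); err != nil {
        log.Fatal(err)
    }
}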

Flaw #5: Hard to Test

Current Test Coverage: 20% ❌

Why So Low?

// Current code (impossible to test):
func (c *QuoteCache) UpdateQuote() {
    // Hard-coded RPC client ❌
    pools := c.solClient.QueryAllPools(...)

    // Hard-coded WebSocket ❌
    c.wsPool.Subscribe(...)

    // No interfaces, cannot inject mocks ❌
}

// To test this, you need:
- Real RPC endpoint (flaky, slow)
- Real WebSocket connection (flaky, slow)
- Real Redis (integration test, not unit test)
- Full infrastructure (NATS, Prometheus, etc.)

Result: Nobody writes tests, coverage stays at 20%

What Should Happen:

// New code (easy to test):
type QuoteService struct {
    poolRepo      domain.PoolReader      // Interface! βœ…
    calculator    domain.PriceCalculator // Interface! βœ…
    cacheManager  domain.CacheManager    // Interface! βœ…
}

// To test this:
func TestQuoteService(t *testing.T) {
    // Inject mocks! No real infrastructure needed!
    mockPoolRepo := &MockPoolReader{}
    mockCalculator := &MockPriceCalculator{}
    mockCache := &MockCacheManager{}

    service := NewQuoteService(mockPoolRepo, mockCalculator, mockCache)

    // Test business logic in isolation βœ…
    quote, err := service.GetQuote(ctx, "SOL", "USDC", 1000000000)
    assert.NoError(t, err)
    assert.Equal(t, expectedOutput, quote.OutputAmount)
}

Result: 80%+ test coverage, fast unit tests βœ…

New Architecture: Clean Separation

High-Level Architecture

Before (Monolithic):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚          Quote Service (Single Monolith)          β”‚
β”‚                                                   β”‚
β”‚  β€’ Quote caching     (Good βœ…)                    β”‚
β”‚  β€’ Pool discovery    (Blocks serving ❌)          β”‚
β”‚  β€’ RPC management    (Should be shared ❌)        β”‚
β”‚  β€’ WebSocket updates (Blocks serving ❌)          β”‚
β”‚  β€’ HTTP API          (Good βœ…)                    β”‚
β”‚  β€’ gRPC streaming    (Good βœ…)                    β”‚
β”‚                                                   β”‚
β”‚  PROBLEMS:                                        β”‚
β”‚  - 50K lines, unmaintainable                      β”‚
β”‚  - Discovery blocks quote serving                 β”‚
β”‚  - RPC logic cannot be reused                     β”‚
β”‚  - Hard to test (20% coverage)                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

After (Clean Separation + HFT Integration):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚    Shredstream Scanner (Rust - 300-800ms Advance)   β”‚
β”‚  β€’ QUIC protocol for unconfirmed slot data          β”‚
β”‚  β€’ Publishes: pool.state.updated.* (NATS)           β”‚
β”‚  β€’ Provides 300-800ms head start over RPC           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     ↓ NATS pool.state.updated.*
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚      Pool Discovery Service (NEW - Independent)     β”‚
β”‚  β€’ Discovers pools every 5 minutes                  β”‚
β”‚  β€’ Writes to Redis (pool metadata)                  β”‚
β”‚  β€’ Solscan enrichment (TVL, 24h volume)             β”‚
β”‚  β€’ Pool quality filtering (liquidity, status)       β”‚
β”‚  β€’ 8K lines, single responsibility βœ…               β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     ↓ Redis (pool metadata)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Quote Service (REWRITTEN - Clean + HFT Ready)     β”‚
β”‚                                                     β”‚
β”‚  INPUTS:                                            β”‚
β”‚  β€’ Redis pool metadata (5-10ms)                     β”‚
β”‚  β€’ NATS pool.state.updated.* (Shredstream cache)    β”‚
β”‚                                                     β”‚
β”‚  CORE:                                              β”‚
β”‚  β€’ Hybrid cache: Shredstream (5ms) β†’ In-memory      β”‚
β”‚  β€’ Slot-based consistency (only update if newer)    β”‚
β”‚  β€’ Thread-safe pool cache (sync.RWMutex)            β”‚
β”‚  β€’ 15K lines, clean architecture βœ…                 β”‚
β”‚  β€’ 80%+ test coverage βœ…                            β”‚
β”‚                                                     β”‚
β”‚  OUTPUTS:                                           β”‚
β”‚  β€’ HTTP API :8080 (< 10ms quotes)                   β”‚
β”‚  β€’ gRPC streaming :50051                            β”‚
β”‚  β€’ NATS market.swap_route.* (FlatBuffers events)    β”‚
β”‚                                                     β”‚
β”‚  Internal Structure:                                β”‚
β”‚  β”œβ”€β”€ domain/      (interfaces, models)              β”‚
β”‚  β”œβ”€β”€ repository/  (Redis, cache, oracle)            β”‚
β”‚  β”œβ”€β”€ cache/       (Shredstream pool cache) ← NEW    β”‚
β”‚  β”œβ”€β”€ calculator/  (pool math, routing)              β”‚
β”‚  β”œβ”€β”€ service/     (business logic)                  β”‚
β”‚  β”œβ”€β”€ events/      (FlatBuffers publisher) ← NEW     β”‚
β”‚  β”œβ”€β”€ nats/        (NATS subscriber) ← NEW           β”‚
β”‚  └── api/         (HTTP + gRPC)                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     ↓ NATS MARKET_DATA stream
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚      Scanner Service (Stage 1: Opportunity Det.)    β”‚
β”‚  β€’ Subscribes: market.swap_route.*                  β”‚
β”‚  β€’ Detects arbitrage opportunities                  β”‚
β”‚  β€’ Publishes: opportunity.* (< 50ms)                β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     ↓ HTTP (RPC calls)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚         Rust RPC Proxy (Shared Infrastructure)      β”‚
β”‚  β€’ Centralized RPC management                       β”‚
β”‚  β€’ Used by ALL services (quote, scanner, executor)  β”‚
β”‚  β€’ Rate limiting, health monitoring                 β”‚
β”‚  β€’ Connection pooling, circuit breaker              β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

HFT Pipeline Flow (Stage 0 β†’ Stage 1):

Stage 0: Quote Service (< 10ms per quote)
    ↓ publishes: market.swap_route.* (FlatBuffers, <1ms)
Stage 1: Scanner (< 50ms detection)
    ↓ publishes: opportunity.*
Stage 2: Planner (< 50ms planning)
    ↓ publishes: execution.planned
Stage 3: Executor (< 90ms execution)
    ↓ publishes: execution.completed

TOTAL: < 200ms end-to-end (vs current 1.7s = 8.5x faster)

Key Improvements

Aspect           | Before (Monolithic)          | After (Clean)                       | Benefit
-----------------|------------------------------|-------------------------------------|---------------------------
Quote Latency    | ~200ms (discovery included)  | < 10ms (Redis lookup)               | 20x faster
Code Size        | 50K lines                    | 15K lines (quote) + 8K (discovery)  | 70% smaller quote service
Test Coverage    | 20%                          | > 80% target                        | 4x better
Maintainability  | Poor (monolithic)            | Excellent (clean architecture)      | High
RPC Reusability  | No (embedded)                | Yes (shared proxy)                  | High
Deployment Risk  | High (single service)        | Low (independent services)          | Lower

Go vs Rust Decision

Performance Analysis: Is Rust Worth It?

Go (Optimized):

Redis pool lookup:      0.5ms
Pool math calculation:  0.2ms
Price calculation:      0.1ms
Response serialization: 0.1ms
─────────────────────────────
TOTAL:                  0.9ms βœ… Excellent

Rust (Theoretical):

Redis pool lookup:      0.3ms  (faster client)
Pool math calculation:  0.1ms  (zero-cost abstractions)
Price calculation:      0.05ms (SIMD)
Response serialization: 0.05ms (serde zero-copy)
─────────────────────────────
TOTAL:                  0.5ms βœ… Better, but marginal

Verdict: 0.4ms improvement (44% faster) is NOT worth 5 extra weeks

Decision Matrix

Factor             | Go                        | Rust                   | Winner
-------------------|---------------------------|------------------------|------------------------
Development Speed  | 2-3 weeks βœ…              | 6-8 weeks ⚠️           | Go
Team Knowledge     | Proven βœ…                 | Learning curve ⚠️      | Go
Performance        | < 10ms βœ…                 | < 5ms βœ…               | Tie (both good enough)
Code Reuse         | Can reuse router/pool βœ…  | Rewrite everything ❌  | Go
Risk               | Low βœ…                    | High ⚠️                | Go

Decision: Go for Quote Service βœ…

Rationale:

  1. Solo developer - stick to known language
  2. Time to market - 2-3 weeks vs 6-8 weeks
  3. Performance - <10ms target easily met with Go
  4. Code reuse - can reuse existing pkg/router, pkg/pool
  5. Risk mitigation - proven technology, easy rollback

Hybrid Approach (Best of Both Worlds)

Use Go for:
βœ… Quote Service (fast delivery, good enough performance)
βœ… Pool Discovery (I/O bound, Go is perfect)

Use Rust for:
βœ… RPC Proxy (shared infrastructure, worth investment)
βœ… Transaction Builder (memory-critical, zero-copy)
βœ… Shredstream Parser (ultra-low latency)

Result: Fast delivery where it matters, peak performance where it counts


HTTP + gRPC: Combined vs Split

The Question

Should HTTP and gRPC be in one service or split into two separate services?

Option A: Combined (single process, shared in-memory cache):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚    Quote Service (Single Process)       β”‚
β”‚                                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚ HTTP :8080  β”‚   β”‚ gRPC :50051    β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚         β”‚                    β”‚          β”‚
β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β”‚
β”‚                  β–Ό                      β”‚
β”‚    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”        β”‚
β”‚    β”‚  In-Memory Cache         β”‚        β”‚
β”‚    β”‚  (SHARED! βœ…)            β”‚        β”‚
β”‚    β”‚  0.3ms access            β”‚        β”‚
β”‚    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Performance:

  • HTTP cached quote: 0.3ms βœ…
  • gRPC stream update: 0.15ms βœ…
  • Throughput: 10,000 req/s βœ…

Option B: Split into separate HTTP and gRPC services (cache shared via Redis):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  HTTP Service :8080      β”‚
β”‚  Uses Redis cache        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        β–Ό
   Redis (1ms overhead)
        β–²
β”Œβ”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  gRPC Service :50051     β”‚
β”‚  Uses Redis cache        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Performance:

  • HTTP cached quote: 1.2ms (4x slower ❌)
  • gRPC stream update: 1.05ms (7x slower ❌)
  • Throughput: 1,000 req/s (10x less ❌)

Performance Comparison

Scenario             | Combined      | Split (Redis)  | Difference
---------------------|---------------|----------------|------------
Cached Quote (HTTP)  | 0.3ms βœ…      | 1.2ms ⚠️       | 4x slower
gRPC Stream Update   | 0.15ms βœ…     | 1.05ms ⚠️      | 7x slower
Throughput           | 10K req/s βœ…  | 1K req/s ⚠️    | 10x less
Memory               | 300MB βœ…      | 600MB ⚠️       | 2x more
Services to Deploy   | 1 βœ…          | 2 ⚠️           | 2x ops

Decision: COMBINED βœ…

Why Combined Wins:

  1. Performance - 4-7x faster (CRITICAL for HFT)
    • In-memory cache: 0.3ms
    • Redis cache: 1.2ms
    • Redis overhead kills performance
  2. Throughput - 10x higher capacity
    • Combined: 10K req/s
    • Split: 1K req/s (Redis bottleneck)
  3. Simplicity - Solo developer
    • 1 service vs 2 services
    • 1 deployment vs 2 deployments
  4. Memory Efficiency - 50% less RAM
    • Combined: 300MB (single in-memory cache)
    • Split: 600MB (2x Redis storage)

The Insight: For HFT systems targeting sub-10ms latency, in-memory cache sharing between HTTP and gRPC is non-negotiable. The 1ms Redis overhead destroys performance gains from service separation.
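
Here’s a minimal sketch of the combined layout: one process starts both servers and hands them the same QuoteService, so they share one in-memory cache with no Redis hop in between. The handler and gRPC registration names are illustrative (the real service uses Gin for HTTP; plain net/http is used here for brevity):

import (
    "net"
    "net/http"

    "golang.org/x/sync/errgroup"
    "google.golang.org/grpc"
)

// Both servers are backed by the same svc instance, hence the same cache.
func runServers(svc *service.QuoteService) error {
    var g errgroup.Group

    g.Go(func() error { // HTTP API on :8080
        mux := http.NewServeMux()
        mux.HandleFunc("/quote", api.NewHTTPHandler(svc).GetQuote) // illustrative handler
        return http.ListenAndServe(":8080", mux)
    })

    g.Go(func() error { // gRPC streaming on :50051
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            return err
        }
        grpcSrv := grpc.NewServer()
        api.RegisterQuoteServiceServer(grpcSrv, api.NewGRPCServer(svc)) // illustrative generated registration
        return grpcSrv.Serve(lis)
    })

    return g.Wait()
}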


HFT Integration Requirements

Quote-service is Stage 0 of the HFT pipeline. These requirements are NON-NEGOTIABLE for sub-200ms end-to-end execution.

Performance Targets ⚑

CRITICAL: Quote-service must meet these latency targets to enable the full HFT pipeline.

Metric                      | Target      | HFT Requirement
----------------------------|-------------|---------------------
Cached Quote (Cache Hit)    | < 10ms      | MANDATORY
Cached Quote (Shredstream)  | < 5ms       | OPTIMAL
NATS Event Publishing       | < 1ms       | 10,000 events/sec
Pool State Update           | Slot-based  | Only if newer slot
Cache Hit Rate              | > 95%       | Minimize RPC calls

1. Shredstream Pool State Cache (300-800ms Advance)

Shredstream provides unconfirmed slot data via QUIC protocol, giving us a 300-800ms head start over RPC.

Implementation:

// internal/quote-service/cache/shredstream_cache.go

type PoolStateCache struct {
    mu     sync.RWMutex
    pools  map[string]*PoolState // key: pool address
    config CacheConfig
}

type PoolState struct {
    Address      string
    BaseMint     string
    QuoteMint    string
    BaseReserve  uint64
    QuoteReserve uint64
    Liquidity    float64
    Price        float64
    Slot         uint64       // CRITICAL: For consistency
    LastUpdated  time.Time
}

// Slot-based consistency: ONLY update if newer slot
func (c *PoolStateCache) Update(state *PoolState) {
    c.mu.Lock()
    defer c.mu.Unlock()

    existing, exists := c.pools[state.Address]
    if exists && existing.Slot >= state.Slot {
        return // Ignore stale update
    }

    state.LastUpdated = time.Now()
    c.pools[state.Address] = state
}

// Thread-safe read
func (c *PoolStateCache) Get(address string) (*PoolState, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()

    state, exists := c.pools[address]
    if !exists {
        return nil, false
    }

    // Check staleness (30s threshold)
    if time.Since(state.LastUpdated) > 30*time.Second {
        return nil, false
    }

    return state, true
}
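
The subscriber in the next section periodically calls an Evict method on this cache (kept in a separate eviction.go, per the package layout later in this post). A minimal sketch, assuming staleness is judged by LastUpdated:

// internal/quote-service/cache/eviction.go (sketch)

// Evict removes pool states that have not been refreshed within maxAge,
// keeping the cache bounded if Shredstream stops publishing a pool.
func (c *PoolStateCache) Evict(maxAge time.Duration) {
    c.mu.Lock()
    defer c.mu.Unlock()

    now := time.Now()
    for addr, state := range c.pools {
        if now.Sub(state.LastUpdated) > maxAge {
            delete(c.pools, addr)
        }
    }
}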

2. NATS Subscriber for Shredstream Events

Subscribe to pool.state.updated.* events from Shredstream Scanner.

Implementation:

// internal/quote-service/nats/subscriber.go

type ShredstreamSubscriber struct {
    nc    *nats.Conn
    js    nats.JetStreamContext
    cache *cache.PoolStateCache
}

func (s *ShredstreamSubscriber) Start(ctx context.Context) error {
    // Subscribe to pool state updates
    _, err := s.js.Subscribe(
        "pool.state.updated.*",
        func(msg *nats.Msg) {
            s.handlePoolUpdate(msg)
            msg.Ack()
        },
        nats.Durable("quote-service-pool-updates"),
        nats.DeliverAll(),
    )
    if err != nil {
        return fmt.Errorf("subscribe failed: %w", err)
    }

    // Background eviction loop
    go s.evictionLoop(ctx)

    return nil
}

func (s *ShredstreamSubscriber) handlePoolUpdate(msg *nats.Msg) {
    var state cache.PoolState
    if err := json.Unmarshal(msg.Data, &state); err != nil {
        log.Warn("Failed to unmarshal pool state", "error", err)
        return
    }

    // Update cache with slot-based consistency
    s.cache.Update(&state)
}

// Evict stale entries every 60s
func (s *ShredstreamSubscriber) evictionLoop(ctx context.Context) {
    ticker := time.NewTicker(60 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case <-ticker.C:
            s.cache.Evict(30 * time.Second)
        case <-ctx.Done():
            return
        }
    }
}
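
For completeness, here’s a hedged sketch of constructing the subscriber. nats.Connect and nc.JetStream() are the standard nats.go calls; the constructor itself is my assumption:

// internal/quote-service/nats/subscriber.go (constructor sketch)

func NewShredstreamSubscriber(url string, poolCache *cache.PoolStateCache) (*ShredstreamSubscriber, error) {
    nc, err := nats.Connect(url,
        nats.MaxReconnects(-1),          // keep retrying
        nats.ReconnectWait(time.Second), // back off between attempts
    )
    if err != nil {
        return nil, fmt.Errorf("nats connect: %w", err)
    }

    js, err := nc.JetStream()
    if err != nil {
        return nil, fmt.Errorf("jetstream context: %w", err)
    }

    return &ShredstreamSubscriber{nc: nc, js: js, cache: poolCache}, nil
}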

3. FlatBuffers Event Publishing (20-150x Faster)

Publish swap route events to NATS MARKET_DATA stream using FlatBuffers for zero-copy serialization.

FlatBuffers Schema:

// internal/quote-service/events/schemas.fbs

namespace events;

table SwapRouteEvent {
  token_in: string;
  token_out: string;
  amount_in: uint64;
  amount_out: uint64;
  price: double;
  price_impact_bps: uint32;
  route: [RouteHop];
  protocol: string;
  pool_address: string;
  slot: uint64;
  timestamp: uint64;
  trace_id: string;
}

table RouteHop {
  protocol: string;
  pool_address: string;
  input_mint: string;
  output_mint: string;
  amount_in: uint64;
  amount_out: uint64;
  fee_bps: uint32;
}

root_type SwapRouteEvent;

Publisher Implementation:

// internal/quote-service/events/publisher.go

type FlatBuffersPublisher struct {
    js      nats.JetStreamContext
    builder *flatbuffers.Builder
}

func (p *FlatBuffersPublisher) PublishSwapRoute(
    ctx context.Context,
    quote *domain.Quote,
) error {
    // Reset builder for reuse
    p.builder.Reset()

    // Build FlatBuffers message
    tokenIn := p.builder.CreateString(quote.InputMint)
    tokenOut := p.builder.CreateString(quote.OutputMint)
    protocol := p.builder.CreateString(quote.Protocol)
    poolAddr := p.builder.CreateString(quote.PoolAddress)
    traceID := p.builder.CreateString(observability.TraceID(ctx))

    SwapRouteEventStart(p.builder)
    SwapRouteEventAddTokenIn(p.builder, tokenIn)
    SwapRouteEventAddTokenOut(p.builder, tokenOut)
    SwapRouteEventAddAmountIn(p.builder, quote.AmountIn)
    SwapRouteEventAddAmountOut(p.builder, quote.AmountOut)
    SwapRouteEventAddPrice(p.builder, quote.Price)
    SwapRouteEventAddPriceImpactBps(p.builder, quote.PriceImpactBps)
    SwapRouteEventAddProtocol(p.builder, protocol)
    SwapRouteEventAddPoolAddress(p.builder, poolAddr)
    SwapRouteEventAddSlot(p.builder, quote.Slot)
    SwapRouteEventAddTimestamp(p.builder, uint64(time.Now().Unix()))
    SwapRouteEventAddTraceId(p.builder, traceID)
    event := SwapRouteEventEnd(p.builder)

    p.builder.Finish(event)

    // Publish to NATS (< 1ms)
    subject := fmt.Sprintf("market.swap_route.%s.%s",
        quote.InputMint[:8], quote.OutputMint[:8])

    _, err := p.js.Publish(subject, p.builder.FinishedBytes(),
        nats.MsgId(traceID))

    return err
}

Performance Comparison:

Format       | Encode  | Decode  | Size        | Performance
-------------|---------|---------|-------------|---------------------
FlatBuffers  | 100ns   | 50ns    | 400 bytes   | 20-150x faster βœ…
JSON         | 500ns   | 2000ns  | 1200 bytes  | Baseline
Protobuf     | 200ns   | 800ns   | 600 bytes   | 2-10x faster
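
On the consumer side (the Scanner), decoding is where zero-copy pays off. Assuming the schema above is compiled with flatc --go into the events package, access looks roughly like this; field accessors read straight out of the received byte buffer, and detectOpportunity is just a stand-in for the Scanner’s logic:

// Scanner side (sketch): zero-copy read of a swap route event.
func handleSwapRoute(msg *nats.Msg) {
    event := events.GetRootAsSwapRouteEvent(msg.Data, 0)

    // No up-front parsing: each accessor reads directly from msg.Data,
    // which is where the decode-time advantage over JSON comes from.
    tokenIn := string(event.TokenIn())
    tokenOut := string(event.TokenOut())
    amountOut := event.AmountOut()
    slot := event.Slot()

    detectOpportunity(tokenIn, tokenOut, amountOut, slot) // illustrative downstream call
}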

4. Hybrid Cache Strategy

Three-tier cache strategy for optimal latency:

// internal/quote-service/service/quote_service.go

func (s *QuoteService) GetQuote(
    ctx context.Context,
    inputMint, outputMint string,
    amount uint64,
) (*domain.Quote, error) {

    // Strategy 1: Try Shredstream pool cache (5-10ms)
    if s.config.Shredstream.Enabled {
        quote, err := s.getQuoteFromShredstream(inputMint, outputMint, amount)
        if err == nil {
            s.metrics.CacheHits.Inc()
            return quote, nil
        }
    }

    // Strategy 2: Try in-memory quote cache (< 5ms)
    if cached, ok := s.cache.Get(inputMint, outputMint, amount); ok {
        if time.Since(cached.Timestamp) < s.config.Cache.TTL {
            s.metrics.CacheHits.Inc()
            return cached, nil
        }
    }

    // Strategy 3: Calculate fresh quote (100-200ms fallback)
    s.metrics.CacheMisses.Inc()
    quote, err := s.calculateQuote(ctx, inputMint, outputMint, amount)
    if err != nil {
        return nil, err
    }

    // Cache for future requests
    s.cache.Set(inputMint, outputMint, amount, quote)

    return quote, nil
}
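
getQuoteFromShredstream isn’t shown above; here’s a simplified sketch under the assumption of a single-hop, constant-product (x*y=k) pool. Real code would route across protocols, apply fees, and use big-integer math to avoid overflow; GetByPair and ErrPoolNotCached are assumed helpers:

func (s *QuoteService) getQuoteFromShredstream(
    inputMint, outputMint string,
    amountIn uint64,
) (*domain.Quote, error) {
    pool, ok := s.poolCache.GetByPair(inputMint, outputMint) // assumed lookup by pair
    if !ok {
        return nil, domain.ErrPoolNotCached // assumed sentinel error
    }

    // Constant-product swap math on the Shredstream-provided reserves.
    x, y := pool.BaseReserve, pool.QuoteReserve
    amountOut := (y * amountIn) / (x + amountIn)

    return &domain.Quote{
        InputMint:   inputMint,
        OutputMint:  outputMint,
        AmountIn:    amountIn,
        AmountOut:   amountOut,
        PoolAddress: pool.Address,
        Slot:        pool.Slot,
    }, nil
}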

5. Configuration

Environment variables for HFT integration:

# Shredstream Integration
SHREDSTREAM_ENABLED=true
SHREDSTREAM_CACHE_MAX_STALENESS=30s
SHREDSTREAM_EVICTION_INTERVAL=60s

# NATS Configuration
NATS_URL=nats://localhost:4222
NATS_SUBJECT_POOL_UPDATES="pool.state.updated.*"
NATS_SUBJECT_SWAP_ROUTE="market.swap_route"
NATS_DURABLE_NAME="quote-service-pool-updates"

# HFT Performance Targets
HFT_QUOTE_LATENCY_TARGET_MS=10
HFT_EVENT_PUBLISH_RATE_TARGET=10000
HFT_CACHE_HIT_RATE_TARGET=0.95

# FlatBuffers
FLATBUFFERS_ENABLED=true
FLATBUFFERS_BUILDER_INITIAL_SIZE=1024
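
These could be loaded into a small typed config at startup; a sketch (the struct and getenv helper are illustrative):

type HFTConfig struct {
    ShredstreamEnabled bool
    MaxStaleness       time.Duration
    EvictionInterval   time.Duration
    NATSURL            string
    QuoteLatencyTarget time.Duration
}

func LoadHFTConfig() (*HFTConfig, error) {
    staleness, err := time.ParseDuration(getenv("SHREDSTREAM_CACHE_MAX_STALENESS", "30s"))
    if err != nil {
        return nil, fmt.Errorf("SHREDSTREAM_CACHE_MAX_STALENESS: %w", err)
    }
    eviction, err := time.ParseDuration(getenv("SHREDSTREAM_EVICTION_INTERVAL", "60s"))
    if err != nil {
        return nil, fmt.Errorf("SHREDSTREAM_EVICTION_INTERVAL: %w", err)
    }

    return &HFTConfig{
        ShredstreamEnabled: getenv("SHREDSTREAM_ENABLED", "false") == "true",
        MaxStaleness:       staleness,
        EvictionInterval:   eviction,
        NATSURL:            getenv("NATS_URL", "nats://localhost:4222"),
        QuoteLatencyTarget: 10 * time.Millisecond, // HFT_QUOTE_LATENCY_TARGET_MS
    }, nil
}

// getenv returns the environment variable or a default value.
func getenv(key, def string) string {
    if v := os.Getenv(key); v != "" {
        return v
    }
    return def
}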

6. Updated Package Structure

internal/quote-service/
β”œβ”€β”€ cache/              # NEW: Shredstream pool cache
β”‚   β”œβ”€β”€ shredstream_cache.go
β”‚   └── eviction.go
β”œβ”€β”€ events/             # NEW: FlatBuffers event publishing
β”‚   β”œβ”€β”€ publisher.go
β”‚   └── schemas.fbs     # FlatBuffers schema
└── nats/               # NEW: NATS integration
    β”œβ”€β”€ subscriber.go   # Pool state updates
    └── kill_switch.go  # Emergency stop

7. Why FlatBuffers Over JSON/Protobuf?

FlatBuffers Advantages:

  1. Zero-copy deserialization - Access data without parsing
  2. 20-150x faster than JSON encoding/decoding
  3. Smaller message size - 400 bytes vs 1200 bytes (JSON)
  4. Backward/forward compatible - Schema evolution
  5. No runtime serialization - Data stored in-memory ready to send

When to Use FlatBuffers:

  • βœ… High-frequency events (10,000/sec)
  • βœ… Latency-critical paths (< 1ms publish)
  • βœ… Large message volumes
  • ❌ Human-readable debugging (use JSON for admin APIs)

8. HFT Pipeline Integration

Quote-service is Stage 0 of the 4-stage HFT pipeline:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Stage 0: Quote Service (< 10ms)             β”‚
β”‚ ─────────────────────────────────────────── β”‚
β”‚ INPUT:  HTTP/gRPC request                   β”‚
β”‚ PROCESS: Hybrid cache (Shredstream β†’ Mem)  β”‚
β”‚ OUTPUT: FlatBuffers event β†’ MARKET_DATA     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                 ↓ NATS: market.swap_route.*
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Stage 1: Scanner (< 50ms)                   β”‚
β”‚ Detects arbitrage opportunities             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                 ↓ NATS: opportunity.*
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Stage 2: Planner (< 50ms)                   β”‚
β”‚ Plans execution strategy                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                 ↓ NATS: execution.planned
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Stage 3: Executor (< 90ms)                  β”‚
β”‚ Submits Jito bundle                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

TOTAL: < 200ms end-to-end (vs current 1.7s)

Quote Service Responsibilities:

  • βœ… Serve quotes in < 10ms (Stage 0 target)
  • βœ… Publish FlatBuffers events to MARKET_DATA stream
  • βœ… Subscribe to Shredstream pool state updates
  • βœ… Maintain > 95% cache hit rate
  • βœ… Handle 10,000 events/sec throughput

Clean Architecture Benefits

Internal Package Structure

New Directory Layout:

go/
β”œβ”€β”€ cmd/
β”‚   β”œβ”€β”€ quote-service/
β”‚   β”‚   └── main.go                    # 100 lines (ONLY DI wiring)
β”‚   └── pool-discovery-service/
β”‚       └── main.go
β”‚
└── internal/
    β”œβ”€β”€ quote-service/
    β”‚   β”œβ”€β”€ domain/                    # Core business logic
    β”‚   β”‚   β”œβ”€β”€ interfaces.go          # PoolReader, PriceCalculator
    β”‚   β”‚   β”œβ”€β”€ quote.go               # Quote, Pool models
    β”‚   β”‚   └── errors.go              # Business errors
    β”‚   β”‚
    β”‚   β”œβ”€β”€ repository/                # Data access
    β”‚   β”‚   β”œβ”€β”€ pool_repository.go     # Redis pool reader
    β”‚   β”‚   β”œβ”€β”€ cache_repository.go    # In-memory cache
    β”‚   β”‚   └── oracle_repository.go   # Pyth/Jupiter
    β”‚   β”‚
    β”‚   β”œβ”€β”€ calculator/                # Business logic
    β”‚   β”‚   β”œβ”€β”€ pool_calculator.go     # AMM math
    β”‚   β”‚   β”œβ”€β”€ slippage_calculator.go # Price impact
    β”‚   β”‚   └── route_optimizer.go     # Best route
    β”‚   β”‚
    β”‚   β”œβ”€β”€ service/                   # Orchestration
    β”‚   β”‚   β”œβ”€β”€ quote_service.go       # Quote orchestration
    β”‚   β”‚   β”œβ”€β”€ price_service.go       # Price calculation
    β”‚   β”‚   └── cache_service.go       # Cache management
    β”‚   β”‚
    β”‚   └── api/                       # HTTP + gRPC
    β”‚       β”œβ”€β”€ http/handler.go        # Gin handlers
    β”‚       └── grpc/server.go         # gRPC streaming
    β”‚
    └── pool-discovery/
        β”œβ”€β”€ scanner/                   # DEX scanners
        β”œβ”€β”€ storage/                   # Redis writer
        └── scheduler/                 # Periodic job

Code Size Reduction

Before (Monolithic):

cmd/quote-service/
β”œβ”€β”€ main.go           52,844 bytes ❌
β”œβ”€β”€ cache.go          96,419 bytes ❌
β”œβ”€β”€ grpc_server.go    40,734 bytes ❌
└── ... 17 more files

TOTAL: 317KB (50K+ lines) ❌

After (Clean Architecture):

internal/quote-service/
β”œβ”€β”€ domain/           4,500 bytes βœ…
β”œβ”€β”€ repository/       10,000 bytes βœ…
β”œβ”€β”€ calculator/       10,000 bytes βœ…
β”œβ”€β”€ service/          9,000 bytes βœ…
└── api/              10,000 bytes βœ…

cmd/quote-service/
└── main.go           3,000 bytes βœ…

TOTAL: ~46.5KB βœ…

REDUCTION: ~85% smaller by file size (317KB β†’ 46.5KB), ~70% fewer lines βœ…

Testability Example

Before (Impossible to Test):

// All dependencies hard-coded
func (c *QuoteCache) UpdateQuote() {
    pools := c.solClient.QueryAllPools(...) // Hard-coded RPC ❌
    c.wsPool.Subscribe(...)                  // Hard-coded WS ❌
    // Cannot inject mocks, must use real infrastructure
}

// Test coverage: 20% (too hard to test)

After (Easy to Test):

// All dependencies are interfaces
type QuoteService struct {
    poolRepo     domain.PoolReader      // Interface βœ…
    calculator   domain.PriceCalculator // Interface βœ…
    cacheManager domain.CacheManager    // Interface βœ…
}

// Test with mocks
func TestGetQuote(t *testing.T) {
    mockPoolRepo := &MockPoolReader{
        pools: testPools, // Inject test data
    }
    mockCalculator := &MockPriceCalculator{
        output: expectedOutput,
    }
    mockCache := &MockCacheManager{}

    service := NewQuoteService(mockPoolRepo, mockCalculator, mockCache)

    quote, err := service.GetQuote(ctx, "SOL", "USDC", 1000000000)

    assert.NoError(t, err)
    assert.Equal(t, expectedOutput, quote.OutputAmount)
}

// Test coverage: 80%+ (easy to test with mocks) βœ…

Single Responsibility Principle

Each package has ONE job:

Package      | Responsibility                | Example
-------------|-------------------------------|--------------------------------------------------
domain/      | Define interfaces and models  | type PoolReader interface { ... }
repository/  | Data access (Redis, cache)    | GetPoolsByPair(...)
calculator/  | Business logic (pool math)    | CalculateQuote(pool, amount)
service/     | Orchestration                 | GetQuote() coordinates repositories + calculators
api/         | HTTP + gRPC handlers          | Parse request, call service, return response

Benefits:

  • βœ… Easy to understand (each package is small and focused)
  • βœ… Easy to test (inject dependencies via interfaces)
  • βœ… Easy to change (modify one package without affecting others)
  • βœ… Easy to extend (add new calculators, repositories, etc.)
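
To ground that table, here’s a sketch of what domain/interfaces.go might declare; the method sets are illustrative but match the usage shown earlier in this post:

// internal/quote-service/domain/interfaces.go (sketch)

// PoolReader abstracts where pool metadata comes from (Redis today).
type PoolReader interface {
    GetPoolsByPair(ctx context.Context, inputMint, outputMint string) ([]*Pool, error)
}

// PriceCalculator turns pool state plus an input amount into a quote.
type PriceCalculator interface {
    CalculateQuote(pool *Pool, amountIn uint64) (*Quote, error)
}

// CacheManager stores computed quotes for the hot path.
type CacheManager interface {
    Get(inputMint, outputMint string, amountIn uint64) (*Quote, bool)
    Set(inputMint, outputMint string, amountIn uint64, q *Quote)
}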

Technology Stack Decisions

Final Technology Stack

Component       | Technology               | Rationale
----------------|--------------------------|----------------------------------------------------------------------------
Quote Service   | Go                       | Fast delivery (2-3 weeks), proven, < 10ms easily met, can reuse code
Pool Discovery  | Go                       | I/O bound (RPC calls), Go perfect for concurrency
RPC Proxy       | Rust                     | Shared by ALL services, worth the investment, ideal for connection pooling
HTTP + gRPC     | Combined in ONE service  | Shared cache critical (4-7x faster), simpler deployment

Architecture Principles

  1. Clean Architecture βœ…
    • Domain layer (interfaces + models)
    • Service layer (business logic)
    • Repository layer (data access)
    • API layer (HTTP + gRPC handlers)
  2. Service Separation βœ…
    • Pool Discovery: Independent background job
    • Quote Service: Pure calculation + serving
    • RPC Proxy: Centralized RPC management
  3. Cache Strategy βœ…
    • Pool metadata: Redis (slow-changing, shared)
    • Quote cache: In-memory (fast, instance-local)
    • NO shared quote cache via Redis (defeats performance)
  4. Testing Strategy βœ…
    • Unit tests: >80% coverage (table-driven, mocks)
    • Integration tests: Real Redis, synthetic data
    • Load tests: 1000 req/s sustained
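
As a sketch of the table-driven style mentioned above, reusing the mock types from the testability example (the expected behaviour per case is illustrative):

func TestGetQuote_TableDriven(t *testing.T) {
    cases := []struct {
        name       string
        inputMint  string
        outputMint string
        amountIn   uint64
        wantErr    bool
    }{
        {"happy path", "SOL", "USDC", 1_000_000_000, false},
        {"zero amount rejected", "SOL", "USDC", 0, true},
        {"unknown pair", "SOL", "UNKNOWN", 1_000_000_000, true},
    }

    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            svc := NewQuoteService(&MockPoolReader{}, &MockPriceCalculator{}, &MockCacheManager{})

            _, err := svc.GetQuote(context.Background(), tc.inputMint, tc.outputMint, tc.amountIn)
            if tc.wantErr {
                assert.Error(t, err)
                return
            }
            assert.NoError(t, err)
        })
    }
}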

Expected Improvements

Performance Metrics

Metric                       | Before     | After (Clean)  | After (HFT)   | Improvement
-----------------------------|------------|----------------|---------------|--------------------------
Quote Latency (cached)       | ~5ms       | < 5ms          | < 5ms βœ…      | Same (already fast)
Quote Latency (Shredstream)  | N/A        | N/A            | < 5ms βœ…      | NEW: 300-800ms advance
Quote Latency (uncached)     | ~200ms     | < 50ms         | < 50ms        | 4x faster
NATS Event Publishing        | N/A        | N/A            | < 1ms βœ…      | NEW: 10K events/sec
Throughput                   | 500 req/s  | 10K req/s      | 10K req/s βœ…  | 20x higher
Memory Usage                 | 800MB      | 300MB          | 350MB         | 56% reduction
Cache Hit Rate               | ~80%       | ~90%           | > 95% βœ…      | HFT: Critical

HFT Pipeline Metrics (NEW)

Stage    | Service        | Latency Target  | Current  | Status
---------|----------------|-----------------|----------|----------------------------
Stage 0  | Quote Service  | < 10ms          | 5-10ms   | βœ… HFT Ready
Stage 1  | Scanner        | < 50ms          | TBD      | 🚧 In Progress
Stage 2  | Planner        | < 50ms          | TBD      | 🚧 In Progress
Stage 3  | Executor       | < 90ms          | TBD      | 🚧 In Progress
TOTAL    | End-to-End     | < 200ms         | 1.7s     | 8.5x improvement planned

Code Quality Metrics

Metric             | Before      | After               | Improvement
-------------------|-------------|---------------------|---------------
Lines of Code      | 50K+        | 15K                 | 70% reduction
Test Coverage      | ~20%        | > 80%               | 4x better
Largest File       | 96KB        | < 10KB              | 90% reduction
Package Structure  | Monolithic  | Clean architecture  | Excellent

Maintainability Improvements

Before:

  • ❌ Adding a new DEX protocol: Touch 5+ files, 200+ lines
  • ❌ Fixing a bug: Search through 50K lines, unpredictable side effects
  • ❌ Writing tests: Requires full infrastructure (Redis, NATS, RPC)
  • ❌ Understanding code: Must read entire 96KB cache.go

After:

  • βœ… Adding a new DEX protocol: Implement Protocol interface, register in DI (50 lines)
  • βœ… Fixing a bug: Isolated in one package (100-200 lines to search)
  • βœ… Writing tests: Unit tests with mocks (no infrastructure)
  • βœ… Understanding code: Read one package at a time (500-1000 lines max)

Conclusion: Building for the Future

Why This Matters

Building trading systems is not just about making it work todayβ€”it’s about building for tomorrow. The difference between a successful system and a failed one often comes down to maintainability.

Bad architecture compounds:

  • Year 1: β€œIt’s a bit messy, but it works”
  • Year 2: β€œAdding features is getting harder”
  • Year 3: β€œWe can’t change anything without breaking something”
  • Year 4: β€œWe need to rewrite everything” ← Too late

Good architecture scales:

  • Year 1: β€œClean architecture takes more time upfront”
  • Year 2: β€œAdding features is still easy”
  • Year 3: β€œWe can refactor safely with 80% test coverage”
  • Year 4: β€œThe system is maintainable and growing” ← Success

The Investment

Time Required: 6 weeks

  • Week 1-3: Parallel development (no disruption)
  • Week 4: Canary testing (10% traffic)
  • Week 5: Gradual rollout (10% β†’ 100%)
  • Week 6: Production hardening

Risk: Low (incremental, rollback-friendly)

Outcome: Production-ready, maintainable, performant quote service for the next 5+ years

The Alternative

If we don’t rewrite:

  • Technical debt grows exponentially
  • Adding features becomes impossible
  • Bug fixes become dangerous
  • Team velocity grinds to zero
  • Eventually forced to rewrite under pressure (high risk)

The choice is clear: Invest 6 weeks now, or pay 10x more later.

Merry Christmas! πŸŽ„

As we close out 2025 and look toward 2026, I’m excited about this architectural evolution. Building robust, maintainable systems is what separates hobby projects from production systems.

Here’s to clean architecture, sustainable codebases, and successful trading in 2026! πŸŽ‰

Wishing everyone a Merry Christmas and a Happy New Year! May your trades be profitable and your bugs be few! πŸš€



Next Post: Quote Service Rewrite - Phase 1 Implementation (Foundation Skeleton)

Stay tuned for the journey from architectural debt to clean, maintainable code! πŸŽ„