Google Interview Question

Design a Trading App like Coinbase / Bitvavo (Android) — Google Interview

Hard · 22 min · Android System Design

How Google Tests This

Google is known for asking some of the most challenging system design questions in the industry, covering distributed systems, data infrastructure, and large-scale web services. Their interviews emphasise designing systems that handle billions of users and petabytes of data.

Interview focus: Distributed systems, search infrastructure, real-time data processing, and global-scale services.

Key Topics
trading app, coinbase, websocket, room, kotlin flow, workmanager, mpandroidchart, kotlin

Android System Design: Design a Trading App like Coinbase

A trading app is one of the most demanding Android system design questions you can get. It shows up at Coinbase, Robinhood, Revolut, N26, and at larger tech companies building financial products on mobile.

What makes it genuinely different from most mobile design problems is that it stacks several hard constraints on top of each other. The app must display prices that change hundreds of times per second — on a device whose battery you cannot drain. An order book can receive 50 updates per second — through a RecyclerView that must hold 60fps. A chart must respond to pinch-zoom instantly — while receiving live candle updates simultaneously. And all of this must degrade gracefully when the user goes offline, which on a mobile device is not an edge case, it's a regular occurrence.

These constraints are what senior interviewers are probing for. This guide walks through each one, in the kind of back-and-forth you'd actually have in the room.


Step 1: Clarify the Scope

Interviewer: Design a crypto trading app like Coinbase for Android.

Candidate: A few questions before I start. Are we designing the complete app or specific flows? I'm thinking market overview with live prices, an asset detail screen with a chart and order book, a portfolio view, and a buy/sell flow. Is real-time streaming data a hard requirement, or is polling acceptable? Do both market orders and limit orders need to be in scope? And what's the offline expectation — should last known prices and portfolio still display without network?

Interviewer: Full core experience. Real-time WebSocket data is a hard requirement — polling is not acceptable. Both order types in scope. Offline resilience is required — the user should always see something meaningful, never a blank screen.

Candidate: Got it. The interesting depth in this design is in the real-time data pipeline — how we get high-frequency market data into the UI without dropping frames and without draining the battery. Let me start with requirements and numbers and then walk through each system block.


Requirements

Functional

  • Market overview: list of trading pairs with live prices and 24h change percentage
  • Asset detail: candlestick chart (1m, 5m, 1h, 1d timeframes), live order book (bid/ask depth)
  • Portfolio view: current holdings with real-time P&L
  • Buy/sell flow: market and limit orders with confirmation screen
  • Price alerts: notify the user when an asset crosses a configured price threshold
  • Trade history: past orders persisted locally and syncable from the server

Non-Functional

  • Real-time UI — price changes must reflect within 200ms of a tick arriving
  • 60fps throughout — no dropped frames during order book updates or chart interactions
  • Battery-aware — high-frequency WebSocket data must not drain the battery
  • Offline resilience — last known prices and portfolio data must display without network
  • Data consistency — portfolio state must survive process kills and network drops

Back-of-the-Envelope Estimates

Interviewer: What numbers are we designing around?

Candidate: On the client side, the dominant concern is update frequency — not data volume.

plaintext
WebSocket channels per active screen:
  Market overview:      ticker_batch (all pairs, batched every 5s)
  Asset detail:         ticker (single pair, every trade match, 100–500ms)
                      + level2 (order book depth, up to 50 updates/second)
 
Order book visible depth:
  Top 20 bids + top 20 asks = 40 rows
 
Candlestick history (REST):
  500 candles × 5 timeframes × ~80 bytes = ~200 KB (negligible)
 
Portfolio data:
  ~10 assets per typical user — trivially small for Room
 
Price alert poll cycle:
  WorkManager minimum interval: 15 minutes

The critical number is the order book at 50 updates/second. At 60fps you have 16ms per frame. If you let every raw update trigger a DiffUtil computation and a RecyclerView rebind, you're scheduling work 50 times per second on the main thread — guaranteed frame drops. Throttling the order book is not an optimisation, it's a correctness requirement.


High-Level Client Architecture

Interviewer: Walk me through the overall architecture before we go deep.

Candidate: Clean Architecture with MVVM at the presentation layer, and a dedicated MarketDataManager that owns all WebSocket connections.

plaintext
UI Layer
  MarketListScreen         ← MarketViewModel
  AssetDetailScreen        ← AssetDetailViewModel
  PortfolioScreen          ← PortfolioViewModel
  OrderScreen              ← OrderViewModel
 
Domain Layer
  Use Cases:
    GetLivePrices, GetOrderBook, GetCandleHistory,
    PlaceOrder, GetPortfolio, ManagePriceAlerts
 
Data Layer
  MarketRepository
    ├── MarketDataManager  (WebSocket — live data)
    └── ExchangeApiService (Retrofit — REST for history, orders)
 
  PortfolioRepository
    └── Room               (portfolio, trade history, last known prices, alerts)
 
Alert Layer
  PriceAlertManager        (WorkManager CoroutineWorker)

The most important architectural decision: Room is the single source of truth for everything that needs to survive a process kill — portfolio holdings, last known prices, trade history, and price alerts. The WebSocket pipeline writes into Room. The UI observes Room via Flow. This means the UI doesn't care where the latest price came from — it just observes Room and reacts.

The MarketDataManager is a singleton scoped to the Application. It owns the OkHttp WebSocket instance, manages subscriptions, and exposes data as Kotlin Flows. Critically, its lifecycle is not tied to any Activity or ViewModel — it survives screen rotations and back-stack navigation.


Block 1: WebSocket Channel Management

What it is: WebSockets provide a persistent, full-duplex TCP connection where the server pushes data without the client polling. For a trading app where prices change hundreds of times per second, WebSocket is the only viable delivery mechanism.

Coinbase's WebSocket API offers channels including ticker (every trade match), ticker_batch (all pairs every 5 seconds), and level2 (full order book updates).

Interviewer: Walk me through how you manage the WebSocket lifecycle.

Candidate: The MarketDataManager connects once when the app starts and maintains the connection for the app's lifetime. Subscriptions are dynamic — the app subscribes to channels when screens open and unsubscribes when they close.

kotlin
class MarketDataManager(
    private val client: OkHttpClient,
    private val appScope: CoroutineScope     // ApplicationScope, survives all screens
) {
    private var webSocket: WebSocket? = null
    private val activeSubscriptions = mutableSetOf<Subscription>()
 
    // SharedFlow with DROP_OLDEST — stale ticks should never queue
    private val _tickerFlow = MutableSharedFlow<TickerUpdate>(
        replay = 1,
        extraBufferCapacity = 64,
        onBufferOverflow = BufferOverflow.DROP_OLDEST
    )
    val tickerFlow: SharedFlow<TickerUpdate> = _tickerFlow
 
    private val _orderBookFlow = MutableSharedFlow<OrderBookUpdate>(
        replay = 1,
        extraBufferCapacity = 128,
        onBufferOverflow = BufferOverflow.DROP_OLDEST
    )
    val orderBookFlow: SharedFlow<OrderBookUpdate> = _orderBookFlow
}

The DROP_OLDEST overflow policy is deliberate. If the UI is briefly busy — mid-recomposition, mid-scroll — and can't consume ticks as fast as they arrive, we drop the oldest tick, not the newest. In a trading app, a queued 200ms-old price is worthless the moment a newer one exists. Stale data must never queue.
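The semantics are easy to verify in isolation. Below is a plain-Kotlin model of a drop-oldest buffer; it is not the SharedFlow internals, just the eviction behaviour the policy guarantees:

```kotlin
// Plain-Kotlin model of the DROP_OLDEST policy: a bounded buffer that evicts the
// stalest element when full, so the newest tick is always retained.
class DropOldestBuffer<T>(private val capacity: Int) {
    private val buffer = ArrayDeque<T>()

    fun offer(value: T) {
        if (buffer.size == capacity) buffer.removeFirst()   // drop the oldest, never the newest
        buffer.addLast(value)
    }

    fun drainAll(): List<T> = buffer.toList().also { buffer.clear() }
}
```

Offering five ticks into a capacity-3 buffer leaves the three newest; that is exactly the property we want for market data.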

Subscription lifecycle tied to screen lifecycle:

kotlin
// In AssetDetailViewModel — subscribe when the screen is active
init {
    viewModelScope.launch {
        marketDataManager.subscribe(Channel.TICKER, listOf("BTC-USD"))
        marketDataManager.subscribe(Channel.LEVEL2, listOf("BTC-USD"))
    }
}
 
override fun onCleared() {
    marketDataManager.unsubscribe(Channel.TICKER, listOf("BTC-USD"))
    marketDataManager.unsubscribe(Channel.LEVEL2, listOf("BTC-USD"))
}

When the user is on the market overview screen, only ticker_batch is active. When they open BTC-USD detail, we additionally subscribe to granular ticker and level2 for that pair. When they navigate away, both are unsubscribed. This directly controls battery — the radio only receives the data the current screen actually needs.
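The subscribe messages themselves are small JSON payloads. A hypothetical sketch of buildSubscribeMessage follows; the payload mirrors the general shape of Coinbase's documented subscribe message, but treat the exact field names as an assumption:

```kotlin
// Hypothetical sketch of buildSubscribeMessage. Field names follow the general
// shape of a Coinbase-style subscribe payload and are an assumption here.
fun buildSubscribeMessage(channel: String, productIds: List<String>): String {
    val ids = productIds.joinToString(",") { "\"$it\"" }
    return """{"type":"subscribe","product_ids":[$ids],"channels":["$channel"]}"""
}
```

The unsubscribe variant would differ only in the "type" field, which is why re-subscription after reconnect is a single lightweight send.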

Reconnection: Exponential Backoff + NetworkCallback

Interviewer: What happens when the connection drops?

Candidate: OkHttp fires onFailure() on the WebSocketListener. The MarketDataManager schedules a reconnect with exponential backoff — 1s, 2s, 4s, 8s, capping at 30 seconds.

Additionally, a ConnectivityManager.NetworkCallback is registered. When onAvailable() fires, we attempt reconnection immediately without waiting for the backoff timer. A trader reconnecting after a tunnel shouldn't wait 30 seconds for live prices.

On successful reconnect, re-register all active subscriptions from activeSubscriptions. The ViewModels don't know a reconnection happened — they just keep observing the same Flow.

kotlin
private fun onConnectionEstablished() {
    retryCount = 0
    // Re-subscribe to everything that was active before the drop
    activeSubscriptions.forEach { sub ->
        webSocket?.send(buildSubscribeMessage(sub.channel, sub.productIds))
    }
}
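The backoff schedule is a pure function, which makes it trivial to unit-test. A sketch, assuming the 1s-doubling-to-30s policy described above (reconnectDelayMs is a name introduced here for illustration):

```kotlin
// Pure backoff schedule: 1s, 2s, 4s, 8s, 16s, then capped at 30s.
fun reconnectDelayMs(retryCount: Int, maxDelayMs: Long = 30_000L): Long {
    val exponent = retryCount.coerceIn(0, 5)        // guard against shift overflow
    return minOf(1_000L shl exponent, maxDelayMs)
}
```

The NetworkCallback path simply bypasses this function entirely and reconnects with zero delay.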

Block 2: Real-Time Data Pipeline (Kotlin Flow)

What it is: Kotlin Flow is a cold asynchronous stream from Coroutines. StateFlow and SharedFlow are hot streams — they hold state and broadcast to multiple collectors. This is the glue between the raw WebSocket messages and the ViewModels.

Interviewer: How does a raw WebSocket message get from the socket to the UI?

Candidate: A four-step pipeline entirely on coroutines, no Rx, no manual threads:

plaintext
[WebSocket thread]
onMessage() called by OkHttp
        ↓
Parse JSON → TickerUpdate data class
        ↓
_tickerFlow.emit(update)                         // immediate, non-blocking
        ↓                                        // (DROP_OLDEST if buffer full)
[ViewModels collect on their own coroutine scope]
AssetDetailViewModel collects tickerFlow
        ↓
.throttleLatest(200)                             // cap at 5 UI updates/second
        ↓
.map { it.toUiState() }                          // format strings, compute delta
        ↓                                        // runs on Default dispatcher
.stateIn(viewModelScope, WhileSubscribed(5000))  // exposed as StateFlow<UiState>
        ↓
[Compose recomposition / View observation]
UI renders updated price

The throttleLatest(200) step is the critical performance gate. Even if 50 order book updates arrive per second, the ViewModel emits to the UI at most 5 times per second. The UI always shows the latest value — not an averaged or debounced value.
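Note that throttleLatest is not a built-in kotlinx.coroutines operator (sample is the closest standard one), so it's worth being able to state its contract precisely. Here is a deterministic plain-Kotlin model of latest-wins throttling, not the Flow operator itself:

```kotlin
// Deterministic model of latest-wins throttling: at most one emission per window,
// always carrying the newest value, with a trailing emission when the window elapses.
fun <T> throttleLatestModel(ticks: List<Pair<Long, T>>, windowMs: Long): List<T> {
    val emitted = mutableListOf<T>()
    var windowEnd = Long.MIN_VALUE
    var pending: T? = null
    for ((timeMs, value) in ticks) {
        if (timeMs >= windowEnd) {
            emitted.add(value)                 // window open: emit immediately
            windowEnd = timeMs + windowMs
            pending = null
        } else {
            pending = value                    // window closed: remember only the latest
        }
    }
    pending?.let { emitted.add(it) }           // trailing emission of the latest value
    return emitted
}
```

Ticks at 0ms, 50ms, and 100ms with a 200ms window produce two emissions: the first tick immediately, then the latest pending value. Intermediate values are discarded, never averaged.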

toUiState() runs on Dispatchers.Default. All string formatting, percentage calculations, and color logic happen here — off the main thread, before the data reaches the UI. By the time onBind fires in a RecyclerView or a Compose item() recomposes, it's setting pre-computed strings and pre-computed colors. No work happens in the render pass.

SharingStarted.WhileSubscribed(5000) is worth explaining explicitly. This keeps the upstream tickerFlow collection alive for 5 seconds after the last subscriber disconnects — a screen rotation, for example. Without the 5-second window, navigating away and immediately back would disconnect and reconnect the Flow, causing a brief stale-data flash. With it, the Flow stays warm across brief navigations.


Block 3: Order Book — Rendering at 50 Updates/Second

The order book is the most performance-sensitive component in a trading app.

Interviewer: How do you render the order book without dropping frames?

Candidate: Three things working together.

Throttle in the ViewModel:

kotlin
val orderBook: StateFlow<OrderBookUiState> = marketDataManager
    .orderBookFlow
    .throttleLatest(200)               // max 5 UI updates/sec
    .map { snapshot ->
        OrderBookUiState(
            bids = snapshot.bids.take(20).map { it.toRowUiState() },
            asks = snapshot.asks.take(20).map { it.toRowUiState() }
        )
    }
    .flowOn(Dispatchers.Default)       // all mapping off main thread
    .stateIn(viewModelScope, WhileSubscribed(5000), OrderBookUiState.Empty)

ListAdapter with a precise DiffUtil.ItemCallback:

kotlin
class OrderBookAdapter : ListAdapter<OrderBookRow, OrderBookViewHolder>(
    object : DiffUtil.ItemCallback<OrderBookRow>() {
        override fun areItemsTheSame(old: OrderBookRow, new: OrderBookRow) =
            old.price == new.price        // same price level = same row
 
        override fun areContentsTheSame(old: OrderBookRow, new: OrderBookRow) =
            old.size == new.size && old.depthFraction == new.depthFraction
    }
)

When submitList() is called, DiffUtil computes the diff on a background thread. Only the rows whose size changed get rebound. In a typical order book update, 2–3 rows change out of 40. Without DiffUtil, you'd rebind all 40 rows on every update — 5 × 40 = 200 onBind calls per second.

Nothing computed in onBindViewHolder:

The toRowUiState() in the ViewModel pre-computes every value the view needs: the formatted price string ("$67,432.50"), the formatted size string ("0.142 BTC"), and depthFraction as a Float between 0 and 1 (the proportion of max depth, used to drive the depth bar width). onBindViewHolder sets these directly into views — no formatting, no math, no allocation. It's a dumb setter.
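As a concrete illustration, a minimal toRowUiState could look like this. It's a sketch: OrderBookRowUiState, the parameter list, and the exact formats are assumptions introduced here, not the article's actual types:

```kotlin
import java.util.Locale

// Hypothetical shape of the pre-computed row state; names are illustrative.
data class OrderBookRowUiState(
    val priceText: String,      // e.g. "$67,432.50", formatted once, off the main thread
    val sizeText: String,       // e.g. "0.142 BTC"
    val depthFraction: Float    // 0..1, drives the depth bar width
)

fun toRowUiState(price: Double, size: Double, asset: String, maxDepth: Double) =
    OrderBookRowUiState(
        priceText = String.format(Locale.US, "$%,.2f", price),
        sizeText = String.format(Locale.US, "%.3f %s", size, asset),
        depthFraction = (size / maxDepth).coerceIn(0.0, 1.0).toFloat()
    )
```

Everything the ViewHolder needs is now a string or a primitive; binding is pure assignment.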


Block 4: Candlestick Chart Architecture

Interviewer: How do you implement the candlestick chart?

Candidate: This is a genuine design decision with trade-offs between integration speed and performance control. Let me walk through the options.

Option 1: MPAndroidChart

The most widely used Android chart library. Provides CandleStickChart with built-in pinch-zoom, panning, and CandleEntry data model.

plaintext
Pros:  battle-tested, minimal code, handles gestures out of the box
Cons:  not actively maintained (last major release 2020), renders 
       synchronously on the main thread via Canvas, full chart redraw
       on every data update — can drop frames with 500+ candles

Option 2: Custom Canvas View (View.onDraw())

Override onDraw(canvas: Canvas) and draw candle bodies, wicks, price axes, and time axes directly. Full control over rendering — update only the newest forming candle without a full redraw.

plaintext
Pros:  GPU-hardware-accelerated canvas, surgical updates (only redraw
       the current forming candle on new ticks), full visual control
Cons:  significant development time, must implement ScaleGestureDetector
       and GestureDetector manually for zoom/pan

Option 3: Compose Canvas (Modern Recommended Approach)

In a Jetpack Compose app, the Canvas composable with DrawScope drawing functions. Use remember and derivedStateOf to ensure only the chart recomposes when chart data changes — not when unrelated screen state changes.

plaintext
Pros:  first-class Compose integration, declarative updates, 
       natural state-driven invalidation via derivedStateOf
Cons:  requires a Compose codebase, less ecosystem tooling than 
       MPAndroidChart

For an interview, state a choice and justify it. For a new Compose-first app: Compose Canvas — it's the modern correct answer. For a View-based codebase where you need something working quickly: MPAndroidChart for historical candles, with the live-updating "current candle" drawn on a separate hardware-accelerated overlay canvas to avoid triggering a full chart redraw on every tick.

The two data sources:

Historical candles (everything except the current period) are loaded once via REST: GET /products/BTC-USD/candles?granularity=300. The current forming candle updates via the WebSocket ticker stream — each trade updates the current candle's close, and potentially its high/low.

kotlin
val chartData: StateFlow<ChartUiState> = combine(
    repository.getCandleHistory(pair, granularity),   // Flow<List<Candle>> from Room
    liveTicker                                        // this ViewModel's Flow<TickerUpdate> from the WebSocket
) { historicalCandles, latestTick ->
    val updatedCandles = historicalCandles.toMutableList()
    updatedCandles.updateCurrentCandle(latestTick)    // merge tick into the forming candle
    ChartUiState(candles = updatedCandles)
}.stateIn(viewModelScope, WhileSubscribed(5000), ChartUiState.Empty)

Historical candles are cached in Room with a (pair, granularity, openTime) primary key. Loading a previously viewed timeframe is instant from the local cache. Cache invalidation uses a TTL: candles older than 1 minute are re-fetched from the server on next view.
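The updateCurrentCandle merge step referenced above can be sketched as follows. This is hypothetical: the real version would take the full TickerUpdate and start a new candle on period rollover, while this minimal form just folds a trade price into the forming candle:

```kotlin
// Hypothetical sketch of the merge step: fold the latest trade price into the
// forming candle. Period rollover and TickerUpdate parsing are omitted.
data class Candle(
    val openTime: Long,
    val open: Double,
    var high: Double,
    var low: Double,
    var close: Double
)

fun MutableList<Candle>.updateCurrentCandle(price: Double) {
    val current = lastOrNull() ?: return       // no history yet, nothing to update
    current.close = price
    if (price > current.high) current.high = price
    if (price < current.low) current.low = price
}
```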


Block 5: Portfolio View — Room + Live Prices

Interviewer: How does the portfolio screen work?

Candidate: Two Room tables — one for holdings (quantities and cost basis), one for last known prices (written by the WebSocket pipeline). The ViewModel combines them with Kotlin Flow's combine operator.

kotlin
@Entity(tableName = "holdings")
data class HoldingEntity(
    @PrimaryKey val assetId: String,    // "BTC"
    val quantity: Double,
    val avgBuyPrice: Double,            // cost basis
    val updatedAt: Long
)
 
@Entity(tableName = "last_known_prices")
data class LastKnownPriceEntity(
    @PrimaryKey val pair: String,       // "BTC-USD"
    val price: Double,
    val change24h: Double,
    val updatedAt: Long
)

The MarketDataManager writes to last_known_prices on every ticker update via Room. The portfolio ViewModel observes both tables:

kotlin
val portfolio: StateFlow<List<PortfolioRowUiState>> = combine(
    holdingDao.getAllHoldings(),           // Flow<List<HoldingEntity>>
    priceDao.getAllPrices()                // Flow<List<LastKnownPriceEntity>>
) { holdings, prices ->
    val priceMap = prices.associateBy { it.pair.substringBefore("-") }
    holdings.map { holding ->
        val currentPrice = priceMap[holding.assetId]?.price ?: return@map null
        val currentValue  = holding.quantity * currentPrice
        val costBasis     = holding.quantity * holding.avgBuyPrice
        val pnl           = currentValue - costBasis
        val pnlPercent    = if (costBasis > 0) (pnl / costBasis) * 100.0 else 0.0
        PortfolioRowUiState(
            assetId      = holding.assetId,
            quantity     = holding.quantity,
            currentValue = currentValue,
            pnl          = pnl,
            pnlPercent   = pnlPercent
        )
    }.filterNotNull()
}.flowOn(Dispatchers.Default)
 .stateIn(viewModelScope, WhileSubscribed(5000), emptyList())

When a new price lands in Room, the prices Flow emits, combine re-runs, and the portfolio P&L updates in real time. The user sees their portfolio value changing live as prices move. No polling, no manual refresh — pure reactive data flow.
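The P&L arithmetic inside the combine block is worth having as a pure, testable function. A sketch, with the function name introduced here:

```kotlin
// Pure form of the P&L math used in the portfolio combine block.
fun pnlPercent(quantity: Double, avgBuyPrice: Double, currentPrice: Double): Double {
    val costBasis = quantity * avgBuyPrice
    if (costBasis <= 0.0) return 0.0           // no cost basis: report 0% rather than divide by zero
    val pnl = quantity * currentPrice - costBasis
    return pnl / costBasis * 100.0
}
```

Keeping it pure means the 0-cost-basis edge case (e.g. airdropped assets) is covered by a unit test rather than discovered as a NaN in production.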

The updatedAt column on last_known_prices is what powers the "Prices as of 2 minutes ago" label when the user is offline. If the updatedAt value is more than 30 seconds old, the UI shows a staleness warning rather than silently displaying potentially misleading data.


Block 6: Trade Execution Flow

Interviewer: Walk me through the buy/sell flow from a system design perspective.

Candidate: The trade flow has two key architectural concerns: idempotency and state persistence.

Order submission with idempotency:

Mobile networks are unreliable. If the user taps "Place Order" and the network times out before the server responds, the client must be safe to retry without creating a duplicate order.

The solution: generate a clientOrderId UUID on the client when the user opens the order form. This ID is sent with the order request and stored in Room as a PendingOrderEntity. The server uses it as an idempotency key — if the same clientOrderId arrives twice, the second is treated as a retry and returns the original response.

plaintext
User taps "Place Order"
        ↓
1. Generate clientOrderId = UUID.randomUUID()
2. Write PendingOrderEntity to Room (status = SUBMITTING)
3. POST /orders { clientOrderId, pair, side, type, amount }
        ↓
   ├─ SUCCESS: Update Room → status = FILLED, amount, fillPrice
   │           Update HoldingEntity (quantity ± filled amount)
   │           Navigate to confirmation screen
   │
   └─ NETWORK TIMEOUT: Retry with same clientOrderId
                       Server deduplicates, returns original result
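The idempotency guarantee can be demonstrated with a toy in-memory server. This is entirely hypothetical; it models only the dedup-by-clientOrderId behaviour the real backend would implement:

```kotlin
import java.util.UUID

// Entirely hypothetical in-memory stand-in for the backend: it models only the
// dedup-by-clientOrderId behaviour, returning the original result on any retry.
class FakeOrderServer {
    private val processed = mutableMapOf<String, String>()

    fun placeOrder(clientOrderId: String): String =
        processed.getOrPut(clientOrderId) { "order-${processed.size + 1}" }
}
```

Submitting the same clientOrderId twice, as the client does after a timeout, yields the same server order rather than a duplicate.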

Order state persisted in Room:

Pending and recent orders live in a trade_history Room table. If the app is killed after submission but before the server responds, the order is still in Room as SUBMITTING. On next app launch, the app can check the server for its status and reconcile.

kotlin
@Entity(tableName = "trade_history")
data class TradeEntity(
    @PrimaryKey val clientOrderId: String,   // UUID, client-generated
    val serverId: String? = null,            // null until server confirms
    val pair: String,
    val side: String,                        // "BUY" or "SELL"
    val orderType: String,                   // "MARKET" or "LIMIT"
    val requestedAmount: Double,
    val fillPrice: Double? = null,
    val filledAmount: Double? = null,
    val status: OrderStatus,                 // SUBMITTING, FILLED, CANCELLED, FAILED
    val createdAt: Long = System.currentTimeMillis()
)

Block 7: Price Alerts Pipeline

Interviewer: A user sets an alert — notify me when BTC-USD crosses $100,000. How does that work?

Candidate: Price alerts have two delivery paths depending on whether the app is in the foreground or the background.

Foreground path — WebSocket tick checking:

While the app is active and the WebSocket is running, every tick processed by the MarketDataManager is checked against active alerts in Room. This gives near-instant alert delivery with no battery cost above what the WebSocket already uses.

kotlin
// In MarketDataManager, on every ticker update
private suspend fun checkPriceAlerts(update: TickerUpdate) {
    val alerts = alertDao.getActiveAlertsForPair(update.pair)
    alerts.forEach { alert ->
        val triggered = when (alert.direction) {
            AlertDirection.ABOVE -> update.price >= alert.targetPrice
            AlertDirection.BELOW -> update.price <= alert.targetPrice
        }
        if (triggered) {
            notificationManager.showPriceAlert(alert, update.price)
            alertDao.markTriggered(alert.id)
        }
    }
}

Background path — WorkManager periodic check:

When the app is backgrounded, WebSocket connections are unsubscribed (covered in the battery section below). WorkManager takes over:

kotlin
class PriceAlertWorker(
    context: Context,
    params: WorkerParameters
) : CoroutineWorker(context, params) {
 
    override suspend fun doWork(): Result {
        val alerts = alertDao.getActiveAlerts()
        if (alerts.isEmpty()) return Result.success()
 
        // Batch-fetch current prices for all pairs with active alerts
        val pairs = alerts.map { it.pair }.distinct()
        val prices = try {
            exchangeApiService.getBatchPrices(pairs)
        } catch (e: IOException) {
            return Result.retry()
        }
 
        alerts.forEach { alert ->
            val price = prices[alert.pair] ?: return@forEach
            val triggered = when (alert.direction) {
                AlertDirection.ABOVE -> price >= alert.targetPrice
                AlertDirection.BELOW -> price <= alert.targetPrice
            }
            if (triggered) {
                showNotification(alert, price)
                alertDao.markTriggered(alert.id)
                // Also update last_known_prices while we're here
                priceDao.updatePrice(alert.pair, price)
            }
        }
        return Result.success()
    }
}

Enqueued as unique periodic work with a 15-minute interval — the minimum WorkManager allows:

kotlin
WorkManager.getInstance(context).enqueueUniquePeriodicWork(
    "price_alert_checker",
    ExistingPeriodicWorkPolicy.KEEP,
    PeriodicWorkRequestBuilder<PriceAlertWorker>(15, TimeUnit.MINUTES)
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED)
                .build()
        )
        .build()
)

The 15-minute floor is a trade-off to name explicitly. For a user with an alert set 0.5% from the current price, 15 minutes of background delay is meaningful. The foreground WebSocket path eliminates this when the app is open. For true sub-minute background precision, a Foreground Service with a persistent WebSocket is the only option — but it requires a persistent notification and has real battery cost. That's a product decision, not a purely technical one.


Block 8: Battery Optimisation

Interviewer: This is a lot of real-time data for a battery-constrained device. How do you manage it?

Candidate: The principle is clear: use the coarsest data channel that satisfies the current screen's needs, and stop receiving data the moment the screen is no longer visible.

Adaptive subscription strategy by screen:

plaintext
Screen                  Channel           Update Rate             Why
Market overview         ticker_batch      Every 5s, all pairs     One channel for all assets
Asset detail            ticker + level2   Per match + per change  Granularity needed for chart and order book
Background / Portfolio  None              n/a                     WorkManager handles alerts

Lifecycle-driven unsubscription:

Register MarketDataManager as an observer of the ProcessLifecycleOwner — not of any individual Activity. The process lifecycle's ON_STOP event fires only when the app has no visible activities, not when a single screen is replaced in the back stack. This is the right scope for a singleton that should be active "while the app is visible."

kotlin
class MarketDataManager : DefaultLifecycleObserver {
 
    fun initialize() {
        // ProcessLifecycleOwner.get() tracks whole-app foreground/background state
        ProcessLifecycleOwner.get().lifecycle.addObserver(this)
    }
 
    override fun onStart(owner: LifecycleOwner) {
        connect()                // WebSocket connects when app comes to foreground
        resubscribeActive()      // Restore subscriptions from activeSubscriptions set
    }
 
    override fun onStop(owner: LifecycleOwner) {
        unsubscribeAll()         // Stop data flowing — but keep TCP connection alive
        // The socket stays connected; reconnect cost is avoided on foreground return
    }
}

The WebSocket connection stays alive when backgrounded — it uses almost no battery when idle (no data flowing). Only the subscriptions are removed. When the user foregrounds the app, the connection is already established and re-subscription is a single lightweight JSON message. No TCP handshake, no TLS handshake, no latency spike.

Radio state awareness: an idle WebSocket transfers almost nothing — OkHttp keeps it alive with periodic pings (configurable via pingInterval). The cellular radio is promoted to full power only when data flows; between ticks it drops back to a low-power state. By reducing tick frequency on the overview screen and unsubscribing when backgrounded, we minimise radio promotion events — the single biggest source of battery drain in network-heavy apps.


Block 9: Offline Resilience

Interviewer: The user opens the app in airplane mode. What do they see?

Candidate: Everything in Room — last known prices, portfolio P&L, chart history, trade history. No blank screen, no loading spinner, no error state.

The MarketRepository checks connectivity before attempting a WebSocket connection. If offline, the app immediately serves Room data and shows a non-blocking "Offline — last updated 3 minutes ago" banner. When ConnectivityManager.NetworkCallback.onAvailable() fires, the WebSocket reconnects, Room updates, and the UI transitions to live data automatically.

The updatedAt timestamp on last_known_prices drives the staleness label. A price updated 3 minutes ago is shown with context. A price updated 2 days ago during a long offline period is shown with a more prominent warning — "Prices may be significantly outdated."
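The label logic is a small pure function over updatedAt. A sketch, with the 30-second and long-offline thresholds taken from this section and the function name introduced here:

```kotlin
// Staleness label driven by the updatedAt column on last_known_prices.
// Returns null when the price is fresh enough to show without a caveat.
fun stalenessLabel(updatedAt: Long, now: Long): String? {
    val ageMs = now - updatedAt
    return when {
        ageMs < 30_000 -> null                                              // fresh, no label
        ageMs < 86_400_000 -> "Offline — last updated ${ageMs / 60_000} minutes ago"
        else -> "Prices may be significantly outdated"                      // multi-day offline
    }
}
```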

One subtlety for the order book offline state: the order book is not cached in Room. It's live-only data — a 24-hour-old order book is meaningless. When offline, the order book section shows a clear "Live data unavailable offline" placeholder. The chart still works from the Room cache; only the real-time components are greyed out.


Common Interview Follow-ups

"How does the order book handle the initial snapshot vs incremental updates?"

This is a real protocol detail that Coinbase's level2 channel requires handling correctly. On initial subscription, the server sends a full order book snapshot. All subsequent messages are deltas (price-level changes — add, update, or remove). The MarketDataManager maintains an in-memory TreeMap<Double, Double> for bids (descending price order) and one for asks (ascending). The snapshot replaces both maps entirely. Each delta applies an incremental update: put(price, size) for adds/updates, remove(price) for removes (size = 0). The ViewModel receives a copy of the top 20 bids and asks on each throttled emission.

If the WebSocket reconnects, a fresh snapshot is received on re-subscription — the in-memory maps are replaced, not merged with the stale state. A reconnection always starts from a clean, server-authoritative snapshot.
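A minimal sketch of the snapshot-plus-delta book described above, using java.util.TreeMap as stated (the class and method names are introduced here for illustration):

```kotlin
import java.util.TreeMap

// Sketch of the in-memory L2 book: a snapshot replaces both sides wholesale,
// a delta mutates one price level, and size == 0.0 removes the level.
class OrderBookState {
    val bids = TreeMap<Double, Double>(reverseOrder<Double>())  // best (highest) bid first
    val asks = TreeMap<Double, Double>()                        // best (lowest) ask first

    fun applySnapshot(
        bidLevels: List<Pair<Double, Double>>,
        askLevels: List<Pair<Double, Double>>
    ) {
        bids.clear(); asks.clear()                              // replace, never merge
        bidLevels.forEach { (price, size) -> bids[price] = size }
        askLevels.forEach { (price, size) -> asks[price] = size }
    }

    fun applyDelta(side: String, price: Double, size: Double) {
        val book = if (side == "buy") bids else asks
        if (size == 0.0) book.remove(price) else book[price] = size
    }

    // Copy of the top n levels, handed to the ViewModel on each throttled emission
    fun topBids(n: Int): List<Pair<Double, Double>> =
        bids.entries.take(n).map { it.key to it.value }
}
```

In production code the price key would more likely be a string or scaled long to avoid Double equality pitfalls; Double is kept here to match the description above.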

"How do you handle a limit order that gets partially filled?"

The exchange sends partial fill events via WebSocket on the user channel (an authenticated channel for account-specific data). Each partial fill event includes filled_quantity and remaining_quantity. The TradeEntity in Room is updated with the cumulative filled amount. The HoldingEntity is updated by the filled delta — not the full order size. The UI shows "Partially filled: 0.05 BTC of 0.10 BTC" on the order detail screen.

"How do you keep the price chart and the live ticker in sync when the user changes timeframe?"

When the user switches from 5-minute candles to 1-hour candles, the AssetDetailViewModel cancels the current candle history Flow and starts a new one for the new granularity. Room may have the 1-hour candles cached (if the user viewed this timeframe before). If not, the ViewModel shows a loading state while the REST fetch completes. The live WebSocket ticker continues uninterrupted — the current forming candle is recalculated for the new timeframe from the existing ticker data.

"What if two ViewModels are both collecting the same ticker Flow — is there double subscription?"

No. SharedFlow with SharingStarted.WhileSubscribed in the MarketDataManager is a hot stream. Multiple collectors receive the same emissions from a single source. The MarketDataManager has one WebSocket subscription per channel regardless of how many ViewModels are observing the Flow. This is a key advantage of the SharedFlow model over a Flow builder — the upstream work runs once and fans out to all collectors.


Quick Interview Checklist

  • ✅ Clarified scope — live prices, order book, chart, portfolio, buy/sell, price alerts, offline
  • ✅ Clean Architecture + MVVM, Room as single source of truth for all persisted state
  • ✅ MarketDataManager singleton scoped to Application — owns WebSocket, outlives screens
  • ✅ SharedFlow with DROP_OLDEST buffer overflow — stale market data never queues
  • ✅ OkHttp WebSocket with exponential backoff reconnection
  • ✅ ConnectivityManager.NetworkCallback for immediate reconnection on network return
  • ✅ Re-subscribe active channels on reconnect from activeSubscriptions set
  • ✅ ticker_batch on market overview, ticker + level2 on asset detail — adaptive channels
  • ✅ ProcessLifecycleOwner observer — unsubscribe all on app background, resubscribe on foreground
  • ✅ WebSocket stays connected when backgrounded (no data flowing) — avoids reconnect cost
  • ✅ Order book throttled to 200ms via throttleLatest() in ViewModel
  • ✅ ListAdapter + DiffUtil on order book — only changed rows rebound
  • ✅ All UI data pre-computed in ViewModel (flowOn(Dispatchers.Default)) — nothing in onBind
  • ✅ Candlestick: MPAndroidChart vs custom Canvas vs Compose Canvas — trade-offs named
  • ✅ Historical candles from REST cached in Room; live candle from WebSocket ticker merged in ViewModel
  • ✅ SharingStarted.WhileSubscribed(5000) — Flow stays warm across brief navigations
  • ✅ Portfolio: combine(holdingsFlow, pricesFlow) — reactive P&L without polling
  • ✅ last_known_prices written by WebSocket pipeline, read by portfolio and offline view
  • ✅ clientOrderId UUID for idempotent order submission — safe to retry on network timeout
  • ✅ TradeEntity in Room with SUBMITTING status — survives process kill mid-submission
  • ✅ Price alerts: foreground via WebSocket tick check; background via WorkManager 15-min periodic
  • ✅ updatedAt staleness label on prices — "Offline — last updated 3 minutes ago"
  • ✅ Order book not cached offline — clear "Live data unavailable" placeholder shown

Conclusion

Designing a trading app for Android stacks constraints in a way most interview questions don't. It requires simultaneously reasoning about real-time data flow, RecyclerView performance under extreme update frequency, battery impact of persistent network connections, offline resilience across every screen, and consistency guarantees when orders are submitted over mobile networks.

The candidates who do well at Coinbase, Robinhood, and Revolut don't just name libraries. They explain why DROP_OLDEST buffer overflow policy is correct for financial ticks, why throttleLatest(200) is a correctness requirement rather than an optimisation, why the ProcessLifecycleOwner is the right scope for the WebSocket manager, and why clientOrderId idempotency exists.

The design pillars:

  1. Room as single source of truth — every screen observes Room; the WebSocket writes into Room
  2. MarketDataManager at Application scope — owns all WebSocket connections; screens subscribe and unsubscribe
  3. Adaptive channel subscriptions — ticker_batch on overview, granular channels on detail, nothing when backgrounded
  4. DROP_OLDEST SharedFlow — stale market data must never queue; always show the latest
  5. Throttled order book with ListAdapter DiffUtil — 200ms throttle + precise diffing = 60fps at 50 updates/second
  6. combine(holdings, prices) — reactive portfolio P&L without a single polling call
  7. clientOrderId idempotency — mobile networks drop; orders must be safe to retry


Frequently Asked Questions

Why use WebSockets instead of polling for real-time price data in Android?

WebSockets maintain a single persistent TCP connection where the server pushes data as it changes. Polling makes repeated HTTP requests on a timer. For a trading app, polling is the wrong choice.

Why polling fails for live market data:

  1. Latency — a poll every 1 second means prices are up to 1 second stale. During volatile markets, this is meaningless data
  2. Battery cost — each HTTP request wakes the cellular radio, negotiates a connection, and transfers data. At 1 poll/second, the radio never drops to low-power state
  3. Bandwidth waste — polling fetches the full price list even when nothing has changed. WebSocket only sends what changed
  4. Server load — 1 million users polling every second = 1 million HTTP requests/second. WebSocket holds 1 million persistent connections with minimal per-message overhead

Why WebSockets work:

  1. One TCP connection per user, established once via an HTTP upgrade handshake
  2. Server pushes new prices as they arrive — sub-100ms delivery
  3. The cellular radio stays in low-power state between pushes
  4. Coinbase's ticker_batch channel delivers all-pairs updates every 5 seconds on the overview screen — further reducing radio activity

What is DROP_OLDEST SharedFlow and why is it correct for financial ticks?

DROP_OLDEST is a buffer overflow policy on SharedFlow. When the buffer is full and a new emission arrives, the oldest buffered item is discarded to make room — rather than suspending the producer or dropping the newest item.

Why it is the correct policy for market data:

  1. Stale prices are worse than no prices — a queued price from 500ms ago, displayed now, shows the user a misleading picture of the market
  2. The UI only needs the latest value — if 10 ticks arrive while the UI is recomposing, only the 10th matters. The first 9 are noise
  3. DROP_LATEST would be wrong — that would keep old data and discard the newest tick, the opposite of what we want
  4. Suspension would be wrong — blocking the WebSocket producer while the UI catches up creates back-pressure that delays all downstream consumers

In practice: if the UI misses ticks during a heavy recomposition or scroll, the next emission it receives will be the most current price. No stale data, no data loss, no back-pressure.
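A plausible configuration of such a flow (the buffer size here is illustrative, not from the original design):

```kotlin
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.flow.MutableSharedFlow

// replay = 1: a new collector immediately sees the latest price.
// DROP_OLDEST: when the buffer fills, the stalest tick is discarded,
// so emission never suspends and never fails on the WebSocket thread.
val ticks = MutableSharedFlow<Double>(
    replay = 1,
    extraBufferCapacity = 64,   // illustrative size
    onBufferOverflow = BufferOverflow.DROP_OLDEST,
)

fun onWebSocketTick(price: Double) {
    ticks.tryEmit(price)        // fire-and-forget: the producer is never blocked
}
```

Because the overflow policy is not SUSPEND, tryEmit always succeeds, which is why it is safe to call from the non-suspending WebSocket callback.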


How do you render an order book at 50 updates per second without dropping frames?

Order book throttling is a correctness requirement at 50 updates/second — not an optimisation. At 60fps you have 16ms per frame. Unbounded updates would schedule DiffUtil work 50 times per second, guaranteed frame drops.

Three mechanisms work together:

  1. Throttle in the ViewModel to 200ms — use a throttleLatest(200) Flow operator. The UI receives at most 5 updates per second, always showing the latest snapshot. The human eye cannot distinguish order book changes faster than ~100ms anyway
  2. ListAdapter with precise DiffUtil — areItemsTheSame checks price level identity; areContentsTheSame checks size. Only rows that actually changed get rebound. In a typical update, 2–3 of 40 rows change — DiffUtil eliminates 37 unnecessary rebinds per update cycle
  3. Pre-compute everything before onBind — format price strings, format size strings, and compute depth bar fractions (Float 0.0–1.0) in the ViewModel on Dispatchers.Default. onBindViewHolder sets pre-formatted values only — zero allocation, zero computation at bind time

Result: 5 throttled updates/second × 3 changed rows × trivial bind = well under 1ms of UI work per frame.
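Kotlin Flow ships no throttleLatest out of the box. One common sketch of the operator the text assumes (emit the first value immediately, then at most one value per window, always the latest seen):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.conflate
import kotlinx.coroutines.flow.transform

// conflate() keeps only the newest pending item while the transform block
// sleeps through the window, so each window re-emits the latest snapshot.
fun <T> Flow<T>.throttleLatest(windowMillis: Long): Flow<T> =
    conflate().transform { value ->
        emit(value)
        delay(windowMillis)
    }
```

In the ViewModel this would read something like orderBookFlow.throttleLatest(200) applied before mapping snapshots to pre-formatted row models.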


What are the options for a candlestick chart in Android and which should you choose?

Three viable approaches exist for rendering a candlestick chart in Android, each with distinct trade-offs.

  • MPAndroidChart — Pros: mature library, pinch-zoom built-in, minimal code. Cons: not actively maintained (last major release 2020), renders synchronously on the main thread, full redraw on every update. Best for: View-based apps where development speed matters
  • Custom Canvas View — Pros: full GPU-accelerated control, surgical updates (redraw only the live candle), no dependency. Cons: significant dev time, must implement gesture handling manually. Best for: high-performance apps, maximum control
  • Compose Canvas — Pros: first-class Compose integration, declarative updates, derivedStateOf prevents unnecessary recomposition. Cons: requires a Compose codebase, less ecosystem tooling. Best for: modern Compose-first apps

How to handle historical vs live candles regardless of approach:

  1. Load historical candles via REST (GET /products/{pair}/candles?granularity=300) — cached in Room
  2. The live forming candle (current period) receives updates from the WebSocket ticker stream
  3. Merge both in the ViewModel using Kotlin Flow's combine operator before passing to the chart
  4. Only the rightmost candle updates in real time — avoid triggering a full chart redraw on every tick

Recommended answer in an interview: name all three, state your choice based on the codebase (Compose Canvas for modern apps), and explain the historical + live candle merge strategy.
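The merge in steps 1–4 reduces to a pure function that folds a tick into the forming (rightmost) candle and rolls a new candle at a period boundary. A sketch with illustrative names and bucketing:

```kotlin
data class Candle(val openTime: Long, val open: Double, val high: Double, val low: Double, val close: Double)

// Fold a live tick into the forming candle; start a new candle when the
// tick crosses into the next period. Only the last list element changes.
fun mergeTick(candles: List<Candle>, tickTime: Long, price: Double, periodMillis: Long): List<Candle> {
    val bucket = (tickTime / periodMillis) * periodMillis   // start of the tick's period
    val last = candles.lastOrNull()
    return if (last != null && last.openTime == bucket) {
        candles.dropLast(1) + last.copy(
            high = maxOf(last.high, price),
            low = minOf(last.low, price),
            close = price,
        )
    } else {
        candles + Candle(bucket, price, price, price, price)
    }
}
```

Because only the rightmost element ever changes, the chart can redraw just the live candle instead of the full series.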


How do you manage WebSocket battery consumption on Android?

WebSocket battery drain comes almost entirely from keeping the cellular radio active — not from the TCP connection itself. An idle WebSocket (open but no data flowing) uses negligible battery.

The strategy: subscribe to the coarsest channel that satisfies the current screen, and unsubscribe when data is not being displayed.

Channel strategy by screen:

  1. Market overview — ticker_batch (all pairs, batched every 5 seconds). One channel, low frequency
  2. Asset detail — ticker (per-trade updates) + level2 (order book). High frequency, but scoped to one asset
  3. Any other screen or backgrounded → unsubscribe all channels

Implementation via ProcessLifecycleOwner:

  1. Register MarketDataManager as a DefaultLifecycleObserver on ProcessLifecycleOwner
  2. onStop() — call unsubscribeAll(). Data stops flowing; the TCP connection stays alive
  3. onStart() — call resubscribeActive(). Re-subscription is a single lightweight JSON message — no TCP/TLS handshake cost

Why keep the TCP connection alive when backgrounded?

Closing and re-establishing a WebSocket connection requires a full TCP handshake + TLS negotiation — typically 200–400ms of latency and a radio burst. Keeping the idle connection open costs almost nothing. When the user foregrounds the app, prices appear immediately rather than after a 400ms reconnect delay.


What is clientOrderId and why does an Android trading app need it?

clientOrderId is a UUID generated on the client device and sent with every order submission. It acts as an idempotency key — the server uses it to detect and deduplicate retried requests.

The problem it solves:

  1. User taps "Buy" — the app sends POST /orders { clientOrderId: "uuid-abc", amount: 100 }
  2. The network drops after the server processes the order but before the response reaches the app
  3. The app times out and retries: POST /orders { clientOrderId: "uuid-abc", amount: 100 }
  4. Without idempotency: two orders are placed. The user buys $200 when they intended $100
  5. With clientOrderId: the server recognises the UUID, returns the original order result, and ignores the duplicate

How to implement correctly:

  1. Generate clientOrderId = UUID.randomUUID() when the user opens the order form — not when they tap submit
  2. Store it in the TradeEntity Room record with status = SUBMITTING before sending
  3. Submit the order with the UUID included in the request body
  4. On network timeout: retry with the same clientOrderId — the server deduplicates
  5. On success: update TradeEntity with the server's orderId and status = FILLED
  6. On process kill mid-submission: the SUBMITTING record in Room is visible on next app launch — the app can show "Order may be pending" and query the server by clientOrderId
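The deduplication in steps 3–4 can be modelled with a toy in-memory "exchange" (FakeExchange is purely illustrative; a real server persists the clientOrderId mapping):

```kotlin
import java.util.UUID

data class OrderResult(val serverOrderId: String)

// FakeExchange is purely illustrative; a real server persists the mapping.
class FakeExchange {
    private val seen = mutableMapOf<String, OrderResult>()  // clientOrderId -> first result
    var ordersPlaced = 0
        private set

    fun submit(clientOrderId: String, amountUsd: Double): OrderResult =
        seen.getOrPut(clientOrderId) {
            ordersPlaced++                                  // only the first request places an order
            OrderResult("srv-$ordersPlaced")
        }
}
```

Retrying after a timeout with the same UUID returns the original result and places no second order, which is exactly the property the mobile client relies on.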

How does the order book snapshot vs delta model work on Android?

Order book initialisation uses a full snapshot; all subsequent updates are deltas. Getting this distinction wrong causes a permanently stale or incorrect order book.

The Coinbase level2 channel flow:

  1. Client subscribes to level2 for BTC-USD
  2. Server sends one full snapshot: the complete list of all bid and ask price levels with sizes
  3. All subsequent messages are deltas: { side: "buy", price: "67432.50", size: "0.00" } — a size of 0 means remove that price level
  4. Client maintains two in-memory TreeMap<Double, Double> — one for bids (descending), one for asks (ascending)
  5. Snapshot → replace both maps entirely. Delta → put(price, size) for non-zero, remove(price) for zero
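A minimal sketch of steps 4 and 5, using the Double-keyed TreeMaps the text describes (a production book would more likely key on decimal strings to avoid floating-point surprises):

```kotlin
import java.util.TreeMap

// Snapshot replaces state entirely; deltas mutate it in place.
class OrderBook {
    val bids = TreeMap<Double, Double>(compareByDescending<Double> { it }) // best bid first
    val asks = TreeMap<Double, Double>()                                   // ascending: best ask first

    fun applySnapshot(bidLevels: Map<Double, Double>, askLevels: Map<Double, Double>) {
        // Replace entirely; never apply deltas on top of stale state.
        bids.clear(); bids.putAll(bidLevels)
        asks.clear(); asks.putAll(askLevels)
    }

    fun applyDelta(side: String, price: Double, size: Double) {
        val book = if (side == "buy") bids else asks
        if (size == 0.0) book.remove(price) else book[price] = size
    }
}
```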

Why this matters on reconnect:

On WebSocket disconnect and reconnect, the client re-subscribes to level2. The server sends a fresh snapshot. The in-memory maps must be replaced entirely — not have deltas applied on top of the stale pre-disconnect state. Applying deltas to stale state produces an incorrect order book that drifts further from reality with every update.

If you see an order book showing impossible price levels (bids above asks, phantom liquidity), it almost always means snapshot/delta handling is wrong.


How do price alerts work in the background on Android?

Background price alerts use two paths depending on whether the app is in the foreground or not — because background execution on Android is heavily restricted.

Foreground path (app is active):

  1. Every tick processed by MarketDataManager triggers an alert check
  2. The check reads active AlertEntity records from Room for the current asset
  3. If currentPrice >= alert.targetPrice (for ABOVE alerts) or currentPrice <= alert.targetPrice (for BELOW alerts): fire a NotificationCompat notification and mark the alert as triggered in Room
  4. Latency: sub-second from price crossing the threshold to notification appearing
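The threshold check in step 3 is a small pure function. A sketch with illustrative names (Alert stands in for the AlertEntity record):

```kotlin
enum class Direction { ABOVE, BELOW }
data class Alert(val id: Long, val targetPrice: Double, val direction: Direction, val triggered: Boolean = false)

// Returns the alerts that should fire for this tick; already-triggered
// alerts are skipped so each alert fires at most once.
fun crossedAlerts(alerts: List<Alert>, currentPrice: Double): List<Alert> =
    alerts.filter { alert ->
        !alert.triggered && when (alert.direction) {
            Direction.ABOVE -> currentPrice >= alert.targetPrice
            Direction.BELOW -> currentPrice <= alert.targetPrice
        }
    }
```

The same function can serve both paths: the foreground tick handler and the periodic background worker, each feeding it a fresh price.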

Background path (app is killed or backgrounded):

  1. PriceAlertWorker (CoroutineWorker) runs via WorkManager
  2. Minimum interval: 15 minutes — the floor WorkManager enforces for periodic work
  3. On each run: fetch current prices for all pairs with active alerts via REST, compare against AlertEntity thresholds, fire notifications for any crossings
  4. Enqueued as enqueueUniquePeriodicWork with ExistingPeriodicWorkPolicy.KEEP

The 15-minute limitation is a deliberate Android constraint:

WorkManager does not support sub-15-minute periodic intervals. For near-real-time background alerts, a Foreground Service with a persistent WebSocket is the only option — but it requires a permanent notification and has real battery cost. For most users, foreground path (when app is open) + 15-minute background polling is an acceptable trade-off. Name this honestly in an interview.


Which companies ask the Android trading app system design question?

Coinbase, Robinhood, Revolut, N26, Binance, and Stripe ask variants of this question for senior Android engineer roles. It also appears at Google and Amazon when teams are building financial or real-time data products.

Why it is a popular interview question:

  1. Real-time constraints are Android-specific — WebSocket lifecycle, ProcessLifecycleOwner scoping, SharedFlow buffer policies, and throttleLatest are platform-level decisions that reveal genuine expertise
  2. Battery awareness — every senior fintech Android role expects candidates to explain how they minimise radio bursts and subscription overhead
  3. 60fps under pressure — the order book at 50 updates/second is a deliberate stress test that reveals whether candidates understand DiffUtil, frame budgets, and thread dispatch

What interviewers specifically listen for:

  1. DROP_OLDEST on SharedFlow — and the specific reasoning that stale ticks are harmful, not just wasteful
  2. throttleLatest(200) as a correctness requirement — not "I added throttling to optimise performance"
  3. ProcessLifecycleOwner over Activity lifecycle — and why scoping to the Activity would cause subscribe/unsubscribe on every screen transition
  4. clientOrderId idempotency — and the exact failure scenario (network drop after server processes but before response) it prevents
  5. Order book snapshot replacement on reconnect — not delta application on stale state

The trading app question is one where the follow-up questions are as hard as the initial design — reconnection protocol, partial fills, timeframe switching, the order book snapshot/delta distinction. Having clean answers to all of them under 45 minutes of interview pressure is a skill that genuinely requires practice out loud. Mockingly.ai has Android-focused system design simulations for engineers preparing for roles at Coinbase, Robinhood, Revolut, and beyond.
