
Low-Level Design Interview Questions for Senior Engineers (2026)

Essential low-level design interview questions with detailed answer frameworks covering object-oriented design, SOLID principles, design patterns, class modeling, and implementation-level architecture expected at senior engineering roles.

20 min read · Updated Apr 20, 2026
interview-questions · low-level-design · senior-engineer · object-oriented-design · design-patterns

Why Low-Level Design Mastery Matters in Senior Engineering Interviews

Low-level design (LLD) interviews evaluate your ability to translate requirements into clean, extensible, and maintainable code architectures. While system design interviews test your ability to reason about distributed infrastructure at scale, LLD interviews assess whether you can design class hierarchies, define interfaces, apply design patterns appropriately, and produce code that other engineers can understand, extend, and maintain over years of evolution.

For senior engineers, LLD is not about memorizing patterns from a textbook. Interviewers expect you to demonstrate judgment: knowing when a pattern adds clarity versus when it introduces unnecessary abstraction, understanding the trade-offs between inheritance and composition, and recognizing when SOLID principles should be relaxed for pragmatic reasons. The best candidates show that their design decisions emerge from deep experience shipping and maintaining production systems.

At companies like Google, Amazon, and Microsoft, the LLD round typically involves designing a complete system at the class and interface level within 45 minutes. You are expected to identify entities, define their relationships, choose appropriate data structures, handle concurrency, and discuss how your design would evolve with changing requirements. This guide covers 15 essential LLD questions with structured frameworks that demonstrate senior-level thinking. For broader interview preparation, explore our system design interview guide and learning paths.

1. Design a parking lot management system.

What the interviewer is really asking: Can you identify entities, define clear interfaces, handle different vehicle types polymorphically, and think about concurrency for a real-world physical system with constraints?

Answer framework:

Start by clarifying requirements: multiple floors, different spot sizes (compact, regular, large), support for motorcycles, cars, buses, a ticketing system with entry/exit timestamps, and payment calculation.

Identify core entities and their relationships. The ParkingLot has multiple Floors, each Floor has multiple ParkingSpots. ParkingSpots have a type (COMPACT, REGULAR, LARGE) and a status (AVAILABLE, OCCUPIED). Vehicles have a type that determines which spots they can use.

Apply the Strategy pattern for spot allocation: define a ParkingStrategy interface with a method findSpot(Vehicle). Implement NearestToEntranceStrategy, NearestToElevatorStrategy, and EvenDistributionStrategy. This allows the allocation algorithm to change without modifying the core parking logic, demonstrating the Open-Closed Principle from SOLID.
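A minimal Python sketch of the allocation strategy described above (the spot model, field names, and the compatibility table are illustrative assumptions, not from the article):

```python
from abc import ABC, abstractmethod

# Assumed compatibility rules: which spot types each vehicle type may use.
SPOT_TYPES_FOR_VEHICLE = {
    "MOTORCYCLE": {"COMPACT", "REGULAR", "LARGE"},
    "CAR": {"REGULAR", "LARGE"},
    "BUS": {"LARGE"},
}

class ParkingSpot:
    def __init__(self, spot_id, spot_type, distance):
        self.spot_id = spot_id
        self.spot_type = spot_type
        self.distance = distance      # distance from the entrance
        self.occupied = False

class ParkingStrategy(ABC):
    @abstractmethod
    def find_spot(self, vehicle_type, spots):
        """Return a free, compatible spot or None."""

class NearestToEntranceStrategy(ParkingStrategy):
    def find_spot(self, vehicle_type, spots):
        allowed = SPOT_TYPES_FOR_VEHICLE[vehicle_type]
        candidates = [s for s in spots
                      if not s.occupied and s.spot_type in allowed]
        # Nearest free compatible spot, or None if everything is taken.
        return min(candidates, key=lambda s: s.distance, default=None)
```

Swapping in a NearestToElevatorStrategy then only requires a new subclass; the caller that invokes find_spot never changes.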

For vehicle-spot compatibility, use a mapping rather than if-else chains: Motorcycle can park in any spot, Car in REGULAR or LARGE, Bus requires multiple consecutive LARGE spots. Implement this with a CompatibilityChecker that encapsulates the rules.

For the ticketing system, a Ticket contains: ticket ID, vehicle info, spot assignment, entry timestamp. On exit, calculate the duration and compute the fee using a PricingStrategy (hourly rate, daily maximum, weekend discounts). This second use of the Strategy pattern allows different pricing models without changing the exit flow.

For concurrency, the critical section is spot allocation: two vehicles arriving simultaneously must not be assigned the same spot. Use optimistic locking on spot status, or a synchronized allocation method per floor. Discuss the trade-off: a global lock is simple but creates a bottleneck at peak hours; per-floor locks reduce contention but require more complex spot searching.

Design for extensibility: how would you add electric vehicle charging spots? Add a ChargingSpot subclass with a chargerType attribute and extend the ParkingStrategy to prefer charging spots for EVs. How would you add a reservation system? Add a RESERVED status and a Reservation entity with time bounds.

Follow-up questions:

  • How would you handle the scenario where a bus needs 5 consecutive large spots and some are occupied in the middle?
  • How do you design the payment system to support multiple payment methods without modifying existing code?
  • How would you implement a real-time display showing available spots per floor?

2. Design a chess game engine.

What the interviewer is really asking: Can you model complex rules with clean abstractions, handle the interplay between pieces with different movement patterns, and manage game state transitions correctly?

Answer framework:

Identify the core entities: Game, Board, Square, Piece (abstract), and specific piece types (King, Queen, Rook, Bishop, Knight, Pawn). Each piece has a color (WHITE, BLACK) and a position on the board.

For piece movement, define an abstract method getValidMoves(Board) on the Piece class. Each subclass implements its own movement logic. The Knight returns all L-shaped positions that are on the board and not occupied by a friendly piece. The Bishop returns all diagonal positions until blocked. The Pawn has the most complex logic: forward one (or two from start), diagonal capture, en passant, and promotion.

Separate move validation from move execution. A Move object contains: source square, destination square, piece moved, piece captured (if any), and special move type (CASTLING, EN_PASSANT, PROMOTION). The MoveValidator checks legality: the piece can reach the destination, the path is not blocked, and the move does not leave the king in check. This separation follows the Single Responsibility Principle.

For check and checkmate detection: after every move, compute all opponent attacking squares. If the king's square is attacked, it is check. If the king is in check and no legal move resolves it, it is checkmate. Optimize with incremental attack maps rather than recomputing from scratch.

Use the Command pattern for moves: each Move is a command that can be executed (applying the move to the board) and undone (reverting the board state). This enables undo/redo functionality and is essential for the move search algorithm if you add an AI opponent.
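As a sketch, the Command pattern for moves might look like this in Python, using a plain dict as the board (square name to piece string); the names here are illustrative:

```python
class MoveCommand:
    """One move as an undoable command (captures are remembered for undo)."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
        self.captured = None          # filled in on execute, used by undo

    def execute(self, board):
        self.captured = board.get(self.dst)   # remember any captured piece
        board[self.dst] = board.pop(self.src)

    def undo(self, board):
        board[self.src] = board.pop(self.dst)
        if self.captured is not None:         # restore the captured piece
            board[self.dst] = self.captured

class MoveHistory:
    """Undo/redo stacks built on top of the command objects."""
    def __init__(self):
        self._done, self._undone = [], []

    def do(self, move, board):
        move.execute(board)
        self._done.append(move)
        self._undone.clear()          # a new move invalidates the redo stack

    def undo(self, board):
        move = self._done.pop()
        move.undo(board)
        self._undone.append(move)
```

The same execute/undo pair is exactly what a minimax search needs to explore and retract candidate moves cheaply.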

For game state management, use a State pattern: the Game transitions between states (WHITE_TURN, BLACK_TURN, CHECK, CHECKMATE, STALEMATE, DRAW). State transitions are triggered by moves and validated by the game rules.

Discuss how SOLID principles apply: Open-Closed (new piece types can be added without modifying existing code), Liskov Substitution (any Piece subclass can be used wherever a Piece is expected), Interface Segregation (separate interfaces for Movable, Capturable, Promotable).

Follow-up questions:

  • How would you implement an AI opponent using minimax with alpha-beta pruning?
  • How do you handle the draw-by-repetition rule efficiently without storing all historical board states?
  • How would you extend the design to support chess variants like Chess960?

3. Design an elevator system for a 50-story building with 8 elevators.

What the interviewer is really asking: Can you design a system with multiple concurrent actors (elevators), complex scheduling decisions, and real-time state management? This tests your ability to model state machines and optimization algorithms.

Answer framework:

Identify entities: Building, Elevator, Floor, Request (with direction UP/DOWN and floor number), and ElevatorController (the scheduler). Each Elevator has state: current floor, direction (UP, DOWN, IDLE), a queue of floor stops, current load, and door status.

Model the Elevator as a state machine with states: IDLE (waiting at a floor), MOVING_UP, MOVING_DOWN, DOORS_OPENING, DOORS_OPEN, DOORS_CLOSING. Transitions are triggered by events: new request assigned, floor reached, timer expired (doors close after 5 seconds), obstruction detected.
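The state machine above can be sketched with an explicit transition table; the exact set of legal transitions below is an assumption derived from the event list:

```python
from enum import Enum

class State(Enum):
    IDLE = "IDLE"
    MOVING_UP = "MOVING_UP"
    MOVING_DOWN = "MOVING_DOWN"
    DOORS_OPENING = "DOORS_OPENING"
    DOORS_OPEN = "DOORS_OPEN"
    DOORS_CLOSING = "DOORS_CLOSING"

# Assumed legal transitions, one entry per source state.
TRANSITIONS = {
    State.IDLE: {State.MOVING_UP, State.MOVING_DOWN, State.DOORS_OPENING},
    State.MOVING_UP: {State.DOORS_OPENING},            # floor reached
    State.MOVING_DOWN: {State.DOORS_OPENING},
    State.DOORS_OPENING: {State.DOORS_OPEN},
    State.DOORS_OPEN: {State.DOORS_CLOSING},           # timer expired
    State.DOORS_CLOSING: {State.DOORS_OPEN,            # obstruction detected
                          State.IDLE, State.MOVING_UP, State.MOVING_DOWN},
}

class Elevator:
    def __init__(self):
        self.state = State.IDLE
        self.current_floor = 0

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

Making illegal transitions raise loudly is the point: bugs like "doors opened while moving" become impossible to represent rather than merely unlikely.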

For the scheduling algorithm, implement the ElevatorScheduler interface. The simplest implementation is SCAN (elevator algorithm): each elevator moves in one direction, servicing all requests in that direction, then reverses. More sophisticated: LOOK (reverses when no more requests in the current direction rather than going to the end).

For the dispatching strategy (which elevator handles a new request), implement multiple strategies. NearestElevator: assign to the closest elevator, ignoring direction. DirectionalNearest: assign to the nearest elevator already moving in the right direction. Optimized: minimize average waiting time using a cost function that considers distance, direction, current load, and number of pending stops.

Apply the Observer pattern: when an elevator arrives at a floor or changes state, it notifies the ElevatorController, which updates its scheduling decisions. Internal panels and external displays observe elevator state for UI updates.

For concurrency, each elevator runs as an independent actor (thread or coroutine) with its own event loop. The controller communicates with elevators via message passing (command queue). This avoids shared mutable state and the bugs that come with it.

Handle edge cases: overweight detection (elevator does not accept new stops until load decreases), maintenance mode (elevator is removed from service), fire mode (all elevators return to ground floor and stop), VIP/express mode (dedicate elevators to specific floor ranges in a tall building).

Relate to real-time system design principles discussed in our learning paths where similar scheduling and optimization problems appear in distributed task scheduling.

Follow-up questions:

  • How would you optimize for rush hour when everyone goes to the lobby at 5 PM?
  • How do you handle an elevator breakdown and redistribute its pending requests?
  • How would you implement destination dispatch where users enter their floor before entering the elevator?

4. Design a URL shortener at the class level.

What the interviewer is really asking: Can you design clean interfaces, handle concurrent URL generation without collisions, and think about storage abstraction, caching, and analytics at the code level?

Answer framework:

Identify the core classes: URLShortenerService (facade), URLGenerator (creates short codes), URLRepository (persistence abstraction), AnalyticsService (tracks clicks), and CacheManager (hot URL caching).

For the URLGenerator, apply the Strategy pattern. Define a URLGenerationStrategy interface with method generate(String longUrl): String. Implementations include: Base62CounterStrategy (uses an atomic counter, converts to base62 for a short deterministic code), HashBasedStrategy (SHA-256 of the URL, take first 7 characters, handle collisions), and RandomStrategy (generate random base62 string, check uniqueness). The service can switch strategies without changing client code.
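A minimal sketch of the counter-based strategy, assuming a standard base62 alphabet of digits plus lowercase plus uppercase (a lock stands in for Java's AtomicLong in this single-node sketch):

```python
import threading

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

class Base62CounterStrategy:
    """Counter-based generation: short, unique, deterministic codes."""
    def __init__(self, start=0):
        self._counter = start
        self._lock = threading.Lock()   # single-node stand-in for AtomicLong

    def generate(self, long_url):
        with self._lock:                # atomic fetch-and-increment
            n = self._counter
            self._counter += 1
        return self._encode(n)

    @staticmethod
    def _encode(n):
        if n == 0:
            return ALPHABET[0]
        digits = []
        while n:
            n, r = divmod(n, 62)        # peel off base-62 digits
            digits.append(ALPHABET[r])
        return "".join(reversed(digits))
```

Note the code ignores the long URL entirely: uniqueness comes from the counter, which is exactly why this strategy needs no collision handling (and why its output is predictable).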

For the URLRepository, define an interface with methods: save(ShortURL), findByCode(String code), findByLongUrl(String url). Implement with InMemoryURLRepository for testing and PostgresURLRepository for production. This Dependency Inversion principle keeps the core logic independent of storage details, following SOLID principles.

For the ShortURL entity: shortCode, longUrl, createdAt, expiresAt, userId (creator), clickCount, isActive. Use the Builder pattern for construction since there are many optional fields. Make the entity immutable after creation (all fields final, no setters).

For caching, implement a CacheManager using the Decorator pattern wrapping the URLRepository. On findByCode, check cache first (Redis). On miss, query the database and populate the cache. Since URLs are immutable, cache invalidation is only needed for expiration and deactivation.

For analytics, use the Observer pattern: when a URL is resolved, publish a ClickEvent to registered observers. The AnalyticsObserver records the event asynchronously (to avoid adding latency to redirects). The click event contains: shortCode, timestamp, referrer, userAgent, ipAddress, country.

For concurrency in the counter-based approach, use an AtomicLong for single-node deployment or a distributed counter (Redis INCR, ZooKeeper sequential node) for multi-node. Discuss the trade-off: atomic counter guarantees uniqueness but creates a sequential pattern that is predictable; hash-based is random but requires collision handling.

Design for extensibility: how would you add custom aliases? Add a CustomAliasValidator that checks length, character restrictions, profanity filter, and uniqueness. How would you add link expiration? A scheduled ExpirationService queries for expired URLs and marks them inactive.

Follow-up questions:

  • How would you design rate limiting for URL creation without tightly coupling it to the core service?
  • How do you handle the case where two users shorten the same long URL simultaneously?
  • How would you implement A/B testing where a short URL randomly redirects to different destinations?

5. Design an in-memory cache with LRU eviction.

What the interviewer is really asking: Can you implement a data structure that combines O(1) reads, O(1) writes, and O(1) eviction? This tests fundamental data structure knowledge and your understanding of composition.

Answer framework:

The core insight: LRU cache requires a HashMap (O(1) key lookup) combined with a Doubly Linked List (O(1) removal and insertion at head/tail). On access, move the node to the head (most recently used). On eviction, remove from the tail (least recently used).

Define the classes: LRUCache<K, V> (the public API), CacheNode<K, V> (doubly-linked list node containing key, value, prev, next), and DoublyLinkedList<K, V> (manages head/tail pointers with addToHead, removeNode, removeTail operations).

The LRUCache API: get(key) returns the value and promotes the node to head; put(key, value) inserts or updates, promotes to head, and evicts from tail if over capacity; remove(key) explicitly removes an entry; size() returns current size; clear() removes all entries.
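A minimal single-threaded sketch of the HashMap plus doubly linked list combination; sentinel head/tail nodes (an implementation convenience, not something the interview question mandates) keep the pointer bookkeeping simple:

```python
class Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """HashMap + doubly linked list: O(1) get, put, and eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}
        self.head = Node()            # sentinel: most recently used side
        self.tail = Node()            # sentinel: least recently used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_to_head(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._remove(node)            # promote to most recently used
        self._add_to_head(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = Node(key, value)
        self.map[key] = node
        self._add_to_head(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev      # evict least recently used
            self._remove(lru)
            del self.map[lru.key]
```

Storing the key inside each node is the subtle detail interviewers look for: without it, eviction cannot delete the corresponding map entry in O(1).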

For thread safety, the basic implementation is not thread-safe. Discuss three approaches. Synchronized methods: simplest but creates a global lock bottleneck for concurrent reads. ReadWriteLock: allows concurrent reads but exclusive writes, better for read-heavy workloads. Segmented locking (ConcurrentHashMap approach): partition the cache into segments, each with its own lock. Reduces contention dramatically but complicates the LRU ordering (each segment has its own LRU list).

Discuss the approximated LRU approach (used by Redis): instead of maintaining a perfect LRU order, sample N random keys and evict the one with the oldest access timestamp. This trades perfect LRU accuracy for much better concurrent performance. In practice, with N=5, it is nearly as good as perfect LRU.

For extensibility, apply the Strategy pattern: define a CacheEvictionPolicy interface with methods shouldEvict() and selectVictim(). Implement LRUPolicy, LFUPolicy (least frequently used), and TTLPolicy (time-to-live). The cache delegates eviction decisions to the policy.

Add cache statistics: hit count, miss count, eviction count, hit ratio. Use atomic counters for thread-safe increment without locking. Implement a CacheStats class that provides a snapshot of metrics.

Relate to how production systems at companies like Google use multi-level caching strategies where the LRU eviction policy is one component in a larger caching architecture.

Follow-up questions:

  • How would you implement an LRU cache that supports entry-level TTL in addition to capacity-based eviction?
  • How do you handle the thundering herd problem when a popular cache entry is evicted?
  • How would you implement a write-behind cache that asynchronously persists evicted entries?

6. Design a library management system.

What the interviewer is really asking: Can you model real-world entities with appropriate relationships, handle state transitions (book checkout/return), and think about search, reservations, and notifications?

Answer framework:

Identify entities: Library, Book, BookItem (physical copy), Member, Librarian, Loan, Reservation, Fine, and Catalog. Distinguish between Book (the abstract concept with title, ISBN, author) and BookItem (a specific physical copy with barcode, condition, rack location). A Book has many BookItems.

For the Member entity, apply inheritance thoughtfully: define a Person base class with name, email, phone. Both Member and Librarian extend Person but have different capabilities. Prefer a role-based permission model over deep inheritance: a User has a Set<Role>, where each Role determines permitted operations (CHECKOUT, RESERVE, ADD_BOOK, MANAGE_FINES).

For book checkout flow, model as a state machine. BookItem states: AVAILABLE, CHECKED_OUT, RESERVED, LOST, UNDER_REPAIR. Transitions: AVAILABLE to CHECKED_OUT (on checkout), CHECKED_OUT to AVAILABLE (on return), AVAILABLE to RESERVED (on reservation). The Loan entity records: bookItem, member, checkoutDate, dueDate, returnDate, renewalCount.

For search, implement the Catalog class with the Specification pattern. Define BookSpecification interface with method isSatisfiedBy(Book). Implementations: TitleSpecification, AuthorSpecification, ISBNSpecification, CategorySpecification. Compose specifications with AndSpecification, OrSpecification, NotSpecification. This creates a flexible query system without hardcoding search logic.
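A sketch of the Specification pattern in Python, with books modeled as plain dicts for brevity (operator overloading for composition is a convenience choice, not part of the pattern itself):

```python
from abc import ABC, abstractmethod

class BookSpecification(ABC):
    @abstractmethod
    def is_satisfied_by(self, book): ...

    def __and__(self, other):
        return AndSpecification(self, other)

    def __or__(self, other):
        return OrSpecification(self, other)

class AndSpecification(BookSpecification):
    def __init__(self, *specs):
        self.specs = specs
    def is_satisfied_by(self, book):
        return all(s.is_satisfied_by(book) for s in self.specs)

class OrSpecification(BookSpecification):
    def __init__(self, *specs):
        self.specs = specs
    def is_satisfied_by(self, book):
        return any(s.is_satisfied_by(book) for s in self.specs)

class TitleSpecification(BookSpecification):
    def __init__(self, keyword):
        self.keyword = keyword
    def is_satisfied_by(self, book):
        return self.keyword.lower() in book["title"].lower()

class AuthorSpecification(BookSpecification):
    def __init__(self, author):
        self.author = author
    def is_satisfied_by(self, book):
        return book["author"] == self.author
```

The Catalog then becomes a one-liner: filter its book list by spec.is_satisfied_by, whatever the composed query is.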

For notifications, apply the Observer pattern. When relevant events occur (due date approaching, reserved book available, fine assessed), notify the member via their preferred channel. Define a NotificationService interface with implementations: EmailNotification, SMSNotification, InAppNotification. Members configure their notification preferences.

For fine calculation, use the Strategy pattern: FineCalculationStrategy with implementations for different fine policies (flat daily rate, progressive rate, maximum cap). This allows the library to change fine policies without modifying the loan logic.

Apply SOLID principles throughout: Single Responsibility (Loan handles borrowing logic, Fine handles penalty logic, Notification handles communication), Open-Closed (new book types or notification channels require no core changes), Dependency Inversion (services depend on interfaces, not concrete implementations).

Follow-up questions:

  • How would you handle a scenario where a member wants to reserve a book that is currently checked out by another member?
  • How do you design the renewal system with limits (max 2 renewals) and blocking conditions (another member has reserved it)?
  • How would you extend the system to support inter-library loans?

7. Design a food delivery system like DoorDash at the class level.

What the interviewer is really asking: Can you model a multi-actor system (customers, restaurants, delivery drivers) with complex state transitions, real-time status tracking, and event-driven communication between actors?

Answer framework:

Identify the actors and entities: Customer, Restaurant, DeliveryDriver, Order, OrderItem, Menu, MenuItem, Payment, Rating, and the orchestrating services (OrderService, DispatchService, TrackingService).

For the Order lifecycle, model as a state machine with states: PLACED, CONFIRMED (by restaurant), PREPARING, READY_FOR_PICKUP, DRIVER_ASSIGNED, PICKED_UP, IN_TRANSIT, DELIVERED, CANCELLED. Each transition is triggered by a specific actor: restaurant confirms, restaurant marks ready, driver picks up, driver delivers. The Order entity tracks state history with timestamps for analytics and dispute resolution.

For menu modeling, Restaurant has a Menu containing MenuItems. Each MenuItem has: name, description, price, category, availabilitySchedule, customizationOptions. Use the Composite pattern for customizations: a MenuItem can have CustomizationGroups (like Size, Toppings) each containing CustomizationOptions (Small/Medium/Large, each with a price modifier). An OrderItem references a MenuItem plus selected customizations.

For the dispatch algorithm, design a DriverDispatcher with a pluggable DispatchStrategy. Factors: driver proximity to restaurant (requires integration with maps), driver current load (already carrying an order), estimated delivery time, driver rating, and order priority. Model this as a cost function that scores each available driver for a given order.

For real-time tracking, each DeliveryDriver publishes location updates. The TrackingService maintains driver positions and computes ETAs. Customers subscribe to their order's tracking feed via WebSocket connections. Use the pub-sub pattern: when a driver updates their location, publish to a topic that the customer's WebSocket handler subscribes to.

For pricing, apply the Decorator pattern: base price from menu items, plus delivery fee (distance-based), plus service fee (percentage), minus promotions (coupon codes, first-order discount). Each pricing component is a PriceModifier that wraps the base calculation. This allows new fees or discounts to be added without modifying existing pricing logic.
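The decorated pricing chain might be sketched as follows; the specific fee rates and the order dict shape are illustrative assumptions:

```python
from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    @abstractmethod
    def total(self, order): ...

class BasePrice(PriceCalculator):
    def total(self, order):
        return sum(item["price"] * item["qty"] for item in order["items"])

class PriceModifier(PriceCalculator):
    """Decorator base: wraps another calculator and adjusts its total."""
    def __init__(self, inner):
        self.inner = inner

class DeliveryFee(PriceModifier):
    def __init__(self, inner, fee_per_km=0.5):
        super().__init__(inner)
        self.fee_per_km = fee_per_km
    def total(self, order):
        return self.inner.total(order) + order["distance_km"] * self.fee_per_km

class ServiceFee(PriceModifier):
    def __init__(self, inner, pct=0.10):
        super().__init__(inner)
        self.pct = pct
    def total(self, order):
        return self.inner.total(order) * (1 + self.pct)

class Coupon(PriceModifier):
    def __init__(self, inner, amount):
        super().__init__(inner)
        self.amount = amount
    def total(self, order):
        return max(0.0, self.inner.total(order) - self.amount)
```

Adding a surge multiplier later means one new PriceModifier subclass; nothing in the existing chain changes.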

Handle edge cases: restaurant cancellation after driver is dispatched (reassign driver or compensate), driver cancellation mid-delivery (find nearest available driver), customer cancellation at different stages (different refund policies per state).

Relate the dispatch system to how Uber solves similar driver-rider matching problems at scale.

Follow-up questions:

  • How would you design the system to handle peak dinner hours when demand exceeds driver supply?
  • How do you model batched orders where a driver picks up from multiple restaurants on one trip?
  • How would you implement surge pricing for delivery fees during high demand?

8. Design a movie ticket booking system like BookMyShow.

What the interviewer is really asking: Can you handle seat selection concurrency, temporal locking (hold seats while user pays), and the complexities of a booking system with limited inventory?

Answer framework:

Identify entities: Movie, Theater, Screen, Show (a movie at a specific screen and time), Seat, Booking, User, Payment. A Theater has multiple Screens. A Screen has a fixed SeatMap (rows and columns with seat types: REGULAR, PREMIUM, VIP). A Show associates a Movie with a Screen at a DateTime.

For seat selection concurrency, this is the critical design challenge. When User A selects seats and opens the payment page, those seats must be temporarily locked so User B cannot book them. But if User A abandons payment, the seats must be released.

Implement a temporal locking mechanism: when a user selects seats, create a SeatHold with a TTL (typically 10 minutes). The SeatHold contains: showId, seatIds, userId, expiresAt. While held, these seats appear unavailable to other users. If payment completes within TTL, convert to a confirmed Booking. If TTL expires, release the hold automatically.

For the locking implementation, use the State pattern on seats for a given show. Each seat in a show has status: AVAILABLE, HELD, BOOKED. Transitions: AVAILABLE to HELD (on selection, with TTL), HELD to BOOKED (on payment success), HELD to AVAILABLE (on TTL expiration or user cancellation), BOOKED to AVAILABLE (on cancellation within policy).

For concurrency control, use optimistic locking: when attempting to hold seats, use a compare-and-swap operation (UPDATE seats SET status='HELD' WHERE showId=? AND seatId IN (?) AND status='AVAILABLE'). If the affected row count is less than requested, some seats were taken. Return an error and let the user re-select.
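An in-memory sketch of the all-or-nothing hold with TTL; a process-local lock stands in for the database's atomic conditional UPDATE, and an injectable `now` keeps expiry testable (both are sketch conveniences):

```python
import threading
import time

class SeatInventory:
    """All-or-nothing seat holds with a TTL, per show."""
    def __init__(self, seat_ids):
        self._lock = threading.Lock()
        self._status = {s: "AVAILABLE" for s in seat_ids}
        self._holds = {}              # seat_id -> expiry timestamp

    def hold(self, seat_ids, ttl_seconds=600, now=None):
        now = time.time() if now is None else now
        with self._lock:              # stands in for the atomic UPDATE
            self._release_expired(now)
            if any(self._status[s] != "AVAILABLE" for s in seat_ids):
                return False          # some seat was taken: user must re-select
            for s in seat_ids:
                self._status[s] = "HELD"
                self._holds[s] = now + ttl_seconds
            return True

    def confirm(self, seat_ids):
        """Payment succeeded: convert the hold into a booking."""
        with self._lock:
            for s in seat_ids:
                self._status[s] = "BOOKED"
                self._holds.pop(s, None)

    def _release_expired(self, now):
        for s, expiry in list(self._holds.items()):
            if expiry <= now:         # TTL elapsed: seat goes back on sale
                self._status[s] = "AVAILABLE"
                del self._holds[s]
```

The key property mirrors the SQL version: either every requested seat flips to HELD, or none do and the caller gets an immediate failure.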

For pricing, apply the Strategy pattern: SeatPricingStrategy varies by seat type (Premium costs more), show time (evening more expensive), day of week (weekend surcharge), and demand (dynamic pricing for popular shows). Compose pricing rules using the Chain of Responsibility pattern.

For the booking flow: browse movies, select show, view seat map, select seats (create hold), enter payment details, process payment, confirm booking (convert hold to booking), send confirmation notification.

Design for scale: during popular movie releases (like a Marvel premiere), thousands of users compete for seats simultaneously. Discuss how the optimistic locking approach handles this gracefully: most users get fast success, and those who lose the race get immediate feedback to try other seats.

Follow-up questions:

  • How would you implement a waitlist for sold-out shows that notifies users when cancellations occur?
  • How do you handle partial refunds for group bookings where one person cancels?
  • How would you design the seat selection UI to show real-time availability updates as other users book?

9. Design a social media feed ranking system at the class level.

What the interviewer is really asking: Can you design a scoring and ranking pipeline with clean abstractions, pluggable ranking signals, and the flexibility to support A/B testing of different ranking algorithms?

Answer framework:

Identify the core components: FeedService (orchestrator), CandidateGenerator (sources content), RankingModel (scores candidates), PostFilter (removes unwanted content), FeedAssembler (final composition), and ExperimentRouter (A/B testing).

For the CandidateGenerator, use the Composite pattern: multiple generators each produce candidate posts. FriendPostsGenerator fetches recent posts from friends. GroupPostsGenerator fetches from joined groups. TrendingGenerator fetches viral content. RecommendedGenerator suggests content from non-followed accounts. Each generator implements the CandidateSource interface with method getCandidates(userId, limit): List<FeedCandidate>. The composite merges and deduplicates.

For the RankingModel, design a pluggable scoring system. Define a ScoringFeature interface: computeScore(FeedCandidate, UserContext): double. Implementations: RecencyFeature (newer = higher), EngagementFeature (more likes/comments = higher), AffinityFeature (posts from close friends = higher), ContentTypeFeature (user prefers videos over text). The RankingModel combines feature scores with learned weights: finalScore = sum(weight_i * feature_i). This linear model is interpretable; for production, wrap a neural network behind the same interface.
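The weighted linear scoring might look like the following sketch; the two concrete features and their normalizations are illustrative, not a real ranking recipe:

```python
from abc import ABC, abstractmethod

class ScoringFeature(ABC):
    @abstractmethod
    def compute_score(self, candidate, ctx): ...

class RecencyFeature(ScoringFeature):
    def compute_score(self, candidate, ctx):
        age_hours = ctx["now"] - candidate["posted_at"]
        return 1.0 / (1.0 + age_hours)            # newer -> closer to 1

class EngagementFeature(ScoringFeature):
    def compute_score(self, candidate, ctx):
        return min(1.0, candidate["likes"] / 1000.0)   # capped at 1

class RankingModel:
    """finalScore = sum(weight_i * feature_i) over pluggable features."""
    def __init__(self, weighted_features):
        self.weighted_features = weighted_features  # list of (weight, feature)

    def score(self, candidate, ctx):
        return sum(w * f.compute_score(candidate, ctx)
                   for w, f in self.weighted_features)

    def rank(self, candidates, ctx):
        return sorted(candidates, key=lambda c: self.score(c, ctx),
                      reverse=True)
```

Because the model only sees the ScoringFeature interface, an A/B experiment can swap in a learned model by giving it the same compute_score signature.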

For filtering, apply the Chain of Responsibility pattern: a chain of PostFilter implementations. BlockedUserFilter (remove posts from blocked users), ContentPolicyFilter (remove policy-violating content), SeenPostFilter (remove already-viewed posts), DiversityFilter (prevent too many posts from same author). Each filter either passes or removes the candidate.

For A/B testing, the ExperimentRouter assigns users to experiments. Each experiment configures: which CandidateGenerators to use, which RankingModel weights, which Filters to apply. This allows testing completely different ranking approaches without code changes. Define an ExperimentConfig class containing all tunable parameters.

For pagination, implement cursor-based pagination: the feed response includes a cursor (encoded position) that the client sends with the next request. The server resumes ranking from that position. Handle the case where new posts arrive between pages (use a snapshot timestamp).

Relate the ranking pipeline to how systems described in the WhatsApp system design handle message ordering and the broader challenges of real-time content delivery.

Follow-up questions:

  • How do you handle the cold start problem for new users with no engagement history?
  • How would you implement a "why am I seeing this" explanation feature?
  • How do you prevent engagement-bait content from dominating the ranking?

10. Design an online auction system like eBay.

What the interviewer is really asking: Can you handle time-sensitive operations, concurrent bidding, auction lifecycle management, and the complex business rules around winning and payment?

Answer framework:

Identify entities: User, Auction, Bid, Item, AuctionType (ENGLISH, DUTCH, SEALED_BID), WatchList, Notification, and Payment.

For the Auction lifecycle, model with the State pattern. States: DRAFT (not yet listed), ACTIVE (accepting bids), CLOSING (final minutes with anti-sniping), ENDED (winner determined), PAYMENT_PENDING, COMPLETED, CANCELLED. Each state defines allowed operations: only ACTIVE accepts bids, only DRAFT allows editing.

For bid processing in an English auction (ascending bids), the critical invariant is: every accepted bid must be higher than the current highest bid. With concurrent bidders, this requires careful synchronization. Use optimistic locking: each bid submission checks if its amount exceeds the current highest bid atomically. Use a compare-and-swap pattern: UPDATE auction SET currentBid = newBid WHERE auctionId = ? AND currentBid < newBid. Only one concurrent bid will succeed; others must retry with a higher amount.

Implement proxy bidding (auto-bidding): a user sets a maximum bid. The system automatically bids the minimum increment above any new bid, up to the maximum. Model as an AutoBidRule: userId, auctionId, maxAmount, currentProxy. When a new bid arrives, check all active proxy bids and trigger the lowest one that exceeds the new bid.
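Proxy resolution for a single incoming bid might be sketched as follows; picking the rule with the lowest maximum is one plausible tie-breaking policy (real auction sites have more elaborate rules, including bidding the second-highest maximum plus an increment):

```python
class AutoBidRule:
    def __init__(self, user_id, max_amount):
        self.user_id = user_id
        self.max_amount = max_amount

def resolve_proxy_bids(current_bid, increment, rules):
    """After a new bid lands at current_bid, trigger the proxy rule with
    the lowest maximum that can still beat it. Returns (new_bid, bidder)."""
    needed = current_bid + increment
    eligible = [r for r in rules if r.max_amount >= needed]
    if not eligible:
        return current_bid, None        # no proxy can respond
    winner = min(eligible, key=lambda r: r.max_amount)
    return needed, winner.user_id
```

In the full design this function runs inside the same atomic section as the bid acceptance itself, so a proxy response cannot interleave with another manual bid.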

For anti-sniping (preventing last-second bids from being unfair), extend the auction end time by 2 minutes whenever a bid arrives in the final 5 minutes. Model this in the CLOSING state: on each new bid, compute newEndTime = max(currentEndTime, now + 2 minutes).

For the notification system, use the Observer pattern with pub-sub mechanics: when an auction event occurs (new bid, outbid, auction ending soon, auction won), notify all watchers and participants. Implement notification preferences: email, push, SMS, with different urgency levels per event type.

For different auction types, apply the Strategy pattern: AuctionBidStrategy interface with implementations for English (ascending), Dutch (descending price that drops until someone bids), and Sealed-Bid (all bids submitted privately, highest wins). The Auction delegates bid processing to its strategy.

Apply SOLID principles: the Auction class handles lifecycle, BidProcessor handles bid validation and acceptance, NotificationService handles communication. Each has a single responsibility and can be modified independently.

Follow-up questions:

  • How would you detect and prevent bid shilling (seller bidding on their own item)?
  • How do you handle a reserve price that is not met when the auction ends?
  • How would you implement a "Buy It Now" option that coexists with the auction format?

11. Design a rate limiter framework.

What the interviewer is really asking: Can you design a reusable framework with multiple algorithms, configurable policies, and clean interfaces that other services can easily integrate?

Answer framework:

Define the public API: RateLimiter interface with method isAllowed(String clientId): boolean (or a richer response with remaining quota and reset time). The framework should support multiple algorithms and be configurable per client or per endpoint.

Implement multiple algorithm strategies:

  • TokenBucket: a bucket fills at rate R tokens per second, up to max capacity C. Each request consumes one token; if the bucket is empty, reject. Good for burst handling (capacity allows short bursts).
  • SlidingWindowLog: maintain a log of request timestamps and count requests in the window [now - windowSize, now]. If the count exceeds the limit, reject. Exact but memory-intensive.
  • SlidingWindowCounter: divide time into fixed windows and track counts per window. For the current moment, interpolate between the current window count and the previous window count, weighted by the overlap. Approximate but memory-efficient.
  • LeakyBucket: requests enter a queue processed at a fixed rate. If the queue is full, reject. Guarantees a smooth output rate.
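The token bucket variant can be sketched compactly; the injectable clock is a testability choice for this sketch rather than part of the algorithm:

```python
class TokenBucket:
    """Refills at `rate` tokens per second, up to `capacity`.
    Each allowed request consumes one token."""
    def __init__(self, rate, capacity, clock):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock              # callable returning seconds, injectable
        self.tokens = float(capacity)   # start full: allows an initial burst
        self.last_refill = clock()

    def is_allowed(self, client_id=None):
        now = self.clock()
        elapsed = now - self.last_refill
        # Lazy refill: add tokens proportional to elapsed time, capped.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A production framework would hold one bucket per (clientId, policy) pair behind the RateLimitStore interface; this sketch shows only the algorithm itself.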

For configuration, define a RateLimitPolicy: algorithm type, rate, window size, burst capacity. Support hierarchical policies: global default, per-service override, per-endpoint override, per-client override. Use a PolicyResolver that finds the most specific matching policy for a given request.

For distributed deployment, the rate limiter must work across multiple application instances. Define a RateLimitStore interface with implementations: LocalStore (in-memory, per-instance limits only), RedisStore (centralized, global limits using Redis atomic operations), and HybridStore (local for approximate enforcement, periodic sync with Redis for global accuracy). Discuss the trade-off between latency (local is zero-cost) and accuracy (centralized is exact).

For the response model, go beyond boolean. Return a RateLimitResult containing: allowed (boolean), remaining (tokens/requests remaining), retryAfter (seconds until quota resets), limit (the configured maximum). This enables clients to implement backoff strategies and display quota information.
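
A minimal sketch of this richer response as a Java record; the factory method names `allow` and `reject` are assumptions for illustration:

```java
// Richer rate-limit response (sketch). Fields mirror the common
// X-RateLimit-* response headers; retryAfter only matters on rejection.
record RateLimitResult(boolean allowed, long remaining,
                       long retryAfterSeconds, long limit) {
    static RateLimitResult allow(long remaining, long limit) {
        return new RateLimitResult(true, remaining, 0, limit);
    }
    static RateLimitResult reject(long retryAfterSeconds, long limit) {
        return new RateLimitResult(false, 0, retryAfterSeconds, limit);
    }
}
```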

For framework integration, provide a middleware/interceptor pattern: a RateLimitInterceptor that hooks into the HTTP request pipeline, extracts the client identifier (API key, IP, user ID), resolves the applicable policy, checks the rate limiter, and either passes the request through or returns HTTP 429.

Relate this to how rate limiting works at scale at companies like Google, where different services have different rate-limiting needs but share a common framework.

Follow-up questions:

  • How would you implement graceful degradation where rate-limited requests get degraded service rather than rejection?
  • How do you handle clock skew across distributed instances in the sliding window algorithm?
  • How would you implement rate limit quotas that reset monthly for a SaaS billing context?

12. Design a notification service that supports multiple channels.

What the interviewer is really asking: Can you design a system with clean abstractions for multiple delivery channels, handle priority and deduplication, and think about user preferences and delivery guarantees?

Answer framework:

Define the core abstraction: NotificationService receives a Notification (recipient, template, channel preferences, priority, metadata) and delivers it through the appropriate channel(s). Channels include: Email, Push, SMS, In-App, and Slack/Webhook.

Apply the Strategy pattern for channel delivery: DeliveryChannel interface with method deliver(Notification, Recipient): DeliveryResult. Implementations: EmailChannel (via SMTP/SES), PushChannel (via FCM/APNs), SMSChannel (via Twilio), InAppChannel (via WebSocket), WebhookChannel (via HTTP POST). Each handles its own formatting, rate limits, and error handling.
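
A sketch of the channel strategy with a simulated webhook channel so the example stays self-contained; the simplified `Map` payload and the validation logic are assumptions, not the article's API:

```java
import java.util.Map;

// Strategy interface for delivery channels (illustrative names).
interface DeliveryChannel {
    DeliveryResult deliver(Map<String, String> notification, String recipient);
}

record DeliveryResult(boolean success, String detail) {}

// Simulated webhook channel: a real one would HTTP POST the payload,
// handle timeouts, and map the status code to a DeliveryResult.
class WebhookChannel implements DeliveryChannel {
    @Override
    public DeliveryResult deliver(Map<String, String> notification, String recipient) {
        if (!recipient.startsWith("https://")) {
            return new DeliveryResult(false, "invalid webhook URL");
        }
        return new DeliveryResult(true, "queued for POST to " + recipient);
    }
}
```

The RoutingEngine can then hold a `Map<ChannelType, DeliveryChannel>` and dispatch without knowing any channel's internals.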

For notification routing, implement a RoutingEngine that determines which channels to use for each notification. Factors: user's channel preferences (stored in UserPreferences), notification priority (CRITICAL goes to all channels, LOW only to in-app), quiet hours (do not push at 3 AM, queue for morning), channel availability (if push token is expired, fall back to email).

For templating, define a NotificationTemplate with placeholders and a TemplateEngine that renders templates with context data. Each channel may need different rendering: email needs HTML, push needs short text (under 100 characters), SMS needs plain text under 160 characters. Define a ChannelFormatter interface that adapts the template output for each channel.

For delivery guarantees, implement a retry mechanism with exponential backoff. Define a DeliveryAttempt log: notificationId, channel, attemptNumber, timestamp, status, errorMessage. Use a dead-letter queue for notifications that fail all retry attempts. Implement idempotency: each notification has a unique deduplication key; resubmitting the same notification does not send duplicates.
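
The deduplication-key idea can be sketched with an atomic set insert. This in-memory version is only illustrative; a real system would back the seen-key set with a persistent store and a TTL:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Idempotent send gate (sketch): only the first submission with a given
// deduplication key actually triggers delivery.
class DeduplicatingSender {
    private final Set<String> seenKeys = ConcurrentHashMap.newKeySet();

    /** Returns true if this call actually triggered a send. */
    boolean send(String dedupKey, Runnable deliver) {
        // Set.add is atomic: the first caller with a key wins, all others skip.
        if (!seenKeys.add(dedupKey)) {
            return false; // duplicate submission; already delivered
        }
        deliver.run();
        return true;
    }
}
```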

For priority handling, implement a priority queue: CRITICAL notifications skip the queue and are processed immediately. HIGH notifications are processed before NORMAL. LOW notifications are batched for digest delivery (daily summary email). Use the pub-sub pattern for scalable processing: publish notifications to different Kafka topics by priority.

For user preference management, model granular preferences: per notification type (comments on my posts, new followers, promotional), per channel (email only for digests, push for everything else), per schedule (do not disturb 10 PM to 7 AM).

Relate to how notification delivery intersects with real-time systems like WebSocket connections for in-app notifications.

Follow-up questions:

  • How would you implement notification batching to avoid sending 50 individual emails when someone gets 50 likes?
  • How do you handle unsubscribe compliance (CAN-SPAM, GDPR) at the framework level?
  • How would you implement cross-device notification synchronization where dismissing on one device dismisses on all?

13. Design a file synchronization service like Dropbox at the class level.

What the interviewer is really asking: Can you handle file chunking, delta synchronization, conflict detection and resolution, and the client-side state machine that manages sync operations?

Answer framework:

Identify core components: SyncClient (runs on user's device), FileWatcher (detects local changes), ChunkManager (splits files into chunks), SyncEngine (orchestrates upload/download), ConflictResolver (handles concurrent edits), MetadataStore (tracks file state), and ServerAPI (communication with cloud storage).

For the FileWatcher, use the Observer pattern with platform-specific file system event APIs (inotify on Linux, FSEvents on macOS, ReadDirectoryChangesW on Windows). On detecting a change (create, modify, delete, rename), queue a SyncEvent for processing.

For chunking, split files into fixed-size chunks (4MB). Compute a content hash (SHA-256) for each chunk. This enables: deduplication (identical chunks across files stored once), delta sync (on file modification, only upload changed chunks), and resumable uploads (each chunk is independently uploadable).
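
A minimal chunk-hashing sketch using the JDK's `MessageDigest`. Real clients stream from disk rather than holding the whole file in a byte array; the small chunk size in the usage example is only for demonstration:

```java
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Fixed-size chunking with a SHA-256 content hash per chunk (sketch).
class ChunkManager {
    static final int CHUNK_SIZE = 4 * 1024 * 1024; // 4 MB, as in the text

    static List<String> chunkHashes(byte[] content, int chunkSize) {
        try {
            List<String> hashes = new ArrayList<>();
            for (int off = 0; off < content.length; off += chunkSize) {
                int len = Math.min(chunkSize, content.length - off);
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                sha.update(content, off, len);
                hashes.add(HexFormat.of().formatHex(sha.digest()));
            }
            return hashes;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}
```

Identical chunks produce identical hashes, which is exactly what enables deduplication and delta sync: compare hash lists, upload only the chunks whose hashes differ.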

For the SyncEngine state machine per file, define states: IN_SYNC, LOCAL_CHANGE_DETECTED, UPLOADING, DOWNLOADING, CONFLICTED. The sync algorithm: (1) detect local change, (2) compute new chunk hashes, (3) compare with server's chunk list for this file, (4) upload only chunks that differ, (5) update server metadata with new chunk list and version.

For conflict detection, use vector clocks or version numbers. Each client and the server maintain a version for each file. On upload, if the server's version has advanced since the client's last sync (another client modified the file), a conflict is detected. For resolution, implement ConflictResolutionStrategy: CreateConflictCopy (save both versions, let user resolve manually as Dropbox does), LastWriterWins (latest timestamp wins, risk of data loss), and MergeStrategy (for supported file types like text, attempt automatic merge).
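
Version-number conflict detection can be sketched as an optimistic-concurrency check on the server; the names are illustrative, and vector clocks would replace the scalar version where multi-master causality matters:

```java
import java.util.HashMap;
import java.util.Map;

// Server-side version registry (sketch). The client sends the version it
// last synced; a mismatch means another client changed the file first.
class FileVersionRegistry {
    private final Map<String, Long> serverVersions = new HashMap<>();

    /** Returns the new version on success, or -1 to signal a conflict. */
    synchronized long tryUpload(String path, long clientBaseVersion) {
        long current = serverVersions.getOrDefault(path, 0L);
        if (clientBaseVersion != current) {
            return -1; // stale base version: hand off to the ConflictResolver
        }
        long next = current + 1;
        serverVersions.put(path, next);
        return next;
    }
}
```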

For bandwidth optimization, implement delta encoding: instead of uploading entire changed chunks, compute the binary diff between old and new chunk versions and upload only the diff. Use rsync-style rolling checksums to identify matching blocks efficiently.

For the MetadataStore (local database on client), track: filePath, fileHash, chunkHashes[], lastSyncVersion, lastModifiedLocal, lastModifiedServer, syncStatus. Index by path for fast lookup when the FileWatcher reports changes.

Connect the sync mechanism to the consistency challenges discussed in system design for distributed storage, where distributed message storage raises similar issues.

Follow-up questions:

  • How do you handle a user renaming a folder containing 10,000 files? Does that trigger 10,000 sync operations?
  • How would you implement selective sync where the user only syncs specific folders?
  • How do you handle the case where a file is too large to chunk in memory on a device with limited RAM?

14. Design a task management system like Jira.

What the interviewer is really asking: Can you model complex entity relationships (projects, sprints, tasks, subtasks), configurable workflows, and permission systems with clean abstractions?

Answer framework:

Identify the entity hierarchy: Organization, Project, Board, Sprint, Issue, Comment, Attachment, User, Team. An Organization has Projects. A Project has a Board (Kanban or Scrum). Scrum Boards have Sprints. Issues belong to a Project and optionally a Sprint.

For the Issue entity, apply the Composite pattern: an Issue can have sub-issues (subtasks). An Epic is a top-level Issue containing Stories, which may contain Tasks and Bugs. Use a type enum (EPIC, STORY, TASK, BUG, SUBTASK) rather than separate classes, since the behavior is largely the same (different icons and validation rules but same state machine).

For customizable workflows, use the State pattern with configurable transitions. Define a Workflow as a graph of Status nodes with Transition edges. A simple workflow: TODO to IN_PROGRESS to IN_REVIEW to DONE. Allow project admins to define custom workflows: add statuses (BLOCKED, QA_TESTING), add transitions (IN_PROGRESS to BLOCKED), add transition conditions (only assignee can move to IN_PROGRESS), and add transition actions (notify reviewer when moved to IN_REVIEW).
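
The configurable transition graph can be sketched as a map from status to its allowed next statuses; transition conditions and actions from the text are omitted to keep the example short:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Workflow as a graph of status nodes with transition edges (sketch).
// Admins configure the graph at runtime; issues only consult it.
class Workflow {
    private final Map<String, Set<String>> transitions = new HashMap<>();

    Workflow addTransition(String from, String to) {
        transitions.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        return this;
    }

    boolean canTransition(String from, String to) {
        return transitions.getOrDefault(from, Set.of()).contains(to);
    }
}
```

Because the graph is data, adding a BLOCKED status or a new transition is a configuration change, not a code change, which is the Open-Closed benefit the text describes.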

For permissions, implement Role-Based Access Control (RBAC) layered with project-level permissions. Roles: ADMIN, PROJECT_LEAD, DEVELOPER, VIEWER. Permissions: CREATE_ISSUE, EDIT_ISSUE, DELETE_ISSUE, MANAGE_SPRINT, MANAGE_MEMBERS. Define a PermissionResolver that checks: user's organization role, project-specific role, and issue-specific permissions (reporter and assignee have edit rights).

For search and filtering, implement a query builder using the Specification pattern (similar to the library system). Filter by: status, assignee, reporter, priority, label, sprint, due date, custom fields. Support compound filters with AND/OR logic. Support saved filters that persist a query (expressed in JQL, in Jira terms) for reuse.
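
The Specification pattern maps naturally onto Java's composable `Predicate`. A thin sketch, with a pared-down `Issue` record and hypothetical field names:

```java
import java.util.function.Predicate;

// Simplified issue for illustration; the real entity has many more fields.
record Issue(String status, String assignee, String priority) {}

// Specification factories; Predicate.and/or/negate provide the AND/OR logic.
class IssueSpecs {
    static Predicate<Issue> status(String s)   { return i -> i.status().equals(s); }
    static Predicate<Issue> assignee(String a) { return i -> i.assignee().equals(a); }
    static Predicate<Issue> priority(String p) { return i -> i.priority().equals(p); }
}
```

A saved filter is then just a persisted description of such a composed predicate, rebuilt when the filter is loaded.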

For the activity stream, every action on an issue creates an ActivityEvent: type (STATUS_CHANGED, ASSIGNED, COMMENTED, ATTACHMENT_ADDED), actor, timestamp, before/after values. Store as an append-only log per issue. Use the Observer pattern to trigger side effects: notifications, webhook deliveries, sprint burndown recalculation.

Apply SOLID principles rigorously: the Workflow is separated from the Issue (Single Responsibility), new workflow steps don't require Issue changes (Open-Closed), the PermissionResolver is injected as a dependency (Dependency Inversion).

Relate to how learning management systems use similar task tracking and progress modeling.

Follow-up questions:

  • How would you implement time tracking with automatic timer start/stop based on status changes?
  • How do you handle bulk operations (move 50 issues to a new sprint) without triggering 50 individual notifications?
  • How would you design the sprint planning drag-and-drop interface at the API level?

15. Design a payment splitting system like Splitwise.

What the interviewer is really asking: Can you model financial transactions between multiple parties, handle the algorithmic challenge of debt simplification, and ensure data integrity in a system where money is involved?

Answer framework:

Identify entities: User, Group, Expense, Split, Balance, Settlement, and the core algorithm DebtSimplifier. An Expense records: payer (who paid), amount, description, group, split type, and the individual shares.

For split types, apply the Strategy pattern with a SplitStrategy interface: computeShares(Expense, List<User>): Map<User, Amount>. Implementations: EqualSplit (divide equally among all participants), ExactSplit (each participant's share is specified), PercentageSplit (each participant pays a percentage), ShareBasedSplit (divide proportionally to shares, like paying by income ratio). The Strategy pattern makes adding new split types trivial without modifying existing code.
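
An EqualSplit sketch, using integer cents so the shares sum exactly to the expense amount (a common pitfall with floating-point money); the `String` participant IDs are a simplification of the User entity:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Strategy for computing each participant's share (sketch, amounts in cents).
interface SplitStrategy {
    Map<String, Long> computeShares(long amountCents, List<String> participants);
}

class EqualSplit implements SplitStrategy {
    @Override
    public Map<String, Long> computeShares(long amountCents, List<String> participants) {
        long base = amountCents / participants.size();
        long remainder = amountCents % participants.size();
        Map<String, Long> shares = new LinkedHashMap<>();
        // Distribute leftover cents to the first participants so totals match.
        for (int i = 0; i < participants.size(); i++) {
            shares.put(participants.get(i), base + (i < remainder ? 1 : 0));
        }
        return shares;
    }
}
```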

For balance computation, maintain a balance ledger: for each pair of users in a group, track the net amount owed. When User A pays $60 for a group of 3 (A, B, C), the result is: B owes A $20, C owes A $20. Store as directed edges in a graph: Balance(from=B, to=A, amount=20).

For debt simplification (the key algorithmic challenge), the graph of debts can be simplified to reduce the number of transactions needed to settle. Example: A owes B $10, B owes C $10. Simplified: A owes C $10 directly (one transaction instead of two). The algorithm: compute each user's net balance (the sum of what they are owed minus what they owe). Users with a positive balance are creditors; those with a negative balance are debtors. Greedily match the largest debtor with the largest creditor until all balances are zero. This is the minimum cash flow approach: it settles n users in at most n-1 transactions, and while truly minimizing the transaction count is NP-hard in general, the greedy heuristic is the standard practical choice.
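
The greedy settlement can be sketched as follows; balances are net amounts in cents, positive for creditors, and the class and record names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Greedy min cash flow settlement (sketch): repeatedly match the largest
// debtor with the largest creditor until every net balance is zero.
class DebtSimplifier {
    record Transfer(String from, String to, long amountCents) {}

    /** netBalances: positive = is owed money, negative = owes money. */
    static List<Transfer> simplify(Map<String, Long> netBalances) {
        TreeMap<String, Long> net = new TreeMap<>(netBalances); // deterministic order
        List<Transfer> transfers = new ArrayList<>();
        while (true) {
            String creditor = null, debtor = null;
            for (var e : net.entrySet()) {
                if (e.getValue() > 0 && (creditor == null || e.getValue() > net.get(creditor)))
                    creditor = e.getKey();
                if (e.getValue() < 0 && (debtor == null || e.getValue() < net.get(debtor)))
                    debtor = e.getKey();
            }
            if (creditor == null || debtor == null) break; // everyone settled
            long amount = Math.min(net.get(creditor), -net.get(debtor));
            transfers.add(new Transfer(debtor, creditor, amount));
            net.merge(creditor, -amount, Long::sum);
            net.merge(debtor, amount, Long::sum);
        }
        return transfers;
    }
}
```

For the example above (A owes B $10, B owes C $10), the net balances are A = -1000, B = 0, C = +1000 cents, and the algorithm emits the single transfer A pays C $10.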

For data integrity, every expense creates immutable transaction records. Use double-entry bookkeeping principles: every debit has a corresponding credit. Implement an audit log that records every change. Never delete records; instead, mark them as voided and create a reversal entry.

For settlements, when User B pays User A what they owe, create a Settlement record that zeros out their balance. Support partial settlements. Track settlement method (cash, bank transfer, payment app) for record-keeping.

For groups, compute group-level balances efficiently: maintain a materialized view of each member's net balance within the group. Update incrementally on each new expense rather than recomputing from scratch.

Relate financial system design to the precision and audit requirements discussed in stock trading platform design where financial accuracy is equally critical.

Follow-up questions:

  • How do you handle currency conversion when group members pay in different currencies?
  • How would you implement recurring expenses (monthly rent split) with automatic balance updates?
  • How do you handle the scenario where a user leaves a group but still has outstanding balances?

Common Mistakes in Low-Level Design Interviews

  1. Over-engineering with unnecessary patterns. Applying every design pattern you know makes code harder to understand, not easier. Use patterns only when they solve a real problem. A simple if-else is often better than a full Strategy pattern for two cases.

  2. Ignoring concurrency. Many LLD problems have concurrent access scenarios (ticket booking, auction bidding, cache access). Failing to address thread safety signals a lack of production experience. Always identify the critical sections and discuss synchronization approaches.

  3. Deep inheritance hierarchies instead of composition. Senior engineers prefer composition over inheritance. Inheritance creates tight coupling and the fragile base class problem. Use interfaces for polymorphism and compose behavior through delegation.

  4. Not discussing trade-offs. Every design decision has alternatives. When you choose a HashMap over a TreeMap, explain why (O(1) vs O(log n) lookup, but unordered). When you choose inheritance over an interface, justify it. Interviewers want to see that you considered alternatives.

  5. Focusing on implementation details over design. In an LLD interview, the class structure, interfaces, and relationships matter more than the exact syntax of methods. Spend time on the high-level object model before diving into method implementations.

How to Prepare for Low-Level Design Interviews

Practice by designing systems you use daily: your email client, a ride-sharing app, a music streaming service. For each, identify entities, define their relationships, and write the key interfaces. Implement the core classes to validate your design compiles and works.

Study SOLID principles deeply, not as rules to follow blindly, but as trade-offs to evaluate. Know when Single Responsibility leads to too many tiny classes and when Open-Closed adds unnecessary abstraction layers. Understanding when to break the rules demonstrates more mastery than rigid adherence.

Review design patterns from the Gang of Four book, but focus on the ones you will actually use: Strategy, Observer, Factory, Builder, State, Command, Decorator, and Composite cover 90% of interview scenarios. For each pattern, know a real-world example from your own codebase.

Practice whiteboarding: draw class diagrams quickly, show relationships (inheritance, composition, association), and walk through a request flow. The ability to communicate your design visually is as important as the design itself.

For comprehensive preparation, explore our system design interview guide for the high-level counterpart, study learning paths tailored to object-oriented design, and review company-specific interview formats. Consider our pricing plans for full access to interactive design practice problems.
