
Cryptography & Encryption Interview Questions for Senior Engineers (2026)

Master cryptography and encryption interview questions with detailed answer frameworks covering symmetric/asymmetric encryption, TLS, PKI, key management, hashing, digital signatures, zero-knowledge proofs, and encryption at rest and in transit.

20 min read · Updated Apr 25, 2026
interview-questions · cryptography · encryption · security · senior-engineer

Why Cryptography & Encryption Questions Matter in Senior Engineering Interviews

Cryptography is no longer the exclusive domain of security specialists. At the senior engineer level, you are expected to make architectural decisions that directly impact the confidentiality, integrity, and authenticity of data flowing through distributed systems. Interviewers at companies like Google, Stripe, and Cloudflare routinely ask cryptography questions not because they expect you to implement AES from scratch, but because they need to know you can reason about threat models, select appropriate primitives, and avoid the subtle mistakes that lead to catastrophic breaches.

A senior engineer who cannot explain why ECB mode is dangerous, how TLS 1.3 reduced round trips, or when to choose HMAC over a digital signature is a liability in any system that handles user data, financial transactions, or authentication tokens. These questions test your ability to bridge theory and practice: understanding the mathematics well enough to make sound engineering decisions without falling into the trap of rolling your own crypto.

This guide covers 15 questions that span the full breadth of cryptography topics you will encounter in interviews, from foundational primitives to advanced constructions like zero-knowledge proofs. Each question includes what the interviewer is really probing for and a structured answer framework you can adapt to your experience.

For related interview preparation, see our guides on System Design Interviews and API Design interview questions.


Question 1: Explain the difference between symmetric and asymmetric encryption. When would you use each?

What the interviewer is really asking

They want to confirm you understand the fundamental trade-off between performance and key distribution. They are also checking whether you can map these primitives to real-world use cases rather than reciting textbook definitions.

Answer framework

Symmetric encryption uses a single shared key for both encryption and decryption. Algorithms like AES-256-GCM operate at hardware-accelerated speeds (often exceeding 10 GB/s on modern CPUs with AES-NI instructions) and are the workhorse for bulk data encryption. The challenge is key distribution: both parties must securely possess the same key before communication begins.

Asymmetric encryption uses a key pair (public and private). RSA and elliptic-curve algorithms (X25519 for key agreement, ECDSA and Ed25519 for signatures) solve the key distribution problem: anyone can encrypt with the public key, but only the private key holder can decrypt. The cost is performance: RSA-2048 operations are roughly 1000x slower than AES-256.

In practice, you almost always use both together in a hybrid scheme:

Use symmetric encryption for: data at rest, database field encryption, session data, file encryption, and any scenario where you control both endpoints and can securely provision keys.

Use asymmetric encryption for: key exchange, digital signatures, certificate-based authentication, end-to-end encrypted messaging (where you cannot pre-share symmetric keys), and any scenario involving untrusted parties.

A strong answer will mention that modern systems like Signal Protocol use a ratcheting mechanism (Double Ratchet) that combines both paradigms to achieve forward secrecy and post-compromise security.


Question 2: How does TLS 1.3 work, and what improvements does it make over TLS 1.2?

What the interviewer is really asking

This tests whether you understand the protocol that secures virtually all web traffic. They want to see that you can reason about handshake latency, cipher suite negotiation, and the security properties that matter in production systems.

Answer framework

TLS 1.3 (RFC 8446) is a ground-up simplification of the TLS handshake that reduces latency and removes insecure legacy options.

Key improvements over TLS 1.2:

1-RTT Handshake (down from 2-RTT): The client guesses the server's key exchange group and sends its key share in the very first flight. The server can then derive the shared secret immediately and return its own key share, certificate, and Finished message in a single round trip, so application data flows one RTT sooner than in TLS 1.2.

0-RTT Resumption: For repeat connections, TLS 1.3 supports 0-RTT data using pre-shared keys from a previous session. This eliminates handshake latency entirely for idempotent requests, though it introduces replay attack risks that must be mitigated at the application layer.

Removed insecure primitives: TLS 1.3 eliminates RSA key exchange (no forward secrecy), CBC mode ciphers (vulnerable to padding oracles like POODLE), RC4, SHA-1, and static DH groups. Only AEAD ciphers (AES-GCM, ChaCha20-Poly1305) with ephemeral key exchange (ECDHE, DHE) are permitted.

Encrypted handshake: The server certificate is now encrypted, preventing passive observers from identifying which server the client is connecting to (when combined with Encrypted Client Hello / ECH).

In production, TLS 1.3 typically reduces connection setup time by 100-300ms compared to TLS 1.2, which compounds significantly for mobile users on high-latency networks.

For more on how protocols are evaluated in system design, see our System Design Interview guide.


Question 3: What is a Public Key Infrastructure (PKI), and what are its weaknesses?

What the interviewer is really asking

They want to know if you understand the trust model that underpins HTTPS and certificate-based authentication, and whether you can critically evaluate its limitations rather than treating it as infallible.

Answer framework

PKI is the system of Certificate Authorities (CAs), digital certificates, and validation procedures that binds public keys to identities. The chain of trust works as follows:

  1. Root CAs are pre-installed in operating systems and browsers (roughly 150 root certificates in a typical trust store)
  2. Intermediate CAs are signed by root CAs and issue end-entity certificates
  3. End-entity certificates (e.g., your server's TLS cert) are signed by intermediate CAs
  4. Verification walks the chain from end-entity up to a trusted root

Weaknesses of PKI:

  • CA compromise: If any CA in the trust store is compromised, an attacker can issue valid certificates for any domain. This has happened (DigiNotar 2011, Symantec trust revocation 2017).
  • Domain validation weaknesses: DV certificates only prove domain control, not organizational identity. BGP hijacking can be used to fraudulently obtain DV certs.
  • Revocation is broken in practice: CRL (Certificate Revocation Lists) are too large and slow. OCSP adds latency and leaks browsing history. OCSP stapling helps but is not universally deployed.
  • Certificate pinning trade-offs: Pinning mitigates CA compromise but creates operational risk if you lose your pinned key or need to rotate quickly.

Modern mitigations:

  • Certificate Transparency (CT): All publicly trusted certificates must be logged in append-only CT logs, making unauthorized issuance detectable.
  • CAA DNS records: Specify which CAs are authorized to issue certificates for your domain.
  • Short-lived certificates: Let's Encrypt issues 90-day certs; some organizations use even shorter lifetimes (hours) to reduce the window of compromise.
  • ACME protocol: Automates certificate issuance and renewal, reducing human error.

Question 4: Explain the difference between hashing, encryption, and encoding. When would you use each?

What the interviewer is really asking

This is a fundamentals check. Confusing these three operations is a red flag. They want crisp definitions and practical examples that demonstrate you would not make mistakes like encrypting passwords or base64-encoding secrets.

Answer framework

Encoding transforms data into a different format for compatibility. It is reversible and uses no key. Base64 encoding, URL encoding, and UTF-8 are examples. Encoding provides zero security.

Encryption transforms data to preserve confidentiality. It is reversible only with the correct key. AES-256-GCM, ChaCha20-Poly1305, and RSA are examples. Use encryption when you need to recover the original data (database fields, API payloads, files at rest).

Hashing produces a fixed-size digest from arbitrary input. It is a one-way function: you cannot recover the input from the hash. SHA-256, BLAKE3, and bcrypt are examples. Use hashing when you need to verify data without storing the original (passwords, integrity checks, content addressing).

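A stdlib-only sketch of the first two operations (encryption is omitted because it belongs to an audited third-party library such as libsodium or the `cryptography` package, never hand-rolled code):

```python
import base64
import hashlib

# Encoding: reversible, keyless, zero security
encoded = base64.b64encode(b"user@example.com")
assert base64.b64decode(encoded) == b"user@example.com"

# Hashing: one-way, fixed-size digest; the input cannot be recovered
digest = hashlib.sha256(b"user@example.com").hexdigest()
assert len(digest) == 64  # SHA-256 -> 32 bytes -> 64 hex characters

# Encryption (not shown): reversible only with the key. Use an audited
# AEAD implementation (e.g. AES-256-GCM) from libsodium or the
# `cryptography` package.
```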

Critical nuance for passwords: Never use plain SHA-256 for password hashing. Use a purpose-built password hashing function (bcrypt, scrypt, or Argon2id) that incorporates a salt and a tunable work factor to resist brute-force attacks. Argon2id is the current recommendation from OWASP, with parameters of at least 19 MiB memory, 2 iterations, and 1 degree of parallelism.


Question 5: How does key management work in production systems? What are the challenges?

What the interviewer is really asking

They want to know if you have dealt with the hardest part of applied cryptography: keeping keys safe throughout their lifecycle. This separates engineers who have built real secure systems from those who only understand algorithms.

Answer framework

Key management encompasses the entire lifecycle: generation, distribution, storage, rotation, and destruction of cryptographic keys.

Key generation: Use cryptographically secure random number generators (CSPRNG). On Linux, read from /dev/urandom or use getrandom(). Never use Math.random() or language-level PRNGs for key material.
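In Python, for instance, key material should come from the `secrets` module or `os.urandom`, both of which read from the OS CSPRNG:

```python
import os
import secrets

dek = secrets.token_bytes(32)          # 256-bit data encryption key
nonce = os.urandom(12)                 # 96-bit nonce for AES-GCM
api_token = secrets.token_urlsafe(32)  # random URL-safe bearer token

# Never use random.random() or any seeded PRNG for key material:
# their outputs are predictable from a small number of observations.
assert len(dek) == 32 and len(nonce) == 12
```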

Key storage hierarchy:

  • Data Encryption Keys (DEKs) encrypt the actual data and are stored alongside it in wrapped (encrypted) form
  • A Key Encryption Key (KEK) wraps the DEKs and lives in an HSM or cloud KMS
  • The KEK never leaves the secure boundary; services send wrapped DEKs to the KMS to be unwrapped at runtime

This is called envelope encryption and is the standard pattern used by AWS KMS, Google Cloud KMS, and Azure Key Vault.

Key rotation: Rotate DEKs regularly (e.g., every 90 days). The KEK in the HSM rotates less frequently since it never leaves secure hardware. During rotation, you must support decryption with old keys while encrypting with the new key (key versioning).
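The versioning bookkeeping can be sketched as follows (a toy in-memory ring for illustration; a real system wraps each DEK with a KMS-held KEK rather than holding plaintext keys):

```python
import secrets

class DekRing:
    """Toy DEK version store: encrypt with the newest key, decrypt
    with whichever version a record was originally written under."""

    def __init__(self) -> None:
        self._keys: dict[int, bytes] = {}
        self._current = 0

    def rotate(self) -> int:
        """Introduce a new key version; old versions stay readable."""
        self._current += 1
        self._keys[self._current] = secrets.token_bytes(32)
        return self._current

    def current(self) -> tuple[int, bytes]:
        return self._current, self._keys[self._current]

    def key_for(self, version: int) -> bytes:
        return self._keys[version]

ring = DekRing()
v1 = ring.rotate()
v2 = ring.rotate()
assert ring.current()[0] == v2               # new writes use the latest DEK
assert ring.key_for(v1) != ring.key_for(v2)  # old data remains decryptable
```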

Challenges in practice:

  • Key escrow vs. availability: If you lose the only copy of a key, encrypted data is permanently lost. But every backup copy increases the attack surface.
  • Multi-region: Keys used in one region may need to decrypt data replicated to another. This conflicts with data residency requirements.
  • Access control: Which services and humans can access which keys? Overly broad access negates the benefit of encryption. Use IAM policies with the principle of least privilege.
  • Audit logging: Every key access must be logged for compliance (SOC2, HIPAA, PCI-DSS). Cloud KMS services provide this automatically.

For a deeper dive into related security topics, see our concepts on distributed systems and authentication patterns.


Question 6: What are digital signatures and how do they differ from MACs?

What the interviewer is really asking

They are testing whether you understand non-repudiation, which is the critical property that separates digital signatures from message authentication codes. This matters for audit trails, legal compliance, and multi-party protocols.

Answer framework

Both digital signatures and MACs (Message Authentication Codes) provide integrity and authenticity, but they differ in a fundamental way:

MAC (e.g., HMAC-SHA256):

  • Uses a shared symmetric key
  • Both sender and receiver can generate and verify
  • Provides integrity and authenticity but NOT non-repudiation
  • If Alice and Bob share a key, Bob cannot prove to a third party that Alice sent a message (Bob could have created it himself)

Digital Signature (e.g., ECDSA, Ed25519):

  • Uses an asymmetric key pair: sign with private key, verify with public key
  • Only the private key holder can sign, but anyone with the public key can verify
  • Provides integrity, authenticity, AND non-repudiation
  • Alice signs with her private key; anyone can verify with her public key; Alice cannot deny sending the message
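The MAC side can be shown with the standard library alone (Ed25519 signing needs a third-party package such as `cryptography`, so only HMAC is sketched here; the message body is illustrative):

```python
import hashlib
import hmac

shared_key = b"0123456789abcdef0123456789abcdef"  # illustrative shared key
message = b'{"amount": 100, "to": "acct-42"}'

# Sender computes the tag over the message with the shared key
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes and compares in constant time to avoid timing leaks
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)

# Note: anyone holding shared_key could have produced this tag,
# which is exactly why a MAC cannot provide non-repudiation.
```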

When to use each:

  • HMAC: API request authentication (e.g., AWS Signature V4), session tokens, internal service-to-service authentication where both parties are trusted.
  • Digital signatures: Code signing, certificate issuance, blockchain transactions, audit logs where non-repudiation is legally required, JWTs that must be verified by third parties.

Performance-wise, HMAC is significantly faster (symmetric operations). Ed25519 signatures are the fastest asymmetric option, producing 64-byte signatures with signing at roughly 100k operations/second on modern hardware.


Question 7: What is forward secrecy and why does it matter?

What the interviewer is really asking

This tests your understanding of a critical security property that affects how you design key exchange protocols. They want to see that you understand the practical implications: what happens if a private key is compromised after the fact.

Answer framework

Forward secrecy (also called perfect forward secrecy / PFS) ensures that compromise of long-term keys does not compromise past session keys. Each session uses ephemeral keys that are discarded after use.

Without forward secrecy (RSA key exchange in TLS 1.2): the client encrypts the pre-master secret with the server's long-term RSA public key. An adversary who records the traffic and later obtains that private key can decrypt every recorded session.

With forward secrecy (ECDHE key exchange): each session generates fresh ephemeral key pairs, derives the shared secret, and discards the ephemeral private keys. The long-term key only signs the handshake, so its later compromise reveals nothing about past session keys.

This matters enormously in practice because:

  • Nation-state adversaries are known to record encrypted traffic for later decryption ("harvest now, decrypt later")
  • Private keys can be extracted via server compromise, Heartbleed-style vulnerabilities, or legal compulsion
  • TLS 1.3 mandates forward secrecy by only allowing ephemeral key exchange (ECDHE/DHE)

Forward secrecy is also critical in messaging protocols. Signal's Double Ratchet provides forward secrecy at the message level: compromising the current ratchet state does not reveal past messages.


Question 8: How would you implement encryption at rest for a multi-tenant database?

What the interviewer is really asking

This is a system design question that tests whether you can apply cryptographic concepts to a real architectural problem. They want to see you reason about tenant isolation, key hierarchy, performance, and operational concerns.

Answer framework

There are multiple layers of encryption at rest, each with different trade-offs:

Layer 1: Full-disk encryption (FDE)

  • Transparent to the application (LUKS, BitLocker, cloud provider default)
  • Protects against physical theft of storage media
  • Does NOT provide tenant isolation: all data encrypted with the same key

Layer 2: Database-level encryption (TDE)

  • Transparent Data Encryption in PostgreSQL, SQL Server, etc.
  • Encrypts at the tablespace or database level
  • Still does not isolate tenants within the same database

Layer 3: Application-level encryption (field-level)

  • Encrypt sensitive fields before they enter the database
  • Each tenant gets their own Data Encryption Key (DEK)
  • DEKs are wrapped with a KEK stored in a KMS (envelope encryption)

Key considerations:

  • Per-tenant keys allow you to cryptographically delete a tenant's data by destroying their KEK (crypto-shredding), which is faster and more reliable than deleting individual rows.
  • Performance: Cache unwrapped DEKs in memory (with TTL) to avoid calling KMS on every operation. AES-GCM encryption adds negligible overhead (~1-2% CPU).
  • Searchability: Encrypted fields cannot be indexed or queried by the database. Solutions include deterministic encryption (same plaintext always produces same ciphertext, enabling equality searches but leaking frequency), blind indexes, or specialized encrypted search schemes.
  • Key rotation: Rotate DEKs by re-encrypting data with a new DEK. Rotate KEKs by re-wrapping DEKs with the new KEK (much faster since DEKs are small).

For more on multi-tenant architecture patterns, explore our system design interview guide.


Question 9: Explain zero-knowledge proofs. What are practical applications?

What the interviewer is really asking

Zero-knowledge proofs (ZKPs) are increasingly relevant in blockchain, privacy-preserving authentication, and compliance. The interviewer wants to see if you understand the concept beyond buzzword level and can identify legitimate use cases.

Answer framework

A zero-knowledge proof allows a prover to convince a verifier that a statement is true without revealing any information beyond the truth of the statement itself.

Three properties define a ZKP:

  1. Completeness: If the statement is true, an honest prover can convince the verifier.
  2. Soundness: If the statement is false, no cheating prover can convince the verifier (except with negligible probability).
  3. Zero-knowledge: The verifier learns nothing beyond the fact that the statement is true.

Classic analogy (Ali Baba's cave): A circular cave has a locked door in the middle, and the prover claims to know the password that opens it. The prover enters from a randomly chosen side; the verifier then calls out which side to exit from. With the password, the prover can always comply; without it, they succeed only half the time per round. After many rounds, the verifier is convinced without ever learning the password.
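The soundness of the cave protocol is easy to check with a toy simulation: a cheating prover who enters from a random side passes a round only when the verifier happens to request that same side.

```python
import random

def cheater_survives(rounds: int) -> bool:
    # Without the password, the prover can exit only from the side they
    # entered; each round therefore succeeds with probability 1/2.
    return all(random.randrange(2) == random.randrange(2)
               for _ in range(rounds))

trials = 100_000
wins = sum(cheater_survives(20) for _ in range(trials))
# Soundness error is 2**-20 per 20-round interrogation, so `wins`
# is almost certainly 0 even across 100,000 trials.
```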

Practical applications:

  • Privacy-preserving authentication: Prove you are over 18 without revealing your birthdate. Prove you are a member of a group without revealing which member.
  • Blockchain scalability (ZK-Rollups): Batch thousands of transactions off-chain and submit a single ZK proof to the main chain. Ethereum's ZK-rollups (zkSync, StarkNet, Polygon zkEVM) use this to achieve 1000+ TPS while inheriting L1 security.
  • Compliance: Prove to an auditor that your financial reserves exceed liabilities without revealing individual account balances.
  • Password-authenticated key exchange (PAKE): Protocols like OPAQUE use ZKP-like constructions so the server never sees the password, even during registration.

Types of ZK proof systems:

  • zk-SNARKs: Succinct proofs (small and fast to verify), require a trusted setup. Used by Zcash and many ZK-rollups.
  • zk-STARKs: No trusted setup, transparent, larger proofs but post-quantum secure. Used by StarkNet.
  • Bulletproofs: No trusted setup, used for range proofs in Monero. Proof size grows logarithmically.

The engineering trade-off is proof generation time (computationally expensive) versus verification time (fast) and proof size. Modern ZK systems can generate proofs for complex computations in seconds using GPU acceleration.


Question 10: What are the common modes of operation for block ciphers, and why does the choice matter?

What the interviewer is really asking

This is a depth check on symmetric encryption. They want to see that you understand why using AES alone is not enough and that the mode of operation determines critical security properties.

Answer framework

Block ciphers like AES encrypt fixed-size blocks (128 bits). Modes of operation define how to encrypt messages longer than one block. The choice dramatically affects security.

ECB (Electronic Codebook) - NEVER use for real data:

  • Each block encrypted independently with the same key
  • Identical plaintext blocks produce identical ciphertext blocks
  • Leaks patterns in structured data (the famous ECB penguin image)
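The ECB leak can be demonstrated without any cipher library: any deterministic per-block transform under a fixed key (here an HMAC truncated to the block size, purely as a stand-in for illustration, not real encryption) maps identical plaintext blocks to identical ciphertext blocks.

```python
import hashlib
import hmac

def toy_ecb(key: bytes, plaintext: bytes, block_size: int = 16) -> bytes:
    # Toy ECB illustration ONLY: each block is transformed independently
    # with the same key, exactly the property that leaks patterns.
    blocks = [plaintext[i:i + block_size]
              for i in range(0, len(plaintext), block_size)]
    return b"".join(hmac.new(key, b, hashlib.sha256).digest()[:block_size]
                    for b in blocks)

ct = toy_ecb(b"key", b"SIXTEEN BYTE BLK" * 2)  # two identical blocks
assert ct[:16] == ct[16:32]  # the repetition is visible in the ciphertext
```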

CBC (Cipher Block Chaining) - Legacy, avoid:

  • Each block XORed with previous ciphertext block before encryption
  • Requires an IV (Initialization Vector)
  • Vulnerable to padding oracle attacks (POODLE, Lucky13)
  • Not parallelizable for encryption

CTR (Counter) - Acceptable:

  • Turns block cipher into stream cipher using incrementing counter
  • Parallelizable for both encryption and decryption
  • No padding needed
  • Does NOT provide integrity (malleable)

GCM (Galois/Counter Mode) - Recommended:

  • CTR mode + GMAC authentication tag
  • AEAD (Authenticated Encryption with Associated Data)
  • Provides confidentiality AND integrity in a single operation
  • Hardware-accelerated on modern CPUs (PCLMULQDQ instruction)
  • 12-byte nonce; MUST NOT reuse a nonce with the same key

Key rule: Always use AEAD modes (AES-GCM or ChaCha20-Poly1305). If you are using a mode that does not provide authentication, you are almost certainly doing it wrong. ChaCha20-Poly1305 is preferred on platforms without hardware AES support (older mobile devices).

For comparisons of different security approaches, see our tech comparisons.


Question 11: How do you securely store and verify passwords?

What the interviewer is really asking

Password storage is a topic where getting it wrong has immediate, headline-making consequences. They want to verify you know the current best practices and understand the reasoning behind each layer of defense.

Answer framework

The correct approach uses a slow, memory-hard, salted password hashing function. Never use fast hashes (MD5, SHA-256) for passwords.

Recommended algorithms (in order of preference as of 2026):

  1. Argon2id - Winner of the Password Hashing Competition. Memory-hard, resistant to both GPU and side-channel attacks. OWASP recommended.
  2. scrypt - Memory-hard, widely available. Good alternative if Argon2 is not available in your platform.
  3. bcrypt - Time-tested, widely supported. Not memory-hard, so more vulnerable to GPU/ASIC attacks at scale, but still acceptable.

Why these specific algorithms?

  • Salt: A unique random value per password prevents rainbow table attacks and ensures identical passwords produce different hashes.
  • Work factor (cost): Deliberately slow (100ms-1s per hash). Makes brute-force attacks computationally infeasible. Attackers must spend the same time per guess.
  • Memory-hardness (Argon2, scrypt): Requires significant RAM per hash computation, making GPU parallelism (thousands of cores, limited memory per core) ineffective.
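A minimal store-and-verify flow, using the stdlib's scrypt as a stand-in (Argon2id requires a third-party package such as argon2-cffi):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    salt = os.urandom(16)                       # unique salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)  # tunable work factor
    return salt + digest                        # store salt with the hash

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

record = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", record)
assert not verify_password("Tr0ub4dor&3", record)
```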

Additional defenses:

  • Pepper: A server-side secret (stored separately from the database) appended to the password before hashing. Protects against database-only breaches.
  • Rate limiting and account lockout: Limits online brute-force attempts.
  • Breach detection: Check passwords against known breach databases (Have I Been Pwned API) during registration and login.
  • Credential stuffing defense: Require MFA for high-value accounts.

Question 12: What is a Merkle tree and where is it used in practice?

What the interviewer is really asking

Merkle trees appear in Git, blockchain, certificate transparency, and distributed file systems. The interviewer wants to see that you understand the data structure and can explain its efficiency properties for integrity verification.

Answer framework

A Merkle tree (hash tree) is a binary tree where every leaf node contains the hash of a data block, and every internal node contains the hash of its two children. The root hash (Merkle root) is a single digest that commits to the entire dataset.

Key property: efficient verification (Merkle proofs). Consider a four-block tree over data blocks A, B, C, D: the leaves are Hash(A) through Hash(D), the internal nodes are Hash(AB) and Hash(CD), and the root commits to everything. To prove that block B is part of the tree, you only need:

  • Hash(B) itself
  • Hash(A) (sibling)
  • Hash(CD) (uncle)

The verifier computes Hash(AB) = Hash(Hash(A) || Hash(B)), then Root = Hash(Hash(AB) || Hash(CD)), and compares against the known root. This requires O(log n) hashes instead of O(n).
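The four-block proof above can be sketched directly:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Four-block tree: root = H( H(H(A)||H(B)) || H(H(C)||H(D)) )
leaves = [h(b) for b in (b"A", b"B", b"C", b"D")]
root = h(h(leaves[0] + leaves[1]) + h(leaves[2] + leaves[3]))

# Merkle proof for block B: its sibling H(A) and uncle H(CD)
sibling, uncle = leaves[0], h(leaves[2] + leaves[3])
recomputed = h(h(sibling + h(b"B")) + uncle)
assert recomputed == root  # O(log n) hashes prove inclusion
```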

Practical applications:

  • Git: Every commit contains a Merkle tree of the repository's file contents. This enables efficient detection of which files changed between commits.
  • Blockchain: Bitcoin's block header contains the Merkle root of all transactions. SPV (Simplified Payment Verification) clients verify transaction inclusion without downloading the entire block.
  • Certificate Transparency: CT logs are append-only Merkle trees. Auditors can efficiently verify that a certificate was (or was not) logged.
  • Distributed file systems (IPFS): Content is addressed by its hash. Large files are split into chunks organized in a Merkle DAG, enabling deduplication and parallel download.
  • Amazon DynamoDB / Apache Cassandra: Anti-entropy repair uses Merkle trees to efficiently identify divergent data ranges between replicas.

For more on how Merkle trees fit into distributed systems concepts, see our related guide.


Question 13: How would you design an end-to-end encrypted messaging system?

What the interviewer is really asking

This is a system design question that tests your ability to combine multiple cryptographic primitives into a coherent architecture. They want to see you address key exchange, forward secrecy, multi-device support, and the practical challenges of E2EE.

Answer framework

The gold standard is Signal Protocol (used by Signal, WhatsApp, and Google Messages). Here is the architecture at a high level:

1. Key registration:

  • Each device generates: one long-term identity key pair (Ed25519), one signed pre-key pair (rotated periodically), and a batch of one-time pre-keys (ephemeral, each used once)
  • Public keys are uploaded to the server

2. Session establishment (X3DH key agreement):

  • Alice fetches Bob's public key bundle (identity key, signed pre-key, one one-time pre-key) from the server
  • She performs several Diffie-Hellman computations mixing her identity and ephemeral keys with Bob's bundle, then feeds the results through a KDF to derive the initial shared secret
  • Because the pre-keys were uploaded in advance, the session can be established while Bob is offline

3. Double Ratchet (ongoing messages):

  • DH ratchet: Each message exchange introduces new ephemeral keys, providing forward secrecy (past messages cannot be decrypted if current keys leak) and future secrecy / post-compromise security (recovering from a key compromise).
  • Symmetric ratchet: Within a single DH ratchet step, a KDF chain derives unique per-message keys, ensuring each message uses a different encryption key.

4. Multi-device support:

  • Each device has its own identity and pre-keys
  • Sending a message to a user with N devices requires encrypting the message N times (once per device)
  • Alternative: use a device-group key and re-encrypt via a fan-out service

5. Server's role:

  • Stores encrypted messages for offline delivery
  • Stores public key bundles
  • CANNOT decrypt messages (does not have private keys)
  • Provides delivery receipts and presence (metadata, which is a separate privacy concern)

Challenges to discuss:

  • Group messaging: Sender Keys (Signal) or MLS (Message Layer Security, RFC 9420) for efficient group key management
  • Key verification: Safety numbers / QR code comparison to prevent MITM
  • Metadata protection: Sealed sender (encrypt the sender's identity)
  • Backup: How to back up message history without compromising E2EE (client-side encrypted backups with a user-held key)

For related architectural patterns, see our system design interview guide.


Question 14: What is key derivation and when would you use a KDF?

What the interviewer is really asking

They want to see that you understand why raw key material (from a DH exchange, a user password, or a master secret) needs to be processed before use, and that you know which KDF to use in which context.

Answer framework

A Key Derivation Function (KDF) takes input key material and produces one or more cryptographically strong keys. There are two main categories:

1. Password-based KDFs (slow, for human-chosen secrets):

  • Argon2id, scrypt, bcrypt, PBKDF2
  • Deliberately slow to resist brute-force attacks on low-entropy passwords
  • Used for: password hashing, deriving encryption keys from passphrases

2. Key-based KDFs (fast, for high-entropy secrets):

  • HKDF (HMAC-based Key Derivation Function, RFC 5869)
  • Used when input already has sufficient entropy (DH shared secrets, random seeds)
  • Two phases: Extract (compress input into a fixed-length pseudorandom key) and Expand (derive multiple output keys from the extracted key)
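HKDF's two phases are simple enough to sketch from RFC 5869 (illustration only; in production use an audited implementation such as the HKDF class in the `cryptography` package):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    # Extract: compress input keying material into a pseudorandom key
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Expand: derive `length` output bytes, bound to the `info` context
    okm, block = b"", b""
    for counter in range(1, -(-length // 32) + 1):  # ceil(length / 32) blocks
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

shared_secret = b"raw DH output"  # illustrative input keying material
enc_key = hkdf_sha256(shared_secret, b"salt", b"app v1 encryption", 32)
mac_key = hkdf_sha256(shared_secret, b"salt", b"app v1 mac", 32)
assert enc_key != mac_key  # domain separation via the info parameter
```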

Why not just use the raw shared secret?

  • DH outputs may have biased bit distributions (not uniformly random)
  • You often need multiple keys from one secret (encryption key, MAC key, IV)
  • The info parameter enables domain separation: deriving different keys for different purposes from the same secret, preventing cross-protocol attacks
  • The salt parameter provides resistance to pre-computation attacks

KDFs are used extensively in TLS 1.3, Signal Protocol, WireGuard, and any system that derives session keys from a key exchange.


Question 15: How do you handle encryption in transit for microservices?

What the interviewer is really asking

They want to see how you apply encryption concepts to a concrete microservices architecture. This tests your understanding of mTLS, service mesh, certificate management at scale, and the trade-offs between different approaches.

Answer framework

In a microservices architecture, encryption in transit protects against network-level attackers who can observe or modify traffic between services (including compromised hosts in the same data center).

Approach 1: mTLS (Mutual TLS)

Both client and server present certificates and verify each other's identity.

  • Certificate management: At scale (hundreds of services), you need automated certificate issuance and rotation. Solutions include SPIFFE/SPIRE (identity framework), HashiCorp Vault PKI, or a service mesh.
  • Short-lived certificates: Issue certificates with lifetimes of hours (not years) to limit the impact of compromise and eliminate the need for revocation.
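As a sketch, Python's ssl module can express the server side of such an mTLS policy (certificate and CA paths are illustrative and left commented out):

```python
import ssl

# Server-side context that REQUIRES a valid client certificate (mTLS)
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.verify_mode = ssl.CERT_REQUIRED            # reject cert-less clients
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # ephemeral key exchange only

# In production, load the service certificate and the internal CA bundle
# (paths below are illustrative):
# ctx.load_cert_chain("svc.pem", "svc.key")
# ctx.load_verify_locations("internal-ca.pem")
```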

Approach 2: Service Mesh (Istio, Linkerd)

A sidecar proxy handles mTLS transparently. Applications communicate in plaintext to localhost; the sidecar encrypts/decrypts all inter-service traffic.

Advantages: zero application code changes, centralized policy, automatic certificate rotation, traffic observability.

Approach 3: Application-level encryption (gRPC with TLS)

The application itself manages TLS connections. More control but more operational burden. Suitable when you need end-to-end encryption through load balancers that should not terminate TLS.

Key considerations:

  • Performance: TLS handshakes add latency. Use connection pooling, HTTP/2 multiplexing, and session resumption to amortize handshake costs.
  • Certificate rotation without downtime: Support loading new certificates without restarting services. Go's tls.Config supports dynamic certificate reloading; Envoy supports hot restart.
  • Internal CA: Do not use public CAs for internal services. Run your own CA (Vault PKI, step-ca, or cloud provider private CA) with an internal trust store.
  • Zero-trust networking: mTLS is a cornerstone of zero-trust architectures. Combine with service-level authorization policies (not just authentication).



How to Practice

  1. Set up a local PKI: Use step-ca or OpenSSL to create a root CA, intermediate CA, and issue server/client certificates. Configure mTLS between two services.

  2. Implement envelope encryption: Write a small application that uses AWS KMS (or LocalStack for free) to wrap/unwrap DEKs and encrypt database records per-tenant.

  3. Analyze TLS handshakes: Use Wireshark to capture TLS 1.2 and 1.3 handshakes. Count the round trips and identify the cipher suite negotiation.

  4. Break insecure crypto: Try the Cryptopals challenges (sets 1-4) to understand why ECB, CBC padding, and stream cipher nonce reuse are dangerous. Nothing teaches security like exploiting vulnerabilities.

  5. Build a password hashing benchmark: Compare the time-per-hash for MD5, SHA-256, bcrypt (cost 12), and Argon2id with varying parameters. This makes the "why slow hashing" argument visceral.
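A stdlib-only starting point for that benchmark (bcrypt and Argon2id need third-party packages, so PBKDF2 and scrypt stand in for the slow hashes):

```python
import hashlib
import os
import time

def bench(label: str, fn, reps: int = 3) -> float:
    """Average wall-clock time per call, printed for comparison."""
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    dt = (time.perf_counter() - t0) / reps
    print(f"{label:>20}: {dt * 1000:8.2f} ms/hash")
    return dt

pw, salt = b"hunter2", os.urandom(16)
t_sha = bench("SHA-256 (wrong!)", lambda: hashlib.sha256(salt + pw).digest())
t_kdf = bench("PBKDF2-SHA256 600k",
              lambda: hashlib.pbkdf2_hmac("sha256", pw, salt, 600_000))
t_scr = bench("scrypt n=2^14 r=8",
              lambda: hashlib.scrypt(pw, salt=salt, n=2**14, r=8, p=1))
assert t_kdf > t_sha and t_scr > t_sha  # slow by design
```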

  6. Practice system design: Use Algoroq's system design interview guide to practice designing systems with security requirements (E2EE messaging, secure file storage, payment processing).


Common Mistakes to Avoid

  1. Rolling your own crypto. This is the cardinal sin. Use well-audited libraries (libsodium, OpenSSL, the cryptography Python package, Web Crypto API). Never implement AES, RSA, or ECDSA yourself.

  2. Using encryption without authentication. AES-CBC provides confidentiality but not integrity. An attacker can flip bits in the ciphertext without detection. Always use AEAD modes (AES-GCM, ChaCha20-Poly1305) or encrypt-then-MAC.

  3. Reusing nonces/IVs. AES-GCM with a repeated nonce completely breaks both confidentiality and authenticity. Use random 96-bit nonces or a counter that never repeats. For high-volume encryption, consider AES-GCM-SIV (nonce-misuse resistant).

  4. Confusing encoding with encryption. Base64 is not encryption. JWTs signed with HMAC are not encrypted (the payload is merely base64-encoded and readable by anyone). Use JWE if you need encrypted tokens.

  5. Hardcoding keys in source code. Keys in Git repositories are keys in the hands of every developer, CI system, and anyone who ever gains access to the repo. Use a secrets manager (Vault, AWS Secrets Manager, SOPS).

  6. Ignoring key rotation. A key that has been in use for years has had years of exposure to potential compromise. Design your systems for key rotation from day one: version your keys, support decryption with old versions, and encrypt with the latest.

  7. Over-engineering crypto for the wrong threat model. Before choosing a scheme, define your threat model. Encrypting data at rest with AES-256 is pointless if the application server has an SQL injection vulnerability that returns plaintext. Security is a chain; cryptography is one link.

  8. Neglecting metadata. Encryption protects content but not metadata. An attacker who can see that Service A calls Service B 1000 times per second learns about traffic patterns. Consider what metadata your encryption scheme leaves exposed.

For more interview preparation, explore our interview questions library and concept deep dives.
