
Preventing Timing Attacks in Rust: Constant-Time Comparisons and Secure Verification
Why timing attacks matter
A timing attack works when an attacker can repeatedly measure response times and correlate them with secret-dependent behavior. Common examples include:
- comparing API keys or password hashes with early-exit equality checks
- verifying MACs, HMACs, or signatures with non-constant-time logic
- branching on secret values before all checks are complete
- returning different error paths that reveal how much of a secret matched
Even tiny differences can be amplified over many requests. In networked systems, noise makes attacks harder, but not impossible. If the secret is valuable enough, assume an attacker will try.
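To see why repetition helps an attacker, consider a toy simulation (illustrative only: the `Lcg` PRNG and all timing numbers are made up): a 5-unit secret-dependent difference buried in roughly ±50 units of noise becomes visible once enough samples are averaged, because the noise in the mean shrinks by about the square root of the sample count.

```rust
/// Tiny deterministic PRNG (an LCG) so the example is reproducible.
/// This is a model, not a measurement of any real system.
struct Lcg(u64);

impl Lcg {
    /// Simulated measurement noise, roughly uniform in [-50, 50].
    fn noise(&mut self) -> i64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 33) % 101) as i64 - 50
    }
}

/// Average of `n` simulated response times centered on `base`.
fn mean_timing(base: i64, n: u64, rng: &mut Lcg) -> f64 {
    (0..n).map(|_| base + rng.noise()).sum::<i64>() as f64 / n as f64
}
```

With a single sample the 5-unit gap is invisible under ±50 units of noise; with 10,000 samples the noise in each mean drops well below one unit and the gap stands out.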
Rust makes memory safety easier, but timing safety is a separate concern. You must design for it explicitly.
The core rule: never compare secrets with ==
A normal equality check may stop at the first mismatch. That is fine for public data, but not for secrets.
Unsafe pattern
```rust
fn verify_api_key(provided: &str, expected: &str) -> bool {
    provided == expected
}
```

This is simple, but it can leak how many leading bytes matched.
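For intuition, here is a hand-rolled sketch of what a constant-time comparison does differently. This is an illustration only: compilers can sometimes optimize hand-rolled versions back into early exits, so prefer a vetted crate such as subtle in real code.

```rust
/// Illustration: compare every byte, accumulating differences instead of
/// returning at the first mismatch.
fn ct_eq_bytes(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        // Leaks the length; acceptable only when length is not secret.
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // OR in any differing bits; no early exit
    }
    diff == 0
}
```

The loop does the same amount of work whether the inputs differ in the first byte or the last, which is the property an early-exit `==` lacks.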
Safer approach: constant-time comparison
Use a constant-time comparison routine designed for secrets. The subtle crate is a common choice.
```rust
use subtle::ConstantTimeEq;

fn verify_api_key(provided: &[u8], expected: &[u8]) -> bool {
    provided.ct_eq(expected).into()
}
```

This avoids early exit and compares all bytes in a way intended to reduce timing leakage.
Important caveat
Constant-time comparison only helps if:
- both inputs are already the same length, or
- length differences are not secret
If length itself is sensitive, normalize the data first or compare fixed-size values such as hashes or MACs.
Compare fixed-size digests, not raw secrets
A strong pattern is to hash or MAC the secret into a fixed-size value, then compare the digest in constant time.
For example, instead of storing raw API tokens and comparing them directly, store a keyed HMAC or a salted hash. Then compare the computed result with the stored value.
Example: HMAC-based token verification
```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;
use subtle::ConstantTimeEq;

type HmacSha256 = Hmac<Sha256>;

fn verify_token(token: &[u8], expected_mac: &[u8], key: &[u8]) -> bool {
    let mut mac = HmacSha256::new_from_slice(key).expect("invalid key length");
    mac.update(token);
    let computed = mac.finalize().into_bytes();
    // The hmac crate also offers `mac.verify_slice(expected_mac)`, which
    // performs this constant-time tag comparison for you.
    computed.ct_eq(expected_mac).into()
}
```

This pattern is useful because:
- the output length is fixed
- the comparison can be constant-time
- the secret token itself is never compared directly
If you are validating passwords, use a password hashing algorithm such as Argon2 or scrypt, then compare the derived hash using the library’s verification API rather than hand-rolling comparisons.
Avoid secret-dependent branching
Timing leaks are not limited to equality checks. Any branch that depends on secret data can create measurable differences.
Example of risky branching
```rust
fn process_secret(secret_flag: bool, data: &[u8]) -> usize {
    if secret_flag {
        data.len() + 1
    } else {
        data.len()
    }
}
```

This example is trivial, but the same issue appears in real code when deciding whether to:
- parse additional fields
- perform extra validation
- return early on partial matches
- select one cryptographic path over another
Better pattern: do the same work regardless
Where possible, structure code so both paths execute the same operations, then select the result at the end using constant-time primitives or by deferring the branch until after all secret-dependent processing is complete.
For byte-level selection, subtle also provides constant-time choice helpers:
```rust
use subtle::Choice;

fn select_u8(a: u8, b: u8, choose_b: Choice) -> u8 {
    // Turn the choice bit (0 or 1) into an all-zeros or all-ones mask.
    let mask = choose_b.unwrap_u8().wrapping_neg();
    (a & !mask) | (b & mask)
}
```

This is lower-level than most application code needs (in practice, subtle's ConditionallySelectable trait provides conditional_select for the integer types), but it illustrates the principle: avoid secret-dependent control flow.
Normalize error handling to reduce leakage
Different errors can reveal where verification failed. For example:
- “user not found”
- “token malformed”
- “signature invalid”
- “timestamp expired”
These distinctions may be useful internally, but they can also help an attacker narrow the search space.
Recommended practice
Return a generic failure to the caller, and log the detailed reason only in protected internal logs.
```rust
#[derive(Debug)]
enum AuthError {
    InvalidCredentials,
    InternalError,
}

fn authenticate(input: &[u8]) -> Result<(), AuthError> {
    // Perform all checks, but do not reveal which one failed.
    if input.is_empty() {
        return Err(AuthError::InvalidCredentials);
    }
    // Additional verification...
    Err(AuthError::InvalidCredentials)
}
```

A uniform error response reduces oracle quality. If you need observability, record the detailed cause in a secure internal context, not in the client response.
Use constant-time APIs from trusted crates
Rust’s ecosystem includes libraries that already implement timing-resistant operations. Prefer them over custom code unless you have a strong reason to do otherwise.
| Use case | Recommended approach | Notes |
|---|---|---|
| Secret comparison | subtle::ConstantTimeEq | Good for byte slices and fixed-size outputs |
| HMAC verification | hmac + constant-time tag comparison | Compare finalized tags, not raw input |
| Password verification | argon2 or similar password hash crate | Use library verification APIs |
| Signature verification | Crypto library verification method | Avoid manual parsing and branching on secret state |
When choosing a crate, check whether it documents constant-time behavior. Not every function in a crypto crate is constant-time, and not every operation needs to be.
Practical example: secure API token verification
Suppose your service stores a token digest in the database and receives a token from a client. A secure verification flow might look like this:
- normalize the input
- compute a fixed-size keyed digest
- compare the digest in constant time
- return a generic failure on mismatch
```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;
use subtle::ConstantTimeEq;

type HmacSha256 = Hmac<Sha256>;

pub struct TokenVerifier {
    key: Vec<u8>,
    expected_tag: [u8; 32],
}

impl TokenVerifier {
    pub fn new(key: Vec<u8>, expected_tag: [u8; 32]) -> Self {
        Self { key, expected_tag }
    }

    pub fn verify(&self, token: &[u8]) -> bool {
        let mut mac = HmacSha256::new_from_slice(&self.key).expect("invalid key length");
        mac.update(token);
        let computed = mac.finalize().into_bytes();
        computed.ct_eq(&self.expected_tag).into()
    }
}
```

Why this is better
- The comparison is fixed-size.
- The verification result is a single boolean.
- The token itself is never compared with ==.
- The code is easy to audit.
What to avoid
- comparing raw strings directly
- trimming or parsing token data differently based on partial matches
- returning distinct errors for “bad length” versus “bad content”
Be careful with length checks
Length checks can be safe or unsafe depending on what they reveal.
Safe example
If the expected value has a fixed, publicly known length, checking the length first is fine and can improve performance.

```rust
use subtle::ConstantTimeEq;

fn verify_tag(input: &[u8], expected: &[u8; 32]) -> bool {
    // The 32-byte tag length is public, so this early return leaks nothing secret.
    if input.len() != 32 {
        return false;
    }
    let mut buf = [0u8; 32];
    buf.copy_from_slice(input);
    buf.ct_eq(expected).into()
}
```

Risky example
If the length itself is sensitive, an early return leaks information. In that case, prefer fixed-size encodings or normalize the input before comparison.
A common strategy is to encode secrets as fixed-length binary values or fixed-width text encodings, then compare the normalized form.
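As a sketch of that strategy (decode_token and eq_32 are hypothetical names; real code would use subtle for the comparison), a 64-character hex token can be normalized into a fixed 32-byte value before comparing:

```rust
/// Decode a 64-character hex token into a fixed 32-byte value. Every kind
/// of malformed input ("bad length" or "bad content") maps to the same
/// None, so callers cannot distinguish them. Note the parsing itself is
/// not constant-time; only the final comparison is.
fn decode_token(hex: &str) -> Option<[u8; 32]> {
    let bytes = hex.as_bytes();
    if bytes.len() != 64 {
        return None;
    }
    let mut out = [0u8; 32];
    for (i, chunk) in bytes.chunks(2).enumerate() {
        let hi = (chunk[0] as char).to_digit(16)?;
        let lo = (chunk[1] as char).to_digit(16)?;
        out[i] = ((hi << 4) | lo) as u8;
    }
    Some(out)
}

/// Compare two fixed-size values without early exit (illustration only;
/// prefer subtle::ConstantTimeEq in production).
fn eq_32(a: &[u8; 32], b: &[u8; 32]) -> bool {
    a.iter().zip(b.iter()).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
}
```

Because the normalized form is always exactly 32 bytes, the comparison length never depends on the secret.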
Don’t forget parsing and validation
Timing leaks often happen before the final comparison. For example, a parser may reject malformed input faster than nearly-correct input. That can still be useful to an attacker.
Guidelines
- Parse inputs into a canonical form before verification.
- Avoid repeated secret-dependent parsing attempts.
- Keep validation steps uniform when possible.
- Reject malformed input with the same generic error used for invalid secrets.
If you must accept multiple formats, normalize them first, then compare the normalized output in constant time.
Testing for timing regressions
You cannot prove timing safety with a unit test alone, but you can catch obvious regressions.
Good test targets
- direct use of == on secret values
- branches based on secret bytes
- distinct error messages for verification failures
- early returns in authentication code
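A regression test along these lines can at least pin down uniform error behavior. Here, authenticate and AuthError are a self-contained stand-in for your real verifier:

```rust
#[derive(Debug, PartialEq)]
enum AuthError {
    InvalidCredentials,
}

/// Stand-in verifier: every failure mode collapses to the same variant,
/// and the comparison accumulates differences instead of exiting early.
fn authenticate(input: &[u8]) -> Result<(), AuthError> {
    let expected = b"expected-token-value-32-bytes!!!";
    if input.len() != expected.len() {
        return Err(AuthError::InvalidCredentials);
    }
    let diff = input
        .iter()
        .zip(expected.iter())
        .fold(0u8, |acc, (x, y)| acc | (x ^ y));
    if diff != 0 {
        return Err(AuthError::InvalidCredentials);
    }
    Ok(())
}
```

A test can then assert that an empty input, a wrong-length input, and a wrong-content input all produce the identical error value, catching anyone who later adds a distinct "bad length" message.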
Example audit checklist
| Question | Desired answer |
|---|---|
| Are secrets compared with constant-time APIs? | Yes |
| Are error messages uniform to clients? | Yes |
| Is secret-dependent branching minimized? | Yes |
| Are lengths fixed or non-sensitive? | Yes |
| Are crypto operations delegated to vetted crates? | Yes |
For deeper analysis, use code review and profiling tools. A timing-safe implementation should be boring: predictable, uniform, and easy to reason about.
When constant-time is not enough
Constant-time comparison is only one part of secure verification. You also need to protect the surrounding system.
Consider these controls
- rate limit authentication attempts
- lock out or slow repeated failures
- use TLS to prevent passive observation
- store secrets with strong hashing or MACs
- rotate keys and tokens regularly
- keep detailed diagnostics out of client-visible responses
If an attacker can make millions of requests, even small leaks become more dangerous. Defense in depth matters.
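As one concrete control from the list above, a minimal fixed-window rate limiter might look like this (a sketch only; RateLimiter is an illustrative name, and production systems usually use dedicated middleware or a shared store such as Redis):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Fixed-window limiter: at most `max_attempts` checks per `window`
/// for each client key (e.g. an IP address or account id).
struct RateLimiter {
    window: Duration,
    max_attempts: u32,
    attempts: HashMap<String, (Instant, u32)>,
}

impl RateLimiter {
    fn new(window: Duration, max_attempts: u32) -> Self {
        Self { window, max_attempts, attempts: HashMap::new() }
    }

    /// Returns false once the client has exhausted its budget for the window.
    fn check(&mut self, client: &str, now: Instant) -> bool {
        let entry = self.attempts.entry(client.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired: start a fresh count
        }
        entry.1 += 1;
        entry.1 <= self.max_attempts
    }
}
```

Taking `now` as a parameter keeps the limiter deterministic and easy to test; a caller would pass Instant::now().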
Summary
Timing attacks exploit differences in execution time to infer secrets. In Rust, the main defenses are straightforward:
- do not use == for secrets
- compare fixed-size digests in constant time
- avoid secret-dependent branching
- return generic failures to clients
- use vetted crypto crates instead of custom verification logic
The safest code is usually the simplest: normalize input, compute a fixed-size verification value, compare it with a constant-time API, and keep the rest of the control flow uniform.
