
Your Stack Is Leaking: How Memory Operations Expose Cryptographic Secrets in Rust

Learn how Rust's stack and heap retain sensitive cryptographic key material after drop, how attackers exploit residual memory through cold boot attacks, crash dumps, and timing oracles — and how to prevent it using zeroize, secrecy, and constant-time comparisons.

By Luis Soares · March 17, 2026 · 16 min read · Originally published on Medium

There is a class of security vulnerabilities that doesn't appear in your logic, your algorithms, or your API design. It lives in the space between when your program finishes using a value and when that memory is actually cleared. It's quiet, it doesn't crash anything, and the compiler won't warn you about it.

This article is about that gap — specifically how the stack and heap retain sensitive values after you think you're done with them, how an attacker can extract them, and how Rust gives you the tools to close the window completely.

We'll go hands-on: first reproducing the leak, then exploiting it in a controlled setting, then applying the correct mitigations with working code. The context throughout is cryptographic key material, which is the highest-stakes case — but the same principles apply to passwords, tokens, seeds, and any other secret your program handles.

The Fundamental Problem: Drop Does Not Mean Zero

When a variable goes out of scope in Rust, its Drop implementation runs and the memory is deallocated — returned to the allocator for future use. But deallocated is not the same as erased. The bytes that used to represent your private key are still sitting in memory, unchanged, until something else happens to write over them.

On the stack, this happens predictably. A stack frame is just a contiguous region of memory carved out by moving the stack pointer. When the frame is popped, the pointer moves back. The old values are still there — they're just "below" the stack pointer and considered available for the next allocation.

Let's prove this empirically.

Demo 1: Observing Residual Key Material

fn generate_and_drop_key() {
    // Simulate a 32-byte secret key
    let key: [u8; 32] = [
        0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0xBA, 0xBE,
        0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
        0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10,
        0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
    ];

    println!("Key in use at: {:p}", key.as_ptr());
    // key goes out of scope here - dropped but NOT zeroed
}

fn observe_stack() {
    // Allocate a new array in the same region the key just occupied
    let stack_region: [u8; 32] = unsafe {
        // Read whatever happens to be in this stack slot
        // In a real exploit this is done via a buffer over-read or
        // a use-after-free vulnerability
        std::mem::MaybeUninit::<[u8; 32]>::uninit().assume_init()
    };
    print!("Stack region after key drop: ");
    for byte in &stack_region {
        print!("{:02x} ", byte);
    }
    println!();
}

fn main() {
    generate_and_drop_key();
    observe_stack();
}

Run this in release mode. In many cases — especially without inlining — you will see the exact bytes of the key still present in the stack region. The output will look something like:

Stack region after key drop: de ad be ef ca fe ba be 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10 11 22 33 44 55 66 77 88

The key is gone from Rust's perspective. From the hardware's perspective, it's exactly where it was.

Note: The behavior is technically undefined because we're reading uninitialized memory. In a controlled attack, the adversary doesn't need to rely on this — they use a legitimate read primitive (a buffer over-read, a format string vulnerability, a side channel) to reach the same data.

Demo 2: A Realistic Attack Surface — Use-After-Free in an Unsafe Context

Residual stack data is interesting, but a more realistic attack surface is when an allocator reuses heap memory that previously held key material. This happens in any program that allocates and deallocates Vec<u8> buffers for cryptographic operations.

use std::alloc::{alloc, dealloc, Layout};

fn sign_and_free() -> *mut u8 {
    let layout = Layout::array::<u8>(64).unwrap();

    unsafe {
        let ptr = alloc(layout);

        // Simulate writing a secret key into heap-allocated memory
        let key_data: [u8; 64] = [0x42u8; 64]; // 64 bytes of "secret" key material
        std::ptr::copy_nonoverlapping(key_data.as_ptr(), ptr, 64);

        // Sign something, use the key...
        println!("Key loaded at heap address: {:p}", ptr);

        // Deallocate - common mistake: no zeroing before free
        dealloc(ptr, layout);

        ptr // Return the now-dangling pointer to observe residual data
    }
}

fn allocate_in_same_region() -> Vec<u8> {
    // The allocator will likely hand out the same memory region
    vec![0u8; 64]
}

fn main() {
    let dangling_ptr = sign_and_free();
    let new_buffer = allocate_in_same_region();

    // In a heap inspection attack (e.g., after a process crash dumps memory,
    // or via a controlled heap spray), the attacker reads the new_buffer's
    // backing memory at the address that used to hold key material.
    // The allocator hasn't zeroed it. The key is still there.

    unsafe {
        print!("Heap region contents: ");
        for i in 0..64 {
            print!("{:02x} ", *dangling_ptr.add(i));
        }
        println!();
    }
}

This pattern is not theoretical. It shows up in:

  • Crash dumps: A process crash writes all memory to disk. Any secret that was allocated and freed without zeroing is on disk now.
  • Memory inspection by co-tenant processes: In containerized environments without memory isolation, a privileged co-tenant can inspect /proc/<pid>/mem.
  • Cold boot attacks: Physical access + freezing RAM preserves contents after power-off. DRAM retains data for seconds to minutes at room temperature, much longer when cooled.
  • Swap files: The OS can page out heap memory containing your unzeroed keys to disk, where they persist indefinitely.

Demo 3: Timing Oracles — The Subtler Leak

Memory retention is not the only way sensitive data leaks through operations. Comparison operations that short-circuit are a second major attack surface.

Consider MAC verification:

fn verify_mac_insecure(expected: &[u8], computed: &[u8]) -> bool {
    // This is the most natural way to write this in Rust.
    // It is also wrong for cryptographic use.
    expected == computed
}

The == operator on byte slices returns false as soon as it finds the first differing byte. This means the function returns faster when the first byte is wrong than when the first 31 bytes are right and only the last byte differs.

An attacker who can make many verification requests and measure response times can use this timing difference to recover the expected MAC byte by byte — a timing oracle attack. The complexity is O(256 * N) guesses instead of O(256^N). For a 32-byte MAC, that's 8,192 requests instead of 2²⁵⁶.

Here's a controlled demonstration:

use std::time::{Duration, Instant};

fn verify_timing_leak(expected: &[u8], guess: &[u8]) -> (bool, Duration) {
    let start = Instant::now();
    let result = expected == guess;
    let elapsed = start.elapsed();
    (result, elapsed)
}

fn main() {
    let secret_mac = vec![0xAAu8; 32];

    // Guess with correct first byte, wrong rest
    let mostly_wrong = {
        let mut g = vec![0x00u8; 32];
        g[0] = 0xAA; // First byte correct
        g
    };

    // Guess with all wrong bytes
    let all_wrong = vec![0x00u8; 32];

    // In practice you'd average thousands of measurements.
    // Even here you'll often see a nanosecond difference.
    let (_, t1) = verify_timing_leak(&secret_mac, &mostly_wrong);
    let (_, t2) = verify_timing_leak(&secret_mac, &all_wrong);

    println!("Correct first byte: {:?}", t1);
    println!("All wrong:          {:?}", t2);
    println!("Difference:         {:?}", t1.checked_sub(t2));
}

This timing difference is usually single-digit nanoseconds, which is too small to measure reliably over a network. But with local access, or over a very low-latency connection, it's been demonstrated successfully in practice — including against real-world TLS implementations.

Prevention Part 1: Zeroizing Memory with the zeroize Crate

The zeroize crate is the standard Rust solution for explicit memory erasure. It provides guaranteed zeroing that the compiler cannot optimize away — which is a critical distinction.

The naive approach:

// WRONG: the compiler may eliminate this as a "dead store"
// since the value is never read after writing
fn bad_zero(key: &mut [u8]) {
    for b in key.iter_mut() {
        *b = 0;
    }
    // Compiler sees: this write is followed by no read → dead store → can be removed
}

Modern compilers are very good at eliminating dead stores. If the key is never read after you zero it, the zeroing write has no observable effect on program behavior — so the optimizer removes it. Your zeroing code disappears at -O2.

The zeroize crate prevents this by performing each write through core::ptr::write_volatile and following the writes with a compiler fence. Volatile stores are defined to have observable side effects, so the optimizer cannot prove them dead and must emit them:

use zeroize::Zeroize;

fn sign_message(key_bytes: &[u8], message: &[u8]) -> Vec<u8> {
    let mut working_key = key_bytes.to_vec();

    // ... perform signing operation using working_key ...
    let signature = simulate_signing(&working_key, message);

    // Explicitly zero before drop
    // This CANNOT be optimized away
    working_key.zeroize();

    signature
}

fn simulate_signing(key: &[u8], message: &[u8]) -> Vec<u8> {
    // Placeholder
    message.to_vec()
}

The ZeroizeOnDrop Derive Macro

For types that own secret data, ZeroizeOnDrop implements Drop automatically, making the zeroing impossible to forget:

use zeroize::{Zeroize, ZeroizeOnDrop};

#[derive(Zeroize, ZeroizeOnDrop)]
struct SigningKey {
    key_bytes: Vec<u8>,
    // All fields are zeroed when this struct drops,
    // even if the drop happens due to a panic
}

impl SigningKey {
    fn new(bytes: Vec<u8>) -> Self {
        assert_eq!(bytes.len(), 32, "Key must be exactly 32 bytes");
        Self { key_bytes: bytes }
    }

    fn sign(&self, message: &[u8]) -> Vec<u8> {
        // use self.key_bytes...
        message.to_vec() // placeholder
    }
}

fn main() {
    let raw_key = vec![0x42u8; 32];

    {
        let signing_key = SigningKey::new(raw_key);
        let _sig = signing_key.sign(b"important message");
        // signing_key drops here - key_bytes is zeroed before deallocation
    }

    // At this point, no trace of the key material remains in
    // the heap memory that backed key_bytes.
}


The ZeroizeOnDrop derive works even when the drop is triggered by a panic unwinding the stack. This is important because panic paths are exactly where secret material is most likely to leak — the programmer didn't anticipate the path, so they didn't put manual cleanup there.

Zeroing Stack-Allocated Secrets

For fixed-size keys on the stack, Zeroize is implemented for arrays:

use zeroize::Zeroize;

fn process_key() {
    let mut key = [0u8; 32];

    // Fill key from secure source...
    fill_from_rng(&mut key);

    // Use key...
    let _result = use_key(&key);

    // Explicit zero - guaranteed not optimized away
    key.zeroize();

    // key now contains all zeros before it goes out of scope
}

fn fill_from_rng(buf: &mut [u8]) {
    // In production: use rand::RngCore or getrandom
    for (i, b) in buf.iter_mut().enumerate() {
        *b = i as u8; // placeholder
    }
}

fn use_key(key: &[u8]) -> bool {
    key.len() == 32
}

Prevention Part 2: The secrecy Crate — Wrapping Secrets at the Type Level

zeroize is the mechanism. secrecy is the abstraction built on top of it that enforces safe usage at the type level. A Secret<T> wrapper:

  • Implements ZeroizeOnDrop — the inner value is zeroed automatically
  • Does not implement Debug — secrets cannot accidentally appear in logs or error messages
  • Requires explicit opt-in to access the inner value via .expose_secret()
  • Makes secret usage visible and grep-able in code review

use secrecy::{ExposeSecret, Secret};
use zeroize::Zeroize;

#[derive(Zeroize)]
struct RawKeyBytes(Vec<u8>);

fn load_key_from_store() -> Secret<RawKeyBytes> {
    // In production: load from HSM, keychain, or encrypted storage
    Secret::new(RawKeyBytes(vec![0x42u8; 32]))
}

fn sign_with_key(key: &Secret<RawKeyBytes>, message: &[u8]) -> Vec<u8> {
    // The .expose_secret() call is the audit point:
    // searching for this in your codebase shows every place
    // that touches raw key material
    let raw = key.expose_secret();

    // Use raw.0 (the Vec<u8>) here
    // ... signing logic ...
    message.to_vec() // placeholder
}

fn main() {
    let key = load_key_from_store();

    // This won't compile - Secret<T> has no Display or Debug:
    // println!("{:?}", key);  // ← compile error ✓

    let _signature = sign_with_key(&key, b"hello");

    // key drops here, RawKeyBytes is zeroed via ZeroizeOnDrop
}

The expose_secret() pattern makes a critical difference during code review. Every access to raw key material is tagged with a distinctive call site that stands out in diffs. You can grep for it. Your CI can flag new occurrences for mandatory review.

Prevention Part 3: Constant-Time Comparisons with the subtle Crate

Returning to the timing oracle from Demo 3, the fix is a comparison operation that always takes the same amount of time regardless of how many bytes match:

use subtle::ConstantTimeEq;

fn verify_mac_secure(expected: &[u8], computed: &[u8]) -> bool {
    // ct_eq() compares all bytes regardless of where they differ.
    // The result is a Choice (0 or 1), not a bool - this prevents
    // the compiler from introducing conditional branches.
    expected.ct_eq(computed).into()
}

fn verify_mac_insecure(expected: &[u8], computed: &[u8]) -> bool {
    // DO NOT USE for cryptographic MACs
    expected == computed
}

The subtle crate's Choice type is not a bool. It's a u8 that happens to be 0 or 1, and the crate carefully avoids generating branch instructions during its operations. Conversion to bool via .into() is the only exit point, and by then all the secret-dependent work is done.

A complete MAC verification example showing all the pieces together:

use hmac::{Hmac, Mac};
use sha2::Sha256;
use subtle::ConstantTimeEq;
use secrecy::{ExposeSecret, Secret};
use zeroize::{Zeroize, ZeroizeOnDrop};

type HmacSha256 = Hmac<Sha256>;

#[derive(Zeroize, ZeroizeOnDrop)]
struct MacKey(Vec<u8>);

fn compute_mac(key: &Secret<MacKey>, message: &[u8]) -> Vec<u8> {
    let raw_key = key.expose_secret();
    let mut mac = HmacSha256::new_from_slice(&raw_key.0)
        .expect("HMAC can take any key length");
    mac.update(message);
    mac.finalize().into_bytes().to_vec()
}

fn verify_mac(key: &Secret<MacKey>, message: &[u8], received_mac: &[u8]) -> bool {
    let expected = compute_mac(key, message);

    // Constant-time comparison - no timing oracle
    expected.as_slice().ct_eq(received_mac).into()
}

fn main() {
    let key = Secret::new(MacKey(vec![0x42u8; 32]));
    let message = b"transfer 1000 EUR to account 12345";

    let mac = compute_mac(&key, message);

    // Correct MAC
    assert!(verify_mac(&key, message, &mac));

    // Tampered MAC - returns false in constant time
    let mut tampered = mac.clone();
    tampered[0] ^= 0x01;
    assert!(!verify_mac(&key, message, &tampered));

    println!("MAC verification correct. Key material zeroed on drop.");
}

Prevention Part 4: What About Heap Fragmentation and Allocator Reuse?

Even with zeroize and ZeroizeOnDrop, there's a subtler problem: the allocator itself. When Vec<u8> is reallocated because it grew beyond its capacity, the old backing allocation is returned to the allocator — and zeroize only zeroes what the Vec currently points to. The previous, smaller allocation that held partial key data is not zeroed.

The correct pattern for heap-allocated secrets is to pre-allocate at the correct size and never grow:

use zeroize::ZeroizeOnDrop;

#[derive(ZeroizeOnDrop)]
struct HeapSecret {
    data: Vec<u8>,
}

impl HeapSecret {
    fn new(size: usize) -> Self {
        // Pre-allocate exactly the right size.
        // Vec::with_capacity avoids reallocations that leave residual copies.
        let mut data = Vec::with_capacity(size);
        data.resize(size, 0u8);
        Self { data }
    }

    fn fill(&mut self, source: &[u8]) {
        assert_eq!(source.len(), self.data.len(),
            "Source must exactly match pre-allocated size");
        self.data.copy_from_slice(source);
        // No reallocation occurred - capacity was already correct
    }
}

For the highest-security scenarios — HSM integrations, kernel-space code, or post-quantum key generation where even a brief copy is unacceptable — you want memory that the OS will never page to disk. On Linux, this is mlock(). The memsec crate provides this.

The Post-Quantum Dimension: Why This Gets Harder with ML-DSA

ML-DSA-65 signing key material is 4,032 bytes. That's 126x larger than a 32-byte Ed25519 key. Every principle in this article applies — but the attack surface scales up:

  • More bytes in memory for longer, because lattice operations take more time than ECC
  • Higher likelihood of heap reallocations during key deserialization (keys are large enough to trigger multiple growth cycles in naive Vec usage)
  • More stack pressure during the signing operation itself, meaning more residual material on stack frames

Additionally, ML-DSA's rejection sampling step involves polynomial arithmetic over intermediate values that are derived from the private key. If those intermediate buffers are not zeroed, an attacker with memory inspection can extract partial key information even without accessing the key directly.

The fips204 crate in Rust handles this correctly by implementing ZeroizeOnDrop on its SigningKey type. But the moment you copy bytes out of it for serialization, storage, or transmission — even temporarily — you own that copy, and you are responsible for zeroing it.

use fips204::ml_dsa_65::KG;
use fips204::traits::{KeyGen, SerDes};
use zeroize::Zeroizing;

fn serialize_signing_key_safely() -> Vec<u8> {
    let (sk, _vk) = KG::try_keygen().expect("keygen failed");

    // Zeroizing<Vec<u8>> is a Vec<u8> that calls zeroize() on drop
    // Use this any time you must copy key bytes out of fips204's types
    let mut sk_bytes: Zeroizing<Vec<u8>> =
        Zeroizing::new(sk.into_bytes().to_vec());

    // Encrypt sk_bytes here before persisting...
    // When sk_bytes drops, the raw key bytes are zeroed
    // even if an error occurs during encryption

    sk_bytes.to_vec() // In practice: return the encrypted form
}

Zeroizing<T> from the zeroize crate is exactly this — a transparent newtype wrapper that adds ZeroizeOnDrop to any type that implements Zeroize.

A Production Checklist

Bringing everything together, here is what secure key material handling looks like in Rust:

use secrecy::{ExposeSecret, Secret};
use subtle::ConstantTimeEq;
use zeroize::{Zeroize, Zeroizing, ZeroizeOnDrop};

// ── 1. Key types implement ZeroizeOnDrop ──────────────────────────────────────
#[derive(Zeroize, ZeroizeOnDrop)]
struct PrivateKeyMaterial {
    bytes: Vec<u8>,
}

// ── 2. Wrap in Secret<T> to prevent accidental logging ───────────────────────
fn load_key() -> Secret<PrivateKeyMaterial> {
    Secret::new(PrivateKeyMaterial {
        bytes: vec![0x42u8; 32], // In production: load from secure storage
    })
}

// ── 3. Pre-allocate to the correct size - no reallocation ────────────────────
fn prepare_key_buffer(expected_size: usize) -> Zeroizing<Vec<u8>> {
    let mut buf = Vec::with_capacity(expected_size);
    buf.resize(expected_size, 0);
    Zeroizing::new(buf)
}

// ── 4. Constant-time comparison for all MAC/signature/token equality checks ──
fn verify_token(expected: &[u8], supplied: &[u8]) -> bool {
    expected.ct_eq(supplied).into()
}

// ── 5. Explicit audit point for every key access ─────────────────────────────
fn use_key_for_signing(key: &Secret<PrivateKeyMaterial>, msg: &[u8]) -> Vec<u8> {
    let material = key.expose_secret(); // ← visible in code review
    // sign with material.bytes...
    msg.to_vec()
}

fn main() {
    let key = load_key();
    let _sig = use_key_for_signing(&key, b"important data");
    // key drops here → PrivateKeyMaterial.zeroize() runs → bytes = [0, 0, ...]
}

The checklist in brief:

  1. ZeroizeOnDrop on every type that holds key bytes — including intermediate buffers
  2. Zeroizing<T> for temporary heap allocations that hold key copies
  3. Secret<T> as the public API surface for any function that needs key material — makes accesses grep-able
  4. subtle::ConstantTimeEq for all comparisons involving secret values or derived values (MACs, tokens, nonces)
  5. Pre-allocate at the correct size to avoid allocator leaving residual copies
  6. Never log, format, or debug-print types containing secret material — Secret<T> enforces this at compile time
  7. Verify your dependency chain: check that the crypto crates you use (like fips204) implement ZeroizeOnDrop themselves — don't assume

Conclusion

The stack and heap do not forget. When your code drops a signing key, a password, or a cryptographic seed, the bytes that represented that value linger in memory until something deliberately overwrites them. The allocator doesn't do it. The OS doesn't do it. The compiler won't do it for you — and may even undo your attempt if it determines the write is dead.

Rust gives you better tools for this than almost any other language: zeroize for guaranteed erasure, ZeroizeOnDrop for automatic cleanup even through panics, secrecy for type-level visibility into key access, and subtle for constant-time operations that close the timing oracle surface. These are not optional hardening measures for paranoid security engineers. They are the baseline for any code that handles secrets.

The proof is in the gap between what "drop" means to Rust and what it means to an attacker. They are not the same thing. Now they can be.
