What SSRF looks like in Rust services

A typical SSRF bug starts with user-controlled input that becomes a URL:

  • a POST /preview?url=... endpoint fetches remote metadata
  • a webhook tester sends requests to arbitrary addresses
  • a file import feature downloads content from a supplied URL
  • an admin tool probes internal services by hostname

The danger is not limited to public websites. Attackers often target:

  • http://localhost
  • private IP ranges such as 10.0.0.0/8 and 192.168.0.0/16
  • cloud metadata endpoints like 169.254.169.254
  • internal DNS names
  • alternate schemes such as file://, ftp://, or gopher:// if your client accepts them

A safe implementation must answer three questions before any request is sent:

  1. Is the URL allowed?
  2. Does it resolve to a safe destination?
  3. Is the request constrained enough to prevent redirects, protocol abuse, and resource exhaustion?

A practical defense strategy

The most reliable approach is layered defense. Do not rely on a single check.

  • URL allowlist: restrict which hosts or domains may be contacted. Best for known integrations.
  • Scheme validation: permit only http and https. Reject everything else.
  • DNS/IP filtering: block private, loopback, link-local, and metadata IPs. Must be checked after resolution.
  • Redirect control: prevent redirect chains to unsafe destinations. Limit or disable redirects.
  • Timeout and size limits: reduce abuse and hanging connections. Set connect, read, and total limits.
  • Egress network policy: enforce restrictions outside the app. Strongest backstop in production.

The rest of this article focuses on implementing these controls in Rust using a common HTTP client pattern.


Start with a strict URL parser

Never treat a string as a URL until it has been parsed and validated. The url crate is a good foundation because it normalizes components and exposes structured access to the scheme, host, and port.

A minimal validation function should reject:

  • missing hostnames
  • unsupported schemes
  • credentials in the URL
  • suspicious ports, if your application only expects standard ones

use url::Url;

fn validate_url(input: &str) -> Result<Url, String> {
    let url = Url::parse(input).map_err(|_| "invalid URL".to_string())?;

    match url.scheme() {
        "http" | "https" => {}
        _ => return Err("unsupported scheme".to_string()),
    }

    if !url.username().is_empty() || url.password().is_some() {
        return Err("credentials in URL are not allowed".to_string());
    }

    if url.host_str().is_none() {
        return Err("missing host".to_string());
    }

    Ok(url)
}

This is only the first layer. A URL like https://example.com may still redirect to an internal address, and a hostname may resolve to a private IP.


Enforce host allowlists when possible

If your feature only needs to contact a small set of known services, an allowlist is the simplest and safest design. For example, a webhook delivery system may only need to send requests to customer-owned domains that have been verified in advance.

You can compare the parsed host against an allowlist of exact hosts or approved suffixes.

fn host_allowed(host: &str) -> bool {
    let exact = ["api.example.com", "hooks.example.net"];
    let suffixes = [".trusted-partner.com"];

    exact.contains(&host) || suffixes.iter().any(|suffix| host.ends_with(suffix))
}

Be careful with suffix matching. eviltrusted-partner.com is not the same as trusted-partner.com; the leading dot in ".trusted-partner.com" is what anchors the comparison to a subdomain boundary. Note that the dotted suffix also excludes the apex domain itself, so add trusted-partner.com to the exact list if you need it. Prefer exact hostnames when you can. If you must allow subdomains, ensure the match is anchored correctly.

A good rule of thumb:

  • use exact allowlists for internal integrations
  • use domain verification for customer-supplied webhook targets
  • avoid arbitrary public URL fetches unless absolutely necessary
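The anchoring concern is easy to pin down with a quick table-driven test against the host_allowed function shown above (repeated here so the snippet stands alone):

```rust
fn host_allowed(host: &str) -> bool {
    let exact = ["api.example.com", "hooks.example.net"];
    let suffixes = [".trusted-partner.com"];
    exact.contains(&host) || suffixes.iter().any(|suffix| host.ends_with(suffix))
}

#[test]
fn suffix_matching_is_anchored() {
    assert!(host_allowed("api.example.com"));
    assert!(host_allowed("hooks.trusted-partner.com"));
    // The leading dot keeps lookalike domains out:
    assert!(!host_allowed("eviltrusted-partner.com"));
    // The dotted suffix does not cover the apex domain itself:
    assert!(!host_allowed("trusted-partner.com"));
}
```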

Block unsafe IP ranges after DNS resolution

Hostnames can resolve to private or local addresses even when the hostname itself looks harmless. For example, an attacker may control DNS for a domain that resolves to 127.0.0.1 or 169.254.169.254.

You should resolve the host and inspect every resulting IP address before connecting. The std::net::IpAddr type provides helpers for common unsafe ranges.

use std::net::IpAddr;

fn is_blocked_ip(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_private()                 // 10/8, 172.16/12, 192.168/16
                || v4.is_loopback()         // 127/8
                || v4.is_link_local()       // 169.254/16, including cloud metadata
                || v4.octets()[0] == 0      // 0.0.0.0/8 "this network"
                || v4.octets()[0] >= 224    // multicast and reserved
        }
        IpAddr::V6(v6) => {
            v6.is_loopback()                  // ::1
                || v6.is_unspecified()        // ::
                || v6.is_unique_local()       // fc00::/7
                || v6.is_unicast_link_local() // fe80::/10
        }
    }
}

This is a useful baseline, but production systems often need a more complete policy. For example, you may also want to block:

  • IPv4-mapped IPv6 addresses
  • documentation ranges used in testing
  • cloud metadata IPs explicitly
  • internal service CIDRs specific to your environment
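The first item, IPv4-mapped IPv6 addresses (::ffff:a.b.c.d), can slip past an IPv4-only check. One stdlib-only way to handle them is to normalize every resolved address before filtering; this sketch assumes Rust 1.63+ for to_ipv4_mapped:

```rust
use std::net::IpAddr;

/// Unwrap IPv4-mapped IPv6 addresses so that ::ffff:127.0.0.1 is
/// checked as 127.0.0.1 rather than as a "clean" IPv6 address.
fn normalize_ip(ip: IpAddr) -> IpAddr {
    match ip {
        IpAddr::V6(v6) => v6
            .to_ipv4_mapped()
            .map(IpAddr::V4)
            .unwrap_or(IpAddr::V6(v6)),
        v4 => v4,
    }
}
```

Run normalize_ip on each address before passing it to is_blocked_ip.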

If you operate in Kubernetes or a cloud VPC, define your own denylist for internal ranges and keep it in configuration.
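A configuration-driven denylist does not require a CIDR library; for IPv4, a prefix match is a comparison over the integer representation. The following is a minimal stdlib-only sketch, and the ranges shown are illustrative:

```rust
use std::net::Ipv4Addr;

/// True if `ip` falls inside the CIDR block `net`/`prefix`.
fn in_cidr_v4(ip: Ipv4Addr, net: Ipv4Addr, prefix: u8) -> bool {
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    (u32::from(ip) & mask) == (u32::from(net) & mask)
}

/// Example denylist; in production, load these ranges from configuration.
fn in_denylist(ip: Ipv4Addr) -> bool {
    let ranges = [
        (Ipv4Addr::new(10, 0, 0, 0), 8),
        (Ipv4Addr::new(172, 16, 0, 0), 12),
        (Ipv4Addr::new(192, 168, 0, 0), 16),
        (Ipv4Addr::new(169, 254, 0, 0), 16), // link-local, incl. metadata
        (Ipv4Addr::new(127, 0, 0, 0), 8),
    ];
    ranges.iter().any(|&(net, prefix)| in_cidr_v4(ip, net, prefix))
}
```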


Prevent redirect-based SSRF

Even if the initial URL is safe, a redirect can send the client to an unsafe target. This is a common SSRF bypass.

For security-sensitive outbound requests:

  • disable redirects entirely if the workflow allows it
  • or manually follow redirects with validation at each hop
  • cap the number of redirects to a small value

With reqwest, you can disable automatic redirects and inspect the Location header yourself.

use reqwest::{Client, redirect::Policy};

fn build_client() -> Result<Client, reqwest::Error> {
    Client::builder()
        .redirect(Policy::none())
        .timeout(std::time::Duration::from_secs(5))
        .build()
}

If your application must support redirects, re-validate every new URL exactly as you validate the original one. Do not assume a redirect target is safe just because the initial host was approved.
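The manual-follow approach can be sketched as a bounded loop that re-validates every hop. The fetch and validate closures below are stand-ins for the actual reqwest call and the URL checks described in this article:

```rust
/// Outcome of one HTTP exchange: either a final body or a redirect target.
enum Hop {
    Done(String),
    RedirectTo(String),
}

/// Follow at most MAX_HOPS redirects, re-validating every target URL.
fn follow_redirects(
    start: String,
    fetch: impl Fn(&str) -> Result<Hop, String>,
    validate: impl Fn(&str) -> bool,
) -> Result<String, String> {
    const MAX_HOPS: usize = 3;
    let mut url = start;
    for _ in 0..=MAX_HOPS {
        if !validate(&url) {
            return Err(format!("blocked destination: {url}"));
        }
        match fetch(&url)? {
            Hop::Done(body) => return Ok(body),
            Hop::RedirectTo(next) => url = next, // validated on the next pass
        }
    }
    Err("too many redirects".into())
}
```

Because each hop goes through the same validate closure as the initial URL, a 302 from a safe host to an internal address is rejected at the hop, not silently followed.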


Put the checks together

A secure outbound request flow usually looks like this:

  1. Parse the input as a URL.
  2. Validate the scheme.
  3. Check the hostname against an allowlist, if applicable.
  4. Resolve the hostname to IP addresses.
  5. Reject blocked IP ranges.
  6. Send the request with redirects disabled or tightly controlled.
  7. Apply timeouts and response size limits.

The following example combines these ideas into a simple fetch function.

use reqwest::{Client, redirect::Policy};
use std::net::{IpAddr, ToSocketAddrs};
use std::time::Duration;
use url::Url;

fn is_blocked_ip(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_private() || v4.is_loopback() || v4.is_link_local(),
        IpAddr::V6(v6) => v6.is_loopback() || v6.is_unspecified() || v6.is_unique_local(),
    }
}

fn validate_and_resolve(url: &Url) -> Result<(), String> {
    let host = url.host_str().ok_or("missing host")?;

    if url.scheme() != "http" && url.scheme() != "https" {
        return Err("unsupported scheme".into());
    }

    if host == "localhost" {
        return Err("localhost is not allowed".into());
    }

    let port = url.port_or_known_default().ok_or("missing port")?;
    let addrs = (host, port)
        .to_socket_addrs()
        .map_err(|_| "DNS resolution failed")?;

    let mut resolved_any = false;
    for addr in addrs {
        resolved_any = true;
        if is_blocked_ip(addr.ip()) {
            return Err("destination resolves to a blocked IP".into());
        }
    }

    if !resolved_any {
        return Err("destination did not resolve to any address".into());
    }

    Ok(())
}

async fn fetch_url(input: &str) -> Result<String, String> {
    let url = Url::parse(input).map_err(|_| "invalid URL")?;
    validate_and_resolve(&url)?;

    let client = Client::builder()
        .redirect(Policy::none())
        .timeout(Duration::from_secs(5))
        .build()
        .map_err(|_| "client build failed")?;

    let response = client
        .get(url)
        .send()
        .await
        .map_err(|_| "request failed")?;

    if !response.status().is_success() {
        return Err("upstream returned an error".into());
    }

    let body = response
        .text()
        .await
        .map_err(|_| "failed to read response")?;

    Ok(body)
}

This example is intentionally conservative. In a real application, you may want to return structured errors, log blocked attempts, and enforce a maximum response body size instead of reading the entire response into memory. Be aware also that validating DNS separately from connecting leaves a window for DNS rebinding: the client resolves the hostname a second time when it connects, and an attacker-controlled resolver can return a different answer on that second lookup.


Limit request size and execution time

SSRF is often paired with denial-of-service behavior. An attacker may point your service at a slow endpoint, a huge file, or a streaming response that never ends.

Set limits at multiple levels:

  • connect timeout
  • request timeout
  • idle read timeout, if supported
  • maximum response body size
  • maximum number of redirects
  • maximum number of retries, preferably zero for untrusted targets

A practical pattern is to reject large responses before buffering them. If you only need metadata, do not download the full body. Read a small prefix and stop.
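The prefix-read pattern works with any std::io::Read source, including reqwest's blocking Response, which implements Read. A stdlib-only sketch using Read::take:

```rust
use std::io::Read;

/// Read at most `limit` bytes; error out instead of buffering oversized bodies.
fn read_limited<R: Read>(reader: R, limit: u64) -> Result<Vec<u8>, String> {
    let mut buf = Vec::new();
    // Take one extra byte so we can tell "exactly limit" from "too large".
    reader
        .take(limit + 1)
        .read_to_end(&mut buf)
        .map_err(|_| "read failed".to_string())?;
    if buf.len() as u64 > limit {
        return Err("response too large".to_string());
    }
    Ok(buf)
}
```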

Also consider concurrency limits. If many users can trigger outbound requests, use a semaphore or queue so one abusive client cannot exhaust your worker pool.


Avoid dangerous protocol features

Some clients or helper libraries may support features that widen the attack surface:

  • proxy environment variables
  • custom DNS resolvers
  • non-HTTP schemes
  • automatic authentication forwarding
  • cookie persistence across requests

For security-sensitive flows, disable anything you do not explicitly need. In particular:

  • ignore system proxy settings unless required
  • do not forward bearer tokens or cookies to untrusted hosts
  • avoid reusing authenticated clients for arbitrary outbound requests
  • keep internal and external HTTP clients separate

If your service uses both trusted internal APIs and user-supplied URLs, create distinct client configurations. A client that talks to internal services should not be reused for public fetches.


Test SSRF defenses with malicious inputs

Security checks are only useful if they are tested. Add unit and integration tests for common bypasses:

  • http://localhost
  • http://127.0.0.1
  • http://[::1]
  • http://169.254.169.254
  • http://[email protected]
  • redirect chains from a safe host to an unsafe host
  • DNS names that resolve to private IPs

Table-driven tests work well for this kind of validation.

#[test]
fn rejects_common_ssrf_targets() {
    let bad = [
        "http://localhost",
        "http://127.0.0.1",
        "http://[::1]",
        "http://169.254.169.254",
        "file:///etc/passwd",
    ];

    for input in bad {
        let url = url::Url::parse(input).expect("these inputs parse; validation must reject them");
        assert!(validate_and_resolve(&url).is_err(), "should reject {input}");
    }
}

In practice, your tests should call the same validation function used by production code. For redirect handling, spin up a local test server that returns a 302 to a blocked address and verify the request is denied.
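A throwaway redirect server for such a test needs nothing beyond the standard library. This sketch answers its first connection with a 302 pointing at whatever Location you give it:

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

/// Spin up a one-shot local server that answers every request with a
/// 302 redirect to `location`, and return its base URL.
fn spawn_redirect_server(location: &str) -> String {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind");
    let addr = listener.local_addr().expect("addr");
    let response = format!(
        "HTTP/1.1 302 Found\r\nLocation: {location}\r\nContent-Length: 0\r\n\r\n"
    );
    thread::spawn(move || {
        if let Ok((mut stream, _)) = listener.accept() {
            let mut buf = [0u8; 1024];
            let _ = stream.read(&mut buf); // drain the request
            let _ = stream.write_all(response.as_bytes());
        }
    });
    format!("http://{addr}/")
}
```

Point your fetch function at the returned URL with a blocked Location, and assert that the redirect is refused.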


Operational controls matter too

Application-level validation is necessary, but network policy is the last line of defense. Even a bug in your Rust code should not be enough to reach sensitive internal systems.

Recommended operational controls include:

  • outbound firewall rules that block metadata and internal CIDRs
  • Kubernetes NetworkPolicies or service mesh egress restrictions
  • cloud security groups or route tables that limit egress
  • separate network zones for public and internal traffic
  • monitoring for unusual outbound request patterns

If your application never needs to contact the instance metadata service, block it at the infrastructure layer. That single control eliminates an entire class of SSRF impact.


A secure design checklist

Before shipping any feature that fetches a user-influenced URL, confirm the following:

  • the URL is parsed with a real parser
  • only approved schemes are allowed
  • hostnames are allowlisted when possible
  • DNS resolution is checked against blocked IP ranges
  • redirects are disabled or revalidated
  • timeouts and response size limits are enforced
  • proxy and credential forwarding are controlled
  • outbound network policy blocks sensitive destinations
  • tests cover known SSRF payloads and redirect chains

If you cannot satisfy these conditions, redesign the feature. In many cases, the safest option is to avoid arbitrary URL fetching entirely and instead proxy through a controlled integration service.
