Why loop optimization matters

Every iteration in a Solidity loop consumes gas for:

  • condition checks
  • index increments
  • array length reads
  • storage access
  • external calls, if present
  • memory allocation, if the loop builds intermediate data

A loop that runs 10 times may be fine. A loop that runs 1,000 times can become a denial-of-service risk if gas costs grow with input size. In smart contracts, this is not just a performance issue; it can affect liveness and reliability.

The core rule is simple: make loops shorter, cheaper, or both.


Common sources of loop overhead

1. Repeated storage reads

Storage access is one of the most expensive operations in Solidity. If a loop repeatedly reads the same storage value, you pay for that read on every iteration, even though the value never changes.
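A minimal sketch of the fix, using a hypothetical rewardRate storage variable: read the value into a local once, then use the local inside the loop.

```solidity
contract RewardMath {
    uint256 public rewardRate; // hypothetical storage variable

    function totalRewards(uint256[] calldata stakes)
        external
        view
        returns (uint256 total)
    {
        uint256 rate = rewardRate; // one storage read instead of one per iteration
        uint256 len = stakes.length;
        for (uint256 i = 0; i < len; ) {
            total += stakes[i] * rate;
            unchecked {
                ++i;
            }
        }
    }
}
```

The loop body now touches only calldata and a stack variable; the single storage read happens before iteration begins.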

2. Dynamic array length checks

For memory and calldata arrays, reading array.length is cheap. For a storage array, however, each read of the length is itself a storage load. Either way, evaluating the length in the loop condition repeats that cost on every iteration.

3. Unnecessary arithmetic and branching

A loop body that performs extra calculations or conditional checks on every iteration can often be simplified.

4. External calls inside loops

Calling another contract from inside a loop is usually the most expensive pattern. It also increases failure risk and can create reentrancy concerns.

5. Unbounded iteration

If a function iterates over user-controlled data without a cap, gas usage can grow unpredictably and make the function unusable for large inputs.


Use cached loop bounds

A classic optimization is to cache the array length before entering the loop.

Less efficient

function sum(uint256[] storage values) internal view returns (uint256 total) {
    for (uint256 i = 0; i < values.length; i++) {
        total += values[i];
    }
}

In this version, values.length is evaluated on every iteration, and because values lives in storage, each evaluation is a storage read.

Better

function sum(uint256[] storage values) internal view returns (uint256 total) {
    uint256 len = values.length;
    for (uint256 i = 0; i < len; i++) {
        total += values[i];
    }
}

Caching the length is a small change, but it is a good habit in any loop that uses the same bound repeatedly.

When this helps most

This optimization is most useful when:

  • the loop runs many times
  • the array is in storage
  • the bound does not change during iteration

If the array is in memory, the gain is smaller, but caching still improves clarity and consistency.


Prefer unchecked increments when safe

Solidity 0.8+ adds overflow checks to arithmetic by default. In a loop counter, that check is usually unnecessary if the loop condition guarantees safety.

Standard increment

for (uint256 i = 0; i < len; i++) {
    // work
}

Optimized increment

for (uint256 i = 0; i < len; ) {
    // work
    unchecked {
        ++i;
    }
}

This removes the overflow check on the increment. For large loops the savings can be meaningful, although since Solidity 0.8.22 the compiler already omits this check for straightforward counters of this form, so the explicit pattern matters most on older compiler versions.

Important safety rule

Only use unchecked when the loop cannot overflow. That is usually true when:

  • i starts at 0
  • i increments by 1
  • the loop condition is i < len
  • len is a realistic array length, not a manually manipulated extreme value

For most production loops over arrays, this pattern is safe and widely used.


Avoid iterating over storage when a smaller view is enough

Sometimes the loop itself is not the problem; the data source is. Iterating over storage arrays or mappings can be much more expensive than iterating over memory copies or precomputed lists.

Example: copy once, then iterate

function process(uint256[] calldata input) external pure returns (uint256 total) {
    uint256 len = input.length;
    for (uint256 i = 0; i < len; ) {
        total += input[i];
        unchecked {
            ++i;
        }
    }
}

Using calldata is especially efficient for external functions because the data is read directly from the transaction input without copying into memory.

Practical guidance

Use the cheapest data location that fits the use case:

Data location | Best use case              | Loop cost profile
calldata      | External read-only inputs  | Very low
memory        | Temporary in-function data | Moderate
storage       | Persistent contract state  | Highest

If your function only needs to inspect input data, prefer calldata. If it needs to mutate state, try to minimize the number of storage reads and writes inside the loop.


Reduce work inside the loop body

A loop is not just about the counter. The body often contains the real gas cost. The best optimization is frequently to move invariant work outside the loop.

Example: compute once before looping

function applyDiscounts(uint256[] calldata prices, uint256 discountBps)
    external
    pure
    returns (uint256 total)
{
    uint256 len = prices.length;
    uint256 factor = 10_000 - discountBps;

    for (uint256 i = 0; i < len; ) {
        total += (prices[i] * factor) / 10_000;
        unchecked {
            ++i;
        }
    }
}

Here, 10_000 - discountBps is computed once instead of every iteration.

General rule

Before writing a loop, ask:

  • Which values are constant across iterations?
  • Which calculations can be moved outside?
  • Which values can be cached in local variables?
  • Which branches can be eliminated by preprocessing?

Even small reductions matter when multiplied by hundreds or thousands of iterations.


Replace nested loops with indexed lookups

Nested loops are often a red flag in Solidity. They can quickly become too expensive as input size grows.

Problem pattern

function countMatches(uint256[] calldata a, uint256[] calldata b)
    external
    pure
    returns (uint256 matches)
{
    for (uint256 i = 0; i < a.length; i++) {
        for (uint256 j = 0; j < b.length; j++) {
            if (a[i] == b[j]) {
                matches++;
            }
        }
    }
}

This is O(n × m) and becomes expensive very quickly.

Better approach

If the inputs can be sorted off-chain, a single-pass merge replaces the nested scan. (A mapping works too, but only for data already in storage; a pure function cannot build one.)

function countMatches(uint256[] calldata a, uint256[] calldata b)
    external
    pure
    returns (uint256 matches)
{
    // Assumes both arrays are sorted ascending with distinct elements,
    // e.g. prepared off-chain. One pass instead of a nested scan: O(n + m).
    uint256 i;
    uint256 j;
    uint256 lenA = a.length;
    uint256 lenB = b.length;
    while (i < lenA && j < lenB) {
        if (a[i] == b[j]) {
            unchecked {
                ++matches;
                ++i;
                ++j;
            }
        } else if (a[i] < b[j]) {
            unchecked {
                ++i;
            }
        } else {
            unchecked {
                ++j;
            }
        }
    }
}

For on-chain state, a mapping can turn repeated scans into direct lookups. For off-chain prepared inputs, sorting and using a single pass may also help.
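A hedged sketch of the mapping approach for on-chain state, assuming a hypothetical isAllowed mapping maintained elsewhere in the contract: each membership check becomes a single mapping read instead of a scan over a second array.

```solidity
contract Allowlist {
    // Maintained by add/remove functions elsewhere; only the lookup is shown.
    mapping(address => bool) public isAllowed;

    function countAllowed(address[] calldata accounts)
        external
        view
        returns (uint256 count)
    {
        uint256 len = accounts.length;
        for (uint256 i = 0; i < len; ) {
            // O(1) lookup per element instead of an inner loop over a list
            if (isAllowed[accounts[i]]) {
                unchecked {
                    ++count;
                }
            }
            unchecked {
                ++i;
            }
        }
    }
}
```

The cost of maintaining the mapping is paid once, at write time, rather than on every read.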

Best practice

Avoid nested loops unless the input sizes are strictly bounded and small. If the algorithm requires repeated searching, consider:

  • mappings for membership checks
  • sorted arrays with linear merge logic
  • precomputed indexes
  • off-chain preprocessing

Batch work instead of processing everything at once

A common performance mistake is trying to process an entire dataset in one transaction. Even if the code is optimized, the total gas may exceed block limits.

Why batching helps

Batching spreads work across multiple transactions. This does not reduce total work, but it prevents functions from becoming unusable as data grows.

Example pattern

contract BatchProcessor {
    uint256[] public items;
    uint256 public nextIndex;

    function processBatch(uint256 batchSize) external {
        uint256 len = items.length;
        uint256 end = nextIndex + batchSize;
        if (end > len) end = len;

        for (uint256 i = nextIndex; i < end; ) {
            // process items[i]
            unchecked {
                ++i;
            }
        }

        nextIndex = end;
    }
}

This pattern is useful for:

  • reward distribution
  • migration scripts
  • queue processing
  • periodic maintenance tasks

Design considerations

When batching, make sure the contract:

  • stores progress safely
  • handles partial completion
  • avoids duplicate processing
  • allows resuming after failure

Batching is often the difference between a scalable contract and one that eventually stalls.


Use early exits to stop unnecessary iteration

If a loop is searching for a condition, stop as soon as the result is known.

Example

function contains(uint256[] calldata values, uint256 target)
    external
    pure
    returns (bool found)
{
    uint256 len = values.length;
    for (uint256 i = 0; i < len; ) {
        if (values[i] == target) {
            return true;
        }
        unchecked {
            ++i;
        }
    }
    return false;
}

This is better than scanning the entire array after the target has already been found.

When early exit is valuable

Use early exits in:

  • membership checks
  • threshold detection
  • validation scans
  • first-match searches

If the loop result can be determined early, do not keep iterating.


Compare common loop optimization choices

Technique                        | Best for                   | Gas impact                   | Risk
Cache array length               | Any repeated bound check   | Low to medium                | Very low
unchecked increment              | Simple counters            | Low to medium                | Low if used correctly
Use calldata                     | External read-only inputs  | Medium                       | Very low
Move invariant work outside loop | Repeated calculations      | Medium                       | Very low
Early exit                       | Search-style loops         | High when matches are common | Very low
Batch processing                 | Large datasets             | High for scalability         | Medium if state tracking is poor
Avoid nested loops               | Large pairwise comparisons | Very high                    | Low if redesigned well

This table is a practical checklist when reviewing a contract for iteration cost.


A realistic optimization example

Suppose you are writing a payout function that distributes rewards to a list of recipients.

Naive version

function distribute(address[] calldata recipients, uint256 amountPerRecipient) external {
    for (uint256 i = 0; i < recipients.length; i++) {
        payable(recipients[i]).transfer(amountPerRecipient);
    }
}

This version has several issues:

  • repeated length reads
  • no batching
  • transfer forwards only a 2,300 gas stipend, so payments to contract recipients can fail
  • the loop may become too expensive for large recipient lists

Improved version

function distribute(address[] calldata recipients, uint256 amountPerRecipient) external {
    uint256 len = recipients.length;
    for (uint256 i = 0; i < len; ) {
        (bool ok, ) = payable(recipients[i]).call{value: amountPerRecipient}("");
        require(ok, "payment failed");
        unchecked {
            ++i;
        }
    }
}

This is better, but still not ideal: the loop remains too expensive for very large recipient sets, and the require(ok) check means a single reverting recipient blocks the entire batch. In production, the best design is often to avoid pushing payments in a single loop at all. Instead, record entitlements and let recipients withdraw their funds individually.
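A minimal sketch of that withdraw (pull-payment) design, with hypothetical names: record entitlements in one cheap pass, then let each recipient pull funds in a separate transaction.

```solidity
contract PullPayments {
    mapping(address => uint256) public owed;

    function recordPayouts(address[] calldata recipients, uint256 amountPerRecipient)
        external
    {
        uint256 len = recipients.length;
        for (uint256 i = 0; i < len; ) {
            owed[recipients[i]] += amountPerRecipient; // no external call in the loop
            unchecked {
                ++i;
            }
        }
    }

    function withdraw() external {
        uint256 amount = owed[msg.sender];
        require(amount > 0, "nothing owed");
        owed[msg.sender] = 0; // zero the balance before sending to block reentrancy
        (bool ok, ) = payable(msg.sender).call{value: amount}("");
        require(ok, "withdraw failed");
    }
}
```

With this split, one failing recipient can no longer block anyone else's payment, and the recording loop contains no external calls at all.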

Lesson

The fastest loop is often the one you do not need to run on-chain.


Practical review checklist

Before shipping a contract with loops, verify the following:

  • Is the loop bound cached?
  • Can the loop use calldata instead of storage?
  • Are invariant calculations moved outside the loop?
  • Is the increment safe to place in unchecked?
  • Can the loop exit early?
  • Is there any nested iteration that can be replaced with a mapping or index?
  • Could the work be batched across transactions?
  • Is the function still safe for the largest expected input size?

If the answer to any of these is “no,” there may be room for improvement.


Conclusion

Loop optimization in Solidity is about more than shaving off a few gas units. It is about making contracts scalable, predictable, and safe under real network constraints. The most effective improvements usually come from reducing repeated storage access, caching loop bounds, using unchecked increments carefully, and redesigning algorithms to avoid nested iteration.

When you review a contract, do not just ask whether a loop works. Ask whether it needs to exist in its current form, whether it can be shortened, and whether the same result can be achieved with less on-chain work.
