
Preventing Denial-of-Service in Solidity Through Unbounded Loops
Why loops become a security risk
In Ethereum, every transaction has a gas limit. If a function performs too much work, it may run out of gas and revert. That is not just a performance issue; it can become a security issue when an attacker can deliberately increase the work required for a function until no one can execute it successfully.
The most common pattern is a loop over storage:
- a list of participants
- a queue of pending withdrawals
- an array of orders, claims, or votes
- a mapping simulated through an array of keys
If the function must finish all items in one transaction, the cost grows with the data set. Eventually, the function becomes too expensive to call.
Typical failure modes
| Pattern | Risk | Result |
|---|---|---|
| Loop over all users in one transaction | Gas grows with user count | Function becomes uncallable |
| External call inside a loop | One failing recipient reverts the whole batch | Partial progress is lost |
| Deleting array items while iterating | Index shifts or skipped entries | State corruption or stuck data |
| Unbounded pagination-free reads | Frontend or off-chain worker cannot finish | Operational failure |
The core lesson is simple: do not make contract liveness depend on processing all items at once.
A vulnerable example
Consider a contract that distributes rewards to all stakers in one call.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract RewardPool {
    address[] public stakers;
    mapping(address => uint256) public rewards;
    mapping(address => bool) public isStaker;

    function addStaker(address user) external {
        if (!isStaker[user]) {
            isStaker[user] = true;
            stakers.push(user);
        }
    }

    // Credits every staker in a single transaction:
    // the loop below grows with stakers.length.
    function accrueRewards() external payable {
        require(stakers.length > 0, "no stakers");
        uint256 share = msg.value / stakers.length;
        for (uint256 i = 0; i < stakers.length; i++) {
            rewards[stakers[i]] += share;
        }
    }

    function claim() external {
        uint256 amount = rewards[msg.sender];
        require(amount > 0, "nothing to claim");
        rewards[msg.sender] = 0;
        payable(msg.sender).transfer(amount);
    }
}
```
At first glance, this looks reasonable. The contract tracks stakers and credits each one with a share of incoming ETH. The problem is the accrueRewards() loop:
- every new staker increases the gas cost of the loop
- an attacker can add many stakers to make the function too expensive
- once the array is large enough, nobody can call accrueRewards() at all
This is a classic liveness failure. The contract is not necessarily exploitable in the sense of theft, but it becomes unusable.
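To make the liveness risk concrete, here is a rough back-of-the-envelope model in Python. The per-staker gas figure is an assumption for illustration (roughly one warm storage write plus one cold storage read), not a measured value, and the 30M block gas limit is approximate:

```python
# Rough model of how accrueRewards() gas grows with staker count.
# PER_STAKER is an assumed figure for illustration: ~5000 gas for a
# warm SSTORE update plus ~2100 gas for a cold SLOAD per staker.
BLOCK_GAS_LIMIT = 30_000_000   # approximate mainnet block gas limit
BASE_COST = 21_000             # intrinsic transaction cost
PER_STAKER = 7_100             # assumed per-iteration cost

def accrue_gas(staker_count: int) -> int:
    """Estimated gas to credit every staker in one transaction."""
    return BASE_COST + staker_count * PER_STAKER

def max_stakers() -> int:
    """Largest staker count for which accrueRewards() still fits in a block."""
    return (BLOCK_GAS_LIMIT - BASE_COST) // PER_STAKER

print("gas with 10 stakers:", accrue_gas(10))
print("stakers before the block limit is exceeded:", max_stakers())
```

Under these assumed numbers, the function stops fitting in a block at a few thousand stakers: well within reach of an attacker who can cheaply call addStaker().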
Prefer pull-based accounting over push-based loops
The safest design is often to avoid distributing value to everyone in a loop. Instead, record global accounting data and let each user claim their own share.
Better pattern: cumulative reward index
Rather than updating every user on each deposit, maintain a global reward-per-share accumulator. Each user stores the last value they observed.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract RewardPoolPull {
    uint256 public totalStaked;
    uint256 public accRewardPerShare; // scaled by 1e18
    mapping(address => uint256) public balanceOf;
    mapping(address => uint256) public rewardDebt;
    mapping(address => uint256) public pendingRewards;

    function depositReward() external payable {
        require(totalStaked > 0, "no stake");
        accRewardPerShare += (msg.value * 1e18) / totalStaked;
    }

    function stake() external payable {
        require(msg.value > 0, "zero stake");
        _harvest(msg.sender);
        balanceOf[msg.sender] += msg.value;
        totalStaked += msg.value;
        rewardDebt[msg.sender] = (balanceOf[msg.sender] * accRewardPerShare) / 1e18;
    }

    function unstake(uint256 amount) external {
        require(balanceOf[msg.sender] >= amount, "insufficient");
        _harvest(msg.sender);
        balanceOf[msg.sender] -= amount;
        totalStaked -= amount;
        rewardDebt[msg.sender] = (balanceOf[msg.sender] * accRewardPerShare) / 1e18;
        payable(msg.sender).transfer(amount);
    }

    function claim() external {
        _harvest(msg.sender);
        uint256 amount = pendingRewards[msg.sender];
        require(amount > 0, "nothing to claim");
        pendingRewards[msg.sender] = 0;
        payable(msg.sender).transfer(amount);
    }

    // Credit any rewards earned since the user's last interaction.
    function _harvest(address user) internal {
        uint256 accumulated = (balanceOf[user] * accRewardPerShare) / 1e18;
        uint256 debt = rewardDebt[user];
        if (accumulated > debt) {
            pendingRewards[user] += accumulated - debt;
        }
        rewardDebt[user] = accumulated;
    }
}
```
This design has a major advantage: reward distribution no longer depends on the number of users. Each user pays the gas cost for their own accounting when they interact.
Why this is safer
- No global loop is needed for deposits
- The contract remains usable as user count grows
- Failed claims affect only the caller, not everyone else
- State updates are localized and predictable
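To sanity-check the accounting, the accumulator logic can be modeled off-chain. This Python sketch mirrors the contract's names and 1e18 scaling; it is a model of the arithmetic, not the contract itself:

```python
# Off-chain model of the accRewardPerShare pattern from RewardPoolPull.
# Integer division mirrors Solidity's truncating division.
SCALE = 10**18

class PullPool:
    def __init__(self):
        self.total_staked = 0
        self.acc_reward_per_share = 0
        self.balance = {}      # balanceOf
        self.debt = {}         # rewardDebt
        self.pending = {}      # pendingRewards

    def _harvest(self, user):
        # Credit rewards earned since the user's last interaction.
        accumulated = self.balance.get(user, 0) * self.acc_reward_per_share // SCALE
        owed = accumulated - self.debt.get(user, 0)
        if owed > 0:
            self.pending[user] = self.pending.get(user, 0) + owed
        self.debt[user] = accumulated

    def stake(self, user, amount):
        self._harvest(user)
        self.balance[user] = self.balance.get(user, 0) + amount
        self.total_staked += amount
        self.debt[user] = self.balance[user] * self.acc_reward_per_share // SCALE

    def deposit_reward(self, amount):
        assert self.total_staked > 0, "no stake"
        self.acc_reward_per_share += amount * SCALE // self.total_staked

    def claim(self, user):
        self._harvest(user)
        amount = self.pending.get(user, 0)
        self.pending[user] = 0
        return amount

pool = PullPool()
pool.stake("alice", 3 * 10**18)
pool.stake("bob", 1 * 10**18)
pool.deposit_reward(4 * 10**18)  # alice holds 3/4 of the stake, bob 1/4
```

Each deposit touches one storage slot regardless of how many users have staked; the per-user work happens only when that user calls stake, unstake, or claim.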
Use bounded batches when iteration is unavoidable
Some applications genuinely need to process a list: migrating records, settling auctions, or cleaning up expired entries. In those cases, the goal is not to eliminate loops entirely, but to bound the amount of work per transaction.
Batch processing with explicit limits
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract BatchProcessor {
    address[] public users;
    uint256 public nextIndex; // cursor: resume point for the next call
    mapping(address => uint256) public processed;

    function addUser(address user) external {
        users.push(user);
    }

    function process(uint256 maxItems) external {
        uint256 end = nextIndex + maxItems;
        if (end > users.length) {
            end = users.length;
        }
        for (uint256 i = nextIndex; i < end; i++) {
            processed[users[i]] += 1;
        }
        nextIndex = end;
    }
}
```
This pattern improves liveness because each call handles only a limited number of items. However, it still needs careful design:
- maxItems should be capped to avoid accidental or malicious oversizing
- progress must be stored in contract state
- the function should be resumable
- failures should not force the whole dataset to restart
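The points above can be sketched as an off-chain model. This Python version adds a hard MAX_BATCH cap on top of the caller-supplied maxItems (an addition the original BatchProcessor lacks), and shows that processing resumes across calls via the cursor:

```python
# Model of cursor-based batch processing with a hard cap.
# MAX_BATCH is an assumed protocol-level bound layered on top of the
# caller-supplied maxItems, as recommended above.
MAX_BATCH = 50

class BatchState:
    def __init__(self, users):
        self.users = users
        self.next_index = 0    # cursor: resume point for the next call
        self.processed = {}

    def process(self, max_items: int) -> int:
        """Process up to max_items entries; return how many were handled."""
        batch = min(max_items, MAX_BATCH)          # clamp oversized requests
        end = min(self.next_index + batch, len(self.users))
        for i in range(self.next_index, end):
            u = self.users[i]
            self.processed[u] = self.processed.get(u, 0) + 1
        handled = end - self.next_index
        self.next_index = end                      # persist progress
        return handled

state = BatchState([f"user{i}" for i in range(120)])
done = 0
calls = 0
while state.next_index < len(state.users):
    done += state.process(1_000)  # oversized request is clamped to MAX_BATCH
    calls += 1
```

With 120 users and a cap of 50, the dataset is finished in three calls, and an interrupted run simply continues from the stored cursor.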
Best practices for batch functions
- Use a cursor
Store the current position so processing can continue across transactions.
- Limit the batch size
Consider a hard upper bound, not just a user-supplied parameter.
- Make each item independent
Avoid designs where one bad record blocks the rest.
- Emit progress events
Off-chain systems can monitor completion and resume if needed.
Avoid “all-or-nothing” external effects inside loops
Loops become especially dangerous when they include external calls, because one recipient can revert or consume too much gas. If a batch transfer sends ETH or tokens to many recipients, a single failure can revert the entire transaction.
Risky pattern
```solidity
for (uint256 i = 0; i < recipients.length; i++) {
    payable(recipients[i]).transfer(amounts[i]);
}
```
If one recipient is a contract that reverts in its receive function, the whole loop fails. If the recipient list is large, gas exhaustion becomes another failure mode.
Safer alternatives
- let recipients withdraw individually
- record owed balances and allow claims
- process recipients in bounded batches
- skip failed recipients only if the business logic allows it and the skipped state is tracked explicitly
A common mistake is to “catch and continue” without recording failures. That can silently lose funds or create inconsistent state. If a recipient can fail, the contract should define what happens next:
- retry later
- mark as failed
- allow manual recovery
- exclude from future rounds
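The "record failures explicitly" idea can be illustrated with a small Python model. Here send is a stand-in for an ETH or token transfer that may fail; a failed payment is credited back as an owed balance instead of reverting the whole batch:

```python
# Model of "catch, but record": each recipient is attempted
# independently, and failed sends are tracked for later retry or
# withdrawal instead of silently dropped or reverting the batch.
def distribute(recipients, amounts, send):
    """Attempt each payment; return a map of addresses still owed funds."""
    owed = {}
    for addr, amount in zip(recipients, amounts):
        try:
            send(addr, amount)
        except Exception:
            # Do not revert the batch, and do not lose the failure:
            # record the amount so it can be claimed or retried later.
            owed[addr] = owed.get(addr, 0) + amount
    return owed

def flaky_send(addr, amount):
    # Hypothetical transfer that fails for one recipient.
    if addr == "0xbad":
        raise RuntimeError("recipient reverted")

owed = distribute(["0xaaa", "0xbad", "0xccc"], [1, 2, 3], flaky_send)
```

The healthy recipients are paid, and the failed one ends up with a recorded balance rather than blocking everyone else.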
Design data structures for iteration cost
Security against DoS is not only about loops. It is also about how data is stored and retrieved.
Prefer direct mappings over arrays when possible
Mappings provide constant-time access by key. Arrays are useful for ordered iteration, but they are expensive when you need to search, remove, or process every element.
| Data structure | Strength | Weakness | Good use case |
|---|---|---|---|
| mapping(address => uint256) | O(1) lookup | No native enumeration | Balances, permissions, claims |
| address[] | Ordered iteration | Expensive removal and growth | Small sets, bounded batches |
| Enumerable set pattern | Fast membership + iteration | More storage overhead | Moderate-size registries |
| Linked list | Efficient removals | Complex logic, easy to break | Specialized queues |
If you need both membership checks and iteration, consider maintaining a mapping plus a separate index structure. But only do this when iteration is truly necessary.
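The "mapping plus index structure" combination can be modeled as follows. This Python sketch mirrors the enumerable set pattern (in the spirit of OpenZeppelin's EnumerableSet, though this is an off-chain model, not that library): a dict gives O(1) membership and position lookup, an array gives iteration, and removal uses swap-and-pop:

```python
# Model of an enumerable set: O(1) add, remove, and membership check,
# plus iteration over the array part. Removal uses swap-and-pop, so
# iteration order is not preserved.
class EnumerableSet:
    def __init__(self):
        self.items = []    # array part: enables iteration
        self.index = {}    # mapping part: value -> position in items

    def contains(self, value) -> bool:
        return value in self.index

    def add(self, value) -> bool:
        if value in self.index:
            return False
        self.index[value] = len(self.items)
        self.items.append(value)
        return True

    def remove(self, value) -> bool:
        pos = self.index.pop(value, None)
        if pos is None:
            return False
        last = self.items.pop()
        if last != value:
            # swap-and-pop: move the last element into the vacated slot
            self.items[pos] = last
            self.index[last] = pos
        return True

s = EnumerableSet()
for v in ("a", "b", "c"):
    s.add(v)
s.remove("a")
```

Every operation stays O(1) no matter how large the set grows, which is exactly the property that keeps a registry from becoming a DoS vector.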
Be careful with deletion
Removing an element from an array by shifting all later elements is O(n). If users can trigger frequent deletions, that can become a DoS vector. A common technique is “swap and pop,” which removes in O(1) time but does not preserve order.
```solidity
// Swap-and-pop: O(1) removal that does not preserve array order.
function removeAt(uint256 index) internal {
    uint256 last = users.length - 1;
    if (index != last) {
        users[index] = users[last];
    }
    users.pop();
}
```
This is often the right tradeoff for security and gas efficiency.
Checklist for DoS-resistant Solidity code
Use this checklist when reviewing a contract that processes collections or repeated actions:
- Can any public function loop over unbounded storage?
- Can a user increase the size of the loop cheaply?
- Does one failing item revert the entire operation?
- Is all work forced into a single transaction, with no incremental path?
- Are batch sizes uncapped?
- Does the contract rely on iterating over all users to remain functional?
- Can storage deletions cause O(n) work?
- Are external calls made inside loops?
- Is the contract stuck if processing cannot complete in one transaction?
If the answer to any of these is "yes," the design likely needs a safer accounting model.
Practical development guidance
1. Treat gas as part of your security model
When you design a function, estimate how its gas cost scales with state growth. A function that is cheap with 10 items may fail with 10,000.
2. Keep critical paths constant-time when possible
Functions such as deposits, claims, and permission updates should ideally not depend on the number of users.
3. Separate accounting from execution
Record what should happen first, then let users or workers execute the expensive part in smaller steps.
4. Make progress resumable
If a process can be interrupted, store the cursor or checkpoint on-chain.
5. Test with large datasets
Security testing should include stress cases:
- thousands of array entries
- repeated additions and removals
- failing recipients
- maximum batch sizes
- low gas limits
6. Review for griefing opportunities
Ask whether an attacker can increase the cost of a function without paying a proportional cost. If yes, they may be able to grief the system even without stealing funds.
When a loop is acceptable
Not every loop is dangerous. Small, bounded loops are often fine when:
- the maximum length is enforced on-chain
- the data set is controlled by the protocol
- the function is not essential for liveness
- the loop is used in administrative or emergency-only code
For example, a loop over a fixed list of 5 governance signers is very different from a loop over all depositors in a growing DeFi protocol.
The key question is not “does the code contain a loop?” but “can the loop grow until it breaks the contract?”
Conclusion
Denial-of-service in Solidity is often a design problem, not a syntax problem. The most robust contracts avoid unbounded loops in critical paths, use pull-based accounting where possible, and process large workloads in bounded, resumable batches.
If you remember one rule, make it this: never require one transaction to finish work that can grow without limit. That single principle prevents many liveness failures and makes your contracts much easier to operate safely in production.
