HTTP Flood Attacks | NOC.org

What Is an HTTP Flood Attack?

An HTTP flood is a type of distributed denial of service (DDoS) attack that operates at Layer 7 — the application layer — of the OSI model. Instead of trying to saturate a network link with raw traffic, HTTP floods target the web server and its backend infrastructure by sending what appear to be legitimate HTTP requests at an overwhelming rate.

This is what makes HTTP floods so dangerous and difficult to defend against. Each individual request is syntactically valid and follows the HTTP protocol perfectly. The server cannot simply reject malformed packets. It must receive the request, parse it, execute whatever application logic the request triggers — database queries, file reads, API calls — and return a response. When thousands or millions of these requests arrive per second, the server's CPU, memory, database connections, and application threads are exhausted, and legitimate users are denied service.

How HTTP Floods Differ from Volumetric Attacks

The distinction between HTTP floods and volumetric DDoS attacks is fundamental to understanding why they require different defenses:

Characteristic       | HTTP Flood (Layer 7)                           | Volumetric Attack (Layer 3/4)
---------------------|------------------------------------------------|------------------------------------------------
Target               | Web application, database, application server  | Network bandwidth, router capacity
Traffic appearance   | Looks like legitimate HTTP traffic             | Clearly anomalous (e.g., UDP flood to random ports)
Measurement          | Requests per second (rps)                      | Bits per second (bps) or packets per second (pps)
Bandwidth required   | Low — a few hundred Mbps can be devastating    | High — often requires hundreds of Gbps to Tbps
Detection difficulty | Very hard — requests look normal individually  | Easier — traffic patterns are clearly abnormal
Filtering approach   | Behavioral analysis, rate limiting, challenge-response | Volumetric scrubbing, blackhole routing, ACLs

A 10 Gbps UDP flood is annoying but can be absorbed by most cloud providers. A 50,000 requests-per-second HTTP flood aimed at a login page that triggers database lookups on every request can bring down a well-provisioned application server cluster.
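The "low bandwidth" point is easy to verify with back-of-envelope arithmetic. The sketch below assumes a typical request size of about 1 KB, which is illustrative rather than a fixed property of HTTP:

```python
# Rough bandwidth footprint of a Layer 7 flood. The ~1 KB request size
# is an assumption for illustration; real requests vary.

def flood_bandwidth_mbps(requests_per_second: int, request_bytes: int = 1024) -> float:
    """Approximate bandwidth consumed by an HTTP flood, in Mbps."""
    bits_per_second = requests_per_second * request_bytes * 8
    return bits_per_second / 1_000_000

# The 50,000 rps flood from the text:
print(f"{flood_bandwidth_mbps(50_000):.0f} Mbps")  # roughly 410 Mbps
```

At roughly 410 Mbps, the attack sits far below the threshold of most volumetric alerting, yet every one of those 50,000 requests per second still hits the application and database.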

Types of HTTP Flood Attacks

HTTP GET Floods

In a GET flood, the attacker sends massive numbers of HTTP GET requests to the target. The goal is to force the server to retrieve and serve content repeatedly. Attackers typically target:

  • Resource-intensive pages: Search results pages, product listing pages with filtering and sorting, or report generation endpoints that require heavy database queries.
  • Large static files: Requesting large images, PDFs, or downloads to consume server I/O and bandwidth.
  • Uncached endpoints: Pages that bypass the cache layer and hit the application server and database directly on every request.

GET floods are the simplest form of HTTP flood and can be generated by basic scripting tools. However, sophisticated attackers rotate URLs, user agents, and request headers to make the traffic appear organic.

HTTP POST Floods

POST floods are more resource-intensive on the server side because POST requests typically trigger write operations — form submissions, file uploads, API calls that create or modify data. Common targets include:

  • Login forms: Each POST to a login endpoint requires the server to hash the submitted password and compare it against the database, consuming significant CPU.
  • Registration forms: Creating new accounts triggers email verification, database inserts, and potentially external API calls.
  • File upload endpoints: Uploading files forces the server to receive, parse, validate, and store data, consuming disk I/O and memory.
  • API endpoints: REST or GraphQL APIs that accept complex JSON payloads can be especially expensive to process.

POST floods are harder for the attacker to execute because they require constructing valid form data or API payloads, but they cause disproportionate damage relative to the traffic volume.
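The CPU asymmetry behind login-form floods can be seen directly: password verification uses a deliberately slow key-derivation function, so each POST costs the server far more than it costs the attacker to send. This is a minimal sketch using PBKDF2 from the standard library; the iteration count is illustrative, as real deployments tune it to their hardware:

```python
import hashlib
import hmac
import time

def verify_login(password: bytes, salt: bytes, stored_hash: bytes,
                 iterations: int = 200_000) -> bool:
    """Server-side password check: intentionally expensive by design."""
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return hmac.compare_digest(candidate, stored_hash)

# Illustrative account record (salt and password are placeholders).
salt = b"demo-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)

start = time.perf_counter()
verify_login(b"wrong-guess", salt, stored)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"one login check took {elapsed_ms:.0f} ms of CPU")
```

Multiply that per-check cost by thousands of POSTs per second and the CPU budget of an application server disappears quickly, even though the attack traffic itself is tiny.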

Slowloris Attacks

Slowloris takes a fundamentally different approach from volumetric HTTP floods. Rather than overwhelming the server with a high volume of complete requests, Slowloris opens many connections to the server and keeps each one alive for as long as possible by sending partial HTTP headers at a slow rate.

The attack works as follows:

  1. The attacker opens hundreds or thousands of connections to the web server.
  2. For each connection, the attacker sends a partial HTTP request header — enough to be accepted but never completing the request.
  3. Periodically, the attacker sends an additional header line (e.g., X-a: b\r\n) to prevent the server from timing out the connection.
  4. The server keeps each connection open, waiting for the request to complete. Eventually, all available connection slots are consumed.
  5. New legitimate connections are refused because the server has no available worker threads or connections.

Slowloris is devastatingly efficient. A single machine with a modest internet connection can take down an Apache web server because Apache allocates a thread (or process) per connection by default. Servers that use event-driven architectures (like Nginx) are more resistant but not immune.

RUDY (R-U-Dead-Yet) Attacks

RUDY is a slow-rate POST attack similar in concept to Slowloris. The attacker sends an HTTP POST request with a legitimate Content-Length header indicating a very large body, then transmits the body one byte at a time at extremely slow intervals. The server must keep the connection open until it receives the full body, tying up resources for extended periods.

Why HTTP Floods Are Hard to Detect

HTTP floods present a unique detection challenge because they exploit the fundamental design of web servers rather than protocol weaknesses:

  • No protocol violations: Each request is a perfectly valid HTTP request. There are no malformed packets, spoofed IPs (in most cases), or protocol anomalies to flag.
  • Encrypted traffic: Most HTTP floods use HTTPS, meaning the request content is encrypted in transit. Network-level inspection tools cannot see the HTTP layer without terminating the TLS connection.
  • Distributed sources: Botnets spread the attack across thousands of IP addresses, so no single IP sends enough requests to trigger simple rate limits.
  • Mimicked behavior: Sophisticated bots rotate user agents, accept cookies, follow redirects, and even execute JavaScript — behaviors that make them indistinguishable from real browsers at the individual request level.
  • Low bandwidth footprint: Because each request is small (typically under 1 KB), the total bandwidth consumed may not trigger network-level volumetric alerts even while the application is collapsing under the load.

Mitigation Strategies

Web Application Firewall (WAF)

A web application firewall is the primary defense against HTTP flood attacks. Unlike network-level firewalls that inspect packet headers, a WAF operates at Layer 7 and can analyze the full HTTP request, including headers, body, cookies, and behavioral patterns. NOC.org's WAF service provides managed rule sets that detect and block application-layer floods based on request rate, geographic origin, behavioral fingerprinting, and known bot signatures.

Rate Limiting

Implementing rate limits at the application and infrastructure level restricts the number of requests a single client (identified by IP, session, or API key) can make within a given time window. Effective rate limiting requires careful tuning:

  • Set different limits for different endpoints — a search API may tolerate 10 requests per second, while a login form should allow no more than 5 per minute.
  • Use sliding windows rather than fixed windows to prevent burst abuse at window boundaries.
  • Return HTTP 429 (Too Many Requests) responses with a Retry-After header so legitimate clients can back off and retry.
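The sliding-window approach above can be sketched in a few lines. This is a minimal in-memory limiter keyed by client IP, using the text's example limit of 5 login attempts per minute; a production limiter would typically live in a shared store such as Redis:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Minimal sliding-window rate limiter, keyed by client identifier."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client key -> request timestamps

    def allow(self, client: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[client]
        # Drop timestamps that have slid out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # caller should respond 429 with Retry-After
        q.append(now)
        return True

# Login endpoint: at most 5 POSTs per 60-second window, per the text.
login_limiter = SlidingWindowLimiter(max_requests=5, window_seconds=60)
for i in range(7):
    print(i, login_limiter.allow("203.0.113.7", now=float(i)))
# the first 5 attempts are allowed; attempts 5 and 6 are rejected
```

Because the window slides with each request, a client cannot burst at the boundary between two fixed windows — the failure mode that fixed-window counters allow.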

Challenge-Response Mechanisms

JavaScript challenges, CAPTCHAs, and proof-of-work challenges force clients to prove they are running a real browser before the request reaches the application server. This is highly effective against automated bot traffic but must be deployed carefully to avoid degrading the experience for legitimate users.
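A proof-of-work challenge can be sketched with nothing more than a hash function: the server hands the client a nonce, and the client must find a value whose hash meets a difficulty target before its request is accepted. The difficulty of 4 leading zero hex digits below is an illustrative assumption — real systems tune it so the cost is negligible for one browser but ruinous at flood scale:

```python
import hashlib
import itertools

DIFFICULTY = 4  # leading zero hex digits required; illustrative value

def solve_challenge(nonce: str, difficulty: int = DIFFICULTY) -> int:
    """Work the client performs once per challenge (~65k hashes on average)."""
    for answer in itertools.count():
        digest = hashlib.sha256(f"{nonce}:{answer}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return answer

def verify_challenge(nonce: str, answer: int, difficulty: int = DIFFICULTY) -> bool:
    """Cheap server-side check (one hash) before the request reaches the app."""
    digest = hashlib.sha256(f"{nonce}:{answer}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

answer = solve_challenge("session-nonce-123")
print(verify_challenge("session-nonce-123", answer))  # True
```

The asymmetry is the point: verification costs the server one hash, while solving costs the client tens of thousands — an acceptable one-time price for a real user, but a multiplier that collapses the economics of a bot sending thousands of requests per second.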

CDN and Caching

A CDN absorbs GET flood traffic by serving cached content from edge nodes without forwarding requests to the origin server. For pages that can be cached, this effectively neutralizes GET floods. The NOC.org CDN distributes traffic across a global network, ensuring that attack traffic is absorbed at the edge rather than reaching your origin infrastructure.

Connection Limits and Timeouts

To defend against slow-rate attacks like Slowloris and RUDY:

  • Set aggressive connection timeouts — if a client has not completed its request headers within 10-20 seconds, close the connection.
  • Limit the maximum number of concurrent connections from a single IP address.
  • Use event-driven web servers (Nginx, LiteSpeed) instead of thread-per-connection servers (Apache prefork) to handle more concurrent connections with fewer resources.
  • Deploy a reverse proxy in front of the application server to buffer and validate complete requests before forwarding them.
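The timeout advice above can be sketched as a header-read guard of the kind a reverse proxy applies in its accept loop. The deadline here is absolute, so a Slowloris client trickling one header line at a time cannot reset it; `HEADER_DEADLINE` and the 16 KB header cap are illustrative assumptions, not defaults of any particular server:

```python
import socket
import time
from typing import Optional

HEADER_DEADLINE = 15.0  # seconds; the text recommends 10-20 s

def read_headers_or_close(conn: socket.socket,
                          deadline: float = HEADER_DEADLINE) -> Optional[bytes]:
    """Read a full HTTP header block, or give up once the deadline passes."""
    start = time.monotonic()
    buf = b""
    while b"\r\n\r\n" not in buf:
        remaining = deadline - (time.monotonic() - start)
        if remaining <= 0:
            return None  # headers never completed in time: drop the client
        conn.settimeout(remaining)
        try:
            chunk = conn.recv(4096)
        except socket.timeout:
            return None  # no bytes arrived before the deadline
        if not chunk:
            return None  # client closed the connection early
        buf += chunk
        if len(buf) > 16_384:
            return None  # cap header size to bound memory per connection
    return buf
```

Production servers implement the same idea declaratively — for example, Nginx's `client_header_timeout` directive or Apache's mod_reqtimeout — but the mechanism is identical: a slot held by an incomplete request is reclaimed after a bounded wait instead of indefinitely.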

Protect Your Applications from HTTP Floods

HTTP flood attacks are among the most common and most difficult DDoS techniques to defend against because they exploit the normal operation of web applications. A layered defense that combines a WAF for behavioral detection, a CDN for edge caching and traffic distribution, and proper application-level rate limiting provides comprehensive protection against all forms of application-layer floods. Explore NOC.org's pricing plans to secure your web applications against Layer 7 attacks.

Improve Your Website's Speed and Security

14-day free trial. No credit card required.