Let's talk about the slow HTTP attack and why it is one of the trickiest attacks to cope with. So, this is how a slow HTTP attack works. Suppose that we have an attacker here, and we have a web server which is waiting for connections, and suppose that the attacker has established an HTTP connection. So the connection is established. Then, when it comes to actually sending the data packets, sending the requests, what the attacker does is send a packet which indicates that there is more to come. It tells the server: okay, this is packet number one, but there is more coming, so just wait.
So, per the HTTP protocol rules, the server just waits, and at the same time, of course, the attacker establishes multiple such connections. In each connection, it says: more is coming, more is coming, and so on. In the end, the attacker takes over all the resources the server reserved for establishing new connections, because all these connections are pending, waiting for new packets to come. And of course, the attacker doesn't let these connections get terminated either, by sending keep-alives. The reason why it's hard to deal with is that you, as the server, can almost never know whether it's an attacker or just a legitimate user with low bandwidth on the other side. In other words, you don't know if the client is trying to exhaust your resources or is just, let's say, trying to download a file.
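To make the idea concrete, here is a minimal, non-networked sketch of the kind of request an attacker keeps a connection open with. The hostname and header names are made up for illustration; this is not a working attack tool, it only builds the bytes to show the trick.

```python
# Illustrative sketch of a slow-HTTP-headers request.
# The host and the bogus "X-a" header are invented for this example.

def partial_request(path="/", host="victim.example.com"):
    # A finished header section ends with an empty line ("\r\n\r\n").
    # The attacker deliberately omits that final blank line, so the
    # server keeps the connection pending, waiting for more headers.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"  # note: no terminating blank line
    ).encode()

def trickle_fragment():
    # Every so often the attacker sends one more meaningless header
    # fragment, which resets the server's read timeout each time.
    return b"X-a: b\r\n"

req = partial_request()
print(req.endswith(b"\r\n\r\n"))  # the request is never "finished"
```

Multiplied across hundreds of connections, each one costing the server a worker or a slot in its connection pool, this is what exhausts the server's capacity.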
That's why he is establishing so many connections but is probably having connection issues at the same time. Maybe he's on mobile and traveling, and at certain moments he has connection problems, and that's why he keeps his connections on hold. You can almost never be 100% sure whether it's an attacker or a legitimate user with low bandwidth. Yet there are, of course, some indicators which tell us that this is rather an attacker, and we are going to talk about those indicators.
But before that, let me show you the two types of this kind of attack. The first one is slow HTTP headers, and the second one is slow POST. The logic behind them is the same. The only difference is that slow HTTP headers, as the name implies, keeps the connection pending by making the server expect the final CRLF tag for the headers.
And actually, this example here is for slow HTTP headers. As you can see, the headers are separated by CRLF tags. If it weren't an attack, there would be another CRLF tag, a double CRLF, indicating that this is the end of the headers section and this is the last packet. But of course, the attacker never sends that final packet; it just keeps sending packets like the one you see on the screen, indicating that more is coming, wait for more, and so on. As I just mentioned, in slow POST the logic is the same.
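The end-of-headers rule is easy to check in code. A small sketch, assuming the server buffers incoming bytes per connection: the header section is only complete once the double CRLF arrives, which is exactly what the attacker withholds.

```python
def headers_complete(buf: bytes) -> bool:
    # The HTTP header section ends with an empty line,
    # i.e. a double CRLF ("\r\n\r\n"). Until that appears,
    # the server must assume more headers are coming.
    return b"\r\n\r\n" in buf

# A normal, finished request vs. the attacker's perpetually
# unfinished one (single CRLF after each header line only):
finished = b"GET / HTTP/1.1\r\nHost: a\r\n\r\n"
pending = b"GET / HTTP/1.1\r\nHost: a\r\nX-a: b\r\n"

print(headers_complete(finished))  # True
print(headers_complete(pending))   # False
```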
The only thing is, this is used with POST requests, and it uses the same logic, but not in the headers, rather in the data section, the body of the POST request that carries the form data. So it indicates that more of the data below, the POST body, is coming, and in that case the server expects more of that data to arrive. Now, how can we detect it? Before we actually go into detection, what we need to do is get the web server timeout: 300 seconds is the default for Apache, for example. After that, if single CRLF tags are sent in the headers,
and the time gap between two packets is less than 300 seconds, then it's wise to raise an alert. Of course, again, it doesn't mean that you're under attack; it just indicates that there might be an attack. So basically, as far as I can see, this is the only way to detect them. Remember, it is not foolproof. Just because we have these checks in place doesn't mean that you will be protected. You need to be constantly checking the connection attempts in order to avoid slow HTTP attacks, and be prepared to get many false positives.
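The detection heuristic above can be sketched as a small monitor. Everything here is illustrative, not a real IDS API: the class name, the 60-second "suspicious" floor, and the connection IDs are assumptions; the 300-second ceiling matches the Apache default timeout mentioned above.

```python
import time

SERVER_TIMEOUT = 300  # seconds; Apache's default Timeout, for example


class ConnectionMonitor:
    """Flags connections whose inter-packet gap is long, yet still
    short enough to keep resetting the server timeout (a classic
    slow-HTTP signature). Illustrative sketch only."""

    def __init__(self, timeout=SERVER_TIMEOUT, suspicious_gap=60):
        self.timeout = timeout
        self.suspicious_gap = suspicious_gap
        self.last_seen = {}  # connection id -> time of last packet

    def packet(self, conn_id, now=None):
        """Record a packet; return True if this looks suspicious."""
        now = time.monotonic() if now is None else now
        gap = now - self.last_seen.get(conn_id, now)
        self.last_seen[conn_id] = now
        # A gap sitting just under the server timeout is the red flag;
        # it is only an *indicator*, since a slow legitimate client
        # can look exactly the same.
        return self.suspicious_gap < gap < self.timeout


mon = ConnectionMonitor()
print(mon.packet("c1", now=0.0))    # False: first packet, no gap yet
print(mon.packet("c1", now=250.0))  # True: long gap just under timeout
print(mon.packet("c1", now=255.0))  # False: normal gap
```

Raising an alert here should feed a counter per client, not an automatic block, precisely because of the false positives discussed above.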
And just like in the case of detection, mitigation is not trivial either. The correct settings for it vary depending on the web server you use; I have provided a link in the resources which explains what you should do, with best practices per server type. But as a general rule, I can recommend that you limit the connection attempts and track the alerts raised in the detection part. As a second suggestion, I can recommend that you buffer the requests at your proxy before they reach your server, although this might create performance issues. And finally, of course, follow the instructions in the link.
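As one concrete example of such per-server settings, Apache ships the mod_reqtimeout module for exactly this attack. The values below are illustrative, not a recommendation for your workload; check the linked resources for tuning guidance.

```apache
# mod_reqtimeout: allow 20 s for the headers to start arriving, extend
# up to 40 s total as long as data keeps flowing at >= 500 bytes/s;
# apply the same idea to the request body. Connections that trickle
# single CRLF fragments slower than this get closed by the server.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
```

Reverse proxies such as nginx take the buffering approach mentioned above: they read the full request before forwarding it, so the slow connection is held at the proxy rather than tying up an application worker.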