
ICS has a component, TIcsBlackList, that servers can use to count access attempts by IP address and block an address after a specified number of attempts, until several hours of inactivity have passed.  Its use is illustrated in the OverbyteIcsSslMultiWebServ sample.
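For illustration, here is a minimal sketch of the idea behind such an attempt-counting block list, written from scratch rather than against the real TIcsBlackList API (the class name, methods and thresholds below are all hypothetical; see OverbyteIcsBlacklist and the sample for the real component):

program BlockListSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.DateUtils, System.Generics.Collections;

type
  TAttemptInfo = record
    Count: Integer;
    FirstSeen, LastSeen: TDateTime;
  end;

  TSimpleBlackList = class
  private
    FAttempts: TDictionary<string, TAttemptInfo>;
    FMaxAttempts: Integer;   // block after this many attempts
    FIdleHours: Integer;     // forget an address after this much quiet
  public
    constructor Create(MaxAttempts, IdleHours: Integer);
    destructor Destroy; override;
    // Record one attempt; returns True if the address is now blocked.
    function RecordAttempt(const IpAddr: string): Boolean;
  end;

constructor TSimpleBlackList.Create(MaxAttempts, IdleHours: Integer);
begin
  inherited Create;
  FAttempts := TDictionary<string, TAttemptInfo>.Create;
  FMaxAttempts := MaxAttempts;
  FIdleHours := IdleHours;
end;

destructor TSimpleBlackList.Destroy;
begin
  FAttempts.Free;
  inherited;
end;

function TSimpleBlackList.RecordAttempt(const IpAddr: string): Boolean;
var
  Info: TAttemptInfo;
begin
  if FAttempts.TryGetValue(IpAddr, Info) then
  begin
    // Reset the counter after the configured idle period.
    if HoursBetween(Now, Info.LastSeen) >= FIdleHours then
    begin
      Info.Count := 0;
      Info.FirstSeen := Now;
    end;
    Inc(Info.Count);
    Info.LastSeen := Now;
  end
  else
  begin
    Info.Count := 1;
    Info.FirstSeen := Now;
    Info.LastSeen := Now;
  end;
  FAttempts.AddOrSetValue(IpAddr, Info);
  Result := Info.Count > FMaxAttempts;
end;

var
  BlackList: TSimpleBlackList;
  I: Integer;
begin
  BlackList := TSimpleBlackList.Create(50, 6);  // 50 attempts, 6 idle hours
  try
    for I := 1 to 60 do
      if BlackList.RecordAttempt('47.76.209.138') then
      begin
        Writeln('47.76.209.138 BLOCKED after ', I, ' attempts');
        Break;
      end;
  finally
    BlackList.Free;
  end;
end.

A production version would also need thread safety and periodic pruning of stale entries.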

 

I just noticed these lines in the log for one of my web servers: someone using Alibaba Cloud in Hong Kong has made almost three million access attempts to my web site over several weeks, trying to read access data that is limited to 50 accesses per day.  And they are still trying, despite those requests being rejected. 

 

47.76.209.138 attempts 1,481,269, first at 12:18:52, last at 20:00:17 BLOCKED

47.76.99.127 attempts 1,478,638, first at 12:04:36, last at 19:58:57 BLOCKED

 

I should really be reporting the date of first access, but I don't normally see hackers continuing this long.

 

The sample shows various ways to detect hackers, such as web site access by IP address instead of host name, which stops hundreds daily on my sites (no HTTP allowed).  

 

Angus

 


My Chinese hackers have changed strategy to get around my IP address blocks and access my web site database, which restricts free access to 50 requests a day; paying for unlimited access seems beyond them. 

 

So now they are using VPNs, making two requests at a time from thousands of different IP addresses around the world, 3,500 over the last 48 hours, with requests now repeating after 24 hours; previously I cleared the block list after six hours with no repeat access. 

 

I've not yet managed to devise an automated strategy for blocking relatively random IPs.  A CAPTCHA would work, but I don't want to annoy my users; likewise making them use a free login.  Has anyone got a better strategy for blocking unwanted access by IP?

 

Meanwhile, I'll add /24-level IP blocks manually for a few dozen VPN ranges, which means the server will immediately close any connection from those ranges.  Last time I did this, to block TOR nodes, I accidentally blocked some large corporates, resulting in some interesting telephone calls. 

 

Angus

 


Unfortunately it's an uphill battle. You can't win. Whatever you do, they will find a way around it, as you cannot differentiate between legitimate individuals and mass requests from different IPs. They can hit you from places as diverse as Australia, India, Brazil, Germany and Canada at the same time, and you have no way of knowing whether they all come from the same person.

 

Is there a pattern in their requests? Do they go alphabetically? If so, you can do heuristic detection (but they can circumvent this by going random).

 

You can also limit the speed at which you accept requests (i.e. a minimum of one minute between requests), but they will soon detect this and space their requests out to a random value between 1:10 and 1:30, and you risk annoying your legitimate users.
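A rough sketch of that kind of per-IP minimum-interval check (the function and data structure are mine, not from ICS; note the timer resets on every attempt, so a client must stay quiet for the full interval):

program MinIntervalSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.DateUtils, System.Generics.Collections;

var
  LastRequest: TDictionary<string, TDateTime>;

// Returns True if this request arrives sooner than MinSecs after the
// previous one from the same address; the timestamp is refreshed on
// every call, so repeat offenders never age out of the check.
function TooSoon(const IpAddr: string; MinSecs: Integer): Boolean;
var
  Last: TDateTime;
begin
  Result := LastRequest.TryGetValue(IpAddr, Last) and
            (SecondsBetween(Now, Last) < MinSecs);
  LastRequest.AddOrSetValue(IpAddr, Now);
end;

begin
  LastRequest := TDictionary<string, TDateTime>.Create;
  try
    Writeln(TooSoon('203.0.113.7', 60));  // FALSE: first request
    Writeln(TooSoon('203.0.113.7', 60));  // TRUE: immediate repeat
  finally
    LastRequest.Free;
  end;
end.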

Edited by HeartWare

1 hour ago, Angus Robertson said:

Has anyone got a better strategy for blocking unwanted access by IP?

Hard to say.

One effective approach could be behaviour-based rate limiting rather than relying solely on IP tracking. Since these hackers are making two requests at a time from each IP and cycling through thousands of addresses, you could analyse request patterns to flag suspicious behaviour.
For example:

- Track request timing and frequency per IP: legitimate users rarely hit your site with precise, repetitive timing (e.g. two requests exactly every 24 hours). You could set a threshold where IPs making requests in a tight, unnatural cadence (say, under a second apart) get temporarily soft-blocked (e.g. delayed responses or a 429 Too Many Requests status) without affecting users with more organic patterns.
- Fingerprint beyond the IP: use a lightweight client-side fingerprinting technique (e.g. based on HTTP headers like User-Agent and Accept-Language, or even subtle timing differences in TCP handshakes). If the same fingerprint appears across multiple IPs in a short window, that is a strong signal of VPN rotation from a single source, and you could then throttle or block those requests without needing a CAPTCHA (see the sketch after this list).
- Perhaps you only need a few countries, so you could block requests from most other countries.
- Or use something like Fail2Ban: https://github.com/fail2ban/fail2ban
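As a rough illustration of the fingerprint idea, here is a deliberately crude sketch that keys on just two request headers and flags a fingerprint seen from too many distinct addresses (all names and thresholds are hypothetical):

program FingerprintSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Generics.Collections;

var
  // Maps a header fingerprint to the set of source IPs seen using it.
  FpToIps: TObjectDictionary<string, TDictionary<string, Boolean>>;

// Returns True when one fingerprint has appeared from more than
// MaxIps distinct addresses: a hint of VPN/proxy rotation.
function SuspiciousFingerprint(const UserAgent, AcceptLang,
  IpAddr: string; MaxIps: Integer): Boolean;
var
  Fp: string;
  Ips: TDictionary<string, Boolean>;
begin
  Fp := UserAgent + '|' + AcceptLang;  // crude two-header fingerprint
  if not FpToIps.TryGetValue(Fp, Ips) then
  begin
    Ips := TDictionary<string, Boolean>.Create;
    FpToIps.Add(Fp, Ips);
  end;
  Ips.AddOrSetValue(IpAddr, True);     // dictionary used as a set
  Result := Ips.Count > MaxIps;
end;

begin
  FpToIps := TObjectDictionary<string, TDictionary<string, Boolean>>.Create([doOwnsValues]);
  try
    // The same browser fingerprint from many addresses trips the check.
    SuspiciousFingerprint('Chrome/120', 'zh-CN', '1.2.3.4', 2);
    SuspiciousFingerprint('Chrome/120', 'zh-CN', '5.6.7.8', 2);
    Writeln(SuspiciousFingerprint('Chrome/120', 'zh-CN', '9.10.11.12', 2)); // TRUE
  finally
    FpToIps.Free;
  end;
end.

A real fingerprint would fold in more signals (header order, TLS parameters) and expire entries after a time window.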
 

Edited by Rollo62


Thanks for the thoughts.  

 

The user agent strings are partly randomised, with lots of different Chrome/xx versions; the Safari version seems to be the same, but is probably legitimate.  The TLS Client Hello has some unknown EC groups, but Chrome often has test groups.  The ALPN is always blank, and the requests use a URL without www, but blocking either of those would also hit legitimate API users. 

 

The server does not currently log any request headers; I'm not sure whether VPNs would add anything to identify themselves, as proxies normally do.  

 

One possible solution would be counting IP accesses within a /24 or larger block, although that might catch some corporates with outgoing address blocks; I'd need to update my white lists as well. 
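Something along these lines, perhaps; a sketch only, with hypothetical names, IPv4-only parsing, and no time window or white-list check:

program Slash24Sketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Generics.Collections;

var
  BlockHits: TDictionary<string, Integer>;

// Reduce a dotted-quad IPv4 address to its /24 prefix, e.g.
// '47.76.209.138' -> '47.76.209'.
function Slash24(const IpAddr: string): string;
begin
  Result := IpAddr.Substring(0, IpAddr.LastIndexOf('.'));
end;

// Count hits per /24; returns True once a block exceeds Threshold,
// at which point every address in that range can be refused.
function BlockExceeded(const IpAddr: string; Threshold: Integer): Boolean;
var
  Key: string;
  Count: Integer;
begin
  Key := Slash24(IpAddr);
  if not BlockHits.TryGetValue(Key, Count) then
    Count := 0;
  Inc(Count);
  BlockHits.AddOrSetValue(Key, Count);
  Result := Count > Threshold;
end;

begin
  BlockHits := TDictionary<string, Integer>.Create;
  try
    BlockExceeded('47.76.209.1', 2);
    BlockExceeded('47.76.209.57', 2);
    Writeln(BlockExceeded('47.76.209.138', 2));  // TRUE: third hit in 47.76.209.0/24
  finally
    BlockHits.Free;
  end;
end.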

 

I don't want to spend too much time on a rare problem...

 

Angus

 


Hi,

 

Well, there is not much I can suggest that you don't already know, but I can give one idea: CloudFlare. It can stop this, or at least remove 99% of these connections; this is from experience. Of course you have probably thought about it already and chose not to use it for a reason or two.

 

My suggestion is to utilise CloudFlare as one step, meaning redirect all connections to a subdomain, where either the subdomain is the one behind CF or vice versa. I am trying to give you an idea of sieving the connections through CF, so it is one of these, as examples:

"->" means HTTP redirect

1) Your server on the main domain -> subdomain on CF -> return to the main domain after checking cookies and whatever else CF can offer here

2) Your main domain on CF -> your actual server on a subdomain, after white-listing the connection

3) Combine both (1) and (2) and use a CF worker to handle the white-listing, letting CF handle the black-listing.

 

Just thoughts; I hope that helps.


CloudFlare is the obvious solution for most commercial web sites, although I find my link site checker app being blocked by some of the sites CF 'protects'. 

 

But this is an ICS web server, and developers have vastly more control over checking and blocking connections than sites using Apache, etc., which need add-ons to protect them. 

 

Although I get the usual general hackers, they are normally easy to block: anyone accessing the SSL site using an IP address, or trying to access CGI scripts, etc., immediately goes on the blocked list.  

 

Angus

 


@Kas Ob. 
Yes, I have read a lot of positive things about CloudFlare, but I'm not yet an active user.
Your proposal will add additional cost, I think a minimum of $5 per month, right?
My question would be: how far do I get with their free tier service?
As far as I understand, this will be an unlimited, somewhat protected DNS server; perhaps that's already solving such issues.
Is the free tier usable for a commercial server with not too wild traffic?


 

2 hours ago, Rollo62 said:

I think a minimum of $5 per month, right?

I never went above zero for my personal usage, and that's my recommendation for my clients, though some went for extra functionality and paid more.

The CF free plan is pretty damn good, and it does protect against and isolate many if not all of these DoS or DDoS attacks; given their infrastructure capability, a site or two is negligible for them. 

 

2 hours ago, Rollo62 said:

how far do I get with their free tier service?

Pretty damn good protection: you don't even have to care or think about attacks on all layers up to 5. To understand these, and as a reminder about the layers, refer to Wikipedia:

https://en.wikipedia.org/wiki/OSI_model#Layer_architecture

Up to layer 3 it is really hard to protect yourself; this involves raw sockets and very low-level networking, which is even harder on Windows without involving drivers and filter drivers.

As for layers 4 and 5, these are where CF can offload a huge pain in the back to manage.

That leaves the last two layers, 6 and 7, which are absolutely your job to protect. To explain: if your server mishandles a JSON payload and that causes a crash or a freeze, it is your job as the developer to handle and protect against that.

 

There is much more that could be written here, but I hope I have given a good starting point for your own research into these layers and how things can go wrong with them. Denial of Service (DoS) and Distributed DoS (DDoS) attacks can aim to deplete server resources or just cause havoc and instability for the service. In this case, Angus's hackers were using DoS, but after being blocked by IP they switched to DDoS. The target of the attack is still not clear to me: is it brute-forcing a password, or just scraping data, or...? That must be worked out by Angus. Again, most cases need a login, hence sessions come into play, and delegating this to CF is nice. Please be careful here not to confuse the HTTP(S) session CF establishes with your own server's logged-in/not-logged-in session; these are two different sessions, but they can be combined, or in other words co-exist and be utilised.

2 hours ago, Rollo62 said:

As far as I understand, this will be an unlimited, somewhat protected DNS server; perhaps that's already solving such issues.
Is the free tier usable for a commercial server with not too wild traffic?

It is unlimited, at least from what I witnessed, and yes, it was wild traffic and CF chewed it like nothing for static/cached/CDN content and for dynamic content, while your server stays hidden and relaxed.

 

Also, there are features/APIs like:

https://www.cloudflare.com/application-services/solutions/api-security/

https://developers.cloudflare.com/api-shield/

You can have a look at these case studies, which I rarely trust or believe, or even read, but with CF it is true: it is everywhere, doing its job.

https://www.cloudflare.com/case-studies/

 

With that being said about CF: I used OVH and my own server redirections. OVH filtered DDoS attacks up to layer 4, so my server was freed from handling those; for the higher layers I didn't use a CAPTCHA but utilised some redirection to filter out bots. Most bots, even the sophisticated ones, can be fooled or identified by this redirection: redirect to a page on a subdomain, use your own headers and cookies, then return the client to another one. If you ever watched what Microsoft does for the web Outlook/Hotmail login (it was their standard in the not so distant past), you will get the idea, though this practice is dying due to the cross-origin policy in browsers.

 

Anyway, stopping such attacks ultimately comes down to identifying the target: is it simply putting the server/service out of action, grabbing public data, grabbing valuable data, or brute-forcing to gain access? For each case you need to build a solution.

But in general, CF will filter out most such repeated HTTP requests; CF knows what each and every IP does and to whom it belongs, so VPNs and proxies are the easiest for it to block.

Also, I wouldn't suggest blocking IPs by /24; that is excessive. I always use a limit per second per single IP, no ranges, and combine it with minutes: the more an IP connects and requests, the longer the delay, and I unblock after one hour no matter what. Of course this assumes the HTTP server handles keep-alive connections correctly and doesn't trigger the auto-block by dropping the connection itself; in other words it will not allow HTTP/1.0 and old browsers, which will block themselves. A sketch of the escalating delay follows.
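A minimal sketch of that escalating per-IP delay (names are mine; the one-hour expiry is noted in a comment but not implemented):

program EscalatingDelaySketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Math, System.Generics.Collections;

var
  Strikes: TDictionary<string, Integer>;

// Each repeated hit from the same address earns a longer delay,
// quadratic back-off capped at one hour; the caller would clear the
// entry after an hour of silence (expiry omitted for brevity).
function DelaySecsFor(const IpAddr: string): Integer;
var
  N: Integer;
begin
  if not Strikes.TryGetValue(IpAddr, N) then
    N := 0;
  Inc(N);
  Strikes.AddOrSetValue(IpAddr, N);
  Result := Min(N * N, 3600);
end;

var
  I: Integer;
begin
  Strikes := TDictionary<string, Integer>.Create;
  try
    for I := 1 to 5 do
      Writeln('delay ', DelaySecsFor('47.76.99.127'), ' secs');  // 1, 4, 9, 16, 25
  finally
    Strikes.Free;
  end;
end.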

The most valuable tool for identifying and blocking is dynamic cookies, not static ones. For an established connection a dynamic cookie works well; a new connection is handled as a suspect on probation. Combine these with IPs: say, for a /24 range I can allow 255 different cookies before starting to block, or even go after them all, but if one client still holds my cookie and keeps updating it, then it is fine. This does mean tracking cookies at least while the server is running; if the login cookie is kept in a DB, then the infrastructure is there to expand and track them all. A sketch of the rotation idea follows.
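A bare-bones sketch of the dynamic-cookie rotation idea (hypothetical names; a real server would deliver the token via a Set-Cookie header and persist it with the session):

program DynamicCookieSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Generics.Collections;

var
  // Current expected token per session id; rotated on every response.
  Expected: TDictionary<string, string>;

function NewToken: string;
var
  Guid: TGUID;
begin
  CreateGUID(Guid);
  Result := GUIDToString(Guid);
end;

// Validate the token presented by the client, then rotate it.
// A client replaying an old token (e.g. a bot sharing state across
// many IPs) fails the check.
function CheckAndRotate(const SessionId, Presented: string;
  out NextToken: string): Boolean;
var
  Current: string;
begin
  Result := Expected.TryGetValue(SessionId, Current) and (Presented = Current);
  NextToken := NewToken;
  Expected.AddOrSetValue(SessionId, NextToken);
end;

var
  Tok1, Tok2, Dummy: string;
begin
  Expected := TDictionary<string, string>.Create;
  try
    Tok1 := NewToken;
    Expected.Add('sess-1', Tok1);
    Writeln(CheckAndRotate('sess-1', Tok1, Tok2));   // TRUE: fresh token
    Writeln(CheckAndRotate('sess-1', Tok1, Dummy));  // FALSE: stale replay
  finally
    Expected.Free;
  end;
end.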

