Securely and reliably delivering web services is a challenge due to relentless hacking attempts and DDoS attacks. Even when the attacks aren’t successful, connection logs grow very quickly and the web server needs extra maintenance just to keep the lights on.

A defense-in-depth methodology is arguably the best way to keep services up and delivering content. One such setup is described here.

We will use Google Compute Engine and Cloudflare.

  • Cloudflare hosts DNS and acts as a DDoS mitigator and caching proxy
  • Google Compute Engine’s firewall (IP rules) allows connections only from Cloudflare
  • One or more Google Compute Engine instances host the content and accept traffic only from Cloudflare

Cloudflare has a variety of security levels to stop most DDoS attacks (even slow ones), and it also has an excellent analytics dashboard. IP and country blocks are available. Using ‘page rules’, we turn on caching of most content; with this in place, Cloudflare keeps serving content without sending requests to the origin server.
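Page rules can be set up in the Cloudflare dashboard, or scripted against the Cloudflare v4 API. Below is a minimal sketch for caching everything under a site; the zone ID, API token and example.com pattern are placeholders, and the exact action list should be checked against the current API documentation.

  curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/pagerules" \
    -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" \
    --data '{"targets":[{"target":"url","constraint":{"operator":"matches","value":"example.com/*"}}],
             "actions":[{"id":"cache_level","value":"cache_everything"}],
             "status":"active"}'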

Next, we will whitelist Cloudflare’s published IP ranges in Google Cloud’s VPC firewall. All other IPs are blocked from reaching ports 80/443. This way, if bad actors try to reach the origin servers by IP address instead of going through the hostname, the attempts are blocked at the VPC firewall level. Another advantage is that junk traffic does not pollute the origin servers’ logs. SSH should be allowed only from the administration computers’ IPs.
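A minimal sketch with the gcloud CLI, assuming instances tagged ‘web’ and a placeholder admin address; Cloudflare publishes its ranges at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6 (the IPv6 rule is analogous). Any broader default-allow rules on the network should be removed or tightened.

  # Allow HTTP/HTTPS only from Cloudflare's published IPv4 ranges
  gcloud compute firewall-rules create allow-cloudflare-http \
    --direction=INGRESS \
    --allow=tcp:80,tcp:443 \
    --source-ranges="$(curl -s https://www.cloudflare.com/ips-v4 | paste -sd, -)" \
    --target-tags=web

  # Allow SSH only from the administration computer (placeholder address)
  gcloud compute firewall-rules create allow-admin-ssh \
    --direction=INGRESS \
    --allow=tcp:22 \
    --source-ranges=203.0.113.10/32 \
    --target-tags=web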

The origin server, say nginx, should have ‘ssl_verify_client on’ together with Cloudflare’s origin-pull CA certificate (Cloudflare calls this Authenticated Origin Pulls). This is to make sure the origin server responds only to Cloudflare.
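A minimal nginx sketch; the certificate paths and hostname are placeholders, and Authenticated Origin Pulls also has to be enabled on the Cloudflare side under the zone’s SSL/TLS settings.

  server {
      listen 443 ssl;
      server_name example.com;

      ssl_certificate        /etc/nginx/certs/example.com.pem;
      ssl_certificate_key    /etc/nginx/certs/example.com.key;

      # Accept only TLS clients presenting a certificate signed by
      # Cloudflare's origin-pull CA, i.e. Cloudflare's edge servers
      ssl_client_certificate /etc/nginx/certs/cloudflare-origin-pull-ca.pem;
      ssl_verify_client      on;

      root /var/www/html;
  }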

The next step is to drop connections (instead of responding with 404) when bad actors attempt to retrieve non-existent URLs.
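One way to do this in nginx, assuming the origin serves static files, is to send unmatched requests to a named location that returns nginx’s non-standard 444 code, which closes the connection without a response. A sketch, to be placed inside the server block above:

  location / {
      try_files $uri $uri/ @drop;
  }

  location @drop {
      # Skip logging and close the connection without a response
      access_log off;
      return 444;
  }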

Even so, probing attempts can still pollute the log files. One way to limit log growth is to set a maximum size. In /etc/systemd/journald.conf, set

SystemMaxUse=100M
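For the new limit to take effect, restart the journal daemon; existing logs can also be trimmed right away.

  systemctl restart systemd-journald
  journalctl --vacuum-size=100M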

The above steps take care of the most hostile bot and bad-actor scenarios. Managing logs for a heavily visited site can always be handled with compression and centralized archiving.
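For the nginx access and error logs themselves, a logrotate policy is the usual tool; the sketch below compresses and keeps two weeks of rotated files (paths and retention are assumptions), with shipping to a central archive handled separately.

  /var/log/nginx/*.log {
      daily
      rotate 14
      compress
      delaycompress
      missingok
      notifempty
      sharedscripts
      postrotate
          # Tell the nginx master process to reopen its log files
          if [ -f /var/run/nginx.pid ]; then
              kill -USR1 $(cat /var/run/nginx.pid)
          fi
      endscript
  }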