16 November, 2015

A DDoS Attack Can Really Ruin Your Day

I hope I don't have to go through too many more days like this past Saturday, 14-Nov.  The insidious thing about DDoS attacks is that there is little that can be done about them.  Ordinarily (with a plain DoS attack) you identify a host or netblock which is sending the nuisance traffic and you add some rules to your router to drop those packets.  But add that extra "D", and it's like being surrounded by a swarm of bees: you can't possibly swat them all dead.

The few things which can be done (which I know about) are:
  • direct traffic somewhere else.  There are companies which specialize in this sort of mitigation, and sometimes sink/absorb multiple gigabits per second of spurious traffic.  Although I've never approached anyone who does this, I'm guessing that doesn't come cheap.
  • insert some sort of rules to limit the rate at which these packets are handled.  This still diminishes service, but hopefully it's not shut down completely.  At least the processing is occurring at the packet level, and it's not an application wasting time handling meaningless connections.  Also, the kernel doesn't waste time maintaining socket state and such.
  • coordinate with your upstream Internet provider to block or reduce the rate of ingress of these packets.  This is more useful than acting on one's own because of the limits of your upstream link; i.e., the attackers are simply filling your pipe with useless bits.  Considering my link is what you might call residential-class service, this is impractical.  (Besides, shhhhhhhhhhh!  their ToS say I'm not supposed to be running a server.  But let's think about this for a second.  If they were truly serious about that, all they would have to do is block the well-known proto/port used, just like they do with outbound TCP/25.)
  • shut off service entirely, and hope that the attacker botnet eventually "loses interest" in your host, or gets shut down by others' actions.
This last is the strategy I used yesterday.  I know it's much less than ideal and is not sustainable.  From what I can tell in the logs, the attack began about 0130 local (US Eastern) time and lasted around 15 hours.  I was merrily puttering around (I think Web browsing) when I noticed the Internet NIC activity LED on my router was flashing an awful lot, accompanied by a lot of disk-seeking sound (which would have been syslogd(8) writing out the maillog).

The first thing I did, of course, was log onto the (Linux) router and do a tcpdump of the "WIC" (WAN interface card), and I discovered a lot of TCP/25 traffic.  So I pulled up the maillog and found a lot of instances of "host such-and-such did not issue MAIL/EXPN/VRFY/ETRN" as well as messages about refusing connections because the configured number of (MTA) children had been reached (which happens to be 7 in my case).  By far my biggest concern was that the attackers would send something which trips up my MTA and makes it spam other people, possibly getting me knocked off my ISP.  So I did what I usually do: look at a representative sample of recent maillog entries and add some iptables(8) rules to DROP traffic coming from the hosts (or maybe netblocks) causing the "did not issue MAIL/EXPN/VRFY/ETRN" entries to be generated (basically, connecting and doing a whole lot of nothing, just tying up a socket and pushing me over my child process limit).
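For the curious, the triage looked roughly like this.  It's only a sketch: "eth1" stands in for my WAN interface name, the log path varies by distribution, and the address in the DROP is just a documentation placeholder (the choice of chain is explained in the next paragraph).

    # confirm the flood really is SMTP traffic on the WAN side
    tcpdump -ni eth1 'tcp port 25'
    # see who is connecting and then doing a whole lot of nothing
    grep 'did not issue' /var/log/maillog | tail -n 20
    # drop one offender as early as possible
    iptables -t mangle -A PREROUTING -i eth1 -s 192.0.2.45 -j DROP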

There came a point where I realized I needed more than just a line or three of rules, so I decided to grep(1) for "did not issue" in the logs, take the latest 150 such entries with tail(1), extract the IP addresses with sed(1), and use sort(1) to get a unique set of them.  As I recall, I came up with 30 or so addresses, and used a for loop to append DROPs of them to the PREROUTING chain in the mangle table.  (The idea is to cut off traffic as soon as the router takes packets in, hence a PREROUTING chain.)  Unfortunately, it became apparent that a handful of addresses wasn't going to be effective, because the log messages kept on a-comin'.  So I decided maybe a little PTR record and whois(1) investigation (to block entire netblocks instead of individual addresses) was in order.  A disturbing trend started to emerge: the IP addresses generally purported to be in Russia and other Eastern Bloc countries, which, at least to me, are notorious as the origin of a lot of DDoS attacks and spam.
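Reconstructed from memory, the one-liner-turned-loop was roughly the following; the log path and the sed(1) expression are guesses at my MTA's exact log format, so treat it as a sketch rather than a recipe.

    # pull the most recent offenders out of the log and DROP them at ingress
    for ip in $(grep 'did not issue' /var/log/maillog | tail -n 150 \
                | sed -n 's/.*\[\([0-9.]*\)\].*did not issue.*/\1/p' \
                | sort -u); do
        iptables -t mangle -A PREROUTING -s "$ip" -j DROP
    done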


I really did not want to shut off services entirely, but I saw little choice.  I did not want my MTA compromised into being yet another source of spam.  I put in an iptables(8) rule which dropped all TCP/25 traffic arriving over the WIC.  I noticed this did not stop the popping of messages into the maillog, and realized the attackers were also attempting connections to TCP/587 and TCP/465, so I put in rules for those too.  For the moment, IPv6 is, in a sense, only in its infancy (for being a twenty-or-so-year-old infant!), so there was no apparent reason to add any ip6tables(8) rules.  And in fact, email still flowed in over IPv6, most notably from Google (thanks for being a leader in the adoption of IPv6!).
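The shutoff amounted to three rules along these lines (again, "eth1" is a stand-in for the actual WAN interface name):

    # drop all inbound SMTP, submission, and SMTPS arriving over the WAN
    for port in 25 587 465; do
        iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport "$port" -j DROP
    done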

It was at this point I was very glad that (according to Wikipedia) Dana Valerie Lank, D. Green, Paul Vixie, Meng Weng Wong, and so many others collaborated on initiating the SPF standard.  I began to think: from whom was it particularly critical that I receive email?  I could go to those domains' SPF records and insert ACCEPT rules for their addresses or netblocks.  Google was already "covered," as noted above about IPv6.  Oddly enough, the first domain I thought of was aol.com, because my volleyball team captain might want to tell me something about the upcoming match on Monday.  Their SPF record looked gnarly, with some include: directives, so I settled for looking at the Received: headers in previous emails from Matt and identifying the netblock AOL had been using (it turns out it could be summarized with a single /24).  Next I thought of Discover, then of First Niagara, then Verizon (in case they wanted to inform me of impending ejection from their network for violating their ToS).  I also thought that, although it's not all that critical, I receive an awful lot of email from nyalert.gov, especially considering we had that extended rain and wind storm, and the extension of small craft advisories and so on.  All in all, I made exceptions for a handful of mailers.
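Each exception was just a matter of looking up (or inferring) a sender's outbound netblock and putting an ACCEPT ahead of the drops.  Something like the following; the /24 shown is a documentation placeholder, not AOL's actual range.

    # check a sender domain's published SPF record
    dig +short TXT aol.com | grep spf1
    # let that sender's netblock in ahead of the SMTP drops
    iptables -t mangle -I PREROUTING 1 -i eth1 -p tcp --dport 25 -s 198.51.100.0/24 -j ACCEPT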

Then, to evaluate the extent of the problem, I used watch(1) to list the PREROUTING chain of the mangle table every 60 seconds and see the packet counts on the DROP rules.  I'd say they averaged deltas of around 20, or one attempt every three seconds, and the peak I saw once was 51, or nearly an attempt per second.  I know that as DDoS attacks go, this is extremely mild.  If it were a seriously large botnet controlled by a determined entity, they could likely saturate my 25 Mbit/s link, making anything Internet-related extremely challenging or impossible.  Always in the back of my mind was that this was unsustainable in the long run, and I hoped the botnet was dumb enough to "think" that if not even ICMP connection-refusal messages were coming back, it had achieved one possible objective: knocking my entire MTA host offline.
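The monitoring itself was nothing fancier than:

    # re-list the chain once a minute; -v adds the per-rule packet and byte counters
    watch -n 60 'iptables -t mangle -L PREROUTING -n -v'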

I then contemplated the work which would be involved in logging onto various Web sites (power, gas, DSLReports/BroadbandReports, First Niagara, Amazon, Newegg, and on and on) and updating my email address to a GMail one.  I also started searching for email hosting providers who would basically handle this entire variety of mess on my behalf and forward email to me, whereby I could block all IP addresses except for those of said provider.  Or maybe I could rejuvenate fetchmail(1) or similar to go get my email from them over IMAP and reinject it locally, as I used to do decades ago with my dialup ISP's email.  To my amazement, the prices are low: it looks as if, for example, ZoneEdit will handle email forwarding for on the order of $1/month/domain (so $3 or maybe $4 in my case, because there is philipps.us, joe.philipps.us, philippsfamily.org, and philipps-family.org).  This is in contrast (as far as I know) to Google's $5/month/user, and I have a ton of custom addresses (which might count as separate "users").  (Basically, it's one of the perks of having your own domain and your own email server: every single sender gets a separate address, and if they turn into a not-so-savory sender, their specific address ceases to work.)  The search terms needed some tweaking, because trying things like "mail exchangers" (thinking in terms of MX records) turned up lots of hits for Microsoft Exchange hosting.
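The fetchmail(1) idea would amount to a few lines of ~/.fetchmailrc.  This is purely hypothetical; the host name and account are made up, and a real provider's settings would differ.

    # pull mail from the hosting provider over IMAP and reinject it into the local MTA
    poll imap.example-hosting.net protocol IMAP
        user "joe" password "CHANGEME" ssl smtphost localhost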

A friend of mine runs a small Internet services business and has some Linux VPSes which I can leverage.  He already lets me run DNS and a little "snorkel" where I can send email through a GRE tunnel and not appear to be sending email from a "residential-class IP address."  So I called him up, and thankfully he answered right away.  I got his permission to use one of the IPv4 addresses (normally used for Apache/mod_ssl) for inbound email, in an attempt to see whether these attackers are more interested in my machine (the specific IP(v4) address) or my domain(s).  If I add an additional address to accept email and the attacks do not migrate to that address, then I know it's far more likely that this botnet came across my address at random or by scanning the Verizon address space.  So I picked an address on one VPS, added a NAT rule to hit my end of the GRE tunnel, had to debug a routing table issue, and redid the MTA configuration to listen on the tunnel's address; all in all, about an hour and a half's worth of work to implement and do preliminary testing.
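On the VPS side, the plumbing is conceptually just a DNAT down the tunnel.  The addresses below are placeholders, and this sketch glosses over the return-path detail that actually cost me the debugging time.

    # on the VPS: forward inbound SMTP on the public address down the GRE tunnel
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A PREROUTING -d 203.0.113.25 -p tcp --dport 25 \
        -j DNAT --to-destination 10.88.0.2
    # replies have to go back out the tunnel too, via either a policy route on the
    # home end or source NAT here (the latter hides the client's address from the MTA)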

It was at this time I realized, in my watch window, that the packet counts were no longer increasing, even over several samples (minutes).  Even after I added the A record to supplement the one for my Verizon address, I noticed there was basically no activity in the maillog.  So as best as I can tell, it was as I suspected: the botnet was really likely only interested in my specific address/host.  And thankfully it "saw" the lack of any TCP response as an indicator that the host had gone offline, and ceased directing any resources to the attack on me.  I hate to give these worms any ideas, but you could also try other things, like ICMP echo, to determine whether a host is still alive.  Then again, if your sole objective is compromising an MTA, maybe that doesn't matter.

Eventually, I inserted a rule to accept TCP/25 traffic again, thinking that if attacks resumed, I could easily kick that rule out and spring the shutoff trap again.  Or even better, I could replace it with a rule using the limit match, so only something like 10 or 15 connection attempts could be made per minute.  At least the MTA would not be wasting time/CPU on these useless connections, and the kernel would not have to keep track of state beyond the token bucket filter.  I almost hit the panic button when I saw some more "did not issue" messages, as well as, a little later, a notice of refusing connections because of the child limit.  But I reasoned: just wait a few minutes and see if it's persistent.  However, I had a lot of anxiety that it was not over yet, and that it was the domain and not the host the attackers wanted compromised, because some of that unwanted activity was coming through the newly set up IP address.
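For the record, the limit-match idea looks something like this.  The numbers are illustrative and "eth1" is again a stand-in for the WAN interface; the ACCEPT has to sit above the DROP for the token bucket to do its job.

    # allow roughly a dozen new SMTP connections per minute, drop the excess at ingress
    iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 25 --syn \
        -m limit --limit 12/minute --limit-burst 15 -j ACCEPT
    iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 25 --syn -j DROP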

In retrospect, I question whether I want to continue doing this myself.

  • It's against the ISP ToS, and they could blast me off their service at any time.  I'd likely have to pay their early termination fee and I'd have to go slinking back to Time Warner (and their slow upstream speed).
  • What happens the next time this occurs?  Will it be more persistent?
  • What if a future attack targets the domain, and any IP address which I use gets similarly bombarded?
  • What if next time it's not around an attempt per second but instead hundreds or more attempts per second?
  • I should have traced traffic on my GRE "snorkel" tunnel to see if they managed to compromise my MTA and were actually sending email surreptitiously (something like the tcpdump sketch below would have shown it).
At least the experience has uncovered flaws in my router's configuration, and I'll be more ready to switch to using the VPS to receive email.  And I'll have some ideas about hosting companies if I want to migrate operations there instead.
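Checking for surreptitious sending would have been a one-liner; "gre1" is whatever the tunnel interface is actually named on the router.

    # watch for outbound SMTP leaving via the snorkel that I didn't originate
    tcpdump -ni gre1 'tcp dst port 25'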

UPDATE Mon., 16-Nov: Some more folks are at it again.  What I've decided to do for now is to put in some iptables(8) rules which use the limit match to allow only a certain number of connections per minute.  So far this has been fairly effective at keeping the chaos to a minimum, but it's not ideal.  I really think I'm going to have to look at an email hosting service, at least until I can put up a more robust email server, or maybe permanently.
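If the plain limit match turns out to be too blunt (it throttles legitimate senders right along with the junk), a per-source variant using the hashlimit match would be the next thing to try.  This is only a sketch, not something I've deployed; the numbers and "eth1" are illustrative.

    # per-source rate limit on new SMTP connections, one token bucket per client address
    iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 25 --syn \
        -m hashlimit --hashlimit-name smtp --hashlimit-mode srcip \
        --hashlimit-upto 6/minute --hashlimit-burst 10 -j ACCEPT
    iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 25 --syn -j DROP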

