Huh, I got attacked from 170 countries last year (HTTP) and Cloudflare's autonomous, machine-learning-powered detection rules did almost nothing. It was millions of identical requests over and over, and the only thing we could do to stop it was manually put in rules to block routes. Not only that, some of the attacking traffic came from within Cloudflare Workers, or at least went through their WARP client (those details are now fuzzy). It was a pretty miserable failure to perform on their part.
Similar experience last week. But tbh I'm using the free plan so I wasn't expecting too much from them. What worked was using the nginx rate limiter aggressively, parsing logs, and denying the top IPs with nginx. Because all traffic comes through CF, I wasn't able to use iptables for blocking.
If you can thwart it with your own nginx, then it can’t be much of an attack. Cloudflare is one of your only hopes against a volumetric attack especially when paying $0.
Cloudflare has a free rate-limiting feature, btw. Not as configurable as nginx, but it's nice to not have the requests touch your server at all.
How many requests per second?
> QOTD DDoS attack
> How it works: Abuses the Quote of the Day (QOTD) Protocol, which listens on UDP port 17 and responds with a short quote or message.
Does any reasonable operating system these days support this protocol? Sounds like "IP over Avian Carriers" to me.
Support - yes. Turn on without a bit of hassle - no. I'm not sure how they found that many active services. Honestly, at that small percentage I suspect misclassification instead.
Yeah, I think this is misclassification based on UDP port.
If you take their random source ports (21,925), ~0.004% come from any single port, which lines up with what they said was "Other" traffic. The numbers don't quite work out right, but it seems like it's within a factor of 2, so I wouldn't be surprised if it was something like udp source/dest port = 17 => QOTD.
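Back-of-the-envelope version of that check (the 21,925 figure is from the parent comment, so treat it as approximate):

    # If source ports were uniformly spread across the 21,925 observed ports,
    # each single port (including 17/QOTD) would carry roughly this share:
    ports = 21_925
    share_pct = 100 / ports
    print(f"{share_pct:.4f}% per port")  # ~0.0046%, same ballpark as the reported "Other" slice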
They're not an April Fools' joke. A '90s Linux might have had these services enabled by default. I assume they were built to make network debugging slightly less boring.
Huh, this sounds kind of cool, I like the idea of there being a few QOTD servers dotted around the internet. Shame that the first I'm hearing about it is it being abused to launch a DDoS.
You can always ssh to random hosts and read the netbanners.
Of course nearly all of them are a long paragraph or two of legal jargon that more or less boils down to "fuck off."
While not a random server in the internet, here is the start of the ssh banner on my router (before the legal "fuck off")
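_______ __ __ __
|_ _|.-----.----.| |--.-----.|__|.----.-----.| |.-----.----.
| | | -__| __|| | || || __| _ || || _ | _|
|___| |_____|____||__|__|__|__||__||____|_____||__||_____|__|
N E X T G E N E R A T I O N G A T E W A Y
--------------------------------------------------------------------
NG GATEWAY SIGNATURE DRINK
--------------------------------------------------------------------
* 1 oz Vodka Pour all ingredients into mixing
* 1 oz Triple Sec tin with ice, strain into glass.
* 1 oz Orange juice
--------------------------------------------------------------------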
Including a cocktail recipe in the login banner has been a signature of OpenWRT for a long time. Looks like Technicolor came up with their own recipe for their OpenWRT distribution.
OpenWRT stopped doing this 10 years ago, as it was too much hassle to pick a drink that satisfied everyone.
SSH banners come over TCP, requiring the 3-way handshake first, meaning you can't use it for traffic reflection (beyond the SYN-ACK itself).
Right, in general unless you're going to put a lot of care into the state machine to deal with network congestion/abuse it's better to stick with TCP.
I was glad to see QUIC did a pretty good job of limiting its usefulness for reflection attacks. Hopefully we’ll see more uses of UDP move to it
I ran a qotd server for a while, only retired two months ago actually. It wasn't very popular.
Did you have some sort of rate limiting on it?
QOTD can also be used over TCP, which avoids the reflection problem it has when run over UDP.
Is it part of Microsoft Services for Unix? That seemed to be the primary source of chargen reflectors when I was getting hit by that; and it feels like a similar thing.
A lot of security is just making stuff up to sound smart, since the clients aren't very technical. Someone saw packets on port 17 and looked up port 17 and decided that meant the QOTD service was involved in the attack. Probably.
It almost feels like writing about this is exactly what the attacker wants: free validation and advertisement for exactly what their botnet can do.
Is this a sign that
A: Cloudflare is feeding the trolls because they think that they are invincible. Or: these post-mortems don't establish any proof that the attack was successful, especially when they cover DDoSes that the public barely even noticed until CF published a blog post a month later -- so the write-up is actually embarrassing for the attackers and hurts their ability to market botnets for rent, at least once they no longer hold the literal world record.
B: Cloudflare is feeding the trolls for free testing scenarios to improve the mitigation
C: The trolls don't really care if you feed them, large DDoS is something that's happening all the time anyways
D: all of the above
← Inserting standard complaint about Cloudflare protecting the sites selling these DDoS attacks here (at best: a conflict of interest selling the cure while protecting the disease).
This article taught me about the QOTD protocol: https://datatracker.ietf.org/doc/html/rfc865
Cool artifact of the internet!
I run it inside my private network because it's cute. I wrote a toy C utility and made it Docker-friendly so I could just toss it at proxmox.
https://github.com/jkingsman/RFC865-QotD-Server-for-Docker
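For those curious, the protocol is small enough that a minimal responder fits in a few lines. A sketch in Python (not the linked repo's implementation; the quote and the unprivileged port are placeholders):

    # Minimal RFC 865 QOTD-style responder over UDP (sketch, not hardened).
    # Any datagram to the port gets a short quote back, which is exactly why
    # an open QOTD service makes a handy reflection source.
    import socket

    QUOTE = b"The only winning move is not to play.\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 1717))  # real QOTD is port 17; unprivileged port used here

    while True:
        _, addr = sock.recvfrom(512)  # request payload is ignored, per the RFC
        sock.sendto(QUOTE, addr)      # reply goes to the (possibly spoofed) source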
Dodgy IoT devices will be the end of us all.
It's wild to think that with the proliferation of 1 Gbps fiber internet, even a modern Pi board or old desktop is a potential 1 Gbps bot host.
When your IP is found to have been part of a botnet, I think ISPs should just limit you to like 20 Mbps for at least a year, so you think twice about buying that $10 wifi baby monitor next time.
That's quite harsh. Good thing you're not in charge of making decisions.
When you get caught speeding on the road or being a nuisance otherwise you can and will get punished by the courts, including temporary restrictions on your driver license. When you money mule for others, even if you don't know that you actually fell victim to a scam, you get punished as well. When you litter in Singapore, you can get ordered to work community service.
I see no issue in handing out similar punishments in the digital space. The Internet is a shared medium, everyone who connects to it has a responsibility to not be a nuisance to others.
On the road you could have killed someone. Your $20 baby monitor bought from an authorized store, you know... whatever happens, it's not gonna kill anyone very directly...
The main ingredient of crime is intent, whatever you say. A smaller ingredient can be recklessness, but maybe it's the ISPs sending all those millions of empty packets to a single server that should start feeling some heat?
If that could make people think about it, I'd be all for it. But the people buying that junk are absolutely clueless, and would remain so even after the punishment was well-underway.
Obviously they are - everyone's clueless about everything except the one thing they know about. I imagine for the clothes you're wearing you're clueless about the conditions of the people who made them.
Thanks to CGNAT you, obviously an upstanding digital citizen, will also have to pay for your neighbor purchasing an IoT toaster.
Your ISP can tell you apart from your neighbor since they are the ones doing the CGNAT.
That doesn't make any sense. Who do you think is doing the CGNAT?
Any proof that this happened except cloudflare claiming it did? Just wondering whether these kind of attacks are seen by other orgs.
Anybody know who the "Cloudflare customer, a hosting provider" was and what IP they were targeting and why? I'm curious why someone would go to such great lengths to try to take down a service.
The article says it was a 45-second attack. I used to run a high-profile website which used to get a lot of 90-second attacks. Best I could figure was that some of the DDoS-as-a-service outfits would give a short attack as a free sample, and people picked us because we were high profile. Thankfully, these would almost always attack our website rather than our service, and availability for our website didn't really matter. Most of the attacks weren't a big deal, and they'd get bored and move on to something else. The ones that did take a web server down were kind of nice... I could use those to tune both the webservers and the servers doing real work.
I don't know who the provider is, but the attack was almost certainly not targeting the provider, but a site hosted on their platform. Many hosting companies upsell their customers into stuff like providing Cloudflare DDoS prevention. The target site was probably something political or controversial. I work at a hosting provider and we deal with this type of thing constantly.
> The target site was probably something political or controversial.
Since this is Cloudflare, my headcanon is that it was a rival DDOS service, after a wild flamewar on some .ru hacker forum.
A DDoS gets some fraction of the entire internet to attack a single host.
As the internet gets more users and more devices connected, the ratio of DDoS volume to a single connections volume will only get larger.
Is there any kind of solution?
> As the internet gets more users and more devices connected, the ratio of DDoS volume to a single connections volume will only get larger.
I'm not sure if that's the case. Large volumetric DDoS records have been increasing, but connection bandwidths have also been increasing.
7 Tbps is a lot of traffic, but it only takes 7,000 nodes with 1 Gbps symmetric connections to do it. Botnet sizes don't seem to be getting that much bigger.
The basic solution to volumetric DDoS is to get a bigger pipe; this works, kind of, but it's hard to get 7 Tbps of downstream capacity, and you need to be careful that you don't become a 7 Tbps reflector.
The more scalable way is using BGP to drop traffic before it gets to you. Depending on your relationship with your hosting facility and their ISPs or your ISPs, it's often pretty easy to get packets to a given IP dropped one network before yours. Occasionally, those blocks could propagate, and things like BGP Flowspec promise more specific filtering... dropping all packets to an attacked IP mitigates the attack for the rest of the IPs on the path, but dropping all UDP to an attacked IP might get all the attack traffic and let most non-attack traffic through... More specific rules are possible if you wanted to try to let DNS and HTTP/3 survive while being attacked.
To work against a 45 second attack, BGP based measures need a lot of automation.
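A hedged sketch of what that automation can look like: a helper process (written here against ExaBGP's text API, whose exact syntax varies by version) watches traffic toward the attacked address, announces the /32 with the well-known BLACKHOLE community (RFC 7999) when a threshold is crossed, and withdraws it when the flood subsides. The addresses, thresholds, and the get_bps() hook are placeholders, not a real deployment:

    # Sketch: auto-trigger a remotely triggered blackhole (RTBH) for an attacked IP.
    # Meant to run as an ExaBGP "process" that writes announcements to stdout.
    import sys
    import time

    VICTIM = "198.51.100.10/32"   # attacked address (placeholder)
    NEXT_HOP = "192.0.2.1"        # blackhole next-hop (placeholder)
    TRIGGER_BPS = 5e9             # start dropping above ~5 Gbps toward the victim
    CLEAR_BPS = 5e7               # lift the null-route once traffic falls off

    def get_bps(prefix: str) -> float:
        """Hypothetical hook: current inbound bits/sec toward `prefix`.
        Replace with real flow/interface counters; returns 0.0 as a stub."""
        return 0.0

    blackholed = False
    while True:
        bps = get_bps(VICTIM)
        if not blackholed and bps > TRIGGER_BPS:
            # 65535:666 is the well-known BLACKHOLE community (RFC 7999)
            print(f"announce route {VICTIM} next-hop {NEXT_HOP} community [65535:666]")
            blackholed = True
        elif blackholed and bps < CLEAR_BPS:
            print(f"withdraw route {VICTIM} next-hop {NEXT_HOP}")
            blackholed = False
        sys.stdout.flush()
        time.sleep(5)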
You don't think the proliferation of inexpensive dogshit IoT products from the Far East, running already-10-years-out-of-date versions of Linux (bonus if it has a hidden Telnet daemon with hardcoded root password!), hooked to ever-expanding 1Gbps residential fibre lines, has anything to do with it?
This represents like 75% of surveillance camera systems out there btw.
I think the increase in 1G residential connections is a bigger factor than the IoShit products. I don't think botnet node counts are getting that much bigger, but the amount of garbage each one can push certainly is.
Not a 100% solution but would help greatly if ISPs:
1) performed egress filtering to prevent spoofing arbitrary source addresses
2) temporarily shut off customers that are sending a large volume of malicious traffic
> sending a large volume of malicious traffic
How would an ISP determine egress is malicious? Genuinely curious.
One simple way to do it is to configure the customers' routers to drop/reject all UDP/TCP packets where the SRC address does not match the private IP range or the WAN-assigned public IP.
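That check is essentially BCP 38 applied at the access edge. A minimal sketch of the decision logic in Python, assuming made-up example prefixes for the LAN and the assigned WAN address:

    # BCP 38-style source validation on a customer router: only forward packets
    # whose source address is either the LAN prefix (pre-NAT) or the WAN address.
    from ipaddress import ip_address, ip_network

    VALID_SOURCES = [
        ip_network("192.168.1.0/24"),   # LAN side, before NAT (example prefix)
        ip_network("203.0.113.42/32"),  # WAN address assigned by the ISP (example)
    ]

    def allow_egress(src_ip: str) -> bool:
        src = ip_address(src_ip)
        return any(src in net for net in VALID_SOURCES)

    print(allow_egress("192.168.1.50"))   # True  -> forward
    print(allow_egress("198.51.100.7"))   # False -> drop as a spoofed source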
I cannot believe this is still not commonly done. I remember discussing this with some people in the industry over ten years ago and the sentiment was “if ISPs just stopped IP spoofing that would solve most problems”.
It would solve a ton of other people’s problems, but cause a few for you, so it won’t be done until required by law.
E.g., customer does something stupid with addresses but the “wrong address” is something they control on another network, so it works. Egress filtering breaks it, support call and crying.
The customer's router is for the customer to configure
I think ideally the customer's router shouldn't be touched, but the ISP can still do packet filtering on the next hop to drop any packets which don't have a source IP matching the assigned WAN address of the router.
Wouldn't that need a huge amount of extra hardware to do that filtering when the routers in each customer's home are mostly idle? Just setting egress filtering as the default and letting users override that if they need to for some reason should be a good outcome. The few that do change the default hopefully know what they are doing and won't end up part of a DDoS but they'll be few anyway so the impact will still be small.
> Wouldn't that need a huge amount of extra hardware to do that filtering
20 years ago Cisco (probably much longer) routers were able to do this without noticeable performance overhead (ip verify unicast reverse-path). I don't think modern routers are worse. Generally filtering is expensive if you need a lot of rules which is not needed here.
The router in the customer's home cannot be trusted. With cable at least, you are able to bring in your own modem and router. Even if not, swapping it is easy, you just have to clone the original modem's MAC. In practice this is probably quite common to save money if nothing else (cable box rental is $10+/mo).
Note that spoofing source IPs is only needed by the attacker in an amplification attack, not for the amplifying devices and not for a "direct" botnet DDoS.
I would in fact guess that it's not common at all. Setting up your own cable modem and router is going to be intimidating for the average consumer, and the ISP's answer to any problems is going to be "use our box instead" and they don't want to be on their own that way. I don't know anyone outside of people who work in IT who runs their own home router, and even many of them just prefer to let the ISP take care of it.
Common, no; very easy to proliferate though as people become aware of the savings possible. And in the 2 cases I've seen, it was literally order the same model online and swap it, no configuring required. And it wasn't even the family tech support guy (me) who came up with the idea. The ISPs including the router as a monthly line item on the bill are literally, if indirectly, asking you to do this.
Comcast/Xfinity in fact gives me a discount for using their router. Probably because (a) it lowers their support burden and (b) they are logging and selling my web traffic or at least DNS lookups.
That's surprising to me, it was when I used Comcast (2016) that I first purchased a cable modem. It did save me money.
I think it is less common now, but ISP routers on average used to be trash with issues — bufferbloat, memory leaks, crashes — so a number of people bought a higher end router to replace the ISP provided one. Mostly tech savvy people who were not necessarily in IT.
Nowadays my ISP just uses DHCP to assign the router an address, so you can plug in any box which talks Ethernet and respects DHCP leases to be a router, which is nice, albeit 99.9% of people probably leave the router alone.
Indeed, though we're at the mercy of the tyranny of the default.
All large ISPs have fancy network visibility and DDoS mitigation solutions.[1] But getting them to actually USE them for problems that aren't lighting up their monitoring dashboards is another story entirely.
(1. I know this, because I used to work for a company that made them, and the majority of worldwide ISPs were our customers.)
Hundreds of Gbps of UDP traffic to random ports of a single destination IP from a residential (?) network should be a pretty easy pattern to automatically detect and throttle.
More advanced attacks are more tricky to detect, but plain dumb UDP flood should be easily detectable.
> Hundreds of Gbps of UDP traffic to random ports of a single destination IP from residental (?) network
You mean my legitimate QUIC file transfer?
Have you ever uploaded 100's of Gbps over QUIC from your residential connection to a single IP?
And the aggregate across the ISP's network could in theory be monitored - so if you were uploading 1Gbps, yes, it could be legitimate. If you and 582 others were all uploading 1Gbps to the same IP at the same time, much less likely legitimate.
My homenet is 1GBit, so is my Internet
I.e. no traffic beyond my legitimate saturation can reach the ISP
I have saturated my link with quic or wireguard (logical or) plenty of times.
The lack of any response on high data rates would be an indicator
I've only tried that once and it failed gloriously due to congestion.
I don't think there's many real protocols that are unidirectional without even ACKs
> Have you ever uploaded 100's of Gbps over QUIC from your residential connection to a single IP?
Yes actually --- migration between cloud bulk storage providers.
Edit: I misread Gbps as Mbps above.
Which residential ISP offers >100Gbps service?
https://www.ietf.org/rfc/rfc3514.txt
If someone is reporting malicious traffic coming from the ISP's network then an ISP should be obligated to investigate and shut off the offending customer if necessary until they've resolved the problem.
How would this ever work at scale? These attacks come from thousands of compromised devices usually. e.g. Someone's smart fridge with 5 year old firmware gets exploited
As dijit (above this comment) has noted, this is somewhat possible and automated today.
For example, one method has the attacked IP get completely null-routed, and the subsequent route is advertised. Upstream routers will pick up the null-route advertisement and drop the traffic ever closer to the source(s). The effect of the null route is that the attacked IP is unreachable by anyone until the null-route is lifted... so the aim of the DDoS isn't averted, but at least the flood of traffic won't pummel any network paths except for (ideally) the paths between the attacker(s) and the first router respecting the null-route. In my experience the DDoS tends to stop more quickly and shift away to other targets if the folks directing the attack can no longer reach the target (because: null-route) and then the null-route can be lifted sooner relative to a long-running DDoS that hasn't shifted away to other targets.
With SMTP there are services who provide a list of malicious servers so that they can be blocked at the receiving end.
I wonder if this would work in reverse, having a standardised, automated protocol that allows providers like Cloudflare to notify upstream networks of attacks in real time, so malicious traffic can be blocked closer to the source.
Genuinely curious, I'm not an expert in low-level networking ops.
Your ISP likely knows you're part of a botnet quite early. For example, many of them use magic domains as either shutoff switches or C&C endpoints, so they could be detected. But when was the last time anyone's ISP ever told them "hey, one of your hosts is infected"?
I don't have a specific answer for that but it is really a problem that residential ISPs are going to have to solve now that gigabit or faster symmetric internet connections are becoming the norm.
> How would this ever work at scale?
We pay internet providers healthy amounts of money each month. Surely they can afford to hire some staff to monitor the abuse mailbox and react on it - we know they can when the MAFIAA comes knocking for copyright violations, because if they don't comply they might end up getting held liable for infractions.
Largely they do these things, it’s just not completely automatic.
Banks have already figured out fraud detection through pattern recognition; ISPs can do the same. When a connection has never used more than 300/10 of a 1000/1000 link, and 80% of that was TCP with dstport 80 or 443, then it starts pushing ~900 Mbps of upstream UDP to every possible dstport, maybe something is wrong?
"Your network is generating an extraordinary amout of traffic, which is likely the result of a virus-infected device. As a result, we have lowered your speed to 100/20. Please read the steps to check your devices and unlock your connection here: ____"
IoT botnets depend on the total number of devices and not individual bandwidth. Most IoT devices have cheap network chipsets and unoptimized networking stacks; I wouldn't expect them to saturate a 100 Mbps connection.
Economic fraud detection is like trying to find a needle in a haystack.
Blocking DDoS is like trying to separate the shit from the bread in a shit sandwich.
It's a completely different problem.
Banks have way lower traffic and slower reaction times than what cf needs to support.
Lowering the speed means "good" traffic is also impacted, resulting in higher timeouts.
Counting the number of events isn't cheap either.
So many false positives can happen here.
Most ISPs are already a pain in the ass to deal with. (Fuck you Charter/Spectrum). I don’t trust them to do their due diligence and implement this correctly. Or worse, abuse it.
“hey you pay for 1000/300 package. We detected abnormal traffic. Now you get throttled to 100/100. But you still pay for 1000/300”. Then they will drag out the resolution process until you give up.
Apparently there is no solution that has gained traction, and no single solution that works everywhere. Source address filtering (BCP 38) got us part of the way, but it's difficult/undesired to do in data centers.
IoT devices (speculated to be used here) would have to have a solution upstream. Things like MUD (RFC 8520) have been proposed, but have problems too - developers need to be able to list all communications of their device and make that available somehow (MUD profile server). Some consumers will never do it on their own, and may want to prevent alerting a device manufacturer they have a device (think connected adult toy...).
Also given that IoT devices may never be updated by their owners, expect to see IoT botnet DoS attacks for years.
Consumer home/office routers provide their clients IP connectivity without reserve. Why is that the case?
The default is to allow all available bandwidth, which presumably should be the case from ISP to consumer (most likely a paid-for service), but why should that be the default at consumer router <-> IoT? What need has your printer for 500Mbps outgoing? Or my fancy toothbrush?
Residential ISPs need to better police abuse of the network and they need to better respond to reports of abuse by cutting off the abusive, botnet-infected users. Of course, until there is a financial or regulatory incentive to cut off these customers, they won’t.
Is there any method for a connected device to advertise the required throughput? Maybe some SNMP thing? That’s the only way this would work I think.
You would want the advertised speed to be approved by the user at the time of setup.
If it was automatically accepted, the malware would just change the advertisement.
Locate and brick IoT devices with vulnerabilities?
Good idea. People only learn that something is wrong, when... they don't have internet anymore ;D.
Capachas?
Sorry for the worst and most hated possible solution, but I thought I'd at least mention it.
Maybe too many failed capachas causes you to not connect to the IP for an hour.
How would you expect capachas to help against a UDP flood? The attack works by oversaturating the network channel. Capachas are a (terribly bad) way to prevent the server from spending CPU and transmit bandwidth on garbage requests, but they wouldn't do anything if the server receives too many packets in the first place.
TIL about capachas[1]
[1]: https://en.wikipedia.org/wiki/Cachapa
Capacha =/= Cachapa =/= CAPTCHA
Make people pay per traffic.
We already do. Attackers use stolen capacity.
But why doesn't the market do the market thing, then?
For each separate endpoint the impact is minimal. Being part of the attack would cost you an extra $1 and you wouldn't even notice. On the other hand, ensuring the metering works correctly, reporting to the billing system works, invoicing it properly, providing support, etc. likely costs more per-customer.
I just depend on the WAF, as long as the DDoS attack does not reach my server. Is that OK?
L4-level DDoS is useless and is easily mitigated by Cloudflare.
App-level DoS uses Cloudflare evasion techniques and directly DoSes the destination server, while keeping itself undetected by Cloudflare's systems.
Do not assume that Cloudflare will protect you from all attacks; if your app is dogshit Python/JS/PHP then even Cloudflare won't protect you from L7 DDoS.
Should Cloudflare release the IPs and try to get those devices removed from the internet?
That would just be a target list for hackers. Most of the devices that take part are going to be in homes or SMBs with old firmware that’s subject to known vulnerabilities. They will give the list to AS operators who request the offending IPs (presumably restricted to the AS ranges) but dropping it out on the public internet just invites trouble.
They could but it's whack-a-mole and most ISPs just route abuse reports straight to /dev/null.
IMHO, ISPs caught in that act should get yanked off the internet.
What does this botnet do when it's not performing a 7.3 Tbps DDoS? Yea it's probably regular folks computers, but what "wakes up" the botnet to attack? What makes an attack target worthwhile? Presumably something this large would be on someone's radar...
The Command-and-Control part of the botnet would be whatever component they build to instruct it to attack; often using some dummy website they register and have the compromised clients poll for changes with instructions.
I think an increasing amount of them are state actors or groups offering the botnet as a service.
also add in DNS fastflux
https://www.cisa.gov/news-events/cybersecurity-advisories/aa...
https://www.cloudflare.com/learning/dns/dns-fast-flux/
>What does this botnet do when it's not performing a 7.3 Tbps DDoS?
Living their best "Im a retail Asus router/iot from Amazon" life.
I mean... 7 Tbps sounds like a lot, but 1 Gbps symmetric connections are common in many areas. 7,000 botnet nodes with good connectivity can deliver that. The article says the attack traffic came from 122,145 source IPs, but I would expect at least some traffic to be spoofed.
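Rough numbers behind that (the 7.3 Tbps peak and 122,145 source IPs are from the article; the rest is napkin math):

    attack_bps = 7.3e12      # 7.3 Tbps peak, per the article
    source_ips = 122_145     # unique source IPs, per the article

    per_ip_mbps = attack_bps / source_ips / 1e6
    print(f"~{per_ip_mbps:.0f} Mbps per source IP on average")  # ~60 Mbps

    # Equivalently, about 7,300 nodes with saturated 1 Gbps uplinks would suffice:
    print(attack_bps / 1e9)  # 7300.0 one-gigabit uplinks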
meanwhile, cloudflare has been blocking my reading of websites more and more.
What was the goal of an attack lasting only 45 seconds?
A few options:
- testing in preparation for a future attack
- proof of capability ("Nice network you have there. It'd be a shame if something happened to it")
- misfire ('What happens when I push this button that says "don't push"?')
Maybe someone was interested in buying the service, and the creator needed to prove its capabilities. I'm sure there are other reasons too.
I was thinking recon... but the other reasons cited by others here seem totally viable too.
Possibly the only kind of advertising that I actually like. Informative, engaging, no overselling.
@tete called it: https://news.ycombinator.com/item?id=44262324
Unrelated. It has nothing to do with the GCP outage that comment was about.
No, this is old.
Oops - My mistake, this is not the attack that got stopped earlier this year that broke the previous record.
Cloudflare is the One Punch Man of the internet
Cloudflare protects scammers and wants to recentralize the Internet around a for-profit company based in the United States.
One-Punch Man is a reluctant mentor, is often broke, loves ramen and cares about others.
They are not the same.
The current optics are 400 Gbps, and 800 Gbps are sampling; next up is 1.6 Tbps; so this is 20x 400 Gbps, basically one expensive switch's worth of traffic. Which is itself a scary prospect!
It's cloudflare so it's distributed. 10Gbps at this POP, 20Gbps at that one...
> DDoS sizes have continued a steady climb over the past three decades.
This is a bit misleading; according to Wikipedia[1], the first DDoS is said to have occurred less than three decades ago.
[1] "Panix, the third-oldest ISP in the world, was the target of what is thought to be the first DoS attack. On September 6, 1996, Panix was subject to a SYN flood attack, which brought down its services for several days while hardware vendors, notably Cisco, figured out a proper defense.", source: https://en.wikipedia.org/wiki/Denial-of-service_attack
90's, 00's, 10's. Three decades.
Exactly, should be less. Unless we have some data about DDoS sizes in the early 90s, before the first DDoS has occurred.
I'm going to give you the benefit of the doubt and assume you aren't just being pedantic to be a troll, and point out that when rounding 29 to the nearest 10, you get 30.
round(29 years) is three decades. This is hyper-pedantic to the point of being obnoxious.
Fair enough, apologies.
In my defense, reading that for the first time gave me an impression that DDoS attacks themselves were older; I was disappointed and wanted to share so that others wouldn’t get similar hopes. Next time I’ll round more decimals.
So the change from 0 sized ddos in June 1995 (30 years ago aka 3 decades ago) to a >0 sized ddos in September 1996 (29 years ago aka basically 3 decades ago) doesn't constitute an increase in size?
But that’s my point, I wouldn’t call it an increase from 0, I’d say 30 years ago that value was NULL - not even a zero sized DDoS has happened yet.
So two problems...
1) I'm not sure what your problem with the reasonable rounding of 29 years ago to 3 decades is... but the one that comes across is "extra pedantry for no reason"
2) According to Wikipedia the "first DoS" attack was in 1996. There are other sources, most of which describe that 1996 Panix attack as "one of the first" or "the first major" DDoS attack. Before that there were other DoS attacks using UDP and/or SYN floods, and some of them likely involved several computers (and possibly people) working in coordination. Those several computers were probably not compromised machines with malware responding to a C&C server, so the squishiness has to do in part with how exactly one defines DDoS - some definitions include a botnet requirement, others just need multiple computers working in coordination. It's claimed that Kevin Mitnick was targeting his prosecutor with SYN floods in 1994 (over 30 years ago), but it's not fully verified and the details are unknown from my research... likely, though, more than one computer was involved in that flood if it happened.
In the early 90s there were all sorts of fun and games where people would knock over IRC servers by triggering bugs/behaviors in a lot of connected clients. It's primitive but it seems to have a huge number of elements of DDoS. Similar for attacks on various telecomms infrastructure as the soviet union/eastern bloc fell apart in that time period.
Trying to put a hard "29 years ago" line in the sand is difficult to do... techniques evolve from previous ones and there are shared elements that make the line necessarily fuzzy.
So yeah... there's no reason to quibble about "three decades" since there's 35+ years of history around "things that look like DDoS attacks but don't fit a strict definition that requires botnets"