Why Defense.Net and F5: The Hybrid Cloud

May 27th, 2014 by Barrett Lyon
I have been hearing the term “hybrid cloud” for quite a while, but until recently, it sounded more like a marketing pipe dream than a reality. I’ve often wondered why hardware companies didn’t include cloud services that work harmoniously with their hardware offerings. Apple, Microsoft, and other software makers have figured out how to integrate the cloud with their own platforms, but hardware companies have been very slow to adopt the concept. I’ve had many conversations with SVPs at large hardware vendors, and it turns out the cloud is completely foreign to them. The billing models are different, the sales processes are different, and to a publicly traded company the differences seem terrifying enough to stay out of the cloud.

Then comes F5 Networks: These are the guys you see in nearly every cage in every datacenter around the planet. The glowing red F5 logo might ring a bell. It turns out they are a security company with really robust offerings, and they’ve been quietly building a solid security posture for their devices over the past decade. They also have open minds and have been working on a strategy to merge hardware with the cloud. They know that DDoS defense as a cloud service may be one of the most difficult cloud services to build — but if built correctly and with innovation, it becomes one of the best and most solid cloud platforms possible. DDoS defense as a service is the foundation of all cloud services.

With such a solid foundation, the next step is to seamlessly merge the DDoS defense network with F5’s hardware to create the world’s first true hybrid cloud. The vision is that customers can run their own local DDoS defense, and when volumetric attacks hit a certain threshold, they’re “automatically” offloaded to the cloud.

This is obviously a huge step for F5, and it is going to take a lot of F5’s smartest people working together with Defense.Net’s group to make this happen. But it WILL happen. It’s a very exciting time for me to watch my company join forces with F5 to really change the game and create a platform that will help the Internet and businesses grow for the next decade.

Blue Apron: I’m not having fun.

May 12th, 2014 by Barrett Lyon

Open letter to Blue Apron from a dyslexic guy:

Your instructions look cute and fun… They’re well designed for someone without a learning disability.  To me… they are a confusing mess:


“Blue Apron makes cooking fun and easy.” (For people without learning disabilities)
  • Your “knick knacks” pack is never referenced in the directions.
  • The pictures of the ingredients don’t look anything like what you’ve delivered.
  • The instructions require you to flip between two sides of a page (for someone like me that’s difficult and it fucks with my head).
  • I can’t follow directions like:  “gather the produce”.  You give me nothing labeled produce or anything that even matches a picture of what produce is.  I know what produce is, but I am concentrating on following the instructions and they just scramble me.
  • The lettering is too small on the pages; you’re compressing too much into a single page.  Why?  Hell, add additional directions online if you’re worried about printing costs.
  • Honestly, the pages are overwhelming to me and I shut down just looking at them.
  • It’s not fun if I don’t have my wife participating. :(

Anyway… thank you, we did enjoy trying the service.  However, when my wife is not helping me navigate your instructions I am left angry and embarrassed.

Further, I can’t find any auxiliary ways to learn or get direction.  You could easily provide links to videos that show the directions without the awful back-to-back vague “recipe”.

I, like many people, learn and process information differently.  You should help people like me have fun with your product by providing different ways to ingest your information.

So sadly I am canceling… I’ll come back if you guys fix this a bit. Startups are hard! I know! I’ve done a few. I hope you guys can help folks like me and I will become a loyal customer.

PS:  This is exactly why I don’t bake.  Oh and I love to cook.

I finally updated opte.org

May 12th, 2014 by Barrett Lyon


It’s been almost 11 years since www.opte.org last saw an update.  Today I updated the entire site with new code, a new image, and a new format.  This will be the foundation for creating and releasing new images starting this month.

Take a look and enjoy!

Defense.Net Squashes The Heartbleed Bug

April 9th, 2014 by Barrett Lyon
http://heartbleed.com:
CVE-2014-0160

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.”

Unless an OpenSSL implementation has been patched, anyone can remotely read 64KB chunks of its memory. Said another way: whatever was left behind in the memory of the vulnerable server becomes public data. This could be passwords, accounts, personal data, and the SSL private keys of the server itself!
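
For the technically curious, the flaw is easy to sketch: the heartbeat handler copied back a length the attacker supplied without checking it against the bytes actually received. This toy Python simulation is not real OpenSSL code (the “server memory” layout is invented purely to illustrate), but it shows why the one-line bounds check in the patch matters:

```python
# Toy simulation of the Heartbleed over-read. Not real OpenSSL code:
# the "server memory" below is invented purely to illustrate the bug.

# Pretend server memory: a 4-byte heartbeat payload, then leftover secrets.
SERVER_MEMORY = b"ping" + b";secret_key=hunter2;session=abc123"

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # Bug: trust the attacker-supplied length field and echo back that
    # many bytes, reading past the 4-byte payload into adjacent memory.
    return SERVER_MEMORY[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # Fix: drop any request whose claimed length exceeds the bytes
    # actually received (the essence of the OpenSSL patch).
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]
```

An honest heartbeat asks for 4 bytes and gets “ping” back; a malicious one asks for up to 64KB and gets whatever happened to be sitting next to the payload.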

To give you an idea of how big a problem this is, this software is used in everything from web sites, VPNs, and specialized networking equipment to email systems and phone apps. You name it.

At least half a million web sites are exposed to this, and it may be one of the most catastrophic bugs in secure computing history.

Whether this is a bug or an intentional addition is all speculation at this point; the flaw has been in the software for over two years, exposing anyone using OpenSSL.

To make matters worse, once the bug has been patched globally, it’s highly likely that every SSL certificate that has been on an exposed server will have to be re-issued, creating an absolute logistical and security nightmare.

The cost of replacing half a million SSL certificates could run into the hundreds of millions of dollars, and it’s unclear when this can or will happen.


How Defense.Net squashes the Heartbleed Bug

My company, Defense.Net, has built a secure network whose primary purpose is to provide DDoS mitigation.  However, the safeguards we put in place with our proprietary DefenseD scrubbing system to protect against DDoS attacks also protect against the Heartbleed attack vector.

The byproduct of DDoS defense in this case is a better, more protected network, and it further explains why DDoS defense is about more than keeping your sites online when they’re hit with hundreds of gigabits of garbage: these are full defensive networks.  In the process of cleaning up invalid bots and removing attack traffic, we also validate legitimate network protocols against illegitimate ones.  This ability to safeguard our customers from more than just DDoS attacks helps outline our goals and the future of our network.

We’re capable of doing this because we use a proprietary SSL implementation on one layer of our network, and on another layer we monitor and block traffic that attempts to exploit the bug.

What’s going on with WhatsApp?

February 22nd, 2014 by Barrett Lyon

WhatsApp went down today around 8:30 AM Pacific, was lights out for about six hours, and still continues to struggle to connect users to its network.  In addition, when connected it’s not possible to share images, videos, or audio.

Looking at their network design, it’s clear to me that they have their eggs in one basket. The application initially connects to the host “c.whatsapp.net”, which is hosted on a single /24 block at SoftLayer (a mid-tier hosting provider):

c.whatsapp.net. 3600 IN A 50.22.231.54
c.whatsapp.net. 3600 IN A 50.22.231.55
c.whatsapp.net. 3600 IN A 50.22.231.56
c.whatsapp.net. 3600 IN A 50.22.231.57
c.whatsapp.net. 3600 IN A 50.22.231.58
c.whatsapp.net. 3600 IN A 50.22.231.59
c.whatsapp.net. 3600 IN A 50.22.231.60
c.whatsapp.net. 3600 IN A 50.22.231.36
c.whatsapp.net. 3600 IN A 50.22.231.44
c.whatsapp.net. 3600 IN A 50.22.231.45
c.whatsapp.net. 3600 IN A 50.22.231.46
c.whatsapp.net. 3600 IN A 50.22.231.47
c.whatsapp.net. 3600 IN A 50.22.231.48
c.whatsapp.net. 3600 IN A 50.22.231.49
c.whatsapp.net. 3600 IN A 50.22.231.50
c.whatsapp.net. 3600 IN A 50.22.231.51
c.whatsapp.net. 3600 IN A 50.22.231.52
c.whatsapp.net. 3600 IN A 50.22.231.53

50.22.231.0/24 appears to host all of the c.whatsapp.net hosts, which makes the service vulnerable to DDoS attacks and hijacking.  It’s generally bad design to put all of your critical services in a single block routed to a single network provider in a single location.
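
You can reproduce this kind of check yourself: resolve the hostname and collapse its A records into /24 networks. A minimal sketch (the hostname and prefix length are just the ones discussed above; it needs working DNS to run against real hosts):

```python
import socket
import ipaddress

def resolved_networks(host: str, prefix: int = 24) -> set:
    """Resolve a host's A records and collapse them into /prefix blocks.

    A single resulting network means the service sits in one block,
    behind one provider, which is the exposure described above."""
    addrs = {info[4][0]
             for info in socket.getaddrinfo(host, None, socket.AF_INET)}
    return {ipaddress.ip_network(f"{a}/{prefix}", strict=False)
            for a in addrs}
```

At the time of this outage, every c.whatsapp.net record collapsed into the single network 50.22.231.0/24.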

In addition, 184.173.136.0/24 is at the same datacenter with the same provider and hosts the MMS and chat functions of the application, which were also not working properly.

Setting the design aside, the SoftLayer network looks healthy: there are no indications of a volumetric DDoS attack such as latency or jitter, and the network itself appears to be up and working just fine.

So what’s wrong?  Well, the c.whatsapp.net IP addresses are not answering on port 443 reliably.  Sometimes they open and function and sometimes they don’t.  That indicates one of three things is going on:

  • Application layer DDoS on port 443 (SSL) to their c.whatsapp.net host
  • Application bug
  • Extreme growth
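
“Not answering on port 443 reliably” is easy to measure: a plain TCP connect with a timeout, repeated a few times. A minimal sketch (the default port matches the discussion above; the timeout and attempt count are arbitrary choices of mine):

```python
import socket

def probe(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port completes in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def failure_rate(host: str, port: int = 443, attempts: int = 10) -> float:
    """Fraction of failed connects. A value strictly between 0.0 and 1.0
    matches the 'sometimes they open and sometimes they don't' behavior."""
    return sum(not probe(host, port) for _ in range(attempts)) / attempts
```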

Given that this has gone on all day, I would imagine a bug would have been fixed quickly.

So… Is it an application layer DDoS attack?  I don’t know.  The Facebook acquisition angered a lot of users and the timing of the outage looks pretty suspect, however, calling it a DDoS is still speculative.  The service has been stable for me for years.

If I were to guess: it’s a rapid-growth problem that helped them discover new limits in either their firewall hardware or their load balancers. Those tend to be the first things to break. Replacing or upgrading hardware like a load balancer in a matter of hours is typically not easy.

Regardless of whether it’s an application layer DDoS attack or just unprecedented growth, I am really worried about their design… It reminds me of the early days of Twitter.

P.S.:  I wish the team at WhatsApp the best of luck getting this fixed, whatever it is…  I miss chatting with my friends.

The European Cyber Army Has Bits

January 31st, 2014 by Barrett Lyon

After enough taunting, the European Cyber Army (ECA) launched a modest attack against blyon.com.  The traffic has yet to exceed 1 Gbps, and it’s comprised of a smorgasbord of attack methods:

Initially the attack came in as an HTTP HEAD and GET flood requesting different items from my site.  Shortly after, a DNS reflection attack and an ICMP reflection attack hit blyon.com as well.

The HEAD attack was directed to a single image with the User-Agent of “ICAP-IOD”.

The GET flood contained a User-Agent of “LOWC=@ECA_Legion&ID=1391196316226”.
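
If you run a server and want to check your own logs for the same traffic, matching on those two User-Agent strings is enough. A sketch assuming the common Apache/nginx “combined” log format (the regex is mine, not from any particular tool; adjust it for your server):

```python
import re
from collections import Counter

# Assumes the Apache/nginx "combined" log format, where the User-Agent
# is the final quoted field on each line.
UA_RE = re.compile(r'"[^"]*" \d{3} \d+ "[^"]*" "(?P<ua>[^"]*)"$')

# The two User-Agent markers observed in this attack.
ATTACK_UAS = ("ICAP-IOD", "LOWC=@ECA_Legion")

def count_attack_agents(lines):
    """Count log lines whose User-Agent contains an attack marker."""
    hits = Counter()
    for line in lines:
        m = UA_RE.search(line)
        if not m:
            continue
        for marker in ATTACK_UAS:
            if marker in m.group("ua"):
                hits[marker] += 1
    return hits
```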

Luckily I am the CTO/Founder of a DDoS defense company (Defense.Net), so an attack like this against my personal blog is really not a big deal. However, even a modest attack like this will make an unprepared or unprotected web site struggle.  If it was in fact directed at paser.gov or other small unprotected sites, they probably were impacted.

This is not a confirmation that the ECA launched the attacks against the targets they boast about.  The attack on me could have come from a random user sympathetic to the ECA; however, the ECA Twitter handle does appear as part of the User-Agent string in the attack hitting my server.

If you’re a server administrator at any of their alleged targets, contact me if you saw any of the User-Agents I saw.

Is the “European Cyber Army” Capable of Big DDoS Attacks?

January 31st, 2014 by Barrett Lyon

I follow what’s happening in the DDoS world very closely, and when I see banks go down for extended periods of time, that tells me someone has a large botnet. On Twitter, a group calling itself the “European Cyber Army” claimed credit for the attacks on January 29th. Their claim prompted me to do a little digging and Tweeting to learn more.

I wrote a blog post that was loaded with items that would intentionally irritate them. I wanted to see what kind of reaction I would get. It was not met with a warm reception and they began to threaten me on Twitter and hit me with a little tiny 6Mbps GET flood:

ECA_Legion: …We have a mind to destroy your website!

[I say nothing and an attack starts.]

BarrettLyon: It was a cute GET flood
ECA_Legion: thank you!
BarrettLyon: It didn’t do anything.
BarrettLyon: I guess I was expecting a real DDoS and not a cute one.
ECA_Legion: If we want the site to go down we will hit it! Right now we are busy on an important target!
ECA_Legion: At times we will threaten and never follow through! But like we said, hit us up when we aren’t busy and we will take it down!
BarrettLyon: Sounds like you’re just finding site outages and reporting them as if you did the DDoS. Your “attack” kinda proves my theory.
ECA_Legion: Believe what you what!
BarrettLyon: I thought you were going to attack me? That’s what you threatened me with right?
ECA_Legion: We did threaten to do that! But sadly your site isn’t injectable! Dammit!
BarrettLyon: What does an injection attack have to do with DDoS?
BarrettLyon: So all this #tangodown stuff is what I thought it was. #faildown
ECA_Legion: You doubt our DDoS abilities?
BarrettLyon: I’m pretty sure that’s what I said.
ECA_Legion: Then you will enjoy the upcoming attacks! Lulz
BarrettLyon: Okay cool, well… I’m going to dinner with my family. Have fun sending me “upcoming attacks”.

At that point I went to dinner, and I have not seen a single DDoS attack since. Meanwhile they keep Tweeting that they’re taking sites down and defacing sites with an injection attack.

They may be taking sites down with their 6Mbps GET flood, but I don’t think they’re doing it with a 200 Gbps-capable botnet.

So, that begs the question: Who is behind the big attacks?

Here Comes the European Cyber Army

January 30th, 2014 by Barrett Lyon
With the disappearance of the Izz ad-Din al-Qassam Cyber Fighters, DDoS attacks have not been at the top of the headlines for a few months. Well, a new group calling itself the “European Cyber Army” (@ECA_Legion or ECA) has been making some news. They claim to be targeting the US military and banks; however, based on their Twitter feed it appears they are claiming credit for site outages and passing them off as attacks.

They claim to have attacked and downed over 60 web sites, ranging from bankofamerica.com to Japanese retailers, theme parks, US military sites, and numerous foreign sites.

I found it odd they were targeting such a wide list of web sites, so I tweeted about the random list of hosts they were targeting. They responded directly to me with, “‪@BarrettLyon Casualties of warfare”.

What war they are fighting or starting is not exactly clear. They’ve posted a YouTube propaganda video, which basically declares they’re mad at nearly everything and everyone:

The Bank of America and Chase attacks made public news as the attacks clearly impacted their sites. The European Cyber Army tweeted the following statement:

Bank of America’s site was unresponsive at the time of the tweet, but it’s unclear if they were claiming responsibility for an outage or actually carried out the attack.

Following Bank of America, someone launched an attack against chase.com.  The Euro Cyber Army guys made the following statements on their Twitter account:

Some large attacks have happened (maybe not carried out by these guys), and they appear to have been extremely successful, at rates of around 190 Gbps.  I believe the actual attacks were accomplished with a derivative of Brobot, which is what the al-Qassam Cyber Fighters were using.  There are other rumors that they have some control over the IADAQ botnet, but that does not seem to be true.

To date, the would-be attacks appear to be sprinkled around, making a stir at each target: they take a site down for a few hours and then shift focus to a new one. They may be shifting targets to create pain without overexposing their botnet, or they may simply be reporting outages as attacks until each site comes back online.

Who are these guys?  Based on a pastebin post, they are a group of hackers from LeakSecurity (#LeakSec), possibly people affiliated with @OpFunKill and @oG_maLINKo.

Stay tuned for more updates.

UPDATE:  They didn’t like my blog post and threatened to attack me: “We have a mind to destroy your website!”  They did actually attack with a little 5Mbps GET flood, which was quickly shut off.

I responded with, “@ECA_Legion Sounds like you’re just finding site outages and reporting them as if you did the DDoS. Your ‘attack’ kinda proves my theory.”

The conversation ended with, “@BarrettLyon Believe what you what!”

Still no major attack.


As American Culture Shifts Online… Why Are We Okay with Second-World Internet Connectivity?

January 23rd, 2014 by Barrett Lyon

As a child, I was an early Internet user. There were still .arpa addresses attached to things, and from day one, I realized I was a consumer of the vast data on the Internet. I needed bandwidth to download, view, exchange, and work faster. And as a child with no job, the dream of having any high-speed access was a distant one. It shaped my career as I started my quest to have as much bandwidth as I needed. Over the course of the startups I have created, I always insisted that the offices connect with top-quality connectivity. The argument goes like this: We’re creating the world’s top new technology, why don’t we have access to it?

Spoiled over the years by having gigabit Ethernet to my desktop, I moved to Auburn, CA in the Sierra Nevada Foothills. Bay Area people may recognize the name because of the famous burger shack Ikeda’s, right up the street from me. It’s where my family lives, and I telecommute to work four days a week. But it’s difficult to telecommute when your home network connection is inconsistent in latency and available bandwidth, or doesn’t even work “most of the time.” It has hampered my VoIP calls, my research work, and my ability to do my job. Something had to change. I signed up for a business fiber service that’s equivalent in cost to a 90s T1 line.

So what is a Gigabit? And so what? Well, first off, it’s 1000 Mbps of bandwidth, and if it’s delivered over fiber, its latency is an order of magnitude lower than a cable modem or DSL. So I bought burstable bandwidth. I only needed about 10 Mbps of the total 1000, but the ISP allows me to use all 1000 if it’s available and nobody else is using it. It’s a great deal for me, and it’s really no skin off the carrier’s back.

BUT NOW WHAT?

Why should everyone have burstable 1 Gigabit or 10 Gigabit service? Because the world has changed. People now consume bandwidth on most of their devices: cars, TVs, AppleTVs, Google services, and more. It’s important to our daily lives. Lack of fast connectivity is like settling for tainted brown water from your local water utility, or having your lights shut off when you use an electric oven. People, we’re living in the stone age of the Internet and we need to progress!

One of the main impacts of switching to an Ethernet-based solution is your upload speed does not (and should not) impact your download speed. For years, ISPs have used this as a false marketing mechanism to differentiate “business class” services from “home services.” They needed a reason they could charge businesses a boatload of money for their Internet access, while providing something similar to home users for a fraction of the cost. The marketing decision matched well with the cable modem DOCSIS and DSL technology, and as a result, became the standard for home Internet services.

That bandwidth model worked well for the 90s, but the Internet has changed. With a real Internet connection, we can download as fast as most storage devices can store, and fully utilize cloud services. HD videos play instantly, while not impacting overall network performance. Web pages snap into place, and everything just works better. Beyond being an Internet consumer, with a real Internet connection you can create – as well as consume. Uploading and serving content gives a user the ability to be more than just a consumer. This is an ideological change. When it affects millions, the Internet and culture will be directly impacted. The Internet will not be made of servers and users anymore, because everyone will have the capacity to be both.

Imagine if the new gold standard of home Internet connectivity was full duplex (the same speed both ways at the same time) 1GbE. Those huge HD files off HD cameras would then find their way onto the web very quickly. Sharing content between friends would no longer be done via a third-party service like Dropbox. The Internet would depend more on the cloud as the Internet connection, reducing the bottleneck between cloud services and the user. In addition, person-to-person networks would become feasible. It would spark innovation. Startups that are network-based could be hosted from your garage. Bandwidth would no longer be expensive, and networks would have a new renaissance of growth.

Beyond just the ability to upload at a reasonable rate, we would see our home networks become more stable. We would have fewer angry calls to call centers because “the cable modem starts blinking a weird color when it’s raining outside.” Network jitter would be gone, and we’d no longer be consumers of a poor-quality service; we would have something we could trust to build a network-based society on.

All media companies should be pushing cable operators to switch everyone to gigabit Ethernet or faster. Why? Because customers will consume (buy) more content. Right now on a gigabit circuit, it takes less than three minutes to download an entire HD movie from iTunes. People would use services like Netflix much more because they would work so much better than anything else out there.
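
The back-of-envelope math behind that claim is worth showing. Assuming a 5 GB HD movie and an 80% efficiency factor for protocol overhead (both numbers are my assumptions, not iTunes figures):

```python
def transfer_time_seconds(size_gb: float, link_mbps: float,
                          efficiency: float = 0.8) -> float:
    """Seconds to move size_gb gigabytes over a link_mbps link.

    efficiency is an assumed fudge factor for TCP/HTTP overhead."""
    bits = size_gb * 8e9                      # gigabytes -> bits
    return bits / (link_mbps * 1e6 * efficiency)

print(transfer_time_seconds(5, 1000))  # gigabit: 50.0 seconds
print(transfer_time_seconds(5, 20))    # 20 Mbps cable: 2500.0 s, ~42 min
```

At gigabit the movie arrives in under a minute; at a typical 20 Mbps cable rate it takes the better part of an hour. That gap is the whole argument.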

So, what about the cost?

For an early adopter, it’s not easy. It’s expensive, because you have to buy what’s essentially a business-class or carrier-class service. However, for the provider, the cost is nothing compared to cable, and here’s why:

  • Physical copper cable costs more than fiber
  • Replacing faulty copper cable is expensive and should just be done with fiber
  • The equipment is inexpensive
  • Support costs go down due to fewer interruptions
  • Service quality increases

What about the arguments that uploaded content costs ISPs more, or that it will cost more to upgrade their networks? To that I say: I don’t see any telecommunications companies filing for bankruptcy. Innovation, social change, and inspiration bring customers. Their networks can support bi-directional communications. Companies like Comcast would love to make you think that it’s a huge burden to carry traffic out of their network, but since Ethernet is bi-directional by nature, if Comcast can support a given in-bound bandwidth to their customers, they can rationally support the same out-bound bandwidth.

In 2009, the American Recovery and Reinvestment Act (Recovery Act) was chartered by President Obama to create a National Broadband Plan. The plan itself is about as effective as the HealthCare.gov website. However, unlike HealthCare.gov, the plan actually does not do anything for the consumer. It lays out a lot of legal liability for carriers such as Comcast to reduce their operating costs, but falls horribly short of providing any real guidelines that will impact the future. The plan states, “Goal No. 1: At least 100 million U.S. homes should have affordable access to actual download speeds of at least 100 megabits per second and actual upload speeds of at least 50 megabits per second.” It suggests that should happen within the next DECADE. Ten years from now, my thermostat will have more bandwidth than that (actually, it does already). Hmm, okay, in ten years my phone will have more bandwidth than that. Hmm, it almost does today. What they’ve done is spend a huge amount of time and money to create short-sighted goals. It should have read, “By X date, all Americans with at least X connectivity will be provided full bidirectional service of at least 1000 Mbps.”

When it comes down to it, consumers need to demand more, not just from their ISPs but from elected officials, who have shown a lack of any true leadership. This country should be a leader in broadband, not a mess of red tape lacking any vision.

Our first “breakdown” with our Tesla Model S

September 20th, 2013 by Barrett Lyon

We’ve been purely ecstatic with our Tesla since the moment we decided to go forward with it. It really is the car that everyone boasts about. But as with any new car, I was still somewhat worried about the day it would have a bad day. That bad day came last Monday, when we parked at the local airport and then tried to drive home. The car would not recognize the keys and would not turn on.  It just sat there with the AC on and the stereo going, displaying “Key Not Inside”.

We called Tesla’s support line and ran through a bunch of options to engage the keys again; none worked. The car was just stuck where we parked it. They spoke with their engineers and decided the car needed a “cluster reboot”, which they issued remotely to the console cluster. Apparently the “cluster” is the set of gauges behind the steering wheel, controlled by a dedicated computer, and that computer also controls the key system.

After a few hours of fooling around and being on the phone, the keys were finally working again. However, this begs the question: how the hell did that happen?

This is where Tesla got a little vague and promised it would never happen again. I got into a semantic disagreement with them over the words “glitch” and “software bug”. They finally gave up and said, “We don’t know why this happened; it’s only happened to 3 other people I have spoken with.” So this has happened to more people than just us.

Finally, they escalated the issue to an engineer and the answer was, “The keys are highly sensitive to radio frequencies, and at airports there are a lot of different things going on with radio frequencies.”

This answer did not really help, and I could not stop thinking about a situation where someone figures out how to deactivate Tesla keys with a simple radio device and then disables all of the cars parked at a Supercharger station.

It also made me think hard about what a botched software update or decent software hack could do to thousands of cars.

Anyhow, we were told it shouldn’t happen again, but if it does, we can simply “reboot the cluster” and it would be fine.

My answer to them was, “Isn’t that how Microsoft told people to deal with their bugs?”


PS:  I still love the car and yes… they were able to remotely fix it.  :)