Tuesday, December 19, 2017

Consumer IoT Security v1.01

They say charity begins at home; IoT security probably should too. The number of Internet-enabled and connected devices we populate our homes with grows year on year - yet, with each new device we connect, the less confident we become in our home security.

The TV news and online newspapers on the one hand extol the virtues of each newly launched Internet-connected technology, yet on the other tell the tale of how your TV is listening to you and how the animatronic doll your daughter plays with is spying on her while she sleeps.

To be honest, it amazes me that no consumer networking company has been successful in securing this scary piece of IoT real estate - and in winning over the hearts and minds of family IT junkies at the same time.

With practically all these IoT devices speaking over WiFi, and the remainder (let's guess at 10% of home deployments) using Zigbee, Z-Wave, Thread, or WeMo, a mix of current-generation smart firewall, IPS, and behavioral log analytics would logically remediate well over 99% of the Internet attacks these IoT devices are likely to encounter, and 90% of the remaining threats conducted from within the local network or residential airwaves.


Why is it that we haven't seen a "standard" WiFi home router employing these security capabilities in a meaningful way - and marketed in similar fashion to the ads we see for identity protection, insurance companies, and drugs (complete with disclaimers if necessary)?

When I look at the long list of vulnerabilities disclosed weekly for the IoT devices people are installing at home, it is rare to encounter one that couldn't either have an IPS rule constructed to protect it or be covered by generic attack-vector rules (such as those for password brute forcing).

If you also built a current (i.e. 2017) generation of ML-powered log analytics and behavioral detection into the home WiFi router, you could easily shut out attack and abuse vectors such as backdoor voyeurism, bitcoin mining, and stolen credential use.
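
To illustrate how simple the detection logic can be, here's a minimal sketch of the kind of brute-force detection a router-resident analytics engine could run. The log format, window, and threshold are all made-up illustrations, not any vendor's actual values:

```python
from collections import defaultdict

# Hypothetical log format: (timestamp_seconds, source_ip, success_flag).
WINDOW = 60          # sliding window, in seconds
MAX_FAILURES = 5     # failed logins per window before alerting

def brute_force_alerts(events):
    """Return source IPs exceeding MAX_FAILURES failed logins within WINDOW."""
    failures = defaultdict(list)
    alerts = set()
    for ts, ip, ok in sorted(events):
        if ok:
            continue
        bucket = failures[ip]
        bucket.append(ts)
        # Drop failures that have aged out of the sliding window.
        while bucket and ts - bucket[0] > WINDOW:
            bucket.pop(0)
        if len(bucket) > MAX_FAILURES:
            alerts.add(ip)
    return alerts
```

A real engine would of course read from syslog or the router's own auth log rather than an in-memory list, but the rule itself is no more complicated than this.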

Elevating home IoT security to v1.01 seems so trivial.

The technologies are available, the threat is ever present, the desire for a remedy is there, and I'd argue the money is there too. Anyone installing an app-controllable light bulb, door lock, or coffee maker has obviously already invested several hundred dollars in their WiFi kit, Internet cable/fiber service, laptop(s), and cell phone(s) - so the incremental hit of $100-200 on the WiFi router's RRP, plus a $9.99 or $19.99 monthly subscription for IPS signatures, trained classifiers, and behavioral analysis updates, seems like a no-brainer.

You'd think that Cisco/Linksys, D-Link, Netgear, etc. would have solved this problem already... that IoT security (at home) would be "in the bag" and we'd be at v1.01 status by now. Maybe market education is lagging, and a focused advertising campaign centered on securing your electronic home would push the market along? Or perhaps these "legacy" vendors need an upstart company to come along and replace them?

Regardless, securing IoT at home is not a technologically challenging problem. It has been solved many times with different tools within the enterprise (for many years), and the limited scope and sophistication of home networking makes the problem much easier to deal with.

I hope some intelligent security vendor can come to the fore with the right mix of security technology. Yes, it costs R&D effort to maintain signatures, train classifiers, and broaden behavioral detection scenarios, but even if only 1% of the approximately 150 million homes that have WiFi routers today paid a $9.99 monthly subscription for updates, that $15m per month would be the envy of 95% of security vendors around the world.
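
The back-of-the-envelope arithmetic behind that figure:

```python
# Back-of-the-envelope version of the revenue claim above.
homes_with_wifi = 150_000_000   # rough figure cited in the post
adoption_rate = 0.01            # only 1% of those homes subscribe
monthly_fee = 9.99              # dollars per month

monthly_revenue = homes_with_wifi * adoption_rate * monthly_fee
print(f"${monthly_revenue:,.0f} per month")  # just shy of $15m
```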

-- Gunter

[Note to (potential) vendors that want to create such a product or add such capabilities to an existing product, I'd happily offer up my expertise, advice, and contact-book to help you along the way. I think this is a massive hole in consumer security that is waiting to be filled by an innovative company, and will gladly help where I can.]

Sunday, December 17, 2017

Deception Technologies: Deceiving the Attacker or the Buyer?

Deception technologies have come into vogue over the last three-ish years, with more than a dozen commercial vendors and close to a hundred open source products to choose from. Solutions range from local host canary file monitoring through to autonomous, self-replicating, and dynamic copies of the defender's network operating like an endless hall of mirrors.


The technologies employed for deception purposes are increasingly broad - but the ultimate goal is for an attacker to be deceived into tripping over or touching a specially deposited file, user account, or networked service and, in doing so, sounding an alarm so that the defenders can start to... umm... well..., often it's not clear what the defender is supposed to do. And that's part of the problem with the deception approach to defense.

I'm interested in, but deeply cautious about, the claims of deception technology vendors - and so should you be. It's incredibly difficult to justify their expense and understand their overall value when incorporated into a defense-in-depth strategy.

There have been many times over the last couple of decades I have recommended to my clients and businesses a quick and dirty canary solution. For example, adding unique user accounts that appear at the start and end of your LDAP, Active Directory, or email contacts list - such that if anyone ever emails those addresses, you know you've been compromised. And similar canary files or shares for detecting the presence of worm outbreaks. But, and I must stress the "but", those solutions only apply to organizations that have not invested in the basics of network hygiene and defense in depth.
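
A hypothetical sketch of that canary-account trick - the domain, address pattern, and check are all illustrative, not from any product:

```python
import secrets

DOMAIN = "example.com"  # placeholder domain for the canary addresses

def make_canaries():
    """Two addresses that sort to the start and end of an alphabetised list."""
    token = secrets.token_hex(4)
    # "aaa_" sorts first, "zzz_" sorts last in most directory listings.
    return [f"aaa_{token}@{DOMAIN}", f"zzz_{token}@{DOMAIN}"]

def check_recipients(recipients, canaries):
    """Any mail addressed to a canary suggests the address book was harvested."""
    return sorted(set(recipients) & set(canaries))
```

Wire `check_recipients` into your mail gateway's recipient log and an alert on any hit gives you a free, if crude, compromise tripwire.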

Honeypots, honeynets, canaries, and deception products are HIGHLY prone to false positives. Vendors love to say otherwise, but the practical reality is that there's a near-infinite number of everyday things that'll set them off - in whole or in part. For example:

  • Regular vulnerability scanning,
  • Data backups and file recovery,
  • System patching and updates,
  • Changes in firewall or VPN policies,
  • Curious employees,
  • Anti-virus scanners and suite updates,
  • On-premise enterprise search systems,
  • Cloud file repository configuration changes and synchronization.
The net result being that you either ignore or turn off the system after a short period of time, or you swell your security team's ranks with headcount to continually manage and tune the system(s).

If you want my honest opinion though, I'd have to say that the time for deception-based products has already passed.

If you're smart, you've already turned on most of the logging features of your desktop computers, laptops, servers, and infrastructure devices, and you're capturing all file, service, user, and application access attempts. You're therefore already capturing the raw information necessary to detect any threat your favorite deception technology proposes to identify for you. Obviously, the trick is being able to process those logs for anomalies and to respond to the threat.

This year alone, the number of automated log analytics platforms and standalone products employing AI and machine learning that are capable of real-time (or, worst case, "warm") threat detection has grown to outnumber all the tools in the deception solution category - and they do it cheaper, more efficiently, and with less human involvement.

Deception vendors were too slow. The log analytics vendors incorporated more advanced detection systems and user behavioral analytics, were better able to mitigate the false positive problem, and didn't require additional investment in the host agents and network appliances that the deception technologies needed to collect their data.

As an enterprise security buyer, I think you can forget about employing deception technologies and instead invest in automated log analytics. Not only will you cover the same threats, but the log analytics platforms will continue to innovate faster and cover a broader spectrum of threats and SecOps tasks without the same propensity for false positives.

-- Gunter Ollman

Saturday, December 16, 2017

What would you do if...

As a bit of a "get to know your neighbor" or team building exercise, have you ever been confronted with one of those "What would you do if..." scenarios?

My socially awkward and introverted nature (through some innate mechanism of self-preservation) normally helps me evade such team building exercises, but every so often I do get caught out and am forced to offer up an answer to the posed scenario.

The last couple of times the posed question (or a permutation thereof) has been "What would you do if you were guaranteed to be financially secure and could choose to do anything you wanted to do - with no worries over money?" i.e. money is no object. It surprises me how many people will answer along the lines of building schools in Africa, working with war veterans, helping the homeless, etc.

Perhaps it's a knee-jerk response if you haven't really thought about it and you reactively offer something you expect your new-found group of friends and colleagues will appreciate, or maybe it is genuine... but to me, such a thought seems so shallow.

I've often dwelled on and retrospectively thought about the twists and turns of my career, my family life, and where I screwed up more than other times, and, along the way, I have thought many, many times about what I'd do if I were ever so financially secure that I could choose to do anything.

Without doubt (OK, maybe a little trepidation), I'd go back to university and pursue a degree and career in bio-medical engineering research. But I don't have any desire to be a doctor, a surgeon, or a pharmacist.

I'd cast away my information security career to become someone driving research at the forefront of medicine - in the realm of tissue, organ, and limb regrowth... and beyond. And, with enough money, I'd build a research lab to pursue and lead this new area of research.

You see, I believe we're at the cusp of being able to regrow/correct many of the disabilities that limit so many lives today. We're already seeing new biomedical technologies enabling children deaf or blind from birth to hear their mother's voice or see their mother's face for the first time. It's absolutely wonderful, and if anyone who's ever seen a video of the first moments a child born with such disabilities experiences such a moment hasn't choked up and felt the tears themselves, then I guess we're cut from different cloth.

But that fusion of technology in solving these disabilities, like the attachment of robotic limbs to amputees, is (in my mind) still only baby steps; not towards the cyborgs of science fiction fame, but towards a world of biological regrowth and augmentation through biological means.

Today, we see great steps towards the regrowth of ears, hearts, kidneys, bone, and skin. In the near future... the future I would so dearly love to learn, excel, and help advance, lies in what happens next. We'll soon be able to regrow any piece of the human body. Wounded warriors will eventually have lost limbs restored - not replaced with titanium and carbon-fiber fabricated parts.

I believe that the next 20 years of bio-medical engineering research will cause medicine to advance more than all medical history previously combined. And, as part of that journey, within the 30 years after that (i.e. 21-50 years from now), I believe in the potential of that science not only to allow humans to effectively become immortal (assuming faulty parts are periodically replaced, until our very being finally gives up due to boredom), but also to augment ourselves in many new and innovative ways. For example, using purely biological means, enabling our eyes to view a much broader slice of the electromagnetic spectrum, at orders of magnitude greater sensitivity than today, with "built-in" zoom.

Yes, it sounds fantastical, but that's due in part to the opportunities that lie ahead in such a new and exciting field, and it's why I'd choose to drop everything and enter it "...if you were guaranteed to be financially secure and could choose to do anything you wanted to do - with no worries over money."

-- Gunter

Sunday, January 15, 2017

Allowing Vendors VPN access during Product Evaluation

For many prospective buyers of the latest generation of network threat detection technologies it may appear ironic that these AI-driven learning systems require so much manual tuning and external monitoring by vendors during a technical “proof of concept” (PoC) evaluation.

Practically all vendors of the latest breed of network-based threat detection technology require varying levels of network accessibility to the appliances or virtual installations of their product within a prospect's (and future customer's) network. Typical types of remote access include:

  • Core software updates (typically a pushed out-to-in update)
  • Detection model and signature updates (typically a scheduled in-to-out download process)
  • Threat intelligence and labeled data extraction (typically an ad hoc per-detection in-to-out connection)
  • Cloud contribution of abstracted detection details or meta-data (often a high frequency in-to-out push of collected data)
  • Customer support interface (ad hoc out-to-in human-initiated supervisory control)
  • Command-line technical support and maintenance (ad hoc out-to-in human-initiated supervisory control)
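
One rough way for a buyer to reason about these channels is to tabulate the direction and initiator of each, and flag anything inbound. A sketch with entries mirroring the list above (the flag-everything-out-to-in risk rule is a deliberate over-simplification):

```python
# Each tuple: (purpose, direction, initiator).
ACCESS_TYPES = [
    ("core software updates",        "out-to-in", "vendor push"),
    ("detection model updates",      "in-to-out", "scheduled download"),
    ("threat intel extraction",      "in-to-out", "per-detection"),
    ("cloud meta-data contribution", "in-to-out", "frequent push"),
    ("customer support interface",   "out-to-in", "human"),
    ("CLI support and maintenance",  "out-to-in", "human"),
]

def review_access(access_types):
    """Flag inbound channels: each one widens the buyer's attack surface."""
    return [purpose for purpose, direction, _ in access_types
            if direction == "out-to-in"]
```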

Depending upon the product, the vendor, and the network environment, some or all of these types of remote access will be required for the solution to function correctly. But which are truly necessary and which could be used to unfairly manually manipulate the product during this important evaluation phase?

To be flexible, most vendors provide configuration options that control the type, direction, frequency, and initialization processes for remote access.

When evaluating network detection products of this ilk, the prospective buyer needs to review each remote access option very carefully and fully understand the product's reliance upon, and the efficacy associated with, each one. Every remote access option allowed is (unfortunately) an additional hole introduced into the buyer's defenses. Knowing this, it is unfortunate that some vendors will seek to downplay their reliance upon certain remote access requirements - especially during a PoC.

Prior to conducting a technical evaluation of the network detection system, buyers should ask the following types of questions to their prospective vendor(s):

  • What is the maximum period needed for the product to have learned the network and host behaviors of the environment it will be tested within?
  • During this learning period and throughout the PoC evaluation, how frequently will the product's core software and detection models typically be updated?
  • If no remote access is allowed to the product, how long can the product operate before losing detection capabilities and which detection types will degrade to what extent over the PoC period?
  • If remote interactive (e.g. VPN) control of the product is required, precisely what activities does the vendor anticipate to conduct during the PoC, and will all these manipulations be comprehensively logged and available for post-PoC review?
  • What controls and data segregation are in place to secure any meta-data or performance analytics sent by the product to the vendor’s cloud or remote processing location? At the end of the PoC, how does the vendor propose to irrevocably delete all meta-data from their systems associated with the deployed product?
  • If testing is conducted during a vital learning period, what attack behaviors are likely to be missed and may negatively influence other detection types or alerting thresholds for the network and devices hosted within it?
  • Assuming VPN access during the PoC, what manual tuning, triage, or data clean-up processes are envisaged by the vendor – and how representative will it be of the support necessary for a real deployment?

It is important that prospective buyers understand not only the number and types of remote access necessary for the product to correctly function, but also how much "special treatment" the PoC deployment will receive during the evaluation period - and whether this will carry over to a production deployment.

As vendors strive to battle their way through security buzzword bingo in this early age of AI-powered detection technology, remote control and manual intervention in the detection process (especially during the PoC period) may be akin to temporarily subscribing to a Mechanical Turk solution; something to be very careful of indeed.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

Friday, January 13, 2017

Machine Learning Approaches to Anomaly and Behavioral Threat Detection

Anomaly detection approaches to threat detection have traditionally struggled to make good on the efficacy claims of vendors once deployed in real environments. Rarely have the vendors lied about their product's capability – rather, the examples and stats they provide are typically for contrived and isolated attack instances, not representative of a deployment in a noisy and unsanitary environment.

Where anomaly detection approaches have fallen flat and been cast in a negative value context is primarily due to alert overload and “false positives”. “False positives” deserves the quotation marks because (in almost every real-network deployment) the anomaly detection capability is working and alerting correctly – however, the anomalies being reported often have no security context and are unactionable.

Tuning is a critical component of extracting value from anomaly detection systems. While “base-lining” sounds rather dated, it is an important operational component of success. Most false positives and nuisance alerts are directly attributable to missing or poor base-lining procedures that would have tuned the system to the environment it had been tasked to spot anomalies in.

Assuming an anomaly detection system has been successfully tuned to an environment, there is still a gap in actionability that needs to be closed. An anomaly is just an anomaly, after all.

Closure of that gap is typically achieved by grouping, clustering, or associating multiple anomalies together into a labeled behavior. These behaviors in turn can be classified in terms of risk.
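
A minimal sketch of that grouping step: time-windowed clustering of anomalies per host, with a hypothetical label table standing in for trained classifications (the window, anomaly names, and label are all illustrative):

```python
WINDOW = 300  # anomalies on the same host within 300s join one "behavior"

# Hypothetical mapping from anomaly clusters to a labeled behavior.
BEHAVIOR_LABELS = {
    frozenset({"c2_beacon", "sql_port_probe", "bulk_exfil"}):
        "malware-based database hack",
}

def group_anomalies(anomalies):
    """anomalies: time-sorted list of (timestamp, host, anomaly_type)."""
    groups = []
    for ts, host, kind in anomalies:
        last = groups[-1] if groups else None
        if last and last["host"] == host and ts - last["last"] <= WINDOW:
            last["kinds"].add(kind)
            last["last"] = ts
        else:
            groups.append({"host": host, "last": ts, "kinds": {kind}})
    # Attach a label where the cluster matches a known behavior.
    for g in groups:
        g["label"] = BEHAVIOR_LABELS.get(frozenset(g["kinds"]),
                                         "unclassified behavior")
    return groups
```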

While anomaly detection systems dissect network traffic or application hooks and memory calls using statistical feature identification methods, the advance to behavioral anomaly detection systems requires the use of a broader mix of statistical features, meta-data extraction, event correlation, and even more base-line tuning.

Because behavioral threat detection systems require training and labeled detection categories (i.e. threat alert types), they suffer many of the same operational ill effects as anomaly detection systems. Tuned too tightly, they are less capable of detecting threats than an off-the-shelf intrusion detection system (network NIDS or host HIDS). Tuned too loosely, they generate unactionable alerts more consistent with a classic anomaly detection system.

The middle ground has historically been difficult to achieve. Which anomalies are the meaningful ones from a threat detection perspective?

Incorporating machine learning tooling into the anomaly and behavioral detection space appears to be highly successful in closing the gap.

What machine learning brings to the table is the ability to observe and collect all anomalies in real-time, make associations to both known (i.e. trained and labeled) and unknown or unclassified behaviors, and to provide “guesses” on actions based upon how an organization’s threat response or helpdesk (or DevOps, or incident response, or network operations) team has responded in the past.

Such systems still require baselining, but are expected to dynamically reconstruct their baselines as they learn over time how the human operators respond to the “threats” they detect and alert upon.

Machine learning approaches to anomaly and behavioral threat detection (ABTD) provide a number of benefits over older statistical approaches:

  • A dynamic baseline ensures that as new systems, applications, or operators are added to the environment they are “learned” without manual intervention or superfluous alerting.
  • More complex relationships between anomalies and behaviors can be observed and eventually classified; thereby extending the range of labeled threats that can be correctly classified, have risk scores assigned, and prioritized for remediation for the correct human operator.
  • Observations of human responses to generated alerts can be harnessed to automatically reevaluate the risk and prioritization of detections and events. For example, three behavioral alerts are generated for different aspects of an observed threat (e.g. external C&C activity, lateral SQL port probing, and high-speed data exfiltration). The human operator associates and remediates them together and uses the label “malware-based database hack”. The system now learns that clusters of similar behaviors and sequencing are likely to be classified and remediated the same way – so in future the system can assign a risk and probability to the newly labeled threat.
  • Outlier events can be understood in the context of typical network or host operations – even if no “threat” has been detected. Such capabilities prove valuable in monitoring the overall “health” of the environment being monitored. As helpdesk and operational (non-security) staff leverage the ABTD system, it also learns to classify and prioritize more complex sanitation events and issues (which may be impeding the performance of the observed systems or indicate a pending failure).
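
The dynamic baseline idea in the first bullet can be sketched with a simple exponentially weighted mean and variance. The alpha and the 3-sigma band are illustrative choices, not any vendor's actual algorithm:

```python
class DynamicBaseline:
    """Self-adjusting baseline: flag observations far outside the learned band."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # how quickly the baseline tracks drift
        self.mean = None
        self.var = 0.0

    def observe(self, x):
        """Update the baseline and return True if x looks anomalous."""
        if self.mean is None:            # first sample seeds the baseline
            self.mean = x
            return False
        deviation = x - self.mean
        # 3-sigma test; early samples may flag while variance is warming up.
        anomalous = self.var > 0 and deviation * deviation > 9 * self.var
        # Learn the new sample either way, so the baseline keeps tracking.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var
                                       + self.alpha * deviation * deviation)
        return anomalous
```

Because the spike itself is absorbed into the running statistics, a new "normal" is eventually learned without manual re-baselining - which is exactly the property the bullet above describes.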

It is anticipated that use of these newest generation machine learning approaches to anomaly and behavioral threat detection will not only reduce the noise associated with real-time observations of complex enterprise systems and networks, but also cause security to be further embedded and operationalized as part of standard support tasks – down to the helpdesk level.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

(first published January 13th - "From Anomaly, to Behavior, and on to Learning Systems")

Friday, December 23, 2016

Body Worn Camera Technologies – Futures and Security

“Be careful what you wish for” is an appropriate adage for the flourishing use and advancement of body worn camera (BWC) technologies. As police forces around the world adapt to increased demands for accountability – where every decision, reaction, and word can be analyzed in post-event forensic fashion – the need and desire to equip each police or federal agent with a continuously recording camera has grown.

There are pros and cons to every technology – both from technical capability and societal changes. The impartial and continuous recording of an event or confrontation places new stresses on those whose job is to enforce the thousands of laws society must operate within on a daily basis, in the knowledge that each interpretation and action could be dissected in a court of law at some point in the future. Meanwhile, “offenders” must assume that each action – hostile or otherwise – could fall afoul of some hitherto unknown law in fully recorded technicolor.

Recently the National Institute of Justice released a market survey on Body Worn Camera Technologies. There are over 60 different BWCs specifically created for law enforcement use and the document provides information on the marketed capabilities of this relatively new class of technology.

The technological features of the current generation of BWCs are, overall, quite rudimentary - given limitations of battery power, processing capabilities, and network bandwidth. There is however a desire by the vendors to advance the technology substantially; not just in recording capability, but in areas such as facial recognition and cloud integration.

Today’s generation of BWCs truly are the 1.0 version of a policing platform that will evolve rapidly over the coming decade.

I’ve had a chance to look a little closer at the specifications and capabilities of today’s BWC solutions and have formulated some thoughts on how these BWC platforms will likely advance over the coming years (note that some capabilities already exist within specialized military units around the world – and will be easy additions to the BWC platform once production costs fall):
  1. Overcoming the bandwidth problem to allow real-time streaming and remote analysis of the video data. As cellular capabilities increase and 4G/5G becomes cheaper and more reliable in metro centers, “live action” can be passed to a law enforcement SOC (just like existing CCTV capabilities). In cases where such cellular technology isn’t reliable, or where multiple law enforcement officers are working in close geographic proximity, mobile cellular towers (e.g. as a component of the police vehicle) will likely serve as the local node – offering higher definition and longer recording possibilities, and remote SOC “dial-in” to oversee operations with minimal bandwidth demands.
  2. Cloud integration of collected facial recognition data. As the video processing capabilities of the BWC improve, it will be possible to create unique codifications of the faces being recorded. This facial recognition data could then be relayed to the cloud for matching against known offender databases, or for geographic tracking of individuals (without previously knowing their names – matched instead against government-issued photo IDs, such as driver license or passport images). While the law enforcement officer may not have immediately recognized the face, or it may have been only a second’s passing glimpse, a centralized system could alert the officer to the person’s presence. In addition, while an officer is questioning or detaining a suspect, facial recognition can be used to confirm their identity in real time.
  3. BWC, visor, and SOC communication integration. As BWCs transition from a “passive recording” system in to a real-time integrated policing technology, it is reasonable to assume that advancements in visual alerting will be made – for example a tactical visor that presents information in real time to the law enforcement officer – overlaying virtual representations and meta-data on their live view of the situation. Such a technology advance would allow for rapid crowd scanning (e.g. identifying and alerting of wanted criminals passing through a crowd or mall), vehicles (e.g. license plate look-up), or notable item classification (e.g. the presence of a firearm vs replica toy).
  4. Broad spectrum cameras and processing. The cameras used in today’s BWC technology are typically limited to standard visible frequencies, with some offering low-light recording capabilities. It is reasonable to assume that a broader spectrum of frequency coverage will expand what can be recorded and determined using local or cloud-based processing. Infrared recording (e.g. enabling heat mapping) could help identify sick or ailing detainees (e.g. a bird flu outbreak victim, or the hypothermic state of a rescued person), as well as provide additional facial recognition capabilities independent of facial coverings (e.g. beard, balaclava, glasses) – along with improved capabilities in night-time recording or (when used with a visor or ocular accessory) for tracking a runaway.
  5. Health and anxiety measurement. Using existing machine learning and signal processing techniques it is possible to measure the heart rate variability (HRV) from a recorded video stream. As the per-unit compute power of BWC devices increase, it will be possible to accurately measure the heart rate of an individual merely by focusing on their face and relaying that to the law enforcement officer. Such a capability can be used to identify possible health issues with the individual, recent exertions, or anxiety-related stresses. Real-time HRV measurements could aid in determining whether a detainee is lying or needs medical attention. Using these machine learning techniques, HRV can be determined even if the subject is wearing a mask, or if only the back of the head is visible.
  6. Hidden weapon detection. Advanced signal processing and AI can be used to determine whether an object is hidden on a moving subject based on fabric movements. As a clothed person moves, the fabrics used in their clothing fold, slide, oscillate, and move in many different ways. AI systems can be harnessed to analyze frame-by-frame movements, identify hard points and layered stress points, and outline the shape and density of objects or garments hidden or obscured by the outermost visible layer of clothing. Pattern matching systems could (in real-time) determine the size, shape, and relative density of the weapon or other hidden element on the person. In its most basic form, the system could verbally alert the BWC user that the subject has a holstered gun under the left breast of their jacket, or a bowie knife taped to their right leg. With a more advanced BWC platform (as described in #3 above), a future visor may overlay the accumulated weapon and hard-point detection on the law enforcement officer’s view of the subject – providing a pseudo x-ray vision (but not requiring any active probing signals).
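
The HRV-from-video idea in #5 rests on remote photoplethysmography: skin brightness oscillates faintly with each pulse. A toy sketch using a synthetic per-frame brightness trace (a real system would extract the signal from skin-region pixels; the 1.2 Hz / 72 bpm source here is made up):

```python
import math

FPS = 30        # camera frames per second
SECONDS = 5     # length of the analysis window

def synthetic_brightness(pulse_hz=1.2):
    """Fake per-frame skin brightness: constant plus a faint pulse ripple."""
    n = FPS * SECONDS
    return [100 + 0.5 * math.sin(2 * math.pi * pulse_hz * t / FPS)
            for t in range(n)]

def estimate_bpm(signal, fps=FPS):
    """Pick the dominant frequency via a naive discrete Fourier transform."""
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):           # skip the DC component
        re = sum(c * math.cos(2 * math.pi * k * t / n)
                 for t, c in enumerate(centred))
        im = sum(c * math.sin(2 * math.pi * k * t / n)
                 for t, c in enumerate(centred))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n * 60         # frequency bin -> beats per minute
```

Heart rate *variability* then comes from tracking how that estimate drifts beat-to-beat over successive windows; this sketch only recovers the average rate.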

Given the state of current and anticipated advances in camera performance, Edge Computing capability, broadband increases, and smart-device inter-connectivity over the coming decade, it is reasonable to assume that the BWC technology platform will incorporate most, if not all, of the above listed capabilities.

As video evidence from BWC becomes more important to successful policing, it is vital that a parallel path for data security, integrity, and validation of that video content be advanced.

The anti-tampering capabilities of today's BWC systems are severely limited. Given the capabilities of current generation off-the-shelf video editing suites, manipulation of video can be very difficult, if not impossible, to detect. These video editing capabilities will continue to advance. Therefore, for trust in BWC footage to remain (and ideally grow), new classes of anti-tamper and frame-by-frame signing will be required – along with advanced digital chain-of-custody tracking.


Advances in, and commercialization of, blockchain technology would appear at first glance to be ideally suited to digital chain-of-custody tracking.
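
Frame-by-frame signing need not wait for blockchain, though: a hash chain of per-frame MACs already gives tamper evidence, since altering any one frame invalidates every tag that follows. A sketch - in a real device the key would live in tamper-resistant hardware, which is the hard part glossed over here:

```python
import hashlib
import hmac

KEY = b"device-secret-key"  # placeholder; real devices need protected key storage

def sign_frames(frames, key=KEY):
    """Chain per-frame HMACs: each tag covers the frame plus the previous tag."""
    chain = []
    prev = b"genesis"
    for frame in frames:
        tag = hmac.new(key, prev + frame, hashlib.sha256).digest()
        chain.append(tag)
        prev = tag
    return chain

def verify_frames(frames, chain, key=KEY):
    """Recompute the chain; any edited, dropped, or reordered frame breaks it."""
    return chain == sign_frames(frames, key)
```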

Wednesday, December 21, 2016

Edge Computing, Fog Computing, IoT, and Securing them All

The oft used term “the Internet of Things” (IoT) has expanded to encapsulate practically any device (or “thing”) with some modicum of compute power that in turn can connect to another device that may or may not be connected to the Internet. The range of products and technologies falling in to the IoT bucket is immensely broad – ranging from household refrigerators that can order and restock goods via Amazon, through to Smart City traffic flow sensors that feed navigation systems to avoid jams, and even implanted heart monitors that can send emergency updates via the patient’s smartphone to a cardiovascular surgeon on vacation in the Maldives.

The information security community – in fact, the InfoSec industry at large – has struggled and mostly failed to secure the “IoT”. This does not bode well for the next evolutionary advancement of networked compute technology.

Today’s IoT security problems are caused and compounded by some pretty hefty design constraints – power consumption, physical size and shock resistance, environmental exposure, per-unit cost, and the manufacturer’s overall security knowledge and development capability.
The next evolutionary step is already underway – and it exposes a different kind of threat and attack surface than IoT does.

As each device we use, and each component we incorporate into our products and services, becomes smart, there is a growing need for a “brain of brains”. In most technology use cases it makes no sense for every smart device to connect independently to the Internet and expect a cloud-based system to make sense of it all and control it.

It’s simply not practical for every device to use the cloud the way smartphones do – sending everything to the cloud to be processed, having their data stored in the cloud, and having the cloud return the processed results back to the phone.

Consider the coming generation of automobiles. Every motor, servo, switch, and meter within the vehicle will be independently smart – monitoring the device’s performance, configuration, optimal tuning, and fault status. A self-driving car needs to instantaneously process this huge volume of data from several hundred devices. Passing it to the cloud and back again just isn’t viable. Instead the vehicle needs to handle its own processing and storage capabilities – independent of the cloud – yet still be interconnected.

The concepts behind this shift in computing power and intelligence are increasingly referred to as “Fog Computing”. In essence, computing nodes closest to the collective of smart devices within a product (e.g. a self-driving car) or environment (e.g. a product assembly line) must be able to handle the high volumes and velocity of data generation, and provide services that standardize, correlate, reduce, and control the data elements that will be passed to the cloud. These smart(er) aggregation points are in turn referred to as “Fog Nodes”.
(Diagram source: Cisco)
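As a toy illustration of the “standardize, correlate, reduce” role of a Fog Node, the sketch below (function and field names are invented for illustration) collapses batches of raw sensor samples into compact summary records before anything is sent to the cloud.

```python
from statistics import mean

def fog_reduce(sensor_id, readings, batch_size=10):
    """Summarize windows of raw readings from one sensor into compact
    records for the cloud, instead of shipping every individual sample."""
    summaries = []
    for i in range(0, len(readings), batch_size):
        batch = readings[i:i + batch_size]
        summaries.append({
            "sensor": sensor_id,
            "samples": len(batch),
            "min": min(batch),
            "max": max(batch),
            "mean": round(mean(batch), 2),
        })
    return summaries
```

Ten samples in, one record out: the cloud still sees trends and anomalies, but the Fog Node absorbs the raw data volume and velocity locally.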
In evolutionary terms, this means that computing power is shifting to the edges of the network. Centralization of computing resources and processing within the Cloud revolutionized the Information Technology industry. “Edge Computing” is the next advancement – and it’s already underway.

If the InfoSec industry has been so unsuccessful in securing the IoT, what is the probability it will be more successful with Fog Computing and eventually Edge Computing paradigms?

My expectation is that securing Fog and Edge computing environments will actually be simpler, and many of the problems with IoT will likely be overcome as the insecure devices themselves become subsumed in the Fog.

A limitation of securing the IoT has been the processing power of the embedded computing system within the device. As these devices begin to report in and communicate through aggregation nodes, I anticipate those nodes will have substantially more computing power and will be capable of securing and validating the communications of all the dumb-smart devices.
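A minimal sketch of the kind of gatekeeping such an aggregation node could perform on behalf of its dumb-smart devices (the policy table and rate limit here are invented for illustration): drop message types a device class should never emit, and throttle devices that suddenly become abnormally chatty.

```python
import time
from collections import defaultdict

# Hypothetical policy: which message types each device class may send.
ALLOWED = {"thermostat": {"temp_report"}, "doorcam": {"motion", "heartbeat"}}
RATE_LIMIT = 5  # max messages per device per one-second window

_recent = defaultdict(list)  # device_id -> timestamps of admitted messages

def admit(device_type, device_id, msg_type, now=None):
    """Fog-node gatekeeping: reject message types outside the device class's
    policy, and throttle devices exceeding the per-second rate limit."""
    now = time.monotonic() if now is None else now
    if msg_type not in ALLOWED.get(device_type, set()):
        return False  # a thermostat has no business sending anything else
    window = [t for t in _recent[device_id] if now - t < 1.0]
    if len(window) >= RATE_LIMIT:
        return False  # possibly compromised and flooding
    window.append(now)
    _recent[device_id] = window
    return True
```

A compromised lightbulb that starts port-scanning or flooding simply finds its traffic never leaves the node – protection the lightbulb itself could never afford to run.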

As computing power shifts to the edge of the network, so too will security.

Over the years corporate computing needs have shifted from centralized mainframes, to distributed workstations, to centralized and public cloud, and next into decentralized Edge Computing. Security technologies and threat analytics have followed a parallel path. While the InfoSec industry has failed to secure the millions upon millions of IoT devices already deployed, the cure likely lies in the more powerful Fog Nodes and smart edges of the network that do have the compute power necessary to analyze threats and mitigate them.

That all said, Edge Computing also means that there will be an entirely new class of device isolated and exposed to attack. These edge devices will not only have to protect the less-smart devices they proxy control for, but will have to be able to protect themselves too.


Nobody ever said the life of an InfoSec professional was dull.

Wednesday, December 7, 2016

Sledgehammer DDoS Gamification and Future Bugbounty Integration

Monetization of DDoS attacks has been core to online crime since long before the term “cybercrime” was coined. For the first half of the Internet’s life, DDoS was primarily a mechanism to extort money from targeted organizations. As with just about every Internet threat, it has evolved and broadened in scope and objectives over time.

The new report by Forcepoint Security Labs covering their investigation of the Sledgehammer gamification of DDoS attacks is a beautiful example of that evolution. Their analysis paper walks through both the malware agents and the scoreboard/leaderboard mechanics of a Turkish DDoS collaboration program (named Sath-ı Müdafaa or “Surface Defense”) behind a group that has targeted organizations with political ties deemed inconsistent with Turkey’s current government.

In this most recent example of DDoS threat evolution, a pool of hackers is encouraged to join a collective of hackers targeting the websites of perceived enemies of Turkey’s political establishment.
Using the DDoS agent “Balyoz” (the Turkish word for “sledgehammer”), members of the collective are tasked with attacking a predefined list of target sites – but can suggest new sites if they so wish. In parallel, a scoreboard tracks each participant’s use of the Balyoz attack tool – allocating points for every ten minutes of attack they conduct, which can be redeemed for a stand-alone version of the DDoS tool and other revenue-generating cybercrime tools.

As is traditional in the dog-eat-dog world of cybercrime, there are several details the organizers behind the gamification of the attacks failed to pass on to the participants – such as the backdoor built into the malware they’re using.

Back in 2010 I wrote the detailed paper “Understanding the Modern DDoS Threat” and defined three categories of attacker – Professional, Gamerz, and Opt-in. This new DDoS threat appears to meld the Professional and Opt-in categories into a single political and money-making venture. Not a surprising evolutionary step, but certainly an unwanted one.

If it’s taken six years of DDoS cybercrime evolution to get to this hybrid gamification, what else can we expect?

In that same period of time we’ve seen ad hoc website hacking move from an ignored threat, to forcing a public disclosure discourse, to acknowledgement of discovery and remediation, and on to commercial bug bounty platforms.

The bug bounty platforms (such as Bugcrowd, HackerOne, Vulbox, etc.) have successfully gamified the low-end business of website vulnerability discovery – where bug hunters and security researchers around the world compete for premium rewards. Is it not a logical step that DDoS also make the transition to the commercial world?

Several legitimate organizations provide “DDoS Resilience Testing” services. Typically, through the use of software bots they spin up within public cloud infrastructure, DDoS-like attacks are launched at paying customers. The objectives of such an attack include measuring and verifying the defensive capabilities of the target’s infrastructure against DDoS attacks, exercising and testing the company’s “blue team” response, and wargaming business continuity plans.


If we were to apply the principles of bug bounty programs to gamifying the commercial delivery of DDoS attacks, rather than a contrived limited-scope public cloud imitation, we’d likely have much more realistic testing capability – benefiting all participants. I wonder who’ll be the first organization to master scoreboard construction and incentivisation? I think the new bug bounty companies are agile enough and likely have the collective community following needed to reap the financial rewards of the next DDoS evolutionary step.