The electronic theatre of war

First published: LinkedIn on May 11, 2015

We have to face the fact that in our current day and age our communications systems have become a theatre of war.

Where once only specific systems were hotspots of electronic conflict, we now see that almost any device and almost any peripheral is being employed in illegal information gathering, subversion of countermeasures, destruction and ransoming of digital assets, and overall disruption of services.

The sheer diversity of attack vectors and the massive amount of information transferred by the ever increasing number of clients make keeping track of who does what and when, and whether this is within the desired parameters, a herculean task indeed.

To compound matters even further, a single user now has multiple devices and workplaces from which she can access resources, each utilizing different routes and intermediary systems at different times and from different locations, depending on her life and work habits.

Each of these devices can be compromised in various ways: a malicious application can be installed (worms, viruses); the hardware can be compromised (BadBIOS, or the BadUSB microcontroller firmware attack); and the encrypted link can be compromised by man-in-the-middle attacks that spoof certificate authorities, or simply by counting on the indifference of the end-user.

The fact that even USB microcontrollers are not safe from malware means that even air-gapped (air wall) computing is no longer safe enough.

If the opponent were just one single party with known goals, the aforementioned would be somewhat manageable.

But alas, this is not the case; from single disgruntled individuals, via hacktivist groups, through to criminal organizations and nation-states: all of these can be adversaries in the electronic theatre of war.

If you put all of these together, their combined budget easily exceeds that of any single country in the world, so each organization has to fend off this amorphous, unknown Goliath with disadvantages from the start: a smaller budget, no specific opponent, and no specific target within its own organization.

And finally we have the biological malware: a person within the organization who has access to information (most often legitimately, because of job responsibilities) and who can copy it and disseminate it via alternate channels.

But to defend each and every asset with the utmost and extreme measures available would stifle and choke the organization’s ability to perform well in its primary function (whether business, public service or law/military).

So, is this fight already lost?

The answer to that is: not by a long shot.

But the first realization any organization needs to have is that, whether it chooses to or not, it is in a battle for the integrity and ownership of its assets, and that the only way it can retain them is to fully commit to staying in control.

Next, the organization needs to understand that it is compromised right now. Whether it is because of foreign agency hardware on the mainboards, backdoors in routers, viruses and worms, infected printers, tablets or any other information or communications device, it does not really matter: it is compromised.

Short of completely rebuilding all the hardware and software from the ground up, using its own engineers and at least a randomized four-eyes review, there is no simple method of cleaning up and starting over.

But, just as the human body can cope with infections and intruders, so can an organization.

At best, an intruder merely wastes computing cycles (and, appropriately enough, only raises the temperature a little); at worst, the intruder is intent on using the infrastructure to spread and on misusing the environment for its own nefarious purposes.

So, just as the human body does, the organization needs a method of detecting any violations of its integrity.

Here is where the defender catches a break: most of the opponents do not have direct physical access to any of the organization's assets and therefore need to use its networking capabilities to assess and control their tools.

Of course, it is possible to use Rube Goldberg-style methods to transfer information, but making such malware reliable and undetectable requires a nation-state's budget and resources. Unless the organization has a specific reason to be targeted by a nation-state, defending against this should be a capstone project and should not distract from the vast majority of the work to be done.

Now, because the organization has at least partial control over the network infrastructure, there are chokepoints that traffic must pass through. Most likely these will be the main routers or firewalls.

If proper detection systems are hooked into these chokepoints, then dispersion traffic, information-transport traffic and control traffic can be seen, and the organization can act upon these findings.
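As a minimal sketch of what acting on chokepoint data can look like, the snippet below flags hosts whose latest outbound byte count deviates sharply from their own baseline. The flow records, host addresses and z-score threshold are all illustrative assumptions, not a real export format or a production detection system.

```python
# Sketch: flag hosts whose latest outbound byte count at a network
# chokepoint deviates sharply from that host's historical baseline.
# Flow tuples and threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean, stdev

# (src_host, bytes_out) tuples, as a router or firewall might export them
flows = [
    ("10.0.0.5", 1_200), ("10.0.0.5", 900), ("10.0.0.5", 1_100),
    ("10.0.0.7", 800), ("10.0.0.7", 950),
    ("10.0.0.9", 700), ("10.0.0.9", 650), ("10.0.0.9", 9_500_000),  # spike
]

def suspicious_hosts(flows, z_threshold=3.0):
    per_host = defaultdict(list)
    for host, nbytes in flows:
        per_host[host].append(nbytes)
    flagged = []
    for host, sizes in per_host.items():
        if len(sizes) < 3:
            continue  # not enough history to form a baseline
        baseline = sizes[:-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; a z-score is meaningless here
        if (sizes[-1] - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

print(suspicious_hosts(flows))  # the host with the huge egress spike
```

Real deployments would work from NetFlow/sFlow exports or firewall logs and use far richer baselines, but the principle is the same: the chokepoint sees everything, so even crude statistics surface gross anomalies.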

Finally, the organization needs to prevent, or at least slow, the rate of malware distribution within its infrastructure. This is easier said than done, since it is mostly a matter of ultimate trust in the vendors of the hardware and software.

In recent years there have been pre-infected system components: disks, mainboards, USB keys and others. With closed software, the organization must trust that at the origin (i.e. the vendor) no backdoors have been added and that all known best practices for avoiding security loopholes have been followed.

Using Open Source Software (OSS) is only marginally better: even though myriads of developers have looked at it, the majority of the code has not been properly audited. Glaring security issues may be found, but as the recent disclosure of the Shellshock bash bug has shown, security holes may go unnoticed for decades (in this case since at least 1992!).

If the organization can afford the effort and finances, then a bootstrap from OSS would be possible: start with the minimal code needed to get a server up and running and fully review it. With each patch, review the code again. Then do the same for each service application. The organization would need a small software development company under its wing.

For most, however, it would entail re-designing the software management policies: being extremely strict and fast-acting at server level, and agile but watchful at end-user level.

Things like having proxies detect outdated browsers and plugins and block them would already be a large step forward, but it would hardly be enough.
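The check a proxy would apply can be sketched as a User-Agent version floor. The browser names, version floors and the regular expression below are illustrative assumptions; a production proxy would use a proper UA parser and a managed policy, not this toy:

```python
# Sketch: a User-Agent version floor, as a forward proxy might apply
# before allowing a client out. Floors and regex are illustrative.
import re

MIN_MAJOR = {"Firefox": 35, "Chrome": 40}  # hypothetical policy floors

UA_RE = re.compile(r"(Firefox|Chrome)/(\d+)")

def allow(user_agent: str) -> bool:
    m = UA_RE.search(user_agent)
    if not m:
        return False  # unknown client: block by default
    browser, major = m.group(1), int(m.group(2))
    return major >= MIN_MAJOR[browser]
```

Blocking unknown clients by default is the important design choice here: malware rarely bothers to impersonate an up-to-date, policy-compliant browser exactly.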

Being able to switch client software quickly is another step: if browser X has a big problem, be ready to switch to browser Y on very short notice.

The gist of this is to be proactive rather than reactive: every minute an organization spends thinking up a policy and how to implement it is a minute that malware can burrow deeper into the organization and entrench itself. Software does not sleep, and purveyors of malware run a 24/7 business.