Revisiting the Cybersecurity Basics: An Independence Day Special

When writing this piece, I was looking for a way to set the cybersecurity issues we face on a daily basis against the backdrop of the nation’s birth, and I saw no better way to do it than by drawing the connection below.

There are scholars and historians who will do a much better job than I ever could in decoding the decision making of The Founding Fathers, but something I feel we can all agree on is this: The Founding Fathers had had enough.  They said “this isn’t working for us” and took ownership of their destiny.

The Founding Fathers realized that when individuals were given the freedom and liberty to conduct their own affairs, they could unlock powers otherwise not available to them.  And of course, along with that freedom and liberty comes individual responsibility.

That is the bridge I want to make with current cybersecurity challenges: each and every single one of us has a role to play if we’re going to tackle this cybersecurity beast.  We simply cannot have the expectation that somebody else is going to take care of our cybersecurity problems.  In last month’s piece, along with previous pieces (here and here), I illustrated that importance, focusing on the things we as individuals can do to up our cybersecurity game.  And of course, my “go to” move always: never forget the basics.

If we are going to provide new guards for our future (cyber) security, it is our duty to stop the long train of abuses we have been facing.  That means if we’re still leaving internet traffic unencrypted, if we’re not protecting our crown jewels, and if we’re still allowing ourselves to be suckered into clicking malicious links, we can – and should – hold ourselves responsible for our cybersecurity follies.

I know you all have the power to prevent future cybersecurity attacks against your interests.  I mean that sincerely.  The keys are individual responsibility and doing the basics.  I have no scientific proof, but here is my SWAG: take control of your cyber responsibilities and do the basics and I’m confident you’ll reduce your cyber risk profile by 80% at least.  That’s some pretty solid work at your fingertips that provides safety and happiness.  And it’s an easy way to stop suffering the cyber evils.

A Happy Fourth to you and yours as we celebrate freedom and liberty!  And with that, I leave you with my favorite words ever said on this day.

In Congress, July 4, 1776:

The unanimous Declaration of the thirteen united States of America, When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed,–That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shown, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.

 

By George Platsis, SDI Cyber Risk Practice, July 3, 2018

 

Keep it Local: Cybersecurity is Everybody’s Problem

Two recent interactions – one business meeting and one personal conversation – prompted me to write this piece.  Both of them, coupled with experiences over the last few years, drove me to these conclusions:

  1. People don’t see cybersecurity as a problem they are responsible for; and
  2. People just don’t care about cybersecurity.

While these conclusions are upsetting, I find them both valuable as well.  They have value because they can serve as starting points to explain why you are responsible for your behavior online and why you should care about cybersecurity.

In both interactions, I had respective “flash point” moments where I thought to myself, “Houston, we have a problem.”  Here’s how the interactions unfolded.

During the business meeting – which started as more of a “how do we keep the lights on during a crisis” conversation – the discussion shifted to cybersecurity.  One participant said, “But cybersecurity is such a specific issue.  Should all these senior executives really focus their attention on this niche problem?”

I felt like we just lost the main thruster.

This comment prompted me to take the conversation back to basics (feel free to check out last year’s blog post on the basics).

Disclaimer: I consider the person who made this comment very knowledgeable and bright, so this wasn’t a case of ignorance or dismissiveness.  Furthermore, the others in the room tended to agree with the comment, so my “aha!” moment came when I realized we have a fundamental problem: our common understanding of cybersecurity is inherently incomplete and flawed.

So I went back to the basics and began to outline how cybersecurity and all related issues, like data integrity and information warfare, are not niche issues for an organization.  In fact, they are core business issues which, if mismanaged, can force your business to close shop overnight.   Really, EVERYTHING we do today relies on some sort of network or data communications system.  This is not niche.  This is the furthest thing away from niche.

By explaining to the participants in the room – in the absolute simplest possible way – what “cybersecurity” touches in any organization, I felt that, if nothing else, they walked away with a sense of “maybe we need to think about this differently.”

One small step for a better understanding of cybersecurity, one giant leap for protecting your organization.

The second interaction arose during a quick catch up with a friend who – generically speaking – works in an industry we rely on every single day. My friend said to me, “Yeah, these audit and IT people drive me nuts.  I don’t really care about protecting the information.  I’m a marketer.  That’s their job, not mine.”

The hyperdrive on the Millennium Falcon just went clunk and R2-D2 is nowhere in sight to help get us back to light speed.

With no disrespect to my friend: we have been conditioned to think our devices are safe because of anti-virus, firewalls, monitoring software, and name your latest piece of software garbage that promises you the path to cybersecurity paradise.

A car with every single safety feature possible still has one basic requirement to ensure safety: a driver who drives safely!

Our devices – really, there’s no difference between “desktop” and “tablet” and “smartphone” anymore, they’re all powerful computers – are little ticking data time bombs that, at best, may only hurt us personally if they go off, and at worst, take out some of the people and organizations we care about most (or rely on for a paycheck).

If you have been following my posts and other writings, you’ll know that I’m a fan – a big fan – of segregating your devices and accounts for specific tasks.

Does this have a greater upfront cost?  Yes.  Does this mean I lose some functionality?  Yes.  Does this mean I may have to carry three devices instead of one while traveling?  Yes.  But you know what else this does?  It allows me to quickly triage and deal with a problem if it arises.

Sometimes people hand me their phones to show me something and I can’t resist; I need to look at the taskbar to catch a quick glimpse of all the services running and radios that are on or broadcasting.  My inner voice most often wants to yell out “why don’t you just paint yourself neon green, start yelling at the top of your lungs, and leave a trail of bread crumbs while you’re at it!”

In a digital sense, that’s what most of us are doing.  Somewhere, somebody had the bright idea to monetize data collection in exchange for convenience.  That worked great for a while, except all these conveniences are coming back to bite us in the behind … meaning people are trying to make money on bite prevention therapy.

Perhaps I live with a tad more simplicity.  Instead of trying the latest and greatest mosquito repellent at each and every turn (because short of walking through the path with a flamethrower turned to 11, I’m still likely to get a bite or two), maybe I just avoid mosquito infested paths if I really don’t need to walk through them.

Yes, having a Facebook app on your “phone” (aka the computer in your pocket) – the same device you use for business e-mail, personal banking, calling your mom, and taking pictures of food with enough embedded metadata to ensure you never forget where you have eaten every meal for the last six years – may be fun and convenient for you.  But understand what you could be giving up, and respect what those data leaks could cost you and the organization that employs you.
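If the metadata point sounds abstract, here is a minimal sketch of how easily that location data can be read back out of a photo.  It assumes a reasonably recent version of the Pillow imaging library and uses a made-up file name; the point is simply that the GPS tags are sitting there for any app that cares to look.

```python
# Minimal sketch: reading the GPS tags embedded in a photo's EXIF data.
# Assumes the Pillow library is installed; "lunch.jpg" is a made-up file name.
from PIL import Image, ExifTags

def photo_gps_tags(path):
    """Return the GPS-related EXIF tags of an image as a readable dict."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo sub-directory
    return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(photo_gps_tags("lunch.jpg"))
# Typically includes GPSLatitude, GPSLongitude, GPSAltitude and a timestamp --
# enough to reconstruct exactly where and when the picture was taken.
```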

This is why cybersecurity is everybody’s problem.  It’s not a niche issue.  Yes, people are becoming more aware of cybersecurity problems, but they do not really understand them.  I have friends who occasionally forward me “cybersecurity stuff” (which I appreciate) but it’s all the same vendor garbage.  People toss around words like “continuity” and “resilience” and similar jargon, which convinces me more and more of what Nassim Nicholas Taleb – somebody who really understands risk – says: there are a lot of charlatans out there trying to make a quick buck on the latest fad.

And just like Taleb is a fan of localism, I’m a fan of localism as it relates to cybersecurity.  Don’t expect some IT nerd in some far off department to protect your data.  Don’t believe vendors who put their interests (taking your money) ahead of yours (protecting your data).  You protect it.  Yourself.  It’s better for you.  It’s better for everybody.  Be responsible for your own affairs.  It’s a type of change in thinking that could make a meaningful cybersecurity impact.

 

By George Platsis, SDI Cyber Risk Practice

June 5, 2018

Security By Design Applies to Organizations Too

If you are an engineer or software developer, there is a good chance that you have heard the phrase “security by design” before (sometimes also referred to as “secure by design”).  If you are unfamiliar with the phrase, it pretty much means what you think it would mean: something has been designed, developed, and manufactured with security in mind.

Security by design is an extremely good practice, but unfortunately it is underutilized.  Think of IoT devices as an example.  Many of these devices are driven by market forces and cost.  When there are pressures to get a product to market quickly, security – arguably a form of quality control – takes the hit.  The reason is simple: secure code is expensive to write, because it requires a great deal of vulnerability testing.

Here’s another way to know that “security by design” is not commonplace with IoT devices: default passwords.  If security by design was really being employed, all devices would force the user to create a new administrative username and password upon first use, but far too often, people are satisfied with the default password.

Let’s use an analogy to understand default passwords.  Imagine you make a visit to your local hardware store to buy a front door for your house.  That door has a prefabricated combination lock built into it.  But there’s a catch.  The default combination for all locks on every door for sale is 00000.  Of course, there are instructions on how to change that combination and I suspect as soon as you install that door in your home, the first thing you’ll be doing is changing that combination.  In essence, you are changing the “default password” of your door.

Now you may not have experienced this scenario for your front door, but I bet you’ve done it for some luggage locks that have combinations.  It’s exactly the same principle.

Here’s the difference if the door was built with “security by design” methodology.  There wouldn’t be a default 00000 combination.  Instead, to “lock” the door the first time, you would be forced to set a combination code.  Otherwise, the door would just be wide open.
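To put the same idea in software terms, here is a minimal sketch of what “no default combination” looks like on a hypothetical device: there is nothing to fall back on, so the device refuses to operate until the owner sets a credential.  The file name, length rule, and banned words are all illustrative assumptions.

```python
# A minimal sketch of "no default combination": the device will not start
# until the owner creates a credential. The file name, minimum length, and
# banned words are illustrative; a real device would also use a proper
# password-hashing function (e.g., PBKDF2) rather than a bare SHA-256.
import getpass
import hashlib
import os
import secrets

CRED_FILE = "admin_credential.txt"   # hypothetical storage location

def first_run_setup():
    """Force the owner to create an admin password before anything else works."""
    while True:
        password = getpass.getpass("Create an admin password: ")
        if len(password) >= 12 and password.lower() not in {"password", "admin123"}:
            break
        print("Too weak. Use at least 12 characters and avoid the obvious.")
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    with open(CRED_FILE, "w") as f:
        f.write(f"{salt}:{digest}")   # store a salted hash, never the password

if not os.path.exists(CRED_FILE):
    first_run_setup()                 # the "door" cannot lock until a code exists
```

The point is not the specific checks; it is that there is no 00000 combination sitting there waiting to be ignored.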

I trust you understand what I mean now.  And when you understand that concept, you’ll begin to see that many of the systems we rely on have not been built using the security by design methodology.  It’s quite obvious when you see the mishmash of new and legacy technologies trying to work together.  And the clearest case of a system not using the security by design methodology is the Internet itself, as decisions made decades ago made the Internet inherently vulnerable.

While the security by design methodology is almost exclusively a technical solution (you can learn a great deal by referencing NIST Special Publication 800-160), I invite you to consider that we can use security by design lessons for our organizations and even ourselves.  It’s not that odd when you think about the basic principles.

First, I’ll be reasonable: we cannot make ourselves or our organizations 100% impervious to attacks.  That is completely unrealistic, but it does not mean we cannot make ourselves better.  Think about it like this: not all of us are going to be pro athletes or musical virtuosos, but with some continuous practice and pushing of our limits, we will get better.

What are some ways we can do that on a personal level? Here are a few tips:

  • Get into the habit of changing your passwords.
  • Segregate work and personal accounts/devices.
  • Learn to identify phishing/spear-phishing e-mails.

All basic stuff, but all small steps that get us into the habit of behaving in a more cyber secure manner.  If you are doing that deliberately, it’s by design.  You’re not relying on somebody else to defend you.

How about at the organizational level?  Here are a few considerations:

  • Whitelisting applications and communications. You’ve almost certainly heard of blacklisting (do something wrong, like use an app you shouldn’t, and you get an ouchy on the wrist).  Whitelisting, instead, only lets you use pre-approved applications, and whitelisting communications means you can only send/receive messages from certain people.  Of course, this comes at the expense of some convenience, but it may save your keister in the long run (a minimal sketch of the idea follows this list).
  • Make Red Team/Blue Team exercises a regular part of your organization’s operations. You need not have some elaborate simulation or exercise.  You just need something continual to get you in the right frame of mind.
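Here is the whitelisting sketch promised above.  The approved lists are made up for illustration, and a real deployment would lean on OS-level controls (application control policies, mail gateways) rather than a script; the sketch just shows the deny-by-default mindset.

```python
# Minimal sketch of whitelisting: everything is denied unless it appears on a
# pre-approved list. The entries below are hypothetical examples.
APPROVED_APPS = {"outlook.exe", "excel.exe", "vpn-client.exe"}
APPROVED_SENDERS = {"payroll@example.com", "it-helpdesk@example.com"}

def may_launch(executable_name: str) -> bool:
    """Only pre-approved applications are allowed to run."""
    return executable_name.lower() in APPROVED_APPS

def may_deliver(sender_address: str) -> bool:
    """Only messages from pre-approved senders are let through."""
    return sender_address.lower() in APPROVED_SENDERS

print(may_launch("outlook.exe"))             # True  - on the whitelist
print(may_launch("random-download.exe"))     # False - denied by default
print(may_deliver("stranger@evil.example"))  # False - denied by default
```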

Story time: one of the best examples of Red Team testing came from an organization that employed a small, but crafty, Red Team full time.  This Red Team would scour the facilities looking for unattended machines or vulnerable devices.  Think getting up from your computer to go to the bathroom without locking it, or leaving smaller devices, like a phone or tablet, unattended.  Use your imagination as to what happens next.  This organization took the viewpoint “better we catch you than the real bad guys.”

Ultimately, “security by design” for people and organizations means practice, practice, practice.  It’s a culture shift, but it can be done.  The alternatives are burning through a pile of cash, scrambling to recover locked out systems, or explaining to your clients why their data may be out in the wild.

 

By George Platsis, SDI Cyber Risk Practice

May 8, 2018

Ask Yourself: Why Do You Amass Data?

Unless you have been living under a rock, you have probably noticed that a growing number of people are not too pleased with Facebook and Alphabet Inc., parent of Google and developer of the Android mobile operating system.

What started the recent frustrations were the revelations from an employee of Cambridge Analytica about how the company harvested information from 50 million Facebook users.  I want to skip over the part about how the data was used, because that’s the fog in this storm and the distraction is entering “funny cat video” territory.

Where we should be focusing our thinking is here: that the data was harvested in the first place, in one central repository.  That’s the issue.

Ask yourself: why do we amass data?  Let’s get our hands dirty to answer that question.

Raw data is usually amassed for one or more of these reasons:

1) To understand something,

2) To develop something, and

3) To sell something.

It’s all pretty straight forward.

The only other reasons to amass data would be that you are a hoarder or doing something really creepy.

I’m going to go out on a limb here (insert Bugs Bunny level sarcasm) but my guess is Facebook and Google were doing at least all three.

It’s actually quite transparent when you understand the business models these companies rely on.  They need to “understand” you in order to “develop” something so that they can “sell” it to you.

Do you see the hook?  In order to “understand” you, they need to entice you somehow, because this isn’t your run of the mill brick and mortar retail shop or professional services company that you hire for a specific job.

So how are you enticed?  Simple: funny cat videos, “free” software and apps, and dopamine hits.  In other words, exploiting a “vulnerability in human psychology,” as the founding president of Facebook put it.

And of course, while you’re using these services, these companies will ask you to help them “enhance the user experience” by telling them everything about you.  And if that’s too much time for you, just click on this “I Accept” button and we’ll scoop up whatever is lying around on your device.

Now, take all that data we’ve accumulated on you and start crunching it through algorithms and ever-more-powerful AI and we’re going to start predicting your every move.  Why, we may even “understand” you so well that we will develop not only a product, but an idea, say a political idea, that we can sell you!

Okay, okay.  We may not be able to do that, something about election laws, so we’ll just sell the data to somebody who can.

Back to being serious: to anybody that has been sincerely following the digital evolution, information security, big data, artificial intelligence, and algorithms over the last 10-15 years, the Cambridge Analytica, Facebook, and Android revelations are worthy of no more than a yawn or shrug.

Here’s why: The system operated exactly as it was supposed to.  And that is the scary part.

In our warp speed attempts to improve the user experience and make money, we lost sight of a few things that matter to us: like protecting our data and even putting limits on what really needs to be gathered.

For example, if I was using certain Facebook features on Android devices – disclaimer: I don’t and won’t ever – and I have your number saved in my contact list, what consent have you given me to give your phone number to Facebook via Google?  Chances are you never gave me consent, but even if you’re not a Facebook user, Facebook may have your phone number, which is one more piece of data they can collate.

Spend a few minutes and check out Google’s Privacy Policy.  No need to spend too much time on it, since most privacy policies take about 15-20 minutes to read.  But just spend two minutes on the section “Information we collect” and see how you feel after reading.

In a simpler time, smaller organizations would have the living daylights sued out of them if client information leaked, especially if non-disclosure agreements were signed.  Really, NDAs can be considered a type of non-digital information security protocol.  But that close guard on information, a responsibility really, never transferred over to the digital world.

Don’t be surprised if there’s a sea change on the horizon regarding data security, particularly when it comes to personal data.  Users and clients may begin to place pressures on the data collectors to explicitly state what they’re doing with the data, how they’re protecting it and – the key to me – what happens if your data makes it out of your vaults.

If we start to pressure for legislative and regulatory changes, along with more civil legal action – and I think we will – data collectors will not only see their liability exposure increase, but I think you’ll see a change in business models as well.  That’s why my advice to you is to step up your cybersecurity game if you’re collecting information on individuals and clients.  We’ve had a lot of talk the last few years using the carrot approach, but the general public is starting to get fed up and will want to see some stick treatment used on the data collectors.

Again, remember: the system worked exactly as it was designed to.  Some don’t like the fact that Bob had access to the data.  But keep this in mind: it’s quite possible that we haven’t yet heard of an Alice, Joe, and Sally who had access to similar, or even more, data.

And my suspicion is some people are having some very sleepless nights these days because, while their technical cybersecurity measures may have been alright, these same people made it a policy to share and sell the data for their own interests, perhaps sharing with some, selling to others, and withholding altogether from another group.

I don’t believe the general public will be too pleased when they begin to realize they were not the consumer, but rather the product.  That is all the more reason, if you’re collecting data, to be sure you’re taking the appropriate steps to protect it and not misuse it.  It could end up being costlier to you than you initially thought.

 

By George Platsis, SDI Cyber Risk Practice

April 3, 2018

 

 

Why “Security” and “Efficiency” Should Never Be Used in the Same Sentence

We are marching along well into 2018, and I think it’s safe to say we’re not experiencing a cybersecurity revolution.  Sure, there has been some great advancement in tech, with AI and blockchain applications beginning to steamroll.  It seems if you add “blockchain” to whatever you’re doing, you’ll get a bump in business.  Really, this happened in late 2017.

And as an aside, “blockchain” should not be synonymous with cryptocurrencies.  They are distinct from each other.  I personally think the blockchain technology is fantastic, but I am still a bit of a skeptic on the cryptocurrency front if you missed the initial few “investment” waves.  With that said, economic realities throughout the world do make cryptocurrencies attractive to many, so I am not writing them off.  Just too soon to tell for me, which means I see them more as a speculative commodity – today – instead of a means of conducting daily financial transactions.

So great, we have a lot of tech progress, but what does that mean for keeping our data protected?  Well, because we haven’t seen the “cybersecurity revolution” mentioned above, my feeling is that we’re still tripping over ourselves, which means we’re still getting the basics wrong.  If you need a refresher on some of the basics, here is an older post from the summer of 2017.  Half a year plus later, everything still applies and that tells me that there is something fundamentally wrong with how we approach our cybersecurity problems.  And I’m going to present to you one of the most galactic-sized problems we face.

I have often said that you shouldn’t use “security” and “efficiency” in the same sentence (okay, I know I am here, but it’s to define context!).  The reason is quite simple: the terms are contradictory in nature.  But I continue to hear nonsensical comments like “efficient cybersecurity” and it makes me wonder: do these people have a clue?

I get it, we look for efficiencies because – at least in theory – efficiencies make us more productive and better productivity means greater profits (or whatever “metric for success” you use).

Efficiency, as a concept, works great when you don’t have these cataclysmic costs hit you every so often (in this case, we call them the painful cyber breaches).  And why is that?  It’s because efficiency makes you, wait for it: efficient.  Efficiency doesn’t make you strong, resilient, robust, or antifragile.  It just makes you efficient, which in many cases means you build fragility into your system.

Let’s visualize this.  Think of an artistic glass sculpture.  It’s gorgeous, a total masterpiece.  The sculptor has used every technique in the book to make this piece of art look the way it does.  In terms of art, it is the most efficient use of glass known.

And then I drop a glass marble on it and the entire thing shatters.

That’s what we’re doing in our supply chains, enterprises, and if we are utterly foolish (we have a good track record of doing that) our soon-to-be “smart” cities (note: “smart” cities are actually pretty dumb, but that’s for another post).

We always look at things from the upside.  We look at “what can be made” most of the time.  This is a good thing to be honest.  Being positive is good for the soul and helps us innovate.  But there is a looming downside that we are pretending doesn’t exist in many ways.  We rarely look at “what can be lost” in these cases.

The problem is that a couple decades of building fragility into the system means that there are more ways for things to be lost now and our blinders are, in large part, a function of this entire “efficiency worship” in the business world.

And that’s why I think security measures are not getting any traction: because security really is a redundancy.

So here is my easiest way of explaining it: what is the “most efficient” way into your house?  It’s a house with no door.  Efficiency means reducing impediments, or, put another way, removing redundancies.  A “door” is an impediment to you entering your house, making it a redundancy.  A lock on the door is another impediment (redundancy).  Just like a fence.  Or a security guard.  Or a moat!

So why do we build all these redundancies into our homes?  Simple: because we want to protect our homes.

That’s the mindset change that’s required if we want to emphasize the security in cybersecurity: build redundancies, reduce fragility, and be capable of withstanding shocks.  Is that more expensive on the front end?  Yes.  But we’ve entered the age of “massive” shock, where billion-dollar losses are within the realm of the possible.  Not many companies can survive that, and unless we go down the “too big to fail” road, government subsidies to help big corporations weather their losses are not a likely option.

Efficiencies in business are great, but in order for them to be effective, a precondition needs to exist: nothing goes wrong.  We’re finding out in the cybersecurity world – something that touches everything – a lot is going wrong.

 

 

By George Platsis, SDI Cyber Risk Practice

March 6, 2018

The End of Evidence

Perhaps you noticed from a recent Vanity Fair publication that Oprah Winfrey has three hands and Reese Witherspoon has some odd looking legs. Of course they really don’t. This was just “magic gone wrong” in the world of photo editing and likely invoked more than a few Homer Simpson “d’ohs!” and forehead smacks.

Goofy mistakes aside though, some photo editing and CGI work has been quite impressive and will surely get better.  AI is even playing a role in this space. We’re going to keep this blog G-rated, but if you’re following the technology, it is possible to put somebody’s face on somebody else’s body in videos that are highly suggestive. Thankfully, at quick glance you can still tell these are fakes, but for how long will the naked eye be able to spot a fake?

So what do fake images and videos have to do with cybersecurity? Well, it’s a question of data integrity.

You see, there was a time where we considered a picture or a video definitive proof of something having happened.  We are well into the early stages of “that may no longer be the case” and here is why.

Hollywood and LA music studio tricks have been commercialized and miniaturized. Steven Soderbergh (of the Ocean’s Eleven remake and series) recently said at Sundance 2018 that he only wants to shoot movies on iPhones from now on, just as his latest horror-thriller Unsane was. As somebody who lived a past life as a DJ and producer, I can tell you that music production is no different. I literally have the tools to produce a studio-quality album on the same computer I’m typing up this blog post on.

Now, given that I have these tools at my fingertips, with some work and practice, I can really do some incredible things. Never mind simple audio editing, like adding pauses or cutting out certain pieces of what was said. I can alter things like voice inflection, pitch, speed, you name it. So not only can I change what you said, I can change how you said it.

 

Does this worry you?  It should.

And I am certain you have heard the term “Photoshopping” (which comes from being able to use Adobe’s Photoshop to create something that may not have happened). Sure, it’s fun to add filters to your pictures or cut and paste things to make goofy looking pictures, but what happens when these fakes become indistinguishable from the real thing?

Now you really should be worried.

The technology is good enough to make that happen, especially if you have a dedicated and meticulous user trying to alter the data. If somebody is committed enough, they will go pixel by pixel, nanosecond by nanosecond, to eliminate all possible traces of fraud. Add AI assistance into this mix and the process will only become easier.

We’ve clearly fallen behind the times legislatively with respect to cybersecurity laws. I’ve made comments in the past that we still don’t have basic terminology down right. For example, “stealing” somebody’s emails is profoundly different from “copying” somebody’s emails, but we still have too many “experts” and pundits using the terms interchangeably, not knowing the difference.

My concern is we’re going to fall behind the curve when it comes to the integrity of evidence as well. Consider this scenario: a crime happens and the victim calls the police to investigate. The perpetrator knows that the victim has a video system protecting their facility, but is able to hack into that system and alter the video on the DVR. Of course, the investigators will ask for the video, but will only find out – from the video – that no crime took place. Or worse, the victim will get accused or charged for making a false claim!

 

Have some integrity.

The key to all of these problems is data integrity. Some of our systems have ways to ensure the data hasn’t been tampered with. For example, most emails have some sort of stamp burned into the header properties, meaning that it is pretty tough to fake emails (tough, though not impossible if somebody is truly committed enough and depending on the email system being used). The same though cannot be said for picture, video, and audio files.

Are there ways to ensure digital data has not been tampered with? Yes, the most common being the practice of hashing. Super simple version: take a file or text, run it through a “checksum” utility, use some algorithm (MD5, SHA-256, take your choice) and watch some funky garble get generated. If that funky garble is changed, even by one character, the garble will be different.

 

For example, the paragraph above, when run through a SHA-256 calculator, gives the following string:

7C15BFDDE577EA8C58BB317741FEF1017CC4A5EA052086226B61F12B057BE648

If I take out the quotation marks around the word checksum, the string changes to:

D8536F24D5695955FB65A03D1309959971509F6A550E28E8660937795D943CF8

And if I put back the quotations, but change the capitalization of “Are” to “are” the string changes to:

F450B0275C9B61687601A6F98C698769D1880AA95F4A719A3A11A00C5E0426EE
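If you want to try this yourself, here is a minimal sketch using Python’s standard hashlib module.  The strings below are placeholders rather than the full paragraph hashed above; the same pattern works for files read in binary mode.

```python
# Minimal sketch of the checksum idea: any one-character change to the input
# produces a completely different SHA-256 digest.
import hashlib

def sha256_hex(text: str) -> str:
    """Return the SHA-256 digest of a piece of text as a hex string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Are there ways to ensure digital data has not been tampered with?"
tampered = "are there ways to ensure digital data has not been tampered with?"

print(sha256_hex(original))   # one digest...
print(sha256_hex(tampered))   # ...and a completely different one
```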

This is a problem we really need to spend some serious time on, because new technology will really play a role in how we treat and handle our evidence. Best we start having this discussion before the stakes are higher than Oprah’s hand being misplaced.

 

By George Platsis, SDI Cyber Risk Practice

February 13, 2018

Operation Fitbit

The Chinese philosopher Laozi gave this saying to history: “A journey of a thousand miles begins with a single step.”  Two thousand five hundred years later, give or take a century, modern technology makes it simple to count those steps. And, thanks to companies like Fitbit, counting to 10,000 steps each day has become something between a relatively common practice and a maniacal pursuit, depending on one’s personality and predilections. And, as is so often the case, the practice has produced unforeseen consequences, some of which have now become a military vulnerability.

Writing in Wired, Jeremy Hsu offered a fascinating look at the way a seemingly innocent fitness craze can morph into something potentially far more sinister. A San Francisco based company called Strava offers a website and mobile application aimed at connecting athletes around the world. It enables users to track their activities and share their workouts with friends. It tracks all kinds of metrics, and even offers a “suffer score.”  Just the kind of thing to engage highly competitive individuals, like, say, military personnel. What could possibly go wrong?

Well, Strava publishes a “heat map,” which shows the clusters of activity associated with highly active people who are contributing their data. An Australian college student studying the map noticed what appeared to be really substantial activity in certain areas of Afghanistan, Iraq, and other areas where there were American military bases. Soon, other analysts saw activity that could be associated with French and Italian military bases, and even CIA “black sites.”

As if this weren’t enough, Hsu points out… “the bigger worry from an operations security standpoint was how Strava’s activity data could be used to identify interesting individuals and track them to other sensitive or secretive locations.”  Capturing the concern that has arisen, Jeffrey Lewis of the Middlebury Institute of International Studies at Monterey, CA, said Strava “is sitting on a ton of data that most intelligence entities would literally kill to acquire.”

Naturally, the Department of Defense and the CIA are studying the issue closely, and who knows, they might even find some strategic advantage. But in the meantime, it is another timely reminder that no matter how fit you are, the cyber world can be a very difficult place to traverse.

By Tom Davis, SDI Cyber Risk Practice

January 31, 2018

Treat Your Data Like Cash

How annoyed are you when you find out you lost some cash?  Whether it is a few bucks in your jeans pocket or that “emergency stash” under the mattress, losing that “cold hard cash” is a feeling that always twists your stomach.  Sometimes you blame yourself.  Sometimes you blame others.  Depending on the amount lost, your emotions could range from the standard “how could I be so stupid?” to a profanity-laced tirade that is not suitable for print here.

Question: do you feel the same way when you experience credit card fraud?  My instinct is that while you would feel some sort of violation and negative feelings, it’s just not “the same” as losing cash.

I say this because sometimes you have ways of getting back your money with credit card fraud.  There’s hope.  It’s painful and takes up a lot of your time.

But there’s a chance.  Also, you haven’t lost anything really “tangible” if you have just lost some purchasing power on your credit card until the fraudulent charges get reversed (yes, I accept that for some this is a bigger problem than others).

But you really begin to feel the hurt if you can’t get these charges reversed and you do have to pony up the cash to cover it.  So, it comes back to cold hard cash again.  And usually the only way we get our cash back is because of a Good Samaritan or divine intervention.

With that thought in mind, here’s my first 2018 cyber comment to you: treat your data like cash.  My feeling is that most of us are treating our data like credit card fraud, hoping that we can get it back somehow.

I’m going to tell you that once your data is out in the wild, you should treat it as gone for good.  Sure, you may come across some cyber Good Samaritan or get some much needed divine intervention, but really, your data is gone.

I find myself both chuckling and smacking my forehead when I hear “if you just pay  the ransom, we’ll give you back your data and destroy all copies we have.”  Okay, if you really want to believe the person that just ripped you off and extorted you (which by the way, you’ll probably never see in the flesh), fine, but that’s a personal problem I can’t really help you with.

That’s why I’m keeping this post short and simple, hoping that 2018 brings about a sea change in how we treat our data.  Information is just another form of currency (arguably, the most valuable), which is why, if you believe the old saying “cash is king,” we should really start thinking “data is king” as well.

Just start believing that once your data is compromised, it’s gone for good.  This is the case of course unless you can verify that you have gotten all of it back and also verify no copies have been made and also verify that your data has not been tampered with.  I believe we have enough evidence to show this is no easy task, so let me make this easy for you: just assume you lost some “cold hard data” in the process.

Let me wrap up with these last few words.  There has been a shift in the last 18 months away from the belief that cybersecurity is mostly a tech issue.  This is a good step, even if it’s late to the game by a few years in my opinion.  I also like that there have been more frequent calls for a “cybersecurity culture change” in order to stop the data loss.  Regrettably though, there has been little in terms of easy-to-explain-and-execute culture change.

That’s why I’m calling for data to be treated like cold hard cash.  If we can burn that mentality into our minds, I think we’ll take a giant leap forward in protecting our data.  Have a Happy 2018 full of good health, happiness, prosperity, and meaningful cybersecurity!

 

By George Platsis, SDI Cyber Risk Practice

January 9, 2018

Warm Holiday Wishes

As the holiday season gathers steam, we traditionally pause to take stock of our many blessings. This year we can find one in the just released U.S. National Security Strategy.  It appears we’ve won the war on climate change, and climate change is no longer a national security threat, so a long winter’s nap should be marginally easier to come by.  However, the strategy does recognize the growing threat from cyber weapons, and the evidence of that threat is abundant.

CSO online just issued its security predictions for 2018, and predictably, it forecasts ever increasing state sponsored cyber attacks.  As the article notes, “The usual suspects for state-sponsored attacks — North Korea, Iran, and Russia — don’t have much to lose by continuing their attempts to extort, steal, spy and disrupt by infiltrating information systems. All are already heavily sanctioned, and the consequences — at least those we know about — in response to state-sponsored attacks have been minimal.” Their forecast is consistent with the outlook of Experian, which pointed to critical infrastructure as a sector where breach activity by nation states is likely to rise.

How timely then that FireEye just announced that Schneider Electric SE had received a lump of coal in its business stocking. Schneider provides safety technology, and one of its products, Triconex, is widely used in the energy industry, including at nuclear facilities and oil and gas plants. The breach victim is said to be in the Middle East, and some cyber experts suggest Iran had sponsored an attack on Saudi Arabia, which, if true, would hardly be shocking news. More importantly, this seems to be the first report of a safety system cyber breach at an industrial plant. This offers a new front in cyber warfare, because by compromising a safety system, hackers could destroy the ability of an industrial plant to identify an attack or limit the damage.

This comes as security experts are closely watching developments in Ukraine, where the holiday season in recent years has been marked by significant attacks on its power grid. Officials from other nations have been studying the attacks on Ukraine to determine what additional safety measures need to be employed to lessen the vulnerability of power grids around the world. It’s fair to say that if Ukraine is again victimized, the repercussions will ripple widely. When we say we hope your holiday season is warm and bright this year, we really mean it.

By Tom Davis, SDI Cyber Risk Practice

December 19, 2017

 

An Eye on GDPR

There is a lot of talk about the European Union’s General Data Protection Regulation (Regulation (EU) 2016/679).  And rightly so, because it will impact a great many organizations, many of which reside in the U.S.  Set to come fully into effect May 25, 2018, the GDPR has understandably caused a lot of headaches because it is a wide-sweeping and costly regulation, especially if you are in violation.

Clearly, the first question to ask is if the GDPR applies to you. If it doesn’t, you are in the clear (but that is not an excuse to relax your data protection measures).  If it does, well, you have work to do if you haven’t been on top of your GDPR compliance. This is especially true if you are a big organization, are not based in the EU, and have a lot of EU customers and clients.

I would like to take a step back here for a moment and perhaps calm some of the GDPR hysteria out there. Yes, some commenters and compliance professionals are rightly having heartburn over the GDPR. But others have said not to freak out, like Elizabeth Denham, the UK Information Commissioner, who has stated that the GDPR should just be looked at as an “evolution” in data protection and not a revolution.

My humble opinion is that if the GDPR applies to you and you are a non-EU organization, your worry should be greater than zero.  Here is why: the EU needs money. And who do you think they will fine first?  EU-based organizations or non-EU-based organizations?  Option 1 seems like it could be detrimental to the EU economy (something about hurting your own) but Option 2 seems like a nice windfall being extracted from a competitor.  If I’m the EU, I know who I am fining first.

But the fines can’t be that bad, can they?  Yes, they can be that bad. Violators of the GDPR can be fined up to 4 percent of annual global turnover or €20 Million, whichever is greater. That sounds like some industrial strength motivation to take the GDPR seriously, especially if you could end up near the top of the pecking order.
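To put that “whichever is greater” rule in perspective, here is a quick back-of-the-envelope sketch; the turnover figures are made up purely for illustration.

```python
# Illustrative only: the GDPR maximum-fine rule quoted above is
# 4% of annual global turnover or EUR 20 million, whichever is greater.
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

print(max_gdpr_fine(100_000_000))    # smaller firm: the EUR 20M floor applies -> 20,000,000.0
print(max_gdpr_fine(5_000_000_000))  # larger firm: 4% of turnover -> 200,000,000.0
```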

Apart from all your usual data protection and cybersecurity grief, the real shift of power of the GDPR comes in the form of individual rights, specifically in terms of privacy. This nuance is important culturally, because Europeans have generally had more constitutional protections that relate to privacy than say freedom of speech.  And from a business perspective, what that means is that individual consumers will have incredible leverage over organizations.

The GDPR will give individual consumers the following powers:

– The right to be informed

– The right of access

– The right to rectification

– The right to erasure

– The right to restrict processing

– The right to data portability

– The right to object

– Rights related to automated decision making and profiling

All of this sounds pretty straightforward, but think of all the resources required to implement and comply.  To begin, anything that could be considered “personal data” is swallowed up by the GDPR. This could be a name, a credit card number, an IP address, or your preferences.  As you can imagine, the list can go on and on. This begs the question: have you identified all possible pieces of “personal data” within your organization?  By the way, charities are not exempt from the GDPR, so if your thought is that your well-meaning good-cause not-for-profit will be given a pass, I wouldn’t bet the farm on that sort of wishful thinking.

Of course, each of the rights presents its own set of headaches for the organization, but I will pick the first, “the right to be informed,” as an example. Think Equifax. Think Uber. Now think about how to notify those tens and hundreds of millions within 72 hours. That is the sort of headache you are going to have to deal with.

A single blog post is not going to give you all the answers you need regarding the GDPR, but I will close with this: the Data Protection Officer (DPO) could end up making or breaking you. The comparison to the Chief Compliance Officer is not right, because the DPO has some incredible powers that other C-Suite officers may not have.  For example, the DPO must:

– Act “independently”

– Not take instructions from their employer regarding the exercise of their tasks

– Have expert knowledge of data protection law

– Be provided with sufficient resources

– Not be dismissed merely for performing their tasks

– Report directly to the “highest management level”

And guess what?  You could be fined for not allowing your DPO to do their job!  If this GDPR thing is starting to give you some unexpected heartburn, it would be completely expected.

While I would like to believe the intent of the GDPR is to instill some good data protection and cybersecurity habits into all of us, remember what is driving it: a focus on privacy and a very big stick (with no apparent carrot in sight).  The coffers in Brussels need to be refilled, so don’t be surprised if the bureaucrats are looking across the pond for a way to do just that.

In closing, a very Merry Christmas and Season’s Greetings!  May the Holiday Season and the New Year be full of health, happiness, and success for you and yours!  See you in 2018!

By George Platsis, SDI Cyber Risk Practice

December 5, 2017