Ask Yourself: Why Do You Amass Data?

Unless you have been living under a rock, you have probably noticed that a growing number of people are not too pleased with Facebook and Alphabet Inc., parent of Google and developer of the Android mobile operating system.

What started the recent frustrations were the revelations from a Cambridge Analytica employee about how the company harvested information from 50 million Facebook users.  I want to skip over the part about how the data was used, because that’s the fog in this storm and the distraction is entering “funny cat video” territory.

Where we should be focusing our thinking is here: the data was harvested in the first place, into one central repository.  That’s the issue.

Ask yourself: why do we amass data?  Let’s get our hands dirty to answer that question.

Raw data is usually amassed for one or more of these reasons:

1) To understand something,

2) To develop something, and

3) To sell something.

It’s all pretty straightforward.

The only other reasons to amass data would be that you are a hoarder or doing something really creepy.

I’m going to go out on a limb here (insert Bugs Bunny-level sarcasm), but my guess is Facebook and Google were doing all three, at the very least.

It’s actually quite transparent when you understand the business models these companies rely on.  They need to “understand” you in order to “develop” something so that they can “sell” it to you.

Do you see the hook?  In order to “understand” you, they need to entice you somehow, because this isn’t your run-of-the-mill brick-and-mortar retail shop or professional services company that you hire for a specific job.

So how are you enticed?  Simple: funny cat videos, “free” software and apps, and dopamine hits.  In other words, by exploiting a “vulnerability in human psychology,” as the founding president of Facebook put it.

And of course these companies will ask, while you’re using these services: help us “enhance the user experience” by telling us everything about you.  And if that’s too much time for you, just click on this “I Accept” button and we’ll scoop up whatever is lying around on your device.

Now, take all that data we’ve accumulated on you and start crunching it through algorithms and ever-more-powerful AI and we’re going to start predicting your every move.  Why, we may even “understand” you so well that we will develop not only a product, but an idea, say a political idea, that we can sell you!

Okay, okay.  We may not be able to do that, something about election laws, so we’ll just sell the data to somebody who can.

Back to being serious: to anybody who has been sincerely following the digital evolution, information security, big data, artificial intelligence, and algorithms over the last 10-15 years, the Cambridge Analytica, Facebook, and Android revelations are worthy of no more than a yawn or a shrug.

Here’s why: The system operated exactly as it was supposed to.  And that is the scary part.

In our warp-speed attempts to improve the user experience and make money, we lost sight of a few things that matter to us, like protecting our data and putting limits on what really needs to be gathered.

For example, if I were using certain Facebook features on an Android device – disclaimer: I don’t and never will – and I have your number saved in my contact list, what consent have you given me to hand your phone number to Facebook via Google?  Chances are you never gave consent, but even if you’re not a Facebook user, Facebook may have your phone number, which is one more piece of data they can collate.

Spend a few minutes and check out Google’s Privacy Policy.  No need to spend too much time on it, since most privacy policies take about 15-20 minutes to read.  But just spend two minutes on the section “Information we collect” and see how you feel after reading.

In a simpler time, smaller organizations would have the living daylights sued out of them if client information leaked, especially if non-disclosure agreements were signed.  Really, NDAs can be considered a type of non-digital information security protocol.  But that close guard on information, a responsibility really, never transferred over to the digital world.

Don’t be surprised if there’s a sea change on the horizon regarding data security, particularly when it comes to personal data.  Users and clients may begin to place pressures on the data collectors to explicitly state what they’re doing with the data, how they’re protecting it and – the key to me – what happens if your data makes it out of your vaults.

If we start to pressure for legislative and regulatory changes, along with more civil legal action – and I think we will – data collectors will not only see their liability exposure increase, but I think you’ll see a change in business models as well.  That’s why my advice is to step up your cybersecurity game if you’re collecting information on individuals and clients.  We’ve heard a lot of carrot-approach talk over the last few years, but the general public is starting to get fed up and will want to see some stick used on the data collectors.

Again, remember: the system worked exactly as it was designed to.  Some don’t like the fact that Bob had access to the data.  But keep this in mind: it’s quite possible that we haven’t yet heard of an Alice, Joe, and Sally who had access to similar, or even more, data.

And my suspicion is that some people are having very sleepless nights these days because, while their technical cybersecurity measures may have been all right, these same people made it a policy to share and sell the data for their own interests, perhaps sharing with some, selling to others, and withholding altogether from another group.

I don’t believe the general public will be too pleased when they begin to realize they were not the consumer but rather the product, which is all the more reason, if you’re collecting data, to be sure you’re taking the appropriate steps to protect it and not misuse it.  It could end up being costlier than you initially thought.

 

By George Platsis, SDI Cyber Risk Practice

April 3, 2018

 

 

Why “Security” and “Efficiency” Should Never Be Used in the Same Sentence

We’re marching along well into 2018, and I think it’s safe to say we’re not experiencing a cybersecurity revolution.  Sure, there has been some great advancement in tech, with AI and blockchain applications beginning to steamroll.  It seems if you add “blockchain” to whatever you’re doing, you’ll get a bump in business.  Really, this happened in late 2017.

And as an aside, “blockchain” should not be synonymous with cryptocurrencies.  They are distinct from each other.  I personally think blockchain technology is fantastic, but I am still a bit of a skeptic on the cryptocurrency front, especially if you missed the initial few “investment” waves.  With that said, economic realities throughout the world do make cryptocurrencies attractive to many, so I am not writing them off.  It is just too soon to tell for me, which means I see them more as a speculative commodity – today – instead of a means of conducting daily financial transactions.

So great, we have a lot of tech progress, but what does that mean for keeping our data protected?  Well, because we haven’t seen the “cybersecurity revolution” mentioned above, my feeling is that we’re still tripping over ourselves, which means we’re still getting the basics wrong.  If you need a refresher on some of the basics, here is an older post from the summer of 2017.  More than half a year later, everything still applies, and that tells me there is something fundamentally wrong with how we approach our cybersecurity problems.  And I’m going to present to you one of the most galactic-sized problems we face.

I have often said that you shouldn’t use “security” and “efficiency” in the same sentence (okay, I know I am here, but it’s to define context!).  The reason is quite simple: the terms are contradictory in nature.  But I continue to hear nonsensical comments like “efficient cybersecurity” and it makes me wonder: do these people have a clue?

I get it: we look for efficiencies because – at least in theory – efficiencies make us more productive, and better productivity means greater profits (or whatever “metric for success” you use).

Efficiency, as a concept, works great when you don’t have cataclysmic costs hitting you every so often (in this case, we call them painful cyber breaches).  And why is that?  It’s because efficiency makes you, wait for it: efficient.  Efficiency doesn’t make you strong, resilient, robust, or antifragile.  It just makes you efficient, which in many cases means you build fragility into your system.

Let’s visualize this.  Think of an artistic glass sculpture.  It’s gorgeous, a total masterpiece.  The sculptor has used every technique in the book to make this piece of art look the way it does.  In terms of art, it is the most efficient use of glass known.

And then I drop a glass marble on it and the entire thing shatters.

That’s what we’re doing in our supply chains, our enterprises, and, if we are utterly foolish (we have a good track record of that), our soon-to-be “smart” cities (note: “smart” cities are actually pretty dumb, but that’s for another post).

We always look at things from the upside.  We look at “what can be made” most of the time.  This is a good thing, to be honest.  Being positive is good for the soul and helps us innovate.  But there is a looming downside that, in many ways, we are pretending doesn’t exist.  We rarely look at “what can be lost.”

The problem is that a couple of decades of building fragility into the system means there are now more ways for things to be lost, and our blinders are, in large part, a function of this entire “efficiency worship” in the business world.

And that’s why I think security measures are not getting any traction: security is, at its core, a redundancy.

So here is my easiest way of explaining it: what is the “most efficient” way into your house?  A house with no door.  Efficiency means reducing impediments, or, put another way, stripping out redundancies.  A “door” is an impediment to entering your house, which makes it a redundancy.  A lock on the door is another impediment (redundancy).  So is a fence.  Or a security guard.  Or a moat!

So why do we build all these redundancies into our homes?  Simple: because we want to protect our homes.

That’s the mindset change that’s required if we want to emphasize the security in cybersecurity: build redundancies, reduce fragility, and be capable of withstanding shocks.  Is that more expensive on the front end?  Yes.  But we’ve entered the age of the “massive” shock, where billion-dollar losses are within the realm of the possible.  Not many companies can survive that, and unless we go down the “too big to fail” road, government subsidies to help big corporations weather those losses are not a likely option.

Efficiencies in business are great, but in order for them to be effective, a precondition needs to exist: nothing goes wrong.  We’re finding out that in the cybersecurity world – something that touches everything – a lot is going wrong.

 

 

By George Platsis, SDI Cyber Risk Practice

March 6, 2018

The End of Evidence

Perhaps you noticed from a recent Vanity Fair publication that Oprah Winfrey has three hands and Reese Witherspoon has some odd-looking legs. Of course they really don’t. This was just “magic gone wrong” in the world of photo editing and likely invoked more than a few Homer Simpson “d’ohs!” and forehead smacks.

Goofy mistakes aside though, some photo editing and CGI work has been quite impressive and will surely get better.  AI is even playing a role in this space. We’re going to keep this blog G-rated, but if you’re following the technology, it is possible to put somebody’s face on somebody else’s body in videos that are highly suggestive. Thankfully, at quick glance you can still tell these are fakes, but for how long will the naked eye be able to spot a fake?

So what do fake images and videos have to do with cybersecurity? Well, it’s a question of data integrity.

You see, there was a time when we considered a picture or a video definitive proof of something having happened.  We are well into the early stages of “that may no longer be the case,” and here is why.

Hollywood and LA music studio tricks have been commercialized and miniaturized. Steven Soderbergh (of the Ocean’s Eleven remake and series) recently said at Sundance 2018 that he only wants to shoot movies on iPhones from now on, just as his latest horror-thriller Unsane was. As somebody who lived a past life as a DJ and producer, I can tell you that music production is no different. I literally have the tools to produce a studio-quality album on the same computer I’m typing this blog post on.

Now, given that I have these tools at my fingertips, with some work and practice, I can really do some incredible things. Never mind simple audio editing, like adding pauses or cutting out certain pieces of what was said. I can alter things like voice inflection, pitch, speed, you name it. So not only can I change what you said, I can change how you said it.

 

Does this worry you?  It should.

And I am certain you have heard the term “Photoshopping” (which comes from being able to use Adobe’s Photoshop to create something that may not have happened). Sure, it’s fun to add filters to your pictures or cut and paste things to make goofy-looking pictures, but what happens when these fakes become indistinguishable from the real thing?

Now you really should be worried.

The technology is good enough to make that happen, especially if you have a dedicated and meticulous user trying to alter the data. If somebody is committed enough, they will go pixel by pixel, nanosecond by nanosecond, to eliminate all possible traces of fraud. Add AI assistance into this mix and the process will only become easier.

We’ve clearly fallen behind the times legislatively with respect to cybersecurity laws. I’ve made comments in the past that we still don’t have basic terminology down right. For example, “stealing” somebody’s emails is profoundly different from “copying” somebody’s emails, but we still have too many “experts” and pundits using the terms interchangeably, not knowing the difference.

My concern is we’re going to fall behind the curve when it comes to the integrity of evidence as well. Consider this scenario: a crime happens and the victim calls the police to investigate. The perpetrator knows that the victim has a video system protecting their facility, but is able to hack into that system and alter the video on the DVR. Of course, the investigators will ask for the video, but will only find out – from the video – that no crime took place. Or worse, the victim will get accused of or charged with making a false claim!

 

Have some integrity.

The key to all of these problems is data integrity. Some of our systems have ways to ensure the data hasn’t been tampered with. For example, most emails have some sort of stamp burned into the header properties, meaning that it is pretty tough to fake emails (tough, though not impossible, if somebody is truly committed and depending on the email system being used). The same, though, cannot be said for picture, video, and audio files.
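As a concrete, and merely illustrative, example of such a stamp (the author does not name one), many mail systems attach a DKIM-Signature header, a cryptographic signature over the message. Here is a minimal Python sketch, using only the standard library, that pulls that header out of a raw message; the message and signature values below are made up:

from email import message_from_string

# A made-up raw message; the DKIM values are placeholders, not a real signature.
raw = """From: alice@example.com
To: bob@example.com
Subject: Quarterly numbers
DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=selector1; bh=...; b=...

The body of the message goes here.
"""

msg = message_from_string(raw)
# The DKIM-Signature header covers selected headers and a hash of the body,
# so tampering with either should cause verification to fail.
print(msg.get("Subject"))
print(msg.get("DKIM-Signature"))

Actually verifying that signature requires the sender’s published public key and a DKIM library; the point here is simply that the “stamp” is machine-checkable in a way that a picture, video, or audio file typically is not.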

Are there ways to ensure digital data has not been tampered with? Yes, the most common being the practice of hashing. Super simple version: take a file or text, run it through a “checksum” utility, use some algorithm (MD5, SHA-256, take your choice) and watch some funky garble get generated. If the original file or text is changed, even by one character, the garble will be completely different.

 

For example, the paragraph above, when run through a SHA-256 calculator, gives the following string:

7C15BFDDE577EA8C58BB317741FEF1017CC4A5EA052086226B61F12B057BE648

If I take out the quotation marks around the word checksum, the string changes to:

D8536F24D5695955FB65A03D1309959971509F6A550E28E8660937795D943CF8

And if I put back the quotation marks but change the capitalization of “Are” to “are,” the string changes to:

F450B0275C9B61687601A6F98C698769D1880AA95F4A719A3A11A00C5E0426EE
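If you want to try this yourself, you don’t need a special website; most programming languages ship with hashing built in. Here is a minimal sketch in Python using the standard hashlib module (the sample text is illustrative, not the exact paragraph above):

import hashlib

text = "Are there ways to ensure digital data has not been tampered with?"
print(hashlib.sha256(text.encode("utf-8")).hexdigest().upper())

# Change a single character of the input and the digest is completely different.
tampered = text.replace("Are", "are")
print(hashlib.sha256(tampered.encode("utf-8")).hexdigest().upper())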

This is a problem we really need to spend some serious time on, because new technology will really play a role in how we treat and handle our evidence. Best we start having this discussion before the stakes are higher than Oprah’s hand being misplaced.

 

By George Platsis, SDI Cyber Risk Practice

February 13, 2018

Operation Fitbit

The Chinese philosopher Laozi gave this saying to history: “A journey of a thousand miles begins with a single step.”  Two thousand five hundred years later, give or take a century, modern technology makes it simple to count those steps. And, thanks to companies like Fitbit, counting to 10,000 steps each day has become something between a relatively common practice and a maniacal pursuit, depending on one’s personality and predilections. And, as is so often the case, the practice has produced unforeseen consequences, some of which have now become a military vulnerability.

Writing in Wired, Jeremy Hsu offered a fascinating look at the way a seemingly innocent fitness craze can morph into something potentially far more sinister. A San Francisco-based company called Strava offers a website and mobile application aimed at connecting athletes around the world. It enables users to track their activities and share their workouts with friends. It tracks all kinds of metrics, and even offers a “suffer score.”  Just the kind of thing to engage highly competitive individuals, like, say, military personnel. What could possibly go wrong?

Well, Strava publishes a “heat map,” which shows the clusters of activity associated with highly active people who are contributing their data. An Australian college student studying the map noticed what appeared to be substantial activity in certain areas of Afghanistan, Iraq, and other places where there were American military bases. Soon, other analysts saw activity that could be associated with French and Italian military bases, and even CIA “black sites.”

As if this weren’t enough, Hsu points out that “the bigger worry from an operations security standpoint was how Strava’s activity data could be used to identify interesting individuals and track them to other sensitive or secretive locations.”  Capturing the concern that has arisen, Jeffrey Lewis of the Middlebury Institute of International Studies at Monterey, CA, said Strava “is sitting on a ton of data that most intelligence entities would literally kill to acquire.”

Naturally, the Department of Defense and the CIA are studying the issue closely, and who knows, they might even find some strategic advantage. But in the meantime, it is another timely reminder that no matter how fit you are, the cyber world can be a very difficult place to traverse.

By Tom Davis, SDI Cyber Risk Practice

January 31, 2018

Treat Your Data Like Cash

How annoyed are you when you find out you lost some cash?  Whether it is a few bucks in your jeans pocket or that “emergency stash” under the mattress, losing that “cold hard cash” is a feeling that always twists your stomach.  Sometimes you blame yourself.  Sometimes you blame others.  Depending on the amount lost, your emotions could range from the standard “how could I be so stupid?” to a profanity-laced tirade that is not suitable for print here.

Question: do you feel the same way when you experience credit card fraud?  My instinct is that while you would feel some sort of violation and negative feelings, it’s just not “the same” as losing cash.

I say this because sometimes you have ways of getting back your money with credit card fraud.  There’s hope.  It’s painful and takes up a lot of your time.

But there’s a chance.  Also, you haven’t really lost anything “tangible” if you have just lost some purchasing power on your credit card until the fraudulent charges get reversed (yes, I accept that for some this is a bigger problem than for others).

But you really begin to feel the hurt if you can’t get these charges reversed and you do have to pony up the cash to cover it.  So, it comes back to cold hard cash again.  And usually the only way we get our cash back is because of a Good Samaritan or divine intervention.

With that thought in mind, here’s my first 2018 cyber comment to you: treat your data like cash.  My feeling is that most of us are treating our data the way we treat credit card fraud, hoping that we can somehow get it back.

I’m going to tell you that once your data is out in the wild, you should treat it as gone for good.  Sure, you may come across some cyber Good Samaritan or get some much needed divine intervention, but really, your data is gone.

I find myself both chuckling and smacking my forehead when I hear “if you just pay the ransom, we’ll give you back your data and destroy all copies we have.”  Okay, if you really want to believe the person who just ripped you off and extorted you (and whom, by the way, you’ll probably never see in the flesh), fine, but that’s a personal problem I can’t really help you with.

That’s why I’m keeping this post short and simple, hoping that 2018 brings about a sea change in how we treat our data.  Information is just another form of currency (arguably, the most valuable), which is why, if you believe the old saying “cash is king,” we should really start thinking “data is king” as well.

Just start believing that once your data is compromised, it’s gone for good.  This is the case, of course, unless you can verify that you have gotten all of it back, verify that no copies have been made, and verify that your data has not been tampered with.  I believe we have enough evidence to show this is no easy task, so let me make this easy for you: just assume you lost some “cold hard data” in the process.

Let me wrap up with these last few words.  There has been a shift in the last 18 months away from the belief that cybersecurity is purely a tech issue.  This is a good step, even if it’s a few years late to the game in my opinion.  I also like that there have been more frequent calls for a “cybersecurity culture change” in order to stop the data loss.  Regrettably though, there has been little in terms of easy-to-explain-and-execute culture change.

That’s why I’m calling for data to be treated like cold hard cash.  If we can burn that mentality into our minds, I think we’ll take a giant leap forward in protecting our data.  Have a Happy 2018 full of good health, happiness, prosperity, and meaningful cybersecurity!

 

By George Platsis, SDI Cyber Risk Practice

January 9, 2018

Warm Holiday Wishes

As the holiday season gathers steam, we traditionally pause to take stock of our many blessings. This year we can find one in the just released U.S. National Security Strategy.  It appears we’ve won the war on climate change, and climate change is no longer a national security threat, so a long winter’s nap should be marginally easier to come by.  However, the strategy does recognize the growing threat from cyber weapons, and the evidence of that threat is abundant.

CSO online just issued its security predictions for 2018, and predictably, it forecasts ever increasing state sponsored cyber attacks.  As the article notes, “The usual suspects for state-sponsored attacks — North Korea, Iran, and Russia — don’t have much to lose by continuing their attempts to extort, steal, spy and disrupt by infiltrating information systems. All are already heavily sanctioned, and the consequences — at least those we know about — in response to state-sponsored attacks have been minimal.” Their forecast is consistent with the outlook of Experian, which pointed to critical infrastructure as a sector where breach activity by nation states is likely to rise.

How timely, then, that FireEye just announced that Schneider Electric SE had received a lump of coal in its business stocking. Schneider provides safety technology, and one of its products, Triconex, is widely used in the energy industry, including at nuclear facilities and oil and gas plants. The breach victim is said to be in the Middle East, and some cyber experts suggest Iran sponsored an attack on Saudi Arabia, which, if true, would hardly be shocking news. More importantly, this seems to be the first reported cyber breach of a safety system at an industrial plant. It opens a new front in cyber warfare, because by compromising a safety system, hackers could destroy the ability of an industrial plant to identify an attack or limit the damage.

This comes as security experts are closely watching developments in Ukraine, where the holiday season in recent years has been marked by significant attacks on the power grid. Officials from other nations have been studying the attacks on Ukraine to determine what additional safety measures need to be employed to lessen the vulnerability of power grids around the world. It’s fair to say that if Ukraine is again victimized, the repercussions will ripple widely. When we say we hope your holiday season is warm and bright this year, we really mean it.

By Tom Davis, SDI Cyber Risk Practice

December 19, 2017

 

An Eye on GDPR

There is a lot of talk about the European Union’s General Data Protection Regulation (Regulation (EU) 2016/679).  And rightly so, because it will impact a great many organizations, many of which reside in the U.S.  Set to come fully into effect May 25, 2018, the GDPR has understandably caused a lot of headaches because it is a wide-sweeping and costly regulation, especially if you are in violation.

Clearly, the first question to ask is whether the GDPR applies to you. If it doesn’t, you are in the clear (but that is not an excuse to relax your data protection measures).  If it does, well, you have work to do if you haven’t been on top of your GDPR compliance. This is especially true if you are a big organization, are not based in the EU, and have a lot of EU customers and clients.

I would like to take a step back here for a moment and perhaps calm some of the GDPR hysteria out there. Yes, some commenters and compliance professionals are rightly having heartburn over the GDPR. Others, like Elizabeth Denham, the UK Information Commissioner, have said not to freak out, stating that the GDPR should be looked at as an “evolution” in data protection, not a revolution.

My humble opinion is that if the GDPR applies to you and you are based in a non-EU country, your worry should be greater than zero.  Here is why: the EU needs money. And who do you think they will fine first?  EU-based organizations or non-EU-based organizations?  Option 1 seems like it could be detrimental to the EU economy (something about hurting your own), but Option 2 seems like a nice windfall being extracted from a competitor.  If I’m the EU, I know who I am fining first.

But the fines can’t be that bad, can they?  Yes, they can be that bad. Violators of the GDPR can be fined up to 4 percent of annual global turnover or €20 million, whichever is greater. That sounds like some industrial-strength motivation to take the GDPR seriously, especially if you could end up near the top of the pecking order.
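To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python; the turnover figure is purely hypothetical:

# GDPR ceiling: the greater of 4% of annual global turnover or €20 million
annual_global_turnover_eur = 2_000_000_000   # hypothetical company with €2 billion in turnover
fine_ceiling_eur = max(0.04 * annual_global_turnover_eur, 20_000_000)
print(f"Maximum fine: EUR {fine_ceiling_eur:,.0f}")   # EUR 80,000,000

In other words, once turnover clears €500 million, the 4 percent figure, not the €20 million floor, is what sets the ceiling.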

Apart from all your usual data protection and cybersecurity grief, the real shift of power in the GDPR comes in the form of individual rights, specifically in terms of privacy. This nuance is important culturally, because Europeans have generally had more constitutional protections relating to privacy than to, say, freedom of speech.  And from a business perspective, what that means is that individual consumers will have incredible leverage over organizations.

The GDPR will give individual consumers the following powers:

– The right to be informed

– The right of access

– The right of rectification

– The right of erasure

– The right to restrict processing

– The right to data portability

– The right to object

– Rights related to automated decision making and profiling

All of this sounds pretty straightforward, but think of all the resources required to implement and comply.  To begin, anything that could be considered “personal data” is swallowed up by the GDPR. This could be a name, a credit card number, an IP address, or even preferences.  As you can imagine, the list can go on and on. This raises the question: have you identified all possible pieces of “personal data” within your organization?  By the way, charities are not exempt from the GDPR, so if your thought is that your well-meaning, good-cause not-for-profit will be given a pass, I wouldn’t bet the farm on that sort of wishful thinking.

Of course, each of the rights presents its own set of headaches for the organization, but I will pick the first, “the right to be informed,” as an example. Think Equifax. Think Uber. Now think about notifying your supervisory authority within 72 hours of discovering a breach, and then notifying the tens or hundreds of millions of affected individuals without undue delay. That is the sort of headache you are going to have to deal with.

A single blog post is not going to give you all the answers you need regarding the GDPR, but I will close with this: the Data Protection Officer (DPO) could end up making or breaking you. The comparison to a Chief Compliance Officer is not quite right, because the DPO has some incredible powers that other C-suite officers may not have.  For example, the DPO must:

– Act “independently”

– Not take instructions from their employer regarding the exercise of their tasks

– Have expert knowledge of data protection law

– Be provided with sufficient resources

– Not be dismissed merely for performing their tasks

– Report directly to the “highest management level”

And guess what?  You could be fined for not allowing your DPO to do their job!  If this GDPR thing is starting to give you some unexpected heartburn, that would be completely expected.

While I would like to believe the intent of the GDPR is to instill some good data protection and cybersecurity habits into all of us, remember what is driving it: a focus on privacy and a very big stick (with no apparent carrot in sight).  The coffers in Brussels need to be refilled, so don’t be surprised if the bureaucrats are looking across the pond for a way to do just that.

In closing, a very Merry Christmas and Season’s Greetings!  May the Holiday Season and the New Year be full of health, happiness, and success for you and yours!  See you in 2018!

By George Platsis, SDI Cyber Risk Practice

December 5, 2017

 

Prepare to Defend Your Reputation

“Lose money and I will forgive you. Lose even a shred of reputation and I will be ruthless. …Wealth can always be recreated, but reputation takes a lifetime to build and often only a moment to destroy.”

Warren Buffett

There is widespread acknowledgement that corporate reputation has significant value.  Calculating that value with any precision is a bit dicier. Many have attempted to quantify reputational value, and estimates vary from 20 percent on the low end to 70 or 80 percent on the high end. One can accept that there is value, and that the value represents an asset that must be protected and, ideally, enhanced. An article in the Harvard Business Review sought to assess reputational risk.  It posited there were three determinants of reputational risk, saying “Three things determine the extent to which a company is exposed to reputational risk. The first is whether its reputation exceeds its true character. The second is how much external beliefs and expectations change, which can widen or (less likely) narrow this gap. The third is the quality of internal coordination, which also can affect the gap.”

Today I want to focus on the second of those determinants. A recent article by Dan Kiely in Entrepreneur looked at how reputation of smaller firms can be adversely affected by cyber breaches.  “…don’t be fooled into thinking that you have to be a Fortune 500 corporation to be a target. Cybercrime is an equal opportunity menace. Larger mature companies are hit most often, but smaller scale-ups are hit the hardest, and it takes longer for them to recover. Only 14 percent of small businesses rate their ability to mitigate cyber risks, vulnerabilities and attacks as highly effective. In today’s digital economy, winning and maintaining the trust of your customers is central to business growth, and nothing erodes trust quite like a cyber breach.”

The many people who have a trust relationship with a business – customers, clients, shareholders, investors, and employees alike – expect that certain standards will be met with regard to cybersecurity. They do not expect perfection, and may even have some tolerance for breaches, if the business can show that it has engaged in a rigorous process to defend itself against being breached and communicates effectively before, during, and after a breach. However, if analysis of the breach exposes unexpected shortcomings in preparation and/or response, beliefs and expectations about the company will change for the worse, and reputation will suffer.

Heed Warren Buffett’s words: protect your reputation.

By Tom Davis, SDI Cyber Risk Practice

November 21, 2017

 

Controlling Your Cyber Supply Chain

Back in September, I wrote a piece that questioned whether or not you trust your network. As an extension of that piece, this post focuses on your cyber supply chain.

Let’s begin with this simple premise: you may never fully know who is a part of your cyber supply chain. Why do I say that?  Because it is effectively impossible for you to keep a watchful eye on all parts of the supply chain; it would be a full-time job in itself. In my view, the only entity that could have full control of its cyber supply chain is a government (emphasis on could, because even for a government, full control of the cyber supply chain would be an incredibly difficult and expensive proposition).

If you accept that simple premise, then by extension, you will have no problem accepting this one as well: the probability of you being breached is greater than zero.

If you are with me so far, this is excellent. It means you have not bought a bag of magical beans from vendors or consultants who are already preaching to you that you are on the way to the cyber secure promised land.

My point is this: you don’t know what you don’t know, so when that is the case, ensure that you are taking some extra cautionary steps. And this is why I will reference a very handy tool from NIST that outlines some basic principles regarding the cyber supply chain. I won’t go through the entire tool but just focus on two areas: principles and key risks.

Principles

I’m not going to reinvent the wheel, so I will simply say that the majority of what you need to know about cybersecurity is captured within these three principles:

1) Develop your defenses based on the principle that your systems will be breached.

2) Cybersecurity is never just a technology problem, it’s a people, processes and knowledge problem.

3) Security is Security.

I recommend viewing the tool, but here is my brief commentary on each point:

1) If you believe – even for a nanosecond – that you have an impenetrable system (or let somebody convince you that one is possible), you may also believe that all is well in the world right now.  Caveat: even if we achieve some incredible technology, like Quantum Key Distribution (QKD) for communications, there will still be other threats, which is a perfect lead-in to the next comment.

2) If you are not placing considerable emphasis on the human element, your cybersecurity strategy will always fail.  What started as a hypothesis of mine has turned into a truism for me over the years: I am so certain of the human element issue that I am willing to personally guarantee your cybersecurity strategy will fail, 100% of the time, if you are not showing significant bias toward solving the human element of the problem. Plenty more on this issue can be found in previous SDI posts and on LinkedIn.

3) If your cybersecurity strategy is independent of your security posture, you’re looking for trouble. This is why we say cybersecurity should be viewed through an organizational risk management lens. This means that if your IT department is not working with your security department, and both are not working with all other departments in the organization, the question is not “when will I get breached?” but rather “how badly will I be breached when it happens?”  Leadership at the top is crucial and absolutely necessary. The C-suite needs to adopt a risk management mentality and instill a culture of being “security smart” within the organization.

You are probably wondering what I mean by “security smart” right now.  It’s simple: make sure everybody has a generally good idea of what the cyber risks are.  Don’t be paranoid.  Just get your staff to understand that these threats are real and that they can impact your organization and their jobs.  You do not see people freaking out that a fire may spontaneously erupt in the middle of your organization’s lobby, but people are trained enough to know that if they smell something burning or see some smoke, it’s best to warn others, quickly investigate, and, if needed, pull a fire alarm or call 911.

We don’t have “hall monitors” walking around our offices checking for fires.  It’s something all members of the organization watch out for (in large part because of personal safety).  Well, if your company goes bankrupt because all its IP has been stolen, I think that impacts your personal safety too.  So, start a program of being “security smart” within your organization (hint: SDI Cyber can help there).

Key Risks

The next section is all straightforward, again from the NIST tool.  All you need to know is that these risks exist and you should be thinking of ways to deal with them. These risks include:

  • Third party service providers or vendors – from janitorial services to software engineering with physical or virtual access to information systems, software code, or IP.
  • Poor information security practices by lower-tier suppliers.
  • Compromised software or hardware purchased from suppliers.
  • Software security vulnerabilities in supply chain management or supplier systems.
  • Counterfeit hardware or hardware with embedded malware.
  • Third party data storage or data aggregators.

It’s a bit of a raw deal, but yes, you have to worry about everybody else that’s part of your supply chain. And here’s the real kicker: you may have little control over these risks beyond altering your supply chain, which could be an expensive proposition. This is where risk management comes into play: do you accept the risk (and the associated potential costs) or do you do something about it?  That’s your decision, but it’s something you need to think about.  Otherwise, you’re just setting yourself up for a world of hurt that you may not be able to recover from.

By George Platsis, SDI Cyber Risk Practice

November 7, 2017

Beary Scary

We are slowly easing through the languorous days of fall, reluctantly trading daylight for darkness, feeling the crunch of leaves, inhaling the smoke-tinged air that marks the fullness of the season. Soon it will be All Hallows Eve, a night when witches ride high across cloud-strewn skies and spirits restlessly roam the earth below. They will be joined by millions of children less concerned about the spirits than the potential bounty that awaits behind closed doors. Tiny princesses will race alongside pirates and ballerinas, each eager to ring a doorbell and shout in unison “trick or treat!” Older adolescents and young adults will gorge on horror shows, feasting on the fright inspired by vampires, werewolves, goblins, and countless maladjusted individuals who act out in truly horrific fashion. Those who’ve been around for a while may think of frightening figures such as Nosferatu, Frankenstein’s monster, the Mummy, and more recently Candyman, Pennywise, Leatherface, and Berserk Bear.

Astute readers may have tripped over Berserk Bear, but Berserk Bear may be very scary indeed. The world was introduced to Berserk Bear in CrowdStrike’s 2014 Global Threat Intel Report. “Proactive analysis during 2014 revealed another Russian actor that has not encountered public exposure, yet appears to have been tasked by Russian state interests. BERSERK BEAR has conducted operations from 2004 through to the present day, primarily aimed at collecting intelligence but has also provided capability in support of offensive operations in parallel to the Russia/Georgia conflict in August 2008.”

Since then, the legend of Berserk Bear has grown. In 2016 it was reported to be attacking energy interests in the Middle East. In September of 2017, Symantec said Berserk Bear had penetrated firms in the U.S., Turkey, and Switzerland, and had the ability to cause mass power outages, shut down electrical grids, and disrupt utilities. That report was confirmed last Friday, when the Department of Homeland Security (DHS) and the FBI issued an alert warning critical infrastructure companies of “advanced persistent threat (APT) actions targeting government entities and organizations in the energy, nuclear, water, aviation, and critical manufacturing sectors.”

What we know at this point is that the attacks have been successful, and critical parts of the infrastructure have been breached. DHS has reported the attack is ongoing. There are no reports of damage to this point. We are left to speculate as to motivation, and what might happen next.  Like many scary stories, this one may have a sequel. Stay tuned.

By Tom Davis, SDI Cyber Risk Practice

October 24, 2017

 
