If you are an engineer or software developer, there is a good chance that you have heard the phrase “security by design” before (sometimes also referred to as “secure by design”).  If you are unfamiliar with the phrase, it pretty much means what you think it would mean: something has been designed, developed, and manufactured with security in mind.

Security by design is an extremely good practice, but unfortunately it is underutilized.  Think of IoT devices as an example.  Many of these devices are driven by market forces and cost.  When there is pressure to get a product to market quickly, security – arguably a form of quality control – takes the hit.  The reason is simple: secure code is expensive to write, because it requires a great deal of vulnerability testing.

Here’s another way to know that “security by design” is not commonplace with IoT devices: default passwords.  If security by design were really being employed, every device would force the user to create a new administrative username and password upon first use.  Far too often, though, people are satisfied with the default password.

Let’s use an analogy to understand default passwords.  Imagine you make a visit to your local hardware store to buy a front door for your house.  That door has a prefabricated combination lock built into it.  But there’s a catch.  The default combination for all locks on every door for sale is 00000.  Of course, there are instructions on how to change that combination, and I suspect that as soon as you install that door in your home, the first thing you’ll do is change it.  In essence, you are changing the “default password” of your door.

Now you may not have experienced this scenario for your front door, but I bet you’ve done it for some luggage locks that have combinations.  It’s exactly the same principle.

Here’s the difference if the door were built with the “security by design” methodology.  There wouldn’t be a default 00000 combination.  Instead, to “lock” the door the first time, you would be forced to set a combination code.  Otherwise, the door would simply remain wide open.
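To make the idea concrete, here is a minimal sketch (in Python, with purely illustrative names – `SmartLock`, `set_combination`, and so on are hypothetical, not a real device API) of a lock that ships with no default credential at all and refuses to operate until the owner provisions one:

```python
import hashlib
import hmac
import os

class SmartLock:
    """Toy model of a 'secure by design' lock: no default combination exists."""

    def __init__(self):
        self._salt = None
        self._digest = None  # no combination stored at the factory

    def is_provisioned(self) -> bool:
        return self._digest is not None

    def set_combination(self, code: str) -> None:
        # Force a minimal length instead of accepting something like 00000.
        if len(code) < 6:
            raise ValueError("combination too short")
        self._salt = os.urandom(16)
        self._digest = hashlib.pbkdf2_hmac(
            "sha256", code.encode(), self._salt, 100_000
        )

    def unlock(self, code: str) -> bool:
        if not self.is_provisioned():
            # Secure by design: an unprovisioned lock cannot be opened with
            # a "default" combination, because no default was ever created.
            raise RuntimeError("set a combination before first use")
        attempt = hashlib.pbkdf2_hmac(
            "sha256", code.encode(), self._salt, 100_000
        )
        return hmac.compare_digest(attempt, self._digest)
```

The design choice is the whole point: the insecure state (no combination set) is unusable rather than quietly functional, so the user cannot skip the setup step the way they can skip changing a default password.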

I trust you understand what I mean now.  And when you understand that concept, you’ll begin to see that many of the systems we rely on have not been built using the security by design methodology.  It’s quite obvious when you see the mishmash of new and legacy technologies trying to work together.  And the clearest case of a system not using the security by design methodology is the Internet itself, as decisions made decades ago made the Internet inherently vulnerable.

While the security by design methodology is almost exclusively a technical solution (you can learn a great deal by referencing NIST Special Publication 800-160), I invite you to consider that we can use security by design lessons for our organizations and even ourselves.  It’s not that odd when you think about the basic principles.

First, I’ll be reasonable: we cannot make ourselves or our organizations 100% impervious to attacks.  That’s completely unrealistic, but it does not mean we cannot make ourselves better.  Think about it like this: not all of us are going to be pro athletes or musical virtuosos, but with continuous practice and by pushing our limits, we will get better.

What are some ways we can do that on a personal level? Here are a few tips:

  • Get into the habit of changing your passwords.
  • Segregate work and personal accounts/devices.
  • Learn to identify phishing/spear-phishing e-mails.

All basic stuff, but all small steps that get us into the habit of behaving in a more cyber secure manner.  If you are doing that deliberately, it’s by design.  You’re not relying on somebody else to defend you.

How about at the organizational level?  Here are a few considerations:

  • Whitelist applications and communications. You’ve almost certainly heard of blacklisting (do something wrong, like use an app you shouldn’t, and you get an ouchy on the wrist).  Whitelisting, by contrast, lets you use only pre-approved applications, and whitelisting communications means you can send and receive messages only with certain people.  Of course, this comes at the expense of some convenience, but it may save your keister in the long run.
  • Make Red Team/Blue Team exercises a regular part of your organization’s operations. You need not have some elaborate simulation or exercise.  You just need something continual to get you in the right frame of mind.
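The deny-by-default logic behind whitelisting can be sketched in a few lines.  This is an illustrative toy, not a real endpoint-control product; the lists and names (`APPROVED_APPS`, `may_run`, the example addresses) are all hypothetical:

```python
# Allowlist (whitelist) sketch: anything not explicitly approved is blocked.
APPROVED_APPS = {"outlook.exe", "excel.exe", "teams.exe"}
APPROVED_SENDERS = {"alice@example.com", "bob@example.com"}

def may_run(app_name: str) -> bool:
    # Blacklisting asks "is this app known-bad?"; whitelisting inverts the
    # question: if the app is not on the approved list, the answer is no.
    return app_name.lower() in APPROVED_APPS

def may_receive(sender: str) -> bool:
    # Same principle for communications: only pre-approved correspondents.
    return sender.lower() in APPROVED_SENDERS

print(may_run("Excel.exe"))   # True: pre-approved
print(may_run("game.exe"))    # False: not on the list, blocked by default
```

Notice that a brand-new, never-before-seen piece of malware is blocked automatically, because the burden of proof sits on the application rather than on the defender’s list of known threats.  That is the convenience trade-off mentioned above: legitimate new tools are blocked too, until someone approves them.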

Story time: one of the best examples of Red Team testing came from an organization that employed a small, but crafty, full-time Red Team.  This Red Team would scour the facilities looking for unattended machines or vulnerable devices.  Think of getting up from your computer to go to the bathroom without locking your screen, or leaving smaller devices, like a phone or tablet, unattended.  Use your imagination as to what happens next.  This organization took the viewpoint of “better we catch you than the real bad guys.”

Ultimately, “security by design” for people and organizations means practice, practice, practice.  It’s a culture shift, but it can be done.  The alternatives are burning through a pile of cash, scrambling to recover locked-out systems, or explaining to your clients why their data may be out in the wild.


By George Platsis, SDI Cyber Risk Practice

May 8, 2018