The #1 rule
“We take security seriously” or “we take your privacy and security seriously” is what you hear from every company that offers some kind of service on the internet. It is what customers want to hear, what shareholders want to hear, and of course, what employees want to hear. After all, no one wants to be a customer, an employee or an investor of a company that openly states: “we do not know how to take care of private data”. The security-is-important message can normally be found hidden under several layers on the homepage, or at major sales events when a new service is introduced. The chances of hearing the phrase are highest after a security incident. The playbook in that case is simple:
- Data that should not have been exposed gets exposed, despite security being taken so seriously.
- The breach becomes public. (<- dirty secret: this is the actual problem for the company)
- The company goes from defense to offense, trying to regain control, and communicates the famous phrase. The idea behind the phrase is to convince the affected people and everybody else that the data leakage was an exception, something that should not have happened and will never happen again.
- The company restores trust in its services. They are sorry that it occurred despite the best security principles being applied; unfortunately nothing is 100% secure, so it was bound to happen eventually. But everyone is dedicated to security, which is the #1 priority for the company.
If it works for the company, everyone is happy. OK, except the people whose data were exposed. Nevertheless, the company is seen as one of the good ones, the service is seen as trustworthy, usage comes back and, as time passes, people forget. Eventually, people are convinced that this really was an unfortunate exception, that a rare constellation occurred:
When the moon is in the Seventh House
And Jupiter aligns with Mars
Then a security breach will occur
And private data will be exposed.
Following the playbook, the same communication strategy will emphasize that security is a top priority for the company, if not the number 1 priority. That’s the point where I start to disagree. The number 1 priority should not be security. That’s not because I want to work with insecure software. Au contraire. You put a priority on something you want to measure, to control, to report on. It is a nice approach for managers, as they can report on numbers, but it won’t make the services more secure. Security is not a number. It is treated as one, yet if a company puts a number on security, it is trying to manage something that is unmanageable. It ignores some basic facts about security.
Why can’t you report on security? Take a service that a company reports as secure. The company might state that since the service launched 900 days ago, no one has been able to hack it. Then a hack is published showing that the service had been compromised since day 1. And now? The secure service was not secure for the 900 days the company stated and sold it for; the service was never secure.
You might say that it is possible to measure how many security-related bugs are found, or how closely a service or its code follows security best practices. While you can do this, it means that software is considered secure when it follows given rules. And when something is not yet common practice, then what? Not doing it? 2FA has been possible for decades, yet only recently did it become best practice. The problem with measuring against best practices is simple: you are always running behind. A company is not going to lead on security if the only things it applies are best practices. And if the foundation that produces a number is already compromised, what is the value of the reported number?
Security cannot be a priority, it must be part of the DNA of a company. Only then does it become the foundation of everything people do. You do not put a number on how often you breathe per minute. And if you do, it is almost too late. Most of the time, it is people who do the work, and it is in our nature to fail. We are humans; failing is part of how we live and learn. A company is nothing without its employees, the people. That’s why security must be a fundamental part of everything you do. It must be part of what you are, how you approach things, how you act. Only when security is part of how people work does it become a part of the company’s DNA: it comes automatically, embedded with every task.
The #1 rule
If security cannot be the #1 rule, what is the #1 rule then? (Now the NSFW part of the text starts.) The #1 rule is: everyone on the internet is an asshole.
By everyone, I mean everyone on the internet. Let’s not get too generic, though: why include not only hackers, attackers, script kiddies, the bad ones, but also normal users of services, customers and partners? Surely the people who spend money to use a company’s service are valuable customers. They interact with the services with the best intentions. They cannot be assholes! Yes, correct. Yet, if a service is on the internet, it’s accessible. Even customers might abuse it or push it to its limits:
- 24/7 availability, worldwide
- the service must be fast
- it must work on laptops, tablets, smartphones and TVs
- support for several languages, even ones spoken in countries unknown to you
- Alexa integration
- chat functionality
- constant updates
- links from a legacy system must be maintained so they keep working
On top of this: enough perfectly normal users, and all the plans about concurrent usage and bandwidth are dust. If the service does not fulfil the customers’ expectations, they might give a 1-star rating and leave forever. Partners might demand integration, special access. Running, say, a local shop is work; doing the same on the internet is a whole new level of pain. And on top of that there are the evil users who try to hack the service, launch DDoS attacks or do other things to harm it. Sometimes you might even wonder: are these customers bringing my service down, or script kiddies? A successful promotion can bring so many users to a service that it is impossible to differentiate between good and bad access. Which might not really matter when the service is down for hours; the result is the same: no business.
As you can see, everyone can be an asshole. No exception. Depressing? Maybe. Yet, there might be hope! The rule states: everyone. Let that sink in: everyone. And yes, no exception means no exception. Remember: you and the company you work for are on the internet too. That makes you an asshole as well. There is no need to play nice, to be the good guy who makes life super easy for everyone. If you offer a service on the internet, there is absolutely no need to make it too easy for anyone. What does this mean? As an example, think about a typical service that is available on a web server. Applying the asshole rule, what can be done to make life harder for attackers, while at the same time not scaring away too many of your users, customers or partners?
- Secure the service: TLS, passwords, 2FA. There is no need to offer an insecure protocol just because some old browsers do not support TLS 1.2+. Some people need privileged access? No problem: make them use at least 2FA.
- Log as much as you legally can. Analyze the logs, find patterns, find out who is trying to break in and block them.
- Put the system behind a reverse proxy, behind an intelligent traffic management tool. If too many requests come in, block them. If a port or URL scan starts, block it. Your site only runs on HTTPS and someone tries to connect to the SMTP port? Block them. Normal interaction means only a few MB are transferred, and suddenly 10x that amount is requested? Invalid URLs? Invalid HTTP methods? Block. Block. Block. These are not the users you want or need.
- Do not simply expose your service. Use API management. Require people to register before they can use your API; you might even charge them. Hand out new keys on a regular basis. Block old keys.
- Create layers of access. Customers can access the service on the internet, partners must connect via VPN, employees must use VPN and specific hardware. The server does not need to run in your datacenter, connected to all your other valuable servers. Run it isolated, in the cloud. Think about re-creating the service from scratch, all the time, via an automated pipeline.
- Additional services are needed only by a small group of users? Separate the services. There is no need for one central access point for everything. For the additional service, is (public) DNS needed? Does the service need to be referenced anywhere, or can it be hidden?
- Does the service need to be running 24/7? Or can it be shut down at night? Do maintenance regularly and have a maintenance page available. Maybe you can show the maintenance page at night (for a few hours) and shut the system down (saving money), or actually do maintenance and update the software. Remember: a bug that is fixed cannot be exploited. You are offering the service; you make the rules about whether or not it is available.
- Information shared is marked as confidential? Encrypt it. The information is only valid until a certain date? Delete it afterwards. And data that is not intended for everyone to read? Again: encrypt it.
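The log-analysis-and-block idea from the list above can be sketched in a few lines. This is a minimal illustration, not a replacement for tools like fail2ban; the log line format, the regular expression and the threshold of 5 failures are assumptions made for the example:

```python
import re
from collections import Counter

# Assumed log format, e.g. "sshd: Failed login for root from 203.0.113.7"
FAILED = re.compile(r"Failed login .* from (\d+\.\d+\.\d+\.\d+)")

def ips_to_block(log_lines, threshold=5):
    """Count failed logins per source IP and flag repeat offenders."""
    failures = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip for ip, count in failures.items() if count >= threshold}
```

Feed the result into your firewall or reverse proxy deny list; the logs you collect are only useful if something actually acts on them.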
This is just a small list of possible actions. As you can see, it is easy to be an asshole without making the life of your users overly complicated. It is better to be an asshole than to be an idiot, get hacked and have private data leaked. The best thing about this #1 rule is: people will understand and respect it. They may even see you as a leader. It is possible for a company to act like an asshole on the internet without being seen as one. The company is seen as actually protecting sensitive data and taking measures to prevent bad things from happening, and, when something does happen, the possible consequences have already been softened (yes, encryption really is your friend).
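To make the 2FA point from the list concrete: time-based one-time passwords (the six-digit codes from authenticator apps, specified in RFC 4226 and RFC 6238) need nothing beyond a shared secret and the standard library. A minimal sketch, not hardened production code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30):
    """RFC 6238: HOTP with the number of 30-second steps since the epoch as counter."""
    timestamp = int(time.time()) if at is None else at
    return hotp(secret, timestamp // step)
```

In practice the secret is exchanged once (usually base32-encoded in a QR code), and the server accepts the code for the current time step and often the neighbouring ones to tolerate clock drift.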
With security as DNA, supported by the #1 rule, why is it then so easy to find unsecured services? After all, with security as DNA, the #1 rule is going to be followed automatically. With security in everything a company does, no one would put an insecure service live. As this still happens, there is one question left to ask: why is security not part of the DNA of companies? I think one reason is that change is hard for companies. It is easy to talk about change, much harder to actually change.
At the same time, it is far too easy to manage security by treating it as a number: so many ways to report on people trained, on security bugs found in code. For the DNA part, a company would not only have to “upskill” its current employees, it would also need to get new people on board, people who have security in their DNA already. And these people would then have to deal with all the insecure services, remove them, implement new ones, get into many internal fights and also scare away some customers. That would be real change, hard work and time-consuming. The alternative is to implement some training and reports. Guess what companies are doing. But then don’t complain the next time (your) private data is leaked.