The Ransomware Social Contract

I had been anticipating this for a while: there has finally been a publicly known case where the social contract between ransomware extortionists and their victims has been broken. The contract? That after paying the ransom, victims are given back access to their files.

What is ransomware? It is a tactic extortionist criminals now use to make money. They break into their victims’ computer systems and encrypt their data. The victims are then told to pay a ransom to get the key that decrypts the data. Imagine all the photos and work on your computer becoming inaccessible, despite still sitting on your computer. If you could pay a small amount to make the problem go away, odds are that you would.

Now multiply the volume of data a million-fold. A business is hit. Its daily operations require this data to be accessible; every second without it is money lost. And if it is a hospital? Hospitals have been hit and left unable to provide effective care to their most vulnerable patients for short periods. Most victims would rather pay a small ransom than put their work in jeopardy.

Low ransoms, and the fact that the extortionists have kept their promise of providing the decryption key, have made ransomware a viable business model. That may finally be over. One hospital paid the ransom only to have the extortionists ask for more. The ‘social contract’ is broken. It was always a possibility that the attackers would go back on their word. Now it has happened.

Ransomware is not a new phenomenon; it has been around for almost three decades. Yet it only took off in a big way in the last three years. Perhaps the rise of commercial tools such as exploit kits, which package various methods of attack without requiring much technical skill on the part of the attacker, helped.

What now? Ransomware is not about to go away. We should practise some IT security 101 to protect the data that is precious to us (yes, really). Backing up data is the old-fashioned, effective method that protects against data loss (from ransomware or otherwise). Knowing not to click on unknown links or open dubious email attachments helps too, as does keeping your operating system and software updated and running an anti-virus. These things are all IT security 101, and knowing and practising them will protect you against much more than ransomware.

What if you have been hit and do not have a backup? You could pay, but be aware that you are depending upon the mercy of criminals.

Also read:

On weakening encryption

It’s history time! While we are discussing Apple vs FBI and the ongoing legal battles over encryption, let’s consider how American politics have already prevented technology from being as good as it could be. Just a few decades ago, the internet came along and started improving the lives of a lot of people – mostly rich people in developed countries at first. Smart people were developing the technologies serving the internet as they went along. Encryption was among them. How could a person ensure that a communication over the internet would be accessible only to the intended recipient? Encryption was the answer. How could a person ensure that his credit card details transferred over the internet for a payment would not be stolen by someone? Encryption!

This is all very nice, but both the internet and encryption have strong links to the military. The precursor to the internet was ARPANET, a project of the US Department of Defense. Encryption was big during World War II. Mathematicians in the United States and the UK worked to break the code used by the German Enigma machines. This gave the Allies the ability to intercept German communications, which was essential to establishing the military superiority that led to their victory in the war.

Perhaps due to its background, encryption was treated as a “munition” and the export of strong encryption from the US was severely restricted until the 1990s. This made it difficult for companies to provide secure services over the internet and – let us have no doubts about it – ordinary consumers failed to get the benefits of these protections until these restrictions were slowly eased during the nineties.

Lessons learned? Not yet. Politicians in the United States and UK, among others, continue to ask for encryption and similar consumer protections to be weakened in order to support “law enforcement” and “anti-terrorism” activities. How far are they willing to harm their constituents to achieve the aims of law enforcement?

Here is one answer: a vulnerability called “DROWN” was discovered last week that makes it possible to intercept supposedly secure communications between your computer and 25% of servers (25% of HTTPS servers, to be precise). That is your credit card information, your personal details, your income tax information and your children’s birthdays being made available to criminals to exploit. As I type this, IT departments everywhere are patching and reconfiguring their systems to protect their companies and clients from this vulnerability: millions of man-hours lost fixing a problem that should never have existed. Why did this happen? The researchers who discovered the vulnerability explicitly blame US government policies of the nineties.

“In the most general variant of DROWN, the attack exploits a fundamental weakness in the SSLv2 protocol that relates to export-grade cryptography that was introduced to comply with 1990s-era U.S. government restrictions.”

XKCD comic on encryption
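To get a feel for how deliberately weak export-grade keys were, here is a back-of-the-envelope comparison. The guess rate of a billion keys per second is an assumed figure for a modest modern rig, not a number from the DROWN research:

```python
# Rough time to exhaust a 40-bit export-grade keyspace versus a modern
# 128-bit keyspace, at an assumed 1e9 guesses per second.
RATE = 1e9  # keys tried per second (hypothetical)

for bits in (40, 128):
    seconds = 2 ** bits / RATE
    years = seconds / (365 * 24 * 3600)
    # a 40-bit key falls in under 20 minutes; a 128-bit key is untouchable
    print(f"{bits}-bit key: {seconds:.2e} seconds (~{years:.2e} years)")
```

The point is not the exact rate but the ratio: every extra bit doubles the work, so capping exports at short keys guaranteed that those keys would eventually be trivial to break.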

Better cryptography was available at the time SSLv2 was designed; the US just refused to let people outside the country have it. Major US tech companies made insecure products and distributed them everywhere (including in the USA). It is bizarre that this is still putting people’s information at risk today, in 2016. Now you know why (among other reasons) people in technology and security are backing Apple in the Apple vs. FBI case.

Shit just got real

Over the past few years, we have had plenty of time to read about the exploits of malicious hackers. They have appeared in the news so many times that we have had the (mis)fortune of becoming desensitised to them. Why does it matter? Why should anyone care about who got hacked? And what can the layman do about it?

It matters because it can affect us, and affect us badly. Our personal details are stored by many companies and governments. Not all of them put effort into securing this information that they have been entrusted to keep safe. Details such as our birth date and our address may be used by our bank or our email provider to verify our identity over the phone. Imagine if someone were able to access our email just because they knew our birthday and our address. This happened to the director of the CIA, John Brennan. It has happened to ordinary people as well.

What about the companies that got hacked? Sometimes the damage is relatively minor reputational harm, such as website defacement. It becomes a serious matter when personal information, email contents and proprietary data are stolen: things that can directly affect a company’s bottom line and harm its customers. Theft of money, or of something like money (such as credit card information), also happens. How do we avoid becoming habituated to ignoring these things when they show up in the news?

Last week presented something I found quite scary: the “shit just got real” moment. A hospital in Hollywood had great difficulty caring for its patients because of an attack on its IT infrastructure. The hospital’s files were hit by a type of malware called ‘ransomware’, which encrypts data and holds it hostage until a ransom is paid for the decryption key. Staff used pen and paper to record new patient details and transferred some patients to other facilities. Patients’ records are stored in computers; their details are digitised so that a doctor or nurse can easily pull them up on a monitor when doing their work. What happens when something as essential as a hospital cannot function because its IT is hit? This is why security is important, and why we have to demand that our various service providers take it very seriously.

What can we do about this?

  1. Educate yourself about personal information security.
  2. Vote with your feet against companies that do a bad job; especially against companies that are unrepentant and against companies that claim that they were hacked by “sophisticated” attackers (don’t take their word for it).

If you happen to work in IT, operations, or risk management, make the effort to understand how information security risks may affect your organisation and your clients and take steps to reduce the risks.

Choosing your password manager

I have advocated password managers in a few previous posts. Here are a few considerations when you go about picking one for your own use:

How are your passwords stored?
The technical implementation may be hard for most users to evaluate, so you may need to rely on reviews by others. Your passwords need to be stored encrypted in such a way that they can only be retrieved using your master password (the password to the password manager). Read the documentation explaining how a product stores passwords. The company or people who run the password manager (if such a group exists) should not be able to retrieve your passwords even if they wanted to. This is Security 101; convince yourself that this is how it is done before you proceed further with any password manager. Also: do not store passwords in your browser unless you secure them with a master password.
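As a sketch of what “retrievable only with your master password” means in practice, here is the standard key-stretching step using PBKDF2 from Python’s standard library. The iteration count and the vault design around it are illustrative assumptions, not any particular product’s implementation:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 200_000) -> bytes:
    # Stretch the master password into a 32-byte key. The encrypted vault
    # can only be opened by re-deriving this exact key, which requires
    # knowing the master password. (Iteration count is illustrative;
    # real products use deliberately high counts to slow down guessing.)
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, iterations)

salt = os.urandom(16)  # stored alongside the vault; not a secret
key = derive_vault_key("correct horse battery staple", salt)

# A provider that stores only the salt and the encrypted vault cannot
# recover the key, matching the "we can't read your passwords" design.
assert derive_vault_key("correct horse battery staple", salt) == key
assert derive_vault_key("wrong password", salt) != key
```

The key never leaves your machine; only the salt and ciphertext need to be stored, which is why even the vendor cannot hand your passwords to anyone.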

Cloud-based or local software
Are your passwords going to be stored in the cloud (the internet) or will they be stored only in your computer? There are advantages and disadvantages for both. Cloud-based password managers may have the advantage that you can use them on multiple devices: your personal laptop, perhaps your work computer, maybe even your mobile. A password manager installed locally on just one computer allows users access to their passwords only on that computer.

Conversely, a cloud-based password manager is a target for attack from the internet. Criminals will go after it given how valuable the contents are. There is less of that risk if your passwords are only stored on your computer.

2-factor authentication
Pick a software that allows 2-factor authentication. Enable it. Use it. Know what to do in the event that something happens that makes it difficult for you to access your 2nd factor (e.g. loss of a security token or your mobile phone to receive 2FA messages). Each password manager will have different methods of handling this.

Open-source or proprietary
Open-source software is software whose code is publicly available; proprietary software’s code is not. Prevailing security wisdom recommends open-source software. Proprietary software has fewer eyes looking at it, and the odds are higher that someone who does find a vulnerability does not reveal it for fear of backlash from the company. Keeping code hidden is not great security practice, though it may be justifiable as a business practice. Some people may be willing to sacrifice features if that is what it takes to use open-source software; others will find the features worth the risk of ‘trusting’ a business. All other things being equal, choose an open-source password manager.

Which password manager does the author use?
I will keep that out of this essay in order to keep this article unbiased. Spend some time doing your own research before you ask.

Related posts
Passwords ain’t nothing but trouble
Lessons from Target on password complexity
When is 2FA not 2FA?

When is 2FA not 2FA?

Security experts strongly recommend using 2-factor authentication (2FA) for accessing critical accounts, such as your bank account. What precisely does this mean?

Authentication methods are divided into three factors:

  1. Something you know (passwords)
  2. Something you have (authentication token, a key, a code sent to your phone, etc.)
  3. Something you are (biometrics: fingerprint, iris, veins under your fingertips, etc.)

Combining two of these factors provides effective protection in the event that any single factor is compromised. If someone shoulder-surfed your password, they still need your authentication token to log in. If your RSA token device was stolen, it is useless without your password. If an attacker has your key but faces a retinal scanner, he may be stopped there.
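To make the “something you have” factor concrete, here is a minimal implementation of the time-based one-time passwords (TOTP, RFC 6238) that authenticator apps generate. The secret shown is the RFC’s published test value, not something to reuse:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at T=59 yields "94287082" (8 digits)
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))
```

The server holds the same secret and accepts a code only within a short time window, so a shoulder-surfed code is useless moments later; the password remains necessary because anyone who steals the secret can generate codes at will.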

So far so good. The problem is that users themselves are often the ones subverting these additional protections. A 2FA token is not a panacea that frees you from using a good password. Yet that is precisely what happens in environments with a large number of devices, where a few resource-starved engineers are tasked with remembering all the passwords with no password management in place.

Typing in “12345678” for the password followed by the generated token eliminates the effectiveness of the 2FA token. I have seen worse.

What do I suggest? Use a password manager, so that passwords can indeed be long and complex and administrators are not tasked with remembering them, and use a token or other form of 2FA for extra protection. Do not task someone with remembering a large number of complex, distinct passwords, or they will all end up being the same or easy to guess.

Also see my previous posts: Passwords ain’t nothing but trouble
and Lessons from Target on password complexity

Should we ban encryption so that terrorists can’t use it?

Short answer: No. Read on.

A pattern has emerged over the last few years of terror attacks: an attack happens, then politicians and spy agency chiefs call for increased powers of surveillance without oversight. They rely on (mostly unproven) claims that encrypted technology was used to communicate, preventing the ‘good guys’ from seeing what the attackers were doing. This was certainly the case for the recent Paris attacks, and Trevor Timm has written an excellent piece on the various political agendas that Paris is being used for – and on the incompetence of the spy agencies in failing to prevent the attack.

SMSes and phone calls used in normal communication are unencrypted. These can be snooped on, and yet the attackers’ SMS communications were not intercepted and the attacks happened. The simple matter is that there are too many people to monitor to effectively prevent an attack. Plenty of people who are known to resent the ‘free world’ will never get around to actually killing in the name of that resentment. How does a spy agency know which communications to watch when there are so many potential threats?

The other much simpler reason for not banning encryption is that encryption benefits humanity. It keeps our data safe from criminals. It allows us to log in to our Facebook, our emails, our dating apps, our bank accounts with some reassurance that people who intend to harm us in various ways are not able to do so. Banning encryption totally removes that security blanket. We are all harmed by banning encryption. To take such a drastic step is to acknowledge that the terrorists have won – that we are so terrorised that we would willingly enable criminals to view our bank accounts and our private lives.

What about the possibility of enabling backdoors (or ‘front doors’) that allow only the government to view encrypted information? To put it simply, this is not possible. If a backdoor (call it ‘front door’ if you wish) is created, criminals will find it and misuse it. Or perhaps hostile governments will. Don’t take my word for it; take Barack Obama’s.

See my previous post: Did the Paris shooters communicate using Playstation 4?

Did the Paris shooters communicate using Playstation 4?

The news has been spreading that the Paris shooters planned their attacks using the Playstation 4. Is this true?
1. There is no reason to believe that it is.
2. The belief that they did originated from an interview given by the Belgian interior minister, Jan Jambon, three days before the attacks, talking about IS in general and not about the particular attack, which was then still in the future.

The more interesting question is whether it matters if they did.

Should governments now start monitoring in-game chat on the Playstation Network? OK. How about in-game chat on the Xbox? How about Words with Friends? These are all examples of communications that get ignored because of the huge amount of noise from actual gamers. How about spoken words, or a real-time drawing, or video? Then there are real messaging applications, some of which are encrypted – and some of which actually do a very good job of it.

Should governments monitor communications in every app built and made available to any two humans, to ensure that terrorists do not plan something? Is this even possible? It may be interesting to imagine a Person of Interest-like system with the ability to monitor everything and alert the good guys when danger threatens someone. But thinking that a government can eavesdrop on every communication is folly. Aside from the technical hurdles posed by encrypted communications, there is the huge volume of noise to sort through.

Governments should come to the realisation that mass surveillance is not the answer, and that monitoring porn-viewing and video gaming is perhaps just a waste of hard-earned tax money. There is pressure from the electorate to be seen doing something after any act of terror, but doing something useless or harmful is worse than doing nothing.

Secure messaging

We send a great deal of our communication over messaging services. Have you ever considered how easy (or difficult) it would be for someone to spy on these communications? What if the messaging service provider itself wanted to spy on you? The Electronic Frontier Foundation (EFF), a non-profit organisation dedicated to “civil liberties in the digital world”, has some answers.

The EFF has assessed a number of messaging apps against a set of security criteria, and continues to update the list as the apps’ developers make updates. Things you might want to watch out for: Skype, Whatsapp, Facebook chat and Snapchat were all built with their customers’ security and privacy as afterthoughts. Even the once-popular Blackberry Messenger is terrible at security.

The page explains each criterion in detail; I shall explain two of them here. “Encrypted so the provider can’t read it”: consider that Google scans through your conversations to decide what advertisements to serve you, and that any of these providers could be served with a subpoena compelling them to hand over a conversation of yours. With proper end-to-end encryption, this becomes impossible.
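A toy illustration of why end-to-end encryption blocks the provider: the two endpoints share a key, and the provider only ever handles ciphertext. The hash-based stream cipher below is a teaching sketch, not a production algorithm (real apps use vetted protocols such as Signal’s):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce +
                              counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in
                 zip(plaintext, keystream(key, nonce, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

key = b"shared-secret-known-only-to-alice-and-bob"
nonce = b"msg-0001"
ciphertext = encrypt(key, nonce, b"meet at noon")
# The provider stores and relays only `ciphertext`; without `key`,
# a subpoena or an ad scanner gets nothing but noise.
assert decrypt(key, nonce, ciphertext) == b"meet at noon"
```

The design choice is what matters: if only the endpoints hold the key, the provider has nothing useful to scan or to hand over.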

“Is the code open to independent audit”: anyone can claim to have built a secure system. It can only be verified as reasonably secure if the code is open to investigation by independent parties. Trusting the maker to have done it right is not something we do in the security world.

Read all about it: EFF’s Secure messaging scorecard
Exciting news: Signal messaging app has now come to Android

Lessons from Target on password complexity

US retailer Target was infamously hacked in 2013, causing the credit card records of tens of millions of customers to be stolen. Target had its systems assessed and came up with some findings which Brian Krebs has just made public. While there are many lessons in this, I want to focus on one item: the passwords.

The Verizon security team was able to crack a large number of Target’s passwords within a week. Observe that most of the listed top-10 passwords were at least 8 characters long and contained lower-case letters, capitals, numbers and a special character. Despite adhering to the password policy, the passwords were successfully cracked.
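To see how policy-compliant passwords fall, here is a tiny sketch of the mangling-rule dictionary attacks that tools like hashcat and John the Ripper run at billions of guesses per second. The word list, the rules and the unsalted SHA-256 “stolen” hash are all illustrative assumptions, not Target’s actual setup:

```python
import hashlib

def candidates(words):
    # Apply the usual tricks: capitalise, "leet" substitutions,
    # and a common trailing digit or symbol.
    subs = str.maketrans({"s": "$", "o": "0", "e": "3"})
    for w in words:
        for base in (w, w.capitalize(), w.capitalize().translate(subs)):
            for suffix in ("", "1", "!", "123"):
                yield base + suffix

# A hash leaked from a breached server (illustrative, unsalted SHA-256):
stolen_hash = hashlib.sha256(b"Pa$$w0rd1").hexdigest()

wordlist = ["password", "target", "winter", "letmein"]
cracked = next((c for c in candidates(wordlist)
                if hashlib.sha256(c.encode()).hexdigest() == stolen_hash),
               None)
print(cracked)  # the policy-compliant "Pa$$w0rd1" falls immediately
```

A handful of words and a few predictable rules cover an enormous share of “complex” passwords, which is exactly why meeting the policy on paper proved worthless at Target.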

The lesson: password complexity rules may be outdated. It is quite possible to stick to the letter of a compliance requirement and still be quite insecure. Consider using password managers with really long passwords and multi-factor authentication systems. Meanwhile, we should look into technologies that move beyond passwords for authentication.

Also see my previous post: Passwords ain’t nothing but trouble

Passwords ain’t nothing but trouble

You may be familiar with the standard script that your IT department gives you about password complexity: a password must have 8 characters or more, with at least one lower-case letter, one capital, one numeric character and one special character. If you are in IT, you may even have seen the Dilbert strip above and felt it hit home.

What’s with these requirements? The length and complexity of a password determine how easily a hacker with very little information about the user can crack it. The methods vary: for a password with just 6 characters, a “brute-force” attempt can try all possible combinations of six characters against a piece of encrypted text known to contain the password. For longer passwords, “dictionary attacks” are made against known common passwords and actual words, since brute force rapidly loses effectiveness. So “Pa$$w0rd” is a bad password, despite meeting all the requirements stated in the first line.
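The arithmetic behind this is simple: the brute-force search space is (alphabet size) raised to the password length. A sketch, assuming a hypothetical cracking rig testing ten billion guesses per second:

```python
def seconds_to_exhaust(alphabet_size: int, length: int,
                       guesses_per_second: float = 1e10) -> float:
    # Worst-case time to try every possible password of the given
    # length over the given character set.
    return alphabet_size ** length / guesses_per_second

# 6 lowercase letters: gone in a blink
print(seconds_to_exhaust(26, 6))      # ~0.03 seconds
# 8 characters from the full ~94 printable ASCII set
print(seconds_to_exhaust(94, 8))      # ~6e5 seconds, about a week
# a 5-word passphrase from a 7776-word diceware list
print(seconds_to_exhaust(7776, 5))    # ~3e9 seconds, about 90 years
```

Length beats character-class tricks: each extra character multiplies the work by the whole alphabet size, while swapping an ‘s’ for a ‘$’ barely slows a dictionary attack down.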

The problem here is that the more complex and long the password becomes, the harder it is for the user to remember. This results in the worst problem of all: The password gets written down.

I have come across many posts over the years covering this topic. I recall someone recommending passphrases on the basis that length, not complexity, is key to making a password unhackable. Then there is this:

And yet we have missed something crucial. All of the discussion so far was about a single password. I now have more than one hundred passwords, about twenty of which I use on a weekly basis. None of these methods is even slightly usable when we have to remember such a large number of passwords.

If we try to memorise them, we need to find patterns with slight variations. If a couple of these passwords leak, a person interested in getting your information may figure out the pattern. And some of the sites we use store passwords very poorly, sometimes even in clear text.

I mostly gave up on memorisation a few months ago and started using a password manager. This comes with its own set of problems. If, for some reason, the password manager is unavailable when one needs to log in, login may be impossible. If someone malicious (or merely mischievous) should get access to one’s unlocked password manager, one can get locked out of all one’s accounts. If the password manager is installed locally on one device, you still need some means of remembering passwords when using other devices. If it stores information that is accessible over the internet it can be used from many devices, but may be more vulnerable to attack.

What can we do? There are people working on that very question. Biometrics is one possibility for the future. We already have mobile phones that unlock with a fingerprint swipe and office doors that open with retinal scans. If these technologies gain wide commercial acceptance in the products we use, they may one day let us log in to websites and applications as well. Researchers have shown many of these technologies to be hackable in principle, but the products will keep improving.

New smart cards are brought out all the time, but they tend to share one flaw: they can easily be lost or stolen. Technologies are now emerging that require the smart card in addition to a biometric or a simple memorised secret. For the sake of our security and convenience, I hope that passwords get replaced by something better in the next decade or two.

This essay was originally posted at my LinkedIn page: