Feeling secure vs being secure

When discussing security with people, one often notices a distinction in language that completely changes the meaning of the topic: people talk about wanting to feel secure, which is a very different matter from actually being secure.

Humans are poor at understanding security and risk: they overestimate the danger from scary and infrequent events (such as terrorist attacks) and underestimate the dangers of common events (such as road accidents and nutrition-related illnesses).

Humans are also prone to the falsehood that “something needs to be done” to remedy real and perceived problems, when it might be more pragmatic to do nothing for the time being. Security expert Bruce Schneier has written extensively about this: how the TSA became a multi-billion-dollar harm inflicted upon people travelling within the United States after the terrorist attacks of September 11, 2001. Putting travellers through a lot of misery gives people the impression that travel in the United States is safe, when in fact it has not been shown to be any safer than it was before 9/11.

This false sense of security is a problem. It causes organisations to spend billions of dollars on security products that do not solve their problems. It causes people to go through unnecessary suffering at airports because they are trained to believe that it is “for their own safety”. At the same time, the companies and people remain vulnerable to the same problems that they believed they were protected from.

A false sense of security can make someone perform a riskier action than they would under normal circumstances. Think of American football players, heavily padded and helmeted, who tackle in a more dangerous fashion under the impression that the padding will protect them from injury. This is the biggest problem with feeling secure: it can actually cause you to be less secure.

How do we guard against responding poorly to perceived risks in these ways? A measured and thoughtful approach to security that avoids knee-jerk measures helps. Education is key, as is the recognition that there is no silver bullet that rapidly provides security. Security requires people, processes and technology working in harmony. All three must be in place, and people must recognise that no system is perfectly secure. This is fine. It is not worth the trouble to make something perfectly secure. Live your lives.

Blocking internet access for cyber security: Will it work?

“The only secure computer is one that’s unplugged, locked in a safe, and buried 20 feet under the ground in a secret location… and I’m not even too sure about that one.” —Dennis Hughes, FBI [attributed]

From May 2017, the Singapore government intends to block internet connectivity for the work computers of its 100,000-strong force of public servants. The government has opted for the drastic measure on account of security concerns. Is it really necessary and is this the best way of keeping things safe?

A key principle that information security managers learn is that security must work to enable business and not prevent it. Attempts to add security that appear to work against the normal functioning of the business are doomed to fail. This will be critical to whether the Singapore government’s efforts succeed or fail. It is not only the technological aspect of this new setup that must be taken care of, but also the ‘people’ and ‘process’ aspects. The latter appear to be lacking, at least in what has been covered in the news.

Did the government get the technology aspect right? Among other things, companies with information security-aware management practise ‘blacklisting’: for instance, file-sharing services, chat applications, pornography sites and known malware sites may be blocked so that employees cannot access them. Some security experts recommend a tougher measure called ‘whitelisting’: only the specific sites on the whitelist may be accessed by employees. This list could contain, say, the top 1,000 sites on the internet known to be safe, and additional sites could be added upon business justification. Entirely blocking internet access is the toughest possible measure and might be a bit heavy-handed.
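To make the difference concrete, here is a minimal Python sketch of how a filtering proxy might enforce a whitelist. The domain list and the is_allowed helper are hypothetical, purely for illustration; real products do this with managed, auditable policy lists rather than hard-coded sets.

```python
from urllib.parse import urlparse

# Hypothetical whitelist: only these domains (and their subdomains) may be visited.
WHITELIST = {
    "gov.sg",
    "wikipedia.org",
    "example-business-partner.com",
}

def is_allowed(url: str) -> bool:
    """Return True if the URL's host is a whitelisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == domain or host.endswith("." + domain) for domain in WHITELIST)

# A blacklist works the other way around: everything is allowed except
# known-bad domains, which is a much weaker guarantee.
print(is_allowed("https://en.wikipedia.org/wiki/Air_gap_(networking)"))  # True
print(is_allowed("https://dodgy-file-share.example/download"))           # False
```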

Disconnecting a computer from the internet is called air-gapping. It is a legitimate security measure for the very paranoid or for persons under surveillance. Security expert Bruce Schneier explains here what he did to stay secure from the NSA while working on the Snowden documents. Air-gapping requires a huge amount of effort to get right, primarily because the information that you work with tends to come through the internet. Air-gapping will make life harder for an attacker who wishes to access information in or through your computer. Information on one’s computer may still be accessible in certain ways, but accessing the office network through that device becomes considerably more difficult for an attacker.

Air-gapping is not foolproof. An air-gapped computer owned by a non-technical person is less likely to receive security patches than one that is connected to the internet, and it may be more susceptible to attacks through vectors other than the internet. Targeted attacks have been carried out against air-gapped devices as long ago as 2010 using USB drives. The Singapore government currently does allow its employees to use the USB ports on their devices. USB drives are well-known transmission vectors for malware, and many companies prevent their use by locking the ports down. That would be a pragmatic step to take before the more desperate measure of taking away internet access.
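For the curious, locking down USB storage is often just a configuration change. The sketch below shows one common approach on Windows, disabling the USB mass-storage driver via the registry; this is an illustrative example only, it assumes administrator rights, and enterprises would normally push such a setting through group policy or endpoint-management tooling rather than a script.

```python
# Sketch: prevent USB mass-storage devices from mounting on Windows by
# disabling the USBSTOR driver service (Start = 4 means "disabled").
# Requires administrator rights; a reboot may be needed for devices
# that are already attached.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)

print("USB mass-storage driver disabled; newly plugged-in drives will not mount.")
```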

The initial announcement of the upcoming policy also stated that employees would be allowed to access the internet on their personal devices and on devices kept specifically for internet use. The Infocomm Development Authority (IDA) clarified in a Facebook post that “only unclassified emails for purposes such as accessing URLs” could be forwarded to private email accounts. This is going to be tricky. An employee who is used to forwarding email between work and personal accounts is likely to do more and more work on internet-connected personal devices, especially when the work requires research or benefits from information found online. This in turn could make the personal devices targets of attack, removing the need to attack the office-issued devices in the first place. Considerable effort will be needed to ensure that employees know which information may absolutely not be transferred to devices other than the office-issued computers. These are serious flaws in the ‘people’ and ‘process’ aspects of the new policy.

I have already encountered people discussing how to subvert this ‘problem’ of no internet access. Singaporeans are technically savvy enough to get the internet access they need. The government has to ensure that work does not become too painful and that access is available where it is genuinely required, or this subversion will eliminate the positive security effects of removing internet access.

What about that Whatsapp privacy policy change?

You may have heard recently that Whatsapp’s privacy policy has changed ‘for the worse’ and that it is now sharing user account information with Facebook. What’s that all about and what should you do about it?

Whatsapp is a mobile phone app that provides messaging services between users of the app. Whatsapp accounts are linked to phone numbers. Facebook is an online social media platform with 1.7 billion monthly users (as of June 2016). Facebook bought Whatsapp for US $19 billion in 2014 and now Whatsapp has over 1 billion users. Prior to its acquisition, Whatsapp charged a fee to its users – a nominal $1. After the acquisition, the fee was eliminated, leaving the company’s business model unclear to users. Whatsapp announced earlier this year that they would introduce tools to let businesses connect to users.

One of the founders of Whatsapp, Jan Koum, was born in Soviet-era Ukraine and the matter of privacy is said to be personal to him. Whatsapp now encrypts all messages that are sent between users using updated versions of the app, meaning not even the company can read messages that are sent through the app.

Why then are we so concerned? The information that Whatsapp does have is metadata – data about data. Whatsapp has the contacts on your mobile phone (required to provide its service), the time you last checked the app, the person whom you messaged, when you messaged them, how many times, etc. Go back three years and you might recall that this is the kind of data collection by the NSA that caused a huge uproar when Edward Snowden blew the lid on it.

A record of phone calls or messages between you and a specialist doctor may reveal medical concerns of yours. Records of contact between two parties may invite inferences even where there is nothing to find, or they may give away something about one’s life that one would prefer to keep private. The choice of whether these matters are made known to others belongs to the people whom they concern, not to an internet or communications company, the government or advertising firms. You lose that choice if your Whatsapp account data is transferred to Facebook. Facebook is an advertising company, and the metadata is going to be used to serve you advertisements from businesses.
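To see why “just metadata” is revealing, consider the following entirely hypothetical Python sketch: with no message content at all, only who was contacted and when, simple counting already suggests quite a lot about a person’s life.

```python
from collections import Counter
from datetime import datetime

# Hypothetical metadata records: no message content, just contact and timestamp.
records = [
    ("oncology-clinic", "2016-09-01 09:05"),
    ("oncology-clinic", "2016-09-03 09:10"),
    ("oncology-clinic", "2016-09-08 09:02"),
    ("spouse",          "2016-09-03 21:40"),
    ("divorce-lawyer",  "2016-09-05 13:15"),
]

# Who is contacted most often, and when?
contacts = Counter(name for name, _ in records)
late_night = [(name, ts) for name, ts in records
              if datetime.strptime(ts, "%Y-%m-%d %H:%M").hour >= 21]

print(contacts.most_common())  # repeated contact with a clinic speaks for itself
print(late_night)              # timing alone hints at the nature of a relationship
```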

What causes more worry is the manner in which this has been implemented. We do have the option to opt out of the sharing of account data, but the opt-out is designed to be easy to miss. You still have 30 days to go back and update your settings; after that, the choice to opt out is removed entirely.

But does it really matter? Many of us already share a lot of information about ourselves publicly on our social media profiles. Even content that is restricted to ‘friends’ can be copied, screenshotted and shared by our contacts. A certain level of discretion is called for when sharing things you consider private, and that is ultimately up to your own judgement.

Take the following steps now to take control of your Whatsapp account data: https://www.whatsapp.com/faq/general/26000016

Security buzzwords – zero day and APT

Information security / cybersecurity is all over the news these days, with a serious hack or breach reported just about every week. News reporters and salesmen are happy to capitalise on this with catchy headlines and fear, and on occasion the truth gets lost along the way. Here are a few buzzwords and buzz-phrases that you may have come across that may not mean what you think:

Zero day
A ‘zero-day vulnerability’ is a vulnerability that the maker of the product does not know about, but an attacker does. Once information about the vulnerability becomes known, the vendor has had zero days to fix it before it affects their customers. Plenty of vulnerabilities these days are discovered by ‘good guys’, white-hat researchers, who report their findings to the producers of the software and give them a reasonable period (such as 90 days) before the information is made public. These are not zero-days, because the wider community (and malicious players in particular) typically learns about them only after the patch is released. A ‘zero-day exploit’ is the use of such a vulnerability before the vendor learns of its existence.

Examples of misuse:
Vulnerability in LastPass misreported as zero-day by many reputable news sites: http://www.theregister.co.uk/2016/07/27/zero_day_hole_can_pwn_millions_of_lastpass_users_who_visit_a_site/
Nonsensical title talking about a ‘patched zero-day’: https://threatpost.com/patched-ie-zero-day-incorporated-into-neutrino-ek/119321/

APT / advanced persistent threat
An advanced persistent threat (APT) is a higher-grade attacker than the usual. APT attacks tend to be targeted and stealthy, may involve a large number of steps, and often infect multiple devices, with data being exfiltrated only in the later stages. Some are so effective that they remain undetected for years. On occasion, they can even reach computers that are not connected to a network. The level of sophistication is said to be such that only nation-states and large criminal organisations have the expertise to carry out such attacks.

Some of the top names in security sell ‘APT’ solutions. Do they work? They do detect some threats that traditional methods do not, but advanced persistent threats? No. Claims to detect APTs must be taken with a measure of caution. Malware testing company NSS Labs devised test samples of increasing difficulty, and none of the tested products detected the stealthiest of them. Their conclusion: “Novel anti-APT tools can be bypassed with moderate effort…” They were able to develop the test samples without having access to the APT solutions during development, and “resourceful attackers who may be able to buy these products will also be able to develop similar samples or even better ones.” Our takeaway: make sure that a vendor’s claims are supported by evidence, and seek unbiased sources when trying to find out more.

Also check out this previous article: The horror of the security product presentation

The password-reuse attack

There was news very recently about an online storage provider named Carbonite being “breached” through a password reuse attack. What might that be?

It is just what it sounds like: an attacker reusing a password that they already have. This obviously requires no technical skill; one does not have to “hack” anything to carry out a password-reuse attack.

Is this even an attack? How did the attacker get the password in the first place? There was an actual attack, by people who may have had technical skills, at some earlier point: they would have hacked a popular site or application (a recently publicised example: LinkedIn) and, if successful, obtained the usernames and passwords of a large number of users. It appears that Mark Zuckerberg’s Twitter and Pinterest accounts were accessed this way.

With this list of credentials in hand, they could proceed to try the same username and password combinations on other sites. How many people can honestly say that they do not use the same credentials on at least a few sites?

What can you do to prevent this?

Simple and obvious: use different passwords for different sites. Do not reuse them.

But I can’t remember so many different passwords! 

Of course you can’t. And you shouldn’t try! Use a password manager. I have written a number of articles about them. Once you start using one, you can safely give up on remembering a whole bunch of long and complex passwords.

Lessons from Target on password complexity
Choosing your password manager
Passwords ain’t nothing but trouble
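If you are curious what a password manager does when it creates a new credential, here is a minimal sketch using Python’s standard secrets module: a long random password, different for every site, that no human is expected to memorise. The site names are placeholders.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 24) -> str:
    """Generate a long random password; the point is that every site gets its own."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique credential per site, so a breach at one site
# cannot be replayed against the others.
for site in ("linkedin.com", "twitter.com", "pinterest.com"):
    print(site, new_password())
```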

The Ransomware Social Contract

I had been anticipating this for a while: there has finally been a publicly known case where the social contract between ransomware extortionists and their victims was broken. The contract? That after paying the ransom, the victims would be given access to their files.

What is ransomware? Extortionist criminals are now using this tactic to make money. They break into their victims’ computer systems and encrypt their data. The victims are then told to pay a ransom in order to get the key to decrypt the data. Imagine a situation where all the photos and work on your computer are inaccessible, despite still being on your computer. If you could pay a small amount to make this problem go away, odds are that you would.

Now multiply the volume of data a million-fold. A business is hit. Its daily operations require this data to be accessible; every second that it is unavailable is money lost. What if it is a hospital? Hospitals have been hit and left unable to provide effective care to their most vulnerable patients for short periods. Most would rather just pay the small ransom than put their work in jeopardy.

Low ransoms and the fact that the extortionists kept their promise of providing the decryption key have made ransomware a viable business model. This may finally be over. One hospital paid the ransom only to have the extortionists ask for more. The ‘social contract’ is broken. It was always a possibility that the attackers would go back on their word. Now it has happened.

Ransomware is not a new phenomenon; it has been around for almost three decades. For some reason, it only took off in a big way in the last three years. Perhaps the rise of commercial attack software such as exploit kits, which package various methods of attack without requiring much technical skill on the attacker’s part, helped.

What now? Ransomware is not about to go away. We should practise some IT security 101 to protect the data that is precious to us (yes, really). Backing up data is the old-fashioned and effective method that protects against the loss of data, to ransomware or anything else. Knowing not to click on unknown links or open dubious email attachments helps too, as does keeping your operating system and software updated and having an anti-virus enabled. These things are all IT security 101, and knowing and practising them will protect you against much more than ransomware.
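As a bare-bones illustration of the first of those points, here is a Python sketch of a dated backup copy. The paths are placeholders; the important detail is that the destination should not be permanently attached to the working machine, or ransomware can encrypt the backups too.

```python
import shutil
from datetime import date
from pathlib import Path

# Placeholder paths: back up the Documents folder to an external or off-site
# location, stamped with today's date so older copies are kept.
SOURCE = Path.home() / "Documents"
DEST = Path("/mnt/backup-drive") / f"documents-{date.today():%Y-%m-%d}"

shutil.copytree(SOURCE, DEST)
print(f"Backed up {SOURCE} to {DEST}")
```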

What if you have been hit and do not have a backup? You could pay, but be aware that you are depending upon the mercy of criminals.

Also read:
http://www.essaysonsecurity.com/blog/shit-just-got-real
https://nakedsecurity.sophos.com/2015/10/28/did-the-fbi-really-say-pay-up-for-ransomware-heres-what-to-do/

On weakening encryption

It’s history time! While we are discussing Apple vs FBI and the ongoing legal battles over encryption, let’s consider how American politics have already prevented technology from being as good as it could be. Just a few decades ago, the internet came along and started improving the lives of a lot of people – mostly rich people in developed countries at first. Smart people were developing the technologies serving the internet as they went along. Encryption was among them. How could a person ensure that a communication over the internet would be accessible only to the intended recipient? Encryption was the answer. How could a person ensure that his credit card details transferred over the internet for a payment would not be stolen by someone? Encryption!

This is all very nice, but both the internet and encryption have strong links to the military. The precursor to the internet was ARPANET, a project of the US Department of Defense. Cryptography was big during World War 2: mathematicians in the UK and the United States worked to break the codes produced by the German Enigma machines. This gave the Allies the ability to read German communications and was an important factor in their victory in the war.

Perhaps because of this background, encryption was treated as a “munition”, and the export of strong encryption from the US was severely restricted until the 1990s. This made it difficult for companies to provide secure services over the internet, and, let us have no doubts about it, ordinary consumers did not get the benefit of these protections until the restrictions were slowly eased during the nineties.

Lessons learned? Not yet. Politicians in the United States and UK, among others, continue to ask for encryption and similar consumer protections to be made weaker in order to carry out “law enforcement” and “anti-terrorism” activities. How far are they willing to harm their constituents in order to achieve the aims of law enforcement?

Here is one answer: a vulnerability called “DROWN” was disclosed last week that makes it possible to intercept supposedly secure communications between your computer and about 25% of HTTPS servers. That is your credit card information, your personal details, your income tax information and your children’s birthdays being made available to criminals to exploit. As I type this, IT departments everywhere are working on patching and otherwise changing their systems to protect their companies and clients from the risk posed by this vulnerability. That is millions of man-hours of work lost fixing a problem that should never have existed. Why did this happen? The researchers who discovered the vulnerability explicitly blame US government policies of the nineties:

“In the most general variant of DROWN, the attack exploits a fundamental weakness in the SSLv2 protocol that relates to export-grade cryptography that was introduced to comply with 1990s-era U.S. government restrictions.”
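For the technically inclined, this is roughly what refusing obsolete protocols looks like in code. The sketch below, assuming Python 3.7 or later and the standard ssl module, builds a client context that will not speak anything older than TLS 1.2 and excludes export-grade ciphers; example.com is a placeholder host.

```python
import socket
import ssl

# Build a client context that refuses legacy protocols (SSLv2/SSLv3/TLS 1.0/1.1)
# and explicitly excludes NULL and export-grade cipher suites.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT")

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher())
```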

XKCD comic on encryption

Better cryptography was available at the time SSLv2 was designed; the US just refused to let people outside the country have it. Major US tech companies made insecure products and distributed them everywhere, including within the USA. It is bizarre that this is still putting people’s information at risk today, in 2016. Now you know why (among other reasons) people in technology and security are backing Apple in the Apple vs. FBI case.

Shit just got real

Over the past few years, we have had plenty of time to read about the exploits of malicious hackers. These have appeared in the news so many times that we have had the (mis)fortune of becoming desensitised to them. Why does it matter? Why should anyone care about who got hacked? And what can the layman do about it?

It matters because it can affect us, and affect us badly. Our personal details are stored by many companies and governments, and not all of them put effort into securing the information they have been entrusted to keep safe. Details such as our birth date and address may be used by our bank or email provider to verify our identity over the phone. Imagine if someone were able to access our email just because they knew our birthday and our address. This happened to the director of the CIA, John Brennan. It has happened to ordinary people as well.

What about the companies that got hacked? Sometimes the damage is (relatively minor) reputational harm in the form of website defacement. It is a serious matter when personal information, email contents and proprietary data are stolen, things that can directly affect a company’s bottom line and harm its customers. Theft of money or something like money (such as credit card information) also happens. How do we avoid becoming habituated to ignoring these things when they show up in the news?

Last week presented something that I found quite scary: the “shit just got real” moment. A hospital in Hollywood had great difficulty caring for its patients because of an attack on its IT infrastructure. The hospital’s files were hit by a type of malware called ‘ransomware’, which encrypts data until it is decrypted with a key obtained by paying the ransom. Staff used pen and paper to record new patient details and transferred some patients to other facilities. Patients’ records are stored in computers; their details are digitised so that a doctor or nurse can easily pull them up on a monitor while doing their work. What happens when something as essential as a hospital is unable to function because its IT is hit? This is why security is important, and why we have to demand that our various service providers take it very seriously.

What can we do about this?

  1. Educate yourself about personal information security.
  2. Vote with your feet against companies that do a bad job; especially against companies that are unrepentant and against companies that claim that they were hacked by “sophisticated” attackers (don’t take their word for it).

If you happen to work in IT, operations, or risk management, make the effort to understand how information security risks may affect your organisation and your clients and take steps to reduce the risks.

Stop hiding behind Terms and Conditions

It was revealed two months ago that toy company VTech was hacked. The criminals broke in and stole information including the birth dates and addresses of millions of children and their parents. It was quickly found out that VTech had employed abysmal security practices and made no notable efforts to keep their clients’ information safe. Lesson learned, one would suppose.

Unfortunately not so. Troy Hunt discovered last week that VTech had made some changes – to their terms of service. The change, made in December, stated that parents were responsible for the security of their children’s data, even though that data had already been handed to VTech. Rather than accept responsibility for their egregious failure, VTech chose to absolve themselves with a few words of legal text.

The following words were added to the terms and conditions for VTech’s Learning Lodge website: “You acknowledge and agree that any information you send or receive during your use of the site may not be secure and may be intercepted or later acquired by unauthorized parties… Recognizing such, you understand and agree that…neither VTech nor [its partners] or employees will be liable to you for any…damages of any kind.”

The BBC got a comment from the UK’s Information Commissioner’s Office confirming that VTech was indeed responsible for such information. “The law is clear that it is organisations handling people’s personal data that are responsible for keeping that data secure,” said a spokeswoman. This should also be the case in the rest of the EU. Local laws will apply in Asian countries.

Pages and pages of small-print legal text in contracts, and emails that end with statements disclaiming responsibility for their content, are very disappointing aspects of modern society. Not only do people quickly learn to ignore such text as the noise that it is, it also gives service providers the false sense of security that they cannot be held accountable. For the sake of everyone’s sanity, please stop doing this! Keep your disclaimers brief and to the point. Take a look at this old Google page about how to actually tell someone about relevant terms and conditions.

In other news, Amazon has told its customers that its Lumberyard game development tools are not to be used for life-critical or safety-critical systems… unless the zombie apocalypse is going on, in which case this particular condition is nullified! See clause 57.10.

What are Free Basics and why did India block it?

Free Basics (under the umbrella of internet.org) is a service by Facebook to bring “essential services” of the internet to the unconnected people of the world for free. Doing so would improve their access to information on a variety of matters such as health, banking and of course, Facebook. Sites such as the BBC and Wikipedia were included. Facebook estimates that 50% of the people it connected this way upgraded to a paid plan within months.

The initiative was heavily publicised, and the prime minister of India backed it on his social media accounts. The criticism came from the already-connected people of India: an internet curated by Facebook goes against the principle of net-neutrality. The protests were vociferous and led to a call for input by the Telecom Regulatory Authority of India (TRAI) and multiple rounds of consultation, including an open house, that finally led to the ruling (short version): internet service pricing that discriminates based on content is illegal in India. TRAI has made an exception for emergency services, with the caveat that differential pricing for emergencies must be reported to TRAI within seven days.

What is the big deal with net-neutrality? Net-neutrality is the idea that all data on the internet should be treated equally: it is not OK to prevent access to some sites or some types of content, or to ask the user to pay extra for them. In the absence of net-neutrality, internet service providers (ISPs) would be able to control what the user can and cannot do or see on the internet. Net-neutrality also acts in favour of the openness of the internet, preventing sites with a particular agenda from dominating information.

A well-known example is when the American ISP Comcast throttled the bandwidth of users streaming videos on Netflix and demanded that Netflix pay charges for the additional bandwidth usage. Netflix was forced to pay after its users found themselves practically unable to view videos.

The BBC may be benign, but some may prefer their news to come from another provider who sees the world in a different light from the BBC. For one media organisation to dominate the news made available to the poor of India for free would be quite an impressive coup for that outlet – one that we really do not want, given that all media have their own agenda and political leanings.

TRAI has ruled that all plans that differentiate pricing based on content are banned. The decision is significant in that it considers the threat to net-neutrality to be more serious than the prospect of a majority of Indians not being connected at all.

TRAI’s ruling in its entirety is worth reading for its detail and its simple language.