Category Archives: Defence

Australian Census

Australia’s #CensusFail, but is it really?

Full disclosure: I disagree with the amount of detail collected in the census and the way the data will be shared. However, this is not a post about privacy concerns; rather, it is about common myths surrounding incident response and what we can learn from them.

Some (mis)information circulating on various sources (Twitter, news, etc.) has prompted this post. As the situation is still unfolding, there is a lot of information that has not yet been considered or discussed, so I welcome any constructive discussion to fill in the gaps.

Background

9th of August 2016, the most important day for the Australian Bureau of Statistics (ABS): national census day. A day when massive amounts of data are collected on individuals, providing a snapshot of the nation. Various government bodies will use this valuable data to plan major projects and allocate resources for the next decade. The data collected is rich and of major value to many entities, both good and bad. The Australian population is about 23 million people, and with the census ‘open’ for only a short period, this big-bang launch requires a lot of planning and carries a lot of risk. Kudos to the Australian government for leveraging technology on such a large scale.

The successful management of any teething issues is a sum of risk minimisation and incident response.

Challenge accepted: Knowing your adversary

As with all governments, there are adversaries: political activists, criminal syndicates and nation-state actors, just to name a few. Coupled with motive and capability, a worthy adversary could mount a successful attack. The breach of the United States Office of Personnel Management (OPM) is just one example of a successful attack by a well-prepared adversary.

Playing defence on this scale is not for the faint-hearted, so spare a thought for any InfoSec professional assisting with this mammoth project.

What happened?

In short, the census website suffered four DDoS attacks before the service was taken offline. The seriousness of the attack prompted the Australian Signals Directorate (ASD) to investigate.

Dispelling misinformation

The points below serve to dismiss or debate any misinformation seen to date.

Misinformation #1: Bandwidth exhaustion is not the only form of denial of service attack.

A distributed denial of service (DDoS) attack exists in many forms, the best known being bandwidth consumption. Another type is resource exhaustion, where finite system resources are consumed until the service fails; examples include, but are not limited to, state-based TCP stack attacks and memory- or CPU-exhaustion attacks. This form of DDoS does not require a significant amount of bandwidth to be successful.

Hence, determining the presence of a DDoS through network-based analysis alone is inconclusive.
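As a minimal illustration of the resource-exhaustion point, the toy simulation below (entirely hypothetical; a model, not attack code) shows a server with a fixed-size connection table being denied to legitimate users by a trickle of incomplete connections carrying almost no bandwidth:

```python
# Illustrative simulation: a server with a finite connection table can be
# denied to legitimate users even though the attack traffic is tiny.

class TinyServer:
    """Hypothetical server with a fixed-size connection table."""
    def __init__(self, max_connections=100):
        self.max_connections = max_connections
        self.open_connections = 0
        self.bytes_received = 0

    def accept(self, payload_bytes):
        if self.open_connections >= self.max_connections:
            return False  # table full: new connections are refused
        self.open_connections += 1
        self.bytes_received += payload_bytes
        return True

server = TinyServer(max_connections=100)

# An attacker opens many connections, each sending only a few bytes
# (e.g. a partial request) and never completing them.
for _ in range(100):
    server.accept(payload_bytes=10)

# A legitimate user now cannot connect, despite only ~1 KB of attack traffic.
print(server.accept(payload_bytes=500))   # False
print(server.bytes_received)              # 1000
```

The point of the sketch: a network capture of this attack would show negligible bandwidth, which is why network-based analysis alone cannot rule a DDoS in or out.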

Misinformation #2: Just because there is an ‘attack’ it doesn’t mean that the attack was successful.

Without knowing the underlying architecture of the system, it is very difficult to conclude what vulnerabilities were present that led to a successful denial of service.

Anyone who has worked with an IDS/IPS will understand that an ‘attack’ is only successful when it is targeted against a specific vulnerability present in the system. Whether a vulnerability was present at the time of the attack will not be known until a thorough investigation has been completed.

Any ‘intelligence service’ that claims an ‘attack’ has occurred against an organisation can only be definitive once the ‘attack’ has been matched with an underlying vulnerability. With this in mind, without probes or detectors within the victim organisation, no service can make this claim.
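One way to picture this matching step is the sketch below, using entirely hypothetical alert and inventory data: an IDS alert only suggests a *successful* attack when the vulnerability the signature fires on is actually unpatched on the targeted host.

```python
# Hypothetical alert feed: (target host, vulnerability the signature exploits)
alerts = [
    ("web-01", "CVE-2014-0160"),   # Heartbleed attempt
    ("web-02", "CVE-2014-0160"),   # same attempt against a patched host
    ("db-01",  "CVE-2012-2122"),   # MySQL auth-bypass attempt
]

# Hypothetical asset inventory: vulnerabilities known to be unpatched per host
unpatched = {
    "web-01": {"CVE-2014-0160"},
    "web-02": set(),               # already patched: the 'attack' fails
    "db-01":  {"CVE-2012-2122"},
}

# Only alerts that line up with an unpatched vulnerability warrant treating
# the 'attack' as potentially successful; the rest are noise.
likely_successful = [
    (host, cve) for host, cve in alerts
    if cve in unpatched.get(host, set())
]
print(likely_successful)  # [('web-01', 'CVE-2014-0160'), ('db-01', 'CVE-2012-2122')]
```

Without visibility into the inventory side (the probes and detectors inside the victim organisation), an outside service only ever sees the left-hand column.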

Misinformation #3: A DDOS is not a data breach however, it can be used to provide cover for one.

Risk minimisation is the best form of prevention any organisation can undertake before commissioning a system. It is impossible to protect against every scenario, so enter the obligatory Sun Tzu quote:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

In other words, knowing yourself and your adversary is very important to devise a comprehensive risk minimisation plan.

Incorporated into any worthy risk minimisation strategy is an incident response plan, commonly known as ‘Plan B’. Plan B caters for the things that could still go wrong: unforeseen events, or risks that cannot be treated during risk minimisation, such as natural disasters and data breaches. Knowing you have a worthy, determined adversary is one thing; knowing what they are capable of, or what they will do, is another.

An incident response plan is both damage control and public relations rolled into one, with the main objective being to restore operations as soon as possible. History has shown that diversionary tactics are one way to consume human resources, allowing attackers to proceed with their main objective, which could be a data breach. There are many examples of diversionary tactics in use, one being last year’s attack on TalkTalk.

It must be noted that only time will tell whether a data breach actually took place, for the reasons explained below.

Misinformation #4: ‘Cyber’ Attribution is extremely hard. A host used in an attack does not mean that the owner of that host is the attacker.

Pivoting is a powerful technique whereby an ‘attacker’ gains access to an intermediate host in order to progress an attack. Used by penetration testers (‘red teams’) and attackers alike, pivoting can serve several purposes, such as gaining access to a critical management server or disguising an attack.

Pivoting through a host can also be used to throw off an investigation: the digital equivalent of framing someone else for nefarious activity. A savvy attacker would incorporate some form of pivoting to cover their tracks or deflect attention away from themselves.

To compound the issue, obtaining evidence for attribution is met with many challenges: the availability and quality of residual evidence, obtaining permission from the owner of the host to gain access, political and legal boundaries, and so on.

The net result makes attribution very difficult.

Misinformation #5: Just because the entire story has not been released, it doesn’t mean that there is a cover up. A thorough investigation takes time.

The duration of an investigation is determined by the number of moving parts requiring analysis. Evidence could be destroyed, incomplete or falsified, requiring more time and resources to determine what occurred. Complex systems require a significant amount of time for investigators to gather evidence, analyse, correlate and test theories before presenting their findings to an audience, especially one wanting heads to roll or determined to scrutinise the smallest of details.

An incomplete or botched investigation could result in the loss of credibility for the investigator and the organisation or, worse still, a lawsuit.

The pressure on investigators to obtain factual information is immense, and they would rather avoid any speculation until they are certain their findings are rock solid.

As InfoSec professionals, what could we do?

  • Avoid planting conspiracy theories; instead, support and defend the hard-working investigators who are under a lot of pressure.
  • Correct any false or misleading information with facts.
  • Learn from what has happened: use this as a case study and, where appropriate, communicate to your superiors what could be done so that the fallout from something like this is minimised.
  • Investigators and incident responders: treat incoming information with a pinch of salt, and gather conclusive evidence to support your theories.

Understandably, this is by no means the end of this saga; however, it is definitely not the time to spread rumours or play the blame game.

Allow the incident responders, ABS, ASD and other investigators to conduct what is necessary to form a conclusion based on fact, rather than crucifying the innocent. To the Australian government: the public will demand transparency over what has happened, so please put the blame game aside and let us all learn from it. It is taxpayer money, so let us all be good sports; everyone deserves to know the truth (granted, it will be 10 years before another census happens).

References (by no means comprehensive):
https://en.m.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach
http://www.abc.net.au/news/2016-08-10/australian-bureau-of-statistics-says-census-website-hacked/7712216
http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/culture-media-and-sport-committee/cyber-security-protection-of-personal-data-online/written/26312.pdf

Mobile Phone intercept, it’s not new

A 60 Minutes report demonstrated how mobile networks can be intercepted, something that has been an open secret within intelligence agencies.

Mobile intercepts are performed by accessing the mobile network infrastructure shared by providers across the world. This infrastructure uses a signalling protocol called SS7, which handles call connections and subscriber information. So, anyone with access to SS7 can intercept subscribers just about anywhere in the world.

Demonstrations of these network weaknesses were made as early as 2010; Karsten Nohl, the person who demonstrated the intercept to 60 Minutes, presented his findings in 2010 and again in 2014. If you have thought about it, chances are that someone else has as well: in this case, intelligence agencies.

Why is this not a big deal?

Just as providers and adversaries can intercept landlines through a PBX, wiretaps and even exchanges, surely the same principles and assumptions apply to mobile technologies?

All the same information security rules apply: if you have anything of importance to transmit over a third-party network, assume that the third party is not trusted and make sure you have some sort of end-to-end encryption in place (like the couplers used in spy movies), or use an alternative method.
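As a minimal sketch of the end-to-end principle using only the Python standard library, the example below protects message *integrity* with an HMAC over a pre-shared key; a real deployment would use a full authenticated-encryption scheme (e.g. TLS or Signal-style protocols) for confidentiality as well, so treat this purely as an illustration of trusting the endpoints rather than the carrier.

```python
import hashlib
import hmac

# Assumed to be exchanged out of band, never over the untrusted network.
shared_key = b"pre-shared-secret"

def seal(message: bytes):
    """Attach an HMAC tag so the recipient can detect tampering in transit."""
    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"meet at noon")
print(verify(msg, tag))                    # True
print(verify(b"meet at midnight", tag))    # False: altered in transit
```

The carrier (here, any SS7-connected network) never holds the key, so it can relay but not forge the message; encryption on the same key material would also stop it reading the contents.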


The ethics of vulnerability disclosure

Badlock, a newly publicised vulnerability, is making the rounds with a catch: the patch will not be available for another three weeks. However, the marketing hype is already in full swing.

What makes this vulnerability big is the hint that it affects SMB, used in Samba/Windows file sharing. As Samba is open source, vulnerability researchers, both good and bad, will be scrambling through the code.

Given the widespread reach of the vulnerability, the hype is not called for. Three weeks is a long time in information security, and the widespread use of SMB means organisations could be sitting ducks.

People who wield power must act sensibly. If you know something that could affect a lot of people, use that power with discretion: report vulnerabilities to vendors and ensure that a working fix is applied by customers before causing mass mayhem. Defenders already have a hard time keeping people out; we shouldn’t be fighting amongst ourselves.

As always, opinions welcome. Thanks to Steve Ragan (@SteveD3) for exposing this.

Ref:

http://www.csoonline.com/article/3047221/techology-business/company-behind-the-badlock-disclosure-says-pre-patch-hype-is-good-for-business.html

Cross published on Peerlyst:

https://www.peerlyst.com/posts/the-ethics-of-vulnerability-disclosure-ed-yuwono

 

edit: 24/3: typo

Dealing with insider threats

An interesting survey just came out stating that employees would sell passwords for US$1,000.

The temptation of a quick buck can be hard to resist, and a password is all an attacker needs to gain access to a system.

Just last month, Apple employees were also offered bribes to disclose their credentials.

Insider threats are real, but as security professionals, working with other departments helps provide an understanding of the state of play.

For example: if you’re a service provider for a high-profile organisation, you may be the subject of such threats.

How could you reduce this risk?

  • The human factor is important: organisational morale, culture and trust must be strong to keep staff from being influenced by bribes.
  • Reporting is also key: staff must be encouraged to report such approaches.
  • Suman Sourav (@SumanS0urav) makes a good point about monitoring: detective controls for anomalous behaviour are important.
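As a hypothetical sketch of such a detective control, the snippet below flags logins that fall outside a user's established working-hour baseline; a real deployment would use far richer behavioural features (data volumes accessed, systems touched, geography) and proper statistics.

```python
from collections import defaultdict

# Hypothetical historical login events: (user, hour of day).
history = [("alice", h) for h in [8, 9, 9, 10, 8, 9, 17, 16]]

# Build a per-user baseline of hours at which logins normally occur.
baseline = defaultdict(set)
for user, hour in history:
    baseline[user].add(hour)

def is_anomalous(user, hour, slack=1):
    """Flag a login whose hour is not within `slack` hours of any baseline hour."""
    usual = baseline.get(user)
    if not usual:
        return True  # no baseline at all: treat unseen users as anomalous
    return all(abs(hour - h) > slack for h in usual)

print(is_anomalous("alice", 9))   # False: normal working hours
print(is_anomalous("alice", 3))   # True: a 3am login warrants review
```

An insider selling credentials typically cannot also hand over their behavioural pattern, which is why anomaly-based detective controls complement the preventive ones above.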

It’s not easy to stop insider threats, but a culture conducive to trust will serve to reduce them.

References:

http://www.infosecurity-magazine.com/news/employees-would-sell-passwords-for

http://www.ibtimes.co.uk/apple-employees-ireland-offered-20000-bribes-pass-login-ids-hackers-1543042

Operations security (OPSEC): part of a balanced organisational security awareness diet

Summary:

  • The goal of OPSEC is to limit an adversary’s capability to gather information about individuals or the organisation
  • Implement OPSEC just as you would any other control
  • Assess whether OPSEC could aid your existing information security program.

Authentication based on something you know is the easiest and cheapest way to prove someone’s identity; however, it is also one of the easiest to defeat. With personal privacy not being as private as it used to be, it is hard to establish a universal authentication system based on something you know.

In the age of mass surveillance, social media and the Internet of Things (IoT) track individuals and record secrets. Continuous changes to social media features, and IoT devices gathering detailed information, make it challenging to maintain a high level of privacy. Criminals leverage lapses in privacy to exploit organisations that rely on weak authentication, cleaning out innocent victims.

The absence of a suitable technological solution (at the moment) means the security of something you know rests solely with the subject.

This is where Operations Security (OPSEC) comes in.

Operations security conditions individuals to think about limiting the information they disclose to the public. The military and other clandestine groups have employed OPSEC for many decades, and it serves to limit the effectiveness of reconnaissance against individuals.

Considering that offensive teams such as penetration testers and social engineers treat intelligence gathering as one of their first stages, information starvation would limit or delay the opportunity for a successful ‘attack’.

Incorporating OPSEC into organisational security awareness does not require staff to undergo military-style training; instead, consider:

  • whether there is a gap that OPSEC could address (e.g. organisational or private use of social media, job listings, etc.)
  • the level of detail the program will cover. This needs to strike a balance between practicality, freedom and security.

Two examples where OPSEC in security awareness would be valuable include:

  • Compartments – defining boundaries between professional and private life, or segmentation within the professional sphere (e.g. secret vs internal)
  • Need to know – do you *have* to communicate this information? Is it necessary for the other party to function? Could the information be used against the individual or organisation? For example, security questions and password resets: does your online trader need to know your date of birth?
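One practical ‘need to know’ habit can be sketched as follows: since security-question answers are effectively passwords, generate random answers and store them in a password manager rather than disclosing real personal details. The questions below are illustrative only.

```python
import secrets

# Hypothetical security questions a service might demand.
questions = ["Mother's maiden name?", "First pet's name?", "Date of birth?"]

# Answer each with an unguessable random string; the real answer is
# never disclosed, so scraped social media profiles reveal nothing useful.
answers = {q: secrets.token_urlsafe(12) for q in questions}

for q, a in answers.items():
    print(f"{q} -> {a}")
```

The answers have no relationship to the individual, so an attacker armed with a detailed public profile gains nothing toward a password reset.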

Before OPSEC can be deployed throughout the organisation, there are some considerations:

  • OPSEC augments information security and should be part of an existing security awareness program
  • OPSEC may not be for everyone; some specific groups will benefit from it more than others
  • Does the culture of the organisation allow for OPSEC to be implemented? Will staff allow their organisation to interfere with their personal lives?
  • People are human and mistakes happen; OPSEC serves to limit the fallout.
  • OPSEC is expensive, not only financially: continuously maintaining good OPSEC is quite hard to do.

There may be some debate as to the effectiveness and application of OPSEC; however, a successful campaign could give an adversary a run for their money.

If you are running an OPSEC campaign in your organisation, I’d like to hear about it.

Some information

http://www.dailymail.co.uk/news/article-3475126/Security-alert-NatWest-online-banking.html
https://www.owasp.org/index.php/Choosing_and_Using_Security_Questions_Cheat_Sheet
http://www.dodea.edu/offices/safety/opsec.cfm