Tag Archives: Defence

Australian Census

Australia’s #CensusFail, but is it really?

Full disclosure: I disagree with the amount of detail collected in the census and the way the data will be shared. However, this is not a post about privacy concerns; rather, it is a post about common myths surrounding incident response and what we can learn from them.

Some (mis)information circulating on various sources (Twitter, news, etc) has prompted this post. As the situation is still unfolding, there is a lot of information that has not yet been considered or discussed, so I welcome any constructive discussion to fill in the gaps.

Background

The 9th of August 2016 was the most important day for the Australian Bureau of Statistics (ABS): national census day. A day when massive amounts of data would be collected on individuals, providing a snapshot of the nation. Various government bodies will use this valuable data to plan major projects and allocate resources for the next decade. The data collected is quite rich and is of major value to many entities, both good and bad. The Australian population is about 23 million people, and with the census ‘open’ for only a short period of time, this big-bang launch requires a lot of planning and carries a lot of risk. Kudos to the Australian government for leveraging technology on such a large scale.

The successful management of any teething issues is a sum of risk minimisation and incident response.

Challenge accepted: Knowing your adversary

As with all governments, there are adversaries: political activists, criminal syndicates and nation-state actors, to name a few. Coupled with motive and capability, a worthy adversary could mount a successful attack. The breach of the United States Office of Personnel Management (OPM) is just one example of a successful attack involving a well-prepared adversary.

Playing defence on this scale is not for the faint-hearted, so spare a thought for any (InfoSec) professional assisting with this mammoth project.

What happened?

In short, the census suffered four DDoS attacks before the service was taken offline. The seriousness of the attacks prompted the Australian Signals Directorate (ASD) to investigate.

Dispelling misinformation

The points below serve to dismiss or debate any misinformation seen to date.

Misinformation #1: Bandwidth exhaustion is not the only form of denial of service attack.

A distributed denial of service (DDoS) attack exists in many forms, the best known being bandwidth consumption. Another type is resource exhaustion, where finite system resources are consumed until the service fails; examples include, but are not limited to, state-based TCP stack attacks and memory- or CPU-based attacks. This form of DDoS does not require a significant amount of bandwidth to be successful.

Hence, determining the presence of a DDoS through network-based analysis alone is inconclusive.
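To illustrate the distinction, here is a minimal triage sketch. The metric names, record layout and thresholds are all hypothetical, not drawn from any particular monitoring product:

```python
def classify_ddos(metrics):
    """Rough triage of a suspected DDoS from host and link metrics.

    A saturated uplink suggests bandwidth exhaustion; a quiet link with
    a near-full connection table or a pegged CPU suggests resource
    exhaustion instead.
    """
    link_saturated = metrics["bandwidth_utilisation"] > 0.9
    resources_exhausted = (
        metrics["half_open_connections"] > 0.9 * metrics["conn_table_limit"]
        or metrics["cpu_utilisation"] > 0.95
    )
    if link_saturated:
        return "possible bandwidth exhaustion"
    if resources_exhausted:
        return "possible resource exhaustion (low bandwidth)"
    return "inconclusive - investigate further"


# A link that is mostly idle while the TCP connection table fills up:
sample = {
    "bandwidth_utilisation": 0.12,   # 12% of link capacity
    "half_open_connections": 58000,  # e.g. sockets stuck in SYN_RECV
    "conn_table_limit": 60000,
    "cpu_utilisation": 0.40,
}
print(classify_ddos(sample))
```

The point of the sketch: the link metric alone says nothing is wrong, yet the host is failing, which is exactly why network-based analysis by itself is inconclusive.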

Misinformation #2: Just because there is an ‘attack’, it doesn’t mean the attack was successful.

Without knowing the underlying architecture of the system, it is very difficult to conclude what vulnerabilities were present that led to a successful denial of service.

Anyone who has worked with an IDS/IPS/etc will understand that an ‘attack’ is only successful when it targets a specific vulnerability present in the system under attack. Whether a vulnerability was present at the time of attack will not be known until a thorough investigation has been completed.

Any ‘intelligence service’ claiming that an ‘attack’ has occurred against an organisation can only be definitive once the ‘attack’ has been matched with an underlying vulnerability. With this in mind, without probes/detectors/etc within the victim organisation, no service can make this claim.

Misinformation #3: A DDoS is not a data breach; however, it can be used to provide cover for one.

Risk minimisation is the best form of prevention an organisation can undertake before commissioning a system. It is impossible to protect against every possible scenario, so enter the obligatory Sun Tzu quote:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

In other words, knowing yourself and your adversary is very important to devise a comprehensive risk minimisation plan.

Incorporated into any worthy risk minimisation strategy is an incident response plan, commonly known as ‘Plan B’. Plan B caters for the things that could still go wrong: unforeseen events or risks that cannot be treated during risk minimisation, such as natural disasters and data breaches. Knowing you have a worthy, determined adversary is one thing; knowing what they are capable of, or what they will do, is another.

An incident response plan is both damage control and public relations rolled into one, with the main objective being to restore operations as soon as possible. History has shown that diversionary tactics are one way to consume human resources, allowing attackers to proceed with their main objective, which could be a data breach. There are many references to diversionary tactics being used, one of which is last year’s attack on TalkTalk.

It must be noted that only time will tell whether or not a data breach took place, for reasons explained below.

Misinformation #4: ‘Cyber’ attribution is extremely hard. The use of a host in an attack does not mean that the owner of that host is the attacker.

Pivoting is a powerful technique whereby an ‘attacker’ gains access to a host in order to progress an attack. Used by penetration testers (“red teams”) and attackers alike, pivoting could be used for several reasons, such as gaining access to a critical management server or disguising an attack.

Pivoting to a host could also be used to throw off an investigation: the digital equivalent of framing someone else for nefarious or questionable activity. A savvy attacker would incorporate some form of pivoting to cover their tracks or deflect attention away from themselves.

To compound the issue, obtaining evidence for attribution is met with many challenges: the availability and quality of residual evidence, obtaining permission from the owner of the host to gain access, political/legal boundaries and so on.

The net result makes attribution very difficult.

Misinformation #5: Just because the entire story has not been released, it doesn’t mean there is a cover-up. A thorough investigation takes time.

The duration of an investigation is determined by the number of moving parts requiring analysis. Evidence could be destroyed, incomplete or falsified, requiring more time and resources to determine what occurred during an event. Complex systems require a significant amount of time for investigators to gather evidence, analyse it, correlate findings and test theories before presenting to an audience, especially one wanting heads to roll or determined to scrutinise the smallest of details.

An incomplete, bad or botched investigation could result in a loss of credibility for the investigator and the organisation, or worse still, a lawsuit.

The pressure on investigators to report only factual information is immense, and they would rather avoid any speculation until they are certain their findings are rock solid.

As InfoSec professionals what could we do?

  • Avoid planting conspiracy theories; instead support, defend and assist the hard-working investigators, who are under a lot of pressure.
  • Correct false or misleading information with factual information.
  • Learn from what has happened: use this as a case study and, where appropriate, communicate to your superiors what could be done so that the fallout from something like this is minimised.
  • Investigators/incident responders: treat incoming information with a pinch of salt and gather conclusive evidence to support your theories.

Understandably, this is by no means the end of this saga; however, it is definitely not the time to spread rumours or play the blame game.

Allow the incident responders, ABS, ASD and other investigators to conduct what is necessary in order to form a conclusion based on fact, rather than crucifying the innocent. To the Australian government: the public will demand transparency about what has happened, so please put the blame game aside and let us all learn from it. It is taxpayer money, so let us all be good sports; everyone deserves to know the truth (granted, it will be five years before another census happens).

References (by no means comprehensive):
https://en.m.wikipedia.org/wiki/Office_of_Personnel_Management_data_breach
http://www.abc.net.au/news/2016-08-10/australian-bureau-of-statistics-says-census-website-hacked/7712216
http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/culture-media-and-sport-committee/cyber-security-protection-of-personal-data-online/written/26312.pdf


Mobile Phone intercept, it’s not new

A 60 Minutes report demonstrated how mobile networks could be intercepted and that this has been an open secret within intelligence agencies.

Mobile intercepts are done by accessing the mobile network infrastructure shared by providers across the world. The mobile infrastructure utilises a protocol called SS7 that handles call connections and subscriber information, so anyone with access to SS7 can intercept subscribers on the network just about anywhere in the world.

Demonstrations of these weaknesses were made as early as 2010: Karsten Nohl, the researcher who demonstrated the intercept to 60 Minutes, presented his findings in 2010 and again in 2014. If you have thought about it, chances are that someone else has as well; in this case, intelligence agencies.

Why is this not a big deal?

Just as providers and adversaries could intercept landlines through a PBX, wire taps and even exchanges, surely the same principles/assumptions would exist for mobile technologies?

All the same information security networking rules apply: if you have anything of importance to transmit over a third-party network, assume that the third party is not trusted and make sure you have some form of end-to-end encryption in place (like the couplers used in spy movies), or use an alternative method.

Canals, Law Firms and InfoSec – Lessons learned from the Mossack Fonseca data breach

Another day, another leak.

This time the victim of the leak is Mossack Fonseca, a Panamanian law firm exposed for facilitating shady tax arrangements. The size of the breach is enormous; the volume alone could displace what Panama is traditionally known for, the canal.

Naturally, findings from the breach would never see the light of day but speculation is always fun so…what could we learn from this breach?

1. Trust is king otherwise use identity management and access control

The locks in the canal serve to direct water to the right places.

No one knows if this was an inside job, but one thing is certain: ‘John Doe’, who carried out the breach, had the motivation to leak. If this was an inside job, this person would have had a lot of access within the organisation. Identity management, separation of duties and access control limit the amount of data any one individual can reach. In any organisation handling sensitive data, all of this is very important to limit the potential loss of data to a disgruntled employee.
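As a minimal sketch of the idea, access is denied by default and each role reaches only the categories its duties require. The roles and data categories below are hypothetical:

```python
# Hypothetical roles and data categories for a law firm; a real
# deployment would use a directory service, not a dict.
ROLE_PERMISSIONS = {
    "partner": {"client-files", "billing", "archives"},
    "lawyer": {"client-files"},
    "it-admin": {"backups"},
}


def can_access(role: str, data_category: str) -> bool:
    """Deny by default: a role reaches only what its duties require."""
    return data_category in ROLE_PERMISSIONS.get(role, set())


# A single lawyer cannot trawl the firm's entire archive, which limits
# how much data any one insider could walk out with.
print(can_access("lawyer", "client-files"))
print(can_access("lawyer", "archives"))
```

The design choice that matters is the deny-by-default lookup: an unknown role or category yields no access rather than an error or an implicit allow.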

2. Don’t forget physical access

A hole in the lock allows water to seep through where it’s not supposed to.

John Doe could have access to backups, tapes, systems, USB ports, you get the idea.

3. Are you looking – Egress points

If no one is inspecting the locks for leaks, how do you know if there is a problem?

Süddeutsche Zeitung, the newspaper reporting the breach, acquired about 2.6 terabytes of data. Let’s assume for a second that the leaker was not an insider who stole mirrored drives from Mossack Fonseca’s servers or copied the data to a USB hard drive: how did 2.6 terabytes of data get out?

If it was done remotely, that’s 2.6 terabytes over the wire. If you ran a SOC/NOC in a law firm, the things you should be keeping an eye out for include mass exfiltration, out-of-hours transfers and any other anomalous activity. Sure, John Doe could have been drip-feeding the data out, but at 2.6 TB that would take an excruciatingly long time. And if your NOC is not talking to your security team, that too is a problem.
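As a rough sketch of what such egress monitoring might look for, here are the two checks just mentioned: mass transfers and out-of-hours activity. The record layout and thresholds are illustrative, not taken from any particular SIEM:

```python
from datetime import datetime


def flag_egress(records, volume_threshold=50 * 2**30,
                business_hours=range(8, 18)):
    """Flag hosts moving unusually large total volumes, or moving any
    data outside business hours. Thresholds are illustrative."""
    totals, alerts = {}, []
    for ts, host, nbytes in records:
        totals[host] = totals.get(host, 0) + nbytes
        if ts.hour not in business_hours:
            alerts.append((host, "out-of-hours transfer"))
    for host, total in totals.items():
        if total > volume_threshold:
            alerts.append((host, "mass egress"))
    return alerts


# 80 GiB leaving a file server at 2:30 am should raise both flags.
records = [(datetime(2016, 4, 1, 2, 30), "fileserver-01", 80 * 2**30)]
print(flag_egress(records))
```

Drip-feeding would dodge the volume check, which is why the time-of-day and behavioural checks matter as well.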

4. Data loss protection is not a silver bullet

Just stopping blue coloured liquid will not stop leaks.

Süddeutsche Zeitung reports that the haul consisted of e-mails, PDF files, photos and excerpts of an internal Mossack Fonseca database. A quick review of the leaked documents shows that Mossack Fonseca was handling both structured and unstructured data. Documents derived from a predefined template (eg: official company letterhead) form structured data. Passports, share certificates and other documents that do not adhere to MF’s document management standards are classified as unstructured data. Documents can also be tagged specifically for DLP. DLP works by detecting structured or tagged documents being ‘moved’ to an ‘unauthorised’ location, at which point it blocks the movement or triggers an alert for further action.

To do DLP right, MF would have had to identify all critical/sensitive documents and tag or convert them to a standard format before DLP could pick them up. This requires a lot of work, and failure to do so means data could slip right under the watchful eye of DLP. Not to mention that there are ways to circumvent DLP, which segues nicely into our final point.
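A toy sketch of the tag-based mechanism described above: the tag string, classifications and destination names are all invented for illustration, and real DLP products work on far richer signals than a substring match.

```python
# Destinations each classification may move to; names are hypothetical.
AUTHORISED = {
    "confidential": {"internal-share", "dms"},
    "public": {"internal-share", "dms", "email", "usb"},
}


def dlp_decision(doc_text: str, destination: str) -> str:
    """Block a tagged document headed somewhere it should not go.

    Untagged documents default to 'public', which is exactly the gap
    in coverage the text warns about.
    """
    tag = "confidential" if "[MF-CONFIDENTIAL]" in doc_text else "public"
    return "allow" if destination in AUTHORISED[tag] else "block"


print(dlp_decision("[MF-CONFIDENTIAL] share certificate ...", "email"))
print(dlp_decision("[MF-CONFIDENTIAL] share certificate ...", "dms"))
```

Note the failure mode baked into the sketch: a sensitive document that was never tagged is treated as public and sails through, which is why the tagging effort is the hard part.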

5. Security is like an onion

A failure in one lock should not cause the entire system to fail.

The hot topic of encryption makes another entrance. Encrypted data makes life hard for inspection-based systems such as DLP, and that’s assuming an inspection-based system can detect the encryption and decrypt the data in the first place. A tagged document could be hidden in a password-protected ZIP/RAR/<insert favourite exfiltration format here>, and if you’re lucky, an inspection-based system may only log the activity. In this scenario, you will need all of the above to reduce the risk of a data breach occurring.
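One common heuristic for spotting encrypted or compressed payloads that content inspection cannot read is Shannon entropy: well-encrypted data looks like random bytes and sits near the 8 bits-per-byte maximum. A short sketch (the thresholds a real system would alert on are a tuning decision, not shown here):

```python
import math
import os
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Empirical entropy in bits per byte; well-encrypted or compressed
    data sits close to the 8.0 maximum, natural language much lower."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())


plaintext = b"the quick brown fox jumps over the lazy dog " * 100
random_blob = os.urandom(4096)  # stands in for an encrypted archive

print(round(shannon_entropy(plaintext), 2))   # natural language: low
print(round(shannon_entropy(random_blob), 2)) # near 8 bits/byte: suspicious
```

Entropy alone only says “this looks encrypted or compressed”, not whether the content is sensitive, so it complements rather than replaces the other controls above.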

There may be other lessons learnt from this breach, feel free to share below.

Sources:
http://panamapapers.sueddeutsche.de/articles/56febff0a1bb8d3c3495adf4/
https://www.documentcloud.org/public/search/Source:%20%22Internal%20documents%20from%20Mossack%20Fonseca%20%28Panama%20Papers%29%22


The ethics of vulnerability disclosure

Badlock, a newly publicised vulnerability, is making the rounds with a catch: the patch is not available for another three weeks. However, the marketing hype is already in full swing.

What makes this vulnerability big is the hint that it affects SMB, used in Samba/Windows file sharing; and as Samba is open source, vulnerability researchers good and bad will be scrambling through the code.

However, given the widespread reach of the vulnerability, the hype is not called for. Three weeks is a long time in information security, and the widespread use of SMB means organisations could be sitting ducks.

People who wield power must act sensibly: if you know something that could affect a lot of people, use that power with discretion, report vulnerabilities to vendors and ensure that a working fix is applied by customers before causing mass mayhem. Defenders already have a hard time keeping people out; we shouldn’t be fighting amongst ourselves.

As always, opinions welcome. Thanks to Steve Ragan (@SteveD3) for exposing this.

Ref:

http://www.csoonline.com/article/3047221/techology-business/company-behind-the-badlock-disclosure-says-pre-patch-hype-is-good-for-business.html

Cross published on Peerlyst:

https://www.peerlyst.com/posts/the-ethics-of-vulnerability-disclosure-ed-yuwono

 

edit: 24/3: typo

Dealing with insider threats

An interesting survey just came out stating that employees would sell passwords for US$1,000.

The temptation of a quick buck can be hard to resist, and a password may be all an attacker needs to gain access to a system.

Just last month, Apple employees were also offered bribes to disclose their credentials.

Insider threats are real, but as security professionals, working with other departments helps provide an understanding of the state of play.

For example: if you’re a service provider for a high-profile organisation, you may be the target of such bribes.

How could you reduce this risk?

  • The human factor is important: organisational morale, culture and trust must be strong to help keep staff from being swayed by bribes.
  • Reporting is also key: staff must be encouraged to report such approaches.
  • Suman Sourav (@SumanS0urav) makes a good point about monitoring: detective controls for anomalous behaviour are important.
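A minimal sketch of such a detective control, comparing today’s activity against a per-user baseline. The numbers and the three-sigma threshold are illustrative only:

```python
def anomalous(baseline_mean: float, baseline_std: float,
              todays_count: int, k: float = 3.0) -> bool:
    """Flag activity more than k standard deviations above the
    user's own historical baseline; numbers are illustrative."""
    return todays_count > baseline_mean + k * baseline_std


# An employee who normally touches ~40 records suddenly pulls 500,
# as a bribed insider harvesting credentials or data might:
print(anomalous(40, 10, 500))
print(anomalous(40, 10, 45))
```

Per-user baselines matter because “normal” differs wildly between roles; a flat organisation-wide threshold would either drown the SOC in alerts or miss the quiet harvester.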

It’s not easy to stop insider threats, but a culture conducive to trust will serve to reduce them.

References:

http://www.infosecurity-magazine.com/news/employees-would-sell-passwords-for

http://www.ibtimes.co.uk/apple-employees-ireland-offered-20000-bribes-pass-login-ids-hackers-1543042

Operations security (OPSEC): part of a balanced organisational security awareness diet

Summary:

  • The goal of OPSEC is to limit an adversary’s capability to gather information about individuals or the organisation
  • Implement OPSEC just as you would any other control
  • Assess whether OPSEC could aid your existing information security program.

Authentication based on something you know is the easiest and cheapest way to prove someone’s identity; however, it is also one of the easiest to defeat. With personal privacy not being as private as it used to be, it is hard to establish a universal authentication system based on something you know.

In the age of mass surveillance, social media and the Internet of Things (IoT) track individuals and record their secrets. Constantly changing social media features and IoT devices gathering detailed information make maintaining a high level of privacy a challenge. Criminals leverage lapses in privacy to exploit organisations that rely on weak authentication, cleaning out innocent victims.

The absence of a suitable technological solution (at the moment) makes the security of something you know rest solely with the subject.

This is where Operations Security (OPSEC) comes in.

Operations security conditions individuals to think about limiting the information they disclose to the public. The military and other clandestine groups have employed OPSEC for many decades, and it serves to limit the effectiveness of reconnaissance on individuals.

Considering that offensive teams such as penetration testers and social engineers leverage intelligence gathering as one of their first stages, information starvation would limit or delay the opportunity for a successful ‘attack’.

Incorporating OPSEC into organisational security awareness does not require staff to undergo military-style training. Instead, consider:

  • whether there is a gap that OPSEC could address (eg: organisational or private use of social media, job listings, etc)
  • the level of detail the program will cover. This needs to strike a balance between practicality, freedom and security.

Two examples where OPSEC in security awareness would be valuable include:

  • Compartments – defining boundaries between professional and private life, or segmentation within the professional sphere, ie: secret vs internal
  • Need to know – Do you *have* to communicate this information? Is it necessary for them to function? Could the information be used against the individual or organisation? For example, with security questions and password resets: does your online trader really need to know your date of birth?

Before OPSEC can be deployed throughout the organisation, there are some considerations:

  • OPSEC augments information security and should be part of an existing security awareness program
  • OPSEC may not be for everyone; some specific groups will benefit from it more than others
  • Does the culture of the organisation allow for OPSEC to be implemented? Will staff allow their organisation to interfere with their personal lives?
  • People are human and mistakes happen; OPSEC serves to limit the fallout.
  • OPSEC is expensive, not only financially: continuously maintaining good OPSEC is quite hard to do.

There may be some debate as to the effectiveness and application of OPSEC; however, a successful campaign could give an adversary a run for their money.

If you are running an OPSEC campaign in your organisation, I’d like to hear about it.

Some information

http://www.dailymail.co.uk/news/article-3475126/Security-alert-NatWest-online-banking.html
https://www.owasp.org/index.php/Choosing_and_Using_Security_Questions_Cheat_Sheet
http://www.dodea.edu/offices/safety/opsec.cfm

Integrating Security with Asset management Part 1

Summary:

  • First in a sequence of blogs outlining how to implement security measures with business support.
  • Know what you know and investigate what you don’t know
  • Leverage existing systems but apply some intelligence to achieve security goals
  • Refine by identifying weaknesses, fixing them and monitoring for improvements.

Simple question: Can you tell me right now, who has disabled their AV?

It’s amazing how many organisations would not be able to give you an answer to that.

‘You cannot protect what you can’t see’

Consider these scenarios:

  • decentralised procurement/budgets lead to authorised or unauthorised purchases, such as a computer that, for one reason or another, is not tracked in the asset register; the computer is compromised, leading to a security issue.
  • staff connect a custom-built system to the network; the system gets infected and causes issues on the network, leading to a security issue.

I’ve been an advocate of making asset management part of the security function for quite some time. It may not be managed by the security team, but the security team could and should have some input and exposure. The reasoning behind this stems from the implementation of the SANS 20 Critical Security Controls.

The first control is to create an Inventory of Authorised and Unauthorised Devices

http://www.sans.org/critical-security-controls/control/1

The win is that you know what to protect within your organisation, and the inventory provides statistics. As mentioned, those statistics can then be used for a lot of things: metrics, milestones, forecasting, planning, KPIs, etc.
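With an inventory in place, the opening question (“who has disabled their AV?”) becomes a simple query. A sketch, assuming a hypothetical register layout; a real asset register or CMDB would have its own schema and API:

```python
# Hypothetical asset register records; real registers/CMDBs will differ.
assets = [
    {"host": "wks-001", "owner": "alice", "av_enabled": True},
    {"host": "wks-002", "owner": "bob",   "av_enabled": False},
    {"host": "wks-003", "owner": "carol", "av_enabled": True},
]


def disabled_av(register):
    """Return the owners of machines reporting AV disabled."""
    return [a["owner"] for a in register if not a["av_enabled"]]


print(disabled_av(assets))
```

The query itself is trivial; the hard, valuable work is keeping the register complete and current, which is exactly what the rest of this post is about.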

Remember, before any system is implemented, business support and approval must be obtained, otherwise the project is doomed to failure.

No matter how good your asset management is, you will get rogue purchases and staff circumventing the system. This is where planning, policy and processes are required to stop the leaks.

Not everything has a technological solution; in fact, planning/strategy, policies, procedures and guidelines help drive the technology selection process for an organisation.

  • Planning/strategy allows the right decisions to be made so that the company infrastructure can be designed to support the organisation. This also provides uniformity towards a common goal for all staff.
  • Policies are required to steer procurement decisions for the organisation and to curb deviations from the corporate norm (for procurement and delinquent departments).
  • Standards are defined to ensure alignment with the policies for procurement, IT and all staff. As mentioned, standards are also there for system identification.
  • Guidelines assist planners by outlining what could be used within an environment.

With these in place, the laws of the land are set out and referred to by all.

A good resource for policy creation is the SANS Security Policy Project:
http://www.sans.org/security-resources/policies/

The primer is a good read.

Having these in place will help reduce the number of ‘rogue’ devices within the organisation as well as set the framework for a solid asset management system.

Security is a balance between offence and defence.

Summary:

  • How do you know that your defences are good if you don’t test them?
  • What lengths would your adversaries take to compromise your systems?
  • Adopt the same offensive mentality as your adversary in order to test your defences

You have your security strategy, you know (most of*) your threats/vulnerabilities, you implemented a top-down well oiled security program, policies, procedures, processes, technology, etc. You’re now safe…

Really…?

As mentioned in a previous blog, security must be tested either by you or by your threat.

The Mandiant Threat Landscape report shows that intruders can be on your network for 243 days before being detected. (1)

Let’s be ignorant and ask a few questions:

  1. How did they get through our defences?
  2. Why did it take so long for them to be detected?

etc…

Let’s assume that all the policies and technology are capable of keeping the adversary out: how would *you* know? Business is evolving, computational power is getting cheaper, interconnectivity is expanding, attack information is easily accessible and adversaries adapt. With all of that in mind, is your security program evolving?

This gap between the security program and the adversaries’ capabilities is the current risk to the organisation.

From the last blog, an assessment would have been made to identify an organisation’s assets and threats. As part of the threat analysis, some consideration would have been given to the likelihood and the capabilities of each threat.

Risk mitigation will consider the threat analysis then apply the appropriate risk treatment.

A lot of threats can be addressed by going through a ‘checklist’; this would be considered a ‘baseline’. However, to stop a determined adversary, advanced threats and the like, some innovation is required. A penetration tester needs to be innovative and think outside the square to find ways in; this is where innovation is practised.

Back to question 1: to be a good security professional, you have to think like *the* threat, the adversary, a pen tester, in order to minimise the risk of a threat compromising your organisation.

As for the second question, it is obvious that the adversary was not detected at the time of entry, and that raises other questions: how well oiled is your security program? Is it measurable? Were the right vulnerabilities addressed?

To answer this, you need to put your defensive hat on…

1. https://www.mandiant.com/threat-landscape/