Security Policies IRL (Part 1)

If you’ve been following the news lately, you will have heard about a large real estate group in China with a large debt problem. In short, the Chinese government decided to reduce risk by limiting the amount of debt held by real estate companies. To accomplish this, they implemented a new policy that could play out into a major disaster.


  • Implementing a policy without consultation could lead to chaos
  • Creating policy requires lots of communication and solid knowledge of the macro and micro environment
  • Consultation with functional units and effective policy metrics provide good indicators and inputs for current and new policy

Now here lies the problem: implementing a new policy in a short space of time meant that companies heavily exposed without adequate cash flow were suddenly unable to service their debts. Evergrande is one of those companies with large unserviceable debts.

You have probably seen this in the InfoSec realm: policies that are dysfunctional and out of place in the real world. We’re not talking about the “no personal floppy disks” policy, but rather policies that prevent the business from, well, doing business. If you have ever heard the phrase “sorry, you can’t do that because it’s against security policy”, then that is the problem right there.

Take a step back for one second and remember why you are creating the policy in the first place: to protect and enable the business to operate in a secure manner (add your own seasoning to that, but generally speaking it is in the same spirit). However, if you are creating policies without business consultation, you’re effectively creating policy in a vacuum, leading to distrust, policy circumvention and ultimately, policy failure.

Back to Evergrande for a second. This is not a suggestion that Beijing created its policy in a vacuum; they knew companies were in debt and had to take corrective action. But they pulled the plug on available credit expecting the problem to resolve itself or, worse still, actually implemented the policy without stakeholder consultation.

Beijing is now playing a game of chicken: either Evergrande resolves its debt issue or Beijing will step in. Why the standoff? Beijing does not want an economic meltdown, citizens losing funds, supply chains closing and so on; at the same time, it can’t afford to bail Evergrande out, as other companies would then line up to request funds. Damned if you do, damned if you don’t: the intent was good but the execution, not so much.

Unless you’re operating within an authoritarian organisation, a democratic approach to the solution is required to enable the business. You still want the business to curtail risk, while consultation allows the policy makers to understand the current situation and devise an appropriate strategy to level out risk, over an appropriate timeframe, to a mutually agreed, manageable level.

Well, where do I start? How does one know if a policy is causing fires within the organisation? Simple: ask the people on the ground for feedback. For existing policies, if the people on the ground have used the phrase “it’s against security policy” enough times, well, it’s time for a review. Naturally, there are reasonable bounds within which the business may operate, but if the phrase is being used time and time again, it is time to change the policy or devise a solution capable of fulfilling the request.

Does this mean that the policy is bad? Not necessarily. It could be as simple as tweaking the wording, providing standards and guidelines to supplement the policy, or a test case to identify how the policy could satisfy all parties. The point is that none of these solutions can be completed in a vacuum; they all require constant, regular consultation.

Secondly, if you noticed that the phrase sounds like a policy metric, that’s because it probably is. Consultation opens lines of communication for feedback, and any feedback, positive or negative, is a metric. No consultation, no feedback, no metric. Simple. What about other ways of obtaining policy metrics? You may not have time to consult all the business units, but communicating that there’s a feedback line is a start, or better still, speak to the front-line staff and establish regular catch-up sessions.

Other metrics could be obtained through other business functions; Shadow IT is just one example, and the fact that it exists could suggest that a policy is hindering the business. Assuming you have a procurement function, significant IT purchases for assets or services that are not completed through approved procurement channels could suggest a policy failure. However, you would never know if you have not established a communication channel with Finance/Procurement.

Another way is to employ technical means to verify the effectiveness of the policy. It could be as crude as leveraging your support ticketing system and categorising the tickets. If there is trust in the establishment, expect constructive feedback. Unless there is a structural problem with your policies, you should not expect an avalanche of responses: either the policy is working, or there is no trust and your policies are being circumvented.
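As a rough sketch of the ticket-categorisation idea (the ticket data and trigger phrases below are invented for illustration; in practice you would pull tickets from your ticketing system’s API or a CSV export), counting how often requests are refused on policy grounds could be as simple as:

```python
# Hypothetical ticket export: (ticket_id, free-text summary) pairs.
TICKETS = [
    (101, "Blocked: file transfer to client denied, against security policy"),
    (102, "Request for cloud storage rejected - against security policy"),
    (103, "Printer not working"),
    (104, "USB access denied citing security policy"),
]

# Phrases that suggest a request was refused on policy grounds.
POLICY_PHRASES = ("against security policy", "security policy")

def policy_friction_metric(tickets):
    """Count tickets where a request was refused on policy grounds.

    Returns (matching_count, total_count) so the ratio can be tracked
    over time as a crude policy-friction metric.
    """
    matches = [t for t in tickets
               if any(p in t[1].lower() for p in POLICY_PHRASES)]
    return len(matches), len(tickets)

hits, total = policy_friction_metric(TICKETS)
print(f"{hits}/{total} tickets cite security policy as a blocker")
```

If that ratio climbs month on month, that is your review trigger.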

So what happens if my policies are circumvented? Well, aside from implementing a dictatorship, asking your IT support and security teams for ways to look for holes would be a first step. They may also provide information on technical solutions, such as infrastructure monitoring, to provide metrics.

Now, this is by no means a comprehensive list; context is required to devise a suitable approach and an effective solution. But let it be known that unchecked, ineffective policies lead to a divisive environment ripe for abuse. Change is possible, and assistance can be provided, by starting one simple constructive dialogue.

In the next blog we will discuss policy development.


Australian Census

Australia’s #CensusFail, but is it really?

Full disclosure: I disagree with the amount of detail collected in the census and the way that the data will be shared. However, this is not a post on privacy concerns, but rather a post on common myths surrounding incident response and what we can learn from them.

Some (mis)information that has been going around various sources (Twitter, news, etc) has prompted this post. As the situation is still unravelling, there is a lot of information that has not yet been considered or discussed. So, I welcome any constructive discussion to fill in the gaps.


9th of August 2016, the most important day for the Australian Bureau of Statistics (ABS): National census day. A day where massive amounts of data will be collected on individuals providing a snapshot of the nation. Various government bodies will use this valuable data to plan out major projects and allocate resources for the next decade. The data collected is quite rich and is of major value to many entities, both good and bad. The Australian population is about 23 million people and with the census ‘open’ for a short period of time, this big bang launch requires a lot of planning and carries a lot of risk. Kudos to the Australian government for leveraging technology on such a large scale.

The successful management of any teething issues is a sum of risk minimisation and incident response.

Challenge accepted: Knowing your adversary

As with all governments, there are adversaries: political activists, criminal syndicates and nation state actors, just to name a few. Coupled with motive and capability, a worthy adversary could mount a successful attack. The breach of the United States Office of Personnel Management (OPM) is just one example of a successful attack involving a well-prepared adversary.

Playing defence on this scale is not for the faint-hearted, so spare a thought for any (InfoSec) professional assisting with this mammoth project.

What happened?

In short, the Census suffered four DDoS attacks before the service was taken offline. The seriousness of the attack prompted the Australian Signals Directorate (ASD) to investigate.

Dispelling misinformation

The points below serve to dismiss or debate any misinformation seen to date.

Misinformation #1: Bandwidth exhaustion is not the only form of denial of service attack.

A Distributed Denial of Service (DDoS) attack exists in many forms, the best known being bandwidth consumption. Another type of DDoS is resource exhaustion, where finite system resources are consumed until the service fails; examples include, but are not limited to, state-based TCP stack attacks and memory- or CPU-based attacks. This form of DDoS does not require a significant amount of bandwidth to be successful.

Hence, determining the presence of a DDoS through network-based analysis alone is inconclusive.
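To illustrate why host-side state matters, here is a hedged sketch (the connection snapshot and threshold are invented for illustration) of spotting one resource-exhaustion symptom, a pile-up of half-open TCP connections, of the kind you would see in parsed `ss -tan` or `netstat -tan` output on the server under load:

```python
from collections import Counter

# Hypothetical snapshot of TCP connection states on the server.
CONN_STATES = ["ESTABLISHED"] * 40 + ["SYN-RECV"] * 950 + ["TIME-WAIT"] * 10

HALF_OPEN_THRESHOLD = 500  # tune to your server's actual backlog size

def detect_state_exhaustion(states, threshold=HALF_OPEN_THRESHOLD):
    """Flag a possible state-table attack (e.g. a SYN flood): many
    half-open connections can exhaust the TCP backlog while using
    very little bandwidth, so a bandwidth graph would look normal."""
    counts = Counter(states)
    return counts["SYN-RECV"] > threshold, counts

suspicious, counts = detect_state_exhaustion(CONN_STATES)
print("Possible resource-exhaustion DDoS:", suspicious)
```

The point of the sketch: the bandwidth graph and the state table tell different stories, so you need both.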

Misinformation #2: Just because there is an ‘attack’ it doesn’t mean that the attack was successful.

Without knowing the underlying architecture of the system, it is very difficult to conclude what vulnerabilities were present that led to a successful denial of service.

Anyone who has worked with an IDS/IPS/etc will understand that an ‘attack’ is only successful when it is targeted against a specific vulnerability present in a particular system. Whether a vulnerability was present at the time of attack will not be known until a thorough investigation has been completed.

Any ‘intelligence service’ laying claim that an ‘attack’ has occurred against an organisation can only be definitive once the ‘attack’ has been matched with an underlying vulnerability. With this in mind, without probes/detectors/etc within the victim organisation, no service can make this claim.

Misinformation #3: A DDOS is not a data breach however, it can be used to provide cover for one.

Risk minimisation is the best form of prevention any organisation can undertake before commissioning a system. It is impossible to protect against all possible scenarios, so enter the obligatory Sun Tzu quote:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

In other words, knowing yourself and your adversary is very important to devise a comprehensive risk minimisation plan.

Incorporated into any worthy risk minimisation strategy is an incident response plan, commonly known as a ‘Plan B’. Plan B caters for the other things that could go wrong: unforeseen events or things that cannot be treated during risk minimisation, such as natural disasters and data breaches. Knowing you have a worthy, determined adversary is one thing; knowing what they are capable of, or what they will do, is another.

An incident response plan is both damage control and public relations rolled into one, with the main objective being restoring operations as soon as possible. History has shown that diversionary tactics are one way to consume human resources, allowing attackers to proceed with their main objective, which could be a data breach. There are many references to diversionary tactics being used; one is last year’s attack on TalkTalk.

It must be noted that only time will tell whether or not a data breach did take place which will be explained below.

Misinformation #4: ‘Cyber’ Attribution is extremely hard. A host used in an attack does not mean that the owner of that host is the attacker.

Pivoting is a powerful technique whereby an ‘attacker’ gains access to a host in order to progress an attack. Used by penetration testers (“red teams”) and attackers alike, pivoting could serve several purposes, such as gaining access to a critical management server or disguising an attack.

Pivoting to a host could also be used to throw off an investigation: the digital equivalent of framing someone else for nefarious/questionable activity. A savvy attacker would incorporate some form of pivoting to cover their tracks or deflect attention away from themselves.

To compound the issue, obtaining evidence for attribution is met with many challenges: availability and quality of residual evidence, obtaining permission from the owner of the host to gain access, political/legal boundaries and so on.

The net result makes attribution very difficult.

Misinformation #5: Just because the entire story has not been released, it doesn’t mean that there is a cover up. A thorough investigation takes time.

The duration of an investigation is determined by the number of moving parts requiring analysis. Evidence could be destroyed, incomplete or falsified, requiring more time and resources to determine what occurred during an event. Complex systems require a significant amount of time for investigators to gather evidence, analyse, correlate and test theories before presenting their findings to an audience, especially one wanting heads to roll or determined to scrutinise the smallest of details.

An incomplete, bad or botched investigation could result in a loss of credibility for the investigator and the organisation or, worse still, a lawsuit.

The pressure on investigators to obtain factual information is paramount and they would rather avoid any speculation until they are certain that their findings are rock solid.

As InfoSec professionals, what can we do?

  • Avoid planting conspiracy theories; instead support, defend and assist the hard-working investigators who are under a lot of pressure.
  • Correct any false or misleading information with factual information.
  • Learn from what has happened: use this as a case study and, where appropriate, communicate to your superiors what could be done so that the fallout from something like this can be minimised.
  • Investigators/incident responders: treat incoming information with a pinch of salt and gather conclusive evidence to support your theories.

Understandably, this is by no means the end of this saga; however, it is definitely not the time to spread rumours or play the blame game.

Allow the incident responders, ABS, ASD and other investigators to conduct what is necessary in order to form a conclusion based on fact, rather than crucifying the innocent. To the Australian government: the public will demand transparency about what has happened, so please put the blame game aside and let us all learn from it. It is taxpayer money, so let us all be good sports; everyone deserves to know the truth (granted, it will be 10 years before another census happens).


Online dating at the organisation’s expense – covered by your InfoSec policy

A few months after the sensational Ashley Madison data breach, another dating website has been breached. Members of Beautiful People now have their personal details floating around on the internet.

Previously undisclosed intimate details such as sexual preferences and orientation will have a social impact on the unfaithful and their close family members. The personal impact has serious consequences beyond the financial, such as strained family relationships, reputational damage and adverse health effects (e.g. stress).

Organisational policy is there to ensure that staff behave in line with organisational expectations. Just as staff are not allowed to take bribes or turn up to work intoxicated, they are expected to ensure that their behaviour online does not tarnish the organisation’s reputation. When allowing personal use of corporate systems, organisations must consider the external impact of a staff member’s actions.

An organisation’s security policy must reflect the level of professionalism expected by senior management and importantly their customers. Policy, communication, education, transparency (and incident response/public relations) must be in place to protect all parties involved.

After all, staff who behave badly are a reflection on the organisation as a whole.

Mobile Phone intercept, it’s not new

A 60 Minutes report demonstrated how mobile networks could be intercepted and that this has been an open secret within intelligence agencies.

Mobile intercepts are done by accessing the mobile network infrastructure shared by providers across the world. This infrastructure utilises a protocol called SS7, which handles call connections and subscriber information. So anyone with access to SS7 can intercept subscribers on the network just about anywhere in the world.

These weaknesses were demonstrated as early as 2010; Karsten Nohl, the person who demonstrated the intercept to 60 Minutes, presented his findings in 2010 and again in 2014. If you have thought about it, chances are that someone else has as well. In this case, intelligence agencies.

Why is this not a big deal?

Just as providers and adversaries could intercept landlines through a PBX, wire taps and even exchanges, surely the same principles/assumptions would exist for mobile technologies?

All the same information security networking rules apply: if you have anything of importance to transmit over a third-party network, assume that the third party is not trusted and make sure you have some sort of end-to-end encryption in place (like the couplers used in spy movies) or use an alternative method.

Canals, Law Firms and InfoSec – Lessons learned from the Mossack Fonseca data breach

Another day, another leak.

This time the victim of the leak is Mossack Fonseca, a Panamanian law firm exposed for facilitating shady tax arrangements. The size of the breach is enormous; the volume alone could displace what Panama is traditionally known for: the canal.

Naturally, the findings from the breach will probably never see the light of day, but speculation is always fun, so… what can we learn from this breach?

1. Trust is king otherwise use identity management and access control

The locks in the canal serve to direct water to the right places.

No one knows if this was an inside job, but one thing is certain: ‘John Doe’, who conducted the breach, had the motivation to carry out the leak. If it was an inside job, this person would have had a lot of access within the organisation. Identity management, separation of duties and access control limit the amount of data any one individual can access. In any organisation handling sensitive data, all of this is very important to limit the potential loss of data to a disgruntled employee.
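As a minimal sketch of the separation-of-duties idea (the roles, resources and numbers are hypothetical, not from any real system), access can be denied by default, with each role confined to its own compartment, and the worst-case exposure of any single identity measured:

```python
# Hypothetical role-to-resource grants: no single role can reach
# every client file store.
ROLE_GRANTS = {
    "partner_team_a": {"clients_a"},
    "partner_team_b": {"clients_b"},
    "it_admin": {"backups"},  # admins manage systems, not client data
}

def authorised(role, resource):
    """Deny by default; a role sees only its own compartment."""
    return resource in ROLE_GRANTS.get(role, set())

def max_exposure(grants):
    """Largest share of resources any single role can reach -
    a crude separation-of-duties metric for 'how bad is one leaver?'."""
    all_resources = set().union(*grants.values())
    return max(len(r) for r in grants.values()) / len(all_resources)

print(authorised("partner_team_a", "clients_b"))   # cross-compartment: denied
print(f"Worst-case single-role exposure: {max_exposure(ROLE_GRANTS):.0%}")
```

If that exposure figure approaches 100%, one disgruntled John Doe can walk out with everything.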

2. Don’t forget physical access

A hole in the lock allows water to seep through where it’s not supposed to.

John Doe could have had access to backups, tapes, systems, USB ports… you get the idea.

3. Are you looking – Egress points

If no one is inspecting the locks for leaks, how do you know if there is a problem?

Süddeutsche Zeitung, the newspaper organisation reporting the breach, acquired about 2.6 terabytes of data. Let’s assume for a second that the leaker was not an insider who stole mirrored drives from Mossack Fonseca servers or copied the data to a USB hard drive: how did 2.6 terabytes of data get out?

If it was done remotely, that’s 2.6 terabytes over the wire. If you have a SOC/NOC in a law firm, the things you should be keeping an eye out for include mass exfiltration, out-of-hours transfers and any other anomalous activity. Sure, John Doe could have been drip-feeding, but with 2.6 TB of data that would take an excruciatingly long time. If your NOC is not talking to your security team, well, that too is a problem.
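A crude version of that out-of-hours/mass-transfer check (the flow records, business hours and threshold below are invented for illustration; real baselines would come from your NOC) might look like:

```python
from datetime import datetime

# Hypothetical egress log: (timestamp, source host, bytes out).
FLOWS = [
    (datetime(2016, 4, 1, 14, 30), "ws-12", 40_000_000),
    (datetime(2016, 4, 2, 2, 15), "srv-7", 300_000_000_000),  # 300 GB at 2 am
    (datetime(2016, 4, 2, 9, 5), "ws-12", 15_000_000),
]

BUSINESS_HOURS = range(8, 19)       # 08:00-18:59, adjust per site
VOLUME_THRESHOLD = 50_000_000_000   # 50 GB per flow, tune to your baseline

def flag_anomalous_egress(flows):
    """Flag flows that are both out of hours AND unusually large."""
    return [
        (ts, host, nbytes)
        for ts, host, nbytes in flows
        if ts.hour not in BUSINESS_HOURS and nbytes > VOLUME_THRESHOLD
    ]

for ts, host, nbytes in flag_anomalous_egress(FLOWS):
    print(f"ALERT {ts:%Y-%m-%d %H:%M} {host}: {nbytes / 1e9:.0f} GB egress")
```

Even a heuristic this simple would make a 2.6 TB haul very hard to miss, provided someone is actually looking at the alerts.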

4. Data loss protection is not a silver bullet

Just stopping blue coloured liquid will not stop leaks.

Süddeutsche Zeitung reports that the haul consisted of emails, PDF files, photo files and excerpts of an internal Mossack Fonseca database. A quick review of the leaked documents shows that Mossack Fonseca was handling both structured and unstructured data. Documents derived from a predefined template, e.g. an official company letterhead, form structured data. Passports, share certificates and other documents that do not adhere to MF’s document management standards are classified as unstructured data. Documents could also be tagged specifically for DLP. DLP works by detecting structured or tagged documents ‘moved’ to an ‘unauthorised’ location, where DLP would then block the movement or trigger an alert for further action.
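As a toy illustration of tag- and pattern-based detection (the classification tag and the passport format below are made up, not from any real DLP product), the core of such a check is just pattern matching over outbound content:

```python
import re

# Illustrative DLP rules: a classification tag plus a structured
# identifier pattern (toy passport format, two letters + seven digits).
RULES = {
    "classification_tag": re.compile(r"\bCONFIDENTIAL\b"),
    "passport_number": re.compile(r"\b[A-Z]{2}\d{7}\b"),
}

def dlp_scan(text):
    """Return the names of the rules the outbound text trips,
    so the caller can block the transfer or raise an alert."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

print(dlp_scan("CONFIDENTIAL - trust deed for client AB1234567"))
print(dlp_scan("lunch menu for Friday"))
```

Notice that the rules only fire on content they can actually see and recognise, which is exactly the limitation the next paragraphs are about.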

To do DLP right, MF would have had to identify all critical/sensitive documents and tag or convert them to a standard format before DLP could pick them up. This requires a lot of work, and failure to do so means that data could slip right under the watchful eye of DLP. Not to mention that there are ways to circumvent DLP, which segues nicely into our final point.

5. Security is like an onion

A failure in one lock should not cause the entire system to fail.

The hot topic of encryption makes another entrance. Encrypted data makes life hard for inspection-based systems such as DLP, and that’s assuming the inspection-based system can detect and decrypt the encryption in the first place. A tagged document could be encrypted in a password-protected ZIP/RAR/<insert favourite exfiltration format here>, and if you’re lucky, inspection-based systems may only log the activity. In this scenario, you will need all of the above to reduce the risk of a data breach occurring.
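One common heuristic for spotting encrypted or compressed payloads that pattern matching misses is Shannon entropy: encrypted data looks close to random, approaching 8 bits per byte, while plain text sits much lower. A minimal sketch (the sample data is invented for illustration, and entropy is a tell-tale, not a silver bullet, since legitimate archives look the same):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8.0 for encrypted/compressed data,
    typically 4-5 for English text."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog " * 100
random_like = os.urandom(4096)  # stands in for an encrypted archive

print(f"plaintext entropy:  {shannon_entropy(plain):.2f} bits/byte")
print(f"ciphertext entropy: {shannon_entropy(random_like):.2f} bits/byte")
```

High-entropy files leaving the network out of hours are worth a second look, which is why this complements, rather than replaces, the egress monitoring above.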

There may be other lessons learnt from this breach, feel free to share below.


The ethics of vulnerability disclosure

Badlock, a newly publicised vulnerability, is making the rounds with a catch… the patch is not available for another three weeks. However, the marketing hype is already doing the rounds.

What makes this vulnerability big is the hint that it affects SMB, used in Samba/Windows file sharing, and as Samba is open source, vulnerability researchers good and bad will be scrambling through the code.

However, given the widespread reach of the vulnerability, the hype is not called for. Three weeks in information security land is a lot of time, and the widespread use of SMB means organisations could be sitting ducks.

People who wield power must act sensibly. If you know something that could affect a lot of people, use that power with discretion: report vulnerabilities to vendors and ensure that a working fix is applied by customers before causing mass mayhem. Defenders already have a hard time keeping people out; we shouldn’t be fighting amongst ourselves.

As always, opinions welcome. Thanks to Steve Ragan (@SteveD3) for exposing this.


Cross-published on Peerlyst.


edit: 24/3: typo

Dealing with insider threats

An interesting survey just came out stating that employees would sell passwords for US$1,000.

The temptation of a quick buck can be hard to resist, but that is all it takes to gain access to a system.

Just last month, Apple employees were also offered bribes to disclose their credentials.

Insider threats are real, but as security professionals, working with other departments helps provide an understanding of the state of play.

For example: if you’re a service provider for a high-profile organisation, you may be the subject of such threats.

How can you reduce this risk?

  • The human factor is important: organisational morale, culture and trust must be strong to help prevent staff from being swayed by bribes.
  • Reporting is also key: staff must be encouraged to report such threats.
  • Suman Sourav (@SumanS0urav) makes a good point about monitoring: detective controls for anomalous behaviour are important.

It’s not easy to stop insider threats, but a culture conducive to trust will serve to reduce them.


Avoiding Cyber Burnout

After a very late night out, something possessed me to check Twitter and up came a tweet:

It’s Saturday but I still have to ask: what was your win for this past week? Everything counts. – @jessysaurusrex Mar 19

It struck my late-night philosophical side, so I asked: why the question?

…I think it’s important, esp, in infosec (bc there’s a high failure/burnout) to refocus on – @jessysaurusrex Mar 19

It got me thinking again in the morning: as leaders/managers/peers, what do we do to reduce burnout?

I used to manage a virtual team for patch management, with members geographically distributed. We all had our respective workloads, but when Patch Tuesday came around, it was all hands on deck, every month, without fail. Maintaining morale under a big workload is hard to do. Just as detecting emotional state is difficult over email, how do you know that the team is keeping well?

In our team, there were some punishing moments but we all managed ok. Part of this was due to the fact that we had short weekly phone conferences.

Why weekly? I wanted to know if there was something in the pipeline that could scuttle the ship. Did another manager just assign a piece of work that would detract from patching? Did a priority one come in?

The meetings also allowed members to openly express any immediate concerns or challenges. Just like the tweet, I wanted to know if we had made any wins between meetings. If we hadn’t, was there anything other members could do to help out? Had anyone else experienced any problems? Was there something I could escalate or offload?

It was a chance for others to collaborate and help another person out.

Some other things that helped:

  • The ‘Vegas’ policy: what happens in the meeting, stays in the meeting
  • Updates: anyone assigned an action item owns it and is expected to follow through. If we have to chase, it is understood to be urgent rather than routine follow-up.
  • Leave management to management: if an SME needs management support, the management rep owns the task, leaving the SME to focus on their turf.

We can’t control the world but we can always help others in need.

I’d like to know if you have any techniques that could help reduce burnout amongst your teams.

Operations security (OPSEC): part of a balanced organisational security awareness diet


  • The goal of OPSEC is to limit the adversary’s capability to gather information about individuals or the organisation
  • Implement OPSEC just as you would any other control
  • Assess whether OPSEC could aid your existing information security program

Authentication based on something you know is the easiest and cheapest way to prove someone’s identity; however, it is also one of the easiest to defeat. With personal privacy not being as private as it used to be, it is hard to establish a universal authentication system based on something you know.

In the age of mass surveillance, social media and the Internet of Things (IoT) track individuals and record their secrets. Continuous changes to social media features and IoT devices gathering detailed information make maintaining a high level of privacy a challenge. Criminals leverage these lapses in privacy to exploit organisations that rely on weak authentication, cleaning out innocent victims.

The absence of a suitable technological solution (at the moment) makes the security of something you know rest solely with the subject.

This is where Operations Security (OPSEC) comes in.

Operations security conditions individuals to think about limiting the information they disclose to the public. The military and other clandestine groups have employed OPSEC for many decades, and it serves to limit the effectiveness of reconnaissance on individuals.

Considering that offensive teams such as penetration testers and social engineers leverage intelligence gathering as one of their first stages, information starvation would limit or delay the opportunity for a successful ‘attack’.

Incorporating OPSEC into organisational security awareness does not require staff to undergo military style training, instead consider:

  • is there a gap that OPSEC could address (eg: organisational or private use of social media, job listings, etc)
  • the level of detail the program will cover. This needs to strike a balance between practicality, freedom and security.

Two examples where OPSEC in security awareness would be valuable include:

  • Compartments – defining boundaries between professional and private or segmentation within professional ie: secret vs internal
  • Need to know – Do you *have* to communicate this information? Is it necessary for them to function? Could the information be used against the individual or organisation? For example: security questions/password reset, does your online trader need to know your date of birth?

Before OPSEC can be deployed throughout the organisation, there are some considerations:

  • OPSEC augments information security and should be part of an existing security awareness program.
  • OPSEC may not be for everyone; some groups will benefit from it more than others.
  • Does the culture of the organisation allow for OPSEC to be implemented? Will staff allow their organisation to interfere with their personal lives?
  • People are human and mistakes happen; OPSEC serves to limit the fallout.
  • OPSEC is expensive, not only financially: continuously maintaining good OPSEC is quite hard to do.

There may be some debate as to the effectiveness and application of OPSEC; however, a successful campaign could give an adversary a run for their money.

If you are running an OPSEC campaign in your organisation, I’d like to hear about it.


I’m back

After a hiatus (being anti social), I’m back on social media.

I’m aiming to blog once a week on strategy, InfoSec and security awareness.

Also caved in to twitter: @IAm0xEd

I’m quite new to Twitter and its idiosyncrasies, so bear with me. If you have any suggestions on who to follow, etc., feel free to let me know (or follow me on Twitter). Constructive feedback is always welcome.