Lessons Learned From the Sony Hack

This article reviews the 2014 Sony hack from a strengths and weaknesses standpoint, based on select parts of the SysAdmin, Audit, Network and Security (SANS) and National Institute of Standards and Technology (NIST) frameworks.  Although this is an older hack, the lessons learned here are still relevant today.

Strengths – A Track Record of Innovation and Multilayered Information Security:
From the boom boxes of the 1980s and the first portable disc player in the early 1990s, to high-quality headphones and speakers, the first HD TVs, a gaming revolution called the PlayStation, and now a massive online gaming network, Sony has been creative and innovative.  This has made them one of the most respected and profitable Japanese companies to date.  Yet this success bred overconfidence in other areas, including information security, even though they still have the talent and the money to be a security leader.  The managerial layering of Sony’s information security team was a good start, even if the head count was too low.  One source stated, “Three information security analysts are overseen by three managers, three directors, one executive director and one senior vice president” (Hill, 2014).  Although top-heavy, at least there was some oversight.

Failure 1 – Poor Culture and Lack of Leadership Support:
Sony’s leadership is on record as not respecting the recommendations of either internal or external auditors.  A quote from an I.T. risk consultancy summarized it this way: “The Executive Director of Information Security talked auditors out of reporting failures related to Access Controls which would have resulted in Sony being SOX (Sarbanes-Oxley) non-compliant in 2005” (Risk3sixty LLC, 2014).  Things like this trickle down the layers of management and become part of the company culture.  Specifically, low-level whistleblowers were silenced even though their I.T. risk arguments were solid: “Sony’s own employees complained that the network security was a joke” (Risk3sixty LLC, 2014).  When this happened, Sony’s leaders failed to execute their fiduciary duty to the board, shareholders, and customers.  They did this so they would not look bad in the short term, yet it cost the company more in the long term.

Failure 2 – Not Understanding Their Baseline:
The baseline is a measure that determines whether you have the right amount of security and security process relative to your business objectives and risk tolerance.  Being below the baseline means risk is too high and an attack or breach is likely.  This is why the baseline changes often and needs to be closely monitored.  For example, when you are producing a politically controversial movie about an unruly world leader with a history of making war threats against his opponents, you should raise your baseline to be on guard against hacktivists.  Sony overly focused on their cash-generating core competencies, and security was at most an afterthought.  According to one source, Sony Pictures had just 11 people assigned to a top-heavy information security team out of 7,000 total employees (Hill, 2014).  For a technology company, that is far too few people working in security.  It is not enough to collect and intelligently review logs, patch software, pen test, red team, and staff one or more war-room-type projects that are bound to come up – all things prudent security would require.

Understanding your I.T. risk baseline requires testing and measurement, and this has to be based on some framework: SANS, NIST, or one of the others.  One former employee described Sony’s failure to comply with any framework as follows: “The real problem lies in the fact that there was no real investment in or real understanding of what information security is.  One issue made evident by the leak is that sensitive files on the Sony Pictures network were not encrypted internally or password-protected” (Hill, 2014).  Had they conformed to the SANS or NIST framework, they would have been required to encrypt the data – see the conclusion.

Failure 3 – Weak Password Policies:
Sony’s password policy was embarrassingly weak – so weak you might think they were deliberately trying to help hackers.  “Employees kept plaintext passwords in Microsoft Word documents” (Franceschi-Bicchierai, 2014).  Even very small companies in the 1990s had policies against that.  Moreover, one source confirmed that the Word files were named with “password” in the file name (Risk3sixty LLC, 2014).  Once in the network, all a hacker has to do is search for a file with “password” in the name and they have it.
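To show just how trivial that search is, here is a minimal Python sketch of the kind of filename sweep an intruder could run once inside the network.  The directory path and file names are hypothetical; any built-in OS search does the same thing.

```python
import os

def find_password_files(root):
    """Walk a directory tree and return any files with 'password'
    in the name -- the trivial search described above."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if "password" in name.lower():
                hits.append(os.path.join(dirpath, name))
    return hits

# Hypothetical usage: sweep a mapped network share.
# find_password_files(r"\\fileserver\shared")
```

Naming a plaintext credential file this way effectively hands an intruder the keys; a password manager or, at minimum, encrypted storage avoids the problem entirely.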

Failure 4 – Late Detecting the Hack and Data Exfiltration:
The intruders walked into Sony’s internal network with ease and began stealing unencrypted sensitive data, apparently without triggering any log alarms.  Sony had not followed data classification, retention, or governance plans – not even checkbox compliance.  If they had, all types of data would not have been mixed together.  One reporter described it this way: “Intruders got access to movie budgets, salary information, Social Security numbers, health care files, unreleased films, and more” (Hill, 2014).  Thus, their network segmentation must have been weak or non-existent.  Health care data should not sit near unreleased film files; they are totally different data types and there is no business justification for commingling them.  Segmenting and encrypting the data would have greatly reduced and delayed any data theft.

Conclusion:
[Figure: SANS top critical controls applied to Sony]
[Figure: NIST Cybersecurity Framework applied to Sony]

References:
Baker, L., & Finkle, J.  “Sony PlayStation suffers massive data breach”.  Reuters.  Published 04/26/11.  Viewed 10/26/16.  http://www.reuters.com/article/2011/04/26/us-sonystoldendata-idUSTRE73P6WB20110426

Franceschi-Bicchierai, Lorenzo.  “Don’t believe the hype: Sony hack not ‘unprecedented,’ experts say.”  Mashable.  Published 12/08/14.  Viewed 10/20/16.  http://mashable.com/2014/12/08/sony-hack-unprecedented-undetectable/#359BD06aEkq6

Greene, Tim.  “SANS: 20 critical security controls you need to add.”  Network World.  Published 10/13/15.  Viewed 10/23/16.  http://www.networkworld.com/article/2992503/security/sans-20-critical-security-controls-you-need-to-add.html

Hill, Kashmir.  “Sony Pictures hack was a long time coming, say former employees”.  Published 12/04/14.  Viewed 10/20/16.  http://fusion.net/story/31469/sony-pictures-hack-was-a-long-time-coming-say-former-employees/

NIST.  “Framework for Improving Critical Infrastructure Cyber Security”.  Published 01/01/2016.  Viewed 10/23/16.  https://www.nist.gov/sites/default/files/documents/cyberframework/Cybersecurity-Framework-for-FCSM-Jan-2016.pdf

Risk3sixty. “The Sony Hack – Security Failures and Solutions.”  Published 12/19/14.  Viewed 10/20/16. http://www.risk3sixty.com/2014/12/19/the-sony-hack-security-failures-and-solutions/

Sanchez, Gabriel.  “Case Study: Critical Controls that Sony Should Have Implemented”.  SANS Institute Information security Reading Room.  Published 06/01/2015.  Viewed 10/20/16.  https://www.sans.org/reading-room/whitepapers/casestudies/case-study-critical-controls-sony-implemented-36022

Demystifying 9 Common Types of Cyber Risk

1)       Crimeware
Crimeware is designed to fraudulently obtain financial gain from either the affected user or third parties by emptying bank accounts, trading confidential data, and the like.  It most often starts with advanced social engineering: the disclosed information leads to crimeware being installed and controlled via botnets, networks of compromised “zombie” computers in distant places used to hide the fraudster’s I.P. (Internet Protocol) trail.  Usually the victim does not know they have crimeware on their computer until they start to see strange bank charges, or an I.T. professional points it out to them.  Often it masquerades as fake but real-looking antivirus software demanding your credit card info, which is then used to commit fraud.

2)       Cyber-Espionage
The term generally refers to the deployment of malware that clandestinely observes or destroys data in the computer systems of government agencies and large enterprises – unauthorized spying by computer, tablet, or phone.  Antivirus maker Symantec described one noteworthy example, the Stuxnet worm, reportedly created by the U.S. government to sabotage the centrifuges in Iran’s nuclear enrichment program, arguably in the name of international security (Fig. 1).

“Stuxnet is a computer worm that targets industrial control systems that are used to monitor and control large scale industrial facilities like power plants, dams, waste processing systems and similar operations. It allows the attackers to take control of these systems without the operators knowing. This is the first attack we’ve seen that allows hackers to manipulate real-world equipment, which makes it very dangerous. It’s like nothing we’ve seen before – both in what it does, and how it came to exist. It is the first computer virus to be able to wreak havoc in the physical world. It is sophisticated, well-funded, and there are not many groups that could pull this kind of threat off. It is also the first cyberattack we’ve seen specifically targeting industrial control systems” (Accessed 03/20/16, Norton Stuxnet Review).

Richard Clarke, the former U.S. National Coordinator for Security, Infrastructure Protection and Counter-terrorism, commented on Stuxnet and cyber war generally in this Economist interview from 2013.

Fig.1.

3)       Denial of Service (DoS) Attacks
A DoS attack attempts to deny legitimate users access to a particular resource by exploiting bugs in a specific operating system or vulnerabilities in the TCP/IP implementation (internet protocols), often via a botnet of zombie computers in remote locations (Fig. 2).  This allows one host (usually a server or router) to send a flood of network traffic to another host (Fig. 3.).  By flooding the network connection, the target machine is unable to process legitimate requests for data.  The targeted computers may thus crash or disconnect from the internet through resource exhaustion, consuming all available bandwidth or disk space (Fig. 3.).  In some cases these attacks are not very harmful, because once you restart the crashed computer everything is back on track; in other cases they can be disasters, especially when you run a corporate network or ISP (internet service provider).
[Fig. 2. and Fig. 3.: Botnet and TCP flood diagrams]
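On the defensive side, one common first line of mitigation against request floods is rate limiting.  Below is a minimal token-bucket sketch in Python; the rates and capacities are illustrative, not a production configuration.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: requests spend tokens,
    tokens refill at a fixed rate, and a flood quickly drains the
    bucket so excess traffic is refused instead of exhausting the host."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request may proceed
        return False      # request dropped or delayed
```

In practice this logic usually lives in a firewall, load balancer, or web server module rather than application code, but the principle is the same: legitimate traffic fits the refill rate, flood traffic does not.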
4)       Insider and Privilege Misuse
Server administrators, network engineers, outsourced cloud workers, developers, I.T. security workers, and database administrators are given privileges to access many or all aspects of a company’s I.T. infrastructure.  Companies need these privileged users because they understand source code, technical architecture, file systems, and other assets that allow them to upgrade and maintain the systems; yet this presents a potential security risk.

With the ability to easily get around controls that restrict non-privileged users, they sometimes abuse what should be temporary access privileges.  This can put customer data, corporate trade secrets, and unreleased product info at risk.  Savvy companies implement multi-layered approvals, advanced usage monitoring, two- or three-factor authentication, and a strict need-to-know policy with an intelligible oversight process.
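The need-to-know policy described above can be enforced in code as well as in process.  Here is a hedged Python sketch of a role check wrapped around a privileged action; the role names and the example function are hypothetical.

```python
def require_role(*allowed):
    """Hypothetical need-to-know guard: a decorator that refuses to run
    a privileged action unless the caller's role is on the allow list."""
    def decorator(fn):
        def wrapper(user_role, *args, **kwargs):
            if user_role not in allowed:
                # In a real system this denial would also be logged
                # for the oversight process mentioned above.
                raise PermissionError(f"role {user_role!r} not authorized")
            return fn(user_role, *args, **kwargs)
        return wrapper
    return decorator

@require_role("dba", "security")
def export_customer_data(user_role):
    """A sensitive operation only certain roles should ever invoke."""
    return "export started"
```

A real deployment would tie roles to a directory service and log every allow/deny decision, but even this small pattern keeps privileged operations from being callable by anyone with shell access.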

5)       Miscellaneous Errors
This is basically an employee or customer doing something careless and unintentional that results in a partial or full security breach of an information asset.  This does not include lost devices, as those are grouped with theft; this is a smaller category.  The 2014 Verizon Data Breach Investigations Report gives an example of this category as follows:

“Misdelivery (sending paper documents or emails to the wrong recipient) is the most frequently seen error resulting in data disclosure. One of the more common examples is a mass mailing where the documents and envelopes are out of sync (off-by-one) and sensitive documents are sent to the wrong recipient” (Accessed 02/21/16, Page 29).

6)       Payment Card Skimmers
This is a method where thieves steal your credit card information at card terminals: often at bars, restaurants, and gas stations, sometimes at bank ATMs, and especially where there is low light, no cameras, or anything else to discourage criminals from tampering with the terminal.

Corrupt employees can keep a skimmer stashed out of sight, or crooks can install hidden skimmers on a gas pump.  Skimmers are small devices that scan and save credit card data from the magnetic stripe (Fig. 4.).  After the card slides through the skimmer, the data is saved, and the crooks usually sell the information over the internet or, if they really want to stay hidden, over the darknet, a non-mainstream part of the internet that requires a special browser or plug-in to access.  Counterfeit cards are then made, bogus charges show up, and the bank eats the costs, which unfortunately drives up the cost of banking for everyone else.  Also, some skimmers have mini cameras that record the PINs typed at ATMs, enabling a more aggressive type of fraud (Fig. 5.).  Here are two images of skimmer technologies:

[Fig. 4. and Fig. 5.: Card skimmer and hidden camera]

7)       Physical Theft and Loss
This includes armed robbery, accidental loss, and/or any type of device or data lost or stolen.  Although some stolen or lost items may never end up breached or used for fraud, sometimes they are, depending on the device, what data is on it, whether the data was encrypted, and whether it could be wiped remotely.

8)       Point of Sale Intrusions
See my 2014 post on the Target Data Breach here for a good example.

9)       Web App Attacks
These incidents were carried out primarily via manipulation of vulnerabilities in input validation and authentication affecting common content management systems like Joomla, Magento, SiteCore, WordPress, and Drupal.

According to the 2015 Verizon Data Breach Investigations Report, these attacks are not only a reliable method for hackers but also fast, with 60% of compromises taking a few minutes or less (Accessed 02/21/16).  With web applications commonly serving as an organization’s public face to the Internet, the ease of exploiting web-based vulnerabilities is alarming (Accessed 02/21/16, 2015 Verizon Data Breach Investigations Report).  According to the Open Web Application Security Project, these are two common types of web app weaknesses (Accessed 02/21/16, 2013, OWASP Top 10 Most Critical Web Application Security Risks):

“i) Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.

ii) XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping (Fig. 6.).  XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, redirect the user to malicious sites, or access unauthorized pages”.
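The injection flaw in part (i) is easy to demonstrate.  The sketch below uses Python’s built-in sqlite3 module with a made-up users table, showing the same lookup built unsafely by string concatenation and safely with a parameterized placeholder.

```python
import sqlite3

# A tiny in-memory database standing in for a real application's data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: untrusted input is pasted straight into the SQL string,
# so the attacker's quote characters become part of the query itself.
user_input = "' OR '1'='1"
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
leaked = conn.execute(vulnerable).fetchall()   # returns every row

# Safer: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()                                    # returns no rows
```

The fix costs nothing: every mainstream database driver supports placeholders, and using them consistently removes this entire class of attack.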

[Fig. 6.: Reflected XSS diagram]
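For the XSS flaw in part (ii), the standard fix is output escaping.  A minimal example using the Python standard library’s html.escape, with a hypothetical piece of attacker-supplied input:

```python
import html

# Untrusted input a user might submit in a form or URL parameter.
untrusted = "<script>alert('xss')</script>"

# Escaping converts the markup characters to harmless HTML entities,
# so the browser renders the text instead of executing it.
escaped = html.escape(untrusted)
# escaped == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```

Modern template engines do this automatically, but any code path that builds HTML by hand must escape untrusted data itself or it reintroduces the flaw.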
Jeremy Swenson, MBA is a seasoned, Intel-certified retail technology marketing and training representative on assignment at Best Buy for clients including Intel, Trend Micro, Adobe, and others.  He also doubles as a senior business analyst and project management consultant.  Tweet to him @jer_Swenson.