Sunday, September 22, 2013

To Remove Sysadmins or not to Remove

Lately, security trends have included the disturbing idea of removing the system administration function or hiding it inside something else. It would be OK if this were the result of automation or simplification, but that is not the case here.

The latest descriptions of incidents at the NSA, a couple of other articles on the subject with various reports, as well as my own experience with system administration, have started to worry me.

The article that caught my eye was "NSA Plans to Eliminate System Administrators", SANS NewsBites, August 13, 2013 (Excerpt #1 below), because it is frankly an insane idea, especially for an organisation that needs security as tight as the NSA does. I cannot be sure, but the same probably applies to other similar organisations. Just think back a few years to how many high-end security companies were hacked.

First of all, we need to agree on what system administration means today, and on what defines a big system and a big data breach.

As for the definition of a system administrator and system administration in IT, I like this rather old quote:

The job of a system administrator is like this: "On one side, you have a set of resources: computers, networks, software, etc. On the other side, you have a set of users with needs and projects--people who want to get work done. Our job is to bring these two sets together in the most optimal way possible, translating between the world of vague human needs and the technical world when necessary."

"Perl for System Administration", by David N. Blank-Edelman, ISBN 1-56592-609-9, First edition.
from July 2000. It precedes some big meltdown in IT but it is still relevant today

It shows the important role of controlling the system, which also assumes understanding the system and its architecture. Basically, it describes someone who is part of the system, not an outsider. This is extremely hard to achieve today because of the sheer size of big systems, and because of policy, management and organizational issues (the same goes for agencies and big corporations, though sanity tends to prevail in the corporate world). In "How Did Snowden Access All That Data?" (August 24 & 26, 2013) (Excerpt #2), from SANS NewsBites, the incident is presented in more detail and shows disturbing similarities to common big data breach incidents. If we look back at the Verizon reports on big data breaches, especially the first one from 2008, what stands out is a set of big unknowns in each compromised system. That report also gives a good description of a "big system" and a "big data breach". These big unknowns become all the more interesting when observed from the system administration perspective (Excerpt #3).

These "unknown" numbers are:
•     unknown data: 66%
•     unknown network connections or accessibility: 27%
•     unknown accounts or privileges: 10%
•     unknown system: 7%
This simply means that the "unknown" issues were off the radar, that no one was responsible for administering those systems, or simply that the system administration was sloppy. In a well-administered system such unknowns should be impossible, so why do they exist and why did no one care about them? Such data becomes visible if you do some system mapping or log data analysis (a small sketch of the idea follows below), so the right question is why no one in management cares, and what the rationale behind this careless approach is.
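To make the point about system mapping and log analysis concrete, here is a minimal sketch of the idea, nothing more: cross-check the official asset inventory against the hosts actually observed in the logs. The file names and CSV columns are purely illustrative assumptions, not taken from any real environment.

# Minimal sketch (illustration only): cross-checking an asset inventory against
# hosts actually observed in logs, to surface "unknown" systems.
# File names and CSV layout are assumptions for illustration.
import csv

def load_hosts(path, column):
    """Read one column of a CSV file into a set of lowercase host names."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f) if row.get(column)}

inventory = load_hosts("asset_inventory.csv", "hostname")   # what we think we run
observed  = load_hosts("log_hosts.csv", "hostname")         # what the logs actually show

unknown_systems = observed - inventory    # talking on the network, but nobody owns them
stale_entries   = inventory - observed    # inventoried, but never seen in the logs

print("Systems seen in logs but missing from the inventory:")
for host in sorted(unknown_systems):
    print("  ", host)
print("Inventory entries never observed in the logs:")
for host in sorted(stale_entries):
    print("  ", host)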

Put together, all of this paints a rather scary picture: lack of administration, lack of care and, most of all, lack of interest in the actual state of the system. I'll put my money down and say it probably happened because someone was cutting costs, as usually happens when a non-primary-business part of the organisation is removed. It is hard to say, but this goes for most big organisations. In the introduction to "Low Tech Hacking", the author summarizes almost exactly the same situations as the reason such incidents keep happening. It is easy to forget that infrastructure today handles data, and that data is the base of your core business, whatever that business is.

Like any other problem of this impact and scale, it has to have something to do with the management of these organisations. Removing system administration looks desperate and is frankly impossible, since system administration is what keeps the system operational. The Snowden case reads like a direct transfer from the Verizon report, specifically the part about consultants, contractors and data breaches. So how can removing the sysadmin function help? Probably only in the sense that the owner of the cloud will now be the one to blame for future incidents. It is like renting a car and not checking the state of the vehicle before driving off: if it crashes it is not my fault, I was just transporting my precious belongings in it. The better solution would cost no more than applying best practices and remembering that, whatever your business is, it depends on the IT infrastructure.



Here are the relevant parts of the articles mentioned above; since the editors' notes are so interesting, I have included the quotations in full below.
-----------------------------------------------------------------------------------------------------------------------
Excerpt #1: "NSA Plans to Eliminate System Administrators", SANS NewsBites, August 13, 2013
In an effort to reduce the risk of information leaks, the US National
Security Agency (NSA) plans to get rid of 90 percent of its contracted
system administrator positions. NSA Director General Keith Alexander
said that the agency plans to move to an automated cloud infrastructure.
Speaking on a panel along with FBI Director Robert Mueller at a security
conference in New York, Alexander referred to the recent revelations
about the scope of NSA surveillance, noting that "people make mistakes.
But ... no one has willfully or knowingly disobeyed the law or tried to
invade ... civil liberties or privacy."


http://arstechnica.com/information-technology/2013/08/nsa-directors-answer-to-security-first-lay-off-sysadmins/


http://www.theregister.co.uk/2013/08/09/snowden_nsa_to_sack_90_per_cent_sysadmins_keith_alexander/

[Editor's Note (Paller): A huge revelation to executives of the Snowden
affair is illuminated in this decision by NSA.  System administrators
are powerful - too powerful.  In the mainframe era, IBM and its
customers invested 15 years (1967-1982) building strong controls into
computers, specifically to constrain the power of the systems
programmers.  System administrators are now as powerful as system
programmers were in the 60s and 70s, and are unconstrained.  NSA is in
the vanguard of a major shift coming to every organization that cares
about security. The immediate implementation of the top 4 controls in
the 20 Critical Controls is a core survival task for IT security
organizations. See Raising the Bar for evidence
(http://csis.org/publication/raising-bar-cybersecurity). Organizations
failing to implement those quickly should anticipate an unstoppable
board-level push to outsource system administration and management to
the cloud providers.]
-----------------------------------------------------------------------------------------------------------
Excerpt #2: "How Did Snowden Access All That Data?", SANS NewsBites, August 24 & 26, 2013
The US government is having difficulty figuring out exactly what data
Edward Snowden took while working as a contractor at the NSA because
Snowden was careful to hide his digital footprints by deleting or
bypassing electronic logs. The incident illustrates problems inherent in
the structure of the data systems if they were so easily defeated. It
also appears to refute assurances from the government that NSA
surveillance programs are not subject to abuse because they are so
tightly protected.

http://www.zdnet.com/how-snowden-got-the-nsa-documents-7000019860/

[Editor's Note (Murray): If the user can cause or prevent entries in a
log or journal, then it is not reliable. Admittedly, the
process-to-process isolation problem was difficult when we tried to
solve it with software in expensive hardware.  Perhaps their contractors
have not told the NSA that hardware is now cheap. ]
-------------------------------------------------------------------------------------------------------------------------
Excerpt #3: Verizon report 2008, p. 24:

Unknown Unknowns

Throughout hundreds of investigations over the last four years, one theme emerges as perhaps the most consistent and widespread trend of our entire caseload. Nine out of 10 data breaches involved one of the following:

•     A system unknown to the organization (or business group affected)
•     A system storing data that the organization did not know existed on that system
•     A system that had unknown network connections or accessibility
•     A system that had unknown accounts or privileges  

We refer to these recurring situations as “unknown unknowns” and they appear to be the Achilles heel in the data protection efforts of every organization—regardless of industry, size, location, or overall security posture.


Wednesday, September 18, 2013

Conference "Kritična Nacionalna Infrastruktura" (KNI, Critical National Infrastructure), Police Academy Zagreb, 12-13 September 2013

An interesting event based on a science project; it is the third conference in the project, and not a day too early. Here is the programme with the list of participants: http://www.mup.hr/UserDocsImages/PA/IIIkonferencija_nove_ugroze/program_skup_NSU_%20KNI.pdf


The idea behind the conference was to reshape the definition of what critical infrastructure is and how to secure it, to discuss the state of the art around the world, to shake up general conceptions and to put a few ideas into circulation. As a starting point, there are EU regulations that have been integrated into local legislation, particularly the critical infrastructure act "Zakon o kritičnim infrastrukturama" (Critical Infrastructures Act, NN 56/13).

Critical national infrastructure is a rather neglected area, and it really is time to think about it. The moderator, prof. Antoliš, insisted on not applying only the stone-and-iron approach, but also addressing lighter-weight issues such as current knowledge and processes. Even though the current legislation concentrates on tangible property, i.e. buildings and hardware equipment, it almost completely neglects non-tangible property such as intellectual property and complex hybrid systems. The conference sessions tried to show the importance of non-tangible property in the form of software and virtual critical infrastructure elements. Most of the participants either work in or have a background in law enforcement, but if you do not have a legal definition stated in the legislation, it is hard to do anything in the law enforcement field.

While considering all the issues covered, I got a mental image of SANS SCADA trainings and workshops with their game-like, hands-on approach, and that is just one aspect of the whole story, the technical side anyway. This has been clear to me since supervising an interesting graduation thesis on the general IT security of the national electric grid a few years ago. Since that research I have always tried to think about the problem of technical capabilities within a legal scope.

The attendees in the auditorium were a mixed bunch: some from the academic world, others from institutes, professionals from the intelligence community, and a lot from the police force. Multiple theoretical issues were addressed, from education, tourism and energy infrastructure to intellectual property. This is all good, since the theory of critical infrastructure was the main discussion at this event. On the practical side there were other technical industry professionals, like me, with case studies, tools and implementation experience. The event community was an international one, which is excellent, since we can hardly talk about anything without sharing our different experiences with someone from across the border. I believe that national borders, in my field, barely exist as far as attacks are concerned. In most incidents you are either just part of a bigger picture or collateral damage. It is essential to see that the infrastructure as a whole needs to be protected.

Even though the conference did not directly address my current work profile, it did cover quite a lot of legal aspects which are important for me to understand. It is rather challenging for me to accept the legal definitions that determine how I can protect my property, the methodology, and the rules of detection and engagement I am allowed to use. The conference also covered all aspects of infrastructure issues from national to international, since cases are almost always global attacks on a global system. The legal framework can be very tricky when looked at from the technical point of view, since it can actually prevent effective countermeasures. A lot of good material on that issue can be found in Bruce Schneier's writings.

At this conference we actually talked about step one: what our national infrastructure is, by what criteria, and what threatens it. This strategic question is not very well defined in the current law. What comes to mind again is DNS and IP routing: how can they be defined under the current legal definition, especially when we talk about infrastructure, national versus international? A tricky question.

My participation was in the New Technologies and Critical National Infrastructure track, covering enterprise forensic tools and the analysis of collected data. This may have been too abstract for this conference, but I felt the need to share how enterprise forensic tools work and how huge the amount of data available for analysis is, even outside the scope of the forensic tool itself.

The presented subjects were good, even if some papers were a bit dry. The presenters are experienced in their fields and made it all very interesting in relation to real-life situations. My favourite non-IT forensic paper was about measuring and predicting the reliability of a local power plant: prof. dr. sc. Dario Matika, Jakov Batelić, "ODREĐIVANJE EKSPLOATACIJSKE POUZDANOSTI TERMOELEKTRANE PLOMIN 2 U SVRHU VREDNOVANJA KRITIČNE NACIONALNE INFRASTRUKTURE / DETERMINING EXPLOITATION RELIABILITY OF PLOMIN 2 THERMAL POWER PLANT FOR THE PURPOSE OF ASSESSMENT OF CRITICAL NATIONAL INFRASTRUCTURE", actually an excerpt from the Ph.D. thesis by Mr Batelić. Sadly, at the time of writing this post I do not yet have a link to the article.











Monday, September 16, 2013

Using Data Tools With Forensically Extracted Data



Getting enterprise forensic snapshot data onto the examiner machine


To demonstrate the concepts and problems, we used two products in a set of scenarios: EnCase Enterprise, with some add-ons, as the principal enterprise-level forensic tool, and InfoZoom as the principal data analytics tool. As EnCase Enterprise is currently in transition from version 6 to version 7, only results for EnCase Enterprise 7 are presented.


The first test was done some time ago with EnCase version 6, where enterprise snapshots could optionally be stored in an MSSQL database, with mandatory storage in L01 (logical evidence) files. Accessing the MSSQL database and analyzing the data from InfoZoom was simple and problem-free; unfortunately, this functionality is no longer available in EnCase Enterprise 7. EnCase Enterprise 7 stores snapshot data in L01 format and in a modified SQLite database, which complicates data access from InfoZoom.


As an example, we performed additional analyses of snapshots for a set of end nodes collected through the Sweep Enterprise functionality.


Sweep Enterprise is built-in functionality that enables the forensic examiner to collect data from end-node machines on which the forensic servlet is installed. A detailed description of this process and functionality is available in the EnCase documentation.


In the basic steps, the forensic examiner defines the set of end nodes that will be examined by the Sweep Enterprise script and the data sets that will be collected from them (Pictures 1 & 2).


Picture 1: Defining Target End Nodes for Sweep Enterprise


Picture 2: Selection of data to be collected from End Nodes

For each end node, the data is stored in a folder named with the Sweep Enterprise timestamp, e.g. "Sweep Enterprise Collection 2012_12_30 02_27_01", where each target node has its own L01 file with the stored data; for example, "Machine - DAMIRDDELL.L01" stores the snapshot data for DAMIRDELL. The same data is also stored in the SQLite file that contains all collected snapshot data.
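As a small illustration of this layout, the following sketch simply walks such a collection folder and lists each per-node L01 file with its size; the base path is a placeholder and the naming pattern only mirrors the example above.

# Minimal sketch: listing per-node L01 files inside a Sweep Enterprise collection folder.
# The base path is a placeholder; the folder and file naming follow the example above.
from pathlib import Path

collection = Path(r"C:\Cases\Example\Sweep Enterprise Collection 2012_12_30 02_27_01")

for l01 in sorted(collection.glob("Machine - *.L01")):
    size_mb = l01.stat().st_size / (1024 * 1024)
    node = l01.stem.replace("Machine - ", "")
    print(f"{node:20s} {size_mb:6.1f} MB  {l01.name}")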


Picture 3: Snapshot data in EnCase Enterprise console



After the automated analyses, the results are stored in cache data files and are available for additional processing. There is no uniform data format in the cache structures, as it can vary from SQLite and L01 formats to plain text files. Data was collected from the end nodes about the operating system, installed hardware, installed software, users on the system, network shares and USB devices. This data also includes process and DLL information, open ports and open files, and ARP, DNS and routing tables, averaging about 10 MB of L01 file per machine.

Since the SQLite database has a modified format, direct access from InfoZoom was not possible at first; the same problem occurred with the initial ODBC access attempt, but later we managed to get ODBC access to SQLite working (see below).


Picture 4: USB devices data in EnCase Enterprise console


Getting data from EnCase internals into formats and files accessible by InfoZoom


Fortunately, data from the logical evidence files can be viewed, in a limited form (a subset of the actual information that was collected), through the EnCase Enterprise console (Pictures 3 & 4). Since there is no direct access to this data from InfoZoom, it is necessary to export the data into another format that InfoZoom can read. There are a few ways of doing this without writing dedicated programs; the fastest is to create a review package from the EnCase console view, export it to XLSX format from a web browser, and then import that into InfoZoom (Picture 5). The EnCase review package is readable in a web browser and can be exported into XLSX using MS Internet Explorer. Other export formats are also available, but the whole process is not yet well documented.

 

Picture 5: Conversion from review form into Excel data

Since Excel data is easily imported into InfoZoom, further comparison and selection is straightforward. USB device records are organized in an overview mode (Picture 6), where we can draw conclusions about which USB devices were attached to the end nodes before the Sweep Enterprise process.


Picture 6: Data from USB Devices imported into InfoZoom
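For anyone without InfoZoom at hand, roughly the same overview can be produced from the exported spreadsheet with a few lines of Python. This is only a sketch: the XLSX file name and the column names ("Machine", "Serial Number") are my assumptions about the export, not documented EnCase field names.

# Minimal sketch: summarising the exported USB review data per end node.
# File and column names are assumptions about the XLSX export; adjust to the real sheet.
import pandas as pd

usb = pd.read_excel("usb_devices_review.xlsx")

# How many distinct USB serial numbers were ever seen on each machine
per_node = usb.groupby("Machine")["Serial Number"].nunique().sort_values(ascending=False)
print(per_node)

# Serial numbers that show up on more than one end node are often worth a closer look
shared = usb.groupby("Serial Number")["Machine"].nunique()
print(shared[shared > 1])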

The same methods can be used for other snapshot data. The processes from the end nodes are presented in overview mode (Picture 7). Since there are more than 500 processes, the view by itself does not reveal patterns; additional analysis is required to check for interesting ones.

 

Picture 7: Process data imported into InfoZoom
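One pattern check that does scale to 500+ processes is simple frequency analysis: processes present on only one or two machines stand out against the ones present everywhere. The sketch below assumes the process snapshot has been exported the same way as the USB data; the "Machine" and "Process Name" columns are again assumed, not documented.

# Minimal sketch: flagging processes that occur on only a few end nodes.
# Column names are assumptions about the exported process snapshot.
import pandas as pd

procs = pd.read_excel("processes_review.xlsx")

# Count on how many distinct machines each process name was seen
spread = procs.groupby("Process Name")["Machine"].nunique().sort_values()

total_nodes = procs["Machine"].nunique()
rare = spread[spread <= 2]          # present on at most two machines
print(f"{len(rare)} of {len(spread)} process names appear on <= 2 of {total_nodes} nodes")
print(rare)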

Results from InfoZoom analyses can only be added back into the EnCase console manually for further forensic analysis or forensic data collection. Obviously this is a slow process if large amounts of data need to be exported and analysed outside EnCase, and it also slows down the EnCase examiner station where the exports and imports are done.



Accessing SQLite directly from InfoZoom


With the SQLite3 ODBC driver, version 0.99.00.00, for Win64 it is possible to access the SQLite files in the EnCase case folder structure and connect to the database data. This works in EnCase v7.06 and higher. The sweep.sqlite file contains all the sweep data, which is also stored in the L01 files and in the named folders (Picture 8). It is important to connect to the sweep.sqlite file through an ODBC connection. Unfortunately, a new connection has to be created for each file, which means that each EnCase case gets one dedicated connection.


Picture 8: Encase Enterprise sweep data in case folder structure

InfoZoom easily reads the structure of the data from the db file; it is a complex structure presenting the sweep data collected from the remote nodes. There are plenty of tables where data is stored in an EnCase-friendly format, so some decoding and recoding may need to be done by hand, since this is not covered in the EnCase user documentation. The data tables stored in the db file are visible in Picture 9.

Picture 9: SQLite tables presenting sweep data
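A minimal sketch of that connection from Python, assuming the same SQLite3 ODBC driver is installed and with a placeholder case path, is enough to enumerate the tables shown in Picture 9; InfoZoom does the equivalent through its ODBC import.

# Minimal sketch: opening sweep.sqlite through the SQLite3 ODBC driver and listing its tables.
# The case path is a placeholder; the driver name is the one typically registered by the
# Win64 SQLite3 ODBC driver installer.
import pyodbc

db_path = r"C:\Cases\Example\sweep.sqlite"   # one connection per case, as noted above
conn = pyodbc.connect(f"Driver={{SQLite3 ODBC Driver}};Database={db_path};", autocommit=True)

cur = conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")
for (table_name,) in cur.fetchall():
    print(table_name)
conn.close()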



Using InfoZoom features it is possible to create links and new attributes that present views into the data we are interested in. It is a lot of work, but it can be reused later on. Since the access is read-only, it does not change the original data, and since it goes directly into the db file it is much faster than the previous method, without much load on the examiner machine. In theory it is possible to connect to SQLite from remote machines, but the SQLite wiki discourages such efforts (http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork), so we have not attempted it in this case.


Even though InfoZoom is an independent tool that provides access to other security tools' internal files and reports, it sometimes requires a lot of work to get a sensible view of the data and to correlate the relevant data from various sources.


As an example, the process svchost.exe is drilled down in the sweep database, in the data table containing all the process data, with references to other tables. This data can then be related to the snapshot and host information to show where and when the snapshot was taken.

Picture 10: SQLite tables HasProcessList
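The same drill-down can be reproduced over the ODBC connection. The sketch below is only a guess at the layout: the HasProcessList table name is taken from the case database (Picture 10), but the column names are assumptions that have to be verified by hand first, since the schema is undocumented.

# Minimal sketch: inspecting the HasProcessList table and filtering for svchost.exe.
# The table name comes from the case database; the column names below are assumptions
# and must be checked against the actual schema (step 1) before trusting anything.
import pyodbc

conn = pyodbc.connect(r"Driver={SQLite3 ODBC Driver};Database=C:\Cases\Example\sweep.sqlite;")
cur = conn.cursor()

# 1. Discover the real column names first
cur.execute("PRAGMA table_info(HasProcessList)")
columns = [row[1] for row in cur.fetchall()]
print("HasProcessList columns:", columns)

# 2. Hypothetical filter, assuming a 'Name' column exists; adjust after step 1
if "Name" in columns:
    cur.execute("SELECT * FROM HasProcessList WHERE Name LIKE ?", ("%svchost.exe%",))
    for row in cur.fetchmany(10):
        print(row)
conn.close()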



Since InfoZoom allows the creation of complex attributes, it is possible, with some reverse engineering, to create views in which all the relevant data is visible. Currently this poses a challenge because the table structure in the sweep database is not documented.


Even though InfoZoom could be replaced with other data exploration tools, it is still the best choice in scenarios where we expect to go into more than one data source, i.e. having sweep databases from several unrelated sweeps plus access to a SIEM or log storage. To fully explore such scenarios it would be necessary to access FTK and EnCase data in parallel, together with other security tools, but that is a project for another time.


Conclusion?


We can say that any additional ability to analyze data collected through forensic tools poses a great challenge, but when it is possible it is a great advantage.


There are drawbacks and problems that cannot be easily solved. Due to the rapid development cycles of the currently available tools, it is hard to maintain an efficient connection between forensic tools and data analytics tools. Often data can be reliably transferred only in the simplest forms and with the help of some "mediator" tool, as demonstrated in the example with the HTML export and the Excel data aggregation into the InfoZoom database.


It is also complicated to transfer data between the various forensic products, due to strong competition between vendors, which is especially problematic in mobile forensics. Forensics of mobile devices has become a more and more important field, yet it still lacks enterprise-class forensic tools, so additional data analytics is even more complicated than in standard network and enterprise forensics. Fortunately, things are changing: XRY and UFED have recently introduced tools which do some essential data analytics.


It is worth mentioning to forensic tool developers and vendors that it would be beneficial for them to build the desperately needed data analytics functionality into future generations of enterprise-oriented forensic tools, even if it remains limited to the forensic view of the data.



Wednesday, September 11, 2013

EnCase v6 to v7 and InfoZoom



A bit of going under the bonnet of EnCase to see what has changed from v6 to v7, how its data is presented, and whether the data can be accessed with other tools like InfoZoom.
EnCase v7 introduced a brand-new browser-like user interface, replacing the dashboard-like features of v6; usage of EnCase conditions is also more restricted, which makes direct data comparison a bit more challenging. Furthermore, in the enterprise version, it is no longer possible to do a simple data search in enterprise sweep results.


One workaround would be to write your own EnScript program to do those tasks for you. However, because the EnScript language and libraries also changed during the transition from v6 to v7, writing your own EnScript would be quite time consuming. You might consider generating reports from sweep results, but don't: the report files have a format that cannot be exported into anything usable by other tools. You can, however, use the EnCase review feature, which lets you open the data in MS Internet Explorer and then export it into MS Excel.
In MS Excel one can then do external analyses and bring the results back to EnCase to do a new sweep or whatever else is necessary. Here we used InfoZoom for the first time, on the data exported from MS Excel. InfoZoom gives a good overview of the meaning of the data, especially through its overview feature, which is quite simple to use.


At some point while testing the above, we received information that the EnCase sweep results are actually stored in an SQLite file, alongside the L01 files as in version 6, so in theory any data analytics tool with access to the SQLite file can open and analyse the data.
As we had used InfoZoom before to dig through sweep results and other data provided by EnCase, it was logical to try to access the SQLite file from InfoZoom too. There is ODBC access to SQLite and InfoZoom can use ODBC, so this was a logical step, and it works with interesting and useful results.