Tuesday, December 31, 2013

The end of 2013

At the end of 2013 I can't say I'm happy with how the year went. Too many unfinished things were left open.

When I do some analysis, it turns out I've spent basically 50% of all my working time on EnCase, mostly preparing and testing class setups and materials. This has been the trend with EnCase v7 since its first day; it simply shows that the development cycle is not yet finished, and there are rumors about v8 coming in 2014.
The rest of my time was spread over other preparations, other tools and the training itself.

What was missing from 2013, for me, is enterprise-level forensics. Since EnCase v7 made some drastic changes in user interface and functionality, the Enterprise version went into a stall in our patch of EMEA. Hopefully there will be a revival, especially since v7.05 and v7.08 introduced some important improvements.

My favorite tool in 2013 was SilentRunner from AccessData, which was prepared and delivered during the last quarter of 2013. It was a story in itself and, on second thought, very typical for the environment, the country and the whole digital forensic/security business.

Plenty of things can be said about digital forensics and security; definitely there is a lot to be done. Tools, practice, industry, lack of standardization, ideas... too much of it resembles networking in the pre-TCP/IP days.
There is also a clash between the forensic part and the digital/computer part of digital forensics. I believe one key missing piece in digital forensics is a digital forensic language (a programming language) designed to describe digital forensic tasks and procedures, but that is a subject for a more elaborate discussion.



 

Tuesday, December 3, 2013

EnCase Macintosh-Linux Examinations - Guidance Software training

Recently I was in Slough for this training. It is a very comprehensive one, where the actual subject matter always runs ahead of the printed materials...

Maybe a good idea would be to split it into two or even three separate trainings, each dealing more closely with one subject: Mac only, Linux only and UNIX server systems. The core idea is the same, but the approaches and requirements differ, so each needs different tools and a different approach.

OLAF training

I'm doing training for the OLAF program in Opatija; currently my courses are EnCase Forensic II and EnCase Mac and Linux Forensics. Preparation for this event has been my occupation for the last few weeks.

There are some very interesting experiences related to the machines and tools used. There are about 200 PCs with the same HW and SW configuration (they differ in the number of disks, since EnCase v7 requires 3 disks to work efficiently). It is actually a good statistical test sample. What is interesting is the variation in the behavior of the forensic tools. The HW is practically from the same batch of serial numbers and the SW is a cloned installation, so it is interesting to see how the tools act, especially how reported errors influence the outcomes of forensic tasks. Up to this moment we have had only two such situations: first, when the partition finder script failed, and second, when the evidence processor module reported an error and was unable to finish processing and create the required records folders. About 25% of the machines were affected in ways that had an impact on the correctness of the results.

Wednesday, November 13, 2013

LNK file and conditions in EnCase v7

Windows LNK files are a very useful source of forensic data, but it is not easy to get them analyzed and ordered by their attributes. Traditionally we look for volume serial numbers to correlate evidence. So how can we do that in EnCase v7? Here is an example based on the Forensic II training evidence files, which I often use in Forensic II sessions to extend the idea of conditions and filters. I often find that people are not thinking the "condition way" when they use EnCase.


The picture shows the link file „Final Cawin Weapons Purchase Order 2-7-2011.ppdf.LNK“ with its attribute bar expanded to the „Link Data“ folder. Two columns, “Name” and “Value”, are automatically populated by EnCase with the attribute pairs parsed out of the file content. Since this view is available under the attribute tab, we can create a condition based on these values and find which link files have the same set of attributes, such as pointing to a file system with the same serial number. Basically we have to ask two things: the attribute name is „Serial Number“ and the attribute value is „6A97-109C“. This should return a list of LNK files which point to the same file system. For that we can create a new condition, but there is a catch: since attributes are not listed among the table view column names but in the view pane, we have to go through filters in the condition to get a new function which will address the attribute name.
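As a side note, the idea behind this condition can be illustrated outside EnCase as well. Below is a minimal Python sketch, assuming a directory of extracted .lnk files as input, which walks the published [MS-SHLLINK] structures and groups the links by the drive serial number of the volume their targets lived on, the same „6A97-109C“ style value EnCase shows under „Link Data“. It is an illustration of the technique, not a replacement for a validated forensic tool:

#!/usr/bin/env python3
"""Sketch: group Windows .lnk files by the volume serial number of the
file system their target lived on. Offsets follow [MS-SHLLINK]."""
import struct
import sys
from collections import defaultdict
from pathlib import Path

HAS_LINK_TARGET_ID_LIST = 0x01        # LinkFlags bit 0
HAS_LINK_INFO = 0x02                  # LinkFlags bit 1
VOLUME_ID_AND_LOCAL_BASE_PATH = 0x01  # LinkInfoFlags bit 0

def volume_serial(data: bytes):
    """Return the drive serial as 'XXXX-XXXX', or None if absent."""
    if len(data) < 76 or struct.unpack_from("<I", data, 0)[0] != 0x4C:
        return None                   # not a shell link header
    link_flags = struct.unpack_from("<I", data, 20)[0]
    pos = 76                          # fixed-size header ends here
    if link_flags & HAS_LINK_TARGET_ID_LIST:
        id_list_size = struct.unpack_from("<H", data, pos)[0]
        pos += 2 + id_list_size       # skip the target ID list
    if not link_flags & HAS_LINK_INFO:
        return None
    info_start = pos
    info_flags = struct.unpack_from("<I", data, info_start + 8)[0]
    if not info_flags & VOLUME_ID_AND_LOCAL_BASE_PATH:
        return None                   # network target, no VolumeID
    vol_off = struct.unpack_from("<I", data, info_start + 12)[0]
    # VolumeID: size (4), drive type (4), then DriveSerialNumber (4)
    serial = struct.unpack_from("<I", data, info_start + vol_off + 8)[0]
    return f"{serial >> 16:04X}-{serial & 0xFFFF:04X}"

if __name__ == "__main__":
    groups = defaultdict(list)
    for lnk in Path(sys.argv[1]).rglob("*.lnk"):
        serial = volume_serial(lnk.read_bytes())
        if serial:
            groups[serial].append(lnk.name)
    for serial, names in sorted(groups.items()):
        print(serial, "->", ", ".join(names))

Run against a folder of exported link files, it prints one line per volume serial with all the LNK files that point to it, which is exactly the correlation the EnCase condition below performs inside the case.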


This is the result of our action: in the filters for AttributeValueRoot we created a new filter, ValueFS.


If we edit the ValueFS filter we see it has exactly the same logic as we noted before: (attribute name is „Serial Number“) and (attribute value is „6A97-109C“). These values are easily copied from EnCase, but I suggest saving the attribute view into a txt file and copying them out from there.


Here is this filter applied in the condition wizard. It is called, as filters always are, through the „equal to“ construct, to catch the entries on which the filter returns a hit.
This condition works like any other condition, called from the condition tab. It is a good idea to give the results a meaningful name like „Volume Serial XX“ so it is easy to follow what it is.


If you keep your conditions for future use, it is a good idea to check the "ask for value" and "value required" buttons, so you'll be able to reuse the condition for different combinations of attributes and values.

Here we actually insert both the attribute name and its value into the condition box.
The results are, as we expected, a list of LNK files which point to a file system with the same serial number.


This method can be applied to other objects which have attributes. In the EnCase manual and on Lance Mueller's excellent site you can learn a lot more about conditions, filters and EnScript programming: http://www.forensickb.com/

Here is the condition source code:
class MainClass {
  class ValueFSClass {
    typedef String[] Array1;
    // Dialog shown when the condition runs with "ask for value" checked
    class FilterDialogClass: DialogClass {
      ArrayEditClass  Variable1;
      StringEditClass Variable2;
      FilterDialogClass(DialogClass parent, ValueFSClass v):
        DialogClass(parent, "Edit Conditions"),
        Variable1(this, "Value matches", START, NEXT, 200, 102, 0, v.Variable1, REQUIRED),
        Variable2(this, "Name equal to", START, NEXT, 200, DEFAULT, 0, v.Variable2, 512, REQUIRED)
      {
      }
    }
    Array1     Variable1;   // attribute values to match, e.g. "6A97-109C"
    String     Variable2;   // attribute name to match, e.g. "Serial Number"
    ValueFSClass():
      Variable1{"6A97-109C"},
      Variable2 = "Serial Number"
    {
      FilterDialogClass dialog(null, this);
      if (dialog.Execute() != SystemClass::OK)
        SystemClass::Exit();
    }
    // True when one attribute node matches both the name and the value
    bool Main(AttributeValueClass e) {
      return Variable1.Find(e.Value()) >= 0 && e.Name().Compare(Variable2) == 0;
    }
  }
  ValueFSClass ValueFSData;
  // Walk the whole attribute tree of an entry; hit on the first match
  bool ValueFS(AttributeValueClass root) {
    forall (AttributeValueClass e in root)
      if (ValueFSData.Main(e))
        return true;
    return false;
  }
  MainClass():
    ValueFSData()
  {
    // The pasted listing was cut off at this point; the initializer and
    // the closing braces are reconstructed so the fragment parses.
  }
}

Sunday, November 3, 2013

EnCase Enterprise v7 training and education

At the moment I'm preparing training for the EnCase Enterprise product; the training is EnCase Enterprise Examinations for v7. The clients are not from an IT company but from a neighboring country's ministry of finance. It is a long, long project, delayed by budget problems, that is finally coming to a conclusion. The schedule was changed so many times that EnCase evolved from version 7.05 to 7.08.1, with all the related training changes, new software features and, of course, bugs. Key differences are the case processor on separate nodes and the non-SAFE servlet (the FIM replacement). VMware products also have new versions, and sometimes there are compatibility issues.
Preparation is always a somewhat lengthy process, since I don't have a dedicated classroom or dedicated machines, only multipurpose ones which have to be tailored for each training. EnCase v7 is very resource hungry, and when we are talking about the Enterprise version, where the training includes a simulated network of several machines, the resource bill is extremely high. Since most trainings are on client premises, we use strong laptops with a lot of external disks to fill the role. The priorities are disks, RAM, CPU, network. For acceptable performance a quad-core i5 64-bit laptop with 16 GB+ RAM and three SATA/eSATA disks is enough (an ExpressCard with two eSATA ports is extremely useful here, but on some machines, especially Dells, there can be problems). This configuration has enough power for the EnCase evidence processor and also gives you three or more disks to spread the load of the virtual machines. In theory the training can be run on customer machines, but in practice this fails because of configuration and system administration problems; the best way is to bring your own devices and configure them yourself.
As for the real EnCase Enterprise training, what is important to take into account is the versatility of EnCase Enterprise. Lance Mueller describes this in his paper, precisely defining the main areas of EnCase Enterprise usage. The current training is very condensed and gives you an intro into all the capabilities, while the attendee really should understand in advance how EE will be used in their work. In my experience that is too optimistic an approach, since attendees usually do not have much EnCase experience. Initially the EE training took two separate weeks, but later it was changed to one week, with the idea of unifying the EE and Forensic products. This has proven to be a problem, since no one can force customers to take the Forensic courses as preparation for Enterprise (the problem is always the limit of their budget), so we usually lack a good understanding of EnCase forensic abilities. I always suggest to attendees without Forensic training that they at least look at the free v7 intro online training, but that is often not enough. The workaround is to extend the introduction and add tailored points in the areas where they'll work. As we are talking here about a financial regulator, the stress should be on the e-discovery process, then on standard forensic investigation, and at the end on incident response. Each of these tasks requires a different configuration, different tools in EnCase and different user roles. Very good material for discussion is the “Anti-Cartel Enforcement Manual” on the International Competition Network site; it puts all this into a defined process close to those the attendees have experience with. From that point we can discuss the ideas of e-discovery, an almost unknown concept in our part of Europe.

Wednesday, October 30, 2013

Mobile forensic education and training


It is good news that Cellebrite http://www.cellebrite.com has started to formalise its training process and procedures. As I have been with their mobile forensic product UFED since 2010, I have had some very frustrating experiences delivering UFED trainings; hopefully that is in the past. The training is now formalised and well described http://www.cellebritelearningcenter.com/mod/page/view.php?id=16 and should finally provide official educational materials. This will give us reference materials which can be translated, and hopefully someone will keep the documentation in sync with software releases. The horror of having manuals at version 2.4 while the software is at 3.6 is hopefully behind us.
The requirements for trainer certification are finally defined, as the “Cellebrite Certified Instructor Certifications”, though they still have some topics to cover. Mobile forensics is a young field, and vendors have not yet grasped the need for standardisation of interfaces and formats. One thing is missing from the training website: Python programming with the Physical Analyzer product. In my opinion this is maybe too advanced for everyday users, but the course should be available.

Micro Systemation http://www.msab.com/ has had such a training approach for quite a long time; its trainings are well defined http://www.msab.com/training/training-overview. With other vendors it is more or less the same, depending on how deeply the vendor engages with mobile forensics. The ill-fated EnCase Neutrino once had its own very good training. Today AccessData has its own MPE+ mobile forensics product with elaborate training: http://www.accessdata.com/training.
Among other available sources for mobile forensics, my favourite is “Digital Triage Forensics: Processing the Digital Crime Scene” by Stephen Pearson http://my.safaribooksonline.com/book/networking/forensic-analysis/9781597495967, a perfect intro to classic mobile device forensics and the Paraben tools.

Monday, October 28, 2013

LTEC 2013 Prague

This is a slightly late post about the Prague forensic conference LTEC 2013 http://www.lawtecheuropecongress.com/. It is a nice conference with the goal of bringing digital forensic practitioners and law practitioners into contact: a lot of panels, workshops and presentations, with many local presenters and worldwide vendors.
My task there was to hold a small 2-hour workshop on EnCase Forensic v7, presenting how things are done in the latest EnCase and showing a basic set of features, about 20% of the functionality.

The slides are on SlideShare http://www.slideshare.net/DamirDelijadamirdeli/ltec-2013-encase-v70801-presentation. About 20 people were supposed to attend, so a nice cozy working environment. The required PCs were supposed to be provided by the local conference partner in Prague, while we provided EnCase. As it goes in the real world, the delivered workshop machines were so weak and undersized that it was not possible to run the workshop; to make it worse, the machines were delivered late, just the evening before the start of LTEC. So I cancelled the workshop and instead did a live presentation of the workshop scenario. My colleague Davorka Foit kept her part on EnCase reporting, also as a presentation.

Steve Gregory from Guidance Software had a very interesting presentation on the TD3 forensic duplicator by Tableau http://www.tableau.com/index.php?pageid=products&model=TD3. It was masterfully done, even while the IT infrastructure, especially the power, was giving some trouble. The whole presentation was about network access to the TD3 in write-blocking mode; this feature was a bit buggy before the last firmware update, but now works perfectly. It is an interesting idea from the FBI, and it actually shows the reality of the digital forensic field: there are not enough trained people to go around. Steve also helped us by lending us one of his USB write blockers for the modified workshop/presentation.


Just to expand my digital forensic knowledge, I visited a museum related to historical fact-finding methods http://www.museumtortury.cz/en/index.html. It gives some very interesting ideas for solving problems with misdelivered equipment.

Sunday, September 22, 2013

To Remove Sysadmins or not to Remove

Lately, security trends have been sharing the disturbing idea of removing the system administration function or hiding it inside something else… It would be OK if this were a result of automation or simplification, but that is not the case here.

The latest descriptions of the incidents at the NSA, two other articles about them with various reports, as well as my own experience with system administration, have started to worry me...

The article that caught my eye was “NSA Plans to Eliminate System Administrators”, August 13, 2013, SANS NewsBites (Excerpt #1 below), because it is frankly an insane idea, especially for such a tight security structure as the NSA needs to be. I'm not sure, but the same would probably apply to other similar organisations. Just think back a few years to how many high-end security companies were hacked.

First of all, we need to agree on what system administration is today, and on what defines a big system and a big data breach.

As for the definition of a system administrator and system administration in IT, I like this rather old quote:

The job of a system administrator is like this: "On one side, you have a set of resources: computers, networks, software, etc. On the other side, you have a set of users with needs and projects--people who want to get work done. Our job is to bring these two sets together in the most optimal way possible, translating between the world of vague human needs and the technical world when necessary."

"Perl for System Administration", by David N. Blank-Edelman, ISBN 1-56592-609-9, First edition.
from July 2000. It precedes some big meltdown in IT but it is still relevant today

It shows the important role of controlling the system, which also assumes understanding the system and its architecture. Basically it describes someone who is part of the system, not an outsider. This is extremely hard to achieve today because of the huge size of big systems and because of policies, management and organizational issues (the same goes for agencies and big corporations, though in the corporate world sanity prevails). In “How Did Snowden Access All That Data?” (August 24 & 26, 2013) (Excerpt #2), from SANS NewsBites, the incident is presented in more detail and shows disturbing similarities to common big data breach incidents. If we look back at the Verizon reports about big data breaches, especially the first one from 2008, what stands out is a set of big unknowns in each compromised system. That report also gives a good description of “a big system” and “a big data breach”. These big unknowns become all the more interesting when observed from the system administration perspective (Excerpt #3).

These “unknown” numbers are:
•     unknown data: 66%
•     unknown network connections or accessibility: 27%
•     unknown accounts or privileges: 10%
•     unknown system: 7%

This simply means that these “unknown” issues were off the radar, that no one was responsible for administering such systems, or simply that the system administration was lousy. In a well-administered system such unknowns should be impossible, so why do they exist and why didn't anyone care about them? Such data is visible if you do some system mapping or log data analysis, so the right question would be: why does no one in management care, and what is the rationale behind this careless approach?
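To make this concrete, here is a minimal Python sketch of the kind of log analysis meant above. The log path, the inventory file and the message format are assumptions for illustration (a Debian-style sshd log); the point is only that "unknown accounts", in the Verizon report sense, fall out of data an organisation already collects:

#!/usr/bin/env python3
"""Sketch: flag accounts that log in but appear in no inventory."""
import re
from pathlib import Path

# Hypothetical inputs: adapt to your own environment.
AUTH_LOG = Path("/var/log/auth.log")    # Debian-style sshd log
INVENTORY = Path("known_accounts.txt")  # one approved account per line

ACCEPTED = re.compile(r"sshd\[\d+\]: Accepted \w+ for (\S+) from (\S+)")

known = {line.strip() for line in INVENTORY.read_text().splitlines()
         if line.strip()}
seen = {}                               # account -> set of source IPs
for line in AUTH_LOG.read_text(errors="replace").splitlines():
    m = ACCEPTED.search(line)
    if m:
        user, src = m.groups()
        seen.setdefault(user, set()).add(src)

for user, sources in sorted(seen.items()):
    if user not in known:
        # an "unknown account": it logs in, but nobody lists it as approved
        print(f"UNKNOWN account {user!r} seen from: "
              + ", ".join(sorted(sources)))

The same pattern (collect what the logs say, diff it against what the organisation believes it has) applies to unknown hosts, network connections and data stores.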

When all this is put together, it makes a rather scary picture of lack of administration, lack of care and, most of all, lack of interest in the actual state of the system. I'll put my money down and say it probably happened because someone was doing some cost cutting, as is usual when removing a part of the organisation not related to the primary business. It is hard to say, but this goes for most big organisations. In the intro to “Low Tech Hacking” the author summarizes almost exactly the same situations and why such incidents keep happening. It is easy to forget that the infrastructure today handles the data that is the base of your core business, whatever that business is.

Like any other problem of such impact and scale, this must have something to do with the management of these organisations. Removing system administration looks desperate and is frankly impossible, since system administration means keeping the system operational. The Snowden case looks like a direct transfer from the Verizon report, the part about consultants, contractors and data breaches. So how can removing the sysadmin function help? Probably in the sense that the owner of the cloud will now be the one to blame in future incidents. It is like renting a car and not checking the state of the vehicle before driving off: if it crashes it is not my fault, as I was just transporting my precious belongings with it. The best solution would cost more than simply applying best practices and remembering that, whatever your business is, it depends on the IT infrastructure.



Here are the relevant parts of the articles mentioned; since the editors' notes are so interesting, I've put the whole quotations below.
-----------------------------------------------------------------------------------------------------------------------
Excerpt #1:
In an effort to reduce the risk of information leaks, the US National
Security Agency (NSA) plans to get rid of 90 percent of its contracted
system administrator positions. NSA Director General Keith Alexander
said that the agency plans to move to an automated cloud infrastructure.
Speaking on a panel along with FBI Director Robert Mueller at a security
conference in New York, Alexander referred to the recent revelations
about the scope of NSA surveillance, noting that "people make mistakes.
But ... no one has willfully or knowingly disobeyed the law or tried to
invade ... civil liberties or privacy."


http://arstechnica.com/information-technology/2013/08/nsa-directors-answer-to-security-first-lay-off-sysadmins/


http://www.theregister.co.uk/2013/08/09/snowden_nsa_to_sack_90_per_cent_sysadmins_keith_alexander/

[Editor's Note (Paller): A huge revelation to executives of the Snowden
affair is illuminated in this decision by NSA.  System administrators
are powerful - too powerful.  In the mainframe era, IBM and its
customers invested 15 years (1967-1982) building strong controls into
computers, specifically to constrain the power of the systems
programmers.  System administrators are now as powerful as system
programmers were in the 60s and 70s, and are unconstrained.  NSA is in
the vanguard of a major shift coming to every organization that cares
about security. The immediate implementation of the top 4 controls in
the 20 Critical Controls is a core survival task for IT security
organizations. See Raising the Bar for evidence
(http://csis.org/publication/raising-bar-cybersecurity). Organizations
failing to implement those quickly should anticipate an unstoppable
board-level push to outsource system administration and management to
the cloud providers.]
-----------------------------------------------------------------------------------------------------------
Excerpt #2:
The US government is having difficulty figuring out exactly what data
Edward Snowden took while working as a contractor at the NSA because
Snowden was careful to hide his digital footprints by deleting or
bypassing electronic logs. The incident illustrates problems inherent in
the structure of the data systems if they were so easily defeated. It
also appears to refute assurances from the government that NSA
surveillance programs are not subject to abuse because they are so
tightly protected.

http://www.zdnet.com/how-snowden-got-the-nsa-documents-7000019860/

[Editor's Note (Murray): If the user can cause or prevent entries in a
log or journal, then it is not reliable. Admittedly, the
process-to-process isolation problem was difficult when we tried to
solve it with software in expensive hardware.  Perhaps their contractors
have not told the NSA that hardware is now cheap. ]
-------------------------------------------------------------------------------------------------------------------------
Excerpt #3: Verizon report 2008, p. 24:

Unknown Unknowns

Throughout hundreds of investigations over the last four years, one theme emerges as perhaps the most consistent and widespread trend of our entire caseload. Nine out of 10 data breaches involved one of the following:

•     A system unknown to the organization (or business group affected)
•     A system storing data that the organization did not know existed on that system
•     A system that had unknown network connections or accessibility
•     A system that had unknown accounts or privileges  

We refer to these recurring situations as “unknown unknowns” and they appear to be the Achilles heel in the data protection efforts of every organization—regardless of industry, size, location, or overall security posture.


Wednesday, September 18, 2013

Conference “Kritična Nacionalna Infrastruktura” (Critical National Infrastructure, KNI), Police Academy Zagreb, 12-13 September 2013

An interesting event based on a science project; it is the third conference in the project, and not a day too early. Here is the program with the list of participants: http://www.mup.hr/UserDocsImages/PA/IIIkonferencija_nove_ugroze/program_skup_NSU_%20KNI.pdf


The idea behind the conference was to reshape the definition of what critical infrastructure is and how to secure it; also to mingle and discuss the state of the art in the world, to shake up the general conception and put a few ideas into circulation. As a starting point, there are EU regulations that have been integrated into local legislation, particularly the critical infrastructure act “Zakon o kritičnim infrastrukturama” (Critical Infrastructures Act, NN 56/13).

Critical national infrastructure is a rather neglected area, and it is really time to think about it. The moderator, prof. Antoliš, insisted on applying not only the "stone and iron" approach but also addressing more lightweight issues such as current knowledge and processes. The current legislation concentrates on tangible property, i.e. buildings and hardware equipment, while almost completely neglecting non-tangible property such as intellectual property and complex hybrid systems. The conference sessions tried to show the importance of non-tangible property in the form of software and virtual critical infrastructure elements. Most of the participants either work in or have a background in law enforcement, and if you don't have a legal definition stated in the legislation, it is hard to do anything in the law enforcement field.

While considering all the issues covered, I kept getting a mental image of the SANS SCADA trainings and workshops, with their game-like, hands-on approach; and that is just one aspect of the whole story, the technical side anyway. This has been clear to me since I supervised an interesting graduation thesis on the general IT security of the national electric grid a few years ago. Since that research I have always tried to think about the problem of technical abilities within a legal scope.

The attendees in the auditorium were a mixed bunch: some from the academic world, others from institutes, professionals from the intelligence community, and a lot from the police force. Multiple theoretical issues were addressed, from education, tourism and energy infrastructure to intellectual property. This is all good, since the theory of critical infrastructure was the main discussion at this event. On the practical side there were other technical industry professionals, like me, with case studies, tools and implementation experience. The event community was an international one, which is excellent, since we can hardly talk about anything without sharing our different experiences with someone from across the border. I believe that national borders, in my field, are almost nonexistent with regard to attacks. In most incidents you are either just part of the bigger picture or collateral damage. It is essential to see that the infrastructure as a whole needs to be protected.

Even though the conference did not directly address my current work profile, it covered quite a lot of legal aspects which are important for me to understand. It is rather challenging for me to accept the legal definitions which determine how I can protect my property, the methodology, and the rules of detection and engagement I'm allowed to use. The conference also covered all aspects of infrastructure issues, from national to international, since cases are almost always global attacks on a global system. The legal framework can be very tricky when looked at from the technical point of view, since it can actually prevent effective countermeasures. A lot of good material on that issue can be found in Bruce Schneier's writings.
At this conference we actually talked about step one: what our national infrastructure is, by what criteria, and what threatens it. This strategic question is not very well defined in the current law. What comes to mind again are DNS and IP routing: how can they be covered by the current legal definition, especially when we talk about infrastructure, national vs. international, a tricky question.

My participation was part of the New Technologies and Critical National Infrastructure track, with some issues around enterprise forensic tools and analyzing collected data. This may have been too abstract for this conference, but I felt the need to share how enterprise forensic tools work and the huge amount of data available for analysis, even outside the scope of a forensic tool.

The presented subjects were good, even if some papers were a bit dry. The presenters are experienced in their fields and made it all very interesting in relation to real-life situations. My favourite non-IT forensic paper was about measuring and predicting the reliability of a local power plant: prof. dr. sc. Dario Matika, Jakov Batelić, “ODREĐIVANJE EKSPLOATACIJSKE POUZDANOSTI TERMOELEKTRANE PLOMIN 2 U SVRHU VREDNOVANJA KRITIČNE NACIONALNE INFRASTRUKTURE / DETERMINING EXPLOITATION RELIABILITY OF PLOMIN 2 THERMAL POWER PLANT FOR THE PURPOSE OF ASSESSMENT OF CRITICAL NATIONAL INFRASTRUCTURE”, actually an excerpt from Mr Batelić's Ph.D. thesis. Sadly, at the moment of writing this post I don't yet have a link to the article.