FILETIME ANALYSIS, or maybe a TIMELINE ANALYSIS

 

In computer forensics, timelines can take many forms, and the software available to analyze file timelines is nearly as voluminous as the number of files to analyze.

Timelines are used to answer questions about file times: when a computer was used, or when certain computer activities took place. A timeline analysis may involve collecting the file times from a computer file system and then analyzing those times for forensic purposes to determine what may have occurred. The analysis can give more specific information as to what and when certain activities may have occurred, or when certain files may have been placed on the file system. File dates can be critical to certain activities. In other words, a chronological listing of file dates may be very important to your examination or investigation.

There are many programs and processes available that will analyze file dates and activities and provide information you could use to prove your case. One process that will allow you to analyze the sequence of file dates is discussed and explained in this article. The process and steps described here are one of many, and they are only a small part of what could be a much larger and more thorough forensic timeline analysis. This article will go through the collection and "analysis" of the file dates using a few of the Maresware command-line programs.

Most timeline analysis tools provide a pretty, color-coded picture/representation of the timeline results. This article does no such thing. It will, however, provide a process which you can modify to collect, find, and analyze the appropriate case information, and then copy or isolate the necessary files which will make up the final evidence. It will also provide you with a basic batch file (that's a script, for you millennials) which is easily modified and reused based on the individual case situation.

First things first.

Let's start with file list collection, or cataloging the suspect file system. You need to know what and how many files you may be dealing with, so let's create a list of the files. The first step in this batch file is to create a complete list of all the appropriate files within the suspect directory. We are assuming you are on a corporate server where you only wish to collect information relating to a single suspect or tree, and in some cases only collect file information relating to a specific type of file, i.e.: graphics or pictures downloaded from the internet. We use the program called DISKCAT, which is short for disk cataloging software. A sample command line is shown here. Diskcat has many other options, but the basics are used here as a demonstration.

C:>diskcat -p X:\top_level_folder  -vw 120 -d "|" -T3 -8830E -o D:\WORK_CAT.TXT  --milli   -1 work_timeline.log

The options are: 
-p X:\top_level_folder:	start at this top level directory of the X: drive 
-vw 120:  		no verbose output, create a fixed-width record, make the filename field 120 characters wide (larger than 120 is often used) 
-d "|":   		delimit each field with a pipe. DO NOT DELIMIT as a CSV 
-T3:      		include all three MAC times in YYYY/MM/DD format for easy sorting 
-8830E:   		include 3 additional fields: filename, ext, and 8.3 filename 
-o D:\WORK_CAT.TXT:  	place the catalog information in an output file called WORK_CAT.TXT
--milli:  		include filetime millisecond values
-1 work_timeline.log: 	create a log file called: work_timeline.log
Depending on the options used, the output record length will change, which means that any further processing with Maresware will have to be adjusted to accommodate the different record lengths.
This creates an output line like the following (spaces removed, some fields eliminated and shortened for visibility).
H:\work600meg.cd |1998/10/05|17:31:53:340c|1998/05/03|09:19:50:000w|2019/05/14|09:08:57:021a|EST|600MEG.CD|CD|600meg.cd|
The logfile contains this information (some info removed for space)
Record length                    302
Number of files processed:     50893
Includes alternate data streams: 178
3052 directories, 50893 files, 4,972,889,196 bytes:
Finished: Sat Jul 08 15:23:31 2023
Elapsed:  0 hrs. 0 mins. 20 secs.
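
For readers without the Maresware tools, the cataloging step can be approximated in a few lines of Python. This is only a rough sketch of the idea (walk a tree, record the MAC times per file), not a replacement for DISKCAT's fixed-width, delimited output; the `catalog` function and record layout are my own invention for illustration:

```python
import os
import datetime

def catalog(top):
    """Walk a directory tree and collect one record per file, with the
    three MAC times formatted YYYY/MM/DD|HH:MM:SS for easy sorting."""
    fmt = lambda t: datetime.datetime.fromtimestamp(t).strftime("%Y/%m/%d|%H:%M:%S")
    records = []
    for dirpath, _dirs, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            records.append({
                "path": path,
                "ctime": fmt(st.st_ctime),  # creation time on Windows
                "wtime": fmt(st.st_mtime),  # last write time
                "atime": fmt(st.st_atime),  # last access time
                "size": st.st_size,
            })
    return records
```

Note that on Unix-like systems `st_ctime` is the metadata-change time, not creation time, so the meaning of the "c" field depends on the platform.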

Next step in the process. Sort the file list on date.

Now that we have a reasonable file listing/catalog, we will need to sort it on whichever time field we are interested in. For this exercise we sorted on the WRITE date because in most cases the create and write dates are likely to be the same or very close. However, sorting on whichever date field is pertinent to the case is easy with the DISKSORT program. The command is shown here.

C:\disksort D:\WORK_CAT.TXT  D:\WORK_CAT.SRT -r 314 -p 177 -l 10 -1 work_timeline.log
Notice we are taking the output of diskcat and sending it to the input of disksort, and writing to the same log file. Disksort can sort on any part of the record you choose. In this case we chose the write date.
The options are:
WORK_CAT.TXT:  input filename
WORK_CAT.SRT:  sorted output filename
-r 314:        record length of the input record (the WORK_CAT.TXT file)
-p 177:        position of the field to sort (the 1998/05/03 item in the example)
-l 10:         length of the field to sort (the last write time field) 
-1 work_timeline.log:  the log file again.
And the disksort results in the logfile are:
  records=  50,894
  wrote  =  50,894
  Elapsed time:  0 secs. 
Now we have all the files sorted on write date, and it's time to do some timeline calculations. Isn't that what we are here for? (A $10.00 Cracker Barrel gift card to the first person who can tell me why there is a 1-count discrepancy between the diskcat output log and the disksort log.)
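
The sort step can also be sketched in Python. DISKSORT sorts on a fixed byte offset into a fixed-width record; with delimited text the equivalent is sorting on the split-out field. The `sort_records` helper and the two sample lines below are hypothetical, following the field layout of the diskcat sample shown earlier (path, create date, create time, write date, ...):

```python
def sort_records(lines, field):
    """Sort pipe-delimited catalog lines on one field; field 3 holds the
    write date in the sample layout (path|cdate|ctime|wdate|wtime|...)."""
    return sorted(lines, key=lambda line: line.split("|")[field])

lines = [
    "H:\\late.txt |2019/05/14|09:08:57:021c|2019/05/14|09:08:57:021w|",
    "H:\\early.txt|1998/10/05|17:31:53:340c|1998/05/03|09:19:50:000w|",
]
sorted_lines = sort_records(lines, 3)  # sort on the write date field
```

Because the dates are written year-first with zero padding, a plain string sort gives correct chronological order, which is why diskcat's YYYY/MM/DD format matters.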

Next step #3 in the process. Total (count) the dates

We want a count for each day in the write date field. To do this, we use a program which can count. What a novel idea. The program we use is called TOTAL. Total can count the number of instances of a key for the entire file, or it can count and add up values in different fields, i.e.: $$$ fields within a record. For this exercise we are going to count each file write date and come up with a total value for each date within the 50894 records. The command to "count" occurrences is:

C:\total  -i  WORK_CAT.SRT  -o  WORK_CAT.CNT  -r  314  -p  176  -l  10  -d  176  -n  10  -c  -f  32  -1  work_timeline.log

-i  WORK_CAT.SRT:  input file to total
-o  WORK_CAT.CNT:  output file to place totals in
-r  314:           record length of the input file
-p  176  -l  10:   position and length of the sorted field (count from offset 0, a mainframe holdover) 
-d  176  -n  10:   position and length of the field to count/total (count from offset 0)
-c:                count only, do not actually add values in the field
-f  32:            format of the field containing the total values with left column sign
-1  work_timeline.log: guess what, the logfile again.
A sample output record will look like this for each date in the sorted file. Filenames listed are placeholders only. All fields except the write field are removed for legibility and space here.
H:\...\Upper.alt  |1984/02/17|12:14:26:000w|+                  1|
H:\...\Ebcdic.alt |1986/04/24|12:11:22:000w|+               1234|
H:\...\BINARY.C   |1992/02/19|11:34:14:000w|+                882|
   =============================
logfile data:
Records processed    =     50894
No of records written=      2616
Final record length  =       335
Highest count        =      9449
Elapsed time: 0 hrs. 0 mins. 0 secs
The final field is the number of occurrences of the write date. We then sort on the field containing the counts to see which dates have the highest write-date counts. Notice 1986/04/24 had 1234 occurrences of files being written on that date. Notice also that the number of records written, 2616, indicates there were 2616 unique write dates found within the 50894 records. Does a high or low write-date count mean anything? Depending on the case, the number of occurrences, whether large, small, or indifferent, may be exactly what we are looking for.
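
The counting step is the easiest one to reproduce. A minimal sketch of TOTAL's count-only (-c) behavior, using Python's standard `collections.Counter` (the `count_write_dates` name is mine, not a Maresware one):

```python
from collections import Counter

def count_write_dates(write_dates):
    """Tally the number of files sharing each write date -- the same idea
    as a TOTAL -c run: one entry per unique date, with its count."""
    return Counter(write_dates)

dates = ["1986/04/24", "1986/04/24", "1992/02/19", "1986/04/24"]
counts = count_write_dates(dates)
# counts["1986/04/24"] -> 3, and len(counts) -> 2 unique dates
```

The number of unique keys in the result corresponds to TOTAL's "No of records written" figure (2616 unique dates out of 50894 records in the example above).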

This file with the total counts can be relatively large, so you may wish to find a way to search it for specific items, or, as we did above, use a program which can show all lines where the total value is a specific amount, greater than a chosen amount, or less than a target amount. The Maresware SEARCH program can do this with its eyes closed. If you need to search a fixed-length record on any field, take a look at the SEARCH program. If you wish to implement this batch file and would like a copy, with all the associated comments and search criteria, let me know.

Next step #4 in the process. Sort the counts

Where have we seen this command before? We sort the counts to see which dates had the high and low values. The values shown here are actual high counts from the run.

C:\disksort WORK_CAT.CNT WORK_CAT2.TIM -r 335 -p 328 -l 6 -d -1 work_timeline.log 
  ================
  records=   2,616
  wrote  =   2,616
  Elapsed time:  0 secs. 
The resulting file, sorted on the largest count, looks like this (again, the filename is irrelevant, just a placeholder). The top 4 counts are shown here. Notice Sept 26, 2008 has an unusually high count of files for that date. This may be something to look at.
H:\....\diskcat.htm.ttx|2008/09/26|15:51:52:000w|+    9449|
H:\....\output.0       |2009/03/12|07:22:58:000w|+    4511|
H:\....\output2.0      |2009/02/10|07:19:52:000w|+    3757|
H:\....\2UPPER.pdb     |2007/11/26|07:56:46:000w|+    1977|

In order to select the appropriate lines with the counts you wish, you might want to take a look at the Maresware SEARCH program. It is modeled on an old mainframe data search program, is somewhat programmable, and allows the user to find and select records meeting specific search criteria. That is what was used to find the 4 records shown above, using an arbitrary criterion of counts > 1000.
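
Sorting the counts and selecting the records above a threshold can be sketched together. `Counter.most_common()` already returns the tallies highest-first, and a simple filter plays the role of the SEARCH selection criterion (the `hot_dates` helper is hypothetical; the sample counts are the four from the run above):

```python
from collections import Counter

def hot_dates(counts, minimum):
    """Return (date, count) pairs sorted highest count first, keeping only
    dates at or above the threshold -- a SEARCH-style record selection."""
    return [(d, n) for d, n in counts.most_common() if n >= minimum]

counts = Counter({"2008/09/26": 9449, "2009/03/12": 4511,
                  "2007/11/26": 1977, "1992/02/19": 882})
top = hot_dates(counts, 1000)
# top -> [("2008/09/26", 9449), ("2009/03/12", 4511), ("2007/11/26", 1977)]
```

Raising or lowering `minimum` is the equivalent of changing the "greater than a chosen amount" criterion described above.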

Next step #5 in the process is your decision

In the case where there is high activity on a specific date, or you find a date that strikes your fancy, you would extract the records with the appropriate date from the original diskcat output. Since the diskcat output is already sorted by date, it should be an easy cut and paste with a reliable text editor to copy the necessary catalog/list records to another file for analysis.

Then you would find a way to "forensically" copy the files of interest (based on your superior forensic intellectual knowledge) from the subject's source directory to a work location for further analysis. May I suggest that the forensic copy process be conducted using the UPCOPY program with the -s option. It's pretty reliable.
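
One point worth illustrating: an ordinary copy can reset the very timestamps you are analyzing. In Python, `shutil.copy2` preserves the modification time where a plain byte copy would not. This sketch (the `copy_for_review` name is mine) is only an illustration of that point, and is no substitute for a proper forensic copy tool such as UPCOPY:

```python
import os
import shutil

def copy_for_review(paths, dest_dir):
    """Copy selected files to a work area, preserving the modification
    time so the copied files still reflect the original write dates."""
    os.makedirs(dest_dir, exist_ok=True)
    for src in paths:
        # copy2 copies the file data plus its timestamps/metadata
        shutil.copy2(src, os.path.join(dest_dir, os.path.basename(src)))
```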

The contents of this batch file are available for anyone who wants to play with the idea. Just send me an email and include your idea of why there is a 1-item count difference between the diskcat log and the disksort log.

copyright © 1998-2023 by Dan Mares