Table of contents

     BATES_NO       Rename a file with a unique bates_no. Attorneys know all about this.
     BSEARCH        Perform 'B'inary search on a sorted fixed length file.
     COLLATE        Collate two sorted files together.
     COPY_ADS       Extract/Copy Alternate Data Streams to live "touchable" files.
     CRCKIT         Perform (32bit) CRC of files.
     CSV2PIPE       Convert a "CSV","delimited" file to a pipe (|) delimited one.
     DISKCAT        Perform a DISKCATalog: a listing of files in a tree.
     EML_PROCESS    Analyse the headers of *.eml email files, and output spreadsheet-friendly data.
     FILBREAK       Break up/reformat the fields within a record for easier manipulation.
     FILSPLIT       Split up a file into manageable pieces.
     FINDRECL       For fixed length files, find the record length.
     HASH           Perform hashes (MD5, SHAs) of files. Some neat evidence tricks within this program.
     HASH_DUP       On fixed hash records, find duplicate hash values in the file.
     HASHCMP        Compare two files on the sorted hash field and find matches, or mismatches.
     KITING         Find differences between two date fields of a record, such as differences of the MAC dates.
     MD5            Hash's little brother. Performs MD5 with a different output format.
     MDIR           Intelligent, forensic replacement for DIR.
     MERGE          "Merge" two similar files together. More efficient and sound than:  copy /b  file1 + file2 + ....
     MOUSE          Replacement for the type command. Similar to the *IX cat command. Like, cat and mouse.
     NO_HTML        Remove, as best as possible, all html code from a file to produce a clean text file for a report.
     PIPEFIX        Take a pipe (|) delimited file and make all fields fixed width based on the user's requests.
     RM & RMD       Intelligent del replacement, with overwrite RM & Destroy (RMD) capability.
     SEARCH         Search fixed length records on any field(s). Linear big brother of BSEARCH.
     SPLIT          SPLIT a file into manageable pieces, or a small sample size to work with.
     SSN_VALID      See if an SSN is valid and what state it's from. Only works on SSNs older than about 15 years. The SSA changed the algorithm then.
     STRSRCH        StringSRCH. Search text files for an unlimited number of strings. Clean spreadsheet-compatible output.
     TOTAL          Totals fields within a record, OR counts the number of records per sorted field. Excellent for IP counting.
     TOUCHME        OOH. Performs a *IX style touch of file dates. Very configurable.
     UNIQUE         Removes (uniques) duplicate records based on the sorted field.
     UPCOPY         A true forensic copy program. Try it, you'll like it.
     URL_SRCH       Searches text files for IPs, URLs, EMAILs, phone numbers, and other stuff. Great pcap searcher.
     VERTICLE       Ever have a very, very wide multi-field record and try to include it in a report? This makes horizontal records vertical for reports.

        Click this link if you want to download an exe with actual sample batch files for many of the above.
         However, you must follow the numbers, as some of the subsequent batches use output from those before.




Today's Software Special
BATES NUMBER

This session is about BATES NUMBERING your evidence files. If you don't know what a bates number is, talk to your local prosecuting attorney. Basically it means adding a unique identifier (an ID number mask) to the pages of reports so that there is no confusion, when discussing evidence, about which page is being referenced.

Situation: You have a few hundred evidence files. Obviously you are presenting them in their original directory structure. You may also be providing a file listing of all the appropriate file names within the tree. You with me so far? Many files, many directories, gigundo file listing.

Now what you find is that, for whatever reason, you may have a few files with the same name but obviously living in different trees. In your report, or when discussing with someone, you say: file joes_house.txt contained evidence information. However, there are a number of different joes_house.txt files. Which one were you really referencing? Can everyone reading/hearing/looking at your report know exactly which file you are talking about? Maybe yes, maybe no. So how do we make it clear which file you are talking about?

What we do is add a unique "bates number" identifier to the filenames. That is, we change the name of each file to have an added component which makes each filename unique. This is what the bates_number program does. It adds a unique (user designed) identifier to the filename. Let's say our mask to add is DJM_000. So now the file joes_house.txt has an added modifier which makes it read joes_house.DJM_000.txt. The program then proceeds to add this mask to each file while incrementing the index 000 for each file processed. So the next joes_house.txt that gets renamed, depending on where in the process it occurs, might have a new name of joes_house.DJM_021.txt. So when referencing a particular file, the mask/index uniquely identifies every filename, just as bates numbering makes unique each page in a legal presentation. Got it?
     C:\joes_house.txt  ==> C:\joes_house.DJM_000.txt
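
If you want to picture the renaming scheme in code form, here is a minimal Python sketch of the idea described above. It is not the bates_number program itself; the DJM_ mask, the counter width, and the evidence-copy directory are placeholders used only for illustration.

    import os

    def bates_rename(root_dir, mask="DJM_", start=0, width=3):
        """Walk a tree and insert an incrementing bates identifier
        into each filename, just before the extension."""
        index = start
        for dirpath, _dirs, files in os.walk(root_dir):
            for name in sorted(files):
                stem, ext = os.path.splitext(name)
                new_name = f"{stem}.{mask}{index:0{width}d}{ext}"
                os.rename(os.path.join(dirpath, name),
                          os.path.join(dirpath, new_name))
                index += 1

    # joes_house.txt  ==>  joes_house.DJM_000.txt, joes_house.DJM_021.txt, etc.
    bates_rename(r"C:\evidence_copy")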
========================================================================

Today's Software Special
BSEARCH

This session is about searching sorted fixed length text files using BSEARCH, a "binary" search algorithm with the speed of an indexed search.

During your examination you may often generate a significant number of output "data" files. Or have access to large data files provided by outside sources. One such source is the NIST NSRL files of MD5 values. A few hundred million records. There are ways of sequentially searching data files which may take a substantial amount of time. However, if the data files are fixed width records, and in this case sorted on a key field (ie: the MD5 value) you can use this BSEARCH program to conduct a "binary" search of the records on your key field. The term binary search in this case simply means it is in effect an indexed search. Meaning super fast.

Since bsearch was developed for searching fixed length data from mainframe computers (you know what those are) it has the restriction of requiring that the input records be of a fixed length, and be sorted on the key field being searched. If you are familiar with other maresware software, forming data files to meet these criteria is a simple matter: some of the other programs available will create a fixed length record and sort those records on your key field.
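
The underlying technique is easy to show. Below is a minimal Python sketch of a binary search over a file of fixed length records sorted on a key field. The record length (34 = a 32 character MD5 plus CR/LF), key offset, and key width are placeholder values chosen to match the MD5-style records used elsewhere on this page; they are not BSEARCH's actual options.

    def bsearch(path, key, rec_len=34, key_pos=0, key_len=32):
        """Binary search a file of fixed length records (rec_len bytes each,
        CR/LF included) sorted on the field at key_pos..key_pos+key_len."""
        key = key.encode()
        with open(path, "rb") as f:
            f.seek(0, 2)                      # jump to end of file
            lo, hi = 0, f.tell() // rec_len   # search by record number
            while lo < hi:
                mid = (lo + hi) // 2
                f.seek(mid * rec_len)
                rec = f.read(rec_len)
                field = rec[key_pos:key_pos + key_len]
                if field == key:
                    return rec
                if field < key:
                    lo = mid + 1
                else:
                    hi = mid
        return None

    # print(bsearch("NIST_NSRL_SORTED", "000247ECF3A2BC588ECC7BEAC77017C4"))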

Doing a binary/indexed search of almost 270,000,000 records didn't even register a second. How long would it take to do a linear search of 270 million records?
   Processing:          NIST_NSRL_SORTED
   Records:                  269,570,948
   No of records written =             1
   Elapsed time: 0 hrs. 0 mins. 0 secs
========================================================================

Today's Software Special
COLLATE

This session is about collating two sorted files together.
At some point you will have two identically formatted files which you need to combine into a single larger file. If these two files are fixed length records, and if they are sorted on a key field, then the maresware program COLLATE can do this easily.

As mentioned in prior items, this program was developed while working with large fixed length mainframe data files. But that's not to say that during your forensic data analysis you may not come across similar operations and the necessity to collate two identically formatted files together. Here is your answer.

The collate program is designed to do the following. You provide two files of identical format. Meaning the records in each fixed length file are identical. And each file is sorted on the same key field. Let's say it's the MD5 value field. Then you ask collate to "merge" or collate these two files together, all the time maintaining the sort on the MD5 field. Thus you end up with a larger file still sorted on the key field. You can then proceed to analyze/process this resulting file to whatever degree needed.

As before, the only restriction is that both files be of fixed length records, and be sorted on the same key field. After that, it's a simple command to perform the process. See below: collated 33 million records in 24 seconds. Just try loading 33 million records into a spreadsheet to perform the same operation.

   collate MD5_271_ANDROID   MD5_271_IOS  combined -r 34 -p 0 -l 32
      7,845,648 records read from  MD5_271_ANDROID
     25,883,151 records read from  MD5_271_IOS
     33,728,799 records written to combined
 	Elapsed time: 0 hrs. 0 mins. 24 secs
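
Conceptually, collate is a classic sorted merge. Here is a rough Python sketch of that idea, not the collate program itself; the record length and key offsets below only mirror my reading of the -r 34 -p 0 -l 32 example above, which is an assumption on my part.

    import heapq

    def collate(file1, file2, out, rec_len=34, key_pos=0, key_len=32):
        """Merge two files of fixed length records, each already sorted
        on the key field, into one output file still sorted on that key."""
        def records(path):
            with open(path, "rb") as f:
                while rec := f.read(rec_len):
                    yield rec

        keyfunc = lambda r: r[key_pos:key_pos + key_len]
        with open(out, "wb") as o:
            for rec in heapq.merge(records(file1), records(file2), key=keyfunc):
                o.write(rec)

    # collate("MD5_271_ANDROID", "MD5_271_IOS", "combined")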
========================================================================

Today's Software Special
COMPARE

This session we will talk about comparing one sorted file to another and finding either matches or mismatches of the key field.

Consider that during your investigation you have two files which share a similar key field, such as the MD5 hash value. Let's say you have an inventory (list/catalog) of seized evidence/suspect files which contains the MD5 value of each file. And you have a second file of just MD5 values, either known-good MD5 values or MD5 values that might belong to bad files such as viruses or other items which you need to look for and identify. So what you need to do is compare the file with the bad MD5's to the evidence file containing the list of files from your evidence group which contains an MD5 field. After the comparison you find which files in your evidence list match or mismatch (depending on your needs) the list of MD5's which indicate suspicious files. To do this "match" you run the maresware COMPARE program.

The compare program compares two files which are sorted on the same key field. The files DO NOT have to be identical in record format. For instance one file may contain a full directory listing of your evidence files. This record also contains the hash/MD5 of each file in the listing.
    sample_junk  708  CFA68D8D2299206E1CF31BF8D414635C   2024-01-01 12:15:32w
While the other file contains only a single field of suspect MD5 values. And you need to compare the files to see which items in your evidence list match the MD5 list. That's what the compare program is designed to do. It compares two files on a single key field and allows you to create a new output record from the match.

As before, this program was born from analysis of mainframe fixed length record data files. So both files must be fixed length and sorted. However, the record layout for each file may be different. The only restriction is that the key field be the sorted field. Simple enough?
   compare  evidence_file.md5   suspect_md5   matched_bad_files  compare.par
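
For the curious, the matching step amounts to walking two sorted files in step. Here is a rough Python sketch of that idea only; compare.par and the real program's parameter handling are not modeled, and the key field slices below are invented for illustration.

    def compare_sorted(evidence_path, suspect_path, out_path,
                       ev_key=lambda line: line[18:50],   # invented MD5 field slice
                       sus_key=lambda line: line[0:32]):
        """Report evidence records whose key matches a key in the suspect list.
        Both inputs must already be sorted on their key field."""
        with open(evidence_path) as ev, open(suspect_path) as sus, \
             open(out_path, "w") as out:
            sus_line = sus.readline()
            for ev_line in ev:
                # advance the suspect file until its key catches up
                while sus_line and sus_key(sus_line) < ev_key(ev_line):
                    sus_line = sus.readline()
                if sus_line and sus_key(sus_line) == ev_key(ev_line):
                    out.write(ev_line)          # a "matched bad file"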

========================================================================

Today's Software Special
COPY_ADS

This session we will talk about finding and extracting NTFS Alternate Data Streams (ADS) using the COPY_ADS program.

When working a case where the evidence is located on an NTFS file system you will come across alternate data stream files. You may consider them child objects, hitch-hikers, or some other cool name. But they could be very important evidence items. In most cases (except when using some forensic suites) you will never see, hear, or taste the effect of alternate data streams. And as such, you may miss evidence.

ADSs can take the form of any other file you are used to seeing. They can be executables (virus, porn, etc.), photos, databases, or any other normal file, plus some special items. One special item you may be interested in when investigating an internet related case is that most browsers create alternate data streams when you are on the internet. In particular, one browser (and I'll let you investigate for yourself which one) maintains the URL of the download site. Do you think this might help in a sex-crimes investigation? Other information can be hidden in ADS's that might assist in your investigation. So why not look at the files.

How do you get to see these alternate data streams, since explorer, DIR, and other general directory listing software don't see/feel/find ADS's? You use COPY_ADS. What COPY_ADS does is find the ADS and copy it out of its alternate data stream position into a "normal" file, so you can see it using simple processes like explorer, DIR, a word processor, any normal program. The ADS is exposed to the light of day under what you might consider a normal filename that can be seen and felt by your normal directory and file processing software. However, you want to run COPY_ADS on a forensic copy of the evidence tree, because the program in effect "creates" files in the directory or on the drive. And the initial caveat being: did your initial forensic copy of the evidence files capture the ADS's so you can now expose them to the light of day?

========================================================================

Today's Software Special
CRCKIT

This session we will talk about running a basic program which calculates the 32bit CRC of files. The program name is CRCKIT.

Even though the 32-bit CRC algorithm is not used very much any longer, this CRCKIT program may still have some benefit. It calculates the CRC of a file and allows you to create a listing of the files and their CRC values. In addition there are options which allow for selecting files to process based on size and date and, obviously, path and filename. Actually, in its basic operation you can ask it to only print the path/filename. An easy way to create a quick file listing of the files within the tree without their CRC or MD5 value.

   C:\TMP\JUNK_DEL\HASH_TEST\EXES\HASH.EXE

One security aspect of the crckit program is that it can add a CRC signature to a file. This signature can later be checked (using CRCKIT) to verify the file hasn't been altered. A quick and dirty virus or modification check. If the file has been modified, the crckit program reports an invalid CRC and you need to figure out why the file was altered.

   C:\TMP\junk   45E7A9A9 Incorrect CRC

Crckit has other output and processing options, which include calculating the CRC of the ADS's. Just remember, it is command line and very basic. But maybe that's all you need right now.
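
The CRC calculation itself is nothing exotic. Here is a small Python sketch of a per-file 32-bit CRC listing; it shows only the concept, not crckit's actual output format or its signature-append feature, and the directory path is just the example path used above.

    import os
    import zlib

    def crc32_of_file(path, chunk=65536):
        """Compute the 32-bit CRC of a file, reading it in chunks."""
        crc = 0
        with open(path, "rb") as f:
            while block := f.read(chunk):
                crc = zlib.crc32(block, crc)
        return crc & 0xFFFFFFFF

    for dirpath, _dirs, files in os.walk(r"C:\TMP\JUNK_DEL\HASH_TEST"):
        for name in files:
            full = os.path.join(dirpath, name)
            print(f"{full}  {crc32_of_file(full):08X}")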

========================================================================

Today's Software Special
CSV2PIPE

This session we will talk about converting a misformed (or messed up) .csv file to a true pipe (|) delimited file using the CSV2PIPE program.

I don't know how many times you have tried to import into the spreadsheet a mis-formed csv file and had one or all of the fields/columns corrupted. This is because the csv format with its (",") separators and misplaced quotes ( \" ) within fields causes the spreadsheet to lose its mind while importing the data. I know this has happened to me all too often; that is why I wrote this program which converts CSV 2 PIPE delimited files. (my dog "," has fleas) becomes (my dog | has fleas).

For those of you not from the mainframe world: the pipe character (and this carried over to the PC world) is one of the few characters restricted from use in filenames and other formatting. So the occurrence of a pipe in a record actually means something special. In this case, the pipe character is used to identify field delimiters.

What the csv2pipe program does is read the supposed csv file format, find all the "," csv indicators, and convert them to the pipe (|) delimiter. It knows when it sees a malformed field and ignores that item. So what you end up with is a true pipe delimited record, and any program worth its weight in characters, especially a spreadsheet, knows exactly what to do with the pipe delimiter. It means that this is another field to load into the column.
I don't think I've ever seen a pipe delimited file mis-loaded by a spreadsheet. Can't speak for database programs, since I don't use database software. I'm a purist at heart. Just ask my relatives why I don't put strawberries on top of my cheesecakes.
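
A rough Python sketch of the conversion idea, leaning on the csv module to untangle the quoting. This is only an illustration of the concept, not how csv2pipe itself handles malformed fields, and the filenames are hypothetical.

    import csv

    def csv_to_pipe(in_path, out_path):
        """Re-emit a quoted, comma separated file as plain pipe delimited records."""
        with open(in_path, newline="") as src, open(out_path, "w") as dst:
            for row in csv.reader(src):
                # strip any stray pipes inside fields so the delimiter stays special
                clean = [field.replace("|", " ").strip() for field in row]
                dst.write("|".join(clean) + "|\n")

    # csv_to_pipe("contacts.csv", "contacts.pip")   # hypothetical filenames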

=================================================================

Today's Software Special
DISKCAT

Let's talk file cataloging, or listing the files in your evidence tree. This program, DISKCAT, is one of the most important of the maresware forensic programs. So take a close look.

Whenever you seize evidence, isn't creating an inventory of the seized items a primary operation? So, when seizing computer evidence in preparation for a forensic exam you might consider creating an inventory or catalog of those files which are located on the suspect computer and thus seized as evidence. Yes/no?

If so, let's create an evidence list or catalog of files in the tree. Or as I call it, diskcat, for disk cataloging. For this operation you might consider using the DISKCAT program.

The diskcat program is specially designed to find and list ALL the files (or only selected file types, ie: *.doc, *.txt, *.jpg etc.) within the specified directory trees. Notice I said trees, not tree. Because on the command line you can point it to a number (up to 10) of directories to search and thus create a catalog or listing of ALL the files contained within those directory trees.

The output can be customized to include any or all three file times (MAC); find files based on file time or file size; full path; delimited fields for easy import to your favorite spreadsheet; drive serial number and label so you can differentiate later which drive the file was found on; a user specified identifier to uniquely identify this suspect computer from the other suspect computer(s); any NTFS Alternate Data Streams; and other stuff (that's a technical term).
   COMMENT | PATH\FILENAME   |SIZE|ATTR  |   CDATE  | CTIME       | TZ| SERIAL # | DISK LABEL 
   Suspect1|C:\password.txt  |  60|ARH...|2020/01/01|07:34:56:789c|GMT| 7EF7-17FC| DRIVE_C
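
To make the idea concrete, here is a bare-bones Python sketch of a tree catalog with pipe delimited fields. It is an illustration only; diskcat's real options, serial number and label lookup, and ADS handling are not reproduced, and the comment string and paths are invented.

    import os
    from datetime import datetime, timezone

    def diskcat_like(root, out_path, comment="Suspect1"):
        """Walk a tree and write one pipe delimited record per file:
        comment|path|size|last write time (UTC)."""
        with open(out_path, "w") as out:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    st = os.stat(full)
                    mtime = datetime.fromtimestamp(st.st_mtime, tz=timezone.utc)
                    out.write(f"{comment}|{full}|{st.st_size}|"
                              f"{mtime:%Y/%m/%d %H:%M:%S}w|GMT|\n")

    # diskcat_like(r"C:\evidence_copy", "catalog.txt")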
========================================================================

Today's Software Special
EML_PROCESSING

Let's talk .eml file processing with eml_process.exe.

How often during an investigation have you asked for more header information from an email than the to: and from: items? A lot of times, I suspect. So why not use a program which can identify and separate the header fields into a nice pipe delimited record which can be easily imported to a spreadsheet for analysis. If this is true, try out this program.

The only restriction is that the eml files which you process are actually .eml files on the drive. Not emails contained in a container maintained by a larger email program. So the first thing you must do is export all the emails to the traditional .eml format. Once this is done, the next step is easy.

Point eml_proc at the directory containing the .eml files, and say have at it. The program finds all the .eml files, reads the header information, and splits "most" of the header information into pipe delimited fields which can then be imported to a spreadsheet for easy review and processing.

Some of the important information which the program identifies is the usual: to, from, bcc and subject fields. Here is a list of what is placed in the output records:
   "Filename|From|To|CC|Bcc|Date|ForwardDate|Subject|MessageRead|AttachmentInfo|RecentIP|"
If the item is not available, the field is left blank. Now import to the spreadsheet and have fun.
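
Python's standard email package can demonstrate the basic extraction. This is a sketch only; eml_process's actual field set (MessageRead, AttachmentInfo, RecentIP and so on) involves more work than shown here, and the directory and output names are placeholders.

    import glob
    from email import policy
    from email.parser import BytesParser

    def eml_headers_to_pipe(directory, out_path):
        """Pull a few common headers out of each .eml file into pipe delimited rows."""
        wanted = ["From", "To", "Cc", "Bcc", "Date", "Subject"]
        with open(out_path, "w", encoding="utf-8") as out:
            for path in glob.glob(f"{directory}/*.eml"):
                with open(path, "rb") as fp:
                    msg = BytesParser(policy=policy.default).parse(fp)
                fields = [path] + [(msg.get(h) or "") for h in wanted]
                out.write("|".join(str(f).replace("|", " ") for f in fields) + "|\n")

    # eml_headers_to_pipe(r"C:\exported_mail", "headers.pip")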

============================================================

Today's Software Special
FILBREAK

Take a filbreak and split the record into its parts.

Suppose you have a very large record that is made up of many fields, such as the example below. Much of the record, that is to say many of the fields, is not needed for the next step in your processing.
   TEST_FILE|C:\CLASS\runme.bat|4003|A......|01/01/2019|07:34:56c|01/01/2019|07:34:56w|01/23/2024|21:21:17a|EST|
The next step in your processing needs only the path, size, and maybe the creation date in YYYYMMDD format. The record above contains all that information and more that is not needed. So how do you extract the information you need and, in this case, reform the date field? You use the filbreak.exe program.

FILBREAK is designed to "break" up a file/record into whatever pieces the user desires. It can move fields around (bring the end field to the beginning etc.), truncate fields, add comment fields, and in the case here, massage the date field into the necessary YYYYMMDD format, which as you know is much easier to sort than the format that is displayed.
The user provides information as to what fields or parts of each field are needed, and the program finds those characters and builds a new output record. After the process is completed the new record is written to the output, and so on and so on. So the new record from above might look like:
   C:\CLASS\runme.bat|4003|20190101|07:34:56c|2019-01-01|07:34:56w|MY_INSERTED_INFO|
The user can massage the record any way your little heart desires, reforming it to whatever format is necessary for the next processing stage. I do hope that you have a next processing stage for the data as input.
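
A tiny Python sketch of the same idea: split a pipe delimited record, keep only some fields, and reshape a date. It is illustrative only; filbreak itself is driven by character positions and a user parameter file, not this hard-coded logic.

    def break_record(line):
        """Keep path, size, create date (reformatted to YYYYMMDD) and create time,
        and append a comment field."""
        fields = line.rstrip("|\n").split("|")
        path, size, cdate, ctime = fields[1], fields[2], fields[4], fields[5]
        mm, dd, yyyy = cdate.split("/")
        return f"{path}|{size}|{yyyy}{mm}{dd}|{ctime}|MY_INSERTED_INFO|"

    rec = ("TEST_FILE|C:\\CLASS\\runme.bat|4003|A......|01/01/2019|07:34:56c|"
           "01/01/2019|07:34:56w|01/23/2024|21:21:17a|EST|")
    print(break_record(rec))
    # C:\CLASS\runme.bat|4003|20190101|07:34:56c|MY_INSERTED_INFO|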

==================================================================================

Today's Software Special
FILSPLIT

Let's split a file into smaller pieces using filsplit.

Well, now you have a gigantic (that's a technical term) file that you may need or want to split into smaller manageable pieces. If this is the case, FILSPLIT is the program for you.

The filsplit program can take a large file and, guess what, split it into smaller, more manageable pieces. That way you can run your tests on the small segment, and develop your process without wasting time on the large file. You can also, if necessary, split off only a single small segment as a sample to show how good an examiner you are.

Filsplit can do the following "splits".
1. Split off XX bytes from a file to a single smaller file.
2. For fixed length record files, split off XX number of records.
3. Start copying at position XX and copy YY bytes to the output file.
4. Split the file into multiple XX record splits. So a larger file becomes many smaller files.
5. While it's doing all this, calculate the MD5 of the splits.

Bottom line: if you need smaller portions of a much larger file, FILSPLIT the large file into pieces.

==================================================================================

Today's Software Special
FINDRECL

With a fixed length record file, if you don't already know the record length, you can use findrecl to make an educated guess at the actual record length.

When using maresware to process fixed length records, you sometimes create a new output record that is a different format/size from the original. This is obviously the case when you use the filbreak program to massage the record contents into a different output record.

Once you have this new output record, you ask yourself: what is the record length of the output record? Or if you are given a file and don't know its record width, you need someone to tell you the record length. This program, findrecl, is just the thing. It opens a file, and reads records until it finds the traditional CR/LF record terminator. Then it says,
   Record len:  34
   From position 1 
   From position 0
   Field 1 is:       000247ECF3A2BC588ECC7BEAC77017C4  CR/LF
   Found 0x0d( carriage return ) at 33
   Found 0x0A ( line feed )      at 34
   Apparent number of records in file based on 34 record length = 2,171
next display has spaces removed for visibility
   Record len:  77
   From position 1   
   From position 0
   Field 1:           0191+324429741907763BE       DTY    T04048SORT ON 21X150334110202 CR/LF
   Found 0x0d( carriage return ) at 76
   Found 0x0A ( line feed )      at 77
   Apparent number of records in file based on 77 record length = 525,000
Once the apparent record length of the fixed length record is determined you can be on your merry way.
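
The guess itself is simple: find the first CR/LF and see whether the file size divides evenly by that length. Here is a minimal Python sketch of that reasoning; it is not findrecl, and its output is only loosely modeled on the displays above.

    import os

    def find_record_length(path):
        """Guess the fixed record length from the position of the first CR/LF."""
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            first = f.read(65536)
        pos = first.find(b"\r\n")
        if pos < 0:
            return None                      # no CR/LF terminator found
        rec_len = pos + 2                    # data bytes plus CR and LF
        print(f"Record len: {rec_len}")
        print(f"Apparent number of records = {size // rec_len:,}"
              + ("" if size % rec_len == 0 else " (file size not an even multiple!)"))
        return rec_len

    # find_record_length("NIST_NSRL_SORTED")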

=============================================================

Today's Software Special
HASH

Now for the good stuff. We will talk about the HASH program.

As you know (at least I hope you know) the hash of a file is an effectively unique value which can be used to identify and record the state of the file as an evidentiary step. The hash program will calculate the MD5 and other hash values (SHA's) of a file or files.

Hashing of files is a primary step in most forensic analysis processes. The calculation is relatively commonplace, but what you do with the output of a hash program and how you actually hash the file is important. What I mean to say is that I have tested many hashing programs against four simple evidentiary requirements, and found that only about 1% of those tested passed all the tests. So before you use a hashing program in an evidentiary environment you might want to test its evidentiary worthiness. Or in other words, can you defend its operation? Enough about the evidence portion. Now for the operation.

The hash program simply performs the hash of the files it finds. But the trick is that it has many, many options which may make it an invaluable evidentiary tool. For instance, it can not only perform an MD5 hash, it can also calculate SHA values. I'll let you read the manual to see exactly how many SHA's. When you think about it (and I know, sometimes thinking is difficult), if you create a hash file of ALL the files in your evidence tree, haven't you also created a reliable "inventory" or listing of those evidence files? Nice thing to have.

For a tease, I'll say now: it can also provide the three file dates, find and calculate ADS hashes, and create a field containing only the 8.3 filename, which is handy when imported to a spreadsheet. If you don't know what the 8.3 filename is, get another job.

Hash is a very powerful and useful evidentiary program. So take a look and hash it out.
  spaces removed for clarity
  -------- BEGIN PROCESSING MD5 -----------
   "  PATH                 |              MD5                 |  SIZE |    MDATE  | MTIME | TZ | NAME    "
   "H:\TRAINING\FILE.xxx   | B4B075398B28D773D063FB1204D0B308 | 149192| 07/30/2023| 16:02 | EST| FILE.xxx"
===============================================

Today's Software Special
HASH_DUP

Now let's find out how many duplicate hash values we have. To do this we will talk about the HASH_DUP program.

Now that you have a large inventory of hash values created from the hash or md5 or crckit program, you can find which items (hashes) are duplicated in the files. The only restriction is that, should you decide to merge any two hash files, the record formats be identical and, as before, be fixed length records. However, with filbreak and other maresware software, making a fixed length record is child's play.

The hash_dup program will take a fixed length record file which is sorted on the hash value and remove any duplicate records. This duplicate removal then allows for the use or identification of only a single instance of any hash value, which is useful in a number of circumstances. The main "current" restriction is that there is a 1 million record limit.

The primary use of the hash_dup program is to determine if hashes from different suspect locations match each other. So that you can identify files which are identical across different source locations, regardless of path or filename. Even identical files living in different directories on the same drive.

For your own file maintenance, it would be nice to know if you have similar redundant files living in two different places. Then if you do find duplicates you can remove them if necessary. No need to keep unwanted duplicates. Is there?
	hash_dup -i junk1 -o dupes
   There are     11 records to process
  Processed      11
  There were      3 duplicate sets found
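
Since the input is already sorted on the hash, spotting duplicate sets is just a matter of comparing each record's key to the previous one. Here is a short Python sketch of that pass; the field positions are assumptions, and hash_dup's own options and limits differ.

    def find_duplicate_sets(in_path, out_path, key_pos=0, key_len=32):
        """Scan a file sorted on its hash field and report records whose hash
        repeats (each group of repeats is one 'duplicate set')."""
        dup_sets = 0
        with open(in_path) as src, open(out_path, "w") as out:
            prev_key, in_set = None, False
            for line in src:
                key = line[key_pos:key_pos + key_len]
                if key == prev_key:
                    if not in_set:
                        dup_sets += 1
                        in_set = True
                    out.write(line)
                else:
                    in_set = False
                prev_key = key
        print(f"There were {dup_sets} duplicate sets found")

    # find_duplicate_sets("junk1", "dupes")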

===========================================================

Today's Software Special
HASHCMP

Next we will talk about the HASHCMP and HASHCMPV programs.

The hashcmp program works on fixed length files while hashcmpv works on files with variable length records. That is the major difference between the two.

When you create output files with the hash program you end up with a fixed length record containing a specific key field. In the case of the hash program, that field is the MD5 hash value. Now, at some other point you create another file with identical format from, say, a different suspect drive or location. Or for your own use, you create a file on day 1, and then at a later time you create another file on day XX.

Now you wish to compare the two files to see either which hash values show up on both files, or which hash values are on file1 and not on file2. For forensic or your own maintenance purposes, the hashcmp program allows you to determine if the two file listings contain similarities, or, most usually, which hashes are on file1 and not on file2.

A situation might be: you have a hash listing created on day1. Then something has gone wrong, and on day2 you create another hash listing of the files on the same drive. You wish to see what has either changed from day1 to day2 or what was added on day2. Maybe a virus file, or some such animal. So what do you do? You call hashbusters. Or rather hashcmp. It compares the hash values from file1 to file2 and, for this situation, shows which hashes show up on day2 that weren't there on day1, regardless of filename. So if a virus hit a file, even though the name is the same, the guilty will be found.

Since hashcmp works on any fixed length file containing the key field to compare, you can use it for other purposes also. Maybe compare $$$ figures, or dates or names. Use your imagination. But the main purpose is to tell you which hashes are either matched on the two files or are different on the two files.

Notice below the runs were done at different times, and even though the filenames were not changed, at some point between time1 and time2 the hash values for two files were altered. Why were they altered? That's for you to figure out.

When it says in file 1 not in file 2, this means that a hash was found in file1 with no match in file2. It ignores names.
   in file: junk1 |not in file: junk2 |I:\TMP\SYNC_1.BAT  |  0071D6D738D1E2DB3115BDEE23478117|
   in file: junk2 |not in file: junk1 |I:\TMP\SYNC_1.BAT  |  1071D6D738D1E2DB3115BDEE23478117|

   in file: junk2 |not in file: junk1 |I:\TMP\menu.lst    |  7AA6CE75A808890E2F7CC5AC422EED71|
   in file: junk1 |not in file: junk2 |I:\TMP\menu.lst    |  6AA6CE75A808890E2F7CC5AC422EED71|
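
The comparison can be pictured as asking which hash values in one list have no match anywhere in the other. Here is a compact Python sketch of the "in file1, not in file2" logic using plain sets; it is a conceptual stand-in, not hashcmp, and the key field slice is an assumption.

    def hashes_only_in_first(file1, file2, key_pos=0, key_len=32):
        """Return lines of file1 whose hash field has no match anywhere in file2."""
        def key(line):
            return line[key_pos:key_pos + key_len]

        with open(file2) as f2:
            known = {key(line) for line in f2}
        with open(file1) as f1:
            return [line for line in f1 if key(line) not in known]

    for line in hashes_only_in_first("junk1", "junk2"):
        print("in file: junk1 | not in file: junk2 |", line.rstrip())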

=================================================================

Today's Software Special
KITING

Next we will fly a kite, or go KITING.

For those of you with an accounting background, you know the word kiting means something along the lines of finding the difference between $$ value1 and $$ value2. But in this case the kiting program will find the difference between file date1 and file date2. For instance, we have a create date of 2024-01-01 and a last write date of 2024-02-01 and we wish to see how many days were between the two dates. The reason for doing this? Your guess is as good as mine. But in cases of theft or file exfiltration, the last access date in a kiting calculation might be useful to see when a file was possibly copied and walked or thumbed its way out the door.

Kiting is provided a fixed length record with two dates in it. The date formats can be a variety of normal date formats, so you don't have to use filbreak to fix the field format. Then you tell kiting the location within the record of the two dates and voila, you end up with an output record showing the date difference. Not much more to the program's operation. I'll leave it to you to figure out how you need to use the date difference of a file's MAC date/times.
   I:\FINDRECL.EXE  | 2013/02/25|06:18:18:710c| 2009/11/10|10:02:06:000w|+  1203|
   I:\find_nsrl.bat | 2013/02/25|06:18:18:725c| 2010/05/26|08:43:14:250w|+  1006|
   I:\findrecl      | 2009/01/13|08:30:08:000c| 2009/01/13|08:30:08:000w|+     0|
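
The arithmetic behind the "+  1203" style column is plain date subtraction. A minimal Python sketch follows; the field positions and the YYYY/MM/DD format are assumptions chosen to match the first sample record above, not kiting's real parameters.

    from datetime import date

    def day_difference(record, date1_pos=19, date2_pos=45, fmt_len=10):
        """Subtract two YYYY/MM/DD dates pulled from fixed positions in a record."""
        def parse(pos):
            y, m, d = record[pos:pos + fmt_len].split("/")
            return date(int(y), int(m), int(d))
        return (parse(date1_pos) - parse(date2_pos)).days

    rec = "I:\\FINDRECL.EXE  | 2013/02/25|06:18:18:710c| 2009/11/10|10:02:06:000w|"
    print(f"{rec}+{day_difference(rec):6d}|")   # ...000w|+  1203|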

=========================================================================

Today's Software Special
MD5

Now the less verbose hash calculating program: MD5.

The MD5 program is the little brother of the hash.exe program. It does most of the same things but produces a slightly different output format. More often it is used from the command line to do a quick and dirty calculation of only a single file or a small number of files.

Its output record is a simple three field record of filename, filesize and MD5 value. If you wish other fields to be included you must include options on the command line. The MD5 and HASH produce different outputs depending on your needs and how you feel that day.

Both can be batch included for hands free operation and create nice clean fixed length record output files. Not much else to be said that hasn't already been said in the hash.exe explanations.
   Started Wed Jan 24 19:38:32 2024 GMT, 15:38 Eastern Standard Time (EST/EDT:UTC-4*)

   RM.EXE      212168  5258E35273924860A78860E4ADDE8946
   RMD.EXE     212168  5258E35273924860A78860E4ADDE8946

======================================================

Today's Software Special
MDIR

Let's replace DIR with MDIR.

MDIR is a forensically sound replacement for the DIR command. It produces similar looking output but has much more capability than the DIR command. Remember, it is a command prompt program. One of the nice things about MDIR is that by default it produces a sorted output, which generally makes it much easier to find the filename you are looking for.

Other options include size and date/time restrictions which further allow you to "program" its output. The output can also be easily placed into an output file for later use. The date options not only allow for listing all three MAC dates at the same time, but allow for YYYYMMDD format, which many find more useful and easier to read. And one last important forensic enhancement: MDIR lists file attributes by default, so you will see hidden and read-only attributes, and it lists any NTFS alternate data streams it sees. To obtain many of MDIR's default capabilities, you would have to add many convoluted options to the DIR command. So this is a simple to use replacement with forensic evidence capability.
   MDIR  SEARCH.* -T3 --zulu -d "|" 

   SEARCH.EXE | 198,344| 2023/10/14|18:37:53c| 2011/06/10|11:13:20w| 2024/01/24|16:23:59a|GMT|A.....|


========================================================================

Today's Software Special
MERGE

Now let's MERGE files together.

The merge program is a replacement for the copy command: copy /b file1 + file2 + file3 etc. for when you are trying to "merge" a number of files with sequence extensions.

The main reason you might use merge instead of the copy command is when you have a significant number of extension sequence files (ie: file.000, file.001 etc.), and there may be many of them which were generated from your evidence processing. You wish to merge or recombine them into a single file. This is what merge was designed to do. It takes a command line of filenames with sequenced file extensions, and adds them all together into a larger file.
It is simple to use and easier to type than a multiple file copy command when you have many sequence numbered file names to add to the command line.
Nothing else special about it as far as forensic or evidence use. But it is a simple to use command to "merge" or combine many sequence filenames together.
   C:\merge  basename.*    new_output_merged_filename
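
In concept it is just an ordered byte-for-byte concatenation. A short Python sketch of that idea (basename.* and the output name below are only placeholders echoing the example above, not merge's actual behavior):

    import glob
    import shutil

    def merge_sequence(pattern, out_path):
        """Concatenate all files matching a pattern, in sorted order, into one file."""
        with open(out_path, "wb") as out:
            for name in sorted(glob.glob(pattern)):   # file.000, file.001, ...
                with open(name, "rb") as part:
                    shutil.copyfileobj(part, out)

    # merge_sequence("basename.*", "new_output_merged_filename")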


=================================================================================

Today's Software Special
MOUSE

Now let's cat and MOUSE files.

Mouse is my version of the *IX CAT command. It can be used to simply display the contents of a text file on the screen with page breaks. But it can do a lot more.
When you wish to include a text file in your report, or print it, mouse can include in the output/print copy a number of useful pieces of information for the investigator/report. It can print/display the filename and today's date at the top of each page, which makes review of printed items easier. When processing a fixed length record file that does not have carriage returns, it can add a carriage return. Again this is useful for saving a print copy.
It has many filename/date/page inclusion options plus many useful print modification options for when you need to include a text file in your report.

===========================================================================

Today's Software Special
NO_HTML

If you need to remove the html code from a file you might try running it through NO_HTML.

The NO_HTML program takes an html formatted file and very simply removes most of the html code. It does not remove all the html coding, because that coding can get quite complicated. But it removes enough so that the result can be opened and read with a simple text editor or word processor as a plain text file.

Not much else to say about the program, but you might consider using it when trying to include an html type document in a report where the html code would simply corrupt the view and the user has no other way to view an html document.

======================================================================

Today's Software Special
PIPEFIX

Now let's make a fixed length record from a pipe delimited record using PIPEFIX.

As has been mentioned before, most of the maresware data processing software relies on the fact that the files it is processing are fixed length records. This requirement comes from the ancient times of working with mainframe data. So what do you do if you have a delimited (and I hope it's pipe delimited) file? Answer: you make the delimited file into a fixed length record by using the pipefix program. As the name implies, it fixes pipes. Which means it turns pipe delimited records into fixed length records.

In order to do this you must tell the program all you know (and I hope you know) about the input record format. And that means you should know how many fields are in the input record, what the delimiter is (hopefully a pipe symbol, but not required), and what size you wish the fixed width fields to be. You can then send the input file thru pipefix and end up with a fixed width record.

The trick behind this operation is what is called a parameter file. This parameter file contains the "parameters", or list of fields, in the input file. It contains the width that you want each field to become. For instance, you want the path field to end up as an XX length fixed field, and you want the date field to be 20 characters in size. You provide the width of each output field. You can also tell pipefix thru this parameter file to drop unnecessary fields, and add small comment fields if you wish. You can add literals like GMT or TZ or other short descriptors so that when the output is imported or looked at, the reader knows what they are seeing. See below to see how it works. The original variable width pipe record turns into a fixed width field record.
  Dan Mares | 123 anystreet | nowhere | USA | 10-01-2020 |
  Joe Smithsone | 12334 news plaza | bologna | ITALY | 10-10-1980 | 	

    can become  fixed width records. pipes maintained for legibility.

  Dan Mares       | 123 anystreet      | nowhere    | USA   | 2020-10-01 |
  Joe Smithsone   | 12334 news  plaza  | bologna    | ITALY | 1980-10-10 | 	
which makes the output easier to process further using other maresware software, and can also be imported into a spreadsheet or database.
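
The padding idea can be shown in a few lines of Python. The field widths below stand in for what a pipefix parameter file would specify; this is only a sketch of the concept, and it does not reorder the date the way the example above does.

    def pipe_to_fixed(line, widths=(16, 20, 12, 7, 12)):
        """Pad (or truncate) each pipe delimited field to a fixed width."""
        fields = [f.strip() for f in line.rstrip("|\n").split("|")]
        return "| ".join(f"{f:<{w}}"[:w] for f, w in zip(fields, widths)) + "|"

    print(pipe_to_fixed("Dan Mares | 123 anystreet | nowhere | USA | 10-01-2020 |"))
    print(pipe_to_fixed("Joe Smithsone | 12334 news plaza | bologna | ITALY | 10-10-1980 |"))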

================================================================================

Today's Software Special
RM & RMD

Now lets see how to remove and destroy files using the RM program.

RM and RMD are designed to delete or remove files similar to the *IX rm command. The RM command simply removes files, while the RMD command removes and destroys. Meaning it overwrites the file so it can't be recovered. During the overwrite phase it also renames the file so the original filename is not available for recovery.

In addition to removing files, RM has the capability of traversing a tree while removing targets. So when you tell it to remove *.* and tell it to recurse the tree, in effect the entire tree/directory is removed. It will also remove read-only and hidden files if asked nicely. If you tell it to remove all *.mp4 files it will traverse the tree and remove all the mp4 files, while leaving the others alone.

Part of the forensic capability is that you can also pass rm a list of files to remove. So if you obtain or create a list of files during your investigation and need to delete them for whatever reason, you give rm the list and it will only remove those files within the list. The nice thing about the list, is that it can have files living in different trees. The program will find them and remove them.

RM and RMD have many other useful options. Try it as an intelligent replacement for DEL.

============================================================================

Today's Software Special
SEARCH

How do you find a large number of keys in a fixed length file? You SEARCH the file.

The maresware SEARCH program was developed to search large fixed length (notice I keep referencing fixed length records) data files for any number of search keys. And I mean any number. You may have a 200 million record NSRL MD5 data set and you wish to search it for the 10000 MD5 values you extracted from your evidence files. Try doing that search using a spreadsheet. It might take a few years. But suppose there was a program that could search through those 200 million and find which of the 10000 are contained within the data set. Would be nice, wouldn't it?

The search program can search a large data set for any number of keys. However, as mentioned in one of my earlier rants, if the two files are sorted on the key, then the compare program would be more efficient. If they are not sorted on the key field, then the search program is the way to go. But remember, the records searched must be fixed length. So if you need to, run the source thru the pipefix program. In the case below we are only searching for 2 keys. But you get the idea.
    Output file name = SEARCH72.out
    Output record length is         72
    No of records read =       127,000
    No of records wrote=             2
    Elapsed time: 0 hrs. 0 mins. 0 secs
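
A linear multi-key search is easy to picture in Python: load the keys into a set and scan the records once. This is a sketch of the idea only; the key field slice is an assumption, the filenames are borrowed from the examples, and SEARCH's own options are far richer.

    def linear_search(data_path, keys_path, out_path, key_pos=0, key_len=32):
        """Write every record whose key field matches any key in keys_path.
        Neither file needs to be sorted; the data is read exactly once."""
        with open(keys_path) as kf:
            wanted = {line.strip() for line in kf if line.strip()}
        written = 0
        with open(data_path) as data, open(out_path, "w") as out:
            for record in data:
                if record[key_pos:key_pos + key_len] in wanted:
                    out.write(record)
                    written += 1
        print(f"No of records wrote = {written}")

    # linear_search("NIST_NSRL_SORTED", "my_10000_md5s.txt", "SEARCH72.out")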
========================================================================================

Today's Software Special
SPLIT

What is the opposite of merge? It is SPLIT: how do we split a file into smaller segments?

In another session we learned how to merge files together to make a single large file. Now, if you have a large file that you want to "split" into smaller manageable pieces, you use the split program.
You take a fixed length input file, and tell the program how many records you want in each of the output files. OR: a second option is to tell the program to make only XX output files from the larger input/source file. Again, the input should be a fixed width record. DAH! How do we create fixed width records? We use pipefix. Then we split the input file into any number of output segments which can be used as test samples or whatever your heart desires.

=================================================================================

Today's Software Special
SSN_VALID

Do you often need to check whether an SSN is valid or a fraud? Well, SSN_VALID might be able to help.

Years ago, and I mean many years ago, I had to confirm whether a particular SSN was valid or a fake. Back then the government had a formula for issuing social security numbers. Today, I don't think the formula is used, but if you have a person who is older than about 20 this program might help to determine if the SSN they are sporting is valid, and best of all, which state it was issued in.

Very simple. You provide the ssn on the command line, or provide a text file containing multiple ssn's, and the program will attempt to determine its state and validity. But, again, I caution that the formula the government used then may not be current. So caveat emptor.

   SSN: 001-02-1234   New Hampshire         VALID=YES
   SSN: 008-80-1234   Vermont               VALID=YES
   SSN: 001-11-1234   New Hampshire         VALID=NO
   SSN: 001-00-1234   New Hampshire         VALID=NO
   SSN: 005-02-1234   Maine                 VALID=YES
   SSN: 001-02-0000   New Hampshire         VALID=NO

================================================================================

Today's Software Special
STRSRCH

Often you need to search thru text files for strings. Well, if your files contain textual content, and you have a boatload of strings to search for, then STRSRCH can do the job.

STRSRCH is designed to take a list of text strings and search thru files for those particular strings. The file being searched need not be totally text. Just that the string you are searching for is in fact text within the file. For instance, say you are looking for a particular URL within some sort of network data file. Usually the URL's are stored as text, so you can search for the URL.

Once found, the program will extract surrounding "text" and place that information in the output file along with the position/location within the file the string was found. You can then use other software to open the suspect file and go to the position where your string was found. The amount of surrounding text placed in the output is user defined, so you can get as little or as much surrounding information as you like. Just be aware that a lot of hits result in a lot of output.

(fields truncated for legibility)
    C:strsrch -f f*.exe -s mares -wo junk     

  STRING |  LOCATION  |        FILENAME          |   TEXT                     |
  mares  |     150509 | H:\TRAINING\FILBREAK.exe |2021 by Dan Mares....aB.678-|
  mares  |     158389 | H:\TRAINING\FILBREAK.exe |Mares, www.dmares.com...par.|
  mares  |     169573 | H:\TRAINING\FILBREAK.exe |Copy ...8.B.Maresware Unregi|
  mares  |     169682 | H:\TRAINING\FILBREAK.exe |ave a valid Maresware regist|
  mares  |     134433 | H:\TRAINING\FILSPLIT.EXE |998-2010 by Mares and Compan|
  mares  |     147929 | H:\TRAINING\FILSPLIT.EXE |............Maresware ...UB.|
  mares  |     136282 | H:\TRAINING\FINDRECL.exe |ave a valid Maresware regist|
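
If you want to picture the mechanics, here is a Python sketch of scanning arbitrary (possibly binary) files for text strings and reporting the byte offset plus some surrounding text. It is a concept sketch, not strsrch; the context width and file pattern are arbitrary choices.

    import glob

    def string_search(patterns, file_glob, context=14):
        """Find each pattern in each file; print string, byte offset, filename,
        and a little surrounding text."""
        needles = [p.encode() for p in patterns]
        for path in glob.glob(file_glob):
            with open(path, "rb") as f:
                data = f.read()
            for needle in needles:
                pos = data.find(needle)
                while pos != -1:
                    snippet = data[max(0, pos - context):pos + len(needle) + context]
                    print(f"{needle.decode()} | {pos} | {path} | "
                          f"{snippet.decode('latin-1')}")
                    pos = data.find(needle, pos + 1)

    # string_search(["mares"], "f*.exe")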

===========================================================================

Today's Software Special
TOTAL

When you have records that are sorted on a key field (ie: name) and they contain a numeric field, possibly $$ values or other numbers, and you want to total the $$ amount for all the records with the same key field (ie: name), you TOTAL the $$ values.
The total program has two capabilities. The first, as mentioned above, is to total the values of the numeric field for each sorted key item, say NAME. So if you have 10 records with the name BFTTU that have a dollar field of 5 digits, you can count or total the amount that BFTTU has in their bank account.
                    Sorted key                                   $$$$$         Total 
0102-212132015221862BFTTU       DMUBUT   W01258PFF          05120 90104|       639648 
0107-222100011153107BFTTUMPUN   FNFPT     11068FTTM FTFNZDMM05920 50124|       265632
The second capability is to do a "C"ount of the number of records with the same key field. So as above, if the BFTTU name had 10 records in the file you might see an output record indicating the field "C"ounted to 10. This count option is probably the most useful for forensic analysis.
                    Sorted key                                              COUNT  
0102-212132015221862BFTTU       DMUBUT   W01258PFF          05120  90104|      10 
0107-222100011153107BFTTUMPUN   FNFPT     11068FTTM FTFNZDMM05920  50124|      26
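
Since the input is sorted on the key, totals and counts fall out of a single pass. Here is a Python sketch of the counting idea using itertools.groupby; the field positions are invented for illustration and are not total's actual parameters.

    from itertools import groupby

    def count_per_key(in_path, key_pos=20, key_len=9):
        """Count how many records share each value of the sorted key field.
        The file must already be sorted on that field for groupby to work."""
        keyfunc = lambda line: line[key_pos:key_pos + key_len]
        with open(in_path) as src:
            for key, group in groupby(src, key=keyfunc):
                records = list(group)
                # print the first record of the group with its count appended
                print(f"{records[0].rstrip()}  {len(records):6d}")

    # count_per_key("sorted_ip_list.txt", key_pos=0, key_len=15)   # e.g. IP counting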


===========================================================================

Today's Software Special
TOUCHME

If you want to change the MAC dates of a file or files, why not use a *IX type program and touch the file(s) using the TOUCHME program.
The TOUCHME program is modelled after the *IX TOUCH program, which is designed to allow the user to change any of the three file MAC dates. Very simply put, on the command line you tell the program which files to touch, which of the three (or all three) MAC dates to change, and what to change them to.
  touchme -f *.txt *.docx  --touch=MAC!2023-01-01:120000      (or)
  touchme -f *.exe *.xls   --touch=M!2023-01-01
There are many options for refining which files to touch. Such as size, date, path/directory etc. Use your imagination when deciding on the options. Unfortunately at this time, if you put a time on the command line, the program interprets it as local. So if you want GMT you have to adjust accordingly. Testing is always a possibility.
This program makes it easy to set up sample dates to test your forensic software's ability to find and list files.
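
For comparison, here is how the same idea looks in Python with os.utime, which can set the access and last write times. The create date is not something os.utime can set, so this sketch covers only A and M; the date and file pattern are just examples.

    import glob
    import os
    from datetime import datetime, timezone

    def touch_am(pattern, when="2023-01-01 12:00:00"):
        """Set the last-access and last-write times of matching files.
        The timestamp here is interpreted as UTC."""
        dt = datetime.strptime(when, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        stamp = dt.timestamp()
        for path in glob.glob(pattern):
            os.utime(path, (stamp, stamp))    # (atime, mtime)

    # touch_am("*.txt")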

======================================================================


Today's Software Special
UNIQUE

Many times you only need a single record containing the key item, such as filename, date or MD5. So if your records are sorted on a specific key, and you only want a single instance to work with, then you can UNIQUE the data records to produce only a single instance of each record with the sort key.

The unique program was designed to eliminate records with duplicate sort keys. So you can work with, in effect, a smaller universe of data. Then when you perfect your process, you can go back to the larger data file and process the data.
This unique program is especially useful when removing duplicate MD5 values from a data set. Usually, when working with MD5 values, isn't your first instinct to find all the unique MD5's and then do what you want with the list? No need to have a gigantic sorted file of MD5's when you only need a representative single instance for the initial work.
The unique program removes subsequent records with duplicate sort keys. The only requirement is that the input file be fixed length (where have you heard that before?) and that the file be sorted on the key field to unique on. After that, use the resulting single instance data set for whatever.
   Records in  ..\TEST_DATA\MD5.34 =   2,171
   Output file ..\OUTPUTS\UNIQUE_MD5.OUT
   Number of records written:          1,562
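
The dedupe pass is the mirror image of hash_dup: keep the first record of each key, drop the rest. A small Python sketch of that idea (field positions assumed; unique's real options and limits are its own):

    def unique_on_key(in_path, out_path, key_pos=0, key_len=32):
        """Keep only the first record for each value of the sorted key field."""
        written = 0
        with open(in_path) as src, open(out_path, "w") as out:
            prev_key = None
            for line in src:
                key = line[key_pos:key_pos + key_len]
                if key != prev_key:
                    out.write(line)
                    written += 1
                    prev_key = key
        print(f"Number of records written: {written:,}")

    # unique_on_key("../TEST_DATA/MD5.34", "../OUTPUTS/UNIQUE_MD5.OUT")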


==================================================================


Today's Super Software Special
UPCOPY

If you are doing forensic work day to day, you are certainly concerned with accurate and true copies of any evidence or work material you create. When I tested forensic copy software, this was one of the few that passed all my forensic evidence tests. So if you want a truly forensically capable copy program try UPCOPY. You'll like it.

The upcopy program is designed to perform two tasks. The simpler of the tasks is that it can be used as a "sync"ing software package. Meaning that it can sync two trees and copy ONLY those files it needs to copy. Excellent for maintaining and syncing a work tree with a backup tree on your server. So each day you run upcopy to "sync" the work with the backup drive. And only the necessary files are copied.

Its default is to copy ONLY those files which are newer or do not currently exist on the destination. So in your day to day work you may have hundreds of report files, but you have modified only a few; use upcopy to copy only those needed to your backup site, eliminating the time it would take to copy all the items. It only copies what it needs to copy.

Then we get to the forensic copy aspect of the program. For those who can't install a forensic suite at the suspect location, and can only run at the folder/tree/directory level and not at a bit level, but need to copy ALL the files from the suspected evidence tree, this is the program for you. Remember, if you are on a gigundo server and can only copy a single user's tree, what program do you call? Call UPCOPY. It will copy ALL the files, including alternate data streams, hidden, etc. and guess what? It maintains all MAC dates on both the source and the destination. So the next forensicator that attempts to copy the evidence sees original last access dates, not the date you copied the evidence, and your destination copy tree maintains create dates, not the date you write/copy it to your work location/drive. A nice defense explanation when the attorneys ask why the dates are different for two different runs.

=================================================================================


Today's Software Special
URL_SRCH

If you are doing work that may involve URL's, emails, IP's, SSNs, phone numbers, or credit card numbers, you might take a look at URL_SRCH to find and list those items.

For instance, internet investigators might have a pcap file to review. The URL and IP are generally in a text format. This program can find those items and produce a listing of the location within the file and the IP it found. Then you might send that output to the total program to count the number of different IP addresses. If you have an anomalous count, either high or low, that might be a clue. You know, you may want to look at the anomalous IP's for intrusions.

It also can detect with some accuracy phone numbers and email addresses for your investigation. The only restriction is that the items it is looking for be in a text format. The rest of the data can be binary. Then you have to use whatever tools you have to view the rest of the data.
 TYPE | LOCATION  |FILE           |TEXT  field shortened for display                            |
|url  |      3419 | H:\test11.eml |e URL below:..http://paracom.paramountcommunication.com/p/i  |
|url  |      3538 | H:\test11.eml |dor opt-out:..http://paracom.paramountcommunication.com/p/o  |
|url  |      4968 | H:\test11.eml |  =http://paracom.paramountcommunication.com/ct/             |
|ip   |       729 | H:\test10.eml | ceived: from [72.54.140.154] (helo=CPU_02.domain.com)...by  |
|ip   |       945 | H:\test10.eml | 201:47:e-Id: <6.1.2.0.2.20111126064653.02bc1668@mail.domain |
=======================================================================


Today's Software Special
VERTICLE

How many times have you had a few fields in a record that were displayed in a horizontal fashion, ie: name, address, city, state, etc. etc. etc., but you would like to include those fields in the report in a line by line format? Well, the VERTICLE program turns a horizontal delimited record into vertically oriented multiple line records.

Take your spreadsheet exported delimited records and make them ready for your report. However, the record has about 10 fields in it, and trying to print that in a horizontal fashion for easy reading is a kludge. So you run the delimited file thru the verticle program and it turns horizontal data into vertical data that is easily input to a report or printed.

Have a look see. The data below is severely truncated and some fields removed for legibility. But you get the idea.
   File Date:'2014-06-11 07:54|From:.."sales@ammunit.com"|To:acc@domain.com|Subject:Subject:Father's Day Ammo Sale|
Becomes:
   File Date:   '2014-06-11 07:54
   From: (#1:) From:  "sales@ammunit.com"
   To:   (#1:) To: acc@domain.com
   CC:   (#body:) Cc:
   Bcc:
   Date:   (#1:) Date: Wed, 4 Jun 2014 14:29:11 -0400 (EDT)
   Date-Time:   '2014-06-04 14:29:11 -0400 (EDT)
   Forward (Prior Sent) Date(s):
   Subject:   (#1:) Subject: Father's Day Ammo Sale
   Message Read:   (#1:)  YES
   Attachment Info:
   Recent IP:   (#1:) [123.45.123.626]  (#2:) [123.452.60.208]

Which of the above formats would you rather include in your report? I think the answer is simple. But that's me, simple.
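
The transformation is essentially "one field per line". A tiny Python sketch of the idea follows; the labels and sample record are generic placeholders, and the real verticle output shown above carries much more context than this.

    def verticalize(line, labels=None):
        """Turn one pipe delimited record into a block of label: value lines."""
        fields = [f.strip() for f in line.rstrip("|\n").split("|")]
        labels = labels or [f"Field {i + 1}" for i in range(len(fields))]
        return "\n".join(f"{label}:   {value}" for label, value in zip(labels, fields))

    rec = "Dan Mares | 123 anystreet | nowhere | USA | 10-01-2020 |"
    print(verticalize(rec, labels=["Name", "Street", "City", "Country", "Date"]))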

=======================================================================