NTFS $UsnJrnl Parser
This script parses USN_RECORD_V2 change-journal records contained in the $J data stream of the NTFS $UsnJrnl file. It can also search for, and decode, USN_RECORD_V2 records in $LogFile and unallocated clusters.
Change-journal records contain information about changes made to files and folders contained within the associated volume's NT file system.
The examiner can choose to process all, tagged, or selected $UsnJrnl·$J, $LogFile, and unallocated cluster objects. Even if everything is selected, the script will only process those objects that are named $UsnJrnl·$J, $LogFile, or those that are marked as unallocated.
The examiner can opt to parse each $UsnJrnl·$J file in a way that will skip any sparse regions. These contain nothing but null bytes, so excluding them will save some time. Note that this option will have no effect when parsing $UsnJrnl·$J files that are contained within logical evidence files.
The script finds USN_RECORD_V2 records in unallocated clusters and $LogFile using the following GREP/Unicode keyword:
This assumes that the version of each record is 2.0, that the length of each file name cannot be greater than 510 bytes (the maximum supported by NTFS is 255 characters at 2 bytes per character), and that the offset to the file name will always be 60 bytes.
In addition, the script validates each search hit by checking that the length of each record is greater than 60 bytes but less than or equal to 576 bytes (records are padded to an 8-byte boundary), and that the reason, source-information, and attributes fields don't contain any reserved values.
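The record layout and sanity checks described above can be sketched as follows. This is a Python illustration, not the script's actual EnScript; the field offsets are those of the documented USN_RECORD_V2 structure, but the reserved-bit checks on the reason, source-information, and attributes fields are omitted for brevity:

```python
import struct

def parse_usn_record_v2(buf, offset=0):
    """Parse one USN_RECORD_V2 at buf[offset:]; return a dict, or None if
    the record fails the same sanity checks the script applies."""
    (rec_len, major, minor, file_ref, parent_ref, usn, timestamp,
     reason, source_info, security_id, attrs, name_len, name_off) = \
        struct.unpack_from("<IHHQQQQIIIIHH", buf, offset)
    # Validation mirroring the script: version 2.0; 60 < length <= 576;
    # 8-byte alignment; file name at offset 60 and no longer than 510 bytes.
    if (major, minor) != (2, 0):
        return None
    if not (60 < rec_len <= 576) or rec_len % 8 != 0:
        return None
    if name_off != 60 or name_len > 510:
        return None
    name = buf[offset + name_off : offset + name_off + name_len].decode("utf-16-le")
    return {
        "usn": usn,
        "mft_record": file_ref & 0xFFFFFFFFFFFF,    # low 48 bits
        "mft_sequence": file_ref >> 48,             # high 16 bits
        "parent_record": parent_ref & 0xFFFFFFFFFFFF,
        "timestamp_filetime": timestamp,
        "reason": reason,
        "filename": name,
    }
```

The 48/16-bit split of the file-reference number is what gives the script the MFT record number and sequence number it uses for filtering.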
When it comes to reporting the results, the examiner can use the script's built-in filtering functionality to output only those $UsnJrnl records that match specified criteria.
The most common use for this will be to identify records that relate to files/folders with a specific name, in which case the 'TargetFilename' property should be used.
As a time-saving option, the script can populate the filter with terms matching the entries (files or folders) blue-checked in the current view.
This makes it very easy to identify $UsnJrnl·$J records relating to files and folders of interest; it can be expanded so as to include records that reference blue-checked folders as parents of other files and sub-folders.
If this option is selected, the script will create a condition-folder called 'Selected_Entries' to which it will add a sub-folder for each selected entry. Any existing 'Selected_Entries' folder will be deleted.
Each entry's sub-folder will contain a term matching its MFT record-number; also, one matching its MFT record sequence-number. Additional terms will be added if the entry is a folder and the appropriate option has been selected.
By default, the script will decrement the MFT record sequence-number of deleted files/folders by 1. This is to account for the fact that their sequence number will have been incremented by one upon deletion.
Although the examiner can override this behaviour, doing so is not recommended: any of the file/folder's $UsnJrnl·$J records will contain the sequence number as it was prior to deletion.
It should be noted that the script does not check whether the entries blue-checked in the current view are located on the same volume(s) as the objects to be processed.
When it comes to filtering $UsnJrnl·$J records based on the reasons contained therein, the latter are stored as a numeric value such that each reason is represented by a single bit. The reasons are defined in winioctl.h; they are also documented at the following URL:
Because reasons can be combined, validating an individual reason can only be performed reliably by using the bitwise AND operator (&), which is not natively supported by EnCase conditions.
It is possible to overcome this by specifying another operator (the 'equal to' [==] operator, for example) and then changing it to '&' using the 'Edit Source Code' option.
However, this is not straightforward, not least because the examiner may wish to filter records containing more than one reason.
Accordingly, the script provides an option allowing the examiner to specify the reason(s) he/she is interested in. The script will then generate the required custom terms automatically, placing them in a condition-folder called 'Reasons'. Any existing folder of the same name will be deleted.
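The test that the generated condition terms perform can be sketched as follows. The flag values shown are the documented winioctl.h USN reason constants (a small selection); the function name and record values are illustrative only:

```python
# Selected USN reason flags from winioctl.h (each reason is a single bit).
USN_REASON_FILE_CREATE     = 0x00000100
USN_REASON_FILE_DELETE     = 0x00000200
USN_REASON_RENAME_OLD_NAME = 0x00001000
USN_REASON_RENAME_NEW_NAME = 0x00002000
USN_REASON_CLOSE           = 0x80000000

def matches_reasons(record_reason, wanted):
    """True if the record's combined reason value contains ANY of the
    reason bits the examiner asked for (bitwise AND, not equality)."""
    return (record_reason & wanted) != 0
```

An equality test would fail here: a record whose reason field is FILE_DELETE | CLOSE is not 'equal to' FILE_DELETE, but it does match it under bitwise AND, which is why the script must edit the condition's source code to use '&'.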
The script can, if requested, attempt to identify the path of the parent of each entry referred to in a record. This will not be possible if the host volume's Master File Table is unavailable or if the parent has been deleted or is also unavailable (such as where $UsnJrnl·$J has been captured as a single file or as part of a logical evidence file but the parent volume wasn't). Note that taking this option will extend processing time considerably.
The option to deduplicate records using USN values is designed primarily to aid the examiner when $LogFile and unallocated clusters are being searched in addition to $UsnJrnl·$J files. It's highly likely that duplicate records will be found in this case.
A USN is simply the offset of a record in the $UsnJrnl·$J file. Taking this into account, the examiner should be aware that USN values are not unique across different volumes. Also, duplicate USNs are likely to occur if change-logging on a volume has been disabled and then re-enabled.
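Given that USNs only identify records within a single volume's $J, a deduplication pass along these lines would need to key on the volume as well as the USN. This is a minimal sketch; the record dictionaries and their "volume"/"usn" keys are illustrative, not the script's internal representation:

```python
def dedupe_records(records):
    """Keep the first record seen for each (volume, USN) pair.

    A USN is just the record's offset within that volume's $J, so the
    same USN can legitimately recur on a different volume (or after
    change-logging has been disabled and re-enabled).
    """
    seen = set()
    unique = []
    for rec in records:
        key = (rec["volume"], rec["usn"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```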
The examiner should note the following with regards to $LogFile processing.
Firstly, the script will account for the update sequence array in each $LogFile record automatically.
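The update-sequence-array handling mentioned above can be sketched as follows. This assumes 512-byte sectors and the standard NTFS multi-sector-transfer header, in which the words at offsets 0x04 and 0x06 give the offset and length of the update sequence array; the last two bytes of every sector hold the update sequence number and must be restored from the array before the record is parsed:

```python
import struct

SECTOR = 512  # assumed sector size

def apply_usa_fixups(page):
    """Undo NTFS multi-sector-transfer protection on one $LogFile page.

    The array's first word is the update sequence number; the remaining
    words are the original end-of-sector bytes that it displaced.
    """
    page = bytearray(page)
    usa_off, usa_count = struct.unpack_from("<HH", page, 4)
    usn = page[usa_off:usa_off + 2]
    for i in range(1, usa_count):
        sector_end = i * SECTOR
        # Sanity check: the on-disk bytes must equal the USN, or the
        # page was only partially written (a torn write).
        if page[sector_end - 2:sector_end] != usn:
            raise ValueError("update sequence mismatch at sector %d" % i)
        orig = page[usa_off + 2 * i : usa_off + 2 * i + 2]
        page[sector_end - 2:sector_end] = orig
    return bytes(page)
```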
Secondly, parsing $LogFile in isolation is not recommended as in most cases any change-log records contained therein will also be found in $UsnJrnl·$J or unallocated clusters. $LogFile is most likely to contain unique change-log records when the system hasn't been shut down properly.
Output is in the form of bookmarks and a tab-delimited spreadsheet file. Note that a CSV file extension is used because programs such as Microsoft Excel do not recognise the TSV file extension. The console and status-bar can be used to monitor processing.
The user has the ability to specify how the content of the 'Reason(s)' field is delimited. Using a newline delimiter makes reading the contents of this field easier but requires the script to delimit text fields using double-quotes, which aren't supported by every application into which the examiner may wish to import the data (see below). The examiner can therefore opt to delimit the contents of this field using spaces, in which case the script won't use double-quotes.
It's important to bear in mind that, taken together, $UsnJrnl·$J, $LogFile, and unallocated-cluster objects on Windows systems starting with Vista are likely to contain several hundred thousand records. These can take substantial time to process especially if the user has taken the option to resolve parent-entries. Unless the examiner is absolutely certain of the entries he/she needs to examine, the best option is probably to process each file-system object in its entirety and then filter the TSV output file using a spreadsheet or database program.
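Post-processing the script's tab-delimited output outside EnCase could look like the following sketch. The 'TargetFilename' column name is taken from the filtering discussion earlier; the exact header row of the output file may differ, so adjust the key accordingly:

```python
import csv

def filter_usnjrnl_tsv(path, name_fragment):
    """Yield rows from the script's tab-delimited output file whose
    'TargetFilename' column contains name_fragment (case-insensitive).
    The column name is an assumption; match it to the real header."""
    with open(path, newline="", encoding="utf-8") as fh:
        reader = csv.DictReader(fh, delimiter="\t")
        for row in reader:
            if name_fragment.lower() in row["TargetFilename"].lower():
                yield row
```

Streaming the file row by row like this avoids loading several hundred thousand records into memory at once.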
Note that Excel 2010 and Excel 2013 are limited to 1,048,576 rows, so the examiner is probably better off importing the data into a database application such as MS Access or SQLite Expert Professional and examining it there. Note that the former supports double-quoted text fields; the latter does not.