Plaso

How to process an image using log2timeline/plaso

Verify E01 image using ewfverify

Since the other Windows-based processes include image verification, we'll use ewfverify here to verify our E01 set.

$ sudo apt install ewf-tools
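To confirm the tools installed correctly, the libewf utilities print a version banner. This check is optional and assumes ewf-tools put ewfverify on your PATH:

$ ewfverify -V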

user@df:~/cases/26038642$ ls -lah
total 11G
drwxrwxr-x 2 user user 4.0K Jul  7 11:37 .
drwxrwxr-x 3 user user 4.0K Jul  7 11:36 ..
-rw-rw-r-- 1 user user 1.5G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E01
-rw-rw-r-- 1 user user 1.5G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E02
-rw-rw-r-- 1 user user 1.5G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E03
-rw-rw-r-- 1 user user 1.5G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E04
-rw-rw-r-- 1 user user 1.5G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E05
-rw-rw-r-- 1 user user 1.5G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E06
-rw-rw-r-- 1 user user 1.4G Jul  5 12:04 20240212-decrypted-Windows_Server_2022.E07

You only have to specify the first of the E01 segments to verify the entire spanned set.

user@df:~/cases/26038642$ ewfverify 20240212-decrypted-Windows_Server_2022.E01
ewfverify 20140807

Verify started at: Jul 07, 2024 12:11:05
This could take a while

Verify completed at: Jul 07, 2024 12:17:48

Read: 50 GiB (53687091200 bytes) in 6 minute(s) and 43 second(s) with 127 MiB/s (133218588 bytes/second).

MD5 hash stored in file:                9a982399621826a66ff322cc87376e76
MD5 hash calculated over data:          9a982399621826a66ff322cc87376e76

ewfverify: SUCCESS

We can see that the data set verified successfully. The hash also matches the result of FTK Imager's verification process.
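If your acquisition notes also recorded SHA-1 or SHA-256 hashes, ewfverify can calculate additional digests alongside the MD5 with its -d option; run ewfverify -h to confirm which digest types your libewf build supports. For example:

$ ewfverify -d sha256 20240212-decrypted-Windows_Server_2022.E01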

Set up and use log2timeline/plaso in Docker

I try to use Docker containers where I can. It helps me separate case files and reduce errors caused by software/version dependencies. You can use plaso on its own, but in this guide we'll use it in a Docker container.

Follow these steps to pull the plaso image and verify that it's working.

user@df:~/cases/26038642$ docker pull log2timeline/plaso
Using default tag: latest
latest: Pulling from log2timeline/plaso
Digest: sha256:246c53b0c9459f60685b30cb9f749da3fa21159ba2b04c2e6c753150a0850258
Status: Image is up to date for log2timeline/plaso:latest
docker.io/log2timeline/plaso:latest

user@df:~/cases/26038642$ docker run log2timeline/plaso log2timeline.py --version
plaso - log2timeline version 20240308
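If you want reproducible runs across a case, consider pinning a specific release rather than latest. The plaso images appear to be tagged by release date, but check Docker Hub for the tags that actually exist; a sketch assuming a 20240308 tag:

docker pull log2timeline/plaso:20240308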

One consideration with using Docker is accessing data stored on the host from within the container itself. I want to generate a super timeline (all of the parsers) for the entire E01. Take the following command as an example (it's a single line, just wrapped here):

docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso log2timeline.py --storage-file /data/26038642.plaso /data/20240212-decrypted-Windows_Server_2022.E01

Let's break this command down:

docker run (we're invoking Docker)

-v /home/user/cases/26038642/:/data/ (we're mapping the folder /home/user/cases/26038642 on the host, to /data/ in the container)

log2timeline/plaso (the container we want to run)

log2timeline.py --storage-file /data/26038642.plaso (the tool we're running, log2timeline.py, along with the name and location of the plaso storage file our output will be written to)

/data/20240212-decrypted-Windows_Server_2022.E01 (our data source; for a spanned E01 set, this can be the first segment)
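With the pieces broken down, one small tweak makes the command portable between case folders: mount the current working directory instead of a hard-coded path. This is just a sketch, assuming you run it from the case directory (the file names are the ones used in this case):

docker run -v "$(pwd)":/data/ log2timeline/plaso log2timeline.py --storage-file /data/26038642.plaso /data/20240212-decrypted-Windows_Server_2022.E01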

If you execute this, you may see the following error:

Unable to proceed. More than one partitions found but no mediator to determine how they should be used.

As the error suggests, there is more than one partition in the image. We can either append --partitions all and process every partition, or specify a particular partition.
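Processing everything only needs the extra flag appended to the same command:

docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso log2timeline.py --storage-file /data/26038642.plaso /data/20240212-decrypted-Windows_Server_2022.E01 --partitions all

To target a single partition instead, let's quickly identify which one we want to inspect.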

user@df:~/cases/26038642$ mkdir /mnt/cases
user@df:~/cases/26038642$ ewfmount 20240212-decrypted-Windows_Server_2022.E01 /mnt/cases
ewfmount 20140807

user@df:~/cases/26038642$ ls /mnt/cases
ewf1

user@df:~/cases/26038642$ disktype /mnt/cases/ewf1

--- /mnt/cases/ewf1
Regular file, size 50 GiB (53687091200 bytes)
DOS/MBR partition map
Partition 1: 100 MiB (104857600 bytes, 204800 sectors from 2048, bootable)
  Type 0x07 (HPFS/NTFS)
  NTFS file system
    Volume size 100.0 MiB (104857088 bytes, 204799 sectors)
Partition 2: 49.29 GiB (52928970752 bytes, 103376896 sectors from 206848)
  Type 0x07 (HPFS/NTFS)
  NTFS file system
    Volume size 49.29 GiB (52928970240 bytes, 103376895 sectors)
Partition 3: 620 MiB (650117120 bytes, 1269760 sectors from 103583744)
  Type 0x27 (Unknown)
  NTFS file system
    Volume size 620.0 MiB (650116608 bytes, 1269759 sectors)
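Partition 2, the large NTFS volume, is the one we want. Once that's noted, the FUSE mount can be released (optional, and assumes nothing else is using the mount point):

user@df:~/cases/26038642$ fusermount -u /mnt/cases

(sudo umount /mnt/cases also works if you mounted as root.)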

So our complete command becomes:

docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso log2timeline.py --storage-file /data/26038642.plaso /data/20240212-decrypted-Windows_Server_2022.E01 --partitions 2

It should start processing your image. The amount of time it takes to process is entirely dependent on the specs of your machine.
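If processing drags, log2timeline can run more worker processes in parallel. The flag below is from memory, so confirm it (and a sensible count for your CPU) with log2timeline.py --help:

docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso log2timeline.py --workers 8 --storage-file /data/26038642.plaso /data/20240212-decrypted-Windows_Server_2022.E01 --partitions 2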

The resulting file is a 6.3 GB SQLite database:

user@df:~/cases/26038642$ ls -lah
6.3G Jul  7 12:51 26038642.plaso

user@df:~/cases/26038642$ file 26038642.plaso
26038642.plaso: SQLite 3.x database, last written using SQLite version 3037002, file counter 5, database pages 1635692, cookie 0x14, schema 4, UTF-8, version-valid-for 5
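The same container includes pinfo.py, which summarises a plaso storage file (parsers used, event counts, warnings) and makes a useful sanity check before running psort:

docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso pinfo.py /data/26038642.plaso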

Before we convert this SQLite database to a CSV, we need to check the victim system's local time zone so we know whether to adjust the output time offset. Export the HKLM\SYSTEM hive from the E01 (using FTK Imager or similar) and inspect the following value:

SYSTEM\CurrentControlSet\Control\TimeZoneInformation\TimeZoneKeyName

In this case, the value is 'GMT Standard Time', which is UTC+0, so we don't need to adjust the time offset when we generate our output.
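Had the value pointed at a non-UTC time zone, psort can convert timestamps as it writes the output. The flag name here is from memory (older releases used an underscore form), so confirm it with psort.py --help; a sketch using a placeholder IANA time zone:

docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso psort.py /data/26038642.plaso -o dynamic -w /data/26038642.csv --output-time-zone "Europe/Berlin"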

We can run the following command to generate a CSV of all events, using the dynamic output format.

user@df:~/cases/26038642$ docker run -v /home/user/cases/26038642/:/data/ log2timeline/plaso psort.py /data/26038642.plaso -o dynamic -w /data/26038642.csv
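Once psort finishes, a quick look at the CSV confirms the export worked (the dynamic format writes a header row as the first line):

user@df:~/cases/26038642$ head -n 3 26038642.csv
user@df:~/cases/26038642$ wc -l 26038642.csv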
