EC2 DFIR Workshop
Lab 17 – Perform a Timeline Analysis
GOAL:
Create a timeline to determine the sequence of events and to identify missing information.
SUMMARY OF STEPS:
- Make a File System Timeline
- Make a Super Timeline
Step 1: Make a File System Timeline
First, create the body file (a pipe-delimited intermediate listing of file system metadata) from the evidence volume:
fls -r -m / /dev/xvdf1 > /cases/body.txt
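To sanity-check the output, peek at the first few records; each line of a body file is one file's metadata fields separated by pipes:
head -5 /cases/body.txt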
Then, convert the body file into a CSV timeline with mactime:
mactime -b /cases/body.txt -d > /cases/timeline.csv
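If you already know the window of interest, mactime accepts an optional date range argument that keeps the CSV small. A sketch, reusing the same window that Step 2 queries with psort:
mactime -b /cases/body.txt -d 2019-03-12..2019-03-20 > /cases/timeline.csv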
Read the man pages for fls and mactime to understand the options used above.
For a quick look at the timeline with interesting files highlighted, use the following command:
grep " 2019 " /cases/timeline.csv | grep -C10 -f interesting.txt \
--color=always | less -R
NOTE: For this to work, you must first create the interesting.txt file with one IOC or search term per line. Make sure the file has no blank lines; an empty pattern matches every line of the timeline and defeats the filtering.
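A minimal sketch of such a file; the indicator strings below are placeholders, not IOCs from this case, and the heredoc form avoids a trailing blank line:
# Placeholder indicators only -- substitute the IOCs from your investigation
cat > interesting.txt << 'EOF'
xmrig
authorized_keys
wget
EOF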
Copy timeline.csv to the evidence S3 bucket:
aws s3 cp /cases/timeline.csv s3://[YOUR-UNIQUE-BUCKET]
Download the CSV and analyze it in Excel.
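For example, to pull it down to your workstation (assuming your local credentials can read the bucket):
aws s3 cp s3://[YOUR-UNIQUE-BUCKET]/timeline.csv .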
VIDEO: Lab 17 Step 1 - Make a File System Timeline
Step 2: Make a Super Timeline
First, create the plaso storage file in the background with nohup, so the job keeps running if the terminal disconnects:
nohup log2timeline.py /cases/plaso.dump /dev/xvdf1 &
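NOTE: Newer plaso releases pass the storage file with a flag instead of positionally; if the command above is rejected, this version-dependent variant may work:
nohup log2timeline.py --storage-file /cases/plaso.dump /dev/xvdf1 &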
Monitor the progress if desired:
tail -f nohup.out
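To confirm the collection job is still running:
pgrep -af log2timeline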
Look at the plaso storage file:
pinfo.py -v /cases/plaso.dump | less
Generate a CSV file for the date range of interest:
psort.py /cases/plaso.dump "date > '2019-03-12 00:00:00' AND \
date < '2019-03-20 00:00:00'" -w /cases/supertimeline.csv
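psort.py supports several output modules (list them with psort.py -o list). For example, the l2tcsv module writes the classic fixed-column log2timeline CSV layout, which can be easier to sort in a spreadsheet; a sketch using the same filter:
psort.py -o l2tcsv -w /cases/supertimeline.csv /cases/plaso.dump \
"date > '2019-03-12 00:00:00' AND date < '2019-03-20 00:00:00'"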
For a quick look at the timeline with interesting files highlighted, use the following command:
grep -C10 -f interesting.txt /cases/supertimeline.csv \
--color=always | less -R
Copy the file to your evidence S3 bucket and analyze it in Excel:
aws s3 cp /cases/supertimeline.csv s3://[YOUR-UNIQUE-BUCKET]
VIDEO: Lab 17 Step 2 - Make a Super Timeline