
EC2 DFIR Workshop

Module Overview: File System Forensics - Part 3

Persistence – cron Jobs

To survive a reboot, malware must use a persistence mechanism. The two most common methods are cron jobs and start-up scripts.

The SANS Linux Intrusion Discovery Cheat Sheet provides the following two suggestions for looking at system-wide cron jobs:
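In essence (consult the cheat sheet itself for its exact wording), they amount to reviewing the system-wide crontab and the cron directories:

    cat /etc/crontab    # the system-wide crontab
    ls /etc/cron.*      # cron.d plus the cron.hourly/daily/weekly/monthly directories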

Be sure to check the cron jobs for all users.
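On Amazon Linux and other Red Hat variants, per-user crontabs are stored under /var/spool/cron. A minimal sketch of that check on a live system:

    ls -l /var/spool/cron/    # one file per user who has a crontab
    for u in $(cut -d: -f1 /etc/passwd); do crontab -u "$u" -l 2>/dev/null; done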

Persistence – Startup Scripts

Some malware will make use of the start-up scripts that Linux runs at boot time when entering a specific run level.

On some Linux distributions, these are found in /etc/init.d, but on Amazon Linux and Red Hat variants, the scripts will be in /etc/rc*.d.

Cron jobs and start-up scripts may have innocuous names and may call other scripts, so an investigator sometimes has to dig into the details to determine their true nature.
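A quick way to enumerate those scripts, assuming the evidence volume is mounted at /mnt/evidence (a hypothetical mount point used in the examples throughout this module):

    ls -l /mnt/evidence/etc/rc*.d/    # S* links start services at that run level; the listing shows each link's target script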

Suspicious Files

There are certain locations and characteristics to look for when performing a manual search for suspicious files.

Examples:

SUID

SUID = Set owner User ID

Normal programs execute with the permissions of the logged-in user who launched them

SUID causes a program to run with the permissions of its owner, temporarily granting those permissions to whoever executes it

SUID can be used to grant a program permission to run as root from lower-privileged accounts and can be a sneaky way to maintain root access

In Linux, certain executables (passwd and sudo, for example) legitimately need SUID, so it is helpful to compare findings against a baseline volume
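A common manual search for SUID (and the related SGID) files on a mounted evidence volume:

    find /mnt/evidence -type f -perm -4000 -ls    # SUID files
    find /mnt/evidence -type f -perm -2000 -ls    # SGID files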

High Entropy Files

densityscout

Image 18 shows the densityscout utility on the SIFT Workstation being run against four different files created in the screenshot:

IMAGE 18: The densityscout Utility in Action

The important lesson is to make sure that any limitations of a tool are known, and that you understand exactly what the tool is telling you.
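As a sketch of typical usage (the flags shown here reflect common write-ups of the tool and should be verified against densityscout's built-in help):

    densityscout -r -p 0.1 -o results.txt /mnt/evidence/usr/bin
    # -r recurses; -p 0.1 prints only files at or below that density, the likely packed
    # or encrypted candidates; -o writes the results to a file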

System Logs & Configuration Files

Ideally, the critical logs were offloaded to a central repository such as Splunk or an S3 bucket.

Regardless, examining the logs on the EBS volume is an important activity: system logs are a valuable source of information about the state of the system before and after the attack, and they may provide answers to how and when the system was compromised.

Logs & Files with Forensic Relevance

About bash History

Although the bash history is not a robust audit log, it still provides information that may be of interest to the examiner.

For example, beyond its intended purpose of capturing the commands recently entered by a user, it can reveal the skill level and stylistic proclivities unique to a particular hacker.

A user can modify the bash history associated with his or her account but will rarely do so unless trying to cover his or her tracks.

Because there are multiple ways to avoid bash history logging, it should not be considered a security control. Any indication that the bash history has been altered or evaded is noteworthy.
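On a mounted volume, the histories for root and every home directory can be gathered in one pass (again using the hypothetical /mnt/evidence mount point):

    find /mnt/evidence/root /mnt/evidence/home -maxdepth 2 -name '.bash_history' | xargs wc -l
    # an unexpectedly empty or truncated history can itself be an indication of track-covering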

Other System Logs

The default location for most system logs is /var/log, and they should be perused to identify information relevant to the specific investigation. The /var/log/secure file contains details about user account changes, ssh connections, sudo activity, and su sessions.
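For example, ssh, sudo, and su events can be pulled from a mounted volume with a simple grep:

    grep -E 'sshd|sudo|su:' /mnt/evidence/var/log/secure*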

By default, Amazon Linux has the auditd service running, and the considerable amount of valuable information that it has logged will be found in /var/log/audit/*.
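The audit tools can read those files directly rather than querying the live daemon; a sketch using their --input (-if) option:

    aureport -if /mnt/evidence/var/log/audit/audit.log --summary      # high-level summary of recorded events
    ausearch -if /mnt/evidence/var/log/audit/audit.log -m USER_LOGIN  # e.g., search for login records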

Timeline Analysis

A timeline is an indispensable tool that can unify the investigation effort. The process of creating it can help identify gaps and missing information. A summarized timeline can help to effectively communicate the sequence of events to management.

Two Types of Timelines

File System Timelines

Two tools from the Sleuth Kit are used:

The fls command walks the file system in an image or on a device and writes the name and MACB metadata of every file to a body file

The mactime command rearranges this metadata in temporal order, creating multiple records when the MACB timestamps on a given file are different

MACB Timestamps:

M – the last time the file's content was Modified
A – the last time the file was Accessed
C – the last time the file's metadata (its inode) was Changed
B – the file's Birth (creation) time, where the file system records one

DANGER: Make sure that you fully understand what the MACB timestamps are telling you! See this article to learn more about the nuances of Linux timestamps.
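A minimal sketch of the fls/mactime workflow, assuming the evidence volume is attached as /dev/xvdf1 (a hypothetical device name):

    fls -r -m / /dev/xvdf1 > bodyfile.txt        # -r recurses; -m prefixes paths with the original mount point
    mactime -b bodyfile.txt -d > timeline.csv    # -d emits comma-delimited output in temporal order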

Super Timelines

Super Timelines are created and manipulated by the Plaso suite of tools, which includes:

log2timeline.py – extracts events from an image and its artifacts into a plaso storage file

psort.py – filters, sorts, and outputs the events in that storage file

The psort.py tool is used to create a CSV from the plaso file; however, an unfiltered CSV file is often too large to analyze with Microsoft Excel. Therefore, the standard process is to create a CSV bounded by two dates that encompass the events of interest to the investigation.
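A sketch of that process, reusing the hypothetical /mnt/evidence mount point and an illustrative date window:

    log2timeline.py --storage-file timeline.plaso /mnt/evidence
    psort.py -o l2tcsv -w filtered.csv timeline.plaso "date > '2019-01-01 00:00:00' AND date < '2019-01-31 00:00:00'"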