Velociraptor DFIR: Endpoint Forensics and Incident Response at Scale

PROJECT · Intermediate

Deploy Velociraptor — the open-source DFIR platform — to collect forensic artifacts, run live endpoint hunts with VQL, and build an incident response capability across your entire fleet in hours.

Dylan H.

Projects

April 8, 2026
11 min read
3-5 hours

Tools & Technologies

Velociraptor · Docker · Docker Compose · curl · PowerShell · Bash


When an incident happens, the first question is always the same: what actually occurred on that endpoint? Traditional SIEMs show you log events, but they cannot tell you which processes were injecting into lsass.exe at 02:14 or which files were touched in the 60 seconds before a ransomware payload detonated. Velociraptor can.

Velociraptor is a fast, open-source DFIR platform built around VQL (Velociraptor Query Language) — a SQL-like language purpose-built for endpoint forensics. A single VQL hunt can query every Windows registry hive, every prefetch file, every running process across thousands of endpoints in minutes, with results streamed back to a central server.

By the end of this project you will have a production-grade Velociraptor server running in Docker, clients enrolled on Windows and Linux endpoints, and a library of VQL hunts ready to deploy the moment an alert fires.


Project Overview

What We're Building

┌──────────────────────────────────────────────────────────────────┐
│                  Velociraptor DFIR Architecture                   │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│   Analyst Browser ──► Velociraptor Server (Web UI / API)         │
│                               │                                  │
│                    mTLS gRPC (port 8000)                          │
│                               │                                  │
│        ┌──────────────────────┼──────────────────────┐           │
│        ▼                      ▼                      ▼           │
│  Windows Client         Linux Client         macOS Client        │
│  (velociraptor.exe)     (velociraptor)       (velociraptor)      │
│        │                      │                      │           │
│   Collect artifacts      Collect artifacts    Collect artifacts  │
│   Run VQL queries        Run VQL queries      Run VQL queries    │
│                                                                  │
│   Results ──► Server Datastore (SQLite / filesystem)             │
│   Hunts  ──► All clients simultaneously                          │
│   Notebooks ──► Collaborative investigation workspace            │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

What You'll Be Able to Do

  • Live response: Query any endpoint in real time — running processes, open network connections, logged-in users, recently modified files
  • Forensic collection: Pull MFT, registry hives, event logs, prefetch, shimcache, amcache, browser history, and 200+ other artifact types
  • Fleet-wide hunting: Deploy a VQL hunt to every enrolled endpoint simultaneously and see results in under 60 seconds
  • Timeline reconstruction: Correlate filesystem timestamps, event log entries, and process execution history
  • Offline triage: Package the client as a standalone collector for air-gapped systems

Prerequisites

  • Docker 24+ and Docker Compose v2 on the server host
  • At least one Windows or Linux endpoint you can install an agent on
  • DNS or static IP for the Velociraptor server
  • Ports 8000 (client comms) and 8889 (web UI) reachable on the server host

Part 1: Deploy the Velociraptor Server

Step 1: Create the Project Directory

mkdir -p ~/velociraptor/{config,filestore}
cd ~/velociraptor

Step 2: Generate the Server Configuration

Velociraptor's configuration is self-bootstrapping. The config generate command creates a full PKI and writes both a server config and a client config.

# Pull the Velociraptor image
docker pull wlambert/velociraptor:latest
 
# Generate an interactive config (answers below)
docker run --rm -it \
  -v "$(pwd)/config:/config" \
  wlambert/velociraptor:latest \
  config generate -i

Interactive prompts and recommended answers:

? What OS will the server be running?           linux
? Path to the datastore directory?              /velociraptor
? The public DNS name or IP of the server:      velociraptor.homelab.local
? Enter the frontend port to listen on:         8000
? Enter the port for the admin GUI:             8889
? Use self-signed SSL certificates?             yes
? Name of the output file for the server config: /config/server.config.yaml
? Name of the output file for the client config: /config/client.config.yaml

Inspect what was generated:

ls ~/velociraptor/config/
# server.config.yaml   client.config.yaml

Step 3: Docker Compose Stack

# ~/velociraptor/docker-compose.yml
version: "3.9"
 
services:
  velociraptor:
    image: wlambert/velociraptor:latest
    container_name: velociraptor
    restart: unless-stopped
    command: frontend -v
    ports:
      - "8000:8000"   # client frontend (mTLS gRPC)
      - "8889:8889"   # web UI (HTTPS)
      - "8001:8001"   # monitoring / health (optional)
    volumes:
      - ./config/server.config.yaml:/etc/velociraptor/server.config.yaml:ro
      - ./filestore:/velociraptor
    environment:
      - VELOCIRAPTOR_CONFIG=/etc/velociraptor/server.config.yaml
    healthcheck:
      test: ["CMD", "curl", "-sk", "https://localhost:8889/app/index.html"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s

Bring the stack up and follow the logs:

docker compose up -d
docker compose logs -f velociraptor

Look for: Listening on ... :8000 and Listening on ... :8889.

Step 4: Create the First Admin User

docker exec -it velociraptor \
  velociraptor --config /etc/velociraptor/server.config.yaml \
  user add admin --role administrator
# Enter a strong password when prompted

Open https://velociraptor.homelab.local:8889 in your browser. Accept the self-signed cert warning and log in with admin.


Part 2: Enroll Endpoints

Step 5: Build the Windows Client MSI

From the server host, generate a repackaged client that embeds your server config:

docker exec -it velociraptor \
  velociraptor config repack \
  --exe /usr/local/bin/velociraptor_windows_amd64.exe \
  /config/client.config.yaml \
  /config/velociraptor_client_windows.exe

This produces a single .exe that contains the embedded client config and connects directly to your server — no separate config file needed on the endpoint.

Copy the binary to your Windows machine and install it as a service:

# On Windows — run as Administrator
.\velociraptor_client_windows.exe service install
 
# Verify the service is running
Get-Service velociraptor
# Status should be "Running"
 
# Check connection in Windows Event Log or:
.\velociraptor_client_windows.exe query "SELECT * FROM info()"

Step 6: Install the Linux Client

# On the server host: extract the client binary from the container
docker cp velociraptor:/usr/local/bin/velociraptor ./velociraptor-client
scp ./velociraptor-client ~/velociraptor/config/client.config.yaml user@linux-endpoint:~/
 
# On the Linux endpoint
chmod +x ~/velociraptor-client
sudo mv ~/velociraptor-client /usr/local/bin/velociraptor
sudo mkdir -p /etc/velociraptor
sudo mv ~/client.config.yaml /etc/velociraptor/client.config.yaml
 
# Install as a systemd service (Linux builds lack a "service install"
# subcommand, so create the unit directly)
sudo tee /etc/systemd/system/velociraptor_client.service >/dev/null <<'EOF'
[Unit]
Description=Velociraptor client
After=network.target

[Service]
ExecStart=/usr/local/bin/velociraptor --config /etc/velociraptor/client.config.yaml client
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now velociraptor_client

Step 7: Verify Enrollment

Back in the Velociraptor web UI, go to View Clients. Enrolled endpoints should appear within about 30 seconds of the client service starting. Each client shows:

  • Hostname and OS version
  • Last seen timestamp
  • Client ID (e.g. C.1a2b3c4d5e6f7890)
  • Assigned labels

You can search clients by hostname, IP, OS, or custom label.
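The same search is available in VQL, which is handy in notebooks or the server CLI. A sketch using the clients() plugin (the search values below are placeholders):

```sql
-- Server-side VQL: find clients by hostname prefix or label
SELECT client_id, os_info.hostname AS Host,
       os_info.system AS OS, last_seen_at
FROM clients(search="host:workstation")   -- or search="label:finance"
```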


Part 3: VQL Fundamentals

Velociraptor Query Language is the core of everything. It looks like SQL but targets endpoint forensics artifacts.

Basic VQL Syntax

-- List all running processes
SELECT Pid, Name, Exe, CommandLine, Username
FROM pslist()
ORDER BY Name
 
-- Find processes with no parent
SELECT Pid, Name, Ppid, Exe
FROM pslist()
WHERE Ppid = 0 OR Ppid = 4
 
-- Recent files in Temp directories
SELECT FullPath, Mtime, Atime, Size
FROM glob(globs="C:/Users/*/AppData/Local/Temp/**")
WHERE Mtime > now() - 3600   -- last hour
 
-- Network connections
SELECT Pid, Laddr, Raddr, Status, Name
FROM netstat()
WHERE Status = "ESTABLISHED"

Running a Query on a Single Client

In the web UI:

  1. Go to View Clients → select an endpoint
  2. Click Shell → select VQL
  3. Paste a query and press Run

Results stream back in real time from the endpoint.
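VQL also composes: LET stores a subquery that later statements can reuse and enrich. A hedged sketch that hashes the binaries of processes running from user-writable paths (the path regex is illustrative):

```sql
-- Store a filtered process list, then enrich each row with a hash
LET suspects = SELECT Pid, Name, Exe
               FROM pslist()
               WHERE Exe =~ "(?i)temp|downloads"

SELECT Pid, Name, Exe,
       hash(path=Exe).SHA256 AS SHA256
FROM suspects
```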


Part 4: Forensic Artifact Collection

Velociraptor ships with 200+ built-in Artifacts — pre-written VQL queries packaged with descriptions, parameter schemas, and result definitions. The Artifact Exchange community repository adds hundreds more.

Step 8: Collect Windows Event Logs

-- Via the GUI: Collect Artifact → Windows.EventLogs.Evtx
-- Or via VQL directly on the client:
 
SELECT System.TimeCreated.SystemTime AS Time,
       System.EventID.Value AS EventID,
       System.Channel AS Channel,
       EventData
FROM parse_evtx(filename="C:/Windows/System32/winevt/Logs/Security.evtx")
WHERE System.EventID.Value IN (4624, 4625, 4648, 4672, 4688)
ORDER BY Time DESC
LIMIT 500

Key Security Event IDs to hunt:

Event ID | Meaning
---------|--------------------------------------------------------
4624     | Successful logon
4625     | Failed logon
4648     | Logon with explicit credentials (runas / pass-the-hash)
4672     | Special privileges assigned (admin logon)
4688     | Process creation (requires audit policy)
4698     | Scheduled task created
4720     | User account created
7045     | New service installed
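Note that service installs (7045) are written to the System log rather than Security. A companion query using the same parse_evtx() pattern (field names follow the standard 7045 event schema):

```sql
-- New service installs land in System.evtx as Event ID 7045
SELECT System.TimeCreated.SystemTime AS Time,
       EventData.ServiceName AS Service,
       EventData.ImagePath AS ImagePath
FROM parse_evtx(filename="C:/Windows/System32/winevt/Logs/System.evtx")
WHERE System.EventID.Value = 7045
ORDER BY Time DESC
```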

Step 9: Collect Prefetch and Execution Evidence

-- Windows Prefetch — what executed and when
SELECT Name,
       RunCount,
       LastRunTime,
       PrefetchHash,
       Filenames
FROM Artifact.Windows.Forensics.Prefetch()
ORDER BY LastRunTime DESC

Prefetch files prove a binary executed even after it has been deleted — critical for malware investigations.

Step 10: Collect Shimcache / AppCompatCache

-- AppCompatCache — every executable that ever ran on this system
SELECT Name, Path, Modified, Executed
FROM Artifact.Windows.Registry.AppCompatCache()
ORDER BY Modified DESC

Shimcache entries persist across reboots and survive event log clearing — often the only remaining evidence of attacker tooling.

Step 11: Scheduled Tasks and Services

-- All scheduled tasks (including hidden ones)
SELECT Name, Path, Command, Arguments, Enabled, NextRunTime
FROM Artifact.Windows.System.ScheduledTasks()
WHERE Enabled = true
ORDER BY NextRunTime
 
-- Installed services (persistence check)
SELECT Name, DisplayName, StartMode, PathName, State
FROM Artifact.Windows.System.Services()
WHERE StartMode IN ("Auto", "Boot", "System")
  AND State = "Running"

Part 5: Fleet-Wide Hunting

A Hunt deploys a VQL artifact to every enrolled client simultaneously. Results from all endpoints flow back to the server and are aggregated into a single downloadable dataset.
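Aggregated hunt rows can also be queried server-side, in a notebook or the CLI, with the hunt_results() plugin; the hunt ID below is a placeholder:

```sql
-- Pull every row a hunt collected, across all clients
SELECT * FROM hunt_results(
    hunt_id="H.1234ABCD",                  -- placeholder hunt ID
    artifact="Windows.Detection.Amcache")
LIMIT 100
```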

Step 12: Create a Hunt for Suspicious Processes

In the web UI, go to Hunt Manager → New Hunt:

Name:       Suspicious LOLBin Execution
Artifact:   Windows.Detection.Amcache
Condition:  All Windows clients
Labels:     (leave blank for all)

Or build a custom artifact in View Artifacts → Add Artifact:

name: Custom.Hunting.LOLBins
description: |
  Hunt for common Living-off-the-Land Binary execution evidence
  across the entire fleet using Amcache and Prefetch.
 
parameters:
  - name: TargetBinaries
    default: "certutil.exe,mshta.exe,wscript.exe,cscript.exe,regsvr32.exe,rundll32.exe,msiexec.exe,wmic.exe,bitsadmin.exe,powershell.exe"
 
sources:
  - name: Prefetch
    query: |
      LET targets <= split(sep=",", string=TargetBinaries)
      SELECT Name, RunCount, LastRunTime, Filenames
      FROM Artifact.Windows.Forensics.Prefetch()
      WHERE Name =~ join(sep="|", array=targets)
      ORDER BY LastRunTime DESC
 
  - name: AmcacheExecution
    query: |
      LET targets <= split(sep=",", string=TargetBinaries)
      SELECT Name, FullPath, SHA1, LastModified
      FROM Artifact.Windows.Registry.Amcache()
      WHERE Name =~ join(sep="|", array=targets)

Launch the hunt → Run Hunt. Results from all enrolled Windows clients appear within minutes.

Step 13: Hunt for Lateral Movement Indicators

name: Custom.Hunting.LateralMovement
description: Identify common lateral movement evidence across the fleet
 
sources:
  - name: RemoteLogons
    query: |
      SELECT System.TimeCreated.SystemTime AS Time,
             EventData.SubjectUserName AS User,
             EventData.IpAddress AS SourceIP,
             EventData.LogonType AS LogonType
      FROM parse_evtx(
        filename="C:/Windows/System32/winevt/Logs/Security.evtx"
      )
      WHERE System.EventID.Value = 4624
        AND EventData.LogonType IN ("3", "10")
        AND EventData.IpAddress != "-"
        AND EventData.IpAddress != "127.0.0.1"
      ORDER BY Time DESC
      LIMIT 1000
 
  - name: PsExecArtifacts
    query: |
SELECT FullPath, Mtime, Size, hash(path=FullPath).SHA256 AS SHA256
      FROM glob(globs=[
        "C:/Windows/PSEXESVC.exe",
        "C:/Windows/Temp/PSEXESVC*",
        "C:/Windows/System32/PSEXESVC.exe"
      ])

Part 6: Notebooks for Investigation

Velociraptor Notebooks are collaborative investigation workspaces — markdown cells mixed with live VQL cells that run against collected data.

Step 14: Create an Investigation Notebook

In the web UI: Notebooks → New Notebook

## Incident Investigation — 2026-04-08
 
### Scope
Suspected initial access via phishing on WORKSTATION-01.
Investigate process execution chain starting 02:00-03:00 UTC.
 
### Timeline of Suspicious Processes

Add a VQL cell:

-- Processes spawned between 02:00-03:00 UTC on April 8
SELECT Pid, Ppid, Name, CommandLine, CreateTime, Username
FROM Artifact.Windows.System.Pslist()
WHERE CreateTime > timestamp(string="2026-04-08T02:00:00Z")
  AND CreateTime < timestamp(string="2026-04-08T03:00:00Z")
ORDER BY CreateTime

Notebooks save to the server and can be shared with team members — all analysts see the same live data.


Part 7: Standalone Collector for Offline Systems

For air-gapped endpoints or quick on-site triage, Velociraptor can generate a self-contained collector binary that runs without a live server connection and zips up all collected artifacts.

Step 15: Build an Offline Collector

In the web UI, go to Server Artifacts → Build offline collector. Select the artifacts to bundle:

Windows.KapeFiles.Targets
Windows.Forensics.Prefetch
Windows.Registry.AppCompatCache
Windows.EventLogs.Evtx
Windows.System.ScheduledTasks
Windows.System.Services

The builder repacks the Windows client binary with an embedded configuration and artifact list, then offers the finished collector executable for download (the exact file name varies by release).

Copy the collector to the target machine and run it from an elevated prompt:

# On the offline endpoint (run as Administrator)
.\Collector_velociraptor-windows-amd64.exe
 
# Results are written to a ZIP next to the binary:
# Collection-<HOSTNAME>-<timestamp>.zip

The output ZIP can be uploaded to any Velociraptor server later for analysis in the GUI.
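Recent releases expose this server-side as the import_collection() VQL function; a hedged sketch (the path is a placeholder, and client_id="auto" creates a fresh client entry):

```sql
-- Server-side: ingest an offline collection ZIP as if a client sent it
SELECT import_collection(
    client_id="auto",
    filename="/velociraptor/uploads/Collection-HOST01.zip")
FROM scope()
```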


Testing

Verify Server Health

# Check the container is up and passing health checks
docker inspect velociraptor --format '{{.State.Health.Status}}'
# Expected: healthy
 
# Check the frontend is listening
curl -sk https://localhost:8889/app/index.html | grep -o "<title>.*</title>"
 
# Check client comms port
ss -tlnp | grep 8000

Simulate a Hunt End-to-End

  1. In the web UI, create a new hunt: Hunt Manager → New Hunt
  2. Select artifact: Generic.Client.Info (safe, no side effects)
  3. Set condition: All clients
  4. Click Run Hunt
  5. Wait 30–60 seconds
  6. Click the hunt → Results tab
  7. You should see one row per enrolled client with hostname, OS, and agent version

Run a VQL Test on the Server

docker exec -it velociraptor \
  velociraptor --config /etc/velociraptor/server.config.yaml \
  query "SELECT client_id, os_info.hostname, last_seen_at FROM clients()" \
  --format json | python3 -m json.tool

Deployment Considerations

TLS Certificate Replacement

The default self-signed cert causes browser warnings. Replace it with a valid cert:

# In server.config.yaml — update the GUI section
GUI:
  bind_address: 0.0.0.0
  bind_port: 8889
  tls_certificate_filename: /certs/fullchain.pem
  tls_private_key_filename: /certs/privkey.pem

Mount your cert files into the container:

# docker-compose.yml
volumes:
  - ./config/server.config.yaml:/etc/velociraptor/server.config.yaml:ro
  - ./filestore:/velociraptor
  - /etc/letsencrypt/live/velociraptor.homelab.local:/certs:ro

Scaling the Datastore

The default file-based datastore works well up to ~500 clients. For larger fleets, switch to MySQL:

# server.config.yaml
Datastore:
  implementation: MySQL
  mysql_connection_string: "velociraptor:password@tcp(mysql:3306)/velociraptor"

Hardening Checklist

  • Replace self-signed TLS cert with a valid certificate
  • Create role-based users (analyst, investigator, reader) — avoid sharing the admin account
  • Enable audit logging: Server Artifacts → Server.Audit.Logs
  • Restrict web UI to admin VPN/network via firewall rules
  • Back up the filestore/ directory daily (contains all collected evidence)
  • Set a data retention policy — hunts can accumulate gigabytes quickly

Extensions and Next Steps

  1. SIEM integration — configure Velociraptor to forward events to your Elasticsearch SIEM via the Server.Flows.Webhook artifact or a Syslog forwarder
  2. Sigma rule translation — use hayabusa or the Velociraptor Sigma artifact to translate Sigma detection rules into VQL hunts and run them fleet-wide
  3. Automated triage on alert — trigger a targeted VQL collection via the Velociraptor API when your EDR or SIEM fires an alert, piping evidence collection into your IR ticketing workflow
  4. macOS artifacts — deploy the macOS client and collect unified_logs, launch daemons, FSEvents, and quarantine DB entries
  5. YARA scanning — use Artifact.Generic.Detection.Yara to push YARA rule sets across your fleet and detect in-memory malware signatures
  6. Custom artifact exchange — host a private artifact repository synced to your team's GitHub so all analysts always have the latest VQL hunts
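As a taste of the YARA item above, the built-in yara() plugin scans files against inline rules. A minimal sketch (rule and glob are illustrative only):

```sql
-- Scan user Temp directories with an inline YARA rule
LET MyRule = '''rule SuspectString {
    strings:
        $a = "Invoke-Mimikatz" nocase
    condition:
        $a
}'''

SELECT * FROM foreach(
  row={ SELECT FullPath
        FROM glob(globs="C:/Users/*/AppData/Local/Temp/*.exe") },
  query={ SELECT * FROM yara(files=FullPath, rules=MyRule) })
```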

Resources

  • Velociraptor Documentation
  • Velociraptor Artifact Exchange
  • VQL Reference
  • Velociraptor GitHub
  • Rapid Triage with Velociraptor (SANS paper)

Questions? Join the CosmicBytez community Discord.

Related Reading

  • Deception Technology Lab: T-Pot Honeypot with OpenCanary
  • Malware Analysis Sandbox
  • Network Traffic Analysis with Zeek and Suricata
  • Build a SIEM with Open-Source Tools
Tags: DFIR, Forensics, Incident Response, Threat Hunting, Velociraptor, VQL, Blue Team, Endpoint Security
