DLPlytics
by NLCS
CONSULTANT IMPLEMENTATION GUIDE
PROPRIETARY — NLCS INTERNAL USE ONLY
NLCS INTERNAL DOCUMENT — v1.0

DLPlytics Implementation Guide

The complete step-by-step consultant playbook for delivering a DLPlytics engagement. Follow this guide sequentially from pre-engagement setup through final deliverable handoff.

Engagement Duration
2–8 Weeks
Required Access
Compliance Admin
Data Residency
Client Tenant Only
Production Risk
Zero (Sim Mode)
Phase 0

Pre-Engagement Setup

Access provisioning, environment verification, and tooling prerequisites

The Compliance Administrator role is the minimum required to read DLP policies, alerts, and Activity Explorer data. Request this role from the client's Global Administrator before the engagement begins.

IMPORTANT: Do NOT request Global Administrator. Compliance Administrator is sufficient and follows the principle of least privilege. If the client pushes back, you can also use the combination of Security Reader + Compliance Data Administrator.

Steps for the client's Global Admin to assign the role:

  1. Sign in to Microsoft Entra admin center → https://entra.microsoft.com
  2. Navigate to Identity → Roles & admins → All roles
  3. Search for 'Compliance Administrator' and open it
  4. Click '+ Add assignments' and search for the NLCS consultant's account
  5. Select the account and click 'Add' — the role is active immediately
TIP: Confirm access by navigating to https://compliance.microsoft.com — you should see the full Purview compliance portal including Data Loss Prevention in the left nav.

Confirm the client has the necessary licensing before proceeding. DLP analytics and Activity Explorer require:

  • Microsoft 365 E5 or Microsoft 365 E5 Compliance add-on
  • For GCC/GCC High: Microsoft 365 Government G5 or equivalent compliance add-on
  • Endpoint DLP features require Microsoft Defender for Endpoint Plan 2 onboarding
powershell
# Verify tenant license SKUs
Connect-MgGraph -Scopes "Organization.Read.All"
Get-MgSubscribedSku | Select-Object SkuPartNumber, CapabilityStatus | 
  Where-Object { $_.SkuPartNumber -match "E5|COMPLIANCE|INFORMATION_PROTECTION" } |
  Format-Table -AutoSize

Install all required modules on your analysis workstation before the engagement. Run PowerShell as Administrator.

powershell
# Install all required modules (run as Administrator)
Install-Module -Name ExchangeOnlineManagement -Force -AllowClobber
Install-Module -Name Microsoft.Graph -Force -AllowClobber
Install-Module -Name Microsoft.Graph.Beta -Force -AllowClobber
Install-Module -Name ImportExcel -Force  # For Excel export

# Verify installations
Get-Module -ListAvailable | Where-Object { 
  $_.Name -match "ExchangeOnline|Microsoft.Graph" 
} | Select-Object Name, Version | Format-Table
NOTE: The ExchangeOnlineManagement module provides the Export-ActivityExplorerData and Get-DlpCompliancePolicy cmdlets. The Microsoft.Graph module is used for alert and incident data via the Security API.
powershell
# Connect to Security & Compliance PowerShell (Purview)
Connect-IPPSSession -UserPrincipalName "[email protected]"
# You will be prompted for MFA — use the client-provisioned account

# Connect to Microsoft Graph (for alerts/incidents)
Connect-MgGraph -Scopes "SecurityEvents.Read.All","SecurityAlert.Read.All","Policy.Read.All"

# For GCC High tenants, use the government endpoints:
Connect-IPPSSession -UserPrincipalName "[email protected]" -ConnectionUri "https://ps.compliance.protection.office365.us/powershell-liveid/"
Connect-MgGraph -Environment USGov  # GCC High; use USGovDoD for DoD (GCC uses the default endpoints)
TIP: Save the session commands in a Connect-DLPlytics.ps1 script at the start of each engagement day. Sessions expire after ~1 hour of inactivity.
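The connection commands above can be wrapped into the suggested Connect-DLPlytics.ps1. A minimal sketch (the script name comes from the tip; the -Cloud switch values and parameter names are illustrative assumptions, not a standard):

```powershell
# Connect-DLPlytics.ps1 — daily session bootstrap (sketch)
param(
  [Parameter(Mandatory)]
  [string]$Upn,                                 # client-provisioned consultant account
  [ValidateSet("Commercial", "GCC", "GCCHigh")]
  [string]$Cloud = "Commercial"
)

# Resolve cloud-specific endpoints (GCC uses the commercial endpoints)
switch ($Cloud) {
  "GCCHigh" {
    Connect-IPPSSession -UserPrincipalName $Upn `
      -ConnectionUri "https://ps.compliance.protection.office365.us/powershell-liveid/"
    Connect-MgGraph -Environment USGov -Scopes "SecurityEvents.Read.All","SecurityAlert.Read.All","Policy.Read.All"
  }
  default {
    Connect-IPPSSession -UserPrincipalName $Upn
    Connect-MgGraph -Scopes "SecurityEvents.Read.All","SecurityAlert.Read.All","Policy.Read.All"
  }
}
```

Invoke as `.\Connect-DLPlytics.ps1 -Upn <consultant UPN> -Cloud GCCHigh` at the start of each day, and re-run whenever the IPPS session times out.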
Phase 1

Discovery & Data Ingestion

Export all DLP policies, rules, and 30–90 days of Activity Explorer data to establish the noise baseline

Export the complete policy inventory. This is your baseline — capture it before making any changes.

powershell
# Export all DLP compliance policies
$policies = Get-DlpCompliancePolicy -IncludeExtendedProperties
$policies | Select-Object Name, Mode, Workload, Priority, IsValid, CreatedBy, WhenCreated, WhenChanged |
  Export-Csv -Path ".\DLPlytics_Policies_$(Get-Date -Format 'yyyyMMdd').csv" -NoTypeInformation

# Export all rules for each policy (the detail level)
$allRules = @()
foreach ($policy in $policies) {
  $rules = Get-DlpComplianceRule -Policy $policy.Name
  foreach ($rule in $rules) {
    $allRules += [PSCustomObject]@{
      PolicyName        = $policy.Name
      PolicyMode        = $policy.Mode
      RuleName          = $rule.Name
      Disabled          = $rule.Disabled
      ContentContainsSIT = ($rule.ContentContainsSensitiveInformation | ConvertTo-Json -Compress)
      MinCount          = ($rule.ContentContainsSensitiveInformation | ForEach-Object { $_.minCount } | Measure-Object -Minimum).Minimum
      MaxCount          = ($rule.ContentContainsSensitiveInformation | ForEach-Object { $_.maxCount } | Measure-Object -Maximum).Maximum
      Severity          = $rule.ReportSeverityLevel
      NotifyUser        = ($rule.NotifyUser -join "; ")
      BlockAccess       = $rule.BlockAccess
      GenerateAlert     = $rule.GenerateAlert
      Workloads         = ($policy.Workload -join "; ")
    }
  }
}
$allRules | Export-Csv -Path ".\DLPlytics_Rules_$(Get-Date -Format 'yyyyMMdd').csv" -NoTypeInformation
Write-Host "Exported $($policies.Count) policies and $($allRules.Count) rules."
IMPORTANT: Flag any rules where MinCount = 1 on high-volume SITs (e.g., Credit Card Number, US SSN). These are primary noise generators and will be your first remediation targets.

Activity Explorer data is the raw event stream. Export it in 7-day chunks — the API has a 5,000 record limit per call and a 30-day maximum window per query.

powershell
# Export Activity Explorer data in 7-day chunks
# Adjust $startDate and $endDate for your engagement window (max 90 days back)

$startDate = (Get-Date).AddDays(-90)
$endDate   = Get-Date
$outputDir = ".\ActivityExplorer_$(Get-Date -Format 'yyyyMMdd')"
New-Item -ItemType Directory -Path $outputDir -Force | Out-Null

$current = $startDate
$chunk   = 0

while ($current -lt $endDate) {
  $chunkEnd = $current.AddDays(7)
  if ($chunkEnd -gt $endDate) { $chunkEnd = $endDate }

  Write-Host "Exporting chunk $chunk : $current → $chunkEnd"

  try {
    $data = Export-ActivityExplorerData `
      -StartTime $current `
      -EndTime   $chunkEnd `
      -OutputFormat Csv `
      -PageSize 5000

    # ResultData contains the CSV text — write it out directly.
    # If RecordCount hits the 5,000-record cap, page through with -PageCookie.
    $data.ResultData | Out-File -FilePath "$outputDir\ActivityExplorer_Chunk$chunk.csv" -Encoding utf8
  } catch {
    Write-Warning "Chunk $chunk failed: $_"
  }

  $current = $chunkEnd
  $chunk++
  Start-Sleep -Seconds 2  # Avoid throttling
}

Write-Host "Activity Explorer export complete. Files saved to $outputDir"
TIP: Combine all chunk CSVs into a single file for Power BI: Get-ChildItem "$outputDir\*.csv" | ForEach-Object { Import-Csv $_.FullName } | Export-Csv ".\ActivityExplorer_Combined.csv" -NoTypeInformation

Pull alert-level data from the Graph Security API. This gives you the aggregated alert view that maps to what the SOC sees in the Purview Alerts dashboard.

powershell
# Pull DLP alerts from Microsoft Graph Security API
# Requires SecurityEvents.Read.All permission

$allAlerts = @()
# Purview DLP alerts surface in alerts_v2 with serviceSource 'dataLossPrevention'
$uri = "https://graph.microsoft.com/v1.0/security/alerts_v2?`$filter=serviceSource eq 'dataLossPrevention'&`$top=999"

do {
  $response = Invoke-MgGraphRequest -Method GET -Uri $uri
  $allAlerts += $response.value
  $uri = $response.'@odata.nextLink'
  Write-Host "Fetched $($allAlerts.Count) alerts so far..."
} while ($uri)

# Export to CSV
$allAlerts | Select-Object id, title, severity, status, createdDateTime, 
  lastUpdateDateTime, serviceSource, detectionSource, 
  @{N="assignedTo"; E={$_.assignedTo}},
  @{N="alertPolicyId"; E={$_.alertPolicyId}} |
  Export-Csv -Path ".\DLPlytics_Alerts_$(Get-Date -Format 'yyyyMMdd').csv" -NoTypeInformation

Write-Host "Total alerts exported: $($allAlerts.Count)"

Before any analysis or changes, document the baseline state. This is your before/after comparison anchor for the executive report.

powershell
# Baseline snapshot — run BEFORE any changes
$baseline = [PSCustomObject]@{
  SnapshotDate        = Get-Date -Format "yyyy-MM-dd HH:mm"
  TotalPolicies       = (Get-DlpCompliancePolicy).Count
  ActivePolicies      = (Get-DlpCompliancePolicy | Where-Object { $_.Mode -eq "Enable" }).Count
  SimulationPolicies  = (Get-DlpCompliancePolicy | Where-Object { $_.Mode -eq "TestWithNotifications" -or $_.Mode -eq "TestWithoutNotifications" }).Count
  TotalRules          = (Get-DlpCompliancePolicy | ForEach-Object { Get-DlpComplianceRule -Policy $_.Name }).Count
  DisabledRules       = (Get-DlpCompliancePolicy | ForEach-Object { Get-DlpComplianceRule -Policy $_.Name } | Where-Object { $_.Disabled }).Count
}

$baseline | ConvertTo-Json | Out-File ".\DLPlytics_Baseline_$(Get-Date -Format 'yyyyMMdd').json"
$baseline | Format-List
Phase 2

Analytics & Correlation

Calculate noise-to-signal ratios, detect duplicates, flag misconfigurations, and build Power BI dashboards

Score each policy by comparing total alert volume to escalated/actioned alerts. A high ratio = high noise.

powershell
# Load the exported alerts CSV (wildcard the date-stamped filename)
$alerts = Get-ChildItem ".\DLPlytics_Alerts_*.csv" | ForEach-Object { Import-Csv $_.FullName }

# Group by policy and calculate noise ratio
$policyStats = $alerts | Group-Object alertPolicyId | ForEach-Object {
  $total      = $_.Count
  $actioned   = ($_.Group | Where-Object { $_.status -in @("resolved","inProgress") }).Count
  $dismissed  = ($_.Group | Where-Object { $_.status -eq "dismissed" }).Count
  $noiseRatio = if ($total -gt 0) { [math]::Round(($dismissed / $total) * 100, 1) } else { 0 }

  [PSCustomObject]@{
    PolicyId        = $_.Name
    TotalAlerts     = $total
    ActionedAlerts  = $actioned
    DismissedAlerts = $dismissed
    NoiseRatio_Pct  = $noiseRatio
    SignalRatio_Pct = 100 - $noiseRatio
    RiskLevel       = if ($noiseRatio -gt 80) { "CRITICAL" } 
                      elseif ($noiseRatio -gt 60) { "HIGH" } 
                      elseif ($noiseRatio -gt 40) { "MEDIUM" } 
                      else { "LOW" }
  }
} | Sort-Object NoiseRatio_Pct -Descending

$policyStats | Export-Csv ".\DLPlytics_PolicyScores.csv" -NoTypeInformation
$policyStats | Format-Table -AutoSize

Identify cases where a single user action triggered alerts across multiple workloads within a 5-minute window — the primary source of alert inflation.

powershell
# Load Activity Explorer data
$activity = Import-Csv ".\ActivityExplorer_Combined.csv"

# Group by UserId + SensitiveInfoType + 5-minute time bucket
$fiveMinTicks = 5 * [timespan]::TicksPerMinute
$duplicates = $activity | Group-Object {
  $t = [datetime]$_.TimeGenerated
  "{0}|{1}|{2}" -f $_.UserId, $_.SensitiveInfoTypeName, [math]::Floor($t.Ticks / $fiveMinTicks)
} | ForEach-Object {
  $events    = $_.Group | Sort-Object { [datetime]$_.TimeGenerated }
  $workloads = @($events | Select-Object -ExpandProperty Workload -Unique)

  if ($workloads.Count -gt 1) {
    [PSCustomObject]@{
      UserId              = ($events[0].UserId)
      SensitiveInfoType   = ($events[0].SensitiveInfoTypeName)
      WorkloadsTriggered  = ($workloads -join " | ")
      WorkloadCount       = $workloads.Count
      TotalEvents         = $events.Count
      FirstEvent          = $events[0].TimeGenerated
      LastEvent           = $events[-1].TimeGenerated
      InflationFactor     = $workloads.Count  # 1 action = N alerts
    }
  }
} | Where-Object { $_ -ne $null } | Sort-Object WorkloadCount -Descending

$duplicates | Export-Csv ".\DLPlytics_Duplicates.csv" -NoTypeInformation
Write-Host "Found $($duplicates.Count) multi-workload duplication patterns"
powershell
# Load rules export (wildcard the date-stamped filename)
$rules = Get-ChildItem ".\DLPlytics_Rules_*.csv" | ForEach-Object { Import-Csv $_.FullName }

# Flag rules with MinCount = 1 on high-volume SITs
$highVolumeSITs = @(
  "Credit Card Number",
  "U.S. Social Security Number (SSN)",
  "U.S. Individual Taxpayer Identification Number (ITIN)",
  "U.S. Bank Account Number",
  "International Banking Account Number (IBAN)",
  "U.S. Driver's License Number",
  "U.S. Passport Number",
  "All Full Names",
  "U.S. Physical Addresses"
)

$broadRules = $rules | Where-Object {
  $rule = $_
  $rule.MinCount -eq 1 -and
  ($highVolumeSITs | Where-Object { $rule.ContentContainsSIT -match [regex]::Escape($_) }).Count -gt 0
} | Select-Object PolicyName, RuleName, MinCount, ContentContainsSIT, Severity, Workloads

$broadRules | Export-Csv ".\DLPlytics_BroadRules.csv" -NoTypeInformation
Write-Host "Found $($broadRules.Count) overly broad rules requiring tuning"
$broadRules | Format-Table PolicyName, RuleName, MinCount -AutoSize
IMPORTANT: Rules with MinCount = 1 on high-volume SITs like "All Full Names" or "U.S. Physical Addresses" are almost always misconfigured. Recommend raising to MinCount = 3–5 minimum, with confidence threshold ≥ 75%.
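The flagged rules feed directly into the Phase 3 remediation plan. A minimal sketch of building the recommendations table, using illustrative sample rows in place of the DLPlytics_BroadRules.csv export (the 3–5 MinCount targets and 75% confidence follow the guidance above):

```powershell
# Illustrative rows standing in for the DLPlytics_BroadRules.csv export
$sampleBroadRules = @(
  [PSCustomObject]@{ PolicyName = "PII Policy"; RuleName = "All Full Names - Any"; MinCount = 1; Severity = "Low" },
  [PSCustomObject]@{ PolicyName = "PII Policy"; RuleName = "SSN - Any";            MinCount = 1; Severity = "High" }
)

# Map each flagged rule to a recommended threshold:
# high-severity rules keep the lower floor (3); low-severity noise generators go to 5
$recommendations = $sampleBroadRules | ForEach-Object {
  [PSCustomObject]@{
    PolicyName               = $_.PolicyName
    RuleName                 = $_.RuleName
    CurrentMinCount          = $_.MinCount
    RecommendedMinCount      = if ($_.Severity -eq "High") { 3 } else { 5 }
    RecommendedMinConfidence = 75
  }
}

$recommendations | Format-Table -AutoSize
```

Export $recommendations to CSV and it doubles as the Page 4 recommendations table in Power BI.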

Load all exported CSVs into Power BI Desktop to build the DLPlytics analytics dashboard.

Power BI Desktop — Data Source Setup

  1. Open Power BI Desktop → Home → Get Data → Text/CSV
  2. Load these files as separate tables: DLPlytics_Policies, DLPlytics_Rules, DLPlytics_Alerts, DLPlytics_PolicyScores, DLPlytics_Duplicates, ActivityExplorer_Combined
  3. In Power Query Editor → rename columns for clarity (e.g., alertPolicyId → PolicyId)
  4. Create relationship: Alerts[PolicyId] → PolicyScores[PolicyId] (Many-to-One)
  5. Create relationship: Rules[PolicyName] → Policies[Name] (Many-to-One)

Create these DAX measures in Power BI to power the dashboard visuals.

dax
// Total Alerts
Total Alerts = COUNTROWS(DLPlytics_Alerts)

// Noise Rate %
Noise Rate = 
DIVIDE(
  COUNTROWS(FILTER(DLPlytics_Alerts, DLPlytics_Alerts[status] = "dismissed")),
  [Total Alerts],
  0
) * 100

// Signal Rate %
Signal Rate = 100 - [Noise Rate]

// Policies with Critical Noise (>80%)
Critical Policies = 
COUNTROWS(FILTER(DLPlytics_PolicyScores, DLPlytics_PolicyScores[NoiseRatio_Pct] > 80))

// Multi-Workload Duplication Count
Duplicate Patterns = COUNTROWS(DLPlytics_Duplicates)

// Estimated Hours Wasted (assume 8 min per dismissed alert)
Hours Wasted = DIVIDE([Total Alerts] * [Noise Rate] / 100 * 8, 60, 0)

// Policy Effectiveness Score (0-100, higher = better)
Policy Score = 100 - [Noise Rate]
Page 1: Executive Overview
  • KPI cards: Total Alerts, Noise Rate %, Signal Rate %, Hours Wasted
  • Line chart: Alert volume trend (last 90 days)
  • Donut chart: Alert status breakdown (Dismissed / Actioned / Open)
  • Bar chart: Top 5 noisiest policies
Page 2: Policy Performance
  • Table: All policies sorted by Noise Rate % (conditional formatting: red >80%, amber 40-80%, green <40%)
  • Bar chart: Policy Effectiveness Score per policy
  • Scatter plot: Total Alerts vs. Signal Rate (identify outliers)
  • Slicer: Filter by Workload (Exchange / SharePoint / Endpoint / Teams)
Page 3: Alert-Activity Correlation
  • Table: Multi-workload duplication patterns (UserId, SIT, Workloads, InflationFactor)
  • Bar chart: Top 10 users by alert volume
  • Heatmap: Alert volume by hour of day and day of week
  • Treemap: Alert distribution by Sensitive Info Type
Page 4: Misconfiguration Findings
  • Table: Overly broad rules (MinCount = 1) with policy name and SIT
  • Bar chart: Rules by MinCount value
  • Gauge: % of rules with confidence threshold < 75%
  • Recommendations table: Rule name, current setting, recommended setting
Phase 3

Optimization & Remediation

Redesign policies, adjust thresholds, validate in Simulation Mode — zero production risk

IMPORTANT: All policy changes MUST be validated in Simulation Mode (TestWithNotifications) before switching to Enforce mode. Never modify a live enforcing policy directly. Always create a copy first.

Raise the minimum instance count on high-volume SITs to reduce false positives. Start conservatively and monitor for 7 days before enforcing.

powershell
# Example: Tune a specific rule MinCount from 1 to 5
# ALWAYS work on a copy - never edit the live policy directly

$policyName = "Contoso - PII Protection Policy"
$ruleName   = "Credit Card - Low Severity"

# Step 1: Check current settings
Get-DlpComplianceRule -Policy $policyName -Identity $ruleName |
  Select-Object Name, ContentContainsSensitiveInformation, BlockAccess, GenerateAlert

# Step 2: Update MinCount (raise threshold to reduce noise)
# Set-DlpComplianceRule addresses the rule by -Identity alone; rule names are tenant-unique
Set-DlpComplianceRule -Identity $ruleName `
  -ContentContainsSensitiveInformation @{
    Name           = "Credit Card Number"
    minCount       = 5
    maxCount       = "any"
    minConfidence  = 75
    maxConfidence  = 100
  }

Write-Host "Rule updated. Validate in simulation for 7 days before enforcing."
powershell
# Move a policy to Simulation Mode (TestWithNotifications)
# This generates alerts and policy tips but does NOT block actions
# Use this to validate changes before going live

$policyName = "Contoso - PII Protection Policy"

Set-DlpCompliancePolicy -Identity $policyName -Mode TestWithNotifications

# Verify the mode change
Get-DlpCompliancePolicy -Identity $policyName | Select-Object Name, Mode

# After 7 days of monitoring, switch to Enforce:
# Set-DlpCompliancePolicy -Identity $policyName -Mode Enable
TIP: Monitor the policy in simulation for a minimum of 7 days. Pull a fresh Activity Explorer export and compare alert volumes to the baseline. If noise rate drops by >50%, proceed to enforcement.
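The >50% criterion in the tip can be computed directly from the two exports. A sketch with illustrative counts (Get-NoiseImprovement is a hypothetical helper, not a Purview cmdlet):

```powershell
# Compare baseline vs post-tuning noise rates and apply the >50% drop gate
function Get-NoiseImprovement {
  param(
    [int]$BaselineDismissed, [int]$BaselineTotal,
    [int]$PostDismissed,     [int]$PostTotal
  )
  $before = ($BaselineDismissed / $BaselineTotal) * 100   # noise rate % before
  $after  = ($PostDismissed / $PostTotal) * 100           # noise rate % after
  [PSCustomObject]@{
    NoiseRateBefore_Pct = [math]::Round($before, 1)
    NoiseRateAfter_Pct  = [math]::Round($after, 1)
    RelativeDrop_Pct    = [math]::Round((($before - $after) / $before) * 100, 1)
    ReadyToEnforce      = ((($before - $after) / $before) -gt 0.5)
  }
}

# Illustrative: 850/1000 alerts dismissed at baseline, 120/400 after tuning
Get-NoiseImprovement -BaselineDismissed 850 -BaselineTotal 1000 -PostDismissed 120 -PostTotal 400
```

Feed it the dismissed/total counts from the baseline and post-tuning alert exports; ReadyToEnforce = True means the policy clears the gate for Enable mode.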

If the same SIT rule exists in separate policies for Exchange and SharePoint, consolidate into a single unified policy scoped to both workloads.

powershell
# Check which workloads a policy currently covers
Get-DlpCompliancePolicy -Identity "Contoso - Exchange PII" | 
  Select-Object Name, Workload

# Add SharePoint/OneDrive workload to an existing Exchange policy
# (instead of maintaining two separate policies)
Set-DlpCompliancePolicy -Identity "Contoso - Exchange PII" `
  -AddSharePointLocation "All" `
  -AddOneDriveLocation "All"

# Rename to reflect the broader scope
Set-DlpCompliancePolicy -Identity "Contoso - Exchange PII" `
  -Name "Contoso - PII Unified Policy"

# Disable the now-redundant standalone SharePoint policy
Set-DlpCompliancePolicy -Identity "Contoso - SharePoint PII" -Mode Disable
Write-Host "Consolidated. Disable the redundant policy after 7-day validation."

Many false positives come from legitimate business processes (e.g., HR sending SSNs to payroll). Add targeted exclusions rather than raising thresholds globally.

powershell
# Add sender/recipient exclusions (e.g., exclude internal HR-to-Payroll email)
# Note: Set-DlpComplianceRule addresses the rule by -Identity alone
Set-DlpComplianceRule -Identity "SSN - High Severity" `
  -ExceptIfSentToMemberOf "[email protected]" `
  -ExceptIfSentTo "[email protected]"

# Add a document-property exclusion (e.g., documents tagged Department = HR/Payroll)
# ContentPropertyContainsWords takes "Property:Value" strings
Set-DlpComplianceRule -Identity "SSN - High Severity" `
  -ExceptIfContentPropertyContainsWords "Department:Human Resources","Department:Payroll"

Write-Host "Exclusions added. Monitor for 7 days in simulation before enforcing."

Replace single-rule policies with tiered severity rules. This is the most impactful structural change — it separates noise from real threats at the policy level.

powershell
# Pattern: Split one noisy rule into three severity tiers
# Tier 1 - Low Severity: Notify only (no block), high threshold
New-DlpComplianceRule -Policy "Contoso - PII Unified Policy" `
  -Name "PII - Low Severity (Notify)" `
  -ContentContainsSensitiveInformation @{Name="Credit Card Number"; minCount=3; minConfidence=75} `
  -ReportSeverityLevel Low `
  -NotifyUser "LastModifier" `
  -GenerateAlert $false

# Tier 2 - Medium Severity: Alert + notify, moderate threshold
New-DlpComplianceRule -Policy "Contoso - PII Unified Policy" `
  -Name "PII - Medium Severity (Alert)" `
  -ContentContainsSensitiveInformation @{Name="Credit Card Number"; minCount=10; minConfidence=85} `
  -ReportSeverityLevel Medium `
  -NotifyUser "LastModifier","SiteAdmin" `
  -GenerateAlert $true

# Tier 3 - High Severity: Block + alert, low threshold but high confidence
New-DlpComplianceRule -Policy "Contoso - PII Unified Policy" `
  -Name "PII - High Severity (Block)" `
  -ContentContainsSensitiveInformation @{Name="Credit Card Number"; minCount=50; minConfidence=95} `
  -ReportSeverityLevel High `
  -BlockAccess $true `
  -GenerateAlert $true `
  -NotifyUser "LastModifier","SiteAdmin","Owner"

Write-Host "Severity tiering applied. Set policy to TestWithNotifications for 7-day validation."
Phase 4

Executive Reporting & Handoff

Capture post-tuning metrics, build the executive report, and deliver SOC runbooks

After the 7-day simulation period, re-run the baseline snapshot and compare to the original. This is your ROI proof.

powershell
# Post-tuning snapshot — run AFTER 7-day simulation validation
$postTuning = [PSCustomObject]@{
  SnapshotDate        = Get-Date -Format "yyyy-MM-dd HH:mm"
  TotalPolicies       = (Get-DlpCompliancePolicy).Count
  ActivePolicies      = (Get-DlpCompliancePolicy | Where-Object { $_.Mode -eq "Enable" }).Count
  SimulationPolicies  = (Get-DlpCompliancePolicy | Where-Object { $_.Mode -match "Test" }).Count
  TotalRules          = (Get-DlpCompliancePolicy | ForEach-Object { Get-DlpComplianceRule -Policy $_.Name }).Count
}

# Load baseline for comparison
$baseline = Get-Content ".\DLPlytics_Baseline_*.json" | ConvertFrom-Json

# Build and display the before/after comparison
[PSCustomObject]@{
  Metric              = "Total Policies"
  Before              = $baseline.TotalPolicies
  After               = $postTuning.TotalPolicies
  Change              = $postTuning.TotalPolicies - $baseline.TotalPolicies
},
[PSCustomObject]@{
  Metric              = "Active (Enforcing) Policies"
  Before              = $baseline.ActivePolicies
  After               = $postTuning.ActivePolicies
  Change              = $postTuning.ActivePolicies - $baseline.ActivePolicies
} | Format-Table -AutoSize

# Re-export alerts for the same 30-day window and re-run the noise analysis
# Compare NoiseRatio_Pct before vs after to calculate % improvement

The executive report is the primary deliverable. Use the NLCS report template and populate it with the following sections:

1. Executive Summary
   2-paragraph summary: what was found, what was done, quantified outcome. Lead with the noise reduction percentage.
2. Environment Baseline
   Table: Policy count, rule count, workloads covered, alert volume (30-day), noise rate % before engagement.
3. Key Findings
   Numbered list of the top 5 misconfigurations found, with severity (Critical/High/Medium) and business impact.
4. Remediation Actions Taken
   Table: Policy name, change made, before/after MinCount, before/after noise rate. One row per changed rule.
5. Post-Tuning Results
   Before/after comparison: alert volume, noise rate %, signal rate %, estimated SOC hours saved per month.
6. Remaining Risks & Recommendations
   Any policies still in simulation, recommended next steps, and items deferred to future engagements.
7. SOC Runbook Summary
   Brief description of the runbooks delivered and how to use them.

Deliver these three runbooks to the client's SOC team as Word documents or a SharePoint page:

Runbook 1: DLP Alert Triage Procedure
  • How to read the Policy Effectiveness dashboard in Power BI
  • Decision tree: when to dismiss vs. escalate a DLP alert
  • How to identify multi-workload duplicates and close them as a single incident
  • Escalation path: Tier 1 → Tier 2 → CIRT
Runbook 2: Monthly Policy Maintenance
  • How to re-run the PowerShell extraction scripts monthly
  • How to refresh the Power BI dashboard with new data
  • Threshold review checklist: when to raise/lower MinCount
  • How to add new SITs to existing policies without creating new noise
Runbook 3: New Policy Deployment Checklist
  • Always start in TestWithNotifications mode
  • Minimum 7-day simulation period before enforcing
  • Required approvals before switching to Enforce mode
  • Post-deployment monitoring: check alert volume for 48 hours after go-live

DLPlytics Implementation Guide v1.0 · © 2026 NLCS. All rights reserved.