
Right, here’s a topic that comes up in practically every Sentinel conversation I have — cost. Sentinel is a brilliant SIEM, I really do rate it. But the Log Analytics consumption model can get painful fast if you’re not being deliberate about what you ingest and how.

I’ve spent a good chunk of the last year helping organisations rein in their Sentinel spend without ripping out their detection capabilities. What follows is the stuff that actually moves the needle — not the generic advice you’ll find rehashed across a dozen blog posts.

Understand Where Your Money Is Going

Before you touch anything, you need to know what’s actually costing you. Sounds obvious, right? You’d be amazed how many organisations have no idea.

Run this KQL query against your Log Analytics workspace:

Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize TotalGB = sum(Quantity) / 1000 by DataType
| sort by TotalGB desc

That gives you a clear picture of which tables are eating your budget (the Usage table reports Quantity in MB, hence the divide by 1,000 to get GB). In nearly every environment I’ve looked at, the top offenders are the same:

  • SecurityEvent — Windows Security Event logs. Almost always the single biggest table
  • Syslog — Linux syslog data
  • AzureActivity — Azure activity logs
  • SigninLogs — Entra ID sign-in logs
  • CommonSecurityLog — CEF/Syslog from third-party appliances
  • AzureDiagnostics — resource diagnostic logs, often massive

Once you can see the split, you’re making informed decisions instead of guessing.
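A total over 30 days tells you who the offenders are; a per-day breakdown tells you when a spike started, which is usually the faster route to the root cause. A small variation on the same query gives you that trend (same Usage table, same unit conversion):

```kql
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1d), DataType
| render timechart
```

If one table jumps on a specific date, go and find out what changed that day — a new connector, a diagnostic setting, or someone flipping a collection preset.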

Basic Logs vs Analytics Logs

This is probably the single biggest cost lever you’ve got right now. Microsoft introduced Basic Logs as a cheaper ingestion tier — it’s meant for tables you need for compliance or investigation but aren’t actively querying in analytics rules.

Basic Logs cost roughly a third of Analytics Logs for ingestion. The trade-off? Limited KQL capabilities (no joins across tables, 8-day query window for standard queries, search jobs for anything older) and you can’t use them directly in scheduled analytics rules.

Tables I tend to move to Basic Logs:

  • ContainerLog / ContainerLogV2 — unless you’ve got detection rules running against container output
  • AppTraces / AppDependencies — application performance data, almost never used in security detections
  • AzureMetrics — handy for investigation, not much use for detection
  • StorageBlobLogs / StorageFileLogs — high volume, low detection value in most organisations
  • AzureDiagnostics — depending on the resource types, a huge amount of this is just noise

The question to ask for each table is simple: “Am I running analytics rules against this?” If not, and you’re keeping it for investigation or compliance, Basic Logs is where it belongs.

You configure this per table in the Log Analytics workspace settings. The documentation on Basic Logs walks through the process.
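One way to sanity-check the “am I actually querying this table?” question, assuming you’ve enabled query auditing on the workspace, is the LAQueryLogs table. A sketch, using ContainerLogV2 as an illustrative candidate:

```kql
LAQueryLogs
| where TimeGenerated > ago(30d)
| where QueryText has "ContainerLogV2"
| summarize Queries = count() by RequestClientApp
```

If the only hits are ad-hoc portal queries and nothing from your analytics rules, that table is a strong candidate for Basic Logs.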

Data Collection Rules Are Your Best Friend

Data Collection Rules (DCRs) let you filter and transform data before it hits your workspace. This is a big deal — you stop paying for data you don’t need at the point of collection, not after it’s already been ingested and billed.

Some practical examples that have saved real money:

  • Windows Security Events: Don’t collect the lot. Use the “Common” or “Minimal” preset, or better still, create a custom DCR that pulls in specific Event IDs only. Most organisations don’t need Event ID 5156 (Windows Filtering Platform connection allowed). That single event type can account for a shocking percentage of SecurityEvent volume. I spent a full afternoon once tracking down why a customer’s costs had spiked — turned out someone had switched to “All Events” during troubleshooting and never switched back.

  • Syslog: Filter by facility and severity. Do you really need every informational syslog message from every Linux box? Almost certainly not. Collect Warning and above as your baseline, then add specific facilities at lower severities only where your detection rules actually need them.

  • Performance counters: If you’re collecting these for security purposes (and that’s rare), be very picky about which counters and how often they sample.

The DCR documentation is worth reading end to end if you haven’t already.
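To make the Windows example concrete, here’s the shape of the windowsEventLogs dataSources section of a custom DCR that collects a handful of Security Event IDs rather than the lot. The specific Event IDs below are illustrative only — pick yours from whatever your detection rules actually reference:

```json
{
  "dataSources": {
    "windowsEventLogs": [
      {
        "name": "securityEventsCustom",
        "streams": ["Microsoft-SecurityEvent"],
        "xPathQueries": [
          "Security!*[System[(EventID=4624 or EventID=4625 or EventID=4672)]]",
          "Security!*[System[(EventID=4688 or EventID=4720 or EventID=4732)]]"
        ]
      }
    ]
  }
}
```

Note the conspicuous absence of 5156 — that’s the point.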

Workspace Transformation Rules

Transformation rules let you modify, filter, or enrich data at ingestion time using KQL. They’re defined in a special workspace-level DCR, and unlike the per-source DCRs above they apply to data regardless of how it arrives at the workspace.

Patterns I’ve used successfully:

  • Dropping columns you don’t need: Some tables have columns that are always empty or irrelevant in your environment. Strip them out at ingestion and stop paying for the bytes.
  • Filtering noisy rows: Got a specific source churning out thousands of identical low-value events? Kill them with a where clause in the transformation.
  • Parsing and enriching: You can parse data at ingestion time rather than query time. Doesn’t save on ingestion cost, but query performance improves noticeably.
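In a transformation, the KQL runs over a virtual input table called source. A minimal sketch combining the first two patterns — the facility, severity, and column names here are placeholders, so check your own schema before deploying anything like it:

```kql
source
| where not(Facility == "daemon" and SeverityLevel == "info")
| project-away SourceSystem
```

Whatever survives the expression gets ingested and billed; whatever doesn’t is gone for good, which is exactly why the next paragraph matters.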

A word of caution though. Document everything you do with transformations. There is nothing worse than spending hours troubleshooting a detection rule that’s stopped firing, only to discover someone added a transformation that filters out the events it depends on. Ask me how I know.

Commitment Tiers

If your daily ingestion is fairly predictable and sits above 100 GB/day, commitment tiers give you meaningful discounts. They start at 100 GB/day and go up from there, with bigger discounts at each step.

The maths is simple — compare your average daily ingestion against the tier pricing. Microsoft publish the Sentinel pricing clearly enough that you can model it in a spreadsheet.

A few tips on this:

  • Don’t over-commit. If your ingestion bounces around, pick a tier that covers your baseline rather than your peaks. Overage gets charged at pay-as-you-go rates, which is perfectly fine for the odd spike.
  • Review it quarterly. Your ingestion profile shifts as you onboard new sources or optimise existing ones. Stick a reminder in your calendar.
  • Watch for double-paying. Sentinel commitment tiers include a Log Analytics commitment, so make sure you haven’t accidentally got both running separately.
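To size a tier against your baseline rather than your peaks, a variation on the earlier Usage query gives you average and 90th percentile daily ingestion over the last 90 days:

```kql
Usage
| where TimeGenerated > ago(90d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1d)
| summarize AvgGB = avg(DailyGB), P90GB = percentile(DailyGB, 90)
```

If your average sits comfortably above a tier threshold and the P90 isn’t wildly higher, that tier is a safe commit — the occasional day above it just bills at pay-as-you-go.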

Workspace Design Matters

Running multiple Sentinel workspaces? Have a think about whether consolidation makes sense. Commitment tier discounts apply per workspace — so two workspaces each ingesting 80 GB/day get no discount, while one workspace doing 160 GB/day qualifies for a decent one.

There are perfectly valid reasons for multiple workspaces — data residency, RBAC boundaries, different retention requirements. But I’ve seen organisations running multiples purely because different teams set them up independently and nobody ever questioned it. That’s just money left on the table.

Quick Wins Summary

If you want to make an impact fast, here’s how I’d prioritise it:

  1. Run the usage query — get a clear picture of where your spend is going
  2. Move eligible tables to Basic Logs — that’s an immediate 60-70% cut on those tables
  3. Review your Windows Security Event collection — switch to Common/Minimal or build a custom DCR
  4. Look at commitment tiers — if you’re above 100 GB/day, this is basically free savings
  5. Add workspace transformations — target whatever’s still noisy after the above

From what I’ve seen, organisations typically land a 30-50% cost reduction through these measures without losing any meaningful detection capability. The trick is being methodical — not just switching things off and crossing your fingers.


If you’re battling with Sentinel costs and want to talk it through, feel free to reach out. It’s one of those topics I never get tired of digging into.
