This document explains the architecture, cost implications, operational behavior, and troubleshooting guidance for deploying Azure Log Analytics Workspace (LAW) using Azure Monitor Private Link Scope (AMPLS).
The implementation ensures private, cost-efficient log ingestion. Cost comparison:

With AMPLS (Private Link):
| Cost Component | Rate | Description | Remarks |
|---|---|---|---|
| Private Endpoint | ~$0.01/hour (~$7.30/month) | Per Private Endpoint, per region | Azure Private Link pricing |
| Data Processing | ~$0.01/GB | Data processed through Private Endpoint | |
| Public Egress | $0 | No public internet egress charges | |
Total Example Cost:

Without AMPLS (public endpoint):
| Cost Component | Rate | Description |
|---|---|---|
| Public Egress | $0.087/GB (varies by region) | Data transfer out to internet |
| Cross-region transfer | $0.02-0.05/GB | Between Azure regions over public endpoint |
Total Example Cost:

Key Benefits of AMPLS:
✅ Eliminates public egress charges for log ingestion
✅ Reduced cross-region costs via Azure backbone
✅ Predictable pricing with flat PE rate
✅ Better for high-volume scenarios (break-even typically at 500GB+/month)
✅ Enhanced security at minimal additional cost
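As a rough sanity check, the illustrative rates from the tables above can be compared directly in a few lines of Python (a sketch only; the rates and the break-even point vary by region, endpoint count, and cross-region traffic — the 500 GB+/month figure quoted above presumably reflects multi-endpoint, multi-region deployments):

```python
# Monthly cost comparison using the example rates quoted above.
# All figures are illustrative -- check Azure pricing for your region.

PE_MONTHLY = 7.30       # ~$0.01/hour per Private Endpoint
PE_PER_GB = 0.01        # data processed through the Private Endpoint
EGRESS_PER_GB = 0.087   # public internet egress (varies by region)

def private_link_cost(gb_per_month: float, endpoints: int = 1) -> float:
    """Flat Private Endpoint charge plus per-GB processing; no public egress."""
    return endpoints * PE_MONTHLY + gb_per_month * PE_PER_GB

def public_endpoint_cost(gb_per_month: float) -> float:
    """Per-GB public egress only (LAW ingestion charges apply in both cases)."""
    return gb_per_month * EGRESS_PER_GB

for gb in (50, 100, 500, 1000):
    print(f"{gb:>5} GB/month: private ${private_link_cost(gb):8.2f}"
          f"   public ${public_endpoint_cost(gb):8.2f}")
```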
Capabilities:
Documentation:
Example DCR Transformation (used in DCR definition, not LAW query):
source
| where Severity in ("Error", "Critical")
| extend Environment_CF = "Production"
| project-away RawData
Note: source is a placeholder used in DCR transformations. This query is configured in the DCR JSON, not executed directly in Log Analytics.
Example Regular KQL Query (run in Log Analytics Workspace):
// Query the Syslog table for errors
Syslog
| where SeverityLevel in ("err", "crit")
| extend Environment = "Production"
| project TimeGenerated, Computer, SeverityLevel, SyslogMessage
| take 100
How DCR Transformations Work: the transformation KQL runs at ingestion time against the incoming stream, which is referenced as source.

Example DCR JSON Configuration:
{
  "properties": {
    "dataFlows": [
      {
        "streams": ["Microsoft-Syslog"],
        "destinations": ["myWorkspace"],
        "transformKql": "source | where SeverityLevel in ('err', 'crit') | extend Environment_CF = 'Production'"
      }
    ]
  }
}
| Plan | Retention | Query Capability | Cost | Use Case |
|---|---|---|---|---|
| Analytics | 30 days default (configurable 4-730 days) | Full KQL, Alerts | Higher | Real-time monitoring, dashboards |
| Basic | 8 days | Limited KQL, no alerts | 50% lower | High-volume troubleshooting logs |
| Archive | Up to 12 years | Search jobs only | Lowest | Compliance, long-term retention |
Documentation:
Workspace-Level Retention:
Table-Level Retention:
Configuration Paths:
Workspace → Usage and estimated costs → Data Retention
Workspace → Tables → Manage table → Retention
Documentation:
Data Transformations allow you to run a limited KQL query on incoming data before it is stored in Log Analytics Workspace.
In simple terms:
Instead of storing raw logs and cleaning them later, you clean/filter/shape them before they are saved. It happens inside the Data Collection Rule (DCR) pipeline.
Source (VM / App / AKS)
│
▼
Azure Monitor Agent (AMA)
│
▼
Data Collection Rule (DCR)
│
▼
Transformation (KQL at ingestion)
│
▼
Log Analytics Workspace (Stored Data)
Transformation Features:
Supported KQL Operations:
where, extend, project, project-away
parse, parse_json, extract
summarize, join, union (single-row operations only)
Documentation:
Example: Remove PII and Filter Errors
source
| where Level == "Error" or Level == "Critical"
| extend RequestContext = parse_json(Context)
| project-away EmailAddress, PhoneNumber
| project TimeGenerated, Level, Message, RequestContext
Access Control Modes:
Built-in Roles:
Log Analytics Reader: Read-only access
Log Analytics Contributor: Read/write access
Log Analytics Data Reader: Fine-grained data access
Granular RBAC Features:
Documentation:
| Log Type | Description | Example Tables |
|---|---|---|
| VM Diagnostics | Performance counters, metrics | Perf, Heartbeat |
| Activity Logs | Control plane operations | AzureActivity |
| Resource Diagnostics | Service-specific logs | AzureDiagnostics |
| Network Logs | NSG flows, firewall logs | AzureNetworkAnalytics_CL |
| Log Type | Description | Example Tables |
|---|---|---|
| Application Insights | Telemetry, traces, exceptions | AppTraces, AppExceptions |
| Custom Application Logs | API/SDK ingestion | Custom tables (*_CL) |
| IIS Logs | Web server access logs | W3CIISLog |
| Dependency Tracking | External service calls | AppDependencies |
| Log Type | Description | Example Tables |
|---|---|---|
| Microsoft Defender | Security alerts, vulnerabilities | SecurityAlert, SecurityEvent |
| Azure AD Logs | Sign-ins, audit logs | SigninLogs, AuditLogs |
| Sentinel Events | Security information events | SecurityEvent |
| Key Vault Audits | Access and operation logs | AzureDiagnostics (KeyVault) |
| Log Type | Description | Example Tables |
|---|---|---|
| Container Logs | AKS, Container Instances | ContainerLog, KubePodInventory |
| Windows Event Logs | System, Application, Security | Event |
| Linux Syslog | System logs | Syslog |
| Update Management | Patch compliance | Update, UpdateSummary |
| Log Type | Description | Example Tables |
|---|---|---|
| Heartbeat | Agent connectivity | Heartbeat |
| Performance Counters | CPU, memory, disk | Perf |
| Change Tracking | File/registry changes | ConfigurationChange |
| Service Map | Application dependencies | VMConnection |
┌─────────────────────────────────────────────────────────┐
│ Log Analytics Workspace (LAW) │
└─────────────────────────────────────────────────────────┘
│
┌────────────────┼────────────────┐
│ │ │
▼ ▼ ▼
Storage Acc. Event Hub ADX
(Archive) (Stream) (Long-term)
│ │ │
▼ ▼ ▼
Compliance SIEM/Splunk Analytics
ServiceNow Power BI
Overview: Once logs are ingested into LAW, they can be exported to various destinations for long-term storage, real-time streaming, advanced analytics, or integration with third-party systems. This flexibility ensures that your logging strategy can meet compliance, operational, and business intelligence requirements without being locked into a single platform.
| Method | Destination | Use Case | Real-time |
|---|---|---|---|
| Data Export Rules | Storage, Event Hub | Continuous export | Yes |
| Logic Apps | Any API endpoint | Workflow automation | Yes |
| Azure Data Explorer | ADX clusters | Long-term analytics | Scheduled |
| Power BI | Power BI workspaces | Visualization | Scheduled |
| REST API | Custom applications | Programmatic access | On-demand |
| Archive Tier | Low-cost storage | Compliance retention | No (restore) |
Key Consideration: Choose your export method based on latency requirements (real-time vs. batch), destination system capabilities, and cost constraints. Real-time exports via Event Hub are ideal for SIEM integration, while Storage Accounts offer the most cost-effective solution for compliance archival.
Data Export Rules allow you to continuously send log data from specific tables to external destinations as soon as it’s ingested. This is the preferred method for most export scenarios as it provides near real-time delivery without requiring custom code or additional infrastructure.
Configuration:
Workspace → Settings → Data Export
Example Destinations:
Important Notes:
To SIEM (Splunk, QRadar):
LAW → Event Hub → SIEM Connector → SIEM Platform
Log data is streamed in real-time from LAW to Event Hub, where the SIEM platform consumes events using native connectors or custom consumers. This enables centralized security monitoring across hybrid and multi-cloud environments while maintaining Azure-native log collection.
To ServiceNow:
LAW → Logic App (scheduled query) → ServiceNow API → Incident
Logic Apps run scheduled queries against LAW (e.g., every 5 minutes), checking for specific conditions like critical errors or threshold breaches. When conditions are met, the Logic App automatically creates incidents in ServiceNow with enriched context, enabling automated ITSM workflows and reducing manual ticketing overhead.
To Power BI:
LAW → Power BI Connector → Direct Query or Import
Power BI can connect directly to LAW using the Azure Monitor Logs connector, either importing data for fast dashboard performance or using DirectQuery for real-time data (with some query limitations). This enables business users and executives to visualize operational metrics, SLA compliance, and capacity trends without needing to write KQL queries directly.
To Azure Data Explorer (ADX):
LAW → Continuous Export → ADX → Cross-cluster queries
For scenarios requiring long-term retention with better query performance than LAW’s archive tier, data can be continuously exported to ADX. ADX provides columnar storage optimized for time-series analytics and can retain years of data at lower cost while maintaining sub-second query response times. This is ideal for capacity planning, trend analysis, and data science workloads.
To External APIs (Custom Integration):
LAW → Logic App / Azure Function → HTTP POST → External System
For custom integrations with proprietary systems, webhooks, or APIs not natively supported, you can use Logic Apps or Azure Functions to query LAW on a schedule or respond to alerts, transform the data as needed, and POST it to any HTTP endpoint. This provides maximum flexibility for unique business requirements.
| Requirement | Recommended Method | Rationale |
|---|---|---|
| Compliance archival (7+ years) | Data Export Rules → Storage Account | Lowest cost; immutable storage options available |
| Real-time SIEM integration | Data Export Rules → Event Hub | Native streaming; low latency (<60 seconds) |
| Business intelligence dashboards | Power BI Direct Connect | User-friendly; no ETL required |
| Long-term analytics (complex queries) | Export to Azure Data Explorer | Better performance than LAW archive; lower cost than Analytics tier |
| Automated incident management | Logic Apps with scheduled queries | Flexible workflows; integrates with 200+ connectors |
| Custom application integration | REST API or Azure Functions | Full programmatic control; any language/platform |
| Multi-cloud log aggregation | Event Hub → Third-party SIEM | Centralized security operations across AWS, GCP, on-prem |
Cost Consideration: Exporting data adds costs beyond LAW ingestion. Storage Accounts charge ~$0.02/GB/month for hot tier, while Event Hub charges based on throughput units (~$11/month per unit). Factor in these costs when designing your export strategy, and use DCRs to filter unnecessary data before ingestion to minimize both LAW and export costs.
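Those example rates can be turned into a quick estimator (a sketch with the illustrative prices above; Event Hub throughput units are sized here at the standard-tier 1 MB/s ingress each, and real sizing must cover peak rather than average throughput):

```python
import math

STORAGE_HOT_PER_GB = 0.02   # $/GB/month, hot tier (illustrative rate from the text)
EVENT_HUB_PER_TU = 11.0     # $/month per throughput unit (illustrative)
TU_INGRESS_MBPS = 1.0       # standard-tier ingress capacity per throughput unit

def storage_export_cost(gb_retained: float) -> float:
    """Monthly cost of keeping exported data in hot-tier blob storage."""
    return gb_retained * STORAGE_HOT_PER_GB

def event_hub_export_cost(gb_per_day: float) -> float:
    """Monthly Event Hub cost, sized from *average* throughput.
    Size for peaks in practice; auto-inflate can help absorb bursts."""
    avg_mbps = gb_per_day * 1024 / 86_400
    tus = max(1, math.ceil(avg_mbps / TU_INGRESS_MBPS))
    return tus * EVENT_HUB_PER_TU
```

At 50 GB/day the average is well under one throughput unit, so the Event Hub floor cost applies; hot-tier storage cost simply scales with the volume retained.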
Use Data Export Rules for real-time scenarios: Set up continuous export to Event Hub for SIEM/alerting and to Storage for archival.
Filter at the source with DCRs: Apply transformations and filters during ingestion to reduce both LAW retention costs and export volumes.
Separate hot and cold data: Keep 30-90 days in LAW Analytics tier for active querying; export older data to Storage Account or ADX for cost-effective long-term retention.
Monitor export health: Set up alerts on Data Export Rule failures (available in Azure Monitor) to ensure compliance requirements aren’t missed.
Test query performance: For Power BI and ADX integrations, test query performance with realistic data volumes to ensure dashboards and reports load within acceptable timeframes.
Implement lifecycle policies: For Storage Account exports, configure lifecycle management to automatically move data from hot to cool to archive tiers based on age, further reducing costs.
Consider data sovereignty: When exporting to external systems or Storage Accounts in different regions, ensure compliance with data residency requirements (GDPR, regional regulations).
┌─────────────────────────────────────────────────────────┐
│ On-Premises Environment │
│ │
│ ┌──────────┐ ┌──────────────┐ │
│ │ VM │────────>│ Firewall │ │
│ │ (Agent) │ │ (Allow 443) │ │
│ └──────────┘ └──────────────┘ │
└─────────────────────────────┬───────────────────────────┘
│ Public Internet
│ (Egress charges apply)
▼
┌───────────────────────────────────┐
│ *.ods.opinsights.azure.com │
│ (Public IP: 13.x.x.x) │
└───────────────────────────────────┘
│
▼
┌───────────────────────────────────┐
│ Log Analytics Workspace (LAW) │
│ (Public Endpoint) │
└───────────────────────────────────┘
Characteristics:
┌─────────────────────────────────────────────────────────────────┐
│ On-Premises Environment │
│ │
│ ┌──────────┐ ┌──────────────┐ │
│ │ User PC │────────>│ On-Prem │ │
│ │ │ │ DNS Server │ │
│ └──────────┘ │ 192.168.1.10 │ │
│ └──────────────┘ │
└─────────────────────────────┬───────────────────────────────────┘
│ ExpressRoute / VPN
│ (Private connectivity)
▼
┌────────────────────────────────────────────────────────────────┐
│ Azure VNet - Hub │
│ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ CI DNS VMs │ │ SI DNS VMs │ │
│ │ 10.1.0.4/10.1.0.5 │ │ 10.2.0.4/10.2.0.5 │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ Private DNS Zone │ │ Private DNS Zone │ │
│ │ privatelink.*. │ │ privatelink.*. │ │
│ │ opinsights... │ │ opinsights... │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ Private Endpoint │ │ Private Endpoint │ │
│ │ 10.1.100.50 │ │ 10.2.100.50 │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ AMPLS-CI │ │ AMPLS-SI │ │
│ └──────────┬──────────┘ └──────────┬──────────┘ │
└─────────────┼──────────────────────────┼───────────────────────┘
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ LAW-CI │ │ LAW-SI │
│ (Central India) │ │ (South India) │
└──────────────────┘ └──────────────────┘
Characteristics:
Query Path:
On-prem PC (192.168.1.10)
│
│ Query: workspace-id.ods.opinsights.azure.com
▼
On-prem DNS Server (192.168.1.10)
│
│ Conditional Forwarder for *.ods.opinsights.azure.com
▼
CI DNS VMs (10.1.0.4, 10.1.0.5) OR SI DNS VMs (10.2.0.4, 10.2.0.5)
│
│ Query Private DNS Zone
▼
Private DNS Zone (privatelink.ods.opinsights.azure.com)
│
│ Returns A Record
▼
Private Endpoint IP (10.1.100.50 or 10.2.100.50)
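A quick way to verify which path resolution took is to check whether the DNS answer is a private address (a sketch; the hostname in the comment is a placeholder for your actual workspace endpoint):

```python
import ipaddress

def is_private_ip(ip_str: str) -> bool:
    """True for RFC 1918 addresses -- i.e. the Private Endpoint answered."""
    return ipaddress.ip_address(ip_str).is_private

# Live check (requires the on-prem DNS path; hostname is a placeholder):
#   import socket
#   ip = socket.gethostbyname("<workspace-id>.ods.opinsights.azure.com")
#   print(ip, "private" if is_private_ip(ip) else "PUBLIC - AMPLS bypassed")
```

A 10.x.x.x answer confirms the conditional forwarder and Private DNS Zone are working; a 13.x.x.x answer means the query fell through to public DNS.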
| Aspect | Before AMPLS (Public) | After AMPLS (Private) |
|---|---|---|
| DNS Resolution | Public IP (13.x.x.x) | Private IP (10.x.x.x) |
| Traffic Path | Public internet | Azure backbone/VPN |
| Network Isolation | None | Full isolation |
| Internet Requirement | Required | Not required |
| Egress Charges | Yes | No |
| Security | Exposed to internet | Private connectivity |
| Latency | Variable | Lower and consistent |
| Access Control | Public endpoint policies | AMPLS scope control |
| Compliance | Limited | Enhanced (data stays private) |
| Workload Type | Average Volume | Bandwidth (sustained) |
|---|---|---|
| Basic VM | 1-5 GB/month | 0.01-0.05 Mbps |
| Application Server | 10-50 GB/month | 0.1-0.5 Mbps |
| Container Cluster (AKS) | 50-500 GB/month | 0.5-5 Mbps |
| High-volume Application | 500+ GB/month | 5+ Mbps |
| Log Type | Typical Rate | Bandwidth Impact |
|---|---|---|
| Performance Counters | 1 KB/min | Very Low |
| Windows Event Logs | Varies (depends on verbosity) | Low-Medium |
| IIS Logs | 1-5 KB per request | Medium-High |
| Custom App Logs | Highly variable | Low-High |
| Container Logs | 10-100 MB/hour per pod | High |
| Security Logs | 5-20 KB per event | Medium |
Average Daily Bandwidth (MB/s) = (Total GB per day × 1024 MB) / 86400 seconds
Example Calculation:
With compression (typical 3-5x reduction):
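The formula above translates directly into a small helper (names are illustrative; the compression argument applies the typical 3-5x on-the-wire reduction mentioned in the text):

```python
def daily_bandwidth_mbps(gb_per_day: float, compression: float = 1.0) -> float:
    """Average MB/s: (Total GB per day x 1024 MB) / 86400 seconds,
    divided by the compression ratio achieved on the wire."""
    return gb_per_day * 1024 / 86_400 / compression

raw = daily_bandwidth_mbps(100)        # 100 GB/day raw -> ~1.19 MB/s
wire = daily_bandwidth_mbps(100, 4.0)  # assuming 4x compression -> ~0.30 MB/s
print(f"raw {raw:.2f} MB/s, on the wire {wire:.2f} MB/s")
```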
Characteristics:
Bytes In
Bytes Out
Connections

| Strategy | Impact | Implementation |
|---|---|---|
| Use DCRs to filter logs | 30-70% reduction | Filter unnecessary events at source |
| Configure appropriate intervals | 20-40% reduction | Adjust collection frequency |
| Use Basic tier for high-volume logs | 50% cost reduction | Change table plan |
| Enable compression | 3-5x reduction | Enabled by default |
| Transform data at ingestion | 20-50% reduction | Remove unnecessary fields |
Overview: Entra ID (Azure AD) logs typically constitute 20-40% of total LAW ingestion volume in enterprise environments. Volume varies significantly based on user count, application integrations, and security policies.
Typical Volumes by Organization Size:
| Organization Size | Users | Daily Entra Logs | Monthly Entra Logs | Avg Bandwidth |
|---|---|---|---|---|
| Small (100-1K users) | 100-1,000 | 1-5 GB | 30-150 GB | 10-60 KB/s |
| Medium (1K-10K users) | 1,000-10,000 | 5-20 GB | 150-600 GB | 60-230 KB/s |
| Large (10K-100K users) | 10,000-100,000 | 20-100 GB | 600-3,000 GB | 230 KB - 1.2 MB/s |
| Enterprise (100K+ users) | 100,000+ | 100-500 GB | 3-15 TB | 1.2-6 MB/s |
Volume by Log Type:
| Log Type | Table Name | Avg Event Size | Typical % of Total |
|---|---|---|---|
| Interactive Sign-ins | SigninLogs | 3 KB | 30-40% |
| Non-Interactive Sign-ins | AADNonInteractiveUserSignInLogs | 3 KB | 15-25% |
| Service Principal Sign-ins | AADServicePrincipalSignInLogs | 2 KB | 20-30% |
| Audit Logs | AuditLogs | 2 KB | 10-15% |
| Provisioning Logs | AADProvisioningLogs | 3 KB | 2-5% |
| Risk Detection | AADUserRiskEvents | 3 KB | 1-2% |
Query to Check Your Actual Volume:
// Current Entra ID log volume (last 30 days)
Usage
| where TimeGenerated > ago(30d)
| where DataType startswith "AAD" or DataType in ("SigninLogs", "AuditLogs")
| summarize
TotalGB = sum(Quantity) / 1024,
DailyAvgGB = (sum(Quantity) / 1024) / 30
by DataType
| extend DailyAvgBandwidthKBps = (DailyAvgGB * 1024 * 1024) / 86400
| project
DataType,
TotalGB = round(TotalGB, 2),
DailyAvgGB = round(DailyAvgGB, 2),
DailyAvgBandwidthKBps = round(DailyAvgBandwidthKBps, 2)
| order by TotalGB desc
Factors That Increase Entra Log Volume:
Bandwidth Calculation Example (10,000 users):
Scenario: Medium organization with 10,000 users
- 100,000 interactive sign-ins/day × 3 KB = 300 MB/day
- 50,000 non-interactive sign-ins/day × 3 KB = 150 MB/day
- 200,000 service principal sign-ins/day × 2 KB = 400 MB/day
- 20,000 audit events/day × 2 KB = 40 MB/day
- 5,000 provisioning events/day × 3 KB = 15 MB/day
Total: ~905 MB/day = ~27 GB/month
Average Bandwidth: 905 MB / 86,400 sec = 10.5 KB/s
Peak Hours (8 AM - 6 PM): ~30-40 KB/s
Off Hours: ~2-5 KB/s
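The arithmetic in the scenario above can be reproduced directly (event counts and per-event sizes are the illustrative averages from the tables; decimal KB/MB conversions are used to match the example's figures):

```python
# (events per day, average KB per event) -- illustrative figures from the text
entra_events = {
    "interactive_signins":       (100_000, 3),
    "noninteractive_signins":    (50_000, 3),
    "service_principal_signins": (200_000, 2),
    "audit_events":              (20_000, 2),
    "provisioning_events":       (5_000, 3),
}

daily_mb = sum(count * kb for count, kb in entra_events.values()) / 1000
monthly_gb = daily_mb * 30 / 1000
avg_kbs = daily_mb * 1000 / 86_400  # average over a full 24h day

print(f"~{daily_mb:.0f} MB/day, ~{monthly_gb:.2f} GB/month, ~{avg_kbs:.1f} KB/s")
# -> ~905 MB/day, ~27.15 GB/month, ~10.5 KB/s
```

Peak-hour bandwidth will run several times this 24-hour average, since sign-in activity concentrates in business hours.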
Query to Identify High-Volume Applications:
// Which applications generate the most Entra logs
SigninLogs
| where TimeGenerated > ago(7d)
| summarize
SignInCount = count(),
UniqueUsers = dcount(UserPrincipalName),
EstimatedMB = count() * 0.003 // 3KB avg per sign-in
by AppDisplayName
| extend EstimatedGB = EstimatedMB / 1024
| extend MonthlyProjectionGB = (EstimatedGB / 7) * 30
| order by SignInCount desc
| take 20
Query to Monitor Service Principal Activity:
// Service principals can generate massive log volumes
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(24h)
| summarize
SignInCount = count(),
EstimatedMB = count() * 0.002 // 2KB avg
by ServicePrincipalName, AppId
| extend MonthlyProjectionGB = (EstimatedMB * 30) / 1024
| where SignInCount > 1000 // Focus on high-volume SPs
| order by SignInCount desc
Cost Optimization Strategies:
Bandwidth Through AMPLS:
All Entra ID log ingestion flows through your Private Endpoint when diagnostic settings send to LAW. This traffic:
TODO : Will be updated later