AlertManager
Node Exporter → Prometheus → Alertmanager → Teams
Since the plan is to add the Teams channel webhook URL directly in alertmanager.yml, no separate webhook relay code is needed. Here's the setup:
1️⃣ Docker Compose Setup
1.1 Docker Compose File (docker-compose.yml)
```yaml
version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alert.rules.yml:/etc/prometheus/alert.rules.yml
    ports:
      - "9090:9090"
    networks:
      - monitoring

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml
    ports:
      - "9093:9093"
    networks:
      - monitoring

  node_exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    ports:
      - "9100:9100"
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge
```
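Before moving on, the compose file can be sanity-checked; a quick sketch using the Compose CLI's built-in validation:

```bash
# Parse and print the resolved compose file; exits non-zero on syntax errors.
docker-compose config
```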
1.2 Configuration Files

Prometheus Config (prometheus.yml)

```yaml
global:
  scrape_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']

rule_files:
  - "/etc/prometheus/alert.rules.yml"

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']
```
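To catch syntax errors before (re)starting the stack, the config can be checked with promtool, which ships inside the prom/prometheus image; a minimal sketch, assuming both files sit in the current directory:

```bash
# promtool lives at /bin/promtool inside the prom/prometheus image.
# Mounting the directory at /etc/prometheus lets the rule_files path
# (/etc/prometheus/alert.rules.yml) resolve inside the container too.
docker run --rm -v "$(pwd)":/etc/prometheus \
  --entrypoint /bin/promtool prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml
```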
Alertmanager Config (alertmanager.yml)

In this configuration, the Teams webhook URL is added directly to the Alertmanager config. Note that Alertmanager's generic webhook payload is its own JSON format, not the card format Teams expects, so messages may not render in the channel; the prometheus-msteams proxy covered later on this page addresses that.
```yaml
global:
  resolve_timeout: 5m

route:
  receiver: 'teams-webhook'
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h

receivers:
  - name: 'teams-webhook'
    webhook_configs:
      - url: 'https://outlook.office.com/webhook/your-webhook-url'
        send_resolved: true
```

🔹 Replace your-webhook-url with your actual Teams webhook URL.
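Alertmanager ships a similar checker, amtool; a sketch along the same lines (the mount path is arbitrary):

```bash
# amtool lives at /bin/amtool inside the prom/alertmanager image.
docker run --rm -v "$(pwd)/alertmanager.yml":/tmp/alertmanager.yml \
  --entrypoint /bin/amtool prom/alertmanager:latest \
  check-config /tmp/alertmanager.yml
```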
Alert Rule File (alert.rules.yml)

```yaml
groups:
  - name: system_alerts
    rules:
      - alert: HighMemoryUsage
        expr: (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 < 10
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "High Memory Usage"
          description: "Available memory is below 10%"

      - alert: HighDiskUsage
        expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 15
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "High Disk Usage"
          description: "Disk usage is above 85%"
```

2️⃣ Running the Setup
2.1 Start the Services
Run the following command in the directory where docker-compose.yml is located:
```bash
docker-compose up -d
```

This will start:
- Prometheus on http://localhost:9090
- Alertmanager on http://localhost:9093
- Node Exporter on http://localhost:9100 (exposing system metrics)
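A quick smoke test from the host (Prometheus and Alertmanager both expose a standard /-/healthy endpoint):

```bash
curl -sf http://localhost:9090/-/healthy && echo "prometheus OK"
curl -sf http://localhost:9093/-/healthy && echo "alertmanager OK"
curl -sf http://localhost:9100/metrics >/dev/null && echo "node_exporter OK"
```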
2.2 Verify the Setup
Verify alerts in Prometheus:
- Open the Prometheus dashboard at http://localhost:9090.
- Run the query `ALERTS` in the Graph tab to see active alerts.

Verify alerts in Alertmanager:
- Open Alertmanager at http://localhost:9093 to see alerts routed to the Teams webhook.

Verify the webhook in Teams:
- In your Teams channel, you should see the disk and memory alerts once the thresholds are crossed.
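To exercise the route without waiting for a real threshold breach, a synthetic alert can be POSTed straight to Alertmanager's v2 API (the labels here are purely illustrative):

```bash
curl -s -XPOST http://localhost:9093/api/v2/alerts \
  -H 'Content-Type: application/json' \
  -d '[{
        "labels": {"alertname": "TestAlert", "severity": "critical"},
        "annotations": {"summary": "Synthetic test alert"}
      }]'
```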
✅ Summary
- Prometheus collects system metrics from node_exporter.
- Alertmanager handles alert routing and sends alerts directly to the Teams webhook.
- Alerts for high memory and disk usage fire when the thresholds are crossed.
- With Docker Compose, everything is encapsulated and easy to run.
Alerts to Teams
Alertmanager doesn't support sending to Microsoft Teams out of the box (recent Alertmanager releases have added a native msteams_configs receiver, but this setup uses the proxy approach). To send alerts to Teams we use the prometheus-msteams proxy: https://github.com/prometheus-msteams/prometheus-msteams

Steps:
1. Get the webhook URL of the MS Teams channel.
2. Deploy the prometheus-msteams container with the webhook URL.

The current VG alerting promteams container is deployed on the Bastion-eks server:
```bash
docker run -d -p 2000:2000 \
  --name="promteams" \
  -e TEAMS_INCOMING_WEBHOOK_URL="https://example.webhook.office.com/webhookb2/xxx" \
  -e TEAMS_REQUEST_URI=alertmanager \
  quay.io/prometheusmsteams/prometheus-msteams
```
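As an alternative to a standalone docker run, the proxy could be declared as a service in the same docker-compose.yml so Alertmanager can reach it by name; a hypothetical service entry (indented under services:, reusing the env vars above):

```yaml
  # Hypothetical addition to the compose file from section 1.1. Alertmanager
  # would then use http://promteams:2000/alertmanager as its webhook URL.
  promteams:
    image: quay.io/prometheusmsteams/prometheus-msteams
    container_name: promteams
    environment:
      - TEAMS_INCOMING_WEBHOOK_URL=https://example.webhook.office.com/webhookb2/xxx
      - TEAMS_REQUEST_URI=alertmanager
    ports:
      - "2000:2000"
    networks:
      - monitoring
```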
alertmanager.yml

```yaml
route:
  group_by: ['alertname']
  group_interval: 30s
  repeat_interval: 30s
  group_wait: 30s
  receiver: 'prometheus-msteams'

receivers:
  - name: 'prometheus-msteams'
    webhook_configs: # https://prometheus.io/docs/alerting/configuration/#webhook_config
      - send_resolved: true
        url: 'http://localhost:2000/alertmanager' # the prometheus-msteams proxy
```
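To check the proxy itself, a minimal Alertmanager-style payload can be sent to it directly and should produce a card in the channel; the field values below are illustrative, and the shape follows Alertmanager's documented webhook format:

```bash
curl -s -XPOST http://localhost:2000/alertmanager \
  -H 'Content-Type: application/json' \
  -d '{
        "version": "4",
        "status": "firing",
        "alerts": [{
          "status": "firing",
          "labels": {"alertname": "ProxyTest", "severity": "critical"},
          "annotations": {"summary": "Hello from curl"}
        }]
      }'
```

Note that localhost:2000 only resolves if Alertmanager and the proxy run on the same host; if Alertmanager is containerized, use the proxy's container or service name instead.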
alertrule.yml
```yaml
groups:
  - name: Disk-usage
    rules:
      - alert: 'Low data disk space'
        expr: ceil(((node_filesystem_size_bytes{mountpoint!="/boot"} - node_filesystem_free_bytes{mountpoint!="/boot"}) / node_filesystem_size_bytes{mountpoint!="/boot"} * 100)) > 85
        labels:
          severity: 'critical'
        annotations:
          title: "Disk Usage"
          description: 'Partition : {{$labels.mountpoint}}'
          summary: "Disk usage is `{{humanize $value}}%`"
          host: "{{$labels.instance}}"

  - name: Memory-usage
    rules:
      - alert: 'High memory usage'
        expr: ceil((((node_memory_MemTotal_bytes - node_memory_MemFree_bytes - node_memory_Buffers_bytes - node_memory_Cached_bytes) / node_memory_MemTotal_bytes) * 100)) > 80
        labels:
          severity: 'critical'
        annotations:
          title: "Memory Usage"
          description: 'Memory usage threshold set to `80%`.'
          summary: "Memory usage is `{{humanize $value}}%`"
          host: "{{$labels.instance}}"
```
Specific alert to specific channel
Here's the current alertmanager.yml:
```yaml
route:
  group_by: ['alertname']
  group_interval: 30s
  repeat_interval: 30s
  group_wait: 30s
  receiver: 'prometheus-msteams'

receivers:
  - name: 'prometheus-msteams'
    webhook_configs: # https://prometheus.io/docs/alerting/configuration/#webhook_config
      - send_resolved: true
        url: 'http://46.137.228.16:2000/alertmanager' # the prometheus-msteams proxy
```
Your current alertmanager.yml routes all alerts to Microsoft Teams because of this part:
```yaml
route:
  group_by: ['alertname']
  group_interval: 30s
  repeat_interval: 30s
  group_wait: 30s
  receiver: 'prometheus-msteams'
```

🔍 Here's how it works:
- The top-level route block has no match or match_re condition, which means it catches every alert by default.
- Every alert is sent to the receiver named 'prometheus-msteams'.
- There are no sub-routes (routes:), so no filtering or alternate routing is happening.
🔁 What Happens Now
Any alert from Prometheus (regardless of labels) → goes to prometheus-msteams → the Teams webhook.
✅ Confirmed: All alerts are routed.
To limit or customize routing, you’d add a routes: section like:
```yaml
route:
  receiver: 'null-receiver'
  routes:
    - match:
        alertname: SSLCertificateExpiringSoon
      receiver: 'prometheus-msteams'
```

With this layout, only alerts matching a sub-route (here, SSLCertificateExpiringSoon) are forwarded to Teams; everything else falls through to null-receiver, which must still be declared under receivers: (with no configs) for the file to validate.
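Extending that idea to per-channel routing: assuming the prometheus-msteams proxy is configured with a second request URI bound to a different channel's webhook (the /ssl-alerts URI below is an assumption for illustration), alerts can be fanned out by label:

```yaml
route:
  receiver: 'teams-default'            # catch-all channel
  group_by: ['alertname']
  routes:
    - match:
        alertname: SSLCertificateExpiringSoon
      receiver: 'teams-ssl'            # SSL alerts go to a dedicated channel

receivers:
  - name: 'teams-default'
    webhook_configs:
      - send_resolved: true
        url: 'http://localhost:2000/alertmanager'   # proxy URI -> channel A
  - name: 'teams-ssl'
    webhook_configs:
      - send_resolved: true
        url: 'http://localhost:2000/ssl-alerts'     # proxy URI -> channel B (assumed)
```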