Alertmanager

Node Exporter to Prometheus, Alertmanager to Teams

Got it! Since you're planning to add the Teams channel webhook URL directly in alertmanager.yml, I've removed the webhook code. Here's the updated setup:


1️⃣ Docker Compose Setup

1.1 Docker Compose File (docker-compose.yml)

version: '3'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alert.rules.yml:/etc/prometheus/alert.rules.yml
    ports:
      - "9090:9090"
    networks:
      - monitoring

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml
    ports:
      - "9093:9093"
    networks:
      - monitoring

  node_exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    ports:
      - "9100:9100"
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge
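
Before bringing anything up, a quick sanity check: docker-compose can validate and print the merged Compose file, which catches YAML and schema mistakes early.

docker-compose config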

1.2 Configuration Files

  1. Prometheus Config (prometheus.yml)

global:
  scrape_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']

rule_files:
  - "/etc/prometheus/alert.rules.yml"

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']
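
Before starting the stack, this file can be validated with promtool, which ships inside the prom/prometheus image. A sketch, assuming both prometheus.yml and alert.rules.yml are in the current directory (the rule file must be mounted too, since promtool follows rule_files):

docker run --rm \
  -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml" \
  -v "$PWD/alert.rules.yml:/etc/prometheus/alert.rules.yml" \
  --entrypoint promtool prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml
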
  2. Alertmanager Config (alertmanager.yml)

In this configuration, the Teams webhook URL is added directly to the Alertmanager config.

global:
  resolve_timeout: 5m

route:
  receiver: 'teams-webhook'
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h

receivers:
  - name: 'teams-webhook'
    webhook_configs:
      - url: 'https://outlook.office.com/webhook/your-webhook-url'
        send_resolved: true

🔹 Replace your-webhook-url with your actual Teams webhook URL. Note that Alertmanager's generic webhook payload is not in the card format Teams expects, so alerts may not render (or post) correctly this way; the prometheus-msteams proxy covered in the next section handles that translation.
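
The Alertmanager config can be linted the same way with amtool, which ships in the prom/alertmanager image; a sketch, assuming alertmanager.yml is in the current directory:

docker run --rm \
  -v "$PWD/alertmanager.yml:/etc/alertmanager/alertmanager.yml" \
  --entrypoint amtool prom/alertmanager:latest \
  check-config /etc/alertmanager/alertmanager.yml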

  3. Alert Rule File (alert.rules.yml)

groups:
  - name: system_alerts
    rules:
      - alert: HighMemoryUsage
        expr: (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 < 10
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "High Memory Usage"
          description: "Available memory is below 10%"

      - alert: HighDiskUsage
        expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 15
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "High Disk Usage"
          description: "Disk usage is above 85%"

2️⃣ Running the Setup

2.1 Start the Services

Run the following command in the directory where docker-compose.yml is located:

docker-compose up -d

This will start:

  • Prometheus on http://localhost:9090

  • Alertmanager on http://localhost:9093

  • Node Exporter on http://localhost:9100 (to expose system metrics)
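
Before checking the alerts themselves, it's worth confirming all three containers are actually running (and tailing logs if one is restarting):

docker-compose ps
docker-compose logs -f alertmanager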


2.2 Verify the Setup

  1. Verify Alerts in Prometheus:

    • Open Prometheus dashboard at http://localhost:9090.

    • Run the query ALERTS in Graph to see active alerts.

  2. Verify Alerts in Alertmanager:

    • Open Alertmanager at http://localhost:9093 to see alerts routed to the Teams webhook.

  3. Verify Webhook in Teams:

    • In your Teams channel, you should now see the alerts for disk and memory usage when they exceed the thresholds.
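
The same checks can be scripted against the HTTP APIs; the endpoints below are the standard Prometheus query API and the Alertmanager v2 API:

curl -s 'http://localhost:9090/api/v1/query?query=ALERTS'
curl -s 'http://localhost:9093/api/v2/alerts'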


✅ Summary

  • Prometheus collects system metrics (node_exporter).

  • Alertmanager handles alert routing and sends alerts directly to Teams Webhook.

  • Alerts for high memory and disk usage are triggered if the thresholds are crossed.

  • Using Docker Compose, everything is encapsulated and easy to run.


🚀 Let me know if you need any further modifications or have questions! 😃

Alert to Teams

Alertmanager doesn't support sending to Microsoft Teams out of the box. To send alerts to Teams we need the proxy at https://github.com/prometheus-msteams/prometheus-msteams.

Steps:

  1. Get the webhook URL of the MS Teams channel.

  2. Deploy the prometheus-msteams container with the webhook URL. (The current VG alert promteams container is deployed on the Bastion-eks server.)


docker run -d -p 2000:2000 \
    --name="promteams" \
    -e TEAMS_INCOMING_WEBHOOK_URL="https://example.webhook.office.com/webhookb2/xxx" \
    -e TEAMS_REQUEST_URI=alertmanager \
    quay.io/prometheusmsteams/prometheus-msteams
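
Once the container is up, the proxy can be smoke-tested by posting an Alertmanager-style payload to it. A sketch (the JSON below is a hand-rolled minimal example, not an official fixture); if the webhook URL is valid, a card should appear in the Teams channel:

curl -s -X POST http://localhost:2000/alertmanager \
  -H 'Content-Type: application/json' \
  -d '{
        "version": "4",
        "status": "firing",
        "alerts": [{
          "status": "firing",
          "labels": {"alertname": "TestAlert", "severity": "critical"},
          "annotations": {"summary": "Test message from curl"}
        }]
      }'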

alertmanager.yml


route:
  group_by: ['alertname']
  group_interval: 30s
  repeat_interval: 30s
  group_wait: 30s
  receiver: 'prometheus-msteams'

receivers:
- name: 'prometheus-msteams'
  webhook_configs: # https://prometheus.io/docs/alerting/configuration/#webhook_config
  - send_resolved: true
    url: 'http://localhost:2000/alertmanager' # the prometheus-msteams proxy
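
After editing alertmanager.yml, Alertmanager can reload its configuration without a restart, either via its lifecycle endpoint or a SIGHUP to the container (assuming port 9093 is reachable and the container is named alertmanager):

curl -X POST http://localhost:9093/-/reload
docker kill --signal=HUP alertmanager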

alertrule.yml


groups:

- name: Disk-usage
  rules:
  - alert: 'Low data disk space'
    expr: ceil(((node_filesystem_size_bytes{mountpoint!="/boot"} - node_filesystem_free_bytes{mountpoint!="/boot"}) / node_filesystem_size_bytes{mountpoint!="/boot"} * 100)) > 85
    labels:
      severity: 'critical'
    annotations:
      title: "Disk Usage"
      description: 'Partition : {{$labels.mountpoint}}'
      summary: "Disk usage is `{{humanize $value}}%`"
      host: "{{$labels.instance}}"


- name: Memory-usage
  rules:
  - alert: 'High memory usage'
    expr: ceil((((node_memory_MemTotal_bytes - node_memory_MemFree_bytes - node_memory_Buffers_bytes - node_memory_Cached_bytes) / node_memory_MemTotal_bytes) * 100)) > 80
    labels:
      severity: 'critical'
    annotations:
      title: "Memory Usage"
      description: 'Memory usage threshold set to `80%`.'
      summary: "Memory usage is `{{humanize $value}}%`"
      host: "{{$labels.instance}}"

Specific alert to specific channel

Great question, Pavan! For reference, here's the current alertmanager.yml:

route:
  group_by: ['alertname']
  group_interval: 30s
  repeat_interval: 30s
  group_wait: 30s
  receiver: 'prometheus-msteams'

receivers:
- name: 'prometheus-msteams'
  webhook_configs: # https://prometheus.io/docs/alerting/configuration/#webhook_config
  - send_resolved: true
    url: 'http://46.137.228.16:2000/alertmanager' # the prometheus-msteams proxy

Your current alertmanager.yml routes all alerts to Microsoft Teams because of this part:

route:
  group_by: ['alertname']
  group_interval: 30s
  repeat_interval: 30s
  group_wait: 30s
  receiver: 'prometheus-msteams'

🔍 Here's how it works:

  • The top-level route block has no match or match_re condition.

  • This means: it catches every alert by default.

  • And sends it to the receiver named 'prometheus-msteams'.

There are no sub-routes (routes:), so no filtering or alternate routing is happening.


🔁 What Happens Now

  • Any alert from Prometheus (regardless of label) → goes to prometheus-msteams → Teams webhook.


✅ Confirmed: All alerts are routed.

To limit or customize routing, you'd define a catch-all receiver and add a routes: section, like:

route:
  receiver: 'null-receiver'
  routes:
    - match:
        alertname: SSLCertificateExpiringSoon
      receiver: 'prometheus-msteams'

receivers:
  - name: 'null-receiver'   # no notification config, so unmatched alerts are dropped
  - name: 'prometheus-msteams'
    webhook_configs:
      - url: 'http://localhost:2000/alertmanager'
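
To confirm which receiver a given alert would hit before deploying, amtool can walk the routing tree; a sketch, run against the edited file:

docker run --rm \
  -v "$PWD/alertmanager.yml:/etc/alertmanager/alertmanager.yml" \
  --entrypoint amtool prom/alertmanager:latest \
  config routes test --config.file=/etc/alertmanager/alertmanager.yml \
  alertname=SSLCertificateExpiringSoon

This prints the matched receiver (here, prometheus-msteams); any other alertname falls through to null-receiver.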

Would you like help rewriting your config to only forward certain alerts (e.g. just SSL alerts) and drop others or route differently?
