Splunk
How it works
Splunk Architecture and How It Works
Splunk is a powerful platform for searching, monitoring, and analyzing machine-generated data from applications, devices, and infrastructure logs. It follows a distributed architecture to handle large volumes of data efficiently.
🔹 Splunk Architecture Components
Splunk architecture consists of three main components:
1️⃣ Forwarders (Data Collection)
Role: Collects log data from various sources and forwards it to the indexer.
Types:
Universal Forwarder (UF) – Lightweight, forwards raw data without parsing.
Heavy Forwarder (HF) – Parses and filters data before sending it.
Example Use Case: A server running an application sends logs to Splunk via a forwarder.
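To make the UF/HF distinction concrete, below is a minimal sketch of the kind of filtering a Heavy Forwarder can apply before sending data on. The stanza and class names (drop_debug_events, the /var/log/app.log source) are illustrative assumptions, not a standard configuration.

```ini
# props.conf (on the Heavy Forwarder) -- illustrative source path
[source::/var/log/app.log]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf -- send DEBUG-level events to the null queue (i.e., discard them)
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue
```

A Universal Forwarder cannot apply this kind of filtering because it does not parse events; that is the main reason to place a Heavy Forwarder in the path.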
2️⃣ Indexers (Data Processing & Storage)
Role: Receives, processes, and stores the data received from forwarders.
Processes:
Parsing: Extracts timestamps, fields, and metadata.
Indexing: Stores data in a structured format for fast retrieval.
Searching: Executes user queries on indexed data.
Example Use Case: Searching logs of failed login attempts from indexed data.
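As a concrete version of the failed-login example, here is a hedged SPL sketch; the index and sourcetype names (linux_logs, linux_secure) and the user field are assumptions that depend on how the data was onboarded.

```spl
index=linux_logs sourcetype=linux_secure "Failed password"
| stats count by host, user
| sort -count
```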
3️⃣ Search Head (Data Querying & Visualization)
Role: User interface for querying and visualizing indexed data.
Features:
Search Processing Language (SPL): Allows complex searches.
Dashboards & Alerts: Helps in monitoring and reporting.
Example Use Case: A dashboard showing CPU usage trends across multiple servers.
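A dashboard panel like the CPU example is typically driven by a timechart search. The sketch below assumes hypothetical index, sourcetype, and field names (os, cpu_metrics, cpu_usage_pct); adjust to your data.

```spl
index=os sourcetype=cpu_metrics
| timechart span=5m avg(cpu_usage_pct) by host
```

Saved as a dashboard panel, this renders one line per host; the same search can back an alert that fires when average CPU crosses a threshold.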
🔹 How Splunk Works (Data Flow)
Data Input (Ingestion)
Logs, system metrics, and events are collected by forwarders from sources like servers, cloud, and applications.
Data Processing (Parsing & Indexing)
The indexer processes incoming data, extracts fields, applies filters, and stores it efficiently.
Data Search & Analysis (Querying & Visualization)
The search head enables users to run SPL queries, visualize results, and generate alerts.
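Once data is flowing end to end, a quick sanity check from the search head is to see what has arrived recently. Note that index=* can be expensive on large deployments, so narrow the index in practice.

```spl
index=* earliest=-15m
| stats count by host, sourcetype, index
| sort -count
```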
🔹 Splunk Deployment Models
Standalone Deployment: All components (Forwarder, Indexer, Search Head) on a single machine (suitable for small environments).
Distributed Deployment:
Multiple Forwarders → Multiple Indexers → Multiple Search Heads
Suitable for large-scale log analysis.
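In the distributed model, forwarders are usually pointed at several indexers at once and load-balance between them. A minimal outputs.conf sketch is shown below; the idx1/idx2 hostnames are placeholders.

```ini
# outputs.conf on each forwarder -- placeholder hostnames
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# switch between indexers roughly every 30 seconds (the default)
autoLBFrequency = 30
```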
🔹 Splunk Use Cases
✔ Security & Compliance – SIEM solutions, threat detection.
✔ IT Operations Monitoring – Server health, performance monitoring.
✔ Application Debugging – Log analysis for debugging applications.
🔹 Summary Table
| Component | Role |
| --- | --- |
| Forwarder | Collects and sends data to the indexer |
| Indexer | Processes, indexes, and stores data |
| Search Head | Allows users to search, visualize, and analyze data |
SIEM stands for Security Information and Event Management.
It is a security solution that helps organizations detect, analyze, and respond to security threats in real time by collecting and correlating log data from various sources such as firewalls, servers, and applications.
Installation and setup
Splunk Installation and Log Collection Process
Splunk can be installed in different environments depending on use cases (small-scale, distributed, or cloud-based). Here’s how to install Splunk and how it collects logs and data.
1️⃣ Splunk Installation
Splunk can be installed on Linux, Windows, and macOS. Below are the steps for installing Splunk Enterprise and Splunk Universal Forwarder.
🔹 Install Splunk Enterprise (for Indexing & Searching)
Download Splunk Enterprise:
Go to Splunk's official website and download the package for your OS.
Choose .tgz, .rpm, or .deb (for Linux) or .exe (for Windows).
Install on Linux (CLI method):

```bash
tar -xvzf splunk-*.tgz -C /opt
cd /opt/splunk/bin
./splunk start --accept-license
```

Install on Windows:

Run the .exe installer, follow the GUI instructions, and set the admin username/password.
Start Splunk from Start Menu > Splunk Enterprise.
Access Splunk Web UI:
Open http://localhost:8000 in a browser.
Login with the admin credentials.
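After logging in, it is worth confirming the instance is healthy and, if this server will also receive data from forwarders, enabling receiving on port 9997 (this can also be done in the Web UI under Settings > Forwarding and receiving). All commands below are standard Splunk CLI.

```bash
/opt/splunk/bin/splunk status                  # verify splunkd is running
/opt/splunk/bin/splunk enable listen 9997      # allow forwarders to send data on port 9997
sudo /opt/splunk/bin/splunk enable boot-start  # optional: start Splunk at boot (Linux)
```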
🔹 Install Splunk Universal Forwarder (for Log Collection)
Download the Forwarder:
Get the Universal Forwarder from Splunk’s website.
Install on Linux:
```bash
rpm -i splunkforwarder-*.rpm
```

Install on Windows:

Run the .msi installer and follow the setup wizard.

Configure the Forwarder to Send Logs:

```bash
/opt/splunkforwarder/bin/splunk add forward-server <Indexer_IP>:9997
/opt/splunkforwarder/bin/splunk add monitor /var/log
```

Restart the Forwarder:

```bash
/opt/splunkforwarder/bin/splunk restart
```
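The two add commands above roughly translate into the configuration stanzas below, which can be managed directly (or pushed from a deployment server) instead of using the CLI; the indexer address is a placeholder.

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf -- roughly what "splunk add monitor /var/log" creates
[monitor:///var/log]
disabled = false

# $SPLUNK_HOME/etc/system/local/outputs.conf -- roughly what "splunk add forward-server" creates
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <Indexer_IP>:9997
```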
2️⃣ Where to Install Splunk Components?
| Component | Where to Install |
| --- | --- |
| Forwarder | On log sources (servers, applications, devices) |
| Indexer | On a central Splunk server (Linux machine, VM, cloud instance) |
| Search Head | On a separate machine (for large setups) or combined with the indexer in small setups |
3️⃣ How Splunk Collects Logs & Data
Splunk collects logs and data from various sources using Universal Forwarders, Heavy Forwarders, and direct data inputs.
🔹 Data Collection Methods
File & Directory Monitoring
Collects logs from files such as /var/log/messages or C:\logs\app.log.

```bash
/opt/splunk/bin/splunk add monitor /var/log
```

Syslog Collection

Splunk listens on a network port (e.g., UDP 514) to receive syslog messages.

```bash
/opt/splunk/bin/splunk add udp 514 -sourcetype syslog
```
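The same syslog input can also be defined declaratively in inputs.conf instead of via the CLI; a minimal sketch (the sourcetype name is a common convention, adjust as needed):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf
[udp://514]
sourcetype = syslog
connection_host = ip
```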
API & Webhooks

Splunk integrates with APIs and webhooks to ingest logs.
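A common way to push events over HTTP is the HTTP Event Collector (HEC). The sketch below assumes HEC is enabled on its default port 8088 and that <HEC_TOKEN> is a token created under Settings > Data inputs > HTTP Event Collector; host, token, sourcetype, and index are placeholders.

```bash
# Send a single JSON event to the HTTP Event Collector
curl -k https://<splunk_host>:8088/services/collector/event \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": "user login failed", "sourcetype": "app_logs", "index": "main"}'
```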
Database Logs
Uses Splunk DB Connect to fetch logs from MySQL, PostgreSQL, etc.
Cloud & SaaS Monitoring
Collects AWS CloudWatch logs, GCP logs, and Microsoft Azure logs.
4️⃣ How Logs Move in Splunk (Workflow)
Forwarder collects logs from different sources.
Indexer processes and stores the data.
Search Head allows users to search and analyze the data.
🔹 Example: Linux Server Log Collection Setup
Install Splunk Indexer on a central server.
Install Universal Forwarder on each Linux server.
Configure forwarders to send logs to the indexer.
In Splunk Web UI, run a query to analyze logs:
index=linux_logs | stats count by source
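To see how log volume from each server trends over time (and spot a host that has gone quiet), a follow-up query along these lines can be used; linux_logs is the index assumed in the example above.

```spl
index=linux_logs
| timechart span=1h count by host
```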
5️⃣ Splunk Deployment Models
Standalone Deployment: All components on one server (for small-scale use).
Distributed Deployment: Forwarders → Multiple Indexers → Multiple Search Heads (for large-scale use).
Splunk Cloud: Managed by Splunk, no need for on-prem servers.