How to Deploy the ELK Stack on Kubernetes: A Comprehensive Guide

Alex Kondratiev


This blog post is a step-by-step guide to deploying the ELK Stack on Kubernetes. The stack is a powerful combination of Elasticsearch, Logstash, and Kibana, enabling scalable search, analytics, and log processing for data-driven applications. By the end of this guide, you'll have a fully functional ELK Stack setup ready to manage and analyze your logs effectively.


So let’s figure out what exactly the ELK stack is

The ELK stack is an abbreviation for Elasticsearch, Logstash, and Kibana, which offers the following capabilities:

  • Elasticsearch: a scalable search and analytics engine built on a distributed, document-oriented data store, perfect for data-driven applications.
  • Logstash: a log-processing tool that collects logs from various sources, parses them, and sends them to Elasticsearch for storage and analysis.
  • Kibana: a powerful visualization tool that allows you to explore and analyze the data stored in Elasticsearch using interactive charts, graphs, and dashboards.

The architecture of Elasticsearch

Before we dive into deploying the ELK Stack, let's first understand the critical components of Elasticsearch's infrastructure:

  • Nodes: Elasticsearch runs on dedicated servers called nodes, which carry out the indexing, search, and analytics work.
  • Shards: each index is logically divided into shards, enabling faster data access and distribution across nodes.
  • Indices: Elasticsearch organizes the stored data into indices, facilitating efficient data management.
Configuring the ELK stack

You'll need a Kubernetes cluster to deploy the ELK Stack on Kubernetes. If you already have one, you can proceed with the deployment. Alternatively, you can use the provided GitHub repository with Terraform files to set up a Kubernetes cluster.

Deploying Elasticsearch

Using Helm charts, we can deploy Elasticsearch efficiently. Download the charts from Artifact Hub, then modify the values file to meet your specific requirements, such as adjusting the number of replicas or enabling or disabling certain features.
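If you haven't pulled the chart yet, one common approach is to add Elastic's official Helm repository and export the chart's default values as a starting point (repository URL and chart name are taken from Elastic's public Helm repo; adjust if you use a different source):

```shell
# Add Elastic's chart repository and refresh the local index
helm repo add elastic https://helm.elastic.co
helm repo update

# Dump the chart's default values as a starting point for customization
helm show values elastic/elasticsearch > values-elasticsearch.yaml
```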

Let's modify the values-elasticsearch.yaml to meet our needs:

yaml

clusterName: "itsyndicateblog"
replicas: 1
minimumMasterNodes: 1
createCert: true
secret:
  enabled: true
  password: "" # generated randomly if not defined
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "8.5.1"
resources:
  requests:
    cpu: "200m"
    memory: "500Mi"
  limits:
    cpu: "300m"
    memory: "1Gi"
ingress:
  enabled: false # enable ingress only if you need external access to the elasticsearch cluster
  hosts:
    - host: elastic.itsyndicate.org
      paths:
        - path: /

Once you've customized the values, use the Helm chart to install Elasticsearch:

shell

helm install elasticsearch -f values-elasticsearch.yaml <chart-name>
Note: Ensure you have configured the CSI drivers (EBS or EFS) so persistent volumes can be provisioned.
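For example, on EKS with the AWS EBS CSI driver add-on installed, a minimal StorageClass for the Elasticsearch data volumes might look like the sketch below (the class name `gp3` and driver installation are assumptions; adapt to your cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com      # requires the AWS EBS CSI driver add-on
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer  # provision the volume in the pod's availability zone
```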

Deploying Kibana

Kibana deployment is straightforward using Helm charts. In the values-kibana.yaml file, specify the URL and port of the Elasticsearch service:

yaml

elasticsearchHosts: "https://elasticsearch-master:9200"
enterpriseSearch:
  host: "https://elasticsearch-master:9200"

shell

helm install kibana -f values-kibana.yaml <chart-name>

Check that Kibana is installed correctly by port-forwarding the container's port to your local machine (I am using Kubernetes Lens):

Secret: Elasticsearch master credentials

Port forwarding

Welcome to Elastic
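To log in at the "Welcome to Elastic" screen you'll need the password of the `elastic` user, which the chart stores in the secret shown above. A rough sketch of how to retrieve it and reach Kibana from your terminal instead of Lens (the secret name `elasticsearch-master-credentials` and service name `kibana-kibana` match the charts used here, but verify them with `kubectl get secrets,svc`):

```shell
# Decode the auto-generated password for the "elastic" user
kubectl get secret elasticsearch-master-credentials \
  -o jsonpath='{.data.password}' | base64 -d

# Forward Kibana's port, then open http://localhost:5601 in a browser
kubectl port-forward svc/kibana-kibana 5601:5601
```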

Deploying Logstash and Filebeat

To manage logs effectively, we use Logstash and Filebeat. Filebeat collects log records from various sources and forwards them to Logstash, which processes them and sends them to Elasticsearch.

Deploy Logstash:

  1. Clone repository with configs: https://github.com/inemyrovsk/tf-modules/tree/master/eks/manifests/logstash-k8s
  2. Move to tf-modules/eks/manifests/logstash-k8s
  3. Edit the configmap.yaml file
  4. Add the Elasticsearch host, user, and password (you can take them from the “Secrets” Kubernetes resource)
  5. Apply templates:

shell

kubectl apply -f logstash-k8s -n $CHANGE_TO_ELASTIC_NS
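The part of the ConfigMap you'll edit in step 4 is the Logstash pipeline definition. As a rough sketch (the actual file in the repository may differ), the `elasticsearch` output stanza is where the host and credentials go:

```
input {
  beats {
    port => 5044                      # Filebeat ships logs here
  }
}
output {
  elasticsearch {
    hosts    => ["https://elasticsearch-master:9200"]
    user     => "elastic"
    password => "${ELASTIC_PASSWORD}" # e.g. injected from the Kubernetes Secret
    ssl_certificate_verification => false # the chart's self-signed cert; tighten for production
  }
}
```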

Deploy Filebeat

  1. Ensure Filebeat's configuration points to the correct log files on your nodes. On EKS this is usually the /var/log/containers directory. To check, log in to one of your nodes and inspect /var/log/containers; if there are no files there, adjust the path accordingly.
  2. Once everything is correct, apply the Kubernetes templates:

shell

kubectl apply -f filebeat-k8s
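For orientation, the relevant pieces of a Filebeat configuration for this setup look roughly like the fragment below (a sketch; the service name `logstash-logstash` is an assumption based on the Helm release naming above, so check the repository's actual manifests):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log    # default container log path on EKS nodes

output.logstash:
  hosts: ["logstash-logstash:5044"]  # the Logstash service deployed above
```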

Deploy a simple application

To check how logs are streaming into Elasticsearch, perform the following:

  1. Enter the eks/manifests folder from the cloned repository.
  2. Execute command:

shell

kubectl apply -f app -n default

After the installation is complete, revisit Kibana and create an Elasticsearch index.

Creating an index

Navigate to the Discover console:


Data in Elasticsearch:


Create Logstash index pattern

Create data view:


You should now see logs from the deployed application. If not, make some requests to the app and troubleshoot; refer to the video guide if help is required.
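If nothing shows up in Kibana, a quick way to confirm whether Logstash is writing to Elasticsearch at all is to query the indices directly. This assumes you have port-forwarded Elasticsearch to localhost:9200 and exported the `elastic` password; the `logstash-*` index pattern depends on your Logstash output configuration:

```shell
# List Logstash-managed indices; -k skips verification of the chart's self-signed certificate
curl -k -u "elastic:$ELASTIC_PASSWORD" \
  "https://localhost:9200/_cat/indices/logstash-*?v"
```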

Conclusion

You've successfully deployed the ELK Stack on Kubernetes, empowering your applications with robust log analysis and data-driven insights. Elasticsearch, Logstash, and Kibana seamlessly handle large data streams and provide meaningful visualizations.

Manage your logs:


Now that you have a robust logging solution, you can efficiently manage your logs and gain valuable insights. Happy analyzing!

Thank you for reading this guide on deploying the ELK Stack.

Feel free to reach out if you have any questions or require further assistance. Happy coding!

Alex Kondratiev

Founder of ITsyndicate. DevOps Enthusiast with 15+ years of experience in cloud, Infrastructure as Code, Kubernetes, and automation. Specialized in architecting secure, scalable, and resilient systems.
