So let’s figure out what exactly the ELK stack is.
The ELK stack is an abbreviation for Elasticsearch, Logstash, and Kibana, which offers the following capabilities:
- Elasticsearch: a scalable search and analytics engine that doubles as a log analytics tool and a document-oriented database, perfect for data-driven applications.
- Logstash: a log-processing tool that collects logs from various sources, parses them, and sends them to Elasticsearch for storage and analysis.
- Kibana: A powerful visualization tool that allows you to explore and analyze the data stored in Elasticsearch using interactive charts, graphs, and dashboards.
The architecture of Elasticsearch
Before we dive into deploying the ELK Stack, let's first understand the critical components of Elasticsearch's infrastructure:
- Nodes: Elasticsearch runs as instances on dedicated servers called nodes, which carry out indexing, search, and analytics tasks.
- Shards: each index is logically divided into shards, enabling faster data access and distribution across nodes.
- Indices: Elasticsearch organizes the stored data into indices, facilitating efficient data management.
Configuring the ELK stack
You'll need a Kubernetes cluster to deploy the ELK Stack on Kubernetes. If you already have one, you can proceed with the deployment. Alternatively, you can use the provided GitHub repository with Terraform files to set up a Kubernetes cluster.
Deploying Elasticsearch
Utilizing Helm charts, we can efficiently deploy Elasticsearch. Modify the values file to meet your specific requirements, such as adjusting the number of replicas or enabling or disabling certain features. You can download the charts from Artifact Hub.
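If you don't have the chart locally yet, you can add the official Elastic Helm repository and export the chart's default values as a starting point (repository URL and chart name assume the upstream Elastic charts):

```shell
# Add the official Elastic Helm repository and refresh the local index
helm repo add elastic https://helm.elastic.co
helm repo update

# Dump the chart's default values to use as a base for values-elasticsearch.yaml
helm show values elastic/elasticsearch > values-elasticsearch.yaml
```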
Let's modify the values-elasticsearch.yaml to meet our needs:
```yaml
clusterName: "itsyndicateblog"
replicas: 1
minimumMasterNodes: 1
createCert: true
secret:
  enabled: true
  password: "" # generated randomly if not defined
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "8.5.1"
resources:
  requests:
    cpu: "200m"
    memory: "500Mi"
  limits:
    cpu: "300m"
    memory: "1Gi"
ingress:
  enabled: false # enable ingress only if you need external access to elasticsearch cluster
  hosts:
    - host: elastic.itsyndicate.org
      paths:
        - path: /
```

Once you've customized the values, use the Helm chart to install Elasticsearch:
```shell
helm install elasticsearch -f values-elasticsearch.yaml <chart-name>
```

> Note: Ensure you have configured the drivers (EBS or EFS) for persistent volumes.
Deploying Kibana
Kibana deployment is straightforward using Helm charts. In the values-kibana.yaml file, specify the URL and port of the Elasticsearch service:
```yaml
elasticsearchHosts: "https://elasticsearch-master:9200"
enterpriseSearch:
  host: "https://elasticsearch-master:9200"
```

```shell
helm install kibana -f values-kibana.yaml <chart-name>
```

Check whether Kibana is installed correctly by port-forwarding the container’s port to your local machine (I am using k8s Lens):
*(Screenshots: the Elasticsearch master-credentials Secret, port forwarding, and the "Welcome to Elastic" page.)*
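If you are not using Lens, the same checks can be done with kubectl; the Secret and Service names below are the chart defaults and may differ in your setup:

```shell
# Read the elastic user's password from the chart-created Secret
kubectl get secret elasticsearch-master-credentials -n <namespace> \
  -o jsonpath='{.data.password}' | base64 -d

# Forward Kibana's port, then open http://localhost:5601
kubectl port-forward svc/kibana-kibana 5601:5601 -n <namespace>
```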
Deploying Logstash and Filebeat
To manage logs effectively, we use Logstash and Filebeat. Filebeat collects log records from various sources, and Logstash processes them and sends them to Elasticsearch.
Deploy Logstash:
- Clone the repository with configs: https://github.com/inemyrovsk/tf-modules/tree/master/eks/manifests/logstash-k8s
- Move to tf-modules/eks/manifests/logstash-k8s
- Edit the configmap.yaml file: add the Elasticsearch host, user, and password (you can take them from the “Secrets” Kubernetes resource)
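For reference, the pipeline inside that ConfigMap typically looks something like this sketch; the port, host, and credentials are assumptions to adapt to your cluster:

```conf
input {
  beats {
    port => 5044   # Filebeat ships records here
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch-master:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
    ssl_certificate_verification => false  # chart uses self-signed certs by default
  }
}
```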
- Apply templates:
```shell
kubectl apply -f logstash-k8s -n $CHANGE_TO_ELASTIC_NS
```

Deploy Filebeat
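Filebeat's configuration usually resembles the sketch below; the container paths and the Logstash service name are assumptions to check against the repository's manifests:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
output.logstash:
  hosts: ["logstash-logstash:5044"]
```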
- Ensure Filebeat's configuration points to the correct log files on your nodes. In EKS, this is usually the /var/log/containers directory. To check, log in to one of your nodes and inspect /var/log/containers; if there are no files there, try another directory.
- If everything is correct, apply the Kubernetes templates:
```shell
kubectl apply -f filebeat-k8s
```

Deploy a simple application
To check how logs are streaming into Elasticsearch, perform the following:
- Enter the eks/manifests folder from the cloned repository.
- Execute command:
```shell
kubectl apply -f app -n default
```

After the installation is complete, revisit Kibana and create an Elasticsearch index.
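Before creating the index pattern, you can confirm that documents actually reached Elasticsearch; the credentials and the port-forward are assumptions matching the earlier steps:

```shell
# With elasticsearch-master port-forwarded to localhost:9200
curl -k -u "elastic:$ELASTIC_PASSWORD" "https://localhost:9200/_cat/indices?v"
```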
Creating an index
Navigate to the Discover console and create a data view with a Logstash index pattern (e.g. logstash-*):

*(Screenshots: data in Elasticsearch, creating the Logstash index pattern, and the data view.)*
You should now see logs from the deployed application. If not, make some requests to the app and troubleshoot the issue; refer to the video guide if help is required.
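If no logs appear, inspecting each hop of the pipeline usually narrows the problem down; the label selectors below are assumptions, so adjust them to your release names:

```shell
# Are the Filebeat and Logstash pods healthy?
kubectl get pods -n <namespace>

# Check each shipper for connection or parsing errors
kubectl logs -n <namespace> -l app=filebeat-filebeat --tail=50
kubectl logs -n <namespace> -l app=logstash-logstash --tail=50
```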
Conclusion
You've successfully deployed the ELK Stack on Kubernetes, empowering your applications with robust log analysis and data-driven insights. Elasticsearch, Logstash, and Kibana seamlessly handle large data streams and provide meaningful visualizations.
Manage your logs:
Now that you have a robust logging solution, you can efficiently manage your logs and gain valuable insights. Happy analyzing!
Thank you for reading this guide on deploying the ELK Stack.
Feel free to reach out if you have any questions or require further assistance. Happy coding!
