

What is Filebeat and why is it imperative?

Beats are open-source, lightweight data shippers. You install them as agents on your servers to send operational data to Elasticsearch, either directly or via Logstash, where the data can be further processed and enriched.

Logstash Pipeline

Filebeat has a low memory footprint for forwarding and centralizing logs and files, and it removes the need to SSH into each machine, which matters when you have numerous servers, virtual machines, and containers that create logs.

Filebeat is a logging agent. You can install it on the machines that create the log files. Filebeat forwards the data to Logstash or directly into Elasticsearch for indexing.

Filebeat Processors

If you are not using Logstash but still want to process or customize logs before sending them to Elasticsearch, you can use Filebeat processors. They can decode JSON strings, add various metadata (e.g. Docker, Kubernetes), drop specific fields, and more.
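For illustration, a minimal processors section in filebeat.yml might look like the sketch below; the field names are assumptions, not taken from the article:

```yaml
# Sketch of a Filebeat processors section (filebeat.yml fragment).
processors:
  # Parse the JSON string in the "message" field into top-level fields.
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  # Enrich events with Docker container metadata.
  - add_docker_metadata: ~
  # Drop a field we do not need downstream (example field name).
  - drop_fields:
      fields: ["agent.ephemeral_id"]
      ignore_missing: true
```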

Filebeat reads the logs and ships them to Elasticsearch

Sending logs from Filebeat directly to Elasticsearch provides benefits such as:

  1. Logs arrive in real time, even under heavy traffic. We were using Logstash to parse logs into Elasticsearch, but due to heavy traffic, logs were getting lost and delayed. We modified the workflow to ship logs from Filebeat straight to Elasticsearch, and now the logs arrive in real time with no delay.

The image below shows heavy traffic (6 million hits in 15 minutes) coming from the pods, with Filebeat shipping the data in minimal time.

So, Filebeat can handle heavy traffic.

elastic image
  2. It can support encryption.
  3. It decreases the latency introduced by a middle component like Logstash.

Logstash performance depends on a few factors, e.g.:

  • the number and complexity of the filters you use
  • the number of filter and output workers and the available system resources

Internal pipeline behavior has also changed over recent versions, so your config and your Logstash version determine how best to tune it. Besides that, Logstash processing will be limited by the throughput of the slowest output.

4. Cost reduction

If you ship logs from Filebeat to Elasticsearch, you save the cost of the resources consumed by the Logstash pods.

5. Logs get reflected in Elasticsearch in real time.

The image below shows the pod log timestamp.


As shown in the screenshot below, a log with the same timestamp was found in Kibana.


6. There is no loss of logs.
When we were passing logs through Logstash to Elasticsearch, the log count on the microservice pods and the log count in Kibana did not match; only 20% of the microservice pod logs appeared in Kibana. After modifying the workflow, the counts matched.


You can define processors in the Filebeat configuration file per input. A processor definition consists of:

  • the processor name
  • an optional condition
  • a set of parameters
Customizing logs using filebeat processors


  • The processor name specifies a processor that performs some kind of action, for example selecting the fields that are exported or adding metadata to the event.
  • The condition is optional. If it is present, the action is executed only when the condition is met; if no condition is set, the action is always executed.
  • The parameters are the values passed to the processor.
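Putting the three parts together, the general shape of a processor definition in filebeat.yml is:

```yaml
processors:
  - <processor_name>:
      when:
        <condition>   # optional; the action runs only when this matches
      <parameters>    # parameters passed to the processor
```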

Let’s take an example where we have two APIs (a vehicle API and a furniture API), and each API has multiple microservices.

These microservices are deployed in the Kubernetes cluster as Deployment.

API image

We use Filebeat Autodiscover to fetch logs of pods.

Decode logs that are structured as JSON messages using the JSON options.

Customizing logs using filebeat processors
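A sketch of what such an autodiscover configuration could look like; the namespace condition and log paths are assumptions for illustration:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.namespace: "default"   # assumption: namespace to watch
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              # JSON options: decode each log line as a JSON message.
              json.keys_under_root: true
              json.add_error_key: true
```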

Add Kubernetes metadata into the log so that we can add fields based on the pod labels.

Pod labels will be present under the kubernetes.labels field, e.g. the app label is present under kubernetes.labels.app.
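A typical add_kubernetes_metadata processor, sketched here with the standard logs_path matcher:

```yaml
processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}           # node the Filebeat pod runs on
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
```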

  • Use the add_fields processor to add fields such as api_name and microservice_name.
  • Use the conditional to specify when to add these fields.
  • To add fields at the top level, set the target to an empty string.
  • Use the drop_fields processor to remove unwanted fields.
Customizing logs using filebeat processors
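The four steps above could be sketched as follows; the label value and the dropped field names are assumptions for illustration:

```yaml
processors:
  - add_fields:
      when:
        equals:
          kubernetes.labels.app: "vehicle-service"   # assumed pod label value
      target: ""          # empty target puts the fields at the top level
      fields:
        api_name: "vehicle"
        microservice_name: "vehicle-service"
  - drop_fields:
      fields: ["agent.ephemeral_id", "ecs.version"]  # example fields to drop
      ignore_missing: true
```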

Filebeat output

Finally, send the logs to Elasticsearch using the Elasticsearch output.

Customizing logs using filebeat processors
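A minimal Elasticsearch output block might look like this; the host and credentials are placeholders:

```yaml
output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]  # placeholder endpoint
  username: "${ES_USERNAME}"   # read from the environment or keystore
  password: "${ES_PASSWORD}"
```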
Manan Tank

Cloud Champion

Manan Tank is a passionate cloud expert. He is a fast learner and contributes to our major projects. In his free time, he loves to travel and do road trips.

Nilesh Padmagiriwar

Cloud Champion

Nilesh is a cloud expert. His hardwork is helping us to be at the forefront of CX. In his free time, he loves to sketch and play video games.
