Setup, problem and solution design.
The purpose of log aggregation is to provide a single point of access to server data (in our case, nginx web servers).
We have a lot of web servers producing a huge amount of logs and no real way to understand what is going on there. The initial solution was to have each system write a local log file, with a Munin agent running a custom Perl parser that transferred the data to a Munin server, where it was displayed as RRDtool graphs. It worked, but the servers generated so much log data that parsing it close to real time became impossible, forcing us to drop a significant amount of data.
After a bit of internet research, and due to budget constraints, we decided to go with open source tools only. Those applications, however, still had to handle high volume and high load, scale well, and support big data.
We decided to set up a dedicated loghost, ship all the data to it, and parse it on the spot into the results we needed. Our proposed solution also took future log indexing into consideration, for both technical and BI searchability and readability.
The proposed solution consisted of:
Logstash - a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (for example, searching).
Elasticsearch - a distributed, RESTful search engine.
Graylog2 - software to run analytics, alerting, monitoring and powerful searches over all your log data.
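To make the proposed flow concrete, here is a minimal Logstash configuration sketch of the kind we had in mind: a file input tailing the nginx access log, a grok filter parsing each line (nginx's combined format matches the stock COMBINEDAPACHELOG pattern), and an Elasticsearch output for indexing. The paths, host names and index name are placeholders, and exact option names differ between Logstash versions, so treat it as an illustration rather than a final configuration.

    # logstash.conf sketch (placeholder paths and hosts)
    input {
      file {
        path => "/var/log/nginx/access.log"    # local nginx log on each web server
        type => "nginx-access"
      }
    }
    filter {
      grok {
        # nginx "combined" log lines parse with the stock Apache combined pattern
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        # use the request timestamp rather than the time the line was read
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
    }
    output {
      elasticsearch {
        hosts => [ "loghost:9200" ]            # the dedicated loghost
        index => "nginx-%{+YYYY.MM.dd}"        # daily indices keep searches manageable
      }
    }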
As a bonus, Logstash gave us the possibility to export events to a monitoring system or to support-shift management.
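As an illustration of that export, Logstash outputs can be combined and wrapped in conditionals; a sketch like the one below could forward only server errors to Graylog2 over GELF, in addition to indexing everything in Elasticsearch. The host, the port and the response field name (which assumes the classic COMBINEDAPACHELOG field naming) are assumptions on our part, not our final setup.

    output {
      # hypothetical extra output: send 5xx responses to Graylog2 via GELF
      if [response] =~ /^5\d\d$/ {
        gelf {
          host => "loghost"    # placeholder Graylog2 server address
          port => 12201
        }
      }
    }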
Next: implementation of log aggregation
Provided by: ForthScale systems, scalable infrastructure experts