
Showing posts from 2012

Log aggregation with Logstash, Elasticsearch, Graylog2 and more: Part One

Setup, problem and solution design.
The purpose of log aggregation is to provide a single point of access to server data (in our case, nginx web servers).
We have a lot of web servers writing huge amounts of logs and no real way to understand what is going on there. The initial solution was to have each system write a local log file, with a Munin agent and a custom Perl parser transferring the data to a Munin server, where it was displayed as an RRDtool graph. It worked; however, the servers generated so much logged data that parsing it close to real time became impossible, forcing us to drop a significant amount of data.

After some internet research, and due to budget constraints, we decided to go with open source tools only. Those applications still had to handle high volume and high load, scale well and support big data.
We decided to set up a dedicated loghost, ship all the data to it, and parse it on the spot into the needed results. Another thing our proposed solution took into consid…
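To give a flavor of the shipper side, a minimal Logstash configuration that tails the nginx access log and forwards events to Elasticsearch might look like the sketch below (the hostname is a placeholder, and option names vary between Logstash versions):

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx-access"
  }
}
output {
  elasticsearch {
    host => "loghost.example.com"
  }
}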

Setting up an Amazon AWS EC2 FTP server with Linux and vsftpd

Install vsftpd (example for Ubuntu / Debian)

apt-get -y install vsftpd

Edit the configuration file (in our example, with local authentication and no anonymous user)

vi /etc/vsftpd.conf

write_enable=YES
anonymous_enable=NO
local_umask=022
local_enable=YES


#to add passive ftp:
pasv_enable=YES
pasv_max_port=12100
pasv_min_port=12000
port_enable=YES
#use your instance's external (Elastic) IP; pasv_address takes a numeric IP,
#set pasv_addr_resolve=YES to use a hostname instead
pasv_address=<your external instance IP>


Then open inbound ports 20-21 and the passive range 12000-12100 in your security groups.
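If you manage security groups from the command line, the rules could be added like this (a sketch using the modern AWS CLI; the group name "ftp-servers" is a placeholder):

aws ec2 authorize-security-group-ingress --group-name ftp-servers --protocol tcp --port 20-21 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name ftp-servers --protocol tcp --port 12000-12100 --cidr 0.0.0.0/0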


Fixing "E: Archive directory /var/cache/apt/archives/partial is missing"

During apt-get install of a package you might receive a message similar to:
E: Archive directory /var/cache/apt/archives/partial is missing.

To fix it, create the missing directory and clean out the local package cache:

mkdir -p /var/cache/apt/archives/partial
apt-get autoclean


You should get output similar to:
Reading package lists... Done
Building dependency tree
Reading state information... Done


Then feel free to install your package with apt-get or aptitude.



Force SSL in nginx with a redirect

Just edit your site configuration file. Make sure you have a site set up for HTTPS/SSL (port 443), then edit the portion for HTTP:


server {
    listen      80;
    server_name server.yourdomain.tld;
    rewrite     ^   https://$server_name$request_uri? permanent;
}
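For reference, the matching HTTPS portion might look like the sketch below (the certificate and key paths are placeholders):

server {
    listen      443 ssl;
    server_name server.yourdomain.tld;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    # ...the rest of your site configuration (root, locations, etc.)
}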

Save and restart nginx.

Adding (passing) a Linux system variable to Tomcat

Sometimes you need to pass system variables to applications running in a Tomcat environment; these could be, for example, paths to configuration files.
Trying to pass them to the start-up script as variables, such as

export TOMCAT_OPTS=-DYour.var=foo

will only set it in your own shell and will not pass it to Tomcat on its start, since Tomcat runs as a different user; Java's System.getenv("Your.var") will return null. (Note that a -D flag defines a Java system property, read with System.getProperty, not an environment variable.)
You need to edit the relevant tomcat.conf (tomcat6.conf, for example, in /etc) and add:
export YOUR_VAR=foo
to the end of the file. Shell variable names cannot contain dots, so use underscores.
You can check that the variable is set with System.getenv("YOUR_VAR") in your Java code.
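As a concrete sketch for a tomcat6 install (the file path is an assumption; on some distributions it is /etc/default/tomcat6 instead):

echo 'export YOUR_VAR=foo' >> /etc/tomcat6/tomcat6.conf
service tomcat6 restart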




Fixing libdbus on 64-bit RPM-based Linux (CentOS, SL, Fedora)

The issue affects Firefox, Thunderbird, gedit, etc.
You will see an error:

error while loading shared libraries: libdbus-1.so.3: cannot open shared object file: No such file or directory

Most likely you have only the 32-bit version of the dbus libraries installed.
To fix that, install the 64-bit version as well:

yum install dbus-libs.x86_64
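You can verify which architectures of the library are present by querying the RPM database:

rpm -q --qf '%{NAME}.%{ARCH}\n' dbus-libs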


Installation of NoMachine NX server or VNC server on CentOS 5

Both tools are great for enabling remote GUI access to Linux.

To install NX Free server:
Note: installation of NX Server requires all three packages (client, node and server) due to tool and library dependencies.
Download the packages from Nomachine:
wget http://64.34.161.181/download/3.5.0/Linux/nxclient-3.5.0-7.x86_64.rpm
wget http://64.34.161.181/download/3.5.0/Linux/nxnode-3.5.0-9.x86_64.rpm
wget http://64.34.161.181/download/3.5.0/Linux/FE/nxserver-3.5.0-11.x86_64.rpm


Install the packages:
rpm -i nxclient-3.5.0-7.x86_64.rpm
rpm -i nxnode-3.5.0-9.x86_64.rpm
rpm -i nxserver-3.5.0-11.x86_64.rpm


VNC server installation:

yum install vnc-server
yum install xorg*
(needed for VNC fonts)
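After installation, you can start a first session for your user as a quick check (display :1 listens on TCP port 5901):

vncpasswd
vncserver :1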


Adding support for TCP socket proxying with Nginx

Install the needed prerequisites:

apt-get install build-essential checkinstall
apt-get build-dep nginx
apt-get source nginx


Check out the TCP socket patch for nginx from its git repository:

git clone https://github.com/yaoweibin/nginx_tcp_proxy_module.git


Patch your nginx source code:

cd nginx-1.1.19/
patch -p1 < ../nginx_tcp_proxy_module/tcp.patch

Now configure and build a new package:

./configure \
    --add-module=../nginx_tcp_proxy_module/ \
    --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log \
    --pid-path=/var/run/nginx.pid \
    --lock-path=/var/lock/nginx.lock \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_dav_module \
    --http-client-body-temp-path=/var/lib/nginx/body \
    --with-http_ssl_module \
    --http-proxy-temp-path=/var/lib/nginx/proxy \
    --with-http_stub_status_module \
    --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
    --with-debug \
    --with-http_flv_module \
    --prefix=/usr \
    --sbin-path=/usr/sbin/nginx


make
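If you prefer a managed package over a plain make install, checkinstall (installed above) can wrap the install step into a .deb (the package name and version below are illustrative):

checkinstall --pkgname=nginx-tcp --pkgversion=1.1.19 --default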

You will end up with a package for your …
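Once the patched nginx is installed, TCP proxying is configured in its own tcp block in nginx.conf. A minimal sketch following the module's documentation (the upstream address and listen port are placeholders):

tcp {
    upstream backend {
        server 127.0.0.1:8080;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 8888;
        proxy_pass backend;
    }
}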

Fixing SSH Daemon - Authentication refused: bad ownership or modes for directory

When you are unable to connect to your SSH server with your PEM (or PPK) key and authentication is refused, check the /var/log/auth.log file.
If you see something similar to:
sshd: Authentication refused: bad ownership or modes for directory /Your/Home/Path
You have a home directory permission problem.
Switch to that user (if you are not already one) and execute:
chmod go-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys 
It will restore your permissions and you should be able to connect.

A couple of simple steps to debug Munin plugins
First, check whether munin recognizes the plugin.
Execute the munin plugin in regular mode:
munin-run plugin_name
You will get output in the format:
some.value XX

Then execute the munin plugin in configuration mode:
munin-run plugin_name config

You will get output in the format:

graph_title Great Plugin
graph_args --upper-limit 100 -l 0
graph_vlabel %
graph_category some_category
graph_info This is the best munin plugin ever.
something.label LABEL

If you have any problems at this stage, they may be related to plugin file permissions.
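A quick thing to check is that the plugin file is executable by the munin user (the path below is distribution dependent):

chmod a+rx /etc/munin/plugins/plugin_name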

The next step is to test the plugin connection via port 4949.
Telnet to the munin node on port 4949:

telnet munin-node.example.com 4949

You will get output similar to:

Trying munin-node.example.com...
Connected to munin-node.example.com.
Escape character is '^]'.
# munin node at munin-node.example.com

Then type in the console:

fetch plugin_name
or
fetch plugin_name config

It will output something similar to munin-run.

When the plugin works with munin-run com…

Amazon AWS EC2 storage types

You can connect different storage types to Amazon EC2 instances: two of them are provided natively by the Amazon platform, and the rest are either provided by external sources or tweaks.

Every EC2 instance (except micro) includes instance storage in the package. You can also use Elastic Block Store (EBS), and you have the option of connecting different third-party storage over the network.

Instance storage is fast, non-persistent storage provided by Amazon. It reverts to its original state after any system shutdown, erasing any changes you have applied to the file system. It is very useful for running "dumb" servers that do not store data locally, or as additional storage for temporary files.

Elastic Block Store (EBS) is persistent storage provided by Amazon. Any data stored on it is available after instance shutdown and can be manipulated at the device level. For example, you can detach an EBS volume from one instance and attach it to another. However EBS can not be…
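For example, moving a volume between instances can be done from the command line (a sketch with the modern AWS CLI; the volume, instance and device identifiers are placeholders):

aws ec2 detach-volume --volume-id vol-12345678
aws ec2 attach-volume --volume-id vol-12345678 --instance-id i-87654321 --device /dev/sdf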

Setting Filezilla Server on Amazon EC2 instance with passive ftp

If you want to set up the FileZilla FTP server to handle passive connections on your AWS EC2 instances, do the following.

Select unused TCP ports, for example the 9024-9048 range.

Configure firewalls:

In your AWS EC2 security group, allow the incoming connections on chosen ports:

tcp port 20
tcp port 21
tcp port 9024-9048

If using the Windows firewall on your instance, allow connections on the same ports.

Now configure Filezilla to use specific port range on Passive connections:

Open Filezilla management console.

Go to: Edit > Settings > Passive mode settings

'External Server IP Address for passive mode transfers'

If you use an AWS Elastic IP, enter it in "Use the following IP";
if not, use the FileZilla-provided web service with the "Retrieve external IP address from" option.

Check 'Don't use external IP for local connections'

Check 'Use custom port range'

Enter the chosen values (9024 - 9048 in our example) for the custom port range.


Adding global DNS servers

Sometimes you need to preconfigure your servers for DNS resolution when you are not sure what the local DNS addresses will be, or you just want backup DNS servers for resolution.
Here is a small list of globally accessible DNS servers (in /etc/resolv.conf format):

nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 210.80.60.1
nameserver 210.80.60.2
nameserver 208.67.222.222
nameserver 208.67.220.220