Thursday, October 25, 2012

Making a service autostart on Ubuntu, Debian, Oracle Linux, CentOS and Red Hat (Fedora and RHEL)


For Red Hat based distributions (CentOS, Oracle Linux, Fedora) execute (as root):

chkconfig servicename on


For Debian based distributions (Ubuntu) execute (as root):

update-rc.d servicename defaults
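
To verify, list the runlevels the service is enabled for on Red Hat based systems, or check the start symlinks created by update-rc.d on Debian based systems (servicename is a placeholder):

chkconfig --list servicename
ls /etc/rc2.d/ | grep servicename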

Provided by: Forthscale systems, Linux support team

Friday, September 14, 2012

Log aggregation with Logstash, Elasticsearch, Graylog 2 and more, Part One


Setup, problem and solution design.
The purpose of log aggregation is to provide a single point of access to server data (in our case, nginx web servers).
We had many web servers writing huge amounts of logs and no real way to understand what was going on there. The initial solution was to have each system write a local log file, with a Munin agent and a custom Perl parser transferring the data to a Munin server, where it was displayed as an RRDtool graph. It worked; however, the servers generated so much logged data that parsing it in near real time became impossible, forcing us to drop a significant amount of data.

After some research, and due to budget constraints, we decided to go with open source tools only. Those tools still had to handle high volume and high load, scale out, and support big data.
We decided to set up a dedicated loghost, ship all the data to it, and parse it on the spot into the needed results. Our proposed solution also took future log indexing into consideration, for both technical and BI searchability/readability.


The proposed solution consisted of:

Logstash - a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (e.g., for searching).
Elasticsearch - a distributed, RESTful search engine.
Graylog 2 - software to run analytics, alerting, monitoring and powerful searches over your whole log base.
As a bonus, Logstash gave us the ability to export events to a monitoring system or to support shift management.
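
As a preview of the implementation post, here is a minimal sketch of a Logstash configuration shipping nginx access logs into Elasticsearch (the path, host name and grok pattern are illustrative assumptions; exact option names vary between Logstash versions):

input {
  file {
    # follow the nginx access log on each web server
    path => "/var/log/nginx/access.log"
    type => "nginx"
  }
}

filter {
  grok {
    # parse the standard combined access log format into searchable fields
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}

output {
  elasticsearch {
    # assumed Elasticsearch node running on the dedicated loghost
    host => "loghost.example.com"
  }
}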

next: implementation of log aggregation


Munin monitoring tool
Logstash
Elasticsearch
Graylog 2

Provided by: ForthScale systems, scalable infrastructure experts

Sunday, September 09, 2012

list processes running in MySQL

Log in to MySQL as root:
mysql -uroot -p
and execute:
mysql> show processlist;
This will show you the list of processes running in MySQL.
Terminating with \G instead of ; presents the processes in a more readable vertical format:
mysql> show processlist \G
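
If the Info column is cut off in the normal output, the FULL keyword prints each statement in its entirety:

mysql> show full processlist;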

Provided by: ForthScale systems, scalable infrastructure experts

Thursday, September 06, 2012

Setting up Amazon AWS EC2 ftp server with Linux and VSFTP:

Install vsftpd (example for Ubuntu / Debian)

apt-get -y install vsftpd

Edit the configuration file (in our example, with local authentication and no anonymous user)

vi /etc/vsftpd.conf

write_enable=YES
anonymous_enable=NO
local_umask=022
local_enable=YES


#to add passive ftp:
pasv_enable=YES
pasv_max_port=12100
pasv_min_port=12000
port_enable=YES
pasv_address=<your external instance IP or address>


and open inbound ports 20-21 and the passive range 12000-12100 in your security groups
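
Then restart vsftpd so the new configuration takes effect:

service vsftpd restart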

Provided by: ForthScale systems, scalable infrastructure experts

Saturday, September 01, 2012

Debian / Ubuntu: purge removed packages with apt

Removing packages with aptitude or apt-get keeps some configuration and temp files on disk.
To purge them (to get rid of these leftover configuration files) execute:

dpkg -l | awk '/^rc/ {print $2}' | xargs sudo dpkg --purge

(The awk pattern matches packages in the 'rc' state: removed, but with residual config files.)
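
If you prefer aptitude, its search term '~c' matches exactly these removed-but-not-purged packages:

sudo aptitude purge '~c'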



Provided by: ForthScale systems, scalable infrastructure experts

Wednesday, August 29, 2012

fixing E: Archive directory /var/cache/apt/archives/partial is missing

During apt-get install of a package you might receive the message:

E: Archive directory /var/cache/apt/archives/partial is missing.

To fix it, create the missing directory and clean the package cache:

mkdir -p /var/cache/apt/archives/partial
apt-get autoclean


you should get response output similar to:
Reading package lists... Done
Building dependency tree
Reading state information... Done


Then feel free to install your package with apt-get or aptitude.



Friday, July 27, 2012

Force ssl in nginx with redirect

Just edit your site configuration file. Make sure you have a server block set up for HTTPS - SSL (port 443),
then edit the portion for HTTP:


server {
    listen      80;
    server_name server.yourdomain.tld;
    rewrite     ^   https://$server_name$request_uri? permanent;
}
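
On newer nginx versions, a return is the recommended way to issue the same permanent redirect (a minimal equivalent sketch):

server {
    listen      80;
    server_name server.yourdomain.tld;
    return      301 https://$server_name$request_uri;
}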

Save and restart nginx.
Provided by: ForthScale systems, scalable infrastructure experts

Monday, July 09, 2012

Adding (passing) a Linux system variable to Tomcat.

Sometimes you need to pass system variables to applications running in a Tomcat environment; these could be, for example, paths to configuration files.
Trying to pass them to the start-up script as variables, such as

export TOMCAT_OPTS=-Your.var=foo

will set the variable in the system shell but will not pass it to Tomcat on its start, since Tomcat runs as a different user.
Java's System.getenv("Your.var")
will return null.
You need to edit the relevant tomcat.conf (for example tomcat6.conf in /etc)
and add:
export Your.var=foo
to the end of the file.
You can check that the variable is set with System.getenv("Your.var") in your Java code.
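
Note that most shells reject dots in environment variable names, so the export above may fail outright. A common alternative (a sketch, assuming the stock tomcat6.conf is sourced by the init script) is to pass a Java system property instead:

# in /etc/tomcat6.conf: pass Your.var to the JVM as a system property
JAVA_OPTS="$JAVA_OPTS -DYour.var=foo"

In Java, System.getProperty("Your.var") then returns "foo".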



Provided by: ForthScale systems, scalable infrastructure experts

Wednesday, June 27, 2012

Fixing libdbus in 64 bit RPM based Linux (CentOS, SL, Fedora)


Issue affects Firefox, Thunderbird, gEdit etc.
You will see an error:

error while loading shared libraries: libdbus-1.so.3: cannot open shared object file: No such file or directory

most likely you have only the 32 bit version of the dbus libraries installed.
To fix that, install the 64 bit version as well:

yum install dbus-libs.x86_64
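
To confirm which architectures of the library are installed, you can query rpm (standard --queryformat tags):

rpm -q --queryformat '%{NAME}-%{VERSION}.%{ARCH}\n' dbus-libs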

Provided by: ForthScale systems, Cloud experts

Tuesday, June 26, 2012

Installation of NoMachine NX server or VNC server on CentOS 5

Both tools are great to enable external GUI remote access on Linux.

To install NX Free server:
Note: installation of NX Server requires all three packages (client, node and server) due to tool and library dependencies.
Download the packages from NoMachine:
wget http://64.34.161.181/download/3.5.0/Linux/nxclient-3.5.0-7.x86_64.rpm
wget http://64.34.161.181/download/3.5.0/Linux/nxnode-3.5.0-9.x86_64.rpm
wget http://64.34.161.181/download/3.5.0/Linux/FE/nxserver-3.5.0-11.x86_64.rpm


Install the packages:
rpm -i nxclient-3.5.0-7.x86_64.rpm
rpm -i nxnode-3.5.0-9.x86_64.rpm
rpm -i nxserver-3.5.0-11.x86_64.rpm


VNC server installation:

yum install vnc-server
yum install xorg*
(you need to install those for VNC fonts)
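
Once installed, start a VNC session for your user; display :1 listens on TCP port 5901 (the display number is an arbitrary example):

vncserver :1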

Provided by: ForthScale systems, Cloud experts

Adding KDE Environment to Centos 5

List the installable package groups on your CentOS 5 machine:
yum grouplist

Install KDE:
yum groupinstall "KDE (K Desktop Environment)"

Now you might want to enable GUI remote access with either:
VNC server or Nomachine NX.

Provided by: Forthscale systems, Cloud experts

Thursday, June 21, 2012

adding support for TCP socket proxy with Nginx


Install needed prerequisites:

apt-get install build-essential checkinstall
apt-get build-dep nginx
apt-get source nginx


Check out the TCP proxy module for nginx from its git repository:

git clone https://github.com/yaoweibin/nginx_tcp_proxy_module.git


Patch your nginx source code:

cd nginx-1.1.19/
patch -p1 < ../nginx_tcp_proxy_module/tcp.patch

Now configure and build a new package:

./configure \
  --add-module=../nginx_tcp_proxy_module/ \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --pid-path=/var/run/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --http-log-path=/var/log/nginx/access.log \
  --with-http_dav_module \
  --http-client-body-temp-path=/var/lib/nginx/body \
  --with-http_ssl_module \
  --http-proxy-temp-path=/var/lib/nginx/proxy \
  --with-http_stub_status_module \
  --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
  --with-debug \
  --with-http_flv_module \
  --prefix=/usr \
  --sbin-path=/usr/sbin/nginx

make
checkinstall

(make alone only builds the binaries; checkinstall, installed as a prerequisite above, wraps the build into a .deb.) You will end up with a package for your CPU architecture similar to:

/root/nginx-1.1.19/nginx_1.1.19-1_amd64.deb

Force install it:

dpkg -i --force-all /root/nginx-1.1.19/nginx_1.1.19-1_amd64.deb
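
With the patched package installed, TCP proxying is configured in a top-level tcp block. A minimal sketch following the module's documented syntax (backend address and listen port are illustrative):

tcp {
    upstream backend {
        # illustrative backend address and port
        server 127.0.0.1:8080;
    }

    server {
        # clients connect here; raw TCP is forwarded to the upstream
        listen 9000;
        proxy_pass backend;
    }
}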

Provided by: Forthscale, Cloud experts

Friday, June 15, 2012

Fixing SSH Daemon - Authentication refused: bad ownership or modes for directory

When you are unable to connect to your SSH server with your PEM (or PPK) key and get a connection refused message, check the /var/log/auth.log file.
If you see something similar to:
sshd: Authentication refused: bad ownership or modes for directory /Your/Home/Path
you have a home directory permission problem.
Change to that user (if you are not already) and execute:
chmod go-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys 
It will restore your permissions and you should be able to connect.

Provided by: SiQ systems, Cloud experts

Wednesday, June 06, 2012

Couple of simple steps to debug munin plugins
First, check whether munin recognizes the plugin.
Execute the munin plugin in regular mode:
munin-run plugin_name
You will get output in the format:
some.value XX

Then execute the munin plugin in configuration mode:
munin-run plugin_name config

You will get output in the format:

graph_title Great Plugin
graph_args --upper-limit 100 -l 0
graph_vlabel %
graph_category some_category
graph_info This is the best munin plugin ever.
something.label LABEL

If either command fails, it is often due to a permission problem with the plugin file.

The next step is to test the plugin connection via port 4949.
Run telnet against the munin node on port 4949:

telnet munin-node.example.com 4949

You will get output in format of:

Trying munin-node.example.com...
Connected to munin-node.example.com.
Escape character is '^]'.
# munin node at munin-node.example.com

then type in console:

fetch plugin_name
or
fetch plugin_name config

It will output something similar to munin-run.

When the plugin works with the munin-run command but not through telnet execution, you most likely have a local PATH problem. Tip: set env.PATH for the plugin in the plugin's environment file, as shown below.
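
For example (the path and section name vary by distribution and plugin):

# /etc/munin/plugin-conf.d/plugin_name
[plugin_name]
env.PATH /usr/local/bin:/usr/bin:/bin

Restart munin-node after changing the plugin configuration.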


Provided by: SiQ systems, Cloud experts

Wednesday, May 09, 2012

List open ports and listening services in Linux


To list open network ports (sockets) and the processes that own them on any Linux distribution, use netstat. You might need to install it via yum / apt-get.
Syntax:
netstat -lnptu
(-l listening sockets, -n numeric output, -p owning process, -t TCP, -u UDP)
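
On newer systems, the ss utility from the iproute2 package accepts the same flags:

ss -lnptu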



Provided by: SiQ systems, Cloud experts

Sunday, February 19, 2012

Amazon AWS EC2 storage types


You can connect different storage types to Amazon EC2 instances; two of them are provided natively by the Amazon platform and the rest come from external sources or tweaks. Every EC2 instance (except micro) has instance storage included in the package. You can also use Elastic Block Storage (EBS), and you have the option of connecting various 3rd party storage over the network.

Instance storage is fast, non-persistent storage provided by Amazon. It reverts to its original state after any system shutdown, erasing any changes you have applied to the file system. It is very useful for running "dumb" servers that do not store data locally, or as additional storage for temporary files.

Elastic Block Storage (EBS) is persistent storage provided by Amazon. Any data stored on it remains available after instance shutdown and can be manipulated at the device level. For example, you can detach an EBS volume from one instance and attach it to another. However, an EBS volume cannot be attached to more than one instance at the same time.

Using S3 as a file system. S3 is a storage infrastructure provided by Amazon as a service; it is not part of EC2 (Elastic Compute Cloud) but can be used to store and retrieve any amount of data from anywhere at any time. Because the S3 infrastructure is fully managed and scaled by Amazon, it is very useful for large scale web projects, backup media and large volume data transfers. Using S3 as a file system is done via FUSE in Linux or as a mapped network drive in MS Windows. We are providing a tutorial on a Linux implementation of S3 as a file system.
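
A minimal sketch with the s3fs FUSE client (the bucket name and mount point are illustrative, and s3fs credentials are assumed to be configured):

s3fs mybucket /mnt/s3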
 
There are also a few companies providing iSCSI storage arrays for AWS; one of them is Zadara Storage, a company providing Virtual Private Storage Arrays and currently in beta. iSCSI, like other network attached storage systems, combines the persistence and availability of EBS with much faster speed.

Sunday, February 12, 2012

Setting Filezilla Server on Amazon EC2 instance with passive ftp


If you want to set up a FileZilla ftp server to handle passive connections on your AWS EC2 instance, do the following.

Select unused TCP ports, for example the 9024-9048 range.

Configure firewalls:

In your AWS EC2 security group, allow incoming connections on the chosen ports:

tcp port 20
tcp port 21
tcp port 9024-9048

If using Windows Firewall on your instance, allow connections on the same ports.

Now configure FileZilla to use a specific port range for passive connections:

Open the FileZilla management console.

Go to: Edit > Settings > Passive Mode Settings

'External Server IP Address for passive mode transfers':

If you use an AWS Elastic IP, enter it under "Use the following IP";
if not, use the FileZilla-provided web service with the "Retrieve external IP address from" option.

Check 'Don't use external IP for local connections'

Check 'Use custom port range'

Enter the chosen values (in our example 9024 - 9048) for the custom port range.



Provided by: SiQ systems, Cloud experts

Wednesday, January 25, 2012

Adding global DNS servers

Sometimes you need to preconfigure your servers for DNS resolution before you know what the local DNS addresses will be, or you simply want backup DNS servers for resolution.
Here is a small list of globally accessible DNS servers (add them to /etc/resolv.conf):

nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 210.80.60.1
nameserver 210.80.60.2
nameserver 208.67.222.222
nameserver 208.67.220.220

Tuesday, January 24, 2012

NTP servers for Israel

For Israeli NTP servers you can use: il.pool.ntp.org

Since there are not enough servers in this zone, we recommend you back it up by also using the Asia zone (asia.pool.ntp.org) in your /etc/ntp.conf:

server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org

Provided by: SiQ systems, Cloud experts

solving error: Your current user or role does not have access to Kubernetes objects on this EKS cluster.

Trying to access an EKS cluster with kubectl, you might get an error similar to: Your current user or role does not have access to Kubernetes ob...