Wednesday, December 11, 2013

PPTP server on AWS Ubuntu instance

A simple VPN server configuration for easy VPN access to AWS using the built-in Windows VPN client.

First, install the PPTP server package:
sudo apt-get install pptpd

Now for the configuration.

Edit /etc/pptpd.conf:

option /etc/ppp/pptpd-options
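The post shows only the option line; a working pptpd.conf usually also needs address directives. A hedged sketch of what the file could look like (the comments describe the real pptpd.conf directives localip and remoteip, whose example values you must pick yourself to fit your VPC):

```
# /etc/pptpd.conf — example layout (adjust addresses to your network)
option /etc/ppp/pptpd-options
# localip  <address the server uses on its ppp interfaces>
# remoteip <pool of addresses handed out to VPN clients>
```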

Edit /etc/ppp/pptpd-options:

mtu 1420
mru 1420

Edit /etc/ppp/chap-secrets:

# client        server  secret                  IP addresses
client1      pptpd   secret1      *
client2      pptpd   secret2      *

Add the following to /etc/rc.local:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t mangle -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
iptables -t mangle -A OUTPUT -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
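For the MASQUERADE rule to actually forward client traffic, IPv4 forwarding must also be enabled in the kernel, which the post does not mention. A sketch of the usual fix (assuming it is off by default on the AMI):

```shell
# Enable IPv4 forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
# Persist the setting across reboots
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
```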


Finally, restart the service:

service pptpd restart


Provided by: Forthscale systems, cloud experts

Sunday, December 08, 2013

Fixing the missing apt-add-repository command

You might need to use the apt-add-repository script and get a missing command error. For example:
sudo: apt-add-repository: command not found

To fix that, you need to install the python-software-properties package:
# apt-get install python-software-properties

That's it.
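On newer Ubuntu releases (12.10 and later) the script moved to a different package, so if the above does not help:

```shell
# apt-add-repository lives in software-properties-common on Ubuntu 12.10+
sudo apt-get install software-properties-common
```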

Provided by: Forthscale systems, cloud experts

Thursday, November 28, 2013

How to disable Network Manager on Red Hat 6 based distributions (RHEL, CentOS, Oracle Linux)

To stop Network Manager (for example, when using a Pacemaker cluster) execute as root:
# service NetworkManager stop
To prevent Network Manager Service from starting at boot execute as root:
# chkconfig NetworkManager off

Keep in mind that you now need to manually configure your network interfaces.

Provided by: Forthscale systems, cloud experts

Wednesday, November 27, 2013

How to turn off SELinux in Red Hat based distributions (CentOS, Oracle Linux, Fedora)

To check if SELinux is running, execute the following command:
# getenforce

To disable it, execute the following command:
# setenforce Permissive

This puts SELinux in permissive (log-only) mode, which lasts until the machine is rebooted.
To permanently disable it, change the SELINUX= line in /etc/sysconfig/selinux.
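The edit can be scripted with sed. A hedged sketch, demonstrated on a scratch copy rather than the live file (on most systems /etc/sysconfig/selinux is a symlink to /etc/selinux/config; run the same sed as root against the real file):

```shell
# Demonstrate the SELINUX= substitution on a scratch copy of the config.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > ./selinux.conf.demo
sed -i 's/^SELINUX=.*/SELINUX=disabled/' ./selinux.conf.demo
grep '^SELINUX=' ./selinux.conf.demo   # → SELINUX=disabled
```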

Provided by: Forthscale systems, cloud experts

Saturday, November 09, 2013

Working and tested USB install procedure for RH 6 based distributions (RHEL 6.x, CentOS 6.x, Oracle Linux 6.x and others)

We needed to install CentOS 6.4 machines from USB and ran into absolute madness. None of the RHEL clones have a working procedure, and the workarounds consisted of using a FAT32 partition with 3rd party tools (problematic with install ISO files greater than 4GB) or multiple partitions and remounts. Eventually we found a very simple solution that almost worked, and fixed it.

To create the USB you will need:
  • A Red Hat based distribution to build the USB on
  • The actual install ISO image (in our case CentOS-6.4-x86_64-bin-DVD1.iso)
  • Fedora's livecd-iso-to-disk script
The steps are very simple:
Insert the USB stick into a port.
Find out the device name (for example /dev/sdb1): on systems with auto-mount just run df -h, or plug it in and run dmesg | tail -20.

Install livecd tools:
yum install livecd-tools

Make sure it is bootable (for example /dev/sdb1, by editing its parent device sdb):
/sbin/parted /dev/sdb
(parted) toggle 1 boot
(parted) quit

Format it with ext3 (the script does not support ext4):
mkfs.ext3 /dev/sdb1

Prepare the USB (for example using CentOS 6.4):
livecd-iso-to-disk --reset-mbr CentOS-6.4-x86_64-bin-DVD1.iso /dev/sdb1

Your USB is ready; it will boot but fail to install since it is missing the install root, so we need to fix it.
Edit extlinux.conf on the stick; you will see something similar to:
append initrd=initrd.img stage2=hd:UUID=791fc126-638c-4f28-8837-f3c2eae31e57:/images/install.img repo=hd:UUID=791fc126-638c-4f28-8837-f3c2eae31e57:/

What is missing is the "root=" directive, so you need to change the line to something similar to:
append initrd=initrd.img root=UUID=2cd71b0d-09a0-47b6-97ef-02c3fa90e9d3 repo=hd:UUID=2cd71b0d-09a0-47b6-97ef-02c3fa90e9d3:/

Save the file and boot from the USB. It will install your OS.
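If you prefer to script the extlinux.conf edit, a hedged sed sketch (the UUID is the example from above; find your stick's with blkid /dev/sdb1). Demonstrated here on a scratch copy; run the same sed against extlinux.conf on the mounted USB for real:

```shell
# Append the missing root= directive to the append line (demo on a scratch file).
UUID=2cd71b0d-09a0-47b6-97ef-02c3fa90e9d3
printf 'append initrd=initrd.img repo=hd:UUID=%s:/\n' "$UUID" > ./extlinux.conf.demo
sed -i "s|^append initrd=initrd.img|append initrd=initrd.img root=UUID=$UUID|" ./extlinux.conf.demo
grep -o "root=UUID=$UUID" ./extlinux.conf.demo
```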

Provided by: Forthscale systems, cloud experts

Monday, August 26, 2013

Accessing NoMachine NX Server as root

In a default installation of NX server, root access is disabled. To allow root login just follow these simple steps.

Edit the NX server configuration file:

and substitute the line reading

#EnableAdministratorLogin = "0"

with

EnableAdministratorLogin = "1"

Save the file and exit.

Then run in a shell as the root user:

/usr/NX/bin/nxserver --useradd root

That's it. Changes take effect immediately.

Monday, August 19, 2013

Setting up updates with public yum server for Oracle Enterprise Linux

Oracle provides a free public yum server to update its Enterprise Linux distribution.
It is easy to set up and supports versions 4, 5 and 6.

To set up public repositories for a given version, download the configuration file (for example with wget):

In Oracle Enterprise Linux 6.x

# cd /etc/yum.repos.d
# wget

In Oracle Enterprise Linux 5.x

# cd /etc/yum.repos.d
# wget

In Oracle Enterprise Linux 4, Update 6 or Newer

# cd /etc/yum.repos.d
# if you have an old repo file, disable it first:
# mv Oracle-Base.repo Oracle-Base.repo.disabled
# wget

You can verify new configuration with:
#  yum list

And execute an update with:
# yum update

Provided by: Forthscale systems, scalable infrastructure experts

Saturday, August 10, 2013

Vertica installation tutorial


How to install Vertica Analytic Database

Download Vertica

Download Vertica RPM from the site
For this tutorial we are using Community Edition 6.1.2-0, the latest version so far. This version has a bug that needs to be fixed manually in multiple-node installs; it will be fixed in future versions.

Server preparation

Before installing Vertica, a few things must be done on the server; some of them are optional (but strongly suggested). All of these steps must be done on all nodes if installing a multi-node solution.

Mandatory configuration

  • Install a Linux OS - we are using CentOS 6.4 for this tutorial
  • Check that the server has at least 1 GB of free RAM (the minimum for install; we suggest more memory for normal usage)
[root@vertica01 ~]# free -m
            total       used       free     shared    buffers     cached
Mem:          1877        465       1412          0         14        343
-/+ buffers/cache:        106       1770
Swap:         3039          0       3039
  • The server needs at least 2 GB of swap; check the Swap line in the same free -m output above
  • Disable SELinux:
[root@vertica01 ~]# vi /etc/sysconfig/selinux
  • Disable the firewall (for multiple-node installs). If your system must have a firewall, ensure that the following ports are open and not otherwise used:
(The table of required port numbers did not survive extraction; the services it listed were: Vertica client connections, Vertica spread, Vertica cluster communication, and the Vertica Management Console.)
  • Verify that the required session module is configured for the su command:
[root@vertica01 ~]# vi /etc/pam.d/su
session required

  • For a multiple-node install, add node names and IPs to the /etc/hosts file for name resolution. Add them even if you're using a DNS server, for faster resolution. Also add the master's own host name and IP, since Vertica does not check which host it is running on:
[root@vertica01 ~]# vi /etc/hosts
<node1-IP>  vertica01
<node2-IP>  vertica02
<node3-IP>  vertica03

  • For a multiple-node install, make sure that root and the DB management user (default dbadmin) can SSH between the nodes without a password. The root user's SSH access is used at install time only

Suggested configurations

  • Configure and start the NTP service
  • Disable CPU frequency scaling in the BIOS
  • Configure the I/O scheduler to deadline, noop or cfq:
[root@vertica01 ~]# vi /boot/grub/grub.conf
Add elevator=<name> to the kernel line
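The grub change only takes effect after a reboot; the scheduler can also be switched at runtime per disk via sysfs. A sketch, assuming the data disk is sda:

```shell
# Show available schedulers (the active one is shown in brackets)
cat /sys/block/sda/queue/scheduler
# Switch to deadline at runtime (root required; setting is lost on reboot)
echo deadline > /sys/block/sda/queue/scheduler
```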

Single Node

Once the system is ready for installation, install the RPM you downloaded from Vertica:
[root@vertica01 ~]# rpm -ivh vertica-6.1.2-0.x86_64.RHEL5.rpm
The RPM adds a new directory under /opt with the Vertica installation and management scripts. For a basic install, run the script with one parameter indicating the DB admin user; the script will create that user if needed:
[root@vertica01 ~]# /opt/vertica/sbin/install_vertica -u dbadmin
When the script finishes, Vertica is installed on a single node. You can use the admin tools to create a new DB:
[root@vertica01 ~]# /opt/vertica/bin/adminTools

Multiple Nodes

A multi-node installation uses the same script, run on only one server (the master), which installs Vertica on the rest of the nodes. To get the installation script, install the same RPM:
[root@vertica01 ~]# rpm -ivh vertica-6.1.2-0.x86_64.RHEL5.rpm
The script copies the RPM to the rest of the nodes, so it's important to have passwordless SSH between the nodes for the root user and to have all the nodes configured in /etc/hosts. Run the installation script with a few more parameters:
-s – comma-separated list of nodes
-r – path to the Vertica installation RPM file, to install on the rest of the nodes
-u – DB admin user with passwordless SSH login between the nodes
-T – point-to-point node communication; used when nodes are not on the same subnet or when nodes are virtual machines
[root@vertica01 ~]# /opt/vertica/sbin/install_vertica -s node01,node02,node03 -r ~/vertica-6.1.2-0.x86_64.rpm -u dbadmin -T
The installation takes more time, as it installs on several nodes and runs system and network tests. Once it is done you can create a new DB and check that it works on all the nodes:
[root@vertica01 ~]# /opt/vertica/bin/adminTools

Common problems

Installation bug – installation fails with Error: invalid literal for int() with base 10: '8%'
This is a bug in the CentOS/Red Hat installation, confirmed by Vertica; the current solution is a manual fix. Open /opt/vertica/oss/python/lib/python2.7/site-packages/vertica/network/ and on line 1982 change
df /tmp | tail -1 | awk '{print $4}'
to
df -P /tmp | tail -1 | awk '{print $4}'
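The one-character fix can also be applied with sed; demonstrated here on a scratch copy (the real target is the module under .../vertica/network/ mentioned above):

```shell
# Replace the non-POSIX df call with df -P (demo on a scratch file).
printf "df /tmp | tail -1 | awk '{print \$4}'\n" > ./netcheck.demo
sed -i 's|df /tmp|df -P /tmp|' ./netcheck.demo
grep 'df -P /tmp' ./netcheck.demo
```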

Network tests fail without error:
The installation will not succeed if SSH fails between the nodes. If you have never SSHed from vertica01 to vertica02, the system will ask to add its fingerprint to the known_hosts file, but the installation script can't do that on its own. There are two workarounds:
  • Manually SSH between all the nodes once
  • Change or add StrictHostKeyChecking no in /etc/ssh/ssh_config on all the nodes; this stops SSH from checking server fingerprints against known_hosts. You can read more about it here:

Provided by: ForthScale systems, scalable infrastructure experts

Wednesday, March 20, 2013

How to install PGPool II on PostgreSQL Servers in master-slave architecture + PGPoolAdmin web management

General Information

PGPool can run on the same server as the PostgreSQL DB or on a standalone server (recommended). In this article we will install PGPool on a standalone server; the only difference is the connection ports for PGPool and PostgreSQL.
We will install PGPool II 3.1 on PostgreSQL 9.1.

Basic architecture:

            ┌────────────────┐
            │    pgpool-1    │
            │  pgpool server │
            └────────────────┘
              //          \\
             \/            \/
┌────────────────┐             ┌────────────────┐
│    pgsql-1     │  streaming  │    pgsql-2     │
│  pgsql master  │════════════>│  pgsql slave   │
│     server     │ replication │     server     │
└────────────────┘             └────────────────┘

Fail cases:

Slave fails

If the slave server fails, PGPool runs the failover script and marks the server as down (state 3). It reconnects all open connections on that server to the master; users on those connections are disconnected.
When the server is fixed and started up, you need to manually start the recovery process. For this, PostgreSQL must be turned off on the failed server. The recovery process takes a backup from the master and restores it on the slave; when the restore finishes, PGPool connects the node back. Streaming replication then restores all the data inserted since the backup. No data is lost.

Master fails

If the master server fails, PGPool runs the failover script, which notices that the failed server is the master and sets a trigger on the slave server. When the slave notices the trigger, it promotes itself to master. All connections to the old master are reconnected to the slave, and users are disconnected. Any data inserted on the master but not yet replicated to the slave is lost. When the failed server is fixed and started, you need to manually start the recovery process: turn PostgreSQL off on that server, let the recovery take a backup from the new master and restore it, after which PGPool connects the node back and streaming replication catches it up.

PGPool fails

PGPool is a SPOF; if it fails, clients cannot connect at all. To fix this you need to create a PGPool HA cluster, which can be done with cluster software like Pacemaker or Linux Heartbeat.

PGPool II setup

PGPool can be installed from source or from OS packages. I recommend installing from source, as we will need some files from the tar package. Source files can be downloaded from the PGPool official site:
Before compiling PGPool you might need to install missing packages:

  • For Debian/Ubuntu: libpq-dev
  • For RH/CentOS: postgresql-libs
Download the needed version of PGPool, unpack and install it:
$ tar xfz /some/where/pgpool-II-3.1.1.tar.gz
$ cd pgpool-II-3.1.1
$ ./configure
$ make
$ make install

You can configure the PGPool install path using the --prefix flag. In this article we use the default configuration, which installs the PGPool configuration files to /usr/local/share/etc.
If you install from the OS software manager you can't configure the path, but you won't need to install the missing packages; the default configuration path will be /etc/pgpool2/.
After installing PGPool we need to do some basic configuration. Here is a sample pgpool.conf with the basic settings needed to run PGPool in master-slave mode with streaming replication:
link for configuration

Make sure you change these settings to the ones you need:

  • PGPool client connections port
port = 5432

  • PostgreSQL master server
backend_hostname0 = 'pgsql-1'

  • PostgreSQL connection port
backend_port0 = 5432

  • Load balancing weight
backend_weight0 = 0

  • PostgreSQL data path(on PostgreSQL server)
backend_data_directory0 = '/var/lib/postgresql/9.1/main'

  • PostgreSQL username and password (super user)
sr_check_user = 'postgres'
sr_check_password = ''

  • Health check for PostgreSQL nodes period in seconds
health_check_period = 10

  • PostgreSQL username and password (super user)
health_check_user = 'postgres'
health_check_password = ''

  • User to run recovery script on PostgreSQL server
recovery_user = 'postgres'
recovery_password = ''

Next we will set up a user for the PGPool administration tool called PCP (it can later be used in PGPoolAdmin too). We need to hash the user's password with md5:
$ pg_md5 password

Edit pcp.conf and insert the new password hash along with the username at the end of the file:
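pcp.conf lines have the form username:md5hash. pg_md5 is the pgpool helper that produces the hash, but plain md5sum yields the same digest; a sketch (the username admin and the password are examples):

```shell
# Build a pcp.conf entry: "username:md5-of-password"
# (pg_md5 password would print the same hash)
HASH=$(printf '%s' 'password' | md5sum | cut -d' ' -f1)
echo "admin:$HASH"   # → admin:5f4dcc3b5aa765d61d8327deb882cf99
```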

Next we need to add the failover script that is already referenced in pgpool.conf. The script checks whether the failed PostgreSQL server is the master or a slave; if the master failed, it puts a trigger file on the slave, which then promotes itself to master. Put this script in the same location as the configuration files (/usr/local/share/etc) and give it execute permissions:
link to
Create the necessary directories:
$ mkdir /var/run/pgpool
$ mkdir /var/log/pgpool

Set permissions on these directories for the user that will run PGPool.
Now PGPool is fully configured and can be started. To start it, just run the pgpool command. By default it runs in your current shell and prints its log to STDOUT. Here is a sample command that starts PGPool as a daemon and saves the log to a file:
$ pgpool -n -d > /tmp/pgpool.log 2>&1 &

You may want to hold off starting PGPool now if you intend to install PGPoolAdmin.

PGPoolAdmin setup (optional)

PGPoolAdmin lets you manage PGPool: start/stop PGPool, edit configurations, and add/remove/recover/promote PostgreSQL nodes.
To install PGPoolAdmin you will first need to install the Apache service on the server (I hope you know how to do that).
Unpack the tar file and put its contents into the Apache data directory:
$ tar xfz /some/where/pgpoolAdmin-3.1.1.tar.gz
$ mv /some/where/pgpoolAdmin-3.1.1/pgpooladmin /var/www/html/

Now we need to give the apache user permission to edit the configuration files:
$ chown apache /var/run/pgpool
$ chown apache /var/log/pgpool
$ chown apache /usr/local/share/etc/pgpool.conf
$ chown apache /usr/local/share/etc/pool_hba.conf
$ chown -R apache /var/www/html/pgpooladmin

The PGPool recovery script must log in to the PostgreSQL server and create a trigger file; to do so, the apache user must be able to log in to that server via SSH. To increase security you might want to configure the firewall to allow connections only from the PGPool server.
PGPoolAdmin is now ready to be installed and used. Open http://serverIP/pgpooladmin/install/index.php in your browser and follow the steps. At the end, PGPoolAdmin will start PGPool on the server. To log in, use the PCP user and password configured before.

PostgreSQL setup

At this point we have PGPool running and accepting connections, but it might not work properly, as PostgreSQL might not accept those connections. In this step we will reconfigure your PostgreSQL server. This must be done on every new node you want to add to PGPool, before running the recovery process.
Edit the postgresql.conf configuration file with the following variables:
listen_addresses = '*'
hot_standby = on
wal_level = hot_standby
max_wal_senders = 1
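Changing wal_level and max_wal_senders requires a full PostgreSQL restart (a reload is not enough). A sketch, assuming the Debian/Ubuntu-style init script used elsewhere in this post:

```shell
# Restart PostgreSQL so wal_level / max_wal_senders take effect
sudo /etc/init.d/postgresql restart
```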

Edit the pg_hba.conf configuration file:
host all all trust
host replication postgres trust

The first line means the server will accept connections from any IP, without authentication, for any user. You can restrict it to your subnet or to the IP of the PGPool server. Note: md5 authentication does not work with the PGPool master-slave configuration.
The second line allows streaming replication from any server, without authentication, for the user postgres. This can likewise be restricted to the needed subnet or the IP of the second server, and you can use another user for replication (the user must have the REPLICATION or SUPERUSER permission on the second server).
Now we need to set up some PGPool recovery functions. If you installed PGPool from source you have them inside the tar file; if you installed it from the software manager, download the source file and follow this step.
Inside the source tree there is a sql directory. You need to compile each of its subdirectories and run the resulting SQL file on the postgres DB and on template1. If you installed PostgreSQL from the software manager you will also need an additional package:

  • For Debian/Ubuntu: postgresql-server-dev
  • For RH/CentOS: postgresql-devel
Run in each directory:
$ make install
$ psql -f pgpool-*****.sql postgres
$ psql -f pgpool-*****.sql template1

The PGPool recovery process uses two scripts, one on the master and a second on the slave. The first script is executed on the master: it creates a backup and sends it to the slave. This script needs to be in the PostgreSQL data directory (default /var/lib/postgresql/9.1/main) with execute permissions. Also, the user that runs the recovery process on the nodes needs to be able to log in to the other nodes without a password (again, I hope you can do that by yourself).

The second script allows PGPool to start the PostgreSQL service. It uses the pg_ctl command; check that the path is correct. If you don't have pg_ctl, change the path to /etc/init.d/postgresql and remove "-w -D $DESTDIR" from the command.

Create the trigger directory and give postgres permissions on it:
$ mkdir -p /var/log/pgpool/trigger
$ chown postgres /var/log/pgpool/trigger

Finishing the setup

Now you should have PGPool running and the PostgreSQL master server configured. The last step is to configure the slave and test everything. Do all the same configuration on the slave as on the master, and at the end stop the PostgreSQL service. If you installed PGPoolAdmin, log in to the interface, go to "Edit pgpool.conf", then to "Backends", and press "Add". Fill in all the info for the new backend and change "backend_weight" on both backends to 1. Save the configuration and reload PGPool. When PGPool discovers the new node, press "Recovery" next to it. After recovery finishes, the node is attached and the cluster is ready.

Have questions? Just contact us right away and we will be happy to assist
