Wednesday, March 16, 2016

5 Essential Continuous Integration Tools



A lot has been said about continuous integration. Here are our five cents. If you are about to try it out, or are seriously researching a move to CI, you might want some pointers on the tools you will need. So we compiled a shortlist of the most useful tools for continuous integration.



Jenkins


Everyone knows Jenkins. It is a hardcore, open-source continuous integration server. It is mostly used for Java project development, but Jenkins can also work with several .NET version control systems, which makes it well suited for .NET projects too. With Jenkins you get a robust developer community, easy installation and more than 400 plugins for deep customization.



TeamCity


This Java-based continuous integration server lets you build for .NET and mobile platforms. It runs locally and has a system tray notification tool that will alert you, including by email, if any issues occur while a build is finishing.

Also, TeamCity has built-in support for project hierarchy: you can build a project tree whose subprojects inherit settings and permissions.



Codeship


Codeship supports the most popular languages: Java, Ruby, PHP and Python. It integrates with multiple services such as GitHub, Bitbucket, Deploy Anywhere and Engine Yard. Codeship significantly eases deployment: you just define a project in the UI and set a couple of parameters before making a commit. Codeship will test new code once it's pushed and deploy it automatically.



Travis CI


This tool was built to test projects hosted on GitHub. Travis CI has a pretty easy and straightforward setup: it only requires a .yaml configuration file to be created in the root of the desired repository. You can develop in C, C++, JavaScript, PHP, Perl and Python; all are supported.
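For illustration only - the project layout, Python versions and test command below are assumptions, not taken from any particular repository - a minimal .travis.yml for a Python project could look roughly like this:

```yaml
# Minimal Travis CI configuration, saved as .travis.yml in the repository root
language: python
python:
  - "2.7"
  - "3.5"
# Install project dependencies before running the build
install:
  - pip install -r requirements.txt
# The command whose exit code decides whether the build passes
script:
  - python -m pytest
```

Travis CI runs the script step once per listed Python version and marks the build green only if every run passes.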



Heili


Heili is an AI infrastructure monitoring and management solution. Think of it as a sidekick that helps out a true DevOps superhero. It automatically discovers your stack in the cloud, deploys monitoring and gives you peace of mind. Heili uses Ansible for provisioning, so you get tons of automation out of the box and can add your own in just a few clicks. Running a stack on AWS, Google or SoftLayer has never been easier.

Which tools do you use? Share with us in the comments!

Provided by: Forthscale systems, cloud experts

Tuesday, February 23, 2016

10 cool tools every DevOps team needs



Everyone needs tools that help increase productivity, and DevOps engineers are no exception. In this article you will find some awesome and useful tools that you will definitely love.


1. Logstash




This tool lets teams analyze log file information. Logstash helps you improve your product by gleaning performance and behavioral metrics from your logs.
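As a sketch of how a Logstash pipeline is wired together (the log path and Elasticsearch address are assumptions you would adapt), a minimal configuration that parses a web server access log could look like this:

```
# logstash.conf - read an access log, parse it, ship it to Elasticsearch
input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  # COMBINEDAPACHELOG splits each line into fields: clientip, verb, response, etc.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```

The input/filter/output stages are the whole mental model: every Logstash pipeline is a chain of these three blocks.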


2. ITinvolve



Productive collaboration is extremely important for every DevOps team. This tool gives you an opportunity to share tools, workspaces, diagrams and other visual information and documentation. ITinvolve is designed for IT, so every team member gets an at-a-glance understanding of how a change can affect an existing process or system.


3. Docker



There is even a verb, "Dockerize", used by DevOps teams who work with this containerization tool. Docker eases the process of pushing code from development to production without the usual hiccups. It provides the standardization that makes Ops people happy and the flexibility to use almost any language or tool.
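To make "Dockerize" concrete - the file names here are hypothetical, stand-ins for your own app - a Dockerfile describes the entire runtime environment in a few lines:

```dockerfile
# Build an image for a small Python web app
FROM python:2.7
COPY requirements.txt app.py /opt/app/
RUN pip install -r /opt/app/requirements.txt
# The command the container runs; when it exits, the container stops
CMD ["python", "/opt/app/app.py"]
```

A docker build -t myapp . followed by docker run -d myapp then gives every environment, from a laptop to production, the exact same stack.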


4. Heili



It is a real life-changer for DevOps. Heili is an Artificial-Intelligence-driven, self-learning tool that will do the management work for you. It will learn your operations, increase reliability and cut the duration of performance degradation and downtime incidents.


5. Jenkins



This tool will help you build and test your software continuously. It also monitors externally-run jobs, so you can see when something goes wrong.

6. Apache ActiveMQ



Apache ActiveMQ is an open source messaging and integration patterns server. DevOps engineers prefer ActiveMQ because it is really fast, and it supports several cross-language clients and protocols.
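That cross-protocol support is configured in the broker's activemq.xml. As an illustrative fragment (the ports shown are the ActiveMQ defaults), enabling several wire protocols side by side looks roughly like this:

```xml
<!-- Fragment of conf/activemq.xml: one broker, several wire protocols -->
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  <transportConnector name="amqp"     uri="amqp://0.0.0.0:5672"/>
  <transportConnector name="stomp"    uri="stomp://0.0.0.0:61613"/>
  <transportConnector name="mqtt"     uri="mqtt://0.0.0.0:1883"/>
</transportConnectors>
```

A Java client can then talk OpenWire while, say, a Python script uses STOMP against the same broker.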


7. Squid



Squid optimizes web delivery, reducing bandwidth and improving response times by caching and reusing frequently requested web pages. It is supported on most available operating systems.
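A minimal squid.conf sketch - the network range and cache sizes are assumptions you would tune to your environment:

```
# Listen on the default proxy port
http_port 3128
# Memory and disk cache: 256 MB RAM, 1 GB on disk
cache_mem 256 MB
cache_dir ufs /var/spool/squid 1024 16 256
# Only let the local network use the proxy
acl localnet src 10.0.0.0/8
http_access allow localnet
http_access deny all
```

The http_access rules are evaluated top to bottom, so the final deny all is what keeps the proxy closed to everyone you haven't explicitly allowed.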


8. Snort



This is a good solution for DevOps engineers looking for a security tool that provides real-time traffic analysis and packet logging. Snort is able to detect a variety of attacks and probes.
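Detection in Snort is driven by rules. As a hedged example (the threshold values and SID are arbitrary; local rules conventionally use SIDs above 1,000,000), a rule flagging a possible SSH brute-force attempt could look like this:

```
# local.rules - alert when one source opens 5+ SSH connections within 60 seconds
alert tcp any any -> $HOME_NET 22 (msg:"Possible SSH brute-force attempt"; \
    flags:S; threshold:type threshold, track by_src, count 5, seconds 60; \
    sid:1000001; rev:1;)
```

Each alert this rule fires shows up in Snort's packet logs with the msg text, so you can trace the offending source address.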


9. Ansible



It is a configuration management tool that will significantly improve your productivity. Ansible is an all-in-one instrument: app deployment, configuration management and orchestration.
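A minimal playbook sketch - the webservers group and nginx are stand-ins for your own inventory and services:

```yaml
# site.yml - run with: ansible-playbook -i inventory site.yml
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Make sure nginx is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Because every task declares a desired state rather than a command, re-running the playbook is safe: Ansible only changes what doesn't already match.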

10. Code Climate



This tool will monitor the health of your code from your command line to the cloud, so you can fix issues sooner and ship better code, faster.


Which tools do you use?


Provided by: Forthscale systems, cloud experts

Monday, July 06, 2015

Part 2: Using Docker

 
This is the second part of our Docker tutorials; we strongly suggest reading Part 1 before continuing here. At this point you should be familiar with basic Docker commands and with running containers with cool random names. But you need more to do something really useful with Docker. So today we will learn how to build a basic web application out of a couple of containers, which will look like this at the end:
As you can see, we will have a lot of different pieces by the end of this tutorial. This structure is not production-grade; it's just to show what Docker can do.

Docker Hub / Database

I've picked PostgreSQL for this tutorial, but treat this more as a concept, because you can do the same with any DB (SQL and NoSQL).
With Docker your life should be easy: you don't have to install machines and DBs anymore, you just download the image you need and use it. Most software has ready-made images in Docker Hub, and a lot of it already has preconfigured images made by users. But what is Docker Hub? It's the official public image repository - https://hub.docker.com. You can register and upload your images to the hub for personal and public use. If you continue using Docker you will also meet private registries (owned by companies or individuals); they work the same way (later in this series we will even install a registry server).
Once you have logged in to the Hub (you don't have to) you can search for postgres. I found 10 repositories - the first one is the official repository and 9 were made by users:
It's always suggested to take official repositories and modify them to your needs, but a user repository may fit if you're feeling lazy (do check what the images include). If you're using a user repository it's also important to check the last update. Sometimes it's good to use an old but stable version, but sometimes you need an up-to-date one.
Inside the repository you will see tag information (usually tags represent versions) and how to use the repository.
Now let's go to your machine and start building containers.
As always we will use the latest image and version, just because this is a tutorial and we don't really care about pinning :)
Actually, if you're sure you're going to use common software, you don't have to look in the repository at all - just pull it by name. The repository page can be useful for usage instructions, but it's not a must, as you can also view the image configuration. Anyway, let's pull our repo:

davidg@linux-cpl8:~> docker pull postgres:latest
latest: Pulling from postgres
104de4492b99: Pull complete 
065218d54d7d: Pull complete 
6d342ad75f37: Pull complete 
9433e325f9ad: Pull complete 
7c38e9491f7e: Pull complete 
d9a636286bd1: Pull complete 
4020db192fff: Pull complete 
b93ccbbdcc22: Pull complete 
13af7ae40c45: Pull complete 
e6826f7776c8: Pull complete 
5c67b212f3da: Pull complete 
1e87f75b5751: Pull complete 
24a73f6adf68: Pull complete 
effe3b6a83fc: Pull complete 
65482096da78: Pull complete 
089cc1d86ef7: Pull complete 
d4c43025a271: Pull complete 
41684070b967: Pull complete 
f969e36858c2: Pull complete 
a2c56c0927fc: Pull complete 
7bf0ec35adaf: Already exists 
postgres:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:0b2d2e463174edacb17d976d72adc839c032bfcfdf6da6799e288014d59998f8
Status: Downloaded newer image for postgres:latest 
And run your first Postgres container:

~ $ docker run --name postgres_test -e POSTGRES_PASSWORD=d0ckerul3z -d postgres
415a2a8734845a8d9188e959bb7acf90ecf21da1479f8213a6df4e2ac096430a
~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
415a2a873484        postgres:latest     "/docker-entrypoint.   5 minutes ago       Up 5 minutes        5432/tcp            postgres_test
So now we have a DB with a default database - let's connect to it! But how? 'postgres_test' is the name of the container, not a DNS name. How does Docker networking work?
I'll keep the network explanation short. If you run ifconfig you will notice a new device:

~ $ ifconfig
docker0   Link encap:Ethernet  HWaddr 56:84:7A:FE:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:304 (304.0 b)  TX bytes:6496 (6.3 Kb)
The docker0 device is created when the Docker service starts; it's a virtual Ethernet bridge. All your containers' traffic goes through it - it's the gateway. This means we can connect to a container via its IP:

~ $ docker inspect postgres_test | grep IPAddress
        "IPAddress": "172.17.0.1",
If you run 'docker inspect [CONTAINER_NAME]' without the 'grep' you will see all the info about the container - try it. When you're done, let's connect to our DB:

~ $ psql -h 172.17.0.1 -U postgres postgres
Password for user postgres: d0ckerul3z
psql (9.3.6, server 9.4.4)
WARNING: psql major version 9.3, server major version 9.4.
         Some psql features might not work.
Type "help" for help.

postgres=# 
Now you have a container with a fully functional DB.

Application Container

The next part is to create an application that will connect to our DB and use it.
I'm using a simple Flask application for the basics, but it can be anything you want:

import psycopg2
import psycopg2.extras
import sys
from flask import Flask

app = Flask(__name__)

@app.route("/")
def test():
  con = None
  result = '<h1>The Guardians of the Galaxy</h1><table border="1"><tr><th>&nbsp;</th><th>Character</th><th>Real Name</th></tr>'
  try:
    con = psycopg2.connect("host='postgres_test' dbname='postgres' user='postgres' password='d0ckerul3z'") 
    cursor = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cursor.execute("SELECT * FROM guardians")
    rows = cursor.fetchall()
    for row in rows:
      if row['teamleader']:
        result += "<tr><td>%s</td><td><b>%s</b></td><td><b>%s</b></td></tr>" % (row["id"], row["character"], row["realname"])
      else:
        result += "<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % (row["id"], row["character"], row["realname"])
    result += '</table>'
  except psycopg2.DatabaseError, e:
    result =  'Error %s' % e    
  finally:
    if con:
      con.close()
  return result

if __name__ == "__main__":
  db_data = (
    ('Adam Warlock', 'Him', 'false'),
    ('Drax the Destroyer', 'Arthur Sampson Douglas', 'false'),
    ('Gamora', 'Gamora', 'false'),
    ('Quasar a.k.a. Martyr', 'Phyla-Vell', 'false'),
    ('Rocket Raccoon', 'Rocket Raccoon', 'true'),
    ('Star-Lord', 'Peter Quill', 'true'),
    ('Groot', 'Groot', 'false'),
    ('Mantis', 'Mantis', 'false'),
    ('Major Victory', 'Vance Astro', 'false'),
    ('Bug', 'Bug', 'false'),
    ('Jack Flag', 'Jack Harrison', 'false'),
    ('Cosmo the Spacedog', 'Cosmo', 'false'),
    ('Moondragon', 'Heather Douglas', 'false'),
)

  con = None
  try:
    con = psycopg2.connect("host='postgres_test' dbname='postgres' user='postgres' password='d0ckerul3z'")   
    cur = con.cursor()  
    cur.execute("DROP TABLE IF EXISTS Guardians")
    cur.execute("CREATE TABLE Guardians(Id SERIAL PRIMARY KEY, Character TEXT, RealName TEXT, TeamLeader BOOLEAN)")
    query = "INSERT INTO Guardians (Character, RealName, TeamLeader) VALUES (%s, %s, %s)"
    cur.executemany(query, db_data)
    con.commit()
  except psycopg2.DatabaseError, e:
    if con:
        con.rollback()
    print 'Error %s' % e    
    sys.exit(1)
  finally:
    if con:
      con.close()
  app.run(host= '0.0.0.0')

And now for the interesting part: getting this into a container! You could always start a plain CentOS container, copy the files there and start them manually, but why use Docker like that?
In Docker we make it simpler and more automatic - we build an image, just like the image you downloaded earlier!
How do we do it? Let's start by making a new directory where the new image files will be stored:

~ $ mkdir docker_image_1
~ $ cd docker_image_1/
~/docker_image_1 $ 
Save the code as 'guardians.py'. Create a new file called 'Dockerfile' with your favorite editor and insert this:

FROM centos:latest
MAINTAINER David Golovan 
LABEL Description="guardians:1 - WebApp to print Guardians list" Vendor="Forthscale" Version="1.0"

RUN yum -y update && yum -y install epel-release 
RUN yum -y install python-pip gcc postgresql postgresql-devel python-devel
RUN pip install flask psycopg2
COPY guardians.py /opt/guardians/
CMD ["python", "/opt/guardians/guardians.py"] 
Save both files in the new directory. Before we proceed, what exactly are we doing here?

  • FROM - Image name + tag. This is the image ours is based on, so it will contain everything from it.
  • MAINTAINER - Info about the image owner, in case the image goes public.
  • LABEL - Information about the image and its version.
  • RUN - Execute a shell command. We are running yum install and pip install for packages, but it can be any command.
  • COPY - Copy a file/directory from the local machine into the image. We are copying our script to /opt/guardians/ in the image.
  • CMD - Command to execute once the container is running. When this command stops (error or just exit) the container will also stop, so always make sure to run commands that don't exit :)
Let's build our image, and then we can start containers from it. Note: I haven't pulled the CentOS image before, so the build will pull it on the first run:

~/docker_image_1 $ docker build -t forthscale/guardians:1.0 .
Sending build context to Docker daemon 5.632 kB
Sending build context to Docker daemon 
Step 0 : FROM centos:latest
latest: Pulling from centos
f1b10cd84249: Pull complete 
c852f6d61e65: Pull complete 
7322fbe74aa5: Already exists 
centos:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:a4627c43bafc86705af2e8a5ea1f0ed34fbf27b6e7a392e5ee45dbd4736627cc
Status: Downloaded newer image for centos:latest
 ---> 7322fbe74aa5
Step 1 : MAINTAINER David Golovan <davidg@forthscale.com>
 ---> Running in cded0075df8b
 ---> b582a22635b5
Removing intermediate container cded0075df8b
Step 2 : LABEL Description "guardians:1 - WebApp to print Guardians list" Vendor "Forthscale" Version "1.0"
 ---> Running in e8ba10cb9536
 ---> 3779b4331b2e
Removing intermediate container e8ba10cb9536
Step 3 : RUN yum -y update && yum -y install epel-release
 ---> Running in 2bab13a17bf5
Loaded plugins: fastestmirror
Determining fastest mirrors
.....
 ---> b7207970f3b8
Removing intermediate container 841af65c665a
Step 4 : RUN yum -y install python-pip gcc postgresql postgresql-devel python-devel
 ---> Running in ffbb682e8747
Loaded plugins: fastestmirror
.......
 
 ---> 534c270b560b
Removing intermediate container ffbb682e8747
Step 5 : RUN pip install flask psycopg2
 ---> Running in 7f380d6f374c
Downloading/unpacking flask
....... 
Downloading/unpacking psycopg2
Successfully installed flask psycopg2 Werkzeug Jinja2 itsdangerous markupsafe
Cleaning up...
 ---> a7038e07d3fc
Removing intermediate container b066cc5e4b21
Step 6 : COPY guardians.py /opt/guardians/
 ---> a98ee8d4bc21
Removing intermediate container ffb5d8030f25
Step 7 : CMD python /opt/guardians/guardians.py
 ---> Running in c2c8da611dca
 ---> 70df63860b6e
Removing intermediate container c2c8da611dca
Successfully built 70df63860b6e
Our image is ready!

~/docker_image_1 $ docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
forthscale/guardians                  1.0                   70df63860b6e        About a minute ago   379.2 MB
centos                                latest              7322fbe74aa5        2 weeks ago          172.2 MB
postgres                              latest              7bf0ec35adaf        2 weeks ago          213.9 MB
ubuntu                                latest              6d4946999d4f        3 weeks ago          188.3 MB
ubuntu                                trusty              6d4946999d4f        3 weeks ago          188.3 MB
ubuntu                                14.04               6d4946999d4f        3 weeks ago          188.3 MB
You can use it the same way as any other image, but we need to let the app container know about the DB container. In the previous step we found the IP manually, which is not good for a production environment, especially when addresses are random. To handle this problem, Docker lets you link containers on the same host (and even on different hosts, using the ambassador pattern) with just a flag. The second problem is that all Docker container ports are closed to the public by default, and we need to open ours (if you look at the postgres container you will see it exposes port 5432/tcp):

~/docker_image_1 $ docker run -p 8080:5000 --link postgres_test:postgres_test --name guardian -d forthscale/guardians:1.0
fb2d7143af007fccdad3bf74c500a55562757c4a0fedc4ecdd9e9b35d6c22b99

  • -p 8080:5000 - Open a port and redirect it. 8080 is the public port and 5000 is the port our application listens on (the default Flask port).
  • --link - Link to a container. First comes the container name (that's why it's important to name your containers), then the alias to use for it inside the new container (I prefer to use the same name).
Open your browser and go to http://localhost:8080


But wait, that's not all - you can do even more! We can have a shared directory for all the containers and, for example, serve static files from there.
In the Dockerfile, add these lines before the CMD:

RUN mkdir /opt/guardians/static 
VOLUME ["/opt/guardians/static"] 
Change these lines in guardians.py

app = Flask(__name__) 
result = '<h1>The Guardians of the Galaxy</h1><table border="1"><tr><th>&nbsp;</th><th>Character</th><th>Real Name</th></tr>'
To

app = Flask(__name__, static_url_path = "/static", static_folder = "static")
result = '<img src="static/small_h.png" /><br /><h1>The Guardians of the Galaxy</h1><table border="1"><tr><th>&nbsp;</th><th>Character</th><th>Real Name</th></tr>' 
Remove the old container, then build the image again; now you can use the 1.1 tag:

~/docker_image_1 $ docker stop guardian && docker rm guardian 
~/docker_image_1 $ docker build -t forthscale/guardians:1.1 .
Now create the shared directory and put the Docker logo there:

~/docker_image_1 $ mkdir /tmp/guardians
~/docker_image_1 $ wget -P /tmp/guardians https://www.docker.com/sites/default/files/legal/small_h.png
And start new container:

~/docker_image_1 $ docker run -p 8080:5000 --link postgres_test:postgres_test -v /tmp/guardians:/opt/guardians/static --name guardian -d forthscale/guardians:1.1
Test your page again :) You can start another container from the same image - just make sure it uses a different host port - and it will use the same DB and the same directory for the static image. Here is an explanation of what you did in these steps:

  • In the Dockerfile you added a command to create a new folder and made it mountable with the VOLUME command.
  • In guardians.py you enabled static files in Flask, set their location and added an HTML tag to show the image.
  • The image build was fast because Docker caches most of the layers.
  • In the docker run command, the -v flag lets you map any local directory onto a volume directory of the image. Running docker inspect [CONTAINER] will list the volumes with the read/write permissions you can override.

Now you can build much more advanced containers for your applications.


Provided by: Forthscale systems, cloud experts

Monday, June 29, 2015

Part 1: Starting with Docker

If you haven’t heard about Docker by now, you are probably hanging out with the wrong crowd. As we speak, Docker is the next big thing after cloud computing, and if you don’t yet understand how significant they are: last April they joined the Unicorn Club.

What is this post all about? It is the first in a series of posts about Docker usage. I’ll start with how to begin using Docker (yet another newbie tutorial) in a common infrastructure. The next posts will concentrate on more advanced Docker usage.


Basics. What is Docker?

Docker is an open platform for building, shipping and running distributed applications. It gives programmers, development teams and operations engineers the common toolbox they need to take advantage of the distributed and networked nature of modern applications
Docker.io
Basically, Docker lets you run applications packaged in containers to simplify deployment. A simpler explanation is that you can run a virtual OS on any Linux (and soon Microsoft) powered machine. This is not an exact explanation, since a container is not a real VM; you just receive a ready-to-use virtual environment. What is a container? A container is a Linux-kernel-level virtual environment - sort of an evolution of chroot jail environments.
If you compare LXC/Docker to any VM you can run on your computer, the biggest difference is how lightweight containers are. Each VM needs its own disk space and runs a full OS, while containers share the host OS and use a layered FS (AUFS). So it’s lighter, faster and easier.

Why Docker?

The news that Docker is worth more than $1B is not a reason to use it, but the Moby Dock whale logo may just be :). Seriously, Docker is an open source platform, and if you’re reading this blog you know the benefits of that. They were not the first to build such a service, but they have been working on it for two years, and at this point Docker’s leadership has made it a de facto standard. Docker can run on any Linux OS and has native support from big cloud providers - AWS, Google Compute Engine, OpenStack, ProfitBricks - with more joining every day. Deploying Docker is really easy: I started using Docker while riding a train with bad WiFi, and after 1 hour I had a small application cloud with a web server and a DB.
Actually, that’s the main reason we chose Docker to deploy our own infrastructure manager - Heili :) There are a few other products with similar functionality, but they are neither as advanced nor as solid as Docker.

Let’s start

Let’s leave the theory here; if you want to know more, check the Docker site or Google.
Let’s start building containers! As I already said, Docker runs on a lot of different OSes, and it doesn’t really matter which one you run once Docker is installed. That’s why I decided not to explain the installation and will just point you to the official installation guide:


Now, once you have Docker installed, let’s make our first container. But first we need an OS. Docker has ready-made images of common OSes, and we will use Ubuntu for this tutorial

~ $ docker pull ubuntu:14.04
14.04: Pulling from ubuntu
428b411c28f0: Pull complete
435050075b3f: Pull complete 
9fd3c8c9af32: Pull complete 
6d4946999d4f: Pull complete 
ubuntu:14.04: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:59662c823007a7a6fbc411910b472cf4ed862e8f74603267ddfe20d5af4f9d79
Status: Downloaded newer image for ubuntu:14.04
Let’s see what this command does:
  • pull - Pull an image from a registry (by default, Docker Hub)
  • ubuntu - Image name
  • :14.04 - Image tag, in this case the OS version
After some time (depending on your internet connection speed) you will have an Ubuntu 14.04 image ready for use. And now you can start the container :)

~ $ docker run -t -i ubuntu:14.04
root@eb029641a22c:/#
  • run - Run a command in a new container
  • -t - Allocate a pseudo-terminal, so we will see the output
  • -i - Interactive; you will be able to use STDIN
  • ubuntu:14.04 - Image name
That’s all - you have a running container with Ubuntu! Of course, that’s not how we normally use containers: we don’t want them to be another kind of VM, we want a container to run a command/application. That’s why Docker has the run command. So how do we do it?
The docker run command also expects to receive a command to run once the container is up:

~ $ docker run -t -i ubuntu:14.04 echo “Hello World”
Hello World
~ $
Cool! Our container printed “Hello World” and we are back in our original OS. But why?
A Docker container is designed to run one command, and once this command is done the container exits. So how can we run something and keep the container running? Just execute a command that never exits until something happens - in our case a simple loop (press Ctrl+C to stop it):

~ $ docker run -t -i ubuntu:14.04  sh -c "while :; do echo \"Hello World\"; sleep 10; done"
Hello World
Hello World
^C~ $
So we have a simple shell loop that will run until the world collapses, but we don’t want to watch the output all the time - if we did, why use a container at all? This mode is really helpful for debugging an image (more on that in the next posts), while our production container will be daemonized:

~ $ docker run -d ubuntu:14.04  sh -c "while :; do echo \"Hello World\"; sleep 10; done"
dde9009d78b7b6f58a57eb7737bea01913552164c2f4125bb1e9fe6e8336b0db
~ $
Voila! Something happened and we now have a long strange string, but where is the container? Is it running? Docker has a command that lets us see all containers; let’s check what we can find with its help:

~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
46671d396781        ubuntu:14.04        "\"sh -c 'while :; d   5 minutes ago      Exited (130) 5 minutes ago                       mad_thompson        
4460b0b7a5f7        ubuntu:14.04        "echo “Hello World     8 minutes ago      Exited (0) 8 minutes ago                        gloomy_pasteur      
50292108ff3b        ubuntu:14.04        "/bin/bash"            12 minutes ago      Exited (0) 12 minutes ago                        silly_lumiere       
dde9009d78b7        ubuntu:14.04        "\"sh -c 'while :; d   3 minutes ago       Up 3 minutes                            cranky_turing
Let’s start with the command:
  • ps - List containers
  • -a - Show all containers; without this flag we see only running ones
And what do we have in the output?
  • Container ID - Each container receives a unique ID once it’s started, for easy management. The string we received earlier is the full ID; here we see the shorter one
  • Image - Name of the image used by this container
  • Command - Command that was executed in this container
  • Created - How long ago it was created
  • Status - The current status and how long it has been in it
  • Ports - Which ports are open (explanation in the next post)
  • Names - Each container can have a unique name, so we don’t have to use the ID all the time. If a name is not provided, Docker generates a random one for us (this was the 1st Docker issue submitted on GitHub)
We have here a “mad” Ken Thompson (Unix and C inventor), a “gloomy” Louis Pasteur (vaccination pioneer), the “silly” Lumière brothers (first film makers) and a “cranky” Alan Turing (founding father of computer science, also spotted as Sherlock) - sounds like the Avengers (you will probably get other team members). You can keep using this cool name generator, or next time you run a container pass the --name flag with a desired unique name.
We see that our latest container is up and running, but we can also see 3 containers that have exited. These are the containers we used in the previous steps; you can tell by the command. These containers still exist, which means their names are reserved. If you use random names you’re alright, but if you want to reuse your own unique name, you first need to remove the stopped containers:

~ $ docker rm mad_thompson gloomy_pasteur silly_lumiere
or

~ $ docker rm $(docker ps -q -f status=exited)
46671d396781
4460b0b7a5f7
50292108ff3b
~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
dde9009d78b7        ubuntu:14.04        "\"sh -c 'while :; d   5 minutes ago      Up 5 minutes                           cranky_turing
Turing is alone now. It has been printing “Hello World” in a loop for 5 minutes, one message every 10 seconds = 30 messages. Let’s check the output of the container (make sure to change to your own container name):

~ $ docker logs cranky_turing
Hello World
Hello World
Hello World
Hello World
...
Hello World
~ $
As you can see, our container is running as expected. To stop it, just execute:

~ $ docker stop cranky_turing
And delete the container:

~ $ docker rm cranky_turing


These are the basics of using Docker containers; you can play with them while the next post is being written.

The next part will cover more advanced container usage, container networking and how to build a small web application on Docker containers!

Provided by: Forthscale systems, cloud experts

Friday, January 23, 2015

ProfitBricks Has Released a New Data Center Management Tool


ProfitBricks, our partner, has released DCD R2 - a unique, industry-leading cloud data center management tool.

Over the years ProfitBricks' teams have continued to push the envelope in performance while keeping the lowest prices in the industry. They wanted to create a better public cloud platform because the future of cloud needed it.

As a result, ProfitBricks has enhanced the user experience to provide the most effortless cloud computing design and management tool the industry has ever seen.


What's New with the Data Center Designer R2?

DCD R2, a complete rewrite of the game-changing visual infrastructure management tool, enables even easier design and management of data centers in the cloud. It is included with every account, so you can test it out.

A new visual drag-and-drop interface lets businesses completely re-imagine the data center. DCD R2 is very intuitive as a data center management tool, so even non-technical employees will be able to use it. This tool helps free organisations from the limited and inflexible choices provided by first-generation cloud computing incumbents.

ProfitBricks provides a better and "unbound" cloud experience that delivers on the true promise of cloud computing.


Watch the video below to learn how easy designing, configuring and managing the cloud can be with ProfitBricks' new data center management tool. Also, you can see it for yourself with a free live demo or by signing up for a free 14-day trial.


Provided by: Forthscale systems, cloud experts