Category: Linux Application

Some fun with Ansible playbooks

As we saw last time, Ansible is really cool, especially for automation. In the blog post Getting started with Ansible, I shared the following points:

  • Basic deployment of Ansible on 2 virtual machines
  • Dependencies that are usually needed
  • Setting up the SSH key
  • Some basic commands like ping, copy, transfer, etc.
  • Errors that may be encountered due to dependency and SELINUX

[Screenshot]

Let's see how to delete a specific file:

ansible ansible2 -m file -a 'path=/tmp/hackers.log state=absent'

[Screenshot]

You can easily list all those module names (for deleting, adding, etc.) with the ansible-doc -l command. If you want more detailed information, say for the copy module which I used in the last Ansible blog post, just use the ansible-doc copy command. Cool, isn't it?

I will now lay some more emphasis on Ansible playbooks. Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Where Puppet has Modules and Chef has Cookbooks, Ansible has Playbooks. Playbooks are written in YAML, a human-friendly data serialization standard usable from any programming language. A playbook is divided into three sections, namely the Target, Variable and Tasks sections:

  • Target section (similar to nodes.pp in Puppet and the run-list in Chef) – defines the hosts on which the playbook will be executed.
  • Variable section – comprises all the variables that can be used in the playbook.
  • Tasks section – the list of all the modules that will run, in order.

Before getting deeper into Ansible playbooks, it is very important to know that indentation and spacing matter in YAML. So, get into /etc/ansible and create a .yml file there. I created file1.yml as follows:

---
- hosts: ansible2
  user: root
  vars:
    motd_welcome: 'Welcome to hackers Mauritius'
  tasks:
    - name: sample motd
      copy:
        dest: /etc/motd
        content: "{{ motd_welcome }}"

Explanation of file1.yml:

  • hosts defines the target on which the tasks are going to be applied.
  • user is pretty straightforward, denoting which user the tasks will be executed as.
  • vars is where the variable section starts.
  • motd_welcome is a variable defined under the vars section with the value 'Welcome to hackers Mauritius'.
  • tasks is where the task section starts.
  • copy is the module carrying out the task.
  • dest is the destination you want to copy to.
  • content is "{{ motd_welcome }}", which references the variable defined in the vars section.

Execute this file with the command ansible-playbook file1.yml

[Screenshot]

The MOTD is now created on the ansible2 machine 🙂

Had there been a syntax error, you would have encountered errors like these, where the motd_welcome variable is wrongly set up.

[Screenshot]

Let's take another example by creating a playbook, test2.yml, that installs htop:

---
- hosts: ansible2
  user: root
  tasks:
    - name: copy repo files
      copy: src=files/epel.repo dest=/etc/yum.repos.d/epel.repo
    - name: installing htop
      action: yum name=htop state=installed

So we can clearly see that the actions will be carried out on the ansible2 server as user root: first the task that copies the file epel.repo to the repo directory of ansible2, after which the installation of htop is carried out. I have also created a directory /etc/ansible/files containing a file called epel.repo with the necessary configuration.
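The contents of epel.repo were not reproduced in this post. As a sketch, an EPEL 6 repo definition generally looks like the block below; take the real values from a machine that already has the epel-release package installed:

```ini
# sketch only -- copy the real file from an epel-release install
name=Extra Packages for Enterprise Linux 6 - $basearch
enabled=1
# the real file sets gpgcheck=1 together with a gpgkey= entry
gpgcheck=0
```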

[Screenshot]

Another interesting feature is that with Ansible you have the flexibility of setting up several actions, and when executing the .yml file you can, for example, use the command ansible-playbook -v test2.yml --step, which will prompt you for a yes/no answer before each task.

Getting started with Ansible Deployment

Ansible is an open source IT orchestration engine that manages remote devices, on-premise and in the cloud, in a coordinated fashion. These devices can be servers, networking hardware or storage devices. Ansible can be used to talk to typical load balancers, firewalls, switches or any Linux machine. For continuous deployment in any environment it is important that the tools are predictable, and undefined behavior should be taken into consideration. Ansible's playbook format is human-readable, with minimal jargon.


How is Ansible different from Puppet and Chef?

Puppet and Chef need an agent installed on the remote machines and a controller on the main server. With Ansible, you do not need to install anything on the remote machine, as it relies on an SSH connection and a simple push mechanism. Puppet and Chef, on the other side, use a pull mechanism.

Let’s deploy Ansible

You will find plenty of good documentation on the official website. If you want to explore Ansible a bit, here are some tips to get started on a CentOS 6 machine. I created two machines called ansible1 and ansible2. Each can ping the other, and port 22 (SSH) is listening. There are several dependencies needed to install Ansible. I would advise you to edit the /etc/hosts file and point the IP to the hostname if you do not have any DNS.

On ansible1, simply enable the epel repo and do a yum install ansible. You can also compile from source, which requires a suitable Python version. These are usually the packages needed:

 python-keyczar noarch
 python-paramiko noarch
 python-pyasn1 noarch 
 python-simplejson i686

Once Ansible is installed on the machine ansible1, you do not need to install anything on the other machines connected to the same network. To make ansible2 part of ansible1's inventory, an inventory file needs to be configured. This is located at /etc/ansible/hosts

Add the following block in the /etc/ansible/hosts file
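The block itself did not survive in this post. A minimal sketch, assuming ansible2 resolves through /etc/hosts and using a hypothetical group name, would be:

```ini
# hypothetical group name; the host name matches this post's setup
[testservers]
ansible2
```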


Try testing a ping

After adding the block mentioned above, you can carry out a simple ping test via the ansible command.

ansible ansible2 -m ping -u root -k

Here is the result.

[Screenshot]

You might also want to gather facts about the remote machine. This can be done with the setup module:

ansible ansible2 -m setup -u root -k

Setting up your SSH Key

However, you might want to set up Ansible with an SSH key.

On ansible1, simply create a key with the command ssh-keygen, and/or if you already have a key, send it to ansible2 using the command ssh-copy-id -i ansible2. Repeat the same steps on ansible2 by sending your key to ansible1. The file located at ~/.ssh/authorized_keys will contain the keys. From here on, you can simply run any command without being prompted for a password each time.
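The key setup above can be sketched as a short session (hostnames as in this post; the demo key file name is arbitrary, and -N '' creates a passphrase-less key, which is fine for a lab but not for production):

```shell
# Make sure the .ssh directory exists and start from a clean demo key
mkdir -p "$HOME/.ssh"
rm -f "$HOME/.ssh/id_rsa_demo" "$HOME/.ssh/"

# Generate an RSA key pair without a passphrase (lab convenience only)
ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa_demo" -q

# Append the public key to root@ansible2's ~/.ssh/authorized_keys;
# guarded here because it only works when ansible2 is reachable
ssh-copy-id -i "$HOME/.ssh/" root@ansible2 || echo "ansible2 not reachable"
```

Run the same two commands in the other direction from ansible2 to get password-less access both ways.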

[Screenshot]

More fun with commands

Let's say we want information about the /etc/passwd file on the ansible2 server. We simply need to fire this command:

ansible ansible2 -m file -a 'path=/etc/passwd'

[Screenshot]

I can also create a directory with Ansible wherever I want, and even set the user and group permissions. For example, to create a directory in /tmp:

ansible ansible2 -m file -a 'path=/tmp/hackers_mauritius state=directory mode=777 owner=root'

[Screenshot]
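For comparison, what that file-module call boils down to on the remote host can be sketched in plain shell (a sketch only; Ansible additionally reports whether anything changed, and owner=root would also chown the directory, which requires root, so it is left as a comment):

```shell
# state=directory -> create the directory if missing
mkdir -p /tmp/hackers_mauritius
# mode=777 -> set the permissions
chmod 777 /tmp/hackers_mauritius
# owner=root -> chown root /tmp/hackers_mauritius (needs root privileges)
```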

Errors that can be encountered

However, it is very important to test your commands before using them in a production environment. Errors can also be encountered if dependency packages are not installed. For example, let's send a file from ansible1 to ansible2. The command is

ansible ansible2 -m copy -a 'src=/root/hackers.log dest=/tmp'

[Screenshot]

You might have noticed that SELinux needs to be disabled, which is done simply by setting the parameter in /etc/selinux/config. I disabled SELinux and rebooted the machine. Here is the output

[Screenshot]
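The change itself is a single line in /etc/selinux/config (a config sketch; a reboot, or setenforce 0 for the running system, is needed for it to take effect):

```ini
# /etc/selinux/config -- relevant line only
SELINUX=disabled
```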

A brief overview and installation of MongoDB

MongoDB is an open source document-oriented database, available for free under the AGPL. It was built from the start to be a scalable, high-performance, document-oriented database. Document-oriented databases fall under the category of "NoSQL" databases. Databases in general can be classified as RDBMS, OLAP or NoSQL; MongoDB is a NoSQL database.


Installation of MongoDB is very simple, either by compiling it from source or through the official repository. Follow the instructions below to install MongoDB on a CentOS 7 machine.

1. Create the MongoDB repo at /etc/yum.repos.d as indicated on the official MongoDB website.

vim /etc/yum.repos.d/MongoDB.repo and add the following lines:

name=MongoDB Repository
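Only the name= line of the repo file survived in this post. As a sketch, a MongoDB 3.2-era repo block looked roughly like the following; verify the exact baseurl and gpgkey values against the official MongoDB documentation:

```ini
name=MongoDB Repository
enabled=1
```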

2. Then, you would simply need to do the installation.

yum -y install mongodb-org

3. After the installation, you can start the daemon as follows:

systemctl start mongod

4. You would notice that the daemon, by default, runs on port 27017:

[[email protected] yum.repos.d]# netstat -ntpl | grep -i mongo
tcp        0      0*               LISTEN      9692/mongod 

5. By typing mongo in the terminal, you will be dropped into the MongoDB CLI.

How did NoSQL come onto the scene? It was found that relational databases were not capable of handling big data. Though there is serious debate on this issue, NoSQL was the answer for handling big data. Horizontal scalability was found missing in RDBMS compared to NoSQL. In an RDBMS, an entity is represented by a table; in MongoDB, the comparable structure is a collection.

Some of the features that MongoDB supports:

Ad hoc queries – you can search by field, run range queries and use regular expressions

Indexing – Any field in a particular document can be indexed

Replication – Master-slave replication is supported. A master can perform reads and writes, while a slave copies data from the master and can only be used for reads or backup.

Data Duplication – MongoDB can run on multiple servers where data is duplicated to keep the system up and running in case of hardware failure.

Load Balancing – Since horizontal scalability is one of MongoDB feature, load balancing can be set up between machines. New machines can be added to a running database.

Ruby on Rails deployment on Ubuntu Server

Ruby on Rails is a web application framework that enables you to develop web applications. It uses the Model-View-Controller (MVC) pattern, providing default structures for databases, web services and web pages. This article covers the installation and a brief overview of Rails, Ruby, Ruby Version Manager (RVM) and RubyGems. I am using an Ubuntu Server 15.10 for the deployment.


To install Rails, you need Ruby, which is an open source programming language. There are different ways to install Ruby. You can install it with apt-get or yum, which will not give you the latest version by default; you can download and compile it from source; or you can use RVM, the Ruby Version Manager. RVM lets you switch easily between environments and different Ruby versions on the same machine. The Ruby developer community moves at a fast pace, and it is very difficult to manage your Ruby applications without proper control over the different environments, especially when an application is migrated from Dev to Prod. There is another tool called Rbenv which can be used to switch between environments. Let's install Ruby on Rails! 🙂

1. Basically, an apt-get install ruby will install Ruby with all dependencies; however, the version in the Ubuntu repo is 2.1. To install Ruby with RVM, you need to install RVM itself first. I installed it using curl:

apt-get install curl

2. Install the mpapis public key as per the official documentation:

gpg --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

3. Install RVM (stable version). You will notice several dependencies being installed, such as g++, gcc, make, etc.:

curl -sSL | bash -s stable --ruby

4. To start using RVM, you need to run

source /usr/local/rvm/scripts/rvm

5. You can check your RVM version and update it with the commands below. More information is also available in the GitHub rvm repo.

rvm -v

rvm get head

rvm reload

rvm get stable

6. You can check if all dependencies are installed with the command

rvm dependencies

7. Install Ruby with the command

rvm install ruby

8. To use the default version of ruby, you just need to do a

rvm use ruby --default

Note that when RVM is installed,  by default it installs a version of Ruby.

9. Normally, after installing RVM, RubyGems, the package management framework for Ruby, will also be installed alongside. Verify it with the command

rvm rubygems current

10. However, if you need to upgrade to the latest RubyGems you need to do a

gem update --system

11. RubyGems itself can be updated provided you install the rubygems-update gem. Do a

gem install rubygems-update 

12. To install Rails along with all its documentation:

gem install rails

13. At this stage, we have already installed Ruby on Rails on our Ubuntu server. I will now get into more detail. Since we installed the default Ruby through RVM at step 8, you will notice that you do not have the latest Ruby version! Download the latest stable release, which is Ruby-2.2.3 at the time I am writing this article:

rvm install 2.2.3

14. You switch to the new Ruby-2.2.3 with the command

rvm use ruby-2.2.3

15. Right now I have Ruby-2.2.1, the default version installed alongside RVM, and Ruby-2.2.3, which I manually installed at step 13. Suppose we have to test an application with Ruby-2.1.7; let's download another Ruby, i.e. Ruby-2.1.7:

rvm install 2.1.7

16. Once Ruby-2.1.7 is downloaded, we switch to it (see step 14). At this stage, you will also notice that the PATH in your environment has changed. Each time you change the Ruby version, you will notice a change in the PATH.

[Screenshot]

17. Assuming we no longer need this version, let's now purge and uninstall Ruby-2.1.7. After uninstalling, switch back to the version you want to use, as at step 14:

rvm uninstall 2.1.7

There are other tools, such as Rbenv and Chruby, which you can use instead of RVM to manage Ruby versions. Each has its pros and cons when it comes to Rbenv vs RVM.

Dare to do a brute force attack again!

Dare to do an SSH brute force attack again and you are banned!! I have noticed several SSH botnet attacks on my server these days. Although I prefer SSH to listen on port 22, I can imagine how many attempts are made to break through it. Though these attacks are very common, they can increase CPU consumption on your server, and consequently the server can die. Moreover, if you do not protect the server from malicious SSH remote connections, things can get pretty dangerous and an attacker can take over the machine.


Fail2Ban is one of the tools you can install on your machine to ban IPs that show malicious signs. Today, with the help of Kheshav, we decided to find a solution to reveal all the banned IPs to the public. From the Fail2Ban log, we can find all the IPs that are being banned. The solution was an easy one.
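The extraction itself can be sketched before worrying about streaming. The sample log line below is made up and uses a TEST-NET address, but real Fail2Ban logs use the same "Ban <ip>" wording:

```shell
# Sketch: pull the unique banned IPs out of fail2ban log lines
sample='2016-02-21 03:14:15,926 fail2ban.actions[1234]: WARNING [ssh-iptables] Ban'
banned_ips=$(printf '%s\n' "$sample" \
  | grep -oE 'Ban ([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | awk '{print $2}' | sort -u)
echo "$banned_ips"
```

Against a real log you would point the same pipeline at /var/log/fail2ban.log instead of the sample string.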

1. Install the Node.js and npm packages

yum install nodejs npm

2. Install frontail with the npm utility

npm install frontail -g

3. Now you can launch frontail on any port as a daemon with the following command

frontail -p {port number here} -h {IP or Hostname here} {location of your log} -d

Afterwards, you fill in the IP, the port number and the location of the log that you want streamed live.

Here are the banned IPs (US time) attempting some brute force. You can also view the IPs in the widget on the right side of the blog. It might take a few seconds to load.

There are several websites where you can report IPs for abuse, as well as verify previous attacks. We are still brewing up some ideas to produce a better, well-defined output of the log.