Tag: Automation

Puppet already installed? What next? – Part 1

A few days back, we covered the installation of the Puppet server and Puppet agent in an RHEL7 environment. In this article, we will focus on the technical side: administering the Puppet server and writing manifests to instruct the agent. If you landed directly on this article, consider reading the 10 steps to install the Puppet configuration management tool before continuing further. Otherwise, I invite you all to carry on with this discovery of what Puppet is capable of.

All manifests will be available in the My-Puppet-Manifests GitHub repository.

The first keyword that someone should be familiar with is “resource”. In Puppet, everything is a resource. The second keyword is “manifest”: to instruct the Puppet server, we write a file with the extension ‘.pp’, which is called a manifest.

1. To check which resource types exist in Puppet, you can use the following command:

puppet resource --type

2. You will notice a lot of resources. Let’s say you want to get more details about the resource called ‘file’; use the following command:

puppet describe file

3. Let’s do something locally: create a file in /tmp called test.txt. Create a manifest called file.pp as follows:

file { '/tmp/test.txt':
  ensure  => file,
  content => "My first puppet file",
}

This is very simple to grasp: ‘file’ here is the resource type, ‘/tmp/test.txt’ is the title of the resource, and ‘ensure’ and ‘content’ are attributes. The text after each arrow is the attribute’s ‘value’.

4. To apply it locally with Puppet, use the following command:

puppet apply file.pp

You will notice that the file has been created in the /tmp directory with the given content.

5. If you want to remove the file, use puppet apply file.pp again, but with ensure => absent instead of ensure => file:

file { '/tmp/test.txt':
  ensure => absent,
}

6. In the same manner, if you want to create a directory instead, use ensure => directory.
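As a sketch, such a manifest might look like this (the path and mode here are just assumptions):

```puppet
# Ensure a directory exists, with an explicit mode
file { '/tmp/testdir':
  ensure => directory,
  mode   => '0755',
}
```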

7. You can also check whether your manifest contains syntax errors by using the following command:

puppet parser validate file_absent.pp

8. You can also create a user and, at the same time, manage the file in the same manifest. For example:

file { '/tmp/test.txt':
  ensure  => file,
  content => "My first puppet file",
}

user { 'tom':
  ensure => present,
}

9. The idea is to look at the documentation and understand the parameters of a given resource type. For example, run ‘puppet describe user‘ for the ‘user’ type, and you will notice that you can also create the home directory and specify the shell:

user { 'harry':
  ensure  => present,
  comment => "Harry Bell",
  shell   => '/sbin/nologin',
  home    => "/home/harry",
}

10. Another interesting resource is ‘service’:

service { 'sshd.service':
  ensure => running,
  enable => true,
}
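Resources can also depend on one another. A short sketch (the package name is an assumption for a RHEL system) using the ‘require’ attribute:

```puppet
package { 'openssh-server':
  ensure => installed,
}

service { 'sshd.service':
  ensure  => running,
  enable  => true,
  require => Package['openssh-server'],
}
```

Here Puppet will make sure the package is installed before it manages the service.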

At this stage, it should be very clear how to create Puppet manifests and apply them locally. I created a GitHub repository to store all the Puppet manifests. In the next blog post on Puppet, I will share more details. If you liked it, do comment below 🙂

cyberstorm.mu team at Developers Conference Mauritius

A few weeks back, I registered to present the Ansible automation tool at the Developers Conference 2019 at Voila Hotel, Bagatelle, Mauritius. The event is an initiative of the Mauritius Software Craftsmanship Community – MSCC, sponsored by several companies such as Mauritius Commercial Bank, SdWorx, Eventstore, Ceridian, and Castille. Other members of cyberstorm.mu also registered to present: Codarren Velvindron, technical lead at Orange Business Services, who spoke about “becoming an automation artist”; Loganaden Velvindron, who spoke about “RedHat Enterprise Linux 8 and Derivatives have a new Firewall: NFTABLEs”; and Nathan Sunil Mangar, who spoke about “Introduction to the STM32”. There was also a special session where Mukom Akong Tamon, head of capacity building for the Africa region at Afrinic, spoke on “IPv6 deployment in Mauritius and Africa at large”. I presented as a member of cyberstorm.mu and DevOps Engineer at Orange Business Services, speaking on Ansible for beginners with some basic and advanced demos.

In the past, I have written several articles on Ansible:

  1. Getting started with Ansible deployment
  2. Some fun with Ansible Playbooks
  3. Configure your LVM via Ansible
  4. Some tips with Ansible Modules for managing OS and Applications
  5. An agentless servers inventory with Ansible and Ansible-CMDB
  6. Project Tabulogs: Linux Last logs on HTML table with Ansible

My presentation started with a brief introduction of myself, followed by a basic introduction to Ansible with some brief examples and demos. It was a mixed audience: students, professionals from both the management and technical sides, engineers, and others. I quickly covered why we need Ansible in our daily lives, whether at home or in production. Ansible is compatible with several operating systems, and one of the most interesting related tools is AWX, which is an open-source product. Before getting started with Ansible, it is important to grasp some keywords, so I introduced them and gave some examples using playbooks. Ansible ad-hoc commands were also used. The audience was asked for ideas about what they would like to automate in the future, and there were lots of good examples. I laid some emphasis on reading the docs and keeping track of the Ansible version one is using. I also gave a brief idea of Ansible-Galaxy, ansible-doc, ansible-pull, and Ansible Vault. To spice up your automation layout, it is nice to use Jinja templates and verbosity for better visual comprehension. I also spoke about Ansible-CMDB, which is not part of Ansible itself; some days back, I blogged on Ansible-CMDB, which is pretty interesting for creating an inventory, and I shared some ideas on how to modify its source code. I also showed an example of using an Ansible playbook to build up web apps.

Thanks, everyone for taking pictures and some recordings.

cyberstorm.mu @ DevConMru

After my session, I went to the Afrinic session on IPv6, where Mukom Akong Tamon gave an introduction to IPv6 and the IPv6 address format, along with several examples of why it is important to migrate to IPv6. Loganaden Velvindron from Afrinic enlightened the audience about dual-stack programming.

One important point Mr. Mukom raised is that there are still developers hard-coding IP addresses in their code, which is not a good practice.

There was another session by Loganaden Velvindron of Afrinic, who spoke on NFtables in RedHat 8; Mukom was also present at that session. Loganaden explained the NFtables architecture and its advantages, as well as how to submit patches and build dual-stack support with NFtables.

Codarren Velvindron, technical lead at Orange Business Services and member of cyberstorm.mu, explained why automation is important, taking the conference.mscc.mu website itself as an example. He also gave some ideas using “Expect”. For those who are not familiar with it, “Expect” is a scripting language that talks to interactive programs or scripts that require user interaction.

Nathan Sunil Mangar presented an introduction to the STM32 microcontroller. He gave some hints on distinguishing between fake and real microcontrollers on the market. Beyond the basic introduction, he walked through examples from several projects and explained which one is better for which case; however, it also depends on the budget when choosing microcontrollers. He also showed how to use the programming tool for the STM32 microcontroller, and the documentation was perused during the presentation. At the end of the presentation, there were several giveaways by Nathan Mangar, including fans, microcontrollers, and a small STM32-based light bulb.

I also had the opportunity to meet several staff members from the Mauritius Commercial Bank, who asked for some hints and best practices on Ansible, and had conversations with other people in the tech industry, such as Kushal Appadu, senior Linux system engineer at Linkbynet Indian Ocean, with whom I discussed new technologies at length. Some days back, I presented the technicalities of automation as a DevOps Engineer at SupInfo university Mauritius under the umbrella of Orange Cloud for Business and Orange Business Services. I was glad to meet a few SupInfo students at DevCon 2019 who instantly recognized me and congratulated me on the Ansible session.

Speaker at SUPINFO under the umbrella of Orange Business Services

I sincerely believe there is still room for improvement at the Developers Conference, such as the website itself, which needs some security improvements. Another feature that could be added is labelling each session as beginner, intermediate, or advanced so that attendees can choose better. The rating mechanism, which is not based on constructive feedback, might discourage other speakers from coming forward next time. But overall, it was a nice event. Someone from the media team filmed me for a one-minute video; I hope to see it online in the future. I also got a “Thank You” board for being a speaker from Vanessa Veeramootoo-Chellen, CTO at Extension Interactive and one of the organizers of the Developers Conference, who seemed to be always working, busy, and on the move during the event.

Project Tabulogs: Linux Last logs on HTML table with Ansible

Project Tabulogs gives you the ability to display logs in an HTML tabular format using an Ansible playbook. Some days back, I shared some ideas on how to use Ansible to create an agentless server inventory in an interesting HTML format. In this post, we will see how to present logging information from lastlog in an HTML tabular format. You can apply the same principle to any logs and present them with an HTML template. One of the advantages of Ansible is that you can also launch shell commands, which allows you to automate several tasks, whether remote or local. Many times, I have come across non-technical personnel who want to see logins and logouts of staff on specific servers. The goal of this project is to present accurate and reliable information to non-technical persons in an IT organization. It is also a way to peruse logins and logouts of users more easily.


All rights reserved: tunnelix.com


Limitations and Solutions

However, there are some limitations to presenting bulk information such as logs on an HTML page. Imagine having hundreds of servers, which could amount to more than 20,000 lines; the page would become impossible to load and might crash the browser. To remediate this situation, a database such as SQLite could be interesting; otherwise, AJAX could be used to fetch the data page by page. Since I keep all the information in JSON format in a JavaScript file, I decided to make use of pagination. A search box also comes in handy for all columns and rows.

Now, the question is how to report information that keeps changing. Let’s say every month you want a report of the users who connected to your organization’s servers, in JSON format inside a JavaScript file. Here is where Ansible comes in useful. I created a playbook to build the JSON array itself and the JavaScript file using the shell module, based on Linux awk commands. Of course, Ansible performs other tasks as well, such as fetching the information remotely and updating the HTML page (for example, the date the report was created). The directory can then be compressed and sent easily to anyone. So, no database needed! Cool, isn’t it? However, if you want to adopt this system for really huge logs, it might work but could be really slow; hence the need for a database.


You can clone the repo from my GitHub repository.

HTML, AngularJS and JSON

I created a simple HTML page which calls AngularJS to render the information kept in a JSON array. The HTML page is static. However, for your own use, if you want to add more columns, feel free to edit the section below. It is located at TabuLogs/master/perimeter1/index.html:

 <tr class="tableheader">
                            <th>Hostname</th>
                            <th>Username</th>
                            <th>Login</th>
                            <th>TimeTaken</th>
                            <th>IPAddress</th>
                            <th>Perimeter</th>
                            <th>Application</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr ng-repeat="rec in SampleRecords | filter:q | startFrom:currentPage*pageSize | limitTo:pageSize">
                            <td>{{ rec.Hostname }}</td>
                            <td>{{ rec.Username }}</td>
                            <td>{{ rec.Login }}</td>
                            <td>{{ rec.TimeTaken }}</td>
                            <td>{{ rec.IPAddress }}</td>
                            <td>{{ rec.Perimeter }}</td>
                            <td>{{ rec.Application }}</td>
                        </tr>                       

Since I used lastlog from Linux, I called the page “Linux Login Management”. You can also filter any other logs, such as the Apache or secure logs; some modifications would have to be carried out in the awk command or using Ansible Jinja filters.

You can also point to a logo on the HTML page, kept in the images folder:


<body ng-app="AngTable" ng-controller="Table">
    <p></p>
      <img src="images/logo1.png" align="right" alt=" " width="380" height="180"> 
      <img src="images/logo2.png" alt="cyberstorm" align="top-left" width="380" height="180">
    <h2>Login Management for my servers</h2>

The most interesting part is the data, which is stored in a JSON array. In this particular example, we have information about the hostname, username, login, time taken, IP address, application, and perimeter:

[ 
{"Hostname":"Apacheserver","Username":"tunnelix","Login":"Fri-Sep-28-15:11","TimeTaken":"15:11(00:00)","IPAddress":"192.168.0.1","Application":"Apache","Perimeter":"Production"},
 ]; 

The JSON array will be fetched by AngularJS to render the HTML table.

As mentioned previously, if you tried to load the whole JS file into your browser at once, it would crash; hence, to overcome this situation, pagination comes in handy.

Messing around with Linux AWK command and the Ansible Playbook

The Linux awk command is pretty powerful for concatenating all the logs together. Even more interesting is the conversion of the logs into JSON format. The playbook is located at TabuLogs/AnsiblePlaybook/AuditLoginLinux.yml.

When the playbook is launched, the first awk command creates a file in /tmp named after the machine's hostname; inside that file, the data comprises the hostname, username, login, time taken, and IP address. To get the logs of the previous month, I used Date=`date +%b -d 'last month'`

Assuming you have rotated logs for /var/log/wtmp, whether weekly, monthly, or over any other range, the loop is supposed to find logs for last month only. In case you have kept wtmp logs for years, another method is needed to restrict the fetch to the current year.

Date=`date +%b -d 'last month'`; t=`hostname -s` ; for i in `ls /var/log/wtmp*` ; do last -f $i ; done | grep $Date | awk -v t="$t" '{print t, $1, $4 "-" $5 "-" $6 "-" $7, $9$10, $3 }' | egrep -v 'wtmp|reboot' > /tmp/$t

Once the log files have been created on the remote hosts, they are fetched by Ansible and kept on the local server; the remote files are then removed.

We also need to merge all the data into a single file and remove all blank lines, which is done using the following command:


awk 'NF' /Tabulogs/AnsibleWorkDesk/Servers/* | sed '/^\s*$/d' > /Tabulogs/AnsibleWorkDesk/AllInOne

Assuming that the file located at /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter contains information about the server, application, and perimeter, the following awk command will append that information to the table fetched from the hosts.

awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print}' /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter /Tabulogs/AnsibleWorkDesk/AllInOne > /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication

An example of the format of that mapping file is:

Nginx dev server2

Apache prod server1

Nginx dev server4
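The merge can be tried on a couple of tiny sample files (the file names and contents below are made up for illustration):

```shell
# Build a small mapping file (application perimeter server) and one data row
printf 'Apache prod server1\nNginx dev server2\n' > /tmp/map.txt
printf 'server1 tom Fri-Sep-28-15:11 15:11(00:00) 192.168.0.1\n' > /tmp/data.txt

# First pass indexes the mapping by server name ($3); second pass appends
# "application perimeter" as extra fields to each matching data row
awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print}' \
    /tmp/map.txt /tmp/data.txt
# -> server1 tom Fri-Sep-28-15:11 15:11(00:00) 192.168.0.1 Apache prod
```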

After merging all the data into one simple table, the following awk command converts each row into JSON format:

awk '{print "{" "\"" "Hostname" "\"" ":" "\"" $1"\"" "," "\"" "Username" "\"" ":" "\"" $2"\"" "," "\"" "Login" "\"" ":" "\"" $3"\"" "," "\"" "TimeTaken" "\"" ":" "\"" $4"\"" ",""\"" "IPAddress" "\"" ":" "\"" $5"\"" "," "\"" "Application" "\"" ":" "\"" $6"\"" "," "\"" "Perimeter" "\"" ":" "\""$7"\"""}" "," }' /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication > /Tabulogs/AnsibleWorkDesk/perimeter1/table.js
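For readability, the same conversion can be sketched with printf instead of string concatenation (the sample input line below is hypothetical; the output format is meant to match the playbook's):

```shell
# One row of the merged table: host user login time ip application perimeter
echo 'server1 tom Fri-Sep-28-15:11 15:11(00:00) 192.168.0.1 Apache Production' |
awk '{ printf "{\"Hostname\":\"%s\",\"Username\":\"%s\",\"Login\":\"%s\",\"TimeTaken\":\"%s\",\"IPAddress\":\"%s\",\"Application\":\"%s\",\"Perimeter\":\"%s\"},\n", $1, $2, $3, $4, $5, $6, $7 }'
```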

It does not end there; we also need to add the JS code around it. Using the lineinfile module of Ansible, writing at the beginning of a file is pretty easy:


- lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line={{ item.javascript }} insertbefore=BOF
  with_items:
    - { javascript: "$scope.SampleRecords=[ " }
    - { javascript: "$scope.q = ''; " }
    - { javascript: "$scope.pageSize = 10; " }
    - { javascript: "$scope.currentPage = 0; " }
    - { javascript: "app.controller('Table', ['$scope', '$filter', function ($scope, $filter) { "}
    - { javascript: "var app = angular.module(\"AngTable\", []);  "}

The same method is used to write at the end of the file to complete table.js:

- lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line={{ item.javascript }} insertafter=EOF
  with_items:
    - { javascript: " ]; " }
    - { javascript: "$scope.getData = function () { " }
    - { javascript: "return $filter('filter')($scope.SampleRecords, $scope.q) " }
    - { javascript: " }; "}
    - { javascript: "$scope.numberOfPages=function(){ "}
    - { javascript: "return Math.ceil($scope.getData().length/$scope.pageSize); "}
    - { javascript: "}; "}
    - { javascript: "}]); "}
    - { javascript: "app.filter('startFrom', function() { "}
    - { javascript: "    return function(input, start) { "}
    - { javascript: "start = +start; "}
    - { javascript: "return input.slice(start); "}
    - { javascript: " } "}
    - { javascript: "}); "}

After writing at the end of the file, table.js is now complete.

Everything is assembled using the Ansible playbook. The playbook has been tested on RedHat 7 servers; I believe it can also be adapted for other environments.

Final result

Here is an idea of how the final result will look:


Some pieces of information are blurred for security reasons

Future Enhancements

I’m excited to share this article, though I know there are many more improvements that can be made, such as:

  • The awk command can be combined.
  • Removal of backticks on the shell commands.
  • Shell commands like rm -rf will be replaced by the file module.
  • Deprecated HTML tags that were used will be updated.
  • Code sanitization.

My future goal is to improve the Ansible Playbook and the source code itself. Maybe someone else has a much better way of doing it. I would be glad to hear more. Please do comment below for more ideas. In case, you believed that some improvements can be done on the Playbook, feel free to send some commits on my GitHub repository or comment below.

As usual, a plan is needed. Consider reading the “Tips” section below, which might give you some hints.

TIPS:


  • There are, however, some other limitations with the last command. Some arguments will not be present if you are using an old util-linux package. Consider updating the package to be able to filter the output of last easily.
  • If you can group your servers by some category or environment, it would be helpful.
  • There will be other versions of project Tabulogs to create better and faster Ansible playbooks. Let’s call this one version 1.0.

An agentless servers inventory with Ansible & Ansible-CMDB

Building an agentless inventory system for Linux servers from scratch is a very time-consuming task. To get precise information about your servers' inventory, Ansible comes in very handy, especially if you are restricted from installing an agent on the servers. However, there are some pieces of information that Ansible's default inventory mechanism cannot retrieve. In that case, a playbook needs to be created to retrieve them; examples are the VMware tools version and other application versions which you might want to include in your inventory system. Since Ansible makes it easy to create JSON files, the output can easily be manipulated for other interesting tasks, say a static HTML page. I would recommend Ansible-CMDB, which is very handy for such a conversion: it allows you to create a pure HTML file based on the JSON file that was generated by Ansible. Ansible-CMDB is another amazing tool created by Ferry Boender.


Photo credits: Ansible.com

Let’s have a look at how the agentless server inventory with Ansible and Ansible-CMDB works. It is important to understand the prerequisites before installing Ansible; the other articles which I published on Ansible, listed above, cover those.

Ansible Basics and Pre-requisites

1. In this article, you will get an overview of what the Ansible inventory is capable of. Start by gathering the information that you will need for your inventory system; the goal is to make a plan first.

2. As explained in the article Getting started with Ansible deployment, you have to define a group and record the names of your servers (which can be resolved through the hosts file or a DNS server) or their IPs. Let’s assume that the name of the group is “test“.

3. Launch the following command to see a JSON output describing the inventory of each machine. As you may notice, Ansible fetches all the data:


ansible -m setup test

4. You can also write the output to a specific directory for later use with ansible-cmdb. I would advise creating a dedicated directory (I created /home/Ansible-Workdesk) to avoid confusion about where the files land:

ansible -m setup --tree out/ test

5. At this point, you will have several files created in a tree format, i.e., one file per server, named after the server and containing JSON information about its inventory.

Getting Hands-on with Ansible-cmdb

6. Now install ansible-cmdb, which is pretty fast and easy. Do make sure that you meet all the requirements before installation:

git clone https://github.com/fboender/ansible-cmdb
cd ansible-cmdb && make install

7. To convert the JSON files into HTML, use the following command:

ansible-cmdb -t html_fancy_split out/

8. You should notice a directory called “cmdb” containing some HTML files. Open index.html to view your server inventory system.

Tweaking the default template

9. As mentioned previously, some information is not available by default in the index.html template. You can tweak the /usr/local/lib/ansible-cmdb/ansiblecmdb/data/tpl/html_fancy_defs.html page and add more content, for example the ‘uptime‘ of the servers. To make the “Uptime” column visible, add the following line to the “Column definitions” section:


{"title": "Uptime",        "id": "uptime",        "func": col_uptime,         "sType": "string", "visible": True},

Also, add the following lines to the “Column functions” section:

<%def name="col_uptime(host, **kwargs)">
${jsonxs(host, 'ansible_facts.uptime', default='')}
</%def>

Whatever comes after the dot in ansible_facts.<xxx> is the corresponding key in the JSON file. Repeat step 7. Here is how the end result looks.

Photo credits: Ferry Boender

Getting beyond Ansible-cmdb

Now, imagine that you want to include a specific application version (for example, the VMware tools version) in the HTML inventory file. As mentioned in step 4, I created the directory /home/Ansible-Workdesk; this is where the “out” and “cmdb” directories were created.

10. Create another directory called /home/Ansible-Workdesk/out/other_info/vmwaretool. I use this directory to deposit another JSON file for the VMware tools version after launching a playbook. Here is an extract from my InventoryUsingAnsibleCMDB.yml playbook:

- setup:
  register: setup_res

- command: vmware-toolbox-cmd -v
  register: vmwareversion

- set_fact:
    vmwareversion: '{ "vmwareversion": {{ vmwareversion.stdout_lines }} }'

You can view the whole Ansible Playbook here on my Github.

11. Once the playbook has been executed, you will have identically named files in /home/Ansible-Workdesk/out and /home/Ansible-Workdesk/out/other_info/vmwaretool.

12. However, the contents will differ. The files in the “out” directory contain JSON about the default Ansible inventory, whilst the one in the “vmwaretool” directory contains JSON about the VMware tools version, with “vmwareversion“ as its parent key. I changed the parent from “stdout_lines” to “vmwareversion“ using the set_fact module in Ansible.

13. By now, you are ready to tweak html_fancy_defs.html again as described in step 9. Both the column definitions and column functions need to be appended. Here is the line to be added to the “Column definitions” section:

{"title": "VMware Tool",        "id": "vmwareversion",        "func": col_vmwareversion,         "sType": "string", "visible": True},

And that of the Column functions section:

<%def name="col_vmwareversion(host, **kwargs)">
${jsonxs(host, 'vmwareversion', default='')}
</%def>

14. Repeat step 7, this time including the “vmwaretool” directory:


ansible-cmdb -t html_fancy_split out/ out/other_info/vmwaretool/

In case you are able to create an Ansible playbook that produces valid merged JSON files, combining those in the vmwaretool directory with those in the out directory, please comment below. I would like to hear more about it.

Tips:

  • More Playbooks can be found on my Ansible-Playbooks Github repository.
  • With regards to step 3, if direct root access has been disabled on the destination servers, you can use -u <username>, which will permit you to connect to the servers.
  • The ansible-cmdb command also allows you to generate a CSV file.
  • Step 10 lays emphasis on a separate JSON file. If you have been able to merge both outputs into the single JSON file created by the default Ansible inventory, please comment below.
  • The groups in the Ansible hosts file can also be added to the server inventory HTML file. Please see the ansible-cmdb docs for more information.

Some tips with Ansible Modules for managing OS and Application

In 2016, I published some articles on Ansible: Getting started with Ansible deployment, which provides a guide to getting started with Ansible, setting up the SSH key, and other basics. Another article is about LVM configuration on CentOS, as well as updating Glibc on a Linux server followed by a restart of the service. There is also an article with some more details about Ansible playbooks, which could be helpful to get started with.

It has been almost two years since I published those articles, and I notice that the concepts of Ansible remain the same. We now have other tools such as Ansible-Galaxy and Ansible Tower to ease many more tasks with this agentless tool. On top of that, there is also the possibility of performing agentless monitoring using Ansible; in future articles, I will get into more details about this, such as using Ansible to perform monitoring on servers. The concepts remain the same; however, it is important to make sure that the modules used are compatible with the version of Ansible you are running, otherwise you might end up with deprecated modules. The Ansible playbook's output will give you an indication of which servers failed or succeeded, and you will also have access to the <PlaybookName>.retry file, which lists all failed servers.


When using Ansible, always make sure that you are on the official documentation; each version of Ansible is well documented on the official website.

These days I have written a few playbooks. Let’s see some of the interesting things Ansible can do.

Ansible can edit files using a bullet-proof approach: instead of copying files from one destination to another, we can edit them directly.
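The original extract was a screenshot; a minimal sketch of such a task using the lineinfile module (the path and pattern here are hypothetical) might look like:

```yaml
# Hypothetical sketch: edit a line in place rather than copying the file
- name: Ensure SELinux is set to enforcing in its config file
  lineinfile:
    dest: /etc/selinux/config
    regexp: '^SELINUX='
    line: 'SELINUX=enforcing'
    backup: yes
```

The backup option keeps a timestamped copy of the original file, which is part of what makes this approach bullet-proof.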

Another interesting use is the Ansible shell module, where you can fire shell commands remotely from the playbook, for example removing specific users from a specific group.
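A sketch of such a task (the user and group names are made up for illustration):

```yaml
# Hypothetical sketch: remove a user from a group with the shell module
- name: Remove user tom from the wheel group
  shell: gpasswd -d tom wheel
  ignore_errors: yes
```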


You can also delete a specific user along with their home directory.
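This can be sketched with the user module (the user name is again hypothetical):

```yaml
# Hypothetical sketch: remove a user and their home directory
- name: Delete user tom and remove the home directory
  user:
    name: tom
    state: absent
    remove: yes
```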

Do check out my GitHub repository to get access to my Ansible playbooks.