Tag: Automation

Project Tabulogs: Linux Last logs on HTML table with Ansible

Project Tabulogs gives you the ability to display logs on an HTML tabular format page using an Ansible Playbook. Some days back, I shared some ideas on how to use Ansible to create an agentless server inventory with an interesting HTML format. In this post, we will see how to present logging information from lastlog in an HTML tabular format. You can apply the same principle to any logs and present them in an HTML template. One of the advantages of Ansible is that you can also launch shell commands, which allows you to automate several tasks, whether remote or local. Many times, I have come across non-technical personnel who want to see logins and logouts of staff on specific servers. The goal of this project is to present accurate and reliable information to non-technical persons in an IT organization. It's also a way to peruse logins and logouts of users more easily.


All rights reserved: tunnelix.com

Limitations and Solutions

However, there are some limitations to presenting bulk information such as logs on an HTML page. Imagine having hundreds of servers, which could amount to more than 20,000 lines. This would make the page impossible to load and might crash the browser. To remedy this situation, a database such as SQLite could be interesting. Otherwise, AJAX can be used to fetch the data page by page. Since I'm keeping all the information in JSON format in a JavaScript page, I decided to make use of pagination. A search box also comes in handy, covering all columns and rows.

Now, the question is how to report information that keeps on changing. Let's say every month you want a report of users connected to your organization's servers, in JSON format, inside a JavaScript page. Here is where Ansible comes in useful. I created a Playbook to build the JSON array itself and the JavaScript page using the shell module, based on Linux awk commands. Of course, there are other tasks Ansible will perform, such as fetching the information remotely and updating the HTML page, for example with the date the report was created. The directory can then be compressed and sent easily to anyone. So, no database needed! Cool, isn't it? However, if you want to adopt this system for really huge volumes of logs, it might work but could be really slow; hence the need for a database.


You can clone the repo from my GitHub repository.

HTML, AngularJS and JSON

I created a simple HTML page which calls AngularJS to keep the information in a JSON array. The HTML page is static. However, for your own use, if you want to add more columns, feel free to edit the section below. It is located at TabuLogs/master/perimeter1/index.html

<thead>
    <tr class="tableheader">
        <th>Hostname</th>
        <th>Username</th>
        <th>Login</th>
        <th>TimeTaken</th>
        <th>IPAddress</th>
        <th>Perimeter</th>
        <th>Application</th>
    </tr>
</thead>
<tbody>
    <tr ng-repeat="rec in SampleRecords | filter:q | startFrom:currentPage*pageSize | limitTo:pageSize">
        <td>{{ rec.Hostname }}</td>
        <td>{{ rec.Username }}</td>
        <td>{{ rec.Login }}</td>
        <td>{{ rec.TimeTaken }}</td>
        <td>{{ rec.IPAddress }}</td>
        <td>{{ rec.Perimeter }}</td>
        <td>{{ rec.Application }}</td>
    </tr>
</tbody>

Since I have used lastlog from Linux, I called the page "Linux Login Management". Otherwise, you can also filter any logs, such as the Apache or secure log. Some modifications will have to be carried out in the awk command or by using Ansible Jinja filters.
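For instance, here is a rough sketch of pulling the timestamp, username, and source IP of accepted SSH logins out of /var/log/secure; the field positions are an assumption based on the default syslog format and would need adjusting for your environment:

awk '/sshd.*Accepted/ {print $1 "-" $2 "-" $3, $9, $11}' /var/log/secure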

You can also point to a logo on the HTML page, kept in the images folder:


<body ng-app="AngTable" ng-controller="Table">
    <p></p>
      <img src="images/logo1.png" align="right" alt=" " width="380" height="180">
      <img src="images/logo2.png" alt="cyberstorm" align="top-left" width="380" height="180">
    <h2>Login Management for my servers</h2>

The most interesting part is the data, which is stored in a JSON array. In this particular example, we have information about the hostname, username, login, time taken, IP address, application, and perimeter.

[ 
{"Hostname":"Apacheserver","Username":"tunnelix","Login":"Fri-Sep-28-15:11","TimeTaken":"15:11(00:00)","IPAddress":"192.168.0.1","Application":"Apache","Perimeter":"Production"},
 ]; 

The JSON array will be fetched by AngularJS to render the HTML table.

As mentioned previously, if you tried to load the whole JS page into your browser, it would crash; to overcome this situation, pagination comes in handy.

Messing around with Linux AWK command and the Ansible Playbook

The Linux awk command is pretty powerful for concatenating all the logs together. Even more interesting is the conversion of the logs into JSON format. The playbook is located at TabuLogs/AnsiblePlaybook/AuditLoginLinux.yml

When the Playbook is launched, the first awk command creates a file named after the machine's hostname in /tmp; inside that file, the data comprises the Hostname, Username, Login, Time-Taken, and IP Address. To get the logs from the previous month, I used Date=`date +%b -d 'last month'`

Assuming you have rotated logs for /var/log/wtmp, whether weekly, monthly, or over any other range, the loop is supposed to find logs for last month only. In case you have kept wtmp logs for years, another method needs to be used to fetch logs for the actual year.

Date=`date +%b -d 'last month'`; t=`hostname -s` ; for i in `ls /var/log/wtmp*` ; do last -f $i ; done | grep $Date | awk -v t="$t" '{print t, $1, $4 "-" $5 "-" $6 "-" $7, $9$10, $3 }' | egrep -v 'wtmp|reboot' > /tmp/$t

Once the logs have been created on the remote hosts, they are fetched by Ansible and kept on the local server. The remote files are then destroyed.
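A minimal sketch of what such a fetch-and-clean-up step might look like; the task names and destination path below are illustrative, not the exact contents of the repo:

# Hypothetical sketch: pull each host's /tmp/<hostname> report to the
# controller, then delete it remotely. Paths are illustrative.
- name: Fetch the lastlog report from each host
  fetch:
    src: "/tmp/{{ ansible_hostname }}"
    dest: /Tabulogs/AnsibleWorkDesk/Servers/
    flat: yes

- name: Remove the temporary report on the remote host
  file:
    path: "/tmp/{{ ansible_hostname }}"
    state: absent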

We also need to merge all the data into one single file and remove all blank lines, which is done using the following command:


awk 'NF' /Tabulogs/AnsibleWorkDesk/Servers/* | sed '/^\s*$/d' > /Tabulogs/AnsibleWorkDesk/AllInOne

Assuming that the file located at /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter contains the application, perimeter, and server name, the following awk command will append that information to each row of the merged log file.

awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print}' /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter /Tabulogs/AnsibleWorkDesk/AllInOne > /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication

An example of the mapping file's format (application, perimeter, server name) is:

Nginx dev server2

Apache prod server1

Nginx dev server4
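To illustrate the join with made-up values: a row in AllInOne such as "server2 tunnelix Fri-Sep-28-15:11 15:11(00:00) 192.168.0.1" is matched on its hostname key (server2) and comes out in AllInOneWithPerimeterAndApplication as "server2 tunnelix Fri-Sep-28-15:11 15:11(00:00) 192.168.0.1 Nginx dev".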

After adding all the data together in a single table, the following awk command converts each row into JSON format:

awk '{printf "{\"Hostname\":\"%s\",\"Username\":\"%s\",\"Login\":\"%s\",\"TimeTaken\":\"%s\",\"IPAddress\":\"%s\",\"Application\":\"%s\",\"Perimeter\":\"%s\"},\n", $1, $2, $3, $4, $5, $6, $7}' /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication > /Tabulogs/AnsibleWorkDesk/perimeter1/table.js

It does not end here; we also need to wrap the JS code around it. Using the lineinfile module of Ansible, writing at the beginning of the file is pretty easy. Note that with insertbefore=BOF each item is inserted at the very top, so the lines end up in the file in the reverse order of the with_items list.


lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line="{{ item.javascript }}" insertbefore=BOF
     with_items:
       - { javascript: "$scope.SampleRecords=[ " }
       - { javascript: "$scope.q = ''; " }
       - { javascript: "$scope.pageSize = 10; " }
       - { javascript: "$scope.currentPage = 0; " }
       - { javascript: "app.controller('Table', ['$scope', '$filter', function ($scope, $filter) { "}
       - { javascript: "var app = angular.module(\"AngTable\", []);  "}

The same method is used to write at the end of the file and complete the table.js file.

lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line="{{ item.javascript }}" insertafter=EOF
     with_items:
       - { javascript: " ]; " }
       - { javascript: "$scope.getData = function () { " }
       - { javascript: "return $filter('filter')($scope.SampleRecords, $scope.q) " }
       - { javascript: " }; "}
       - { javascript: "$scope.numberOfPages=function(){ "}
       - { javascript: "return Math.ceil($scope.getData().length/$scope.pageSize); "}
       - { javascript: "}; "}
       - { javascript: "}]); "}
       - { javascript: "app.filter('startFrom', function() { "}
       - { javascript: "    return function(input, start) { "}
       - { javascript: "start = +start; "}
       - { javascript: "return input.slice(start); "}
       - { javascript: " } "}
       - { javascript: "}); "}

After writing at the end of the file, table.js is complete.

Everything is assembled using the Ansible playbook, which has been tested on RedHat 7 servers. I believe it can also be adapted for other environments.

Final result

Here is an idea of how the final result looks:



Some pieces of information are blurred for security reasons.

Future Enhancements

I'm excited to share this article, though I know there are many more improvements that can be made, such as:

  • The awk commands can be combined.
  • Removal of backticks in the shell commands.
  • Shell commands like rm -rf will be replaced with the file module.
  • Some deprecated HTML tags were used and should be replaced.
  • Code sanitization.

My future goal is to improve the Ansible Playbook and the source code itself. Maybe someone else has a much better way of doing it; I would be glad to hear more. Please do comment below with more ideas. If you believe that some improvements can be made to the Playbook, feel free to send some commits to my GitHub repository or comment below.

As usual, a plan is needed. Consider reading the “tips” section below which might give you some hints.

TIPS:


  • There are, however, some other limitations with the last command. Some arguments will not be present if you are using an old util-linux package. Consider updating the package to be able to filter the last command easily.
  • If you can group your servers by category or environment, it would be helpful.
  • There will be other versions of project Tabulogs to create a better and faster Ansible playbook. Let's call this one version 1.0.

An agentless servers inventory with Ansible & Ansible-CMDB

Building an agentless inventory system for Linux servers from scratch is a very time-consuming task. To have precise information about your servers' inventory, Ansible comes in very handy, especially if you are restricted from installing an agent on the servers. However, there are some pieces of information that Ansible's default inventory mechanism cannot retrieve. In that case, a Playbook needs to be created to retrieve those pieces of information; examples are the VMware tool version and other application versions which you might want to include in your inventory system. Since Ansible makes it easy to create JSON files, this output can easily be turned into other interesting artifacts, say a static HTML page. I would recommend Ansible-CMDB, which is very handy for such conversion: it allows you to create a pure HTML file based on the JSON files that were generated by Ansible. Ansible-CMDB is another amazing tool, created by Ferry Boender.


Photo credits: Ansible.com

Let's have a look at how the agentless server inventory with Ansible and Ansible-CMDB works. It's important to understand the prerequisites before installing Ansible. Here are other articles which I published on Ansible:

Ansible Basics and Pre-requisites

1. In this article, you will get an overview of what the Ansible inventory is capable of. Start by gathering the information that you will need for your inventory system; the goal is to make a plan first.

2. As explained in the article Getting started with Ansible deployment, you have to define a group and record the names of your servers (which can be resolved through the hosts file or a DNS server) or their IPs. Let's assume that the name of the group is "test".

3. Launch the following command to see a JSON output describing the inventory of the machines. As you may notice, Ansible fetches all the data.


ansible -m setup test

4. You can also write the output to a specific directory for future use with Ansible-cmdb. I would advise creating a specific directory (I created /home/Ansible-Workdesk) to avoid confusion about where the files land.

ansible -m setup --tree out/ test

5. At this point, you will have several files created in a tree format, i.e., one file per server, named after the server and containing JSON information about its inventory.

Getting Hands-on with Ansible-cmdb

6. Now, you will have to install Ansible-cmdb, which is pretty fast and easy. Do make sure that you satisfy all the requirements before installation:

git clone https://github.com/fboender/ansible-cmdb
cd ansible-cmdb && make install

7. To convert the JSON files into HTML, use the following command:

ansible-cmdb -t html_fancy_split out/

8. You should notice a directory called "cmdb" which contains some HTML files. Open index.html to view your server inventory system.

Tweaking the default template

9. As mentioned previously, some information is not available by default in the index.html template. You can tweak the /usr/local/lib/ansible-cmdb/ansiblecmdb/data/tpl/html_fancy_defs.html page and add more content, for example the 'uptime' of the servers. To make the "Uptime" column visible, add the following line in the "Column definitions" section:


{"title": "Uptime",        "id": "uptime",        "func": col_uptime,         "sType": "string", "visible": True},

Also, add the following lines in the "Column functions" section:

<%def name="col_uptime(host, **kwargs)">
${jsonxs(host, 'ansible_facts.uptime', default='')}
</%def>

Whatever comes after the dot in ansible_facts.<xxx> is the parent key in the JSON file. Repeat step 7. Here is how the end result looks.

Photo credits: Ferry Boender

Getting beyond Ansible-cmdb

Now, imagine that you want to include a specific application version (for example, the VMware tool version) in the HTML inventory file. As mentioned in part 4, I created the directory /home/Ansible-Workdesk. This is where the "out" and "cmdb" directories have been created.

10. Create another directory called /home/Ansible-Workdesk/out/other_info/vmwaretool. I use this directory to deposit another JSON file, for the VMware tool version, after launching a playbook. Here is an extract from my InventoryUsingAnsibleCMDB.yml Playbook.

# Gather the default facts
- setup:
  register: setup_res

# Grab the VMware tool version from the guest
- command: vmware-toolbox-cmd -v
  register: vmwareversion

# Re-parent the command output under the key "vmwareversion"
- set_fact:
    vmwareversion: '{ "vmwareversion": {{ vmwareversion.stdout_lines }} }'
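Once set_fact has run, the JSON dropped into the vmwaretool directory would look something like this (the version string below is made up purely for illustration):

{ "vmwareversion": ["10.0.9 (build-3917699)"] }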

You can view the whole Ansible Playbook here on my Github.

11. Once the playbook has been executed, you will have identically named files in /home/Ansible-Workdesk/out and /home/Ansible-Workdesk/out/other_info/vmwaretool.

12. However, the content will be different. The files in the "out" directory will contain JSON data about the default Ansible inventory, whilst the one in the "vmwaretool" directory will contain a JSON file about the VMware tool version, having "vmwareversion" as its parent key. I changed the parent from "stdout_lines" to "vmwareversion" using the set_fact module in Ansible.

13. By now, you are ready to tweak html_fancy_defs.html again, as described in part 9. Both the Column definitions and Column functions sections need to be appended to. Here is the line to be added in the Column definitions section:

{"title": "VMware Tool",        "id": "vmwareversion",        "func": col_vmwareversion,         "sType": "string", "visible": True},

And that of the Column functions section:

<%def name="col_vmwareversion(host, **kwargs)">
${jsonxs(host, 'vmwareversion', default='')}
</%def>

14. Repeat the steps of part 7, this time including the "vmwaretool" directory:


ansible-cmdb -t html_fancy_split out/ out/other_info/vmwaretool/

If you are able to create an Ansible Playbook that produces valid JSON files by merging those in the vmwaretool directory with those in the out directory, please comment below; I would like to hear more about it.

Tips:

  • More Playbooks can be found on my Ansible-Playbooks Github repository.
  • With regard to part 3, if direct root access has been disabled on the destination servers, you can use -u <username>, which will let you connect to the servers.
  • The ansible-cmdb command also allows you to generate a CSV file.
  • Part 10 lays emphasis on a separate JSON file. If you have been able to merge both outputs into the same JSON file created by the default Ansible inventory, please comment below.
  • The groups in the Ansible hosts file can also be added to the server inventory HTML file. Please see the ansible-cmdb docs for more information.

Some tips with Ansible Modules for managing OS and Application

In 2016, I published some articles on Ansible: Getting started with Ansible deployment, which provides a guide to getting started with Ansible, setting up the SSH key, and other basic stuff; an article about LVM configuration on CentOS; one on updating Glibc on a Linux server following a restart of the service; and another with some more details about Ansible playbooks, which could be helpful to get started with.

It is almost two years since I published those articles, and the concepts of Ansible remain the same. We now have other tools, such as Ansible Galaxy and Ansible Tower, to ease many more tasks with this agentless tool. On top of that, there is also the possibility of performing agentless monitoring using Ansible; in future articles, I will get into some more details about this, such as using Ansible to perform monitoring on servers. The concept remains the same; however, it is important to make sure that the modules used are compatible with your version of Ansible, otherwise you might end up with deprecated modules. The Ansible Playbook's output will give you an indication of which servers it has failed or succeeded on. You will also have access to the <PlaybookName>.retry file, which will show you all failed servers.


When using Ansible, always make sure that you are on the official documentation. Each version of Ansible is well documented on the official website.

These days I have written a few playbooks. Let's see some interesting stuff that Ansible can do.

Ansible can edit files using a bulletproof approach. Instead of copying files from one destination to the other, we can edit them in place. Here is a sketch of one such type of action:
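This is a minimal, hypothetical example; the file, pattern, and validate command are illustrative rather than taken from my playbooks:

# Hypothetical example: edit a line in place, keep a backup, and have
# the result validated before it replaces the original file.
- lineinfile:
    dest: /etc/sudoers
    regexp: '^%wheel'
    line: '%wheel ALL=(ALL) NOPASSWD: ALL'
    backup: yes
    validate: 'visudo -cf %s'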

Another interesting use is the Ansible shell module, with which you can fire shell commands remotely from the Ansible playbook. For example, removing specific users from a specific group using the shell module, as sketched below:
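A sketch of what such a task could look like; the user and group names are made up, and gpasswd is only one of several commands that could do the job:

# Hypothetical example: drop user "jdoe" from group "developers"
- shell: gpasswd -d jdoe developers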


You can also delete a specific user along with their home directory:
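Here the user module handles it natively; the username is again illustrative:

# Hypothetical example: remove user "jdoe" together with the home directory
- user:
    name: jdoe
    state: absent
    remove: yes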

Do check out my Github Repository to have access to my Ansible Playbooks.

Chef workstation and a basic cookbook

Since the main job of a system administrator is to maintain systems, we keep repeating ourselves, which is kind of boring, and we have to dig into our memory for the previous configurations we set up on a machine. No wonder manual consistency checks need to be carried out on server configurations, which can span thousands of machines. Chef is just another tool to get rid of these situations. It is a configuration management tool written in Ruby and Erlang for IT professionals. Compared to Puppet, which has only a Workstation and a Server, Chef has three components: the Chef Server, the Chef Workstation and the Chef Node.

Photo credits: Linode.com

The cookbooks are written on the Workstation, then uploaded to the Chef Server (service), and executed on the nodes. Chef nodes can be physical, virtual or directly in the cloud. Normally, Chef nodes do not communicate directly with the workstation. Let's not focus on the installation here.

Let’s first get into the workstation.

1. On the workstation, download and install the Chef client from the client download page. In my case, I am on a CentOS 7 virtual machine.

wget https://packages.chef.io/stable/el/7/chef-12.12.15-1.el7.x86_64.rpm
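The download is then presumably followed by installing the package, for example with rpm -ivh chef-12.12.15-1.el7.x86_64.rpm (adapt to your environment).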

2. After installation, you should notice four utilities already available: chef-apply, chef-client, chef-shell and chef-solo.

3. Now, we are going to create a cookbook. Since Chef uses a DSL (domain-specific language), the file created should end with the extension .rb. Here is an example called file.rb. The first line names the file resource, which will manage a file on the machine; the file's content will be the line 'Hello Tunnelix'.

file 'file.txt' do
  content 'Hello Tunnelix'
end

4. The tool chef-apply can be used to run it, as shown below.
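Concretely, that boils down to the following invocation, run from the directory containing file.rb:

chef-apply file.rb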

Screenshot from 2016-08-07 21-49-07

5. You will also notice that file.txt has been created in the current directory, since no path was specified.

Screenshot from 2016-08-07 21-50-24

Tips:

  • If the content of file.rb (refer to point 3) has not been modified and you fire chef-apply again, you will notice a prompt that it is already 'up to date', which means it reduces disk I/O as well as bandwidth.
  • A string must be enclosed in double quotes when it uses variables. You cannot nest a single quote inside another single quote; it won't work!

Chef always checks and refers to the resources and attributes in the cookbook to execute an order, i.e., to cook a dish. The point is that Chef focuses on the DSL, with the aim of describing what the modifications need to be. Chef keeps servers in a consistent state.

Configure your LVM via Ansible

Some days back, I gave some explanations about LVM, such as the creation of LVM partitions, a detailed analogy of the LVM structure, as well as tips for using PVMOVE. We can also automate such tasks using the power of Ansible. Cool, isn't it?


So, I have my two hosts, Ansible1 and Ansible2. Ansible1 is the controller and has Ansible installed; Ansible2 is the host where a disk will be added to the LVM.

1. Here is the status of the disks on Ansible2, where a disk, /dev/sdc, has been added.

Screenshot from 2016-03-08 11-05-29

2. I have now added a 1GB disk through the VirtualBox settings; you can refer to the past article on LVM for how to add the disk. As we can see on the screenshot below, the disk sdc of size 1GB has been added on the machine Ansible2, and I have formatted it as LVM.

Screenshot from 2016-03-08 11-22-17

3. Let's now get onto the controller machine, Ansible1, and prepare our Playbook; you can view it on my Git account here. The aim is to take 500MB from /dev/sdc1 to create a new VG called vgdata containing an LV called lvdisk.
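For a rough idea of the shape of such a playbook, here is a minimal sketch using Ansible's lvg and lvol modules; the parameters mirror the description above but are not the exact contents of my playbook:

# Hypothetical sketch: build vgdata on /dev/sdc1 and carve out lvdisk
- name: Create the volume group vgdata on /dev/sdc1
  lvg:
    vg: vgdata
    pvs: /dev/sdc1

- name: Create a 500MB logical volume called lvdisk
  lvol:
    vg: vgdata
    lv: lvdisk
    size: 500m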

4. Here is the output:

Screenshot from 2016-03-08 11-36-00
