Category: Linux Application

Project Tabulogs: Linux Last logs on HTML table with Ansible

Project Tabulogs gives you the ability to display logs on an HTML tabular-format page using an Ansible Playbook. Some days back, I shared some ideas on how to use Ansible to create an agentless server inventory with an interesting HTML format. In this post, we will see how to present logging information from lastlog in an HTML tabular format. You can apply the same principle to any logs and present them through an HTML template. One of the advantages of Ansible is that you can also launch shell commands, which allows you to automate several tasks, whether remote or local. Many times, I have come across non-technical personnel who want to see logins and logouts of staff on specific servers. The goal of this project is to present accurate and reliable information to non-technical persons in an IT organization. It’s also a way to peruse logins and logouts of users more easily.

All rights reserved:


Limitations and Solutions

However, there are some limitations to presenting bulk information such as logs on an HTML page. Imagine having hundreds of servers, which could amount to more than 20,000 lines. This would make the page impossible to load and might crash the browser. To remediate this situation, a database such as SQLite could be interesting. Otherwise, AJAX can be used to fetch the data page by page. Since I’m keeping all the information in JSON format in a JavaScript page, I decided to make use of pagination. A search box covering all columns and rows will also come in handy.

Now, the question is how to report information which keeps on changing. Let’s say every month you want a report of users connected to your organization’s servers, in JSON format, inside a JavaScript page. Here is where Ansible comes in useful. I created a Playbook to build the JSON array itself, and the JavaScript page, using the shell module and Linux awk commands. Of course, there are other tasks Ansible will perform, such as fetching the information remotely and updating the HTML page (for example, the date the report was created). The directory can then be compressed and sent easily to anyone. So, no database needed! Cool, isn’t it? However, if you want to adopt this system for really huge volumes of log data, it might work but could be really slow, hence the need for a database.

You can clone the repo from my GitHub repository.

HTML, AngularJS and JSON

I created a simple HTML page which calls AngularJS to keep the information in a JSON array. The HTML page will be static. However, for your own use, if you want to add some more columns, feel free to edit the section below. It is located at TabuLogs/master/perimeter1/index.html

 <tr class="tableheader">
                        <tr ng-repeat="rec in SampleRecords | filter:q | startFrom:currentPage*pageSize | limitTo:pageSize">
                            <td>{{ rec.Hostname }}</td>
                            <td>{{ rec.Username }}</td>
                            <td>{{ rec.Login }}</td>
                            <td>{{ rec.TimeTaken }}</td>
                            <td>{{ rec.IPAddress }}</td>
                            <td>{{ rec.Perimeter }}</td>
                            <td>{{ rec.Application }}</td>

Since I have used lastlog from Linux, I called the page “Linux Login Management”. Otherwise, you can also filter any logs, such as Apache or secure logs. Some modifications will have to be carried out on the awk command or by using Ansible Jinja filters.

You can also point to a logo on the HTML page, kept in the images folder:

<body ng-app="AngTable" ng-controller="Table">
      <img src="images/logo1.png" align="right" alt=" " width="380" height="180"> 
      <img src="images/logo2.png" alt="cyberstorm" align="top-left" width="380" height="180">
    <h2>Login Management for my servers</h2>

The most interesting part is the data, which is stored in a JSON array. In this particular example, we have information about the hostname, username, login, time taken, IP address, application, and perimeter.


The JSON array will be fetched by AngularJS to render the HTML table.

As mentioned previously, if you tried to load the whole JS page into your browser at once, it would crash; to overcome this, pagination comes in handy.

Messing around with Linux AWK command and the Ansible Playbook

The Linux awk command is pretty powerful for concatenating all the logs together. More interesting still is the conversion of the logs into JSON format. The playbook is located at TabuLogs/AnsiblePlaybook/AuditLoginLinux.yml

When the Playbook is launched, the first awk command creates a file named after the machine’s hostname in /tmp; inside that file, the data comprises the hostname, username, login, time taken, and IP address. To get back the logs from the previous month, I used Date=`date +%b -d ‘last month’`

Assuming you have rotated logs for /var/log/wtmp (weekly, monthly, or any other range), the loop is supposed to find logs for the last month only. If you have kept wtmp logs for years, another method is needed to fetch the logs for the actual year.

Date=`date +%b -d 'last month'`; t=`hostname -s` ; for i in `ls /var/log/wtmp*` ; do last -f $i ; done | grep $Date | awk -v t="$t" '{print t, $1, $4 "-" $5 "-" $6 "-" $7, $9$10, $3 }' | egrep -v 'wtmp|reboot' > /tmp/$t

Once the logs have been created on the remote hosts, they are fetched by Ansible and kept on the local server. The remote files are then deleted.

We also need to merge all the data into one single file and remove all blank lines, which is done using the following command:

awk 'NF' /Tabulogs/AnsibleWorkDesk/Servers/* | sed '/^\s*$/d' > /Tabulogs/AnsibleWorkDesk/AllInOne
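As a quick sanity check of that merge step, the same pattern can be exercised on throwaway files (the file names and log lines below are illustrative, not the real fetched server files):

```shell
cd "$(mktemp -d)"
# Two per-server files as fetched from remote hosts, with stray blank lines
printf 'server1 bob Oct-2-09:00 00:10 10.0.0.7\n\n' > serverA
printf '\nserver2 alice Oct-1-10:00 01:23 10.0.0.5\n' > serverB
# awk 'NF' keeps only lines with at least one field; the sed pass
# additionally drops whitespace-only lines
awk 'NF' serverA serverB | sed '/^\s*$/d' > AllInOne
wc -l < AllInOne   # 2
```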

Assuming that the file located at /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter contains information about the server, application, and perimeter, the following awk command will append that information to the table created remotely on the hosts.

awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print}' /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter /Tabulogs/AnsibleWorkDesk/AllInOne > /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication

An example of the mapping file format (application, perimeter, hostname) is:

Nginx dev server2

Apache prod server1

Nginx dev server4
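Put together, a minimal end-to-end sketch of this join (with hypothetical hostnames and a single log row) behaves like this:

```shell
cd "$(mktemp -d)"
# Hypothetical mapping file: application, perimeter, hostname
printf 'Nginx dev server2\nApache prod server1\n' > rhel7_application_perimeter
# One merged lastlog row: hostname, user, login, time taken, IP
printf 'server2 alice Oct-1-10:00 01:23 10.0.0.5\n' > AllInOne
# First pass (FNR == NR) indexes the mapping by hostname; second pass
# appends "application perimeter" to every log row whose host is known
awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print }' \
    rhel7_application_perimeter AllInOne
# -> server2 alice Oct-1-10:00 01:23 10.0.0.5 Nginx dev
```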

After joining all the data together in a single table, the following awk command converts each row into JSON format:

awk '{print "{" "\"" "Hostname" "\"" ":" "\"" $1"\"" "," "\"" "Username" "\"" ":" "\"" $2"\"" "," "\"" "Login" "\"" ":" "\"" $3"\"" "," "\"" "TimeTaken" "\"" ":" "\"" $4"\"" ",""\"" "IPAddress" "\"" ":" "\"" $5"\"" "," "\"" "Application" "\"" ":" "\"" $6"\"" "," "\"" "Perimeter" "\"" ":" "\""$7"\"""}" "," }' /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication > /Tabulogs/AnsibleWorkDesk/perimeter1/table.js
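The same transformation can be written more compactly with printf; the following is an equivalent sketch run on one hypothetical joined row:

```shell
# Each input field maps to one JSON key, in the same order as above
echo 'server2 alice Oct-1-10:00 01:23 10.0.0.5 Nginx dev' |
awk '{printf "{\"Hostname\":\"%s\",\"Username\":\"%s\",\"Login\":\"%s\",\"TimeTaken\":\"%s\",\"IPAddress\":\"%s\",\"Application\":\"%s\",\"Perimeter\":\"%s\"},\n", $1, $2, $3, $4, $5, $6, $7}'
# -> {"Hostname":"server2","Username":"alice","Login":"Oct-1-10:00","TimeTaken":"01:23","IPAddress":"10.0.0.5","Application":"Nginx","Perimeter":"dev"},
```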

It does not end here; we also need to add the JS code around it. Using the lineinfile module of Ansible, writing at the beginning of the file is pretty easy. Note that with insertbefore=BOF each item is inserted at the very top, so the items below end up in reverse order in the final file (the var app line first).

lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line={{ item.javascript }} insertbefore=BOF
with_items:
       - { javascript: "$scope.SampleRecords=[ " }
       - { javascript: "$scope.q = ''; " }
       - { javascript: "$scope.pageSize = 10; " }
       - { javascript: "$scope.currentPage = 0; " }
       - { javascript: "app.controller('Table', ['$scope', '$filter', function ($scope, $filter) { "}
       - { javascript: "var app = angular.module(\"AngTable\", []);  "}

The same method is used to write at the end of the file and complete the table.js file.

lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line={{ item.javascript }} insertafter=EOF
with_items:
       - { javascript: " ]; " }
       - { javascript: "$scope.getData = function () { " }
       - { javascript: "return $filter('filter')($scope.SampleRecords, $scope.q) " }
       - { javascript: " }; "}
       - { javascript: "$scope.numberOfPages=function(){ "}
       - { javascript: "return Math.ceil($scope.getData().length/$scope.pageSize); "}
       - { javascript: "}; "}
       - { javascript: "}]); "}
       - { javascript: "app.filter('startFrom', function() { "}
       - { javascript: "    return function(input, start) { "}
       - { javascript: "start = +start; "}
       - { javascript: "return input.slice(start); "}
       - { javascript: " } "}
       - { javascript: "}); "}

After writing at the end of the file, table.js is now complete.

Everything is assembled using the Ansible playbook. The Playbook has been tested on RedHat 7 servers. I believe it can also be adapted for other environments.

Final result

Here is an idea of what the final result will look like:

Some pieces of information are blurred for security reasons.

Future Enhancements

I’m excited to share this article, though I know there are many more improvements that can be made, such as:

  • The awk commands can be combined.
  • Removal of backticks in the shell commands.
  • Shell commands like rm -rf will be replaced using the file module.
  • Some deprecated HTML tags need to be replaced.
  • Code sanitization.

My future goal is to improve the Ansible Playbook and the source code itself. Maybe someone else has a much better way of doing it; I would be glad to hear more, so please do comment below with your ideas. If you believe some improvements can be made to the Playbook, feel free to send commits to my GitHub repository or comment below.

As usual, a plan is needed. Consider reading the “tips” section below which might give you some hints.


  • There are, however, some other limitations with the last command. Some arguments will not be present if you are using an old util-linux package. Consider updating the package to be able to filter the last command easily.
  • If you can group your servers by category or environment, it would be helpful.
  • There will be other versions of project Tabulogs to create a better and faster Ansible playbook. Let’s call this one version 1.0.

An agentless servers inventory with Ansible & Ansible-CMDB

Building an agentless inventory system for Linux servers from scratch is a very time-consuming task. To have precise information about your server inventory, Ansible comes in very handy, especially if you are restricted from installing an agent on the servers. However, there are some pieces of information that Ansible’s default inventory mechanism cannot retrieve. In that case, a Playbook needs to be created to retrieve those pieces of information; examples are the VMware Tools version and other application versions which you might want to include in your inventory system. Since Ansible makes it easy to create JSON files, its output can be easily manipulated for other interesting tasks, say a static HTML page. I would recommend Ansible-CMDB, which is very handy for such conversion: it allows you to create a pure HTML file based on the JSON facts that Ansible generated. Ansible-CMDB is another amazing tool, created by Ferry Boender.

Photo credits:

Let’s have a look at how the agentless server inventory with Ansible and Ansible-CMDB works. It’s important to understand the prerequisites before installing Ansible. Here are other articles which I have published on Ansible:

Ansible Basics and Pre-requisites

1. In this article, you will get an overview of what Ansible inventory is capable of. Start by gathering the information that you will need for your inventory system. The goal is to make a plan first.

2. As explained in the article Getting started with Ansible deployment, you have to define a group and record the names of your servers (which can be resolved through the hosts file or a DNS server) or their IPs. Let’s assume that the name of the group is “test“.

3. Launch the following command to see a JSON output describing the inventory of the machine. As you may notice, Ansible has fetched all the data.

ansible -m setup test

4. You can also append the output to a specific directory for future use with Ansible-cmdb. I would advise creating a specific directory (I created /home/Ansible-Workdesk) to avoid confusion about where the files are written.

ansible -m setup --tree out/ test

5. At this point, you will have several files created in a tree format, i.e., one file per server, named after the server and containing JSON information about its inventory.

Getting Hands-on with Ansible-cmdb

6. Now, you will have to install Ansible-cmdb which is pretty fast and easy. Do make sure that you follow all the requirements before installation:

git clone
cd ansible-cmdb && make install

7. To convert the JSON files into HTML, use the following command:

ansible-cmdb -t html_fancy_split out/

8. You should notice a directory called “cmdb” which contains some HTML files. Open index.html to view your server inventory system.

Tweaking the default template

9. As mentioned previously, there is some information which is not available by default in the index.html template. You can tweak the /usr/local/lib/ansible-cmdb/ansiblecmdb/data/tpl/html_fancy_defs.html page and add more content, for example the ‘uptime‘ of the servers. To make the “Uptime” column visible, add the following line in the “Column definitions” section:

{"title": "Uptime",        "id": "uptime",        "func": col_uptime,         "sType": "string", "visible": True},

Also, add the following lines in the “Column functions” section :

<%def name="col_uptime(host, **kwargs)">
${jsonxs(host, 'ansible_facts.uptime', default='')}
</%def>

Whatever comes after the dot in ansible_facts.<xxx> is the key in the JSON file. Repeat step 7. Here is how the end result looks.

Photo credits: Ferry Boender

Getting beyond Ansible-cmdb

Now, imagine that you want to include a specific application version (for example, the VMware Tools version) in the HTML inventory file. As I mentioned in part 4, I created the directory /home/Ansible-Workdesk. This is where the “out” and “cmdb” directories have been created.

10. Create another directory called /home/Ansible-Workdesk/out/other_info/vmwaretool. I use this directory to deposit another JSON file for the VMware Tools version after launching a playbook. Here is an extract from my InventoryUsingAnsibleCMDB.yml Playbook:

- setup:
  register: setup_res

- command: vmware-toolbox-cmd -v
  register: vmwareversion

- set_fact:
  vmwareversion: '{ "vmwareversion": {{ vmwareversion.stdout_lines }} }'
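To actually land that fact as a per-host file in the vmwaretool directory, a follow-up task along these lines can be used (a hedged sketch only — the paths and the use of delegate_to are my assumptions; the real playbook is on GitHub):

```yaml
# Hypothetical sketch: write the re-parented fact as a JSON file per host
# on the Ansible control node (paths are assumptions)
- copy:
    content: "{{ vmwareversion }}"
    dest: "/home/Ansible-Workdesk/out/other_info/vmwaretool/{{ inventory_hostname }}"
  delegate_to: localhost
```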

You can view the whole Ansible Playbook here on my Github.

11. Once the playbook has been executed, you will have identically named files in /home/Ansible-Workdesk/out and /home/Ansible-Workdesk/out/other_info/vmwaretool.

12. However, the content will be different. The files in the “out” directory contain JSON about the default Ansible inventory, whilst the one in the “vmwaretool” directory contains JSON about the VMware Tools version, with “vmwareversion“ as its parent key. I changed the parent from “stdout_lines“ to “vmwareversion“ using the set_fact module in Ansible.

13. By now, you are ready to tweak the html_fancy_defs.html again as described in part 9. Both the Column definitions and Column functions need to be appended. Here is the line to be added in the Column definitions section:

{"title": "VMware Tool",        "id": "vmwareversion",        "func": col_vmwareversion,         "sType": "string", "visible": True},

And that of the Column functions section:

<%def name="col_vmwareversion(host, **kwargs)">
${jsonxs(host, 'vmwareversion', default='')}
</%def>

14. Repeat step 7, this time including the “vmwaretool” directory:

ansible-cmdb -t html_fancy_split out/ out/other_info/vmwaretool/

If you are able to create an Ansible Playbook that produces valid JSON files by merging those in the vmwaretool directory into those of the out directory, please comment below. I would like to hear more about it.


  • More Playbooks can be found in my Ansible-Playbooks GitHub repository.
  • With regards to part 3, if direct root access has been disabled on the destination servers, you can use -u <username>, which will let you connect to the servers.
  • The ansible-cmdb command also allows you to generate a CSV file.
  • Part 10 lays emphasis on a separate JSON file. If you have been able to merge both outputs into the same JSON file created by the default Ansible inventory, please comment below.
  • The groups in the Ansible hosts file can also be added to the server inventory HTML file. Please see the ansible-cmdb docs for more information.

Some tips with Ansible Modules for managing OS and Application

In 2016, I published some articles on Ansible: Getting started with Ansible deployment, which provides a guide to getting started with Ansible, setting up the SSH key, and other basics. Another article covers LVM configuration on CentOS, as well as updating Glibc on a Linux server following a restart of the service. There is also an article with some more details about Ansible playbooks which could be helpful to get started with.

It is almost two years since I published those articles, and I notice that the concepts of Ansible remain the same. We now have other tools such as Ansible Galaxy and Ansible Tower to ease many more tasks using this agentless tool. On top of that, there is also the possibility to perform agentless monitoring using Ansible; in future articles, I will go into more detail about this, such as using Ansible to monitor servers. The concept remains the same; however, it is important to make sure that the modules used conform to your version of Ansible, otherwise you might end up with deprecated modules. The Ansible Playbook’s output will give you an indication of which servers it failed or succeeded on. You will also have access to the <PlaybookName>.retry file, which lists all the failed servers.

When using Ansible, always make sure that you are on the official documentation. Each version of Ansible is well documented on the official website.

These days I have written a few playbooks. Let’s see some of the interesting things Ansible can do.

Ansible can edit files in place using a bulletproof approach. Instead of copying files from one destination to the other, we can edit them directly. Here is an extract of one such type of action:
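No code extract survived here, so the following is a hedged sketch of the kind of in-place edit meant — the file path, regexp, and line are illustrative, and backup: yes keeps a timestamped copy of the original file:

```yaml
# Hypothetical example: enforce an sshd setting in place, keeping a backup
- lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: 'PermitRootLogin no'
    backup: yes
```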

Another interesting use is the Ansible shell module, with which you can fire shell commands remotely from the Ansible playbook, for example removing specific users from a specific group:
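A hedged sketch of such a task (the user names and group are illustrative):

```yaml
# Hypothetical example: drop each listed user from the "developers" group
- shell: gpasswd -d {{ item }} developers
  with_items:
    - jdoe
    - asmith
```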

You can also delete a specific user along with their home directory:
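The user module can do this directly; a minimal sketch with an illustrative user name:

```yaml
# Hypothetical example: remove the user and their home directory
- user:
    name: jdoe
    state: absent
    remove: yes
```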

Do check out my Github Repository to have access to my Ansible Playbooks.

XpressLiteCoin – Your Litecoin payment gateway

As promised some days back on my Facebook page, I am blogging about setting up a Litecoin button on your website for payment or donation purposes, as I did myself (see the top right corner of the blog). I would strongly suggest using the XpressLiteCoin payment gateway for this type of transaction. Some days back, during operation JASK, I contributed to the Litecoin repository and thought, why not set up a Litecoin donation button? The funds received will be used to renew my server hosting and domain. Below are some instructions to start with.

For those who are not well acquainted with cryptocurrencies: Litecoin is one among many, and it is a fork of Bitcoin. “Litecoin is an experimental digital currency that enables instant payments to anyone, anywhere in the world. Litecoin uses peer-to-peer technology to operate with no central authority: managing transactions and issuing money are carried out collectively by the network. Litecoin Core is the name of the open source software which enables the use of this currency.” – Litecoin

Imagine, you want to receive payments for your business in a more secure way. Of course, when it comes to cryptocurrencies, no one wants to take the risk. XpressLiteCoin is here to provide merchants with a cheap and convenient way to integrate Litecoin in their business payment process. – XpressLiteCoin

How to start with XpressLiteCoin payment gateway?
1. First, you will need to register on the website. This is pretty straightforward. Make sure you receive the confirmation email once you have signed up.

Create a Litecoin address.

2. You can create a paper-based wallet, but the procedure can be lengthy and you will have to secure your key and record all transactions yourself. Using an online wallet such as Jaxx is pretty simple.

3. After installing Jaxx, you will have the option to create a new wallet.

4. Then, you will have the option to choose the paper-based wallet or an online wallet which is easier.

You can create your wallet and scan the QR code to use the same wallet on a mobile device such as Android, iOS, etc.

5. After configuration, you will have an LTC Address.

Merge your Litecoin address with XpressLiteCoin gateway

6. Save your Litecoin address and enter it at the prompt which you receive when logging in for the first time, as shown below:

By this time, you should have been able to access the dashboard as a user. Now it’s time for some basic installation on the server.

Some basic installations on the server

7. On the server, install the “npm” package manager:

yum install npm

8. You can also upgrade your version of npm as follows:

npm install npm -g --ca=""

9. Tell npm to trust the well-known certificate authorities for the current version of npm:

npm config set ca ""

10. Some installations with npm package manager which are required:

npm install express
npm install request
npm install body-parser

11. You will also need to download xpresslitecoin.gz from the following link, as shown below:

12. To integrate XpressLiteCoin on your website, go to the documentation page and/or click on guide. You will find integration.pdf, which contains a piece of JavaScript that will be needed in your application.

13. There are two parameters in the code to tweak: first, the port number your application will be listening on; second, the token, which you will get from the XpressLiteCoin dashboard under the merchant settings option.

14. Copy the token and insert it at line 10 of the code. Example:

const api_token = "XXXX<Token Value here XXXX";

15. By default, the application runs on port 8080. If you want to change it, feel free.

16. You will also need to run your application. I would, however, recommend having the service start automatically on boot:

node xpresslitecoin.js
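One simple way to autostart it is a systemd unit along these lines (a sketch only — the install path, unit file name, and service user are my assumptions):

```ini
# /etc/systemd/system/xpresslitecoin.service (hypothetical path)
[Unit]
Description=XpressLiteCoin payment gateway listener
After=network.target

[Service]
ExecStart=/usr/bin/node /opt/xpresslitecoin/xpresslitecoin.js
Restart=on-failure
User=nobody

[Install]
WantedBy=multi-user.target
```

Then enable it with systemctl enable --now xpresslitecoin.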

17. Since the application needs to be inserted as a plugin on your website, you can create a ProxyPass on your web server. For an Nginx proxy, use the following parameters:

location /xpresslitecoin/ {

    proxy_set_header Host $host;

    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_set_header X-Real-IP $remote_addr;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # Assuming the node service listens on the default port 8080 mentioned above
    proxy_pass http://127.0.0.1:8080/;
}

18. For Apache HTTPD ProxyPass, see the documentation here.

Create the payment button

19. By now, you should be able to run the node service with the XpressLiteCoin application. However, to insert a button on your website to receive payments through the gateway, you will need to insert a few lines of JavaScript code:

<script type="text/javascript" src="xpresslitecoin.js"></script>
<button id="xpl-donate"> <img src="LocationToYourImage.png" alt="Please Donate"> </button>

20. One issue you might encounter is with CSP enabled (which is a good thing): you will need to make sure that you have an exclusion for the plugin.

IETF 100 hackathon on TLS 1.3

Some days back, The Register mentioned us preparing for the IETF 100 hackathon. Hooray! We did it, and the hard work finally paid off, thanks to the core team and the whole team. After registering on the IETF (Internet Engineering Task Force) website, the team set itself on the TLS 1.3 API source code. We were all focused on the OpenSSL code.

Once in our office, we set up the network and our equipment. Check out Logan’s blog to get an idea of how things went. True, we struggled in the beginning, but finally we could see the light at the end of the tunnel. Patience is all you need, plus a calm mind to study how things stand in the code. Testing was then carried out to confirm the beauty of the TLS 1.3 code in our chosen projects. You can also view the TLS tutorial which explains the objectives of TLS 1.3, for example the mitigation of pervasive monitoring.

Here are some hints about the security improvements in TLS 1.3:

  • RSA key exchange was removed.
  • Stream ciphers were reviewed.
  • Removal of the compression mechanism, which could influence which data is sent.
  • Renegotiation was removed.
  • SHA-1 and legacy block ciphers were removed.
  • Use of modern cryptography like AEAD.
  • Use of modern key establishment such as PSK.

For more details, see this blog post from OpenSSL. We were also working together with the TLS team in Singapore, which was led by Nick Sullivan, champion at the IETF TLS hackathon.

After the IETF Hackathon, the good work done by the team was announced publicly on the IETF channel.

The team at the beach 🙂

More links :

PS: If you have any more links related to the IETF Hackathon on TLS 1.3, let me know and I will add them here!

Feel free to join the community group on Facebook and follow us on our Twitter account.