IETF 103 hackathon remotely by cyberstorm.mu – Day 1

Day 0 of the IETF 103 hackathon was really fun, and our two first-timers worked pretty hard. Day 1 was an intense one: everyone was busy with their projects. Our first pull request, from Muzaffar on the TLS 1.3 track, was already merged, while kheshav still had the testing part to complete for HTTP 451. I had already sent a pull request for SSH for the NetSSH Ruby library. We discussed the implementation and testing parts at length. Nathan, Jeremy, and Rahul also worked heavily on the TLS 1.3 implementation. For IETF 103, we decided to skip the interoperability testing and focus more on implementation.


When it comes to goodies, WolfSSL congratulated us on a job well done and sent us several goodies.

On the SSH side, we have to deprecate RC4 in several projects such as NetSSH and JSch, a Java library. On the TLS 1.3 side, SNI support was added to Httperf, while TLS 1.3 library work is still in progress for C# and Lua. We also have one module each for Drupal and Django for HTTP 451.


IETF 103 hackathon remotely by cyberstorm.mu – Day 0

Today is the first day of the IETF 103 remote hackathon in Mauritius. The cyberstorm.mu team is ready to lead and participate in all three tracks as champions for the event: TLS 1.3, SSH, and HTTP 451.

Members participating in the event are working on:

  • TLS 1.3 protocol
  • HTTP 451 protocol
  • SSH protocol

Our first-timers for the IETF 103 hackathon are Kheshav Sewnundun, creator of XpressLiteCoin.com and DevOps Engineer at Linkbynet Indian Ocean, and Diresh Soomirtee, Junior System Administrator at Linkbynet Indian Ocean.

Before the hackathon, we did some shopping for basic amenities. We reached our quarters at Mauritius Villas, a bungalow in Pointe aux Piments, at around 13:00 hrs. The network was set up with two different ISPs so that, in case of a breakdown, we could still stay live during the hackathon.

We also celebrated Kifah’s birthday during the hackathon.

At cyberstorm.mu, it’s always the pool that brings the most relaxing moments. It’s also Halloween week, and some of the guys even brought their Halloween costumes and went swimming in the pool.


Most of us have already started working on our projects, and some pull requests have already been sent. More testing is in progress, as well as the creation of several patches. We even needed a discussion on open source licensing to make sure that there are no incompatibility issues between different licenses. We made a plan for the three tracks we are championing, and things look set to start pretty well.

By this time, it’s already late here. I really need some sleep before starting Day 1 of the IETF 103 hackathon 🙂


Operation KSK-ROLL by cyberstorm.mu – KSK Rollover Explained

The last cyberstorm.mu event was on open source licensing with Dr. Till Jaeger at Flying Dodo, Bagatelle Mall, Mauritius. We discussed several topics concerning cybersecurity laws, trademarks, open source licensing issues, etc. Dr. Till Jaeger appreciated the meetup and encouraged us to evangelize more about open source. The event was organized by Loganaden Velvindron, member of cyberstorm.mu.


Dr. Till Jaeger brought a surprise gift to Logan 🙂

I should say that we were already planning our next event: a hackathon, Operation KSK-ROLL, by the cyberstorm.mu team, which was pretty easy, important, and successful. Dr. Till Jaeger congratulated us for creating the cyberstorm.mu team. Several pull requests were sent to many repositories to encourage developers to adopt the new key.

What is Operation KSK-ROLL?

At cyberstorm.mu, all non-IETF hackathons are usually given a name. This time, for the KSK rollover hackathon, we chose 'Operation KSK-ROLL'. Operation KSK-ROLL was started to make sure that software is up to date with the new KSK key.

What is the KSK rollover?

The DNS KSK rollover happened on 11 October at 11:00 UTC. Rolling the KSK means generating a new cryptographic key pair (a public and a private key).

What are those keys?

The public key is distributed to those who operate validating DNS resolvers, such as ISPs, network administrators, system integrators, etc., whilst the private key is kept secret.

If it's secret, why do we need to generate another secret key?

For security purposes, the secret key is generated anew, and this ensures that DNS resolvers have a more robust security layer on top of DNS, a.k.a. DNSSEC.

What are DNS resolvers?

Every website, for example tunnelix.com, is a domain name behind an IP address. For your browser to be able to resolve the website, a DNS resolver, of which there are many located around the world, maps the domain name to its IP address. Consequently, the website is rendered in your browser.
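For example, you can observe that resolution step yourself with a quick lookup; the address returned below is only a placeholder, the real record will differ:

# illustration only: ask your resolver for the A record of the domain
$ dig +short tunnelix.com A
192.0.2.80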

What is DNSSEC?

As mentioned previously, DNSSEC (DNS Security Extensions) is a layer, coordinated by ICANN, that uses cryptographic keys to provide protection all the way from the root of the domain name system down to your browser.

How will you know if a website is DNSSEC signed?

There is a tool by Verisign Labs called DNSSEC Analyzer. You can enter the name of a domain, say tunnelix.com, and it will analyze the domain and show you the public keys and the chain of trust from the root (.), through com, down to tunnelix.com.


credits to: verisignlabs.com

Is there another way to verify it?

Yes, you can use the nslookup or dig tools to check it. In the case of the dig tool, here is an example.
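The following is a minimal sketch of such a check; the output is trimmed and the values are placeholders rather than the actual records:

# illustration only: request DNSSEC data along with the answer
$ dig +dnssec tunnelix.com A

;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; ANSWER SECTION:
tunnelix.com.   3600  IN  A      192.0.2.80
tunnelix.com.   3600  IN  RRSIG  A 8 2 3600 ( ...signature data... )

The ad (authenticated data) flag in the header and the RRSIG record in the answer section indicate that the resolver validated the DNSSEC chain for the domain.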

What is the logic behind the DIG command?

Some years back (in 2015), I explained the anatomy of the dig command. You can read more in the blog post called “Anatomy of a simple dig result“.


What is the role of the KSK?

The KSK private key is used to generate a digital signature for the ZSK. In fact, the KSK public key is stored in the DNS to be used for authenticating the ZSK. So, the KSK is a key that signs another key, the ZSK. That is why it is called the “Key Signing Key”.

So, what is the ZSK?

The ZSK (Zone Signing Key) is another private-public key pair, which is used to generate digital signatures known as RRSIGs (Resource Record Signatures). An RRSIG is a digital signature over an RRset (Resource Record set) in a zone. In fact, the ZSK public key is stored in the domain name system to authenticate the RRsets.
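If you are curious, you can list a zone's DNSKEY records to see both keys side by side. This is only a sketch, and the key material below is truncated placeholder data:

# illustration only: fetch the DNSKEY records of the zone
$ dig +short tunnelix.com DNSKEY
257 3 8 AwEAAc...
256 3 8 AwEAAb...

In the flags field, 257 identifies the KSK (its secure entry point bit is set) and 256 identifies the ZSK.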

What are RRsets?

An RRset (Resource Record set) is a group of DNS records with the same name and record type; for example, all the A records for a name form one RRset.
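As a small illustration (with placeholder addresses), the two A records below form a single RRset, and one RRSIG covers the whole set:

# illustration only: show the answer section with DNSSEC records
$ dig +dnssec +noall +answer www.example.com A
www.example.com.  300  IN  A      192.0.2.1
www.example.com.  300  IN  A      192.0.2.2
www.example.com.  300  IN  RRSIG  A 8 3 300 ( ...one signature for the whole A RRset... )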

My contributions for KSK ROLL

Please follow me on my GitHub account. One of the repositories is Nagval, which is a plugin to check the validity of one or more DNSSEC domains.

For more information about DNSSEC, ZSK, KSK, etc., I would advise checking out Cloudflare, which provides a good source of information.

Cyberstorm.mu continues to go further and beyond with innovations and more ideas to protect and secure the Internet. We believe that, though we are a small team, we will be able to recruit more people who are strongly interested in developing their skills and striving for excellence.

I also wish to seize this opportunity to thank Manuv Panchoo for designing the logo of tunnelix.com.


All rights reserved: tunnelix.com

Installing OpenSSH on Windows 2012 R2 through PowerShell

This is probably my first article on Microsoft Windows. Some days back, I was asked to perform some tasks on Windows; though I'm not really a big fan of Windows, I managed to do it. Prior to the tasks, I wanted my usual SSH capabilities to log on to the server, so I decided to install OpenSSH on the Windows 2012 R2 server. Microsoft has a repository for OpenSSH on GitHub. Another interesting thing is that SSH has now been brought to Windows Server 2016. Well, I decided to add a new category on tunnelix.com about ‘Windows‘. Comment below if you find this weird!


Please credit tunnelix.com for using the picture


1. Move into the directory where you want the file to be downloaded. In my case, it is the directory C:\Users\Administrator\Desktop:


PS C:\Users\Administrator> cd C:\Users\Administrator\Desktop

2. The installation is pretty simple. You will need to download the .zip file from the GitHub repository using the Invoke-WebRequest command. By default, Invoke-WebRequest negotiates an older TLS version that has been deprecated, so you might need to change the security protocol to TLS 1.2 using the following command:

PS C:\Users\Administrator\Desktop> [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

3. Then download the binary using Invoke-WebRequest:

PS C:\Users\Administrator\Desktop> Invoke-WebRequest -Uri "https://github.com/PowerShell/Win32-OpenSSH/releases/download/v7.7.2.0p1-Beta/OpenSSH-Win64.zip" -OutFile "powershell.zip"

4. On a fresh installation, Windows 2012 R2 does not have the Expand-Archive command, so we will use .NET directly. Add-Type loads an assembly with the necessary .NET functions into your current session; [io.compression.zipfile] is then a reference to that loaded assembly, and ::ExtractToDirectory is how you call a static method from it:

PS C:\Users\Administrator\Desktop> Add-Type -assembly "system.io.compression.filesystem"

5. Now, we can unzip the file:


PS C:\Users\Administrator\Desktop> [io.compression.zipfile]::ExtractToDirectory( 'C:\Users\Administrator\Desktop\powershell.zip','C:\Users\Administrator\Desktop' )

6. After unzipping the file, get into the directory that has been unzipped and launch the installation:

PS C:\Users\Administrator\Desktop> cd .\OpenSSH-Win64

PS C:\Users\Administrator\Desktop\OpenSSH-Win64> .\install-sshd.ps1

7. The output should look as follows:

**** Warning: Publisher OpenSSH resources are not accessible.

[SC] SetServiceObjectSecurity SUCCESS
[SC] ChangeServiceConfig2 SUCCESS
[SC] ChangeServiceConfig2 SUCCESS
sshd and ssh-agent services successfully installed

8. The following command will show the status of the SSHD service:

PS C:\Users\Administrator\Desktop\OpenSSH-Win64> get-service | findstr ssh
Stopped ssh-agent OpenSSH Authentication Agent
Stopped sshd OpenSSH SSH Server

9. Launch the service with the following command:

PS C:\Users\Administrator> Start-Service sshd

10. You might need to add a firewall rule to allow port 22 on the machine:

PS C:\Users\Administrator> netsh advfirewall firewall add rule name=SSHPort dir=in action=allow protocol=TCP localport=22

11. You can also configure the OpenSSH server to start automatically after a server reboot:


PS C:\Users\Administrator> Set-Service -Name sshd -StartupType "Automatic"
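Finally, as a quick sanity check, you can try connecting from another machine that has an OpenSSH client installed. The address below is just a placeholder for your server's IP address or hostname:

# replace the placeholder address with your Windows server's IP or hostname
ssh Administrator@192.0.2.25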

OpenSSH should be ready by now, and you can SSH into your Windows server. In future articles, I will blog more about Windows system administration, LDAP on Windows, and more about Windows Server 2016. Enjoy 🙂


Project Tabulogs: Linux Last logs on HTML table with Ansible

Project Tabulogs gives you the ability to display logs in an HTML tabular format using an Ansible Playbook. Some days back, I shared some ideas on how to use Ansible to create an agentless server inventory with an interesting HTML format. In this post, we will see how to present lastlog login information in an HTML tabular format. You can apply the same principle to any logs and present them through an HTML template. One of the advantages of Ansible is that you can also launch shell commands, which allows you to automate several tasks, whether remote or local. Many times, I have come across non-technical personnel who want to see the logins and logouts of staff on specific servers. The goal of this project is to present accurate and reliable information to non-technical persons in an IT organization. It's also a way to peruse user logins and logouts more easily.


All rights reserved: tunnelix.com


Limitations and Solutions

However, there are some limitations to presenting bulk information such as logs on an HTML page. Imagine having hundreds of servers, which could amount to more than 20,000 lines. This would make the page nearly impossible to load and might crash the browser. To remedy this situation, a database such as SQLite could be interesting. Otherwise, AJAX could be used to fetch data page by page. Since I'm keeping all the information in JSON format in a JavaScript file, I decided to make use of pagination. A search box also comes in handy for all columns and rows.

Now, the question is how to report that information, which keeps on changing. Let's say every month you want a report of the users who connected to your organization's servers, in JSON format, inside a JavaScript file. Here is where Ansible comes in useful. I created a Playbook to build the JSON array itself, and the JavaScript page, using the shell module based on Linux AWK commands. Of course, there are other tasks Ansible performs, such as fetching the information remotely and updating the HTML page, for example with the date the report was created. The directory can then be compressed and sent easily to anyone. So, no database needed! Cool, isn't it? However, if you want to adopt this system for a really huge volume of logs, it might work but could be really slow, hence the need for a database.


You can clone the repo from my GitHub repository.

HTML, AngularJS and JSON

I created a simple HTML page which calls AngularJS to keep the information in a JSON array. The HTML page is static. However, for your own use, if you want to add more columns, feel free to edit the section below. It is located at TabuLogs/master/perimeter1/index.html

 <tr class="tableheader">
                            <th>Hostname</th>
                            <th>Username</th>
                            <th>Login</th>
                            <th>TimeTaken</th>
                            <th>IPAddress</th>
                            <th>Perimeter</th>
                            <th>Application</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr ng-repeat="rec in SampleRecords | filter:q | startFrom:currentPage*pageSize | limitTo:pageSize">
                            <td>{{ rec.Hostname }}</td>
                            <td>{{ rec.Username }}</td>
                            <td>{{ rec.Login }}</td>
                            <td>{{ rec.TimeTaken }}</td>
                            <td>{{ rec.IPAddress }}</td>
                            <td>{{ rec.Perimeter }}</td>
                            <td>{{ rec.Application }}</td>
                        </tr>                       

Since I have used lastlog data from Linux, I called the page “Linux Login Management”. Otherwise, you can also filter any other logs, such as Apache or secure logs. Some modifications will have to be carried out on the awk command or by using Ansible Jinja filters.
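As a rough sketch of such an adaptation, assuming the default Apache combined log format and a hypothetical log path, the collection step could extract the client IP, timestamp, and requested path instead of the lastlog fields:

# hypothetical adaptation for an Apache access log in combined format
awk '{gsub(/\[|\]/, "", $4); print $1, $4, $7}' /var/log/httpd/access_log > /tmp/$(hostname -s)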

You can also point to a logo on the HTML page, kept in the images folder:


<body ng-app="AngTable" ng-controller="Table">
    <p></p>
      <img src="images/logo1.png" align="right" alt=" " width="380" height="180"> 
      <img src="images/logo2.png" alt="cyberstorm" align="top-left" width="380" height="180">
    <h2>Login Management for my servers</h2>

The most interesting part is the data, which is stored in a JSON array. In this particular example, we have information about the hostname, username, login, time taken, IP address, application, and perimeter.

[ 
{"Hostname":"Apacheserver","Username":"tunnelix","Login":"Fri-Sep-28-15:11","TimeTaken":"15:11(00:00)","IPAddress":"192.168.0.1","Application":"Apache","Perimeter":"Production"},
 ]; 

The JSON array will be fetched by AngularJS to render the HTML table.

As mentioned previously, if you tried to load the whole JS page into your browser at once, it would crash; hence, to overcome this situation, pagination comes in handy.

Messing around with Linux AWK command and the Ansible Playbook

The Linux AWK command is pretty powerful for concatenating all the logs together. Even more interesting is the conversion of the logs into JSON format. The playbook is located at TabuLogs/AnsiblePlaybook/AuditLoginLinux.yml

When the Playbook is launched, the first AWK command creates a file named after the hostname of the machine in /tmp; inside that file, the data comprises the Hostname, Username, Login, Time-Taken, and IP Address. To get the logs from the previous month, I used Date=`date +%b -d 'last month'`

Assuming you have rotated logs for /var/log/wtmp, whether weekly, monthly, or over any other range, the loop is supposed to find logs for last month only. In case you have kept wtmp logs for years, another method would be needed to fetch logs for the right year.

Date=`date +%b -d 'last month'`; t=`hostname -s` ; for i in `ls /var/log/wtmp*` ; do last -f $i ; done | grep $Date | awk -v t="$t" '{print t, $1, $4 "-" $5 "-" $6 "-" $7, $9$10, $3 }' | egrep -v 'wtmp|reboot' > /tmp/$t
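To give an idea, a line produced by this command would look roughly like the following; the values are placeholders matching the JSON example shown earlier:

Apacheserver tunnelix Fri-Sep-28-15:11 15:11(00:00) 192.168.0.1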

Once the logs have been created on the remote hosts, they are fetched by Ansible and kept on the local server. The remote files are then destroyed.

We also need to merge all the data into one single file and remove all blank lines, which is done using the following command:


awk 'NF' /Tabulogs/AnsibleWorkDesk/Servers/* | sed '/^\s*$/d' > /Tabulogs/AnsibleWorkDesk/AllInOne

Assuming that the file located at /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter contains information about the server, application, and perimeter, the following awk command appends that information to the table gathered from the remote hosts:

awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print}' /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter /Tabulogs/AnsibleWorkDesk/AllInOne > /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication

An example of the format of that mapping file (application, perimeter, hostname) is:

Nginx dev server2
Apache prod server1
Nginx dev server4

After adding all the data together into a single table, the following AWK command converts each row into JSON format:

awk '{print "{" "\"" "Hostname" "\"" ":" "\"" $1"\"" "," "\"" "Username" "\"" ":" "\"" $2"\"" "," "\"" "Login" "\"" ":" "\"" $3"\"" "," "\"" "TimeTaken" "\"" ":" "\"" $4"\"" ",""\"" "IPAddress" "\"" ":" "\"" $5"\"" "," "\"" "Application" "\"" ":" "\"" $6"\"" "," "\"" "Perimeter" "\"" ":" "\""$7"\"""}" "," }' /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication > /Tabulogs/AnsibleWorkDesk/perimeter1/table.js

It does not end here; we also need to add the JavaScript code around it. Using the lineinfile module of Ansible, writing at the beginning of the file is pretty easy:


lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line={{ item.javascript }} insertbefore=BOF
     with_items:
       - { javascript: "$scope.SampleRecords=[ " }
       - { javascript: "$scope.q = ''; " }
       - { javascript: "$scope.pageSize = 10; " }
       - { javascript: "$scope.currentPage = 0; " }
       - { javascript: "app.controller('Table', ['$scope', '$filter', function ($scope, $filter) { "}
       - { javascript: "var app = angular.module(\"AngTable\", []);  "}

The same method is used to write at the end of the file and complete the table.js file:

lineinfile: dest=/Tabulogs/AnsibleWorkDesk/Perimeter1/table.js line={{ item.javascript }} insertafter=EOF
     with_items:
       - { javascript: " ]; " }
       - { javascript: "$scope.getData = function () { " }
       - { javascript: "return $filter('filter')($scope.SampleRecords, $scope.q) " }
       - { javascript: " }; "}
       - { javascript: "$scope.numberOfPages=function(){ "}
       - { javascript: "return Math.ceil($scope.getData().length/$scope.pageSize); "}
       - { javascript: "}; "}
       - { javascript: "}]); "}
       - { javascript: "app.filter('startFrom', function() { "}
       - { javascript: "    return function(input, start) { "}
       - { javascript: "start = +start; "}
       - { javascript: "return input.slice(start); "}
       - { javascript: " } "}
       - { javascript: "}); "}

After writing at the end of the file, table.js is now complete.

Everything is assembled using the Ansible playbook. The Playbook has been tested on RedHat 7 servers. I believe it can also be adapted for other environments.
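To run it, something along these lines should work; the inventory file name here is just an assumption, so use your own:

ansible-playbook -i inventory AuditLoginLinux.yml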

Final result

Here is an idea of how the final result will look:



Some pieces of information are blurred for security reasons

Future Enhancements

I’m excited to share this article, though I know there are many more improvements that can be made, such as:

  • The awk commands can be combined.
  • Removal of backticks in the shell commands.
  • Shell commands like rm -rf will be replaced with the file module.
  • Some deprecated HTML tags were used and should be replaced.
  • Code sanitization.

My future goal is to improve the Ansible Playbook and the source code itself. Maybe someone else has a much better way of doing it; I would be glad to hear more. In case you believe that some improvements can be made to the Playbook, feel free to send some commits to my GitHub repository or comment below with your ideas.

As usual, a plan is needed. Consider reading the “tips” section below which might give you some hints.

TIPS:


  • There are, however, some other limitations with the last command. Some arguments will not be available if you are using old util-linux packages. Consider updating the package to be able to filter the last command output easily.
  • If you can group your servers by category or environment, it would be helpful.
  • There will be other versions of Project Tabulogs with better and faster Ansible playbooks. Let's call this one version 1.0.