Tag: linux

IETF 102 hackathon remotely from Mauritius

The hackers.mu team has participated in several IETF hackathons in recent years. For the IETF 102 hackathon, we focused on innovation: the goal was to create two teams for the TLS 1.3 project, one for implementation and the other for interoperability, while also getting hands-on with the HTTP 451 project. The IETF hackathon encourages developers to collaborate and develop utilities, ideas, sample code and solutions that show practical implementations of IETF standards. It is not a competition but a collaborative event.

For this IETF hackathon, Loganaden Velvindron and I, core members of the hackers.mu team, decided to lead the event. We found a marvelous venue at Pointe aux Piments, a remote coastal area in the north-west of Mauritius, which was very peaceful and could accommodate the whole team, including first-timers, for three nights. As for food, the best place is Triolet, a nearby village famous for street food including pizza, Indian food, grills, burgers and brianis. We also chose that venue because it included a WiFi hotspot, several rooms, bathrooms and even a swimming pool.

The participants from the hackers.mu team were: Loganaden Velvindron, Rahul Golam, Kifah Sheik Meeran, Nigel Yong Sao Young Steven Ken Fouk, Muzaffar Auhammud, Codarren Velvindron, Yasir Aulear and myself, Nitin J Mutkawoa. The first-timers were: Veegish Ramdani, Jeremie Daniel, Jagveer Loky, Nathan Sunil Mangar and Avishai Poorun.

On day 1, we all set up our lab environments and, since most first-timers were in the TLS 1.3 interoperability team, a plan was already in place. We knew from the beginning that there would be logistical issues, so we brought spare laptops, screens, memory cards, a projector, etc. Logan explained the situation to the first-timers, especially when it came to interoperability, and then they assigned themselves tasks. At first it was time-consuming to get started, but by the end of day 1 I could feel that everyone was working as a team and pulling in the same direction for TLS 1.3. Meanwhile, Veegish was getting hands-on with HTTP 451. Whilst the interoperability team was having fun, the implementation team faced yet another challenge: improving source code for the TLS 1.3 compat layer.

On day 2, everyone woke up early and went for a morning walk. Afterward, the team was back to coding and debugging. Whilst some were on implementation and interoperability tasks, Veegish had already advanced on the HTTP 451 project. A debrief was carried out by Logan to understand where the team stood. We constantly evaluated ourselves so that we knew in which direction we were moving. At the end of the day, most of us were already in the pool for some chilling moments. I seized the opportunity to make a time-lapse video with my iPhone 7+ 🙂

On day 3, the atmosphere was intense. The implementation team needed to make sure the code had been tested and was running correctly. I was heavily involved in the PHP cURL library part. The testing part was very challenging; at some moments I felt tired and hopeless because it was so complex. At the same time, others were helping each other. Kifah was also writing some bash scripts for the interoperability part, as he wanted to automate some tasks. Logan was looking at his own code while helping the others. At the end of the day we were so happy to have accomplished what we had planned. Everyone looked so tired; the only option was to go back to the pool.

We also decided to make some mini videos to relate our experience during the hackathon. I uploaded the videos on YouTube. You can view them from the playlist below:

On day 4, we packed up and headed home. At that very moment in Montreal, the hackathon was still going on. I reached home at about 19:00 hrs Mauritius time and had been assigned a three-minute presentation on the hackathon work carried out by the Mauritius team. It was already midnight and I was so tired, but I knew that the presentation had to be done. Logan was constantly texting me to make sure that I did not fall asleep. You can view the presentation, which was delivered remotely and streamed live in Montreal, Canada.

What did IETF hackers say about the IETF 102 hackathon?

“What I think was the most productive output during this time for me was pair-programming…” – Kifah

“I was very excited to be part of the Inter-operability team where I worked with OpenSSL, BoringSSL, WolfSSL, and tlslite using TLS1.3 protocols.” – Jagveer

“Making Internet Protocols great again during the IETF 102 hackathon” – Logan

“Finally after long hours of debugging he managed to test the protocol being used by NRPE locally” – Rahul

“Then… we finally got a Client Hello from Wireshark and made the PR” – Nigel

“At first I thought that it would only be working, working and working but besides of work we started creating bonds.” – Jeremie

“I got a lot of advice, support, and motivation to work with my team members and try to implement on a strategic basis and critical thinking the internet protocols and see their limit on a technical perspective.” – Avishai

“Once OpenSSL was installed, I then performed my first TLS 1.3 Handshake, Resumption, and 0-RTT but did run into difficulties with NSS.” – Chromico

“But while everyone is waiting, we are working. We have reached a deeper understanding of how it will affect our lives.” – Codarren

“IETF 102 was very fun and challenging experience in which I got to work on several opensource projects” – Muzaffar

“At first, I did encounter some issues like parsing JSON files, but I manage to work on those issues” – Veegish

We also had a follower on Twitter appreciating our effort and participation during the IETF 102 hackathon. Thanks, Dan York, senior manager at ISOC.

I’m happy that this hackathon was at the required level. It was a great initiative from the hackers.mu team, and no major incidents occurred in our HQ at Pointe aux Piments. Everything that was planned went well, and it’s worth investing yourself in this collaborative event.

Debugging disk issues with blktrace, blkparse, btrace and btt in a Linux environment

blktrace is a block layer IO tracing mechanism which provides detailed information about request queue operations up to user space. There are three major components: a kernel component, a utility to record the i/o trace information from the kernel to user space, and utilities to analyse and view the trace information. blktrace records the i/o event trace information for a specific block device to a file.

Photo credits: www.brendangregg.com

Limitations and Solutions with blktrace

There are several limitations and solutions when using blktrace. We will focus mainly on its goal and how the blktrace tool can help us in our day-to-day tasks. Assuming that you want to debug an IO-related issue, the first command you run will probably be iostat. These utilities can be installed with yum install sysstat blktrace. For example:

iostat -tkx -p ALL 1 2

The limitation of iostat is that it does not tell us which process is using which IO. To resolve this, we can use another utility such as iotop, as in the example below.

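As a hedged example (assuming the iotop package is installed, e.g. via yum install iotop), the following invocation shows only processes actually performing IO, with accumulated totals per process:

iotop -o -P -a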

Here iotop shows us exactly which process is consuming IO and how much. But another problem with that solution is that it does not give detailed information. So blktrace comes in handy, as it gives layer-wise information. How does it do that? It sees what is going on exactly inside the block I/O layer. When used correctly, it is possible to generate events for all I/O requests and monitor where they evolve from. Though it extracts data from the kernel, it is not an analysis tool, and the interpretation of the data needs to be carried out by you. However, you can feed the data to btt or blkparse to get the analysis done.

Before looking at blktrace, let’s check out the I/O architecture. Basically, at user space the user process writes and reads data. The user process does not write directly to the disk: it first writes to the VFS page cache, and from there the I/O scheduler and the device driver interact with the disk to write the data.

Photo credits: msreekan.com

blktrace will normally capture events during the process. Here is a cheat sheet to understand blktrace event capture.

Photo credits: programering.com

blkparse will parse and format events acquired from blktrace. If you do not want to run blkparse separately, btrace is a shortcut that combines blktrace and blkparse. Finally, we have btt, which analyses data from blktrace and generates time deltas for each layer.

Another tool to grasp before moving on with blktrace is debugfs, which is a simple-to-use RAM-based file system specially designed for debugging purposes. It exists as a simple way for kernel developers to make information available to user space. Unlike /proc, which is only meant for information about a process, or sysfs, which has strict one-value-per-file rules, debugfs has no rules at all. Developers can put any information they want there. – LWN

So the first thing to do is to mount the debugfs file system using the following command:

mount -t debugfs debugfs /sys/kernel/debug

The aim is to allow a kernel developer to make information available in user space. You can use the mount command to verify that the operation has been successful. Now that you have the debug file system mounted, you can capture the events.
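For example, a quick check that the mount has been successful:

mount | grep debugfs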

Diving into the commands

1. So you can use blktrace to trace the I/O on the machine.

blktrace -d /dev/sda -o -|blkparse -i -

2. At the same time, on another console, launch the following command to generate some I/O for testing purposes.

dd if=/dev/zero of=/mnt/test1 bs=1M count=1

From the blktrace console you will get an output which ends as follows:

CPU0 (8,0):
 Reads Queued:           2,       60KiB Writes Queued:       5,132,   20,524KiB
 Read Dispatches:        2,       60KiB Write Dispatches:       61,   20,524KiB
 Reads Requeued:         0 Writes Requeued:         0
 Reads Completed:        2,       60KiB Writes Completed:       63,   20,524KiB
 Read Merges:            0,        0KiB Write Merges:        5,070,   20,280KiB
 Read depth:             1         Write depth:             7
 IO unplugs:            14         Timer unplugs:           9
Throughput (R/W): 8KiB/s / 2,754KiB/s
Events (8,0): 21,234 entries
Skips: 166 forward (1,940,721 -  98.9%)

3. The same result can also be achieved using the btrace command. Apply the same principle as in part 2 once the command has been launched.

btrace /dev/sda

4. In parts 1 and 3, the blktrace commands were launched in such a way that they run forever, without exiting. In this particular example, I will write the output to a named file for analysis. Assuming that you want to run blktrace for 30 seconds, the command is as follows:

blktrace -w 30 -d /dev/sda -o io-debugging

5. On another console, launch the following command:

dd if=/dev/sda of=/dev/null bs=1M count=10 iflag=direct

6. Wait for 30 seconds to allow step 4 to be completed. I got the following results just after:

# blktrace -w 30 -d /dev/sda -o io-debugging
=== sda ===
  CPU  0:                  510 events,       24 KiB data
  Total:                   510 events (dropped 0),       24 KiB data

7. You will notice that the trace file has been created in the current working directory (/mnt in my case). To read it, use the blkparse command.

blkparse io-debugging.blktrace.0 | less

8. Now let’s see a simple extract from the blkparse command:

8,0    0        1     0.000000000  5686  Q   R 0 + 1024 [dd]
8,0    0        0     0.000028926     0  m   N cfq5686S / alloced
8,0    0        2     0.000029869  5686  G   R 0 + 1024 [dd]
8,0    0        3     0.000034500  5686  P   N [dd]
8,0    0        4     0.000036509  5686  I   R 0 + 1024 [dd]
8,0    0        0     0.000038209     0  m   N cfq5686S / insert_request
8,0    0        0     0.000039472     0  m   N cfq5686S / add_to_rr

The first column shows the device major,minor tuple; the second column gives information about the CPU; and it goes on with the sequence number, the timestamp, and the PID of the process issuing the IO. The 6th column shows the event type, e.g. ‘Q’ means IO handled by request queue code. Please refer to the diagram above for more info. The 7th column is R for Read, W for Write, D for Discard and B for Barrier operation; then comes the block number, where a following + number is the number of blocks requested. The final field between the [ ] brackets is the name of the process issuing the request. In our case, it is the dd command.

9. The output can also be analyzed using the btt command. You will get almost the same information.

btt -i io-debugging.blktrace.0

Some interesting information here: D2C is the amount of time the IO spent in the device, whilst Q2C is the total time taken from queueing to completion (Q2D + D2C), which can grow when there is concurrent IO.

A graphical user interface to make life easier

The Seekwatcher program generates graphs from blktrace runs to help visualize IO patterns and performance. It can plot multiple blktrace runs together, making it easy to compare the differences between different benchmark runs. You should install the seekwatcher package if you need to visualize detailed information about IO patterns.

The command used to generate a picture for analysis is as follows, where seek.png is the name given to the output PNG.

seekwatcher -t io-debugging.blktrace.0 -o seek.png

What is also interesting is that you can create a movie-like visualization with seekwatcher to view the graph over time.

seekwatcher -t io-debugging.blktrace.0 -o seekmoving.mpg --movie

Tips:

  • For the debugfs, you can also edit the /etc/fstab file to make the mount point permanent.

  • By default, blktrace will capture all events. This can be limited with the argument -a (see the example after this list).
  • In case you want to capture persistently for a long time or for a certain amount of time, then use the argument -w.
  • blktrace will also store the extracted data in the local directory in the format <device>.blktrace.<cpu>, for example sda.blktrace.0.
  • At steps 1 and 3, you will need to fire a CTRL+C to stop blktrace.
  • As seen in part 2, you created the test1 file; do delete it, as it may consume disk space on your machine.

  • In part 5, dd will not read more than 10M from /dev/sda since the count has been specified in the command, and the output goes to /dev/null, so no file is created in /mnt.
  • At part 9, you will notice several other pieces of information which can be helpful, such as D2C and Q2C.
    • Q2D latency – time from request submission to Device.
    • D2C latency – Device latency for processing request.
    • Q2C latency – total latency , Q2D + D2C = Q2C.
  • To be able to generate the movie, you will have to install mencoder with all its dependencies.
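For instance, a hedged sketch limiting the live capture from step 1 to write events only (the -a mask also accepts actions such as read, sync, queue and complete):

blktrace -d /dev/sda -a write -o - | blkparse -i -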

Linux memory analysis with Lime and Volatility

Lime is a Loadable Kernel Module (LKM) which allows for volatile memory acquisition from Linux and Linux-based devices, such as Android. This makes LiME unique as it is the first tool that allows for full memory captures on Android devices. It also minimises its interaction between user and kernel space processes during acquisition, which allows it to produce memory captures that are more forensically sound than those of other tools designed for Linux memory acquisition. – LiME. The Volatility framework was released at Black Hat DC for analysis of memory during forensic investigations.

Analysing memory in Linux can be carried out using Lime, a forensic tool to dump the memory. I am using a CentOS 6 distribution installed on VirtualBox to acquire memory. Normally, before capturing the memory, the suspicious system’s architecture should be well known. You may need to compile Lime on the suspicious machine itself if you do not know the architecture. Once you compile Lime, you will have a kernel loadable object which can be injected into the Linux kernel itself.

Linux memory dump with Lime

1. You will first need to download Lime on the suspicious machine.

git clone https://github.com/504ensicsLabs/LiME

2. Do the compilation of Lime from the src directory of the repository. Once it has been compiled, you will notice the creation of the Lime loadable kernel object.

cd LiME/src && make

3. Now the kernel object has to be loaded into the kernel. Insert the kernel module, defining the location and format in which to save the memory image.

insmod lime-2.6.32-696.23.1.el6.x86_64.ko "path=/Linux64.mem format=lime"

4. You can check whether the module has been successfully loaded.

lsmod | grep -i lime

Analysis with Volatility

5. We will now analyze the memory dump using Volatility. Download it from Github.

git clone https://github.com/volatilityfoundation/volatility

6. Now, we will create a Linux profile. We will also need to download the dwarfdump package. Once it is downloaded, go to the tools/linux directory of Volatility, then create the module.dwarf file.

yum install epel-release libdwarf-tools -y && make

7. To proceed further, the System.map file is important to build the profile. The System.map file contains the locations of all the functions active in the compiled kernel. You will find it inside the /boot directory. It is also important to verify that the version appended to the System.map file name matches the version and architecture of the kernel. In the example below, the version is 2.6.32-696.23.1.el6.x86_64.
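A quick hedged check that the running kernel and the System.map file match:

uname -r
ls /boot/System.map-$(uname -r)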

8. Now, go to the root of the Volatility directory using cd ../../ (I assume that you are still in the tools/linux directory). Then, create a zip file as follows:

zip volatility/plugins/overlays/linux/Centos6-2632.zip tools/linux/module.dwarf /boot/System.map-2.6.32-696.23.1.el6.x86_64

9. The Volatility profile has now been successfully created, as indicated in part 8, for this particular Linux distribution and kernel version. Time to have fun with some Python scripts. You can view the profile created with the following command:

python vol.py --info | grep Linux

As you can see, the LinuxCentos6-2632 profile has been created.

10. Volatility contains plugins to view details about the memory dump performed. To view the plugins or parsers, use the following command:

python vol.py --info | grep -i linux_

11. Now imagine that you want to see the processes running at the time of the memory dump. You will have to execute the vol.py script, specify the location of the memory dump, define the profile created and call the parser concerned.

python vol.py --file=/Linux64.mem --profile=LinuxCentos6-2632x64 linux_psscan

12. Another example to recover the routing cache memory:

python vol.py --file=/Linux64.mem --profile=LinuxCentos6-2632x64 linux_route_cache

Automating Lime using LiMEaide

I find the LiMEaide tool really interesting for remote execution of Lime. “LiMEaide is a python application designed to remotely dump RAM of a Linux client and create a volatility profile for later analysis on your local host. I hope that this will simplify Linux digital forensics in a remote environment. In order to use LiMEaide all you need to do is feed a remote Linux client IP address, sit back, and consume your favorite caffeinated beverage.” – LiMEaide

Tips:

  • Linux architecture is very important when dealing with Lime. This is probably the first question that one would ask.
  • The kernel-headers package is a must to create the kernel loadable object.
  • Once a memory dump has been created, it’s important to take a hash value. This can be done using the command md5sum Linux64.mem
  • I would also consider downloading the development tools using yum groupinstall “Development Tools” -y
  • As good practice, as indicated in part 8 when creating the zip file, use a proper naming convention. In my case I used the OS version and the kernel version for future reference.
  • Not all parsers/plugins will work with Volatility, as some might not be compatible with the Linux system.
  • You can check out the Volatility wiki for more info about the parsers.


Auditing Linux Operating System with Lynis

Auditing a Linux system is one of the most important aspects when it comes to security. After deploying a simple CentOS 7 Linux machine on VirtualBox, I made an audit using Lynis. It is amazing how many tiny flaws can be seen right from the beginning of a fresh installation. “Lynis Enterprise performs security scanning for Linux, macOS, and Unix systems. It helps you discover and solve issues quickly, so you can focus on your business and projects again.” – Cisofy

Credits: cisofy.com

Introduction

The Lynis tool performs both security and compliance auditing. It has free and paid versions; the latter comes in very handy especially if you are in a business environment. The installation of the Lynis tool is pretty simple: you can install it through the Linux repository itself, download the tar file, or clone it directly from GitHub.


Scanning Performed by Lynis

1. I downloaded the tar file with the following command:

wget https://cisofy.com/files/lynis-2.6.0.tar.gz

2. Then, just untar the file and get into it

tar -xzf lynis-2.6.0.tar.gz && cd lynis

3. Once inside the untarred directory, launch the following command:

./lynis audit system --quick

As you can see from the output above, there are several suggestions at the end of the scan. If the paid version of the application were used, more information and commands on how to remediate each finding would be given, including support from Lynis. With the free version, you can still work through several security aspects yourself, starting from the suggestions.
Suggestions, Compliance and Improvement.

1. The first two suggestions were about minimum and maximum password age.

Configure minimum password age in /etc/login.defs [AUTH-9286]

Configure maximum password age in /etc/login.defs [AUTH-9286]

To check the minimum and maximum password age of a user, use the chage command:

chage -l <username>

2. Use chage -m <days> root to set the minimum password age and chage -M <days> root to set the maximum password age.

Also, you will have to set the corresponding parameters in the /etc/login.defs file, as shown in the sketch below.
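A hedged example of the relevant /etc/login.defs entries (the values are illustrative; pick ones that match your password policy):

PASS_MIN_DAYS   7
PASS_MAX_DAYS   90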

3. Delete accounts which are no longer used [AUTH-9288]

It is also suggested to delete accounts which are no longer in use. This suggestion was prompted as I created a user account “nitin” during installation and had not used it yet. For the purpose of this blog, I deleted it using userdel -r nitin

4. Default umask in /etc/profile or /etc/profile.d/custom.sh could be more strict (e.g. 027) [AUTH-9328]

Default umask values are taken from the information provided in the /etc/login.defs file on RHEL (Red Hat) based distros; Debian and Ubuntu based systems use /etc/deluser.conf. To change the default umask value from 022 to the stricter 027, you will need to modify the /etc/profile script (or a file under /etc/profile.d) as shown below.
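A minimal sketch, assuming you opt for the /etc/profile.d/custom.sh file named in the Lynis finding (any *.sh script in /etc/profile.d is sourced by /etc/profile at login):

umask 027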

5. To decrease the impact of a full /home file system, place /home on a separated partition [FILE-6310]

  To decrease the impact of a full /tmp file system, place /tmp on a separated partition [FILE-6310]

  To decrease the impact of a full /var file system, place /var on a separated partition [FILE-6310]

In the article Move your /home to another partition, you will find detailed explanations to sort out this issue.

6. Disable drivers like USB storage when not used, to prevent unauthorized storage or data theft [STRG-1840]

   Disable drivers like firewire storage when not used, to prevent unauthorized storage or data theft [STRG-1846]

To disable the USB and FireWire storage drivers, add the following lines to /etc/modprobe.d/blacklist.conf, then unload the modules with modprobe -r usb-storage && modprobe -r firewire-core

blacklist firewire-core
blacklist usb-storage

7. Split resolving between localhost and the hostname of the system [NAME-4406]

This issue is about the hostname and localhost entries in /etc/hosts, which could confuse some applications installed on the machine. According to Cisofy, for proper resolving, the entries for localhost and the locally defined hostname should be split: with some middleware and some applications, resolving the hostname to localhost might confuse the software. An example follows.
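A hedged /etc/hosts sketch of such a split (the IP address and hostname are illustrative):

127.0.0.1        localhost localhost.localdomain
192.168.100.15   centos7.local centos7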

8. Install package ‘yum-utils’ for better consistency checking of the package database [PKGS-7384]

      Consider running ARP monitoring software (arpwatch,arpon) [NETW-3032]

yum-utils and arpwatch are nice tools to perform more debugging and verification. Install them using the following command:

yum install yum-utils arpwatch -y

9. You are advised to hide the mail_name (option: smtpd_banner) from your postfix configuration. Use postconf -e or change your main.cf file (/etc/postfix/main.cf) [MAIL-8818]

You just have to adjust the smtpd_banner line in main.cf and launch a postconf -e, as sketched below. However, since this is a fresh install and I’m not using Postfix, it is better to stop the service.
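A hedged sketch of both options (the banner value is illustrative; postconf -e rewrites main.cf for you):

postconf -e 'smtpd_banner = $myhostname ESMTP'
systemctl stop postfix && systemctl disable postfix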

10. Check iptables rules to see which rules are currently not used [FIRE-4513]

Since I’m not in a production environment, it is very difficult to identify unused iptables rules right now. Once in production, the situation is different. According to Cisofy, the best way is to “use iptables --list --numeric --verbose to display all rules. Check for rules which didn’t get a hit and repeat this process several times (e.g. in a few weeks). Finally remove any unneeded rules.”

 11. Consider hardening SSH configuration [SSH-7408]

  • Details : AllowTcpForwarding (YES -> NO)
  • Details : ClientAliveCountMax (3 -> 2)
  • Details : Compression (YES -> (DELAYED|NO))
  • Details : LogLevel (INFO -> VERBOSE)
  • Details : MaxAuthTries (6 -> 2)
  • Details : MaxSessions (10 -> 2)
  • Details : PermitRootLogin (YES -> NO)
  • Details : Port (22 -> )
  • Details : TCPKeepAlive (YES -> NO)
  • Details : UseDNS (YES -> NO)
  • Details : X11Forwarding (YES -> NO)
  • Details : AllowAgentForwarding (YES -> NO)

Again, hardening SSH is one of the most important steps to evade attacks, especially from SSH bots. It all depends on how your network infrastructure is configured and whether it is accessible from the internet. Nevertheless, these details are very informative; a sketch of the corresponding configuration follows.
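A partial /etc/ssh/sshd_config sketch implementing some of these suggestions (a hedged starting point; keep an existing session open while testing, adapt the values to your environment, then restart sshd with systemctl restart sshd):

AllowTcpForwarding no
ClientAliveCountMax 2
LogLevel VERBOSE
MaxAuthTries 2
MaxSessions 2
PermitRootLogin no
TCPKeepAlive no
UseDNS no
X11Forwarding no
AllowAgentForwarding no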

12. Periodic system scans and malware and ransomware scanners are now a must. According to statistics, servers are being hacked constantly, and pervasive monitoring is becoming a lucrative business for malicious software operators.

The Lynis Command

The Lynis documentation is pretty straightforward, with a cheat sheet. The arguments are self-explanatory. Here are some hints.

1. Performs a system audit, which is the most common audit.

lynis audit system

2. Provides the commands to do a remote scan.

lynis audit system remote <host>

3. Views the settings of the default profile.

lynis show settings

4. Checks if you are using the most recent version of Lynis.

lynis update info

5. More information about a specific test-id

lynis show details <test-id>

6. To scan the whole system

lynis --check-all -Q

7. To see all available parameters of Lynis

lynis show options

At the end of any Lynis run, it will also tell you where the logs have been stored for future reference, usually /var/log/lynis.log. The systutorials page on lynis is also a good start to grasp the command. All common systems based on Unix/Linux are supported; examples include Linux, AIX, *BSD, HP-UX, macOS and Solaris. For package management, the following tools are supported: dpkg/apt, pacman, pkg_info, RPM, YUM and zypper.

Securing MySQL traffic with Stunnel in a jail environment on CentOS

Stunnel is a program by Michal Trojnara that allows you to encrypt arbitrary TCP connections inside SSL. Stunnel can also allow you to secure non-SSL aware daemons and protocols (like POP, IMAP, LDAP, etc) by having Stunnel provide the encryption, requiring no changes to the daemon’s code. It is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs’ code. Its architecture is optimized for security, portability, and scalability (including load-balancing), making it suitable for large deployments. – Stunnel.org

The concept behind Stunnel is the encryption applied when a client sends a message to a server through a secure tunnel. In this article, we will focus on using MySQL alongside Stunnel: the MariaDB client will access the MariaDB server database through Stunnel for more security and robustness.

Photo Credits: danscourses.com

I will demonstrate the installation and configuration using the CentOS distribution in my VirtualBox lab environment. I created two CentOS 7 virtual machines with the hostnames stunnelserver and stunnelclient. We will tunnel the MySQL traffic via Stunnel. You can apply the same concept to SSH, Telnet, POP, IMAP or any TCP connection.

The two machines created are as follows:

  1. stunnelserver : 192.168.100.17 – Used as the Server
  2. stunnelclient : 192.168.100.18 – Used as the Client

Basic package installation and configuration on both servers

1. Install the Stunnel and OpenSSL package on both the client and the server.

yum install stunnel openssl -y

2. As we will be using Stunnel with MariaDB, you can use the MariaDB repository tool to get the links to download the repository. Make sure you have the MariaDB-client package installed on the stunnelclient, which will be used as the client to connect to the server. Also, install both packages on the stunnelserver. The command to install the MariaDB packages is as follows:

sudo yum install MariaDB-server MariaDB-client

3. For more information about installations of MariaDB, Galera etc, refer to these links:

MariaDB Galera Clustering

MariaDB Master/Master installation

MariaDB Master/Slave installation

Configuration to be carried out on the stunnelserver (192.168.100.17)


4. Once you have all the packages installed, it’s time to create your privatekey.pem. Then, use the private key to create the certificate.pem. Whilst creating the certificate.pem, you will be prompted to enter some details; feel free to fill them in.

openssl genrsa -out privatekey.pem 2048

openssl req -new -x509 -days 365 -key privatekey.pem -out certificate.pem

5. Now comes the most interesting part: configuring the stunnel.conf file to tunnel traffic to the MySQL port on the stunnelserver. I observed that the package by default does not come with a stunnel.conf or even an init script after installing it from the repository, so you can create your own init script. Here is my /etc/stunnel/stunnel.conf on the server:

chroot = /var/run/stunnel
setuid = stunnel
setgid = stunnel
pid = /stunnel.pid
debug = 7
output = /stunnel.log
sslVersion = TLSv1
[mysql]
key = /etc/stunnel/privatekey.pem
cert = /etc/stunnel/certificate.pem
accept = 44323
connect = 127.0.0.1:3306

6. Place your privatekey.pem and certificate.pem in the /etc/stunnel directory. Make sure you have the right permissions (400) on the privatekey.pem.
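For example:

chmod 400 /etc/stunnel/privatekey.pem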


7. Based upon the configuration in part 5, we will now create the /var/run/stunnel directory and assign it with user and group of stunnel:

useradd stunnel && mkdir /var/run/stunnel && chown stunnel:stunnel /var/run/stunnel

8. Port 44323 is a non-reserved port which I chose to tunnel the traffic from the client.

9. As we do not have the Init script by default in the package, start the service as follows:

stunnel /etc/stunnel/stunnel.conf

10. A netstat or ss command on the server will show Stunnel listening on port 44323.
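For instance, a quick check (the exact output will vary):

ss -tlnp | grep 44323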

Configuration to be carried out on the stunnelclient (192.168.100.18)

11. Here is the stunnel.conf file on the stunnel client :

verify = 2
chroot = /var/run/stunnel
setuid = stunnel
setgid = stunnel
pid = /stunnel.pid
CAfile = /etc/stunnel/certificate.pem
client = yes
sslVersion = TLSv1
renegotiation = no
[mysql]
accept = 24
connect = 192.168.100.17:44323

12. Import the certificate.pem into the /etc/stunnel/ directory on the client:

scp <user>@<ipofstunnelserver>:/etc/stunnel/certificate.pem /etc/stunnel/

13. Based upon the configuration in part 11, we will now create the /var/run/stunnel directory and assign it with user and group of stunnel:

useradd stunnel && mkdir /var/run/stunnel && chown stunnel:stunnel /var/run/stunnel

14. You can now start the service on the client as follows:

stunnel /etc/stunnel/stunnel.conf

15. A netstat on the client will show the Stunnel listening on port 24.

16. You can now connect to the MySQL database from your client through the tunnel. Example:

mysql -h 127.0.0.1 -u <username> -p -P 24



Tips:

  • When starting Stunnel, the log and the pid file will be created automatically inside the jail environment that is /var/run/stunnel.
  • You can also change the debug log level. The level is one of the syslog level names or numbers: emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), or debug (7). All logs for the specified level and all levels numerically less than it will be shown. Use debug = debug or debug = 7 for the greatest debugging output. The default is notice (5).
  • If you compile from source, you get logrotate and init scripts for free. On CentOS, they are not packaged with the RPM!
  • You can also verify if SSLv2 and SSLv3 have been disabled using openssl s_client -connect 127.0.0.1:44323 -ssl3 and try with -tls1 to compare.
  • For the purpose of testing, you might need to check your firewall rules and SELINUX parameters.
  • You don’t need MariaDB-Server package on the client.
  • Stunnel is running in a jail environment. The logs and the PID described in parts 5 and 11 will be found in /var/run/stunnel.
  • You can invoke stunnel from inetd. Inetd is the Unix ‘super server’ that allows you to launch a program (for example the telnet daemon) whenever a connection is established to a specified port. See the Stunnel howto for more information. The Stunnel manual can also be viewed here.