Category: Database Systems

Securing MySQL traffic with Stunnel in a jail environment on CentOS

Stunnel is a program by Michal Trojnara that allows you to encrypt arbitrary TCP connections inside SSL. Stunnel can also allow you to secure non-SSL aware daemons and protocols (like POP, IMAP, LDAP, etc) by having Stunnel provide the encryption, requiring no changes to the daemon’s code. It is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs’ code. Its architecture is optimized for security, portability, and scalability (including load-balancing), making it suitable for large deployments. – Stunnel.org

The concept behind Stunnel is the encryption applied when a client sends traffic to a server through a secure tunnel. In this article, we will focus on using MySQL alongside Stunnel: the MariaDB client will access the MariaDB server database through the tunnel for added security and robustness.



Photo Credits: danscourses.com

I will demonstrate the installation and configuration on the CentOS distribution in my VirtualBox lab environment. I created two CentOS 7 virtual machines with the hostnames stunnelserver and stunnelclient. We will tunnel the MySQL traffic via Stunnel. You can apply the same concept to SSH, Telnet, POP, IMAP or any other TCP connection.

The two machines created are as follows:

  1. stunnelserver : 192.168.100.17 – Used as the Server
  2. stunnelclient : 192.168.100.18 – Used as the Client

Basic package installation and configuration on both servers

1. Install the Stunnel and OpenSSL packages on both the client and the server.

yum install stunnel openssl -y

2. Since we will be using Stunnel with MariaDB, you can use the MariaDB repository tool to get the links to the repository. Make sure you have the MariaDB-client package installed on stunnelclient, which will be used as the client to connect to the server, and install both packages on stunnelserver. The command to install the MariaDB packages is as follows:

sudo yum install MariaDB-server MariaDB-client

3. For more information about installations of MariaDB, Galera etc, refer to these links:

MariaDB Galera Clustering

MariaDB Master/Master installation

MariaDB Master/Slave installation

Configuration to be carried out on the stunnelserver (192.168.100.17)

 

4. Once you have all the packages installed, it's time to create your privatekey.pem. Then, use the private key to create certificate.pem. While creating certificate.pem, you will be prompted for some details; fill them in as you see fit.

openssl genrsa -out privatekey.pem 2048

openssl req -new -x509 -days 365 -key privatekey.pem -out certificate.pem

5. Now comes the most interesting part: configuring the stunnel.conf file to tunnel traffic to the MySQL port on the stunnelserver. I observed that the package does not ship with a stunnel.conf or even an init script when installed from the repository, so you can create your own init script. Here is my /etc/stunnel/stunnel.conf on the server:

chroot = /var/run/stunnel
setuid = stunnel
setgid = stunnel
pid = /stunnel.pid
debug = 7
output = /stunnel.log
sslVersion = TLSv1
[mysql]
key = /etc/stunnel/privatekey.pem
cert = /etc/stunnel/certificate.pem
accept = 44323
connect = 127.0.0.1:3306

6. Place your privatekey.pem and certificate.pem in the /etc/stunnel directory. Make sure you have restrictive permissions (400) on privatekey.pem.
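A minimal sketch of those steps, assuming the .pem files were generated in the current working directory:

cp privatekey.pem certificate.pem /etc/stunnel/
chmod 400 /etc/stunnel/privatekey.pem
chmod 644 /etc/stunnel/certificate.pem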

7. Based on the configuration in part 5, we will now create the /var/run/stunnel directory and assign it to the stunnel user and group:

useradd -G stunnel stunnel && mkdir /var/run/stunnel && chown stunnel:stunnel /var/run/stunnel

8. Port 44323 is an unreserved port which I chose to tunnel the traffic from the client.

9. As we do not have the Init script by default in the package, start the service as follows:

stunnel /etc/stunnel/stunnel.conf

10. A netstat or ss command on the server will show Stunnel listening on port 44323.
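For example, a quick check with ss (netstat -tlnp works similarly):

ss -tlnp | grep 44323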

Configuration to be carried out on the stunnelclient (192.168.100.18)

11. Here is the stunnel.conf file on the stunnelclient:

verify = 2
chroot = /var/run/stunnel
setuid = stunnel
setgid = stunnel
pid = /stunnel.pid
CAfile = /etc/stunnel/certificate.pem
client = yes
sslVersion = TLSv1
renegotiation = no
[mysql]
accept = 24
connect = 192.168.100.17:44323

12. Copy the certificate.pem from the server into the /etc/stunnel/ directory:

scp <user>@<ipofstunnelserver>:/etc/stunnel/certificate.pem /etc/stunnel/

13. Based on the configuration in part 11, we will now create the /var/run/stunnel directory and assign it to the stunnel user and group:

useradd -G stunnel stunnel && mkdir /var/run/stunnel && chown stunnel:stunnel /var/run/stunnel

14. You can now start the service on the client as follows:

stunnel /etc/stunnel/stunnel.conf

15. A netstat or ss on the client will show Stunnel listening on port 24.

16. You can now connect to the MySQL database from your client to your server through the tunnel. Example:

mysql -h 127.0.0.1 -u <username> -p -P 24



Tips:

  • When starting Stunnel, the log and the pid file will be created automatically inside the jail environment, that is, /var/run/stunnel.
  • You can also change the debug log level. The level is one of the syslog level names or numbers: emerg (0), alert (1), crit (2), err (3), warning (4), notice (5), info (6), or debug (7). All logs for the specified level and all levels numerically less than it will be shown. Use debug = debug or debug = 7 for the greatest debugging output. The default is notice (5).
  • If you compile Stunnel from source, you get logrotate and init scripts for free; the CentOS package is not shipped with them.
  • You can also verify that SSLv2 and SSLv3 have been disabled using openssl s_client -connect 127.0.0.1:44323 -ssl3, and try again with -tls1 to compare (see the example after this list).
  • For the purpose of testing, you might need to check your firewall rules and SELinux settings.
  • You don't need the MariaDB-server package on the client.
  • Stunnel is running in a jail environment. The logs and the PID file described in parts 5 and 11 will be found in /var/run/stunnel.
  • You can invoke Stunnel from inetd. Inetd is the Unix "super-server" that allows you to launch a program (for example the telnet daemon) whenever a connection is established to a specified port. See the Stunnel HOWTO and the Stunnel manual for more information.
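Referring to the SSLv2/SSLv3 tip above, here is a quick check, assuming Stunnel is listening on port 44323 locally: the -ssl3 handshake should fail, while the -tls1 handshake should succeed.

openssl s_client -connect 127.0.0.1:44323 -ssl3
openssl s_client -connect 127.0.0.1:44323 -tls1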

 


How I optimised my WordPress website

Some days back, I was brewing up some plans to optimise my website's source code, HTTP headers, latency and other security aspects. I had to carry out some analysis and research using tools available on the internet. I should admit that, at first, it looked pretty simple, but it was not. For instance, I did not permit myself to modify the production environment directly, so I had to migrate it to a pre-production environment. Page caching was yet another issue which could trick you after modifications.

My website is behind Cloudflare, which is already an advantage in terms of security, performance, reliability and insight, but that does not mean the website cannot be hacked. According to sucuri.net, websites using the WordPress CMS are constantly being hacked. Of course, it depends on the mode of attack and the impact of the infection.

Photo credits: sucuri.net

Migrating to TLS

Migrating a CMS which already has several articles posted can be an issue, as the URLs are already recorded in the database as well as in the source code itself. There were also links on the website which did not point to HTTPS, so after moving to the HTTPS version, "mixed content" errors could be noticed when accessing the website. One of the interesting free features of Cloudflare is that everyone can get a free SSL certificate issued by Comodo. You generate the certificate and the private key from Cloudflare and point your virtual host at them.
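A minimal sketch of that virtual host part, assuming Apache with mod_ssl; the file paths are placeholders for wherever you store the certificate and key generated from Cloudflare:

<VirtualHost *:443>
    ServerName tunnelix.com
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/cloudflare-origin.pem
    SSLCertificateKeyFile /etc/httpd/ssl/cloudflare-origin.key
    DocumentRoot /var/www/html
</VirtualHost>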

Some corrections needed to be added to the WordPress wp-config.php file as follows:

define('WP_HOME','https://tunnelix.com/');

define('WP_SITEURL','https://tunnelix.com/');

On top of that, there seemed to be lots of URLs in the database itself that needed correcting, using the following commands:

update wp_options set option_value = replace(option_value, 'http://www.tunnelix.com', 'https://www.tunnelix.com') where option_name = 'siteurl';


update wp_posts set guid = replace(guid, 'http://www.tunnelix.com', 'https://www.tunnelix.com');

update wp_posts set post_content = replace(post_content, 'http://www.tunnelix.com', 'https://www.tunnelix.com');

However, there is a trick to identify the remaining non-HTTPS URLs: make a dump of the database and grep through it, followed by a sed to eliminate the unwanted entries. Once the "mixed content" errors had been identified and corrected, I launched a scan on the Qualys SSL Labs website. The result was an "A+". You can also use the HTbridge free SSL server test, which is pretty fascinating, especially to verify PCI DSS compliance, HIPAA compliance, NIST guidelines and industry best practice in general. If all those criteria are met, you will score an "A+" rather than an "A" or, worse, an "F".
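A minimal sketch of that dump-and-grep approach, assuming the WordPress database is called wordpress and credentials are available in ~/.my.cnf:

mysqldump wordpress > /tmp/wordpress.sql
grep -c 'http://www.tunnelix.com' /tmp/wordpress.sql
# careful: a blind search-and-replace can break serialized PHP data whose string lengths change
sed -i 's|http://www.tunnelix.com|https://www.tunnelix.com|g' /tmp/wordpress.sql
mysql wordpress < /tmp/wordpress.sql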

Source code optimisation and Page speed verification

This can be verified using the GTmetrix tool, available for free online. I noticed that my rank was a "C". This was caused by the lack of minified HTML and CSS and by image dimension issues. To handle the minify HTML errors, I enabled the Minify HTML Markup plugin in WordPress itself, which corrected them. To optimise the images, I downloaded the OptiPNG tool from the EPEL repository:

optipng.x86_64                  0.7.6-1.el6                        @epel        

For example, if you want to optimize a specific image, use the following command:

optipng -o2 Screen-Shot-2016-12-24-at-1.04.45-AM.png
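To batch-optimise every PNG already uploaded to WordPress (a sketch; the uploads path is the default location and may differ on your install):

find /var/www/html/wp-content/uploads -type f -name '*.png' -exec optipng -o2 {} \;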

Another verification was made on the GTmetrix website, and I noticed that the result was then an "A".


Tweaking the Web server HTTP headers

Htbridge will surely give you an overview of the web server security and will accompany you step by step to get a better result.

Of course, since the website is behind Cloudflare, it is limited to certain security tweaks such as Public-Key-Pins. The Public Key Pinning Extension for HTTP (HPKP) is a security feature that tells a web client to associate a specific cryptographic public key with a certain web server to decrease the risk of Man-in-the-Middle (MITM) attacks with forged certificates. I found an interesting article on https://raymii.org which explained how to activate HPKP.

Once you are in possession of your certificate and private key, you can derive the public key and the pin used to activate the HPKP extension. The following commands produce the public key and the pin:

# openssl x509 -noout -in certificate.pem -pubkey | openssl asn1parse -noout -inform pem -out public.key;

# openssl dgst -sha256 -binary public.key | openssl enc -base64
 4vr+koFuogsfghGjgvpsqQIIikg5KowHTIGNQ5Prspc=
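For reference, this is roughly how the resulting pin would be published from an Apache origin server using mod_headers (a sketch based on the raymii.org approach; the second pin is a placeholder for the required backup key, and max-age should be chosen with care since a wrong pin can lock visitors out):

Header always set Public-Key-Pins "pin-sha256=\"4vr+koFuogsfghGjgvpsqQIIikg5KowHTIGNQ5Prspc=\"; pin-sha256=\"<backup-pin>\"; max-age=2592000; includeSubDomains"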

However, it turned out that HPKP is not supported on Cloudflare. But there are other options, such as HSTS. HTTP Strict Transport Security (HSTS, RFC 6797) is a web security policy technology designed to help secure HTTPS web servers against downgrade attacks. HSTS is a powerful technology which is not yet widely adopted, and Cloudflare aims to change this. I enabled it as per the recommendations from Cloudflare.

A curl on the URL https://tunnelix.com now returns the hardened headers.
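To check it yourself (once HSTS is enabled, a Strict-Transport-Security header should appear in the response):

curl -I https://tunnelix.com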

No system is perfectly secure, but I believe these modifications are worth the adventure. I should say I was really impressed by free tools such as the Qualys SSL test, the HTbridge free SSL and web security tests, and GTmetrix for page speed.

 Hello Tunnelers, this is my first article for the year 2017, I seize this opportunity to wish my readers a Happy New Year 2017 and wish you all lots of prosperity. – TheTunnelix


Introducing MariaDB MaxScale 2.0 – Webinar

Today, I attended a webinar on MariaDB MaxScale presented by Roger Bodamer at 10.00 AM PDT. It was really interesting, and I strongly feel that the work of system and database administrators is going to be enhanced by this technology.

It started with a thorough introduction to the challenges faced by the IT industry today, especially large firms: time to market, agility, system integration, real-time insights, security and high availability. Some basic explanations were given at both the application level (customer-centric front end) and the database level (transaction-focused back end). The application level deals mainly with continuous delivery of the application, microservices and modular deployment, whereas the database level lays emphasis on security, scalability, high availability and high-quality data management.

“MariaDB MaxScale makes it easy to handle the scalability and availability of your database cluster, and also secure it and manage the maintenance downtime. MaxScale is a next generation database proxy that goes well beyond routing, with advanced filtering, enhanced security and authentication. It is a multi-threaded, event-driven engine, that has its main functionality provided by plugins loaded at runtime. With MaxScale’s innovative architecture you can update the data layer on scale-out architectures, without impacting application performance.” – MariaDB

Photo Credits: MariaDB MaxScale

There are many features in MaxScale that really amazed me on the security pillar, such as data in motion, data in use and the database firewall. Emphasis was laid on the kind of job MaxScale can perform compared to simple master-slave environments. MaxScale is really good at replication when one master has hundreds of slaves: it prevents load on the master, which means that MaxScale can take great care of replication issues. Another feature is that when the database is used to store information, a specific route can be defined for traffic within the database itself. Keymaps also come in as a great feature to manage those issues. Schemas can be updated. The goal is high availability. Another feature is data streaming, which can be controlled. I noticed a heavy emphasis on security, especially whitelisting/blacklisting at the database level to drastically reduce DDoS attacks.

A demo was then shown live with some interesting features of MariaDB MaxScale. An interesting GUI web interface called MaxPanel was used to connect to the Master-Slave-Slave servers (all three running as Docker instances), giving indications about the server names, addresses, ports, connections and status (server running). A load was generated on one of the slave servers, with writes configured on the master and reads only on the slaves. The tool used to generate the load was HammerDB. In the configuration file, under the database firewall blacklist, rules can be inserted to prevent certain types of query, for example a "select * from tablename", which will consume a heavy amount of resources if the table has too many records. To remediate this, a rule can be specified to require a "where" clause. An example of such an entry is rule safe_order deny no_where_clause on_queries select, which means that no SELECT may be made without a WHERE clause. Another example is rule query_regex deny regex '*.orders*', which is applied to the table called orders.
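A minimal sketch of what such a database firewall rules file could look like for MaxScale's dbfwfilter; the file path and the users line are assumptions for illustration, and the regex is written out as a valid pattern:

cat > /etc/maxscale/rules.txt <<'EOF'
rule safe_order deny no_where_clause on_queries select
rule query_regex deny regex '.*orders.*'
users %@% match any rules safe_order query_regex
EOF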

After the demo, a questions-and-answers session followed. It covered different architectures where pluggable architectures can be used with specific protocols. Some answers still need to be clarified as regards musl compatibility with MaxScale, a point raised by me; some days back, hackers.mu enhanced MariaDB to be musl compatible. That was a really interesting and educational session on MariaDB MaxScale, especially for administrators. I am looking forward to installing MariaDB MaxScale soon.


MySQL Binlogs Analysis for data loss

Some days back, I encountered a server where it appeared that some data had been altered in the database. A quick report can be generated with the mysqlbinlog command.

Photo Credits: Mysql.com

The MySQL binary logs contain "events" that describe database changes such as table creation operations or changes to table data. They also contain events for statements that potentially could have made changes (for example, a DELETE which matched no rows), unless row-based logging is used. The binary log also contains information about how long each statement that updated data took. The binary log has two important purposes:

  • For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. See Section 18.2, “Replication Implementation”.
  • Certain data recovery operations require use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup. See Section 8.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.

It is to be noted that enabling the MySQL binary logs makes the server react a bit more slowly, though the benefits are really useful. The binary logs should not be deleted straight away with an rm -f command, but rather with the statement PURGE BINARY LOGS TO 'mysql-bin.111';
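For example, from the shell, logs can be purged up to a given file or by age (a sketch; adjust the file name and retention period to your setup):

mysql -e "PURGE BINARY LOGS TO 'mysql-bin.111';"
mysql -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);"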

In this article I will demonstrate some commands to strip interesting information out of a binary log file. The statements that can alter information in a database are UPDATE, DELETE, INSERT, REPLACE and ALTER.

1. If you want to read the whole content of a binlog, use the following command. This will show every event recorded in the file.

mysqlbinlog binlog.1111

2. Let's say you have a list of binlogs and you want to find all the ALTER statements carried out for a specific database called "question":

mysqlbinlog binlog.* | grep -i -e "^alter" | grep -i -e "question" >> /tmp/alter_question.txt

3. Let's say you want to find all the ALTER statements carried out on 03/05/2016 from the file generated in part 2:

grep -i -A 3 '#160503' /tmp/alter_question.txt | less

4. If you want to extract all the ALTER statements from a bunch of binlogs for a specific database (question):

mysqlbinlog --database=question binlog.* | grep -B 5 -i -e "^alter" >> /tmp/alter_question.txt

5. You might also want to retrieve information for a specific date and time range:

mysqlbinlog --start-datetime="2016-2-02 5:00:00" --stop-datetime="2016-03-03 8:10:00" mysql-bin.000007

However, analysis of the MySQL binary logs is a pretty vast subject; it depends on what information is needed. I also found that mk-query-digest is an interesting tool to extract information and perform analysis. See http://linux.die.net/man/1/mk-query-digest
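A quick sketch of feeding a binlog into mk-query-digest; per its manual, the binary log first has to be converted to text with mysqlbinlog:

mysqlbinlog mysql-bin.000007 > /tmp/mysql-bin.000007.txt
mk-query-digest --type binlog /tmp/mysql-bin.000007.txt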


Messing around with PL/Tcl on PostgreSQL

Some days back, I was having an issue while activating the PL/Tcl extension/language on PostgreSQL. Luckily, it was resolved with the help of Mauricio Cortes, a guy from the Facebook group PostgreSQL Server. Let's first see what the PL/Tcl package is. "PL/Tcl is a loadable procedural language for the PostgreSQL database system that enables the Tcl language to be used to write functions and trigger procedures." – PostgreSQL. This allows us to load the module and unload it when it is not being used. You can also activate it for a particular user.


Initially, after installing the postgresql95-tcl package, I issued a "create extension" as per the documentation, but it returned the error "ERROR: could not open extension control file "/usr/pgsql-9.5/share/extension/pltcl.control": No such file or directory". I also noticed that the file pltcl.control was not present. He recommended that I try a "create language" instead of "create extension", and it worked!

However silly it may sound, the official documentation mentions that create language has been deprecated and that create extension should now be used. I immediately understood that I was using a bad repository. I would advise using the official RPM from the official website, yum.postgresql.org.

Let's see how to activate it:

1. Simply fire a yum install postgresql95-tcl. Make sure that you are using the right repository (https://www.postgresql.org/download/linux/redhat/) to be able to use create extension.

2. After logging in, you can verify it with the command \dx to get a list of installed extensions, or \dx+ for more details about the objects.


3. Now, here is the catch: if you are using a PostgreSQL version older than 9.5, PL/Tcl should be activated with create language pltcl; if it is version 9.5 or the 9.6 beta, you can use create extension pltcl;.
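A minimal sketch of both variants from the shell, assuming a superuser connection to a database called mydb:

psql -U postgres -d mydb -c "CREATE EXTENSION pltcl;"   # PostgreSQL 9.5 and later
psql -U postgres -d mydb -c "CREATE LANGUAGE pltcl;"    # older releases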

After some more research, I noticed that the advantage of this feature is that when create extension is used for a plugin, a dump of the database will also include the extension, whereas in older versions the Tcl language would not be included in the dump.
