Category: Database Systems

How I optimised my WordPress website

Some days back, I was brewing up plans to optimise my website's source code, HTTP headers, latency and other security aspects. I had to carry out some analysis and research using tools available on the internet. I should admit that, at first, it looked pretty simple, but it was not. For instance, I did not permit myself to modify the production environment directly, so I had to migrate it to a pre-production environment. Page caching was yet another issue that could trick you after modifications.

My website sits behind Cloudflare, which is already an advantage in terms of security, performance, reliability and insight, but that does not mean it cannot be hacked. According to sucuri.net, websites using the WordPress CMS are constantly being hacked. Of course, it depends on the mode of attack and the impact of the infection.

Photo credits: sucuri.net

Migrating to TLS

Migrating a CMS that already has several articles posted can be an issue, as the URLs are already recorded in the database as well as in the source code itself. Also, there were links on the website that did not point to HTTPS. After moving to the HTTPS version, errors such as “Mixed content” could be noticed when accessing the website. One of the interesting free features of Cloudflare is that everyone can have a free SSL certificate issued by Comodo. You will have to generate your certificate and your private key from Cloudflare and reference them in your virtual host.
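For example, on an Apache virtual host with mod_ssl, the Cloudflare certificate and private key would be referenced roughly as follows (a minimal sketch; the file paths are illustrative):

<VirtualHost *:443>
    ServerName tunnelix.com
    SSLEngine on
    # Certificate and private key generated from the Cloudflare dashboard
    SSLCertificateFile /etc/pki/tls/certs/tunnelix.com.pem
    SSLCertificateKeyFile /etc/pki/tls/private/tunnelix.com.key
</VirtualHost>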

Some corrections to the WordPress source code needed to be added in the wp-config.php file as follows:

define('WP_HOME','https://tunnelix.com/');

define('WP_SITEURL','https://tunnelix.com/');

On top of that, there seemed to be lots of URLs on the database itself that needed corrections using the following commands:

update wp_options set option_value = replace(option_value, 'http://www.tunnelix.com', 'https://www.tunnelix.com') where option_name = 'siteurl';


update wp_posts set guid = replace(guid, 'http://www.tunnelix.com', 'https://www.tunnelix.com');

update wp_posts set post_content = replace(post_content, 'http://www.tunnelix.com', 'https://www.tunnelix.com');

However, there is a trick to identify those non-HTTPS URLs: make a dump of the database and grep through it, followed by a sed to rewrite the unwanted URLs. Once the “Mixed Content” errors had been identified and corrected, I launched a scan on the Qualys SSL Labs website. The result was an “A+”. You can also use the Htbridge free SSL server test, which is pretty fascinating, especially to verify PCI DSS compliance, HIPAA compliance, NIST guidelines and industry best practice in general. If all those criteria are met, you will score an “A+” rather than an “A” or, worse, an “F”.
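As a rough sketch of that dump-and-grep approach (the database name here is illustrative):

# dump the database and list any remaining plain-HTTP URLs
mysqldump -u root -p wordpress > wordpress.sql
grep -o "http://www\.tunnelix\.com[^'\"]*" wordpress.sql | sort -u

# rewrite them to HTTPS in the dump before re-importing it
sed -i 's|http://www\.tunnelix\.com|https://www.tunnelix.com|g' wordpress.sql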

Source code optimisation and Page speed verification

This can be verified using the GTmetrix tool, available for free online. I noticed that my rank was a “C”. This was caused by a lack of minified HTML and CSS and by image issues. To handle the minify-HTML errors, I enabled the Minify HTML Markup plugin in WordPress itself, which corrected them. To address the image issues, I installed the optipng tool from the EPEL repository:

optipng.x86_64                  0.7.6-1.el6                        @epel        

For example, if you want to optimise a specific image, use the following command:

optipng -o2 Screen-Shot-2016-12-24-at-1.04.45-AM.png
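To optimise every PNG under the WordPress uploads directory in one go, a sketch (adjust the path to your own installation):

find /var/www/html/wp-content/uploads -name '*.png' -exec optipng -o2 {} \;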

Another verification was made on the GTmetrix website, and I noticed that the result was then an “A”.

Photo credits: gtmetrix.com

Tweaking the Web server HTTP headers

Htbridge will surely give you an overview of your web server's security and will accompany you step by step towards a better result.

Of course, since the website is behind Cloudflare, certain security tweaks such as Public-Key-Pins are limited. The Public Key Pinning Extension for HTTP (HPKP) is a security feature that tells a web client to associate a specific cryptographic public key with a certain web server, to decrease the risk of man-in-the-middle (MITM) attacks with forged certificates. I found an interesting article on https://raymii.org which explains how to activate HPKP.

Once you are in possession of your certificate and private key, you can derive the public key and compute the base64-encoded hash (the pin) used to activate the HPKP extension. The following commands produce the public key and the pin:

# openssl x509 -noout -in certificate.pem -pubkey | openssl asn1parse -noout -inform pem -out public.key;

# openssl dgst -sha256 -binary public.key | openssl enc -base64
 4vr+koFuogsfghGjgvpsqQIIikg5KowHTIGNQ5Prspc=
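The resulting hash is the pin itself. On Apache with mod_headers, the header would be set roughly as follows (a sketch based on the raymii.org article; HPKP requires a second, backup pin, shown here as a placeholder, and the max-age value is illustrative):

Header always set Public-Key-Pins "pin-sha256=\"4vr+koFuogsfghGjgvpsqQIIikg5KowHTIGNQ5Prspc=\"; pin-sha256=\"<backup-pin>\"; max-age=5184000; includeSubDomains"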

However, it turned out that HPKP is not supported on Cloudflare. HSTS, on the other hand, is. HTTP Strict Transport Security (HSTS, RFC 6797) is a web security policy technology designed to help secure HTTPS web servers against downgrade attacks. HSTS is a powerful technology which is not yet widely adopted, and Cloudflare aims to change this. I enabled it as per Cloudflare's recommendations.

A curl on https://tunnelix.com now shows the new headers.
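An illustrative sketch of the relevant line (the exact values depend on the HSTS settings chosen in the Cloudflare dashboard):

# curl -sI https://tunnelix.com | grep -i strict
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload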

No system is perfectly secure, but I believe these modifications are worth the adventure. I should say I was really impressed by free tools such as the Qualys SSL test, the Htbridge SSL and web security tests, and GTmetrix for page speed.

Hello Tunnelers, this is my first article for the year 2017. I seize this opportunity to wish my readers a Happy New Year 2017 and lots of prosperity. – TheTunnelix


Introducing MariaDB MaxScale 2.0 – Webinar

Today, I attended a webinar on MariaDB MaxScale presented by Roger Bodamer at 10.00 AM PDT. It was really interesting, and I strongly feel the work of system and database administrators is going to be enhanced by this technology.

It started with a thorough introduction to the challenges faced by the IT industry today, especially large firms: time to market, agility, system integration, real-time insights, security and high availability. Some basic explanations were given at both the application (customer-centric front end) and database (transaction-focused back end) levels. At the application level, the focus is mainly on continuous delivery, microservices and modular deployment, whereas at the database level the emphasis is on security, scalability, high availability and high-quality data management.

“MariaDB MaxScale makes it easy to handle the scalability and availability of your database cluster, and also secure it and manage the maintenance downtime. MaxScale is a next generation database proxy that goes well beyond routing, with advanced filtering, enhanced security and authentication. It is a multi-threaded, event-driven engine, that has its main functionality provided by plugins loaded at runtime. With MaxScale’s innovative architecture you can update the data layer on scale-out architectures, without impacting application performance.” – MariaDB

Photo Credits: MariaDB MaxScale

There are so many features in MaxScale that really amazed me in the security pillar, such as protection of data in motion, data in use, and the database firewall. Emphasis was laid on the kinds of jobs MaxScale can perform compared to simple master-slave environments. MaxScale is really good at replication when one master has hundreds of slaves: it offloads the master, which means MaxScale can take great care of replication issues. Another feature is that when the database is used to store information, a specific route can be defined for traffic within the database itself. Keymaps also come in as a great feature to manage those issues. Schemas can be updated; the goal is high availability. Data streaming can also be controlled. I noticed a heavy emphasis on security, especially whitelisting/blacklisting at the database level to drastically reduce DDoS attacks.

A live demo then showed some interesting features of MariaDB MaxScale. An interesting web GUI called MaxPanel was used to connect to the Master-Slave-Slave servers (all three running in Docker instances), giving indications about the server names, addresses, ports, connections, and status (server running). A load was generated on one of the slave servers, with writes configured on the master and reads on the slaves. The tool used to generate the load was HammerDB. In the configuration file, under the database firewall blacklist, rules can be inserted to prevent certain types of query, for example a "select * from tablename" which, if the table has many records, consumes a heavy amount of resources. To remediate this, a rule can be specified to require a WHERE clause. An example of such an entry is rule safe_order deny no_where_clause on_queries select, which means that no SELECT may be executed without a WHERE clause. Likewise, rule query_regex deny regex '*.orders*' is applied to queries touching the table called orders.
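As a sketch, the rules quoted from the demo would live in a rules file referenced from the MaxScale configuration; the file path and section name below are illustrative, and the regex has been adjusted to a valid form:

# /etc/maxscale-rules (illustrative path)
rule safe_order deny no_where_clause on_queries select
rule query_regex deny regex '.*orders.*'
users %@% match any rules safe_order query_regex

# referenced from maxscale.cnf and attached to a service with filters=DatabaseFirewall
[DatabaseFirewall]
type=filter
module=dbfwfilter
rules=/etc/maxscale-rules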

After the demo, a questions and answers session followed. It covered different architectures, where pluggable architectures can be used with specific protocols. Some answers still need to be cleared up, such as the question I raised about musl compatibility with MaxScale. Some days back, hackers.mu enhanced MariaDB to be musl compatible. It was a really interesting and educational session on MariaDB MaxScale, especially for administrators. I am looking forward to installing MariaDB MaxScale soon.


MySQL Binlogs Analysis for data loss

Some days back, I encountered a server where some data had been altered in the database. A quick report can be generated with the mysqlbinlog command.

Photo Credits: Mysql.com

The MySQL binary logs contain “events” that describe database changes such as table creation operations or changes to table data. They also contain events for statements that potentially could have made changes (for example, a DELETE which matched no rows), unless row-based logging is used. The binary log also records how long each statement that updated data took. The binary log has two important purposes:

  • For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. See Section 18.2, “Replication Implementation”.
  • Certain data recovery operations require use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup. See Section 8.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.

It is to be noted that enabling the MySQL binary logs makes servers react slightly more slowly, though the benefits are really useful. Binary logs should not be deleted directly with an rm -f command but rather with the command PURGE BINARY LOGS TO 'mysql-bin.111';

In this article I will demonstrate some commands to strip interesting information out of a binary log file. Statements that can alter information in a database include UPDATE, INSERT, DELETE, REPLACE and ALTER.

1. If you want to read the whole content of a binlog, use the following command. This will comprise all the logged events.

mysqlbinlog binlog.1111
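Note that with row-based logging the row events are stored base64-encoded; to get a readable, decoded view, add the -v and --base64-output=decode-rows options:

mysqlbinlog --base64-output=decode-rows -v binlog.1111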

2. Let's say you have a list of binlogs and you want to find all the ALTER statements carried out only for a specific database called "question":

mysqlbinlog binlog.* | grep -i -e "^alter" | grep -i -e "question" >> /tmp/alter_question.txt

3. Let's say you want to find all the ALTER commands carried out on 03/05/2016 from the file generated in part 2:

grep -i -A 3 '#160503' /tmp/alter_question.txt | less

4. If you want to extract all ALTER statements, with a few lines of context, from a bunch of binlogs for a specific database (question):

mysqlbinlog --database=question binlog.* | grep -B 5 -i -e "^alter" >> /tmp/alter_question.txt

5. You might also want to retrieve information for a specific date and time range:

mysqlbinlog --start-datetime="2016-2-02 5:00:00" --stop-datetime="2016-03-03 8:10:00" mysql-bin.000007
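These options combine naturally with the earlier greps; for instance, to count the DELETE statements executed in that window (a sketch):

mysqlbinlog --start-datetime="2016-02-02 05:00:00" --stop-datetime="2016-03-03 08:10:00" mysql-bin.000007 | grep -i -c -e "^delete"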

However, analysis of the MySQL binary logs is a pretty vast subject; it depends on what information is needed. I also found that mk-query-digest is an interesting tool to extract information and perform analysis. See http://linux.die.net/man/1/mk-query-digest


Messing around with PL/Tcl on PostgreSQL

Some days back, I was having an issue whilst activating the PL/Tcl extension/language on PostgreSQL. Luckily, it was resolved with help from Mauricio Cortes, a guy from the Facebook group PostgreSQL Server. Let's first see what PL/Tcl is. “PL/Tcl is a loadable procedural language for the PostgreSQL database system that enables the Tcl language to be used to write functions and trigger procedures.” – PostgreSQL. This allows us to load the module and unload it when it is not being used. You can also activate it for a particular user.


Initially, after installing the package postgresql95-tcl, as per the documentation, I ran a CREATE EXTENSION, but it returned the error “ERROR: could not open extension control file "/usr/pgsql-9.5/share/extension/pltcl.control": No such file or directory”. I also noticed that the file pltcl.control was not present. He recommended that I try a CREATE LANGUAGE instead of a CREATE EXTENSION, and it worked!

However silly it may sound, the official documentation mentions that CREATE LANGUAGE has been deprecated in favour of CREATE EXTENSION. I immediately understood that I was using a bad repository. I would advise using the official RPM files from the official website, yum.postgresql.org.

Let's see how to activate it:

1. Simply fire a yum install postgresql95-tcl. Make sure that you are using the right repository (https://www.postgresql.org/download/linux/redhat/) to be able to use CREATE EXTENSION.

2. After logging in, you can verify it with the command \dx to get a list of installed extensions, or \dx+ for more details about the objects.

[Screenshot: \dx output listing the installed extensions]

3. Now, here is the catch: if you are using a PostgreSQL version older than 9.5, activate it with a create language pltcl; otherwise, on version 9.5 or the 9.6 beta, you can use a create extension pltcl;.
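Once the language is available, a quick test confirms that it works. The tcl_max function below is the classic example from the PostgreSQL documentation:

CREATE EXTENSION pltcl;  -- or CREATE LANGUAGE pltcl; on versions older than 9.5

CREATE FUNCTION tcl_max(integer, integer) RETURNS integer AS $$
    if {$1 > $2} {return $1}
    return $2
$$ LANGUAGE pltcl STRICT;

SELECT tcl_max(3, 7);  -- returns 7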

After some more research, I noticed the advantage of this approach: when an extension is created with CREATE EXTENSION, a dump of the database will also include the extension, whereas with the older CREATE LANGUAGE approach the Tcl language would not be included in the dump.



Activating MariaDB Audit log

“The purpose of the MariaDB Audit Plugin is to log the server’s activity. Records about who connected to the server, what queries ran and what tables were touched can be stored to the rotating log file or sent to the local syslogd.” – MariaDB. The MariaDB Audit Plugin works for MariaDB, MySQL and Percona Server.

Photo Credits: MariaDB

Links and Basics

Let's see how to install the MariaDB audit plugin to retrieve all activity on the database server. I am actually testing it on a MariaDB 5.5 series database; you can use the 10.1 series if you want. For MariaDB installation, there are articles I posted some time back, such as MariaDB Galera cluster installation, MariaDB and its improved security features, Master-Master replication on MariaDB and Master-Slave replication on MariaDB. I think those articles are pretty straightforward for diving into the installation procedures. Imagine having many users connected to a database performing many queries. One way to trace those requests is through the binlogs; another is by activating the MariaDB audit log. Let's see how to activate it.

Verification and Prerequisites

After installing your database server, get into the console and launch the following command: SHOW GLOBAL VARIABLES LIKE 'plugin_dir'; It should prompt you something like this:

[Screenshot: output of SHOW GLOBAL VARIABLES LIKE 'plugin_dir', showing the plugin directory path]

If you are using a 32-bit system, it would be found in /usr/lib/mysql/plugin. Get into that directory and check whether you have a file called server_audit.so. By default, on recent MariaDB versions, it is already available. However, if you are using MySQL Server, an old MariaDB or a Percona Server, you will need to install the server_audit plugin. Download it from this link: http://www.skysql.com/downloads/mariadb-audit-plugin-beta Once downloaded, extract it and copy the file server_audit.so to the plugin_dir path [see screenshot above].

This does not mean it is already activated. You can verify it using the following command: SELECT * FROM information_schema.plugins WHERE plugin_name='server_audit'; If it returns an empty set, it is not yet activated; otherwise you should see something like this:

[Screenshot: the server_audit row in information_schema.plugins]

Activating the Plugin

To activate the plugin, you can restart the service. However, there is another option that avoids any MySQL downtime: launching the command INSTALL PLUGIN server_audit SONAME 'server_audit.so';

[Screenshot: INSTALL PLUGIN server_audit executed successfully]
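To have the plugin loaded automatically at every restart, it can also be declared in the server configuration file (a sketch assuming a standard my.cnf layout; MariaDB 10.x also accepts plugin_load_add):

[mysqld]
# load the audit plugin at startup
plugin-load=server_audit=server_audit.so
server_audit_logging=ON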

Plugin configuration

After activating the plugin, if you launch the command SHOW GLOBAL VARIABLES LIKE 'server_audit%'; it will by default output the following parameters.

[Screenshot: default values of the server_audit% variables]

These default values will normally create a log file called server_audit.log in the data directory. The values are self-explanatory. For example, the parameter server_audit_file_rotate_size with value 1000000 means that when the log reaches 1000000 bytes it is rotated, and nine rotated files are kept before the oldest is overwritten. You also need to choose which types of events you want to log. Here is an example activating the CONNECT, QUERY and TABLE events; if you want to audit only the CONNECT event, set the variable to CONNECT only.

[Screenshot: activating the CONNECT, QUERY and TABLE events]
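In SQL, that amounts to the following (using the variables documented below):

SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';

-- to audit connections only:
SET GLOBAL server_audit_events = 'CONNECT';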

You can also turn off the plugin using the command SET GLOBAL server_audit_logging=OFF; As mentioned previously, the logs are saved at /var/lib/mysql/server_audit.log. Here is an example of a log:

[Screenshot: sample entries from server_audit.log]

Here are some of the most important variables:

  • server_audit_logging – Enables audit logging; if it’s not set to ON, audit events will not be recorded and the audit plugin will not do anything.
  • server_audit_events – Specifies the events you wish to have in the log. By default the value is empty, which means that all events are recorded. The options are: CONNECTION (users connecting and disconnecting), QUERY (queries and their result), and TABLE (which tables are affected by the queries).
  • server_audit_excl_users, server_audit_incl_users – These variables specify which users’ activity should be excluded from or included in the audit. server_audit_incl_users has the higher priority. By default, all users’ activity is recorded.
  • server_audit_output_type – By default auditing output is sent to a file. The other option is syslog, meaning all entries go to the syslog facility.
  • server_audit_syslog_facility, server_audit_syslog_priority – Specifies the syslog facility and the priority of the events that should go to syslog.

Log File Examination

The log file can also be examined. The audit is performed in such a way that even a single user connecting to and disconnecting from MySQL is easily detected. A connect and disconnect would usually appear like this:

[Screenshot: connect and disconnect entries in the audit log]

Queries would look like this; even if a query contains errors, it is kept in the log:

[Screenshot: query entries in the audit log, including a failed query]

The server_audit_events variable specifies which events to log, taking a comma-separated list of the event types as an argument. The types of log records are shown below:

[Table of the log record types. Photo Credits: MariaDB]

The audit log format looks like this:

[timestamp],[serverhost],[username],[host],[connectionid],
[queryid],[operation],[database],[object],[retcode]
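An entry in that format would look roughly like this (the values are made up for illustration):

20160530 09:22:42,dbserver1,root,localhost,14,0,CONNECT,testdb,,0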

Other Tips and Tricks

  • To avoid a heavy load on the machine, you can exclude a specific user using the parameter server_audit_excl_users=test,toto 
  • If the server_audit_output_type variable is set to SYSLOG instead of the default, FILE, the audit log file format will be as follows:
    [timestamp][syslog_host][syslog_ident]:[syslog_info][serverhost],[username],[host],
    [connectionid],[queryid],[operation],[database],[object],[retcode]
  • Be aware, though, that passwords given with the functions PASSWORD() or OLD_PASSWORD() in DML statements will still be logged as plain text in queries. Key strings used with encryption functions like ENCODE() and AES_ENCRYPT() are also still logged in plain text.
  • DDL and DML statements can also be audited.