Category: Linux Application

How I optimised my WordPress website?

Some days back, I was brewing up some plans to optimise my website's source code, HTTP headers, latency and other security aspects. I had to carry out some analysis and research using tools available on the internet. I should admit that, at first, it looked pretty simple, but it was not. For instance, I did not permit myself to modify the production environment directly, so I had to migrate it to a pre-production environment first. Page caching was yet another issue, since a cached page can easily trick you into believing a modification has taken effect.

My website is behind Cloudflare, which is already an advantage in terms of security, performance, reliability and insight, but that does not mean the website cannot be hacked. According to sucuri.net, websites running the WordPress CMS are constantly being hacked. Of course, the damage depends on the mode of attack and the impact of the infection.

Photo credits: sucuri.net

Migrating to TLS

Migrating a CMS which already has several articles posted can be an issue, as the URLs are recorded in the database as well as in the source code itself. Also, there were links on the website which did not point to HTTPS, so after moving to the HTTPS version, “Mixed content” errors could be noticed when accessing the website. One of the interesting free features of Cloudflare is that everyone can have a free SSL certificate issued by Comodo. You will have to generate your certificate and your private key from Cloudflare and configure them in your virtual host.

Some corrections to the WordPress source code needed to be added in the wp-config.php file as follows:

define('WP_HOME','https://tunnelix.com/');

define('WP_SITEURL','https://tunnelix.com/');

On top of that, there seemed to be lots of URLs in the database itself that needed corrections, using the following commands:

update wp_options set option_value = replace(option_value, 'http://www.tunnelix.com', 'https://www.tunnelix.com') where option_name = 'siteurl';


update wp_posts set guid = replace(guid, 'http://www.tunnelix.com', 'https://www.tunnelix.com');

update wp_posts set post_content = replace(post_content, 'http://www.tunnelix.com', 'https://www.tunnelix.com');

However, there is a trick to identify the remaining non-HTTPS URLs: make a dump of the database, grep through it, then use sed to rewrite the unwanted entries (a sketch follows below). Once the “Mixed Content” errors had been fixed, I launched a scan on the Qualys SSL Labs website. The result was an “A+”. You can also use the HTbridge free SSL server test, which is pretty fascinating, especially for verifying PCI DSS compliance, HIPAA compliance, NIST guidelines and industry best practice in general. If all those criteria are met, you will score an “A+” rather than an “A” or, worse, an “F”.
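Here is a minimal sketch of that dump-and-replace approach, assuming a database named wordpress and working MySQL credentials; note that WordPress stores some options as serialized PHP, so a search-and-replace tool that fixes string lengths is safer for those entries:

# Dump the database and list any URLs still served over plain HTTP
mysqldump wordpress > wordpress.sql
grep -oE "http://[^\"' ]+" wordpress.sql | sort | uniq -c | sort -rn | head

# Rewrite the remaining plain-HTTP URLs, then reimport the dump
sed -i 's|http://www.tunnelix.com|https://www.tunnelix.com|g' wordpress.sql
mysql wordpress < wordpress.sql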

Source code optimisation and Page speed verification

This can be verified using the GTmetrix tool, available for free online. I noticed that my rank was a “C”, caused by a lack of minified HTML and CSS and by image dimension issues. To handle the minify-HTML errors, I enabled the Minify HTML Markup plugin on WordPress itself, which corrected them. To tweak the image dimensions, I installed the tool OptiPNG from the EPEL repository:

optipng.x86_64                  0.7.6-1.el6                        @epel        

For example, if you want to optimise a specific image, use the following command:

optipng -o2 Screen-Shot-2016-12-24-at-1.04.45-AM.png
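To optimise every PNG under the WordPress uploads directory in one go, a find one-liner can be used (the path below is the WordPress default; adjust it to your installation):

find /var/www/html/wp-content/uploads -name '*.png' -exec optipng -o2 {} \;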

Another verification was made on the GTmetrix website, and I noticed that the result was then an “A”.

Photo credits: gtmetrix.com

Tweaking the Web server HTTP headers

HTbridge will surely give you an overview of your web server's security and will accompany you step by step towards a better result.

Of course, since the website is behind Cloudflare, it is limited to certain security tweaks such as Public-Key-Pins. The Public Key Pinning Extension for HTTP (HPKP) is a security feature that tells a web client to associate a specific cryptographic public key with a certain web server to decrease the risk of Man-in-the-Middle (MITM) attacks with forged certificates. I found an interesting article on https://raymii.org which explains how to activate HPKP.

Once you are in possession of your certificate and private key, you can extract the public key and compute the base64 pin needed to activate the HPKP extension. The following commands produce the public key and the pin.

# openssl x509 -noout -in certificate.pem -pubkey | openssl asn1parse -noout -inform pem -out public.key;

# openssl dgst -sha256 -binary public.key | openssl enc -base64
 4vr+koFuogsfghGjgvpsqQIIikg5KowHTIGNQ5Prspc=
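The pin is then published through an HTTP response header. Here is a minimal sketch for an Nginx virtual host; the backup pin is a hypothetical placeholder, and HPKP requires at least one backup pin:

# Inside the server {} block of the TLS virtual host
add_header Public-Key-Pins 'pin-sha256="4vr+koFuogsfghGjgvpsqQIIikg5KowHTIGNQ5Prspc="; pin-sha256="<backup-pin>"; max-age=5184000';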

However, it turned out that HPKP is not supported on Cloudflare. There is also HSTS to consider. HTTP Strict Transport Security (HSTS, RFC 6797) is a web security policy technology designed to help secure HTTPS web servers against downgrade attacks. HSTS is a powerful technology which is not yet widely adopted, and Cloudflare aims to change this. I enabled it as per the recommendations by Cloudflare.

A curl on the URL https://tunnelix.com now returns the new security headers.
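A quick way to confirm is to fetch only the response headers and filter for HSTS; the Strict-Transport-Security header should show up in the output:

curl -sI https://tunnelix.com | grep -i strict-transport-security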

No system is perfectly secure, but I believe these modifications are worth the adventure. I should say I was really impressed by free tools such as the Qualys SSL test, the HTbridge free SSL and web security tests, and GTmetrix for page speed.

Hello Tunnelers, this is my first article for the year 2017. I seize this opportunity to wish my readers a Happy New Year 2017 and lots of prosperity. – TheTunnelix


A simple and fast Memcache implementation on WordPress

Memcache is an in-memory key-value data store. The data lives in the memory of the machine, which is why it is much faster than other data stores. The only things you can put in memcache are key-value pairs, and anything serializable can go in as a key or a value. In general, there are two major use cases: caching, and sharing data across app instances. Typical candidates for caching are datastore query results, user authentication tokens and session data; API call results and URL fetches can also be cached, and even the content of a whole page can be kept in memcache.
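As an illustration of the caching use case, here is a minimal PHP sketch; it assumes the php-memcached extension is installed, and expensive_query() is a hypothetical placeholder for slow work:

<?php
// Connect to the local memcached instance
$m = new Memcached();
$m->addServer('localhost', 11211);

// Try the cache first; on a miss, do the slow work and cache it for 300 seconds
$posts = $m->get('recent_posts');
if ($posts === false) {
    $posts = expensive_query(); // hypothetical slow database query
    $m->set('recent_posts', $posts, 300);
}
?>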

Results can be ten times faster, as shown by this benchmark test carried out by the Google Developers team some time back.

Photo credits: Google Developers Team

1. Installation on a WordPress server is simple and straightforward. On the CLI, simply run:

yum install memcached

2. By default, the memcached configuration on CentOS 7 is found at /etc/sysconfig/memcached, as follows:

# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=" "

3. To check that memcached is running correctly, use the following command:

echo stats | nc localhost 11211

4. Now, log in to the plugin area, then download and activate the WP-FFPC plugin.


5. In the wp-config.php file of the WordPress CMS, define the following parameter:

define('WP_CACHE', TRUE);

6. Save the changes in the WP-FFPC plugin settings. On your server, you can now see the caching being done:

# echo stats | nc localhost 11211 | grep total_items
STAT total_items 32

And, here you are. A simple and fast memcache implementation on your WordPress website.

Note: Before activating a WordPress plugin, you should make sure that proper auditing has been carried out, as several vulnerabilities are being found in WordPress plugins these days.


Chef workstation and a basic cookbook

The main job of a system administrator is to maintain systems, which means repeating ourselves, which is kind of boring, and digging into our memory for previous configurations that we have set up on a machine. No wonder configuration consistency has to be checked manually across servers, and it can be thousands of machines. Chef is just another tool to get rid of these situations. It is a configuration management tool written in Ruby and Erlang for IT professionals. Compared to Puppet, which has only a workstation and a server, Chef has three components: the Chef server, the Chef workstation and the Chef node.

Photo credits: Linode.com

The cookbooks are written on the workstation and then uploaded to the Chef server, which executes them on the nodes. Chef nodes can be physical, virtual or in the cloud. Normally, Chef nodes do not communicate directly with the workstation. Let's not focus on the installation here.

Let’s first get into the workstation.

1. On the workstation, download and install the Chef client from the client download page. In my case, I am on a CentOS 7 virtual machine.

# wget https://packages.chef.io/stable/el/7/chef-12.12.15-1.el7.x86_64.rpm
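The downloaded package can then be installed with rpm:

# rpm -ivh chef-12.12.15-1.el7.x86_64.rpm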

2. After installation, you should notice four utilities already available: chef-apply, chef-client, chef-shell and chef-solo.

3. Now, we are going to create a cookbook. Since Chef uses a DSL (domain-specific language), the file should end with the extension .rb. Here is an example called file.rb. The first line declares a file resource, which will manage a file on the machine; the file will be created with the content ‘Hello Tunnelix’.

file 'file.txt' do
  content 'Hello Tunnelix'
end

4. The tool chef-apply can be used to run the recipe.

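Assuming the recipe was saved as file.rb, the run is a one-liner:

chef-apply file.rb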

5. You will also notice that file.txt has been created in the current directory, as no path was specified.

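A quick verification, given the recipe above:

ls file.txt
cat file.txt   # prints: Hello Tunnelix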

Tips:

  • If the content of file.rb (refer to point 3) has not been modified and you fire chef-apply again, you will notice a prompt saying it is already ‘up to date’, which means that it reduces disk I/O as well as bandwidth.
  • A string must be enclosed in double quotes when using variables; you cannot nest a single quote inside another single quote. It won’t work! See the sketch after this list.
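Here is a minimal illustration of that quoting rule, reusing the same file resource with a hypothetical user variable:

user = 'Tunnelix'

file 'file.txt' do
  content "Hello #{user}" # interpolation only works inside double quotes
end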

Chef always checks and refers to the resources and attributes in the cookbook to execute an order, i.e. to cook the food. The point is that the DSL lets you focus on what the modifications need to be; Chef then keeps the servers in a consistent state.


HTTPoxy – Is your nginx affected?

httpoxy is a set of vulnerabilities that affect application code running in CGI, or CGI-like environments. It comes down to a simple namespace conflict:

  • RFC 3875 (CGI) puts the HTTP Proxy header from a request into the environment variables as HTTP_PROXY
  • HTTP_PROXY is a popular environment variable used to configure an outgoing proxy

This leads to a remotely exploitable vulnerability. If you’re running PHP or CGI, you should block the Proxy header now. – httpoxy.org

httpoxy is a vulnerability for server-side web applications; if you’re not deploying code, you don’t need to worry. A number of CVEs have been assigned, covering specific languages and CGI implementations; the full list is available on httpoxy.org.


After receiving the updates from hackers.mu, I immediately started some heavy research. For Nginx users, the attack can be defeated with a modification in the fastcgi_params file.

1. Create a file called test.php in the web root, containing the following code:

# cat test.php

<?php
echo getenv('HTTP_PROXY');
?>

2. Launch a curl as follows. As you can see, it is “AFFECTED”:

# curl -H 'Proxy: AFFECTED' http://127.0.0.1/test.php
AFFECTED

3. Add the following parameter in the /etc/nginx/fastcgi_params file:

 fastcgi_param  HTTP_PROXY  "";

4. Stop and start the Nginx service.

5. Conduct the test again. You should no longer be vulnerable.
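Re-running the same request should now print an empty value, since the Proxy header no longer reaches PHP:

curl -H 'Proxy: AFFECTED' http://127.0.0.1/test.php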

References explaining this vulnerability in more detail:

  • https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
  • https://www.kb.cert.org/vuls/id/797896
  • https://access.redhat.com/solutions/2442861
  • https://httpoxy.org/#history
  • http://www.theregister.co.uk/2016/07/18/httpoxy_hole/
  • https://blogs.akamai.com/2016/07/akamai-mitigates-httpoxy-vulnerability.html
  • http://www.zdnet.com/article/15-year-old-httpoxy-flaw-causes-developer-patch-scramble/
  • https://access.redhat.com/solutions/2435541

 


DDoS attack on WordPress xmlrpc.php solved using Fail2Ban

Several types of attack can be launched against a WordPress website: unwanted bots, SSH bot requests, unwanted crawlers, etc. Some time back, I noticed several attempts to perform a DDoS attack on a WordPress website by sending massive POST requests to the xmlrpc.php file. This caused the web server to consume almost all its resources and consequently made the website crash. Worse, if you are hosting your website on a container such as OpenVZ or Docker, your hosting provider's agreement will usually mention abuse of resources, which makes you solely responsible; in many agreements, the provider can terminate the contract and you may lose all your data. Hosting your website on a container is indeed a huge responsibility.


What is the xmlrpc.php file on a WordPress website?

It is actually an API which gives developers who build apps a way to communicate with the WordPress site. The XML-RPC API that WordPress provides lets applications do many of the things that can be done when logged in to the web interface, such as publishing and editing posts. There is a full list of the WordPress API calls at this link. There are several ways to block users from performing POST requests on xmlrpc.php, such as iptables rules, or rules in the .htaccess file or in the web server; however, I find Fail2Ban more suitable for my environment.

How might VPS hosting providers interpret these attacks?

Usually a VPS provider will not perform an analysis of the attack, though it depends on the service you are buying. One way the attack might be felt at the hosting level is through conntrack session usage. Abuse of conntrack usage will usually raise an alert on the hosting side, and you might receive a mail about the issue. It is then up to you to investigate the issue in depth.

What are conntrack (connection tracking) sessions?

A normal Linux OS has a maximum of 65536 conntrack sessions by default. These sessions all require memory which is used by the host node and not by the VPS, so setting this limit too high can impact the whole node and allow users to consume more RAM than their VPS has been allocated by eating up the host's RAM. One provider, for instance, states that any VPS using over 20000 conntrack sessions will automatically be suspended by its automated system. “In brief, conntrack refers to the ability to maintain state information about a connection in memory tables, such as source and destination IP address and port number pairs (known as socket pairs), protocol types, connection state and timeouts.” – rigacci.org. Firewalls that perform such tasks are known as stateful.

Counter attack measures that could be taken

1. In the jail.local of the Fail2Ban application, add the following block. By default, jail.local is located at /etc/fail2ban/jail.local. The logpath depends on where your web server access log is located; in this case it is /var/log/nginx/access.log.

[xmlrpc]
enabled = true
filter = xmlrpc
action = iptables[name=xmlrpc, port=http, protocol=tcp]
logpath = /var/log/nginx/access.log
bantime = 43600
maxretry = 20

2. Then, create a conf file in /etc/fail2ban/filter.d; I created it as xmlrpc.conf. Add the following lines, which act as a rule matching every POST request for xmlrpc.php (a quick way to test the filter follows the block):

[Definition]
failregex = ^<HOST> .*POST .*xmlrpc\.php.*
ignoreregex =
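Before restarting anything, the filter can be tested against the live access log with fail2ban-regex, which ships with Fail2Ban:

fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/xmlrpc.conf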

3. Restart your fail2ban service and watch the bans in the fail2ban log. Here is what happens when an IP is caught:

# grep -i xml /var/log/fail2ban.log
2016-07-17 06:39:06,685 fail2ban.actions [4565]: NOTICE [xmlrpc] Ban 108.162.246.97
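The currently banned IPs for this jail can also be listed directly:

fail2ban-client status xmlrpc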

4. Here is an idea of the number of POST requests received by the server:

# grep -i "xmlrpc.php" /var/log/nginx/access.log | grep POST | awk '{print $1}' | wc -l
62680

5. Let’s have a look at the top 10 IPs performing the most POST requests. Note the sort before uniq -c, which uniq needs in order to count correctly:

# grep -i "xmlrpc.php" /var/log/nginx/access.log | grep POST | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10

185 91.121.143.111
186 94.136.37.189
279 37.247.104.148
317 80.237.79.2
1497 191.96.249.20
3060 46.105.127.185
5999 5.154.191.55
11612 191.96.249.54
16917 52.206.5.20
17111 107.21.131.43

6. However, if you are not on a container server, you can set various parameters in sysctl.conf, even before performing a full analysis of the conntrack abuse. You can limit the number of connections using the following command; in this case, I have limited it to 10,000 connections:

/sbin/sysctl -w net.netfilter.nf_conntrack_max=10000
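To make the limit survive a reboot, the same parameter can be added to /etc/sysctl.conf and reloaded:

echo 'net.netfilter.nf_conntrack_max = 10000' >> /etc/sysctl.conf
/sbin/sysctl -p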

7. To check how many sessions are in use, read the counter (without the -w flag, since we are reading the value rather than setting it):

/sbin/sysctl net.netfilter.nf_conntrack_count

8. However, you need to make sure that the conntrack module has been loaded into the kernel. It can be loaded with the following command:

modprobe ip_conntrack
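To verify that the module is actually loaded:

lsmod | grep conntrack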

The aim was to find a solution to mitigate the DDoS over xmlrpc.php, together with setting up the conntrack parameter in sysctl.conf.