
Chef workstation and a basic cookbook

The main job of a system administrator is to maintain systems, which means repeating ourselves constantly (which is rather boring) and digging through our memory of configurations previously set up on a machine. No wonder server configurations need manual consistency checks, and there can be thousands of machines. Chef is a tool that gets rid of these situations: a configuration management tool written in Ruby and Erlang for IT professionals. Compared to Puppet, which has only a Workstation and a Server, Chef has three components: the Chef Server, the Chef Workstation and the Chef Node.

Photo credits: Linode.com

The cookbooks are written on the Workstation and then uploaded to the Chef Server (a service), from which they are executed on the nodes. Chef nodes can be physical machines, virtual machines or cloud instances. Normally, Chef nodes do not communicate directly with the Workstation. Let’s not focus on the installation here.

Let’s first get into the workstation.

1. On the workstation, download and install the Chef client from the client download page. In my case, I am on a CentOS 7 virtual machine.

[[email protected] ~]# wget https://packages.chef.io/stable/el/7/chef-12.12.15-1.el7.x86_64.rpm

2. After installation, you should notice four utilities already available: chef-apply, chef-client, chef-shell and chef-solo.

3. Now we are going to create a cookbook. Since Chef uses a DSL (domain-specific language) layered on Ruby, the file created should end with the extension .rb. Here is an example called file.rb. The first line declares a file resource, meaning a file is being managed on the machine; its content will be set with the line ‘Hello Tunnelix’.

file 'file.txt' do
  content 'Hello Tunnelix'
end

4. The tool chef-apply can be used to run it as follows:

Screenshot from 2016-08-07 21-49-07

5. You will also notice that file.txt has been created in the current directory, as no path was specified.

Screenshot from 2016-08-07 21-50-24

Tips:

  • If the content of file.rb (refer to point 3) has not been modified and you fire chef-apply again, you will notice that the resource is reported as ‘up to date’. Chef skips the write, which reduces disk I/O as well as bandwidth.
  • A string must be enclosed in double quotes when it uses variables (interpolation). You also cannot nest a single quote inside another single quote; it won’t work!
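The ‘up to date’ behaviour in the first tip is Chef’s idempotence at work. As a rough illustration only, here is a hand-rolled sketch of the same idea in plain shell (the ensure_file helper is mine, not part of Chef; Chef tracks resource state internally and far more robustly):

```shell
#!/bin/sh
# ensure_file: write file.txt only when its content is not already correct,
# mirroring how chef-apply skips resources that are already "up to date".
ensure_file() {
  if [ -f file.txt ] && [ "$(cat file.txt)" = "Hello Tunnelix" ]; then
    echo "file.txt is up to date"      # no write: saves disk I/O
  else
    printf 'Hello Tunnelix' > file.txt
    echo "file.txt created/updated"
  fi
}

ensure_file   # first run writes the file
ensure_file   # second run detects no change and skips the write
```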

Chef always checks and refers to the resources and attributes in the cookbook to execute an order, i.e. to cook a dish. The point is that Chef’s DSL focuses on describing what the modifications need to be, not how to perform them. This is how Chef keeps servers in a consistent state.


HTTPoxy – Is your nginx affected?

httpoxy is a set of vulnerabilities that affect application code running in CGI, or CGI-like environments. It comes down to a simple namespace conflict:

  • RFC 3875 (CGI) puts the HTTP Proxy header from a request into the environment variables as HTTP_PROXY
  • HTTP_PROXY is a popular environment variable used to configure an outgoing proxy

This leads to a remotely exploitable vulnerability. If you’re running PHP or CGI, you should block the Proxy header now. – httpoxy.org

httpoxy is a vulnerability for server-side web applications. If you’re not deploying code, you don’t need to worry. A number of CVEs have been assigned, covering specific languages and CGI implementations:
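The namespace clash can be sketched in a few lines of shell. The cgi_env_name helper below is hypothetical; it only imitates the RFC 3875 rule (uppercase the header name, turn dashes into underscores, prefix HTTP_), which is exactly how a client-supplied Proxy header lands in HTTP_PROXY:

```shell
#!/bin/sh
# cgi_env_name: map an HTTP request header name to the CGI environment
# variable it would populate under the RFC 3875 convention.
cgi_env_name() {
  printf 'HTTP_%s\n' "$(printf '%s' "$1" | tr 'a-z-' 'A-Z_')"
}

cgi_env_name User-Agent   # prints HTTP_USER_AGENT (harmless)
cgi_env_name Proxy        # prints HTTP_PROXY (the dangerous collision)
```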

Screenshot from 2016-07-20 22-45-54

After receiving the updates from cyberstorm.mu, I immediately started some heavy research. For Nginx users, the attack can be defeated with a small modification in the fastcgi_params file.

1. Create a file called test.php containing the following code:

[[email protected]]# cat test.php

<?php
echo getenv('HTTP_PROXY');
?>

2. Launch a curl request as follows. As you can see, the response comes back “AFFECTED”:

[[email protected]]# curl -H 'Proxy: AFFECTED' http://127.0.0.1/test.php
 AFFECTED
 AFFECTED

3. Add the following parameter to the /etc/nginx/fastcgi_params file:

 fastcgi_param  HTTP_PROXY  "";

4. Stop and start the Nginx service.

5. Conduct the test again. The curl command should now return an empty response, confirming you are no longer vulnerable.

References – several links explaining this vulnerability are already available:

  • https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
  • https://www.kb.cert.org/vuls/id/797896
  • https://access.redhat.com/solutions/2442861
  • https://httpoxy.org/#history
  • http://www.theregister.co.uk/2016/07/18/httpoxy_hole/
  • https://blogs.akamai.com/2016/07/akamai-mitigates-httpoxy-vulnerability.html
  • http://www.zdnet.com/article/15-year-old-httpoxy-flaw-causes-developer-patch-scramble/
  • https://access.redhat.com/solutions/2435541

 


DDOS attack on WordPress xmlrpc.php solved using Fail2Ban

Several types of attack can be launched against a WordPress website, such as unwanted bots, SSH brute-force attempts, unwanted crawlers, etc. Some time back, I noticed several attempts to perform a DDoS attack on a WordPress website by sending massive POST requests to the xmlrpc.php file. This causes the web server to consume almost all its resources and eventually makes the website crash. Worse, if you are hosting your website on a container such as OpenVZ or Docker, your hosting provider’s agreement will usually mention abuse of resources, which makes you solely responsible. In many agreements the provider can terminate the contract, and you may lose all your data. Hosting your website on a container is indeed a huge responsibility.

Screenshot from 2016-07-17 13-49-23

What is the xmlrpc.php file on a WordPress website?

It is actually an API which gives developers who build apps the ability to communicate with the WordPress site. The XML-RPC API that WordPress provides lets developers write applications that can do many of the things possible from the web interface, such as publishing and editing posts. There is a full list of the WordPress API calls at this link. There are several ways to block users from performing POST requests on xmlrpc.php, such as iptables rules, rules in the .htaccess file or in the web server configuration. However, I find Fail2Ban more suitable for my environment.

How might VPS hosting providers interpret these attacks?

Usually a VPS provider will not perform an analysis of the attack, though it depends on the service you are buying. One way the attack might be felt at the hosting level is through conntrack session usage. Abuse of conntrack usage will usually raise an alert on the hosting side, and you might receive a mail about the issue. It is then up to you to investigate it deeply.

What are conntrack (connection tracking) sessions?

A typical Linux host has a maximum of 65,536 conntrack sessions by default. These sessions all require memory on the host node, not in the VPS, so setting this limit too high can impact the whole node and let users consume more RAM than their VPS has allocated by eating into the host’s RAM. Some providers’ automated systems will suspend any VPS that uses over 20,000 conntrack sessions. “In brief, conntrack refers to the ability to maintain state information about a connection in memory tables, such as source and destination IP address and port number pairs (known as socket pairs), protocol types, connection state and timeouts.” – rigacci.org. Firewalls that perform such a task are known as stateful.

Counter attack measures that could be taken

1. In jail.local of the Fail2Ban application, add the following lines. By default, jail.local is located at /etc/fail2ban/jail.local. Adjust logpath to wherever your web server access log lives; in this case it is /var/log/nginx/access.log.

[xmlrpc]
enabled = true
filter = xmlrpc
action = iptables[name=xmlrpc, port=http, protocol=tcp]
logpath = /var/log/nginx/access.log
bantime = 43600
maxretry = 20

2. Then create a filter file in /etc/fail2ban/filter.d; I created mine as xmlrpc.conf. Add the following lines, which act as a rule matching every POST request for xmlrpc.php:

[Definition]
failregex = ^<HOST> .*POST .*xmlrpc\.php.*
ignoreregex =
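You can sanity-check a filter of this shape outside Fail2Ban with grep. The log line below is fabricated (documentation IP 203.0.113.5), and Fail2Ban’s <HOST> tag is approximated here by a plain ‘leading non-space token’ pattern:

```shell
#!/bin/sh
# A fabricated nginx access-log line for a POST to xmlrpc.php.
line='203.0.113.5 - - [17/Jul/2016:06:39:06 +0000] "POST /xmlrpc.php HTTP/1.1" 200 370'

# <HOST> approximated as ^[^ ]+ for a quick grep-based check.
if printf '%s\n' "$line" | grep -Eq '^[^ ]+ .*POST .*xmlrpc\.php'; then
  echo "line would be matched by the filter"
fi
```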

3. Restart your Fail2Ban service and watch the Fail2Ban log. Here is an idea of what happens when an IP is caught:

[[email protected] nginx]# cat /var/log/fail2ban.log | grep -i xml
2016-07-17 06:39:06,685 fail2ban.actions [4565]: NOTICE [xmlrpc] Ban 108.162.246.97

4. Here is an idea of the number of POST requests received by the server:

[[email protected] nginx]# grep -i "xmlrpc.php" /var/log/nginx/access.log | grep POST | awk '{print $1}' | wc -l
62680

5. Let’s have a look at the top 10 IPs performing the most POST requests:

[[email protected] nginx]# grep -i "xmlrpc.php" /var/log/nginx/access.log | grep POST | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10

185 91.121.143.111
186 94.136.37.189
279 37.247.104.148
317 80.237.79.2
1497 191.96.249.20
3060 46.105.127.185
5999 5.154.191.55
11612 191.96.249.54
16917 52.206.5.20
17111 107.21.131.43
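The counting part of that pipeline can be tried on fabricated data. One detail worth knowing: uniq -c only collapses adjacent duplicates, so the IPs must be sorted before counting:

```shell
#!/bin/sh
# Fabricated client IPs from documentation ranges, one per request.
busiest=$(printf '198.51.100.7\n203.0.113.5\n198.51.100.7\n198.51.100.7\n203.0.113.5\n192.0.2.1\n' \
  | sort | uniq -c | sort -n | tail -n 1)
echo "$busiest"   # whitespace aside: 3 198.51.100.7
```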

6. However, if you are not on a container server and have not yet performed a full analysis of the conntrack abuse, you can set parameters in sysctl.conf to limit the number of tracked connections. In this case, I have limited it to 10,000 connections:

/sbin/sysctl -w net.netfilter.nf_conntrack_max=10000
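Note that sysctl -w only changes the running kernel; the value is lost on reboot. To make it persistent, the same key can be added to /etc/sysctl.conf (a sketch; the key exists only once the conntrack module is loaded) and reloaded with sysctl -p:

```
# /etc/sysctl.conf
net.netfilter.nf_conntrack_max = 10000
```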

7. To check how many sessions are currently in use, read the counter (note: no -w flag, since we are reading a value rather than writing one):

/sbin/sysctl net.netfilter.nf_conntrack_count

8. However, you need to make sure the conntrack module has been loaded into the kernel. Load it with the following command:

modprobe ip_conntrack

The aim was to find a way to fend off DDoS attacks on xmlrpc.php, together with tuning the conntrack parameter in sysctl.conf.


Installation and Basics of Tomcat and Jenkins

Jenkins is a free and open source web application for continuous build, integration, deployment and testing over a web server. Jenkins uses lots of shell commands and offers a shell interface over the web. On the other hand, we have Apache Tomcat, an open source web server and servlet container developed by the Apache Software Foundation. Apache Tomcat is an open source, Java-driven web application server. It provides support for Java web applications such as JSP (JavaServer Pages) documents and WAR (Web Application Archive) files. It also has a self-contained HTTP server and can be configured by editing XML files. To run Tomcat, you will need Java. Some of the components of Tomcat are Catalina (the servlet container), Coyote (the HTTP connector) and Jasper (the JSP engine). It is to be noted that different Tomcat versions have different implementations.

Some of the Tomcat components

Catalina is Tomcat’s servlet container. It implements Sun Microsystems’ specifications for servlets and JavaServer Pages (JSP). In Tomcat, Realm elements represent a database of usernames, passwords and roles assigned to those users (similar to Unix groups). Different implementations of Realm allow Catalina to be integrated into environments where such authentication information has already been created and is being maintained. That information is used to implement Container Managed Security.

Coyote is Tomcat’s HTTP connector component; it supports the HTTP/1.1 protocol for the web server or application container. Coyote listens for incoming connections on a specific TCP port and forwards each request to the Tomcat engine, which processes the request and sends a response back to the requesting client. Coyote itself only handles the HTTP transport; execution of servlets and JSPs is left to Catalina and Jasper.

Jasper is Tomcat’s JSP engine. Jasper parses JSP files and compiles them into Java servlet code, which can then be handled by Catalina. At runtime, Jasper detects changes to JSP files and recompiles them.

Apache Tomcat layers

Screenshot from 2016-06-05 13-15-05

Some concept of Jenkins CI

Jenkins is an open source tool. It is a web application and can be run using any web/application server. Before getting into the details of Jenkins, it is important to understand the concepts of Continuous Integration, Continuous Deployment and Continuous Delivery.

Continuous Integration – a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including tests) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly fewer integration problems and allows a team to develop cohesive software more rapidly.

Continuous Deployment – the practice of continuously putting new features onto live systems so that they can be used by other people (internal or external). Normally this is done in an automated way, building on the continuous integration part.

Continuous Delivery – techniques such as automated testing, continuous integration and continuous deployment allow software to be developed to a high standard, easily packaged and deployed to test environments, resulting in rapid, reliable and repeatable delivery of enhancements and bug fixes to customers with minimal overhead.

In brief, Jenkins CI (Continuous Integration) is the leading open source continuous integration server built with Java that provides over 450 plugins to support building and testing virtually any project. Let’s now get into the installation process.

Installing Java, Tomcat and Jenkins

1. You will basically need the Java Development Kit. At the time I am writing this article, jdk-8u92 is the latest release. This URL should help you find the latest one: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. Since I am on CentOS 6 32-bit, I decided to install the RPM directly instead of compiling, using this command:

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-i586.rpm"

2. Once the download is complete, install it using the command rpm -i jdk-8u92-linux-i586.rpm. You should see something similar to this:

Screenshot from 2016-05-30 21-38-36

3. By now you should be able to find the version information using javac -version for the compiler, or simply java -version:

Screenshot from 2016-05-30 21-40-57

Screenshot from 2016-05-30 21-50-28

4. By default the binaries are available in /usr/bin and Java is installed under /usr/java.

Screenshot from 2016-05-30 21-47-30

5. Before installing Tomcat, we will create a group called tomcat and a dedicated user (tomcat) confined to a specific directory:

sudo groupadd tomcat
sudo useradd -M -s /bin/nologin -g tomcat -d /opt/tomcat tomcat

6. I created a directory at /opt/tomcat

mkdir /opt/tomcat

7. Download the Tomcat tarball:

cd /tmp && wget http://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.0.M6/bin/apache-tomcat-9.0.0.M6.tar.gz

8. Extract it to the /opt/tomcat directory.

tar xf apache-tomcat-9.0.0.M6.tar.gz -C /opt/tomcat/ --strip-components=1
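The --strip-components=1 flag is what drops the leading apache-tomcat-9.0.0.M6/ directory so the files land directly in /opt/tomcat. Here is a throwaway demonstration with a toy archive in temporary directories (nothing touches /opt):

```shell
#!/bin/sh
# Build a toy tarball whose single top-level directory mimics the
# apache-tomcat-9.0.0.M6 layout, then extract it with the same flag.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/apache-tomcat-9.0.0.M6/bin"
echo 'echo started' > "$src/apache-tomcat-9.0.0.M6/bin/startup.sh"
tar cf "$src/demo.tar" -C "$src" apache-tomcat-9.0.0.M6

tar xf "$src/demo.tar" -C "$dst" --strip-components=1
ls "$dst"   # shows only "bin": the wrapper directory was stripped
```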

9. The following files should be present in /opt/tomcat

Screenshot from 2016-05-30 22-04-52

10. Now you can easily run Tomcat in the background using the command ./bin/startup.sh&

Screenshot from 2016-05-30 22-07-31

11. A netstat -ntpl will show Java listening on ports 8009 and 8080:

Screenshot from 2016-05-30 22-09-25

12. A simple curl should give you an HTTP 200 response:

Screenshot from 2016-05-30 22-12-20

13. I am running Tomcat on a VirtualBox machine with IP 192.168.1.10. The Tomcat page should also appear on port 8080:

Screenshot from 2016-05-30 22-11-39

14. Now that Tomcat is running, let’s deploy Jenkins on top of it. Jenkins ships as a ‘war’ file which you can download from the official website jenkins.io. In the webapps folder of your Tomcat installation, fire this command:

wget http://mirrors.jenkins-ci.org/war-stable/latest/jenkins.war

15. You can also clear or back up the webapps folder and deploy only jenkins.war; I have renamed mine ROOT.war. Then shut down the Tomcat server and start it anew. A folder called ROOT will be created where the Jenkins files have been deployed.

Screenshot from 2016-06-04 15-47-02

16. On port 8080, you can now access the Jenkins page and start playing around with it.

Screenshot from 2016-06-04 15-49-18

This post was dedicated to the basic installation. In a later post, I will go into some details of Jenkins and Java application analysis. Have fun 🙂

NOTE: I have updated this article after a lively conversation with Mr. Gaurav Verma on a Facebook Linux group. Thanks to him, several errors were corrected:

  • The diagram was outdated, reflecting an old mechanism I still had in mind; it has now been fixed.
  • Some years back, Tomcat served the Java resources while Apache served only the static resources; this is no longer the case.
  • Another issue raised is that by default Tomcat binds to 0.0.0.0 [any address], so all IP addresses of the machine are covered and connecting via localhost is not mandatory.

DevConMru – Backup in the cloud for the Paranoid by cyberstorm.mu

At Cyberstorm Mauritius we work on several projects and code for fun. One of the interesting projects we have looked at is an application called Tarsnap, which is used to perform secure backups in the cloud. Recently, I (@TheTunnelix) and Codarren (@Devildron) sent code to Tarsnap, and it was accepted; it’s really cool when your code is approved and used worldwide by thousands of companies. Today I had the privilege of speaking on Tarsnap at DevConMru 2016, held at the Voila hotel, Bagatelle. On arriving, I was impressed by the number of people already waiting inside the conference room, curious about Tarsnap. Some were entrepreneurs while others were students; I should say around 30 people attended. Since it was a Sunday at 11:30 am, the team did not hesitate to bring some beer for the little crowd while I was busy setting up my laptop for the presentation.

As usual, I like to get the attention of my audience before the presentation. My first slide showed the logo of Tarsnap upside down.

Screenshot from 2016-05-22 19-05-41

Everyone was turning their heads and making the effort to read the content. And there we go: I could see they were all ready and curious about it.

Check out the slides here. Please allow a few moments; they take a while to load…

The basics of Tarsnap were explained. Tarsnap takes streams of archive data and splits them into variable-length blocks. Those blocks are compared and any duplicate blocks are removed; deduplication happens on the client before anything is uploaded to the Tarsnap server. Tarsnap does not create temporary files but instead keeps a cache file on the client describing the blocks already backed up to the Tarsnap server. After deduplication, the data is compressed, encrypted, signed and sent to the Tarsnap server. I also explained that the archives are stored on Amazon S3, with EC2 servers handling the service. Another interesting point raised was that Tarsnap uses smart rsync-like, block-oriented snapshot operations that upload only changed data, which minimizes transmission costs. You do not need to trust any vendor’s cryptographic claims: you have full access to the source code, which uses open-source libraries and industry-vetted primitives such as RSA, AES and SHA.
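The deduplication idea can be illustrated with a toy checksum comparison (Tarsnap actually splits data into variable-length blocks and deduplicates client-side before upload; whole-file md5sums are used here purely for illustration):

```shell
#!/bin/sh
# Two "blocks" with identical content produce the same digest,
# so a deduplicating store keeps one copy and two references.
tmp=$(mktemp -d)
printf 'same content' > "$tmp/block_a"
printf 'same content' > "$tmp/block_b"

sum_a=$(md5sum < "$tmp/block_a" | awk '{print $1}')
sum_b=$(md5sum < "$tmp/block_b" | awk '{print $1}')

[ "$sum_a" = "$sum_b" ] && echo "duplicate block: store once, reference twice"
```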

Moving on to Tarsnap and bandwidth, emphasis was placed on how Tarsnap synchronizes blocks of data using a very intelligent algorithm. Nowadays there are companies that still use tapes for backups; imagine having so many tapes and, when restoration time arrives, the tremendous time it would take. Tarsnap compresses, encrypts and cryptographically signs every byte you send to it; no knowledge of cryptographic protocols is required. At this point I asked for volunteers who might want to look at the Tarsnap code, and three persons raised their hands. The importance of the key file was also raised, as some companies secure their private key in a safe. Tarsnap supports the division of responsibilities, where a particular key can, for example, only be used to create archives and not to delete them.

An analogy between Google Drive and Tarsnap was given, and many understood the advantages of Tarsnap over Google Drive. The concept of deduplication was explained using examples. For the network enthusiasts, I laid emphasis on port 9279, which should not be blocked on the firewall since Tarsnap communicates on that port. Coming to confidentiality, it was made clear to the audience how strongly the data is secured: if someone happens to lose the key, there is no way of getting the data back.

Tarsnap is not an open source product; however, its client code is open to learn from, break and study. I laid emphasis on the reusable open source components that come with Tarsnap, for example the scrypt KDF (key derivation function). A KDF derives one or more secret keys from a secret value such as a master key, a password or a passphrase, using a pseudo-random function. The Kivaloo data store was briefly explained: it is a collection of utilities which together form a data store associating keys of up to 255 bytes with values of up to 255 bytes. Writes are accepted until data has been synced, and if operation A completed before B, B will see the results of A. Spiped, the secure pipe daemon, is a utility for creating symmetrically encrypted and authenticated pipes between socket addresses, so that one may connect to one address and be transparently tunnelled to another.

I also explained the pricing mechanism, which was perceived as rather cheap given its security and data deduplication mechanisms. Tarsnap pricing works like a prepaid, utility-metered model: a deposit of $5 is needed, and you pay exactly for what you consume. Many were amazed when I told them that the balance is tracked to 18 decimal places.

Other interesting features were covered, such as regular expression support and the dry-run mode of Tarsnap. The tar command was compared with Tarsnap, and various commands, hints and tricks were explained.

In the end, I consider it really important to credit Colin, the author of Tarsnap; I have also been strongly inspired by Michael Lucas’s work on Tarsnap. Indeed, another great achievement for Cyberstorm Mauritius at DevConMru 2016.