Category: Linux System

Chef workstation and a basic cookbook

One of the main jobs of a system administrator is to maintain systems: repeating ourselves, which is kind of boring, and digging into our memory of configurations we previously set up on a machine. No wonder configuration consistency needs to be checked manually across servers, and there can be thousands of machines. Chef is a tool to get rid of these situations. It is a configuration management tool written in Ruby and Erlang for IT professionals. Compared to Puppet, which has only the Workstation and the Server, Chef has three components: the Chef Server, the Chef Workstation and the Chef Node.

Photo credits: Linode.com

The cookbooks are written on the Workstation and then uploaded to the Chef Server (service), which executes them on the nodes. Chef nodes can be physical, virtual or in the cloud. Normally, Chef nodes do not communicate directly with the workstation. Let's not focus on the installation here.

Let’s first get into the workstation.

1. On the workstation, download and install the Chef client from the client download page. In my case, I am on a CentOS 7 virtual machine.

[[email protected] ~]# wget https://packages.chef.io/stable/el/7/chef-12.12.15-1.el7.x86_64.rpm

2. After installation, you should notice four utilities already available: chef-apply, chef-client, chef-shell and chef-solo.

3. Now, we are going to create a cookbook. Since Chef uses a DSL (Domain Specific Language) based on Ruby, the file created should end with the extension .rb. Here is an example called file.rb. The first line declares a file resource, which means a file is being created; the file resource will manage that file on the machine. Its content will be set to 'Hello Tunnelix'.

file 'file.txt' do
  content 'Hello Tunnelix'
end

4. The tool chef-apply can be used to run it as follows:


5. You will also notice that file.txt has been created in the current directory, since no path was specified.

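To control where the file lands, the path can be given explicitly in the resource. Here is a minimal sketch of such a recipe (the path /tmp/file.txt and the mode are illustrative assumptions, not from the original post):

```ruby
# Hypothetical recipe: manage the file at an explicit path instead of
# the current working directory. Run it with: chef-apply file.rb
file '/tmp/file.txt' do
  content 'Hello Tunnelix'
  mode '0644'        # permissions applied to the managed file
  action :create     # the default action for the file resource
end
```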

Tips:

  • If the content of file.rb (refer to point 3) has not been modified and you fire a chef-apply again, you will notice a prompt saying it is already 'up to date', which means Chef reduces disk I/O as well as bandwidth.
  • A string must be enclosed in double quotes when using variables; single quotes do not interpolate. You also cannot nest a single quote inside another single-quoted string. It won't work!
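The quoting tip can be illustrated with plain Ruby, which the Chef DSL is built on; a quick sketch:

```ruby
# Double quotes interpolate variables; single quotes print them literally.
name = 'Tunnelix'

puts "Hello #{name}"   # interpolation works inside double quotes
puts 'Hello #{name}'   # printed as-is: single quotes do not interpolate
```

The same rule applies to a content attribute in a recipe: use double quotes whenever the string embeds a variable.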

Chef always checks and refers to the resources and attributes in the cookbook to execute an order, i.e. to cook the food. The point is that Chef's DSL focuses on describing what the modifications need to be, which allows servers to be kept in a consistent state.


HTTPoxy – Is your nginx affected?

httpoxy is a set of vulnerabilities that affect application code running in CGI, or CGI-like environments. It comes down to a simple namespace conflict:

  • RFC 3875 (CGI) puts the HTTP Proxy header from a request into the environment variables as HTTP_PROXY
  • HTTP_PROXY is a popular environment variable used to configure an outgoing proxy

This leads to a remotely exploitable vulnerability. "If you're running PHP or CGI, you should block the Proxy header now." – httpoxy.org

httpoxy is a vulnerability for server-side web applications. If you’re not deploying code, you don’t need to worry. A number of CVEs have been assigned, covering specific languages and CGI implementations:

The full list of CVEs per language and CGI implementation is available on httpoxy.org.

After receiving the updates from cyberstorm.mu, I immediately started some heavy research. For Nginx users, the attack can be defeated by making modifications in the fastcgi_params file.

1. Create a file called test.php containing the following code in the web root.

[[email protected]]# cat test.php

<?php
echo getenv('HTTP_PROXY');
?>

2. Launch a curl request as follows. If the output is "AFFECTED", the Proxy header was copied into the environment and the server is vulnerable:

[[email protected]]# curl -H 'Proxy: AFFECTED' http://127.0.0.1/test.php
 AFFECTED

3. Add the following parameter to the /etc/nginx/fastcgi_params file:

 fastcgi_param  HTTP_PROXY  "";
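If Nginx is also proxying requests or talking to a uwsgi backend, the same header should be stripped there too. As a sketch based on the Nginx mitigation blog listed in the references (verify the directives against your own configuration), the equivalent lines would be:

```
# Strip the Proxy header for proxied and uwsgi backends as well
proxy_set_header Proxy "";
uwsgi_param      HTTP_PROXY "";
```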

4. Stop and start the Nginx service.

5. Conduct the test again. You should no longer be vulnerable: the curl command now returns an empty value.

References:

  • https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
  • https://www.kb.cert.org/vuls/id/797896
  • https://access.redhat.com/solutions/2442861
  • https://httpoxy.org/#history
  • http://www.theregister.co.uk/2016/07/18/httpoxy_hole/
  • https://blogs.akamai.com/2016/07/akamai-mitigates-httpoxy-vulnerability.html
  • http://www.zdnet.com/article/15-year-old-httpoxy-flaw-causes-developer-patch-scramble/
  • https://access.redhat.com/solutions/2435541

 


Messing around PL/Tcl on PostgreSQL

Some days back, I was having an issue while activating the PL/Tcl extension/language on PostgreSQL. Luckily, it was solved with the help of Mauricio Cortes, a guy from the Facebook group PostgreSQL Server. Let's first see what the PL/Tcl package is. "PL/Tcl is a loadable procedural language for the PostgreSQL database system that enables the Tcl language to be used to write functions and trigger procedures." – PostgreSQL documentation. This allows us to load the module and unload it if it is not being used. You can also activate it for a particular user.


Initially, after installing the package postgresql95-tcl as per the documentation, I fired a "create extension", but it returned the error "ERROR: could not open extension control file "/usr/pgsql-9.5/share/extension/pltcl.control": No such file or directory". I also noticed that the file pltcl.control was not present. He recommended that I try a "create language" instead of a "create extension", and it worked!

However silly it may sound, the official documentation mentions that create language has been deprecated and that create extension should now be used. I immediately understood that I was using a bad repository. I would advise using the official RPM packages from the official website yum.postgresql.org.

Let's see how to activate it:

1. Simply fire a yum install postgresql95-tcl. Make sure that you are using the right repo (https://www.postgresql.org/download/linux/redhat/) to be able to use create extension.

2. After logging in, you can verify it with the command \dx to get a list of installed extensions, or \dx+ for more details about each object.


3. Now, here is the catch: if you are using a PostgreSQL version older than 9.5, PL/Tcl has to be activated with a create language pltcl; whereas on version 9.5 or the 9.6 beta you can use a create extension pltcl;.
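Once the language is activated, a quick way to confirm it works is to create and call a tiny PL/Tcl function. A hedged sketch (the function name tcl_add is my own; in PL/Tcl the arguments arrive as the Tcl variables $1, $2):

```sql
-- On 9.5+: CREATE EXTENSION pltcl;  on older versions: CREATE LANGUAGE pltcl;
CREATE EXTENSION pltcl;

-- A trivial PL/Tcl function adding two integers
CREATE FUNCTION tcl_add(int, int) RETURNS int AS $$
    return [expr $1 + $2]
$$ LANGUAGE pltcl;

SELECT tcl_add(2, 3);  -- returns 5
```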

After some more research, I noticed the advantage of this feature: when a create extension is performed for a plugin, a dump of the database will also include the extension, whereas on older versions the Tcl language would not be included in the dump.



Operation Prison Break by cyberstorm.mu – Sandboxing and Firejail

This is yet another successful hackathon carried out under the umbrella of cyberstorm.mu. Branded with the theme "Operation PB – Prison Break", the members of cyberstorm.mu showed their skills in security innovation. Rahul, our proud newest member, worked on sandboxing the strings utility.


Photo credits: skycure.com

Our task was to find vulnerabilities in a Linux application and create a Firejail environment for it. Firejail is a SUID program that reduces the risk of security breaches by restricting the running environment of untrusted applications using Linux namespaces and seccomp-bpf. It allows a process and all its descendants to have their own private view of globally shared kernel resources, such as the network stack, process table and mount table. Firejail can sandbox any type of process: servers, graphical applications, and even user login sessions. The software includes security profiles for a large number of Linux programs: Mozilla Firefox, Chromium, VLC, Transmission, etc. To start the sandbox, prefix your command with "firejail".

I decided to choose cpio, a tool to copy files to and from archives, which was recently found to be vulnerable to a DoS attack. Cvedetails.com describes the CVE-2016-2037 vulnerability, where the cpio_safer_name_suffix function in util.c in cpio 2.11 allows remote attackers to cause a denial of service (out-of-bounds write) via a crafted cpio file. In brief, when a user decompresses such a crafted file, the attacker's payload triggers the out-of-bounds write. The crafted file was obtained with QuickFuzz.

To sandbox the cpio tool when decompressing files, the Firejail application was used to isolate the program by restricting the syscalls it may make. Here is the Firejail profile:


include /usr/local/etc/firejail/server.profile
include /usr/local/etc/firejail/disable-common.inc
include /usr/local/etc/firejail/disable-programs.inc
include /usr/local/etc/firejail/disable-passwdmgr.inc
caps.drop all
seccomp write,read,open,close,execve,access,brk,umask,munmap,fchmod,mprotect,mmap2,lstat64,fstat64,geteuid32,fchown32,set_thread_area,prctl,setresuid32,getgid32,setgroups32,setgid32,getuid32,setuid32,fcntl64,clone,rt_sigaction,nanosleep
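With the profile saved to a file (say cpio.profile, a name I am assuming here), cpio can then be launched inside the sandbox; any syscall outside the seccomp whitelist would kill the process instead of compromising the machine:

```
# Run cpio inside the sandbox when extracting an untrusted archive
firejail --profile=cpio.profile cpio -id < untrusted.cpio
```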

Here is what the participants in the hackathon are saying:

To prevent further vulnerabilities such as shown below from being used to target users, this firejail profile has been made. https://www.cvedetails.com/vulnerability-list/vendor_id-72/product_id-1670/GNU-Gzip.html – Yash


Decompressing .xz file within a sandboxing environment is just fascinating – Akhil

Many shell users, and certainly most of the people working in computer forensics or other fields of information security, have a habit of running /usr/bin/strings on binary files originating from the Internet. Their understanding is that the tool simply scans the file for runs of printable characters and dumps them to stdout – something that is very unlikely to put you at any risk. – Rahul



Linux Performance & Analysis – Strace and syscall

A quick look at the manual of strace shows that the strace command is used to trace system calls and signals. The description part stipulates that "In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option."


Photo credits: Linuxintro.org

However, there is much more to discover than that. strace uses ptrace, which observes and controls the execution of another process and can examine its memory and registers. In some ways strace can be dangerous, because signal injection and suppression may occur. The debugging mechanism is also costly, as it pauses the target process on every syscall so the tracer can read its state – ptrace(PTRACE_restart, pid, 0, sig).

Proof of concept: strace can be dangerous

From the example below, we can see that the copy takes much longer when run under strace.

[[email protected] ~]# dd if=/dev/zero of=/dev/null bs=1 count=600k
614400+0 records in
614400+0 records out
614400 bytes (614 kB) copied, 0.38371 s, 1.6 MB/s

[[email protected] ~]# strace -eaccept dd if=/dev/zero of=/dev/null bs=1 count=600k
614400+0 records in
614400+0 records out
614400 bytes (614 kB) copied, 16.9985 s, 36.1 kB/s
+++ exited with 0 +++

The 12 main syscalls

There are 12 main syscalls worth learning in order to grasp the output of strace:

  • read: read bytes from a file descriptor (file and socket)
  • write: write bytes to a file descriptor (file and socket)
  • open: open a file (returns a descriptor)
  • close: close the file descriptor
  • fork: create a new process (current process is forked)
  • exec: execute a new program
  • connect: connect to a network host
  • accept: accept a network connection
  • stat: read file statistics
  • ioctl: set IO properties and other functions
  • mmap: map a file to the process memory address space
  • brk: extend the heap pointer
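To see a few of these syscalls from a program's point of view, here is a small Ruby sketch using IO.sysopen, syswrite and sysread, which map more or less directly onto the open(), write(), read() and close() calls strace would display (the /tmp path is illustrative):

```ruby
# Each call below corresponds to one of the syscalls listed above;
# running this script under "strace ruby demo.rb" would show them.
path = '/tmp/strace_demo.txt'

fd  = IO.sysopen(path, 'w')   # open("/tmp/strace_demo.txt", O_WRONLY|...) = fd
out = IO.new(fd, 'w')
out.syswrite("hello\n")       # write(fd, "hello\n", 6) = 6
out.close                     # close(fd) = 0

inp = IO.new(IO.sysopen(path, 'r'), 'r')
puts inp.sysread(6)           # read(fd, buf, 6) = 6, prints "hello"
inp.close
```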

Strace output analysis

I will now take a strace example: tracing ls -l /etc, with the output saved to a file in /tmp. You can check out the strace output at this link: http://pastebin.com/zziCAwDz. Let's analyse it.

We can notice the following at the beginning:

  1. execve("/bin/ls", ["ls", "-l", "/etc"], [/* 22 vars */]) = 0
  2. brk(0)                                  = 0x8ca8000
  3. mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7791000
  4. access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
  5. open("/etc/ld.so.cache", O_RDONLY)      = 3
  6. fstat64(3, {st_mode=S_IFREG|0644, st_size=25200, ...}) = 0
  7. mmap2(NULL, 25200, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb778a000

The execve() call runs /bin/ls; then the libraries are loaded, several of them from the /lib directory. After the file descriptor is closed with the close() function, you will notice at line 10 of the full output an open() call on /etc that returns 3, a file descriptor for later use with other syscalls.

You will notice that the content of /etc is being read, then for each file inside /etc the lstat() and stat() variants are called. Two extended-attribute variants, lgetxattr() and getxattr(), are also called, and finally ls -l starts printing out the results. But hey! Did you notice that ls checks /etc/localtime on every line of output? stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=239, ...}) = 0 is called each time!

Some strace commands

# Slow the target command and print details for each syscall: strace command


# Slow the target PID and print details for each syscall: strace -p PID

# Slow the target PID and any newly created child process, printing syscall details: strace -fp PID

# Slow the target PID and record syscalls, printing a summary: strace -cp PID

# Slow the target PID and trace open() syscalls only: strace -eopen -p PID

# Slow the target PID and trace open() and stat() syscalls only: strace -eopen,stat -p PID

# Slow the target PID and trace connect() and accept() syscalls only: strace -econnect,accept -p PID

# Slow the target command and see what other programs it launches (slow them too!): strace -qfeexecve command

# Slow the target PID and print time-since-epoch with (distorted) microsecond resolution: strace -ttt -p PID

# Slow the target PID and print syscall durations with (distorted) microsecond resolution: strace -T -p PID

What we can understand is that if /etc/localtime is checked on every line of output, it consumes extra resources and interrupts the system heavily. So although strace is built on rather simple syscalls, it can cause heavy performance overhead.

I have created a new tag called Linux Performance. This article does not give a complete overview of strace in itself; more articles on Linux performance, analysis and tuning are coming later.