Category: Linux Application

Installing and configuring OpenVAS on OpenSUSE Leap

“openSUSE Leap is a brand new way of building openSUSE and is a new type of hybrid Linux distribution. Leap uses source from SUSE Linux Enterprise (SLE), which gives Leap a level of stability unmatched by other Linux distributions, and combines that with community developments to give users, developers and sysadmins the best Linux experience available. Contributor and enterprise efforts for Leap bridge a gap between matured packages and newer packages found in openSUSE’s other distribution Tumbleweed.” – openSUSE

I would welcome all openSUSE fans, system and security administrators and students to try out OpenVAS on an openSUSE machine, where it works pretty fine. OpenVAS is a framework of several services and tools offering a comprehensive and powerful vulnerability scanning and management solution.

Photo credits: OpenSUSE & OpenVAS

After you have installed openSUSE Leap on your machine, you will need to open YaST and install OpenVAS. Let’s install OpenVAS on the openSUSE machine.

1. Open the YaST Control Center and, under the Software tab, click on Software Management.

Screenshot from 2016-03-07 10-49-32

2. The YaST2 software management tool will open. Simply type the keyword openvas, which will prompt you to install it together with all the libraries. You will also need to install greenbone-security-assistant, which is a nice tool to use with OpenVAS.

Screenshot from 2016-03-07 10-54-12

3. Once you have installed OpenVAS and greenbone-security-assistant, the fun begins. Open a terminal and log in as the root user; you will notice that several tools have been installed by OpenVAS.

Screenshot from 2016-03-07 11-04-06

4. Launch openvas-setup, which will download a bunch of files and libraries.

5. The next step is to create a user, which can be done with the command openvas-adduser

6. Create a certificate with openvas-mkcert

7. Run openvasmd --rebuild, which will rebuild the OpenVAS manager database with the new configuration

8. Now set the listening address and port of the manager with the command openvasmd -p 9390 -a 127.0.0.1

9. After that, start the administrative service on a local address with the command openvasad -a 127.0.0.1 -p 9393

10. Start the HTTP interface for Greenbone with the command gsad --http-only --listen=127.0.0.1 -p 9392

11. You can now browse to http://127.0.0.1:9392 to access the Greenbone Security Assistant.
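For quick reference, steps 4 to 10 can be run as root in one sequence. This is a sketch assuming the stock OpenVAS port assignments (manager on 9390, administrator on 9393, GSA on 9392); adjust to your setup:

```shell
openvas-setup                       # download feeds and initialise the scanner
openvas-adduser                     # create a scan user
openvas-mkcert                      # generate the server certificate
openvasmd --rebuild                 # rebuild the manager database
openvasmd -p 9390 -a 127.0.0.1      # start the manager on a local address
openvasad -a 127.0.0.1 -p 9393      # start the administrator on a local address
gsad --http-only --listen=127.0.0.1 -p 9392   # Greenbone Security Assistant over HTTP
```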

OpenVAS will give you a summary of the ports and information about the possible vulnerabilities it has discovered. Please be aware that you will often get false positives, where there is no vulnerability or the vulnerability is not accessible to anybody. However, it’s cool to find out what vulnerabilities OpenVAS has found on your system for future security enhancements.


Configure your LVM via Ansible

Some days back, I gave some explanations about LVM, such as the creation of LVM partitions and a detailed analogy of the LVM structure, as well as tips for using pvmove. We can also automate such tasks using the power of Ansible. Cool, isn’t it?


So, I have my two hosts, Ansible1 and Ansible2. Ansible1 is the controller and has Ansible installed, and Ansible2 is the host on which the disk will be added to the LVM.

1. Here is the disk status of Ansible2, where a disk /dev/sdc has been added

Screenshot from 2016-03-08 11-05-29

2. I have now added a 1GB disk from the VirtualBox settings. You can refer to the past article on LVM for how to add the disk. As we can see in the screenshot below, the disk sdc with a size of 1GB has been added to the machine Ansible2, and I have partitioned it as LVM
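If you prefer the command line to the screenshots, the partitioning step can be sketched as follows (the device name /dev/sdc is assumed from above; run on Ansible2 as root):

```shell
# Create a partition table, one partition spanning the disk, and flag it as LVM
parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary 0% 100%
parted /dev/sdc set 1 lvm on
lsblk /dev/sdc        # verify that sdc1 now exists
```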

Screenshot from 2016-03-08 11-22-17

3. Let’s now get onto the controller machine – Ansible1 – and prepare our playbook. You can view it on my Git account here. The aim is to take 500MB from /dev/sdc1 to create a new VG called vgdata containing an LV called lvdisk.
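The playbook itself is short. This is a minimal sketch of what it could look like (the actual playbook is the one in the Git repo linked above), using Ansible’s lvg and lvol modules:

```yaml
---
- hosts: ansible2
  user: root
  tasks:
    - name: create volume group vgdata on /dev/sdc1
      lvg: vg=vgdata pvs=/dev/sdc1
    - name: create a 500MB logical volume lvdisk in vgdata
      lvol: vg=vgdata lv=lvdisk size=500m
```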

4. Here is the output

Screenshot from 2016-03-08 11-36-00

Articles on LVM

Articles on Ansible

 


MariaDB and improved security features presentation

If you have been following the MSCC – Mauritius Software Craftsmanship Community – these past weeks, you would have noticed a meetup on MariaDB and its improved security features, presented by Joffrey Michaie from OceanDBA and Codarren Velvindron from cyberstorm Mauritius. Thanks to Jochen Kirstätter (joki), founder of the MSCC, who proudly sponsored the event.

Joffrey at the MariaDB meetup

Some craftsmen at the meetup

Codarren explaining Glibc

Logan and me from hackers Mauritius

Codarren and me from hackers Mauritius

Can you spot where I am?

Jochen, founder of the MSCC

The first part of the presentation started with Joffrey, who gave a brief introduction to MariaDB and the importance of its security features. He also laid heavy emphasis on the backup concepts that DBAs need to master. Most interesting is that two additional services seem to be coming from OceanDBA – Backup as a Service and DB as a Service.

Other points raised concerning the importance of backups were to start with a clustering solution and to perform analysis and tests on pre-production or staging servers. Database backups also need to be tested, as there can be corrupted zip files. Another interesting issue raised was the table-locking mechanism during backups. Other backup strategies and concepts were also explained, such as:

  • Cold backups – The downtime issue was raised, which to me does not look practicable unless there is really a specific reason
  • Hot backups – Commonly carried out with the mysqldump utility.
  • Logical backups – Data backed up as tables, views, indexes etc., mostly as human-readable statements. Logical backups can be performed at the database and table level.
  • A tool that was completely new to me is mydumper, which can be used to back up terabytes of data. Some interesting arguments raised were --lock-all-tables, --skip-lock-tables and --master-data
  • Binary backups – A copy of the actual database structure, which requires file-system or disk-subsystem access. It is one of the fastest methods to back up and very compatible with mixed MyISAM and InnoDB tables.
  • HA (high availability) as the backup – Usually used in clusters and with Galera replication. However, to ensure that there is no data loss, SAN replication was also recommended for data centres.
  • Time-delayed replication – This was explained with an example, say a one-hour-delayed replica, based on the risk management that has been carried out.
  • The Percona tools, which can be used alongside MariaDB for backup analysis.
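To make the hot/logical backup idea above concrete, here is a minimal mysqldump sketch; the user name and output file are made up, and --master-data=2 assumes binary logging is enabled on the server:

```shell
# Consistent hot logical backup of all databases (InnoDB-friendly):
# --single-transaction avoids long table locks, while --master-data=2
# records the binlog position as a comment for point-in-time recovery.
mysqldump --single-transaction --master-data=2 --all-databases \
          -u backup_user -p > full_backup.sql
```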

In the second part of the presentation, Codarren laid emphasis on the security aspects of MariaDB in the context of whether to use Glibc or MUSL. Glibc libraries are used on mail servers, SQL servers, web forms etc. Going back to the Glibc GHOST vulnerability, an explanation was given by taking a web-based form application, where a particular field, when filled with malicious input, can be used to make calls into the Glibc library with the intention of returning a specific value. To remediate that situation, it was patched in the getaddrinfo() function, and this patch led to another vulnerability. Today, we can deduce that even though Glibc has gone through various patches, more bugs keep being discovered.

A solution was thus proposed: adopting the MUSL library instead. We can see that MUSL has a clean-code policy compared to Glibc. Codarren laid emphasis on the Alpine Linux operating system, which is natively based on MUSL. The size of Alpine compared to CentOS, Ubuntu or Debian is much smaller. Other issues raised concerned grsecurity, which, though not widely deployed, is a very important aspect to take into consideration. MUSL looks very promising compared to Glibc. Another analogy was taken from the Docker world, where companies are adopting Alpine Linux in production environments to escape Glibc.
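The size difference is easy to check with Docker itself; the image names below are illustrative and the exact sizes vary by release:

```shell
# Pull a MUSL-based and a Glibc-based base image and compare their sizes
docker pull alpine
docker pull centos
docker images | grep -E '^(alpine|centos)'
```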


The third part of the presentation was continued by Joffrey on the Galera clustering solution. An explanation was given, using a schema, of how replication is done at the cluster level. Several particular database setups were taken as examples, such as a node in a cluster that is slow in terms of network or infrastructure, where the other servers have to wait for the request to reach its destination. Other points mentioned were:

  • Split brain in Galera, where human intervention is needed, especially when a number of nodes hold different data from the other nodes within the same cluster.
  • The importance of having applications built with retry logic.
  • Galera conflict diagnostics, for example cert.log, which is used to log and monitor conflicting transactions.
  • Features such as auto-commit mode.
  • Galera load balancing using HAProxy – custom monitoring of the cluster size.
  • MariaDB MaxScale, which operates at layer 7 with persistent connections.
  • The maxadmin command-line utility to list the servers in the cluster

Jochen also laid emphasis on future meetups and proposed that members find other suitable venues to carry out more interesting meetups in the days to come. No one could deny having learned something. Indeed, the meetup was really interesting and fruitful. Some stickers with the MariaDB logo were shared, which I have already pasted on the back of my laptop 🙂


Installing, Updating and Playing around with a Docker container

Docker is probably under heavy development these days. This article is dedicated to the basics of installing and updating a Docker instance, plus some tips to play around. In the future, I will get into detail about the Docker Engine, images, containers, volumes and networking in the context of Docker. I have reserved this post for the installation of Docker, getting updates from the official channel and performing some basic daemon configuration. So what is Docker? “Docker provides an integrated technology suite that enables developers and IT operations teams to build, ship, and run distributed applications anywhere.” – Docker


A nice way to experiment is to use a VirtualBox machine running CentOS. My physical machine is running Ubuntu.

CentOS 6 machine:

1. To install Docker, run:

yum install docker-io

2. Start the docker service

service docker start

3. Check the Docker version, whether a new version is available, and general info:

docker -v
docker version
docker info

Screenshot from 2016-02-26 23-49-21

4. You can also check the number of containers and images, plus storage and execution driver details, with the command:

docker info

Let’s now see how to update Docker. Before performing an upgrade, it’s important to back up your images. To get a new Docker version, you will need to add a Docker repo and launch an update. You can check the version again with the command docker -v. Just update your repository and launch an update. Check out the Docker docs at this link.

Docker needs root to perform major actions like the creation of namespaces and cgroups. Docker also uses /var/run/docker.sock, which is owned by root and belongs to the group docker. So normal users can be added to the docker group, keeping security control within that group.

Screenshot from 2016-02-27 01-31-14

5. Let’s try running an instance with the following command as the user “nitin”. The command simply means: docker run starts a new container; -it makes it interactive and assigns it a tty; the ubuntu image is used; and a bash process is run inside the container.

docker run -it ubuntu /bin/bash

6. If the user “nitin” is not in the group “docker”, the command will not run. To add the user, simply use the following command; here “nitin” is the user being added to the group “docker”.

gpasswd -a nitin docker

Now, when the command is launched, you will notice the download in progress.

Screenshot from 2016-02-27 01-45-13

Afterwards, you will find yourself inside the container itself. Cool, isn’t it? I am now inside an Ubuntu container on the CentOS virtual machine. The number 7fa21bcf66b5 is the short form of the container’s unique ID.

Screenshot from 2016-02-27 01-58-17

Type exit to get back to the virtual machine. More articles on Docker coming later.
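After exiting, the container still exists in a stopped state; assuming the container ID from the screenshot above, it can be revisited or cleaned up like this:

```shell
docker ps -a                     # list all containers, including stopped ones
docker start -ai 7fa21bcf66b5    # restart and reattach to the same container
docker rm 7fa21bcf66b5           # or remove it once you are done
```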

7. To search the Docker Hub for all container images related to CentOS:

docker search centos

TIPS:

  • On a CentOS 7 machine, a simple curl -fsSL https://get.docker.com/ | sh does the installation of the latest version
  • Always create a dedicated user for the Docker application, then add it to the docker group with usermod -aG docker docker-user
  • On CentOS, when firing docker info | grep Storage, you will notice that the default storage driver is ‘devicemapper’, compared to Ubuntu where it is AUFS by default
  • Docker needs root to work. You can see with ls -l /var/run/docker.sock that the socket is owned by default by user root and group docker. So normal users can be added to the docker group to allow them to run and break Docker without being root.

 


CVE-2015-7547 – Update Glibc & restart BIND with Ansible

You might be seeing a huge crowd of system administrators and DevOps engineers rushing to update their servers immediately due to the security flaw detected in Glibc. This security leak, described as a skeleton key, is identified under CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow. What is most sour to taste is that the Glibc library is used by the BIND application!

In brief, the CVE-2015-7547 vulnerability is one where an attacker can trigger multiple stack-based buffer overflows in the functions send_dg and send_vc of the Glibc library to execute malicious code or even cause a denial of service.

Screenshot from 2016-02-21 12:14:09

Red Hat put it this way: “A back of the envelope analysis shows that it should be possible to write correctly formed DNS responses with attacker controlled payloads that will penetrate a DNS cache hierarchy and therefore allow attackers to exploit machines behind such caches.” I have written a little Ansible playbook to update your Glibc package. Check it out on my Git account.

---
- hosts: ansible2
  user: root
  tasks:
    - name: update glibc
      yum: name=glibc* state=latest
    - name: restart named
      service: name=named state=restarted

Screenshot from 2016-02-21 11:30:52

Other articles on buffer overflows:

Other articles related to Ansible