
Installing the Networker Management Console (NMC) on CentOS 7

In the last article, we saw how to install Dell EMC Networker on CentOS 7, where we ran into some dependency issues. In this article, I will install the Networker Management Console on the same server. Before installing, we will look at the services running once the Networker services have been started; then I will compare them after the installation of the NMC.

Before proceeding to the installation, the package that I will install is:

lgtonmc (Networker Management Console) – Gives you the ability to access the Management Interface or Management console to manage backups.

1. The services running before the installation are shown in the screenshot below:

2. In the previous article, I downloaded all the packages. From that directory, I installed the NMC using the rpm command, followed by executing the script /opt/lgtonmc/bin/nmc_config. You will be prompted to answer a few questions during the installation. I chose the default answers, except for the creation of the user for the PostgreSQL database.
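In sketch form, those two steps look as follows; the exact package file name depends on the Networker version you downloaded:

```shell
# Install the NMC package from the directory with the extracted RPMs,
# then run the interactive configuration script.
rpm -ivh lgtonmc-*.x86_64.rpm
/opt/lgtonmc/bin/nmc_config
```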

3. Now, we can see a bunch of new services running, such as more Java processes and PostgreSQL.

4. Since we installed the NMC on the VM, we should be able to access the console from the same network on port 9000. My VM is configured with the IP address 192.168.100.19, and accessing it on port 9000 now shows the console.

5. As you can notice in the screenshot above, I don’t have a Java Runtime enabled on the Mac, so I had to install and enable it. Follow the instructions for the installation by clicking on “Browser, OS, & JRE Requirements”.

6. Once installed and activated, you should be able to access the console by clicking on “Click here to start Management Console”. A prompt for the username/password should then appear.

The default username is Administrator and the password is the one you set when installing Networker.

7. Follow the instructions to set up the database backup server, authentication server, etc., and at the end you should be able to reach the console.

Installing EMC Dell Networker 9 on CentOS 7

A few days ago, I attended a training on EMC Dell Networker 9 in Mauritius itself. Since not everything can be covered in a training, such as the installation of Networker on Linux machines, I decided to install it myself in my lab.

photo credits: dell.com

For those who are not familiar with it, Networker 9, formerly called Legato NetWorker, is an “enterprise-level data protection software product that unifies and automates backup to tape, disk-based, and flash-based storage media across physical and virtual environments for granular and disaster recovery.” To install it, I created a CentOS 7 minimal installation lab on VirtualBox, ran an update, and installed a few packages such as vim, tcpdump, net-tools, traceroute, epel-release, locate, atop, htop and wget. These are basic packages for my own use on the VM; they have nothing to do with the Networker installation.

To be able to download the necessary packages, it’s a prerequisite to register on the Dell EMC website first. Once authenticated, you can move on to the download section. Dell provides all the packages, for both Debian and RHEL, in a single tar.gz; even the Avamar packages are found there, so you will need to install only the necessary ones. Follow the instructions below after registering on the Dell website and downloading the packages from the highlighted links. Once the file has been downloaded and decompressed, you will notice several RPMs and DEBs inside. The ones needed for the Networker installation are as follows:

  • lgtoclnt (Networker client) – Provides you the ability to perform file system backup and recovery options.

  • lgtoxtdclnt (Networker Extended client) – Provides additional feature support for NetWorker clients, such as snapshot backup support, command line utility support including server reporting and administration, cloning and staging support, and so on.

  • lgtonode (Networker Storage Node) – Provides features for the storage node which will control storage devices such as tape drives, disk devices, autochangers, and silos.

  • lgtoserv (Networker Server) – Provides the Networker server itself.

  • lgtoauthc (Networker Authentication Service) – The authentication layer used for backups.

  • lgtoman (Networker Manual) – Provides the manual pages. It is not a prerequisite.

Whilst installing these packages, you will notice dependency problems. See the “Tips” section below for more information. I had to install the 32-bit glibc package, as some of the Networker packages depend on it.

Here is an idea of the error message you may get while performing the installation: libc.so.6 is needed by lgtoclnt-9.1.1.7-1.x86_64.

This can be confirmed with yum whatprovides libc.so.6, which shows that the library is provided by the glibc.i686 package.

1. At this point, to continue with the installation, I ran the following:

yum install glibc-2.17-260.el7.i686
rpm -ivh lgtoclnt*.rpm lgtoxtdclnt*.rpm lgtonode*.rpm lgtoserv*.rpm lgtoauth*.rpm lgtoman*.rpm

2. If you are installing the packages one by one, you will need to install lgtoauthc before installing lgtoserv. After the installation of lgtoauthc, it will prompt you to launch the following script:

/opt/nsr/authc-server/scripts/authc_configure.sh

3. It will prompt you to specify where you have installed the Java Runtime. At the time of writing this article, I am using Java Runtime 8 from the oracle.com website. Use the following syntax to download it with wget:

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "https://download.oracle.com/otn-pub/java/jdk/8u202-b08/1961070e4c9b4e26a04e7f5a083f551e/jre-8u202-linux-x64.rpm"

4. Once downloaded and installed, java -version should show you the runtime environment.

5. Now, you can launch the script /opt/nsr/authc-server/scripts/authc_configure.sh anew, and it will prompt you to enter the keystore and administrator passwords.

6. Once the installation is complete, you can start the networker daemon with /etc/init.d/networker and check the running processes.

Tips:

  • The problem is that the GLIBC_2.0 symbol is not provided by the x86-64 libc on CentOS, but it is provided by the 32-bit i686 package. There is no real dependency of the EMC NetWorker 9.1 packages on the 32-bit library; this is probably a false RPM dependency. So it is necessary to download the corresponding 32-bit packages from the CentOS repositories and install them.

  • If you have installed Java elsewhere, you will need to specify its path when executing the script /opt/nsr/authc-server/scripts/authc_configure.sh.

  • The installation logs are found at /opt/nsr/authc-server/logs/install.log.

  • For testing purposes, I deactivated firewalld and disabled SELinux.
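For reference, disabling both on CentOS 7 can be sketched as below; this is for a lab only and should not be done on production systems:

```shell
# Stop the firewall now and keep it off after reboots.
systemctl stop firewalld
systemctl disable firewalld

# Put SELinux in permissive mode immediately...
setenforce 0
# ...and disable it permanently in its configuration file.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```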

Some tips with Ansible Modules for managing OS and Applications

In 2016, I published some articles on Ansible: Getting started with Ansible deployment, which provides a guide to getting started with Ansible, setting up the SSH key and other basics. Another article is about LVM configuration on CentOS, as well as updating glibc on a Linux server followed by a restart of the service. There is also an article with more details about Ansible playbooks, which could be helpful to get started with.

It is almost two years since I published these articles, and the concepts behind Ansible remain the same. We now have other tools, such as Ansible Galaxy and Ansible Tower, to ease many of the tasks carried out with this agentless tool. On top of that, there is also the possibility of performing agentless monitoring using Ansible. In future articles, I will go into more detail about this, for example using Ansible to monitor servers. The concepts remain the same; however, it is important to make sure that the modules used match your version of Ansible, otherwise you might end up with deprecated modules. The Ansible playbook’s output will give you an indication of the servers on which it failed or succeeded. You will also have access to the <PlaybookName>.retry file, which lists all the failed servers.
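As a sketch, the .retry file can be fed straight back to ansible-playbook so that only the failed servers are retried (site.yml is a hypothetical playbook name):

```shell
ansible-playbook site.yml
# Failed hosts are listed in site.retry; re-run the playbook against them only.
ansible-playbook site.yml --limit @site.retry
```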


When using Ansible, always make sure that you are reading the official documentation. Each version of Ansible is well documented on the official website.

These days I have written a few playbooks. Let’s see some of the interesting things Ansible can do.

Ansible can edit files in place in an idempotent, bullet-proof way. Instead of copying files from one destination to another, we can edit them directly (for example with the lineinfile module). Here is an extract of one such action:

Another interesting use is the Ansible shell module, with which you can fire shell commands remotely from an Ansible playbook. For example, removing specific users from a specific group using the shell module:


You can also delete a specific user along with its home directory:
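For illustration, both actions can also be fired as ad-hoc commands from the control node; the host group, user, and group names below are hypothetical:

```shell
# Remove user "bob" from the "developers" group through the shell module.
ansible webservers -m shell -a "gpasswd -d bob developers" --become

# Delete user "bob" together with his home directory via the user module.
ansible webservers -m user -a "name=bob state=absent remove=yes" --become
```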

Do check out my Github Repository to have access to my Ansible Playbooks.

Debugging disk issues with blktrace, blkparse, btrace and btt in Linux environment

blktrace is a block layer IO tracing mechanism which provides detailed information about request queue operations up to user space. There are three major components: a kernel component, a utility to record the i/o trace information from the kernel to user space, and utilities to analyse and view the trace information. blktrace itself records the i/o event trace information for a specific block device to a file.

 

Photo credits : www.brendangregg.com

Limitations and Solutions with blktrace

There are several limitations and solutions when using blktrace. We will focus mainly on its goal and how the blktrace tool can help us in our day-to-day tasks. Assuming that you want to debug an IO-related issue, the first command will probably be iostat. These utilities can be installed with yum install sysstat blktrace. For example:

iostat -tkx -p 1 2

The limitation of iostat is that it does not tell us which process is using which IO. To resolve this, we can use another utility such as iotop. Here is an example of iotop output.

blktrace, iotop, blkparse, btt and btrace

Here iotop shows us exactly which process is consuming how much IO. But the problem with that solution is that it does not give detailed information. So blktrace comes in handy, as it gives layer-wise information. How does it do that? It sees what is going on exactly inside the block I/O layer. When used correctly, it is possible to generate events for all I/O requests and monitor where each request is going. Though it extracts data from the kernel, it is not an analysis tool, and the interpretation of the data needs to be carried out by you. However, you can feed the data to btt or blkparse to get the analysis done.

Before looking at blktrace, let’s check out the I/O architecture. Basically, in userspace the user writes and reads data; this is the User Process in the diagram. The user does not write directly to the disk: writes first go to the VFS page cache, from which the I/O scheduler and the device driver interact with the disk to write the data.

Photo credits: msreekan.com

blktrace will capture events during this process. Here is a cheat sheet to understand the blktrace event capture.

photo credits: programering.com

blkparse will parse and format events acquired from blktrace. If you do not want to run blkparse separately, btrace is a shortcut which pipes the output of blktrace into blkparse. Finally, we have btt, which analyzes the data from blktrace and generates time deltas for each layer.

Another tool to grasp before moving on with blktrace is debugfs, which is a simple-to-use RAM-based file system specially designed for debugging purposes. It exists as a simple way for kernel developers to make information available to user space. Unlike /proc, which is only meant for information about a process, or sysfs, which has strict one-value-per-file rules, debugfs has no rules at all; developers can put any information they want there. – lwn.net

So the first thing to do is to mount the debugfs file system using the following command:

mount -t debugfs debugfs /sys/kernel/debug

The aim is to allow a kernel developer to make information available in user space. You can run the mount command afterwards to verify that the file system was mounted successfully. Now that you have the debug file system, you can capture the events.

Diving into the commands

1. So you can use blktrace to trace the I/O on the machine:

blktrace -d /dev/sda -o -|blkparse -i -

2. At the same time, on another console launch the following command to generate some I/O for testing purpose.

dd if=/dev/zero of=/mnt/test1 bs=1M count=1

From the blktrace console you will get an output which ends as follows:

CPU0 (8,0):
 Reads Queued:           2,       60KiB Writes Queued:       5,132,   20,524KiB
 Read Dispatches:        2,       60KiB Write Dispatches:       61,   20,524KiB
 Reads Requeued:         0 Writes Requeued:         0
 Reads Completed:        2,       60KiB Writes Completed:       63,   20,524KiB
 Read Merges:            0,        0KiB Write Merges:        5,070,   20,280KiB
 Read depth:             1         Write depth:             7
 IO unplugs:            14         Timer unplugs:           9
Throughput (R/W): 8KiB/s / 2,754KiB/s
Events (8,0): 21,234 entries
Skips: 166 forward (1,940,721 -  98.9%)

3. The same result can also be achieved using the btrace command. Apply the same principle as in part 2 once the command has been launched:

btrace /dev/sda

4. In parts 1 and 3, the blktrace commands were launched in such a way that they run forever, without exiting. In this particular example, I will output to a file for analysis. Assuming that you want to run blktrace for 30 seconds, the command is as follows:

blktrace -w 30 -d /dev/sda -o io-debugging

5. On another console, launch the following command:

dd if=/dev/sda of=/dev/null bs=1M count=10 iflag=direct

6. Wait for 30 seconds to allow step 4 to complete. I got the following results just after:

# blktrace -w 30 -d /dev/sda -o io-debugging
=== sda ===
  CPU  0:                  510 events,       24 KiB data
  Total:                   510 events (dropped 0),       24 KiB data

7. You will notice that the output file has been created in the /mnt directory. To read it, use the command blkparse:

blkparse io-debugging.blktrace.0 | less

8. Now let’s see a simple extract from the blkparse command:

8,0    0        1     0.000000000  5686  Q   R 0 + 1024 [dd]
8,0    0        0     0.000028926     0  m   N cfq5686S / alloced
8,0    0        2     0.000029869  5686  G   R 0 + 1024 [dd]
8,0    0        3     0.000034500  5686  P   N [dd]
8,0    0        4     0.000036509  5686  I   R 0 + 1024 [dd]
8,0    0        0     0.000038209     0  m   N cfq5686S / insert_request
8,0    0        0     0.000039472     0  m   N cfq5686S / add_to_rr

The first column shows the device major,minor tuple; the second column gives the CPU; then come the sequence number, the timestamp, and the PID of the process issuing the IO. The 6th column shows the event type, e.g. ‘Q’ means IO handled by request queue code. Please refer to the diagram above for more info. The 7th column is R for Read, W for Write, D for block Discard, B for Barrier operation; then the block number follows, and the + number after it is the number of blocks requested. The final field between the [ ] brackets is the name of the process issuing the request. In our case, I am running the dd command.
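As a quick sanity check of the column layout, a saved blkparse line can be pulled apart with awk (the line below is the first event from the trace above):

```shell
# Extract the event type (column 6), operation flags (column 7)
# and the issuing process (last column) from one blkparse event line.
line='8,0    0        1     0.000000000  5686  Q   R 0 + 1024 [dd]'
echo "$line" | awk '{print "event=" $6, "op=" $7, "proc=" $NF}'
# → event=Q op=R proc=[dd]
```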

9. The output can also be analyzed using the btt command. You will get almost the same information:

btt -i io-debugging.blktrace.0

Some interesting information here: D2C means the amount of time the IO has spent inside the device, whilst Q2C means the total time taken from queueing to completion, which can be larger as there may be several concurrent IOs.
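To make the relationship concrete, here is a tiny sketch with made-up latencies (the microsecond values are hypothetical, chosen for illustration only):

```shell
# Q2C (queue to complete) decomposes into Q2D (time spent queued before
# reaching the device) plus D2C (time spent inside the device).
Q2D=120   # us, hypothetical
D2C=800   # us, hypothetical
Q2C=$((Q2D + D2C))
echo "Q2C=${Q2C}us"
# → Q2C=920us
```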

A graphical user interface to make life easier

The Seekwatcher program generates graphs from blktrace runs to help visualize IO patterns and performance. It can plot multiple blktrace runs together, making it easy to compare the differences between different benchmark runs. You should install the seekwatcher package if you need to visualize detailed information about IO patterns.

The command used to generate a picture for analysis is as follows, where seek.png is the name given to the output PNG:

seekwatcher -t io-debugging.blktrace.0 -o seek.png

What is also interesting is that you can create a movie with seekwatcher to view the graph over time:

seekwatcher -t io-debugging.blktrace.0 -o seekmoving.mpg --movie

Tips:

  • For debugfs, you can also edit the /etc/fstab file to make the mount point permanent.

  • By default, blktrace will capture all events. This can be limited with the argument -a.
  • In case you want to capture for a long time or for a fixed amount of time, use the argument -w.
  • blktrace will also store the extracted data in the local directory with the format device.blktrace.cpu, for example sda.blktrace.0.
  • At steps 1 and 2, you will need to press CTRL+C to stop blktrace once the dd has completed.
  • As seen in part 2, you created a test file test1; do delete it, as it may consume disk space on your machine.

  • In part 5, the size of the file created in /mnt will not exceed what was specified in the command.
  • In part 9, you will notice several other pieces of information which can be helpful, such as D2C and Q2C:
    • Q2D latency – time from request submission to Device.
    • D2C latency – Device latency for processing request.
    • Q2C latency – total latency: Q2D + D2C = Q2C.
  • To be able to generate the movie, you will have to install mencoder with all its dependencies.
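The permanent mount mentioned in the first tip is a single line in /etc/fstab, along these lines:

```
debugfs  /sys/kernel/debug  debugfs  defaults  0 0
```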

Linux memory analysis with Lime and Volatility

Lime is a Loadable Kernel Module (LKM) which allows for volatile memory acquisition from Linux and Linux-based devices, such as Android. This makes LiME unique as it is the first tool that allows for full memory captures on Android devices. It also minimises its interaction between user and kernel space processes during acquisition, which allows it to produce memory captures that are more forensically sound than those of other tools designed for Linux memory acquisition. – Lime. Volatility framework was released at Black Hat DC for analysis of memory during forensic investigations.

Analysing memory in Linux can be carried out using Lime, a forensic tool to dump the memory. I am using a CentOS 6 distribution installed on VirtualBox to acquire the memory. Normally, before capturing the memory, the suspicious system’s architecture should be well known; you may need to compile Lime on the suspicious machine itself if you do not know its architecture. Once you compile Lime, you will have a kernel loadable object which can be injected into the Linux kernel itself.

Linux memory dump with Lime

1. You will first need to download Lime on the suspicious machine.

git clone https://github.com/504ensicsLabs/LiME

2. Compile Lime. Once it has been compiled, you will notice the creation of the Lime loadable kernel object:

make

3. Now the kernel object has to be loaded into the kernel. Insert the kernel module, then define the location and format in which to save the memory image:

insmod lime-2.6.32-696.23.1.el6.x86_64.ko "path=/Linux64.mem format=lime"

4. You can check whether the module has been successfully loaded:

lsmod | grep -i lime

Analysis with Volatility

5. We will now analyze the memory dump using Volatility. Download it from GitHub:

git clone https://github.com/volatilityfoundation/volatility

6. Now, we will create a Linux profile. We will also need to download the DwarfDump package. Once it is downloaded, go to the tools/linux directory, then create the module.dwarf file:

yum install epel-release libdwarf-tools -y && make

7. To proceed further, the System.map file is important to build the profile. It contains the locations of all the functions active in the compiled kernel. You will find it inside the /boot directory. It is also important to corroborate the version appended to the System.map file with the version and architecture of the kernel. In the example below, the version is 2.6.32-696.23.1.el6.x86_64.

8. Now, go to the root of the Volatility directory using cd ../../, since I assume that you are still in the linux directory. Then, create a zip file as follows:

zip volatility/plugins/overlays/linux/Centos6-2632.zip tools/linux/module.dwarf /boot/System.map-2.6.32-696.23.1.el6.x86_64

9. The Volatility profile has now been successfully created, as indicated in part 8, for this particular Linux distribution and kernel version. Time to have fun with some Python scripts. You can view the profile created with the following command:

python vol.py --info | grep Linux

As you can see, the LinuxCentos6-2632 profile has been created.

10. Volatility contains plugins to view details about the memory dump performed. To view the plugins or parsers, use the following command:

python vol.py --info | grep -i linux_

11. Now imagine that you want to see the processes that were running at the time of the memory dump. You will have to execute the vol.py script, specify the location of the memory dump, define the profile created, and call the parser concerned:

python vol.py --file=/Linux64.mem --profile=LinuxCentos6-2632x64 linux_psscan

12. Another example to recover the routing cache memory:

python vol.py --file=/Linux64.mem --profile=LinuxCentos6-2632x64 linux_route_cache

Automating Lime using LiMEaide

I find the LiMEaide tool really interesting for remote execution of Lime. “LiMEaide is a python application designed to remotely dump RAM of a Linux client and create a volatility profile for later analysis on your local host. I hope that this will simplify Linux digital forensics in a remote environment. In order to use LiMEaide all you need to do is feed a remote Linux client IP address, sit back, and consume your favorite caffeinated beverage.” – LiMEaide

Tips:

  • Linux architecture is very important when dealing with Lime. This is probably the first question that one would ask.
  • The kernel-headers package is a must to create the kernel loadable object.
  • Once a memory dump has been created, it is important to take a hash value. This can be done using the command md5sum Linux64.mem.
  • I would also consider downloading the development tools using yum groupinstall “Development Tools” -y.
  • As good practice, as indicated in part 8, use a proper naming convention when creating the zip file. In my case, I used the OS version and the kernel version for future reference.
  • Not all parsers/plugins will work with Volatility, as some might not be compatible with your Linux system.
  • You can check out the Volatility wiki for more info about the parsers.
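As a sketch of the hashing tip above, using a stand-in file in place of a real memory image:

```shell
# Create a stand-in for the memory image and record its MD5 hash so the
# integrity of the dump can be verified later.
printf 'stand-in memory image' > /tmp/Linux64.mem
md5sum /tmp/Linux64.mem > /tmp/Linux64.mem.md5
cat /tmp/Linux64.mem.md5
```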