
Recover logical volumes data from deleted LVM partition

Have you ever deleted a logical volume by accident? Can you recover it from the backups? Well, the answer is YES. For those who are not familiar with it, "Logical Volume Management (LVM) is a device mapper target that provides logical volume management for the Linux kernel." - Wikipedia. In other words, it is an abstraction layer placed on top of your hard drive for flexible manipulation of the disk space. I have published several articles on LVM in the past.

All the tests in this blog post were carried out on a CentOS machine. Please don't run a live test on a production server.

Image Credits: Redhat.com

1. So, as you can see below, I have an LV called lvopt which belongs to a VG called centos.

2. The same LV is mounted on /opt.

3. There is some data in that partition as well:

4. I created a directory inside the /opt directory
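For readers following along without the screenshots, the setup can be reproduced roughly like this (the directory name here is hypothetical):

lvs                # confirm the LV lvopt exists in the VG centos
df -h /opt         # confirm it is mounted on /opt
mkdir /opt/mydata  # create some test data inside /opt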

5. Now, let's pretend to remove the LV lvopt. Or say, someone did it by accident. The command lvremove will be used here to remove the LV. Note that the LV needs to be unmounted first.
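Assuming the same VG and LV names as above, the removal looks like this:

umount /opt
lvremove /dev/centos/lvopt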

6. If you now run lvs, lvdisplay or vgs, or try to mount the partition again, you cannot do it. The data is lost. But you can still recover it, because LVM keeps an archive of your LV metadata inside the folder /etc/lvm/archive. However, you cannot read those files directly.

7. But you can still interpret part of the files. Since the deleted LV belonged to the volume group called "centos", we know that it is referenced in the files named centos_… The question that arises here is: which file is relevant for you? To find out which archive you want to restore, use the command vgcfgrestore --list <name of volume group>. Here is an example:
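In this case, the volume group is called centos, so the command becomes:

vgcfgrestore --list centos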

8. If you observe carefully, each archive was taken at a certain time. In my case, I deleted the LV on 18-Apr-2019 at 11:17:17:

9. So, I want to restore from that last archive. You will need to copy the full path of the VG file; in my case it is /etc/lvm/archive/centos_00004-1870674349.vg. The goal here is to restore the LV to its state just before the command lvremove was fired. Here is the command:
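Using the archive file identified above, the restore command is:

vgcfgrestore -f /etc/lvm/archive/centos_00004-1870674349.vg centos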

10. If you launch the command lvs, you will notice the presence of the LV again.

11. But mounting the LV back won't work yet. This is because the LV is inactive, which you can see with the command lvscan. Please take note below that lvopt is inactive.

12. To activate it, you simply need to use the command lvchange.

13. Mount it back and you are done.
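Putting steps 11 to 13 together, assuming the same VG and LV names:

lvscan                          # lvopt shows up as inactive
lvchange -ay /dev/centos/lvopt  # activate the LV
mount /dev/centos/lvopt /opt    # mount it back on /opt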

I believe this can be very useful, especially when you encounter a situation where someone deleted an LV. I hope you enjoyed this blog post. Please share and comment below if you liked it.


Attending AWSome day online conference 2019

The AWSome Day was a free online conference and training event, sponsored by Intel, providing a step-by-step introduction to the core AWS (Amazon Web Services) services. It was free, everyone could attend, and it was held online on 26 March 2019. The agenda covered broad topics such as AWS Cloud Concepts, AWS Core Services, AWS Security, AWS Architecting, and AWS Pricing and Support. It's pretty interesting for IT managers, system engineers, system administrators, and architects who are eager to learn more about cloud computing and how to get started on the AWS cloud. I do have some experience in managing AWS servers and even host my own server; however, I registered for the free training to refresh my knowledge and gain exposure in areas such as AWS pricing, of which I was not aware at all. Another interesting thing is that you receive a certificate of attendance as well as 25 USD of AWS credits. Pretty cool, right?

Right from the beginning, I knew this was something interesting. I encountered a minor problem whilst signing in; I had to send a mail to support, and it was resolved immediately. Once connected to the lobby, it was pretty easy to attend and follow the online conference. After some minutes, Steven Bryen, head of the AWS Cloud, delivered the keynote speech.

There was also an online challenge, and I scored 25,821 on the Trivia Leaderboard.

On the "Ask an Expert" tab, I was mostly interested in the Man-on-the-Side attack (MOTS). They referred me to the WAF section on AWS. Another interesting link is the AWS Overview of Security whitepaper. AWS also offers comprehensive security across all the layers: SSL, DDoS, Firewall, HSM, and Networking. I also shot some questions on metrics and monitoring at the application level, such as for MariaDB, and discovered RDS Performance Insights. For applications on EC2, Containers, and Lambda, X-Ray looks very promising. Apart from virtualization, it's good to note that AWS also provides containerization services.

The event was pretty enriching. The panel in the question area knew their subjects well. I discovered a lot by participating in the AWSome Day. I'm looking forward to AWS certifications in the near future.


Installing the Networker Management Console (NMC) on CentOS 7

In the last article, we saw how to install Dell EMC Networker on CentOS 7, where there were some issues with dependencies. In this article, I will install the Networker Management Console (NMC) on the same server. Before installing, we will look at the services running once the Networker services have been started; then, I will compare them with the services running after the NMC installation.

Before proceeding to the installation, the package that I will install is:

lgtonmc (Networker Management Console) – Gives you the ability to access the Management Interface or Management console to manage backups.

1. The services running before the installation are as indicated in the screenshot below:

2. In the previous article, I downloaded all the packages. From that directory, I installed the NMC using the rpm command, followed by the execution of the script /opt/lgtonmc/bin/nmc_config. You will be prompted to answer a few questions for the installation. I selected the default answers, except for the creation of the user for the PostgreSQL database.
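Assuming the RPMs sit in the current directory, the two steps look like this:

rpm -ivh lgtonmc*.rpm
/opt/lgtonmc/bin/nmc_config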

3. Now, we can see a bunch of new services running, such as more Java processes and PostgreSQL.

4. Since we installed the NMC on the VM, we should be able to access the console from the same network on port 9000. My VM is configured with the IP address 192.168.100.19, so browsing to http://192.168.100.19:9000 now shows me the console.

5. As you can notice on the screenshot above, I don't have the Java Runtime enabled on my Mac, so I had to install and enable it. Follow the instructions for the installation by clicking on "Browser, OS, & JRE Requirements".

6. Once installed and activated, you should be able to access the console by clicking on “Click here to start Management Console”. The prompt to enter username/password should then appear.

The default username is Administrator, and the password is the one you set when installing Networker.

7. Follow the instructions to set up the database backup server, authentication server, etc., and at the end, you should be able to reach the console.


Installing EMC Dell Networker 9 on CentOS 7

A few days ago, I attended a training on Dell EMC Networker 9 in Mauritius itself. Since not everything can be covered in the training, such as the installation of Networker on Linux machines, I decided to install it myself in my lab.

photo credits: dell.com

For those who are not familiar with it, Networker 9, formerly called Legato NetWorker, is an "enterprise-level data protection software product that unifies and automates backup to tape, disk-based, and flash-based storage media across physical and virtual environments for granular and disaster recovery." To install it, I created a CentOS 7 minimal-installation lab on VirtualBox, made an update, and installed a few packages such as vim, tcpdump, net-tools, traceroute, epel-release, locate, atop, htop, and wget. These are basic packages for my own use on the VM; they have nothing to do with the Networker installation.

To be able to download the necessary packages, it's a prerequisite to register on the Dell EMC website first. Once authenticated, you can move on to the download section. Dell provides a tar.gz containing all the packages, for Debian as well as RHEL; even the Avamar packages can be found there. So, you will need to install only the necessary ones. Once the file has been downloaded and decompressed, you will notice several RPMs and DEBs inside. The ones needed for the Networker installation are as follows:

  • lgtoclnt (Networker client) – Provides you the ability to perform file system backup and recovery options.

  • lgtoxtdclnt (Networker Extended client) – Provides additional feature support for NetWorker clients, such as snapshot backup support, command line utility support including server reporting and administration, cloning and staging support, and so on.

  • lgtonode (Networker Storage Node) – Provides features for the storage node which will control storage devices such as tape drives, disk devices, autochangers, and silos.

  • lgtoserv (Networker Server) – Provides the web server of the Networker portal.

  • lgtoauthc (Networker Authentication Service) – Authentication layer used for the backup purpose.

  • lgtoman (Networker Manual) – Provides the manual pages. However, it's not a prerequisite.

Whilst installing these packages, you will notice dependency problems. See the "Tips" section below for more information. I had to install the 32-bit glibc package, as some of the Networker packages depend on it.

Here is an idea of the error message you may get while performing the installation: "libc.so.6 is needed by lgtoclnt-9.1.1.7-1.x86_64".

This can be confirmed with yum whatprovides libc.so.6, which shows that the library is provided by the glibc.i686 package.
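For example:

yum whatprovides libc.so.6
# among the providers listed is glibc-2.17-260.el7.i686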

1. At this point, to continue with the installation, I performed the following steps:

yum install glibc-2.17-260.el7.i686
rpm -ivh lgtoclnt*.rpm lgtoxtdclnt*.rpm lgtonode*.rpm lgtoserv*.rpm lgtoauth*.rpm lgtoman*.rpm

2. If you are installing the packages one by one, you will need to install lgtoauthc before installing lgtoserv. After the installation of lgtoauthc, it will prompt you to launch the following script:

/opt/nsr/authc-server/scripts/authc_configure.sh

3. It will prompt you to specify where you have installed the Java Runtime. At the time I'm writing this article, I'm using Java Runtime 8 from the oracle.com website. Use the following wget syntax to download it:

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "https://download.oracle.com/otn-pub/java/jdk/8u202-b08/1961070e4c9b4e26a04e7f5a083f551e/jre-8u202-linux-x64.rpm"

4. Once downloaded and installed, java -version should show you the runtime environment.

5. Now, you can launch the script /opt/nsr/authc-server/scripts/authc_configure.sh anew, and it will prompt you to enter the keystore and administrator passwords.

6. Once the installation is complete, you can start the Networker daemon via /etc/init.d/networker and check the running processes.
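A minimal sketch, assuming the standard init script and that the Networker daemons carry the usual nsr prefix:

/etc/init.d/networker start
ps -ef | grep nsr    # nsrd, nsrexecd, etc. should appear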

Tips:

  • The problem is that the GLIBC2.0 symbol is not provided by the x86-64 libc on CentOS, but it is provided by the 32-bit i686 package. There is no real dependency of the EMC NetWorker 9.1 packages on the 32-bit library; it is probably a false RPM dependency. So it is necessary to download the 32-bit glibc package from the CentOS repositories and install it.

  • If you have installed Java elsewhere, you will need to specify its path when executing the script /opt/nsr/authc-server/scripts/authc_configure.sh.

  • The installation logs are found at /opt/nsr/authc-server/logs/install.log.

  • For testing purposes, I deactivated firewalld and disabled SELinux (see the sketch below).
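On a stock CentOS 7 setup, that boils down to:

systemctl stop firewalld && systemctl disable firewalld
setenforce 0    # permissive until the next reboot; edit /etc/selinux/config to make it permanent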


Project Tabulogs: Linux Last logs on HTML table with Ansible

Project Tabulogs gives you the ability to display logs on an HTML page in tabular format using an Ansible Playbook. Some days back, I shared some ideas on how to use Ansible to create an agentless server inventory with an interesting HTML format. In this post, we will see how to present logging information from lastlog in an HTML tabular format. You can apply the same principle to any logs and present them in an HTML template. One of the advantages of Ansible is that you can also launch shell commands, which allows you to automate several tasks, whether remote or local. Many times, I came across non-technical personnel who wanted to see the logins and logouts of staff on specific servers. The goal of this project is to present accurate and reliable information to non-technical persons in an IT organization. It's also a way to peruse users' logins and logouts more easily.


All rights reserved: tunnelix.com

Limitations and Solutions

However, there are some limitations to presenting bulk information such as logs on an HTML page. Imagine having hundreds of servers, which could amount to more than 20,000 lines. This would make the page impossibly slow to load and might crash the browser. To remedy this situation, a database such as SQLite could be interesting; otherwise, AJAX can be used to fetch the data page by page. Since I'm keeping all the information in JSON format in a JavaScript file, I decided to make use of pagination. A search box covering all columns and rows will also come in handy.

Now, the thing is how to report information which keeps on changing. Let's say every month you want a report of the users connected on your organization's servers, in JSON format, inside a JavaScript file. Here is where Ansible comes in useful. I created a Playbook that builds the JSON array itself and the JavaScript file using the Shell module, based on Linux awk commands. Of course, there are other tasks Ansible performs, such as fetching the information remotely and updating the HTML page (for example, the date the report was created). The directory can then be compressed and sent easily to anyone. So, no database needed! Cool, isn't it? However, if you want to adopt this system for really huge amounts of log data, it might work but could be really slow; hence the need for a database.


You can clone the repo from my GitHub repository.

HTML, AngularJS and JSON

I created a simple HTML page which calls AngularJS to keep the information in a JSON array. The HTML page is static. However, for your own use, if you want to add some more columns, feel free to edit the section below. It is located at TabuLogs/master/perimeter1/index.html:

<table>
    <thead>
        <tr class="tableheader">
            <th>Hostname</th>
            <th>Username</th>
            <th>Login</th>
            <th>TimeTaken</th>
            <th>IPAddress</th>
            <th>Perimeter</th>
            <th>Application</th>
        </tr>
    </thead>
    <tbody>
        <tr ng-repeat="rec in SampleRecords | filter:q | startFrom:currentPage*pageSize | limitTo:pageSize">
            <td>{{ rec.Hostname }}</td>
            <td>{{ rec.Username }}</td>
            <td>{{ rec.Login }}</td>
            <td>{{ rec.TimeTaken }}</td>
            <td>{{ rec.IPAddress }}</td>
            <td>{{ rec.Perimeter }}</td>
            <td>{{ rec.Application }}</td>
        </tr>
    </tbody>
</table>

Since I have used the lastlog data from Linux, I called the page "Linux Login Management". Otherwise, you can also filter any logs, such as the Apache or secure logs; some modifications would have to be carried out on the awk command, or by using Ansible Jinja filters.

You can also point to a logo on the HTML page, kept in the images folder:


<body ng-app="AngTable" ng-controller="Table">
    <p></p>
      <img src="images/logo1.png" align="right" alt=" " width="380" height="180"> 
      <img src="images/logo2.png" alt="cyberstorm" align="top-left" width="380" height="180">
    <h2>Login Management for my servers</h2>

The most interesting part is the data, which is stored in a JSON array. In this particular example, we have information about the hostname, username, login, time taken, IP address, application, and perimeter.

[ 
{"Hostname":"Apacheserver","Username":"tunnelix","Login":"Fri-Sep-28-15:11","TimeTaken":"15:11(00:00)","IPAddress":"192.168.0.1","Application":"Apache","Perimeter":"Production"},
 ]; 

The JSON array will be fetched by AngularJS to render the HTML table.

As mentioned previously, if you tried to load the whole JS file into your browser at once, it would crash; to overcome this situation, pagination comes in handy.

Messing around with Linux AWK command and the Ansible Playbook

The Linux awk command is pretty powerful for concatenating all the logs together. Even more interesting is the conversion of the logs into JSON format. The playbook is located at TabuLogs/AnsiblePlaybook/AuditLoginLinux.yml.

When the Playbook is launched, the first awk command creates a file named after the hostname of the machine in /tmp; inside that file, the data comprises the Hostname, Username, Login, Time-Taken, and IP Address. To get the logs from the previous month, I used Date=`date +%b -d 'last month'`.

Assuming you have rotated logs for /var/log/wtmp, whether weekly, monthly, or over any range, the loop is supposed to find logs for last month only. In case you have kept wtmp logs for years, another method is needed to fetch the logs for the current year.

Date=`date +%b -d 'last month'`; t=`hostname -s` ; for i in `ls /var/log/wtmp*` ; do last -f $i ; done | grep $Date | awk -v t="$t" '{print t, $1, $4 "-" $5 "-" $6 "-" $7, $9$10, $3 }' | egrep -v 'wtmp|reboot' > /tmp/$t

Once the logs have been created on the remote hosts, they are fetched by Ansible and kept on the local server; the remote files are then destroyed.
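A minimal sketch of those two tasks, written in the same key=value style as the rest of the playbook (the paths are assumed from the commands below):

fetch: src=/tmp/{{ ansible_hostname }} dest=/Tabulogs/AnsibleWorkDesk/Servers/ flat=yes
file: path=/tmp/{{ ansible_hostname }} state=absent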

We also need to merge all the data into one single file and remove all blank lines, which is done using the following command:


awk 'NF' /Tabulogs/AnsibleWorkDesk/Servers/* | sed '/^\s*$/d' > /Tabulogs/AnsibleWorkDesk/AllInOne

Assuming that the file located at /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter contains information about the application, perimeter, and server, the following awk command will append that information to each row of the merged table:

awk 'FNR == NR {a[$3] = $1 " " $2; next} { if ($1 in a) { NF++; $NF = a[$1] }; print}' /Tabulogs/AnsibleWorkDesk/rhel7_application_perimeter /Tabulogs/AnsibleWorkDesk/AllInOne > /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication

An example of the format of that file is:

Nginx dev server2

Apache prod server1

Nginx dev server4

After bringing all the data together into a single table, the following awk command converts each row into JSON format:

awk '{print "{" "\"" "Hostname" "\"" ":" "\"" $1"\"" "," "\"" "Username" "\"" ":" "\"" $2"\"" "," "\"" "Login" "\"" ":" "\"" $3"\"" "," "\"" "TimeTaken" "\"" ":" "\"" $4"\"" ",""\"" "IPAddress" "\"" ":" "\"" $5"\"" "," "\"" "Application" "\"" ":" "\"" $6"\"" "," "\"" "Perimeter" "\"" ":" "\""$7"\"""}" "," }' /Tabulogs/AnsibleWorkDesk/AllInOneWithPerimeterAndApplication > /Tabulogs/AnsibleWorkDesk/perimeter1/table.js

It does not end there; we also need to add the JS code around it. Using the lineinfile module of Ansible, writing at the beginning of the file is pretty easy:


lineinfile: dest=/Tabulogs/AnsibleWorkDesk/perimeter1/table.js line={{ item.javascript }} insertbefore=BOF
     with_items:
       - { javascript: "$scope.SampleRecords=[ " }
       - { javascript: "$scope.q = ''; " }
       - { javascript: "$scope.pageSize = 10; " }
       - { javascript: "$scope.currentPage = 0; " }
       - { javascript: "app.controller('Table', ['$scope', '$filter', function ($scope, $filter) { "}
       - { javascript: "var app = angular.module(\"AngTable\", []);  "}

The same method is used to write at the end of the file, completing the table.js file:

lineinfile: dest=/Tabulogs/AnsibleWorkDesk/Perimeter1/table.js line={{ item.javascript }} insertafter=EOF
     with_items:
       - { javascript: " ]; " }
       - { javascript: "$scope.getData = function () { " }
       - { javascript: "return $filter('filter')($scope.SampleRecords, $scope.q) " }
       - { javascript: " }; "}
       - { javascript: "$scope.numberOfPages=function(){ "}
       - { javascript: "return Math.ceil($scope.getData().length/$scope.pageSize); "}
       - { javascript: "}; "}
       - { javascript: "}]); "}
       - { javascript: "app.filter('startFrom', function() { "}
       - { javascript: "    return function(input, start) { "}
       - { javascript: "start = +start; "}
       - { javascript: "return input.slice(start); "}
       - { javascript: " } "}
       - { javascript: "}); "}

After writing at the end of the file, the table.js is now complete.

Everything is assembled using the Ansible playbook. The Playbook has been tested for RedHat 7 servers. I believe it can also be adapted for other environments.

Final result

Here is an idea of how the final result looks:



Some pieces of information are blurred for security reasons.

Future Enhancements

I'm excited to share this article, though I know there are many more improvements that can be made, such as:

  • The awk commands can be combined.
  • Removal of backticks in the shell commands.
  • Shell commands like rm -rf will be replaced using the file module.
  • Replacement of the deprecated HTML tags that were used.
  • Code sanitization.

My future goal is to improve the Ansible Playbook and the source code itself. Maybe someone else has a much better way of doing it; I would be glad to hear about it. Please comment below with more ideas. If you believe some improvements can be made to the Playbook, feel free to send some commits to my GitHub repository or comment below.

As usual, a plan is needed. Consider reading the “tips” section below which might give you some hints.

TIPS:


  • There are, however, some other limitations with the last command. Some arguments will not be available if you are using an old util-linux package. Consider updating the package to be able to filter the last command output easily.
  • If you can group your servers by some category or environment, it would be helpful.
  • There will be other versions of project Tabulogs with a better and faster Ansible playbook. Let's call this one version 1.0.