Category: Linux Application

Recover logical volumes data from deleted LVM partition

Have you ever deleted a logical volume by accident? Can you recover it from the backups? Well, the answer is YES. For those who are not familiar with it, Logical Volume Management (LVM) “is a device mapper target that provides logical volume management for the Linux kernel” (Wikipedia). It is an abstraction layer placed on top of your hard drive that allows flexible manipulation of the disk space. I have published several articles on LVM in the past.

All the tests in this blog post have been carried out on a CentOS machine. Please don’t experiment live on a production server.

Image Credits: Redhat.com

1. So, as you can see below, I have an LV called lvopt which belongs to a VG called centos.

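If you want to follow along, a comparable setup can be created with the standard LVM tools (the device name /dev/sdb and the size below are hypothetical; adapt them to your test machine):

```shell
# Create a PV, a VG named centos, and an LV named lvopt on a spare disk.
pvcreate /dev/sdb
vgcreate centos /dev/sdb
lvcreate -n lvopt -L 2G centos
# List the logical volumes; lvopt should appear under VG centos.
lvs centos
```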

2. The same LV is mounted on /opt.


3. There is some data in that partition as well:


4. I created a directory inside the /opt directory


5. Now, let’s remove the LV lvopt; or say someone did it by accident. The command lvremove will be used here to remove the LV. Note that the LV needs to be unmounted first.

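The accidental deletion can be reproduced like this (mount point and LV path as used in this post):

```shell
# Unmount the filesystem first, then remove the logical volume.
umount /opt
lvremove -f /dev/centos/lvopt
```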

6. If you run lvs, lvdisplay or vgs, or even try to mount the partition again, you cannot: the data appears to be lost. But you can still recover it. This is because LVM keeps an archive of your volume group metadata inside the folder /etc/lvm/archive. You cannot restore from these files directly, though.


7. You can still interpret part of the files. Since we deleted an LV from the volume group called “centos”, we know it is referenced in the files named centos_… The question that arises here is: which file is relevant for you? To work out which archive you want to restore, use the command vgcfgrestore --list <name of volume group>. Here is an example:

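A sketch of the command and the kind of output to expect (archive file names and timestamps will differ on your system):

```shell
vgcfgrestore --list centos
#   File:         /etc/lvm/archive/centos_00004-1870674349.vg
#   VG name:      centos
#   Description:  Created *before* executing 'lvremove /dev/centos/lvopt'
#   Backup Time:  Thu Apr 18 11:17:17 2019
```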

8. If you observe carefully, each archive was taken at a certain time. In my case, I deleted the LV on 18-Apr-2019 at 11:17:17:


9. So, I want to restore from that last archive. You will need to copy the full path of the .vg file; in my case it is /etc/lvm/archive/centos_00004-1870674349.vg. The goal here is to restore the LV to its state before this specific time, or simply to restore the LV as it was before the command lvremove was fired. Here is the command:

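The restore itself is a single command pointing at the chosen archive file:

```shell
# Restore the VG metadata from the archive taken just before the lvremove.
vgcfgrestore -f /etc/lvm/archive/centos_00004-1870674349.vg centos
```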

10. If you launch the command lvs, you will notice the presence of the lv.


11. But mounting the LV back won’t work yet. This is because the LV is inactive. You can see it with the command lvscan. Please note below that lvopt is inactive.


12. To activate it you simply need to use the command lvchange.


13. Mount it back and you are done.

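Steps 11 to 13 can be summarised as follows (LV path and mount point as used earlier in this post):

```shell
# The restored LV comes back inactive: check, activate, then mount it.
lvscan | grep lvopt
lvchange -ay /dev/centos/lvopt
mount /dev/centos/lvopt /opt
```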

I believe this can be very useful, especially when you encounter a situation where someone has deleted an LV. I hope you enjoyed this blog post. Please share and comment below if you liked it.


cyberstorm.mu team at Developers Conference Mauritius

A few weeks back, I registered to present the Ansible automation tool at the Developers Conference 2019 at Voila Hotel, Bagatelle, Mauritius. The event is an initiative of the Mauritius Software Craftsmanship Community (MSCC), sponsored by several companies such as Mauritius Commercial Bank, SdWorx, Eventstore, Ceridian, Castille, etc. Other members of cyberstorm.mu also registered for presentations: Codarren Velvindron, technical lead at Orange Business Services, who spoke about “becoming an automation artist”; Loganaden Velvindron, who spoke about “RedHat Enterprise Linux 8 and Derivatives have a new Firewall: NFTABLEs”; and Nathan Sunil Mangar, who spoke about “Introduction to the STM32”. There was also a special session where Mukom Akong Tamon, head of capacity building for the Africa region at AFRINIC, spoke on “IPv6 deployment in Mauritius and Africa at large”. I introduced myself as a member of cyberstorm.mu and a DevOps Engineer at Orange Business Services, and spoke on Ansible for beginners with some basic and advanced demos.


In the past, I have written several articles on Ansible:

  1. Getting started with Ansible deployment
  2. Some fun with Ansible Playbooks
  3. Configure your LVM via Ansible
  4. Some tips with Ansible Modules for managing OS and Applications
  5. An agentless servers inventory with Ansible and Ansible-CMDB
  6. Project Tabulogs: Linux Last logs on HTML table with Ansible

My presentation started with a basic introduction to Ansible, followed by some brief examples and demos, after a short introduction of myself. It was a mixed audience: students, professionals from both the management and technical sides, engineers, and others. I quickly covered why we need Ansible in our daily life, whether at home or in production. Ansible is compatible with several operating systems, and one of the most interesting related tools is AWX, which is an open-source product. Before getting started with Ansible, it is important to grasp some keywords, so I introduced them and gave some examples using playbooks. Ansible ad-hoc commands were also used. The audience was asked for ideas about what they would want to automate in the future, and there were lots of good suggestions. I laid some emphasis on reading the docs and keeping track of the version of Ansible one is using, and gave a brief idea of Ansible Galaxy, ansible-doc, ansible-pull, and Ansible Vault. To spice up your automation layout, it is nice to use Jinja templates and verbosity for better visual comprehension. I also spoke about Ansible-CMDB, which is not an official Ansible tool; some days back I blogged about Ansible-CMDB, which is pretty interesting for creating an inventory, and I shared some ideas on how to modify its source code. There was also an example of using an Ansible playbook to build up web apps.
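As a flavour of the demos, here is a minimal, hedged sketch of an ad-hoc command and a playbook (the host group web and the package are placeholders for whatever your own inventory defines):

```shell
# Ad-hoc: ping every host in the "web" group of your inventory.
ansible web -m ping

# A minimal playbook written inline for the demo, then executed.
cat > demo.yml <<'EOF'
---
- hosts: web
  become: true
  tasks:
    - name: Ensure Apache is installed
      yum:
        name: httpd
        state: present
EOF
ansible-playbook demo.yml
```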


Thanks, everyone for taking pictures and some recordings.

cyberstorm.mu @ DevConMru


After my session, I went to the AFRINIC session on IPv6, where Mukom Akong Tamon gave an introduction to IPv6 and the IPv6 address format, along with several reasons why it is important to migrate to IPv6. Loganaden Velvindron from AFRINIC enlightened the audience about dual-stack programming.

One important point Mr. Mukom mentioned is that there are still developers hard-coding IP addresses in their code, which is not a good practice.


There was another session by Loganaden Velvindron of AFRINIC, who spoke on NFtables in RedHat 8; Mukom was also present at the session. Loganaden explained the NFtables architecture and its advantages, and also covered how to submit patches and build dual-stack applications with NFtables.


Codarren Velvindron, technical lead at Orange Business Services and member of cyberstorm.mu, explained why automation is important. He took some examples from the conference.mscc.mu website itself, and also gave some ideas using “Expect”. For those who are not familiar with “Expect”, it is a scripting language for talking to interactive programs or scripts that require user interaction.


Nathan Sunil Mangar presented an introduction to the STM32 microcontroller. He gave some hints on distinguishing between fake and real microcontrollers on the market. Apart from the basic introduction, he walked through examples from several projects and explained which one is better, although the choice of microcontroller also depends on the budget. He also showed how to use the programming tool for the STM32 microcontroller, and the documentation was perused during the presentation. At the end, there were several giveaways by Nathan, including fans, microcontrollers, and a small light bulb made with an STM32.


I also had the opportunity to meet several staff members from the Mauritius Commercial Bank who asked for some hints and best practices on Ansible, and had conversations with other people in the tech industry such as Kushal Appadu, Senior Linux System Engineer at Linkbynet Indian Ocean, with whom I discussed new technologies at length. Some days back, I presented the technicalities of automation as a DevOps Engineer at SupInfo University Mauritius under the umbrella of Orange Cloud for Business and Orange Business Services. I was glad to meet a few SupInfo students at DevCon 2019 who instantly recognized me and congratulated me on the Ansible session.

Speaker at SUPINFO under the umbrella of Orange Business Services

I sincerely believe there is still room for improvement at the Developers Conference, such as the website itself, which needs some security improvements. A feature that could be added is labelling each session as beginner, intermediate or advanced, so that attendees can choose better. The rating mechanism, which is not based on constructive feedback, might discourage other speakers from coming forward next time. But overall, it was a nice event. Someone from the media team filmed me for a one-minute video; I hope to see it on the net in the future. I also received a “Thank You” board for being a speaker from Vanessa Veeramootoo-Chellen, CTO at Extension Interactive and one of the organizers of the Developers Conference, who seemed to be always working, busy, and on the move during the event.


Attending AWSome day online conference 2019

AWSome Day was a free online conference and training event, sponsored by Intel, that provided a step-by-step introduction to the core AWS (Amazon Web Services) services. It was free, everyone could attend, and it was scheduled online on 26 March 2019. The agenda covered broad topics such as AWS Cloud Concepts, AWS Core Services, AWS Security, AWS Architecting, and AWS Pricing and Support. It is pretty interesting for IT managers, system engineers, system administrators, and architects who are eager to learn more about cloud computing and how to get started on the AWS cloud. I do have some experience in managing AWS servers and even host my own server. However, I registered for the free training to refresh my knowledge and gain more exposure to areas such as AWS pricing, which I was not aware of at all. Another interesting thing is that you receive a certificate of attendance as well as 25 USD of AWS credits. Pretty cool, right?


Right from the beginning, I knew this was something interesting. I encountered a minor problem whilst signing in; I had to send a mail to support, and it was resolved immediately. Once connected to the lobby, it was pretty easy to attend and follow the online conference. After some minutes, Steven Bryen of AWS delivered the keynote speech.


There was also an online challenge, and I scored 25,821 on the Trivia Leaderboard.


On the “Ask an Expert” tab, I was mostly interested in the Man on the Side attack (MOTS). They referred me to the WAF section on AWS. Another interesting link is the AWS security overview whitepaper. AWS offers comprehensive security across all the layers: SSL, DDoS protection, firewalls, HSMs, and networking. I also shot some questions on metrics and monitoring at the application level, for example on MariaDB, and discovered RDS Performance Insights. For applications on EC2, containers, and Lambda, X-Ray looks very promising. Apart from virtualization, it is good to note that AWS also provides containerization services.

The event was pretty enriching. The panel in the question area knew their subject well. I discovered a lot by participating in AWSome Day, and I’m looking forward to AWS certifications in the near future.


Building Docker images and publishing ports

One of the most challenging tasks in a production environment with Docker is building images and publishing ports. As promised in the previous article, here are more articles on Docker images. So, here we are! For those who missed the previous articles on Docker: firstly, there is the basic installation of Docker and some basic commands, and secondly, an article dedicated to 30 basic commands to start with Docker containers. Note that all illustrations and commands in this blog post have been tested on Fedora.


Building Docker images

What is a Docker image? Firstly, we need to understand what an image is. It is a compressed, self-contained piece of software. Once unwrapped, it becomes meaningful to use because it is all about the functionality that makes the image useful. An image could contain software, an operating system, a service, etc. A Docker image, in turn, is created from a series of instructions written to a file called a “Dockerfile”. Whenever you build from the Dockerfile using the docker command, it outputs an image: a Docker image. In this blog post, we are going to build a Docker image using an existing Docker image.

1. As described in the article “30 basic commands to start with Docker container” in part 3, to view the current images available you can use the following command:

docker images

2. In my case, I have a CentOS image. My goal is to make a Docker image with the Apache web server already pre-installed. There are two methods to create an image from an existing one. The first is the commit method, using the docker commit command, which is not extensively used because it is less flexible. The other is creating a Dockerfile. I created a directory at /home/webserver; this directory will be used to build up the web server. You can also create an index.html file to be used as the index page of the web server. Use the following basic commands:

mkdir /home/webserver && touch /home/webserver/{index.html,Dockerfile}

3. I then edited the index.html. Let’s enter some random phrase in it for testing purpose.

This is a test by TheTunnelix

4. Edit the Dockerfile and define it as indicated below. Comments in a Dockerfile must start at the beginning of a line, so each explanation sits above its instruction. Note also that LABEL takes key=value pairs:

# Take the latest CentOS image.
FROM centos:latest
# Just a reference using my e-mail.
LABEL maintainer="[email protected]"
# Install HTTPD.
RUN yum install httpd -y
# Copy index.html from the build context to the docroot.
COPY index.html /var/www/html/
# Always launch the binary to start the HTTPD daemon in the foreground.
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
# Apache runs on port 80; expose it so the port can later be published.
EXPOSE 80

5. Now, point yourself in the directory where your Dockerfile and index.html is located. We will build the image using the Dockerfile using docker build command.

docker build -t centos:apache .

6. You can check it using the command docker images, and you should notice that a new image has been created, tagged apache. You can also view all the intermediate images from each build step using the following command:

docker images -a

7.  To run it, you can use:

docker run -dit --name=ApacheTunnelix centos:apache

At this stage, a docker ps will show you the container running. Remember, in part 24 of the article “30 basic commands to start with Docker container”, we learned that Docker creates a bridge. You can check it using docker network ls, and you can also confirm it using the brctl show command.

8. When launching the command docker inspect on the container, I can see in the network settings section that my container is reachable at the IP address 172.17.0.2, and the content of the index.html file created in step 3 is accessible from my browser at that address. You can also check it using the following curl command:

curl http://172.17.0.2
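If you only want the IP address rather than the full JSON from docker inspect, a Go-template filter does the job (container name as created in step 7):

```shell
# Print just the container's IP address from its network settings.
docker inspect -f '{{.NetworkSettings.IPAddress}}' ApacheTunnelix
```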

Publishing the port

9. The point is that the container ApacheTunnelix, with IP address 172.17.0.2, is not reachable from outside the physical host on which my Docker engine is running. The catch is that we need to go through a step called publishing ports.

10. I will now create another web server to better differentiate between the container ApacheTunnelix, which is only accessible locally, and another container (let’s call it ApacheTunnelixProd) which needs to be accessible on the same network as the physical machine. I copied the directory /home/webserver to /home/webserverprod and moved into webserverprod:

cp -rp /home/webserver /home/webserverprod && cd /home/webserverprod

11. For my own reference, I changed the index.html to:

This is a test by TheTunnelix on Prod.

12. Repeat step 5 by building a new image with a new name (note the trailing dot for the build context):

docker build -t centos:apacheProd .

13. Compared to step 7, where we ran the container without publishing the port, this time we will run it with the port published outside the physical machine. By default, the container runs Apache on port 80. To make it accessible on, say, port 5000, we use the following command:

docker run -dit --name=ApacheTunnelixProd -p 5000:80 centos:apacheProd

14. By now, the container should be accessible on any IP of the local machine, including localhost. In my case, the IP address of my physical machine is 192.168.100.9. You can test it using the command:

curl http://192.168.100.9:5000

Or you can simply access your machine from a browser:


15. A docker ps is of great importance here, as it will show you the source and destination of the port mapping. Another interesting command for understanding the port mapping is docker port. For example:

docker port ApacheTunnelixProd

This will show the following result:

80/tcp -> 0.0.0.0:5000

In the next article on Docker, I will share some more interesting tips on Docker networking. Keep in touch, and comment below with suggestions and improvements.

Tips:

  • EXPOSE documents that the web server inside the container listens on port 80. On its own it does not make the port reachable from outside the host; for that, the port must be published with -p when running the container, as in step 13.
  • CMD allows you to run a command as soon as the container is launched. CMD is different from RUN: RUN is executed whilst building the image, whereas CMD is executed whilst launching the container.
  • Always check the official Docker documentation when creating Dockerfile.
  • You always stop a running container using the command docker stop <name of the container>. For example, docker stop ApacheTunnelixProd.
  • Also, you can remove a container with the command docker rm <name of the container>. For example, docker rm ApacheTunnelixProd.

Updates:

As explained by Arnaud Bonnet, one should be careful when using full distributions such as CentOS, Debian, etc. as base images, since they can be vulnerable; auditing is important before deploying to production. A look into Alpine and BusyBox can be very useful.
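As a quick illustration of why minimal bases are attractive, you can compare image sizes locally (exact sizes vary by release; Alpine is typically only a few megabytes versus a couple of hundred for CentOS):

```shell
# Pull a full-distribution base and a minimal one for comparison.
docker pull centos:latest
docker pull alpine:latest
# Compare the on-disk sizes of the two base images.
docker images | grep -E '^(centos|alpine) '
```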

Also, MAINTAINER has been deprecated and replaced by LABEL. Arnaud gave some examples such as:

LABEL com.example.version="0.0.1-beta"
LABEL vendor="ACME Incorporated"
LABEL com.example.release-date="2015-02-12"
LABEL com.example.version.is-production=""


30 basic commands to start with Docker Container

It’s been a long time since I blogged about Docker. In the article Installing, Updating and Playing around with a Docker container, which dates back to 2016, I showed some basics of Docker installation and some basic commands to start with. Time to dive a little deeper into the basics of Docker. For installation on the Fedora operating system, please see the official installation instructions on the Docker webpage. All commands and illustrations in this blog post have been tested on Fedora. Once Docker is installed, there are various interesting commands you can adventure around.


Starting up with Docker containerization

1. Searching for CentOS image with the most stars:

docker search --filter=stars=100 centos

2. You can also pull an image:

docker pull centos

3. To view the images, simply do this:

docker images

4. You can also run the container/images

docker run  -it --name=centostunnelix centos /bin/bash

5. When you run a container and exit from it, you cannot run it again with the same command, because that container name (in my case centostunnelix) was used previously. Note that the container has been stopped, not removed! You will find it stopped using the following command:

docker ps -a

6. You can now start the container back:

docker start centostunnelix

7. And after starting it, you can simply stop the container:

docker stop centostunnelix

8. You can also gracefully remove it:

docker rm centostunnelix

9. You can also rename it, for example to centosprod in this case:

docker rename centostunnelix centosprod

10. A Docker container in the exited state means that the changes you made are still there; however, it is not running. You can start it again and either connect to it directly, or run it in the background and attach to it whilst it is running.

docker start centosprod
docker attach centosprod

Warming up with Docker…

11. Moreover, if you need to detach from the container without stopping it, simply press the following two key combinations one after the other. It is good practice to check with docker ps -a afterwards:

Ctrl+p and Ctrl+q

12. But this key combination can be painful, so as a good practice, after starting a Docker container, use the following command to get into it; typing Ctrl+d or exit will then leave your container up and running:

docker exec -it centosprod /bin/bash 

13. To get the last container that you have run using the following command:

docker ps -l

14. To see all the commands that have been executed inside a container with its timestamp, use the following command:

docker logs centosprod -t

15. You can also pause and unpause containers which are actually freezing and unfreezing it using the following commands:

docker pause centosprod
docker unpause centosprod

16. Imagine you want to run a container but, as soon as you exit from it, it should be destroyed immediately. For that, start the container with the following command:

docker run --rm -it centos /bin/bash

Docker Hub

17. Docker Hub is a library and community for Docker container images. You can access it at hub.docker.com and create an account. Then, from your terminal, use the following command to log in to Docker Hub:

docker login

18. As explained in part 3, to see the list of images created, you can use docker images. To push an image to your repository, you firstly need to give the image a tag using the following command (the tagged image keeps the same image ID):

docker image tag centos thetunnelix/centostunnelix

19. By now, if you launch the command docker images again, you will see the same image under a new name tag. To upload it to your repository, use the following command:

docker image push thetunnelix/centostunnelix

20. You can also delete an image locally using the following command:

docker image rm -f centostunnelix

21. To retrieve back your image use the following command:

docker pull thetunnelix/centostunnelix

Let’s dive into Networking

22. Once inside the Docker container, the command ifconfig is not present, so I installed the package net-tools using the command yum install net-tools -y. After installing the package and firing ifconfig, you will notice that the network card has been assigned an IP address. On your physical machine, launch the following command to see your connection names, UUIDs, types and devices. You will see a connection named docker0 of type bridge:

nmcli connection show

23. To see how many virtual devices are connected to docker0, you can use the following command:

brctl show docker0

24. Since Docker creates a virtual bridge on the machine, you can also see it using the following command:

docker network ls

25. To get more details about the network configuration of each container use the following:

docker network inspect bridge

26. So, we have seen that by default Docker creates a bridge, and all containers are assigned IPs from that bridge only. However, we can also create another bridge, specifying the gateway as well as the subnet, which is pretty interesting:

docker network create tunnel0 --subnet 10.0.0.0/24 --gateway 10.0.0.1

27. Once you have created a new network bridge, you can use the commands docker network ls and docker network inspect tunnel0 again to confirm that the bridge has been created. Now, to start a container in the subnet 10.0.0.0/24, simply use the same command as in step 4, but this time with the argument --net <name of virtual bridge>:

docker run -it --net tunnel0 --name=centosprodnew centos /bin/bash

28. In step 27, we saw how to create a container on a particular network. Imagine that you now want to attach the same container to the “bridge” network that Docker creates by default (you can see it using the command docker network ls). To connect it to “bridge”, use the following command:

docker network connect bridge centosprodnew

29. To disconnect it from “bridge”, simply do the following:

docker network disconnect bridge centosprodnew

30. To get logs at host level launch the following command:

journalctl -u docker.service

Tips:

  • You can run an image directly, for example docker run fedora, even if you did not pull it; Docker will automatically pull it and run it for you.
  • Every time you run a container with a different container name, it is assigned a unique ID under the directory /var/lib/docker/containers. Every container has a unique ID, and docker ps -aq will show you the containers.
  • When a name is not specified when starting a Docker container, Docker will assign one to it; it could be a really funny name.
  • If you want a container to be removed automatically as soon as you exit from it, start it with the --rm flag as shown in step 16.
  • Always remember: whenever you start a Docker container, a unique ID is allocated to it, a filesystem is allocated and mounted read/write for the container, a network/bridge interface is allocated followed by an IP assignment, and finally the process is executed by the user.
  • By default, all Docker containers are assigned an IP address from the docker0 range.
  • You can also create a container using the argument --hostname; by default Docker will append the /etc/hosts file with the IP and hostname of the container.
  • A virtual network binds to the bridge, which creates a virtual subnet shared between the host and every container. It is basically a NAT rule that allows containers to talk to the internet but not the other way around; this concept is similar to the NAT option in VirtualBox.
  • In step 28, “bridge” is the name of the virtual bridge that has been created by default in Docker.
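The NAT behaviour described in the tips above can be observed on the host itself, since Docker programs it via iptables (requires root):

```shell
# Show the NAT rules Docker created, including the MASQUERADE rule
# that lets containers reach the internet.
iptables -t nat -L POSTROUTING -n
# Published ports appear as DNAT rules in the DOCKER chain.
iptables -t nat -L DOCKER -n
```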

If you liked these Docker basics and have any questions, please comment below. In future articles, I will focus on building Docker images and publishing ports, Docker Swarm, Kubernetes with Docker, and metrics and monitoring of Docker containers.