
Getting started with Terraform

Terraform is an open-source tool created by HashiCorp and written in the Go programming language. Terraform allows us to define our infrastructure as code using a declarative language. It's important to understand that the Terraform language is declarative: it describes the intended goal rather than the steps to reach that goal. Once you define your infrastructure, Terraform figures out how to create it. Terraform also supports a variety of cloud providers and virtualization platforms such as AWS, Azure, VMware, OpenStack, etc. This is pretty cool, as it eliminates several manual tasks, for example when creating several AWS instances.

Photo credits: terraform.io

Installation of Terraform

1. This is pretty simple. You just have to go to the official website and download the package. In my case, I am on a Linux machine, so I am choosing the Linux 64-bit package.

To download and unzip it, use the following command:

wget https://releases.hashicorp.com/terraform/0.12.10/terraform_0.12.10_linux_amd64.zip && unzip terraform*.zip

2. I moved the binary to /usr/local/bin. Make sure this directory is in your PATH environment variable.

mv terraform /usr/local/bin

3. At this point, the binary should be in place and you should be able to check the version.

terraform version

Setting up API call for Terraform on AWS

4. We also need to allow Terraform to make API calls on our behalf. I will be calling the AWS API. For that, you will need to create a user in AWS IAM and assign it the appropriate rights and policies. Assuming that you have already created the user and have its credentials, use the following commands to export them:

export AWS_ACCESS_KEY_ID="AKIA***************"
export AWS_SECRET_ACCESS_KEY="mVTwU6JtC***************"
export AWS_DEFAULT_REGION="us-east-1"

Writing the code

5. Once you are done exporting the credentials, you can start building your Terraform code. The whole code is on my GitHub and you can download it for free.

The first thing is to configure the provider and the region.

provider "aws" {

 region = "us-east-1"

}

6. Each provider supports different kinds of resources such as load balancers, servers, databases, etc. In this example, we are creating a single EC2 instance. I have chosen an Amazon Linux image and the smallest nano instance type. The tags are just identifiers in AWS.

resource "aws_instance" "web" {

  ami           = "ami-0b69ea66ff7391e80"

  instance_type = "t2.nano"

} 

7. Then launch terraform init to initialize the Terraform working directory. By that, I mean that it will download the AWS provider plugin. You should see a similar type of output on your screen.

8. Before performing the actual change, you can use terraform plan to understand what changes will be made. The plus sign marks resources that are going to be added and the minus sign marks those that are going to be removed.

9. To create the instance, use terraform apply. It will prompt you to type 'yes' to continue with the creation.
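For reference, the whole workflow from steps 7 to 9 boils down to three commands, run from the directory containing your .tf file:

terraform init     # downloads the AWS provider plugin
terraform plan     # shows what will be added (+) or destroyed (-)
terraform apply    # asks for 'yes', then creates the resources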

10. If you go to the AWS EC2 console, you will notice that the resource has been created successfully.

11. Hey, it's not over yet! There are more things that need to be added, for example the name of the instance. Let's call it Nginx-Web. Add the following tags block inside the aws_instance resource, then launch terraform apply again.

tags = {
  Name = "Nginx-Web"
}

Adding User Data and Security groups

12. At this stage, I believe you understand what Terraform is and how it works. To install Nginx on the instance at boot time, add the following user_data block inside the aws_instance resource:

user_data = <<-EOF
  #!/bin/bash
  yum install nginx -y
  systemctl start nginx
  systemctl enable nginx
  EOF

13. To add the security group, enter this code:

resource "aws_security_group" "allow_http" {

  name        = "allow_http"

  description = "Allow HTTP inbound traffic"

  ingress {

    from_port   = 80

    to_port     = 80

    protocol    = "tcp"

    cidr_blocks = ["0.0.0.0/0"]

  }

14. In part 6, under instance_type, I have added the line below. What does it mean? "aws_security_group" is the resource type, "allow_http" is the name of the security group defined in part 13, and lastly "id" is the attribute being referenced.

  vpc_security_group_ids = ["${aws_security_group.allow_http.id}"]
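Putting parts 6, 11, 12 and 14 together, the aws_instance resource now looks roughly like this (a sketch based on the snippets above, using the same AMI and names):

resource "aws_instance" "web" {
  ami                    = "ami-0b69ea66ff7391e80"
  instance_type          = "t2.nano"
  vpc_security_group_ids = ["${aws_security_group.allow_http.id}"]

  user_data = <<-EOF
    #!/bin/bash
    yum install nginx -y
    systemctl start nginx
    systemctl enable nginx
    EOF

  tags = {
    Name = "Nginx-Web"
  }
}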

15. Note that when launching terraform apply, Terraform will destroy the old machine and build a new one, which implies that there will be downtime.

16. You can also view your code as a dependency graph. Launch the command terraform graph. The output can be made more human-readable through Graphviz, which you have to install. You can also go to webgraphviz.com to view it online.
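If you have Graphviz installed, one way to render the graph locally is, for example:

terraform graph | dot -Tpng > graph.png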

It is very interesting to understand the dependencies when using a declarative language like Terraform's. The full code can be viewed on my GitHub repository.



Recover logical volumes data from deleted LVM partition

Have you ever deleted a logical volume by accident? Can you recover it by looking into the backups? Well, the answer is YES. For those who are not familiar with it, Logical Volume Management (LVM) "is a device mapper target that provides logical volume management for the Linux kernel" (Wikipedia). It is an abstraction layer, a piece of software placed on top of your hard drive, that allows flexible manipulation of the disk space. I have published several articles on LVM on this blog in the past.

All tests in this blog post have been carried out on a CentOS machine. Please do not run live tests on a production server.

Image Credits: Redhat.com

1. So, as you can see below, I have an LV called lvopt which belongs to a VG called centos.

2. The same LV is mounted on /opt.

3. There is some data in that partition as well.

4. I also created a directory inside /opt.
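For reference, the starting state in steps 1 to 4 can be reproduced with something like the following (the directory name is only an example):

lvs                    # shows the LV lvopt in the VG centos
df -h /opt             # /dev/mapper/centos-lvopt mounted on /opt
mkdir /opt/testdir     # example directory to simulate existing data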

5. Now, let's pretend to remove the LV lvopt. Or say, someone did it by accident after it was unmounted. The command lvremove will be used here to remove the LV. Note that the LV needs to be unmounted first.
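In my case, the accidental removal looked roughly like this:

umount /opt
lvremove /dev/centos/lvopt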

6. If you run lvs, lvdisplay or vgs, or even try to mount the partition again, you cannot: the data appears to be lost. But you can still recover it, because LVM keeps an archive of your LV metadata inside the folder /etc/lvm/archive. You cannot read those files directly, though.

7. But you can still interpret part of those files. Since the deleted LV belonged to the volume group called "centos", we know that it is referenced in the files named centos_… The question that arises here is: which file is relevant for you? To understand which archive you want to restore, use the command vgcfgrestore --list <name of volume group>. Here is an example:
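For the volume group used in this post, that would be:

vgcfgrestore --list centos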

8. If you observe carefully, each archive was backed up at a certain time. In my case, I deleted the LV on 18-Apr-2019 at 11:17:17:

9. So, I want to restore from that last archive. You will need to copy the full path of the VG archive file. In my case it is /etc/lvm/archive/centos_00004-1870674349.vg. The goal here is to restore the LV to the state it was in before that point in time, or simply to restore the LV as it was before the lvremove command was fired. Here is the command:
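Using the archive file mentioned above, the restore command looks like this:

vgcfgrestore -f /etc/lvm/archive/centos_00004-1870674349.vg centos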

10. If you launch the command lvs, you will notice the presence of the LV.

11. But mounting the LV back won't work yet. This is because the LV is inactive. You can see this with the command lvscan, which will show that lvopt is inactive.

12. To activate it, you simply need to use the command lvchange.
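A minimal sketch with the LV from this post:

lvscan                       # lvopt shows as inactive
lvchange -ay centos/lvopt    # activate the logical volume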

13. Mount it back and you are done.
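For example, using the same mount point as before:

mount /dev/centos/lvopt /opt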

I believe this can be very useful, especially when you encounter a situation where someone has deleted an LV. I hope you enjoyed this blog post. Please share and comment below if you liked it.


cyberstorm.mu team at Developers Conference Mauritius

A few weeks back, I registered to present the Ansible automation tool at the Developers Conference 2019 at Voila Hotel, Bagatelle, Mauritius. The event is an initiative of the Mauritius Software Craftsmanship Community (MSCC), sponsored by several companies such as Mauritius Commercial Bank, SdWorx, Eventstore, Ceridian, Castille, etc. Other members of cyberstorm.mu also registered to present: Codarren Velvindron, technical lead at Orange Business Services, who spoke about "becoming an automation artist"; Loganaden Velvindron, who spoke about "RedHat Enterprise Linux 8 and Derivatives have a new Firewall: NFTABLEs"; and Nathan Sunil Mangar, who spoke about "Introduction to the STM32". There was also a special session where Mukom Akong Tamon, head of capacity building for the Africa region at Afrinic, spoke on "IPv6 deployment in Mauritius and Africa at large". I introduced myself as a member of cyberstorm.mu and DevOps Engineer at Orange Business Services, and spoke on Ansible for beginners with some basic and advanced demos.

In the past, I have written several articles on Ansible:

  1. Getting started with Ansible deployment
  2. Some fun with Ansible Playbooks
  3. Configure your LVM via Ansible
  4. Some tips with Ansible Modules for managing OS and Applications
  5. An agentless servers inventory with Ansible and Ansible-CMDB
  6. Project Tabulogs: Linux Last logs on HTML table with Ansible

My presentation started with a basic introduction to Ansible, followed by some brief examples and demos. I began with a short introduction of myself. It looked like a mixed audience, including students, professionals from both the management and technical sides, engineers, and others. I quickly went over why we need Ansible in our daily life, whether at home or in production. Ansible is compatible with several operating systems, and one of the most interesting related tools is AWX, which is an open-source product. Before getting started with Ansible, it is important to grasp some keywords, so I introduced them and gave some examples using playbooks. Ansible ad-hoc commands were also used. The audience was asked to give some ideas about what they would like to automate in the future, and there were lots of interesting examples. I laid some emphasis on reading the docs and keeping track of the version of Ansible one is using. I also gave a brief idea about Ansible Galaxy, ansible-doc, ansible-pull and Ansible Vault. To spice up your automation layout, it is nice to use Jinja templates and verbosity for better visual comprehension. I also spoke about Ansible-CMDB, which is not an official Ansible tool. Some days back, I blogged about Ansible-CMDB, which is pretty interesting for creating an inventory, and I shared some ideas about how to modify the source code of Ansible-CMDB. I also showed an example using an Ansible playbook to build up a web app.

Thanks, everyone for taking pictures and some recordings.

cyberstorm.mu @ DevConMru

After my session, I went to the Afrinic session on IPv6, where Mukom Akong Tamon presented an introduction to IPv6 and the IPv6 address format, along with several examples of why it is important to migrate to IPv6. Loganaden Velvindron from Afrinic enlightened the audience about dual-stack programming.

One of the important points Mr. Mukom mentioned is that there are still developers hard-coding IP addresses in their code, which is not a good practice.

There was another session by Loganaden Velvindron of Afrinic, who spoke on NFtables in RedHat 8. Mukom was also present at that session. Loganaden explained the NFtables architecture and its advantages, and also explained how to submit patches and build dual-stack support with NFtables.

Codarren Velvindron, technical lead at Orange Business Services and member of cyberstorm.mu, explained why automation is important. He took some examples from the conference.mscc.mu website itself and also gave some ideas using "Expect". For those who are not familiar with "Expect", it is a scripting language that talks to interactive programs or scripts that require user interaction.

Nathan Sunil Mangar also presented an introduction to the STM32 microcontroller. He gave some hints on how to distinguish between fake and real microcontrollers on the market. Apart from the basic introduction, he went over some examples of several projects and explained which microcontroller fits best, although it also depends on the budget. He also showed how to use the programming tool for the STM32 microcontroller, and the documentation was perused during the presentation. At the end of the presentation, there were several giveaways by Nathan Mangar, including fans, microcontrollers, and a small light bulb built around an STM32.

I also had the opportunity to meet several staff members from the Mauritius Commercial Bank who asked for some hints and best practices on Ansible. I also had some conversations with other people in the tech industry such as Kushal Appadu, Senior Linux System Engineer at Linkbynet Indian Ocean, with whom I discussed new technologies at length. Some days back, I presented the technicalities of automation as a DevOps Engineer at SupInfo University Mauritius under the umbrella of Orange Cloud for Business and Orange Business Services. I was glad to meet a few SupInfo students at DevCon 2019 who instantly recognized me and congratulated me on the Ansible session.

Speaker at SUPINFO under the umbrella of Orange Business Services

I sincerely believe there is still room for improvement at the Developers Conference, such as the website itself, which needs some security improvements. A feature that could be added is to label which sessions are for beginner, intermediate or advanced audiences, so that attendees can choose better. The rating mechanism, which is not based on constructive feedback, might discourage other speakers from coming forward next time. But overall, it was a nice event. Someone from the media team filmed me for a one-minute video; I hope to see it online in the future. I also got a "Thank You" board for being a speaker from Vanessa Veeramootoo-Chellen, CTO at Extension Interactive and one of the organizers of the Developers Conference, who seemed to be constantly working, busy and on the move during the event.


Attending AWSome day online conference 2019

AWSome Day was a free online conference and training event, sponsored by Intel, providing a step-by-step introduction to the core AWS (Amazon Web Services) services. It was free and everyone could attend, and it was scheduled online for 26 March 2019. The agenda covered broad topics such as AWS Cloud Concepts, AWS Core Services, AWS Security, AWS Architecting, and AWS Pricing and Support. It is pretty interesting for IT managers, system engineers, system administrators, and architects who are eager to learn more about cloud computing and how to get started on the AWS cloud. I do have some experience managing AWS servers and even host my own server; however, I registered for the free training to refresh my knowledge and get more exposure to areas such as AWS pricing, which I was not aware of at all. Another interesting thing is that you receive a certificate of attendance and 25 USD of AWS credits. Pretty cool, right?

Right from the beginning, I knew this was something interesting. I encountered a minor problem whilst signing in; I had to send a mail to support and it was resolved immediately. Once connected to the lobby, it was pretty easy to attend and follow the online conference. After some minutes, Steven Bryen of AWS delivered the keynote speech.

There was also an online challenge, and I scored 25,821 on the Trivia Leaderboard.

On the "Ask an Expert" tab, I was mostly interested in the man-on-the-side (MOTS) attack. They referred me to the WAF section on AWS. Another interesting link is the whitepaper on the AWS overview of security guidelines. AWS also offers comprehensive security across all the layers: SSL, DDoS protection, firewalls, HSM and networking. I also shot some questions on metrics and monitoring at the application level, such as for MariaDB, and discovered RDS Performance Insights. For applications on EC2, containers, and Lambda, X-Ray looks very promising. Apart from virtualization, it is good to note that AWS also provides containerization services.

The event was pretty enriching. The panel in the questions area knew their subject well. I discovered a lot by participating in AWSome Day. I'm looking forward to AWS certifications in the near future.


Building Docker images and publishing ports

One of the most challenging tasks in a production environment with Docker is to build images and publish ports. As promised in the previous article, I will publish more articles on Docker images. So, here we are! For those who missed the previous articles on Docker, firstly we have the basic installation of Docker and some basic commands, and secondly, we have an article dedicated to 30 basic commands to start with Docker containers. Note that all illustrations and commands in this blog post have been tested on Fedora.

Building Docker images

What is a Docker image? Firstly, we need to understand what an image is. It is a compressed, self-contained piece of software. Once unwrapped, it becomes meaningful to use because it is all about the functionality that makes the image useful. An image could contain software, an operating system, a service, etc. On the other hand, a Docker image is created from a series or sequence of commands written to a file called a "Dockerfile". Whenever you execute the Dockerfile using the Docker command, it will output an image, thus a Docker image. In this blog post, we are going to build a Docker image using an existing Docker image.

1. As described in part 3 of the article "30 basic commands to start with Docker container", to view the currently available images, you can use the following command:

docker images

2. In my case, I have a CentOS image. My goal is to make a Docker image which has the Apache web server already pre-installed. Now, there are two methods to create an image from an existing one. The first is the commit method with the docker commit command, which is not extensively used due to its lack of flexibility. The other is by creating a Dockerfile. In my home directory, I created a directory at /home/webserver. This directory will be used to build up the web server. You can also create an index.html file to be used as the index page of the web server. Use the following basic commands:

mkdir /home/webserver && touch /home/webserver/{index.html,Dockerfile}

3. I then edited the index.html. Let's enter some random phrase in it for testing purposes.

This is a test by TheTunnelix

4. Edit the Dockerfile and define it as indicated below. In the comments, I give some explanation for each line:

# Take the latest CentOS image as the base.
FROM centos:latest
# Just a reference, using my e-mail.
LABEL maintainer="[email protected]"
# Run the command to install httpd.
RUN yum install httpd -y
# Copy index.html from the webserver folder to the docroot.
COPY index.html /var/www/html
# Always launch the binary to start the httpd daemon in the foreground.
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
# Apache runs on port 80. This port needs to be exposed to reach the webserver.
EXPOSE 80

5. Now, change to the directory where your Dockerfile and index.html are located. We will build the image from the Dockerfile using the docker build command.

docker build -t centos:apache .

6. You can check it using the command docker images, and you should notice that a new image has been created and tagged with apache. You can also view all intermediate images using the following command:

docker images -a

7.  To run it, you can use:

docker run -dit --name=ApacheTunnelix centos:apache

At this stage, a docker ps will show you the container running. Remember, in part 24 of the article "30 commands to start with Docker container", we learned that Docker creates a bridge. You can check it using docker network ls, and you can also confirm it using the brctl show command.
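A quick way to verify all of this, assuming the container name used above:

docker ps            # ApacheTunnelix should be listed as running
docker network ls    # shows the default "bridge" network
brctl show           # the docker0 bridge on the host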

8. When launching the command docker inspect on the container, I can see that my container is reachable at the IP address 172.17.0.2, and it serves the content of the index.html file created in section 3, which I can also open in my browser. You can also check it using the following curl command:

curl http://172.17.0.2
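A handy shortcut, if you only want the IP address out of docker inspect, is its format flag:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' ApacheTunnelix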

Publishing the port

9. The point is that the container ApacheTunnelix, with IP address 172.17.0.2, is not reachable from outside the physical host on which I am running my Docker engine. The catch is that we need to go through a step called publishing ports.

10. I will now create another web server to better differentiate between the container (ApacheTunnelix) accessible only locally and another container (let's call it ApacheTunnelixProd) which needs to be accessible on the same network as the physical machine. I copied the directory /home/webserver to /home/webserverprod and changed into the directory webserverprod.

cp -rp /home/webserver /home/webserverprod && cd /home/webserverprod

11. For my own reference, I changed the index.html to:

This is a test by TheTunnelix on Prod.

12. Repeat step 5 by building a new image with a new name:

docker build -t centos:apacheProd .

13. Compared to step 7, where we ran the container without publishing the port, this time we will run it while publishing the port so that it is reachable from outside the physical machine. By default, the container's web server listens on port 80. To make it accessible on, say, port 5000 of the host, we use the following command:

docker run -dit --name=ApacheTunnelixProd -p 5000:80 centos:apacheProd

14. By now, the container should be accessible on any IP address of the local machine, including localhost. In my case, the IP address of my physical machine is 192.168.100.9. You can test it using the command:

curl http://192.168.100.9:5000

Or you can simply access it from a browser.

15. The docker ps command is of great importance, as it will show you the source and destination of the port mapping. Another interesting command to understand the port mapping is docker port. For example:

docker port ApacheTunnelixProd

This will show the following result:

80/tcp -> 0.0.0.0:5000

In the next article on Docker, I will share some more interesting tips on Docker networking. Keep in touch and comment below with suggestions and improvements.

Tips:

  • EXPOSE documents that the web server inside the container listens on port 80. On its own it does not publish the port; the port still has to be published (for example with -p, as in step 13) for the web server to be reachable from outside the container.
  • CMD allows you to run a command as soon as the container is launched. CMD is different from RUN: RUN is used while building the image, whereas CMD is used when launching a container from the image.
  • Always check the official Docker documentation when creating a Dockerfile.
  • You can always stop a running container using the command docker stop <name of the container>. For example, docker stop ApacheTunnelixProd.
  • Also, you can remove a container with the command docker rm <name of the container>. For example, docker rm ApacheTunnelixProd.

Updates:

As explained by Arnaud Bonnet, one should be careful when using distributions such as CentOS, Debian, etc., which can carry vulnerabilities, so auditing is important before deploying to production. A look into Alpine and BusyBox can be very useful.

Also, the MAINTAINER instruction has been deprecated and is now replaced by LABEL. Arnaud gave some examples such as:

LABEL com.example.version="0.0.1-beta"
LABEL vendor="ACME Incorporated"
LABEL com.example.release-date="2015-02-12"
LABEL com.example.version.is-production=""