
Diving into the basics of IPv6

The Internet is growing. If you are not on IPv6 yet, one day you will probably need to migrate from IPv4 to IPv6. Whether you go for a dual-stack approach or a direct changeover depends on a rigorous observation and analysis of your network infrastructure, but it should no longer be seen as an insurmountable complexity. For several years now, many companies, government bodies, ISPs and others have been moving towards IPv6, and some have adopted dual stack. IPv6 can be thought of as version 2 of the Internet. In this blog post, I will do my best to explain, in a simple way, the features and benefits of IPv6 and contrast it with IPv4. While researching, I consulted several books and blog posts on the subject. One of the challenges in Africa is to enable a smooth transition to IPv6: while some organisations are running dual stack, others have successfully migrated their whole network infrastructure to IPv6. IPv4 was created in the early 1980s, and the enormous growth of the Internet means it will eventually need modern technology such as IPv6 running at its core. I have always admired the futurist physicist Dr. Michio Kaku, who said that "In the future, the Internet might become a brain".

Diving into the basics of IPv6 1

So why do we really need IPv6?

Besides the growth of the Internet and the scarcity of IPv4 addresses, we all know that the IPv4 address space has been divided into two parts: private and public IP allocations. Interconnecting those two segments requires NAT configuration, which breaks the end-to-end continuity of the Internet. Another reason is that there is no security built into the core of IPv4; of course, there are other strategies to secure an IPv4 network. When it comes to data prioritisation, it cannot be done at the core of IPv4 either, which means there is not much Quality of Service (QoS). In IPv4, we can configure an IP address on a device manually or simply use an address configuration mechanism such as DHCP, but the moment DHCP is down, we have a problem. Here is the catch: there is no built-in way for a device to be assigned a globally unique address. That is why we need IPv6. Well, wait... what happened to IPv5? And what about IPv1, IPv2, and IPv3?

What happened to IPv1, IPv2, IPv3, and IPv5?

Have a look at the diagram below which makes it pretty easy to understand:

Photo Credits: tutorialspoint.com

So, IP versions 0 to 3 were used during the development and testing phase of the protocol, while IPv5 was reserved for the experimental Internet Stream Protocol.

Features of IPv6

IPv6 is not backward compatible with IPv4; the basic job of addressing and routing packets remains the same, but the protocol itself has been redesigned. Since an IPv4 address is 32 bits and an IPv6 address is 128 bits, just imagine how much bigger the address space is. An IPv6 address has four times as many bits as an IPv4 address, which works out to something on the order of 10^23 addresses for every square metre of the Earth's surface.

Photo credits: transition.fcc.gov

Another difference is the header: the fixed IPv6 header is twice the size of the minimum IPv4 header, but it has a simpler format because rarely used fields were moved to optional extension headers.

Photo Credits: radioworld.com

In IPv6, there is also end-to-end connectivity which means that NAT is not required for the continuity of the Internet. Every host can reach another host over the Internet.

Photo Credits: concurrency.com

Other features include address "auto-configuration", which can be either stateless or stateful. Stateless auto-configuration does not require any intermediate support such as a DHCP server for IP assignment, whereas stateful configuration serves IP addresses from a pool. Also worth considering is "faster routing": in IPv6, the routing information is stored in the first part of the header, which allows routers to make forwarding decisions faster. Another feature is IPsec (IP Security), which creates an end-to-end tunnel between the source and the destination, although its use is optional.

"No broadcast" is another characteristic of IPv6. On an IPv4 network, you will notice during address configuration that clients need to broadcast to reach the DHCP server. In IPv6, a client does not broadcast; instead, it multicasts to communicate with machines on the network. It is important to understand the difference between broadcast (one-to-all) and multicast (one-to-many): in broadcast, a client sends messages to all hosts on the network, whereas in multicast, messages are sent only to a group of stations. This allows the building of distribution networks where group management is required. IPv6 does not limit itself to multicast but also keeps unicast (one-to-one), which is used when a device needs to talk to one specific device, for example one router communicating with another specific router. If there are several equivalent routers nearby and any of them could handle the traffic, say for a CDN, the anycast method (one-to-nearest) can be used to route packets more efficiently.

Photo Credits: techiemaster.wordpress.com

Reading IPv6 addressing

Now that you have grasped the basic concepts of IPv6 and why we need it, let's see how to read an IPv6 address. An IPv6 address is made up of 128 bits divided into eight 16-bit blocks. Each block is then converted into a 4-digit hexadecimal number, and the blocks are separated by colons. For example, here is an IPv6 address in binary:

0010011000000110 0100011100000000 0000000000110000 0000000000000000 0000000000000000 0000000000000000 0110100000010010 0010100001100000

The three consecutive all-zero blocks can be compressed between two colons, and the leading zeros of the third block are dropped so that it is written simply as 30. If there were only a single all-zero block, it would be written as a single 0. Converted to hexadecimal, the address is:

2606:4700:30::6812:2860

Let’s get into more details. There are two rules when reading an IPv6 address.

Rule 1: Leading zeros are discarded. As we can see in the 3rd block of the IPv6 address above, i.e. 0000000000110000, it is written as 30 after conversion, because only the significant bits 110000 need to be read. There are plenty of tutorials online on how to convert binary to hexadecimal.

Rule 2: If two or more consecutive blocks contain only zeros, omit them all and replace them with a double colon. For example, the three all-zero blocks above have been replaced by "::". Note that the double colon may appear only once in an address. If there is a single all-zero block, write it as 0 in the IPv6 address.
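
If Python 3 happens to be installed on your machine (an assumption; it is not used anywhere in this post), you can quickly check both rules with its ipaddress module:

python3 -c "import ipaddress; a = ipaddress.ip_address('2606:4700:30::6812:2860'); print(a.exploded); print(a.compressed)"

The first line of output is the fully expanded form 2606:4700:0030:0000:0000:0000:6812:2860 and the second is the compressed form again.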

Assignment of IPv6 address

Similar to IPv4, we need to understand how to identify the network and host portions of an IPv6 address. Let's take the example of a generic unicast address, which uses 64 bits as the network ID and 64 bits as the host (interface) ID. Please note from the picture below that the 64 network bits are split into three distinct fields in the IPv6 address structure.

Photo Credits: www.networkworld.com

At this stage, it should be clear how a generic unicast address is designed. Another important point is the IPv6 address scope. A scope is the region of the network within which an address is valid as a unique identifier of a network interface. As we can see below, there are three scopes: Global Unicast, Unique Local, and Link Local.

Photo Credits: steves-internet-guide.com

The Global Unicast Address is routed and reachable across the Internet. The global routing prefix is assigned by the Internet Assigned Numbers Authority (IANA), so by looking only at the prefix of an IPv6 address, you can determine whether it is global or not. In the picture below, you can see the first 3 bits within the global prefix. Remember that this prefix is globally unique.

Photo Credits: cisco.com

Then comes the Site Level Aggregator (SLA), which is the subnet ID assigned to the customer by the service provider. This is followed by the LAN ID, which is used by the customer and is free to manipulate. The resulting address is globally unique.

Let's take a look at a Unique Local Unicast Address. It is the IPv6 counterpart of private IPv4 addresses and is used for local and inter-site communication, usually within a LAN or over a VPN. It is not routable on the Internet.

Photo Credits: cisco.com

The last one is the link-local unicast address. This is used for communication between two IPv6 devices on the same link. It is assigned automatically by the device as soon as IPv6 is enabled, and it is not routable. These addresses are identified by the first 10 bits of the address, i.e. the FE80::/10 prefix.

Photo Credits: cisco.com

In this blog post, I only took examples from unicast addresses. Remember, there are also multicast and anycast address ranges. Let's now create a server and perform some IPv6 configuration.

Goodbye IPv4 and say hello to IPv6

I created a CentOS 7 machine in VirtualBox. As you can see, the interface enp0s8 has the IP address 192.168.100.9 as well as fe80::9ef3:b9d3:8b87:4940. Remember, the fe80 address is the link-local address.

Diving into the basics of IPv6 2
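
For reference, the addresses on the interface can be listed with a command along these lines:

ip addr show enp0s8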

You can also see the connection using the following command:

Diving into the basics of IPv6 3
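
The command in question is most likely the standard NetworkManager listing command (an assumption, since only a screenshot was shown in the original post):

nmcli connection show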

To create a connection using nmcli, use the following command and then list the connections again. You will notice that the connection has been created without any device attached to it.

Diving into the basics of IPv6 4
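
A plausible form of that command is sketched below; the exact flags are an assumption, but passing ifname "*" creates the connection without binding it to a device:

nmcli connection add type ethernet ifname "*" con-name ipv6-tunnelix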

I am now modifying the connection ipv6-tunnelix, attaching it to enp0s9 and assigning it an IPv6 address. (For learning and testing purposes only; this IPv6 address has not been assigned to me, it is one of Facebook's public IPv6 addresses.)

Diving into the basics of IPv6 5
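
A sketch of that modification, assuming the standard nmcli property names and using a placeholder instead of the actual IPv6 address from the screenshot:

nmcli connection modify ipv6-tunnelix connection.interface-name enp0s9 ipv6.method manual ipv6.addresses <ipv6-address>/64
nmcli connection up ipv6-tunnelix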

As you can see, the address has been assigned. But remember, just as when you assign a public IPv4 address to a virtual machine, the address needs to be routed before you get real connectivity. In this example, I used one of Facebook's public IP addresses.

Diving into the basics of IPv6 6
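
The assigned address can then be checked with something like:

ip -6 addr show dev enp0s9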

Is your blog IPv6 ready?

In 2016, during the migration to Cloudflare, tunnelix.com became dual stack, i.e. reachable over both IPv4 and IPv6. You can test any website for IPv6 support at this link:

https://ipv6-test.com 

Certifications

Getting certified on IPv6 is really interesting, as it demonstrates your understanding of the protocol. You can follow the free IPv6 training and get certified by Hurricane Electric. It is important to read their IPv6 primer first.

IPv6 Certification Badge for jmutkawoa

There is also a service from Hurricane Electric called Tunnel Broker, which provides free IPv6 tunnels over your static IPv4 address. In future blog posts on IPv6, I will go into more detail about it. If you like the article, please comment and share.


Recover logical volumes data from deleted LVM partition

Have you ever deleted a logical volume by accident? Can you recover it from the backups? The answer is YES. For those who are not familiar with it, "Logical Volume Management (LVM) is a device mapper target that provides logical volume management for the Linux kernel" (Wikipedia). It is an abstraction layer placed on top of your hard drive for flexible manipulation of disk space. I have published several articles on LVM in the past.

All the steps in this blog post were tested on a CentOS machine. Please do not try this live on a production server.

Image Credits: Redhat.com

1. As you can see below, I have a logical volume (LV) called lvopt in a volume group (VG) called centos.

Recover logical volumes data from deleted LVM partition 7
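
For reference, the listing commands behind that screenshot would be along these lines:

lvs
vgs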

2. It is mounted on /opt:

Recover logical volumes data from deleted LVM partition 8

3. There are some data in that partition as well:

Recover logical volumes data from deleted LVM partition 9

4. I created a directory inside the /opt directory

Recover logical volumes data from deleted LVM partition 10
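
Steps 2 to 4 boil down to commands along these lines (the directory name is just a hypothetical example):

df -h /opt
ls -l /opt
mkdir /opt/mydata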

5. Now, let's remove the LV lvopt on purpose, or pretend that someone did it by accident while it was unmounted. The lvremove command is used to remove the LV; note that the LV needs to be unmounted first.

Recover logical volumes data from deleted LVM partition 11
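
A sketch of this step, assuming the device path used throughout this post:

umount /opt
lvremove /dev/centos/lvopt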

6. If you now run lvs, lvdisplay or vgs, or try to mount the partition again, the LV is gone and the data appears to be lost. But you can still recover it, because LVM keeps an archive of the VG metadata inside the folder /etc/lvm/archive. These files are not meant to be restored by hand, though.

Recover logical volumes data from deleted LVM partition 12

7. You can still interpret part of those files. Since the deleted LV belonged to the volume group called "centos", we know it is referenced in the files named centos_… The question that arises is which file is the relevant one. To find out which archive you want to restore, use the command vgcfgrestore --list <name of volume group>. Here is an example:

Recover logical volumes data from deleted LVM partition 13
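
With the volume group name from this example, that is:

vgcfgrestore --list centos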

8. If you observe carefully, each archive was created at a specific time. In my case, I deleted the LV on 18-Apr-2019 at 11:17:17:

Recover logical volumes data from deleted LVM partition 14

9. So, I want to restore from that last archive. You will need to copy the full path of the VG archive file; in my case it is /etc/lvm/archive/centos_00004-1870674349.vg. The goal is to restore the LV to its state just before that point in time, or simply put, to restore the LV as it was before the lvremove command was fired. Here is the command:

Recover logical volumes data from deleted LVM partition 15
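
A sketch of the restore command, using the archive file and volume group from this example:

vgcfgrestore -f /etc/lvm/archive/centos_00004-1870674349.vg centos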

10. If you run the lvs command, you will notice that the LV is present again.

Recover logical volumes data from deleted LVM partition 16

11. But mounting the LV back will not work yet. This is because the LV is inactive, which you can see with the lvscan command. Note below that lvopt is inactive.

Recover logical volumes data from deleted LVM partition 17

12. To activate it you simply need to use the command lvchange.

Recover logical volumes data from deleted LVM partition 18
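
A minimal sketch of that activation step:

lvchange -ay /dev/centos/lvopt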

13. Mount it back and you are done.

Recover logical volumes data from deleted LVM partition 19
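
Assuming the same device path and mount point as before:

mount /dev/centos/lvopt /opt
ls /opt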

I believe this can be very useful, especially when you encounter a situation where someone has deleted an LV. I hope you enjoyed this blog post. Please share and comment below if you liked it.

cyberstorm.mu team at Developers Conference Mauritius

A few weeks back, I registered to present the Ansible automation tool at the Developers Conference 2019 at Voila Hotel, Bagatelle, Mauritius. The event is an initiative of the Mauritius Software Craftsmanship Community (MSCC), sponsored by several companies such as Mauritius Commercial Bank, SdWorx, Eventstore, Ceridian, Castille, etc. Other members of cyberstorm.mu also registered to present: Codarren Velvindron, technical lead at Orange Business Services, who spoke about "becoming an automation artist"; Loganaden Velvindron, who spoke about "RedHat Enterprise Linux 8 and Derivatives have a new Firewall: NFTABLEs"; and Nathan Sunil Mangar, who spoke about "Introduction to the STM32". There was also a special session in which Mukom Akong Tamon, head of capacity building for the Africa region at Afrinic, spoke on "IPv6 deployment in Mauritius and Africa at large". I presented as a member of cyberstorm.mu and DevOps Engineer at Orange Business Services, and spoke on Ansible for beginners with some basic and advanced demos.

cyberstorm.mu team at Developers Conference Mauritius 20

In the past, I have written several articles on Ansible:

  1. Getting started with Ansible deployment
  2. Some fun with Ansible Playbooks
  3. Configure your LVM via Ansible
  4. Some tips with Ansible Modules for managing OS and Applications
  5. An agentless servers inventory with Ansible and Ansible-CMDB
  6. Project Tabulogs: Linux Last logs on HTML table with Ansible

My presentation started with a brief introduction of myself, followed by a basic introduction to Ansible with some short examples and demos. It looked like a mixed audience, including students, professionals from both the management and technical sides, engineers, and others. I quickly went over why we need Ansible in our daily life, whether at home or in production. Ansible is compatible with several operating systems, and one of the most interesting related tools is AWX, which is an open-source product. Before getting started with Ansible, it is important to grasp some keywords, so I introduced them and gave some examples using playbooks. Ansible ad-hoc commands were also shown. The audience was asked to give some ideas about what they would like to automate in the future, and there were lots of nice examples. I laid some emphasis on reading the documentation and keeping track of the version of Ansible you are using. I also gave a brief idea about Ansible Galaxy, ansible-doc, ansible-pull, and Ansible Vault. To spice up your automation layout, it is nice to use Jinja templates and verbosity options for better visual comprehension. I also spoke about Ansible-CMDB, which is not an official Ansible tool; some days back, I blogged about Ansible-CMDB, which is pretty interesting for creating an inventory, and I shared some ideas on how to modify its source code. Finally, I showed an example of an Ansible playbook that builds a web app.

cyberstorm.mu team at Developers Conference Mauritius 21

Thanks to everyone who took pictures and recordings.

cyberstorm.mu @ DevConMru


After my session, I went to the Afrinic session on IPv6, where Mukom Akong Tamon gave an introduction to IPv6 and the IPv6 address format, along with several examples of why it is important to migrate to IPv6. Loganaden Velvindron from Afrinic enlightened the audience about dual-stack programming.

One important point Mr. Mukom raised is that there are still developers hard-coding IP addresses in their code, which is not a good practice.

cyberstorm.mu team at Developers Conference Mauritius 22

There was another session by Loganaden Velvindron of Afrinic, who spoke on nftables in RedHat 8; Mukom also attended that session. Loganaden explained the nftables architecture and its advantages, as well as how to submit patches and how to do dual-stack building with nftables.

cyberstorm.mu team at Developers Conference Mauritius 23

Codarren Velvindron, technical lead at Orange Business Services and member of cyberstorm.mu, explained why automation is important. He took some examples from the conference.mscc.mu website itself and also gave some ideas using "Expect". For those who are not familiar with it, Expect is a scripting language for automating interactive programs or scripts that normally require user interaction.

cyberstorm.mu team at Developers Conference Mauritius 24

Nathan Sunil Mangar presented an introduction to the STM32 microcontroller. He gave some hints on how to distinguish between fake and genuine microcontrollers on the market. Apart from the basic introduction, he went over examples from several projects and explained which one is better for which case, although the choice also depends on the budget. He also showed how to use the programming tool for the STM32 and walked through its documentation. At the end of the presentation, there were several giveaways by Nathan, including fans, microcontrollers, and a small light bulb project built with an STM32.

cyberstorm.mu team at Developers Conference Mauritius 25

I also had the opportunity to meet several staff members from the Mauritius Commercial Bank who asked for some hints and best practices on Ansible, and I had conversations with other people in the tech industry such as Kushal Appadu, Senior Linux System Engineer at Linkbynet Indian Ocean, with whom I discussed new technologies at length. Some days back, I presented the technicalities of automation as a DevOps Engineer at SupInfo University Mauritius under the umbrella of Orange Cloud for Business and Orange Business Services. I was glad to meet a few SupInfo students at DevCon 2019 who instantly recognised me and congratulated me on the Ansible session.

cyberstorm.mu team at Developers Conference Mauritius 26
Speaker at SUPINFO under the umbrella of Orange Business Services

I sincerely believe there is still room for improvement at the Developers Conference, such as the website itself, which needs some security improvements. Another feature that could be added is labelling each session as beginner, intermediate, or advanced, so that attendees can choose better. The rating mechanism, which is not based on constructive feedback, might discourage other speakers from coming forward next time. But overall, it was a nice event. Someone from the media team filmed me for a one-minute video; I hope to see it online in the future. I also got a "Thank You" board for being a speaker from Vanessa Veeramootoo-Chellen, CTO at Extension Interactive and one of the organisers of the Developers Conference, who seemed to be constantly working, busy, and on the move during the event.

Attending AWSome day online conference 2019

AWSome Day was a free online conference and training event, sponsored by Intel, that provided a step-by-step introduction to the core AWS (Amazon Web Services) services. It was free for everyone to attend and was held online on 26 March 2019. The agenda covered broad topics such as AWS Cloud Concepts, AWS Core Services, AWS Security, AWS Architecting, and AWS Pricing and Support. It is pretty interesting for IT managers, system engineers, system administrators, and architects who are eager to learn more about cloud computing and how to get started on the AWS cloud. I have some experience managing AWS servers and even host my own server; however, I registered for the free training to refresh my knowledge and get more exposure to areas such as AWS pricing, which I was not aware of at all. Another nice thing is that you receive a certificate of attendance as well as 25 USD of AWS credits. Pretty cool, right?

Attending AWSome day online conference 2019 27

Right from the beginning, I knew this was something interesting. I encountered a minor problem whilst signing in; I had to send a mail to support and it was resolved immediately. Once connected to the lobby, it was pretty easy to attend and follow the online conference. After some minutes, Steven Bryen from AWS delivered the keynote speech.

Attending AWSome day online conference 2019 28

There was also an online challenge, and I scored 25,821 on the Trivia Leaderboard.

Attending AWSome day online conference 2019 29

On the "Ask an Expert" tab, I was mostly interested in man-on-the-side (MOTS) attacks, and I was referred to the WAF section of AWS. Another interesting link is the AWS security overview whitepaper. AWS offers comprehensive security across all the layers: SSL, DDoS protection, firewalling, HSM, and networking. I also asked some questions about metrics and monitoring at the application level, for example for MariaDB, and discovered RDS Performance Insights. For applications on EC2, containers, and Lambda, X-Ray looks very promising. Apart from virtualization, it is good to note that AWS also provides containerization services.

The event was pretty enriching. The panel in the question area knew their subject well, and I discovered a lot by participating in AWSome Day. I'm looking forward to AWS certifications in the near future.

Building Docker images and publishing ports

One of the most challenging tasks in a production environment with Docker is to build images and publish ports. As promised in the previous article, I will publish more articles on Docker images, so here we are! For those who missed the previous articles on Docker, we have firstly the basic installation of Docker with some basic commands, and secondly an article dedicated to 30 basic commands to get started with Docker containers. Note that all illustrations and commands in this blog post have been tested on Fedora.

Building Docker images and publishing ports 30

Building Docker images

What is a Docker image? Firstly, we need to understand what an image is. It is a compressed, self-contained piece of software. Once unpacked, it provides the functionality that makes the image useful. An image could contain software, an operating system, a service, etc. A Docker image, in turn, is created from a series of instructions written in a file called a "Dockerfile". Whenever you build from the Dockerfile with the Docker command, it outputs an image, i.e. a Docker image. In this blog post, we are going to build a Docker image from an existing one.

1. As described in the article “30 basic commands to start with Docker container” in part 3, to view the current images available you can use the following command:

docker images

2. In my case, I have a CentOS image. My goal is to make a Docker image which has the Apache web server already pre-installed. There are two methods to create an image from an existing one. The first is the commit method with the docker commit command, which is not used extensively because it is less flexible. The other is to write a Dockerfile. In my home directory, I created a directory at /home/webserver; this directory will be used to build up the web server. You can also create an index.html file to be used as the index page of the web server. Use the following basic commands:

mkdir /home/webserver && touch /home/webserver/{index.html,Dockerfile}

3. I then edited the index.html and entered a random phrase in it for testing purposes.

This is a test by TheTunnelix

4. Edit the Dockerfile and define it as indicated below. The comments explain each line:

# Take the latest CentOS image as the base.
FROM centos:latest
# Just a reference using my e-mail (as a LABEL, since MAINTAINER is deprecated).
LABEL maintainer="tunnelix.com <[email protected]>"
# Run the command to install HTTPD.
RUN yum install httpd -y
# Copy index.html from the build context to the docroot.
COPY index.html /var/www/html/
# Always launch the binary to start the HTTPD daemon in the foreground.
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
# Apache runs on port 80. This port needs to be exposed to reach the web server.
EXPOSE 80

5. Now, change into the directory where your Dockerfile and index.html are located. We will build the image from the Dockerfile using the docker build command:

docker build -t centos:apache .

6. You can check it using the docker images command, and you should notice that a new image has been created and tagged with apache. You can also view all images, including intermediate ones, using the following command:

docker images -a

7.  To run it, you can use:

docker run -dit --name=ApacheTunnelix centos:apache

At this stage, a docker ps will show you the container running. Remember, in part 24 of the article "30 commands to start with Docker container", we learned that Docker creates a bridge. You can check it using docker network ls, and you can also confirm it using the brctl show command.
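
For reference, those checks look like this (brctl comes from the bridge-utils package, which may need to be installed separately):

docker ps
docker network ls
brctl show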

8. When launching the docker inspect command and looking at the network section, I can see that my container has the IP address 172.17.0.2 and that it serves the content of the index.html file created in step 3, which is accessible from my browser. You can also check it using the following curl command:

curl http://172.17.0.2
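
To pull out just that IP address, a format filter can be used with docker inspect (a sketch, using the container name created earlier):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ApacheTunnelix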

Publishing the port

9. The point is that the container ApacheTunnelix, with IP address 172.17.0.2, is not reachable from outside the physical host on which my Docker engine is running. The catch is that we need to go through a step called publishing ports.

10. I will now create another web server to better differentiate between the container (ApacheTunnelix) accessible only locally and another container (let's call it ApacheTunnelixProd) which needs to be accessible from the network of the physical machine. I copied the directory /home/webserver to /home/webserverprod and changed into the directory webserverprod.

cp -rp /home/webserver /home/webserverprod && cd /home/webserverprod

11. For my own reference, I changed the index.html to:

This is a test by TheTunnelix on Prod.

12. Repeat step 5 by building a new image with a new name:

docker build -t centos:apacheProd .

13. Compared to step 7, where we ran the container without publishing a port, this time we will run it and publish the port so that it is reachable from outside the physical machine. Inside the container, Apache listens on port 80 by default. To make it accessible on, say, port 5000 of the host, we use the following command:

docker run -dit --name=ApacheTunnelixProd -p 5000:80 centos:apacheProd

14. By now the container should be accessible on any IP on the network of the local machine including localhost. In my case, the IP address of my physical machine is 192.168.100.9. You can test it using the command:

curl http://192.168.100.9:5000

Or you can simply access your machine from a browser:

Building Docker images and publishing ports 31

15. Running docker ps is important to understand here, as it shows you the source and destination of the port mapping. Another interesting command for understanding the port mapping is docker port. For example:

docker port ApacheTunnelixProd

This will show the following result:

80/tcp -> 0.0.0.0:5000

In the next article on Docker, I will share some more interesting tips on Docker networking. Keep in touch, and comment below for suggestions and improvements.

Tips:

  • EXPOSE documents that the web server inside the container listens on port 80. On its own it does not open the port to the outside world; it works together with port publishing (-p) to make the web server accessible from outside the container.
  • CMD allows you to run a command as soon as the container is launched. CMD is different from RUN: RUN is executed while building the image, whereas CMD is executed when the container is started.
  • Always check the official Docker documentation when creating Dockerfile.
  • You always stop a running container using the command docker stop <name of the container>. For example, docker stop ApacheTunnelixProd.
  • Also, you can remove a container with the command docker rm <name of the container>. For example, docker rm ApacheTunnelixProd.

Updates:

As explained by Arnaud Bonnet, one should be careful when using base images of full distributions such as CentOS, Debian, etc., which can carry vulnerabilities, so auditing is important before deploying to production. A look into Alpine and BusyBox can be very useful.

Also, the MAINTAINER instruction has been deprecated and is now replaced by LABEL. Arnaud gave some examples such as:

LABEL com.example.version="0.0.1-beta"
LABEL vendor="ACME Incorporated"
LABEL com.example.release-date="2015-02-12"
LABEL com.example.version.is-production=""