Managing LVM with PVMOVE – Part 1

One of the more challenging issues I have encountered is the manipulation of LVM (Logical Volume Management) on virtual servers. Whilst writing this article, I realised it was getting bulky, so I have split it into parts. Once you have understood the logic of LVM, things get pretty easy to deal with. I will elaborate on some details of LVM, and will surely get into some brief real-life experience that is important to grasp. Let's take the example of a disk hosting running applications such as MySQL and Apache, plus monitoring tools like Atop and Htop, all writing to the disk, where we have to shrink that very disk or replace it with another one. Let's also assume that the server is running on an ESX host and that the operation will be carried out through VMware vCenter. How do you shrink a disk while its applications are generating IOs? How do you replace a disk with a smaller one when its data sits on an LVM?

In brief, here is what I have understood about LVM – Logical Volume Management.

We have Physical Volumes (PV), Volume Groups (VG) and Logical Volumes (LV). These terms must be understood before proceeding with Logical Volume Management operations.

PV – These are hard disks, or even hard disk partitions. Each PV has a logical unit number and is (or can be) composed of chunks called PEs (Physical Extents).

VG – The Volume Group is the highest level of abstraction used within LVM. Does this term look complicated? I would say no. In the field of software engineering, two terms are commonly used: modeling and meta-modeling. If you are not familiar with software engineering, understand it like this: modeling means one step or one level removed from reality, whilst meta-modeling means modeling at a higher level of logic. So basically, the VG is where this higher-level modeling happens: it groups PVs together into one pool of storage.

LV – The logical volume is carved out of the VG. LVs behave like disks but are not restricted to a physical disk's size, and they can be resized or even moved easily without stopping your application or unmounting your filesystem. Of course, I need to do more research to get into a deeper explanation of the PV, PE, VG, and LV. Let's now see an explanation through my horrible drawing skills.


From the diagram, we conclude that:

  • PVs look like hard disks divided into chunks called PEs.
  • A VG is a higher level of abstraction that sits above the PVs.
  • VGs are created by combining PVs.
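
To make these concepts concrete, here is a minimal sketch of how a PV, a VG, and an LV could be created from scratch. The device /dev/sdb, the volume group vg_data, and the logical volume lv_app are hypothetical names chosen for illustration.

pvcreate /dev/sdb                    # initialise the disk as a Physical Volume (hypothetical device)
vgcreate vg_data /dev/sdb            # combine one or more PVs into a Volume Group
lvcreate -n lv_app -L 10G vg_data    # carve a 10G Logical Volume out of the VG
mkfs.ext4 /dev/vg_data/lv_app        # put a filesystem on the LV
mount /dev/vg_data/lv_app /mnt       # and mount it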

If you have access to a Linux machine with LVM configured and some VGs already created, you can start firing these commands to get an overview of the PVs, VGs, and LVs (a sample session follows the list):

  • pvs – Report information about physical volumes
  • vgs – Information about volume groups
  • lvs – Information about logical volumes
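
A sample session might look like the one below. The device, volume group, and sizes are hypothetical and follow on from the sketch above; the columns are the typical default output of these commands.

# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sdb   vg_data lvm2 a--  20.00g 10.00g
# vgs
  VG      #PV #LV #SN Attr   VSize  VFree
  vg_data   1   1   0 wz--n- 20.00g 10.00g
# lvs
  LV     VG      Attr       LSize
  lv_app vg_data -wi-ao---- 10.00g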

These physical disks can be picked from the datastores, where RAID is configured. This acts as another layer of protection, making it possible to handle disk failures and avoid loss of data.



On Linux, if you type vg, lv, or pv and press Tab twice, you will get an idea of all the commands that exist and the possibilities of manipulation. In Part 2 of this article, I will take the pvmove command as an example and cover the actions that can be taken to minimise impact before carrying out a pvmove operation.

Part 2 of the article is available at this link.

 


Managing and Analysing disk usage on Linux

Disk usage management and analysis on servers can sometimes be extremely annoying, even disastrous, if proper management and analysis are not carried out regularly. However, you can also brew some scripts to sort out the mess. In large companies, monitoring systems are usually set up to handle the situation, especially where there are several servers with sizeable partitions allocated for, let us say, the /var/log directory. An inventory tool such as OCS Inventory can also be used to monitor a large number of servers.



This blog post will be updated and used as my own notebook to remember the commands used during disk usage management. Feel free to share your tricks and tips; I will update the article here 🙂


Managing disk space with the 'find' command

1. Find the total size of files under a particular directory that were last modified more than 1000 days ago:

find . -mtime +1000 -exec du -csh {} + | grep total$   

2. Find files bigger than 50 MB on the / partition (without crossing filesystem boundaries, thanks to -xdev) and list them in long, human-readable format:

find / -xdev -type f -size +50M -exec ls -lh '{}' ';' 

3. Find, in the /tmp partition, every file or directory whose name starts with "develop" and which was last modified more than 1 day ago, and delete it:

find /tmp -name "develop*" -mtime +1 -exec rm -rf {} \; 

4. Dump the list of entries under /home to the file /tmp/uniqDirectory, then count the number of entries inside each directory:

find /home -mindepth 1 -maxdepth 1 > /tmp/uniqDirectory && while read -r dir; do echo "$dir"; ls -l "$dir" | wc -l; done < /tmp/uniqDirectory

5. Find, from the current directory, all files having the extension .sh or .jar and calculate their total size:

find . -type f \( -iname "*.sh" -or -iname "*.jar" \) -exec du -csh {} + | grep total$

6. Find all files in /tmp, check which ones are not being used by any process, and delete those that are unused:

find /tmp -type f | while read -r file; do fuser -s "$file" || rm -f "$file"; done

Another interesting issue that you might encounter is a sudden increase in log size caused by an application failure, for example a sudden increase in the binary logs generated by MySQL, or a core dump being generated!


Let's say we have the crash package installed on a server. The crash package will generate a core dump for analysing why the application crashed. This is sometimes annoying, as you cannot predict when an application is going to fail, especially if you have many developers and system administrators working on the same platform. I would suggest a script which sends a mail to a particular user whenever a core dump has been generated. I placed a script on GitHub to handle such situations.
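
The script on GitHub is the reference; purely as an illustration, a minimal watcher could look like the sketch below. It assumes core dumps land in /var/crash, that the mail command is configured, and uses hypothetical names for the recipient and the state file.

#!/bin/bash
# Minimal sketch: mail an alert for every core dump newer than the last run.
CRASH_DIR="/var/crash"               # assumed dump location
ALERT_TO="admin@example.com"         # hypothetical recipient
STATE_FILE="/var/tmp/coredump.stamp" # hypothetical state file

[ -f "$STATE_FILE" ] || touch -t 197001010000 "$STATE_FILE"
find "$CRASH_DIR" -type f -newer "$STATE_FILE" | while read -r dump; do
    echo "Core dump detected: $dump" | mail -s "Core dump on $(hostname)" "$ALERT_TO"
done
touch "$STATE_FILE"

Run it from cron, for example every five minutes, and you will be alerted shortly after a dump appears.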

Naturally, logrotate is of great help, as are cron jobs to purge certain temporary logs. The "du" command is helpful, but when it comes to picking and choosing files for a particular reason, you will need to handle the situation with the find command.
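
As a hint of what logrotate brings here, this is a minimal sketch of a drop-in rule; the path /var/log/myapp/*.log and the retention of 7 rotations are hypothetical values.

# /etc/logrotate.d/myapp -- hypothetical application logs
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}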

Tips:

  • You should be extremely careful when deleting files found with the find command. Imagine a replication log file still in use by an Oracle database server being deleted: this would be disastrous.
  • Also, make sure that you look at the content of a log before acting on it, as any file can be named *log or *_log.

Deploying WordPress labs on VirtualBox

Building miniature virtual labs on VirtualBox is fascinating most of the time, especially when you have to troubleshoot between virtual servers within a network environment; however, there are usually quirks I have to deal with. From what I have noticed, NAT and NAT Network on VirtualBox behave differently, as can be seen in the official documentation.

However, I have noticed that in both modes you are provided with a virtual router inside VirtualBox. With plain NAT, you are NOT allowed to ping between two VMs unless you have established a tunnel, whereas NAT Network lets you choose a range of IPs dynamically through VirtualBox's DHCP functionality, and you can ping both the outside world and the other VMs on the same NAT Network. I have noticed that this only works on newer versions; on older ones, NAT and NAT Network behave almost the same way. There are still many discrepancies, if 'NatNetwork' is even the real name that should have been used!!
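
For reference, a NAT Network can also be created from the command line. Here is a minimal sketch using VBoxManage; the network name natnet1, the 10.0.2.0/24 range, and the VM name web1 are hypothetical values.

# Create a NAT Network named natnet1 with DHCP enabled
VBoxManage natnetwork add --netname natnet1 --network "10.0.2.0/24" --enable --dhcp on

# Attach a VM's first network adapter to it
VBoxManage modifyvm "web1" --nic1 natnetwork --nat-network1 natnet1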



I have installed CentOS [minimal install] on my first lab machine. Here are the steps for building the web server.

  1. yum install httpd wget mysql-server php php-mysql php-gd nmap traceroute w3m vim
  2. wget https://wordpress.org/latest.tar.gz
  3. tar -xzf latest.tar.gz && cp -r wordpress /var/www
  4. chown -R apache:apache /var/www/wordpress
  5. vi /etc/httpd/conf.d/myweb.conf 

Create the vhost with the following values:

<VirtualHost *:80>
    DocumentRoot /var/www/wordpress
    ServerName www.myweb.com
    ServerAlias myweb.com
    <Directory /var/www/wordpress>
        Options FollowSymlinks
        Allow from all
    </Directory>
    ErrorLog /var/log/httpd/wordpress-error-log
    CustomLog /var/log/httpd/wordpress-access-log common
</VirtualHost>

Time to create the Database

  1. mysql -u root -p  [the mysqld service should be started first]
  2. CREATE DATABASE mydb;
  3. CREATE USER myuser@localhost;
  4. SET PASSWORD FOR myuser@localhost = PASSWORD ('mypassword');
  5. GRANT ALL PRIVILEGES ON mydb.* TO myuser@localhost IDENTIFIED BY 'mypassword';
  6. FLUSH PRIVILEGES;

Exit MySQL and proceed with the following instructions.

  1. mv /var/www/wordpress/wp-config-sample.php /var/www/wordpress/wp-config.php
  2. vi wp-config.php and modify the database name, username, password and hostname (a sed sketch follows this list)
  3. vi /etc/hosts and map myweb.com to 127.0.0.1
  4. service httpd start // service httpd graceful // service mysqld start
  5. w3m www.myweb.com and register on WordPress. Website up!
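
For step 2, the edit can also be scripted. Here is a minimal sed sketch, assuming the database, user, and password created above; database_name_here, username_here, and password_here are the stock placeholders shipped in the WordPress sample config.

cd /var/www/wordpress
sed -i "s/database_name_here/mydb/" wp-config.php
sed -i "s/username_here/myuser/" wp-config.php
sed -i "s/password_here/mypassword/" wp-config.php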

Setting up the SSL

  1. For SSL activation [https], install the needed packages: yum install openssl mod_ssl
  2. openssl genrsa -out ca.key 2048 [generate a private key]
  3. openssl req -new -key ca.key -out ca.csr [generate the certificate signing request (.csr)]
  4. openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt [generate a self-signed certificate]
  5. cp ca.crt /etc/pki/tls/certs
  6. cp ca.key /etc/pki/tls/private/ca.key
  7. cp ca.csr /etc/pki/tls/private/ca.csr
  8. vi /etc/httpd/conf.d/myweb.conf and add another vhost with the following values:

<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/ca.crt
    SSLCertificateKeyFile /etc/pki/tls/private/ca.key
    DocumentRoot /var/www/wordpress
    ServerName www.myweb.com
    ServerAlias myweb.com
    <Directory /var/www/wordpress>
        Options FollowSymlinks
        Allow from all
    </Directory>
    ErrorLog /var/log/httpd/wordpress-error-log
    CustomLog /var/log/httpd/wordpress-access-log common
</VirtualHost>
  9. service httpd graceful, and the website is up on https


To make the website accessible from other hosts on the same NAT Network, edit /etc/hosts on those hosts and add the entry "10.0.2.4 myweb.com".


Now that two servers are configured the same way, you can add another server as a load balancer in front of them. What is most interesting is that end users (hosts) will only know about the load-balancing server. I achieved this by installing Pound on the server used for load balancing. This means that end users [hosts] access the load-balancing server, which in turn decides upon master/slave priorities. Pound turns server3 into a reverse-proxy load balancer. The aim is to take HTTP/S requests from the hosts and pass them to server 1 or 2 according to the configuration.
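
I am not reproducing my exact setup here, but a minimal Pound configuration might look like the sketch below; the backend addresses 10.0.2.4 and 10.0.2.5 are assumptions for illustration.

# /etc/pound.cfg -- minimal reverse-proxy sketch
ListenHTTP
    Address 0.0.0.0
    Port    80
    Service
        BackEnd
            Address 10.0.2.4
            Port    80
        End
        BackEnd
            Address 10.0.2.5
            Port    80
        End
    End
End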

Based on this article, a new Bash project is being brewed on GitHub to automate the installation of WordPress, Apache, MySQL and all the applications specified. This project should enable anyone to deploy a website through the script.


Getting started with the basics of GitHub

“GitHub is a Web-based Git repository hosting service. It offers all of the distributed revision control and source code management (SCM) functionality of Git as well as adding its own features. Unlike Git, which is strictly a command-line tool, GitHub provides a Web-based graphical interface and desktop as well as mobile integration. It also provides access control and several collaboration features such as bug tracking, feature requests, task management, and wikis for every project.” – Wikipedia


To get started with GitHub, you have to create an account on GitHub.com. Once the account has been created, download the Git tool, which is used to manipulate your repository.


Here are the steps which you need to follow :

Register on GitHub and download the Git tool: on your Linux machine, this is the "git" command.

Create a repository on the GitHub website. I am actually working on a project using this repository – TheTunnelix/Deploy

Now, you can start setting up git locally. Here are some commands which you need to remember:

git config --global user.name "TheTunnelix" – Setting up a username

git config --global user.email "your_email@example.com" – Configuring your mail address

git clone https://github.com/TheTunnelix/Deploy.git – Clone the GitHub repo that you created onto your local machine.


git status – Use this command to understand the status of the repo, for example the number of files which have not been committed yet.

git add README.md – When you add a file, it is placed in the staging area, between your working directory and the repository, ready for the commit.

git commit -m "updated readme file for a better description" – This records the staged changes as a commit in your local repository; it does not push anything to GitHub yet. Use git commit --help for more options. Remember, each commit has an ID so that you can roll back in time.

git remote add origin https://github.com/TheTunnelix/Deploy.git – If you did not clone the repo, you may need this command; when a repo is cloned, the origin has already been added.

git push -u origin master – the "-u" option sets origin/master (the branch you are pushing to) as the upstream of your local branch. It will prompt you to enter your username and password. You will notice the message "Branch master set up to track remote branch master from origin", which means that things went well.

From your GitHub account, you will notice that the change has been carried out. Changes are done using the three principles ADD-COMMIT-PUSH; you cannot push without committing. If a new file is created locally and you use the git status command, it will list the file you created as untracked.
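
Putting the three principles together, a minimal end-to-end session might look like this; the file name test.txt and the commit message are hypothetical.

echo "hello" > test.txt           # create a new file locally
git status                        # shows test.txt as untracked
git add test.txt                  # stage it
git commit -m "add test file"     # record it in the local repository
git push origin master            # publish it to GitHub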

git add . – Another interesting command is git add ., where the dot means adding all untracked files, followed by git commit -m "test file is added"; the "-m" stands for message. A git status will then show you that there is nothing to commit and everything is clean. Each git push will prompt you to enter your username and password.

git pull origin master – This fetches and merges the content of the remote repo into your current branch.

There are also interesting options to explore on the GitHub website, such as branches and the master branch. New branches can be created, and the graphs can be useful to view contributors on the project.


Hello Tunnelers

Hello, Tunnelers across the globe. I made this blog to share my experience and knowledge as a System and Application Administrator. Most articles are based on real-life experience in the field of Linux, FreeBSD and open-source technologies. However, additional tests are usually made to support my blog posts, and I welcome constructive comments from you to enlighten me if needed.

Fellow Tunnelers, Tunnelix is a concept that has inspired me to bridge the Linux and Unix operating systems, tunneling through the hacking world. Do follow me on Twitter and join the adventure throughout the Tunnel.


My website has been made using technologies like Nginx, HHVM, WordPress, CentOS, PHP, jQuery, MariaDB and others. I carried out some penetration testing using Kali Linux tools, Apache Benchmark and other online testing tools such as GTmetrix. You can follow my tweets to keep in touch with me. Your comments are welcome, and I am also reachable on Facebook. Most blog posts will be based on the technical aspects of IT, though sometimes I will blog about IT management skills drawn from my own experience. Sharing is the key to success. Technology keeps evolving, and just as with other blogs, old posts sometimes become outdated. I will try my best to keep all my blog posts up to date.