Create a server with NodeJS – LUGM Meetups

A meetup was conducted today at 12:30 hrs by Yog Lokhesh Ujhoodha at the University of Mauritius under the banner of the Linux User Group of Mauritius.


The event, titled “How to make a smart server with NodeJs”, was announced on the LUGM Facebook group as well as on the LUGM mailing list. A passionate freelance developer, Yog shared his experience of using NodeJS in a critical production environment.

He started with a straightforward explanation to the audience of the difference between a web server and a runtime environment in the context of NodeJS.



Yog during the presentation

As you can see in the YouTube video, he laid emphasis on the following topics:

1. A problem statement

2. Web server architectures

3. Building an event-driven web server with NodeJS

4. Distributed load with NodeJS

5. Useful tools and real-life benchmarks


 

We ended with a technical Q&A session. Several questions came in from our Hangout viewers; you can watch the video and ask questions if you need further clarification. About 15-20 people attended the meetup.

You can also reach Yog through his website at http://shaanxoryog.hackers.mu

Another article is coming up on http://www.hacklog.mu


URGENT – STAGEFRIGHT is here – Update your Android now

This is a straight and direct message to everyone on this planet: YOU NEED TO UPDATE YOUR ANDROID MOBILE PHONES, TABLETS, etc. NOW!



How many amongst you have an Android device? Are you aware that close to a billion devices around the world are affected by a vulnerability called Stagefright? Even after the announcement made on 27 July 2015 by Joshua Drake of Zimperium, I noticed that many people are still not at all aware of this vulnerability and its devastating effects.




What is Stagefright?

“Stagefright has been called the biggest Android security concern ever. It occurs when malicious code is unknowingly triggered by media in multi-media messages (MMS). Stagefright could affect a billion devices, most particularly those running Android Jelly Bean or earlier. This number, if you’ve taken a recent look at the percentages of different Android versions currently in use, is staggering.” – Androidpit.com

You can download a free detector app from the Google Play Store to verify whether your mobile phone is vulnerable or not.

The aim of this article is to encourage everyone to update their Android devices. Please inform your friends and everyone around you.

Please take note that some vendors have not yet released the patch. In that case, I encourage everyone to voice their concerns, for example with the help of Twitter.

Note: Some Android devices cannot be patched because the vendor is no longer shipping updates. In that case, you can disable automatic MMS retrieval. But even that does not keep you 100% safe!





Adding a new disk on FreeBSD from VirtualBox

Adding a new disk on FreeBSD is just a matter of minutes. As usual in the field of system administration, I do some pre-checks before carrying out any operation. There is plenty of documentation in the official FreeBSD Handbook; however, I created this blog post so that we can discuss it further.

1. Start by retrieving the boot trace from dmesg. I fired this command and redirected the output into a file:

less /var/run/dmesg.boot > /home/dmesg1.txt

2. You can also save the output of the df -h and/or gpart show commands for comparison.


3. Add the disk from the VirtualBox “Storage” tab; I created a new .vdi. (Note that you need to switch off the machine before adding the disk.)

4. After the disk was added, I booted the machine and fired another: less /var/run/dmesg.boot > /home/dmesg2.txt

5. Then I ran diff dmesg1.txt dmesg2.txt to compare the dmesg output from before and after insertion of the disk, to be assured that the new disk had been detected.


As you can see, the result was awesome: the new disk, ada1, was detected.

6. Now we need to check whether we are using GPT or MBR. Through the gpart show command, we already know we are using GPT.



7. So I created the GPT scheme on the disk and added the partition with the following commands:

gpart create -s GPT ada1
gpart add -t freebsd-ufs ada1

8. The next step is to create the file system on the new disk with the following command:

newfs -U /dev/ada1p1

(Tip: press Tab twice to check that the device called ada1p1 really exists, so you do not get confused if you already have more disks.)


9. Next, create a new directory that will serve as the mount point:

mkdir /home/newhdd

10. Add the following entry to /etc/fstab:

/dev/ada1p1      /home/newhdd      ufs     rw     2     2

11. Mount the disk now:

mount /home/newhdd

12. I can now do a gpart show as well as a df -h to see my new disk.
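The whole procedure above can be condensed into the following command sequence (a sketch of the steps in this post, using the device name ada1 and the mount point /home/newhdd from my setup; adjust them for your system, and don't run these destructive commands blindly):

```shell
# Pre-check: snapshot dmesg, then compare after adding the disk in VirtualBox
cat /var/run/dmesg.boot > /home/dmesg1.txt
# ... power off, add the .vdi in the Storage tab, boot again ...
cat /var/run/dmesg.boot > /home/dmesg2.txt
diff /home/dmesg1.txt /home/dmesg2.txt   # the new disk (ada1) should appear

# Partition, create the file system, and mount
gpart create -s GPT ada1                 # write a GPT scheme on the new disk
gpart add -t freebsd-ufs ada1            # add a single UFS partition (ada1p1)
newfs -U /dev/ada1p1                     # create the file system with soft updates
mkdir /home/newhdd
echo '/dev/ada1p1 /home/newhdd ufs rw 2 2' >> /etc/fstab
mount /home/newhdd                       # picked up from /etc/fstab
df -h                                    # verify the new mount
```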


Repair your Kernel Panic with Dracut

If you have encountered a Kernel Panic, which usually happens after a major change in the Linux system, you can follow this procedure to rebuild the initramfs with the Dracut tool.

  1. Boot the server in rescue mode or simply from a live CD or ISO.
  2. To boot the server in rescue mode, log in to the vSphere interface and look for a live CD. In case of a Kernel Panic on your own machine, boot it with a live CD.
  3. Once booted, create a directory under /mnt:
    mkdir /mnt/sysimage
  4. Use fdisk -l to find where /boot is located. You can also create other directories under /mnt to mount different partitions. [sysimage is just a name given as an example]
  5. Mount the root partition into sysimage, then mount the boot partition on top of it. In my case, sda2 is the root partition and sda1 is the boot partition:
    mount /dev/sda2 /mnt/sysimage
    mount /dev/sda1 /mnt/sysimage/boot
  6. Once the disks are mounted, bind-mount the /proc, /dev and /sys file systems. Here are the commands:
    mount --bind /proc /mnt/sysimage/proc
    mount --bind /dev /mnt/sysimage/dev
    mount --bind /sys /mnt/sysimage/sys
  7. After the mount operations have been carried out, you need to access the directory by chrooting into it.
    chroot /mnt/sysimage
  8. You are now working inside the mounted system image.
  9. Back up /boot to another location, then use the dracut command to regenerate the initramfs file. An example is as follows:
    dracut -f /boot/initramfs-2.6.32-358.el6.x86_64.img 2.6.32-358.el6.x86_64
  10. Finally, umount all partitions and/or simply reboot the machine.
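Put end to end, the procedure looks like this (a sketch assuming the sda1/sda2 layout and the kernel version used in the steps above; adjust both to your own system):

```shell
# Boot from a live CD / rescue ISO first, then (as root):
mkdir /mnt/sysimage
fdisk -l                                   # identify the root and /boot partitions
mount /dev/sda2 /mnt/sysimage              # root partition (example layout)
mount /dev/sda1 /mnt/sysimage/boot         # boot partition
mount --bind /proc /mnt/sysimage/proc
mount --bind /dev  /mnt/sysimage/dev
mount --bind /sys  /mnt/sysimage/sys
chroot /mnt/sysimage
cp -a /boot /boot.bak                      # back up /boot before touching it
# Regenerate the initramfs; the last argument is the bare kernel version, no .img
dracut -f /boot/initramfs-2.6.32-358.el6.x86_64.img 2.6.32-358.el6.x86_64
exit                                       # leave the chroot, then umount and reboot
```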
 



Tips:

  • On vCenter, you may need to force the BIOS screen to appear first before being able to boot from the ISO.
  • You may also use the Finnix ISO, which is usually compatible with most UNIX-like systems.
  • When firing the dracut command, make sure the last argument is only the kernel version with the architecture. Do not include the .img extension, otherwise it won't work (see step 9).
  • The last part, ‘2.6.32-358.el6.x86_64’, is the kernel version for which the initramfs is regenerated (see step 9).
  • To know which kernel version your machine is actually using, get into the grub folder and look at grub.conf; the first entry is usually the kernel used by default.
  • Sometimes you need to try a live CD of the same OS version: it may happen, after booting your machine with a live CD, that the ISO you used does not detect your disk or the datastore. You may, for example, think the disk is bad or that there is a problem in the SAN.
  • However, without doing a root cause analysis, you cannot be certain that repairing the initrd will fix the Kernel Panic. There are circumstances where, for instance, a mounted NFS version mismatch with the actual machine can result in a Kernel Panic. The Dracut procedure is not a definitive solution.
  • Always investigate the dmesg log if possible, or the crash dump if one has been set up.

Managing LVM with PVMOVE – Part 2

After a little introduction to LVM in the article Managing LVM with PVMOVE – Part 1, it's time to get into the details of the pvmove command. Based on the scenario and constraints described in part 1, I will elaborate on the pvmove operation here.


Before handling any operation, do your pre-check tasks: verify the state of the server, the URLs, the services and applications running, the ports they listen on, etc. This is done to be able to handle complicated tasks at both the system and application level. For the pvmove operation specifically, I would recommend firing the vgs, pvs and lvs commands, as well as a fdisk -l to check the state of the physical disks. Do a df -h and, if possible, a lsblk to list all block devices for future reference. On CentOS / RedHat you need to install the util-linux package to be able to use lsblk, which is very handy.

 



Let us say we have a 100G disk [let's call it sdb] and we have to replace it with a 20G disk [sdc]. We assume that 10G of data is being used out of the 100G hard disk, which looks reasonable and can be handled by the 20G hard disk we plan to shift to. On our sdb we have 2 LVs: lvsql, used by MySQL to hold the database, and lvhome, handling the rest of the server's applications. Both LVs are found in the same VG, called vgname.

So you might ask yourself: if you perform a df -h on your machine, how come lvsql's total size is 15G and lvhome's is 80G, making a total of 95G, when the hard disk is 100G? Is there 5G missing? The answer is no. When you fire the pvdisplay command, you will see the space accounted for in physical extents; on my screenshot, for example, the PE size is 4MB.

The "missing" 5GB is simply space that is still in the VG and has not been allocated to any LV. Keeping some unallocated space there is useful for a specific purpose, for example in our situation a backup of the MySQL database or other processes. It's important to keep some additional space available there. So before starting, do a pvdisplay, lvdisplay and vgdisplay as well; they are important. We now have our new hard disk ready, as described below.




How to start? It's up to you to decide which LV you want to allocate more space to, since you have control over the VG. You can also add another physical disk, extend vgname, and resize lvsql, since a database usually grows in size.

 

 

 

Do your pre-check tasks as described.

  1. Stop all running applications, such as MySQL and Atop in our case. You can use the lsof command to check whether there are still processes and applications writing to the disk.
  2. Once all applications have been stopped, you can type the mount command to check the presence of the partitions, and confirm that no application is writing to the disk (use lsof or fuser).
  3. Comment out the two lines related to the 2 VG partitions in the /etc/fstab file, usually /dev/vgname/lvsql and /dev/vgname/lvhome. You can now umount the partitions. You will notice that it is not possible to umount a partition while an application is writing to it. Another catch: if you have ssh running on that disk, you need to stop it! Then how do you get onto the machine? Use the console from vSphere or, if it's a physical machine, boot it with a live CD.
  4. The next step is to do a lvreduce of the LV, if possible, according to the size used. In our case 5GB is used out of 80, so do a lvreduce -L 7G --resizefs /dev/vgname/lvhome. This is done because the pvmove will then be faster: the bigger the LV, the more time it takes.
  5. Once all LV sizes have been reduced as much as possible, add the disk sdc. Make sure it gets detected if you are on VMware. Use the fdisk command to check and compare against your pre-check output.

  6. Now create a PV from sdc. The command is pvcreate /dev/sdc. This means you have created a physical volume out of the disk you just added.
  7. After the PV has been created, extend the VG called vgname onto the new disk sdc which you just added. The command is vgextend vgname /dev/sdc.
  8. Now the magic starts: fire a pvmove /dev/sdb /dev/sdc. This moves the physical extents (PEs) of vgname from hard disk sdb onto sdc.
  9. When the pvmove is completed, you just need to do a vgreduce vgname /dev/sdb. The vgreduce throws the old disk out of the VG, since it has been completely emptied. You can now remove the old disk.
  10. Since we reduced lvhome in step 4 to accelerate the pvmove process, you will notice that it is at 7GB instead of its original size. The next step is lvresize -l +100%FREE /dev/vgname/lvhome to grow it back into the remaining free space. You will notice that lvsql is intact, as we did not resize it.
  11. You can now do a /sbin/resize2fs /dev/vgname/lvhome, uncomment the lines in fstab, run mount -av and restart your applications.
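The whole migration can be sketched as the following sequence (using the device names sdb/sdc and the VG/LV names from this post; applications must be stopped and the LVs unmounted first, and these commands are destructive, so adapt them carefully):

```shell
# Sketch of the migration described in the steps above.
lvreduce -L 7G --resizefs /dev/vgname/lvhome   # shrink so pvmove has less to copy
pvcreate /dev/sdc                              # initialise the new disk as a PV
vgextend vgname /dev/sdc                       # bring it into the existing VG
pvmove /dev/sdb /dev/sdc                       # migrate all extents off the old disk
vgreduce vgname /dev/sdb                       # eject the now-empty old disk
lvresize -l +100%FREE /dev/vgname/lvhome       # grow back into the free space
resize2fs /dev/vgname/lvhome                   # grow the file system to match
mount -av                                      # remount everything from fstab
```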



Congrats, you just did a pvmove! Comment below so that we can clear up any ambiguities.