Category: Scripts and codes

Starting up with some basics of C programming – Hello world- Part 1

To get started with the C programming language, there are several basics to grasp in the code structure. In this article, I will elaborate on the anatomy of a simple “hello world” program. The program will just display the words “hello world” through the code. There is also a free book available by Dennis Ritchie, the creator of the C programming language.


Let us see this piece of code :


#include <stdio.h>
#include <stdlib.h>

int main()
{
    printf("hello world!\n");
    return 0;
}

int main() defines a function, and the lines printf(“hello world!\n”); and return 0; are statements within that function.

The ‘include’ lines #include <stdio.h> and #include <stdlib.h> pull in header files that declare basic library functions, such as printf from stdio.h. The aim is to bring some built-in functionality into the hello world program.

On your Linux terminal, just save the file as hello.c and compile it with the command gcc -o hello hello.c. You will notice that a file called hello has been created! You can run it as ./hello to see the result ‘hello world’.


A brief description of the fopen PHP vulnerability

One of the PHP vulnerabilities still being found on many websites involves the fopen function in PHP – CVE-2007-0448. You can secure your website by disabling remote file access when calling the fopen function.


According to cvedetails.com: “PHP 5.2.0 does not properly handle invalid URI handlers, which allows context-dependent attackers to bypass safe_mode restrictions and read arbitrary files via a file path specified with an invalid URI, as demonstrated via the srpath URI”.


It’s usually not recommended to enable the allow_url_fopen directive in php.ini; however, some developers rely on it in the code itself for a specific task. Let’s see how this is exploited:

Let’s say we have a page called vulnerability.php containing this code:


<?php
$vulnerable = $_GET['vulnerable'];
include($vulnerable);
?>

So, $vulnerable = $_GET[‘vulnerable’]; means: take the ‘vulnerable’ GET parameter, i.e. the parameter that appears in the URL’s query string, and store it in the variable $vulnerable. An example is http://mysite.com/page.php?vulnerable=yes&howmuch=Very.


By including the value of the variable ($vulnerable), you allow an attacker to inject code. Someone, for instance, can try this in a browser:

http://www.mywebsite.com/vulnerability.php?vulnerable=../../../index.php

This enables the attacker to traverse parent directories and start exploring the whole filesystem. However, if you are running PHP-FPM for a particular instance, only that particular instance is impacted, as PHP-FPM allows you to isolate each running instance within the server.
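On the configuration side, the relevant php.ini directives can simply be switched off. This is a sketch; allow_url_include exists from PHP 5.2 onwards, and user input passed to include() should still be validated in the code itself:

```ini
; php.ini – refuse to open or include remote URLs
allow_url_fopen = Off
allow_url_include = Off
```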


Create a server with NodeJS – LUGM Meetups

A meetup was held today at 12:30 hrs by Yog Lokhesh Ujhoodha at the University of Mauritius under the banner of the Linux User Group of Mauritius.


The event, titled “How to make a smart server with NodeJs”, was announced on the LUGM Facebook group as well as on the LUGM mailing list. As a passionate freelance developer, he shared his experience of using NodeJs in a critical production environment.

He started by giving the audience a straightforward explanation of the difference between a web server and a runtime environment in the context of NodeJs.


Yog during the presentation

As you can see in the YouTube video, he laid emphasis on the following topics:

1. A problem statement

2. Web server architectures

3. Building an event-driven web server with NodeJS

4. Distributed load with NodeJs

5. Useful tools and real-life benchmarks

We ended with some technical questions, and several more came in from our Hangout viewers. You can watch the video and ask questions for further clarification. About 15-20 persons attended the meetup.


Managing and Analysing disk usage on Linux

Disk usage management and analysis on servers can be extremely annoying, even disastrous, if not carried out regularly. However, you can also brew some scripts to sort out the mess. In large companies, monitoring systems are usually set up to handle the situation, especially where several servers have sizeable partitions allocated for, say, the /var/log directory. An inventory tool such as OCS Inventory can also be used to monitor a large number of servers.


This blog post will be updated as my own notebook to remember the commands used while managing disk usage. Feel free to send me your own tricks and tips, and I will update the article here 🙂
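Before digging in with find, a quick overview is often enough to locate the culprit partition or directory; the /var path below is only an example:

```shell
# Overall usage per mounted filesystem, human readable
df -h

# Size of each top-level directory under /var, largest last
du -sh /var/* 2>/dev/null | sort -h
```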

Managing disk space with ‘find’ command

1. Find everything under the current directory last modified more than 1000 days ago and report the total size:

find . -mtime +1000 -exec du -csh {} + | grep total$   

2. Find files larger than 50 MB on the current filesystem (-xdev prevents find from crossing into other mounted filesystems) and list each with ls in long, human-readable format:

find / -xdev -type f -size +50M -exec ls -lh '{}' ';' 

3. Find files or directories under /tmp named develop* with an mtime of more than one day and delete them:

find /tmp -name "develop*" -mtime +1 -exec rm -rf {} \; 

4. List each directory under /home and count the entries it contains, using /tmp/uniqDirectory as the working list:

find /home -mindepth 1 -maxdepth 1 -type d > /tmp/uniqDirectory && while read -r i; do echo "$i"; ls -l "$i" | wc -l; done < /tmp/uniqDirectory

5. Find all files under /tmp having the extension .sh or .jar and calculate the total size:

find /tmp -type f \( -iname "*.sh" -or -iname "*.jar" \) -exec du -csh {} + | grep total$

6. Check every file in /tmp for whether it is being used by any process, and delete it if not:

find /tmp -type f | while read -r files ; do fuser -s "$files" || rm -rf "$files" ; done

7. Once, during an incident, I encountered a VM that had switched to read-only mode after an intervention on the SAN. After several hours, the disk was back in read-write mode. At that time, several processes were using the disk, such as screen, ATP, NFS, etc. I noticed that disk usage on the /var partition rose to 90% even though the du command did not show the same amount consumed. To troubleshoot the issue, the following command came in handy, as it showed the process that was still holding deleted files open. After restarting the service, usage was back to 2%.

lsof | grep "/var" | grep deleted

Another interesting issue you might encounter is a sudden increase in log size caused by an application failure, for example a sudden increase in binary logs generated by MySQL, or a core dump being generated!

Let’s say we have the crash package installed on a server. The crash package will generate a core dump for analysis of why an application crashed. This is sometimes annoying, as you cannot predict when an application is going to fail, especially if you have many developers and system administrators working on the same platform. I would suggest a script that sends a mail to a particular user whenever a core dump has been generated. I placed a script here on GitHub to handle such a situation.
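A minimal sketch of such a check follows. The watch directory /var/crash, the core* naming pattern and the use of mail(1) are assumptions; adapt them to wherever your crash package actually writes its dumps:

```shell
# check_cores DIR: print core dumps modified within the last hour
check_cores() {
    find "$1" -name 'core*' -type f -mmin -60 2>/dev/null
}

# Run from cron; mail the list to an operator if anything was found
dumps=$(check_cores /var/crash)
if [ -n "$dumps" ] && command -v mail >/dev/null 2>&1; then
    printf '%s\n' "$dumps" | mail -s "core dump on $(hostname)" root
fi
```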

Naturally, log rotation is of great help, as are cron jobs to purge certain temporary logs. The du command is helpful, but when it comes to picking and choosing files for a particular reason, you will need to handle the situation with the find command.

Tips:

    • You should be extremely careful when deleting files found with the find command. Imagine deleting a replication log file that is still in use by an Oracle database server. That would be disastrous.
    • Running lsof on a mount point can also be useful when troubleshooting disk usage.
    • Also, make sure you check the content of a log before acting on it, as any file can be named *log or *_log.