
Seven steps to compile Python 3.5.0 from source

After a minimal install of CentOS 7, you will notice that the installed version of Python is 2.7.5, which may not be compatible with other applications you are using. One way to get an up-to-date version is to compile your own Python from source.

Here are the steps you can follow to compile your own Python. At the time of writing, the latest version is Python 3.5.0. You can refer to this link for future versions.

1. Install the prerequisites. I would also recommend running an update before installing them.

yum update -y && yum install yum-utils make wget

2. To compile Python, you will also need its build dependencies, which yum-utils can pull in for you:


yum-builddep python
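yum-builddep relies on a Python source package being available in your repositories. If it cannot resolve everything, the usual build dependencies can be installed explicitly; a minimal sketch, assuming standard CentOS 7 package names:

yum groupinstall -y "Development Tools"
yum install -y zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel xz-devel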

3. Download the Python package

wget https://www.python.org/ftp/python/3.5.0/Python-3.5.0.tgz

4. Untar the Python package

tar xvzf Python-3.5.0.tgz

5. Get into the package directory and run the following commands

./configure
make

6. If the make process is successful, you can now start the installation with the following command

make install

7. Python 2.7 usually remains the default version. You will need to tell your shell to use the new version, for example with an alias.


alias python='/usr/local/bin/python3.5'

We now have Python 3.5 installed and ready for use.
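To confirm the build, you can check the version of the freshly compiled interpreter and make the alias permanent; a small sketch, assuming bash is your login shell:

/usr/local/bin/python3.5 --version
echo "alias python='/usr/local/bin/python3.5'" >> ~/.bashrc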


Linux server monitoring with NodeQuery

One of the best ways, I think, for bloggers to monitor the servers behind their websites is by using NodeQuery. Though it is also well adapted for large businesses, I am sure you would like to explore this public API. NodeQuery is currently in public beta and completely free of charge.






Photo Credits to Nodequery.com

To install NodeQuery, you will need to register on the official website. You will then be prompted to install the agent on your machine by downloading it from Github at https://raw.github.com/nodequery/nq-agent/master/nq-install.sh

Once downloaded, you just need to launch the bash script, and in less than 3 minutes your server is being monitored.
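Fetching and launching the agent installer looks roughly like this; a sketch, assuming the dashboard provides an API token to pass to the script:

wget https://raw.github.com/nodequery/nq-agent/master/nq-install.sh
bash nq-install.sh YOUR_API_TOKEN    # token taken from your NodeQuery dashboard (assumption)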






With the free version, you get a full overview of your system: network usage, network latency, load average, RAM and swap usage, disk usage and the top processes.

Well, there are several more features to explore.

Cool isn’t it?


Create a server with NodeJS – LUGM Meetups

A meetup was held today by Yog Lokhesh Ujhoodha at 12:30 hrs at the University of Mauritius under the banner of the Linux User Group of Mauritius (LUGM).


The event, titled "How to make a smart server with NodeJs", was announced on the LUGM Facebook group as well as on the LUGM mailing list. A passionate freelance developer, Yog shared his experience of using NodeJs in a critical production environment.

He started by giving the audience a straightforward explanation of the difference between a web server and a runtime environment in the context of NodeJs.


Yog during the presentation

As you can see in the YouTube video, he laid emphasis on the following topics:

1. A problem statement

2. Web server architectures

3. Building an event-driven web server with NodeJS

4. Distributed load with NodeJs

5. Useful tools and Real life Benchmarks

We ended with some technical questions, several of which came from our Hangout viewers. You can view the video and ask questions if you need more clarification. About 15-20 people attended the meetup.


Deploying WordPress labs on VirtualBox

Building miniature virtual labs on VirtualBox is fascinating most of the time, especially when you have to troubleshoot between virtual servers within a network environment; however, there are usually quirks to deal with. One of them is the difference between NAT and NAT Network in VirtualBox, which is described in the official documentation.

However, I have noticed that in both modes you are provided with a virtual router inside VirtualBox. With plain NAT, you are NOT allowed to ping between two VMs unless you have established a tunnel, whereas a NAT Network lets you choose a dynamic range of IPs through VirtualBox's DHCP functionality, and the VMs can reach the outside world as well as ping other VMs on the same NAT Network. I have noticed that this only works on recent versions; on older ones NAT and NAT Network behave almost the same way. There are still many discrepancies, if 'NatNetwork' is even the real name that should have been used!
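For reference, a NAT Network can also be created from the command line; a sketch, assuming a recent VirtualBox with the natnetwork subcommand and a hypothetical network named natnet1:

VBoxManage natnetwork add --netname natnet1 --network "10.0.2.0/24" --enable --dhcp on
VBoxManage list natnets    # confirm the network and its DHCP setting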



I have installed CentOS [minimal install] on my first lab. Here is the procedure for building the web server.

  1. yum install httpd wget mysql-server php php-mysql php-gd nmap traceroute w3m vim
  2. wget https://wordpress.org/latest.tar.gz
  3. tar -xzf latest.tar.gz && cp -r wordpress /var/www
  4. chown -R apache:apache /var/www/wordpress
  5. vi /etc/httpd/conf.d/myweb.conf 

Create the vhost with the following values:

<VirtualHost *:80>
    DocumentRoot /var/www/wordpress
    ServerName www.myweb.com
    ServerAlias myweb.com
    <Directory /var/www/wordpress>
        Options FollowSymLinks
        Allow from all
    </Directory>
    ErrorLog /var/log/httpd/wordpress-error-log
    CustomLog /var/log/httpd/wordpress-access-log common
</VirtualHost>
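Before moving on, you can check that the new vhost parses correctly; a quick sketch:

httpd -t    # "Syntax OK" means the configuration is valid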

Time to create the database. A quick verification sketch follows the list.

  1. mysql -u root -p  [the mysqld service should be started first]
  2. CREATE DATABASE mydb;
  3. CREATE USER myuser@localhost;
  4. SET PASSWORD FOR myuser@localhost = PASSWORD('mypassword');
  5. GRANT ALL PRIVILEGES ON mydb.* TO myuser@localhost IDENTIFIED BY 'mypassword';
  6. FLUSH PRIVILEGES;
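Once you are back at the shell, you can confirm that the grant works by logging in with the new account; a quick check, assuming the myuser/mypassword credentials above:

mysql -u myuser -pmypassword mydb -e "SELECT 1;"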

Exit MySQL and proceed with the following instructions.

  1. mv /var/www/wordpress/wp-config-sample.php /var/www/wordpress/wp-config.php
  2. vi /var/www/wordpress/wp-config.php and modify the database name, username, password and hostname (see the sketch after this list)
  3. vi /etc/hosts and add an entry so that myweb.com resolves to the local machine
  4. service httpd start // service httpd graceful // service mysqld start
  5. w3m www.myweb.com and complete the WordPress registration. The website is up.
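If you prefer to edit wp-config.php non-interactively, the default placeholders from the sample file can be replaced with sed; a sketch, assuming the standard placeholder names and the database, user and password created above:

cd /var/www/wordpress
sed -i "s/database_name_here/mydb/" wp-config.php
sed -i "s/username_here/myuser/" wp-config.php
sed -i "s/password_here/mypassword/" wp-config.php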

Setting up SSL

  1. For SSL activation [HTTPS], install the required packages: yum install openssl mod_ssl
  2. openssl genrsa -out ca.key 2048 [generate a private key]
  3. openssl req -new -key ca.key -out ca.csr [generate the certificate signing request]
  4. openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt [generate a self-signed certificate]
  5. cp ca.crt /etc/pki/tls/certs
  6. cp ca.key /etc/pki/tls/private/ca.key
  7. cp ca.csr /etc/pki/tls/private/ca.csr
  8. vi /etc/httpd/conf.d/myweb.conf and add another vhost with the following values:
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/ca.crt
    SSLCertificateKeyFile /etc/pki/tls/private/ca.key
    DocumentRoot /var/www/wordpress
    ServerName www.myweb.com
    ServerAlias myweb.com
    <Directory /var/www/wordpress>
        Options FollowSymLinks
        Allow from all
    </Directory>
    ErrorLog /var/log/httpd/wordpress-error-log
    CustomLog /var/log/httpd/wordpress-access-log common
</VirtualHost>

  9. service httpd graceful and the website is up on HTTPS.


To make the website accessible from other hosts on the same NAT Network, edit /etc/hosts on each host and add the entry 10.0.2.4 myweb.com (name-to-IP mappings belong in /etc/hosts rather than /etc/resolv.conf).
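On each client VM this boils down to appending a single line; a small sketch, assuming the web server obtained 10.0.2.4 from the NAT Network DHCP:

echo "10.0.2.4   www.myweb.com myweb.com" >> /etc/hosts
ping -c 2 myweb.com    # confirm that the name resolves and the server is reachable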


Now that two servers are configured the same way, you can add a third server as a load balancer in front of them. What is most interesting is that end users (hosts) will only know about the load balancing server. I achieved this by installing Pound on the server used for load balancing: end users [hosts] access the load balancing server, which in turn decides which backend to use according to its master/slave priorities. Pound turns server3 into a reverse-proxy load balancing server; the aim is that HTTP/S requests from the hosts are forwarded to server 1 or 2 according to the configuration.
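Pound's configuration is short; here is a minimal sketch for server3, assuming the Pound package from EPEL, its default configuration file at /etc/pound.cfg, and hypothetical backend addresses 10.0.2.4 and 10.0.2.5 for servers 1 and 2:

yum install epel-release && yum install Pound
vi /etc/pound.cfg

ListenHTTP
    Address 0.0.0.0
    Port    80
End
Service
    BackEnd
        Address 10.0.2.4
        Port    80
    End
    BackEnd
        Address 10.0.2.5
        Port    80
    End
End

service pound start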

Based on this article, a new Bash project is being brewed on Github to automate the installation of WordPress, Apache, MySQL and all the applications specified above. This project should enable anyone to deploy a website through the script.


Getting started with the basics of Github

“GitHub is a Web-based Git repository hosting service. It offers all of the distributed revision control and source code management (SCM) functionality of Git as well as adding its own features. Unlike Git, which is strictly a command-line tool, GitHub provides a Web-based graphical interface and desktop as well as mobile integration. It also provides access control and several collaboration features such as bug tracking, feature requests, task management, and wikis for every project.” – Wikipedia


To get started with Github, you have to create an account on Github.com. Once the account has been created, download the Git tool which is used to manipulate your repository.


Here are the steps which you need to follow :

Register on Github and download the Git tool. The git tool is the command “git” on your Linux machine.

Create a repository on the Github website. I am actually working on a project using this repository – TheTunnelix/Deploy

Now, you can start setting up git locally. Here are some commands which you need to remember:

git config --global user.name "TheTunnelix" – Setting up a username

git config --global user.email "[email protected]" – Configuring your mail address

git clone https://github.com/TheTunnelix/Deploy.git – Cloning the Github repo that you have created onto your local machine.


git status – You can use this command to understand the status of the repo. For example, the number of files which have not been committed yet.

git add README.md – Adding a file places it in the staging area, between your working directory and the repository, ready to be included in the next commit.

git commit -m "updated readme file for a better description" – This records the staged changes as a commit in your local repository; it is not pushed to Github yet. Use git commit --help for more options. Remember, each commit has an ID so that you can roll back in time.

git remote add origin https://github.com/TheTunnelix/Deploy.git – If you did not clone the repo, you may need this command; when a repo is cloned, the origin remote has already been added for you.

git push -u origin master – the "u" option sets origin/master (the branch you are pushing to) as the upstream for your local branch. It will prompt you to enter your username and password. You will notice the message "Branch master set up to track remote branch master from origin", which means that things went well.

From the Github account, you will notice that the change has been carried out. Changes are done using the three steps ADD-COMMIT-PUSH; you cannot push without committing. If a new file is created locally and you run git status, it will show the test file you have created as untracked.

git add . – Another interesting command is git add ., where the dot means "add all new and modified files", followed by a git commit -m "test file is added" (the -m option provides the commit message). After the commit, git status will show that there is nothing to commit and the working tree is clean. Each git push will prompt you to enter your username and password.

git pull origin master – This will pull the latest content of the remote repo into your local copy.

There are also interesting options to explore on the Github website, such as the branch/master view. New branches can also be created, and the graphs are useful for viewing contributors to the project.
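For example, creating and publishing a new branch looks like this; a small sketch with a hypothetical branch name:

git checkout -b my-feature      # create the new branch and switch to it
git push -u origin my-feature   # publish it and set it as the upstream branch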