Africa Internet Summit 2018 – Hackathon – Day 2 & 3

Back to blogging after a few days, I still recall the moments in Dakar, Senegal, for the Africa Internet Summit. Many days have already elapsed since then, many have already blogged about the event, and more pictures keep raining on social media. Our camera, tripod and laptops were all ready. In case you missed Day 0 and Day 1, feel free to click on the links.

On day 2, Logan, Serge and I made a brief introduction to the Network Time Protocol. Serge explained the TCPDump and Wireshark tools we can use to understand NTP traffic. We also gave a demo of the NTP packet exchange between the client and the server. The algorithm behind it was made clear, brief and concise, since without that the hacking part would be difficult. Participants chose their projects for the hackathon. Some registered themselves for the Network Programmability track and the Intelligent Transportation Systems projects. By the end of day 2 we were already convinced the hackathon would be a success. Logan, Charles and I decided to have a beer at a nearby restaurant.

Day 3 was the moment everyone had been looking forward to: hacking on the code. The team spirit was there, and everyone was helping each other along their ‘parcours’ (journey). For the NTP hackathon, more and more participants started joining the team; additional chairs and tables were needed. The best idea was to split into groups, and this is where things changed for the good. Patches started raining. Several tests were also carried out to confirm the code was running.

At the end of the hackathon, each group went to present their project and achievements. Their presentation slides can be viewed on the AIS wiki.

Some interesting links:

  • More information about the NTP hackathon is already uploaded on the AIS wiki.
  • The meeting statistics and report can be viewed here.
  • There is also blog coverage by Charles from Cisco.
  • Dawit Bekele's speech at the Africa Internet Summit 2018.
  • Another interesting article by Kevin Chege from the Internet Society.

On the last day of the hackathon, Logan, Charles and myself made a video on the hackathon.

More and more pictures:


A big thank you to the organisers and sponsors for doing a great job. Congratulations as well to the participants for stepping up in the hackathon. Looking forward to seeing you soon, and to the growth and security of the Internet in Africa.

Africa Internet Summit 2018 – Getting ready for the Hackathon – Day 1

On day 1, I woke up early in the morning and went outside for a morning walk. Everyone in Senegal says Bonjour to each other, irrespective of being a stranger. The people of Dakar seem very polite and relaxed in nature. Whilst walking on the coastal road of Novotel, I admired the beauty of several massive Baobab trees. 

Baobab tree coastal road of Novotel

Back at the hotel, the breakfast was delicious, with lots of fruits and cakes accompanied by juice. By the time breakfast was over, it was already 8:00 AM. I took a panoramic picture from the back of the Novotel.

Panorama picture coastal road of Novotel

I had to get ready, as I needed to travel from the Novotel hotel to the Radisson Blu, where the hackathon preparation was going on. I met Serge-Parfait GOMA, instructor at the hackathon, together with Loganaden Velvindron from AFRINIC. On day 1, about 15 participants had already registered. We had to prepare for the hackathon, as it needed to be conducted in both English and French.

From Left to Right : Logan, Nitin and Serge

Preparing for the hackathon demands lots of time while trying to cover the maximum: from the basics up to the point where the code needs to be hacked. The project chosen was the NTP client. I created the slides in both English and French.

Whilst I was preparing the slides, Serge was busy setting up the Pidgin channel. We also tested the livestream. I brought a tripod for my iPhone 7, as it makes live YouTube video broadcasts so easy. We also checked out the hackathon room and carried out several tests. We were happy to be assisted by the folks from ISOC, who were always there to help. We reviewed the code anew and discussed a little about the RFCs and Internet Drafts for that specific hackathon.

Then it was time for dinner, where I met fellows from AFRINIC such as Duksh Koonjoobeeharry (Atlassian User Group of Mauritius and AFRINIC) and Tamon Mookoom from AFRINIC – that guy is an IPv6 ninja – as well as Charles Eckel from Cisco, who was also leading the hackathon on the network programmability track. I also met other people from the ISOC team, and Nishal Goburdhan, a FreeBSD evangelist who gave me a FreeBSD sticker.

Panorama view at the cocktail event

By the time dinner was over, it was already late. I went to meet the ISOC and AFRINIC folks, who were still working hard to set up the hackathon room. Then I took a cab and headed directly to the Novotel hotel.

In case you missed Day 0, do check the article here.

Africa Internet Summit 2018 – My first day in Dakar Senegal – Day 0

Dakar offers much to see and do, but my goal this trip lies elsewhere: facilitating the #AISdakar 2018 NTP (Network Time Protocol) hackathon, which had been planned days back at the Radisson Blu hotel. Network Time Protocol (NTP) packets, as specified by RFC 5905 [RFC5905], carry a great deal of information about the state of the NTP daemon which transmitted them. In the case of mode 4 packets (responses sent from server to client), as well as in broadcast (mode 5) and symmetric peering modes (mode 1/2), most of this information is essential for accurate and reliable time synchronization. However, in mode 3 packets (requests sent from client to server), most of these fields serve no purpose. Server implementations never need to inspect them, and they can achieve nothing by doing so. Populating these fields with accurate information is harmful to the privacy of clients, because it allows a passive observer to fingerprint clients and track them as they move across networks.
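To make the idea concrete, here is a rough Python sketch (not the code we hacked on at the event, and the helper name is mine) of a data-minimized mode 3 request: every field a server never inspects is left zero, and only the first byte and the transmit timestamp carry real values. The field layout follows RFC 5905.

```python
import struct
import time

def build_minimal_client_packet():
    """Build a minimal 48-byte NTP mode 3 (client) request.

    Per the data-minimization idea above, fields the server never
    inspects are zeroed; only LI/VN/Mode and the transmit timestamp
    are populated.
    """
    li_vn_mode = (0 << 6) | (4 << 3) | 3  # LI=0, VN=4, Mode=3 -> 0x23
    # Stratum, poll, precision, root delay, root dispersion,
    # reference ID, and reference/origin/receive timestamps: all zero.
    packet = struct.pack("!BBbbIIIQQQ", li_vn_mode,
                         0, 0, 0, 0, 0, 0, 0, 0, 0)
    # Transmit timestamp: seconds since 1900 in the upper 32 bits.
    ntp_seconds = int(time.time()) + 2208988800
    packet += struct.pack("!Q", ntp_seconds << 32)
    return packet
```

Sending this over UDP port 123 is left out on purpose; the point is simply how little a client actually needs to reveal.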

The trip from Mauritius to Senegal was lengthy, but at the same time I got to discover parts of Africa: from South Africa to Kenya, hitting Ivory Coast before reaching Dakar, Senegal. During our transit in Johannesburg, Logan and I discussed several aspects of the AIS hackathon 2018 over two large pizzas and beers. One of the main goals was to maintain clear and precise bilingual communication in English and French. Our next objective was to make sure the required level would be reached for the hackathon.

I did not know the plane would land in Ivory Coast before heading towards Dakar. Gazing out of the plane offered us unique, breathtaking views of the infrastructure, the land and the landscapes.


Disembarking at Dakar International Airport, I was met by a driver who works for a well-reputed company, Prestige. Logan was met by another company. Whilst travelling to the hotel, the driver was curious and inquisitive about computer repairs. I gave him some tips, such as YouTube tutorials and some helpful links.

After landing, I headed directly to the Novotel Hotel in Dakar, where I checked in and received a warm welcome from the staff. Tired after long hours of travelling, a nap was very much needed before anything else. The view from the hotel room was magnificent, with a swimming pool and the beach nearby.


By the time I woke up it was already late, about 21:00 hrs. I went to the Radisson Blu and met Kevin Chege and other delegates at the gala dinner. The atmosphere was friendly, welcoming and promising.



 Next article coming up soon..

Debugging disk issues with blktrace, blkparse, btrace and btt in Linux environment

blktrace is a block layer IO tracing mechanism which provides detailed information about request queue operations up to user space. There are three major components: a kernel component, a utility to record the i/o trace information from the kernel to user space, and utilities to analyse and view the trace information. The blktrace utility itself records the i/o event trace information for a specific block device to a file.


Photo credits :

Limitations and Solutions with blktrace

There are several limitations, and solutions, when using blktrace. We will focus mainly on its goal and how the blktrace tool can help us in our day-to-day tasks. Assuming you want to debug an IO-related issue, the first command will probably be iostat. These utilities can be installed with yum install sysstat blktrace. For example:

iostat -tkx -p 1 2

The limitation with iostat is that it does not tell us which process is using which IO. To resolve this, we can use another utility such as iotop. Here is an example of iotop output.

blktrace, iotop, blkparse, btt and btrace

Here iotop shows us exactly which process is consuming which IO, and how much. But another problem with that solution is that it does not give detailed information. So blktrace comes in handy, as it gives layer-wise information: it sees what is going on exactly inside the block I/O layer. When used correctly, it is possible to generate events for every I/O request and monitor where each one evolves. Though it extracts data from the kernel, it is not an analysis tool, and the interpretation of the data needs to be carried out by you. However, you can feed the data to btt or blkparse to get the analysis done.

Before looking at blktrace, let's check out the I/O architecture. Basically, at user space the user will write and read data; this is the User Process in the diagram. Users do not write directly to the disk. They first write to the VFS page cache, from which the I/O scheduler and the device driver interact with the disk to write the data.

Photo credits:

blktrace will normally capture events during the process. Here is a cheat sheet to understand the blktrace event capture.

photo credits:

blkparse will parse and format events acquired from blktrace. If you do not want to run blkparse separately, btrace is a shortcut that generates data out of blktrace and blkparse in one go. Finally, we have btt, which analyzes data from blktrace and generates time deltas for each layer.

Another tool to grasp before moving on with blktrace is debugfs, which is a simple-to-use RAM-based file system specially designed for debugging purposes. "It exists as a simple way for kernel developers to make information available to user space. Unlike /proc, which is only meant for information about a process, or sysfs, which has strict one-value-per-file rules, debugfs has no rules at all. Developers can put any information they want there." – LWN

So the first thing to do is to mount the debugfs file system using the following command:

mount -t debugfs debugfs /sys/kernel/debug

The aim is to allow a kernel developer to make information available in user space. You can use the mount command to verify that the operation was successful. Now that you have the debug file system mounted, you can capture the events.

Diving into the commands

1. You can use blktrace to trace the I/O on the machine.

blktrace -d /dev/sda -o -|blkparse -i -

2. At the same time, in another console, launch the following command to generate some I/O for testing purposes.

dd if=/dev/zero of=/mnt/test1 bs=1M count=1

From the blktrace console you will get an output which ends as follows:

CPU0 (8,0):
 Reads Queued:           2,       60KiB Writes Queued:       5,132,   20,524KiB
 Read Dispatches:        2,       60KiB Write Dispatches:       61,   20,524KiB
 Reads Requeued:         0 Writes Requeued:         0
 Reads Completed:        2,       60KiB Writes Completed:       63,   20,524KiB
 Read Merges:            0,        0KiB Write Merges:        5,070,   20,280KiB
 Read depth:             1         Write depth:             7
 IO unplugs:            14         Timer unplugs:           9
Throughput (R/W): 8KiB/s / 2,754KiB/s
Events (8,0): 21,234 entries
Skips: 166 forward (1,940,721 -  98.9%)

3. The same result can also be achieved using the btrace command. Apply the same principle as in part 2 once the command has been launched.

btrace /dev/sda

4. In parts 1 and 3, the blktrace commands were launched in such a way that they run forever, without exiting. In this particular example, I will output to a file for analysis. Assuming you want to run blktrace for 30 seconds, the command is as follows:

blktrace -w 30 -d /dev/sda -o io-debugging

5. On another console, launch the following command:

dd if=/dev/sda of=/dev/null bs=1M count=10 iflag=direct

6. Wait 30 seconds to allow step 4 to complete. I got the following results just after:

# blktrace -w 30 -d /dev/sda -o io-debugging
=== sda ===
  CPU  0:                  510 events,       24 KiB data
  Total:                   510 events (dropped 0),       24 KiB data

7. You will notice the file created in the /mnt directory. To read it, use the blkparse command.

blkparse io-debugging.blktrace.0 | less

8. Now let’s see a simple extract from the blkparse command:

8,0    0        1     0.000000000  5686  Q   R 0 + 1024 [dd]
8,0    0        0     0.000028926     0  m   N cfq5686S / alloced
8,0    0        2     0.000029869  5686  G   R 0 + 1024 [dd]
8,0    0        3     0.000034500  5686  P   N [dd]
8,0    0        4     0.000036509  5686  I   R 0 + 1024 [dd]
8,0    0        0     0.000038209     0  m   N cfq5686S / insert_request
8,0    0        0     0.000039472     0  m   N cfq5686S / add_to_rr

The first column shows the device major,minor tuple; the second column gives the CPU; then come the sequence number, the timestamp and the PID of the process issuing the IO. The sixth column shows the event type, e.g. 'Q' means IO handled by request queue code; please refer to the diagram above for more info. The seventh column is R for Read, W for Write, D for Discard, B for Barrier operation. Then comes the block number, and a following + number is the number of blocks requested. The final field, between the [ ] brackets, is the name of the process issuing the request. In our case, I am running the dd command.
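As an illustration of that column layout, here is a short Python sketch (the function name is mine, and this is not part of blkparse) that splits one of the event lines above into named fields:

```python
def parse_blkparse_line(line):
    """Split a blkparse event line into named fields."""
    fields = line.split()
    maj, minor = fields[0].split(",")
    event = {
        "device": (int(maj), int(minor)),  # major,minor tuple
        "cpu": int(fields[1]),
        "sequence": int(fields[2]),
        "timestamp": float(fields[3]),
        "pid": int(fields[4]),
        "action": fields[5],               # e.g. Q, G, P, I
        "rwbs": fields[6],                 # e.g. R for read, W for write
    }
    # Block number and request size follow for read/write events.
    if len(fields) >= 11 and fields[8] == "+":
        event["block"] = int(fields[7])
        event["nblocks"] = int(fields[9])
        event["process"] = fields[10].strip("[]")
    return event

evt = parse_blkparse_line(
    "8,0    0        1     0.000000000  5686  Q   R 0 + 1024 [dd]")
```

Here evt["action"] comes back as "Q" and evt["process"] as "dd", matching the first line of the extract above.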

9. The output can also be analyzed using the btt command. You will get almost the same information.

btt -i io-debugging.blktrace.0

Some interesting information here: D2C is the amount of time the IO spends in the device, whilst Q2C is the total time taken from queueing to completion, which can be larger since there might be different IOs running concurrently.
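With hypothetical numbers (these are illustrative values, not from the trace above), the relationship between these metrics is simply additive:

```python
# Hypothetical btt-style averages, in seconds:
q2d = 0.000085   # Q2D: time spent queued in the block layer before dispatch
d2c = 0.001210   # D2C: time the device took to service the request
q2c = q2d + d2c  # Q2C: total latency seen from queueing to completion

# Share of the total latency actually spent inside the device:
device_share = d2c / q2c
```

A high device_share points at the disk itself; a low one points at queueing in the block layer.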

A graphical tool to make life easier

The Seekwatcher program generates graphs from blktrace runs to help visualize IO patterns and performance. It can plot multiple blktrace runs together, making it easy to compare the differences between different benchmark runs. You should install the seekwatcher package if you need to visualize detailed information about IO patterns.

The command used to generate a picture for analysis is as follows, where seek.png is the name given to the output PNG.

seekwatcher -t io-debugging.blktrace.0 -o seek.png

What is also interesting is that you can create a movie-like animation with seekwatcher to view the graph over time.

seekwatcher -t io-debugging.blktrace.0 -o seekmoving.mpg --movie


  • For debugfs, you can also edit the /etc/fstab file to make the mount point permanent.

  • By default, blktrace will capture all events. This can be limited with the -a argument.
  • In case you want to capture events persistently for a long time, or for a certain amount of time, use the -w argument.
  • blktrace will also store the extracted data in the local directory in the format device.blktrace.cpu, for example sda.blktrace.0.
  • At steps 1 and 3, you will need to press CTRL+C to stop blktrace.
  • As seen in part 2, you created a test file test1; do delete it, as it may consume disk space on your machine.

  • In part 2, the size of the file created in /mnt will not exceed 1M, since that size was specified in the command.
  • In part 9, you will notice several other pieces of information which can be helpful, such as D2C and Q2C:
    • Q2D latency – time from request submission to the device.
    • D2C latency – device latency for processing the request.
    • Q2C latency – total latency; Q2D + D2C = Q2C.
  • To be able to generate the movie, you will have to install mencoder with all its dependencies.
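For the fstab tip above, the entry could look like this (a sketch; options may vary per distribution):

```shell
# /etc/fstab entry to mount debugfs automatically at boot
debugfs  /sys/kernel/debug  debugfs  defaults  0  0
```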

Linux memory analysis with Lime and Volatility

"LiME is a Loadable Kernel Module (LKM) which allows for volatile memory acquisition from Linux and Linux-based devices, such as Android. This makes LiME unique as it is the first tool that allows for full memory captures on Android devices. It also minimises its interaction between user and kernel space processes during acquisition, which allows it to produce memory captures that are more forensically sound than those of other tools designed for Linux memory acquisition." – LiME. The Volatility framework was released at Black Hat DC for the analysis of memory during forensic investigations.

Analysing memory in Linux can be carried out using LiME, a forensic tool to dump the memory. I am using a CentOS 6 distribution installed on VirtualBox to acquire memory. Normally, before capturing the memory, the suspicious system's architecture should be well known. You may need to compile LiME on the suspicious machine itself if you do not know the architecture. Once you compile LiME, you get a loadable kernel object which can be inserted into the Linux kernel itself.

Linux memory dump with Lime

1. You will first need to download LiME on the suspicious machine.

git clone

2. Compile LiME. Once it has been compiled, you will notice the creation of the LiME loadable kernel object.


3. Now the kernel object has to be loaded into the kernel. Insert the kernel module, defining the location and format in which to save the memory image.

insmod lime-2.6.32-696.23.1.el6.x86_64.ko "path=/Linux64.mem format=lime"

4. You can verify whether the module has been loaded successfully.

lsmod | grep -i lime

Analysis with Volatility

5. We will now analyze the memory dump using Volatility. Download it from GitHub.

git clone

6. Now we will create a Linux profile. We will also need to download the DwarfDump package. Once it is downloaded, go to the tools/linux directory, then create the module.dwarf file.

yum install epel-release libdwarf-tools -y && make

7. To proceed further, one more file is important to build the profile: it contains the locations of all the functions active in the compiled kernel. You will notice it inside the /boot directory. It is also important to corroborate the version appended to that file with the version and architecture of the kernel. In the example below, the version is 2.6.32-696.23.1.el6.x86_64.

8. Now go to the root of the Volatility directory using cd ../../ (assuming you are still in the linux directory). Then create a zip file as follows:

zip volatility/plugins/overlays/linux/ tools/linux/module.dwarf /boot/

9. The Volatility profile has now been successfully created, as indicated in part 8, for the particular Linux distribution and kernel version. Time to have fun with some Python scripts. You can view the profile created with the following command:

python --info | grep Linux

As you can see, the LinuxCentos6-2632 profile has been created.

10. Volatility contains plugins to view details about the memory dump performed. To view the plugins or parsers, use the following command:

python --info | grep -i linux_

11. Now imagine you want to see the processes that were running at the time of the memory dump. You will have to execute the script, specify the location of the memory dump, define the profile created and call the parser concerned.

python --file=/Linux64.mem --profile=LinuxCentos6-2632x64 linux_psscan

12. Another example to recover the routing cache memory:

python --file=/Linux64.mem --profile=LinuxCentos6-2632x64 linux_route_cache

Automating LiME using LiMEaide

I find the LiMEaide tool really interesting for remote execution of LiME. "LiMEaide is a python application designed to remotely dump RAM of a Linux client and create a volatility profile for later analysis on your local host. I hope that this will simplify Linux digital forensics in a remote environment. In order to use LiMEaide all you need to do is feed a remote Linux client IP address, sit back, and consume your favorite caffeinated beverage." – LiMEaide


  • The Linux architecture is very important when dealing with LiME. This is probably the first question one would ask.
  • The kernel-headers package is a must to create the kernel loadable object.
  • Once a memory dump has been created, it is important to take a hash value. This can be done using the command md5sum Linux64.mem.
  • I would also consider downloading the development tools using yum groupinstall "Development Tools" -y.
  • As good practice, as indicated in part 8 when creating the zip file, use a proper naming convention. In my case, I used the OS version and the kernel version for future reference.
  • Not all parsers/plugins will work with Volatility, as some might not be compatible with the Linux system.
  • You can check out the Volatility wiki for more info about the parsers.
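The hashing tip above can also be scripted. Here is a small Python sketch (the function name is mine) that hashes a dump in chunks, so a multi-gigabyte image never needs to fit in memory:

```python
import hashlib

def hash_dump(path, algo="md5", chunk_size=1 << 20):
    """Hash a memory dump in 1 MiB chunks and return the hex digest."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

For example, hash_dump("/Linux64.mem") should match the output of md5sum /Linux64.mem, and passing algo="sha256" gives a stronger digest for the chain of custody.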