Week 9 at RC

Monday: Worked on the presentation “Life of a C Program” and presented it in the evening.

Tuesday: Paired with May to discuss the steps in “Life of a C Program” and, in the process, made the presentation better, both visually and organization-wise. Started reading the paper on Raft.

Wednesday: Continued reading the paper on Raft. Discussed the Safety Argument section (5.4.3) of the Raft paper with others. Continued to write the blog post “How do debuggers set breakpoints?”.

Thursday: Worked on understanding closures in Go.

Friday: Worked on my job profile and read about goroutines.

Saturday: Worked with May and Juliano on the Sanctuary project at the MD5 hackathon. It was a great experience learning about Express, Leaflet, and Bower. Ended up contributing a few lines of code at the end of the day.


How breakpoints are set

I am fascinated by debuggers. I love them so much that I recently wrote a small, very basic debugger as one of my projects. In this post I am going to write down what I’ve learned about how a debugger can set a breakpoint. This post can be divided into the following sections.

What’s a breakpoint?
What’s a debugger?
What does the debugger need to do to set a breakpoint?
How can the debugger make the debuggee process halt?

What’s a breakpoint?

A breakpoint makes your program stop whenever a certain point in the program is reached.

What’s a debugger?

You can consider your debugger to be a program which calls fork() to create a child process and then calls execl() to load the program we want to debug. I used execl() in my code, but any of the system calls from the exec family of functions can be used.

[Screenshot: the debugger process]
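Since the screenshot isn’t reproduced here, here’s a minimal sketch of what that top level can look like (run_debugger() and the exact error handling are illustrative placeholders, not the exact code from the screenshot):

#include <cstdio>
#include <sys/types.h>
#include <unistd.h>

void run_child(const char *program);   // shown below: PTRACE_TRACEME + execl()
void run_debugger(pid_t child_pid);    // waits on the child and sets breakpoints

int main(int argc, char **argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <program to debug>\n", argv[0]);
        return 1;
    }

    pid_t child_pid = fork();
    if (child_pid == 0) {
        run_child(argv[1]);            // child: will become the debuggee
    } else if (child_pid > 0) {
        run_debugger(child_pid);       // parent: the debugger
    } else {
        std::perror("fork");
        return 1;
    }
    return 0;
}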

And here’s the run_child() function, which calls execl() with the debuggee executable’s name and path.
[Screenshot: the run_child() function]
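Again, the screenshot isn’t reproduced here, so here is a simplified sketch of what run_child() looks like:

#include <cstdio>
#include <sys/ptrace.h>
#include <unistd.h>

void run_child(const char *program) {
    // Tell the kernel that this process wants to be traced by its parent.
    if (ptrace(PTRACE_TRACEME, 0, nullptr, nullptr) < 0) {
        std::perror("ptrace(PTRACE_TRACEME)");
        return;
    }
    // Replace this process image with the program we want to debug.
    execl(program, program, (char *)nullptr);
    std::perror("execl");   // reached only if execl() fails
}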

We see a call to ptrace() in the run_child() function before the call to execl(). Let’s, for the moment, not go into what ptrace() is, even though it’s very important for understanding how a debugger works. We will eventually come to it.

Now we have two processes running –

  1. The debugger as the parent process.
  2. And the debuggee as the child process.

Let’s now try to think abstractly, in a somewhat hand-wavy manner, about what a debugger needs to do to set a breakpoint in the child process. The debugger needs the child process to stop where the breakpoint has been set. But how?

What does the debugger need to do to set a breakpoint?

Let’s examine the phrase “setting a breakpoint” a little more closely. We say a process is running when the instructions of the process are being executed by the processor. Where are those instructions located? In the text/code section of the process’s virtual memory.

By setting a breakpoint we expect the debuggee process to halt at a certain point. Which means that we expect the debuggee process to stop just before executing some instruction. What can that instruction be? Well, if the user has set a breakpoint at the beginning of a function, it’s the first instruction at the start of the function. If the user has set a breakpoint at some line number in some file, the instruction is the first instruction in the series of instructions that correspond to that line.

Therefore the debugger needs to make the process halt right before executing that instruction.

In my project, I chose the instruction underlined in the following screenshot.
[Screenshot: the disassembly with the chosen instruction underlined]

How can the debugger make the debuggee process halt right before executing a particular instruction?

The debugger replaces the instruction (or at least part of it) at the breakpoint address in the debuggee process with an instruction that generates a software interrupt. Hence, when this modified instruction is executed by the processor, the SIGTRAP signal is delivered, and that is all that is needed to make the process stop. I have skipped a lot of details here, but we will discover them as we go.

But let’s first discover how the debugger modifies an instruction.
The instructions reside in the process’s text section, which is mapped into virtual memory when the process is loaded. Hence, to modify an instruction, the debugger needs to know the address of that instruction.

How does the debugger find out the address of an instruction?

If you are compiling a C/C++ program, you pass the “-g” option to the compiler to generate some extra information. That extra information contains the mappings from source lines and function names to addresses, and it is stored in the object file in a format known as DWARF. On Linux, the DWARF format is used to store the debug information inside the ELF file. Isn’t that cool! ELF stands for Executable and Linkable Format; it’s the format used to represent object files, executable files and shared libraries.

What is the instruction that the debugger puts in place of the original instruction?

The debugger overwrites the first byte of the location where the original instruction was with the instruction “int 3”. “int 3” has a 1-byte opcode, 0xCC, hence the debugger touches only the first byte at that memory address.

What’s an “int 3” instruction? “int n” instructions generate a call to the exception handler specified by the destination operand. “int 3” generates a call to the debug exception handler. The exception handler is part of the kernel code, and it delivers the SIGTRAP signal to the process – in our example, the process we are debugging.

How and when does the debugger change an instruction of the debuggee process?

And now the most exciting part of this post has arrived: the debugger does it using a very powerful system call, ptrace().

Let’s understand ptrace().
What does ptrace() do? ptrace() is a system call by which our debugger controls the execution of the process we are debugging. The debugger uses ptrace() to examine and change the memory and registers of that process.

If you look at the code here, before execl()’ing the debuggee we called ptrace() with PTRACE_TRACEME, which indicates that this process is to be traced by its parent process, i.e., the debugger.

The man page for ptrace mentions this –

If the PTRACE_O_TRACEEXEC option is not in effect, all successful calls to execve(2) by the traced process will cause it to be sent a SIGTRAP signal, giving the parent a chance to gain control before the new program begins execution.

This simply means that the debugger gets notified via the wait()/waitpid() system call just before the debuggee’s program begins execution. And there the debugger has its golden chance to modify the text/code section of the process being debugged.

It also means that while being traced, the debuggee process will stop each time a signal is delivered, and the tracer (the debugger) will be notified at its next call to waitpid(). Hence, when the SIGTRAP signal is delivered to the debuggee process, it will stop and the debugger will be notified, which is exactly what we want: we want the debuggee process to stop and the debugger to get notified when the debuggee executes the “int 3” instruction. The debugger learns why the debuggee stopped from the status returned by waitpid().

The default behaviour of SIGTRAP is to dump core and abort the process, but we cannot debug a process if it’s killed, can we? Hence the debugger ignores the SIGTRAP signal and lets the debuggee process continue.

Here’s the code that sets the “int 3” instruction at the first byte of the address that contained the original instruction. As you can see, the same old ptrace() is used first to read the original instruction at that address, so that we can save it and restore it later. Then the same old ptrace() is used with a different request, PTRACE_POKETEXT, to write the new instruction which has “int 3” as its first byte.
[Screenshot: the code that sets the “int 3” instruction]
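If you can’t see the screenshot, a sketch of the idea looks roughly like this (the function name and the assumption of an 8-byte word are mine, not necessarily the original code):

#include <cstdint>
#include <sys/ptrace.h>
#include <sys/types.h>

uint64_t set_breakpoint(pid_t child_pid, uint64_t addr) {
    // Read the original 8-byte word at the breakpoint address.
    uint64_t original = ptrace(PTRACE_PEEKTEXT, child_pid, (void *)addr, nullptr);

    // Overwrite only its first byte with 0xCC ("int 3"), keeping the rest intact.
    uint64_t trapped = (original & ~0xFFULL) | 0xCC;
    ptrace(PTRACE_POKETEXT, child_pid, (void *)addr, (void *)trapped);

    return original;   // saved so that the instruction can be restored later
}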

What does the debugger do now that the “breakpoint has been hit”?

  • First, the debugger needs to restore the original instruction at the address where the breakpoint has been set.
  • Second, once the original instruction has been restored, it should be executed when the debugger lets the debuggee process resume execution.

How does the debugger restore the original instruction? The same way the debugger sets “int 3” to set a breakpoint. Here’s the code. While setting the breakpoint we saved the original instruction; now all we need to do is write it back at the given address.
[Screenshot: the code that restores the original instruction]
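In sketch form (same caveats as before):

#include <cstdint>
#include <sys/ptrace.h>
#include <sys/types.h>

void remove_breakpoint(pid_t child_pid, uint64_t addr, uint64_t original) {
    // Write the saved word back over the "int 3" we planted earlier.
    ptrace(PTRACE_POKETEXT, child_pid, (void *)addr, (void *)original);
}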

How is the original instruction in the debuggee process then executed?
The program counter of the debuggee now points to the next instruction, since the processor has already executed the instruction at the address where we had our “int 3”.

To make the processor execute the original instruction in the debuggee process, we need to set the debuggee’s program counter – %eip (on x86 machines) or %rip (on x86-64 machines) – back to that address.

And how can we set the instruction pointer of the debuggee process?
Using ptrace()! ptrace() has this super awesome capability of letting us “change the tracee’s memory and registers.” PTRACE_GETREGS makes ptrace() copy the general purpose registers of the debuggee process into a struct, and PTRACE_SETREGS writes modified values back into the debuggee process’s general purpose registers. Here’s the code that does that –
[Screenshot: the code that reads and writes the debuggee’s registers]
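Here’s a sketch for x86-64; struct user_regs_struct comes from <sys/user.h>, and the function name is just illustrative:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

void rewind_to_breakpoint(pid_t child_pid) {
    struct user_regs_struct regs;

    // Copy the debuggee's general purpose registers into regs.
    ptrace(PTRACE_GETREGS, child_pid, nullptr, &regs);

    // The processor has already executed the 1-byte "int 3", so %rip is one
    // byte past the breakpoint address; move it back.
    regs.rip -= 1;

    // Write the modified registers back into the debuggee.
    ptrace(PTRACE_SETREGS, child_pid, nullptr, &regs);
}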

Once the debugger has restored the debuggee process’s program counter, it lets the process continue, and the way to do that is the following –
[Screenshot: the code that continues the debuggee]
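Roughly, it’s a PTRACE_CONT followed by another waitpid() (again a sketch, not the exact screenshot):

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

void continue_debuggee(pid_t child_pid) {
    int status;
    ptrace(PTRACE_CONT, child_pid, nullptr, nullptr);   // resume the debuggee
    waitpid(child_pid, &status, 0);                     // block until it stops again or exits
}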

And that’s how a debugger can set breakpoints.

The full code of the debugger can be found here.

I presented on this topic at the Thursday evening presentation at RC. Here’s the link to the slides.

References:
Eli Bendersky’s articles on debuggers
Call to interrupt procedures
Interrupts and interrupt handlers


Week 8 at RC

Day 1: Went for an interview.

Day 2: Paired with Mike and Dan to write up the IngestorTest unit test. Did the code dojo and an exercism challenge on Go.

Day 3: Paired with Mike and Dan to finish a test case in the IngestorTest. Finished the code dojo problem. Started learning network programming in Go from this tutorial.

Day 4: Worked on the presentation on debuggers and presented it at the Thursday evening presentations.

Day 5: Learned about cache organisation, and how to optimize code taking the caches into consideration, from the Hardware Software Interface course.


Week 7 at Recurse Center

Day 1: First day of the Fall 2 batch. Quite a busy and exhausting day. Started reading the chapter on linking from “Computer Systems – A Programmer’s Perspective”. Gave a talk based on the delightful article “A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux”. Tried to understand part 3 of Eli Bendersky’s article on How Debuggers Work.

Day 2: Again tried to understand part 3 of Eli Bendersky’s article on how debuggers work. It looks like if I want to implement setting breakpoints on function names instead of memory addresses, I’ll have to use a library to parse DWARF, which is not looking super interesting right now. Read some more about linkers from “Computer Systems – A Programmer’s Perspective”.

Day 3: Set the debugger aside for the time being, as parsing DWARF with a library doesn’t sound too interesting at this point. Started learning Go: installed it and ran the first “Hello World” program. Resumed BitFunnel after a long time and got stuck again. Need to do something about that.

Day 4: Did some part of A Tour of Go.


Week 6 at Recurse Center

Day 1: Practiced some graph and tree algorithms problems. Got to know about the Linux stack frames from the Hardware Software Interface course. Wrote the code mentioned in the first part of the debugging tutorial. Rode a Ferris Wheel in the evening.

Day 2: Practiced a couple of algorithm problems. Today I got to know about the register saving conventions in x86 and x86_64 from the Hardware Software Interface course. Did Code Dojo today and wrote a blog post about the Code Dojo problem. Talked to Arpith about the debugger that I am writing.

Day 3: Wrote a blog post about Euler Problem 24. My debugger can now count the number of instructions it executes. But every time I run it, the number of instructions keeps increasing, which is a mystery that needs to be solved.

Day 4: Continued with the Hardware Software Interface course. Tried to solve their Lab 2 exercises.


Cool bitwise stuff

I just learned this cool bit manipulation stuff yesterday and I am dying to write it down. Here it is –

Let’s write a program to decide if a number is a power of 2.

[Screenshot: the iterative power-of-2 check]
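Since the screenshot isn’t reproduced here, here’s one plausible version of that straightforward approach (a reconstruction, not the exact code): it walks over every bit and counts how many of them are set.

bool isPowerOfTwoIterative(unsigned int n) {
    int setBits = 0;
    // Walk over all 32 bit positions and count the set bits.
    for (int i = 0; i < 32; ++i) {
        if (n & (1u << i)) {
            ++setBits;
        }
    }
    // A power of two has exactly one set bit.
    return setBits == 1;
}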

The loop in the above code runs over every bit, so it executes at least 31 times on a 32-bit or 64-bit Linux machine.

Can it be made better? Can we check whether a number is a power of 2 using fewer instructions?

Let’s park that question for a bit and talk about number representations. Computers represent signed integers in 2’s complement. What that means is that a negative number -N is represented in the following form –

(2^b - N)

N = the magnitude of the number
b = the number of bits

You can get the 2’s complement representation of the number -X by doing the following in binary –

~X + 1

X = binary representation of the positive integer

Let’s take an example.

The 8 bit binary representation of 4 is = 0000 0100
The 8 bit binary representation of -4 is = ~(0000 0100) + 1 = 1111 1011 + 1 = 1111 1100

Let’s take another example.

The 8 bit binary representation of 6 is = 0000 0110
The 8 bit binary representation of -6 is = ~(0000 0110) + 1 = 1111 1001 + 1 = 1111 1010

Now if we AND a number with its 2’s complement representation we see something useful –

In case of 4, AND = 0000 0100
In case of 6, AND = 0000 0010

What does the above pattern say? It says that the lowest set bit in the original number is the only bit set in the result of the AND. In the case of 4, the third bit was the lowest set bit, and it’s set in the result of the AND too. In the case of 6, the second bit was the lowest set bit, and it’s set in the result of the AND too.

Now if you think of all the powers of two, they have exactly one bit set. If we represent integers in 8 bits, then the possible powers of 2 in this range are –

0000 0001
0000 0010
0000 0100
0000 1000
0001 0000
0010 0000
0100 0000
1000 0000

This means that for powers of two, the AND operation will always give us back the same number. Hence we can just check the input number for equality with the result of the AND to detect whether the number is a power of 2, and it’s faster than the previous code too.

Here’s the code
[Screenshot: the bitwise power-of-2 check]
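In sketch form the whole check boils down to one line; the extra n != 0 test guards against the special case of zero:

bool isPowerOfTwoFast(unsigned int n) {
    // -n is the 2's complement of n; the AND keeps only the lowest set bit.
    return n != 0 && (n & -n) == n;
}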

I profiled both functions by calling them on 1 billion numbers. The version using the 2’s complement trick takes the following time –

real    0m5.846s
user    0m5.830s
sys     0m0.010s

The old iterative version on the same set of numbers takes the following time

real    2m9.093s
user    2m7.756s
sys     0m0.660s

And as you can see, the bitwise version is roughly 22 times faster than the iterative version.


Week 5 at Recurse Center

I am going to do something new starting this week. Well, it’s not completely new, since I used to do it at my last job: I am going to write down my weekly goals so that they are documented and I can always see the path I have taken compared to the goals I set at the start of the week.

Day 1: Started reading about distributed systems from Distributed Systems for Fun and Profit. Got introduced to the Raft consensus algorithm through this amazing visualization called The Secret Lives of Data.

Started reading on how to write a debugger. The Beginner’s Guide is a great starting point. That also made me curious about the lifetime of a C program and what happens at the linker stage. Another Beginner’s Guide to Linker was a great resource.

Also continuing to make notes from the Hardware-Software Interface course. Now things have started to get interesting as I am learning about the ISA and assembly.

The motivation for this week’s exploration of the systems side is this brilliant zine on debugging by Julia Evans and my lingering curiosity about what-happens-when-a-C-program-runs.

Day 2: Went through a two-part article called Hello from a libc-free world. Went through an hour-long Go tutorial.

Day 3: Went through the Hardware Software Interface tutorial. I am now assembly literate, which means that I can read a little bit of assembly and understand what it is doing. I also wrote a blog post about NoteRanger. Tried to solve the crazy bit manipulation problems given in lab 1 of the Hardware Software Interface tutorial.

Day 4: Continued with the Hardware Software Interface tutorial. Tried some more crazy bit manipulation problems. Resumed going through How Debuggers work.


Euler Problem 24

Tuesdays are Code Dojo days at RC, where you pair with another person and try to solve a problem in one hour. This Tuesday’s problem was Euler Problem no. 24. Here’s the problem –

A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:
012 021 102 120 201 210
What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?

I paired with Vaibhav and we tried to solve it in Python. Here’s one of the approaches we took to solve the problem.
[Sketches illustrating the approach]

And here’s the code in C++
[Screenshot: the C++ code]
But it was giving a slightly incorrect result, if you know what I mean. We were getting 278391560, but the correct answer is 2783915460. Can you tell the difference? It’s really only a matter of a digit…

Anyway, I then set out to solve the mystery of the vanishing digit. And I found out that our algorithm was not handling the case where the number of permutations of the remaining digits exactly matches the number of slots left to fill. That’s a confusing statement, so let me present another of my sketches –
[Sketch illustrating the edge case]

It turns out that when we land in a situation like the above, it means –

1. We know the first digit (in the above example’s case just the first digit, but in our earlier example we had been calculating the first, second, third and so on).
2. The slots left in the set are completely filled up by all possible permutations of the rest of the digits.

So what will be the largest possible number in the permutation of rest of the digits?

One easy way of getting it is this: if the digits are in ascending order, you just have to read them in the opposite order.

In the smaller example’s case I know that
1. The first digit would be 1.
2. The rest of the digits are {0, 2}. Hence the largest number from the permutations of {0, 2} is 20.

Hence the number is 120! Phew!

Here’s the code that fixed the bug –
[Screenshot: the fixed code]
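Since the screenshot isn’t reproduced here, here’s a compact C++ sketch of the corrected idea using the factorial number system (a reconstruction of the approach, not the exact code we wrote):

#include <iostream>
#include <string>

int main() {
    std::string digits = "0123456789";
    long fact[11];
    fact[0] = 1;
    for (int i = 1; i <= 10; ++i) fact[i] = fact[i - 1] * i;

    long k = 1000000 - 1;              // zero-based index of the millionth permutation
    std::string result;
    for (int pos = 9; pos >= 0; --pos) {
        long idx = k / fact[pos];      // which of the remaining digits comes next
        k %= fact[pos];
        result += digits[idx];
        digits.erase(idx, 1);          // remove the chosen digit from the pool
    }
    std::cout << result << '\n';       // prints 2783915460
    return 0;
}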


NoteRanger Beta – a basic text based search

I started taking notes in markdown format following David Branner’s suggestion. Here are the two main reasons.

  1. They look much better on a browser than on a text editor on a terminal.
  2. I can access them anywhere.

All I needed now was a search mechanism like the one David has. I forked his notes repo, but on a particular Thursday I started to build one of my own, for reasons that do not fall under the scope of this blog.

NoteRanger has two components – the Indexer and the Search. I wrote my own indexer in C++, and I am using David’s JavaScript code, with some additions and modifications, for the search.

Here’s the search and here’s the code.

Indexing

To use NoteRanger, you have to first index the documents. The indexer then generates two databases which the Search uses to show you the search results.

Usage

Collect the list of documents and put it in a file. Here’s my file containing the list of files that I need to perform the search on.

[Screenshot: the notes_filelist file]

I created a softlink to my notes directory in the directory where my executable is. Hence I use the following command to generate the index in the js directory.


$ ./noteranger notes_filelist js/

Implementation

The indexing part is written in C++ and uses some C++11 features. It can be built using the Makefile. You will need a g++/clang++ that supports C++11, and the Boost libraries.

It has a parser which parses (duh!) the documents and discards invalid tokens (for instance, I don’t store special characters like “/”, “#”, “*”, etc.). Then it builds two databases. One is called TermDB – the database storing the terms. The TermDB is in the format of an inverted index, or posting list. “Posting list” is just a fancy name for the list of all documents in which a term occurs.

The TermDB stores the terms and the list of documents in which the term occurs in a map. The list is actually a list of document IDs. The document IDs are generated by hashing the string containing the path to the document.

The other database is called DocumentDB. The DocumentDB is a map of document IDs and the path to the document. It also contains the header of the document.
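To make that concrete, here’s a hypothetical C++ sketch of the two structures (the type and variable names are illustrative, not the actual ones in the code):

#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

using DocId = std::size_t;

// TermDB: term -> sorted list of IDs of the documents that contain it (the posting list).
std::map<std::string, std::vector<DocId>> termDB;

// DocumentDB: document ID -> (path to the document, header of the document).
std::map<DocId, std::pair<std::string, std::string>> documentDB;

// A document ID is generated by hashing the path to the document.
DocId makeDocId(const std::string &path) {
    return std::hash<std::string>{}(path);
}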

After indexing the files, the two databases are written as two Javascript files.

  1. IndexEntries.js – the TermDB
  2. TupleStorage.js – the DocumentDB

Since they are in JavaScript, you can guess that they are going to be used by the search directly. Hence, I have to check them in to my GitHub repo every time I make any change in the notes. I am yet to set up a cronjob for it, but that’s the plan.

Search

The search is written in JavaScript. It’s mostly David’s code, but I added support for AND queries (which I shall explain in a bit) and made it a bit more readable with help from Jake (a fellow Recurser).

Usage

If you want to test it locally, open the file index.html in a browser and it should work; it should look like this.

Implementation

The search code is written in test.js. If you open index.html, you’ll see that it includes the two JavaScript files that the Indexer generated, along with test.js. When you search for a term, for instance “variables”, it looks up the map defined in IndexEntries.js and gets the IDs of the documents in which the term occurs. Using those document IDs, it looks up TupleStorage.js to get the document names and headers, and then it shows a list of links to the documents.

If you enter multiple terms, for instance “automatic variables”, currently I do the following –

1. Get the posting list for the term “automatic”.
2. Get the posting list for the term “variables”.
3. Perform an intersection of these two lists. Since I store the document IDs in each list in sorted order, the intersection can be computed in a single linear pass, and its result is at most the size of the smaller list (see the sketch below).
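The actual search code is in JavaScript, but the intersection step looks roughly like this C++ sketch of merging two sorted posting lists:

#include <cstddef>
#include <vector>

std::vector<std::size_t> intersect(const std::vector<std::size_t> &a,
                                   const std::vector<std::size_t> &b) {
    std::vector<std::size_t> result;
    std::size_t i = 0, j = 0;
    // Both lists are sorted, so one linear pass over them is enough.
    while (i < a.size() && j < b.size()) {
        if (a[i] < b[j]) {
            ++i;
        } else if (b[j] < a[i]) {
            ++j;
        } else {
            result.push_back(a[i]);   // the document ID appears in both lists
            ++i;
            ++j;
        }
    }
    return result;
}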

So that was my AND query. Now, this is not the most correct way of doing things, because it will return a lot of false positives when a document contains both “automatic” and “variables” but not together. What I probably want are the documents in which “automatic” and “variables” appear together. But that’s a feature that I’ll start working on after some time.

Plans

A few of them –
1. Adding support for phrase queries will be interesting.
2. Adding ranking will also be interesting.
3. Learning more about Information Retrieval.

Please feel free to fork it, use it, review it, report bugs in it and share your ideas for the “Plans” section.


Week 4 at Recurse Center

Day 1: Labour Day holiday. Was out most of the day. No work.

Day 2: Got the first cut of a toy information retrieval system working. I’m calling it NoteRanger. I started reading the book here. I started taking notes in markdown following the suggestions made by David. I also took his search-the-notes system, but I am generating the index in C++. For now I am using his JS code to query, but I plan to write my own version there as well.

Did the warm-up today and did the code dojo. Ivo briefly explained to me how he wrote an IRC server. Continuing the Hardware Software Interface course for the third day today.

Day 3: I don’t remember what I did on day 3! (Note to self: that’s why it’s so important to blog every day.) Except that a few of us went to the Drunk Ted Talks on Conspiracies after RC and it was quite funny.

Day 4: Took the day off as I was not feeling well. Worked a little on NoteRanger. Made the search result show the header information and made the NoteRanger a github page. Also resumed working on BitFunnel. Tried following the instructions mentioned at the BitFunnel blog.

Day 5: Wrote an AND query in NoteRanger. Jake reviewed the HTML and JS code and gave very good feedback. Now the static part is separated from the dynamic part, the JS code looks much more readable, and the bug where the form was getting submitted only on click has been resolved. Watched the floating point representation lectures from the Hardware Software Interface course. I’m not sure I understood it well. Need to dig into that a little more.
