After the incredible show put on by TEDxPSU 2010 last year, I was compelled to volunteer for TEDxPSU 2011 this year. So over the past few months I had been in charge of organizing the student expo, or Xpo as we branded it. The goal was to showcase to the TEDx attendees some of the groups and clubs around campus that do cool stuff, before the main event started. At first, I had trouble getting enough clubs interested in participating in the Xpo. However, given enough time and persistence, that changed. From there, everything was fairly straightforward. On the day of TEDx I showed up, the groups showed up, they presented their work to the attendees, and they packed up, all while I simply supervised and took care of any problems that arose. The Xpo went so smoothly that I actually had plenty of time to help other volunteers with their jobs. Once everything was said and done, the speakers had presented their topics, and Alumni Hall had fallen silent, all the volunteers and I had the task of tearing down the entire hall (see the pictures below!) in just over three hours.
I wanted to add a timer to my Linux compile script so I could see how long it took to compile the kernel. However, Bash does not support floating point arithmetic. Now, seeing as kernel compiles take quite a while, this shouldn't really matter. I could use the date command to get the hour, minute, and second before and after the compile and subtract them, adjusting for differences in hours, days, etc. That way I wouldn't need any kind of floating point precision. But that's a lot of work, and I want to know exactly how long it took, not a figure rounded to the nearest minute.
Rather, why not just get the number of seconds since 1970 (the Unix epoch) before and after the compile, subtract the two, and divide by 60? Much easier! Except I need floating point precision. The solution: the bc program. It's like a command line calculator that supports all the precision I could ever need. Let's take a look:
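(What follows is a minimal sketch of the approach; the make command and variable names are placeholders rather than my actual compile script.)

    #!/bin/bash
    # Seconds since 1970, before the compile
    start=$(date +%s)

    make -j"$(nproc)"    # placeholder for the real compile command

    # Seconds since 1970, after the compile
    end=$(date +%s)

    # Bash can't do the floating point division, but bc can
    minutes=$(echo "scale=2; ($end - $start) / 60" | bc)
    echo "Compile took $minutes minutes"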
Compiling Linux yourself is one thing. Actually using the kernel you just compiled is another. Here's my latest debacle with the hell I brought upon myself by compiling my own kernels.
The problem: compiling kernel modules against a custom-compiled kernel would begin to fail some unknown amount of time after the kernel itself was compiled.
I also compile my own video driver kernel modules. At first, when I compiled and installed a new kernel, the module installer would work fine. But if I went to do it again for whatever reason, say a week later, it would fail. Programs like VMware and VirtualBox would complain about not being able to find the kernel headers, as would dkms.
But I thought that if I installed the kernel image and kernel headers packages, everything would just work? That's how it seemed to work, anyway.
Well, was I wrong. I had always assumed that the kernel headers belong in /usr/src/ and that any program that needed them would look there. That is, I assumed it was the standard directory for kernel headers. Maybe it is, but there's more to the story.
As it turns out, there are two symlinks, build and source, in /lib/modules/$(uname -r) that point to the directory the kernel was built in.
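A quick way to check where they point (nothing here is specific to my setup):

    # Show the symlinks for the currently running kernel
    ls -l /lib/modules/$(uname -r)/build
    ls -l /lib/modules/$(uname -r)/source

    # Or just print their targets directly
    readlink /lib/modules/$(uname -r)/build
    readlink /lib/modules/$(uname -r)/source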
In my case, I would build the kernel in my home directory, and once the system was playing nicely with the new kernel, that is, once all my kernel modules were built, I would tar up the source and move it to long-term storage. The unknown amount of time before module builds started failing was simply however long it took me to archive the kernel source and delete the build directory.
So here’s the bottom line/solution to the problem: You must keep the kernel source in the same location as you built it or you need to update the build symlink in /lib/modules/$(uname -r). In my case, that meant creating a directory in /usr/src/ where I’ll be keeping all the sources from now on (or at least the current and previous one).
Now, what I’m curious about is how the kernel packages from the software repos work. They don’t distribute the full kernel source, only the headers. Checking in an old module kernel directory, say, /lib/modules/2.6.38-11-generic shows the build symlink pointing to the kernel headers and the source symlink is not even present. Does this mean I don’t even need to install the headers if I have the full source available? In theory, no, since the source includes the headers. But then why couldn’t I change the build symlink to point to my custom headers and delete the source? If you know, email me with some clarification. Until then, I’ll continue to experiment.
A few months ago I started compiling Linux on my own. Not for any particular reason, just to do it myself. By nature, the commands to compile the kernel are repetitive, which makes it the perfect thing to write a script for!
Just give it the kernel source directory and let it do its thing. Once done, it creates two deb packages, the compiled kernel and the kernel headers (this also means it only works on Debian-based distros). From there, it's trivial to install a deb package.
Note that this script will most likely continue to evolve, so any copy posted here may become outdated. The most recent copy can always be found in its GitHub repo.
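At its core, though, the script is nothing more than the standard build-and-package steps. Here's a minimal sketch of that flow, assuming the kernel tree's built-in deb-pkg target on a Debian-based system; the actual script does more than this:

    #!/bin/bash
    # Usage: ./compile-kernel.sh /path/to/kernel/source
    set -e
    cd "$1"

    start=$(date +%s)

    # Build the kernel and package it as .deb files,
    # including the kernel image and kernel headers packages
    make -j"$(nproc)" deb-pkg

    end=$(date +%s)
    echo "Compile took $(echo "scale=2; ($end - $start) / 60" | bc) minutes"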
This past weekend I got the opportunity to compete in the ACM's ICPC programming competition on a team with other members from the Penn State ACM. Up until a week before the competition, I wasn't supposed to compete. However, due to some club members not being able to go at the last minute, I was pulled in to compete. At first I thought it would be horrible because everyone else had been practicing for months, and I hadn't even completed a single practice problem. Regardless, I'm happy that I ended up going.
Unfortunately, my team and I finished with only one problem solved, but we were extremely close to solving a second and most likely would have gotten it with an extra 15-20 minutes. However, even with only one problem solved and two incorrect submissions on another, we still ranked respectably compared to the other teams at our location. Apparently, 75% of the teams solve just one problem, so it's not as embarrassing as I thought it would be.
Despite the competition being only 5 hours, I still learned some important lessons. Most importantly: efficiency, efficiency, efficiency. This was the first programming situation I'd been in where your program is accepted or rejected based on how long it takes to execute. The problems are designed so that the obvious, brute force solution takes too long to run, so you're forced to find a more efficient way to solve the problem. Some people will say these problems are essentially just math problems, but this is where the programming comes in: you need the programming skills to make your solution as efficient as possible. That's exactly what my team ran into on the problem we solved. Without going into the details, we had to try multiple algorithms and data structures until the program could do the calculations it needed quickly enough.