Controlling a relay via an Arduino from an Android client with NFC

Over the past few weeks I’ve been working on a small project that allows me to control electrical relays connected to an Arduino over the network from Android and C clients. It’s been delayed slightly by finals, holidays, and power outages from snow storms, but below is a demo of the first complete version.

Looking for the source code or instructions on how to set up your own?

The whole project aims to be as simple as possible. As can be seen in the demo video above, the hardware setup consists of an Arduino Uno, Arduino ethernet shield, PowerSwitch Tail II relay, two wires connecting the relay to the Arduino, and power and ethernet for the Arduino.
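To give a sense of what the client side involves, a bare-bones C client boils down to opening a TCP socket to the Arduino and sending a short command. The sketch below is purely illustrative: the IP address, port, and "ON" message are placeholders, not RelayRemote’s actual protocol (see the project’s source for the real message format).

/*
 * Illustrative only: a minimal TCP client for a setup like this one.
 * The IP, port, and command string are placeholders, not the actual
 * RelayRemote protocol.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int send_command(const char *arduino_ip, unsigned short port, const char *cmd) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, arduino_ip, &addr.sin_addr) != 1 ||
        connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }

    /* Send the command and wait for a short acknowledgement. */
    char reply[32] = {0};
    send(sock, cmd, strlen(cmd), 0);
    recv(sock, reply, sizeof(reply) - 1, 0);
    printf("Arduino replied: %s\n", reply);

    close(sock);
    return 0;
}

int main(void) {
    /* Example: tell the relay to turn on (IP and port are placeholders). */
    return send_command("192.168.1.177", 2424, "ON");
}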

Why it's a bad idea to have duplicate MAC addresses on a LAN

I’ve been using some of my time during winter break to wrap up my RelayRemote project. Without going into much detail, RelayRemote is a small project I started that allows control of an electrical relay through an Arduino server from an Android or C client over a network. I originally started the project with one Arduino and one relay, but after getting a rough proof of concept working I decided to add support for multiple servers, so I bought another Arduino and another relay. When I was working with only one Arduino, communicating with it over the network from my Android app was nearly instantaneous (less than a second). However, when I added a second Arduino to the mix, things became very slow, jumping from less than a second to more than 15 seconds. The Arduinos had different IP addresses and did work, just very slowly. So what was slowing things down?

The answer is in the title of this post, but let’s pretend otherwise for a moment. I started trying to track down what was causing the slowdown. First up, I used bash to time how long it took my C client on my computer to send a message to one of the Arduinos. In this case, I was asking for the state of the pins on the Arduino.

Calculating pi to 10,000,000 digits with MPFR and threads

A few days ago I wrote a post about how not to go about writing an arbitrary precision data type in C to calculate pi. In that post, I talked about how a friend and I tried to accomplish that task in 24 hours. Needless to say, it didn’t work, and I resorted to using a library that was already available: namely, MPFR. After a little research on Wikipedia about the best approximations of pi, and a couple of days of off-and-on work, I had a pretty good solution up and running.

First, let’s talk about the math behind this. There are a bunch of approximations to pi; some older, some newer, some faster, some slower. At first, I used Newton’s approximation to calculate pi.
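Roughly, the arcsine-based series usually attributed to Newton looks like this (one common form; the exact variant can differ):

\[
\pi = 6\arcsin\!\left(\frac{1}{2}\right) = \sum_{n=0}^{\infty} \frac{3\,(2n)!}{16^{n}\,(n!)^{2}\,(2n+1)}
\]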

This worked, but it was slow (I didn’t record exact execution times). As everyone knows, factorials are huge numbers and grow very rapidly. In this case, the numbers were just too big to efficiently accomplish the task at hand. Could I have done something like Stirling’s approximation? Sure, but there are better ways to calculate pi. No use in wasting time.
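(Stirling’s approximation, for reference, estimates the factorial in closed form, $n! \approx \sqrt{2\pi n}\,(n/e)^{n}$, trading exactness for speed.)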

Next up, I tried the cubic convergence version of Borwein’s algorithm, mainly because there were no factorials in it. This actually worked pretty well. It calculated pi within a reasonable amount of time (more details below), but because it is a recurrence, I would not be able to multithread it.
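For reference, the cubic iteration of Borwein’s algorithm is usually stated along these lines:

\[
a_0 = \frac{1}{3}, \qquad s_0 = \frac{\sqrt{3} - 1}{2}
\]
\[
r_{k+1} = \frac{3}{1 + 2\left(1 - s_k^{3}\right)^{1/3}}, \qquad
s_{k+1} = \frac{r_{k+1} - 1}{2}, \qquad
a_{k+1} = r_{k+1}^{2}\, a_k - 3^{k}\!\left(r_{k+1}^{2} - 1\right)
\]

with $1/a_k$ converging cubically to pi (each iteration roughly triples the number of correct digits). Note how $a_{k+1}$ depends on $a_k$: that chain is what rules out splitting the work across threads.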

Now with multithreading in mind, I turned my attention to the 1993 version of Borwein’s algorithm, which was a summation.

On the upside, it was a summation, which is easy to multithread. On the downside, look at all those factorials. Long story short, I hit the same wall with this approach as I did with Newton’s approximation above: it worked, it was just too slow.
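Whichever series ends up being used, the threading pattern for a summation is the same: give each thread a block of term indices, let it build a high-precision partial sum, then combine the partial sums at the end. Below is a rough sketch of that pattern with MPFR and pthreads. The series in it (the sum of 1/k^2, which tends toward pi^2/6) is only a stand-in to keep the example short, not a pi series, and the precision, thread count, and term count are arbitrary placeholders.

/*
 * Sketch of multithreading a summation with MPFR partial sums per thread.
 * The series here (sum of 1/k^2) is a stand-in for illustration only.
 * Assumes an MPFR build with thread safety (the default on modern systems).
 * Build: gcc sum_threads.c -o sum_threads -lmpfr -lgmp -lpthread
 */
#include <mpfr.h>
#include <pthread.h>
#include <stdio.h>

#define PRECISION   256        /* bits per MPFR value (placeholder) */
#define NUM_THREADS 4
#define TERMS       1000000UL  /* total number of series terms (placeholder) */

typedef struct {
    unsigned long start;       /* first term index (inclusive) */
    unsigned long end;         /* last term index (exclusive) */
    mpfr_t partial;            /* this thread's partial sum */
} worker_t;

static void *sum_range(void *arg) {
    worker_t *w = (worker_t *)arg;
    mpfr_t term;
    mpfr_init2(term, PRECISION);
    mpfr_set_ui(w->partial, 0, MPFR_RNDN);

    /* Each thread touches only its own MPFR variables, so no locking is needed. */
    for (unsigned long k = w->start; k < w->end; k++) {
        mpfr_set_ui(term, k, MPFR_RNDN);
        mpfr_sqr(term, term, MPFR_RNDN);        /* term = k^2     */
        mpfr_ui_div(term, 1, term, MPFR_RNDN);  /* term = 1 / k^2 */
        mpfr_add(w->partial, w->partial, term, MPFR_RNDN);
    }

    mpfr_clear(term);
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    worker_t workers[NUM_THREADS];
    unsigned long chunk = TERMS / NUM_THREADS;

    /* Hand each thread a contiguous block of term indices. */
    for (int i = 0; i < NUM_THREADS; i++) {
        workers[i].start = 1 + i * chunk;
        workers[i].end = (i == NUM_THREADS - 1) ? TERMS + 1
                                                : workers[i].start + chunk;
        mpfr_init2(workers[i].partial, PRECISION);
        pthread_create(&threads[i], NULL, sum_range, &workers[i]);
    }

    /* Combine the partial sums once every worker has finished. */
    mpfr_t total;
    mpfr_init2(total, PRECISION);
    mpfr_set_ui(total, 0, MPFR_RNDN);
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
        mpfr_add(total, total, workers[i].partial, MPFR_RNDN);
        mpfr_clear(workers[i].partial);
    }

    mpfr_printf("Partial sum of 1/k^2: %.30Rf\n", total);
    mpfr_clear(total);
    mpfr_free_cache();
    return 0;
}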

How NOT to write an arbitrary precision data type in C

This past weekend was HackPSU, a typical 24-hour hackathon at Penn State. Without any better ideas, my friend Gage Ames and I decided to break the mold of the typical hackathon projects of games, websites, and mobile apps and do something much more nerdy: create our own arbitrary precision data type in C so we could calculate pi (or any other irrational number) to as many digits as our computers could handle.

About one year earlier I attempted the same project, but with even less success than this time around. My previous solution was to use very large arrays to store the digits of pi in. Obviously, allocating huge amounts of memory for this purpose was a bad idea. That, coupled with a general lack of experience with memory management in C++, led to a complete and utter failure.

This time around, I tried to learn from those mistakes and took a different approach. After discussing it with Gage, we decided to use pure C rather than any of that fancy C++ stuff, and to use a linked list rather than an array to store our data. Sounds good so far, but here’s where we made our first fatal mistake. We originally would have liked to use a doubly-linked list, as it would have made our adding algorithm simpler. At this stage, though, I was very concerned with using as little memory as possible, and a doubly-linked list would have nearly doubled the memory needed to store a digit. As a small digression: knowing that each digit in a node could not be greater than 9, we used a char to save 3 bytes over a 4-byte integer for each digit. Then we needed a pointer to the next digit in the list, which was 8 bytes (on our 64-bit laptops). There’s no getting around that, but a doubly-linked list would require another pointer to the previous digit, which was another 8 bytes. That brought the total to 17 bytes per digit for a doubly-linked list versus 9 bytes per digit for a singly-linked list.

After a little experimentation, we determined that our adding algorithm would work just fine with a singly-linked list if we kept the least significant digit at the head of the list. In short, a few hours later we realized that using a singly-linked list and representing the digits in what amounted to little endian was just too darn slow and tedious. But enough talk, let’s look at this horribly flawed code.
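Before getting to that, here’s a minimal sketch (not the actual contest code) of the representation described above: one decimal digit per node, stored in a char, with the least significant digit at the head of a singly-linked list, plus the kind of carry-propagating addition that layout implies.

/*
 * Minimal sketch of the representation described above -- not the actual
 * HackPSU code. One decimal digit per node (stored in a char), least
 * significant digit at the head, singly linked.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct digit {
    char value;           /* a single decimal digit, 0-9 */
    struct digit *next;   /* the next, more significant digit (or NULL) */
} digit;

/* Build a list from a decimal string, e.g. "123" becomes 3 -> 2 -> 1. */
static digit *from_string(const char *s) {
    digit *head = NULL;
    for (const char *p = s; *p != '\0'; p++) {
        digit *node = malloc(sizeof(digit));
        node->value = (char)(*p - '0');
        node->next = head;   /* prepending leaves the last char (the LSD) at the head */
        head = node;
    }
    return head;
}

/* Add two numbers stored least-significant-digit-first; returns a new list. */
static digit *add(const digit *a, const digit *b) {
    digit *head = NULL;
    digit **tail = &head;
    int carry = 0;

    /* Walk both lists in lockstep, propagating the carry as we go. */
    while (a != NULL || b != NULL || carry != 0) {
        int sum = carry;
        if (a != NULL) { sum += a->value; a = a->next; }
        if (b != NULL) { sum += b->value; b = b->next; }
        carry = sum / 10;

        digit *node = malloc(sizeof(digit));
        node->value = (char)(sum % 10);
        node->next = NULL;
        *tail = node;
        tail = &node->next;
    }
    return head;
}

/* Print most significant digit first by recursing to the end of the list. */
static void print_number(const digit *d) {
    if (d == NULL) return;
    print_number(d->next);
    putchar('0' + d->value);
}

int main(void) {
    digit *sum = add(from_string("999"), from_string("1"));
    print_number(sum);    /* prints 1000 */
    putchar('\n');
    return 0;             /* cleanup omitted for brevity */
}

Even in this cleaned-up form, every digit costs a heap allocation and a pointer chase, which goes a long way toward explaining the “too darn slow” part.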

Running GitLab from a subdirectory on Apache

Note: As of February 2013, these instructions have been tested with GitLab 4.1. GitLab evolves very rapidly and I do not use it anymore so these instructions will quickly become outdated.

I’ve been looking for a good git manager website that I could install on my own server. A few days ago I found GitLab, which does everything I need it to do and more. The only problem is that the setup guides use Nginx as the webserver. I’m cheap and only have one server, which runs Apache. I also have WordPress (this blog) already running on that server, so I have to install GitLab in a subdirectory too.


Part 1: Running GitLab on Apache

First, let’s talk about running GitLab on Apache. Everything needed to get GitLab running on Apache is exactly the same as the official GitLab install guides, up until the point of installing Nginx. So, if you haven’t started installing GitLab yet, go do that and stop when you get to installing Nginx.

I assume you already have Apache installed and up and running; if not, there are more than enough guides floating around on how to do this, and I won’t add another to the fray.

GitLab is a Ruby on Rails application and to run it on Apache we need to install the Passenger module.

$ sudo gem install passenger
$ sudo passenger-install-apache2-module