Unfortunately I didn’t grab the screenshot in time to catch Bruce dancing a jig in front of the robot, but I can assure you that that is exactly what he was doing.
In case you’re wondering why I’m victorious, it’s because this morning I successfully got two different visualizers (in this case, Small Grey Image and Range Data) to display in two different viewports, complete with transparency layering. I’ve been working on getting this right for over a week.
Once I got both visualizers working, each one ran at very good frame rates on its own, but displaying them together caused a major slowdown, particularly in the camera image. I thought the problem had something to do with SDL’s surface blitting functions, but it was actually much lower level, in our message passing CMU IPC modules.
I had Fritz turn down the rate at which the nav module broadcasts range sensor data, while at the same time lowering the rate at which my interface requests new camera images. As it turns out, the range data messages were basically overwhelming the message queue, such that if any camera data came in, it wasn’t handled before getting clobbered by newer range data.
Right now I’m adding the color video mode visualizers, and then I have to figure out a good way to indicate the current camera orientation. In particular, I need to alert the user if they’re driving forward with the camera pointed to one side.
It does get frustrating at times, but when I get stuff like this working, it’s a really awesome feeling. Fritz felt the same way when his robot monitor module, RoboMon, ran for 10 straight days without crashing. RoboMon also makes my life a lot easier, because now you just need to start up the interface with its config file. Last summer, we had to keep several terminals open to the robot to start up CMU IPC’s central server and the various modules (smmd, scmd, and svm). We’re in good shape, I think.