Dragon illusion

(As seen on TechRepublic Blog: Put your color printer to good use with this super-cool optical illusion)
Ok, this illusion is pretty darned cool. Basically, you print out this PDF and construct a paper sculpture of a dragon. It makes use of an inverted face illusion, so that if you close one eye while looking at it, its head appears to move around to follow you. This video shows how strong the effect is.

I just printed out one myself, assembled it, and set it on the middle of the table in my lab’s common area. I wonder how my labmates will react when they see this tomorrow morning…

Posted in Illusions | 15 Comments

Pulsar-based “GPS” for interplanetary navigation

Over on Selenian Boondocks there’s a neat post about the X-ray Pulsar Positioning System (XPPS), a system proposed by Microcosm as a GPS-equivalent for interplanetary navigation. I’ve often wondered about this problem: GPS on Earth requires satellites orbiting the planet, so what methods can one use to accurately determine position when there aren’t GPS satellites around? The solution Microcosm is researching (with NASA funding) is to use the natural high-precision repeating signals from X-ray pulsars to determine location. I’d love to see something like that work, and I’m curious about what sort of accuracy they can get.
On a related note, I created a new category, “Space/Planetary Robotics,” for this post. I was tempted to just create a “Space” category, but considering how infatuated I am with spaceflight, I think I’m going to try to refrain from general Space postings. Otherwise, my intention for this blog could end up deviating pretty quickly…

Posted in Space/Planetary Robotics | Leave a comment

Flying robot armed with shotgun

Something I just submitted to slashdot:

A small company called Neural Robotics has produced a robotic mini-helicopter armed with a rapid-fire shotgun. Based on their low-cost AutoCopter, the UAV uses neural network-based flight control algorithms to fly in either a self-stabilizing semi-autonomous mode controlled by a remote operator, or a fully-autonomous mode which can follow GPS waypoints. A video of the AutoCopter Gunship is available. (Dijjer mirror)

Setting aside, for the moment, the ethical issues of replacing soldiers with flying shotgun-wielding robots, their “neural network-based” flight control system seems like an interesting technical accomplishment. This PDF briefing has a few details.

Looking at page 14 of their PDF, though, their control system may be a little on the simplistic side. It seems to just update roll and pitch based on the current movement and facing of the helicopter, without making use of visual information or other sensors. I’m not too familiar with flight control, but using a neural network for that seems like overkill. When in fully-autonomous mode, I wonder whether they use any sensors for crash avoidance, or whether they just hope that nothing’s in the way of the chosen GPS coordinates.
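To illustrate why this kind of control looks simple: updating roll and pitch from the current state alone is essentially a classical feedback loop. Here is a minimal sketch of a proportional-derivative (PD) correction for one axis. This is my own hypothetical illustration, not Neural Robotics’ actual algorithm; the function name, gains, and timestep are all made up.

```python
def pd_update(error, prev_error, kp=0.8, kd=0.2, dt=0.02):
    """Proportional-derivative correction for one attitude axis.

    error      -- target angle minus current angle (radians)
    prev_error -- error from the previous control step
    kp, kd     -- hypothetical proportional and derivative gains
    dt         -- control loop period in seconds
    """
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative

# Example: correct roll toward level flight (target = 0 rad)
roll, prev_roll = 0.1, 0.12
cmd = pd_update(0.0 - roll, 0.0 - prev_roll)
```

A neural network could stand in for the gain terms, but for hover stabilization alone the classical version already works, which is why the neural-network framing feels like overkill to me.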

Assuming they haven’t done so already, it would be rather neat to load some range-finding sensors on the helicopter and have it automatically avoid nearby obstacles; the basic algorithms should be fairly straightforward.
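As a sketch of how straightforward the basic idea is: with a ring of range-finders, each nearby obstacle can contribute a repulsive vector, and the helicopter steers along the sum. Everything below is a hypothetical toy, assuming readings arrive as (bearing, distance) pairs; a real implementation would have to handle noise, sensor field-of-view, and the vehicle’s dynamics.

```python
import math

def avoidance_vector(readings, danger_radius=5.0):
    """Sum repulsive unit vectors from obstacles inside danger_radius.

    readings -- list of (bearing_rad, distance_m) range-finder returns
    Returns an (x, y) steering offset in the body frame.
    """
    vx = vy = 0.0
    for bearing, dist in readings:
        if dist < danger_radius:
            # Closer obstacles push harder (weight in [0, 1])
            weight = (danger_radius - dist) / danger_radius
            vx -= weight * math.cos(bearing)
            vy -= weight * math.sin(bearing)
    return vx, vy

# Obstacle 2 m dead ahead, another one safely 10 m off to the side
vx, vy = avoidance_vector([(0.0, 2.0), (math.pi / 2, 10.0)])
# vx < 0: steer away from the obstacle ahead; the far obstacle is ignored
```

This is the classic potential-field approach; it can get stuck in local minima, but as a first layer of crash avoidance it is about as simple as control code gets.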

Another idea is to allow the robot to visually track a point of laser light, potentially allowing somebody to control the robot with a designated laser. The military application of this is pretty obvious: You could quickly point a laser wherever the people shooting at you are hiding, so that the robot knows what area to scope out. A laser could also be used to trace out a patrol route for the robot, so that a user doesn’t have to deal with typing in cumbersome GPS coordinates.
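The vision side of laser tracking can start out very simple: in a camera frame, a laser dot is often just the brightest saturated spot in one color channel. Below is a hedged sketch of that naive detector, with hypothetical names and a made-up brightness threshold; a real system would need thresholding over a neighborhood, temporal filtering, and rejection of other bright objects like the sun or specular glints.

```python
import numpy as np

def find_laser_dot(frame_rgb, min_brightness=200):
    """Return (row, col) of the brightest red pixel, or None if too dim.

    frame_rgb -- H x W x 3 uint8 image, channel 0 = red
    """
    red = frame_rgb[:, :, 0]
    idx = np.unravel_index(np.argmax(red), red.shape)
    return idx if red[idx] > min_brightness else None

# Simulated 640x480 frame with a single bright red dot
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100, 300, 0] = 255
target = find_laser_dot(frame)  # pixel location of the dot
```

Once the dot’s pixel location is known, steering the camera (or the helicopter) toward it is just another feedback loop on the offset from the image center.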

As for civilian applications, the AutoCopter with a stabilized camera might be useful for filming video. One could imagine a system of two designated laser pointers, one for each hand. One pointer would designate a spot for the robot to hover over, while another pointer would indicate where the robot should direct its camera. Of course, one could alternatively just hire a dedicated RC operator, so perhaps this would be of limited usefulness.

Posted in Ideas, News, Robotics | 2 Comments

Introduction

Some time ago I realized that I couldn’t find any good sites which discuss recent research in computer vision. This “research blog” is intended to help fill that gap. My plan is to discuss recent and/or important research papers about computer vision and biological vision, news articles which involve vision, and vision-related ideas I have which may be interesting/useful. I’m also a big fan of the related topics of neuroscience and robotics, so I’ll probably regularly post about those as well.

Although vision research can be pretty technical at times, I’m hoping that at least some of this blog will be interesting and understandable to those without prior experience in the field.
A few things I hope to post about in the near future:

  • Luis von Ahn’s work on using web-based games to extract huge amounts of visual knowledge from humans.
  • Riya, a commercial web-based product for face recognition in galleries of photographs.
  • The recent PhD thesis defense of fellow CNS grad student Dirk Walther, involving the use of visual attention models to facilitate learning of models for object recognition.
  • The landmark Viola & Jones algorithm for face detection, its extensions, and the excellent open-source implementation of it in Intel’s OpenCV toolkit.
  • My thoughts on various feature descriptors useful in computational object recognition, such as David Lowe’s SIFT and Alex Berg’s geometric blur.
Posted in Administrative | 3 Comments