aShademan

June 23, 2008

Current trends in Robotic Visual Servoing

While I was searching for a sufficiently narrow topic for my thesis research, I thought it would be a good idea to find the current trends in our field by doing a statistical literature exploration. Our field, robotic visual servoing, has an extensive literature starting some 30 years ago with the pioneering work of Shirai and Inoue. Of course there weren't too many researchers working on vision-based control of manipulators in those days, but this isn't true anymore. The past two decades have seen significant advances in vision-based control of robots.

A scholar.google search for the "visual servoing" keyword shows that there are more than 9,000 relevant papers, with more than 6,000 of them written after 1988. The interest in this field is growing rapidly (see figure). The diagram shows that in 2007 alone, more than 600 papers studied visual servoing in some form. However, it doesn't show what the current trends in visual servoing are. It is very difficult, if not impossible, to find the current trends using a search engine alone (I also tried Google Sets, but it didn't quite work out for our case). Therefore I combined the "visual servoing" keyword with what I thought would be the future of visual servoing, to find the current trends.
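For the curious, here is a minimal sketch of how these per-year counts could be scripted, assuming Google Scholar's as_ylo/as_yhi URL parameters for the year range (the function name and the example queries are just illustrative; the "about N results" number still has to be read off each results page by hand):

from urllib.parse import urlencode

def scholar_url(query, year):
    # Build a Google Scholar URL restricted to a single publication year.
    # Assumes the 'q', 'as_ylo' and 'as_yhi' parameters; the hit count is
    # read off the returned page manually.
    params = {"q": query, "as_ylo": year, "as_yhi": year}
    return "https://scholar.google.com/scholar?" + urlencode(params)

# One URL per year for the base keyword and one example combination.
for year in range(1988, 2008):
    print(scholar_url('"visual servoing"', year))
    print(scholar_url('"visual servoing" "learning"', year))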


When combined with "learning", the number of papers seems to be increasing year after year. The results for 2008 and 2009 would increase our confidence in the growth rate. BTW, I can't really explain what happened in 2006; it might be due to the implementation overhead of robotics research, or maybe it was just a dull year for robotics.

I used to work on a completely calibrated position-based visual servo system, where even the CAD models of the objects to be manipulated were known. This was pretty much the typical robotics-in-automation or robotics-for-assembly setup of the 1990's and earlier. Currently, visual servoing is applied to humanoids research and other settings where the robot needs to work in unstructured environments. The modeled-world assumption is no longer valid, and the position-based approach is not quite applicable. The traditional image-based approaches also use a lot of a priori knowledge of the scene, the camera, and the image Jacobian. If these are not known, can we still perform high-precision visual servoing?
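To make concrete what that a priori knowledge buys you, here is a minimal sketch of the textbook image-based control law v = -gain * pinv(L) * e for point features (the standard form from the visual servoing tutorials, not the code of any particular system): the interaction matrix L, i.e. the image Jacobian, needs the normalized image coordinates and an estimate of each point's depth Z, which is exactly what an unstructured scene refuses to hand you.

import numpy as np

def interaction_matrix(x, y, Z):
    # Textbook interaction matrix (image Jacobian) for one point feature
    # with normalized image coordinates (x, y) at depth Z.
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(points, depths, desired, gain=0.5):
    # Camera velocity command v = -gain * pinv(L) @ e for N point features;
    # 'points' and 'desired' are (N, 2) arrays of normalized coordinates.
    e = (np.asarray(points) - np.asarray(desired)).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -gain * np.linalg.pinv(L) @ e  # 6-vector: (vx, vy, vz, wx, wy, wz)

The camera intrinsics hide in the conversion from pixels to the normalized (x, y), and the depths Z have to come from a model or an estimator; drop those, and the learning-based approaches above start to look attractive.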

To answer the above question, I combined "reinforcement learning" with "visual servoing". The counts for 2007 and earlier are too small to draw any firm conclusion from. I predict that in the next 2-5 years we will notice the sudden impact of reinforcement learning in vision-based robotics.

My last statistical literature exploration was to combine "neural network" (NN) with "visual servoing". Neural networks were the flavour of the decade in the 1990's, and many used NN-based techniques to control robots. In the past few years, NNs had fallen out of fashion, but it seems they are getting some attention again these days.

P.S. This study is by no means complete, nor is it meant to be. A thorough study would include more keywords, such as ("adaptive control" OR "reinforcement learning") and/or ("visual servoing" OR (("robot" OR "robotics") AND ("motor control" OR "motor learning"))) and/or "hand-eye coordination" and/or "motion planning", etc.
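If anyone wants to play with such combinations, here is a tiny sketch (using the term groups from the note above purely as an illustration) that assembles the OR-grouped query string so it can be pasted into the Scholar search box:

def or_group(terms):
    # Join alternative phrases into a single OR group, e.g.
    # ("adaptive control" OR "reinforcement learning").
    return "(" + " OR ".join(terms) + ")"

query = " ".join([
    or_group(['"adaptive control"', '"reinforcement learning"']),
    or_group(['"visual servoing"',
              '(("robot" OR "robotics") AND '
              '("motor control" OR "motor learning"))']),
    or_group(['"hand-eye coordination"', '"motion planning"']),
])
print(query)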


June 09, 2008

Smithsonian National Air and Space Museum

One of the highlights of my DC trip was a 1-day visit to the Smithsonian National Air and Space Museum. I watched an IMAX 3D presentation of the International Space Station (Space Station 3D), which was a blast! Seeing the Canadarm2, the Mobile Servicing System (MSS), and the other Canadian-built robotic platforms in 3D action was certainly much more pleasant than reading the technical papers. I now have a better idea of how these modules work and where our NSERC/CRD research with CSA stands.

P.S. I guess I was lucky to be there when Space Station 3D was on the schedule! I can't seem to find a link to it!

P.P.S. Well, the link to that IMAX presentation is here.