aShademan

May 29, 2006

{who.cogSci} Stevan Harnad


Stevan Harnad is a renowned cognitive scientist [wiki] and the author of the following article:

Harnad, S. (1994) "Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't." Special Issue on "What Is Computation", Minds and Machines 4:379-390.
[Full Text HTML]

This post was originally meant to introduce a scientist and one of his works that interested me. Though I am not working on cognition per se, my research will generally be on computational vision and its applications to robotic visual servoing, i.e., vision-based control of robots. I have been passionate about the geometric/computational approach to 3D vision, but I have to be aware of the limitations of such an approach, since the boundary between biological vision and cognition is blurred.
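
To give a concrete flavor of what the geometric side of visual servoing looks like, here is a minimal sketch of a classical image-based control law (camera velocity proportional to the pseudoinverse of the interaction matrix times the feature error). It is only an illustration, not something from the article above: the gain, feature coordinates, and depths below are made up, and a real system would estimate depths and deal with camera calibration.

import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix of one normalized image point (x, y) at depth Z.
    return np.array([
        [-1.0 / Z,  0.0,      x / Z, x * y,     -(1 + x**2),  y],
        [ 0.0,     -1.0 / Z,  y / Z, 1 + y**2,  -x * y,      -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classical image-based visual servoing law: v = -gain * pinv(L) * (s - s*).
    # features, desired: (N, 2) arrays of current/desired normalized image points;
    # depths: length-N array of estimated point depths.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (features - desired).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error  # 6-vector (vx, vy, vz, wx, wy, wz)

# Toy usage with made-up numbers: four points slightly off their desired positions.
s = np.array([[0.12, 0.10], [-0.11, 0.09], [-0.10, -0.12], [0.11, -0.10]])
s_star = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
Z = np.array([1.0, 1.0, 1.0, 1.0])
print(ibvs_velocity(s, s_star, Z))

The point of the sketch is simply that every quantity in the loop is hand-derived geometry; nothing is learned from data, which is exactly the contrast discussed below.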

I have had a couple of interesting discussions with SoloGen on this issue. I could rephrase it as: "Which would ultimately make a better artificial visual system for daily/industrial applications: Machine Learning or Geometry?"

I view computational vision as an intricate and interesting problem, but the question is: does solving such a problem help humans build their intelligent clones? I must point out that, in my opinion, humans are subconsciously (?) in search of making their own version of human-like machines, but are too conservative to boldly challenge their creator. After all, it is quite likely that contemporary scientists would fail to make an artificial thinking/intelligent human, so why enter a lose-lose competition?


2 Comments:

  • I hadn't checked this place for a few days until some minutes ago, when I found that everything has changed!!! It's great! (:

    I think the geometry/machine-learning line is not a precise discriminator (even if I might be the one who used this term before). I think the real difference is the amount of prior knowledge the two approaches assume when they want to solve the vision problem, and the way, and the level of abstraction at which, they acquire knowledge from the world.

    On the engineering side (which is almost the same as the geometry side you mentioned), people assume they can make exact geometric inferences about the world. They use an enormous amount of prior knowledge (geometry) to infer things about the world. Once the vision system is built, it acquires knowledge of the world from that prior knowledge (geometry) plus some observations. You can see that there are two separate things here: 1) prior knowledge that manipulates the data, and 2) the sensory data itself. (A minimal triangulation sketch after this comment illustrates this split.)

    On the other hand, I believe that what is going on in an animal's brain is not exactly like this. During the long-term process of adaptation (which includes the evolution of the species and the individual learning of the animal), the resulting animal acquires data from the world, uses it to infer something about the world, and, more importantly, uses it as new tools/methods for processing forthcoming data.
    The essential difference, in my opinion and at the current moment, is this flexibility of nature to build new methodology for solving problems. For instance, after seeing and playing with a toy car, a child knows that the car is almost symmetric about one of its axes (the left and right sides of cars are similar). Now, if she goes to a street for the first time and sees one side of a real car, she can infer that there must be another door on the other side, so if she wants to get in and finds one door locked, she can go to the other side without ever having seen the other door. Although this example is simplified, it shows that an animal can acquire new knowledge and use it to make newer inferences.

    By Anonymous Anonymous, At 5/30/2006 5:31 PM  
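
A minimal sketch of the "prior geometric knowledge plus sensory data" inference described in the comment above: the camera projection matrices play the role of the prior knowledge, the matched image observations are the sensory data, and the 3D point is the inference. This is plain linear (DLT) triangulation, not a specific system discussed in the post, and the matrices and coordinates are made up for illustration.

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one 3D point.
    # P1, P2: 3x4 camera projection matrices (the "prior knowledge").
    # x1, x2: matched normalized image observations (u, v) (the "sensory data").
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inferred Euclidean 3D point

# Made-up example: identity intrinsics, second camera translated by 1 along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]
print(triangulate(P1, P2, x1, x2))  # recovers approximately [0.5, 0.2, 4.0]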

  • Thanks for your comment, solo. I'll try to post a new entry and further the discussion.

    By Blogger Azad, At 5/31/2006 4:52 AM  
