Wednesday, March 22, 2006

Science and the Sphinx

Every once in a while during the philosophy of AI classes we get involved with ethical issues. Sometimes students ask me afterwards whether it would be possible to treat ethical issues a bit more systematically (so help me, it's true). I'm always a bit reluctant to do so, because when I was a philosophy student I tried to stay as far away from classes on ethics as I could. The whole idea of being taught about good and bad seemed completely ridiculous to me. I knew what good or bad was, of course, and who were these middle-aged losers anyway, thinking they could teach me. But now that I'm slightly more.....mature myself, I'm starting to think that there may be something to ethics after all, if only in the same way there may be something to theology and aesthetics. They're all, if you'll pardon me saying so, about 'tastes' of sorts that can be felt quite strongly and may have some remarkable consequences. So, for as long as my middle age may be blossoming, I might be writing a bit more on such topics. Superficially, of course.

So, for starters, let's consider Francis Bacon on Science and the Sphinx.

Sir Francis Bacon has been called one of the first great enthusiasts of science. He retells the famous Greek myth (here's the full, 2-page, text) about how the Sphinx lay in ambush for travelers near the mountains of Thebes, attacking them with riddles and tearing them to pieces if they couldn't give the right answer quickly. Oedipus (the one who would later turn out to be a complex) had no fear, after being promised the sovereignty of Thebes. The Sphinx confronted him with the question of what kind of animal was born 4-footed, then became 2-footed, then 3-footed and at last 4-footed again. Oedipus' quick reply 'Man' (going from birth to old age) made him victorious; he carried the Sphinx on an ass back to town, became king of Thebes and lived anything but happily ever after.

What I find interesting is Bacon's suggestion that this fable was
"apparently in allusion to Science (...) Science, being the wonder of the ignorant and unskillful, may be not absurdly called a monster. In figure and aspect it is represented as many-shaped, in allusion to the immense variety of matter with which it deals. It is said to have the face and voice of a woman, in respect of its beauty and facility of utterance. Wings are added because the sciences and the discoveries of science spread and fly abroad in an instant; the communication of knowledge being like that of one candle with another, which lights up at once. Claws, sharp and hooked, are ascribed to it with great elegance, because the axioms and arguments of science penetrate and hold fast the mind, so that it has no means of evasion or escape."

For me it is significant that Bacon does not mention another similarity that one nowadays would notice almost without thinking: the 'tearing to pieces' of human beings. After all, a big worry many people today have about science is that what we tend to think of (and cherish) as essential to human beings (rationality, consciousness, free will, just to name a few in the context of cognitive science) is being dissected with clinical precision, sometimes without leaving a single trace. Students asking for discussions of ethical issues may have exactly this in mind (but then again they may merely want to know whether it is ethical to ask for a pay raise more than twice a year).

Bacon only says that when the riddles pass on
"from contemplation to practice, whereby there is necessity for present action, choice, and decision, then they begin to be painful and cruel; and unless they be solved and disposed of they strangely torment and worry the mind, pulling it first this way and then that, and fairly tearing it to pieces."

Contrary to Bacon's view, however, the present-day worries I mentioned above start when the riddles are answered. It is the scientific answers to questions about who we are that make many people feel uneasy (rightly or wrongly so is another issue that I'll leave aside for the moment).

Is this an indication of a change, since the time of Bacon, in the perception of how science affects our self-image? We shouldn't forget the troubles people like Galileo, Bruno, Descartes and Spinoza had in speaking openly about their scientific and philosophical views. This is a big topic that needs some explanation of 17th-century debates (e.g. concerning the Cartesian animal-machine thesis) that I'll get back to later.

Another question is perhaps more complex: It is one thing to say that scientific answers to basic questions about ourselves can be unnerving, but can such answers be true and morally wrong at the same time?
Geez, am I really going to treat this systematically? There are some things I don't like about being a middle-aged loser (though I have to admit that it's not half as bad as I thought it would be). Think I'll have to figure out a way to skip my own classes.

PS
Look here for a brief explanation of the more recent depiction (shown above) of Oedipus and the Sphinx by another famous Francis Bacon.

Saturday, February 11, 2006

Al-Khwarizmi


One of the first things that would come to my mind when asked my opinion about Arabic and/or Islamic culture (not that anyone did, though; all this is pure self-indulgence) is that one of the keywords of AI, algorithm, is derived from the name of the great 9th-century Arabic scholar Abu Ja'far Muhammad ibn Musa al-Khwarizmi, who, amongst many other things, wrote a book on the systematic solution of certain types of equations.
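
As a small nod to that etymology, here is a minimal sketch (my own, in Python; the function name and the restriction to positive b and c are just choices for illustration) of the completing-the-square recipe al-Khwarizmi described for equations of the form x² + bx = c:

```python
import math

def solve_completed_square(b: float, c: float) -> float:
    """Solve x^2 + b*x = c (with b, c > 0) by completing the square:
    add (b/2)^2 to both sides so the left side becomes a perfect square."""
    half_b = b / 2.0
    # x^2 + b*x + (b/2)^2 = c + (b/2)^2   =>   (x + b/2)^2 = c + (b/2)^2
    x_plus_half_b = math.sqrt(c + half_b ** 2)
    return x_plus_half_b - half_b

# Al-Khwarizmi's own worked example: x^2 + 10x = 39, whose answer is 3.
print(solve_completed_square(10, 39))  # -> 3.0
```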

Another thing that would come to my mind is the 13th-century poet Rumi (I know about the debate over whether or not Sufism should be considered part of Islam, but I don't care. He comes to my mind).
Here's a poem of his:

Do you know a word that doesn't refer to something?
Have you ever picked and held a rose from R,O,S,E?
You say the NAME. Now try to find the reality it names.
Look at the moon in the sky, not the one in the lake.
If you want to be free of your obsessions with words
and beautiful lettering, make one stroke down.
There's no self, no characteristics,
but a bright center where you have the knowledge
the Prophets have, without books or interpreter.


Here's another:


Today, like every other day, we wake up empty
and frightened. Don't open the door to the study
and begin reading. Take down a musical instrument.
Let the beauty we love be what we do.
There are hundreds of ways to kneel and kiss the ground.


One more:

I asked for one kiss: You gave me six.
Who was teacher is now student.
Things good and generous take form
in me, and the air is clear.


Oh well:

At night we fall into each other with such grace.
When it's light, you throw me back
like you do your hair.
Your eyes now drunk with God,
mine with looking at you,
one drunkard takes care of another.


Translations by J. Moyne & C. Barks.

Thursday, February 09, 2006

Jyllands-Posten Epaper

Ok, I can't draw, and I don't care too much about cartoons, but I do care about freedom of speech, hence this link to Jyllands-Posten Epaper.

Thursday, January 12, 2006

Video on the history of computer chess

Just to see if I can get back into a blogging routine again....
If you're into chess (like I am) and AI (did you notice?) you might like this 6-minute clip on the history of computer chess, starting out and ending with Kasparov vs Deep Blue. It's not bad at all, with nice footage and pictures of some great chess players and AI researchers.
Of course, there's no debate anymore about whether computers are better at chess than humans: they are. Still, the principles underlying the human capacity for chess are only roughly known. Sure, it has something to do with pattern recognition and, presumably imagery-based, calculation. But chess computers have not helped us all that much in understanding these capacities. Winning turned out to be more important than understanding. Let's hope that something similar won't happen with robot soccer.
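
Just to make the 'winning versus understanding' point a bit more concrete: engines in the Deep Blue tradition mostly win by deep brute-force search over a hand-tuned evaluation function, which looks nothing like human pattern recognition. Here's a toy sketch of that search idea (my own illustration, not Deep Blue's code), demonstrated on a game small enough to search completely:

```python
def negamax(position, depth, evaluate, legal_moves, apply_move):
    """Toy game-tree search: pick the move that maximizes our score,
    assuming the opponent replies in kind. Real chess engines add
    alpha-beta pruning, opening books and hand-tuned evaluation terms.
    evaluate() scores a position from the viewpoint of the side to move."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        score, _ = negamax(apply_move(position, move), depth - 1,
                           evaluate, legal_moves, apply_move)
        score = -score  # what is good for the opponent is bad for us
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# Tiny demo on a game we can search to the end: a pile of stones,
# each player takes 1 or 2, and whoever takes the last stone wins.
result = negamax(
    position=5, depth=10,
    evaluate=lambda pile: -1,  # only reached when no moves remain: the side to move has lost
    legal_moves=lambda pile: [m for m in (1, 2) if m <= pile],
    apply_move=lambda pile, m: pile - m,
)
print(result)  # -> (1, 2): taking 2 stones from a pile of 5 is a winning move
```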

PS
There's a nicely done but maybe not really all that funny parody of Kasparov's expressive suffering during his defeat, performed by none other than the legendary Victor Korchnoi, in a commercial of some kind.

PSPS
A brief but clear review of some interesting aspects of the history of computer chess can be found here. There is even an interview with me, in Portuguese, about chess, AI and some related issues at the Brazilian clube de xadrez. Ah well, isn't blogging supposed to be all about self-promotion?

Thursday, August 18, 2005

Science and what angels wear on their feet

""I just thought you'd like to see," he said, "what angels wear on their feet. Just out of curiosity. I'm not trying to prove anything, by the way. I'm a scientist and I know what constitutes proof. But the reason I call myself by my childhood name [Wonko (the Sane)] is to remind myself that a scientist must also be absolutely like a child. If he sees a thing, he must say that he sees it, whether it was what he thought he was going to see or not. See first, think later, then test. But always see first. Otherwise you will only see what you were expecting. Most scientists forget that. I'll show you something to demonstrate that later. So, the other reason I call myself Wonko the Sane is so that people will think I am a fool. That allows me to say what I see when I see it. You can't possibly be a scientist if you mind people thinking that you're a fool."
Douglas Adams, The Ultimate Hitchhiker's Guide, p. 587 (So Long, and Thanks for All the Fish).
It has been quiet on my blogging front, not because of holidays, alas. A long time ago, when I was in San Diego, I learned to surf with something so long and thick that it looked more like a boat than a board (see my outdated research description for a pic). No complaints, I sure needed it to stay afloat. At the end of the second day of lessons I leaned forward a bit too much while I was on top of a rather big wave. I went down, with the wave happily following me. Under water I couldn't move for quite a long time as the water kept pounding on me. I felt like an uncoordinated set of limbs in total darkness. When I finally could move again I was running out of air and didn't even know anymore which way was up or down. I kind of liked the experience, but I'm not recommending it. Don't try this at home in your bathtub.
I have felt something similar since about May this year. The wave of work has stopped me from getting myself together and knowing where I'm heading. Guess I'm close to taking a deep breath again. Question is: which way is up? Anyway, as Douglas Adams brilliantly put it: see first, think later, then test. I'll try to keep that in mind this coming year, as much as his warning that you can't possibly be a scientist if you mind people thinking that you're a fool.

PS
The painting is the famous 'Holy Trinity' by the Russian icon painter Andrei Rublev. I don't see no toes, but I'm not sure I see shoes either.

Monday, May 16, 2005

Self-replication

Robot builds copies of itself
There is a lot of commotion about a recent self-replicating robot. Several scientific news webpages have covered it, and Vicente Marçal, a Brazilian student at UNESP, Marilia, says on his blog that the Folha de São Paulo (the Brazilian NRC Handelsblad, or New York Times, or Le Monde, etc.) had an article about it. "Robôs criados nos EUA se reproduzem como seres vivos," they wrote: Robots created in the US reproduce themselves like living creatures.
It pays to read their paper in Nature (if your university provides access, you can download the paper) or, at the very least, take a look at the video. At the homepage of the makers (Cornell CCSL) a lot more pictures and texts can be found.

Especially interesting is the connection that's being made between artificial systems and biology (living creatures). Along the lines of the Folha de São Paulo, a site quotes Hod Lipson as saying:
"Although the machines we have created are still simple compared with biological self-reproduction, they demonstrate that mechanical self-reproduction is possible and not unique to biology."
I think that is why the media picked it up: it's no longer just us that can reproduce; robots can do it too. No doubt the 'robots will take over the world' fear is at work here: if they start replicating themselves like we do, how can we stop them? It's the Terminator syndrome.

I always get a bit annoyed by the harping on this fear, because it draws attention away from the real, presently existing dangers of information technology and AI (see my postings on cyborgs). One way of selling bad things is to talk in a loud voice about far worse things that are currently unrealistic.

In short: the differences between biological self-replication and these robots are huge. So huge, in fact, that these robots bring back memories of Penrose's self-replicating wooden blocks from the 1950s. I have a great video about this, but I don't know if I can show it here without being sued by lawyers specialized in copyright. The basic idea is that complex-shaped wooden blocks were shaken. They combined and disconnected to form identical structures of 2 or 4, etc., basic building blocks.
Do you get scared by wooden blocks that get rocked and then connect and disconnect at regular places? My bet is that you don't. It would be different, perhaps, if they could build the basic building blocks themselves, or if the basic building blocks were so simple that they could be found everywhere. But that's not the case.
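
Just to make vivid how unthreatening that kind of template replication of pre-made parts is, here's a toy caricature (entirely my own, not Penrose's actual mechanics and not the molecube model either): a seed 'pair' can only turn existing free units into more pairs; it never builds the units themselves.

```python
import random

def shake(pairs: int, free_units: int, steps: int, seed: int = 0) -> int:
    """Toy caricature of template replication: on each shake, an existing
    pair may capture two free units and turn them into another identical
    pair. Without a seed pair, nothing ever replicates; and once the
    supply of pre-made units runs out, replication stops."""
    rng = random.Random(seed)
    for _ in range(steps):
        if pairs > 0 and free_units >= 2 and rng.random() < 0.5:
            free_units -= 2
            pairs += 1
    return pairs

print(shake(pairs=1, free_units=20, steps=50))  # the seed pair has multiplied
print(shake(pairs=0, free_units=20, steps=50))  # no seed, nothing happens: 0
```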

To be fair, Lipson et al. do refer to Penrose but say that in this case "it is not clear how to scale the process to more complex systems, short of redesigning the ‘atomic’ components. In this demonstration we use a modular substrate in which arbitrarily complex self-reproducing machines can be constructed. We circumvent the long-standing hurdle of ‘what counts as self replication’ by suggesting that self-replicability is not a binary property that a system either possesses or not, but is a continuum dependent on the amount of information being copied"

So they're perfectly cautious and reasonable. In any case, this posting is not against them or their work, but against the Terminator syndrome that causes it to be discussed in newspapers (though Lipson et al. are obviously not responsible for that). From the Terminator perspective, much more interesting (or, if you prefer, much more scary) are the fearsome English Slug robots. These robots will actively search for slugs in the field and use what they find as their energy source, though the 'digestion' is done in and by another system. Take a look here, or at Ian Kelly's homepage.

By the time robots really can find and digest their food, and then self-replicate, we had better watch out. But I wouldn't lose any sleep over it just yet.

Wednesday, May 11, 2005

Erroneous Errors

I came across this site when I made an error while trying to check how to spell 'erroneous'.


Neurophilosophy

Yesterday the course on neurophilosophy started, for me at least, and sort of. In case you're into distinctions, as many philosophers are, there appears to be a consensus that there is a small but noticeable difference between neurophilosophy and the philosophy of neuroscience.
In the philosophy of neuroscience one takes a philosophy of science type of perspective. Topics ranging from (non)-reductionism to the necessity and interpretation of representation get discussed under that label.
In neurophilosophy we examine how neuroscientific data apply to the great, deep, fundamental, mind-boggling, bestselling philosophical questions about the self, identity, consciousness, autonomy, free will and what have you.
For a brief introduction to both areas, see for instance the Stanford Encyclopedia of Philosophy.
I'll be discussing topics belonging to both fields: Embodied Embedded Cognition and representation, and Brain data and modes of explanation, as example cases of the first area; Agency & self, Consciousness, and Free will belong to the latter.
Here's the page of the neurophilosophy course.

According to some, a new field is emerging, called neurophenomenology.
Antoine Lutz describes it as follows: "Neurophenomenology takes the (...) step of incorporating 'first-person methods'-precise and rigorous methods subjects can use to increase the threshold of their awareness of their experience from moment to moment, and thereby provide more refined first-person reports. The target is to create experimental situations that produce 'reciprocal constraints between first-person phenomenological data and third-person cognitive-neuroscientific data: the subject is actively involved in generating and describing specific and stable, experiential or phenomenal categories; and the neuroscientist can be guided by these first-person data in the analysis and interpretation of the large-scale neural processes of consciousness."
I'm not quite sure what to think of this yet, and I remember not being totally happy with some adepts of Husserl and Heidegger when I was a philosophy student, but together with some friends and colleagues I'll try to find out. In the course I won't be talking about it, not this year at least.

A perhaps interesting interview with Shaun Gallagher about the relation between neurophenomenology, phenomenology and cognitive science can be found here.
And, to round it off: Phenomenological Approaches to Self-Consciousness

PS
There are a few garbled sentences in the 'Editorial review' by Lutz, but I thought that the main message was clear enough to link it anyway.

Friday, May 06, 2005

The 'autonomy' of robots

Back home, if that's what Amsterdam is. (Image: Amsterdam in 1544.)


Wrapping up my 2-month stay in Brasil, I'd like to mention my talk at CLE, the Centre for Logic, Epistemology and the History of Science in Campinas, SP. It was on 'Robotics, philosophy and the problems of autonomy'.
Like many others, I got a bit annoyed by the recent tendency in AI to call just about anything an 'autonomous agent'. It seems that the label has become almost equivalent to 'something that you can turn on and walk away from'.

I hate this type of conceptual erosion, which unfortunately is all too frequent in the history of AI.
Spam-alert!!
I'm part of a very informal discussion (organized by Incognito, the AI student organization of the University of Utrecht) on Tuesday evening, the 31st of May, at 20.30 in 'Cafe de Dijk', exactly on this topic.
So I presented a bit about how robotics developed out of telemanipulation, in which the human directly guides and causes the motions of the remote. Directly steering the remote is of course very tiring and time-consuming for the operators, and so the idea arose to have simple and repetitive operations performed automatically, without human steering. This became known as supervisory control. Increasing the autonomy of the remote simply means reducing the need for human supervision and intervention. That, as far as I can see, is the origin of the current use of autonomy in AI: autonomy is interpreted relative to the amount of on-line (while the remote or robot is functioning) involvement of human operators.
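
To picture that engineering notion of autonomy, here's a minimal sketch (my own toy, with made-up task names and a stand-in operator function): the 'more autonomous' the remote, the larger the fraction of subtasks it completes without a human in the loop.

```python
def run_mission(subtasks, robot_can_handle, ask_operator):
    """Toy supervisory-control loop: the remote executes the subtasks it
    can handle on its own and only interrupts the human supervisor for
    the rest. 'Autonomy' here is just the fraction of subtasks completed
    without operator involvement."""
    handled_alone = 0
    for task in subtasks:
        if robot_can_handle(task):
            handled_alone += 1      # executed without supervision
        else:
            ask_operator(task)      # fall back to teleoperation
    return handled_alone / len(subtasks)

# Hypothetical usage:
autonomy_level = run_mission(
    ["move to sample", "grip sample", "climb ridge"],
    robot_can_handle=lambda task: task != "climb ridge",
    ask_operator=lambda task: print("operator, please steer:", task),
)
print(autonomy_level)  # -> 0.666..., i.e. two of three subtasks done unsupervised
```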

Then I contrasted this with a very...uhh...extremely brief review of the history of the philosophy of autonomy (hahaha....I know). For philosophers, autonomy is related to the capacity to act on one's own behalf and make one's own choices, rather than following goals set by other agents. From this perspective robots have about as much to do with autonomy as people who have been brainwashed. They don't even know that their goals were set for them. That's why for many philosophers AI comes close to being fraudulent: making x but calling it y. Conceptual counterfeiting, or so I'll try to claim in Utrecht.

To be fair, I also tried to indicate the problems philosophers have in explaining what 'making your own choices' really means. Before you know it you get stuck in the problem of free will, and why on earth would we expect roboticists to solve that problem? It's not even certain that there is such a thing as free will. In a way, philosophers can be accused of conceptual counterfeiting too, except that in their case it's not even certain that the 'real money' exists. It's more creative perhaps, and with considerably less chance of going to jail (who cares nowadays about the rights of what does not exist), but scientifically speaking the creation of mere confusion does not amount to making an argument.

Anyway, there were roboticists and philosophers, and no rotten tomatoes were thrown afterwards, so I guess I can safely say it went well. Spam-alert!! I have a manuscript on this topic (and another one) that you can download here from next week on.

"Wrapping up my 2 month stay in Brasil"....Ai, que saudades.

Wednesday, April 20, 2005

Cyborgization

Yes, the class went fine actually. Thank you. It's just that translating labels like 'ubiquitous computing', 'information appliances', 'mindware upgrades' and such into Portuguese is not always....well...easy. I sometimes use an automatic translator, especially when I'm in a hurry, but since it produced 'O testamento é gratis' (the testament is free of charge) as a translation of 'The will is free', I realized once again (and not just for theoretical reasons) that translation requires intelligence and understanding.
Let's see if I can reconstruct my brilliant exegesis that I lost due to a brief internet disruption yesterday (makes me wonder how many great postings on these masses of blogs go up in noise every day).

I started out by making fun of Brooks's ridicule of the likes of Moravec, Minsky & Kurzweil, "who have succumbed to the temptation of immortality in exchange for their intellectual souls" (Brooks, 2002, 204-205). He noted that they predicted the major breakthroughs (allowing them to download their consciousness into machines, thereby becoming immortal) to occur around the time they would be about 70 years old. "They were each, in their own minds, going to be remarkably lucky, to be in just the right place at the right time." (206).

However, in later chapters Brooks himself regularly speaks of the major technological advances (e.g. in relation to artificial retinas, rerouting nerve signals around areas damaged by Parkinson's (223) and all kinds of implants) that will take place in the next 20 years or so: "Most of these technologies are going to come to fruition in the next ten years, and almost certainly they will all be perfected within twenty years." (223).
20 years? Now, I don't know exactly how old he is, but my guess is that he will be approaching his 70s by that time. So it may not be hope for immortality, but at least hope for some serious improvements 'in the nick of time' all the same. I don't want to make fun of this. Hope springs eternal, and where would we be without it? I just want to ridicule the ridiculing.

A second drive I noticed is his repeated reference to the demands of 'the market' (e.g. 217, 218, 229). Brooks is clearly interested in the financial gains involved in robotics (he describes his forays into commercial robot dolls in an earlier chapter). I'm not saying that that is bad, or that it makes for bad science (see my earlier post on Steven Rose's new book). But when people who describe and create future technologies also have personal financial interests, it's time to watch out.

There will be a whole new class of enhancements for our human bodies, Brooks says, and, like cosmetic surgery, these will become socially acceptable. Then, he suggests, why not 'upgrade', for instance, a blind ('useless') eye to be able to see at night? (226).
"Night vision enhancement will get to the point that some people with two perfectly good eyes may be willing to sacrifice one for it. In poorer countries people are already willing to sell some of their own organs for what appear to be pitiful amounts of money. In other parts of the world people are willing to become human bombs to support their causes. Modifying a good eye, to give superhuman performance, will not be too outrageous for lots of individuals, resistance movements, and governments." (226).
As I am currently working in one of these poorer countries (at least it has large and extremely poor areas) I feel most uncomfortable with the 'are willing to sell some of their own organs for pitiful amounts of money' phrase. They are not willing. They are poor. They have the choice between starvation (or serious illness, etc. of themselves, their parents, or their children) and selling themselves. They can almost be forced to sell through the mechanisms of the market. What I don't like is that the obviously negative aspects of this new 'body enhancement industry' don't even get mentioned. Surely there will be an increasing 'hunt' for useful (parts of) eyes and other organs that are essential for the technologies to work.

Now, I am sure Brooks didn't intend it the way I am reading it. I'm sure he has a heart. But 'market pulls' don't. Moreover, market pulls can be created, as we all know. If you don't have X, you're out (some may remember how smoking cigarettes was supposed to make you look cool).
I know of an 8-year-old girl who is being mocked at school for not having a mobile phone. She's an outcast now. I wonder at what age she will be buying a superhuman eye for her child in order to save him or her from being scorned in the same way. 28? And I worry about the person who is going to sell his eye for some pitiful amount of money.

At the end of his book, Brooks is triumphant:
"We will change the very ways in which human beings interact. We will be superhumans in many respects. (...) Each of the have-nots will soon want to become one of the haves." (230).
Oh yes they will want that, but many of them will end up selling themselves to the superhumans, or those who'll be approaching 70 and don't know how to deal with decay and impending death.

PS
I'm not against the book, in fact I think it's a good read. I'm not against Brooks, I think his work on robotics is great. I'm not against technology, the future or even market pulls, but I've seen enough misuse to be wary.
Lest you think I just want to show how 'social-minded' I am, let me give you another type of example. Clark and Brooks both envisage the display of additional information on a part of the visual field (who are you meeting, what is her name, background, names of relatives you should know, what were the topics when you last met her, that type of thing). At first this information could be displayed by means of some special glasses, but perhaps later by direct projection onto your visual cortex. I sure could use some of that stuff, as I forget names more rapidly than I can hear them.
But guess what the market pull will lead to: advertisements straight into your brain. I can almost see the slogans: "Get 10% discount on this new implant, and receive three free additional messages per day!!" or something to that effect. Market pull leads to advertisements. Only the rich haves won't have them.
You see what I mean?

PS 2
Maybe I'm exaggerating. Brooks gives some examples of superhuman capacities that don't sound all that bad:
"Having such things implanted in our brains will make us tremendously more powerful. (...) We will be able to think the lights off downstairs instead of having to stumble down in the dark to switch them off (...) we will be able to think the coffee machine on, down in the kitchen." (229).
Ok, I admit: deep down in some dark corner of my soul there is this 'Wille zur Macht' (will to power), as Nietzsche put it (yes Ma'am, I'm a philosopher), that longs for this kind of tremendous power. You know, pretty soon after having such abilities, we won't be able to imagine how we ever could have lived without them.

PS 3
Well, it was something like this, my original post, only infinitely wittier, wiser and more just to Brooks and everybody else, including myself. What can I say? Technology let me down.

PS 4
A few days after this post I came across the following: Desperate Bangladeshi mother puts eye on sale

Tuesday, April 19, 2005

Cyborgs

The topic of tomorrow's class here at UNESP (Marilia, São Paulo) is 'The future of cognitive science', and since the earlier classes were mostly on robotics, I chose some cyborg-related texts: Andy Clark's (2003) Natural-Born Cyborgs (3-11 & 43-58) and the last two chapters of Brooks (2002) (197-236). I don't know the future, and I've never been able to worry much about it, but, well, let's just say I'm trying to broaden my horizon.

According to Brooks (2002, p.x) we won't have to worry about machines taking over, because we will become a merger between flesh and machines, combining the best of biology and technology.

"We need not fear our machines because we, the man-machines, will always be a step ahead of them, the machine-machines. We will not download ourselves into machines; rather, those of us alive today, over the course of our lifetimes, will morph ourselves into machines." (212).

Personally I think we don't have to worry about machines taking over for quite a while yet simply because they are so incredibly stupid.

Anyway, here are some recent examples of morphing-style technology from the BBC news page, just to get the idea:

Brain waves control video game
Microchip promises smart artificial arms
Bionic eye will let the blind see
Brain chip reads man's thoughts

I especially like the naivety of the last one, on 'chips reading thoughts'. In any case, there's no doubt that the incorporation of technology into the human body will grow spectacularly. The question that interests me more is what drives this development, because that will determine to a significant extent where we'll end up.
I wrote about a page or so on this very topic, but just lost the whole lot. Today ain't my day. Hope the class will go better tomorrow.

Wednesday, April 13, 2005

Shakey on film

SRI - Shakey
"Life Magazine refered to Shakey as the “first electronic person” in 1970."

Remember Shakey? That delicious late-60s, STRIPS-based robot that reasoned about its world of blocks and rooms?

There is a great movie (24 minutes), made in 1969, that shows Shakey in full glory and gives a wonderful impression of AI days gone by (no, I'm not that old). There is a hilarious scene around 4:30 that shows how Shakey can deal with 'unforeseen accidents'. Seriously, you gotta see this.

The site also contains a host of the proposals, reports, and publications by Nilsson, Rosen and many others, ranging from 1965 to 1984.

Friday, April 08, 2005

Messing with the mind

Messing with the mind
"Rose makes it clear that what is driving modern mind science is not just idle curiosity but the desire for marketable knowledge."

What's good about 'idle curiosity'? Is that the main drive for science's quest for truth? Does 'truth' exist? What exactly would be wrong with a desire for marketable knowledge? Would such a desire (always? often? sometimes? never?) lead away from truth? Questions I won't be able to answer, but that I will come back to at times. Let's see if Rose's book would be worth reading first.

Steven Rose (of the 1984 'Not in Our Genes' book, written with Lewontin & Kamin) has a new book out (The 21st Century Brain, £20; 256pp, ISBN 0 224 06254 9), in which he, reportedly, criticizes neuroscience for attempting to deal with an autopoietic system on the basis of crude models.
Rose considers himself, along with Lewontin and Kamin, to be part of a fire brigade, aiming to douse deterministic fires with the cold water of reason before the entire intellectual neighborhood goes up in flames (1984, p.265).
Dennett (2003, Freedom evolves, p.19) considers Rose to be one of those that throw a lot more than the cold water of reason:
"while their campaigns have grown threadbare, and their simple fallacies have been exposed over and over by their scientific colleagues, the debris from their campaigns continues to pollute the atmosphere of the discussions, distorting the understanding of the general public on these topics."

Another review of the book says that Rose "sees menace in the smug reductionism of so-called 'neuro-philosophy', which dismisses traditional views of human responsibility as mere unscientific folk psychology".

I happen to give courses both on neurophilosophy and on folk psychology. I think folk psychology could very well turn out to be wrong about everything, including the nature of our own mental states. Which is not to say that we already know that folk psychology is wrong about everything. I also think that the favorite style of explanation in cognitive science and neuroscience, functional decomposition and localization, may be too blunt a tool to understand a thoroughly recurrently interconnected system like the brain. Not to mention, of course, that there is a lot more to cognition and behavior than the brain. At the same time, I think reductionism is not to be waved away lightly, e.g. by a mere reference to recurrency or the embodied embeddedness of cognition.

You know what? I'll read the book.

PS
I think this will be the start of a 'Shall I read this book or not?' thread. In addition to the 'This is what I think about this book that I read' thread. I'll have to find out if there's a way to archive the separate threads.

PS-2
One more review.

Thursday, April 07, 2005

Wake up! They're here!


(Image: Clocky, the runaway alarm clock robot.)

But really, they're here to help us.
I owe this interesting example to this interesting page of these interesting students.

Wednesday, April 06, 2005

Attack of the Soccer Robots

When will androids beat us at sports?
Sigh. Ok. This is not the greatest link ever (just look at that terrible cartoon, but at least the article contains some interesting links). Still, I wanted to come back to my previous post on vision and thinking (see: robots reveal their human side) by using this article as an illustration.

My attention was drawn by these sentences:
"when the robots do kick us into submission, it won't be because they've unlocked some dominant strategy or because they've dramatically surpassed our stamina, coordination, and flexibility. It'll be because the robots have finally learned to see as well as you and I do every day."

So that's 1-0 for Brooks, wouldn't you agree? Then the game goes into the second half:
"The eyes (..) still present a huge problem. Today's soccer bots primarily use color-coding to identify objects on the field: white lines, orange ball, yellow-and-pink posts. But color-coding is a very brittle way to perceive the world. If the tint or intensity of the lighting changes, so do the colors—if a cloud passes overhead, the robots are effectively blind. To confuse them, a human team would need only to keep the ball in the air, where the shifting background and light would make the colors almost impossible to identify."

Well, maybe a similar type of goal, but still: 2-0
"What's to stop a team of robots that can best us on the pitch from deciding to fight us outside the stadium? Manuela Veloso, the head of CMU's MultiRobot lab and one of RoboCup's founders, thinks fears of robot rebellion are misplaced. "They only have to understand where's the ball, where's the goal, where are my teammates, where's the end of the field - not what's a house or what's a tree or what's anything," she says."

So Minsky wins in the end? Even if 'understand' here is meant as 'recognize', the capacity to 'fight us outside the stadium' requires knowing what to do with houses, trees and humans once you've recognized them, doesn't it? That's one of Minsky's points about the importance of thinking. So what's the score now? 2-3? 2-1000? Anyone?
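
PS
About the brittleness of color-coding in the quote above: here's a minimal sketch (my own toy, not RoboCup code) of a ball 'detector' that accepts a pixel if its hue and brightness fall inside a fixed orange range, and what happens when a cloud dims the scene.

```python
import colorsys

def looks_like_ball(r, g, b):
    """Toy color-coded detector: a pixel counts as the orange ball if
    its hue, saturation and brightness fall inside fixed thresholds."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return 0.02 <= h <= 0.12 and s > 0.5 and v > 0.5

sunny_orange = (230, 120, 30)                                  # ball in bright light
cloud_passes = tuple(int(c * 0.45) for c in sunny_orange)      # same ball, dimmer light

print(looks_like_ball(*sunny_orange))   # True: detected
print(looks_like_ball(*cloud_passes))   # False: the fixed brightness threshold fails
```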

Tuesday, April 05, 2005

Robots reveal their human side

Robots reveal their human side
I'm currently teaching a course at UNESP (Marilia, SP, Brazil) about philosophy and robotics, so this will be a main theme for the next few posts or so; I'll get to neurophilosophy a bit later.
Reading Rodney Brooks's book Robot (2002, Penguin) is quite interesting. On page 74 he says, for instance, the following:

"Marvin Minsky, cofounder of the MIT AI lab and a consultant to Stanley Kubrick on [the movie] 2001 back in 1967 and 1968, helping him to envision HAL, decided to get the pesky vision problem solved once and for all in 1966. (..) We still have not gotten very far with that summer project here in [the year] 2001. (..) Unfortunately, Marvin still did not get it after his failure to solve vision in a single summer, and he continued to belittle what I consider to be the truly hard problems for the next thirty years. (...) I once heard Marvin in the mid-eighties complaining at an AI conference that too many people were working on vision and robotics, and it was as though those misguided people were working on daisy wheel printers. (..) Marvin's point was that computer vision and robotics was just input-output. The real problems in AI were things like thinking."

Rather than siding with Minsky or Brooks, it seems to me that we have to do both. Unfortunately, though, and despite all the great progress, I don't think we really know how to do either vision or thinking in any serious sense of the word 'do' (as in e.g. "Yes, I do"). Maybe I'm pessimistic; philosophers like to be critical (if only because mostly they are completely unaware of how much effort goes into getting a computational model or robot to work in the first place).
But, I mean, take this robot Kokoro. As the caption to the picture says: "Kokoro speaks four languages but struggles to give a straight answer."
Of course it's great that you're capable of not knowing what to say in four different languages. I don't deny there is progress here. But it also shows that both understanding language (I take understanding speech here as a parallel to Brooks' remarks about the problem of vision) and thinking about a good answer are still beyond us (see also my earlier posts on this topic: the Eliza effect and Common sense).

So what to do when both Minsky and Brooks are right that the other is wrong?

Monday, March 28, 2005

ELIZA effect

the ELIZA effect
Weizenbaum wrote this program in the 60s, simulating, "or rather, parodying the role of a Rogerian psychotherapist engaged in an initial interview with a patient" (Weizenbaum, 1976, Computer Power and Human Reason. New York: Freeman, p. 3). The program basically did no more than repeat what you were saying back to you in an encouraging, question-like way.
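
If you've never seen how thin the trick is, here's a minimal sketch in that spirit (my own toy rules, not Weizenbaum's actual DOCTOR script):

```python
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(utterance: str) -> str:
    """Echo the user's own words back as an encouraging question:
    swap pronouns, look for 'you are/feel ...', and reflect it."""
    words = [REFLECTIONS.get(w, w) for w in re.findall(r"[a-z']+", utterance.lower())]
    match = re.search(r"you (?:are|feel) (.+)", " ".join(words))
    if match:
        return f"Why do you feel {match.group(1)}?"
    return "Please tell me more about that."

print(eliza_reply("I am unhappy about my work"))
# -> "Why do you feel unhappy about your work?"
```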

The story goes (well, at least I told it a few times to my students) that he was thrown out of his own office by his secretary because she was busy telling her life-story to this program. He used this to show that programs often seem more than what they are. People get fooled quickly into thinking that they are dealing with something intelligent/meaningful/conscious/etc.

The funny thing about the ELIZA effect ("people's tendency to attach to words meanings which the computer never put there") is the possibility that we may 'suffer' from it not just in relation to computers and robots, but also, and perhaps more often, when interacting with others and even ourselves.

Several times I found out later that what I thought was a meaningful remark by someone (yep, you guessed it: mostly students) actually didn't have the (and in some cases even a) meaning I attached to it. Unfortunately, I know that several people sometimes thought that I said something interesting, whereas I didn't have a clue myself as to what they could possibly think I meant. And many students realize sooner or later, after graduating, that many of their teachers (yep again: me included) sometimes didn't really know what they were saying. Right, Edward?

But the worst part of it all is that I (and you too, btw) may be overinterpreting myself in a much more profound sense than what happened with Eliza. Daniel Wegner wrote a great book (The Illusion of Conscious Will. Cambridge, MA: MIT Press, 2002) in which he argues that conscious will is an illusion. People mistake their experience of their will for an actual causal mechanism. We think that we, by an act of our conscious will, cause our actions, but all that really may happen is a mere co-occurrence of thought and action, which makes us infer that our thoughts set our acts in motion (p.65).
As Wegner says: Will is a kind of authorship emotion. Conscious will is the somatic marker of personal authorship (p.327).

No more laughing now about those silly people overinterpreting these stupid computers, eh?

Jelle (yeah, him again) told me this great story about a self-over-interpreting coffee machine, but it would be unfair to give it here, as I think he should tell it himself (but you better hurry Jelle!).

PS
Philosophers have been there, done that (as Wegner himself is the first to acknowledge). David Hume already said in 1739 that the will is “nothing but the internal impression we feel and are conscious of, when we knowingly give rise to any new motion of our body, or new perception of our mind” (A Treatise of Human Nature, chapter 61)

PS-2
Yes, I know: there is a difference between attaching meaning to words when there is none and attaching the wrong meaning to words (compared to the different meaning intended by the speaker). So now the question is: on which side does the illusion of conscious will belong?

PS-3
Brooks (2002, p.106-107) gives the example of owners of Sony's Aibos (see here for some amusing cases), who claimed that their pet robots could recognize their face and their voice, despite Sony's denial that the robots were capable of doing this. "Both face and voice understanding are projections by the owners onto their robotic dogs," Brooks says.

Friday, March 25, 2005

Common sense boosts speech software

Common sense boosts speech software: "there ought to be more research into how to get computers to make better mistakes," said Lieberman.

Common sense is one of my favourite topics, but nobody has explained why better than Lieberman.

Looking a bit further (or rather: back), I found this 1999 interview with Doug Lenat, originator of the Cyc project (a common-sense database), in which he seems to be repeating things he said in a documentary I saw at least 8 (?) years ago (I'll check when I get back from Brasil).

Here's the open mind project that Lieberman's work is based upon:
"Computers today are just plain dumb! The Open Mind Commonsense project is an attempt to make computers smarter by making it easy and fun for people all over the world to work together to give computers the millions of pieces of ordinary knowledge that constitute "common-sense", all those aspects of the world that we all understand so well we take them for granted. This repository of knowledge will enable us to create more intelligent and sociable software, build human-like robots, and better understand the structure our own minds. We hope you will join us by registering below!"

I know I will...

And so I did, and my first task was this:

Please write up to five things that someone should already know in order to fully understand the following event:

Alex was depressed.
Alex committed suicide.

Errr.....well....geez what a question...ehhh....let's see.....

Alex was alive
Alex was unhappy
Alex wanted to end his life
So he could not think or feel anymore
So he killed himself

I seriously tried, but got an overwhelming sense of falling short. I mean, I am presupposing that someone already knows what being alive, unhappy, think, feel and kill mean. But presumably others are working on that (and I mean you!). Furthermore, is this all there is to committing suicide? Not likely. So how long does the list have to be? 10 things? 100? 10,000? How is the system going to find the relevant bits and pieces fast enough to act appropriately (e.g. respond understandingly)? And are these descriptions enough to understand what it means when someone commits suicide? I mean, aren't there qualia and consciousness problems here (unhappy, feel)? Or am I overinterpreting what is meant by 'fully understand'?
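
To make the worry about 'finding the relevant bits and pieces' a little more concrete, here is a minimal sketch (my own toy, not the Open Mind software) of what storing such assertions amounts to; the retrieval is just naive keyword overlap, which is roughly where the hard problems start.

```python
import re

assertions = [
    "a person who is alive can die",
    "a person who is depressed is very unhappy",
    "committing suicide means killing yourself",
    "killing yourself ends your life",
]

def words(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def relevant(event: str, top_n: int = 2):
    """Naive retrieval: rank stored assertions by how many words they
    share with the event description. Deciding which facts actually
    matter, and what any of them mean, is exactly the part this skips."""
    event_words = words(event)
    return sorted(assertions,
                  key=lambda a: len(event_words & words(a)),
                  reverse=True)[:top_n]

print(relevant("Alex was depressed. Alex committed suicide."))
```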

I wrote a book once dealing with the problem of common sense and folk psychology, which made me realize that these kinds of approaches were most likely not going anywhere (I told the 'open mind project people' honestly so in my self-description), but I'd be happy to see genuine progress in this area. It's just that 'getting the data in' does not seem to solve what to me seem to be the real issues. But these are just first thoughts; I'll read their stuff and get back to this.

PS
The next question was:

Christine had no friends.
Christine was lonely and sad.

That got me down a bit, so I left. But I will go back there, and I'll make sure to ask students of some of my courses to visit their page and help. Promise.

Digital apes

Flint
No need to explain the importance of a digital ape I guess.

Why my blog?

Amidst the I-don't-know-how-many blogs that already exist? Well, I'm into Embodied Embedded Cognition (EEC), a view in cognitive science that claims that a lot of our behavior is not (at least not primarily) based on abstract internal representations (e.g. general models of the world) but rather on the behavioral tendencies of our bodies and information that is present in the environment.

Scaffolding is one of the central notions of EEC, referring to the creation of structures in the environment to ease the demands on our internal cognitive processes. As far as I know the term was introduced by Jerome Bruner, referring to the assistance provided to children during their development. Its current use derives from Andy Clark's (1997) book 'Being there'.

A blog is a good example: I create links and notes that explain why I once thought the link was important. I don't have to remember the information on the page I linked to, nor why I linked to it.
As far as I can see, most of the time cognitive systems operate according to the laziness principle: Don't think when you can avoid it.
So that's why I have a blog. I get tired of thinking and remembering. Thinking hurts.
Now let's see if the links work the way I intended them to...

Same face builds trust, not lust

BBC NEWS | Science/Nature | Same face builds trust, not lust

Once I gave a talk (that I had to create almost on the spot) on some experiments showing that symmetrical faces were perceived to be more attractive. I vaguely remember that I tried to argue that the more symmetrical (and the less distinctive) these faces were, the easier it was to project the traits we consider most favorable onto them, and that this explained why these faces were evaluated as more attractive (I'll try to find my notes later and see if I can be more precise). In any case, ever since then I've kept this interest in research about the attractiveness of faces.

Fastest supercomputer gets faster

BBC NEWS | Technology | Fastest supercomputer gets faster
According to Rodney Brooks (2002, Penguin), one possible hypothesis to explain why robots and Alife have so far not become comparable to biological systems is that "we might simply be lacking enough computer power." (p.184). He doubts, though, that this is the primary reason for the slow progress [that's how he sees it, at least around those pages] of AI and Alife, because, as he says, "the amount of computer power available has continued to follow Moore's law and doubled every 18 months or two years since the beginning of each field. So it is getting hard to justify lack of computer power as the thing that is holding up progress." (p.185-186).
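
For what it's worth, the back-of-the-envelope arithmetic behind that remark is easy enough to do (a sketch under the crude assumptions that AI starts around the 1956 Dartmouth workshop and that power doubles every 18 months):

```python
def compute_growth(start_year: int, end_year: int, doubling_years: float = 1.5) -> float:
    """How many times over computing power has multiplied, assuming a
    fixed doubling period (Moore's-law style)."""
    doublings = (end_year - start_year) / doubling_years
    return 2 ** doublings

# Roughly: from 1956 to the book's year, 2002.
print(f"{compute_growth(1956, 2002):.2e}")  # about 1.7e9, i.e. a billionfold or so
```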

Elephants learn through copying

BBC NEWS | Science/Nature | Elephants learn through copying
Bloggers too....Thanks Jelle!

Me too!

Inspired by Jelle, I'll have my own blog, mainly for pages that I found along the way.