Wednesday, September 12, 2007

Food for thought

For those of us who survived the dramatic thunderstorm last night, the second day of the conference is in full swing...

Here I just want to put up for general discussion some of the recurring themes that have been debated since the start of the workshops as well as yesterday's sessions on embodiment and cognition.

- What is the relationship between engineering and science in the field of artificial life? Should we distinguish between them? If yes (and I believe this strongly to be the case), then how do we best distinguish between them?

- What is the relationship between information theory and dynamical systems theory? At Sussex we've already been debating this question a little bit on the Life and Mind blog. My personal opinion is that both approaches provide equally valid perspectives that are bound to generate interesting insights. However, the requirement of information theory to deal with input/output systems severely limits its applicability to systems with an autonomous organization.

- Finally, there is an ongoing debate about the relationship between physical robots and simulated agents. What are the advantages of using one over the other or both? From the scientific perspective I'm hard pressed to give examples of questions related to artificial life that require a physical implementation in order to be answered.

Ok, that's all for now! Enjoy the rest of the conference!
Tom

1 comment:

Nathaniel Virgo said...

Speaking of the panel discussion on information theory, here is the paper I mentioned in my comment, in which Edwin Jaynes analyses data from rolling a die and uses information theory to deduce certain physical facts about how the die was made:

Jaynes, E. T. (1979), 'Where do we Stand on Maximum Entropy?', in The Maximum Entropy Formalism, R. D. Levine and M. Tribus (eds.), MIT Press, Cambridge, MA, p. 15.

It was relevant to the discussion because it was being claimed that information theory will never tell you about the 'whys' of what's happening in an agent, only the 'whats.' Although it's an analysis of a die rather than a dynamical agent, I think this demonstrates rather nicely a way in which information theory can give you the whys.
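For readers who haven't seen the die example: Jaynes asks which distribution over the six faces to infer when the observed long-run average differs from the fair value of 3.5, and the answer is the distribution that maximises entropy subject to that mean. Here is a minimal sketch of that calculation (my own illustration, not code from the paper) — the function name and the bisection solver are my choices:

```python
import math

def maxent_die(mean, faces=6, tol=1e-12):
    """Maximum-entropy distribution over faces 1..faces with a fixed mean.

    The solution has the exponential form p_i proportional to exp(lam * i);
    we find the Lagrange multiplier lam by bisection on the implied mean.
    """
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z

    lo, hi = -50.0, 50.0  # mean_for is increasing in lam, so bisection works
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

# A die whose long-run average came out at 4.5 rather than 3.5:
p = maxent_die(4.5)
```

For a mean of 3.5 this recovers the uniform distribution, and for a biased mean it tilts probability toward the heavier faces — which is what licenses physical inferences about how the die was made.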

Unfortunately the paper is rather long - it's practically a book - but it's well worth a read by anyone with an interest in information theory.

Also I'd like to address the comment in your blog post that information theory requires that the agent be an "input-output system." In my view this is simply an artifact of the way that information theory has been applied in ALife studies so far. There is no such requirement in the mathematical formalism of the theory. An analysis more along the lines of the one in Jaynes' paper would not require this, for instance. I think we should see information theory simply as another tool for analysing dynamical systems - one that can complement phase portraits etc. rather than being somehow in opposition to them.

I'd just like to make one final claim: If you, as a dynamicist, wanted to say that a particular agent was not, in fact, gathering all the information about its environment in some kind of optimal manner, the only way you could prove it would be to use information theory (e.g. to show that the mutual information between the agent's internal state and its environment was not maximised after all). So, far from being opposed to such dynamicist claims, information theory is *the* tool which can be used to support them.
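To make the quantity in that claim concrete, here is a toy sketch (my own, not from the discussion) of estimating the mutual information between two discrete variables from paired samples - think of x as an agent's internal state and y as an environmental variable:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) sample pairs,
    using the empirical joint and marginal distributions."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts for x
    py = Counter(y for _, y in pairs)    # marginal counts for y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# Perfectly coupled state and environment: one bit of shared information
coupled = [(0, 0), (1, 1)] * 50
# Independent state and environment: zero mutual information
indep = [(x, y) for x in (0, 1) for y in (0, 1)] * 25
```

An agent whose internal state tracked its environment optimally would drive this quantity toward its maximum; showing that it falls short is exactly the kind of dynamicist claim the comment says only information theory can support.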