Universal Darwinism: The Evolution of Everything?

After reading Richard Dawkins’ The Selfish Gene, John Gribbin’s In Search of the Multiverse and Philip Ball’s Critical Mass, my interest in evolution as a generic, more universal concept has been revived. Is evolution a concept much broader than Darwin ever envisaged? Can it apply to human behaviours? Natural structures? How about our entire universe?

With the current socio-political climate in the US being driven further and further toward the extreme right, where so-called “respected” politicians harp on about intelligent design and other such bullshit, I found it interesting to see that evolution may extend from explaining how our genes have changed over the millennia to understanding everything from our place in the universe to the behaviours we inherently exhibit. Darwin’s work may have uncovered a greater universal truth. As Daniel Dennett once said:

“If I were to give an award for the single best idea anyone has ever had, I’d give it to Darwin, ahead of Newton and Einstein and everyone else. In a single stroke, the idea of evolution by natural selection unifies the realm of life, meaning, and purpose with the realm of space and time, cause and effect, mechanism and physical law.”

My first proper exposure to the theory of evolution beyond basic biology class came when I was about 17 and learnt about genetic algorithms (GAs) while building the Generation5 website for the ThinkQuest competition, along with Samuel Hsiung and Edward Kao.

AH-64D Apache Longbow
The AH-64D Longbow’s radar is powered by genetically evolved algorithms.

Sam had written an article (which I later expanded upon) about using a GA to solve a Diophantine equation. I found it amazing that computer scientists had taken Darwin’s idea of “survival of the fittest” and applied it to something as abstract as solving mathematical equations. Not only that, it was bloody efficient at doing it!
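To give a flavour of the technique, here’s a minimal GA sketch in the same spirit. It isn’t the original Generation5 article’s code: the equation a + 2b + 3c + 4d = 30 is a stand-in example, and the population size, mutation rate and function names are my own made-up parameters.

```python
import random

# Stand-in Diophantine equation: find non-negative integers (a, b, c, d)
# satisfying a + 2b + 3c + 4d = 30.
def fitness(genes):
    """Absolute error from the target; 0 means we've found a solution."""
    a, b, c, d = genes
    return abs(a + 2 * b + 3 * c + 4 * d - 30)

def evolve(pop_size=100, generations=200, seed=42):
    rng = random.Random(seed)
    # Random initial population of four-gene chromosomes.
    pop = [[rng.randint(0, 30) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # fittest (lowest error) first
        if fitness(pop[0]) == 0:
            return pop[0]
        survivors = pop[:pop_size // 2]        # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            mum, dad = rng.sample(survivors, 2)
            cut = rng.randint(1, 3)            # single-point crossover
            child = mum[:cut] + dad[cut:]
            if rng.random() < 0.1:             # occasional mutation
                child[rng.randrange(4)] = rng.randint(0, 30)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

solution = evolve()
```

The GA never “understands” the equation; it just breeds candidate solutions that score well, which is exactly what made the approach feel so surprising.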

Two years later, I interviewed Steve Smith, one of the engineers behind the massive radar that sits atop the AH-64D Apache Longbow (right). The Apache’s radar can automatically identify targets from their radar signatures, and the software powering this intelligence was evolved via genetic programming.

At the time, though, the deeper meaning behind all this “cool technology” never really dawned on me. Fast-forward many years, and my fascination with genetic algorithms remained. I was stunned by evolution’s ability to solve seemingly huge problems, provided you could simply assign a fitness to any given solution. That said, this post isn’t meant as a lesson on genetic algorithms, as I’ve written plenty in the past (including this bad boy if you’re feeling adventurous). Continue reading “Universal Darwinism: The Evolution of Everything?”

Evolving Cooperative IPD Strategies

As I continue to play with the IPD, I wrote code to genetically evolve IPD strategies, to see whether cooperation could emerge from random behaviours. I used the standard GA I’ve created in Wintermute, with each agent represented by the five weights detailed in my last post, and with fitness calculated as the average points earned in each bout (subtracted from 5, since the GA searches for a minimum). I then created a population of 500 agents with a mutation and elitism rate of 0.5% per generation.
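For the curious, the scoring scheme can be sketched like this, assuming the standard Axelrod payoffs (reward 3, sucker 0, temptation 5, punishment 1); the function names are illustrative, not Wintermute’s API.

```python
# Standard payoffs for (my_move, their_move): 'C' cooperate, 'D' defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def average_points(my_moves, their_moves):
    """Average points per round earned over one bout."""
    rounds = list(zip(my_moves, their_moves))
    return sum(PAYOFF[r] for r in rounds) / len(rounds)

def ga_fitness(my_moves, their_moves):
    """Subtracting from 5 (the best possible payoff) turns the GA's
    minimisation into maximising the points earned."""
    return 5 - average_points(my_moves, their_moves)
```

A fitness of 0 would mean an agent exploited its opponent every single round, while mutual cooperation scores 5 − 3 = 2.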

It took me a while to tweak the GA into working, but I finally got there, with fascinating results. Here is a chart of the distribution of strategies, along with the best fitness at each iteration:

Continue reading “Evolving Cooperative IPD Strategies”

More Experimenting with the Spatial IPD

It seems my initial experiments went a bit awry, but that’s how you learn! I tweaked the IPD code to use encoded weightings rather than classic logic, as the first step toward creating the genetic algorithm code to evolve this behaviour.

I recreated all the strategies as weights, using a simple encoding that denotes the probability of cooperating. Strategies are encoded as {I, R, S, T, P}, where I = initial, R = reward (c,c), S = sucker (c,d), T = temptation (d,c) and P = punishment (d,d). As an example, the all-cooperate and TFT strategies would be:

AllCooperate = 1.0, 1.0, 1.0, 1.0, 1.0
TFT = 1.0, 1.0, 0.0, 1.0, 0.0

So All-C cooperates from the outset and continues to. TFT cooperates initially, then cooperates after reward or temptation and defects after sucker or punishment. Continue reading “More Experimenting with the Spatial IPD”
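As a sketch of how such a weight vector might drive play (the helper names are my own, not Wintermute’s): each weight is read as the probability of cooperating, with the slot chosen by the previous round’s outcome.

```python
import random

# Indices into the weight vector {I, R, S, T, P}.
I, R, S, T, P = range(5)

def outcome_index(my_last, their_last):
    """Map the previous round (my_move, their_move) to the R/S/T/P slot."""
    return {('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T, ('D', 'D'): P}[(my_last, their_last)]

def next_move(weights, my_last=None, their_last=None, rng=random):
    """Cooperate with the probability encoded for the current situation;
    the I slot is used on the very first move."""
    idx = I if my_last is None else outcome_index(my_last, their_last)
    return 'C' if rng.random() < weights[idx] else 'D'

TFT = (1.0, 1.0, 0.0, 1.0, 0.0)
```

With weights of exactly 0.0 or 1.0 the behaviour is deterministic, so TFT falls out of the encoding as a special case; intermediate values give the GA a smooth space of mixed strategies to search.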

Understanding Cooperation with the Spatial Iterated Prisoner’s Dilemma

Not exactly a snappy way to start my new blog, but…

I just finished reading Critical Mass by Philip Ball, and one of the topics discussed was the iterated prisoner’s dilemma. I had implemented a Spatial IPD as part of the Generation5 JDK and, more recently, as part of its C# port, ‘Wintermute’.

Prior to reading Critical Mass, though, I saw the SIPD as an interesting cellular-automaton implementation of a famous game-theory exercise, but I had never fully experimented with the model’s insights. In his book, Ball details how the IPD can be used to show that cooperation evolves because it is more beneficial than selfish behaviour. Even more fascinating is the IPD’s ability to show how increasingly cooperative strategies keep winning out until the world becomes completely cooperative; at that point, a very selfish strategy can quickly take advantage of the world’s “naivety”.
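The tension Ball describes falls straight out of the standard payoff values (temptation 5, reward 3, punishment 1, sucker 0): defecting pays more in any single round, yet mutual cooperation out-earns mutual defection over time. A quick sketch of the arithmetic:

```python
# Standard IPD payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0

# Per-round score for (my_move, their_move).
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def score(mine, theirs):
    """Total score for one player over a sequence of rounds."""
    return sum(PAYOFF[m] for m in zip(mine, theirs))

# Two cooperators each earn 3 points per round over five rounds...
mutual_c = score('CCCCC', 'CCCCC')   # 15
# ...but a lone defector in a naive, all-cooperating world earns 5 per round:
invader = score('DDDDD', 'CCCCC')    # 25
```

That single-round advantage is exactly why a fully cooperative world is unstable, while the iterated, spatial version lets clusters of cooperators out-earn the defectors around them.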

I decided to expand my code to see if I could replicate this.

Continue reading “Understanding Cooperation with the Spatial Iterated Prisoner’s Dilemma”