Discussion of Genetic / Open Ended Algorithms on SAFE

Sounds good. As it’s a specific project I suggest we do that in a new topic, but link to it from here.

This work on open-ended algorithms is very exciting and applicable no matter your choice of optimising algorithm:


One of the features of genetic algorithms is that they are both compute intensive and that it is relatively easy to distribute that computation. So the new distributed language Unison (via @oetyng) looks like an interesting possibility for GAs on SAFE. See: unisonweb.org and github


Extremely so. There is a longer presentation and great explanation of CPPNs (compositional pattern-producing networks, a great way to indirectly encode huge numbers of neurons and genes) here


I imagine a marketplace that would facilitate the sharing of code and data between clients and workers, but it would also verify the results and manage payments without requiring trust between the parties. It could be used for genetic algorithms, reinforcement learning, or other similar things where the same code needs to run with different sets of parameters in potentially many different environments, but it could also be used for less parallelized code.

  • Clients post binary code, a number of data sets, the payment offered (for CPU cycles, IO requests, and so on) and put some money in escrow.
  • Workers receive the code and one or more data sets, distributed by the marketplace. When done, they post the results together with a bill for the resource use.
  • The marketplace distributes the same jobs to multiple workers and it compares both the results and the reported resource use to avoid cheating. If everything checks out, it pays the workers and returns the results to the client.

It’s the client’s job to make sure the code they post doesn’t give away any secrets. The binary code is for a specific version of a virtual machine (WebAssembly?) that the workers need to use to execute it, and the virtual machine must monitor resource use so it can report it to the marketplace.
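The verify-and-pay step described above could be sketched roughly like this. All names here are invented for illustration; a real marketplace would compare hashed results and metered resource reports from redundant workers before releasing escrow:

```python
from collections import Counter

def settle_job(worker_reports):
    """Decide whether redundant workers agree before paying out.

    worker_reports: list of (worker_id, result_hash, cycles_billed) tuples.
    Returns (accepted_hash, paid_workers), or (None, []) if no consensus.
    A result is accepted only when a strict majority of workers report the
    same hash; workers are paid only if their bill is close to the median.
    """
    hashes = Counter(h for _, h, _ in worker_reports)
    winner, votes = hashes.most_common(1)[0]
    if votes * 2 <= len(worker_reports):
        return None, []  # no strict majority: job would be re-dispatched
    agreeing = [(w, c) for w, h, c in worker_reports if h == winner]
    cycles = sorted(c for _, c in agreeing)
    median = cycles[len(cycles) // 2]
    # pay only workers whose bill is within 10% of the median bill,
    # to discourage inflated resource reports
    paid = [w for w, c in agreeing if abs(c - median) <= 0.1 * median]
    return winner, paid

# e.g. two workers agree, one returns a different (possibly cheating) result:
print(settle_job([("a", "h1", 100), ("b", "h1", 102), ("c", "h2", 500)]))
# → ('h1', ['a', 'b'])
```

The thresholds (strict majority, 10% billing tolerance) are arbitrary knobs, not anything from the post.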

Things can get more complicated if we want to exploit hardware floating point math, because of implementation differences between platforms. GPUs would add even more complexity, since subsequent runs of the same code may return different results (warps execute in a non-deterministic order) unless much care is taken.
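The ordering sensitivity is easy to demonstrate even on a CPU, which is why byte-identical comparison of floating point results across workers is fragile:

```python
import math

xs = [0.1, 0.2, 0.3]
forward = (xs[0] + xs[1]) + xs[2]   # 0.6000000000000001
reverse = (xs[2] + xs[1]) + xs[0]   # 0.6
print(forward == reverse)           # prints False

# math.fsum computes a correctly rounded sum, independent of order
print(math.fsum(xs) == math.fsum(reversed(xs)))  # prints True
```

So a verifying marketplace would likely need either fixed evaluation order, integer/fixed-point arithmetic, or tolerance-based comparison rather than exact equality.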


Thanks very much for posting this presentation David, I had a ball watching and learned a lot but wish I could have asked a question!

It occurred to me early on that there is a symmetry between diversity in the solution agent and diversity in the environment. So, for example, generating diverse mazes would have a similar effect to generating diverse maze solvers. When Ken Stanley began, this seemed to be an important element of open-endedness in artificial life (which he used as an example), but it was subsequently discarded!

They went on, with Jeff Clune’s walker, to generate the test environments, which was good, but it was done independently of the agents themselves. They never connected changes in the environment directly to changes in the solutions, even though this coupling was present in many of their examples, such as giraffes growing long necks causing trees to grow taller and vice versa, or wolves causing a valley to thrive, or to die when they are absent.

So in the real world or a-life simulated world, the interactions of individuals change the environment.

This is a key element of the open ended evolution on earth, and perhaps a general property of open ended systems.
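The coupling described above can be caricatured as a toy loop in which the environment’s difficulty is driven by the agents’ success rather than evolved independently. Everything here (the single-number “skill” and “difficulty”, the update rules) is invented purely to illustrate the idea:

```python
import random

def coevolve(generations=200, seed=0):
    """Toy co-evolution: agents and the environment shape each other.

    An agent is a single 'skill' number; the environment is a 'difficulty'
    number. Agents that beat the current difficulty reproduce with mutation,
    and the environment hardens or softens in response to the population --
    the analogue of trees growing taller as giraffe necks lengthen.
    """
    rng = random.Random(seed)
    population = [rng.uniform(0, 1) for _ in range(20)]
    difficulty = 0.5
    for _ in range(generations):
        survivors = [a for a in population if a > difficulty] or population
        # offspring of survivors, with Gaussian mutation
        population = [max(0.0, rng.choice(survivors) + rng.gauss(0, 0.05))
                      for _ in range(20)]
        # the environment responds to the population, not independently
        success_rate = sum(a > difficulty for a in population) / len(population)
        difficulty += 0.02 * (success_rate - 0.5)
    return difficulty, max(population)
```

The point of the sketch is only the last two lines of the loop: the environment update takes the population as input, so neither side evolves in isolation.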

Another example: plants changed our atmosphere to one which could support animals (oxygen rich), and animals evolved both to take advantage of that and to create a balance which meant plants did not die from a lack of CO2.

This aspect of co-evolution wasn’t present in any of their research and only alluded to at the end when talking about communication between agents.

So I was curious as to why they have not done/published any work using simulated environments in which the agents evolve and directly affect the environment in which they are tested, rather than what appeared to be separate evolution / generation of agents and environment.


I agree, I think that would be very interesting. Something I would like to do, if I had time (oh please, please, one day), is to use actors (the lib @oetyng posted is a great example of that) along with diversity-search-based CPPNs/HyperNEAT, and have groups or networks of actors work together, but in the physical world.

What I mean is give them hints: read Wikipedia, you will find it here, listed for source, here is the mic, and so on. Then have an actor network for each of the different senses (if you like) search around its space but also communicate shared inputs, so it can say this sound input is similar to this camera input and so on. Then let these evolve and see what happens.

I would also like the robots I started building years back to be able to evolve, but again start by cheating: show them how to find an electric socket for recharging and so on, then let them learn better ways.

In terms of a cheat-to-start mechanism, I am intrigued by teaching it (by cheating) Who, What, Why, Where, When as questions it should ask whenever possible, i.e. when there are discernible inputs via the mic (somebody speaking). Anyhow, it is truly fascinating to be able to perhaps evolve at 4GHz :wink:


I think POET may actually evolve ever more complex environments as the AI learns enough to conquer each one.

[edit - actually Ken Stanley is responsive on twitter. ]


Good, I’ve tweeted the presentation with my question so fingers crossed. I do hope you get time for this one day. I’d love to see what emerges :crazy_face:

I’m not sure I have time to get properly into this again now, but we’ll see.

Thanks to everyone for helping me get back up to speed with your responses in this topic.


(Project) Commonssense :smiley:


Novelty search is rather novel! I like how they suggest wrapping it into an NSGA-II MOGA (multi-objective genetic algorithm). Clever. Excellent presentation. Superb.
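For anyone curious, the wrapping treats task fitness and novelty as two objectives to maximise together. A minimal sketch of just the first step of NSGA-II (extracting the non-dominated front) might look like this; it is illustrative only and omits the later fronts and crowding-distance tie-breaking that the full algorithm uses:

```python
def pareto_front(individuals):
    """Return the non-dominated set for two maximised objectives.

    individuals: list of (fitness, novelty) pairs. An individual survives
    if no other individual is at least as good on both objectives and
    strictly better on one.
    """
    front = []
    for a in individuals:
        dominated = any(
            b[0] >= a[0] and b[1] >= a[1] and b != a
            for b in individuals
        )
        if not dominated:
            front.append(a)
    return front

# (3,1) is fittest, (1,3) is most novel, (2,2) trades off; (1,1) is dominated
print(pareto_front([(3, 1), (1, 3), (2, 2), (1, 1)]))
# → [(3, 1), (1, 3), (2, 2)]
```

The nice property is that neither pure fitness-chasing nor pure novelty-chasing individuals are discarded; the front keeps the whole trade-off curve.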


Algorithms that evolve over time. Well that’s an interesting concept. I’m not a high level programmer but here are my thoughts on the possible applications.

You could have a series of applications working in tandem:

1. A personal search engine that refined its results every time you searched, based on past experience. Unlike, say, Google, which uses a massive database of multiple users’ data and harvests data from all over the net, this one would use only your data. It would “procreate” and have “children” that would in turn mutate and return possible results for you to choose from. Say you searched for “cats”: do you mean “cats the musical”, “jungle cats in Africa”, “housecats”, “lolcats”, “cats the slang expression”, or any number of other permutations of the word? Each child would return as an iteration of that word. Then, as you showed interest, it would gain fitness, and results would crossbreed with one another based on fitness. So if you liked “lolcats” and “housecats” it would show you more common housecats doing stupid and ridiculous stuff; if you chose lolcats + musical it would return stupid videos of cats.

2. A definitions engine the search engine could draw on, formed in a similar way by those attempting to define terms. Type in a word and the engine returns a definition; type in a definition and the engine attempts to return the defining word. But of course people don’t always define words the same way, so the engine has to learn and present the user with multiple options, much the same way the search engine presents multiple results. And as languages evolve over time, so would the definitions engine.

3. A rating engine to work with the search engine but independent from it: what offends people, or is considered inappropriate? Again this varies and is very subjective, but there might be some consensus of the “you might want some warning before you click this” kind. Essentially it would be like the search engine, but more like the site rot.com: the engine would seek to find out what offends the user, and why.

For the rating engine, content would be shown and users asked how it made them feel, along with a series of adjectives. Fitness would be gained the more offended the user got, until they left the engine. Yes, that sounds rather toxic, but it would also give info on what people consider offensive, and that’s also why it’s separate from the search engine. Keep in mind that people who actively sought out such content would develop a tolerance for it. You might want to pair it with one’s personal filters: again, what does this make you feel, and why? There’s no reason to leak data in any of this, since it’s all driven by local algorithms.
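The crossbreeding-by-fitness idea for the search engine could be sketched like this, with query interpretations as tag-sets and clicks as the fitness signal. All names, the smoothing, and the crossover rule are invented for illustration:

```python
import random

def evolve_interpretations(interpretations, clicks, rng=None):
    """One generation of the 'search children' idea.

    interpretations: list of tag-sets, e.g. {"cats", "lolcats"}.
    clicks: parallel list of click counts (the fitness signal).
    Selection is fitness-proportional; crossover unions the parents'
    tags and keeps a random subset, so liked interpretations mix.
    """
    rng = rng or random.Random(1)

    def crossover(a, b):
        pool = list(a | b)
        k = max(1, len(pool) // 2 + 1)
        return set(rng.sample(pool, min(k, len(pool))))

    # +1 smoothing so unclicked interpretations can still reproduce
    weights = [c + 1 for c in clicks]
    children = []
    for _ in range(len(interpretations)):
        p1, p2 = rng.choices(interpretations, weights=weights, k=2)
        children.append(crossover(p1, p2))
    return children

# liked "lolcats" (5 clicks) will parent more children than "musical" (1 click)
children = evolve_interpretations(
    [{"cats", "lolcats"}, {"cats", "musical"}], [5, 1])
```

Running this per-user, on local data only, matches the no-data-leak constraint in the post.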

I had Michael stay here for a week, as I was looking at that very thing while the guys were getting Fleming ready. Of course, changes in-house meant I had to drop all of this and step in again to manage Fleming. In any case, it’s incredibly powerful and SAFE will benefit from that work. When I can get back to it, brilliant; I was mega excited, so let’s see.


This is nice: very simple, using GE to optimise NN connections, and a great introduction to GAs and NNs.

I’ve been digging and learning from the Uber Labs presentation Recent Advances in Population-Based Search for Deep Neural Networks: Quality Diversity, Indirect Encodings, and Open-Ended Algorithms posted earlier by David which is fantastic. For anyone whose interest is piqued I recommend it, and have located the slides (pdf) and their code at Uber Research on github.

Their research seems to have culminated in the Go-Explore algorithm, which has solved previously unsolved problems and performs ahead of humans and other algorithms on a platform game. At the end of the presentation they highlight the pillars underlying the kind of “Open Ended Evolution” algorithm this represents, which they explore through Novelty Search, Quality Diversity, Open-Ended Search, and Indirect Encoding. These are each fascinating in their own right, and the presentation does a fantastic job of explaining how and why they work, and how they come together to advance the effort to create powerful general computational AI.


Does anyone know of the best forums, chat, Twitter feeds for this area?

I was following the Uber Labs guys already but not sure where else I can keep up with their work and related stuff.

I have never found anywhere like that. Much of this is done in universities, and now also Uber Labs (surprisingly). To me, we need a place that centralises some of the thinking here. OpenAI seems to want to try to do that, but it feels more like a playground for experiments than a discussion zone.


Back in the 80s my entry into this area was through one of the Usenet discussion groups, so I might see if there are academic channels and mailing lists.

I think I was searching for how to evolve software and stumbled on GAs, then onto David E. Goldberg’s work, and got him to send me copies of all his papers, as well as his book. I think I’ve thrown the papers out, but recently retrieved the book from storage.


Kalyanmoy Deb’s book is a nice companion to Goldberg’s. I don’t know why it’s not 5 out of 5 stars on amz.


Decentralised Search Using Open Ended Evolutionary Algorithm

Sketching the architecture for a decentralised search based on an open ended evolutionary algorithm. This is just a framework right now, not much under the hood, just musings:

  • foragers are individuals in a large population. They take a query, return a result (or results), and sometimes store metadata to improve future searches
  • the meta-algorithm selects foragers from a large population (e.g. n per page of results?), offers the results to the user, recovers feedback on the usefulness of each result (e.g. clicked / ignored), and uses this to rate the result
  • a payment to the search app from the user can be used to publish new foragers and update public forager metadata
  • if results are good the algorithm may allow successful foragers to receive payment which they can use to store metadata (with a view to improving performance)

EDIT: More thoughts worth noting…

  • to evolve foragers, open ended evolution works by evolving the environment as well as the solutions (foragers+search data), so I’m wondering about ways to do this. For example, the challenge (environment) starts simple and gets more complex in stages. One idea here is to mimic the evolution of search on the web: a) personal indexes (bookmarks), b) collective indexes (sharing and combining personal indexes), c) categorisation and tagging of sites in indexes, d) add sub-categories, sub-sub-categories … ontologies etc, e) … better than Google search using local user specific context and collective indexes. I imagine this can be partly manual and gradually more and more automated as foragers and indexes evolve from things that help users create indexes and categorise entries, and gradually become able to automate more of these processes. This may be a daft idea!
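To make the forager loop above a little more concrete, here is a minimal sketch of one search round. Every name, the rating-weighted selection, and the rating update rule are hypothetical stand-ins, not part of the design above:

```python
import random

def search_round(foragers, query, get_feedback, per_page=5, rng=None):
    """One round of the forager meta-algorithm.

    foragers: dict mapping forager id -> {"rating": float, "run": fn},
      where run(query) returns a list of result URLs.
    get_feedback: fn(results) -> set of clicked URLs (the user feedback).
    Selects a small squad of foragers (rating-weighted), merges their
    results, and re-rates each forager by how many of its results
    were clicked.
    """
    rng = rng or random.Random()
    ids = list(foragers)
    weights = [foragers[i]["rating"] + 1 for i in ids]  # smoothed
    squad = rng.choices(ids, weights=weights, k=min(per_page, len(ids)))
    proposals = {i: foragers[i]["run"](query) for i in set(squad)}
    merged = [u for results in proposals.values() for u in results]
    clicked = get_feedback(merged)
    for i, results in proposals.items():
        hits = sum(u in clicked for u in results)
        # simple exponential moving average of usefulness
        foragers[i]["rating"] = 0.9 * foragers[i]["rating"] + 0.1 * hits
    return merged

foragers = {
    "a": {"rating": 0.0, "run": lambda q: ["u1", "u2"]},
    "b": {"rating": 0.0, "run": lambda q: ["u3"]},
}
merged = search_round(foragers, "cats", lambda rs: {"u1"}, rng=random.Random(0))
```

The payment and metadata-storage steps from the bullets would hang off the rating update: foragers whose rating crosses some threshold earn the right to store metadata for future rounds.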

@jonas mentioning you in case the above is of interest. Maybe these algorithms could help with one of your projects (see the presentation posted by David above).

[BTW DrawExpress is an awesome little Android app using gestures to create diagrams of many kinds on mobile or, as above, on tablet. Exceptionally good UX.]


@happybeing … making sense :grinning: