Extremely so. There is a longer presentation and a great explanation of CPPNs (a great way to indirectly encode huge numbers of neurons and genes) here
I imagine a marketplace that would facilitate the sharing of code and data between clients and workers, while also verifying the results and managing payments without requiring trust between the parties. It could be used for genetic algorithms, reinforcement learning, or similar workloads where the same code needs to run with different sets of parameters in potentially many different environments, but it could also be used for less parallelised code.
- Clients post binary code, a number of data sets, and the payment offered (per CPU cycle, IO request, and so on), and place some money in escrow.
- A worker receives the code and one or more data sets, distributed by the marketplace. When done, it posts the results together with a bill for the resources used.
- The marketplace distributes the same jobs to multiple workers and compares both the results and the reported resource use to detect cheating. If everything checks out, it pays the workers and returns the results to the client.
It’s the client’s job to make sure the code they post doesn’t give away any secrets. The binary code targets a specific version of a virtual machine (WebAssembly?) that workers must use to execute it, and the virtual machine must monitor resource use so it can report it to the marketplace.
Things get more complicated if we want to exploit hardware floating-point math, due to implementation differences. GPUs would add even more complexity, since subsequent runs of the same code may return different results (warps execute in a nondeterministic order) unless great care is taken.
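The redundancy-based verification step above can be sketched roughly as follows. This is just a toy illustration of the idea, not a real protocol; `WorkerReport`, `settle_job`, and the 5% billing tolerance are all invented for the example:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class WorkerReport:
    worker_id: str
    result: str       # digest of the computed output
    cycles_used: int  # resource bill reported by the worker's VM

def settle_job(reports, tolerance=0.05):
    """Accept the majority result; reject workers whose result or
    resource bill deviates from the consensus (an invented policy)."""
    majority_result, _ = Counter(r.result for r in reports).most_common(1)[0]
    agreeing = [r for r in reports if r.result == majority_result]
    median_cycles = sorted(r.cycles_used for r in agreeing)[len(agreeing) // 2]
    paid, rejected = [], []
    for r in reports:
        ok_result = r.result == majority_result
        ok_bill = abs(r.cycles_used - median_cycles) <= tolerance * median_cycles
        (paid if ok_result and ok_bill else rejected).append(r.worker_id)
    return majority_result, paid, rejected

# Example: two honest workers agree; a third reports a different result.
reports = [WorkerReport("a", "h1", 100),
           WorkerReport("b", "h1", 102),
           WorkerReport("c", "h2", 500)]
result, paid, rejected = settle_job(reports)
```

A real marketplace would also need to handle ties, collusion between workers, and the floating-point nondeterminism mentioned above, which this sketch ignores.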
Thanks very much for posting this presentation, David. I had a ball watching and learned a lot, but wish I could have asked a question!
It occurred to me early on that there is a symmetry between diversity in the solution agent and diversity in the environment. So, for example, generating diverse mazes would have a similar effect to generating diverse maze solvers. When Ken Stanley began, this seemed to be an important element of open-endedness in artificial life (which he used as an example), but it was subsequently discarded!
They went on, with Jeff Clune’s walker, to generate the test environments, which was good, but this was done independently of the agents themselves. They never connected changes in the environment directly to changes in the solutions, even though this coupling was present in many of their examples: giraffes growing long necks causing trees to grow taller and vice versa, or the way we now recognise that wolves can cause a valley to thrive, or die when they are absent.
So in the real world, or in a simulated a-life world, the interactions of individuals change the environment.
This is a key element of the open-ended evolution on Earth, and perhaps a general property of open-ended systems.
Another example: plants changed our atmosphere into one that could support animals (oxygen rich), and animals evolved both to take advantage of that and to create a balance, so that plants did not die from a lack of CO2.
This aspect of co-evolution wasn’t present in any of their research, and was only alluded to at the end when talking about communication between agents.
So I was curious why they have not done or published any work using simulated environments in which the agents evolve and directly affect the environment in which they are tested, rather than the apparently separate evolution/generation of agents and environments.
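A toy illustration of the kind of coupling described here, where the population's behaviour directly reshapes the environment that scores it. Everything in this sketch (bit-string agents, an environment that shifts toward minority traits) is invented purely to show the feedback loop:

```python
import random

random.seed(0)
GENOME_LEN, POP = 16, 30

def fitness(agent, environment):
    # An agent scores by matching the environment's bits.
    return sum(a == e for a, e in zip(agent, environment))

def evolve_step(agents, environment):
    # Truncation selection plus per-bit mutation for the agent population.
    agents = sorted(agents, key=lambda a: fitness(a, environment), reverse=True)
    parents = agents[:POP // 2]
    children = [[b ^ (random.random() < 0.05) for b in p] for p in parents]
    return parents + children

def environment_step(agents, environment):
    # The environment reacts to the population: each bit shifts toward the
    # *minority* trait among agents, so the population chases a moving target.
    new_env = []
    for i in range(len(environment)):
        ones = sum(a[i] for a in agents)
        new_env.append(0 if ones > len(agents) / 2 else 1)
    return new_env

agents = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
env = [random.randint(0, 1) for _ in range(GENOME_LEN)]
for _ in range(50):
    agents = evolve_step(agents, env)
    env = environment_step(agents, env)   # agents directly change the environment
```

Because the environment update is a function of the population itself, neither side ever settles, which is the property the giraffe/tree and wolf/valley examples above point at.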
I agree, I think that would be very interesting. Something I would like to do, if I had time (oh please, please, one day), is to use actors (the lib @oetyng posted is a great example of that) along with diversity-search-based CPPNs/HyperNEAT, and have groups or networks of actors work together, but in the physical world.
What I mean is: give them hints, like “read Wikipedia, you will find it here”, list the sources, “here is the mic”, and so on. Then have an actor network for each of the different senses (if you like) search around its space, but also communicate shared inputs, so one can say “this sound input is similar to this camera input” and so on. Then let these evolve and see what happens.
I would also like the robots I started building years back to be able to evolve, but again start by cheating: show them how to find an electric socket to recharge, and so on, then let them learn better ways.
In terms of a cheat-to-start mechanism, I am intrigued by teaching it (by cheating) Who, What, Why, Where, When as questions it should ask whenever possible, i.e. when there are discernible inputs via the mic (somebody speaking). Anyhow, it is truly fascinating to be able to perhaps evolve at 4 GHz.
I think POET may actually evolve ever more complex environments as the AI learns enough to conquer each one.
[edit - actually Ken Stanley is responsive on Twitter.]
Good, I’ve tweeted the presentation with my question, so fingers crossed. I do hope you get time for this one day. I’d love to see what emerges.
I’m not sure I have time to get properly into this again now, but we’ll see.
Thanks to everyone for helping me get back up to speed with your responses in this topic.
Novelty search is rather novel! I like how they suggest wrapping it into an NSGA-II MOGA. Clever. Excellent presentation. Superb.
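For anyone curious how novelty search slots into NSGA-II: the usual trick is to treat task fitness and novelty (distance to an archive of past behaviours) as two objectives and keep the non-dominated set. A minimal sketch of just that non-domination step, with made-up scores:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (maximising both objectives)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """The first non-dominated front, as in NSGA-II's sorting step."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Each individual scored as (task_fitness, novelty_wrt_archive) -- toy values.
scores = [(3.0, 0.1), (2.0, 0.9), (1.0, 0.95), (2.5, 0.5), (0.5, 0.2)]
front = pareto_front(scores)
```

Full NSGA-II would go on to compute further fronts and crowding distances; the point here is just that a solution can survive on novelty alone, which is what keeps the search from converging prematurely.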
Algorithms that evolve over time. Well, that’s an interesting concept. I’m not a high-level programmer, but here are my thoughts on possible applications.
You could have a series of applications working in tandem.

1. A personal search engine that refines its results every time you search, based on past experience. Unlike, say, Google, which uses a massive database of many users’ data and harvests data from all over the net, this one would use only your data: its queries would “procreate” and have “children” that would in turn mutate and return possible results for you to choose from. So say you searched for “cats”: do you mean “Cats the musical”, “jungle cats in Africa”, “housecats”, “lolcats”, “cats the slang expression”, or any number of other permutations of the word? Each child would return one interpretation of the word. Then, as you showed interest, it would gain fitness, and results would crossbreed with one another based on fitness. So if you liked “lolcats” and “housecats”, it would show you more common housecats doing stupid and ridiculous stuff; if you chose “lolcats” + “musical”, it would return stupid videos of cats.

2. A definitions database the engine could draw on, formed in a similar way by those attempting to define terms. Type in a word and the engine returns a definition; type in a definition and the engine attempts to return the defining word. But of course people don’t always define words the same way, so the engine has to learn and present the user with multiple options, much the same way the search engine would present multiple search results. And as languages evolve over time, so would the definitions engine.

3. A third engine could be a rating system working with the search engine but independent from it: what offends people, or is considered inappropriate? Again this varies and is very subjective, but there might be some consensus as to “Oh, you might want some warning before you click this.” So essentially it would be like the search engine, but more like the site rot.com, and the engine would be seeking to find out what offends the user and why.
So content would be given, and the user would be asked how it made them feel, along with a series of adjectives. Fitness would be gained the more offended the user got, until they left the engine. Yes, that sounds rather toxic, but it would also give info on what people considered offensive, and that’s also why it’s separate from the search engine. Keep in mind that people who actively sought out such content would develop a tolerance for it. You might want to pair it with one’s personal filters: again, what does this make you feel, and why? There is no reason to leak data in any of this, since it’s all driven by local algorithms.
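The click-driven fitness and crossbreeding idea could look roughly like this. Everything here (the interpretation records, the +1.0/-0.1 feedback values) is a made-up illustration of the mechanism, not a design:

```python
import random

random.seed(1)

# Hypothetical interpretations of the query "cats"; fitness is evolved
# from the user's own click history (nothing leaves the device).
population = [
    {"terms": ("cats", "musical"), "fitness": 0.0},
    {"terms": ("cats", "jungle"), "fitness": 0.0},
    {"terms": ("cats", "house"), "fitness": 0.0},
    {"terms": ("cats", "lol"), "fitness": 0.0},
]

def record_click(individual, clicked):
    # Clicked results raise fitness; ignored ones decay slightly.
    individual["fitness"] += 1.0 if clicked else -0.1

def crossover(a, b):
    # A child mixes the terms of two well-rated interpretations.
    terms = tuple(random.choice(pair) for pair in zip(a["terms"], b["terms"]))
    return {"terms": terms, "fitness": (a["fitness"] + b["fitness"]) / 2}

# The user liked "lolcats" and "housecats" and ignored the musical...
record_click(population[3], clicked=True)
record_click(population[2], clicked=True)
record_click(population[0], clicked=False)

# ...so the two fittest interpretations crossbreed.
best = sorted(population, key=lambda i: i["fitness"], reverse=True)[:2]
child = crossover(best[0], best[1])
```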
I had Michael stay here for a week as I was looking at that very thing while the guys were getting Fleming ready. Of course, changes in-house meant I had to drop all of this and step in again to manage Fleming. In any case, it’s incredibly powerful and SAFE will benefit from that work. When I can get back to it, brilliant; I was mega excited, so let’s see.
This is nice: it’s very simple and uses GE to optimise NN connections, but it’s a great introduction to GAs and NNs.
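In the same spirit, here is a minimal, self-contained sketch of a GA optimising the weights of a tiny neural network on XOR. The 2-2-1 topology, population size, and mutation scale are arbitrary choices for the example, not taken from the linked introduction:

```python
import math
import random

random.seed(42)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A 2-2-1 network; w packs 9 weights (2x3 hidden incl. bias, 3 output incl. bias).
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    # Sum of squared errors over the four XOR cases.
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def mutate(w):
    return [wi + random.gauss(0, 0.3) for wi in w]

pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(40)]
initial = min(error(w) for w in pop)
for _ in range(200):
    pop.sort(key=error)
    # Elitist GA: keep the best half, refill with mutated copies of survivors.
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
final = min(error(w) for w in pop)
```

Because the best individual is always retained, the population's best error can only improve over generations, which makes even this crude selection scheme surprisingly effective on small problems.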
I’ve been digging into and learning from the Uber Labs presentation Recent Advances in Population-Based Search for Deep Neural Networks: Quality Diversity, Indirect Encodings, and Open-Ended Algorithms, posted earlier by David, which is fantastic. For anyone whose interest is piqued, I recommend it; I have located the slides (pdf) and their code at Uber Research on GitHub.
Their research seems to have culminated in the Go-Explore algorithm, which has solved previously unsolved problems and performs ahead of humans and other algorithms on a platform game. At the end of the presentation they highlight the ‘three pillars’ underlying the kind of “open-ended evolution” algorithm this represents, which they explore through the techniques of Novelty Search, Quality Diversity, Open-Ended Search, and Indirect Encoding. These are each fascinating in their own right, and the presentation does a fantastic job of explaining how and why they work, and how they come together to advance the effort to create powerful general computational AI.
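Go-Explore's core loop (map states to coarse cells, archive the first trajectory reaching each cell, then return to an archived cell and explore onward from it) can be sketched on a toy one-dimensional environment. The environment, cell size, and selection rule below are all invented for illustration:

```python
import random

random.seed(3)

# A toy deterministic "environment": the state is a position on a line,
# and actions move one step left or right.
def step(state, action):
    return state + (1 if action == "right" else -1)

def cell_of(state):
    # Go-Explore maps states to coarse "cells"; here, buckets of width 2.
    return state // 2

# cell -> (state, trajectory of actions that reaches it from the start)
archive = {cell_of(0): (0, [])}

for _ in range(200):
    # 1. Select a cell from the archive (uniformly, for simplicity).
    state, trajectory = random.choice(list(archive.values()))
    # 2. "Return" to that state (trivial in a deterministic environment),
    # 3. then explore from it with a few random actions.
    for _ in range(5):
        action = random.choice(["left", "right"])
        state = step(state, action)
        trajectory = trajectory + [action]
        cell = cell_of(state)
        # 4. Archive any newly discovered cell along with how to reach it.
        if cell not in archive:
            archive[cell] = (state, trajectory)
```

The key trick is that exploration always restarts from the frontier of what has already been reached, rather than from scratch, which is what let Go-Explore crack hard-exploration games like Montezuma's Revenge.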
Does anyone know of the best forums, chats, or Twitter feeds for this area?
I was following the Uber Labs guys already but not sure where else I can keep up with their work and related stuff.
I have never found anywhere like that. Much of this is done in universities, and now also at Uber labs (surprisingly). To me, we need a place that centralises some of the thinking here. OpenAI seems to want to try to do that, but it feels more like a playground for experiments than a discussion zone.
Back in the ’80s my entry into this area was through one of the Usenet discussion groups, so I might see if there are academic channels and mailing lists.
I think I was searching for how to evolve software and stumbled on GAs, then onto David E. Goldberg’s work, and got him to send me copies of all his papers, as well as his book. I think I’ve thrown the papers out, but recently retrieved the book from storage.
Kalyanmoy Deb’s book is a nice companion to Goldberg’s. I don’t know why it’s not 5 out of 5 stars on Amazon.
Decentralised Search Using Open Ended Evolutionary Algorithm
Sketching the architecture for a decentralised search based on an open ended evolutionary algorithm. This is just a framework right now, not much under the hood, just musings:
- foragers are individuals in a large population. They take a query, return a result (or results), and sometimes store metadata to improve future searches
- the meta-algorithm selects foragers from a large population (e.g. n per page of results?), offers the results to the user, recovers feedback on the usefulness of each result (e.g. clicked/ignored), and uses this to rate the result
- a payment to the search app from the user can be used to publish new foragers and update public forager metadata
- if results are good, the algorithm may allow successful foragers to receive payment, which they can use to store metadata (with a view to improving performance)
EDIT: More thoughts worth noting…
- to evolve foragers, open-ended evolution works by evolving the environment as well as the solutions (foragers + search data), so I’m wondering about ways to do this. For example, the challenge (environment) starts simple and gets more complex in stages. One idea here is to mimic the evolution of search on the web: a) personal indexes (bookmarks); b) collective indexes (sharing and combining personal indexes); c) categorisation and tagging of sites in indexes; d) sub-categories, sub-sub-categories, ontologies, etc.; e) … better-than-Google search using local, user-specific context and collective indexes. I imagine this can be partly manual and gradually more automated as foragers and indexes evolve: they begin as things that help users create indexes and categorise entries, and gradually become able to automate more of these processes. This may be a daft idea!
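The meta-algorithm loop in the bullets above (select foragers, collect click feedback, update ratings, and pay out) might be sketched like this; the record layout, rating update factors, and function names are purely illustrative assumptions, not part of the architecture:

```python
import random

random.seed(7)

# Hypothetical forager records: a rating updated from user feedback and a
# balance of earned payments (spendable on storing metadata).
foragers = {f"forager-{i}": {"rating": 1.0, "balance": 0} for i in range(8)}

def select_foragers(n):
    # Rating-proportional selection, so better foragers answer more queries.
    ids = list(foragers)
    weights = [foragers[i]["rating"] for i in ids]
    return random.choices(ids, weights=weights, k=n)

def record_feedback(forager_id, clicked, reward=1):
    f = foragers[forager_id]
    if clicked:
        # Clicked result: raise the rating and pay the forager.
        f["rating"] *= 1.1
        f["balance"] += reward
    else:
        # Ignored result: let the rating decay.
        f["rating"] *= 0.95

# One query cycle: pick 3 foragers, suppose the user clicked the first result.
chosen = select_foragers(3)
record_feedback(chosen[0], clicked=True)
for fid in chosen[1:]:
    record_feedback(fid, clicked=False)
```

Multiplicative rating updates with proportional selection mean successful foragers are both richer and more likely to be consulted again, which is the selection pressure the evolutionary step would act on.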
@jonas mentioning you in case the above is of interest. Maybe these algorithms could help with one of your projects (see the presentation posted by David above).
[BTW DrawExpress is an awesome little Android app using gestures to create diagrams of many kinds on mobile or, as above, on tablet. Exceptionally good UX.]
@happybeing … making sense
@happybeing - I thought it might help to incorporate some (mock-up) components and functions/players into your diagram, which could be key to: i. helping flesh out an “Open Ended Evolutionary Algorithm” (OEEA) design path; ii. establishing our OpenAI-equivalent discussion group to “centralise some of the thinking”, as described by David (which I will respond to separately); and iii. creating a common vision/language and a safe data commons project office. It’s rough, and working out how to illustrate the components/players that appear to need to come together for mutual benefit has been a big learning/integration exercise, so I hope it helps and that I’m not daft as well.
In this first diagram I have connected your diagram and the r3.0 diagram under the heading “a radically democratising algorithm-driven economic model/platform”, as they appear to be different views of the same meta-algorithm outcome and are thus complementary.
I’m assuming that the meta-algorithm facilitates the “intermediation of mathematical models and algorithmic metrics, as well as the design of data flow architecture”?
In this second version of your architecture diagram I have inserted the aligned Uber AI presentation slide “Multidimensional Neural Networks”, plus a number of smaller diagrams, numbered 1-10, which flesh out existing key architecture components/requirements or add new ones which, as you say, need to be evolved, e.g. indexes to underpin a decentralised search app/ultimate path.
Summary of diagrams 1-10 highlighted in the previous slide
1. User - UI - This is a “very rough” solidonsafe/“sense” decentralised search page mock-up, built around a Google help diagram, to table some concepts/ideas and a picture of a possible outcome (decentralised search, a public data commons utility, and a user-established governance system), and to start a discussion about what we collectively want to create, what it looks like, the benefits, what sits behind it, and the easiest (project commonsense) path to it.
2. Sharing model – This diagram illustrates a forager connecting to a sharing (public data) model described in the Open Data Institute’s “The role of data in AI business models” paper as the optimum AI model, i.e. a mutual AGI prosumer versus a proprietary AI dataplume. Nigel Shadbolt makes the comment that “there ought to be some way of managing large scale data sharing efforts that are in everybody’s interest”, which in theory the safe: data commons mutual AGI dataplume communitylink model path can enable.
3. Multidimensional neural network optimisation – If I understand this correctly, this Uber AI deep neural network diagram illustrates what I call a smart product matrix, used to define the phenotype elites/interlinks to “mutate, locate, replace”, and which can be overlaid on any community or industry product need, i.e. a codemap. A key initial use for a safe:commons phenotype matrix method is for safe network Data Council neural network ecosystem definition, with subsequent reuse for members’ ecosystems, i.e. David’s actors/HyperNEAT example.
4. Meta-algorithm – How do we earn rewards in a sharing model/OEEA? This is a link to an interesting methodology, i.e. Real Time Offerings with a meta utility token for multi-currency engagement and the transition from NNR to SRR value exchange; we can add the time-based standards approach (using the CCDM resource and need optimisation method).
5. Deep learning – This “Prediction Function of One Hidden Layer” diagram/formula is from Neil Lawrence’s deep learning presentation: the dimensionality and activation of the neural network, used to define one of the (multi-dimensional?) activation functions and parameters for the sharing model/OEEA formula (using the Unison language definitions?), i.e. a radically democratising algorithm-driven economic model/platform. His Guardian article “Data trusts could allay our privacy fears” directly addresses the need/problem we are looking to address.
6. Safe/UIA WebID sign-up – This diagram illustrates a mock-up of a Holistic Code UI/UX interface to create y/our personal (via the peer in the centre), product, and project engagement definition. In the collective intelligence field they call this defining y/our learning profile within a sharing model or collective intelligence.
7. Data Council “Sensor” (solidonsafe) node – I am assuming each proposed Data Council node on the safenetwork will be an enterprise network of members in its own right, creating its own codemap/neural network. The NZ Government Health Department has implemented a fully templated Health Data Council CCDM/data governance methodology, from ward through to hospital-level data councils, which provides an organisational template to build upon.
8. Rough engagement spreadsheet and statement diagram, to capture how contributions are made and rewards earned via the meta-algorithm. To practice what we preach, this meta-algorithm starts with a Foundation co-sig/co-dev group, where the project delivers what each member is promoting via a Collective Real Time offer, capturing our generic “go explore”/OEEA method as we go.
9. Learning Academy – This diagram is a mock-up overlay of the UiPath Academy, which deploys the related training programs for “actors/roles” within the UiPath industrial AI platform/software. Our OEEA model requires learning courses in a continuous learning environment (what I call a “Living University”/ULB), which could evolve as David described via “the unisonweb.org lib for actors, to create simulated environments in which the agents evolve and directly affect the environment in which they are tested, so that groups or networks of actors can work together, for physical world application”, using a multidimensional smart product matrix archive for collective training course needs definition and OEE algorithm optimisation.
10. Smart product matrix – This is the overlay used to deliver a pre-defined smart AI product outcome and to define the new interaction/connection gamification points, i.e. the multi-dimensional archive of phenotypic elites (each with a zero reserve and an index meta token to distribute pre-agreed margins to contributors), so we can gradually automate some of the process.
IMHO, bringing the key co-dev stakeholders together around an OpenAI-equivalent Foundation discussion group and a safe commons/OEEA path will require a specific converging path/framework and a complementary opportunity/outcome to assess. This will need a prior discussion to align our Epic stories and address the questions: Which Epic stories do we share? Where do our Epics and stories intersect, and how do they interact? What new shared Epics and stories do we need to establish? What adjustments do we have to make to our existing Epic stories/project plan to frame the co-development opportunity so it easily includes others’ “activation functions & parameters” in key contribution areas?
This is all a bit rough so I’m hoping it helps and makes some “sense”.
@dirvine To support our Open Ended Algorithm ULB/neuroevolutionary path forward, I am interlinking a couple of safe forum Project commonsense threads, through the “Discussion of a Safe enabled Collaborative commons/data marketplace (genetic therapy/algorithm)” conversation path, into an integrated summary and logistical integration process flow…
These links need to be combined with a range of others from key project partners to create a curated set of links to which we collectively agree to contribute and progressively improve.
In the short term, the Collaborative commons and data marketplace (community imarket?) discussion topic and links can be used to
i. introduce the safenetwork to non technical (& technical where appropriate) external parties for which safe is an unknown at present and
ii. progress the Co phase 0 objective of bringing the converging thinkers together (Project section 1) to this evolutionary convergence point, and hopefully a dominant design, to open and secure our opportunity through Project section 2 (Commercial) and Project section 3 (Community), and
iii. introduce an initial WCCDM safe public data utility Health DApp opportunity/project business case, leveraging an existing application/path.