Extensible network engine (vault)

(I consider the vault to be an engine.)

I’ve been lurking around the perimeters of this question for a while.
A couple of times I’ve begun writing a topic about it. Today, as I was practicing voodoo (reviving old mummified topics), I came across a topic I’ve visited before, but this time another post caught my attention, as it resonated with my current ideas:

A growing idea
During my time on the forum, researching SAFENetwork and what applications to develop and how, an idea has been forming.
I’ve spent some days each on various topics (decentralized exchange, a smart contract framework with a virtual machine, decentralized search, to mention the more complex ones), drafting, thinking and even coding simulations, and I have always landed on the conclusion that it would be perfect to reuse the network infrastructure and logic of SAFENetwork for these things. Reimplementing that logic over and over for every new application type that needs to form a network, which is what placing it at the app level would require, just doesn’t seem right. It would be reinventing the wheel (probably a worse wheel in many cases) and stacking complexity upon complexity, with no end in sight, for every new decentralized application.

Really, one of the absolute finest parts of SAFENetwork is the logic of forming a secure network. The work done by the nodes of the network is a completely decoupled responsibility (currently, storing and serving data). Anything could be done by this network: a token exchange, search (crawling, indexing, ranking, serving query responses), [name any decentralized application]…

Extensibility
I would like to see the network being able to handle additional personas in a very decoupled way. An upgrade could add a new Rust library containing the additional persona. Engine (“vault”) managers, i.e. users, could then decide which personas to run, and even how much resource each persona type would be allowed to consume.
Users around the world wouldn’t even have to upgrade unless they wanted to take part in the new personas, as the upgrade would have no effect on the other functionality.
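To make the idea concrete, here is a minimal sketch of what a pluggable persona interface could look like. All names here (`Persona`, `Message`, `ResourceBudget`, `Engine`) are invented for illustration and are not part of the actual vault codebase:

```rust
/// A network message routed to a persona (assumed shape, for illustration).
struct Message {
    kind: String,
    payload: Vec<u8>,
}

/// Per-persona resource limits a vault operator could configure.
struct ResourceBudget {
    max_memory_bytes: u64,
    max_cpu_millis_per_msg: u64,
}

/// Anything pluggable into the engine would implement this trait.
trait Persona {
    /// Unique name, e.g. "data-storage", "search", "exchange".
    fn name(&self) -> &str;
    /// Handle one routed message, optionally producing a reply.
    fn handle(&mut self, msg: Message) -> Option<Message>;
}

/// The engine ("vault") holds whatever personas the operator enabled.
struct Engine {
    personas: Vec<(Box<dyn Persona>, ResourceBudget)>,
}

impl Engine {
    /// Enable a persona with an operator-chosen resource budget.
    fn register(&mut self, p: Box<dyn Persona>, budget: ResourceBudget) {
        self.personas.push((p, budget));
    }

    /// Route a message to the named persona, if it is enabled.
    fn dispatch(&mut self, persona_name: &str, msg: Message) -> Option<Message> {
        let entry = self
            .personas
            .iter_mut()
            .find(|(p, _)| p.name() == persona_name)?;
        entry.0.handle(msg)
    }
}

/// Toy persona standing in for real logic: acknowledges and echoes data.
struct EchoStore;
impl Persona for EchoStore {
    fn name(&self) -> &str {
        "echo-store"
    }
    fn handle(&mut self, msg: Message) -> Option<Message> {
        Some(Message { kind: "ack".into(), payload: msg.payload })
    }
}
```

The point of the trait-object design is exactly the decoupling described above: the core only knows the `Persona` interface, so a new library can ship a new implementation without touching existing functionality.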

Reward mechanism
An additional thing, spinning further on what @Seneca mentioned: the reward mechanism could be abstracted to a general standard that just specifies points. A standard implementation could be used, or each persona could choose to implement its own reward algorithm, as long as it adhered to the interface and produced the expected output (points) to be handled by the network’s rewarding processes.
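A sketch of that abstraction, under the same caveat: `RewardAlgorithm` and `WorkReport` are made-up names, and the point formulas are arbitrary placeholders, not anything specified by the network:

```rust
/// Summary of work a node did under some persona (assumed fields).
struct WorkReport {
    bytes_served: u64,
    requests_handled: u64,
}

/// The one thing the network's rewarding process needs from a persona: points.
trait RewardAlgorithm {
    fn points(&self, report: &WorkReport) -> u64;
}

/// A standard default implementation personas could reuse.
struct DefaultReward;
impl RewardAlgorithm for DefaultReward {
    fn points(&self, r: &WorkReport) -> u64 {
        // One point per request plus one point per KiB served.
        r.requests_handled + r.bytes_served / 1024
    }
}

/// A persona-specific algorithm that weights requests more heavily,
/// e.g. for a search persona where serving a query is the costly part.
struct SearchReward;
impl RewardAlgorithm for SearchReward {
    fn points(&self, r: &WorkReport) -> u64 {
        r.requests_handled * 5
    }
}
```

The network’s rewarding process would only ever see the `u64` points, so swapping algorithms stays invisible to the core.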

Now, this moves implementation to Rust, and some would say it isn’t the responsibility of the network. But I would say that this is exactly what the network should do: being The Network, letting additional functionality be plugged in, with full backwards compatibility.

All FFI bindings would of course still be there, and anyone would be free to build their overlay networks as before.

The addition of personas could be done by completely independent developers doing “forks”, to be accepted by whoever liked the result, or by MaidSafe, but still running on the same network and using the untouched core.

It would require that the core was designed in such a way as to make that possible, though.

How do you see the outlook for this?

19 Likes

This feels like one of the natural progressions for the network. Maidsafe have already mentioned intentions for adding compute to vaults in the future and it must surely be general purpose in scope.

I would hope that a standard engine which could process script (say, JavaScript or some such for accessibility) would be a starting point. Users could then request nodes run code on their behalf (read: as their persona), in exchange for payment.

Moreover, just having consensus on the result from other nodes could be sufficient to guarantee good results. So, if all the nodes in a group execute the same code, a majority would dictate the answer returned.
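The majority rule described above can be sketched in a few lines. This is purely illustrative; results are modelled as raw bytes and the function only returns an answer when a strict majority of the group agrees:

```rust
use std::collections::HashMap;

/// Each node in the group executes the same code and reports a result.
/// Return the result backed by a strict majority of the group, if any.
fn majority_result(results: &[Vec<u8>]) -> Option<Vec<u8>> {
    let mut counts: HashMap<&[u8], usize> = HashMap::new();
    for r in results {
        *counts.entry(r.as_slice()).or_insert(0) += 1;
    }
    // A strict majority (count * 2 > group size) is unique if it exists.
    counts
        .into_iter()
        .find(|(_, n)| *n * 2 > results.len())
        .map(|(r, _)| r.to_vec())
}
```

With a group of three where one node is faulty, the two honest results win; with an even split, no answer is returned, which is the safe failure mode.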

Taking things further, these nodes could create output which feeds other nodes with input. Thus, complex applications could be created from many nodes running simple applications. In fact, generic applications for performing common operations could become popular, becoming tools in a toolkit: sort, encryption, translation, encoding, decoding, etc. All sorts of mutations.
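The toolkit idea above can be sketched by modelling each simple node application as a byte transform and chaining them, so one node’s output feeds the next node’s input. The transforms here (`sort_app`, a toy XOR `encode_app`) are hypothetical stand-ins:

```rust
/// A node "application": a pure transform from input bytes to output bytes.
type Transform = fn(Vec<u8>) -> Vec<u8>;

/// A node running a sort application.
fn sort_app(mut data: Vec<u8>) -> Vec<u8> {
    data.sort();
    data
}

/// A node running a trivial encoding application (XOR with a constant;
/// applying it twice decodes again).
fn encode_app(data: Vec<u8>) -> Vec<u8> {
    data.into_iter().map(|b| b ^ 0x2a).collect()
}

/// Feed the output of each node into the next, forming a pipeline.
fn run_pipeline(nodes: &[Transform], input: Vec<u8>) -> Vec<u8> {
    nodes.iter().fold(input, |data, node| node(data))
}
```

Because every tool has the same input/output shape, composing them into larger applications is just list concatenation, which is the whole appeal of the toolkit model.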

Further still, these nodes could be neural nets, transforming data in predictable but non-programmed ways. Combining these could result in a hive mind, never before seen.

So, to get back on point, I think it would be great for vaults to be extensible, assuming it can be done safely. However, having script running in a generic sandbox may be powerful enough but far more accessible. It will be great when maidsafe have scope to start looking at this stuff, as it will be truly amazing!

13 Likes

I’m just beginning on these thoughts, so a few things still seem awkward.
It might currently be quite a large change to try to make the personas extensible. Also, going this route shifts the focus of the network from data storage to keeping a secure network that can do anything, and I think a lot of people have mentally been going down the data-storage lane for a long time.

But from the development side, this would be a very drastic change. It would make data storage just one piece of functionality, while today it is the core functionality.
This fact alone is what makes the whole idea somewhat uncomfortable to embrace.

But there is something there, about being able to reuse the network-forming logic, that I think is very promising in the long term.

Edit: I see this from an architectural point of view, and I think that I would have liked to see the network being designed like so:

  1. The absolute foundation is forming a secure network.
  2. Ability for the nodes to plug in any number of personas that make use of the consensus and security logic of the core.
  3. Data storage, search, [name any decentralised functionality]
9 Likes

In earlier discussions about decentralised compute, I recall that David was suggesting this is the way to go, i.e. to reuse the consensus logic to ensure the requested computation is carried out correctly, and to reward vaults providing the result in a similar way to those serving data.

I thought David’s comment was in the following topic, but it isn’t, so I’m not sure where that was. I think your line of thinking is perfectly OK, though!

10 Likes

Link to blog posts by iExec describing their proof of contribution protocol:
https://iex.ec/news_categories/poco-series/

1 Like