Safe Computation, Apps & Plugins

Isn’t your ‘plugin’ just a fancy word for ‘app’?

I don’t think so. What he’s talking about is in one sense an API, but one which operates at a deeper level in order to allow integration with the network core (consensus, rewards and penalties) rather than services (storage, names, containers, data types).

Supporting that is not straightforward IMO, and not necessarily wise, but it is potentially very powerful and so worth looking into. It may be more prudent to add new core services than to make this an API to which anything can be added, I don’t know. Certainly worth investigating, but I don’t think it’s easy or high priority, because I doubt it’s easy to get right and it will take time.

However, if somebody comes up with concrete proposals for this, that would make it easier to understand and reason about.

2 Likes

Not at all. A plugin is an external tool that is called simultaneously X times by Elders, and Safenetwork does not care what it does; it only cares whether there was consensus among those X elders on the result of that call. If, let’s say, the code is executed by 8 elders and 7 deliver the same result while the 8th delivers a different one, then the 8th is fraudulent and will be punished the same way as if he had failed to deliver a proof of storage. This way, such plugin calls can be trusted; the trust level is the same as the trust level on storage. Since all those plugin calls are paid, elder vaults will want to support as many plugins as possible.

Such a plugin call can be made directly from a client. Say I want to have a bitcoin wallet on my maidsafe site: I do not need to run anything myself, I can use trusted plugins that handle the bitcoin network operations. On top of that, Safenetwork itself can use such a plugin system directly, with additional logic, for example a native Bitcoin to Safenetwork token swap. The network can trust that a bitcoin transaction happened, or that bitcoin sits in an address, thanks to those plugins. The elders will always try to deliver 100% correct information about bitcoin network requests, otherwise they would lose their elder status whenever their results differ from the majority.

So basically plugins allow pretty much any task, including checking of physical delivery. There could even be a plugin for physical delivery of packages, or a plugin that delivers a bitcoin price ticker. Do you want to create a betting system on weather forecasts? No problem, you just need a plugin that returns weather data. And the fact that those plugins can be 100% trusted because of consensus is huge. Nothing else in history has had that option. Safenetwork is the first one able to deliver such trusted data autonomously and in a decentralized way.
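To make the mechanism concrete, here is a minimal sketch of that majority check in Rust. All the names (ElderId, PluginResult, check_plugin_consensus) are hypothetical illustrations, not anything from the actual SAFE codebase:

```rust
use std::collections::HashMap;

// Hypothetical identifiers; names are illustrative, not the real SAFE API.
type ElderId = u64;
type PluginResult = Vec<u8>;

/// Given the results returned by X elders for the same plugin call,
/// return the majority result plus the elders whose answers differ
/// (candidates for punishment, like a failed proof of storage).
fn check_plugin_consensus(
    results: &HashMap<ElderId, PluginResult>,
) -> Option<(PluginResult, Vec<ElderId>)> {
    // Count how many elders delivered each distinct result.
    let mut tally: HashMap<&PluginResult, usize> = HashMap::new();
    for result in results.values() {
        *tally.entry(result).or_insert(0) += 1;
    }
    // The result returned by the most elders wins, e.g. 7 out of 8.
    let (majority, _count) = tally.into_iter().max_by_key(|(_, c)| *c)?;
    let majority = majority.clone();
    // Every elder that returned something else is flagged.
    let outliers = results
        .iter()
        .filter(|(_, r)| **r != majority)
        .map(|(id, _)| *id)
        .collect();
    Some((majority, outliers))
}
```

The key point is that the network never inspects what the plugin actually did; it only compares the X returned byte strings against each other.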

7 Likes

What you are describing is how the anticipated network computation layer might work. All the core network needs is a general computation API. Then apps can handle all of the details about what is being computed.

Exactly, but without the need for any special architecture, etc. The Safenetwork layer is very simple: it just handles consensus on whether the task was delivered or not and pays the reward. Plugins are way more powerful than any computation language. Plugins bring trusted data, trusted real-life actions, even human actions. Plugins bring trust that any imaginable task can be executed and checked for delivery, including launching a rocket to Mars and checking whether it successfully landed.

2 Likes

No, it is not that simple in practice. There are a variety of security related details that will need to be handled. It is not extremely difficult once the initial network is proven, but will still take some work to ensure no data leaks.

Computation will always involve some type of language. You are describing what has been considered the most basic form of general computation on the network. We’re talking about the same thing. A network node doesn’t need to support all kinds of different plugins, just a single computation feature or shell (ex. safe + bash = sash, the safe again shell) that can take a file and perform operations dictated by the file in a safe manner on a set of nodes without leaking data while using group consensus for error checking. Specialized apps can then use this general feature to accomplish everything you are describing (Weather, Voting, BTC transactions, etc). The core stays simple and minimalist. The possibilities for apps remain limitless.
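As a rough illustration of the kind of client-facing shapes such a shell/compute feature might expose (purely hypothetical names, nothing like this exists in any current API):

```rust
// Illustrative shapes only — no such API exists yet.
struct ShellTask {
    /// The file whose contents dictate the operations to perform
    /// (the "sash" script in the example above).
    script: Vec<u8>,
    /// How many nodes should run it independently so their outputs
    /// can be cross-checked by group consensus.
    replicas: usize,
}

struct ShellOutcome {
    /// The output agreed on by the majority of replicas.
    output: Vec<u8>,
    /// Number of replicas whose output disagreed (treated as errors).
    disagreeing: usize,
}
```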

1 Like

In which cases would you need a plugin rather than compute? With compute you would just upload a program to the SAFE Network and it could run on the section closest to the XOR address of the hash of the code, or something like that (a toy sketch of that addressing is below). Plug-ins need to be installed on the nodes where they would run, which is much more manual and less autonomous, but could perhaps support some use cases where compute wouldn’t work? What would those be, though, and why?

With compute you would have scripts/programs spread out across the network by the network so you could have millions of programs, that wouldn’t really work with plug-ins.
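A toy sketch of that content-addressing idea, using std’s non-cryptographic hasher purely for illustration (a real network would use a 256-bit cryptographic hash as the XOR name):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy example: derive an address for an uploaded program from its content,
/// so the network can route it to the section "closest" to that address.
fn program_address(program_bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    program_bytes.hash(&mut hasher);
    hasher.finish()
}

/// XOR distance between two addresses, as in Kademlia-style routing.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}
```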

1 Like

No, they are not. We are talking about different things. I am talking about external calls outside of Safenetwork that may or may not do anything with Safenetwork. Safenetwork is not checking anything, just whether all those elders returned the same result. The plugin can do something on Safenetwork, or nothing on Safenetwork at all. It is simply checking a result. No bash, no code execution, nothing like that.

Elders do not need to run any code on the same computer; they just configure their vaults with a host:port for a REST API (or any other remote API standard), and the vaults call those configured remote or local plugins via that API. No security, nothing is checked. The only thing the network cares about is whether the results were the same or not. It does not check what the plugins do, it simply does not care. Such plugins can be called directly by clients, or at a later stage, when there is a compute system as you described, the network itself can call such plugins.

What you are talking about is quite a complicated computing system, and what I am saying is: why bother with complicated stuff? We have an amazing consensus mechanism and we have a way to punish bad actors. Let’s reuse our amazing and only autonomous system to apply that consensus to plugin calls. It is quite a simple layer: just pick X random elders that claim to support that plugin_ID, call it, and wait for the results (rough sketch below). If they all delivered the same result, reward them. If one of them delivered a different result, punish him and remove his elder status.

Nothing more and nothing less.
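Something like this, as a minimal sketch of the operator-side configuration and the elder selection (all names hypothetical, and a real network would pick the elders verifiably at random):

```rust
use std::collections::HashMap;

// Illustrative only: how a vault operator might map plugin IDs to local or
// remote endpoints, and how a section might pick elders for one call.
type PluginId = String;
type ElderId = u64;

/// Per-vault configuration: plugin_id -> "host:port" of the service that
/// actually handles the call (a bitcoin node wrapper, a weather API, ...).
struct VaultPluginConfig {
    endpoints: HashMap<PluginId, String>,
}

/// Pick up to `x` elders that claim to support the plugin.
/// Here we just take the first `x` for illustration.
fn pick_elders(
    supporters: &HashMap<ElderId, VaultPluginConfig>,
    plugin: &str,
    x: usize,
) -> Vec<ElderId> {
    supporters
        .iter()
        .filter(|(_, cfg)| cfg.endpoints.contains_key(plugin))
        .map(|(id, _)| *id)
        .take(x)
        .collect()
}
```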

I haven’t understood this yet, so would welcome you setting it down in a new topic and including the essentials of the architecture, and the interactions between plugins and network (node, elders etc).

2 Likes

I disagree. I’ve understood everything you’ve said. Everything you have described is considered easier/simpler/better to implement and use via a single general computation feature/api.

You didn’t understand my analogy.

Then how do you ensure that the nodes performing the “calls” don’t copy, leak, or steal private data while they are running a computation/task?

Yes, that’s called a client-run APP that makes use of a network/distributed computation feature to meet the user’s needs.

Granted, not all computation nodes will be able to support the same types of computations. So there will be some differentiation of capabilities.

Yeah, sure. Why implement an awesome feature in weeks when we can wait another decade for a better solution.

You didn’t understand the purpose.

That is the point. There is no private data. We are talking about public data. The blockchain is public, all transactions are public. Weather is public. The whole point of those plugins is to bring external public data and services into the network. If the data were not public, then how could all the parallel plugin calls verify each other’s results?

To explain the purpose again, let’s come back to why I introduced this idea. It came out of a discussion with Dirvine on how to implement a SafeNetworkToken-to-ERC20 wrapped token. The problem is that there is no way to talk to the Ethereum network. But with plugins it is easy: plugins can mirror a whole blockchain into Safenetwork in real time. That copy can be trusted, and that copy is public data. And those plugins can broadcast ETH transactions. So basically such a plugin instantly allows you to create a Safenetwork website that can act as an ETH wallet.
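For illustration, a client-side call to such a hypothetical “eth” plugin could be as small as this (none of these names exist; they are just a sketch of the idea):

```rust
// Hypothetical request shape for such a plugin call — illustrative only.
struct PluginCall {
    /// Which plugin the elders should invoke, e.g. "eth" or "btc".
    plugin_id: String,
    /// The operation, e.g. get_balance for an address,
    /// or broadcast for a signed raw transaction.
    method: String,
    /// Arguments for the operation.
    args: Vec<String>,
}

/// Build a call asking the elders' "eth" plugins for an address balance.
fn eth_balance_call(address: &str) -> PluginCall {
    PluginCall {
        plugin_id: "eth".to_string(),
        method: "get_balance".to_string(),
        args: vec![address.to_string()],
    }
}
```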

1 Like

That’s a perfectly fine initial solution and first step towards a more general computation framework that can handle both public and private data + execution code.

Again, you are describing something that can be better handled within an APP + general computation framework.

I think some of the cross-talk confusion has to do with terminology and imprecise language (“plugin”, “feature”, “general computation”). Again, the “plugin” terminology you are using to describe this is not ideal. Imo it is better to view basic compute capabilities from a traditional client/server perspective: the entire Safe Network is the server. Safe is a world computer.

Computation nodes can offer support for various libraries/software programs which they have installed natively (rust, python, numpy, scipy, pandas, FreeCAD, BTC node etc.) and will have different hardware capabilities. It would be a misnomer to call this software ‘plugins’. A client can send data or references to data and request computations to be performed on it via any of the supported software. Required software and hardware can be listed as dependencies for the request. These requests are more easily managed by a specialized app to achieve the specialized task.
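A sketch of what such a request with declared dependencies could look like (illustrative names only, nothing from a real API):

```rust
// Illustrative only: a compute request that lists its software and
// hardware dependencies so a dispatcher can match it to capable nodes.
struct ComputeJob {
    /// Data (or network addresses of data) the computation should use.
    input_refs: Vec<String>,
    /// The program or script to run against the inputs.
    program: Vec<u8>,
    /// Software the executing node must have installed natively,
    /// e.g. ["python3", "numpy", "scipy"] or ["bitcoind"].
    required_software: Vec<String>,
    /// Minimum hardware capability, e.g. memory in MB.
    min_memory_mb: u64,
}

/// A node advertises its capabilities; a dispatching app matches jobs to nodes.
fn node_can_run(job: &ComputeJob, installed: &[String], memory_mb: u64) -> bool {
    memory_mb >= job.min_memory_mb
        && job.required_software.iter().all(|s| installed.contains(s))
}
```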

1 Like

If something is part of core consensus then I’d say by definition it is not a plug-in. Plug-in implies something optional that extends functionality.

It is optional: elders may or may not support a given plugin. A call goes only to X elders which support that plugin.

Perhaps something like your plug-in system could be a good start for compute, but it also doesn’t seem like something that should be a decade away after launch.

Essentially you want some elders to reach consensus on the result of some computation running in a VM. Ethereum has made a deterministic subset of WebAssembly for Ethereum 2.0; I guess that would be a good starting point.

With the consensus mechanism in place and an already existing VM that can be used, those are the core parts already, no?

1 Like

A preconfigured Safe VM with all supported software & libraries is a simple way to get started for compute nodes. But job dispatching from clients to elders to compute nodes is where some of the complexity lies. Something like the logic and strategies used by SLURM or Torque/OpenPBS might be possible to adapt for use on the network.

1 Like

I do not care what you call it; it is a plugin because it is a plugin from the elder’s perspective. His vault can support any module/plugin he wants, but he does not have to. Not sure why it should be called an app if I, as a client, just want to make a single call, like getting my bitcoin balance for this address.

I would not call it compute either. The plugin itself does not run in the network, and it does not need to run on the same machine where the elder sits. Me, as an elder owner, I can have the elder vault on a first computer, my bitcoin client software with the whole bitcoin blockchain on a second computer, and my plugin that handles calls between the elder machine and the bitcoin wallet machine on a third computer. That is why I call it a plugin. Some plugins can do really complicated stuff, like running a complex Java service; others can just do a simple math operation. Some require a whole blockchain sync, some just wrap calls from other API services. So elders are expected to support only a very small part of the available plugins, if any. From all this it is obvious this plugin system is a compute machine: I can run a PHP script on a PHP plugin, I can run a Java app on a Java plugin. And the network itself does not check anything; it just checks whether the results are the same and, if not, punishes the elders for claiming they support a plugin and failing to deliver the consensus result.

Doing this is pretty simple. The network just needs to store the information about which elders support which plugins, and randomly pick X of them when there is a call. At this stage any client can call any plugin and get the result; the network itself does not call any plugin on its own without client action. So at this stage, any client software can have trusted external data. Since that data is most likely public, any client can store the results from those plugin calls in Safenetwork as public data. This way, any client can create a public copy of any blockchain on Safenetwork.

And how will others know that the public data uploaded by such a client can be trusted as an exact copy of the blockchain? It is simple: we just need the network to sign the result of each plugin computation with a certificate. This way the client receives signed data that proves the result he received was computed using a network plugin. The client can upload the whole result together with that signature as public data to Safenetwork, and at that moment there is a provable copy of the blockchain in Safenetwork which is guaranteed to be a 100% copy of the original blockchain. Safenetwork itself did not upload it; Safenetwork just asked elders to execute some code and signed the result, attesting that it is the result of such a parallel plugin execution. Safenetwork does not know that it was blockchain data, but everyone in the network can now read those signed results and be 100% sure that they can trust the data as a valid copy. And this keeps things very simple and robust. People will use plugins because they need proof that the data they uploaded as public can be trusted. People on the network know that they can trust those data and don’t need to verify them externally. And now we have infinite options to work with that data.
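A minimal sketch of that signed-result flow, with the signature scheme left abstract (all names are hypothetical; a real network would use its own section keys and signature algorithm):

```rust
// Illustrative only: the signed-result flow described above, with a
// hypothetical network signature in place of a real Ed25519/BLS one.
struct SignedPluginResult {
    plugin_id: String,
    /// The result all X elders agreed on.
    result: Vec<u8>,
    /// Signature by the section over (plugin_id, result), attesting the
    /// result came from a consensus plugin execution. Placeholder bytes.
    section_signature: Vec<u8>,
}

/// A client stores the signed result as public data; any reader can later
/// check the signature against the section's public key before trusting it.
fn verify_before_trusting<'a>(
    signed: &'a SignedPluginResult,
    verify_sig: impl Fn(&[u8], &[u8]) -> bool, // (message, signature) -> valid?
) -> Option<&'a [u8]> {
    // Rebuild the signed message: plugin_id followed by the result bytes.
    let mut message = signed.plugin_id.as_bytes().to_vec();
    message.extend_from_slice(&signed.result);
    if verify_sig(&message, &signed.section_signature) {
        Some(signed.result.as_slice())
    } else {
        None
    }
}
```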

1 Like

With a consensus algorithm already in place, distributed computing should be relatively simple to implement. SETI@home and Folding@home are 20 year old projects. If they could put that together twenty years ago in short order, this shouldn’t be too complicated.

2 Likes

Until someone can clearly describe how this works, to me it is not simple.

1 Like

Depending on what is meant by “compute” and the feature set desired, it seems like having a compute node running inside a docker container with a shared set of libraries across platforms would be the simplest way. Then you use the network to distribute the work and the consensus to confirm it.

1 Like