Safe Computation, Apps & Plugins

Isn’t your ‘plugin’ just a fancy word for ‘app’?

I don’t think so. What he’s talking about is in one sense an API, but one that operates at a deeper level, allowing integration with the network core (consensus, rewards and penalties) rather than with services (storage, names, containers, data types).

Supporting that is not straightforward IMO, and not necessarily wise, but it is potentially very powerful and so worth looking into. It may be more prudent to add new core services than to make this an API to which anything can be added; I don’t know. I don’t think it’s easy or a high priority, because it will be hard to get right and will take time.

However, if somebody comes up with concrete proposals for this, that would make it easier to understand and reason about.

5 Likes

What you are describing is how the anticipated network computation layer might work. All the core network needs is a general computation API. Then apps can handle all of the details about what is being computed.

No, it is not that simple in practice. There are a variety of security-related details that will need to be handled. It is not extremely difficult once the initial network is proven, but it will still take some work to ensure that no data leaks.

Computation will always involve some type of language. You are describing what has been considered the most basic form of general computation on the network, so we’re talking about the same thing. A network node doesn’t need to support all kinds of different plugins, just a single computation feature or shell (e.g. safe + bash = sash, the Safe-again shell) that can take a file and perform the operations dictated by that file, in a safe manner, on a set of nodes, without leaking data, while using group consensus for error checking. Specialized apps can then use this general feature to accomplish everything you are describing (weather, voting, BTC transactions, etc.). The core stays simple and minimalist; the possibilities for apps remain limitless.
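
For concreteness, here is a minimal Rust sketch of that “run on a set of nodes, check by group consensus” idea. Every name in it (ComputeTask, run_on_node, the majority rule) is a hypothetical illustration, not an existing Safe Network API:

```rust
use std::collections::HashMap;

/// A task submitted through the hypothetical "sash" shell: some code plus input.
struct ComputeTask {
    program: Vec<u8>,
    input: Vec<u8>,
}

/// Stand-in for a node executing the task; a real node would run this in a
/// sandbox so the task cannot leak data. Here we just sum the bytes.
fn run_on_node(node_id: u8, task: &ComputeTask) -> Vec<u8> {
    // A faulty or malicious node could return garbage; simulate one here.
    if node_id == 3 {
        return vec![0xde, 0xad];
    }
    let mut acc: u8 = 0;
    for b in task.program.iter().chain(task.input.iter()) {
        acc = acc.wrapping_add(*b);
    }
    vec![acc]
}

/// Group consensus for error checking: accept the result only if a majority
/// of the assigned nodes agree on it.
fn consensus_result(task: &ComputeTask, nodes: &[u8]) -> Option<Vec<u8>> {
    let mut votes: HashMap<Vec<u8>, usize> = HashMap::new();
    for &n in nodes {
        *votes.entry(run_on_node(n, task)).or_insert(0) += 1;
    }
    votes
        .into_iter()
        .find(|(_, count)| *count > nodes.len() / 2)
        .map(|(result, _)| result)
}

fn main() {
    let task = ComputeTask {
        program: b"add-all-bytes".to_vec(),
        input: vec![1, 2, 3],
    };
    // Five nodes run the task; node 3 misbehaves but is outvoted.
    match consensus_result(&task, &[1, 2, 3, 4, 5]) {
        Some(r) => println!("agreed result: {:?}", r),
        None => println!("no majority - task rejected"),
    }
}
```

The point is only that error checking falls out of redundancy: the same task runs on several nodes and a result is accepted only when a majority agree.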

2 Likes

In which cases would you need a plugin rather than compute? With compute you would just upload a program to the SAFE Network and it could run on a section close to the XOR address of the hash of the code, or something like that. Plug-ins need to be installed on the nodes where they would run, which is much more manual and less autonomous, but they could perhaps support some use cases where compute wouldn’t work. What would those be, though, and why?

With compute, the network itself would spread scripts/programs out across the network, so you could have millions of programs; that wouldn’t really work with plug-ins.
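
As a rough illustration of that addressing idea, here is a small Rust sketch that hashes a program’s bytes and picks the section closest in XOR distance. DefaultHasher stands in for whatever cryptographic hash the network would actually use, and the section ids are made up:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a 64-bit "XOR address" for a program from its bytes.
fn xor_address(program: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    program.hash(&mut h);
    h.finish()
}

/// Pick the section whose id is closest to the address in XOR distance.
fn closest_section(address: u64, sections: &[u64]) -> u64 {
    *sections
        .iter()
        .min_by_key(|&&s| s ^ address)
        .expect("at least one section")
}

fn main() {
    let program = b"fn main() { println!(\"hello\"); }";
    let addr = xor_address(program);
    // Toy section ids; a real network has a dynamic set of section prefixes.
    let sections: [u64; 3] = [
        0x1111_0000_0000_0000,
        0x8f00_0000_0000_0000,
        0xffee_0000_0000_0000,
    ];
    println!("program address: {:#018x}", addr);
    println!("hosted by section: {:#018x}", closest_section(addr, &sections));
}
```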

1 Like

I haven’t understood this yet, so I would welcome you setting it down in a new topic, including the essentials of the architecture and the interactions between plugins and the network (nodes, elders, etc.).

2 Likes

I disagree. I’ve understood everything you’ve said. Everything you have described is considered easier/simpler/better to implement and use via a single general computation feature/API.

You didn’t understand my analogy.

Then how do you ensure that the nodes performing the “calls” don’t copy, leak, or steal private data while they are running a computation/task?

Yes, that’s called a client-run APP that makes use of a network/distributed computation feature to meet the user’s needs.

Granted, not all computation nodes will be able to support the same types of computations. So there will be some differentiation of capabilities.

That’s a perfectly fine initial solution and first step towards a more general computation framework that can handle both public and private data + execution code.

Again, you are describing something that can be better handled within an APP + general computation framework.

I think some of the cross-talk confusion has to do with terminology and imprecise language such as “plugin”, “feature”, and “general computation”. Again, the “plugin” terminology you are using to describe this is not ideal. IMO it is better to view basic compute capabilities from a traditional client/server perspective: the entire Safe network is the server. Safe is a world computer.

Computation nodes can offer support for various libraries/software programs which they have installed natively (Rust, Python, NumPy, SciPy, pandas, FreeCAD, a BTC node, etc.), and they will have different hardware capabilities. It would be a misnomer to call this software ‘plugins’. A client can send data, or references to data, and request computations to be performed on it via any of the supported software. The required software and hardware can be listed as dependencies for the request. These requests are most easily managed by a specialized app built for the specialized task.
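
A sketch of what such a request might look like, in Rust. The types and field names are invented for illustration; the point is only that a request declares its software/hardware dependencies and that eligibility is a simple capability match:

```rust
#[derive(Debug)]
struct ComputeRequest {
    data_ref: String,           // reference to data already on the network
    command: String,            // what to run against that data
    software_deps: Vec<String>, // e.g. "python3", "numpy"
    min_ram_gb: u32,            // hardware requirement
}

struct NodeCapabilities {
    node_id: u64,
    installed: Vec<String>,
    ram_gb: u32,
}

/// A node is eligible if it has every required package and enough RAM.
fn eligible(node: &NodeCapabilities, req: &ComputeRequest) -> bool {
    node.ram_gb >= req.min_ram_gb
        && req
            .software_deps
            .iter()
            .all(|dep| node.installed.iter().any(|p| p == dep))
}

fn main() {
    let req = ComputeRequest {
        data_ref: "safe://example-dataset".into(),
        command: "python3 analyse.py".into(),
        software_deps: vec!["python3".into(), "numpy".into()],
        min_ram_gb: 8,
    };
    println!("dispatching request: {:?}", req);
    let nodes = [
        NodeCapabilities { node_id: 1, installed: vec!["python3".into()], ram_gb: 16 },
        NodeCapabilities { node_id: 2, installed: vec!["python3".into(), "numpy".into()], ram_gb: 16 },
    ];
    for n in &nodes {
        println!("node {} eligible: {}", n.node_id, eligible(n, &req));
    }
}
```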

2 Likes

If something is part of core consensus then I’d say by definition it is not a plug-in. Plug-in implies something optional that extends functionality.

Perhaps something like your plug-in system could be a good start for compute, but it also doesn’t seem like something that should be a decade away after launch.

Essentially you want some elders to have consensus on the result of some computation running in a VM. Ethereum has made a deterministic subset of WebAssembly for Ethereum 2.0; I guess that would be a good starting point.

With the consensus mechanism in place and an already existing VM that can be used, those are the core parts already, no?
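
As a sketch of the missing glue, assuming deterministic execution so every honest elder computes the identical output: each elder reports the hash of the result it observed, and the result stands only when a quorum agree. The two-thirds threshold and the names below are assumptions, not a real protocol:

```rust
use std::collections::HashMap;

type ResultHash = u64;

/// Tally each elder's reported result hash; accept a hash only when a
/// strict two-thirds quorum of all elders reported it.
fn quorum_reached(reports: &[(u64, ResultHash)], total_elders: usize) -> Option<ResultHash> {
    let quorum = total_elders * 2 / 3 + 1;
    let mut tally: HashMap<ResultHash, usize> = HashMap::new();
    for &(_elder_id, hash) in reports {
        *tally.entry(hash).or_insert(0) += 1;
    }
    tally
        .into_iter()
        .find(|&(_, votes)| votes >= quorum)
        .map(|(hash, _)| hash)
}

fn main() {
    // Seven elders; six agree on 0xabc, one reports a divergent hash.
    let reports = vec![
        (1, 0xabc), (2, 0xabc), (3, 0xabc), (4, 0xabc),
        (5, 0xabc), (6, 0xabc), (7, 0xdef),
    ];
    match quorum_reached(&reports, 7) {
        Some(h) => println!("accepted result hash: {:#x}", h),
        None => println!("no quorum - rerun or reject"),
    }
}
```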

1 Like

A preconfigured Safe VM with all supported software and libraries is a simple way to get started for compute nodes. But job dispatching from clients to elders to compute nodes is where some of the complexity lies. Something like the logic and scheduling strategies used by SLURM or Torque/OpenPBS might be adaptable for use on the network.
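
To make the dispatch step concrete, here is a toy Rust model of the simplest SLURM-like strategy: first-come-first-served over a job queue, with jobs that don’t fit waiting for the next pass. None of this is a proposed protocol; it only shows where the scheduling logic would sit between elders and compute nodes:

```rust
use std::collections::VecDeque;

struct Job {
    id: u32,
    cores_needed: u32,
}

struct ComputeNode {
    id: u32,
    free_cores: u32,
}

/// Elders hold the queue and hand jobs to nodes; unplaceable jobs wait.
fn dispatch(queue: &mut VecDeque<Job>, nodes: &mut [ComputeNode]) {
    let mut waiting = VecDeque::new();
    while let Some(job) = queue.pop_front() {
        match nodes.iter_mut().find(|n| n.free_cores >= job.cores_needed) {
            Some(node) => {
                node.free_cores -= job.cores_needed;
                println!("job {} -> node {}", job.id, node.id);
            }
            None => waiting.push_back(job), // stays queued for the next pass
        }
    }
    *queue = waiting;
}

fn main() {
    let mut queue: VecDeque<Job> = VecDeque::from(vec![
        Job { id: 1, cores_needed: 4 },
        Job { id: 2, cores_needed: 16 }, // too big for any node below
        Job { id: 3, cores_needed: 2 },
    ]);
    let mut nodes = [
        ComputeNode { id: 10, free_cores: 8 },
        ComputeNode { id: 11, free_cores: 4 },
    ];
    dispatch(&mut queue, &mut nodes);
    println!("jobs still waiting: {}", queue.len());
}
```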

1 Like

With a consensus algorithm already in place, distributed computing should be relatively simple to implement. SETI@home and Folding@home are 20-year-old projects. If they could put that together twenty years ago in short order, this shouldn’t be too complicated.

2 Likes

Until someone can clearly describe how this works, to me it is not simple.

2 Likes

Depending on what is meant by “compute” and the feature set desired, it seems like having a compute node running inside a Docker container with a shared set of libraries across platforms would be the simplest way. Then you use the network to distribute the work and the consensus mechanism to confirm it.

1 Like

Bingo.

A true VM running in encrypted RAM via something like AMD SEV is another possibility.

‘Optional’ is the right word for a plugin, but it is contradicted by the penalties for nodes that don’t send the right results, which you initially described here:


And optionality is unsafe, in my view:

That means consensus has to be reached in a subset of elders, which seems insecure and difficult to control. Sections in this sub-network will be much larger (XOR-space-wise) than in the underlying Safe network and will group together elders that normally don’t communicate with each other, because they don’t belong to the same Safe section.

We can imagine layer-2 Safe networks, but to provide safety they must be integrated into the core code, and that is complex.

1 Like

How are you going to request that work be done if you don’t have a baseline from which to work? Say HappyBeing gets his decentralized git in place, and someone wants to set up a decentralized DevOps system. They could write a script to automatically kick off a request for compute resources to compile their project and deploy the app to the network whenever a new commit is detected.

If there is no consensus/baseline on libraries, expectations, etc., then it becomes a free-for-all. Supplying a Docker image, or some form of virtual machine, means that no matter the platform, you can expect consistent results and can test on your own platform.
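
As a toy sketch of that DevOps flow (in Rust, since that’s the network’s language): watch for a new commit and submit a build job pinned to a known environment image. Both latest_commit and submit_build_job are hypothetical stand-ins, not real APIs:

```rust
/// Stand-in for "the latest commit on the decentralized git repo".
fn latest_commit() -> String {
    "a1b2c3d".to_string()
}

/// Stand-in for submitting a job to the compute layer; a real client would
/// attach the Docker image / VM baseline discussed above as the environment.
fn submit_build_job(commit: &str, image: &str) {
    println!("requesting build of commit {commit} inside image {image}");
}

fn main() {
    let mut last_seen = String::new();
    // One polling iteration; a real watcher would loop on a timer or hook.
    let head = latest_commit();
    if head != last_seen {
        submit_build_job(&head, "safe-build-env:1.0");
        last_seen = head;
    }
    println!("last built commit: {last_seen}");
}
```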

I’m sorry, but the way you present this is not clear, which means I would have to put in the time to organise it and make it clear myself. I’d like to, because I’m interested, but I don’t have that time; and the fact that you haven’t done this also suggests it may well be fruitless. I’m an interested observer with too much else to do!

Plugin for what, exactly? Presumably a compute node will be running in some kind of container, by necessity. If you are proposing that people create plugins that pull down different libraries and bundle them for the compute node, and perhaps create a wrapper with an API, I’m 100% with you. I’m simply saying we need a standard to work from and develop towards.

How are you going to write a plug-in that can run on Windows/Mac/Linux and perform the same compute? You also don’t want JoeBlow the consumer to have Safe compute nodes running amok on his system. A container gives the consumer an easy way to constrain resources and stay secure.