Contracts & Distributed Execution

I’m not at all au fait with Ethereum and won’t pretend to be. I don’t understand their smart contracts, and the notion I have of them is quite possibly very different to how they actually work. Having said all this, this post doesn’t actually have anything to do with Ethereum. I’m merely mentioning it because what I say next may well have factored into their design, and has probably already been considered by people at MaidSafe and/or in the SAFE network community.

This is not going to be another post on “illegal content”, though that will be a minor factor. I’m going to try to delve into a way in which truly distributed applications could work - something I don’t think any competing system allows for. The concepts are simple; the technology won’t be quite so simple, but it’s far from impossible.

I’ll start with a few statements I believe we’ll all agree on:

  1. With large groups it’s impossible to have full consensus.
  2. Entities should not be forced to participate in actions they disagree with.
  3. Entities should be allowed privacy.

There will be many more statements that can be added!

It seems quite logical that if a system enforced contracts, it would mean the system adhered to each of these statements. E.g. if you are a vegan, your machine shouldn’t be used to store/distribute content that promotes animal testing.
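As a loose sketch of what such a contract might look like (every name here is hypothetical - nothing in the network actually defines a contract format yet), a node could publish a declaration of content it refuses to handle, and the network could check a chunk’s declared tags against it before assigning the chunk:

```javascript
// Hypothetical sketch: a node's contract lists content categories it
// refuses to store or distribute. The network checks a chunk's declared
// tags against that contract before assigning the chunk to the node.
const nodeContract = {
  nodeId: "node-42",
  refuses: ["animal-testing-promotion"],
};

// True if every tag on the chunk is acceptable under the contract.
function contractPermits(contract, chunkTags) {
  return chunkTags.every((tag) => !contract.refuses.includes(tag));
}

console.log(contractPermits(nodeContract, ["recipes", "vegan"]));
console.log(contractPermits(nodeContract, ["animal-testing-promotion"]));
```

Of course this hand-waves the hard part - who tags the content, and how truthfully - but it shows the shape of statement 2 in code.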

The network as it stands doesn’t allow distributed software to run on it. It allows for distributed data, not distributed execution. To achieve this you’ve obviously got to have nodes on the network sacrifice CPU time. Not only that, you’ve got to have nodes willing to risk running operations they have no knowledge of. Do you want to risk having some process/thread on your machine running in an infinite loop - especially if you’re getting no benefit from that process/thread to start with?

I don’t know all the details of how distributed websites are planned to work at the minute. However, my understanding is that the sites would be pretty client-heavy, with JS calls through to the SAFE API. I’m struggling to see how this can be done securely with the hardware currently available. Things may be fine for the end user, but what about the site operator? If sites aren’t tamper-resistant, who’s going to create sites on the network?

For truly distributed and secure applications to function, you need to be able to execute operations on neither a server nor a client: these operations need to be executed on some arbitrary machine(s).

So, here we are again with contracts. Currently, network users are assumed to sacrifice disk space for tokens, and these tokens allow the users to “do stuff” - this is the default network contract. Disk space is cheap, and most people will be fine with this. Not all users will be happy sacrificing CPU time and taking the other risks involved in executing operations on their CPU. Nor will all users be happy helping to distribute X, Y and Z.

Entities that create websites on the network won’t want to trust users not to look at their JS and start messing with their datastores…this means you need distributed execution…which means you need contracts.

I’m very confident it’s possible to achieve truly distributed execution…and if done cleverly it could perform almost as well as (maybe better than, in some years) current centralised systems - obviously assuming there’s adequate resource within the network.


Distributed remote code execution is a difficult problem. You would have to create a distributed virtual machine with a bunch of instances, then run an operating system such as Qubes, which uses security by isolation, least privilege, and similar paradigms.

Code execution is dangerous though. I think Qubes is so far the best attempt at securing an operating system, and it’s how I would go about doing it. To do it at the distributed level would require trusted hardware.

At this time I would say it’s not feasible or reasonable to secure. Maybe if the market cap rose exponentially, so that the economy had enough money, or if Rivetz solves some of these problems, it might become possible.

The main security vulnerability is that you can’t trust your computer’s hardware or software. There are leaks everywhere, too. I think even the Ethereum team is vastly underestimating the difficulty of doing this stuff, but they’re ambitious enough to try.

Maybe use the process of coinization to our advantage?

Maybe capability tokens could be capability coins: only the node in possession of the coin would have the privilege. Maybe other elements of the virtual computer could become coins too, but I’m not sure. Tokens or coins could be used to identify the purpose of the different nodes that make up the computer, and to signal what they’re doing, because it all has to be coordinated.

I do know you can do access control quite well with coins. There’s also the concept of the API coin, which allows for something similar, though with a slightly different motivation. All I can say is that this is a very hard problem - as difficult as the Byzantine Generals problem Satoshi Nakamoto had to solve. There isn’t even a good metaphor with which to describe the problem, is there?

Nick Szabo attempted to describe something similar in 1997.

And there are many people working on it now under other, quite technical, names. Even in those cases they haven’t completely solved every issue.


It seems like distributed execution would be possible in theory, but it would be too slow to be practical. Secure multi-party computation is getting faster, but it’s still slow.


I suppose if you just want to do it in the simplest way, you can do the execution on oracles external to the network, like the Ripple protocol will be doing with smart oracles.

It’s decentralized enough but it requires you to trust the oracles. It’s not as secure as I would want it to be either.

I don’t think you’d need to go that far. It seems like the network might not be a million miles away from what I’m talking about as it stands.

Say sites are written in JS: you may want some bits to execute on the client and other bits to execute on other machines. There could be various reasons for this - security (you don’t want the client to tamper with calls), performance (in certain areas), etc.

From the post I’ve referenced above it seems developers can call through to centralised servers if they need to - which helps with certain security issues.

One way I could see this working without any centralisation would be for JS code to be stored within the network, just as files are. You could place a request with the network to execute some function, then get the result back. The function would execute within a JS engine on some node that had signed up to the “Allow 3rd Party JS execution” contract, and the node that executes it would be paid by the caller.
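The flow above could look something like this. To be clear, none of these names are real SAFE API calls - the code store, the executor node, and the payment step are all stand-ins to illustrate the shape of the exchange:

```javascript
// Hypothetical sketch. An in-memory stand-in for the network: JS code is
// stored at an address like any other file, and one node has signed the
// "Allow 3rd Party JS execution" contract.
const codeStore = new Map();
codeStore.set("safe://math/add", "(a, b) => a + b");

const executorNode = {
  id: "node-7",
  contracts: ["Allow 3rd Party JS execution"],
  balance: 0,
  // Runs stored code inside the node's JS engine (plain eval here; a real
  // node would need a properly sandboxed engine).
  execute({ codeAddress, args }) {
    const fn = eval(codeStore.get(codeAddress));
    return fn(...args);
  },
};

// The caller asks the network to run a stored function; the node that
// executed it is paid by the caller.
function executeOnNetwork(node, codeAddress, args, payment) {
  const result = node.execute({ codeAddress, args });
  node.balance += payment;
  return result;
}

console.log(executeOnNetwork(executorNode, "safe://math/add", [2, 3], 1));
```

The interesting engineering is in everything this sketch omits: picking the node, sandboxing the engine, and verifying the result came back untampered.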

There’d obviously be questions around security here - one being how you prevent nodes signed up to the “Allow 3rd Party JS execution” contract from abusing their position. This is where trusted computing could come in, but there may be other options.

In terms of performance: of course, if this sort of thing were used without much thought (on the end developer’s part), performance would be terrible. However, it could be used in ways that actually improve performance. Imagine you have some house-keeping operation that needs to be performed after an order is taken. You could just fire off an async request into the network to perform it and it’ll get done at some point - we probably don’t care whether it happens immediately, and we can carry on with other work in the meantime.
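The fire-and-forget pattern is easy to show in code. Again, the `network` object and its `executeAsync` call are invented for illustration - only the pattern (don’t await the house-keeping work) is the point:

```javascript
// Minimal stand-in for the network; executeAsync does the work later.
const network = {
  completed: [],
  executeAsync(codeAddress, args) {
    return new Promise((resolve) => {
      setTimeout(() => {
        this.completed.push(codeAddress);
        resolve("ok");
      }, 0);
    });
  },
};

// Take the order and respond immediately; hand the house-keeping to the
// network without waiting for it to finish.
function takeOrder(order) {
  const confirmation = { orderId: order.id, status: "accepted" };
  network
    .executeAsync("safe://housekeeping/update-stock", [order])
    .catch((err) => console.error("house-keeping failed:", err));
  return confirmation; // the caller carries on with other work
}

console.log(takeOrder({ id: 1 }));
```

The customer sees the confirmation straight away; the stock update happens whenever some node on the network gets round to it.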