What about decentralized computing?


The Alpha 2 and the browser demo with rights management, data hosting, and website publishing are really cool. But they are mainly about data storage. What about computation (GPU/CPU usage)? Will decentralized computing be a feature on the SAFE Network? For example, if I want to build an application that needs to do a lot of heavy computation in the background, will it be possible to run such things on the network? Is it planned for the beta?

I don’t know if I was really clear, but don’t hesitate to ask for clarification.


This is not going to be in the beta, but it is something that will be a fairly high-demand, high-priority enhancement after launch. There have been a few speculative discussions on how to do it here and on the Dev forum.


OK, because I was thinking maybe this would be available before launch. Is it a complicated step to integrate into the network?


It remains to be seen how complicated, but in principle it isn’t that hard. Getting it right could take time, though, and I think anything that isn’t essential and would delay launch is best added afterwards.


If there is going to be a computing network, it needs proper incentives. How does it fit with the current concept of safecoin mining?

If it is implemented after the beta, there will not be any kind of mining by CPU/GPU resource contributions. It would have to be some kind of utility consumption model, like X safecoin per GHz per hour, in which the network connects the provider and the consumer but the economic exchange remains p2p, without intervention by the network.
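To make the utility-consumption idea concrete, here is a minimal sketch of such a pricing model. The rate, the job shape, and the names are all assumptions for illustration; nothing here is a specified network API.

```typescript
// Hypothetical utility-consumption pricing sketch. The rate and the
// job parameters are illustrative assumptions, not network specs.
interface ComputeJob {
  ghz: number;   // compute capacity reserved
  hours: number; // duration of the reservation
}

// Price a job in safecoin at an assumed rate per GHz per hour.
function jobCost(job: ComputeJob, ratePerGhzHour: number): number {
  return job.ghz * job.hours * ratePerGhzHour;
}

// e.g. a 4 GHz job running for 2.5 hours at an assumed 0.01 safecoin/GHz/h
const cost = jobCost({ ghz: 4, hours: 2.5 }, 0.01);
```

The point is only that the network would meter the exchange, not set the price; provider and consumer could agree on `ratePerGhzHour` between themselves.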

Sorry if it’s not clear; English isn’t my mother tongue.


Distributed and secure computation is a “must have” feature for implementing modern Web 2.0 websites.
Without it, the usage area is limited to simple homepages and file storage.
The ability to host JavaScript is not enough: it can be modified by an advanced user, and that would bring chaos to the data.


I would hazard a guess that native distributed compute is several years away. Doing it properly will take time: lots of analysis, experimentation, and so on. A lot of new ground will need to be covered.

Of course, doing something simpler at the app level may suffice for many use cases. For example, integrating something like Golem while taking advantage of native storage may be a useful step forward.

Long term, I see SAFENetwork providing a distributed pool of compute that is just as accessible as storage. Infrastructure-less computing, in essence.


You don’t necessarily need distributed compute for dynamic sites. There are multi-user appendable data structures available, for example, which represent a new paradigm in the evolution of client-side processing.

There will be a somewhat different approach to development relative to current web development, which means the likes of SOLID apps should be able to run on SAFENetwork even without distributed compute.


Are there any working examples that can hold as many objects as needed?
Appendable structures have limited capacity, and their secure autonomous expansion (without server or administrator intervention) is not possible, as far as I know.


You could, however, link them in a list or tree.
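A minimal sketch of that linking idea, assuming a fixed-capacity appendable chunk type. The `Chunk` shape and the tiny capacity are purely illustrative, not real network types:

```typescript
// Chain fixed-capacity appendable chunks into a linked list, allocating
// a new chunk whenever the tail fills up. Capacity is tiny for illustration.
const CAPACITY = 3;

interface Chunk {
  items: string[];
  next: Chunk | null; // pointer to the next chunk, if any
}

// Append an item, growing the chain on demand.
function append(head: Chunk, item: string): void {
  let node = head;
  while (node.next !== null) node = node.next; // walk to the tail
  if (node.items.length < CAPACITY) {
    node.items.push(item);
  } else {
    node.next = { items: [item], next: null }; // allocate a new chunk
  }
}

const head: Chunk = { items: [], next: null };
for (let i = 0; i < 7; i++) append(head, `obj-${i}`);
// head now holds 3 items, followed by chunks of 3 and 1
```

Total capacity is then bounded only by how many chunks you allocate, at the cost of a walk to the tail (which a cached tail pointer would avoid).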


If you do this before the service launches, the available object count will still be finite (and there will be lots of unused nodes at the start).
If you let users expand it on demand, they can break it (because the data format is controlled only by JS code). For example, a list node can contain 999 objects and 1 pointer. If object #1000 gets written where the pointer belongs, the service is broken.
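That failure mode can be made concrete with a small format check. The slot layout, names, and sizes below are illustrative assumptions, not actual network data types; the point is that untrusted client code can fill the reserved pointer slot with an object, and only a validator can detect it:

```typescript
// A node holds up to 999 objects plus one slot reserved for a pointer.
// If an object is ever written into the pointer slot, the chain ends
// prematurely and the service breaks. Sizes and names are illustrative.
const NODE_SLOTS = 1000;              // 999 objects + 1 pointer slot
const OBJECT_SLOTS = NODE_SLOTS - 1;

// Raw slots as untrusted client code might leave them.
type Slot =
  | { kind: "object"; data: string }
  | { kind: "pointer"; to: number };

// A node is well formed when objects occupy only the first 999 slots
// and a pointer, if present, sits exactly in the final reserved slot.
function nodeIsWellFormed(slots: Slot[]): boolean {
  if (slots.length > NODE_SLOTS) return false;
  return slots.every((s, i) =>
    s.kind === "pointer" ? i === OBJECT_SLOTS : i < OBJECT_SLOTS
  );
}

// Misbehaving client: writes object #1000 where the pointer belongs.
const broken: Slot[] = Array.from(
  { length: NODE_SLOTS },
  (_, i): Slot => ({ kind: "object", data: `obj-${i}` })
);
// nodeIsWellFormed(broken) is false: slot 1000 holds an object
```

Since any user’s JS can produce a node like `broken`, a dynamic site built this way would need every reader to validate nodes before trusting the chain, which is exactly the kind of guarantee server-side or network-native compute would otherwise provide.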