Efficiency on the Safe Network

Efficiency is an important aspect of decentralized networks. The various meanings and implications of efficiency on the Safe Network can be subtle and expose various potentially conflicting goals.

The first and most obvious point is that the Safe Network is designed not to be a blockchain, which is usually taken to mean not being an energy magnet the way Bitcoin, Ethereum, and other proof-of-work protocols are. The Safe Network uses different mechanisms to increase efficiency compared to proof of work, so the amount of energy consumed per unit of work should be greatly reduced.

Keep in mind, efficiency is how much is put in compared to how much we get out. That’s not always easy to compare between systems.

There’s energy efficiency: the Safe Network is constrained by data storage and transfer resources (i.e. hard drive and bandwidth consumption) as the underlying unit of work, whereas proof of work is constrained by computation resources (i.e. energy consumption) as the underlying unit of work.

The Safe Network will be more energy efficient than proof of work because the amount of energy needed to power hard drive and bandwidth consumption is less than the amount of energy required to power computation (for an equal amount of output; meaningful comparison of inputs and outputs is itself wide open to debate).

This leads to a question about capacity efficiency: is it inefficient to have spare hard drive and bandwidth capacity left unutilised (i.e. not doing anything)?

On one hand it’s inefficient for Safe Network to have 100 copies of data if only 10 are needed to achieve the goals of the network (primarily ensuring permanent storage of all data at an acceptable speed to store and retrieve).

On the other hand it’s inefficient to mandate 10 copies if capacity which could be used for storing 100 copies (to be faster and more reliable) is used elsewhere for less efficient purposes (perhaps nothing).

Capacity efficiency also touches on the potential for decentralization of the network. Consumers often have spare hard drive and bandwidth (a sunk cost), but datacenters do not, which should mean a reduced amount of datacenter participation compared to proof of work. Or it may mean any excess datacenter space is used temporarily for the Safe Network while it awaits a ‘real’ use, since leaving it unused is a waste for the datacenter operator. The magnitude of this excess space may be substantial for a large datacenter.

I feel the impact of capacity efficiency on Safe Network (both near and long term) is not well explored.

There’s also pricing efficiency to consider, which is a two-fold consideration intended to reflect the best available information about supply (farmers) and demand (uploaders/downloaders). The first consideration is the price of storage demand: the price uploaders pay when storing data, i.e. the storecost. The second is the price of supplying resources: the price farmers are rewarded for storing and transferring data, i.e. the farming reward. To what degree are these two factors linked, and how are they distinct? How can we come to understand the impact on efficiency of setting the price at one particular point rather than another? How does efficiency change for varying growth rates (roughly controlled by price)?
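To make the two prices above concrete, here is a toy feedback sketch. It is purely illustrative: the function shapes, the `spare_fraction` signal, and the base prices are all my own assumptions, not the actual Safe Network algorithm.

```python
# Toy sketch (NOT the actual Safe Network mechanism): both prices are
# driven by one hypothetical supply signal -- the fraction of network
# capacity currently spare.

def storecost(spare_fraction: float, base: float = 1.0) -> float:
    """Hypothetical storecost: cheap when the network has lots of spare
    room, increasingly expensive as spare capacity approaches zero."""
    assert 0.0 < spare_fraction <= 1.0
    return base * (1.0 - spare_fraction) / spare_fraction

def farming_reward(spare_fraction: float, base: float = 1.0) -> float:
    """Hypothetical farming reward: pays more when capacity is scarce,
    to attract new farmers and restore supply."""
    assert 0.0 < spare_fraction <= 1.0
    return base / spare_fraction

# Plenty of spare capacity: uploads are cheap, rewards are modest.
print(storecost(0.8), farming_reward(0.8))
# Capacity nearly full: uploads cost more, rewards rise to pull in supply.
print(storecost(0.1), farming_reward(0.1))
```

In this sketch the two prices are linked (both respond to the same scarcity signal) but distinct (they can use different curves or base values), which is one way to read the question above.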

There’s also timing / performance efficiency, which asks how much time it takes to achieve something. Ten seconds to fetch a document would be less efficient than one second to fetch the document. How efficient should the timing be? Should performance considerations be different for upload vs download? Should increased resources be used to improve pricing efficiency or timing efficiency, or a bit of both? Should the network be capable of streaming a 720p or HD or 4K video, or should people download ahead of time? When does interacting with a large database count as ‘taking too long’? The timing and performance efficiency target is a difficult one to pin down, but I feel it has the greatest impact on the broad perception and utility of the network, so it is worth considering carefully.

Are there other types of efficiency that haven’t been covered here? Mistakes or misconceptions? Improvements or comments? Would be interested in your thoughts.

I reckon we all want an efficient network, but what does that mean to you?


To me, efficiency should be a goal across all avenues: efficiency in power consumption; speed and cost to store and retrieve data; onboarding a new network denizen; governance to effectively update the network; AI learning on network health and security so it can enhance itself; AI to target and possibly prevent spam to avoid congestion; distributed computation to purchase from the network to speed up services; some form of homomorphic encryption to feed big data anonymously; and on and on to any potential facet, either existing or not yet imagined. A never-ending pursuit, but hopefully a process that can be sped up by AI and genetic algos.

I guess to give a more specific response: everything you mentioned are levers that should be tweaked across as many variations as possible to see how the user experience is affected by each tweak, and which variations offer the most benefit. Are some shortcomings even noticeable? Is a balance best, or will people prefer consumption speed first and foremost? If uploading takes too long, will we get enough content on the network fast enough? How long could it reasonably take, and how long is too long? We have to test it to really know, I suppose.


To me smooth UX is an important part of efficiency. For example, I haven’t downloaded any torrents for years just because Spotify is a more efficient way to reach my goals.

I expect Safe to become more efficient compared to Oldnet at least in a way that I can ‘click’, ‘agree’, ‘log in’… much less.


I believe that the greater the flexibility of the network to allow for market mechanisms to value/price different aspects of the network, the more efficient it could become as it progresses.

That’s obviously just a philosophical view on how I’d like to see the network engineers consider the problem while designing it … ultimately, I believe that so long as the network can evolve in such a manner, it could become highly efficient in time.


Top of head below, bit of a ramble…

Obviously, we don’t want too much of one interest and not enough of the other; so, the question is how is that balance resolved, and in a way that can reflect complex interests evolving over time.

These kinds of balances between end points have me wondering about 1/rr and the like of gravity, which is so natural, creating potential wells: those pull the balance of an interest back to a fair mid point, though with a risk of harmonics if not dampened. A stronger answer is exponential from the midpoint, but I don’t know if that harder resistance is better.

timing / performance efficiency

There is a balance to be struck between many factors, and part of that is having available flexibility; so, having resources unused that could be made available at short notice. I don’t know whether the kinds of storage available will be considered differently, relative to their performance, or whether too much tailoring makes for liability relative to stability. By default, my instinct is the 80:20 rule: have 20% available for short/immediate response; a further 20% of what remains (16% of the total) that can respond a bit slower; and the remaining 64% can come up a bit slower again.
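The 80:20 instinct above works out to a 20/16/64 split of total capacity. A trivial sketch of that arithmetic (the tier names and percentages are just the poster’s instinct, not any spec):

```python
# Sketch of the tiered split described above: 20% for immediate
# response, 20% of the remainder (16% of total) a bit slower, and the
# final 64% slower again.

def capacity_tiers(total: float) -> dict[str, float]:
    immediate = 0.20 * total
    slower = 0.20 * (total - immediate)   # 16% of the total
    slowest = total - immediate - slower  # the remaining 64%
    return {"immediate": immediate, "slower": slower, "slowest": slowest}

tiers = capacity_tiers(1000.0)  # e.g. 1000 GB of spare space
print(tiers)  # {'immediate': 200.0, 'slower': 160.0, 'slowest': 640.0}
```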

pricing efficiency

Once you introduce price, you tempt perception; users might want to pay for higher performance.
So, the other thought is price relative to the midpoint: if someone wants to pay, then an exponential or 1/rr cost for the difference from normal will encourage a fair balance.

volume / service type

Safe is for everyone, which tempts the idea that no preference should be gifted to one kind of interest?

The performance need for continuity in streaming is different than for individual file bursts, though. I don’t know if bursts of streaming can be compared to single file response. If you go down the route of one service getting a different response, then that tempts management, which tempts politics, if only for encouraging one service over another.

I’m all for keeping it simple.
Efficiency is a balance between interests.

Interests are the sum of what provides for them. I don’t know that preferring one over another is appropriate, but the way the resources/performance sum is made is important, to see equity across interests. Is it simply streaming vs individual files, or is it also chat and pings of updates, dust versus block content? At some point the perception of what is more important will change. Having cost flexibility as exponential, I wonder, will help support any interest: if it is that important, then the interest pays in some way to force its priority relative to others. So everyone is pushing for their interest and the balance is found that way.

In all cases the network should not fail; so, I would expect some prioritization that sets some nice values for processes and allows then catering for luxury performance at a price. That is, in worst case under stress the network stabilizes and performance is a secondary.


This is a really important insight in general, I think, and I’m glad you brought it up, because it’s not one that most people consider. My impression is that, for most, “efficiency” refers to a single continuum of being better or worse than something else.

But when you’re dealing with a high-dimensional space (e.g. there are a lot of factors to consider for Safe like the various costs you mentioned), it’s no longer straightforward to say something is more “efficient” than another. It’s not particularly insightful (in terms of the whole picture) to make singular claims about any one of those costs. Saying “SAFE is cheaper in terms of energy” feels good, but there can be cross dependencies among terms and the space could be highly non-linear.

For example, if I own a widget factory, increasing per-widget cost doesn’t always decrease size; it may even increase size at some point for some reason. So if I tout the small size of my widgets, I’m saying nothing about the cost, which is contextually still very important. Only by considering them holistically can we gain any insight into my factory. Worse still are the costs we don’t model, which occur anyway. In this case, maybe my factory is highly polluting and overall output decreases at larger timescales due to the impact of my factory on the environment.

This hints at my main point: that “optimal” depends on your metric, and not on some “true optimum”.

In some sense, this is your classical convex optimization problem. Safe’s answer to the “optimal” operating point is governed by some set of constraints and an arbitrarily-defined cost function which we attempt to minimize. That cost function reflects Safe’s goals in how it weights the various costs associated with the Network and in what we choose to optimize.

While we might find an operating point at which we meet all of our goals and are “efficient” in the ways we want to be, I suspect Safe (and most things really) can’t be simultaneously efficient in all ways, and I think that’ll be important going forward.


Just a quick comment not to get too caught up in ideas of efficiency. An efficient network is one that achieves its goals first, and does an “efficient” job of that second.

Like evolution, is it efficient to have so many species rather than just a few really good ones?

Efficiency is not just using less energy (for example), than Blockchain, it is being around after the once in a few decades event has obliterated the competition.

Having said that, I’m not saying this isn’t an important discussion. I want to add another facet or context within which to consider what is “efficient”.


I don’t think this is true, at least in the beginning stages of the network. To get widespread adoption speed and ease of use will be the most important ingredients. Energy and capacity efficiencies are more philosophical in nature and not worthy of a lot of foundation work imo, best to leave those considerations for another day. Pricing efficiency is a moving target that will need much data analysis to manage effectively and, hence, must also be reserved for future attention. Performance (mostly speed) and user interface are the two most important factors that should get attention right now (and I believe that to be the case, actually).


Efficiency is “achieving maximum productivity with minimum wasted effort or expense.”
It’s inherently a balance in that… it’s a judgement call on what is a priority.

An effective network is one that achieves its goals first, and does an “efficient” job of that second.


Interesting, and yes, correctness should always be a goal, but efficiency, I am not so sure. Absolutely 100% no waste (don’t do work you don’t need to) is great. So I offer a counterpoint, or perhaps just a deeper dive.

I have noted the single biggest issue from the COVID-19 pandemic and it’s worth debating.

Efficiency is the enemy of resilience

So I see efficiency as actually not using 100% capacity or “Just in time delivery”. I think for survival we need food stores, spare food, spare water, all taking up space we could use more efficiently if it was all delivered on time.

So I think the conversation is best focused on doing work efficiently, so no wasting energy, but also have spare capacity and resources available for the inevitable crisis nature throws our way.

Interestingly, I know from speaking to Deborah Gordon that in a typical ant colony she surveys (harvester ants), she finds 50% of the ants in the colony do nothing; they are just there “just in case”. Now if they were “efficient” they would all go and get food, doubling the colony due to double the resources, but 150 million years of evolution has taught them to have resilience, as that resilience and spare capacity is the most efficient way to survive.


efficiency [ ih-fish-uhn-see ] 1. the state or quality of being efficient, or able to accomplish something with the least waste of time and effort; competency in performance. dictionary.com

i.e. efficiency is secondary to the goal.


Off topic, but there’s something to wonder about getting the R value above 1. Perhaps that’s just marketing, but perhaps aspects of what make for effective/efficient lend themselves to infection, so that it bests its competition… or what defenses there are against use of the network.

Performance, then, is perhaps an important part of marketing, but at what cost? Sustainable is also attractive: something that will persist, in the way that C19 does… no vaccine, it just keeps on chugging… and then reliable, because it always works, even if a bit slower for it.


I’m with @VaCrunch. Different efficiencies are required for different use cases and at different stages in the network’s evolution. Most processes can be made more efficient over time, but at the start low latencies for downloading in particular, and low-friction sign-up - meaning the minimum number of button clicks and unexpected processes - will be crucial for adoption. Without a sufficient user base other efficiencies are neither here nor there.


I did wonder whether targeted use cases might be the way to go: if you can supplant a messaging service like Telegram, then a Dropbox, then website hosting, etc., that would allow focus on UI and performance, rather than trying to boil the ocean or eat the elephant or whatever the meme is.


Secure social and storage applications are the obvious early use cases, sitting on top of a lot of extras such as convenience, energy efficiency, simple wallets etc. Applications that don’t require a large audience already in place (so web hosting and e-commerce will be slower and come later once the audience grows large enough to make them worthwhile).


Yes, the thought was more that blitzkrieg is efficient…

I’m not familiar with this, what is 1/rr?

I agree with what you’re getting at, but would maybe aim for ‘lightly used’ rather than ‘unused’. May as well do something very small with it rather than nothing, right? Or is it useful to have unused resource around? And to be really pedantic, is unused-but-waiting-specifically-for-Safe really unused, or is that mode of ‘being available’ a kind of use? Haha, it really feels like I’m being too pedantic!

Ah what I was getting at here is not the specific use case of video but where the balance is for data volume and speed. At what point do we say ‘sorry your use case needs a different way of using the network’. Like, if someone wants to analyse a 10 PB satellite imagery database in 10 seconds, that’s different to wanting to watch a 720p movie in 2h. At some point we say ‘this use case is within / outside expectations’. It’s not fixed in stone obviously, but it’ll be interesting when someone says ‘oh Safe Network can’t do xyz’, how do we respond - ‘ok let’s aim to do that’ vs ‘do it a different way’.

This is a great point and I totally agree. However I would say that the goal does have efficiency built into it, for example, if the goal is to use spare resources for farming then that is implying a desire for efficiency (of otherwise wasted resource).

I suppose it’s a bit like money: money is an abstraction of value, and when we start chasing money for the sake of money itself we have lost the point. Efficiency is only part of the goal, and when we start chasing efficiency for its own sake we have maybe lost the point.

Good to be reminded of that.

A bit of wordplay here to tamper with the word ‘need’ - the network needs to have resources it doesn’t need so that it’s resilient. This is the conflict that efficiency is aiming to address.

Well, I’d see ‘efficient’ as an adjective that is part of the goal, rather than efficiency as some secondary noun sitting below the goal. Getting pretty pedantic again, but the word efficiency has surprised me with how many different ways people are looking at it. Kinda cool, but it can make for a very complex conversation.

Network virality.

I’d say Telegram is the hardest of these examples to supplant, because it has a dependency on network effects (others must also use it). Dropbox can be quite useful in isolation or with few other users, so it’s probably simpler to supplant than Telegram. But that also depends on how easy or hard it is to get new users on board the Safe Network; if onboarding is easy then maybe Telegram is not a difficult target.

For me the main indicator that efficiency is being achieved will be reflected by a low storecost. If it’s cheap to upload then it means spare resources must be getting utilized. (Or maybe it means there’s a lot of altruism happening? Or maybe it means people are using farming as a speculation on future prices?) But my gut leans toward low price signalling good efficiency.

And then second to that I would use fast GET response time as the next best measure of efficiency, since that means farmers are seeing the value of delivering data on the network, and the value of the data (beyond just safecoin) is to me a big part of the efficiency.

Another form of efficiency that I left out is maybe onboarding efficiency, which means how long it takes to go from first hearing about the network to contributing to it. That means easy to get safecoin, but it also means good documentation for devs, clean modular code, simple test environments, and new ideas and improvements coming in continuously. There’s an efficiency that comes from the way people engage with the network, not in a purely ‘time spent’ way but in a relationship sort of way: how people relate to and conceive of the network.


“Efficiency” is not a good starting point to frame the ideas/ discussion. Objectives, constraints, and Pareto Optimality provide a better framework. “Efficiencies” emerge from investigation of a Pareto optimal system.

For example, say you want to maximize performance while minimizing cost for a complex system (like the Safe Network). There are many input control settings to the system, and each set of inputs yields a cost metric and a performance metric. There will be a set of these input values that are all equally optimal, meaning one control strategy might offer a small performance advantage over its competitor, but it will also be more expensive.

But this is a highly nonlinear set of relationships! Usually there are inflection points in the Pareto set where, by sacrificing a small or negligible amount of performance, you can reduce cost by a huge amount. These inflection points in the design/objective space can be said to be the most efficient input/control parameters. This is where the efficiencies reside.

A more familiar example might be how you select a processor when building a new PC. Do you go with a Ryzen 9 or do you go with a Threadripper for 2x performance and 10x cost. The most efficient choice considering only performance and cost objectives is to use two Ryzen 9 for a 5x savings to your budget while receiving equal performance.
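The non-dominated set and the knee idea above can be sketched in a few lines. The design points and the knee heuristic here are made-up illustrations of the technique, not anything from the Safe Network design:

```python
# Each hypothetical "design" is a (cost, performance) pair. First keep
# the Pareto front (non-dominated points), then pick the knee: the
# point after which extra performance gets disproportionately expensive.

def pareto_front(points):
    """Keep points not dominated by any other point (a dominator has
    lower-or-equal cost AND higher-or-equal performance)."""
    front = []
    for c, p in points:
        dominated = any(c2 <= c and p2 >= p and (c2, p2) != (c, p)
                        for c2, p2 in points)
        if not dominated:
            front.append((c, p))
    return sorted(front)

def knee(front):
    """Crude knee heuristic: the interior point where the marginal
    performance gained per unit cost drops most sharply."""
    best, best_drop = front[0], 0.0
    for i in range(1, len(front) - 1):
        (c0, p0), (c1, p1), (c2, p2) = front[i - 1], front[i], front[i + 1]
        before = (p1 - p0) / (c1 - c0)  # slope approaching this point
        after = (p2 - p1) / (c2 - c1)   # slope leaving this point
        if before - after > best_drop:
            best, best_drop = (c1, p1), before - after
    return best

designs = [(1, 10), (2, 30), (3, 55), (4, 60), (10, 62), (5, 40)]
front = pareto_front(designs)  # (5, 40) drops out: (3, 55) dominates it
print(front, knee(front))
```

Here the (10, 62) point plays the Threadripper role: a little more performance at far higher cost, so the knee lands at a cheaper design.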


Lazy writing of 1/r².

It’s then a potential well, inversely proportional to the square of the distance from the source, which pulls the object back to centre, like a pendulum: the further it is, the stronger the draw sucking it back. With dampening, it would see preference for the “normal”.

It’s just one way to encourage that what is at a distance from normal “costs” more and compensates the extra “work” involved.

Oddly, perhaps that again is “potential”: if it’s not active and kinetic, then it’s sat ready. Passive/active would be another way to put it. What matters is whether any latency has been removed; so, “prepped and ready” would be the same.

Yes, you cut my quote off there, where I’d noted streaming is of a different kind. But if there is a way to compare it fairly with the transfer of individual files, then if the likes of Netflix starts up, with the volume that demands, the network should still be able to do everything else too.

I think that’s right… it frees up resources in the real world that can then be put back to the network to support its growth; even if initially it’s just redundant backups, until there’s full assurance the network will survive long term.

Difference is resistance; so, all those elements can be the same, which is why I queried the use of CLI in cgi-bin. If there’s a way for normal websites to make use of the Safe Network, that will be a great boon for service providers’ storage if nothing else.
One-time cost for storage is a huge selling point.


Lots of good points about efficiency, and it’s true, @mav, that we are including some idea about efficiency within the goal (implicitly, I suggest), so it’s good to recognise that and try to be more explicit. Otherwise we can lose sight of the impact on the goal when chasing efficiencies.

In that vein, maybe we have a goal for the network and wish to utilise resources in ways that contribute to (not undermine) the goal most effectively: more goal per “buck”.

So while considering efficiency in any area, we should always be considering how that might affect the goal.

As David notes, we might lose resilience if we use fewer resources. For example, how many nodes do we need? Can we be sure we’ll achieve our aims with fewer, and so be more energy and resource efficient? And what types and levels of disturbance do we wish the network to be able to survive?