Questions testnets may be able to answer

The next testnet will hopefully clarify some burning questions many of us have about the Safe Network.

For example, these economic questions from @Sotros25:

  • Will the adjustment in rewards be more continuous or discrete?

  • In what increments will rewards be adjusted?

  • How much time does it take for the new rewards amount to come into play?

I am hoping to get some insight into:

  • How fast is upload and download?

  • How much difference in workload is there between an adult and an elder? This would be traffic volume (Mbps), traffic shape (bytes per request, how many extra small requests do elders deal with), number of connections made per minute/hour/day.

  • What sort of growth rate is possible? How much spare GB added per day, how much GB uploaded per day, how much GB downloaded per day, how many unique farmers/uploaders/downloaders per day (not sure how available this info will be though).
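To make those growth-rate questions measurable, here is the sort of aggregation I have in mind. The event records and field names are entirely hypothetical, just to show the shape of the daily stats, not anything the testnet actually exposes:

```python
from collections import defaultdict

# Hypothetical observations someone watching the testnet might collect:
# (day, kind, gigabytes) where kind is "spare", "upload", or "download".
events = [
    ("2021-01-01", "spare", 120.0),
    ("2021-01-01", "upload", 35.5),
    ("2021-01-02", "upload", 40.2),
    ("2021-01-02", "download", 80.0),
]

def daily_totals(events):
    """Sum GB per (day, kind) so growth over the test can be charted."""
    totals = defaultdict(float)
    for day, kind, gb in events:
        totals[(day, kind)] += gb
    return dict(totals)
```

Counting unique farmers/uploaders/downloaders per day would be the same shape, with a set per day instead of a running sum.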

If we keep track of what we hope to learn from the test, we have a better chance of actually learning it. Please add any more stuff you’re hoping to learn from the testnet.


I think this is a useful initiative and will have a think. I’m a bit cautious of raising expectations for the next testnet though. I think we can begin to learn about these things but maybe first there will be a few bugs to iron out!


Yes indeed, this is a good point to make and I completely agree.

I see testing falling into roughly three categories: 1) finding bugs and ticking boxes that normal functionality is working, 2) profiling resource usage, and 3) comparing alternatives for various features.

I think mainly 1) will happen with the next testnet, maybe a little of 2), but probably mostly just testing ‘yep, it works’.

I’ll edit the title to be more general.


I think we must add the hidden category, security. That thing we cannot test for effectively, but will need many eyes on.


Is balancing already taken to be more important than upload/download?

Survival instinct is more important than catering to others.

How fast does the network respond to ensure balance? How much work and extra stress is there in ensuring the data is safe, and how is that balanced against upload/download? So, for example, is upload/download more important than ensuring the very last copy is tidy?

How much extra work is generated for each node going offline?

If a chunk of the network goes offline, or if there are rolling brownouts in the electric grid, what effect does that have? Where does that work land: is it distributed, or does it have an impact local to the section it was part of? What kind of events would panic an Elder?

Does a new node have to download as much as an average node before it becomes useful, and before it becomes paid?

Is that a reason to lower the barrier to entry to the network, perhaps below 50% full? Or does that initial work help create inertia that encourages stability?

Then, how much extra work is required to become an Elder before being paid? Or is that simply continuity for what is stored, and not a risk of stagnation from too many nodes doing nothing but prepping for their next role, without responding normally?

Are there any exponential growth risks with the kinds of tasks being done?

How many Elders, and how many nodes, need to be stable at any point in time so as not to create too much work just balancing? Is the balancing dampened, so that the abstract to-do list can never grow faster than it can be worked off?

Worst case in the early days, where stress suggests there is not enough space, does the copy count drop to what is possible? That’s perhaps not an option, but where are the limits of how fast and flexible each node is, and then each section? These kinds of questions might become FAQs to help ensure confidence.

If the Elders become busy with certain types of task, how do they prioritize?

If one Elder is stressed, does it alert others, and is that useful? If the network is stressed, how does it communicate that? Or does it not, and is that a liability? Perhaps this is offset by data being well distributed.

Perhaps then there is a question of whether Elders are overthinking their role.

What part do Elders play in ensuring data is kept safe?

Does the sum of all Elders hold one copy of all the data? That would perhaps be a good, simple confidence builder.

Are the interests of long-term data storage liable to become corrupted by short-term interests?

Perhaps this is just a repeat of balancing, but if transient services exist, like instant messaging, could those risk that data storage is not able to flex under stress? Sort of a Netflix effect.

Is the movement of data to cater for popular downloads seen as a luxury task?

How much of what the network is, will be visible?

What metrics are there for network stress? Even if those are abstract and internal to the network only, will the network be able to communicate its state? Is there a risk that it doesn’t know it is being stressed? Perhaps the sum of Elders’ thoughts on how busy they are will be an indicator? Perhaps certain metrics and tracking can be in place and removed later where they become a liability…

It’s the unknowns that are a risk to confidence. The more insight there is into how the network actually works, the clearer it is where the limits are that break it, and the clearer it is that the network is set up never to hit those limits; all of which will help to answer FAQs.

:palm_tree: :evergreen_tree: :deciduous_tree: :tanabata_tree:

(I’ve just edited to make Elders with a capital E because I like the idea of those being like Ents :speak_no_evil: )



How much growth is required for long-term stability?

What assumptions are being made about the user base? If growth is slow for a time because of inertia around confidence (people waiting a couple of years to be sure the network is stable before committing their data to it, especially commercial volume), will slower growth pay for itself?

Should the cost of storage be relative to the network size?

Assuming the network size is known, some binding to a logarithm might see a high cost initially that over time becomes stable and almost linear.
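Just to illustrate the kind of curve I mean (a made-up toy formula of my own, not a proposed mechanism): cost starts high while the network is tiny and flattens out as it grows.

```python
import math

def store_cost(network_size_nodes, base_cost=1.0):
    """Toy store-cost curve: expensive while the network is small,
    flattening as it grows. The +2 just keeps the log well-defined
    for a network of zero or one nodes."""
    return base_cost / math.log2(network_size_nodes + 2)
```

So a network of 10 nodes charges several times what a network of 10 million does, but between 10 million and 20 million the price barely moves.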

What difference does the reward make for Elders?

Should Elders be paid more? Does it matter if they are not? If the transition from normal node to Elder is simple and not a pause in profit, then does it matter if Elders are not rewarded more?

If Elders are paid more, then how much? Is that a liability with respect to those trying to game the system, in that they will have a motive to disrupt the network for others?

All nodes are equal, but some take their role more seriously; perhaps that will just naturally balance with the real human interest in the network. So, expecting more Elders than supporters in the real world suggests that you have to pay to buy those Elders, rather than just expecting them to exist.

If Elders carry a copy of the data, so that the sum of Elders is one full copy, then their role is important and should be rewarded extra… but there is a liability: the more you pay, the more some will try to game the network in ways that see them owning more Elders.

Could a pool risk the like of a 51% attack? If the network can be gamed because of the economics, could the Elders become a single point of failure?

Are nodes in the cloud a risk?

If Elders tended to be hosted on AWS or similar, those do fail on occasion. Is there a way to avoid Elders gravitating to certain kinds of hardware?


Is farming idiot-proof?
Can everybody start farming, from a UX point of view?

How does farming change the UX of my farming device for other purposes?

What is the general feeling of using Safe Network?

How does my friend react, when I show him the history feature of the browser?

How much easier is it to communicate to others the significance of this project?

Who is first to regret publishing something for good?
What are the reasons for their regret? Do they realize: “Phew, I’m glad it was just a testnet!”?


That’s an interesting one… A node should not be a resource hog. Will the owner be in control of the node’s resource usage? Will there be a minimum, and is there a risk in allowing the user to affect what a node can do from one moment to the next? A reliable node one moment might become unreliable or slower the next. There are reasons to spread the load, so the network can tolerate flux in online nodes’ capability, not just nodes going offline.
The simplest option perhaps is that the node doesn’t use all resources, limiting itself to, say, 80% of bandwidth and CPU?


Some programs have a switch to let them know whether the computer is dedicated to this program (a database, for example) or shared with other programs and tasks.


Yes, and I wonder if it’ll be the opposite: mostly it won’t need to push a limit because the network is well distributed, but in the case of network stress it is perhaps important that resources can ramp up… to a point. Also, some users might not mind that it takes whatever it needs.


That’s an interesting point about throttling to spare system resources. I imagine the average throttling of the nodes which comprise a section will one day need to be considered in the calculation of a section’s resource needs. I don’t think it’s really been considered yet, so it’s still an open question, but I could see something like throttling being impactful enough to push the coin value in turn, so as to incentivize or disincentivize more nodes joining the network (e.g. in the case where there are a lot of nodes running, but many node instances are shared).

Just thinking out loud here, so don’t read too far into that, but it’s an interesting idea to consider.


I am a bit sceptical of the idea of using spare resources. Where I see the future is that we have some kind of “set and forget” boxes, possibly with Safe Network node software pre-installed. These boxes don’t have a screen and need no cooling, so they don’t make any noise. You can just plug them into the wall and once in a while use your smartphone to check how they are doing. Maybe there is a simple LED that is green when everything is working and red if something is wrong. They don’t do anything else; they are just nodes for the Safe Network. No other software causing unnecessary crashes, system updates or whatever.

Well, actually there could be another piece of hardware combined: part of the casing is an empty box with a slit that you can use to put physical coins in. “Yes my daughter, this is your piggy bank. Make sure its tail is plugged into the wall all the time. There is electrical money in there as well.” The design should reflect this idea, of course.

[Image: Piggy Bank On Pennies, by Ken Teegardin from Boulder / CC BY-SA 2.0 (Creative Commons Attribution-ShareAlike 2.0 Generic)]


Yes, but we might want as many nodes as we can find for stability… so, a mixture of those which are committed and those which are shared with other applications. It depends what the requirement is for a normal node…


I suppose this is one of the roles of node aging as well; the whole concept of elders/adults/children and division of labor serves to address this point, I think. Shared vs dedicated could be integrated with this system, I’d say, perhaps influencing the rate of aging. Also, node “health” and performance integrations into the network resource algorithm would help with this, which has been brought up in the past afaik.

Those who run dedicated nodes will probably comprise the bulk of the work in the network while those running temporary ones will probably just take a supplemental role. I see it similar to how most communities in general are propped up by a core userbase who really drive the service, while the rest have a higher churn rate and are sort of along for the ride. It’s a risk worth considering, but I don’t think it’ll be game breaking.


For sure, and it is really important to have a wide spectrum of suitable devices. I just personally hate having anything hissing, humming, or blinking around in my home. And if I have a Safe node running on a Windows computer, I’m quite sure the other non-node crap is going to seek my attention at any time, or interrupt my node in its job.


And with a profit motive, some users will just have nodes running alongside noncritical applications and will want to ensure the node doesn’t go above a certain resource limit… if money follows quickly from a node turning on, then those kinds of nodes become more likely and more numerous… but it is to be hoped the kudos-and-profit positive feedback helps encourage whatever balance is most useful to the network.


As long as a node can be run on this Sony stuff, we’ll serve everyone:

OK, I’d better stop procrastinating and derailing a perfectly valid thread and get back to my real job. :wink:


Personally I plan to run on something like an RPi or Odroid and leave them running 24x7, battery backed up, so they do not need restarting. Yes, the internet might go down on a power failure, but the node software keeps running, and hopefully soon after launch the node software will have a way for a running node to reconnect.

I have unused drives that I can connect up to them, and it’s a very cheap node: basically spare resources on top of a new SBC. The drive is the more expensive part anyhow.

And then if I want I could run node(s) on my PC and/or laptop if I wish.

I can also see node software built into, say:

  • NAS devices like Synology, FreeNAS, etc.
  • open source router software, since many routers can have a USB drive hanging off them
  • RPi and Odroid type devices
  • 3rd-party setups that make a commercial product out of a node (probably an RPi/Odroid inside)

being a big boost.

And testnets can tell us how viable it will be to use SBCs.


My questions:

  1. What is the smallest section size that is stable long-term?

  2. What is the relationship between communication overhead and section size (i.e. how large can a section get before decision making and consensus slow to a crawl)?

  3. Is there a security benefit to limiting communication routes within a section to form a communication hierarchy? (Nodes can only communicate if their difference in age is small in relative or absolute terms.)

  4. How long (measured in network time or wall-clock time) does a chunk stay “at rest” before it gets moved/copied and checked for errors (chunk ECC)?

  5. What effect does variable vault size have on section dynamics? Would fixed vault sizes improve performance/stability?

  6. How does section performance scale with redundancy? Is there a point where there are too many copies to be manageable relative to section size and section churn dynamics?
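On question 6, a back-of-envelope durability model (my own toy framing, not anything from the team) helps show why copy count has diminishing returns: if each of k copies is lost independently with probability p over some window, the chunk is lost only when all k go.

```python
def chunk_loss_probability(copies, p_copy_loss):
    """Toy durability model: a chunk is lost only if every copy is lost.
    Assumes copies fail independently, which real correlated churn
    (e.g. a regional brownout) would violate."""
    return p_copy_loss ** copies
```

With p = 0.1 per window, going from 2 to 4 copies cuts loss probability from 1 in 100 to 1 in 10,000, but each further copy adds churn-handling work for the section, which is exactly the manageability trade-off the question asks about.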


How do differing internet speeds by country or region affect the network?

What is the lag across the planet? What is the distance in time across the network? What difference in time and speed is there for retrieving copy 1, 2, 3, 4, etc.? A reduced difference might be useful for flexibility.