Pre-Dev-Update Thread! Yay! :D


Lol but seriously bro, I’m right there with you.

Every aspect of today’s world screams out for SAFE


It’s called config_file_handler, but what exactly does it do? Well, it creates, reads and writes JSON-encoded config files. So are we very close to installers that include this brand-new config_file_handler to take care of our settings? Boy, so exciting :grin:. And what about chunk_store? They call it: “Simple, non-persistent, disk-based key-value store”. It seems the Ants are coming. And the Vaults are coming as well :grinning:.
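For anyone curious, here’s a rough sketch of the create/read/write cycle such a handler implies. Purely illustrative: the `ConfigHandler` class, file name, and settings keys below are my own invention, not the actual config_file_handler API (which is Rust, not Python anyway):

```python
import json
import os

class ConfigHandler:
    """Illustrative JSON config handler -- NOT the real crate's API."""

    def __init__(self, path):
        self.path = path

    def read(self, default=None):
        # Create the file with defaults on first use, then read it back.
        if not os.path.exists(self.path):
            self.write(default if default is not None else {})
        with open(self.path) as f:
            return json.load(f)

    def write(self, settings):
        # Serialize the whole settings dict as pretty-printed JSON.
        with open(self.path, "w") as f:
            json.dump(settings, f, indent=2)

# Example: store hypothetical vault settings.
handler = ConfigHandler("vault_config.json")
handler.write({"port": 5483, "max_capacity_mb": 1024})
cfg = handler.read()
print(cfg["port"])  # → 5483
```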

Disclaimer: this reply is full of speculation and hope.


Exciting times indeed,

I hope we can run multiple vaults in one application window.

During my early video tutorial, I manually opened 1 vault per terminal window. If I’m able to run 100 vaults on my system, that is a LOT of windows. An interface to manage/organize all my vaults would be very helpful.


Throw some jet fuel on that speculation fire @polpolrene, the moment I’ve been waiting for!!


At first I thought :thumbsup:

Then I wondered, do we want to make it easy for individuals to run large numbers of vaults? Even though I would definitely want to do this too! :wink:

Might it not be better for the network if that wasn’t made so easy?

Just a thought. :slightly_smiling:


What’s the purpose of having many vaults on the same box? Some time ago I read it would be better for farming, but is this still relevant?

Many small vaults = less resources per vault
One big vault = all resources

Is it about slightly manipulating the GET/PUT requests in XOR space, or which way do I have to look at this?


I don’t think the “vault setup” difficulty will stop people from running a large number of vaults. I’m personally motivated to do it already.

One limiting factor is the amount of RAM needed to run each process. I wrote about it on our SAFE Club channel on Slack. Even if people could start 1000 Vaults, their system may not be able to handle it. That’s not even counting the bandwidth needed to support all the routing, caching, and GET requests.

So many questions to answer… once testing starts.



More vaults = higher chunk collection rate.

More chunks collected means more opportunity for GET requests.
More GET requests served means more farm attempts, which means more Safecoin.

Imagine you were in a Network of 100 vaults, including yours.
If you have 10 vaults, you might collect 10% of the chunks stored on the Network.
If you have 1 vault, you might collect 1% of the chunks stored on the Network.
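The arithmetic in that example can be sketched out directly. This is a simplification: it treats chunk placement as a uniform lottery over vault addresses, whereas the real network places chunks by XOR closeness, so take the exact percentages as illustrative only:

```python
def expected_share(my_vaults, total_vaults):
    # If vault addresses are spread roughly uniformly, owning k of n
    # addresses means collecting roughly k/n of all stored chunks.
    return my_vaults / total_vaults

total = 100  # the thread's example: a network of 100 vaults
print(expected_share(10, total))  # → 0.1  (10% of the chunks)
print(expected_share(1, total))   # → 0.01 (1% of the chunks)
```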


My view on this was like this:

100 vaults of 1 GB => catching 1MB chunks and serving GETs/PUTs on request, delivering from address x in XOR space => so 100 addresses (x) in XOR

1 vault of 100 GB => catching 1MB chunks and serving GETs/PUTs on request, delivering from address x in XOR space => 1 address (x) in XOR

So is that totally wrong?

Why would more vaults mean more chunk collection, if they all collect 1MB chunks?


Chunks get routed to the closest address. So chunk “ABC” should be stored as close to a Vault with address “ABC” as possible. When you have 100 Vaults, you are closer to more chunks. When 100MB is PUT to the network, 100 new chunks will be routed. If you have more addresses, the chance is bigger that you’ll receive more of those chunks.
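A toy example of “closest in XOR” makes this concrete. Addresses are shrunk to a few bits for readability; real network addresses are far larger, and real routing involves close groups rather than a single winner, so this only shows the distance metric itself:

```python
def xor_distance(a, b):
    # XOR distance between two addresses (here just small ints).
    return a ^ b

def closest_vault(chunk_addr, vault_addrs):
    # The vault whose address has the smallest XOR distance
    # to the chunk's address ends up holding that chunk.
    return min(vault_addrs, key=lambda v: xor_distance(chunk_addr, v))

vaults = [0b0001, 0b0100, 0b1110]
chunk = 0b0101
print(closest_vault(chunk, vaults))  # → 4 (0b0100; XOR distance 1)
```

The more vault addresses you control, the more often one of them is the closest to some freshly stored chunk.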


This is because of the way chunks are distributed (stored) on the Network.

If someone uploads (PUTs) a 100MB file, that file will be broken up into 100 chunks (1MB each). Each chunk goes to a different vault address in XOR space. This is done intentionally to decentralize the file itself.

So by having more addresses (vaults) you’re able to collect more chunks.
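In numbers, the split works out like this. (Illustrative only: the real splitting lives in MaidSafe’s self_encryption crate, which also encrypts chunks and has details like minimum chunk counts that I’m glossing over here.)

```python
import math

CHUNK_SIZE_MB = 1  # the 1MB chunk size used in the thread's example

def chunk_count(file_size_mb):
    # A file is split into fixed-size chunks; each chunk is then
    # routed independently to a different address in XOR space.
    return math.ceil(file_size_mb / CHUNK_SIZE_MB)

print(chunk_count(100))  # → 100 chunks, each routed independently
```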


I’m sure this is the next question, so I’ll answer it right now.

Why not just make all vaults 1MB?

The partial answer is the sigmoid curve reward ratio.

Quote from @dirvine

Sigmoid curve will allow very small nodes to get rewarded but encourage use up to average required storage. Otherwise we have all tiny vaults and that will be of no use to anyone. So sigmoid allows even small vaults to be rewarded but also encourages greater provision. A balance of reward versus requirement.

The other part of the answer is… hosting many 1MB vaults would be insanely resource-intensive, with a negative return on the Safecoin farmed.
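As far as I know the exact curve hasn’t been published, so the sketch below is just a generic logistic function to show the shape @dirvine describes: small vaults still earn something, reward climbs steeply around the network-average vault size, and extra size beyond that gives diminishing returns. The `average_size_mb` and `steepness` values are my guesses, not network parameters:

```python
import math

def farming_rate(vault_size_mb, average_size_mb=1024, steepness=0.01):
    # Generic sigmoid (logistic) curve: rises from near 0 for tiny
    # vaults, hits 0.5 at the assumed network average, and flattens
    # toward 1.0 for vaults well above the average.
    return 1 / (1 + math.exp(-steepness * (vault_size_mb - average_size_mb)))

for size in (1, 512, 1024, 2048):
    print(size, round(farming_rate(size), 3))
```

This also illustrates why a swarm of 1MB vaults loses out: each sits at the flat bottom of the curve, earning almost nothing while still costing RAM and bandwidth.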


okay thanks for clarifications :wink: I definitely will go back to the technical docs in the near future :slightly_smiling:
@dyamanaka @polpolrene


It’s not about stopping anyone :slight_smile: If you make something easier, more people will do it; make it harder, and fewer people will do it. So it’s just about whether we want to put something out there - the request or the software - that increases the average number of vaults people run overall.


Anyone else expecting an awesome update tomorrow? :smiley: Any predictions?


This is me right now.


That time of the week again

So exciting and getting a bit serious now…

Wonder if we’ll hear any confirmation of a testnet launch date? Oh my goodness… just can’t wait for that, and then T minus to live launch :slight_smile: weeeeeeeeeeeeeee!


:scream_cat: safe_vault



I wouldn’t get your hopes up too high; there are at least some relatively new problems, it seems:

We can’t have a testnet without a proper distributed hash table implementation. No clue how long this would take to fix, though; it might be pretty quick if we’re lucky. Still, it’s not easy stuff.


That makes tonight’s update even more tense :confused:

/gulp, hope it can be done quite quickly


Yeah, I personally wouldn’t bet money on a February release. On the other hand, the uncovering of these problems shows real progress in testing and the devs’ grip on the code base.