SAFE Network Dev Update - August 30, 2018


This sounds cool, but so far everything unexplained is explained by the word 'random'. I could argue all day that nothing is random. So if a vault is testing an upgrade at random periods, all you are testing is that the updated version (B) can talk the same language as the previous version (A). That's what interfaces and unit tests can tell you. If (B)'s behaviour changes, (A) is never going to be able to agree with (B), so the network can't upgrade until enough people are randomly using (B), reinforcing that (B) has the correct behaviour (this will take longer than 50% because they are all randomly trying the new behaviour). So the tipping point is still basically adoption. Unless the change has no impact on behaviour, in which case the upgrade is seamless.

So at a technical level you are going to use dynamic linking to provide updates? A bit like plugins, I guess. Are you only going to keep the previous and the "update" version loaded?

@dirvine honestly I’m just trying to work out how you expect aspects of this network to actually work as you intend. I feel resources for describing these things are limited. They really should be aimed at ‘look how simple this is’, not ‘look how clever we are’.

Yes, but which nodes do you pick from the ones that are waiting? I know, I know the answer is random

Okay, agreed, any device with an output is a server. Yet the network cannot be expected to work efficiently with 100 home laptops, both consuming and serving data. You are going to need servers, so let’s update that definition: an always-on computing device with dedicated resources for the sole purpose of serving others. Which you clearly expect to be an important part of the network, because node ageing builds trust based on ‘on time’.

:joy::joy::joy::joy: You totally just reminded me of one of my lectures I attended at uni, it was based on the same argument with a different example.


Some people like to give constructive criticism to help people understand and improve the design. Others just like to be unconstructive, throw barbs and not really care whether the points are valid or not. Just saying.


You only have to divulge a single ID, possibly a throwaway one, as in "Allow users to associate multiple identities (key pairs) to their account.", right?


Not with me though :smiley:

No (this is again absolutism; there are many ways to upgrade, e.g. we are speaking to the Skype dudes to see how they did it, etc.). Any node could speak both language A and B and, via multiformat or other means, know which language to speak. This is RFC territory though, and that RFC does not yet exist.
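For illustration only, one common pattern for a node that can "speak both language A and B" is to tag every message with a protocol version and dispatch on it. Everything in this sketch (the wire formats, the function names) is invented; the real upgrade mechanism is, as said, still RFC territory.

```python
# Hypothetical sketch: version-tagged message dispatch, so one node can
# handle two protocol "languages" during an upgrade window.

def decode_v1(payload):
    # Invented old wire format: "key=value"
    key, value = payload.split("=", 1)
    return {"key": key, "value": value}

def decode_v2(payload):
    # Invented new wire format: "key:value:checksum" (checksum ignored here)
    key, value, _checksum = payload.split(":", 2)
    return {"key": key, "value": value}

DECODERS = {1: decode_v1, 2: decode_v2}

def handle_message(version, payload):
    # Pick the decoder matching the sender's advertised version.
    decoder = DECODERS.get(version)
    if decoder is None:
        raise ValueError(f"unsupported protocol version {version}")
    return decoder(payload)
```

With this shape, an upgraded node keeps both decoders loaded and only drops the old one once the section has moved on, e.g. `handle_message(1, "name=alice")` and `handle_message(2, "name:bob:xyz")` both work on the same node.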

I appreciate that (a lot), but I think this list does not come across as us being clever in any way. It is a set of principles/foundations the network requires in our opinion. It has not changed but we felt it did need re-emphasising. Your explanations make it sound as though you are trying to be clever telling us there is only one way and then creating that way in what really needs to be a design discussion/doc (RFC). All we are saying is the network needs upgrades and may …

Whichever one is waiting at the time we want one in a section. Nodes will try to join and be accepted or rejected (probably doing resource-proof work etc.). Again though, this is RFC territory.

I don’t agree; after all, it is designed to. This is why there is so much work in Crust on NAT traversal, reliable UDP, etc.

If you mean computers, then I agree :smiley: :smiley:

Let’s not :wink:

:smiley: :smiley:



Are Safe Vaults servers or not: a discussion

For me it feels like Thursday to Thursday is getting faster and faster, damn you Einstein for discovering that time is relative. :joy:

@jonhaggblad Seems like you are getting up to speed, excellent, keep it up!!!


The SAFE Fundamentals list is fabulous.

It’s great to see, finally, a clear and unconfusing statement of safecoin distribution!!


I am not sure if I am on the same page here, as I am a complete newbie [; … however,
If we have reports in “text” we can write a parser that will convert these into JSON and then plot graphs via vue.js, d3.js, GoJS or similar style diagrams in real time… no?
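As a rough sketch of that report-to-JSON step (the "metric: value" report format is invented here purely for illustration; real reports would need a real grammar):

```python
import json

def report_to_json(report_text):
    # Parse simple "metric: value" lines into a dict, then serialise
    # to JSON for a plotting front end (vue.js, d3.js, GoJS, ...).
    data = {}
    for line in report_text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # skip blank or non-metric lines
        key, value = line.split(":", 1)
        data[key.strip()] = float(value)
    return json.dumps(data)
```

For example, `report_to_json("messages: 12\nlatency_ms: 3.5")` yields a JSON object with those two keys, ready to feed into a chart.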

Well, these things are already in dot syntax, so we can (and do) already trivially generate images from them with Graphviz.

The point of this task is to be able to read them and reconstruct our data structures (like the gossip graph) so that we can use it as input for functional tests.

So current workflow:

  • code generates data structure
  • code dumps graph in dot format
  • code generates graph from dot file
  • human reads graph

In addition to this, we want to add this workflow:

  • human writes graph in dot format
  • code reads file and generates data structure
  • human visualises graph as image and writes test
  • code ensures invariant based on test

I hope this explains the motivation :smiley:
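The parsing step of the new workflow could look roughly like this minimal sketch, which assumes only plain `a -> b` edge lines rather than full dot syntax (a real implementation needs a proper dot parser); it reads edges into an adjacency map that test code could then inspect:

```python
import re

# Matches bare "parent -> child" pairs; attribute blocks like
# "[style = invis]" are simply ignored by this pattern.
EDGE_RE = re.compile(r"(\w+)\s*->\s*(\w+)")

def parse_edges(dot_text):
    # Build an adjacency map {parent: [children]} from "a -> b" lines,
    # i.e. reconstruct a graph data structure from the dot text.
    graph = {}
    for parent, child in EDGE_RE.findall(dot_text):
        graph.setdefault(parent, []).append(child)
    return graph
```

So `parse_edges("a_0 -> a_1\na_0 -> a_2")` returns `{"a_0": ["a_1", "a_2"]}`, which a functional test can then assert invariants against.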


What is a particular user-written test supposed to test? Can you give an example?


Sure. The idea is to have humans easily express a subsection of a gossip graph (in dot format) so they can look for a certain behaviour.

For instance, if I want to test that a fork is handled properly, I can write:

digraph Example {
  subgraph Alice {
    alice -> a_0 [style = invis]
    a_0 -> a_1
    a_0 -> a_2
  }
  subgraph Bob {
    bob -> b_0 [style = invis]
    b_0 -> b_1 [minlen = 2]
    b_1 -> b_2
    b_2 -> b_3
  }
  alice, bob [color = white]
  a_1 -> b_1 [constraint = false]
  a_2 -> b_2 [constraint = false]
}

which renders roughly like this:

and then I can write test code that tests that the fork is detected and the author is punished.
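To make that concrete, here is a hedged sketch of what such a test might check: in a gossip graph a fork is one author creating two events with the same self-parent, so in the example above `a_0` having both `a_1` and `a_2` as children is the fork. The function name and graph encoding are invented for illustration:

```python
def find_forks(graph):
    # graph: {parent_event: [child_events]} for ONE author's events.
    # A fork is a parent with more than one child by the same author.
    return [parent for parent, children in graph.items() if len(children) > 1]

# Alice's events from the example graph above:
alice_events = {"a_0": ["a_1", "a_2"], "a_1": [], "a_2": []}
assert find_forks(alice_events) == ["a_0"]  # a_0 has two children: a fork
```

A real test would then go on to assert that the protocol flags the forking author for punishment once the fork is detected.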


Well, ehm, you would probably be looking for some kind of web-GUI microservice - a user-friendly tool - that lets you add nodes (alice, bob, n+1, using e.g. jQuery), then define directions from parents to objects, objects to objects, or a mixture of these as shown on the attached graph, including length metrics obviously. To do this it could be a tool like: Then the diagram created could be saved in JSON. Then the JSON file can be converted to dot again. And from dot you can again have “code generates data structure”.

I think I am missing the point of:

Which code generates what data structure? Specifically what is the data structure.

I think I am going too deep :smiley:


Well, ehm you would be probably looking for some kind of webgui microservice - user-friendly tool - …

We would if we were targeting end users. This is specifically needed for us developers to write tests for our functionality, so we find our text editors (vim for those of us who are correct and emacs for the others) to be the most user-friendly tools to input information.

Which code generates what data structure? Specifically what is the data structure.

The parsing code that we are implementing reads the dot syntax and populates the data structure we want to test (namely, a gossip graph).


I think I am going to stop here…

Have a good weekend [;


Of course, we do visualise our dot files if that’s what you’re hinting at.
I use dot from the command line and also sometimes to see it rendered in real time :smile:


I was troubled by this when it was mentioned in a previous topic. It seems like the control system is being over-constrained. Not specifically the case I just quoted, but the reverse case, where the network decides it does not need more resources.

My original understanding of the control loop was one where the network would raise and lower the cost of puts as needed, so that humans would close the control loop and selfishly choose to increase or decrease storage to take advantage of the new cost incentives. This felt right to me.

Now, it appears that the intent is to do that … but to also limit users from starting new vaults, if the network doesn’t feel like it needs them. This is to rate limit attackers from batch starting millions of vaults.

I am fearful that the rate limiting of new vault creation will override the original control scheme of the network raising and lowering the PUT price to entice humans to add/remove vaults. Now you have a situation where the network says “I don’t need any more vaults, so I will only very slowly allow new vaults to be added”, which will keep the cost per PUT from ever needing to be reduced. Won’t you end up with a ratchet, where the cost per PUT continues to climb? We wanted the price per PUT to fall as the network grew, to encourage more people to use the network. If the network, instead of reducing price, opts to reduce vault creation, you never reduce price, new users never show up, and the network stagnates.

I’m concerned that there are now two mechanisms that affect the rate of vault population change, and about the possible dynamics of having the rate limiting affect the price control loop. I do like the idea of rate limiting attackers, and I’m sure that the two controls can be made to work together, somehow, if done carefully.

EDIT: Perhaps humans will save the day here, and I am worried for nothing. If I am a human (I am) who knows how difficult it will be to get a vault going (because of rate limiting), I will think long and hard before I decide to turn off that vault. So perhaps a supply glut will indeed still occur, even with rate limiting, thus allowing the price/put to be reduced as per the original control mechanism.
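To make the worry concrete, here is a deliberately toy model (every number in it is invented, and it is in no way the actual network economics): the PUT price rises when spare capacity is low and falls when it is high, while a rate limit caps how many queued vaults may join per step. With a tight cap, supply cannot respond and the price can only ratchet up; with a generous cap, supply responds and the price can fall.

```python
def simulate(steps, join_cap, demand_per_step=10, vault_capacity=100):
    # Toy feedback loop: price tracks scarcity; humans queue to add
    # vaults when price is high, but the network only admits
    # `join_cap` new vaults per step (the rate limit).
    price, vaults, stored = 1.0, 5, 0
    for _ in range(steps):
        stored += demand_per_step
        utilisation = stored / (vaults * vault_capacity)
        if utilisation > 0.5:
            price *= 1.1          # scarce: raise PUT price
            vaults += join_cap    # admit queued vaults, up to the cap
        else:
            price *= 0.95         # plentiful: lower PUT price
    return price, vaults

tight_price, _ = simulate(50, join_cap=0)   # no new vaults admitted
loose_price, _ = simulate(50, join_cap=5)   # supply can respond
```

In this toy run `tight_price` ends well above its starting value while `loose_price` ends below it, which is exactly the ratchet-versus-relief dynamic described above; whether the real network behaves this way depends entirely on parameters no one has yet.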


Anybody who runs BitTorrent on their computer is serving content to peers in a similar manner to a SAFE Vault, except without the anonymity that SAFE provides. Have you ever met ANYBODY who calls their personal computer that happens to have a BitTorrent client running on it a server? Why then would somebody whose personal computer happens to be running a SAFE vault suddenly define it as a server?


Knowing that you would potentially have to queue to join the network with your vault, would you be more inclined to run your SAFE vault from a machine that is less likely to be switched off?

The ecosystem promotes using a server to run a safe vault.


You are a lawyer, aren’t you?


Great update! I just now got around to reading it. I’m pumped about the SAFE network fundamentals, it’s critical that our message is tight and consistent when we share with others.

Congratulations @Shankar on the wedding!!!

While I love quality discourse as much as the next person, I feel that @zeroflaw may have veered us away from it altogether. Let’s keep this productive, please.


I’ve read the whole topic.

I really like critique, I like when people are able to identify and point out discrepancies, contradictions or just black spots on the map.

I think your critique is rubbish @zeroflaw. Some said thank you, but from the very start the quality of your objections has been very poor. The main reason for this is that you have done very little reading up on the fundamentals of the technology, and headed right in with the assumption that you don’t need to; that you would be able to formulate relevant critique without it. And this exposes a proclivity of yours: to assume an effortless understanding is possible. All following correspondence confirms and shows that you are quite convinced that you have all the information you need to correctly assess these topics. This is very different from how curious, inquisitive, exploratory minds work, which I find to be the best at formulating quality critique.

I have a couple things I could critique about MaidSafe or various parts of what they are doing. I don’t find it very important (that’s why I don’t rant about it at length), but they are things I feel could be criticized.

  1. I find it a very, very ambitious statement that the software would know its bugs. It might hinge heavily on the definition of a bug, and sure, software can very well detect undesired results and repair itself. But a bug might be precisely that it is not able to correctly identify such results. A bug is something that stops what you think works from working. I just find it a bit too much to state that the upgrading software will know anything that is wrong, and know what to do about it. That’s perfection, an absolute.

  2. When presenting what the SAFENetwork will do, I would see it as more fitting to say something like “We believe it might be possible, and will try to find a way ...” => “.. to pay the providers and maintainers.”. That is very different from saying “The network _will_ ..”, because it is under research, there is no clear idea yet of how, and no one is certain that it is possible to pay maintainers in the decentralized, secured and autonomous manner that is the ethos of the project. That to me is much more honest about what this whole project is doing, where we are, what we have been doing up till now, and where we are heading.

  3. Having a reward fixed at 5% or 10% seems very arbitrary to me. Sure it can be a “good enough” estimation, and any pragmatist knows those are always necessary and required. But is it really not an area too important and vital for such a coarse approach?

Finally, I would like to direct myself at MaidSafe technical people (as a group), and say I am a little disappointed you spent 20-30 (I’m guessing, I didn’t count) posts debating with the likes of @zeroflaw, while giving absolutely zero response to my polite request for your intentions, ideas, visions and knowledge on the topic of SQL databases and SAFENetwork. (Yes, there is a short “what he said” kind of response to my question about Azure-like functionality on SAFENetwork.) I have been waiting. I thought at first that it was the decentralized web summit taking up time. But now I just think it wasn’t important enough. This debate on what a server really is, is a lot more important.

It makes me kind of sad really.

EDIT: I forgot to say one thing. I think it is a very sympathetic thing that such busy people partake in the cleaning up and sorting out of even trivial misconceptions, or repeated misrepresentations of the tech and related topics. And I see how it is hard to draw the line, so it very easily goes on longer than you wished. But I would have loved to see that attention directed towards, for example, the topic I mentioned (and as a matter of fact I consider that better-spent time).