Profiling vault performance

Thanks for the clarification; I was wondering whether it was real, a simulation, or whether you were dipping into the testnet. But even on a private network, the lack of a time series (how performance evolves over time) makes it largely irrelevant to the current stage of development. SAFEnet testnets nearly always start out promising and then go downhill. Does your network do that? If so, please point me to where you describe it.

This topic is about Profiling vault performance. I removed several personal back-and-forth posts between people. Please keep on-topic.


Chill @bluebird, this kind of emotion leads to missing stuff. I find this info helpful, and I kinda design these things; Andreas finds it helpful, and he is the team leader of Routing (and a very exact thinker). Perhaps that is good enough? Then if others in the community can see benefit, it’s even better.

If you don’t, then that’s also cool. If you see an error, then be specific in helping identify it, and as unemotional as possible; then we will all gain.

It’s easy to ignore something that is very important otherwise.


Yes it does do that, as stated in a previous post:

There’s also a chart and logs showing the slowdown.

I understand you’re referring to longer time-spans and not over the course of just a single file, but my test can’t pass a single file upload so it doesn’t make sense to test longer than that yet. You’re just a couple of steps ahead by trying to understand testnet problems. I’m not there yet!

Please read the posts. My previous response to you about the test environment was also covered in an earlier post. I’ve also asked repeatedly, at the end of each one, for suggestions about how to improve the testing, but so far I have yet to see any offered by you. If you have suggestions for how to improve the performance or testing of vaults, I’d be keen to hear them. Otherwise I’m afraid I’m a bit unclear as to the purpose of your posts.


As I dug further into routing, it became important to explore and understand the message-processing queue.

These stats are generated using the same test as before, uploading a single 655 MiB file (I was using 686 MB earlier; I am now switching to MiB since chunks are measured in MiB).

Routing Events

Hop messages dominate incoming routing events. Here’s the frequency of received events:

Total events: 844663
                            Source   Duration (s)     Count
                      Message::Hop   11623.452889    771594
           Action::NodeSendMessage    1155.635871     10202
     Message::Hop::UserMessagePart     627.514541     33307
       CrustEvent::BootstrapAccept      27.649895       689
                   Action::Timeout      21.810442       470
                 Message::Hop::Ack       8.026048       728
                   Message::Direct       3.038348       607
                      Action::Name       2.441469     22900
        CrustEvent::ConnectSuccess       1.775223       239
              CrustEvent::LostPeer       1.740036       741
                           Unknown       1.655566       939
CrustEvent::ConnectionInfoPrepared       1.652124       247
                Action::CloseGroup       0.175473      1993
        CrustEvent::ConnectFailure       0.001992         4
       CrustEvent::ListenerStarted       0.000365         1
                 Action::Terminate       0.000072         2

Hop Frequency

Hops account for about 95% of all received events. Hop messages are heavy on signature verification.

Since the file splits into about 655 one-MiB chunks, that works out to an average of about 1200 hop messages per chunk!

With a group size of 3 and quorum size of 2, how many hops should be expected? I need to get a better understanding of the architecture to answer that; maybe someone can shed some light?

Crypto Workload

My hope is to understand how much reduction in upload time might be possible if crypto signing / verification / hashing were offloaded to some mythical ‘instant’ hardware module. This may give some idea of how much non-crypto overhead there is to address.

Signature Generation
Total events: 2101191
Total duration: 1060.10999838
                            Source   Duration (s)     Count
                     HopMessageNew     677.496887   1193402
                  SignedMessageNew     382.492785    907452
                  SendNodeIdentify       0.095010       287
                SendClientIdentify       0.025316        50

Signature Verification
Total events: 806283
Total duration: 672.703389051
                            Source   Duration (s)     Count
                  HopMessageVerify     672.454186    805927
              VerifySignedPublicId       0.208050       302
        StructuredDataCheckAllKeys       0.041153        54

Hashing
Total events: 591238
Total duration: 469.917269425
                            Source   Duration (s)     Count
                  GroupMessageHash     359.539391    586913
               ImmutableDataDecode     110.364227      3930
                       PublicIdNew       0.003636        44
          HandleGetNodeNameRequest       0.003110       101
             Authority::ClientName       0.002464        69
            CalculateRelocatedName       0.002045       101
           HandleBootstrapIdentify       0.001315        43
              HandleClientIdentify       0.000965        36
                     NodeFirstName       0.000116         1

The total time spent doing crypto operations is about 36m. But this is deceptive since many of these operations will be / should be performed in parallel, so it’s difficult to compare directly with the total upload time.

Best case

Assume the best-case scenario (ie most benefit) for eliminating time spent on crypto.

This scenario happens if all crypto operations are synchronous. Thus the total amount of upload time spent on crypto would be about 36m (1060+672+469)/60.

There would still be at least 24m of upload time (60m - 36m) even if crypto were instant. So there is definitely room for improvement in the handling of messages.

Worst case

Assume the worst-case scenario (ie least benefit) for eliminating time spent on crypto.

This scenario happens if all crypto operations happen asynchronously across all 28 vaults at the same time. Thus the total amount of upload time spent on crypto would be about 1m19 ((1060+672+469)/60/28 ≈ 1.3m).

There would still be about 60m upload time for this 655 MiB file.

Crypto Conclusion

During the 60m upload there is somewhere between 1m and 36m spent on ‘unavoidable’ crypto operations. The exact amount depends on the degree of parallelism between vaults.
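The bounds above can be reproduced with quick arithmetic. A sketch (the durations are the profiled totals from the tables above; the 60m upload time and 28-vault perfect parallelism come from the best/worst cases as described):

```python
# Profiled crypto totals from the tables above, in seconds.
signing = 1060.11       # signature generation
verification = 672.70   # signature verification
hashing = 469.92        # hashing / decoding

total_crypto_min = (signing + verification + hashing) / 60

# Best case for optimisation: all crypto is serial on the critical
# path, so eliminating it saves the full ~37 minutes (quoted above
# as "about 36m").
serial_crypto_min = total_crypto_min

# Worst case for optimisation: crypto runs perfectly in parallel
# across all 28 vaults, so the wall-clock cost is only ~1.3 minutes.
parallel_crypto_min = total_crypto_min / 28

# Even with instant crypto, roughly this much upload time remains.
non_crypto_min = 60 - serial_crypto_min
```

The remainder comes out near 23m; the post's round figure of 24m uses the rounded 36m crypto total.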

These stats look at the network as a whole, so there may be additional insight to be gained by looking at the loads on individual vaults; eg some nodes have much higher load than others, so they should be targets of further analysis.

On a related note, RFC-43 async safe_core seems to be aimed at improving processing speed, which is very exciting.

Lower Bound

For comparison, simply copying the file over the network takes about 45s (using scp). That means there’s about 70-fold more time spent on ‘stuff that isn’t client upload’ due to the operation of the safe network.

Measuring Improvements

Performance can be improved with some simple changes. I’m not suggesting these changes be incorporated into production, simply trying to pinpoint where the bottlenecks exist and their magnitude.

When both changes were incorporated, the test was slower than expected, which seems to be due to variation between identical tests. It would be best to perform tests on a deterministic network (which would require big changes to the codebase) or multiple times with an average / standard deviation (see next steps below).

In summary, most of what I have investigated may not be directly applicable to production vaults, but the numbers do indicate areas that may be of interest. Such areas include:

  • save files to disk as a background process
  • message architecture may be optimized (ie regardless of the speed of the underlying crypto operations)
  • hop messages (and the required signing) may be optimized, since they’re the biggest message type by both volume and duration.

My next steps are

  • upgrade to newer versions and compare performance with old versions
  • test how much variation there is by running the same test multiple times
  • consider a set of metrics that might be applied to production vaults to indicate bottlenecks with real-world loads. This will hopefully result in optimizing for real usage and not contrived scenarios as I have been exploring so far.

With a group size of 3 and quorum size of 2, how many hops should be expected?

Currently routing splits every 1 MB chunk into 50 individual messages, and every group member sends a message for each chunk (either the chunk itself or its hash), which would be a factor of 150 for messages sent by a group, and a factor of 50 for messages sent by an individual node.
Then each of those messages gets routed, so the number of hops would be the average route length (~ log2(network size)) times 150.
Every message gets an acknowledgement, so you’d have to roughly double that number again.

And then e.g. a Put request goes from the client to the client manager and from there to the data manager, so that would be one node message and one group message, causing 400 times the average route length in hops, I think.
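The estimate above can be sketched numerically. This is my own back-of-the-envelope version of it; the 50-part split, group size 3, and ack doubling are from the explanation above, and network_size = 28 matches the test network:

```python
import math

network_size = 28        # vaults in the test network
group_size = 3           # as configured in these tests
parts_per_chunk = 50     # routing splits each 1 MB chunk into 50 messages

# Average route length is roughly log2 of the network size.
route_len = math.log2(network_size)

node_factor = parts_per_chunk                # messages sent by a single node (50)
group_factor = parts_per_chunk * group_size  # messages sent by a group (150)

# A Put is one node message (client -> client manager) plus one group
# message (client manager -> data manager); every message is also
# acknowledged, which roughly doubles the count: (50 + 150) * 2 = 400.
hops_per_chunk = (node_factor + group_factor) * 2 * route_len

print(round(hops_per_chunk))  # ~1900 for a 28-node network
```

For comparison, the profiling earlier in the thread observed about 1200 hop messages per chunk, the same order of magnitude as this rough estimate.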

But there are also other messages sent by routing itself to learn about new nodes, establish connections etc., that have nothing to do with the data itself.

(I know, there’s a lot of room for obvious improvements here. But we have to make the network fully secure and resilient first, before we start implementing optimisations.)


This post is about testing variability and repeatability.

Software Versions

Vault 0.11.0
Routing 0.23.4
Launcher 0.8.0
DemoApp 0.6.0
SafeCore 0.19.0

Changes from default operation

group size: 3
quorum size: 2
upload / storage limits: extremely large
remove one-vault-per-lan restriction


  • Load and start 28 vaults on a network of 7 pine64s.
  • Create an account using random password / secret.
  • Upload 655 MiB file via the demo app (ubuntu-16.04-server-amd64.iso).
  • Record the timing of the upload.
  • Stop and delete the vaults.
  • Reboot pine64s and repeat for a total of ten identical tests.


Test  Time (m)
   1      59.5
   2      59.6
   3      54.7
   4      55.5
   5      54.6
   6      52.5
   7      65.6
   8      55.4
   9      51.6
  10      59.9

Min: 51.6
Max: 65.6
Average: 56.9
Median: 55.4
Standard Deviation: 4.2
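The summary figures can be checked directly from the table of ten runs (the median computes to 55.45, quoted above as 55.4; the standard deviation is the sample standard deviation):

```python
import statistics

# Upload times (minutes) for the ten identical tests above.
times = [59.5, 59.6, 54.7, 55.5, 54.6, 52.5, 65.6, 55.4, 51.6, 59.9]

print(min(times))                                 # 51.6
print(max(times))                                 # 65.6
print(round(statistics.mean(times), 1))           # 56.9
print(round(statistics.median(times), 2))         # 55.45 (quoted above as 55.4)
print(round(statistics.stdev(times), 1))          # 4.2 (sample std deviation)
```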

I was quite surprised by the degree of variation, considering the network is a completely isolated / controlled environment. Factors that may contribute to this variation are

  • arrangement of nodes relative to each other due to the randomized naming process
  • entry point to the network for the client due to the randomized login credentials
  • message routes and message queue length, thus processing demand and delays due to blocking
  • processing load due to the ‘heavy’ processing nodes being on different vs the same pine64
  • the Edimax ES-5800M V2 switch being used has three different priorities depending on the physical port on the switch.

Factors that probably do not contribute to variability are

  • ram vs swap - the 2GB of ram per pine64 is never fully consumed
  • disk speed - all devices have the same brand / model of microsd
  • network speed - network cables are the same length and brand of cat6
  • churn - there should be no network churn during the upload since vault names are the same at the start and end of the test
  • other running processes - the devices are dedicated to this test with no other processes running (except to keep the os running of course!)

It’s a little puzzling why there is so much variation. I assume it’s mainly due to differences in the vault names, and thus the topology that messaging must negotiate, but it’s hard to know without measuring.

The main takeaway for me is that the effect of changes to the codebase should be measured using averages over multiple tests, since the error on a single test may be quite significant (much much larger than I initially thought).

I did a second test where the file was uploaded, deleted, then re-uploaded multiple times. These tests also showed an unusual amount of variation. In this test the vault and client naming is identical between runs, so the messaging patterns between vaults should be very close if not identical. Yet there was still significant variation.

In summary, there’s much less consistency in upload time than I would have expected, which must be considered when measuring the effect of changes to the codebase.


Is there any advantage to this variability? Can it be used in any way?

The creative in me says yes, there are ways it may be used.

However the engineer in me says no. Randomness (when required) should be taken from a source with a known quality of randomness. Since this variability may (will?!) be reduced in the future, it should not be used as a source of randomness.

Sorry for late reply, I’ve been away :slight_smile: :palm_tree:


That makes a lot of sense to me. Along that line, we are aware that SAFE has a minimum-number-of-nodes threshold for security. Above that threshold, or above some measurable and consistently knowable quality of randomness, could this randomness be usable and potentially reliable? I understand we want to eliminate it and don’t want to rely on it, as that could be a point of breakage or vulnerability. Still, if I’ve understood correctly, new approaches use noise as a channel, and it makes me also wonder about noise as a clock or time signature: say, an index of the network’s potentially unique noise, gathered sequentially, as a clock. All the nodes of another network might have a tough time being in the same place at the same time. The network’s biometric print across time? It could be hard for another network to spoof or estimate this to internal precision.


This test focuses on the effect of group size and quorum size on upload time and network performance.


As usual, the results first:

Upload time increases dramatically as group size increases.

Upload time does not change significantly as quorum size increases.


This test uses the same software versions as the last test, centred on vault 0.11.0, modified only for additional logging.

The file uploaded for these tests is Hippocampus (downloadable from vimeo), and is 96.2 MiB.

  • Compile vault with custom group and quorum size
    • Deploy and start 28 vaults on 7 pine64s
    • Wait for network to be formed
    • Upload 96.2 MiB file using demo_app
    • Record the time to upload and the number of hop messages received by vaults
    • Stop vaults and remove old logs
    • Repeat 5 times with the same group/quorum size
  • Repeat with a new group/quorum size

The tests spanned group sizes from 4 to 14 and quorum sizes from 2 to 12.

The results below show the median from five repetitions.

Upload Times

How many minutes did it take to fully upload the 96.2 MiB file? (I’ll put a dynamic table here if the forum can allow <table> tags)

Hop Messages

How many million hop messages were received by all vaults? (I’ll put a dynamic table here if the forum can allow <table> tags)

Variation Due To Group Size

The effect of different group sizes can be observed by keeping the quorum size the same. There is a strong increase in upload time and the number of hop messages as the group size increases.

Variation Due To Quorum Size

There is relatively little increase in upload time as the quorum size increases. Hop count does seem to increase slightly, but with much less effect than changing group size.

Raw Data

The raw log data for all tests is available on the alpha network at http://www.mav.safenet/2016_10_10_group_size_test.7z. Download is 80 MiB, and when decompressed is about 4.8 GiB. For a more permanent link, it’s also available on mega.


These results seem to match intuition, but the increase in upload time was more than I expected.

It’s reassuring to see quorum size only affects security / durability of data and is not a factor in network scaling (as expected).

I’m not sure how disjoint groups will perform, since there will be significant variation in group size. Not saying it’ll be bad, just saying I don’t know.

As @AndreasF pointed out in a previous post about expected number of hops, there is room for improvement on hops but security must come first. Will disjoint groups improve the hop messages situation?

Does the small size of my network (28 vaults) affect the results? I wouldn’t think so, since any node only sees part of the network anyhow, but I’m open to speculation about this.


  1. To compare with the alpha network (group 8, quorum 5): it took 8.5m to upload the same 96.2 MiB file to the alpha network. This is an average rate of 1.55 Mbps. My connection can reach 35.16 Mbps upload, so it wasn’t saturated. I reckon that’s pretty good performance, at least compared to the pine64 network, which took about 20.5m for similar group and quorum sizes. I’d say this is due to two main differences: alpha has more nodes, and the nodes are (presumably) on more powerful machines than the pine64s I’m using.

    This result was a surprise to me. I expected alpha to be slower than my dedicated network. It shows that CPU performance matters when latency is low.

  2. The received hop message count includes those required for network startup, account registration, etc. (the point being that this was consistent across all tests).

  3. There was a huge variation within the results of each configuration, which is still very surprising to me. Hover over any of the cells in the tables above to see what I mean (hovering shows the data for all five tests).

  4. The last 1% (going from 99% to 100%) always takes much longer than any other. If a normal percent passes in 25s, the last percent passes in about 400s. What is happening in that last 1% that takes so long? It’s very frustrating!

  5. The first 10% (going from 0% to 10%) is almost instant (300ms). I don’t know whether this is because the progress bar is faulty in the demo app or if the vaults are really saving 10% of the file very quickly.

  6. The amount of time for the network to start (ie populate the routing table) increased as group size increased. I didn’t measure it, but subjectively it was quite noticeable.

  7. There appears to be a loose correlation between upload time and hop message count. This wasn’t the point of the test, but anyway here’s the chart for anyone curious:

  8. The Disjoint Groups RFC has an interesting point in the drawbacks section related to these tests:

    The group message routing will involve more steps, either doubling the number of actual network hops, or making the number of direct messages per hop increase quadratically in the average group size!

    Although this seems to be addressed in the Test 11 update:

    we will implement the new group message routing mechanism. It will be slightly different from the one specified in the RFC, however, delivering the same level of security but without the huge increase in the number of hops or hop messages.

    It’s still not clear how this will perform relative to the existing hop mechanism but it is being considered.

  9. There were some tests that did not complete and were discarded. This was either due to very slow bootstrapping of the network (mostly with large group sizes), or the demo app showing a Request Timeout error.

  10. The safe network is amazing. Seeing it come together is a real privilege.

Main Point

My main takeaway from these tests is that the underlying performance characteristics of the safe network arise from it being primarily a messaging platform, with data storage ‘merely’ the end result. Messaging is the key. This conclusion is not surprising in hindsight, but the tests have shifted my balance of thought strongly toward messaging and away from data.


A fascinating read. We all owe you a big thanks for the time and work put in!


Awesome, thanks for the detailed analysis!
Most of the numbers are intuitive, although something weird seems to happen with quorum size 2, going from 8 to 10 nodes.

Disjoint groups are not about performance, and at first, they may well impact it negatively: the average group will probably be at least 150% of the minimum group size and we’re not working on optimisations yet. That RFC is about defining the web of trust of the node keys, and how to authenticate message senders.

But in many cases, those two things - the cryptographic web of trust and the actual web of TCP connections or who sends direct messages to whom - won’t need to coincide, and that’s where we’ll be able to significantly reduce the number of hop messages in the future: if A signs something for B it doesn’t matter what actual route that signature takes in the network. In theory, not even nodes in the same group have to be fully interconnected. The notion of “connected” that the RFC really refers to is just “having each other’s public key”.


This test is around performance of vaults with structured data. I admit to being slightly provocative because it may be compared to the bitcoin transaction rate (which is currently under some heavy contention).

The test is to modify structured data as rapidly as possible. This should hopefully find the maximum transaction rate for my personal safe network (which isn’t especially powerful).


SafeCoin will be implemented as structured data, and is intended to scale very efficiently, both in cost to the end user and in load on the network.

Since the cost to update structured data is zero, it’s worth investigating the impact of high loads on the network, and what the transaction rate of SafeCoin may be.


Using Test 11 software versions, which are required for the low-level APIs.

  • safe_launcher 0.9.2
  • safe_core 0.22.0
  • safe_vault 0.12.1 (assumed from release date of 2016-10-19)
  • routing 0.27.1
  • custom script to modify structured data

Modifications were only to remove the check for peers on the LAN. Account limits don’t matter for this test, since updates to SD don’t count toward usage. Group size is 3 and quorum size is 2.

Hardware for the network is same as all prior tests:

  • Network of 7 pine64s running 4 vaults each on gigabit ethernet
  • Client is laptop with quadcore i7-4500U @ 1.80GHz, 8GB RAM uploading on wifi at 300 Mbps


The general idea is to rapidly modify structured data until an issue comes up.

The test script does the following:

  • Create a structured data containing 30 random characters
  • Measure the time taken to update that data X times with 30 new random characters
  • Repeat several times to obtain an average update rate
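The script itself wasn’t posted, so here is a minimal sketch of that measurement loop. The update_sd stub stands in for the real structured-data update call through the launcher; its name and the launcher API details are my assumptions, not the original code:

```python
import random
import string
import time

def random_payload(n=30):
    """30 random characters, as described in the method above."""
    return ''.join(random.choice(string.ascii_letters + string.digits)
                   for _ in range(n))

def update_sd(payload):
    """Stub: in the real test this was a structured-data update sent
    via the safe_launcher low-level API (details hypothetical)."""
    pass

def measure_update_rate(n_updates=100):
    """Time n_updates back-to-back updates; return updates per second."""
    start = time.perf_counter()
    for _ in range(n_updates):
        update_sd(random_payload())
    elapsed = time.perf_counter() - start
    return n_updates / elapsed

# Repeat several times to obtain an average update rate.
rates = [measure_update_rate() for _ in range(5)]
avg_rate = sum(rates) / len(rates)
```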


Update Rate

How long does it take to update SD, and how many updates / second can be achieved?

Whether updating once or 100 times in rapid succession, the rate was usually between 120 and 130 updates per second.

However there was some variation where up to 300 u/s were seen.

The good news is that increasing the load did not change the update rate (as expected).

CPU load across all vaults was negligible.

CPU load on the client was 100% of one core (presumably due to the single-threaded nature of the launcher and safe_core).

Maximum Updates and Failure Point

The maximum number of updates before an error showed was about 1000 updates in a row.

The error returned is simply ‘EOF’, so the launcher server seems to be dropping requests (ie it’s not the vaults flaking out). This is good news in one way (the launcher is a relatively easy fix) but bad in others (the launcher should be more stable than this). The other aspect of this bottleneck is that it makes it difficult to test the limits of vaults with respect to DOS via SD updates.

I had intended to test many concurrent requests to the launcher, but it threw errors with only a low synchronous load, so I didn’t see much point in pushing it further.

Hopefully the revamped authenticator and safe_core will expose a more robust interface to the network.


The update rate for structured data is between 100 and 300 updates per second per user (on my private safe network).

The current bottleneck to increasing this is the launcher, which throws errors when the load becomes too high (around 1000 updates in a row).

It costs nothing to update structured data, and it can be updated at a fairly fast rate. This leads to the notion that some vault rate limiting may be required when dealing with structured data updates.

I’m not terribly satisfied with this test, as it doesn’t really profile the vaults at all. But it satisfied a curiosity about the approximate order of magnitude to expect for structured data performance.


  • The launcher UI froze after running these tests. The server was still running but the tabs wouldn’t change. I assume this is from the logging tab which has a lot of work to do in a very short time. Once the UI locked it never recovered, needing to restart the launcher to make the UI work again.
  • The value of the SD was not checked, so it’s assumed it was updated to contain the correct value and there were no race conditions. This is a pretty big assumption, but these details were outside the scope of this test.
  • I didn’t test this on the live test 11 network.
  • The rates in this test are for a single user, and the overall network transaction rate would presumably be much higher with many concurrent users. I didn’t get into modifying the launcher to test this, mainly due to the imminent overhaul to safe_launcher > authenticator and structured_data > mutable_data.
  • Other simultaneous upload activity such as immutable data may affect this rate in the real world.

You are doing absolute great work for SAFE. Thank you very much :+1:.

Is it safe to say that each Disjoint Group (say between 8 and 20 nodes) can handle at least 100 tx/s? And does this mean we could scale to 10,000 tx/s with 100 groups? Like you say, it all depends on how many other structured data objects need to be handled by that group as well, but this looks promising.


In this test the group size is 3 and the quorum size is 2; with a bigger group size we can expect the number of SD (MD) updates to be lower, because more messages between nodes are needed to reach consensus.

But even if we divide the final number of transactions by 10 or 20, a single disjoint sector (group) is capable of more transactions than the whole bitcoin network. Multiply by hundreds, thousands or millions of sectors, which the network is able to support, and the network capacity is simply amazing.


As @digipl says, there are a few factors that affect the tx rate…

  • Group size and quorum size will be larger on the real network and thus expected to be slower than my test network. However since the vaults in this test never came anywhere close to breaking a sweat the result of 100 tx/s is extremely conservative (from a vault perspective). The actual tx/s that vaults can handle would be much higher but couldn’t be tested because the launcher couldn’t handle that much data. To put out a haphazard guess: since cpu load didn’t change on the vaults during testing and is measured in 0.25% increments, there’s potential for at least a 400-fold improvement in tx rate from the vault side taking the estimate to at least 40K tx/s/group.

  • Global transaction rate increases as the network size increases (ie number of groups), so there is no upper bound on global transaction rate. This is an amazing property of the safe network and is so different to bitcoin that comparing tx rates between the networks is basically an instant red-flag for trolling (guilty!).

  • The rate will be affected by other work groups have to do such as storing immutable data, churn etc. However prioritizing certain data may help retain the high overall tx rate (if that’s considered a priority in the first place).


Vaults with Disjoint Sections

It’s been a long time between tests, so let’s see how vaults perform with the new Disjoint Sections feature.

Results first: I couldn’t get the network started. But the test was still very interesting for other reasons.


Same versions as Test 12.



Same as prior tests:

  • Start 28 vaults on 7 pine64s
  • Time the duration to upload a large file (ubuntu-16.04-server-amd64.iso 655 MiB)
  • Repeat nine times and take median upload time


The network never got started so no uploading could be done.


Resource Proof

v0.13.0 of the vault introduces another new feature besides Disjoint Sections: resource proof (ie upload speed must be at least about 6 Mbps)

Resource proof finds its way into the vault via the routing source code.

Of most interest is the verify_candidate method.

The candidate must provide a certain size of data (RESOURCE_PROOF_TARGET_SIZE = 250 MiB) in a certain amount of time (RESOURCE_PROOF_DURATION_SECS = 300 s) which ends up being a little over 6 Mbps.
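The implied bandwidth can be checked from those two constants. The figure depends on whether you use decimal or binary megabits, but both land near the quoted ‘little over 6 Mbps’:

```python
# Resource-proof constants quoted above, from the routing source.
RESOURCE_PROOF_TARGET_SIZE = 250 * 1024 * 1024  # bytes (250 MiB)
RESOURCE_PROOF_DURATION_SECS = 300              # seconds

bits = RESOURCE_PROOF_TARGET_SIZE * 8
decimal_mbps = bits / RESOURCE_PROOF_DURATION_SECS / 1_000_000  # ~6.99
binary_mbps = bits / RESOURCE_PROOF_DURATION_SECS / (1 << 20)   # ~6.67

# The later test lowers the target to 1024 bytes over the same window:
low_bps = 1024 * 8 / RESOURCE_PROOF_DURATION_SECS               # ~27 bps
```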

This is an overly simple calculation, since there are other factors of timing to consider such as the time taken to traverse routing paths. From the source code comment: “This covers the built-in delay of the process and also allows time for the message to accumulate and be sent via four different routes”. This added complexity is very interesting from a security perspective, as it potentially allows nodes to alter the perceived performance of other nodes on their routing path. Enabling the network to ‘monitor itself’ creates many interesting considerations. The links above are a good starting point for more detail.

Private safe networks would most likely want to modify these parameters to make it faster to join the network, although at this stage the bandwidth requirement should be trivial to complete for any local network.

The implementation is quite elegant, and it is easy to see how it could be extended to a CPU proof of resource.

My main doubt about the current resource proof is that it uses the SHA3-256 hash function as the basis of proof (with trivial difficulty), yet the majority of current operations on the network are signature verifications. The real-world performance of a node (especially one with custom hardware) depends on signature verification, so proving fast hashing isn’t necessarily going to determine how useful its ‘real’ performance will be on the network. Again, this is a slightly-too-simplistic look at things, but it is a starting point in the consideration of resource proof. Hashes are perfectly useful as a means to measure bandwidth, but I have doubts about how long that will remain true, given their disconnect from actual resource demands.

Proof Of 6 Mbps

I first tried running the vaults with the original 6 Mbps proof setting. The gigabit network should trivially handle this proof, and the logs showed the expected message:

Candidate XXXXXX… passed our challenge in 0 seconds.

However shortly after, log messages began showing the challenge taking some nodes 10+ seconds.

The consecutive times (in seconds) to pass the challenge, taken from the first vault log, were: 0 0 1 2 2 4 7 5 8 6 10 12 8 31 18

This is still way under the 300s threshold, but the variation can apparently grow quite large. It raises the questions ‘what exactly causes it?’, ‘how far can it go?’, and finally ‘can it be exploited by other nodes to their advantage?’

The variation is concerning to me, but resource proof is a complex topic and one I’ve only just started exploring. I’m sure there will be many interesting developments in the near future as the topic is explored further.

As to a reason for this delay… subjectively, there were a lot of log messages ‘Rejected XXXXXX… as a new candidate: still handling previous one.’ Unfortunately I don’t have the time to investigate this more deeply just now.

It’s tempting to draw conclusions from observations, but I think it’s important to take observations as-is and not make incorrect assumptions about the potential causes. I’m not familiar enough with resource proof to draw much meaning from this test, but find the observations interesting in their own right.

Ultimately my network never got started with the 6 Mbps resource proof. The largest I saw the routing table reach was 7, from a sample of five attempts, each given about half an hour to start.

Proof Of 27 bps

I changed the proof setting from

TARGET_SIZE = 250 * 1024 * 1024 = 6 Mbps

to

TARGET_SIZE = 1024 = 27 bps

The reason for lowering the target size rather than increasing the allowed time was that I wanted the vaults to acknowledge quickly so the network would become operational sooner, not simply to accept lower bandwidth.

The network would still not start. The cause isn’t clear from the logs.

Proof Of 0 bps

My expectation for the 0 bps proof was still failure, since messages seemed not to be getting through regardless of the proof requirement.

0 bps resource proof also failed to start. I’m not sure how the Test 12 network was started in the first place. If there’s anything I might have overlooked that could help get the network started, I’d be interested to know.


I couldn’t get a network to start, so the performance of vaults with Disjoint Sections could not be tested.

The implementation of resource proof is extremely interesting. I’m looking forward to seeing how it progresses in the future.


Always love to see your results and really respect your very unbiased and logical approach. I’ve been seeing commits on github improving log messages so perhaps those changes will help and stick around! There is talk about test 12b coming soon!


I have a very old Intel Q6600, and I have succeeded in running a local network of about 50 nodes on it. Besides removing the check for peers on the same LAN, as you have done, I have modified the handle_candidate_identify() method of the routing/states/ module:

        let (difficulty, target_size) = if self.crust_service.is_peer_hard_coded(&peer_id) ||
                                           // A way to disable resource proof in local tests but not in the global network
                                           self.crust_service.has_peers_on_lan() ||
                                           self.peer_mgr.get_joining_node(&peer_id).is_some() {
            (0, 1)
        } else {
            // Unchanged from upstream: the normal difficulty and a per-section share of the target size.
            (RESOURCE_PROOF_DIFFICULTY,
             RESOURCE_PROOF_TARGET_SIZE / (self.peer_mgr.routing_table().our_section().len() + 1))
        };

This is my way to implement the 0 bps proof of resource. The advantage is that the same binary can be used for both a local network (with 0 bps PoR) and the global TEST network (with PoR as programmed by Maidsafe).

I also do not start the vaults all at once. I start the first one with RUST_LOG=info ./target/debug/safe_vault -f, and then I launch groups of 10 vaults, with a 6-second delay between each vault, using the following script:

for i in {1..10}
do
	echo "Launching vault $i"
	RUST_LOG=info ./target/debug/safe_vault &
	sleep 6
done
I launch it 5 times, but each time I wait for the routing table size to reach the next multiple of ten before launching the next group. This can take several minutes, and CPU usage can be high during the wait.

Lastly, I have modified the vault_with_config() method of the safe_vault/ module:


        // To allow several vaults on same station
        use rand::{self, Rng};
        use rustc_serialize::hex::ToHex;

The aim is that vaults on the same station do not share the same chunk store directory, because I am afraid there is lock contention on the chunk files created by vaults handling the same chunk. It doesn’t explain why your network doesn’t start up, but I wonder if it could explain the observed slowdown during uploads, and its variability.

In conclusion, I would say that Maidsafe has made it very difficult to run a local test (I mean a real one, not a mock one), with the following obstacles to overcome:

  • check that vaults are not on same LAN
  • sharing of a common chunk store directory
  • costly Proof of Resource

Ideally, they could add a simple -l flag to the safe_vault program to allow such a use case. I would be very grateful if they implemented it.