Support Issues with Baby Fleming Version 1 (Vaults Phase 2a - single-section network)

Guys, I understand that the network currently only handles uploading small files, but it seems kind of sluggish. I assume that is why it is called Baby Fleming; it can't crawl, never mind run :smile:

I would like to hear from you experts: what is your overall impression of Baby Fleming after a week of testing?

Looking forward to hearing your opinions!


Next iteration very soon. This one tested a suspicion we had about a delay in write consensus, so we can move on. Expect a lot of iterations.


It would be good to get some testing guidelines.

Am I wasting my time reporting results from one run only?
Should I do a minimum of n test runs with different random data but a constant input file size?
Should I put any more effort into correcting and extending my wee test script?


Test scripts are good, as they can be shared. We have most of what we need so far, I reckon. It’s incredibly valuable and has created a ton of great work for us, really good.


Makes sense if everyone is testing in the same manner.
Reduces the variables to hardware and OS.


Do you have a test script for uploading successively larger files and logging the outcome (and possibly CPU and memory used) or do you do that manually?

EDIT: thinking about it that’s going to be OS-specific.
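For what it's worth, a minimal sketch of such a loop on Linux (not anyone's actual script; it assumes the `safe files put` subcommand and a running Baby Fleming network, and falls back to a dry run when `safe` is not on the PATH):

```shell
#!/bin/bash
# Upload-timing loop sketch: times uploads of successively larger
# random files and logs the results to a CSV.
# Assumes the `safe` CLI and a running network; dry-runs otherwise.
if command -v safe >/dev/null 2>&1; then
    upload() { safe files put "$1" >/dev/null; }
else
    upload() { cat "$1" >/dev/null; }   # dry-run stand-in for testing the loop
fi

LOGFILE="upload_times.csv"
echo "size_kb,millis" > "$LOGFILE"
for SIZE_KB in 700 800 900 1000; do
    TESTFILE="test_${SIZE_KB}kb.dat"
    # fresh random data each run so caching/dedup can't skew the timing
    head -c "$((SIZE_KB * 1024))" /dev/urandom > "$TESTFILE"
    START=$(date +%s%N)
    upload "$TESTFILE"
    END=$(date +%s%N)
    echo "${SIZE_KB},$(( (END - START) / 1000000 ))" >> "$LOGFILE"
    rm -f "$TESTFILE"
done
```

CPU/memory capture is indeed OS-specific; on Linux one option is wrapping the upload in `/usr/bin/time -v` and grepping "Maximum resident set size" out of its output.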


It’s getting worked on now. You are one of these Windows persons, yes?

Can I interest you in VirtualBox and a Linux LiveUSB stick?


Yes - I ring a little bell to warn passers-by that I’m unclean


I also use Linux on a stick but I have to use Windows mainly for Excel - unfortunately no open-source alternatives come close.


Aye, the VBA macros just don’t work too well in LibreOffice Calc.


Here is the latest version of

Expect frequent iterations.

This script will archive any existing Baby Fleming logs and save them to a timestamped directory. It will then create a set of credentials for a test user, create the account and log in.
As yet it does not do any actual testing, but it does give you a reproducible baseline to start from.

Linux only, I’m afraid, but it might, just maybe, work on a Mac.
Mrs Southside has served me with an exclusion order on her new Mac tablet from work. I am barred from going within 5 metres of it. So you Mac fanbois are on your own…

I save this file to ~/.safe/vault, make it executable with chmod +x and run it from there.
It is strongly recommended to build the latest CLI from master. Quickie instructions for doing this in Linux coming soon.
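For anyone writing something similar, the log-archiving step could look roughly like this (a sketch only; the default log directory path is a guess, so point it at wherever your vaults actually write their logs):

```shell
#!/bin/bash
# Sketch of the archive-old-logs baseline step.
# The default log directory below is an assumption -- adjust for your setup.
archive_logs() {
    local logdir="${1:-$HOME/.safe/vault/baby-fleming-vaults}"
    [ -d "$logdir" ] || return 0           # nothing to archive
    local stamp archive
    stamp=$(date +%Y%m%d-%H%M%S)
    archive="${logdir%/}-archive-$stamp"   # timestamped destination
    mkdir -p "$archive"
    cp -r "$logdir"/. "$archive"/
    echo "Archived logs to $archive"
}
```

Called with no argument it uses the assumed default; pass a path to override, e.g. `archive_logs /tmp/my-vault-logs`.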


Quick ’n’ dirty instructions to build the latest CLI.

First run:

```
rustc -V
```

If your version of Rust is < 1.41, then run:

```
rustup install stable
```

cd to your working directory, then:

```
git clone
cd safe-api
cargo build --release
```

Make yourself a coffee…

```
cd target/release && sudo cp safe /usr/local/bin
```

I’m quite sure there are more ‘correct’ ways of doing this - but it works :slight_smile:


If anyone wants to help improve the script, it is now at


Change nappies? Well as long as it’s just wee as you say


Some stats from tweaking gossip intervals to understand their impact, measuring how long it takes to upload files of various sizes.

| Size (kB) | Baby-fleming (s) | Change A (s) | Change B (s) |
|----------:|-----------------:|-------------:|-------------:|
| 700 | 11.271 | 49.47 | 21.688 |
| 800 | 12.074 | 53.454 | 16.894 |
| 900 | 14.047 | 60.628 | 19.654 |
| 1000 | 14.595 | 57.409 | 19.647 |
| 1100 | 13.292 | 50.306 | 23.107 |
| 1200 | 85.711 | 62.416 | 19.867 |
| 1300 | 23.079 | 67.187 | 24.865 |
| 1400 | 32.758 | 48.352 | 22.146 |
| 1500 | killed for CPU | 74.33 | 26.145 |
| 1600 | | 49.884 | 25.31 |
| 1700 | | 46.471 | 27.088 |
| 1800 | | 68.727 | 24.599 |
| 1900 | | 50.433 | 29.201 |
| 2000 | | 46.137 | 30.306 |
| 2200 | | 60.206 | 26.344 |
| 2400 | | 70.675 | 42.646 |
| 2600 | | 57.213 | 62.955 |
| 2800 | | 74.288 | 49.274 |
| 3000 | | 360.143 | 93.373 |
| 3500 | | 63.956 | 66.507 |
| 4000 | | 81.542 | 57.404 |
| 4500 | | 101.312 | killed for CPU |
| 5000 | | 65.263 | |
| 6000 | | 81.199 | |
| 7000 | | 182.018 | |
| 8000 | | 162.767 | |
| 9000 | | 229.971 | |
| 10000 | | killed for RAM | |

Change A: slow gossip down, from updates every 1 s to every 7 s.
In src/ (or use the ROUTING_GOSSIP_PERIOD environment variable):

```diff
-pub const GOSSIP_PERIOD: Duration = Duration::from_secs(1);
+pub const GOSSIP_PERIOD: Duration = Duration::from_secs(7);
```

This allowed larger files to complete but was slower overall for every file size. It suggests nodes may be being flooded by other nodes’ gossip, so they can’t make up their minds about their own stuff…?

Change B: spread gossip out over 1–3 s rather than always every 1 s, to try to break up any lockstep situations that may be happening.

```diff
 pub fn gossip_period(&self) -> Duration {
-    self.gossip_period
+    let t = rand::thread_rng().gen_range(0, 2000);
+    self.gossip_period + Duration::from_millis(t)
 }
```

This allowed larger files to be uploaded without the performance hit of Change A, though the files it managed were not as large as with Change A.


I will have a go at this with a tweak to Change B, using a random gossip interval of 2–5 s.
Can you share the script you used to run these tests, please? I’m working on one of my own but I suspect yours would be a lot more polished and ‘correct’ than my fumblings with bash.

Even so, we seem to be hitting a limit of ~10 MB, which does not bode well for storing large media.
How much RAM was in your test box and which processor?


I’m manually running these uploads.

This was on a 16 GB laptop with an i7-8565U CPU @ 1.80 GHz.
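If others want to report comparable specs alongside their timings, the standard Linux interfaces are enough (nothing SAFE-specific here):

```shell
#!/bin/bash
# Report the test box's CPU model, core count and total RAM (Linux only)
grep -m1 'model name' /proc/cpuinfo    # CPU model string
nproc                                  # logical core count
grep MemTotal /proc/meminfo            # total RAM in kB
```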

You’ll also need to add this near the top of src/ for Change B:

```rust
use rand::Rng;
```

Seems like this is where CRDT comes in?


Hey @mav, seems like you figured it all out by yourself :slight_smile: Just a couple of notes:

  • The “routing table size” prints only elders. This is by design because we don’t know how many non-elders there are in any other section except our own. I can see how it can be surprising though, so we should probably change the wording to make it obvious.
  • The MembershipKnowledge message is currently sent periodically (every 2 seconds) which isn’t strictly necessary but it was done this way because it was simple to implement and seems to cover all the edge cases. We do plan to change it though, so that it is only sent when something changes like you are suggesting.

See Baby Fleming Update post here