Guys, I understand that the network currently only uploads small files, but it seems kind of sluggish. I assume that's why it's called Baby Fleming: it can't crawl, never mind run.
I would like to hear from you experts what your overall impression of Baby Fleming is after a week of testing.
Am I wasting my time reporting results from one run only?
Should I do a minimum of n test runs with different random data but a constant input file size?
Should I put any more effort into correcting and extending my wee test script, run-new-network.sh?
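For what it's worth, the repeated-runs idea could be sketched roughly like this; UPLOAD_CMD, the sizes, and the CSV layout are all my own placeholders, not anything taken from run-new-network.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: n runs with fresh random data at a constant file size.
# UPLOAD_CMD is a stand-in (sha256sum here, so the sketch runs anywhere);
# swap in whatever command actually uploads to Baby Fleming.
N_RUNS=5
SIZE_MB=1
LOG=run-results.csv
UPLOAD_CMD="sha256sum"

echo "run,elapsed_ms" > "$LOG"
for i in $(seq 1 "$N_RUNS"); do
    f=$(mktemp)
    # different random payload each run, same size
    dd if=/dev/urandom of="$f" bs=1M count="$SIZE_MB" status=none
    start=$(date +%s%N)
    $UPLOAD_CMD "$f" > /dev/null
    end=$(date +%s%N)
    echo "$i,$(( (end - start) / 1000000 ))" >> "$LOG"
    rm -f "$f"
done
```

Averaging across the n rows (and eyeballing the spread) should make single-run noise obvious.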
Test scripts are good because they can be shared. We have most of what we need so far, I reckon. It’s incredibly valuable and has generated a ton of great work for us, really good.
Do you have a test script for uploading successively larger files and logging the outcome (and possibly CPU and memory used) or do you do that manually?
EDIT: thinking about it that’s going to be OS-specific.
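A Linux-only sketch of that size sweep, assuming GNU time is available at /usr/bin/time for the memory measurement (that's the OS-specific part); the upload command, sizes, and CSV layout are my placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Upload successively larger random files, logging elapsed time, peak
# memory, and pass/fail. sha256sum is a stand-in upload command.
UPLOAD_CMD="sha256sum"
LOG=size-sweep.csv

echo "size_mb,elapsed_ms,max_rss_kb,status" > "$LOG"
for mb in 1 2 4 8; do
    f=$(mktemp)
    dd if=/dev/urandom of="$f" bs=1M count="$mb" status=none
    start=$(date +%s%N)
    # GNU time's %M is peak resident set size in KB (Linux-specific)
    if /usr/bin/time -f "%M" -o rss.tmp $UPLOAD_CMD "$f" >/dev/null 2>&1; then
        status=ok
    else
        status=fail
    fi
    end=$(date +%s%N)
    rss=$(cat rss.tmp 2>/dev/null || echo NA)
    echo "$mb,$(( (end - start) / 1000000 )),$rss,$status" >> "$LOG"
    rm -f "$f" rss.tmp
done
```

On a Mac the BSD /usr/bin/time has no -f flag, so rows would just come out as fail/NA there.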
This script will archive any existing Baby Fleming logs, saving them to a timestamped directory. It will then create a set of credentials for a test user, create the account, and log in.
As yet it does not do any actual testing, but it does give you a reproducible baseline to start from.
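The archive step might look roughly like this; the log and archive paths here are guesses of mine, not taken from the actual run-new-network.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Move any existing logs into a timestamped archive directory before a
# fresh run. Both paths are assumed; adjust to wherever your vaults log.
archive_logs() {
    local log_dir="$1" archive_root="$2"
    if [ -d "$log_dir" ]; then
        local dest="$archive_root/$(date +%Y%m%d-%H%M%S)"
        mkdir -p "$dest"
        mv "$log_dir"/* "$dest"/ 2>/dev/null || true
        echo "Archived previous logs to $dest"
    else
        echo "No previous logs found at $log_dir"
    fi
}

archive_logs "$HOME/.safe/vault/logs" "$HOME/.safe/vault/log-archive"
```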
Linux only, I’m afraid, but it might, just maybe, work on a Mac.
Mrs Southside has served me with an exclusion order on her new Mac tablet from work. I am barred from going within 5 metres of it. So you Mac fanbois are on your own…
I save this file to ~/.safe/vault, make it executable with chmod +x run-new-network.sh, and run it from there.
It is strongly recommended to build the latest CLI from master. Quickie instructions for doing this on Linux coming soon.
Allowed larger files to complete, but was overall slower for all file sizes. Suggests that maybe nodes are being flooded by other gossip, so they can’t make up their minds about their own stuff…?
Change B: Gossip spread out over 1-3s rather than always every 1s, to try to break any lockstep situations that may be happening. src/parsec.rs:L364
pub fn gossip_period(&self) -> Duration {
-    self.gossip_period
+    let t = rand::thread_rng().gen_range(0, 2000);
+    self.gossip_period + Duration::from_millis(t)
}
Allowed larger files to be uploaded without the performance hit from Change A, but the files were not as large as with Change A.
I will have a go at a tweak to Change B, with a random gossip interval of 2-5 secs.
Can you share the script you used to run these tests, please? I’m working on one of my own but I suspect yours would be a lot more polished and ‘correct’ than my fumblings with bash.
Even so, we seem to be hitting a limit of ~10 MB, which does not bode well for storing large media.
How much RAM was in your test box and which processor?
Hey @mav, seems like you figured it all out by yourself! Just a couple of notes:
The “routing table size” prints only elders. This is by design, because we don’t know how many non-elders there are in any section other than our own. I can see how it could be surprising, though, so we should probably change the wording to make it obvious.
The MembershipKnowledge message is currently sent periodically (every 2 seconds), which isn’t strictly necessary, but it was done this way because it was simple to implement and seems to cover all the edge cases. We do plan to change it, though, so that it is only sent when something changes, as you are suggesting.