I am still uploading a large file to 7 IP addresses, all at the same speed at the same time:
3 times 46.101.xxx.xx
3 times 188.166.xxx.xx
1 time 139.59
Some of these are/were elders?
Well deserved. It's always been great seeing your creative mind at work and your ability to jump into seemingly anything and completely own it. Thanks for being so technically supportive to the community you are very much still a big part of!
Ooh, I just checked the forum after a few days off in the countryside. I wish I could have joined the tests! Next week.
So nice to see the work going on!
I don’t expect a testnet today; we're still analysing results from yesterday afternoon.
As Savage points out, we merged AE into node yesterday. It seems we’ve resolved all the major bugs there, but we still expect a few minor ones to show up. We’ve got it upstream so we can ramp up the internal testing there; we’ll see how that goes.
EDIT - this is just my own view of it at the moment; as more of the team come online during the day I’ll get a clearer view and post in the “Update from MaidSafe HQ” thread.
Thanks so much Stephen for all of the work you and the rest of the staff do!
In the future, could you put calendar dates instead of saying “this week”? It's a little less confusing for people who read it later. It would be great if the posts in the update category could do this too.
Congratulations @bochaco on your well deserved promotion! Thank you for your many contributions that have brought us to this point in the network’s development.
Suits me - gives me more time to refine my deduplication analysis tool
It would help if I could get some confirmation that getting a large number of measurements of the time taken to PUT and GET standard data of various sizes will actually be of some real use, and what the sizes of that data should be.
I’m thinking 10, 20, 50, 100, 200 and 500 KB, and 1, 2, 5, 10, 20 and 50 MB standard data test files. Anybody got better ideas?
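For reference, this is roughly the harness I'm planning - a minimal sketch assuming the `safe files put <path>` CLI subcommand is on the PATH (the file names, sizes list and CSV layout are just my placeholders):

```python
import csv
import os
import subprocess
import time

# 10 KB .. 50 MB, matching the sizes suggested above
SIZES_KB = [10, 20, 50, 100, 200, 500]
SIZES_MB = [1, 2, 5, 10, 20, 50]
SIZES = [kb * 1_000 for kb in SIZES_KB] + [mb * 1_000_000 for mb in SIZES_MB]

def make_testfile(size: int) -> str:
    """Write `size` random bytes so each file's chunks are unique."""
    path = f"testfile_{size}.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(size))
    return path

with open("put_timings.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["size_bytes", "elapsed_s"])
    for size in SIZES:
        path = make_testfile(size)
        start = time.monotonic()
        # Assumption: `safe files put <path>` is the upload subcommand
        subprocess.run(["safe", "files", "put", path], check=True)
        writer.writerow([size, round(time.monotonic() - start, 3)])
```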
Anecdotally (because I'm not organised enough yet to collect the results in a CSV), I see that, on baby-fleming locally at least, the original PUT takes a reasonable time, but a subsequent PUT where no actual data is being stored (the network refuses to store an identical chunk) takes roughly 30-50% longer. Further runs attempting to store the same chunks usually take slightly longer each time. But this is working with data from at most a couple of dozen runs.
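To turn that anecdote into CSV data, the same approach can re-PUT one identical file and log every run - again just a sketch under the same `safe files put` assumption:

```python
import csv
import subprocess
import time

PATH = "testfile_1000000.bin"  # any test file generated by the script above
RUNS = 10                      # repeated PUTs of identical data

with open("dedup_timings.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["run", "elapsed_s"])
    for run in range(1, RUNS + 1):
        start = time.monotonic()
        # Same assumption as above: runs after the first should hit the
        # identical-chunk refusal path rather than storing new data
        subprocess.run(["safe", "files", "put", PATH], check=True)
        writer.writerow([run, round(time.monotonic() - start, 3)])
```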