Update 15 June, 2023

First thing to say is that we’re massively happy about how ReplicationNet panned out. It was a bit of a gamble chucking up so many nodes - our biggest testnet yet by some margin - and we were expecting some major wobbles, but it took everything we could throw at it in its stride and without complaint, until full nodes stopped play. Most encouraging of all, this stability was despite some messaging errors around data replication that might have been expected to bring it down. Instead it swatted them away like a fly. Heartfelt thanks once again to everyone who took part :heart:, and a special mention to @shu for his fantastic dashboard work :trophy:.

ReplicationNet - findings and actions

So, having gone through the logs, both those kindly shared by the community and our own, we can report the following.

  • The slowly rising memory issue is almost certainly due to nodes reaching capacity. We do not see this behaviour until a number of nodes get full (1024 chunks in this case). Once the network is operational we shouldn’t see this as new nodes will be incentivised to join.

  • Out-of-memory issues seem to be caused by too much data being stored in cache as a node approaches capacity. (And in that case, it seems we had too many nodes on too small a machine.) That’s not a bug per se: libp2p should disperse that cache, and the data would be stored as more nodes joined.

  • We’ve identified and squashed a bug whereby data replication was causing connection closures, and consequently a lot of dropped messages around replication. This is the kind of thing that would normally spell doom, and it’s a testament to the underlying stability of the network that it had so little impact.

  • Another bug fix was to do with Outbound Failure errors.

  • Data distribution across nodes is pretty uniform. Again, great news, because we can use the percentage of space used as a trigger for reward pricing as planned (see the pricing sketch after this list). The issue of some nodes not filling up is a bug, likely something to do with new nodes not promoting themselves into others’ routing tables strongly enough.

  • There are a few anomalies in the logs where the PUT-request and chunks-stored metrics don’t seem to match up. We need to work on clarifying those.

  • To give users with lower bandwidth more control, we’ve added the ability for the client to set the timeout duration for requests and responses from the network. We’ve also increased the default timeout duration from 10 to 30 seconds (a small configuration sketch follows this list).

  • We’re now thinking about payment flows and rewards for the different scenarios: new data, replicated data, and republished data (where valid data has been lost for whatever reason).
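
To make that pricing trigger concrete, here’s a minimal sketch of capacity-based store pricing, assuming a node that tracks how many of its 1024 chunk slots are in use. The names (`store_cost`, `MAX_CHUNKS`, `BASE_COST`) and the linear ramp are hypothetical illustrations, not the actual pricing code.

```rust
/// Capacity constant: ReplicationNet nodes held at most 1024 chunks.
const MAX_CHUNKS: usize = 1024;
/// Base price per chunk, in purely illustrative units.
const BASE_COST: u64 = 10;

/// Store cost rises with the percentage of space used, so fuller nodes
/// quote higher prices and new uploads flow towards emptier nodes.
fn store_cost(stored_chunks: usize) -> u64 {
    let percent_used = (stored_chunks.min(MAX_CHUNKS) * 100 / MAX_CHUNKS) as u64;
    // Simple linear ramp; the real reward-pricing curve is still being designed.
    BASE_COST + BASE_COST * percent_used / 10
}

fn main() {
    assert_eq!(store_cost(0), 10); // an empty node quotes the base price
    assert!(store_cost(1024) > store_cost(512)); // a fuller node charges more
}
```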
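And as a rough illustration of the new client timeout option, here’s a hedged sketch of what such a setting could look like; `ClientConfig` and `request_timeout` are made-up names, and only the 10-to-30-second default change comes from the update itself.

```rust
use std::time::Duration;

/// Hypothetical client configuration: the request/response timeout now
/// defaults to 30 seconds (up from 10) and can be overridden by the user.
struct ClientConfig {
    request_timeout: Duration,
}

impl Default for ClientConfig {
    fn default() -> Self {
        Self { request_timeout: Duration::from_secs(30) }
    }
}

fn main() {
    // A user on a slow connection might stretch the timeout further.
    let cfg = ClientConfig { request_timeout: Duration::from_secs(60) };
    assert!(cfg.request_timeout > ClientConfig::default().request_timeout);
}
```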

The next testnet will help us test these suppositions and fixes, as well as validate some work around user experience.

General progress

All eyes are now on DBCs, with @bochaco and @Anselme working on securing and verifying the payment process for storing chunks, including checking that the parents of the payment DBC are spent, and ensuring their reason-hash matches the payment proof info provided for each chunk. Anselme has fixed a flaw whereby the faucet and wallet were not marking DBCs as spent. It turned out this was down to synchronous activity by the checking nodes causing a read-write lock, whereas we need it to be async.
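
For a sense of what those checks involve, here’s a hedged sketch of the verification logic; `Dbc`, `SpendId` and `verify_payment` are hypothetical stand-ins for the real DBC types, not the actual API.

```rust
/// Hypothetical identifier for a spend recorded on the network.
#[derive(Clone, Copy, PartialEq, Eq)]
struct SpendId([u8; 32]);

/// Hypothetical, heavily simplified DBC; real DBCs carry much more.
struct Dbc {
    parents: Vec<SpendId>,
    reason_hash: [u8; 32],
}

/// A payment is only accepted if every parent of the payment DBC has been
/// spent and the DBC's reason-hash matches the chunk's payment proof.
fn verify_payment(
    dbc: &Dbc,
    chunk_payment_proof: &[u8; 32],
    is_spent: impl Fn(&SpendId) -> bool,
) -> Result<(), &'static str> {
    if !dbc.parents.iter().all(&is_spent) {
        return Err("parent DBC not marked as spent");
    }
    if &dbc.reason_hash != chunk_payment_proof {
        return Err("reason-hash does not match payment proof");
    }
    Ok(())
}
```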

Similarly, @roland is eliminating a deadlock in PUT and GET operations to ensure they can be performed - and paid for - concurrently. Parallelisation is the name of the game. He’s also ensuring our data validations occur regardless of when the data comes into a node, preventing “sideloading” of data via the libp2p/kad protocols (which would essentially have allowed free data).
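
As a rough picture of that concurrency goal, here’s a minimal sketch using tokio’s `RwLock`: many GETs can share the read lock while a PUT takes a brief write lock, and neither is held across a network await, which is the usual source of such deadlocks. The `ChunkStore` type is illustrative, not the actual node code.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;

type ChunkAddr = [u8; 32];

/// Illustrative shared chunk store for concurrent PUTs and GETs.
#[derive(Clone, Default)]
struct ChunkStore {
    inner: Arc<RwLock<HashMap<ChunkAddr, Vec<u8>>>>,
}

impl ChunkStore {
    /// PUT takes the write lock only for the insert itself.
    async fn put(&self, addr: ChunkAddr, data: Vec<u8>) {
        self.inner.write().await.insert(addr, data);
    }

    /// GETs take the read lock, so many can run in parallel.
    async fn get(&self, addr: &ChunkAddr) -> Option<Vec<u8>> {
        self.inner.read().await.get(addr).cloned()
    }
}

#[tokio::main]
async fn main() {
    let store = ChunkStore::default();
    store.put([0u8; 32], b"hello".to_vec()).await;
    assert_eq!(store.get(&[0u8; 32]).await, Some(b"hello".to_vec()));
}
```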

@bzee is still tinkering with the innards of libp2p, currently tweaking the initial dialling of the bootstrap peers.
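
For context, that initial dialling amounts to something like the sketch below, assuming a recent libp2p where `Swarm::dial` accepts a `Multiaddr`; the `dial_bootstrap_peers` helper is a hypothetical illustration.

```rust
use libp2p::{swarm::NetworkBehaviour, Multiaddr, Swarm};

/// Hypothetical helper: try each bootstrap address in turn, logging failures
/// instead of aborting so that one unreachable peer doesn't block startup.
fn dial_bootstrap_peers<B: NetworkBehaviour>(swarm: &mut Swarm<B>, peers: &[Multiaddr]) {
    for addr in peers {
        if let Err(err) = swarm.dial(addr.clone()) {
            eprintln!("failed to dial bootstrap peer {addr}: {err}");
        }
    }
}
```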

@Joshuef and @qi_ma have been mainly working through the findings of the last testnet and fixing as they go.

@chriso has been hard at work getting safeup updated, more on that soon.

And @aed900 has completed a testnet launch tool to automate the creation of testnets on AWS and Digital Ocean.

Onwards!


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French ; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!


Thanks all you fine Ants! Thanks also to those who participated in the test-net - most appreciated :wink:

Seems like a HUGE move forward this week. DBCs soon? - looks like it! Exciting days.

Cheers all! :beers:


Second in line here :laughing:

Good to hear the test net was such a success. The year of the test net is certainly upon us.


From the data in the post below, half-filled nodes can also be seen.
And that half-filling was measured when the network as a whole needed additional space.
This is not what uniformity looks like.


Fourth! Go go go.


Sounds so clear now. What worked, what didn’t, and the way forward. :+1:


Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

The testnet was wonderful! Everyone loved it! :clap:


Thx 4 the update Maidsafe devs

For everybody who participated in ReplicationNet :clap: :clap: :clap:

No idea if this is possible, but it would be fun if the testnet logs could be fed into ChatGPT and somehow help find/fix the bugs.

Hopefully it can also tell us how to create 2023 SAFE nodes at launch :sweat_smile:

Keep hacking super ants


Congratulations on this successful test! The pace of progress is impressive, and very promising.

Looking forward to new test nets in the coming weeks. Keep up the great work!


Well done team and testers! Looking forward to a testnet within my capabilities :wink: and no pressure following ReplicationNet, @bochaco and @Anselme :crazy_face:

Edit: Good luck @Josh :muscle:t2:


Slowly slowly then all of a sudden.

Ready to represent in a week when I try to fly for the first time since, well, almost not being able to walk.

If you can do this, I can do that, if it is not a challenge it isn’t worth doing!


Based on the ReplicationNet entries, I have to admit that the team and the many people involved in testing did a lot of work. They deserve a big thank you and applause for their effectiveness in discovering bugs! And special thanks to @shu of course!

The next steps promise to be very interesting and I hope that there will be some simple instructions for testers, so that the range of tests can be increased.


Really positive updates recently, great to see that!


Oooh …… I need to do that to my windsurf boards


Thanks for the very positive update, thanks for all the hard work that has taken us to where we are now, and here’s to more of the same till launch.

And then more after launch :slight_smile:

@aatonnomicc and me were trying to dissect this update in the pub and explain to the rest of the company just how brave and stunning this project is, groundbreaking and totally not content to adhere to the norms. But the main takeaway is that URL stands for “Urny Really Listening”.
@aatonnomicc had to bail early, came up with some weak excuse about being ordered to attend some kindergarten graduation by MrsNeik - which was sad - So me and MrsSouthside had to ensure standards were not slipping in the Lauriston as we wended our way back southside. Beer, service and ambience were chust sublime and in honour of our SAFE meetups in that esteemed establishment they have engaged a time traveller to paint a Grey Dorian portrait of the esteemed BDFL - which was nice.

Anyhow, please don’t launch any new testnets until I sober up a bit, OK?



Thank you for the hard work, team MaidSafe! I’ll add the translations to the first post :dragon:


Privacy. Security. Freedom
