Update 18 May, 2023

Thanks to everyone who took part in the DiskNet testnet this week. Despite its ‘rapid unscheduled disassembly’ (© SpaceX), we really did learn some valuable lessons from it, and fortunately the fixes shouldn’t be too tricky. We also found a bug related to logging, which has already been sorted, so we’ll be good to go once the next iteration is ready.

Community thanks

Thanks to marcelosousa for their PR removing some over-the-top Reviewpad summaries :bowing_woman:.

Thanks to @mav for his work thus far on improving wallet UX :bowing_man:

General progress

Happy to say the memory and CPU spikes we saw in the previous testnet when uploading data seem to be things of the past, thanks to a change in the data republishing code. @joshuef has been running tests on this and the behaviour hasn’t recurred, so fingers crossed that’s that.

@bzee and @aed900 are making progress on AutoNAT - detection of nodes behind home routers/firewalls. They’ve been studying the testnet logs to spot potential issues and work through how AutoNAT might mitigate them.
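For anyone wondering what that detection involves in practice, here’s a minimal, purely illustrative sketch of the AutoNAT idea (hypothetical types and thresholds, not the rust-libp2p API): a node asks a handful of remote peers to dial it back on the addresses it thinks it is reachable at, and classifies itself as publicly reachable or behind NAT based on how many dial-backs succeed.

```rust
// Purely illustrative sketch of the AutoNAT idea (hypothetical types,
// not the libp2p API): ask peers to dial us back, then count the results.

#[derive(Debug)]
enum NatStatus {
    Public,  // enough peers could dial us back
    Private, // dial-backs mostly failed: likely behind a router/firewall
    Unknown, // not enough probes yet to decide
}

/// Outcome of one dial-back probe answered by a remote peer.
struct ProbeResult {
    reachable: bool,
}

/// Classify reachability from a set of probe results.
/// `min_probes` and the 50% success threshold are illustrative values only.
fn infer_nat_status(probes: &[ProbeResult], min_probes: usize) -> NatStatus {
    if probes.len() < min_probes {
        return NatStatus::Unknown;
    }
    let successes = probes.iter().filter(|p| p.reachable).count();
    if successes * 2 >= probes.len() {
        NatStatus::Public
    } else {
        NatStatus::Private
    }
}

fn main() {
    let probes = vec![
        ProbeResult { reachable: false },
        ProbeResult { reachable: false },
        ProbeResult { reachable: true },
    ];
    println!("{:?}", infer_nat_status(&probes, 3)); // Private
}
```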

The other remaining piece of the puzzle is how to store registers. Is the libp2p way good enough for now, or do we need to come up with a custom solution? The same applies to DBCs, but since there is no CRDT logic involved in that case, these should be much easier. This is what @anselme and @bochaco are looking into at the moment, working through the pros and cons.
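To make that difference concrete, here’s a small illustrative sketch (hypothetical types, not the actual codebase): a register replica behaves like a CRDT, so merging replicas must be order-independent and idempotent, whereas a DBC record is write-once and replicating it is just a copy.

```rust
use std::collections::BTreeSet;

/// A register replica as a CRDT: merge is a set union, so it is
/// commutative, associative and idempotent, and replicas converge no
/// matter in which order they see each other's entries. (Entries are
/// simplified to strings here; the real register is more involved.)
#[derive(Clone, Default)]
struct RegisterReplica {
    entries: BTreeSet<String>,
}

impl RegisterReplica {
    fn write(&mut self, entry: &str) {
        self.entries.insert(entry.to_string());
    }

    fn merge(&mut self, other: &RegisterReplica) {
        self.entries.extend(other.entries.iter().cloned());
    }
}

/// A DBC record, by contrast, is write-once: storing it again is either a
/// no-op or an error, with no merge logic needed.
#[derive(Clone, PartialEq)]
struct DbcRecord {
    bytes: Vec<u8>,
}

fn store_dbc(existing: Option<&DbcRecord>, incoming: DbcRecord) -> Result<DbcRecord, &'static str> {
    match existing {
        None => Ok(incoming),
        Some(current) if *current == incoming => Ok(incoming), // idempotent re-store
        Some(_) => Err("conflicting value for an immutable record"),
    }
}

fn main() {
    let mut a = RegisterReplica::default();
    let mut b = RegisterReplica::default();
    a.write("alpha");
    b.write("beta");
    a.merge(&b);
    b.merge(&a);
    assert_eq!(a.entries, b.entries); // replicas converge

    let dbc = DbcRecord { bytes: vec![1, 2, 3] };
    assert!(store_dbc(None, dbc).is_ok());
}
```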

@qi_ma is optimising the data republication process. What we really want is that every time there is a churn event in a close group (the eight closest nodes, XOR-wise), the data gets republished to any new data-holders. As well as providing redundancy, this ensures the routing tables held by nodes are always up-to-date. The libp2p way is not quite right for us as it is periodic rather than event-driven, and can be quite heavy. We’re taking a look at using it as a backstop, in conjunction with more event-driven replication.
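Roughly what event-driven republication means here, as a sketch with hypothetical names and toy 32-byte addresses (not the real node code): the close group is the eight known nodes nearest the data’s address by XOR distance, and on a churn event the data only needs pushing to whoever has newly entered that group.

```rust
use std::collections::BTreeSet;

const CLOSE_GROUP_SIZE: usize = 8;

/// 256-bit network address; nodes and data share the same address space.
type Address = [u8; 32];

/// XOR distance between two addresses; comparing the XOR-ed bytes
/// lexicographically is enough to order candidates by closeness.
fn xor_distance(a: &Address, b: &Address) -> Address {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// The close group for a piece of data: the eight known nodes closest to it, XOR-wise.
fn close_group(data_addr: &Address, known_nodes: &[Address]) -> Vec<Address> {
    let mut nodes = known_nodes.to_vec();
    nodes.sort_by_key(|n| xor_distance(n, data_addr));
    nodes.truncate(CLOSE_GROUP_SIZE);
    nodes
}

/// On a churn event, republish only to nodes that are newly in the close group,
/// rather than periodically pushing everything (event-driven replication).
fn newcomers_to_replicate_to(old_group: &[Address], new_group: &[Address]) -> Vec<Address> {
    let old: BTreeSet<_> = old_group.iter().collect();
    new_group
        .iter()
        .filter(|n| !old.contains(n))
        .cloned()
        .collect()
}

fn main() {
    let data = [0u8; 32];
    // Twelve known nodes at increasing XOR distance from the data address.
    let mut nodes: Vec<Address> = (1u8..=12)
        .map(|i| { let mut a = [0u8; 32]; a[31] = i * 16; a })
        .collect();
    let before = close_group(&data, &nodes);

    // A new node joins very close to the data address: a churn event.
    let mut joiner = [0u8; 32];
    joiner[31] = 1;
    nodes.push(joiner);
    let after = close_group(&data, &nodes);

    println!("republish to {} newcomer(s)", newcomers_to_replicate_to(&before, &after).len());
}
```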

Qi and @bochaco have also been digging into the connectivity problems experienced during the testnet, which seem to be caused by code panics in the RecordStore module.

Related to this is data republishing on churn, which is a little more complicated with registers. @bochaco has created a new end-to-end test for verifying register data integrity during node churn events.

And @roland is working on improving the logging process in preparation for the next testnet. Hold onto your hats. :cowboy_hat_face:


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian; :germany: German; :spain: Spanish; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

52 Likes

First! Hehehehe. Now to read

19 Likes

I know these things ruffle some feathers unnecessarily, but is Rusu still around?

12 Likes

Beauty update Maidsafe :wink:

Steady as she goes and thanks.

:beers:

18 Likes

Next test should be interesting, fingers crossed.

No Edward/work on payments? I’m really looking forward to that slotting in.

Then NAT and we should have something serious to play with.

Thank you team. We just see a fraction of the work you put in. Most people have no idea how hard software is to build so :clap: from me.

P.S. Now that naming testnets is a thing, how about using a theme like fictional computers? But maybe save SkyNet for launch. :face_with_diagonal_mouth:

18 Likes

Thx 4 the update Maidsafe devs

This is really great news, so fingers and toes crossed.

Really love how quickly the community took part in the DiskNet testnet :clap: :clap: :clap: for all those involved. Would’ve liked to participate too, but I’ve got to refresh my memory on how to spin up a cloud instance again.

Farmers should take note that cloud farming could also double as a SAFE income stream, so it’s well worth exploring. Amazon’s AWS one-year free tier is incredible for this.

Keep hacking super ants

15 Likes

No, David moved on to a project with his pals in Canada.

14 Likes

Really wish these testnets would last longer… by the time I find out about them they are already done lol

12 Likes

Mile-high summary of the big things left:

DBCs plugged in + NAT sorted + some mechanism for efficient event-driven updating of routing tables rather than the periodic and heavy libp2p implementation

The routing table thing sounds like it could be thorny enough. Hope something is found that ticks all the boxes. Looking forward to getting involved in a testnet too! Hats off to all the team, cheers for all the grinding.

18 Likes

@Josh you just do fingers :crazy_face:

Looking forward to the next testnet :clap:t2:

4 Likes

What happens when they are not up-to-date?

As I see it, they cannot truly always be up-to-date, but there’s some sort of (small?) delay between a node dropping and the routing table getting updated again.

3 Likes

Forgive me if this was answered before, but is data de-duplication still a thing in this new model?

5 Likes

Pretty sure it is, as it’s a function of self-encryption, which happens before chunks are uploaded.

5 Likes

Within allowable tolerances, I guess. So long as most of the close nodes have the correct current record, it will sort itself out.

3 Likes

I wasn’t able to join DiskNet, but that was probably one of the biggest problems, and since the guys are already solving it, the pace is amazing!

Thank you to the team for the hard work and another test, which, as you can see, has brought a lot of progress :clap: :clap: :clap:

There is power! :muscle: :muscle: :grin: :+1:

10 Likes

Since node drop is a silent event, I think there will inevitably be cases where no peers know about it before trying to do some business with the non-existent node. I don’t think this is necessarily a problem. I’m just curious what happens in that case? My guess is something like “wait a minute, we need to do a bit of re-organization, try again soon”.

4 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

10 Likes

OTOH, if they are finding bugs quickly and can iterate to the next version faster, then that’s great too. :partying_face:

100%, de-duplication happens because of where data goes in the network - XOR addressing. XOR addresses can’t be duplicated, so the same data going to the same address gets de-duplicated.
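A tiny sketch of that content-addressing idea (purely illustrative: a real network would derive the address with a cryptographic hash such as SHA-256, whereas the std 64-bit hasher is used here just to keep the sketch dependency-free): a chunk’s address is a hash of its bytes, so uploading identical chunks twice leaves only one stored copy.

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative only: derive a chunk's network address from its content.
/// (A real network would use a cryptographic hash; the std hasher is used
/// here only so the sketch has no dependencies.)
fn chunk_address(content: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

/// A toy chunk store keyed by content address: storing the same bytes twice
/// results in a single stored copy, which is all de-duplication is.
#[derive(Default)]
struct ChunkStore {
    chunks: HashMap<u64, Vec<u8>>,
}

impl ChunkStore {
    fn put(&mut self, content: &[u8]) -> u64 {
        let addr = chunk_address(content);
        self.chunks.entry(addr).or_insert_with(|| content.to_vec());
        addr
    }
}

fn main() {
    let mut store = ChunkStore::default();
    let a = store.put(b"same encrypted chunk");
    let b = store.put(b"same encrypted chunk"); // duplicate upload
    assert_eq!(a, b);                  // same content, same address
    assert_eq!(store.chunks.len(), 1); // only one copy stored
}
```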

6 Likes

For chunks, data deduplication is really a function of the client-side encryption of the file before uploading. A duplicate chunk will always be sent to the same address, and is thus deduplicated.

For other data types this is not possible anyhow.

EDIT: Oh, it’s been answered already, except for the bit about other data types: append-only data can’t be deduplicated due to its dynamic nature, and payment data blocks are unique.

7 Likes