Update 18 May, 2023

Pretty sure it is, as it’s a function of self-encryption, which happens before chunks are uploaded

5 Likes

Within allowable tolerances I guess. So long as most of the close nodes have the correct current record it will sort itself out.

3 Likes

I haven’t been able to join DiskNet, but that was probably one of the biggest problems, and since the team is already solving it, the pace is amazing!

Thank you to the team for the hard work and another test, which, as you can see, has brought a lot of progress :clap: :clap: :clap:

There is power! :muscle: :muscle: :grin: :+1:

10 Likes

Since a node dropping is a silent event, I think there will inevitably be cases where no peers know about it before trying to do some business with the now non-existent node. I don’t think this is necessarily a problem. I’m just curious what happens in that case? My guess is something like “wait a minute, we need to do a bit of re-organisation, try again soon”.

4 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

10 Likes

OTOH, if they are finding bugs quickly and can iterate to the next version faster, then that’s great too. :partying_face:

100%. De-duplication happens because of where data goes in the network: XOR addressing. XOR addresses can’t be duplicated, so the same data going to the same address gets de-duplicated.
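
To make the mechanism concrete, here’s a minimal, purely illustrative sketch (not the actual safe_network code, and using the `sha2` crate as a stand-in for however the XOR address is really derived): a chunk’s address is a pure function of its content, so identical chunks resolve to the same address and only one copy is ever stored.

```rust
// Purely illustrative sketch of content addressing, not the safe_network code.
// Assumes the `sha2` crate as a stand-in for however a chunk's XOR address is
// actually derived from its (already self-encrypted) content.
use sha2::{Digest, Sha256};
use std::collections::HashMap;

type XorAddress = [u8; 32];

/// A chunk's address is a pure function of its content.
fn xor_address(content: &[u8]) -> XorAddress {
    let digest = Sha256::digest(content);
    let mut addr = [0u8; 32];
    addr.copy_from_slice(digest.as_slice());
    addr
}

#[derive(Default)]
struct ChunkStore {
    chunks: HashMap<XorAddress, Vec<u8>>,
}

impl ChunkStore {
    /// Storing identical bytes twice writes one record: the second put hits
    /// the same key, which is de-duplication for free.
    fn put(&mut self, content: &[u8]) -> XorAddress {
        let addr = xor_address(content);
        self.chunks.entry(addr).or_insert_with(|| content.to_vec());
        addr
    }
}

fn main() {
    let mut store = ChunkStore::default();
    let a = store.put(b"same encrypted chunk");
    let b = store.put(b"same encrypted chunk");
    assert_eq!(a, b); // same content, same XOR address
    assert_eq!(store.chunks.len(), 1); // only one copy is kept
}
```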

6 Likes

For chunks, data deduplication is really a function of the client-side self-encryption of the file before uploading. A duplicate chunk will always be sent to the same address, and is thus deduplicated.

For other data types this is not possible anyway.

EDIT: Oh, it’s been answered, except for the bit that other data types, like append-only data, cannot be deduplicated due to their dynamic nature, or their unique nature in the case of payment data blocks.

7 Likes

Justified, given its pleasantly rapid assembly!

This is such a powerful way to build SAFE. Love seeing the rapid releases and fixes.

End result will be so solid and ready for anything the wild can throw at it. The world isn’t ready for this SAFE :joy: :lock:

Just give us a browser & baby APIs again so we can tinker with apps :pray::pray::pray: :sweat_smile:

13 Likes

After all the talk about libp2p, is there a possibility that it is actually quite poor and a lot of custom solutions are needed?

I’m not saying it is, but I’m wondering when you guys are now deep diving into libp2p, are things as rosy as they once seemed?

Thanks

6 Likes

It’s really good, actually. The different implementations (Go/Rust/Python) do differ in terminology and API, though, so there is some wrestling to use it correctly. We have tended to do it all ourselves, and the key here is to use as much as libp2p has to offer and for us not to customise / fork / rewrite parts. So we are really pushing its limits before we add in some Safe magic. So far it’s a wrestle to do that, but we are getting there.

Still no NAT traversal with QUIC yet, but TCP looks like it’s close enough. So we will end up with a load of TCP testnets, which is a shame as QUIC is so much better/faster/lighter etc., but that is all OK.

18 Likes

How close are the libp2p people to getting the Rust implementation’s QUIC library going? Or will MaidSafe have to jump in there and contribute back to libp2p to get that done sooner?

https://libp2p.io/implementations/ looking here, the Rust implementation is so tantalisingly close to having everything ready… I suppose it’d be a terrible idea to rewrite everything in Go? :face_with_monocle:

4 Likes

Patience grasshopper. These can only be built on solid foundations. There is still a (little) bit of concrete to be poured before we can confidently spend time on a browser and APIs. Remember a lot of that work is done already, we will not be starting from scratch. Just need to be certain exactly what we are building on top of.

4 Likes

Correct! :man_facepalming: Don’t get me started. :rofl:

7 Likes

So we will have relatively resource-heavy nodes for the next wee while then?
I say relatively: last night I had 25 nodes running in 12-14 GB RAM, uploaded 2000 10 kB files and downloaded them all without apparent error. No substantial “ratcheting” of the memory, and CPU use seemed a bit lower than the past few days.

Anyhow - building the latest and I’ll try again.

Not so good today: I was only able to download 641/2000 5 kB test files with the latest from GH. Much more testing needed to confirm/deny this performance.
EDIT:
Still failing, but with a degree of consistency: I am only ever able to download 600-700 files out of the 2000 uploaded. This figure is constant(ish) over 5 runs, no matter whether I store files of 5 kB random data or file sizes in the range 1-100 kB.

10 Likes

Thank you for the hard work, team MaidSafe! I’ll add the translations to the first post :dragon:


Privacy. Security. Freedom

5 Likes

Right now it looks like libp2p is optimised for smaller records and less churn (and so less republishing). They operate on a timescale of days; we operate on a timescale of seconds to minutes (or would like to). Really we want to be republishing data on any churn.

So relying on their impl for large chunks etc is a balancing act.

Right now we’ve seen the limits of that approach and how it’s not necessarily optimal.

As @dirvine says, we’re really pushing libp2p and seeing what we can get away with there. I think we’ll be on a hybrid approach for a bit while we see what would make sense for libp2p.

So right now we’re looking at what would be optimal and how that might fit into libp2p, to see if there are some PRs we could upstream there :+1:
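
To illustrate the difference in behaviour (toy types only; nothing here is the libp2p or safe_network implementation): interval-based republishing waits for a timer, while churn-driven republishing reacts the moment the set of peers closest to a record changes.

```rust
// Illustrative sketch only (toy types, not libp2p or safe_network code):
// republish a record as soon as churn changes the set of peers closest to it,
// rather than on a periodic timer.
use std::collections::{BTreeSet, HashMap};

type Key = [u8; 32];
type PeerId = [u8; 32];
const CLOSE_GROUP: usize = 5;

/// XOR distance, compared lexicographically byte by byte.
fn xor_distance(a: &Key, b: &PeerId) -> [u8; 32] {
    std::array::from_fn(|i| a[i] ^ b[i])
}

/// The CLOSE_GROUP peers closest (in XOR space) to a key.
fn closest_peers(key: &Key, peers: &BTreeSet<PeerId>) -> Vec<PeerId> {
    let mut v: Vec<PeerId> = peers.iter().copied().collect();
    v.sort_by_key(|p| xor_distance(key, p));
    v.truncate(CLOSE_GROUP);
    v
}

struct Node {
    held_records: HashMap<Key, Vec<u8>>,
    known_peers: BTreeSet<PeerId>,
    // Last known close group per record, so we can spot when churn moves it.
    last_close_group: HashMap<Key, Vec<PeerId>>,
}

impl Node {
    /// Called whenever a peer joins or leaves. Returns the records whose
    /// close group changed and therefore need republishing right now.
    fn on_churn(&mut self, joined: Option<PeerId>, left: Option<PeerId>) -> Vec<Key> {
        if let Some(p) = joined {
            self.known_peers.insert(p);
        }
        if let Some(p) = left {
            self.known_peers.remove(&p);
        }

        let mut to_republish = Vec::new();
        for key in self.held_records.keys() {
            let now = closest_peers(key, &self.known_peers);
            if self.last_close_group.get(key) != Some(&now) {
                self.last_close_group.insert(*key, now);
                to_republish.push(*key);
            }
        }
        to_republish
    }
}

fn main() {
    let mut node = Node {
        held_records: HashMap::from([([7u8; 32], b"chunk".to_vec())]),
        known_peers: BTreeSet::from([[1u8; 32], [2u8; 32]]),
        last_close_group: HashMap::new(),
    };
    // A new peer close to the record triggers an immediate republish.
    let changed = node.on_churn(Some([6u8; 32]), None);
    assert_eq!(changed.len(), 1);
}
```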

14 Likes

I wonder if you’d be interested in writing some CI jobs replicating your tests there?

If you fancy it, we have eg this PR: ci: add more tests to nightly by joshuef · Pull Request #290 · maidsafe/safe_network · GitHub

Which adds some nightly job runs (this is where we’d want larger tests like the ones you’re trying). The resources are limited (one GitHub machine), but we could look to run a 2k file upload/download, e.g. every day on CI, to catch regressions in perf here…

Let me know if that’s something you might be interested in and if you need any help / pointers!

12 Likes

What’s happening with payment to upload?

I see from GitHub that the initial implementation has been removed as planned, but there is no mention in the OP of what’s happening in this area.

7 Likes

That is still happening. Right now the focus is getting DBCs working well and securely. Then we will add payments to upload. We need that to prove data is valid (i.e. it has been paid for), to allow data to be republished should any ever be lost etc.
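
Roughly the shape of it (hypothetical types for illustration only; the real DBC and upload APIs in safe_network will differ): an upload carries proof that a DBC was spent for that chunk’s address, and a storing node only accepts, and later republishes, chunks whose proof verifies.

```rust
// Hypothetical sketch of "payment proves validity": none of these types are
// the real sn_dbc / safe_network API, they only illustrate the flow.
type XorAddress = [u8; 32];

/// Stand-in for a spent-DBC proof naming the chunk it paid for.
struct PaymentProof {
    paid_for: XorAddress,
    spend_signature: Vec<u8>, // would be a real signature over the spend
}

struct UploadRequest {
    address: XorAddress,
    content: Vec<u8>,
    proof: PaymentProof,
}

/// Stand-in verification: a real node would check the signature against the
/// spentbook / mint, not just compare addresses.
fn proof_is_valid(proof: &PaymentProof, address: &XorAddress) -> bool {
    proof.paid_for == *address && !proof.spend_signature.is_empty()
}

enum StoreOutcome {
    Stored,
    RejectedUnpaid,
}

/// A node stores (and will later republish) only data it can prove was paid for.
fn handle_upload(req: &UploadRequest) -> StoreOutcome {
    if proof_is_valid(&req.proof, &req.address) {
        // persist req.content under req.address, keeping the proof alongside it
        StoreOutcome::Stored
    } else {
        StoreOutcome::RejectedUnpaid
    }
}

fn main() {
    let req = UploadRequest {
        address: [9u8; 32],
        content: b"chunk bytes".to_vec(),
        proof: PaymentProof { paid_for: [9u8; 32], spend_signature: vec![1] },
    };
    assert!(matches!(handle_upload(&req), StoreOutcome::Stored));
}
```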

19 Likes

7 posts were split to a new topic: Acceptable Replication