Update 06 January, 2022

Happy New Year everyone! We’re back in the saddle again :cowboy_hat_face: and raring to go.

Many thanks to @josh for setting up the recent community testnet, and to everyone who took part. Some of the anomalies reported mirror what we've seen in our own test results, and there were a few surprises too, including the max-CPU spikes, which we're looking into now. Cheers guys!

Just a quick one this week to let you know what we’re working on right now. Happy to say we have already tied up a few loose ends and are very much ready to roll.

General progress

Over time, we've considered various ways to calculate free space on the network. Recently, we have been traversing the few db directories and adding up their sizes to calculate used space. Led by @anselme, we have now simplified data storage by removing the advanced database and replacing it with a simpler straight-to-disk process with a binary-tree directory structure. This required us to replace the directory traversal with a process that counts every byte as it's written and keeps a running total of used space, which is much quicker and more scalable now that we will have a deep tree of dirs instead of one or two. It will also greatly simplify the process of having a node with chunks rejoin the network, because we don't need to measure it every time, along with simplifying section splits.
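To illustrate the idea, here is a minimal sketch of that byte-counting approach. All names (`UsedSpace`, `record_write`, etc.) are hypothetical, not the actual node implementation: the point is that reads become O(1) instead of a directory walk.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Hypothetical used-space tracker: rather than traversing the chunk
/// directory tree, every write and delete updates a running total.
struct UsedSpace {
    total: AtomicU64,
}

impl UsedSpace {
    fn new() -> Self {
        Self { total: AtomicU64::new(0) }
    }

    /// Called when a chunk is written to disk.
    fn record_write(&self, bytes: u64) {
        self.total.fetch_add(bytes, Ordering::Relaxed);
    }

    /// Called when a chunk is removed.
    fn record_delete(&self, bytes: u64) {
        self.total.fetch_sub(bytes, Ordering::Relaxed);
    }

    /// O(1) read, no directory traversal required.
    fn used(&self) -> u64 {
        self.total.load(Ordering::Relaxed)
    }
}

fn main() {
    let space = UsedSpace::new();
    space.record_write(1024);
    space.record_write(4096);
    space.record_delete(1024);
    println!("used: {} bytes", space.used()); // 4096
}
```

A counter like this also makes rejoin cheap: the total can be persisted alongside the chunks and reloaded, instead of re-measuring the whole store.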

Anselme is also looking at how to easily calculate the storage space freed up when private registers (mutable data) are deleted.

Talking of mutable data, we've been considering what sort of charges should be attached to the register data type. Pay-per-change would be very clunky, and we want to separate edits from the DBC process. Currently we're thinking of charging a multiple of the price of an immutable chunk (blob) PUT for a register PUT, and allowing infinite edits by the data owner (and parties they choose to share with) thereafter.
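As a sketch of that pricing model (the multiplier below is a made-up placeholder, no value has been decided, and all function names are illustrative):

```rust
/// Hypothetical cost model: a register PUT is priced as a multiple of
/// an immutable chunk PUT; subsequent edits by the owner (or parties
/// they share with) cost nothing.
const REGISTER_PUT_MULTIPLIER: u64 = 10; // placeholder, not a decided value

fn chunk_put_cost(network_price: u64) -> u64 {
    network_price
}

fn register_put_cost(network_price: u64) -> u64 {
    network_price * REGISTER_PUT_MULTIPLIER
}

fn register_edit_cost() -> u64 {
    0 // infinite free edits after the initial PUT
}

fn main() {
    let price = 2; // whatever the network currently charges per chunk PUT
    println!("chunk PUT:     {}", chunk_put_cost(price));
    println!("register PUT:  {}", register_put_cost(price));
    println!("register edit: {}", register_edit_cost());
}
```

The attraction of a one-off multiple is that edits never touch the payment path, so the DBC machinery stays out of the hot loop for mutable data.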

@bochaco has continued to refine the membership process, i.e. how we maintain the correct number of elders in a section and how we add new adult nodes to the section when required. When a new node requests to join, it kickstarts a process that includes AE messages and the resource proof test, and once the elders agree to accept it, a message is sent back to the joining node. This flow is now integrated with the sn_membership crate - at least for adults being promoted to elders and elders leaving. However, the sn_membership process currently assumes that all nodes involved are voting members (i.e. elders), so it excludes promoting an adult to an elder. @bochaco and @davidrusu are working on this one now. We'll explain more in a future update.

Meanwhile @lionel.faber has been looking at speeding up the CI/CD process with self-hosted GitHub runners on AWS. Native virtual machines in GitHub Actions can be rather slow - especially for Windows - which has proved to be a bottleneck in testing. By hosting the service ourselves on AWS we can use more powerful VMs to complete our workflows more quickly. Lionel is also documenting the Distributed Key Generation (DKG) process we use for agreement.

Staying on testing for a moment: sometimes tests can be too rigorous. How so? Well, tests are designed to catch every error when it happens, whereas a fault-tolerant network with CRDTs may be able to work around these glitches, eventually arriving at a guaranteed consistent state. So it can be wasteful to build tests that catch everything - but it's a tricky balancing act!

A case in point is a missing data error, like the one you may have seen on the testnet. Is the data really missing, or has it just not arrived yet? Perhaps it will show up later, or maybe it is there but there’s been another error.

In this scenario messaging between actors is nuanced too, and this is another topic of discussion. When a chunk has been successfully PUT, the client should (optionally) be told it has been stored, and thanks to CRDTs that success is 100% certain. But if the PUT has apparently 'failed', that is not 100% certain because of asynchronicity and other factors, so the client needs options on how to proceed.
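One plausible client policy (a sketch only - the types and names here are hypothetical, not the sn_client API) is to treat an ambiguous result as "unknown" and query for the chunk before retrying, since the PUT may well have landed even though the acknowledgement was lost:

```rust
/// Hypothetical outcome of a chunk PUT as seen by the client.
#[derive(Debug, PartialEq)]
enum PutOutcome {
    /// Acknowledged as stored: with CRDTs this is definitive.
    Stored,
    /// No ack (timeout, dropped message, ...): NOT proof of failure.
    Unknown,
}

/// Sketch of a client policy: on an ambiguous result, query for the
/// chunk before re-sending, rather than assuming the PUT failed.
fn resolve<F, G>(put: F, chunk_exists: G) -> PutOutcome
where
    F: Fn() -> PutOutcome,
    G: Fn() -> bool,
{
    match put() {
        PutOutcome::Stored => PutOutcome::Stored,
        PutOutcome::Unknown => {
            if chunk_exists() {
                // The PUT succeeded even though the ack went missing.
                PutOutcome::Stored
            } else {
                PutOutcome::Unknown // caller may retry or back off
            }
        }
    }
}

fn main() {
    // Simulate a lost ack where the chunk actually made it to disk.
    let outcome = resolve(|| PutOutcome::Unknown, || true);
    println!("{:?}", outcome); // Stored
}
```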

@Qi_ma has been looking at the logs to track down the missing data error, and @yogesh is looking into the reason for the floods of AE messages that sometimes overwhelm communications between nodes. Are they related? We should know soon. Meanwhile, @joshuef has been looking at a bug in nodes that are being hammered by clients, causing the node's memory to spike and occasionally crash. Right now we're just capping concurrent client messages at an arbitrary limit, but we'll look to make this dynamic based on the node's load as we progress.
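The cap can be sketched as a simple counting gate (hypothetical names again, and using only std atomics; the real node could just as well use an async semaphore, and a dynamic version would adjust `limit` from load measurements):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Sketch of a fixed cap on in-flight client messages - the arbitrary
/// limit mentioned above. A dynamic version would tune `limit` at runtime.
struct MsgCap {
    in_flight: AtomicUsize,
    limit: usize,
}

impl MsgCap {
    fn new(limit: usize) -> Self {
        Self { in_flight: AtomicUsize::new(0), limit }
    }

    /// Try to admit one more client message; false = shed or queue it.
    fn try_acquire(&self) -> bool {
        let mut current = self.in_flight.load(Ordering::Acquire);
        loop {
            if current >= self.limit {
                return false;
            }
            match self.in_flight.compare_exchange_weak(
                current,
                current + 1,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => return true,
                Err(actual) => current = actual,
            }
        }
    }

    /// Release a slot once the message has been handled.
    fn release(&self) {
        self.in_flight.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let cap = MsgCap::new(2);
    assert!(cap.try_acquire());
    assert!(cap.try_acquire());
    assert!(!cap.try_acquire()); // third concurrent message is rejected
    cap.release();
    assert!(cap.try_acquire()); // a slot freed up again
    println!("cap behaves as expected");
}
```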

Every step is another step closer.

65 Likes

First !!! My first first :slight_smile: And first first of the SAFE network first year :sunglasses: :crossed_fingers:

23 Likes

second that

so cool! rock on!

17 Likes

I'll take third, as third time's a charm.
Welcome back to the team, and we are all looking forward to the year ahead.

14 Likes

OSes (Windows particularly) do not like tons of small files.
For example, i2pd has this problem: its peerProfiles directory with 75,590 files holds 8.7 MB of data in total,
but placing them in the filesystem requires 295 MB of space.
I hope that you will not do this. It would be a step backwards.
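The 295 MB figure follows directly from cluster rounding: each tiny file still occupies at least one whole cluster on disk. A quick sketch reproducing the arithmetic (assuming a typical 4096-byte NTFS cluster and no resident-in-MFT storage for small files):

```rust
/// On-disk size of a file when the filesystem allocates whole clusters.
fn on_disk(file_size: u64, cluster: u64) -> u64 {
    ((file_size + cluster - 1) / cluster).max(1) * cluster
}

fn main() {
    let files: u64 = 75_590;
    let cluster: u64 = 4_096; // typical NTFS cluster size
    let avg_file: u64 = 8_700_000 / files; // ~115 bytes of real data each

    let allocated = files * on_disk(avg_file, cluster);
    // 75,590 files x 4096 B = ~295 MiB allocated for ~8.7 MB of data.
    println!("allocated: {} bytes (~{} MiB)", allocated, allocated / (1024 * 1024));
}
```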

8 Likes

How to prevent this from being used for DoS attacks? Or is that somehow not an issue?

9 Likes

Happy New Year and I wish you a great 2022 :partying_face:
What does it mean for the perpetual web?

6 Likes

Thx for the update Maidsafe devs

Totally forgot that today was the magic day :partying_face:

Happy new year to all

And may this year be the year that we go totally berserk

Keep hacking super ants

9 Likes

Maybe it's possible to have a smart user interface that looks at data before upload and asks the user if it's okay to 'tar' the files (or similar), so in return the user can save hundreds or thousands of percent on storage fees? Edit: or is all data stored in a blob file in any case, so no worries about small files?

Nice update ants – Thank you!

7 Likes

Happy new year, everyone.
I wish you a new year in which your dreams come true. Thx all

12 Likes

We will cap updates for sure, but that's a separate issue right now. Also, the entries themselves are already capped, and this change will lead us to cap them further, so no files in registers, only pointers.

12 Likes

Happy new year!

About the following:

You said you are going to explain more later, but I can't help wondering if this self-contradictory state of things makes any comnets à la @Josh impossible, or at least bound to fail on the first split? That would be a pity. Maybe I am misunderstanding something here?

Another thing I am wondering is why the PR tests for the CLI are always failing even though the CLI has seemed to work on the comnets? Is this a case of "too rigorous testing"?

9 Likes

Yes, we are looking deeper at what we can measure in an eventually consistent network, and at ways to make the tests complete soon enough for us.

14 Likes

New year new chances! Good work MaidSafe :smiley:

10 Likes

Nice update team maidsafe. Hoping for a network of some kind this year, if not never mind, you’ll get there. Keep plodding on!

Happy new year to all :blush:

15 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! Keep the magic going in 2022! :racehorse:

I keep reading about "Web 3.0" - is that what Maidsafe is working on? How are they related? :racehorse:

4 Likes

Web 3.0 is a nebulous idea of a blockchain backed, sort of private, redo of the Internet. Right now it really means nothing. The Safe Network could certainly fulfill the stated goals of Web 3.0, but it isn’t a “Web 3.0” project.

5 Likes

It’s a pertinent observation.

There are some pros and cons with the two options implemented so far.

On NTFS with a 4096-byte cluster size, there is about 4x overhead (the Sled db caused about 2x).

But without using Sled batching, it's faster to write to the dir hierarchy... interestingly enough!
Largely because Sled seems to be compressing. But not only that, I think.

Sled batching, though, wipes the floor with both of them (orders of magnitude faster).

13 Likes

Or better yet, some numbers:

Db comparison

456 GB SSD, NTFS 4 kB cluster size
Intel Core i7-10750H CPU @ 2.60GHz
16 GB RAM

| | Chunks | Size (B) | On disk | Time | Compression |
|---|---|---|---|---|---|
| **Sled Db** | 100k | 1024 | 2.05x | 282 s | 2.35% |
| | 100k | 4096 | 2.05x | 282 s | 3.94% |
| | 1M | 1024 | 1.95x | 392 min | 2.32% |
| **Dir hierarchy store** | 100k | 1024 | 4.0x | 117 s | 45.81% |
| | 100k | 4096 | 1.0x | 113 s | 0.0% |
| | 1M | 1024 | 4.0x | 149 min* | 59.61% |
| **Sled Db Batching** | 100k | 1024 | 2.90x | 6.0 s | 2.31% |
| | 100k | 4096 | 2.64x | 8.5 s | 2.35% |
| | 1M | 1024 | 2.85x | 272 s | 2.27% |

(*Compression took another 2.5 hours, giving ~300 min comparable time.)

Note: the Compression column shows by how many percent the resulting files shrank when zipped afterwards. A low number means they didn't compress much, i.e. they were more optimally stored to start with.

11 Likes