How fast, how large (deterministically sized nodes)

I’m sat here pondering (while I’m waiting on some test nodes to start) the ideas that some folk on here have punted w/r/t fixed-size nodes… or rather, deterministically sized nodes.

If we wanted to adjust the size of nodes without a magic number (or with far fewer of them)… how much might we want to increase (minimum) storage capacity every split?

I know there are a lot of variables in this (node size, node count, upload speed, etc.).

Does anyone fancy having a stab at some modelling around that :man_shrugging: ?

9 Likes

Wish I was qualified to give an opinion on that one. I definitely don’t want a magic number.
So if you have a theory on what’s best, put up a testnet and I’m sure we will all be happy to hammer it to see what works best.

2 Likes

I’m not sure this imagining of mine is entirely free of magic numbers.

But if we started w/ 1 GB nodes… how far would that go, vs. +10% a time?
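
Something like this back-of-envelope sketch, perhaps, assuming sections double at each split and a fixed node count per section (both of those are assumptions, and all the numbers are purely illustrative):

```rust
// Rough capacity model. Assumptions (mine, not network constants):
// every section splits into two, each section holds a fixed number
// of nodes, and every node is full.
fn main() {
    let nodes_per_section: f64 = 100.0; // illustrative only
    let mut node_size_gb: f64 = 1.0;    // start at 1 GB per node
    let growth: f64 = 1.10;             // +10% minimum size per split

    for split in 0u32..=20 {
        let sections = (1u64 << split) as f64; // sections double each split
        let capacity_tb = sections * nodes_per_section * node_size_gb / 1024.0;
        println!(
            "split {:2}: {:>9.0} sections, {:5.2} GB/node, ~{:.1} TB total",
            split, sections, node_size_gb, capacity_tb
        );
        node_size_gb *= growth; // deterministic bump for the next split
    }
}
```

With those made-up numbers, the per-node size only reaches ~6.7 GB by split 20, while total capacity is dominated by the doubling of sections, so the growth percentage may matter far less than the split count.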

3 Likes

If you start with 1 GB at the beginning and go up 10% per split, it’s worth a try, as at least it’s not a static magic number that would trip us up in the future.

Let’s test it and see how she goes.

Also, if anyone else has another idea, let’s test that as well, if it’s possible to set up.

1 Like

Can anyone explain what this question is about?
If a split happens, does the user’s HDD magically become 10% larger?

3 Likes

I think it means that if a split happens, the user’s max capacity becomes 10% larger (if they actually have the physical HDD space).

3 Likes

So the network decides, instead of the user, how much space they want to share?
And makes such decisions in unpredictable ways (the user can’t predict when a split will happen).

This presumes that the user can always set the ultimate limit and SAFE will only use what it needs/wants up to the user-defined limit.
Very early days for this…
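
In other words, something as simple as this sketch (names hypothetical, just to pin down the idea that the network requests and the user’s limit caps):

```rust
/// Hypothetical sketch: the network requests a target size after a split,
/// but the user-defined limit is always the hard ceiling.
fn committed_space(network_target_gb: u64, user_limit_gb: u64) -> u64 {
    network_target_gb.min(user_limit_gb)
}

fn main() {
    // After a +10% bump the network wants 1100 GB, but the user capped it at 1000.
    assert_eq!(committed_space(1100, 1000), 1000);
}
```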

3 Likes

I thought it meant the maximum they could provide, which would rise with the network size.

3 Likes

I think we are all second-guessing @joshuef here and should wait for him to clarify

2 Likes

A related question that can help your decision: what are you aiming for in the final network?

As an example, how do you see the implementation of 10^18 bytes (an exabyte) of data storage?

  1. thousands of nodes, each storing PBytes of data => then you need large nodes with fast growth
  2. billions of nodes, each storing GBytes of data => then you need small nodes with slow growth
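
For scale, the rough arithmetic behind those two options (node sizes picked purely for illustration):

```rust
fn main() {
    let target: f64 = 1e18; // 10^18 bytes = 1 EB

    // Option 1: large nodes at ~1 PB each.
    println!("{:.0} nodes at 1 PB each", target / 1e15); // -> 1000

    // Option 2: small nodes at ~1 GB each.
    println!("{:.0} nodes at 1 GB each", target / 1e9); // -> 1000000000
}
```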

IMO the latter is better for decentralization and robustness against attacks, but I have the feeling that @MaidSafe targets the former. A few hints make me think this:

  • IPv6 is not handled
  • New nodes are accepted only when needed
  • Nodes cannot run on mobile phones
2 Likes

I believe MaidSafe targets option 1, based on many discussions here over the years.

6 Likes

The one “problem” is whether the increase in minimum size would suddenly make running a node inaccessible to a segment of node holders.

This could occur for different reasons. One would be phones, or phone-like hardware such as SBCs. If I were to take some parts I already have, like an SBC (say, a Raspberry Pi) and an old drive, at what point would that become unusable because the minimum size is now larger than the spare space on the drive?

I would suggest that the set size be something that can survive longer term. Do we accept that hardware is not expected to be suitable once it is 8 or 10 years old, and allow the minimum size to suit equipment of that age? E.g. if x GB was the common drive size 10 years ago, then the minimum size has to allow for x/2 GB.

Just bringing up the desire to make SAFE as accessible as possible, even for those wishing to run a node who are not able to buy new hardware.

4 Likes

Would it be possible to have tiered nodes? I.e. the node runner picks the size from a set of sizes (tiers).

The idea being that somehow the storage can be monitored easily.

1 Like

Yea, that’s the design. Magic numbers and magic hard drives … or perhaps not :wink:

To me this is advantageous for many reasons.

I think the confusion may be over how we supply storage. It is for me right now :slight_smile: What I mean is: make the node process so light you can run perhaps hundreds on a big machine.

Then folk with small machines run only a single node.

However, the archive node idea is still there, and I think it has loads of merit. But initially, to get rapid adoption across as many devices as possible, small nodes make sense. It also parallelises downloads and uploads much more (each node is a separate channel, etc.).

This is a good point to be at again, as we see the testnets stabilising and look at what might make them better. It is tweaking, for sure, but we want to make these tweaks as public as possible and get as much input as possible from the community on these points.

I feel everyone can get involved in these meta discussions easily, and we will all learn a lot. I think there is a working setup, and then there is a working setup that allows even mobile phones to contribute. Also archive nodes etc., but debating this is great. The code part here is not the blocker; it’s all about a few settings that for now are manual, but when we agree on the best path we can try and make those automatic, at least I hope so.

7 Likes

We have played with this idea a bit but not got too far, because measuring storage is the hard part.

It’s hard, though not impossible. In fact, we removed the space-reporting code yesterday, but we could not trust it anyway; it’s really hard to measure. The closest so far is an archive tier, I think, where we can have huge nodes that can report how much of the address space they cover. This is something I have been working on myself, along with other ideas.

  • So we take the stance that we don’t trust nodes. That’s cool.
  • We also have a fixed tier, so elders can know when that tier is full or filling up, and split etc.
  • We have an archive tier where nodes can be as large as they want, but are measured in address space coverage. When they are asked for data in that space, they risk being penalised for non-delivery. Also, they need to store in that space, or report a smaller range to cover as the network fills up.

So I think we can achieve a two-tier approach with some simplicity.
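
As a very rough sketch of the archive-tier idea (everything here is hypothetical and simplified; real addresses are XorNames, not u64s):

```rust
/// Hypothetical archive node: it advertises a covered slice of the address
/// space rather than a byte count, since byte counts can't be trusted.
struct ArchiveNode {
    /// Inclusive range of the (simplified) address space this node claims.
    cover_start: u64,
    cover_end: u64,
}

impl ArchiveNode {
    /// The node is accountable for any address inside its claimed range:
    /// failing to deliver data there risks a penalty.
    fn is_responsible_for(&self, addr: u64) -> bool {
        addr >= self.cover_start && addr <= self.cover_end
    }

    /// As the network fills, a node that can no longer store its whole
    /// range reports a smaller one instead of silently dropping data.
    fn shrink_coverage(&mut self, new_end: u64) {
        assert!(new_end >= self.cover_start && new_end <= self.cover_end);
        self.cover_end = new_end;
    }
}

fn main() {
    let mut node = ArchiveNode { cover_start: 0, cover_end: u64::MAX / 4 };
    assert!(node.is_responsible_for(42));
    node.shrink_coverage(u64::MAX / 8); // report a smaller range
    assert!(!node.is_responsible_for(u64::MAX / 4)); // no longer covered
}
```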

5 Likes

Excellent news. For after release.

Now can we concentrate on a simple single tier for now, to get to a working, if not entirely full-featured, release ASAP? I keep getting nasty whiffs of feature creep.

2 Likes

A way to think of this: we release a GUI (hot tip: Roland knocked out a quick GUI this week, more to come).

So this GUI runs your node. If we make the number of nodes invisible, then we can ask how much space, in increments of 5 GB, you want to offer. Then the GUI runs that many nodes, if possible. Or we also offer the number of nodes to run as an option. In addition, we can have the option to run a single archive node, etc.
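
As a trivial sketch of that GUI logic (the 5 GB increment is from above; the helper and its behaviour are just illustrative):

```rust
/// Hypothetical GUI helper: turn a user's space offer into a node count,
/// assuming each small node is a fixed 5 GB.
const NODE_SIZE_GB: u64 = 5;

fn nodes_to_run(offered_gb: u64) -> u64 {
    offered_gb / NODE_SIZE_GB // whole nodes only; leftover space is unused
}

fn main() {
    assert_eq!(nodes_to_run(23), 4); // a 23 GB offer runs four 5 GB nodes
}
```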

The archive nodes will likely earn more than a small node but less than the equivalent number of small nodes (who knows yet); but when the network calls on them, archive nodes should be rewarded even more, as they are then covering potential data loss. So an archive node is:

  • A value for the network
  • A bit of a gamble for revenue efficiency
6 Likes

Yes, many features will come post-release, as we are focusing on releasing with minimal features (but we have to have DBCs, payments, rewards, and data storage/retrieval at minimum).

On release, we will likely not satisfy all the fundamentals, but those will have to be fulfilled before a v1.0.

Please don’t anyone feel that these discussions about direction are feature creep in any way; but if we design in a way that blocks these ideas from being possible, it’s a ton of rework along the way. These are design discussions, and we are dead keen that they are public, and they will involve future work that is post-release.

I know we all fear feature creep, and many folks will always accuse us of that (I know you are not), but it’s far from it.

9 Likes

Screenshots or it didn’t happen :slight_smile:

If we run zillions of tiny nodes on phones, will we not need IPv6 sooner or later, and likely sooner?
I see a revived market in stolen phones of a certain spec…

2 Likes