How fast, how large? (deterministically sized nodes)

After some more brainstorming, good magic numbers for A, B, and C may become readily apparent given the power-law growth in manufactured disk sizes per decade.

Another simple relationship may be more appealing, though it could yield ridiculously large values rather quickly:

RequiredNodeSizeInGB = A^(B*NodalAge) + C
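To show how quickly that exponential form runs away, here is a tiny sketch with completely made-up values for A, B and C (nothing has been proposed, they are placeholders only):

```python
# Illustrative only: a, b and c below are placeholders, not proposed values.
def required_node_size_gb(nodal_age: int, a: float = 2.0, b: float = 0.5, c: float = 10.0) -> float:
    """RequiredNodeSizeInGB = A^(B * NodalAge) + C"""
    return a ** (b * nodal_age) + c

for age in range(0, 21, 4):
    print(f"age {age:2d} -> {required_node_size_gb(age):,.0f} GB")
# age  0 -> 11 GB ... age 20 -> 1,034 GB with these made-up constants
```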

1 Like

Every contribution is valuable?.. Over time, those with little to contribute could be considered less valuable relative to others, and those able to offer a bigger contribution could be preferred, or grouped, so the older network ages. Why impose a minimum, with all the arbitrary formulae that tempts? If nodes of a certain size gravitated towards each other, within a broad order of magnitude, then the network could overlay itself new on top of old: new nodes associating more often with new, while the older nodes, and perhaps the weaker elders, age and those with better capacity rise.

That reads like a maximum bound rather than a minimum, and it makes sense to ramp up trust over time.

2 Likes

Food for thought:

6 Likes

Interesting indeed!

2 Likes

Isn’t this the old method of promotion? Keep promoting until the person can no longer perform the job they are promoted to. This was a major problem in old institutions, where seniority meant that most managers ended up performing their jobs poorly.

You could end up with nodes that perform fine until the point where they are “promoted”, at which point their node size can no longer keep up with the bandwidth, while smaller, younger nodes could be taking some of the load if node size were more evenly spread out.

It’s a good idea though, since older nodes have performed well in the past and are trusted not to misbehave. But eventually the node will be too large to relocate in a reasonable time.

It would be better to have more nodes rather than keep increasing size until one falls over. Obviously not all nodes will have this problem, but not all long-lasting nodes will have great disk space or bandwidth.

6 Likes

… Peter principle.

Also if the parallel stands up in any way…

The most recent estimate predicts that bacteria account for over three quarters of all species of life on Earth.

Small can be a useful flexibility to retain.

There’s also perhaps the liability of being too big to fail, if large nodes are rooted in one kind of hardware.

Not sure I yet understand the need to drive the minimum upwards… the strong should survive on merit and the weak can continue to contribute where they can… perhaps later those weaker ones might be tasked differently, e.g. holding that last copy that is slow to retrieve but always delivers, because there are so many small, slow nodes. Guessing too much at what the problem is…

2 Likes

Yes, but it is also easy to be demoted; incompetence is kept under control via demotion. If a resource proof were required before promotion, there would be a better guarantee that a node could handle the extra work, minimizing the frequency of demotions.

The min size requirement could also be optional for nodes with elder status if desired.

1 Like

That could work. It wouldn’t need to be a resource ‘proof’, just a test, because it is in the node owner’s interest that their node is not over-promoted. So it’s fair to assume a node won’t overstate or understate its capability.
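For what it’s worth, here is a rough sketch of the kind of local self-test that could sit behind such a check: time a scratch write to estimate disk throughput. The 256 MB sample size and any promotion threshold you compare against are illustrative assumptions, nothing specified anywhere.

```python
# Sketch of a local "can I cope with promotion?" self-test: time a scratch
# write to estimate sustained disk throughput. Sample size and any threshold
# you compare against are made up for illustration.
import os
import tempfile
import time

def disk_throughput_mb_s(sample_mb: int = 256) -> float:
    """Write sample_mb of random data to a temp file and return MB/s."""
    block = os.urandom(1024 * 1024)  # 1 MB of random bytes
    with tempfile.NamedTemporaryFile() as f:
        start = time.monotonic()
        for _ in range(sample_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.monotonic() - start
    return sample_mb / elapsed

if __name__ == "__main__":
    print(f"~{disk_throughput_mb_s():.0f} MB/s sustained write")
```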

Excellent idea at first sight. @maidsafe would this be a major hassle to implement?

It sounds good, but we would need to detail this thoroughly, as it is guaranteed to have the usual edge cases that all these changes have.

For instance, we need to find the appropriate values for the levels if they are increased, then work out all the detail. That detail will be what defines the complexity.

My gut says this is not a massive change in terms of the algorithms, but those details are critical and will not be found in a quick 5-minute guess. We will need to spend a lot of time agreeing on them, or somehow setting them and testing them in a testnet (very hard to test when there is no payment and folk upload junk at huge rates, as that throws the economic advantages out the window). However, it would be great to work this out in significantly more detail in relation to node age.

In parallel, I have been trying to break node age :smiley: Well actually, I am trying to remove the age and relocation part, but it’s proving to be incredibly hard. Node age is way more powerful than it seems. However, @jlpell, there would still be age as such, but it would be the position in the list of section members ordered by first seen. So no age as an integer, but a position in an ordered list.
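A minimal sketch of that “position in an ordered list” idea, with a hypothetical Member type and first_seen field standing in for real section state (the actual ordering key used internally isn’t specified here):

```python
# Minimal sketch of "age as a position, not an integer": order section
# members by when they were first seen and use the index as the rank.
# Member and first_seen are hypothetical stand-ins for real section state.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    first_seen: float  # e.g. a timestamp or consensus round number

def rank_by_first_seen(members: list[Member]) -> dict[str, int]:
    ordered = sorted(members, key=lambda m: m.first_seen)
    return {m.name: rank for rank, m in enumerate(ordered)}  # rank 0 = oldest

members = [Member("a", 300.0), Member("b", 100.0), Member("c", 200.0)]
print(rank_by_first_seen(members))  # {'b': 0, 'c': 1, 'a': 2}
```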

7 Likes

21 posts were split to a new topic: Node age and relocation

For elders to have an accurate estimate of node sizes, this also assumes/requires that our hashing/sharding mechanism is evenly distributing chunks across nodes, correct? The testnet demonstrated that it presently does not.

iiuc, there are two parts to this:

  1. specify an algorithm that evenly distributes chunks. (hopefully simple enough)
  2. enforce that clients actually use this algo when storing. (hard, perhaps not possible?)

I’ve missed a lot of convo, and maybe this is already solved…?

2 Likes

It does.

Not sure the testnet refutes this as yet, but early Kademlia test networks like Gnutella showed that it took up to 2000 nodes to get to an even distribution. It should distribute as long as SHA3 is secure and we also ensure node IDs are actually random and not targeted. This last part is where we could have distribution issues, but there is work there.
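As a toy check of the hashing side of that argument, bucketing SHA3-256 addresses of random “chunks” by their top bits comes out near-uniform. This only shows that chunk addresses spread evenly; it says nothing about node IDs being genuinely random, which is the caveat above.

```python
# Toy check of address uniformity: hash random "chunks" with SHA3-256 and
# bucket the addresses by their top 4 bits (a stand-in for section prefixes).
# Shows the even spread of chunk addresses only; node ID randomness is a
# separate question and is not tested here.
import hashlib
import os
from collections import Counter

n_chunks = 100_000
buckets = Counter()
for _ in range(n_chunks):
    digest = hashlib.sha3_256(os.urandom(64)).digest()
    buckets[digest[0] >> 4] += 1  # 16 buckets

expected = n_chunks / 16
worst = max(abs(count - expected) / expected for count in buckets.values())
print(f"worst bucket is within {worst:.1%} of its expected share")
```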

3 Likes

Why we need bigger testnets before we make any huuuge changes IMHO

Are we close to being able to demand SNT for uploads?
Is there any point in me^ working on a trivial faucet that would distribute a small amount of SNT to requesters, who can then use that to upload data for testnets? It exercises DBCs and may mitigate flooding of testnets by well-intentioned but clumsy gits like me, and someone who slaps up 40 GB of mp3s. HNY BTW @neik :slight_smile:

Perhaps we might get better “buy-in” to the testnets if they had more features?
I see a faucet working something like this…

Someone (who might just be known as @Josh) fires up a testnet. They then put up a page that asks for your username, walletURL and PK. You connect to the testnet (no need to join as a node just yet), then run safe keys create --for-cli && safe wallet create

Copy n paste relevant values into the form.

On hitting submit, the values are copied into an SQLite (or whatever) DB. The genesis DBC is copied into a wallet and a very small % is transferred to $FAUCET_STASH.
Then a wee script can be run to send a small amount from $FAUCET_STASH (say sufficient for a 2-300 MB upload, discuss) to everyone in the SQLite DB.
Then users can put and get as usual. Perhaps make it so that when you publish a valid URL to data you just paid to upload, you get sent another wee bonus of SNT to encourage further (limited) uploads?
But the vast majority of the genesis DBC is retained for other, yet to be decided, DBC fun and games.
None of this precludes anyone from joining as a node, but it may make it more engaging for those who are not yet ready to jump into running their own node.
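To make that flow concrete, here is a very rough sketch of the submit-then-payout loop described above. The SQLite schema, the payout amount and the send_from_faucet_stash stub are all placeholders; there is no real safe CLI or DBC call in here.

```python
# Rough sketch of the faucet flow above. Schema, payout amount and the
# send_from_faucet_stash stub are placeholders; no real safe/DBC calls here.
import sqlite3

DB = "faucet.db"
PAYOUT_SNT = 0.01  # made-up amount; sizing is the "discuss" part above

def init_db() -> None:
    with sqlite3.connect(DB) as db:
        db.execute("""CREATE TABLE IF NOT EXISTS requests (
            username   TEXT PRIMARY KEY,
            wallet_url TEXT NOT NULL,
            public_key TEXT NOT NULL,
            paid       INTEGER DEFAULT 0)""")

def submit(username: str, wallet_url: str, public_key: str) -> None:
    """Called when someone hits submit on the form."""
    with sqlite3.connect(DB) as db:
        db.execute("INSERT OR IGNORE INTO requests VALUES (?, ?, ?, 0)",
                   (username, wallet_url, public_key))

def send_from_faucet_stash(wallet_url: str, amount: float) -> None:
    # Placeholder: in practice this would shell out to the safe CLI or use
    # whatever wallet/DBC API exists once pay-to-upload lands.
    print(f"would send {amount} SNT to {wallet_url}")

def pay_unpaid() -> None:
    """The wee script: pay everyone in the DB who hasn't been paid yet."""
    with sqlite3.connect(DB) as db:
        rows = db.execute(
            "SELECT username, wallet_url FROM requests WHERE paid = 0").fetchall()
        for username, wallet_url in rows:
            send_from_faucet_stash(wallet_url, PAYOUT_SNT)
            db.execute("UPDATE requests SET paid = 1 WHERE username = ?", (username,))
```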

^ and my pal ChatGPT

PS - Am I OK to use that logo here?

3 Likes

Been pondering my uploading antics as we creep closer and closer, and thinking that perhaps I need to find a public data set :slightly_smiling_face:

But I didn’t want to ram the testnet with randomly generated guff.

Could be fun to try out your stash for cash @Southside

But I’d like to see a rerun of the 50 GB static upload, stored till done without a crash, as @danda mentioned before. Let’s set a goal and achieve it before moving the goalposts.

2 Likes

Yes absolutely, first things first, but no harm in looking (somewhat) ahead.

1 Like

Don’t know how far off payments are, but I’m always happy to find a reason for a comnet :smiley:
I feel we need more of a reason for it now than in the past, when it was mainly to tide us over while waiting on official tests.

2 Likes

The post above is just me thinking out loud before I put an initial commit up to GitHub - safenetforum-community/faucet

Right now all that’s there is a wee README.

Lots to do to make this work, apart from the obvious waiting for the devs to implement Pay to Upload, so no need for another comnet right now.
So I will put up some code from me and ChatGPT and you can all tell me what you would do better while we are waiting.

Is there a better way to limit comnet uploads and test Pay to Upload? Probably… But until then maybe we can play with this.

1 Like

Perhaps you misunderstood: I am excited for any reason you or anyone can come up with that justifies a community test; I am just failing to find a reason for one. I like your plan, let’s hope it is possible soon.

2 Likes

I do this a lot. In the meantime I will test what I have on a baby-fleming built with the latest.

Validation checks

  • Usernames on the forum are assumed to be between 2 and 50 alphanumerics

  • A safe walletURL is of the form safe:// plus exactly 70 lowercase alphanumeric chars

  • A public key is always exactly 97 hex chars long

Can anyone confirm/deny the above assumptions or point me at the right place to look on GitHub? I’ve sketched them as regexes below.
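In the meantime, here are those three guesses written out as regexes. They encode the assumptions exactly as stated above; they are not confirmed formats.

```python
# The three assumptions above as regexes. They encode the guesses exactly as
# stated; they are NOT confirmed formats, so check against the real repo.
import re

USERNAME   = re.compile(r"^[A-Za-z0-9]{2,50}$")    # 2-50 alphanumerics
WALLET_URL = re.compile(r"^safe://[a-z0-9]{70}$")  # safe:// + 70 lowercase alphanumerics
PUBLIC_KEY = re.compile(r"^[0-9a-fA-F]{97}$")      # 97 hex chars (odd length, worth double-checking)

def looks_valid(username: str, wallet_url: str, public_key: str) -> bool:
    return all((USERNAME.match(username),
                WALLET_URL.match(wallet_url),
                PUBLIC_KEY.match(public_key)))
```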

1 Like