Initial price for upload

Long thread, many thoughts…

As was written somewhere above, distributing coins too fast will lead to high inflation, hurt current coin owners and lower trust in the project. Distributing coins too slowly will slow down growth of the network and possibly create the risk of being overtaken by a faster-growing copy.

I think coin distribution should not be time based. During the years of development, I saw the effort to avoid any need for time in the network, and I think we should follow that with coin distribution too.
When I try to imagine the state of the network 1 year or 3 years from launch, it is almost impossible, but when I try to imagine the network with 100000 users or XY PB of data stored, that at least gives something to work with.
So my proposal is to base the speed of releasing coins on some set of network-size rules or milestones. Completely ignore the time factor, because in this equation I feel time is the hardest variable to estimate.

7 Likes

This is very true. It should be based on (if anything) uptake for the network or resource requirements, and that is influenced by so many factors. Time is just a kludgy guess at best and ignores (or worse, folk think they consider) those factors.

This issue of the speed of distribution of the network's wallet is a bit more subtle. If we imagine there were only tokens in the hands of users, then it is simpler: folk pay, farmers receive etc. That is a demand/supply balance. However, the network's bag upsets that balance in ways that I feel are not always positive.

There may be a way to consider it a bit differently; @mav is poking about there as well.

Way back at the start I was planning that the network wallet would pay for those who could not afford it. Of course proving that is hard, but things like flattening human inequality in terms of ability to pay can be nice. We try with farmers being cheap to run and hopefully earning for those folk, so it's not all one-sided.

Anyway, all random thoughts at this stage :wink: But I agree with your main point: time is irrelevant here as something to code into an algorithm, and perhaps even as a thing to measure, unless we can look back in time and say, hey, that took X days.

5 Likes

Is this a concern about a section getting compromised? In the case of farming rewards, isn't the security of the transfer mechanism between clientwallet->networkwallet and networkwallet->nodewallet fundamentally sound with the AT2-DBC-BST approach? I've always considered having the network as an intermediary within the network economic model a genius part of the design. It would be a shame to see it go. It's not entirely clear if this is what you are in fact proposing though…

1 Like

What I am looking to do is make the network's ability to change data redundant. Not that I think sections will be compromised, but imagine they could be and it still does not affect money supply/flow. It's a long way off, even if we don't have a network wallet, as even DBC needs a mint/bank/re-issuer etc., but it's a step closer to what I feel would be a holy grail. It looks impossible right now, but I live in the impossible when I can, as it's exciting there.

tl;dr Just looking at reducing attack vectors where possible.

10 Likes

This is a nice, concise explanation of the problem, thanks for making this clearer.

I’m a bit unclear on the word redundant though. Redundant means ‘excessive, superfluous, surplus, unnecessary’. It seems to me by redundant you probably mean the network cannot possibly change data (so replace redundant with impossible) rather than meaning an additional source of change. Is that right? Just want to make sure I’m on the same page.

5 Likes

Yes that’s exactly on point. I would love it to be a notary only, at least in terms of data handling.

Of course we confirm it did hold the correct data etc., and rewards are paid on provably and constantly doing so. It's much more of a holy grail, I reckon, but a fine target, and it feels within our grasp.

4 Likes

That sounds amazing. When you say within grasp, does that mean you feel there is a really good chance this could happen? Or more of an outside chance that it could happen at the moment?

Cheers :+1:

2 Likes

I feel like it should be set to what the Amazon price is now, and then let go, so the market of farmers and users can properly price it to market value. Kind of like placing a paper boat on water: you give it initial calibration and let it do its thing.

2 Likes

A far outside chance, but a chance nevertheless.

1 Like

What is the price of Amazon AWS?

If you use AWS, you already know there are various kinds of options.

Basically, if you want to use the DDoS defence “Shield”, you have to spend at least 3,000 dollars per month. AT LEAST :slight_smile:

And if you want to make web pages load very quickly, you should use the “Elastic” services. That is also at least 3,000 dollars per month.

And as far as I know, the data type also affects the spend. So, if you want to go deeper, it is a dev's field. Not easy work.

“Just setting the price equal to AWS” is something… hmm… very hard.

2 Likes

This post explores a way for the network to never own anything, not even the unrewarded tokens.

The problem with section wallets is that the section owns the tokens, and if the section colludes or is hacked then those tokens can be stolen.

The aim of the idea in this post is to not have the section responsible for any tokens at any time. If anyone tried to steal unrewarded tokens it would be seen as an invalid operation and rejected.

Taking inspiration from bitcoin script, particularly the transaction puzzle (notice the transaction does “not contain any signatures”), we can see the power of extending beyond only using signature verification for transaction validity. If you're not familiar with P2SH, I recommend reading Pay To Script Hash Explained and miniscript, since an understanding of these will make the rest of this post much simpler to follow (I wrote this post to be understood as-is, so the background knowledge should be a nice-to-have rather than essential, hopefully!).

The idea is to create a bunch of reward wallets where the secrets needed to transfer the funds are unknown but can be discovered after some amount of work/time.

For a simple example, create 2^32 distinct reward wallets containing 1 token each. Instead of having those wallets require a signature for a valid transfer, each wallet has a unique puzzle as the condition for valid transfer. Those reward wallets can only be spent by solving a puzzle. E.g. the puzzle might be to concatenate a section-signed PUT and the recipient node id, then hash that and see if it starts with a certain number of zeros; if it does then the transfer is valid and the reward will be transferred to the recipient node wallet (i.e. these inputs form a solution to this puzzle). If the PUT data is not signed by a valid section, or the recipient node id is not currently part of the network, or the hash of all those values does not start with the right number of leading zeros, or <any of the conditions expressed in the puzzle is invalid>… the transfer would be invalid.
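
The validity check described above can be sketched in a few lines. This is a hedged illustration, not actual network code: SHA-256, the byte concatenation and the brute-force loop are my own stand-ins for whatever hash, event encoding and real signed network traffic would actually be used.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte:
            return bits + 8 - byte.bit_length()
        bits += 8
    return bits

def puzzle_solved(section_signed_put: bytes, node_id: bytes, difficulty: int) -> bool:
    """The transfer is valid only if hash(PUT || node_id) has enough leading zero bits.
    (A real check would also verify the section signature and node membership.)"""
    digest = hashlib.sha256(section_signed_put + node_id).digest()
    return leading_zero_bits(digest) >= difficulty

# Brute-force search just to demonstrate the mechanic; in practice the
# "attempts" would be genuine signed uploads, not a counter.
solution = next(
    i for i in range(100_000)
    if puzzle_solved(b"put-%d" % i, b"node-a", 10)
)
```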

Let’s look at a reward wallet that specifically requires 5 leading zeros. This would mean that after about 2^5 uploads we’d expect one of those uploads to have a high chance of forming a solution to that puzzle. We could have other wallets with a puzzle requiring ten leading zeros, some with twenty leading zeros, etc. to have varying levels of difficulty to claim the reward and spread them out over time.
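
To put rough numbers on that: each candidate upload is effectively an independent trial, so a puzzle needing d leading zero bits takes about 2^d attempts on average (a geometric distribution). A quick back-of-envelope sketch:

```python
# Each attempt matches d leading zero bits with probability 2**-d, so the
# expected number of attempts before a solution is 2**d (geometric mean).
def expected_attempts(difficulty_bits: int) -> int:
    return 2 ** difficulty_bits

# 5 bits -> ~32 uploads, 10 bits -> ~1024, 20 bits -> ~1 million
schedule = {d: expected_attempts(d) for d in (5, 10, 20)}
```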

Reward wallets are not owned by anyone in particular (the puzzle is the owner) and each reward can only be claimed after some predefined number of network operations (statistically speaking).

That’s the simple version. There’s lots of design considerations. But the idea is lots of different puzzle-owned reward wallets with varying degrees of difficulty to claim them so that they’re spread out over time. Make sure the solution to the puzzle depends on some verifiable network activity.

Some benefits:

It can be set up in a verifiable way. As a simple example, the first reward wallet is at address hash(“Safe Network Rewards Wallet 0”) with 1 token and puzzle difficulty of 10. The second reward wallet is at hash(“Safe Network Rewards Wallet 1”) with 1 token and puzzle difficulty 11. The third reward wallet is at hash(“Safe Network Rewards Wallet 2”) with 1 token and puzzle difficulty 12. Keep iterating until all 2^32 reward wallets (minus ICO tokens) are created. Choosing the reward amounts and puzzle difficulties will need some consideration (discussed more later). Anyone can take the initial wallet creation conditions and iterate to check those wallets really exist on the network.
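
A minimal sketch of that derivation, assuming SHA-256 for the address hash; the seed string, the 1-token amounts and the 10 + i difficulty schedule are just the illustrative values from the example above, not a spec:

```python
import hashlib

def reward_wallet(index: int) -> dict:
    """Deterministically derive the i-th reward wallet from a public seed phrase."""
    name = "Safe Network Rewards Wallet %d" % index
    return {
        "address": hashlib.sha256(name.encode()).hexdigest(),
        "tokens": 1,
        "difficulty": 10 + index,  # illustrative schedule from the post
    }

# Anyone can re-derive the schedule from index 0 upward and check each derived
# address really exists on the network with the stated balance and puzzle.
first_three = [reward_wallet(i) for i in range(3)]
```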

Everyone can know exactly how many rewards have been claimed (and remain unclaimed) at any point in time. Look through all the reward wallets from the start; when we find the first unclaimed one, we know how many prior rewards have been claimed.

Different reward wallets might contain different amounts and different difficulties. Not sure exactly how to approach this. Maybe uniform amount with varying difficulty is best? Maybe having some amounts be large and some small might be better? (discussed more later)

Some drawbacks:

We need to change the mutation mechanism to go beyond just verify(signature) for validity and move toward a blockchain-style scripting system. Luckily we may get away with having very few operations, but it would need to be more than just the single verify-signature operation. I don’t see this as a drawback; I see it as a necessary feature from the start, but it will require more design and dev time. If we expand our definition of ‘ownership’ beyond ‘signatures’ toward ‘scripts’, the idea seems doable.


Going a bit deeper into the puzzles.

I’ve been using leading zeros as the proxy for difficulty-to-claim-the-reward. In reality this is just a big number, exactly like how bitcoin difficulty works (it looks visually like leading zeros when binary encoded, but actually it’s about the whole number being less than the target difficulty, not just the leading zeros). This gives fine-grained resolution on the rate of rewards.
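
In code, the bitcoin-style version of the check looks something like this (SHA-256 assumed purely for illustration):

```python
import hashlib

MAX_HASH = 2 ** 256  # SHA-256 digests interpreted as 256-bit integers

def meets_target(data: bytes, target: int) -> bool:
    """Bitcoin-style check: the digest, read as a big integer, must be below
    the target. A target of MAX_HASH >> d is equivalent to "d leading zero
    bits", but any value in between is also a valid, finer-grained target."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") < target
```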

It’s worth chaining the rewards so they must be claimed one after the other rather than multiple rewards at the same time. Say we have 10 reward wallets with puzzles that range from 1 leading zero to 10 leading zeros. Say the wallets needing 1 and 2 leading zeros have already been claimed. Everyone is now aiming to claim the 3 leading zeros reward. Someone flukes a 9 leading zeros solution. We want them only to be able to claim the 3 leading zeros reward, not 3, 4, 5, 6, 7, 8 and 9. Maybe we can prevent this by requiring the previous reward solution to be one of the inputs to the puzzle, so we can’t use the solution to the 3 leading zeros puzzle to claim the 7 leading zeros reward (we’d need the solution to 6 leading zeros).
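
Here's a toy sketch of that chaining idea; the difficulties, event names and use of SHA-256 are all placeholders of mine. Because the previous reward's solution is a mandatory puzzle input, a luckily high-difficulty hash can't skip ahead: every claim only makes sense against the current chain tip.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte:
            return bits + 8 - byte.bit_length()
        bits += 8
    return bits

def claim_valid(prev_solution: bytes, event: bytes, difficulty: int) -> bool:
    """A claim must hash the PREVIOUS reward's solution together with the new
    network event, so rewards can only be claimed strictly in sequence."""
    digest = hashlib.sha256(prev_solution + event).digest()
    return leading_zero_bits(digest) >= difficulty

prev, chain = b"genesis", []
for difficulty in (1, 2, 3):  # three chained rewards of rising difficulty
    event = next(b"event-%d" % i for i in range(100_000)
                 if claim_valid(prev, b"event-%d" % i, difficulty))
    chain.append(event)
    prev = hashlib.sha256(prev + event).digest()  # next claim depends on this one
```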

We could set the puzzle to use various network events as inputs. There might be puzzles requiring section-signed PUT data as the input (reward on PUT). Other puzzles might require section-signed GET data as the input (reward on GET). Some puzzles might be based on section-signed relocation data (reward on relocate). Or based on joining. Or departing. Or section split. Or any signed network event. These could be parallel reward tracks (eg claim the next PUT reward only by supplying the previous PUT reward, and simultaneously the next GET reward could be claimed by supplying the previous GET reward, so there’s a chain of PUT rewards and a separate chain of GET rewards operating simultaneously); or the rewards could be completely sequential (eg the next reward is based on GET but it depends on the previous reward which was based on PUT).

How can we design the puzzle difficulties so the rate of claiming rewards is not too fast and not too slow? This is a tough question. As a first take, we can look at current potential network bandwidth and upload rates and design the reward difficulties around that. Assuming bandwidth will improve maybe 10x over 10 years but not 1000x over 10 years, we have some idea of what sort of range the difficulty of the puzzles should lie in. If we end up rewarding slightly faster or slower than we predicted, that’s not so bad. Just so long as it’s not 1000x faster or slower.

Another option for the difficulties might be to have an ‘escape’ clause in the puzzle. You can either solve the zeros puzzle to claim the reward, OR, if the network is a certain age, let maidsafe set a new difficulty for any unclaimed rewards. Allowing an adjustment after a certain amount of network time has passed allows rewards that are too difficult to be made easier if needed. I think this is not a great idea, since it makes that adjustment key too valuable, but it’s part of the conceptual toolbox so might be handy in some other form.

If multiple nodes come up with a solution simultaneously and try to claim the same reward at the same time, how is this dealt with? This is like the problem of orphan blocks in bitcoin. Is it also like the problem of many people in an office simultaneously trying to edit the same document?

I think this idea might solve the ownership problem and make the network really not own anything, just be a notary. Have I missed some aspect that would make this idea not work well?

14 Likes

To allow the network to gradually “pay out”, this could be prefix based? So I mean as the prefix grows (network grows), then we pay out some wallets. So rather than just leading zeros, we say at each prefix we pay out X. We can set up the wallets all at the start for prefixes up to (for example) 10 bits long. After that, all network “owned” coins are paid out.

Then the leading zeros changes to matching prefix + some other calc.

A nice thing would be that the earlier prefixes pay out faster (as it’s easier to match a short prefix).
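
If I'm reading the prefix idea right, the gating condition could be as simple as a prefix match; this is only a sketch of my interpretation, not a worked-out design:

```python
def claimable(wallet_prefix: str, section_prefixes: set) -> bool:
    """A reward wallet earmarked for a binary prefix unlocks once the network
    has split far enough that some live section's prefix extends it, so short
    (early) prefixes unlock first and payouts track network growth."""
    return any(s.startswith(wallet_prefix) for s in section_prefixes)

# A small, hypothetical network with three sections.
live_sections = {"00", "01", "1"}
```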

Still thinking, but you are onto something here and it’s quite exciting. The quoted point is vital IMO.

So getting there removes a lot of issues over startup etc. It would make us all happier.

In any case, this is a great idea @mav. I like the simplicity and the ability to prove what is happening with a simple code inspection. We need to consider it much more deeply, but it’s a nice route for sure.

12 Likes

A further “tool” we have: with section chains we can validate the BLS public keys used at each prefix as the network grows. So we can validate that a particular key or set of keys existed in a prefix. That prefix perhaps has a limited supply it can pay out, limiting (but not forbidding) payouts.

Anyway just more thoughts.

5 Likes

Yes, the puzzle format could be quite detailed depending how far we want to go with it.

One thing that might be interesting is setting the difficulty for 10% of the puzzles so they are probably all solved within about one year. The other 90% of the puzzles are set to a very high difficulty, not solvable for at least 100 years into the future, and include a condition at the one year mark that allows adjusting the difficulty of unsolved puzzles (by ‘one year’ I mean something measured in network-time like ‘1 million uploads achieved’ or ‘section prefix of 10 bits exists somewhere’ etc). When the adjustment period for the remaining 90% of rewards becomes active, look at how fast or slow the first 10% of rewards was claimed (i.e. judge how well the difficulty setting approximated one year’s worth of rewards) and then adjust another 10% of the remaining 90% so they would be solved over the next year, leaving 80% at an impossible 100 year difficulty level. Keep doing this each year until all rewards are issued. This would reduce uncertainty in the rate at which rewards are claimed, since we would know it would take about ten years (10% every year).
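
The staged schedule can be stated as a one-liner; the 10% step and the milestone trigger are the illustrative numbers from the paragraph above, not settled values:

```python
def tunable_percent(milestones_reached: int) -> int:
    """Percentage of the total reward supply with a realistic difficulty:
    10% is live at launch, and each network-time milestone (e.g. '1 million
    uploads') re-tunes another 10%, with the rest parked at an effectively
    unreachable difficulty until its turn."""
    return min(100, 10 * (milestones_reached + 1))
```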

This is risky since it involves maidsafe in the adjustments, but it’s an interesting way to address the uncertainty of the growth rate with periodic feedback. Maybe there’s a way to remove maidsafe from the equation and keep the adjustment mechanism? It would be nice not to have to forecast the entire reward system up front.

In the end I feel like issuing rewards slightly too fast or too slow is not that big a deal, so I’m probably leaning away from having adjustment clauses in the puzzles and toward just setting them all to a predetermined difficulty from the start.

8 Likes

Thanks @mav, very well explained and it looks like a brilliant idea. It seems sound, and while we need to think about difficulty, I’m inclined to agree we should try to avoid it being controlled by humans, only by the network. If we can do this it looks like a very robust solution indeed.

Maybe there are attacks, but they aren’t obvious to me and it looks a hard system to game for very little reward. Maybe the danger would be if individual rewards became more valuable due to appreciation?

Anyway, really interesting and cleverly put together. :clap:

5 Likes

Clever, mav. However, the negative that I see with something like this approach is that you now have a two-tiered system for protecting data. Token balances get a different security model than actual user data. If this extra complexity is warranted for tokens, then why not all data? IMO some of the recent talk has shifted to assuming that close group consensus will be broken and section elders compromised, so mechanisms need to be in place to work without CGC. This is a little confusing/concerning. If the emergent behavior of the network cannot be trusted, then how can any operation be relied on?

10 Likes

I’m not sure it’s that different. What’s changing here is how the tokens are issued, not how transactions or data are mutated.

So the aim is to just leave consensus to validate mutation of data or wallets, while moving as much of the rest of the work as possible from the section to the edges.

For data and transactions that means assembling the necessary information, signing it, then passing to the section to be checked and signed off.

For token issue, again I think the work is done by the claimant, which will detect when it has solved the puzzle, and if so sign the result and send it to the section to have the solution recorded and the resultant wallet balance signed.

So to be valid, a data mutation or token issue ‘event’ needs to have been signed by a section, and any node ignoring that will be detected and punished.

So nobody can just take over a section and mutate data or issue tokens without detection, because the actions can all be checked; if a section is misbehaving this will be detectable, just as with a dodgy node, and the dodgy nodes of the section can be identified and punished by other sections (e.g. dropped from the routing of other sections, or having data allocated elsewhere).

A lot of this is reading between the lines so if I’m explaining this wrongly or stating alternative facts :wink:, please someone let me know.

5 Likes

Thank you to all who have and will contribute to this thread, the thinking on show is truly impressive.

3 Likes

You have it 100% on the nose there Mark. Make the network valuable and important, but not God :wink:

7 Likes

In a strict sense I see your meaning and would agree, except that in truth it requires more context than that. The data is chunked and encrypted, so no one knows its value or importance, whereas section funds have a clear monetary value and are controlled exclusively by elders, while adults help with data. If elders get compromised then yes, things go bad, but malicious actors would only want to control section elders for the monetary gain; take that away and what would be the incentive? Then it’s not just expensive and hard but zero reward. So I have to strongly disagree here. Take away the monetary incentive to control a section and I’d argue the data is probably safer too.

@mav I think a puzzle mechanism, whether sequential release or based on section prefix/network growth, is brilliant, and the best way to measure/monitor the total rewards distributed and avoid creating large honeypots to be targeted.

8 Likes