Update 17 August, 2023

Thanks to all of you who have mucked in with testing ThePriceIsRightNet. It really is a massive help, and once again you’ve uncovered some unexpected issues.

Indeed, this one has been a bit of a curate’s egg - good in parts.

Starting with the good, all our nodes are still alive with no serious CPU or memory issues. As planned, the store cost has increased as nodes filled up, which is great, although it’s not been massively consistent.

Which brings us to the slightly whiffier bits. The dreaded “Network Error Could not retrieve the record” is most likely down to bugs in the payment and verification system, with clients making insufficient payments in return for storing chunks.

If the payment is insufficient, some nodes will reject the chunk, meaning it fails to replicate across the close group. Repeated attempts also fail because the client reuses the initial payment proof. The payment is still insufficient, so nodes reject the chunks again, and the client can’t retrieve the chunks since they were never properly stored. The node price may indeed be right, but the client isn’t calculating it correctly at the moment.

So… we’re working on improving the cost calculation, checking current prices on reuploads, and paying per chunk. This should help ensure sufficient payments to all nodes, meaning everyone’s happy.
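For the curious, here’s a minimal sketch of that pay-per-chunk flow. Every name and type below (`StoreQuote`, `current_quotes`, `PaymentProof`, and so on) is a hypothetical stand-in to illustrate the shape of the logic, not the actual client API:

```rust
// A minimal sketch of the planned pay-per-chunk flow. All names here
// are illustrative stand-ins, not the real client API.

struct ChunkAddress([u8; 32]);

struct StoreQuote {
    node_id: [u8; 32],
    price_nanos: u64, // the price this node currently charges
}

struct PaymentProof {
    node_id: [u8; 32],
    amount_nanos: u64,
}

// In the real network this would ask the chunk's close group for its
// current prices; stubbed out here.
fn current_quotes(_addr: &ChunkAddress) -> Vec<StoreQuote> {
    unimplemented!()
}

fn upload_with_fresh_payments(chunks: &[(ChunkAddress, Vec<u8>)]) {
    for (addr, _data) in chunks {
        // Re-quote on every attempt, including re-uploads, so a stale
        // (now insufficient) payment proof is never reused.
        let quotes = current_quotes(addr);

        // Pay each node its own quoted price, per chunk, so no node in
        // the close group is underpaid and rejects the record.
        let _proofs: Vec<PaymentProof> = quotes
            .iter()
            .map(|q| PaymentProof {
                node_id: q.node_id,
                amount_nanos: q.price_nanos,
            })
            .collect();

        // ...attach the proofs to the chunk upload here...
    }
}
```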

Elsewhere, client bootstrapping is still sloooow despite reducing the number of nodes from 20 to 8. We have found a bug and have a fix in place there, which is speeding things up nicely in our testing.

Other stinkies are still hanging around. We continue to see nodes ending up with no stored records; there’s still a slow memory leak. These are not showstoppers and we’re investigating them while moving on in other areas. Unfortunately, we still can’t use QUIC as a transport because the libp2p implementation still wrongly identifies nodes behind NAT as public, and appears to have a larger memory footprint than TCP at the moment.

General progress

@Anselme has been looking at streamlining DBCs. The current model is to some extent a hangover from the previous pre-libp2p design which used section keys as a point of reference. These no longer exist and we are now looking at a flatter setup that behaves more like a decentralised ledger, with transactions stored on the network and with nodes tracking unspent coins. More about this in a future update.

@Joshuef and @Roland were debugging the logic clients use to estimate storage costs before making payments, including inconsistent cost calculations between clients and nodes.

There’s also been progress towards a more granular pay-per-chunk model.

@Qi_Ma has been digging into the issue of incorrect storage cost calculations and payment flows, and found an issue when distance ranges are not set up properly.

@aed900 continues to look at relays and hole punching, seeking to add this to QUIC and TCP transports. Still a work in progress - we really look forward to cracking this one.

@Bzee is also working in this area, including looking at potential workarounds to the current AutoNAT challenges with integrating QUIC.

And @Chriso continues to work on automating the deployment of testnets, and making improvements to the UX based on all the valuable feedback from the testnets.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

49 Likes

Excellent work team! Can’t wait until the next testnet :smiley:

15 Likes

Great job team Maid! Looks like the problem I had in this testnet and the past one has been uncovered. I’ll be looking forward to the next one!

Thanks for all your hard work as always.

Cheers. :beers:

16 Likes

It’s starting to look good.

18 Likes

Thank you all for your hard work. Looking forward to testing the next iterations.

15 Likes

Great work to all the team and testers. I am looking forward to joining in when I get some free time.

14 Likes

Maidsafe’s fire!

12 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

And also, the forum members helping with the testnets too! :horse_racing:

13 Likes

@joshuef Here are 2 issues I can see.

  1. If some nodes decide to profiteer then they will not accept as many chunks, causing a long-term failure to get 8 (currently) copies of the record stored. And if this happens to too many records then the network will seem unstable, unless the client coughs up the price. Associated with this: if the client blindly ups the payment amount to the profiteering node, then what is to stop that node charging massive amounts (e.g. others charge 10 nanos and the profiteering node charges 1x10^8 nanos (0.1 SNT))?

and

  2. Let’s say that 4 nodes charge 4 nanos and the other 4 charge 32 nanos. Then what is to stop the client only sending the record to those 4 cheapest nodes - in the final system you only need one copy anyhow. Additionally, as nodes go offline, replication will ensure there are 8 copies, so in the end the person has effectively put up the record and only paid the 4 cheapest nodes to do it. Obviously using a modded client (a rough sketch of which follows below).
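To make point 2 concrete, such a modded client would only need a few lines - something like this toy sketch (all names made up):

```rust
// Toy sketch of the "modded client" in point 2: given the close
// group's quotes, pay only the k cheapest nodes and let replication
// eventually restore the full 8 copies for free.

type NodeId = [u8; 32];

fn cheapest_k(mut quotes: Vec<(NodeId, u64)>, k: usize) -> Vec<(NodeId, u64)> {
    // Sort by quoted price in nanos, ascending, then keep the k cheapest.
    quotes.sort_by_key(|&(_, price_nanos)| price_nanos);
    quotes.truncate(k);
    quotes // pay (and upload to) only these nodes
}
```
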
14 Likes

The economics here are going to be tricky. My intuition tells me that either we have a system that can be gamed, or we have a system wherein individual clients or nodes can and will either win or lose with each trade (data for SNT).

The trick will be in limiting the gains and losses - probably averaging them out.

Seems like the team is already onto this. I expect there are some who might think they can have some control over the pricing to prevent losses, and I’d warn that such a view is a pitfall into a system that can be gamed.

Hopefully just stating the obvious.

11 Likes

If the price is in flux, perhaps expecting to pay more and receive some change back might be better, reflecting that the data is the most important thing and payment is a secondary priority? Exact payments might be hard to land unless the opposite is done, with a slight normal overpayment over the actual cost… so slightly less profit where flux is occurring? Perhaps it’s not that simple :thinking:

5 Likes

What if… the network employed a tax curve, akin to the “Laffer curve” in economics?

In the beginning, where all nodes are empty, a 0-1% tax is split up and given to all nodes (or at least those in the payer’s close group), with the remainder going to the node that actually stores the data.

And then as the network fills up toward 90%, that tax goes up toward 50%.

Then from 90% capacity that tax quickly goes back down to 10%.

In this scenario, all nodes are encouraged to add capacity up to 90% network fill, but not to add significantly more beyond that level. If it does start to go beyond that level, new players are incentivized to join.

It also helps to ensure a revenue stream for long-term permanent storage of data.
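To make that concrete, a toy version of the proposed curve (the breakpoints are just the made-up percentages above, not tuned values):

```rust
// Toy version of the proposed tax curve: ramp the tax from ~1% when the
// network is empty toward 50% at 90% full, then drop back to 10% beyond
// that to pull in new capacity. All breakpoints are illustrative.
fn tax_rate(fill: f64) -> f64 {
    if fill < 0.9 {
        // Linear ramp from 0.01 (empty) to 0.50 (90% full).
        0.01 + (0.50 - 0.01) * (fill / 0.9)
    } else {
        // Past 90% capacity the tax quickly falls back to 10%
        // (modelled here as a simple step).
        0.10
    }
}
```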

probably just over-thinking here

The drawback, maybe, is that it tends to keep out new players (not new nodes, but new owners of nodes), as existing nodes that already store data are incentivized to maintain oligopoly control by adding new capacity themselves. That’s perhaps good for stability though, so may not necessarily be a drawback and may not have much of an impact either way.

Maybe this idea can be gamed in ways that can’t be overcome - and hopefully someone will show why - or maybe it’s decent or can be improved with more thought.

lol … a few minutes after I posted I started to see the gaps and mistakes in my thinking here. Will keep thinking on it, maybe there is something here still … be back in a bit.

Okay, so I’ve corrected some inaccuracies and enhanced the explanation. Still seems workable to me… open to hearing thoughts.

BTW, not much thought went into my given percentages. They could be anything reasonable, and perhaps that 50% tax should be a much smaller (or larger?) amount. But even at a 90% tax, I suppose the cost to store would at most be nearly double compared to a system without this tax. But again, probably not, as node owners would be subsidized to an extent by their total nodes… so hard to say.

1 Like

If we have fault detection (we will; we have a basic impl from the prior network setup), then nodes not supplying data that the majority of the close group does have will be punished / removed. Thus, validating clients should naturally remove profiteers simply by requesting data they’ve added from the majority of nodes.

They could, but as long as the client doesn’t have to pay it, it’s okay, I think? They should be removed reasonably swiftly. (And we should have checks for this when fault detection is in.)

We can check that we have a majority of payments at PUT time. If we don’t, we can drop it. That a client may choose the cheapest 5 only is okay, I think.

A naive initial impl says you can pay CURRENT_STEP or LAST_STEP, essentially, allowing for smoother transitions. Could be gamed, but if you risk losing money by trying to pay too low (you do), I think this should be okay?
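Putting those two checks together, a node’s PUT validation might look roughly like this sketch (names and shapes invented for illustration, not the actual node code):

```rust
// Rough sketch of the validation described above: a payment is
// acceptable if it meets the current price step or the previous one
// (allowing smooth transitions), and the PUT is accepted only if a
// majority of the close group received such a payment.
fn payment_is_valid(paid_nanos: u64, current_step: u64, last_step: u64) -> bool {
    paid_nanos >= current_step.min(last_step)
}

fn accept_put(
    payments_to_close_group: &[u64], // amount each close-group node was paid
    current_step: u64,
    last_step: u64,
) -> bool {
    let valid = payments_to_close_group
        .iter()
        .filter(|&&paid| payment_is_valid(paid, current_step, last_step))
        .count();
    valid > payments_to_close_group.len() / 2
}
```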


Great to have more thinking on this from everyone! What’s nice is the current system is simple enough that we can make changes here easily and then get to testing them out. Feels like a great place to be!

10 Likes

Sounds good.

The issue will be the behaviour of the client automatically getting the updated price and just paying it. What is there to limit the client to agreeing only to a tiny increase before paying? If user interaction is needed then the user will get info/decision overloaded and end up just cancelling all changes, or approving them all just to get their upload done.

You might have missed the point that this would be a modded client that only writes to the cheapest nodes and skips the rest. This way it wins many times over, because replication as nodes go offline will eventually see 8 copies exist. It could even be an option, so important files follow the standard functionality and public movie uploads just write to 3 nodes for each record.

6 Likes

The thinking here is, if you upload only to a few nodes you do risk losing data. It’s best to play the game and be assured, or even pay more for extra comfort; paying less is a lottery-style store of data. You may have paid less, got nothing for it, and then have to upload again.

The key point we are trying to get to here is super, super simple, and I hope we all bash on simplicity at its very base level to find that wonderful pattern that is natural and deceptively powerful. So any additional levers, conditions etc. should be pushed against as much as possible (I know you are not adding levers here, Rob - just a message to us all).

12 Likes

Of course, if I were uploading a meme or YACV (yet another cat video) I’d be tempted to just pay a couple of nodes, and if it eventually gets lost then so be it.

I would rely on the fact that Safe only needs one node to have a copy for it to successfully download, and rely on replication eventually causing 8 nodes to exist with a copy. (One of the 2 nodes dies and replication kicks in.)

2 Likes

One of the 2 nodes dies and the other is honest :wink:

It’s a lottery, but if everyone then said “I will only pay 2 nodes” and it did work, then it works out OK. If the payment was 2 of 8 instead of 8 of 8 and it was a level playing field, the supply/demand balance algo still works.

4 Likes

Another reason to get the close group down to 5, IMO, actually. (Maybe less?)

5 Likes

Guess so :wink: 2 for stuff - forum replies/images, etc. - that can be lost without hurt, and 8 for real things that you want to keep. That’s one modification of the client code that might take off.

2 Likes