Safecoin resource (consumption) model

I repeatedly argued that everyone pays the same price, and I only considered those options in order to give feedback to those who wanted to discuss them. But okay, let’s leave it at that until we have some solid proof from real life. I’ve no doubt most of my assumptions will be proven on testnet3 or during beta testing. :slight_smile:

If I understand your question correctly (I will start my sentences using this template :-)), I opened a topic on that here; not many takers so far:
https://maidsafe.org/t/granularity-of-charge-cost-of-rewrite-modify-and-caching-mechanisms/1135

At first glance, it appears that way. The Network does not “discount the user” if the PUT chunk has already been PUT, if you get my meaning.

I raised this issue when asking about users being charged for a blockchain file that has been de-duplicated. The answer was yes, they still have to pay.

This means every PUT is charged, regardless of whether the file already exists. If exceptions were made, I am not aware of them. Maybe I am wrong and there is an exception for the data owner.

I think this is the case. Any edits to a file from your local machine should not take effect until it is PUT on the Network. If your computer crashes before you hit “update/upload”, you will have lost your changes. I don’t see how you can edit file chunks while they exist on another person’s computer on the Network. Dropbox is different because you are given edit rights on their server to directly modify your files.

Basically, we’re doing a new upload every time and paying the full price for that file, even if we previously uploaded it.

Solution
If the owner can delete their data, the NSL balance would be credited.

  1. Say your NSL is 100 GB and you have uploaded 100 GB worth of data.
  2. Your NSL balance is now 100/100.
  3. Any PUT above 100 GB would incur a charge.
  4. Delete 10 GB of data and your NSL balance is now 90/100.
  5. Now you’re free to upload 10 GB of data at no cost.

This only works if data can be deleted by the owner.
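A minimal sketch of what this accounting might look like in code, assuming a hypothetical per-account ledger. The `NslAccount` name, the GB units, and the 100 GB allowance are purely illustrative, not anything the network actually defines:

```python
class NslAccount:
    """Hypothetical per-account ledger tracking PUTs against a free allowance (NSL)."""

    def __init__(self, allowance_gb):
        self.allowance_gb = allowance_gb  # free storage allowance
        self.used_gb = 0.0                # data currently counted against it

    def put(self, size_gb):
        """Count a PUT; only the newly added excess above the allowance is chargeable."""
        over_before = max(0.0, self.used_gb - self.allowance_gb)
        over_after = max(0.0, self.used_gb + size_gb - self.allowance_gb)
        self.used_gb += size_gb
        return over_after - over_before  # GB that would have to be paid for in safecoin

    def delete(self, size_gb):
        """Credit the allowance when the owner deletes data (only possible if deletes exist)."""
        self.used_gb = max(0.0, self.used_gb - size_gb)


# The worked example from the list above:
acct = NslAccount(allowance_gb=100)
assert acct.put(100) == 0      # 100/100 used, nothing chargeable
acct.delete(10)                # balance drops to 90/100
assert acct.put(10) == 0       # fits in the freed allowance, still no charge
```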

1 Like

Ugh, this is an awful mess and not communicated well. I wonder how many people here realise this - I’ve been involved since January and didn’t get this until now. It now makes sense that data is never deleted and all versions of a file are preserved!

This is the first thing I learned where instead of wow, that’s amazing, my feeling is what, that’s crap! :slight_smile:

We have to be careful how this is communicated because there are significant consequences, as well as big scope for misunderstanding which never goes down well.

Unless PUTS are really, really cheap, normal office usage is going to have to be done on local machines and synced to the network periodically, so neat in-app solutions and a local file-sync app are going to be sought-after early wins. This kind of changes SAFE from being a super-computer in your pocket to a massive cheap backup system. Ugh! This is horrible.

1 Like

Yes like free :smiley:

1 Like

David, I’m having a hard time understanding what the bloody plan is. Please can you lay it out. How can we evangelise when the charging mechanism is - for me - so confused.

If PUTS are free, how are users charged? Can you lay out the detail?

1 Like

I am trying to let everyone poke around this :smiley:

I think that PUTs are free; after you have PUT X (which is a percentage of the network average), you are required to pay in safecoin for another chunk of network space. This average will be increasing (hopefully very fast) and the cost in number of safecoin decreasing (balanced by the supply/demand side on the farmers’ end of the deal).
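Read that way, the charging rule might look roughly like the sketch below. The 80% figure, the block size, and the function name are assumptions for illustration only; the real thresholds and prices are network-determined and move with supply and demand:

```python
import math

FREE_FRACTION = 0.8  # "a % of the network average" -- 80% is only an example figure

def chunks_of_space_to_buy(account_put_gb, network_average_gb, block_size_gb):
    """How many extra chunks of network space an account would need to buy,
    given how much it has already PUT relative to the (growing) network average."""
    free_limit = FREE_FRACTION * network_average_gb
    if account_put_gb <= free_limit:
        return 0  # still inside the free allowance
    excess = account_put_gb - free_limit
    return math.ceil(excess / block_size_gb)  # paid for in safecoin at the going rate
```

As the network average rises, the free limit rises with it, which is the “moving target” described further down the thread.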

Farmers are rewarded on GET (it may be extended; I am interested in the ideas being thrown around). The GET will be judged against a modulo arithmetic number (the rank rate), and the peers near the farmer decide whether this GET request qualifies to earn a safecoin.
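One possible reading of the “modulo arithmetic number (rank rate)” idea, sketched below; this is a guess at the mechanism rather than the actual implementation, and the names are invented:

```python
def get_attempt_earns_safecoin(gets_served, rank_rate):
    """A farmer's GET attempt earns a safecoin roughly once every `rank_rate` GETs;
    the peers close to the farmer would verify the count and agree the reward."""
    return gets_served % rank_rate == 0
```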

A lot comes into play here: if a bad person uploads loads of data, farmers farm it (we can detect many bad data patterns etc. for deletion). As data increases, people get more space free (the average increases). If uploaded data is not read, it is archived (very cheap in network terms; bundled together in larger chunks, which are marked as deletable if we need space).

In terms of delete, there are some easy network options I have not shared yet. So you have a file, it has X copies on the net and you delete the last Y versions. The network can credit you for that space again (if we keep a tiny identifier of the number of chunks that version had), so you are not forced to pay for hundreds of versions back. This will be automated and likely cut off automatically at 4 versions (leaving the last 4 versions). Then you are not paying for lots of versions. This will eventually be settable per directory.
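A rough sketch of that automatic version pruning, assuming the network keeps only a small per-version chunk count; the 4-version cut-off comes from the post above, everything else is illustrative:

```python
KEEP_VERSIONS = 4  # the suggested automatic cut-off

def prune_versions(version_chunk_counts):
    """Given the chunk counts kept for each stored version of a file (oldest first),
    drop everything but the last KEEP_VERSIONS versions and return the number of
    chunks the owner's account would be credited for."""
    old = version_chunk_counts[:-KEEP_VERSIONS]
    kept = version_chunk_counts[-KEEP_VERSIONS:]
    return kept, sum(old)

# e.g. a file with 7 stored versions of 3, 3, 4, 2, 5, 5 and 6 chunks:
kept, credited = prune_versions([3, 3, 4, 2, 5, 5, 6])
# kept == [2, 5, 5, 6], credited == 10 chunks back to the owner's allowance
```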

That leads to a position where we can possibly ensure you only pay for what you use. I have been working this out in the background for a couple of months since we decided to abandon deletes (as that’s not really all that great), and I believe we can do that, but it’s kind of an optimisation in terms of launch.

This way we can manage deleted data and ensure you uploaded it without overburdening MaidManagers, and that way keep the network very fast and agile. This is one route to tiny farming units, by removing account info on certain personas.

TL;DR: initially free PUT, farmers rewarded on GETs, pay at a fraction of the network average and in chunks of space. Later on we can apply greater granularity so that safecoin is used more efficiently, as we measure actual used space per account better (as we can with the remote-side deletes I am talking about here).

5 Likes

That has been discussed in this topic:

My conclusion:

  1. You will probably need a local file system with a copy of data (or at least of your “hot” data).
  2. There are various approaches and mechanisms that will make certain workloads (including low-intensity write workloads) suitable for keeping a single copy on the Project Safe Network.

I don’t think it’s bad, but if you lived under the impression that you’d have a global file system with distributed locking, running over an extremely loosely coupled network with tens of thousands of clients dropping out of and joining the cluster every hour, well, then you had unrealistic expectations.

We have opted for a structured-data versioning pattern here (like a GitHub approach). It’s very interesting, but we are not pushing it just yet (although it’s core to the inner workings of some network components). The idea is that you cannot distribute real-time locking in an eventually consistent system, but we can return conflicts and version mismatches. In the edge case we can refuse updates to data if the number of branches allowed is zero. This can allow versions to arrive out of sequence and combine properly, etc.
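A toy model of the “refuse updates when the number of branches allowed is zero” rule, as I understand it; this is an interpretation for illustration, not the network’s actual structured-data type:

```python
class VersionedData:
    """Eventually consistent versioned value: no real-time locking, just
    version checks that surface conflicts for the caller to resolve."""

    def __init__(self, branches_allowed=0):
        self.branches_allowed = branches_allowed
        self.history = []       # accepted versions, in acceptance order
        self.branches_used = 0

    def update(self, parent_version, new_value):
        tip = len(self.history) - 1
        if parent_version != tip:
            # Not based on the latest version: accepting it would open a branch.
            if self.branches_used >= self.branches_allowed:
                return ("conflict", tip)   # version mismatch returned to the client
            self.branches_used += 1
        self.history.append(new_value)
        return ("accepted", len(self.history) - 1)
```

With `branches_allowed=0`, two writers editing from the same parent version will see the second write come back as a conflict, which is exactly the resolution burden described next.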

The issue with this is that two people editing a file then need some tool or work to resolve a conflict, which gets beyond a normal user fast. It is an interesting approach to distributed sync of data though.

It’s not an issue right now, but it will be for private shared data.

2 Likes

Thanks, I already guessed you’ve held lots back so the community can get our teeth into things :-). So I spoiled the party… a little. Personally I think the community is ready for the real deal so how about it? :wink:

Anyway, continuing this one…

I take this to mean there’s a free amount (NSL), which varies (hopefully grows fast) based on network average. The free amount is like a data upload “cap” [alarm bells ring] like those evil data charging telcos you mentioned on POD chat #3!

I know we don’t have hard answers to these questions, but I also think that we need some answers ready for the beta. For example:

  • what is the free NSL?
  • how much extra must I buy after I’ve saved up to the NSL (do we need to start saying “saved” not “store”?)
  • how much Safecoin will it cost per GB above the NSL?

I guess once users can delete, it is back to more like charging for storage, rather than charging for data saves (PUTS)?

The big plus for people is that while every file save uses up allocation, once data is there, there are no ongoing charges for keeping it there or for accessing it.

A problem though, is that unless PUTS are so cheap that people can effectively not worry about them for “office tasks” such as general editing, it skews how people see and think of this aspect of SAFE. SAFE becomes an ideal backup server, but feels like it might be costly for working data. This is why real numbers, ranges, or even estimates will be helpful.

Apps will be a different issue I guess, different kinds with different pros and cons. I never realised ANT Tech could be so complicated :slight_smile:

> Anyway, continuing this one…

OK spoiler :smiley:

> I take this to mean there’s a free amount (NSL), which varies (hopefully grows fast) based on network average. The free amount is like a data upload “cap” [alarm bells ring] like those evil data charging telcos you mentioned on POD chat #3!

Yes, but I expect this to be a moving target and not get in the way of average users. I hope so anyway.

> I know we don’t have hard answers to these questions, but I also think that we need some answers ready for the beta. For example:

> • what is the free NSL?

This will vary upwards as the network grows. I suspect a setting of 80% (beware, magic number!!) of NSL.

> • how much extra must I buy after I’ve saved up to the NSL (do we need to start saying “saved” not “store”?)

I see this as blocks, so NSL * X, where you can make X any integer.

> • how much Safecoin will it cost per GB above the NSL?

This is tied to farming rates, so it will vary, but again it should be the cheapest around, as it is for unused space.

> I guess once users can delete, it is back to more like charging for storage, rather than charging for data saves (PUTS)?

Yes, I think it is; it will be interesting to see if we ever care about deletes (although we can calculate it at minimal cost now).

> The big plus for people is that while every file save uses up allocation, once data is there, there are no ongoing charges for keeping it there or for accessing it.

Yes, it’s a lifetime deal :smiley:

> A problem though, is that unless PUTS are so cheap that people can effectively not worry about them for “office tasks” such as general editing, it skews how people see and think of this aspect of SAFE. SAFE becomes an ideal backup server, but feels like it might be costly for working data. This is why real numbers, ranges, or even estimates will be helpful.

At the moment there is a 3-second delay in case data is changing (i.e. a data change is a directory thing, so as a file closes we wait to see if there is another file, etc.; this is under review right now in the drive refactor). It’s possible to do some tricks here though (sign private chunks with a mutable key, delete those if you are the only owner, etc.).

> Apps will be a different issue I guess, different kinds with different pros and cons. I never realised ANT Tech could be so complicated :slight_smile:

Yes, you should see some of the inner algorithms; the edge cases are mad until we get the right algorithm.

3 Likes

Many have tried. Most failed. Remember this one? It worked, but not well enough.

In v1.0 I’d like to see the s/w be of high quality and “nicely behaving”. If the foundations are solid, it will be possible (and that’d be very good news, because oftentimes it is not) to add various nice features later on.

Yes, but (I know you know, this is for others) there’s good news here too:
a) There are some scenarios where even that can work fine.
For example, a group of authors can have a joint caching gateway that periodically pushes data out to the world, while the rest of the world/team can read that data off the MaidSafe network. (The gateway would be a regular file system, and many companies with such needs already own one.)
b) Some apps can deal with minor inconsistencies due to the lack of locking despite concurrent updates (example: http://www.pmwiki.org/wiki/PmWiki/FlatFileAdvantages).

1 Like

[quote=“happybeing, post:19, topic:1230”] I think @fergish’ glossary will help - please add.
[/quote]

Done.

2 Likes

Well said and thanks very much for that post, it helps loads. This community are really pulling together and arguing and innovating like true champs. I am again humbled.

2 Likes

I really don’t know what this means and so find it baffling. Could you contextualize?

1 Like

Thanks @dirvine, this short conversation has been very helpful. I can see some of the complexities and that it is incredibly difficult to tie down actual figures. Sounds like there’s significant scope for a lot of sanding down the edges and streamlining the system. Must be a bloody nightmare :slight_smile:

2 Likes

Yes, deletes meant keeping a lot of data at the managers of the connected client, i.e. we keep a hash of what they upload along with a size, and when they delete we can reduce their account. If you imagine millions of chunks per node, then the network cost is huge. With safecoin we took the opportunity to rid the network of this and went for: farmers will mind the data and be paid.
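For illustration, the per-client bookkeeping that deletes originally required might have looked something like this; the hash-plus-size record is taken from the post, but the class and method names are invented:

```python
class MaidManagerAccount:
    """Per-client record a MaidManager would have had to keep to support deletes."""

    def __init__(self):
        self.chunks = {}   # chunk hash -> size in bytes
        self.stored = 0    # total bytes counted against this client

    def record_put(self, chunk_hash, size):
        self.chunks[chunk_hash] = size
        self.stored += size

    def record_delete(self, chunk_hash):
        # Reducing the account on delete requires having kept every hash and size,
        # which is exactly the per-client state that became too costly at scale.
        self.stored -= self.chunks.pop(chunk_hash, 0)
```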

None of us were happy really; delighted to get a better network, but it felt like a cop-out. There is a solution now, though, which answers many questions. So private data (unique) will be private and delete-able (and obscured, so a whistleblower’s (or anyone’s) data can never be recognised, even if it’s a complete copy), but we may not push for that on day 1. Many reasons (speed mostly), but it may be a way to get fast growth if we remove all barriers to entry, at least at the start, when safecoin is not as available as it hopefully will be.

HTH

1 Like

Is this part also not for launch then? That would be a shame, because it’s a great answer to one of the concerns several people have raised (e.g. IP addresses being able to be linked with known files). It also allows us to champion the whistleblower, which I think is rather a good counter to “but paedophiles, but terrorists!”

Just so I’m clear: When you say “abandoned deletes” you’re just referring to abandoning the reduction of the tally of data stored by an individual user?

Like, I PUT 300 MB and then delete 50 MB. The actual data I’m storing is 250 MB, but the network “considers” I’ve used 300 MB?

Is that what we’re talking about?

Or are you saying that the data just doesn’t ever get deleted? That seems a real waste.

:wink: I never said that :smiley:

1 Like