Should storecost be based on PUTs or bytes?

I think the main concern in this thread is for when chunks are less than 1 MB. I guess “chunk divisibility” options would be a catchy phrase, kind of like safecoin divisibility. I argued above that the smallest “dust size” should be no less than 4 kB to match HDD sector sizes, and no less than 1 kB in extremely rare instances to match a good UDP packet size.

Seems like a way around this is for an app to cram lots of text messages, likes, or emails into a single AD (appendable data) that has a 1 MB allocation?

2 Likes

Having thought about it a bit more, I’d go for the minimum size for charging to be 128 KB, giving 8 different charge levels for a PUT.

And of course I still stick to fixed portion + variable portion.
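A minimal sketch of this tiering idea, assuming 128 KiB steps within a 1 MiB chunk (the post doesn’t specify binary vs decimal units, so the constants here are my reading); the helper name is illustrative, not any real SAFE Network API:

```python
# Sketch of the 128K-step charging proposal above. TIER and CHUNK use
# binary units as an assumption; charge_tiers is a hypothetical helper.

TIER = 128 * 1024          # 128 KiB charging granularity
CHUNK = 1024 * 1024        # 1 MiB maximum chunk size

def charge_tiers(size_bytes: int) -> int:
    """Number of 128 KiB tiers billed for one chunk upload."""
    if not 0 < size_bytes <= CHUNK:
        raise ValueError("size must be within a single 1 MiB chunk")
    # Round up to the next whole tier (ceiling division).
    return -(-size_bytes // TIER)

# A 3 KiB "like" and a 120 KiB post both land in the cheapest tier:
assert charge_tiers(3 * 1024) == 1
assert charge_tiers(120 * 1024) == 1
# A full chunk uses all 8 tiers:
assert charge_tiers(CHUNK) == 8
```

With 8 tiers there is a hard price floor (one tier), which is what avoids the “dust” problem mentioned below.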

1 Like

How did you arrive at that number?

1 Like

You have 10 choices. If the first choice is wrong, then one of the others is correct.

Just so there are not too many, and there are no “dust” issues. It allows posts with a small (in size) image to fit in the lowest-priced option. Also, the overhead cost is large, so going too small is of no benefit.

1 Like

I think what you’re getting at here is specifically about the extra 0.1, and yes at this stage 3.1 MB would cost the same number of PUTs as a 4.0 MB file. In reality the accounting is not so simple due to the details of self-encryption but essentially the concept of cost granularity is correct.

2 Likes

Ok, thanks @mav. So then it seems to me that storecost should be based on PUTs, not bytes because:

  1. We already have a “chunk” unit called a PUT.
  2. There is some fixed cost to each PUT, and thus very small uploads cannot economically be stored by individual byte.
  3. Charging by fixed chunk should incentivize app devs to store data more efficiently, in batches rather than in many tiny uploads.

Now, if it is judged that 1 MB is too big as a base unit, then perhaps we could consider defining a PUT to be something smaller, e.g. 1 kB, 10 kB, or 100 kB. Or call it a “centiPUT”; that’s just naming.

4 Likes

Rethinking that, perhaps an equation to cover all bases… maybe @neo’s way is better, but here is another go:

variables:
N = estimated network PUT overhead cost
S = file size rounded up to the nearest whole byte
F = base duplication × single-farmer cost estimate of one byte
F2 = base duplication × single-farmer cost estimate of one kB
F3 = base duplication × single-farmer cost estimate of one MB

F, F2, F3 breakout of cost is to allow farmers to incorporate their own overhead per unit size.

So Put cost = (S_MB * F3) + (S_KB * F2) + (S * F) + N, where S_KB and S_MB are the file size rounded up to whole kB and MB respectively.

Maybe the devil is in the details? What is missing? Is it too complicated? Is it too simple!?
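A sketch of that formula, reading the per-kB and per-MB components as applying to the size expressed in those units; all rates below are made-up placeholders, not real network values:

```python
# Hedged sketch of the proposed PUT cost formula. The rates passed in
# are illustrative placeholders; only the shape of the formula comes
# from the post: per-MB + per-KB + per-byte components plus overhead.
import math

def put_cost(size_bytes, n_overhead, f_byte, f_kb, f_mb):
    """Fixed portion (n_overhead) plus size-dependent variable portions."""
    s_kb = math.ceil(size_bytes / 1024)     # size rounded up to whole kB
    s_mb = math.ceil(size_bytes / 1024**2)  # size rounded up to whole MB
    return s_mb * f_mb + s_kb * f_kb + size_bytes * f_byte + n_overhead

# A 300 KiB file under illustrative rates:
cost = put_cost(300 * 1024, n_overhead=10, f_byte=0.001, f_kb=0.05, f_mb=2.0)
```

Note that N acts as the fixed portion and the three size terms as the variable portion, so this is compatible with the fixed + variable split mentioned earlier in the thread.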

It seems like charging only at a large fixed size (a full 1 MB chunk) will just end up driving the creation of hacks to consolidate data… plus PUTs for simple things like comments or likes will be overly expensive, lessening the ability of developers to create social media apps for the network.

There are all kinds of emergent effects from these fundamental decisions. IMO, the poll is too simple and I feel like this requires deeper thought and more discussion.

2 Likes

Yes.

It is better if there is a fixed ratio between a PUT and a quantity of bytes; then it is easy for a user to understand the cost of storage.

This is done to improve efficiency and should be encouraged.

Last I looked, mutable data and appendable data have built-in ways to break a chunk up into 1000 pieces. It seems like a pricing model that takes advantage of this would meet the needs of likes, chats, and small text messages.
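An illustrative sketch of that kind of app-level bundling: packing many small messages into one 1 MB allocation with up to 1000 entries. The `Bundle` class and its limits are assumptions based on the description above, not a real SAFE API:

```python
# Hypothetical bundling of small payloads (likes, chats) into a single
# allocation, mirroring the 1000-piece mutable/appendable data layout
# described above. Bundle is illustrative, not a real SAFE Network type.

MAX_BYTES = 1024 * 1024   # one chunk-sized allocation (assumed 1 MiB)
MAX_ENTRIES = 1000        # entry limit mentioned in the post

class Bundle:
    def __init__(self):
        self.entries = []
        self.used = 0

    def try_add(self, payload: bytes) -> bool:
        """Add a message if it fits both the byte and entry budgets."""
        if len(self.entries) >= MAX_ENTRIES or self.used + len(payload) > MAX_BYTES:
            return False
        self.entries.append(payload)
        self.used += len(payload)
        return True

b = Bundle()
assert b.try_add(b"like:post-42")                      # tiny message fits
assert all(b.try_add(b"x" * 100) for _ in range(999))  # fill remaining entries
assert not b.try_add(b"overflow")                      # 1000-entry limit reached
```

The point is that one allocation-sized payment then covers hundreds of tiny writes, which is exactly the likes/chats use case.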

2 Likes

The PUT cost is going to have to change as the cost of farming and network overheads change, otherwise there will be attack vectors targeting the economic imbalances that occur. The better the network is able to close the gap between reality and the algorithm’s estimation of reality, the lower the economic incentive to hack for profit, and so the smaller the attack surface.

I suppose that after running a test network for a time we would be able to generate a graph of cost versus data size for PUTs, and this would inform the user. But whether the ratio is fixed in a simple or a complex way, users would still just refer to a graph of the current cost, IMO.

I don’t think they will be calculating their put cost in their heads or on a calculator in any case, they’ll likely just budget so much over a week or month and then see how far it takes them, then add more if they feel it was worth it intuitively.

1 Like

In the past I always have equated the PUT concept with the upload of a single 1MB chunk. Let’s decouple this for a moment and consider the following fee schedule.

1 PUT = 4kB

Standard fixed chunk size = 1MB

1 Immutable data chunk costs 250 PUTs.

1 mutable or appendable data chunk costs 500 PUTs up front to create the data type on the network. These two datatypes can be updated or appended 250 times. Each time the data is updated 1 PUT is credited back to the client account.
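The fee schedule above, expressed as a small calculator. The numbers come straight from the post (1 PUT = 4 kB, 250 PUTs per immutable chunk, 500 up front for mutable/appendable with up to 250 updates, 1 PUT credited back per update); the function names are mine:

```python
# The proposed fee schedule from the post, as plain arithmetic.
# Constants are the post's numbers; the helpers are illustrative.

IMMUTABLE_CHUNK_PUTS = 250   # 1 MB / 4 kB in decimal units
MUTABLE_CREATE_PUTS = 500    # up-front cost to create the data type
UPDATE_REFUND_PUTS = 1       # credited back per update/append
MAX_UPDATES = 250

def immutable_cost(chunks: int) -> int:
    return chunks * IMMUTABLE_CHUNK_PUTS

def mutable_net_cost(updates: int) -> int:
    """Net PUTs after creation and `updates` in-place updates."""
    assert 0 <= updates <= MAX_UPDATES
    return MUTABLE_CREATE_PUTS - updates * UPDATE_REFUND_PUTS

assert immutable_cost(3) == 750
# A fully-updated mutable chunk nets out at 250 PUTs, the same as an
# immutable chunk, so heavy updaters are not penalised overall:
assert mutable_net_cost(250) == 250
```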

3 Likes

Well, this topic is concerned more with whether or not we put the minimum charge at the PUT cost of one chunk, without regard to the actual size. And it seems you are suggesting we have a variable PUT cost based on conditions, i.e. an algorithm for determining the current PUT cost. Well yes, we will, but will it be per chunk or according to the size of the data within the chunk? That is what I think you are suggesting.

I am sure that the client-side interface will give us an estimate of the number of PUTs available at the current PUT cost. Most likely it will be based on the full 1 MB chunk cost, whether partial PUT cost is implemented or not, as the dashboard cannot know your future uploading, so it is best to indicate the minimum number of chunks one can upload.


The other question of consolidation, well it depends on the application doesn’t it.

In any case it will benefit all around that any consolidation be done anyhow.

Now, as to consolidating many mails to different people: the way the current system will work, no, that is not possible, nor is a consolidation service, since the individual parts will have to be written by each person anyhow, and this means the same or more actual PUTs.

Consolidation of, say, forum postings also seems unlikely, since all the consolidation that can be done is making one reply in the thread rather than multiple, and the person can do that anyhow. But making replies to multiple threads and somehow consolidating them will not be logically possible, since they need to appear in different places.

At some stage there has to be a way to distinguish which reply goes to which topic. That needs some sort of index for each, and that means a PUT for each. It is much simpler all around to just make one post per topic, since it’s the same number of PUTs in the end.

tl;dr
If consolidation can be done (PUT) cost-effectively, then it should be done whichever way we do the charging, since it reduces PUTs. If not, then it will not save anything in the long run and will just add unnecessary complexity.

4 Likes

@TylerAbeoJordan on the cost of likes, comments etc I’m not sure we can say this makes social media apps less viable. Having a price floor that is high enough for people to notice could just as well improve confidence in the fidelity of the network, reducing spam likes and comments etc, or making people pause before commenting which may also affect the nature of what people share.

That’s just speculation, we don’t know the actual PUT cost anyway. I’m just suggesting that we don’t know the effect this might have on things such as likes, tweets or comments. It will be interesting to see, because at the moment there are ‘costs’ (in terms of our privacy and indirectly monetary) but they aren’t perceived as a monetary cost and most people including myself don’t think about them much when posting. I think that seeing your PUT balance decrease while spending time posting here, or on Twitter or Facebook will cause us to recognise that we are paying to post, tweet, like etc so I expect there will be an effect, but only if the cost is high enough to matter.

5 Likes

Wouldn’t this be a transfer of safecoin and not a PUT cost?

4 Likes

If the cost is small fractions of pennies, it should have the nice effect of deterring spam, while being practically unmetered for regular use.

Perhaps this opens the question on what the minimum cost to the network can be and whether this will change over time. This will no doubt be answered when we have a live network to experiment with and monitor though; no need to try to boil the ocean on some of this stuff.

2 Likes

At the application level, there should definitely be good scope for bundling data items together though. There will also be market forces at work to make this a desired feature where possible.

Even on a forum like this, there could always be the option to queue several replies and then push them at once. It may not always suit, but for those who browse infrequently, it may be sufficient.

Perhaps the apps could even be smart enough to bundle and send IM messages after, say, 30 s. So many times I get a string of one-line messages, each beeping my phone annoyingly. There is no need for that, and there could always be a flush option to send each instantly if required.

Maybe apps will just need to be a bit smarter to make them a bit more efficient without disrupting the user experience.
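The batching idea above could look something like this: queue outgoing messages and send them as one bundle (one PUT) after a quiet period, with an explicit flush for “send now”. The class and timings are illustrative, and time is injected as a parameter so the example is deterministic; a real app would drive `tick()` from a timer:

```python
# A sketch of the 30-second IM batching idea: messages accumulate and
# are sent together as one bundle, saving PUTs versus one PUT per line.
# MessageBatcher is hypothetical, not any real messaging API.

FLUSH_AFTER = 30.0  # seconds to wait before auto-sending a batch

class MessageBatcher:
    def __init__(self):
        self.pending = []
        self.first_queued = None   # time the oldest pending message arrived
        self.sent_bundles = []     # each entry represents one PUT

    def queue(self, msg, now):
        if not self.pending:
            self.first_queued = now
        self.pending.append(msg)

    def tick(self, now):
        # Auto-flush once the oldest message has waited long enough.
        if self.pending and now - self.first_queued >= FLUSH_AFTER:
            self.flush()

    def flush(self):
        # "Send now" option: ship whatever is pending as one bundle.
        if self.pending:
            self.sent_bundles.append(list(self.pending))
            self.pending.clear()
        self.first_queued = None

batcher = MessageBatcher()
batcher.queue("hi", now=0.0)
batcher.queue("one more thing", now=5.0)
batcher.tick(now=10.0)   # too soon: nothing sent yet
batcher.tick(now=31.0)   # quiet period elapsed: both messages go as one bundle
assert batcher.sent_bundles == [["hi", "one more thing"]]
```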

3 Likes

But in the long run there is no savings on PUTs.

There is one set of PUTs for each reply to be queued, and then the PUTting onto the forum. No savings for the users, and now extra cost for the forum operator.

For a forum, you made two replies one after the other just now, and you could have a way to write the two as one entry, but then other features may be lost.

1 Like

I’m not sure I necessarily agree with this - it depends on the message format and how they are indexed. Handling threads would certainly be more tricky, but a single chunk could certainly contain multiple messages, delimited accordingly.

I suspect people will get pretty creative with ways to bundle data if there is a financial incentive.

Yes, exactly. If it can be done cleverly without feature loss, that is a good selling point for the forum software.

1 Like

Say 10 people supply replies, and you then format them into one object and write it to the forum with special indexing to handle the formatting.

Consider the PUTs required to do this

10 PUTs into a data object, one PUT into another data object, and 10 additional index PUTs to handle how to use this new concatenated set of 10 replies.

Breaking down who pays:

  • 10 PUTs by 10 people
  • 1 PUT by the forum operator to make the new data object (with delays caused by the forum operator’s PC)
  • 10 extra PUTs by the forum operator for the access into that object
  • 10 PUTs (or more) by the forum operator to make an index entry for each reply
  • the 10 people do not own their own data anymore

Now the way without trying to be “creative”:

  • 10 PUTs by 10 people to write their replies into ADs
  • 10 PUTs (or more) by 10 people to do their own indexing

  • operator has no costs
  • the 10 people still own their own data
  • no need for forum operator to have a special server App to do the extra work
  • no delays because of extra work to be done by another computer
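The comparison above is just arithmetic, so here it is made explicit; the counts are taken directly from the breakdown in this post:

```python
# Tally of the two schemes described above. All counts come from the
# post; this only makes the totals and who-pays split explicit.

# "Creative" consolidation scheme:
user_puts = 10              # 10 people write their replies
operator_object_put = 1     # operator builds the consolidated object
operator_access_puts = 10   # access entries into that object
operator_index_puts = 10    # one index entry per reply
creative_total = (user_puts + operator_object_put
                  + operator_access_puts + operator_index_puts)

# Plain scheme, no consolidation:
plain_total = 10 + 10       # each person writes and indexes their own reply

assert creative_total == 31
assert plain_total == 20
# Consolidation costs more PUTs overall, and shifts cost to the operator:
assert creative_total > plain_total
```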
2 Likes

Do we know the impact of a lower limit for 1 PUT on networking and IOPS?

2 Likes

My intention was not to try to design a multi-part messaging solution today, but merely to point out that there will be a financial incentive to do so. This will lead to creative ways of programming to achieve it.

Perhaps just concentrating on how condensing multiple IM messages into one multi-part IM may help me make my point here; likewise for emails. Indeed, the MIME standard was defined to allow multiple parts within a single email, often of differing types.

People get inventive when there are financial incentives and bundling data to make optimal use of PUTs will be a likely outcome. How much is done by the client, the operator, etc, will no doubt be thoroughly analysed.

1 Like