Sacrificial data vs non-permanent data

I am not talking about persistent vaults; I am talking about permanent data. I propose that older data may be automatically deleted when the remaining disk space comes under stress.

100000^(1/5) really equals 10

I took 100000 because the result is simple. For this value the PUT cost is too high and yet the reward is too low; a different value would generate either a higher cost or a lower reward.
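Spelled out, the arithmetic behind that claim is just:

```latex
% Fifth-power check: 10 raised to the fifth power is 100000,
% so the fifth root of 100000 is exactly 10.
\[
  10^5 = 100000 \quad\Longrightarrow\quad 100000^{1/5} = 10
\]
```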

Apologies, I didn’t bother to fill it in.

Okay, the formula seems badly chosen. I still don’t think the OP’s solution is required. This is what I posted under that RFC a few months ago:

Another way to look at this is considering the SAFE network as a
decentralized autonomous (non-profit) organization. The SAFE network has
income (from PUTting clients), expenditures (rewards to farmers), and
capital (non-issued SafeCoins).

At some point in the future no more new SafeCoins can be issued, but
SafeCoins will still be recycled. This means for the SAFE network to
maintain equilibrium (not “bankrupting”) in this late stage, income
(over any given period) needs to at least match expenditures. So in
general, the network should ask just enough SafeCoin from PUTting
clients to be able to accommodate vaults. This way the absolute lowest
PUT price will be found, maximizing accessibility for all.

One way to do this is having a target amount of SafeCoins in
existence. At the late stage of the network, this could perhaps be 99%.
If over 99% of SafeCoins are in existence, PUT prices would be raised,
if less than 99%, PUT prices would be lowered. The remaining 1% would
function as a buffer to protect the network against sudden fluctuations
in demand (amount of PUTS).

If such an algorithm were used, it would seem logical to me to
also have a target amount of SafeCoins in existence before this late
stage, right from the start. The target could be derived from any
variable or combination of variables the network can autonomously
measure or approximate (for example, current total network capacity).

As an added bonus, if derived wisely, this usage of a target amount
of SafeCoin could also protect against potential malign high inflation
rates in the early days of the network, when a lot of new SafeCoins
would be issued. I must admit I’m not aware of how predictable and balanced
the current algorithm for issuing new SafeCoins is, so maybe malign
high inflation is already impossible.

So basically, what I propose is an algorithm that sets a target percentage of SafeCoin in circulation at any given time, and the PUT price is adapted to aim for that target. The GET rewards already balance dynamically, so this way we also get a dynamic PUT price to match, achieving income/expenditure balance for the network.
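A minimal sketch of how such a feedback rule could work, purely as an illustration: the name `PutPricer`, its fields, and the simple proportional price adjustment are my own assumptions, not the actual SAFE network algorithm.

```rust
/// Illustrative sketch only: adjust the PUT price so that the fraction of
/// SafeCoins in existence tracks a target ratio. The names and the simple
/// proportional update rule are assumptions, not real network code.
struct PutPricer {
    target_ratio: f64, // e.g. 0.99 in the late stage of the network
    gain: f64,         // how strongly the price reacts to the deviation
    put_price: f64,    // current price per PUT, in SafeCoin
}

impl PutPricer {
    /// Recompute the PUT price from the current coin supply.
    fn update(&mut self, issued_coins: u64, max_coins: u64) {
        let ratio = issued_coins as f64 / max_coins as f64;
        // Above the target: too many coins in existence, so raise the PUT
        // price to recycle more coins. Below the target: lower it.
        let deviation = ratio - self.target_ratio;
        self.put_price *= 1.0 + self.gain * deviation;
        // Keep the price strictly positive.
        self.put_price = self.put_price.max(f64::MIN_POSITIVE);
    }
}

fn main() {
    let mut pricer = PutPricer { target_ratio: 0.99, gain: 0.5, put_price: 1.0 };
    // 99.5% of coins issued: the price goes up a little.
    pricer.update(995_000, 1_000_000);
    println!("PUT price after update: {}", pricer.put_price);
}
```

The same rule would work before the late stage too, if `target_ratio` were itself derived from variables the network can measure, such as total network capacity, as suggested in the quoted post above.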

2 Likes

OK, StructuredData has an owner_keys field, but I think the owners remain anonymous. Besides, the chunks don’t need such a field: anybody can send a payment, and there is no need to control who it is.

The network remains safe with this proposal; your RFC deletion was unfortunate.

When?
If you don’t control it, how do you find out that you have to pay?
The chunks have no date information, so which do you delete and which do you not?
If a chunk is deleted it can affect thousands. How do you control that?
Etc, etc, etc…

Sorry, but with your idea the SAFE network becomes an uncontrollable nightmare.

No, but seriously: when will we know how much space a safecoin buys, and how, and who will decide that? Or is this it, 1 SC/MB? :slight_smile:

When:
It’s a UI problem to present a dashboard showing the files that might be deleted if the network is about to run out of disk space. User-configured programs could also make the payment automatically when necessary.

Who:
For private data the owner will pay. For shared data anybody can pay.

Nightmare:
People are used to paying a recurring fee with traditional cloud storage services (Dropbox, Google Drive, OneDrive, …). If presented correctly in the UI this isn’t a nightmare.

Besides, the network is currently supposed to work with permanent data that never needs to be deleted. This proposal is only a safety measure that would be useful if that assumption turns out to be false. In that hypothetical case it is better to delete old chunks that people don’t care about rather than random chunks (because that is what will happen if people stop their vaults when there is not enough space to store the chunks they hold).

It is definitely not 1 SC/MB. We may get an indication in the test nets, but we’ll only be sure when SafeCoin launches for real. This is because it is calculated dynamically, in an attempt to keep the network healthy and growing under varying conditions.

1 Like

Files? What files? In the SAFE network you don’t have files; you have millions, billions or trillions of encrypted chunks. You expect all the users to be connected all the time to monitor in real time whether a chunk will be deleted. To do that you need to continually ask about the status of thousands or millions of chunks, and all the users must do the same at the same time.
And if you automate it, you break the basic security of the SAFE network by linking the information to its owner.

Not a nightmare, something worse.

P.S. I am beginning to think that you don’t understand that SAFE is a completely distributed network, and that the rules of a client-server solution don’t work here.

1 Like

I think what he’s saying is that when you log in, it’s the job of the UI you’re looking at (which can tell what chunks you own) to tell you that you need to pay your “upkeep” fee. That is possible, but I’m against this entire line of thought.

1 Like

And what if you don’t log in for a week or a month? Who can trust a network that can delete your data at any moment if you are not aware of it?

Why only on login? A distributed network doesn’t know when it will need to start erasing chunks. So you, and all the other users, would need to monitor all the chunks permanently. That’s millions and millions of requests per second and a fatal stress on the network.

And which chunks? To choose each chunk you need specific information about time and payment, and time servers don’t exist in the SAFE network. You would need those servers, creating a dangerous attack vector.

You would need to modify the chunk file structure to add metadata.
Etc…
Etc…
Etc.

This is not a small change. It affects the basic functioning of the SAFE network, and only for the worse.

1 Like

what does this mean?

When a vault goes offline and comes back online, it may not serve the chunks it still has on its hard drive. It must acquire a new location for itself in XOR space and thus completely new chunks to store.
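A toy illustration of why the stored chunks change after a rejoin, assuming a simplified “closest N vaults in XOR distance” responsibility rule and 64-bit addresses; the real network uses much larger names, and the function names here are made up for the example.

```rust
/// Toy model: a vault holds the chunks whose names are closest to its own
/// address in XOR distance. Addresses are shortened to u64 for readability.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// Chunks (out of `chunks`) that `vault_addr` would be expected to hold,
/// assuming a simplistic "closest `replicas` vaults" rule.
fn chunks_for_vault(vault_addr: u64, other_vaults: &[u64], chunks: &[u64], replicas: usize) -> Vec<u64> {
    chunks
        .iter()
        .copied()
        .filter(|&chunk| {
            // Count how many other vaults are strictly closer to this chunk.
            let closer = other_vaults
                .iter()
                .filter(|&&v| xor_distance(v, chunk) < xor_distance(vault_addr, chunk))
                .count();
            closer < replicas
        })
        .collect()
}

fn main() {
    let chunks: [u64; 4] = [0x1111, 0x2222, 0x9999, 0xAAAA];
    let others: [u64; 3] = [0x1000, 0x9000, 0xF000];
    // The same physical machine, before and after rejoining with a new address:
    let old_addr: u64 = 0x1234;
    let new_addr: u64 = 0xA234;
    println!("old responsibility: {:X?}", chunks_for_vault(old_addr, &others, &chunks, 2));
    println!("new responsibility: {:X?}", chunks_for_vault(new_addr, &others, &chunks, 2));
}
```

Because responsibility follows XOR closeness to the vault’s address, a rejoin with a new address gives a different set of chunks even though nothing else changed.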

2 Likes

So the network deletes the old chunks for you?

Or do I have to do that?

And does its reputation go back down to zero all over again?

1 Like

All this has been covered in this thread I think: Non-persistent vaults

You don’t need to log in at all. That shouldn’t be your intention anyway, because you think that the SAFE network as currently designed can manage permanent data, so you don’t need to make such payments.

My proposal is only an added safety on top of the current design, for people who think that sacrificial chunks, the automatic farming rate, and even safecoin price valuation on external exchanges might not be enough to handle the case, in a distant future, where the network contains plenty of old data that is never accessed and only a small percentage of recent data that can provide rewards.

In addition to the security of personal data owned by people, another motivation to make such payments will be the security of the network as a whole, offered by this new source of recycled safecoins. I am sure there will be a lot of people willing to give up a fraction of their farmed safecoins just for this. This isn’t philanthropy, it’s only the realization that the security of all is also their personal security.

Time servers are not needed because a consensus on absolute time can be reached in a group by taking the average of the UTC times given by its nodes, excluding the values that are too different. Nodes reporting such values can even be deranked; that way we are sure to rapidly get a uniform time value over the whole SAFE network (which is an added piece of functionality for the network in itself).
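As a rough sketch of that idea (the median-based outlier filter, the 10-second threshold, and the function name are assumptions made for illustration, not anything from the SAFE codebase):

```rust
/// Rough sketch: a group agrees on a time value by averaging the UTC
/// timestamps reported by its members, after discarding the ones that
/// deviate too much from the median.
fn group_time(reported_secs: &mut Vec<i64>, max_deviation_secs: i64) -> Option<i64> {
    if reported_secs.is_empty() {
        return None;
    }
    reported_secs.sort_unstable();
    let median = reported_secs[reported_secs.len() / 2];

    // Keep only the values close enough to the median; the rejected reports
    // are exactly the ones that could be used to derank their senders.
    let accepted: Vec<i64> = reported_secs
        .iter()
        .copied()
        .filter(|t| (t - median).abs() <= max_deviation_secs)
        .collect();

    if accepted.is_empty() {
        return None;
    }
    Some(accepted.iter().sum::<i64>() / accepted.len() as i64)
}

fn main() {
    // Four honest clocks within a few seconds of each other, one wildly off.
    let mut reports = vec![1_700_000_000, 1_700_000_002, 1_700_000_001, 1_700_000_003, 1_600_000_000];
    println!("consensus time: {:?}", group_time(&mut reports, 10));
}
```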

2 Likes

Your proposal breaks the backbone of the SAFE network. If you think your solution is better, fork the SAFE network and try to convince devs to follow you.

Like I said, I just think that’s what he was trying to say, not that I agree.

Considering the exponential increase in the amount of data humanity produces (and can store), “recent data” will never be just a few percent of the total data. I’m in my mid-twenties, and I can clearly remember from the time I was in elementary school that a hard disk of a few hundred megabytes was quite big. Now I have to multiply by ten thousand to be able to say that of a contemporary consumer-level storage device.

Even when we hit fundamental physical limits of particles, we will still be able to improve the efficiency and increase the production of the ultimate storage technologies. So, old data will more likely be a few percent of total data than the other way around.

2 Likes

Storing 700 TB of data per gram, so there’s some room for growth in the data storage technology field.

3 Likes

Old article, but a crazy idea nonetheless! Imagine we got to the point of sequencing DNA on demand, so that you could plug a DNA drive into your machine :slight_smile:

1 Like