RFC: ImmutableData deletion support

There is a way around this, I think: an SQRL-type approach. It’s messy in their docs, but here is a quick explanation.

You can create a repeatable crypto keypair from a seed value instead of using random input.

With this, you can take data you already hold and create the same keypair over and over.

So, for instance, if we had a chunk ABC and wanted to own it, we could take the chunk name plus some random data we hold and hash the two together. From that we can create a keypair that should be unique but repeatable.

If we later know we have that chunk under our ownership, we can send a message to the data managers and request that it be deleted. We do so by recreating the keypair based on the chunk name.

Anyway, this is a very basic approach, but one that is very useful for logging into many sites without exposing your ID, creating a new keypair for each site. It can also be used as I described above, so it’s an interesting adjunct.
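Here is a minimal sketch of that derivation, assuming Ed25519 keys via PyNaCl; the chunk name and account secret below are placeholders, not anything from the actual codebase:

```python
# Sketch: repeatable keypair derived from chunk name + a private secret.
import hashlib
from nacl.signing import SigningKey

def derive_owner_keypair(chunk_name: bytes, account_secret: bytes) -> SigningKey:
    """Same inputs always yield the same keypair, so nothing needs to be stored per chunk."""
    seed = hashlib.sha256(chunk_name + account_secret).digest()  # 32-byte deterministic seed
    return SigningKey(seed)

secret = b"my-account-secret"        # never leaves the client
chunk_name = b"ABC"                  # the chunk's network name
owner_key = derive_owner_keypair(chunk_name, secret)
owner_pubkey = owner_key.verify_key.encode()   # stored with the chunk as the owner

# Later, to delete: regenerate the same keypair and sign the request to prove ownership.
delete_request = owner_key.sign(b"DELETE " + chunk_name)
```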

2 Likes

Also, BIP32 wallets. The common thread in all of these is that, whichever is picked or invented, it will need support from the Launcher (not too hard) in a way that is easy to understand (more challenging); nobody will use them otherwise.

1 Like

And what about creating a Nonce in each new Datamap?

You compute Hash(ChunkName + Nonce) and send it, along with the data, as the Owner.

To delete, you send a request with the Nonce (maybe a wallet can be added for reimbursement); the data manager calculates Hash(ChunkName + Nonce) and deletes the chunk if that value exists in Owner::Users.

This system doesn’t suffer from the replay problem and, if you don’t send the wallet data, it is completely anonymous. Of course we must store the Nonce linked with the Datamap, but that also serves as a list of which data we can erase.
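A small sketch of that flow; the hash choice and the Owner::Users stand-in are assumptions, not the real vault structures:

```python
# Sketch: Hash(ChunkName + Nonce) as an anonymous, recomputable owner token.
import hashlib
import os

def owner_token(chunk_name: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(chunk_name + nonce).digest()

# Uploader: pick a nonce, keep it linked to the Datamap, send the token as Owner.
nonce = os.urandom(32)
chunk_name = b"ABC"
owner_users = {owner_token(chunk_name, nonce)}   # stand-in for Owner::Users

# Data manager: on a delete request carrying the nonce, recompute and compare.
def handle_delete(chunk_name: bytes, nonce: bytes) -> bool:
    return owner_token(chunk_name, nonce) in owner_users
```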

2 Likes

The way you explained it, it sounds like a simple challenge-response scheme adapted for this purpose. Yes, I know SQRL can be (or is) a type of challenge-response system.

But in any case, if that were done it would help solve the loss of anonymity by a thousand cuts. It has to be done from the start, though, or else it will be one of those things left for later and then too hard to implement because too much data already uses the old method.

I still hold that the choice should be in the hands of the uploader: upload the file as deletable (temporary) or not. Let the user set defaults in their account settings (public temp/perm, private temp/perm). Any system that forces all files to be temporary (deletable) goes against what many see SAFE as. Even forcing a default of deletable is wrong; the default should be changeable, and I’d suggest the defaults are set as part of creating the account. Just a simple page of defaults during account setup, plus a “settings” tab in the launcher along with other things like coin address, IDs, etc.

In other words, if the API gets nothing for the “deletable” attribute, then the file is permanent.
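A tiny sketch of that default; the function and parameter names are hypothetical, not the actual launcher API:

```python
# Sketch: an absent "deletable" attribute means permanent storage.
def put_file(data, deletable=None):
    attributes = {
        # The uploader must opt in to deletion; no attribute -> permanent.
        "deletable": bool(deletable) if deletable is not None else False,
    }
    return {"size": len(data), "attributes": attributes}

# Account-level defaults the user would set once during account creation.
account_defaults = {"public": "permanent", "private": "permanent"}
```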

1 Like

I like the idea of public data not being deletable, with private data optional. Once shared, I’m inclined to say it should no longer be deletable, but that will confuse users, so I’m not certain.

Actually, non-deletable public data will confuse users too :slight_smile:

Just initial thoughts. I find it tricky to decide as I haven’t thought about this enough, or heard enough different views.

I am still wondering whether the issue of old data is a non-issue. Old, stale data will move less and be relatively smaller than new data - both of these effects being amplified over time.

I should also imagine that a lot of the junk that fills up drives is not new, unique data, but rather the same junk that everyone else is gathering (old downloads, old application data, etc.). This would get deduplicated and the waste would tend towards insignificant as a result.

I seriously doubt that people are generating disk filling quantities of new and unique data, which is frequently changed.

I also suspect that the churn of this data will slow, as the data migrates towards the long-standing archive nodes.

Perhaps it would be prudent to examine the impact over time before designing complex solutions?

4 Likes

I can see why having the network operate this way is beneficial for public-facing data, and for truly public content (PtP) it could make sense. However, we need a mechanism which allows us to share private data only with those we trust, e.g. we grant specific keys rights to the data, and this data should be DELETEable when we choose to delete it. This gives us a finer level of control over who can access our private data and ensures that this shared data can be deleted, since at least our copy of the data won’t be proliferated publicly. If one of the peers we share with then shares that data publicly, well, that’s fine, because we trusted them.

2 Likes

I noticed there’s a strong bias towards the status quo (“DATA IS FOREVER”) and, when it’s challenged, the reaction is rather irrational.

I pay once, and my data stays forever. That’s already a huge thing: all other places demand periodic “rent”; even my HDD needs to be replaced every once in a while.

  • Why can’t one change his mind?
    • “Because it’s already public”: Can I rightfully demand access to something that somebody else paid for?
    • Deletable data would encourage supporting what we consider important: if I truly care about something, I can just chip in.
  • Why does only the first one have to pay? (“no PUT if it’s already there”, correct?) Shared ownership…
    • … could lower the cost/MB, because a lot of stuff would be financed by many
    • … would alleviate most of the ethical arguments against PtP:
      • an artist can profit from it: if she uploads her own stuff, people can expect she won’t delete it, so only a few would pay to keep it => most of the PtP goes to the artist
      • a pirate can’t profit from it: if he uploads a movie, everybody knows he has no incentive to keep it up for long, so others would have to chip in => he can’t expect to profit from it

The interface doesn’t need to be complicated; in fact, this has been done many times already. Somebody sends you a Dropbox link, you open it, and you get the option to save it in your own account.

In a browser, it could be like the star in Chrome: Click once, it’s yellow, just bookmarked, can disappear. Click again: it’s green, stored forever. Hover: shows cost, as computed from document size.

No. If you upload a file, you pay whether it is already on the network or not.

To me the obvious thing is to allow the uploader to choose permanent or deletable. No matter if public or private.

1 Like

Except: the obvious thing to do is a (free) GET and then, if nothing is returned, a paid PUT. The fact that the reference client doesn’t work like this doesn’t mean that forks won’t. Basically, under the current design, only the first uploader will pay.
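A sketch of that check; the client object and its get/put methods are stand-ins for whatever the launcher or a fork exposes:

```python
# Sketch: free GET first, paid PUT only if the chunk is missing.
def store_chunk(client, chunk_name, content):
    if client.get(chunk_name) is not None:
        return "already on the network, nothing paid"   # someone else already paid the PUT
    client.put(chunk_name, content)                     # only the first uploader pays
    return "stored, PUT paid"
```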

I have suggested that myself, but an app has to be created for that. Will most people use it for small files? Will most use it at all?

There are many free options on the internet and in real life, yet many still pay for simple services.

Maybe we should lay odds on how long it will take for such an app to be created, and on how many will actually use it. (I know some will, but will it be a large percentage?)

Nope, no app. It’s a simple feature that can (and therefore will) be implemented somewhere in the core, the launcher, etc. We’re talking about open source, so nothing would stop a fork from becoming more popular than the reference client; the fork makes storage cheaper, so people will switch to it.

It’s not even an expensive check: Let’s say I want to upload a 1.4 GB file. All I need to check is one of the blocks: if it’s not there, I can assume I need to upload, so I can stop checking. Otherwise, I’ll keep checking, just to make sure.

As a result, making the payment explicit is no more resource intensive than the current scheme (assuming the obvious change I outlined, whether as part of the reference client or a fork).
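A sketch of that early-exit probe over a file’s chunks; the client and the pre-computed chunk names are assumptions:

```python
# Sketch: probe chunks in order and stop at the first miss.
def needs_upload(client, chunk_names):
    for name in chunk_names:
        if client.get(name) is None:
            return True     # one missing chunk is enough: upload (and pay for) the file
    return False            # every chunk exists already, so nothing to pay for
```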

1 Like

I’m posting way too much here, sorry about that…

As a continuation of the idea of making payments more explicit, uploads could be split into two parts:

  • PAY(hash) would have two uses:

    • to pre-pay for a new block that is about to be PUT
    • to ensure an existing block won’t go away if the original uploader decides to delete it
      (this could be integrated into bookmarking in a browser, or something like Pinterest’s “Save” button)
  • PUT(hash) then could return:

    • 201 Created after uploading a new block that was just paid for by somebody (not necessarily the uploader; let’s say, a commissioned artist uploads her work to the customer, who pays for the storage)
    • 402 Payment Required if the block was not paid for yet
    • 409 Conflict if the content already exists on the network
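A toy model of that split; the status codes follow the list above, while the in-memory sets are stand-ins for real network state:

```python
# Sketch: PAY pre-funds a block hash, PUT stores it and reports one of three outcomes.
PAID, STORED = set(), {}

def pay(block_hash):
    PAID.add(block_hash)            # pre-pay a new block, or pin an existing one

def put(block_hash, content):
    if block_hash in STORED:
        return 409                  # Conflict: content already exists on the network
    if block_hash not in PAID:
        return 402                  # Payment Required: nobody has paid for it yet
    PAID.discard(block_hash)        # consume the payment
    STORED[block_hash] = content
    return 201                      # Created
```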

When storing content, people could choose between four options besides a default storage time of 10 years:

  • short: 1-5 years
  • long: 30 years
  • precise expiry
  • forever (pay a premium)

All options except forever require a network mechanism for time/intervals to trigger deletion or to stop a chunk from propagating.

… which we don’t have.

1 Like

I wonder if it would be possible to use another xor address space (or a range) that fills itself up at a regular interval while occupying minimal space, ticking away into eternity, creating a measurable and responsive hook for all kinds of interval-related functions that SAFE dapps and services might find useful.

It’s a circle though: how can you define a regular interval (to measure time) without having a concept of time first? But I was thinking along a similar line: if coins had a counter that was incremented each time one is assigned, then maybe we could have some kind of time. Also, though I haven’t looked into it much, @dirvine’s data chains could give some sense of measurable progression… Why don’t you open a thread to start a discussion about ideas for introducing time into the network? :smirk_cat:

1 Like

The regularity would come from always writing the same tidbit to that xor range at its ‘max’ speed, with a way to increment the counter and query the result. It is like a chain that creates itself, counting the rings as it goes, sometimes a bit slower or faster, giving an average count per interval of time.

There will be a measurable speed limit to such a function executing itself on the network, which may also vary with network size, network transmission times, storage technology and so on, but it is still something that ticks away within boundaries. The point is to have a counter that, over the lifespan of the network, does not run out of space, occupies very little space, and writes at whatever max speed the network allows, counting up.

We would then be able to call up an always-increasing count from services/dapps that want to use it, to trigger some kind of event.

Just the fact that an address has already been written to does not give us enough information, but counting the numbers as they occur might give us an idea close enough to be of use. If the writing of the xor addresses is linear by design, even searching for the start of the yet-unwritten address space would give an approximately useful result after a few trials.
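A very rough sketch of how such a counter could be read as a clock; the calibration figure is purely illustrative:

```python
# Sketch: a counter written at the network's maximum sustainable rate can be
# turned into an elapsed-time estimate once its average write rate is known.
def estimate_elapsed_seconds(count_then, count_now, avg_writes_per_second):
    return (count_now - count_then) / avg_writes_per_second

# e.g. calibrated at ~2 writes/second, a counter that advanced by 7200
# suggests roughly an hour has passed, within the network's speed variance.
elapsed = estimate_elapsed_seconds(1_000_000, 1_007_200, 2.0)
```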

Just ideas. There are also functions that search for groups and consensus; maybe a way of being aware of moving forward and counting is already available and we don’t know it, because we haven’t looked at it from this perspective. I think it is a great idea to open a thread to find The Ideas.

Dr. Louis Essen invented the first atomic clock, and it was a big challenge, counting atomic hits reliably. He maintained that Einstein’s relativity theory wasn’t scientific enough. Essen was able to create an atomic clock that nobody before him had been able to create; he was a scientist. Let’s get inspired!

2 Likes

You may be onto something! I couldn’t follow everything, but exploiting the latency is such an awesome idea :scream_cat: I don’t believe we need XOR magic for it to work though. I started a new thread about it.

2 Likes

There is an elephant in the room here, which is so large that it is invisible to everyone. These forum discussions seem to ignore a simple fact: that TPTB will be so totally opposed to the success of Safenet that, in order to shut it down or make it unworkable, no amount of money and effort will be too much for them. Were I working for the government and charged with this task, persistent and undeletable data would be my attack surface of choice.

I’d recruit, say, 1,000 people who have signed the Official Secrets Act (ex-soldiers would be ideal) and sit them in front of 1,000 computers, each of which would be programmed to produce terabyte after terabyte of random garbage. My team (who would be told that Safenet is an unmitigated evil used only by paedophiles, criminals and terrorists) would upload this to Safenet, say 20 TB per man per day, each using multiple identities. So that’s 20 petabytes a day. There’d be no storage charges after the initial PUT payment, which HM Gov would provide, printing the money as necessary.

This would clutter up the network, never being accessed, so the farmers would only receive GET payments when someone accessed the 1% or so of data on their hard drives that wasn’t garbage generated by my team. The result would be that farming becomes totally uneconomic unless GET payments were astronomically large, in which case people who wanted to store data would use one of the many existing cloud storage services, and Safenet would implode.

Now, what are you going to do about this?

1 Like