So far my thoughts and understanding have been that if I have an amount of data I want to store in SAFE, and I am providing at least that amount of storage from my node, it should be a wash. I "pay" to upload the data. So if I am providing 10 GB to SAFE, then uploading and storing up to 10 GB should be covered.
Again, as I understand it, once the data is uploaded, it is in SAFE forever.
So if I then stop sharing storage, I could go to another computer and still access all that data, as it is already there.
Ok, consider I am a malicious person (or a threatened big data company or government) with lots of resources. I could assemble, say, a petabyte of storage and provide it to SAFE so as to upload a petabyte of data. Once in SAFE, it is stored forever on the storage provided by the general participants.
So what if I dropped my provided storage, reconnected to SAFE with reinstalled computers, provided a "new" petabyte of storage, and then uploaded a "different" petabyte of data (different in order to defeat deduplication), and kept repeating this process? Would this not result in a lot of storage being consumed, taxing the good actors with useless data they must keep forever, and forcing them to keep adding nodes and storage just to maintain the network?
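To check that I am reading my own argument right, here is a toy sketch of the arithmetic I have in mind. It assumes a naive 1:1 "provide N, upload N" credit model and permanent storage; those assumptions, the unit sizes, and all the names in it are mine, not the actual SAFE farming/PUT economics, and they may be exactly where I am going wrong.

```python
# Toy model of the cycle I am describing, under my (possibly wrong) assumption
# that providing N bytes of storage lets you upload N bytes, and that uploaded
# data persists forever. The 1:1 ratio and the numbers are illustrative only.

PETABYTE = 1  # work in units of 1 PB for simplicity

attacker_storage_online = 0   # what the attacker is contributing right now
permanent_network_data = 0    # unique junk the honest network must hold forever

for cycle in range(1, 6):
    # Attacker joins with "fresh" storage, uploads a full petabyte of
    # never-before-seen data (so deduplication does not help), then leaves.
    attacker_storage_online = PETABYTE
    permanent_network_data += PETABYTE
    attacker_storage_online = 0  # drops out, reinstalls, repeats

    print(f"after cycle {cycle}: attacker providing {attacker_storage_online} PB, "
          f"network stuck storing {permanent_network_data} PB of junk forever")
```

If that loop is roughly right, the junk the network has to hold grows without bound while the attacker's live contribution drops back to zero between cycles.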
The smaller the pool of participants, the worse this problem becomes, so could it not be used to cripple the network during its startup phase?
What am I not understanding?