If an upload abruptly fails, what happens to the chunks that already made it into vaults? I imagine they are retained by the network. Deduplication would prevent chunks that were already uploaded from being reinserted, and the remaining chunks would complete the file once reuploaded, regardless of who does it. Is this correct?
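To illustrate what I mean by deduplication, here is a toy sketch of a content-addressed store, where a chunk's name is just a hash of its content, so re-inserting the same chunk is a no-op. This is only an illustration: the choice of SHA-256 and the `Vault` class are my own assumptions, not the real vault implementation.

```python
import hashlib

class Vault:
    """Toy content-addressed chunk store: the chunk's hash is its name,
    so re-inserting an identical chunk is a no-op (deduplication)."""
    def __init__(self):
        self.chunks = {}

    def put(self, data: bytes) -> str:
        name = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(name, data)  # already present? nothing to do
        return name

vault = Vault()
first = vault.put(b"chunk-1")
again = vault.put(b"chunk-1")   # same content -> same name, deduplicated
assert first == again and len(vault.chunks) == 1
```

So a reupload of the same file would produce the same chunk names, and only the missing ones would actually need storing.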
Another question. If the file is already on the network (uploaded by another user), will I have to pay when I try uploading it again? Will the network tell me it already exists?
Everyone pays for the PUTs they do (uploading a chunk of any size, or creating an SD).
For security/anonymity, the network does not tell anyone that a chunk already exists.
If you paid before, then you will pay again.
Now, if the RFC someone suggested about an API for self-encryption is implemented, an app could be written that, given the data map, tests for you whether a file you are not sure about was uploaded before.
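The key property such an app would rely on is that self-encryption is deterministic: the same bytes always produce the same data map, so you can answer the question locally without asking the network anything. A rough sketch, where `data_map` is a stand-in for the real self-encryption step (the fixed chunk size and plain SHA-256 naming are my simplifications; the real algorithm also encrypts each chunk):

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # hypothetical fixed chunk size, for illustration

def data_map(content: bytes) -> list:
    """Stand-in for self-encryption: deterministically derive chunk
    names from the content alone."""
    return [hashlib.sha256(content[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(content), CHUNK_SIZE)]

def already_uploaded(content: bytes, known_maps: list) -> bool:
    # Deterministic mapping: same bytes -> same data map, so comparing
    # maps tells you whether you uploaded this exact file before.
    return data_map(content) in known_maps

my_maps = [data_map(b"a file I uploaded earlier")]
print(already_uploaded(b"a file I uploaded earlier", my_maps))  # True
print(already_uploaded(b"a brand new file", my_maps))           # False
```

That way you avoid paying twice without the network ever confirming a chunk's existence.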
The upload reached 86% then remained there for 20 minutes. I closed both the demo app and the launcher, restarted both, then attempted the upload exactly the same way, service name and all. I did this three times with the same result: it never went beyond 0%. Has anyone else experienced this?
That’s a very good idea. Safecoin is too precious to be wasted.
At this time, you pay for the chunks that were written; the ones that did not get written, i.e. the chunk that failed to send and everything after the 86% mark, you do not pay for.
Someone had this problem with the first test release. Maybe look through the “bugs” category for one similar. I seem to remember that there was some discussion about that.
Of course, I was talking about the system without bugs; that behaviour may not even be coded in these test releases.
I read some of the code and might be able to provide a bit of insight. Adding a file is actually a two-step process, create and then modify content, and the steps are non-atomic and non-transactional. The former really only touches the structured data of the directory to say you have created a file (technically a file of size 0, but that is embedded in the data map, not necessarily written as separate self-encrypted data). The content then appears to be written a chunk at a time, so if it fails somewhere in between you could have a half-written file, but the data map isn’t returned until the end. And it’s only at the end that the data map is handed to the directory metadata to say where the new file’s chunks are. So basically either it completes, or the chunks for the file are never referenced for viewing (but are still out there, unreferenced, in SAFE-land).
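The flow described above can be sketched like this. Everything here (`network`, `directory`, `write_file`, the `fail_after` knob) is a toy model of my reading of the code, not the actual API; the point is only that the directory commit happens after all the chunk writes, so a mid-way failure leaves orphan chunks but no visible file.

```python
import hashlib

network = {}    # toy chunk store
directory = {}  # filename -> data map (list of chunk names)

def put_chunk(data: bytes) -> str:
    name = hashlib.sha256(data).hexdigest()
    network[name] = data
    return name

def write_file(name, chunks, fail_after=None):
    """Write chunks one at a time; the data map is committed to the
    directory only after every chunk succeeds (non-transactional)."""
    written = []
    for i, chunk in enumerate(chunks):
        if fail_after is not None and i >= fail_after:
            raise ConnectionError("upload failed mid-way")
        written.append(put_chunk(chunk))
    directory[name] = written  # the commit happens only here

try:
    write_file("movie", [b"c0", b"c1", b"c2"], fail_after=2)
except ConnectionError:
    pass

assert "movie" not in directory  # the file never becomes visible...
assert len(network) == 2         # ...but orphan chunks remain out there
```

Which matches the billing answer earlier in the thread: you paid for the two chunks that landed, and nothing references them.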
As for you closing the client, that’s a bit of a different story: it depends on how the app handles termination/signals and whether it gets to finish. As for overwrites, it appears the demo app overwrites data because it starts at offset 0 of the file (which presumably changes the entire file’s data map, because I doubt SAFE supports splicing of data).
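On why an edit at offset 0 would presumably change the whole data map: if each chunk’s name mixes in its neighbours (loosely how self-encryption ties chunk keys to adjacent chunks), a one-byte change at the start renames every chunk downstream. This chaining scheme is my own illustration, not the real algorithm:

```python
import hashlib

def chained_map(content: bytes, chunk_size: int = 4) -> list:
    """Illustration only: each chunk's name mixes in the previous
    chunk's name, so editing the first byte renames every chunk."""
    names, prev = [], b""
    for i in range(0, len(content), chunk_size):
        h = hashlib.sha256(prev + content[i:i + chunk_size]).hexdigest()
        names.append(h)
        prev = h.encode()
    return names

a = chained_map(b"hello world!")
b = chained_map(b"Hello world!")  # one byte changed at offset 0
assert all(x != y for x, y in zip(a, b))  # the whole map changed
```

So even a small in-place edit amounts to a full rewrite of the file’s chunk references.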
I don’t know much more than that… I suppose one day there will be integration tests to check all of this. I’d love to see some kind of Jepsen-style tests that exercise network failure and the like.