These crypto people are absolutely nuts, that's for sure.
Thank you for the response.
I've just read some forum threads (searching on 'delete' or 'remove'), for instance about the archive functionality for inactive data. But archiving isn't deleting: storage is still needed.
I have little doubt this is already covered, or can easily be covered later, but it is still not completely clear to me how that (unlikely?) attack of constantly replacing data could be countered:
If 'deletion' at the 'user' level is fully refunded,
And if there is no effective deletion at the 'network' level.
Or is the following whitepaper not the correct one, not up to date, or not understood correctly by me?
Chapter 4, 'Proof of resource': …
‘data_cost: data_cost will be calculated as the data_size that user stored to network. It will be refunded once user deletes the stored data.’
But anyway: Nice To Cee MaidSafe Go Swift To Rust
Don’t want to interrupt this too much.
I believe that reference is out of date. My understanding is that you pay to PUT data to storage, period.
You pay for an amount of ability to store a certain amount and that is subtracted from as you use it. When needed, you buy more.
There is no recredit for delete. It’s not like paying a monthly/yearly fee to keep your data, though. It’s pay once, store forever.
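The "pay once to PUT, no recredit on delete" model described above can be sketched as a toy balance. All names and units here are my own invention for illustration, not the network's actual accounting:

```python
# Hypothetical sketch of the "pay to PUT, no refund on delete" model
# described above. Names and units are invented for illustration.

class PutBalance:
    """Tracks prepaid storage ability, in bytes."""

    def __init__(self):
        self.remaining = 0  # bytes of PUT ability already paid for

    def buy(self, num_bytes):
        """Convert a payment into more PUT ability."""
        self.remaining += num_bytes

    def put(self, data):
        """Storing data consumes ability: pay once, store forever."""
        if len(data) > self.remaining:
            raise RuntimeError("buy more PUT ability first")
        self.remaining -= len(data)

    def delete(self, data):
        """Deleting returns nothing: no recredit, by design."""
        pass  # balance intentionally unchanged


acct = PutBalance()
acct.buy(100)
acct.put(b"x" * 60)     # remaining drops to 40
acct.delete(b"x" * 60)  # remaining stays at 40: no refund
print(acct.remaining)
```

Note there is no monthly fee anywhere in this model: storage cost is charged exactly once, at PUT time.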
That's my understanding from a lot of past threads, anyway. And it makes sense. Part of the security is not having identifiers on chunks as to who owns them, and it is quite possible for multiple people to own access to identical chunks.
The network has no way to know whether someone else also owns a chunk when I decide to delete. So I just lose it, basically.
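A minimal sketch of why that is: in a content-addressed store, a chunk's ID is derived from its content, and (as described above) no owner metadata is attached, so two users storing the same data produce one chunk and the network cannot tell how many "owners" remain. All names here are hypothetical:

```python
import hashlib

# Hypothetical content-addressed chunk store with no ownership metadata,
# sketching the de-duplication behaviour described above.
store = {}  # chunk_id -> chunk bytes; note: no owner field anywhere


def put_chunk(data):
    chunk_id = hashlib.sha3_256(data).hexdigest()
    store[chunk_id] = data  # identical data always lands on the same key
    return chunk_id


id_a = put_chunk(b"same holiday photo")  # uploaded by Alice
id_b = put_chunk(b"same holiday photo")  # uploaded, independently, by Bob

assert id_a == id_b   # de-duplicated: one chunk, two "owners"
assert len(store) == 1
# If Alice now "deletes", the store holds no record that Bob still needs
# this chunk -- which is why a naive network-level delete would be unsafe.
```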
I have concerns about data bloat, but David Irvine has indicated that this is not as big a concern as it might seem. For now I choose to believe him.
Deletion has been extensively discussed before, search the forum for 3-4 topics about that.
It has nothing to do with attacking. You are welcome to attack the network, just make sure you purchase enough SAFE because you’ll need them, thank you very much.
I think I understand.
And from the FAQs:
How long is my data stored for?
All data is stored on the SAFE Network forever unless the user decides to delete it. Data that has been held for a long time, but not accessed, will be moved into archive.
-> I get that data chunks can be deleted (with the aid of the subscribe counter: if it reaches zero, remove the key to the data).
What was meant by 'never be deleted' in the forums is probably: wait as long as possible before effectively reusing that (physical) space for other data chunk(s).
And if that never becomes necessary, then the data itself is indeed never deleted.
So if the data chunk is referenced again via a recreated key in the future, there is less work to do, because it is already there.
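The counter idea as I read it can be sketched like this. To be clear, this is only my reading of the docs, not necessarily the current implementation, and it would require the network to track a per-chunk count, which later replies argue it does not do:

```python
import hashlib

# Hypothetical sketch of the "subscribe counter" idea described above:
# each chunk keeps a reference count, a user-level delete decrements it,
# and only at zero is the key removed (the bytes may linger even then).
# This is an illustration of the docs' idea, not the actual codebase.
chunks = {}    # chunk_id -> data
refcount = {}  # chunk_id -> number of subscribers


def put_chunk(data):
    cid = hashlib.sha3_256(data).hexdigest()
    chunks[cid] = data
    refcount[cid] = refcount.get(cid, 0) + 1
    return cid


def delete_chunk(cid):
    refcount[cid] -= 1
    if refcount[cid] == 0:
        del refcount[cid]
        del chunks[cid]  # key removed; physical space reusable later


cid = put_chunk(b"chunk")  # Alice stores it        -> count 1
put_chunk(b"chunk")        # Bob stores the same    -> count 2
delete_chunk(cid)          # Alice deletes: Bob's access survives
assert cid in chunks
delete_chunk(cid)          # Bob deletes too: now the key really goes
assert cid not in chunks
```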
@draw I think the docs on “subscribe” counter are out of date. It is an option to do that but not an attractive one for performance reasons I think. Best you check the code itself on this to find the current status. I think for the time being data is never deleted, though it remains possible that will be implemented later in some form.
What I think it means is that when something is deleted, it is unlinked from the client perspective, but the block/chunk is not deallocated on the vaults.
In case the chunk is referenced again (say, someone uploads the exact same file), I don’t think old deleted chunks would be reused because tracking all deleted chunks would be a lot of work for a negligible benefit.
Archive, to me, means it remains “out there” but isn’t referenced or used, until perhaps sometime in the future if the devs estimate this could benefit the platform while not being overly “expensive” to the network.
(I did take a look at the code, but I don’t understand it.)
I don’t think this is the case, the network is not aware of the deletion whatsoever. The link between the data managers and the chunks would be preserved for sure, this is necessary for de-duplication to work without ownership tracking.
I'm still reading about this and hope to gain more clarity; please let me check my understanding below:
I have read the safecoin.pdf and plotted the curve in Wolfram's Mathematica to get a visual.
I assume that this ^^ is not the kind of curve you ideally envision. It's an interesting question, so a short Mathematica animation follows. If this uses B-splines, however, I think more information is still needed to clarify further.
What is required is some points in 2D space and a curve connecting those points: this is the topic of interpolation. Ordinary interpolating polynomials oscillate heavily, so one uses e.g. cubic splines to obtain a smooth curve. The Safecoin token document proposes an algorithm under which the curve steepens, as do the frequency and momentum of responses to the SAFE Network in general, as more people obtain Safecoin via the four methods listed in Section 3: a kind of self-regenerating, group-sustaining purchase behaviour, a cycle driven by network demand and user incentive. What grows slowly at first increases in frequency, and the more it increases, the faster it keeps increasing; but can the frequency of increase and decrease be tracked at all? There should be a plateau in there, though it will be virtually invisible and minuscule, almost overridden the instant it begins, leading to a 'levelling off' before continuing into a more balanced stability.
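The point about oscillating polynomials can be seen numerically. This is generic numerics, nothing Safecoin-specific: a minimal sketch using the classic Runge function, comparing a single degree-10 interpolating polynomial against piecewise-linear interpolation through the same equally spaced nodes:

```python
# Ordinary polynomial interpolation through equally spaced points
# oscillates heavily near the interval ends (the Runge phenomenon),
# while a piecewise approach stays tame -- motivating splines.

def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

# 11 equally spaced nodes on [-1, 1]
xs = [-1.0 + 0.2 * i for i in range(11)]
ys = [runge(x) for x in xs]

def lagrange(x):
    """Evaluate the degree-10 interpolating polynomial at x."""
    total = 0.0
    for j in range(len(xs)):
        term = ys[j]
        for m in range(len(xs)):
            if m != j:
                term *= (x - xs[m]) / (xs[j] - xs[m])
        total += term
    return total

def piecewise_linear(x):
    """Connect the same nodes with straight segments instead."""
    for j in range(len(xs) - 1):
        if xs[j] <= x <= xs[j + 1]:
            t = (x - xs[j]) / (xs[j + 1] - xs[j])
            return (1 - t) * ys[j] + t * ys[j + 1]
    raise ValueError("x outside [-1, 1]")

grid = [-1.0 + 0.001 * i for i in range(2001)]
poly_err = max(abs(lagrange(x) - runge(x)) for x in grid)
lin_err = max(abs(piecewise_linear(x) - runge(x)) for x in grid)
print(poly_err)  # large: wild oscillation near the ends
print(lin_err)   # modest: stays close to the function
```

A cubic spline would do better still than the linear segments (it is smooth), but even this crude comparison shows why one avoids a single high-degree polynomial.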
If you want to predict future points, the topic is different: then you want something like regression analysis, though I'm unsure how one could predict (or track) specific feature attributes from the outset. I see some attempts at doing so on this forum.
Wolfram Alpha is a nice search engine, though you cannot carry out real math with it. For that you need a computer algebra system such as Mathematica, Maple or MATLAB; alternatives include Sage or Maxima. Of those I have used, Mathematica surely remains the best tool for the math.
I'm open source, so feel free to share any and all information you find especially beneficial to P2P networks. I guess my question is about finding direction toward understanding the material presented more deeply.
Best regards, and thank you.
Very interesting post. I also find it very hard to understand the MaidSAFE architecture.
Your explanation is brilliant. The interaction between the different nodes' personas, however, is a bit confusing IMHO. A diagram there would really help.
Also, you write:
- All of your pieces are then passed to your 32 client manager nodes (CMN)
Are these all the pieces for a given chunk or for the file?
Are you saying that each chunk is split into 32 pieces and each piece is copied to 32 CMN? Or is it the chunk that is copied to the CMN?
Then you write:
A minimum of 28 out of 32 of these client manager nodes will then pass their chunk pieces to groups of 32 data managers whose IDs most closely match the chunk IDs
If I understand correctly, each of these pieces is copied to another 32 machines, the data managers (DM)?
If that is the case, then each piece is copied to 32 CMN and each CMN copies its stored piece to another 28–32 DM. That means that about 892 machines are involved at the very least, plus all the transferring to the Vault Managers and the actual vaults.
Another point that is not clear is whether there can be an overlap between these groups of machines.
Pretty sure that’s out of date now. The post you responded to is 3 years old. Quite a bit has changed since then.
Where can I find a good introduction to the current architecture and model?
This is bang up to date:
Thanks a lot! I shall take that as a starting point.
Awesome post! Thank you!
We are closing this topic because some changes have occurred since this was posted.
Please view the official SAFE Network Primer that can be found here - https://maidsafe.net/#safePrimer
Feel free to discuss what you’ve read in this corresponding topic - The SAFE Network Primer: An Introduction to the SAFE Network