Are your data managers outside of your section? Or am I out of date with the concept of data managers? (or am I mixing stuff here now?)
I thought for a GET there happens precisely one relay hop, added for anonymity, so it’s not you who requests the data - but I somehow assumed that, because of the one additional hop, the relay would be inside your section… Since this is not necessarily the case when you cannot know the section of your relay(s), my assumptions don’t hold anymore
(but if you can reconnect until you have a relay in a neighbour section it again would only be one path)
This graphic (old and dated) is what I base my mental image of the different intermediaries between a client GET/PUT and a stored chunk on.
I think most of your concerns relate more to the game-ability of PtP than PtF. We want the network to be resilient against DDoS attacks. PtP gives an attacker a direct incentive to attempt a DDoS, but it also gives us a means to focus the discussion and creatively come up with ways to mitigate DDoS. In other words, I see it as: PtP solution == DDoS solution, and vice versa. So far it seems that caching is the primary defense, but there may be other ways…
Hmmm, not sure it’s “that simple” - for PtP I need to try to get all my data into a single vault (+ a relay in the neighbour section, to have only one path, and manage to get around caching), and the reward is only 10% of the farming reward. Whereas if I own a vault and know which data I can request, I don’t have the hassle & cost (with mutable data it’s easy to target a section, but not that easy to target a specific vault…) and get 10x the reward if I manage to have enough data to blow the cache…
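Just to make that arithmetic concrete - a toy payoff comparison, where the 10% PtP share and the per-GET farming reward are the assumptions from this thread, not confirmed network parameters:

```python
# Toy comparison of attacker payoff per successful self-GET,
# using the ratio discussed above (PtP pays ~10% of the farming reward).
FARMING_REWARD = 1.0   # reward per GET served, in arbitrary units
PTP_SHARE = 0.10       # assumed producer share under PtP

def ptp_payoff(gets):
    """Attacker uploads content and GETs it back: earns only the PtP share."""
    return gets * FARMING_REWARD * PTP_SHARE

def farming_payoff(gets):
    """Attacker owns the serving vault: earns the full farming reward."""
    return gets * FARMING_REWARD

gets = 1000
print(ptp_payoff(gets))      # 100.0
print(farming_payoff(gets))  # 1000.0 -- 10x more, as argued above
```

So under these assumptions, gaming farming rewards directly is the far more attractive target - which is the point being made here.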
But as @neo says, it is the elders, applying the routing logic, who make the requests to the different Vaults.
BTW, for various reasons I am against PtP, at least during the beginning of the network. I find it technically dangerous, as it could encourage different attacks, and globally, it could give a negative view of the network since piracy and porn would be dominant in the early days.
You don’t know which data you can request by xor address just from owning a vault. IIRC you would need to own the vault, one of its pmid managers and one of its data manager nodes. I don’t believe that is possible with random relocation.
With PtP you could create a data chunk and know the xor address. This address could then be requested by a botnet of getters to earn PtP rewards. Caching will slow it down unless, as you described earlier, there is a cache overload - probably very unlikely, as neo described. Growth is key… If the size of the network is much larger than the botnet, then caching will probably handle things just fine. What rewards could be used to grow the network as fast as possible?
Okay - requests first go to (all? Oo) elders and those (vote?) decide how to route the data for all gets? Oo
But doesn’t reliable message delivery just specify that all messages/GET requests get relayed to the next section/the corresponding vault…? So all elders cache all requests (yes, a subset of 1/3 - but even when multiplying, I would only need 10 times the data one cache holds - that’s possible, I would assume), and therefore, if requesting from the neighbour section, I would just blow the cache of all elders in my section and the target section at the same time…? (edit: the cache size would roughly need to be around the size of the attacker’s data storage to make sure this cannot happen - just my estimation/expectation)
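The “blow the cache” intuition can be sketched with a toy LRU model - cache capacity, request pattern and chunk counts below are purely illustrative, not network parameters:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache keyed by chunk name, capacity in chunk count."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, name):
        if name in self.store:
            self.store.move_to_end(name)     # refresh recency on a hit
            self.hits += 1
        else:
            self.misses += 1
            self.store[name] = True          # fetched from the vault, cached
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least-recently-used

cache = LRUCache(capacity=100)
# Attacker cycles through 10x more distinct chunks than the cache holds:
for _ in range(10):
    for chunk in range(1000):
        cache.get(chunk)
print(cache.hits)    # 0 -- every chunk is evicted before it is re-requested
```

With cyclic requests over a working set larger than the cache, every GET falls through to the vault, which is exactly the scenario being estimated above.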
I would love to believe this - but that would mean that I need to store all data under some key, the data managers/pmid managers store a table recording which key of my list has which xor address, and the elders then only know which data managers to contact for getting the chunk => those tell me to deliver the corresponding key… That would indeed largely obfuscate things, but I have a hard time believing it’s done like this (just a feeling - I have no proof)
That implies section consensus for each section on the path of each GET, doesn’t it?
Hmm, RFC 57: Safecoin Revised would make more sense if there were section-wide consensus for GETs (a vault would know if an internal GET isn’t valid; the proxying vault could still do an external GET though, and repeat if it receives the GET).
How would the elders know from outside how large the caches of the other elders are supposed to be, and which GETs are supposed to be cached, without doing exactly the same work and holding the same cache…? (edit/ps: and only 1/3 of elders are the delivery group, according to the reliable message delivery RFC)
We’ll probably never know, because it would add a routing complexity that maybe would not be worthwhile. When you live in a completely decentralized world you must assume that a majority of nodes will act honestly. Otherwise the network will never work.
And all the Elders participate in reliable message delivery. It’s just that in each hop of each message only a subset of elders will be involved (otherwise the number of messages, and the cost of a network operation, would grow almost exponentially).
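That blow-up is easy to see with a toy fan-out calculation - the group size of 7, the subset of 3 and the hop count are made-up numbers for illustration only:

```python
def messages(hops, fanout):
    """Total messages if every receiving node re-sends to `fanout` nodes."""
    total = 0
    senders = 1
    for _ in range(hops):
        senders *= fanout          # each hop multiplies the sender count
        total += senders
    return total

# If a full elder group (say 7 elders) forwarded to the full group each hop:
print(messages(5, 7))   # 19607 -- grows roughly as fanout**hops
# A fixed delivery subset (say 3 of 7) keeps the growth far smaller:
print(messages(5, 3))   # 363
```

Same number of hops, but restricting each hop to a subset of elders cuts the message count by orders of magnitude - which is the stated reason for involving only a subset.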
This thread is very confusing to me so hopefully someone can help clear things up.
How would a chunk not be able to be requested individually? That’s the whole point of the xor url system isn’t it? Am I missing something here?
If you could find a source for these bits of info I would appreciate it. I don’t know of the network doing any encryption to chunks. I only know of encryption being done by the client. But I could be wrong so would be interested in where I missed this info.
There are a lot of different levels of encryption, including optional pre-encryption (e.g. PGP), self-encryption (automatic, done by the client) and transport-layer encryption (crust/quic), but none of these clarify to me why vaults would be encrypting chunks in a way that prevents knowing what chunk is stored. (Important clarification: vaults know the name and content of the chunk, but don’t know the file the chunk is part of or the meaning of the content in the chunk.)
I must be missing something here because as far as I understood it every vault knows the xor names of every chunk they store. And xor name is all that’s needed for a client to request that chunk (see above for link to xor urls).
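A hypothetical sketch of that understanding - assuming, for illustration only, that an immutable chunk’s xor name is simply a hash of its (already self-encrypted) content; the exact hash function the real network uses may differ:

```python
import hashlib

def xor_name(chunk: bytes) -> bytes:
    # Assumption for illustration: the 256-bit name is a content hash,
    # so anyone holding the chunk bytes can derive the address to GET it,
    # and a vault storing the chunk necessarily knows its name too.
    return hashlib.sha3_256(chunk).digest()

chunk = b"some self-encrypted chunk content"
name = xor_name(chunk)
print(name.hex())  # deterministic: same content -> same address, network-wide
```

If the name is derivable from the content like this, then knowing the chunk means knowing the address to request it at - which is the crux of the question above.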
This is another thing I can’t recall having read. If you could provide a source that would be great so I can fill the holes in my reading. As far as I know any new chunk once it’s created by the client has a fixed xor name and content. The only role of the network is to ‘place’ the chunk in the correct location, not to obfuscate or modify it. Am I wrong in my understanding?
The vault does (locally) know the time, and the cache is entirely locally controlled, so in theory the vault may choose to expire cache entries by time. But probably, as you say, it would be optimal for them to expire by volume, so whatever. My point is: the cache is local, and so is the clock.
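Both policies really are just local choices - a minimal sketch, with a made-up byte budget and an optional TTL, showing time-based and volume-based expiry side by side:

```python
import time
from collections import OrderedDict

class LocalCache:
    """Cache with purely local policies: a byte budget and an optional TTL."""
    def __init__(self, max_bytes, ttl=None):
        self.max_bytes = max_bytes
        self.ttl = ttl                 # seconds; None = expire by volume only
        self.entries = OrderedDict()   # name -> (data, stored_at)
        self.used = 0

    def put(self, name, data):
        self.entries[name] = (data, time.monotonic())
        self.used += len(data)
        while self.used > self.max_bytes:        # expire by volume (LRU)
            _, (old, _) = self.entries.popitem(last=False)
            self.used -= len(old)

    def get(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None
        data, stored_at = entry
        if self.ttl is not None and time.monotonic() - stored_at > self.ttl:
            del self.entries[name]               # expire by (local) time
            self.used -= len(data)
            return None
        self.entries.move_to_end(name)
        return data

c = LocalCache(max_bytes=10)
c.put("a", b"12345")
c.put("b", b"12345")
c.put("c", b"1")      # budget exceeded: oldest entry "a" is evicted
print(c.get("a"))     # None
print(c.get("b"))     # b'12345'
```

Nothing here requires agreement with any other node, which is the point: each vault can run whatever expiry policy it likes.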
These are old terms, have been replaced by Elders:
“Existing group Authorities such as NaeManager and so on will all fold into a single type, Elders” (source)
Overall this topic has made me feel there’s a need for a simple explanation of how routing actually works, and maybe some explanation of how it used-to-but-no-longer-does work.
I know a lot of changes have recently come about, especially the secure/reliable messaging, so misunderstandings and legacy knowledge are quite understandable. But it’d be nice to have a consistent and up-to-date understanding because much of this thread is not consistent with what I thought I knew about routing and chunk storage, which is a little concerning considering how much time I spend on this stuff!!
Just what I’ve come across on github and been able to infer on my own. Outdated info, yes. However, it’s pretty clear from the old vault personas that the separation of duties allows the retrieval of chunks from a vault while, at the same time, the vault is incapable of knowing the xor addresses of the chunks it holds. The neighbouring nodes that manage the vault can know, by keeping a lookup table to translate an xor request coming in from a client into the vault’s chunk index. The vault manager can also encrypt a chunk before passing it to the vault for storage. Is this exactly what is going on currently in the latest code? Unsure… But that technique makes farming rewards impossible to game unless multiple personas are bad actors that collude.
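That separation-of-duties idea (from the old persona design - not necessarily what the current code does) can be sketched: the manager keeps the xor-name-to-local-index table and encrypts with a key the vault never sees. The XOR keystream below is a toy stand-in for real encryption:

```python
import os, hashlib

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream (applying it twice decrypts). Not real crypto."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha3_256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Vault:
    """Stores opaque blobs by local index; never sees xor names or plaintext."""
    def __init__(self):
        self.blobs = []
    def store(self, blob):
        self.blobs.append(blob)
        return len(self.blobs) - 1
    def fetch(self, index):
        return self.blobs[index]

class Manager:
    """Holds the xor-name -> local-index table and the encryption key."""
    def __init__(self, vault):
        self.vault = vault
        self.key = os.urandom(32)
        self.table = {}
    def put(self, xor_name, chunk):
        self.table[xor_name] = self.vault.store(stream_cipher(self.key, chunk))
    def get(self, xor_name):
        return stream_cipher(self.key, self.vault.fetch(self.table[xor_name]))

vault = Vault()
mgr = Manager(vault)
mgr.put(b"\x01" * 32, b"chunk payload")
print(mgr.get(b"\x01" * 32))   # b'chunk payload' -- roundtrip via the manager
```

In this model the vault owner can neither look up a chunk by xor address nor read its content, so self-requesting your own vault’s chunks stops working - if (and only if) the personas don’t collude.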
Just want to throw in that I assume the data managers need to call each chunk by the same name, and therefore all share this table as common knowledge - so if elders are data managers now, I would still just need to be in control of one elder (maybe + a second regular vault in the same section, if the elder doesn’t know its own table / only stores the table)… That system worked well with the old, more decentralised structure, but elders seem to have become a kind of central authority, which looks to me incapable of resolving this particular issue
Wait a minute… isn’t last proposal that farming rewards are paid immediately on PUTs?
That for sure removes any gaming angles on farming. (An additional benefit of that solution, as well as reducing complexity, it would seem.)