I see two interpretations here:
send each chunk on a separate connection [to the network / to the client manager] so that they are effectively uploaded in parallel.
I’m assuming the first interpretation is what you mean. Is this correct?
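For what it's worth, a purely illustrative sketch of that first interpretation, in Rust since that's what the client libs are written in. Everything here is hypothetical (`upload_chunk` stands in for whatever the real network call would be); the point is only that each chunk gets its own connection/thread so the chunks go up in parallel:

```rust
use std::thread;

// Hypothetical stand-in for a real per-connection network send;
// here it just reports how many bytes were "sent".
fn upload_chunk(index: usize, chunk: &[u8]) -> usize {
    let _ = index; // a real client would use this to tag the chunk
    chunk.len()
}

// Split the data into fixed-size chunks and "upload" each one on its
// own thread, then wait for all of them and total the bytes sent.
fn parallel_upload(data: &[u8], chunk_size: usize) -> usize {
    let handles: Vec<_> = data
        .chunks(chunk_size)
        .map(|c| c.to_vec())
        .enumerate()
        .map(|(i, c)| thread::spawn(move || upload_chunk(i, &c)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```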
The client connection is currently handled by the maid manager persona (see wiki). Looking at the structure of the maid manager, there is an `accounts` field, i.e. data entering the network is aimed at the `accountname` XOR space rather than the `chunkname` XOR space.
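To make the XOR-space point concrete, here's a minimal sketch assuming 256-bit names as `[u8; 32]` (the names and functions are mine, not from the codebase). Data aimed at the account level all targets one name, so one close group handles it; data aimed at the chunk level would target per-chunk names, so different chunks land near different groups of nodes:

```rust
// XOR distance between two 256-bit names, Kademlia-style.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

// The nodes whose own names are closest (by XOR distance) to the
// target name are the ones responsible for managing that data.
fn closer(target: &[u8; 32], x: &[u8; 32], y: &[u8; 32]) -> bool {
    xor_distance(target, x) < xor_distance(target, y)
}
```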
Could it be redesigned so data is aimed at the chunk level rather than the account level? I guess it could… but the authentication mechanism would probably add some overhead, since each chunk must still be checked against the account, which presumably means checking back with the maid manager.
The idea has merit though, since it would distribute the network load for the main chunk content across many nodes, leaving only the authentication handling to the maid manager. Could be an improvement… but I’m not familiar enough with the design of the personas to fully grasp the feasibility!
On the other hand, having clients interface with the network via just one node (the maid manager) provides a degree of consistency, whereas interfacing at many points on the network may mean inconsistent responses, which could increase the complexity of the user experience and the chance of upload failure if something goes wrong.
Not sure about this; my understanding is that a client interacts with a single entry point on the network. Like on Tor, once the network establishes your entry point it stays that way for the session. Maybe someone more knowledgeable can clarify?
I’m not sure. There’s an async branch of safe_client_libs, but I haven’t been following it, so I don’t know whether it represents actual work on the async feature.
So overall, I admit there’s not a lot of clarity from me; I haven’t followed the code as closely as I used to.