For the web layer itself, lower speed could be a blessing in disguise. Imagine lighter pages.
Just eliminating all the tracking links and ad loading gives a massive performance boost.
There will be some speed niggles, but I think they're easily overcome. There's less to concern ourselves with here than with the change from having servers to do fancy stuff to being limited on the backend and doing more on the client.
I’m not saying that’s a problem either, but it is a much more significant change IMO.
That’s precisely what I was thinking about. Web layer should be fine just by cutting out all the spying fat.
Nomenclature is important. Can these three terms be understood to mean something like this?
Write: The process of sending something to the network for storage and retrieval, a “PUT”. A measure of time for this process would start from the moment of transmission from the sending device to the moment said data is available for download.
Download: The process of requesting data stored on the network. Measure of time would start when the requester submits the request and end when the full document is available on his device.
Upload: The process of fulfilling the download request. Decentralized vaults send chunks to the requester. A vault’s measure of upload time would start when the vault receives the request and end when the relevant chunks appear on the requester’s device. Each vault would have a slightly different “upload” time depending on the resources (specifically upstream bandwidth and “computing power”) available to the vault. The upload time will determine, in large part, the requester’s actual Download speed and will be improved by system-wide caching of data.
Please feel free to correct any inaccuracies in this explanation of terms. A more thorough description of how to measure the time considerations of caching would certainly be welcome. It would be useful to be able to measure the Download time of a fetch request when no caching is involved and compare it to actual network performance using caching, if this is feasible on the testnets or eventual Safe Network.
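To make the "compare Download time with and without caching" idea concrete, here's a toy sketch. Everything in it is hypothetical (the `SimulatedNetwork`, hop costs, and cache-on-the-way-back behaviour are my invention, not the actual network design); it counts simulated network hops instead of wall-clock time so the comparison is deterministic.

```python
# Toy model: a "vault" holds chunks, and a cache along the request
# route can short-circuit repeated GETs. All names are hypothetical.

VAULT_HOP_COST = 5   # hops to fetch a chunk from its home vault
CACHE_HOP_COST = 1   # hops to fetch a chunk from a nearby cache

class SimulatedNetwork:
    def __init__(self, chunks):
        self.vault = dict(chunks)  # chunk address -> chunk data
        self.cache = {}            # opportunistic cache along the route

    def get(self, addr):
        """Return (data, hop_cost) for one chunk request."""
        if addr in self.cache:
            return self.cache[addr], CACHE_HOP_COST
        data = self.vault[addr]
        self.cache[addr] = data    # cache the chunk on the way back
        return data, VAULT_HOP_COST

def download(net, addrs):
    """Fetch all chunks of a document; return (document, total hops)."""
    total, parts = 0, []
    for addr in addrs:
        data, cost = net.get(addr)
        parts.append(data)
        total += cost
    return b"".join(parts), total

net = SimulatedNetwork({0: b"safe ", 1: b"network"})
doc1, cold = download(net, [0, 1])   # first fetch: no chunks cached yet
doc2, warm = download(net, [0, 1])   # second fetch: served from cache
```

The first download pays the full vault cost per chunk; the second pays only the cache cost, which is the kind of gap the testnet measurement above would try to quantify for real.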
Write - A network mutation of any kind requires payment and agreement. This takes a period of time commensurate with the work the network requires. Think of this as a network command.
Read - An operation that does not mutate the network data and a request that can be repeated ad infinitum. These non-mutating events are free to all users. Think of this as a network query.
As a read is non-mutating, it can be done in parallel; therefore reading a huge file start to end is slow, whereas reading all the bits in parallel is very fast, as BitTorrent/Gnutella etc. show.
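The sequential-versus-parallel point can be sketched with a thread pool over simulated vaults. This is only an illustration of the principle (the per-chunk latency and the `fetch_chunk` helper are invented); sequential reads pay the round-trip latency once per chunk, while parallel reads pay it roughly once overall.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical setup: each chunk lives on a different vault, and each
# fetch costs a fixed simulated round-trip latency.
CHUNK_LATENCY = 0.05  # seconds, simulated per-vault round trip
CHUNKS = {i: f"chunk-{i}".encode() for i in range(8)}

def fetch_chunk(addr):
    time.sleep(CHUNK_LATENCY)  # pretend network round trip
    return CHUNKS[addr]

def read_sequential(addrs):
    # One round trip after another: latency adds up per chunk.
    return b"".join(fetch_chunk(a) for a in addrs)

def read_parallel(addrs):
    # All round trips in flight at once, BitTorrent-style.
    with ThreadPoolExecutor(max_workers=len(addrs)) as pool:
        return b"".join(pool.map(fetch_chunk, addrs))

addrs = sorted(CHUNKS)
t0 = time.perf_counter(); seq = read_sequential(addrs); t_seq = time.perf_counter() - t0
t0 = time.perf_counter(); par = read_parallel(addrs);   t_par = time.perf_counter() - t0
```

Because reads mutate nothing, the parallel version needs no coordination between fetches, which is exactly why it can be this much faster.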
Hey @dirvine, is opportunistic caching coming later? I don't hear it mentioned or see it on the project boards etc. I remember another form of caching being mentioned before too, deterministic caching? I don't want to bug you, as I know there is plenty to do, but it's more fun than anything for me to keep up to date with the state of the network, and it felt relevant to the topic.
All caching will come after Fleming: opportunistic first, then deterministic. The former is for non-changing (immutable) data and the latter for changing data. Now, with disjoint sections and a flattened routing infrastructure, it is less important for reads.
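My reading of why opportunistic caching suits immutable data: content-addressed chunks never change, so any node on the request path can keep a copy and serve it later with no invalidation worries. A bounded LRU is enough. The sketch below is that idea in miniature, not the actual Safe Network design; the class name and capacity are invented.

```python
from collections import OrderedDict

class OpportunisticCache:
    """Bounded LRU cache for immutable, content-addressed chunks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # chunk address -> chunk data
        self.hits = self.misses = 0

    def get(self, addr, fetch_from_vault):
        if addr in self.store:
            self.hits += 1
            self.store.move_to_end(addr)  # mark as recently used
            return self.store[addr]
        self.misses += 1
        data = fetch_from_vault(addr)     # fall back to the home vault
        self.store[addr] = data           # cache opportunistically
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data

# Tiny demo: a vault with three chunks, a cache that holds two.
vault = {"a": b"A", "b": b"B", "c": b"C"}
cache = OpportunisticCache(capacity=2)
for addr in ["a", "b", "a", "c", "b"]:
    cache.get(addr, vault.__getitem__)
```

Deterministic caching for mutable data would need more machinery (knowing which nodes hold the current version), which fits with it landing later than the opportunistic variant.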
Soon after Fleming it will become important, and will probably be done a little more efficiently.
Great insight and much appreciated, captain.