Hmm, some odd errors there. Had you downloaded and verified these URLs previously? URL parsing errors look like we're not even hitting the network there…
We were never aiming for 100GB on all nodes. If we weren't limiting the section, it would split well before this point (to be confirmed in future testnets). (Plus, as I just realised, the machines didn't actually have much more than 50GB of storage, it seems.)
To be honest, now I look at it, I hadn't actually downloaded and verified them myself. But the PUTs were reported as successful. In future I think I'll download them and verify them myself after putting them.
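The verification step itself is trivial once a file has been fetched back to disk. A std-only Rust sketch of the comparison, with the fetch assumed to have happened already (e.g. via `files get` to a temp dir):

```rust
// Std-only sketch: compare a file that was put against the copy fetched
// back out of the network (assumes the fetch to a local path already
// happened; no network code here).
use std::{fs, io, path::Path};

fn verify_roundtrip(original: &Path, fetched: &Path) -> io::Result<bool> {
    // For multi-GB files a streaming hash would be kinder on memory,
    // but a byte-for-byte comparison is the simplest correctness check.
    Ok(fs::read(original)? == fs::read(fetched)?)
}
```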
Would it be difficult to include in the next testnet a distinctive, easy-to-understand message that is broadcast back to the client when a PUT fails because the section is full? Something like this, for example:
“Error: Upload failed due to insufficient storage capacity in section < ABC > for chunk id < XYZ >. Wait 60 seconds for section updates to complete and then try again.”
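Purely to illustrate the shape (I haven't looked at the actual error types in the node code, so the variant and field names below are invented):

```rust
// Hypothetical sketch only; none of these types are claimed to exist in
// the real codebase. It just shows a dedicated error variant carrying
// enough context for the client to act on.
use std::fmt;

#[derive(Debug)]
pub enum PutError {
    /// The section holding this chunk's address has no free storage.
    SectionFull {
        section_prefix: String, // e.g. "ABC"
        chunk_id: String,       // XOR address of the rejected chunk
        retry_after_secs: u64,  // hint: wait for section updates/split
    },
}

impl fmt::Display for PutError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            PutError::SectionFull { section_prefix, chunk_id, retry_after_secs } => write!(
                f,
                "Error: Upload failed due to insufficient storage capacity in \
                 section <{section_prefix}> for chunk id <{chunk_id}>. Wait \
                 {retry_after_secs} seconds for section updates to complete and \
                 then try again."
            ),
        }
    }
}
```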
My terminal finds a new image to download every minute or so, but the actual download is not progressing at all.
It's not surprising that downloading doesn't progress if the network is offline, but how come the next image seems to start again and again? Like this:
/Safe_alle_10_C/Portus_E_192.jpg - files: 844 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 8928460 (0%)
/Safe_alle_10_C/Portus_E_193.jpg - files: 845 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 8542083 (0%)
/Safe_alle_10_C/Portus_F_194.jpg - files: 846 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 7040333 (0%)
/Safe_alle_10_C/Portus_F_195.jpg - files: 847 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 6815009 (0%)
/Safe_alle_10_C/Portus_F_196.jpg - files: 848 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 7243192 (0%)
/Safe_alle_10_C/Portus_F_197.jpg - files: 849 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 10050582 (0%)
/Safe_alle_10_C/Portus_F_198.jpg - files: 850 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9988542 (0%)
/Safe_alle_10_C/Portus_F_199.jpg - files: 851 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 8237527 (0%)
/Safe_alle_10_C/Portus_F_200.jpg - files: 852 of 942 (90%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9032682 (0%)
/Safe_alle_10_C/Portus_F_201.jpg - files: 853 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 8116434 (0%)
/Safe_alle_10_C/Portus_F_202.jpg - files: 854 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9428729 (0%)
/Safe_alle_10_C/Portus_F_203.jpg - files: 855 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 7750092 (0%)
/Safe_alle_10_C/Portus_F_204.jpg - files: 856 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 7552671 (0%)
/Safe_alle_10_C/Portus_F_205.jpg - files: 857 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9541365 (0%)
/Safe_alle_10_C/Portus_F_206.jpg - files: 858 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9956141 (0%)
/Safe_alle_10_C/Portus_F_207.jpg - files: 859 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9564187 (0%)
/Safe_alle_10_C/Portus_F_208.jpg - files: 860 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 9654520 (0%)
/Safe_alle_10_C/Portus_F_209.jpg - files: 861 of 942 (91%). transfer: 6289153167 of 8163742817 (77%), file: 0 of 7248582 (0%)
It would seem that the client is hitting some kind of error, failing the download for that file, and then jumping to the next file instead of failing the entire files get operation.
Two things seem wrong/odd to me here:
1. Why is no error message displayed?
2. Why does the operation continue after the error?
wrt (2), I would think that the default should be to fail on any error, with (perhaps) an optional CLI flag to try-next-file on error.
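Roughly this behaviour, as a sketch only: the `continue_on_error` flag and `fetch_file` helper are made up for illustration, not the real sn_cli internals:

```rust
// Sketch of the expected loop: fail fast by default, skip-and-report
// only when explicitly asked to via a hypothetical CLI flag.
fn get_files(paths: &[String], continue_on_error: bool) -> Result<(), String> {
    let mut failures = Vec::new();
    for path in paths {
        match fetch_file(path) {
            Ok(_) => println!("{path} - done"),
            Err(e) => {
                // Always surface the error, even when we keep going.
                eprintln!("{path} - error: {e}");
                if !continue_on_error {
                    // Default: abort the whole operation on the first error.
                    return Err(format!("aborting after error on {path}: {e}"));
                }
                failures.push(path.clone());
            }
        }
    }
    if failures.is_empty() {
        Ok(())
    } else {
        Err(format!("{} file(s) failed", failures.len()))
    }
}

// Stand-in for the real network fetch call.
fn fetch_file(_path: &str) -> Result<Vec<u8>, String> {
    Err("connection lost".to_string())
}
```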
Maybe caching of recently uploaded and stored chunks would help here.
When the client requests to upload a chunk and it's already there, the elders (node?) respond with a "good, got it" and the client is told not to upload the chunk. The cache helps because it saves extra requests to the node to check whether the chunk is there. The cache needn't be large, since any attack would of necessity require "rapid" uploading (probably from multiple clients using the original DBC) in order to disrupt.
Other uploaders would still get the "have it" response, but since they've already paid for the chunk, they still pay without uploading.
tl;dr: keep a small cache (of XOR addresses) of recent successful chunk uploads and respond with "have it" to prevent extra work. This helps to close off that attack vector.
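Rough sketch of the bookkeeping I mean (XorName here is just a stand-in for the real type in the xor_name crate):

```rust
// Bounded record of recently stored chunk addresses. Small on purpose:
// it only needs to absorb bursts of duplicate uploads, not mirror the
// whole chunk store.
use std::collections::{HashSet, VecDeque};

type XorName = [u8; 32]; // stand-in for the network's XOR address type

struct RecentChunks {
    order: VecDeque<XorName>, // insertion order, for eviction
    seen: HashSet<XorName>,   // O(1) membership checks
    cap: usize,
}

impl RecentChunks {
    fn new(cap: usize) -> Self {
        Self { order: VecDeque::with_capacity(cap), seen: HashSet::new(), cap }
    }

    /// Returns true if we recently stored this chunk ("have it"), so the
    /// client can be told to skip the upload. Otherwise records it.
    fn check_or_insert(&mut self, name: XorName) -> bool {
        if self.seen.contains(&name) {
            return true; // respond "have it"; no transfer needed
        }
        if self.order.len() == self.cap {
            if let Some(old) = self.order.pop_front() {
                self.seen.remove(&old); // evict the oldest entry
            }
        }
        self.order.push_back(name);
        self.seen.insert(name);
        false
    }
}
```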
I would have liked it to stay up for a few days so we could have done some more downloading and verifying after we had hammered it with all the uploads.