I have seen some files available to cat at some point, then disappear a few minutes later.
Now my node has been idle since 17:27 UTC.
What are we meant to use when killing a node?
safe node killall
or
pkill sn_node
Either should work, but I always use safe node killall.
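For anyone scripting restarts, a minimal sketch covering both options (just a convenience wrapper around the two commands above):

```bash
#!/usr/bin/env bash
# Prefer the CLI's own kill command; fall back to pkill if the safe CLI isn't on PATH.
if command -v safe >/dev/null 2>&1; then
    safe node killall
else
    pkill sn_node
fi
```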
OK, bypassing authd helps (safe keys create --test-coins --for-cli)
I managed to put an empty file and cat it back, but cat does not work with files posted by others (while dog does work). Putting a 7 KB file is taking forever though, on the order of minutes.
Edit: The cat was quick though.
Just uploaded a ~1.3 MB image in 5 minutes:
safe cat safe://hygoygycib9jujeahfow9i4stbrpcra4bwhtikus6fmwxgdoj76wspn7joc > img.jpg
The download takes about 1 minute for me.
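If you want harder numbers than a wall clock, a minimal sketch using the shell's time builtin (the local file path is just an example; the safe:// URL is the one quoted above):

```bash
# Time an upload of a local file
time safe files put ./img.jpg

# Time a download of the image shared above
time safe cat safe://hygoygycib9jujeahfow9i4stbrpcra4bwhtikus6fmwxgdoj76wspn7joc > img.jpg
```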
My node works surprisingly well:
I see WriteChunk, ReadChunk, ReplicateChunk, ReturnChunkToElders, and immutable chunks still keep appearing.
Really good work team. A few explosions today on my terminal but mostly good. We have lift off for sure. Looking forward to AE being integrated in. Onwards and upwards.
Random thought: is there work that can be done offline to prepare an upload, or does the CLI need to be connected to know which signatures to apply?
I just wonder, for testing, whether it would be easier to know how the work balances between client and network. Perhaps a question more for the optimisation stage.
and
As I type, having just got back… my node joins after the third attempt. In parallel is the upload of the usual files, which will show whether that is any quicker on this occasion.
Can't believe I missed a whole .1 of a testnet!
Looking good… Writing chunk succeeded! and uploads working too… and the node log is ticking over nicely; so, evidently it's alive!
Anyone willing to put their neck on the line with a guess?
I'll go first… DBC.
Oops, I broke vdash 0.6.2?
$ vdash $HOME/.safe/node/local-node/sn_node.log
Loading 1 files...
file: /home/safe/.safe/node/local-node/sn_node.log
thread 'main' panicked at 'index out of bounds: the len is 210 but the index is 18446744073709551615', /home/safe/.cargo/registry/src/github.com-1ecc6299db9ec823/vdash-0.6.2/src/bin/../custom/app.rs:692:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
$ RUST_BACKTRACE=1 vdash $HOME/.safe/node/local-node/sn_node.log
Loading 1 files...
file: /home/safe/.safe/node/local-node/sn_node.log
thread 'main' panicked at 'index out of bounds: the len is 210 but the index is 18446744073709551615', /home/safe/.cargo/registry/src/github.com-1ecc6299db9ec823/vdash-0.6.2/src/bin/../custom/app.rs:692:21
stack backtrace:
0: rust_begin_unwind
at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:483
1: core::panicking::panic_fmt
at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/panicking.rs:85
2: core::panicking::panic_bounds_check
at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/panicking.rs:62
3: vdash::custom::app::TimelineSet::increment_value
4: vdash::custom::app::NodeMetrics::gather_metrics
5: vdash::custom::app::LogMonitor::append_to_content
6: vdash::custom::app::LogMonitor::load_logfile
7: vdash::terminal_main::{{closure}}
8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
9: tokio::macros::scoped_tls::ScopedKey<T>::set
10: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
11: tokio::runtime::context::enter
12: tokio::runtime::handle::Handle::enter
13: vdash::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
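That index, 18446744073709551615, is usize::MAX, which usually points at an unsigned subtraction that wrapped around (e.g. computing an index as something - 1 when it is already 0) before being used to index a 210-element buffer. I haven't checked vdash's TimelineSet code, so this is only a guess at the class of bug; a minimal sketch of the pattern and a defensive fix:

```rust
// Hypothetical illustration only, not vdash's actual code.
fn main() {
    let buckets = vec![0u64; 210]; // len 210, as in the panic message
    let current: usize = 0;

    // BUG pattern: in a release build `current - 1` wraps to usize::MAX when
    // `current` is 0, and indexing with it panics with
    // "index out of bounds: the len is 210 but the index is 18446744073709551615".
    // let idx = current - 1;
    // println!("{}", buckets[idx]);

    // Defensive version: only index when the subtraction is valid.
    if let Some(idx) = current.checked_sub(1) {
        println!("value: {}", buckets[idx]);
    } else {
        println!("nothing to increment yet");
    }
}
```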
Oi, that's my job!!!
Well done
You're obviously slacking… was expecting 4.3 when I came home
Today everything began splendidly: no problems joining the network, and I carried out many CLI commands successfully. But then my node was no longer accepting chunks, so I killed it, and now I'm attempting to rejoin, but having no luck so far after 32 tries. So anyway, I'm glad I could break something today, lol.
So, I've had one folder of many upload, but it's not looking quick… same with an nrs create that is seemingly stuck,
but I'll leave those running and come back later.
My node join just got stuck again after almost an hour of trying.
Interestingly, when it's working normally I only see network activity at those 3-minute intervals, but once it gets stuck there is network activity almost constantly (just in the range of a few hundred bits/s to 1.5 kb/s).
Maybe it can help someone:
I had problems with safe files put earlier.
Then I decided to recreate the account with safe keys create --test-coins --for-cli.
Now puts work again for me.
So it looks like account data can be corrupted in the same or a similar way as regular data.
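For anyone else hitting stuck puts, a minimal sketch of that workaround (the file path is only an example):

```bash
# Recreate the CLI keypair with test coins, replacing the current one
safe keys create --test-coins --for-cli

# Retry the upload that was failing
safe files put ./some_file.bin
```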
Why is the consumed space not increasing but fluctuating?
grep 'consumed' ./sn_node.log
in time order gives something like:
237484
1048604
908
304
978828
293
1048604
305
30748
305
1048604
301
1048604
1048604
1048604
343276
537
304
1048604
<snip />
[src/chunk_store/mod.rs:106] consumed space: 293
[sn_node] INFO 2021-04-23T19:25:26.074291511+01:00 [src/chunk_store/mod.rs:106] consumed space: 1048604
[sn_node] INFO 2021-04-23T19:25:32.647432674+01:00 [src/chunk_store/mod.rs:106] consumed space: 1048604
[sn_node] INFO 2021-04-23T19:25:46.618358501+01:00 [src/chunk_store/mod.rs:106] consumed space: 142
[sn_node] INFO 2021-04-23T19:26:28.605598246+01:00 [src/chunk_store/mod.rs:106] consumed space: 349548
[sn_node] INFO 2021-04-23T19:26:29.108381752+01:00 [src/chunk_store/mod.rs:106] consumed space: 1048604
[sn_node] INFO 2021-04-23T19:26:35.817164592+01:00 [src/chunk_store/mod.rs:106] consumed space: 308
[sn_node] INFO 2021-04-23T19:28:01.649188103+01:00 [src/chunk_store/mod.rs:106] consumed space: 1048604
[sn_node] INFO 2021-04-23T19:28:04.864299750+01:00 [src/chunk_store/mod.rs:106] consumed space: 349548
[sn_node] INFO 2021-04-23T19:28:24.376404277+01:00 [src/chunk_store/mod.rs:106] consumed space: 349548
[sn_node] INFO 2021-04-23T19:28:39.533233700+01:00 [src/chunk_store/mod.rs:106] consumed space: 1048604
[sn_node] INFO 2021-04-23T19:28:52.202402066+01:00 [src/chunk_store/mod.rs:106] consumed space: 349548
[sn_node] INFO 2021-04-23T19:29:15.731296692+01:00 [src/chunk_store/mod.rs:106] consumed space: 1964
[sn_node] INFO 2021-04-23T19:29:15.902018275+01:00 [src/chunk_store/mod.rs:106] consumed space: 304
[sn_node] INFO 2021-04-23T19:29:35.437745030+01:00 [src/chunk_store/mod.rs:106] consumed space: 293
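If you want to see the trend with timestamps rather than bare numbers, a small variation on the grep above (the awk field positions assume the full log-line format shown in the snippet):

```bash
# Print the timestamp and the consumed-space value side by side
grep 'consumed space' ./sn_node.log | awk '{print $3, $NF}'
```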
Please save that logfile!
If you have time to help narrow it down, you could try saving sections of the logfile to separate files and loading each one at a time to see if a particular part of the logfile is causing this. Maybe some corruption, or a special case I'm not parsing properly.
If you have time to do that it would be a great help - also in case I can't replicate this.
Otherwise, by all means just send me the logfile and I'll take a look.
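A minimal sketch of that narrowing-down approach, assuming a plain line split keeps each piece parseable:

```bash
# Split the log into ~2000-line pieces: sn_node_part_aa, sn_node_part_ab, ...
split -l 2000 $HOME/.safe/node/local-node/sn_node.log sn_node_part_

# Load each piece on its own; the one that panics contains the problem lines
# (quit vdash between pieces).
for part in sn_node_part_*; do
    echo "=== $part ==="
    vdash "$part"
done
```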
User_1@DESKTOP-4QAR72F MINGW64 ~
$ safe cat safe://hygoygycib9jujeahfow9i4stbrpcra4bwhtikus6fmwxgdoj76wspn7joc > img.jpg
User_1@DESKTOP-4QAR72F MINGW64 ~
$ safe cat safe://hygoygyp7xwuk4d4ygdgfu7o3shppd8g8pnoq6zu5d8k5q1ao4ribhnf33r > slav_survival_pack.jpg
Error: NetDataError: Failed to GET Public Blob: ErrorMessage(NoSuchData)
User_1@DESKTOP-4QAR72F MINGW64 ~
$ safe cat safe://hygoyeyxscg4gw35dzy44ezz3u8tq5ooquzsc495m7iip5rn1pekpurdgiy > imageprocessing_lady_lena.png
Error: NetDataError: Failed to GET Public Blob: SelfEncryption(Storage)
User_1@DESKTOP-4QAR72F MINGW64 ~
$ safe cat safe://hygoyeyxscg4gw35dzy44ezz3u8tq5ooquzsc495m7iip5rn1pekpurdgiy > imageprocessing_lady_lena.png
Error: NetDataError: Failed to GET Public Blob: SelfEncryption(Storage)
User_1@DESKTOP-4QAR72F MINGW64 ~
$ safe cat safe://hygoyeym3ynhup3q3bo6fd5qw1s9z5ugarqxbsesp3zmte6diropostwmkc > logo.jpg
Error: NetDataError: Failed to GET Public Blob: ErrorMessage(NoSuchData)
User_1@DESKTOP-4QAR72F MINGW64 ~
$ safe cat safe://hygoyeymasmcyc86iehubpywh81eroedagqto1q1yfpthwwdbt8qb1qzw7o > 1.png
Error: NetDataError: Failed to GET Public Blob: ErrorMessage(NoSuchData)
User_1@DESKTOP-4QAR72F MINGW64 ~
Node is up and running again but still can't download anything.
It prints the added space.
Look for the 'used space total' lines.