As a Windows user I was also able to get some chunks (25 MB so far), but am unable to view files.
Error: NetDataError: Failed to GET Public Blob: SelfEncryption(Storage)
Error: NetDataError: Failed to GET Public Blob: ErrorMessage(NoSuchData)
Every time some GET command got stuck, I opened another window to continue.
No more chunks after this event:
use space total : 24093444
[sn_node] INFO 2021-04-15T21:13:54.504657700+02:00 [src\chunk_store\mod.rs:125] Writing chunk succeeded!
[sn_node] INFO 2021-04-15T21:13:54.505258900+02:00 [src\node\handle.rs:27] Handling NodeDuty: No op.
[sn_node] INFO 2021-04-15T21:40:05.556332900+02:00 [src\event_mapping\mod.rs:44] Handling RoutingEvent: EldersChanged { prefix: Prefix(110), key: PublicKey(08a0…a159), sibling_key: None, elders: {c621ad…, c8716e…, cfeb37…, d6f7d6…, dceff2…, dd263d…}, self_status_change: None }
[sn_node] INFO 2021-04-15T21:40:05.852711100+02:00 [src\node\handle.rs:27] Handling NodeDuty: No op.
[sn_node] INFO 2021-04-15T22:15:33.427340200+02:00 [src\event_mapping\mod.rs:44] Handling RoutingEvent: EldersChanged { prefix: Prefix(110), key: PublicKey(1993…2411), sibling_key: None, elders: {c621ad…, c8716e…, cfeb37…, d46904…, d6f7d6…, dceff2…, dd263d…}, self_status_change: None }
[sn_node] INFO 2021-04-15T22:15:33.428083700+02:00 [src\node\handle.rs:27] Handling NodeDuty: No op.
Ah Ken, I had 1 beer and then more work. Believe it or not, I had to answer a shareholder who was annoyed we sent him the dev update, with a para to let him know where we were. There is no way to please some folk. Lucky I never had 6 beers, I suppose. You can never win sometimes.
I uploaded 46 MB in 4m8s using cat data.bin | safe seq store -
I am successfully running a node (started about half an hour after the original post, going on about 12h now) with no signature verification at all, and it has all worked well. I'd love it if someone could try to exploit that somehow; I will run this same no-verify setup in every testnet.
Total stored data: 84.61 MiB
Total chunks: 140
Total chunks at max size 1048604B: 64
Ah yes, I expect this to happen on occasion, especially during churn. As part of AE they will retry continually, but right now the client probably (almost definitely) connects to the wrong elders from time to time and fails. Great snooping though; it all helps, and all these angles are why the community tests are so much better than our in-house ones.
Yes, we will be altering that one actually. Right now you can store anything in seq, but it will be restricted to only pointers and metadata, so it will change a bit. It is interesting that it was so much faster, though; perhaps the client is waiting on chunk storing, but it should not be. We will check that out (it used to, but that's very wrong in an eventually consistent network).
Wouldn't it just get your elder node in trouble with the other elders if you are approving operations that others are rejecting? I guess you don't have to tell us yet if you suspect this leaves some vulnerability we should find.
Can anyone help me to understand how logging in Rust works?
I have tried 2 different settings so far, and both look wrong to me.
First one: RUST_LOG=info
It gives useful data such as
[sn_node] INFO 2021-04-16T06:35:12.122179400+03:00 [src\bin\sn_node.rs:106] Running sn_node v0.37.8
[sn_node] INFO 2021-04-16T06:35:12.187179400+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\sn_routing-0.60.3\src\routing\mod.rs:127] 41a402.. Bootstrapping a new node.
[sn_node] INFO 2021-04-16T06:35:22.491779400+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\qp2p-0.11.8\src\endpoint.rs:270] IGD request failed: Could not find the gateway device for IGD - IgdSearch(IoError(Custom { kind: TimedOut, error: "search timed out" }))
[sn_node] INFO 2021-04-16T06:35:22.700779400+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\sn_routing-0.60.3\src\routing\bootstrap.rs:196] Bootstrapping redirected to another set of peers: [165.227.231.195:52131, 139.59.177.49:42192, 188.166.172.140:37214, 138.68.180.216:49021, 138.68.135.139:49440, 165.227.229.96:46868, 165.227.224.142:38670]
[sn_node] ERROR 2021-04-16T06:35:22.763779400+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\sn_routing-0.60.3\src\routing\bootstrap.rs:177] Network is set to not taking any new joining node, try join later.
[sn_node] INFO 2021-04-16T06:35:22.864779400+03:00 [src\bin\sn_node.rs:118] The network is not accepting nodes right now. Retrying after 3 minutes
and not-so-useful lines like [tokio-runtime-worker] INFO 2021-04-16T06:35:12.279179400+03:00 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\quinn-0.7.2\src\connection.rs:286] drive; id=19
which I think represent network activity.
Second setting is RUST_LOG=sn_node=debug
In this mode I see only
[sn_node] INFO 2021-04-16T06:18:48.249480700+03:00 [src\bin\sn_node.rs:106] Running sn_node v0.37.8
[sn_node] INFO 2021-04-16T06:18:58.645080500+03:00 [src\bin\sn_node.rs:118] The network is not accepting nodes right now. Retrying after 3 minutes
which lacks many useful details.
I have several questions:
Why does RUST_LOG=info show [sn_node] lines that I can't see if I use RUST_LOG=sn_node=debug (which should be more detailed)?
What is the actual relationship between the -vv parameter for sn_node and the RUST_LOG env variable?
Is it possible to keep messages like IGD request failed while silencing drive; id=? (Last time my log grew to 2.5 GB, which is not good.)
You're right that debug should be more detailed than info. The reason it doesn't come out that way here is that the extra lines come from qp2p and sn_routing, not sn_node, so you'd want something like RUST_LOG=info,sn_node=debug to get info as the default for all modules, with sn_node at debug.
You can also do something like RUST_LOG=sn_node=debug,sn_routing=debug,qp2p=debug
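If it helps, here's a minimal sketch of that directive syntax (standard env_logger-style filters, which I believe sn_node honours; the crate names are the ones from this thread). The general shape is a comma-separated list of default-level and crate=level pairs:

```shell
# Default everything to info, but raise sn_node to debug:
export RUST_LOG='info,sn_node=debug'
# ...then start the node as usual.

# Or list several crates at debug explicitly
# (anything unlisted falls back to the default level, or stays silent if no default is given):
export RUST_LOG='sn_node=debug,sn_routing=debug,qp2p=debug'

echo "$RUST_LOG"
```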
It’s complex.
sn_api+sn_cli incorporates sn_launch_tool which then launches each node.
The -vv flag is an sn_api flag.
Its value is passed on to sn_launch_tool, which then uses it to set the env var and/or the sn_node verbosity flag.
You mainly want to look at this file: sn_launch_tool/lib.rs, especially DEFAULT_RUST_LOG, verbosity, nodes_verbosity, and rust_log.
I'm just gonna say it's complex. You need to understand a) sn_api, b) sn_cli, c) sn_launch_tool, d) sn_node, and e) the verbosity flags and RUST_LOG env var across all of those.
Yes, you can set something like RUST_LOG=info,quinn=off
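Concretely (a sketch using the same filter syntax; quinn is the crate emitting the drive; id= lines, as you can see from the path in that log line):

```shell
# Keep the default at info, so messages like "IGD request failed" still appear,
# but turn off everything from the quinn crate entirely:
export RUST_LOG='info,quinn=off'
# ...then start the node afterwards.

echo "$RUST_LOG"
```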
Hope this helps; I too struggle with specifying logging. There used to be a log.toml file too, which was kinda neat. Logging is very important, and I feel we can improve it, but we need to start with the use cases (what works well now and what doesn't / causes confusion) and develop from those rather than from some intellectual or theoretical ideas. Logging works as it is, but it can be confusing.
A while back I did some work on exactly this, see 68f64e09, but I still get confused despite that. So the situation is better than it was, but probably still not quite there yet.
Yep. But so far, to my knowledge, this is not punished (yet!), so there's no consequence, just the pure speed of (non-)verification. And also, to my knowledge, there are no invalid signatures being sent around in the first place (my own node also sends valid signatures); everyone just behaves themselves.
So I'm sort of trying to motivate this from two opposing perspectives: can it be exploited, and can it be fixed? Which will come first, the exploit or the fix?!