[Offline] Fleming Testnet v6.1 Release - Node Support

IGD issues… Tried to connect 3 times… All good, best of luck with the latest testnet :+1: Also best of luck to @StephenC, who will be missed!

5 Likes

Also didn’t think that would be possible… bug! cc @lionel.faber!

5 Likes

I’ve updated the cmd for launching a node when using port forwarding in the OPs of both topics so that it now contains all the original droplet node IPs. This is because the genesis node, which was the only IP specified originally, seems to be hitting more errors than the other nodes - perhaps because it was being used by everyone who is port forwarding?

If you are port forwarding, consider killing your node and relaunching with all the IPs listed as hard-coded contacts:

$ $HOME/.safe/node/sn_node --public-addr <public ip:port> --local-addr <localnet ip:port> --hard-coded-contacts=[\"178.128.164.253:52181\",\"46.101.21.162:34546\",\"178.128.169.151:34948\",\"178.62.20.236:12000\",\"159.65.60.126:44800\",\"178.128.161.75:34431\",\"178.128.162.219:55531\",\"178.128.169.114:35708\",\"178.128.161.137:59232\",\"178.128.170.69:45043\",\"178.128.169.206:59730\",\"178.128.169.23:52762\",\"178.128.166.44:50687\",\"178.128.169.93:58040\",\"178.128.172.153:52125\",\"178.128.167.101:53725\",\"178.128.169.109:55504\",\"159.65.24.159:56465\",\"178.128.170.97:37173\",\"178.128.172.235:53286\",\"178.128.160.4:52049\",\"178.128.169.33:46106\",\"178.128.169.57:42902\",\"178.128.168.75:36398\",\"138.68.191.226:46181\",\"178.128.175.25:33290\",\"188.166.175.46:46503\",\"178.128.169.227:54173\",\"206.189.121.46:32982\",\"178.128.169.152:33365\",\"178.128.162.146:59775\",\"178.128.172.1:54636\",\"178.128.161.127:48264\",\"46.101.17.48:59649\"]

EDIT - updated the cmd above to add a \ before every " to escape it.
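
For anyone unsure how to do the kill-and-relaunch, a minimal sketch (assuming the node process is running under its default binary name, sn_node):

$ pkill sn_node

…then rerun the full launch command above, substituting your own public/local addresses.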

7 Likes

I see more "Encountered a timeout" results now and fewer "Retrying after 3 minutes" - same as with v6.
With v6 the "Retrying after 3 minutes" count dropped to zero the next day. Let’s see what happens with v6.1.

Any successful nodes out there? With or without fixed IP? What OS?

1 Like

Node PID: 10792, prefix: Prefix(1), name: c8384f..
Public, static IP, Windows 7.

2 Likes

My node is getting a huge number of "ChunkStorage: Immutable chunk already exists, not storing" messages.
So it keeps being asked to store the same chunk over and over.
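
If anyone wants to quantify the repeats, a grep along these lines should do it (the log path here is an assumption - adjust to wherever your node writes its logs):

$ grep -c "Immutable chunk already exists" $HOME/.safe/node/local-node/sn_node.log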

2 Likes

Mine has finally connected, despite the errors.
I had to let it retry every minute, though…
On a fixed IP (skipping IGD) and an RPi.

2 Likes

I got "Retrying after 3 minutes’’ on Mac with fixed IP, but now it is back to ERROR :frowning:

1 Like

Looks like a bug:
My node, c8384f, stores chunks c0... to df... - 249 files.
But I see this in the logs:

[tokio-runtime-worker] DEBUG 2021-06-25T00:21:47.158351+03:00 [src\node\event_mapping\node_msg.rs:26] Handling Node message received event with id fb5cec99..: NodeQuery { query: Chunks { query: Get(Public(9e82a8(10011110)..)), origin: EndUser { xorname: 121758(00010010).., socket_id: 121758(00010010).. } }, id: fb5cec99.. }
[sn_node] DEBUG 2021-06-25T00:21:47.158351+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: ReadChunk
[tokio-runtime-worker] INFO 2021-06-25T00:21:47.158351+03:00 [src\node\chunks\mod.rs:74] Checking used storage
[tokio-runtime-worker] INFO 2021-06-25T00:21:47.158351+03:00 [src\node\data_store\mod.rs:153] Used space: 115099548
[tokio-runtime-worker] INFO 2021-06-25T00:21:47.158351+03:00 [src\node\data_store\mod.rs:154] Total space: 50000000000
[tokio-runtime-worker] INFO 2021-06-25T00:21:47.158351+03:00 [src\node\data_store\mod.rs:155] Used space ratio: 0.00230199096
[sn_node] DEBUG 2021-06-25T00:21:47.158351+03:00 [src\node\node_api\handle.rs:51] Handling NodeDuty: Send [ msg: OutgoingMsg { msg: Node(NodeQueryResponse { response: Data(GetChunk(Err(DataNotFound(Chunk(Public(9e82a8(10011110)..)))))), id: 2144af70.., correlation_id: fb5cec99.. }), dst: Section(9e82a8(10011110)..), section_source: false, aggregation: None } ] 

Why should anyone ask my node for chunk 9e82a8?
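
A quick sanity check on the name itself, using the first hex byte (plain shell sketch; bc does the base conversion):

$ echo 'obase=2; ibase=16; 9E' | bc
10011110

So 9e82a8 does start with a 1 bit and therefore falls under Prefix(1), matching the (10011110) shown in the log - it belongs to my node's section, just not to the c0...df range the node actually holds.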

1 Like

By the way, the ReplicateChunk storm is still going:
it has been using 2-3 MB/s of incoming traffic while writing 0 files for ~half an hour.

5 Likes

Nice one, this could be the source of a bug we have missed. Replication was redone quite quickly last testnet, so @yogesh and @lionel.faber will check this out. It could be the last of the missing-data issues if we are not relocating chunks properly. We have not seen this in house, even with churning-network tests. So, great catch!

8 Likes

Well, that was fun while it lasted - @dirvine said he did not expect this one to be up for long. I hope plenty was learned from this iteration and fresh bugs were uncovered.

EDIT: we were posting at the same time - looks like valuable clues were gathered.

2 Likes

I want to clarify what happened.
At 0:12 local time my node joined the network.
It instantly started filling with chunks.
At 0:19 the node had 100 chunks.
At 0:21 - 200 chunks.
Then two things happened: file creation slowed down, but the replicate commands did not.
At 0:27 the node had 249 chunks.
At 0:52 the node still had 249 chunks, with replicate commands still arriving.
At 0:54 one additional chunk arrived: 250 in total. The replicate commands were slowing down.
At 1:01 node activity stopped - but not completely.
During the night the node stored 12 additional chunks, and now, at 6:57, it has 262 of them.
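
For reference, these counts are just file counts in the node's chunk store, along the lines of the following (the directory placeholder is an assumption - point it at wherever your node keeps its chunks):

$ find <node chunk dir> -type f | wc -l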

8 Likes

@StephenC @dirvine

Is the network still up? I was late to the party, and there seems to be a suggestion that it is not up anymore, but no direct statement.

1 Like

The network is looking like a zombie now.
Some activity is happening, but it is mostly broken.

For example, someone sent me chunk 86e107 4 minutes ago and then checked whether my node had actually stored it 2 minutes ago.
(Why 86e107, by the way? The other chunks are in the c0...df range.)

4 Likes

Yes, it is in zombie mode, as @Vort says, so we can mark this as now effectively offline. There is an issue with data replication for sure, so we need to crush that.

8 Likes

@neo I’ve just tried creating keys, uploading a small text file to the network and then cat-ing it, and it worked for me, so I’d say the network can still be used for very light testing at least.
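
For anyone wanting to repeat that smoke test, roughly this sequence (exact flags may differ between CLI versions - treat it as a sketch):

$ safe keys create --test-coins
$ echo "hello testnet" > hello.txt
$ safe files put hello.txt
$ safe cat <safe:// URL printed by the put>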

5 Likes

I missed most of the action last night, so I will try to see if it is sufficiently alive that I can play with @Traktion's IMIM blog app.

3 Likes

WSL 2, Ubuntu:

safe node install doesn’t work when using 1.1.1.1 DNS with WARP enabled; plain 1.1.1.1 DNS works.
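
If anyone else hits this: WSL 2 auto-generates its resolver config, so it's worth checking which DNS server the distro is actually using before and after toggling WARP (standard WSL behaviour, nothing testnet-specific):

$ cat /etc/resolv.conf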