User-run network based on test 12b binaries

I changed it to null, and I'm still not able to connect. :x

my config looks like this and i have no problems getting online:

{
  "hard_coded_contacts": [
    "35.167.139.205:5483",
    "52.65.136.52:5483",
    "138.197.235.128:5483",
    "73.255.195.141:5483",
    "108.61.165.170:5483",
    "185.16.37.149:5483",
    "78.46.181.243:5483",
    "31.151.192.2:5483",
    "52.59.206.28:5483"
  ],
  "bootstrap_whitelisted_ips": [],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "network_name": "user_network_12_b"
}
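
if the launcher can't connect, it's worth checking that the file actually parses as JSON first; copy-pasting from the forum tends to turn straight quotes into typographic ones, which breaks parsing. a rough Python sketch (the file name here is a guess, point it at whatever crust config sits next to your binary):

import json

# file name is an assumption; use your actual crust config path
with open("safe_launcher.crust.config") as f:
    try:
        cfg = json.load(f)
    except ValueError as e:
        raise SystemExit("config is not valid JSON: %s" % e)

print("network_name:", cfg.get("network_name"))
print("hard_coded_contacts:", len(cfg.get("hard_coded_contacts", [])))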

Do we have to use the safe_node_api?

Still not able to connect.

Edited: Oh I need to port forward. Gonna test it again.

for the launcher there shouldn’t be any need for port-forwarding

i don’t even have the password for my router atm … so there are no special settings active on my side …

what i did: download the launcher 10.1 → unpack it → edit the config → double-click on the safe-launcher
at least with ubuntu 16.04 that should be enough to get connected …

Even with the port forward open, I still can’t connect…

Yeah. I am using launcher 10.1

the network seems quite a bit slower on my side since about 30 minutes ago, not sure if it relates to your difficulties getting in

maidsafe will have had their reasons for turning off 12b xD

maybe we re-animated a mortiturus (is that right? i was reeeeeally bad at latin) :confused:

Given the relatively small number of vaults handling the traffic on this network, I find it interesting to see the load it places on the computer’s resources. I have two vaults running with about 6.5 GB of data being served on each. RAM usage on each is a mere 80 MB, but the CPUs (quad-core i5s) are running at ~25% for safe_vault.exe.

I’m curious to know if the CPU load is proportional to the amount of data being held in the vault, the amount of traffic relative to the routing table size, and/or some other factors. Does anyone have any thoughts or feedback on this? Is anyone seeing similar results vis-à-vis CPU load?
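
If anyone wants to gather comparable numbers, here is a rough sketch for sampling the vault’s CPU and RAM over time (assuming Python with the psutil package installed; the "safe_vault" process name may need adjusting per platform):

import time
import psutil

# sample every safe_vault process once a minute, so readings can be
# lined up against stored-data growth and routing table size
while True:
    for proc in psutil.process_iter(["name"]):
        name = proc.info["name"] or ""
        if "safe_vault" in name:
            cpu = proc.cpu_percent(interval=1.0)          # % of one core
            rss = proc.memory_info().rss / (1024 * 1024)  # MB
            print("pid=%d cpu=%.1f%% ram=%.1f MB" % (proc.pid, cpu, rss))
    time.sleep(60)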

How is it possible you guys have so many GB of data in your vaults after 24-48h?
Some tests ago I only had 400-700 MB on each of the 5 vaults I was running for 2 weeks.

the high cpu load could come from data relocation because of other nodes dying lately

as shown by @tfa, during “normal operation” the cpu isn’t too busy, but it gets very busy when filling up

at least on my vault the routing table is shrinking right now and the stored data has just risen from ~5GB to ~6.7GB

someone seems to have uploaded a lot of data ^^

and since there are only a few vaults running atm, every node has to store way more data than in the tests i participated in before … (yes, i had way less data in the last testnets as well)

Still not able to connect with safe_launcher. I give up. Le sigh…

Any idea how much upstream bandwidth (in Gbit/s) the free AWS instances have?

routing table just dropped by 2 - managed clients increased by 60% - aaaaand i accidentally killed my vault - sorry - so the other tables now dropped by 3 ^^

no clue - i don’t use aws

The network is under quite some stress. My Vault (home) is constantly doing 600 KB/s down and around 800 KB/s up at the moment. I have over 6 GB of Chunks in my Vault. Remember that each chunk is stored 8 times on the network, so filling up 500 PUTs means 4 GB of data for the network. If 1 node goes offline, we see 6 GB of chunks being relocated to someone else. I guess that’s what we see now.
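
To spell the arithmetic out (a sketch assuming 1 MB chunks, which is where the 500 PUTs ≈ 4 GB figure comes from):

REPLICATION = 8   # each chunk is stored 8 times on the network
CHUNK_MB = 1      # assumed chunk size in MB

puts = 500
network_gb = puts * CHUNK_MB * REPLICATION / 1024.0
print("500 PUTs -> %.1f GB network-wide" % network_gb)  # ~3.9 GB

# a node holding 6 GB of chunks going offline means the network has to
# re-replicate all 6 GB elsewhere to restore the 8 copies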

AWS pricing and free-tier terms are quite obscure to read! I couldn’t find anything clear about bandwidth.

Assume $0.01 per GB of data for your public IP on AWS. But you get the first GB for free.
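
Rough back-of-the-envelope with those numbers (a sketch only; AWS transfer pricing varies by region and changes over time):

PRICE_PER_GB = 0.01   # USD, the figure quoted above
FREE_GB = 1.0         # first GB free

def monthly_cost(gb):
    # cost after the free allowance, never negative
    return max(0.0, gb - FREE_GB) * PRICE_PER_GB

# a vault uploading 800 KB/s around the clock moves ~2 TB per month
gb_per_month = 800e3 * 3600 * 24 * 30 / 1e9
print("%.0f GB/month -> $%.2f" % (gb_per_month, monthly_cost(gb_per_month)))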

within the last 7 minutes at least 3 nodes went offline → 3 * 6GB = 18GB of data is being relocated right now :smiley: :smiley: :smiley:

Just checking, did you change your safe_vault.crust.config file as @polpolrene suggests here?