User-run network based on test 12b binaries

how large is your routing table?

I believe you, but for some reason others and I cannot join.

I must check the log…

The last information about the routing table is:

hmmm - okay - so 9 survivors left :slight_smile:

I wonder what made the others die … but MaidSafe will know that and hopefully have it fixed by the next testnet =)

And now the Vault is quite hungry…

1 Like

Retreated while the money was pouring in, having stored up to 10 GB for other people in the community, with the logging going berserk.

With the help of the nearest participants in Test 12b (nearest in the network, not geographically):

5 and 6 residing with Amazon.

1 Like

Maybe it is already resolved, but I suspect that one or more of the IPs in the config file was/is still an active node of the previous network.
So if you get a response from that IP, it seems logical to me that you would get a ‘network name’ mismatch.
Next time, when moving to a new network, it might be better to use another port number, e.g. 5484.

These are the IPs that were active on port 5483 the last time I checked:
108.61.165.170 (@nice)
52.59.206.28 (my node, still trying)
Then the 3 IPs which might still be connected to the old network:
52.65.136.52
78.46.181.243
185.16.37.149
-> If you still have a mismatch error, find the correct one(s) to remove…
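For anyone editing this by hand: the bootstrap contacts should live in the Crust config file that sits next to the binary (named safe_vault.crust.config in other releases - check the exact name in your install). A rough sketch of the relevant part is below; the hard_coded_contacts field name and the "ip:port" entry format are from memory and may differ slightly in the test 12b binaries, so treat this as a guide rather than something to copy verbatim. Removing the stale entries and restarting the vault is what avoids the network-name mismatch.

```json
{
  "hard_coded_contacts": [
    "108.61.165.170:5483",
    "52.59.206.28:5483"
  ]
}
```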

4 Likes

You are a genius, @draw. Just removed 78.46.181.243:5483 and I’m up and running. :clap:

Not much company yet though: Routing Table size: 1

My node ran out of disk space and died (used 11GB on chunks)
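If you want to keep an eye on this before the disk fills up, something like the following works; the chunk store path below is only a placeholder, since its actual location depends on your platform and how you launched the vault.

```bash
# Overall free space on the volume the vault writes to
df -h .

# Size of the vault's chunk store (replace the path with wherever
# your vault actually keeps its chunks - location varies by setup)
du -sh /path/to/safe_vault/chunk_store
```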

2 Likes

Mine too, I filled up the 10 GB limit at my provider.

2 Likes

A configurable maximum chunk amount would be useful then :-\ …

1 Like

Well, I see where this data goes… Almost 12 GB, and rising…

1 Like

There is one, if you edit safe_vault.vault.config and change the max_capacity value. The default is 21474836480, which is 20GiB (20 * 1024 * 1024 * 1024).
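A minimal sketch of the relevant entry is below, assuming the file is plain JSON as in other releases; only max_capacity is shown, other fields are omitted. The value is in bytes, so a 10 GiB cap, for example, would be 10 * 1024 * 1024 * 1024 = 10737418240.

```json
{
  "max_capacity": 10737418240
}
```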

4 Likes

Oh, then I just didn’t realize - ok! Very cool!

My vault died overnight.

My ssh showed “Killed” on the last line

I went back through the VM metrics that AWS provides and found that I had exhausted my CPU credit balance due to high CPU usage. I guess there must be some conditions in the vault that cause high CPU usage. @Southside, I suspect this is what your problem was too.

Maybe due to excessive churn or similar. So the free tier may only be useful when the network is large enough not to have excessive amounts of churn. Or there is a bug :slight_smile:

I’ve restarted my node; my CPU credits have risen since the vault was automatically killed, according to the CPU usage monitor. Hopefully this doesn’t happen often. It was the first time in over 3 days.
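Two quick checks can help here, assuming a t2 free-tier instance with the AWS CLI configured (the instance ID and time range below are placeholders): the kernel log shows whether the process was killed by the OOM killer rather than just throttled, and CloudWatch exposes the CPUCreditBalance metric so you can see when the credits ran out. This is only a diagnostic sketch, not a claim about what actually killed the vault in this case.

```bash
# Was safe_vault killed by the kernel OOM killer? (a common cause of a
# bare "Killed" in the terminal - not necessarily what happened here)
dmesg | grep -i -E "out of memory|killed process"

# CPU credit balance over the last day for a t2 instance
# (instance ID and dates are placeholders - substitute your own)
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average \
  --period 3600 \
  --start-time 2017-03-05T00:00:00Z \
  --end-time 2017-03-06T00:00:00Z
```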

3 Likes

Yeah, the “free tier” is for low-to-moderate CPU usage.
I have been AFK for a few hours.
I have killed my instances and will relaunch them with new config files.

There seems to be a slow node among the 7 nodes that my resource proof is using: I get to 85% in no time, but the last 15% takes over 300 seconds.

So I have not been able to rejoin the network yet

All indications were that the vault rarely rose above 5% CPU usage. But at some stage in the last 24 hours it jumped quite a bit higher for a lengthy period of time.

It also seems that I have been removed as a seed node from too many people’s config files, and with one slow seed node it is seemingly impossible to rejoin.

There is one, if you edit safe_vault.vault.config and change the max_capacity value. The default is 21474836480, which is 20GiB (20 * 1024 * 1024 * 1024).

Thank you - that’s handy to know. I think on AWS the max free-tier storage is 8 GB, so all of us free-loaders would be advised to change that value.
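For example, leaving a couple of GB for the OS, the binaries, and the logs on an 8 GB free-tier volume, a cap of around 5 GiB is safer: 5 * 1024 * 1024 * 1024 = 5368709120. Under the same assumption as above that the file is JSON, the edited entry would look roughly like this:

```json
{
  "max_capacity": 5368709120
}
```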

1 Like

This one is now offline: 86.164.140.138