Testnet tool

right up yer kilt, that's where ;->

3 Likes

I was planning on joining @Josh’s network.
Let's see how I get on with other work today and I'll maybe try the testnet script myself late afternoon.

4 Likes

If that’s which way the South wind blows :wink:

4 Likes

Bad news, it’s a chilly nor’easter

4 Likes

I’ll post some connection info in a separate thread in a little bit for those who want to join.

7 Likes

If you can, include a step-by-step how-to, or at least a list of what's needed (such as the CLI version, etc.).

Great work - I’ll try to find time to run a node and upload some stuff.

8 Likes

Am I changing that in provider.tf (the 1024mb default)? If so, what do you suggest I set it to?

edit: @joshuef
As for AWS, which I have never used…

```
export AWS_ACCESS_KEY_ID=AKIAQxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=Fer9ThbXxxxxxxx
```

I also updated the s3 backend with my bucket.

But Terraform complains about AWS not being found.
Is there anywhere else that I need to make changes?

2 Likes

Yeah, that's simplest. If it's set to 500mb or so you should still be able to upload a bunch of files without hitting the storage limit (assuming folks w/ larger nodes are joining).
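For reference, if the capacity setting is expressed in bytes, 500mb works out like this (the exact variable or flag name isn't shown in this thread, so check provider.tf or `sn_node --help` for what it's actually called):

```shell
# 500 MB expressed in bytes, for whatever capacity setting the
# config exposes (the setting's name is an assumption here).
CAPACITY=$((500 * 1024 * 1024))
echo "$CAPACITY"   # prints 524288000
```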

The issue comes w/ sled, as it's not been tweaked or tested for optimum write time (it holds a lot of possibly duplicated state, which can increase storage counts).

So until folk join, if anyone is firing up large files, capacity may get maxed out…

That of course is another way to let folk join though… just use up space, and they'll be allowed in naturally.


edit
@Josh you’ll need the aws region too: export AWS_DEFAULT_REGION=eu-west-2
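Putting the pieces from this thread together, the minimal AWS environment Terraform needs looks like this (key values are placeholders as in the posts above; the region is whichever one you deploy to):

```shell
# All three variables the Terraform AWS provider reads from the
# environment. The key values are redacted placeholders, as above.
export AWS_ACCESS_KEY_ID=AKIAQxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=Fer9ThbXxxxxxxx
export AWS_DEFAULT_REGION=eu-west-2
```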

3 Likes

I did, sorry, I just did not include that up above. I'll need to play with it again when the network comes down so that I have droplets to use (seems soon), and perhaps we can get to the bottom of it.

Just tried again and it is always the same error:

```
josh@pc1:~/sn_testnet_tool$ ./up.sh ~/.ssh/id_rsa 11 "~/safe_network/target/release/sn_node" "" "-auto-approve"
aws could not be found and is required
```

But there was a terraform.state file in my bucket.

My way around it is to remove aws as a dependency in up.sh.
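For anyone hitting the same error: the message reads like a standard `command -v` dependency check, something along these lines (a sketch only, the real up.sh may differ), so installing the AWS CLI so the check passes is probably safer than removing it:

```shell
# Sketch of the kind of dependency check the error message suggests
# up.sh performs (the actual script may differ).
require() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "$1 could not be found and is required" >&2
    return 1
  fi
}

require aws || echo "install the AWS CLI via your package manager"
```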

1 Like

@chriso as mentioned in the other thread, I have built 0.32 with --features always-joinable. Using the local binary with up.sh results in a bunch of nodes with no logs/config.
Where am I going wrong?

Hey, so you’ve built an sn_node binary locally and you want to use that one on the nodes that get spun up?

Where are you specifying those arguments you’re talking about there?

cargo build --release --features always-joinable :grimacing:

Right ok, so that’s going to build you a node binary on your local machine.

Are you then passing the path of that binary to the up.sh script?

Yes
It all goes well using a standard 0.32 binary, but not with always-joinable.

Hmm, is the node actually starting at all? Can you SSH into the machine and see if it’s running?

Yes, there are no logs and no config in any of the nodes

So a node process is running? It’s just not outputting any logs?

Not sure how to tell other than there are no logs and I assume with no config that they could not communicate?

How important is making it always joinable do you think?

With a standard sn_node doing what it should, I could just set max capacity way down and go that route instead?

Run pgrep sn_node to see if the node is actually running at all.
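Adding -a to pgrep also prints the full command line, which makes it obvious whether the binary was started with the arguments you expect:

```shell
# pgrep -a lists PID plus the full command line of each match;
# no output means no sn_node process is running on the machine.
pgrep -a sn_node || echo "no sn_node process found"
```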

To be honest I’m not sure about the --always-joinable thing. Would need Josh to chime in on exactly what that does.

4 Likes

Ok, just ran out. I'll get back to you.

Edit: @chriso the reason behind trying to build always-joinable is that I plan to host more nodes to start the network, so I did not want to hinder participation.

Probably makes more sense to simply ask if that is worthwhile.

With the previous network I used 12 2vcpu-2gb droplets; the plan is to use 20 4vcpu-8gb for the next.

Do you think that matters… is it beneficial in any way?

Edit 2: pgrep sn_node returns nada

1 Like