If yes, then could/should this be a community effort? Or is this better left to the devs? As the tokens are essentially worthless, I am assuming that there would be no “governance” issues over their distribution. Is that a fair assumption? @JimCollinson is the expert here I guess.
We are told that a minimum viable testnet will need ~2k nodes. The new nodes are lightweight https://safenetforum.org/t/preparations-for-joshnet-testing/38185/8?u=southside and through some NAT wizardry, we should be able to run multiple nodes on one machine. Should we be looking at preparing Docker/Podman images for all architectures and getting ready to support n00bs with Docker issues? Using Docker would open it up to Windows users rather than restricting things because most of us old hands are Linux-centred.
I’m going to be looking into helping get people up and running with testing, including the use of DBCs/Wallets.
I’d like to clarify a couple of things here:
For running local nodes, why do you think we need to do something special with NAT? David’s post here does not imply that to me. We should be able to just run X number of nodes locally with testnet. Can you clarify here what you mean regarding NAT?
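On the "run X number of nodes locally" point, a minimal sketch of what that looks like in practice: one data directory and one port per node process. Note the `--root-dir` and `--port` flag names here are illustrative assumptions, not confirmed options — check `safenode --help` for the real ones.

```python
# Sketch: build the command lines for N local node processes, each with
# its own data directory and port. Flag names are assumptions -- check
# `safenode --help` for the actual options.
from pathlib import Path


def local_node_cmds(n, base=None, first_port=12001):
    """Return a list of command lines, one per local node."""
    base = base or Path.home() / ".safe" / "node"
    cmds = []
    for i in range(1, n + 1):
        root = base / f"local-{i}"
        cmds.append([
            "safenode",
            "--root-dir", str(root),
            "--port", str(first_port + i - 1),
        ])
    return cmds

# Each entry could then be handed to subprocess.Popen() to launch.
```

Nothing NAT-related is needed for this: the processes just bind different local ports on one host.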
Secondly, did someone tell you the node or client doesn’t run on Windows? As far as I’m aware, there is nothing that would prevent that in the codebase just now.
I mean having multiple nodes from home - on one or more devices - connecting through one ADSL router/phone line.
Sure, we can run as many as the hardware will handle locally - that's not the issue.
No, I'm just aware that most of us who are in a position to offer help to new users are Linux-centric, to put it mildly.
Running Docker images would hugely simplify user support: the only Windows-specific problems we would need to address are installing and running Docker on Windows, and there are plenty of reliable resources elsewhere we can point folk at if that is their problem. We could confidently support a much wider range of folks than before.
OK. If you wanted to look into something like that you could do so yourself, but I would probably suggest just waiting until the work Benno is doing regarding non-local networks makes it in.
Oh btw, I think multicast DNS from libp2p will work for LANs.
To be honest, I don’t think I agree about using Docker on Windows. I’m not seeing why it would really be any more difficult to run on Windows than it is on Linux, and I would like to see test coverage on the platform.
I'm moving forward on the premise that we can have lots of lightweight nodes, and to me that screams containers. I say "Docker" where perhaps I should say "containers".
I'm just poking at it right now and sharing my experiences with anyone that's interested. Yes, it's fast-changing, incomplete code and will be for a couple of weeks at least, so much of my effort will be wasted. But hopefully I'll know it inside out by the time we are finished and be in a position to help others.
Deep joy. Once I have Docker working on the SBCs, then of course I need to get them to talk reliably to each other and then to the wider network. Sure, I can have 3 or 4 Pis on my LAN at home, but I need to get a bit cleverer than that. I have a LOT to learn about Docker networking.
Right, well, it certainly wouldn’t hurt to have a container-based distribution, if it’s something you’ve been working on, but I don’t think it’s necessary as a solution to running on Windows.
What containers are really great for is when you have an application and have a bunch of stuff you need to distribute with it. So for example, if it’s written in Java, you can distribute the Java runtime in the container, and that’s a big benefit. There can be other types of scenarios: when you want to deploy on Kubernetes, or get monitoring information from the container, but I don’t think we’re near that just now. Since our code is written in Rust, it has almost no dependencies, so there is not a huge difference in running many node processes inside the container versus just running them on the host.
Part of my thinking was that there are many people who have dabbled with Docker on Windows, Linux and Mac. If we have standard images to distribute - i.e. a known release version plus whatever monitoring is deemed necessary - then we know everyone is using basically the same setup, and supporting it becomes much easier.
And instead of saying "download this from GitHub and add these monitoring components", we just say "grab the image that suits your OS and architecture", and it's all been tested beforehand.
TBH I hadn't taken the advantage that Rust gives us into consideration anyway, but I still think the containerisation is worth the effort. I could be wrong…
The thing is, we don’t need any users to be running anything for monitoring just now. This is something we might do on our own node deployments, but I don’t think we would ask community members to participate with that, and definitely not at this stage.
OK - so any container is basically just going to need the safe and safenode binaries for the foreseeable then?
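If it really is just the two binaries, the image can be tiny. A minimal sketch, assuming the binaries are built or downloaded beforehand (the COPY source paths are placeholders, not a confirmed layout):

```dockerfile
# Sketch only: an image that ships just the `safe` and `safenode`
# binaries. Build them first (e.g. `cargo build --release`) or fetch
# them from a release; the source paths below are placeholders.
FROM debian:bookworm-slim
COPY safe safenode /usr/local/bin/
RUN chmod +x /usr/local/bin/safe /usr/local/bin/safenode
ENTRYPOINT ["safenode"]
```

Since Rust binaries are mostly statically linked, the base image contributes little beyond a libc and CA certificates.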
In that case I can see your reticence about containers. As I understood it from the updates, we were going to need some monitoring on all nodes for testing, possibly right up until release.
Nah, there will not be any requirements for users to do anything with monitoring.
If you were running the node in a container, it wouldn’t really be very different from just running it on the host. For the client, since that’s a short running process, that would be even more complicated, because you have to pass the varying arguments inside the container, and it can get pretty convoluted to do that.
If you are curious about how Docker works and you want to learn that, there's no harm in using containers if you want to, but I don't think it's something we should recommend the wider community get into as we ramp up testing and try to get more folk involved.
One interesting thing you could try WRT containers would be getting some nodes running on Kubernetes. There may even be some advantages to that in a home setup, if you wanted to ensure you always had some node processes running and such things. That would be cool to see someone get that working, but it would be a side project.
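For anyone tempted by that side project, the "always have some node processes running" part is what a Kubernetes Deployment gives you for free. A rough sketch, with a placeholder image name and with per-node storage and port handling deliberately omitted:

```yaml
# Sketch: keep 3 node replicas running. The image name is a
# placeholder; a real setup would also need per-node persistent
# storage and port configuration, which this omits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: safenode
spec:
  replicas: 3
  selector:
    matchLabels:
      app: safenode
  template:
    metadata:
      labels:
        app: safenode
    spec:
      containers:
      - name: safenode
        image: example/safenode:latest  # placeholder image
```

Kubernetes restarts any replica that dies, which is the "ensure some processes are always running" property mentioned above.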
Indeed - though I have always thought that the AC power and router/phone line were the most likely points of failure in any home setup.
It would be instructive at some point further down the line to do throughput tests on say 10 nodes running natively vs 10 containers on similar boxes.
What do you think about a Joshtoshi faucet? Should I bother?
My thinking is a very simple web page with a form to fill in forum username, wallet URL and public key (if owned DBCs are still to be a thing). The page would collect records and store them for batch output later, a couple of times a day: 5 Joshtoshi to each wallet. You would need to trust me or someone else to hold a stash of Joshtoshi, though.
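The batch side of that idea can be sketched in a few lines. This is only an illustration of the collect-then-pay-out flow described above: the `Request` record, `build_batch` helper and dedupe-per-username rule are my assumptions, not an existing implementation; the 5-token amount is from the post.

```python
# Sketch of the batch faucet: collect (forum username, wallet URL,
# public key) records from the web form, keep one request per
# username, and emit a payout batch of 5 Joshtoshi each.
from dataclasses import dataclass

PAYOUT = 5  # Joshtoshi per wallet, per the proposal


@dataclass
class Request:
    username: str
    wallet_url: str
    public_key: str


def build_batch(requests):
    """Return one (wallet_url, public_key, amount) payout per unique
    forum username; the first request from each user wins."""
    seen = {}
    for r in requests:
        seen.setdefault(r.username, r)
    return [(r.wallet_url, r.public_key, PAYOUT) for r in seen.values()]
```

The batch list would then be paid out manually (or by script) from the trusted Joshtoshi stash a couple of times a day.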
It could be. I actually don’t know enough myself yet about the DBC implementation. I’m gonna get familiar with it very soon and I’ll let you know. I do know that we don’t have bearer DBCs any more, but every DBC still has a public address associated with it, so an owner should still apply.