[Offline] Playground sn v0.53.0, sn_cli v0.44.0

Baby steps. Seems like there was a bug not to do with splits and they want to see if that has been fixed first.

6 Likes

It would be more fun for everyone, I know!

but what @happybeing said :+1:

8 Likes

Haha, I’m gonna bargain, how about 8GB?

If the next one goes roughly along the same lines as the last one, we should have at least 24 hours stable, and after that see how it goes with splits?

2 Likes

Has this been taken care of?

3 Likes

Nah @joshuef and @happybeing are quite right.

Think of it like developing a race-car.

Change one and only one thing at a time and run it till you find out how it breaks. Then back to the garage, change another thing, rinse and repeat… The podium awaits if you do it right.

8 Likes

Ok, 15 GB is my last word! :sweat_smile:

My original point was that the community would benefit from some entertainment.

7 Likes

We will get some entertainment whether it's 20GB or 3GB. And it's very possible that we could get both within a day or so of each other.

2 Likes

Ok, but if our goal is to see whether we can cat the uploaded files fully, then it would be cool to have a script that automatically tries to cat the files you have just put, because the manual copy-pasting in the terminal is extremely un-entertaining, even tedious, especially for a noob like me.

3 Likes

The bugs show up faster when nodes are small, and more nodes get to participate. I suggest we can learn more with small nodes. Consider the following growth pattern for the ongoing chain of testnets:

1GB, 5GB, 10GB, 50GB, 100GB, 500GB, 1000GB

For example, the next testnet could use 1GB nodes. After we learn what we need from it, a decision could be made whether to increase the node size to 5GB or stay with 1GB for the testnet after that.

2 Likes

I’m all for it. I like scripts. I may even try to put one together myself, if I feel like practicing.

I propose the command hen, but an independent script would do just as well.

Safe hen or hen.sh: (time) put $file, dog $hash, cat $hash.
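A minimal sketch of what hen.sh could look like, assuming the sn_cli binary is on the PATH as safe and that files put, dog and cat behave roughly as in recent releases. The grep for the returned safe:// URL is my guess at the put output format, so treat this as a starting point, not a working tool:

#!/usr/bin/env bash
# hen.sh - put a file, then immediately dog and cat it back, timing each step.
# Assumes sn_cli is installed as `safe`; the grep pattern below is a guess at
# how `safe files put` prints the safe:// URL and may need adjusting.
set -e
file="$1"
[ -z "$file" ] && { echo "usage: $0 <file>"; exit 1; }
echo "== put $file =="
put_output=$(time safe files put "$file")
echo "$put_output"
# Pull the first safe:// URL out of the put output (assumed output format).
url=$(echo "$put_output" | grep -o 'safe://[^" ]*' | head -n 1)
[ -z "$url" ] && { echo "no safe:// URL found in put output"; exit 1; }
echo "== dog $url =="
time safe dog "$url"
echo "== cat $url =="
time safe cat "$url" > /dev/null && echo "cat succeeded"

Then ./hen.sh somefile.bin would time the whole round trip and exit noisily if the cat never comes back.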

2 Likes

I’ve reduced those emails (no notifications, no problems :stuck_out_tongue_winking_eye: ). No, the limit was set super low, and the droplets have a global traffic limit rather than an mbps limit. So it wasn’t a real warning. They should not have been dropping packets for Digital Ocean reasons. (For the curious, warning msgs were triggered above 20mb/s over 5 mins; we routinely saw peaks of 45-75mb/s though.)

(They may have been dropping them for other reasons)

That’s actually the default behaviour atm. If you see the “could not verify” message, we’ll be asking you to reupload until it verifies, before posting it in the thread. (This will eventually be automated, but one thing after another.)

Depends what bug :wink:

For the next one, I want to know if chunks stick around for days. Last time, after one day it seemed like a chunk was lost. (It may not have been; the query response might have been dropped/blocked from getting back to the client due to a bug in routing it there.)

So that’s what I want to find out myself. Smaller nodes = other classes of bugs that might obscure that.

8/10GB might not be a bad bridge. We saw ~8GB on nodes last time after a few days. Obvs depends on how much the nodes get hammered though.

Before that, we need to see wth is going on w/ the release automation. A new smart-release release is appearing not so smart :thinking: So we can’t playground anything until we have those release bins out :expressionless:

12 Likes

Please, include this advice in the OP of the next Playground thread.

I might hammer harder now that I have learned that recursive put of more than 3GB is not a problem on my machine.

So, let’s say I have recursively put a folder of 200 files; is there a command to recursively get them? Was dog a command for that? Please include some directions on how to help reach your aim of checking data persistence.

My goal here is not to guide the testnets / playgrounds into finding out how the splits go, but to heighten the satisfaction of individual and community involvement. My own curiosity is towards the splitting behavior, but it feels good to work towards your goal as well now that I understand the reasoning behind it.

I am a bit worried about the community after participation in the Comnets was lower than I expected. Sooner or later we should have hundreds of individuals running a node or two from their homes, if we want to test the network in a truly decentralized and “natural” setting. If there are going to be something like 60-100 nodes per section, getting to even four of them in a decentralized manner is going to take a good few more active people than we have now.

2 Likes

I’m not worried myself, but if you think it’s worth it, how about reading through the Safe CLI docs and producing a short guide of things people can try to make it more interesting?

3 Likes

You think we have enough people to run hundreds of nodes from home?

Gave it a glance. It seems that the docs are way out of sync with the current state of development, and at least parts are too hard for me to tell whether they apply to the current design or not.

But I did find what I asked earlier:

safe files get <the name of the container>

is the way to get back all the files from a recursively put folder!
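For anyone wanting to repeat that round trip, a hedged example of the two commands together; the --recursive flag and the destination path argument are assumptions based on older sn_cli docs, so double-check with safe files put --help and safe files get --help first:

# Recursively upload a folder; the put output should include the FilesContainer XOR-URL.
safe files put ./my_folder/ --recursive
# Then pull the whole container back down into a local directory,
# using the safe:// URL printed by the put above (placeholder here).
safe files get safe://<files-container-xorurl> ./restored_copy/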

3 Likes

I think we will when we’re ready to do that.

You said yourself you are basing this on forum posts, which we know will be an underestimate of current activity. Things have gone quiet here over the last few years, but they can soon pick up again. I remember how much more interest there was when a working network was expected; it was hard to keep up with everything that was posted.

Once we have stable testnets and the peripheral code - Safe App, Browser, language APIs and bindings, demos, third party apps - begin to reappear, interest can grow again, and I’m sure it will.

4 Likes

I think so, if the testnet runs long enough. Many people read the forum only once every few days.

4 Likes

I think that could be quite soon, in a few weeks. Maybe that’s hopeful thinking, but thinking nonetheless :grinning:

Difficult to know the size of the underestimation though.

Sure, and let’s hope so. Maybe I’m just impatient. My personal threshold for inviting more people is the possibility of running a node from home.

That’s a fair point.

2 Likes

I’d wager that there are many, many casual observers watching who are simply too intimidated by the CLI

7 Likes

Just thinking…

If there is data loss, it would just as likely apply to the data the network itself has saved for its own needs? And that would make split trials a bit pointless before we have cleared one evident reason for possible failures?

Am I correct that the data the network itself creates is probably quite small in size compared to user-generated data? Would it then make sense to give it a much higher replica count, say something between 10 and 20? Would that make some operations speedier or more reliable?