SAFE Network - Test 16


hmm… I had the thought to take down the one large file I’d put up, to save it being fetched too many times too easily, but now I can’t authorize demo_app because of a ‘No such data’ error.

Would one option be to lock down unauthorized GETs until the other elements are in place?


I would hope we do not need to go that far, but yes this would be another option.


Mods please move if this is off-topic

What is the shortcut to open the dev tools in Beaker?
On safe://nostrils.scotcoin I was missing a background image.
When I checked locally I saw I had a typo in the CSS file, but it would be nice to be able to check “live”.

Another Beaker question

How do I get the Favourites to persist between sessions?


I’ve been thinking about this for the last few days: with AWS, droplets, and the like, a determined attacker with a few dollars to spare can very easily disrupt the network. What’s the quick fix?


Ban AWS, Vultr, and Digital Ocean IPs…

You didn’t say you wanted a quick, clean fix…


There may not be a quick fix that lets folk run vaults from home right now. We need to give it some thought; there is vault tunnel spam on the network now, so this attacker really is trying hard to spoil things while we iterate the tests. We could run with known IPs and whitelists to ban unknown vaults, etc., but that’s a bit backwards really and does not include the community in building this. We could offer a simple set-up-your-own-network option, but again it’s not a good solution.
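To make the trade-off concrete, here is a minimal sketch of what such a whitelist gate could look like. This is purely illustrative: `ALLOWED_VAULTS` and `accept_connection` are made-up names, not part of the actual vault code.

```python
from ipaddress import ip_address, ip_network

# Hypothetical whitelist of known community vault addresses/ranges.
ALLOWED_VAULTS = [
    ip_network("203.0.113.0/24"),   # e.g. a known hosting range
    ip_network("198.51.100.7/32"),  # e.g. a single trusted vault
]

def accept_connection(peer_ip: str) -> bool:
    """Accept a joining vault only if its IP is on the whitelist."""
    addr = ip_address(peer_ip)
    return any(addr in net for net in ALLOWED_VAULTS)
```

The downside mentioned above is visible right in the sketch: any vault not pre-registered is rejected, so newcomers from the community are shut out along with the attacker.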

Let’s see what we can do, though, before taking draconian measures like that.


Wouldn’t it be a simpler solution to limit Vaults to invitation-only for the time being?


Yes, that is pretty much the same as a whitelist, but it’s not a good solution. Node age and a few smaller bits would prevent this current nonsense. It’s not hurting us, but it is hurting the community, although it does mean a few late nights for us to speed through some parts that should have been post-Alpha 2. So we will see :wink:


Would a larger network help mitigate these attacks? If the upload threshold was lowered a little and the 1 Vault Per LAN restriction was eliminated I suspect the network size would grow significantly. Would that make a difference?


I’m sure I’ll be corrected if I’m wrong here, but I thought the one-vault-per-LAN restriction was to do with NAT hole-punching - or a meringue?


I was under the impression that it was due to people starting a ton of instances and inadvertently disrupting the network by causing huge churn. Back then resource proof also did not exist, so slow connections running multiple vaults was clearly not helpful.
That was my understanding.
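For illustration, resource proof is roughly: make a joining node demonstrate a minimum transfer rate before the group accepts it, so a slow connection can’t cheaply flood the network with vaults. A toy sketch of that idea (the names, threshold, and payload size are hypothetical, not the real routing implementation):

```python
import time

MIN_BYTES_PER_SEC = 1_000_000  # hypothetical minimum acceptable upload rate

def passes_resource_proof(send_bytes, payload_size=8_000_000) -> bool:
    """Time how fast the candidate can push `payload_size` bytes.

    `send_bytes(n)` is whatever callable actually transmits n bytes to the
    proving group; we only measure the achieved rate here.
    """
    start = time.monotonic()
    send_bytes(payload_size)
    elapsed = time.monotonic() - start
    rate = payload_size / max(elapsed, 1e-9)  # guard against zero elapsed
    return rate >= MIN_BYTES_PER_SEC
```

Under this sketch, one fast machine pretending to be many vaults still has to pay the bandwidth cost for each identity, which is the whole point of the check.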


Your explanation makes more sense than mine.


Isn’t something like this a solution, were it configured to update itself and just keep joining test networks?

If it isn’t, tell me what does solve this problem with users running nodes at home, and I will try to put together a solution. SAFE was such an inspiration that I can truly say there’s no chance I’d be doing what I do today if it weren’t for stumbling on this forum. This isn’t a profit-play: I’ll facilitate this however gets it done, even if that means simply making introductions.


This is somewhat flattering, right? The SAFE Network is a growing threat to all existing power structures and similar tech; not that it has to be seen that way, but that is how it will look to those grasping for money and power. I hate to hear that the team will have to spend time on workarounds :expressionless: but I trust and respect the team’s decision to have as many folks engaged in testing as possible. I would love to see Mutable Data delivered as soon as possible, but obviously now there are some extra hurdles, so I just have to uncontrollably blurt this out…

Bring on the Node Aging!!! :smile:


I would say one doesn’t need to exclude the other: we could have both an invitation-only network up to serve those wishing to focus on building apps, and an open network used to tweak the settings needed to resist attacks on the network.


Could running the two separate network instances concurrently aid development somehow? Perhaps WRT debugging?


I think this taps into something I’ve been feeling more and more strongly about testnets: their purpose should be clearly defined.

‘Arbitrarily open testing’ is a useful purpose, but having this ‘need’ for arbitrary testing escalate into developing temporary workarounds seems a bit backwards to me.

Perhaps more clearly defining the purpose for tests would help reduce the need for temporary fixes to the network (this is not intended to force users into a specific test mode, but is intended to reduce the need for arbitrary testing on public testnets).


Well, I would also prefer having node aging in place earlier, rather than adding more temporary hacks which have to be removed afterwards, even if this means waiting X (insert random number) more weeks until the next public test.

However, I’m not 100% sure a constantly-up public net is even needed at the moment. Personally I would focus on 1) making it easier to spawn your own private test nets (e.g. allow multiple vaults per host) and 2) including Mutable Data so we can develop against our private networks using this new data paradigm.

I mean, what’s the point in investing resources into building a semi-stable public net which still uses an outdated auth flow (launcher) and outdated data structures?


As the previous posters have already said: drop all the temporary workarounds. Just bite the bullet, suspend the tests, and implement node aging, data chains, and whatever else you need to have this thing finally working.

Every half-assed solution like OAuth invitations just ends up hurting the project in numerous ways.

Perhaps revert to running a MaidSafe-hosted Alpha 2 for external devs until you can get the system working in the wild.


The actual tests are extremely useful - for example, for detecting hidden bugs or possible attacks - and should continue.

Another thing: as was done before, there could be a developer network controlled exclusively by MaidSafe.