"community1" Test Network is Alive, Join Us!

THE PORT HAS BEEN CHANGED TO DEAL WITH A BUG. SEE THE END OF THE THREAD FOR DETAILS. PM ME FOR THE NEW CONFIG.

This topic is a reboot of a previous one that was buried under outdated information about an earlier test network.

I have started a new community network, called “community1”, using the named-network and “first” features that Maidsafe have provided for this purpose in the most recent versions of safe_vault. It is for ongoing testing and experimentation until the next official test, and not a rogue network of any sort. It is not my network; I merely put up a config file (see below) and configured my cloud server to get it started. At present there are anywhere from 12 to 20 vaults active on it.

If you would like to join, please copy the text below into both your vault’s and your launcher’s config files (both, because the launcher needs to find the seed node of the network) and restart them. No command-line options are necessary.

Alternatively, you can download it from here (for safe_vault) and here (for the launcher) with a right-click followed by “save as”, directly into your safe_vault and launcher folders respectively, overwriting the default files. Don’t worry if the files don’t display properly when you left-click; that is just your browser getting confused by the braces or (on Windows) by the Linux-style line endings. They have been tested to work correctly on both Windows and Linux.

Vaults behind NAT should be fine with the latest hole-punching, but if you run a vault in the cloud that connects directly to community1 and you are comfortable publishing its IP, feel free to send it to me and I’ll add it to the config:

{
  "hard_coded_contacts": [
    {
      "tcp_acceptors": [
        "91.121.173.204:5483"
      ],
      "tcp_mapper_servers": []
    },
    {
      "tcp_acceptors": [
        "185.16.37.156:5483"
      ],
      "tcp_mapper_servers": []
    }
  ],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "tcp_mapper_servers": [],
  "network_name": "community1"
}
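
If you prefer to script the change rather than copy and paste, here is a minimal Python sketch that writes the same JSON into both config files. The file names and folder layout used below are only assumptions, so adjust them to wherever your vault and launcher actually keep their crust configs:

# Sketch only: write the community1 config into both crust config files.
# The file names and locations below are assumptions; adjust them to match
# where your safe_vault and safe_launcher actually live.
import json

config = {
    "hard_coded_contacts": [
        {"tcp_acceptors": ["91.121.173.204:5483"], "tcp_mapper_servers": []},
        {"tcp_acceptors": ["185.16.37.156:5483"], "tcp_mapper_servers": []},
    ],
    "tcp_acceptor_port": 5483,
    "service_discovery_port": None,
    "bootstrap_cache_name": None,
    "tcp_mapper_servers": [],
    "network_name": "community1",
}

# Assumed paths; edit to suit your setup.
targets = [
    "safe_vault/safe_vault.crust.config",
    "safe_launcher/safe_launcher.crust.config",
]

for path in targets:
    with open(path, "w", newline="\n") as f:  # keep Linux-style line endings
        json.dump(config, f, indent=2)
    print("wrote", path)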

I ran the seed vault at that address with the “--first” command-line option, but you should not use that option, since there is only supposed to be one such seed and your vault might not connect.

Just to reiterate: don’t use the --first flag, because there can only be one seed. Just replace your config and that’s it!

Don’t forget to use the current distributables; earlier versions are rejected. The current ones are here:

SAFE Vault binaries
SAFE Launcher binaries
SAFE Demonstration Application binaries

I have put up a statistics page here.

EDIT, May 27: The config now has a second IP address. Please update to this config in vault and launcher.

9 Likes

Added 3 vaults and the http://netstats.safenet site, which gives the current approximate number of vaults according to an empirical formula (currently 13).

This is a static site, automatically updated every 5 seconds. There is no JavaScript, so refresh the results with your browser’s refresh button.

1 Like

You could add auto-refresh with:

  <head>
    <meta http-equiv="refresh" content="5">
  </head>
3 Likes

http://hello.safenet/

1 Like

@tfa How do you update the site without manually pushing buttons every 5 seconds!?.. Is there an easy option for posting from the command line?? Being able to update static web pages automatically would be a big positive.

1 Like

Thanks for the tip. I have just added it.

I have a stats site too, on clearnet, and I’m adding to it as I get the time. It updates every minute. I know the stat is a bit lame, but I’ll be adding more useful things after I finish this message.

http://91.121.173.204/

EDIT: Here it is with size of vault added:

http://91.121.173.204/plot1.svg

…using a three-column data file. I didn’t want to merge it with the current main page’s two-column data file; instead, I’ll cut over to it at midnight UTC, a little under two hours from now.

Now that I have the process down for extracting and displaying this data, it will be short work to add other statistics.

Does anyone have any particular other statistics in mind that they would like to see?

I’ll get the script to remove the data file at midnight and retitle the red line “unique vaults added today”, which is a more useful stat than unique vaults since the vault’s log file was created at some unknown time in the past. That will give a normalized measure of the rate at which new vault names are being created. It will also bring the lines closer together, so I can stretch the vertical axis and make the detail of the green line larger.
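
For anyone curious, the plotting side is nothing fancy. A rough Python sketch of the idea is below; the three-column, whitespace-separated layout (timestamp, unique vaults, vault size) and the file names are assumptions for illustration rather than the exact script I run:

# Rough sketch: plot a three-column data file (timestamp, unique vaults,
# vault size in MB) to an SVG. Column layout, timestamp format and file
# names are assumptions, not the exact script behind the stats page.
import matplotlib
matplotlib.use("Agg")  # no display needed on a headless server
import matplotlib.pyplot as plt
from datetime import datetime

times, vaults, sizes = [], [], []
with open("stats.dat") as f:  # hypothetical data file name
    for line in f:
        t, v, s = line.split()
        times.append(datetime.strptime(t, "%Y-%m-%dT%H:%M"))  # assumed format
        vaults.append(int(v))
        sizes.append(float(s))

fig, ax1 = plt.subplots(figsize=(8, 4))
ax1.plot(times, vaults, color="red", label="unique vaults added today")
ax1.set_ylabel("vaults")
ax2 = ax1.twinx()
ax2.plot(times, sizes, color="green", label="vault size (MB)")
ax2.set_ylabel("MB")
fig.autofmt_xdate()
fig.legend(loc="upper left")
fig.savefig("plot1.svg")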

2 Likes

Successfully running a vault and my beloved http://drogenlied.de.safenet

2 Likes

Looks like it’s working: http://frostbyte.safenet

Is anyone else finding that once you hit a site that’s no longer available, you then can’t access any other safenet site until you restart the launcher?

1 Like

This is an example of what I’ll cut the main statistics page over to at midnight UTC (I mess around with ideas at this address). The missing points from earlier were because I was running it manually. It is now updating every minute, like the main page.

http://91.121.173.204/plot1.svg

Since that plot covers vaults and clients, I’ll add another plot covering packet-type statistics, errors and warnings.

Yes, and the data stays alive only if you access it regularly, once an hour or so.

[citation needed]?..

It is yet to be written. Mostly guesswork. :slight_smile:

1 Like

That’s a wild guess, and at odds with what users will expect. That said, I did wonder if there was more to Test3 that was creating that effect… which is why I was checking with you.

I don’t know how the network will behave in future… is it always a maximum of x4 copies… and if those are for a popular file that sees demand ramp up, will the fragments on slower vault hosts be moved to faster vaults?

I like the idea that the network is clever about its use of vaults: those with lower bandwidth might be put to work routing messaging and other lightweight traffic, while higher-bandwidth work goes to hosts that can deliver it… but I don’t know what liability churn creates there.

It’ll be interesting to see what vaults know of the network’s average vault… it’s unclear at the moment whether that is averaged from the locally connected nodes or known more widely, but if a vault had some sense of the work and throughput of its neighbours, perhaps it could adapt its focus.

It’s an informed guess, generalizing from the few days that I have been using this iteration, community1. Some things do hang on but are inaccessible, such as IDs that are “taken”. I have found, from a small sample size, that if you access everything once an hour then it will keep working.

I don’t know of any secret tricks in testnet3 beyond what was in the official topic on it. I guessed early on that there might be, because of my frustration in testing, but now I can see what compensating steps need to be taken to cope with its several limitations:

  1. That the binary makes no apparent distinction between large- and small-bandwidth vaults.

  2. That it times out and restarts. This is a design feature but it aggravates, and is aggravated by, the other limitations.

  3. Small size of network and the resulting lack of redundancy to keep all the data alive.

  4. Lack of an even spread across timezones, a function of point 3.

  5. Aggressive networking/crust that can flatten a modest-sized connection.

Certainly the users should not expect any permanence at this stage. I’m pleased that it works as well as it does.

1 Like

Surely it still makes x4 copies?.. I don’t know a reason that it wouldn’t, just because the network is small.

I took the loss of data to be just a result of high churn and deliberate stress in this test… although if there were code to keep fewer copies this time, to increase the test stress, that would make sense.

In future I wonder if we could do with keeping to relatively stable versions for running a community network. If the devs need another, more volatile version, then we can run that or not as a second network alongside the stable one, expecting that others will prefer the stable one.

Neither do I know a reason, but @Ross did stress in the OP of the testnet3 topic that the emphasis in the current software is on networking rather than storage.

Sounds good, but what version would that be? At present it is all experimental. As it is, I am running a slightly newer build of 0.8.1 than the testnet3 distributable, which I compiled myself. It is here for anyone who wants it.

Currently immutable data chunks are stored 8 times. A few months ago they were stored 6 times (divided into 3 groups of 2: Normal, Backup and Sacrificial). But now the 8 chunks are all equal.

I think the problem is that people put too many big files on the test networks. As there are few vaults, churn is costly in bandwidth and some data may be lost in the process. Until safecoins are added, I think the account limit should be lowered. Currently it is 100 MB, which I think is too high.
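
To put rough numbers on that, here is a back-of-the-envelope sketch; the vault count and the assumption of a completely full account are illustrative guesses, not measurements:

# Back-of-the-envelope: what one full account costs the network in storage.
# The vault count and the account being full are illustrative assumptions.
account_limit_mb = 100    # current per-account limit
copies_per_chunk = 8      # immutable chunks are currently stored 8 times
vaults_on_network = 15    # rough guess at community1's current size

stored_mb = account_limit_mb * copies_per_chunk   # 800 MB network-wide
per_vault_mb = stored_mb / vaults_on_network      # ~53 MB per vault on average

print(f"One full account puts about {stored_mb} MB on the network,")
print(f"roughly {per_vault_mb:.0f} MB per vault on average; whenever a vault")
print("churns, its share has to be re-replicated across the others.")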

1 Like

Any version that’s not declared to expect data loss.

Yes, I wonder if even test Safecoin will have a big effect on stability, as hosts try to maximise the gain - albeit only the karma/kudos that test coins would represent. Having feedback by way of safecoin could be really useful for stability?.. we’ll see. I wonder what can really be done about weak vault hosts. Running an rPi at home behind a firewall, pushing everything through USB, perhaps brings more liability than an unrestricted droplet. It’s all good, but I wonder if there’s a lower limit on what is useful to the network… perhaps not if there are 8 chunks - that would suggest any contribution is worthwhile; it’s just the managing of it that needs sorting.

All I know is this, as a reproducible pattern:

An ID and credentials created last night don’t work. An ID created this morning, six hours ago, still works: login is fast, the demo app comes straight up and the website is still there. The intervals between accesses of that recent login and its data have been no more than two hours. The interval between last night’s logins and my attempts to use them this morning is six hours. I have noticed this pattern for several days now. Conclusion: log in and access your data at intervals of less than two hours and everything will keep working; leave it much longer than that and it won’t.