MaidSafe Dev Update - 14th June 2016 - TEST 4

App Logging

Is there logging on the launcher and demo app?

I opened them from the terminal in Ubuntu 16.04, but no log messages appear.

cd /path/to/launcher
./safe_launcher

cd /path/to/demoapp
./maidsafe_demo_app

No log messages appear as actions are taken in the application windows.
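
As a side note: if these binaries use Rust’s standard log/env_logger crates (an assumption on my part; the actual logging setup may well differ), enabling and emitting the kind of per-action logs being asked for could look roughly like this sketch:

// A minimal sketch, assuming the `log` and `env_logger` crates.
// Whether the launcher actually uses them is not confirmed.
fn main() {
    env_logger::init(); // honours the RUST_LOG environment variable
    log::info!("launcher started, waiting for app connections");
    // ... then on each app action, something like:
    log::info!("app requested PUT of /public/www/index.html");
}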

Proxy Status

Another gotcha is setting up the global proxy. Not sure if it’s possible to show in the launcher whether the global proxy has been correctly set up or not, but I forgot to do this and then wondered why I couldn’t load .safenet websites. In the launcher under the ‘settings’ tab it says ‘Web proxy server is enabled’, so to me that means I should be able to browse safenet sites, but the extra step was required.

People are going to download and run. Without clues from the app itself, it seems broken. Most people won’t (and shouldn’t have to) read instructions.
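
As a sketch of the in-launcher status check being suggested here: probe the local proxy port and surface the result in the UI, instead of just stating that the proxy server is enabled. The port number would be whatever the launcher binds; everything below is illustrative.

use std::net::TcpStream;

// Returns true if something is listening on the local proxy port.
fn proxy_reachable(port: u16) -> bool {
    TcpStream::connect(("127.0.0.1", port)).is_ok()
}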

Other than that, another solid release. Looking forward to the fully fledged network in the future.

Edit:

Seems there are error logs for safe_launcher; it might be an idea to also add more logging, e.g. info about app actions.

To clarify, users should be able to know when and what an app communicates to the launcher, and get more detail on upload progress (not just a percent-complete value, but also time elapsed, approximate time remaining, upload rate, etc., like a wget readout). Actions that produce no log are a little disconcerting to me for some reason!
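
To illustrate, here is a minimal sketch of that wget-style readout; every name in it is hypothetical, not the launcher’s actual API:

use std::time::Instant;

struct UploadProgress {
    total_bytes: u64,
    sent_bytes: u64,
    started: Instant,
}

impl UploadProgress {
    // Percent complete, time elapsed, upload rate, and estimated time left.
    fn report(&self) -> String {
        let elapsed = self.started.elapsed().as_secs_f64().max(1e-9);
        let rate = self.sent_bytes as f64 / elapsed; // bytes per second
        let remaining = self.total_bytes.saturating_sub(self.sent_bytes) as f64;
        let eta = remaining / rate.max(1e-9); // seconds left
        format!("{:.1}% | {:.0}s elapsed | {:.1} KB/s | ~{:.0}s left",
                100.0 * self.sent_bytes as f64 / self.total_bytes as f64,
                elapsed, rate / 1024.0, eta)
    }
}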

Strange Error

I saw an error dialog when uploading a new public file.

Dialog:

Upload failed
Failed to create file /public/www/filename.mp3
# Press ok
# Everything looks ok, file is in demo_app list, and plays in browser

Terminal output:

ERROR [safe_core::ffi mod.rs:302]
FfiError::NfsError -> NfsError::FileAlreadyExistsWithSameName

The file upload was successful and I could load it in a browser, so I’m not sure why an error showed. It was definitely not a duplicate file, no doubt about that.
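
One possible (unconfirmed) explanation is an internal retry: the first attempt actually created the file, the retry then hit FileAlreadyExistsWithSameName, and that error was surfaced even though the upload had succeeded. A hedged sketch of how such a result could be reclassified; NfsError here is a stand-in for the real safe_core type:

// Stand-in for safe_core's NfsError; only the variant from the log above.
#[derive(Debug)]
enum NfsError {
    FileAlreadyExistsWithSameName,
}

// If an upload is retried after a timeout, "already exists" on the retry
// can mean the first attempt succeeded, so report success to the user.
fn interpret_upload_result(result: Result<(), NfsError>, is_retry: bool) -> Result<(), NfsError> {
    match result {
        Err(NfsError::FileAlreadyExistsWithSameName) if is_retry => Ok(()),
        other => other,
    }
}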

1 Like

Thanks @Pierce. @scott has been working on a UI iteration to better address the UX issues. But for the immediate release (last week of this month), we will only be trying to stabilise the backend of the launcher, which should also improve the UX to a certain level.

Once the backend is stable, we will implement the UI updates and workflows to improve the UX in the next version. @scott has already been pushing us on that front.

12 Likes

I wouldn’t say out of sync with the vault. But there are a few edge cases which are throwing some weird issues at times. And the good part is that we have spotted a few things :wink: and those are planned to be addressed in v0.5 of the launcher. For example, say a user creates a directory and the network connection is lost while the directory creation is still in progress. The user might lose their data and have to upload it again. We are trying to address issues like these, which should improve the UX to at least a minimal level.

We will also be trying to improve the error handling in the demo app.

The focus in the next version is on backend stability and testing it. Of course, UI-related improvements will soon follow. As said in my previous comment, @scott has been working on the UX improvements. We will take that up once version 0.5 is ready.

Launcher/demo app need love A.S.A.P.

Yes, for sure :smiley: We are working on the launcher for the next few weeks.

10 Likes

Like the last testnet, it’s got that issue where you create a service, add a folder, and it chokes while trying to upload the folder. From that point on it seems to regard the service as both existing (and so won’t let you try to create it again) and not existing (and so won’t let you use it).

Maybe it needs an option to ‘recreate’ a service in such circumstances?

I agree with this one. Maybe after some technical setting in the terminal people could have more, but like you say, only for the technical folks. The average user should just run 1 vault. Another option is being paid Safecoin for every MB you provide to the network, including caching. That way you would get paid for bandwidth without the need to run extra vaults.

4 Likes

Everything works flawlessly for me, except for XHTML pages. Check:

http://quake3v2.perico.safenet/
http://quake3.perico.safenet/

The first one is HTML. The second is XHTML. The error I get for the latter is {“errorCode”:-1503,“description”:“InvalidPath”}. I guess it is just because, so far, the launcher only accepts HTML files.
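
If that guess is right, the proxy may be resolving paths against a whitelist of known extensions, so anything not on the list comes back as InvalidPath. An illustrative mapping (hypothetical, not the launcher’s actual table):

// Map a file extension to a content type; None models the InvalidPath case.
fn content_type(path: &str) -> Option<&'static str> {
    match path.rsplit('.').next()? {
        "html" | "htm" => Some("text/html"),
        "xhtml" => Some("application/xhtml+xml"), // would cover the case above
        "css" => Some("text/css"),
        "js" => Some("application/javascript"),
        _ => None, // unknown extension -> InvalidPath-style error
    }
}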

@Pierce

Honestly, this testnet seems like a regression. Slow load times, upload instabilities, small routing table, tons of warning messages, connection failures, etc.

You say that as if we should not expect that to happen.

This is the nature of testing. It isn’t a smooth process - it is bumpy (I think David actually used that word) and particularly so when building something that is experimental, by which I mean has never been done before!

Even in simple software one small innocuous change can have dramatic regressive effect. That’s the nature of software, and you only find that out when you test.

Plus, we know that this release involved a deep rewrite of the code to switch crust from multi-threaded to asynchronous operation. That’s a big change, and in an area that you can’t easily test in isolation. Many of the issues that kind of change can create will only reveal themselves when you let a bunch of unruly, demanding users loose on your precious code (users are the best test of any software, I can tell you).

So it is not necessarily due to premature release, because it is the nature of the beast we’re hunting here. How would MaidSafe discover and eliminate the problems you have listed without… testing? :slight_smile:

Of course they do internal tests, but the issues we’re seeing here all seem to be exactly the kind of thing you can’t expect to eliminate until you let it out into the wild. But even if they were not, new bugs in previously good code/features are to be expected.

TL;DR during testing expect bugs! :joy:

7 Likes

This is it: we cannot test humans; only humans can give us that input, and it’s great. In terms of this test, so far the networking is a huge improvement. We did regress the UI due to a last-minute bug, but we were determined to find out about the network and the new message flow. That all works, and we need to see how well. Initially it looks very good though, and it gives us a direction we can use, which is wonderful.

So today we will dive into the UI and launcher to ensure it catches up with the speed of the other changes. The lower APIs are now hopefully more stable and will let the launcher/UI folks get some traction.

In terms of the test, we switched off caching etc. so that we could not fall back to the cache on errors; that has also been good as far as I can tell. So yes, we now have a lot behind us and can get back to quicker iterations.

11 Likes

The vault broke my office internet connection. I have a very old router, so maybe that’s the problem. I reported the issue.

Launcher and Apps without problems.

http://wave.digipl.safenet

2 Likes

That’s a good question (by that I mean a tricky one to answer). I think @Southside made the right call actually :slight_smile: with

except I guess he meant 200Kbps :wink:

To answer your question Chris, it’s a bit hard to say that running 1 vault per machine is always the “best” solution, because it’s a bit complicated to know for sure.

It kinda boils down to the bandwidth capability of the endpoint. If a user with, say, a 100 kbps upload connection runs 5 vaults, they’re just dragging the group average down at that point. However, if a user with a 100 Mbps connection runs 5 vaults, even though the vaults share the same bandwidth, they’ll probably still help the network more, because more nodes in the network are then capable of handling traffic surges than if they chose to run a single node. This is an area that definitely needs to be discussed, and some approaches tried, so the network itself can decide if a peer is worthy of storing data.
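
The rough arithmetic behind that point, as a sketch:

// Splitting one uplink across several vaults divides the per-vault rate
// that each group sees.
fn per_vault_kbps(uplink_kbps: f64, vault_count: u32) -> f64 {
    uplink_kbps / vault_count as f64
}

fn main() {
    // 100 kbps uplink, 5 vaults -> 20 kbps each: drags every group down.
    println!("{} kbps per vault", per_vault_kbps(100.0, 5));
    // 100 Mbps uplink, 5 vaults -> 20,000 kbps each: still ample, and the
    // network gains five surge-capable nodes instead of one.
    println!("{} kbps per vault", per_vault_kbps(100_000.0, 5));
}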

That’s why I suggested the very conservative option of running a single vault for now.

Now, as for people wanting to run more nodes to increase their chances of earning Safecoin: I think this can be tackled quite easily by shifting from “arbitrarily limiting users running multiple nodes” to “checking how soon someone is able to provide requested data”. That way, if a person runs 20 nodes on the same machine but is able to satisfy every request from the network adequately, the network doesn’t need to care. And if they aren’t able to, they might not earn Safecoin at all on any of their nodes, because their average is probably lower than that of other nodes in the same group; their motivation for running multiple nodes probably evaporates, and they’d probably earn more by just running a single vault than 20.

As for partitions and running multiple vaults, I’m not too sure I follow that one, as they should be quite independent, I’d hope. If a user has multiple partitions they want to contribute to the same vault, then vaults should be able to accept that and use all partitions equally to store chunks, I’d think. And if they wanted to run a vault per partition, they should of course also be able to do that.

The tricky part is the motivation for running multiple vaults, which is of course largely driven by increasing the chance to earn Safecoin. If the network itself profiles a node’s capability to handle data requests, then users would see it’s not just a linear equation of more vaults = greater chance of Safecoin generation.

Sorry if I’ve confused you more.

11 Likes

I agree: Better detection of and adaptation to different nodes’ bandwidths is definitely on our agenda, yes, but we’re not there yet.

The state changes should be the same for the vault in the cloud and the local one: Every joining node starts in state Disconnected; once it makes a connection to any node in the network it changes its state to Client. It then requests a new name from the network, changes its name to that and inserts itself (by making connections to the corresponding peers) in the position in the network corresponding to its new name. At this point it moves to state Node.
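
A minimal sketch of that joining sequence (the real routing crate’s types and transitions are richer; this only captures the three states named above):

#[derive(Debug)]
enum PeerState {
    Disconnected, // not yet connected to anyone
    Client,       // connected to at least one node in the network
    Node,         // renamed by the network and wired into its position
}

impl PeerState {
    // Bootstrap succeeded: connected to some node from the config file.
    fn on_bootstrap(self) -> PeerState { PeerState::Client }
    // Got a network-assigned name, adopted it, and connected to the peers
    // corresponding to the new position.
    fn on_relocate(self) -> PeerState { PeerState::Node }
}

fn main() {
    let state = PeerState::Disconnected.on_bootstrap().on_relocate();
    println!("{:?}", state); // Node
}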

The client count on your cloud vault is 0 because it’s not configured as a contact in the config file packaged with the launcher, so no launcher will try to connect to it. Users’ launchers will only connect to our droplets currently, as their addresses are in the config file …
… except if you are running a local vault! Then the launcher will make connections to that one, which is why you are seeing a positive client count there.

10 Likes

yeah, it was 2 in the morning…

Right now I am unable to upload either files or sites. When I try to create a site using the template, I get:

Failed to upload Template
Failed to create directory /public/farquharyoo

I have restarted both launcher and demo_app

1 Like

Everything is going smoothly for me, though I’m only running one vault. I had no problem logging in (and then back in) with the launcher, and uploading files and creating a website with the app was fast. CPU usage is ~5%, sometimes spiking above 10%, but not resource intensive.

3 Likes

Yes, that is what I am suggesting.

Most (non-technical) people don’t know their “upload” bandwidth, and may confuse it with their “download” bandwidth, which is often 10 times higher.

Farming Progression
Everyone starts at level 1, running a single vault. More advanced farmers progress to higher levels by adding more vaults, knowing they must adequately resource those vaults.

We should discuss how the Network communicates the level/rank of a vault to the farmer. Example…

  • A farmer starts 10 vaults and sees they’re all ranked “poor.” They know their vaults are under-resourced and they need to either reduce the number of vaults or increase their resources.
  • A farmer starts 10 vaults and sees they’re all ranked “excellent.” They know they can add more vaults because the Network is satisfied with their performance.

Here’s a stupid simple way of communicating this information, which is extremely helpful for the farmers and the Network.

Poor (Red) = low bandwidth, and/or the vault is hosting too many chunks and is unable to keep up with GET requests.

Good (Yellow) = adequate bandwidth, and/or the vault is hosting enough chunks and is able to keep up with GET requests.

Excellent (Green) = abundant bandwidth, and/or the vault is capable of increasing storage capacity. Potential to become an archive vault.

This is the perfect role for a (Vault Manager) because it communicates what the Network requires and manages the vaults for the farmer.
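
As a sketch of how such a grading might look in code (thresholds and names are purely illustrative, not anything the Network defines today):

#[derive(Debug)]
enum VaultRank {
    Poor,      // red: failing to keep up with GET requests
    Good,      // yellow: keeping up, little headroom
    Excellent, // green: headroom to take on more chunks / archive duty
}

// Grade a vault on the fraction of GET requests it served in time.
fn grade(gets_served_on_time: u64, gets_requested: u64) -> VaultRank {
    let ratio = gets_served_on_time as f64 / gets_requested.max(1) as f64;
    if ratio < 0.90 {
        VaultRank::Poor
    } else if ratio < 0.99 {
        VaultRank::Good
    } else {
        VaultRank::Excellent
    }
}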


It has been suggested that vaults should be graded/evaluated by the Network.

I agree: the Network should autonomously grade vaults based on bandwidth capability AND determine whether a peer is worthy of storing data. The last part discourages “poor” vaults. If a farmer’s single vault collects more chunks than their many vaults did, they will quickly get the idea.

13 Likes

I call this screenshot “Maidsafe 101” :joy:

4 Likes

Apart from caching being disabled for now (it will need to be reimplemented to work with split messages), another aspect that might impact the performance of the network is message prioritisation: instead of sending all messages in the order in which they were put into the queue, higher-priority messages are sent first, delaying other messages, and under extreme load low-priority ones may even be dropped. Unfortunately for the user waiting for a website, the priorities are (and probably have to be), from highest to lowest (see the sketch after this list):

  1. Keeping the network structure intact so that routing is functional in the first place, i.e. when nodes join or leave, make sure the connections are established/replaced as required. (This is very little traffic, as it’s just messages like: “Connect to me!” or: “Node 1337 joined, you might want to connect to it.”)

  2. Relocating the data to keep it replicated, e.g. if a node leaves, create another copy of every data chunk that node has been storing. (This is a lot of traffic, as it’s actual data, but it is only relevant while lots of nodes are joining or leaving. Also, as of this version, these messages are not duplicated anymore: Only one of the remaining holders of the data is sending the actual data.)

  3. Mutating the data: e.g. Put and Post requests should not be dropped after half the group authority has received them, as either all or none of them must process the data mutation.

  4. Getting the data to clients: If that fails, at least there’s no damage to the network, although the user will have to retry and possibly wait until the network is less overtaxed.
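
Here is the promised sketch of that scheme, as I read the description (illustrative only, not the routing crate’s actual code): messages go out in priority order, and when the queue overflows under extreme load, the lowest-priority message is dropped first.

use std::collections::{BTreeMap, VecDeque};

struct SendQueue {
    queues: BTreeMap<u8, VecDeque<Vec<u8>>>, // keyed by priority; 1 is highest
    len: usize,
    capacity: usize,
}

impl SendQueue {
    fn push(&mut self, priority: u8, payload: Vec<u8>) {
        self.queues.entry(priority).or_default().push_back(payload);
        self.len += 1;
        if self.len > self.capacity {
            // Under extreme load, shed from the lowest-priority (largest-key) queue.
            if let Some(lowest) = self.queues.keys().next_back().copied() {
                let q = self.queues.get_mut(&lowest).expect("key exists");
                q.pop_back();
                if q.is_empty() {
                    self.queues.remove(&lowest);
                }
                self.len -= 1;
            }
        }
    }

    // Next message to send: highest priority (smallest key) first.
    fn pop(&mut self) -> Option<Vec<u8>> {
        let highest = self.queues.keys().next().copied()?;
        let q = self.queues.get_mut(&highest)?;
        let msg = q.pop_front();
        if q.is_empty() {
            self.queues.remove(&highest);
        }
        if msg.is_some() {
            self.len -= 1;
        }
        msg
    }
}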

We should have included that in the original post. Sorry I missed that one!

Anyway, note that this is just a crude solution for now. There is still a lot of room for improvement, like trying to route around slow nodes, etc.

18 Likes

Would it be good for a TEST to have more testers than we have now? I guess a lot of people joined, but if you want to go up to 3,000 or so, we might set up a Thunderclap campaign with lots of promotion on social media, maybe combining it with a blog post on the MaidSafe website. Just poke us here and we’ll set it up :thumbsup:.

12 Likes

Happy days, my data allowance just went up from 300GB to 1TB. It must be fate :slight_smile:

Comcast, in select US areas; if anybody else felt throttled, you may be in luck.

1 Like

Locking in 1-vault-per-machine funny business will cause trouble for my door-to-door free home routers for all 200,000 of my local neighbors with 1000 Mbps (1 Gbps) upload/download.

Limiting vaults per machine would be no problem for data centers (I thought we were trying to limit centralized farming). As I said (years ago), bandwidth is going to be the equivalent of Bitcoin ASICs. At least a bandwidth arms race would benefit everyone. Safecoin should just skip the chase and tie directly to bandwidth only (CPU and hard drive are a moot point) and, if possible, be able to determine that my 200k individual 1 Gbps fiber connections are more valuable than 200k instances in a data center.