Installing Safe_Vault

Thanks, but I knew about that. I’ve also had it running on Linux.

My purpose in compiling is to become familiar with developing and modifying this software, and I’m doing it on Windows in particular because that’s where most of the users are. My goal isn’t to run vaults per se.

EDIT: The download in the video (Release Safe DNS Example · maidsafe-archive/safe_examples · GitHub) is from September 2015, ancient history. It is 7MB compared to my build (after the “strip” operation in the build script) of 9MB. I guess there’s been a whole lot of routing machinery put into it since then.

Just now I reproduced what I had done weeks ago, following the same steps as the video, using the downloaded safe_vault_win.exe, and indeed it runs a vault if you have two or more instances running on the same machine (no launcher). They find each other as in the video. If instead I use my own safe_vault.exe build then, no matter how many instances are running on the same machine, they give the errors below.

Same result if I have it on separate machines.

This is regardless of how I tweak the safe_vault.crust.config; e.g., with only the other machine’s LAN IP under “hard_coded_contacts”.

I am a bit perturbed by that environment variable, which is indeed required for (my) safe_vault.exe to run without exiting with a panic error. It is an undocumented “secret sauce” that I have not been able to find in any md file on GitHub, with only ad hoc mentions of it on this forum.

What other undocumented requirements are there, such as environment variables, that are only found on the developer’s computer?

Something else: not only does the installer produced by the PowerShell script fail to set that environment variable, it also doesn’t adjust the security settings of the installed files/directories properly. To get this far I had to manually grant the user full access to the C:\ProgramData\safe_vault\ directory; otherwise it exits with a panic error.

INFO 17:43:53.763490700 [safe_vault safe_vault.rs:105]

Running safe_vault v0.5.0

WARN 17:43:55.307893400 [w_result lib.rs:288] Failed to find an IGD gateway on network interface {34BA2F5E-B975-4CFC-8AEB-F4DE3CF94ABD} 192.168.10.11. igd::search_gateway_from_timeout returned an error: IO error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (os error 10060)

WARN 17:43:57.320297000 [crust::connection connection.rs:160] TCP direct connect failed: No connection could be made because the target machine actively refused it. (os error 10061)

WARN 17:44:20.114737400 [crust::bootstrap bootstrap.rs:117] Error connecting to bootstrap peer: No connection could be made because the target machine actively refused it. (os error 10061)

WARN 17:44:21.144339200 [crust::connection connection.rs:160] TCP direct connect failed: No connection could be made because the target machine actively refused it. (os error 10061)

INFO 17:44:23.421943200 [routing::core core.rs:1226] Running listener.

INFO 17:44:23.437543200 [routing::core core.rs:422] | Node(f572…) PeerId(a391…) - Routing Table size: 1 |

EDIT1: OK, I suppose I’ll have to trawl through the source code to find out what environment it might be expecting. That’s what I signed up for…

1 Like

I indirectly took @upstate’s advice and looked at open ports: not scanning exactly, but netstat.

Running “netstat -an > out.txt” before and after starting an instance of safe_vault, on both machines and with a reboot along the way, and comparing the results, gave me the following insights:

safe_vault listens on two TCP ports and on two UDP ports that change in an unpredictable pattern when it is restarted. It also listens on UDP port 5484, that doesn’t change.
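The before/after snapshots can be scripted along these lines (a sketch: the binary path assumes a `cargo install` layout, and the five-second pause is an arbitrary settling time, so adjust both to taste):

```shell
# Snapshot listening sockets, start a vault, snapshot again, then diff.
# (On newer Linux netstat may be absent; `ss -tan` and `ss -uan` are equivalents.)
netstat -an > before.txt 2>/dev/null || true
"$HOME/.cargo/bin/safe_vault" &       # path is an assumption; adjust to your build
sleep 5                               # give the vault a moment to bind its ports
netstat -an > after.txt 2>/dev/null || true
diff before.txt after.txt || true     # '>' lines are sockets the vault opened
```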

The safe_vault.crust.config file has a list of “hard_coded_contacts” which appear to be other instances of safe_vault, along with the port they are listening on (5483 in every case) and both TCP and uTP transport protocols.

As I understand it, since uTP is a layering on UDP, then it is uTP that is being listened for on those open UDP ports that I found.

But how can I use this?

If all those “hard_coded_contacts” are listening on uTP/UDP port 5483, as if it were a default port, then why are my machines listening on 5484, as if it were a default port?

In each machine’s safe_vault.crust.config file I added an entry under hard_coded_contacts for the other machine. On machine A I used netstat to determine its current listening ports. Leaving that machine running safe_vault, on machine B I changed the entry in its safe_vault.crust.config for machine A, trying each of machine A’s listening ports in turn. I restarted B after each change.
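For reference, this is the rough shape of such an entry, with the other machine’s LAN IP and a candidate port. The key names here are illustrative only, based on the description above (a contact address, a port, and a TCP or uTP protocol marker); mirror the entries already present in your own safe_vault.crust.config rather than trusting these exact keys, since the schema varies between crust versions.

```json
{
  "hard_coded_contacts": [
    { "protocol": "tcp", "address": "192.168.10.12:5483" },
    { "protocol": "utp", "address": "192.168.10.12:5483" }
  ]
}
```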

What I found was a different type of error (rather than connection “refused”), as if some kind of handshake had occurred:

ERROR 23:11:00.544586400 [crust::sender_receiver sender_receiver.rs:71] Deserialisation error: Deserialise(IoError(Error { repr: Custom(Custom { kind: UnexpectedEof, error: StringError("failed to fill whole buffer") }) }))

WARN 23:11:00.544586400 [crust::connection connection.rs:160] TCP direct connect failed: Deserialisation failure

WARN 23:11:21.854223900 [crust::bootstrap bootstrap.rs:117] Error connecting to bootstrap peer: Deserialisation failure

INFO 23:11:22.821425600 [routing::core core.rs:1226] Running listener.

Ideas, anyone?

I get the same error as you in sender_receiver.rs at line 71: “failed to fill whole buffer”

As this happens in a program that was working perfectly before, I suppose a regression has recently been introduced in a maidsafe crate.

I am on Windows; I will test on Ubuntu to see if I get the same error.

Same error on Ubuntu:

thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: CoreError::GetFailure::{ reason: NoSuchData, request: Structured(0d4b.., 786562251788)}', ../src/libcore/result.rs:746
note: Run with `RUST_BACKTRACE=1` for a backtrace.
ERROR 10:50:19.211354152 [crust::sender_receiver sender_receiver.rs:71] Deserialisation error: Deserialise(IoError(Error { repr: Custom(Custom { kind: UnexpectedEof, error: StringError("failed to fill whole buffer") }) }))
ERROR 10:50:19.211567656 [crust::connection connection.rs:743] Error receiving from PeerId(c244..): Error { repr: Custom(Custom { kind: InvalidData, error: StringError("Deserialisation failure") }) }

(though I don’t get the first 2 error lines on Windows)

In both cases here are the operations I do:

  • launching 12 vaults is OK (locally, on one machine)
  • creating an account is OK
  • logging in is OK
  • managing immutable data is OK
  • managing structured data is NOT OK anymore

This has been happening since I updated my crates yesterday (with cargo clean and cargo update). I advise people with a working environment not to do the same until the problem is corrected.

Let us hope the MaidSafe team corrects this rapidly, because SD objects are very important for a lot of projects on the SAFE network.

1 Like

Yes, at the moment it’s all vaults. We are confirming churn testing. The ImmutableData manager is currently undergoing surgery: it was very highly coupled and is being modularised a lot internally.

MaidManager, SDManager, etc. were all updated this week and tested well enough yesterday. The ImmutableData manager, though, was very difficult to work through sensibly, so this is an area we are working on now. I never thought it was so coupled, but it is, and that needs fixing. A few hours ago I took my local branch and started big changes. Scary at this point to do this so quickly, but vaults now are really very simple code (or should be). So yesterday was tough and today will need a lot of focus.

PmidManager and PmidNode will be very simple though; we already know what they need for churn (the answer is the same as MaidManager, and nothing, respectively). So we are OK there.

So we are coming over the line, but having to find a lot of last-minute energy to do so. We will get this done asap, but as I say it’s not huge: this is part of the code we control, unlike much lower areas such as crust. So we are good in that respect.

13 Likes

Thanks for the feedback, @tfa and @dirvine.

Looks like I’m almost there on Windows, so today I’ll relax by setting up a vault on Debian. :slight_smile:

1 Like

It is probably obvious to you, but I will say it anyway, just to be sure everything is clear: launching only one vault is not enough. You have to launch several of them to create a local network. I don’t know the minimum number of vaults needed, but I usually create 12 of them and it works. You don’t need to use several machines; I launch the vaults successfully on a single old PC under Windows. SDs aren’t functional at the moment, but this is a temporary regression.

Later you will be able to launch one (or more) vaults to join the global SAFE network, but this is not possible yet.

3 Likes

Thank you for the information, which wasn’t obvious to me. I didn’t know of that minimum.

Are you sure of that? There’s a test network of vaults and a live-looking safe_vault.crust.config.

I’m a recent arrival, and no one has written otherwise (afaik), i.e. that we can’t join and test every aspect of the test network.

Only client programs can join the global test network, vaults cannot join it yet.

You can create your own local network, and you don’t need a config file for that.

Ah, the light dawns (said the mushroom).

1 Like

I post here, and in the following couple of comments, my notes from Sunday, having fun by running (regression and all) SAFE_VAULT on a fresh install of Debian 8. Apologies if this seems unduly simple to the initiated, and also for repetition of things probably addressed in various posts, but I felt that it might be of use to beginners too shy to ask for guidance. It is a start-to-finish recipe for getting a one-computer “test network” running, starting with a clean OS and going the full compile route.

PART 1. Rust and GCC Setup

Rustup is the latest iteration of the toolchain management system formerly known as Multirust. It was created by the same people who created Rust and is intended eventually to become the default management environment for Rust. With Rust’s version cycles so short, and the different flavours of platform and cross-compilation, it is easier than having to reinstall Rust frequently.

(More information about Rustup here: Beta testing rustup.rs - Rust Internals)

Uninstall any previous installations of Rust, Multirust, or Multirust-rs.

First let’s create a projects directory, if you don’t already have one, and configure bash to start there:

Open a shell and give the commands:

	mkdir projects
	
	echo "cd ~/projects" >> ~/.bashrc

Then close and re-open the shell, and the prompt should now be in the projects directory.

Now we must install some tools. Open a shell and give the command:

	sudo apt-get install build-essential tar libtool autoconf pkg-config

and then:

	curl https://sh.rustup.rs -sSf | sh

and follow the instructions.

Then:

	rustup update stable-x86_64-unknown-linux-gnu    # to download the toolchain we want
	
	rustup default stable-x86_64-unknown-linux-gnu   # to set the default toolchain
	
	rustup show		# to display the default toolchain
	
	rustup update nightly-x86_64-unknown-linux-gnu   # (optional) to install the latest, bleeding-edge build
	
	rustup toolchain list    # to display all the toolchains currently installed.

If we ever need to change toolchains then:

	rustup default {and name of the toolchain you want to switch to}

Now we can proceed to use Rust:

	rustc -V        # to show the Rust version, and prove that Rust is available
	
	cargo -V        # to show the Cargo version, and prove that Cargo is available
	
	cargo new hello-world --bin      # to create our first Rust project
	
	cd hello-world
	
	cargo run       # to prove we have a working Rust compiler

PART 2. Install the Sodium/Libsodium Library

In the following, I compile the Libsodium binaries, as an exercise, rather than using the pre-compiled binaries.

Open a shell and give the commands:

	git clone -b stable https://github.com/jedisct1/libsodium.git      # cloning the stable branch because the default, master branch generates many warnings
	
	cd libsodium
	
	./autogen.sh
	
	./configure
	
	make && make check
	
	sudo make install

The last step above installs the binaries to /usr/local/lib, which might not be on your shared-library path (e.g., on Red Hat-derived distros). Fortunately, on Debian-derived distros you only need to run the following command to reload your library cache:

	sudo ldconfig

Run the following command to see if the new library is present:

	sudo ldconfig -p | grep libsodium

… whose output should include, in part (instead of nothing!):

	/usr/local/lib/libsodium.so

Now to install Sodiumoxide, which creates a Rust binding of Libsodium.

Give the commands:

	git clone https://github.com/dnaq/sodiumoxide.git
	
	cd sodiumoxide
	
	cargo build
	
	cargo test

If the output of that last command contains only “ok” test results, and no error messages, then Libsodium is now available to your Rust compiler.

PART 3. SAFE_VAULT Test Setup

Open a shell and give the commands:

	git clone https://github.com/maidsafe/safe_vault.git
	
	cd safe_vault
	
	cargo build
	
	cargo test
	
	cargo install

The last command puts the executable, called “safe_vault”, in your ~/.cargo/bin directory, ready to run.

To test it, create a shell script called “run.sh” on your desktop that contains the following code:

	#! /bin/sh
	for i in $(seq 1 12)
	do
		x-terminal-emulator -e "$HOME/.cargo/bin/safe_vault"
	done
	qdbus org.kde.kwin /KWin org.kde.KWin.cascadeDesktop

Then make the script executable:

	chmod +x ~/Desktop/run.sh

Now click it to run 12 instances of safe_vault.

That’s all for now!

Explanation of the script:

  1. “x-terminal-emulator” is a symbolic link that, by a couple of steps, points to your default terminal emulator, whether konsole, gnome-terminal, xterm, or another. So the script should run on any Debian-derived distro at least (I hope).
  2. The last command in the script (qdbus) will neatly cascade all the windows, so clear your desktop of other windows before you run the script, or they will be swept up into the cascade.

EDIT 2016.05.14: The procedure above is out of date, and was only appropriate for testing that the vaults would start and talk to each other, not for real networking. You should add the --release flag to cargo build and cargo test in order to get an optimized binary. You should also give each instance its own folder to run in, so that it can create its own cache and log files and, optionally, so that its config can be customized for network parameters such as IPs and ports.
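A sketch of the per-instance layout described above, assuming safe_vault creates its cache, log, and config files in the current working directory (the base path, instance count, and the config-copy step are all illustrative):

```shell
#! /bin/sh
# Give each vault instance its own working directory for cache, logs,
# and an optionally customised safe_vault.crust.config.
BIN="$HOME/.cargo/bin/safe_vault"    # adjust to wherever your build lives
for i in $(seq 1 12)
do
	dir="$HOME/vaults/vault$i"
	mkdir -p "$dir"
	# cp my-custom.crust.config "$dir/safe_vault.crust.config"   # optional
	if command -v x-terminal-emulator > /dev/null 2>&1
	then
		( cd "$dir" && x-terminal-emulator -e "$BIN" )
	fi
done
```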

1 Like

I assume vaults will automatically run at startup, but will they always be visible on the taskbar?
It would be nice if they could run hidden, so that they can be installed at work etc…

1 Like

I think they will, just like a virus scanner or similar running in the background. And when you click it you get stats, options, etc.

Don’t do it, because I suppose you need your day job. A vault could be encapsulated in a service so that it doesn’t appear on the taskbar but it will still appear in the process list and will consume abnormal network resources.

For more detailed answers see: Farming at work.

1 Like

It’s not really for me, but many large businesses have uncapped commercial internet and won’t notice an odd vault here or there. Hotels, restaurants, etc. But I should not be suggesting these things :wink:

1 Like

Some are concerned we won’t see enough upload capacity when too many people log on. I think that small businesses, like you say, will provide enough resources alongside the people at home. Look at all these little internet companies with 20 to maybe 100 users. They have the infrastructure to run vaults and they can make some money on it. My guess is we’ll see a lot of them farm for a little profit.

1 Like

Exactly. I feel that depending on where you live, bandwidth could be a huge hurdle, but this won’t affect business internet packages, so they have a role to play. Here in the States, caps are dismal. Verizon FiOS advertises uncapped but threatened to disconnect a guy’s service for using 7TB in a month. He was paying $315 per month for a 500Mbps plan; that hardly seems unreasonable usage to me.
My provider has a 250GB monthly limit ($50 for every 100GB after), currently not enforced, but no doubt they will throw a fit if it’s exceeded by much.

1 Like