User-run network based on test 12b binaries

These cloud instances aren’t that easy to understand. I wasn’t even able to log into the OS, though I might have done something wrong, so I’m working on it. I hope there’s a GUI for Ubuntu :slight_smile:


There is no particularly easy way of getting a GUI on a cloud instance of Ubuntu - unless you are reasonably familiar with Ubuntu in the first place - but it’s really not necessary.

If, or rather when, you do get logged on, here are the commands I ran to get my various vaults running.

Enter these one line at a time. Anything after the # is a comment - don’t type it:

sudo mkdir /maidsafe # Use root privileges to make a directory called maidsafe
sudo chown -Rv ubuntu:ubuntu /maidsafe/ # Use root privileges to change the owner of that directory - so you don’t have to mess too much with root privileges
cd /maidsafe/ # Change to your maidsafe directory - the prompt will change
wget https://github.com/maidsafe/safe_vault/releases/download/0.13.0-dev/safe_vault-v0.13.0-linux-x64.zip # Download the SAFE vault code
sudo apt-get install unzip # Use root privs to install unzip - think WinZip but far superior
unzip safe_vault-v0.13.0-linux-x64.zip # Unzip the code
cd safe_vault-v0.13.0-linux-x64/ # Change into the code directory
nano safe_vault.vault.config # Edit the vault config - change the max size (delete the leading ‘21’, insert ‘7’ - rough, but it works)
nano safe_vault.crust.config # Edit the crust config - just replace with a copy grabbed from further up this page
sudo ./safe_vault # Use root privs to actually run the vault

You need that last sudo because the chunks are written to /tmp which is owned by the root user - again there are more elegant ways round this - but this works well enough for our testing here.
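For reference, the steps above can be collected into one script. This is only a sketch, assuming the same 0.13.0-dev release URL and the default “ubuntu” user from this thread; it writes the script to a file and checks its syntax, so you can review it before actually running it on an instance.

```shell
# Sketch of the setup steps above, written to a file for review.
# Assumes the default "ubuntu" user and the 0.13.0-dev release URL.
cat > setup_vault.sh <<'EOF'
#!/bin/sh
set -e                                    # stop on the first error
sudo mkdir -p /maidsafe
sudo chown -Rv ubuntu:ubuntu /maidsafe/
cd /maidsafe/
wget https://github.com/maidsafe/safe_vault/releases/download/0.13.0-dev/safe_vault-v0.13.0-linux-x64.zip
sudo apt-get install -y unzip
unzip safe_vault-v0.13.0-linux-x64.zip
cd safe_vault-v0.13.0-linux-x64/
# edit safe_vault.vault.config and safe_vault.crust.config here, then:
sudo ./safe_vault
EOF
chmod +x setup_vault.sh
sh -n setup_vault.sh && echo "syntax ok"  # -n checks syntax without running anything
```

On a real instance you would then run ./setup_vault.sh; nothing beyond the syntax check is executed here.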

I’m sure others will point out more “correct” ways of achieving this, but it’s not really difficult - just follow the steps and shout for help if you get stuck.


Thanks for this detailed description! I’ll give it a try!


You might want to type
screen
before you do any of these commands - then just hit space a few times before continuing - that will let the vault run in the background and you can close your terminal window.
When you want back in, just log onto your cloud instance again and type
screen -rd

and your running vault output will appear.


Personally, I use FileZilla to upload the vault software to its own directory on the Ubuntu machine.

So my procedure is

On my PC, run FileZilla and upload the vault directory to the VM

SSH into the Ubuntu VM (for AWS, use the key file and the username “ubuntu”)

cd to the directory

chmod 700 safe_vault

./safe_vault


no need for sudo ever.

I have previously downloaded the vault software and modified the config files on my machine; this is what I use FileZilla to transfer to the VM.
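The chmod 700 step matters because files transferred with FileZilla don’t keep their execute bit. A small sketch with a stand-in script (safe_vault_demo is a made-up name, not the real binary) shows the effect:

```shell
# Simulate an uploaded binary: a file without the execute permission.
printf '#!/bin/sh\necho vault-ok\n' > safe_vault_demo
chmod 600 safe_vault_demo   # as if freshly uploaded: owner read/write only
chmod 700 safe_vault_demo   # rwx for the owner, nothing for anyone else
./safe_vault_demo           # now it runs
```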


Thanks for the tip

Have you used “nohup”?

I am wondering which one to use.

sudo means “Super User Do” - do something with root privileges. Always be very careful with that command - nothing too tragic can happen here, but on other systems you MUST be very aware when using root (or superuser) privileges.
Also, if you are not used to Linux, then tab completion will come as a welcome surprise:

For cd /maidsafe - just type cd /ma and hit tab - it will complete the command for you
For unzip safe_vault-v0.13.0-linux-x64.zip - just type unzip s and hit tab

You will love it :slight_smile:


I think screen is easier for beginners - nohup has its advantages though.
I actually used tmux locally to give me 6 terminals and had each AWS instance running in its own terminal with screen on each instance
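On the nohup question above: nohup detaches a command from the terminal so it keeps running after you log out, with its output going to a file. A minimal sketch, using a short shell command as a stand-in for ./safe_vault:

```shell
# Run a stand-in for the vault under nohup, sending output to a log file.
# On a real instance:  nohup ./safe_vault > vault.log 2>&1 &
nohup sh -c 'echo vault output' > vault.log 2>&1 &
wait $!            # here we wait for the demo; normally you would just log out
cat vault.log      # use tail -f vault.log to watch a real vault
```

Unlike screen, you can’t reattach to the process itself - you just follow the log file.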

I’ve never needed sudo for vaults


I found I needed it because the chunks are stored in /tmp, which is owned root:root.
It could be changed, but it was easier/faster/lazier just to use sudo.

Yes, I know you shouldn’t use sudo unless you have to, but hey, it works, and these are throwaway test instances.
It will be a very different story when we are on production boxes.
But we aren’t :slight_smile:

A question I keep meaning to ask …

Just where does it specify that the chunks are stored in /tmp/safe_vault_chunk_store ?
Surely that should be configurable in safe_vault.vault.config?
Or perhaps I should just wait until that gets implemented when the guys are ready?

ubuntu@ip-172-31-17-18:~$ cd /tmp 
ubuntu@ip-172-31-17-18:/tmp$ ls -l
total 80
drwxr-xr-x 2 root   root   73728 Feb 16 01:02 safe_vault_chunk_store
drwx------ 2 ubuntu ubuntu  4096 Feb 16 01:51 ssh-66BqQsGyEe

I have not had that problem. It runs as my user without that need.

Maybe you changed the permissions on /tmp or /tmp/safe_vault_chunk_store?
Which would be the more ‘elegant’ way to do it.
I just wanted it working fast - as I say when we start doing this on production boxes it will be done “correctly”

NOPE

I just do this and only this

  • create VM
  • FileZilla the directory from my PC to the VM (the directory on my PC already has the modified config files)
  • SSH into the VM
  • cd vault
  • ./safe_vault

That is it.


I am thinking of using screen/nohup so I can detach my SSH terminal, but up to now I have simply left it open.

Are you sure you are not logging in as root?
Is the prompt ‘$’ or ‘#’?
If it is ‘#’ then you are logged on as root.
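Besides the prompt character, a quick way to check is id -u, which prints 0 only for root (whoami prints the username). For example:

```shell
# id -u prints the numeric user ID; 0 means root.
if [ "$(id -u)" -eq 0 ]; then
    echo "logged in as root"
else
    echo "regular user: $(whoami)"
fi
```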

If using screen and you want back out to a prompt - say to check disk space or whatever, then
Ctrl-A then d will detach you back to a prompt.
Then screen -rd to go back to the vault output.
screen has many many more useful commands but that does me 99% of the time.


Yes

The default user that AWS assigns to the Ubuntu VMs is “ubuntu” and is not root privileged. You have to use the substitute user do command (sudo - which defaults to root) to elevate privileges.


Also, your /tmp directory should have drwxrwxrwx permissions - though the last “x” is usually a “t” (the sticky bit).

The tmp directory is meant for any temporary files and as such the users should be able to read/write files to it.

But no, I have not even checked it on my VMs, let alone changed it.
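For what it’s worth, the usual mode on /tmp is 1777 - world-writable with the sticky bit (the trailing “t”), so anyone can create files there but only a file’s owner can delete it. You can check with stat:

```shell
# %a prints the octal mode (1777 = rwxrwxrwt), %U the owning user.
stat -c '%a %U %n' /tmp
```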

If we get new code tomorrow, I will try to find time to snapshot a properly sorted vault install and make it a public AMI so that others can simply launch an instance of that image.
I was going to do it for this network, but it likely won’t last past tomorrow, so there is not much point.


BTW, I finally reached the free limit of my network data out. So far I’ve been charged 11 cents.

The free-tier limit on AWS was 15 GB out.


I think /tmp is owned by root because allowing unfettered access to it can be a security loophole in certain circumstances.
So the proper thing to do would be to make /tmp/safe_vault_chunk_store r/w for your user rather than all of /tmp.
No doubt some security guru will tell us all the recommended way of doing it properly.
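A sketch of that “proper” fix - make just the chunk store writable by your user, leaving the rest of /tmp alone. The chown step needs root once, so it is shown commented out; a demo path is used here so nothing real is touched:

```shell
store=/tmp/demo_safe_vault_chunk_store   # demo path, not the real store
mkdir -p "$store"
# On a real instance, run once as root to hand the store to your user:
#   sudo chown -R ubuntu:ubuntu /tmp/safe_vault_chunk_store
chmod 700 "$store"                       # only the owner can read/write chunks
ls -ld "$store"
```

After that, the vault should be able to write its chunks without running as root.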