Shared Vault Test Pages

I don’t think we can. If we use versioning, it is logical that the name is applied to the container and not to the index file.


The NRS name resolution doesn’t apply to the path of a URL, just like on the clearnet. Once the host/domain part is resolved by NRS to some content X, the path is resolved according to what X is. E.g. if it’s a FilesContainer, the path is looked up in the flattened hierarchy it contains, following the corresponding convention/format spec; if it’s a Wallet, it will try to find a spendable balance whose name matches the path (again following the Wallet’s own convention/format spec); and so on.

I imagined that if we eventually have RDF on FilesContainers, then that’s where we could have things like “default-page = index.js”, so that a Browser app, or any other app, can find out which entry is the default one for a page/site by looking at the RDF data. That’s what I think RDF is for: self-describing content.


Here’s me uploading a website folder safeblues which contains an index.html file. (Using Windows)

$ safe files put safeblues --recursive
FilesContainer created at: “safe://hnyynyww3tkh7azot3pkxgogbdfbbzr183m71e1upfsd5149znihqr495ebnc”

So it seems to be mounting the folder on / . Am I doing something stupid (has been known)? I’m sure it used to resolve to the index.html file.


I think you just need the trailing slash on the source to flag that you want the folder’s contents, not the folder itself, to be uploaded:
$ safe files put safeblues/ --recursive


That was it - doh!


I think this will merit a FAQ!


Or more simply a correction. I have opened an issue about it:

It was replied in another topic that this was implemented on purpose, for consistency with the rsync command. But I am not convinced this is helpful. On the contrary, more people will fall into this trap than will know about this peculiarity of the rsync command.


I’m inclined to agree, although we should first ask why rsync is like this.

It’s a subtle way of adding functionality that lets you copy either the directory or the directory’s contents to the destination (no slash or trailing slash, respectively).

Without this, you have to add an extra command to create a destination directory if you want the ‘no slash’ behaviour.
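rsync’s rule can be sketched directly, without the safe CLI at all. A minimal demo, assuming rsync is installed (the paths are throwaway examples):

```shell
# Demonstrating rsync's trailing-slash rule with throwaway directories.
demo=/tmp/rsync_slash_demo
rm -rf "$demo"
mkdir -p "$demo/src"
touch "$demo/src/file1.txt"

# No trailing slash: the directory itself is copied into the destination.
mkdir -p "$demo/dest_noslash"
rsync -r "$demo/src" "$demo/dest_noslash/"
# -> $demo/dest_noslash/src/file1.txt

# Trailing slash: only the directory's contents are copied.
mkdir -p "$demo/dest_slash"
rsync -r "$demo/src/" "$demo/dest_slash/"
# -> $demo/dest_slash/file1.txt
```

The single trailing slash replaces the extra `mkdir` you would otherwise need at the destination to get the “no slash” layout.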

We should also consider who will use the CLI for this, and how. Most will, I expect, be following instructions in a how-to, so they will be copying an example.

Only those who want to write their own CLI commands will fall foul of this, and they will in general be people who appreciate the subtleties, even if, like me, they occasionally fall foul of them.

I’m torn.


Just to add that when I was discussing this with other folks, I saw some arguing against it but just as many in favour (our IT folks back then especially found it natural), so it wasn’t clear enough to me that it should be changed (and higher priorities came up as well).

Personally I find it very easy and powerful to be able to decide, with just a slash, whether it’s a folder or its contents/children that I’m uploading, and the same when deciding where I’m uploading to. Although, I do understand that it can also be easily missed and/or confuse others. This sounds to me like it comes down to users’ different backgrounds: to some it looks very strange and unnatural, to others it’s a no-brainer.

E.g. if the slash were ignored in the source, how would you signal that you want to upload the folder and not only its children? I.e. "I want to upload folder ~/myfolder onto safe://mynrs/path1 so that, for example, the file at ~/myfolder/file1.txt is then found at safe://mynrs/path1/myfolder/file1.txt".

Then, what if I want only the children of ~/myfolder to be uploaded onto the same destination path? If the slash is ignored in the source, you then need to provide a destination path to distinguish between these two use cases, and presumably you would also want a trailing slash to be ignored on that destination path. And if that destination path doesn’t exist at all, should it be created, or should the command fail? Otherwise it sounds like you’ll need an additional flag to tell the CLI what you expect to happen.

So, with the slashes in the source and destination, you have all the flexibility and power for every combination of use cases, in a very consistent way.


Just pass a destination path argument to the command for that, this is already implemented.

Currently we have 2 ways to do this. I propose to remove one of them (local path without trailing slash), so that the trailing slash doesn’t matter and becomes optional in the local path.


Yes, this is consistent with local path behavior I suggest.

Yes, this is consistent with current safe files command philosophy which creates all needed intermediate directories.


I’m not particularly fussed which way is chosen as there doesn’t seem to be much to choose between the ‘right’ and ‘wrong’ ways. So within Linux / bash utilities you have examples of the trailing slash being meaningful (rsync) or ignored (cp). I note that AWS treats a trailing slash as meaningful in commands like ls and sync although cp requires you to add the directory into the target URL if required.

Reading around, it seems that lots of devs and admins add a trailing slash to denote a directory as a best practice, even when it isn’t required, in order to differentiate a folder from a file or a symlink. So perhaps that’s the way to go, as long as it’s internally consistent.


Regardless of the trailing-slash behavior (I prefer cp’s), I think the most important thing is to allow multiple <location> args.

This enables the following types of commands to work, all of which are invalid now:

$ safe files put file1.txt file2.txt file3.txt <url>
$ safe files put file1.txt *.csv <url>
$ safe files put *.txt <url>
$ safe files put photo[1-10].jpg <url>
$ safe files put photo{1,2,3}.jpg <url>

Note that all of the above commands are expanded by the shell into the form of the first command, so the program normally sees: <src> [src] [src] … <dest>
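That expansion can be checked without the safe CLI at all, since the shell does it before any program runs. A small sketch (`printf '%s\n'` prints each argument the program would receive, one per line; the filenames are throwaway examples):

```shell
# The shell, not the program, expands globs: any command receives the
# matching filenames as separate arguments. This is the argv a
# hypothetical `safe files put *.txt <url>` would see for its sources.
demo=/tmp/glob_demo
rm -rf "$demo"
mkdir -p "$demo"
cd "$demo"
touch file1.txt file2.txt notes.csv
expanded=$(printf '%s\n' *.txt)
printf '%s\n' "$expanded"   # prints file1.txt and file2.txt, one per line
```

So supporting multiple sources is mostly a matter of the CLI accepting a variable-length argument list; the shell has already done the matching.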

If we look at commonly used unix commands for copying, transferring, compressing files, they all support this, eg: cp, scp, rsync, tar, zip, sftp.

CLI users are used to this behavior when working with files and will expect it.

See also


I completely agree.

Accepting multiple files/directories as input arguments would be a first step towards compatibility with the rsync command, but the main rsync feature would still be missing: copying files/directories in the other direction, from the network to the local host.

We cannot justify imitating the peculiar trailing-slash behavior on the grounds of compatibility with the rsync command if its main features are not implemented. I would list the following ones:

  • Implement copy from network to local host
  • Allow multiple files/directories as input argument
  • Implement missing trailing slash behavior on each input directories. For example, safe files put a/ b would copy only the content of a but the whole b directory.
  • Implement rsync flags that make sense on the safe network (--include, --exclude, --delete, …).

Maybe I am missing some features, but these are the basic ones needed to claim rsync compatibility, especially as backup software.
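The per-directory trailing-slash behavior in the list above is exactly what rsync does with mixed sources today, so it can be previewed with rsync itself, assuming it is installed (the `safe files put a/ b` command it stands in for is still hypothetical):

```shell
# rsync with mixed sources: `a/` contributes only its contents, while
# `b` is copied as a whole directory -- the behavior the proposed
# `safe files put a/ b` would imitate.
demo=/tmp/mixed_slash_demo
rm -rf "$demo"
mkdir -p "$demo/a" "$demo/b" "$demo/dest"
touch "$demo/a/inner.txt" "$demo/b/other.txt"
rsync -r "$demo/a/" "$demo/b" "$demo/dest/"
# -> $demo/dest/inner.txt      (contents of a)
# -> $demo/dest/b/other.txt    (b as a whole)
```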


Thinking further, I think there are other elements that should be consistent between safe files command and rsync command:

  • rsync is a unique command that can both create and update a mirror/backup on a remote host, whereas safe files command currently needs 2 sub-commands for that.

  • The equivalent of rsync [USER@]HOST:DEST argument is missing in safe files put command. Instead, this command has an optional destination path argument but with no special lexical markers (: character).

  • The absence of a lexical marker makes the last argument ambiguous with multiple directories: what would safe files -r put a b c mean? Copy a, b and c at the root, or copy a and b into the /c path?

  • The location of this argument also defines the direction of the operation: if it is positioned at the beginning of the argument list, it means a copy from the remote host to the local PC (it is named [USER@]HOST:SRC in this case). For consistency, it should work the same way when we introduce copying from the network to the local host.

To take into account all these elements, I propose to remove safe files put and to modify safe files sync with the following syntax:

  • To create a new container: safe files sync [OPTION...] SRC... safe:[/DEST]

  • To update an existing container: safe files sync [OPTION...] SRC... safe://XORURL[/DEST]

  • To create or update local files/directories from a container: safe files sync [OPTION...] safe://XORURL[/SRC] [DEST]

The SRC… part implements the multiple files/directories feature. The optional [/DEST] part replaces current destination path. There is a similar [/SRC] optional part when the user wants to copy a subdirectory of a container to his local host.

We could even copy a container inside the network with no local transit:

  • To create a new container from an existing one: safe files sync [OPTION...] safe://XORURL[/SRC] safe:[/DEST]

  • To update an existing container from another one: safe files sync [OPTION...] safe://XORURL[/SRC] safe://XORURL[/DEST]

and this would be even more powerful than what rsync command is capable of (the latter cannot copy from host to host).


@tfa I can’t speak for the cli devs, but I guess the way I would look at it is that the safe-cli tool is a swiss army tool for doing a whole bunch of safe network related things, of which a subset of those are file related things. So it really just aims to support the basics of copying… kind of like scp does – plus a basic sync command. And that may be enough for a minimum viable product/experience.

rsync otoh is more of a specialty tool for bi-directional syncing/mirroring with a unique syncing algo and lots of options around that. Someone else could start a safe-rsync program/project at any time that specializes in these aspects.

Or maybe full rsync capability will make it into safe-cli. dunno. I’m just saying there are other paths that can be taken for same/similar result.


I also see it the way @danda is looking at it. All the ideas being thrown around to expand the CLI feature set are good, but as said, we first need to make sure we cover the ones that are really needed for operating something like the Fleming release, and then keep adding the rest. E.g. having the files command support most/all of the features supported by rsync should/could be a long-term goal and direction, but right now we should focus on the most important ones, so we can also cover the most important ones for the other commands.

In summary, the way I personally see it is to aim for a solid SAFE CLI which allows us to perform all types of operations with Fleming, Maxwell, etc., with the lowest barrier to entry, so that a newbie who saw the news about SAFE can download the CLI and, after running a couple of simple commands, have already uploaded/retrieved data to/from the network and/or transferred safecoins.


safe://pies may eventually turn into something more than trivial proof that elementary files put and nrs create actually works. But I wouldn’t hold my breath.



I would also appreciate the rich functionality of an rsync-like tool for SAFE.

At the same time, I agree with @danda’s perspective of the CLI as a Swiss army tool.

Those design goals are typically in opposition to each other.

I believe what I describe beginning here and further here would allow both with no necessary compromise. In brief, the CLI tool with its default subcommands being broadly functional, and with externally developed subcommands being deeply functional as-needed.


I guess I don’t yet see the need for external sub-commands (plugins). If one wants an rsync equivalent (for example), why not just create a new CLI program, e.g. safe-sync? This seems more in line with the unix philosophy of doing one thing and doing it well.

At some point, a single CLI program that tries to do too many things becomes unwieldy/confusing even just reading the help/usage.


wot @danda said…

Calling @JimCollinson – tell em about KISS, Jim!!!
