SAFE Network Dev Update - March 28, 2019

#46

Maidsafe, and beyond. To write a paper you need to use maths; if that makes you a mathematician then yes, but none of the authors are pure-maths-only people. So I am not sure what you mean, really; maths is part of engineering, just a tool like any other :wink:

11 Likes
#47

It was a simple Rust program creating an account on the network, not an app that needs to authorize against a running authenticator.

If you follow the link to my post in the dev forum, you’ll see that I wanted to use the safe_authenticator sub-project from the safe_client_libs crate, but I didn’t know how to do that.

I found a workaround by creating an example program inside the crate, but this is certainly not the proper way to use an external crate as a lib.

The safe-authenticator-cli has just appeared and could be used as an example, but:

  • A complete real program is not documentation (proper documentation consists of explanations illustrated with small examples)
  • It is too complex for my needs
  • It only works on the mock network (my need was for a real network)
  • It isn’t ready yet (there are references to personal GitHub repositories)
4 Likes
#48

Right, trust me, I can see and understand your frustration; we effectively faced the same type of problems when developing the auth CLI. This is why I believe it’s a good thing to do: it uncovers any missing pieces/features, and helps us learn and understand how the Rust API in safe_client_libs can be enhanced/improved.
Thus, our experience so far with the auth CLI triggered some internal discussions about this specific aspect you brought up in that thread (and now), and there is already a plan to start looking at how to expose, from the pure Rust layer, a user-facing API similar to those you can find in the FFI layer. As you noticed, the references to personal GitHub repositories exist exactly because of all this. So we had the option of either saying “let’s first solve those issues before sharing a PoC CLI”, or sharing it so people can learn and help us in the process by testing it and providing ideas and criticism, just as we do ourselves.
So what I mean is that, as you correctly say, this is not finalised and there are still several things to work on, like those you point out: documentation, Rust APIs in safe_client_libs so you could depend on them from an external crate, making it work with the real network and not only mock, etc.

Now, there are also some other aspects to all this, which I believe will be a big challenge for all of us if we want to support our SAFE users:

It’s clear that as we progress in the project there will be more and more different types of applications with different types of requirements. Such a requirement could simply be “have a simple UX”, and it could lead devs to try to create alternatives to how users log in to the network and authorise apps.

This is a big topic, I’d say, and we could even start a thread for discussing it, but in summary: we have to be careful and try to work on solutions that end users can effectively trust. E.g. if every application embeds an authenticator, meaning that you have to enter your account’s secret and password in each of them, we are creating a very risky environment for users, not to mention the bad UX due to user exhaustion.

This is why we believe we need an authenticator which can manage the credentials, and apps only connect to the network after credentials were provided by this authenticator, thus apps never get the account’s credentials but just their own and the fixed set of permissions. If having embedded authenticators in apps becomes the norm rather than the exception, it’ll become too risky to use any application out there.

We have to work on solutions that support our users (good UX) and applications (good integration protocols; we currently have only one, the system URI protocol) so they can manage credentials and permissions solely from a very well trusted authenticator.

16 Likes
#49

Hate to see folks leaving key positions you’d just been fighting to fill, such as networking, but such is life. I hope he didn’t take a look at the ask and go “holy hell, we need to make what work how?!” :open_mouth: :laughing:. The road to Fleming continues onward; best of luck on the path forward. Maybe there’s enough internal talent to pick up some of the slack too if needed, or those eager to learn and try new things.

Some of these dev updates begin to blend together for me: we updated __ lib from 1.1 to 1.2, or changed __ protocol to __, or we did some testing here and saw this behavior. It would be good if we had a simple checklist view of the features/functionality left to code for Fleming listed somewhere. It doesn’t matter if things don’t get checked off week to week, as it often takes 2-4 weeks to knock out a hard feature, but getting a view and perspective on what features we lack would be helpful.

1 Like
#50

Engineers do not think this way, at least not decent ones. We look at it and say: hey, a challenge that I am going to succeed at.

3 Likes
#51

Technology always has limitations in a given time period (which do improve as time progresses). In the current day, if someone said “Jeremy, I need you to write an API that calls 20 distinct data sources whose hosts are on 5 different continents, aggregates all the data into a 1GB payload, and returns a response to my client server in under 1 millisecond, and I will pay you 1 million dollars”, I am going to tell you: hell no, that is impossible and a waste of my time given current technology limitations. Maybe that makes me not a dreamer; I tend to stay rational. Likewise, I think building the SAFE Network as a distributed secure network on top of the internet is entirely possible. Do I think I could stream videos or download content as effectively as on the clear net? To me the rational answer is likely no, not for a long time, until a lot of complex caching happens between requester and source destinations and we have a large number of nodes in the network hosting data. But you gotta start somewhere :slight_smile:.

#52

This is just a basic application of the paradox: if God is almighty, then can he/she/it create a rock so big that even God cannot lift it?

Your example does not fit with the current project, and yes, there will be problems out there that cannot be solved by today’s technology. But hey, how do you think new technology is developed? It’s by engineers applying physics and creating that technology. Your “too big rock” is possible with known physics (barring the impossible 1 millisecond in all cases) but needs to be developed by engineers, if it hasn’t been already. Also, the design process takes the paradox into account and only takes on projects that can be built with current or create-able technology; otherwise it usually fails early on due to a lack of funding, since funders take the time to consider the paradox as applied to engineering. As an example, I created a new technology for my thesis that people are still using today on the internet. No one had done it before and it did not exist, yet the university put up the $500 (1970s dollars) for me to actually do it, because the proposal showed how I could apply known physics/maths to do it.

But back to the project: the word from the developers is that the research needed for the project has already been done, and it’s now the hard work of writing the code, testing, adjusting, testing, testing, testing, writing further modules (e.g. safecoin), testing, adjusting, testing, testing, testing. So no, I do not accept your premise of impossibility.

Also, smart people look to expand into areas that interest them, which were not necessarily the areas that interested them 5 or 10 years ago. Being in one project for too long can present problems for young, highly motivated, very smart people, because it can limit their development in broadening their expertise. As such, they need to move on if they don’t wish to be stuck in one area of employment.

EDIT: so expect more to move on before SAFE is launched. It’s not a negative, and any good business manager knows some people will move on during the life of a long project.

3 Likes
#53

At least you understand most of it. I don’t read most of it since it’s alien to me. But try to read I must, even if understand I do not :grin::see_no_evil:

1 Like
#54

I agree with this, and the Authenticator app is perfect for this, for the reasons you mention.
I just want to point out that there is more to the picture than this. I’m sure you know, but I’ll just put it here for the broader discussion :slight_smile:

There are use cases (I have now with the SAFE.NetworkDrive for example), where the flow is completely different, and it just doesn’t make sense to use the Authenticator app for it:

The user gives the SAFE.NetworkDrive app a mnemonic, an entropy source, to derive n SAFE Network credentials from, since every new drive is a new account. When creating new accounts like that, derived from the mnemonic the user is supposed to remember, it makes no sense to create those accounts (or log in to them) through the Authenticator app; it gives no additional security. It would be trying to press a square peg through a round hole.
Also, there are the new accounts for drives that are meant to be given to others like a thumb drive, i.e. sharing drives. These don’t use the mnemonic, but it would also make no sense to use the Authenticator app to create them.
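The derive-n-credentials-from-a-mnemonic idea described above can be sketched roughly as follows. This is only an illustration: the function names are made up, and the standard-library hasher stands in for a proper key-derivation function (such as PBKDF2 or scrypt), which a real implementation would use.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a real KDF; illustration only, NOT cryptographically secure.
fn derive(mnemonic: &str, label: &str, index: u32) -> String {
    let mut h = DefaultHasher::new();
    mnemonic.hash(&mut h);
    label.hash(&mut h);
    index.hash(&mut h);
    format!("{:016x}", h.finish())
}

// One (locator, password) pair per drive index, all rooted in one mnemonic.
fn credentials_for_drive(mnemonic: &str, index: u32) -> (String, String) {
    (derive(mnemonic, "locator", index), derive(mnemonic, "password", index))
}

fn main() {
    let mnemonic = "legal winner thank year wave sausage worth useful";
    let (loc0, pwd0) = credentials_for_drive(mnemonic, 0);
    let (loc1, _) = credentials_for_drive(mnemonic, 1);
    // Deterministic: the same mnemonic and index always yield the same pair,
    // while different indices yield different accounts.
    assert_eq!(loc0, credentials_for_drive(mnemonic, 0).0);
    assert_ne!(loc0, loc1);
    assert_ne!(loc0, pwd0);
    println!("ok");
}
```

The point is that the user only remembers the mnemonic; every drive account is recoverable from it, so there is nothing for an authenticator to add.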

I’m sure there will be more cases like this, which is part of why I made a NuGet pkg out of it (another part is that it’s a cleaner way to keep it separated from my app(s)).

10 Likes
#55

TL;DR for @maidsafe people: Please watch the video at the end about capability-based authorization and delegation (and about the abysmal state of the authorization and access schemes currently in use everywhere). I’m shamelessly begging you guys to consider going in this direction for authorization, because this alone would put the Safe Network decades ahead of all the others. There’s no need to repeat the failures of the past.

I’ve brought it up before, but I can’t stop myself from bringing it up again. Why not just use something tried and tested, like capability-based authorization? Rolling an authorization scheme from scratch is almost as bad as rolling your own encryption.

After verifying the permissions through a GUI similar to what we’re shown when we install an app on a phone, the authenticator would just create a restricted derivative of the certificate it already has: restrict it by folder, add a time limit, remove write access, etc. (The authenticator’s certificate may already be a restricted version of the master key, for example when I don’t want my relatively insecure phone to access my more sensitive data.) The new certificate is then handed over to the app. At this point, the authenticator’s job is finished.

There’s no need to keep track of the certificates and there’s no need to store them on the network because they are standalone just like your passport in real life. Apps would just attach the relevant certificate to each request and anybody (vaults, sections, whomever concerned) could verify if the request is valid.

You may have noticed that there’s no talk of a “fixed set of permissions” or similarly insecure, unreliable, and inflexible blasphemies. The root cert for an account (which may be just the document owner’s private key) has all access, and each derivative certificate may have a number of caveats attached to restrict its scope.

Please also note that this way we’d have solved not just authorization but also delegation (something ACLs, for example, can’t do at all).

Capabilities fall right into place here as well. Authentication (identity) is irrelevant for capabilities because we don’t have to care who you are as long as you can show us a certificate that gives access to perform the operation on the thing. So, each drive’s being a new “account” is not confusing because an account is little more than a private key that encodes ownership.


One implementation of the above is Macaroons, which has a Rust library, and I have written about them before here: Bearer credentials with caveats for distributed authorization

A talk about them from the 2017 Rust Bay Area Meetup:

Slides for the talk:
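For intuition, the attenuate-and-verify flow described above could be sketched like this. This is not the API of any real macaroons crate; it’s a toy illustration in which the standard-library hasher stands in for HMAC (so it is not cryptographically secure), and all names are made up.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for HMAC; NOT cryptographically secure.
fn chain(key: u64, data: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    data.hash(&mut h);
    h.finish()
}

// A bearer token: a list of caveats plus a signature chained over them.
#[derive(Clone)]
struct Token {
    caveats: Vec<String>,
    sig: u64,
}

impl Token {
    fn mint(root_key: u64) -> Token {
        Token { caveats: Vec::new(), sig: chain(root_key, "root") }
    }
    // Attenuate: anyone holding a token can ADD a caveat,
    // but nobody can remove one without knowing the root key.
    fn attenuate(&self, caveat: &str) -> Token {
        let mut t = self.clone();
        t.sig = chain(t.sig, caveat);
        t.caveats.push(caveat.to_string());
        t
    }
}

// Verification re-derives the chain from the root key alone:
// no lookup table of issued certificates is needed anywhere.
fn verify(root_key: u64, token: &Token) -> bool {
    let mut sig = chain(root_key, "root");
    for c in &token.caveats {
        sig = chain(sig, c);
    }
    sig == token.sig
}

fn main() {
    let root_key = 42;
    let full = Token::mint(root_key);
    let readonly = full.attenuate("op = read").attenuate("folder = /photos");
    assert!(verify(root_key, &readonly));

    // Tampering with a caveat invalidates the signature.
    let mut forged = readonly.clone();
    forged.caveats[0] = "op = write".to_string();
    assert!(!verify(root_key, &forged));
    println!("ok");
}
```

This is what makes the tokens standalone, like a passport: a vault can check validity locally without consulting whoever issued the certificate.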

10 Likes
#56

This is timely, since I’ve been grappling with SAFE logins for a basic IoT project.

I use a modified version of the client_stress_test which uses Locator/Password login credentials via safe_core (ie the bad but easy way!).

I didn’t use the SafeThing way which uses an auth_uri for the simple reason that I didn’t know how to get one.

The SafeThing js example doesn’t have any auth (i.e. it’s hidden somewhere magical).

I had to decide between

  • safe_core using Locator/Password
  • App using authenticator requests and responses

In the end just coding it in was easiest, so the authenticator technique wasn’t used despite being the theoretically superior way of doing auth.

Let’s not forget, this is just the client side of auth.

There’s a network side too, which I guess ultimately boils down to “does the account have a valid invitation for the network?”, or in the future, “does the upload come with a valid safecoin transaction?”

Network auth is important to consider for the client side, since now there’s a third option of uploading which is to supply a pre-funded private key in the application code.

I was structuring my IoT example with the intention of replacing the ‘Locator/Password’ values with a ‘PrivateKey’ value. When the device starts, it shows the address and balance, and if it needs funds it prints a message to the user. No need for an auth workflow. Simple to understand. Isolated. A unique address per device allows simple accountability. Works with secure enclaves.
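That startup flow might look roughly like the sketch below. The network calls are stubbed out, since the real client API for deriving addresses and querying balances is hypothetical here; only the shape of the flow is the point.

```rust
// Stand-ins for network calls; a real client library would supply these.
fn address_of(private_key: &str) -> String {
    // Hypothetical: derive a public address from the key material.
    format!("addr-{}", &private_key[..4])
}
fn balance_of(_address: &str) -> u64 {
    0 // pretend the device is unfunded for this demo
}

// On boot: show address and balance; ask the user to fund if empty.
fn device_startup(private_key: &str) -> String {
    let address = address_of(private_key);
    let balance = balance_of(&address);
    println!("device address: {} balance: {}", address, balance);
    if balance == 0 {
        format!("please fund {} before uploads can proceed", address)
    } else {
        "ready".to_string()
    }
}

fn main() {
    let msg = device_startup("a1b2c3d4");
    assert!(msg.starts_with("please fund"));
    println!("{}", msg);
}
```

No authenticator round-trip, no GUI: the only user interaction is sending funds to the printed address, which suits a headless device.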

All these workflows for uploading data have pros and cons. But I go for easiest, and I know many others will too, and futzing around with authenticator workflows just seems too hard for me right now. I don’t want to have to run my authenticator and my application for IoT. But I do want the benefits of the authenticator, so uploading data is a bit of a schizophrenic situation for me!

Really pleased to see the conversation flowing from @tfa @bochaco @oetyng and @JoeSmithJr etc

11 Likes
#57

Yes, that was what I meant: whether there are pure mathematicians at Maidsafe. Thanks

#58

PARSEC v2.0 (properly asynchronous with a common coin) paper for review to the Journal of Parallel and Distributed Computing

It just seems a bit strange that the community does not participate in the peer review.

#59

No, not really, since peer review needs to be done by people not associated with the paper. That is the point of peer review: to get people who are critical of the paper and not attached to it.

The community may see things and suggest things in a community review, but it would not be considered peer review. Not many here have done post-grad mathematics.

4 Likes
#60

To summarize the current status of safe_client_libs:

  • It is still on Rust 2015
  • It cannot be used as an external library
  • Maidsafe’s vision is to expose it via an authenticator

My personal vision is to also allow a direct-access API, because it is simpler and works on a headless system. So I rolled up my sleeves, migrated the crate to Rust 2018, and found the problem that prevented it from being used as an external library (one line to modify).

I have issued a PR in case Maidsafe wants to integrate it into their code base.

To use my fork as an external library, the line to add to the Cargo.toml file is:

safe_authenticator = { git = "https://github.com/Thierry61/safe_client_libs", branch = "edition2018" }

A complete program that opens a connection to an account would be:

Cargo.toml:

[package]
name = "open_account"
version = "0.1.0"
edition = "2018"

[dependencies]
unwrap = "~1.2.1"
maidsafe_utilities = "~0.16.0"
safe_authenticator = { git = "https://github.com/Thierry61/safe_client_libs", branch = "edition2018" }

main.rs:

extern crate maidsafe_utilities;
extern crate safe_authenticator;
#[macro_use]
extern crate unwrap;

use safe_authenticator::Authenticator;

fn main() {
    unwrap!(maidsafe_utilities::log::init(true));
    println!("Opening account...");
    let locator = "<Your Secret>";
    let password = "<Your Password>";
    let _ = unwrap!(Authenticator::login(locator, password, || ()));
    println!("Success!");
}

This program was tested successfully on the community network.

18 Likes
#61

Awesome! Very exciting development :hugs:

Once you’ve entered your password the first time, your program can store the granted authentication and reuse it in the future (probably stored encrypted with a short key); no reason to enter your credentials multiple times :face_with_monocle:

5 Likes
#62

Well, since @dask and I stumbled across this some days back (and we’re talking about the authentication process):

When I write a program, I provide its properties and get my encoded AuthGranted.
This granted authentication is what enables me to log into the network independently from the authenticator.

So if I don’t want to hit ‘yes I accept, yes I accept, yes I accept…’ on every computer startup, I will enable automatic re-authentication.

If the safeBook app then obviously needs access to my calendar, my family photos and my address book, I as a third party can just use the same name to contact the authenticator, which will give me the granted authentication to use the account and read all of this information.

The other scenario is that the safeBook app is nice and doesn’t want to annoy me, so it will simply save the once-granted authentication; because they are “careful” people, they will do this in plain text and store the information locally in their folder.

Now I use their name and the granted authentication and do whatever I want with their rights…

This means that programs either need to be authenticated every time you start them, or they should store the granted authentication in an encrypted way; a simple solution.

I just mentioned this to point out that not allowing other projects to use it as an external library [and helping them secure the data (by encrypting/decrypting the string) so they at most need to ask for the credentials once and then use an access token + password] won’t make it more secure, because the granted authentication is, once out there, a danger anyway if not handled with care {and automatic re-authentication should not be implemented in the official browser}.
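As a rough illustration of the “store the granted authentication encrypted” option: the sketch below XORs the stored token with a keystream derived from a short PIN asked for at startup. The cipher here is a toy stand-in (a real app would use authenticated encryption such as AES-GCM or XSalsa20-Poly1305), and all names are made up.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy keystream derived from a PIN; illustration only, NOT secure.
fn keystream(pin: &str, len: usize) -> Vec<u8> {
    let mut out = Vec::with_capacity(len);
    let mut counter = 0u64;
    while out.len() < len {
        let mut h = DefaultHasher::new();
        pin.hash(&mut h);
        counter.hash(&mut h);
        out.extend_from_slice(&h.finish().to_le_bytes());
        counter += 1;
    }
    out.truncate(len);
    out
}

// XOR is its own inverse: the same call encrypts and decrypts.
fn xor(data: &[u8], pin: &str) -> Vec<u8> {
    data.iter().zip(keystream(pin, data.len())).map(|(b, k)| b ^ k).collect()
}

fn main() {
    let auth_granted = b"base64-encoded-AuthGranted-blob";
    // Write only the encrypted form to disk; ask for the PIN at startup.
    let stored = xor(auth_granted, "1234");
    assert_ne!(stored.as_slice(), auth_granted.as_ref());
    // The right PIN recovers the token; a wrong one does not.
    assert_eq!(xor(&stored, "1234"), auth_granted.to_vec());
    assert_ne!(xor(&stored, "0000"), auth_granted.to_vec());
    println!("ok");
}
```

The design point is the same as in the post above: a token sitting in plain text in the app’s folder is a bearer credential for anyone who copies it, so the at-rest form should always be encrypted.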

2 Likes
#63

Just for clarity @tfa, this is not accurate, unless I misunderstand you here; there is a similar discussion here: SAFE Authenticator CLI

3 Likes
#64

Ok, I probably misunderstood you when you said:

This is why we believe we need an authenticator which can manage the credentials, and apps only connect to the network after credentials were provided by this authenticator, thus apps never get the account’s credentials but just their own and the fixed set of permissions.

But the best solution is to provide both possibilities: a direct-access API and another one via the authenticator.

1 Like
#65

Right, what I was trying to say is that having an authenticator is probably the safest approach for users, since all auth goes through it and the account secret/password is not exposed to apps, as it would be if each of them embedded an authenticator. E.g. if I see someone in a forum saying they created a SAFE app, I download it, and when I run it, it asks for my SAFE secret/pwd, I’ll have serious doubts; I’d personally prefer to give only a set of clear and specific permissions to that app through my trusted Authenticator app (regardless of whether it’s an authenticator app with a GUI or a CLI). Now, this cannot be a reason not to expose an API from the authenticator library in SCL.

3 Likes