Hey, guys. What was wrong with the proxy way of connecting to SAFE? Was it so bad that you chose to be tied to third-party software (Beaker browser) and external non-official libraries (safe-js)? Why couldn't these two approaches work side by side and let users choose?
The proxy was a security risk, and hard for many users to set up.
At some point, and not long from now I expect, somebody will revive the Firefox plugin and extend it to support safe:// and include safe-js. This is quite a simple project, I think, and I'd look at it myself if I weren't already working on other things. With this, users will have two options. More may follow.
Safe-js doesn’t need compiling, so I don’t know what you mean by that, and removing the proxy makes it simpler for the user, not more complex.
Safe-js will become a standard API which app developers can make use of, but can ignore if they don’t want to use it - so it is not reducing choice or options.
So it's not possible to reference a clearnet URL from Beaker browser, is it?
AFAIK the latest version has a toggle switch to enable clearnet browsing.
While I partially understand the decision to drop the proxy, it should not have happened before a Firefox/Chrome plugin or standalone proxy was made available. I doubt it caused many problems in the development of the other aspects, so it seems to be a political statement restricting your choice rather than a technical necessity.
You are overlooking two very good reasons to drop the proxy:
- the proxy is a security risk, and the last thing MaidSafe want is to expose us to security risks
- many users found it tricky to set up the proxy, or had problems because they didn’t realise it was necessary to do so.
If it is an issue, it would be quite easy for someone to reinstate the firefox plugin and update it to provide the same safe-js interface as Safe Beaker.
I think the reason the plugin was dropped in favour of the proxy was the desire to support multiple browsers, and to avoid the need to create and maintain a plugin for each (and for users to have to install a plugin).
With SAFE Beaker, there’s a real no brainer solution for a new user. Easy to understand: “to get on SAFEnetwork run SAFE Launcher and browse using SAFE Beaker.”
MaidSafe can focus on this, and the community can extend to other browsers with plugins.
I had the Beaker browser in mind, not safe-js. But that is also not true; I was misled by the names of the files to download from the safer-beaker-browser download page: there are .tar.gz and .zip files, which I assumed were source tarballs, but in fact they are distro-agnostic binary releases. So I was afraid that the lack of a package for my Linux distro would force me to compile the browser.
Anyway, I found a thread about these security concerns: Public Notice - How to hack SAFE Browser Plugin users. It seems the main security problem is the possibility of SAFE users accessing normal "clearnet" (http://) pages, right? So to prevent that, you force them to use a browser with clearnet blocked. It seems to me like a "let's forbid children to use knives, because they might hurt themselves" attitude. Eventually someone will want to open a non-SAFE link from a SAFE site, and you cannot prevent that. They will just find another way to do it.
No, that’s not the main concern with the proxy. We were discussing the removal of the proxy.
I don’t have time to rehash all the different security discussions, just to give you an idea of what I surmise (just my opinion) was behind this change. If you want to go into the issue in more depth I suggest you start a new topic, or better still read up on the background further first, because as I’ve said, the issues around this are more involved than you have picked up so far. You seem to want to paint the decision a certain way, but without knowing the full background or indeed asking MaidSafe to explain their reasons. If you ask they will usually respond, so that’s another option for you.
Thanks for clarifying the “compiling” question. I’d like to step out of this now.
Can someone explain how this browsing actually works?
- The launcher has the REST API right?
- Where is the part for browsing documented? I can only find docs on auth, NFS and DNS.
- How come some apps need to auth with safe launcher but Beaker Browser does not?
- What does safe-js have to do with it?
If you’re browsing public data, then you don’t need to identify yourself; you just need to make well formed GET requests to the launcher and the browser does that. A page with the new comment.blog function prompts for authorisation because it looks to note who is commenting, unless they opt-out as anonymous.
I'll guess at the answer for:
what you're looking for is the DNS get file endpoint. That includes Authorization in the header, which I suppose caters for requests to private data, but for public browsing it's perhaps just empty. I'm not sure yet of the difference between the DNS and NFS get file endpoints, but it seems NFS is only for authorised requests and DNS accepts unauthorised ones.
NFS get file can be used to stream the file in chunks, look at the “Range” header.
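As a sketch of what a chunked read might look like (the `/nfs/file/...` path, the `app` root and port 8100 are assumptions based on this thread, not official docs), the Range header is just standard HTTP:

```javascript
// Hypothetical sketch: request one chunk of a file from the launcher's
// NFS "get file" endpoint using an HTTP Range header. Endpoint path,
// root directory and port are assumptions, not the official API.
const LAUNCHER = 'http://localhost:8100';

// Build a standard HTTP Range header value for bytes [start, end] inclusive.
function rangeHeader(start, end) {
  return `bytes=${start}-${end}`;
}

// Compose the URL and fetch options for a chunked read; `token` is the
// auth token the launcher handed back when the app authorised.
function nfsChunkRequest(filePath, token, start, end) {
  return {
    url: `${LAUNCHER}/nfs/file/app/${filePath}`,
    options: {
      method: 'GET',
      headers: {
        Authorization: `Bearer ${token}`,
        Range: rangeHeader(start, end),
      },
    },
  };
}

// Example: ask for the first 1 KiB of a video file.
const req = nfsChunkRequest('videos/demo.mp4', 'TOKEN', 0, 1023);
console.log(req.options.headers.Range); // bytes=0-1023
```

Issuing successive requests with advancing ranges is what gives you streaming rather than downloading the whole file at once.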
I can’t however seem to do any request without authorizing. I have the feeling that for (non-authenticated) browsing there is a different API that’s not documented on api.safedev.org.
Make sure you don't provide the Authorization header and token in your DNS get file request; if you do send it, the launcher will try to look up the token you provide and find that you are indeed unauthorized.
E.g. I’m able to get the image with this request: GET http://localhost:8100/dns/site/bochaco/bg1.jpg (you can even try this URL with any browser and just the launcher)
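For illustration, the same unauthenticated GET can be assembled from a page's script. The helper below just builds the URL shown above; the host and port are whatever the launcher listens on locally (8100 here, as in the example):

```javascript
// Build the launcher's DNS "get file" URL for a public site.
// No Authorization header is involved, so this only works for public data.
const LAUNCHER = 'http://localhost:8100';

function dnsFileUrl(service, longName, filePath) {
  return `${LAUNCHER}/dns/${service}/${longName}/${filePath}`;
}

const url = dnsFileUrl('site', 'bochaco', 'bg1.jpg');
console.log(url); // http://localhost:8100/dns/site/bochaco/bg1.jpg

// Fetching it (requires the launcher to be running locally):
// fetch(url).then(res => res.blob()).then(img => { /* display it */ });
```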
Not sure, but do you have launcher running?
Or wait no, that was only when we needed the proxy
Everything on the clearnet and on your computer's hard drive should be considered questionable security-wise, but what if you generate Bitcoin wallet addresses on the SAFE Network?
Check it out @ safe://safecoins.iou/
You can also download the bitaddress.org file and upload it to your own SAFE ENVIRONMENT
Right now CTRL + S is not possible with the SAFE Browser. It would be fun if you could CTRL + S and save directly to your demo app's public or private folder. That way you would no longer have to download stuff onto your vulnerable computer hard drive.
This is where a mounted safe net drive would be handy. You could then just save it directly from the browser to safe storage.
Yes; one doesn't often receive (error) messages from non-existent APIs.
Darn, I'd really like to do some quick tryouts, but as a busy person I can only allocate 30 minutes per week to play with the SafeNet stuff. There is a lot of information, tutorials, webpages and knowledge in different threads scattered across different forums, but there isn't one single up-to-date entry point for quickstarting curious devs with time constraints.
Example - the Web apps section on that page is out of date and going to trip folk up right away:
The web proxy injects CSP headers to prevent mixing clearnet with safenet.
Here are the CSP Headers:
default-src 'self' *.safenet; object-src 'none'; base-uri 'self'; form-action http://api.safenet; frame-ancestors *.safenet; child-src *.safenet
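As a rough sketch of what that injection amounts to (illustrative only, not the actual proxy's code), a response filter would add the header quoted above to every response it relays:

```javascript
// Minimal sketch of a proxy-style response filter that injects the
// Content-Security-Policy header quoted above. Illustrative only;
// the real proxy's implementation will differ.
const CSP = [
  "default-src 'self' *.safenet",
  "object-src 'none'",
  "base-uri 'self'",
  "form-action http://api.safenet",
  "frame-ancestors *.safenet",
  "child-src *.safenet",
].join('; ');

// Given a plain headers object, return a copy with the CSP added.
function withCsp(headers) {
  return { ...headers, 'Content-Security-Policy': CSP };
}

const headers = withCsp({ 'Content-Type': 'text/html' });
console.log(headers['Content-Security-Policy']);
```

With `default-src` restricted to `'self'` and `*.safenet`, the browser refuses to load scripts, images, or frames from clearnet origins, which is how the mixing is prevented.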
Can you tell me (because I can't get it to work!) what is necessary to access the launcher directly (with Beaker 0.3.0), without using safe-js, assuming this is still possible. A conversation with @joshuef leads me to understand that it is, but I am hitting a problem: Working locally with XmlHttpRequest.
Woaaah, that is a great update again!! I am totally behind on everything as I have to focus on other work projects, but I am really eager to take these tutorials… Great news about the funding; I'm really glad the lights turned green! Congrats everyone, let's keep this going.
I’ve just been looking at this latest test today and have noticed that every time I load a page with comments I have to allow the API. Is this something that will be “improved upon” in the future? I don’t think the average user that wants to browse some pages wants to deal with pop-ups asking for permission every time a page allows for comments.
Kind of reminds me of those pop up ads in the 90’s. Can’t say I have been missing those…
Agreed. There is some discussion starting on the Launcher user experience so a good time to think about this.
What do you think would be the ideal user experience here, taking into account the need for authorisation before a site can store data in the user’s account (e.g. a comment)?
Maybe also think about other kinds of app, and what / when they should seek authorisation.
Input from users in things like this will help developers create better apps.