DevCon talk: Supercharging the SAFE Network with Project Solid

When I first heard of the internet in the early 90s (showing my age now) my initial thought was: fantastic, we'll all get a space on the internet, i.e. our own private vault, which I called a Universal Information Account (UIA); one which can't be taken away from us, where we own our data (including our genetic makeup) and engage with multiple interchangeable tools/services!! Of course we all know it went exactly the opposite way, hence this very discussion! The Project CommunityLink Open Data Ecology (CODE) Lighthouse project and the Redland "Enterprise" smart city (Mutual Asset Pool/MAP) proposal introduced here are designed around this fundamental premise of personal data ownership and the view that only "we can hold the keys to our data", via our UIA.

So from a personal and community perspective I am very excited to see David's and Mark's comments re:

  • with the use of semantic data sharing and privacy in its apps, all categorised and searchable. So you don't like my messenger app? OK, switch out the app but keep your data. Empowering for users, and taking our message of "no need to worry about infrastructure" to a new level. A win-win-win for SAFE, Solid and citizens
  • No more content forced on you for Facebook’s own purposes. No more crawling of your data, except by the individuals or organisations you trust with it, and only on your terms. No more “click here to agree or you won’t be able to use a social network with all the data you uploaded”.
  • This Solid/SAFE app that uses LinkedData can pull in different data types available in the user's own storage (produced by other apps) and from public resources (a dominant design, as Facebook was)
  • It creates the world we want, and I don’t think there will be any need to force devs to take this route once we have some decent Solid+SAFE apps running which other devs can improve. So devs who don’t do this will find it harder to acquire users on these new platforms.

all of which, to me, describes what a Universal Information Account does (or should do)!!

One of the challenges in creating new knowledge around the Linked Data and personal data ownership concepts, and in learning from each other as we are doing now, is that there are many new words, multiple concepts and many ways to say the same thing. We need some simple language/terms which encapsulate the Linked Data and personal data concepts and give a very simple picture of the outcomes. Having a clear picture of the "data" outcome makes it a lot easier to get there.

I'm wondering if introducing the term Universal Information Account into the lexicon helps to simplify things and move the discussion forward, by providing a "common generic/genomic label/term" (based on universal principles) which encapsulates the above "data" concepts/arguments etc.

It provides an easily understandable label for us: i. as citizens and end consumers, to access/understand the argument/service and make an informed decision; ii. as advocates working to deliver this capability. This could be one of the "colloquialisms, community preferences, abbreviations, legacy names" referred to in the excellent Hunter Lester Gene Ontology post.

3 Likes

I think it's important not to enforce too many standards and definitions from the get-go, as they'll invariably end up having issues and need time to evolve, or they'll end up working for some use cases but not others, at least for things like ontologies. Making something like schema.org into THE standard would mean getting stuck with its limitations.

If two data sets about genes are published using separate ontologies, a third party could link these together to make it feasible to query the data as if they used the same ontology. Then, if there turned out to be issues with how the data was linked, another way of linking the data/ontologies could be published by someone else.
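
To make this concrete, here's a rough sketch of what such a third-party mapping could look like. It uses Python's rdflib, and the ontology and dataset URIs are all invented for illustration, not taken from any real gene data:

```python
# Rough sketch: two gene datasets published with different (made-up)
# ontologies, plus a third-party mapping that lets one query span both.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF

ONT_A = Namespace("http://example.org/ontologyA#")
ONT_B = Namespace("http://example.org/ontologyB#")

g = Graph()

# Dataset 1 describes a gene using ontology A.
g.add((URIRef("http://example.org/data1/brca1"), RDF.type, ONT_A.Gene))
g.add((URIRef("http://example.org/data1/brca1"), ONT_A.symbol, Literal("BRCA1")))

# Dataset 2 describes a gene using ontology B.
g.add((URIRef("http://example.org/data2/tp53"), RDF.type, ONT_B.GeneRecord))
g.add((URIRef("http://example.org/data2/tp53"), ONT_B.geneSymbol, Literal("TP53")))

# A third party publishes the mapping as linked data of its own.
g.add((ONT_A.Gene, OWL.equivalentClass, ONT_B.GeneRecord))
g.add((ONT_A.symbol, OWL.equivalentProperty, ONT_B.geneSymbol))

# Query both datasets as if they shared ontology A, by following the mapping.
results = g.query("""
    PREFIX onta: <http://example.org/ontologyA#>
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    SELECT ?gene ?symbol WHERE {
        { ?gene a onta:Gene ; onta:symbol ?symbol . }
        UNION
        { onta:Gene   owl:equivalentClass    ?eqClass .
          onta:symbol owl:equivalentProperty ?eqProp .
          ?gene a ?eqClass ; ?eqProp ?symbol . }
    }
""")
for gene, symbol in results:
    print(gene, symbol)
```

If the mapping turned out to be wrong, someone else could publish a different set of equivalence triples and the same query could be run over theirs instead.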

5 Likes

Tim gave a lecture yesterday at MIT calling for people to get together and re-decentralise the Web.

Short summary here:

Extract:

Such an endeavor would entail bringing together “the brightest minds from business, technology, government, civil society, the arts, and academia” to establish a system “in which people have complete control of their own data; applications and data are separated from each other; and a user’s choice for each are separate.”

The overarching goal? “To build a new web which will again empower science and democracy.”

“Let’s re-de-centralize the web,” Berners-Lee declared. “It was designed as a de-centralized system, but now everyone is on platforms like Facebook,” he added, detailing how social media can be polarizing to a degree that it threatens democracy.

23 Likes

IMO the ability for people to have complete control of their data will never be achievable if we have to connect our data through proprietary hubs, as they will retain ownership of the data and try to extract as much as they can from that position of control/power. So we need something that is the complete opposite of the current system: one that is free of the corporate-capitalist middleman extraction model designed to "maximise their exchange margin".

So we need (the missing link IMO) an open data system model/design/method which can interlink (communitylink/CL) proprietary data for the common good. This model must give citizens, communities and industries a structured approach which enables members to establish trusted, commercially neutral ODE (Solid/SAFE) hubs that connect proprietary data within and through the open data ecology (ODE) network, with user-established governance.

These Community Interlinks/exchanges must be non-profits (mutuals) whose sole objective is to maximise the linkages (on behalf of members), minimise the cost of exchange, and pass the ODE CL or public infrastructure surplus to members via a CL trading algorithm (remutualisation). An additional benefit of being able to interlink through open data hubs which are not extractive is that it will enable us to create "closed loop automation product solutions" (Codemaps) across multiple rightholders/stakeholders, creating mutual asset pools and efficiency benefits (not currently possible) within the decentralised autonomous network (SAFE). This appears to be a natural extension of the Open Data Institute initiative established by Tim Berners-Lee with Nigel Shadbolt, and of the "Who should hold the keys to our data" article you posted.

As I see it, by signing up for the UIA account (as per the safeplume demo using Solid & SAFE) within the Solid/SAFE hub-enabled open data network system, you can support the network as a farmer, you get control of your data, and as an account owner (your private space on the net that you own and control) you become a Universal Basic Equity (UBE) rightholder (identity token), sharing in the profits created by the system and from permission-based access to your data. People will be able to establish their own closed loop networks of trust and join closed loop solutions such as the local community hub, which is an ODE. This will also enable you to connect to other "products" (closed loops/mutuals) sharing in the profits of the system, e.g. for your health and financial products. So, in theory, a better product at a better price, with profit share. The "Who holds the keys" article concludes with "Unlike those blue hyperlinks, this is a step forward that will only happen with state intervention". Again, IMO we can't rely on state intervention; this can be a consumer- and community-driven change.

So our collective endeavour needs to be able to place some ODE interlinks between some proprietary data. This is all well and nice in theory. To start we need a closed loop product/prototype which builds on safeplume (the endeavour to bring our minds together), the tools (a proposed set of ODE CL Codemap conventions and tools built around Solid & SAFE) and the context (ODE CL model, business case e.g. smart city, and investment case).

So where do we start?
IMO progressing the safeplume demo is the strategic and systemic path of greatest leverage for both the Solid and SAFE vision and the path to market, so i. it needs to be funded, and ii. it needs to be built in the open data model context described above, with a Codemap product/prototype, the tools and the context. So how do we fund this? Who would have the greatest interest/benefit in making a contribution and supporting our transition to the new open data ecology and circular economy/democracy vision, as per Tim's dream?

1 Like

I agree with this. Unfortunately I don’t understand much of the rest of your post. I googled ‘open data ecology’ and it came up blank. Likewise with ‘codemap’ and ‘closed loop automation product solutions’. Could you write a tl;dr version?

1 Like

Tx for your comment. I have just added a response/comment in the related project thread Redlands "Enterprise" smart city - #4 by CommunityLink, which might assist. It provides an intro to the Codemap, closed loop etc. terminology in the context of a practical commercial path forward for Mark's safeplume innovation. Happy to write a tl;dr version. Can you explain what that is?:blush:

I’ll bite…

tl;dr = too long; didn't read

3 Likes

Handy diagram of Solid architecture…

https://rawgit.com/solid/solid/master/diagrams/solid-architecture.svg

5 Likes

SAFE + Solid = no need for Google

No doubt there will be a use for search, but if people use apps which create RDF / LinkedData, then search will be built in from the start. Here’s Tim explaining:

I think it is really important that you understand why it is useful to make little RDF pages which all link to each other.

I want you to understand that if people do that, then they will make lots of triples in the end which you will be able to index in your sparql store … but they can do lots of things using just local links and no global search.

Like the web was very functional before Google, because people learned how to follow links (Easy) and make links (done carefully). After people had made lots of links between, for example, blogs or calendar entries and home pages, then Google came along with a disk big enough to store them all so you could search them — but let's go back with Solid to before that. Solid can wok at a basic level with just making and following links. If it does work, then linked data will spread like a crazy vine and we will have masses of it. So I need you to wrap your head around that model.

– Tim Berners-Lee (today)
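
To give a concrete feel for what those "little RDF pages which all link to each other" might look like, here's a rough sketch using Python's rdflib. The documents and URIs are invented, so treat it as an illustration of the idea rather than Tim's example or a real Solid app:

```python
# Rough sketch: two tiny "pages" of RDF (invented URIs) that link to each
# other, useful just by following links, and indexable as triples later.
from rdflib import Graph, URIRef
from rdflib.namespace import FOAF

alice_doc = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<https://alice.example/profile#me>
    a foaf:Person ;
    foaf:name "Alice" ;
    foaf:knows <https://bob.example/profile#me> ;
    foaf:weblog <https://alice.example/blog/> .
"""

bob_doc = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<https://bob.example/profile#me>
    a foaf:Person ;
    foaf:name "Bob" ;
    foaf:knows <https://alice.example/profile#me> .
"""

# A client only needs the documents it has fetched; it follows links
# locally, with no global search involved...
g = Graph()
g.parse(data=alice_doc, format="turtle")
g.parse(data=bob_doc, format="turtle")

alice = URIRef("https://alice.example/profile#me")
for friend in g.objects(alice, FOAF.knows):
    print("Alice knows:", friend, "named", g.value(friend, FOAF.name))

# ...and the very same triples can later be bulk-loaded into a SPARQL
# store, which is where a Google-like index would come in.
```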

24 Likes

That will be good. Search engines can be good, but I do remember the days of the web being the web of links.

Didn’t know that cooking would be involved though “wok at a basic level with just making and following links”. Guess I’ll have to buy some of these “links” and bake them into my next pie

2 Likes

My talk in the OP was focused on how Solid is a platform for different user-oriented apps sharing data, and concluded with the point that this is not going to be just your data, but everyone's data - all able to be made, mixed, mashed, shared and edited by different apps.

From the other end of the spectrum we have big projects adopting the same semantic LinkedData / RDF in government, NGOs and commercial applications. Even these don't tend to be well known, which means it's easy to think this isn't gaining much traction, so I'm going to post one every so often for those who don't follow it in other forums. Here's one…

11 Likes

Would this "forced" semantic data API entail the network storing linkage information for all data uploaded to it? I'm thinking the biggest win here would be the ability to crawl the graph of the internet in any direction. If the network offers a service to go backward (e.g. find all blog posts that link to this website, or find all research papers that link to this one), that would change the internet in a BIG way. You could easily figure out who is talking about what, precisely how widely cited a resource is, etc. From what I understand, Solid doesn't mandate bidirectional links, and couldn't be expected to without some kind of graph database being available. Being able to see and explore the raw connectivity of the internet would be a very exciting superpower (and would decrease our dependence on search engines with their own massive databases and in-built biases).

The big reason I see for not having a querying service as an app separate to the network is that any external app will have to find data before backwards links can be established. However if SAFE knows about links, then it can maintain a graph just by incrementally updating it with new links the moment data is published.

Does this make sense or am I thinking about this all wrong?

2 Likes

Solid kind of includes database functionality without it necessarily being implemented using a database. Solid defines both a Web interface (LDP) for resource storage and a query language (SPARQL) that can be implemented in the client (or server, if you have one) to query and crawl the dataset, which is not just what is in your storage but the whole Web. So everyone has their own tiny Google-like crawler, without needing one big database owned by Google.

So it isn't quite as you imagine, but I think it's feasible to expect much of it will be along those lines. For example, the network can't decide to link blog posts together etc., that has to be the decision of the app, but it can adopt Linked Data for internal structures to encourage this. So, for example, using Linked Data for user profiles, providing a Solid / Linked Data API to SAFE NFS, providing any extra ontologies needed etc. all makes it easier for developers to discover this semantic approach and realise the benefits. Benefits such as linking a user profile with documents the user publishes, and linking citations in those documents with the profiles (or rather WebIDs) of the authors of the cited documents.
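
As a rough illustration of that last point, here's a sketch using Python's rdflib. The URIs are invented, and the CiTO citation vocabulary is used purely as an example, not something Solid or SAFE mandates:

```python
# Rough sketch: a WebID profile linked to a document it publishes, and a
# citation in that document linked to the cited author's WebID.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF

CITO = Namespace("http://purl.org/spar/cito/")  # CiTO citation ontology

g = Graph()
me = URIRef("https://alice.example/profile#me")            # Alice's WebID
my_paper = URIRef("https://alice.example/papers/solid-safe#it")
cited_paper = URIRef("https://bob.example/papers/rdf-basics#it")
cited_author = URIRef("https://bob.example/profile#me")    # Bob's WebID

g.add((me, FOAF.made, my_paper))                     # profile -> document
g.add((my_paper, DCTERMS.creator, me))               # document -> profile
g.add((my_paper, CITO.cites, cited_paper))           # citation link
g.add((cited_paper, DCTERMS.creator, cited_author))  # cited doc -> its author

# "Whose work do Alice's documents cite?"
q = """
PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX cito:    <http://purl.org/spar/cito/>
SELECT ?citedAuthor WHERE {
    <https://alice.example/profile#me> foaf:made ?doc .
    ?doc cito:cites ?cited .
    ?cited dcterms:creator ?citedAuthor .
}
"""
for row in g.query(q):
    print("Alice cites work by:", row.citedAuthor)
```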

An app that does this will produce data for the user that is more useful than one which doesn't, which gives developers an incentive to do it and will further drive adoption of this approach.

3 Likes

I think apps that correctly link as much as they can, via kinds of AI / evolutionary programming or other methods, will be much more capable and valuable to people. They will then help create those links to other data sets of interest. Hopefully this will happen as the app is used more, and it will not censor info or try to filter it for a person's beliefs (like Google search etc.) but will continue to show opposing views, different opinions and more. I believe that with a semantic web secured peer to peer this is entirely possible, as apps that offer more value like this will become dominant. And if those apps ever change that behavior, switching to another will be simple, as long as we keep the data on the network (no problem) and semantic.

At least the tools will be there.

3 Likes

Thank you @happybeing and @dirvine, these are all useful insights, however I think we dodged my key question here. Today on the internet, links are all unidirectional. If I make a blog post and I link to a news article, then it’s easy for you to travel from my blog post to the article. However, if you happen to have stumbled upon that news article, you have no way to know what blog posts, tweets, or Facebook posts have linked to it. Perhaps you’re interested in this because you want to see whether people think the article is credible or not. Internet technology simply doesn’t support that kind of exploration.

For this to be plausible, links between pages can’t be hidden within them. The links have to be stored outside the pages, so that if I’m reading the news article I can see all the connections to it. This capability would give us an unprecedented ability to understand how we as a global network of people are communicating, and allow important discoveries regarding who is talking about and working on what.

To link all the world’s information together bidirectionally, we can’t rely on individual apps to publish the backwards links, because I could write my blog post with a different app to the one used to publish the news article. Instead, what we need is a way to query these links independent of how they were created. This is something the SAFE network could do. The only new feature needed is an in-built distributed graph database of all the links between data that is published on the network, so I can ask it “show me all the things that link to this news article”, and I can discover the blog posts, and the tweets, and the Facebook posts. A more compelling example: every piece of data on the internet will have a built-in comments system available. All you have to do to leave a comment on a piece of data is link to it. Anyone else who stumbles onto the data can then query the network for comments on it.
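
Just to illustrate the shape of the lookup I have in mind, here's a rough sketch. It uses Python's rdflib as a stand-in for whatever graph store the network might maintain, and all URIs are invented; this is not a real SAFE API:

```python
# Rough sketch (invented URIs, not a real SAFE API): if the network kept a
# graph of every published link, "what links to this article?" becomes a
# single backward lookup over that graph.
from rdflib import Graph, Namespace, URIRef

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("https://vocab.example/terms#")

article = URIRef("safe://news.example/articles/42")

# Pretend the network built this graph incrementally, adding each item's
# outgoing links at the moment it was published.
links = Graph()
links.add((URIRef("safe://blog.example/posts/7"), SCHEMA.citation, article))
links.add((URIRef("safe://social.example/posts/99"), SCHEMA.about, article))
links.add((URIRef("safe://comments.example/c/1"), EX.inReplyTo, article))

# Backward traversal: everything that points at the article, regardless of
# which app created the link or which predicate it used.
for source, predicate in links.subject_predicates(object=article):
    print(source, "links to the article via", predicate)
```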

The technical nitty-gritty is beyond my ability to discuss, but if the SAFE network is going to store all the world’s data, there’s a once-in-a-lifetime opportunity to have it store all the world’s links as well, so we can see how all that data fits together.

I’m not the first person to see the value in this: Ted Nelson had bidirectional links as one of the defining features of his never-to-be-realised Xanadu project. We finally have the technology to make it happen.

8 Likes

Nick, thanks for clarifying.

Indeed, though it's not a feature that is present, nor one that is simple or, I think, feasible to build in at the network level. So it would have to be at the application layer IMO, and therefore a choice rather than mandatory. This could be made easier for apps to support by providing libraries or an API, and could be encouraged by clearly demonstrating the benefits to users.

I think any benefits or downsides will depend on the use case (so not necessarily a bad thing for it to be optional), and I suspect that some apps already attempt to do this or something along these lines. But I don’t see it as something that can be made mandatory by the network.

If you want to look further I would start by looking at what Dokieli does (a collaborative authoring application) and from a database perspective I’d look at Wikidata.

Thanks for your input.

9 Likes

Just to make this public knowledge: Dokieli looks to be an implementation of exactly the kind of bidirectional linking system I was referring to, under the terminology of "annotations". In fact I just discovered that Web Annotations is now a W3C standard too, and this is what Dokieli bases itself on.

Dokieli and the W3C standards deal with “backwards linking” by having annotations (e.g. user comments) emit notifications to the resource (e.g. news article) they’re referring to. These notifications enter an inbox owned by the resource. The inbox can then be exposed to viewers of a web page in order for the annotations / backwards links to be displayed. Because the inbox is owned by the resource author, they can choose to censor any of the annotations. Whether this is a good thing or a bad thing is up for debate.
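
In code terms, the flow is roughly as follows. This is only a sketch of the general Linked Data Notifications pattern, not Dokieli's actual implementation; the URLs are invented and Python's rdflib and requests are used just to show the steps:

```python
# Rough sketch of the notification flow (URLs invented; error handling and
# authentication omitted; general LDN pattern only, not Dokieli's code).
import json

import requests
from rdflib import Graph, Namespace, URIRef

LDP = Namespace("http://www.w3.org/ns/ldp#")

article = "https://news.example/articles/42"

# 1. Fetch the article's RDF description and find its ldp:inbox.
g = Graph()
g.parse(article)  # assumes the article is available as RDF (Turtle, JSON-LD, ...)
inbox = g.value(URIRef(article), LDP.inbox)

# 2. POST a Web Annotation that targets the article into that inbox.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "target": article,
    "bodyValue": "I don't find this article convincing, and here's why...",
}
requests.post(
    str(inbox),
    data=json.dumps(annotation),
    headers={"Content-Type": "application/ld+json"},
)

# 3. A reader of the article can later GET the inbox (if it is readable)
#    and render the annotations as back-linked comments.
```
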
In summary, this kind of system differs from my pipe dream because:

  • It is opt-in, and enough people have to decide they want to integrate this annotation system into the web before it becomes useful.
  • It enables censorship, because authors of resources can filter out the annotations they don't want to be visible from their resource. The annotations will still be present on the web; they'll just not be easy to find.
8 Likes

I don’t think this qualifies as “censorship”. If I give a lecture and refrain from telling the whole world who attended that lecture, I’m not really “censoring” anything.

Also, let’s say I’m interested in hunting rifles and put up a page about them. Then some crazy organization finds my page useful for their terrorist or whatever purposes and starts linking to it. On a purely emotional level I may be reluctant to show the whole world that not only hunters but actual criminals like my page. So, if I understand the issue correctly, I definitely think the annotations/backwards linking should be optional and opt-in.

3 Likes

I agree that there are valid reasons to want to be able to filter out back-links from a resource you’ve published, like illegal activity. That’s certainly not “censorship”. However if your resource is a news article on how Donald Trump is amazing and you hide all the opposing comments, leaving only the positive ones, that could be considered censorship. I think the appropriate classification depends on the intent.

Nevertheless I’m seeing a good reason to be able to hide links, so maybe it does make sense for apps to handle this.

4 Likes

Just realised this had never been posted on r/cryptocurrency.

It feels like a good time to get some more awareness now the markets are making a little comeback.

Only posted 45 minutes ago and it’s already had 177 views.

12 Likes