DevCon talk Supercharging the SAFE Network with Project Solid


Handy diagram of Solid architecture…


SAFE + Solid = no need for Google

No doubt there will be a use for search, but if people use apps which create RDF / LinkedData, then search will be built in from the start. Here’s Tim explaining:

I think it is really important that you understand why it is useful to make little RDF pages which all link to each other.

I want you to understand that if people do that, then they will make lots of triples in the end which you will be able to index in your SPARQL store … but they can do lots of things using just local links and no global search.

Like the web was very functional before Google, because people learned how to follow links (easy) and make links (done carefully). After people had made lots of links between, for example, blogs or calendar entries and home pages, then Google came along with a disk big enough to store them all so you could search them — but let's go back with Solid to before that. Solid can wok at a basic level with just making and following links. If it does work, then linked data will spread like a crazy vine and we will have masses of it. So I need you to wrap your head around that model.

– Tim Berners-Lee (today)
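To make the "little RDF pages which all link to each other" idea concrete, here is a minimal sketch in Python. It is not real RDF tooling; triples are just (subject, predicate, object) tuples and the IRIs and predicate names are made up for illustration:

```python
# A minimal sketch (not real RDF tooling): triples as (subject, predicate,
# object) tuples, with made-up IRIs, showing how small linked pages
# reference each other.

blog_page = [
    ("https://alice.example/blog/1", "dc:title", "My first post"),
    ("https://alice.example/blog/1", "foaf:maker", "https://alice.example/profile#me"),
]

profile_page = [
    ("https://alice.example/profile#me", "foaf:name", "Alice"),
    ("https://alice.example/profile#me", "foaf:homepage", "https://alice.example/"),
]

# "Following a link" just means taking the object of one triple and looking
# it up as the subject of triples in another little page -- no global search.
def follow(triples, subject):
    return [(p, o) for s, p, o in triples if s == subject]

author = next(o for s, p, o in blog_page if p == "foaf:maker")
print(follow(profile_page, author))
# -> [('foaf:name', 'Alice'), ('foaf:homepage', 'https://alice.example/')]
```

The blog page and the profile page could live in completely different stores; the shared IRI is the only glue needed, which is the "local links, no global search" model Tim is describing.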

Is there a safe search engine?

That will be good. Search engines can be good, but I do remember the days of the web being the web of links.

Didn’t know that cooking would be involved though “wok at a basic level with just making and following links”. Guess I’ll have to buy some of these “links” and bake them into my next pie


My talk in the OP was focused on how Solid is a platform for different user-oriented apps sharing data, and concluded with the point that this is not going to be just your data, but everyone’s data - all able to be made, mixed, mashed, shared and edited by different apps.

From the other end of the spectrum we have big projects adopting the same semantic Linked Data / RDF approach in government, NGOs and commercial applications. Even these don’t tend to be well known, which makes it easy to think this isn’t gaining much traction, so I’m going to post one every so often for those who don’t follow it in other forums. Here’s one…


Would this “forced” semantic data API entail the network storing linkage information for all data uploaded to it? I’m thinking the biggest win here will be the ability to crawl the graph of the internet in any direction. If the network offers a service to go backward (e.g. find all blog posts that link to this website, or find all research papers that link to this one), that would change the internet in a BIG way. You could easily figure out who is talking about what, precisely how widely cited a resource is, etc. From what I understand, Solid doesn’t mandate bidirectional links, and couldn’t be expected to without some kind of graph database being available. Being able to see and explore the raw connectivity of the Internet would be a very exciting superpower (and would decrease our dependence on search engines with their own massive databases and in-built biases).

The big reason I see against having the querying service as an app separate from the network is that any external app would have to find data before backwards links could be established. However, if SAFE knows about links, it can maintain a graph just by incrementally updating it with new links the moment data is published.

Does this make sense or am I thinking about this all wrong?
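The incremental-update idea above can be sketched in a few lines. Nothing here is a real SAFE API; `publish` and the `backlinks` index are hypothetical, purely to show how a reverse-link graph could be maintained at the moment data is published rather than by crawling afterwards:

```python
from collections import defaultdict

# Hypothetical sketch, not a real SAFE API: a network-maintained backlink
# index, updated incrementally when a resource is published.
backlinks = defaultdict(set)

def publish(resource_url, links_to):
    """Record each outgoing link as an incoming link on its target."""
    for target in links_to:
        backlinks[target].add(resource_url)

publish("blog.example/post-1", ["news.example/article"])
publish("tweets.example/42", ["news.example/article"])

# Anyone reading the article can now ask "who links here?" without crawling.
print(sorted(backlinks["news.example/article"]))
# -> ['blog.example/post-1', 'tweets.example/42']
```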


Solid kind of includes database functionality without it necessarily being implemented using a database. Solid defines both a Web interface (LDP) for resource storage and a query language (SPARQL) that can be implemented in the client (or server, if you have one) to query and crawl the dataset. That dataset is not just what is in your storage but the whole Web, so everyone has their own tiny Google-like crawler, without needing one big database owned by Google.

So it isn’t quite as you imagine, but I think it is feasible to expect that much of it will be along those lines. For example, the network can’t decide to link blog posts together etc. - that has to be the decision of the app - but it can adopt Linked Data for internal structures to encourage this. So for example, using Linked Data for user profiles, providing a Solid / Linked Data API to SAFE NFS, providing any extra ontologies needed, etc. all makes it easier for developers to discover this semantic approach and realise the benefits. Benefits such as linking a user profile with documents the user publishes, and linking citations in those documents with the profiles (or rather WebIDs) of the authors of the cited documents.
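The profile/citation benefit can be illustrated with a tiny sketch. The predicate names (`dc:creator`, `cito:cites`) are placeholders rather than a vetted ontology choice; the point is only that a citation can be traversed to the cited document and then to its author's WebID by following triples:

```python
# Illustrative only: placeholder predicate names, not a vetted ontology.
# A citation is traversed to the cited document, then to its author's WebID.
triples = [
    ("doc:paper-a", "dc:creator", "webid:alice"),
    ("doc:paper-b", "dc:creator", "webid:bob"),
    ("doc:paper-b", "cito:cites", "doc:paper-a"),
]

def objects(subject, predicate):
    """All objects of triples matching the given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# From paper B, find the WebIDs of the authors whose work it cites.
cited_authors = [author
                 for cited in objects("doc:paper-b", "cito:cites")
                 for author in objects(cited, "dc:creator")]
print(cited_authors)  # -> ['webid:alice']
```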

An app that does this will produce data for the user that is more useful than data from one which doesn’t, which gives developers an incentive to do it and will further drive adoption of this approach.


I think, though, that apps which link as much as they correctly can - via forms of AI, evolutionary programming or other methods - will be much more capable and valuable to people. They will then help create those links to other data sets of interest. Hopefully this will happen as the app is used more, and it will not censor info or try to filter it for a person’s beliefs (like Google search etc.) but will continue to show opposing views, different opinions and more. I believe that with a semantic web secured peer to peer this is entirely possible, as apps that offer more value like this will become dominant. If those apps ever change that behaviour, switching to another will be simple, as long as we keep the data on the network (no problem) and semantic.

At least the tools will be there.


Thank you @happybeing and @dirvine, these are all useful insights, however I think we dodged my key question here. Today on the internet, links are all unidirectional. If I make a blog post and I link to a news article, then it’s easy for you to travel from my blog post to the article. However, if you happen to have stumbled upon that news article, you have no way to know what blog posts, tweets, or Facebook posts have linked to it. Perhaps you’re interested in this because you want to see whether people think the article is credible or not. Internet technology simply doesn’t support that kind of exploration.

For this to be plausible, links between pages can’t be hidden within them. The links have to be stored outside the pages, so that if I’m reading the news article I can see all the connections to it. This capability would give us an unprecedented ability to understand how we as a global network of people are communicating, and allow important discoveries regarding who is talking about and working on what.

To link all the world’s information together bidirectionally, we can’t rely on individual apps to publish the backwards links, because I could write my blog post with a different app to the one used to publish the news article. Instead, what we need is a way to query these links independent of how they were created. This is something the SAFE network could do. The only new feature needed is an in-built distributed graph database of all the links between data that is published on the network, so I can ask it “show me all the things that link to this news article”, and I can discover the blog posts, and the tweets, and the Facebook posts. A more compelling example: every piece of data on the internet will have a built-in comments system available. All you have to do to leave a comment on a piece of data is link to it. Anyone else who stumbles onto the data can then query the network for comments on it.

The technical nitty-gritty is beyond my ability to discuss, but if the SAFE network is going to store all the world’s data, there’s a once-in-a-lifetime opportunity to have it store all the world’s links as well, so we can see how all that data fits together.

I’m not the first person to see the value in this: Ted Nelson had bidirectional links as one of the defining features of his never-to-be-realised Xanadu project. We finally have the technology to make it happen.


Nick, thanks for clarifying.

Indeed, though it is not a feature that is present, nor simple, nor (I think) feasible to build in at the network level. So it would have to be at the application layer IMO, and therefore a choice rather than mandatory. This could be made easier for apps to support by providing libraries or an API, and could be encouraged by clearly demonstrating the benefits to users.

I think any benefits or downsides will depend on the use case (so not necessarily a bad thing for it to be optional), and I suspect that some apps already attempt to do this or something along these lines. But I don’t see it as something that can be made mandatory by the network.

If you want to look further I would start by looking at what Dokieli does (a collaborative authoring application) and from a database perspective I’d look at Wikidata.

Thanks for your input.


Just to make this public knowledge: Dokieli looks to be an implementation of exactly the kind of bidirectional linking system I was referring to, under the terminology of “annotations”. In fact I just discovered that Web Annotations is now a W3C standard too, and this is what Dokieli bases itself on.

Dokieli and the W3C standards deal with “backwards linking” by having annotations (e.g. user comments) emit notifications to the resource (e.g. news article) they’re referring to. These notifications enter an inbox owned by the resource. The inbox can then be exposed to viewers of a web page in order for the annotations / backwards links to be displayed. Because the inbox is owned by the resource author, they can choose to censor any of the annotations. Whether this is a good thing or a bad thing is up for debate.
In summary, this kind of system differs from my pipe dream because:

  • It is opt-in, and enough people have to decide they want to integrate this annotation system into the web before it becomes useful.
  • It enables censorship, because authors of resources can filter out the annotations they don’t want to be visible from their resource. The annotations will still be present on the web, they’ll just not be easy to find.
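A toy model of the inbox pattern described above may help. This is not the real LDN / Web Annotations API - the class and method names are invented - but it shows the essential shape: annotations send a notification to the target resource's inbox, and the inbox owner decides which ones readers get to see:

```python
# Toy model (invented names, not the real LDN / Web Annotations API) of the
# inbox pattern: annotations notify the target resource, and the resource
# owner chooses which notifications to expose to readers.
class Resource:
    def __init__(self, url):
        self.url = url
        self.inbox = []       # notifications received from annotations
        self.hidden = set()   # annotation URLs the owner filters out

    def notify(self, annotation_url):
        self.inbox.append(annotation_url)

    def visible_annotations(self):
        return [a for a in self.inbox if a not in self.hidden]

article = Resource("news.example/article")
article.notify("blog.example/supportive-comment")
article.notify("blog.example/critical-comment")

# The owner filters what readers see; the hidden annotation still exists
# on the web, it is just no longer discoverable from the article.
article.hidden.add("blog.example/critical-comment")
print(article.visible_annotations())  # -> ['blog.example/supportive-comment']
```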

I don’t think this qualifies as “censorship”. If I give a lecture and refrain from telling the whole world who attended that lecture, I’m not really “censoring” anything.

Also, let’s say I’m interested in hunting rifles and put up a page about them. Then some crazy organization finds my page useful for their terrorist or whatever purposes and starts linking to it. On a purely emotional level I may be reluctant to show the whole world that not only hunters but actual criminals like my page. So, if I understand the issue correctly, I definitely think the annotations/backwards linking should be optional and opt-in.


I agree that there are valid reasons to want to be able to filter out back-links from a resource you’ve published, like illegal activity. That’s certainly not “censorship”. However if your resource is a news article on how Donald Trump is amazing and you hide all the opposing comments, leaving only the positive ones, that could be considered censorship. I think the appropriate classification depends on the intent.

Nevertheless I’m seeing a good reason to be able to hide links, so maybe it does make sense for apps to handle this.


Just realised this had never been posted on r/cryptocurrency.

It feels like a good time to get some more awareness now the markets are making a little comeback.

Only posted 45 minutes ago and it’s already had 177 views.


Am reading some headlines in CNBC related to Tim Berners-Lee (the Solid project). Major publications are covering how the web was supposed to be, how it evolved, and how to get it back. I think the more we can collaborate with the Solid team, and perhaps even become the de facto storage layer for any/all Solid development, the more it will help the SAFE project and its vision, which is completely aligned with the vision of the Solid team.


When I was driving to work this morning, I heard Ruben Verborgh of Solid on the public ‘national’ radio (of the Flemish half of Belgium) because of the 30th anniversary of the WWW. Solid itself was not mentioned there. But in this (Google-translated) article on the website of the public broadcaster, Solid is mentioned.


I noticed TBL hasn’t mentioned Solid in any of his interviews or articles. Maybe they don’t want to muddy the waters, or maybe too many people arrived last year before it was ready (I felt the announcement was a bit premature). I wouldn’t be surprised if they asked media outlets not to cover it.


Could be, but concerning the radio news I heard: it can’t be too long, so it makes sense that Solid isn’t mentioned there and only in an article where you can give more details.


I think you are correct @JPL, the Inrupt message is that it is not for users, only developers, so it makes sense they are being more focused.


And when were these interviews recorded? It’s possible some were recorded months or more ago (just in case the star was not available at the anniversary).


Hi David and Mark, this post was from April 2018. Is SAFE still working closely with Solid in order to create the supercharged SAFE/Solid network, or are both projects running their own course?