Thanks for the link. It seems like a relevant project, though details are sparse or buried. Shame it's not easier to dig into, as it would be interesting to read the equivalent discussions on a similar project (similar aims, similar challenges). We might find some things we haven't catered for, etc.
I’m seeing some differences. It seems closer to a transport layer, focused on requesting, receiving and republishing data that is addressed through URIs and tagged (signed?) to validate content and originator, and to restrict access. While there is talk of not relying on servers, I read it as not actually eliminating them, nor providing cloud-style pooled storage for users. It seems more like a way of decoupling data from where it is stored: users/clients request data without knowing where it lives, and it is retrieved automatically from wherever it happens to be (the original source, or any computer that has already accessed it). Data could start on servers, or on users’ own machines if these allow public access, but I don’t think this is doing the same thing that MaidSafe does: explicitly store data on the network in a secure, encrypted way and ensure it is always accessible (with versioning etc.). Nor is there any reference to the trustless mechanisms needed for crypto and smart contracts. No mention of bitcoin etc. either.
I think the claim to eliminate servers does have some validity, because the architecture means that widely requested information will be sourced from places other than the original source. So any old computer could be the original source, and provided the data has already been accessed and is still cached on the network, it remains available whether or not that source is still online, and even if the source can’t meet the demand on its own. Essentially, any computer could deliver content with very high demand, regardless of its own bandwidth, which effectively lets ordinary peers perform alongside much more powerful servers. What it does not do is provide any guarantee that data will be retained (at least none is stated), so I think this is not MaidSafe. It seems to be a lower-level layer that provides some of the benefits of MaidSafe.
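To make the caching/republishing idea above concrete, here is a toy sketch in Python. This is purely illustrative (my own assumption of how such a scheme behaves, not the PURSUIT protocol or its Blackadder implementation): content is addressed by URI rather than by host, a request is answered by any node holding a copy, and every receiving node caches and republishes, so popular data survives the original source going offline.

```python
# Illustrative sketch of information-centric retrieval with opportunistic
# caching. Names (Node, Network, the pursuit:// URI) are invented for this
# example and are NOT part of any real PURSUIT API.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # URI -> content held locally

    def publish(self, uri, content):
        """Make content available under a location-independent name."""
        self.cache[uri] = content


class Network:
    def __init__(self, nodes):
        self.nodes = nodes

    def fetch(self, uri, requester):
        """Resolve a URI to ANY node holding the content, not a fixed host.
        The requester caches the result, so it can serve future requests."""
        for node in self.nodes:
            if uri in node.cache:
                content = node.cache[uri]
                requester.cache[uri] = content  # requester now republishes too
                return content, node.name
        raise KeyError(f"{uri} not available anywhere on the network")


alice, bob, carol = Node("alice"), Node("bob"), Node("carol")
net = Network([alice, bob, carol])

alice.publish("pursuit://news/item1", "hello")

# Bob fetches from Alice (the original source) and caches a copy.
net.fetch("pursuit://news/item1", bob)

# Alice goes offline; Carol still gets the data, now served from Bob's cache.
alice.cache.clear()
content, served_by = net.fetch("pursuit://news/item1", carol)
```

Note what the sketch also shows: nothing here guarantees retention — if every cached copy is evicted, the data is simply gone, which is exactly the gap relative to MaidSafe's explicit, guaranteed storage.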
There is no indication of a deployment project — this appears to have been a demonstrator only, and the project may be dormant. I’ve queried their twitter feed and will update if they reply.
UPDATE: I was referred from twitter to email, but ten days and no reply, so just went back to twitter!
UPDATE: I got an email response from the project leader. Here are the relevant bits:
Dirk Trossen wrote (30-5-2014):
… Anyways, it was interesting to look at Maidsafe, also in relation to PURSUIT. The goals are very similar when it comes to security and privacy, and we also looked into obscuring information requests, lowering the ability to profile… But there are also differences. Pursuit was an infrastructure project with the ambition to replace current (IP) routing approaches. This ambition adds more goals compared to Maidsafe, such as optimising resource usage along the dimensions of computation, storage and communication (e.g., using constraint-based path computation approaches, cache placement solutions, …).
In terms of implementation, we rely on our node implementation called Blackadder, which implements the high level goals (including the support for optimised dissemination strategies) in a flexible software architecture. It runs over native L2 or tunneled via IP. We also have done work on diffusion through net coding that’s included in the implementation. Akin to original IP implementations, a node can take several roles, not only be an end node.
In conclusion: the main aims are the same, but the infra target of PURSUIT leads to differences in order to achieve scale and performance (we are already at a level of 10 Gbit/s forwarding performance). Although the project has ended, its results and code are used in other projects as well as in planned efforts at EU level.
PURSUIT is not at the same level as MaidSafe when it comes to commercial readiness. There will be follow-on research activities, including corporate developments. But again, they’re unlikely to yield in products in the near-term. You can find some of the developments in the IRTF ICN research group, which also includes some of the corporate players. Also, the EC H2020 funding programme includes references to ICN (information-centric networking) and therefore solicits follow-on work.
Please feel free to share this communication –there’s nothing secret, really, and PURSUIT has always been very open.
Article on Cambridge Uni Website
“…a proof-of-concept model for overhauling the existing structure of the internet’s IP layer, through which isolated networks are connected, or…”

“…users would be able to obtain information without needing direct access to the servers where content is initially stored.”

“Instead, individual computers would be able to copy and republish content on receipt, providing other users with the option to access data, or fragments of data, from a wide range of locations rather than the source itself. Essentially, the model would enable all online content to be shared in a manner emulating the “peer-to-peer” approach taken by some file-sharing sites, but on an unprecedented, internet-wide…”