Network performance concerns

From what I’ve gathered so far about quantum entanglement, information can’t be transmitted in this fashion; otherwise it would allow retrocausality.

Even if it is real, its everyday implementation would be a decade or two away, and so irrelevant to the current discussion.

3 Likes

Of course not, I was just commenting on his OT.

Yes, that is true. Two-way transfer has some paradoxes not always written about, such as http://www.scientificamerican.com/article/quantum-teleportation-acr/ (which missed the paradox of time travel :wink: )

Who knows what tomorrow brings, but like quantum crypto, it’s not worth waiting on for now. There are really very cool things closer to hand, such as Atom wranglers create rewritable memory | Nature and the like, which seem to show that the increase in disk sizes is not stopping. There are plenty of advances in bandwidth as well, so we’re looking good for continued rapid growth. We will definitely use that growth, though, as we have shown to date with all our data consumption and transfer.

1 Like

OT-ing again, but @dirvine how do you manage to find time to stay on top of all these scientific advances, be actively present on the forum, and still develop?
It is like you are omnipresent. Are you… God?

1 Like

OR does quantum-entanglement-based comms already exist? :wink:

5 Likes

You can find high-level, up-to-date physics content on the PBS Space Time YouTube channel. I don’t pretend to fully understand those videos, but they are nevertheless interesting.
In this video about quantum tunnelling, around 7:10 they say they will make one in the future about quantum entanglement: Is Quantum Tunneling Faster than Light? | Space Time | PBS Digital Studios - YouTube

1 Like

Teleporting people through space, as is done in Star Trek, is impossible by the laws of physics. Teleporting information is another matter, however, thanks to the extraordinary world of quantum mechanics. Researchers at TU Delft’s Kavli Institute of Nanoscience have succeeded in deterministically transferring the information contained in a quantum bit – the quantum analogue of a classical bit - to a different quantum bit 3 metres away, without the information having travelled through the intervening space: teleportation. The results will be published online in Science, on Thursday 29 May.

These smart Dutch folks ;-).
What I can imagine from this is that one day we’ll have communication using a technique like this. It won’t go faster than light, because at the beginning and the end you always need to work with photons or electrons etc. But as the information does seem to take some loophole in the universe (it doesn’t travel through space/time), we might have sub-millisecond connections all over the universe one day :yum:.

5 Likes

Uh-huh, it seems David’s new chlorine drive was also built by TU Delft scientists?

A kilobyte rewritable atomic memory

Here, we present a robust digital atomic-scale memory of up to 1 kilobyte (8,000 bits) using an array of individual surface vacancies in a chlorine-terminated Cu(100) surface. The memory can be read and rewritten automatically by means of atomic-scale markers and offers an areal density of 502 terabits per square inch, outperforming state-of-the-art hard disk drives by three orders of magnitude. Furthermore, the chlorine vacancies are found to be stable at temperatures up to 77 K, offering the potential for expanding large-scale atomic assembly towards ambient conditions.

And never mind this older TU Delft project, nicely funded too, like MaidSafe :sunglasses:

About Tribler

Tribler is a research project of Delft University of Technology. Tribler was created over nine years ago as a new open source Peer-to-Peer file sharing program. During this time over one million users have installed it successfully and three generations of Ph.D. students tested their algorithms in the real world.

Work on Tribler has been supported by multiple Internet research European grants. In total we received 3,538,609 Euro in funding for our open source self-organising systems research.

Roughly 10 to 15 scientists and engineers work on it full-time. Our ambition is to make darknet technology, security and privacy the default for all Internet users. As of 2013 we have received code from 46 contributors and 143.705 lines of code.

Anonymity

Tribler offers anonymous downloading. Bittorrent is fast, but has no privacy. We do NOT use the normal Tor network, but created a dedicated Tor-like onion routing network exclusively for torrent downloading. Tribler follows the Tor wire protocol specification and hidden services spec quite closely, but is enhanced to need no central (directory) server.

Known weaknesses

We weaken security with decentralization.

We build upon the excellent work by the Tor community. Decentralization unfortunately makes our approach weaker than the original Tor protocol plus network. The Sybil attack is a known weakness in the current anonymous streaming design. We have conducted years of research on Sybil attack defenses. However, it will take a single developer over a whole year to implement these ideas. Volunteers welcome!

Privacy using our Tor-inspired onion routing

Search and download torrents with less worries or censorship

Disclaimer

Do not put yourself in danger. Our anonymity is not yet mature.

Tribler does not protect you against spooks and government agencies. We are a torrent client and aim to protect you against lawyer-based attacks and censorship. With help from many volunteers we are continuously evolving and improving.

Their Discourse-forum

3 Likes

And AES ( Advanced Encryption Standard - Wikipedia ), was developed by Belgians :wink:

5 Likes

The speed of light is (currently considered to be) the fundamental limit.

Let’s try some figures:

The Earth’s circumference = 4 x 10^7 m

Speed of light in vacuum, c = 3 x 10^8 m/s

Therefore, the minimum time for a light ray to travel between two points on opposite sides of the Earth (say, by a perfect optic cable of minimum length),

t = 0.5 x (4 x10^7) / (3 x 10^8) s
= 67 ms

But in practice, the path won’t be a great circle (there will be doglegs) and optic fibre transmits at something less than c. Not to mention that there is various relay equipment along the path.

So I’d be surprised if that figure can ever be reduced below 100 ms.

So the Verizon latency figures, say 150 ms for US–Australia, are not all that much above the fundamental limits.
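The back-of-envelope figures above can be sketched in a few lines. This assumes, as the post does, a perfect great-circle path; the fibre speed of roughly 2/3 c is a typical figure for optic fibre (refractive index around 1.5), not something measured here.

```python
# Rough check of the latency figures above (all values are approximations).
C_EARTH = 4.0e7           # Earth's circumference in metres
C_VACUUM = 3.0e8          # speed of light in vacuum, m/s
FIBRE_FACTOR = 2.0 / 3.0  # light in optic fibre travels at roughly 2/3 c

def min_latency_ms(distance_m: float, speed_m_s: float) -> float:
    """One-way latency in milliseconds for a signal at the given speed."""
    return distance_m / speed_m_s * 1000

# Half the circumference: two points on opposite sides of the Earth.
antipodal = 0.5 * C_EARTH

print(min_latency_ms(antipodal, C_VACUUM))                 # ~67 ms in vacuum
print(min_latency_ms(antipodal, C_VACUUM * FIBRE_FACTOR))  # ~100 ms in fibre
```

The fibre figure lands right at the 100 ms floor suggested above, before counting doglegs and relay equipment.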

And recall that the graph I posted is indicating a maximum of about 50ms acceptable latency for FPS games (because those enemies that you’re trying to shoot move fast).

The inevitable conclusion is that for some SAFE applications it is going to be necessary* to use out-of-SAFE channels for some of the data, with communications within SAFE for the setup. Telephony will be the big one.

* Unless: The devs (core or app) might “optimize” by introducing a ping test to allow an app to, in effect, sort prospective partners according to geographic proximity. But such discrimination would have to include all nodes along the SAFEnet path and not just the end nodes. In which case SAFE will no longer be geographically agnostic. Is anyone actually working on that?
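The ping-test idea in the footnote could look something like the sketch below. Everything here is hypothetical: `ping_ms` is an assumed helper standing in for a real round-trip measurement, and nothing like this exists in the actual codebase.

```python
# Hypothetical sketch of sorting prospective partners by measured ping.
from typing import Callable

def sort_by_proximity(peers: list[str],
                      ping_ms: Callable[[str], float]) -> list[str]:
    """Return prospective partners ordered by round-trip time, nearest first."""
    return sorted(peers, key=ping_ms)

# Usage with canned measurements standing in for real pings:
measured = {"peer-a": 140.0, "peer-b": 35.0, "peer-c": 80.0}
print(sort_by_proximity(list(measured), measured.get))
# ['peer-b', 'peer-c', 'peer-a']
```

As the footnote notes, to be meaningful the measurement would have to cover all nodes along the path, not just the end nodes.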

2 Likes

We could call the proximity check just geo·p, give it an op-code and a toggle to switch it on or off. I think that would suffice to have a turf for experimentation.

We don’t have to include it by default in the core functionality of the network.

2 Likes

Another possibility is SAFE “sidenets” that are analogous to Bitcoin sidechains:

Say you are a SAFEnet telephone/gaming/enhanced reality startup. You have a “server” for your app in each region, eventually in each urban area: a node on the local backbone running your app. I put server in quotation marks because its only purpose is to bootstrap a local mini SAFEnet, and perhaps act as an index of options by analogy with a clearnet torrent site, and after that the app runs peer-to-peer. SAFEnet users join a sidenet by connecting to the app network on the main SAFEnet. I hope that makes sense!

EDIT:
A sidenet need not be a different network, it can be a geographically-filtered group of nodes of the main SAFEnet, tested to be within a specified ping distance.

Of course, since a ping function measures time, and SAFEnet does not have time, right there we are going outside SAFEnet’s canonical functions in order to solve a real-world problem.

4 Likes

But there’s no reason that your PC running an app cannot measure time. This will be happening, I am sure. It will be speed in relation to your PC, so it can even be a function of any “launcher” too.

If it’s non-profit you’d be OK, but for profit… the patents may come into play with private SAFE networks. MaidSafe have a side business scoped that deals with private SAFE networking.

I found this out when brainstorming on here about large orgs running SAFE networks internally with bare-metal Docker setups, and wondering if they could use the public SAFE network in a synced private/public cloud equivalent.

After further thought a better analogy for what I have in mind is Bitcoin colored coins rather than sidechains, with a SAFEnet node having a latency flag, while the default node configuration has no such identifier. So forget about separate networks, which are unnecessary if you can tag nodes in such a way.

EDIT: The latency flag should be present in the core software but turned off by default, and settable case-by-case by consent of the user. So if you want to join some service which is latency-sensitive then you consent to a ping test to set the latency flag, viewable only by the party running the test.

By contrast, promiscuous use of such tests would allow you to be located to a geographic area by triangulation, and in general that should not be encouraged. If your group activity is so local that you literally have the other people in sight, as in laser tag, then I can’t see the point of using SAFE at all.
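The opt-in latency flag described above might be sketched as follows. This is purely illustrative: `NodeConfig` and its fields are invented for this post, not part of any real SAFE software.

```python
# Hypothetical sketch: a latency flag that is off by default and only
# set with the user's explicit, case-by-case consent.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeConfig:
    latency_flag_ms: Optional[float] = None  # unset by default: node untagged

    def set_latency_flag(self, measured_ms: float, user_consented: bool) -> None:
        """Tag the node with a measured latency, but only with consent."""
        if user_consented:
            self.latency_flag_ms = measured_ms

node = NodeConfig()
node.set_latency_flag(42.0, user_consented=False)
print(node.latency_flag_ms)  # None: no consent, flag stays unset
node.set_latency_flag(42.0, user_consented=True)
print(node.latency_flag_ms)  # 42.0
```

The point of the default-off design is exactly the triangulation concern above: an untagged node reveals nothing about its location.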

1 Like

This is a very important issue. The other day I was talking to a friend who has a hosting company using Amazon cloud services, and I mentioned to him that the future of the internet is decentralised, pointing to the SAFE Network as an example of what is to come. But he immediately said: “That network will probably be very slow compared with modern servers and cloud services.” I really had no response.

Any advice about what to say in the future when I keep promoting the network to my acquaintances? Will SN ever be able to match the speed of standard servers and cloud services?

1 Like

My response would be: “Just as slow as BitTorrent?” :upside_down:. Chips and network cards have become way faster since the introduction of BitTorrent years and years ago. P2P has proved itself over and over again. So even while SAFE adds several encryption layers, it still could be very fast. Remember we have 4 “seeders” for every MB of data, so a 600 MB file has 2,400 “seeders” ready to serve you.
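The arithmetic behind that “seeders” claim is simple, assuming (as the post does) 1 MB chunks with 4 copies of each chunk held on the network:

```python
# Rough arithmetic behind the "seeders" claim above; chunk size and
# replica count are taken from the post, not from any spec.
CHUNK_MB = 1   # assumed chunk size in MB
REPLICAS = 4   # assumed copies of each chunk on the network

def serving_copies(file_mb: int) -> int:
    """Total chunk copies available to serve a file of the given size."""
    chunks = file_mb // CHUNK_MB
    return chunks * REPLICAS

print(serving_copies(600))  # 2400
```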

5 Likes

If there are speed issues in the beginning, we could specialise in IoT to get spread into as many places as possible…

As people have noted, rapid advances in processing power will drive encryption overhead towards zero.

But latency won’t go away: it will be no better than the Internet backbone’s, multiplied by some factor according to how many hops there are.

As long as SAFEnet is an overlay of the Internet, it can handle all use cases either entirely within SAFEnet (for latency-tolerant applications) or with workarounds such as out-of-band data transfer (for latency-sensitive applications), with a latency flag filtering out the remote nodes.

Meshnets would fit into the latter scenario, but they are never going to replace the Internet backbone for long distance.

3 Likes