It should hit the deduplication algo, so there's no actual store, just work for the network to do.
Routers have a limit on the number of concurrent connections they can sustain; for home routers it is often on the order of one thousand. This is one issue with BitTorrent.
Thus a limit exists, but it is not necessarily set by the protocol, rather by the router you connect to, or perhaps the routers the other nodes connect to. For data centre nodes this is unlikely to be an issue.
Partially; there was a fair bit happening in that directory: files being renamed, moved, deleted, rewritten, even files that somehow were written wrong by
But that raises the question: what happens when files are added/deleted etc. in a folder while it is being uploaded using the command
safe files put?
Of course I was not talking about the user DoSing himself.
Not every router needs to do NAT, port forwarding etc.
Often a router just gets packets from one interface and puts them out another.
In that case it does not matter what the packet contents are: 1 fast connection or 10,000 slow connections, only the total bandwidth is limited.
A property of a large number of connections is that they:
- Use server resources (RAM, for example). Lots of resources, if developers do not think about optimisation.
- Trigger race conditions, if developers do not have a perfect understanding of thread safety.
This is what is being tested, not the capabilities of routers.
I was addressing the implied situation with parallel access as you mentioned, i.e. a single client or computer doing parallel accesses, and the situation that most users will be facing with their internet-facing router.
They are stateful in 99.99% of cases. The few who use bridged mode mostly have a router of their own which keeps track of connections.
Of course it would be possible to have a setup where your computer is effectively connected directly to the internet with a bridged connection and routing only occurs once you hit the ISP routers. But this has its own set of issues.
I think it was a very large file container. I don’t remember who was uploading a lot of stuff to one? But it’s a good test we’ll be adding. (ah, @stout77 perhaps! ).
At the moment, I don’t think there is one set up. We’ve had the idea that we’ll need to limit them in terms of data payment / capacity. But that’s not in place yet. (Looks like something we’ll need to add though to prevent this sort of thing happening).
Ah, as to the “which commit” Q @Josh , I did reply in the other thread, but to be clear: main should work with those bins, I believe. There have been no breaking comms changes in the meantime, afaik.
I think we can call this testnet DONE. I’m leaving the nodes up for now in case any of the team want to poke at anything today. But with the register edit issue it seems pretty clear. So we can get tests written up for that, and look to refactor or limit the code around handling them too.
I’ll concede this to @neik , mine was ‘only’ 4 GB split into 10 MB zips.
For what purposes?
Maybe I don’t know about it because I use hardware switch mode on my router for this PC.
But I have never encountered problems with connections through ISP routers either.
10,000 is a usual number of simultaneous connections for my PC,
and I doubt that something breaks if I add several times more.
(I know about the 2^16 limit on open ports.)
You have to watch out here. For TCP and UDP, each connection is a socket (a file). What limits you is the number of open files. These limits are small and gave us issues: it’s a few hundred open files by default (IIRC 512 and 1024 for mac/linux). Your machine starts baulking at that point.
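You can see this bite directly on Linux/macOS by lowering the process's open-file limit and creating plain sockets until the OS refuses. A minimal sketch, assuming a Unix system (the `resource` module is not available on Windows); the limit of 64 is picked just to make the failure cheap to reproduce:

```python
import errno
import resource
import socket

# Drop the soft open-file limit so the failure is cheap to reproduce.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

socks = []
err = None
try:
    while True:
        socks.append(socket.socket())  # each new socket consumes one file descriptor
except OSError as e:
    err = e.errno  # EMFILE, "Too many open files"
finally:
    for s in socks:
        s.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(len(socks), err == errno.EMFILE)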
However, QUIC is different. As it uses multiplexing, you can have tens of thousands of connections on one socket, and outgoing connections can use that socket too. That was a big reason to go the QUIC route.
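QUIC itself needs a library, but the file-descriptor arithmetic behind it is visible with plain UDP: one socket can exchange datagrams with any number of peers. A rough loopback-only sketch (not actual QUIC, just the one-fd-many-peers property):

```python
import socket

# One UDP socket on the "node" side; QUIC would multiplex every connection over it.
node = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
node.bind(("127.0.0.1", 0))
node_addr = node.getsockname()

# 50 peers, each with its own socket, all talking to the single node socket.
peers = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in range(50)]
for p in peers:
    p.sendto(b"hello", node_addr)

seen = set()
for _ in peers:
    _data, addr = node.recvfrom(64)  # all 50 datagrams arrive on one fd
    seen.add(addr)

print(len(seen))  # 50

for p in peers:
    p.close()
node.close()
```

The node side never opens a second descriptor, no matter how many peers show up; contrast that with TCP, where every accepted connection costs a new fd.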
I planned to talk about SN using UDP a little later.
Anyway, it is better to know more about problems with limits at the router side, even for TCP.
Or maybe routers store some addresses and can still have problems even with UDP.
As for file limits, I’m glad that in Windows sockets are not files,
and the number of connections is limited only by the amount of RAM.
In routers, UDP connections are opened and left open for a small period, 10-30 seconds. It’s connectionless, so the router cleans up or garbage collects (this period is the hole we refer to when hole punching).
TCP keeps connections alive, and the router can offload these to different sockets when you set up a listener on your computer. So the listener will spawn more connections (files) on each received incoming connection. Those connections stay alive until closed by either end.
So you have computers using files (sockets) per connection, and routers mapping incoming connections to new ports, or using holes for UDP.
The key for us is the p2p node, which will potentially require several hundred open connections and sometimes more. For TCP or UDP this quickly kills the computer unless you ask users to change operating system parameters, and that’s not good enough for us.
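The practical consequence of that 10-30 second garbage collection is that both ends must send keepalives more often than the router's idle timeout, or the hole closes. A toy sketch over loopback (no real NAT involved; the timeouts are shrunk to fractions of a second purely for the demo):

```python
import socket
import time

# Shrunk for the demo; real routers garbage-collect an idle UDP
# mapping after roughly 10-30 seconds.
ROUTER_IDLE_TIMEOUT = 0.6
KEEPALIVE_INTERVAL = ROUTER_IDLE_TIMEOUT / 3  # refresh well inside the timeout

peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer.bind(("127.0.0.1", 0))
peer_addr = peer.getsockname()

local = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

sent = 0
deadline = time.monotonic() + 2 * ROUTER_IDLE_TIMEOUT
while time.monotonic() < deadline:
    local.sendto(b"keepalive", peer_addr)  # each packet would refresh the NAT hole
    sent += 1
    time.sleep(KEEPALIVE_INTERVAL)

# On loopback every keepalive arrives; count them at the peer.
peer.settimeout(0.2)
got = 0
try:
    while True:
        peer.recvfrom(32)
        got += 1
except socket.timeout:
    pass

print(sent == got, sent >= 2)

local.close()
peer.close()
```

The interval is set to a third of the assumed timeout; in practice p2p stacks pick something comfortably under the most aggressive router timeout they expect to meet.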
Interesting, it used to be that the number of open files for a single process was 512; has this changed in recent versions? I suppose it does not matter to us, as we need to work on many OSes.
That’s why I told @neo about the no-NAT, no-port-mapping case. But he still says 99.99%. And I wonder what state the router stores in such cases.
I doubt that 99.99% of users have NAT and port mapping.
For example, my ISP gives a public IP to every customer,
which means the router has no need to do tricks.
First result from Google:
(512 is the default limit for the C runtime; without it, millions of files can be opened)
I am not sure of the figures, but I think NAT is a huge percentage, possibly close to 99% or thereabouts. In any case, I feel we need to try to support all connections. I wish we were on IPv6 though, as IPv4 is pretty much past end of life now.
That is excellent; what a difference that would make to all p2p networks. It would be huge, but it is very rare that folk have a public address on their computer.
The point is that Windows is built on a C runtime, and therefore 512 is the limit per process.
You could maybe write a wee test script that connects to 513 endpoints to prove this one. It may be different with later versions of Windows.
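For what it's worth, a wee script along those lines in Python. On Linux/macOS you first have to raise the default soft fd limit (the `resource` module is Unix-only), which itself suggests the binding constraint is the OS fd ceiling rather than a fixed 512:

```python
import resource
import socket

# Raise the soft fd limit first; the usual default of 1024 would be the
# real blocker, since both ends of each loopback connection cost a descriptor.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(hard, 4096), hard))

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(128)
addr = server.getsockname()

clients, accepted = [], []
for _ in range(513):
    clients.append(socket.create_connection(addr))  # connection number 513 succeeds too
    conn, _peer = server.accept()
    accepted.append(conn)

print(len(clients))  # 513

for s in clients + accepted:
    s.close()
server.close()
```

All 513 connections are fully established in one process, well past the claimed 512; on Windows, sockets are kernel handles rather than CRT file descriptors, so the CRT's stdio limit would not apply to them in the first place.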
C is not the only language in the world.
And even with C you can use the Windows API directly.
In other words, it is not a problem at all; I wonder where you got that information.
Which endpoints? I have processes running right now with thousands of established TCP connections.
I know, but it is what Windows uses for its kernel. It is the C runtime that causes the problem.
It seems you think this is a debate, but it’s not. I am not interested in being correct here.
We had CI fail with this issue on Windows, Linux and OSX, when we used TCP with either C++ or Rust as the programming language. The programming language does not matter; the underlying runtime does, i.e. the thing that provides the Windows API (and that is C).
If you have a single process with more than 512 connections, then it would seem to prove Windows can now support more than 512 connections per process. Can you double-check that this is the case? Just for information; it’s interesting and would be a good move by MS to have changed that.
Almost everybody (on IPv4) has NAT; most people go through several of them. CG-NAT is (sadly) a standard thing, and even if you get your own public IP from an ISP, it is often via 1:1 NAT.
If you somehow do get a public IP from your ISP, you get one, and most people have more devices they want to connect => you get a home router with NAT.
The question is, why are we not on IPv6? Rather than spending time on all the broken things in IPv4, go IPv6-only and embed some tunnelling option for people who don’t have native IPv6 yet.
I spent some time on this with great hope. It is moving forward, but unfortunately very slowly.
It’s a brilliant side project for anyone to move forward, regardless of Safe. So worth always watching out for.
An interesting approach may be for Safe nodes to do IPv4->IPv6 tunnelling etc. (like a proxy), but that can tend towards more complex trust issues. In any case, if we could, we certainly would, I think. Bitcoin nodes would too, I think.
That’s why I asked for the source of this information,
because I’m almost sure that it is not correct.
Using and spreading wrong information because of… what?
It may be that Rust inherits restrictions from the C runtime, or maybe adds its own. I don’t see other reasons for the problem. But if you are not searching for the source of the problem, then I can’t help.
Because I don’t want to paste a huge list here, I will use grep to count:
d:\msys64\usr\bin>netstat -p TCP -o -n | grep -c "ESTABLISHED 37288"
7075
I remember only a single limit regarding connections in older versions of Windows: there was a restriction on the number of half-open connections. But it has been gone for a long time (if I remember correctly; I don’t want to search for proof right now).
This is an important note. I have seen this in more places: the problem was not the total number of connections, but the half-open ones. Malware used that for DDoS attacks, if my memory is good.
What was the main problem? Today >40% of people have working native IPv6; if only half of the rest were able to make it work (a pessimistic guess, I think), we would be at 70% availability, and that is viable from my point of view. Nothing works 100%, and on IPv4 there will also be some percentage of people unable to connect because their internet is too broken.