Reserve pages for companies when Safe Network goes live?

These problems go away with the perpetual public reserve concept.

2 Likes

Yes, good point. Even so, phishing could be less sophisticated - it could simply claim their account has been prepared for Safe Network and that they need to email a scammer’s address to enable it. Technical people may smell a rat, but many people won’t.

Ofc, if there is a clever way to get the user to store a public file, or send a message, without it being too obvious that it is suspicious, scammers will be all over it. I don’t mean to spread FUD though, so if this is technically challenging, that is good news.

1 Like

I’d say it would need to be a one-time capture of the top X domains. It would have to be done prior to public launch; afterwards it would be a race against the squatters.

Domain X + 1 would be unlucky, but that is better than domains 1 through X being targets for scammers. Maybe X could be as big or as small as needed, to limit potential damage?

For user protection, what if google.com is the one NRS name they want, after Google has added it? Would we want to prevent the real owners from helping their users find their Safe Network sites?

1 Like

In your example, the first person to get “com” as a name has control over all .coms; google is not the unique term. So the only public names you need to be concerned about are the TLDs themselves, such as com, org, net, edu, gov, etc.
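The reasoning here is that under NRS the rightmost dot-separated label is the registrable public name and everything to its left is a subname under it, so registering “com” captures every *.com-style address. A minimal sketch of that splitting rule (a simplification for illustration, not the actual NRS implementation):

```python
def split_nrs_name(name: str):
    """Split an NRS-style name into (public_name, subnames).

    The rightmost dot-separated label is the registrable public name;
    everything to its left is a chain of subnames under it.
    """
    labels = name.split(".")
    public_name = labels[-1]      # this is the only label anyone registers
    subnames = labels[:-1]        # these are controlled by the public name's owner
    return public_name, subnames

# Whoever registers the single public name "com" controls both of these:
print(split_nrs_name("google.com"))        # ('com', ['google'])
print(split_nrs_name("mail.google.com"))   # ('com', ['mail', 'google'])
```

So under this model, squatting concerns collapse from millions of domains down to the list of top-level labels.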

1 Like

There is this list: Top 1000 Domains · GitHub

Perhaps hard-code a list of banned sites/TLDs and do nothing more - just don’t let them be registered on the network? Then it’s a free-for-all for everyone :wink: A new way.

3 Likes

Couldn’t you just hardcode ban “com”? Then all *.com is covered?

1 Like

Under the current NRS system yeah.

There are >1500 current TLDs BTW
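A hard-coded ban along these lines could be a simple membership check at registration time. A sketch of the idea (the function and the banned set are hypothetical; a real list would need all 1500+ current clearnet TLDs):

```python
# Hypothetical registration-time check. Only a handful of entries shown;
# a real deployment would embed the full clearnet TLD list.
BANNED_TOP_NAMES = {"com", "org", "net", "edu", "gov", "io"}

def is_registrable(public_name: str) -> bool:
    """Reject any public name that collides with a clearnet TLD.

    Because subnames hang off the public name, banning "com" alone
    blocks every *.com-style address in one check.
    """
    return public_name.lower() not in BANNED_TOP_NAMES

print(is_registrable("com"))         # False - blocks google.com, amazon.com, ...
print(is_registrable("mycoolsite"))  # True
```

The check itself is cheap; the cost discussed later in the thread is maintaining and shipping that list in every vault indefinitely.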

1 Like

Does that help the user find what they are looking for though? If they are looking for Google, Amazon or whatever, shouldn’t they be able to reach it with the URL they are used to? Wouldn’t that help them avoid falling into the scammer’s arms, when they go looking for something similar (potentially by a scammer)?

I think there is a strong argument that new, less technical users should be given all the help they can get. Not just for usability, but for their safety too.

This seems like a good idea, given the different nature of TLDs and their hierarchies.

Yes, good point and a bad example from me. Instead, assume google or www.google.

Although, I wouldn’t advocate MaidSafe, or anyone, dispensing these TLDs. If we were gonna do anything, it would be to lock them out.

But there will always be new clearnet TLDs, so issues would still crop up later on. Or perhaps the scam will happen in reverse!

3 Likes

I am not sure it stops them? It depends how easy it is to find stuff on safe, i.e. search engine/index etc. If the old url is just not there then it’s not gonna scam them. We could have redirects in those locations to known safe sites we verify (big brother like) or let folk find the real Amazon / Google.

I would leave it up to amazon and google etc. to state on their clearnet site “we are on the Safe Network at this address” and give them the problem.

If we ban TLDs, or even just .com, then users won’t be led to any scammer there.

Thoughts anyway, this is all brainstorming

[Edit: then those folks could pay whoever makes the browser for autocomplete of their names?]

2 Likes

True, but I think the immediate impact of the major TLDs is the critical area. I think we can only reasonably expect to ease the transition, rather than solve every issue. Longer term, users may be more wise to how Safe Network works too.

2 Likes

Yes, very true - essentially disabling TLDs would go a long way, IMO. I suppose this could also be done by Maidsafe just registering these, then setting up links to the genuine sources where required.

Of course, the TLD isn’t needed on Safe Network, so the likes of Google, Amazon, etc, would likely go without the TLD (just literally their name - safe://google, safe://amazon). I suspect squatters may anticipate this too though, which loops back to the original point. Maybe they would be safer from scammers, but it will still be undesirable. I would rather Maidsafe were the squatters and were re-investing the money into the network.

1 Like

FWIW, I would put usability and communicability first, and verifiability slightly behind that in order of priority.

I think there will be more solutions to verifiability that can be retrofitted to the network after launch - solutions that linked data is perfect for as the network grows. So locking out a bunch of names or TLDs is, in that regard, a bit of a blunt and unsophisticated response.

So put usability first I say.

And in that regard (slightly off topic, but related) I do think it is worth considering publicname.subname rather than what we have at the mo. Widest to narrowest.

2 Likes

I’m not convinced it is as big an issue as we might imagine, hence I’d like us to test the idea before making the network embed a list of 1500 TLDs in the code of every vault and check it on every DNS registration. That’s a lot of energy, over an indefinite period.

I think there will still be scams, but the protections of Safe Network will make them much, much harder and less profitable. Strengthening the underlying protective features that make scams less profitable on Safe, rather than trying the whack-a-mole approach of chasing specific types of scam, feels likely to be more effective.

3 Likes

That’s pretty insignificant. Banning only those would alleviate the OP concerns completely.

You don’t need a list in every vault. Just pre-register the names and make the keys public, or burn them. They will then become no-man’s-land.
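The “burn” idea here is to register a name under a freshly generated keypair and then discard the secret half, so nobody (including the reserver) can ever sign an update for it. A rough illustration of the concept with stand-in keys (no real Safe Network API is assumed; the registration step is left as a comment):

```python
import secrets

def reserve_and_burn(name: str) -> str:
    """Register `name` under a throwaway key, then 'burn' the name by
    discarding the secret half. Returns the public key so anyone can
    see the name is taken, while no one can ever update it."""
    secret_key = secrets.token_bytes(32)   # stand-in for a real signing key
    public_key = secrets.token_hex(32)     # stand-in for the derived public key
    # ... register `name` -> public_key on the network here ...
    del secret_key                         # never stored: the name is now no-man's-land
    return public_key

locked = reserve_and_burn("com")
print(len(locked))  # 64 hex characters; only the public half survives
```

This needs no per-vault list or per-registration check: the cost is paid once, at reservation time.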

1 Like

Thinking about this from a scam-first perspective is the wrong way around in my book.

Think about it first from what a user is aiming to achieve, and then build a system around that with the powerful new tools we have, and scamming becomes far less likely.

I.e. create a useful, communicable, hackable URL structure, then a system of metadata and pet-naming for findability and verification, and you have a system which is fit for the task.
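The pet-naming layer mentioned here could be as simple as a local, user-controlled table mapping memorable names to addresses the user has verified out of band. A sketch of the idea (class, method names, and addresses are all made up for illustration):

```python
# A petname table is local to the user: lookups never depend on a
# global namespace that squatters could occupy.
class PetnameTable:
    def __init__(self):
        self._names = {}

    def assign(self, petname, address):
        """User verifies `address` out of band, then names it locally."""
        self._names[petname] = address

    def resolve(self, petname):
        """Return the verified address, or None for anything unassigned."""
        return self._names.get(petname)

pets = PetnameTable()
pets.assign("amazon", "safe://hyfktce8...")  # illustrative address
print(pets.resolve("amazon"))   # the verified address
print(pets.resolve("amaz0n"))   # None - lookalike names resolve to nothing
```

Because resolution is per-user rather than global, a scammer squatting a similar-looking name gains nothing: it simply isn’t in the victim’s table.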

3 Likes

That’s why I like the concept of doing away with “domains” per se. I thought alpha 2 did this quite nicely with the “safe://<service>.<location>/path” type approach, but I see your point about the large-to-small descending hierarchy.