I plan to attack the name service

I plan to attack the name service (for free) once the network and test-safecoin go live.

[The Name Service] is a simple use case for Unified Structured Data (see RFC XXXX). In this case we require to set three items

  1. The type_tag shall be type 4

  2. The Identity shall be the Sha512 of the publicly chosen name (user chooses this name)

  3. The data field:
    struct dns {
        long_name: String,
        encryption_key: crypto::encrypt::public_key,
        // service, location
        services: HashMap<String, (NameType, u64, bool, bool)>,
        // As Directories are identified by a tuple of 4 parameters (NameType, tag,
        // if private/encrypted, if versioned) it makes sense to have the 4
        // identifiers stored for better scalability.
    }

To find this information an application will compute Hash(Hash(public_name) + type_tag) and retrieve this packet.
@dirvine - RFC 0002 “Name Service”
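A client could derive that lookup address roughly like this (a minimal sketch; the exact serialization of type_tag and the use of SHA-512 for the outer hash are my assumptions read off the quoted text, not a confirmed wire format):

```python
import hashlib

TYPE_TAG = 4  # the Name Service type_tag from the RFC

def lookup_address(public_name: str) -> bytes:
    """Derive the network address for a public name:
    Hash(Hash(public_name) + type_tag)."""
    inner = hashlib.sha512(public_name.encode("utf-8")).digest()
    # ASSUMED encoding: type_tag appended as 8 little-endian bytes
    outer = hashlib.sha512(inner + TYPE_TAG.to_bytes(8, "little")).digest()
    return outer

# The same name always maps to the same address, so anyone
# can compute where to fetch the DNS packet from.
addr = lookup_address("example")
```

The point for the attack below is that the address is a pure function of the chosen name, so whoever PUTs to it first owns it.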

Plan

  1. Continuously register accounts
  2. Register domain names with those accounts using their SC PUT allocation credits
  3. Continue until the free SC PUT allocation is canceled
     • There is (as of yet) no plan for how or when that will happen.
  4. Either profit or undermine the network

Math

Assumptions

  • Three years of free SC PUT allocation
  • Account registration and all PUTs spent on domain names within 60 seconds (and that’s generous)

Pseudo-code

number_of_minutes = 1576800 # for three year example
sc_equiv_allocation = 50
StoreCost = (farming_rate * total_clients) / GROUP_SIZE # @Fraser & @dirvine
domain_names_per_account = sc_equiv_allocation / StoreCost
alphanumeric_characters = 36
min_domain_length = ln(number_of_minutes * domain_names_per_account) /
                    ln(alphanumeric_characters)
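The pseudo-code above runs almost verbatim in Python. Since farming_rate and total_clients are unknown before launch, a flat StoreCost of 1 PUT per SC is assumed here purely for illustration:

```python
from math import log  # natural log, i.e. ln

number_of_minutes = 1576800        # minutes in three years
sc_equiv_allocation = 50           # free SC PUT allocation per account
store_cost = 1                     # ASSUMED: 1 SC buys 1 PUT (real value is dynamic)
domain_names_per_account = sc_equiv_allocation / store_cost

alphanumeric_characters = 36       # a-z plus 0-9
min_domain_length = (log(number_of_minutes * domain_names_per_account)
                     / log(alphanumeric_characters))

print(round(min_domain_length, 3))  # 5.074 under these assumptions
```

That is: one account per minute, 50 names per account, for three years, exhausts the alphanumeric namespace up to roughly 5 characters.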

Real-time variables

Storage per Safecoin

The conclusion below assumes that 1 SC will purchase 1 MB of ImmD or 100 KB of SD for the duration of the three years. This is not necessarily true.

Therefore a safecoin will purchase an amount of storage equivalent to the amount of data stored (and active) and the current number of vaults and users on the network.
*-- @dirvine & @Fraser - RFC 00015 “Safecoin Implementation”

Therefore, the number of domain names I may be able to obtain per account could be substantially larger than 50.

Number of machines performing this attack

This conclusion assumes that only one machine is performing the attack (new login, 50 registrations, logout, repeat). There may be multiple machines under my control able to coordinate this attack, or by that time I may be able to gather a decent-sized botnet to amplify it significantly.

Conclusion

At the end of three years at the aforementioned static PUT cost, min_domain_length = 5.074, meaning that all domain names (as yet unregistered) with an alphanumeric length of 5 characters or fewer will belong to me. If I use only alphabetic characters, I’ll be halfway to 6 characters by then, with lengths 1 through 5 being obtained at a substantially increased rate (the number of names grows exponentially with length, so the attainable length grows only logarithmically).
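The “halfway to 6 characters” figure can be checked directly (again assuming, for illustration only, a flat StoreCost of 1 PUT per SC so that each account registers 50 names):

```python
from math import log

# minutes in three years x assumed 50 names registered per account
total_registrations = 1576800 * 50

# with 26 alphabetic characters instead of 36 alphanumeric ones
alphabetic_only = log(total_registrations) / log(26)
print(round(alphabetic_only, 2))  # 5.58, i.e. "halfway to 6 characters"
```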

Unsuccessful Mitigation

Different Free PUT or Free Safecoin implementation

Any free PUT or free Safecoin implementation is exposed to this attack. As long as Proof of Unique Human remains infeasible, this attack will continue to be successful.

Limit creation of domain name per account

If a limitation such as this was introduced, then my implementation of the attack may be slightly different, however the core premise would still be viable. Also, that would create an arbitrary limit in an otherwise limitless system. This is not to be desired.

Successful Mitigation

No Free PUT or Safecoin

The issue has been discussed in depth on these forums, and amongst the devs as well I’m sure. The decision has been made to give free PUT allocation (using a magic number) to new accounts. If this decision were to be reversed, that would nullify my attack.

However that would not prevent moneyed interests from pursuing a similar type of attack. This solution only pertains to average users such as myself.

Petname System

This prevents two identities using the same name, is this really the best way to go[?]
@dirvine - RFC 0002 “Name Service”

I think that it is not. A system that does not prevent two identities from using the same name would completely nullify my attack. A system of that kind, functioning in a decentralized environment, is most commonly referred to as a Petname System.

The answer in [Petname Systems] is that all applications have to teach users to recognize two different ways of displaying names. One way is for Petnames and the other way is for ’random insecure stuff I show you’. And users have to be able to tell the difference or they will do insecure things (like send a message to the wrong Joe).
Yaron Y. Goland (founder of Thali)


You never cease to amaze me with your thorough understanding, succinct breakdowns, and continued passion. My respect for you continues to grow. :smiley:

As for the chicken-and-egg problem, would it make sense to allow unlimited free PUTs for a period of, say, two weeks in order to populate the network? We could brand it the pop period. During that period the network would assign each client a random domain name and reject the creation of any custom domains. Any domain that isn’t populated with at least, say, 500 MB within the first 48-72 hours would be cannibalized by the network, while still retaining the uploaded data in a miscellaneous data-aggregate archive open to the public. Or, since Dirvine wisely avoids time, the domain would be cannibalized if less than 4 GB is uploaded to it by the two-week deadline.

This, I hope, would prevent your earlier stated attack by increasing resource requirements and randomizing domain names to something as unreadable as “https://www.^7-uyf%3fa&.SAFE”. Farming remains a viable option and everyone is happy! :smile:

Am I naive in thinking this? I would love to read your thoughts on the matter.


Irvine in the RFC conversation:

We can restrict DNS entries if we wish, it is an implementation issue for sure. Good to get community feedback on this one. Personally I think we should restrict to 3 at a time (so require a delete of an old one to create a new one after that)


Will you offer “Name Service Attack as a Service”?

So for instance, if I’m too lazy to register names, can I pass them on to you (through SafeX, through a smart contract) so that you register them for me? :stuck_out_tongue:


So you’re assuming that all websites of value are huge and take up vast amounts of data? What if your site consists of mostly very valuable text?


That’s why it’s called the “pop period”: so that participants are aware of the phase and its implications. Those who want to store small valuable data without being able to commit to a larger data dump would naturally wait until that period is over. Fortunately, it would last a relatively short time. So I made no assumptions, just propositions. Thanks for your query. :smile:

That is something I had not seen, @digipl, and it worries me.

This Network has many good features, none of which limit the user or the creativity of developers. My point being that we don’t know what domain names may become or how they might be used. Limiting them with an arbitrarily defined magic number is foolishness.

Also, as I was mulling this over in the meantime, one instance of spam mitigation stood out to me. Back in the early days of the web, many users hosted their personal email on localhost: their own computer at their house. Then spam happened.

Spam had been on the rise for a few years, and many ISPs decided to block access to port 25 (the SMTP port) so that home computers (botnet concerns were most often cited) couldn’t send bulk spam emails.

I would invite others to study the repercussions thereof. Google’s systematic acquisition of all of our private data via Gmail, and social media (hosted on data-collecting servers) becoming the de-facto standard of communication.

But at least we don’t have any more spam, amiright guyz?…guyz?

The point is that putting an arbitrary fix on a fundamentally broken system will lead to further problems, and issues that may be much more severe than the one we attempt to fix. And in fixing that, we stonewall creativity and natural evolution. This is not to be desired.

There may be various reasons to have more than one domain name. Hell, the same goes for personas! But we’ll never be able to explore the possibilities if we shut them down right now and create a crippled system. There has been too much thought and effort put into this Network to get it wrong on a detail such as this.

One last point, if I may make a lengthy post even longer. Much like the centralization of email before it, @19eddyjohn75’s suggestion exemplifies perfectly what is to come: instead of having autonomous (definition: not controlled by others or by outside forces; independent) domain names, this type of centralization will occur.

Has it ever occurred to anyone why ICANN came into being?

In the early network, each computer on the network retrieved the hosts file (host.txt) from a computer at SRI which mapped computer host names to numerical addresses. The rapid growth of the network made it impossible to maintain a centrally organized hostname registry and in 1983 the Domain Name System was introduced on the ARPANET…
*-- Wikipedia - Domain Name

This system reeks of centralization. And where there’s centralization, there’s corruption, greed, and use of force. Who knows what this could devolve into (all FUD aside, no one does).

If people are redesigning the system from the start you’d probably do it entirely differently. But why could God build the world in only seven days, you know? Because he had no legacy systems.

…[ICANN] needs enough power to resolve conflicts that do need to be resolved.

…[L]et’s have different systems … that simply point to the unique identifiers. And then you don’t need to have this globally consistent naming system.

…In the long run you could create thousands of top level domain names, but you’d end up with the same issues. As long as you have names you’re going to have scarcity, and artificial scarcity.
Esther Dyson - founding chair of ICANN

These problems (conflicts that need resolving, artificial scarcity) are inherent in these kinds of systems, which is why ICANN exists in the first place. Why can’t we have a distributed version of the DNS root lookup? Because these types of problems between businesses, individuals, and governments will continue to persist.

When considering this, you have to look at reality - at nature - and say: Can we really have a globally unique absolute naming convention? My thought:

Identification in nature is always relative, never absolute.


I suspect the free names will become more like IP addresses, with the added bonus of being somewhat recognisable.

I also suspect a market-driven name resolution service will become popular, which will resolve to the free names (like IP addresses).

This isn’t a bad thing, IMO. It is just inevitable.

You and me too buddy! :smile: Quite a few old folk on this forum hey :smile: Gosh, I’d forgotten that was how I did things in the days of dial-up. Thanks.

I think there’s more than a substantial difference between ICANN and the suggestion to limit the number of domains and public and private IDs per account. They’re totally different IMO!

One is centralised and can be taken over or act without accountability. Classic centralisation risks, with added human “frailty” :smile:

The other is a technical choice to ensure that elements of the network (in this case the human-controlled elements that we know can’t be trusted!) will find it harder to engage in centralisation of network capital (scooping up identities into their own privatised ICANN to do with as they wish).

Like the network measures to inhibit spam, these account limits don’t stop people having as many IDs and domains as they want; they just impose a “cost” for doing so that makes it harder for one human to grab and control large areas of network capital (IDs/domains) and human capital (spam/attention).

I see them as trying to defend against centralisation, and not a centralisation risk in themselves.

With some kind of human validation your plans would easily be thwarted, glad for this thread and post.

Even with a limit of 3 domain names, you can make a new account and add more domains. I think a proof of work between registrations could solve the captcha/proof-of-human problem and limit potential attacks by computers. Basically, each DNS registration with the SAFE network would require a duration’s worth of computation, and that would slow you down significantly.
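A hashcash-style check along those lines could be sketched as follows (purely illustrative; the difficulty parameter and the idea of binding the work to the requested name are my assumptions, not anything specified for the network):

```python
import hashlib
from itertools import count

DIFFICULTY = 16  # ASSUMED: leading zero bits required; tune to set the delay

def solve_pow(domain_name: str) -> int:
    """Find a nonce so that sha256(domain_name + nonce) has
    DIFFICULTY leading zero bits. Expensive for the registrant."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():
        digest = hashlib.sha256(f"{domain_name}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(domain_name: str, nonce: int) -> bool:
    """One hash: cheap for the network to verify."""
    digest = hashlib.sha256(f"{domain_name}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))

nonce = solve_pow("example")         # slow step: throttles bulk registration
assert verify_pow("example", nonce)  # fast step
```

Raising DIFFICULTY by one bit roughly doubles the average registration time, so the throttle is tunable; note it slows machines but does not distinguish humans from them.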


Note to self, investing in SAFE Network domain names will be a waste of money.

But limiting the user to 3 domains isn’t a cost, it’s a hard LIMIT. If you raised the price of buying a new account, even drastically, every time you bought a new domain, that would be a cost. But if one has to sacrifice a domain in order to start a new one, one is limited to a set number of domains. What if I want to set up a domain for my artwork, two domains for two different businesses I run, and a fourth for my personal life? Whoops, if I’m limited to 3 domains I can’t do that!

Perhaps you should clarify what you mean by allowing the user to have as many domains as they want but at a cost. Maybe I’m just misunderstanding you.

But the result is the same: the restriction of human freedoms. ICANN restricts freedoms, and so would the decentralized network. The government defends its position on censoring and monitoring internet traffic on the basis of “security”; how is restricting people’s ability to create domain names on the basis of security any different? You’re still sacrificing freedoms for security.


@dallyshalla, while this may seem like an obvious solution, you have to consider that many tech-savvy people would scoff at any attempt to restrict access to the system to a GUI.

You cannot simply ignore command-line users, as they comprise many of the geeks that make the internet. Relying on a GUI also creates an inflexible system.

I also forgot another point: IIRC, since the registry is just a simple PUT to the network, the only code that could enforce a restriction on the number of domain name entries would be on the client machine.

Here comes the beauty of FOSS - what’s to stop me from recompiling the code:

sed 's/let max_domains = 3/let max_domains = 100000000/g'

And running that on the client machine?

There’s no way that vaults (with any of the existing personas) would check how many domains have been registered; that (right now) is only done on the client machine.

P.S. This is not addressed in the RFC conversation that @digipl quoted:


Why not just require a certain and considerable amount of ‘resource’, i.e. hard-drive space, for each domain registered? That seems in line with what SAFEnet is all about anyway. So squatting could become quite onerous.

What if we thought about it the other way? Instead of preventing domains from being bought, what if they just couldn’t sit empty? Most squatter domains have a holder page, right? We’ve all seen them. So why not have users flag these pages? If a domain gets 100, 1,000, or 1 million flags saying “this domain is nothing more than a placeholder and a name,” it gets recycled by the network at the cost of the buyer. You might even be able to write a script to scan for placeholder HTML and find them faster for verification and deletion purposes.
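The flag-and-recycle rule described above could be modelled roughly like this (a toy sketch; the threshold value and the choice to count one flag per unique account are my assumptions):

```python
FLAG_THRESHOLD = 1000  # ASSUMED: flags needed before a domain is recycled

class DomainRegistry:
    """Toy model: domains accumulate placeholder flags from unique
    accounts and are recycled once the threshold is reached."""

    def __init__(self):
        self.flags = {}  # domain -> set of flagging account ids

    def flag(self, domain: str, account_id: str) -> bool:
        """Record a flag; return True if the domain gets recycled."""
        voters = self.flags.setdefault(domain, set())
        voters.add(account_id)  # a set, so repeat flags don't count twice
        if len(voters) >= FLAG_THRESHOLD:
            del self.flags[domain]  # recycled: name is free to claim again
            return True
        return False
```

Counting unique accounts rather than raw flags matters here: otherwise one client could recycle any domain alone, and of course free account creation (the attack in the opening post) would undermine even the unique-account version.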


Use it or lose it… I like it… :smiley:
Edit:
It would seem consistent with all the GPL/MIT open-source-type licensing, only transposed to names… there are conditions for registering domain names; maybe a year’s inactivity or something could “re-cycle” the name.
The issue I can see (maybe addressed) is that it doesn’t prevent loads of names being registered and then put up for sale for the first year. And what happens to the name after the year? Is it auctioned/sold, and what happens to the proceeds? Or is it just first come, first served, with interested parties sitting by their laptops waiting for the expiry time?
Interesting idea you suggest. :smile:

Well, that’s the beauty of having the network vote on it instead of having a time limit. It would also serve for when someone decides to take down their site and put up a “for sale” sign. The same rules would apply. You use it and then decide to sell it? Uh, no. We’re not turning domains into a housing market where good usable houses sit empty. Use your house or it gets recycled so someone else can live in it.

Not sure if this is actually preferable over it being coded in; one law for all, kind of thing. Like the general idea though. :smile:

True, but if you have real humans reviewing sites, you also have people finding old archived data. If it’s just a time limit, then someone can simply upload a file near the end of the year and reset the clock, or just re-buy the domain when it expires. If it’s a human, they’ll spot a placeholder right away regardless of age.