Self driving cars that own themselves

Such a great idea, because it could be a first step toward flushing Wall St. down the toilet, and it addresses the problem of profit efficiency. Next to the problem of profit efficiency, ROI is practically irrelevant.


From the article:

There will still be economies of scale in production and in purchasing. So, I think you’ll still get dominant groups emerging - maybe you’ll sign up with Bumblecar one day and the next day switch over to Freewheel or something - but there will still be these big groups that operate fleets. And they would be able to out-compete individual self-owned vehicles.

I don’t see why these autonomous cars wouldn’t be produced en masse. And after removal of the middle man (the owner) who gobbles up profit as their share, these autonomous cars actually have a bigger budget for improvements, so they should out-compete the human car owners!

In addition, I don’t think these cars should go it alone. It seems more efficient to me to let the cars form an autonomous hive instead. A bit like ants. :slight_smile:


ROI is absolutely relevant. It doesn’t matter if a man or a machine owns it… Any algorithm is going to want the best profit the market allows, or it risks a future where it cannot pay its bills. Markets surge and wane. If your business is programmed to break even, you will die in the long run.

Profit isn’t evil. It is just food in the pantry… What is evil is using government to manipulate the market and gain such profit via monopoly or regulatory capture, keeping competition out…

In this sector, Lyft and Uber are undermining the taxi monopolies. That isn’t Wall Street being evil; that is unions being evil. If automated taxis come into the market, the monopolists will fight even harder.


Might be relevant


An interesting conversation. Cars and machines that own themselves. I don’t see why not, as our legal fictions are just that, fictions, totally separate from the flesh-and-blood human beings that we are. You as a human being have very different rights than you as a person, a legal fiction. I see no reason at all that a machine couldn’t be assigned its own legal fiction. Of course this raises a couple of questions.

Whom do I sue if something goes wrong and I want, for whatever reason, to take my autonomous taxi to court? And how is this taxi to appear in court and be punished if convicted? You can’t exactly lock it up in a jail cell.

If a human has natural rights as a man under natural law, and also state privileges granted to his strawman via positive law, then if one is to grant a robot a strawman, does that robot also have natural rights of some form?

What happens if a robot does evolve true sentience at some point without it being programmed in?

If robots have the right to own themselves why don’t we as humans have the same right?

How do we avoid or deal with the same issues we’ve run into with humankind when allowing robots self-ownership: i.e. poverty, prejudice, accidents, health/well-being, and whatever else might evolve?

Will robots have emotions? Do we really want something we’re trusting our lives to to lack empathy or the ability to feel? And what happens if your car or robot can feel? What happens when your robot waiter gets depressed or your car gets angry? Will we have A.I. therapy sessions?

Will a robot’s A.I. be inextricably linked to its robotic body, or will it be able to upload and download itself to different physical bodies? Will an A.I. be able to “work” as a taxi one day and then get a software upgrade to “learn” new job skills and work as a waiter the next? Keep in mind that robots can live a lot longer than human beings, and an A.I. could theoretically be immortal. The same A.I. that served your great grandmother as a little girl could be serving you into your old age, and serving your grandchildren to come.

What happens to all the human workers who lose their jobs to A.I.s and robots? How do they make a living and occupy themselves?


@Blindsite2k Maidsafe will own itself.

You could ask yourself the same questions about it.

I don’t think we have all the answers yet.

I do think that there is a strong argument that humans don’t need to work unless they want to, provided that the infrastructure that humans have built provides adequately for subsistence or better. As the barriers to entry are diminished and the capital needed to be productive is significantly reduced, productivity could spike without a lot of human labor.


I don’t know if Maidsafe can own itself, because it’s a decentralized network and, beyond the Maidsafe Foundation, distinctly lacks a self to own. To own oneself implies ego, and the Maidsafe network has none; it has no sense of self. At least none that I’ve seen as of yet. Therefore, no, Maidsafe does not own itself; rather, it is collectively owned by those that use it. Now if the network developed its own independent intelligence, separate from the users that utilized it, and started making decisions on its own, THEN it could be argued to have developed a sense of self and self-ownership. Of course then we’d be entering a whole new ball game, as we’d be dealing with a self-aware and sentient A.I. that spanned the planet and had vast amounts of information on everyone.

I don’t know. It will pay its own bills, and buy its needed resources, more or less… If it doesn’t own itself, who exactly does?

I see it as pretty much the same thing. Cars are certainly more tangible, but the idea that they pay their bills and negotiate fees according to market conditions will likely be much in line with SAFE.

A.I. is a separate matter and concern, and neither technology ought to be assumed to have it, or assumed not to be able to evolve into it.

Note the edit… As I said in my edit we collectively own it.

Same reasoning why corporations should not be granted personhood: because they are collectives without a specific self to take responsibility for their actions, good or bad. If Maidsafe has self-ownership, then it follows it also has responsibility. Therefore, to whom do we assign this responsibility? Whom do we hang or give a medal to? An anonymous decentralized hive mind is not the poster child for ownership and accountability. In fact, that’s exactly wherein lies Maidsafe’s strength against centralized organizations that are based on ego, like law and government. It is decentralized, it lacks ego, it doesn’t have a sense of self; there is no core to attack, no throat to cut or strangle, no palm to grease or ego to stroke. The network is secure, and it doesn’t care who uses it. It doesn’t have an agenda. That’s the point. One of the strongest security features of Maidsafe is that it does NOT have self-ownership. If it had self-ownership, it would cease to be secure. Even at launch it requires, what is it, 15 core farmers to start the network. Then those will launch more, and it’ll forget the ones that started it and self-perpetuate.

One could say Maidsafe is owned by its users. However, how would one go about suing the collective network? You can’t, because who would you send the court summons to? Moreover, how can you hold someone responsible for the actions of someone else halfway around the world whom one had no idea about and no control over, etc.? Maidsafe very effectively shreds the concept of legal liability via anonymity. Which is perhaps my ultimate point. Are we doing away with the concept of law and legal obligation? I’m an anarchist, so this isn’t entirely an unpleasant concept; however, I want to be entirely clear here.

Say there was an accident involving an automated taxi, and it damaged my car or nearly ran me over while I was crossing the road. Do I a) sue the taxi, or b) throw a grenade in it and blow it up? Yes, I am taking it to that extreme. We already have enough problems with corporations using limited liability. If I have a business via Maidsafe and decide to dump nuclear waste in someone’s backyard, but avoid prosecution because I’m using the SAFE network and am anonymous, is this a good thing? Additionally, what happens if I’m an A.I. running multiple corporations doing the exact same kind of thing? Now what?

What happens when someone has someone assassinated because they have a personal grudge against them? Do we simply shrug and say “Oh well. Tough luck.”? Yes, when people are wronged they want someone to blame. They want vengeance or justice (they’re pretty much the same, but people enjoy quibbling). And people throw extreme tantrums when they can’t find someone responsible.

Maidsafe isn’t just getting rid of law but also of personal accountability. All of law is based on holding people personally accountable for their actions. Yes, there is also the public sharing aspect of Maidsafe, but until something is made public, one acts in private with total impunity. This raises the question of how we are going to interact, not only with robots but with each other.

But who is going to get sued if MaidSAFE does something some government considers naughty?

There is nobody to sue… It is pretty well designed that way.

We can debate whether there ought to be fictional entities or not all day, but the odd thing we have to wrestle with is that there are entities that are not fictional… (like Bitcoin or MaidSAFE)

I am not a big fan of liability anyway. Chances are good that a computer-driven car is not out to hurt anyone… You could probably check the code to make sure… A sizable chunk of society is fixated on finding somebody to blame for life happening.

Yes, because finding someone to blame also means finding out who pays the bills, like, say, medical bills for instance. Say you’re completely right here and the programming is completely sound, and the car totally hit someone by accident; it just slipped on some ice or something. Things happen. But still, who is liable? When the human ends up in traction for a month, who’s paying the hospital bills?

You do point out something very important here. We have a culture of avoiding responsibility, of trying to pin responsibility, and therefore the paying for stuff, on someone else. I too find it distasteful if not outright disgusting, but I have to account for it and deal with it. Perhaps Maidsafe will help change that, as people will no longer be able to simply pin the blame on someone else, because there will no longer be anyone else to blame. The only actions they will be able to count on and hold accountable will be their own. Perhaps this will lead to an increased sense of integrity and responsibility. But then again, perhaps I’m just being optimistic.

The police did arrest murderers before there was an NSA or DNA tests, right? Assuming this was a thing that happened, police or, hopefully, private protection agencies should be able to find the people that are deliberately trying to hurt people, and reputation should curtail the agorists from unintentionally hurting people.

Gosh there’s such a can of worms here huh?

Automomous businesses will need to be accountable too.

I agree with you about the overly blaming culture. It sucks in lots of ways. As does the avoidance of being held accountable - which is part of the justification for a legal system - to stop people having to “ride in with guns” and hold people accountable themselves. It’s a fine line between that and coercion of others on the excuse of having been wronged, so we have “courts” (first the village elders, then … until what we have now with all its faults and foibles) to try and strike a balance.

(BTW I’m not suggesting we debate this here, but it could go into an off-topic thread for those who want to. When that thread comes up with a way that SAFE can contribute to the issue it can come back to the main on-topic categories.)

I think “machines that own themselves” is just for lack of a better phrase. A little bit of clickbait, or a way to make you really dig deeper into your philosophical mind, which it looks like it did what it intended :smile: which I personally think is great. But I think it’s more or less just like this project: autonomous. It takes care of itself. Outside of its parameters it knows not what to do. Someone put that there.
I don’t think the legal system is just, except by coincidence when it happens to benefit somebody, so I’m not supporting it when I say that most likely someone, meaning a person who put that autonomy there, would be held accountable. To me what’s brilliant about Maidsafe is that it’s not coming stock on a machine, nor is it purchasable. It is a free experiment that an individual, of their own will, downloads and uses and connects to other users with, and when a mass of people use something powerful, it’s hard to stop them, especially when it is so well built with privacy and physically intangible. So “someone”, being the Maidsafe Foundation, put it “out there”, but not “into” a particular machine owned by a company that owns a fleet of “autonomous” vehicles. Since there is an owner of that fleet of cars who put that software in there, someone is accountable, which to me makes this idea more feasible within the current legal and regulatory systems, because someone can still go to jail. If it was provably put there by someone, then someone can be held accountable; therefore the car is not going to court, and the system grins its devilish grin that it has a body to put in jail.


Part of the advantage of self-owned machines is that they are free agents. If I can push a button on my phone and a car arrives to take me where I need to go, that makes it a lot less necessary to have a car – or perhaps a second car. If we tie it back to “you must have an owner”, we lose some of this benefit… If you force some company to take on the liability for what the machine does when it is working of its own volition, not for them, then you greatly reduce the likelihood this asset will be shared.

No fault insurance is a simple enough solution to this problem. You could do that through Smart contracts and cryptocurrency easily enough.

Unlike people, cars can be programmed to drive responsibly, and we can expect them to follow their code… Machine cars can probably drive better than humans can, in part because they should be able to communicate with one another and negotiate right of way, etc… There are some pretty big advantages there.
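To make the “negotiate with one another” idea concrete, here is a minimal sketch of how two cars could settle right of way deterministically. The message format, names, and tie-break rule are all illustrative assumptions, not any real vehicle-to-vehicle protocol:

```python
# Hypothetical sketch: two autonomous cars approaching an intersection
# exchange intents and both compute the same winner independently.
from dataclasses import dataclass

@dataclass
class Intent:
    car_id: str          # unique vehicle identifier (assumed format)
    arrival_time: float  # seconds until the car reaches the intersection

def negotiate_right_of_way(a: Intent, b: Intent) -> str:
    """Return the car_id that proceeds first.

    Earlier arrival wins; ties break on the lexicographically smaller
    id, so both cars reach the same answer without a central referee.
    """
    if a.arrival_time != b.arrival_time:
        return a.car_id if a.arrival_time < b.arrival_time else b.car_id
    return min(a.car_id, b.car_id)

winner = negotiate_right_of_way(Intent("bumblecar-7", 2.1),
                                Intent("freewheel-3", 3.4))
print(winner)
```

The point of the deterministic tie-break is that neither car needs to trust the other’s goodwill, only that both run the same rule.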

True, humans do follow a pattern of behaviour, although it might be a bit difficult proving it.

The question is how? If one makes an entirely online business on the SAFE network and conceals one’s identity, then how is one held responsible? The worst thing that can happen is one’s business tanks, and then all one has to do is start over with a new identity.

And if there is no owner as has been suggested?

What happens when we create stand-alone autonomous androids that own themselves? Because you know, if we create cars that own themselves and a Maidsafe that owns itself, we’ll create androids that own themselves. What happens if you have a robotic butler or maid or nanny that owns themselves and then runs amok? Are we saying that reputation will deter bad behaviour, and if that fails we’ll just send in a hit squad to destroy the rogue android? Would we also apply this to a human populace? How would this apply to a decentralized hive mind like SAFE?

I’m not making judgement calls here. I’m trying to understand how you envision accountability to exist without liability.

I suspect the situation that everyone is so afraid of already exists.

If you are insured and wreck your car into somebody, chances are quite good that payment is made in full and in abundance. If you are uninsured and you hit somebody, unless you have substantial assets, folks rarely get blood out of a turnip.

So liability is a luxury… It exists where we can afford for it to exist, and it is “legislating that rivers run upstream” where we cannot afford it. Mostly liability is a government construct to enrich lawyers… Most everything in the world is dangerous to one degree or another. If you fall off a ladder, it is rarely the ladder’s fault - but a good lawyer will likely get you a settlement anyway, as it costs the ladder company 40k or more to go to trial even if it is totally innocent.

Machines don’t really change the equation at all.

No-fault insurance is the best solution, and it would be quite easy to build and buy robotically with cryptocurrency. A robot could submit a hash of its code, and share its maintenance records with the insurer, and the insurer could charge appropriately.

If it caused injury without insurance, it could be impounded until it paid its fines, or until somebody paid the fines on its behalf in exchange for future service…
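The hash-plus-maintenance-records idea above can be sketched in a few lines. This is a toy illustration only: the approved-build list, the pricing rule, and every name here are assumptions, not any real insurer’s method:

```python
# Hypothetical sketch: a robot fingerprints its own control software
# and an insurer quotes a premium from that plus maintenance history.
import hashlib

def code_fingerprint(code: bytes) -> str:
    """SHA-256 of the software the robot claims to be running."""
    return hashlib.sha256(code).hexdigest()

# Illustrative registry of builds the insurer has audited.
APPROVED_BUILDS = {code_fingerprint(b"safe-driver-v1.2")}

def quote_premium(fingerprint: str, days_since_service: int) -> float:
    """Toy pricing: base rate, discounted for audited code,
    surcharged for overdue maintenance."""
    base = 100.0
    if fingerprint in APPROVED_BUILDS:
        base *= 0.5   # audited build: half price
    if days_since_service > 90:
        base *= 2.0   # overdue maintenance doubles the rate
    return base

fp = code_fingerprint(b"safe-driver-v1.2")
print(quote_premium(fp, days_since_service=30))  # → 50.0
```

The hash only proves which code the robot claims to run; a real scheme would also need some way to attest that the claimed code is what is actually executing.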

These objections are rather petty, it seems to me.


The question is how? If one makes an entirely online business on the safe network and conceals one’s identity then how is one held responsible?

You don’t need to have much imagination to come up with ways. In fact, this requirement of finding a person gives some a convenient justification for objecting to and preventing such operations.

Setting aside the how: do you agree it’s a good thing for an autonomous entity to be held accountable? Both because otherwise it could do harm with impunity, and because this would create a backlash that means they’re all labeled bad (as we can observe happening with businesses that are increasingly unaccountable).

I would suggest that with something like SAFE, as compared to a self-driving car, instances of real liability will be fairly rare… It isn’t like you can run over somebody with SAFE… Any damages are likely non-tangible, and part of the risk of doing business on the network.

Fraud is nothing new. Remember that nice HYIP that folded a year ago? Sponsored a bunch of conferences, had reps touring the country giving away a Bitcoin Car… “We’ve been hacked” was the last anybody saw of the business… as they ran off with a million or two USD worth of BTC… Even with business partners all over the world, nobody knew who the principals were, and no jurisdiction had much incentive to go after them. Are they in the UK? Are they in Panama? Are they in South Africa or the US? Yes, they are everywhere and nowhere at the same time. Will SAFE make it easier for folk like that? Probably, but does it matter? Not really; they aren’t going to get busted either way. Buyer beware…

On the other hand transparency is becoming easier than ever before… If you are doing legitimate business it is easy to show you are doing legitimate business.

Smart contracts can eliminate or mitigate liability. One probably ought not do business with an unknown entity without one…
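As one way a smart contract could mitigate liability with an unknown entity, here is a minimal escrow sketch: funds are locked up front and can only move to the seller on confirmation or back to the buyer on timeout. The state names and amounts are illustrative assumptions, not any particular contract platform:

```python
# Hypothetical sketch: escrow as the simplest liability-mitigating
# "smart contract" between two parties who don't know each other.
class Escrow:
    def __init__(self, amount: float):
        self.amount = amount
        self.state = "funded"  # buyer's coins are locked at creation

    def confirm_delivery(self) -> None:
        # Buyer confirms receipt; funds release to the seller.
        if self.state == "funded":
            self.state = "released"

    def timeout_refund(self) -> None:
        # Deadline passed with no confirmation; funds return to buyer.
        if self.state == "funded":
            self.state = "refunded"

deal = Escrow(10.0)
deal.confirm_delivery()
print(deal.state)  # → released
```

Neither party can take the funds unilaterally while the escrow is open, which is the whole point: the contract, not a court, limits the worst-case loss.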

I don’t believe it’s a question of good or bad, but rather it’s something human beings invariably do. When a person does good or bad, the recipient of that action generally wants to know who is responsible for their fortune or misfortune. In fact, counteracting this propensity is where we get phrases like “Don’t look a gift horse in the mouth” or “Turn the other cheek”, because it is our nature to seek vengeance, or to find out who is responsible for doing us good. Our entire legal system as it stands is based on this concept. Never mind the moral and social implications. The word “robot”, iirc, is Slavic for slave, after all. If a robot can own itself, it is essentially being granted its freedom, is it not?

Personally, I don’t believe that any authority should be able to hold one accountable; however, it does give me pause when we start referring to collectives as singular entities in one breath and then turn around and use those same collectives to avoid responsibility. If something is to have ownership, it must also be able to bear the burden of responsibility. My greatest problem with corporations is that they avoid responsibility for their actions using limited liability, via the argument that they are merely a collective and no one of their number is singularly responsible. Sound familiar? Say Monsanto or BP Oil or some equivalently evil corporation migrated to or appeared on Maidsafe. How would it be held accountable? And if one is to argue that it shouldn’t, or that it’s pointless as in the case of the bitcoin scammers, then how are we as a society to account for the harm done?

I think the advent of ownerless machines is forcing self ownership as a whole into the limelight because as much as we like to think we live in a free society the reality is our entire legal system and culture is based upon the notion that we are merely vassals and serfs to the state, that we ourselves are property. To actually argue for machines to have freedom is to mirror that freedom for ourselves because we are treated not much better than the mechanical slaves we have created.