Instigator / Pro: 7 points
Rating: 1695 | Debates: 76 | Won: 74.34%
Topic: On balance, self-driving cars are ethical

Status: Finished
All stages have been completed. The voting points distribution and the result are presented below.

Arguments points: Pro 3, Con 0
Sources points: Pro 2, Con 0
Spelling and grammar points: Pro 1, Con 1
Conduct points: Pro 1, Con 1

With 1 vote and 5 points ahead, the winner is ...

Intelligence_06
Parameters
Category: Cars
Time for argument: Three days
Voting system: Open voting
Voting period: One month
Point system: Four points
Rating mode: Rated
Characters per argument: 10,000
Required rating: 1000
Contender / Con: 2 points
Rating: 1448 | Debates: 17 | Won: 32.35%
Description

Rules:
1. Forfeiture = loss
2. Concession = loss
3. Definitions should be backed by credible evidence in order to be considered valid
4. Burden of proof is shared
5. "Self-driving cars" refers to the overall concept of a self-driving car, rather than any one brand of self-driving car.
6. Have fun.

Round 1
Pro
Sorry if the argument is too simple and not elegant at all; this is the third try after I clicked "inspect" two times, resulting in the argument being erased.

1. Ethical

A self-driving car is any non-railed land vehicle that operates automatically[1].

Merriam-Webster defines "ethical" as[2]:
1: of or relating to ethics
2: involving or expressing moral approval or disapproval
3: conforming to accepted standards of conduct
4: of a drug restricted to sale only on a doctor's prescription
A self-driving car is not a drug, so the fourth definition can be ignored due to irrelevance.

The remaining three can be considered in the argument below.

Relating to ethics

Ethics is defined by Merriam-Webster as[3]:
1: (plural in form but singular or plural in construction) the discipline dealing with what is good and bad and with moral duty and obligation
All in all, an ethical thing, according to this definition, must:
  • Deal with good and bad
  • Deal with moral duty/obligation
The way self-driving cars work is, basically, to use sensors to collect data; the computer, which is trained, then picks the best thing to do whenever it has to make a choice[4]. A trained AI fit to be placed in a working self-driving car would arguably make the best choice, the most desirable choice, or the choice capable of bringing the greatest amount of happiness. The AI has a list of things it can do, and based on the observed state of the road and on its training, it picks the option that will carry the intended people to their intended destinations while keeping the car safe. This is why a self-driving car turns left when your destination lies just past the left turn of the road, and why it does not crash into the car in front. Simply put: the computer is trained so that it can identify the right and wrong things to do in different situations.
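The decision process described above (sense, consider candidate actions, pick the one the trained model rates best) can be sketched as follows. This is a deliberately simplified illustration: the function names, action strings, and scoring rule are all invented for this example and do not come from any real autonomous-driving system.

```python
# Simplified sketch of the decision loop described above.
# Every name here (choose_action, toy_score, the action strings) is a
# hypothetical illustration, not any real autonomous-driving API.

def choose_action(sensor_data, candidate_actions, score):
    """Pick the candidate action the trained model rates best."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        s = score(sensor_data, action)  # learned estimate of how "right" this action is
        if s > best_score:
            best_action, best_score = action, s
    return best_action

# Toy scoring rule standing in for a trained model:
# prefer staying in lane unless an obstacle is ahead.
def toy_score(sensors, action):
    if sensors["obstacle_ahead"]:
        return 1.0 if action == "brake" else 0.0
    return 1.0 if action == "keep_lane" else 0.0

print(choose_action({"obstacle_ahead": True},
                    ["keep_lane", "brake", "turn_left"], toy_score))  # prints: brake
```

In a real system the scoring function would be a trained model evaluating many factors at once; the structure of "filter the options, act on the best one" is the same.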

Self-driving cars would arguably have "moral obligations", given how the term is defined[5]. The AI is programmed to do the right things: it considers driving to the directed place safely a right choice and causing public damage a wrong choice, and it simply abides by the general tenets it is programmed to follow: deliver the car to the directed place and do not cause any damage to itself or to the outside world. The AI is programmed and trained to do the right things, which is exactly that. The AI has moral obligations.

Moral Approval

Self-driving AIs can morally approve and disapprove of things. By trained experience, the AI considers crashing into buildings (taking a turn when the generally accepted thing to do is to go straight) wrong, and driving safely right. It approves of the things it is trained to do and disapproves of the things it is not meant to do, simply by filtering out the wrong actions and not triggering them. The second definition supports the first.

Conform to standard

Most of the US allows self-driving cars[6], as do many other nations[7], which means that self-driving cars conform to a standard, to an extent.

Conclusion
  • A self-driving AI chooses what to do by analyzing the presented scenario, doing the right things and avoiding the wrong things.
    • The AI morally approves and disapproves of things by doing or not doing them.
  • The AI is obligated to obey the commands of the driver and to deliver the car to its destination while keeping the car intact. The car is trained to act only on the best choices it can produce. That is practicing moral obligations.
  • Self-driving cars conform to a standard, to an extent, by being legal in most of the US and being ready in many different countries.
  • By definition, a self-driving car is ethical.
Sources


Con
Hi.

My argument will be super simple too, as I do not have time for anything substantial.

And I have experienced similar issues in the past with preview, so it is better to be satisfied prior to posting.




On balance, self-driving cars are ethical.
Definitions accepted, and I see no references to the ethicality of a motor vehicle per se....So no....Motor vehicles, self-driven or otherwise, are not subject to ethical scrutiny.

One would suggest that ethics are assumed human qualities and consequent judgements, that humans apply to themselves and to others. Rather than judgements made by an object itself, or by humans directly of an object.....So no......Ethicality does not apply.

So in order to apply ethical considerations to the proposition (and contrary to the proposition), one must extend the parameters of consideration, to enable the inclusion of human ethics in relation to self-driving motor vehicles. Thereby excluding the objects (self-driving cars) from consideration.

So Pro is basically questioning the ethicality of the human (+ car), and whether the human (+ car) interacts morally within a society or sub-society, and within the accepted  ethical parameters laid down by that society or sub-society, in relation to motor vehicle transport.

Within this context, the ethics of powered transport have long since been globally established and accepted by the majority. Therefore the proposition becomes a tad disingenuous.

To extend the debate further to include ethical considerations in respect of material progress, which in this instance would be the ongoing development and application of technology, would be to question the inevitability of both human evolution and consequent material evolution (technological development). Another debate entirely, methinks.



So thus far and "on balance", I would suggest that an object is not responsible for human behaviour. And even though an object may be a defining influence on human behaviour (a weapon, for example), nonetheless ethical considerations can only be applied to the human, and not to its associated objects.


So actually and "on balance", it would be better to say that self-driving cars per se do not necessitate ethical consideration, and therefore are neither ethical nor unethical.







Round 2
Pro
Rebuttals

Definitions accepted, and I see no references to the ethicality of a motor vehicle per se....So no....Motor vehicles, self-driven or otherwise, are not subject to ethical scrutiny.
  • P1: If a self-driving car deals with right and wrong, act out in a way that evaluates and judges the rightness of an action, and meets the standard conduct, it is ethical.
  • P2: A self-driving car certainly does that and there is nothing so far disproving the fact that automated cars do so
  • C1: A self-driving car is ethical.
Con has accepted the definitions, meaning he must now either change his mind and criticize the definitions, or disprove the arguments in the first round in order to prove that a car is not ethical. So far, nothing like that has been seen.

Con has yet to criticize either that the computer of the car considers the different things the car could do and picks the best based on experience (because it thinks it is the right one to do), or that a self-driving car has a coherent obligation: to serve the owner of the car and deliver them to the desired location safely, which the programming of the computer considers right and thus does. Con has refuted neither of the two.

One would suggest that ethics are assumed human qualities and consequent judgements, that humans apply to themselves and to others. Rather than judgements made by an object itself, or by humans directly of an object.....So no......Ethicality does not apply.
Upon close inspection, the phrase "one would suggest" is not backed by any reliable source, neither a dictionary nor an encyclopædia. Con has not established himself as a renowned expert in philosophy and ethics, and there is no proof that Con is one. There is no reliable proof from Con suggesting that ethics are human qualities; on the other hand, Pro's source from Merriam-Webster, a dictionary, suggests that ethicality is merely dealing with right and wrong, as well as having obligations to the right things. A self-driving car does that according to source [3] in R1.

The rest of the argument falls because there is no established basis that ethicality is exclusive to humanity.

Within this context, the ethics of powered transport have long since been globally established and accepted by the majority. Therefore the proposition becomes a tad disingenuous.
This paragraph, though, agrees with the Pro proposition, especially in accordance with the third definition of "ethical": 3: conforming to accepted standards of conduct.

So actually and "on balance" It would be better to say that self-driving cars per se, do not necessitate ethical consideration, and therefore are neither ethical nor unethical.
No... The definition of "unethical" states "not ethical"[1], which means that the phrase "therefore are neither ethical nor unethical" carries no practical meaning, since it asserts that something is neither ethical nor not ethical.

Suppose ethical = A.
Self-driving cars would then be simultaneously in a state of A and not-A. This means Con's final conclusion means nothing, meaning Con did not argue anything at all. Even if his points prior to this sentence contribute to the Con position by a substantial amount, this sentence is at the very least incoherent, which is a fallacy. Either disregard this sentence or disregard the whole argument.

Conclusions

  • Attempts to challenge Pro's position are not based on authentic evidence; Con assumes that ethicality is a human quality, which nothing authentic and given suggests.
  • Attempts to challenge Pro's R1 arguments are nonexistent, or at least amount to nothing, because anything that might suggest otherwise is based on the assumption that ethicality is a human quality, which is unproven.
  • Con agrees that a self-driving car is ethical in the third sense.
  • Con's final conclusion means nothing, since "unethical" is "not ethical".
  • Overall, Con's argument is, in places, not consistent with its position. Con did not successfully dismantle my argument. Pro's arguments still stand. Vote Pro.

Con
So.

P1.   "If"..... Then yes it would.

But:
Currently, motor vehicle transport computer systems are programmed to do what they do, whether that be systems management or self-driving.
Self-driving cars will function either correctly or incorrectly relative to their programming. Ethical considerations are not the remit of non-human objects.

P2. See above.

C1. See above.


Criticisms:

One sees no reason to criticise definitions.

One sees no reason to criticise computers per se....Computers currently function relative to programming.... Therefore criticism can only be aimed at programmers and manufacturers.


One would suggest:

One would suggest that one's suggested statement is accurate.

Currently there is no exception to the fact that sentient function and consequent decisions in relation to motor vehicle transport are solely the responsibility of the human.

3. Conforming to accepted standards of conduct.

Rather:
A globally accepted mode of transport to which the human applies accepted and expected standards, thereby bestowing all necessity of conduct solely upon the human, to provide safe and efficient options.


Therefore are neither ethical nor unethical.

This was clearly proffered as an accurate alternative to the proposition, based upon the obvious knowledge that currently, unintelligent mechanical and technological devices do not have the capacity to consider ethics.

Pro incorrectly refers to my comment out of context.

So one can only assume Pro's relative ethical considerations.

And one would certainly not criticise Pro's personal computing device.


Further.

Pro employs semantics and pseudo-equative analysis to distract from the fundamental question.

Q. Does the function of a self-driving car rely upon that car making ethical judgements?

A. Currently, no.

A motor vehicle's onboard management systems are programmed and not autonomously intelligent. Therefore the responsibility for ethical consideration lies solely with human programmers.


The final decision is clear, and boils down to the voter deciding whether a car actually thinks for itself or is programmed to respond in a specific way to sensory input.

Many times in Forum discussions I have proposed that we the organic will eventually have to cede the responsibility for material development to the technological inorganic.....And as yet no one has been prepared to agree......So it would be a tad contradictory to now agree with Pro's assertion that self-driving cars are capable of ethical consideration.

Cars are not ethical, by virtue of the fact that cars do not possess the ability to be either ethical or unethical.

A discerning voter should be able to deduce this from reasoning alone.


Round 3
Pro
P1.   "If"..... Then yes it would.
Con has not refuted that, regardless of whether the car's computer is sentient on its own, it judges decisions and acts them out in a way the computer thinks is right, and sticks to decisions such as obeying the owner of the car and delivering them to places safely, which it thinks is right, and thus is in accordance with the definition of "moral obligation", which Con agreed upon. There is absolutely no denial that a car does those things, so in the end, Con did not achieve anything substantial.

But:
Currently, motor vehicle transport computer systems are programmed to do what they do, whether that be systems management or self-driving.
Self-driving cars will function either correctly or incorrectly relative to their programming. Ethical considerations are not the remit of non-human objects.
Con is assuming, baselessly, that ethicality is exclusive to humanity. Con has given not a single scholarly article, nor any reasoning, to show that ethicality is exclusively human, on top of the fact that the definition, which Con agreed with, mentions nothing about only humans being able to be ethical. The rest of this quote, up to its last sentence, does nothing to question that self-driving car computers do what they do: judge actions as right or wrong and do the right ones. A self-driving car is proven to be ethical by definition, and Con's baseline assumption that ethicality is human has, in itself, no solid proof.

One sees no reason to criticise computers per se....Computers currently function relative to programming.... Therefore criticism can only be aimed at programmers and manufacturers.
Regardless of who we should blame, it is the car's computer that made those mistakes, if it made any to begin with. The car company is not directly controlling the cars, in the sense of multiple people sitting in a room with screens and analogue sticks controlling the cars. No, the car is autonomous. The car company is only blamed for being responsible for the car's flawed consideration of right and wrong (hence the mistakes). This is similar to how we tend to blame a child's parents for bad parenting when the child is rude to other people. The child still did what they did, but the parents are responsible for the flawed upbringing that resulted in it. The car company is responsible for programming an incomplete, imperfect driving system that leads to the car, although autonomous, making the mistakes. Regardless of the car company's mistakes, it is no less true that the cars made their mistakes on their own.

When cars make mistakes, it is no less true that they acted in a way consistent with the definition of "ethical". They are still trying to distinguish between right and wrong from experience, no matter how good or bad they are at it.

Currently there is no exception to the fact that sentient function and consequent decisions in relation to motor vehicle transport are solely the responsibility of the human.
And a child sometimes making bad life choices is a problem of the mother's bad home education and parenting. That still does not mean the child did nothing or is not responsible for their choices. The car, although programmed by humans, still does things directly and autonomously. It is responsible for directly doing them.

Rather:
A globally accepted mode of transport to which the human applies accepted and expected standards, thereby bestowing all necessity of conduct solely upon the human, to provide safe and efficient options.
And humans allowed self-driving cars to roam the streets, thus making them conform to accepted standards of conduct. No challenge to that? Alright.

Q. Does the function of a self-driving car rely upon that car making ethical judgements?

A. Currently, no.
False. Based on how autonomous cars work, the computer makes the car consider possible actions and do the right ones, which qualifies as ethical by the definition Con accepts. The AI simulates human perceptual and decision-making processes using deep learning and controls actions in driver control systems, such as steering and brakes[3]. If the AI can make decisions just like a human, and makes decisions all the time, how is it not ethical?
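The perceive-decide-act pipeline referred to above can be sketched as follows. This is a hypothetical, heavily simplified illustration: the class name, methods, and the trivial "perception" rule are invented for this example, not drawn from any real autonomous-driving system, where perception would be a deep network and control would drive real actuators.

```python
# Hypothetical sketch of the perceive -> decide -> act pipeline described above.
# Nothing here is a real autonomous-driving API; names are illustrative only.

class SelfDrivingPipeline:
    def perceive(self, camera_frame):
        # A real system would run a deep network to detect lanes, cars,
        # pedestrians; here we fake perception with a trivial membership test.
        return {"pedestrian_ahead": "pedestrian" in camera_frame}

    def decide(self, world_state):
        # Map the perceived state to the action the system judges "right".
        if world_state["pedestrian_ahead"]:
            return {"steering": 0.0, "brake": 1.0}
        return {"steering": 0.0, "brake": 0.0}

    def act(self, command):
        # Hand the command to the actuators (steering, brakes);
        # here we just report what the car would do.
        return "braking" if command["brake"] > 0.5 else "cruising"

pipeline = SelfDrivingPipeline()
state = pipeline.perceive(["road", "pedestrian"])
print(pipeline.act(pipeline.decide(state)))  # prints: braking
```

The point of the sketch is the structure both sides are arguing over: the car's output is a function of its sensed input and its training/programming, and the debate is over whether applying such a function counts as making an ethical judgement.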

CONCLUSIONS

  • Self-driving cars make decisions just like humans do: they consider the choices, judge each as right or wrong, and act out the ones they consider right.
  • Self-driving cars act by tenets such as "always deliver the owner of the car when demanded" or "try not to cause crashes", which the computer considers right.
    • Those two statements, if true, fit the definition of "ethical" that Con agrees with.
    • Those two claims have not been successfully rebutted, and thus stand.
      • Con assumes baselessly that ethicality is human.
      • Con assumes baselessly that self-driving cars are not capable of making decisions, even though they are[3].
      • Con assumes baselessly that cars' mistakes are the fault of the car company and not the cars themselves, which would not mean self-driving cars don't do things that are considered "ethical".
  • Self-driving cars are, to an extent, accepted by standard conduct in the world, thus making them ethical.
    • Con agreed with that.
  • Overall, the acts of self-driving cars suit the definition of "ethical", making it just to conclude that self-driving cars are ethical.
  • Vote Pro.
(This source was used in R1.)

Con
Just spent/wasted at least 90 minutes on a Round 3 response....And it's disappeared.

Not prepared to redo it.

So will forfeit this round.

This is why I tend not to debate.
Round 4
Pro
Con did not attempt to refute anything at all. Extend all points that have been made. Vote Pro.

CONCLUSIONS

  • Self-driving cars make decisions just like humans do: they consider the choices, judge each as right or wrong, and act out the ones they consider right.
  • Self-driving cars act by tenets such as "always deliver the owner of the car when demanded" or "try not to cause crashes", which the computer considers right.
    • Those two statements, if true, fit the definition of "ethical" that Con agrees with.
    • Those two claims have not been successfully rebutted, and thus stand.
      • Con assumes baselessly that ethicality is human.
      • Con assumes baselessly that self-driving cars are not capable of making decisions, even though they are[3].
      • Con assumes baselessly that cars' mistakes are the fault of the car company and not the cars themselves, which would not mean self-driving cars don't do things that are considered "ethical".
  • Self-driving cars are, to an extent, accepted by standard conduct in the world, thus making them ethical.
    • Con agreed with that.
  • Overall, the acts of self-driving cars suit the definition of "ethical", making it just to conclude that self-driving cars are ethical.
  • Vote Pro.

Con
Many thanks to my opponent for taking on board my Round 3 frustration. 



So:    On balance, self driving cars are ethical.

Therefore voters, primarily, must appreciate the fact that Pro well and truly places the proposition in the present tense......"Are".

So voters, ask yourselves two questions.

1. Where are all the self-driving cars?

2. And do they, as my opponent suggests, "roam the streets", being autonomously ethical?

Though it has to be said that street roaming tends to result in unethical outcomes.....But that's another issue.  

The real issue is, does any form of powered vehicle currently possess the capability to think for itself.....And as far as I am aware, the answer is no.

Drones are still flown....... And they do not consider the ethicality of warfare.

Unless, Pro is party to Top Secret information....But I see no suggestion of that. 



Any device, whether it be a washing machine or a motor car, is subject to human input and instruction via its rudimentary or more advanced onboard computer.

Does a washing machine consider the environmental implications of its actions and refuse to work.....No.....It is wholly reliant upon us to switch it on and select a relevant programme.

A motor vehicle, even though its onboard computer may be more advanced, still remains inactive until we push its button and tell it where to go....It would be no good whatsoever if one needed to get to Los Angeles for an appointment and the car decided that it would prefer to visit Big Sur.

"Self-driving" is perhaps therefore an inaccurate representation of current expectation. The onboard computer is only capable of functioning within the limitations of its programming.....It certainly doesn't decide to tootle off to the coast for the weekend with its mates, without you.....Though that is not to say that it could not be programmed to do so....Nonetheless, all ethical consideration rests with a human.

A simple test of autonomous ethicality would be to programme said vehicle to drive off a cliff when it got to the coast.....Would it independently decide to apply its brakes.....No....Not unless previous instruction was overridden.


Briefly:

My opponent briefly touches upon the subject of AI.

And I will very briefly respond to this, by saying that AI currently isn't.....Though I firmly expect, that one day it will be.

A separate debate I would suggest.



In conclusion.

Do motor vehicles currently make their own decisions, separate from and independent of their programming?

No.

Therefore it would be a tad disingenuous of voters to agree with Pro....Unless they too are party to Top Secret information.