Instigator / Con

You choose the topic


Publication date
Last updated date
Number of rounds
Time for argument: One week
Max argument characters
Voting period: One week
Point system: Multiple criteria
Voting system
Contender / Pro

You choose the description that I will be the con of.

Round 1
You choose the topic that I am the Con of.
Resolution: It is plausible to make a safe self-driving car before 2050 provided enough resources are invested.
BoP: Shared.
Round 2
It is not plausible to have self-driving cars by 2050, even assuming enough resources are invested.

  • Technological Advancements
Even if enough resources are allocated to making self-driving cars, there will still not be enough time to make them. Let's look at cancer, for example. Millions of people die from it each year. Because of this, millions, if not billions, are donated to finding a cure for cancer. Even with all of this money being donated, we still have yet to find a cure for cancer.

  • Unforeseen Challenges
Throughout history, many inventions have come with challenges that we still face today. Take antibiotics. We discovered that they could help us win the war against harmful bacteria. However, there has been an unforeseen problem. Like all living things, bacteria adapt, and because they have adapted they are slowly becoming resistant to these antibiotics. Along the timeline of making self-driving cars, challenges that we haven't thought of yet may show up.

  • Cybersecurity 
Since self-driving cars run on software, they can be hacked. People with malicious intent could break into that software and drive the car into ditches, rails, oncoming cars, houses, etc. Developers will also have to account for this when trying to advance the technology further.

  • Ethical Problems
When you yourself are driving a car, you occasionally have to make split-second decisions, such as swerving around a tree branch or stopping suddenly when a deer passes in front of the car. A self-driving car will have to make split-second decisions too, including decisions about people, such as whom to save or which car to hit. This will also require further technological advancement.

  • Legal Trouble
When you are making anything new, you have to abide by the laws that are in place. This becomes an even bigger problem when you are making something that has never been seen before; in our case, a self-driving car. Because of this, a raft of new laws will be put in place regarding AI, human safety, quality standards, etc. Even if you have made an almost complete self-driving car, one law may force you to completely change the design and start again from square one.

In conclusion, it is not possible to make a self-driving car even with enough resources provided. There is legal trouble, there are cybersecurity issues, unforeseen challenges, etc. And this is why it is not possible.

Thank you, Joebob, and welcome to the site. 

"Plausible" is defined as:
  1. seeming likely to be true, or able to be believed.
  2. possibly true, able to be believed.
  • I will be required to prove that the production of a safe self-driving car by 2050 is likely to be true or able to be believed.
  • CON will be required to prove that this is not likely to be true and not able to be believed.
The resolution does not mention mass production or even commercial availability. A single safe self-driving car being built by 2050 is enough.

Main syllogism
My case will be a simple syllogism. By modus ponens, a logical structure as uncontroversial as basic arithmetic, the conclusion of my argument is true if the premises are true.

Premise 1: A self-driving car should be considered safe when its rate of deaths, as a function of distance travelled, is as low as or lower than the rate of deaths in human-driven cars.
Premise 2: Technological roadblocks are the only type of limitation that can prevent a safe self-driving car from being produced by 2050.
Premise 3: Technology allowing self-driving cars to reach that level of safety performance could plausibly be created and implemented by 2050.

Conclusion: A self-driving car that is safe is plausible to create by 2050, affirming the resolution.
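The inference rule this syllogism relies on can even be checked mechanically. A minimal sketch in Lean, purely for illustration:

```lean
-- Modus ponens: from p → q and p, conclude q.
example (p q : Prop) (h : p → q) (hp : p) : q := h hp

-- The chained form used above: three premises feeding one conclusion.
example (p1 p2 p3 c : Prop) (h : p1 → p2 → p3 → c)
    (hp1 : p1) (hp2 : p2) (hp3 : p3) : c := h hp1 hp2 hp3
```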

Now let us justify the premises. 

Premise 1.
This is the easiest to justify. The safety of self-driving cars ought to be defined in relation to that of normal cars driven by humans. The relevant death rates are those produced by dividing deaths by distance travelled, because that automatically takes all relevant factors into consideration. In 2019, the US death rate per 100 million vehicle miles traveled was 1.1.
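Premise 1's criterion can be stated as a small calculation. A minimal sketch in Python — the 1.1 benchmark is the figure quoted above, while the fleet numbers are purely hypothetical:

```python
# Premise 1 as a calculation: a self-driving car is "safe" when its
# death rate per distance travelled is at or below the human rate.
HUMAN_DEATH_RATE = 1.1  # US 2019, deaths per 100 million vehicle miles

def death_rate_per_100m_miles(deaths: int, miles: float) -> float:
    """Deaths as a function of distance travelled."""
    return deaths / (miles / 100_000_000)

def is_safe(deaths: int, miles: float) -> bool:
    """As low as or lower than the human-driven benchmark."""
    return death_rate_per_100m_miles(deaths, miles) <= HUMAN_DEATH_RATE

# Hypothetical fleet: 2 deaths over 500 million miles -> 0.4 per 100M miles
print(is_safe(2, 500_000_000))  # True
```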

Premise 2. 
The resolution is not a blanket statement; the claim specifically carries the caveat "assuming enough resources are invested". Not only money, but also factory time, land use, testing grounds, etc. are all types of resources. It is common sense that legal action, which is essentially a government-induced lack of resources for the project, is assumed not to happen.

So this premise is affirmed. Technological advancements enabling one safe self-driving car to be built by 2050 will prove the resolution correct, regardless of logistical or legal roadblocks.

Premise 3:
The safe self-driving car is not dependent on any technology that is implausible to develop in the time remaining before 2050. Quite the contrary: it is nearly guaranteed to be possible.

The resolution is affirmed so long as CON cannot knock down any of the 3 premises. Most of my contentions hereafter will be focused on affirming premise 3, the only controversial one.

FIRST CONTENTION: Self-driving cars will almost certainly be safer than humans by 2050. 

How good are autonomous vehicles today?

Autonomous vehicles experience a 90% reduction in crash rate when compared to regular vehicles, and about 90% of self-driving car accidents are caused by human error [1].

Waymo's driverless cars were 6.7 times less likely than human drivers to be involved in a crash resulting in an injury (an 85 percent reduction over the human benchmark), and 2.3 times less likely to be in a police-reported crash (a 57 percent reduction). That translates to an estimated 17 fewer injuries and 20 fewer police-reported crashes [2].
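The ratios and percentage reductions quoted here are two views of the same quantity: being k times less likely means a (1 − 1/k) reduction. A quick sanity check of the cited figures:

```python
def reduction_from_ratio(times_less_likely: float) -> float:
    """k times less likely  =>  crash rate is 1/k of the human rate,
    i.e. a (1 - 1/k) reduction over the human benchmark."""
    return 1 - 1 / times_less_likely

# Figures from the cited Waymo comparison:
print(round(reduction_from_ratio(6.7) * 100))  # 85  (injury crashes)
print(round(reduction_from_ratio(2.3) * 100))  # 57  (police-reported crashes)
```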

Vehicle safety promises to be one of automation's biggest benefits. Higher levels of automation, referred to as automated driving systems, remove the human driver from the chain of events that can lead to a crash. While these systems are not available to consumers today, the advantages of this developing technology could be far-reaching [3].
It turns out that human drivers are horribly inconsistent, which is one reason why automated systems are much safer. The vast majority of road accidents happen because of bad judgement from humans:

  1. Speeding.
    • AI can be programmed not to break speed limits.
  2. Not wearing your seatbelt.
    • AI can refuse to drive until everyone has their seatbelt on.
  3. Distractions.
    • AI cannot get distracted.
  4. Drink and drugs.
    • AI cannot get drunk or high.
  5. Careless and inconsiderate driving.
    • AI would always follow traffic rules and never drive recklessly due to impatience, fatigue or road rage.

Target performance 
As you can clearly see, self-driving cars have a clear advantage over human drivers. Their performance is not hampered by dopamine addiction, emotions or reckless disregard for traffic rules. As a result, even a very flawed and imperfect AI driver will still outperform the average human driver's safety in the long run, causing fewer road accidents overall and killing fewer than 1.1 people per 100 million miles traveled. So the target performance for self-driving safety does not require extraordinary driving skills. High safety does not require perfection, only consistency.

Necessary elements
There are only three bottlenecks to how skillful a self-driving car can get:
  1. Software
  2. Hardware
  3. Sensors
The hardware and sensors available are already far above what is necessary. Cameras, microphones and computer chips already operate much faster and with fewer errors than brains. The only missing puzzle piece is software capable of correctly analyzing the sensory data and steering the car without making huge and frequent errors.

Software - more specifically, AI
I firmly believe, despite my argument not requiring it, that an AI program that is a better driver than focused adult humans is not only highly plausible, but damn near an absolute certainty.

P1. Driving is a specialized rather than generalized task.
P2. AI programs without fail eventually come to outperform humans in any specialized task.
C: An AI program that has better driving skills than humans is inevitable. 

Regardless of the difficulty of doing it correctly, the task of driving is fundamentally simple and specialized. The system has a set of inputs telling it about its surroundings, where it is headed, and the laws and principles it must follow in order to reach a destination, and outputs, which are just the steering of the vehicle. The task is no more complex than winning at video games.
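This inputs-to-outputs framing can be sketched as a trivial sense-decide-act step. Every name and threshold below is illustrative, not a real autonomy stack:

```python
# Minimal sketch of driving as a specialized input -> output task.
# All types, fields and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    distance_to_obstacle_m: float  # from sensors
    speed_limit_kmh: float         # from map / signs
    current_speed_kmh: float

@dataclass
class Control:
    steering_deg: float  # output: steering of the vehicle
    throttle: float      # 0..1
    brake: float         # 0..1

def policy(obs: Observation) -> Control:
    """Map sensory inputs to driving outputs, following fixed rules."""
    if obs.distance_to_obstacle_m < 10:
        # Obstacle close ahead: brake hard.
        return Control(steering_deg=0.0, throttle=0.0, brake=1.0)
    if obs.current_speed_kmh > obs.speed_limit_kmh:
        # Never break the speed limit: ease off and brake gently.
        return Control(steering_deg=0.0, throttle=0.0, brake=0.3)
    # Clear road, legal speed: keep going.
    return Control(steering_deg=0.0, throttle=0.5, brake=0.0)

print(policy(Observation(5.0, 50.0, 40.0)))  # brakes hard near an obstacle
```

A real driver program replaces the hand-written rules with a learned policy, but the shape of the task — state in, action out — is the same.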

The historical record shows us clearly that AI programs are simply better suited for completing specialized tasks than humans are. After a sufficient amount of training, an AI will always be better at answering the question "this is the current state, what should we do to get to the desired state?". The company DeepMind, founded only in 2010, has trained AIs that curbstomp us at chess, Go and even StarCraft 2, a real-time strategy game [4]. The latter is very extraordinary because they heavily nerfed AlphaStar: they limited the computing power and number of actions per minute it was allowed to use, and forced it to spend time looking around the map rather than viewing everything simultaneously. Despite this, it quickly picked up skills far surpassing the best human pros and even developed new strategies never thought possible.

This is due to many factors:
  • AI can train virtually for millions of years, far more than a human could.
  • This means it can encounter and learn to deal with situations that most humans never will.
  • When trained properly, AI will discard bad habits and adopt better ones.
  • The human brain, which is designed for multitasking, extrapolation and generalization, is not able to beat specialized AI that can perform one specific task nearly perfectly.
Two years ago, an AI passing university-level written tests in STEM and law fields was unthinkable, but GPT-4 scores in the 90th percentile. Imagine what 20 years of heavy investment in driving AI can do.

An AI system that can drive better and more safely than humans is inevitable. When AI can surpass humans in just a few years at skills such as complex video games, why would driving, a very simple task that only requires enough training, be implausible for it to surpass us at? A self-driving car would be allowed to look in every direction simultaneously, with instant reaction times and as much processing power as is necessary. It would also never forget the specifications and limitations of the car.

How long must we wait?
Given the incredible and still-accelerating speed of AI research, an AI that can drive better and more safely than humans is almost certain to be developed before 2050.

SECONDARY CONTENTION: Hardly anything can be considered implausible in 2050

Technological progress is uncontroversially seen as exponential, meaning that it progresses faster and faster. It would be impossible to predict the limits of technology in 25 years.

THIRD CONTENTION: Surprise innovation makes anything plausible.

Validity of argument
CON has used "unforeseen challenges" as a point in his argument, thus conceding that the unforeseen must be taken into account in our discussion, validating this argument.

The argument
Unforeseen developments might make anything possible. An article saying that flying machines would take a "million years" to develop was published only days before the Wright brothers succeeded [3].

Here is a list of things that 

FOURTH CONTENTION: Subjective unbelievability does not prove anything implausible.

Flying machines were definitely plausible to create within less than a year, despite the majority of people and reporters deeming that prospect physically impossible. 

Plausibility is immune to mere doubt; you need some hard-hitting evidence, physical limitations or definitive proof that something cannot be done in order to negate an idea's plausibility.


Technological Advancements
"Even with all of this money being donated, we still have yet to find a cure for cancer."
You are talking about a task that even humans don't know how to perform. But a self-driving car only requires that we teach an AI a well-understood skill that we know we can teach AI.

Unforeseen Challenges
Human driving has already revealed all the challenges that driving entails. But even if we ignore that, this point is counteracted by unforeseen successes.

Cybersecurity
There will still be the option of a human taking control. The idea that offline cars on the road can be hacked and run amok is not backed by evidence, nor is it unique to self-driving cars.

Ethical Problems.
Roads do not often turn into trolley-problem-like scenarios. We can simply program the AI to always follow traffic laws and, in emergencies, to drive into as few people as possible.

Legal Trouble
My argument is not that the roads will be littered with safe self-driving cars, only that a safe self-driving car will plausibly exist by 2050, regardless of whether they are allowed on the road.

CON's arguments mostly discuss tangential topics. He does not address why making such a car by 2050 is implausible, just that it will be challenging. The project can easily be both.

A safe self-driving car by 2050 is plausible. Current automated systems are already safer than humans. When we eventually fully automate driving, increased safety is all but guaranteed. 

Round 3
Not published yet