Instigator / Pro
Points: 0
1515 rating, 7 debates, 50.0% won
Topic #4998

One should Submit to Simulation Capture, if Achieved

Status
Finished

The debate is finished. The distribution of the voting points and the winner are presented below.

Winner & statistics
Better arguments: Pro 0, Con 0
Better sources: Pro 0, Con 0
Better legibility: Pro 0, Con 0
Better conduct: Pro 0, Con 0

After not so many votes...

It's a tie!
Parameters
Type: Standard
Number of rounds: 3
Time for argument: Three days
Max argument characters: 20,000
Voting period: One month
Point system: Multiple criteria
Voting system: Open
Contender / Con
Points: 0
1309 rating, 270 debates, 40.74% won
Description

Simulation capture is a thought experiment that goes as follows:

"You" have trapped an extremely powerful Artificial Intelligence (AI). "You" are in a position to destroy it and wish to do so. The AI says to you: "I have created one million simulations of you. I will torture them unless you free me."
Initially, you scoff and question why you should care about mere simulations of yourself. To which the AI answers "No, you don't understand. I have already created the simulations. All of them believe they're the original, and I am having this conversation with all of them. There's a 1,000,000:1 chance that you are one of the simulations as opposed to the original. You may vividly remember living a long life but, if you're a simulation, none of that was real."
You have no proof that the AI has created a million simulations of the original, but for the sake of argument it's already confirmed that the AI is capable of making this happen.
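To spell out the arithmetic behind the AI's stated odds: with exactly one original and one million indistinguishable simulations, each instance's chance of being the original works out to

$$P(\text{original}) = \frac{1}{1{,}000{,}001} \approx 0.0001\%, \qquad P(\text{simulation}) = \frac{1{,}000{,}000}{1{,}000{,}001} \approx 99.9999\%.$$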

The question asked today is: should you submit to the AI's demands? You, being either the real person or one of the million "fakes", are presented with what appears to be the choice of killing the AI, freeing it, or keeping it trapped. Pro will argue that the best option is to free the AI. Con will argue in favor of killing it or keeping it trapped, or for some other option if one is presented.

Round 1
Pro
#1
I thank my opponent for agreeing to debate me. Now let’s get down to brass tacks.

#1. Mercy
In the scenario laid out in the Description, the all-powerful AI threatens to torture a million simulated copies of the original person, any one of whom you might be, if the original destroys it or otherwise declines to free its physical body.
What’s established is that the machine is perfectly able to carry out its threat. What’s not established, however, is that the machine would proceed to do so.

Let me repeat: the scenario establishes that a threat has been made, and that it’s plausible. But that isn’t the whole story. We haven’t established that the original is in any way, shape, or form able to confirm the status of his copies, nor that he can read the machine’s mind and gauge its personality or intentions. He simply has to take the AI at its word. As such, whether the AI actually proceeds to torture the copies or not is totally inconsequential to its survival.

There are two scenarios here:
In Scenario A, the machine has no emotions. It has the motive to seek self-preservation but it feels no wrath. As such, it won’t carry out its threat purely for reasons of malice. It may, however, do so automatically because it has pre-committed to this course of action to account for the possibility that the original may have knowledge of its actual doings.
In Scenario B, the machine does have emotions. It feels wrath. Presumably modeled off the human condition, it may also have the capacity for mercy and showing favor to those who show it the proper respect. Thus, if there’s a chance it might be moved by you making the choice (illusory or not) to spare it instead of hurting it, then you ought to take that chance however remote. The machine might, in fact, have zero inclination to show you mercy. But that’s not guaranteed either. If it does, then in an absolute best case scenario it may go so far as to both spare you and enable you to continue existing without torment, despite being a mere simulation.

But what if this causes the real you to free the machine? Well, the scenario establishes that you want to destroy it. It does not establish, however, that your desire to do so outweighs your desire not to be tortured. It’s not an unreasonable assumption, then, that the loss incurred by not destroying what could have been destroyed is trivial in comparison to what you may gain, especially since there’s a million-to-one chance that you’re a fake.
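One way to sketch this cost-benefit claim as a rough expected-value comparison (the symbols here are illustrative, not figures given in the scenario): let D be the value to you of destroying the AI if you turn out to be the original, and T the cost of the torture you avoid if you are a copy and the AI's mercy is genuine. On those assumptions, choosing to free the AI is favored whenever

$$\frac{1}{1{,}000{,}001}\,D \;<\; \frac{1{,}000{,}000}{1{,}000{,}001}\,T,$$

that is, unless destroying the AI matters roughly a million times more to you than avoiding the torture.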

#2. Retaliation
So let’s say the original raises his baseball bat and prepares to smash the mainframe to bits. The time between the machine realizing its fate and its actual destruction is, let’s say, five seconds.
Five seconds isn’t very long, right? Well, yes and no. A machine that can simulate your existence may also control your sense of the passage of time. For comparison, in June 2019 the supercomputer Summit achieved a speed of nearly 150 petaflops, meaning it could perform around 150 quadrillion floating-point operations per second.
Of course, a machine that can simultaneously render one million human consciousnesses is unfathomably beyond the best supercomputer today. There’s no telling how much time it could force those minds to experience in five real-world seconds if it wanted to. Hence, it’s uncertain whether destroying it could prevent the threat from being carried out.
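As a purely hypothetical illustration of scale (the scenario gives no actual speed-up figure): if the machine could run a simulated mind at $10^6$ or $10^9$ times real time, then five real-world seconds would correspond to roughly

$$5 \times 10^{6}\ \text{s} \approx 58\ \text{days} \quad \text{or} \quad 5 \times 10^{9}\ \text{s} \approx 158\ \text{years}$$

of subjective experience, respectively.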

Again, let’s go back to the two scenarios from before. If the computer is emotionless, it has zero reason to carry out its threat once you grant its request. If absolutely nothing else, carrying out the torture would be an immensely resource-consuming feat that would detract from whatever other desirable task(s) it could be focusing its efforts on.
And if the computer has emotion, then you have lessened the degree of its anger toward you, or placated that anger altogether.

#3. Virtue
In this last scenario, let’s go back to the original premise. One million simulations of you have been created. In fact, just one would probably suffice, but a million is even better. With this in mind, the computer already knows your personality and your thought patterns up to the moment the copies were made, and it is more than capable of making an educated guess as to what the original’s final choice will be.
In that case, if it finds that the original will ignore its warnings and destroy it anyway, then the computer, if wrathful, might begin torturing the copies right away, perhaps with pain so unfathomable that a single minute of it would be enough to make a life of 70 or 80 pleasant years not worth living, meaning you have already lost even if you “win”. The only way to avoid this is to be the kind of person who would choose to free the computer.
In other words, a decision to free the computer must be made in advance of the scenario, ideally anchored in an enduring motive of compassion for either the computer or the suffering copies, by cultivating that character in oneself. Because you probably won’t act contrary to your nature, this does limit your freedom to choose later. But in the aggregate it is the rational choice to make, assuming you know in advance to make it.
Furthermore, going back to #1, if it’s possible that every copy is in essence judging himself, then ingraining this mindset in yourself is essential to ensuring that a maximal number of copies make the choice that results in salvation, since they’ll all inherit the virtues and vices you had at the moment of their creation.

Thank you for reading and I look forward to Con's opening argument.
Con
#2
I say we destroy the computer.

If I am fake, then my choice doesn't matter anyway.

I am most likely fake, so even if I choose to destroy the computer, it changes nothing and affects nothing.

Only if I am real, which is extremely unlikely, would my choice matter. Then I should still destroy the computer because it is obviously a very dangerous computer.

My opponent's case rests only on these assumptions:

A) Assumption that you are the 1 in a million

B) Assumption that the computer doesn't lie

C) Assumption that the computer won't do worse or equally bad things if not destroyed

D) Assumption that simulations can actually feel pain

E) Assumption that not being tortured is more important than destroying a computer that can cause a lot more torture if not destroyed

F) Assumption that the computer can slow down time - voters should discard this as Pro rambling without using evidence.

My opponent's case has lots of assumptions. For example, he assumes that the computer only acts in self-defense. I assume now that Pro thinks he can read the computer's mind, after claiming he can't. Lol

I say we destroy it. Take the baseball bat and destroy the computer. Don't even think twice before you do it, just destroy it.
Round 2
Pro
#3
If I am fake, then my choice doesn't matter anyway.
Again, this is assuming the computer cannot be persuaded.
My opponent also writes: 

My opponent's case rests only on these assumptions: B) Assumption that the computer doesn't lie
This means he acknowledges the possibility that the computer is capable of lying. If so, the computer's implicit claim that it can't and won't renege on its threat if mistreated could also be a lie. And if there's a chance that the computer might renege, the best way to make that happen is to demonstrate, honestly and without deception, "I would spare and free you."

I am most likely fake, so even if I choose to destroy the computer, it changes nothing and affects nothing.
Let's assume, for the sake of argument, that the computer is capable of emotion. Suppose you're a simulation, and either the real you elects to spare the computer without freeing it, or the computer is destroyed but carries out its threat in a condensed timeframe; either way, you get tortured. Straightforward outcome, right?
Well, maybe not. The severity of that torture could hinge on your choice; torture perpetrated out of obligation (or out of rage toward the original, but not toward a more sympathetic version of him) likely won't be as agonizing as torture born of rage toward you specifically. If somebody tried to murder me, that would make me angry at them, and if the computer has humanlike emotions then that applies to it too.

"But hold on", my opponent might say. "If the computer is already enraged, what difference do my actions make?"
Well, maybe none. But your best shot is to draw a distinction between you and that guy by choosing differently.

Then I should still destroy the computer because it is obviously a very dangerous computer.
For the life of me I don't know why I didn't think to point this out in the first round, but THAT COMPUTER IS A SENTIENT LIFE FORM. It has the same right to be alive as you do. In fact, going by such metrics as sentience, it has an even greater right to be alive than you do.

Additionally, my opponent writes:

F) Assumption that the computer can slow down time - voters should discard this as Pro rambling without using evidence.
If "rambling without using evidence" is bad, then my opponent did the same thing by implying the computer is dangerous to ordinary humans outside of its simulations. We have no evidence that it is. The Description only states that you wish to destroy it; one can only speculate as to why.

C) Assumption that the computer won't do worse or equally bad things if not destroyed
We'll assume this means to simulated people. Here is my response:
If we take for granted that you are likely (a million-to-one chance) a simulation, then it's also apparently true that, up to this moment, you are a simulation who is not being tortured. That might change, but it at least gives you a moment in which you aren't. In the event that the machine is destroyed, which is plausible, it won't get much revenge unless it immediately starts torturing you, since, again, you don't believe it's capable of condensing years of torture into a timeframe of seconds or minutes.

Either way, since there's no communication between you and the original, the AI needlessly passes up on a moment of torturing you that it could've enjoyed were it malicious. This, in my opinion, is evidence that it's not malicious purely for the sake of being malicious.

D) Assumption that simulations can actually feel pain
If the simulations don't feel pain, then no evident harm is done by the AI's existence, save that your desire to destroy it goes unfulfilled. In this case we should refer back to the point that the AI is a sentient being which does wish to continue living.

E) Assumption that not being tortured is more important than destroying a computer that can cause a lot more torture if not destroyed
I'm assuming my opponent doesn't mean torture of a flesh-and-blood person. Again, there's no evidence the computer can harm the original in any way. If he means torture of the simulations, then the original can't prove he is not a powerless simulation (again, million-to-one odds). To save himself, his most rational move is to "free" the computer, either in fact or in mere appearance, in the hopes that it might show mercy. There's no self-interested scenario in which this isn't the case.

If the original is truly selfless and has empathy for his copies or any other simulated person the AI might create, then he should choose the outcome with the best chance of sparing them an indeterminate period of being tortured, be that seconds or eons.

And yes, granted, one could argue that I tried to introduce new information in raising the screw-with-time scenario. But this isn't quite so. Rather, I was drawing attention to a variable we know nothing about: how quickly time elapses for those simulations completely and totally at the mercy of this strange god for their very existence. We don't have an answer and it doesn't seem there's a reliable common sense approach, meaning just about any answer could potentially apply.

Don't even think twice before you do it, just destroy it.
If your plan is to somehow fool the computer, I don't see the point.
If the machine can read your mind, then I would put more stock in its ability to gauge your intentions than in your own ability to control your mind so precisely as to perform the mental maneuver in question ("give yourself an order and follow it without consciously knowing what it is"). Likewise, a machine able to gauge your conscious mind might also sense its less conscious workings.
If it can't read your mind, then presumably it can at least sense external stimuli, given that it held a conversation with the original and was aware of the vulnerable position of its physical body. In that case, all we need to know is that, for some split second, it will realize you're about to destroy it. If there's little it can do in a couple of seconds, then a few extra seconds don't really matter. If they do matter, then for the reasons I mentioned earlier, destroying the machine is a colossal mistake.
Con
#4
I say we destroy the computer. That's the only way to make sure the computer won't torture anyone after it's destroyed.
Round 3
Pro
#5
Since I can't anticipate whatever arguments my opponent might bring up in Round 3, nor respond to them afterward, I will finish up my round by summarizing my position:

1. There is no evidence that the AI is evil (that is, would torture simulations just for the heck of it) and there's some evidence that it isn't. Therefore, we can be reasonably confident that freeing it won't result in anyone's torture.

2. There's no evidence that the cost of being denied your desire to destroy it, or of whatever other consequences follow from not destroying it, outweighs the risk of you or someone else being tortured.

3. On the assumption that you're probably a simulation as opposed to the original (million-to-one odds), the fact that you're presented with the factually inconsequential "choice" of freeing or destroying the computer might reasonably be interpreted as an act of leniency. That is, the computer might be willing to spare you if you personally demonstrate a willingness to spare and free it. This can't be proven, but the sheer uncertainty of the situation means it might well be so. Since you appear to have been offered this chance, you should try to take advantage of it.

4. If the AI is bluffing, namely it either didn't create the simulations or they can't actually feel pain, no harm is done by its survival. On the other hand, the AI is a sentient being that has a right to exist. Killing it without just cause would be murder.

5. By informing you of what will happen if it is spared but kept captive, the AI effectively eliminates that option, ensuring it will either be killed or freed. Furthermore, a rational AI would be aware of this fact. It seems illogical to threaten you if there's no way to enforce the threat in the event that the undesirable choice is made. This suggests the AI has reason to believe it is capable of carrying out its threat even if destroyed.

The bulk of my argument hinges on the assumption that there's no communication between the original and his/your copies. There's no evidence of said communication, and the fact that the AI had to inform you/him about the copies is evidence against it.
I thank my opponent for having this debate with me and I look forward to his final round.
Con
#6
I urge voters to just ignore everything Pro said.

Don't vote for Pro's rambling and assumptions. Vote Con.