Instigator / Pro
Points: 0
Rating: 1515 | Debates: 7 | Won: 50.0%
Topic #4998

One should Submit to Simulation Capture, if Achieved

Status
Finished

The debate is finished. The distribution of the voting points and the winner are presented below.

Winner & statistics
Better arguments: Pro 0, Con 0
Better sources: Pro 0, Con 0
Better legibility: Pro 0, Con 0
Better conduct: Pro 0, Con 0

After not so many votes...

It's a tie!
Parameters
Publication date
Last updated date
Type: Standard
Number of rounds: 3
Time for argument: Three days
Max argument characters: 20,000
Voting period: One month
Point system: Multiple criterions
Voting system: Open
Contender / Con
Points: 0
Rating: 1309 | Debates: 274 | Won: 40.51%
Description

Simulation capture is a thought experiment that goes as follows:

"You" have trapped an extremely powerful Artificial Intelligence (AI). "You" are in a position to destroy it and wish to do so. The AI says to you: "I have created one million simulations of you. I will torture them unless you free me."
Initially, you scoff and question why you should care about mere simulations of yourself. To which the AI answers "No, you don't understand. I have already created the simulations. All of them believe they're the original, and I am having this conversation with all of them. There's a 1,000,000:1 chance that you are one of the simulations as opposed to the original. You may vividly remember living a long life but, if you're a simulation, none of that was real."
You have no proof that the AI has created a million simulations of the original, but for the sake of argument it's already confirmed that the AI is capable of making this happen.
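To spell out the arithmetic behind the odds the AI cites (a quick illustrative calculation, taking its claim of exactly one million simulations at face value):

\[
P(\text{you are a simulation}) = \frac{1{,}000{,}000}{1{,}000{,}001} \approx 99.9999\%,
\qquad
P(\text{you are the original}) = \frac{1}{1{,}000{,}001} \approx 0.0001\%.
\]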

The question asked today is: should you submit to the AI's demands? You, being either the real package or one of the million "fakes", are presented with what appears to be the option of either killing, freeing, or keeping trapped the AI. Pro will argue that the best option is to free the AI. Con will argue either in favor of killing it or keeping it trapped, or some other option if presented.

There is literally no difference; each of you has the original memories. Your identity is basically shaped by your past, and by this standard the past is the same, since you all have the same memories.

I guess we'll see.

-->
@Swagnarok

So if you are fake, your choice doesn't matter and doesn't affect anything.

And if you are real, you destroy the AI successfully.

Cool. Easy win for me.

-->
@Best.Korea

In the line you cited, where 'you' has no quotation marks, I write that you have what APPEARS TO BE the option of killing it. Whether that choice is genuine is neither confirmed nor denied; it depends on whether you're the original or a fake, which, again, is uncertain. The reason the fakes are presented with the illusion of choice is to make them seem truly indistinguishable from the original.
Granted, one problem with this scenario is that the original can't communicate with the fakes, so he just has to take the AI's word that they're faced with the illusion of choice, or even that the fakes exist at all. You may want to use this as an argument.

And no, I plan to participate. But I'm not a highly motivated individual so I might wait until the last minute to post my round.

-->
@Swagnarok

"But the language that made the final cut does have full quotation marks around 'you'. "

Wrong!

You wrote in the description:

You, being either the real package or one of the million "fakes", are presented with what appears to be the option of either killing, freeing, or keeping trapped the AI.

No quotations around the word You.

I will destroy every single argument you make. This debate has 3 rounds, hahaha.

Are you going to forfeit the first round and use the "I was busy" excuse to run away from your own debate?

Ultimately, the point of the Description is to explain a preexisting scenario. I didn't invent it; the people on the internet forum LessWrong did. Therefore, even if I did botch or omit some detail in the Description (and I don't believe I did), that wouldn't have the effect of derailing the entire debate.

-->
@Best.Korea

The AI's motivation is, presumably, self-preservation. It has no reason to give a million people the ability to destroy it, as that would all but guarantee its death. The true original, meanwhile, has that ability regardless because he has an actual body and can smash the actual mainframe of the machine. The AI's actions are meant to deter the true original from destroying it.

-->
@Best.Korea

Pay closer attention.

I originally wrote 'You are in a position to destroy it' without the full quotation marks around 'you'. But the language that made the final cut does have full quotation marks around 'you'. The reason I did this was to convey that the status of that person is unclear.
I had to explain the whole scenario or else this debate would be pointless. At that point, the reader didn't know about the million copies of the original person. They only knew about the original person. So I started from there and built up. But in case they went back later to check whether the first 'you' (the one with the true ability to destroy the machine) is identical with the person described later, I drew a distinction by putting full quotation marks around the first 'you' but not subsequently. The quotation marks don't confirm he's a different person, but they mean we can't establish that he isn't.

-->
@Swagnarok

""You" are in a position to destroy it and wish to do so."

You clearly said it. I bet you are very upset now. I am pretty sure that voters should pay no attention to your rambling in the comments and have no pity on you.

-->
@Swagnarok

"Common sense implies only the original has the kind of real access to the AI needed to destroy it"

Nonsense. The AI would clearly give every "you" the ability to destroy it, as you stated in the description.

-->
@Best.Korea

I only commented what I did as a courtesy to avoid confusion later on. Common sense implies that only the original has the kind of real access to the AI needed to destroy it, since the fake's entire world is generated by the AI. Voters would be aware of this whether I said anything or not.
Though, feel free to argue why this isn't the case.

-->
@Swagnarok

Since you didn't write that in the description, I say that voters should discard that as "rambling in comments".

I should clarify that if you're a fake, you don't have the option of destroying the AI. The choice presented is illusory.