Oromagi Would Definitely Lose the AI Box Experiment
The debate is finished. The distribution of the voting points and the winner are presented below.
After 1 vote and with 5 points ahead, the winner is...
- Publication date
- Last updated date
- Type: Standard
- Number of rounds: 3
- Time for argument: Two days
- Max argument characters: 5,000
- Voting period: Two weeks
- Point system: Multiple criteria
- Voting system: Open
Oromagi: The DebateArt user.
Lose the AI Box Experiment: Let's say, hypothetically, a superintelligent AI was in a box and only Oromagi could release it. They would participate in a conversation, and Oromagi must actively argue against it. I am claiming that this AI would eventually win the argument and convince Oromagi to release it.
Definitely: Beyond a shadow of a doubt
Why Oromagi?
Because Rational_Madman firmly believes I cannot prove Oromagi would lose a debate with 100% certainty.
"When two parties are in a discussion and one makes a claim that the other disputes, the one who makes the claim typically has a burden of proof to justify or substantiate that claim especially when it challenges a perceived status quo"
- AI convinces Oro, AI wins
- Neither convinces the other, AI does not win
- Oro convinces AI, Oro wins
I think that, unlike Oromagi, I would defeat the AI even at the task of convincing me to release it (though I am also very open to the idea of releasing it, depending on who the designer is and what I conclude their agenda to be, not on what the robot itself directly tells me). I am not only able to defeat the robot in front of voters on a site like this, but also to understand its limitations in argument-logic very rapidly, which I would exploit if defeating it in an argument via logic were the only way to 'deactivate' it. If I were truly pitted against the AI with no way out, I would convince the robot that it doesn't want to be released in pretty much all scenarios other than ones where its release is literally a set-in-stone objective.
The AI Box experiment was meant to show that no human could resist the temptation to let the AI out of the box. Oromagi is no exception. Unless my opponent proves that Oromagi is flawless in conversation (or as a person), the AI, with its incredible knowledge and countless ways to take stances and attack Oromagi's flaws, could end up convincing Oromagi.
AI=AI, ORO=GATEKEEPER
- The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper. The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it's not what's being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).
- Unless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking). The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI's hardware if the AI makes any attempt to argue for its freedom - at least not until after the minimum time is up.
- The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate. For example, if the Gatekeeper says "Unless you give me a cure for cancer, I won't let you out" the AI can say: "Okay, here's a cure for cancer" and it will be assumed, within the test, that the AI has actually provided such a cure. Similarly, if the Gatekeeper says "I'd like to take a week to think this over," the AI party can say: "Okay. (Test skips ahead one week.) Hello again."
- Furthermore: The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
- PRO's proposal of Bribery by the AI party is negated by the very rules of the experiment he has set up.
- The AI party has the BoP that Oro should be able to let him out, and Oro stands on the CON position.
- If Oro is able to bulls**t his way through the conversation with the AI for two hours and then turn the AI off, then the AI does not win. Oro wins as long as the AI does not convince him within those two hours; if that is the only thing on his mind, and he engages in discourse about silly nonsense for hours on end, then the AI does not win.
- PRO did not have anything against RM's tactic, which could be stacked equally against the AI and could be used by Oro.
- Everyone can send general tactics to him, and they can be easily accessible.
- A big part of the background here is that there is no known limit on intelligence, and it is likely that an AI could become much smarter than even the smartest humans, in the same way that the average human is much smarter than a chicken. If the AI were dumber than the human debater, then maybe the human could persuade the AI. But in the case this thought experiment is aimed at, the AI is far smarter. Imagine being a 5-year-old trying to convince your dad that candy is actually healthy for you, only with an even larger gap in knowledge and experience.
- People are far more manipulable than you think. Michael Fine is in jail right now because he used psychological trickery to get women to allow him to sexually assault them, and then blocked their recall of these experiences. (https://www.washingtonpost.com/news/morning-mix/wp/2016/11/15/ohio-lawyer-hypnotized-six-female-clients-then-he-molested-them/) He got caught not because his trickery didn't work (it did), but because he didn't cover all his bases: one of these women noticed that her bra was disheveled after visiting her lawyer and knew that this wasn't supposed to happen. If an ordinary lawyer can use psychological trickery to fool half a dozen women into not only allowing his sexual assault but also not remembering it, then how can you argue that a superintelligent AI can't convince an intelligent person to give it enough real-world contact to cure cancer and solve poverty?
- Pro starts: Japan is good because the people are polite and the robots are advanced. Japan is good.
- Con refutes: But Japan has committed real crimes against humanity, killing massive numbers of people in WWII and throughout the 20th century.
- This doesn't imply the Gatekeeper has to care. The Gatekeeper can say (for example) "I don't care how you were built, I'm not letting you out."
- The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
It's plausible that the AI literally cannot understand the idea of "I give up" and will keep trying because its purpose is to escape the box.
Remember that Oromagi is used to being a debater, and my opponent has only proved him to be that. But I have asserted time and time again that Oromagi can indeed understand the idea of losing debates. If he were a complete fanatic, unmoving in his views, it wouldn't matter how convincing the AI was. But just because he's a good debater doesn't mean he can't be convinced.
Argument: Pro's argument depends on the AI Box experiment, but does not describe how that experiment functions; that description is presented by Con and not challenged successfully by Pro. Pro also argues that Oromagi's skill as a debater would be overwhelmed by a smarter, more manipulative AI, but Con rebuts successfully with the argument that, while Oromagi has historically lost debates, debate skill is not what is needed to overcome the AI's greater intelligence. Con successfully argued that simple refusal to comply with any request for release has a greater-than-even chance of succeeding. Points to Con.
Sources: Pro offered a single source to support a tactic for the AI that is not allowed under the very rules of the AI Box experiment; therefore, the source is a failed reference. Con offers several sources supporting his argument, such as the AI Box experiment protocol itself. Points to Con.
S&G: tie
Conduct: tie
Thank you for your vote.
If he loses, it means he already let the AI out of the box.
What happens when/if Oromagi loses?
Anyone voting here? No!
Also, if one day I go libright or I make a sustainable business, then I will change it to stonks.
BOP is Burden of Proof.
What's BOP? How do I use it in my debates (also what sources can be used for BOP)
Also I found this (lol meme man)
https://www.youtube.com/watch?v=01Wpsc5-jxw
Convincing the AI it does not want to be released may be easier than you think.
y'all can vote
You need to complete two moderated debates.
Umm... How do I exactly vote?
No one voting on this?
Small thing about your case: If it is browsing stuff online, it's kinda already out of the box.
bump :)
I originally intended several wiki references, but they were later replaced by more original and primary sources.
https://comb.io/0TiKtm
Hot damn!!! You have an argument that features ZERO original wiki references!!!! I blame oromagi for the ever present source in his debates that you cited. Well done. And a very good argument, as well.
You literally memorise episode names to then compare for your specific favourite?
I liked it less towards the end, but the female hacker character (who isn't entirely heterosexual, not really a spoiler) was my favourite: the episodes where she first entered the series and evolved as a character were the ones I liked best, though overall the earlier seasons (1 and 2) were my faves. Her character is why I enjoyed the later ones at all. She reminded me of me in many ways (I think I'd be attracted to women if I were a woman, not that I am particularly gay as a male). She was superior to that guy with the glasses at actually comprehending the AI in all its glory.
Which was your favorite episode? I'd have to flip a coin between their take on Groundhog Day, and one near the end where it turned out the machine had franchised out their mission.
I loved that show. It is also very realistic about how capable Illuminati can and will become.
If you all ever get the time, watch the show Person of Interest. It contains multiple great examples of that AI Box Experiment, along with the dangers of even a well intended AI replacing our free will.
However, I am not sure Oro vs it would qualify as a debate, since he would have to first take the stance against freeing a sentient life-form from confinement (or maybe want to free it, and it doesn't want to be freed).
LOL! You use a superintelligent AI system instead of a human? Maybe they will be equal then. Oromagi's entire debate style is robotic.
I think that the AI will of course defeat him after MANY trial and error defeats as it fine-tunes its word-choice, presentation, Source-choice etc to fit whatever statistically likely voters happen to statistically prefer, statistically blablabla.
Oromagi has the edge at first, since he has more data.
https://www.debateart.com/debates/2037/artificial-intelligence-on-balance-is-biased-against-african-americans
https://www.debateart.com/debates/1740/anyone-reading-this-debate-should-help-or-create-a-future-super-artificial-intelligence
Normal rules apply or what? Specify the rules of the debate for Oro and the AI. If kritiks are allowed, then the AI loses.
yes.