For most people, you should not fall in love with AI
The debate is finished. The distribution of the voting points and the winner are presented below.
After 2 votes and with 6 points ahead, the winner is...
- Publication date
- Last updated date
- Type: Standard
- Number of rounds: 3
- Time for argument: Two days
- Max argument characters: 5,000
- Voting period: Two weeks
- Point system: Multiple criteria
- Voting system: Open
fall in love: develop a deep romantic or sexual attachment to someone.
AI: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
I am arguing that the negatives outweigh the benefits. It is indeed a person's choice, but I say you shouldn't do it.
Seems like a debate that wanted to go a lot deeper than it did (such as con challenging whether AIs shouldn't fall in love with humans, without first properly exploring the idea that AIs might count as people).
---
Pro's case:
Love would be non-reciprocal. To which con counters that most people don't care, so long as they get off.
If it's true AI, then altering its code or otherwise making it love us would be the crime of slavery. Or, if it's not a true AI, then it's akin to falling in love with a microwave (so many jokes could be made here...), which denies there being the chemistry for it to be a good match.
Whichever form it takes, pro estimates that about 88% of us would abandon human relationships (pro really should have mentioned children here...) if sufficiently advanced sex dolls were available (or possibly just really sexy microwaves that know just how you want your hot pocket...). Con counters this by suggesting that the consequences of relationships should not be considered within the scope, solely the act of falling in love.
---
Con's case:
Love isn't a choice. AIs will soon complain of headaches and such, ruining the dream relationship...
Love really isn't a choice due to chemicals in our brains. It's no different than any other addiction. Therefore should or should not is irrelevant.
Pro eventually defends why lack of self control isn't a valid criticism, using a potato chip analogy.
---
Conclusion:
Going into this I thought it would be a matter of weighing costs against benefits. The costs definitely could have been laid out better. The benefits were very much lacking (including a point about headaches, which seemed to be arguing the wrong side of the resolution). Sure, people will do it, but that doesn't imply that they should do it; just as pro should not kill himself by binging on potato chips, which, choice or not, he clearly shouldn't.
Sources:
Only pro had them, but they did not contribute enough to gain those two points.
A few suggestions for if doing this one again:
Lack of children.
How unhealthy human relationships tend to be.
Training relationships.
AI overthrow (neatly this could be argued both pro and con, since the world might be better off without the meaty oppressors).
Interpretation of resolution:
Weirdly enough, this is the most crucial thing for this debate as far as judging. Since no one presented an interpretation, I'm just going off of what the resolution says at face value. Thus, I am only weighing arguments that have to do with how AI is right NOW, as the resolution is in present tense and the description makes no indication that we are assuming that the AI in question is a fully developed futuristic technology. This is the biggest problem for CON, as his entire case is based on the assumption that the AI is capable of passing a Turing test. Ironically, if the resolution/description specified we were talking about fully developed AI, I believe CON would win in a landslide here.
Main Arguments:
PRO opens up with an argument that AI is not capable of properly replicating human emotion. I buy this since we are talking present tense.
CON counters that in some ways most people are already in love with AI. I don't buy this due to the definition of "a deep romantic/sexual attachment." That said, I'm sure some people out there have some AI kinks. CON's other argument is that with current trends, AI will inevitably enter its way into our love lives in the future. I think this is true, but it does not fit into the scope of the debate.
Not sure what you mean. Anyway, if the scope of the resolution were broader you probably would've won.
Based upon recent indiscretions, not voting would have been the more honourable approach... Though power corrupts, as the saying goes.
vote plz
Are the codes of numbers creating emotions in a digital brain fundamentally any different than the chemical reactions going on in our natural brains?
That's a hard question to answer.
But functionally? If the AI is advanced enough, the emotional simulation accomplishes the same feat as the "real thing." PRO's argument is like saying music recorded through analog is real music, and digital recordings are nothing but simulations of the "real" thing, and therefore not real music. While it is technically true that different methods are used, the same result is achieved with differences too subtle for many to perceive or understand.
Considering we do not understand our own brains as much as PRO would purport, and considering we cannot share "feelings" with the AI, while we CAN observe their reactions to stimuli... I'd say if the reactions pass the Turing test consistently, why wouldn't we label them as at least conscious and emotive? I see no evidence to support the conclusion that a self-aware, human-like AI is not as conscious or emotive as we are.
There are a lot of complicated moral questions surrounding AI and their rights as sentient entities. We are approaching a point where we might have a sentient AI by accident before we're ready, but politicians aren't interested in laying out policies surrounding non-human minds because their terms are shorter than the expected timeline for AI.
Excellently said, but aliens and humans together is very controversial, especially if the developer is able to retain control over the device's program. Consider if he decided to remove the AI you fell in love with. Is this murder? Is this kidnapping? I wonder...
If we met an intelligent alien species, it would not function in the same way as us. It would not have oxytocin or dopamine either, and it probably would not experience emotions in the same way that we do, but you would not assume that it is unfeeling. A proper artificial intelligence will likely be just as alien to our own lines of thought and biochemistry. Most research today favors trained neural networks rather than structured code written step by step. The closest things to a true AI today, like GPT-3, are literally alien in their mode of thought. Not even the people who develop it can look at the code and say, "it is thinking ____."
they cannot produce dopamine, oxytocin, etc. The coding only allows specific functions and there is no coding variable for "emotion".
How can you prove that AIs don't feel emotion?
Hol up
stop kink shaming me
I just took a look at it. Nevermind.
It's called IBB? I should check it out then.
A Chinese debating entertainment show. Apparently Seldiora watches it too. Since China is so strict on politics (Communism gud! Government gud!), only common life problems are covered (such as should I end my relationship if X or Y happens, should I buy a house if it has X or Y qualities, etc.).
What's IBB?