Instigator / Pro
1516 rating, 9 debates, 55.56% won
Topic #2707

Machines can, in theory, think.

Status: Finished

The debate is finished. The distribution of the voting points and the winner are presented below.

Winner & statistics (Pro / Con)
Better arguments: 0 / 3
Better sources: 0 / 2
Better legibility: 0 / 1
Better conduct: 1 / 1

After 1 vote and with 6 points ahead, the winner is...

Benjamin
Parameters
Publication date:
Last updated date:
Type: Standard
Number of rounds: 3
Time for argument: One week
Max argument characters: 15,000
Voting period: One month
Point system: Multiple criterions
Voting system: Open
Contender / Con
1777 rating, 79 debates, 76.58% won
Description

-Full resolution- In theory, it is possible to create a machine which is capable of thinking in a similar manner to humans

-Definition-
Possible = something which has a chance to occur
Machine = a mechanically, electrically, or electronically operated device for performing a task
Human = of, relating to, or characteristic of humans

Wagyu's burden of proof: "Machines can, in theory, think."
Contender's burden of proof: "Machines can, in theory, not think."

-General Rules-
1. No new arguments in the last round
2. Since this is a thought experiment, sources are not essential
3. Burden of Proof is shared

-->
@Barney

^^^

Rewording a faulty vote does not justify the vote.

-->
@Wagyu

You apparently had "Sloppy and contradictory use of sourcing," even disagreeing with your own source at one point. Whereas con apparently used ones which "systematically support his arguments and rebuttals, such as his rebuttal of AI-hard, and more particularly by his defining of neuron, cell, and think."

Legibility may be awarded for arguments "at least comparatively burdensome to decipher," with one specific example being "Overwhelming word confusion." Clearly the voter became distracted by such issues, especially when your language directly conflicted with itself. Again, it's not an award I would give on this debate, but not everyone needs to vote the same way to have still put thought and effort into fairly grading your work.

-->
@Barney

The legibility point is “awarded as a penalty of EXCESSIVE abuse committed by the other side, wherein sections of the debate become ILLEGIBLE or at least comparatively burdensome to decipher”. Personally I have never seen someone lose a legibility point in any DaRT debate, but nevertheless, could you point out where I committed EXCESSIVE ABUSE, or where my debate became outright illegible? Moreover, the sources point is also dubious. The point is supposed to go to the side with a STRONG QUALITY LEAD, not to whoever provides sources for what is meant to be a thought experiment. If we once again inspect the document authored by you, “a side with unreliable sources may be penalized, but the voter must specify why the sources were unreliable enough to diminish their own case”. Which source that I used was unreliable? The one about artificial photosynthesis provided by the Future of Life Institute? Or the one about the Kardashev scale, a well-known theory?

-->
@fauxlaw
@Wagyu
@Benjamin

Please understand this was a lengthy debate and a well-detailed vote, with very little time for review...

**************************************************
>Reported Vote: Fauxlaw // Mod action: Not Removed (borderline)
>Voting Policy: info.debateart.com/terms-of-service/voting-policy
>Points Awarded: 6 to con
>Reason for Decision: See Votes Tab.
>Reason for Mod Action:

The vote was borderline. By default, borderline votes are ruled to be sufficient.

My only hesitation is the legibility award. While I personally find the legibility award too nitpicky, the voter does not seem to be attempting an unfair allotment, but rather trying to guide pro in an important way to do better next time. To be clear, the fact that I would personally not award something does not mean someone else is wrong to do so.

That said, the vote left me understanding the debate pretty well (and my skimming using a word search for key phrases backed this up). The resolution failed due to a key bit of word confusion, calling on "theoretically" instead of "hypothetically".
**************************************************

final vote push

-->
@Wagyu

It is “burdensome to decipher” conflicting arguments. Don’t cry to me. Appeal to a mod.
However, I will note that machines do not think like humans. For example, facial recognition uses entirely different methods. You could take a clear photo of a person, then rearrange the facial elements, such as separating the eyes and putting them at the forehead and chin, and AI would still interpret the face as the same as the untouched photo of the person's real face. We do not think that way at all, demonstrating that machines do not process like us. This point was never argued one way or another, but Con did make the claim of a difference in processing, and you did not demonstrate a superior counter-argument.
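
A toy sketch of the distinction being described here, purely for illustration: the feature maps and function names below are hypothetical, and no real facial recognition system works this simply. The point is only that a matcher which ignores spatial arrangement can treat a scrambled face as identical to the original, while a position-aware comparison cannot.

```python
from collections import Counter

# Hypothetical feature maps: grid position -> detected facial feature.
original_face  = {(0, 1): "left_eye", (0, 2): "right_eye", (1, 1): "nose", (2, 1): "mouth"}
scrambled_face = {(2, 2): "left_eye", (0, 0): "right_eye", (2, 1): "nose", (0, 1): "mouth"}

def bag_of_features_match(a, b):
    # Position-blind: only asks which features are present, not where they sit.
    return Counter(a.values()) == Counter(b.values())

def spatial_match(a, b):
    # Position-aware: compares which feature sits at each grid position.
    return a == b

print(bag_of_features_match(original_face, scrambled_face))  # True  -> treated as the "same" face
print(spatial_match(original_face, scrambled_face))          # False -> arrangement differs
```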

You have a number of conflicting arguments, and you used very little sourcing to support your arguments. You claimed sources were not necessary, but when you even argue against a source you cited, that is sloppy. My suggestion: use sources and be true to them.

-->
@fauxlaw

I think this vote was a little too harsh. To recall, the resolution is the following.

-In theory, it is possible to create a machine which is capable of thinking in a similar manner to humans-

You stated that

"Pro’s resolution sets a standard that even Pro fails to uphold, losing ground in just the second round by claiming: “It is impossible for a machine to be identical to a human being.”"

Of course! If it were identical to a human being, then it would be a human being! The debate isn’t “is it possible to make a human being”; it is “can the process of thinking be replicated through electronic means”.

If we revisit the BoP, we can find that I do not need to create a human being; I just needed to imitate the process of thinking and evaluating information. The purpose of this debate, as I have stated during the debate, isn't to test the current capabilities of technology, but to discuss whether, in the future, a thinking machine is possible.

“By contrast, in Con’s first round, his rebuttal applies the clear separation of mind and body function”

I have already demonstrated that the “mind vs body” issue does not affect the course of this debate. If in fact the mind is real, then the question becomes why the mind can connect with a fleshy brain but not a metal one, the only difference being material. I posed this question and was rebutted with “The immaterial mind is strange, we only believe in it because we experience it ourselves”. To say that the only justification for the mind is personal experience is like justifying murder on the basis that “I personally enjoy it”.

“Pro drops Con’s free will argument by the simple claim, unsubstantiated by argument other than that the free will does not exist by claim that thinking is like seeing. These are entirely different functions.”

Completely incorrect. To recall, I stated “Even though I don't believe in free will, your major premise is still incorrect” and then went on to rebut the free will argument by pointing out the flaw of assuming that just because two beings participate in the same act, they must be the same thing.

“Pro buries his argument by two contradicting phrases: “With perfect technology, we can do quite literally anything,” which is followed by [a bit later] “this is not a debate about technological possibilities, but about whether theoretically, this is possible.”

Incorrect. My first statement was a response to my opponent's constant complaint that “this is not possible”. To debunk that claim I showed 1) that the claim is incorrect and 2) that the claim is not even relevant.

Sourcing.

“Pro offers very little in sourcing, claiming it unnecessary in a “thought experiment,” yet offers a source on the Kardashev Scale in R2”

This is completely ludicrous. I only offered the Kardashev Scale because my opponent strayed away from the original thought experiment of the debate. It would not have been given if we had stayed on track. It is completely ridiculous to penalise someone for using sources.

If we read the DaRT Voting Policy, under sources, it clearly states that "(sources) goes to the side that (with a strong quality lead) better supported their case with relevant outside evidence and/or analysis thereof."

A STRONG LEAD. Have you demonstrated that my opponent has a strong lead?

“Con’s sources systematically support his arguments and rebuttals, such as his rebuttal of AI-hard, and more particularly by his defining of neuron, cell, and think”

And my sources show that different processes can be mimicked by different things, such as the process of photosynthesis. You have essentially given a point for defining terms, something both parties did.

Spelling.

“Pro’s reversals of argument [as demonstrated in Argument, above] loses his legibility point”

This is ridiculous. If we once again visit the DaRT Voting Policy, under the Legibility section, we can find that this point is

Awarded as a penalty for excessive abuse committed by the other side, wherein sections of the debate become illegible or at least comparatively burdensome to decipher.

Examples:
Unbroken walls of text, or similar formatting attempts to make an argument hard to follow.
Terrible punctuation throughout.
Overwhelming word confusion, or regularly distracting misspellings.
Jarring font and/or formatting changes.

Please show me where my wall of text, terrible punctuation and word confusion is.

Please try to do better next time.

-->
@fauxlaw
@Theweakeredge

I think you might be interested in this debate.

Two days remain for voting.

-->
@Fruit_Inspector
@fauxlaw
@Sum1hugme
@Theweakeredge

This might be a debate worth voting on. I would appreciate it if one of you did : )

vote bump

-->
@Benjamin

" The only problem I have with your claim is that a brain is not thinking, rather it is organising information - the mind is thinking."

It's great that you've said that it may be possible for an AI brain to be created. The question then becomes: if I were to slowly replace the neurons in your brain with artificial neurons, at what point would you cease to function?

-->
@Wagyu

I actually agree with you on the possibility that sometime in the future, an AI could be structured like the brain. The only problem I have with your claim is that a brain is not thinking, rather it is organising information - the mind is thinking. In other words, an AI could simulate a brain, but if the mind does not exist, neither humans nor machines can think. When it comes to whether or not machines can have a mind, it is not up for debate, as it would be a religious or pseudoscientific claim.

-->
@Wagyu

Yes.

-->
@Benjamin

so common ground..?

-->
@Wagyu

Correct. I have no reason to doubt my own existence. Therefore, "I" exist. If I did not exist I could not debate you.

-->
@Benjamin

You don't. Using Occam's razor, you can conclude that questioning everything in your life is unnecessary. Consider the following.

Imagine if I told you that there were intangible, invisible, inaudible and undetectable fairies inside my garden bed. There would be practically no way for you to disprove this. After all, they cannot be detected. What would be your reaction? Should you now live your life believing that these things exist?

Of course, Occam's razor states that you should pick the simpler option: no, there are no garden fairies, because there is no reason to believe there are.

Returning to my existence, I exist because I can experience the things around me. There is no good reason for me to doubt my existence.

-->
@Wagyu

Ok - tell me now how you prove your own existence.

-->
@Benjamin

Thanks. I figured that out after reading your first argument. Nevertheless, I attempt to rebut this popular position in my argument. Thanks for a good debate like usual, benny boy.

-->
@Wagyu

Yes
If you do not believe that statement, one cannot prove one's own existence.

-->
@Benjamin

Do you believe in the statement "I think, therefore I am"?

-->
@Wagyu

In "theory" anything can do anything...Getting things to do stuff in reality is a whole different ball game.

Though "can" do, is a whole different ball game to might do.

dang it, I thought no one was going to take this.

-->
@Wagyu

Just so you know, I want to take the debate, I just need to prepare. Hopefully I will get around to accepting it, no promises though. Also, if a robot cannot feel emotions, does that mean, according to your definition, that robots cannot think like people?

-->
@Sum1hugme

"I think, therefore I am" - is an illusion if a soul does not exist

Assuming our minds are the same as the consciousness of one's own inner state:

1. Consciousness is the awareness of surroundings, which is just information stored in the organisation of atoms in our brain (scientifically)
2. Atoms exist only as a concept within our minds, which makes atoms (and our brain) a product of consciousness
3. In conclusion: the idea that our mind is purely scientific leads to circular logic and self-contradictions regarding all knowledge, including science

Thus, while theoretically such a thing as a physical mind could exist, it would not be able to understand itself or science without contradictions.
As we work with the priority of consistency over evidence, there is no real reason to believe that our mind is purely scientific.

This makes for an interesting problem:

If a mind is supernatural in nature, it is independent of the material world
Thus the mind can be an outside observer, consciousness our connection to reality, and our body our connection to others (Mind, Consciousness, Body).
Science would then be the conclusions the mind made based on the information our body collected and our consciousness interpreted
If this view is correct, then concepts and reality exist independently

Concepts exist only within the mind - all things that are not the primacy of existence are concepts.
If "the mind" is physical in nature, it is a concept describing a physical process
If concepts exist only within the mind, and the mind is a concept, then neither concepts nor the mind exists.
Science, philosophy, the physical world and reality are concepts and thus do not exist.
Thus nothing can be known; everything is just atoms moving around. Oh! I forgot, atoms do not exist; they are science, thus concepts, thus mind.
This would render any truth outside of our control. Our words would have no meaning and tho virry meakeng ef reogmdosfi odjaoemajfoieacmda

Thus this approach makes no sense

Thus we are left with three choices:
1. Accept the existence of a mind/soul as supernatural with blind, almost religious faith
2. Ignore the question, pretending the first answer is correct
3. Admit nothing is true and nothing makes any sense

If both the observer and the object being observed are the same thing,

Yes, but if your statement is true, then computer AIs able to control a robot are already conscious.
Consciousness in that sense would be just as nonexistent as the randomness of Flappy Bird.

According to that theory, consciousness is only the physical process, and because of that, "I" do not actually exist, only the atoms my body is composed of. But atoms cannot "feel" alive or have an identity, so nothing composed of atoms should theoretically "feel alive". Do you agree that you could be deceived by my actions into believing that "Benjamin" exists, while I actually am just a bunch of atoms? The difference is that a bunch of atoms would act in the exact same way that "I" would. I think it is necessary to distinguish between consciousness, a product of our brains, and the soul, which makes us able to "experience" our consciousness on a metaphysical level but still cannot influence our brain.

If "I" do not exist independently, consciousness is purely physical, rendering philosophy meaningless as "Consciousness" is just as arbitrary as "fire"
We must start using the word "soul" for describing the experience of consciousness, as it and consciousness itself are not the same.

-->
@Benjamin

Conscious vs. unconscious is a false dichotomy. Consciousness is a state of awareness, and as we enjoy it, it is the state of being aware of being aware. How aware one is is directly tied to one's neural complexity, and we wouldn't call a dog as conscious as a person, but we would call a dog more conscious than a goldfish, because they're operating on different levels of awareness and therefore different levels of consciousness.

The real question is whether or not consciousness is a metaphysical thing.

And also, if the thinking computer would have a "soul".

But we would never be able to see the difference between a computer with a soul and a soulless thinking computer.

The question boils down to whether or not we believe that "I" exists, or if only the brain exists.

-->
@Safalcon7

The ability to comprehend information.

Put up the definition for "Think"

-->
@Benjamin

Eh? A lot of my debates are at 30,000; the only downside is that when you have longer debates it takes longer to get them voted on.

-->
@seldiora
@Jarrett_Ludolph

I will be making a case for hard AI. As for machine or AI, I will be arguing that, if the technology allows, machines can have the potential to think. I can say that machines will be a part of the argument I will make.

I am very interested.

But I cannot say I disagree, and the word count seems a bit scary X D

So basically a debate of whether we will ever achieve "hard" (or "strong") AI.

-->
@Wagyu

Machine, or AI? This debate looks like it was made for me. (Though think =/= understand)