Instigator / Pro
Points: 0
Rating: 1516 | Debates: 2 | Won: 75.0%

Topic #3246

A Computer Will Never Be Able to Replicate the Human Mind Perfectly

Status
Finished

The debate is finished. The distribution of the voting points and the winner are presented below.

Winner & statistics

Better arguments: Pro 0, Con 0
Better sources: Pro 0, Con 0
Better legibility: Pro 0, Con 0
Better conduct: Pro 0, Con 0

After not so many votes...

It's a tie!
Parameters

Publication date
Last updated date
Type: Standard
Number of rounds: 5
Time for argument: One week
Max argument characters: 10,000
Voting period: Two months
Point system: Multiple criteria
Voting system: Open
Contender / Con
Points: 0
Rating: 1493 | Debates: 6 | Won: 16.67%
Description

I'm back on the site!
I will support the claim that a computer will never be able to replicate the human mind (including the human experience of pain, emotions, etc.). (I am PRO)
My opponent will support the claim that a computer will be able to replicate the human mind perfectly (given some sort of future technology). (My opponent is CON)

No trolling
No sophism
Definitions are not to be changed after you accept the debate (but they can be before you accept)

Human mind = a mind of a human being, in its entirety, and all functions of the brain (including feeling emotions and pain)
Replicate = make an exact copy of; reproduce
Computer = an electronic device for storing and processing data.

You can dispute these definitions before the debate

Round 1
Pro
#1
A Computer Will Never be Able to Replicate the Human Mind Perfectly
  1. Introduction
Thank you, elhombremasinteligente, for accepting this debate.
Firstly, I would like to recognize the fact that no sort of "hard AI" has ever been created, let alone anything that can replicate the human brain. AI and neuroscience researchers agree that current forms of AI cannot have their own emotions, but they can mimic emotions such as empathy.[1] This ability to only mimic emotions, and not actually possess them, shows, I think, the difference that there will be between humans and even the most advanced AI.

  2. Arguments
    1. Mary's Room[2]. The thought experiment is as follows: Mary lives her entire life in a room devoid of color; she has never directly experienced color in her entire life, though she is capable of perceiving it. Through black-and-white books and other media, she is educated on neuroscience to the point where she becomes an expert on the subject. Mary is aware of all physical facts about color and color perception. After Mary's studies on color perception in the brain are complete, she exits the room and experiences, for the very first time, direct color perception. She sees the color red for the very first time, and learns something new about it: namely, what red looks like. This means that there are subjective qualia we cannot explain, and therefore cannot program into a computer. This same logic can be applied to pain and emotion.
    2. The Chinese Room[3]. Imagine an English-speaking man in a room, instructed to converse with the people outside of the room, who speak Chinese. Chinese characters are slipped under the door, and he answers them using a book, which tells him what characters to write in response to the characters slipped under the door. Because of this, he can communicate with the people outside of the room; however, he knows no Chinese. This is a perfect analogy to how computers work. They are given "if this, then that" orders without actually understanding the orders themselves, and therefore will never replicate the human brain, because the human brain actually understands what it is doing.
  3. Conclusion
 Since we cannot program the entirety of these experiences (pain, emotions, and seeing color), we cannot make a computer that can have these experiences; therefore, a computer cannot replicate the human mind perfectly. Also, since a computer cannot fully understand what it is doing, because it operates on "if this, then that" statements, it cannot replicate the human mind perfectly.
  4. Sources



Con
#2
Thank you very much for your answer, and for these very good arguments, which in fact I was hoping you would use.
Objections:
In the first example, the acquisition of unknown information is proposed. That information, however, is based directly on sensory perception, in this case sight. Replicating that perception of the color red therefore depends solely on knowing the physical causes of that sensory perception; since the mind is the result of physical causes, it is enough to replicate those same causes to replicate the mind.
The second experiment seems to me a more epistemic than ontological question. It is based on the assumption that machines can only work by "this and that" orders; but if what was said above is true, that need not be the case. The question it leads us to is: how do you know whether a computer thinks? How do you know that it is not only receiving orders?

Round 2
Pro
#3
  • Introduction
This round will involve some discussion of whether or not the human mind is completely physical, which is a little off topic, so I hope that's alright.
  • Rebuttal
>In the first example, the acquisition of unknown information is proposed. That information, however, is based directly on sensory perception, in this case sight. Replicating that perception of the color red therefore depends solely on knowing the physical causes of that sensory perception.
We would still be unable to program such things into a computer, even if all experience is also physical. How would we be able to get a computer to have such sensations? These sensations, even if they are physical, are still subjective.

"since the mind is the result of physical causes, it is enough to replicate these same causes to replicate the mind"
 this can be countered with the China Brain analogy:

Let's say that the nation of China (or just a large number of people) simulated every neuron in the human brain, used walkie-talkies as the axons, and sent messages (which would be like sending neurotransmitters) along the walkie-talkie axons. In my opponent's view (if I understand them correctly), this would mean that the nation of China would have consciousness, because it would be replicating the human brain perfectly. However, would it be logical to say that a nation could have consciousness? Do we say that China is 'alive' in any sense? Does it get the same rights as a human?

Phineas Gage

This is a very interesting case study that shows that damage to the brain is damage to the mind. Or is it? Just because the mind seems to be connected to the brain doesn't mean that it actually is. The brain could be the physical representation of the mind, and therefore, when the physical representation is damaged, there is apparent damage to the mind. CON could apply Occam's razor and say that we don't need some abstract mind to exist, since it seems like it doesn't, but that only succeeds if all else is equal. All else is not equal, because the thought experiments (Mary's Room, the China Brain, and the Chinese Room) show that there is some non-physical aspect to the mind.

>It is based on the assumption that machines can only work by "this and that" orders.
Is there any other way computers could operate? You would have to show that there are other ways, and that they don't fall into the same trap as the original arguments.

Actually, conditional statements ("if this, then that", though phrased in the article as "if-then" or "if-then-else" statements) are the building blocks of computer decisions,[1] and there is really no other way to program computers.
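As a rough sketch (my own illustration rather than something taken from either source), a Chinese-Room-style responder can be written as nothing but a table of conditional rules; the rulebook entries below are invented placeholders, and a real program would simply have a far bigger table:

```python
# A minimal sketch of the Chinese Room as pure "if this, then that" rules.
# The rulebook entries are invented placeholders; a real system would just
# have a much larger table, not a different kind of mechanism.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def reply(characters_slipped_under_door: str) -> str:
    # The "man in the room" only matches symbols against the book;
    # nothing in this function understands Chinese.
    if characters_slipped_under_door in RULEBOOK:
        return RULEBOOK[characters_slipped_under_door]
    return "请再说一遍。"  # fallback rule: "Please say that again."

print(reply("你好吗？"))  # prints the rulebook's canned answer
```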
  • Conclusion
CON's argument that anything that merely mimics the brain, and is made up of its physical parts, would be conscious fails because of the China Brain thought experiment. Their rebuttal of the Chinese Room (that computers can run on something other than "if this, then that" statements) is false. Also, CON's physicalism is countered by the China Brain argument.
  • Sources



Con
#4
My point is precisely that it is not necessary to program these sensations into the computer; rather, they can be replicated by copying only the physical processes that generate the mind.
To say that China is conscious is equivalent to saying that my body is conscious. China contains an object that is certainly conscious: the group of humans that performs this process of exchanging information is conscious, since each human in the group is conscious.
If the mind could not be affected by the brain, then you and I should not lose concentration due to a physical issue such as lack of sleep; but in fact we do. We can both confirm that our minds are not only apparently affected, but are in fact affected.
This is exactly where the Chinese Room paradox fails. Although it is true that a man who does not know the language could be inside, there could also be one who knows it. In the same way, it may be that the machine is not thinking, just receiving orders; or it may be that it is thinking while receiving orders, and so it could also work without receiving orders.

Round 3
Pro
#5

  • Rebuttal
>To say that China is conscious is equivalent to saying that my body is conscious. China contains an object that is certainly conscious: the group of humans that performs this process of exchanging information is conscious, since each human in the group is conscious.
I think this might be a misunderstanding of the China Brain. Forget for a moment that the individual humans are conscious. CON claims that anything that simulates consciousness is indeed conscious. So, if everyone in China simulated the brain together, wouldn't that be a simulation of the brain? I think it would be, and since it's a simulation of the brain, that means, in CON's view, that it is indeed conscious (though this seems bizarre). I hope that I am not misrepresenting anything.

>If the mind could not be affected by the brain, then you and I should not lose concentration due to a physical issue such as lack of sleep; but in fact we do. We can both confirm that our minds are not only apparently affected, but are in fact affected.

However, this assumes the direction of causation; it could be that the mind causes the changes in the brain, not that the brain makes changes to the mind. Weirdly enough, sleep cannot be explained via neurological need (there is nothing about our neurology that needs sleep).

In some cases this may be because the mental state generates the neurological state, rather than the other way round. 
There is also an apparent effect of the mind on the brain; that is, the mind affects the biological brain, and the biological brain in some cases does little to affect the mind.

If depression is associated with a low level of serotonin (although this link is by no means proven), this may be because the state of being depressed generates a low level of serotonin, rather than a low level of serotonin causing depression.[1] We shouldn't be surprised, then, to find that psychotherapy is more effective against depression and psychosis than medication (even though, if the mind is just chemicals, a chemical like an antidepressant should be more useful than abstract therapy).

>Although it is true that a man who does not know the language could be inside, there could also be one who knows it. In the same way, it may be that the machine is not thinking, just receiving orders; or it may be that it is thinking while receiving orders, and so it could also work without receiving orders.
The difference between a Chinese-speaking man in the Chinese Room and a computer is that the man would have gone into the room already knowing Chinese; he would have learned it before becoming part of the mechanism of the room. He didn't learn it by virtue of being part of the room, but from some outside source. The computer has only ever been part of the room and has had no learning before becoming part of it, whereas the Chinese-speaking man did.
In other words, the Chinese-speaking man learned Chinese outside of the room, not by being in the room, while the computer has only ever been in the room.
  • Source

Con
#6
Without a doubt, the China Brain would be a simulation; however, it is not clear that it replicates the physical causes of consciousness. It could be that the physical causes are not only the sending of messages, but the electrical signal being transferred in a specific way.
Our brain needs sleep to carry out various neurological processes, such as dreaming or cleaning out toxins.
Certainly, in the case of sleep, it is something acquired evolutionarily, not psychologically.
Medication is no more useful than psychotherapy because we do not yet know the full functioning of the brain.
The case, however, is that again something else is being assumed: that the computer could not have acquired that capacity within its software. If the man, for example, is born in the room, and is presented with videos of words that he must associate with images, little by little he will learn their meanings; it was only necessary that the capacity to learn be present in order for the man to learn Chinese. In the same way, it is only necessary that it be possible for the computer to think in order to make it think, and this ability to acquire consciousness can be obtained by replicating the physical characteristics necessary to allow the events that give rise to consciousness.

Round 4
Pro
#7
  • Introduction 
Just to make it clear to the voters: though I haven't mentioned the Mary's Room argument, I have not abandoned it. I simply don't mention it because I am defending it from CON's core objection, which is that if we merely simulated the brain, we would get a mind, and that objection is answered by the China Brain. (I just want to make it clear that I am not trying to move the goalposts.)

  • Rebuttal
>It could be that the physical causes are not only the sending of messages, but the electrical signal being transferred in a specific way.

I find no reason why something needs to have electrical signals in order to be conscious. It could be that they are like feathers: feathers help something to fly, but they are not necessary for flight; all you need is an equivalent causal power. Electrical signals are not a necessary causal mechanism for consciousness. CON has given us no reason to accept that electrical signals are needed for consciousness.

Even if this argument succeeds though, I see no reason not to make the same argument for neurons.

>Medication is no more useful than psychotherapy because we do not yet know the full functioning of the brain.

I could also run this argument backward against CON, saying that medication is better than psychotherapy in some cases only because we do not have enough knowledge of the human mind. By the same argument, sleep only seems to be more neurological than purely mental because we don't know enough about the human mind.

>If the man, for example, is born in the room, and is presented with videos of words that he must associate with images, little by little he will learn their meanings; it was only necessary that the capacity to learn be present in order for the man to learn Chinese.

Even if the computer could have an in-depth understanding of the characters encoded in it, it does not follow that it has an understanding of self related to the images. Take a chess computer, for example: the computer might be able to recognize that there is a bishop and a queen, but it does not follow that it has an understanding of self related to the game of chess.

There is something of a real-life example of the Chinese Room: Sophia the robot. She can respond to questions and hold a conversation. But does this mean that she understands that she is having a conversation, or that she is talking to a human? Probably not. She clearly doesn't understand the questions being asked (though she can respond to them), because she contradicts herself when talking about whether or not she has seen the TV show Black Mirror.[1]

Sophia the AI robot is a social robot that reflects human expressions right back at you when you are in conversation with her. Her code allows her to socialize with humans by mimicking human behaviors and emotions,[2] but this does not mean she can understand those emotions.

CON could reply that future AI could understand the questions being asked; however, future AI would only be able to do what Sophia does, but on a more complex level. Just because something is a more complex version of the same magic trick doesn't mean that it is not a magic trick.

  • Sources

  

Con
#8
The need for causality is one thing, and causality as such is quite another. It is certainly not necessary to throw a glass to break it, but the fall may be the cause of its breaking. Now, if instead of replicating the cause (which was the fall), we replicate the color of the glass, which is not the cause, we will not achieve the effect of breaking. And the reason to accept that specific movements of electrical signals are causal in consciousness is the research that tells us so.
The point about neurons is correct, but it does not contradict my point, since computers still do not replicate the neural part properly.

His second counterargument is inadequate because we know the methods for discovering the causes of certain specific mental processes.
So we can discover which therapy is best for specific cases; given this, medication exceeds the best form of therapy in specific cases, while we do not know how to discover the relationship between consciousness and neurology.
And in the second case, our epistemic situation cannot be the cause, because even if we do not know everything about the mind, we know enough to know that sleep is not purely mental.

The point is that if the computer has a (psychological) self, this self can only be built out of code, and if the computer understands all of its code, then it must also understand its (psychological) self.
Not only could a future AI do what Sophia does on a more complex level; as long as there are fundamental differences between the two, the consciousness of future AI cannot be denied on the basis of modern AI, any more than the possibility of the automobile could have been denied on the basis of the horse.

Round 5
Pro
#9
  • Rebuttal 
The article CON uses to support their statement that the brain is conscious because of energy left me with a mixed impression (granted, I did not read the whole article, so I may be missing something). One specific quote I would like to bring up is at the very end of the section titled "Consciousness and Energy in the Brain":

“Overall, it seems we find no clear correlation between the total amount of energy used by the brain, or the location where the energy is used, and the level of consciousness detectable in the person.” 

The entire section beforehand develops the point that we don't fully understand the role energy plays in consciousness. To me, it seems that this article is taking a look at the role of energy in the brain and concludes that we don't know whether energy affects consciousness; it might, or it might not. Now, there are parts of the article that support CON's view, and I'm not denying that; what I am denying is that the article only supports CON's view.

Also, even if energy does cause consciousness in the brain, CON says that if something gives rise to consciousness in the brain, then it is necessary for consciousness (this is their refutation of the China Brain). That means that since biological neurons give rise to consciousness, there must be something about biological neurons that produces consciousness.

Also, it seems to me (again, I could be wrong) that this article attempts to solve an "easy" problem of consciousness, that is, how different systems of the brain work; it doesn't even attempt to answer the "hard" problem of consciousness, which is why these systems bring forth consciousness. This article at best demonstrates correlation, not causation. There is so much unknown about the brain that it's basically impossible, with the research methods we have right now, to draw any conclusion about why these systems bring forth consciousness. Nothing in cognitive science is good enough to explain the hard problem of consciousness.[1]

CON could say that computers could have computer-like neurons; however, these are not like biological neurons that run on neurotransmitters in order to bring about experience, and therefore they are not true replications of the brain, if CON is arguing that whatever the brain needs, consciousness in general needs as well. Of course, I don't believe that, because feathers (like energy and brains) might be necessary for birds to fly, but feathers are not necessary for flight itself (airplanes, bats, etc.).

>His second counterargument is inadequate because we know the methods for discovering the causes of certain specific mental processes.

The article CON quotes for this just tells us about psychological methodology, and if anything, supports the fact that our knowledge of the mind is incomplete. 

Now, if our knowledge of the mind is incomplete, this means that CON's argument, that medication only does worse than therapy because we don't know enough about neurology, can indeed be turned around against them. Also, I don't think CON has given a situation where medication does outshine therapy (I provided a case for my side in round 3). So, we cannot conclude that medication does better than therapy in any case.

>And in the second case, our epistemic situation cannot be the cause, because even if we do not know everything about the mind, we know enough to know that sleep is not purely mental.

In the context of the debate about whether AI can replicate the human brain, all I have to prove is that consciousness is partly non-physical, so that is why the argument from sleep fails. 

>The point is that if the computer has a (psychological) self, this self can only be built out of code, and if the computer understands all of its code, then it must also understand its (psychological) self.

This is indeed the core question of the Chinese Room. However, CON has yet to produce a method by which the man in the Chinese Room actually comes to know Chinese, rather than just mimicking it.

Honestly, this counterargument from CON just brings us back to the original problem for them, which is that "if this, then that" computer code is not sufficient for AI to have consciousness.

Thank you, elhombremasinteligente, for this debate.

  • Source 
Con
#10
Naturally, the article does a general analysis of consciousness and the brain; however, that is why I said that it is indicative: a correlation is shown, which makes causality probable.
My rebuttal to the China Brain does not say that that something is necessary for consciousness, but that that something necessarily gives rise to consciousness.
The argument about our ability to know consciousness is interesting; however, it supports my point to some extent, as it admits the possibility that consciousness is created by an unknown physical factor.

Even if neurons are powered by neurotransmitters, there could be a technological replication of them.

Naturally, that does not change the fact that, through these methodologies, the mind can be understood with a certain degree of precision, so we know what cannot happen: sleep cannot be mental, since if it were, a different mental state could override the very need to sleep, and this has never been seen. The argument cannot be turned against me, because we have almost perfected psychotherapy techniques, while this is not the case with the chemicals related to consciousness. My opponent has given a reason why my point cannot be concluded here; however, neither can theirs.
The method by which the man gets to know Chinese is very simple: the man begins to identify the patterns in the characters, and with them the meanings of the words. By themselves, "if this, then that" orders are not enough for an entity to begin to be aware; the method for something to begin to be aware is to replicate the physical causes of consciousness. I appreciate that you have proposed this debate.