
# Robot


Consider this...

A scientist creates, in his lab, an artificial intelligence with true sentience. It learns, laughs, and can feel suffering.

The scientist, amazed and enchanted, creates many of these AI entities (as purely digital persons) on his server.

One day he realizes that his AI entities have started reproducing, creating new entities in a process analogous to human birth.

So he simply observes them, enthralled by their growth and interactions.

Soon the AIs develop societies and cultures; they form moralities and religions, and their numbers increase. One day, to his surprise, the AIs have even formed governments.

These AIs have a lifespan of about two months, but their time perception is very fast, so they can fit into that two-month lifespan what we experience in our eighty or so years.

Here are the questions for you.

1. If the scientist were to turn off the server, it would "kill" every sentient AI "person" on it. Would it be immoral for him to do so?

2. If the scientist decided to experiment on a few of his AI entities in a way that caused them great suffering, would that be immoral?

3. If the scientist decided to give his AIs some "moral" laws, one of which was "Do not damage the server," would that "moral" law be any different from the "moral" laws the AIs have developed themselves?

4. If a few of the AIs develop weapons and begin using them to kill other AI entities, is the scientist morally obligated to stop them?

5. If your answer to question #1 is "yes," please tell us as precisely as you can whether it is the AIs' sentience or their ability to feel suffering that more strongly obligates the scientist to keep them "alive."