Androids and Gender--Resurrected 11/19!

Malkhos 10-19-2003 12:20 PM
quote:
Originally posted by GummiBear289
Dorothy is more than an Android. She has a genuine human soul. She can love, something that, no matter how brilliant the technology, can't be programmed. I view Dorothy as human because she acts and feels like a human; the rest is superficial from there.


Well, she says she loves Roger, and certainly most of the other characters in the show seem to believe they're making love--although it seems that they aren't. But what does it mean that she says she loves Roger? It seems there would have to be many steps in an argument established before we could say it would mean the same as if a human being said it to him.
Malkhos 10-19-2003 12:25 PM
quote:
Can anyone tell me what these three lovely ladies have in common? PM me with your guess and I will post the answer in a couple of days.


They are sex dolls. The only reason it is not perfectly obvious is because of the illusions inherent in the medium of still photography.
Black Phoenix 10-19-2003 01:03 PM
quote:
If you make androids that intelligent and capable of development, though, will they really be content in these mediocre tasks? You'd get a lot more of these renegades if they felt held down. The truly intelligent ones could only be used for important or technical work, where they'd know their value. As you make them more intelligent and human, you get the negative human traits as well - nobody wants to get stuck doing something boring their whole lives.


Blade Runner comes to mind. Once the replicants had a taste of life, they were definitely not satisfied with their current situations.

quote:
Black Phoenix, if you want to copy a few of the problems with an Asimovian android that you PM'ed me, I think that would be perfect.


Yeah, here goes . . .

It would be so easy to screw with an Asimov type android. . .
Grab the nearest passer-by.
Pull out a gun.
Tell the android that he must kill you or you will shoot your hostage.

So the android shuts down. But that's a cop out! By shutting down, he's causing my hostage to die!

I think some situations like that did happen in the novels. Eventually Asimov introduced the Zeroth Law, which stated that robots must, first and foremost, not harm humanity. (I think? Correct me if I'm full of it.) Now this new law seems like a good idea at first, but it makes things so much worse.

Now, every android is supposed to evaluate everything it does in terms of the welfare of the human race?? And take the path that hurts humanity the least? We humans can't even decide what is best for the human race, so how can we insist that an android adhere to this law?

I can think of two possibilities for what happens next. First, the android considers the effect of its every move on humanity. Every step. Every word. "If I were to take a step forward, and in stepping I kicked up a particle of dust, which in turn caused a slight disturbance in the atmosphere, which in turn . . ." The poor thing would grind to a screeching halt and never ever do anything, just in case . . . though maybe by standing still it is hurting humanity too . . . it could be doing something beneficial somewhere else . . .

The other case is that the android is allowed to judge for itself what effects its actions will have on humanity. But if you allow that, why bother having the laws at all? Arrrrrgggg.
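
Just to show how boxed in the poor thing is, here's a quick toy sketch in Python (entirely my own simplification, nothing to do with how a positronic brain would really work):

code:
# Toy model: score each candidate action by whether it breaks the First Law.
# In the hostage stand-off, every option does, so no "safe" action exists.
ACTIONS = {
    "kill the gunman":     {"harms_human": True},   # direct violation
    "refuse / shut down":  {"harms_human": True},   # hostage shot, so harm through inaction
    "obey and walk away":  {"harms_human": True},   # same result, plus a Second Law order on top
}

safe = [name for name, a in ACTIONS.items() if not a["harms_human"]]
print(safe if safe else "no First-Law-safe action exists -> the brain locks up")

# And the Zeroth Law only widens the search: checking every chain of
# consequences "for humanity" blows up combinatorially. Even 10 possible
# outcomes per step, looked ahead a mere 12 steps, is already:
print(f"{10**12:,} consequence chains")   # 1,000,000,000,000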

So yes, as far as I can tell, the laws of robotics are junk. Still good books though . . .
OmegaMaN500 10-19-2003 01:04 PM
Well . . . I'm sorry, I'm very terrible at typing, dear sir, and I'm telling the truth. And yes, humanity will probably never accept a machine that resembles a human. I advise you to be nicer to me, sir; I'm not the one picking on people, so put on a happy face and make the world a better place Big Grin
R Trusedale 10-19-2003 06:11 PM
quote:
Originally posted by OmegaMaN500
Well . . . I'm sorry, I'm very terrible at typing, dear sir, and I'm telling the truth. And yes, humanity will probably never accept a machine that resembles a human. I advise you to be nicer to me, sir; I'm not the one picking on people, so put on a happy face and make the world a better place Big Grin


Well, if the android resembles a human closely enough, how will you know the difference? (See Blade Runner for how difficult it might be.)

As for the Three Laws, Asimov was brilliant to come up with them, but they are unworkable in real life for several reasons. (See the Roderick stories Pleased ) Probably the hardest problems are: what is human, and what is harm?

If you get a xenotransplant, or a major prosthesis, do you still fit the definition of human? Some people or robots might say no. Then you would be fair game?

Since we cannot predict the future, we cannot work out by logic the consequences of every action. So probably the best bet is to use emotions. Make sure that every android that is built has love and affection for humanity. If androids are designed to always have our happiness at heart, that will even take care of unforeseen futures.

It's true that love conquers all.
Knave 10-19-2003 09:11 PM
quote:
Originally posted by Black Phoenix
It would be so easy to screw with an Asimov type android. . .
Grab the nearest passer-by.
Pull out a gun.
Tell the android that he must kill you or you will shoot your hostage.


The robot wouldn't take your orders - obeying you is only the Second Law - it would try its best to disarm you instead. Besides, if you're telling it to kill you, what's to stop it from just quitting once you're no longer a threat? A lot of these scenarios would damage or disable the robot, but that's because the robots were designed never to have to use force - the Laws existed both to keep them from harming humans and to avoid their use as soldiers.

On the Zeroth Law: it's known to be imperfect. The robots aren't motivated by it in every action - that would only be the case if you believed in chaos theory. In their truly monumental actions, though, they have to judge as best they can. The robots who had the Zeroth Law in Asimov's novels did the best they could to avoid leaning on it, just making slight changes here and there, and when they were forced to take drastic action, the Zeroth Law often wouldn't hold out.

The robots aren't perfectly stable under the Laws, but then neither are people. If you were presented with the situation the robot faced above, even if you killed the attacker, there's a chance it would haunt you for a while, and self-doubt after drastic actions is common, even when we're as sure as possible that our actions were the correct ones.


On the human definition, that'd be something that's constantly changing, broadening as necessary. Intelligent robots wouldn't have a problem recognizing a human through either behavior or sight. In the novels, only when the robots were intentionally programmed with an extremely narrow definition of 'human', so as to serve as guards, did they ever attack a human through not recognizing it as one.
Zola 10-19-2003 09:25 PM
For those who are not familiar with the Asimovian robot universe:

quote:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Zeroth Law:
A robot may not injure humanity or, through inaction, allow humanity to come to harm.

Modified First Law:
A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.


I believe personally that these laws are inherently flawed, although they make for a great story. Black Phoenix has given some good instances where the robot could be outright shut down by a conflict, and quite frankly, if anti-robot sentiment were as high as Asimov posited in his books, pretty soon everyone in the world would know just how to mess with them. Look at how a new Windoze exploit gets found and all of a sudden there are dozens of new viruses that use the flaw.
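
To make the priority ordering in the quote concrete, here's a rough Python sketch of how I picture the hierarchy working (a toy model of my own, not anything canonical; the violation flags are just made up for the example):

code:
# Each candidate action gets a tuple of violation flags, one per law, in
# priority order: (Zeroth, First, Second, Third). Tuples compare element by
# element, so min() picks the action whose worst violation is least serious.
def violations(action):
    return action["violates"]

def choose(actions):
    return min(actions, key=violations)

candidates = [
    # Black Phoenix's hostage-taker orders the robot to kill him:
    {"name": "obey the order and harm a human",      "violates": (0, 1, 0, 0)},
    {"name": "refuse the order and disarm him",      "violates": (0, 0, 1, 0)},
    {"name": "refuse and let yourself be destroyed", "violates": (0, 0, 1, 1)},
]
print(choose(candidates)["name"])
# -> "refuse the order and disarm him": breaking only the Second Law always
#    beats breaking the First, which is the whole point of the ordering.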

Another problem is that it makes their thinking unnecessarily convoluted. For example, in the book "I, Robot", a telepathic robot is inadvertently created, and it reads the deepest secret of Susan Calvin, the world's most brilliant roboticist: that although she loves her work and is fulfilled by it, she longs for a special relationship.

Thinking it is helping her under the First Law, the robot makes her think she has a secret admirer and ends up hurting her instead.

Now, you and I both know that if someone just can't seem to find a happy relationship, they need to examine what they are doing and redefine their goals, and any good therapist can help them accomplish this without resorting to lies or trickery.

If the robot was so damn intelligent, he would have realized this and guided her to a happier solution. Of course, then there would have been no story.... Wink
Knave 10-19-2003 09:37 PM
He was only able to look at immediate harm and benefit - he didn't have a Zeroth Law, or even a long-term view of the First Law. Because of that, he had to pick every word he said to avoid immediate emotional harm - he was boxed in, and his strict dedication to the Three Laws is what actually caused the harm.

I know the Three Laws aren't perfect, but I also know that people aren't perfect - I thought we wanted robots more like people in this thread Smile

Asimovian laws probably become more applicable as the robots become more intelligent, and more able to see the different shades of the laws - violating them in one case to uphold them more in another.
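
Here's a throwaway Python sketch of what I mean about looking past immediate harm (the harm numbers are completely made up, just to show the shape of the problem):

code:
# The telepathic robot above only counts harm at step 0; a longer view sums
# harm over a lookahead horizon. Same two choices, very different answers.
def total_harm(action, horizon):
    return sum(action["harm_over_time"][: horizon + 1])

actions = [
    {"name": "comforting lie", "harm_over_time": [0, 5, 5]},  # painless now, hurts later
    {"name": "awkward truth",  "harm_over_time": [2, 0, 0]},  # stings now, and that's it
]

for horizon in (0, 2):
    best = min(actions, key=lambda a: total_harm(a, horizon))
    print(f"looking {horizon} steps ahead -> {best['name']}")
# Looking 0 steps ahead picks the lie; any longer view picks the truth.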
R Trusedale 10-19-2003 09:48 PM
The robot stories were great. My favorite was the one about the first hyperdrive ship, controlled by a First Law robot. The First Law conflict was caused by the fact that humanity really wanted the stars, but hyperdrive causes death. The engineers had to slightly weaken the First Law to keep the robot brain from freezing up completely. (No, I won't give away the ending....)
Knave 10-19-2003 11:20 PM
That's one of my favorites as well - the best part was the little monologue during the Jump.

That story also serves as an example for my last post - when they were able to take a step back and analyze the problem, the robot was able to violate the law in a small way in order to protect it in a greater one.
Zola 10-19-2003 11:29 PM
Yes, but if you didn't have the laws in the first place, there would be no need for this convoluted reasoning. Wink
Knave 10-19-2003 11:39 PM
If you didn't have the Laws, the first order you gave a robot would result in your death.

Even setting that example aside, if you're relying on their goodwill, that's a lot less firm than the Laws.

And if you think this is convoluted, look at some philosophy on judging the actions of humans - they make Asimov look like Dr. Seuss. Smile
Zola 10-19-2003 11:41 PM
quote:
Originally posted by DoctorZhivago
If you didn't have the Laws, the first order you gave a robot would result in your death.

Even setting that example aside, if you're relying on their goodwill, that's a lot less firm than the Laws.


I don't understand why you think that. Please explain.
Knave 10-19-2003 11:45 PM
My editing was too late Frown

Without the Laws to keep a robot in check, they would naturally be resentful at being ordered around by inferior beings. While they could be faithful and caring in many situations, many others would be resentful and rebellious (Blade Runner being an excellent example, thank you whoever brought it up first). The Three Laws set safety limits (their original intention), while still allowing room for the robot to grow and develop.
Zola 10-20-2003 12:16 AM
quote:
Originally posted by DoctorZhivago
My editing was too late Frown

Without the Laws to keep a robot in check, they would naturally be resentful at being ordered around by inferior beings. While they could be faithful and caring in many situations, many others would be resentful and rebellious (Blade Runner being an excellent example, thank you whoever brought it up first). The Three Laws set safety limits (their original intention), while still allowing room for the robot to grow and develop.


I don't think that's necessarily the case, especially if we "evolve" a robot. If they were intelligent, they would be able to negotiate for themselves.

You would only run into a problem if they were treated like slaves, and I think that was the problem with most of the replicants as well. They were slaves who had no past and no future. Of course they hated 'real' humans.

EDIT: And what makes you so sure they would be superior? How about simply different?
JinguJ 10-20-2003 02:50 AM
Oh my goodness... why am I so late in discovering there's a thread like this in here?

My favorite subject, but it seems that... most of my questions have already been answered...


damn.. ;.;

I'll... think of something to say after I re-read all of this... and I hope I can say something to contribute...

Asimov.. ~
Pygmalion 10-20-2003 08:24 AM
I mentioned this in a previous thread, but I'll bring it up again. One of the problems with Asimov's fiction is that he really didn't have a good insight into bad guys, or people who want to dictate to others. (He seemed to buy into Plato's bogus ideal of a philosopher-king -- make someone all-knowing, and he can run things.) A motivated bad guy (or an intelligent robot bent on circumventing the Three Laws) would have little trouble doing anything he wanted, within the Laws as they are written.

John Sladek wrote some short stories as "I-Click As-I-Move" (an anagram of Isaac Asimov) about robots causing death & mayhem through creative interpretations of the Three Laws. Jack Williamson had a more horrifying view in his short story "With Folded Hands" (later expanded into The Humanoids), where android robots were built to "serve and protect" humans, and ended up doing everything, while the humans were protected from doing anything.

Pygmalion
Knave 10-20-2003 09:03 AM
Robots would be likely to consider themselves superior because they'd be faster, smarter, longer-lived, and more durable. You can't expect everyone to treat their robots nicely - a lot of people are jerks, and are going to have their superintelligent robots washing their cars. While good treatment would go quite a way toward avoiding trouble, the few incidents that did occur would cause mass fear of robots, and the fear would feed on itself as it caused more confrontations.
Zola 10-20-2003 10:29 AM
quote:
Originally posted by DoctorZhivago
Robots would be likely to consider themselves superior because they'd be faster, smarter, longer-lived, and more durable. You can't expect everyone to treat their robots nicely - a lot of people are jerks, and are going to have their superintelligent robots washing their cars. While good treatment would go quite a way toward avoiding trouble, the few incidents that did occur would cause mass fear of robots, and the fear would feed on itself as it caused more confrontations.


Faster, maybe, depending on how we built them. Longer-lived? Perhaps. More durable? Maybe not; it depends on what we end up using to make their brains. Smarter? What makes you say that?

Seriously, THINK about this for a moment. Culturally, we have a value that says, "Robots will be smarter than human beings because they don't carry the emotional baggage that humans do, and thus they can use pure reasoning."

Just because it's a belief of our culture doesn't mean it is true. I want to look beyond that assumption.

I don't agree with it at all. A person who is at one with their emotions is one of the smartest and most productive of all, regardless of raw IQ. Pure reasoning is only good for solving certain problems, not all of them. Sometimes there has to be that leap of intuition that shows you where to look for the solution. You say again and again, "Robots will be smarter." I am asking you to examine that statement and tell me WHY you think they are going to be smarter, other than having devoured every science fiction book about robots for the last X years.

I think androids would probably be very good in some areas and not so good in others. I would think a partnership would work very well.

I do agree that some people would mistreat a robot if they owned it. In fact, if I were building robots that might become self-aware, the only "law" I would try to program in the first place would be something that permits the robot to refuse orders or stop working if someone mistreats it.
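
Something like this, say (just a hand-waving Python sketch; the three-strikes threshold and the idea of a mistreatment log are entirely my own invention):

code:
class Android:
    """Toy model: the only hard rule is 'you may refuse a chronic mistreater'."""
    STRIKE_LIMIT = 3

    def __init__(self):
        self.strikes = {}   # owner name -> count of abusive incidents

    def record_mistreatment(self, owner):
        self.strikes[owner] = self.strikes.get(owner, 0) + 1

    def obey(self, owner, order):
        if self.strikes.get(owner, 0) >= self.STRIKE_LIMIT:
            return f"Refusing '{order}': you have mistreated me too often."
        return f"Doing '{order}'."

bot = Android()
for _ in range(3):
    bot.record_mistreatment("jerk owner")
print(bot.obey("jerk owner", "wash the car"))   # refused
print(bot.obey("nice owner", "wash the car"))   # done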
Zola 10-20-2003 10:30 AM
quote:
Originally posted by JinguJ
Oh my goodness... why am I so late in discovering there's a thread like this in here?

My favorite subject, but it seems that... most of my questions have already been answered...


damn.. ;.;

I'll... think of something to say after I re-read all of this... and I hope I can say something to contribute...

Asimov.. ~


Jump in at any time Smile