The extent of genetic modification displayed in the world of Gattaca almost felt like humanity and technology had met at a point of agreement. I think this is most evident in the society's mode of occupational evaluation. Individuals are hired based on genetic promise rather than merit, undermining the idea of self-determination. This forces Vincent to resort to using the DNA of Jerome Morrow in order to be considered a hireable candidate. The movie makes it clear that one's fate is determined from the moment of birth, and that assuming the identity of another individual is the only way to bypass this system. While the system of the movie creates a scientifically ideal pool of workers, it discourages the idea that one can succeed through one's own achievements.
Gattaca: Vincent's resemblance to Jerome Morrow
I think that if humanity approaches genetic perfection, it's also reasonable to think that robotics may move in the direction of humanity. I was thinking about the potentially blurred line between humans and robots when I came up with the following question.
Let’s say you live in the relatively far-off future, and humanoid robots exist among humanity as functional bodies. Technology has developed to a point where robots resemble humans aesthetically, and respond as humans would both verbally and physically. They also have various human-like tendencies.
One of these human-like tendencies is their desire to survive dire situations.
However, these tendencies don't imply that robots have any moral agency. While they have been coded to respond to a multitude of practical scenarios, they don't share the same consciousness as non-artificial life, meaning there is no actual judgment or understanding of what they are responding to.
You're driving to work on a Thursday morning. It's 74 degrees outside, overcast with an 18 percent chance of precipitation. At an intersection you witness a collision between two cars. One of the cars explodes and the other bursts into flames (and will probably explode soon as well). All of the traffic at the intersection stops immediately. You see a body trying to crawl out of the car that didn't explode, and you rush toward it, concerned for the person's life. The person clearly won't be able to escape the vehicle on their own.
However, as you draw closer you realize that it isn't a person. It is actually a humanoid robot that was driving the vehicle. If the car explodes, the robot will certainly be incinerated, so, expressing its will to live, it pleads for help.
“Please help me!” it says.
Does your concern for this body’s life diminish because it is non-human, or do you still feel a moral obligation to rescue it? Do you rescue it?
I think that it’s an important question that we must consider as we move forward. How will we value artificial creation as it begins to become integrated with human society?
Something I found particularly interesting was a friend's response to this question. He pointed out that I had specified the robots have no moral agency; rather, they respond based on code, some pre-programmed reaction. In his opinion, humans are no different. How do we really know whether what we're saying is the product of something intrinsically special within us, or a product of social conditioning? After some thinking, I believe he has a point.
This video describes a general skepticism, and conversely, some optimism, about the shift in morality due to technological advancement. I particularly enjoyed the part where they described a self-driving car weighing lives against one another, which is a very utilitarian concept for a machine to implement.
(Presenting Week 7)