Sunday, September 6, 2009

How I got to thinking about human nature, and the future of science.

 

Recently I finished reading the book “Radical Evolution” by Joel Garreau. At some point while reading it, I also watched the movie “Watchmen”, and I was surprised to find that both covered essentially the same topic of human nature, albeit from very different angles.


First, about the book. Right off the bat the author tells you that it’s not a book about future technology, but rather about how, or if, this new technology will fundamentally change human nature. Garreau lays out three scenarios for how the future may unfold given the current pace of technology: the Heaven scenario, the Hell scenario and the Prevail scenario.

The Heaven scenario takes the stance that Ray Kurzweil has championed: a relatively near future where robots live to do our bidding, where we have engineered our bodies through both machine interfaces and genetics to become essentially perfect, and where we can all interact with each other through the internet in our brains. People can live out seemingly real situations in virtual reality through direct stimulation of the brain. I found these expectations awesome, though actually kind of eerie, which is saying a lot since I am usually all for that kind of thing.

On the flip side, you have Bill Joy (the co-founder of Sun Microsystems, whose work on BSD Unix helped build the early internet). In his essay entitled “Why the future doesn’t need us”, Joy quotes a passage from the manifesto of the Unabomber (who was also quite scared of the future), which talks about how more and more decisions will be handed over to computers, until it gets to the point that we will need computers to make the decisions, and then to the point where the decisions will become so complicated that humans won’t be able to make them any more, and to “turn off” the computers would be essentially suicide. At that point we will essentially have become slaves to the machine: sure, we could be happy, but definitely not free. Sort of like a pet. I guess this struck a chord with Joy, as he has become a major voice warning of a technological doomsday. He predicts that the human race will fall victim to some sort of technological mishap, be it genetic engineering gone wrong, a biological weapon or the nightmarish “grey goo”. Some of his arguments are actually quite compelling.

The third scenario is largely detailed by Jaron Zepel Lanier, the guy who pioneered virtual reality. Lanier claims that it is unlikely we will see a singularity, either for good or bad, as the other guys predict. Rather, he thinks that humans will continue to adapt along with technology, with the end goal always being to find better ways to connect with one another. He says that humanity has a history of overcoming seemingly overwhelming odds to prevail and adapt to new situations, and he thinks that our interaction with new forms of technology will be no different.

What is disconcerting is that all of the guys who lay out these scenarios believe theirs to be essentially inevitable, and they agree that the human species is going to be changed almost beyond recognition, and soon, no less. Like, 30 years soon. As to how, I guess that is up for debate.

Overall I really enjoyed the book and found all of it quite interesting, and a lot of it thought-provoking. It does a pretty good job of staying relatively neutral between the scenarios, so you get a fairly balanced view of all three viewpoints. I would recommend this book to anyone who wonders what the future may be like. It will give you plenty to think about.

So that’s the book.

As for Watchmen, I reckon that most people already know the premise, so I will give just a brief recap. The story takes place in a politically unstable world where people are increasingly beginning to distrust the costumed crime fighters, because the heroes don’t seem to have to answer to anyone for anything. It shows heroes who have taken protecting people to the extreme: at one point one of them fires into a crowd of protesters just to “protect them from themselves”. The movie gets to a point (spoiler alert) where one of the heroes sets off a bunch of explosions in all the major cities of the world, making it look like an attack from a single super villain. He does this with the plan that it will unite the world against a single common enemy and avoid an imminent nuclear war: “to sacrifice millions to save billions”. The crux of the movie is essentially this: if you are in a position of power, is it better to force humanity into a perhaps better and safer way of life, or is it better to let humanity, with all its flaws, struggle through on its own, even if the end result could be disastrous?

This point has caused me to do a lot of thinking, particularly because I can’t seem to figure out what the better answer is. After finishing Radical Evolution, I realized that this question was also an underlying theme throughout the book. For instance, if we are at a point in human history where we can use technology to drastically alter ourselves and our environment, what changes should we make? What things should we engineer out of society? Dependence on fossil fuels? Disease? What about things like expanding our memory storage by linking our brains to the internet? Or what about altering our metabolisms so we hardly need to eat anything? Surely these things would be beneficial to humanity as a whole. Would it not be the responsible thing to implement these technologies for everyone as soon as we can, to avoid things like global warming, disease and famine, even if people didn’t want to adopt the technology? Or rather, should we avoid these changes and just let humanity struggle through without the technology?

Moreover, according to the guys in Radical Evolution, we are nearing a point where we will be able to create designer babies: babies for whom we can select the traits we think would be most valuable. Should we force parents to choose things like intelligence and pass over things like aggression? I guess the point I’m trying to make is that it is quite possible we are at a point in time where we will be making drastic choices regarding what we think will be most beneficial to the human species in the future. Do we have the insight to select the “right” traits and technologies for the betterment of future humans, or should we avoid these technologies and just let things keep happening the way they are happening, even if it means the end of our species due to some disease or environmental disaster? It seems like a lot of responsibility, and like I said before, I have no idea what the right answer is.

2 comments:

  1. I also have no answer, but I do think that no matter what we do, there is one thing that is very important: that we keep this type of thing out in the open where we can all see it and talk about it. Trying to prevent it from happening will just drive it into exactly the places where we don't want it to go.

  2. For another piece of related fiction, check out Brave New World by Huxley. Not as outright terrifying as 1984, but pretty disturbing.

    Also, didn't Al Gore invent the internet?
