lp, a long post on singularity

clever

 

sometimes i wish i lived among robots. the stupidity of humans, as a collective or as individuals, is often so overwhelming that it leaves one speechless. but then again, as an artist i find our stupidity to be very inspiring. i wonder what kind of art i would create if i were living among robotic ‘gods’. would they perceive it at all, cause it’s made by a plain human being – weak, prone to illness and death, the inferior one? i also wonder, would those ‘gods’ at least laugh at monty python’s jokes that mock our human behavior? or, would they even laugh at all?

Jon Stewart / Samantha Bee, Comedy Central, sketch with Ray Kurzweil

SB – …so we’ll have miniature robots in our bodies?

RK – That’s right.

SB – We’re gonna become perfect.

RK – Well, how would you define perfection?

SB – The ability to blowjob with your eyes.

RK – ?…

 

i’ve reread ray kurzweil’s book ‘the age of spiritual machines’, published in 1999. the first time i read it i was inclined towards the notion of ‘spiritual’ robots cause it went in line with my all-time favorite movie blade runner. however, in the 15 years since its publication, the rapid exponential growth of information technology and the arrival of new technologies have made me very skeptical towards A.I. i cannot help asking myself: is artificial intelligence going to turn us into zombies and lead us to the greatest catastrophe humanity has ever experienced?

mr. kurzweil and the rest of the A.I. supporters talk about eventually transcending humanity. they view the singularity as the point at which machine intelligence begins to amend and improve itself:

“Anyone who is gonna be resisting this progress forward is gonna be resisting evolution and fundamentally they’ll die out.” Peter Diamandis

non-worshipers of A.I. state the following:

“Machine’s intelligence improves, improves, improves until we get to a point where, well, it assumes control. The singularity is the point where humans lose control.” Kevin Warwick, professor of cybernetics, University of Reading

A.I. fans refuse to accept the ‘imperfection’ of our physical bodies. ray calls it ‘the tragedy of illness and death’ and considers mortality to be a sickness:

“I have no great respect for biological body.”…“There’s nothing in our biological bodies or brain that we won’t be able to recreate and in fact enhance. We’ll create A.I. that are real people.”

 

ray’s book is provocatively backed up with scientific facts and poses a challenge to debate him. a few scientists made excellent remarks concerning this issue:

Dr. William B. Hurlbut, neuroscience professor, Stanford University

“There are no bad genes or good genes; there is a balance in genetics. We can do a lot of foolish things trying to alter human beings to improve them, and the result of that might be tragedy. Ray is a very interesting person, entertaining, a kind of a visionary. He’s not a biologist, however. And I think, working as a biologist, he would be more moderate in his extrapolations of the uses of our technology. Engineering a better human being is going to be a daunting task. We’ve had 5 million years of field testing, and that has filtered down to the existence of an organism that is attuned to a range of environments and a range of talents and a range of possibilities. To upset that balance by exaggerating some feature is going to cost us something too. We shouldn’t just arrogantly think we’ve transcended the wisdom of thousands of years of human experience.”

“Death is not conquered physically. Death is conquered spiritually.”

 

Hugo de Garis, Professor, Xiamen University, China

“Where am I critical of Ray’s point of view? Well, I think he’s a bit naive in the sense that he doesn’t give enough consideration to the possible negative consequences of these developments. …his reason for living is to create inventions that help humanity. His raison d’être. So for him to hear somebody like me saying these inventions may end up causing the worst war humanity has ever had freaks him out. He doesn’t want to hear such things.”

“I’m known for the concept of the ARTILECT (artificial intellect).”

ARTILECT WAR theory: “These machines might, for whatever reason, wipe out humanity. There’s always that risk. Consider the analogy of the way we as human beings look toward ants or mosquitoes as pests. We kill them and we don’t give a damn, because we feel we’re so superior to them, they are so inferior to us. So who’s to say that the ARTILECT, which then becomes trillions of trillions above human capacities, may not look upon us in the same way. We could never be sure. I predict there will be a major war in the late 21st century between two human groups…The one way to assure that the risk is zero is to never build one in the first place…But for the second group it’ll be a sort of religion to build these things cause they’ll be god-like. So you’ve got here the source of a bitter conflict between these two human groups. Then, with late 21st century weapons, you’re talking about a major war that will not kill millions of people but billions. As a brain builder myself, am I prepared to risk the extinction of the human species for the sake of building an ARTILECT? Because that’s what it is coming down to. Yup.”

 

Dean Kamen, inventor

“I think the biggest implication of the singularity is that we don’t know the implications of the singularity.”…“When and if we reach the place where machines are more capable of the things we call thinking, the consequences of that – who is leading the world, which way it will be taken and how we relate to that – are hard to really understand.”

 

i’ve been writing this post on singularity for several weeks now, and yesterday i ran into a text that perfectly summarizes everything i was writing about. the author of the text is none other than an ‘(im)perfect’ human brain – mr. stephen hawking.

mr. hawking, whose life relies on technology, states:

“A.I. could be our worst mistake in history.” part of the text is here.

 

Human beings are champions at clinging to, worshiping and repeating the same mistakes. Einstein described that kind of behavior as insanity. Tesla also made a great remark: “One must be sane to think clearly, but one can think deeply and be quite insane.”

sane or insane, we should not be playing with fire unless we first master it and know how to extinguish it.