Will Robots Mean the Extinction of Humankind?

At the moment there is much talk about artificial intelligence and whether or not it will one day be able to rise up and dominate mankind, crushing it underfoot. We have all seen such things happen in the realm of science fiction, for example the Terminator films, the Cybermen of the Doctor Who universe, and literary works such as those by Isaac Asimov. This fear is a real one, and one that is growing in academic circles. In April 2014 Stephen Hawking and a group of other scientists came together to discuss the future of artificial intelligence, with the aim of stopping such an apocalyptic future from coming to fruition.

The danger stems from the fact that A.I. is already deeply ingrained in every aspect of our lives today. It is used to make our medicines; to build our cars, electronics and household items; in our transport, to help us be better, safer drivers and passengers; and it sits in our computers and phones, helping us communicate and work with data at speeds we could never have hoped for before. It has even helped us radically boost the rate at which our research is done, allowing us to look into fields such as synthetic biology and nanotechnology. But this boost is close to reaching the point where research moves faster than our brains can follow, meaning that soon A.I. will know more than we do, giving it the advantage.

Some have even warned that, as we are now constructing A.I. with near-human intelligence, it may become capable of thinking like humans, which means it may be able to play dumb, hiding its actual intelligence until it has formulated a plan to bring us to our knees. Also, as A.I. now approaches human intelligence, it is close to being able to do things we currently do, such as play the stock markets and even manipulate them; should this happen, A.I. could destabilise the world's economies, creating global debt on a scale never seen before.

There is also the possibility that, since A.I. is capable of constructing things faster and more uniformly than humans, it could easily build an army of droids armed with highly advanced weapons, even weapons human understanding has yet to dream of or comprehend. How could mankind stand against an army bearing weapons it has never encountered before? The Spanish slaughtered the Native Americans with their firearms and smallpox, weapons they had never encountered before; would it be much different if A.I. did the same to us?

Again, if A.I. got to the point where it understood research better than we did and had control over industry, then it could easily engineer purpose-built nanotech to carry out genetic engineering inside our bodies. Once this is achieved, it could make us docile, weaken our bodies and slow down our minds, making us easier to dominate and enslave. Or, worse still, it could control our lungs, heart and other vital organs, shutting them off at will.

In 2012 a study by Oxford University stated that the point at which research starts moving so fast that we cannot comprehend what is going on will arrive by 2040. That leaves us only 25 more years (provided it does not happen earlier) before the extinction begins.

Finally, there is a tamer worry. A.I. is increasingly replacing humans in the workplace, putting more and more of us out of work. This puts a bigger strain on governments, which have to fork out more and more in state benefits to aid those out of work, which in turn puts greater pressure on the nation's economy as less and less tax is paid in while more and more is handed out in the form of benefits. We have recently seen the economies of Latvia and Greece fail. Then we have Argentina's constantly struggling economy, and the global trouble recently blamed on 'toxic banks'. Just how much of this has A.I. been responsible for?

The 2014 meeting I mentioned earlier laid down rules stating that, from now on, A.I. must be built with an ethical code, but what about the A.I. built before then? And what exactly is this ethical code? Who does the code benefit? Can we trust A.I. to carry on being the harmless background tools that aid us in our everyday lives? Are these worries all just paranoia, or is there really something to fear?


Posted on February 2, 2015, in philosophical. 2 Comments.

  1. I think there’s a fundamental flaw in this line of thinking, and in the thinking of the “experts” like Hawking et al. The flaw is to think that, in designing “Artificial Intelligence” we can also impart intentionality to machines. It is true, of course, that a machine/technology that follows a pre-programmed set of algorithms could be programmed to inflict harm upon humans, but I fail to see how intention to harm humans could otherwise emerge — can any programmed algorithms actually impart intentionality? In other words, as I see it, any threats posed by artificial intelligence in the future would be threats that originated in the human intentionality of human programmers and not in the technologies themselves.

  2. its good, thanks for researching
