
Fact and fiction: Is a scenario like The Terminator really the future?

Are our fears about robots justified?

Updated at 6pm

THERE HAS NEVER been a point in human history where our lives are so entwined with technology and data, but with that entanglement has come increased fear.

Yet aside from the problems of data and tracking (which is a whole other story), our view of robotics and Artificial Intelligence (AI) is one that’s both skeptical and fearful.

It doesn’t help matters when you hear that the likes of Google and Boston Dynamics (a company that has held contracts with the US military) are developing their own robots. The potential drawbacks prompted a number of prominent figures in tech and science to sign an open letter aimed at preventing a global arms race in AI-powered weapons.

When such fears emerge, it’s hard not to imagine a dystopian, Terminator-style future.


Yet the current reality of both robotics and AI is far less dramatic.


To put it in blunt terms, robots are dumb. Asking one to complete a task like opening a door takes ages, and that’s only if it succeeds at all. If it falls over, it will have serious difficulty getting back up by itself, and so on.

Yet with every new technology that emerges, there will always be the fear that this is the one that seals our fate, that it will turn against us sooner or later.

Dr. Ken Ford has worked in the field of AI and robotics for decades, including with NASA, and is now at the Institute for Human and Machine Cognition (IHMC) in Florida.

As you would expect from someone in the industry, he has a more optimistic view of how robotics and AI will develop. Speaking to TheJournal.ie, he argues that this isn’t a question of whether robots will become good or evil, but of how they’re used by people.

“There’s no doubt that any really advanced technology, or even really simple technology… can be used for good or bad purposes”, says Dr. Ford.

My complaint isn’t with the notion of the misuse of AI – that’s something that people should be concerned about – but that’s less an argument against the science of AI and more an argument of human wisdom, judgement and nature. So that’s really a separate argument of what we think about humans and our judgement.

Still, even if we accept that to be true, there will always be the fear that these machines will inevitably be used for war rather than for improving society or our lives. Dr. Ford acknowledges that this is already happening, but says the technology is used far more for peaceful purposes, operating quietly in our day-to-day lives.

That’s not to say he doesn’t believe such concerns are valid, or that we shouldn’t have discussions about their impact, only that we shouldn’t assume work on robotics is, in itself, a bad thing.

I think it’s sensible to be concerned about personal liberties or whether some organisations might use AI to more effectively invade our privacy but the same discussion could be made about genetics research… it’s not the inherent science that’s the issue, it’s our wisdom of which we as a society employ it.

Also, the concept of artificial creation has been around far longer than you would think.

While The Terminator and HAL from 2001: A Space Odyssey are usually the first to be referenced, you could go back even further to stories like Frankenstein’s monster, or to the three laws of robotics, an idea created by science fiction writer Isaac Asimov.

Stories like that do shape our general perception of new technologies in some form and it’s why Skynet and similar pop culture references emerge whenever a robot appears in the news (we’re not exactly innocent of those references ourselves).

“Most people’s ideas about AI [and robots]… are informed largely by sci-fi and there are no other ideas about it”, says Dr. Ford. “Our cultural memes come from science fiction so it’s really bizarre”.

If you think about HAL or all the others, the hazard associated with them wasn’t their great intelligence or the artificialness, whatever that means, of their intelligence, it was due to their humanity. HAL had paranoia and numerous other unfortunate human traits that we would have to go to great lengths to build.

And that’s something people might not realise initially when hearing about robotics. While great strides have been made in recent years, we’re still in the very basic stages.

One example is IHMC’s own robot, Running Man, which took part in the DARPA Robotics Challenge earlier this year. The challenge was motivated by disasters like the Fukushima nuclear reactor meltdown and the Deepwater Horizon oil spill.

The key goal is to create a dexterous mobile robot that could move through disaster zones and perform useful tasks with minimal guidance and input from remote human operators.

Running Man came second in the competition, but watching a timelapse of its progress shows just how long it takes for a robot to carry out even the most rudimentary of tasks.

(Timelapse video: Jiggy Bot / YouTube)

A team of 30 worked on Running Man over three years, with IHMC focusing on the software while the hardware came from Boston Dynamics and another robotics firm, Carnegie Robotics.

It still takes a very long time for a robot to carry out an action, but Ford doesn’t see a world where robots work independently of humans. Instead, he looks at how people and robots can work together, and that includes demystifying the concept of AI itself.

Inevitably, robotics and AI will reach a point where they have a significant impact on our lives. Before then, we will need to have serious discussions about the benefits and repercussions they will bring, especially since it’s people, not robots, that will determine their future.

“I think the thing that confuses people is the old-fashioned story about AI”, says Dr. Ford. “The Turing Test story is one of building an artificial human. The name AI itself is singularly a poor choice to name something technical [as] it implies it’s not intelligence but it’s some artificial form of a human.”

As soon as people realise it’s not about building artificial humans with all of our good and bad points and all of our foibles, it’s about enabling those things humans do, then you get a different view. I’m not suggesting there’s no hazard associated with AI but they’re not associated with the science itself. [Instead, it's] how we choose to apply it.

Dr. Ford was speaking at the George Boole Bicentenary Celebration which took place in UCC this weekend.
