Are robots going to kill us all?

By The Wireless.

 

Must. Destroy. Humanity.

 


Photo: The Wireless/123rf

The Terminator was scary as hell.

The low-budget sci-fi film’s premise of an unstoppable killer robot relentlessly hunting someone down scared my pubescent self far more than creepy Japanese children or masked maniacs ever did.

Even 33 years after its release, the movie’s vision of a world where an artificially intelligent computer has turned on humanity and waged machine warfare still seems pretty absurd, right?

Well, not to the inspiration behind Robert Downey Jr’s Tony Stark in Iron Man – Elon Musk, the 46-year-old, Hollywood-star-dating billionaire entrepreneur behind Tesla and SpaceX.

Musk believes that robots will, most likely, eventually kill us all.

He is also convinced there is only a one-in-billions chance that we are not living in a computer simulation, but let’s set that aside for now.

In a speech in July, Musk warned: “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

Musk says robots are “a fundamental risk to the existence of civilization”, and is adamant they will eventually be better than humans at everything.

Remarkably, his statements have sparked a public dispute with another tech giant, Facebook founder Mark Zuckerberg.

In a recent live video, Zuckerberg branded Musk a “naysayer” drumming up doomsday scenarios. “In some ways, I actually think it is pretty irresponsible,” he said.

Musk fired back on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.”

Beat that, Taylor and Katy.

Two days ago, Microsoft founder Bill Gates weighed in, telling The Wall Street Journal that the end of the world is not nigh. “This is a case where Elon and I disagree… we shouldn’t panic about it.”

Aaron Stockdill, a Cantabrian working towards a PhD in computer science at the University of Cambridge, says we understand how to build intelligent systems well enough to stop them going on murderous rampages.

Stockdill says people shouldn’t be afraid of an intelligent robot reprogramming itself. If that ever happened, the more likely outcome is that it would optimize itself for laziness and apathy.

Steve McKinlay, who teaches digital ethics and artificial intelligence at the Wellington Institute of Technology, also dismisses Musk’s fears.

Photo: The Wireless/Max Towle


Elon Musk should stick to building battery-powered cars and rockets, he says; artificial intelligence is not an area Musk really understands.

In a fortnight, the former programmer will address the Royal Society of New Zealand on the ethical implications of artificial intelligence and big data.

But the big question remains: will robots one day wipe out humanity?

“Not in our lifetime,” he says, reassuringly.

The kind of artificial intelligence portrayed in films like Ex Machina and The Terminator – the killer robots – is nowhere near being developed, he says.

There may be viral videos of newly built, supposedly cutting-edge robots failing and crashing into things, he says, but that is not where the real danger lies.

The biggest danger, he says, lies in autonomous weapons systems, or “drone swarms”.

Drones are no longer just for shooting slick holiday videos or spying on the neighbors; militaries, including the US in its “War on Terror”, have been deploying them for years.

Swarms are drones that could, in theory, be manufactured cheaply using 3D printers. Coordinating themselves like a flock of birds, with no human intervention, they fly towards a target, which could simply be the most densely populated area they find. If a few are stopped, the rest carry on.
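
That “flock of birds” comparison points at decentralized coordination: each unit follows a few simple local rules, and swarm behaviour emerges without any central controller. Below is a minimal, purely illustrative Python sketch of the classic “boids” rules (separation, alignment, cohesion); the agent count, rule weights and neighbourhood radius are arbitrary assumptions, and nothing in it is specific to any real drone system.

```python
# Illustrative boids-style flocking: each agent steers using only its
# neighbours' positions and velocities -- there is no central controller.
import numpy as np

N, RADIUS, DT = 30, 5.0, 0.1          # agents, neighbourhood radius, time step
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0   # arbitrary rule weights (assumptions)

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(N, 2))   # 2-D positions
vel = rng.uniform(-1, 1, size=(N, 2))   # 2-D velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < RADIUS)          # local neighbours only
        if not mask.any():
            continue
        separation = -offsets[mask].mean(axis=0)      # steer away from crowding
        alignment = vel[mask].mean(axis=0) - vel[i]   # match neighbours' heading
        cohesion = offsets[mask].mean(axis=0)         # drift toward local centre
        new_vel[i] += DT * (W_SEP * separation + W_ALI * alignment + W_COH * cohesion)
    return pos + DT * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)

print("flock spread after 100 steps:", pos.std(axis=0))
```

Because the coordination is emergent rather than centrally directed, there is no single controller to jam or destroy, which is exactly why losing a few units does not stop the rest.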

Major global powers are actively pursuing swarm technology. Although relatively new, it is evolving rapidly.

Earlier this year, the BBC reported that their impact could rival that of the machine-gun: any side without a drone swarm of its own could face swift defeat on the battlefield.

McKinlay says the potential of drone swarms is horrifying.

Leading world powers are developing the technology with little genuine ethical consideration, he says, making a world armed with it an increasingly dark and dangerous place.

A flock of birds flies over a city.
Photo: Flickr/Olivier Bareau

Mary Wareham, a former advocacy director for Oxfam New Zealand, has spent years campaigning against problematic weapons. Now based in Washington DC with Human Rights Watch, she asks: “Should humans delegate the authority to select and strike a target to a machine?”

She recently criticized the New Zealand Government for failing to take a stand against lethal autonomous weapons and push for a ban. According to Stuff, 19 other countries had signed an open letter to the UN as part of the “Campaign to Stop Killer Robots”.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” the letter warns.

Among the signatories is Elon Musk.

The UN now plans to establish a panel of governmental experts who will try to “prevent an arms race in these weapons, protect civilians from their misuse, and avoid the destabilizing effects of these technologies”.

McKinlay also raises concerns about other technological advances, such as driverless cars.

Consider a scenario where a car must make a split-second choice: swerve onto the pavement and potentially harm pedestrians in order to protect its driver, or take a head-on collision that puts the driver’s life at risk. Who should have the authority to make design decisions like that?

Apple is introducing facial recognition technology in its new flagship iPhone X, meaning a fingerprint or six-digit code is no longer needed to unlock the phone.

Apple introduces Face ID.
Photo: Screenshot: Apple/CNET

Facial recognition technology is not new, but it is developing fast.

McKinlay acknowledges that facial recognition could make airport security checks quicker and more efficient.
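
In broad terms, systems like this convert a face image into a numerical “embedding” and decide whether two faces match by how close those embeddings are. Here is a minimal sketch of that matching step in Python; the embed_face placeholder, the example vectors and the 0.6 threshold are illustrative assumptions, not any vendor’s actual system.

```python
# Illustrative face matching by embedding distance.
# embed_face() stands in for a real face-embedding model (hypothetical here).
import numpy as np

MATCH_THRESHOLD = 0.6  # arbitrary cut-off; real systems tune this carefully

def embed_face(image) -> np.ndarray:
    """Placeholder: a real model maps a face image to a fixed-length vector."""
    raise NotImplementedError("plug in an actual face-embedding model")

def same_person(emb_a: np.ndarray, emb_b: np.ndarray) -> bool:
    """Declare a match if the L2-normalised embeddings are close enough."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.linalg.norm(a - b)) < MATCH_THRESHOLD

# Made-up vectors standing in for two photos of the same traveller:
enrolled = np.array([0.12, 0.80, 0.33, 0.45])
at_gate  = np.array([0.10, 0.78, 0.35, 0.44])
print(same_person(enrolled, at_gate))   # True: the vectors are nearly identical
```

A real deployment swaps the placeholder for a trained neural network and compares a live camera image against a database of stored embeddings in much the same way.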

However, the implications are also unsettling.

It was recently reported that researchers at Stanford University had built facial recognition software that, they claimed, could predict a person’s sexual orientation with startling accuracy.

The software, dubbed “gaydar” online, was reported to distinguish between gay and straight faces considerably more accurately than human judges could.

McKinlay asks what would happen if such technology fell into the hands of institutions that persecute particular groups.

Imagine if facial recognition were used to flag individuals as potential criminals or terrorists: the consequences of false identifications could be straight out of Minority Report.

McKinlay’s main concern, though, is machine learning.

Machine learning, a form of artificial intelligence, lets systems learn and improve from experience without being explicitly programmed.

Applications range from supermarkets predicting what shoppers will buy to government agencies estimating how long people will stay on a benefit.
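
“Learning from experience” here usually means fitting a statistical model to historical examples and then using it to score new cases. Here is a toy sketch of the supermarket example using scikit-learn, a widely used Python machine-learning library; the shopper features, labels and numbers are invented purely for illustration.

```python
# Toy example: predict whether a shopper will buy nappies this week,
# based on two invented features from their purchase history.
from sklearn.linear_model import LogisticRegression

# Each row: [store visits last month, dollars spent on baby products last month]
X_history = [
    [2,  0.0],
    [8, 45.0],
    [5,  0.0],
    [9, 60.0],
    [1,  5.0],
    [7, 30.0],
]
y_bought_nappies = [0, 1, 0, 1, 0, 1]   # what actually happened (invented labels)

model = LogisticRegression()
model.fit(X_history, y_bought_nappies)   # "learning from experience"

# Score a new shopper the model has never seen
new_shopper = [[6, 40.0]]
print(model.predict(new_shopper))         # predicted label, e.g. [1]
print(model.predict_proba(new_shopper))   # and the model's confidence
```

The benefit-duration example works in exactly the same way, just with far more consequential predictions.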

McKinlay is troubled by the scale of data collection that feeds these systems: what people do, what they spend and where they go is tracked, often with little real transparency in the terms and conditions they sign up to.

Beyond privacy, he says, there are deeper implications for democracy when government and private agencies use that data and those algorithms to predict how people will behave.

Those predictions can have a real impact on people’s lives, shaping decisions from what they are sold to whether they qualify for social welfare.

As for the jobs already being lost to automation, Stockdill points out that white-collar professions like accounting and management could be among the first to be replaced.

Some manual jobs will survive for now simply because intricate physical tasks are still cheaper for humans to do, but the roles likely to endure longest are those demanding creativity or complexity, like teaching and making art.

Although every occupation will feel the effects, the complete automation of the workforce is still many years away.

Stockdill accepts that the momentum of technological change cannot be reversed, and urges society to think carefully about, and help shape, the future that is coming.

Rather than fearing it, he encourages people to see technology as it has been throughout history: a catalyst for progress. Past clashes between society and technology – like the Luddite resistance to weaving machines or Socrates’ skepticism about the written word – have never stopped society moving forward.

Aaron Stockdill.
Photo: Eureka! Trust

McKinlay and Musk are both optimistic about artificial intelligence’s potential benefits, in areas like medical breakthroughs, better urban transport and resource management.

Both also call for proactive regulation to keep pace with the speed of change, and stress that it will take a collaborative effort.

Musk, like a modern-day superhero, is investing heavily in AI himself, partly to keep an eye on where it is heading and to guard against the threats he warns about.

The need for responsible innovation is urgent, McKinlay says, recounting a sobering conversation with a technology expert about how quickly autonomous weapons technology is spreading.

Amid all the complexity, he stresses that ethics and accountability must be built into the design of the future that technology is shaping.