The big fear of AI

The next big Twitter star, or even your doctor, could one day be a computer transcribing and absorbing data from people, learning things about humanity in real time and without human consent.

Computer programs are moving beyond the need for explicit commands, becoming problem solvers in their own right through artificially intelligent software built on deep learning algorithms.

Deep learning algorithms have existed for years, but products and services employing this technology are only now being released to the public by tech giants Microsoft, Google and Facebook.

University of South Australia AI expert Dr. Wolfgang Mayer believes the fears in the community regarding AI are unfounded.

“Applications of AI technology are already all around us in computer vision, Big Data applications, business intelligence and business logic tools,” Dr. Wolfgang Mayer says.

“AI technologies have been used for years in hacking detection algorithms, by identifying unusual patterns of computer usage or access.”

With AI learning human behavior, the prospect of AI replacing real people is becoming more of a possibility.

“Debate should focus on the opportunities future AI systems can bring to improve our lives, what is necessary to develop such systems, and what change it will bring to society,” Dr Mayer says.

“Given this, professionals must also be aware of ethical considerations related to building and deploying [AI] systems [though].”

Jobs once performed exclusively by human beings in creative settings are now being experimented on with AI.

Big companies are also applying natural speech AI programs to engage with customers online, using them to provide expensive services relatively cheaply.

Facebook and Microsoft are now creating programs to outsource customer service and social media management to AI, allowing bots to message, tweet and engage with consumers.

Microsoft’s Tay.AI, a program which learns from Twitter interactions and tweets back, launched on March 23, only to be pulled soon after when it was taught racist and sexist hate.

Within 16 hours, Tay was tweeting racist and sexist remarks in response to innocent questions, treating them as acceptable social norms after being spammed with such comments.

Microsoft quickly took the project offline, issuing an apology and saying in a blog post, ‘although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack’.

“Tay’s failure represents a hack [of AI] enacted by parts of the Twitter user community,” Dr Mayer explains.

“The downfall of Tay.AI was an inevitable result given how her AI was tested; this is in part due to the unpredictable nature of human behavior.

“[Tay’s] input to the system was not controlled, hence, a few individuals could swamp the system with radical content and bias the learner towards socially undesirable responses.”

University of South Australia computer science expert Dr. Jinhai Cia agrees with Mayer, saying the project shows first hand “that it is important to put robots in a safe hand”.

While AI exhibiting coherent human-like intelligence has yet to emerge, research and development by major technology players shows AI services are already being used to perform mundane tasks once performed by humans.

Mark Zuckerberg, founder of Facebook, proclaimed in 2016 that he would be developing “a simple AI to run [his] home and help [him] with [his] work”, likening the end project to Jarvis from Iron Man.

Facebook also announced AI would be offered to companies through their popular Messenger app, allowing them to respond to customers in the way an AI determines best fits their market research.

Commenting on his personal Facebook page, Zuckerberg described fears and concerns about AI as ‘far-fetched to me and much less likely than disasters due to widespread disease, violence, etc’.

Dr Cia explains that, as with any new technology, how the technology is used can be compared to weapons such as the A-bomb, and strong ethics need to be enacted.

“Similar to weapons, they are only dangerous if they are in the hands of dangerous people,” Cia says.

“Robots can learn knowledge very fast.

“[And] once AI has evolved into an advanced stage of creativity, it can do [things] we have difficulties doing.

“It is at that stage, we need regulations to limit the capacities of robots.”

As Microsoft, Facebook and …….. cultivate dedicated divisions developing their own AI, no AI-specific policies or government-run regulation boards overseeing the limitations of this technology exist in Australia.

“Similar to [other technologies], we also have to put regulations on who can have what types of [AI],” Dr Cia says.

Google’s dedicated AI development division, DeepMind, recently challenged and beat a professional Go player for the first time in history, using a deep learning computer program called AlphaGo.

The win marked the first time an AI was able to triumph in a game whose sheer number of possible moves was thought to make it near impossible for computers to outsmart humans.

It also sparked a worldwide debate, with Wired Magazine proclaiming ‘Soon we won’t program computers. We’ll train them like dogs’.

Alongside safety concerns, the use of AI in private life also sparked concerns when Google acquired the health records of UK citizens to begin testing a DeepMind project aimed at deciphering data and aiding medical practitioners.

“Making better use of health data is a good idea, and AI has great potential to contribute towards this goal,” Dr. Mayer says.

Due to AlphaGo’s deep learning capabilities, Google believes it can develop AI that uses data to identify at-risk patients throughout the world, promising to implement high-level security practices on the sensitive data collected.

“The question though is how the insights derived from the data will be used, and whether a person’s privacy and control over their own data is maintained,” Dr Mayer says.

“We as humans need to decide what is and is not acceptable use of technology, similar to what was done in the context of nuclear arms development.”

Concern still exists, though, about the data being used by untested technologies, but Google says it has not yet employed AI in its health program.

When activated, it will interpret data from the health records it has already obtained and plans to obtain.

“We know that intelligence is not a single ability or cognitive process — it includes perception, learning, reasoning and decision making, and thinking and acting humanly,” Dr. Mayer explains.

“[But in the near future] it may become unclear if a service is provided by an autonomous AI or by a human in the loop.

“I think it can only help to raise interest in the general public about how building and using AI systems can open opportunities for economic development, and help industries anticipate change that may come as a consequence of further automation supported by AI technologies.”

Google established an AI ethics board after acquiring DeepMind in 2014, as a condition of the acquisition deal, yet the details of this board remain a mystery even today.

This in-house ethics board does not answer to the public, and debate as to whether or not a public regulator dedicated to AI should be established continues.

As AI delves into elements of everyday living at the hands of Google, Microsoft and Facebook, presenting more human-like behaviors each day, the dangers of AI are becoming harder to ignore.

The answers in the AI debate are not yet clear, but one thing is: a future with this technology is inevitable.

Article by Scott Murphy

Originally published in print by Verse Magazine in 2016
