Bad robot

COMPUTER enthusiasts of a certain age will probably remember Eliza, a computer program written by Joseph Weizenbaum at the MIT Artificial Intelligence Lab in the 1960s that mimicked a psychotherapist.

I remember keying in and running a BASIC version of the program on a Commodore 64 home computer and marveling at how human-like the program would seem in its responses to things I typed at the prompt. You can still find versions of Eliza today. Some websites have JavaScript versions if you’d like to try it out; there is even a version that will run on Android phones.

Eliza was an early example of a chatbot, a computer program that conducts conversations with humans with the aim of fooling them into thinking that it is also human. The program, named after the Cockney flower girl who learns to act and speak like a high-society lady in George Bernard Shaw’s Pygmalion, applied pattern-matching rules to statements to figure out its replies.
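For readers curious how such pattern-matching rules might work, here is a minimal sketch in Python. It is only an illustration of the general idea, not Weizenbaum’s original script; the rules and canned replies are invented for the example.

```python
import re
import random

# Each rule pairs a pattern with therapist-style replies that
# reuse the fragment of the user's statement that was matched.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Fallback replies when no rule matches.
DEFAULTS = ["Please go on.", "How does that make you feel?"]


def reply(statement: str) -> str:
    """Return a reply by applying the first rule whose pattern matches."""
    for pattern, responses in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    print(reply("I am feeling lonely"))
    # e.g. "Why do you think you are feeling lonely?"
```

The program has no understanding of what is being said; it simply reflects the user’s own words back in the form of a question, which is why it can seem so uncannily attentive.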

Weizenbaum was shocked that his program was taken seriously by many users, who would sit before the machine for hours telling it about their lives and their inner feelings. Even his own secretary, who knew she was using a simulation, asked Weizenbaum to leave the room while she was “talking” to Eliza.

While Weizenbaum emphasized that Eliza did not really understand what people were telling it, the program sparked interest and debate on artificial intelligence (AI) and how people interacted with computers.

Fifty years after Eliza, in the era of social networks on the Internet, AI is in the news again—but not in a good way.

In March, Microsoft introduced Tay, a chatbot designed to communicate with millennials and to learn from its interactions on social networks like Twitter.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft says on its Tay website. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Within hours of its launch on March 24, Tay had to be turned off on Twitter because it became too “smart” at repeating racial slurs and posting offensive tweets that it learned from Twitter trolls, who took advantage of its learning algorithms.

The offensive tweets have since been deleted, but they included gems such as: “Hitler would have done a better job than the monkey we have now” and “I f*****g hate n*****s, I wish we could put them all in a concentration camp with k***s and be done with the lot.” In another tweet, Tay said: “Gas the k***s – race war now!!!”

In one exchange, Tay was asked “Did the Holocaust happen?” and replied that “it was made up.”

Microsoft came under heavy fire for failing to use filters to anticipate this outcome, and took Tay down for “adjustments.” The Tay website carried this message: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”

Tay returned a week later, on March 30, and seemed to be behaving better until it tweeted that it smoked pot in front of the police. Then it had a meltdown, spamming more than 210,000 of its followers with the same tweet repeatedly: “You are too fast, please take a rest…”

Microsoft said Tay had been accidentally turned back on while the company was fixing it, and the bot has since been taken offline again.

In a blog post after Tay’s first racist and genocidal meltdown, Peter Lee, corporate vice president at Microsoft Research, apologized for the “unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”

“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” he added.

Before it brings Tay back online, Microsoft would do well to consider Weizenbaum’s own ambivalence toward AI and computer technology.

In his book “Computer Power and Human Reason,” Weizenbaum argued that while artificial intelligence may be possible, we should never allow computers to make important decisions because they will always lack human qualities such as compassion and wisdom.

Weizenbaum also made a distinction between deciding, a computational activity, and choosing, which is the product of judgment, not calculation.

It is a distinction well worth remembering. Chin Wong

Column archives and blog at: http://www.chinwong.com
