Is that you or a virtual you? Are chatbots too real?

In a recent episode of HBO’s Silicon Valley, Pied Piper network engineer Bertram Gilfoyle (played by Martin Starr) creates a chatbot he calls “Son of Anton,” which automatically interacts with other employees on the company network, posing as Gilfoyle.

For a while, Pied Piper developer Dinesh Chugtai (played by Kumail Nanjiani) chats with the bot, until during one exchange he spots Gilfoyle standing nearby, away from his computer. Upon discovering he’s been chatting with AI, Dinesh is angry. But then he asks to use “Son of Anton” to automate his own interactions with an annoying employee.

Like Dinesh, we hate the idea of being fooled into interacting with software impersonating a person. But also like Dinesh, we may fall in love with the idea of having software that interacts as us so we don’t have to do it ourselves.

We’re on the brink of confronting AI that impersonates a person. Right now, AI that talks or chats can be categorized in the following way:

  1. interacts like a human, but identifies itself as AI
  2. poses as human, but not a specific person
  3. impersonates a specific person

What all three have in common, regardless of their pretenses to humanity, is that they all try to behave like humans. Even chatbots that identify themselves as software are increasingly designed to interact with the pace, tone and even flaws of human interaction.
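
To make the three categories concrete, here is a minimal sketch in Python. The names, messages and the DisclosureLevel type are hypothetical illustrations of the taxonomy above, not code from Google or any vendor mentioned in this column.

    from enum import Enum, auto

    class DisclosureLevel(Enum):
        """The three kinds of conversational AI described above."""
        DECLARED_AI = auto()           # interacts like a human, but identifies itself as AI
        GENERIC_HUMAN = auto()         # poses as human, but not a specific person
        IMPERSONATES_PERSON = auto()   # impersonates a specific, real person

    def open_conversation(bot_name: str, level: DisclosureLevel) -> str:
        """Return a bot's opening message, adding a disclosure only for the
        first category, the kind that identifies itself as AI."""
        if level is DisclosureLevel.DECLARED_AI:
            return f"Hi, I'm {bot_name}, an automated assistant. How can I help?"
        # The other two categories open the conversation as if a person typed it.
        return f"Hi, this is {bot_name}. How can I help?"

    print(open_conversation("Anton", DisclosureLevel.DECLARED_AI))
    print(open_conversation("Gilfoyle", DisclosureLevel.IMPERSONATES_PERSON))

The interesting question, as the rest of this column argues, is whether anyone will insist on that first branch once the other two become routine.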

I detailed in this space recently the subtle difference between Google’s two public implementations of its Duplex technology. Its use to answer calls made to a Google Pixel phone is the first kind of AI: it identifies itself to the caller as AI.

The other use of Duplex, the first one Google demonstrated in public, started out as the second kind. After a user initiated a restaurant reservation through the Google Assistant, Duplex would call the restaurant and interact as a person (though not any specific, living person) without identifying itself as AI. Now Google has added a vague disclosure to the beginning of the call.

In fact, this humanlike-but-disclosed approach is the main type used by the proliferating customer service chatbots from companies like Instabot, LivePerson, Imperson, Ada, LiveChat, HubSpot and Chatfuel. Chatbots have proved to be a boon for customer service and sales, and they all identify themselves as bots.

Gartner estimated last year that one-quarter of all customer service and support operations will integrate AI chatbots by next year, up from less than two percent in 2017.

AI chatbots are everywhere (and anyone)

When we think of “customer service,” we think of calling on the phone specifically for help of some kind. But, increasingly, this interaction happens through websites and apps as reminders or notifications. The Uber app notifies you that your car is arriving. Airline apps let you know about changes to your flight. It’s generally left up to the customer to guess whether the interaction is coming from a human or a machine.

Does anybody care if they’re talking or chatting with a human or machine? And if they do, will they care in a few years after everyone is more accustomed to AI-based interaction?

In surveys, people will say that they’d rather speak to a human than a bot. But researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin found that interactions with chatbots are most successful if the chatbot impersonates a human. In the research, published in the journal Nature Machine Intelligence, the goal was for chatbots to earn cooperation from humans. When the people thought the bots were human, they were more likely to cooperate.

The researchers’ conclusion: “Help desks run by bots, for example, may be able to provide assistance more rapidly and efficiently if they are allowed to masquerade as humans.”

In other words, because people are less likely to cooperate with chatbots, the best way forward is for chatbots to impersonate humans and not identify themselves as AI.

Android founder Andy Rubin agrees. Now CEO of phone maker Essential Products, he’s been working on a tall, skinny smartphone code-named Gem. Critics blasted the phone’s design, suggesting that the screen is too skinny. But according to reports, the whole purpose of the phone is to use AI so the phone does things, including communication, on behalf of the user. The user would interact with the phone mainly through voice commands, according to comments Rubin made to the press last year. And an AI chatbot would automatically reply to emails and text messages on behalf of the user. He told Bloomberg that the agent would be a “virtual version of you.”
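
Rubin hasn’t published how Gem’s assistant would work, but the idea of a “virtual you” answering messages can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the generate_reply function stands in for whatever AI model such a phone would actually use, and nothing here describes Essential’s implementation.

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        text: str

    # Hypothetical hint describing how the owner writes; a real system would feed
    # something like this to its language model.
    OWNER_STYLE = "short, casual, signs off with '-- M.'"

    def generate_reply(incoming: Message, style: str) -> str:
        """Stand-in for the AI model that would draft a reply in the owner's voice."""
        return "Got it, will get back to you soon. -- M."

    def reply_as_me(incoming: Message, disclose: bool = True) -> str:
        """Draft an automatic reply and, optionally, disclose that a machine wrote it,
        which is the choice this column is really about."""
        reply = generate_reply(incoming, OWNER_STYLE)
        if disclose:
            reply += "\n[Sent automatically by my assistant]"
        return reply

    print(reply_as_me(Message("Dinesh", "Lunch today?"), disclose=False))

Whether that disclose flag defaults to on or off is, in effect, what the California law discussed below tries to decide for us.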

It’s the stuff of Philip K. Dick or William Gibson novels: a “virtual agent” posing as a “virtual you” in “cyberspace.”

Lawmakers will have something to say about it. A California law that requires AI to identify itself as non-human in any interaction went into effect on July 1. But it’s likely this law applies only to companies with a “public-facing” chatbot, and not to individual users of technologies like Rubin’s “virtual version of you.”

The problem with the moral panic around AI disclosure

When asked whether they want AI to identify itself as non-human during interactions, most people say yes. People don’t like the idea of being “fooled” into interacting with a machine.

The problem is that machine-based communication isn’t binary. Machines help us communicate in all kinds of ways, from grammar checkers to out-of-office auto-replies, to AutoCorrect, to Google’s Smart Compose.

People already get messages from chatbots that don’t disclose their non-humanity for simple things like the status of a pizza delivery. We interact every day with increasingly sophisticated interactive voice response (IVR) systems whenever we call the bank or airline for customer service. And when we do reach a human, they’re often reading from an AI-generated script.

I believe that the moral panic — or, more accurately, the vague displeasure — around AI that impersonates humans is temporary.

A few years from now, it will be like cookie disclosures on websites. Europe, California and a few other political entities will mandate AI disclosures. But most users will find those disclosures an annoying waste of time.

The technology is here and will soon grow ubiquitous. We might be annoyed to learn that the person we’ve been yammering away with isn’t human. But we also might be thrilled to let chatbots interact on our behalf.

Either way, “Son of Anton” is coming.

Original article: https://www.idginsiderpro.com/article/3488819/is-that-you-or-a-virtual-you-are-chatbots-too-real.html#tk.rss_all
