Microsoft has received a patent to turn you into a chatbot

Photo: Stan Honda (Getty Images)

What if the most significant measure of your life’s labor had nothing to do with your lived experiences, but merely with your unwitting generation of a realistic digital clone of yourself, a specimen of ancient man for the amusement of the people of the year 4500, long after you have departed this mortal coil? This is the least gruesome question posed by a recently granted Microsoft patent for an individualized chatbot.

First spotted by The Independent, the United States Patent and Trademark Office confirmed to Gizmodo via email that Microsoft is not yet permitted to make, use, or sell the technology, only to prevent others from doing so. The patent application was filed in 2017 but was approved just last month.

The hypothetical Chatbot You (imagined in detail here) would be trained on “social data,” which includes public posts, private messages, voice recordings, and video. It could take 2D or 3D form. It could be a “past or present entity”: a “friend, a relative, an acquaintance, [!] a celebrity, a fictional character, a historical figure” and, ominously, “a random entity.” (The last, we’d guess, might be a speaking version of the photorealistic, machine-generated portrait library ThisPersonDoesNotExist.) The technology could also allow you to record yourself at a “particular stage of life” in order to communicate with the young you in the future.

I personally delight in the fact that my chatbot would be useless thanks to my limited text vocabulary (“omg,” “OMG,” “OMG HAHAHAHA”), but the minds at Microsoft thought of that. The chatbot could form opinions you don’t have and answer questions you’ve never been asked. Or, in Microsoft’s words, “one or more conversational data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data.” Filler commentary could be guessed from crowdsourced data from people with similar interests and opinions, or from demographic information such as gender, education, marital status, and income level. It could imagine its take on an issue based on “crowd-based perceptions” of events. “Psychographic data” is on the list, too.

In sum, we’re looking at a Frankenstein’s monster of machine learning, one that revives the dead through unchecked, highly personal data collection.

“That’s horrifying,” Jennifer Rothman, a law professor at the University of Pennsylvania and author of The Right of Publicity: Privacy Reimagined for a Public World, told Gizmodo via email. If it’s any comfort, such a project sounds like legal agony. She predicted that the technology could attract disputes over the right of privacy, the right of publicity, defamation, the false light tort, trademark infringement, copyright infringement, and false endorsement, “just to name a few,” she said. (Arnold Schwarzenegger has charted the territory with this head.)

She went on:

It could also violate the biometric privacy laws in states, such as Illinois, that have them. Assuming that the collection and use of the data is consensual and people affirmatively opt in to creating a chatbot in their own image, the technology still raises concerns if such chatbots are not clearly marked as imitations. One can also imagine a host of abuses of the technology similar to those we see with deepfake technology, likely not what Microsoft would plan, but that can nevertheless be anticipated. Convincing but unauthorized chatbots could create national security concerns if, for example, a chatbot purported to speak on behalf of the president. And one can imagine unauthorized celebrity chatbots proliferating in ways that could be sexually or commercially exploitative.

Rothman noted that while we have lifelike puppets (deepfakes, for example), this patent is the first she’s seen that combines such technology with data harvested through social media. There are a number of ways Microsoft could mitigate these concerns, with varying degrees of realism and clear disclaimers. Embodiment as Clippy the paper clip, she said, could help.

It’s unclear what level of consent would be required to gather enough data for even the crudest digital waxwork, and Microsoft hasn’t shared any guidelines for potential user agreements. But additional laws likely to govern data collection (the California Consumer Privacy Act, the EU’s General Data Protection Regulation) could throw a wrench in chatbot creation. Then again, Clearview AI, which notoriously provides facial recognition software to law enforcement and private companies, is currently litigating its right to monetize its repository of billions of avatars scraped from public social media profiles without users’ consent.

Lori Andrews, a lawyer who has helped develop guidelines for the use of biotechnologies, imagined an army of rogue evil twins. “If I were running for office, the chatbot could say something racist as if it were me and dash my prospects for election,” she said. “The chatbot could gain access to various financial accounts or reset my passwords (based on compiled information such as a pet’s name or mother’s maiden name, which are often accessible on social media). A person could be misled or even harmed if their therapist took a two-week vacation, but a chatbot mimicking the therapist continued providing and billing for services without the patient’s knowledge of the switch.”

Hopefully this future never comes to pass, and Microsoft has acknowledged that the technology is creepy. When asked for comment, a spokesperson directed Gizmodo to a tweet from Tim O’Brien, General Manager of AI Programs at Microsoft: “I’m looking into this. The application date (April 2017) predates the AI ethics reviews we do today (I sit on the panel), and I’m not aware of any plans to build/ship (and yes, it’s disturbing).”
