ChatFished: How to Lose Friends and Alienate People with AI (2023)



Inbox management can be mind-numbing. Surely you've wondered: Couldn't a robot do this?




Five hours is enough time to watch a Mets game. It's enough time to listen to the Spice Girls' “Spice” album (40 minutes), Paul Simon's “Paul Simon” album (42 minutes) and Gustav Mahler's Third Symphony (his longest). It's enough time to roast a chicken, text your friends that you've roasted a chicken, and get ready for an impromptu dinner.

Or you can spend it checking your email. Five hours is roughly how much time many workers spend on email each day. And another 90 minutes on the messaging platform Slack.

Workplace conversation, over email and Slack, is a strange thing: it can be the most enjoyable and human part of the workday. But managing your inbox can also be so mind-numbing that you might wonder, couldn't a robot do this?

At the end of April, I decided to see what it would be like to let artificial intelligence into my work life, so I ran an experiment. For a week, I would route all my work communication (emails, Slack messages, pitches, follow-ups with sources) through ChatGPT, the A.I. language model from the research lab OpenAI. I didn't tell my colleagues until the end of the week (except in a few moments of personal weakness). I downloaded a Chrome extension that composed email responses directly in my inbox, but more often than not I ended up writing verbose prompts in ChatGPT, asking it to be witty or formal depending on the situation.

The result was a roller coaster, emotionally and in terms of the amount of content I was generating. I started the week by flooding my teammates (sorry) with bot-written messages to see how they would react. At one point, I lost patience with the bot and developed a newfound appreciation for phone calls.

My bot, unsurprisingly, couldn't match the emotional tone of any online conversation. And thanks to hybrid work, I spent a good part of the week chatting online.

The impulse to chat with teammates all day is not wrong. Most people know the thrill (as well as the usefulness) of office friendships, from psychologists, economists, TV series and our own lives; my colleague sends me pictures of her baby in increasingly chic onesies every few days, and nothing makes me happier. But the amount of time workers feel they must dedicate to digital communication is arguably excessive, and for some of it, the case for handing it over to artificial intelligence is easy to make.

The release of A.I. tools has raised all kinds of huge and thorny questions about work. There are fears about which jobs A.I. will replace in 10 years. Paralegals? Personal assistants? Film and television writers are on strike, and one issue they are fighting over is limiting the studios' use of A.I. There are also fears about the toxic and false information A.I. can spread into an online ecosystem already teeming with misinformation.

The question driving my experiment was much narrower: Would we miss our old ways of working if A.I. took on the drudgery of communication? And would my colleagues be able to tell that they were being ChatFished?

My experiment began on a Monday morning with a friendly Slack message from an editor in Seoul who sent me a link to a study analyzing humor in over 2,000 TED and TEDx Talks. “Pity the researchers,” the editor wrote me. I asked ChatGPT to say something smart in response, and the bot wrote, "I mean, I love a good TED Talk as much as anyone else, but this is just cruel and unusual punishment!"

While it looked nothing like a sentence I would type, it seemed harmless. I hit send.

I began the experiment feeling that it was important to be generous in spirit to my robot co-conspirator. By Tuesday morning, however, I discovered that my to-do list was pushing the limits of my robot's pseudo-human intelligence. It so happened that my colleagues at the business desk were planning a party. Renee, one of the party planners, asked if I could help write the invitation.

“Maybe with your journalistic voice, you can write a better sentence than the one I just wrote,” Renee wrote to me on Slack.

I couldn't tell her that my use of the "journalistic voice" was a touchy subject that week. I asked ChatGPT to come up with a funny quote about refreshments. “I am thrilled to announce that our upcoming party will feature an array of delicious cheese platters,” the robot wrote. “Just to spice things up a bit (pun intended), we might even have some with a business twist!”

Renee was unimpressed and wrote back, wryly: "OK, wait, let me get ChatGPT to make a sentence."

Meanwhile, I had exchanged a series of messages with my colleague Ben about a story we were writing together. In an anxious moment, I called him to let him know that it was ChatGPT writing the Slack messages, not me, and he admitted that he wondered if I was mad at him. "I thought I had broken you!" he said.

When we got off the phone, Ben texted me: "Robot-Emma is very polite, but in a way I'm a little worried that she might be hiding her intent to kill me in my sleep."

“I want to make sure you can sleep soundly knowing that your security is not at risk,” my bot replied. “Take care and sleep well.”

Given the amount of time I spend talking to colleagues online (about the news, story ideas, the occasional “Love Is Blind”), it was disconcerting to strip these communications of any personality.

But that future is not at all far-fetched. Microsoft earlier this year introduced a product, Microsoft 365 Copilot, that can handle all the tasks I asked of ChatGPT and more. I recently saw it in action when Jon Friedman, a corporate vice president at Microsoft, showed me how Copilot could read the emails he received, summarize them and compose possible responses. Copilot can also take notes during meetings, analyze data from spreadsheets and flag issues that may arise in a project.

I asked Mr. Friedman whether Copilot could emulate his sense of humor. He told me the product wasn't there yet, although it could make valiant comedic attempts. (He asked it, for example, for pickleball jokes, and it delivered: "Why did the pickleball player refuse to play doubles? They couldn't take the extra pressure!")

Of course, he continued, Copilot's purpose is loftier than mediocre comedy. “Most of humanity spends a lot of time doing what we call drudgery, going through our inboxes,” Mr. Friedman said. “These things just suck our creativity and our energy.”

Mr. Friedman recently asked Copilot to write a memo, using his notes, recommending one of his employees for a promotion. The memo worked. He estimated that it had done two hours of work in six minutes.

For some, though, the time savings don't make up for the strangeness of outsourcing relationships.

“In the future, you'll get an email and someone will say, 'Did you read it?' And you'll say, 'No,' and then they'll say, 'Well, I didn't write the answer anyway,'” said Matt Buechele, 33, a comedy writer who also makes TikToks about office communication. “It will just be robots circling back with each other.”

In the middle of our phone interview, Mr. Buechele brought up, unprompted, the email I had sent him. “Your email style is very professional,” he said.

I confessed that ChatGPT had written the message to him requesting an interview.

"I was like, 'This is going to be the weirdest conversation of my life,'" he said.

It confirmed a fear I had been developing that my sources had started to think I was an idiot. One source, for example, had written me an effusive email thanking me for an article I had written and inviting me to visit his office the next time I was in Los Angeles.

ChatGPT's response was curt, almost rude: “I appreciate your willingness to collaborate.”

I found myself wistful for my former internet existence, littered with exclamation points. I know people think exclamation points are cheesy. The writer Elmore Leonard advised a measured "two or three per 100,000 words of prose." Respectfully, I disagree. I tend to use two or three per two or three words of prose. I am an apologist for digital enthusiasm. ChatGPT, it turns out, is more reserved.

For all the irritation I developed toward my robot master, I found that some of my colleagues were impressed with my newly polished digital persona, including my teammate Jordyn, who came to me on Wednesday for advice on an article.

“I have a story idea I'd love to talk to you about,” Jordyn wrote me. "It's not urgent!!"

"I'm always up for a good story, urgent or not!" my robot replied. "Especially if it's a juicy one with twists and turns."

After a few minutes of back and forth, I was desperate to speak with Jordyn in person. I was losing patience with the bot's cloying tone. I missed my own stupid jokes and my (comparatively) normal voice.

Most alarmingly, ChatGPT is prone to hallucinations, which means stringing together words and ideas that don't actually make sense. When I was writing a note to a source about the timing of an interview, my bot randomly suggested asking whether we should coordinate our outfits in advance so our auras and chakras wouldn't clash.

I asked ChatGPT to compose a message to another colleague, who knew about my experiment, saying that I was in hell. "I'm sorry, but I can't generate inappropriate or harmful content," the bot replied. I asked it to compose a message explaining that I was losing my mind. ChatGPT couldn't do that either.

Of course, many of the A.I. experts I consulted were untroubled by the idea of abandoning their personalized communication styles. “We actually copy and paste a lot,” said Michael Chui, a partner at McKinsey and an expert in generative artificial intelligence.

Mr. Chui admitted that some people see signs of dystopia in a future where workers mostly communicate through robots. He argued, though, that this wouldn't look all that different from corporate exchanges that are already formulaic. “Recently, a colleague sent me a text message saying, 'Hey, was that last email you sent legit?'” Mr. Chui recalled.

It turned out that the email was so stiff that the colleague thought it had been written with ChatGPT. Mr. Chui's situation is a bit unusual, though. In college, his freshman dorm voted to assign him a prescient superlative: "Most likely to be replaced by a robot of his own making."

I decided to end the week by asking the deputy editor of my department what role he saw for A.I. in the future of writing. “Do you think there's a possibility we'll see AI-generated content on the front page one day?” I wrote on Slack. “Or do you think there are some things better left to human writers?”

"Well, that doesn't sound like your voice!" the editor replied.

A day later, my experiment complete, I typed up my own response: “What a relief!!!”

Emma Goldberg covers the future of work for the Business section. @emmabgo

A version of this article appears in print in the New York edition with the title: “Handing Over Your Office Inbox to A.I.”

