
ChatGPT in Real Life

So, apparently, one thing that ChatGPT is good at is bullshitting its way through research.

From The New York Times: Here’s What Happens When Your Lawyer Uses ChatGPT.

Apparently, an attorney (who had been practicing law for three decades) used ChatGPT to help with research on a brief. It made up legal rulings and opinions. And when pressed about whether those things were real… it lied.

The making-stuff-up part is not all that unexpected; it’s an algorithm that synthesizes existing information and fills in the details to produce a more authoritative-sounding product. The lying part is a bit disconcerting.

In a bit of teaching brilliance, Twitter user @cwhowell123 ran an experiment with his students.

The thread is incredibly interesting. Highly recommended.

All of his students generated essays using ChatGPT and then read them critically. He says his students were surprised to learn, just as the lawyer above did, that ChatGPT could mislead them.

The pièce de résistance (for me) was found here:

I have not (as of yet) tried to use ChatGPT. It’s good to know that it’s not (yet) ready for prime time as a content generator. It seems that at least some of its “learning” has been devoted to creating content that sounds authoritative without actually being reliable.

Frankly, I have memories of being a college student and doing the same thing. Some credit to me, though: I know that it’s not good practice to cite a source without laying eyes on it. (Even if I misunderstand the intent of the author(s), or draw different conclusions from the research, I need to actually read the article before I use it as a reference.) I’m not smart enough to make up sources out of thin air, so that’s not a practice I would attempt either. (Yeah, the AI may know more than I do, but I know how to use what I know in ways that are mostly appropriate.)

My husband did an exercise with ChatGPT where he fed it a 600-word NYTimes story about a Supreme Court ruling and asked it to summarize the article in 300 words. He said that ChatGPT did a reasonable job, but it still got one point wrong. (He knew that because he read the NYTimes article first.)

This is all very interesting, and it’s helpful to me as I try to figure out how AI works and what its capabilities actually are. I’m coming around to the idea that ChatGPT, rather than being a generator of original content, is a natural-language search algorithm that’s capable of synthesizing information based on its “learning” and producing a natural-language result. If that’s the case, then I should be prepared to subject every bit of its output to critical analysis, just as I would any other kind of search result.
