
Artificial Intelligence

I’ve been hearing a lot about AI lately. More than I would like, in fact. I suspect that’s true for many of us.

For school, we received a warning that we need to check with our professors about using AI on assignments. I’ve seen examples on Twitter, from teachers, of students using AI inappropriately. I follow some artists who have had their work “appropriated” (stolen) to train AIs. I’m seeing authors concerned about how their work is being used.

One law school professor advocates for teaching students how to use it effectively and ethically. I was surprised by the recommendation, but I see the utility in it — it’s here, and becoming more pervasive. We should probably all know at least the basics. The APA agrees that ChatGPT has the potential to advance learning in some situations.

Recently, I experienced how health care and business are trying to use AIs for customer service. It’s not going well. The technology isn’t ready to do anything more than the most basic business functions, but businesses are plugging it in anyway and letting go of their customer service reps. (Newsflash: it’s really frustrating to navigate this new landscape as a patient or customer.)

It’s causing no small amount of consternation, but I’m not naive enough to think that it’s going to go away, or that we can legislate it out of existence. It’s here, and some of it is getting pretty sophisticated.

And there are some good ways to use the technology. IBM’s Watson seems to be adept at complex medical diagnoses (in part, I suspect, because Watson can “read” everything about a particular subject much more quickly than doctors can). Here’s the thing, though: Watson may identify potential problems and solutions, but a doctor still has to assess that information, based on their experience and their interactions with the patient, and decide whether it is correct or useful.

If I, a mere mortal (an MLIS student, not a doctor, lawyer, or artist), were going to use an AI, it would be to help with foundational work on projects by suggesting new avenues for exploration, or by creating visualizations to illustrate research (more on this later, because I feel like this is inching close to a line that shouldn’t be crossed).

A couple of the more interesting uses I’ve seen:

  • An urban planner friend used an AI to generate images to illustrate an urbanist philosophy he espouses. The images are for demonstration purposes only; they are not going to be passed off as original work, are not going to be sold, and are not going to be integrated into a proposed or existing design framework.
  • Somebody on the internet used ChatGPT to plan a week of meals, and then asked it to generate recipes and a shopping list for the week.

Because there are some legitimate uses for AI, and it is a whole thing, I suspect it behooves us all to figure out what different AIs do, and whether they might be useful in some capacity. And if so, maybe to go one step further and figure out how to incorporate them into whatever workflow we use.

That said, I can see many, many red flags. There are some huge ethical challenges here, and I suspect that we are not thoughtful enough to figure them out before some really bad stuff happens.

For example, this business where entities are stealing content (images, likenesses, and text) to “train” AIs is hella problematic.

The perception that prompting an AI is the same as creating a work is not quite right. The perception that an AI-generated work is an original work is wrong. And the fact that students are turning in AI-generated work and trying to pass it off as their own is… something. (More training about what plagiarism is and how to avoid it seems to be in order.)

Culturally, we are already terrible at recognizing the work of artists and illustrators, photographers, authors, and actors, and paying for their work in a way that is equitable. Using AIs to avoid paying creators altogether is morally bankrupt. (But it is probably profitable… welcome to the icky part of capitalism.)

And then there’s the idea that you shouldn’t have to disclose if you’ve used an AI to produce work. That is… how can I say this… bullshit. If you’re going to use a tool, use it, but that doesn’t mean you should be able to hide behind it. (Looking squarely at news organizations that are producing “stories” based on data. I’m not arguing about the legitimacy or efficacy of using an AI to “write” the stories. I am arguing that the AI needs to get the byline so that we know you’re doing it.)

Those of us who choose to use AIs to help with our research should have an academically sanctioned way to cite them in references¹ (maybe even with new requirements that we disclose whether and how we used the AI, which one we used, and how our work with it impacted our research). People who use AIs for artistic endeavors should be bound by copyright (held by the original creator, not the AI), and again, should have a way to cite and credit (and pay, if necessary) both the creator and the AI.

This is fairly new to me, so I can’t pretend to understand the potential of this technology, or all of the ethical issues surrounding it. But I’m diving into it, because it’s something I suspect we will all have to reckon with, sooner rather than later.

And because blog posts should have a photo, here’s an oddity quite unrelated to AI: a dandelion with fused stems and flowers.

Something weird going on with this dandelion… compound stems and flowers.

¹ As it turns out, the APA has a way to cite ChatGPT in references. Cool.
