AI

More AI Notes

Confession: I’m a big fan of science writer Ed Yong. I followed his work at The Atlantic for years, particularly during the pandemic. His newsletter, The Ed’s Up, is worth reading (bird photos FTW). His latest book, An Immense World: How Animal Senses Reveal the Hidden Realms Around Us (bookshop.org link), is both fascinating and exceptionally well-written.

Image of Ed Yong's latest book, "An Immense World: How Animal Senses Reveal The Hidden Realms Around Us." It has a green background with white and yellow text, and a photo of a (very cute) monkey staring at a butterfly.
And the cover is gorgeous.

I think the quality of Ed Yong’s work is next level, so when he says something like this, I pay special attention:

“My position is pretty straightforward: There is no ethical way to use generative AI, and I avoid doing so both professionally and personally.”

To be absolutely clear: I am not an expert on generative AI. I am not a user of generative AI. I’m still trying to wrap my head around the idea of generative AI.

These are some of my concerns:

  • Do users know what content was used to train the LLM? To be more blunt, was the content stolen?
  • Generative AI tends to fabricate details (again, to be blunt, LLMs can lie — remember this story?). Are users savvy enough to figure out whether or not that is happening, and where?
  • At what stage of a project is generative AI considered helpful? Do the expectations of users reflect the intent of developers?

For me, I think that generative AI would not be helpful for doing the bulk of my research; I need my brain to grapple with ideas, in clumsy ways, to find a way through. I feel like trying to use generative AI might add too much noise to an already messy process. I mean, maybe at the very beginning of a project, when I’m trying to wrap my head around something new, prompting an LLM to see if I can identify avenues of research I hadn’t originally thought about?

My husband uses an LLM to generate code snippets when he gets stuck, and that seems like it might be a useful application: generating useful code requires that you already understand what you want it to do (you’re just missing some syntactic nuance), and whatever the LLM generates has to be tested and integrated into a larger whole.
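
To make that concrete, here’s a minimal sketch in Python (the task and names are hypothetical, not anything he actually works on) of what “test whatever the LLM gives you” might look like:

    # Hypothetical example: suppose an LLM suggested this helper for turning a
    # post title into a filename-safe "slug" -- the kind of small, fiddly task
    # where you know exactly what you want but not the exact incantation.
    import re

    def slugify(title: str) -> str:
        """Lowercase the title and collapse runs of non-alphanumerics into hyphens."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # The important part: don't just paste it in. Check it against cases you care about.
    assert slugify("An Immense World!") == "an-immense-world"
    assert slugify("  AI & Ethics: 2023  ") == "ai-ethics-2023"

The snippet itself isn’t the point; the point is that the person asking already knows what correct behavior looks like, so they can verify the answer before it goes anywhere near the larger project.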

On a different front, last week in my travels around social media, I saw this article: ‘AI-Mazing Tech-Venture’: National Archives Pushes Google Gemini AI on Employees.

Whoa. If NARA (National Archives and Records Administration) is working with an AI, that might be a thing, right? It’s not too surprising to discover, as the article points out, that at least some archivists are… wary… of AI, in general, particularly after the organization told them not to use ChatGPT. Some of the cited concerns: accuracy and environmental impact.

I can’t (and shouldn’t) be trusted to make decisions for anyone else regarding use of generative AI. I, frankly, don’t have enough relevant experience with it to speak with any authority about it. That said, at this stage, if I were going to use an LLM on a regular basis, it would occupy the role of a natural language search engine, one that I don’t entirely trust.

AI · Raptors · Studenting

Being a “Mature” Student

I am what they call a “mature” student. I’ve been around the block a few times. I’ve done a lot of formal education-related activities. I’ve got a bachelor’s degree, a post-baccalaureate certificate, and a graduate certificate… and I was something like six units away from an associate’s degree in the middle of all of that (I had to abandon it for a cross-country move).

And now I’m at the point where, as a graduate student, I still want to learn, but I’m not a fan of the trappings of school. I’m working on a project at the moment that I’m kind of excited about… and while I’m paying close attention to all of the rubrics, readings, and feedback, I don’t really care what my professor thinks about it. That’s not to say that I won’t make changes to it in accordance with feedback. I will, for sure, especially if that feedback helps to move the project in a direction I want to go. But I’m intrigued enough by the subject matter that I don’t feel the need to alter the trajectory of it, if that makes sense.

I guess what I’m saying is that I’m not really looking for my instructor’s approval. I’m interested in their opinions about how I can sharpen my argument, or strengthen my sourcing, but I’m not all that concerned about whether they think it’s an amazing piece of work. I think the subject is very, very cool, and that’s what matters to me in this moment.

Hint: falconry, but not in the context of falconry. Falconry is what it’s about, sort of, but folded into an information science topic. (Image source: https://en.wikipedia.org/wiki/Hunting_with_eagles#/media/File:Kazakh-Mongolian_Eagle_Hunter.JPG)

This is kind of a new way of thinking for me, and likely comes from being exhausted from a lifetime of people-pleasing. The thing is, like most people, I generally perform better doing work I’m excited about, or at least interested in. Again, like most people, I *can* do things that don’t really interest me, but I generally don’t excel at them, and that’s fine.

So yeah, this week it was a 1700-word blog post, with photos, a video, and lots of references. Next week, a 1000-word essay about an information-seeking model that pertains to my topic. As I move forward, I’ll fill out my research with more peer-reviewed, academic work (newsflash: I’m currently working with 10-12 sources from the perspectives of archaeology, anthropology, ecology, and, of course, information science).

I’ve been thinking a lot about AI over the last several months, and I’ve concluded that there’s synthesis that happens when I’m researching and writing that AI can’t really help with. Maybe it’s because I’m a deliberate thinker (not all that quick on the uptake), and I need to puzzle ideas out for myself. Maybe it’s because I’m old(er), and I still like to read papers on paper, so I can make notes and mark them up. Also on the “mature” front, I still draft longhand occasionally, though I’ve been moving away from that (now I draft mostly in MS Word, so that I can save versions — once a graphic designer…). I’m sure at some point I’ll have to figure out how to work with AI, but at this point, I haven’t found a way for it to be useful for my process.

One thing I am not enjoying? Formatting references (resources… whatever). I’ve always been kind of bad at it, but now I’ve had to switch from MLA to APA, and it’s a little bit different, so… that’s going to take a minute.

But you know what? If I knew how to do any of this I wouldn’t need to be here. So I’m just going to continue to nerd out on my topic, and figure out the rest of it as I go along.

AI · Thoughts about Stuff

ChatGPT in Real Life

So, apparently, one thing that ChatGPT is good at is bullshitting its way through research.

From The New York Times: Here’s What Happens When Your Lawyer Uses ChatGPT.

Apparently, an attorney (who had been practicing law for three decades) used ChatGPT to help with research on a brief. It made up legal rulings and opinions. And when pressed about whether those things were real… it lied.

The making-stuff-up part is not all that unexpected; it’s an algorithm that synthesizes existing information and fills in the details to make a more authoritative-sounding product. The lying part is a bit disconcerting.

In a bit of teaching brilliance, Twitter user @cwhowell123 did an experiment with his students:

The thread is incredibly interesting. Highly recommended.

All of his students generated essays using ChatGPT and then read them critically. He says that his students were surprised to learn (just as the lawyer above did) that ChatGPT could mislead them.

The pièce de résistance (for me) was found here:

I have not (as of yet) tried to use ChatGPT. It’s good to know that it’s not (yet) ready for primetime as a content generator. It seems that at least some of its “learning” has been devoted to creating content that sounds authoritative, without actually being reliable.

Frankly, I have memories of being a college student and doing the same thing. Although, some credit to me: I know that it’s not good practice to cite a source without laying eyes on the source. (Even if I misunderstand the intent of the author(s), or draw different conclusions from the research, I need to actually read the article before I use it as a reference.) I’m not smart enough to make up sources out of thin air, so that’s not a practice I would attempt either. (Yeah, the AI may know more than I do, but I know how to use what I know in ways that are mostly appropriate.)

My husband did an exercise with ChatGPT where he fed it a 600-word NYTimes story about a Supreme Court ruling and asked it to summarize the article in 300 words. He said that ChatGPT did a reasonable job, but it still got one point wrong. (He knew that because he had read the NYTimes article first.)

This is all very interesting, and is helpful to me as I try to figure out how AI works and what its capabilities actually are. I’m coming around to the idea that ChatGPT, rather than being a generator of original content, is a natural language search algorithm that’s capable of synthesizing information based on its “learning,” and producing a natural language result. If I assume that’s the case, then I should be prepared to subject every bit of its product to critical analysis (just as I would have to for any other kind of search results).

AI · Studenting

Academic Integrity

I’m so old that I started learning to type on my mom’s typewriter. (I was a child who thought that typewriters were cool, and I wanted to learn to use it. I’m not a particularly fast typist, so it turns out I was more interested in figuring out how it worked than learning to use it.)

When I started having to write papers for every class I took in college, I had a Mac, but I still took notes and drafted my papers longhand (only the first draft — every other iteration happened on the computer).

As a graphic designer, I used version naming conventions (052123_filename.indd, 052223_filename.indd, 052223_filename1a.indd…) for all my files, so that I could present different ideas, or step back if I needed to.

As I learn front-end web development, I’m getting a little bit more familiar with version control, using Git and GitHub.

I’m thinking about all of these practices as they relate to academic integrity in the age of ChatGPT. Teachers are trying to figure out how to prevent students from cheating. And we, as students, have to figure out how we are going to demonstrate that the work we are presenting is ours. In other words, as students, we need to develop a kind of “hygiene” to make sure that we show the evolution of a project or paper.

(Put another way, we all need to figure out how to avoid this sort of thing: Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers.)

In my case, I still tend to take notes longhand (I think better with a pen in hand); notes are artifacts, so that’s a reasonable start. Another thing I already do if I’m working on a group project is keep local files of the work I’ve shared (or with Google apps, I’ll make a copy of a file and share the copy) — not because I think my work is of higher quality than the group can produce (it is most assuredly not), but because it creates a situation where I can point to the part of the project I contributed to as an individual.

I think I’m going to start implementing version naming conventions for files when I’m working on a solo project. I also need to get better at letting my professors get to know me, so they have a sense of who I am and how I approach assignments. (For online students, I think discussions are a good way to demonstrate that you’re engaging with the material in a thoughtful way, in your own style, in a less formal context.)
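
As a rough sketch of what that could look like in practice, here’s a minimal Python helper (the file names are made up) that saves a dated copy of a draft, echoing the date-prefix convention from my design files:

    # Hypothetical helper: save a dated copy of a draft after each work session,
    # mirroring the MMDDYY_filename convention I used for design files.
    import shutil
    from datetime import date
    from pathlib import Path

    def snapshot(draft: str, archive_dir: str = "drafts") -> Path:
        """Copy the draft into archive_dir with today's date prefixed to its name."""
        src = Path(draft)
        dest_dir = Path(archive_dir)
        dest_dir.mkdir(exist_ok=True)
        dest = dest_dir / f"{date.today():%m%d%y}_{src.name}"
        shutil.copy2(src, dest)  # copy2 keeps file timestamps, which is part of the point
        return dest

    # e.g. snapshot("falconry_essay.docx") -> drafts/100524_falconry_essay.docx

Git (which I’m already getting familiar with for web development) would do the same job and more, but even a folder of dated copies is evidence of how a paper evolved over time.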

I’ve been pretty fortunate that most of my academic endeavors involve writing and project work, where you have to demonstrate progression of thought through the course of the work. It’s difficult to cheat that process (or the cheating involves so much more work and creativity than the assignment calls for that it should probably be lauded).

Photo offering of the day: this gorilla from the San Diego Zoo, who chose to walk right up to the viewing area, turn his back on all of us, and sit down. He decided that if he had to interact with us, he was going to do it in his own way. (Animals are capable of complex communication.)

We went to the San Diego Zoo on Christmas Day in 2021. This gorilla did an amazing job of letting us know that he was going to do his thing in his way, never mind the rest of us. I appreciate it when any animal has enough autonomy to express themself.

AI · Thoughts about Stuff

Artificial Intelligence

I’ve been hearing a lot about AI lately. More than I would like, in fact. I suspect that’s true for many of us.

For school, we received a warning that we need to check with our professors about the use of AI to do assignments. I’ve seen examples on Twitter, from teachers, about students using AI inappropriately. I follow some artists who have had their work “appropriated” (stolen) to train AIs. I’m seeing authors concerned about how their work is being used.

One law school professor advocates for teaching students how to use it effectively and ethically. I was surprised by the recommendation, but I see the utility in it — it’s here, and becoming more pervasive. We should probably all know at least the basics. The APA agrees that ChatGPT has the potential to advance learning, in some situations.

Recently, I experienced how health care and business are trying to use AIs to do customer service. It’s not going well. It’s not ready to do anything more than the most basic of business functions. But businesses are plugging them in and letting go of their customer service reps. (Newsflash: it’s really frustrating to navigate this new landscape as a patient or customer.)

It’s causing no small amount of consternation, but I’m not naive enough to think that it’s going to go away, or that we can legislate it out of existence. It’s here, and some of it is getting pretty sophisticated.

And there are some good ways to use the technology. IBM’s Watson seems to be adept at complex medical diagnoses (in part, I suspect, because Watson can “read” everything about a particular subject much more quickly than doctors can). Here’s the thing, though: Watson may identify potential problems and solutions, but a doctor still has to assess that information, based on their experience and their interactions with the patient, and decide whether it is correct or useful.

If I, a mere mortal (an MLIS student, not a doctor, lawyer, or artist) were going to use an AI, it would be to help with foundational work on projects by suggesting new avenues for exploration, or creating visualizations to illustrate research (more on this later, because I feel like this is inching close to a line that shouldn’t be crossed).

A couple of the more interesting uses I’ve seen:

  • An urban planner friend used an AI to generate images to illustrate an urbanist philosophy he espouses. The images are for demonstration purposes only; they are not going to be passed off as original work, are not going to be sold, and are not going to be integrated into a proposed or existing design framework.
  • Somebody on the internet used ChatGPT to come up with a week of meal planning, and then asked it to come up with recipes and a shopping list for the week.

Because there are some legitimate uses for AI, and it is a whole thing, I suspect it behooves us all to figure out what different AIs do, and whether they might be useful in some capacity. And if so, maybe to go one step further and figure out how to incorporate them into whatever workflow we use.

That said, I can see many, many red flags. There are some huge ethics challenges there, and I suspect that we are not thoughtful enough to figure those out before some really bad stuff happens.

For example, this business where entities are stealing content (images, likenesses, and text) to “train” AIs is hella problematic.

The perception that prompting an AI is the same as creating a work is not quite right. The perception that an AI-generated work is an original work is wrong. The fact that students are turning in AI-generated work and trying to pass it off as their own is… something. (More training about what plagiarism is and how to avoid it seems to be in order.)

Culturally, we are already terrible at recognizing the work of artists and illustrators, photographers, authors, and actors, and paying for their work in a way that is equitable. Using AIs to avoid paying creators altogether is morally bankrupt. (But it is probably profitable… welcome to the icky part of capitalism.)

And then there’s the idea that you shouldn’t have to disclose if you’ve used an AI to produce work. That is… how can I say this… bullshit. If you’re going to use a tool, use it, but that doesn’t mean you should be able to hide behind it. (Looking squarely at news organizations that are producing “stories” based on data. I’m not arguing the legitimacy or efficacy of using an AI to “write” the stories. I am arguing that the AI needs to get the byline so that we know you’re doing it.)

Those of us who choose to use AIs to help with our research should have an academically sanctioned way to cite them in references¹ (maybe even with new requirements that we disclose if and how we used the AI, which one we used, and how our work with it impacted our research). People who use AIs for artistic endeavors should be bound by copyright (held by the original creator, not the AI), and again, should have a way to cite and credit (and pay, if necessary) both the creator and the AI.

This is fairly new to me, so I can’t pretend to understand the potential of this technology, or all of the ethical issues surrounding it. But I’m diving into it, because it’s something I suspect we will all have to reckon with, sooner rather than later.

And because blog posts should have a photo, here’s an oddity quite unrelated to AI: a dandelion with fused stems and flowers.

Something weird going on with this dandelion… compound stems and flowers.

¹ As it turns out, the APA has a way to cite ChatGPT in references. Cool.

AI · Thoughts about Stuff

Chatbots?

Over the last couple of weeks, I have been trying to track down, and then replace, a lost shipment of medication.

Because I have chronic conditions that require daily medication, I use a mail pharmacy. I can get a 90-day supply for less than it would cost to go to the retail pharmacy three times for 30-day supplies. Win-win.

Except when it gets lost in the mail.

Express Scripts will not let you talk to a human being, so everything has to be done by email. It took a week to establish that the package was shipped from Express Scripts, and that it had not been delivered to me.

It took another week to establish that yes, I understand that it’s important for me to keep taking my medicine, and that yes, I understand that I will have to pay for the new prescription (lame, but another conversation for another time), and that I have run out of the medication in question and NEED YOU TO SHIP A REPLACEMENT.

So a refill was ordered. But it’s out of cycle (BECAUSE IT SHOULD BE A REPLACEMENT, NOT A REFILL), so it was flagged for being too early (BECAUSE IT’S REPLACING SOMETHING THAT WAS LOST IN THE MAIL). That caused another delay, because someone had to approve it at that point.

I actually have no idea whether I was corresponding with people who are overworked and have no influence over how these conversations should be handled, or whether it was an AI at work. It felt like I was dealing with an AI.

While this isn’t a life-or-death situation (yet), and is highly unlikely to become one, it needs to be resolved sooner rather than later.

I’m trying to keep an open mind for things like chatbots, automated systems and AI. I’ve actually seen some good uses for all of them… but customer service isn’t one of them.

Chatbots and AI can handle the ordinary, which, presumably, makes up the greatest volume of communication. Most of the time, I would prefer to not have to go through a person, for things that involve calendaring, or reordering prescriptions for which I have remaining refills. It’s easier for everyone if I can have some autonomy.

A conversation with a person is usually the best way to handle the unusual.

Microsoft SmartArt is not easy to use… just sayin’.

If a situation is or becomes urgent or emergent, it requires human intervention. While health care has infrastructure for handling the urgent and emergent, neither health care organizations nor retail organizations have accounted for handling the unusual; they either treat the situation as if it is ordinary, ignore you, or try to push you off to somewhere else.

In my case, the unusual (a missing shipment of medication) has become urgent (I’m out of medicine that I need to take daily, and need a small supply to bridge the gap until the refill arrives), and I’m having exactly the same communication issues with my primary care clinic that I have had with Express Scripts.

If you are an organization that wants to use automation for customer service, you need to train that AI to recognize when it can’t answer the question, and make sure you have representatives on hand to manage those unusual situations.

Off my soapbox.

Have a photo of Lu, the sweetest puppy, who is also a menace like Dennis.

“Seems like there should be hot dogs for being this cute, yes?”