The Personal Project

Teaching with AI

I’m not a fan of using AI for day-to-day activities, or for art. It’s not that I think nobody should use AI, or that AI isn’t useful in some circumstances. I just… don’t use it for the research and writing that I do (most of the peer-reviewed work I reference lives behind paywalls). And using it for literary writing or visual art is appropriation at best, outright stealing at worst. No, thank you.

(Also, I wish that browsers, word processors, Adobe products, and even WordPress would just knock it the f— off. Stop foisting it on me; I don’t want it and won’t use it… go away.)

I am interested in how instructors are teaching with (or trying to avoid) AI, because from what I’m seeing in the world, students using AI to generate some or all of their work has become a significant issue for teachers. It’s difficult to know how to address it, because AI is a thing that’s here. How do we interact with it, and teach others to interact with it, in ways that are responsible, ethical, fair, and useful?

In other words, it’s not just about using AI — it’s also about pedagogy. I don’t have answers, but I’ve seen what seem like some good ideas, including having students generate a piece of writing using AI and then asking them to fact-check it and revise it as necessary. I’m also seeing some instructors going back to requiring at least some handwritten work for drafting.

Another approach is one that a professor of mine used this semester: requiring a lot of writing over the course of several weeks, much of it reflective writing based on personal experience (with citations from the course readings), along with written responses to classmates. AI wasn’t forbidden, per se (we could have used it if we cited it appropriately), but when you have to produce that much original writing, in that style, your distinctive “voice” becomes recognizable. That kind of writing isn’t appropriate for a lot of academic work, and we did have a couple of longer, less narrative writing exercises, but it worked well for this class. (It was a lot of writing, though. I don’t envy our professor having to wade through all of it.)

The other class I took this semester was a coding class focused on PHP and JavaScript. It was a good class, challenging and well structured. Do I know how to use either PHP or JavaScript like an expert? Lol, no. But I know more than I did at the beginning of the semester, so I guess that’s a win — good foundation and all that. What was particularly interesting to me, though, was the way the instructor asked us to use AI.

Photo by Peaky Frames on Unsplash

For the most part, we were required to demonstrate that we could use the concepts he introduced to us in the way they were introduced, without the use of AI. Most of our assignments consisted of small, simple (if you know what you’re doing, which I… did not) coding exercises that followed the logic of his demonstrations but focused on a slightly different problem. (In my evaluation of the class, I said that this approach is an “infuriatingly effective” way for me to learn, because the problems he asked us to solve were just different enough from his demonstrations that I really had to think through how to apply the concepts he was teaching.)

He asked us to use AI on three separate occasions over the 15 weeks, in three different ways:

  1. Early in the semester, we were asked to prompt an AI (of our choosing) to solve a simple coding problem, make sure the code executed the way we intended, and comment on whether we understood what the AI did and why. (The AI produced code that was far more complex than I could read, much less write, at that point in the semester. It worked, but I wasn’t sure how.)
  2. Several weeks later, we were asked to manually code the scaffolding for an assignment and, once the code worked as assigned, prompt an AI to add functionality that we had not yet covered in class. (I had to instruct the AI not to change the structure of my code, to make sure that I could still navigate it after the new code was added — I’m a beginner! We had to upload both versions of the code to a class server.) A rough, made-up sketch of this pattern follows the list.
  3. For our final assignment, we were to use an AI to code a simple web app (with HTML, CSS, JavaScript, and PHP) that met his specifications — vibe coding! I used GitHub Copilot/GPT-5 mini in Visual Studio Code, so I was using AI within the context of an IDE (I found working within a specific environment very helpful). It required a fair amount of intervention to get where I wanted to go. (Granted, someone with better prompting skills would probably be much more efficient.) But by the end of the semester I could at least read the code and understand most of the logic — or ask the AI to explain what it was doing — so I could make the adjustments I needed to meet the expectations for the assignment.
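
To make the second exercise concrete, here’s a minimal, made-up sketch of the pattern (not the actual assignment): hand-written scaffolding, plus a clearly marked function standing in for the kind of feature an AI might add on top of it without disturbing the original structure.

```php
<?php
// Hypothetical sketch of the exercise-2 pattern, not the actual assignment.

// Hand-written scaffolding: compute the average of a list of scores.
$scores = [72, 88, 95, 61];

function average(array $numbers): float {
    if (count($numbers) === 0) {
        return 0.0;
    }
    return array_sum($numbers) / count($numbers);
}

echo 'Average: ' . average($scores) . PHP_EOL;

// Stand-in for the AI-added functionality (say, a median, which we hadn't
// covered yet), appended without restructuring the hand-written code above.
function median(array $numbers): float {
    sort($numbers);
    $count  = count($numbers);
    $middle = intdiv($count, 2);
    if ($count % 2 === 0) {
        return ($numbers[$middle - 1] + $numbers[$middle]) / 2;
    }
    return (float) $numbers[$middle];
}

echo 'Median: ' . median($scores) . PHP_EOL;
```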

As a person who does not use LLMs on the regular, I thought these exercises, in this progression, were an interesting and effective way to introduce AI. The exercises asked us to solve specific problems, and encouraged us to develop an understanding not just of the end product, but of the working parts. Exercises 2 and 3 also introduced us to different ways of using AI: generating code that complemented manually written code, and generating most of the code and adjusting it as we tested it. Using AI to generate code, letting a machine do the work of machines, seems like a good use case to me. And because code needs to be checked and adjusted to make sure it’s doing what it’s supposed to do — no matter who (or what) generates it — that workflow seems like a defensible use of AI.

AI

More AI Notes

Confession: I’m a big fan of science writer Ed Yong. I followed his work at The Atlantic for years, particularly during the pandemic. His newsletter, The Ed’s Up, is worth reading (bird photos FTW). His latest book, An Immense World: How Animal Senses Reveal the Hidden Realms Around Us (bookshop.org link), is both fascinating and exceptionally well-written.

Image of Ed Yong's latest book, "An Immense World: How Animal Senses Reveal The Hidden Realms Around Us." It has a green background with white and yellow text, and a photo of a (very cute) monkey staring at a butterfly.
And the cover is gorgeous.

I think the quality of Ed Yong’s work is next level, so when he says something like this, I pay special attention:

“My position is pretty straightforward: There is no ethical way to use generative AI, and I avoid doing so both professionally and personally.”

To be absolutely clear: I am not an expert on generative AI. I am not a user of generative AI. I’m still trying to wrap my head around the idea of generative AI.

These are some of my concerns:

  • Do users know what content was used to train the LLM? To be more blunt, was the content stolen?
  • Generative AI tends to fabricate details, sometimes called hallucinations (again, to be blunt, LLMs can lie — remember this story?). Are users savvy enough to figure out whether that is happening, and where?
  • At what stage of a project is generative AI considered helpful? Do the expectations of users reflect the intent of developers?

For me, I think that generative AI would not be helpful for doing the bulk of my research; I need my brain to grapple with ideas, in clumsy ways, to find a way through. I feel like trying to use generative AI might add too much noise to an already messy process. I mean, maybe at the very beginning of a project, when I’m trying to wrap my head around something new, I could see prompting an LLM to find avenues of research I hadn’t originally thought about.

My husband uses an LLM to generate code snippets when he gets stuck, and that seems like it might be a useful application: writing a useful prompt requires that you already understand what you want the code to do (you’re just missing some syntactic nuance), and whatever the LLM generates still has to be tested and integrated into a larger whole.
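
As an illustration of that “test it before you trust it” step, here’s a hypothetical example (not his actual code, and assuming a PHP project): a small LLM-generated helper gets a few quick sanity checks before it is folded into anything larger.

```php
<?php
// Hypothetical example: an LLM-generated helper function gets a few quick
// sanity checks before being integrated into a larger project.

// Suppose the LLM produced this snippet when asked how to turn a post title
// into a URL-friendly "slug".
function slugify(string $title): string {
    $slug = strtolower(trim($title));
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug); // non-alphanumeric runs become hyphens
    return trim($slug, '-');                          // drop leading/trailing hyphens
}

// Quick checks that the snippet behaves the way we intended.
assert(slugify('Hello, World!') === 'hello-world');
assert(slugify('  PHP & JavaScript 101  ') === 'php-javascript-101');
assert(slugify('---') === '');

echo 'slugify() behaves as expected' . PHP_EOL;
```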

On a different front, last week in my travels around social media, I saw this article: ‘AI-Mazing Tech-Venture’: National Archives Pushes Google Gemini AI on Employees.

Whoa. If NARA (the National Archives and Records Administration) is working with an AI, that might be a thing, right? It’s not too surprising to discover, as the article points out, that at least some archivists are… wary… of AI in general, particularly after the organization told them not to use ChatGPT. Among the cited concerns: accuracy and environmental impact.

I can’t (and shouldn’t) be trusted to make decisions for anyone else regarding use of generative AI. I, frankly, don’t have enough relevant experience with it to speak with any authority about it. That said, at this stage, if I were going to use an LLM on a regular basis, it would occupy the role of a natural language search engine, one that I don’t entirely trust.

AI · Thoughts about Stuff

Artificial Intelligence

I’ve been hearing a lot about AI lately. More than I would like, in fact. I suspect that’s true for many of us.

For school, we received a warning that we needed to check with our professors about using AI to do assignments. I’ve seen examples on Twitter, from teachers, of students using AI inappropriately. I follow some artists who have had their work “appropriated” (stolen) to train AIs. I’m seeing authors concerned about how their work is being used.

One law school professor advocates for teaching students how to use it effectively and ethically. I was surprised by the recommendation, but I see the utility in it — it’s here, and becoming more pervasive. We should probably all know at least the basics. The APA agrees that ChatGPT has the potential to advance learning in some situations.

Recently, I experienced firsthand how health care and other businesses are trying to use AIs for customer service. It’s not going well. The technology isn’t ready to do anything more than the most basic business functions, but companies are plugging it in anyway and letting go of their customer service reps. (Newsflash: it’s really frustrating to navigate this new landscape as a patient or customer.)

It’s causing no small amount of consternation, but I’m not naive enough to think that it’s going to go away, or that we can legislate it out of existence. It’s here, and some of it is getting pretty sophisticated.

And there are some good ways to use the technology. IBM’s Watson seems to be adept at complex medical diagnoses (in part, I suspect, because Watson can “read” everything about a particular subject much more quickly than doctors can). Here’s the thing, though: Watson may identify potential problems and solutions, but a doctor still has to assess that information, based on their experience and their interactions with the patient, and decide whether it is correct or useful.

If I, a mere mortal (an MLIS student, not a doctor, lawyer, or artist), were going to use an AI, it would be to help with foundational work on projects: suggesting new avenues for exploration, or creating visualizations to illustrate research (more on this later, because I feel like this is inching close to a line that shouldn’t be crossed).

A couple of the more interesting uses I’ve seen:

  • An urban planner friend used an AI to generate images to illustrate an urbanist philosophy he espouses. The images are for demonstration purposes only; they are not going to be passed off as original work, are not going to be sold, and are not going to be integrated into a proposed or existing design framework.
  • Somebody on the internet used ChatGPT to come up with a week of meal planning, and then asked it to come up with recipes and a shopping list for the week.

Because there are some legitimate uses for AI, and it is a whole thing, I suspect it behooves us all to figure out what different AIs do, and whether they might be useful in some capacity. And if so, maybe to go one step further and figure out how to incorporate them into whatever workflow we use.

That said, I can see many, many red flags. There are some huge ethics challenges there, and I suspect that we are not thoughtful enough to figure those out before some really bad stuff happens.

For example, this business where entities are stealing content (images, likenesses, and text) to “train” AIs is hella problematic.

The perception that prompting an AI is the same as creating a work is not quite right. The perception that an AI-generated work is an original work is wrong. The fact that students are turning in AI-generated work and trying to pass it off as their own is… something. (More training about what plagiarism is and how to avoid it seems to be in order.)

Culturally, we are already terrible at recognizing the work of artists and illustrators, photographers, authors, and actors, and paying for their work in a way that is equitable. Using AIs to avoid paying creators altogether is morally bankrupt. (But it is probably profitable… welcome to the icky part of capitalism.)

And then there’s the idea that you shouldn’t have to disclose if you’ve used an AI to produce work. That is… how can I say this… bullshit. If you’re going to use a tool, use it, but that doesn’t mean you should be able to hide behind it. (Looking squarely at news organizations that are producing “stories” based on data. I’m not arguing the legitimacy or efficacy of using an AI to “write” the stories. I am arguing that the AI needs to get the byline so that we know you’re doing it.)

Those of us who choose to use AIs to help with our research should have an academically sanctioned way to cite them in references¹ (maybe even with new requirements that we disclose if and how we used the AI, which one we used, and how our work with it impacted our research). People who use AIs for artistic endeavors should be bound by copyright (held by the original creator, not the AI), and again, should have a way to cite and credit (and pay, if necessary) both the creator and the AI.

This is fairly new to me, so I can’t pretend to understand the potential of this technology, or all of the ethical issues surrounding it. But I’m diving into it, because it’s something I suspect we will all have to reckon with, sooner rather than later.

And because blog posts should have a photo, here’s an oddity quite unrelated to AI: a dandelion with fused stems and flowers.

Something weird going on with this dandelion… compound stems and flowers.

¹ As it turns out, the APA has a way to cite ChatGPT in references. Cool.