So I set my timer for ten minutes, and off I went.
There’s nothing special about the 10-minute drawing. Like many 10-minute drawings, it’s awkward, and unfocused.
The interesting part of the experience had nothing to do with the drawing. But maybe everything to do with it… not sure. I started feeling antsy, and anxious, and needing to check the timer, at about 9:30. It was not a relaxing, flow-based feeling. I was feeling bad.
So I decided to lean into it for a little bit.
I suspect that this impulsive, let’s-try-something-new, little drawing — based on a prompt — didn’t feel productive enough. I was taking time out of my day to do something completely frivolous, something that was supposed to be kind of relaxing, and instead I felt like I was stealing time.
But from what? From whom? I wasn’t working on anything, or volunteering. I’m working my way through a class, but I’m not on a hard deadline at the moment. I didn’t have somewhere else I needed to be — no one was waiting for me, or depending on me, for anything in that moment.
I think that this kind of anxiety comes from some very old stuff. While I’m uncomfortable with the notion of an inner child (I don’t have any specific issue with it, but it feels weird), I think that this is the kind of thing that requires some acknowledgement and remediation (is that the right word?).
So, for myself — or anyone else — who needs to hear it: please take some time today to do something the rest of the world (or your family of origin) might not approve of because it’s not “productive.” It doesn’t have to be a grand gesture… maybe a 10-minute circle drawing will do the trick.
Apparently, an attorney (who had been practicing law for three decades) used ChatGPT to help with research on a brief. It made up legal rulings and opinions. And when pressed about whether those things were real… it lied.
The making-stuff-up part is not all that unexpected; it’s an algorithm that synthesizes existing information and fills in the details to make its product sound more authoritative. The lying part is a bit disconcerting.
In a bit of teaching brilliance, Twitter user @cwhowell123 did an experiment with his students:
So I followed @GaryMarcus's suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it–look for hallucinated info and critique its analysis. *All 63* essays had…
The thread is incredibly interesting. Highly recommended.
All of his students generated essays using ChatGPT and then read them critically. He says that his students were surprised to learn (just as the lawyer above was) that ChatGPT could mislead them.
The pièce de résistance (for me) was found here:
One student opined that AI knows more than we do but is dumber than we are, since it cannot think critically. She wrote, "I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is."
I have not (as of yet) tried to use ChatGPT. It’s good to know that it’s not (yet) ready for primetime as a content generator. It seems that at least some of its “learning” has been devoted to creating content that sounds authoritative, without actually being reliable.
Frankly, I have memories of being a college student and doing the same thing. Some credit to me, though: I know that it’s not good practice to cite a source without laying eyes on the source. (Even if I misunderstand the intent of the author(s), or draw different conclusions from the research, I need to actually read the article before I use it as a reference.) I’m not smart enough to make up sources out of thin air, so that’s not a practice I would attempt either. (Yeah, the AI may know more than I do, but I know how to use what I know in ways that are mostly appropriate.)
My husband did an exercise with ChatGPT where he fed it a 600-word NYTimes story about a Supreme Court ruling and asked it to summarize the article in 300 words. He said that ChatGPT did a reasonable job, but it still got one point wrong. (He knew that because he read the NYTimes article first.)
This is all very interesting, and is helpful to me as I try to figure out how AI works and what its capabilities actually are. I’m coming around to the idea that ChatGPT, rather than being a generator of original content, is a natural language search algorithm that’s capable of synthesizing information based on its “learning,” and producing a natural language result. If I assume that that’s the case, then I should be prepared to subject every bit of its product to critical analysis (just as I would have to for any other kind of search results).
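(For the curious: here’s roughly what my husband’s exercise looks like if you script it, sketched with OpenAI’s Python client. Everything in it is an assumption on my part: the model name, the prompt, the word limit. The step that actually matters is the one in the comments at the bottom.)

```python
# A minimal sketch of the "summarize, then verify" exercise.
# Assumptions: the `openai` package (v1+) is installed, OPENAI_API_KEY is set,
# and the model name is a placeholder; any chat model would do.
from openai import OpenAI

client = OpenAI()

def summarize(article_text: str, word_limit: int = 300) -> str:
    """Ask the model to condense an article. The output is a draft to check,
    not a fact to trust."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You summarize news articles accurately."},
            {
                "role": "user",
                "content": f"Summarize the following article in about {word_limit} words:\n\n{article_text}",
            },
        ],
    )
    return response.choices[0].message.content

# Read the source article yourself FIRST, then compare it, point by point,
# against the model's summary. That's where the one wrong point turns up.
# print(summarize(open("article.txt").read()))
```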
A couple of days ago, The Raptor Center at the University of Minnesota announced that one of their bald eagles, Maxime, had passed away. She had been at The Raptor Center for more than 20 years. In her later years, she developed severe arthritis that interfered with her quality of life. On May 19, she was humanely euthanized.
In 2016, I had the opportunity to visit The Raptor Center to take a week-long workshop on the care and maintenance of captive raptors. Some of Minnesota’s birds are like ours at the Outdoor Learning Center. That is to say, at least a few of them came into care from the wild, and have ongoing concerns resulting from injuries sustained in the world.
I’ve only ever worked with one bald eagle, and only for a few minutes, and it was at The Raptor Center. It was Maxime.
Maxime and Me, 2016
First thing, bald eagles get heavy in that position. I had her on my hip for stability for almost the entire time I worked with her. I was closely supervised by two of her handlers. They were very patient with me, and so was she.
By the time I went to that workshop, I had worked with the OLC’s birds for about 3.5 years, so I had experience working with great horned owls, a barn owl, a barred owl, a screech owl, an American kestrel, a Harris’s hawk, a rough-legged hawk, and a red-tailed hawk. I knew how to keep myself, and the birds, safe during our handling and husbandry sessions.
I knew that raptors aren’t props, pets, or toys. I knew that most raptors aren’t social (unless they’re migrating or nesting), and that “friendship” between our species just isn’t a thing. I knew how to tie a falconer’s knot. The OLC’s facilities are much smaller and less sophisticated than The Raptor Center’s, but we had checked in with other rehabbers and vets, and Fish & Wildlife, to make sure they were good for the birds. I understood that our goals at the OLC were (and are) habituation, and tolerance.
The most profound thing I learned at The Raptor Center was how to incorporate cooperation into the relationship. The birds are sentient individuals who have preferences, and personalities, and it’s important for those of us who get to work with them to honor those preferences when we can. It’s not always possible, but to the extent that it is possible, it’s our responsibility to try.
These thoughts are still front-of-mind today:
Do not lie to the bird.
Do not try to conceal what’s going on, especially in situations where you know the bird isn’t going to enjoy it (like restraint for medical procedures).
Project what’s going to happen.
Be safe.
Be efficient.
Let the bird recover quietly, without interference from you.
If you’re going to be involved in activities the bird doesn’t enjoy, and you want to be a regular handler of the bird, you have to put in extra time, so that trimming beaks and talons isn’t the only time you handle the bird.
It’s 2023, and I’m still working on this stuff, with our birds (many of them new since 2016) and other volunteers. It’s aspirational, particularly if a bird came into care as an adult. But I think about it often, and I’m grateful to have had the opportunity to learn it from The Raptor Center, with Maxime.
At the workshop, we broke into groups for some games, and my group was named Maxime’s Minions.
I’ve been hearing a lot about AI lately. More than I would like, in fact. I suspect that’s true for many of us.
For school, we received a warning that we need to check with our professors about the use of AI to do assignments. I’ve seen examples on Twitter, from teachers, about students using AI inappropriately. I follow some artists who have had their work “appropriated” (stolen) to train AIs. I’m seeing authors concerned about how their work is being used.
One law school professor advocates for teaching students how to use it effectively and ethically. I was surprised by the recommendation, but I see the utility in it — it’s here, and becoming more pervasive. We should probably all know at least the basics. The APA agrees that ChatGPT has the potential to advance learning in some situations.
Recently, I experienced how health care and business are trying to use AIs to do customer service. It’s not going well. It’s not ready to do anything more than the most basic of business functions. But businesses are plugging them in and letting go of their customer service reps. (Newsflash: it’s really frustrating to navigate this new landscape as a patient or customer.)
It’s causing no small amount of consternation, but I’m not naive enough to think that it’s going to go away, or that we can legislate it out of existence. It’s here, and some of it is getting pretty sophisticated.
And there are some good ways to use the technology. IBM’s Watson seems to be adept at complex medical diagnoses (in part, I suspect, because Watson can “read” everything about a particular subject much more quickly than doctors can). Here’s the thing, though: Watson may identify potential problems and solutions, but a doctor still has to assess that information, based on their experience and their interactions with the patient, and decide whether it’s correct, or useful.
If I, a mere mortal (an MLIS student, not a doctor, lawyer, or artist) were going to use an AI, it would be to help with foundational work on projects by suggesting new avenues for exploration, or creating visualizations to illustrate research (more on this later, because I feel like this is inching close to a line that shouldn’t be crossed).
A couple of the more interesting uses I’ve seen:
An urban planner friend used an AI to generate images to illustrate an urbanist philosophy he espouses. The images are for demonstration purposes only; they are not going to be passed off as original work, are not going to be sold, and are not going to be integrated into a proposed or existing design framework.
Somebody on the internet used ChatGPT to come up with a week of meal planning, and then asked it to come up with recipes and a shopping list for the week.
Because there are some legitimate uses for AI, and it is a whole thing, I suspect it behooves us all to figure out what different AIs do, and whether they might be useful in some capacity. And if so, maybe to go one step further and figure out how to incorporate them into whatever workflow we use.
That said, I can see many, many red flags. There are some huge ethics challenges there, and I suspect that we are not thoughtful enough to figure those out before some really bad stuff happens.
For example, this business where entities are stealing content (images, likenesses, and text) to “train” AIs is hella problematic.
The perception that prompting an AI is the same as creating a work is not quite right. The perception that an AI-generated work is an original work is wrong. The fact that students are turning in AI-generated work, and trying to pass it off as their own is… something. (More training about what plagiarism is and how to avoid it seems to be in order.)
Culturally, we are already terrible at recognizing the work of artists and illustrators, photographers, authors, and actors, and paying for their work in a way that is equitable. Using AIs to avoid paying creators altogether is morally bankrupt. (But it is probably profitable… welcome to the icky part of capitalism.)
And then there’s the idea that you shouldn’t have to disclose if you’ve used an AI to produce work. That is… how can I say this… bullshit. If you’re going to use a tool, use it, but that doesn’t mean you should be able to hide behind it. (Looking squarely at news organizations that are producing “stories” based on data. I’m not arguing the legitimacy or efficacy of using an AI to “write” the stories. I am arguing that the AI needs to get the byline so that we know that you’re doing it.)
Those of us who choose to use AIs to help with our research should have an academically sanctioned way to cite them in references (maybe even with new requirements that we disclose if and how we used the AI, which one we used, and how our work with it impacted our research). People who use AIs for artistic endeavors should be bound by copyright (held by the original creator, not the AI), and again, should have a way to cite and credit (and pay, if necessary) both the creator, and the AI.
This is fairly new to me, so I can’t pretend to understand the potential of this technology, or all of the ethical issues surrounding it. But I’m diving into it, because it’s something I suspect we will all have to reckon with, sooner rather than later.
And because blog posts should have a photo, here’s an oddity quite unrelated to AI: a dandelion with fused stems and flowers.
Something weird going on with this dandelion… compound stems and flowers.
Over the last couple of weeks, I have been trying to track down, and then replace, a lost shipment of medication.
Because I have chronic conditions that require daily medication, I use a mail pharmacy. I can get a 90-day supply for less than it would cost to go to the retail pharmacy three times for 30-day supplies. Win-win.
Except when it gets lost in the mail.
Express Scripts will not let you talk to a human being, so everything has to be done by email. It took a week to establish that the package was shipped from Express Scripts, and that it had not been delivered to me.
It took another week to establish that yes, I understand that it’s important for me to keep taking my medicine, and that yes, I understand that I will have to pay for the new prescription (lame, but another conversation for another time), and that I have run out of the medication in question and NEED YOU TO SHIP A REPLACEMENT.
So a refill was ordered. But it’s out of cycle (BECAUSE IT SHOULD BE A REPLACEMENT, NOT A REFILL), so it was flagged for being too early (BECAUSE IT’S REPLACING SOMETHING THAT WAS LOST IN THE MAIL.) That caused another delay, because someone had to approve it at that point.
I actually have no idea whether I was corresponding with people who are overworked and have no influence over how these conversations should be handled, or whether it was an AI at work. It felt like I was dealing with an AI.
While this isn’t a life-or-death situation (yet), and is highly unlikely to become one, it needs to be resolved sooner rather than later.
I’m trying to keep an open mind for things like chatbots, automated systems and AI. I’ve actually seen some good uses for all of them… but customer service isn’t one of them.
Chatbots and AI can handle the ordinary, which, presumably, makes up the greatest volume of communication. Most of the time, I would prefer to not have to go through a person, for things that involve calendaring, or reordering prescriptions for which I have remaining refills. It’s easier for everyone if I can have some autonomy.
A conversation with a person is usually the best way to handle the unusual.
Microsoft SmartArt is not easy to use… just sayin’.
If a situation is or becomes urgent or emergent, it requires human intervention. While health care has infrastructure for handling the urgent and emergent, neither health care organizations, nor retail organizations, have accounted for handling the unusual; they either treat the situation as if it is ordinary, ignore you, or try to push you off to somewhere else.
In my case, the unusual (a missing shipment of medication) has become urgent (I’m out of medicine that I need to take daily, and need a small supply to bridge the gap until the refill arrives), and I’m having exactly the same communication issues with my primary care clinic that I have had with Express Scripts.
If you are an organization that wants to use automation for customer service, you need to train that AI to recognize when it can’t answer the question, and make sure you have representatives on hand to manage those unusual situations.
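If I were going to sketch what I mean in code, it would look something like this (a minimal sketch; the confidence score, the threshold, and the queue are all hypothetical placeholders, but the handoff is the point):

```python
# A minimal sketch of "know when you can't answer" for a customer service bot.
# The confidence score and escalation queue are hypothetical placeholders;
# the pattern is what matters: don't guess, don't loop, hand off to a person.
from dataclasses import dataclass


@dataclass
class BotAnswer:
    text: str
    confidence: float  # 0.0-1.0, however the bot estimates its own certainty


CONFIDENCE_THRESHOLD = 0.8  # assumption: tune to your tolerance for wrong answers


def escalate_to_human(question: str) -> None:
    """Hypothetical hook into a staffed queue, i.e., the part you can't automate away."""
    print(f"[queued for a human representative] {question}")


def handle_request(question: str, answer: BotAnswer) -> str:
    """Let the bot handle the ordinary; route the unusual to a person."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.text
    escalate_to_human(question)
    return "I'm connecting you with a representative who can help with this."


# The ordinary: a routine refill. The unusual: a lost shipment.
print(handle_request("Refill my prescription", BotAnswer("Your refill is on its way.", 0.95)))
print(handle_request("My medication was lost in the mail", BotAnswer("(no good answer)", 0.2)))
```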
Off my soapbox.
Have a photo of Lu, the sweetest puppy, who is also a menace like Dennis.
“Seems like there should be hot dogs for being this cute, yes?”
There are a couple of things I’m glad I learned during the pandemic, back when everything was closed because we couldn’t breathe on each other.
One of them is how to make coffee that I like to drink.
Mocha/latte in the commemorative pandemic mug (“This sucks and I hate it,” by Effin’ Birds.) Lu starts puppy kindergarten, part 2, today, and the document under the mug is for that.
Here’s how I do it: Moka pot coffee (High Drive roast, from Indaba Coffee, here in Spokane), with soy milk (agitated in a Bodum milk frother), and about 2 tsp of chocolate syrup (Hershey dark chocolate syrup, to add a subtle chocolate flavor and cut the bitterness of the coffee with a little bit of sweet).
The coffee continues to be the highlight of every week. But this week was memorable for a couple of reasons:
I got my first black fly (buffalo gnat) bite of the season… on my face. (OK, that’s not a highlight, but it was significant.) Black flies inject you with a numbing agent and then saw a small, impressively round hole in your skin. The wound bleeds, and then swells up like a mosquito bite and remains itchy for several days. Usually they get you on the hairline… this one is just to the front of my ear. I was bitten on Monday afternoon, and on Saturday morning, it’s still itchy and a little bit swollen. I’m pretty sensitive to bug bites anyway, but I hate black flies… at least their season is short. (Treatment: hydrocortisone cream and antihistamines (Allegra or Zyrtec — we probably shouldn’t use Benadryl anymore).)
I finished my first semester of graduate school. For our final project, my group had to rethink and redesign the navigation structure of a website, and we finished it on Thursday. I suggested the OLC’s district site, because it doesn’t (at present) adequately represent what the OLC does, how, or for whom.
(For what it’s worth, school districts generally don’t do a lot of great web development; funding is inadequate, and school employees don’t have time to maintain a complex site, because they’re, you know, teaching kids. It’s unfortunate, because there are significant information needs for students, parents, and the community, that are just not being met. And yes, I know, not all school districts.)
We got to do a card sort with the OLC staff, which was amazing. Card sorting is a really great way to get a glimpse into other people’s ideas about how the world should be organized. There are apps and orgs that allow you to do them online. We used index cards, which, for a group hybrid sort, made for an easy, tactile experience (nice after the teachers had worked with kindergarteners all day).
Since I’m the one who suggested it (and the OLC is where I live), I got to do some of the heavier lifting with the foundational pieces. I’m lucky the group had a tech person who could read my early drafts. It was a gift to have someone who could check to make sure she could visualize what I was describing. And then to be able to hand it off to writers and editors who could take our observations and ideas (~ 12 pages, at that point, with some photos and sitemaps) and create a cohesive report about the project. (It was quite a bit of work for 10 points.)
[5/14/23: We got full credit! Yay, us!]
I’m (still) not a huge fan of group work, but not for the reasons you might think. I enjoy collaborating with people, one-on-one or in small groups — different perspectives often make for stronger work. And I was lucky to have landed in the group I was in; everyone was interesting and insightful, hugely talented, and wanted to be involved and get things done in a timely manner. That said, we all have lives outside of school that need to be attended to, and matching schedules and availability for project work turned out to be a bigger challenge than doing the actual work of the project. That’s a little too “real world” for work that has hard deadlines and offers no compensation.
I spent yesterday morning decompressing with a tropical smoothie. (Shakes and smoothies are like donuts to me. I really enjoy them… about twice a year. Anything more ends up being… too much.)
And now it’s on to the next semester. This summer I will be coding… a lot. I’m taking SJSU’s MLIS foundation front-end coding class, and working my way through a front-end coding certificate through the University of Washington.
(And hopefully, continuing to work with the Outdoor Learning Center on building out their district website.)
At the OLC with Ruby (the Barn Owl) last month, while the district guys were replacing the windows in the sanctuary. She spent most of the day in her crate, where it was dark and quiet, but we got to hang out in the shade for a little while in the afternoon.
I took a class this winter, in a subject I have no experience with. It was gloriously fun, but very challenging. It moved very quickly, and there were a lot of deadlines. My goal with this class was to develop a greater understanding and appreciation for the topic.
I produced a lot. I learned a lot. I am not now, and may never become, an expert, but it was a good experience.
I just had a final critique with the instructor.
It was brutal. She was not unkind, but she was not shy about letting me know many of the ways my project was lacking.
She was not wrong. It was a good critique (good critiques require quite a bit of skill). It was illuminating, and I have some better ideas — from an expert! — about how to approach this kind of project going forward.
But here’s the thing: critique can be hard. When you’ve invested a bunch of time and work in something, it’s hard to hear all the ways it doesn’t measure up. It hurts the ego; it can bruise the heart.
But it is important to be able to hear it and accept it — or at least to listen to it, and decide what you want to take away from it.
I’m not sure I could have done anything better or different with this project, so even though some of it was difficult to hear, I can accept it: I am a beginner at this. I am not capable of greatness, at least not yet (and there were a few other life circumstances going on in the background, so even if I were capable of more, I might not have been able to bring it to the table), and my goal for this class was to finish.
So I will let this experience sit for a minute, digest the advice I have been given, maybe do some drawing for fun and find an approach (slower, more methodical, more iterative) to this kind of work that makes more sense to my brain.
I’ll bet that all of us encountered the fact that life isn’t fair when we were children. And that it was an impossibly hard pill to swallow… one that hasn’t gotten any easier over time.
Because it’s true. Life isn’t fair… it seems to relish making that point, over and over. No matter how upset we get about whatever situation is illustrating the point at the moment, our strong emotions can’t change anything about the underlying premise.
For many years, it was conveyed to me that it’s important to accept that life isn’t fair and to get on with it — there’s no use crying over spilled milk, as they say.
And while that sentiment is technically true, tamping down strong, natural emotional responses in order to demonstrate emotional maturity is neither healthy, nor mature.
So when I encounter a “life is not fair” situation (like, say, our old dog dying), my emotional response feels very, very old, as in, I recognize it because I’ve been here before. But really, it’s more like the person experiencing the emotional response is, unlike me, very, very young.
And then my question is, what exactly am I responding to right now? A universally true (no matter how long they live, dogs don’t live long enough), very sad, present situation? Or an unresolved thing (or things) from a long time ago? Or both?
Sweet Lilo… we love you, and we miss you every day.
If your inner child needs to hear this today, here it is:
It’s true that life is not fair. It is OK to be upset or frustrated, or sad, or angry about that. Even though your emotional response can’t change this circumstance, it is OK to have those feelings. It’s OK to not know what to do, or how to respond, while having these feelings. They may feel (or remain) unresolved because this situation may not have a resolution other than acceptance.
The older I get, the more I recognize that swallowing feelings doesn’t make them go away — they’ll find a way to make themselves known, sometimes in unrelated, unhealthy ways.
It is possible to hold two, seemingly contradictory, truths at the same time: that this is sad, and unfair, and hard, and that I am filled with gratitude for the life of a sweet, small dog.