>> JENNIFER PETERSON: I'm going to go ahead and get our recording started, and remind folks we do create a learner guide for our webinars. It's a way to extend your learning on the topic. We've created this as a way for you to bring the conversation to your team, so feel free to use the guide as a way to have a group discussion; perhaps encourage others to view the session and then work through the guide together. It's a Word document, so you can customize it and make it your own. If you have specific steps that you'd like to take, you can add those to the learner guide. I'm really excited to welcome our presenters for today's session. We have Linda Braun, who is a learning consultant at The LEO Group. We've had the pleasure of working with Linda over the years on other projects, and it's great to have you back. Juan Rubio is the digital media and learning program manager at Seattle Public Library, and Dianne Connery is the director at Pottsboro Public Library in Texas, and it's again great to have Dianne back as well. I'm going to shift us on over and have Linda get us started. Thank you so much and welcome.
>> LINDA BRAUN: Hey, everyone. I'm really excited to be here with all of you and with the great cofacilitators and presenters, Dianne and Juan; I feel fortunate to be here with this group. So I will just take you through what we're going to cover, and then we'll get into the meat of it all. In the introductions, Juan is going to talk about what is AI, anyway. And then Dianne is going to talk about examples for libraries and communities. I'll get us started talking about equity and ethics, and then Juan, Dianne and I are going to have a conversation about that. We thought it would be great to talk with each other. We like talking with each other a lot, I think. And then we're going to just say what's next for you and for your library and your community. Please continue to use the chat. We have a couple of different times we're going to be asking you specific questions. We'd love to have the conversation, and we will learn a lot with you as well as being able to give you some info here. So we wanted to start with having you add to the Slido that is coming up right now: a word that describes what excites you about AI. We'll give you a minute or so to do this. And while you're doing that, I'll mention that almost all of the images you'll see, we just passed one that was created with Google AI, this one on the screen now, everything else Dianne, Juan, and I created with Midjourney, using the idea of all of the images being in the style of Mad Magazine. You'll see there's similarity between our images as well as something that has to do with what we're talking about. So we're talking about what excites you about AI. You put in your word and then you're able to see what the responses are, right?
>> JENNIFER PETERSON: I believe so. Is that the case for you?
>> LINDA BRAUN: If you're not seeing it, it's going to force you to put in a word. You should be seeing Slido on the panel.
>> JENNIFER PETERSON: I do see folks responding. And you can also feel free to post to chat if you're not finding it; you can post your word in chat.
>> LINDA BRAUN: Automation, nothing? Maybe at the end we will see if that person changes their mind. I hope you get to see the exciting possibilities, opportunities, and potential creativity possible with AI. You can keep adding, I believe. I'm going to keep going, because we have a lot to cover. Thank you for doing that.
And we decided, Dianne and Juan and I decided, one way to get this conversation started is for us each to tell you all about our AI journeys and how we ended up here today. So for me, you see a text exchange that I had with Juan Rubio on December 26th, 2022. I had been reading about ChatGPT for months, probably, if not longer, and then in this chat conversation my life basically changed when it comes to AI. Juan said, have you tried ChatGPT already? And I was like, no, it's been on my list, just haven't done it. And then he said, it's wild, I feel it would change everything. That changed everything for me. So that was, what, eight, nine months ago, just about eight months ago. And that was really exciting, because I started using it, and one of the things I did right away was to think about how we could use ChatGPT with ethics training, with library staff or communities. So I said, write a description for a series of trainings, and I got that. And I was like, oh, I can brainstorm different ideas, and I started learning about what ChatGPT can do. And then, as I make slide decks, I go to Midjourney or DALL-E, and I'm always creating images. I'm not an artist in any sense of the word, so this is how I'm able to create slide decks. Here I was looking for an image that had love, what do you love about equity work, and all of these images came up, and you see the one I selected. So I've gone from I don't use it at all, to I use it for helping me think things through, to using it to create imagery. Now Dianne is going to talk about her journey.
>> DIANNE CONNERY: Okay. So I need to tell you first of all that Pottsboro, Texas, is a little town of 2,500 people, and our library is not a traditional library. We're really about digital inclusion. So when I got involved with the library back in 2018, my adult son visited, he asked if we were doing anything with eSports, and I didn't know about that, so next thing I knew I had an IMLS grant for an eSports team. We love everything technology at our library. We have virtual reality, drones, 3D printers, computers, of course, really fast computers. So fast forward to this Christmas: again, my son asked me if I'd used AI, and I had not. He pulls out his phone and said, what would you like to do with it? And I said, write grants. And so I gave him a few key words, and within a minute there was a grant written. So he introduced me to a newsletter called Superhuman, Zain Kahn writes it, I found it on LinkedIn. It's a daily update on different tools you can use, because this is updating so quickly. So I'm hooked too, Linda. And interesting, sort of that same timing, mine was Christmas, yours was December 26th, so we were doing the same thing in parallel. I'll turn it over to Juan.
>> JUAN RUBIO: Hello, everyone. Thank you for being here, and it's a pleasure to be talking about my journey into AI. I'm an artist and a creative person, so I started using this plug-in called Colors. What I wanted to do was get a color palette that would connect with the theme of landscapes in the Pacific Northwest, where I am, when I go and paint en plein air in oils. So I arrived at these very muted colors; I went through several iterations, there were really bright colors, but I was able to arrive at this palette that gave me the feeling of the website that I wanted to create.
It also helped me to come up with a name for the website. I changed it a little bit, but I asked it, what would you name the website, and it gave me some options; I changed it to Spanish and came up with a name, and I took that color palette into Figma and created the website that you see on the right-hand side. I also really like the fact that in the word cloud, the word that stands out the most is possibilities, because I really see AI as full of possibilities, and of the many things that are about to come. And I would like to be able to just have AI take my Figma design and program it into my website. I tried some plug-ins, but they're not there yet, so I'm really excited about the future and how it can help us with creativity and with many things. So speaking of creativity, as an artist it's really very important to me to sort of rescue or capture what I left behind. So I started thinking during Christmas about my grandmother's garden; she had a very lush garden that is no longer there, and Seattle weather is really damp in the winter, so it reminded me of the very heavy rainstorms in Central America and Honduras, where I'm from. So I asked Midjourney to give me an image of a toad, I remember a toad in my grandmother's garden, and I said, create the image of a toad from Central America in a garden after a rainy day. And it did that image for me, but it was just a natural, photorealistic look. And I asked it to actually turn that into a watercolor. And I changed the angle; first it was giving me a frontal view, and I said, do a profile of a toad, and then with that image, I went and created the watercolor that you actually see on the right-hand side. And I never had a problem with that, because it's a common practice to use reference photos to create your art. So I was really happy to be able to use the technology to help me with that creative process. And then I used ChatGPT as well, and this is the prompt that I had for this webinar in the beginning. It was going to be more focused on rural libraries, but it morphed a lot, and it really helped me develop the ideas that I had. I think that I'm going to -- I don't know what's happening. I can't move the slides. Okay. Thank you. So after our journeys, we thought it would be really important to start with some basic understanding of AI. I know we're all in a different space, but I think for this conversation it will be really good to start with a basic definition of what AI is and related concepts. I took this definition from the University of Washington here in Seattle; you can see the link on the slide. They have put together a very good guide to the use of AI at the university, and it includes key concepts. So we can talk a little bit about this concept of AI. One of the things that stands out to me in this definition is that it talks about the science and engineering of making intelligent machines, machines that can think. And I think it's important to highlight this idea of the multidisciplinary nature of AI: it's not only science, but it's also engineering, and if you go to the science side, you're going to think about neuroscience, engineering, computer science, mathematics, so it's a comprehensive and complex field of study. The other thing that I thought it was important to talk about is the idea of intelligent machines. We're going to go into what that means in a little bit.
And then it also says in here that it doesn't have to confine itself to methods that are biologically observable, because when we think about intelligence in the sense of biology, we're thinking about our body, our brain, and our environment, so we are talking about sensory processing and other forms of how we learn and how we interact with the space and how we think, and here we're moving into something that is more mathematical and less biological, so I think that's important to make that observation as well. So in the library context, I think it's very good to imagine a tool that will sort books for you, and to think about machine learning and how that process is going to learn from the sorting of books. It depends, actually, on how the learning model has been designed. So if you're sorting books, how is the machine going to be programmed to learn from that information that is being received? And I think that is also really important to highlight. And the important thing here: over time, during that processing of data and information, you can make the machine get better and better at those things that you set the machine to learn, or to do. So machine learning, we hear all about machine learning, and a good way to think about it is that we are not just fetching and displaying data, you know, like looking at the book stacks where you're actually taking the book out and learning from it; these are algorithms, formulas, that give computers the ability to learn from data, and then to make predictions and decisions based on those algorithms and that learning process. We are already using many of those technologies, such as spam email filters, or your suggestions in Spotify, or Netflix, or whatever you use to watch videos. It's getting better and better in terms of what the suggestions are. So I want to talk a little bit about this idea of machine learning. If you look at those numbers that are displayed on the slide, you can see that it's easy for you to recognize the numbers. It's easy for you to infer that the first number could be a 5, and the next ones 0, 4, 1, 9, and 2. And I do that very rapidly. But in reality, there's a lot happening in my brain when that is taking place. And it becomes a challenge to take that kind of process that happens in your brain during visual recognition and move it into a computer program that can do the same thing. So something that seems very easy for us becomes extremely difficult, like recognizing shapes, like knowing that the 9, for example, has a loop at the top and a vertical stroke that goes to the right, and that turns it into a number 9. So what machine learning does to try to do the same thing is to learn from a lot of data: go over an extremely large amount of data and begin to learn from training examples through a program. Complex approaches, we talked about neural networks or deep learning, go into that process of feeding information to the machine and then training it to produce predictions or decisions, or even, as we know, natural language. So Large Language Models, like when we use ChatGPT, generate, like I said, natural language text. When you type that prompt into ChatGPT, it's giving you back a conversation, constructing sentences in the way that we speak, and that we write.
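To make that learn-from-examples idea concrete, here is a minimal sketch, assuming Python with scikit-learn installed (a tool chosen only for illustration, not one mentioned in the webinar). It trains a simple classifier on scikit-learn's small bundled dataset of 8x8 handwritten digit images, which is the same kind of task described above: the program is never told what a 9 looks like; it learns from labeled examples.

```python
# Minimal "learn from labeled examples" sketch: recognize handwritten digits.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 tiny 8x8 grayscale images of the digits 0-9, with labels

# Hold some images back so we can test on digits the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A simple learner; production systems use deep neural networks, but the idea is
# the same: adjust internal parameters until predictions match the training labels.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

print("accuracy on unseen digits:", round(model.score(X_test, y_test), 3))
print("prediction for the first test image:", model.predict(X_test[:1])[0])
```

The point of the sketch is the workflow, data plus labels in, a trained predictor out, rather than the particular algorithm; swapping in a neural network changes the model line, not the overall shape of the process.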
And these large language models are built on architectures that are used to train machines to produce and generate these types of information. One of those architectures is the transformer. It's highly complex; I just want to point out that it's a type of architecture that includes things such as attention, or self-attention, I think it is called. There are other types of architectures for machine learning; transformers are just one of them. So as you can see, and as you can imagine, the field of AI is extensive and complex. I would encourage you, if you want to go deeper on learning about it, to look at resources available online. A good place to start is Google Cloud Skills Boost, which has many courses on artificial intelligence, such as the ones I'm displaying here, like Introduction to Generative AI, Introduction to Large Language Models, or Responsible AI. There is also ai.google, where you can design your own journey for learning AI. Okay. I think I'm passing it to Linda.
>> LINDA BRAUN: All right. So in the chat, I get to do all the interactive stuff. In the chat, what surprised you so far? I've been watching the conversation and lots of people talking about opportunities, fears, et cetera, but what surprised you in what Juan was just talking about? The Google Cloud courses, creating art, images. And, yes, creating grants. Writing grants. Somebody said recently, it might have been you, Dianne, in a conversation I was in, that anyone who doesn't use AI for writing proposals and things like that is probably not going to do as well. Just because it helps you, not because it's giving you the actual proposal, but because it's helping you to maybe think about your proposal in different ways. And Dianne, I'm assuming, I'm making an assumption, which is not always good, that when you wrote that grant proposal with your son, it wasn't like, oh, I'm going to take this whole hog, or whatever the right phrase is, but I'm going to look at this and it's going to help me to write a great proposal. Am I right about that, Dianne?
>> DIANNE CONNERY: That's correct. Yeah. I used it as a basis and built on it.
>> LINDA BRAUN: Yeah. Oh, yeah, and we're hearing other ways people are using it: manipulating images, book sorting, writing grants, all different art and grants. And we will get to some of the biases in a little bit when we get to that. Coming up with science tips, mapping trips. So many different things. That's really awesome. Keep going; the conversation has been great. And I think we'll turn it back to Dianne, who will talk about AI within the library and community context.
>> DIANNE CONNERY: Yes. Thank you. Okay. So I guess I should start by saying I've noticed in chat there's a wide range of experience with AI. I mean, most of us are relatively new, because this generative AI has just been public for a short time. But I want to tell you that it is not difficult to start exploring it. You don't have to know how to code, you just have to know how to type. And you start typing and it does amazing things for you. And I wanted to share my thoughts about why we're talking about this as libraries. It's almost as if we've been preparing for this, with all of the misinformation and disinformation, and libraries have talked about looking for authoritative resources, and just fake news, all of that, and we're in the place now where AI is happening.
And I think we need to prepare our communities for it and help them use it and understand it in a way that is beneficial to society. Everybody's going to need AI literacy; it's going to be part of everything, I think. Information literacy is already what we do, and so I think this is part of our ethics as librarians. This is all early on, so we are part of this conversation in society about the ethics of it, and what people need to know about it. It is changing rapidly, and as it changes, what we're doing with it and what we think about it may change. We might do this webinar a week from now and have different perspectives on some of these things. But I want to talk about: does this replace critical thinking skills, or does it enhance critical thinking skills? I've been part of, or listened to, several panel discussions of AI with academics, and they said some schools are banning it, and some are saying, this is inevitable, let's learn how to work with it. And so does it enhance learning? That's what libraries are about. We're facing a lot of problems, big problems that humans haven't been able to solve. Climate change is one that I think about. And what I hope is that maybe this will inspire some learning and maybe it could help us come up with some solutions that we haven't been able to on our own. In one of the webinars I attended, from the Texas State Library, Henry Stokes talked about AI, and I thought it was such a good analogy: when cameras started, only certain people had access to cameras. And now virtually all of us have access to cameras in our hands at all times. And so he called it the democratization of creativity, going back to that creativity word. So I'm here for the practical examples: a rural library, what have I been using it for? I'll talk about the two kinds of tools I've used. For the text-to-image generators, I will say Midjourney requires the use of a Discord server, so if you haven't used that, a lot of gamers use Discord servers; Midjourney is embedded in Discord. You can create anything. It amazes me. I don't think I've seen any technology that at the same time scares me so much and excites me so much. It just blows my mind daily as I play with it. And that's the other caveat I would give you: it's not hard to learn, but you can definitely go down a rabbit hole as you start exploring this. You start thinking throughout the day, hey, I wonder if it could do this? And there are just opportunities to explore it. I want to think about what AI can do for us to make sure we're spending our time on things that matter. Increasing our productivity: librarians are frequently asked to do more with less, and what if we could have this sort of virtual intern that could help us with some of the day-to-day stuff that takes a lot of time but is not necessarily the main thing? Keep the main thing the main thing. And we talk about lifelong learners, and this can be fun for lifelong learners. I'll give you an example of how I've used it. I was approached by an agency who literally said to me that they were going to give me a magic wand to do my job better, what did I need. And they suggested a new building, and I said, yes, please. So they asked me to send them a summary of kind of what I would use this building for.
They're interested in funding workforce development and economic development, and so I used ChatGPT to come up with a summary to send them, and they liked that, and so they came back and said, all right, we want to come visit you. We're bringing three of our high-level officials to hear your pitch. So I used AI to design a PowerPoint that is better than anything I've ever done, and I took pride in my PowerPoints. And so as part of the pitch, I said, this is how much money I need, and they gave me the thumbs up and said, yep, let's make this happen. And then as part of the funding process, we had to have a public information meeting, and I used AI again to write the minutes for that meeting. And then the step where I am now: working with an architect to design the building, which can be a lengthy process, unless you use AI. To get the outside design, in 30 seconds I fed in key words that I was interested in, a 15,000-square-foot building, one story, made of natural materials, set in natural grasses, and it blew my mind what it came up with. So we're going to talk about some of the apps we use: there's ChatGPT, Bard is part of Google, Canva has an AI thing. And I should have mentioned, back on my eSports image that I showed in the beginning, it looks like a photograph, but those kids in that photo are not real. So my understanding is with Midjourney, if you have a paid plan, you own the rights to whatever you create. I'm not a lawyer, but in the U.S. at least, what I have understood is that the copyright office is saying that what AI creates can't be copyrighted. So there can be ways, when you need photo permissions for some kind of social media promotion and you forgot to get them, you can create these imaginary children and use those for your photos. So one of the apps I've used, I think this is the first one I really used, is Otter.ai, and it joins Zoom calls with me and it transcribes everything that is said in the meeting; it is speech-to-text. So I use it to record meetings, either Zoom or in person, and it now can create minutes for me. When, like at Friends of the Library, the secretary didn't show up to take minutes, I can have it create the minutes. And if you see this, it comes up with summary key words, and we'll be talking a little bit about prompt engineering and finding the right key words to search databases; it's a skill that librarians already have, and so this can help teach us some of the important key words. So this was a Zoom call I had with someone; it transcribes 100% of it and then it summarizes it, and it emails me this after the meeting. And one of the things I discovered about it that I love is that it's searchable in Otter. So I went back, it was a Zoom meeting where somebody had discussed something and I could remember a couple of the phrases that were used, and so I typed that in and it came right up with the meeting and showed me exactly where that had been discussed. And then I can just read the discussion of it. So that's Otter.ai. And then ChatGPT I've used a lot productivity-wise, for administrative tasks, a lot of things that it can do for us to save time. So many of us have used chatbots before, but this is like taking it to a whole other level. I had mentioned to Linda I wrote a speaking proposal for a conference. I'm not going to tell you which one, because I don't want to be found out yet.
But it wrote my proposal, and as part of that I had to have three learning outcomes, which I'm not very good at coming up with. It created the learning outcomes for me. And once again, it was 70% there for what I wanted. I would not just take it and use it as written, but I used it as a brainstorming tool. It would actually come up with ideas that I had not thought about, and, hey, that's a good point, I'll include that. And then I could refine it. But it's really good with grammar and understanding those things. So speaking proposals are one way I've used it. Collection management: when we talk about the large language models, it's older information, so it's not going to pull up the latest and greatest, but this will keep developing day by day. Developing policies, writing bylaws, that sort of thing could be a huge time saver for those administrative tasks. I had a book discussion for Demon Copperhead, and I asked ChatGPT to come up with the questions; it wrote a list of questions and then it even said, if you have more time, here are additional thematic questions you could ask, and it even had a section of activities you might suggest to the people who attend, things they could do after the book discussion, to scaffold it, take it to the next level. And that was a huge time saver for me. I had read the book, I enjoyed the book, but for it to come up with questions I could just pick and choose from, it became a really robust discussion in our book club. It could create book lists for us, so all these things could be used -- and I've broken it down, but there's a lot of overlap between staff-facing and public-facing. So we want enticing wording in our social media posts, and it can come up with that wording. Program planning: what if I wanted to have a back-to-school event, what should that include? What I've been doing is I go in, I type in the key words, and if it's not quite right, I'll add on more information, like, okay, this is to be held in the afternoon at a library, and just keep going; it's an iterative process where I keep asking it questions. So Decktopus, I mentioned creating slide decks, Decktopus is the one I've been using. It asks you who your audience is, it gives you options to pick the style of the slides you like, it asks for your title and some subtitles, and then it wants to know the length of the presentation, is this 10 minutes, 25 minutes, or an hour? And it creates these amazing slides for your presentation. So I'm a big fan of that. And then, I started with talking about why libraries need to do this, and I will say a couple things. AI is sexy. I am constantly working against the nostalgia of what the library used to be, especially since we're so technology focused. How do we rebrand ourselves so that people know the variety of services and programs we offer now? And so if we can get out there in front and say, libraries are the natural place to learn about AI, I think that's a good position to be in. So we had a community program, I called it Introduction to AI, in our little community of 2,500 people, and four people showed up who had varying experience with it. Like, one woman was saying she was already using AI because when she traveled internationally, there was the facial recognition. So she mentioned that, and an engineer mentioned that there was a really complex math equation or concept that he had never understood.
He asked AI to simplify it for him, and he said he finally got it. So that kind of learning. One of the gentlemen in the group is using it for some financial planning kind of work. And as always, I think a good way to test this is to ask it some questions about things you know a lot about, so you can kind of check it: is it coming back with valid results and things that would be worthy of sharing? I have talked to the superintendent of schools and asked her if she would be interested in my doing a professional development in-service for the teachers about AI, because there is a lot of fear among the teachers. And so I'll be doing that in September for the teachers. Not that I'm a teacher, and they know that piece of it, but I can tell them about AI, and I've collected some of the pros and cons of using it in education. So, let's see. Linda, I'm going to hand it back to you for what type of AI tools people have used.
>> LINDA BRAUN: Great. Thank you. I think we have another Slido for this, so --
>> JENNIFER PETERSON: Just one moment.
>> LINDA BRAUN: And if the Slido is challenging, you can put in the chat what type of tools you've used. We've broken it into different categories, so you will see this in just a second. Here we go: what type of AI tools have you used? You should be seeing it; if not, you can just click on the Slido at the bottom. Image generation, presentation creation, these are the different kinds of things that Dianne talked about. Text analysis, video creation, data analysis, and other. And if you have another, tell us in the chat. Take a second, I'm going to have to do it too, and you have to do it in order to see the results. So text analysis is the most used, followed by image generation; lots of people have other, and data analysis is 15, 16%. And actually, I'd love to know more in the chat about how people are using data analysis. Juan mentioned Code Interpreter, which is my new sort of ChatGPT obsession, trying to figure out how it works. That's the only thing I've used for data analysis. I would be curious what other people are using. But still, it makes sense: text analysis is number one, with image generation as number two, so 54% and 40%. I'm just going to go back to the chat and see what some people are saying. Dianne, you might not have seen it, but I see a question right now: creating book lists, how did you do that with AI? Can you say a little bit more about that?
>> DIANNE CONNERY: That was ChatGPT. Just inputting, someone likes this, what are other similar books to this? And that made me think, too, you can also have it write as various personas, so you could have it write as if it were William Shakespeare, and so there are all kinds of explorations to do.
>> LINDA BRAUN: That's great. Alt text for images. Juan, I know you've used video creation tools. What have you used? I think it's called Reels. I was looking at that.
>> JUAN RUBIO: I think it's called Reels. It's not great yet, but I think it works. I really like the fact that I could just copy and paste the text into a window, and it will just turn it into spoken word. So that was really useful.
>> LINDA BRAUN: I think you can't yet limit selections to your library holdings; maybe at some point in WorldCat, unless someone knows something different, or in your catalog itself, if whatever ILS you use has a plug-in for AI, I think that's how that would work. All right.
>> JENNIFER PETERSON: Can I jump in?
I think these are interesting, too, for this question. Somebody asked, can ChatGPT help write policies?
>> LINDA BRAUN: Dianne, go for it.
>> DIANNE CONNERY: Yes, it can. You can put in some policies and have it refine them, or you could just put in the basics and see what you get with that. To me writing policies is pretty dry, and that would be one of the very useful, productive ways to use it. And you can even give it limits, I want it to be one paragraph, or two paragraphs, so you can iterate and fine-tune it quite a bit.
>> JENNIFER PETERSON: There was a question: what kind of AI is used to create videos? Have any of you experimented with video creation?
>> LINDA BRAUN: Juan has done it the most. Go for it.
>> JUAN RUBIO: I'm going to put the link in here.
>> LINDA BRAUN: Thank you.
>> JUAN RUBIO: That's the one that I've used. But I think there's another one called Synthesia. At the moment it's a sort of robotic voice, but it's going to get better as well. Anything else, JP?
>> JENNIFER PETERSON: There's lots of excellent discussion around the ethical issues, and I know that we're shifting into that, so we'll save that. But please, if you do have questions for the presenters, continue posting those and we'll be looping back to those in a bit.
>> LINDA BRAUN: Actually, just on the policies, as I move forward to the ethics and equity: also think about your own AI policies for wherever you are working. So AI tools can help you write policies, but you also probably want to start thinking about, if you haven't, and if other people have, it would be great to know, what kind of policies are you considering for your institutions that connect to AI and how it's used? All right. So I'm going to keep us going, and we have ethics and equity, and as I go, I'm going to go off my script a little bit, because there's been a lot in the chat about ethics and equity, and it's really interesting. So there are two things I want to mention before I go into what I was planning on talking about. It came up: do we have to let people know? If Dianne writes something, does she have to let people know that ChatGPT or another text AI tool was used to write it? And that's a question that a lot of people have, and I think part of the answer is, it depends. I'll tell you my recent story. I write a lot of articles and other kinds of things, and on December 26th, when I learned from Juan about all the different things he was doing, I was like, oh! I could take articles I write and see what ChatGPT does with them, to help me with my language and just to help me think about what I'm writing. And so I'd been doing that, and I was thinking to myself, do I have to put an asterisk on anything I write that uses AI, saying that part of this was written with the support of ChatGPT or whatever? And so I was talking to my husband about it, because he's my editor, he edits almost everything I write, and he said to me one day, you're not going to need me anymore. And I said, well, I'm trying to figure this out. And so his response, for the articles that I write, was: if I'm using ChatGPT like I would use him, which sounds terrible, but in the same way as I'm just asking an editor to help me clean this up and make it sound better, then I don't need to, right? So it's not like I'm taking ChatGPT's output and, that's the article. It's that ChatGPT ends up being like my editor for me.
So I think that's one way to think about it. It's partly the purpose of what you're doing and how much of your work it's doing. So that's something I've been thinking a lot about. And then there are a couple of things that have come up in the chat that I think are really important as we start to talk about ethics and equity. One person, Jim, wrote that it helps you to think more about prompts, which is so true. You can't just go to -- Dianne mentioned this the other day -- you can't go to an AI tool and use it like a search engine. AI is not a search engine. ChatGPT is not a search engine. And so you have to think about what you're asking. And you have to, as you go in, have some sense of what you're asking, and then use what you get back to help you think even more deeply. And Juan and Dianne and I have talked about that a lot. So I type something in, and it's like, no, that wasn't exactly what I was looking for. So it's forcing me to think more about my questions, which I think is what Jim was talking about. Yet at the same time, by doing that, and I'm not sure if it was Jim or someone else who said something about this, it separates out folks. So the idea is that I have the skills, the background, I've got a library degree, whatever, that helps me to think about what I'm trying to find out and how I'm going to ask my questions. Not everyone has that. Not everyone has that experience, not everyone has that opportunity to learn that. And that's a really key part of the ethics and equity of using AI tools: making sure that people who do not have even the opportunity to come to a WebJunction webinar and sort of start to learn about it and hear about the different tools and ways to use them, ways to think about it -- how do we help people in our communities who aren't able to do that? And that's one of the first things I think Juan said to me. We were talking about prompts and how to write the prompts: if you are skilled at that, you're going to write those proposals, you're going to get those jobs, whatever. If you're not, then that's where it's going to be a really big challenge. Thinking about that, our role in libraries is to help people gain those skills. Looking at these conversations, I've been around in libraries for a very long time, and I was around at libraries when we first automated catalogs, and when the internet came. And so a lot of this is the same conversation I heard with Wikipedia, and Google, and with email, and I think when we're thinking about ethics and equity, it's: what's our role in helping people to use this smartly, successfully, and as a community? And so I think it's not, no, it's a bad thing; it's, what are we going to do to help make it work for our communities? So that's my little -- I don't know what the word is -- as I get started with this. So when we're thinking about AI ethics, what I did was put in some text from an article that I'll put in the chat, because, JP, I don't think I gave you this article. And I said, what does this say, what's a word cloud you can give me? I used a word cloud generator in ChatGPT 4, which is one of the plug-ins, and I asked, what are the ethical topics that come up in this? And so you see bias, fairness, privacy, oversight, et cetera, and these are all definitely ethics topics.
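For anyone who wants to try that word-cloud step without a ChatGPT plug-in, here is a minimal sketch, assuming Python with the open-source wordcloud package installed (an illustrative substitute, not the plug-in used in the webinar). It surfaces the most prominent terms in a block of text by frequency; unlike the plug-in, it does not decide which terms are "ethical topics", it only counts words.

```python
# Minimal sketch: build a word cloud image from a block of article text.
from wordcloud import WordCloud  # pip install wordcloud

article_text = """
AI ethics raises questions of bias, fairness, privacy, transparency,
accountability, and human oversight in how systems are designed and used.
"""  # placeholder text; paste in the article you are analyzing

cloud = WordCloud(width=800, height=400, background_color="white",
                  collocations=False).generate(article_text)
cloud.to_file("ethics_wordcloud.png")  # an image you can drop into a slide

# Peek at the most prominent terms and their relative weights.
top_terms = sorted(cloud.words_.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top_terms)
```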
The thing I want to mention goes back to what I was just saying. As you start thinking about the ethics of AI, and all these frameworks are coming out, the White House just came up with a set of AI frameworks with tech companies for how we should be thinking about AI ethically in our communities, I think one of the things to remember is that this can't happen in a vacuum. So when we're thinking about ethics and all these questions that came up, these are all really important questions that everybody wrote. But it can't happen in a vacuum, and this article that I put in here talks a lot about the White House and tech companies; all these people are thinking about ethics. But we're the ones in our communities. And so how are we thinking about ethics, and how are we working with our communities to say, you know, what do you see as the ethical conversations for AI? First let's learn about AI together, and then, are you seeing bias when we do this? What should we think about when we think about bias? Are you seeing that there are decision-making or transparency challenges here? How do we work together in our communities, not on global AI, but with the federal government or even local government, to see how this should be built? Because we can have a voice in this. Juan and I were talking with a group the other day who said, well, that should be ALA. Yes, ALA should be working on this, but we also have to think about it more locally, and what we can do, because if we just focus on, AI is problematic because of this, this, and this, we're not helping to make the change. So talk with your communities about what the ethics of AI are. And think with other people: instead of staying in our library boxes, what else is going on? What are other people seeing in the ethical conversations about AI? The other thing I will say: when you look at any kind of framework coming out, use it with a critical lens and ask, who wrote this framework? And if it was written by tech companies who are making these decisions for tech companies, then let's think about how this actually can be done. So you need to think about where the material is coming from, but you also need to be thinking about how decisions are being made about what's ethical. So that's quick on ethics. Like I said, I want to make sure Juan and Dianne and I have a little bit of time to talk about all of these things. And then AI equity, same article, just asking what is equity and equality, and I think it's really interesting that this came up with equality as a big word, because equity and equality are not the same thing. And when I think about AI and equity, it's not just that everybody has access to AI, it's that people are learning how to use AI, understanding AI, critically thinking about AI, in a way that supports them and their community. So think about how we help people in our communities make decisions about AI. It's everywhere. Every day, "The New York Times," the "LA Times," "The Washington Post," your local newspaper has an article about AI, or maybe not every day in your local newspaper, but larger newspapers seem to have it every day. And so it's bombarding us. And how do we think about who in our communities needs help, support, or understanding around how to use AI? So it's not all just, AI is doing this to us; it's, how are we working with our communities to make sure they understand it? And I want to point out fairness.
Because fairness in AI -- fairness is based on perspective. I can say something is fair, and Dianne can say that's not fair, and Juan can say that's not fair. So we can't decide what's fair in AI, but we can help communities -- I know I'm repeating myself, because it's all tied in together -- help communities understand and think about how AI is fair in their community. How is AI fair in the lower socioeconomic community somewhere in your local area? How is AI fair in the higher socioeconomic community in your area? So really parsing it out into different pieces is, I think, a lot easier than thinking about AI, equity, and ethics all at once. So I have one more slide, and then I'm going to skip this one and open it up for what's the role of the library. You have to be proactive and get involved. If you're concerned about ethics and equity, get involved locally. Start having conversations with your community. Start having conversations with your local governments. Start having conversations with your trustees, your friends, whoever. Get involved in having the conversations, understanding what it means for your community, and asking people, is this ethical? What does ethics mean to you in this context? One of the things that I see all the time in these conversations, going back to the time that we automated libraries for the very first time, is that we can be reactive, but we need to be responsive. We need to respond to what's out there and think, okay, I'm seeing this as not equitable; how do I respond to that? How do I work with my community? Bring vested partners and community members into the process. I'm repeating myself, but we can't do it alone. We just can't. We have to have other people who are helping us figure this out. Dianne had her son; Juan and I have each other, as well as other people that we talk to about this. You can't do this alone. Bring curiosity, wonder, bravery, and courage. We have to be brave and courageous. These are some of my favorite words these days when I'm thinking about this, because the unknown is super scary. There's another article that I will give JP the link to about the uncertainty of AI; being in this uncertain time makes everything bigger and scarier, including wondering about ethics and equity. So bring curiosity and wonder: how does that work? Why did it do it that way? Who can I talk to? Be innovative and explorative, and then also be an advocate. Instead of, ugh, AI, this is wrong for equity, start thinking, let's advocate for what's right about it and how it can be used. So with that, I would like to ask -- JP, you're not in on this conversation -- I'd like to ask Juan and Dianne to define, or tell me what you think about, the idea of equity and ethics. What stands out to you? I'll start with you, Juan. What stands out to you when you're thinking about equity and ethics in AI?
>> JUAN RUBIO: Yeah. For me, I think it is who is actually creating the technology. Because if we think about AI, we see that the people who are creating these language models, right, you think about the tech industry, it's probably going to be mostly male, heterosexual, white men, right? So for me it's, who is really, not just at the table in the discussion, but actually creating the training models that are generating all of this information for us?
Because you have the data, but you have to actually program the learning that is happening; you have to train the machines to learn from the data in a certain way. So for me, it's about creating spaces such as this one, and many others, and encouraging young people to go down that path so that they can then join the companies creating AI, or be more cognizant of how AI is being created. That is one of the things for me that is key in terms of thinking about the equity part of AI. So that is about who is deciding what. But I think there is also another important factor: who is asking the questions about AI and equity and ethics? Because we see how there are some efforts at regulation from the government, but then it's sort of self-regulation; the companies that are creating the technology are the ones that are saying, we're going to regulate ourselves. And libraries, I think, have an incredibly important role in bringing this conversation to the surface and saying, okay, let's ask questions about AI. I also approach the technology less from a fearful place and more from a place of questioning: what are the questions that we need to be asking, and how can we bring these questions to the people who need to hear them? Our government officials, our city council, all of those people that are making the decisions about these things. For me, that is really important. So creating spaces, thinking about who is making decisions as creators of the technology, and bringing in other voices are really important too, in my opinion. I also think, and we've read some about the toothless nature of some ethical considerations, it's one thing to say, well, we're going to have this regulation, but a different thing to enforce the regulation and say, okay, this is not happening, so what are going to be the consequences? So those are, I feel, the questions we most definitely should be asking about a technology that is so disruptive, but so full of possibilities at the same time.
>> LINDA BRAUN: All right, Dianne, what do you think about when you think about equity and ethics in AI?
>> DIANNE CONNERY: I think it is biased, and that's accepted at this point. And I noticed in your prompt for that image that is showing that you were intentional by saying, show groups of people of color. So I think putting that kind of thought into it, intentionality, is important. I'm also thinking of the ways it could help the underserved in my community. They've been talking about the uses for resume writing and cover letters, so the whole job search thing; I even heard a discussion about how it would generate a resume that is specific to the job description that's been advertised. So there are ways that we can help people in our community who haven't typically had these advantages; it can upskill them so that they could be on a more level playing field with other people. And better communication is a key part of this. I have talked a lot about time saving, but also better communication. So if you are a person who doesn't speak English very well, and you need to write a letter or an email or something, this can help you communicate better, enhance communication.
So in my community, I mentioned we had the information meeting that four people came to, and that evolved into a learning circle in our community. So it is the people themselves who are having this discussion about what it can do for them. Expedia has started having an AI component, so you could have it create an itinerary for you. So it's the real gamut of things that this is capable of doing. But I really think it could almost be like a promotion for a lot of people; it's helping people gain skills that they didn't previously have.
>> LINDA BRAUN: Dianne, can you talk a little bit about how you connect to people in your community, so that people who might not have the skills, you can help them upskill using AI, or whatever?
>> DIANNE CONNERY: M-hmm. Well, of course it's been a long time building those relationships in the community, and just really turning outward and looking at what it is that people in the community get excited about and care about. We've been able to plant a community garden, and have edible landscaping around the library, and we're a summer meal site, and we feed after-school snacks to the kids. We've got the library of things, which is so in line with what libraries do: it's resource sharing. So people can borrow things they would not want to pay for, because they only need them once in a while, and they don't need to store them. So basically we say yes to everything. We're not a quiet library. We have really cultivated an atmosphere of, this is a place everybody's welcome in, you come to explore. And I will say, in terms of technology, one of the ways we've been extremely fortunate is to have a digital navigator. He meets typically one-on-one with people, and can help them with everything from the basics, how do I save a picture to my phone, that kind of thing, to some of the more advanced things, you know, how do you use the 3D printer, and virtual reality? So building those relationships one-on-one I think is key. And as we go to school board meetings, city council meetings, chamber of commerce meetings, we're everywhere out in the community sharing what we do.
>> LINDA BRAUN: Thanks, Dianne. Juan, people have been asking about information and misinformation in AI. Do you have thoughts about that and how to help library staff think about it?
>> JUAN RUBIO: Yeah. That's a big question. I think libraries have such a great role in understanding information accuracy. And I think it is important to understand that the information that ChatGPT or other platforms give you should not be taken at face value; there is an extra step you need to take to verify that information. And I think the library can then develop that skill in the community, so that information literacy becomes a skill that patrons have, understanding when that information is inaccurate. I used Code Interpreter to try to explore whether ChatGPT would recognize a famous painting, one in the National Gallery that I love, and it tried to do it, and at the end it was giving me the completely wrong name of the painting. So I knew that was the case. It was saying that it was the Child and Mary, and it was very obviously wrong, but I think having that kind of test built in is really important, and knowing when it is that you need to check the information.
And if you ask ChatGPT for an opinion, it will tell you, it will say, oh, I'm not able to provide opinions. But I think you have to be really careful to know when you need to check that information's accuracy.
>> LINDA BRAUN: Right. And Juan, I put in the chat an article from "The New York Times" from a couple of weeks ago, the middle of July, about Wikipedia. It's called Wikipedia's Moment of Truth, and I'm fascinated, because again, in the early days of Wikipedia, we were all like, oh, my gosh, Wikipedia! All the information is wrong, and it's created by humans, and it's going to explode the world! And now people are looking at Wikipedia as a better source of information than something like ChatGPT. And you can now use a plug-in in ChatGPT to search Wikipedia. So there's this crazy thing going on that just blows my mind. It also makes me think, Juan, you've said it several times, Dianne, you've said it several times, this idea that the library has a role in this. There have been some conversations about, is this the end of librarians, is this the end of libraries? Not if we start to think about the role we play in building equitable and ethical AI services and supports for our communities. So I think it's on us to learn how to do that and help our customers with it. You're right, I will not say Wikipedia is the most awesome resource; however, it has become more like, oh, we go there first and we get some ideas, or, oh, I'm just looking to get something started. And so I think it's interesting to think about how, over time, the way we look at all of these things changes and grows. It's always the end of libraries. Which is just -- it's the end of libraries as we've known them, maybe, for sure. What else are you thinking about equity and ethics as you look at this conversation and what we've been talking about here? Dianne or Juan, anything else coming to your mind? I can ask you questions if you want me to.
>> DIANNE CONNERY: I was just thinking, I don't think we've touched on the term hallucinations. What I've heard is that if you ask ChatGPT, or the others I suppose, to write a paper, it will create fake citations. And so you do have to check that information. I think the best way to test it, and to build trust or distrust, is to try it out with some topics you know really well and see what it comes back with. There are some apps now, I understand, and I can't name them off the top of my head, that provide links to the citations, so you can go and check whether or not they're real. Because the last thing we would want to do is to be caught promoting something as if it's true when it is a hallucination.
>> JUAN RUBIO: Yes. And I think, you know, like that exercise that I did -- humbly, I'm going to accept that I know a lot about art -- so I was prompting it and getting this information, and I was able to identify this small hallucination that was happening. Because it was telling me that it was not that painting, but this other painting that I knew, a painting of a man. So for me, those are the kinds of exercises or services that the library can do to really explain to people, or let them realize, how this technology can be used.
And where it is failing, with simple examples like that, or creating services that really prepare people to be alert to the possible pitfalls of using the technology.
>> LINDA BRAUN: As an artist, I want to ask you a question that has come up a bunch of times, so you're getting this because -- I don't know, Dianne, if you're an artist you can answer the question too. But people are concerned about art that is used in AI without the artist having given consent. Can you talk a little bit about that?
>> JUAN RUBIO: Well, I saw someone was saying something about, who created the image? Was it AI or me? I would say I created it, because I led it and I gave the prompts; I created those prompts. And then my ethical, moral compass has to come into play. My core values are going to come into play, because I'm not going to present the image that Midjourney created and say, I created this. But I can present the watercolor that I created with my hands and my paper. That was my creation, and I used reference photos, and like I said before, in art, using reference photos is a long tradition. Everybody uses reference photos in the art world. So I don't see any ethical problems in that respect. But I can see that if I were to present the toad image it created, or a version of it, as my own work, then I am violating some ethical principles there, because I am positioning myself as the creator of something that I didn't create, even when the prompts led it to that place. And that's why I went and created my own watercolor, so I could claim it as my own. So those are ethical considerations, and I think we can help people navigate when you can claim a work of art as your own versus work done by someone else. I think if I were to present the full Midjourney creation, even though it was created from my prompt, from my imagination in a sense, I would give it credit. I would say, this was created with Midjourney. That would be the ethical thing to do. And I think that's where the library, right, can help people understand these ethical spaces, the considerations that you need to make. I also think, when I started playing with DALL-E in the beginning, I was remixing, telling it to do things in the style of this artist. I would say, do this in the style of Salvador Dali, for example, and it will try. So you can also go into that realm and create art that resembles the work of an artist that is well known, and then claim it as that artist's. So those are the ethical considerations that are important to keep in mind.
>> LINDA BRAUN: Thank you. And I have one more -- I know we're coming to the end. Dianne, in the chat someone asked for a book on the history of Sugar Hill Records. It made me think about, you said you had talked to people about ChatGPT not being a search engine. Can you talk a little bit about the difference between AI and a search engine?
>> DIANNE CONNERY: M-hmm. So, yeah, I guess to articulate it, or to try to: it can put ideas together to create something new, with the image generators for sure.
And so, thinking about using it in a lot of different ways, one of the important things is knowing that it's not human, that we are using a tool, so we have to learn how best to use it. One of my funny examples: I mentioned it created an image of a new library building, and because we're on a lake, I then asked it to put nautical touches on the images. And it just plopped a boat into the parking lot. So it is a tool that is learning, and we have to be teaching it what we want it to do. >> LINDA BRAUN: And one of the things I think about in the search engine versus AI conversation is that I don't look at it as, give me an answer to a question, right? Like, who's the director of this library? I'm not asking it that, which a search engine can do really easily. What I'm asking it to do is to help me think. Which probably sounds really creepy, and maybe might be a little scary, but it's helping me think about things in ways I might not otherwise, and then I can use that. But it's not giving me the answer to specific questions. >> DIANNE CONNERY: Brainstorming. >> LINDA BRAUN: Brainstorming, yeah. Yeah. JP, I know we have about five minutes left. Do you see anything we should be attending to? >> JENNIFER PETERSON: Yeah. I'm glad you talked a little bit about information literacy, and that people are talking about critical thinking. As with any new technology coming along, the questions that arise are certainly ones we're curious about, because we are in the business of information literacy and citations, and obviously these are areas where we intersect with our users. I just keep thinking of building the plane while we're flying it. All of these questions and considerations need to be happening at the same time that we're trying to figure out how to explain to folks when they ask, how do I use this? We're in that position again. I see it as an opportunity for us to revisit our role as stewards of information and to have those conversations, so I hope that we can balance the enthusiasm with the critical thinking around this. So I appreciate that you really touched on the full scope of all of that. And obviously we can see from chat that there's lots in the conversation. I did want to note that somebody asked, who are the regional or national organizations that are coordinating around policies, or around how libraries can be more deliberate in how they present themselves within the AI world? So I hope that we can continue -- WebJunction will continue to try to keep this conversation going, but, yeah, I don't know, Linda and Juan and Dianne, if you have any final thoughts around us as a field: how do we work together to continue to be critical thinkers around this? >> LINDA BRAUN: Who's going first? >> JUAN RUBIO: I think we need to create spaces, and create more spaces, to have these types of conversations. I think we should advocate to bring the right questions to the people who have decision-making power, and really be champions of the ethical and equitable use of AI. >> DIANNE CONNERY: And I think we each owe it to our communities to learn about it ourselves. And I think really the only way is just to dig in and start playing with it. And then one thing leads to another, and you may be spending way too much time. But hopefully in the end it will save some time. >> LINDA BRAUN: I can have Juan text you and say, have you tried ChatGPT yet? You should.
Maybe that will spur you on. I agree with what Juan and Dianne said, and I'd just add: really think about the opportunity. I know that someone said they're not excited; I hope now you're maybe a little excited. Because this is an opportunity for us to really help our communities, particularly systemically marginalized communities, to use tools that are going to change the world in some ways. So thank you all for being here. And JP and WebJunction, thank you for having us. >> JENNIFER PETERSON: Absolutely. Thank you, all three of you. And a note that they've shared their emails here with you, so if you do have follow-up questions or want to keep the conversation going, obviously with their interest, please do so. I'm also going to send you to a short survey when you leave, where we collect your feedback; we'll share that with the presenters, so please use it to share your thoughts. And while we are wrapping up, a reminder too that I'll send you all an email later today once the recording is posted, and we'll add all those wonderful links that you brought to the conversation to the event page. Remember that there's a learner guide that can be a tool to help you bring this conversation, or apply this learning, to your own work. And as people are leaving, I'll also invite you -- we had a final question in chat -- to let us know how you would like to move forward with your learning from today as you continue to consider AI in the library, the community, and more. So feel free, if you're heading out, to drop your answer to that into chat. But again, thank you all for being here. Thank you so much again to our presenters for all the work that you put into this, and I'm really excited about all your MAD-inspired Midjourney illustrations. So thank you so much for -- >> LINDA BRAUN: Dianne gave us the MAD idea. The MAD Magazine idea. >> JENNIFER PETERSON: All right. Thank you so much, everyone have a great rest of your week, and we'll see you at our next webinar. Thank you.