So we're so excited to have all of you here to work through this series with us. Also of special note, we have a learner guide for our series that we know many of you are working through, and I'll put that link in chat. Any time you see a link in chat, you can open it. This guide is a companion piece to the series. You can use it alone if you're working through this topic, or bring it to your team to learn together. It's a series of questions and activities, and if you have specific ideas that you'd like to bring to the conversation, know that you can customize it. We've actually learned from -- I'll tell you about that in a second, actually. I also wanted to let you know that we are excited to be collaborating with OCLC Research -- we're actually a part of the OCLC Research team. If you're newer to OCLC, perhaps you didn't know that we do research at OCLC. We have a Research Library Partnership that we collaborate with, which I'll talk a little more about, and we are connected through a global network of 16,000 member libraries and counting. So we're really excited to be bringing you this series in conjunction with that work. We wanted to give you a glimpse of who you are and where you're coming from. This represents the folks who have registered for the series. 39% of you come from public libraries, and a handful of others come from school libraries, consortia, state agencies, and special libraries. Many of you also come from academic libraries -- 46% in all, which breaks down into 22% from research or university libraries, 7% from vo-tech and community college libraries, and 16% from four-year college and university libraries. So we're excited to have this cross-sector collaboration for this session, and welcome to all of you who are here representing those folks. We also wanted to give a special shout-out to the Research Library Partnership, a partnership with research libraries across the country, where folks are working through this series as an interest group. We wanted to highlight one of the groups in that cohort, the National Gallery of Art. Those library folks have been working through the learner guide, collaborating on brainstorming in their own document, and working through a process that considers all of the players: users, potential users, their community, and stakeholders. We're going to talk a little later about the questions you all have been formulating as you work through this assessment process, and here's a peek at the questions they are noodling on right now, before they begin the process we'll talk about in a moment. They are considering whether they still need a reference desk at the National Gallery of Art, why several departments at NGA meet their information needs internally instead of using the library, and they're especially interested in how the catalog is being used -- and how it may be underused or used ineffectively or incorrectly. So thank you to those folks. I know many of you on today's session are gathered from the Research Library Partnership cohort, so congratulations to you all on your outstanding cohort learning. I'm going to go ahead now and get our recording started and introduce you to my colleague Lynn Silipigni Connaway, who will talk us through our introduction. Welcome to you, Lynn. Let me make you our presenter. >> Hello, everyone.
Thank you for joining us. I just wanted to give you a little background on some of the things we do at OCLC Research. And -- >> We are learning our new environment here, so thanks for your patience. Do you see the little bar -- there you go. >> I was trying it on the laptop; it didn't seem to like it. Anyway, this is a great opening for user studies and user interfaces. I just wanted to talk a little bit about what we do at OCLC Research, and one of the areas that I've been very involved in -- and now lead -- is the user studies area. And I just wanted to say that this is a part of that: assessment is all user-centered. So it's very important to think about those individuals who are using, or those potential users of, our offerings. And then some of the principles for assessment -- these are things that we talked about in the previous webinar, but it's always good to reiterate and to think about what we're doing now and where we are in this whole assessment plan. Number one, as I said, center on users. Assess the changes that we make. So if we're making some type of change, maybe to an information literacy program or to a children's program, then we can also test that, evaluate it, assess it. We need to remember to build on what the library has already done and what we as librarians already know. And I think that's something we forget. We know a lot, but we really need to think about that and sometimes write it down. Document it: this is what I know about the individuals in this community, whether they use the library or not. Then break it down: this is what I know about those individuals in the community who use the library. We need to use a variety of methods to corroborate all of our conclusions. We should always choose a small number of outcomes. We as librarians are always overachievers -- we want to do it all -- and we need to step back and take incremental steps. And this, meaning assessment, should be a continuous process. It's not something that you finish and wipe your hands of and say, we did that, okay, this is what we know, move on. No. It's continuous, it's iterative, and we should every day be thinking about it and working on some type of assessment, whether it's formal or informal. So I highlighted what we're doing today, what we're talking about. Because the last time we met, we talked about the why -- what do we want to know? Identify the problem, the purpose. The who -- identify the team. Who is going to work on this? And I hate to say this, but there's that saying, misery loves company. Assessment should not be misery, but sometimes it can be overwhelming and intimidating, and it's wonderful to have collaborators with you. And now we're on the how. How are we going to collect this information and actually look at the data? Collect the data and analyze the data. And that's what we're going to talk about today with Linda. One of the things that we've been told -- and I've heard back from some of you, as well as from the Research Library Partnership interest group on assessment -- is that individuals have been having a hard time actually coming up with questions. And you know, that is always one of the hardest things to do. I remember as a doctoral student, I couldn't come up with a question. It's not that we can't come up with questions; it's that we're too broad, and we have too many things going on in our questions.
And again, pick it apart one piece at a time. So I took this from the research methods book for library and information science that Marie Radford and I coauthored. These are actual examples of projects. This was a problem statement: the problem to be resolved by this study is whether the frequency of library use of first-year undergraduate students given course-integrated information literacy instruction is different from the frequency of library use of first-year undergraduate students not given course-integrated information literacy instruction. So that's the problem, and it's even difficult to read the problem. That is a very complex problem. So then we break it down into subproblems, or your questions. First: what is the frequency of library use of the first-year undergraduate students who did receive course-integrated information literacy instruction? Then the second question: what is the frequency of library use of the first-year undergraduate students who did not receive course-integrated information literacy instruction? And then the final question: what is the difference in the frequency of library use between the two groups of undergraduate students? So we took that very complex problem and broke it down into three manageable questions or subproblems. Now, we asked you to come to this webinar prepared to share your assessment project questions. So at this time I invite you to share them with us in chat, if you feel comfortable doing that, and in a moment Jennifer and I will select a few examples to discuss. We know that others would really like to hear what you're doing, so please, if you have some questions that you brought, now is the time to share them with us. I haven't seen any yet, so I'm going to move to the next slide, and I'll keep watching the chat so maybe we can discuss some of the questions. Now, this is also from the research methods book that I coauthored with Marie Radford, but this is advice from Danuta Nitecki, the dean of libraries at Drexel University. So she is in a library environment every day, and she gave these tips, which I think are great. The techniques to conduct an effective assessment evaluation are learnable -- we all can learn this. Always start with the problem, then the questions. So that's what you need to do: you start with that problem and then you get down to the questions, or the subproblems, like we just did. And then she says, consult the literature. Participate in webinars, like you're doing today. Attend conferences. Learn what is already known about the evaluation problem. And then she says, take the plunge and just do an assessment evaluation and learn from the experience. The next one will be easier and better. So don't worry about making mistakes -- that's why we call it research. No matter what happens, we've learned something. And as long as we share that, so that others can repeat what works and won't repeat what doesn't, it's okay. And make the assessment evaluation a part of your job. Don't say, oh, it's so much more work. And then plan the process and share your results. Now, I see that we do have a couple of questions that have come up. >> So exciting to see! I've got some for you. There are actually a couple specific to spaces, which is always interesting. Do students use the library-provided technology in our group study spaces? Are students using the study spaces in our newly renovated learning commons?
What emerging services do users need that we are currently not providing? How do our engagement and/or outreach activities impact those we serve? What factors most influence the research development of students as juniors and seniors? And let's see, one more: what is the impact of first-year library instruction on usage of academic journals as revealed in works cited lists? So there are some good ones. >> There are. Well, I'm going to start with the one, what emerging services do our users need that we are currently not providing? And with that I would say, number one, you need to look at some information you already have. Remember, start with what you know, and identify what services, what offerings, individuals are using now. So identify that, and then you'll see those that they're not using. Think about those -- they may be things you might want to take off the table. So you've created sort of this picture of the user. And then from there, I would start to break it down into smaller subproblems. So what services are you thinking about? You might want to jot those down. Are there some services -- say you wanted to have satellite services, or, most of us do have mobile services, but still, things like that. Then I would identify a specific group of individuals. Who are you going to go after to get this information? Is it the people who use your library a lot, the ones you know? Or is it those individuals who maybe come in sometimes, or individuals who don't come in at all? Or is it a combination? You need to determine who you want to talk to next, and who you want to learn from. And then once you do that, you can probably talk to individuals. You won't have to do hundreds, but you could interview individuals and have semi-structured interviews. And you might not want to say, what services do you want? Remember, Kara was saying, don't make the individuals do our work. What you might try to learn is how they're getting their information, what services they're using outside of the library. Is there a way the library can complement these? Or, if we need to, do we need to replicate these? These are things that you have to think about with a problem like that. I'm not going to go through all of them, as I'm watching the time. Jennifer, should we move on, do you think, and come back to questions like these if we have time at the end? >> Yeah, maybe so. I guess there's one -- just because I'm curious how you would respond to this one, because it's kind of a big one -- why don't students use the library? >> Oh, my goodness. That is a big one. What kind of use, number one, are we talking about? And I think that's something you need to identify as well. Is it the physical space? Is it the programs, the events? Is it things like your virtual reference services, your information literacy classes? What exactly are we talking about? So first of all, I would break it down: what does that mean, library services? Break that down. I think you know some of this. You know what they're using and what they're not using. And this goes back to that whole issue with space and other things, where you need to be thinking about, number one, what are the individuals doing outside of the library? I think we forget about that. We're so focused on what is going on inside -- how are people using space outside of the library?
With this question: how are individuals getting their information, where are they spending their time? Those types of things. And then I would ask, again, what user group, what type of individuals are you looking for? You need to break that down so that you have your sort of sample of individuals who you want to hear from. And then I think you need to get your subproblems from that. So again, what do you know? Identify that; then you'll see what you don't know, and the services will be outlined. And then, what types of individuals -- users, potential users -- are not using the library? Then decide who you're going to talk to. And what type of environment is this? If it's the academic environment, are you looking at the different levels in the academy: the undergraduates, the graduates, the faculty, the staff? If it's in a public library, are you looking at the life stages of individuals? Are you looking at women of a certain age, are you looking at the children, are you looking at seniors? These are all things that have to come into play as well. So again, you're breaking it down. But start out with what you know, and then you'll see what you don't know, and then you can start to break this down. Again, is it space? Is it resources, or different services, offerings, events? Another question, Jennifer, or do you think we should move on? >> I think that's good. I think we can move on. People, please feel free to keep sharing, and if other folks have ideas on how to refine questions, don't hesitate to respond as well. This, again, as I said earlier, is a way for us to be efficient with your time. So thanks to those of you who have chimed in. >> If we have time we'll come back to some others, and it would be good to have Linda chime in at that time as well. So I'm just going to end with this -- I should go back to this slide. There's also a Hanging Together blog post that talks about assessment and that you are not alone, and the URL is included on this slide. And then the next one is from Neil Young: rust never sleeps. This is also a blog post that I wrote. And you know, as I say, rust never sleeps not just for rockers, but also for libraries. What that means is that libraries need to constantly be evolving and changing. And the questions you're asking about why students aren't using the library, or what type of space people want -- those are all questions that are part of this evolving nature of individuals. So we really need to constantly be asking not just those what questions, but the why questions, those how questions, and be constantly thinking and moving forward. And so now I'm going to turn it over to Linda, who is the director of the Library Research Service at the Colorado State Library. Linda is very accomplished in data collection and analysis, and I'm really anxious to hear what she has to help us with today. Linda? >> Yeah. Thank you so much, Lynn and Jennifer and the whole WebJunction team, for giving me the opportunity to participate in this series. I'm excited to join everybody today. I also want to thank everyone so far who has shared their research questions. And like Jennifer has already said, we encourage you to keep sharing them. It's always helpful to see examples of research questions, and also to perhaps connect with folks who may be pursuing similar research as you are.
So please keep sharing. Now, I want to acknowledge at the outset, and this might be at the risk of stating the obvious, but we're diving into a big topic today. These methods and data analysis are something we could spend years in graduate school learning about. So I do want to establish some appropriate expectations for what's going to happen during the rest of our time together today. We are going to be focusing on the why versus the how. As we talk a bit about various methods that we could use to work on finding answers to our research questions, we're going to focus on considering why we might select a particular method, as opposed to how we would actually go about implementing that method. And as we get into the data and analysis piece, I will share a little bit of the how in terms of some more general tips for analysis, no matter what type of data we're collecting. But again, our focus today is going to stay more on the why. I do want to acknowledge that there are a couple -- more than a couple, but a couple -- important parts of the how, or the process, that we're not going to be touching on. Those include research ethics: as you go about preparing to collect data, if you are collecting it from people, research ethics issues include getting their consent to participate, protecting their privacy, and other concerns. Another big part of the how, or of the process for collecting data, is how you're going to pull your sample. How will you select your participants? Or, if you're doing some type of content analysis, how do you select your content? We don't have time to get into either of those topics today, but I want to just highlight here that they are very important parts of the process. The screen shot that you see of Research Methods in Library and Information Science -- that's Lynn's book; she mentioned it earlier -- is a good resource for those topics, as are the resources in the learner guide. So I encourage you to check those out afterwards. So, what method do I use, now that I've determined my research question? As we approach this, we tend to think about it in terms of, will I use a quantitative method or a qualitative method to answer my research question? And I want to highlight the purposes of and some key distinctions between these two types of approaches. You can see on the table we're starting with the purpose for each. Qualitative methods -- interviews, focus groups -- help us to really dive deeply into a topic so that we can understand the how and the why. In contrast, quantitative methods, such as surveys or experiments, help us to understand more of the what, or how many, or to what extent. So let me give you an example. Let's say that we're looking at college students who do use library reference services versus those who don't. If we're conducting a survey about that, that might help us to establish some of the characteristics of these two groups. Are these groups different in terms of their major, or perhaps whether they had an information literacy session during the semester? Characteristics such as that, a survey could help us to parse out. If we took a qualitative approach, an interview or a focus group might help us to understand why certain students are using reference services or are not using reference services -- maybe what barriers are in place, or what facilitates them using reference services.
Maybe we'd look a bit at their perceptions of working with a reference librarian. So that's what we could perhaps get out of qualitative work for pursuing that type of question. In terms of samples -- and I apologize, you might hear sirens; there's a lot of excitement going on outside of my window this afternoon. But getting back to our samples. Qualitative samples tend to be much smaller, and in terms of selecting them, typically researchers use purposive samples, selecting a sample based on a certain characteristic -- for example, maybe first-year students who don't use reference services. Quantitative samples tend to be larger. These also could be purposive, meaning researchers select them based on some characteristic, such as not using reference services, or perhaps whoever happens to come into the library that day. But they could also be random, depending on the sampling method. In terms of the types of data collected, with qualitative methods, data can be words, or images, or objects. With quantitative methods, we're looking at numbers. So that means differences in terms of analysis. With qualitative data, we tend to be looking for themes or patterns across the data, whereas with quantitative methods we're using statistics to analyze. And then finally, in terms of our results, what are we getting out of these two approaches? Qualitative methods can yield very richly descriptive findings, really exploring a topic in depth; these, however, aren't generalizable to a larger population. Whereas with quantitative research, our results are numeric, and depending on our sampling methods, they can in some circumstances be generalized to a larger population. So that's qualitative and quantitative approaches in a nutshell. I do want to talk about one other distinction that we can consider in terms of what method we might use to collect data to answer our research question, and that's considering whether the method relies on self-report, or whether it gets beyond self-report. By self-report I mean that we're asking participants to describe an experience, versus more direct methods such as observation or demonstration. So let me start by talking a bit about self-report methods. There are a variety of research methods; I'm just choosing to highlight several today that are common in library research. In terms of self-report methods, the three that I'm going to highlight are interviews and focus groups, which are both qualitative methods, as well as surveys, a quantitative method. I think these are terms that we're familiar with, but just to make sure that we're using them the same way: by interviews, I mean a one-on-one experience, where a researcher is discussing with a participant some experience or some topic. In a focus group, we're with a small group, perhaps about eight to 12 people, with a facilitator, again having a conversation about a specific topic. And then of course surveys, which we're all bombarded with every day: the idea of completing a questionnaire about some type of experience. So let's think about why we might choose one of these methods over another if we are considering using a self-report method. With interviews, again, these really allow for doing a true deep dive into an individual's experience.
You can really investigate a subject in detail. There's a lot of flexibility with the interview: you're asking open-ended questions, and because this is a one-on-one live situation, the researcher has the ability to ask follow-up questions if they need clarification, or to probe a little bit. This of course is time intensive for both the participant and the researcher, but it really helps to answer those questions of how and why. Focus groups share several characteristics in common with interviews, but a few characteristics distinguish them. A big one is that as a researcher you now have a small group that you're working with, and you can really play off of that to allow the group to brainstorm. They can add to each other's thoughts, and they might provide varied perspectives. This is a bit quicker than interviews for gathering multiple opinions, but still with a qualitative approach. Moving then into surveys -- so now we're on to our quantitative self-report method -- this is certainly the most efficient self-report method for gathering a lot of different opinions or perceptions from a variety of respondents. It can be statistically representative, meaning we can generalize to a bigger population, depending on the sampling methods. And with surveys, most of the questions that we are asking people to respond to are closed-ended, meaning there's a set choice of responses for them to choose from. One thing to keep in mind is that if you are designing a survey and you find that a lot of your questions are open-ended, that's a good sign that an interview or a focus group might serve you better, because at this point you potentially don't know enough about user responses to the topic to be able to create closed-ended questions. So those are just some characteristics to consider if you're thinking about using one of these self-report methods. Now let's get beyond the self-report. We're going to think about three different data collection methods that fall into this category: content analysis, observation, and then demonstration. Wendy asked a good question about whether focus groups could be used to inform the development of a survey. Absolutely. Often focus groups or interviews can be a good first step in terms of getting a better feel for the range of responses that your users might have about a particular topic, and then you can use those to shape a survey. Sometimes it can work in reverse as well, meaning that say we do a survey and we get some unexpected responses -- something that we're surprised at. Then interviews or focus groups might be a useful follow-up tool to try to parse out, okay, why did we get these answers that we didn't expect? So moving back into these methods where we get beyond the self-report, let me start by talking about content analysis. The idea behind content analysis is that we are examining some type of existing content. You're seeing a website on screen because I want to share an example from a study that we've conducted in my organization, where we did a content analysis of library websites. We had a coding instrument -- you're seeing just a snippet of it here -- and we were analyzing these websites to look for different features, such as whether the library has an email newsletter, or a blog, features such as that. Another thing one might do a content analysis of in libraries is any type of text. So, for example, chat logs from virtual reference would be a potential type of data to do content analysis with.
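[To make that concrete, here is a minimal sketch of tallying the results of a website content analysis once the coding is done, assuming the coded data has been entered into a simple table. The feature names and values are hypothetical, invented for illustration; the webinar showed only a snippet of the actual coding instrument.]

```python
# A hypothetical tally of a website content analysis.
# Each row is one library website; 1 = feature present, 0 = absent.
import pandas as pd

coded = pd.DataFrame([
    {"library": "A", "email_newsletter": 1, "blog": 0},
    {"library": "B", "email_newsletter": 1, "blog": 1},
    {"library": "C", "email_newsletter": 0, "blog": 0},
    {"library": "D", "email_newsletter": 1, "blog": 1},
])

# The mean of a 0/1 column is the share of sites with that feature.
print(coded[["email_newsletter", "blog"]].mean())
# email_newsletter    0.75
# blog                0.50
```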
A second method that gets beyond self-report is observation. There is a public library in Colorado, where I work, that recently did an evaluation of their early literacy programs, and one of the methods they used was observation. They had trained observers who went into their early literacy programs with rubrics, and they were looking for certain behaviors to occur during these sessions. They were looking at whether the parents and caregivers, for example, learned certain behaviors that encourage early literacy development, such as playing with your hands, and they were also looking for certain levels of interaction between the parents and caregivers and the child. So with observation we're getting beyond just participant self-report and actually seeing their behaviors. And here's an example of a rubric. This is not from that early literacy study; this is something that we've used in our organization when we were looking at social emotional learning. But the idea is that you would have a form with systematic categories to evaluate different behaviors or other observable events. And then a third method that gets us beyond the self-report is demonstration. This is an effective method if you are trying to assess some type of learning outcome. A common example: in academic library information literacy sessions, perhaps students will be given an assignment in order to determine whether they learned from that information literacy session. And here's an example of a rubric that our organization has used to evaluate information literacy assignments. This just helps to determine whether students learned the concepts that were taught in that session and were able to apply them within the assignment. Another example of using demonstration -- it doesn't have to be as formal as an assignment or a test. Let's say in a public library we are offering a basic tech skills class for seniors, and one of the skills that we have worked on during that class is how to text. At the end, we could give participants a survey asking them whether they learned how to text. Or we could actually get them to text. For example, we could use text polling software, ask some fun questions as the poll questions, and have them respond via text. Or they could practice texting each other. And if they are able to do that, we know that they have actually learned that behavior. That's the beauty of all of these methods that get beyond the self-report: we're not relying on someone to self-report accurately about their experience. In this example, maybe a senior, for social desirability reasons, or because they're embarrassed, would say on a survey that they did learn to text, even though they didn't. Demonstration gets beyond that; it can help us to get to a more authentic assessment of an outcome. So again, thinking about reasons why we may choose one of these methods. With content analysis, the advantages are that it's unobtrusive -- we're not bothering our users in order to conduct this type of study -- and we're relying on available data instead of generating new data. It certainly can be time consuming for the researchers who are involved, and it's also very dependent on how good a coding instrument you're able to develop, in terms of it being able to be used consistently by multiple coders. If your coders are all interpreting your categories differently, then you're not going to end up with very accurate results.
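[One simple way to check that consistency is to have two coders code the same subset of items and measure how often they agree. The sketch below computes plain percent agreement; that particular statistic is my illustration rather than something named in the webinar (more formal measures such as Cohen's kappa also exist), and the labels are hypothetical.]

```python
# Hypothetical inter-coder consistency check: percent agreement.
# Each list holds one coder's category labels for the same eight items.
coder_1 = ["blog", "newsletter", "blog", "events", "blog", "events", "newsletter", "blog"]
coder_2 = ["blog", "newsletter", "events", "events", "blog", "events", "newsletter", "blog"]

matches = sum(a == b for a, b in zip(coder_1, coder_2))
print(f"Agreement: {matches / len(coder_1):.0%}")
# Agreement: 88% -- low agreement is a sign the category definitions need work.
```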
Observation shares a similar challenge, in that it can certainly be subject to observer bias; it can be subjective. So again, this is reliant on having a strong data collection instrument and strong training for your observers, so that they are interpreting behaviors consistently. But of course an advantage is that this does allow you to study a real-life situation, and it can provide context for various experiences or behaviors. With demonstration, an advantage of using this method is that if you are assessing learning, it is more authentic compared with self-reports. A concern with it is that participants may feel like they're being tested, so if we are going to use the demonstration method, we need to be careful to design experiences that don't feel like a test to people. So I want to throw out a scenario. We've thought now about these quantitative and qualitative methods, as well as self-report versus going beyond the self-report. So let's look at this scenario and think for a minute about what method or methods we might use to study it. A public library received a grant to redesign the teen space in their main building. Currently the building has two spaces for teens, separated by a wall: a YA book collection and a teen computer room. But the rooms are small, and the only place to sit is at the computer workstations. Library staff want to make the area more engaging and are considering adding a makerspace area, but they're unsure what teens in their community want. So thinking through this scenario and the methods we just talked about, what data collection method might you choose to use to address this research problem? I'll invite you to share in chat the method that you might use. Great. So we have someone who would do focus groups. Tanya, or others who might think of using focus groups: why would you choose focus groups as a method for this? And Geena has a good question about those computers -- are they only used by the teens? Are they generally in use? Wendy suggests two methods: doing a survey with local high school students and following up with focus groups. And thanks, Tanya, for answering my question about why we might use focus groups. She suggested it's a more social situation that teens feel comfortable in. And, yeah, I think that's a good consideration -- if we are going to engage in discussion with them, making them feel comfortable by being around their peers. I like the idea, thanks, Sarah, of asking students to draw an ideal space. I see we also have folks suggesting observations and interviews. And really, I don't think there's a right or wrong answer for this scenario. It just depends on what you're honing in on, what your research question is, and what information you want to get out of it. So just in the interest of time, I'm going to keep us moving. We're going to turn now to data analysis. If you remember the slide I shared earlier, I want to hone in on a couple characteristics of quantitative and qualitative approaches to research. When we think about analysis now, remember that if we're working with qualitative methods, we've collected data such as words -- if we did an interview or focus group, we might have transcripts of those -- or images, or objects. Whereas with quantitative, we're working with numbers.
And so, again, our analysis approaches will be different. With qualitative data we will be looking for themes or patterns across the data, whereas with quantitative data collection we'll be looking at statistics. I want to share three data analysis tips that apply whether we're working with quantitative or qualitative data. The first one is that your data analysis plan should guide the design of your data collection instrument. If you attended the first webinar, you may remember that Kara talked about beginning with the end in mind. She was talking about the entire research process, but this is certainly true when it comes to your data analysis plan. You need to figure out what you want to analyze, and what you're going to need to get to your answer, in order to design your data collection instrument. Let me give you a very basic example of that. Let's say that we're doing an evaluation of our summer learning program, and we want to look at how outcomes may vary depending on the participant's age. Then we need to make sure that we ask a question about age in our survey, so that we'll have that data to analyze afterwards. So tip number one: think about your data analysis plan at the beginning, and that will help you to determine what you should include in your data collection instrument. The second tip is that it's very important to clean your data. This applies, again, whether we're working with quantitative or qualitative data. A couple things to keep in mind in regards to data cleaning: one, if you're doing any type of manual data entry -- that might mean you're using paper surveys and having someone enter those into a computer, or you have recordings of an interview or focus group that you're transcribing -- you want to check the accuracy of that work. When we have paper surveys, we put an I.D. on each survey, and after they've been entered, we take a random sample of 10% of those paper surveys and compare them to what was entered, just to make sure that the data entry was done accurately. You would want to do something similar if you were working with transcripts. Another component of cleaning your data -- and this is now thinking about quantitative data, about numbers -- is that you want to look for inconsistencies. By that I mean outliers. For example, going back to age: if you have someone who reported that they were 99 in your summer learning survey, you need to think about whether that is possible for the group you were surveying, if everybody else's age ranged between 5 and 10. Another way to look for inconsistencies is to examine trends, if you're collecting the same data over time. I collect the public library annual report data for all of the public libraries in Colorado, and if I have a library that, for example, reports a 300% increase in circulation this year compared with last year, that's a sign that probably something was entered incorrectly. So you need to check your data for these inconsistencies and fix them before proceeding with analysis. Otherwise, your analysis is going to be contaminated.
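[Here is a minimal sketch of what those two inconsistency checks might look like in practice, assuming the survey and annual report data live in simple tables. The numbers and column names are hypothetical; the webinar described the checks themselves but not any particular tool.]

```python
# Hypothetical data-cleaning checks before analysis.
import pandas as pd

# Outlier check: flag ages outside the expected participant range of 5-10.
survey = pd.DataFrame({"survey_id": [1, 2, 3, 4], "age": [7, 5, 99, 8]})
print(survey[(survey["age"] < 5) | (survey["age"] > 10)])
# survey_id 3, age 99 -- go back to the paper survey and verify or correct it.

# Trend check: flag year-over-year circulation changes greater than 100%.
annual = pd.DataFrame({
    "library": ["X", "Y"],
    "circ_last_year": [12000, 8000],
    "circ_this_year": [12500, 32000],
})
change = (annual["circ_this_year"] - annual["circ_last_year"]) / annual["circ_last_year"]
print(annual[change.abs() > 1.0])
# Library Y's 300% jump is a likely data entry error worth confirming.
```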
And then the third tip is that documentation is critical. Throughout your analysis process, you need to have a systematic approach to documenting what you're doing, and a key component of this is what is typically referred to as either the code book or the data dictionary for your study. You're seeing just a snippet from that website content analysis that I talked about earlier. How we set this code book up is, for every variable, or for every question we ask, we have a name, which you can see in the left column. In the center column there's either a definition, or it shows the question that that name is referring to from our data collection instrument. And then in the third column, you can see, if you're, for example, taking text and changing it into numbers in your data entry process -- we change yeses and nos into ones and twos -- you want to have that identified in your code book. That way, as we go through our research process, we can refer back to this information to make sure that we're analyzing accurately. This is also critical, obviously, if more than one researcher is working on a project, as well as so that people can replicate it in the future. So again, tip number three is that it's very important to document your process. Now, we are rapidly running out of time, so I am going to skip ahead to a couple final examples. I want to talk about the beauty of crosstabs with both quantitative and qualitative data, so I'm going to skip a couple things just to get to those examples. Thinking about some basic data analysis work that we can do: crosstabs fall into that category, but they can provide a little more information than simply looking at overall frequencies or percentages. Let me share an example of that. Let's say that we have administered a summer learning survey to parents, and we asked them whether their children experienced several different outcomes, which you can see on the screen. Here we're looking, across all of our respondents to the survey, at what percentage of them indicated that their kids experienced these three outcomes: enjoyment of reading, whether their reading skills increased, and whether they're reading by choice -- choosing to read when they have other options available to them. Did these increase after participating in summer learning? You can see that for these three outcomes, about half of all of our survey respondents said yes. But what if we break out the answers to this question by other characteristics of the participants? Let me show you what I mean by that; this is the idea of crosstabs. What if I look at families who are participating in summer learning for the first time? You can see that then the percentage of respondents reporting these outcomes increases. Also, what if we look at the answers by age? Again, we can see that families who were reporting about kids between the ages of 4 and 6 -- that preschool range -- were more likely to report these outcomes than the overall sample. So crosstabs help us to really parse out our data and see more of the context for various responses. This is also a good example of why we want to plan out our analysis and then let that guide the creation of our data collection instrument, because if I hadn't asked in this survey whether the respondent was participating in summer learning for the first time, as well as the ages of their children, then I wouldn't be able to look at their answers in this way.
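[To illustrate, here is a minimal sketch of producing that kind of crosstab from survey data, including recoding text responses to numbers the way a code book specifies. The column names and responses are hypothetical stand-ins for the summer learning survey described above.]

```python
# Hypothetical crosstab: an outcome broken out by a respondent characteristic.
import pandas as pd

responses = pd.DataFrame({
    "first_time":     ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
    "enjoys_reading": ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"],
})

# Recode per the code book: yeses become 1s, nos become 2s.
codebook = {"yes": 1, "no": 2}
print(responses.replace(codebook))

# Row percentages: within each first-time group, the share reporting the outcome.
print(pd.crosstab(responses["first_time"], responses["enjoys_reading"], normalize="index"))
```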
I want to share an example of how this can be done with qualitative data as well. In this example, interviews were conducted in a library with staff about how they wanted to be appreciated for their work. The analysis of the interview transcripts revealed four categories into which staff members' preferences fell: getting an informal thank you, a financial reward, formal recognition from the director, and then a celebration -- a party or something like that. Again, we can look at these in terms of frequencies, so you can see here the number of responses that tied into each of these categories. And I broke out celebration a little bit more: people who were pro-celebration, those who were kind of meh, and those who did not want any type of celebration. But, again, what if we crosstabbed those responses to get a little more context? In this instance we'll look at position: did position affect how people answered? And you can see here that there were some patterns. Some people were fine with the informal thank you, whereas those more in the middle -- the librarians -- were more interested in actually getting a financial reward as a form of appreciation. And then you can see that for celebration and formal recognition, there weren't many distinguishing patterns. But that's an example of how, as we analyze, we can get a little bit more context and meaning for our results. We are at the top of the hour, so I want to encourage you, as you think about data collection methods and an analysis plan for your research question, to consider what we talked about today regarding choosing a method and then developing an analysis. And as you work through the learner guide, it will lead you to more resources to embark on this journey. And so, Jennifer, I will turn it back over to you. >> Fantastic. Thank you so much, Linda. It's really great to dive a little bit deeper into this process. A reminder: you can certainly continue to mull this over and bring your questions in October. There's also contact information for both Linda and Lynn on this slide, and I know that they would be excited to hear from you. You can also use the series' data for impact hashtag on Twitter, and we can find you there as well. And be sure, if you're not yet registered, to join us on October 3rd. In the meantime, remember to spend a little bit of time with your learner guide. If you haven't looked at it yet, there's an excellent resource section at the end of the guide that we encourage you to look at specifically, to get additional ideas as well. I will send you all an email later today once the recording is available, and it will have a link to the event page where all of the great resources will be collected -- I'll put that in chat right now. I'll also ask that you take a little bit of time as you leave the environment today: we'll send you to a short survey, and we'd love to get your feedback on today's session. We'll share that with our presenters as well. A special thanks to Lynn and Linda for bringing your great work to today's session and to the series, and thank you all for your contributions to chat. There was some great back and forth going on with additional ideas, so you can take a look at the chat as well once we've got that posted. So, excellent. And thanks to our captioner today and to WJSupport. You all have an excellent day, and we'll see you in October.