We're going to go ahead and get started. I'm going to pass the ball on over to Ixchel Faniel. Welcome and thank you to all our presenters. >> Hello. So, thanks, JP. I just shared my slide deck, so hopefully everyone can see it. >> We're not quite seeing it shared yet. >> Okay, let's see. Let me try that again. Let's see. How about now? >> Yep. >> Perfect, thank you. So, hello and welcome. Thank you for joining our webinar, "What's Format Got to Do with It?" We're all excited to talk about the role information formats play in evaluating search results. I want to encourage you, if you're so inclined, to introduce yourself. We'll be introducing ourselves momentarily. As JP said, feel free to use the chat not only to ask us questions but also to share and learn from each other as we proceed through the presentation. It would be great to share your challenges, tips, and practices used when helping users evaluate and choose information resources. I believe this is actually the first time we've teamed with WebJunction to present research from this project, so I want to thank them for hosting us. That also means that this project and hearing about it may be new for some, so thank you, welcome, and we're glad you joined us to hear what we've been up to. I want to start by introducing the presenters. We have Brittany Brannon, our research support specialist at OCLC; Tara Cataldo, biological sciences librarian at the University of Florida; Robin Kear, liaison librarian for English at the University of Pittsburgh; and as JP said, I'm Ixchel Faniel, a senior research scientist at OCLC. So, what I thought I'd start with today: I'm going to be talking about some of the findings from this project, Researching Students' Information Choices: Determining Identity and Judging Credibility in Digital Spaces. Brittany, Tara, and Robin will react to those findings, given their perspectives and experiences. We'll ask you some questions and get your reactions, as well as save time to address any questions you have. But I thought I'd start by telling you just a bit more about the project, which was funded by IMLS. I encourage you to go and look at the website once this webinar is over. We have a great team supporting us, and it would be great if you'd take a look and see who else is working with us on this project. So, a key concept from this study is a resource's container -- whether a resource is, for instance, a blog or a book or a website, et cetera. It also served as a motivation for the study, and this quote is one of the reasons why. These authors talk about the Google generation, and they describe that generation as format agnostic -- having little interest in the container that provides the context and wrapping around information that would help in information evaluation. Tara and our colleague Amy Buhler also did an earlier study in the area that motivated this work. So, we had a couple of key questions that we posed. The first is, how do science, technology, engineering, and math (STEM) students, ranging from fourth grade to graduate school, determine the credibility of online information resources? The second is, do these students differentiate among different types of online information resources? We were particularly interested in students' point-of-selection behavior. By that, we mean the moment a user determines a piece of information potentially meets a research need. We were also interested in what cues the students used from web search results. 
So, what they were paying attention to on the screen as they were making their choices and selections. How student characteristics and experiences influenced their behavior was also of interest. So, we started this project with a pre-screen survey to collect data to recruit students into the six cohorts you see here, from fourth grade to grad school. The data were primarily demographic but also included questions about internet use and librarian help. We then designed and developed a task-based simulation exercise to examine students' point-of-selection behavior in real time. We used the think-aloud protocol to observe students' cognition in action. Essentially what that means is we asked them to tell us what they were thinking as they completed their assigned tasks. During a pre-simulation interview, students were asked questions about their research and technology experiences and practices. During a post-simulation interview, we asked additional questions, such as their confidence in selecting online information, whether they cared about the information they gathered, and whether it was important to know the container. So at the start of the simulation, students were given an age-appropriate prompt for a school-related research project on the Burmese python and its impact on the Everglades. You're seeing the welcome screen here for the simulation, designed to capture participants' decisions along with the think-aloud. I'm going to start a one-minute demonstration that shows the simulation we used. There's no sound. I'm going to talk over the video and describe what students were experiencing. They started by watching a video, a news clip about the Burmese python. Then they entered a search query, which provides a set of results. During the helpful task, we asked students to select resources from a list that they found most helpful. They could actually click into these resources and explore them further. Next, for the cite task, we displayed the resources students chose as helpful and we asked, would you put these in your reference list? Then for the not helpful task, we displayed the resources students didn't pick, and we asked them to point out things that made these resources not helpful. For the credibility task, we displayed the resources students chose as helpful, and we asked how credible they found these resources on a scale of one, not credible, to five, highly credible. And lastly, for the container task, we displayed a subset of resources from the cohort's original search results and asked them to select the container that they thought best described each item. They could choose from a set of eight -- a blog, book, conference proceeding, journal, magazine, news, preprint, or website. So, today we're going to focus on results from four of the tasks -- helpful and citable choices, credibility evaluation, and container identification. But first, I'd like to have Brittany talk a bit more about containers and other formats available to the students during the simulation. >> Thank you, Ixchel. My name is Brittany Brannon. I'm excited to be here with all of you today. So, what exactly is a container? In this project, we're defining containers as recognizable ways of collecting and presenting texts of particular genres for publication. So, you can think here about a scholarly journal, a magazine, or a newspaper as a collection of articles, or a book as a collection of chapters, or a blog as a collection of posts. 
When we think about containers, both of these activities, collecting and presenting, are important. So, collecting includes how texts are sought, chosen, and grouped, and presenting includes how those collected texts are packaged and shared. So for example, we can think about a magazine and a scholarly journal. They use very different criteria for selecting the articles that they're going to publish, but they also present those articles in very different ways. So, magazines tend to prioritize visual elements such as full-color photographs and advertisements, and they're often printed on kind of a glossy paper that's best suited to those full-color visuals. Scholarly journals, on the other hand, are going to really prioritize the written word. They're going to prioritize text and visual elements that don't really rely on color in the same way. So, they're likely to be printed on matte paper in black and white. So, as library workers, you probably tend to work most often with information resources that are published in the traditional sense of having gone through a publisher. When we're talking about publishing, we're talking about it more broadly, in the sense of information that's being shared formally or in public. The internet has made it a lot easier for people to share in this way without having to go through a publisher. So, blogs and websites are really great examples of this. Anyone can create a blog, and they can use it to share posts without needing to get approval or feedback from any external person or company. I also said containers deal with texts of certain genres. Depending on your background, genre could mean many, many different things. So, we want to clarify what we mean when we use this term. So, we're defining genres using rhetorical genre theory, which focuses on genres of nonfictional prose. And they're defined as typified rhetorical actions based in recurrent situations. So, let's break that down a little bit. One of the clearest examples of this type of genre is the résumé, which is a response to the situation of wanting to get a new job. Wanting to get a new job is a rhetorical situation, meaning that it can be addressed through the use of communication, and it's also a recurrent situation, which means that it repeats often enough and in a way that's similar enough that people can recognize it as a shared experience. So, recognizing a situation as a shared experience lets people use strategies that they know have been successful for other people in similar situations in the past. As more and more people emulate those strategies, those strategies start to become the normal and expected way to address a particular situation. This is what we mean when we say that an action is typified. Now, if you aren't used to thinking rhetorically, it can be a little bit odd to think about a résumé as an action. After all, a résumé is just a document. It contains information. So, what does it mean to say that a résumé is a rhetorical action? So, first and foremost, it means that we're talking about communicative actions, right, or communication as a social act. So, if you look at the graphic here, those three boxes on the right-hand side, what you'll see is that what constitutes a genre includes the substance of the communication, the form of the communication, and through the combination of the substance and form, the action that it allows a communicator to take in a social situation. 
So, if we think about the résumé, the form includes the headings and the bulleted lists and the positioning of the contact information on the page, and the substance includes the work and educational experiences that are described in those headings and lists. In providing that particular type of content in that particular form, the job seeker is acting. They are trying to persuade the hiring manager that they're a qualified candidate for the open position. I think that's really the crucial thing. It's not just about sharing information. It's about taking an action and trying to influence that other person. One thing that can be helpful, especially because I think it's easy to maybe confuse or conflate genres and containers with one another, is to think about kind of the unit of analysis that we're paying attention to when we pay attention to these two different things. So, for genres, the unit of analysis is the document, if we're thinking more in LIS terms, or the text, in rhetorical terms. We can think of these as the smallest meaningful units for our purpose. You can subdivide them, right. So, for example, you could break an article down into paragraphs, but the component parts aren't really meaningful on their own. A single paragraph is never meant to stand on its own. It's meant to be part of that bigger unit. By contrast, the unit of analysis for containers is the publication. Containers may comprise several documents of one or more genres that can all stand independently. A newspaper, for example, contains texts of many different genres, and each of those is a complete text that can be read independently of the others in the newspaper. So, for example, you don't have to read all of the classifieds to understand the front-page story. You can see that in the ways that newspapers tend to be aggregated into databases as well. They're often aggregated at the article level rather than at the issue level. So, I know that's very abstract, and it's a lot to process, especially if you're new to this approach to thinking about information resources. So, I think it can be helpful to have some specific or concrete examples of what we're talking about when we talk about these different elements of format. So, you can see here again, these are the eight containers that Ixchel discussed earlier, which we used in the simulation activity and also in our code book that was used to analyze our qualitative data. It was important to us to keep these containers distinct from other elements of format, including genres, which we've just been talking about, but also modes and file types. You can see examples of all three of those things here. Those were all included in our code book as well. Now, what I really want you to see in these lists is that all four of these elements of format are found simultaneously in a single information resource. You can think, for example, of a PDF of a journal article that contains text and images. So, we're going to focus on containers primarily today, but it is important to remember that people don't deal with any of these elements of format in isolation. And I think that can really contribute sometimes to the confusion around these things, because many format types do tend to appear together. For example, the EPUB file type tends to be used for books, and books are almost always made up of chapters, and they tend to use primarily text or the written word as the modality. 
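To make those four elements concrete, here is a minimal sketch in Python -- the record and its labels are hypothetical illustrations, not the project's actual code book -- of how a single resource carries a container, a genre, a mode, and a file type all at once:

from dataclasses import dataclass

@dataclass
class Resource:
    title: str
    container: str   # unit of publication, e.g., "journal", "magazine", "blog"
    genre: str       # type of text, e.g., "research article", "press release"
    mode: str        # modality, e.g., "text", "image", "video"
    file_type: str   # delivery format, e.g., "PDF", "HTML", "EPUB"

# The eight containers offered in the simulation's container task
CONTAINERS = {"blog", "book", "conference proceeding", "journal",
              "magazine", "news", "preprint", "website"}

# A hypothetical record: one resource, all four format elements at once
example = Resource(
    title="Invasive pythons alter ecosystems in the Everglades",
    container="journal",
    genre="research article",
    mode="text",
    file_type="PDF",
)

assert example.container in CONTAINERS  # the container is only one of the four elements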
So, when we're looking at them as a package, it's easy to kind of condense or collapse all of those things or even to use them as synonyms for one another. But giving people a better vocabulary and helping them to kind of understand how these elements of format are different can, I think, help to alleviate some of that confusion. So, I'm going to pass it back to Ixchel to present our first finding. >> Thanks, Brittany. So, this first finding I'm sharing is based on data collected during the post-simulation interview. So, after everyone was done, students were asked, do you think it's important to know whether the online information is from a book, a journal, a blog, et cetera? So, is the container important? As you can see here, you'll see the six cohorts. They're listed at the bottom of this chart, starting with elementary school students on the left and moving up through graduate school students on the right. You'll see pairs of bars. The blue bar on the left represents those who think it's important to know the container. The orange bar on the right represents those who do not. So, we can see a steady incline from fourth grade to graduate school in students believing it's important to know the container. If we look all the way to the right, all of the graduate students and almost all of the undergraduate and community college students thought it was important. Looking at those who didn't think it was important, we can see a steady decline, right, from fourth grade to grad school. So, with this, I'm going to turn to Tara, and given her experiences and perspectives, I'm going to get her reaction to this. >> Thank you, Ixchel. And hi, everybody. I'm so excited to be talking with you today. So, right off the bat, I was really almost relieved to see the students thought the container was important, that it does play a role in their evaluation of online information. So, you know, we can kind of say based on that data that they're not necessarily format agnostic, but we do know they're format confused. So, this chart is those same 175 students all lumped together. What it's showing is how well those students did on that container task, the final task they did, where they gave the resources one of the eight containers. Not one of those 175 got 100% on this. As you can see from the chart, most of them landed in the middle. About 46% got around 50% to 60% of the containers in their simulation correct. So, you know, we do know that there is definite confusion. When they were doing the simulation and doing this task, we often saw them making remarks like, wow, this shouldn't be this hard. This is also something we were seeing in practice. Some of the observations that I and other practitioners on the study were seeing -- the ones that led to the impetus for the RSIC study -- were things like this: I had a student come to me and ask for help citing a journal article. They needed some help because it, quote/unquote, looks weird. It looks weird because it's not a journal article. I might have to break the news: I'm sorry, this is a book chapter, or this is a government report. And boy, did that create a lot of frustration. And as the study progressed, we coined a term that we called container collapse to try to kind of identify this phenomenon and nail it down. Container collapse, to us, is when the physical characteristics and cues of our print resources that used to help us identify them are now obscured. 
They're hard to discern in the online environment, and it's creating this new phenomenon and frustration in the evaluation of resources. So, I want to show a few examples to illustrate. Here's the first one, where this is a journal article and a book chapter on the same platform, and they look very similar. There is not a lot popping out at you to tell you which one is which. You really kind of have to dig into it and look for certain keywords and cues to see what's going on there. So, no surprise that there's some confusion there. The next example is of a newspaper article and a blog post on the same platform. Again, very similar layout and look. One of the only distinguishing marks is that one has an ad on it. Really, when I dug into both of these, the only thing distinguishing them was that the URL of the blog post does say blog in it. We'll talk a little bit later about whether that's helpful or not. And it's not just the publishing world, folks. It's us, too. This is my library catalog. It is showing a search result for one of the bibles of medicine, "Harrison's Principles of Internal Medicine." We have it in our catalog labeled as both an e-book and a journal. So really, no wonder there's confusion out there. But I'm going to stop there with that, and let's see what Robin has to say about what she's observing. >> Thank you, Tara. I'm so glad that everyone was able to join us today. Welcome. So, yes, Tara, I see the same thing in the humanities. Students also create the wrong citation because they misidentify containers, similar to what you've seen in STEM. I also see this container collapse. I also see that humanities students misidentify a book review container as a journal article, and they often don't understand the place of the book review in scholarship, how valuable that is. And I often see the humanities students, undergrads especially, misidentify book chapters as journal articles too. And there are many factors at play here. Students do not understand the rhetorical differences between containers and genres that Brittany has explained. The concept is not explained to them as rhetorical. The distinction is not obvious. They are confusing one for the other, and they tend to make these incorrect citations, especially when they are not using a citation generator or manager and they misidentify one container for another. So, I would also like to share an example that is from a recent class that I taught. I was searching for one of the topics in the class in the MLA International Bibliography. So this is a student searching within a database who finds what I can recognize as a book chapter. They find the result, but they see it here in MLA, in the upper image, as a book article, number one. That's one cue there. The other cue is structural to our link resolver between our databases. The language that we have for that link resolver is "check article availability." So, no matter what the item is, it will say "check article," which can be confusing. So, if the student actually clicks through there, the check article availability, they're taken into our discovery layer, into what we call PittCat. This one also can be confusing as to what the item is. There are some clues there. It doesn't state a format exactly, but the image sort of looks like a book cover, and you can see that there are ISBNs, and students often know what ISBNs are because they're always looking for their textbooks. And it does say hardback. So that's another clue. 
Still, there's really no -- you know, it can be confusing. And I can see how students are confused. So, they have the content, but you know, they're not sure what the container is. And now I'd like to turn it to Brittany for her thoughts. >> Thank you, Robin. Those are really great points, and I think this isn't just, as you pointed out, a problem with the systems that our users are interacting with. It's also a problem with how they're taught about information resources. So, I think in part, students in these higher cohorts care more about the container of their information resources because they've learned over time that those containers affect their success in school. But that doesn't always mean that they understand what the containers are or why they should matter. So, if we think about journals, they're a valued information source in higher education for many reasons, including their use of peer review and the expertise of the people writing. But I don't think students are often introduced to journals or have journals explained to them in this way. Often I think all they hear is that journals equal good grades. And that really doesn't help them to become better information evaluators. And it definitely doesn't help them once they leave school and those journals aren't available to them, and maybe aren't the best resource for their information needs at that point. So, I think teaching students to think critically about containers and the role that containers play in shaping information resources means asking students to interrogate how and why information resources are created and what effect those processes might have on the relevance and credibility of an information resource to a particular need. I think that kind of critical thinking is transferable outside of the classroom to more everyday life situations. So, now we'd like to hear from you. You can head over to Poll Everywhere through the link in the chat. Let us know, how have you observed container collapse or container confusion in your work? So, maybe you've never encountered this and it's an entirely new concept. Maybe you encountered it when you worked with your users, whether those are students or other community members. Maybe you've encountered it when working with library systems, such as databases, discovery layers, or catalogs. Or maybe you've encountered it in both places. So, head on over to Poll Everywhere and let us know. We'd also love to hear if you're doing or have done anything to address this issue. Pop that into the chat. All right, and I am seeing a couple people typing letters into the chat. You're going to want to click on the link that the WebJunction webinars account posted. That will add your response to the slide right now. As JP pointed out in the chat, you can also leave that window open in your browser. There will be more poll questions, and they'll load automatically for you as we go. Awesome. So, I'll give you another minute or so, but it looks like, pretty overwhelmingly, we're seeing people have encountered this both in their systems and when working with users. So, I think that tells me that this is a pretty pervasive issue. We have a good chunk of people who have never really encountered this issue. We're super happy to have you here. We hope you'll find some things helpful to think about and to learn in this webinar. I'm also seeing a few things come up in chat about things people have done to address this issue. So, Bruce has shared a handout about internal evidence in files. 
Stephen is talking about some usability testing and user research. Meg is talking about how different publication cycles suit different containers to different research needs. So, we love to hear what you're doing in the chat, and we'd love for you to be able to learn from each other in that way as well. All right, so I think I'm going to go ahead and pass it back to Ixchel to introduce our next finding. >> So, this next finding is actually taken from an article that was recently published by one of our colleagues on the project, Chris Cyr. So, I want to thank him for all his hard work on this one. What we did was examine the impact of various demographic, cue, and behavior variables on the percentage of containers that students correctly identified. We used an ordinary regression analysis. So, today we're sharing results from the fourth of four models that we ran. You can find the rest in the paper as well. So, this model includes all of the variables. A little bit about this graphic that I'm showing you: the coefficients for each of these variables can be read by looking at the far left-hand side, that axis with the numbers on it. Above the line are positive coefficients. Below the line are negative ones. These numbers represent the predicted impact of a one-unit increase in the variable on the percentage of containers correctly identified. So, let's break that down a little bit for each of these areas. We see three demographic variables. If you look to the far left of the screen, they're the first three bars on the left, labeled cohort, confidence, and parent with a Bachelor's degree. All of these have an asterisk on them. That means all three of these variables were significant. They're also above zero, right, that zero line. So, they're also positive predictors of a student's ability to correctly identify containers. So, in other words, students in higher grade levels, those with more confidence selecting online information for research projects, and those with a parent with a Bachelor's degree will all have a higher percentage of correctly identified containers. If we look at the next set of six bars, these are the cue variables. So, these were things that the students attended to or talked about as they were making their choices. The first is labeled genre. The last is labeled Google results snippet. We can see genre and source were the most useful for correctly identifying container type. Source is the organization that either hosted or published the information. So, students who paid attention to genre and source identified more containers correctly. In contrast, we see that paying attention to what the content is about or what it means, labeled about, decreases a student's ability to correctly identify containers. Visual appearance, URL, and Google results snippet had no impact on the percentage of containers a student correctly identified. Lastly, we can see the behavior variables on the far right. These are the last three bars. The total simulation duration is the amount of time students spent on the simulation. This had a slight negative impact on the percentage of containers they correctly identified. Resource clicks, which is the number of times a student clicked on a resource, was also statistically significant. It had a positive impact on the percentage of containers students identified. The number of different container labels students used, called different containers in the slide, was not statistically significant. 
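Read as an equation, the model just described has roughly this linear form -- a sketch only, with illustrative variable names rather than the paper's exact ones:

\[
\mathrm{PctCorrect}_i = \beta_0 + \beta_1\,\mathrm{Cohort}_i + \beta_2\,\mathrm{Confidence}_i + \beta_3\,\mathrm{ParentBA}_i + \beta_4\,\mathrm{GenreCue}_i + \cdots + \beta_{12}\,\mathrm{ResourceClicks}_i + \varepsilon_i
\]

Each coefficient is the predicted change in the percentage of correctly identified containers for a one-unit increase in that variable, holding the others constant; bars above the zero line in the chart correspond to positive coefficients, and bars below it to negative ones.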
So, the number of different container labels a student used didn't impact the percentage of containers they correctly identified. So, with that, I'm going to turn to Brittany and ask her to share some of her insights given these findings. >> Thanks, Ixchel. So, what instantly jumps out to me here are the cues, both in terms of what's helpful and what's not helpful. So, I think if you think about containers as units of publication, then it makes sense that paying attention to the source of the information resource, often the publisher, would help with this, right? We have a lot of associations, right -- that, for example, "The New York Times" publishes news -- and that can help us make those determinations. Similarly, if you think about the relationship of containers to certain types of genres, it makes sense that paying attention to the genre of an information resource would help, right. If you know something is a chapter, it's a lot easier to figure out it came from a book. I think it also makes sense that paying attention to aboutness hurts people's ability to identify containers. So, absolutely, the substance of a resource, what it's about, is important in understanding genres, but when you're thinking about genres and containers, it's kind of the least important part of this triumvirate we have going on here. So, in other words, what this really means is we need to ask people to think beyond just what the resource is telling them about and really start focusing on purpose and process -- why and how the resource got to them. That's really going to help them start thinking more rhetorically, but it's also going to help them start thinking about this sort of genre and container identification. So, Tara, I'm curious to hear what you think about these findings. >> Thank you, Brittany. Actually, I have some comments on the genre there. Before I get to that, I do want to point out that when I demonstrated that blog post versus the newspaper article and talked about the only thing distinguishing them being blog in the URL -- here we found that the URL was not really helping them identify containers correctly. It was just too buried there. But genre made a lot of sense to me in terms of their natural thought process. We're so prone to going, okay, that's an article, that's a post, that's a chapter. Then that leads to better identification. Recognizing those -- as Brittany said, they're often related to a container -- may make the next move up easier and help them identify the container. That makes a lot of sense to me. I want to take the opportunity on the next slide to show another practical example of why I understand where all this confusion is coming from. I work a lot with citation management software. In my practice and in the study, students were very frustrated and sometimes even embarrassed when they were struggling with container identification. I like to say, don't worry about it. Even computers and software are getting it wrong, too, and there's confusion. This is a great illustration. These are seven different citation management software packages. Don't worry about not being able to see the text. The colors are more important. I have EasyBib in there, RefWorks. The listings under them are the different reference types you can make citations for in each. Some have a few, some have a whole lot. Where you see the same color, that's a reference type shared across the software packages. 
Then where you see white, those were types unique to that software. And these are a mix of containers and genres and modes and file types and combinations of those. It is a crazy mixed bag. You know, we don't have our own standardization or our own vocabulary here. So, really, there's no wonder there's confusion going on. And while it's a great illustration to say, you know, citation managers have trouble too, look at all this, I do find using a citation manager a great teaching moment for helping to take students through the task of identifying their resources properly. The way to do that with any of these is that when you ingest a resource of any type, wherever it's coming from, the software always tries to identify it for you. It doesn't take more than a minute to find an example where the software got it wrong. So, we grabbed this newspaper article off the web, and Zotero is telling us it's a report. It's a great teaching moment there to say, okay, we didn't think that was right, but let's go back and look at it. Let's look at the cues. Make sure that we then give it the right container so we're getting our citation style right. Oh, and I'll point out again -- I know you probably can't read the tiny type. Of these seven, they only had two types in common among all of them: the journal article and the report. So, I have lots to say on this, but I've got to stop myself, and we'll see what Robin has to say. >> Thanks, Tara, for that great graphic and the work to make it, showing how many reference types there are in all those citation managers. I'd like to return to the results so you can see them again for a moment, and I want to talk about aboutness as the negative predictor. So, thinking about this negative predictor for correctly identifying the container led me to start thinking about information literacy. And discerning aboutness is, I think, one of the hardest information literacy thresholds for students to pass. For me, this seemed to be related to these ACRL Framework dispositions, including resist the tendency to equate format with the underlying creation process, and realize that information sources vary greatly in content and format and have varying relevance and value. To me, it was no wonder that the aboutness distracted from the container for the students. And in my practice, this ties back to information analysis that breaks down in digital environments, where the format is not discernible for the students. The containers confuse them, and it's difficult to figure out the substance of the content, the aboutness, especially for undergraduates. While aboutness might not relate to identifying the container correctly in the study, the skill is needed for analysis across the many platforms that our students use, from their peer networks to the library platforms. And for me, this also relates to a journal article fixation, which we've talked about a little bit -- a fixation on finding only the journal article container. We commonly see students -- I do -- finding only journal articles. In the chat, someone mentioned asking for a journal when they meant a journal article. The container is also sometimes substituted for credibility analysis, and I find it common for these undergraduates to receive assignments that ask for three to five citations, all articles. Hence, that's when we see them saying they only need journal articles. They don't understand the containers the journal articles are in, and they can't distinguish between issue and volume, as someone else pointed out in the chat. 
That was easier to do in a print environment; it's not easy to do in a digital environment. And again, there's the substitute for credibility analysis. So, instead of having students analyze the container and the content, or use information rubrics with the students, or have the help of a librarian, for some faculty, the container has presumed credibility. So, now Ixchel has another Poll Everywhere question for us. >> Great, thanks, Robin. So yeah, if everyone could go to Poll Everywhere, they should see this graphic. What we'd like you to do is choose which of these findings is most surprising to you. And you can make your selections in Poll Everywhere by clicking on one of these bars. And what we'll see on our screen here are pins as you begin to choose from Poll Everywhere. So, I can see some pins showing up for aboutness. These were folks that were surprised at the negative impact of a particular variable. Others may be surprised that some had no effect. And those are the ones without the asterisk. Others may be surprised that some things had a positive effect. So, we're seeing lots of pins showing up. I appreciate you. We'll hang out a little bit more to see where people land here. So, once you choose your finding by clicking on the bar, what we'd like you to do is use the chat and share with others why a particular finding surprised you. I'm seeing a lot of pins on aboutness right now, to the point where I can no longer read the word. So, I think that's a big surprise to people in terms of predicting -- in terms of identifying a container correctly. And I'll ask my colleagues to share anything from the chat that they'd like to share, if there's something coming up. I'm seeing several on URL, too. That was also a surprise for some. >> I did want to address one question from early on because I think it will give context about the students. >> Sure. >> Stephen was asking whether the students who participated even knew what a journal was. I did want to say that for all the students who participated, we gave no definitions really of anything except credibility. So, they all came in with whatever they knew up until that educational stage. Certainly, with the fourth and fifth graders, you would see some who saw journal and associated that with a diary. So yeah, it was very interesting there. Thank you for that question. >> Ixchel, some of the comments coming through: we assume a relationship between reading comprehension and container awareness; resource clicks, because students start to click instead of thinking when they aren't getting the answers they need. Somebody asked, does the aboutness indication change over time? And aboutness is a low cognitive hurdle, but a lot of students are focused on content and memorization as the main elements of learning. >> Thanks for sharing those, JP. We might be able to circle around to some more questions and comments around these towards the end, but I'd also like to move forward to the third finding. Here we're going to look at the various container labels students applied to a single resource. This goes back to something Tara said earlier in terms of students being format confused. So, the resource here is an article in a magazine called "Science News" that was hosted by JSTOR. Here we see high school, community college, undergraduate, and graduate students. These were the only cohorts who received this resource. Across the top are the various containers they could select. 
The numbers within each box represent the number of students within each cohort who chose each container. So, we can see, looking across students, every container was selected at least once, with the exception of conference proceeding. Students chose journal and news the most, and that was followed by magazine, website, and book. So with this, I'm going to turn to Brittany to see what she has to share here. >> Thanks, Ixchel. I think this is actually really interesting because what it illustrates is how confusing containers can be when information resources are regularly picked up and repackaged in various ways across a variety of different information systems. So, in this case, the text is encountered in JSTOR, which is perhaps best known for aggregating journals. We can see the effects of that. But I really want to point out that this phenomenon isn't limited to academic resources; on the open web, I think this is even more common. So, social media does the exact same kind of thing, and it's a much more common information source for most people than an academic database would be. So for example, we can think about the same "Wall Street Journal" article that might get shared as a story on the "Wall Street Journal" Snapchat. It might get posted by your friend on Facebook. Or an excerpt of it, a quote, might get tweeted by an influential media personality. So when an information resource is shared in ways that make it difficult or even impossible to connect it back to its original source, identifying the container can be really, really challenging. Then of course understanding the container is even further complicated by the other types of information resources that people tend to encounter on that platform. So if you get most of your news on Twitter, you might be more likely to think information on Twitter is news, and that might help you identify the container of that article. But if, for example, you're used to other kinds of content on Twitter, you may be less likely to think that's news, and that could actually mislead you. So, when users conflate certain systems with certain types of content, as we've seen here in the JSTOR example, it makes that kind of container identification difficult because they don't think that they have to think about it. And I think when we start talking about the open web in particular, we also have to think about the fact that evaluation needs to include not just how and why information resources were created but also how and why they're being shared in that moment. So Robin, I know that JSTOR is an incredibly popular resource in the humanities. I'm curious to hear what your thoughts are on this. >> Thanks, Brittany. Yes, JSTOR is definitely well known by the time students are undergraduates. I know that JSTOR is commonly used in high schools, and public libraries will often have JSTOR as well. So undergraduates are very familiar with it. They consider it sort of gold-standard credible, in the humanities especially, no matter what the container or the genre is that shows up in JSTOR. It is mostly journals and journal articles, but there are different kinds of genres mixed into the journals. They also assume it's in the journal container. Even graduate students. So, in this case, the credibility of the platform is, I find, equated with the quality of what's in the containers. And Tara will take us further with this idea. >> It's so interesting. Before this study, I never thought about the linking of an entire interface in someone's mind with only one container. 
But it wasn't just JSTOR. We kept seeing it over and over again. So many of the students thought Springer only published journals. So many of them had textbooks from Wiley, so they thought Wiley must only produce books. It didn't help that it's called the Wiley Online Library, because library means books and all that. But even Gale resources -- the higher education students, many of them would think that was associated with K-12 and move along, not even digging any deeper, just assuming the content was probably below the education level they were looking for. I'm like, I spend a lot of money on Gale databases in higher education. So, it was a really interesting thing. At first, I was asking publishers, do you even care whether people can distinguish your stuff on your platform? And they do care, especially when talking to their authors. They want them to be able to distinguish their content, and they are definitely starting to work more on trying to help us out on their websites. But there's definitely more to be done. But let's stop there, and let's do another little Poll Everywhere. I want you to think about the systems you work with. In the public library, what's your discovery system? What aggregators are you working with in your school library? Whatever systems you work with the most, think about those and respond to this question in Poll Everywhere. Which cues do you find the most misleading when trying to identify containers? Some of the cues we've already mentioned: genre, source, Google snippet, URL. But there are so many. Anything you would attend to, like author information, titles, references. Oh, so many. Anything you could be looking at. Even the ranking order on a page. What are some cues that you think are misleading? Pop them in, and we'll see if anything pops to the top here. Database resource. A lot are evenly matched. Google, yes. Google is creating confusion. I love seeing periodical here. Periodical versus serial, especially in our catalogs, is definitely confusing. The database -- what is a database? You know, the catalog is a database. And all these other ones. Again, a lot of this comes down to vocabulary, and are we introducing that vocabulary or just assuming they pick it up somewhere along the line? Database is rising to the top there. Lots of periodicals. Most of these have come along. Icons, that's one I haven't really talked about yet. We're seeing more and more of that. That could be confusing if the icon doesn't match what you're looking at. >> I saw layers had a handful of responses too. >> Yeah, Tara, I saw Ferberizing. >> Yes, I hadn't thought of that either. That is interesting. Very good. Oh, I'm going to be very pleased to capture this word cloud. But we better move on, unless there are any questions in the chat we should address right now. >> I think just a couple of things to sort of clarify. Kate had asked, isn't Science News a magazine? Yes, Kate, it absolutely is. I think that's part of what was so interesting about this. It was a magazine. It had the name of a different container, news, in the actual title of the periodical. Then it was in JSTOR, which most people associate with journals. So, I think there was a lot of potential for confusion in the different cues that were there. And then, from the previous finding, Dara had asked about total simulation duration, saying she was surprised that people didn't learn more based on what they were being shown during the research, since usually there is a learning curve within a simulation, and she expected this to be higher. 
I kind of really agree with you on that. I really thought that how long they spent as they worked in the simulation was going to make it easier for them. I assumed it would mean they spent more time looking at each resource and that they would have been better at this. We're not exactly sure why that's the case yet. Hopefully that's something that we'll figure out as we continue to dig through some of this data. But I do think that one thing that might be at play there is age. So, you know, the different age groups, we noticed, spend different amounts of time. That might have been cancelling out the significance of that finding. >> Okay. So, let's turn it back to Ixchel to go over the last finding. >> So, for this last finding, we'll look at students' helpfulness, citability, and credibility judgments for three resources that cast the same information in different containers. So, in this first slide, we're showing the helpfulness and citability judgments across four student cohorts, grouped into three areas, one for each resource. So, if we look at that first grouping of four bars from left to right, they're in the order of high school students first, followed by community college students, undergraduate students, and graduate students last. Underneath the bars for each of these four cohorts, we can see the resources that students are judging. So on the far left, that first group of four bars, they're judging an article in "Time" magazine. In that middle set of four bars, they're judging a press release in the journal "Nature." And in that third set of four bars, they're judging a research article in the Proceedings of the Royal Society B -- I'll call this RSPB. The research article in RSPB is the original source used to create the article in "Time" magazine and the press release in "Nature." So, if you look at these bars, each can have up to two different shades. We see a lighter shade, if there is one, that tends to reach beyond that darker or more solid shade. So, within each cohort, that lighter shade represents the percentage of students who thought a resource was helpful. So for instance, across all four cohorts, the research article in RSPB to the far right is shown to be the most helpful, followed by the article in "Time" magazine and the press release in "Nature." So, we're looking at the height of the bars in terms of helpfulness. Within each cohort, there's also this darker or solid shade. That represents the percentage of students who thought a resource that was helpful was also citable. So, the important thing to look at here is the extent to which that bar is filled with the solid shade. So, if we look at the last bar on the far right for the graduate students specifically, we see all graduate students who thought the RSPB article was helpful also thought it was citable. Looking across all four cohorts, the article in RSPB is also considered the most citable, right. Those four bars are the most filled, if we look. So, now if we turn our attention and look at "Time" and "Nature," we can see that higher percentages of graduate, undergraduate, and high school students who thought the "Nature" press release was helpful also thought it was citable, compared to students in those same cohorts who thought the "Time" article was helpful. 
In other words, lower percentages of graduate students, undergraduates, and high school students who thought the "Time" article was helpful also thought it was citable. So, this second slide shows the credibility judgments across those four student cohorts. High school students are the square. Community college students, the diamond. Undergraduates, the circle. And then graduate students are the triangle. If we start with that RSPB article at the far right, we see that it has a high average credibility rating. If you look to the left, for graduate students, that average credibility rating for the press release in "Nature" is about the same as it is for RSPB -- if we compare grad students to grad students, so the triangles. For high school students, community college students, and undergraduates, we start to see a slight decline, right, in the mean when we look at RSPB versus the "Nature" press release, which is to the left of it. And lastly, we can see the mean credibility rating for the article in "Time." That average credibility rating for the "Time" magazine article is lower than for the other two resources for all cohorts except community college students. For community college students, that average credibility rating is about the same as for the press release in "Nature." So, this last slide just combines the two charts together in preparation to hear from some of the others. We'll start with Brittany. >> Thanks, Ixchel. I'm actually going to split the findings back apart. I think for me, these are a really good illustration of the role genres and containers play in setting people's expectations and interpretations of the information they're using. So, if we isolate just the helpful judgments, as I have on this slide, we can see that the two articles, one from a container that's a journal and one from a container that's a magazine, are picked as helpful much more often than the press release, which is in the journal container. So, what this suggests is that genre may be an important factor when students are deciding which resources are helpful to them. That kind of makes sense. So, if you think about this particular example, articles are usually written for more explanatory purposes. So, they tend to be a little longer. They have more information. They may be more detailed. Whereas press releases really are written more for promotional purposes. So, they may talk about the same topic, but they're really going to present it a little bit differently. When we isolate citability, seen on this slide inside those black boxes, we can see that pattern changes. So, students who selected the resource as helpful were more likely to say that the resources in the journal containers, the two on the right-hand side of the screen, were citable. You can see that in that much smaller light-colored portion at the top of the bars in those two clusters. Fewer were willing to cite the resource in the magazine container, which is the cluster of bars on the left. You can see that in that much larger light section on the top of those bars. So, even though students thought that the article in the magazine was helpful, they really weren't willing to cite it. And that pattern is really repeated when we look at credibility. So, students rated the credibility of the resources in the journal container, the two on the right-hand side, more highly than the resource in the magazine container, the one on the left-hand side. 
I think what this suggests, at least preliminarily, is that genre may be a more important factor when students are determining helpfulness, but container may be a more important factor when they're determining whether they would cite something and how credible they think it is. So, our study deliberately separated out these three judgments, helpful, citable, and credible. But I'm willing to bet that that's really not how most people approach the evaluation of online information in their daily lives. Most people probably stop at helpful. I know I definitely do for many of my information needs. So, one way to address this is that we can help people by having them start with containers that are more credible to begin with. Then, when they find something that's helpful, credibility is already less of a concern. And libraries really already do a pretty good job of this. We spend a lot of time curating resources that are more reliable for our users. What I think is really more important here, though, is helping people learn how to incorporate credibility evaluation into their everyday interactions with information so that it becomes almost second nature to them. And I think what this finding suggests is that container identification may be a really important part of that. Tara, I'm curious to hear what you think about these findings. >> So much to talk about here -- three different pieces of the information life cycle. But I'm going to zero in on those credibility ratings of "Time." I think there's something interesting going on here. As you go up the education levels, the credibility rating of "Time" goes down. Then you get to the grad students, and it gets a bump up. What's happening there? What the graph can't show is the think-alouds that were going on, them talking about what they're rating here. We saw with many of the graduate students that -- you know, they're more experienced -- they were digging a little deeper. They took note in their simulation that the "Time" magazine article linked to that Royal Society journal article, linked to the primary study. And they gave "Time" points for that. I thought that was very interesting. In all the evaluation models that we put out there, I've never seen one that talked about bread crumbs and, you know, changing the credibility of something because it leads you to something better. I see this in practice, too. Almost all higher education students from the study, but graduate students in particular, are staunch, staunch defenders of Wikipedia. What they like is, they know anybody can edit it, but they like a good Wikipedia article that has lots of references that lead them to the primary sources. So, since it's not something we're teaching, and it's definitely something that's evolved as they've been in the online environment so long, is this becoming more one of those typified actions that Brittany was mentioning? Because it's had repeated success. So, let me stop there. Let's see what Robin has to say. >> Thanks, Tara. I'd like to return to this slide, back to the container equaling credibility. I think that came out in our last finding here. Again, in my practice, there's a strong faculty desire for credible sources. Of course, they want to see the best kinds of things being found by their students. This is expressed to me as, help my students find credible sources. Yes, I will try. And sometimes this is focused on the container as credibility. 
I find that this can be incorporated into instruction in library resources. However, this quickly breaks down on the open web. And let me explain a little bit more about what I mean by that. So, if the container is presumed to have credibility, it removes the onus from the students to analyze that credibility. I understand that. This is a way for faculty to simplify work. After all, our universities and libraries -- all of our libraries, public, school -- pay a lot of money for subscription resources that are curated in various ways. We want our students to have the best information. Of course, we as librarians and libraries teach these resources that are the very best in different disciplines. However, as the boundaries of containers and genres and formats are erased in the digital world, the container as credibility breaks down as soon as a student or young person leaves the bubble, you know, the university library discovery layer or website. This is quickly insufficient when students are finding information from peers in their social networks or on the open web. Yes, of course students can certainly tell the difference between a TikTok video and a research article in a journal, but they're still constantly analyzing the information that they're presented with. And there's one more way that I see this confusion around containers, and it's in the public dissemination of research. This is confusion when popular news is synopsized from research. This is similar to what Tara said about research bread crumbs. This can be complicated container confusion. Which should a student use and cite? I've actually been asked to go into classes before to talk about popular news sources and then work students through the process of finding the research that the popular news source came from. You know, you're finding the non-expert information in the popular news, right -- the newspaper and popular magazine containers. They're synopsizing that complex research article, which is the genre, in the journal, which is our container. And students will cite the newspaper article as the source, not conceptualizing that the information is also in another container. So, they have confusion over which to use and cite. So, this relates to our last question for you, our last Poll Everywhere question. Other than scholarly journals, what is the most common container that you send your users to for credible information? And you can select the one that most likely speaks to your environment, your library environment. And you can also tell us in the chat if there are specific sources that you direct people to for different kinds of information. I think book is by far winning in the poll, even in the early part of the race here. Someone in the chat has said all of them. I direct students in practitioner-based programs to news and magazines. Reference books, yes. Books, newspapers. Yeah, it depends on what they're looking at. Government documents, newspapers. Someone talks about websites. They work in a digital environment with digital collections only, it looks like. Government document credibility shifting, ah. Shifting over time, not seen as credible as it once was, perhaps. A book or a website, depending on how up to date the physical material is, yes. And of course right now in our pandemic world, whether they can access that print material at this time. Yes, accessibility. I think accessibility in different ways can help you decide what container, you know, if you don't have access to scholarly journals. 
And I find that for referrals to books, it does help that many reference materials have been digitized. I try to compare that to the use of Wikipedia, in that this is background and you can use it to find other information. So, that's a type of book that's been turned into yet another digital container for students to get confused over, but finding that background content is important. Well, thank you for those answers. It seems like the book is winning, and that's encouraging for us. So, thank you for the participation. That was our last finding and our last Poll Everywhere question. Thank you for all of your contributions today and your participation in the chat and the Poll Everywhere. Most of these references are on the WebJunction page for the webinar, along with the learner guide and the slides. So, now I invite you to ask questions in the chat about the whole presentation. We've been compiling questions to keep track of them. >> Yeah, so we have a few questions from earlier in the presentation that we can address. I see another question from Stephen just recently in the chat, but it looks like we have one of his earlier questions that we can circle back to. He asked earlier, how much is error, and I think by that he meant student errors in identifying containers. How much of that is due to misunderstanding cues and identifiers versus the bad design of search results pages in discovery layers, search engine results pages, et cetera? >> I would definitely say it's a mix of both. Part of it is, again, what we built for the print world, with automation layered on top of it and changing it. That was part of the problem with the example I showed of the textbook that was showing as both an e-book and a serial. It had been around a long time. Someone decided to label the print resource from decades ago as a serial. When we layered on a discovery layer, it took everything labeled a serial, called it a journal, and put that tag on it. And it's not something easily fixed. I think it's quite a mix. >> Yeah, absolutely. We didn't get the chance to talk much about our qualitative data, so what students were saying as they went through these different tasks. I would say, at least anecdotally, there's a lot of confusion about what containers are. People really don't understand the differences between them or their significance. But equally, I think a lot of it is about design. Tara showed those really great images toward the front of the presentation with journal articles and book chapters next to each other and a news article and a blog post next to each other, and there's very little in online design that helps people distinguish these things. So, again, I really think it's both. It's about the design, but it's equally about helping people understand some of these publishing practices. >> This is Ixchel. It looks like we have another question asking, what about a magazine like "Forbes" that lets people pay to publish on their platform? What are we to make of that? Interesting question. Brittany, Robin, Tara, would one of you like to chime in? >> Sure. I definitely find that a lot more analysis and nuance has to go on, even among the different kinds of containers.
If you don't know that "Forbes" has that publishing practice of letting people pay to publish, well, all of that content eventually gets indexed into our databases as if it simply came from "Forbes." It doesn't necessarily tell you that the person paid. It might just seem like any other piece from "Forbes." I also ran into this when I was looking up a journal article for a faculty member; a student had used it, and the faculty member was questioning its quality. It took a lot of digging to discover that it was essentially a paper-mill journal. A student certainly wouldn't have been able to do that digging. It was a journal that published any type of article on any subject, which was a big red flag for me. But this article was indexed in our discovery layer as an open-access article. So, that's just another example, I think, of the nuance that we all have to deal with now. And I'd like to speak a little to Stephen's questions about "The New York Times" and its different publishing modes and practices. What do you call something like the climate change series or the 1619 Project from "The New York Times"? It has so many different kinds of content, file types, and genres. So, how do you classify that, and how will it eventually be indexed in our databases? Will it? >> Yeah, I mean, I think those are really great questions. I really wish we had answers. But one of the things that strikes me is that it's not just digital-only articles, et cetera. With the explosion of social media, a lot of companies are publishing the same article in different ways on different platforms, which means different people have access to different bits of it. I think that creates a really complicated information environment, or ecosystem, in which people are struggling to identify these different things. It comes back to purpose and process, but as Stephen points out, do we know enough about the publishing practices to really understand how some of those purpose and process decisions are made? I'm not sure what the answer to that question is, but I do think it's a really important thing to keep in mind while we talk to our users about these different issues. >> Yeah, this is JP. I was going to say, looking again at that citation management tools comparison and seeing all those different reference types, and then thinking about all the different players, the publishers, the learners, the citation management tool makers, I would just love to hear what your dream scenario would be. I'm also curious if there are next steps in your research, or which cohorts it makes sense to bring it to. I don't know if there's a cohort of citation management tool folks looking at your research and thinking, oh, yeah, we should all get together and figure out how to do this in a more streamlined way. So, I'd love to hear where you think this information is going to go and hopefully how others will use it. >> So interesting. So, the graphic, I actually made it when we were first putting this presentation together. I just went wild with it.
So, I haven't gone to those people with it, but in terms of where we need to go, I think we've all been saying over and over again that vocabulary is so important, introducing it rather than assuming it. How many times do all of us in the library world say this needs to start way earlier and more consistently, in K-12, with information literacy, giving students the right tools and vocabulary at that early stage instead of just letting it develop in a bunch of different ways? I could go on and on about that one. Anyone else want to jump in? >> I think for me, from a research perspective, what I'm interested in seeing is how some of these variables, the demographic variables, the cue variables, the behavior variables, play out in the other tasks. Remember, when we presented this today, we only looked at those in terms of correct identification of containers. I'm interested in what students are attending to when they decide something is credible versus not. URL, for instance, wasn't important for correctly identifying a container; how does it play out for credibility? So, I'm interested in seeing how these different things come into play in the different kinds of tasks as students move through them. >> Yeah, I think that's a great point, Ixchel. That's certainly where my mind goes as well. It's one thing to say how well students did or didn't do, but to me, the more interesting and maybe more important part is what strategies they're already using when they do these kinds of evaluations, and how we use that to help them develop better or more effective strategies. I think that's really what I'm looking forward to the most. >> Yeah, absolutely. That makes total sense, and I'm sure all the folks on the line here are eager for those tools to help students clarify these things. So, excellent. >> Exactly. I think Anne says something really nice here: this is so important to help students avoid being frustrated in college and to make sure they don't think academia is a bunch of esoteric rules. >> Yeah, strategizing across information platforms. >> Yeah, and I think there's an element for me here, too, of, as information professionals, we always have space to continue learning about this, because it's a landscape that's always changing. So, I don't necessarily think of it as us having all the answers, but as trying to ask the questions in ways that help clarify some of this and help our users ask those questions as well. So yeah, I'm very excited to get a better sense of how the students themselves perceive this, because I think it has such broad implications beyond just their college coursework. >> Fantastic. Well, I think we're just about at the bottom of the hour. Thanks to those of you who were able to stick around. We'll send the recording to those who had to leave. I encourage you to connect with the folks on this research project. Be sure to check out the updates that come through the project site; I've just put that link in there. Our presenters have provided their emails here on this slide, so definitely follow up, keep thinking about it, and let them know about other aspects of this work that come up for you. It's definitely a huge value to the field, and we so appreciate you all coming to share this important research today.
Thank you, all, for joining us. I will send you to a short survey as you leave; we'd love to get your feedback. We'll share it with our panel, and it will guide our ongoing programming. A reminder that I'll send you all an email once the recording is posted, and for those who were here today, I'll send a certificate later next week. Although, you can get certificates for all the learning you explore in the WebJunction catalog. Thank you so much again to Brittany, Tara, Robin, and Ixchel. You all are doing great work in so many ways, so thank you for bringing this excellent learning to the broader field. >> Thanks for having us today, JP and Kendra. We appreciate WebJunction hosting. >> Yes, thanks, JP. >> Thank you. >> Thanks, everyone.