Professional Discourse about the ACRL Framework (a chronology)

Like many of us, I am still grappling with the ACRL Framework and what it means for me individually as well as for the profession. I fully admit I am still on the fence about the Framework; however, I’ve never really been a fan of the Standards. I have several professional friends who served on the Task Force and others who have helped elevate the Standards. I feel like I am in a bit of a professional dilemma and often ask myself whether creating my own middle ground will resolve this odd place I find myself in. Ultimately, the instructional designer and educational theorist in me suggests that the two can co-exist, or at least that an attempt should be made.

A colleague of mine recently asked if I had a list of sources regarding the ACRL Framework. What resulted was a fairly long list of blog posts and articles. I thought I would share the list, which is certainly not exhaustive and not the first of its kind.

The list is my attempt to create a chronology of both informal and formal commentary surrounding the Framework to provide a snapshot of the issues, questions, and concerns. If there are any posts or articles you think should be included, please add them to the comments.

Blog posts

Swanson, Troy. “The New Information Literacy Framework and James Madison.” Tame The Web, February 14, 2014. http://tametheweb.com/2014/02/20/the-new-information-literacy-framework-and-james-madison-by-ttw-contributor-troy-swanson/.

Berg, Jacob. “The Draft Framework for Information Literacy for Higher Education: Some Initial Thoughts.” BeerBrarian, February 25, 2014. http://beerbrarian.blogspot.com/2014/02/the-draft-framework-for-information.html.

Burkhardt, Andy. “New Framework For Information Literacy.” Andy Burkhardt, February 25, 2014. http://andyburkhardt.com/2014/02/25/new-framework-for-information-literacy/.

Fister, Barbara. “On the Draft Framework for Information Literacy.” Library Babel Fish, February 27, 2014. https://www.insidehighered.com/blogs/library-babel-fish/draft-framework-information-literacy.

Pagowsky, Nicole. “Thoughts on ACRL’s New Draft Framework for ILCSHE.” Nicole Pagowsky, March 2, 2014. http://pumpedlibrarian.blogspot.com/2014/03/thoughts-on-acrls-new-draft-framework.html.

Swanson, Troy. “Using the New IL Framework to Set a Research Agenda.” Tame The Web, May 5, 2014. http://tametheweb.com/2014/05/05/using-the-new-il-framework-to-set-a-research-agenda-by-ttw-contributor-troy-swanson/.

Wilkinson, Lane. “The Problem with Threshold Concepts.” Sense & Reference (blog). June 19, 2014. https://senseandreference.wordpress.com/2014/06/19/the-problem-with-threshold-concepts/.

Swanson, Troy. “Information as a Human Right: A Missing Threshold Concept?” Tame The Web, July 7, 2014. http://tametheweb.com/2014/07/07/information-as-a-human-right-a-missing-threshold-concept-by-ttw-contributor-troy-swanson/.

Dalal, Heather. “An Open Letter Regarding the Framework for Information Literacy for Higher Education.” ACRLog (blog). January 7, 2015. http://acrlog.org/2015/01/07/an-open-letter-regarding-the-framework-for-information-literacy-for-higher-education/.

Swanson, Troy. “The IL Standards and IL Framework Cannot Co-Exist.” Tame The Web, January 15, 2015. http://tametheweb.com/2015/01/12/the-il-standards-and-il-framework-cannot-co-exist-by-ttw-contributor-troy-swanson/.

Fister, Barbara. “The Information Literacy Standards/Framework Debate.” Library Babel Fish, January 22, 2015. https://www.insidehighered.com/blogs/library-babel-fish/information-literacy-standardsframework-debate.

Farkas, Meredith Gorran. “Framework? Standards? I’m Keeping It Local.” Information Wants To Be Free (blog). February 4, 2015. http://meredith.wolfwater.com/wordpress/2015/02/04/framework-standards-im-keeping-it-local/.

Accardi, Maria. “I Do Not Think That the Framework Is Our Oxygen Mask.” Librarian Burnout, May 14, 2015. https://librarianburnout.com/2015/05/14/i-do-not-think-that-the-framework-is-our-oxygen-mask/.

Becker, April Aultman. “Visualizing the ACRL Framework for Students.” Librarian Design Share, September 22, 2015. https://librariandesignshare.org/2015/09/22/visualizing-the-acrl-framework-for-students/.
All visualizations can be found here: http://researchbysubject.bucknell.edu/framework

Articles

Oakleaf, Megan. “A Roadmap for Assessing Student Learning Using the New Framework for Information Literacy for Higher Education.” Journal of Academic Librarianship 40, no. 5 (2014). http://meganoakleaf.info/framework.pdf

Morgan, Patrick K. “Pausing at the Threshold.” portal: Libraries and the Academy 15, no. 1 (2015): 183-195. https://muse.jhu.edu/article/566428.

Burgess, Colleen. “Teaching Students, Not Standards: The New ACRL Information Literacy Framework and Threshold Crossings for Instructors.” Partnership: The Canadian Journal of Library & Information Practice & Research 10, no. 1 (January 2015): 1-6. http://dx.doi.org/10.21083/partnership.v10i1.3440.

Beilin, Ian. “Beyond the Threshold: Conformity, Resistance, and the ACRL Information Literacy Framework for Higher Education.” In the Library with the Lead Pipe (February 25, 2015). http://www.inthelibrarywiththeleadpipe.org/2015/beyond-the-threshold-conformity-resistance-and-the-aclr-information-literacy-framework-for-higher-education/.

Baer, Andrea. “The New ACRL Framework for Information Literacy: Implications for Library Instruction & Educational Reform.” InULA Notes: Indiana University Librarians Association 27, no. 1 (May 15, 2015): 5–8. https://scholarworks.iu.edu/journals/index.php/inula/article/view/18978/25096

Kuglitsch, Rebecca Z. “Teaching for Transfer: Reconciling the Framework with Disciplinary Information Literacy.” portal: Libraries and the Academy 15, no. 3 (2015): 457-470. https://muse.jhu.edu/article/586067.

Foasberg, Nancy M. “From Standards to Frameworks for IL: How the ACRL Framework Addresses Critiques of the Standards.” portal: Libraries and the Academy 15, no. 4 (2015): 699-717. https://muse.jhu.edu/article/595062.

Communications in Information Literacy 9, no. 2 (September 11, 2015) – Special Section

Jacobson, Trudi E., and Craig Gibson. “First Thoughts on Implementing the Framework for Information Literacy.” Communications in Information Literacy 9, no. 2 (September 11, 2015): 102–10. doi:10.7548/cil.v9i2.348.

Battista, Andrew, Dave Ellenwood, Lua Gregory, Shana Higgins, Jeff Lilburn, Yasmin Sokkar Harker, and Christopher Sweet. “Seeking Social Justice in the ACRL Framework.” Communications in Information Literacy 9, no. 2 (September 11, 2015): 111-25. doi:10.7548/cil.v9i2.359.

Hosier, Allison. “Teaching Information Literacy Through ‘Un-Research.’” Communications in Information Literacy 9, no. 2 (September 11, 2015): 126-35. doi:10.7548/cil.v9i2.334.

Pagowsky, Nicole. “A Pedagogy of Inquiry.” Communications in Information Literacy 9, no. 2 (September 11, 2015): 136-44. doi:10.7548/cil.v9i2.367.

Critten, Jessica. “Ideology and Critical Self-Reflection in Information Literacy Instruction.” Communications in Information Literacy 9, no. 2 (September 11, 2015): 145-56. doi:10.7548/cil.v9i2.324.

Seeber, Kevin Patrick. “This Is Really Happening: Criticality and Discussions of Context in ACRL’s Framework for Information Literacy.” Communications in Information Literacy 9, no. 2 (September 11, 2015): 157–63. doi:10.7548/cil.v9i2.354.

Dempsey, Megan E., Heather Dalal, Lynee R. Dokus, Leslin H. Charles, and Davida Scharf. “Continuing the Conversation: Questions about the Framework.” Communications in Information Literacy 9, no. 2 (September 11, 2015): 164–75. doi:10.7548/cil.v9i2.347.

Anderson, Melissa J. “Rethinking assessment: Information literacy instruction and the ACRL framework.” SJSU School of Information Student Research Journal 5, no. 2 (2015). http://scholarworks.sjsu.edu/slissrj/vol5/iss2/3

Berkman, Robert. “ACRL’s New Information Framework: Why Now and What Did It Discover?” Online Searcher (March/April 2016). http://www.infotoday.com/OnlineSearcher/Articles/Features/ACRLs-New-Information-Framework-Why-Now-and-What-Did-It-Discover-109503.shtml.


The Road to Validity #2 (Assessment Series)

Validity Techniques

In my previous post, I highlighted some of the literature that discusses pre- and post-test assessment. In this post I will discuss the challenges of creating an effective online questionnaire. One thing I have learned over the years is that quality data can only come from valid and reliable assessment tools. For online questionnaires, poorly written questions will throw off the results and make analysis almost impossible.

The reason we create and administer questionnaires is to help us find answers to broad, overarching questions. If a library, a team, or an individual puts forth the effort to gather data and potentially publish the results, it is far better to take the time up front to develop well-crafted questions than to discover after the fact that the data cannot be used because the survey asked the wrong questions.

Validity of questions, and ultimately of the assessment tool, ensures that the data gathered represent the stated purpose and goal. Questionnaires as a research format, while having many benefits, also have several disadvantages. One of the most significant is poorly formed questions, a consequence of the ease with which these instruments can be developed.[1] Valid questions are clearly written and eliminate any possibility that the individual taking the assessment could misinterpret or be confused by them.[2] Alreck and Settle state that survey questions should have focus, brevity, and clarity.[3] Multiple-choice questions are particularly prone to being invalid if the question writers do not know what makes a question valid or invalid.

Developing overarching questions before gathering data is imperative and a good technique for writing appropriate survey questions. For instance, if an overarching question is whether students’ skills improved after an instruction session, asking students whether they liked the session and the instructor’s teaching style cannot answer that question. When I started at my current library, one of our first attempts at instructional assessment was to create a questionnaire from pre-existing tutorial quiz questions because we thought this would be a time-saving measure. Although we knew, in general terms, that we wanted to find out whether students’ knowledge had improved after a library session, we didn’t take the time to create overarching questions and identify what we really wanted to know. Not surprisingly, once we started looking at the data, all we could really conclude was how many questions students got right and wrong. The tutorial quiz questions, as written, were out of context, and the library sessions didn’t specifically address many of the skills or competencies linked to the questions. Also, in discussing the data, it became clear that everyone on the development team had a different opinion about what we were supposed to be measuring.

When embarking on the development of an assessment questionnaire, it is important to be aware of the different levels of assessment: Classroom Assessment, Programmatic Assessment, and Institutional Assessment.[4] The data gathered at each level tells a different story. A mismatch between the assessment level and the questions that need answering has numerous consequences, chief among them invalid data and the inability to conduct proper analyses.

Administration options within classroom assessment are quite numerous. Radcliff et al. point out that these can be categorized as follows: informal assessment, such as observations or self-reflection; classroom assessment techniques (CATs); surveys; interviews; knowledge tests; concept maps; performance and product assessments; and portfolios.[5] Each of these has advantages and disadvantages. Informal assessments and CATs, often used in library instruction settings, fit well into a one-time guest lecture scenario because they are quick and easy to administer and analyze; the drawback is the difficulty of gaining a well-rounded picture of students’ skill sets and of transference. Assessments such as interviews and portfolios, while providing the most in-depth data, require significant amounts of time for data gathering and analysis. Other types of assessments, like surveys and knowledge tests, can address the time factor and often provide more information than informal assessments or CATs. When administered as a pre-/post-test, they can track the acquisition or improvement of skill sets.[6] However, depending on administration and data analysis, these assessments may or may not address the question of transference to other courses or to real-life scenarios.

One Example of a Validity Process

Being aware of the many options just for classroom assessment was important for the instruction department at my library. When developing our questionnaire, understanding the strengths of each option helped put into context what questions we could realistically ask and answer. Since an online questionnaire was really our only administration option, we concluded that the questions needed to take the form of a knowledge test given as a pre-/post-test.

We chose a knowledge test because the questions in this type of assessment do not rely on students’ self-reporting of skills or self-efficacy, nor do they focus on the effectiveness of an instructor. Instead, they focus on specific knowledge, competencies, and skill sets. For administration, a time-series design was selected, in which several post-tests are given over a designated period after the pre-test and the library instruction sessions have been conducted.[7] This would provide the opportunity to gather data on student knowledge at specific times of the academic year and, if developed well, involve a minimal time commitment on the part of course instructors and students.
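To make this design concrete, here is a minimal sketch (in Python) of how scores from a pre-test followed by several post-tests might be organized and summarized. The student records, timepoints, and scores are hypothetical placeholders, not data from our instrument.

from statistics import mean

# Hypothetical scores out of 10: one pre-test plus two later post-test administrations.
scores = {
    "student_1": {"pre": 4, "post_fall": 6, "post_spring": 7},
    "student_2": {"pre": 5, "post_fall": 5, "post_spring": 6},
    "student_3": {"pre": 3, "post_fall": 6, "post_spring": 6},
}

def mean_gain(records, post_key):
    """Average gain from the pre-test to a given post-test administration."""
    gains = [r[post_key] - r["pre"] for r in records.values() if post_key in r]
    return mean(gains)

for post in ("post_fall", "post_spring"):
    print(f"Mean gain at {post}: {mean_gain(scores, post):.2f}")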

However, we didn’t want only multiple-choice questions, as we were also interested in gaining some insight into how students approached researching a specific topic. We decided to create a two-part questionnaire: Part A included multiple-choice questions, and Part B gave students a scenario and open-ended questions asking them to describe how they would research it. To avoid once again gathering invalid data, we engaged in two activities: applying a validity chart to the multiple-choice questions and mapping the questions to our established learning outcomes.

After our first attempt, we rewrote several questions, but there was uncertainty about whether they were written appropriately. One statistical validity/reliability analysis often used on questionnaires is Cronbach’s α (alpha). However, we wanted to keep the number of multiple-choice questions to ten, which is too small a set for Cronbach’s α or other statistical validity analyses. Instead, the team used a slightly modified validity chart from Radcliff et al.[8] The validity chart is a yes/no checklist that clarifies how questions should be constructed and what types of answer options should be present (see Figure 1). Any ‘no’ indicates that the question is invalid and should be rewritten until it generates all yeses.

FIGURE 1—Example of the modified validity chart checklist using one of the original assessment questions[8]

[Image: modified validity chart]
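For readers unfamiliar with the statistic mentioned above, here is a rough sketch of how Cronbach’s α is typically computed from a set of item scores. This is the standard textbook formula applied to made-up responses, not an analysis we actually ran (as noted, our ten-question set was too small for it).

from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: list of respondent rows, each a list of item scores (e.g., 0/1)."""
    k = len(item_scores[0])                 # number of items
    items = list(zip(*item_scores))         # transpose to per-item columns
    item_variances = sum(variance(col) for col in items)
    total_variance = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses from five students on four items:
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
]
print(round(cronbach_alpha(responses), 2))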

Applying the validity chart to all of the multiple-choice questions revealed that none of them (even the rewritten ones) were valid. It was a great exercise in revealing our assumptions as librarians and forcing us to clarify what we really wanted to ask. All of the questions were rewritten until they generated a check in every Yes box. Even once the validity chart showed that all of the questions were valid, we still made some wording revisions to increase reader comprehension. During this process we discovered that one question was valid when first revised, but advances in library database search algorithms later made it invalid: two answer options became correct instead of just one. This reinforced the need to regularly check the questions and answer options to make sure they are in line with current tools and services. Below is an example of how one question was revised from its original version to its final version (Figure 2).

FIGURE 2—Example of how one question was revised over the course of the tool development

[Image: outcomes map]

The other activity was making sure the questions were linked to our learning outcomes. To verify this, each question was mapped to one or more learning outcomes. Our initial mapping revealed that the first set of questions clustered almost entirely around the outcomes concerned with finding and searching for information. Other outcomes, such as those addressing source types and plagiarism, were completely omitted from the question set. The question set was revised to encompass all of the outcomes. After a second review, some slight adjustments were made to the questions to create an even stronger alignment between the questions and the outcomes.
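As a simple illustration of the mapping exercise described above, a coverage check like the following sketch can flag outcomes that no question addresses. The outcome labels and question numbers here are hypothetical, not our actual outcomes or questions.

# Hypothetical mapping of questions to learning outcomes.
question_outcomes = {
    "Q1": {"searching"},
    "Q2": {"searching", "source types"},
    "Q3": {"plagiarism"},
}
learning_outcomes = {"searching", "source types", "plagiarism", "evaluating sources"}

covered = set().union(*question_outcomes.values())
print("Outcomes not addressed by any question:", learning_outcomes - covered)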

Even though we did not conduct extensive reliability and validity testing of this assessment instrument, the processes we used served our needs at the time. Should there come a time when the University wants a standardized assessment instrument for research skills, this process will better position the library to evaluate or develop one. While consulting the literature during our validation process, I came across some good articles (listed below) that describe more rigorous validation and reliability processes.

Recommended Articles on Validity Testing

Ondrusek, Anita, Valeda F. Dent, Ingrid Bonadie-Joseph, and Clay Williams. “A Longitudinal Study of the Development and Evaluation of an Information Literacy Test.” Reference Services Review 33, no. 4 (2005): 388-417. doi: 10.1108/00907320510631544

Ondrusek et al. discussed the development of an online quiz associated with a group of online tutorials that were part of their university’s first-year orientation seminars. The authors highlighted how the quiz went through multiple iterations and rounds of testing to develop valid questions, in addition to various statistical analyses, such as score summaries, standard deviations, and item analysis, to establish test reliability. This extended and thorough development process helped establish the assessment within the university curriculum.

Mery, Yvonne, Jill Newby, and Ke Peng. “Assessing the Reliability and Validity of Locally Developed Information Literacy Test Items.” Reference Services Review 39, no. 1 (2011): 98-122. doi: 10.1108/00907321111108141

Mery, Newby, and Peng described the methodology used to develop an information literacy test associated with an online credit course. To determine validity and reliability, they used classical test theory and item response theory in correlation with SAILS test items. The data were gathered over two semesters, with the test administered as a pre- and post-test to students enrolled in the course.

Cameron, Lynn, Steven L. Wise, and Susan M. Lottridge. “The Development and Validation of the Information Literacy Test.” College & Research Libraries 68, no. 3 (2007): 229-36. doi: 10.5860/crl.68.3.229. http://crl.acrl.org/content/68/3/229

Cameron, Wise, and Lottridge reported on the development of the James Madison University Information Literacy Test (ILT) and the methods used to create a reliable and valid instrument. The questions were based on the original ACRL Information Literacy Competency Standards. Their statistical analysis included content validity and construct validity. Additionally, they used standard-setting methods to determine expected proficiency levels and performance standards so the test could be administered across a variety of student cohorts.

Mulherrin, Elizabeth, and Husein Abdul-Hamid. “The Evolution of a Testing Tool for Measuring Undergraduate Information Literacy Skills in the Online Environment.” Communications in Information Literacy 3, no. 2 (2009): 204-15. http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path[]=Vol3-2009AR12

Mulherrin and Abdul-Hamid provided an overview of the process used to develop a valid and reliable final exam for an information literacy credit course offered as part of the general education curriculum. As with similar articles, the authors discussed the use of content and construct validity, item difficulty and discrimination, Cronbach’s α (alpha), and item characteristic curve (ICC) analyses. A clear and ongoing theme in these articles is the importance of using reliable and valid instruments when conducting large-scale assessment.
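Several of these articles rely on classical item analysis. As a rough, hypothetical illustration of the item difficulty and discrimination statistics they report (not a reproduction of any cited study’s procedure), here is a minimal sketch using the common upper/lower-group method.

def item_analysis(responses, group_fraction=0.27):
    """responses: list of rows, one per student, of 0/1 item scores."""
    n_items = len(responses[0])
    ranked = sorted(responses, key=sum, reverse=True)      # highest total scores first
    g = max(1, int(len(ranked) * group_fraction))           # size of upper/lower groups
    upper, lower = ranked[:g], ranked[-g:]
    stats = []
    for i in range(n_items):
        difficulty = sum(r[i] for r in responses) / len(responses)       # proportion correct
        discrimination = (sum(r[i] for r in upper) - sum(r[i] for r in lower)) / g
        stats.append((difficulty, discrimination))
    return stats

# Hypothetical 0/1 scores for six students on three items:
rows = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1], [1, 0, 1]]
for i, (p, d) in enumerate(item_analysis(rows), start=1):
    print(f"Item {i}: difficulty={p:.2f}, discrimination={d:.2f}")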

Endnotes

  1. Bill Gillham, Developing a Questionnaire (London: Continuum, 2000).
  2. Linda A. Suskie, ed., Assessing Student Learning: A Common Sense Guide, 2nd ed. (San Francisco, CA: Jossey-Bass, 2009).
  3. Pamela L. Alreck and Robert B. Settle, The Survey Research Handbook, 3rd ed. (Boston: McGraw-Hill/Irwin, 2004).
  4. Elizabeth Fuseler Avery, “Assessing Information Literacy Instruction,” in Assessing Student Learning Outcomes for Information Literacy Instruction in Academic Institutions, ed. Elizabeth Fuseler Avery (Chicago: Association of College and Research Libraries, 2003).
  5. Carolyn J. Radcliff et al., A Practical Guide to Information Literacy Assessment for Academic Librarians (Westport, CT: Libraries Unlimited, 2007).
  6. Carol McCulley, “Mixing and Matching: Assessing Information Literacy,” Communications in Information Literacy 3, no. 2 (2009), http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=Vol3-2009AR9.
  7. Alreck and Settle, The Survey Research Handbook, 414.
  8. Radcliff et al., A Practical Guide to Information Literacy Assessment for Academic Librarians, 94-95.

The Road to Validity #1 (Assessment Series)

Literature Review

Something I think about on a regular basis is assessment, particularly how to do it well and where to focus my department’s and library’s efforts. Of course, assessment means many different things to many different people. How librarians approach or even define assessment is often very different from a college dean’s definition and approach. Still, I do believe that, at the core, most people in higher education view assessment as a vehicle for improving the student learning experience. Fortunately, the literature of both library and information science and education is rife with articles, books, and presentations discussing assessment, ranging from types of assessments to design techniques to analysis methods.

In my journey into assessment, I have found that for many librarians the focus of assessment efforts is typically on student learning; however, assessment needs to encompass much more. As Avery emphasizes, when considering the overall student learning experience, assessment should not just evaluate students but should also inform and shape how teaching programs are developed.[1] An important step is establishing a foundation on which assessment efforts can be built. This may mean establishing the skill sets of a specific group of students (e.g., first-year students) or documenting which classes have or have not had library instruction. For gauging students’ knowledge, pre- and post-tests are a good mechanism for determining what students know or have learned.

Using pre-/post-tests within library instruction is not new. A general search on this assessment method returns a significant number of articles in which pre-tests and post-tests were used to determine the level of student learning, the impact of instruction techniques, or the direction of an instruction program redesign. In making my way through the assessment maze, I personally found it challenging to determine which sources would be the most useful. Much of it came down to the goal my library was trying to achieve and how closely an article aligned with that goal. One of the earliest reports of using pre-/post-tests is Kaplowitz’s description of a program’s impact on library use and attitudes toward librarians.[2] This was an interesting article to read, as it highlights the beginning of a trend in library assessment efforts.

As discussion about the use and effectiveness of online courses and digital learning objects has increased substantially, research investigating different instruction delivery methods provides context for the evolution of this movement. Not surprisingly, librarians were assessing these methods just as Web 2.0 was beginning to explode. In their article comparing teaching in online-only, in-class-only, and hybrid environments, Kraemer, Lombardo, and Lepkowski used an identical pre-test and post-test taken by all of the participating students.[3] As might be expected, their analysis showed the most improvement in the hybrid class. However, this research was done in 2007, when online learning software and platforms were just beginning to mature. In a more recent study, Mery, Newby, and Peng evaluated pre-test and post-test scores of students who received different types of instruction to identify whether a particular method, either an online course or a one-shot guest lecture, had a higher impact on student learning.[4] Their results showed that the online course yielded the greater improvement in students’ skills. Credit-bearing information literacy courses offer one of the best environments for assessing student learning and using pre-/post-tests. Research by both Joanna Burkhardt and Bonnie Swoger shows effective use of the pre-/post-test method in credit courses.[5][6]
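For context, the pre-/post-test comparisons in studies like these are often summarized with a paired test on matched scores. The sketch below (using SciPy, with made-up scores) shows the general idea; it is not the specific analysis any of the cited authors performed.

from scipy.stats import ttest_rel

# Hypothetical matched scores: each position is one student, before and after instruction.
pre_scores  = [4, 5, 3, 6, 5, 4, 7, 5]
post_scores = [6, 6, 5, 7, 5, 6, 8, 7]

result = ttest_rel(post_scores, pre_scores)   # paired t-test on the same students
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")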

Determining the best design or method to use for assessment is not always straightforward. In a review of 127 articles focusing on assessment tools used in libraries, 34% fell into the multiple-choice questionnaire category.[7] When written well, multiple-choice questions can assess recall, understanding, prediction, evaluation, and problem solving.[8] While strongly favored within the library teaching community, these types of assessment tools do have their limitations. They are not effective at gathering a holistic picture of students’ skills and competencies.[9] And while many pre-/post-tests take the form of multiple-choice tests or questionnaires, this does not preclude using other assessment types or blending different types together in one tool.

Open-ended questions provide the option to gather qualitative data. Patton categorizes qualitative data into three kinds: interviews, observations, and documents.[10] Within the documents category he identifies open-ended survey questions as a data collection technique. In his book Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, Creswell discusses the advantages of using various types of assessment methods.[11] Mixed-methods research can refer either to methodology, the philosophy behind the research, or to method, the actual techniques and strategies used in the research. Specifically defined, “it focuses on collecting, analyzing, and mixing both quantitative and qualitative data in a single study or series of studies.”[12] Ultimately, Creswell concludes that a mixed-methods approach provides the best way not only to assess a population as a whole but also to accumulate more granular data for subgroups or even individuals.[13] In her study using mixed-methods assessment in the form of pre-/post-tests and interviews, Diana Wakimoto was able to explore the impact of a credit course on students’ learning and satisfaction.[14]

Despite the growing evidence that mixed-methods assessment is the best route to take, it is important to note that institutional culture has a large impact on the type and success of any assessment initiative. It is not uncommon for large-scale assessments to be unrealistic for many libraries, and smaller-scale assessments are highly dependent on personal relationships with individual departments, programs, or faculty. However, assessment is a necessity, and quality small-scale efforts can often lead to larger-scale initiatives.

Endnotes

  1. Elizabeth Fuseler Avery, “Assessing Information Literacy Instruction,” in Assessing Student Learning Outcomes for Information Literacy Instruction in Academic Institutions, ed. Elizabeth Fuseler Avery (Chicago: Association of College and Research Libraries, 2003).
  2. Joan Kaplowitz, “A Pre- and Post-Test Evaluation of the English 3-Library Instruction Program at UCLA,” Research Strategies 4, no. 1 (1986).
  3. Elizabeth W. Kraemer, Shawn V. Lombardo, and Frank J. Lepkowski, “The Librarian, the Machine, or a Little of Both: A Comparative Study of Three Information Literacy Pedagogies at Oakland University,” College & Research Libraries 68, no. 4 (2007), doi: 10.5860/crl.68.4.330. <http://crl.acrl.org/content/68/4/330>
  4. Yvonne Mery, Jill Newby, and Ke Peng, “Why One-Shot Information Literacy Sessions Are Not the Future of Instruction: A Case for Online Credit Courses,” College & Research Libraries 73, no. 4 (2012), doi: 10.5860/crl-271 <http://crl.acrl.org/content/73/4/366>
  5. Joanna M. Burkhardt, “Assessing Library Skills: A First Step to Information Literacy,” portal: Libraries and the Academy 7, no. 1 (2007), doi: 10.1353/pla.2007.0002. <http://digitalcommons.uri.edu/lib_ts_pubs/55/>
  6. Bonnie J. M. Swoger, “Closing the Assessment Loop Using Pre- and Post-Assessment,” Reference Services Review 39, no. 2 (2011), doi: 10.1108/00907321111135475. <http://www.geneseo.edu/~swoger/ClosingTheAssessmentLoopPostPrint.pdf>
  7. Andrew Walsh, “Information Literacy Assessment: Where Do We Start?,” Journal of Librarianship and Information Science 41, no. 1 (2009), doi: 10.1177/0961000608099896. <http://eprints.hud.ac.uk/2882/1/Information>
  8. Thomas M. Haladyna, Writing Test Items to Evaluate Higher Order Thinking (Boston: Allyn and Bacon, 1997).
  9. Davida Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” The Journal of Academic Librarianship 33, no. 4 (2007), doi: 10.1016/j.acalib.2007.03.005. <http://www.njit.edu/middlestates/docs/2012/Scharf_Elliot_Huey_Briller_Joshi_Direct_Assessment_revised.pdf>
  10. Michael Quinn Patton, Qualitative Research and Evaluation Methods (Thousand Oaks, CA: Sage Publications, 2002).
  11. John W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed. (Thousand Oaks, CA: Sage Publications, 2009).
  12. John W. Creswell and Vicki L. Plano Clark, Designing and Conducting Mixed Methods Research (Thousand Oaks, CA: Sage Publications, 2007): 5.
  13. Creswell, Research Design.
  14. Diana K. Wakimoto, “Information Literacy Instruction Assessment and Improvement through Evidence Based Practice: A Mixed Method Study,” Evidence Based Library and Information Practice 5, no. 1 (2010), https://ejournals.library.ualberta.ca/index.php/EBLIP/article/viewFile/6456/6447.

Humor in the library classroom or ‘why are we so serious all of the time?’

I recently came across Mashable’s video about the deep web (embedded below). While it isn’t the kind of funny that has you holding your gut as tears roll down your face, it is humorous. At my institution we do have videos that discuss the deep web (we use the term hidden web), and to be honest they aren’t funny. They are straightforward and well constructed, but not funny. Mashable’s video started me thinking about how academic librarians construct our instruction, whether online or in person. Most of us are serious most of the time, and I often wonder why that is.

I’ve been told throughout my life that I am funny, though not in a stand-up-comedian way; more in a dry, witty way. (I could go off on a tangent here about how this is an insult to comedians, but I won’t.) I should note that I don’t consider myself funny. Anyway, in the classroom, any shred of humor I may have with my friends and colleagues goes out the window. I am not sure what it is, but I have a mental barrier when it comes to humor related to instruction, whether in the classroom or through digital learning objects (and frankly, if I am honest with myself, in all of my public speaking). I marvel at people who seem to be funny with ease in front of large groups and in the classroom. Is it off the cuff, or do they really prepare and practice to be funny?

The best summation of humor I found came, ironically, from an encyclopedia, certainly not a format one would associate with humor.

Humor is a ubiquitous, pervasive, universal phenomenon potentially present in all situations in which people interact. It is a complex, multifaceted phenomenon involving cognitive, emotional, behavioral, physiological, and social aspects that have a significant effect on individuals, social relations, and even social systems. 1

This definition made me realize that I perhaps make humor bigger than it needs to be. I perceive it as elusive, and therefore it will always be elusive. Research into humor as a teaching tool indicates that it helps relax the environment, creates a stronger connection between student and teacher, and can enhance critical thinking.2 However, it has also been found that certain types of humor, such as funny stories or comments, professional humor, and jokes, work better than others and are perceived by students as the most appropriate.3

In my quest to know more, I was not surprised to find that, over the years, academic librarians have talked about ways to be funny in the classroom. While many students are reluctant to admit it, they do find the overall research process stressful and challenging. In the most recent report on student research behaviors, Alison Head specifically examined college freshmen. One of the key findings of her research was that

[n]early three-fourths of the sample (74%) said they struggled with selecting keywords and formulating efficient search queries. Over half (57%) felt stymied by the thicket of irrelevant results their online searches usually returned.4

If there is ever a time to cut tension, it is when students are really struggling. During my short investigation into the use of humor in the classroom, I’ve tested the waters a little. While I didn’t spend any significant time planning a “comedy routine,” I did keep myself open to opportunities to insert humor. Interestingly, I found the most opportune moments to be when exploring the selection and use of search terms. Just being aware of current pop culture provides a plethora of humor opportunities.

References

1. Westwood, Robert. “Humor.” In International Encyclopedia of Organization Studies, edited by Stewart R. Clegg and James R. Bailey, 621-24. Thousand Oaks, CA: SAGE Publications, 2008. doi: http://dx.doi.org/10.4135/9781412956246.n214.

2. Chabeli, M. “Humor: a pedagogical tool to promote learning.” Curationis 31, no. 3 (2008): 51-59. http://www.curationis.org.za/index.php/curationis/article/viewFile/1039/975

3. Torok, Sarah E., Robert F. McMorris, and Wen-Chi Lin. “Is humor an appreciated teaching tool? Perceptions of professors’ teaching styles and use of humor.” College Teaching 52, no. 1 (2004): 14-20.

4. Head, Alison J. Learning the Ropes: How Freshmen Conduct Course Research Once They Enter College. Research Report. Project Information Literacy, December 4, 2013. http://projectinfolit.org/images/pdfs/pil_2013_freshmenstudy_fullreport.pdf.



Scarce and abundant resources

I was recently reminded of the article “Tech Is Too Cheap to Meter: It’s Time to Manage for Abundance, Not Scarcity” by Chris Anderson, published in Wired magazine. Even though it was written in 2009, it still has merit. Anderson puts forth an interesting discussion of how we perceive the abundance or scarcity of technology resources and how this perception influences our ability to leverage them in different and unique ways. The examples he uses are computer storage space and streaming video. Neither is an expensive or scarce resource any longer, yet both are often viewed as such. The key question he asks is: when an organization views an abundant resource as scarce, what impact does that have on its ability to meet customer needs and maintain goodwill?

As I read this article I of course began thinking about academic libraries, how we view certain technologies and resources, and the policies we implement based on those views. The “no food and drink” policy is a good example of a policy based on the perception that items such as computer keyboards, mice, and all print sources are scarce or irreplaceable. Yet, as computer hardware costs continue to decrease, peripherals like keyboards and mice are no longer a scarce commodity, particularly since many institutions buy them in bulk for a fraction of the off-the-shelf retail cost. As for print-based resources, only a small percentage of those in a typical academic library are truly scarce or irreplaceable. As more materials are born digital or converted, document delivery systems become more robust, and consortia and shared library systems become more common, print items that do not fall into the rare and irreplaceable category are more easily replaced or accessed should they become damaged. A more important policy for an academic library is one focused on good stewardship of resources and on methods for meeting the unique needs of its community.

Anderson also asks the opposite question: what are the implications for an organization that views a scarce resource as an abundant one?

Many instructional initiatives within academic libraries use a variety of technologies, including those that are part of Web 2.0. While these technologies are abundant, how they are used in connection with student learning varies significantly among librarians. Some take a scattershot approach to using them in teaching environments: try anything and everything in the hope that something will stick. Those who take this approach treat time, both students’ and their own, as an abundant resource. We all know, of course (even those teaching in a scattershot way), that time is not an abundant resource, and teaching as if it were is not effective. Why do people do it, then? There are many potential answers to that question. In my experience, however, those who are new to teaching are more likely to take the scattershot approach out of inexperience rather than ill intent.

As someone who views technology as a conduit and support mechanism for learning outcomes, I am not prone to take a scattershot approach, although I do not necessarily take a wait-and-see approach either. For me the important questions to ask are: what is the impact on student learning when using a given technology, and how does it connect to established learning outcomes? Students’ time and our time are scarce resources; we need to be cognizant of that. Being enthusiastic about trying what’s cool and new won’t necessarily put us at risk of losing students’ goodwill and, in turn, potential lifelong supporters. However, a bad learning experience can translate into a student not asking for help when needed, perpetuate attitudes of “I already know how to do this, so I don’t need to participate,” or even establish a lack of respect for the expertise a library science professional does have.

In his closing remarks Anderson talks about a hybrid model evolving in our society, using YouTube and Hulu as examples. Both provide free access to streaming video, but one, YouTube, takes the scattershot approach: open to all for viewing and uploading, with no criteria for content or quality (certain exceptions apply) and no commercials. (It should be noted that, since Google purchased YouTube, a Google Ads overlay appears on most videos.) The other, Hulu, with a more focused content type, provides high-quality streaming video, restricts who can upload, and requires viewers to watch short commercials. Even though both are free, each has a different model of and perspective on abundance and scarcity. Both services are doing extremely well, and in the foreseeable future neither is in danger of fading use or activity.

For a complex organization like an academic library, choosing one model over the other (scarcity or abundance) is not necessarily appropriate. As academic libraries reevaluate today’s learning environments and focus on student learning, we will need to consider how we engage students and what we consider to be scarce and abundant resources. In my view, my time and students’ time are scarce and therefore valuable; the learning opportunities we provide should take this into consideration. We should try to avoid the scattershot approach with technology. At the same time, today’s abundance of web-based technologies and applications gives us multiple choices for creating interesting learning environments with different types of technology.

I am continually amazed at the creativity of my fellow librarians. Teaching strategies to watch will be the flipped classroom, blended learning, and fully online learning. While all of these methods have been around for many years, what will be important is the movement away from generic, one-size-fits-all instruction toward a personalized learning experience. How the personalization occurs can be multifaceted and should be connected with the goals of the program or institution. Take, for instance, the flipped classroom methodology. This strategy can allow for the use of various technologies and at the same time provide a powerfully positive learning experience for students. In the past year, more discussion has popped up about academic libraries using this model. Here are three articles discussing interesting initiatives.

Arnold-Garza, Sara. 2014. “The flipped classroom: Assessing an innovative teaching model for effective and engaging library instruction.” College & Research Libraries News 75, no. 1: 10. http://crln.acrl.org/content/75/1/10.full

Datig, Ilka, and Claire Ruswick. 2013. “Four quick flips: Activities for the information literacy classroom.” College & Research Libraries News 74, no. 5: 249. http://crln.acrl.org/content/74/5/249.long

Lemmer, Catherine A. 2013. “A View from the Flip Side: Using the ‘Inverted Classroom’ to Enhance the Legal Information Literacy of the International LL.M. Student.” Law Library Journal 105, no. 4: 461-491. https://scholarworks.iupui.edu/bitstream/handle/1805/3815/Published%20Flip%20Article.pdf?sequence=1
