The Road to Validity #1 (Assessment Series)

Literature Review

Something I think about on a regular basis is assessment, particularly how to do it well and where to focus my department’s and library’s efforts. Of course, assessment means many different things to many different people. How librarians approach or even define assessment is often very different from a college dean’s definition and approach. Still, I do believe that, at the core, most people in higher education view assessment as a vehicle to improve the student learning experience. Fortunately, the literature of both library and information science and education is rife with articles, books, presentations, and more discussing assessment, ranging from types of assessments to design techniques to analysis methods.

In my journey into assessment, I found that for many librarians the focus of assessment efforts is typically on student learning; however, assessment needs to encompass much more. As Avery emphasizes, when considering the overall student learning experience, assessment should focus not just on evaluating students but should also inform and shape how teaching programs are developed.[1] An important step is establishing a foundation on which assessment efforts can be built. This might mean establishing the skill sets of a specific group of students (e.g., first-years) or documenting which classes have or have not had library instruction. When determining students’ knowledge, pre- and post-tests are a good mechanism for finding out what students know or have learned.

The use of pre-/post-tests within library instruction is not new. A general search on this assessment method returns a significant number of articles in which pre-tests and post-tests were used to determine the level of student learning, the impact of instruction techniques, or the direction of an instruction program redesign. In making my way through the assessment maze, I found it challenging to determine which sources would be the most useful. Much of it came down to the goal my library was trying to achieve with its assessment effort and how closely an article aligned with that goal. One of the earliest reports of using pre-/post-tests is Kaplowitz’s description of a program’s impact on library use and attitudes towards librarians.[2] This was an interesting article to read, as it highlights the beginning of a trend within library assessment efforts.

As discussion of the use and effectiveness of online courses and digital learning objects has increased substantially, research investigating instruction delivery methods provides context for the evolution of this movement. Not surprisingly, librarians were assessing these methods just as Web 2.0 was beginning to explode. In their article comparing teaching in online-only, in-class-only, and hybrid environments, Kraemer, Lombardo, and Lepkowski administered identical pre- and post-tests to all participating students.[3] Their analysis showed the greatest improvement in the hybrid class. However, this research was done in 2007, when online learning software and platforms were only beginning to mature. In a more recent study, Mery, Newby, and Peng evaluated pre-test and post-test scores of students who received different types of instruction to identify whether a particular method, either an online course or a one-shot guest lecture, had a greater impact on student learning.[4] Their results showed that the online course yielded the greatest improvement in students’ skills. Credit-bearing information literacy courses offer one of the best environments for assessing student learning and using pre-/post-tests. Research by both Joanna Burkhardt and Bonnie Swoger shows effective use of the pre-/post-test method in credit courses.[5] [6]
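
To make the score comparison concrete, here is a minimal sketch of how pre-/post-test gains might be compared within and between two instruction groups. All scores and group names are invented for illustration; they are not drawn from any of the studies cited above.

```python
# A minimal sketch with made-up data: compare pre-/post-test gains for two
# hypothetical instruction groups. Scores are percent correct for ten
# students per group and are illustrative only.
from scipy import stats

online_pre   = [52, 48, 60, 55, 45, 58, 50, 62, 47, 53]
online_post  = [78, 74, 85, 80, 70, 82, 76, 88, 72, 79]
oneshot_pre  = [51, 49, 57, 54, 46, 59, 52, 61, 48, 55]
oneshot_post = [63, 60, 70, 66, 58, 71, 64, 73, 59, 67]

# Paired t-test within each group: did scores improve from pre to post?
for name, pre, post in [("online course", online_pre, online_post),
                        ("one-shot session", oneshot_pre, oneshot_post)]:
    t, p = stats.ttest_rel(post, pre)
    gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
    print(f"{name}: mean gain {gain:.1f} points, t={t:.2f}, p={p:.4f}")

# Independent t-test on the gains: did one delivery method improve more?
online_gain = [b - a for a, b in zip(online_pre, online_post)]
oneshot_gain = [b - a for a, b in zip(oneshot_pre, oneshot_post)]
t, p = stats.ttest_ind(online_gain, oneshot_gain)
print(f"between groups (gain comparison): t={t:.2f}, p={p:.4f}")
```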

Determining the best design or method to use for assessment is not always straightforward. In a review of 127 articles focusing on assessment tools used in libraries, 34% of the tools fell into the multiple-choice questionnaire category.[7] When written well, multiple-choice questions can assess recall, understanding, prediction, evaluation, and problem solving.[8] While strongly preferred within the library teaching community, these types of assessment tools do have their limitations: they are not effective in gathering a holistic picture of students’ skills and competencies.[9] While many pre-/post-tests are developed as multiple-choice tests or questionnaires, this does not preclude using other assessment types or blending several types together in one tool.

Open-ended questions provide the option to gather qualitative data. Patton categorizes qualitative data into three kinds: interviews, observations, and documents.[10] Within the documents category he identifies open-ended survey questions as a data collection technique. In his book Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, Creswell discusses the advantages of using various types of assessment methods.[11] Mixed-methods research can refer either to methodology, the philosophy behind the research, or to method, the actual techniques and strategies used in the research. Specifically defined, “it focuses on collecting, analyzing, and mixing both quantitative and qualitative data in a single study or series of studies.”[12] Ultimately, Creswell concludes that a mixed-methods approach provides the best way not only to assess a population as a whole but also to accumulate more granular data for subgroups or even individuals.[13] In her study using mixed-methods assessment in the form of pre-/post-tests and interviews, Diana Wakimoto was able to explore the impact of a credit course on students’ learning and satisfaction.[14]
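
To illustrate what “mixing” the two strands of data can look like in practice, here is a minimal sketch that pairs hypothetical score gains with hand-coded themes from open-ended responses. Every student ID, score, and theme code is invented for illustration; a real study would use its own instrument and coding scheme.

```python
# A minimal sketch of pairing quantitative score gains with qualitative codes
# from open-ended responses or interviews. All data and theme codes are
# invented for illustration.
from collections import defaultdict

students = [
    # (id, pre score, post score, themes coded from open-ended responses)
    ("s01", 50, 80, ["confidence", "database_use"]),
    ("s02", 55, 72, ["confidence"]),
    ("s03", 48, 65, ["citation_anxiety"]),
    ("s04", 60, 88, ["database_use", "search_strategy"]),
]

# Quantitative strand: average gain across the group.
gains = [post - pre for _, pre, post, _ in students]
print(f"mean gain: {sum(gains) / len(gains):.1f} points")

# Qualitative strand, mixed with the quantitative one: how often each theme
# appears, and the average gain of students who mentioned it.
by_theme = defaultdict(list)
for _, pre, post, themes in students:
    for theme in themes:
        by_theme[theme].append(post - pre)

for theme, theme_gains in sorted(by_theme.items()):
    avg = sum(theme_gains) / len(theme_gains)
    print(f"{theme}: n={len(theme_gains)}, mean gain {avg:.1f}")
```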

Despite the growing evidence that mixed-methods assessment is the best route to take, it is important to note that institutional culture has a large impact on the type and success of any assessment initiative. Large-scale assessments are often unrealistic for many libraries, and smaller-scale assessments are highly dependent on personal relationships with individual departments, programs, or faculty. However, assessment is a necessity, and quality small-scale efforts can often lead to larger-scale initiatives.

Endnotes

  1. Elizabeth Fuseler Avery, “Assessing Information Literacy Instruction,” in Assessing Student Learning Outcomes for Information Literacy Instruction in Academic Institutions, ed. Elizabeth Fuseler Avery (Chicago: Association of College and Research Libraries, 2003).
  2. Joan Kaplowitz, “A Pre- and Post-Test Evaluation of the English 3-Library Instruction Program at UCLA,” Research Strategies 4, no. 1 (1986).
  3. Elizabeth W. Kraemer, Shawn V. Lombardo, and Frank J. Lepkowski, “The Librarian, the Machine, or a Little of Both: A Comparative Study of Three Information Literacy Pedagogies at Oakland University,” College & Research Libraries 68, no. 4 (2007), doi: 10.5860/crl.68.4.330. <http://crl.acrl.org/content/68/4/330>
  4. Yvonne Mery, Jill Newby, and Ke Peng, “Why One-Shot Information Literacy Sessions Are Not the Future of Instruction: A Case for Online Credit Courses,” College & Research Libraries 73, no. 4 (2012), doi: 10.5860/crl-271. <http://crl.acrl.org/content/73/4/366>
  5. Joanna M. Burkhardt, “Assessing Library Skills: A First Step to Information Literacy,” portal: Libraries and the Academy 7, no. 1 (2007), doi: 10.1353/pla.2007.0002. <http://digitalcommons.uri.edu/lib_ts_pubs/55/>
  6. Bonnie J. M. Swoger, “Closing the Assessment Loop Using Pre- and Post-Assessment,” Reference Services Review 39, no. 2 (2011), doi: 10.1108/00907321111135475. <http://www.geneseo.edu/~swoger/ClosingTheAssessmentLoopPostPrint.pdf>
  7. Andrew Walsh, “Information Literacy Assessment: Where Do We Start?,” Journal of Librarianship and Information Science 41, no. 1 (2009), doi: 10.1177/0961000608099896. <http://eprints.hud.ac.uk/2882/1/Information>
  8. Thomas M. Haladyna, Writing Test Items to Evaluate Higher Order Thinking (Boston: Allyn and Bacon, 1997).
  9. Davida Scharf et al., “Direct Assessment of Information Literacy Using Writing Portfolios,” The Journal of Academic Librarianship 33, no. 4 (2007), doi: 10.1016/j.acalib.2007.03.005. <http://www.njit.edu/middlestates/docs/2012/Scharf_Elliot_Huey_Briller_Joshi_Direct_Assessment_revised.pdf>
  10. Michael Quinn Patton, Qualitative Research and Evaluation Methods (Thousand Oaks, CA: Sage Publications, 2002).
  11. John W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed. (Thousand Oaks, CA: Sage Publications, 2009).
  12. John W. Creswell and Vicki L. Plano Clark, Designing and Conducting Mixed Methods Research (Thousand Oaks, CA: Sage Publications, 2007): 5.
  13. Creswell, Research Design.
  14. Diana K. Wakimoto, “Information Literacy Instruction Assessment and Improvement through Evidence Based Practice: A Mixed Method Study,” Evidence Based Library and Information Practice 5, no. 1 (2010), https://ejournals.library.ualberta.ca/index.php/EBLIP/article/viewFile/6456/6447.