Rubrics - RAILS. Rubrics are powerful tools for assessment. The RAILS project is intended to help librarians create and use rubrics for information literacy assessment. To this end, RAILS serves as a clearinghouse for information literacy rubrics. Existing RAILS rubrics are grouped by topic and/or by creator and are accessible using the navigation links on the right. Any of these rubrics can be modified and saved by librarians; librarians can also upload new rubrics. These rubrics have been submitted by volunteers and are not perfect.

The Evolution of a Testing Tool for Measuring Undergraduate Information Literacy Skills in the Online Environment | Mulherrin | Communications in Information Literacy.

National Institute for Learning Outcomes Assessment. Announcement Debra Gilchrist and Megan Oakleaf, two leaders in librarianship and assessment, document the ways librarians contribute to campus efforts to assess student learning. The paper includes a variety of examples of institutions that have developed student learning assessment processes. Paper Abstract The authors argue that librarians, both independently and in partnership with other stakeholders, are systematically and intentionally creating learning outcomes, designing curriculum, assessing student achievement of learning goals, using assessment results to identify practices that impact learning, and employing those practices to positively impact the student experience. Focusing on information literacy as a student learning outcome, Gilchrist and Oakleaf begin by outlining the ideas behind information literacy and how it connects with general education, credit course, and discipline outcomes.


Bloom's taxonomy

Goals, Objectives and Outcomes › Assessment Primer › Assessment › University of Connecticut. Outcomes Pyramid The assessment literature is full of terminology such as “mission”, “goals”, “objectives”, “outcomes”, etc., but lacks consensus on the precise meaning of each of these terms. Part of the difficulty stems from changes in approaches to education: shifts from objective-based to competency-based to outcomes-based education have taken place over the years, with champions of each espousing the benefits of a different point of view.

The Outcomes Pyramid shown below presents a pictorial clarification of the hierarchical relationships among several different kinds of goals, objectives, and outcomes that appear in the assessment literature. The 'pyramid' image is chosen to convey the fact that increasing complexity and level of specificity are encountered as one moves downward. The pyramid structure also reinforces the notion that learning flows from the mission of the institution down to the units of instruction. Outcomes Pyramid Definitions: Objectives.

2005 - NPEC Sourcebook on Assessment: Definitions and Assessment Methods for Communication, Leadership, Information Literacy, Quantitative Reasoning, and Quantitative Skills.
2009 - Writing Information Literacy Assessment Plans: A Guide to Best Practice.
2008 - The information literacy instruction assessment cycle.
2001 - Assessing Information Literacy Skills: Developing a Standardized Instrument for Institutional and Longitudinal Measurement.

Authentic Assessment Toolbox Home Page. Welcome to the Authentic Assessment Toolbox, a how-to text on creating authentic tasks, rubrics, and standards for measuring and improving student learning. Inside, you will find several chapters. A good place to start: in this chapter I identify the characteristics, strengths, and limitations of authentic assessment, and compare and contrast it with traditional (test-based) assessment. Why has authentic assessment become more popular in recent years?

When can it best serve assessment needs? After a brief overview, follow a detailed, four-step process for creating an authentic assessment. All good assessment begins with standards: statements of what we want our students to know and be able to do. Authentic assessments are often called "tasks" because they include real-world applications we ask students to perform. To assess the quality of student work on authentic tasks, teachers develop rubrics, or scoring scales. Also included is a guide to constructing good multiple-choice tests to complement your authentic assessments.

Assessments of Information Literacy available online (Information Literacy Assessments):

Information Competency Proficiency Exam - Gavilan College - "Use or modification is permitted as long as acknowledgement is made to the Bay Area Community Colleges Information Competency Assessment Project."

Information Literacy Survey - Worcester Polytechnic Institute - the assessment begins on p. 70 of the document.

Information Competency Assessment Instrument - developed by Rodney Marshall, Eastern Illinois University - a 40-item scale assessing information users' attitudes and practices; see the paper describing the instrument's development.

Skills Assessment - Stanford University - each of six modules includes 10 multiple-choice questions as a final skills assessment for the module.

Information Literacy Skills Assessment - Millikin University - 15 multiple-choice questions; scroll down to Appendix A.

Information Literacy Survey - San Jose State University - 11 multiple-choice questions; scroll down to Appendix A to view the assessment.

Information Literacy Assessment: Standards-based Tools and Assignments - Teresa Y. Neely - Google Books.

Catching up with information literacy assessment: Resources for program evaluation, by Cheryl L. Blevens. In March 2009, the Higher Education Research Institute at UCLA released its triennial report on national faculty norms for 2007–08, based on survey responses from 22,562 full-time faculty at 372 four-year colleges and universities nationwide.1 For 97.2 percent of the faculty surveyed, helping students to evaluate the quality and reliability of information was a top goal for undergraduate education.

Information literacy is the evaluation of the quality and reliability of information, paired with the ability to use that information legally and ethically. Academic librarians charged with developing a reliable assessment tool for their libraries’ information literacy instruction programs can draw on a wealth of readily available resources, including both commercially and institutionally developed products used by librarian peers across the country and around the world.

THRESHOLD ACHIEVEMENT About the Test. The Threshold Achievement Test of Information Literacy (TATIL) will be field tested in the 2015-16 academic year. The test will be organized into four modules, each designed to be administered separately. The content for each module is inspired by one or more of the frames of the ACRL IL Framework. The first module to be field tested will be Evaluating Process & Authority. This module is based on outcomes and performance indicators that draw from the ACRL IL frames called “Information Creation as a Process” and “Authority is Constructed and Contextual.”

Below you can see how each of the six ACRL IL frames is connected to the TATIL modules. After field testing Evaluating Process & Authority in early fall 2015, the other modules will be field tested in the following order: Strategic Searching (late fall 2015), Research & Scholarship (early spring 2016), and The Value of Information (late spring 2016).

TATIL is being created by a team of experts, including librarians and professors.

TRAILS: Tool for Real-time Assessment of Information Literacy Skills.

Madison Assessment | Information Literacy Test. The Information Literacy Test (ILT) is a computerized, multiple-choice test developed collaboratively by the JMU Center for Assessment and Research Studies (CARS) and JMU Libraries. It is designed to assess the ACRL Information Literacy Competency Standards for Higher Education. These standards call for an information literate student to:

Determine the nature and extent of the information needed;
Access needed information effectively and efficiently;
Evaluate information and its sources critically and incorporate selected information into his or her knowledge base and value system;
Use information effectively to accomplish a specific purpose;
Understand many of the economic, legal, and social issues surrounding the use of information, and access and use information ethically and legally.

The ILT measures Standards 1, 2, 3, and 5. A PDF of the Information Literacy Test (ILT) Test Manual and a demo of the test questions are available.

Information Literacy Test | Project SAILS | Information Literacy Assessment.

Rubrics - Information Literacy Tools @ Pitt - LibGuides at University of Pittsburgh. Most educators define a rubric as a scoring tool that lists the criteria for performance of specific tasks at defined levels. In its simplest form (for a specific assignment), a rubric is usually composed of four parts and laid out in a grid. In the context of information literacy, a rubric can be developed for various levels and audiences, from a simple lesson plan to a definition of the entire information literacy program. The ULS Information Literacy and Assessment Working Group has created several rubrics that faculty and librarians can use to incorporate appropriate structure and assessment into the development of their instructional sessions.

These rubrics are based on the ACRL Standards and the eight skill sets identified by the SAILS (Standardized Assessment of Information Literacy Skills) test currently in use at the University of Pittsburgh.

Rethinking Classroom Assessment with Purpose in Mind. InformationLiteracy.pdf.

Information Literacy VALUE Rubric | Association of American Colleges & Universities. The VALUE rubrics were developed by teams of faculty experts representing colleges and universities across the United States through a process that examined many existing campus rubrics and related documents for each learning outcome and incorporated additional feedback from faculty. The rubrics articulate fundamental criteria for each learning outcome, with performance descriptors demonstrating progressively more sophisticated levels of attainment. The rubrics are intended for institutional-level use in evaluating and discussing student learning, not for grading.

The core expectations articulated in all 16 of the VALUE rubrics can and should be translated into the language of individual campuses, disciplines, and even courses. The utility of the VALUE rubrics is to position learning at all undergraduate levels within a basic framework of expectations such that evidence of learning can be shared nationally through a common dialogue and understanding of student success.

Definition. Middle States Commission on Higher Education.