Three items in my feed over the past week or so: Peter Hernon and Candy Schwartz sound off on the need to teach assessment and evaluation in LIS programs (Library & Information Science Research, in press); an Inside Higher Ed post by Barbara Fister discusses the expectation gaps between faculty and academic library directors, and the lack of solid strategic plans for the library; and another Inside Higher Ed column focuses on the worrying trend of going by the numbers in outcomes evaluation in education.
Says Fister:
First, Ithaka surveyed library directors and found that only about a third of them agree with the statement “my library has a well-developed strategy to meet changing user needs and research habits.” Two thirds don't.
Responses cited above may be problematic; it's one of those questions where you read it and think 'yes, but' - as in, 'Yes, but my library is changing so rapidly that existing strategic plans are currently under revision. In fact, I'm late for a meeting,' or 'Yes, we had a plan, but then the budget got cut, and more cuts are on the way.' I'm not a library director, so I don't know: is it possible or practicable to place strategic planning in a template for action while simultaneously waiting for an axe to fall? I have not read the entire 43-page report, which appears to be of some importance. The key point of Fister's editorial is that we're in trouble if we don't do a better job of communicating with our stakeholders:
Half of the survey respondents couldn’t even venture a guess as to whether faculty were aware or not [of the library and its services], which either means libraries are doing a poor job of communicating the issues or faculty aren't listening.

Fister concludes with an anecdote about the spur-of-the-moment decision to physically dismantle the library's reference desk to see if better solutions can be found. Get off your swivel chairs and get a screwdriver, she insists - go do something now!
Meanwhile, Hernon and Schwartz push for assessment/evaluation content in LIS coursework, arguing that
With so much in the literature of library and information science on the culture of assessment or evaluation, we concur that the concepts and methods behind assessment and evaluation merit coverage and an expectation that students can apply what they are learning. However, each might involve experimental designs and anyone dealing with assessment (or evaluation for that matter) should know how to analyze and present findings to different stakeholders.

I see a tension here between the do-it-now school of thought and the academic camp, which urges a more rigorous examination of user needs.
Meanwhile, there's that concern mentioned previously about 'by the numbers' outcome assessment:
Experts in a field spoke to numbers of students, interviewed faculty, observed classroom lectures, and, using their own experience and expertise as backdrop, arrived at a holistic conclusion. There was nothing "scientific" about the process, but it proved remarkably successful. This is the accreditation that is universally acknowledged to have enabled American colleges and universities to remain independent, diverse, and the envy of the world.

In this op-ed, Bernard Fryshman reminds the reader that numbers in education are really an artifact of widget-production economies, a strategy that was snapped up eagerly by higher education and beyond.
Advocates persisted, and states, one by one, were convinced of the necessity to measure student learning. And measure they did! Immense amounts of money, staff time, and energy went into gathering and storing numbers. Numbers that had no relevance to higher education, to effectiveness, to teaching or to learning. "Experts" claimed that inputs didn't count, and those who objected were derided as the accreditors who, clipboard in hand, wandered around "counting books in the library."

Oh, right - that sounds familiar. But he's saying this had nothing to do, really, with human behavior in education or anywhere else. Do you see what I'm puzzling over here? These are truth claims: how the world is run, or should be; how things work; what will work. Amid recognition that there are problems, champions call for their solution to be adopted - and more of it. There are echoes here of the same historical tension found in the medical literature over the adoption of evidence-based medicine - and, going farther back, more of the same. I'm not saying any one of them is wrong. Maybe they all should be right. Where is the point at which these elements (and more?) mesh to meet these differing claims, which are each about the same elephant? Hmm.
I think I'm talking about the need for translational research.