Sunday, March 22, 2009

Talk about questions...!

I'm looking through association statements about research in librarianship, and this morning I note the following, from a 1998 ACRL report entitled Academic Librarianship and the Redefining Scholarship Project: A Report from the Association of College and Research Libraries Task Force on Institutional Priorities and Faculty Rewards (the highlighting in red and footnotes are mine):


As previously noted, a major proportion of the work done by librarians (1) qualifies as scholarship.


Librarians have applied a wide range of quantitative and qualitative research methodologies in advancing the discipline's knowledge base. They engage in the scholarship of inquiry in order to apply their findings to the everyday challenges of providing library services (2). Especially important areas of inquiry for librarians include:

  • conducting citation studies (3);
  • analyzing how people seek and use information;
  • constructing means for organizing bodies of data and information, and designing methods for precise and efficient information retrieval;
  • establishing methods for evaluating the effectiveness of library services and processes;
  • researching the effects of environment and library practices on the "life span" of the various information media found in libraries;
  • discovering the communication modes and related factors that lead to the most effective reference interview, one that has the best chance of determining any given user's precise information needs;
  • preparing analytical bibliographies;
  • investigating the history of the book and recorded knowledge.


(1) I wonder if this assumes 'academic' rather than all librarians. I also wonder whether this statement is intended more as motivation than as a reflection of the actual state of our work. Argh - there is no reference here to any 'previous' notations.

(2) What's assumed here - the role of 'librarians' as academic researchers, or even academic librarians as academic researchers? How are academic librarians doing it right (and non-academic librarians, not), if this statement is actually true?

(3) This makes me wonder why we have not already developed some kind of tool for the evaluation of our work, if it's so focused on citation analysis!


From an earlier section of the same document:
In "Making a Place for the New American Scholar," Eugene Rice describes Ernest Boyer's Scholarship Reconsidered: Priorities of the Professoriate as having "called on faculty to move beyond the tired old 'teaching vs. research' debate . . . What moves to the foreground is the scholarly work of faculty, whether they are engaged in the advancing of knowledge in a field, integrating knowledge through the structuring of a curriculum, transforming knowledge through the challenging intellectual work involved in teaching and facilitating learning, or applying knowledge to a compelling problem in the community."(2) These four types of scholarship, which we shall call inquiry, integration, teaching, and application, provide a framework for considering how the activities of academic librarians may fit into the broader, more complete understanding of what constitutes academic work(1). Such a reexamination is very timely in light of the similar efforts being carried out in the Institutional Priorities and Faculty Rewards project by dozens of other professional associations on behalf of their academic disciplines.

(1) I understand this to be an attempt to categorize academic library 'scholarship' as a broad concept, with 'research' (here called 'inquiry') as a sub-category. How does this affect the model for EBL, where inquiry (the 'important questions for the profession,' to paraphrase Andrew Booth*) is intended as the driver, with integration encapsulating all the other aspects of application, including teaching and evaluation of outcomes?

* Booth, A. (2006). Clear and present questions: Formulating questions for evidence based practice. Library Hi Tech, 24(3), 355-368.

Saturday, March 21, 2009

All the classics: Bless me, EBLIP, for I have sinned

In school, during one of my first library courses at the Master's level, we were made to digest huge chunks of LIS classical research. Not just inches, but several feet of tree-killing articles were discussed; then, each quarter, a huge auditorium-full of students (160 of us!) took open-book tests that entailed answering large questions with reference to these works. We'd all carry these with us, indexed and tabbed, boxed and annotated, beginning the process of building a scholarly scoliosis along with strained eyes.

As an aside, I never will forget the wonderful feeling of finding myself among so many peers. It was like a lightning-jolt after years of working in small libraries as a paraprofessional who'd never attended a single library conference.

Today I am teaching (and continuing to learn) about critical evaluation of the LIS literature. I know I needed to be familiar with all those works (so many of them stay in my mind, foundational to our practice), but I wonder if in some ways this method of teaching did not also work to reinforce a practice that is simultaneously entirely human - and a shameful little secret.

Oh, we know (right?) that barriers to research in LIS include lack of time, support, and research training, and the perception that research isn't applicable. All the surveys agree, and it's true, I'm sure. But I also know (I confess here in this private place) that another barrier for me was ignorance, and maybe a sort of intellectual apathy: the well-entrenched habit of digesting huge chunks of information as if they were unquestionable, in just the way I've described, or in response to an immediate need. It's how I was taught to be. Or rather, I feel as if I was told to be critical, but in practice there was no time or space for criticism to occur, and the reality of my whole working life had to do with paying attention to politics in one form or another.

How do we get to a place where we can learn (or realize a need to learn)? I knew, vaguely, that 'real research' was out there, but felt that the place across the chasm I sensed had no real connection to me. In fact, I did not often have access to our literature, other than current issues of a number of library journals (Computers in Libraries, JMLA, Library Technology). I'd scan tables of contents, copy some for later reading, then cross my initials off and send the journal on its way to the next staff member. When I read something, it was of practical use to a problem I'd been confronting - usually programmatic accounts, 'how I done it good.'

Feeling myself a stranger to 'big R Research,' intimidated by statistics and fancy, terminology-laden graphs, I have in the past decided that the authors must have proven their hypothesis because it did, it surely did, appear that they knew what they were doing. So writing a paper in school, or (earlier, much earlier) reading about how another library made their bindery decisions, I seldom questioned anything but the potential application of the author's findings; the literature as a manual for practice, our sharing in the most natural way.

After all, it seems we seek first to find information from our peers, and even physicians report doing so. This other way of inquiry (research) comes after all else has been exhausted. And frankly, if for years you've been doing it the first way, and it has worked, why change? In fact, might the suggestion that you've been neglecting a professional responsibility introduce unwelcome guilt, piled atop having too much to do, stacked with the realities of political expedience?

Another thought, maybe a fragment. Reading bibliometric studies as I've been doing, I find so many claims to tracking the research of LIS - compiling and measuring topics, identifying domains, top journals, and even asking what our largest and most compelling questions might be. Undoubtedly these works are key to the future of LIS, and I hope so much to build on their achievements.

However, I wonder who the authors of those measured citations might be (academics, doctoral students, librarians who staff a busy reference desk?), and whether the questions identified might actually come from those who are already choir members, and I want to ask: Which of these reflect the important questions faced by practicing librarians? What is the evidence base for our question lists, our full methodological disclosure, our earnest owning-up to the assumptions and weaknesses of our work?

Does research mean something different to the practitioner than to the researcher? Do publications authored by practitioners reflect practice-oriented questions (and what percentage)? How well do the research trends mirror all our professional concerns? - that, I do not know, and may never know. I may not find out by reading the literature.

Speaking of classical LIS research, I wonder if Gloria Leckie's findings are equally applicable to our own profession, and if so, how we might know. Remember her work, tracing the assumptions of experts about novice understandings, where the experts dwelt in a universe of information surrounding their topic - speaking a different language, and leaving the novice to feel judged, inadequate, a gaping chasm uncovered? I read that and felt aware as I had not before, and carried that new thought to my teaching with library patrons. I nodded sagely and thought about all those teachers who'd sent their hapless students to the public library to ask for '50 facts' (no lie - this happens).

I never wondered 'til now if this is equally applicable to our profession, to the assumptions made by researchers, and to the sense of being inadequate, judged, separate, on the part of practitioners in the face of earnest lectures: Go forth, practitioner, and sin no more with shoddy decision support.

Leckie, G. J. (1996). Desperately seeking citations: Uncovering faculty assumptions about the undergraduate research process. Journal of Academic Librarianship, 22(3), 201-208.

Thursday, March 05, 2009

Update on the bibliometric evaluation tool!

Thanks so much to Lorie Kloda, editor of the EBLIP journal Evidence Summaries section, who asked her colleague, an expert in bibliometric research, to comment. Changes recommended by Vincent Larivière have been made, particularly concerning statistical evaluation.

New evaluation tool

Sunday, March 01, 2009

Evaluation of research

You may have read here that in our class we're practicing critical evaluation of published LIS research using the 'toolbox' provided to those who write evidence summaries for the online journal Evidence Based Library and Information Practice. This semester we've examined two bibliometric articles, and all of us have noted that the existing tools appear inadequate for evaluating this sort of research. This surprises me, because while it's a methodology (or series of methodologies) most often used by those in our profession, we are by no means alone in tracking publication trends, exploring issues associated with the use of published journal metrics (e.g., JCR) for promotion and tenure, and more. Short of a critical instrument for the evaluation of a systematic review, what's out there?

I imagine something like this would help us in a number of ways. A key question is: Is there a standard 'best practice' for the conduct of a bibliometric research study? From there, of course, we can ask whether a given work meets the standards - and on the heels of that, ask whether findings or methodologies (even flawed ones) can be used in another setting. I have seen nothing like this. With that in mind, I took notes this semester and last, and I've got a (beta!) question set that I plan to use for evaluation. If you're interested, I'd love feedback (and if you do know of something out there that's already in use - please let me know!). 

The tool I created has since been updated several times. Please see this post for the most current version, which has now been tested in use at the UNC Chapel Hill School of Library and Information Science, and at the Texas Woman's University School of Library and Information Studies in several different Master's and Doctoral-level courses.

Other notes on this tool and its review are here.