In school, during one of my first library courses at the Master's level, we were made to digest huge chunks of classical LIS research. Not just inches but several feet of tree-killing articles were discussed, and then each quarter a huge auditorium full of students (160 of us!) took open-book tests that entailed answering large questions with reference to these works. We'd all carry this material with us, indexed and tabbed, boxed and annotated, beginning the process of building a scholarly scoliosis along with strained eyes.
As an aside, I will never forget the wonderful feeling of finding myself among so many peers. It was like a lightning jolt after years of working in small libraries as a paraprofessional who'd never attended a single library conference.
Today I am teaching (and continuing to learn) about critical evaluation of the LIS literature. I know I needed to be familiar with all those works (so many of them stay in my mind, foundational to our practice), but I wonder if, in some ways, this method of teaching did not also reinforce a practice that is simultaneously entirely human - and a shameful little secret.
Oh, we know (right?) that barriers to research in LIS include lack of time, lack of support, lack of research training, and perceptions about applicability. All the surveys agree, and it's true, I'm sure. But I also know (I confess here in this private place) that another barrier for me was ignorance and maybe a sort of intellectual apathy: the well-entrenched habit of digesting huge chunks of information as if they were unquestionable, in just the way I've described, or in response to an immediate need. It's how I was taught to be. Or rather, I feel as if I was told to be critical, but in practice there was no time or space for criticism to occur, and the reality of my whole working life had to do with paying attention to politics in one form or another.
How do we get to a place where we can learn (or realize a need to learn)? I knew, vaguely, that 'real research' was out there, but that place lay across a chasm and seemed to have no real connection to me. In fact, I did not often have access to our literature beyond current issues of a handful of library journals (Computers in Libraries, JMLA, Library Technology). I'd scan tables of contents, copy a few articles for later reading, then cross my initials off and send the journal on its way to the next staff member. When I did read something, it was of practical use to a problem I'd been confronting - usually programmatic accounts, 'how I done it good.'
Feeling myself a stranger to 'big R Research,' intimidated by statistics and fancy, terminology-laden graphs, I have in the past decided that the authors must have proven their hypothesis because it did, it surely did, appear that they knew what they were doing. So whether writing a paper in school or (earlier, much earlier) reading about how another library made its bindery decisions, I seldom questioned anything but the potential application of the author's findings: the literature as a manual for practice, our sharing in the most natural way.
After all, it seems we seek information first from our peers - even physicians report doing so. The other mode of inquiry (research) comes after all else has been exhausted. And frankly, if for years you've been doing it the first way, and it has worked, why change? In fact, might the suggestion that you've been neglecting a professional responsibility introduce unwelcome guilt, piled atop having too much to do, stacked with the realities of political expedience?
Another thought, maybe a fragment. Reading bibliometric studies as I've been doing, I find so many claims to tracking the research of LIS - compiling and measuring topics, identifying domains, top journals, and even asking what our largest and most compelling questions might be. Undoubtedly these works are key to the future of LIS, and I hope so much to build on their achievements.
However, I wonder who the authors of those measured citations might be (academics, doctoral students, librarians who staff a busy reference desk?), and whether the questions identified might actually come from those who are already choir members, and I want to ask: Which of these reflect the important questions faced by practicing librarians? What is the evidence base for our question lists, our full methodological disclosure, our earnest owning-up to the assumptions and weaknesses of our work?
Does research mean something different to the practitioner than to the researcher? Do publications authored by practitioners reflect practice-oriented questions (and in what percentage)? How well do research trends mirror the full range of our professional concerns? That, I do not know, and may never know. I may not find out by reading the literature.
Speaking of classical LIS research, I wonder if Gloria Leckie's findings are equally applicable to our own profession, and if so, how we might know. Remember her work, tracing faculty experts' assumptions about novice understandings - the experts dwelling in a universe of information surrounding their topic, speaking a different language, and leaving the novice feeling judged and inadequate, a gaping chasm uncovered? I read that and felt aware as I had not been before, and carried that new thought into my teaching with library patrons. I nodded sagely and thought about all those teachers who'd sent their hapless students to the public library to ask for '50 facts' (no lie - this happens).
I never wondered until now whether the same dynamic applies within our profession - to the assumptions made by researchers, and to the sense of being inadequate, judged, and separate on the part of practitioners in the face of earnest lectures: Go forth, practitioner, and sin no more with shoddy decision support.
Leckie, G. J. (1996). Desperately seeking citations: Uncovering faculty assumptions about the undergraduate research process. Journal of Academic Librarianship, 22(3), 201-208.