Thursday, October 23, 2008

Evidence summaries and a culture of criticism

As always, Thursday afternoon's EBL class sparked a number of thoughts and questions, exactly as intended; I could not ask for a better outcome. The focus of the class was the journal club discussion and critical evaluation of this article:
McKnight, S., & Berrington, M. (2008, March). Improving Customer Satisfaction: Changes as a Result of Customer Value Discovery. Evidence Based Library and Information Practice, 3(1), 33-52.
The process we've been using for this core part of the class is to rotate facilitators, each responsible for assigning a particular critical evaluation tool (the same ones provided to the EBLIP journal's Evidence Summary team) and then leading analysis of a previously chosen article. This one was chosen because Joanne Marshall will be talking about Servqual and Libqual next week; the authors of this work, however, chose to use a model of user satisfaction imported from marketing. I also frankly wanted to see how an original research article published in Evidence Based Library and Information Practice itself might fare.

We kept returning to the culture of LIS, dancing as we've done before around attitudes toward criticism of the literature, sensitive to the difficulty many in LIS have with negativity. Some feel that, in order to encourage a more robust literature, potential contributors should be approached more gently, more positively. An extension of this is an aversion to the cultures of some disciplines, where criticism involves spiked fingernails or even poisoned sledgehammers - no-holds-barred nastiness in which one (apparently) must engage in order to be taken seriously as a researcher.

It does seem to be the case, though, that in LIS generally (how do you like my 'weaselly' words, spoken as a true grad student!) we are averse to, unused to, and even hypersensitive to criticism. Several times, when I've discussed my EBL interests or focus with well-respected LIS professionals (including those with national and even international reputations), they've responded with what I could only term a sort of culturally entrenched insecurity. "When you talk about EBL," one said, "I feel that I've been doing it wrong, haven't done enough." These administrators do not refrain from conducting research - but if they feel this way, how do others feel?

As an aside, I regard EBL (or at least some of its tools, which aren't exclusive to EBL), particularly critical evaluation of the literature, as a cultural tool. One student said yesterday that, after close and organized readings and the writing of an 'evidence summary' paper, she will never read research the same way again. Setting aside for a moment the question of whether we do read the literature - isn't this changed view of it something we should be seeking as a profession? We must (I say, climbing once more onto a well-worn soapbox) engage with our own literature, as a community.

How do critical evaluation tools help us to deal with this sensitivity? Shall we decide that if librarians are touchy, tant pis, and move on anyway, knifing sharply with tools meant for different cultures more accustomed to such stern looks - potentially deterring cautious attempts by those who are alienated by such an approach?

To think about, here: is the culture from which EBP arises more patriarchal and positivist, despite recent trends toward the integration and evaluation of qualitative research? How might more feminist models find space in our adaptation of the EBP model? For consideration: the Rogerian model of argument, versus the more traditional and patriarchal adversarial model.

Are there studies of the effectiveness of the critical evaluation tools?

In class last Thursday, we used the HCPRDU mixed-method tool to evaluate ... and found it utterly cumbersome, with many N/A responses for a case study conducted in university libraries. In this UK library setting, a series of workshops was followed by changes in practices, collections, and services; patrons were surveyed using paired values for both the positive and negative aspects of the libraries, based on a model intended for use in the marketing sector.

We discussed what questions would be suited to the evaluation of a bibliometric study, or of a case study. I think about a question set one might apply to the results of a survey conducted on Medlib-L, or to a 'how I done it' article if, as may frequently be the case, there is a shortage of material.

Tuesday, October 21, 2008

If with each step you close half the distance to your goal

- you will never reach it. I always found this simple fact compelling and a bit discouraging, and as a child I tried to test it by bringing my fingertips closer, closer, closer - and they'd bump. I was incapable of the calibration, and knew it.
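(As a side note, the halving intuition is easy to make concrete. Here is a minimal numeric sketch, with hypothetical names of my own choosing, showing that after n steps the remaining gap is 1/2^n: vanishingly small, but never zero.)

```python
# Illustrative sketch of the halving paradox: each step covers half the
# remaining distance, so the gap shrinks geometrically but never reaches zero.
def remaining_gap(steps: int, distance: float = 1.0) -> float:
    """Distance still left after `steps` halvings of an initial gap."""
    for _ in range(steps):
        distance /= 2.0
    return distance

for n in (1, 5, 10, 20):
    # The gap equals 1 / 2**n: tiny after 20 steps, yet strictly positive.
    print(n, remaining_gap(n))
```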

In seeking to render research perfectly rigorous and our decisions error- or bias-free, we will never reach that state. How far from it are we, though? How close might we come, and is the exercise of calibration worth building new tools? In a practice setting, is such an exercise instead an expensive waste of time? Don't answer that, or any of these questions.

Or rather, don't answer them generically - do so with each decision, and then tell me. I look back, I confess, on 20 years of gut-driven decision making: marketing in a public library without any knowledge of marketing, surveys conducted with no real idea of what I was doing. In one, a survey about what music patrons might like to see in the circulating collection, a patron gently pointed out that I'd completely skipped country music. When I did such things, that I did them at all was viewed as sufficient; no one had time for a more stepwise approach. Everything I learned (or most of it) in library management, supervision, patron services, and education, I learned by trial and error.

I hear older voices, a little amused, whispering: We drank unpasteurized milk/went shoeless/found our own amusements, for years. We drove with not the slightest thought of car seats or seat belts. This new stuff! And with it, the impression is given that the 'new stuff' is unnecessary, extraneous, foolish, even wasteful.

I don't have an answer to that (usually) unvoiced opinion, either. I think I used to have one, but as I tend to do over time (must be that maturity thing), I've come to realize that soapbox-standing is not my metier. There is an evangelical aspect to EBL, such that a proportion of research articles and editorials about EBL are about persuasion ('for years, editors of leading LIS journals have deplored the quality of practitioner research...'). Shouldn't our discourse instead be about those questions - about our existing practices, about our perceived needs? Otherwise it's selling - not ice in Alaska, but ice someplace where there is no use for it.

Monday, October 13, 2008

Why bother?

Why should we bother providing evidence, examining the basis for our ideas - when decisions in the workplace (as elsewhere) are usually made from the gut?

Is evidence-based librarianship, as an initiative, a Sisyphean, pointless labor intended only to amplify the egos of those who champion it (and belittle the voices of the experts against which, it might be claimed, it is arrayed)?

Yesterday my blood pressure was raised, and raised again. No problem - one must get exercise.

Recently, reading the JESSE listserv, I found (yet another) diatribe by an LIS educator against Second Life (SL). Let it be said that I am biased here: I have worked in SL, paid by a grant (the most recent of four) from the National Library of Medicine. The ongoing debate has been between those who claim SL is an important area for exploration and those who argue it's a waste of time.

This person, who is presumably educating students, displayed breathtaking arrogance in attacking librarians' activities in SL. The argument opens validly enough: data is lacking. This point is fair and warranted (though I must say it would be difficult to provide much demographic data when you cannot even collect IP addresses).

If this person had left it there, the piece would have retained a precarious standing as critical inquiry, but unfortunately, that is not what happened. Instead, they continued, writing about how they, in their professional and personal life, are doing real things of 'real' value and find no need to engage in playacting. The implication is very clearly that those who are active in virtual worlds are not doing such things. The attack ends with the assertion that they doubt these things could be better done in a virtual world.

The argument described above is not made within the framework of formal rhetoric. It is not presented as the end result of academic inquiry. Instead, it is presented from the stance of expert, essentially an informally published editorial comment shared with peers in JESSE, a forum for educators in information science:

jESSE is a listserv discussion group that, since 1994, promotes discussion of library and information science education issues in a world-wide context. It addresses issues of curricula, administration, research, and education theory and practice as they relate to information science issues in general, and in general academia as the membership feels so moved. It is one of the primary outlets for faculty position announcements in LIS. Specific queries on lost resources and other minutia are welcome, as are broader questions for general discussion (from the listserv description).

It is presented and then signed with the author's full signature block, and it is available to anyone who takes the time to search the archives, including students and the press. At the very least, then, it is the public statement of someone acting in a public, professional persona, issuing an expert opinion in a forum designated for professional discussion.

As a doctoral student, I feel some pressure to be circumspect in writing about this. It is not my intention to position myself as an expert, or even to attempt to stifle free expression. But I am frustrated, and a little appalled, at the level of this discourse.

I draw parallels, perhaps unfair ones, to the level of discourse surrounding the presidential candidates - my second blood-pressure riser yesterday. In this public debate, opinion is accorded the status of fact (whatever that is!), and the argument becomes one of innuendo against half-baked assertion. There is no countering such personalized claims in the current political atmosphere, so the entire discussion becomes empty sputtering, name calling, baseless animus, replicated caricature.

I am fairly certain this is not the way to move ahead. We can debate what will benefit our profession and argue over direction, but practices like these, placed upon the table for examination, deserve no credence in the profession.