You may have read here that in our class we're practicing critical evaluation of published LIS research using the 'toolbox' provided to those who write evidence summaries for the online journal Evidence Based Library and Information Practice. This semester we've examined two bibliometric articles, and everyone has noted that the existing tools appear inadequate for evaluating this kind of research. That surprises me, because while bibliometrics is a methodology (or family of methodologies) most often used by those in our profession, we are by no means alone in tracking publication trends, exploring issues associated with the use of journal citation data (e.g., JCR) for promotion and tenure, and more. So, short of a critical appraisal instrument designed for systematic reviews, what's out there?
I imagine a tool like this would help us in a number of ways. A key question is: is there a standard 'best practice' for conducting a bibliometric research study? From there, of course, we can ask whether a given work meets that standard, and, on the heels of that, whether its findings or methodologies (even flawed ones) can be used in another setting. I have seen nothing like this. With that in mind, I took notes this semester and last, and I've put together a (beta!) question set that I plan to use for evaluation. If you're interested, I'd love feedback (and if you know of something out there that's already in use, please let me know!).
The tool I created has since been updated several times. Please see this post for the most current version, which has now been tested in use at the UNC Chapel Hill School of Library and Information Science, and at the Texas Woman's University School of Library and Information Studies in several different Master's and Doctoral-level courses.
Other notes on this tool and its review are here.