Abstract

It is not uncommon for practitioners, myself included, to bemoan the absence of good-quality research upon which we can base our day-to-day decisions. While the stereotypical message of systematic reviews is that 'more research is needed', thereby securing future researcher longevity, this can lead to a very unfair perception of the usefulness of such reviews. In a quest to unearth the evidence base for information practice, to feed into future courses, articles and columns such as this one, I have recently identified three review-type publications, with varying degrees of systematicity, that could rightly claim a place among a busy health information practitioner's Desert Island essentials. Of course, all this assumes that you have access to a broadband Internet connection rather than the ubiquitous message in a bottle!

Because of the wide-ranging coverage of these three reviews we shall momentarily depart from our previous practice of starting with a problem-based scenario. Nevertheless, we would welcome follow-up correspondence from any health information professional who finds him- or herself using these reviews to inform their specific library practice. Interestingly, two of these reviews originate from mainstream evidence-based practice, while the other has its origins in the general library literature, a pedigree unfortunately betrayed by a dearth of explicit methods.

The first review, on knowledge management in clinical practice, describes itself as a 'systematic review of information seeking behaviour in physicians'.1 Apart from querying the implicit equivalence of 'clinical practice' with 'physicians' and the aggrandizement of information retrieval into the sexier term 'knowledge management', how does this measure up as an example of the systematic review art form? More importantly, what are its implications for those seeking to accommodate the information-seeking behaviour of such clinicians?

This study aims to update a seminal, although quasi-traditional, review conducted by the editor of the BMJ, Richard Smith, in 1996.2 The authors, the lead among whom was a former Director of the Centre for Evidence-based Medicine in Oxford, selected trials or reviews that covered such wide-ranging topics as 'information seeking behaviour, frequently asked questions, information needs, clinical questions, information sources, or knowledge resources'. Their review only included studies where physicians were either the primary or the sole focus. Only clinical information needs were investigated, excluding the need for patient data.

The authors searched the Cochrane Library, MEDLINE and EMBASE from 1966 to December 2001, at a broad level for physicians' questions in general and, more specifically, for terminology associated with information need. Specific author searches were included, as was a hand search of relevant references. Contact with leading authors was an additional, subsidiary method. After independently assessing the abstracts for relevance, the authors extracted data on the method(s) of data collection, sampling process, subjects and setting, data collection process and response rate, and sources of information. In particular, the authors looked for valid methods of data collection: the instrument used had been piloted before use, and/or a second researcher had overseen the data collection process, and/or the collected data had been independently reviewed.
Nineteen studies met the eligibility criteria: nine used a questionnaire and eight used interviews, with the remainder using record reviews or observation. Eight studies used random sampling, five used purposive sampling, four used convenience samples, one used a systematic sample and one a stratified sample. Twelve studies (63%) had a validated process of data collection. Thirteen of the 19 studies reported text sources as the primary source of information, four reported colleagues as the primary source, and one found electronic sources to be the primary source. Between 50 and 80% of physicians across studies used printed material as an information source. Information enablers included convenience of access, habit, reliability, quick use, and applicability. Barriers to information seeking included lack of time to search, the huge amount of material, and forgetfulness.

The authors acknowledge two weaknesses in their review: the 'drawbacks of the individual studies' and 'the inherent differences of the studies'. In particular, concern about responder bias arises from the heavy dependence on mail surveys, where information seeking tends to be overestimated, information needs under-represented, and resource use and preferences misperceived. Other methodological concerns centre on a general failure to use randomization. An interesting observation made by the reviewers is that frequent recourse to colleagues may in fact be a 'psychological need for reassurance as well as the need for tacit knowledge, which usually embodies the experiential knowledge of another individual'. If this is indeed the case, then it is clearly an unequal comparison to match information from colleagues against the output of an information service. The review concludes that a 'remarkably low number of studies' have explored the information-seeking behaviour of clinicians (i.e. physicians). The review's bottom line is that information provision needs to be 'useful, relevant and fast'.

The second review, Use and Users of Electronic Library Resources: An Overview and Analysis of Recent Research Studies,3 a report for the United States Council on Library and Information Resources (CLIR), summarizes and analyses more than 200 research publications published between 1995 and 2003. The report divides these publications into two 'tiers': Tier 1 comprises eight major ongoing studies (each represented by multiple publications), while Tier 2 comprises about 100 smaller-scale studies. Although the studies use a much broader variety of research methods, including observation, experiments and transaction log analysis, the high prevalence of surveys and interviews reported above is also apparent in this review. What is immediately apparent, however, is the absence of the explicit detailing of methods required by a systematic review methodology. Clearly the author has performed a considerable feat in assembling and analysing such an evidence base, yet some of its value is dissipated by the complete absence of a description of review methods. Indeed, for little extra effort this review could usefully be converted into a systematic review.

In the absence of clear methods, one has to focus, with a certain amount of trust, on the review's main findings. Primary criteria for the adoption of electronic resources are convenience, relevance and time savings. Differences can be observed across disciplines in terms of both usage patterns and preferences for print or electronic; a 'one size fits all' approach is clearly not appropriate.
Indeed, almost every discipline retains a vestigial need for some form of print access, although this is most marked in the humanities. E-books are still at an early phase of development and thus compare poorly with print. While print-style formats such as PDF prove popular, it is interesting to observe that subject experts are starting to develop innovative ways of using the collection, following hyperlinks to related articles and thus pursuing the thread of an argument rather than reading articles serially. Another observation is the need to accommodate two contrasting requirements: browsing a focused core of journals, especially for subject experts and for current awareness searching, and searching an article database by topic for all other purposes. In fact, most journal article readings are of articles within their first year of publication and yet, in line with the above, a sizeable minority of readings are of articles older than one year.

The final review, and by far the pick of our ad hoc trilogy, originates from the Centre for Clinical Effectiveness in Australia, itself a bastion of evidence seeking. This review, entitled 'Information finding & assessment methods that different groups of clinicians find most useful',4 seeks to address the research question: 'What evidence-based health care information finding and assessment methods do general practitioners, registrars, specialists, other medical practitioners, nurses and allied health care workers find most valuable in guiding their clinical decision-making?'. As indicated by the scope of this remit, the review aims to identify 'some valid generalizations about discipline-specific characteristics of the methods found to be most useful for accessing relevant information'. The context is to inform future dissemination strategies for clinical practice guidelines targeting specific clinical disciplines although, in addressing self-initiated information-seeking behaviours, it purposely excludes directed activities such as the dissemination of evidence-based guidelines.

The search strategy, covering only English-language articles from 1995 onwards, started from the Related Articles feature of PubMed, using it to generate a comprehensive search strategy of relevant text words and medical subject headings. Databases used included MEDLINE, PreMEDLINE, CINAHL, Current Contents and the Cochrane Library, along with sources less familiar to a UK audience, such as CINCH-Health, the Australasian Medical Index, APAIS-Health, ARCHI (Australian Resource Centre for Hospital Innovations), Health and Society, and the New York Academy of Medicine Grey Literature Report. The reviewers also scanned the reference lists of retrieved articles to identify other relevant articles.

Included articles had to meet predetermined inclusion criteria, including a comparative requirement to cover at least two different information sources. Purely academic populations, studies with fewer than 15 subjects and those lacking quantitative data were excluded. Independent selection of articles by two reviewers was followed by reading of the full text to determine relevance. A total of 32 articles were selected following this full-text examination. Each included article was appraised in tabular format, allowing the reviewers to examine the characteristics of studies by such aspects as country of origin, professional group, affiliation of the investigators, and date and length of study.
Only seven included studies investigated how clinicians assess the quality of the information they acquire; none of these constituted evidence-based critical appraisal. In addition to the demographic characteristics of the studies mentioned above, the review examined quality-related factors such as sample size, response rates, sample characteristics, recruitment and design. Identified variables included the age and experience of the participating clinicians, the need for which clinicians were seeking information, and their access to resources. Again, this review confirmed the high prevalence of questionnaire and survey instruments, highlighting the methodological issues associated with self-administration and self-report.

While cautioning about the diversity of included information sources, the reviewers observe that the 'most striking overall trend was towards people as the most preferred and/or frequently used information sources across clinical disciplines'. Books, incorporating the widest possible definition of the category, also consistently figured among preferred information sources. Overall, journals appeared to be 'less favoured than people as information sources across disciplines' and databases appeared to be 'less used or preferred information sources than any of the three preceding categories'. The review also examined education as an information source, together with an 'other' category, but the interested reader is referred to the full report for details of the smaller numbers of studies in these categories.

The reviewers observe that no studies have triangulated questionnaires, observation and documentary analysis, and that understanding of information-seeking preferences is therefore 'incomplete and requires further rigorous research'. Identified studies were generally of poor methodological quality, including the acknowledged limitations of self-report. Another deficiency was the failure to establish accurately the nature of the information need before investigating the information-seeking behaviour itself. Variables to be included in future research include need, access and assessment, and these should be linked to the information sourced by health professionals. Age, experience and gender must also be considered. Triangulated methodology is a priority, to counter the biases of behaviour elicited by a single instrument. In concluding, the reviewers observe that, while some clinician preferences for information sources have been indicated by the review, the research was 'generally not of sufficient quality or currency to guide the development of a comprehensive strategy for guideline dissemination'.

If you, as a health care librarian, are grappling daily with issues connected with the information-seeking behaviour of your doctors, or your wider clinician-based population, or contemplating the ideal paper-electronic mix, you can have little justification for the plea that 'there is no evidence'. While uncertainties around the quality of the evidence and its overall message may remain, it is as reasonable to expect that you will be familiar with the findings of these three important reviews as it is to expect your GP to be up to date with current best evidence. So put away your complete works of Shakespeare and your other chosen book and give attention to these findings! Or, if you are more person-orientated, at least find some Man Friday to share them with you!
