
Undergraduate Use of Federated Searching: A Survey of Preferences and Perceptions of Value-added Functionality

C. Jeffrey Belliston, Jared L. Howland and Brian C. Roberts

Presented: March 2007 at the 13th National ACRL Conference in Baltimore, MD.

Citation Information: Belliston, C. Jeffrey, Jared L. Howland, and Brian C. Roberts (2007). Undergraduate Use of Federated Searching: A Survey of Preferences and Perceptions of Value-added Functionality. College & Research Libraries 68(6): 472–486.

Abstract

Randomly selected undergraduates at Brigham Young University, Brigham Young University-Idaho and Brigham Young University-Hawaii, all private universities sponsored by The Church of Jesus Christ of Latter-day Saints, participated in a study that investigated four questions regarding federated searching: (1) Does it save time? (2) Do undergraduates prefer it? (3) Are undergraduates satisfied with the results they get from it? (4) Does it yield higher quality results than non-federated searching? Federated searching was, on average, 11% faster than non-federated searching. Undergraduates rated their satisfaction with the citations gathered by federated searching 17% higher than their satisfaction using non-federated search methods. A majority of undergraduates, 70%, preferred federated searching to the alternative. This study could not ultimately determine which of the two search methods yielded higher citation quality. The study does shed light on assumptions about federated searching and will interest librarians in different types of academic institutions given the diversity of the three institutions studied.

Introduction

Library research remains a complex, convoluted process for many undergraduates, in spite of the advances promised by the digital age. In its final report, the University of California Libraries’ Bibliographic Services Task Force states, “We offer a fragmented set of systems to search for published information (catalogs, A&I databases, full text journal sites, institutional repositories, etc.) each with very different tools for identifying and obtaining materials. For the user, these distinctions are arbitrary.”1 Federated searching attempts to collocate the information found in these fragmented systems and to provide a single location in which to perform all library research. In this study, we investigated the assumptions that have been made about federated searching and studied undergraduates to determine whether federated searching resolves some of the issues raised in the Bibliographic Services Task Force’s final report.

In 2004, the Directors Council of the Consortium of Church Libraries and Archives (CCLA), consisting of four academic libraries and four special libraries sponsored by the Church of Jesus Christ of Latter-day Saints, licensed WebFeat’s federated search product for three years for all member institutions that wished to implement federated searching. About sixteen months prior to the expiration of the contract, the CCLA Directors Council requested data to assist in their decision concerning license renewal. We undertook this study to provide that data.

CCLA’s eight member libraries include four academic libraries serving undergraduates. These four libraries, at Brigham Young University (BYU), Brigham Young University-Idaho (BYUI), Brigham Young University-Hawaii (BYUH) and LDS Business College (LDSBC), have been the primary users of the licensed federated search technology. The study intended to gather data from all four institutions but, due to a poor response rate, LDSBC was dropped from the study. Although all participating universities have similar names and serve undergraduates, the environments are quite diverse (Table 1).

Table 1: Institutional Information
Library Institution Abbreviation Degrees Granted Student Population (FTE)
Harold B. Lee Library Brigham Young University BYU Bachelors, Masters, Doctorate 31,225
Joseph F. Smith Library Brigham Young University-Hawaii BYUH Bachelors 2,467
David O. McKay Library Brigham Young University-Idaho BYUI Associates, Bachelors 12,209

For this study, we asked randomly selected undergraduates to complete two hypothetical research assignments, using a different search method for each: one with federated searching and the other with non-federated searching. They were then asked to complete a questionnaire about their experience. The study was designed to answer the following questions for undergraduates:

  1. Does federated searching save time?
  2. Does federated searching satisfy students’ information needs?
  3. Do students prefer federated searching to the alternative of searching databases individually?
  4. Does federated searching yield quality results?

Because all of the CCLA institutions implemented federated searching differently, we designed the study to be implementation-neutral, thereby providing data on federated searching itself rather than on the WebFeat software.2

After compiling the results of this study, we presented the findings as a paper at the 13th National Conference of the Association of College and Research Libraries (ACRL) in March 2007. Prior to presenting our findings, we polled our audience concerning their assumptions about federated searching. This study tests the assumptions presented in the literature and, as a matter of interest, compares the assumptions of the ACRL audience with our findings.

Literature Review

End-user federated searching (sometimes known as broadcast searching, distributed searching, cross-search, metasearching, or parallel searching) of multiple databases stored by different companies in multiple locations is a relatively recent development. The concept of a single search of multiple databases goes back to at least 1966 when the Dialog service made possible the simultaneous searching of multiple discrete, proprietary databases. However, in contrast to the databases searched by current federated search products, the Dialog databases were (1) stored by a single company in a single location and (2) usually searched for an end-user by a librarian due to both the fee structure and the proprietary command-driven nature of the search interface. Roger K. Summit’s 1971 article on Dialog’s user interface and Stanley Elman’s various articles on the cost-benefit of Dialog examined this forerunner to federated searching.3

The majority of articles about today’s federated search technology tend to fall into four categories: (1) discussions of the desirability and/or difficulty of creating a robust federated search tool,4 (2) reports on one or more specific federated search implementations,5 (3) comparisons of federated search products currently on the market to each other and/or to Google Scholar,6 or (4) views on how to implement a subject-specific federated searching tool.7 Because these articles are theoretical, anecdotal, or comparative, they contain little data based on quantitative research.

The literature includes many explicit, and reasonable, assumptions about federated searching. The Serials Review column, “The One-Box Challenge: Providing a Federated Search That Benefits the Research Process,” edited by Allan Scherlen with contributions from five academic librarians, provides a recent example of assumptions made about federated searching. The editorial introduction to the column states, “Federated searching will certainly make some aspects of research easier, but will it make it better?”8 For contributor Marian Hampton, “[t]he benefit of metasearching is obvious – one simple interface for several sources …”9 Penny Pugh quotes the “minimal instruction” on West Virginia University’s federated search: “‘E-ZSearch provides a quick and easy way to search multiple databases at once.’”10 Frank Cervone writes, “the point of federated searching is to make searching as simple as possible …”11 Federated searching, then, is assumed to make research easier, provide a simple interface and require minimal training.

Others have pointed to inherent problems with federated searching as it is currently implemented: all citations cannot be viewed until the slowest database has returned its results, true de-duplication is impossible, and true relevancy ranking is unavailable. Rochkind (2007) comments: “Current library metasearch typically relies on searching multiple source repositories at once, in parallel, at the point of request and then merging the results.”12 The weaknesses of federated searching all stem from the vendors’ choice to implement it in this manner. If libraries instead compiled and indexed the metadata from all of their third-party database subscriptions and made that index searchable, true de-duplication and relevancy ranking would be possible. Results could also be returned much faster because the system would not have to wait on the individual database vendors.
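To make the query-then-merge approach Rochkind describes concrete, the following minimal Python sketch queries several simulated sources concurrently and merges their results once the slowest source responds. It is purely illustrative: the source names, delays, records, and the exact-title de-duplication are assumptions, not the behavior of WebFeat or any other product.

```python
# Illustrative sketch only (not any vendor's implementation): a minimal
# parallel "query-then-merge" federated search. Sources and records are invented.
import asyncio

# Simulated remote databases: each returns its citations after its own delay.
SOURCES = {
    "db_fast": (0.2, [{"title": "Stem cells and diabetes", "year": 2006}]),
    "db_slow": (2.0, [{"title": "Stem Cells and Diabetes", "year": 2006},
                      {"title": "Obesity and health risk", "year": 2005}]),
}

async def query(name: str):
    delay, records = SOURCES[name]
    await asyncio.sleep(delay)  # stands in for network and remote search time
    return [dict(r, source=name) for r in records]

async def federated_search():
    # All sources are queried in parallel, but the merged result set is only
    # available once the slowest source has responded.
    batches = await asyncio.gather(*(query(name) for name in SOURCES))
    merged = [record for batch in batches for record in batch]
    # Naive de-duplication on lowercased titles: it misses near-duplicates
    # (punctuation, abbreviation, metadata variants), which is one reason "true"
    # de-duplication is hard without locally indexed, normalized metadata.
    seen, deduped = set(), []
    for record in merged:
        key = record["title"].lower()
        if key not in seen:
            seen.add(key)
            deduped.append(record)
    return deduped

if __name__ == "__main__":
    for record in asyncio.run(federated_search()):
        print(record)
```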

Imperfect as it is, federated searching still has the significant potential benefit of saving time by requiring less searching. It also has the benefit of serendipitous discovery. Students may not know which databases to search for a particular topic, so a federated search engine that automatically selects appropriate databases helps students find materials they would not likely have found otherwise. Our study tested these assumptions by determining how much time is saved using a federated search, if undergraduates preferred it to traditional searching, if it satisfied their information needs and if federated searching yielded higher or lower quality results than non-federated searching.

Methodology

Research participants and data gathering. A random sample of currently enrolled undergraduate students at BYU, BYUI and BYUH received e-mail invitations to participate in a research project. To ensure that expectations for the study were conveyed consistently, participants received written, rather than oral, directions (Appendix A). Each student was randomly assigned one of two biology-related topics for a hypothetical research assignment. The written directions indicated which topic and search method (federated or non-federated) they were to use first to locate citations of journal articles that they felt best addressed the topic. Then, using the same user interface and the same set of seven databases, each student compiled a set of citations, copying and pasting them into a Google scratch pad displayed to the right of the browser window.

A proctor noted the time a participant began researching the first topic. When the participant indicated he or she had completed the research for the assigned topic, the proctor recorded the ending time, captured the collected citations into a Microsoft Word document with a filename indicating participant, topic, and method, and cleared the scratch pad. The process was then repeated for the other research topic, using the other search method, so each student created two citation sets for analysis (Appendix B). Finally, participants completed a questionnaire that asked about their satisfaction with the citations gathered by each method, along with the method they preferred and why (Appendix C). A total of ninety-five undergraduates from the three schools participated (Table 2).

Table 2: Summary of Participants (n = 95)
Question 1 First Question 2 First
Federated First 26 24
Non-Federated First 24 21

Neutral interface. For both topics and both search methods, students were presented with the same set of seven databases. We selected databases that (1) were available at all three institutions participating in the study and (2) would include biology information. We also noted that, on their subject pages, subject librarians at BYU included an average of just over six databases to be searched with a federated search. Assuming that this figure, based on the subject librarians’ experience, is close to the optimal number of databases to search simultaneously, we included seven in our research protocol. These were Academic Search Premier (EBSCO), Agricola (EBSCO), BIOSIS Previews (ISI), CINAHL (EBSCO), MEDLINE (EBSCO), Research Library (ProQuest) and Web of Science (ISI).

For the non-federated searches, students were given a bulleted list of links to the databases, each labeled with the database name. For the federated searches, the list of resources being searched appeared beneath the search box. The default settings for the federated search, as well as the defaults for the individual databases, were the same for all participants, so that everyone used the same interface and received the same results for an identical search.

Citation set handling. We created a master spreadsheet in Microsoft Excel in which we recorded each participant, topic-method combination, start and stop times, and the questionnaire responses. The citation sets were in a variety of formats because they had been copied from the federated search results pages, the native interface brief results pages, or the native interface detailed display pages. We normalized the citation sets by entering each into its own folder in the RefWorks bibliographic manager. As we did so, we removed duplicate citations and citations to resources other than articles.

To facilitate grading, we exported the citation sets from RefWorks back to Microsoft Word, formatted according to a custom RefWorks format created for that purpose. In addition to printing the Word files for the grader, we also used macros to create a master list of journals used in the citations and to parse the citation sets completely into a comma-delimited text file. We imported the master journal list into one Excel file and the parsed citation sets into another.
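As a rough modern analogue of that macro-driven step, the Python sketch below parses exported citation sets into a comma-delimited file and builds a master journal list. The one-citation-per-line “Journal | Year | Title” input format and the file names are assumptions for illustration; the study’s actual RefWorks export format and Excel macros differed.

```python
# Rough sketch under an assumed export format; not the study's actual macros.
import csv
from pathlib import Path

def parse_citation_file(path: Path):
    """Yield (journal, year, title) tuples from one exported citation set,
    assuming one 'Journal | Year | Title' record per line."""
    for line in path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        journal, year, title = (part.strip() for part in line.split("|", 2))
        yield journal, int(year), title

def build_outputs(citation_dir: Path, parsed_csv: Path, journal_list: Path):
    journals = set()
    with parsed_csv.open("w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["citation_set", "journal", "year", "title"])
        for path in sorted(citation_dir.glob("*.txt")):
            for journal, year, title in parse_citation_file(path):
                writer.writerow([path.stem, journal, year, title])
                journals.add(journal)
    # Master journal list, later annotated with impact factor and peer-review status.
    journal_list.write_text("\n".join(sorted(journals)) + "\n", encoding="utf-8")

if __name__ == "__main__":
    build_outputs(Path("citation_sets"), Path("parsed_citations.csv"), Path("journals.txt"))
```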

Analysis of citations. To gather different perspectives on quality, each citation set was judged using two rubrics: one created by librarians and consisting of quantitative measures, and a more qualitative one approved by a faculty member in BYU’s Physiology and Developmental Biology department (Appendices D and E). The quantitative criteria in the librarian-created rubric were the journal impact factor, the proportion of citations from peer-reviewed journals, and the timeliness of the articles. While timeliness is not critical in all subject areas, it was deemed important for writing an adequate research paper on the two biology-related topics used in the study.

The impact factor, as reported in ISI’s Journal Citation Reports, and peer-review status, as determined by consulting Ulrichsweb, were recorded in the master journal list spreadsheet. We used Excel macros and formulas to calculate, for each citation set, the average impact factor, the proportion of peer-reviewed citations to total citations, and the average timeliness. The three criteria were weighted equally by normalizing each to a maximum value of ten. Each citation set’s composite quantitative quality score was the sum of its points on the three criteria; this score was then transferred to the master spreadsheet.
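The short Python sketch below illustrates the scoring arithmetic just described: each criterion is scaled so that its maximum observed value becomes ten points, and the three scaled values are summed into a composite score with a maximum of thirty. It is an illustration only; the study used Excel macros and formulas, and the sample values are invented.

```python
# Illustration of the composite quantitative quality score (maximum 30 points).

def normalize_to_ten(values):
    """Scale raw criterion values so the largest observed value becomes 10."""
    top = max(values)
    if top == 0:
        return [0.0 for _ in values]
    return [10.0 * v / top for v in values]

def composite_scores(impact_factors, peer_reviewed_proportions, timeliness):
    """Return one composite score per citation set: the sum of its three
    equally weighted, normalized criteria."""
    scaled = [normalize_to_ten(criterion) for criterion in
              (impact_factors, peer_reviewed_proportions, timeliness)]
    return [sum(parts) for parts in zip(*scaled)]

if __name__ == "__main__":
    # Three hypothetical citation sets.
    print(composite_scores(
        impact_factors=[2.4, 1.1, 3.0],               # average journal impact factor
        peer_reviewed_proportions=[0.9, 1.0, 0.75],   # peer-reviewed / total citations
        timeliness=[8.2, 6.5, 9.1],                   # average timeliness points
    ))
```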

The qualitative faculty-approved rubric was designed to follow more closely the practices used by faculty members in a college or university setting. The three criteria used in this rubric included relevance to the topic, quality of the individual citations, and quantity of citations. Using the rubric, one undergraduate, a senior majoring in Biology, assigned points to each of the 190 citation sets for each of the three criteria and summed them to create a composite quality score that we input into the master spreadsheet.

Statistical analysis. After gathering the data, we analyzed them using analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) tests. These tests were selected to isolate the variance attributable to each of the variables under study. The factors in each model were school (BYU, BYUH, BYUI), method (federated versus non-federated), order (the order in which a given student was asked to use federated and non-federated searching), and question (to ascertain whether the topic itself, though both were biological in nature, made a difference in the responses). Controlling for those factors, the dependent variables analyzed were the time taken to complete the hypothetical research assignment, participants’ satisfaction ratings of the citations found, preference for search method, and the two composite quality scores.
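As one possible modern equivalent of that analysis, the sketch below fits a single-outcome ANOVA with the four factors using Python’s statsmodels package. The CSV file and column names are assumptions made for illustration; the article does not specify the statistical software actually used.

```python
# Hedged sketch: per-outcome ANOVA over the four study factors.
# Assumes master.csv holds one row per participant-method observation with
# columns: school, method, order, question, minutes (hypothetical layout).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.read_csv("master.csv")

# Model time-to-complete as a function of the four categorical factors.
model = smf.ols("minutes ~ C(school) + C(method) + C(order) + C(question)",
                data=data).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares suit the unbalanced design

# For the multivariate case (several outcomes analyzed jointly),
# statsmodels.multivariate.manova.MANOVA.from_formula can be used instead.
```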

Results

Time savings. Statistically significant differences exist between BYU and the other two schools in the time required to complete the hypothetical assignments by the two search methods. While all schools recorded time savings with federated searching, the results were widely dispersed: BYUI students saved an average of only 11 seconds, BYUH students an average of 26 seconds, and BYU students an average of 4 minutes, 11 seconds. Only at BYU was the difference in time required between the two search methods statistically significant (Table 3). Across all participants, hypothetical assignments were completed, on average, 11% more quickly using a federated search than by searching databases individually.

Table 3: Comparison of Results Between Schools
For each measure below, the federated value is listed first and the non-federated value second. Scales: satisfaction 1–7 (7 highest); librarian-created rubric 0–30 (30 highest); faculty-approved rubric 0–9 (9 highest).

School | Participants | % Preferred Federated | Time to Complete (min.)⁵ | Satisfaction⁵ | Librarian-created Rubric⁵ | Faculty-approved Rubric⁵
BYUH | 27 | 81% | 21.17 / 22.14 | 5.57 / 4.13¹ ² | 17.71 / 19.35⁴ | 5.59¹ / 5.74³
BYUI | 21 | 52% | 23.10 / 23.54 | 5.41¹ / 5.48 | 17.67 / 18.08 | 6.38 / 5.79
BYU | 47 | 72% | 16.76¹ / 21.14² | 5.77 / 4.78² | 18.10 / 19.20⁴ | 6.15 / 6.31
ALL | 95 | 70% | 20.34 / 22.72 | 5.59 / 4.80² | 17.83 / 18.88² | 6.04 / 5.59

1 Statistically significant difference between schools (α = .05)
2 Statistically significant difference between methods (α = .05)
3 Marginally significant difference between schools (α = .10)
4 Marginally significant difference between methods (α = .10)
5 These are adjusted means rather than raw means; least squares means were used to produce more robust results given the differing sample sizes across the schools.

Comments about reasons for the choice of preferred method clearly indicate that time savings influenced some, but not all, students’ preferences. One BYU student who preferred federated search stated, “[Federated search] definitely saved time and was more convenient to use than the [non-federated search].” However, another BYU student who saved time with federated search but preferred non-federated search commented, “While [federated search] did go faster (which to many will be a plus and will sway them to choose [federated search]), I think if I did lean to one or the other, I’d actually pick [non-federated search]” (emphasis in original).

Satisfaction level of meeting information needs. Only BYU and BYUH showed a statistically significant difference in satisfaction with the citations found using the two search methods. Even including data from BYUI, where no statistically significant difference existed, participants were, on average, 17% more satisfied with the results found through federated searching.

When asked to explain the stated preference for a particular method, one BYUH student wrote, “I found that both were not very user friendly … I was frustrated and very tempted to just go back to good old ‘Google’!” Another BYUH student stated, “[Federated search] was much more understanding of the search terms I entered in. Instead of running into continuous blocks while searching[,] all of the results were posted from several search engines and I therefore did not feel nearly as frustrated … Having to only use one search engine at a time is annoying, simultaneously is definitely much better.”

Preferences. All three schools showed a preference for federated searching over non-federated searching, though BYUI showed only a marginal preference (52%). Overall, 70% of study participants preferred federated searching to non-federated searching.

There was a statistically significant (α = .05), though practically negligible, negative correlation (−0.18) between time to complete research and preference for search method. The direction of the correlation is as expected: the less time it took a student to find citations, the more likely the student was to prefer the faster method. It is nonetheless interesting that the correlation was so weak.

Reasons given by study participants who preferred federated search routinely included that it is faster, easier, simpler, and more efficient.13 One participant’s reason for preferring federated search begins with “Save time.” For this participant, it must have only seemed faster because the time spent using each search method was actually the same.

Extended comments included the following differing viewpoints. A BYUI undergraduate wrote, “… [Federated search] got right to the point. I found more useful information. [With the non-federated search] I had to do a longer search.” A BYU student who preferred non-federated search stated, “I felt like I had more options to choose from. Also [non-federated search] lent itself to more abstracts so you could see what the article was about without having to read it. With [federated search] I was relying more on the title which can sometimes be misleading.”

Quality of citations. Analysis of citation set quality using the librarian-created rubric revealed that, on average, citation sets gathered by using federated search scored a statistically significant 6% lower than those gathered by searching databases individually. Analysis using the faculty-approved rubric revealed no significant difference, statistically or in practice, in the quality of citation sets generated by the two methods.

More than one participant expressed a view that will surely resonate with librarians. When invited to provide additional comments, a BYUH student (who preferred non-federated search) wrote, “It was weird not being able to use the normal [search engines] I use such as Altavista, Google or Ask. Seems as if these web sites had more relevant info for my topic….” A fellow BYUH student (who preferred federated search) also answered the additional comments question by writing, “I love Google, but this certainly helps to narrow your information down to ‘good’ resources.”

ACRL 13th National Conference Presentation. We presented the results of this study at the ACRL 13th National Conference in Baltimore, both at a face-to-face session and at a virtual conference session. The face-to-face audience was polled using an i-Clicker personal response system; responses came from about one-third of the audience. The virtual conference audience was polled using the built-in polling features of the LearningTimes software. Attendees were asked about their assumptions concerning whether federated searching saves time, meets undergraduates’ information needs, is preferred by undergraduates, and yields citations of quality comparable to non-federated searching. The polling results from the two sessions were combined into one data set to form an informal picture of librarian assumptions about federated searching, which we compare with the findings of this study.

The ACRL conference audience agreed with the literature’s assumption that federated searching is “quick.” When asked if they believed federated searching saves time, 59% of the audience answered “Yes.” Data from our study indicated that students completed their searches, on average, 11% faster with a federated search than with a non-federated search.

We asked the ACRL conference audience to predict undergraduate satisfaction ratings using the same seven-point scale, where one means “Unsatisfied” and seven means “Very satisfied.” Graphs 1 and 2 show the differences between what we found in the study and the assumptions made by the audience.

The audience correctly anticipated that undergraduates prefer federated searching over the alternative; indeed, given that 97% of the audience assumed this preference, they may well have expected the preference to be even stronger than the 70% we found.

When it comes to quality of citations generated, 50% of the ACRL audience indicated that they expected the two search methods to be comparable. This expectation seems quite reasonable given that the same databases were available through both search methods. Only 11% expected federated search to yield higher quality results while 39% expected better results from searching in the native database interfaces.

Generally, the assumptions made by librarians and the literature seem to correspond closely with the findings of this study. Despite the weaknesses of federated searching, the strengths appear to outweigh the weaknesses in the minds of undergraduates.

Discussion

Overall, undergraduates appear to strongly prefer federated searching, to be more satisfied with the results found via federated searching, and to save time by using it. The evidence on the quality of the citations found by the two search methods, however, remains ambiguous. The librarian-created rubric showed that searching databases individually yields higher-quality citations than federated searching does, but that finding depends entirely on the definition of quality used in the rubric. Although the criteria themselves were meant to be objective, the selection of the criteria was not. In the end, quality is in the eye of the beholder.14 Because real-world educators are more likely to make a subjective judgment of quality – like the one embodied in the faculty-approved rubric – than to check the impact factors of the journals cited by students, it seems reasonable to give greater credence to the finding that both search methods produce citation sets of similar quality.

Future Studies

The statistical models employed in the analysis of data reported here can only be extrapolated to the populations at the participating schools. However, we speculate that our results will hold when applied to the general population.

This study controlled for, but did not address, the effect of implementation of a federated search engine on time savings, satisfaction, preferences, or citation quality. It is plausible that specific implementations could affect the results and either help or hinder a student’s experience. A study examining the effect of various possible implementations of federated searching is needed to determine an optimal implementation. We would suggest that the presence of an abstract with federated search results be an aspect of such a study since 12% of the participants specifically mentioned the usefulness of abstracts in more efficiently selecting better resources.

Finally, this study addressed undergraduate students only. More research is needed to determine the value of federated searching to graduate students and faculty. It is also probable that the results would vary depending on the discipline chosen for the hypothetical research topics, as some disciplines may lend themselves more readily to federated search capabilities than other disciplines.

Conclusion

It is clear that students prefer federated searching over traditional searching, are more satisfied with the results they get from federated searching, and save time when doing a federated search. The quality of the results also appears comparable to what they would get by searching databases individually. However, federated searching is not the panacea that many hoped would remove all the research hurdles placed before undergraduate students. Hopefully, metasearching technologies will continue to improve and will solve the problems of the current systems. Then perhaps our undergraduates will not always feel that they have to get back to “good old Google” to find what they are looking for.

Appendix A: Participant Directions

Directions – Forms 1-F and 2-F

Appendix B: Hypothetical Assignments

Form 1-F (Internal Use Only)
Net ID: _____________
Start Time 1: _________   End Time 1: _________
Start Time 2: _________   End Time 2: _________

Hypothetical Research Assignment #1

You’ve been given an assignment to write a 10 page research paper on the topic outlined below:

Ignoring any ethical issues involved, what is the current status of stem cell research for the treatment of diabetes?

Using the resources available to you, find enough citations to complete this assignment (copy citations to the “Scratch Pad” on the right-hand side of your screen).

After you have completed this hypothetical assignment, stop your work and notify the administrator.

Hypothetical Research Assignment #2

You’ve been given an assignment to write a 10 page research paper on the topic outlined below:

According to recent research, what are the health risks associated with being overweight?

Using the resources available to you, find enough citations to complete this assignment (copy citations to the “Scratch Pad” on the right-hand side of your screen).

After you have completed this hypothetical assignment, stop your work and notify the administrator.

Appendix C: Participant Questionnaire

  1. How satisfied were you with the citations you were able to discover using the first research method (Hypothetical Assignment #1)? (Circle One: 1=Unsatisfied to 7=Very satisfied)
    1 2 3 4 5 6 7
  2. How satisfied were you with the citations you were able to discover using the second research method (Hypothetical Assignment #2)? (Circle One: 1=Unsatisfied to 7=Very satisfied)
    1 2 3 4 5 6 7
  3. Which method did you prefer? (First)__ (Second)__
    Why?


  4. What other comments do you have about your searching experiences?

Appendix D: Librarian-Created Quality Rubric

Average Impact Factor Proportion of Peer Reviewed Average Timeliness TOTAL
Student #1 Federated
Student #1 Non-federated
Student #2 Federated
Etc.
  1. Average Impact Factor: The impact factor of the journal from each citation was gathered from the Institute for Scientific Information’s Journal Citation Reports database.
    The impact factors for the set of citations the student submitted were averaged. Any citation without an impact factor was assigned a value of zero and included in the average.
    The data was then normalized to a maximum value of 10.
  2. Proportion of Peer Reviewed: Whether the journal from each citation is peer reviewed was determined by checking Ulrich’s Periodicals Directory.
    The proportion of peer-reviewed articles cited by the student differs qualitatively from the impact factor because not all journals with impact factors are peer reviewed.
    The data was then normalized to a maximum value of 10.
  3. Average Timeliness: The average timeliness of the articles in the citations submitted by each student was recorded.
  1. 0–1 years old = 10 points
  2. 2 years old = 9 points
  3. 3 years old = 8 points
  4. 4 years old = 7 points
  5. 5 years old = 6 points
  6. 6 years old = 5 points
  7. 7 years old = 4 points
  8. 8 years old = 3 points
  9. 9 years old = 2 points
  10. ≥ 10 years old = 1 point

Appendix E: Faculty-Approved Quality Rubric

SCORE
Relevance All citations are related to the topic 3
Over half of the citations are related to the topic 2
1 or more citations are related to the topic 1
No citations are related to the topic 0
Quality* All citations are of good quality 3
Over half of the citations are of good quality 2
1 or more citations are of good quality 1
No citations are of good quality 0
Quantity There are enough citations to write a 10 page research paper 3
There are enough citations to write a 5–9 page research paper 2
There are enough citations to write a 1–4 page research paper 1
There are not enough citations to write a research paper 0
TOTAL:

*Good Quality: Citations reporting primary research results would be considered of higher quality than review articles or other types of articles. Citations from “scholarly” or peer-reviewed sources would be considered of higher quality than citations from “popular” or non-peer-reviewed sources.

Notes

  1. John Riemer and others, “Rethinking How We Provide Bibliographic Services for the University of California” (Bibliographic Services Task Force Final Report, December 2005), 2, http://libraries.universityofcalifornia.edu/sopag/BSTF/Final.pdf.
  2. We gratefully acknowledge the assistance of WebFeat in setting up the implementation-neutral interface used in the data gathering.
  3. Roger K. Summit, “Dialog and the User: An Evaluation of the User Interface with a Major Online Retrieval System” in Interactive Bibliographic Search: The User/Computer Interface, ed. Donald E. Walker, 83–94 (Montvale, New Jersey: AFIPS Press, 1971); Stanley A. Elman, “Cost-Benefit Experience with Dialog Full-Text Retrieval” in Proceedings of the American Society for Information Science, Volume 10. 36th Annual Meeting, Los Angeles, California, October 21–25, 1973, eds. Helen J. Waldron and F. Raymond Long, 54–55 (Westport, Connecticut: Greenwood Press, 1973); Stanley A. Elman, “Cost Comparison of Manual and On-Line Computerized Literature Searching,” Special Libraries 66, no. 1 (1975): 12–18.
  4. For example: Donna Fryer, “Federated Search Engines,” Online 28, no. 2 (2004): 16–19; Roy Tennant, “Cross-Database Search: One-Stop Shopping,” Library Journal 126, no. 17 (2001): 29–30; Rachel L. Wadham, “Federated Searching,” Library Mosaics 15, no. 1 (2004): 20.
  5. For example: Frank Cervone, “What We’ve Learned from Doing Usability Testing on OpenURL Resolvers and Federated Search Engines,” Computers in Libraries 25, no. 9 (2005): 10–14; Doris Small Helfer and Jina Choi Wakimoto, “Metasearching: The Good, the Bad, and the Ugly of Making it Work in Your Library,” Searcher 13, no. 2 (2005): 40–41; Anne L. Highsmith and Bennett Claire Ponsford, “Notes on Metalib Implementation at Texas A&M University,” Serials Review 32, no. 3 (2006): 190–194.
  6. For example: Xiaotian Chen, “MetaLib, WebFeat, and Google: The Strengths and Weaknesses of Federated Search Engines Compared with Google,” Online Information Review 30, no. 4 (2006): 413–427.
  7. For example: Debbie Campbell, “Federating Access to Digital Objects: PictureAustralia,” Program: Electronic Library and Information Systems 36, no. 3 (2002): 182–187; Geoff Daily, “A Case of Clustered Clarity,” EContent 28, no. 10 (2005): 44–45.
  8. John Boyd and others, “The One-Box Challenge: Providing a Federated Search That Benefits the Research Process,” Serials Review 32, no. 4 (2006): 247 (emphasis ours).
  9. Ibid., 249 (emphasis ours).
  10. Ibid., 251 (emphasis ours).
  11. Ibid., 253 (emphasis ours).
  12. Jonathan Rochkind, “(Meta)search Like Google,” Library Journal, February 15, 2007.
  13. Boyd, 252. Our participants’ terms closely parallel those of West Virginia University students as reported by Penny Pugh in her contribution to the cited column.
  14. A report on the Quality Metrics project at Emory University states, “There was a categorical rejection of the value – and, of the very possibility – of substantive quality indicators presented in the ratings system, in particular as these applied to books and journals. One philosophical objection was to the notion of the quantification of quality in such a reductive manner.” The “quantification” referred to the creation of a rating of search results “hypothetically conceptualized as computed through numerically weighing various factors such as academic peer comments, non-academic comments, number of times cited, and the like.” Rohit Chopra and Aaron Krowne, “Disciplining Search/Searching Disciplines: Perspectives from Academic Communities on Metasearch Quality Indicators,” First Monday 11, no. 8 (2006): under “A. Thematic explication of key findings”, “4. Quality as User Empowerment to Make Judgments about Quality.”