Concept Mapping:
An Innovative Approach to Digital Library Design and Evaluation

June P. Mead
jm62@cornell.edu

and

Geri Gay
gkg1@cornell.edu

Interactive Multimedia Group
Cornell University
Ithaca, New York


Background

The Interactive Multimedia Group (IMG) at Cornell University has been designing and researching the impact of collaborative multimedia technologies on educational environments for over ten years. In one of our current projects, the Making of America, the IMG is focusing on the development of tools to support access to digital libraries. This effort is complemented by parallel research into the cognitive, behavioral, and social implications of using these systems. Additionally, the IMG is working in concert with the Museum Education Site Licensing (MESL) project to investigate visual search strategies and the ways in which digital images can be used to enhance classroom instruction and scholarly research.

We have found that user acceptance and usability are major issues in the design of digital libraries. In essence, the IMG's research indicates that the design of digital libraries will be most successful when user-centered design includes the development and implementation of tools for human-computer interaction and problem-solving.

According to Fidel (1993), there is a growing trend in the use of qualitative methods in information retrieval (IR) research. She says that even though the defining characteristics of qualitative research--that it is open and flexible, holistic and case-oriented, inductive and noncontrolling--make its methodologies "best for exploring human behavior in depth, and thus of great relevance to IR research," too few studies have explored their application in-depth (Fidel, 1993, p. 219). The IMG is exploring qualitative research methods that employ rich data to examine complicated computer-mediated environments. These innovative approaches to user-centered design and evaluation respond to the challenges of tomorrow's digital library environments.

Beyond the technical design challenges of digital libraries and other information retrieval systems, our studies have demonstrated the need to address the social and psychological aspects of using on-line resources and collaboration. We have found that parallel development-evaluation approaches provide a wealth of analytical data that can be used to enhance the overall design process.

Our approach to building networked environments is to collect descriptive, qualitative data by using sensitive on-line computer tracking tools combined with innovative ethnographic research methods. We have found that technologically-rich environments demand equally rich data collection, analysis and interpretation tools--ones capable of examining human-computer interactions as well as the social and cognitive dynamics that develop during computer-mediated collaboration.

Today, there is little doubt that computer-mediated communication technologies are having a profound impact on how we approach and engage in the processes of education and research. However, our research has demonstrated that students often do not make effective use of available database resources and communication tools (Gay & Lentini, 1995; Gay, 1995; Gay & Grosz-Ngate, 1994; Gay & Mazur, 1993; Trumbull, Gay, & Mazur, 1992). But why?

We believe that user interfaces and systems must be developed that can accommodate diverse needs and search strategies. Likewise, users need to be trained in communication and information-seeking behaviors in order to use these powerful systems effectively. We have found that by studying the multiple ways in which users interact with these new systems, we can develop more responsive tools, programs, and technologies. Based on our studies, as well as the growing body of findings from computer-mediated communication research, we have begun to develop a coherent set of principles to guide the design and evaluation of digital libraries.

Introduction

This discussion centers on an innovative strategy called concept mapping. Trochim (1989) defines concept mapping as a structured conceptualization process relying on multivariate statistical analysis techniques. Essentially it is a process that enables the members of a group to visually depict their ideas on some topic or problem of interest. Concept maps have been used in a variety of ways: as a means for constructing theory, as a structure for designing and developing survey instruments, as a framework for database construction, as a first step in organizational planning, and as a basis for analyzing research results.

The IMG has been working with Cornell University's Digital Library Working Group, library users, faculty, students, staff, and search experts to construct a series of concept maps depicting selection and evaluation criteria for the design of an "optimized" digital library interface. This discussion focuses on the ways in which a comparison of concept maps (developed by different user groups) can inform user-centered design of digital libraries. In particular, we are interested in examining the features/capabilities digital library systems should provide, the types of on-line assistance which might facilitate researchers and scholars in using a digital library, and how access to digital libraries can be made more multidimensional.

The Making of America (MoA) Digital Library Project

The Making of America (MoA) project is an ambitious, collaborative digital library effort being undertaken by Cornell University and the University of Michigan. Once completed, the MoA digital collection will consist of the equivalent of 100,000 volumes, encompassing a variety of disciplines with bearing on the history, design, and construction of America's physical landscape: transportation, communications, and cultural environment. Books, articles, manuscripts, drawings, architectural blueprints, business records, maps, and other materials will be digitized. The hypothesis underlying the MoA project is that access to a networked, electronically integrated, Web-based collection will open new opportunities for interdisciplinary research.

In the MoA project, our primary research question is: To what extent and in what ways does access to a large body of thematically related digital material influence research and education? Our evaluation objectives are: to contribute to the design and development of a user interface for MoA in order to provide a framework for digitizing a wide range of media into a coherent and readily accessible digital library for a broad array of users; and to conduct focus groups with faculty, students, and librarians to address the multiple ways in which the MoA digital library is influencing scholarship and teaching in higher education.

A brief introduction to concept mapping and pattern matching

The exploratory use of concept mapping in the MoA project draws on concept mapping and pattern matching techniques developed by Trochim (1985; 1989). Trochim refers to concept mapping as an effective tool for constructing program theories with stakeholder groups, and says that concept maps can be used to graphically depict the stakeholders' understandings of important program features and processes. Concept mapping was used in our research to provide a framework for the design and evaluation of the MoA interface. The rationale for employing concept mapping is threefold: its procedures recognize the epistemological orientations of different stakeholders while allowing the evaluator to explore points of overlap between these orientations; it looks across differing agendas while helping the evaluator to discover and illuminate commonalities; and its procedures acknowledge the value of aggregating data of all types. According to Trochim (1989), concept mapping utilizes multidimensional scaling and cluster analysis to construct "a pictorial representation of the group's thinking" (p. 2). Trochim argues that the level of theoretical sophistication which stakeholders (digital library users in this case) bring to understanding how systems function should not be underestimated.

According to Trochim (1985), pattern matching involves the specification of a theoretical pattern, the acquisition of an observed pattern, and a demonstration of the linkages and/or disparities between the two. Trochim defines "pattern matches" in terms of the degree of correspondence between the theoretical and observed patterns. He says the value of "a match" is that if a pattern of results or outcomes, predicted theoretically, is found in the measured or observed data, the validity of any conclusions drawn about the program is strengthened, because the likelihood that such "a match" could have occurred by chance is very small. In other words, according to Trochim, pattern matches between theorized and actual outcomes provide convincing evidence that a system or program works.
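Pattern matching is often operationalized quantitatively as the degree of correlation between the two patterns. The sketch below illustrates this idea only (it is not Trochim's software): assuming each pattern is expressed as a vector of numeric importance ratings, the correspondence can be computed as a Pearson correlation. The rating values shown are hypothetical.

```python
# Quantifying a pattern match as the correlation between a theoretical
# pattern (predicted importance ratings) and an observed pattern
# (measured ratings). All numbers below are hypothetical illustrations.
from math import sqrt

def pattern_match(theoretical, observed):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(theoretical)
    mt = sum(theoretical) / n
    mo = sum(observed) / n
    cov = sum((t - mt) * (o - mo) for t, o in zip(theoretical, observed))
    st = sqrt(sum((t - mt) ** 2 for t in theoretical))
    so = sqrt(sum((o - mo) ** 2 for o in observed))
    return cov / (st * so)

# Hypothetical cluster-level importance ratings from two groups:
theorized = [4.2, 3.9, 3.1, 4.5]   # pattern predicted by the designers
observed  = [4.0, 3.7, 3.3, 4.6]   # pattern measured from users
print(f"pattern match r = {pattern_match(theorized, observed):.2f}")
```

A value near 1.0 would indicate that the theorized and observed patterns largely agree; a value near 0 would indicate no systematic correspondence.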

Kolb's (1991) interpretivist pattern matching study demonstrated the utility of concept maps in assessing program processes. In addition, Marquart (1990) has described how pattern matching can be used to assess the congruence between theory developed from concept maps and data from an evaluation. Our research contributes further to this exploration of pattern matching techniques by testing a three-tiered interpretivist approach to concept map-based pattern matching in which we plan to compare three sets of maps: one completed by the IMG, one by the Digital Library Working Group (DLWG), and one by the future users of the MoA digital library (i.e., Cornell faculty and graduate students).

Within the interpretivist context of this study, Trochim's ideas about pattern matching are being used not to test the program theory or assess cause, but rather as a way to enhance the evaluation findings. In this exploratory application of Trochim's pattern matching techniques, the question being asked is how the linkages and/or disparities between these three concept maps inform our design and evaluation of digital library interfaces.

Mead (1995) piloted the use of this interpretivist pattern match and concept mapping approach in her evaluation of a school-based substance abuse prevention program for children. What is unique about Mead's approach is that concept maps were used as guides for constructing a comprehensive framework within which to conduct all phases of the evaluation (i.e., instrument development, data collection, analysis, interpretation, and reporting). Mead found that linkages between the three levels of the program (which she identified as the program-as-designed, the program-as-implemented, and the program-as-experienced) indicated program strengths, while disjunctures pointed to program weaknesses and/or areas warranting improvement or modification.

The interpretivist pattern matching trial being employed in this study will assess the nature of fit between three sets of maps created by three different groups: the IMG, the DLWG, and the future users of the MoA digital library. For purposes of this study, matches are defined in terms of linkages between the three maps. The evaluator hypothesizes that congruencies between these three maps will indicate critical interface design features, while disparities are more likely to reflect idiosyncratic differences between the three groups which may or may not warrant interface design and evaluation consideration.

Developing a Framework for Digital Library Design and Evaluation

As employed in this study, the concept mapping process consists of three stages. In Stage 1, an IMG concept map was developed. In Stage 2, a DLWG map was constructed. In Stage 3 (in process), a map will be constructed by the future users of the MoA digital library. The steps involved in the concept mapping process are outlined below.

  1. The evaluator worked with members of the IMG staff, graduate students, university staff, and search professionals (n=16) to develop a set of maps depicting an optimized database search tool. The group brainstormed 93 statements, rated each statement on a 5-point scale of importance (with 5 being most important and 1 being least important), and sorted the statements into piles based on their similarity.
  2. The evaluator worked with the DLWG (n=6) to identify 47 statements for the concept mapping exercise. Six members of the DLWG participated in the sorting, rating, and interpretation of the maps. Their primary interest in participating in the concept mapping sessions was to develop a set of criteria for evaluating possible user interfaces for the Cornell digital library. Like the IMG group, the DLWG rated each statement on a 5-point scale of importance and sorted the statements into piles based on their similarity.
  3. The evaluator entered these data into the computer using Trochim's Concept System software and computed two sets of concept maps. Multidimensional scaling and cluster analysis procedures were applied to the data to create a series of concept maps depicting the IMG Map for an optimized database search tool, and the DLWG Map for selection and evaluation criteria for digital library user interfaces.
  4. Each group met to interpret the concept maps in conjunction with results of the multidimensional scaling and cluster analyses.
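The data aggregation underlying steps 1-3 can be sketched in a few lines. This is an illustrative simplification, not Trochim's Concept System software: assuming each participant's sort is recorded as a list of piles (lists of statement indices), the group similarity matrix counts how often two statements were sorted together; multidimensional scaling and cluster analysis are then applied to that matrix. The toy data below are hypothetical.

```python
# Minimal sketch of concept-mapping data aggregation: build the group
# co-occurrence (similarity) matrix from individual card sorts, and
# average statement ratings within a cluster. Toy data only.

def cooccurrence_matrix(sorts, n_statements):
    """Entry (i, j) counts how many participants placed statements
    i and j in the same pile. MDS/clustering would be run on this."""
    m = [[0] * n_statements for _ in range(n_statements)]
    for piles in sorts:
        for pile in piles:
            for i in pile:
                for j in pile:
                    m[i][j] += 1
    return m

def cluster_average(ratings, cluster):
    """Average importance rating of the statements in one cluster."""
    return sum(ratings[i] for i in cluster) / len(cluster)

# Two hypothetical participants sorting four statements into piles:
sorts = [
    [[0, 1], [2, 3]],      # participant A's piles
    [[0, 1, 2], [3]],      # participant B's piles
]
ratings = [4.4, 3.8, 3.0, 3.9]   # mean 1-5 importance rating per statement
sim = cooccurrence_matrix(sorts, 4)
print(sim[0][1])                                    # both sorted 0 and 1 together: 2
print(round(cluster_average(ratings, [0, 1]), 2))   # 4.1
```

In the actual study this aggregation, along with the multidimensional scaling and cluster analysis, was performed by Trochim's Concept System software (step 3).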

What have we accomplished thus far?

The IMG Map
The IMG group consisted of 16 participants: IMG staff and designers, search specialists, graduate students, and educators. The IMG participants brainstormed a set of 93 statements following the instruction to describe an "optimized database search tool." The group then sorted and rated these statements. The resulting map consisted of 93 statements and 13 clusters. The following list indicates the results of the sorting, rating, and interpretation steps in the concept mapping process. The numerical value shown in parentheses is the average rating assigned to that statement, and statement ratings were averaged to produce a rating for each cluster. For example, Cluster 1, "Search & Browse Tools," had a cluster rating average of 3.88 on a scale of 5.00.

Cluster 1: Search & Browse Tools
1. There should be multiple ways to search. (4.40)
56. Browsing would be like walking around in the stacks. (3.80)
9. Should be able to find "lots of things like this". (3.00)
12. To be able to have a list of associated keywords to search on would be helpful. (3.93)
2. Keywords should be able to be used for searching for text. (4.47)
80. You should be able to search by author and keyword simultaneously. (4.47)
73. There should be automatic truncation of author searches. (3.17)
4. It should be easy to modify a search. (4.20)
57. Would like to be able to have a Thesaurus capability to pull in like-words. (4.27)
6. Should be able to modify a search with "more things like this". (3.27)
47. The capability to limit the search after I started. (4.17)
55. It would be nice to browse any way I wanted to. (3.47)
42. The ability in the front-end to limit or expand search environments. (3.73)
54. It would be nice to browse the data base. (3.93)

Cluster Average = 3.88

Cluster 2: Research Search Tools
3. I want to be able to search an image data base with a keyword. (3.87)
8. "More like this" narrows the search. (3.21)
15. Would like to be able to search content of a field by author. (3.71)
23. I want only articles that are newspaper editorials. (3.27)
24. Being able to search by research type. (3.07)
13. Having a natural language search capability. (3.67)
81. Fuzzy searching would be a useful capability. (3.73)
91. The search engine should be able to perform all the search functions automatically (relevance ranking, fuzzy searching, adding synonyms, etc.). (3.69)
71. You should only have to search in one place and not have to keep constructing the same search. (4.33)

Cluster Average = 3.19

Cluster 3: Search Alternatives
5. Should be able to search with images. (2.57)
29. Creative search parameters would be more inclusive. (2.92)
38. Would be nice to search by a quote. (3.40)
7. Strategies for matching of non-text data. (3.53)
63. I would like to be able to search by sounds (i.e., by bird calls, music, etc.). (2.60)
75. Layered searches for images would be valuable (e.g., architecture, biology, mapping, fine art). (3.40)
84. I would love to be able to put in a part of a poem and find out where it came from. (3.60)
14. I would like to see hyperlinks between various modes in the data base. (3.64)
64. It should have the ability to translate foreign languages so I can use it. (3.40)
20. It would be nice to search through WWW at the same time you are searching through magazine (for instance) data bases and a library card catalog. (2.80)

Cluster Average = 3.19

Cluster 4: Human Dimension
11. Capability for graphical representation of text-based search strategies. (3.29)
16. Backwards citation would be good. (3.71)
49. It would be nice to share search paths. (3.36)
52. Every data base in the world would share a standard front end. (3.67)
50. The ability to do collaborative searches would be good. (3.47)
68. It would be nice to have a quality sorter. (3.29)

Cluster Average = 3.46

Cluster 5: Sharing
21. Having some way to see traces of what other people have done on a similar topic. (2.87)
22. Having an option that says, "Show me what other people have done." (2.93)
25. It would be nice to have some mechanism built-in for confidentiality (i.e., downloaded documents). (3.60)
51. It would be nice to share annotations in real time. (2.93)

Cluster Average = 3.08

Cluster 6: Ease of Operation/Use
26. It would be useful to have intelligent agents. (3.53)
72. The whole idea of standards is important. (4.27)
53. Customizable scholarly tools would be useful. (3.79)
86. I want to have some confidence in the results I get. (4.20)

Cluster Average = 3.95

Cluster 7: Sorting
74. If you could have a filter so that your search goes into a prescribed format (e.g., APA), that would be good. (3.50)
90. Relevance ranking is important. (3.43)
82. Relevance ranking is very important to have. (3.73)
87. There should be reliability underlying the search engine. (4.80)

Cluster Average = 3.87

Cluster 8: Modification Tools
10. Should be able to link two previous searches together. (4.13)
66. You should be able to highlight words, titles, authors and add them into your search so you don't have to type them in again. (4.27)
48. It would be nice to modify a list of hits after I have already begun searching through them. (3.87)
39. I would like to see a history of my searches by project. (3.64)
18. List of search results should be able to be modified and printed out by the searcher. (4.53)
88. There should be a custom menu that you can save or by-pass. (3.53)
19. Ways to manipulate idiosyncratically would be useful. (3.00)
65. You should have the ability to add your own links. (3.53)
92. It would be good to have abstracts available for everything in the search. (4.20)

Cluster Average = 3.86

Cluster 9: Customizable Output
17. Would like to be able to keep images on the screen for comparison. (4.07)
43. I don't want to waste time learning search strategies. (4.00)
30. Having a personal organizer would be useful. (3.67)
27. A feature that would remember where I stopped the search before. (4.33)
93. You should have a customizable output capability. (4.13)

Cluster Average = 4.04

Cluster 10: Customized Navigation
32. The ability to cut and paste between several documents. (4.40)
44. I like the Bookmarks to point back to places. (4.67)
70. It should be inexpensive to use. (4.47)
78. Having different pathways through the data base colored would be useful. (3.27)
83. It would be nice to be able to sort the results of my search in ways most meaningful to me. (4.07)

Cluster Average = 4.17

Cluster 11: Individualized Reference Tools
31. Having more than one document on the screen at once would be good. (3.87)
35. It would be nice to have the source of cut and pastes so you don't lose track of where you got the document. (4.33)
76. Using colors as markers would be useful. (3.33)
45. I like the ability to make annotation with the Bookmarks. (4.40)
85. I want it to be as simple as possible to use. (4.00)
69. It should be user-friendly. (4.53)

Cluster Average = 4.08

Cluster 12: Visual/Human Aspects
28. You should be able to characterize the agent's personality. (2.71)
40. There should be some toggle to use images without citation. (2.85)
67. It would be helpful to have background checks on the "credibility" of the authors/sources. (3.33)
34. It would be good to treat text as an image so that it could not be plagiarized. (2.87)
37. Source attribution should come with images too. (4.00)
41. There should be a way to distinguish between scholarly and remuneration attribution. (2.93)
58. I would like a real live person on line to help me if I get stuck. (3.13)

Cluster Average = 3.12

Cluster 13: Interface Display Features
33. The ability to create a multimedia document would be good. (4.00)
77. Using colors as carriers of information would be helpful. (3.20)
59. I would like the Help to be something other than the obvious. (3.93)
79. It should be as visual as possible. (3.87)
36. Source attribution should be available with text. (4.13)
60. If I select a reference, it would download the citation automatically into a file. (4.14)
46. There should be multimedia Bookmark annotations. (3.73)
61. I would like a tool to download the article and store the bibliographic reference automatically in my file. (4.47)
89. The design of the screen should allow me to have as much text as possible. (3.93)
62. If it is going to beep, I wish it would tell me why. (4.20)

Cluster Average = 3.96

Figure 1 depicts the computed IMG concept map. The location of the clusters on the concept map indicates either conceptual similarity or divergence. In other words, clusters located close to one another on the map are conceptually more similar than those further away from one another.

The DLWG Map
The Digital Library Working Group (DLWG) at Cornell is an interdisciplinary group composed of nine members representing the library, preservation, engineering, and communication departments. The primary task outlined for the DLWG was to establish a set of criteria for selecting and evaluating a digital library user interface. In conjunction with this task, the group reviewed four interfaces: the Chemistry On-line Retrieval Experiment (CORE) system with Bellcore's Pixlook interface and OCLC's Scepter interface; Cornell's implementation of the TULIP journal database with the Cornell Digital Library (CDL) interface; University of Michigan's implementation of the TULIP journal database with the Netscape World Wide Web (WWW) interface; and Cornell's Computer Science Technical Reports or DIENST with the Netscape and MacWeb WWW interfaces.

Six members of the DLWG participated in a concept mapping exercise. The DLWG participants sorted and rated a set of 47 statements developed during prior meetings of the group. The following list indicates the results of the sorting, rating, and interpretation steps in the concept mapping process. The numerical value shown in parentheses is the average importance rating assigned to that statement by the group.

Cluster 1: Availability and Adaptability
1. The client software should be readily available to users. (4.50)
2. The software should be free to the user. (3.50)
3. The system should be fully functional on three standard platforms: UNIX, Macintosh, and Windows. (4.50)
7. Client software should be under active development to ensure its continuing compatibility with new and more powerful hardware and software. (4.00)
4. The system should be customizable by the user. (3.33)
5. You should be able to adapt the system to the power of the workstation and the network bandwidth available to the user. (3.17)
6. Having custom functions like data compression (for higher power stations) or text-only search and retrieval (for low-power stations) are important. (3.00)

Cluster Average = 3.71

Cluster 2: General Characteristics
39. Digital library systems should be capable of searching. (5.00)
41. Digital library systems should be capable of delivering documents quickly from a variety of servers. (4.33)
40. Digital library systems should be capable of navigating. (5.00)
42. Digital library systems should be dependable. (4.67)

Cluster Average = 4.75

Cluster 3: Printing and Downloading
31. Users should be able to print the document locally. (3.50)
32. Users should be able to print the document remotely. (3.17)
33. Users should also be able to electronically transfer the document to their computers via e-mail and by downloading. (4.00)
34. Users should be able to download documents. (3.83)
35. Users should get help using thoroughly written and context-sensitive documentation. (4.00)

Cluster Average = 3.70

Cluster 4: Copyright
37. The system should provide access to copyrighted material in such a way that copyright holders can be properly reimbursed for reproduction of their materials beyond fair use. (3.83)
38. Access to copyrighted materials should be provided in a way that least interferes with effective searching, navigation, and reproduction of documents. (4.02)

Cluster Average = 4.02

Cluster 5: Searching
8. The system should be capable of field-specific searches. (4.00)
9. You should be able to search on bibliographic information. (4.83)
10. You should be able to search document abstracts. (4.33)
11. You should be able to search the full text of the document using free-text terms. (3.50)
12. Boolean and proximity operators should be available to construct search statements. (4.33)

Cluster Average = 4.20

Cluster 6: Logging and Monitoring
43. Digital library systems should have built-in means of recording data for evaluation and improvement that protect the privacy of digital library users. (2.83)
44. The system should log document use patterns, including access errors, usage rates through time and by type of document, patterns of use within individual documents, and other data of interest to librarians and others monitoring the system. (2.83)
45. The built-in monitoring system should include the capability for users to evaluate and comment on system performance. (2.83)
46. The monitoring system should have a way for system evaluators to administer questionnaires to users. (2.33)

Cluster Average = 2.71

Cluster 7: Search Results Display
13. Search results should be displayable in reverse chronological and alphabetical order. (3.83)
14. A relevance-weighted display is appropriate for full-text searches. (3.50)
18. Individual documents chosen for display from the results should allow users to choose the part of the document to read. (3.83)
47. Search terms should be highlighted in the document text. (3.00)

Cluster Average = 3.54

Cluster 8: Document Display
15. The information displayed about each item found should provide an adequate basis for a user decision whether to access a fuller version of the item. (4.67)
19. Individual documents chosen for display from the results should be clearly labeled. (4.33)
16. Individual documents chosen for display from the results should be organized to provide an overview of the document. (4.33)
17. Individual documents chosen for display from the results should allow smooth navigation among the document parts. (3.50)
20. Document pages or sections should be clear and legible. (5.00)
23. Non-textual material (illustrations, maps, sound and video recordings, graphs) should be legibly displayed or readily available via on-page links. (4.33)
24. Users should be able to display a full-page image. (3.33)
25. Users should be able to zoom in on smaller areas. (3.33)

Cluster Average = 4.10

Cluster 9: Navigation
21. Document pages should have links to main document divisions (bibliographies, tables of contents or document outlines, first pages of chapters), any individual page, and the next or previous page. (4.67)
22. Users should be able to move from in-text citations to bibliographic citations and on to the text of the cited documents. (3.83)
26. A clear return to the search function should be provided in the display interface. (4.83)
27. Links to other, related documents should be possible when appropriate. (3.17)
28. Navigation tools should include a search history for the session. (3.50)
29. Navigation tools should include obvious beginning and ending points for the document. (3.67)
30. There should be clear signposts pointing to the various options/functions. (4.50)
36. On-line help should be clear, intuitive, and easy to use. (4.00)

Cluster Average = 4.02

Following the interpretation of the maps, the results were used by the DLWG to develop a set of criteria for the selection and evaluation of a user interface. Table 1 presents the criteria developed by the DLWG (M. Engle, personal communication, March 1995). Figure 2 depicts the completed DLWG map with cluster titles shown.

TABLE 1. Selection and Evaluation Criteria for Digital Library Interfaces

Criteria 1: Client Software Characteristics
  • The client software should be readily available to users, free to the user, and fully functional on three standard platforms: UNIX, Macintosh, and Windows.
  • The client should be customizable by the user to adapt to the power of the workstation and the network bandwidth available to the user. Examples of custom functions are data compression (for higher power stations) or text-only search and retrieval (for low-power stations).
  • Client software should be under active development to ensure its continuing compatibility with new and more powerful hardware and software.

Criteria 2: Searching Capability
  • The system should be capable of field-specific searches and searches on document records, document abstracts, and the full text of the document using free-text terms.
  • Boolean and proximity operators should be available to construct search statements.

Criteria 3: Displaying Search Results
  • Search results should be displayable in reverse chronological and alphabetical order. A relevance-weighted display is appropriate for full-text searches. The information displayed about each item found should provide an adequate basis for a user decision whether to access a fuller version of the item.

Criteria 4: Displaying and Navigating Documents
  • Individual documents chosen for display from the results should be clearly labeled and organized to provide an overview of the document, to allow users to choose the part of the document to read, and to allow smooth navigation among the document parts.
  • Document pages or sections should be clear and legible with links to main document divisions (bibliographies, tables of contents or document outlines, first pages of chapters), any individual page, and the next or previous page.
  • Users should be able to move from in-text citations to bibliographic citations and on to the text of the cited documents.
  • Non-textual material (illustrations, maps, sound and video recordings, graphs) should be legibly displayed or readily available via on-page links.
  • Users should be able to display a full-page image or zoom in on smaller areas.
  • A clear return to the search function should be provided in the display interface. Links to other, related documents should be possible when appropriate.
  • Tools to enhance document navigation and evaluation include a search history for the session, highlighted search terms in the document text, obvious beginning and ending points for the document, and clear signposts pointing to other functions.

Criteria 5: Using Documents: Saving, Printing, and Mark-up
  • Users should be able to print the document locally and remotely.
  • Users should also be able to transfer the document electronically to their computers via e-mail or by downloading.

  • Criteria 6: Help
  • Help should be available through thorough written documentation and through context-sensitive, on-line help that is clear, intuitive, and easy to use.

  • Criteria 7: Protecting Property Rights
  • The system should provide access to copyrighted material in such a way that copyright holders can be properly reimbursed for reproduction of their materials beyond fair use.
  • Users will want access to copyrighted materials; this should be provided in a way that least interferes with effective searching, navigation, and reproduction of documents.

  • Criteria 8: General System Considerations
  • Digital library systems should be capable of searching, navigating and delivering documents quickly and dependably from a variety of servers.

  • Criteria 9: Evaluation of System Use
  • Digital library systems should have built-in means of recording data for evaluation and improvement that protect the privacy of digital library users.
  • The system should log document use patterns, including access errors, usage rates over time and by document type, patterns of use within individual documents, and other data of interest to librarians and others monitoring the system. This includes a capability for users to evaluate and comment on system performance and for system evaluators to administer questionnaires to users.
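    A minimal sketch of logging that respects the privacy requirement above, assuming one-way hashed user identifiers are an acceptable protection. The field names and hashing choice are illustrative, not the MoA system's design.

```python
import hashlib
import time
from collections import Counter

def log_event(log, user_id, doc_id, action):
    """Record a usage event with the user identity one-way hashed,
    so usage patterns can be analyzed without exposing who read what.
    Field names are illustrative, not the MoA system's schema."""
    entry = {
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:12],
        "doc": doc_id,
        "action": action,  # e.g. "open", "page", "error"
        "time": time.time(),
    }
    log.append(entry)
    return entry

log = []
log_event(log, "jm62", "moa-0042", "open")
log_event(log, "jm62", "moa-0042", "page")
log_event(log, "gkg1", "moa-0042", "open")

# Aggregate: access counts per document, with no raw user names kept.
opens = Counter(e["doc"] for e in log if e["action"] == "open")
print(opens)  # Counter({'moa-0042': 2})
```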

    Prioritizing Design and Evaluation Efforts

    In order to inform our design and evaluation efforts, the results of the DLWG and IMG maps were rank ordered in descending order based on the average cluster ratings. Table 2 depicts the results of the IMG map ratings, Table 3 shows the results of the DLWG map, and Table 4 presents a comparison of the two maps in terms of importance ratings.

    Cluster No.   Cluster Title                    Rating
    Cluster 10    Customized Navigation            4.17
    Cluster 11    Individualized Reference Tools   4.08
    Cluster 9     Customizable Output              4.04
    Cluster 13    Interface Display Features       3.96
    Cluster 6     Ease of Operation/Use            3.95
    Cluster 1     Search & Browse Tools            3.88
    Cluster 7     Sorting                          3.87
    Cluster 8     Modification Tools               3.86
    Cluster 2     Research Search Tools            3.62
    Cluster 4     Human Dimension                  3.46
    Cluster 3     Search Alternatives              3.19
    Cluster 12    Visual/Human Aspects             3.12
    Cluster 5     Sharing                          3.08

    TABLE 2. IMG Cluster Titles by Importance Rating

    Cluster No.  Cluster Title                    Rating
    Cluster 2    General Characteristics          4.75
    Cluster 5    Searching                        4.20
    Cluster 8    Document Display                 4.10
    Cluster 4    Copyright                        4.02
    Cluster 9    Navigation                       4.02
    Cluster 1    Availability and Adaptability    3.71
    Cluster 3    Printing and Downloading         3.70
    Cluster 7    Search Results Display           3.54
    Cluster 6    Logging and Monitoring           2.71

    TABLE 3. DLWG Cluster Titles by Importance Rating

    Rank  IMG Users Map                DLWG Map
    1     Customized Navigation        General Characteristics
    2     Individualized Ref. Tools    Searching
    3     Customizable Output          Document Display
    4     Interface Display Features   Copyright
    5     Ease of Operation/Use        Navigation
    6     Search & Browse Tools        Availability and Adaptability
    7     Sorting                      Printing and Downloading
    8     Modification Tools           Search Results Display
    9     Research Search Tools        Logging and Monitoring
    10    Human Dimension              N/A
    11    Search Alternatives          N/A
    12    Visual/Human Aspects         N/A
    13    Sharing                      N/A

    TABLE 4. A Comparison of IMG and DLWG Importance Ratings
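    The rank orderings shown in Tables 2 through 4 follow mechanically from sorting clusters by average importance rating in descending order. A sketch using the DLWG ratings from Table 3:

```python
# Rank-order clusters by average importance rating (descending),
# as done for Tables 2-4. Values are the DLWG ratings from Table 3.
dlwg = {
    "General Characteristics": 4.75,
    "Searching": 4.20,
    "Document Display": 4.10,
    "Copyright": 4.02,
    "Navigation": 4.02,
    "Availability and Adaptability": 3.71,
    "Printing and Downloading": 3.70,
    "Search Results Display": 3.54,
    "Logging and Monitoring": 2.71,
}

# sorted() is stable, so equal ratings (Copyright, Navigation at
# 4.02) keep their original order, matching the published table.
ranked = sorted(dlwg.items(), key=lambda kv: kv[1], reverse=True)
for rank, (title, rating) in enumerate(ranked, start=1):
    print(f"{rank:2d}. {title} ({rating:.2f})")
```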

    Preliminary Pattern Matching Results

    The results of the initial two-tiered pattern match between the DLWG and the IMG maps are summarized in Table 5.

    DLWG Map                               IMG Map
    Cluster 2: General Characteristics     Cluster 11: Individualized Reference Tools
                                           Cluster 7: Sorting
                                           Cluster 8: Modification Tools
                                           Cluster 2: Research Search Tools
    Cluster 5: Searching                   Cluster 1: Search & Browse Tools
                                           Cluster 3: Search Alternatives
    Cluster 7: Search Results Display      Cluster 13: Interface Display Features
    Cluster 8: Document Display
    Cluster 9: Navigation                  Cluster 10: Customized Navigation
    Cluster 1: Availability/Adaptability   Cluster 6: Ease of Operation/Use
                                           Cluster 4: Human Dimension
                                           Cluster 5: Sharing
    Cluster 3: Printing and Downloading    Cluster 9: Customizable Output
    Cluster 6: Logging and Monitoring      Cluster 12: Visual/Human Aspects
    Cluster 4: Copyright                   No match

    TABLE 5. Cluster "Matches" Between the DLWG and IMG Maps
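    The pattern-matching procedure itself is not detailed here. Purely as a computational analogue, one could match each DLWG cluster to the IMG cluster whose underlying statements share the most vocabulary. The keyword sets below are invented for illustration; they are not the actual cluster statements.

```python
def jaccard(a, b):
    """Set-overlap similarity between two keyword sets (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical keyword sets per cluster; the real statements behind
# each cluster are not reproduced in this sketch.
dlwg = {
    "Searching": {"search", "boolean", "query", "browse"},
    "Navigation": {"navigate", "links", "pages"},
    "Copyright": {"copyright", "fair", "use", "rights"},
}
img = {
    "Search & Browse Tools": {"search", "browse", "query", "filter"},
    "Customized Navigation": {"navigate", "links", "bookmarks"},
}

# Match each DLWG cluster to its most similar IMG cluster; a zero
# score mirrors the "no match" outcome for Copyright in Table 5.
for d_title, d_terms in dlwg.items():
    title, terms = max(img.items(), key=lambda kv: jaccard(d_terms, kv[1]))
    score = jaccard(d_terms, terms)
    print(f"{d_title} -> {title if score > 0 else 'No match'} ({score:.2f})")
```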

    As expected, "matches" point to important design considerations. Also as hypothesized, the "no matches" appear to indicate areas in which user interface design is less relevant (e.g., DLWG Cluster 4: Copyright).

    Developing a Set of Principles for Digital Library Design

    Using the two sets of concept maps created by the IMG and the DLWG to prioritize our design efforts, we have developed a set of user interface tools, concentrating on the elements and features in the clusters rated most important by the two groups. The IMG rated "Customized Navigation," "Individualized Reference Tools," and "Customizable Output" as most important, with ratings of 4.17, 4.08, and 4.04, respectively; the DLWG rated "General Characteristics," "Searching," and "Document Display" as most important, with ratings of 4.75, 4.20, and 4.10, respectively.

    In addition, we are working on a set of guiding principles for the design and evaluation of digital environments. The following summary highlights these interface tools and supporting design rationale. We believe that these types of tools will give digital library users the power they want and need in order to find, retrieve, manipulate, and annotate digital information.

    Search and Browse Tools

    What remains to be done?

    We plan to conduct a focus group with faculty and graduate students expecting to use the MoA digital library in order to develop a concept map depicting their criteria for a user interface. We will then compare the three maps and provide design and evaluation guidance to the MoA project. These maps will be used in three ways: (a) to inform interview protocols; (b) to inform design of the MoA digital library user interface; and (c) to inform future data collection strategies.

    In addition, we will soon begin conducting a series of "mini-case" studies of the ways in which faculty and students use the MoA digital library. Five Cornell faculty members expect to use the MoA digital library in the spring of 1996. These faculty represent diverse fields: human development and family studies, landscape architecture, design and environmental analysis, communication, and city and regional planning. Our plan is to conduct a series of interviews with faculty and students using the MoA digital library and to conduct participant observations in classes and seminars using this resource. We fully expect these case studies to enhance our understanding of the ways in which access to digital materials influences research and education.


    References

    Fidel, R. (1993). Qualitative methods in information retrieval research. LISR, 15, 219-247.

    Gay, G., & Lentini, M. (1995). Use of communication resources in a networked collaborative design environment. Journal of Computer-Mediated Communication, 1(1). Available from http://cwis.usc.edu/dept/annenberg/vol1/issue1/contents.html

    Gay, G. (1995). Issues in accessing and constructing multimedia documents. In Barrett, E. and Redmond, M. (Eds.), Contextual Media: Multimedia and Interpretation (pp. 175-188). Cambridge, MA: MIT Press.

    Gay, G., & Grosz-Ngate, M. (1994). Collaborative design in a networked multimedia environment: Emerging communication patterns. Journal of Research on Computing in Education, 26(3), 418-432.

    Gay, G., & Mazur, J. (1993). The utility of computer tracking tools for user-centered design. Educational Technology, 34(3), 45-59.

    Kolb, D. G. (1991). Understanding adventure-based professional development: The role of theory in evaluation. Unpublished doctoral dissertation, Cornell University, Ithaca, NY.

    Marquart, J. M. (1990). A pattern-matching approach to link program theory and evaluation data. New Directions for Program Evaluation, 47, 93-107.

    Mead, J. P. (1995). Substance abuse prevention programs for children: An interpretivist theory-oriented evaluation. Doctoral dissertation, Cornell University, Ithaca, NY.

    Trochim, W. (1989). An introduction to concept mapping for planning and evaluation. Evaluation and Program Planning. Special Issue: Concept Mapping for Evaluation and Planning, 12(1), 1-16.

    Trochim, W. (1985). Pattern matching, validity, and conceptualization in program evaluation. Evaluation Review, 9, 575-604.

    Trumbull, D., Gay, G., & Mazur, J. (1992). Students' actual and perceived use of navigational and guidance tools in a hypermedia program. Journal of Research on Computing in Education, 24, 315-328.