Posts Tagged ‘saa’

22 Dec 2011

Musings: SAA, DAS, and “Managing Electronic Records in Archives & Special Collections”

This afternoon I successfully completed the electronic exam for “Managing Electronic Records in Archives & Special Collections,” a workshop presented as part of SAA’s Digital Archives Specialist program. With my new certificate of continuing education in hand, I wonder how much I should/could participate in the DAS program. I have been watching the development of the program with great interest, particularly the cost, the expected completion timeline, and who the experts would be. I signed up for the course and ventured up to Pasadena for a two-day workshop with Seth Shaw and Nancy Deromedi.

Erica Boudreau has a good summary of the workshop as taught by Tim Pyatt and Michael Shallcross on her blog, so I will try not to repeat too much here. Of interest to those looking to learn more about e-recs are the bibliography and the pre-readings, which consisted of several pieces from the SAA Campus Case Studies website. We were asked to read Case 2, “Defining and Formalizing a Procedure for Archiving the Digital Version of the Schedule of Classes at the University of Michigan” by Nancy Deromedi, and Case 13, “On the Development of the University of Michigan Web Archives: Archival Principles and Strategies” by Michael Shallcross, as well as “Guarding the Guards: Archiving the Electronic Records of Hypertext Author Michael Joyce” by Catherine Stollar.

On the first day, the instructors discussed electronic “recordness,” authenticity/trust, the OAIS and PREMIS models, advocacy, and challenges, and reserved time for participants to break into groups to discuss the three case studies. On the second day, we dove into more practical applications of e-records programs, in particular a range of workflows. One of the takeaway messages was simply to focus on doing something rather than waiting for a comprehensive solution that can handle every variety of e-record. Seth displayed a Venn diagram he had revealed at SAA this year, which separates “fast,” “good,” and “cheap” into three bubbles — each can overlap with one other focus area, but not both. For example, your workflow can be cheap and good, but not fast; good and fast, but not cheap; and so on.

Seth and Nancy illustrated a multi-step workflow using a checksum creator (the example used was MD5sums), Duke’s DataAccessioner for migration and checksums, along with its JHOVE and DROID plugins, WinDirStat for visual analysis of file contents, and FTK Imager for forensics. They also discussed Archivematica for ingest and description, which still seems buggy, and web archiving using tools such as Archive-It, the CDL’s Web Archiving Service, and HTTrack. Perhaps the most significant thing I learned was about the use of digital forensics programs like FTK Imager, as well as the concept of a forensic write blocker, which essentially prevents files on a disk or USB drive from being changed during transfer. Digital forensics lets us see hidden and deleted files, which can help us provide a service to records creators — recovering what was thought lost — and create a disk image to emulate the original disk environment. Also shared: Peter Chan at Stanford has posted a great YouTube demo of how to process born-digital materials using AccessData FTK. It was helpful to see these tools I have been reading about actually demonstrated.
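For anyone who wants to try the checksum step before committing to a packaged tool, here is a minimal Python sketch of my own (not part of the workshop materials); the accession directory name is hypothetical.

```python
import hashlib
from pathlib import Path

def md5_checksum(path, chunk_size=1024 * 1024):
    """Compute an MD5 checksum without loading the whole file into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record a checksum for every file in a (hypothetical) accession directory.
accession = Path("accession_2011_045")
for file_path in sorted(p for p in accession.rglob("*") if p.is_file()):
    print(f"{md5_checksum(file_path)}  {file_path}")
```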

Our cohort briefly discussed UC Irvine’s “virtual reading room,” which is essentially a way for researchers to access born-digital content in a reading room environment using DSpace, through a combination of an application process and a limited user access period. Our rules of use are also posted. I have a lot of thoughts about how this may change or improve over time as we continue to receive and process born-digital papers and records — when we are doing less arrangement and better summarization/contextualization/description, how can we create a space for researchers to access material with undetermined copyright status? What will the “reading room” look like in the future?

Our digital projects specialist and I attended the workshop, and I think we found some potential services and programs that could help us with our born-digital records workflow. Above all, it was helpful to see and hear about the tools being developed and get experienced perspectives on what has been working at Duke and Michigan. I enjoyed the review of familiar concepts as well as the demonstrations of unfamiliar tools, and could see myself enrolling in future DAS courses. The certificate program includes an option to test out of the four Foundational courses, at $35 a pop. If I choose to complete the program, it must be done within 2 years, with a comprehensive exam ($100) that must be taken within 5 months of finishing the required courses. Some people are cherry-picking from the curriculum, choosing only the courses most relevant to their work. I think a DAS certification could help train and employ future digital archivists (or, in my mind, archivists in general — since we’ll all be doing this type of work) and may create a “rising tide lifts all ships” type of situation in our profession. While there is a risk of a certification craze meant primarily for the financial gain of the organization, I was grateful to learn from experienced archivists in a structured setting. There’s something to be said for standards in education in our profession. I hope that DAS will raise the standard for (digital) archivists.

31 Aug 2011

SAA Days 4 & 5: e-records, metrics, collaboration

Friday in Chicago started with coffee with Christian Dupont from Atlas Systems, followed by Session 302: “Practical Approaches to Born-Digital Records: What Works Today.” The session was packed…standing-room only (some archivists quipped that we must have broken fire codes with the number of people sitting on the floor)! Chris Prom from U Illinois, Urbana-Champaign, moderated the excellent panel on practical solutions to dealing with born-digital archival collections. Suzanne Belovari of Tufts referred to the AIMS project (which sponsored the workshop I attended on Tuesday) and the Personal Archives in Digital Media (paradigm) project, which offers an excellent “Workbook on digital private papers” and “Guidelines for creators of personal archives.” She also referenced the research of Catherine Marshall of the Center for the Study of Digital Libraries at Texas A&M, who has posted her research and papers regarding personal digital archives on her website. All of the speakers referred to Chris Prom’s Practical E-Records blog, which includes lots of guidelines and tools for archivists dealing with born-digital material.

Ben Goldman of U Wyoming, who wrote an excellent piece in RBM entitled “Bridging the Gap: Taking Practical Steps Toward Managing Born-Digital Collections in Manuscript Repositories,” talked about basic steps for dealing with electronic records, including network storage, virus checking, format information, generating checksums, and capturing descriptive metadata. He uses Enterprise Checker for virus checking, Duke DataAccessioner to generate checksums, and a Word doc or spreadsheet to track the actions taken on individual files. Melissa Salrin of U Illinois, Urbana-Champaign spoke about her use of a program called Firefly to detect Social Security numbers in files, TreeSize Pro to identify file types, and a process through which she ensures that files are read-only when moved. She urged the audience to remember to document every step of the transfer process, and noted that “people use and create files electronically as inefficiently as analog.” Laura Carroll, formerly of Emory, talked about the famous Salman Rushdie digital archives, noting that donor restrictions are what helped shape their workflow for dealing with Rushdie’s born-digital material. The material is now available in a secure Fedora repository. Seth Shaw from Duke spoke about DataAccessioner (see previous posts) but mostly spoke eloquently, in what promises to be an historic speech, about the need to “do something, even if it isn’t perfect.”
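A tracking document like the one Goldman described could be as simple as a running CSV with one row per action. The following is a hedged sketch of my own, not his actual template; the column names, file names, and initials are assumptions.

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("processing_log.csv")
FIELDS = ["date", "file", "action", "tool", "archivist", "notes"]

def log_action(file_name, action, tool, archivist, notes=""):
    """Append one preservation action to the running log, creating it if needed."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "file": file_name,
            "action": action,
            "tool": tool,
            "archivist": archivist,
            "notes": notes,
        })

# Hypothetical entries for a single accessioned file.
log_action("letters_1998.doc", "virus scan", "Enterprise Checker", "bg", "no threats found")
log_action("letters_1998.doc", "generate checksum", "DataAccessioner", "bg")
```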

After lunch, I attended Session 410: “The Archivists’ Toolkit: Innovative Uses and Collaborations.” The session highlighted interesting collaborations and experiments with AT; the most interesting was by Adrianna Del Collo of the Met, who found a way to convert folder-level inventories into XML for import into AT. Following the session, I was invited last-minute to a meeting of the “Processing Metrics Collaborative,” led by Emily Novak Gustainis of Harvard. The small group included two brief presentations by Emily Walters of NC State and Adrienne Pruitt of the Free Library of Philadelphia, both of whom have experimented with Gustainis’ Processing Metrics Database, an exciting tool to help archivists track statistical information about processing time and costs. Walters also mentioned NC State’s new tool, Steady, which allows archivists to take container-list spreadsheets and easily convert them into XML stub documents for import into AT. Walters used the PMD to track supply costs and time, while Pruitt used the database to help with grant applications. Everyone noted that metrics should be used to compare collections, processing levels, and collection needs, taking special care to note that metrics should NOT be used to compare people. The average processing rate at NC State for their architectural material was 4 linear feet per hour, while it was 2 linear feet per hour for folder lists at Princeton (as noted by meeting participant Christie Petersen).
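To give a sense of what a spreadsheet-to-stub conversion involves, here is a rough Python sketch of the general idea (this is not Steady itself); the spreadsheet column names are assumed and the EAD container list is much simplified.

```python
import csv
import xml.etree.ElementTree as ET

# Hypothetical container list with columns: box, folder, title, date.
dsc = ET.Element("dsc")
with open("container_list.csv", newline="") as f:
    for row in csv.DictReader(f):
        c = ET.SubElement(dsc, "c", level="file")
        did = ET.SubElement(c, "did")
        ET.SubElement(did, "container", type="box").text = row["box"]
        ET.SubElement(did, "container", type="folder").text = row["folder"]
        ET.SubElement(did, "unittitle").text = row["title"]
        ET.SubElement(did, "unitdate").text = row["date"]

# Write a stub document that could be pasted into a larger EAD file.
ET.ElementTree(dsc).write("container_list_stub.xml", encoding="utf-8", xml_declaration=True)
```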

On Saturday morning I woke up early to prepare for my session, Session 503: “Exposing Hidden Collections Through Consortia and Collaboration.” I was honored and proud to chair the session with distinguished speakers Holly Mengel of the Philadelphia Area Consortium of Special Collections Libraries, Nick Graham of the North Carolina Digital Heritage Center, and Sherri Berger of the California Digital Library. The panelists defined and explored the exposure of hidden collections, from local/practical projects to regional/service-based projects. Each spoke about levels of “hidden-ness” and the decision-making process of choosing partners and service recipients. It was a joy to listen to and facilitate presentations by archivists with such inspirational projects.

After my session, I attended Session 605: “Acquiring Organizational Records in a Social Media World: Documentation Strategies in the Facebook Era.” The focus on documenting student groups is very appealing, since documenting student life is one of the greatest challenges for university archivists. Most of the speakers recommended web archiving for Twitter and Facebook, which was not a new idea to me. However, Jackie Esposito of Penn State suggested a new strategy for documenting student organizations, which focuses on capture/recapture of social media sites and direct conversations with student groups, including the requirement that every group have a student archivist or historian. Jackie taught an “Archives 101” class to these students on weeknights after 7 pm early in the fall, and made sure to follow up with student groups before graduation.

After lunch, I went to Session 702: “Return on Investment: Metadata, Metrics, and Management.” All I can say about the session is…wow. Joyce Chapman of TRLN (formerly an NC State Library Fellow) spoke about her research into ROI (return on investment) for manual metadata enhancement and about a project to understand researcher expectations of finding aids. The first project addressed the challenge of measuring value in a nonprofit (which cannot measure value via sales the way for-profit organizations can) through A/B testing of enhancements made to photographic metadata by cataloging staff. Her testing found that page views for enhanced metadata records were quadruple those of unenhanced records, a staggering statistic. Web analytics found that 28% of search strings for their photographs included names, which were only added to enhanced records. In terms of cataloger time, the goal was 5 minutes per image, but the average was 7 minutes of metadata work per image. Her project documentation is available online. Her other study examined discovery success within finding aids by academic researchers, using behavior, perception, and rank information. In order from most to least useful for researchers were: collection inventory, abstract, subjects, scope and contents, and biography/history. The abstract was looked at first in 60% of user tests. Users did not know the difference between the abstract and the scope and contents note; in fact, 64% of users did not even read the scope at all after reading the abstract! Researchers explained that their reason for ignoring the biography/history note was a lack of trust in the information, since biographies/histories do not tend to include footnotes and the notes are impossible to cite.

Emily Novak Gustainis from Harvard talked about her processing metrics database, as mentioned in the paragraph about the “Processing Metrics Collaborative” meeting. Her reasoning behind metrics was simple: it is hard to change something until you know what you are doing. Her database tracks 38 aspects of archival processing, including timing and processing levels. She repeated that you cannot compare people, only collections; however, an employee report showed that a permanent processing archivist was spending only 20% of his time processing, so her team was able to use this information to better leverage staff responsibilities.

Adrian Turner from the California Digital Library talked about the Uncovering California Environmental Collections (UCEC) project, a CLIR-funded grant project to help process environmental collections across the state. While metrics were not built into the project, the group thought they would have been beneficial. In another project, the UC Next Generation Technical Services initiative found 71,000 linear feet of backlogs and developed tactics for collection-level records in EAD and Archivists’ Toolkit using minimal processing techniques. Through information gathered in a Google Docs spreadsheet, they found no discernible difference between date ranges, personal papers, and record groups processed through the project. They found processing rates of 1 linear foot per hour for series-level arrangement and description and 4-6 linear feet per hour for folder-level arrangement and description. He recommended formally incorporating metrics into project plans and creating a shared methodology for processing levels.

I had to head out for Midway before the Q&A started to catch the train in time for my return flight, which thankfully wasn’t canceled because of Hurricane Irene. As the train passed through Chicago, I found myself thinking about the energizing and inspiring projects, tools, and theory that come from attending SAA…and how much I look forward to SAA 2012.

(Cross posted to ZSR Professional Development blog.)

31 Aug 2011

SAA Days 2 & 3: assessment, copyright, conversation

I started Wednesday with a birthday breakfast with a friend from college, then lunch with a former mentor, followed by roundtable meetings. I focused on the Archivists’ Toolkit / Archon Roundtable meeting, which is always a big draw for archivists interested in new developments with the software programs. Perhaps the biggest news came from Merrilee Proffitt of OCLC, who announced that the ArchiveGrid discovery interface for finding aids has been updated and will be freely available (no longer subscription-based) for users seeking archival collections online. A demo of the updated interface, to be released soon, was available in the Exhibit Hall. In addition, Jennifer Waxman and Nathan Stevens described their digital object workflow plug-in for Archivists’ Toolkit, which helps archivists avoid cutting and pasting digital object information. Their plugin is available online and allows archivists to map persistent identifiers to files in digital repositories, auto-create digital object handles, create tab-delimited work orders, and create a workflow from the rapid data entry dropdown in AT.

On Thursday, I attended Session 109: “Engaged! Innovative Engagement and Outreach and Its Assessment.” The session was based on responses to the 2010 ARL survey on special collections (SPEC Kit 317), which found that 90% of special collections librarians are doing ongoing events, instruction sessions, and exhibits. The speakers were interested in how to assess the success of these efforts. Genya O’Meara from NC State cited Michelle McCoy’s article “The Manuscript as Question: Teaching Primary Sources in the Archives — The China Missions Project,” published in C&RL in 2010, suggesting that we need standard metrics for assessing our outreach work as archivists. Steve MacLeod of UC Irvine explored his work with the Humanities Core Course program, which teaches writing skills over three quarters, and how he helped design course sessions with faculty to smoothly incorporate archives instruction into humanities instruction. Basic learning outcomes included the ability to answer two questions: what is a primary source? and what is the difference between a first and primary source? He also created a LibGuide for the course and helped subject specialist reference/instruction librarians add primary source resources to their LibGuides. Across more than 45 sections, he and his colleagues taught over 1,000 students. He suggested that the learning outcomes can help us know when our students “get it.” Florence Turcotte from UF discussed an archives internship program where students earned course credit at UF for writing biographical notes and doing basic archival processing. I stepped out of the session in time to catch the riveting tail end of Session 105: “Pay It Forward: Interns, Volunteers, and the Development of New Archivists and the Archives Profession,” just as Lance Stuchell from the Henry Ford started speaking about the ethics of unpaid intern work. He suggested that pay is a moral and dignity issue and that unpaid work is not the equal of professional work.

After lunch, I headed over to Session 204: “Rights, Risk, and Reality: Beyond ‘Undue Diligence’ in Rights Analysis for Digitization.” I took away a few important points, including “be respectful, not afraid,” and that archivists should form communities of practice in which we persuade lawyers through peer practice, such as the TRLN guidelines and the freshly endorsed SAA “Well-intentioned practice” document. The speakers called for risk assessment over strict compliance, encouraged reliance on the fair use defense, and recommended maintaining a liberal take-down policy for any challenges to unpublished material placed online. Perhaps most importantly, Merrilee Proffitt reminded us that no special collections library has been successfully sued for copyright infringement for posting unpublished archival material online for educational use. After looking around the Exhibit Hall, I met a former mentor for dinner and went to the UCLA MLIS alumni party, where I was inspired by colleagues and faculty to list some presentation ideas on a napkin. Ideas for next year (theme: crossing boundaries/borders) included US/Mexico archivist relations; water rights (the Hoover Dam, the Rio Grande, Mulholland, etc.); community-based archives (my area of interest); and repatriation of Native American material. Lots of great ideas floated around…

(Cross posted at ZSR Professional Development blog.)

31 Aug 2011

SAA Day 1: Collecting Repositories and E-Records Workshop

On Tuesday, I arrived in rainy Chicago and headed straight for the Hotel Palomar for the AIMS Project (“Born-Digital Collections: An Inter-Institutional Model for Stewardship”) workshop on born-digital archival material in collecting repositories. The free workshop, called “CREW: Collecting Repositories and E-Records Workshop,” brought together archivists and technologists from around the world to discuss issues related to collection development, accessioning, appraisal, arrangement and description, and discovery and access of born-digital archival materials.

The workshop program started with Glynn Edwards of Stanford and Gretchen Gueguen of UVa, who discussed collection development for born-digital records. The speakers suggested that both collection development policies and donor agreements should have clear language about born-digital material, including asking donors to contribute metadata for the electronic records in their collections. The challenge, they noted, is in collaboratively developing sound guidelines and policies to help archivists/curators make decisions about what to acquire. A group discussion followed about talking to donors about their personal digital lives and about creating a “digital will,” both of which help provide important information about an individual’s work, communication, and history of using technologies.

Kevin Glick and Mark Matienzo from Yale and Seth Shaw from Duke discussed accessioning, the process through which a repository gains control over records and gathers information that informs other functions in the archival workflow. While many of the procedures for accessioning born-digital material are the same as for analog material, the speakers distinguished accessioning the records from accessioning the media themselves (i.e., the Word document versus the floppy disk on which it is saved). Mark described his process of “re-accessioning” material through a forensic (or bit-level) disk imaging process, whereby he write-protected accessioned files to protect the data from manipulation. He used FTK Imager to create a media log with unique identifiers and the physical/logical characteristics of the media, followed by BagIt to create packages with high-level information about accessions. Seth discussed Duke’s DataAccessioner program, which he created as an easy way for archivists to migrate and identify data from disks. A group discussion asked: what level of control is necessary for collections containing electronic records at your institution? and, what are the most common barriers to accessioning electronic records, and how do they show up? Our table agreed that barriers include staffing (skills and time); being able to read the media; software AND hardware; storage limits; and a greater need for students/interns.
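As an illustration of the BagIt step, here is a minimal sketch using the Library of Congress bagit library for Python; the speakers did not specify an implementation, and the directory name and metadata values here are invented.

```python
import bagit  # pip install bagit (Library of Congress implementation)

# Wrap a (hypothetical) accession directory in place, recording basic
# accession-level metadata and fixity values in the bag manifests.
bag = bagit.make_bag(
    "accession_2011_045",
    {
        "Source-Organization": "Example University Special Collections",  # assumed values
        "External-Identifier": "UA.2011.045",
        "Internal-Sender-Description": "Faculty papers, 3 floppy disks, imaged 2011-08-23",
    },
    checksums=["md5", "sha256"],
)

# Verify the payload files against the generated manifests.
print(bag.is_valid())
```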

Simon Wilson from Hull, Peter Chan from Stanford, and Gabriela Redwine from the Harry Ransom Center at UT Austin discussed arrangement and description. They questioned whether archivists can appraise digital material without knowing the content therein, which conflicts with the high-level, minimal processing emphasized in our field in the past few years. Another major issue is volume: space is cheap, but does that mean archivists shouldn’t appraise? It isn’t practical to describe every item, but how will archivists know what is sensitive or restricted? Hypatia provides an easy-to-use interface that allows drag-and-drop for easy intellectual organization of e-records, as well as the ability to add rights and permissions information. Peter Chan described a complex method using AccessData FTK in combination with TransitSolution and Oxygen to compare checksums, find duplicate records, and do a “pattern search” for sensitive terms and numbers (such as Social Security numbers). Gabi Redwine explored her work with a hybrid collection (analog and digital records), where she learned that applying descriptive standards should be a learning process for staff, not students or volunteers. Her finding aids for the collection included hyperlinks to electronic content, and she advocated for disk imaging. The group discussion following this session was intense! The hot-button topic was: are the professional skills of appraisal, arrangement, and description still relevant for born-digital materials? Our group agreed that appraisal and description remain important; however, we were strongly divided about whether archivists will need to contribute to the arrangement of e-records. I believe that arrangement becomes less important as things become more searchable, as argued in David Weinberger’s Everything Is Miscellaneous. Arrangement emerged before the digital realm as a way for archivists and librarians to contextualize and organize material based on topics/subjects; with better description, however, users can create their own ways of organizing e-records!
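The “pattern search” idea can be approximated outside of FTK with a simple script. This is a toy Python sketch of my own, not Peter Chan’s workflow; it only catches the common hyphenated SSN format, and the directory name is hypothetical.

```python
import re
from pathlib import Path

# Simple pattern for the common 123-45-6789 formatting; real workflows also
# check unformatted nine-digit runs and validate the number ranges.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_for_ssns(root):
    """Yield (file, line number, match) for every SSN-like string found."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for match in SSN_PATTERN.findall(line):
                yield path, lineno, match

for path, lineno, match in scan_for_ssns("accession_2011_045"):
    print(f"{path}:{lineno}: possible SSN {match}")
```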

Finally, Gretchen Gueguen (UVa) and Erin O’Meara of UNC Chapel Hill discussed discovery and access. Our goals as archivists include preserving original format and order as much as possible and applying restrictions as necessary, while balancing this with our mission to make things accessible and available. Gretchen suggested Google Books’ “snippet” view as a way to provide access without compromising privacy or restrictions on sensitive material. Her models for access to digital material include: in-person versus not; authenticated versus not; physical versus online access; and dynamic versus static. Erin described her use of Curator’s Workbench with FOXML and Solr to control access permissions and assign restrictions and roles to e-records. Another group discussion included chewy scenarios for dealing with born-digital materials; my table had to consider: “you are at a large public academic research library; director brings several CD-ROMs, Zip disks and floppy disks of famous (secretive) professor from campus; they are backup files created over the years; office has more paper files; professor and his laptop are missing; no one can give further details on files; write 1 page plan for preserving/describing files; working institutional repository exists.” With no donor agreement and an understanding that the faculty member was very private, we couldn’t go ahead with full access to the material.

At the end of the day, I left with a much better grasp of how I see myself as an archivist dealing with born-digital material (primarily those on optical and disk media). It seems that item-level description works best for born-digital while aggregate description works best for analog materials. Digital records are dealt with best through collaboratively-created policies and procedures for acquiring, processing, and describing them. Great stuff!

Here is the suggested reading list to help participants prepare for the course:

(Cross posted to ZSR Professional Development blog.)

*Update: all of the workshop presentations have been posted to the born digital archives blog.

17 Aug 2010

Reflections: SAA 2010 in Washington DC

*Portions of this post are duplicated at the WFU ZSR Professional Development blog.

This has been my favorite SAA of the three I have attended, mostly because I felt like I had a purpose and specific topics to explore there. The TwapperKeeper archive for #saa10 is available and includes a ton of great resources. I also got the chance to have my curriculum vitae reviewed at the Career Center not once, but twice! I loved every moment of being in DC and will definitely be attending more of the receptions/socials next time!

Tuesday, August 10 was the Research Forum, in which I participated as a poster presenter. My poster featured the LSTA outreach grant given to my library and the local public library and explored outreach and instruction to these “citizen archivists.” I got a lot of encouraging feedback and questions about our project, as well as an introduction to the California Digital Library’s hosted instances of Archivists’ Toolkit and Archon, which smaller repositories in the state use to post their finding aids.

Wednesday, August 11 consisted primarily of roundtable meetings, including the highly anticipated meeting of the Archivists’ Toolkit/Archon Roundtable. The development of ArchivesSpace, the next-generation archives management tool to replace AT and Archon, was discussed; development of the tool is planned to begin in early 2011. Jackie Dooley from OCLC announced that results from a survey of academic and research libraries’ special collections departments will be released. A few interesting findings:

  • Of the 275 institutions surveyed, about 1/3 use Archivists’ Toolkit; 11% use Archon
  • 70% have used EAD for their finding aids
  • About 75% use word processing software for their finding aids
  • Less than 50% of institutions’ finding aids are online

A handful of brief presentations from AT users followed, including Nancy Enneking from the Getty. Nancy demonstrated the use of reports in AT for creating useful statistics to document processing, accessioning, and other aspects of staff work with special collections. She mentioned that Microsoft Access can also be linked to AT’s MySQL database as another way to work with statistics in AT. Corey Nimer from BYU discussed the use of plug-ins to supplement AT, which I have not yet used and hope to implement.

Perhaps more interestingly, Marissa Hudspeth from the Rockefeller and Sibyl Schaefer from the University of Vermont introduced their development of a reference module in AT, which would allow patron registration, use tracking, duplication requests, personal user accounts, et cetera. Although there is much debate in the archives community about whether this is a good use of AT (since it was originally designed for description/content management of archives), parts of the module should be released in Fall 2010. They said they’d post a formal announcement on the ATUG listserv soon.

On Thursday, August 12, sessions began bright and early. I started the day with Session 102: “Structured Data Is Essential for Effective Archival Description and Discovery: True or False?” Overall summary: usability studies, tabbed finding aids, and photos in finding aids are great! While the panel concluded that structured data is not essential for archival description and discovery due to search tools, Noah Huffman from Duke demonstrated how incorporating more EAD into MARC as part of their library’s discovery layer resulted in increased discovery of archival materials.

Session 201 included a panel of law professors and copyright experts, who gave an update on intellectual property legislation. Peter Jaszi introduced the best practice and fair use project at the Center for Social Media, a 5-year effort to analyze best practices for fair use. Their guidelines for documentary filmmakers could be used as an example for research libraries. In addition, the organization also created a statement of best practices for fair use of dance materials, hosted at the Dance Heritage Center. Mr. Jaszi argued that Section 1201 does not equal copyright, but is what he called “para-copyright law” that can be maneuvered around by cultural heritage institutions for fair use. I was also introduced to Peter Hirtle’s book about copyright (and a free download) entitled Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums, which I have started to read.

I wandered out of Session 201 into Session 209, “Archivist or Educator? Meet Your Institution’s Goals by Being Both,” which featured archivists who teach. The speakers emphasized the study of how students learn as the core of becoming a good teacher. One recommendation included attending a history or social sciences course in order to see how faculty/teachers teach and how students respond. I was inspired to consider faculty themes, focuses, and specialties when thinking about how to reach out to students.

Around 5:30 pm, the Exhibit Hall opened along with the presentation of the graduate student poster session. I always enjoy seeing the work of emerging scholars in the archival field, and this year was no different. One poster featured the Philadelphia Area Consortium of Special Collections Libraries in a CLIR-funded project to process hidden collections in the Philadelphia region — not those within larger repositories, but within smaller repositories without the resources or means to process and make available their materials. The graduate student who created the poster served as a processor, traveling to local repositories and communicating her progress and plan to a project manager. This is an exciting concept, since outreach grants tend to focus on digitization or instruction, not the act of physically processing the archival materials or creating finding aids.

On Friday, August 13, I started the morning with Session 308, “Making Digital Archives a Pleasure to Use,” which ended up focusing on user-centered design. User studies at the National Archives and WGBH Boston found that users preferred annotation tools, faceted searching, and filtered searching. Emphasis was placed on an iterative approach to design: prototype, feedback, refinement.

I headed afterward to Session 410, “Beyond the Ivory Tower: Archival Collaboration, Community Partnerships, and Access Issues in Building Women’s Collections.” The panel, while focused on women’s collections, explored collaborative projects in a universally applicable way. L. Rebecca Johnson Melvin from the University of Delaware described the library’s oral history project to record Afra-Latina experiences in Delaware. They found the Library of Congress’ Veterans’ History Project documentation useful for the creation of their project in order to reach out to the Hispanic community of Delaware. T-Kay Sangwand from the University of Texas, Austin, described how the June L. Mazer Lesbian Archives were processed and digitized, then stored at UCLA. Ms. Sangwand suggested that successful collaborations build trust and transparency, articulate expectations from both sides, include stakeholders from diverse groups, and integrate the community into the preservation process. One speaker noted that collaborative projects are “a lot like donor relations” in the sense that you have to incorporate trust, communications, and contracts in order to create a mutually-beneficial result.

On Saturday, August 14, I sat in on Session 502, “Not on Google? It Doesn’t Exist,” which focused on search engine optimization and the findability of archival materials. One thing to remember: Java is evil for cultural heritage because content rendered with it cannot be indexed by search engines. The session was a bit introductory in nature, but I did learn about a new resource called Linkypedia, which shows how Wikipedia and social media interact with cultural heritage websites.

Then I headed to Session 601, “Balancing Public Services with Technical Services in the Age of Basic Processing,” which featured the use of More Product, Less Process, aka “basic processing,” in order to best serve patrons. After a few minutes I decided to head over to Session 604, “Bibliographic Control of Archival Materials.” The release of RDA and the RDA Toolkit (available free until August 30) has opened up the bibliographic control world to the archival world in new ways. While much of the discussion was outside of my area of knowledge (much was discussed about MARC fields), I learned that even places like Harvard have issues with cross-referencing different types of resources that use different descriptive schemas.

My last session at SAA was 705, “The Real Reference Revolution,” an engaging exploration of reference approaches for archivists. Multiple institutions use Google Calendar for student hours, research appointments, and special hours. One panelist suggested having a blog where students could describe their work experience. Rachel Donahue described what she called “proactive reference tools,” such as Zotero groups for adding new materials from your collection and sharing them with interested researchers, and Google FeedBurner.

It was a whirlwind experience and I left feeling invigorated and ready to tackle new challenges and ideas. Whew!

16 Jan 2010

Beautiful finding aids

Recently, I was presented with a challenge by a tech librarian. He asked me if I could think of any examples of special collections websites with appealing, user-friendly finding aids in EAD. One comment made: “Archives seem to be the only places still doing a long narrative, like a printed document, on the web.”

My first response was to mention the Online Archive of California, but after that, I realized that my knowledge of visually appealing finding aid design and special collections websites was very limited.

The OAC is one of the first archival initiatives of its kind, because it attempts to digitally collocate archival resources in the state of California. Finding aids here are not only easily discovered through each repository’s website, but also through Google, ArchiveGrid, and OCLC (including OAIster when appropriate). Of course, the appealing interface doesn’t hurt the possibility of user discovery. The finding aids (here’s an example) have more visual interest through use of color blocks and links on the right side, as well as a sans-serif font. Perhaps the best part about a statewide interface? Consistency in design and usability.

The purpose of the site, however, is clear: to search finding aids (also referred to as collection guides). Digital content is tied to relevant collections with a small eyeball icon. Users can browse from A-Z and view brief collection descriptions. Overall the site has a clean interface with a simple purpose. The OAC’s collections are tied to the UC system’s Calisphere, which is a public- and educator-focused search site for over 150,000 digital objects (it also includes teacher modules for K-12). Both of these projects are powered by the California Digital Library.

Because my colleague was interested in EAD finding aids, I decided to start with SAA’s EAD Roundtable website. The site includes a list of early adopters of EAD, so I took a look at how creative some institutions were with representing their finding aids online.

My favorites so far?

Emory’s Manuscripts and Rare Books Library has a great search and browse interface. From the main page, users are informed that they can browse, search, and also search the catalog for resources. The database includes unprocessed collections, which is a pleasant surprise in the era of “hidden collections”. The finding aids themselves are visually interesting, with linked content, as well as icons for the PDF and printable versions (see the James D. Waddell papers for example).

Columbia University’s Archival Collections Portal searches both finding aids and digital content. I think this type of searching is natural for users, making it easier for users to access resources. The finding aids appear to be in a variety of formats depending on the collection, including HTML and PDF, but each record in the portal includes a descriptive summary and subject terms.

Both of these go against the typical left-side menu browsing of many EAD finding aids. I started to realize from my own preferences that EAD mattered less than the overall visual appeal and ease of use of the finding aid itself. If we can do a full-text search of any text document, why are we doing complex EAD encoding? Why aren’t we just doing HTML? How about catablogs? The idea is that, like MARC, having standards can help researchers find similar resources.
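To make the standards argument concrete, here is a toy Python example of what encoding buys you: with tagged, normalized dates, a machine can answer a date-range question that a full-text search over rendered HTML cannot answer reliably. The fragment is invented and much simplified, not from a real finding aid.

```python
import xml.etree.ElementTree as ET

# A tiny, simplified EAD-like fragment (not a complete finding aid).
fragment = """
<dsc>
  <c level="file"><did><unittitle>Correspondence</unittitle>
    <unitdate normal="1912/1918">1912-1918</unitdate></did></c>
  <c level="file"><did><unittitle>Diaries</unittitle>
    <unitdate normal="1923/1930">1923-1930</unitdate></did></c>
</dsc>
"""

root = ET.fromstring(fragment)

# Find every component whose normalized date range overlaps 1910-1920,
# something a keyword search over flat HTML cannot do precisely.
for c in root.iter("c"):
    start, end = (int(y) for y in c.find("did/unitdate").get("normal").split("/"))
    if start <= 1920 and end >= 1910:
        print(c.find("did/unittitle").text, f"({start}-{end})")
```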

I’m at the beginning of understanding the many reasons to use EAD, but already I find myself questioning it. Jeanne over at Spellbound Blog talked about the possibilities of simpler EAD finding aids in 2008, through the Utah State Historical Society’s next-generation version of the Susa papers. There’s the Jon Cohen AIDS Research Collection, which is a finding aid and digital collection. Then there is the famous Polar Bear Expedition collection of next-generation finding aids.

There seems to be a lot of overlap between finding aids and digital objects, which I’ve seen at Duke and East Carolina University, among others. Then there’s the movement to push our resources onto Flickr, Facebook, Twitter, etc. If repositories host their own finding aids and digital objects, they can repurpose and collocate them anywhere on the web, right?

I still don’t know if I have a good answer for my colleague. I know I have much to learn. I am curious to know…what is your favorite EAD finding aid site? The most beautiful finding aid site?

1 Sep 2009

SAA Research Forum: collaboration for the greater good

My presentation at the 2009 SAA Research Forum was “Sharing for the Greater Good: Outreach and Collaboration from the Perspective of Community-Based Archives,” which was an attempt to bring attention to collaboration between large and small memory institutions. You can read the abstract here.  

Following the initial shock of actually being selected to participate in the forum, I realized that there was much I wanted to say and very little time (10 minutes, to be exact) to say it. I attempted to explore the process of creating a successful collaborative partnership, using the Collaboration Continuum created by Gunter Waibel in the now-famous report “Beyond the Silos of the LAMs.” While the OCLC report was meant as a high-level analysis of primarily intra-institutional collaboration, I felt that the continuum could be applied to many local-level projects and relationships between libraries, archives, and museums.

For example, Digital Forsyth is a county-wide collaborative digitization project bringing together LAMs for a common goal. The technical and grantwriting expertise of Wake Forest University was key to the creation of the project, while Forsyth County Public Library, Old Salem Museum and Gardens, and Winston-Salem State University provided the content depth. All of this was done without the smaller institutions feeling obligated to donate their materials to Wake Forest. As a result, the DF website has become the new archive of visual history of Forsyth County, undefined by physical or institutional boundaries.

I believe that these boundaries can be blurred, indeed erased, by the formation of digital archives/libraries/museums. Through the creation of topical/geographic digital LAMs, we can permit greater access and findability to the researcher/patron/end-user. This carries great significance for community-based archives, who can keep their records in cultural and geographic context. Communities and individuals can re-define their context artificially and create new archives without diminishing or erasing historical/evidential/documentary/cultural value.

By including records and collections in subject-based archives (like the Walt Whitman Archive) or union catalogs/federated searches (like ArchiveGrid or OAIster), multiple points of access — and description — can be conceived. Some archivists ponder the interest of non-archivists in such a project. I think “non-archivists,” particularly those coming from community-based archives, would welcome the opportunity for autonomy and laying claim to their records online.

Problems arise when we consider the lack of physical preservation and digitization resources available to these community-based archives. That’s where larger institutions come into the picture: to collaborate “for the greater good.” I think the state of North Carolina is headed in a very positive direction with the Traveling Archivist program and the NC Digital Heritage Center (see previous post), both of which focus on smaller, community-based memory institutions. Smaller institutions can then take the initiative to make contact with larger institutions and be responsible for their community’s history being represented (if they so choose).

I guess my ramblings demonstrate the largeness of my topic, and the overall squishiness of my argument. I believe collaboration can be much more than a buzzword. Between the large and small repositories I can see convergence, which the Collaboration Continuum notes as the high-investment, high-risk, high-benefit result of a successful partnership. Through it, both actors are responsible for their roles and become intertwined in a mutually-beneficial relationship and at least one “common function.”

I plan to post a paper exploring my topic in a bit more detail for the forum proceedings later this month. I hope to make better sense of all this by then!