Posts Tagged ‘finding aids’

17
Jul
13

Society of California Archivists Meeting: Berkeley

I presented at the Annual General Meeting of the Society of California Archivists in April as part of Session 12, Moving Backlogs to the Forefront: Revamping Archival Processing Across the UC Libraries. The session highlighted a report created by a “Power of Three” group within the University of California’s Next Gen Technical Services initiative that focused specifically on more efficient archival processing. The main deliverable of this group and its lightning teams: the University of California Guidelines for Efficient Archival Processing.

What makes the UC Guidelines unique is the concept of a “value score,” which helps archivists and processors document their decision-making about the most efficient processing levels at both the collection and component levels. Several charts in the guidelines can be used as tools for determining the “value” of a collection and justifying appropriate processing levels. Michelle Light’s presentation on the guidelines provides an excellent description and background.

I presented on my lightning team’s work on defining a method for capturing processing rates, or archival processing metrics. Our group’s work resulted in a basic archival metrics tracking spreadsheet, along with a set of recommendations for key units of measurement. The spreadsheet is embedded in the UC Guidelines. My presentation:

05
Apr
11

Society of NC Archivists meeting: Morehead City

Most of this post is duplicated on the Professional Development blog at my institution.

While many of my colleagues were in Philadelphia for ACRL, I traveled east to the coast of North Carolina for the joint conference of the Society of North Carolina Archivists and the South Carolina Archival Association in Morehead City.

After arriving on Wednesday around dinnertime with my carpooling partner Katie (Archivist and Special Collections Librarian at Elon), we met up with Gretchen (Digital Initiatives Librarian at ECU) for dinner at a seaside restaurant and discussion about digital projects and, of course, seafood.

On Thursday, the conference kicked off with an opening plenary from two unique scholars: David Moore of the NC Maritime Museum talked about artist renditions of Blackbeard, Stede Bonnet, and other pirates, as well as archival research that helped contextualize these works; Ralph Wilbanks of the National Underwater and Marine Agency detailed his team’s discovery of the H.L. Hunley submarine, including the remains of the Civil War-era crewmen trapped inside.

Session 1 on Thursday, succinctly titled “Digital Initiatives,” highlighted important work being done at the Avery Research Center for African American History and Culture at the College of Charleston, UNC Charlotte, and ECU. Amanda Ross and Jessica Farrell from the College of Charleston described the challenges and successes of digitizing material culture, namely slave artifacts and African artwork in their collections. Of primary importance was maintaining the color and shape fidelity of 3-D objects, which they addressed economically with two fluorescent clamp lights, a Nikon D80 with an 18-200 mm Quantaray lens (although they recommend a macro lens), a tripod, and a $50 roll of heavy white paper. Their makeshift lab and Dublin Core metadata project resulted in the Avery Artifact Collection within the Lowcountry Digital Library. Kristy Dixon and Katie McCormick from UNC Charlotte spoke carefully about the need for strategic thinking and collaboration at a broad level for special collections and archives today, in particular creating partnerships with systems staff and technical services staff. They noted that with the reorganization of their library, six technical services librarians/staff were added to their special collections department!

Finally, Mark Custer and Jennifer Joyner from ECU explored the future of archival description with a discussion of ECU’s implementation of EAC-CPF, essentially authority records for the creators of archival materials. Mark found inspiration in SNAC, the Social Networks and Archival Context project (a project of UVa and the California Digital Library), to create and incorporate names for their archival collections. Mark used Google Refine’s cluster-and-edit feature on names mined from their EAD files, grabbed URLs through VIAF and WorldCat Identities, and hopes to share their authority records with SNAC. Mark clarified the project, saying:

Firstly, we are not partnered with anyone involved in the excellent SNAC project. Instead, we decided to undertake a smaller, SNAC-like project here at ECU (i.e., we mined our EAD data in order to create EAC records). To accomplish this, I wrote an XSLT stylesheet to extract and clean up our local data. Only after working through that step did we then import this data into Google Refine. With Refine, we did a number of things, but the two things discussed in our presentation were: 1) cluster and edit our names with the well-established, advanced algorithms provided in that product 2) grab more data from databases like WorldCat Identities and VIAF without doing any extra scripting work outside of Google Refine.

Secondly, we haven’t enhanced our finding aid interface at all at this point. In fact, we’ve only put in a few weeks’ worth of work into the project so far, so none of our work is represented online yet. The HTML views of the Frances Renfrow Doak EAC record that we demonstrated were created by an XSLT stylesheet authored by Brian Tingle at the California Digital Library. He has graciously provided some of the tools that the SNAC project is using online at: https://bitbucket.org/btingle/cpf2html/.

Lastly, these authority records have stayed with us; mostly because, at this point, they’re unfinished (e.g., we still need to finish that clustering step within Refine, which requires a bit of extra work). But the ultimate goal, of course, is to share this data as widely as possible. Toward that end, I tend to think that we also need to be curating this data as collaboratively as possible.
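For a concrete sense of what the “grab more data from databases like WorldCat Identities and VIAF” step involves, here is a rough Python sketch of reconciling a single name against VIAF’s AutoSuggest API. To be clear, this is my own illustration rather than ECU’s workflow (Mark did this inside Google Refine), and the JSON keys are assumptions about VIAF’s current response format; the sample name comes from the Frances Renfrow Doak record mentioned above.

```python
# Illustration only: reconcile one name against VIAF's AutoSuggest API.
# The ECU project did this step inside Google Refine, not with a script,
# and the JSON keys below assume VIAF's current AutoSuggest response.
import requests

def viaf_lookup(name):
    """Return (preferred term, VIAF URL) for the best AutoSuggest match."""
    resp = requests.get(
        "https://viaf.org/viaf/AutoSuggest",
        params={"query": name},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("result") or []
    if not results:
        return None, None
    best = results[0]
    return best.get("term"), "https://viaf.org/viaf/" + str(best.get("viafid"))

print(viaf_lookup("Doak, Frances Renfrow"))
```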

The final session of the day was the SNCA Business Meeting, where I gave my report as the Archives Week Chair. That evening, a reception was held to celebrate the award winners for SNCA and give conference attendees the opportunity to participate in a behind-the-scenes tour of the NC Maritime Museum. Lots of fun ensued during the pirate-themed tours and I almost had enough energy to go to karaoke with some other young archivists.

On Friday, I moderated the session entitled “Statewide Digital Library Projects,” with speakers Nick Graham from the NC Digital Heritage Center and Kate Boyd from the SC Digital Library. The session highlighted interesting parallels and differences between the two statewide initiatives. Kate Boyd explained that the SCDL is a multisite project nested in multiple universities, with distributed “buckets” for description and digitization. Their project uses a multi-host version of CONTENTdm, with some projects hosted and branded specifically for certain regions and institutions. Users can browse by county, institution, and date, and the site includes teacher-created lesson plans. The “About” section includes scanning and metadata guidelines; Kate mentioned that the upgrade to CONTENTdm 6 would improve zooming and expanding/reducing views of their digital objects. Nick Graham gave a brief background on the formation of the NCDHC, including NC ECHO and its survey and digitization guidelines. He explained that the NCDHC has minimal selection criteria: an item simply needs a title and no copyright or privacy concerns. The NCDHC displays its digital objects through a single instance of CONTENTdm. Both programs are supported by a mix of institutional and government funding, and both speakers emphasized the value of word-of-mouth marketing and shared branding for better collaborative efforts.

Later that morning, I attended a session on “Collaboration in Records Management.” Jennifer Neal of the Catholic Diocese of Charleston Archives gave an interesting presentation about creating a records management policy for her institution. Among the many reasons to begin an RM program, Jennifer noted that the legal requirements, federal, state, and (in her case) organizational, were likely the most important. She recommended starting with a pilot RM program in an enthusiastic department, as well as finding a friendly department liaison with organizational tendencies. Jennifer came up with “RM Fridays” as a pre-determined time to sort, shred, organize, and inventory the materials for her pilot department. Her metrics were stunning: 135 record cartons were destroyed and 245 were organized and sent off-site. Kelly Eubank from the NC State Archives explained how the state archives uses Archive-It to harvest social media sites and the websites of government agencies and officials. She then briefly explored their use of BagIt to validate geospatial data files as part of their GeoMAPP project.
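For anyone who has not worked with BagIt, validating a bag takes only a few lines with the Library of Congress’s bagit-python library. The sketch below is purely illustrative: the directory path is made up, and I do not know which BagIt implementation the State Archives actually uses for GeoMAPP.

```python
# Minimal BagIt validation sketch using the LoC bagit-python library
# (pip install bagit). The directory path is a made-up example.
import bagit

bag = bagit.Bag("/data/geomapp/county_parcels_2010")
if bag.is_valid():  # checks the bag's structure and file checksums
    print("Bag is complete and checksums match.")
else:
    print("Bag failed validation.")
```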

It was great to meet and network with archival professionals from both Carolinas and learn about some of the innovative and creative projects happening in their institutions. Right now I am thinking about EAC, collaboration with tech services, CONTENTdm, and records management.

23
Nov
10

Sharing MARC from Archivists’ Toolkit

A few weeks ago, I shared an excited tweet with the archives twitterverse announcing that I had successfully tested importing a MARC record from Archivists’ Toolkit into WorldCat. The tweet garnered more attention than I had anticipated, including a few direct messages from fellow archivists wanting to know how we came up with a solution to the MARC from AT problem. Here is what we did.

The problems with the MARCXML exported from AT are few but significant. My colleague Mark Custer at ECU recently posted a question to the AT user group listserv about the fact that AT does not currently allow subfields for subject headings, so the MARC from AT is missing the subfield delimiters. I set up a meeting with a cataloger at my library to review the MARCXML files exported from AT and to get her thoughts on whether the records could be considered complete. We took a look at MARC records for archival material already in WorldCat and compared them to what we exported from AT. She identified the issues that would prevent proper sharing of the MARC with our local catalog and WorldCat:

  • Missing fixed fields including Ctrl, Desc, and Date (if no date range was included in the finding aid)
  • Missing subject heading subfield delimiters
  • 650 used instead of 600 field in some instances
  • Missing indicators for 245 (and 545, optional)
  • Missing cataloging source for 049 and 040

Because the MARC exported from AT is in MARCXML format and our catalogers work with the binary MARC (.mrc) format, we used MarcEdit to convert the record from MARCXML to MRC. Once the missing and erroneous elements were fixed in MarcEdit, we were ready to test import the record. Our library’s account with OCLC Connexion accepts imported records in DAT format, so we saved the MRC file as a DAT file. We tried uploading to Connexion using local bibliographic import and were successful. We determined that it would probably be easier to edit the MARC directly in Connexion, so we will do that in the future. The cataloger and I decided to upload the file to WorldCat as an official record, which worked, as well as to our local catalog, which also worked!
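For those without MarcEdit handy, the same MARCXML-to-MRC conversion can be sketched in a few lines of Python with the pymarc library. This is only an illustration of the step, not our actual workflow; note that pymarc keeps records in UTF-8 rather than translating them to MARC-8, and the file names are examples.

```python
# Sketch of the MARCXML -> binary MARC (.mrc) conversion step with pymarc.
# Unlike our MarcEdit routine, this keeps records in UTF-8 rather than
# translating them to MARC-8. File names are examples only.
from pymarc import MARCWriter, parse_xml_to_array

records = parse_xml_to_array("at_export.xml")  # MARCXML exported from AT

writer = MARCWriter(open("at_export.mrc", "wb"))
for record in records:
    writer.write(record)
writer.close()
```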

One issue for my library is that our finding aids are missing the subject terms and authority work that most catalogers would require for submission to WorldCat. We have started incorporating this cataloger into our processing workflow and introduced her to the Names and Subjects modules in AT so that she can finalize the subject headings and names we assign. In the meantime, we will be submitting our MARC one record at a time since our finding aids are incomplete. Down the road, we could also consider an automated batch update for all our exported MARCXML to make the edits listed above, drawing on our technology team’s knowledge of FTP and scripting; a rough sketch of that approach follows.
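If we do pursue a scripted batch update, it might look something like this pymarc (version 5 or later) sketch. The 040 code “XXX” is a placeholder, the subject-splitting logic is simplistic, and deciding when a 650 really ought to be a 600 still requires a cataloger’s judgment, so treat this as a starting point rather than a finished tool.

```python
# Illustration only (pymarc >= 5): the kinds of batch edits described above.
# "XXX" is a placeholder cataloging-source code; subject splitting here is
# simplistic, and 650-vs-600 decisions still need human review.
from pymarc import Field, MARCWriter, Subfield, parse_xml_to_array

def fix_record(record):
    # Add a cataloging source (040) if the AT export left it out.
    if not record.get_fields("040"):
        record.add_field(
            Field(
                tag="040",
                indicators=[" ", " "],
                subfields=[Subfield(code="a", value="XXX"),
                           Subfield(code="c", value="XXX")],
            )
        )
    # Split flattened subject strings ("Topic -- Subdivision") into $a + $x.
    for field in record.get_fields("650"):
        values = field.get_subfields("a")
        if values and " -- " in values[0]:
            parts = [p.strip() for p in values[0].split(" -- ")]
            field.delete_subfield("a")
            field.add_subfield("a", parts[0])
            for part in parts[1:]:
                field.add_subfield("x", part)
    return record

records = [fix_record(r) for r in parse_xml_to_array("at_export.xml")]
writer = MARCWriter(open("at_export_fixed.mrc", "wb"))
for record in records:
    writer.write(record)
writer.close()
```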

Here’s a recap of our tentative workflow, for your information:

  • Open MarcEdit, then Tools
  • Choose MARCXML file as input file
  • Enter the output file name (copy the input file name and change the extension to .mrc)
  • Select MARC21XML to MARC plus Translate to MARC8
  • Select Execute
  • Open OCLC Connexion
  • Import records; browse to .mrc file
  • Edit directly in OCLC Connexion
  • Update fixed fields including Ctrl, Desc, and Date
  • Change 650 to 600 when necessary
  • Add subfield delimiters to subject headings
  • Add indicators to 545, 245 as needed
  • Add cataloging source to 040 and 049
  • Save and validate
  • Log in to OCLC, then select Action > Holdings > Update Holdings to load directly to WorldCat

Thoughts, comments, ideas, and suggestions are gratefully welcomed! I am really curious to know how others approach this issue.

02
Nov
10

Creating a processing guide

I learned much about the standards of archival processing while I was a fellow at the Center for Primary Research and Training at UCLA. While there, I processed the papers of art critic Jules Langsner, the papers of activist and scholar Josephine Fowler, and the pop culture collection of Middle Eastern Americana created by Jonathan Friedlander. Perhaps most important for my professional development, however, was the training I received from CFPRT Coordinator Kelley Wolfe Bachli, who wrote a succinct and informative processing manual to train each CFPRT fellow.

I brought this training manual with me to North Carolina, and with Kelley’s permission I combined her work with the standards used at my institution, DACS, and the Archivists’ Toolkit user manual. The result? The Archival Processing Guide for Staff, Students, and Volunteers. I also included the chapters on processing and the over-the-shoulder look at processing from Michael J. Fox and Peter L. Wilkerson’s Introduction to Archives, now available free online.

The guide and its rules are constantly under review, but I think it would be a great starting resource for any archives or special collections repository looking for standards for training staff, students, and volunteers in the basics of archival processing. Comments are welcome!

17
Aug
10

Reflections: SAA 2010 in Washington DC

*Portions of this post are duplicated at the WFU ZSR Professional Development blog.

This has been my favorite SAA of the three I have attended, mostly because I felt like I had a purpose and specific topics to explore there. The TwapperKeeper archive for #saa10 is available and includes a ton of great resources. I also got the chance to have my curriculum vitae reviewed at the Career Center not once, but twice! I loved every moment of being in DC and will definitely be attending more of the receptions/socials next time!

Tuesday, August 10 was the Research Forum, where I presented a poster. My poster featured the LSTA outreach grant awarded to my library and the local public library and explored outreach and instruction for these “citizen archivists.” I got a lot of encouraging feedback and questions about our project, including an introduction to the California Digital Library’s hosted instances of Archivists’ Toolkit and Archon, which smaller repositories in the state use to post their finding aids.

Wednesday, August 11 consisted primarily of roundtable meetings, including the highly anticipated meeting of the Archivists’ Toolkit/Archon Roundtable. The development of ArchivesSpace, the next-generation archives management tool that will replace AT and Archon, was discussed; development is planned to begin in early 2011. Jackie Dooley from OCLC announced that results from a survey of academic and research libraries’ special collections departments would be released. A few interesting findings:

  • Of the 275 institutions surveyed, about one-third use Archivists’ Toolkit; 11% use Archon
  • 70% have used EAD for their finding aids
  • About 75% use word processing software for their finding aids
  • Less than 50% of institutions’ finding aids are online

A handful of brief presentations from AT users followed, including Nancy Enneking from the Getty. Nancy demonstrated using AT’s reports to generate statistics on processing, accessioning, and other aspects of staff work with special collections. She mentioned that AT’s underlying MySQL database can also be linked to Microsoft Access as another way to work with AT statistics. Corey Nimer from BYU discussed the use of plug-ins to supplement AT, which I have not yet used but hope to implement.
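Since AT sits on top of a MySQL database, those numbers can also be pulled with a short script instead of linking Microsoft Access to it. The sketch below is purely illustrative: the connection details are invented, and the table and column names (Accessions, accessionDate) are my approximation of the AT schema, so verify them against your own installation.

```python
# Illustrative only: querying AT's MySQL backend directly for statistics.
# Connection details are invented; the table/column names (Accessions,
# accessionDate) approximate the AT schema and should be verified locally.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="at_reports", password="secret", database="at_db"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT YEAR(accessionDate) AS yr, COUNT(*) "
    "FROM Accessions GROUP BY yr ORDER BY yr"
)
for year, total in cursor.fetchall():
    print(f"{year}: {total} accessions")
conn.close()
```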

Perhaps more interestingly, Marissa Hudspeth from the Rockefeller Archive Center and Sibyl Schaefer from the University of Vermont introduced their development of a reference module for AT, which would allow patron registration, use tracking, duplication requests, personal user accounts, et cetera. Although there is much debate in the archives community about whether this is a good use of AT (since it was originally designed for description and content management of archives), parts of the module should be released in Fall 2010. They said they would post a formal announcement on the ATUG listserv soon.

On Thursday, August 12, sessions began bright and early. I started the day with Session 102: “Structured Data Is Essential for Effective Archival Description and Discovery: True or False?” Overall summary: usability studies, tabbed finding aids, and photos in finding aids are great! While the panel concluded that structured data is not essential for archival description and discovery due to search tools, Noah Huffman from Duke demonstrated how incorporating more EAD into MARC as part of their library’s discovery layer resulted in increased discovery of archival materials.

Session 201 included a panel of law professors and copyright experts, who gave an update on intellectual property legislation. Peter Jaszi introduced the best practices and fair use project at the Center for Social Media, a five-year effort to analyze best practices for fair use. Their guidelines for documentary filmmakers could serve as an example for research libraries. In addition, the organization also created a statement of best practices for fair use of dance materials, hosted at the Dance Heritage Coalition. Mr. Jaszi argued that Section 1201 is not copyright proper but what he called “para-copyright” law, which cultural heritage institutions can maneuver around for fair use. I was also introduced to Peter Hirtle’s book about copyright (also a free download) entitled Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums, which I have started to read.

I wandered out of Session 201 into Session 209, “Archivist or Educator? Meet Your Institution’s Goals by Being Both,” which featured archivists who teach. The speakers emphasized the study of how students learn as the core of becoming a good teacher. One recommendation included attending a history or social sciences course in order to see how faculty/teachers teach and how students respond. I was inspired to consider faculty themes, focuses, and specialties when thinking about how to reach out to students.

Around 5:30 pm, the Exhibit Hall opened along with the presentation of the graduate student poster session. I always enjoy seeing the work of emerging scholars in the archival field, and this year was no different. One poster featured the Philadelphia Area Consortium of Special Collections Libraries in a CLIR-funded project to process hidden collections in the Philadelphia region — not those within larger repositories, but within smaller repositories without the resources or means to process and make available their materials. The graduate student who created the poster served as a processor, traveling to local repositories and communicating her progress and plan to a project manager. This is an exciting concept, since outreach grants tend to focus on digitization or instruction, not the act of physically processing the archival materials or creating finding aids.

On Friday, August 13, I started the morning with Session 308, “Making Digital Archives a Pleasure to Use,” which ended up focusing on user-centered design. User studies at the National Archives and WGBH Boston found that users preferred annotation tools, faceted searching, and filtered searching. Emphasis was placed on an iterative approach to design: prototype, feedback, refinement.

I headed afterward to Session 410, “Beyond the Ivory Tower: Archival Collaboration, Community Partnerships, and Access Issues in Building Women’s Collections.” The panel, while focused on women’s collections, explored collaborative projects in a universally applicable way. L. Rebecca Johnson Melvin from the University of Delaware described the library’s oral history project to record Afra-Latina experiences in Delaware. They found the Library of Congress’ Veterans’ History Project documentation useful for the creation of their project in order to reach out to the Hispanic community of Delaware. T-Kay Sangwand from the University of Texas, Austin, described how the June L. Mazer Lesbian Archives were processed and digitized, then stored at UCLA. Ms. Sangwand suggested that successful collaborations build trust and transparency, articulate expectations from both sides, include stakeholders from diverse groups, and integrate the community into the preservation process. One speaker noted that collaborative projects are “a lot like donor relations” in the sense that you have to incorporate trust, communications, and contracts in order to create a mutually-beneficial result.

On Saturday, August 14, I sat in on Session 502, “Not on Google? It Doesn’t Exist,” which focused on search engine optimization and the findability of archival materials. One thing to remember: Java is evil for cultural heritage sites because search engines cannot index its content. The session was a bit introductory in nature, but I did learn about a new resource called Linkypedia, which shows how Wikipedia and social media interact with cultural heritage websites.

Then I headed to Session 601, “Balancing Public Services with Technical Services in the Age of Basic Processing,” which featured the use of More Product, Less Process, aka “basic processing,” in order to best serve patrons. After a few minutes I decided to head over to Session 604, “Bibliographic Control of Archival Materials.” The release of RDA and the RDA Toolkit (available free until August 30) has opened up the bibliographic control world to the archival world in new ways. While much of the discussion was outside of my area of knowledge (much was discussed about MARC fields), I learned that even places like Harvard have issues with cross-referencing different types of resources that use different descriptive schemas.

My last session at SAA was 705, “The Real Reference Revolution,” an engaging exploration of reference approaches for archivists. Multiple institutions use Google Calendar for student hours, research appointments, and special hours. One panelist suggested having a blog where students could describe their work experience. Rachel Donahue described what she called “proactive reference tools,” such as Zotero groups for adding new materials from your collection and sharing them with interested researchers, and Google FeedBurner.

It was a whirlwind experience and I left feeling invigorated and ready to tackle new challenges and ideas. Whew!

07
Jul
10

An archivist at ALA

Note: this post is duplicated at http://cloud.lib.wfu.edu/blog/pd/.

After completing my project as a 2009 Emerging Leader (updating the wiki and resources of the Joint Committee on Archives, Libraries, and Museums, also known as CALM) I was nominated to join the Emerging Leaders subcommittee, which is a big reason why I participated in ALA Annual 2010.

On Friday, June 25, I attended the 2010 Emerging Leader poster session, which included excellent reports from this year’s EL cohort. Final projects have been posted to ALAConnect. The 2010 EL group assigned to CALM created a podcast that included an interview with the Archivist of the United States, David Ferriero. After the poster session, I joined the hush of librarians that waited patiently for the Exhibit Hall to open.

On Saturday, June 26, a session entitled “Developing a Sustainable Digitization Workflow” was canceled, so I wandered over to the professional poster sessions and discovered a relevant and interesting poster by Melanie Griffin and Barbara Lewis of the University of South Florida’s Special & Digital Collections department. Entitled “Transforming Special Collections: A (Lib) Guide to Innovation,” the poster detailed the department’s creative use of LibGuides to create special collections guides that unify digital objects and EAD into one interactive interface. Here is an example of a guide to graphic arts materials, with a specific collection tab selected. Their MARC (via Fedora) and EAD (via Archon) are displayed in LibGuides boxes using a script created by their systems librarian. Perhaps the most interesting result of the experimental project is that statistics show higher hits for collections displayed as LibGuides. I am in touch with Melanie and Barbara, who are continuing their project and working to create a new stylesheet for their EAD as well.

After lunch, I attended the Emerging Leaders summit, which was a discussion led by current and past Emerging Leaders to reflect on the process and experience of the EL program. I gathered feedback to bring to the EL subcommittee meeting. On Sunday, June 27, I participated in the EL subcommittee meeting (my first experience with ALA committee work). We discussed the EL mentor experience and project development, as well as assessment and managing expectations from both the EL and mentor/sponsor perspective.

After lunch with Atlas Systems regarding the Aeon archives management program, I attended the LITA Top Tech Trends forum. This was my first time at TTT, which Erik explores in greater detail in an earlier post. Cindi Trainor brought up a topic that I thought I would hear only at an archivists’ gathering: after declaring the end of the era of physical copy scarcity, she asked “what will the future scarce commodities be” in libraries. Of course, my ears heard “what will future special collections and archives be?” For the first time, I started thinking that as an archivist, I should be part of LITA.

11
May
10

Who cares about learning EAD?

Matt (@herbison) over at Hot Brainstem posted a good question to his blog: “Can you skip learning EAD and go right to Archivists’ Toolkit or Archon?” He suggests that the “right way” to create accessible finding aids (EAD, DACS, XML, XSLT, and AT) is not as important as finding a (faster) way to get stuff online. First, I want to say thanks to him for bringing this question to the table.

I was not trained to create EAD finding aids in grad school (although I have experience with XML and HTML). Instead, I was trained to create EAD-compatible MS Word docs that were plopped into an EAD template by an encoder and sent over to the OAC. For me, AT was not part of the process of creating a finding aid.

In my current job, I’m working with old EAD files that were outsourced and tied to a problematic stylesheet (they referenced JPG files and included HTML color codes). I imported these old EAD files into AT; minor editing was needed, but nothing that made me reference the EAD tag library. I have yet to create one from scratch, although I did recently attend the basic EAD workshop through SAA. I can now search and edit the contents of our existing finding aids (all 450+ of them) and create new ones within the AT interface…and with less opportunity for human error.

I am moving toward the idea of going straight to AT for EAD, since it exports “good” EAD (from what I have seen so far). I am going to train our grad students and library assistants to use AT for accessions and basic processing…so why would I need to teach them EAD? I am still in the process of answering that question, because we are working on a new stylesheet for our finding aids, which means I need to learn more about XSLT. AT might give me a nice EAD document, but it doesn’t make it look pretty online for me.
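As a concrete example of that last step, here is a bare-bones sketch of applying an XSLT stylesheet to AT-exported EAD with Python’s lxml library. The file names are invented, and our real stylesheet is still a work in progress.

```python
# Bare-bones sketch: apply an XSLT stylesheet to EAD exported from AT.
# File names are invented; our actual stylesheet is still in progress.
from lxml import etree

ead = etree.parse("ms0450_from_at.xml")  # EAD exported from AT
transform = etree.XSLT(etree.parse("findingaid_to_html.xsl"))
html = transform(ead)

with open("ms0450.html", "wb") as out:
    out.write(etree.tostring(html, pretty_print=True, method="html"))
```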

AT experts like Sibyl (@sibylschaefer) and Mark (@anarchivist) are right when they suggest that an understanding of EAD is important when you need to do stuff with the EAD that AT exports. Just being aware of elements and the tag library helps me “read” an EAD document…and hopefully, it will help me create better, more beautiful finding aids through stylesheets that interact with the data in functional, interactive ways.

So I suppose the question to consider is, “how much do you need to learn about EAD in order to go right to AT or Archon?”