Posts Tagged ‘archiviststoolkit’

03 May 12

Society of California Archivists Meeting: Ventura

Last weekend was the 2012 general meeting of the Society of California Archivists. The conference, held in Ventura, was my first SCA meeting, and I was able to connect with a lot of interesting people and projects.

I drove up from Orange County on Friday morning and arrived just before the opening plenary to register and visit the exhibitor hall. I spent so much time catching up with colleagues and connections that I completely missed the plenary!

Session 1, “Changing Moving Image Access: Presenting Video Artworks in an Online Environment,” focused on a large collection of video art from the Long Beach Museum of Art that was acquired by the Getty Research Institute. Annette Doss and Mary K. Woods of the Getty Research Institute described this collection and another related to women, which together add up to over 5,000 videotapes in multiple formats. The videotapes were individually cataloged with help from AMIM2 and chapter 7 of AACR2. Perhaps the most important takeaway from their presentation was the Getty’s transition from creating DVD user copies to creating digital user copies of the works of art on videotape. Their workflow is now: U-matic (or other tape) to DigiBeta to digital. Their System for Automatic Migration of Media Assets (SAMMA) machine is a multi-encoder that performs real-time conversion to up to five output files simultaneously. At the Getty, they create JPEG2000 with an MXF wrapper. Their digital repository is DigiTool, used as an access platform (as opposed to a preservation platform); records are ingested as MODS, created via MARCedit, with a METS wrapper. During the Q&A, members of the audience asked about artist involvement in the process, to which they responded that artists frequently weigh in on reformatting, and that most of them are more concerned with display than with format.
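
To make that packaging concrete, here is a minimal sketch, in Python with lxml, of a MODS descriptive record wrapped inside a METS document alongside a pointer to a hypothetical JPEG2000-in-MXF master file. The element values, identifiers, and file names are invented for illustration; this shows the general METS-plus-MODS pattern they described, not the Getty’s actual records.

```python
# Minimal sketch: a MODS record wrapped in METS, plus a file pointer.
# All identifiers, titles, and paths are placeholders.
from lxml import etree

METS = "http://www.loc.gov/METS/"
MODS = "http://www.loc.gov/mods/v3"
XLINK = "http://www.w3.org/1999/xlink"
NSMAP = {"mets": METS, "mods": MODS, "xlink": XLINK}

root = etree.Element("{%s}mets" % METS, nsmap=NSMAP)

# Descriptive metadata section: MODS inside mets:mdWrap/mets:xmlData
dmd = etree.SubElement(root, "{%s}dmdSec" % METS, ID="dmd001")
wrap = etree.SubElement(dmd, "{%s}mdWrap" % METS, MDTYPE="MODS")
xml_data = etree.SubElement(wrap, "{%s}xmlData" % METS)
mods = etree.SubElement(xml_data, "{%s}mods" % MODS)
title_info = etree.SubElement(mods, "{%s}titleInfo" % MODS)
etree.SubElement(title_info, "{%s}title" % MODS).text = "Untitled video artwork"  # placeholder

# File section pointing at a (hypothetical) JPEG2000-in-MXF master
file_sec = etree.SubElement(root, "{%s}fileSec" % METS)
file_grp = etree.SubElement(file_sec, "{%s}fileGrp" % METS, USE="preservation")
master = etree.SubElement(file_grp, "{%s}file" % METS, ID="file001",
                          MIMETYPE="application/mxf")
flocat = etree.SubElement(master, "{%s}FLocat" % METS, LOCTYPE="URL")
flocat.set("{%s}href" % XLINK, "file:///masters/tape0001.mxf")

# METS requires at least one structMap
struct_map = etree.SubElement(root, "{%s}structMap" % METS)
div = etree.SubElement(struct_map, "{%s}div" % METS, DMDID="dmd001")
etree.SubElement(div, "{%s}fptr" % METS, FILEID="file001")

print(etree.tostring(root, pretty_print=True, xml_declaration=True,
                     encoding="UTF-8").decode("utf-8"))
```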

I met my archives buddy (and LACMA archivist) Jessica Gambling for lunch at a local Thai restaurant and then headed back to the conference hotel for Session 6, “The Business of Audio-Visual Preservation.” The session emphasized knowing standards for preserving audiovisual materials, especially video (as opposed to film). Most of the speakers were a bit too general for my needs, although I did learn from Leah Kerr of the Mayme A. Clayton Library and Museum that they have posted their library catalog online (just click on anonymous user login). One takeaway: keep in mind that video has a 15-20 year lifespan. Lauren Sorensen from the Bay Area Video Coalition gave an engaging introduction to the nonprofit, including its history and services.

That evening, I met up with a few UC archivists at an Aeon mixer in downtown Ventura, then drove up to Santa Barbara for dinner. The next morning, I got up bright and early for my presentation in the lightning talks, Session 7. Moderator Lisa Miller from the Hoover Institution Archives introduced all of us and we proceeded to give 6-minute, 20-slide-max talks on a variety of topics. Jill Golden from the Hoover Institution Archives discussed how she used Google Hot Trends to find a potential area to focus on in her next processing project; she found the name “Saul Alinsky” listed, which happened to be the source of an unprocessed collection. Jason Miller from UC Berkeley described his process of creating “digital contact sheets” to allow users to view massive numbers of 35mm slides at once. Essentially, Miller scans sleeve pages of 20 slides at a time, batch edits these page images, then attaches them as low-resolution images to the finding aid. My presentation, “Forget About the Backlog: Surfacing Accessions Using Archivists’ Toolkit,” highlighted a triage approach taken at my institution with regard to accessions. I am interested in exposing unprocessed accessions via the web, which I see as an even more minimal take on the “accessioning as processing” approach. I plan to do further research in this area, since I have found similar practices at Yale and Emory.

I took a break from sessions and walked around the historic district and beaches, including a stop at the San Buenaventura Mission. After lunch, I attended Session 14, “Online Archive of California Contributor Meeting,” led by Sherri Berger and Adrian Turner. Adrian described the upcoming collection-level record tool, which is essentially a web form that allows OAC contributors to create a collection-level record and, optionally, attach a PDF inventory or other non-standard finding aid. Adrian’s use of the “mullet” record metaphor included a brilliantly-placed photo of a kid with a mullet hairstyle — short in the front (collection level minimal DACS record) and long in the back (PDF inventory attached). The tool should be available next week. Sherri reported on a survey of OAC/Calisphere users and the results were remarkable: 27% of users of OAC identify as “other”, including historians, researchers, and writers. A full 51% of OAC and 53% of Calisphere users get to the sites via web searches; 35% and 20% respectively get there via referrer (top referrer is, of course, Wikipedia). Nearly 70% of K-12 users get to these sites via web searches. Adrian and Sherri discussed ways to connect users to related content, including the use of a “more like this” feature. They hope that tools such as this, as well as EAC, will help connect users to related archival material.

I loved connecting with archivists at the regional level and hearing about practices from across the state. Presentations will be posted online at the SCA past meetings page.

31 Aug 11

SAA Days 4 & 5: e-records, metrics, collaboration

Friday in Chicago started with coffee with Christian Dupont from Atlas Systems, followed by Session 302: “Practical Approaches to Born-Digital Records: What Works Today.” The session was packed…standing-room only (some archivists quipped that we must have broken fire codes with the number of people sitting on the floor)! Chris Prom from U Illinois, Urbana-Champaign, moderated the excellent panel on practical solutions to dealing with born-digital archival collections. Suzanne Belovari of Tufts referred to the AIMS project (which sponsored the workshop I attended on Tuesday) and the Personal Archives in Digital Media (paradigm) project, which offers an excellent “Workbook on digital private papers” and “Guidelines for creators of personal archives.” She also referenced the research of Catherine Marshall of the Center for the Study of Digital Libraries at Texas A&M, who has posted her research and papers regarding personal digital archives on her website. All of the speakers referred to Chris Prom’s Practical E-Records blog, which includes lots of guidelines and tools for archivists to deal with born digital material.

Ben Goldman of U Wyoming, who wrote an excellent piece in RB&M entitled “Bridging the Gap: Taking Practical Steps Toward Managing Born-Digital Collections in Manuscript Repositories,” talked about basic steps for dealing with electronic records, including network storage, virus checking, format information, generating checksums, and capturing descriptive metadata. He uses Enterprise Checker for virus checking, Duke DataAccessioner to generate checksums, and a Word doc or spreadsheet to track actions taken for individual files. Melissa Salrin of U Illinois, Urbana-Champaign spoke about her use of a program called Firefly to detect social security numbers in files, TreeSize Pro to identify file types, and a process through which she ensures that the files are read-only when moved. She urged the audience to remember to document every step of the transfer process, and that “people use and create files electronically as inefficiently as analog.” Laura Carroll, formerly of Emory, talked about the famous Salman Rushdie digital archives, noting that donor restrictions are what helped shape their workflow for dealing with Rushdie’s born digital material. The material is now available on a secure Fedora repository. Seth Shaw from Duke spoke about DataAccessioner (see previous posts) but mostly spoke eloquently in what promises to be an historic speech about the need to “do something, even if it isn’t perfect.”
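
Those basic steps translate readily into a small script. The sketch below, in Python, is my own rough illustration rather than Goldman’s workflow (and it is not DataAccessioner): it walks a transfer directory, generates checksums, and captures simple size and format information in a CSV log. The directory and file names are placeholders.

```python
# Rough sketch of the basic steps described above: walk a transfer,
# generate checksums, and capture simple file-level metadata in a CSV log.
import csv
import hashlib
import mimetypes
import os
from datetime import datetime, timezone

def checksum(path, algorithm="md5", chunk_size=65536):
    """Hash a file in chunks so large video or disk-image files don't exhaust memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def log_transfer(transfer_dir, log_path="transfer_log.csv"):
    with open(log_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "bytes", "guessed_type", "md5", "logged_at"])
        for root, _dirs, files in os.walk(transfer_dir):
            for name in files:
                full = os.path.join(root, name)
                # Crude format information; a fuller workflow might use DROID or FITS.
                mime, _ = mimetypes.guess_type(full)
                writer.writerow([
                    os.path.relpath(full, transfer_dir),
                    os.path.getsize(full),
                    mime or "unknown",
                    checksum(full),
                    datetime.now(timezone.utc).isoformat(),
                ])

if __name__ == "__main__":
    log_transfer("incoming_accession")  # hypothetical transfer directory
```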

After lunch, I attended Session 410: “The Archivists’ Toolkit: Innovative Uses and Collaborations.” The session highlighted collaborations and experiments with AT; the most interesting was a presentation by Adrianna Del Collo of the Met, who found a way to convert folder-level inventories into XML for import into AT. Following the session, I was invited last-minute to a meeting of the “Processing Metrics Collaborative,” led by Emily Novak Gustainis of Harvard. The small group included two brief presentations by Emily Walters of NC State and Adrienne Pruitt of the Free Library of Philadelphia, both of whom have experimented with Gustainis’ Processing Metrics Database, an exciting tool to help archivists track statistical information about processing time and costs. Walters also mentioned NC State’s new tool called Steady, which allows archivists to take container-list spreadsheets and easily convert them into XML stub documents for import into AT. Walters used the PMD for tracking supply costs and time, while Pruitt used the database to help with grant applications. Everyone noted that metrics should be used to compare collections, processing levels, and collection needs, taking special care to note that metrics should NOT be used to compare people. The average processing rate at NC State for their architectural material was 4 linear feet per hour, while it was 2 linear feet per hour for folder lists at Princeton (as noted by meeting participant Christie Petersen).
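
I have not used Steady myself, but the general idea behind a spreadsheet-to-stub conversion is easy to sketch. The example below, in Python, assumes a CSV with box, folder, title, and date columns (my assumption, not Steady’s actual input format) and produces EAD container components of the sort a tool like AT could ingest.

```python
# Sketch of the general approach (not the Steady tool itself): turn a
# simple container-list CSV into EAD components. Column names are assumed.
import csv
import xml.etree.ElementTree as ET

def csv_to_ead_components(csv_path):
    """Expects columns: box, folder, title, date (an assumption for this sketch)."""
    dsc = ET.Element("dsc")
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            c = ET.SubElement(dsc, "c01", level="file")
            did = ET.SubElement(c, "did")
            ET.SubElement(did, "container", type="box").text = row["box"]
            ET.SubElement(did, "container", type="folder").text = row["folder"]
            ET.SubElement(did, "unittitle").text = row["title"]
            if row.get("date"):
                ET.SubElement(did, "unitdate").text = row["date"]
    return ET.tostring(dsc, encoding="unicode")

if __name__ == "__main__":
    print(csv_to_ead_components("container_list.csv"))  # hypothetical file
```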

On Saturday morning I woke up early to prepare for my session, Session 503: “Exposing Hidden Collections Through Consortia and Collaboration.” I was honored and proud to chair the session with distinguished speakers Holly Mengel of the Philadelphia Area Consortium of Special Collections Libraries, Nick Graham of the North Carolina Digital Heritage Center, and Sherri Berger of the California Digital Library. The panelists defined and explored the exposure of hidden collections, from local/practical projects to regional/service-based projects. Each spoke about levels of “hidden-ness” and the decision-making process of choosing partners and service recipients. It was a joy to listen to and facilitate presentations by archivists with such inspirational projects.

After my session, I attended Session 605: “Acquiring Organizational Records in a Social Media World: Documentation Strategies in the Facebook Era.” The focus on documenting student groups was very appealing, since documenting student life is one of the greatest challenges for university archivists. Most of the speakers recommended web archiving for Twitter and Facebook, which were not new ideas to me. However, Jackie Esposito of Penn State suggested a new strategy for documenting student organizations, which focuses on capture/recapture of social media sites and direct conversations with student groups, including the requirement that every group have a student archivist or historian. Jackie taught an “Archives 101” class for these students on weeknights after 7 pm early in the fall, and made sure to follow up with student groups before graduation.

After lunch, I went to Session 702: “Return on Investment: Metadata, Metrics, and Management.” All I can say about the session is…wow. Joyce Chapman of TRLN (formerly an NC State Library Fellow) spoke about her research into ROI (return on investment) for manual metadata enhancement and a project to understand researcher expectations of finding aids. The first project addressed the challenge of measuring value in a nonprofit (which cannot measure value via sales the way for-profit organizations can) through A/B testing of enhancements made to photographic metadata by cataloging staff. Her testing found that page views for enhanced metadata records were quadruple those of unenhanced records, a staggering statistic. Web analytics found that 28% of search strings for their photographs included names, which were only added to enhanced records. In terms of cataloger time, their goal was 5 minutes per image, but the average was 7 minutes of metadata work per image. Her project documentation is available online. Her other study examined discovery success within finding aids by academic researchers, using behavior, perception, and rank information. In order from most to least useful for researchers were: collection inventory, abstract, subjects, scope and contents, and biography/history. The abstract was looked at first in 60% of user tests. Users did not know the difference between the abstract and the scope and contents note; in fact, 64% of users did not even read the scope at all after reading the abstract! Researchers explained that their reason for ignoring the biography/history note was a lack of trust in the information, since biographies/histories do not tend to include footnotes and the notes are impossible to cite.

Emily Novak Gustainis from Harvard talked about her processing metrics database, as mentioned in the paragraph about the “Processing Metrics Collaborative” meeting. Her reasoning behind metrics was simple: it is hard to change something until you know what you are doing. Her database tracks 38 aspects of archival processing, including timing and processing levels. She repeated that you cannot compare people, only collections; however, an employee report showed that a permanent processing archivist was spending only 20% of his time processing, so her team was able to use this information to better align staff responsibilities.

Adrian Turner from the California Digital Library talked about the Uncovering California Environmental Collections (UCEC) project, a CLIR-funded grant project to help process environmental collections across the state. While metrics were not built into the project, the group thought they would have been beneficial. In another project, the UC Next Generation Technical Services initiative found 71,000 linear feet of backlog, and developed tactics for collection-level records in EAD and Archivists’ Toolkit using minimal processing techniques. Through information gathering in a Google Docs spreadsheet, they found no discernible difference in rates across date ranges, personal papers, and record groups processed through their project. They found processing rates of 1 linear foot per hour for series-level arrangement and description and 4-6 linear feet per hour for folder-level arrangement and description. He recommended formally incorporating metrics into project plans and creating a shared methodology for processing levels.

I had to head out for Midway before the Q&A started in order to catch the train in time for my return flight, which thankfully wasn’t canceled due to Hurricane Irene. As the train passed through Chicago, I found myself thinking about the energizing and inspiring projects, tools, and theory that come from attending SAA…and how much I look forward to SAA 2012.

(Cross posted to ZSR Professional Development blog.)

31 Aug 11

SAA Days 2 & 3: assessment, copyright, conversation

I started Wednesday with a birthday breakfast with a friend from college, then lunch with a former mentor, followed by roundtable meetings. I focused on the Archivists’ Toolkit / Archon Roundtable meeting, which is always a big draw for archivists interested in new developments with the software programs. Perhaps the biggest news came from Merrilee Proffitt of OCLC, who announced that the ArchiveGrid discovery interface for finding aids has been updated and will be freely available (no longer subscription-based) to users seeking archival collections online. A demo of the updated interface, to be released soon, was available in the Exhibit Hall. In addition, Jennifer Waxman and Nathan Stevens described their digital object workflow plug-in for Archivists’ Toolkit, which helps archivists avoid cutting and pasting digital object information. Their plug-in is available online and allows archivists to map persistent identifiers to files in digital repositories, auto-create digital object handles, create tab-delimited work orders, and create a workflow from the rapid data entry dropdown in AT.
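
As a loose illustration of what a tab-delimited work order boils down to, a few lines of Python are enough to pair files with persistent identifiers for batch loading. To be clear, this is not the Waxman/Stevens plug-in; the directory, identifier prefix, and local minting logic are invented for the example.

```python
# Hedged sketch of a tab-delimited "work order": pair digital object files
# with persistent identifiers so they can be batch-loaded rather than
# cut-and-pasted. Paths and the identifier pattern are placeholders.
import csv
import os
import uuid

def write_work_order(files_dir, out_path="work_order.tsv",
                     id_prefix="http://hdl.example.org/1234/"):
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out, delimiter="\t")
        writer.writerow(["filename", "persistent_id"])
        for name in sorted(os.listdir(files_dir)):
            # In practice, identifiers would come from a handle or ARK
            # service rather than being minted locally like this.
            writer.writerow([name, id_prefix + uuid.uuid4().hex[:8]])

if __name__ == "__main__":
    write_work_order("digital_objects")  # hypothetical directory of files
```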

On Thursday, I attended Session 109: “Engaged! Innovative Engagement and Outreach and Its Assessment.” The session was based on responses to the 2010 ARL survey on special collections (SPEC Kit 317), which found that 90% of special collections librarians are doing ongoing events, instruction sessions, and exhibits. The speakers were interested in how to assess the success of these efforts. Genya O’Meara from NC State cited Michelle McCoy’s article “The Manuscript as Question: Teaching Primary Sources in the Archives — The China Missions Project,” published in C&RL in 2010, suggesting that we need standard metrics for assessing our outreach work as archivists. Steve MacLeod of UC Irvine explored his work with the Humanities Core Course program, which teaches writing skills over three quarters, and how he helped design course sessions with faculty to smoothly incorporate archives instruction into humanities instruction. Basic learning outcomes included the ability to answer two questions: what is a primary source? and what is the difference between a first and primary source? He also created a LibGuide for the course and helped subject specialist reference/instruction librarians add primary source resources to their LibGuides. There were over 45 sections, through which he and his colleagues taught over 1,000 students. He suggested that the learning outcomes can help us know when our students “get it.” Florence Turcotte from UF discussed an archives internship program where students got course credit at UF for writing biographical notes and doing basic archival processing. I stepped out of the session in time to catch the riveting tail end of Session 105: “Pay It Forward: Interns, Volunteers, and the Development of New Archivists and the Archives Profession,” just as Lance Stuchell from the Henry Ford started speaking about the ethics of unpaid intern work. He suggested that paid work is a moral and dignity issue, and that unpaid work is not equal to professional work.

After lunch, I headed over to Session 204: “Rights, Risk, and Reality: Beyond ‘Undue Diligence’ in Rights Analysis for Digitization.” I took away a few important points, including “be respectful, not afraid,” and that archivists should form communities of practice where we persuade lawyers through peer practice, such as the TRLN guidelines and the freshly endorsed SAA document on well-intentioned practice. The speakers called for risk assessment over strict compliance, encouraged reliance on the fair use defense, and recommended maintaining a liberal take-down policy for any challenges to unpublished material placed online. Perhaps most importantly, Merrilee Proffitt reminded us that no special collections library has been successfully sued for copyright infringement for posting unpublished archival material online for educational use. After looking around the Exhibit Hall, I met a former mentor for dinner and went to the UCLA MLIS alumni party, where I was inspired by colleagues and faculty to list some presentation ideas on a napkin. Ideas for next year (theme: crossing boundaries/borders) included US/Mexico archivist relations; water rights (the Hoover Dam, the Rio Grande, Mulholland, etc.); community-based archives (my area of interest); and repatriation of Native American material. Lots of great ideas floated around…

(Cross posted at ZSR Professional Development blog.)

23 Nov 10

Sharing MARC from Archivists’ Toolkit

A few weeks ago, I shared an excited tweet with the archives twitterverse announcing that I had successfully tested importing a MARC record from Archivists’ Toolkit into WorldCat. The tweet garnered more attention than I had anticipated, including a few direct messages from fellow archivists wanting to know how we came up with a solution to the MARC from AT problem. Here is what we did.

The problems with MARCXML exported from AT are few but significant. My colleague Mark Custer at ECU recently posted a question to the AT user group listserv about the fact that AT does not currently allow subfields for subject headings, so the MARC exported from AT is missing the subfield delimiters. I set up a meeting with a cataloger at my library to look at the MARCXML files exported from AT and get her thoughts on whether the records could be considered complete. We took a look at MARC records for archival material already in WorldCat and compared them to what we exported from AT. She identified the issues that would prevent proper sharing of the MARC with our local catalog and WorldCat:

  • Missing fixed fields including Ctrl, Desc, and Date (if no date range was included in the finding aid)
  • Missing subject heading subfield delimiters
  • 650 used instead of 600 field in some instances
  • Missing indicators for 245 (and 545, optional)
  • Missing cataloging source for 049 and 040

Because the MARC exported from AT is in MARCXML format and our catalogers work with binary MARC (.mrc) files, we used MARCedit to convert the record from MARCXML to MRC. Once these missing and erroneous elements were fixed using MARCedit, we were ready to test-import the record. Our library’s account with OCLC Connexion accepts imported records in DAT format, so we saved the MRC file as a DAT file. We tried uploading to Connexion using local bibliographic import and were successful. We determined that it would probably be easier to edit the MARC directly in Connexion, so we will do that in the future. The cataloger and I decided to upload the file to WorldCat as an official record, which worked, as well as to our local catalog, which also worked!

One issue for my library is that our finding aids are missing subject terms and authority work that most catalogers would require for submission to WorldCat. We have started incorporating this cataloger into our processing workflow and introduced her to the Names and Subjects modules in AT so that she can finalize subject headings and names that we assign. We can also consider an automated batch update for all our exported MARCXML to include the edits listed above, incorporating help from our technology team and their knowledge of FTP and scripting. In the meantime, we will be submitting our MARC one at a time since our finding aids are incomplete.
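
For anyone curious about what that batch route might look like, here is a rough sketch using Python and the pymarc library. This is an assumption on my part rather than the workflow we actually run (we currently edit records one at a time in MARCedit and Connexion), and the file names and specific fixes shown are placeholders; which fields to touch would be decided by your catalogers.

```python
# Sketch of batch-fixing AT's MARCXML export with pymarc. Illustrative only.
from pymarc import MARCWriter, parse_xml_to_array

def looks_like_personal_name(field):
    # Placeholder heuristic: deciding which 650s are really personal names
    # needs human review; this sketch just shows the mechanics.
    return False

def fix_record(record):
    # Retag personal-name subjects exported as 650 to 600 (one of the
    # problems listed above).
    for field in record.get_fields("650"):
        if looks_like_personal_name(field):
            field.tag = "600"
    # Supply missing 245 indicators (indicator handling differs slightly
    # across pymarc versions, so check this against your install).
    for field in record.get_fields("245"):
        if list(field.indicators) == [" ", " "]:
            field.indicators = ["1", "0"]
    return record

if __name__ == "__main__":
    # File names are placeholders. Note that pymarc writes UTF-8 MARC;
    # the MARC-8 translation we perform in MARCedit would still be a
    # separate step if your import requires it.
    records = parse_xml_to_array("at_export.xml")
    with open("at_export.mrc", "wb") as out:
        writer = MARCWriter(out)
        for record in records:
            writer.write(fix_record(record))
        writer.close()
```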

Here’s a recap of our tentative workflow, for your information:

  • Open MARCedit, then Tools
  • Choose MARCXML file as input file
  • Give the program an output file name (copy and paste the input file path; change the extension to .mrc)
  • Select MARC21XML to MARC plus Translate to MARC8
  • Select Execute
  • Open OCLC Connexion
  • Import records; browse to .mrc file
  • Edit directly in OCLC Connexion
  • Update fixed fields including Ctrl, Desc, and Date
  • Change 650 to 600 when necessary
  • Add subfield delimiters to subject headings
  • Add indicators to 545, 245 as needed
  • Add cataloging source to 040 and 049
  • Save and validate
  • Log in to OCLC, select Action > Holdings > Update Holdings to load the record directly into WorldCat

Thoughts, comments, ideas, and suggestions are gratefully welcomed! I am really curious to know how others approach this issue.

02 Nov 10

Creating a processing guide

I learned much about the standards of archival processing while I was a fellow at the Center for Primary Research and Training at UCLA. While there, I processed the papers of art critic Jules Langsner, the papers of activist and scholar Josephine Fowler, and the pop culture collection of Middle Eastern Americana created by Jonathan Friedlander. Perhaps most important for my professional development, however, was the training I received from CFPRT Coordinator Kelley Wolfe Bachli, who wrote a succinct and informative processing manual to train each CFPRT fellow.

I brought this training manual with me to North Carolina, and with Kelley’s permission I combined her work with the standards used at my institution, DACS, and the Archivists’ Toolkit user manual. The result? The Archival Processing Guide for Staff, Students, and Volunteers. I also included the chapters about processing, and the over-the-shoulder look at processing, from Michael J. Fox and Peter L. Wilkerson’s Introduction to Archives, now available free online.

The guide and its rules are constantly under review, but I think it would be a great starting resource for any archives or special collections repository looking for standards for training staff, students, and volunteers in the basics of archival processing. Comments are welcome!

17 Aug 10

Reflections: SAA 2010 in Washington DC

*Portions of this post are duplicated at the WFU ZSR Professional Development blog.

This has been my favorite SAA of the three I have attended, mostly because I felt like I had a purpose and specific topics to explore there. The TwapperKeeper archive for #saa10 is available and includes a ton of great resources. I also got the chance to have my curriculum vitae reviewed at the Career Center not once, but twice! I loved every moment of being in DC and will definitely be attending more of the receptions/socials next time!

Tuesday, August 10 was the Research Forum, in which I took part as a poster presenter. My poster featured the LSTA outreach grant given to my library and the local public library and explored outreach and instruction to “citizen archivists.” I got a lot of encouraging feedback and questions about our project, as well as an introduction to the California Digital Library’s hosted instances of Archivists’ Toolkit and Archon, which it offers so that smaller repositories in the state can post their finding aids.

Wednesday, August 11 consisted primarily of roundtable meetings, including the highly anticipated meeting of the Archivists’ Toolkit / Archon Roundtable. The development of ArchivesSpace, the next-generation archives management tool that will replace AT and Archon, was discussed; development of the tool is planned to begin in early 2011. Jackie Dooley from OCLC announced that results from a survey of academic and research libraries’ special collections departments would be released. A few interesting findings:

  • Of the 275 institutions surveyed, about one-third use Archivists’ Toolkit; 11% use Archon
  • 70% have used EAD for their finding aids
  • About 75% use word processing software for their finding aids
  • Less than 50% of institutions’ finding aids are online

A handful of brief presentations from AT users followed, including one from Nancy Enneking of the Getty. Nancy demonstrated the use of reports in AT for creating useful statistics on processing, accessioning, and other aspects of staff work with special collections. She mentioned that AT’s MySQL database can be linked to Microsoft Access as another way to work with statistics from AT. Corey Nimer from BYU discussed the use of plug-ins to supplement AT, which I have not yet used and hope to implement.

Perhaps more interestingly, Marissa Hudspeth from the Rockefeller and Sibyl Schaefer from the University of Vermont introduced their development of a reference module in AT, which would allow patron registration, use tracking, duplication requests, personal user accounts, et cetera. Although there is much debate in the archives community about whether this is a good use of AT (since it was originally designed for description and content management of archives), parts of the module should be released in Fall 2010. They said they would post a formal announcement on the ATUG listserv soon.

On Thursday, August 12, sessions began bright and early. I started the day with Session 102: “Structured Data Is Essential for Effective Archival Description and Discovery: True or False?” Overall summary: usability studies, tabbed finding aids, and photos in finding aids are great! While the panel concluded that structured data is not essential for archival description and discovery due to search tools, Noah Huffman from Duke demonstrated how incorporating more EAD into MARC as part of their library’s discovery layer resulted in increased discovery of archival materials.

Session 201 included a panel of law professors and copyright experts, who gave an update on intellectual property legislation. Peter Jaszi introduced the best practices and fair use project at the Center for Social Media, a five-year effort to analyze best practices for fair use. Their guidelines for documentary filmmakers could serve as an example for research libraries. In addition, the organization also created a statement of best practices for fair use of dance materials, hosted at the Dance Heritage Coalition. Mr. Jaszi argued that Section 1201 is not copyright but what he called “para-copyright” law, which cultural heritage institutions can maneuver around for fair use. I was also introduced to Peter Hirtle’s book about copyright (available as a free download), Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums, which I have started to read.

I wandered out of Session 201 into Session 209, “Archivist or Educator? Meet Your Institution’s Goals by Being Both,” which featured archivists who teach. The speakers emphasized the study of how students learn as the core of becoming a good teacher. One recommendation included attending a history or social sciences course in order to see how faculty/teachers teach and how students respond. I was inspired to consider faculty themes, focuses, and specialties when thinking about how to reach out to students.

Around 5:30 pm, the Exhibit Hall opened along with the presentation of the graduate student poster session. I always enjoy seeing the work of emerging scholars in the archival field, and this year was no different. One poster featured the Philadelphia Area Consortium of Special Collections Libraries in a CLIR-funded project to process hidden collections in the Philadelphia region — not those within larger repositories, but within smaller repositories without the resources or means to process and make available their materials. The graduate student who created the poster served as a processor, traveling to local repositories and communicating her progress and plan to a project manager. This is an exciting concept, since outreach grants tend to focus on digitization or instruction, not the act of physically processing the archival materials or creating finding aids.

On Friday, August 13, I started the morning with Session 308, “Making Digital Archives a Pleasure to Use,” which ended up focusing on user-centered design. User studies at the National Archives and WGBH Boston found that users preferred annotation tools, faceted searching, and filtered searching. Emphasis was placed on an iterative approach to design: prototype, feedback, refinement.

I headed afterward to Session 410, “Beyond the Ivory Tower: Archival Collaboration, Community Partnerships, and Access Issues in Building Women’s Collections.” The panel, while focused on women’s collections, explored collaborative projects in a universally applicable way. L. Rebecca Johnson Melvin from the University of Delaware described the library’s oral history project to record Afra-Latina experiences in Delaware. They found the Library of Congress’ Veterans’ History Project documentation useful for the creation of their project in order to reach out to the Hispanic community of Delaware. T-Kay Sangwand from the University of Texas, Austin, described how the June L. Mazer Lesbian Archives were processed and digitized, then stored at UCLA. Ms. Sangwand suggested that successful collaborations build trust and transparency, articulate expectations from both sides, include stakeholders from diverse groups, and integrate the community into the preservation process. One speaker noted that collaborative projects are “a lot like donor relations” in the sense that you have to incorporate trust, communications, and contracts in order to create a mutually-beneficial result.

On Saturday, August 14, I sat in on Session 502, “Not on Google? It Doesn’t Exist,” which focused on search engine optimization and findability of archival materials. One thing to remember: Java is evil for cultural heritage because it cannot be searched. The session was a bit introductory in nature, but I did learn about a new resource called Linkypedia, which shows how Wikipedia and social media interact with cultural heritage websites.

Then I headed to Session 601, “Balancing Public Services with Technical Services in the Age of Basic Processing,” which featured the use of More Product, Less Process, aka “basic processing,” in order to best serve patrons. After a few minutes I decided to head over to Session 604, “Bibliographic Control of Archival Materials.” The release of RDA and the RDA Toolkit (available free until August 30) has opened up the bibliographic control world to the archival world in new ways. While much of the discussion was outside of my area of knowledge (much was discussed about MARC fields), I learned that even places like Harvard have issues with cross-referencing different types of resources that use different descriptive schemas.

My last session at SAA was 705, “The Real Reference Revolution,” which was an engaging exploration of reference approaches for archivists. Multiple institutions use Google Calendar for student hours, research appointments, and special hours. One panelist suggested having a blog where students could describe their work experience. Rachel Donahue described what she called “proactive reference tools,” such as Zotero groups for adding new materials from your collection and sharing them with interested researchers, and Google FeedBurner.

It was a whirlwind experience and I left feeling invigorated and ready to tackle new challenges and ideas. Whew!

11 May 10

Who cares about learning EAD?

Matt (@herbison) over at Hot Brainstem posted a good question to his blog: “Can you skip learning EAD and go right to Archivists’ Toolkit or Archon?” He suggests that the “right way” to create accessible finding aids (EAD, DACS, XML, XSLT, and AT) is not as important as finding a (faster) way to get stuff online. First, I want to say thanks to him for bringing this question to the table.

I was not trained to create EAD finding aids in grad school (although I have experience with XML and HTML). Instead, I was trained to create EAD-compatible MS Word docs that were plopped into an EAD template by an encoder and sent over to the OAC. For me, AT was not part of the process of creating a finding aid.

In my current job, I’m working with old EAD files that were outsourced and tied to a problematic stylesheet (they referenced JPG files and included HTML color codes). I imported these old EAD files into AT — minor editing was needed, but nothing that made me reference the EAD tag library. I have yet to create one from “scratch,” although I did recently attend the basic EAD workshop through SAA. I can now search and edit the contents of our existing finding aids (all 450+ of them) and create new ones within the AT interface…and with less opportunity for human error.

I am moving toward the idea of going straight to AT for EAD, since it exports “good” EAD (from what I have seen so far). I am going to train our grad students and library assistants to use AT for accessions and basic processing…so why would I need to teach them EAD? I am still in the process of answering that question, because we are working on a new stylesheet for our finding aids, which means I need to learn more about XSLT. AT might give me a nice EAD document, but it doesn’t make it look pretty online for me.
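
For experimenting with a stylesheet outside of the production publishing setup, the transformation step itself is small. Here is a minimal sketch, in Python with lxml, of applying an XSLT to EAD exported from AT; the file names are placeholders, not our actual stylesheet or finding aids.

```python
# Minimal sketch: apply an XSLT stylesheet to an exported EAD file and
# write out HTML for local review. File names are placeholders.
from lxml import etree

def transform_ead(ead_path, xslt_path, out_path):
    ead_doc = etree.parse(ead_path)
    transform = etree.XSLT(etree.parse(xslt_path))
    result = transform(ead_doc)
    with open(out_path, "wb") as out:
        out.write(etree.tostring(result, pretty_print=True, method="html"))
    # XSLT messages and errors end up in transform.error_log, handy for debugging
    for entry in transform.error_log:
        print(entry)

if __name__ == "__main__":
    transform_ead("ua001_finding_aid.xml", "findingaid.xsl", "ua001.html")
```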

AT experts like Sibyl (@sibylschaefer) and Mark (@anarchivist) are right when they suggest that an understanding of EAD is important when you need to do stuff with the EAD that AT exports. Just being aware of elements and the tag library helps me “read” an EAD document…and hopefully, it will help me create better, more beautiful finding aids through stylesheets that interact with the data in functional, interactive ways.

So I suppose the question to consider is, “how much do you need to learn about EAD in order to go right to AT or Archon?”