17 Jul 13

Society of California Archivists Meeting: Berkeley

I presented at the Annual General Meeting of the Society of California Archivists in April as part of Session 12, Moving Backlogs to the Forefront: Revamping Archival Processing Across the UC Libraries. The session highlighted a report created by a “Power of Three” group within the University of California’s Next Gen Technical Services initiative that focused specifically on more efficient archival processing. The main deliverable of this group and its lightning teams: the University of California Guidelines for Efficient Archival Processing.

What makes the UC Guidelines unique is the concept of a “value score,” which guides archivists and processors in documenting their decision-making about the most efficient processing levels for a collection and its components. There are several charts in the guidelines that can be used as tools for determining the “value” of a collection and justifying appropriate processing levels. Michelle Light’s presentation on the guidelines provides an excellent description and background.
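
As a purely hypothetical illustration — the scoring dimensions, weights, and level cutoffs below are invented for this sketch and are not taken from the UC Guidelines — a value-score tally might look like:

```python
# Hypothetical value-score tally. The dimensions, 1-5 ratings, and cutoffs
# below are invented for illustration; they are NOT the UC Guidelines rubric.
def value_score(ratings):
    """Sum 1-5 ratings across scoring dimensions into a single score."""
    return sum(ratings.values())

def suggested_level(score, max_score):
    """Map a score to a processing level: higher value -> more granular work."""
    ratio = score / max_score
    if ratio >= 0.8:
        return "item/folder-level processing"
    if ratio >= 0.5:
        return "series-level processing"
    return "collection-level (minimal) processing"

ratings = {"research value": 4, "user demand": 3, "uniqueness": 5, "condition": 2}
score = value_score(ratings)
print(score, suggested_level(score, max_score=5 * len(ratings)))
```

The real guidelines use charts rather than code, but the principle is the same: score the collection first, then let the score justify the processing level chosen.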

I presented on my lightning team’s work on defining a method for capturing processing rates, or archival processing metrics. Our group’s work resulted in a basic archival metrics tracking spreadsheet, along with a set of recommendations for key units of measurement. The spreadsheet is embedded in the UC Guidelines. My presentation:

10 Jul 13

2013 Archives Leadership Institute: takeaways

Most of this piece is cross-posted to my Library’s blog, The Learning Library.

From June 16–23, I had the privilege of attending the Archives Leadership Institute, a selective, weeklong immersion program in Decorah, Iowa, for emerging archival leaders to learn and develop theories, skills, and knowledge for effective leadership. The program is funded by the National Historical Publications and Records Commission (NHPRC), a statutory body affiliated with the National Archives and Records Administration (NARA), and is hosted at Luther College for the years 2013-2015.

This year represented a complete re-visioning of the program, which featured five daylong sessions: New Leadership Thinking and Methods (with Luther Snow), Project Management (with Sharon Leon, The Center for History and New Media at George Mason University), Human Resource Development (with Christopher Barth, The United States Military Academy at West Point), Strategies for Born Digital Resources (with Daniel Noonan, The Ohio State University), and Advocacy and Outreach (with Kathleen Roe, New York State Archives).

ALI has been one of the greatest learning experiences of my career. So much of this program related directly to my work and current role — but more importantly, much of it could be applied more broadly. Enthusiastic participant responses and notes are captured in this Storify story from ALI and also this excellent recap by a fellow participant, but I will attempt to illustrate what I see as the biggest takeaways from the program that could relate to my colleagues.

Each day of the program included introductions and wrap-up by Luther Snow, an expert consultant/facilitator who originated the concept of “Asset Mapping.” Luther’s background as a community organizer provided a solid foundation for his positive leadership strategy, which emphasizes networked, or “generative” methods of getting things done. There are several principles that I took away from this:

  • Leadership is impact without control. We cannot force people to contribute or participate; the goal is to get people to do things voluntarily by allowing people to contribute with their own strengths.
  • Generative leadership is about asset thinking. The key to creating impact is in starting by thinking of what we actually have: our assets. Focus on talent and areas of strength instead of “needs” and problems — avoid focusing on scarcity or pity.
  • Look for affinities. How can our self-interests overlap? Asset thinking helps us find common interests and mutual benefit — we can connect what we have to get more done than we could on our own.
  • Be part of the larger whole. By emphasizing abundance, we can create affinities, which leads to a sense that “my gain is your gain is our gain.” This sets up a virtuous cycle based on an open-sum (think: potluck; network) instead of a closed-sum (think: slices of pie; gatekeeping) environment.

Of particular importance to generative thinking is the fact that semantics matter. In one activity, participants took turns making “need statements” and then turning them into “asset statements.” One example? Time. Instead of saying “time is scarce,” consider saying “time is valuable.” Instead of “we need more staff,” say “we have lots of great projects and so much enthusiasm from our users. How can we continue to provide these services?” Some more examples of language choices were included in Luther’s (copyrighted) handouts.

Building affinity can be difficult, since it is based on trust and recognizing likeness. We can build affinity with stakeholders connected to our assets — emphasize what you have in common, or talk about how your differences complement each other. Relate to stakeholders by focusing on mutual interests, and try to create opportunities to do a project together. Keep in mind: we can do more together than we can on our own.

And now for some highlights from the daylong sessions…

Strategies for Born Digital Resources (with Daniel Noonan, The Ohio State University)

Project Management (with Sharon Leon, The Center for History and New Media at George Mason University)

  • Historical Thinking Matters, a resource for teaching students how to engage critically with primary sources
  • Consider collaborative, flexible workspaces that increase staff productivity: moveable tables, whiteboards, a staff candy drawer
  • Articulating the Idea, worksheets for project planning from WebWise, IMLS, and the CHNM at GMU
  • Leon’s presentation from a different workshop on project management, including guidelines for creating “project charters” that include a scope statement, deliverables, and milestones
  • Share full text of grant projects and proposals with your staff for learning purposes!
  • Recommended PM tools: Basecamp (http://basecamp.com/), Asana (http://asana.com/), Kona (deltek.com/products/kona.aspx), Podio (https://podio.com/), and Trello (https://trello.com/) — we are using Trello on some projects in collaboration with IT; the trick is to use these tools yourself to get team buy-in
  • Example from my former institution on positive reinforcement: Dedicated Deacon, which automatically sends a notice to the supervisor of the person recognized; there is a weekly drawing for prizes

Strategic Visioning and Team Development (with Christopher Barth, The United States Military Academy at West Point)

Advocacy and Outreach (with Kathleen Roe, New York State Archives)

The next phase of my ALI experience includes a practicum, workshop, and group project. I plan to focus my practicum on building and empowering a new team — my current focus as Acting Head of Special Collections & Archives — by integrating asset-based thinking into our projects and strategic planning. Looking forward to continued growth both through my ALI cohort and the valuable leadership tools and resources I gathered from the intensive in June.

25 May 12

The present and future of audiovisual archives: Screening the Future 2012, Los Angeles

This week, I attended the second annual Screening the Future conference, held at the University of Southern California. Screening the Future 2012: Play, Pause and Press Forward was organized around three themes:

  • For the record: should we talk about data or media?
  • Meeting the demand: how can we match users’ expectations with institutional capabilities?
  • “I am big, it’s the pictures that got small!”: what we can learn from each other

The conference program details these themes, which revolve around the current state of, and challenges facing, archives that hold audiovisual material. As the event website notes, the conference brought together “archivists, production companies, filmmakers, TV producers, CTOs, scientists, vendors, strategists, funders, and policy makers to develop solutions to the most urgent questions facing audiovisual repositories.” I drove up to campus daily for the intense, three-day event.

I was struck by the unique format of the conference, which was somewhat TED-like in structure. Three-hour sessions (without breaks) brought together experts and innovators from the US and Europe to address the issues listed above.

While keeping track of the event via the #stf12 hashtag on Twitter, I learned about PrestoCentre, a European membership organization focused on audiovisual and digital preservation — they also have a blog with lots of free content and a new magazine.

Overall, a continual theme across presentations was the need to address the non-materiality of digital audiovisual content. There is a great deal of anxiety about the “problem” of digital preservation, which, as Kara Van Malssen pointed out, also presents opportunities. Presenters seemed to fluctuate between embracing the move from media-based to file-based audiovisual content and accepting that, in the end, digital preservation is about the preservation of very real, physical storage technology (including servers) with a limited life span.

Some institutions and organizations with adequate funding are focusing on migration, as described by James DeFilippis of FOX Technology Group. DeFilippis asked the audience to consider the “archive horizon,” looking 5, 10, or 100 years into the future of our digital storage media — an understanding of the life cycle of your storage will help inform a migration policy, which ensures the transfer of media to new/updated storage on a regular basis. He also described how quickly we are filling our storage media, noting that if 1 MB were equal to 1 raindrop, 1 PB (petabyte) would roughly equal the wine consumption of France over a thousand years. He used FOX Film Entertainment as an example, noting that their digital vault has 15 PB available. “Only” 1.5 PB has been used to date, but they expect over 2000 TB per year to be added.
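
A quick back-of-the-envelope check of those figures (my own arithmetic, not DeFilippis’s) shows how near that particular archive horizon is:

```python
# Back-of-the-envelope "archive horizon" estimate from the figures above:
# a 15 PB vault with 1.5 PB used, filling at roughly 2000 TB per year.
capacity_tb = 15 * 1000    # 15 PB, expressed in TB
used_tb = 1.5 * 1000       # 1.5 PB used to date
growth_tb_per_year = 2000  # expected annual additions

years_until_full = (capacity_tb - used_tb) / growth_tb_per_year
print(f"{years_until_full:.2f} years until the vault is full")  # 6.75 years
```

At the stated growth rate, the vault's headroom runs out in under seven years — which is exactly why a migration policy matters.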

Rob Hummel from Group 47 explored a lot of technical jargon in the film archives world, including frame rates and lossless compression. What interested me was his comparison of tape media and digital media, which are similar in that both were considered new, better, and faster technologies — but also similar in that they are fragile, have a short life span, and require specialized equipment to read or view. He noted, “Cloud storage is still just a bunch of spinning disks. We’re acting like electricity is infinite.” He referred the audience to this article about the future accessibility of digital media, then introduced a physical medium called Digital Optical Technology System, or DOTS. According to their website, DOTS is a metal-based digital storage medium, patented at Eastman Kodak (Group 47 bought the patent), that is non-magnetic, inert, and storable under normal RH and temperature conditions, with a lifespan of at least 100 years. Their website says that DOTS is a “true optical ‘eye readable’ method of storing digital files” that is write-once and requires only magnification to be read (as opposed to specialized equipment/hardware). I found it interesting that we are considering a return to physical media and am curious to know what the future holds for DOTS.

Howard Besser from NYU (professor/director of the MIAP program) delved into audiovisual material used as research data. One example included the use of video to observe left-handedness over time, whereby researchers watched early films of sporting events for audience members waving. The films, he noted, were not indexed for hand waving, which made it challenging to find appropriate footage. Besser noted that the Center for Home Movies created multiple ways of describing its films, including genre, tropes, actions, and recurring imagery — imagine a category for “look Ma, no hands!” He also emphasized what he called a shift in academia, with scholars increasingly interested in everyday life as a subject of study, and urged archivists to consider that what is collected heavily influences what is studied. Besser insists that we need to be able to attach metadata to specific time-codes in audiovisual material, so that multiple topics can be discovered.
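
A minimal sketch of what Besser is asking for — time-coded description with multiple topics per segment, each independently discoverable — might look like this (the record structure and field names are my own invention, not any descriptive standard):

```python
# Sketch of time-coded descriptive metadata for one film: several topics
# attached to specific time ranges so each can be discovered on its own.
# The record structure and field names are invented for illustration.
film = {
    "title": "Sporting event newsreel",
    "segments": [
        {"start": "00:02:10", "end": "00:02:45",
         "topics": ["crowd waving", "left-handedness"]},
        {"start": "00:07:30", "end": "00:08:05",
         "topics": ["scoreboard", "advertising signage"]},
    ],
}

def segments_about(record, topic):
    """Return the time ranges whose topic list includes the query term."""
    return [(s["start"], s["end"]) for s in record["segments"]
            if topic in s["topics"]]

print(segments_about(film, "crowd waving"))  # [('00:02:10', '00:02:45')]
```

With description at this granularity, a researcher hunting for hand waving could jump straight to the relevant footage instead of screening whole reels.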

Pip Laurenson from the Tate Gallery discussed something I heard about at SCA: video artworks. While the presenters at SCA (Annette Doss and Mary K. Woods, “Changing Moving Image Access: Presenting Video Artworks in an Online Environment” (PDF)) mentioned that artists seemed more interested in display than format, Laurenson said that there are artists deeply concerned with the preservation and presentation of their video artworks — some want the textures and quality of older (at times obsolete) formats, and some are interested in the aesthetic variety of different technologies.

Lev Manovich from UCSD (Visual Arts Department) described a whirlwind of digital humanities projects he has been working on with students as part of the Software Studies Initiative. In one project, Manovich’s student used the 5,930 front pages of the Hawaiian Star newspaper from 1893 to 1912 to show the design shift in print media over time, using images from the Library of Congress’ Chronicling America project. Manovich spoke quickly, so I wasn’t able to keep up with all of the projects; he described one in which a computer grouped artworks by hue and saturation for a particular stylistic period, but I could only find the project he did on visualizing modernist art. After describing another project that maps a million pages of manga artwork by shading, Manovich suggested that our genres are artificial and perhaps flawed. With computer-generated groupings based on visual themes and consistencies, visualizations can help create new groupings of content… although I would add that human intervention, especially verification and quality control, is vital to these projects.

A master class the next day on managing the cost of archiving ended up being much more high-level than I anticipated, exploring economics and models for pricing out long-term preservation of digital content. Stephen Abrams from the CDL referred the audience to a CDL white paper on cost modeling for preservation, noting that the true cost of preservation covers a full set of requirements, not just storage: if you buy a terabyte flash drive or cloud storage, you are buying only storage, not service or maintenance, whereas the CDL and similar groups provide a service beyond storage alone. Matthew Addis from the IT Innovation Centre suggested that cost and risk for digital preservation are linked: more copies and more scrubbing mean less risk of loss, but also greater cost.

A second master class on Tuesday explored archiving future data — Lev Manovich was notably absent as he was sick, so the conversation veered towards personal digital archives with a presentation by Jeff Ubois. Ubois, the original organizer of the Personal Digital Archiving conference, suggested that what is personal has become collective through social media, and that people are curating themselves through tools like Facebook Timeline. Ubois’ most compelling argument (from my perspective) is that we cannot trust companies with the preservation of permanent records. One great example was a site called “mylastemail.com,” which was supposed to save your final messages upon your death. Ironically, the company went under just a few years after its peak. Businesses are ephemeral, with objectives different from those of archives. He quoted Jason Scott, who said that “Google is a museum like a supermarket is a food museum” — continuity and preservation are not compatible with the market environment. I was inspired by the discussion of personal digital archiving and educating users, and whether it should be patron or research driven — for me, the unspoken theme was archival appraisal. How can we teach users to do personal digital appraisal, and let users decide what to keep? I don’t believe every digital shard should be retained, and tools like Stanford’s Self Archiving Legacy Toolkit could present that kind of opportunity to people. Oh, and there was a shout-out to work by Cal Lee and Richard Cox!

The evening ended with a screening of Rick Prelinger’s Lost Landscapes, featuring a multimedia presentation of digitized film footage of Los Angeles.

On the final day of the conference, Brewster Kahle (of the Internet Archive) spoke — he worries about any single (commercial) solution to the preservation of human knowledge and says there should be lots of groups involved. I agree. Sam Gustman, one of the conference organizers, head of the USC Shoah Foundation Institute, and Associate Dean of the USC Libraries, showed off some of the great features of the Shoah Visual History Archive, which “enables researchers to search cataloguing and indexing data of nearly 52,000 videotaped interviews conducted with survivors and witnesses of the Holocaust in 56 countries and 32 languages.” The videos were manually indexed, with timestamps for themes and names of people mentioned. A new geographic search also allows users to see locations mentioned across the interviews, represented on a Google map. They also created a K-12 site called IWitness, consistent with the ISTE standards. The site has a ton of interactive features, including the ability for teachers to cut and edit interviews — there’s even a video guide to “ethical editing” of the interviews.

Ben Moskowitz from Mozilla gave a dynamic presentation about web video. He showed off some great tools and coming attractions from Mozilla. One tool, called the Gendered Advertisement Remixer, allows people to use the HTML5 tool to mash up video and audio from gender-oriented children’s television commercials. One example: My Little Ponies audio mixed with video from a toy gun ad. He mentioned a tool called Hyper Audio as a new way to engage critically with media — I know it has something to do with popcorn.js, and it allows people to switch languages, interact with audio transcripts, tweet parts of the transcript and link directly to that point in the audio, and more. At that point, he revealed plans for Mozilla Popcorn, which is a video authoring interface that allows people to create things like multimedia essays consisting of maps, tweets, and archival video — as he said, “be like Jon Stewart.” Finally, Moskowitz urged archivists to provide interfaces to archived material and allow for unanticipated uses of our audiovisual and other records.

Kara Van Malssen of AudioVisual Preservation Solutions (and a super cool digital/video archivist and instructor at NYU/Pratt — check out this video of her talking about oral history in the digital age and her presentations on SlideShare) brought an important topic to the conference: an exploration of the needs/tools for smaller archives with regard to preservation of digital archives. She mentioned the Open Planets Foundation as a forum for discussion between archivists and coders — they have annual hackathons and also have a problems/solutions area on their website. She emphasized the need for smaller institutions to communicate with developers in order to contribute to the success of digital preservation tools.

Whew! I thought I would only write a few paragraphs… but there were so many valuable and interesting presentations at this conference. I plan to steal some of the conference’s ideas about speakers and session formats in the hope that we can incorporate master classes and the like into SAA someday. I met a number of audiovisual archivists whom I normally would not meet (they go to the AMIA conference instead), as well as other important folks involved in the audiovisual preservation realm. I was reminded of how little overlap there still is between representatives of LAMs, but encouraged that this forum exists.

03 May 12

Society of California Archivists Meeting: Ventura

Last weekend was the 2012 general meeting of the Society of California Archivists. The conference, held in Ventura, was my first time attending SCA and I was able to connect with a lot of interesting people and projects.

I drove up from Orange County on Friday morning and arrived just before the opening plenary to register and visit the exhibitor hall. I spent so much time catching up with colleagues and connections that I completely missed the plenary!

Session 1, “Changing Moving Image Access: Presenting Video Artworks in an Online Environment,” focused on a large collection of video art from the Long Beach Museum of Art that was acquired by the Getty Research Institute. Annette Doss and Mary K. Woods of the Getty Research Institute described this and another collection related to women, together adding up to over 5,000 videotapes in multiple formats. The videotapes were individually cataloged with help from AMIM2 and chapter 7 of AACR2. Perhaps the most important takeaway from their presentation was the Getty’s transition from creating DVD user copies to creating digital user copies of the works of art on videotape. Their workflow is now: Umatic (or other tape) to DigiBeta to digital. Their System for Automatic Migration of Media Assets (SAMMA) machine is a multi-encoder that does real-time conversion to up to five output files simultaneously. At the Getty, they create JPEG2000 files with an MXF wrapper. Their digital repository is DigiTool, used as an access platform (as opposed to a preservation platform); material is ingested with MODS records created via MarcEdit, with a METS wrapper. During the Q&A, members of the audience asked about artist involvement in the process; the presenters responded that artists frequently weigh in on reformatting — and that most are more concerned with display than format.

I met my archives buddy (and LACMA archivist) Jessica Gambling for lunch at a local Thai restaurant and then headed back to the conference hotel for Session 6, “The Business of Audio-Visual Preservation.” The session emphasized knowing the standards for preserving audiovisual materials, especially video (as opposed to film). Most of the speakers were a bit too general for my needs, although I did learn from Leah Kerr of the Mayme A. Clayton Library and Museum that they have posted their library catalog online (just click on the anonymous user login). One takeaway: keep in mind that video has a 15-20 year lifespan. Lauren Sorensen from the Bay Area Video Coalition gave an engaging introduction to the nonprofit, including its history and services.

That evening, I met up with a few UC archivists at an Aeon mixer in downtown Ventura, then drove up to Santa Barbara for dinner. The next morning, I got up bright and early for my presentation in the lightning talks, Session 7. Moderator Lisa Miller from the Hoover Institution Archives introduced all of us, and we proceeded to give 6-minute, 20-slide max talks on a variety of topics. Jill Golden from the Hoover Institution Archives discussed how she used Google Hot Trends to find a potential area of focus for her next processing project — she found the name “Saul Alinsky” listed, which happened to be the source of an unprocessed collection. Jason Miller from UC Berkeley described his process of creating “digital contact sheets” to allow users to view massive amounts of 35mm slides at once. Essentially, Miller does sleeve-page scans of 20 slides at a time, batch edits these pages, then attaches them as low-res images to the finding aid. My presentation, “Forget About the Backlog: Surfacing Accessions Using Archivists’ Toolkit,” highlighted a triage approach taken at my institution with regard to accessions. I am interested in exposing unprocessed accessions via the web, which I see as an even more minimal take on accessioning as processing. I plan to do further research into this area, since I have found similar practices at Yale and Emory.

I took a break from sessions and walked around the historic district and beaches, including a stop at the San Buenaventura Mission. After lunch, I attended Session 14, “Online Archive of California Contributor Meeting,” led by Sherri Berger and Adrian Turner. Adrian described the upcoming collection-level record tool, which is essentially a web form that allows OAC contributors to create a collection-level record and, optionally, attach a PDF inventory or other non-standard finding aid. Adrian’s use of the “mullet” record metaphor included a brilliantly-placed photo of a kid with a mullet hairstyle — short in the front (collection level minimal DACS record) and long in the back (PDF inventory attached). The tool should be available next week. Sherri reported on a survey of OAC/Calisphere users and the results were remarkable: 27% of users of OAC identify as “other”, including historians, researchers, and writers. A full 51% of OAC and 53% of Calisphere users get to the sites via web searches; 35% and 20% respectively get there via referrer (top referrer is, of course, Wikipedia). Nearly 70% of K-12 users get to these sites via web searches. Adrian and Sherri discussed ways to connect users to related content, including the use of a “more like this” feature. They hope that tools such as this, as well as EAC, will help connect users to related archival material.

I loved connecting with archivists at the regional level and hearing about practices from across the state. Presentations will be posted online at the SCA past meetings page.

22 Dec 11

Musings: SAA, DAS, and “Managing Electronic Records in Archives & Special Collections”

This afternoon I successfully completed the electronic exam for “Managing Electronic Records in Archives & Special Collections,” a workshop presented as part of SAA’s Digital Archives Specialist (DAS) program. With my new certificate of continuing education in hand, I wonder how much I should/could participate in the DAS program. I have been watching the development of the program with great interest, particularly the cost, the expected completion timeline, and who the experts would be. I signed up for the course and ventured up to Pasadena for a two-day workshop with Seth Shaw and Nancy Deromedi.

Erica Boudreau has a good summary of the workshop as taught by Tim Pyatt and Michael Shallcross on her blog, so I will try not to repeat too much here. Of interest to those looking to learn more about e-recs are the bibliography and the pre-readings, which consisted of several pieces from the SAA Campus Case Studies website. We were asked to read Case 2, “Defining and Formalizing a Procedure for Archiving the Digital Version of the Schedule of Classes at the University of Michigan” by Nancy Deromedi, and Case 13, “On the Development of the University of Michigan Web Archives: Archival Principles and Strategies” by Michael Shallcross, as well as “Guarding the Guards: Archiving the Electronic Records of Hypertext Author Michael Joyce” by Catherine Stollar.

On the first day, the instructors discussed electronic “recordness,” authenticity and trust, the OAIS and PREMIS models, advocacy, and challenges, and reserved time for the participants to break into groups to discuss the three case studies. On the second day, we dove into the more practical application of e-records programs, in particular a range of workflows. One of the takeaway messages was simply to focus on doing something rather than waiting for a comprehensive solution that can handle every variety of e-record. Seth displayed a Venn diagram he revealed at SAA this year, which separates “fast,” “good,” and “cheap” into three bubbles — any two can overlap, but never all three. Your workflow can be cheap and good, but not fast; good and fast, but not cheap; and so on.

Seth and Nancy illustrated a multi-step workflow using a checksum creator (the example used was MD5sums); Duke’s DataAccessioner for migration and checksums, along with its JHOVE and DROID plugins; WinDirStat for visual analysis of file contents; and FTK Imager for forensics. They also discussed Archivematica for ingest and description, which still seems buggy, and web archiving using tools such as Archive-It, the CDL’s Web Archiving Service, and HTTrack. Perhaps the most significant thing I learned about was the use of digital forensics programs like FTK Imager, as well as the concept of a forensic write blocker, which prevents files on a disk or USB drive from being changed during transfer. Digital forensics helps us see hidden and deleted files, which can help us provide a service to records creators — recovering what was thought lost — and create a disk image to emulate the original disk environment. Also shared: Peter Chan at Stanford put up a great demo on YouTube of how to process born-digital materials using AccessData FTK. It was helpful to see the tools I have been reading about actually demonstrated.
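
A minimal version of the checksum step might look like this in Python — a sketch of the general fixity technique, not the MD5sums utility the instructors actually demonstrated:

```python
import hashlib
from pathlib import Path

def md5_file(path, chunk_size=65536):
    """Compute an MD5 checksum without loading the whole file into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def manifest(accession_dir):
    """Map each file's relative path to its checksum, for later fixity checks."""
    root = Path(accession_dir)
    return {str(p.relative_to(root)): md5_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}
```

Recomputing the manifest after transfer and comparing it against the original is the basic fixity check; in practice the source disk would sit behind a write blocker while it is read.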

Our cohort briefly discussed UC Irvine’s “virtual reading room,” which is essentially a way for researchers to access born-digital content in a reading-room environment using DSpace, through a combination of an application process and a limited user-access period. Our rules of use are also posted. I have a lot of thoughts about how this may change or improve over time as we continue to receive and process born-digital papers and records — when we are doing less arrangement and better summarization, contextualization, and description, how can we create a space for researchers to access material with undetermined copyright status? What will the “reading room” look like in the future?

Our digital projects specialist and I attended the workshop together, and I think we found some potential services and programs that could help us with our born-digital records workflow. Above all, it was helpful to see and hear about the tools being developed and to get experienced perspectives on what has been working at Duke and Michigan. I enjoyed the review of familiar concepts as well as the demonstrations of unfamiliar tools, and could see myself enrolling in future DAS courses. The certificate program includes an option to test out of the four Foundational courses, at $35 a pop. If I choose to complete the program, it must be done within two years, with a comprehensive exam ($100) that must be taken within five months of completing the required courses. Some people are cherry-picking from the curriculum, choosing only the courses most relevant to their work. I think a DAS certification could help train and employ future digital archivists (or, in my mind, archivists in general — since we’ll all be doing this type of work) and may create a “rising tide lifts all ships” situation in our profession. While there is a risk of a certification craze meant for the financial gain of the organization, I was grateful to learn from experienced archivists in a structured setting. There’s something to be said for standards in education in our profession. I hope that DAS will raise the standard for (digital) archivists.

30 Sep 11

A new chapter begins

About a year ago, I set my sights on a return to California, mostly for personal reasons. After much searching (within and without), I interviewed for an archivist position at a large research library in Southern California. I have been told that a surprisingly large number of archivists applied for the position; in fact, I was kept in the pool while a first round of candidates was brought in for interviews. I persisted, and found myself interviewing for the exciting position of archivist at the University of California, Irvine.

My presentation for the interview was an answer to the question, “What key challenges will archivists in academic research libraries face in the next 5 years?” See my response below, which I feel illustrates the core of my professional identity as an archivist:

  • Becoming more user-centered
  • Managing digital expectations
  • Revisiting description
  • Revisiting discovery
  • Everyone is an archivist
  • Represent and document
A few days later, I was offered the position! This month, I packed up my household and drove from North Carolina to California into what promises to be a rewarding, exciting opportunity. I start my new adventure on Monday with great energy, enthusiasm, and a strong sense of direction from a powerhouse team. More to come…
31
Aug
11

SAA Days 4 & 5: e-records, metrics, collaboration

Friday in Chicago started with coffee with Christian Dupont from Atlas Systems, followed by Session 302: “Practical Approaches to Born-Digital Records: What Works Today.” The session was packed…standing-room only (some archivists quipped that we must have broken fire codes with the number of people sitting on the floor)! Chris Prom from U Illinois, Urbana-Champaign, moderated the excellent panel on practical solutions for dealing with born-digital archival collections. Suzanne Belovari of Tufts referred to the AIMS project (which sponsored the workshop I attended on Tuesday) and the Personal Archives in Digital Media (paradigm) project, which offers an excellent “Workbook on digital private papers” and “Guidelines for creators of personal archives.” She also referenced the research of Catherine Marshall of the Center for the Study of Digital Libraries at Texas A&M, who has posted her research and papers regarding personal digital archives on her website. All of the speakers referred to Chris Prom’s Practical E-Records blog, which includes lots of guidelines and tools for archivists dealing with born-digital material.

Ben Goldman of U Wyoming, who wrote an excellent piece in RB&M entitled “Bridging the Gap: Taking Practical Steps Toward Managing Born-Digital Collections in Manuscript Repositories,” talked about basic steps for dealing with electronic records, including network storage, virus checking, format information, generating checksums, and capturing descriptive metadata. He uses Enterprise Checker for virus checking, Duke’s DataAccessioner to generate checksums, and a Word doc or spreadsheet to track actions taken on individual files. Melissa Salrin of U Illinois, Urbana-Champaign, spoke about her use of a program called Firefly to detect social security numbers in files, TreeSize Pro to identify file types, and a process through which she ensures that files are read-only when moved. She urged the audience to document every step of the transfer process, noting that “people use and create files electronically as inefficiently as analog.” Laura Carroll, formerly of Emory, talked about the famous Salman Rushdie digital archives, noting that donor restrictions are what helped shape their workflow for dealing with Rushdie’s born-digital material. The material is now available in a secure Fedora repository. Seth Shaw from Duke discussed DataAccessioner (see previous posts) but mostly spoke eloquently, in what promises to be a historic speech, about the need to “do something, even if it isn’t perfect.”
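Melissa’s two points — move files read-only, and document every step — can be sketched in a few lines of Python. This is just my own illustrative sketch of the idea, not her actual workflow or any of the specific tools named above:

```python
import csv
import hashlib
import shutil
import stat
import time
from pathlib import Path

def transfer_accession(source_dir, dest_dir, log_path):
    """Copy an accession's files, mark the copies read-only,
    and log each action with a fixity checksum."""
    source = Path(source_dir)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with open(log_path, "w", newline="") as log_file:
        log = csv.writer(log_file)
        log.writerow(["file", "sha256", "action", "timestamp"])
        for item in sorted(source.rglob("*")):
            if not item.is_file():
                continue
            target = dest / item.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copy2 preserves timestamps
            sha256 = hashlib.sha256(target.read_bytes()).hexdigest()
            # Strip all write permissions so the copy cannot be altered
            writable = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
            target.chmod(target.stat().st_mode & ~writable)
            log.writerow([str(target), sha256, "copied read-only",
                          time.strftime("%Y-%m-%dT%H:%M:%S")])
```

The CSV log serves the same role as the Word doc or spreadsheet Ben described: a record of actions taken on each file, with checksums that can be re-verified later.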

After lunch, I attended Session 410: “The Archivists’ Toolkit: Innovative Uses and Collaborations.” The session highlighted interesting collaborations and experiments with AT; the most interesting was by Adrianna Del Collo of the Met, who found a way to convert folder-level inventories into XML for import into AT. Following the session, I was invited last-minute to a meeting of the “Processing Metrics Collaborative,” led by Emily Novak Gustainis of Harvard. The small group included two brief presentations by Emily Walters of NC State and Adrienne Pruitt of the Free Library of Philadelphia, both of whom have experimented with Gustainis’ Processing Metrics Database, an exciting tool to help archivists track statistical information about the timing and costs of archival processing. Walters also mentioned NC State’s new tool called Steady, which allows archivists to take container-list spreadsheets and easily convert them into XML stub documents for import into AT. Walters used the PMD for supply cost and time tracking, while Pruitt used the database to help with grant applications. Everyone noted that metrics should be used to compare collections, processing levels, and collection needs, taking special care to note that metrics should NOT be used to compare people. The average processing rate at NC State for their architectural material was 4 linear feet per hour, while it was 2 linear feet per hour for folder lists at Princeton (as noted by meeting participant Christie Petersen).
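To give a sense of what a spreadsheet-to-XML-stub conversion like Steady’s involves, here is a rough Python sketch of the idea. This is my own illustration (not Steady’s actual code), and the column names are assumptions:

```python
import csv
import io
import xml.etree.ElementTree as ET

def container_list_to_ead_stub(csv_text):
    """Turn a simple container-list CSV (box,folder,title,date)
    into an EAD-flavored <dsc> stub suitable for import tooling."""
    dsc = ET.Element("dsc")
    for row in csv.DictReader(io.StringIO(csv_text)):
        c = ET.SubElement(dsc, "c", level="file")
        did = ET.SubElement(c, "did")
        ET.SubElement(did, "container", type="box").text = row["box"]
        ET.SubElement(did, "container", type="folder").text = row["folder"]
        ET.SubElement(did, "unittitle").text = row["title"]
        ET.SubElement(did, "unitdate").text = row["date"]
    return ET.tostring(dsc, encoding="unicode")

sample = "box,folder,title,date\n1,1,Correspondence,1975\n1,2,Clippings,1980"
print(container_list_to_ead_stub(sample))
```

The appeal of this kind of tool is obvious: processors keep working in the spreadsheets they already use, and the markup is generated mechanically.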

On Saturday morning I woke up early to prepare for my session, Session 503: “Exposing Hidden Collections Through Consortia and Collaboration.” I was honored and proud to chair the session with distinguished speakers Holly Mengel of the Philadelphia Area Consortium of Special Collections Libraries, Nick Graham of the North Carolina Digital Heritage Center, and Sherri Berger of the California Digital Library. The panelists defined and explored the exposure of hidden collections, from local/practical projects to regional/service-based projects. Each spoke about levels of “hidden-ness,” and the decisionmaking process of choosing partners and service recipients. It was a joy to listen to and facilitate presentations by archivists with such inspirational projects.

After my session, I attended Session 605: “Acquiring Organizational Records in a Social Media World: Documentation Strategies in the Facebook Era.” The focus on documenting student groups is very appealing, since documenting student life is one of the greatest challenges for university archivists. Most of the speakers recommended web archiving for Twitter and Facebook, which were not new ideas to me. However, Jackie Esposito of Penn State suggested a new strategy for documenting student organizations, which focuses on capture/recapture of social media sites and direct conversations with student groups, including the requirement that every group have a student archivist or historian. Jackie taught an “Archives 101” class to these students on weeknights after 7 pm early in the fall, and made sure to follow up with student groups before graduation.

After lunch, I went to Session 702: “Return on Investment: Metadata, Metrics, and Management.” All I can say about the session is…wow. Joyce Chapman of TRLN (formerly an NC State Library Fellow) spoke about her research into ROI (return on investment) for manual metadata enhancement and a project to understand researcher expectations of finding aids. The first project addressed the challenge of measuring value in a nonprofit (which cannot measure value via sales like a for-profit organization) through A/B testing of enhancements made to photographic metadata by cataloging staff. Her testing found that page views for enhanced metadata records were quadruple those of unenhanced records, a staggering statistic. Web analytics found that 28% of search strings for their photographs included names, which were only added to enhanced records. In terms of cataloger time, their goal was 5 minutes per image, but the average was 7 minutes of metadata work per image. Her project documentation is available online. Her second study examined discovery success within finding aids by academic researchers, using behavior, perception, and rank information. In order from most to least useful for researchers were: collection inventory, abstract, subjects, scope and contents, and biography/history. The abstract was looked at first in 60% of user tests. Users did not know the difference between the abstract and scope and contents notes; in fact, 64% of users did not even read the scope at all after reading the abstract! Researchers explained that their reason for ignoring the biography/history note was a lack of trust in the information, since biographies/histories do not tend to include footnotes and the notes are impossible to cite.

Emily Novak Gustainis from Harvard talked about her processing metrics database, as mentioned in the paragraph about the “Processing Metrics Collaborative” meeting. Her reasoning behind metrics was simple: it is hard to change something until you know what you are doing. Her database tracks 38 aspects of archival processing, including timing and processing levels. She repeated that you cannot compare people, only collections; however, an employee report showed that a permanent processing archivist was spending only 20% of his time processing, and her team was able to use this information to better align staff responsibilities.

Adrian Turner from the California Digital Library talked about the Uncovering California Environmental Collections (UCEC) project, a CLIR-funded grant project to help process environmental collections across the state. While metrics were not built into the project, the group thought they would be beneficial for it. In another project, the UC Next Generation Technical Services initiative found 71,000 linear feet in backlogs and developed tactics for collection-level records in EAD and Archivists’ Toolkit using minimal processing techniques. Through info gathering in a Google Docs spreadsheet, they found no discernible difference between date ranges, personal papers, and record groups processed through their project. They found processing rates of 1 linear foot per hour for series-level arrangement and description and 4-6 linear feet per hour for folder-level arrangement and description. He recommended formally incorporating metrics into project plans and creating a shared methodology for processing levels.

I had to head out for Midway before the Q&A started to catch the train in time for my return flight, which thankfully wasn’t canceled due to Hurricane Irene. As the train passed through Chicago, I found myself thinking about the energizing and inspiring projects, tools, and theory that come from attending SAA…and how much I look forward to SAA 2012.

(Cross posted to ZSR Professional Development blog.)

31
Aug
11

SAA Days 2 & 3: assessment, copyright, conversation

I started Wednesday with a birthday breakfast with a friend from college, then lunch with a former mentor, followed by roundtable meetings. I focused on the Archivists’ Toolkit / Archon Roundtable meeting, which is always a big draw for archivists interested in new developments with the software programs. Perhaps the biggest news came from Merrilee Proffitt of OCLC, who announced that the ArchiveGrid discovery interface for finding aids has been updated and will be freely available (no longer subscription-based) for users seeking archival collections online. A demo of the updated interface, to be released soon, was available in the Exhibit Hall. In addition, Jennifer Waxman and Nathan Stevens described their digital object workflow plug-in for Archivists’ Toolkit, which helps archivists avoid cutting and pasting digital object information. Their plug-in is available online and allows archivists to map persistent identifiers to files in digital repositories, auto-create digital object handles, create tab-delimited work orders, and create a workflow from the rapid entry dropdown in AT.

On Thursday, I attended Session 109: “Engaged! Innovative Engagement and Outreach and Its Assessment.” The session was based on responses to the 2010 ARL survey on special collections (SPEC Kit 317), which found that 90% of special collections librarians are doing ongoing events, instruction sessions, and exhibits. The speakers were interested in how to assess the success of these efforts. Genya O’Meara from NC State cited Michelle McCoy’s article entitled “The Manuscript as Question: Teaching Primary Sources in the Archives — The China Missions Project,” published in C&RL in 2010, suggesting that we need standard metrics for assessing our outreach work as archivists. Steve MacLeod of UC Irvine explored his work with the Humanities Core Course program, which teaches writing skills over 3 quarters, and how he helped design course sessions with faculty to smoothly incorporate archives instruction into humanities instruction. Basic learning outcomes included the ability to answer two questions: what is a primary source? and what is the difference between a firsthand account and a primary source? He also created a LibGuide for the course and helped subject specialist reference/instruction librarians add primary source resources to their LibGuides. There were over 45 sections, through which he and his colleagues taught over 1,000 students. He suggested that the learning outcomes can help us know when our students “get it.” Florence Turcotte from UF discussed an archives internship program in which students earned course credit at UF for writing biographical notes and doing basic archival processing. I stepped out of the session in time to catch the riveting tail end of Session 105: “Pay It Forward: Interns, Volunteers, and the Development of New Archivists and the Archives Profession,” just as Lance Stuchell from the Henry Ford started speaking about the ethics of unpaid intern work. He argued that pay is a matter of morals and dignity, and that unpaid work should not be treated as equal to paid professional work.

After lunch, I headed over to Session 204: “Rights, Risk, and Reality: Beyond ‘Undue Diligence’ in Rights Analysis for Digitization.” I took away a few important points, including “be respectful, not afraid,” and the idea that archivists should form communities of practice where we persuade lawyers through peer practice, such as the TRLN guidelines and the freshly endorsed SAA “Well-Intentioned Practice” document. The speakers called for risk assessment over strict compliance, encouraged reliance on the fair use defense, and recommended maintaining a liberal take-down policy for any challenges to unpublished material placed online. Perhaps most importantly, Merrilee Proffitt reminded us that no special collections library has been successfully sued for copyright infringement for posting unpublished archival material online for educational use. After looking around the Exhibit Hall, I met a former mentor for dinner and went to the UCLA MLIS alumni party, where I was inspired by colleagues and faculty to list some presentation ideas on a napkin. Ideas for next year (theme: crossing boundaries/borders) included US/Mexico archivist relations; water rights (the Hoover Dam, Rio Grande, Mulholland, etc.); community-based archives (my area of interest); and repatriation of Native American material. Lots of great ideas floated around…

(Cross posted at ZSR Professional Development blog.)

31
Aug
11

SAA Day 1: Collecting Repositories and E-Records Workshop

On Tuesday, I arrived in rainy Chicago and headed straight for the Hotel Palomar for the AIMS Project (“Born-Digital Collections: An Inter-Institutional Model for Stewardship”) workshop on born-digital archival material in collecting repositories. The free workshop, called “CREW: Collecting Repositories and E-Records Workshop,” brought together archivists and technologists from around the world to discuss issues related to collection development, accessioning, appraisal, arrangement and description, and discovery and access of born-digital archival materials.

The workshop program started with Glynn Edwards of Stanford and Gretchen Gueguen of UVa, who discussed collection development for born-digital records. The speakers suggested that both collection development policies and donor agreements should have clear language about born-digital material, including asking donors to contribute metadata for the electronic records in their collections. The challenge, they noted, is in collaboratively developing sound guidelines and policies to help archivists/curators make decisions about what to acquire. A group discussion followed about talking to donors about their personal digital lives and about creating a “digital will,” both of which help provide important information about an individual’s work, communication, and history of using technologies.

Kevin Glick and Mark Matienzo from Yale and Seth Shaw from Duke discussed accessioning, the process through which a repository gains control over records and gathers information that informs other functions in the archival workflow. While many of the procedures for accessioning born-digital material are the same as for analog material, the speakers distinguished accessioning the records from accessioning the media themselves (i.e., the Word document versus the floppy disk on which it is saved). Mark described his process of “re-accessioning” material through a forensic (or bit-level) disk imaging process, whereby he write-protected accessioned files to protect the data from manipulation. He used FTK Imager to create a media log with unique identifiers and the physical/logical characteristics of the media, followed by BagIt to create packages with high-level info about accessions. Seth discussed Duke’s DataAccessioner program, which he created as an easy way for archivists to migrate and identify data from disks. A group discussion asked: what level of control is necessary for collections containing electronic records at your institution? and, what are the most common barriers to accessioning electronic records, and how do they show up? Our table agreed that barriers include staffing (skills and time); being able to read media; software AND hardware; storage limits; and a greater need for students/interns.
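The “packages with high-level info” Mark described follow the BagIt convention: a data/ payload directory plus a checksum manifest and a declaration file. Here is a minimal, stdlib-only sketch of that structure — an illustration of the BagIt idea, not the actual tools used at Yale (in practice you would use a maintained library such as the Library of Congress’s bagit-python):

```python
import hashlib
import shutil
from pathlib import Path

def make_bag(source_dir, bag_dir):
    """Package files into a minimal BagIt-style bag: a data/ payload
    directory, an md5 manifest, and a bagit.txt declaration."""
    bag = Path(bag_dir)
    data = bag / "data"
    shutil.copytree(source_dir, data)  # creates bag/ and bag/data/
    lines = []
    for f in sorted(data.rglob("*")):
        if f.is_file():
            digest = hashlib.md5(f.read_bytes()).hexdigest()
            # Manifest entries are "<checksum>  <path relative to bag root>"
            lines.append(f"{digest}  {f.relative_to(bag).as_posix()}")
    (bag / "manifest-md5.txt").write_text("\n".join(lines) + "\n")
    (bag / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    return bag
```

Because the manifest travels with the payload, anyone receiving the bag can recompute the checksums and confirm nothing changed in transit — exactly the kind of control accessioning is meant to establish.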

Simon Wilson from Hull, Peter Chan from Stanford, and Gabriela Redwine from the Harry Ransom Center at UT Austin discussed arrangement and description. They questioned whether archivists can appraise digital material without knowing the content therein, which conflicts with the high-level, minimal processing emphasized in our field in the past few years. Another major issue is volume: space is cheap, but does that mean archivists shouldn’t appraise? It isn’t practical to describe every item, but how will archivists know what is sensitive or restricted? Hypatia provides an easy-to-use interface that allows drag-and-drop intellectual organization of e-records, as well as the ability to add rights and permissions information. Peter Chan described a complex method using AccessData FTK in combination with TransitSolution and Oxygen to compare checksums, find duplicate records, and do a “pattern search” for sensitive terms and numbers (such as social security numbers). Gabi Redwine explored her work with a hybrid collection (analog and digital records), where she learned that descriptive standards should be a learning process for staff, not students or volunteers. Her finding aids for the collection included hyperlinks to electronic content, and she advocated for disk imaging. The group discussion following this session was intense! The hotbed topic was: are the professional skills of appraisal, arrangement, and description still relevant for born-digital materials? Our group agreed that appraisal and description remain important; however, we were strongly divided about whether archivists will need to contribute to the arrangement of e-records. I believe that arrangement becomes less important as things become more searchable, as argued in David Weinberger’s Everything Is Miscellaneous. Arrangement emerged before the digital realm as a way for archivists and librarians to contextualize and organize material based on topics/subjects; with better description, however, users can create their own ways of organizing e-records!
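At its core, Peter’s “pattern search” for sensitive numbers is regular-expression scanning. Here is a toy Python version of the idea — my own sketch, not the FTK workflow, and real screening tools use far more refined rules than this simple pattern:

```python
import re
from pathlib import Path

# Naive SSN-like pattern: three digits, two digits, four digits,
# separated by hyphens. Production tools validate much more carefully.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_for_ssns(root_dir):
    """Return {path: match_count} for files containing SSN-like strings."""
    hits = {}
    for path in Path(root_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        matches = SSN_PATTERN.findall(text)
        if matches:
            hits[str(path)] = len(matches)
    return hits
```

Even a crude scan like this shows why the appraisal question matters: flagging candidate files for human review scales in a way that reading every item does not.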

Finally, Gretchen Gueguen (UVa) and Erin O’Meara of UNC Chapel Hill discussed discovery and access. Our goals as archivists include preserving original format and order as much as possible and applying restrictions as necessary, while balancing this with our mission to make things accessible and available. Gretchen suggested Google Books’ “snippet” approach as a way to provide access without compromising privacy or restrictions on sensitive material. Her models of access for digital material include: in-person versus not; authenticated versus not; physical versus online access; and dynamic versus static. Erin described her use of Curator’s Workbench with FOXML and Solr to control access permissions and assign restrictions and roles to e-records. Another group discussion included chewy scenarios for dealing with born-digital materials; my table had to consider: “you are at a large public academic research library; director brings several CD-ROMs, Zip disks and floppy disks of famous (secretive) professor from campus; they are backup files created over the years; office has more paper files; professor and his laptop are missing; no one can give further details on files; write 1 page plan for preserving/describing files; working institutional repository exists.” With no donor agreement and an understanding that the faculty member was very private, we couldn’t go ahead with full access to the material.

At the end of the day, I left with a much better grasp of how I see myself as an archivist dealing with born-digital material (primarily on optical and disk media). It seems that item-level description works best for born-digital material, while aggregate description works best for analog materials. Digital records are dealt with best through collaboratively created policies and procedures for acquiring, processing, and describing them. Great stuff!

Here is the suggested reading list to help participants prepare for the course:

(Cross posted to ZSR Professional Development blog.)

*Update: all of the workshop presentations have been posted to the born digital archives blog.

15
Jun
11

Teaching digitization for C2C

Most of this post is duplicated on the Professional Development blog at my institution.

I recently volunteered to help teach a workshop entitled “Preparing for a Digitization Project” through NC Connecting to Collections (C2C), an LSTA-funded grant project administered by the North Carolina Department of Cultural Resources. This came about through an informal group of archivists, special collections librarians, and digital projects librarians interested in the future of NC ECHO and its efforts to educate staff and volunteers at cultural heritage institutions across the state about digitization. The group is loosely connected through the now-defunct North Carolina Digital Collections Collaboratory.

Late last year, Nick Graham of the North Carolina Digital Heritage Center was contacted by LeRae Umfleet of NC C2C about teaching a few regional workshops about planning digitization projects. The workshops were created as a way to teach smaller archives, libraries, and museums about planning, implementing, and sustaining digitization efforts. I volunteered to help with the workshops, which were held in January 2011 in Hickory as well as this past Monday in Wilson.

The workshops were promoted through multiple listservs and were open to staff, board members, and volunteers across the state. Each workshop cost $10 and included lunch for participants. Many of the participants reminded me of the folks at the workshops for Preserving Forsyth’s Past. The crowd was enthusiastic and curious, asking lots of questions and taking notes. Nick Graham and Maggie Dickson covered project preparation, metadata, and the NC Digital Heritage Center (and how to get involved); I discussed the project process and digital production as well as free resources for digital publishing; and Lisa Gregory from the State Archives discussed metadata and digital preservation.

I must confess that the information was so helpful, I found myself taking notes! When Nick stepped up to describe the efforts of the Digital Heritage Center, which at this time is digitizing and hosting materials from across the state at no cost, I learned that they will be seeking nominations for North Carolina historical newspapers to digitize in the near future, and that they are also interested in accepting digitized video formats. Lisa also introduced the group to NC PMDO, Preservation Metadata for Digital Objects, which includes a free preservation metadata tool. It is always a joy to help educate repositories across the state in digitization standards and processes!