Archive for the 'Conferences' Category


Society of California Archivists Meeting: Berkeley

I presented at the Annual General Meeting of the Society of California Archivists in April as part of Session 12, Moving Backlogs to the Forefront: Revamping Archival Processing Across the UC Libraries. The session highlighted a report created by a “Power of Three” group within the University of California’s Next Gen Technical Services initiative that focused specifically on more efficient archival processing. The main deliverable of this group and its lightning teams: the University of California Guidelines for Efficient Archival Processing.

What makes the UC Guidelines unique is the concept of a “value score,” which helps archivists and processors document their decision-making about the most efficient processing levels at the collection and component levels. Several charts in the guidelines can be used as tools for determining the “value” of a collection and justifying appropriate processing levels. Michelle Light’s presentation on the guidelines provides an excellent description and background.

I presented on my lightning team’s work on defining a method for capturing processing rates, or archival processing metrics. Our group’s work resulted in a basic archival metrics tracking spreadsheet, along with a set of recommendations for key units of measurement. The spreadsheet is embedded in the UC Guidelines. My presentation:
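A processing rate of the kind our team recommended tracking (linear feet processed per hour of work) can be sketched in a few lines of Python. This is a hypothetical illustration; the function name, log format, and sample figures are my own, not the actual spreadsheet’s.

```python
# Minimal sketch of a processing-metrics tracker. The key unit of measurement
# here is assumed to be linear feet processed per hour; the real spreadsheet
# may use different fields.

def processing_rate(linear_feet, hours):
    """Return linear feet processed per hour of work."""
    if hours <= 0:
        raise ValueError("hours must be positive")
    return linear_feet / hours

# Hypothetical log entries: (collection name, linear feet, hours worked)
log = [
    ("Smith Papers", 12.0, 30.0),
    ("Jones Records", 40.0, 50.0),
]

for name, feet, hours in log:
    print(f"{name}: {processing_rate(feet, hours):.2f} linear ft/hour")
```

Rates like these, tracked per collection and per processing level, are what make it possible to compare efficiency across projects.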


The present and future of audiovisual archives: Screening the Future 2012, Los Angeles

This week, I attended the second annual Screening the Future conference, held at the University of Southern California. Screening the Future 2012: Play, Pause and Press Forward was organized around three themes:

  • For the record: should we talk about data or media?
  • Meeting the demand: how can we match users’ expectations with institutional capabilities?
  • “I am big, it’s the pictures that got small!”: what we can learn from each other

The conference program details these themes, which revolve around the current state and challenges facing archives that include audiovisual material. As the event website notes, the conference brought together “archivists, production companies, filmmakers, TV producers, CTOs, scientists, vendors, strategists, funders, and policy makers to develop solutions to the most urgent questions facing audiovisual repositories.” I drove up to campus daily for the intense, three-day event.

I was struck by the unique format of the conference, which was somewhat TED-like in structure. Three-hour sessions (without breaks) brought together experts and innovators from the US and Europe to address the issues listed above.

While following the event via the #stf12 hashtag on Twitter, I learned about PrestoCentre, a European member organization focused on audiovisual and digital preservation — they also have a blog with lots of free content and a new magazine.

Overall, a recurring theme across presentations was the need to address the non-materiality of digital audiovisual content. There is a great deal of anxiety about the “problem” of digital preservation, which, as Kara Van Malssen pointed out, also presents opportunities. Presenters seemed to oscillate between embracing the move from media-based to file-based audiovisual content and accepting that, in the end, digital preservation is about preserving very real, physical storage technology (including servers) with a limited life span.

Some institutions and organizations with adequate funding are focusing on migration, such as James DeFilippis of FOX Technology Group. DeFilippis asked the audience to consider the “archive horizon,” looking 5, 10, or 100 years into the future of our digital storage media — an understanding of the life cycle of your storage will help inform a migration policy, which ensures the transfer of media to new or updated storage on a regular basis. He also described how quickly we are filling our storage media, noting that if 1 MB were a raindrop, 1 PB (petabyte) would be roughly the wine consumption of France over a thousand years. He used FOX Film Entertainment as an example: their digital vault has 15 PB available, and “only” 1.5 PB has been used to date, but they expect over 2,000 TB (2 PB) to be added per year.
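To make the “archive horizon” concrete, the FOX figures quoted above (15 PB of capacity, 1.5 PB used, roughly 2,000 TB or 2 PB added per year) imply only a few years of headroom. A back-of-the-envelope sketch, with a function of my own invention:

```python
# Illustrative "archive horizon" arithmetic: years of headroom left in a
# vault, given its capacity, current usage, and annual growth (all in PB).

def years_until_full(capacity_pb, used_pb, growth_pb_per_year):
    return (capacity_pb - used_pb) / growth_pb_per_year

# FOX figures as reported at the conference: 15 PB capacity, 1.5 PB used,
# ~2 PB/year growth.
print(years_until_full(15, 1.5, 2.0))  # 6.75 years of headroom
```

At that growth rate, the “15 PB available” vault fills in under seven years, which is exactly why a migration policy has to be planned against the life cycle of the storage itself.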

Rob Hummel from Group 47 unpacked a lot of technical jargon in the film archives world, including frame rates and lossless compression. What interested me was his comparison of tape media and digital media, which are similar in that both were considered new, better, and faster technologies — but also similar in that they are fragile, have a short life span, and require specialized equipment to read or view. He noted, “Cloud storage is still just a bunch of spinning disks. We’re acting like electricity is infinite.” He referred the audience to this article about the future accessibility of digital media, then introduced a physical medium called Digital Optical Technology System, or DOTS. According to their website, DOTS is a metal-based digital storage medium, patented at Eastman Kodak (Group 47 bought the patent), that is non-magnetic, inert, and storable under normal RH and temperature conditions, with a lifespan of at least 100 years. Their website says that DOTS is a “true optical ‘eye readable’ method of storing digital files” that is write-once and requires only magnification to be read (as opposed to specialized equipment/hardware). I found it interesting that we are considering a return to physical media, and I am curious to know what the future holds for DOTS.

Howard Besser from NYU (professor/director of the MIAP program) delved into audiovisual material used as research data. One example was the use of video to observe left-handedness over time: researchers watched early films of sporting events for audience members waving. The films, he noted, were not indexed for hand waving, which made it challenging to find appropriate footage. Besser noted that the Center for Home Movies created multiple ways of describing its films, including genre, tropes, actions, and recurring imagery — imagine a category for “look Ma, no hands!” He also described a shift in academia, with scholars increasingly interested in everyday life as a subject of study, and urged archivists to consider that what is collected heavily influences what is studied. Besser insisted that we need to be able to attach metadata to specific time-codes in audiovisual material, so that multiple topics can be discovered within a single work.
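Besser’s point about time-code-level metadata can be sketched as a simple data structure: tags attached to spans within a film rather than to the film as a whole. The field names and example tags below are hypothetical, not a real descriptive standard.

```python
# Hypothetical time-code-level metadata for a single film: each segment
# carries its own tags, so one film can surface under many topics.
clip_metadata = {
    "film": "1921 baseball newsreel",
    "segments": [
        {"start": "00:02:10", "end": "00:02:25", "tags": ["crowd", "hand waving"]},
        {"start": "00:07:40", "end": "00:08:05", "tags": ["pitching", "left-handed"]},
    ],
}

def find_segments(metadata, tag):
    """Return (start, end) spans whose tags include the given term."""
    return [(s["start"], s["end"]) for s in metadata["segments"]
            if tag in s["tags"]]

print(find_segments(clip_metadata, "hand waving"))  # [('00:02:10', '00:02:25')]
```

With film-level description only, the left-handedness researchers had to watch everything; with segment-level tags, a query jumps straight to the relevant spans.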

Pip Laurenson from the Tate Gallery discussed something I had heard about at SCA: video artworks. While the SCA presenters (Annette Doss and Mary K. Woods, “Changing Moving Image Access: Presenting Video Artworks in an Online Environment” (PDF)) mentioned that artists seemed more interested in display than in format, Laurenson said that some artists are deeply concerned with the preservation and presentation of their video artworks: they want the textures and quality of older (at times obsolete) formats, and are interested in the aesthetic variety of different technologies.

Lev Manovich from UCSD (Visual Arts Department) described a whirlwind of digital humanities projects he has been working on with students as part of the Software Studies Initiative. In one project, Manovich’s student used the 5,930 front pages of the Hawaiian Star newspaper from 1893-1912 to show the design shift in print media over time, using images from the Library of Congress’ Chronicling America project. Manovich spoke quickly, so I wasn’t able to keep up with every project; he described one in which a computer grouped artworks by hue and saturation for a particular stylistic period, though I could only find his project on visualizing modernist art. After describing another project that mapped a million pages of manga artwork by shading, Manovich suggested that our genres are artificial and perhaps flawed. With computer-generated groupings based on visual themes and consistencies, visualizations can help create new groupings of content — although I would add that human intervention, especially verification and quality control, is vital to these projects.
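As a toy illustration of the kind of computer-generated visual grouping Manovich described (his actual pipeline is not documented here), each image can be reduced to an average color and bucketed by hue. The artwork names and RGB values below are invented.

```python
# Toy hue-based grouping: reduce each "image" to an average RGB value,
# convert to HSV, and bucket by hue. Real projects would sample many
# pixels per image and cluster on more than one feature.
import colorsys

images = {
    "work_a": (200, 40, 40),   # reddish
    "work_b": (210, 60, 30),   # reddish-orange
    "work_c": (30, 60, 200),   # bluish
}

def hue_bucket(rgb, buckets=6):
    """Map an (r, g, b) triple in 0-255 to one of `buckets` hue bins."""
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return int(h * buckets) % buckets

groups = {}
for name, rgb in images.items():
    groups.setdefault(hue_bucket(rgb), []).append(name)
print(groups)  # the two reddish works share a bucket; the bluish one does not
```

Even this crude sketch shows why human verification matters: the algorithm groups by measurable color, not by the genre or meaning a curator would recognize.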

A master class the next day on managing the cost of archiving ended up being much more high-level than I anticipated, exploring economics and models for pricing out long-term preservation of digital content. Stephen Abrams from the CDL referred the audience to a CDL white paper on cost modeling for preservation, noting that the cost of preservation covers a full set of requirements, not just storage. The CDL and similar groups provide a service beyond storage alone: if you buy a terabyte flash drive or cloud storage, you’re buying only storage, not service or maintenance. Matthew Addis from the IT Innovation Centre suggested that cost and risk in digital preservation are linked: more copies and more scrubbing mean less risk of loss, but also greater cost.

A second master class on Tuesday explored archiving future data — Lev Manovich was notably absent due to illness, so the conversation veered towards personal digital archives with a presentation by Jeff Ubois. Ubois, the original organizer of the Personal Digital Archiving conference, suggested that what is personal has become collective through social media, and that people are curating themselves through tools like Facebook Timeline. His most compelling argument (from my perspective) was that we cannot trust companies with the preservation of permanent records. One great example was a site called “,” which was supposed to save your final messages upon your death. Ironically, the company went under just a few years after its peak. Businesses are ephemeral, with objectives different from those of archives. He quoted Jason Scott, who said that “Google is a museum like a supermarket is a food museum” — continuity and preservation are not compatible with the market environment. I was inspired by the discussion of personal digital archiving and educating users, and whether it should be patron- or research-driven — for me, the unspoken theme was archival appraisal. How can we teach users to do personal digital appraisal and let them decide what to keep? I don’t believe every digital shard should be retained, and tools like Stanford’s Self Archiving Legacy Toolkit could present that kind of opportunity to people. Oh, and there was a shout-out to work by Cal Lee and Richard Cox!

The evening ended with a screening of Rick Prelinger’s Lost Landscapes, featuring a multimedia presentation of digitized film footage of Los Angeles.

On the final day of the conference, Brewster Kahle (of the Internet Archive) spoke — he worries about any single (commercial) solution to the preservation of human knowledge, and says there should be many groups involved. I agree. Sam Gustman, one of the conference organizers, head of the USC Shoah Foundation Institute, and Associate Dean of the USC Libraries, showed off some of the great features of the Shoah Visual History Archive, which “enables researchers to search cataloguing and indexing data of nearly 52,000 videotaped interviews conducted with survivors and witnesses of the Holocaust in 56 countries and 32 languages.” The videos were manually indexed, with timestamps for themes and names of people mentioned. A new geographic search also allows users to see locations mentioned across the interviews, represented on a Google map. They also created a K-12 site called IWitness, built to be consistent with the ISTE standards. The site has a ton of interactive features, including the ability for teachers to cut and edit interviews — there’s even a video guide to “ethical editing” of the interviews.

Ben Moskowitz from Mozilla gave a dynamic presentation about web video, showing off some great tools and coming attractions from Mozilla. One tool, the Gendered Advertisement Remixer, lets people use HTML5 to mash up video and audio from gender-oriented children’s television commercials — one example: My Little Ponies audio mixed with video from a toy gun ad. He mentioned a tool called Hyper Audio as a new way to engage critically with media — I know it has something to do with popcorn.js — which allows people to switch languages, interact with audio transcripts, tweet parts of the transcript and link directly to that point in the audio, and more. He also revealed plans for Mozilla Popcorn, a video authoring interface that lets people create things like multimedia essays consisting of maps, tweets, and archival video — as he said, “be like Jon Stewart.” Finally, Moskowitz urged archivists to provide interfaces to archived material and allow for unanticipated uses of our audiovisual and other records.

Kara Van Malssen of AudioVisual Preservation Solutions (and a super cool digital/video archivist and instructor at NYU/Pratt — check out this video of her talking about oral history in the digital age and her presentations on SlideShare) brought an important topic to the conference: the needs and tools of smaller archives with regard to preservation of digital archives. She mentioned the Open Planets Foundation as a forum for discussion between archivists and coders — they hold annual hackathons and have a problems/solutions area on their website. She emphasized the need for smaller institutions to communicate with developers in order to contribute to the success of digital preservation tools.

Whew! I thought I would only write a few paragraphs…but there were so many valuable and interesting presentations at this conference. I plan to steal some of the conference’s ideas about speakers and session formats in the hope that we can incorporate master classes and the like into SAA someday. I met a number of audiovisual archivists whom I normally would not meet (they go to the AMIA conference instead), as well as other important folks involved in the audiovisual preservation realm. I was reminded of how little overlap there still is between representatives of LAMs, but encouraged that this forum exists.