Archive for the 'Archives Management' Category

17 Jul 2013

Society of California Archivists Meeting: Berkeley

I presented at the Annual General Meeting of the Society of California Archivists in April as part of Session 12, Moving Backlogs to the Forefront: Revamping Archival Processing Across the UC Libraries. The session highlighted a report created by a “Power of Three” group within the University of California’s Next Gen Technical Services initiative that focused specifically on more efficient archival processing. The main deliverable of this group and its lightning teams: the University of California Guidelines for Efficient Archival Processing.

What makes the UC Guidelines unique is the concept of a “value score,” which guides archivists/processors in documenting their decision-making about the most efficient processing level at both the collection and component level. The guidelines include several charts that can be used as tools for determining the “value” of a collection and justifying the appropriate processing level. Michelle Light’s presentation on the guidelines provides an excellent description and background.

I presented on my lightning team’s work on defining a method for capturing processing rates, or archival processing metrics. Our group’s work resulted in a basic archival metrics tracking spreadsheet, along with a set of recommendations for key units of measurement. The spreadsheet is embedded in the UC Guidelines. My presentation:
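A quick aside on what such metrics look like in practice: as a rough illustration only — my own sketch, not part of the team’s spreadsheet — a processing rate boils down to hours of work per linear foot (and its inverse). The figures below are made up:

```python
# Illustration of the kind of processing rate a metrics spreadsheet captures:
# hours of work per linear foot, and feet processed per hour. Figures are made up.
collections = [
    # (collection name, extent in linear feet, processing hours)
    ("Example Family Papers", 12.5, 50.0),
    ("Campus Office Records", 30.0, 45.0),
]

for name, linear_feet, hours in collections:
    print(f"{name}: {hours / linear_feet:.1f} hours per linear foot "
          f"({linear_feet / hours:.2f} linear feet per hour)")
```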

10 Jul 2013

2013 Archives Leadership Institute: takeaways

Most of this piece is cross-posted to my Library’s blog, The Learning Library.

From June 16 – 23, I had the privilege of attending the Archives Leadership Institute, a selective, weeklong immersion program in Decorah, Iowa for emerging archival leaders to learn and develop theories, skills, and knowledge for effective leadership. The program is funded by the National Historical Publications and Records Commission (NHPRC), a statutory body affiliated with the National Archives and Records Administration (NARA), and is hosted at Luther College for the years 2013-2015.

This year represented a complete re-visioning of the program, which featured 5 daylong sessions: New Leadership Thinking and Methods (with Luther Snow), Project Management (with Sharon Leon, The Center for History and New Media at George Mason University), Human Resource Development (with Christopher Barth, The United States Military Academy at West Point), Strategies for Born Digital Resources (with Daniel Noonan, The Ohio State University), and Advocacy and Outreach (with Kathleen Roe, New York State Archives).

ALI has been one of the greatest learning experiences of my career. So much of this program related directly to my work and current role — but more importantly, much of it could be applied more broadly. Enthusiastic participant responses and notes are captured in this Storify story from ALI and also this excellent recap by a fellow participant, but I will attempt to illustrate what I see as the biggest takeaways from the program that could relate to my colleagues.

Each day of the program included introductions and wrap-up by Luther Snow, an expert consultant/facilitator who originated the concept of “Asset Mapping.” Luther’s background as a community organizer provided a solid foundation for his positive leadership strategy, which emphasizes networked, or “generative” methods of getting things done. There are several principles that I took away from this:

  • Leadership is impact without control. We cannot force people to contribute or participate; the goal is to get people to do things voluntarily by allowing people to contribute with their own strengths.
  • Generative leadership is about asset thinking. The key to creating impact is in starting by thinking of what we actually have: our assets. Focus on talent and areas of strength instead of “needs” and problems — avoid focusing on scarcity or pity.
  • Look for affinities. How can our self-interests overlap? Asset thinking helps us find common interests and mutual benefit — we can connect what we have to get more done than we could on our own.
  • Be part of the larger whole. By emphasizing abundance, we can create affinities, which leads to a sense that “my gain is your gain is our gain.” This sets up a virtuous cycle based on an open-sum (think: potluck; network) instead of a closed-sum (think: slices of pie; gatekeeping) environment.

Of particular importance to generative thinking is the fact that semantics matter. In one activity, participants took turns making “need statements” and then turning them into “asset statements.” One example? Time. Instead of saying “time is scarce,” consider saying “time is valuable.” Instead of “we need more staff,” say “we have lots of great projects and so much enthusiasm from our users. How can we continue to provide these services?” Some more examples of language choices were included in Luther’s (copyrighted) handouts.

Building affinity can be difficult, since it is based on trust and recognizing likeness. We can build affinity with stakeholders connected to our assets — emphasize what you have in common, or talk about how your differences complement each other. Relate to stakeholders by focusing on mutual interests, and try to create opportunities to do a project together. Keep in mind: we can do more together than we can on our own.

And now for some highlights from the daylong sessions…

Strategies for Born Digital Resources (with Daniel Noonan, The Ohio State University)

Project Management (with Sharon Leon, The Center for History and New Media at George Mason University)

  • Historical Thinking Matters, a resource for teaching students how to engage critically with primary sources
  • Consider collaborative, flexible workspaces that increase staff productivity: moveable tables, whiteboards, a staff candy drawer
  • Articulating the Idea, worksheets for project planning from WebWise, IMLS, and the CHNM at GMU
  • Leon’s presentation from a different workshop on project management, including guidelines for creating “project charters” that include a scope statement, deliverables, and milestones
  • Share full text of grant projects and proposals with your staff for learning purposes!
  • Recommended PM tools: Basecamp (http://basecamp.com/), Asana (http://asana.com/), Kona (deltek.com/products/kona.aspx), Podio (https://podio.com/), and Trello (https://trello.com/) — we are using Trello with some projects in collaboration with IT. The trick is to use these tools yourself to get team buy-in
  • Example of positive reinforcement from my former institution: Dedicated Deacon, which automatically sends a notice to the supervisor of the person recognized, plus a weekly drawing for prizes

Strategic Visioning and Team Development (with Christopher Barth, The United States Military Academy at West Point)

Advocacy and Outreach (with Kathleen Roe, New York State Archives)

The next phase of my ALI experience includes a practicum, workshop, and group project. I plan to focus my practicum on building and empowering a new team — my current focus as Acting Head of Special Collections & Archives — by integrating asset-based thinking into our projects and strategic planning. Looking forward to continued growth both through my ALI cohort and the valuable leadership tools and resources I gathered from the intensive in June.

22 Dec 2011

Musings: SAA, DAS, and “Managing Electronic Records in Archives & Special Collections”

This afternoon I successfully completed the electronic exam for “Managing Electronic Records in Archives & Special Collections,” a workshop presented as part of SAA’s Digital Archives Specialist program. With my new certificate of continuing education in hand, I wonder how much I should/could participate in the DAS program. I have been watching the development of the program with great interest, particularly the cost, expected completion timeline, and who the experts would be. I signed up for the course and ventured up to Pasadena for a two-day workshop with Seth Shaw and Nancy Deromedi.

Erica Boudreau has a good summary of the workshop as taught by Tim Pyatt and Michael Shallcross on her blog, so I will try not to repeat too much here. Of interest to those looking to learn more about e-recs are the Bibliography and the pre-readings, which consisted of several pieces from the SAA Campus Case Studies website. We were asked to read Case 2, “Defining and Formalizing a Procedure for Archiving the Digital Version of the Schedule of Classes at the University of Michigan” by Nancy Deromedi, and Case 13, “On the Development of the University of Michigan Web Archives: Archival Principles and Strategies” by Michael Shallcross, as well as “Guarding the Guards: Archiving the Electronic Records of Hypertext Author Michael Joyce” by Catherine Stollar.

On the first day, the instructors discussed electronic “recordness,” authenticity/trust, the OAIS and PREMIS models, advocacy, and challenges, and reserved time for the participants to break into groups to discuss the three case studies. On the second day, we dove into more practical applications of e-records programs, in particular a range of workflows. One of the takeaway messages was simply to focus on doing something rather than waiting for a comprehensive solution that can handle every variety of e-record. Seth displayed a Venn diagram he revealed at SAA this year, which separates “fast,” “good,” and “cheap” into three bubbles — any two can overlap, but never all three. For example, your workflow can be cheap and good but not fast, or good and fast but not cheap, et cetera.

Seth and Nancy illustrated a multi-step workflow using a checksum creator (the example used was MD5sums), Duke’s DataAccessioner for migration and checksums along with its JHOVE and DROID plugins, WinDirStat for visual analysis of file contents, and FTK Imager for forensics. They also discussed Archivematica for ingest and description, which still seems buggy, and web archiving using tools such as Archive-It, the CDL’s Web Archiving Service, and HTTrack. Perhaps the most significant thing I learned was about the use of digital forensics programs like FTK Imager, as well as the concept of a forensic write blocker, which essentially prevents files on a disk/USB drive from being changed during transfer. Digital forensics helps us see hidden and deleted files, which allows us to provide a service to records creators — recovering what was thought lost — and to create a disk image that emulates the original disk environment. Also shared: Peter Chan at Stanford put up a great demo on YouTube of how to process born-digital materials using AccessData FTK. It was helpful to see the tools I have been reading about actually demonstrated.
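To make the fixity piece of a workflow like this concrete, here is a minimal checksum-manifest sketch — my own illustration, not a tool demonstrated at the workshop — using only Python’s standard library; the accession directory name is a placeholder:

```python
# Walk an accession directory, compute an MD5 checksum for every file, and
# write a manifest that can be re-verified after transfer. Standard library only.
import hashlib
import os

ACCESSION_DIR = "accession_2011_042"  # placeholder accession folder

def md5_of(path, chunk_size=1024 * 1024):
    """Return the MD5 hex digest of a file, read in chunks to spare memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

with open("manifest-md5.txt", "w") as manifest:
    for root, _dirs, files in os.walk(ACCESSION_DIR):
        for name in files:
            path = os.path.join(root, name)
            manifest.write(f"{md5_of(path)}  {path}\n")
```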

Our cohort briefly discussed UC Irvine’s “virtual reading room,” which is essentially a way for researchers to access born-digital content in a reading room environment using DSpace, through a combination of an application process and a limited user access period. Our rules of use are also posted. I have a lot of thoughts about how this may change or improve over time as we continue to receive and process born-digital papers and records — when we are doing less arrangement and better summarization/contextualization/description, how can we create a space for researchers to access material with undetermined copyright status? What will the “reading room” look like in the future?

Our digital projects specialist and I attended the workshop together, and I think we found some potential services and programs that could help us with our born-digital records workflow. Above all, it was helpful to see and hear about the tools being developed and to get experienced perspectives on what has been working at Duke and Michigan. I enjoyed the review of familiar concepts as well as the demonstrations of unfamiliar tools, and I could see myself enrolling in future DAS courses.

The certificate program includes an option to test out of the four Foundational courses, at $35 a pop. If I choose to complete the program, it must be done within 2 years, with a comprehensive exam ($100) taken within 5 months of finishing the required courses. Some people are cherry-picking from the curriculum, choosing only the courses most relevant to their work. I think a DAS certification could help train and employ future digital archivists (or, in my mind, archivists in general — since we’ll all be doing this type of work) and may create a “rising tide lifts all ships” situation in our profession. While there is a risk of a certification craze meant for the financial gain of the organization, I was grateful to learn from experienced archivists in a structured setting. There’s something to be said for standards in education in our profession. I hope that DAS will raise the standard for (digital) archivists.

23 Nov 2010

Sharing MARC from Archivists’ Toolkit

A few weeks ago, I shared an excited tweet with the archives twitterverse announcing that I had successfully tested importing a MARC record from Archivists’ Toolkit into WorldCat. The tweet garnered more attention than I had anticipated, including a few direct messages from fellow archivists wanting to know how we came up with a solution to the MARC from AT problem. Here is what we did.

The problems with the MARCXML exported from AT are few but significant. My colleague Mark Custer at ECU recently posted a question to the AT user group listserv about the fact that AT does not currently allow subfields for subject headings, so the MARC from AT is missing the subfield delimiters. I set up a meeting with a cataloger at my library to review the MARCXML files exported from AT and get her thoughts on whether the records could be considered complete. We took a look at MARC records for archival material already in WorldCat and compared them to what we exported from AT. She identified the issues that would prevent proper sharing of the MARC with our local catalog and WorldCat:

  • Missing fixed fields including Ctrl, Desc, and Date (if no date range was included in the finding aid)
  • Missing subject heading subfield delimiters
  • 650 used instead of 600 field in some instances
  • Missing indicators for 245 (and 545, optional)
  • Missing cataloging source for 049 and 040

Because the MARC exported from AT is in MARCXML format and our catalogers work with the MRC format, we used MARCedit to convert the record from MARCXML to MRC. Once these missing and erroneous elements were fixed using MARCedit, we were ready to test import the record. Our library’s account with OCLC Connexion accepts imported records in DAT format, so we saved the MRC file as a DAT file. We tried uploading to Connexion using local bibliographic import and were successful. We determined that it would probably be easier to edit the MARC directly in Connexion, so we will do that in the future. The cataloger and I decided to upload the file to WorldCat as an official record, which worked, as well as to our local catalog, which also worked!

One issue for my library is that our finding aids are missing subject terms and authority work that most catalogers would require for submission to WorldCat. We have started incorporating this cataloger into our processing workflow and introduced her to the Names and Subjects modules in AT so that she can finalize subject headings and names that we assign. We can also consider an automated batch update for all our exported MARCXML to include the edits listed above, incorporating help from our technology team and their knowledge of FTP and scripting. In the meantime, we will be submitting our MARC one at a time since our finding aids are incomplete.
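As a rough sketch of what that batch step might look like — assuming the pymarc library (4.x-style subfield syntax) and placeholder file names, not our actual script — the MARCXML-to-MARC conversion and one of the mechanical fixes could be handled in a few lines, leaving the judgment calls flagged for the cataloger:

```python
# Hypothetical batch sketch: convert MARCXML exported from AT to binary MARC
# and apply one mechanical fix, flagging headings that need a cataloger's review.
from pymarc import MARCWriter, Field, parse_xml_to_array

OCLC_SYMBOL = "XXX"  # placeholder cataloging source code

records = parse_xml_to_array("at_export.xml")      # MARCXML exported from AT
writer = MARCWriter(open("at_export.mrc", "wb"))   # binary MARC output

for record in records:
    # Add a cataloging source (040) if the export left it out
    if not record.get_fields("040"):
        record.add_ordered_field(Field(
            tag="040", indicators=[" ", " "],
            subfields=["a", OCLC_SYMBOL, "c", OCLC_SYMBOL]))
    # Flag headings that still need human judgment (650 vs. 600, missing
    # subfield delimiters) rather than guessing at them automatically
    for field in record.get_fields("650"):
        print(record.title(), "-> review heading:", field.value())
    writer.write(record)

writer.close()
```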

Here’s a recap of our tentative workflow, for your information:

  • Open MARCedit, then Tools
  • Choose MARCXML file as input file
  • Give the program an output file name (copy and paste the input file path and change the extension to .mrc)
  • Select MARC21XML to MARC plus Translate to MARC8
  • Select Execute
  • Open OCLC Connexion
  • Import records; browse to .mrc file
  • Edit directly in OCLC Connexion
  • Update fixed fields including Ctrl, Desc, and Date
  • Change 650 to 600 when necessary
  • Add subfield delimiters to subject headings
  • Add indicators to 545, 245 as needed
  • Add cataloging source to 040 and 049
  • Save and validate
  • Log in to OCLC, then select Action > Holdings > Update Holdings to load the record directly to WorldCat

Thoughts, comments, ideas, and suggestions are gratefully welcomed! I am really curious to know how others approach this issue.

02 Nov 2010

Creating a processing guide

I learned much about the standards of archival processing while I was a fellow at the Center for Primary Research and Training at UCLA. While there, I processed the papers of art critic Jules Langsner, the papers of activist and scholar Josephine Fowler, and the pop culture collection of Middle Eastern Americana created by Jonathan Friedlander. Perhaps most important for my professional development, however, was the training I received from CFPRT Coordinator Kelley Wolfe Bachli, who wrote a succinct and informative processing manual to train each CFPRT fellow.

I brought this training manual with me to North Carolina, and with Kelley’s permission I incorporated her work with the standards used at my institution, DACS, and the Archivists’ Toolkit User Manual. The result? The Archival Processing Guide for Staff, Students, and Volunteers. I also included the chapters about processing and the over-the-shoulder look at processing from Michael J. Fox and Peter L. Wilkerson’s Introduction to Archives, now available free online.

The guide and its rules are constantly under review but I think this would be a great starting resource for any archives or special collections repository looking for some standards for training staff, students, and volunteers about the basics of archival processing. Comments are welcome!

17 Aug 2010

Reflections: SAA 2010 in Washington DC

*Portions of this post are duplicated at the WFU ZSR Professional Development blog.

This has been my favorite SAA of the three I have attended, mostly because I felt like I had a purpose and specific topics to explore there. The TwapperKeeper archive for #saa10 is available and includes a ton of great resources. I also got the chance to have my curriculum vitae reviewed at the Career Center not once, but twice! I loved every moment of being in DC and will definitely be attending more of the receptions/socials next time!

Tuesday, August 10 was the Research Forum, in which I took part as a poster presenter. My poster featured the LSTA outreach grant given to my library and the local public library and explored outreach and instruction to these “citizen archivists.” I got a lot of encouraging feedback and questions about our project, including an introduction to the California Digital Library’s hosted instances of Archivists’ Toolkit and Archon, which smaller repositories in the state use to post their finding aids.

Wednesday, August 11 consisted primarily of round table meetings, including the highly anticipated meeting of the Archivists’ Toolkit/Archon Round Table. The development of ArchivesSpace, the next-generation archives management tool intended to replace AT and Archon, was discussed; development is planned to begin in early 2011. Jackie Dooley from OCLC announced that results from a survey of academic and research libraries’ special collections departments would be released. A few interesting findings:

  • Of the 275 institutions surveyed, about one-third use Archivists’ Toolkit; 11% use Archon
  • 70% have used EAD for their finding aids
  • About 75% use word processing software for their finding aids
  • Less than 50% of institutions’ finding aids are online

A handful of brief presentations from AT users followed, including one from Nancy Enneking at the Getty. Nancy demonstrated the use of reports in AT for creating useful statistics that document processing, accessioning, and other aspects of staff work with special collections. She mentioned that AT’s MySQL database can be linked to Access as another way to work with its statistics. Corey Nimer from BYU discussed the use of plug-ins to supplement AT, which I have not yet used and hope to implement.
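For what it’s worth, the same kinds of numbers can also be pulled straight from the backend with a short script. This is only a hypothetical sketch — the table and column names are placeholders, not the actual AT schema — assuming Python with the pymysql library:

```python
# Hypothetical sketch: count accessions per month from the AT MySQL backend.
# Table and column names are placeholders and will differ in a real installation.
import pymysql

conn = pymysql.connect(host="localhost", user="atuser",
                       password="secret", database="at_db")
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT DATE_FORMAT(accessionDate, '%Y-%m') AS month,  -- placeholder column
                   COUNT(*) AS accessions
            FROM Accessions                                       -- placeholder table
            GROUP BY month
            ORDER BY month
            """
        )
        for month, count in cur.fetchall():
            print(month, count)
finally:
    conn.close()
```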

Perhaps more interestingly, Marissa Hudspeth from the Rockefeller Archive Center and Sibyl Schaefer from the University of Vermont introduced their development of a reference module in AT, which would allow patron registration, use tracking, duplication requests, personal user accounts, et cetera. Although there is much debate in the archives community about whether this is a good use of AT (since it was originally designed for description/content management of archives), parts of the module should be released in Fall 2010. They said they’d post a formal announcement on the ATUG listserv soon.

On Thursday, August 12, sessions began bright and early. I started the day with Session 102: “Structured Data Is Essential for Effective Archival Description and Discovery: True or False?” Overall summary: usability studies, tabbed finding aids, and photos in finding aids are great! While the panel concluded that structured data is not essential for archival description and discovery due to search tools, Noah Huffman from Duke demonstrated how incorporating more EAD into MARC as part of their library’s discovery layer resulted in increased discovery of archival materials.

Session 201 included a panel of law professors and copyright experts, who gave an update on intellectual property legislation. Peter Jaszi introduced the best practices and fair use project at the Center for Social Media, a five-year effort to analyze best practices for fair use. Their guidelines for documentary filmmakers could be used as an example for research libraries. In addition, the organization also created a statement of best practices for fair use of dance materials, hosted at the Dance Heritage Coalition. Mr. Jaszi argued that Section 1201 is not copyright proper but what he called “para-copyright” law, which cultural heritage institutions can maneuver around for fair use. I was also introduced to Peter Hirtle’s book about copyright (also available as a free download), Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums, which I have started to read.

I wandered out of Session 201 into Session 209, “Archivist or Educator? Meet Your Institution’s Goals by Being Both,” which featured archivists who teach. The speakers emphasized the study of how students learn as the core of becoming a good teacher. One recommendation included attending a history or social sciences course in order to see how faculty/teachers teach and how students respond. I was inspired to consider faculty themes, focuses, and specialties when thinking about how to reach out to students.

Around 5:30 pm, the Exhibit Hall opened along with the presentation of the graduate student poster session. I always enjoy seeing the work of emerging scholars in the archival field, and this year was no different. One poster featured the Philadelphia Area Consortium of Special Collections Libraries in a CLIR-funded project to process hidden collections in the Philadelphia region — not those within larger repositories, but within smaller repositories without the resources or means to process and make available their materials. The graduate student who created the poster served as a processor, traveling to local repositories and communicating her progress and plan to a project manager. This is an exciting concept, since outreach grants tend to focus on digitization or instruction, not the act of physically processing the archival materials or creating finding aids.

On Friday, August 13, I started the morning with Session 308, “Making Digital Archives a Pleasure to Use,” which ended up focusing on user-centered design. User studies at the National Archives and WGBH Boston found that users preferred annotation tools, faceted searching, and filtered searching. Emphasis was placed on an iterative approach to design: prototype, feedback, refinement.

I headed afterward to Session 410, “Beyond the Ivory Tower: Archival Collaboration, Community Partnerships, and Access Issues in Building Women’s Collections.” The panel, while focused on women’s collections, explored collaborative projects in a universally applicable way. L. Rebecca Johnson Melvin from the University of Delaware described the library’s oral history project to record Afra-Latina experiences in Delaware. They found the Library of Congress’ Veterans’ History Project documentation useful for the creation of their project in order to reach out to the Hispanic community of Delaware. T-Kay Sangwand from the University of Texas, Austin, described how the June L. Mazer Lesbian Archives were processed and digitized, then stored at UCLA. Ms. Sangwand suggested that successful collaborations build trust and transparency, articulate expectations from both sides, include stakeholders from diverse groups, and integrate the community into the preservation process. One speaker noted that collaborative projects are “a lot like donor relations” in the sense that you have to incorporate trust, communications, and contracts in order to create a mutually-beneficial result.

On Saturday, August 14, I sat in on Session 502, “Not on Google? It Doesn’t Exist,” which focused on search engine optimization and findability of archival materials. One thing to remember: Java is evil for cultural heritage because it cannot be searched. The session was a bit introductory in nature, but I did learn about a new resource called Linkypedia, which shows how Wikipedia and social media interact with cultural heritage websites.

Then I headed to Session 601, “Balancing Public Services with Technical Services in the Age of Basic Processing,” which featured the use of More Product, Less Process, aka “basic processing,” in order to best serve patrons. After a few minutes I decided to head over to Session 604, “Bibliographic Control of Archival Materials.” The release of RDA and the RDA Toolkit (available free until August 30) has opened up the bibliographic control world to the archival world in new ways. While much of the discussion was outside of my area of knowledge (much was discussed about MARC fields), I learned that even places like Harvard have issues with cross-referencing different types of resources that use different descriptive schemas.

My last session at SAA was 705, “The Real Reference Revolution,” which was an engaging exploration of reference approaches for archivists. Multiple institutions use Google Calendar for student hours, research appointments, and special hours. One panelist suggested having a blog where students could describe their work experience. Rachel Donahue described what she called “proactive reference tools,” such as Zotero groups for adding new materials from your collection and sharing them with interested researchers, and Google FeedBurner.

It was a whirlwind experience and I left feeling invigorated and ready to tackle new challenges and ideas. Whew!

25 May 2010

Digitization policies: drafts

In a few weeks, I will have been in my position here for four months. If there is one project that I hope to complete before the end of my first year, it is to successfully create a sustainable digitization process for our library!

With feedback from the digital/web librarian who attempted to create a digitization policy about two years ago and a lot of reading, I created four documents to get our digitization “task force” talking about our project process. These documents, in draft form, are as follows:

  • Digital Collection Development Policy: This document is modeled after the original policy document. It describes types of digitization projects, defines a “digitization advisory group” that decides what projects to do and who will be part of the projects, as well as project selection criteria.
  • Digital Project Life Cycle: This document describes the process of identifying and implementing a digital project. Team roles are described, as well as technical and metadata specs (still in development).
  • Digitization Project Proposal: This is a very short form that groups can fill out to propose a digital project to the “digitization advisory group.”
  • Project Proposal Checklist: This is the checklist that the “digitization advisory group” would use to help the group decide on and prioritize digitization projects. Adapted from Syracuse University Library’s “Digital Library Project Proposal Checklist.”

There are other forms and policies, such as a work order submission form and copyright research policy — I have some great guidance from the Society of Georgia Archivists’ Forms Forum, which has a lot of excellent examples. Some of the other resources I consulted and adapted include:

For me, the development policy and life cycle documents are the most important. Once our “task force” comes to agreement on these documents, they can serve as the backbone for our projects, as well as evidence that we all support a long-term, collaborative digitization effort. Feedback and suggestions are welcome. Thank you for reading!

As an unrelated note, Touchable Archives is the blog of the month for May 2010 at Simmons’ GSLIS!

11 May 2010

Who cares about learning EAD?

Matt (@herbison) over at Hot Brainstem posted a good question to his blog: “Can you skip learning EAD and go right to Archivists’ Toolkit or Archon?” He suggests that the “right way” to create accessible finding aids (EAD, DACS, XML, XSLT, and AT) is not as important as finding a (faster) way to get stuff online. First, I want to say thanks to him for bringing this question to the table.

I was not trained to create EAD finding aids in grad school (although I have experience with XML and HTML). Instead, I was trained to create EAD-compatible MS Word docs that were plopped into an EAD template by an encoder and sent over to the OAC. For me, AT was not part of the process of creating a finding aid.

In my current job, I’m working with old EAD files that were outsourced and tied to a problematic stylesheet (they referenced JPG files and included HTML color codes). I imported these old EAD files into AT — minor editing was needed, but nothing that made me reference the EAD tag library. I have yet to create one from “scratch,” although I did recently attend the basic EAD workshop through SAA. I can now search and edit the contents of our existing finding aids (all 450+ of them) and create new ones within the AT interface…and with less opportunity for human error.

I am moving toward the idea of going straight to AT for EAD since it exports “good” EAD (from what I have seen so far). I am going to train our grad students and library assistants on how to use AT for accessions and basic processing…why would I need to teach them EAD? I am still in the process of answering that question because we are working on a new stylesheet for our finding aids — which means I need to learn more about XSLT. AT might give me a nice EAD document, but it doesn’t make it look pretty online for me.
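That last step — turning the EAD that AT exports into something presentable — is what an XSLT stylesheet handles. Here is a minimal sketch, assuming Python with the lxml library; the EAD file and stylesheet names are placeholders, not our actual files:

```python
# Apply an XSLT stylesheet to an EAD file exported from AT and save the HTML.
# Assumes lxml; file names are placeholders.
from lxml import etree

ead = etree.parse("ms001_from_at.xml")                   # EAD exported from AT
transform = etree.XSLT(etree.parse("finding_aid.xsl"))   # display stylesheet
html = transform(ead)

with open("ms001.html", "wb") as out:
    out.write(etree.tostring(html, pretty_print=True, method="html"))
```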

AT experts like Sibyl (@sibylschaefer) and Mark (@anarchivist) are right when they suggest that an understanding of EAD is important when you need to do stuff with the EAD that AT exports. Just being aware of elements and the tag library helps me “read” an EAD document…and hopefully, it will help me create better, more beautiful finding aids through stylesheets that interact with the data in functional, interactive ways.

So I suppose the question to consider is, “how much do you need to learn about EAD in order to go right to AT or Archon?”

04 May 2010

Creating a digitization task force

Just over a month ago, I asked my colleagues at the NC Digital Collections Collaboratory about ways to formulate a digital collections program at my institution. I got some great feedback, and this morning I was able to wrangle together the eight very important technology, metadata, and special collections staff members who could create a sustainable digitization “task force.”

I was fairly nervous about my attempt to gain consensus among this mixed, highly trained, busy group. Without a director of special collections, our ragtag task force became more of a brainstorming session. I brought everyone a copy of Suzanne Preate’s “Digital Project Life Cycle” slide from the 2009 NNYLN conference and allowed for a little storytelling about the history of efforts to create a digital collections program. Once everyone had a chance to express their past frustrations and concerns, we began to ponder the idea of a digital collections process that would work for our institution.

Everyone immediately agreed that special collections alone should have final say about what is selected for digitization, since our staff should have the best idea of what is in our collections. I mentioned that our manuscript collections are not processed to the point where potential digital projects could be created, but our rare books librarian could likely make decisions about rare books that could be digitized. At the same time, everyone wanted to be a part of the creation of a digital collection development policy (also known as selection criteria), which was a relief. I was asked to draft the policy and email the group for feedback and suggestions.

The remainder of the meeting was spent discussing issues with post-production, such as user interface and what the tech team called the “discovery layer” for DSpace. It turns out there is a possibility of creating a new portal for digital collections that pulls from DSpace, without having to use the standard DSpace interface templates. Basically, DSpace and Encompass are the databases, and our new digital portal and VuFind (our catalog) will be the discovery layers. I am still learning about this. Our head tech programmer mentioned that we could use VuFind or a blog (catablog?) as our special collections interface, with MARC records mapped from Dublin Core records that are in DSpace. Of course, this would not work with our finding aids, since the majority of the information therein would not be fully searchable as a MARC record. Our tech team asked special collections to send examples of best practices of how a DSpace portal could look (I did not find many good examples online) as well as any examples we could find of interfaces that may have DSpace as a backend (this is in the works).
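As a rough illustration of the Dublin Core–to–MARC mapping idea (my own sketch, not our actual crosswalk), assuming the pymarc library with 4.x-style subfield syntax and made-up metadata:

```python
# Map a few Dublin Core elements from a hypothetical DSpace item to MARC fields.
# Assumes pymarc (4.x-style subfields); the mapping and data are illustrative only.
from pymarc import Record, Field

dc_item = {  # made-up Dublin Core metadata as it might come out of DSpace
    "title": "Student scrapbook, 1921-1925",
    "creator": "Doe, Jane",
    "date": "1925",
}

marc = Record()
marc.add_field(Field(tag="100", indicators=["1", " "],
                     subfields=["a", dc_item["creator"]]))
marc.add_field(Field(tag="245", indicators=["1", "0"],
                     subfields=["a", dc_item["title"]]))
marc.add_field(Field(tag="260", indicators=[" ", " "],
                     subfields=["c", dc_item["date"]]))

with open("dspace_item.mrc", "wb") as out:
    out.write(marc.as_marc())
```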

We then turned back to the need for a project process. Our copyright expert librarian chimed in to mention a need to document efforts to determine copyright status for orphaned and unpublished works. She urged us to consider creating a standard rights statement for our digital objects. I gave her a copy of the “Well-intentioned practice for putting digitized collections of unpublished materials online” document shared in the recent post from hangingtogether.org entitled “Archivists: be bold and do your job.”

We closed with a few goals in mind: meet again in June after our visit to ECU’s Digital Collections team, and for me to draft a digital collection development policy/selection criteria. My initial thoughts? While disorganized, our meeting established our group’s commitment to a long-term digitization program. We will need to work on a project life cycle of our own in the near future.

24 Mar 2010

Joining the NC Digital Collections Collaboratory

I’m the newest contributor to the NC Digital Collections Collaboratory! Check out my premiere post about the challenges of creating a digital collections program in my new job. Please leave comments or suggestions — thanks!