Posts Tagged ‘ead’

05 Apr 11

Society of NC Archivists meeting: Morehead City

Most of this post is duplicated on the Professional Development blog at my institution.

While many of my colleagues were in Philadelphia for ACRL, I traveled east to the coast of North Carolina for the joint conference of the Society of North Carolina Archivists and the South Carolina Archival Association in Morehead City.

After arriving on Wednesday around dinnertime with my carpooling partner Katie (Archivist and Special Collections Librarian at Elon), we met up with Gretchen (Digital Initiatives Librarian at ECU) for dinner at a seaside restaurant and discussion about digital projects and, of course, seafood.

On Thursday, the conference kicked off with an opening plenary from two unique scholars: David Moore of the NC Maritime Museum talked about artist renditions of Blackbeard, Stede Bonnet, and other pirates, as well as archival research that helped contextualize these works; Ralph Wilbanks of the National Underwater and Marine Agency detailed his team’s discovery of the H.L. Hunley submarine, its Civil War-era crew still entombed inside.

Session 1 on Thursday, succinctly titled “Digital Initiatives,” highlighted important work being done at the Avery Research Center for African American History and Culture at the College of Charleston, UNC Charlotte, and ECU. Amanda Ross and Jessica Farrell from the College of Charleston described the challenges and successes of digitizing material culture, namely slave artifacts and African artwork in their collections. Of primary importance was maintaining the color and shape fidelity of 3-D objects, which they managed economically with two fluorescent clamp lights, a Nikon D80 with an 18-200 mm Quantaray lens (although they recommend a macro lens), a tripod, and a $50 roll of heavy white paper. Their makeshift lab and Dublin Core metadata project resulted in the Avery Artifact Collection within the Lowcountry Digital Library. Kristy Dixon and Katie McCormick from UNC Charlotte spoke about the need for strategic thinking and broad collaboration in special collections and archives today, in particular creating partnerships with systems staff and technical services staff. They noted that with the reorganization of their library, six technical services librarians/staff were added to their special collections department!

Finally, Mark Custer and Jennifer Joyner from ECU explored the future of archival description with a discussion of ECU’s implementation of EAC-CPF, essentially authority records for the creators of archival materials. Mark found inspiration in SNAC, the Social Networks and Archival Context project (a project of UVa and the California Digital Library), to incorporate and create names for their archival collections. Mark extracted names from their EAD files, used Google Refine’s cluster-and-edit feature to reconcile them, grabbed URLs through VIAF and WorldCat Identities, and hopes to share the resulting authority records with SNAC. Mark clarified the project, saying:

Firstly, we are not partnered with anyone involved in the excellent SNAC project. Instead, we decided to undertake a smaller, SNAC-like project here at ECU (i.e., we mined our EAD data in order to create EAC records). To accomplish this, I wrote an XSLT stylesheet to extract and clean up our local data. Only after working through that step did we then import this data into Google Refine. With Refine, we did a number of things, but the two things discussed in our presentation were: 1) cluster and edit our names with the well-established, advanced algorithms provided in that product 2) grab more data from databases like WorldCat Identities and VIAF without doing any extra scripting work outside of Google Refine.

Secondly, we haven’t enhanced our finding aid interface at all at this point. In fact, we’ve only put in a few weeks’ worth of work into the project so far, so none of our work is represented online yet. The HTML views of the Frances Renfrow Doak EAC record that we demonstrated were created by an XSLT stylesheet authored by Brian Tingle at the California Digital Library. He has graciously provided some of the tools that the SNAC project is using online at: https://bitbucket.org/btingle/cpf2html/.

Lastly, these authority records have stayed with us; mostly because, at this point, they’re unfinished (e.g., we still need to finish that clustering step within Refine, which requires a bit of extra work). But the ultimate goal, of course, is to share this data as widely as possible. Toward that end, I tend to think that we also need to be curating this data as collaboratively as possible.
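For the curious, here is a rough sketch of what that extraction-and-cleanup step might look like — not ECU’s actual workflow (Mark wrote an XSLT stylesheet; this sketch uses Python), and the folder and file names are made up:

    # Pull candidate names out of a folder of EAD 2002 finding aids and write
    # them to a CSV that Google Refine can open for clustering.
    # Assumes non-namespaced EAD 2002 files; paths are placeholders.
    import csv
    import glob
    import xml.etree.ElementTree as ET

    NAME_TAGS = ("persname", "corpname", "famname")

    rows = []
    for path in glob.glob("ead/*.xml"):
        tree = ET.parse(path)
        for tag in NAME_TAGS:
            for el in tree.iter(tag):
                # Flatten mixed content and normalize internal whitespace.
                name = " ".join("".join(el.itertext()).split())
                if name:
                    rows.append({"file": path, "type": tag, "name": name})

    with open("names-for-refine.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "type", "name"])
        writer.writeheader()
        writer.writerows(rows)

From there, the CSV goes into Google Refine for the cluster-and-edit step Mark describes.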

The final session of the day was the SNCA Business Meeting, where I gave my report as the Archives Week Chair. That evening, a reception was held to celebrate SNCA’s award winners and give conference attendees the opportunity to take a behind-the-scenes tour of the NC Maritime Museum. Lots of fun ensued during the pirate-themed tours, and I almost had enough energy to go to karaoke with some other young archivists.

On Friday, I moderated the session entitled “Statewide Digital Library Projects,” with speakers Nick Graham from the NC Digital Heritage Center and Kate Boyd from the SC Digital Library. The session highlighted interesting parallels and differences between the two statewide initiatives. Kate Boyd explained that the SCDL is a multisite project nested in multiple universities with distributed “buckets” for description and digitization. Their project uses a multi-host version of CONTENTdm, with some projects hosted and branded specifically for certain regions and institutions. Users can browse by county, institution, and date, and the site includes teacher-created lesson plans. The “About” section includes scanning and metadata guidelines; Kate mentioned that the update to CONTENTdm 6 would improve zooming and the expand/reduce views of their digital objects. Nick Graham gave a brief background on the formation of the NCDHC, including NC ECHO and its survey and digitization guidelines. He explained that the NCDHC has minimal selection criteria: an item needs only a title and no copyright or privacy concerns. The NCDHC displays its digital objects through a single instance of CONTENTdm. Both programs are supported by a mix of institutional and government funding, and both speakers emphasized the value of word-of-mouth marketing and shared branding for better collaborative efforts.

Later that morning, I attended a session on “Collaboration in Records Management.” Jennifer Neal of the Catholic Diocese of Charleston Archives gave an interesting presentation about the creation of a records management policy for her institution. Among the many reasons to begin an RM program, Jennifer noted that the legal reasons, both federal and state (and in her case, organizational rules), were likely the most important. She recommended piloting an RM program with an enthusiastic department, as well as recruiting a friendly department liaison with organizational tendencies. Jennifer came up with “RM Fridays” as a predetermined method for making time to sort, shred, organize, and inventory the materials for her pilot department. Her metrics were stunning: 135 record cartons were destroyed and 245 were organized and sent off-site. Kelly Eubank from the NC State Archives explained how the state archives uses Archive-It to harvest social media sites and websites of government agencies and officials. She then briefly explored their use of BagIt to validate geospatial files as part of their GeoMAPP project.
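For those unfamiliar with BagIt, here is a minimal sketch of the kind of validation step Kelly described, using the Library of Congress’s bagit-python package; the directory name is hypothetical, and this is not the State Archives’ actual GeoMAPP tooling:

    # Validate an existing "bag" of geospatial files (pip install bagit).
    import bagit

    bag = bagit.Bag("transfers/gis_2010_q3")   # hypothetical bag directory
    try:
        bag.validate()                         # verifies manifests and checksums
        print("Bag is complete and all checksums match.")
    except bagit.BagValidationError as err:
        for detail in err.details:             # file-level problems, if any
            print(detail)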

It was great to meet and network with archival professionals from both Carolinas and learn about some of the innovative and creative projects happening in their institutions. Right now I am thinking about EAC, collaboration with tech services, CONTENTdm, and records management.

23 Nov 10

Sharing MARC from Archivists’ Toolkit

A few weeks ago, I shared an excited tweet with the archives twitterverse announcing that I had successfully tested importing a MARC record from Archivists’ Toolkit into WorldCat. The tweet garnered more attention than I had anticipated, including a few direct messages from fellow archivists wanting to know how we came up with a solution to the MARC from AT problem. Here is what we did.

The problems with MARCXML exported from AT are few but significant. My colleague Mark Custer at ECU recently posted a question to the AT user group listserv about the fact that AT does not currently allow subfields for subject headings, so the MARC from AT is missing the subfield delimiters. I set up a meeting with a cataloger at my library to look at the MARCXML files being exported from AT and get her opinion on whether the records could be considered complete. We took a look at MARC for archival material already in WorldCat and compared it to what we exported from AT. She laid out the issues that she saw as preventing proper sharing of the MARC with our local catalog and WorldCat:

  • Missing fixed fields including Ctrl, Desc, and Date (if no date range was included in the finding aid)
  • Missing subject heading subfield delimiters
  • 650 used instead of 600 field in some instances
  • Missing indicators for 245 (and 545, optional)
  • Missing cataloging source for 049 and 040

Because the MARC exported from AT is in MARCXML format and our catalogers work with the binary MRC format, we used MarcEdit to convert the record from MARCXML to MRC. Once the missing and erroneous elements were fixed in MarcEdit, we were ready to test import the record. Our library’s account with OCLC Connexion accepts imported records in DAT format, so we saved the MRC file as a DAT file. We tried uploading to Connexion using local bibliographic import and were successful. We determined that it would probably be easier to edit the MARC directly in Connexion, so we will do that in the future. The cataloger and I decided to upload the file to WorldCat as an official record, which worked, as well as to our local catalog, which also worked!

One issue for my library is that our finding aids are missing subject terms and authority work that most catalogers would require for submission to WorldCat. We have started incorporating this cataloger into our processing workflow and introduced her to the Names and Subjects modules in AT so that she can finalize subject headings and names that we assign. We can also consider an automated batch update for all our exported MARCXML to include the edits listed above, incorporating help from our technology team and their knowledge of FTP and scripting. In the meantime, we will be submitting our MARC one at a time since our finding aids are incomplete.
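For the curious, here is a rough sketch of what such a batch update might look like, written against pymarc 4.x. The fixes mirror the bullet list above, but the file names, the 040 symbol, and the name-detection helper are placeholders — this is not a production script:

    # Batch-fix MARCXML exported from AT (pip install "pymarc<5").
    import pymarc

    def looks_like_personal_name(field):
        # Placeholder heuristic: an inverted "Last, First" form in subfield a.
        # A real script would need a cataloger's review for this call.
        value = field["a"]
        return value is not None and "," in value

    records = pymarc.parse_xml_to_array("at-export.xml")   # MARCXML from AT

    with open("fixed.mrc", "wb") as out:
        writer = pymarc.MARCWriter(out)
        for record in records:
            # Fixed fields: set Ctrl (Leader/08, 'a' = archival control);
            # Desc and Date would get similar treatment.
            record.leader = record.leader[:8] + "a" + record.leader[9:]

            # Add a cataloging source if it's missing (symbol is a placeholder).
            if not record.get_fields("040"):
                record.add_field(pymarc.Field(
                    tag="040", indicators=[" ", " "],
                    subfields=["a", "XXX", "c", "XXX"]))

            # Personal names miscoded as topical terms: 650 -> 600.
            for field in record.get_fields("650"):
                if looks_like_personal_name(field):
                    field.tag = "600"
                    field.indicators = ["1", "0"]

            writer.write(record)
        writer.close()

Each fix would still need a cataloger’s review before anything went to WorldCat.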

Here’s a recap of our tentative workflow, for your information:

  • Open MarcEdit, then open Tools
  • Choose the MARCXML file as the input file
  • Specify the output file name (copy and paste the input file path; change the extension to .mrc)
  • Select MARC21XML to MARC plus Translate to MARC8
  • Select Execute
  • Open OCLC Connexion
  • Import records; browse to .mrc file
  • Edit directly in OCLC Connexion
  • Update fixed fields including Ctrl, Desc, and Date
  • Change 650 to 600 when necessary
  • Add subfield delimiters to subject headings
  • Add indicators to 545, 245 as needed
  • Add cataloging source to 040 and 049
  • Save and validate
  • Log in to OCLC, then select Action > Holdings > Update Holdings to load the record directly to WorldCat

Thoughts, comments, ideas, and suggestions are gratefully welcomed! I am really curious to know how others approach this issue.

17 Aug 10

Reflections: SAA 2010 in Washington DC

*Portions of this post are duplicated at the WFU ZSR Professional Development blog.

This has been my favorite SAA of the three I have attended, mostly because I felt like I had a purpose and specific topics to explore there. The TwapperKeeper archive for #saa10 is available and includes a ton of great resources. I also got the chance to have my curriculum vitae reviewed at the Career Center not once, but twice! I loved every moment of being in DC and will definitely be attending more of the receptions/socials next time!

Tuesday, August 10 was the Research Forum, of which I was a part as a poster presenter. My poster featured the LSTA outreach grant given to my library and the local public library and explored outreach and instruction to these “citizen archivists.” I got a lot of encouraging feedback and questions about our project, including an introduction to the California Digital Library’s hosted instances of Archivists’ Toolkit and Archon, which smaller repositories in the state use to post their finding aids.

Wednesday, August 11 consisted primarily of round table meetings, including the highly anticipated meeting of the Archivists’ Toolkit/Archon Round Table. The development of ArchivesSpace, the next-generation archives management tool that will replace AT and Archon, was discussed; development is planned to begin in early 2011. Jackie Dooley from OCLC announced that results from a survey of academic and research libraries’ special collections departments will be released soon, and previewed a few interesting findings:

  • Of the 275 institutions surveyed, about one-third use Archivists’ Toolkit; 11% use Archon
  • 70% have used EAD for their finding aids
  • About 75% use word processing software for their finding aids
  • Less than 50% of institutions’ finding aids are online

A handful of brief presentations from AT users followed, including Nancy Enneking from the Getty. Nancy demonstrated the use of AT’s reports for generating statistics that document processing, accessioning, and other staff work with special collections. She mentioned that Microsoft Access can be linked to AT’s MySQL database as another way to work with statistics in AT. Cory Nimer from BYU discussed the use of plug-ins to supplement AT, which I have not yet used and hope to implement.

Perhaps more interestingly, Marissa Hudspeth from the Rockefeller Archive Center and Sibyl Schaefer from the University of Vermont introduced their development of a reference module in AT, which would allow patron registration, use tracking, duplication requests, personal user accounts, et cetera. Although there is much debate in the archives community about whether this is a good use of AT (since it was originally designed for description/content management of archives), parts of the module should be released in Fall 2010. They said they’d post a formal announcement on the ATUG listserv soon.

On Thursday, August 12, sessions began bright and early. I started the day with Session 102: “Structured Data Is Essential for Effective Archival Description and Discovery: True or False?” Overall summary: usability studies, tabbed finding aids, and photos in finding aids are great! While the panel concluded that structured data is not essential for archival description and discovery due to search tools, Noah Huffman from Duke demonstrated how incorporating more EAD into MARC as part of their library’s discovery layer resulted in increased discovery of archival materials.

Session 201 included a panel of law professors and copyright experts, who gave an update on intellectual property legislation. Peter Jaszi introduced the fair use best practices project at the Center for Social Media, a five-year effort to analyze best practices for fair use. Their guidelines for documentary filmmakers could serve as an example for research libraries. The organization also created a statement of best practices for fair use of dance materials, hosted at the Dance Heritage Coalition. Mr. Jaszi argued that Section 1201 is not copyright but what he called “para-copyright” law, which cultural heritage institutions can maneuver around for fair use. I was also introduced to Peter Hirtle’s book on copyright (also available as a free download), Copyright and Cultural Institutions: Guidelines for Digitization for U.S. Libraries, Archives, and Museums, which I have started to read.

I wandered out of Session 201 into Session 209, “Archivist or Educator? Meet Your Institution’s Goals by Being Both,” which featured archivists who teach. The speakers emphasized the study of how students learn as the core of becoming a good teacher. One recommendation included attending a history or social sciences course in order to see how faculty/teachers teach and how students respond. I was inspired to consider faculty themes, focuses, and specialties when thinking about how to reach out to students.

Around 5:30 pm, the Exhibit Hall opened along with the presentation of the graduate student poster session. I always enjoy seeing the work of emerging scholars in the archival field, and this year was no different. One poster featured the Philadelphia Area Consortium of Special Collections Libraries in a CLIR-funded project to process hidden collections in the Philadelphia region — not those within larger repositories, but within smaller repositories without the resources or means to process and make available their materials. The graduate student who created the poster served as a processor, traveling to local repositories and communicating her progress and plan to a project manager. This is an exciting concept, since outreach grants tend to focus on digitization or instruction, not the act of physically processing the archival materials or creating finding aids.

On Friday, August 13, I started the morning with Session 308, “Making Digital Archives a Pleasure to Use,” which ended up focusing on user-centered design. User studies at the National Archives and WGBH Boston found that users preferred annotation tools, faceted searching, and filtered searching. Emphasis was placed on an iterative approach to design: prototype, feedback, refinement.

I headed afterward to Session 410, “Beyond the Ivory Tower: Archival Collaboration, Community Partnerships, and Access Issues in Building Women’s Collections.” The panel, while focused on women’s collections, explored collaborative projects in a universally applicable way. L. Rebecca Johnson Melvin from the University of Delaware described the library’s oral history project to record Afra-Latina experiences in Delaware. They found the Library of Congress’ Veterans’ History Project documentation useful for the creation of their project in order to reach out to the Hispanic community of Delaware. T-Kay Sangwand from the University of Texas, Austin, described how the June L. Mazer Lesbian Archives were processed and digitized, then stored at UCLA. Ms. Sangwand suggested that successful collaborations build trust and transparency, articulate expectations from both sides, include stakeholders from diverse groups, and integrate the community into the preservation process. One speaker noted that collaborative projects are “a lot like donor relations” in the sense that you have to incorporate trust, communications, and contracts in order to create a mutually-beneficial result.

On Saturday, August 14, I sat in on Session 502, “Not on Google? It Doesn’t Exist,” which focused on search engine optimization and the findability of archival materials. One thing to remember: Java is evil for cultural heritage because search engines cannot index content rendered with it. The session was a bit introductory in nature, but I did learn about a new resource called Linkypedia, which shows how Wikipedia and social media interact with cultural heritage websites.

Then I headed to Session 601, “Balancing Public Services with Technical Services in the Age of Basic Processing,” which featured the use of More Product, Less Process, aka “basic processing,” in order to best serve patrons. After a few minutes I decided to head over to Session 604, “Bibliographic Control of Archival Materials.” The release of RDA and the RDA Toolkit (available free until August 30) has opened up the bibliographic control world to the archival world in new ways. While much of the discussion was outside of my area of knowledge (much was discussed about MARC fields), I learned that even places like Harvard have issues with cross-referencing different types of resources that use different descriptive schemas.

My last session at SAA was 705, “The Real Reference Revolution,” an engaging exploration of reference approaches for archivists. Multiple institutions use Google Calendar for student hours, research appointments, and special hours. One panelist suggested having a blog where students could describe their work experience. Rachel Donahue described what she called “proactive reference tools,” such as Zotero groups for sharing newly added materials from your collection with interested researchers, and Google FeedBurner.

It was a whirlwind experience and I left feeling invigorated and ready to tackle new challenges and ideas. Whew!

11 May 10

Who cares about learning EAD?

Matt (@herbison) over at Hot Brainstem posted a good question to his blog: “Can you skip learning EAD and go right to Archivists’ Toolkit or Archon?” He suggests that the “right way” to create accessible finding aids (EAD, DACS, XML, XSLT, and AT) is not as important as finding a (faster) way to get stuff online. First, I want to say thanks to him for bringing this question to the table.

I was not trained to create EAD finding aids in grad school (although I have experience with XML and HTML). Instead, I was trained to create EAD-compatible MS Word docs that were plopped into an EAD template by an encoder and sent over to the OAC. For me, AT was not part of the process of creating a finding aid.

In my current job, I’m working with old EAD files that were outsourced and tied to a problematic stylesheet (they referenced JPG files and included HTML color codes). I imported these old EAD files into AT — minor editing was needed, but nothing that made me reference the EAD tag library. I have yet to create one from “scratch,” although I did recently attend the basic EAD workshop through SAA. I can now search and edit the contents of our existing finding aids (all 450+ of them) and create new ones within the AT interface…and with less opportunity for human error.

I am moving toward the idea of going straight to AT for EAD since it exports “good” EAD (from what I have seen so far). I am going to train our grad students and library assistants to use AT for accessions and basic processing…why would I need to teach them EAD? I am still answering that question because we are working on a new stylesheet for our finding aids — which means I need to learn more about XSLT. AT might give me a nice EAD document, but it doesn’t make it look pretty online for me.
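Making it pretty is exactly what the stylesheet step does. As a minimal sketch (with placeholder file names), applying an XSLT stylesheet to the EAD that AT exports takes only a few lines with Python’s lxml:

    # Transform AT's exported EAD into an HTML finding aid (pip install lxml).
    from lxml import etree

    ead = etree.parse("collection-ead.xml")                # EAD exported from AT
    transform = etree.XSLT(etree.parse("findingaid.xsl"))  # your local stylesheet
    html = transform(ead)

    with open("collection.html", "wb") as out:
        out.write(etree.tostring(html, pretty_print=True, method="html"))

The hard part, of course, is writing the stylesheet itself.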

AT experts like Sibyl (@sibylschaefer) and Mark (@anarchivist) are right when they suggest that an understanding of EAD is important when you need to do stuff with the EAD that AT exports. Just being aware of elements and the tag library helps me “read” an EAD document…and hopefully, it will help me create better, more beautiful finding aids through stylesheets that interact with the data in functional, interactive ways.

So I suppose the question to consider is, “how much do you need to learn about EAD in order to go right to AT or Archon?”

04 May 10

Creating a digitization task force

Just over a month ago, I asked my colleagues at the NC Digital Collections Collaboratory about ways to formulate a digital collections program at my institution. I got some great feedback, and this morning I was able to wrangle together the eight very important technology, metadata, and special collections staff members who could create a sustainable digitization “task force.”

I was fairly nervous about my attempt to gain consensus among this mixed, highly trained, busy group. Without a director of special collections, our ragtag task force became more of a brainstorming session. I brought everyone a copy of Suzanne Preate’s “Digital Project Life Cycle” slide from the 2009 NNYLN conference and allowed for a little storytelling about the history of efforts to create a digital collections program. Once everyone had a chance to express their past frustrations and concerns, we began to ponder the idea of a digital collections process that would work for our institution.

Everyone immediately agreed that special collections alone should have final say about what is selected for digitization, since our staff should have the best idea of what is in our collections. I mentioned that our manuscript collections are not processed to the point where potential digital projects could be created, but our rare books librarian could likely make decisions about rare books that could be digitized. At the same time, everyone wanted to be a part of the creation of a digital collection development policy (also known as selection criteria), which was a relief. I was asked to draft the policy and email the group for feedback and suggestions.

The remainder of the meeting was spent discussing post-production issues, such as the user interface and what the tech team called the “discovery layer” for DSpace. It turns out there is a possibility of creating a new portal for digital collections that pulls from DSpace, without having to use the standard DSpace interface templates. Basically, DSpace and Encompass are the databases, and our new digital portal and VuFind (our catalog) will be the discovery layers. I am still learning about this. Our head tech programmer mentioned that we could use VuFind or a blog (a catablog?) as our special collections interface, with MARC records mapped from the Dublin Core records in DSpace. Of course, this would not work with our finding aids, since the majority of the information therein would not be fully searchable as a MARC record. Our tech team asked special collections to send examples of well-designed DSpace portals (I did not find many good examples online) as well as any examples we could find of interfaces that may have DSpace as a backend (this is in the works).
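To make that mapping idea concrete, here is an illustrative sketch of a Dublin Core-to-MARC crosswalk in Python with pymarc 4.x — the handful of fields and indicator choices below are common conventions, not our actual (still nonexistent) crosswalk:

    # Map a dict of Dublin Core values (as from DSpace) to a MARC record.
    import pymarc

    def dc_to_marc(dc):
        record = pymarc.Record()
        record.add_field(pymarc.Field(
            tag="245", indicators=["0", "0"],          # dc.title -> 245 $a
            subfields=["a", dc.get("title", "[untitled]")]))
        if "creator" in dc:                            # dc.creator -> 100 $a
            record.add_field(pymarc.Field(
                tag="100", indicators=["1", " "],
                subfields=["a", dc["creator"]]))
        if "description" in dc:                        # dc.description -> 520 $a
            record.add_field(pymarc.Field(
                tag="520", indicators=[" ", " "],
                subfields=["a", dc["description"]]))
        for subject in dc.get("subject", []):          # dc.subject -> 653 $a
            record.add_field(pymarc.Field(             # (uncontrolled term)
                tag="653", indicators=[" ", " "],
                subfields=["a", subject]))
        return record

    print(dc_to_marc({"title": "Sample digital object", "creator": "Doe, Jane"}))

As noted above, this kind of record works for item-level digital objects, not for finding aids.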

We then turned back to the need for a project process. Our copyright expert librarian chimed in to mention a need to document efforts to determine copyright status for orphaned and unpublished works. She urged us to consider creating a standard rights statement for our digital objects. I gave her a copy of the “Well-intentioned practice for putting digitized collections of unpublished materials online” document shared in the recent post from hangingtogether.org entitled “Archivists: be bold and do your job.”

We closed with a few goals in mind: meet again in June after our visit to ECU’s Digital Collections team, and for me to draft a digital collection development policy/selection criteria. My initial thoughts? While disorganized, our meeting established our group’s commitment to a long-term digitization program. We will need to work on a project life cycle of our own in the near future.

26 Jan 10

A new adventure

Malcolm Gladwell’s book Outliers theorizes about the origins of success, mostly through an exploration of cultural and generational factors. One of his strongest arguments for success is about luck, particularly the idea of being in the right place at the right time. I feel like one of those fortunate success stories right now.

Last week, I was offered and accepted a position as archivist and digital projects manager at Wake Forest University. Collaborative projects, networking, and prior experience led me to become the final candidate for this multi-faceted opportunity. I feel that I was in the right place at the right time. 

I look forward to starting this new adventure and returning to the world of academic special collections. My interview presentation focused on the how and why of archival processing. After exploring the basic concepts of archival processing, I explained what I feel is the reason for processing and digitization — ACCESS. Beyond good old-fashioned archival processing, I will get to work with Archivists’ Toolkit to create EAD finding aids, collaborate with the technology team to formulate a new digital collections interface, and create metadata. I’m sure there will be many more responsibilities and outlets for creativity.

Wake Forest is currently seeking an innovative, creative Director of Special Collections and University Archives, so I eagerly anticipate the arrival of our department’s advocate and visionary. For me, there is much expected, much to learn, and much to contribute.

16 Jan 10

Beautiful finding aids

Recently, I was presented with a challenge by a tech librarian. He asked me if I could think of any examples of special collections websites with appealing, user-friendly finding aids in EAD. One comment he made: “Archives seem to be the only places still doing a long narrative, like a printed document, on the web.”

My first response was to mention the Online Archive of California, but after that, I realized that my knowledge of visually appealing finding aid design and special collections websites was very limited.

The OAC is one of the first archival initiatives of its kind because it attempts to digitally collocate archival resources in the state of California. Finding aids there are discoverable not only through each repository’s website but also through Google, ArchiveGrid, and OCLC (including OAIster when appropriate). Of course, the appealing interface doesn’t hurt the possibility of user discovery. The finding aids (here’s an example) have more visual interest through the use of color blocks and links on the right side, as well as a sans-serif font. Perhaps the best part about a statewide interface? Consistency in design and usability.

The purpose of the site, however, is clear: to search finding aids (also referred to as collection guides). Digital content is tied to relevant collections with a small eyeball icon. Users can browse from A-Z and view brief collection descriptions. Overall the site has a clean interface with a simple purpose. The OAC’s collections are tied to the UC system’s Calisphere, which is a public- and educator-focused search site for over 150,000 digital objects (it also includes teacher modules for K-12). Both of these projects are powered by the California Digital Library.

Because my colleague was interested in EAD finding aids, I decided to start with SAA’s EAD Roundtable website. The site includes a list of early adopters of EAD, so I took a look at how creative some institutions were with representing their finding aids online.

My favorites so far?

Emory’s Manuscripts and Rare Books Library has a great search and browse interface. From the main page, users are informed that they can browse, search, and also search the catalog for resources. The database includes unprocessed collections, which is a pleasant surprise in the era of “hidden collections”. The finding aids themselves are visually interesting, with linked content, as well as icons for the PDF and printable versions (see the James D. Waddell papers for example).

Columbia University’s Archival Collections Portal searches both finding aids and digital content. I think this type of searching is natural for users, making it easier for users to access resources. The finding aids appear to be in a variety of formats depending on the collection, including HTML and PDF, but each record in the portal includes a descriptive summary and subject terms.

Both of these go against the typical left-side menu browsing of many EAD finding aids. My preferences made me realize that EAD mattered less to me than the overall visual appeal and ease of use of the finding aid itself. If we can do a full-text search of any text document, why are we doing complex EAD encoding? Why aren’t we just doing HTML? How about catablogs? The idea is that, like MARC, having standards can help researchers find similar resources.

I’m at the beginning of understanding the many reasons to use EAD, but already I find myself questioning it. Jeanne over at Spellbound Blog talked about the possibilities of simpler EAD finding aids in 2008, through the Utah State Historical Society’s next-generation version of the Susa papers. There’s the Jon Cohen AIDS Research Collection, which is a finding aid and digital collection. Then there is the famous Polar Bear Expedition collection of next-generation finding aids.

There seems to be a lot of overlap between finding aids and digital objects, which I’ve seen at Duke and East Carolina University, among others. Then there’s the movement to push our resources onto Flickr, Facebook, Twitter, etc. If repositories host their own finding aids and digital objects, they can repurpose and collocate them anywhere on the web, right?

I still don’t know if I have a good answer for my colleague. I know I have much to learn. I am curious to know…what’s is your favorite EAD finding aid site? The most beautiful finding aid site?