
Current developments in AI and what they mean for everyday use in archives

Anja Link takes a closer look at various contributions from the panels and identifies three key findings from the conference.

Insights from the Conference “Artificial Intelligence in Archives and Collections”

This blog post is part of a series on the conference at the Herder Institute in Marburg on 12 and 13 December 2024 and will highlight three overall findings.


AI needs to be co-created by humans – to do its work but also to supervise its ethical conduct

Questions of finding the happy medium between human and machine action were touched on by many speakers. While the call for co-creation may appear self-evident—after all, AI models are developed by humans—a closer look at the modelling and training phases shows the issue to be more complex. The complexity begins with data selection and extends through modelling and training to the actual implementation of AI systems. Furthermore, it touches on ethical questions of (energy) resource efficiency, bias and power. Ensuring meaningful human involvement at every stage is therefore not just a technical necessity, but an ethical imperative.

Data Selection and AI

>>> Mo von Bychelberg's overview of "Artificial Intelligence Technology's Influence on the Authenticity of Digital Intangible Cultural Inheritance" explored how AI introduces new dimensions of power and bias into archival practices—dimensions that, in fact, build upon long-standing issues in the field. Traditional dimensions of power and bias in archives cover questions of what is worth being archived as well as who decides what is being archived. In the era of digitisation, those dimensions extend to questions of what is worth being digitised and who decides on that. The emergence of AI technologies adds yet another layer: Which archives can afford to use or even create AI? Who has the human resources and expertise to work with off-the-shelf AI solutions? Will the staff be able to handle off-the-shelf AI? Can AI potentially address broader challenges, such as preserving the tacit knowledge of retiring archivists? Von Bychelberg argued that many of these issues—power, bias, access, and sustainability—converge in the context of AI. Before archives consider adopting AI technologies, these critical questions must be openly and thoroughly addressed.

Mo von Bychelberg, Uppsala University

AI model training phase

>>> In her presentation on "Dusting Off the Unlabelled Data: Graph Semi Supervised Learning for Large-Scale Datasets", Angelica I. Aviles-Rivero shared insights from her research on efficient image processing in training phases. Regardless of the model's ultimate goal—be it image classification, recognition, or generation—every image-based AI system begins with object detection and data labelling. Since AI models have limited generalisation capabilities, acquiring large datasets with accurate annotations is costly in terms of time, financial resources, and the need for expert knowledge. Reducing those costs requires a well-balanced combination of human expertise and machine-based interpretation of image data. Semi-supervised learning (SSL) strikes this balance: its central task is to replace part of the data with human knowledge and thus to train with less data. Aviles-Rivero's findings indicate that SSL can lead to improved model performance by replacing parts of the training data with human insight, thereby requiring fewer labelled examples.
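As a simple illustration of the general idea (not of Aviles-Rivero's specific method), the following sketch propagates a handful of expert-provided labels across a similarity graph using scikit-learn's LabelSpreading; the dataset and parameter choices are purely illustrative.

```python
# A minimal sketch of graph-based semi-supervised learning: only a small
# fraction of images is labelled by humans, and labels are propagated to the
# rest over a similarity graph.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import accuracy_score

digits = load_digits()                       # stand-in for archival image features
X, y_true = digits.data, digits.target

rng = np.random.RandomState(0)
y_partial = np.full_like(y_true, -1)         # -1 marks unlabelled samples
labelled = rng.choice(len(y_true), size=50, replace=False)  # ~3% expert labels
y_partial[labelled] = y_true[labelled]

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)                      # builds the graph and propagates labels

unlabelled = y_partial == -1
print("accuracy on unlabelled images:",
      accuracy_score(y_true[unlabelled], model.transduction_[unlabelled]))
```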

>>> Nicole Graaf explored the temporal, gender-based, and cultural dimensions of bias in AI-generated image metadata in her presentation "Between ritual and relief – when the computer squints: analysis of AI-based indexing in the Image Archive of the ETH Library". From an archivist's perspective, Graaf presented the results of a qualitative analysis of automatically tagged images. The images were labelled using Clarifai's general model and subsequently reviewed by subject experts—not only for semantic accuracy, but also for their potential to create or reinforce bias. As it turned out, automatic tagging works best with iconic, easily recognisable images that would likely be used to teach a child. In contrast, automatic tagging struggles significantly when it comes to abstract cultural or gender-related concepts. These findings underline two key points: a) AI-generated metadata must be critically evaluated by humans, and b) the ethical dimensions of AI output are fundamentally shaped by the data used during training. Graaf suggested that one possible technological approach to mitigating bias is to address the issue at its source—by regularly reprocessing digitised archival materials with updated AI models. Another long-term solution could lie in developing institution-specific AI systems tailored to the archival context.
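The workflow Graaf describes can be approximated with any pre-trained classifier whose output is exported for expert review. The sketch below is a hedged illustration of that loop: it substitutes a torchvision ImageNet model for the Clarifai general model actually used in the project, and the file name and review threshold are assumptions.

```python
# Hedged sketch: auto-tag images with a generic pre-trained model and write
# every tag to a CSV file for human review. Tags below a confidence threshold
# are flagged explicitly for the subject experts.
import csv
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def auto_tag(path, top_k=5):
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    scores, idx = probs.topk(top_k)
    return [(categories[i], float(s)) for i, s in zip(idx, scores)]

with open("tags_for_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "tag", "confidence", "needs_review"])
    for path in ["example_scan_001.jpg"]:            # hypothetical file name
        for tag, score in auto_tag(path):
            writer.writerow([path, tag, f"{score:.3f}", score < 0.3])
```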

>>> Christopher Kermovant also explored pre-trained AI models' capacities and limitations in interpreting historical image material. In his presentation on "How to use controlled vocabularies to describe early Japanese photographs using deep learning models" he shared his main findings on interpreting historical image material, particularly archive documents of non-Western origin. Kermovant's findings highlighted two central issues: a) Pre-trained AI models (in this case: CLIP, Qwen2, and ChatGPT-4o) tend to perform best in languages with strong representation in their training data—namely English, French, and German. b) These models exhibit a significant bias toward Western perspectives when interpreting non-Western imagery, often leading to distorted or overly simplified classifications. His work also underscores the need for more culturally diverse training datasets and the careful use of controlled vocabularies when applying AI to global archival materials.
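A zero-shot setup of the kind Kermovant examined can be sketched with the openly available CLIP model: candidate terms from a controlled vocabulary are scored against an image. The vocabulary, prompt template and file name below are illustrative assumptions, not details from the presentation.

```python
# Minimal sketch: match a photograph against controlled-vocabulary terms with
# CLIP (via Hugging Face transformers) and print the ranked scores.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

vocabulary = ["temple", "tea house", "kimono", "rickshaw", "harbour"]   # illustrative
prompts = [f"a 19th-century photograph of a {term}" for term in vocabulary]

image = Image.open("early_japanese_photo.jpg")       # hypothetical file
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for term, p in sorted(zip(vocabulary, probs.tolist()), key=lambda x: -x[1]):
    print(f"{term:10s} {p:.3f}")
```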

Reducing time, labour and energy intensity—while also addressing ethical concerns—proves to be a major task in object detection and in the labelling of image and text documents. On the one hand, there is the question of to what extent co-creation is essential for bridging the gap between machine-classifiable content and the deeper semantics of images or texts. On the other hand, it must be critically assessed which tasks necessarily have to be executed by AI and which ones humans might handle better than even resource-efficient AI ever could.

The conference took place in a hybrid format. Questions were taken directly from the audience as well as via the chat.

Picture quality matters – but in a different way than we as humans might expect

Image quality plays a crucial role in both the preprocessing and processing stages of archival image and text documents. While preprocessing can be extremely time-consuming, it is highly relevant for enabling meaningful analysis. In archival contexts, preprocessing typically involves distinguishing between the medium (e.g., paper texture, handwriting, image artifacts) and the actual content. Only after this separation can the content be decomposed and relevant information extracted effectively.
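A minimal preprocessing sketch along these lines, assuming OpenCV and purely illustrative parameter values, might look as follows: denoise the scan, binarise it to separate ink from paper, and remove isolated specks.

```python
# Hedged preprocessing sketch: suppress the carrier medium (paper texture,
# uneven lighting, small specks) so that only the written or printed content
# remains for later analysis.
import cv2

img = cv2.imread("archival_scan.jpg", cv2.IMREAD_GRAYSCALE)     # hypothetical file
img = cv2.fastNlMeansDenoising(img, h=10)                        # remove scanner noise
binary = cv2.adaptiveThreshold(                                  # separate ink from paper
    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 35, 15)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)       # drop isolated specks
cv2.imwrite("archival_scan_content_only.png", cleaned)
```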

>>> Contrary to what one might assume, higher resolution does not automatically yield better results in object detection tasks during AI training. The appropriate level of image detail depends on the specific analytical goal, as Karsten Tolle, Yury Korolev and Christopher Kermovant explained in their respective presentations "Potpourri of Computer Vision in Cultural Heritage", "Image filtering based on total variation spectral decompositions: an overview of the method and an application in medieval paper analysis" and "How to use controlled vocabularies to describe early Japanese photographs using deep learning models". For certain features, reducing the resolution—or even intentionally blurring the images—can improve detection performance. This approach not only facilitates the filtering of relevant information but also helps eliminate insignificant visual elements, such as scratches or artifacts, which might otherwise be misinterpreted by the model as meaningful features.
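As a hedged illustration of this point, the following snippet prepares two detector inputs from a scan, one downscaled and one blurred; the scale factor and kernel size are assumptions that would have to be tuned per task rather than values from the presentations.

```python
# Deliberately coarsen a scan before object detection so that scratches, dust
# and other micro-artifacts no longer register as features.
import cv2

img = cv2.imread("coin_or_photo_scan.jpg")                       # hypothetical file

# Downscale: fine surface damage disappears, large shapes survive.
small = cv2.resize(img, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)

# Alternatively, blur at full resolution to smooth out scratches and dust.
blurred = cv2.GaussianBlur(img, (9, 9), 0)

cv2.imwrite("detector_input_downscaled.png", small)
cv2.imwrite("detector_input_blurred.png", blurred)
```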

>>> Mahsa Vafaie's insights into "Separation of machine-printed and handwritten text in archival documents" illustrated the many incremental steps required to extract semantically accurate information from digitised archival documents—steps that are essential for uncovering previously unknown historical knowledge. The process begins with removing traces of the documents' physical characteristics, such as background noise, material textures, or the representation medium itself. It may also include the distinction between relevant elements and noise. When documents contain both machine-printed and handwritten text, further complexity arises. Text-type separation must be carried out using OCR (Optical Character Recognition) for print and HTR (Handwritten Text Recognition) for script. This involves handling heterogeneous layouts, processing both standard and non-standard fonts, recognising different handwriting styles, and identifying stamps and annotations. However, Vafaie pointed out that the computational challenge does not stop at the complex procedure of accurately identifying the information in every single document. Future research will have to address ways of connecting this information and embedding it in its historical context.
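A much-simplified sketch of such a routing step (not Vafaie's actual system) might send printed regions to OCR and handwritten regions to a publicly available HTR model; the region coordinates and the printed/handwritten labels below stand in for the output of a real layout-analysis step.

```python
# Route text regions by type: machine print goes to Tesseract OCR, handwriting
# goes to the TrOCR handwritten model. The classify step is assumed to have
# happened upstream and is represented here by hard-coded labels.
from PIL import Image
import pytesseract
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

htr_processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
htr_model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

def read_handwritten(region: Image.Image) -> str:
    pixel_values = htr_processor(images=region, return_tensors="pt").pixel_values
    ids = htr_model.generate(pixel_values)
    return htr_processor.batch_decode(ids, skip_special_tokens=True)[0]

def read_region(region: Image.Image, text_type: str) -> str:
    if text_type == "printed":
        return pytesseract.image_to_string(region)   # OCR for machine print
    return read_handwritten(region)                  # HTR for handwriting

# Hypothetical usage: regions and their printed/handwritten labels would come
# from a layout-analysis step.
page = Image.open("archival_document.png")
regions = [(page.crop((50, 40, 900, 120)), "printed"),
           (page.crop((50, 600, 900, 760)), "handwritten")]
for region, text_type in regions:
    print(text_type, "->", read_region(region, text_type))
```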

Mahsa Vafaie, FIZ Karlsruhe – Leibniz Institute for Information Infrastructure

Several contributions throughout the conference underlined that AI forces us to examine analytical steps that the human brain implicitly takes and to adapt those steps to the technology's requirements. Both the training of AI models and the analysis of archival material using AI are often shaped by trial-and-error processes that profit greatly from creative ideas and a fondness for experimenting.

Multimodal AI solutions are needed for AI to be applied across a broader range of archives

Despite growing technical capabilities, most current research projects still tend to focus on either image or text processing. The integration and simultaneous analysis of hybrid data—combining visual and textual elements—seems to remain the widest research gap.

As Erik Radisch concluded in his presentation on "A new Approach to Semi-Automated Annotations with Segment-Anything (Meta AI)", while numerous AI tools are already available, their full potential is only realised when they are combined—both to improve usability and to produce meaningful outcomes. Radisch also emphasised the highly dynamic nature of the AI landscape: many models currently in use may soon be outperformed. The interim solution appears to lie in relying on pre-trained multimodal AI systems. When doing so, one has to bear in mind that pre-trained AI tends to fail at analysing phenomena that are not reflected in its training material, whether because of cultural or linguistic differences or because adequate historical training material is lacking.
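For readers unfamiliar with the tool, a minimal sketch of semi-automated annotation with Segment Anything might look as follows; it generates mask proposals for a human annotator to review, and the checkpoint path, file name and size threshold are assumptions rather than details from Radisch's talk.

```python
# Generate segmentation mask proposals with Meta's Segment Anything Model (SAM)
# and keep the larger ones for a human annotator to accept, merge or reject.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # assumed local checkpoint
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("archival_photo.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)        # list of dicts with segmentation data

# Present only reasonably large proposals to the annotator for review.
proposals = [m for m in masks if m["area"] > 500]
print(f"{len(proposals)} mask proposals awaiting human review")
```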

Author's reflection on the research findings and what they might mean for everyday use in archives

Practitioners attending the conference with high expectations for ready-to-use, one-size-fits-all AI solutions—comparable in performance and accessibility to chatbots like ChatGPT—may have left somewhat disappointed. At present, no such solution exists. However, that does not mean that these ideas went unaddressed. As is common at any conference, the horizon of the panels was broadened by informal discussions during the coffee breaks, where speculative ideas were explored. In this case, the coffee-break discussions with practitioners from archives proved particularly insightful when it came to questions of integrating research findings into day-to-day archival work. While the conference contributors presented insights into the use of AI models for different types of archival use, it became even more apparent in the discussions that the path to their everyday applicability remains a long one. This path starts with the basic necessity of digitised material. Yet, particularly in small and understaffed archives, the proportion of digitised holdings is often very low. Ironically, these are the very institutions that could benefit most from AI-supported tools, given their limited financial and human resources. However, it is precisely that lack of resources that makes the use of AI tools unattractive at this point in time. Until AI tools can be seamlessly integrated into archival workflows in a significantly more resource-efficient manner, their use will likely remain out of reach for the average archive.

Among its many opportunities and challenges, AI has the potential not only to amplify biases but also to widen the gap between well-resourced and under-resourced archives. Addressing these challenges requires more than just developing open-source, multimodal AI models. Equally important is ensuring maximum usability—through intuitive interfaces, cost-effectiveness, adequate server capacities, and robust data protection measures. In addition, a focus on developing modular models that can be used by a wide range of archives and adjoining institutions might be useful. AI might even constitute a new field of state action, similar to existing e-government or cloud infrastructures. Regardless of the specific perspective taken on AI in archives and collections in the near future, the field continues to offer ample food for thought, future research and discussion.


Anja Link has been working as a research assistant in the Department of Architectural Theory and Building History at Bremen University of Applied Sciences since 2019. Her doctoral thesis focuses on the history of architecture and urban planning, underpinned by an economic-historical perspective, and thus brings together her previous research interests. She participated in the conference and is the author of this conference report.


Picture credit (all): Impressions from the conference, Claudia Junghänel, Herder Institute.


OpenEdition suggests that you cite this post as follows:
Lab 1.3 Digitale Heuristik und Historik (27 May 2025). Current developments in AI and what they mean for everyday use in archives. Value of the Past. Retrieved 10 November 2025 from https://doi.org/10.58079/140ry


Author: Lab 1.3 Digitale Heuristik und Historik

Lab 1.3 is dedicated to the appropriation of history in the digital age. Its aim is to make transparent the value systems and ways of thinking that shape discourses about the past in the digital knowledge space.
