
Visual Traces of the Colonial Past and the Importance of Reappraising Colonial Image Archives

In this post, Romuald Valentin Nkouda Sopgui examines how colonial image archives preserve visual traces of past power relations and why reappraising them is of crucial importance today.

This blog post examines how colonial image archives preserve visual traces of past power relations and why it is crucial to address them today. In this context, several photographs from the period of German colonial rule in Cameroon are presented to illustrate the intertwining of past and present.


Colonial image archives reflect colonial orders of knowledge; in them, colonial regimes of the gaze are condensed. Engaging with these archives not only provides access to visual traces of the colonial past, but also reveals how they are interpreted today from different perspectives, for instance with regard to restitution and ethical responsibility. In recent years, the reckoning with the colonial past has gained new urgency, and the focus is increasingly extending to artefacts, from looted cultural property to visual traces. Photographs from the colonial era attest to the fact that the history of colonial rule was in many respects a history of visual representation.[1]

Like other European nations, the German colonial empire used the medium of photography to pursue its colonial ambitions and to reinforce its imperialist self-image. Extensive photographic holdings were produced in Cameroon in particular, which stood under German "protection" between 1884 and 1914. Numerous German archives and museums hold colonial photographs from Cameroon. These photographs constitute "colonial image archives", that is, the totality of all images belonging to this category that have found their way into archives and collections.

Today, "colonial image archives" can be understood as places where present and past intertwine, as spaces in which colonial remembrance and colonial critique take place at the same time. Questions that arise in this context include: What meanings do photographs unfold in the tension between visual heritage and the reappraisal of colonialism? How do colonial relations play out in the practice of museums and archives? Which material and curatorial practices continue to shape current thinking about how to handle colonial photographs?

For the German colonial power in Cameroon, images were an important instrument for gathering knowledge, mapping territory and staging power. Photographs document a wide range of subjects, from research expeditions, ethnographic studies and colonial administrative activities to infrastructure construction and moments of colonial self-representation. The pictorial representations as well as the captions of the images exemplify the process of othering, which contrasted the German colonial empire, cast as "civilising" and "technically advanced", with a colonial territory made to appear "backward".[2]

Colonial Expeditions in Pictures: Exploration and Appropriation

Expedition photography in Cameroon served a clear function: it was meant to map the unknown territory, render it visible and thereby keep it available for colonial rule.

Caption: My expedition staff. Assistant, police soldiers, cooks, boys, interpreters, workers. Photographer: Guillemain, Constantin. Depicted: Guillemain, Constantin (1873-1914). Taken around 1906. Place: Cameroon. Shelfmark: 081-3200-38. Source: Bildarchiv der Deutschen Kolonialgesellschaft, Universitätsbibliothek Frankfurt am Main.

The researchers' cameras were trained on rivers, forests, villages and paths. The expeditions documented not only landscapes but also people and village communities. "Tribal chiefs" or "typical representatives" of particular ethnic groups were often photographed in staged poses that conformed to colonial expectations.

Caption: Hausa girls carrying clay pots. Depicted: Hausa. Photographer: Rudolf Oldenburg, 1907/1913, Cameroon. Shelfmark: PhMAf 2100. Source: GRASSI Museum für Völkerkunde zu Leipzig.

Today these photographs can be read from a twofold perspective: on the one hand, they provide historical evidence about clothing or social constellations; on the other, they show how strongly the colonial gaze determined the image and relied on stereotypes.

Ethnographic Studies: Bodies in Focus

Ethnographic portraits constitute a particularly sensitive area of colonial photography. Many collections contain series of photographs in which people are depicted frontally and in profile. The aim was to enable "scientific" comparisons, for example of body and facial features.

Caption: Anthropometric photographs of a girl from Bagam. Photographer: Rudolf Oldenburg, 1907/1913, Bagam, Cameroon. Shelfmark: PhMAf 2553. Source: GRASSI Museum für Völkerkunde zu Leipzig.

This practice was part of a racist system of knowledge that categorised and hierarchised human beings. Such photographs are burdened sources today. They make the violence of the colonial gaze visible: the people depicted were reduced to their bodies and degraded to objects of study.

Infrastructure Projects: Visualised Progress and Technical Achievement

Alongside expeditions and ethnographic photographs, infrastructure projects were also photographed intensively. For the colonial power, the construction of roads, railways, administrative buildings and plantations was not only a means of economic exploitation but also a symbol of the "civilising mission".

Caption: Sanaga bridge, Cameroon. Built by Gutehoffnungshütte Oberhausen, 1911. Estate donor: Kühne, Ludwig. Provenance: Basfeld donation. Register no.: FoA vi-KUE-GD199. Source: Frobenius-Institut für Kulturanthropologische Forschung, Frankfurt am Main.

Colonial photographs of bridges in Cameroon are not merely documents of technical and architectural achievement; they also stage a colonial myth of progress. At the same time, they often conceal the exploitative conditions under which they came into being, namely forced labour and the violence that accompanied it.

Visual Self-Staging and the Colonial Self-Image

A central motif of colonial photography was the self-staging of the colonisers. Group photographs of officers, administrative officials or settlers show how the German actors presented themselves: as explorers, as masters of the land, as bearers of a superior culture.

Caption: Inspection of the Cameroon Northern Railway, 1910/11. Photographer: Duke Adolf Friedrich zu Mecklenburg. Africa/Central Africa/Cameroon. Shelfmark: 2016.13:6. Source: MARKK Hamburg.
Caption: Hunting scene with a European, 1911-1913. Photographer: Marie Pauline Thorbecke. Africa/West Africa/Cameroon. Shelfmark: FT531. Source: RJM, Köln.

These images served both private purposes, for instance as souvenir photographs for families back in Germany, and a broader public relations effort. In exhibitions, illustrated magazines and colonial albums they contributed to legitimising the colonial project.

Reappraising the Colonial Past in Image Archives

For some years now, the reappraisal of colonialism has formed part of socio-political debate. The term refers to processes in which societies, institutions and individuals critically reflect on the colonial past, examine its consequences for the present and future, and search for ways of dealing with it justly. A central point of reference is the question of how cultural heritage from colonial contexts should be handled.[3]

Archives and museums that hold colonial objects and collections have become a focal point of postcolonial debate. To this day, the tradition of exhibiting others remains strongly shaped by colonialism, which is precisely why revisions and problematisations suggest themselves here.[4] These institutions claim to create places of intercultural encounter in which the "foreign" is met with consideration and respect. Postcolonial critique directs its attention not only to how colonial photographs function as trophies of colonialism, but also to the demarcations and claims to power by which such exhibits are marked. Responsible curatorial practice means, for example in the case of the image "Anthropometric photographs of a girl from Bagam", addressing the instrumentalisation of those depicted and raising questions about ethical accessibility and contextualisation.

Engaging with colonial image archives from Cameroon opens up not only perspectives for the history of images but also a legal and ethical dimension that must be negotiated within the framework of historical justice. Photographs from the colonial period in Cameroon were produced in a context of asymmetrical power relations between colonisers and colonised. People from Cameroon were often depicted not as subjects with their own dignity, history and voice, but as objects of colonial control, as exotic "types", as "evidence" for racist ideologies or as "loot" for ethnographic collections.[5] What matters is not to conceal the historical violence embedded in the medium of photography, but to negotiate it transparently and, at the same time, to regain a view of the people depicted as agents of their own history.


[1] Jens Jäger, Fotografie und Geschichte, Frankfurt am Main 2009.

[2] David Bate, Photography and Colonial Vision, Third Text 7 (1993) 22, 81–91.

[3] Margareta von Oswald / Jonas Tinius (eds.), Across Anthropology: Troubling Colonial Legacies, Museums, and the Curatorial, Leuven 2020.

[4] Nadine Kulbe / Theresa Jacobs et al. (eds.), Bildarchive. Wissensordnungen, Arbeitspraktiken, Nutzungspotenziale, Dresden 2022.

[5] Thomas Theye (ed.), Der geraubte Schatten. Eine Weltreise im Spiegel der ethnographischen Photographie, München 1989.


Photo: private.

Romuald Valentin Nkouda Sopgui is a Senior Lecturer in German literature and culture in the Department of Foreign Languages at the Teacher Training College of the University of Maroua (Cameroon). In 2025 he was a Value of the Past Fellow at the Leibniz Institute of European History (IEG) in Mainz.


Title picture: Caption: Cameroon. Mongonge. German researcher in front of a fetish (carved wood) at the village entrance. Photo: unknown photographer, c. 1900. Image no.: df-Hauptkatalog-0278656.

Unlocking the Past with AI: Spectral Decomposition for Image Processing in Cultural Heritage

Marco Colombo reflects on Yury Korolev’s conference presentation and shares his own perspective and insights.

This blog post is part of a series on the conference “Artificial Intelligence in Archives and Collections”, held on December 12-13, 2024, at the Herder Institute in Marburg, and presents perspectives on the conference contribution by Yury Korolev (University of Bath). It was written by Marco Colombo.


Recent advancements in artificial intelligence and mathematical techniques are transforming the way we approach cultural heritage preservation. These technologies provide new ways to analyze and interpret digital data, particularly in the realm of historical documents and artifacts. By applying mathematical models to imaging data, researchers can uncover hidden details, restore ancient manuscripts, and better understand the history behind archaeological findings.

In the world of cultural heritage preservation, technology plays an increasingly vital role in uncovering and understanding historical artifacts. One of the most promising advancements in this field is mathematical imaging, which has revolutionized the way we analyze historical documents. This article delves into how spectral decomposition aids in processing images for cultural heritage applications, with a focus on analyzing handmade medieval paper. This topic was addressed by Assistant Professor Yury Korolev in his presentation at the conference.

The Power of Spectral Decomposition in Image Processing

At its core, spectral decomposition involves breaking down an image or signal into simpler components that can be interpreted and manipulated independently. A well-known example is the Fourier transformation, which decomposes a signal into sine and cosine waves of different frequencies. This method has long been used in audio processing, where equalizers modify frequencies to emphasize or dampen specific sounds.

Applying this principle to image processing allows researchers to filter images by modifying their spectral components. Similar to how equalizers can adjust bass or treble in music, spectral decomposition enables the enhancement or suppression of specific image features, revealing details otherwise hidden in noise and distortions.
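To make the analogy concrete, here is a minimal sketch of a crude frequency-domain low-pass filter, assuming a local grayscale scan (the file name is a placeholder) and NumPy/Pillow; it illustrates the general principle only, not the method discussed in the talk.

```python
import numpy as np
from PIL import Image

# Load a grayscale image (placeholder path) and move it to the frequency domain.
img = np.asarray(Image.open("manuscript.png").convert("L"), dtype=float)
spectrum = np.fft.fftshift(np.fft.fft2(img))

# Keep only low frequencies inside a circular mask: a crude low-pass "equalizer".
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
mask = dist < 40  # cutoff radius in frequency bins (arbitrary choice)

# Back to the spatial domain: fine detail and high-frequency noise are suppressed.
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
Image.fromarray(np.clip(filtered, 0, 255).astype(np.uint8)).save("lowpass.png")
```

Suppressing high frequencies in this way also blurs the sharp edges of letters, which is exactly the limitation discussed in the next section.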

A musical manuscript (Pantokratoros monastery, code 214), 1433.

Challenges in Fourier-Based Image Processing

Traditional Fourier-based filtering presents certain challenges when applied to images. One primary issue is the handling of edges and sharp transitions within an image. The Fourier transform favors smooth variations and struggles with the abrupt changes often found in historical documents, where distinguishing between textual content, illustrations, and background textures is crucial for accurate analysis.

Additionally, artifacts such as noise and dirt often share high-frequency components with vital image details, making it difficult to distinguish between essential and extraneous elements. This limitation necessitates alternative approaches that can better separate meaningful structures from irrelevant noise.

Total Variation Spectral Decomposition: A More Effective Approach

A promising solution to these challenges is total variation (TV) spectral decomposition [1, 2, 3]. Unlike Fourier methods, TV decomposition is particularly effective in processing images with sharp edges and high contrast variations, such as handwritten medieval manuscripts. This approach decomposes images based on the scale and contrast of details, allowing for a more meaningful separation of image elements.

Using TV spectral decomposition, an image is analyzed at different levels of detail, isolating high-contrast features such as text and marks while preserving background textures like paper grain and chain lines. The ability to filter images in this way is invaluable for historians and archivists, as it allows them to extract key structural elements from historical documents with remarkable precision.
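The cited papers define this decomposition via the total variation flow; as a rough, simplified stand-in, the idea can be emulated by running TV denoising (here scikit-image's Chambolle implementation) at increasing regularization weights and treating the differences between successive results as bands of detail. File names and weights below are illustrative only.

```python
import numpy as np
from PIL import Image
from skimage.restoration import denoise_tv_chambolle

# Placeholder scan of a manuscript page, scaled to [0, 1].
img = np.asarray(Image.open("paper_scan.png").convert("L"), dtype=float) / 255.0

# TV-smooth the image at increasing regularization weights: small weights keep
# fine, high-contrast detail (text, chain lines), large weights keep only coarse structure.
weights = [0.02, 0.05, 0.1, 0.2]
levels = [img] + [denoise_tv_chambolle(img, weight=w) for w in weights]

# Differences between successive levels act as "bands" of detail at a given scale,
# loosely analogous to the spectral bands of the TV framework described in [1].
bands = [levels[i] - levels[i + 1] for i in range(len(levels) - 1)]
residual = levels[-1]

# Recombining selected bands filters the image; dropping the finest band, for
# example, suppresses small marks while preserving the larger paper structure.
reconstruction = residual + sum(bands[1:])
```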

Grégoire de Tours, Histoire des Francs. Initial P in the form of a fish opening the book. Late 7th century.

Applications in Medieval Paper Analysis

One particularly compelling application of TV spectral decomposition is in the study of handmade medieval paper. This type of paper was created using molds made of metal wires, which left distinctive imprints known as chain lines (vertical lines) and laid lines (horizontal lines). Additionally, many papers featured watermarks that identified their origin, providing crucial insights into historical trade networks and manuscript provenance.

By applying spectral decomposition techniques, researchers can:

  • Digitally remove overlying text and stains from historical documents to better reveal the mold imprints.
  • Enhance faint chain lines and laid lines, allowing for accurate identification of the paper’s origin.
  • Segment different elements of an image to isolate and analyze individual components more effectively.

This methodology has been successfully employed on manuscripts from the Cambridge University Library, enabling historians to trace the origins and movements of medieval texts with better accuracy.

Beyond Cultural Heritage: Broader Implications

The benefits of TV spectral decomposition extend beyond historical paper analysis. This approach has demonstrated its effectiveness in several other imaging applications, including:

  • Image Denoising: Removing unwanted noise from photographs, particularly in low-light conditions.
  • Image Segmentation: Identifying and categorizing different elements within an image, useful in medical imaging and microscopy.
  • Image Fusion: Combining multiple imaging modalities to create a more comprehensive representation of an object, especially in medical diagnostics.

Moreover, machine learning is being integrated into these spectral decomposition processes, significantly improving computational efficiency. Researchers are developing AI-driven models to automate and optimize these transformations, making them more accessible and scalable for various imaging needs.

Gemini, from the medieval Georgian manuscript of an astrological treatise, 12th century.

Future Prospects and Challenges

While TV spectral decomposition has proven to be a powerful tool, it is not without its challenges. One notable limitation is its preference for detecting rounded or disk-like structures, which may not always align with the features of certain historical documents. Future research is focused on refining these methods to accommodate a wider variety of structural elements, making them even more effective for cultural heritage applications.

Additionally, computational efficiency remains a concern. Unlike standard Fourier filtering, TV-based decomposition requires solving complex optimization problems, making it computationally intensive. However, ongoing advancements in machine learning and algorithmic optimization are expected to mitigate these challenges, paving the way for faster and more efficient implementations.

Conclusion

Spectral decomposition, particularly through total variation methods, represents a powerful technique for cultural heritage applications. By enabling clearer and more detailed analysis of historical documents, this technology offers invaluable insights into the past while preserving artifacts for future generations. As AI and machine learning continue to enhance these techniques, the potential for further breakthroughs in digital humanities and archival science remains immense.

With continued research and innovation, spectral decomposition will undoubtedly play a crucial role in unlocking the hidden secrets of our cultural heritage, bridging the gap between history and technology.


References

[1] Gilboa, G., 2014. A total variation spectral framework for scale and texture analysis. SIAM Journal on Imaging Sciences, 7(4), pp. 1937-1961.

[2] Grossmann, T.G., Schönlieb, C.B. and Da Rold, O., 2023. Extracting chain lines and laid lines from digital images of medieval paper using spectral total variation decomposition. Heritage Science, 11(1), p. 180.

[3] Grossmann, T.G., Dittmer, S., Korolev, Y. and Schönlieb, C.B., 2022. Unsupervised learning of the total variation flow. arXiv:2206.04406.


Marco Colombo is a PhD student in the Materials Analysis Group at the Technical University of Darmstadt. He was a participant at the conference in Marburg and was particularly interested in Yury Korolev’s work, so he decided to write down his impressions and perspectives on the presentation.

The original title of Yury Korolev’s presentation at the conference was “Image filtering based on total variation spectral decompositions: an overview of the method and an application in medieval paper analysis”. For further information on the conference see the programme.


Title picture: Book of Hours — a prayer book written in Medieval Latin (c. 15th century).

All pictures: symbolic images, public domain via Wikimedia.

Current developments in AI and what they mean for everyday use in archives

Anja Link takes a closer look at various contributions from the panels and identifies three key findings from the conference.

Insights from the Conference “Artificial Intelligence in Archives and Collections”

This blog post is part of a series on the conference at the Herder Institute in Marburg on 12 and 13 December 2024 and will highlight three overall findings.


AI needs to be co-created by humans – to do its work but also to supervise its ethical conduct

Questions of finding the happy medium of human and machine action were touched on by many speakers. While the call for co-creation may appear self-evident, since AI models are developed by humans, a closer look at the modelling and training phases reveals the issue to be more complex. The complexity begins with data selection and extends through the modelling and training phases to the actual implementation of AI systems. Furthermore, it touches on ethical questions of (energy) resource efficiency, bias and power. Ensuring meaningful human involvement at every stage is therefore not just a technical necessity, but an ethical imperative.

Data Selection and AI

>>> Mo von Bychelberg’s overview on “Artificial Intelligence Technology’s Influence on the Authenticity of Digital Intangible Cultural Inheritance” explored how AI introduces new dimensions of power and bias into archival practices, dimensions that, in fact, build upon long-standing issues in the field. Traditional dimensions of power and bias in archives cover questions of what is worth being archived as well as who decides what is being archived. In the era of digitisation those dimensions extend to questions of what is worth being digitised and who decides on that. The emergence of AI technologies adds yet another layer: Which archives can afford to use or even create AI? Who has the human resources and expertise to work with off-the-shelf AI solutions? Will the staff be able to handle off-the-shelf AI? Can AI potentially address broader challenges, such as preserving the tacit knowledge of retiring archivists? Von Bychelberg argued that many of these issues—power, bias, access, and sustainability—converge in the context of AI. Before archives consider adopting AI technologies, these critical questions must be openly and thoroughly addressed.

Mo von Bychelberg, Uppsala University

AI model training phase

>>> In her presentation on “Dusting Off the Unlabelled Data: Graph Semi Supervised Learning for Large-Scale Datasets”, Angelica I. Aviles-Rivero shared insights from her research on efficient image processing in training phases. Regardless of the model’s ultimate goal, be it image classification, recognition, or generation, every image-based AI system begins with object detection and data labelling. Since AI models have limited generalisation capabilities, acquiring large datasets with accurate annotations is costly in terms of time, financial resources, and the need for expert knowledge. Reducing those costs requires a well-balanced combination of human expertise and machine-based interpretation of image data. Semi-supervised learning (SSL) offers such a balance, since its central idea is to replace part of the training data with human knowledge and thus to train with less data. Aviles-Rivero’s findings indicate that SSL can lead to improved model performance by replacing parts of the training data with human insight, thereby requiring fewer labelled examples.
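As a generic illustration of the graph-based SSL idea (not the method presented in the talk), the following sketch uses scikit-learn's LabelSpreading on a small toy image dataset, pretending that only a fraction of the images were labelled by experts.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

# Toy stand-in for an image collection: 1,797 small digit images with known labels.
digits = load_digits()
X, y_true = digits.data, digits.target

# Pretend only 5% of the images were labelled by human experts; mask the rest with -1.
rng = np.random.RandomState(0)
y_train = np.full_like(y_true, -1)
labelled = rng.choice(len(y_true), size=len(y_true) // 20, replace=False)
y_train[labelled] = y_true[labelled]

# Graph-based semi-supervised learning: labels propagate along a k-NN graph
# built over all images, labelled and unlabelled alike.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_train)

# Transductive accuracy over the whole collection (labelled seeds included).
accuracy = (model.transduction_ == y_true).mean()
print(f"Accuracy with 5% labels: {accuracy:.2%}")
```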

>>> Nicole Graaf explored the temporal, gender-based, and cultural dimensions of bias in AI-generated image metadata in her presentation “Between ritual and relief – when the computer squints: analysis of AI-based indexing in the Image Archive of the ETH Library”. From an archivist’s perspective, Graaf presented the results of a qualitative analysis of automatically tagged images. The images were labelled using Clarifai’s general model and subsequently reviewed by subject experts, not only for semantic accuracy, but also for their potential to create or reinforce bias. As it turned out, automatic tagging works best with iconic, easily recognisable images that would likely be used to teach a child. In contrast, automatic tagging struggles significantly when it comes to abstract cultural or gender-related concepts. These findings underline two key points: a) AI-generated metadata must be critically evaluated by humans, and b) the ethical dimensions of AI output are fundamentally shaped by the data used during training. Graaf suggested that one possible technological approach to mitigating bias is to address the issue at its source, by regularly reprocessing digitised archival materials with updated AI models. Another long-term solution could lie in developing institution-specific AI systems tailored to the archival context.

>>> Christopher Kermorvant also explored the capacities and limitations of pre-trained AI models in interpreting historical image material. In his presentation on “How to use controlled vocabularies to describe early Japanese photographs using deep learning models” he shared his main findings, particularly for archive documents of non-Western origin. Kermorvant’s findings highlighted two central issues: a) Pre-trained AI models (in this case: CLIP, Qwen2, and ChatGPT-4o) tend to perform best in languages with strong representation in their training data, namely English, French, and German. b) These models exhibit a significant bias toward Western perspectives when interpreting non-Western imagery, often leading to distorted or overly simplified classifications. His work also underscores the need for more culturally diverse training datasets and the careful use of controlled vocabularies when applying AI to global archival materials.

Reducing time, labour and energy intensity, while also addressing ethical concerns, proves to be a major task in the field of object detection and the labelling of image and text documents. On the one hand, there is the question of the extent to which co-creation is essential for bridging the gap between machine-classifiable content and the deeper semantics of images or texts. On the other hand, it must be critically assessed which tasks necessarily have to be executed by AI and which ones humans might be better at than resource-efficient AI could ever be.

The conference took place in a hybrid format. Questions were taken directly from the audience as well as via the chat.

Picture quality matters. But in a different way than we as humans might expect

Image quality plays a crucial role in both the preprocessing and processing stages of archival image and text documents. While preprocessing can be extremely time-consuming, it is highly relevant for enabling meaningful analysis. In archival contexts, preprocessing typically involves distinguishing between the medium (e.g., paper texture, handwriting, image artifacts) and the actual content. Only after this separation can the content be decomposed and relevant information extracted effectively.

>>> Contrary to what one might assume, higher resolution does not automatically yield better results in object detection tasks during AI training. The appropriate level of image detail depends on the specific analytical goal, as Karsten Tolle, Yury Korolev and Christopher Kermorvant explained in their respective presentations “Potpourri of Computer Vision in Cultural Heritage”, “Image filtering based on total variation spectral decompositions: an overview of the method and an application in medieval paper analysis” and “How to use controlled vocabularies to describe early Japanese photographs using deep learning models”. For certain features, reducing the resolution, or even intentionally blurring the images, can improve detection performance. This approach not only facilitates the filtering of relevant information but also helps eliminate insignificant visual elements, such as scratches or artifacts, which might otherwise be misinterpreted by the model as meaningful features.
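A minimal preprocessing sketch of this idea, with placeholder file names and arbitrarily chosen parameters, might look like this:

```python
from PIL import Image, ImageFilter

# Placeholder scan; in practice this would be a digitised photograph or record card.
img = Image.open("record_card.png").convert("L")

# Downscale to a coarser working resolution: small scratches and paper grain
# disappear, while larger shapes relevant for detection are preserved.
w, h = img.size
coarse = img.resize((w // 4, h // 4), resample=Image.BILINEAR)

# Alternatively (or additionally), a mild Gaussian blur smooths out artefacts
# that a detector might otherwise pick up as meaningful features.
blurred = coarse.filter(ImageFilter.GaussianBlur(radius=1.5))
blurred.save("record_card_preprocessed.png")
```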

>>> Mahsa Vafaie’s insights into “Separation of machine-printed and handwritten text in archival documents” illustrated the many incremental steps required to extract semantically accurate information from digitised archival documents, steps that are essential for uncovering previously unknown historical knowledge. The process begins with removing traces of the documents’ physical characteristics, such as background noise, material textures, or the representation medium itself. It may also include the distinction between relevant elements and noise. When documents contain both machine-printed and handwritten text, further complexity arises. Text-type separation must be carried out using OCR (Optical Character Recognition) for print and HTR (Handwritten Text Recognition) for script. This involves handling heterogeneous layouts, processing both standard and non-standard fonts, recognising different handwriting styles, and identifying stamps and annotations. However, Vafaie pointed out that the computational challenge does not stop at the complex procedure of accurately identifying the information in every single document. Future research will have to address ways of connecting this information and embedding it in its historical context.
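Schematically, this routing of print to OCR and handwriting to HTR could be sketched as follows; the layout-analysis and HTR helpers are hypothetical stand-ins, and pytesseract merely illustrates the print-OCR stage.

```python
import pytesseract
from PIL import Image

# Hypothetical helpers: in a real project these would be trained models for
# layout analysis, print/handwriting classification and handwritten text recognition.
def detect_text_regions(page):
    # Trivial stand-in: treat the whole page as one machine-printed region.
    return [((0, 0, page.width, page.height), "printed")]

def recognise_handwriting(region):
    return "[handwritten text would be transcribed by an HTR model here]"

page = Image.open("archival_document.png").convert("L")
transcript = []
for box, kind in detect_text_regions(page):
    region = page.crop(box)
    if kind == "printed":
        # Machine-printed text is routed to a conventional OCR engine.
        transcript.append(pytesseract.image_to_string(region, lang="deu"))
    else:
        # Handwritten passages are routed to a dedicated HTR model instead.
        transcript.append(recognise_handwriting(region))

print("\n".join(transcript))
```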

Mahsa Vafaie, FIZ Karlsruhe – Leibniz Institute for Information Infrastructure

Several contributions throughout the conference underlined that AI forces us to examine analytical steps that the human brain takes implicitly and to adapt those steps to the technology’s requirements. Both the training of AI models and the analysis of archival material using AI are often shaped by trial-and-error processes that benefit greatly from creative ideas and a fondness for experimentation.

Multimodal AI solutions are needed for AI to be applied across a broader range of archives

Despite growing technical capabilities, most current research projects still tend to focus on either image or text processing. The integration and simultaneous analysis of hybrid data combining visual and textual elements appears to remain the largest research gap.

As Erik Radisch concluded in his presentation on “A new Approach to Semi-Automated Annotations with Segment-Anything (Meta AI)”, while numerous AI tools are already available, their full potential is only realised when they are combined, both to improve usability and to produce meaningful outcomes. Radisch also emphasised the highly dynamic nature of the AI landscape: many models currently in use may soon be outperformed. The interim solution appears to lie in relying on pre-trained multimodal AI systems. When doing so, one has to bear in mind that pre-trained AI tends to fail at analysing phenomena that are not reflected in its training material, whether because of cultural or linguistic differences or because adequate historical training material is lacking.

Author’s reflection on the research findings and what they might mean for everyday use in archives

Practitioners attending the conference with high expectations for ready-to-use, one-size-fits-all AI solutions, comparable in performance and accessibility to chatbots like ChatGPT, may have left somewhat disappointed. At present, no such solution exists. However, that does not mean that these ideas went unaddressed. As is common at any conference, the horizon of the panels was broadened by informal discussions during the coffee breaks, where speculative ideas were explored. In this case, the coffee break discussions with practitioners from archives proved particularly insightful when it came to questions of integrating research findings into day-to-day archival work. While the conference contributors presented insights into the use of AI models for different types of archival use, the discussions made it even more apparent that the path to everyday applicability remains a long one. This path starts with the basic prerequisite that material has been digitised at all. Yet, particularly in small and understaffed archives, the proportion of digitised holdings is often very low. Ironically, these are the very institutions that could benefit most from AI-supported tools, given their limited financial and human resources. However, it is precisely that lack of resources that makes the use of AI tools unattractive at this point in time. Until AI tools can be seamlessly integrated into archival workflows in a significantly more resource-efficient manner, their use will likely remain out of reach for the average archive.

Among its many opportunities and challenges, AI has the potential not only to amplify biases; it also has the potential to widen gaps between well-resourced and under-resourced archives. Addressing these challenges requires more than just developing open-source, multimodal AI models. Equally important is ensuring maximum usability, through intuitive interfaces, cost-effectiveness, adequate server capacities, and robust data protection measures. Additionally, it might be useful to focus on developing models as a modular system that can be used by a wide range of archives and related institutions. AI might even constitute a new field of state action, similar to existing e-government or cloud infrastructures. Regardless of the specific perspective taken on AI in archives and collections in the near future, the field continues to offer ample food for thought, future research and discussion.


Anja Link has been working as a research assistant in the Department of Architectural Theory and Building History at Bremen University of Applied Sciences since 2019. Her doctoral thesis focuses on the history of architecture and urban planning, underpinned by an economic-historical perspective, and brings together her previous research interests. She participated in the conference and is the author of this conference report.


Picture credit (all): Impressions from the conference, Claudia Junghänel, Herder Institute.

Innovative Tools and Current Projects: Artificial Intelligence in Archives and Collections

In his article, Erdal Ayan presents a selection of AI tools for archives and collections as well as current projects that were presented at the Marburg AI Conference.

Key topics of the conference Artificial Intelligence in Archives and Collections: Practices, Potentials and Evidence Production in Dealing with Images and Multimodal Cultural Heritage included computer vision techniques, multi-modal large language models (MLLMs), semi-automated annotation, and innovative tools for search and discovery. Several new and ongoing projects showcased advancements in deep learning, text-image embedding, and graph-based learning.

This blog post is part of a series on the conference held on December 12-13, 2024, at the Herder Institute in Marburg and will present a number of selected tools and topics that have been discussed.

Opportunities and Challenges

The conference emphasized both the opportunities and challenges presented by AI-driven tools, including ethical concerns, explainability, and data bias, while promoting collaboration and knowledge exchange across disciplines. It focused on visual sources such as photographs, graphic collections, and mixed image-text archives, addressing both theoretical and practical challenges in automated indexing, cataloguing, and image processing. The growing role of AI and ML in multi-modal cultural heritage research was highlighted, particularly in semantic segmentation, object classification, and annotation. Key datasets discussed included coins, archaeological record cards, Buddhist murals, Soviet newsreels, and historical photographs.

Image Archives and Image Processing

Image archiving and processing topics had an important place in the conference, and current studies on these topics attracted a lot of attention from the participants. The presentations at the conference demonstrated how AI and ML are revolutionizing the management and analysis of image archives.

Ralph Ewerth introduced iART, a computer vision-based search engine that enhances the accessibility of art and historical archives. It addresses cultural heritage images, integrating computer vision techniques for object recognition and cross-modal search. The dataset includes coin images, and the tool supports both scholarly research and public engagement. iART enables users to explore large image collections through features like object recognition, pose estimation, and natural language queries. Ewerth demonstrated how state-of-the-art computer vision techniques, combined with large language models, facilitate improved retrieval results and explanations for search outcomes.

Frank Puppe addressed the digital indexing of archaeological record cards, presenting a pipeline that processes scanned documents through Optical Character Recognition (OCR), semantic mapping, and multi-modal large language models like ChatGPT-4o. This pipeline enables the transformation of analogue archaeological records with images into searchable, structured formats such as JSON.
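The overall shape of such a pipeline can be sketched as follows. This is not the project's actual code: the field names, file paths and the mapping step are invented for illustration, and pytesseract simply stands in for the OCR stage that precedes the semantic mapping and multimodal LLM steps described in the talk.

```python
import json
import pytesseract
from PIL import Image

# OCR stage: read the raw text from a scanned record card (placeholder path).
scan = Image.open("record_card_0001.png")
raw_text = pytesseract.image_to_string(scan, lang="deu")

def map_to_fields(text):
    """Hypothetical semantic-mapping step. In the pipeline described above this
    is where controlled vocabularies and a multimodal LLM would assign the OCR
    output (and the image itself) to structured fields."""
    return {
        "site_name": None,        # e.g. place of the find
        "object_type": None,      # e.g. "coin", "ceramic sherd"
        "dating": None,
        "raw_transcription": text,
    }

record = map_to_fields(raw_text)

# Searchable, structured output, as described in the presentation.
with open("record_card_0001.json", "w", encoding="utf-8") as f:
    json.dump(record, f, ensure_ascii=False, indent=2)
```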

Home page of HikarIA, screenshot taken on 21 May 2025.

Christopher Kermorvant highlighted challenges in applying contemporary deep learning models to early Japanese photographs. He introduced HikarIA, which is a search engine designed for Japanese historical photographs, addressing issues of limited metadata and enabling discovery through advanced AI techniques. While convolutional neural networks and transformer models excel with modern datasets, historical images often present difficulties due to cultural and temporal differences. Kermorvant emphasized the need for tailored approaches to address these limitations.

Erik Radisch presented Segment Anything, integrated with Annotorious for semi-automated image annotation. Segment Anything is a model developed by Meta AI for image segmentation, capable of prompt-based annotation without extensive retraining; Annotorious is an open-source JavaScript library for image annotation. This combination significantly reduces the time required for detailed polygon-based annotations, making it ideal for large-scale archival projects. Segment Anything’s ability to adapt to new tasks without extensive retraining provides a powerful tool for researchers.
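For orientation, a minimal sketch of prompt-based segmentation with the segment-anything package is shown below; the checkpoint file name, image path and click coordinates are placeholders, and the resulting mask would still need to be converted into an annotation (for example for Annotorious).

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a pre-trained SAM checkpoint (placeholder file name) and wrap it in a predictor.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_checkpoint.pth")
predictor = SamPredictor(sam)

# Digitised photograph to annotate (placeholder path).
image = np.array(Image.open("archive_photo.jpg").convert("RGB"))
predictor.set_image(image)

# A single foreground click (x, y) acts as the prompt; SAM proposes candidate masks.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[420, 310]]),
    point_labels=np.array([1]),  # 1 = foreground point
    multimask_output=True,
)

# Keep the highest-scoring mask; its outline could then be handed to an
# annotation front end such as Annotorious.
best_mask = masks[int(np.argmax(scores))]
print("Mask covers", int(best_mask.sum()), "pixels")
```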

New and Ongoing Projects in Image Archiving

At the conference, researchers talked in detail about their own work on image archiving and processing and shared their knowledge and experience on new and ongoing projects. The conference, therefore, showcased several innovative projects and tools that push the boundaries of AI-driven cultural heritage research. Among these were:

Celtic coin type “Divinka” from Slovakia. Photo: Marek Sobola, Wikimedia Commons.

ClaReNet (Karsten Tolle): A project utilizing object detection and classification to analyze coins. Tolle demonstrated the use of Orange Data Mining, a visual programming tool, to enable non-programmers to perform clustering and classification tasks. 

Wiedergutmachung (Harald Sack and Mahsa Vafaie): A collaboration with the Landesarchiv Baden-Württemberg that focuses on the classification and extraction of handwritten and machine-printed text in historical documents. Vafaie presented a pipeline for separating handwritten annotations from machine-printed text, resulting in a 26% improvement in OCR accuracy.

Kinokroonika (Mila Oiva): A project that explores clustering of images as graph data, leveraging graph-based methods for visual analysis.

Dight-Net (Mila Oiva): A collaborative research initiative utilizing tools like ResNet-50 to advance digital cultural heritage projects.

Orange: An open-source visual programming tool for clustering, classification, and image analytics, designed for non-programmers.

ResNet-50: A deep learning model architecture for image classification, compatible with Python libraries like Keras (a minimal usage sketch follows below).

Collection Space Navigator (CSN): A tool for visualizing and exploring collections through vector encoding of images, built using Python (Flask) and React.
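To illustrate how an off-the-shelf architecture such as the ResNet-50 mentioned above is typically used via Keras, here is a minimal sketch with a placeholder image path; a real archival project would fine-tune the network on domain-specific material (for example coin images) rather than rely on the generic ImageNet classes shown here.

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions,
)
from tensorflow.keras.preprocessing import image

# Load ResNet-50 with ImageNet weights.
model = ResNet50(weights="imagenet")

# Placeholder image, e.g. a digitised coin photograph, resized to the expected 224x224 input.
img = image.load_img("coin_scan.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Top-3 ImageNet labels; for archival material these are at best a starting point
# before fine-tuning on domain-specific classes.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.3f}")
```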

The sharing of these studies and experiences by experts at the conference provided an important impetus for knowledge transfer and the development of future collaborations.

Conclusion

The Artificial Intelligence in Archives and Collections conference underscored the transformative potential of AI and machine learning in managing, analyzing, and understanding cultural heritage data. Projects like iART and Wiedergutmachung demonstrated practical applications of computer vision and text recognition, while tools like Segment Anything and Orange highlighted the growing accessibility of AI technologies for non-experts.

Despite these advancements, the conference also addressed challenges such as data bias, explainability, and the limitations of current models when applied to historical datasets. Speakers emphasized the importance of interdisciplinary collaboration and the need for ethical considerations in the deployment of AI technologies.

Overall, the event successfully fostered dialogue between researchers, developers, and practitioners, paving the way for future innovations in AI-driven cultural heritage research.


Selected Reference List for the Projects Presented

Arnold, T. (2024). Explainable and Auditable Search and Discovery of Visual Cultural Heritage Collections.

Aviles-Rivero, A. I. (2024). Dusting Off the Unlabeled Data: Graph Semi-Supervised Learning for Large-Scale Datasets.

Evans, J. (2024). The Geometry of Culture: Analyzing Meaning through Embeddings of Text and Images.

Ewerth, R. (2024). Unlocking Cultural Heritage: Computer Vision for Art and History Archives.

Huff, M., Abele, N., Kimmel, D., Fischer, H., Anders, G., Aydin, T., & Bude, J. (2024). ArchiveGPT: Psychological and Technological Perspectives on the AI-Supported Archiving of Image Material.

Kermorvant, C. (2024). How Contemporary Deep Learning Models Describe Early Japanese Photographs?

Puppe, F., Fischer, N., & Kimmel, D. (2024). Pipeline for Digital Indexing of Archaeological Record Cards.

Radisch, E. (2024). A New Approach to Semi-Automated Annotations with Segment Anything.

Sack, H., Vafaie, M., & Waitelonis, J. (2024). Separation of Machine-Printed and Handwritten Text in Archival Documents.

Tolle, K. (2024). Potpourri of Computer Vision in Cultural Heritage.

All of these projects were presented at the conference. The full programme and all abstracts can be found on the website of the Herder Institute.


Erdal Ayan is a software developer working at FIZ Karlsruhe, Germany. As a young researcher and developer, he has been active in Digital Humanities for many years. He is currently interested in software development for image processing and computer vision, and he participated in the conference.


Title picture: An example of image processing. Original image, and resulting image after Laplacian and Sobel filters. Alaens, Wikimedia Commons.

Artificial Intelligence in Archives and Collections: Practices, Potentials and Evidence Production in Dealing with Images and Multimodal Cultural Heritage

In this report, the organizers look back on their conference and provide an outlook. Further summaries and reflections of the panels will be presented in a small blog series.

Report on a hybrid conference in Marburg, 12–13 December 2024


In the years to come, curatorial and archival processes in memory and heritage institutions (including key aspects of cataloguing such as description, classification and categorization) will be increasingly supported by automated systems and artificial intelligence. These practices attribute value to sources and link archival materials and collection objects to societal narratives. Collections and archives thus form an essential basis for memory-related discourses and shape our view of the past. New technologies have now reached the stage where they are potentially suitable for the requirements of cultural heritage institutions. There is substantial promise in the partially automated indexing and cataloguing of historical sources, particularly of digital images, whose hermeneutics and meaning have in the past been accessible only to the human eye and not to the machine.

Images can now be automatically described semantically, making them easier to find. AI methods – and most recently multimodal AI processing – are opening new possibilities for automatic text extraction and layout recognition, automated image segmentation and annotation, as well as the analysis of visual sources and their contextualisation. At the same time, however, there is a lack of knowledge about how evidence – and subsequently facts – are generated and how AI processes affect the attribution of authenticity to archival documents and photos. Currently, the humanities lack semantically high-quality, historically adequate and subject-appropriate training datasets. Similarly, there is hardly any agreement about what constitutes an acceptable outcome of computational classification processes, or what kind of benchmarks should be used to evaluate the results. We therefore need to develop best practices (that may tap into explainable Artificial Intelligence (XAI) methods), benchmarks and common goals. To increase the value that the information and knowledge held in our archives and collections have for future research, we need to deepen our understanding of processes and algorithms.

To deepen the discussion on these questions, the Research Lab ‘Digital Heuristics and Digital History’ of the Leibniz Research Alliance ‘Value of the Past’, in cooperation with NFDI4Memory, Task Area ‘Data Quality’, organised a conference in Marburg on artificial intelligence in cultural heritage institutions, such as archives and collections, and on how these new technologies are transforming archival institutional practices. Topics primarily focused on – but were not limited to – visual sources, such as photography and graphic collections or mixed image-text sources and multimodal information processing.

The conference provided a forum for researchers and practitioners from the humanities as well as from archives and collections to connect with researchers and engineers in the fields of artificial intelligence, computer science and the digital humanities, to discuss new findings, and to exchange experiences. The event sought to promote an interdisciplinary, cross-sectoral dialogue between research, development and practice.

The conference brought together curatorial and archiving knowledge and new AI-based methods, and it also provided a forum for ethical reflection on the use of AI in academic and archival practices. Participants discussed how automated processing and AI methods require detailed epistemic reflection and methodological-technical control to ensure that no false or tainted evidence is generated. Thus, the conference reflected on the effects of these new technologies on the production of evidence, thereby contributing to the crucial question of how the digital transformation is changing knowledge creation in the humanities and what this means for scholarship in historical disciplines.

The call was met with a large response: Speakers from Austria, China, France, Germany, USA, UK and Sweden presented. More than 400 colleagues from 25 countries, most of them online, registered for the conference and followed the presentations or participated in the discussions. Papers covered topics such as Exploring and Analysing Collections from Textual and Multimodal Contexts, Computer Vision (Semantic Segmentation, Classification, Analysing and Understanding Images), The Effects of Computational Methods on Image Analytical Research and Visual Studies in Cultural Heritage, Opportunities and Challenges for the Automated Indexing and Cataloguing of Visual Sources in Archives and Collections.

As the organizers of the conference, we conclude from the high level of interest that – even though the various scientific communities have many opportunities to exchange on new developments in AI – there is still a need for interdisciplinary and cross-sectoral exchange in the GLAM realm. We would like to keep this interest alive and support networking on the topic in the time to come – not limited to conference participants.

Participants – and any colleague who would like to join – will be informed about future activities and possible publications resulting from the conference. Stay tuned! As a start, we will publish several short summaries and elaborations on selected conference topics on this blog, written by young researchers who participated in the conference.


Programme and abstracts: https://www.herder-institut.de/event/conference-artificial-intelligence-in-archives-and-collections


Organizers:

Leibniz-Forschungsverbund „Wert der Vergangenheit“, Lab 1.3. Digitale Heuristik und Historik

Leibniz-Zentrum für Archäologie (LEIZA), Mainz

Herder-Institut für historische Ostmitteleuropaforschung – Institut der Leibniz-Gemeinschaft, Marburg

NFDI4Memory, Task Area Data Quality


This conference report was written by the organizing committee:

Elke Bauer, Herder-Institut

Simon Donig, Herder-Institut

Annette Frey, LEIZA

Dominik Kimmel, LEIZA


Further information on future activities on the topic: Interested parties can still use the conference registration tool to register for forthcoming information.

Contact: archivesai@herder-institut.de, dominik.kimmel@leiza.de


Title picture: image detail of the conference flyer

Stettin, Wisconsin, USA. A Plea for Metadata

In her post, Tabitha Redepenning illustrates how metadata shape our perspective on places such as Stettin/Szczecin.

This study report delves into a serendipitous archival discovery, revealing how metadata shape our understanding of historical places like “Stettin.” The examples of Stettin in Poland and Wisconsin highlight how geographical and political meanings shift across time and space.


A Google Maps search for "Stettin" quickly leads to the city of Szczecin on the German-Polish border, which was still officially called Stettin until 1945. A more thorough search, however, also brings up the town of Stettin in Wisconsin, USA. It is well known that emigrants to the United States often brought the names of their home towns with them, which is why Stuttgart is located in Arkansas, Dresden in Ohio and Paderborn in Illinois.

Several perspectives open up here. On the ontological level, there is the question of the uniqueness and permanence of designations, a topic addressed by gazetteers, i.e. dictionaries of place names. The question goes far beyond two towns sharing the same name, as it also concerns different spellings, alphabets and historical place names. An overview of various gazetteers can be found at Gazetteers.net.

Stettin, Wisconsin. In: Town of Stettin Centennial. 1860/1960. Wisconsin: 1960, cover page.

"Stettin" as a Politically Charged Term

One must not forget, however, the meanings ascribed to place names. The question goes beyond the distinction between endonym (the name in the local language) and exonym (the designation in another language). The territories that, after the redrawing of the border in 1945, no longer belonged to Germany but to Poland were a political flashpoint of the post-war period. For decades, West Germany did not recognise the border, and some groups and expellee associations demanded a "return to Stettin", not to Szczecin. West German newsletters used the official Polish name in a derogatory manner and emphasised the supposedly "true" name of the city. This attitude has changed over the past 30 years, although the debates about the Centre Against Expulsions in Berlin temporarily rekindled Polish fears.

Metadata: Opportunities and Challenges

The post-war debates also influenced the cataloguing practices of some West German archives. Classification, for example in card catalogues, was set up using German place names. In the case of Szczecin and Stettin there is at least a phonetic similarity; in the case of Breslau and Wrocław, additional questions of alphabetical cataloguing arise. The designations used for the catalogue reflect a system of thought and order that could be considered obsolete since the recognition of the German-Polish border. Nevertheless, it makes political and social conflicts visible. Preserving old designations, rather than pragmatically overwriting them during digitisation, is an essential part of making archival work transparent, and it opens up new research perspectives, such as those pursued by the Research Lab Digitale Heuristik und Historik of the Leibniz Research Alliance "Wert der Vergangenheit" ("Value of the Past").

Aerial photograph of Szczecin, city centre. In: Polska. 1956-1965. Warszawa: Książka i Wiedza, 1966.

Metadata, in the broadest sense data that describe other data (such as places), need to be questioned comprehensively. In practice, however, one also faces quite fundamental problems at the same time: holdings first have to be digitised, or they have already been digitised but the metadata were assigned inconsistently or without standardisation. Building up appropriate thesauri, such as those listed at BARTOC.org, helps with consistent indexing. On the other hand, nuances of meaning can be lost. Despite standardisation, the meanings of terms remain changeable, and the thesauri themselves will have to be adapted in the future.
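To make this concrete, a hypothetical and much simplified gazetteer-style record for the places discussed here might look as follows; the field names are invented for illustration and follow no particular standard.

```python
# Hypothetical, simplified gazetteer-style records; field names are illustrative
# and do not follow any particular metadata standard.
places = [
    {
        "id": "place:szczecin",
        "preferred_name": "Szczecin",
        "alternate_names": [
            {"name": "Stettin", "language": "de", "status": "historical/exonym", "until": 1945},
            {"name": "Szczecin", "language": "pl", "status": "endonym"},
        ],
        "country": "Poland",
        "related_places": ["place:stettin_wi"],
    },
    {
        "id": "place:stettin_wi",
        "preferred_name": "Stettin (Town of Stettin)",
        "alternate_names": [],
        "country": "USA",
        "named_after": "place:szczecin",
    },
]

# Even this toy structure raises the questions posed below: are these one entity
# or several, and how should the links between them be modelled?
for place in places:
    names = {alt["name"] for alt in place["alternate_names"]} | {place["preferred_name"]}
    print(place["id"], "->", ", ".join(sorted(names)))
```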

Back to the Town of Stettin in Wisconsin?

On the occasion of the town's centennial, an association of local initiatives published a commemorative yearbook in 1960 on the founding of the Town of Stettin. This publication prompted the present blog post. It describes the arrival of white settlers in the mid-19th century, the rise of the logging trade, the fights with Native Americans and the town's transformation into a centre of dairy production. Stettin in Pomerania, the inspiration for the Wisconsin place name, is not mentioned in the publication. Although a great future was still predicted for the Town of Stettin in 1960, only a few of its buildings survive today.

Instead, a form of cultural heritage care is carried out by the "Pommersche Verein Freistadt – The Pomeranian Society of Freistadt", which has preserved the Pomeranian heritage of the region in Mequon, Wisconsin, since 1978. Its activities include organising the "Pommerntag" (Pomeranian Day) with music, traditional Pomeranian dance, German beer and food, as well as guidance on family research.

Illustration on which, according to the newsletter of the Pommerscher Verein Freistadt (Dec. 2021, p. 6), the dance costumes of the Pommersche Tanzdeel Freistadt are based. Source: "Deutsche Volkstrachten. Eine Sammlung deutscher Trachtenbilder", Köln: Haus Neuerburg G.M.B.H., c. 1930.

Where Do We Find Stettin?

In the publication, the memory culture of the Town of Stettin refers mostly to the arrival of white settlers in the region; the history of the immigrants before their arrival is of no importance. The Pommersche Verein Freistadt supplies this missing part. As is characteristic of memory cultures, the traditions refer to Pomerania in the mid to late 19th century, yet the practices are anchored in the present and shaped and changed by it. A social study of current customs compared with the surviving archival material from that period would, to my mind, make a fascinating comparison. Does the association create a kind of time capsule of the 19th century? And how strongly are its practices influenced by processes of nation-state formation, German, Polish and US-American alike?

If one wants to get to know Stettin not in the mid-19th century but at the beginning of the 20th century, another time capsule can be found outside Poland: in Lübeck, a city to which many of the expellees moved after the Second World War. In 1984 the Heimatkreis Stettin, a member of the Pommersche Landsmannschaft, founded the "Haus Stettin", which today sees itself as a place of remembrance, a meeting place, an archive and a museum. The documents and objects collected there deal exclusively with Stettin before 1945; the photographs focus on the period from the beginning of the 20th century to the outbreak of the Second World War. Many of the association's members have devoted themselves to disseminating memories of the German city of Stettin, and some can even report on the city before 1945 as eyewitnesses. To what extent are these different memory cultures comparable? What parallels and differences can be drawn between the Pommersche Verein Freistadt, the commemoration of the Town of Stettin in the broadest sense, and the Heimatkreis Stettin?

Postcard of the Hakenterrasse in Stettin, c. 1913. This and similar images make up a large part of the collection of the "Haus Stettin" in Lübeck. Source: Sedina.pl

Without metadata that have already been assigned, such research questions would be inaccessible to me; they are a decisive part of uncovering such resources in the first place. At the same time, it is precisely the engagement with the place name "Stettin" that raises a whole range of problems concerning the assignment of metadata. Should all the places discussed here be treated as separate entities? How can such entangled references to one another be represented, and where are the limits of what can be modelled? Which references are, for instance, unwanted by certain groups?

In short: metadata are indispensable because they provide structure, context and accessibility for data that would otherwise be unmanageable and hard to use in their sheer abundance. While one can trace how terms (and their inhabitants) "migrate", it remains a challenge to capture their nuances of meaning and their contexts.


Title picture: Screenshot of the route planning for a flight from Szczecin, Poland, to Stettin Town Hall, Wisconsin, USA. Google Maps, 13 August 2024.


Tabitha Redepenning is a doctoral candidate and research associate in the Herder Institute for Historical Research on East Central Europe project "Urban Authenticity in Szczecin". She is also a former fellow of the Leibniz Research Alliance "Wert der Vergangenheit" ("Value of the Past").