May 26, 2025, 7:42 p.m.
The Role and Ownership of Large Language Models in Historical Research

Large language models (LLMs) have quickly become integral to historical research. Their ability to process, annotate, and generate texts is reshaping scholarly workflows. However, historians are uniquely positioned to ask a more profound question: who owns the tools that influence our understanding of history?

Most of today's most powerful LLMs are created by private companies. Although these companies invest heavily, their objectives—centered on profit, platform expansion, or intellectual property control—often conflict with the core values of historical scholarship: transparency, reproducibility, accessibility, and cultural diversity. This situation raises serious concerns about:

a) opacity: we frequently lack clarity regarding training data and inherent biases;
b) instability: terms of access and functionalities can change without warning;
c) inequity: many researchers, especially those in under-resourced environments, are excluded.

It is imperative to develop public, open-access LLMs for the humanities, trained on carefully curated, multilingual, historically grounded corpora drawn from libraries, museums, and archives. These models need to be transparent, accountable to academic communities, and backed by public funding.

Creating such infrastructure is challenging but essential. Just as national archives or educational curricula would not be outsourced to private companies, neither should our most powerful interpretive technologies. The humanities bear both a responsibility and an opportunity to develop culturally informed, academically rigorous artificial intelligence. We must not only use LLMs responsibly but also take ownership of them responsibly. The integrity of scholarship and the future of public knowledge may depend on it.

Prof Dr Matteo Valleriani
Max Planck Institute for the History of Science, Berlin, Germany

If you have an opinion on any item you’ve read in the Guardian today, please email your letter for consideration in our letters section.



Brief news summary

Large language models (LLMs) are revolutionizing historical research by enabling advanced text analysis and generation. Yet, most prominent LLMs are created by profit-driven companies, conflicting with scholarly values such as transparency, reproducibility, accessibility, and cultural diversity. This situation leads to challenges like opaque training data, inherent biases, unstable access, and the exclusion of researchers with limited means. To overcome these obstacles, the humanities urgently need publicly funded, open-access LLMs built on carefully curated, multilingual, and historically grounded datasets from libraries, museums, and archives. Such models should emphasize transparency and accountability to academic communities. Relying on proprietary AI tools risks scholarly integrity and public knowledge preservation. Consequently, the humanities have both a responsibility and an opportunity to develop culturally sensitive, academically rigorous AI tools that encourage responsible use and stewardship of LLMs, thereby advancing historical scholarship.