This seminar is motivated by recent, groundbreaking advances in the performance of language-processing neural networks, so-called neural language models (LMs). State-of-the-art systems such as OpenAI's GPT-2 and GPT-3, Google's T5, or Microsoft's DeBERTa can reliably answer (even complex) open questions about a given text, and can themselves generate longer, stylistically matching text on a given subject. The results have astonished experts and laypersons alike. What's more, LMs achieve state-of-the-art results across nearly all natural language processing benchmarks (such as question answering, text summarization, translation, reading comprehension, and natural language inference). Neural LMs are therefore sometimes considered the most promising technological path towards general AI. Yet even if this were true, there would still be a long way to go: the NLP community produces a continuous stream of studies that reveal LMs' limitations (and, in the same stroke, push their boundaries ever further). These developments are not only revolutionizing the fields of NLP and AI, but will also have major repercussions for how we read, write, and study texts in the humanities and social sciences. Conversely, insights from disciplines such as communication science, computational sociology, formal epistemology, and the philosophy of language may help us better assess, understand, and improve LMs. Accordingly, this seminar is jointly taught by scholars from computer science, the social sciences, and the humanities; it is open to students from the faculty of informatics as well as the faculty of social sciences and humanities.