Upcoming Masterclass on Philosophy of AI: Additional Materials
I’m offering a philosophy of AI masterclass, “Concepts in Machines”, at the University of Zürich later this month (April 23–25, 2026). In this post, I briefly sketch my intentions for this course and collect extra materials not included in the syllabus.
Sketch of Course Topic
The course explores the role of concepts in neural models. What are the candidates for conceptual representations in LLMs? Starting from this question, we turn to debates in philosophy of AI. For example, we will discuss whether neural language models, and especially LLMs, understand meaning and produce meaningful output. We will consider both philosophical contributions and publications in computer science venues.
One of the goals of the course is to ensure that philosophy of AI remains connected to the current and rapidly developing state of AI research. Hence, the course includes:
- A hands-on component: Participants will interact with the internals of neural models, including small neural language models.
- Discussions of recent findings: We will debate findings in recent AI research that bear upon philosophy but have received limited attention so far.
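To give a flavor of the hands-on component, here is a minimal sketch of the vector-space view of word meaning that such exercises build on: distributional word vectors derived from co-occurrence counts, inspected with plain NumPy. The toy corpus and the one-token context window are illustrative assumptions, not the actual course materials.

```python
import numpy as np

# Toy corpus and vocabulary (illustrative, not the course data).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Word-by-word co-occurrence matrix with a +/-1 token window:
# each row is a word's "meaning vector" under the distributional view.
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            C[idx[w], idx[corpus[j]]] += 1

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words that occur in similar contexts get similar vectors:
# "cat" and "dog" share their contexts here, "cat" and "mat" only partly.
print(cosine(C[idx["cat"]], C[idx["dog"]]))  # identical contexts -> 1.0
print(cosine(C[idx["cat"]], C[idx["mat"]]))  # partial overlap -> lower
```

In the masterclass, the same kind of inspection is applied to the learned internal representations of small neural language models rather than to raw counts.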
Additional Readings
Since the masterclass lasts only three days, we cannot cover every relevant contribution. Below I list additional readings on the core question of how LLMs capture meaning and concepts.
Vector Semantics
The Cognitive Alignment of LLMs
- The Neural Architecture of Language: Integrative Modeling Converges on Predictive Processing
- Large Language Models Are Human-Like Internally
- Cognitive Modeling Using Artificial Intelligence
- Mission: Impossible Language Models
- See also the list I compiled previously, although it is in need of an update.
More on the Grounding Problem for LLMs
- Does Thought Require Sensory Grounding? From Pure Thinkers to Large Language Models
- Pragmatic Norms Are All You Need – Why The Symbol Grounding Problem Does Not Apply to LLMs
- Symbol Ungrounding: What the Successes (and Failures) of Large Language Models Reveal About Human Cognition
- See also the list I compiled previously.
Additional Talks
The seminar of the DIC (Doctorat en informatique cognitive) at the Université du Québec à Montréal has hosted several excellent talks relevant to the masterclass. I recommend at least the following:
- Chris Potts: Meaning in Large Language Models: Bridging Formal Semantics, Pragmatics, and Learned Representations
- Dimitri Coelho Mollo: Grounding in Large Language Models: lessons for building functional ontologies for AI
- Raphael Millière: Mechanistic Explanation in Deep Learning
There are many more excellent talks on the website!
Note: I used an LLM to copy-edit this post.