Diego Calvanese
The Actual Weight of Lightweight Description Logics
Since the mid-2000s, when the theoretical foundations of lightweight description logics (DLs) were established, the EL and DL-Lite families have become central to both foundational and applied research in the field. DL-Lite was designed with the aim of achieving FO-rewritability of ontology-mediated query answering, which ensures the same data complexity as plain query evaluation and enables efficient query processing over large data sources. EL, on the other hand, supports consequence-based reasoning with polynomial-time complexity, a feature that has proven essential for handling large-scale ontologies such as SNOMED CT. Both families have triggered extensive investigations of the trade-off between expressive power and complexity of inference across a wide range of reasoning tasks beyond satisfiability and query answering, profoundly shaping the DL research landscape over the past two decades. Their impact extends far beyond theory: EL and DL-Lite underpin the EL and QL Profiles of OWL 2, respectively, and form the backbone of biomedical reasoning as well as ontology-based data access and integration in a variety of application domains. In this talk, we revisit the theoretical foundations of these logics, examine their role in defining the tractable fragments of DLs, and discuss how their principles continue to drive research in both theory and practice.
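The following small worked example, with an invented axiom and query not taken from the talk, is meant only to illustrate what FO-rewritability means in practice.

```latex
% Hypothetical DL-Lite example (axiom and query invented for illustration).
% TBox axiom: anything that teaches something is a Professor.
\exists \mathit{teaches} \sqsubseteq \mathit{Professor}
% Ontology-mediated query:
q(x) \leftarrow \mathit{Professor}(x)
% Perfect (FO) rewriting: the axiom is compiled into the query, which can then
% be evaluated directly over the data, e.g. by a relational engine, with the
% same data complexity as plain first-order query evaluation:
q_{\mathrm{rew}}(x) \leftarrow \mathit{Professor}(x) \,\lor\, \exists y.\,\mathit{teaches}(x,y)
```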
Jerzy Marcinkowski
What It’s Like to Be a Database Theorist in the Land of Multisets
My original plan for this lecture was to talk about attempts to solve two fundamental decision problems in database theory — namely the Query Determinacy Problem and the Query Containment Problem — in the setting where one considers multiset semantics instead of set semantics. But since I only have one hour, rather than, say, four hours, I think I will focus exclusively, or almost exclusively, on the multiset version of the Query Containment Problem. People have been working on this problem for 35 years now. Hundreds of research person-years have been spent on it. Given that level of effort, surprisingly few papers have been published. If an academic's success is measured by the number of publications, this problem is certainly not worth working on! But this research has produced a few beautiful mathematical anecdotes, and I will try to share some of them with you. I won't say anything, or almost anything, about the multiset version of the Query Determinacy Problem. Which I regret, because it was there that I passed through a door from set-semantics-based database theory into the land of multisets.
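For readers less familiar with the problem, here is a standard textbook-style illustration, not taken from the abstract, of how multiset semantics changes containment.

```latex
% Two conjunctive queries over a binary relation R (illustrative example).
q_1(x) \leftarrow R(x,y) \qquad\qquad q_2(x) \leftarrow R(x,y),\, R(x,z)
% Under set semantics the two queries are equivalent, so in particular
% q_2 \subseteq q_1. Under multiset (bag) semantics, on D = \{R(a,b), R(a,c)\}
% the answer a has multiplicity 2 in q_1(D) but 4 in q_2(D), so
% q_2 \not\subseteq_{\mathrm{bag}} q_1.
```

Deciding such bag containments for arbitrary conjunctive queries is, to the best of my knowledge, the long-standing open problem the lecture refers to.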
Ana Ozaki
On Knowledge Base Embeddings
Embeddings of knowledge bases (KBs) emerged as a way of “softening” their classical crisp representation. The idea is to represent KBs in vector spaces via an optimization procedure that encourages the axioms in the KB to hold. To illustrate, consider a KB that keeps records of politicians. There may be crisp rules, such as the condition that politicians need to be lawful citizens to be elected, but also data patterns indicating that politicians are commonly senior male natives. By including facts and rules from the KB in the optimization procedure, the resulting vector representation can accommodate both “soft” inferences based on data patterns and rule-based inferences, all within the vector representation. In this talk, I will discuss opportunities and challenges of representing the semantics of KBs in vector spaces, focusing on geometric embedding methods. I will also shed light on certain aspects of data analysis, such as bias and fairness, and relate KB embeddings to approximate reasoning and query answering.
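As a purely illustrative sketch of the optimization idea, the following minimal Python fragment, with invented entity and relation names, scores KB facts as translations in a vector space and defines a margin loss that encourages true facts to score better than corrupted ones. The geometric methods the talk focuses on represent concepts as regions (e.g., boxes) rather than points, but the idea of turning axioms into an optimization objective is analogous.

```python
# Minimal, illustrative sketch (not from the talk) of embedding KB facts:
# a translational model scores a fact r(h, t) by how well e_h + e_r ≈ e_t.
# All entity/relation names and hyperparameters below are invented.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {"anna": 0, "ben": 1, "norway": 2}
relations = {"citizenOf": 0}

E = rng.normal(size=(len(entities), dim))   # one vector per entity
R = rng.normal(size=(len(relations), dim))  # one vector per relation

def score(h, r, t):
    """Distance between translated head and tail; lower means 'more true'."""
    return np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

def margin_loss(pos, neg, margin=1.0):
    """Push a KB fact to score better than a corrupted (false) fact."""
    return max(0.0, margin + score(*pos) - score(*neg))

# Gradient-based training would minimize this loss over all KB facts, so that
# both explicit facts and statistical patterns in the data shape the geometry.
print(margin_loss(("anna", "citizenOf", "norway"),
                  ("ben", "citizenOf", "norway")))
```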