Understanding from models
ABSTRACTS
Alisa Bokulich
Understanding from Models: On the Eikonic Conception of Explanation
As an alternative to the ontic conception of explanation, according to which explanations are "full-bodied things in the world," I defend an 'eikonic' conception of scientific explanation, according to which explanations are an epistemic activity involving representations of the phenomena to be explained. What is explained, in the first instance, is not the phenomenon in the world itself, but a particular representation of that phenomenon, contextualized within a particular research program and explanatory project. Different model-representations of the explanandum-phenomenon allow scientists in different research contexts to better understand and explain the phenomenon of interest. This plurality of models or representations in science is not a weakness, but rather a strength. I argue that in some explanatory contexts, the most 'veridical' representation of a phenomenon is neither required nor even always the best for advancing understanding. Drawing on two familiar case studies, I show how the eikonic conception of explanation is better able to make sense of the understanding we derive from nonveridical models.
Sorin Bangu
On the (non)Factivity of Scientific Understanding: The Argument from Idealizations
My primary intention in this paper is to highlight several subtleties overlooked by both parties in the debate on the factivity of scientific understanding. Although I am sympathetic to non-factivism (against authors like J. Kvanvig), I will be critical of the way in which the position is currently defended. More concretely, I find C. Elgin's non-factivist argument from idealizations (focusing on the ideal gas model) rather unclear and weak, so I suggest a way to fix and strengthen it. Time permitting, I shall also critically discuss some ideas by M. Strevens that are, I take it, friendly to non-factivism.
Daniel Kostic
Minimal structure explanation and explanatory depth
I argue that topological explanation has a minimal structure, in which merely grasping the mathematical dependency between the topological properties of a mathematical representation of a system delivers an understanding of the properties (behaviours) of that system that we want to explain. A contrasting case, with a more complex structure, is an explanation with an argument structure, in which the relation between explanans and explanandum is mediated not only by knowledge of each of the premises in the argument, but also by knowledge of the rules of inference and the order of derivation.
Explanations with minimal structure provide greater explanatory depth in virtue of:
- being more general,
- being more abstract, and
- involving high-level, non-intuitive mathematical reasoning.
Samuel Schindler
Explanation and understanding in models via structural necessitation
Traditionally, it has been held that explanantia have to be true in order to explain an explanandum. More recently, philosophers have argued that this requirement can be weakened, at least when it comes to understanding phenomena through scientific models. Others, who might be called explanatory liberalists, have argued that the requirement can be given up even for genuine explanations. In this talk I defend explanatory liberalism by pointing out that an important element of model explanations is a model's structural necessitation of the regularities the model represents: it gives us an understanding of why the regularities have to happen in the way they do.