Cognitive Modeling Can Synthesize Newell’s and Platt’s Approaches to Advancing Science

Jarren Nylund
Jun 5, 2023


IMAGE (LEFT): Allen Newell. IMAGE (RIGHT): Title: John R. Platt. © Time Inc. / Life Magazine / Francis Miller.

Two contrasting approaches to advancing scientific progress have been put forward by Allen Newell (1973) and John R. Platt (1964). In this article, I will briefly describe Newell’s (1973) approach, which focuses on proposing theoretical dichotomies and on the coordination and amalgamation of scientific endeavours. I will then briefly describe Platt’s (1964) “strong inference” approach, which focuses on the efforts of individual scientists and the importance of their proposing competing hypotheses. While these two approaches might seem at odds with each other, I will argue that they can be complementary when synthesized through the process of cognitive modeling, using Guest and Martin’s (2021) path-function model as a guide. Synthesized in this way, they form a strong method for advancing theory-building in psychological science.

Newell’s Approach

Newell’s (1973) approach to advancing scientific progress emphasizes the importance of proposing theoretical dichotomies — such as nature versus nurture (other examples can be seen in Figure 1) — with the goal of building more detailed theoretical frameworks. Newell’s manuscript was written in response to the experimental psychology research presented at a symposium. On the one hand, he was pleased with the quality of the work; on the other, he lamented its lack of cohesion, describing the papers as disparate pieces of research that could not be brought together into a cohesive whole. He argued that it is extremely difficult to know the extent to which the results of the individual studies contradict or are compatible with one another. This is why the paper was titled “You can’t play 20 questions with nature and win”: Newell wanted to emphasize the futility of scientific endeavours that endlessly pose separate questions whose results cannot be combined into a more holistic view that increases in clarity over time.

In response to the suggestion that endlessly posing separate questions is how other sciences have progressed, Newell (1973) argues that psychology may need more advanced methods than other sciences due to the complexity of its human subjects, and the potential for them to use radically different ways of completing a task. That is, he asserts that psychological science needs to constrain what other methods might also be employed by subjects to complete the same task, and to go beyond simple flow-diagram models to embrace what he calls modeling the “control structure” (e.g., using programming languages). To address these concerns, Newell suggests that scientists could collaborate to develop a set of experimental studies focused on a single complex task, so that a model of the phenomena can be developed, or alternatively design a single model that can account for many tasks. As such, Newell embraces the idea that science is the result of the collective efforts of scientists, and that its advancement is achieved through coordination and innovations in modeling.

Platt’s “Strong Inference” Approach

Platt’s (1964) “strong inference” approach to advancing scientific progress emphasizes the importance of formulating and testing multiple hypotheses, with the goal of arriving at a decisive explanation. He advocates addressing scientific problems by formulating competing hypotheses, then designing crucial experiments to test them, which will ideally rule out one or more of those hypotheses, and then restarting the procedure by devising subsequent hypotheses that refine the remaining potential explanations. He advocates that scientists use “logical trees” (like the example shown in Figure 2) to illustrate this process.

Platt (1964) argues that contemporary scientists have almost forgotten this method of conducting science, despite it being “the method of science and always has been” (p. 347), and that if only more scientists adopted it, more significant scientific progress could be made. As such, Platt valorizes the idea of individual scientists making significant scientific advances on their own simply by following his “strong inference” approach.

However, Platt (1964) acknowledges that his method is simply the kind of inductive, inference-focused empirical science proposed by Bacon (1620/2015), and that it is based on Popper’s (1959) theory of falsification. Yet these foundational perspectives of Platt’s (1964) “strong inference” method suffer from significant philosophical criticisms (e.g., Godfrey-Smith, 2003; Ladyman, 2002; Musgrave, 1973), and I would argue that they oversimplify science by portraying it in binary terms, disregarding its nuanced complexities. For example, the Duhem-Quine thesis posits that testing an idea in isolation is impossible due to its interdependence with a holistic understanding of the world (DeWitt, 2018). Consequently, a test of any single idea effectively becomes a test of an extensive conjunction of ideas. If such a test yields a negative result, this may not imply a flaw in the specific idea being tested, as the failure could originate from any part of that conjunction. Despite these problems, I believe that Platt (1964) has outlined a useful approach, and I will argue that it can become much stronger when synthesized with Newell’s (1973) approach using cognitive modeling, which I will discuss next.

Synthesized Using Cognitive Modeling

While Platt’s (1964) and Newell’s (1973) approaches to advancing science might superficially seem to contradict one another, they can also be seen as complementary when synthesized using cognitive modeling, within a more holistic view of the psychological scientific process. To help demonstrate this, I will use Guest and Martin’s (2021) path-function model (shown in Figure 3). It is one way of describing how the psychological research process works, and its levels can be used to characterize any research output in psychological science.

Newell’s (1973) approach to advancing science is primarily concerned with the top four levels of Guest and Martin’s (2021) path-function model: framework, theory, specification, and implementation. Platt’s (1964) approach, by contrast, is primarily concerned with the bottom two levels: hypothesis and data. Specification and implementation are the levels of cognitive modeling, and they provide a way of synthesizing these two perspectives.

The framework, at its simplest, is the context — a way to understand the ideas behind a theory (Guest & Martin, 2021). A theory is a scientific idea that explains or predicts how phenomena in the world are causally related. The theory level is argued to be often missing from psychological science research. Newell’s (1973) approach of exploring dichotomous possibilities operates at these levels of framework and theory, and serves to provide a theoretical basis from which scientific exploration can proceed.

Next, we descend through the levels of cognitive modeling: specification and implementation (Guest & Martin, 2021). These levels are akin to what Newell (1973) called modeling the “control structure”. The specification level is about finding a plausible mechanism and defining in detail how the theory works (Guest & Martin, 2021). A specification is a formal system description based on a theory, which helps distinguish between auxiliary assumptions that are theory-relevant and those that are theory-irrelevant (Cooper & Guest, 2014; Lakatos, 1976). Specifications provide a way to check whether a cognitive model represents the theory, and, by constraining the space of cognitive models, a way to create a model of the theory even when the theory itself is unclear (Guest & Martin, 2021). Without specifications, debugging implementations and testing theories is impossible (Cooper et al., 1996; Cooper & Guest, 2014; Miłkowski et al., 2018), because it is the process of specifying a theory mathematically that allows us to define the precise structure of theories and incrementally increase their accuracy over time (Navarro, 2021).

An implementation is a real-world realization of a model, created using anything from physical materials to software (Guest & Martin, 2021). If the implementation of a cognitive model does not match its specification, this raises serious questions about the theory and its accuracy (Cooper & Guest, 2014). Such a mismatch needs to be addressed by going back to the previous level and adjusting until the theory is accurately represented by the code (Guest & Martin, 2021). Cognitive models provide a concrete representation of how cognition might function, allowing for rigorous examination and testing of underlying assumptions (Farrell & Lewandowsky, 2015). By embodying these assumptions in a precise and unambiguous manner, cognitive models offer a valuable means of evaluating and scrutinizing the theoretical frameworks that guide our understanding of cognitive processes.
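To make the specification–implementation distinction concrete, here is a minimal sketch using an invented toy “theory” of forgetting (exponential decay of retention); the equation, functions, and tolerance are all illustrative assumptions, not any model from the cited literature. The specification is the closed-form equation; the implementation is a step-by-step simulation, and checking one against the other is what catches mismatches.

```python
import math

# Specification: a formal statement of the toy theory,
# retention R(t) = exp(-t / s) for decay scale s.
def specification(t, s=1.0):
    """Closed-form specification of the toy forgetting theory."""
    return math.exp(-t / s)

# Implementation: a discrete simulation of the same process,
# Euler integration of dR/dt = -R / s.
def implementation(t, s=1.0, dt=0.001):
    """Step-by-step realization of the toy theory in code."""
    r, elapsed = 1.0, 0.0
    while elapsed < t:
        r -= (r / s) * dt
        elapsed += dt
    return r

# Check the implementation against the specification: a large
# mismatch would signal a bug in the code, or a problem with
# how the theory was specified.
for t in (0.5, 1.0, 2.0):
    assert abs(specification(t) - implementation(t)) < 0.01
```

If the assertion fails, the fix is exactly the process described above: return to the previous level and adjust until the code accurately represents the specified theory.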

The creation of cognitive models then allows for research that is more precisely communicable and shareable with other researchers, and therefore more testable and falsifiable (Farrell & Lewandowsky, 2015). These models make possible Newell’s (1973) suggestion that scientists design a series of experimental studies around a complex task, since cognitive models can be shared between researchers in an unambiguous way. Newell’s vision of a single model that can account for many tasks was further developed by Newell (1990) under the name “cognitive architecture”: effectively a collection of cognitive models that can be brought together to form a larger system. Similarly, Newell’s (1973) vision that science is advanced through coordination is further supported by cognitive models in the way that they allow for what Guest and Martin (2021) call “open theory”.

The hypothesis level is about creating specific and testable statements (Guest & Martin, 2021). Going through the process of theory, specification, and the implementation of cognitive models constrains the set of hypotheses that can be tested. And because the step of specification helps determine the most theory-relevant assumptions, it enables Platt’s (1964) competing hypotheses to be based on those assumptions most relevant to the dichotomous theoretical possibilities suggested by Newell (1973). This would also help establish a strong logical link between theory and hypotheses, the lack of which has been identified as a cause of the “replication crisis” in psychology (Oberauer & Lewandowsky, 2019).

Once the hypotheses are determined, it is time to design what Platt (1964) calls crucial experiments to test them against data. The data level concerns information that we collect from the world or from a cognitive model (Guest & Martin, 2021). But because our interpretation of data is always influenced by the ideas we hold about the world, we cannot understand data without those ideas (i.e., the theory-ladenness of observation; Feyerabend, 1957; Kuhn, 1962; Lakatos, 1976). And due to the Duhem-Quine thesis and other philosophical problems (DeWitt, 2018), contrary to what Platt (1964) argued, when a hypothesis is not supported by the data we can only reject it with some level of confidence (Guest & Martin, 2021). Bayesian models can then be used to determine a measure of the relative confidence we can have in hypotheses given the supporting evidence (Gershman, 2019). However, theories cannot be rejected with as much confidence as hypotheses (Guest & Martin, 2021), nor should they be accepted with as much confidence (Meehl, 1967). To address these challenges, and to establish the connection between data, hypotheses, and theory, we need to contextualize the findings using cognitive modeling (Guest & Martin, 2021). Any inconsistencies between data and hypotheses need to be addressed by asking how the theory needs to change.
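The graded, rather than binary, verdict described above can be illustrated with Bayes’ rule. This is a minimal sketch only: the priors and likelihoods below are invented numbers for illustration, not any analysis from Gershman (2019).

```python
# A minimal sketch of assigning relative confidence to competing
# hypotheses via Bayes' rule, instead of rejecting one outright.

def posterior(priors, likelihoods):
    """Normalize prior * likelihood into posterior probabilities."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two competing hypotheses, equally plausible a priori.
priors = {"H1": 0.5, "H2": 0.5}
# Hypothetical likelihoods: the observed data are four times more
# probable under H1 than under H2.
likelihoods = {"H1": 0.8, "H2": 0.2}

print(posterior(priors, likelihoods))  # {'H1': 0.8, 'H2': 0.2}
```

Note that H2 is not eliminated, as it would be in a strict falsificationist reading of strong inference; it simply carries less of our confidence, which can be revised again as further data arrive.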

In summary, the approaches of Newell (1973) and Platt (1964) may superficially appear to be in tension, but they can also be seen as complementary when synthesized using cognitive modeling. Newell’s (1973) approach provides a broader framework based on dichotomous possibilities, which can be precisely specified and implemented using cognitive modeling (Guest & Martin, 2021). These models can then serve as the theoretical basis of Platt’s (1964) “strong inference” approach of testing competing hypotheses, which can help explain the relationship between those possibilities. This would allow researchers to explore multiple hypotheses in a structured and systematic way, while also testing their implications in a broader theoretical context. The combination of these approaches creates a strong method for advancing theory development in psychological science, providing a solid foundation for theoretical progress by narrowing the range of plausible explanations and focusing attention on the most promising candidate hypotheses.

References

Bacon, F. (2015). The new organon and related writings. Martino Fine Books. (Original work published 1620)

Cooper, R. P., Fox, J., Farringdon, J., & Shallice, T. (1996). A systematic methodology for cognitive modelling. Artificial Intelligence, 85(1), 3–44. https://doi.org/10.1016/0004-3702(95)00112-3

Cooper, R. P., & Guest, O. (2014). Implementations are not specifications: Specification, replication and experimentation in computational cognitive modeling. Cognitive Systems Research, 27, 42–49. https://doi.org/10.1016/j.cogsys.2013.05.001

DeWitt, R. (2018). Worldviews: An introduction to the history and philosophy of science (3rd ed.). Wiley Blackwell.

Farrell, S., & Lewandowsky, S. (2015). An introduction to cognitive modeling. In B. U. Forstmann, & E.-J. Wagenmakers (Eds.). An introduction to model-based cognitive neuroscience (pp. 3–24). Springer. https://doi.org/10.1007/978-1-4939-2236-9_1

Feyerabend, P. K. (1957). An attempt at a realistic interpretation of experience. Proceedings of the Aristotelian Society, 58, 143–170. https://www.jstor.org/stable/4544593

Gershman, S. J. (2019). How to never be wrong. Psychonomic Bulletin & Review, 26(1), 13–28. https://doi.org/10.3758/s13423-018-1488-8

Godfrey-Smith, P. (2003). Theory and reality: An introduction to the philosophy of science. The University of Chicago Press.

Guest, O., & Martin, A. E. (2021). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science, 16(4), 789–802. https://doi.org/10.1177/1745691620970585

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Ladyman, J. (2002). Understanding philosophy of science. Routledge.

Lakatos, I. (1976). Falsification and the methodology of scientific research programmes. In S. G. Harding (Ed.), Can theories be refuted? (pp. 205–259). Springer.

Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103–115. https://doi.org/10.1086/288135

Miłkowski, M., Hensel, W. M., & Hohol, M. (2018). Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail. Journal of Computational Neuroscience, 45(3), 163–172. https://doi.org/10.1007/s10827-018-0702-z

Musgrave, A. E. (1973). Falsification and its critics. Studies in Logic and the Foundations of Mathematics, 74, 393–406. https://doi.org/10.1016/S0049-237X(09)70374-X

Navarro, D. J. (2021). If mathematical psychology did not exist we might need to invent it: A comment on theory building in psychology. Perspectives on Psychological Science, 16(4), 707–716. https://doi.org/10.1177/1745691620974769

Newell, A. (1973). You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual information processing: Proceedings of the eighth annual Carnegie symposium on cognition, held at the Carnegie-Mellon University, Pittsburgh, Pennsylvania, May 19, 1972 (pp. 283–305). Academic Press. http://bit.ly/40AmOZQ

Newell, A. (1990). Unified theories of cognition. Harvard University Press.

Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596–1618. https://doi.org/10.3758/s13423-019-01645-2

Platt, J. R. (1964). Strong inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others. Science (American Association for the Advancement of Science), 146(3642), 347–353. https://doi.org/10.1126/science.146.3642.347

Popper, K. (1959). The logic of scientific discovery. Basic Books.
