
Three major ways in which temporal asymmetries enter into scientific induction are discussed. First, an account is given of the physical basis for the temporal asymmetry of recordability, which obtains in the following sense: except for humanly recorded predictions and one other class of advance indicators to be discussed, interacting systems can contain reliable indicators only of their past interactions, not of their future ones. To deal with the exceptional cases of non-spontaneous "pre-records," a clarification is offered of the essential differences between the conditions requisite to the production of an indicator having retrodictive significance (a "post-record"), on the one hand, and of one having predictive significance (a "pre-record," or recorded prediction), on the other. Purported counter-examples to the asymmetry of spontaneous recordability are refuted.

Anyone today with even a slight interest in the methodology of science will be aware of the heated debate that has raged over the thesis of the logical symmetry between explanation and prediction, a thesis entailed by the hypothetico-deductive account of scientific theory. The symmetry thesis, which received its classical exposition in a well-known article by Hempel and Oppenheim, has been subject to steadily growing criticism from several eminent thinkers. My aim in this paper is to argue that the current reaction to the Hempel-Oppenheim position, as exemplified by Professors Hanson, Scriven, et al., represents a retrograde movement in the philosophy of science which, as the number of its converts grows, could undermine the progress that has been accomplished in the broad tradition extending from Mach and Poincaré through the Vienna Circle to the present day. Specifically, it is my contention that this reaction is symptomatic of a contemporary trend toward irrationalism which, of all branches of philosophy, the philosophy of science should seek to dispel rather than foster. For the sake of brevity, I propose to focus my remarks wholly on Hanson's original, widely read article on the subject. That I single out this particular source should not be taken to belie the admiration I hold for its author's masterful scholarship.
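For orientation, the Hempel-Oppenheim schema that underwrites the symmetry thesis can be rendered as below. This is the standard deductive-nomological ("covering-law") form from their 1948 article, supplied here as an illustration rather than taken from the paper under discussion.

```latex
% Deductive-nomological schema (Hempel & Oppenheim, 1948):
% C_1,...,C_k state antecedent conditions, L_1,...,L_r state general laws,
% and E describes the event to be explained or predicted.
\[
  \underbrace{C_1,\; C_2,\; \ldots,\; C_k}_{\text{antecedent conditions}}
  \quad
  \underbrace{L_1,\; L_2,\; \ldots,\; L_r}_{\text{general laws}}
  \;\;\vdash\;\; E
\]
% Symmetry thesis: the very same derivation functions as an explanation of E
% when E is already known to have occurred, and as a prediction of E when the
% derivation is produced before E is observed.
```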

Predictive risk modelling is a computational method used to generate probabilities correlating events. The output of such systems is typically represented by a statistical score derived from various related, and often arbitrary, datasets. In many cases, the information generated by such systems is treated as a form of evidence to justify further action. This paper examines the nature of that information and compares it with more orthodox notions of evidence found in epistemology. A specific example illustrates the issues: the New Zealand Government has proposed implementing a predictive risk modelling system that purportedly identifies children at risk of a maltreatment event before the age of five.

Timothy Williamson's conception of epistemology places a requirement on knowledge that it be explanatory; furthermore, Williamson argues that knowledge is equivalent to evidence. This approach is compared with the claim that the output of such computational systems constitutes evidence. While there may be some utility in using predictive risk modelling systems, I argue that, because no explanatory account of the output of such algorithms can be given that meets Williamson's requirements, doubt is cast on the resulting statistical scores as constituting evidence on generally accepted epistemic grounds. The algorithms employed in these systems are geared towards identifying patterns that turn out to be good correlations. Rather than providing information about specific individuals and their exposure to risk, however, a more valid explanation of a high probability score is that the particular variables related to incidents of maltreatment are simply more prevalent amongst certain subgroups of a population than amongst others. The paper concludes that any justification of the information generated by such systems is generalised and pragmatic at best, and that the application of this information to individual cases raises various ethical issues.
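To make the kind of system at issue concrete, the sketch below shows, under stated assumptions, how a purely correlational risk score can end up encoding subgroup base rates rather than anything specific to an individual. The proxy variables, base rates, and scoring rule are hypothetical illustrations and are not drawn from the New Zealand system or any other real model.

```python
# A minimal, hypothetical sketch of a correlational risk score.
# All names and rates are illustrative assumptions, not data from any real system.
import random
from collections import defaultdict

random.seed(0)

# Assumed outcome base rates per subgroup, indexed by two administrative
# proxy variables (e.g. benefit receipt, prior agency contact).
BASE_RATE = {
    (0, 0): 0.02,
    (0, 1): 0.05,
    (1, 0): 0.08,
    (1, 1): 0.20,
}

def simulate(n=50_000):
    """Generate a synthetic population whose outcomes are driven only by
    subgroup base rates, not by anything specific to the individual."""
    records = []
    for _ in range(n):
        a, b = random.randint(0, 1), random.randint(0, 1)
        outcome = 1 if random.random() < BASE_RATE[(a, b)] else 0
        records.append(((a, b), outcome))
    return records

def fit_score(records):
    """'Training' here is just estimating outcome frequency per subgroup,
    which is what a pattern-matching risk model ultimately encodes."""
    counts, hits = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        counts[group] += 1
        hits[group] += outcome
    return {g: hits[g] / counts[g] for g in counts}

def risk_score(model, individual):
    """An individual's score is the learned subgroup frequency: it reports
    where their subgroup sits in the population, not their personal exposure."""
    return model[individual]

if __name__ == "__main__":
    model = fit_score(simulate())
    for group in sorted(model):
        print(f"proxies {group}: predicted risk {model[group]:.3f} "
              f"(true subgroup base rate {BASE_RATE[group]:.3f})")
    print("score for an individual with proxies (1, 1):",
          round(risk_score(model, (1, 1)), 3))
```

Run as written, the sketch assigns identical scores to any two individuals who share the same proxy values, which is the sense in which such a score describes the prevalence of maltreatment-related variables within a subgroup rather than any particular person's situation.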
