
[Footnote 1: Jevons: _Principles of Science_, p. 270.]

The history of science exhibits a constant progress from rude guesses to precise measurement of quantities. In the earliest history of astronomy there were attempts at quantitative determinations, very crude, of course, in comparison with the exactness of present-day scientific methods.

Every branch of knowledge commences with quantitative notions of a very rude character. After we have far progressed, it is often amusing to look back into the infancy of the science, and contrast present with past methods. At Greenwich Observatory in the present day, the hundredth part of a second is not thought an inconsiderable portion of time. The ancient Chaldeans recorded an eclipse to the nearest hour, and the early Alexandrian astronomers thought it superfluous to distinguish between the edge and center of the sun.

By the introduction of the astrolabe, Ptolemy and the later Alexandrian astronomers could determine the places of the heavenly bodies within about ten minutes of arc. Little progress then ensued for thirteen centuries, until Tycho Brahe made the first great step toward accuracy, not only by employing better instruments, but even more by ceasing to regard an instrument as correct.... He also took notice of the effects of atmospheric refraction, and succeeded in attaining an accuracy often sixty times as great as that of Ptolemy. Yet Tycho and Hevelius often erred several minutes in the determination of a star's place, and it was a great achievement of Roemer and Flamsteed to reduce this error to seconds. Bradley, the modern Hipparchus, carried on the improvement, his errors in right ascension, according to Bessel, being under one second of time, and those of declination under four seconds of arc. In the present day the average error of a single observation is probably reduced to the half or the quarter of what it was in Bradley's time; and further extreme accuracy is attained by the multiplication of observations, and their skillful combination according to the theory of error. Some of the more important constants... have been determined within a tenth part of a second of space.[2]

[Footnote 2: _Ibid._, pp. 271-72.]

The precise measurement of quantities is important because we can, in the first place, only through quantitative determinations be sure we have made accurate observations, observations uncolored by personal idiosyncrasies. Both errors of observation and errors of judgment are checked up and averted by exact quantitative measurements. The relations of phenomena, moreover, are so complex that specific causes and effects can only be understood when they are given precise quantitative determination. In investigating the solubility of salts, for example, we find variability depending on differences in temperature, pressure, the presence of other salts already dissolved, and the like. The solubility of salt in water differs again from its solubility in alcohol, ether, or carbon bisulphide. Generalization about the solubility of salt, therefore, depends on the exact measurement of the phenomenon under all these conditions.[1]

[Footnote 1: See Jevons, p. 279 ff.]

The importance of exact measurement in scientific discovery and generalization may be illustrated briefly from one instance in the history of chemistry. The discovery of the chemical element _argon_ came about through some exact measurements by Lord Rayleigh and Sir William Ramsay of the nitrogen and the oxygen in a glass flask. It was found that the nitrogen derived from air was not altogether pure; that is, there were very minute differences in the weighings of nitrogen made from certain of its compounds and the weight obtained by removing oxygen, water, traces of carbonic acid, and other impurities from the atmospheric air. It was found that the very slightly heavier weight in one case was caused by the presence of argon (about one and one third times as heavy as nitrogen) and some other elementary gases. The discovery was here clearly due to the accurate measurement which made possible the discovery of this minute discrepancy.

It must be noted in general that accuracy in measurement is immediately dependent on the instruments of precision available. It has frequently been pointed out that the Greeks, although incomparably fresh, fertile, and direct in their thinking, yet made such a comparatively slender contribution to scientific knowledge precisely because they had no instruments for exact measurement. The thermometer made possible the science of heat. The use of the balance has been in large part responsible for advances in chemistry.

The degree to which sciences have attained quantitative accuracy varies among the physical sciences. The phenomena of light are not yet subject to accurate measurement; many natural phenomena have not yet been made the subject of measurement at all. Such are the intensity of sound, the phenomena of taste and smell, the magnitude of atoms, the temperature of the electric spark or of the sun's atmosphere.[1]

[Footnote 1: See Jevons, p. 273.]

The sciences tend, in general, to become more and more quantitative. All phenomena "exist in space and involve molecular movements, measurable in velocity and extent."

The ideal of all sciences is thus to reduce all phenomena to measurements of mass and motion. This ideal is obviously far from being attained. Especially in the social sciences are quantitative measurements difficult, and in these sciences we must therefore remain at best in the region of shrewd guesses or fairly reliable probability.

STATISTICS AND PROBABILITY. While in the social sciences, exact quantitative measurements are difficult, they are to an extent possible, and to the extent that they are possible we can arrive at fairly accurate generalizations as to the probable occurrence of phenomena. There are many phenomena where the elements are so complex that they cannot be analyzed and invariable causal relations established.

In a study of the phenomena of the weather, for example, the phenomena are so exceedingly complex that anything approaching a complete statement of their elements is quite out of the question.

The fallibility of most popular generalizations in these fields is evidence of the difficulty of dealing with such facts. Must we be content then simply to guess at such phenomena? ... In instances of this sort, another method ... becomes important: The Method of Statistics. In statistics we have an _exact_ enumeration of cases. If a small number of cases does not enable us to detect the causal relations of a phenomenon, it sometimes happens that a large number, accurately counted, and taken from a field widely extended in time and space, will lead to a solution of the problem.[1]

[Footnote 1: Jones: _Logic, Inductive and Deductive_, p. 190.]

If we find, in a wide variety of instances, two phenomena occurring in a certain constant correlation, we infer a causal relation. If the variations in the frequency of one correspond to variations in the frequency of the other, the connection is probably more than mere coincidence.

The correlation between phenomena may be measured mathematically; it is possible to express in figures the exact relations between the occurrence of one phenomenon and the occurrence of another. The number which expresses this relation is called the coefficient of correlation. This coefficient expresses relationship in terms of the mean values of the two series of phenomena by measuring the amount each individual phenomenon varies from its respective mean. Suppose, for example, that in correlating crime and unemployment, the coefficient of correlation were found to be .47. If in every case of unemployment crime were found and in every case of crime, unemployment, the coefficient of correlation would be +1. If crime were never found in unemployment, and unemployment never in crime, the coefficient of correlation would be -1, indicating a perfect inverse relationship.

A coefficient of 0 would indicate that there is no relationship.

The coefficient of .47 would accordingly indicate a significant but not a "high" correlation between crime and unemployment.
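The coefficient described above, relationship measured by the deviation of each item from its series mean, is what is now known as the Pearson coefficient of correlation. A minimal sketch of its computation (the series of figures here are invented for illustration):

```python
import math

def coefficient_of_correlation(xs, ys):
    """Pearson coefficient: the products of the deviations of each
    item from its series mean, divided by the product of the two
    series' total deviations (their standard deviations, scaled)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    dx = [x - mean_x for x in xs]          # deviations from the mean
    dy = [y - mean_y for y in ys]
    covariation = sum(a * b for a, b in zip(dx, dy))
    spread_x = math.sqrt(sum(a * a for a in dx))
    spread_y = math.sqrt(sum(b * b for b in dy))
    return covariation / (spread_x * spread_y)

# Two series that rise and fall together give +1;
# two that vary in exact opposition give -1.
print(coefficient_of_correlation([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
print(coefficient_of_correlation([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

Intermediate values, such as the .47 of the text, fall between these two extremes in proportion as the correspondence of the two series is partial rather than complete.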

We cannot consider here all the details of statistical methods, but attention may be called to a few of the more significant features of the process. Statistics is a science, and consists in much more than the mere counting of cases.

With the collection of statistical data, only the first step has been taken. The statistics in that condition are only raw material showing nothing. They are not an instrument of investigation any more than a kiln of bricks is a monument of architecture. They need to be arranged, classified, tabulated, and brought into connection with other statistics by the statistician. Then only do they become an instrument of investigation, just as a tool is nothing more than a mass of wood or metal, except in the hands of a skilled workman.[1]

[Footnote 1: Mayo-Smith: _Statistics and Sociology_, p. 18.]

The essential steps in a statistical investigation are: (1) the collection of material, (2) its tabulation, (3) the summary, and (4) a critical examination of the results. The terms are almost self-explanatory. There are, however, several general points of method to be noted.

In the collection of data a wide field must be covered, to be sure that we are dealing with invariable relations instead of with mere coincidences, "or overemphasizing the importance of one out of a number of cooperating causes." Tabulation of the data collected is very important, since classification of the data does much to suggest the causal relations sought.
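The step of tabulation, classifying raw cases under headings and counting them, may be sketched in modern terms as follows (the cases and headings here are invented for illustration, echoing the crime-and-unemployment example above):

```python
from collections import Counter

# Raw material: one record per hypothetical district, pairing its
# employment condition with its recorded crime condition.
raw_cases = [
    ("high unemployment", "crime present"),
    ("high unemployment", "crime present"),
    ("high unemployment", "crime absent"),
    ("low unemployment",  "crime present"),
    ("low unemployment",  "crime absent"),
    ("low unemployment",  "crime absent"),
]

# Tabulation: count how many cases fall under each pair of headings.
table = Counter(raw_cases)

# Summary: the tabulated totals, ready for critical examination.
for headings, count in sorted(table.items()):
    print(headings, count)
```

The kiln of bricks becomes a structure only at this point: it is the counted table, not the heap of raw cases, that suggests whether the two conditions vary together.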

The headings under which data will be collected depend on the purposes of the investigation. In general, statistics can suggest generalizations, rather than establish them. They indicate probability, not invariable relation.[2]

[Footnote 2: See Jones: _Logic_, pp. 213-25, for a discussion of Probability.]
