Thoughts on a Life in Methods 2 – Enter Statistics

I was lucky to do my Sociology and Social Administration degree at Newcastle, where Statistics, taught in the first year by Betty Gittus, was a key part of the programme. John Kennedy taught a really good optional course in the third year, which of course I took. My engagement with stats began from a crude calculation: I was good at calculating, used to doing maths, and could get high marks. It went on to become something of a love affair with data and a liking for methods.

This was all before the ready availability of computers of any kind to undergraduates, let alone GUI (graphical user interface) statistical packages. So we had to derive formulae, e.g. for a correlation coefficient, and do the calculations by hand with the working out explained, just as I had been used to in A-level Physics. This gives you a real engagement with the methods and an understanding of them based on how they actually work. We began by using Blalock's introductory social statistics text but progressed to his Social Statistics, still for me the best basic, and more than basic, book on statistical methods and reasoning. That book did everything: it outlined the methods, explained issues of sampling, and above all dealt seriously with issues of measurement. Blalock was a genuinely empirically engaged sociologist, and that engagement runs as a red thread through everything he wrote. I will come back to his later work on causal reasoning in a subsequent post. John Kennedy went a lot further in a course taken by only a few of us, introducing ideas and methods such as Markov chains. That exposed me to a more developed level of mathematical reasoning and helped me to understand, at an advanced level, what mathematical methods were actually doing. When I did my Master's in Social Policy and Planning at the LSE, the stats element was very competently taught but really just reproduced what I had done in the first year of my undergraduate degree.
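
For readers who never did this by hand, here is a minimal sketch, my own illustration rather than anything from Blalock or the Newcastle course, of what that working out amounts to: Pearson's correlation coefficient built up from sums of deviations, which is exactly the sort of arithmetic we did with pen, paper and slide rule.

```python
# A minimal sketch of the kind of hand calculation described above:
# Pearson's correlation coefficient worked out from its defining formula
# rather than pulled from a package.

def pearson_r(x, y):
    """Correlation of two equal-length lists, computed step by step."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sums of products of deviations, just as in the worked examples
    # of an introductory statistics text.
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    return sxy / (sxx * syy) ** 0.5

# Illustrative, made-up data: five paired observations.
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
print(pearson_r(x, y))  # 0.8
```

Working through each sum yourself, rather than asking a package for "r", is what gives you the engagement with the method that I am describing.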

I will conclude this post with a reflection on changes in how Statistics has been taught to social scientists over the fifty years of my academic career; for the last thirty of those years I was usually teaching a stats course. By the time I started teaching stats, computer-based packages, originally for use on mainframe terminals but later for use on PCs, were available. At first I used MINITAB, which had many virtues, not least being a package designed for teaching, with excellent graphics for its time. Originally this required students to write scripts rather than pointing and clicking in a GUI. When I first used SPSS, basically still my go-to package, it was also script-based. Writing scripts was not quite as good a way of engaging with methods as doing it from scratch by pen, paper and slide rule, but it was a lot better than just pointing and clicking. Although I am no great fan of Stata, its requirement to write scripts is an asset, though now students just find relevant scripts on the Web or perhaps get ChatGPT to write them. R, of course, is script-based, and I can't help feeling that much of its mystique derives from that.
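
To make the contrast concrete, here is a minimal sketch of what a scripted analysis looks like, written in Python rather than actual SPSS, MINITAB or Stata syntax, with a hypothetical file and hypothetical variable names. The point is simply that every step – loading the data, choosing the variables, choosing the statistic – is written down, so it can be read, checked and rerun, which is what pointing and clicking never gives you.

```python
# A minimal sketch of a script-based workflow (my illustration, not real
# SPSS, MINITAB or Stata syntax). The file "survey.csv" and the variable
# names are hypothetical.
import csv
import statistics

# Load two variables from a CSV file into plain lists.
income, years_education = [], []
with open("survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        income.append(float(row["income"]))
        years_education.append(float(row["years_education"]))

# Each analytic choice is explicit in the script.
print("n =", len(income))
print("r =", statistics.correlation(years_education, income))  # Python 3.10+
slope, intercept = statistics.linear_regression(years_education, income)
print("predicted income =", round(intercept, 2), "+", round(slope, 2), "* years_education")
```
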

I will continue the stats theme in subsequent blogs, dealing first with my use of partial correlation as a way to elucidate causality and then with my use of cluster analysis to sort things into kinds. My next blog, however, will deal with two other approaches I encountered as an undergraduate – using administrative documents to construct narratives, and action research. I am doing a lot of blogging as it is either too windy or too icy to cycle.
