Comparing methods of measurement: why plotting difference against standard method is misleading

My reason for jumping into stats was to directly compare two measurement methods… with multiple trials, on multiple ILDs (inter-landmark distances). I don’t really go for “funny name, lol” things, but when Bland and Borg get cited in the same stats paper (a field I long thought of, cluelessly and ignorantly, as boring), I’ll make an exception. Eponysterical.

But getting real: the issues raised by Bland and Altman are genuinely interesting. They point out that many comparisons of this sort may rely on misleading information, and I have tried to replicate their methods in my own little H.T.-UGR/Inquiry Study.

Summary

When comparing a new method of measurement with a standard method, one of the things we want to know is whether the difference between the measurements by the two methods is related to the magnitude of the measurement. A plot of the difference against the standard measurement is sometimes suggested, but this will always appear to show a relationship between difference and magnitude when there is none. A plot of the difference against the average of the standard and new measurements is unlikely to mislead in this way. This is shown theoretically and illustrated by a practical example using measurements of systolic blood pressure.

Introduction

In earlier papers [1,2] we discussed the analysis of studies of agreement between methods of clinical measurement. We had two issues in mind: to demonstrate that the methods of analysis then in general use were incorrect and misleading, and to recommend a more appropriate method. We saw the aim of such a study as to determine whether two methods agreed sufficiently well for them to be used interchangeably. This led us to suggest that the analysis should be based on the differences between measurements on the same subject by the two methods. The mean difference would be the estimated bias, the systematic difference between methods, and the standard deviation of the differences would measure random fluctuations around this mean. We recommended 95% limits of agreement, mean difference plus or minus 2 standard deviations (or, more precisely, 1.96 standard deviations), which would tell us how far apart measurements by the two methods were likely to be for most individuals.
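The limits-of-agreement calculation described above can be sketched in a few lines of Python. This is a minimal illustration with made-up paired measurements (not data from Bland and Altman's paper): the bias is the mean of the paired differences, the 95% limits are that bias plus or minus 1.96 sample standard deviations, and any plot should put the differences against the average of the two methods rather than against the standard method alone.

```python
import numpy as np

# Hypothetical paired measurements by two methods (illustrative only,
# not data from Bland and Altman's study).
method_a = np.array([120.0, 135.0, 150.0, 142.0, 128.0, 160.0, 145.0, 133.0])
method_b = np.array([122.0, 131.0, 148.0, 146.0, 125.0, 163.0, 141.0, 136.0])

differences = method_a - method_b
mean_diff = differences.mean()       # estimated bias (systematic difference)
sd_diff = differences.std(ddof=1)    # sample SD of the differences

# 95% limits of agreement: bias +/- 1.96 SD of the differences
lower = mean_diff - 1.96 * sd_diff
upper = mean_diff + 1.96 * sd_diff

# For plotting, the x-axis should be the AVERAGE of the two methods,
# not the standard method alone (the point of the paper).
averages = (method_a + method_b) / 2.0
```

Plotting `differences` against `averages`, with horizontal lines at `mean_diff`, `lower`, and `upper`, gives the familiar Bland–Altman plot.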


via Comparing methods of measurement: why plotting difference against standard method is misleading.

A Dream Deferred: How access to STEM is denied to many students before they get in the door good | The Urban Scientist, Scientific American Blog Network

A Dream Deferred

by Langston Hughes

What happens to a dream deferred?

Does it dry up

like a raisin in the sun?

Or fester like a sore–

And then run?

Does it stink like rotten meat?

Or crust and sugar over–

like a syrupy sweet?

Maybe it just sags

like a heavy load.

Or does it explode?

via A Dream Deferred: How access to STEM is denied to many students before they get in the door good | The Urban Scientist, Scientific American Blog Network.

oreillymedia/open_government · GitHub

Wow. O’Reilly has made Open Government available to the public free of charge; there’s really not much I can say beyond “good guy does good thing.” Worth a read.

Open Government was published in 2010 by O’Reilly Media. The United States had just elected a president in 2008, who, on his first day in office, issued an executive order committing his administration to “an unprecedented level of openness in government.” The contributors of Open Government had long fought for transparency and openness in government, as well as access to public information. Aaron Swartz was one of these contributors (Chapter 25: When is Transparency Useful?). Aaron was a hacker, an activist, a builder, and a respected member of the technology community. O’Reilly Media is making Open Government free to all to access in honor of Aaron. #PDFtribute

— Tim O’Reilly, January 15, 2013

via oreillymedia/open_government · GitHub.