Sunday, July 20, 2008

Neoplasms: 13

This is the thirteenth in a series of blogs on neoplasia.

In the past few blogs, I've been trying to explain the disconnect between cancer survival data and cancer death rate data. The cancer survival data seems to indicate that we're making enormous improvements in cancer treatment. The cancer death rate indicates that Americans are dying from cancer at about the same rate as they did a half-century ago.

Several days ago, I listed over a dozen biases in cancer survival data that contribute to an overly optimistic sense of medical progress.

In this and the next few blogs, I thought I'd review some of these biases. The purpose of this exercise is to explain that the interpretation of survival data is enormously complex and that survival data is probably not the best way to gauge progress in the field of cancer research.

Today, let's look at Statistical Method Bias.

Strange as it may seem, a statistician can apply different statistical methods to the same set of data and arrive at any of several different conclusions. Conclusions drawn by different methods, or from different data sets, are often contradictory. When this happens, articles with opposite conclusions appear in the medical literature, permitting scientists to selectively cite the papers that support their own agendas [1]. Depending on the method of choice, it is often possible to draw opposite conclusions from the same set of data.
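A concrete way to see how the same data can support opposite conclusions is Simpson's paradox. The counts below are hypothetical (invented for illustration, not taken from any study): treatment A happens to be given mostly to hard cases and treatment B mostly to easy cases, so A wins within every stratum while B wins in the pooled totals.

```python
# Hypothetical counts (successes, patients) illustrating Simpson's paradox.
# Treatment A is assigned mostly to hard cases, treatment B mostly to easy ones.
data = {
    "A": {"easy": (9, 10),   "hard": (40, 100)},
    "B": {"easy": (80, 100), "hard": (3, 10)},
}

def rate(successes, total):
    return successes / total

# Within each stratum, A outperforms B.
for stratum in ("easy", "hard"):
    a_s, a_n = data["A"][stratum]
    b_s, b_n = data["B"][stratum]
    assert rate(a_s, a_n) > rate(b_s, b_n)

# Yet pooled over all patients, B appears to outperform A.
pooled = {t: (sum(s for s, _ in d.values()), sum(n for _, n in d.values()))
          for t, d in data.items()}
print("A overall:", rate(*pooled["A"]))  # 49/110, about 0.45
print("B overall:", rate(*pooled["B"]))  # 83/110, about 0.75
```

Whether the stratified or the pooled analysis is "correct" depends on choices the analyst makes, which is exactly how two papers can draw opposite conclusions from one data set.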

John P.A. Ioannidis is the chair of the Clinical and Molecular Epidemiology Unit at the University of Ioannina School of Medicine and Biomedical Research Institute in Greece. In a provocative article entitled "Why most published research findings are false," he points out some common practices that produce misinterpretations posing as clinical facts [2]. These include: post hoc subgroup selection and analysis (i.e., cherry-picking a subgroup that happens to reach statistical significance); changing clinical group inclusion or exclusion criteria, or disease definitions, after the trial has concluded; selective or purposefully distorted reporting of results; data dredging (sifting through study data in search of outlier groups); and, in multi-center studies, reporting the significant findings from some centers while ignoring negative results from the others [2,3].
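The hazard of data dredging can be shown with a short simulation (a sketch of the general statistical point, not an analysis from the cited papers). Here both "arms" of every subgroup are drawn from the same distribution, so there is no real treatment effect anywhere; yet roughly 5% of the subgroups will cross the conventional significance threshold purely by chance, and a motivated analyst could report only those.

```python
import math
import random

random.seed(42)

N = 50            # patients per arm in each subgroup
SUBGROUPS = 1000  # number of post hoc subgroups examined
Z_CRIT = 1.96     # two-sided 5% significance threshold

def z_statistic(treated, control):
    """z-test for a difference in means, with known unit variance."""
    diff = sum(treated) / N - sum(control) / N
    return diff / math.sqrt(2.0 / N)

false_hits = 0
for _ in range(SUBGROUPS):
    # Both arms come from the SAME distribution: no real effect exists.
    treated = [random.gauss(0, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    if abs(z_statistic(treated, control)) > Z_CRIT:
        false_hits += 1

print(f"spurious 'significant' subgroups: {false_hits} of {SUBGROUPS}")
```

The more subgroups an analyst examines after the fact, the more of these spurious "findings" accumulate; that is why post hoc subgroup results need correction for multiple comparisons, or confirmation in an independent trial.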



1. [Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA 298:2517-2526, 2007.]

2. [Ioannidis JP. Why most published research findings are false. PLoS Med 2:e124, 2005.]

3. [Ioannidis JP. Some main problems eroding the credibility and relevance of randomized trials. Bull NYU Hosp Jt Dis 66:135-139, 2008.]


-Copyright (C) 2008 Jules J. Berman

key words: cancer, tumor, tumour, carcinogen, neoplasia, neoplastic development, classification, biomedical informatics, tumor development, precancer, benign tumor, ontology, classification, developmental lineage classification and taxonomy of neoplasms
