Gone are the days when a scientist could describe a simple, elegant experiment (on a mouse, a frog, or some other easily obtained specimen) and another scientist would, in a matter of a few hours, repeat the process in his own laboratory. When several laboratories perform the same experiment, using equivalent resources, and produce similar results, it is a safe bet that the research is valid (1).
Today, much of research is conducted in a complex, data-intensive realm. Individual studies can cost millions of dollars, involve hundreds of researchers, and produce terabytes of data. When experiments reach a high level of cost and complexity, repetition of the same experiment, in a different laboratory, becomes impractical.
In the late 1990s, a variety of data-intensive methods were developed for molecular biology, all of which generated vast amounts of data, requiring complex and sophisticated algorithms to convert the raw data into measured quantities and to analyze the huge assortment of measurements. One such method is the gene expression microarray. In these studies, RNA molecules in tissue samples are converted to DNA and incubated against an array of pre-selected DNA samples. DNA sequences in the sample that match sequences on the microarray will, under precise conditions, anneal to form double-stranded molecules. The number of matches can be semi-quantitated, and a profile of the relative abundance of every RNA species in the original sample can be produced and compared with the profiles of other specimens. Using these profiles, medical researchers have tried to identify profiles (of diseased tissues) that predict responsiveness to particular types of treatment. In particular, researchers have tried to use cancer tissue profiles to predict the likelihood that a specific tumor will respond to a specific type of treatment. Since the late 1990s, an enormous number of studies have been funded to produce tissue microarray profiles for many different diseases, in many different clinical stages, and to correlate these profiles with treatment response.
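To make the idea of an expression profile concrete, here is a minimal sketch, using entirely hypothetical intensity values, of how two microarray profiles might be compared. Each profile is treated as a vector of hybridization intensities, one per probe; intensities are log-transformed (they span orders of magnitude), and similarity is summarized with a Pearson correlation. Real microarray pipelines involve far more elaborate normalization and background correction than this.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical raw intensities for five probes in two tissue samples.
tumor_a = [1200.0, 340.0, 55.0, 980.0, 410.0]
tumor_b = [1100.0, 300.0, 80.0, 1020.0, 390.0]

# Log-transform first, since raw intensities span orders of magnitude.
log_a = [math.log2(v) for v in tumor_a]
log_b = [math.log2(v) for v in tumor_b]

similarity = pearson(log_a, log_b)  # close to 1.0 for similar profiles
```

Even this toy version hints at the problem described below: every choice (which probes, which transform, which similarity measure) is a variable that another laboratory would have to reproduce exactly.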
Because there are so many different variables in the selection of patients, the selection of tissues, the preparation of tissues for annealing, the selection of microarray reagents, the collection of data, the conversion of data to a quantifiable measure, and the methods of analyzing the data, it is impossible for different laboratories to faithfully repeat a microarray experiment. Michiels and co-workers have shown that most microarray studies could not classify patients better than chance (2). Still, the field of microarray profiling continues, as it should, because successful fields must overcome their limitations. Continued efforts may resolve the seemingly intractable problems discussed here, or may open up alternate areas of more fruitful research. Much money has been invested in microarray profiling, and many laboratories depend on the continued funding of this technology. Experience suggests that it takes at least a few decades to thoroughly discredit a well-funded but ill-conceived idea.
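The "multiple random validation" idea behind the Michiels result can be sketched in a few lines: repeatedly re-split the samples into training and validation sets, refit a classifier on each training split, and look at the spread of validation accuracy. The sketch below uses stand-ins of my own invention (random noise features, random labels, and a simple nearest-centroid rule, not the authors' actual data or methods); since the labels carry no signal, accuracy should hover near chance.

```python
import random

random.seed(42)

def nearest_centroid_accuracy(data, labels, train_idx, test_idx):
    # Fit: mean feature vector per class, using training samples only.
    classes = sorted({labels[i] for i in train_idx})
    n_feats = len(data[0])
    centroids = {}
    for cls in classes:
        rows = [data[i] for i in train_idx if labels[i] == cls]
        centroids[cls] = [sum(r[j] for r in rows) / len(rows)
                          for j in range(n_feats)]
    # Predict each validation sample: nearest centroid by squared distance.
    correct = 0
    for i in test_idx:
        pred = min(centroids, key=lambda c: sum(
            (data[i][j] - centroids[c][j]) ** 2 for j in range(n_feats)))
        if pred == labels[i]:
            correct += 1
    return correct / len(test_idx)

# Hypothetical cohort: 40 samples, 20 noise "genes", random labels,
# so no classifier should beat chance on average.
n_samples, n_genes = 40, 20
data = [[random.gauss(0, 1) for _ in range(n_genes)]
        for _ in range(n_samples)]
labels = [random.choice(["responder", "nonresponder"])
          for _ in range(n_samples)]

accuracies = []
for _ in range(50):  # 50 random training/validation splits
    idx = list(range(n_samples))
    random.shuffle(idx)
    train_idx, test_idx = idx[:30], idx[30:]
    accuracies.append(
        nearest_centroid_accuracy(data, labels, train_idx, test_idx))

mean_acc = sum(accuracies) / len(accuracies)  # hovers near 0.5 (chance)
```

The point of repeating the split many times is that a single lucky train/test partition can make a worthless signature look predictive; the distribution over many splits is far harder to fool.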
Here is another case in point. The U.S. Veterans Administration Medical System operates about 175 hospitals. This is an immense undertaking, but the work is accomplished fairly well, using a rather simple algorithm. The VA hires a bunch of doctors, nurses and healthcare workers, gives them a set salary, and houses them in hospital buildings. When registered patients appear in their clinics, the VA pays for the supplies necessary to treat the patients. Each year, the Congress appropriates the money to keep the VA going the next year. One of the greatest benefits of the VA system is the lack of billing. Patient visits, medical procedures, diagnostics, pharmaceuticals, and other medical arcana are absorbed into the budget. If you were to compare the level of complexity of the VA healthcare system with the level of complexity of 175 private hospitals, you would find the VA system to be a model of simplicity.
Then one day, somebody asked, "Should the VA pay for medical services rendered on veterans who have their own private insurers?" Having no affirmative answer, the VA undertook an effort to pry reimbursements from the private insurers of veterans treated at VA hospitals. Suddenly, billing and expense records became important to the VA, an institution with no experience in fee-for-service care.
The VA planned a $472 million software system to track billing and other financial transactions. The pilot site was the Bay Pines VA, in Florida. After preliminary testing at Bay Pines, the system, known as the Core Financial and Logistics System, or CoreFLS, would be rolled out to all of the VA hospitals nationwide. Unfortunately, the system could not be implemented at Bay Pines. Neither the software nor the humans were up to the job. In 2004, the VA decided to pull the plug on the $472-million system because it did not work (3).
Four years later, in 2008, the Government Accountability Office reviewed the billing performance at just 18 of the 175 or so VA hospitals. It found that these 18 hospitals, in fiscal year 2007, failed to collect about $1.4 billion that could have been paid by private insurers. The Government Accountability Office concluded, "Since 2001 we have reported that continuing weaknesses in VA billing processes and controls have impaired VA’s ability to maximize the collections received from third-party insurers" (4).
Why, after years of effort, has the VA not succeeded in billing private insurers for VA care received by privately insured veterans? The reason can be distilled into a single word: complexity. Private insurance reimbursement has reached a level of complexity that exceeds the ability of bureaucratic organizations to cope. There are many insurers, each with its own policies and its own obstructionist bureaucracy. When the VA tries to collect from third-party payers, it must deal with insurers across fifty states. The VA paid dearly to acquire a financial database that could handle the problem, but the software wasn't up to the job.
Hospital information systems are among the most complex and most expensive software systems. The cost of a hospital information system for a large medical center can easily exceed $200 million. It is widely assumed that hospital information systems have been of enormous benefit to patients, but reports suggest that 75% of installed systems are failures (5). If hospital information systems worked well, why does the cost of healthcare continue to rise? Has information technology eliminated the fragmentation of medical care or reduced the complexities of health payment plans? Evidence for the value of implementing complex health information technology in community hospitals is scant. Most of the credible reports on the benefits of hospital information systems come from large institutions that have developed their own systems incrementally, over many years (6).
1. Golden F. Science: Fudging Data for Fun and Profit. Time, December 7, 1981. http://www.time.com/time/printout/0,8816,953258,00.html
2. Michiels S, Koscielny S, Hill C. Prediction of cancer outcome with microarrays: a multiple random validation strategy. Lancet 365:488-492, 2005.
3. De La Garza P, Nohlgren S. VA yanks troubled computer system: the $472-million computer system being tested at Bay Pines just doesn't work, veterans officials say. St. Petersburg Times, July 27, 2004.
4. United States Government Accountability Office. VA Health Care: Ineffective Medical Center Controls Resulted in Inappropriate Billing and Collection Practices. Testimony before the Subcommittee on Health, Committee on Veterans' Affairs, House of Representatives; statement of Kay L. Daly, Director, Financial Management and Assurance. GAO-10-152T, October 15, 2009.
5. Littlejohns P, Wyatt JC, Garvican L. Evaluating computerised health information systems: hard lessons still to be learnt. British Medical Journal 326:860-863, April 19, 2003. http://bmj.com/cgi/content/full/326/7394/860. Comment: the authors report that about three quarters of installed hospital information systems are considered failures.
6. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG. Impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 144:742-752, 2006.
-- TO BE CONTINUED --
© 2010 Jules Berman
key words: informatics, complexity, jules j berman, medical history
Science is not a collection of facts. Science is what facts teach us; what we can learn about our universe, and ourselves, by deductive thinking. From observations of the night sky, made without the aid of telescopes, we can deduce that the universe is expanding, that the universe is not infinitely old, and why black holes exist. Without resorting to experimentation or mathematical analysis, we can deduce that gravity is a curvature in space-time, that the particles that compose light have no mass, that there is a theoretical limit to the number of different elements in the universe, and that the earth is billions of years old. Likewise, simple observations on animals tell us much about the migration of continents, the evolutionary relationships among classes of animals, why the nuclei of cells contain our genetic material, why certain animals are long-lived, why the gestation period of humans is 9 months, and why some diseases are rare and other diseases are common. In “Armchair Science”, the reader is confronted with 129 scientific mysteries, in cosmology, particle physics, chemistry, biology, and medicine. Beginning with simple observations, step-by-step analyses guide the reader toward solutions that are sometimes startling, and always entertaining. “Armchair Science” is written for general readers who are curious about science, and who want to sharpen their deductive skills.