Last week, we saw a machine trounce the two best humans who have ever played the game of Jeopardy, a wide-ranging quiz game filled with puns and wordplay. The computer, named Watson by the IBM engineers who spent four years and tens of millions of dollars developing it, tapped into its huge database of facts and rules to generate rank-ordered candidate answers to each question. It also assigned a probability to each candidate, and if the top probability was above its risk threshold, it buzzed in with lightning speed.
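That buzz-or-pass logic, at least as described, boils down to a simple decision rule. Here's a toy Python sketch of the idea; the threshold value and all the names are my invention, not IBM's:

```python
# A toy sketch (not IBM's code) of the buzz-or-pass rule: rank candidate
# answers by probability and buzz only when the best one clears the risk
# threshold. All names and numbers here are hypothetical.

def decide_to_buzz(candidates, risk_threshold=0.5):
    """candidates: list of (answer, probability) pairs."""
    ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
    best_answer, best_probability = ranked[0]
    if best_probability > risk_threshold:
        return best_answer   # buzz in with this answer
    return None              # stay silent; too risky

print(decide_to_buzz([("Toronto", 0.32), ("Chicago", 0.14)]))        # None
print(decide_to_buzz([("Dorothy Parker", 0.91), ("a book", 0.04)]))  # buzzes
```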
According to a NOVA show about Watson's development, the most important innovation was "his" ability to use machine learning. It would take forever to enter by hand all the rules and relationships Watson needed to have a shot at winning Jeopardy. Instead, the engineers fed him huge amounts of data and let him learn from tens of thousands of examples--Jeopardy questions paired with correct answers. Watson learned from his mistakes. Presumably he learned which kinds of evidence (mostly word associations) from his huge memory banks mattered more in which circumstances. Very smart.
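For the curious, here is a hedged sketch of what "learning which evidence matters" might look like in miniature: a classifier fit to labeled examples, whose coefficients end up encoding the relative weight of each kind of evidence. The feature names and numbers are invented for illustration; Watson's actual pipeline was vastly more elaborate.

```python
# A hedged miniature of learning evidence weights from labeled examples.
# Each row scores one candidate answer on hypothetical evidence sources;
# the label says whether that candidate turned out to be correct.
from sklearn.linear_model import LogisticRegression

# Columns: [word_association, passage_match, answer_type_match]
X = [
    [0.9, 0.8, 1.0],  # strong evidence, correct answer
    [0.7, 0.2, 0.0],  # plausible words, wrong answer type
    [0.1, 0.1, 1.0],  # right type, weak textual evidence
    [0.8, 0.9, 1.0],  # strong evidence, correct answer
]
y = [1, 0, 0, 1]  # 1 = this candidate was the right answer

model = LogisticRegression().fit(X, y)
# The learned coefficients are, in effect, "which evidence matters most."
print(dict(zip(["word_association", "passage_match", "answer_type_match"],
               model.coef_[0].round(2))))
```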
It's these two features--the ability to learn from thousands of examples and the ability to assign probabilities to different possible decisions--that make Watson's intelligence so relevant to other fields. Consider medicine. The process of diagnosis is, at its heart, gathering a lot of data, considering it, and then ranking possible causes of the patient's symptoms according to an intuitive sense of probabilities.
Doctors know the potential sources of error in this process. One, of course, is gathering the wrong data--an inaccurate history or physical exam, a lab error, or mixed-up lab slips. I'm not sure Watson could help with the physical exam, but he would be better than humans at taking into account the likelihood of lab error. The next source of error is the human tendency, nearly impossible to train away, to weigh anecdotal experience, especially recent experience, too heavily. The diagnosis we've most recently encountered is the one that jumps to mind. Which brings us to a third source of error--not thinking of a certain diagnosis at all.
Now imagine Watson, fed all the epidemiological data we can gather, all the journal articles published in the last sixty years, all the case studies and Grand Rounds. He can perform statistical analyses and find correlations. Within microseconds of being turned loose on the data of a particular case, he could spit out a comprehensive list of potential diagnoses with probabilities assigned.
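To make the idea concrete, here is a toy sketch of such a ranked differential, using a naive Bayes-style scoring of findings against diseases. The priors and likelihoods are placeholders I made up, not real epidemiology:

```python
# A toy ranked differential: score each candidate diagnosis against the
# case findings and report probabilities. All numbers are invented
# placeholders, not real disease prevalences or symptom likelihoods.

# P(finding | disease) plus a base rate for each disease (all hypothetical).
knowledge = {
    "pneumonia":   {"prior": 0.05, "cough": 0.9, "fever": 0.8, "weight_loss": 0.1},
    "lung_cancer": {"prior": 0.01, "cough": 0.7, "fever": 0.1, "weight_loss": 0.6},
    "bronchitis":  {"prior": 0.10, "cough": 0.8, "fever": 0.3, "weight_loss": 0.05},
}

def rank_diagnoses(findings):
    scores = {}
    for disease, stats in knowledge.items():
        score = stats["prior"]
        for finding in findings:
            score *= stats.get(finding, 0.01)  # naive Bayes-style product
        scores[disease] = score
    total = sum(scores.values())
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for disease, probability in rank_diagnoses(["cough", "weight_loss"]):
    print(f"{disease}: {probability:.0%}")
```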
Of course, he could still make silly errors, as when he seemed to believe Dorothy Parker was a book or Toronto an American city, but humans can weed out such answers. Doctors would have their memories jogged about possibilities they'd neglected, and the probability rankings would help them decide which tests to pursue next. They would still take the history, interact with the patient, and advise the family--the things humans do best.
Radiologists, however, might find themselves out of a job. Radiology is essentially a matter of pattern detection. A radiologist looking at a chest X-ray may have one line of information about the patient: "60-yr. old smoker w. cough x 3 wks." After that, there's only the X-ray, which the radiologist compares in memory with all the other chest X-rays she's read. She compares heart width to chest width, looks for flattening of the diaphragm, and searches systematically for anything that breaks the expected pattern, such as unusual spots or shadows.
Watson could do all that. Feed him ten thousand chest X-ray images and ten thousand diagnoses, and he would be better at searching his memory than any human. He would never get bored by the repetitiveness of the task. He would be diligent and systematic. He would continue to learn, adding CT scans, MRIs, bone scans, thallium heart scans, and whatever else we asked to his repertoire. He would work day and night without vacation, and he would spit out his answers in microseconds. And once the capital cost was covered, taking one more highly paid human out of the chain of medical diagnosis and treatment would help keep health care costs down.
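As a rough illustration of the "ten thousand X-rays in, diagnoses out" idea, here is a minimal sketch that treats each image as a vector of pixel intensities and fits a classifier to labeled examples. The random arrays merely stand in for real films, and a real system would be far more sophisticated:

```python
# A minimal sketch of learning from labeled films. Random arrays stand
# in for real chest X-rays, and the scale is shrunk so it runs quickly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_films, n_pixels = 1_000, 32 * 32        # small stand-in for 10,000 films
films = rng.random((n_films, n_pixels))   # placeholder pixel data
labels = rng.integers(0, 2, n_films)      # 0 = normal, 1 = abnormal

model = LogisticRegression(max_iter=1000).fit(films, labels)

# A new study arrives; the answer comes back in a fraction of a second,
# with a probability attached rather than a bare yes or no.
new_film = rng.random((1, n_pixels))
print("P(abnormal) =", round(model.predict_proba(new_film)[0, 1], 3))
```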
4 comments:
This is silly. As a non-radiologist, it's difficult to understand the task required to read a study. Patterns are one thing, but there's more to it than matching patterns. Maybe 70% of what we see is the same, but the remaining 30% is novel and requires considering how what you say affects patient management.
Speed is useless if you get generic answers or even a two-page list of differential diagnoses. I don't doubt a computer could successfully detect 90% of meningiomas. But we are in a business of 100%, and that's the reality.
Findings can be suggestive and vague. You need a doctor who knows medicine and patient management and can communicate with the referring physician. There's more to that than a learnable algorithm.