Why the Semantic Web Is Hard (Part III)
Part 1 of this series was about the state of knowledge representation during the early 2000s, when the Semantic Web was picking up steam. Part 2 introduced the problem of chronic pain and PEMF therapy, and ended with the following question:
> How would you, as a chronic pain sufferer, evaluate these health claims? Even though the devices have become cheaper, it’s still a $500 to $1,500 purchase to get a “high quality” home device.
And the central claim of this series of blog posts is that it’s impossible to do so, and will remain impossible, because knowledge aggregation is too hard.
Here are some easy-to-solve-in-theory reasons why it’s hard to take the papers, read through them, and decide for yourself whether or not PEMF therapy will work for you.
- Many of the studies are behind paywalls. Open access to scientific knowledge still has a long way to go.
- Many of the papers / studies are written up in PDF or Microsoft Word. Parsing is hard.
- The papers don’t often have the raw data; they just summarize conclusions. If you’re trying to do a meta-analysis, you need the underlying data.
- Reading the papers requires expert-level knowledge.
But let’s assume that the first three of these are solved: both the papers and their data are universally available in an easily parseable form, and you could pull the data into Excel with a single button click. What we’re really trying to do, then, is build a system that can perform meta-analysis on therapeutic results and tell us whether or not the proposed treatment is effective.
I’ll go a little bit further here: automating meta-analysis of clinical data is the perfect thought experiment for the viability of something like the Semantic Web. If we can see a path to it, we can deliver enormous value in the short-to-intermediate term (justifying the expense) AND we build something that can probably be extended into a more general-purpose framework.
Conversely, if we can’t do it, the Semantic Web is impossible.
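To make the thought experiment concrete, here is the arithmetic at the core of the simplest kind of meta-analysis: fixed-effect, inverse-variance pooling of per-study effect sizes. This is a minimal sketch; the study numbers below are invented for illustration and are not drawn from the PEMF literature.

```python
import math

def pooled_effect(studies):
    """Fixed-effect inverse-variance meta-analysis.

    Each study is (effect_size, standard_error). Studies with smaller
    standard errors (usually larger samples) get proportionally more
    weight. Returns (pooled effect, pooled standard error).
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical studies: standardized mean difference in pain score
# (negative = less pain), with its standard error.
studies = [(-0.8, 0.40), (-0.3, 0.25), (-0.5, 0.30)]
effect, se = pooled_effect(studies)
ci = (effect - 1.96 * se, effect + 1.96 * se)  # approximate 95% CI
```

The arithmetic is trivial; every hard part of the problem lives in producing those `(effect_size, standard_error)` pairs from heterogeneous papers in the first place.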
Here are some more complex constraints:
- Most of the studies and papers come from small research labs or clinical settings and have small sample sizes. It’s not uncommon for a study to have fewer than 10 patients, or even n=1, and even some of the best-funded studies have fewer than 50. This means we’re going to have to aggregate a lot of different studies from different time periods and different countries.
- In general, the reproducibility crisis means that we have to distrust most data from most labs.
- In medicine and biology, p-hacking is prevalent, and negative results (“didn’t work”) often aren’t reported. While the research community now recognizes this as a problem, there is no solution in place.
- There are many different dimensions to administering PEMF: the strength of the magnetic field, the oscillation cycle, how long a treatment session lasts, how many sessions a day, how many days of treatment, … These aren’t just theoretical concerns: one of the better meta-analyses of the available data explicitly states “Sensitivity analyses suggested that the exposure duration <=30 min per session exhibited better effects compared with the exposure duration >30 min per session” (but it did not make recommendations about magnetic field strength).
- Most papers don’t incorporate long-term effects. Even if PEMF therapy works in the short term, what are the long-term effects of exposing people to low-level magnetic fields? Very few of the studies deal with “one year later” or “five years later.”
- Researchers rarely reach definitive conclusions. The people doing and writing the studies often couch their conclusions in maybes and perhapses that make them hard to interpret definitively. For example, the meta-analysis above concludes:
  > PEMF could alleviate pain and improve physical function for patients with knee and hand OA, but not for patients with cervical OA. Meanwhile, a short PEMF treatment duration (within 30 min) may achieve more favourable efficacy. However, given the limited number of study available in hand and cervical OA, the implication of this conclusion should be cautious for hand and cervical OA.
- The placebo effect is real. Some people are going to report pain reduction even if the device is ineffective. This is often controlled for using “sham” devices, but not always.
- Experimental designs differ. Even when you have all the data, and even when all the measurements are compatible, it’s not always obvious how to merge them.
- Should you incorporate animal studies? How? PEMF is used on horses and cows quite a bit. Are they similar enough to people? How valid are the results, and can they be applied to humans? Cows can’t report pain the way people can, but there are indicators (contented mooing?) that the pain has subsided.
- Sometimes the research is sponsored. The companies making PEMF devices sometimes fund the studies. This is usually disclosed in the paper or summary, but it still raises questions of bias.
- Many of the claims are repeated by third parties for their own reasons, often with their own spin and (commercial) slant. You might think the most cited studies are the most authoritative, but that’s not at all clear.
- Manufacturers often overstate efficacy claims. As one example, one manufacturer, Magnawave, says that each and every one of the 2,800 clinical trials of their product had a positive outcome and that their devices have been approved in 127 different countries. From this we can deduce that we should treat anything Magnawave says with extra scrutiny.
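Several of the constraints above (tiny samples, different designs, different dosing regimens) show up quantitatively as between-study heterogeneity, which any automated meta-analysis system would have to measure before trusting a pooled number. A sketch of the standard statistics, Cochran’s Q and I², with invented study values:

```python
def heterogeneity(studies):
    """Cochran's Q and I^2 for (effect_size, standard_error) studies.

    Q measures how far individual studies sit from the fixed-effect
    pooled estimate, relative to their own precision; I^2 estimates
    the share of that variation due to real between-study differences
    rather than chance.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, (e, _) in zip(weights, studies))
    df = len(studies) - 1
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i_squared

# Hypothetical: two studies roughly agreeing, one sharply disagreeing.
q, i2 = heterogeneity([(-0.6, 0.2), (-0.5, 0.25), (0.4, 0.2)])
```

A high I² (conventionally above 0.75) is the statistical symptom of exactly the problems in the list: the studies are not measuring the same thing the same way, and the pooled estimate papers over a real disagreement.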