Something to note off the bat about this paper: the file structure for it in my Windows Explorer goes like this: initial submission, revision, re-revision, re-re-revision. Suffice it to say, it took a while to get published 🙂.
It’s not going to light the world on fire, but for those of us in the field, it certainly holds relevance. The paper describes a study in which we followed 50 patients with mechanical neck pain over the course of 1 month of conventional, outpatient physical therapy. We captured outcomes at inception (baseline) and then weekly for the next 4 weeks, meaning a total of 5 data points over 1 month for each subject. The goal of this project was really to establish trends in the data amongst these 50 people. We had actually hoped for 100 subjects, but as all too often happens in clinical research, recruitment didn’t go as planned and we eventually had to settle for 50 in order to complete the project in a reasonable timeframe.
To summarize the results before getting into the meat of things, the mean trend in NDI score was an improvement of 1.5 NDI points per week, and for pain intensity the mean trend was an improvement of 0.5 points per week. That right there holds importance if we agree that the MCID on the NDI is about 5 points and on the NRS is about 2 points (reasonably well-established by this point). Using these averages, on average you could estimate that a meaningful change will occur in the average patient after about 4 weeks, on average. In case you hadn’t caught it, we’re speaking largely in averages here. As a researcher that does give me some confidence in study design – a follow-up period of 4 weeks should be adequate to capture meaningful change in the average patient, and 6 weeks might make even more sense. Great from a funding/ethical research design standpoint.
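If it helps to see the arithmetic behind that estimate, here is the back-of-envelope version (assuming a linear trend and the MCID values mentioned above):

```python
# Back-of-envelope estimate: weeks until the mean trajectory crosses
# the MCID, assuming improvement is linear week over week.
mcid = {"NDI": 5.0, "NRS": 2.0}                  # MCIDs noted above
weekly_improvement = {"NDI": 1.5, "NRS": 0.5}    # mean trends from the study

for measure in mcid:
    weeks = mcid[measure] / weekly_improvement[measure]
    print(f"{measure}: ~{weeks:.1f} weeks to a meaningful change")
# NDI: ~3.3 weeks; NRS: ~4.0 weeks -- hence "about 4 weeks" on average
```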
But of course, an average simply means that the deviations above and below the line balance out. It is entirely possible that no one actually falls right on the average. This is where I personally wanted to learn more about longitudinal modeling, and I was very fortunate that at my own institution (Western University) we have the brilliant Piotr Wilk, who has forgotten more about longitudinal modeling than I currently know. With Piotr we were able to explore latent growth curves within that mean trend. That is, we were able to look for other trends within the overall group that appeared mathematically justifiable. Within the NDI data we identified 3 trends. One group (about 20%, or 1 in 5) showed fairly rapid improvement at a rate of 4.5 NDI points per week, which would mean you would expect to see meaningful change in those folks within a week, two at most. Another group showed a slow but steady worsening of about 1 NDI point per week. Fortunately this was the smallest group, about 15% of the sample (1 in every 7 or so). The third group was the largest (about 66%, or 2 in 3), improving at about the average trend, 1.1 NDI points per week give or take. Interestingly, the NRS showed only 2 trends: about half the sample improved about 1 point per week, while the other half stayed largely stable over the 4 weeks.
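For readers curious what hunting for trajectory classes looks like in practice, here is a deliberately simplified stand-in for the idea (NOT the latent growth curve model Piotr actually fit, which handles the classes and slopes jointly): estimate one slope per subject, then look for subgroups among those slopes with a mixture model. All the data below are synthetic, loosely mimicking the three NDI trends described above.

```python
# Simplified two-stage sketch of trajectory-class discovery on synthetic data.
# Sign convention: improvement = NDI score decreasing, so improvers have
# negative slopes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
weeks = np.arange(5)  # baseline + 4 weekly follow-ups

# Synthetic cohort of 50, loosely mimicking the three trends reported:
# rapid improvers (-4.5/wk), average improvers (-1.1/wk), worseners (+1/wk)
true_slopes = np.concatenate([
    rng.normal(-4.5, 0.3, 10),   # ~20% rapid improvement
    rng.normal(-1.1, 0.3, 33),   # ~66% average improvement
    rng.normal(+1.0, 0.3, 7),    # ~15% worsening
])
scores = 30 + true_slopes[:, None] * weeks + rng.normal(0, 1.0, (50, 5))

# Stage 1: one OLS slope per subject (change in NDI points per week)
slopes = np.polyfit(weeks, scores.T, deg=1)[0]

# Stage 2: mixture model over the slopes to recover the latent classes
gmm = GaussianMixture(n_components=3, random_state=0).fit(slopes.reshape(-1, 1))
print(np.sort(gmm.means_.ravel()))  # class-mean slopes, NDI points/week
```

A proper latent growth curve (or growth mixture) model estimates class membership and within-class trajectories simultaneously, with fit statistics to justify the number of classes; this two-stage version just conveys the intuition.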
Once again, we’re seeing some value here. When patients ask how quickly they should expect to see improvement, these are starting to give you some clues as to how to answer. And for research design purposes, I may for example decide that I don’t really want the rapid improvement group in my study, since chances are they’ll improve regardless of what you do. But there’s still one piece missing, and that is to understand what’s different about these groups that might tell us what trajectory they’re most likely to follow.
This is where our small sample size sort of hampered what we could do. While I can only report in the paper what was prudent, in a blog post I have a bit more latitude to wax philosophical. The usual caveat applies, however: with a small sample I can’t say that my opinions are necessarily compelling. But if I look deeply at the data, there are some clear trends that may help us explain who is going to end up in which trajectory. Try these on for size, see if you agree:
Rapid improvement group: only 1 out of the 10 had their symptoms for > 6 months, only 2 of the 10 reported radiating arm pain, and none were currently taking pain medications. Each of these proportions was the lowest of the 3 groups. Their mean number of pain locations (on a body diagram) was the lowest of the 3, and interestingly their baseline NDI was the highest (but not by much vs. the worsening group).
Slow improvement group: The only notable difference between this group and the others was that they started with the lowest NDI score (perhaps meaning less room for improvement?).
Worsening group: Highest proportion of chronic symptoms, and nearly ¾ described a traumatic cause of their symptoms; both were the highest proportions of the 3 groups. Their mean TSK-11 score was also the highest of the 3, but only by about 4 points over the rapid improvement group.
So, there do appear to be some trends in the data that may be useful. Those with localized acute or subacute symptoms, without radiating pain, and for whom pain meds aren’t necessary appear more likely to fall into the rapid improvement group. Those with chronic symptoms, especially of traumatic cause, and a slightly elevated (though not necessarily pathologically so) TSK-11 score appear more likely to fall into the slowly worsening group. I also have PPT and Fear of Pain Questionnaire scores for these folks, which I’ll dive into next. Let me know if you think this is of use to you clinically.