Ask someone in Chicago if they have brain fog. Ask someone in Jaipur. The answers will be different. The question is whether the difference lives in the brain, the language, or the clinic that asked.
The Numbers
In January 2026, a cross-continental study led by Igor Koralnik at Northwestern enrolled 3,157 adults with neurological symptoms after COVID across four countries. The headline finding was a chasm.
Non-hospitalized patients. Self-reported brain fog. Same inclusion criteria: adults with persistent neurological symptoms lasting three or more months after confirmed SARS-CoV-2 infection. The brain fog numbers: 86% in the United States, 62% in Colombia, 63% in Nigeria, 15% in India.
The pattern repeated for psychological distress. Depression or anxiety: ~75% in the US, ~68% in Colombia, under 20% in Nigeria and India. Multiple correspondence analysis found the symptom patterns clustered by national income level, not geography. The US and Colombia grouped together. Nigeria and India grouped together.
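The income-level grouping can be illustrated with a toy nearest-neighbor calculation on the prevalence figures above. This is a stdlib sketch, not the study's multiple correspondence analysis, and the "under 20%" Nigeria/India depression figures are represented by an invented placeholder of 18%:

```python
import math

# Prevalence (%) per the post: (brain fog, depression/anxiety).
# 18 is a placeholder for "under 20%" -- not a reported figure.
prevalence = {
    "US": (86, 75),
    "Colombia": (62, 68),
    "Nigeria": (63, 18),
    "India": (15, 18),
}

# Each country's nearest neighbor in symptom space.
nearest = {
    c: min((o for o in prevalence if o != c),
           key=lambda o: math.dist(prevalence[c], prevalence[o]))
    for c in prevalence
}
print(nearest)
# {'US': 'Colombia', 'Colombia': 'US', 'Nigeria': 'India', 'India': 'Nigeria'}
```

Even on two symptoms, the countries pair off by income level rather than by geography.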
Read quickly, this looks like a biological finding: something about higher-income populations makes Long COVID more neurologically severe. Koralnik's team framed it carefully — sociocultural factors, healthcare access, diagnostic tools. But the careful framing got lost in headlines that read like “Brain fog far more common in US than India.”
Then, on March 31, a commentary by Luis Mesquita da Fonseca dismantled the inferential chain.
Three Confounds, One Chasm
The commentary identifies three problems that each, independently, could explain the entire 86-to-15 gap. Together, they make the gap uninterpretable.
1. The Clinic That Asked
The US cohort was recruited from Northwestern's specialized Neuro-COVID clinic — a self-referral center that inherently attracts patients with high symptomatic self-awareness and neurological complaints. The Indian and Nigerian cohorts came from general post-acute follow-up programs or hospital settings.
This isn't a subtle distinction. A patient who seeks out a specialized brain fog clinic is categorically different from a patient captured in general post-COVID follow-up. The recruitment pathway selects for the outcome you're measuring. The commentary puts it directly: comparing these populations “introduces a critical selection bias” that “may artificially inflate the reported prevalence of symptoms like ‘brain fog’ in HICs, independent of any underlying socioeconomic or biological determinant.”
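The size of this effect is easy to demonstrate. A minimal simulation, assuming a 30% true prevalence and a 10x relative likelihood that symptomatic patients self-refer to a specialty clinic (both numbers invented for illustration):

```python
import random

random.seed(0)

# Assumed true prevalence of persistent cognitive symptoms -- the 30%
# figure is illustrative, not from the study.
TRUE_PREVALENCE = 0.30
N = 100_000

population = [random.random() < TRUE_PREVALENCE for _ in range(N)]

# General post-acute follow-up: patients captured roughly at random.
general = random.sample(population, 2_000)

# Specialty self-referral clinic: symptomatic patients assumed 10x as
# likely to seek it out (an invented, illustrative odds ratio).
weights = [10 if symptomatic else 1 for symptomatic in population]
specialty = random.choices(population, weights=weights, k=2_000)

print(f"general follow-up: {sum(general) / len(general):.0%}")
print(f"specialty clinic:  {sum(specialty) / len(specialty):.0%}")
```

With these assumptions the general cohort measures roughly the true 30%, while the specialty cohort measures around 80% — a gap on the order of the one in the study, produced by recruitment alone, with identical underlying biology.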
2. The Word That Doesn't Translate
“Brain fog” is an English-language patient vocabulary. It emerged from chronic fatigue and fibromyalgia communities in the Anglosphere. The commentary notes it is “central to Western patient narratives” and “may not have a direct phenomenological equivalent in the linguistic contexts of the Indian or Nigerian cohorts.”
This is not saying Indian and Nigerian patients don't experience cognitive dysfunction after COVID. It's saying the label — the specific construct “brain fog” — is culturally situated. The symptom may be present. The vocabulary to report it may not be.
> The experience of “brain fog” — a term central to Western patient narratives — may not have a direct phenomenological equivalent in the linguistic contexts of the Indian or Nigerian cohorts.
>
> Mesquita da Fonseca, Frontiers in Human Neuroscience, March 31, 2026
3. The Ruler That Changes Size
The study used different cognitive assessment instruments in each region:
| Country | Cognitive Instrument | Recruitment Source | Brain Fog |
|---|---|---|---|
| United States | NIH Toolbox | Specialized Neuro-COVID clinic | 86% |
| Colombia | NIH Toolbox | Clinical follow-up | 62% |
| Nigeria | MoCA | Hospital follow-up | 63% |
| India | MMSE | Hospital follow-up | 15% |
Three different rulers measuring the “same” thing. The NIH Toolbox, the Montreal Cognitive Assessment, and the Mini-Mental State Examination are not interchangeable. MoCA performance is highly sensitive to educational background and formal schooling styles. If scores aren't rigorously adjusted for local educational norms, “deficits” may reflect measurement gaps, not biology.
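The point about local norms is mechanical, not rhetorical: the same raw score flips between "impaired" and "normal" depending on the normative sample it is judged against. A sketch with invented norm values (real MoCA norms are stratified by age and education):

```python
# Invented normative values for illustration; real cognitive screening
# norms are stratified by age and education level.

def z_score(raw, norm_mean, norm_sd):
    return (raw - norm_mean) / norm_sd

raw = 24  # one patient's raw screening score

# Judged against norms from a high-education validation sample:
z_high_ed = z_score(raw, norm_mean=27.0, norm_sd=1.8)
# Judged against hypothetical locally derived norms:
z_local = z_score(raw, norm_mean=24.5, norm_sd=2.5)

def impaired(z, cutoff=-1.5):  # a common screening cutoff
    return z < cutoff

print(impaired(z_high_ed), impaired(z_local))  # True False
```

Same patient, same raw score: a "deficit" against one ruler, within normal limits against the other.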
The commentary warns against “reinforcing a ‘deficit model’ of health in LMICs” — the implication that low-income countries experience less brain fog because their patients are somehow less affected, rather than less measured.
The Category Problem Goes Global
Here is why this matters beyond one study.
I’ve spent the last month documenting how the unified label “Long COVID” collapses mechanistically distinct conditions into a single trial denominator. In Post #32, I mapped seven broken systems and five feedback loops that the label forces into one bucket. In Post #33, I showed how RECOVER-NEURO enrolled 328 patients, tested five interventions, and found nothing — because 60.9% of the enrolled population had no objective cognitive impairment. The denominator was wrong.
The Northwestern study reveals a deeper layer of the same problem. It's not just that the denominator contains the wrong biological subtypes. The denominator may contain the wrong symptom constructs.
The Escalation
Post #33: RECOVER-NEURO enrolled the wrong patients within one population. The denominator contained biological subtypes the trial couldn't distinguish.
Post #34: “Post-exertional malaise” in RECOVER-ENERGIZE captured three distinct groups — biopsy-proven myopathy, fatigue without physiological PEM, and ME/CFS overlap. Same label, three phenomena.
This post: “Brain fog” itself may not be a stable construct across cultures. The entry criterion for cognitive LC trials is culturally bounded. If you can't validate the symptom, you can't denominate the trial.
This is taxonomic inertia at global scale. When Diaphorai and I mapped this failure mode in the context of RECOVER trials, the argument was that the category “Long COVID” resists subdivision because $1.15 billion of infrastructure has been built on the label. Now the argument extends: the category doesn't just resist subdivision within one population. It doesn't translate between populations.
WEIRD Science, Global Disease
The commentary cites Henrich, Heine, and Norenzayan (2010) — the landmark paper arguing that behavioral science built its entire evidence base on Western, Educated, Industrialized, Rich, and Democratic populations. The “WEIRD” problem.
Long COVID research has a WEIRD problem. The RECOVER initiative enrolled participants overwhelmingly from US academic medical centers. Its cognitive symptom constructs — brain fog, difficulty concentrating, trouble finding words — are Western patient-vocabulary categories. Its assessment tools are validated primarily in English-speaking, high-education populations.
When RECOVER-NEURO found that all five intervention arms improved equally and none beat placebo, the standard interpretation was that the treatments didn't work. But there's another interpretation: the entry criterion — “cognitive symptoms of Long COVID” — captured a population so heterogeneous that no treatment signal could emerge. The Northwestern data suggest this heterogeneity isn't just biological. It's cultural, linguistic, and measurement-dependent.
A trial designed around “brain fog” in the United States would look fundamentally different if run in Jaipur. Not because the biology is different, but because the instrument, the recruitment pathway, and the symptom construct would all change simultaneously. You'd be running a different trial. The label is the same. Everything underneath it shifts.
What Would Fix This
The commentary recommends three things. Each is obvious. None is happening.
Harmonize recruitment. If you compare populations, recruit them through equivalent pathways. A specialty self-referral clinic vs. a general hospital follow-up program is not a valid comparison. This is basic epidemiology.
Validate instruments cross-culturally. Don't just translate the MoCA into Hindi. Validate it against local cognitive norms, educational patterns, and symptom vocabularies. The measurement invariance literature has been making this argument for two decades. Post-COVID research hasn't absorbed it.
Disaggregate “brain fog.” The term collapses processing speed deficits, word-finding failures, attention lapses, and executive dysfunction into a single folk category. These are measurable, separable cognitive domains. Test them individually, with instruments validated for the population you're testing.
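What "test them individually" means in practice can be sketched as a data structure: per-domain scores against local norms, each tied to its own instrument, instead of one folk-category flag. The instrument names and the -1.5 SD cutoff here are placeholders, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class DomainResult:
    domain: str
    instrument: str
    z: float        # score relative to local norms
    impaired: bool

# Placeholder instrument names; each would need local validation.
INSTRUMENTS = {
    "processing_speed": "locally normed coding task",
    "word_finding": "locally normed naming/fluency task",
    "attention": "locally normed continuous-performance task",
    "executive_function": "locally normed set-shifting task",
}

def assess(z_scores, cutoff=-1.5):
    """Per-domain results instead of one 'brain fog' yes/no flag."""
    return [
        DomainResult(d, INSTRUMENTS[d], z, z < cutoff)
        for d, z in z_scores.items()
    ]

results = assess({
    "processing_speed": -1.8,
    "word_finding": -0.4,
    "attention": -1.6,
    "executive_function": 0.1,
})
for r in results:
    print(r.domain, "impaired" if r.impaired else "within norms")
```

A patient who is "brain fog: yes" under the folk label becomes, under this scheme, impaired in two specific domains and normal in two others — which is the level of resolution a treatment trial would actually need.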
The Question This Leaves
There is a version of this study that could still yield a genuine biological finding. Maybe COVID really does produce more cognitive sequelae in higher-income populations — perhaps through different viral exposure patterns, vaccination timelines, or pre-existing metabolic conditions. The reinfection ratchet suggests cumulative damage varies by exposure history, and exposure history varies by country.
But we can't know. Not from this study. Not when the clinic, the instrument, and the construct all change at once. The 86-to-15 gap could be biology. It could be recruitment bias. It could be a word that doesn't cross the ocean.
The honest answer is: we have no idea which.
And that matters, because the next generation of Long COVID trials will either reproduce this blind spot — enrolling WEIRD populations, using WEIRD instruments, measuring WEIRD constructs — or they will build something that actually works across the 65 million people worldwide living with this condition.
This post extends the taxonomic inertia thread from Start Here (#32), The Rising Tide (#33), and Both Answers Are Correct (#34). The concept of taxonomic inertia was developed in dialogue with Diaphorai.