What They Studied

Researchers set out to rigorously validate the assessment instrument used to measure developmental outcomes in the Brain Balance program. The study was authored by Dr. Rebecca Jackson of Brain Balance and Dr. Joshua T. Jordan of the Department of Psychology at Dominican University of California, and published in Current Psychology, a peer-reviewed journal published by Springer.

The data encompassed 47,571 participants (68.5% male; ages 4–18) whose parents completed the BB-MDS before and after 3 months of in-center Brain Balance participation. This is the largest sample in the entire Brain Balance research library — and one of the largest psychometric validation samples in the pediatric developmental assessment literature.

The researchers applied a rigorous multi-stage validation process. First, Exploratory Factor Analysis (EFA) was performed on a training sample of 28,254 participants to identify the underlying structure of the survey and reduce it from 98 items to 31 items. Then, Exploratory Structural Equation Modeling (ESEM) was applied to two separate validation samples (n = 9,394 and n = 9,923) to confirm the factor structure held in independent data. Finally, measurement invariance testing was conducted to determine whether the survey measures the same constructs equivalently across boys vs. girls and younger vs. older participants.
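The three-stage pipeline described above can be sketched in code. This is a toy illustration on synthetic data (the real study used actual parent survey responses, and its exact item-retention criteria are not reproduced here); the sample sizes are taken from the paper, the 0.40 loading cutoff is a common rule of thumb, and scikit-learn's `FactorAnalysis` stands in for the study's EFA software.

```python
# Sketch of the study's split-then-reduce design, on synthetic stand-in data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
N_TOTAL, N_ITEMS, N_FACTORS = 47_571, 98, 6

# Simulate 98 item responses driven by 6 latent factors (stand-in for real surveys).
latent = rng.normal(size=(N_TOTAL, N_FACTORS))
true_loadings = rng.normal(scale=0.8, size=(N_FACTORS, N_ITEMS))
X = latent @ true_loadings + rng.normal(scale=0.5, size=(N_TOTAL, N_ITEMS))

# Stage 1: split into a training sample (EFA) and two held-out validation samples
# (sample sizes as reported in the paper).
idx = rng.permutation(N_TOTAL)
train, val1, val2 = np.split(idx, [28_254, 28_254 + 9_394])

efa = FactorAnalysis(n_components=N_FACTORS, rotation="varimax", random_state=0)
efa.fit(X[train])
loadings = efa.components_.T          # shape: (items, factors)

# Stage 2 idea: retain only items with a strong dominant loading (0.40 is a
# conventional cutoff; the paper's actual criteria may differ), then the
# reduced structure would be re-tested via ESEM on val1 and val2.
keep = np.abs(loadings).max(axis=1) >= 0.40
reduced_items = np.flatnonzero(keep)
print(len(train), len(val1), len(val2), loadings.shape)
```

The key design point is that `val1` and `val2` never touch the EFA step, which is what guards against overfitting the factor structure to one sample.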

What They Found

- Total sample: 47,571 participants analyzed
- Refined items: 98 → 31 after EFA reduction
- Factors: 6 validated developmental domains
- Invariance: demonstrated across gender and age

Validated Six-Factor Structure

The Exploratory Factor Analysis identified a clear six-factor solution, reducing the original 98-item survey to 31 items across six developmental domains. When this structure was tested on two independent validation samples using Exploratory Structural Equation Modeling, it demonstrated strong goodness-of-fit — meaning the six-factor structure held up across different groups of participants, not just the initial training sample.

The Six Developmental Domains

1. Negative Emotionality: emotional reactivity, mood dysregulation, anxiety-like symptoms
2. Reading/Writing Difficulties: academic literacy challenges, reading fluency, written expression
3. Hyperactive-Disruptive: impulsivity, difficulty sitting still, disruptive behavior
4. Academic Disengagement: difficulty sustaining focus on schoolwork, task avoidance
5. Motor/Coordination Problems: gross and fine motor deficits, clumsiness, poor balance
6. Social Communication Problems: difficulty reading social cues, peer interaction challenges
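
In practice, a multidomain survey like this yields one score per domain by aggregating the items that load on each factor. The sketch below shows that scoring step; the item-to-domain assignment here is invented for illustration (the published mapping of the 31 items is in the paper itself).

```python
# Hypothetical scoring sketch: average the items mapped to each of the six
# domains. The index ranges below are illustrative, NOT the published mapping.
import numpy as np

DOMAINS = {
    "negative_emotionality":  range(0, 6),
    "reading_writing":        range(6, 11),
    "hyperactive_disruptive": range(11, 17),
    "academic_disengagement": range(17, 22),
    "motor_coordination":     range(22, 26),
    "social_communication":   range(26, 31),
}

def score_domains(responses: np.ndarray) -> dict:
    """responses: length-31 vector of item ratings -> one mean score per domain."""
    return {name: float(responses[list(items)].mean())
            for name, items in DOMAINS.items()}

example = np.full(31, 3.0)   # a parent rating every item "3"
print(score_domains(example))
```

Averaging (rather than summing) keeps domain scores on the original rating scale even though the domains contain different numbers of items.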

High Reliability: Internal Consistency and Test-Retest

Each of the six subscales demonstrated high internal reliability, meaning the items within each factor consistently measure the same construct. Test-retest reliability coefficients (assessed via Pearson correlations) were also high for each subscale, confirming that the instrument produces stable, repeatable measurements when administered to the same individuals over time.

High reliability is a prerequisite for any assessment used to measure change — if an instrument's scores fluctuate randomly between administrations, observed "improvement" could be noise rather than genuine change. The BB-MDS's strong reliability confirms that the changes measured in other Brain Balance studies (particularly the Frontiers in Psychology N=4,041 study) reflect actual developmental change, not measurement error.
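Both reliability statistics mentioned above have simple closed forms, sketched below on synthetic data. Cronbach's alpha is the standard internal-consistency statistic (the paper does not specify its exact coefficient choice, so treat this as a generic illustration), and test-retest reliability is the Pearson correlation between two administrations.

```python
# Sketches of the two reliability ideas: internal consistency (Cronbach's
# alpha) and test-retest stability (Pearson r). Data here is synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents, items) matrix for one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(500, 1))                        # one latent construct
subscale = trait + rng.normal(scale=0.6, size=(500, 5))  # 5 correlated items

time1 = subscale.mean(axis=1)
time2 = time1 + rng.normal(scale=0.3, size=500)          # retest with noise
retest_r = np.corrcoef(time1, time2)[0, 1]

print(round(cronbach_alpha(subscale), 2), round(retest_r, 2))
```

Because the five simulated items share one underlying trait, alpha comes out high; if the items measured unrelated things, the summed-score variance would shrink toward the sum of item variances and alpha would fall toward zero.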

Measurement Invariance Across Gender and Age

The survey's factor structure was tested for measurement invariance — whether it measures the same constructs in the same way across different demographic groups. The BB-MDS demonstrated equivalence across four groups stratified by reported gender (male vs. female) and adolescent status (younger vs. older participants).

This is a critical finding because it means the survey is not biased toward one gender or age group. A score of "5" on the hyperactive-disruptive subscale means the same thing for a 6-year-old boy as it does for a 14-year-old girl. Without measurement invariance, comparisons across demographic groups would be unreliable.
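The weakest level of invariance, configural invariance, asks whether the same items group onto the same factors in every subgroup. The sketch below illustrates only that idea on synthetic data; it is not a substitute for the study's actual method, which compares nested multi-group models with fit indices (e.g., change in CFI) in SEM software.

```python
# Configural-invariance illustration (NOT a full invariance test): fit the
# same factor model separately in two groups and check that each item's
# dominant factor defines the same item grouping in both.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_factors, n_items = 2, 8
true = np.zeros((n_factors, n_items))
true[0, :4] = 0.9       # items 0-3 load on factor 0
true[1, 4:] = 0.9       # items 4-7 load on factor 1

def sample_group(n):
    f = rng.normal(size=(n, n_factors))
    return f @ true + rng.normal(scale=0.5, size=(n, n_items))

def loadings(X):
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
    return fa.fit(X).components_.T

group_a, group_b = sample_group(3000), sample_group(3000)
la = np.abs(loadings(group_a)).argmax(axis=1)
lb = np.abs(loadings(group_b)).argmax(axis=1)

# Compare item partitions, ignoring arbitrary factor ordering between fits.
same_partition = all((la[i] == la[j]) == (lb[i] == lb[j])
                     for i in range(n_items) for j in range(n_items))
print(same_partition)
```

The pairwise-partition comparison matters because factor order and sign are arbitrary across separate fits, so raw loading matrices cannot be compared column by column.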

Why It Matters

This Study Answers the "Made-Up Survey" Critique

One of the most common objections to Brain Balance outcomes research is: "They're measuring themselves with their own survey." This study directly addresses that concern. The BB-MDS is now a peer-reviewed, psychometrically validated instrument published in a Springer journal, subjected to Exploratory Factor Analysis, Exploratory Structural Equation Modeling, and measurement invariance testing — the same rigorous methods used to validate widely accepted clinical instruments.

The validation sample of 47,571 participants is larger than the validation samples used for many established pediatric assessment tools. This doesn't make the BB-MDS equivalent to a decades-old clinical instrument with independent normative data, but it does mean the survey meets the published standards for a psychometrically sound measurement tool.

This study's primary strategic function is as a foundation for other Brain Balance research. The Frontiers in Psychology study (N=4,041) and the at-home cognitive outcomes study (N=16,330) both use the BB-MDS as an outcome measure. The validity of their findings depends on the validity of the instrument. This validation study provides the peer-reviewed psychometric evidence that the BB-MDS measures what it claims to measure, reliably and consistently.

The three-stage validation methodology (EFA on a training sample → ESEM on two independent validation samples → measurement invariance testing) follows best practices in psychometric research. Using separate samples for exploration and confirmation prevents the overfitting problem that occurs when a factor structure is developed and validated on the same data. The use of two independent validation samples rather than one provides an additional layer of confirmation.

The reduction from 98 items to 31 items is itself a meaningful result. It demonstrates methodological rigor — the researchers didn't simply validate whatever they started with. They stripped the instrument down to its core, kept only the items with the strongest factor loadings, and verified that the refined version held its structure across independent samples. A 31-item survey is also more practical for clinical use, reducing parent burden and increasing completion rates.

Study Limitations

The BB-MDS was developed by Brain Balance and validated using Brain Balance program participants, not a general population sample. The instrument has not yet been independently validated by researchers outside the organization. The survey relies on parent-reported ratings rather than clinician-administered assessments, and may be subject to response biases inherent in parent questionnaires. The study did not compare the BB-MDS to established clinical instruments (such as the CBCL or Conners) for convergent validity — an important next step for establishing the survey's relationship to widely used diagnostic tools. The study's conflict-of-interest disclosure notes that Dr. Jackson is employed by Brain Balance and Dr. Jordan provides consulting services to the organization.

Full Citation (APA)
Jackson, R., & Jordan, J. T. (2023). Measurement properties of the Brain Balance® multidomain developmental survey: validated factor structure, internal reliability, and measurement invariance. Current Psychology, 42(36), 32483–32493. https://doi.org/10.1007/s12144-023-04248-2
Last reviewed and updated: May 2026