A new machine learning framework has been developed to detect fetal movements using a wearable array of vibroacoustic sensors, offering a potential tool for assessing fetal health during pregnancy. The system combines piezoelectric and acoustic sensors to capture a range of movements, validated against ultrasound as the gold standard. An ensemble model achieved moderate precision and recall in identifying movements, demonstrating feasibility for low-cost monitoring, especially in resource-limited settings where stillbirth rates remain high.
Background on Stillbirth and Fetal Movement Monitoring
Stillbirth represents a significant global health challenge, with estimates indicating around two million cases annually, disproportionately affecting low- and middle-income countries. Regional disparities are stark, with rates exceeding 20 per 1,000 births in parts of Africa compared to under three in Western Europe. Underreporting in these areas exacerbates the issue, underscoring inequalities in healthcare access. Factors like maternal education, socioeconomic status, and availability of prenatal care contribute to these variations.
Fetal movements serve as a key indicator of well-being, with sudden reductions often signaling distress that may warrant intervention. Traditional out-of-clinic monitoring relies on maternal perception, which is subjective and influenced by factors such as gestational age, body mass index, and medication. Studies show mothers detect only a fraction of actual movements, limiting the reliability of this approach. Ultrasound provides objective data but is resource-intensive, requiring trained professionals and equipment, which makes frequent use impractical, particularly in rural or low-resource environments. Prolonged ultrasound exposure also raises theoretical safety concerns, though the evidence is inconclusive.
Wearable monitors have emerged as alternatives, using accelerometers or acoustic sensors to track movements objectively. Prior efforts focused on single modalities, achieving variable success in home settings but lacking robust clinical validation. Recent proposals integrate multiple sensor types for complementary data, yet challenges persist in distinguishing fine from gross movements and in validating against ultrasound in diverse populations.
Study Design and Sensor Array
The research involved 25 pregnant participants recruited from an urban maternity unit, spanning 24 to 38 weeks of gestation. Participants included those with high-risk factors such as elevated body mass index, diabetes, or prior fetal loss, reflecting real-world variability. Data collection occurred during 30-minute ultrasound sessions, with 38 scans ultimately analyzed because of follow-up constraints.
A heterogeneous sensor array was placed on the abdomen: three piezoelectric sensors for vibration detection and three custom acoustic sensors for sound capture. An inertial measurement unit on the ultrasound probe filtered probe-induced noise. Participants noted perceived movements via a button, while a clinician labeled ultrasound-observed activities in real time, categorizing them as general, breathing, startle, or limb movements.
Data Processing and Machine Learning Approach
Raw signals underwent preprocessing: offset removal, filtering to eliminate low-frequency noise like maternal respiration, and synchronization with ultrasound timestamps. Movements were labeled as positive within clinician-noted windows, accounting for brief delays.
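The preprocessing steps described above can be sketched as follows. This is an illustrative example only: the sampling rate, cutoff frequency, and filter order are assumptions, not values taken from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess(signal, fs=512.0, cutoff_hz=1.0):
    """Remove DC offset and low-frequency drift from one sensor channel."""
    centered = signal - np.mean(signal)  # offset removal
    # High-pass filter to suppress slow components such as maternal respiration
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, centered)    # zero-phase filtering (no time shift)

# Synthetic demo: a 0.3 Hz "respiration" drift plus a faint 20 Hz component.
fs = 512.0
t = np.arange(0, 4, 1 / fs)
raw = 0.5 + np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)
clean = preprocess(raw, fs=fs)
```

Zero-phase filtering (`sosfiltfilt`) matters here because the filtered signals must stay aligned with the ultrasound timestamps used for labeling.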
Feature extraction focused on time-domain metrics such as root mean square and frequency-domain characteristics derived via wavelet transforms. Models were trained on concatenated sensor data, with classifiers including support vector machines, random forests, and ensemble methods explored. An ensemble RUSBoost model, which addresses class imbalance by combining random undersampling with boosting, was ultimately selected for its balanced performance.
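The RUSBoost idea, random undersampling of the majority class followed by boosting, can be sketched with scikit-learn alone (a dedicated implementation exists in the imbalanced-learn package as `RUSBoostClassifier`). The RMS feature, window size, and synthetic data below are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def rms(window):
    """Time-domain root-mean-square energy of one signal window."""
    return np.sqrt(np.mean(np.square(window)))

# Synthetic imbalanced data: ~10% "movement" windows with higher energy.
n = 1000
y = (rng.random(n) < 0.1).astype(int)
windows = rng.normal(0, 1 + 2 * y[:, None], size=(n, 64))  # movement = louder
X = np.column_stack([np.apply_along_axis(rms, 1, windows),
                     windows.std(axis=1)])

# Random undersampling: keep all minority samples plus an equal-sized
# random subset of the majority class, then boost on the balanced set.
minority = np.flatnonzero(y == 1)
majority = rng.choice(np.flatnonzero(y == 0), size=minority.size, replace=False)
idx = np.concatenate([minority, majority])

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X[idx], y[idx])
acc = clf.score(X, y)
```

Undersampling before boosting keeps the weak learners from simply predicting the majority "no movement" class, which is the failure mode this study's class imbalance would otherwise invite.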
Cross-validation used participant-wise splits to avoid data leakage, with hyperparameters tuned via Bayesian optimization. Performance metrics included precision, recall, and F1-score, emphasizing detection of true movements amid noise.
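Participant-wise splitting keeps every window from a given participant in the same fold, so the model is never tested on a person it trained on. A minimal sketch using scikit-learn's `GroupKFold` (the group labels and fold count here are illustrative):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # 200 feature windows, 5 features each
participants = rng.integers(0, 25, 200)  # which of 25 participants each window came from

gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, groups=participants):
    train_p = set(participants[train_idx])
    test_p = set(participants[test_idx])
    assert train_p.isdisjoint(test_p)    # no participant appears on both sides
```

Random per-window splits would leak each participant's signal characteristics into the test set and inflate the reported metrics, which is why group-aware validation is the appropriate choice here.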
Key Findings
The model detected fetal movements with a precision of 0.44 and a recall of 0.61, yielding an F1-score of 0.51. Piezoelectric sensors excelled at capturing gross movements, while the acoustic sensors were better at detecting subtler ones such as breathing. Concatenating data from both modalities improved results over either alone, highlighting their complementarity.
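The reported F1-score follows directly from the precision and recall figures, since F1 is their harmonic mean:

```python
# F1 = 2PR / (P + R), the harmonic mean of precision and recall.
precision, recall = 0.44, 0.61
f1 = 2 * precision * recall / (precision + recall)
# rounds to 0.51, consistent with the reported F1-score
```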
Analysis revealed higher detection rates for limb and general movements compared to breathing or startles, possibly due to signal strength. Participant variability, influenced by factors like amniotic fluid or placental position, affected accuracy. The system demonstrated potential for distinguishing movement types, though further refinement is needed.
Context and Future Implications
This work addresses gaps in fetal monitoring by validating a low-cost, wearable solution against ultrasound in a clinical setting. In high-income contexts, it could complement routine care, enabling home tracking between visits. For low- and middle-income countries, where ultrasound access is limited, such devices could empower community health workers, potentially reducing stillbirths through timely alerts.
The approach aligns with global efforts to personalize prenatal care, incorporating machine learning for nuanced analysis. Limitations include the controlled environment and modest sample size; real-world testing in diverse settings is essential. Future iterations might integrate additional modalities like accelerometers or refine models for movement subtyping.
By leveraging off-the-shelf components, the framework paves the way for scalable, accessible tools, contributing to sustainable development goals for maternal and child health.
Ashik, A. K. et al. (2025). A machine learning model for assessing fetal health during pregnancy. Frontiers in Bioengineering and Biotechnology, 13:1691064. DOI: 10.3389/fbioe.2025.1691064
