SAMI: an M-Health application to telemonitor intelligibility and speech disorder severity in head and neck cancers

Perceptual measures, such as intelligibility and speech disorder severity, are widely used in the clinical assessment of speech disorders in patients treated for oral or oropharyngeal cancer. Despite their widespread use, these measures are known to be subjective and hard to reproduce. An M-Health assessment based on automatic prediction has therefore been proposed as a more robust and reliable alternative. Despite recent progress, these automatic approaches remain largely theoretical, and the need to implement them in real clinical practice is becoming pressing. In the present work we introduce SAMI, a clinical mobile application that predicts speech intelligibility and disorder severity and monitors patient progress on these measures over time. The first part of this work describes the design and development of the systems supporting SAMI: we show how deep neural speaker embeddings are used to automatically regress speech disorder measurements (intelligibility and severity), and we detail the training and validation of the system on a French head and neck cancer speech corpus. We also test the model on a secondary corpus recorded in real clinical conditions. The second part presents the results obtained from deploying the system in a real clinical environment over the course of several weeks. The scores produced by SAMI are compared to an a posteriori perceptual evaluation conducted by a panel of experts on the newly recorded data. The comparison shows a high correlation and a low error between the perceptual and automatic evaluations, supporting the clinical use of the proposed application.
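The regression approach summarized in the abstract, mapping deep neural speaker embeddings to perceptual scores, can be illustrated with a minimal sketch. The example below is not the SAMI implementation: the embedding extractor, embedding dimensionality, score scale, and choice of regressor are illustrative assumptions, and random placeholder data stands in for the French head and neck cancer corpus. It only shows the general pattern of fitting a regression head on precomputed embeddings and measuring agreement with expert ratings via correlation and mean absolute error, the kind of agreement metrics the abstract refers to.

# Minimal illustrative sketch (not the SAMI implementation): regress a perceptual
# score from precomputed deep speaker embeddings and measure agreement with the
# expert ratings via Spearman correlation and mean absolute error.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder data: one fixed-size embedding per recording (192 dimensions is an
# assumption, e.g. the output size of an ECAPA-TDNN extractor) and one expert
# perceptual score per recording on an assumed 0-10 scale.
n_recordings, emb_dim = 120, 192
X = rng.normal(size=(n_recordings, emb_dim))   # speaker embeddings (placeholders)
y = rng.uniform(0.0, 10.0, size=n_recordings)  # perceptual scores (placeholders)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Regression head on top of the embeddings: standardize, then support-vector regression.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Agreement between automatic and perceptual scores.
rho, _ = spearmanr(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
print(f"Spearman rho: {rho:.2f}  MAE: {mae:.2f}")

In the deployed application, the embeddings would be extracted from recordings collected on the patient's device and the reference scores would come from expert perceptual ratings; here both are random placeholders.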

Bibliographic Details
Published in: Frontiers in Artificial Intelligence, Vol. 7 (2024)
Main Authors: Sebastião Quintas, Robin Vaysse, Mathieu Balaguer, Vincent Roger, Julie Mauclair, Jérôme Farinas, Virginie Woisard, Julien Pinquier
Format: Article in Journal/Newspaper
Language: English
Published: Frontiers Media S.A., 2024
ISSN: 2624-8212
DOI: 10.3389/frai.2024.1359094
Subjects: speech intelligibility; speaker embeddings; head and neck cancer; deep learning; healthcare application; Electronic computers. Computer science (QA75.5-76.95)
Online Access: https://doi.org/10.3389/frai.2024.1359094
https://www.frontiersin.org/articles/10.3389/frai.2024.1359094/full
https://doaj.org/article/3f6c34c6544649c4b7f65691780f53d8