This year we have 22 full papers and one system demonstration. To fit all of these presentations into a single day, the presentations will be shorter than usual; see the instructions for more information.
We are proud to have Jonas Beskow from KTH, Stockholm, Sweden, as invited speaker this year. He will give a talk with the following title and abstract:
Talking Heads, Signing Avatars and Social Robots: Exploring multimodality in assistive applications
Over the span of human existence our brains have evolved into sophisticated multimodal social signal processing machines. We are experts at detecting and decoding information from a variety of sources and interpreting this information in a social context. The human face is one of the most important social channels and plays a key role in the human communication chain. Today, with computer-animated characters becoming ubiquitous in games and media, and social robots starting to bring human-like social interaction capabilities into the physical world, it is possible to build applications that leverage the unique human capability for social communication in new ways to assist our lives and support us in a variety of domains.
This talk will cover a series of experiments attempting to quantify the effect of several traits of computer-generated characters/robots such as visual speech movements, non-verbal signals, physical embodiment and manual signing. It is shown that a number of human functions ranging from low-level speech perception to learning can benefit from the presence of such characters when compared to unimodal (e.g. audio-only) settings. Two examples are given of applications where these effects are exploited in order to provide support for people with special needs: a virtual lipreading support application for the hard of hearing and a signing avatar game for children with communicative disorders.
The following papers and system demonstration will be presented:

Sarah Ebling and Matt Huenerfauth. Bridging the gap between sign language machine translation and sign language animation using sequence classification.
Sarah Ebling, Rosalee Wolfe, Jerry Schnepp, Souad Baowidan, John McDonald, Robyn Moncrief, Sandra Sidler-Miserez and Katja Tissi. Synthesizing the finger alphabet of Swiss German Sign Language and evaluating the comprehensibility of the resulting animations.
Mika Hatano, Shinji Sako and Tadashi Kitamura. Contour-based Hand Pose Recognition for Sign Language Recognition.
Matt Huenerfauth, Pengfei Lu and Hernisa Kacorri. Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data.
Hernisa Kacorri and Matt Huenerfauth. Evaluating a Dynamic Time Warping Based Scoring Algorithm for Facial Expressions in ASL Animations.
Agnès Piquard-Kipffer, Odile Mella, Jérémy Miranda, Denis Jouvet and Luiza Orosanu. Qualitative investigation of the display of speech recognition results for communication with deaf people.
Lionel Fontan, Thomas Pellegrini, Julia Olcoz and Alberto Abad. Predicting disordered speech comprehensibility from Goodness of Pronunciation scores.
Seongjun Hahm, Daragh Heitzman and Jun Wang. Recognizing Dysarthric Speech due to Amyotrophic Lateral Sclerosis with Across-Speaker Articulatory Normalization.
Rizwan Ishaq, Dhananjaya Gowda, Paavo Alku and Begonya Garcia Zapirain. Vowel Enhancement in Early Stage Spanish Esophageal Speech Using Natural Glottal Flow Pulse and Vocal Tract Frequency Warping.
Stacey Oue, Ricard Marxer and Frank Rudzicz. Automatic dysfluency detection in dysarthric speech using deep belief networks.
Siddharth Sehgal and Stuart Cunningham. Model adaptation and adaptive training for the recognition of dysarthric speech.
Sriranjani R, S Umesh and M Ramasubba Reddy. Pronunciation Adaptation For Disordered Speech Recognition Using State-Specific Vectors of Phone-Cluster Adaptive Training.
Jun Wang, Seongjun Hahm and Ted Mau. Determining an Optimal Set of Flesh Points on Tongue, Lips, and Jaw for Continuous Silent Speech Recognition.
Ka-Ho Wong, Yu Ting Yeung, Patrick C. M. Wong, Gina-Anne Levow and Helen Meng. Analysis of Dysarthric Speech using Distinctive Feature Recognition.
E.A. Draffan, Mike Wald, Nawar Halabi, Ouadie Sabia, Wajdi Zaghouani, Amatullah Kadous, Amal Idris, Nadine Zeinoun, David Banes and Dana Lawand. Generating acceptable Arabic Core Vocabularies and Symbols for AAC users.
Phil Green, Ricard Marxer, Stuart Cunningham, Heidi Christensen, Frank Rudzicz, Maria Yancheva, André Coy, Massimiliano Malavasi and Lorenzo Desideri. Remote Speech Technology for Speech Professionals - the CloudCAST initiative.
Anna Pompili, Cristiana Amorim, Alberto Abad and Isabel Trancoso. Speech and language technologies for the automatic monitoring and training of cognitive functions.
Leen Sevens, Vincent Vandeghinste, Ineke Schuurman and Frank Van Eynde. Extending a Dutch Text-to-Pictograph Converter to English and Spanish.
Reina Ueda, Ryo Aihara, Tetsuya Takiguchi and Yasuo Ariki. Individuality-Preserving Spectrum Modification for Articulation Disorders Using Phone Selective Synthesis.
Michel Vacher, Benjamin Lecouteux, Frédéric Aman, Solange Rossato and François Portet. Recognition of Distress Calls in Distant Speech Setting: a Preliminary Experiment in a Smart Home.
Christophe Veaux, Junichi Yamagishi and Simon King. A Comparison of Manual and Automatic Voice Repair for Individuals with Vocal Disabilities.
Maria Yancheva, Kathleen Fraser and Frank Rudzicz. Using linguistic features longitudinally to predict clinical scores for Alzheimer's disease and related dementias.
Inês Almeida, Luísa Coheur and Sara Candeias. From European Portuguese to Portuguese Sign Language.