We gladly accept PDF or PowerPoint versions of the presentations and/or posters that you presented during the workshop. Just mail them to us and we'll add them below.
We received 23 paper submissions, of which 12 were chosen for oral presentation and another 5 for poster presentation. In addition, two demo proposals were accepted.
The full proceedings are available as a large PDF file (13 MB), but you can also access each individual paper from the list below. The preface and table of contents of the proceedings are also available as a separate, smaller PDF file. Finally, all papers are now published in the ACL Anthology.
Below are the accepted talks, posters and demos. For the time slots and more information about the program, please see the program page.
All accepted talks will be 20 minutes long, plus 5–10 minutes for questions and discussion. The poster and demo session will start with a short introduction, in which each presenter is welcome to give a one-minute presentation (no more than 1–2 slides) before everyone grabs their coffee and starts walking around.
Mark Hawley. SLPAT in practice: lessons from translational research
Jens Forster, Oscar Koller, Christian Oberdörfer, Yannick Gweth and Hermann Ney. Improving Continuous Sign Language Recognition: Speech Recognition Techniques and System Design
Christophe Veaux, Junichi Yamagishi and Simon King. Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction
Ryo Aihara, Tetsuya Takiguchi and Yasuo Ariki. Individuality-Preserving Voice Conversion for Articulation Disorders Using Locality-Constrained NMF
Heidi Christensen, Iñigo Casanueva, Stuart Cunningham, Phil Green and Thomas Hain. homeService: Voice-enabled assistive technology in the home using cloud-based automatic speech recognition
Foad Hamidi and Melanie Baljko. Automatic Speech Recognition: A Shifted Role in Early Speech Intervention?
Milan Rusko, Marian Trnka, Sakhia Darjaa and Juraj Hamar. The dramatic piece reader for the blind and visually impaired
Ken Sadohara, Hiroaki Kojima, Takuya Narita, Misato Nihei, Minoru Kamata, Shinichi Onaka, Yoshihiro Fujita and Takenobu Inoue. Sub-lexical Dialogue Act Classification in a Spoken Dialogue System Support for the Elderly with Cognitive Disabilities
Chitralekha Bhat, Imran Ahmed, Vikram Saxena and Sunil Kumar Kopparapu. Visual Subtitles for Internet Videos
William Li, Don Fredette, Alexander Burnham, Bob Lamoureux, Marva Serotkin and Seth Teller. Making Speech-Based Assistive Technology Work for a Real User [PDF poster]
Lize Broekx, Katrien Dreesen, Jort Florent Gemmeke and Hugo Van Hamme. Comparing and combining classifiers for self-taught vocal interfaces
Toshiya Yoshioka, Tetsuya Takiguchi and Yasuo Ariki. Robust Feature Extraction to Utterance Fluctuation of Articulation Disorders Based on Random Projection
Frédéric Aman, Michel Vacher, Solange Rossato and François Portet. Analyzing the Performance of Automatic Speech Recognition for Ageing Voice: Does it Correlate with Dependency Level?
William Li, Jim Glass, Nicholas Roy and Seth Teller. Probabilistic Dialogue Modeling for Speech-Enabled Assistive Technology [PDF slides]
Michel Vacher, Benjamin Lecouteux, Dan Istrate, Thierry Joubert, François Portet, Mohamed Sehili and Pedro Chahuara. Experimental Evaluation of Speech Recognition Technologies for Voice-based Home Automation Control in a Smart Home
Lode Vuegen, Bert Van Den Broeck, Peter Karsmakers, Hugo Van Hamme and Bart Vanrumste. Automatic Monitoring of Activities of Daily Living based on Real-life Acoustic Sensor Data: a preliminary study
Hanne Deprez, Emre Yilmaz, Stefan Lievens and Hugo Van Hamme. Automating speech reception threshold measurements using automatic speech recognition
Kathleen Fraser, Frank Rudzicz, Naida Graham and Elizabeth Rochon. Automatic speech recognition in the diagnosis of primary progressive aphasia
Bart Ons, Netsanet Tessema, Janneke van de Loo, Jort Gemmeke, Guy De Pauw, Walter Daelemans and Hugo Van Hamme. A Self Learning Vocal Interface for Speech-impaired Users
Jun Wang, Arvind Balasubramanian, Luis Mojica de La Vega, Jordan R. Green, Ashok Samal and Balakrishnan Prabhakaran. Word Recognition from Continuous Articulatory Movement Time-series Data using Symbolic Representations