ICMI Companion
Publications
Steve DiPaola, Meehae Song: Combining Artificial Intelligence, Bio-Sensing and Multimodal Control for Bio-Responsive Interactives. ICMI Companion (2023)
Steve DiPaola, Suk Kyoung Choi: Art creation as an emergent multimodal journey in Artificial Intelligence latent space. ICMI Companion (2023)
Meehae Song, Steve DiPaola: Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives. ICMI Companion (2023)
Spatika Sampath Gujran, Merel M. Jung: Multimodal prompts effectively elicit robot-initiated social touch interactions. ICMI Companion (2023)
Jeffrey A. Brooks, Vineet Tiruvadi, Alice Baird, Panagiotis Tzirakis, Haoqi Li, Chris Gagne, Moses Oh, Alan Cowen: Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions. ICMI Companion (2023)
Stefano Papetti, Eric Larrieux, Martin Fröhlich: A Versatile Finger-Interaction Device with Audio-Tactile Feedback. ICMI Companion (2023)
Andrey Goncharov, Özge Nilay Yalçin, Steve DiPaola: Expectations vs. Reality: The Impact of Adaptation Gap on Avatars in Social VR Platforms. ICMI Companion (2023)
Peitong Li, Hui Lu, Ronald W. Poppe, Albert Ali Salah: Automated Detection of Joint Attention and Mutual Gaze in Free Play Parent-Child Interactions. ICMI Companion (2023)
Prasanth Murali, Mehdi Arjmand, Matias Volonte, Zixi Li, James Griffith, Michael K. Paasche-Orlow, Timothy W. Bickmore: Towards Automated Pain Assessment using Embodied Conversational Agents. ICMI Companion (2023)
Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Çeliktutan: The KCL-SAIR team's entry to the GENEA Challenge 2023 Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker. ICMI Companion (2023)
Taichi Higasa, Keitaro Tanaka, Qi Feng, Shigeo Morishima: Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability. ICMI Companion (2023)
Tamim Ahmed, Thanassis Rikakis, Aisling Kelliher, Mohammad Soleymani: ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors. ICMI Companion (2023)
Natalia Kalashnikova, Mathilde Hutin, Ioana Vasilescu, Laurence Devillers: Do We Speak to Robots Looking Like Humans As We Speak to Humans? A Study of Pitch in French Human-Machine and Human-Human Interactions. ICMI Companion (2023)
Armand Deffrennes, Lucile Vincent, Marie Pivette, Kevin El Haddad, Jacqueline Deanna Bailey, Monica Perusquía-Hernández, Soraia M. Alarcão, Thierry Dutoit: The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications. ICMI Companion (2023)
Pieter Wolfert, Gustav Eje Henter, Tony Belpaeme: "Am I listening?", Evaluating the Quality of Generated Data-driven Listening Motion. ICMI Companion (2023)
Bruno Carlos Dos Santos Melício, Linyun Xiang, Emily Dillon, Latha Soorya, Mohamed Chetouani, Andras Sarkany, Peter Kun, Kristian Fenech, András Lörincz: Composite AI for Behavior Analysis in Social Interactions. ICMI Companion (2023)
International Conference on Multimodal Interaction, ICMI 2023, Companion Volume, Paris, France, October 9-13, 2023. ICMI Companion (2023)
Mounika Kanakanti, Shantanu Singh, Manish Shrivastava: MultiFacet: A Multi-Tasking Framework for Speech-to-Sign Language Generation. ICMI Companion (2023)
Setareh Nasihati Gilani, Kimberly A. Pollard, David R. Traum: Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions. ICMI Companion (2023)
Ankur Chemburkar, Shuhong Lu, Andrew Feng: Discrete Diffusion for Co-Speech Gesture Synthesis. ICMI Companion (2023)
Stefano Papetti, Eric Larrieux, Martin Fröhlich: The TouchBox MK3: An Open-Source Device for Finger-Based Interaction with Advanced Auditory and Vibrotactile Feedback. ICMI Companion (2023)
Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze: Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors. ICMI Companion (2023)
Gwantae Kim, Yuanming Li, Hanseok Ko: The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation. ICMI Companion (2023)
Yann Frachi, Guillaume Chanel, Mathieu Barthet: Affective gaming using adaptive speed controlled by biofeedback. ICMI Companion (2023)
Seyma Takir, Elif Toprak, Pinar Uluer, Duygun Erol Barkana, Hatice Kose: Exploring the Potential of Multimodal Emotion Recognition for Hearing-Impaired Children Using Physiological Signals and Facial Expressions. ICMI Companion (2023)
Muxiao Sun, Qinglei Bu, Ying Hou, Xiaowen Ju, Limin Yu, Eng Gee Lim, Jie Sun: Virtual Reality Music Instrument Playing Game for Upper Limb Rehabilitation Training. ICMI Companion (2023)
Martina Galletti, Eleonora Pasqua, Francesca Bianchi, Manuela Calanca, Francesca Padovani, Daniele Nardi, Donatella Tomaiuoli: A Reading Comprehension Interface for Students with Learning Disorders. ICMI Companion (2023)
Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache: Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. ICMI Companion (2023)
Muhammad Riyyan Khan, Shahzeb Naeem, Usman Tariq, Abhinav Dhall, Malik Nasir Afzal Khan, Fares Al-Shargie, Hasan Al-Nashash: Exploring Neurophysiological Responses to Cross-Cultural Deepfake Videos. ICMI Companion (2023)
Marjorie Armando, Isabelle Régner, Magalie Ochs: Toward a Tool Against Stereotype Threat in Math: Children's Perceptions of Virtual Role Models. ICMI Companion (2023)
Marion Ristorcelli, Emma Gallego, Kévin Nguy, Jean-Marie Pergandi, Rémy Casanova, Magalie Ochs: Investigating the Impact of a Virtual Audience's Gender and Attitudes on a Human Speaker. ICMI Companion (2023)
Julia Ayache, Marta Bienkiewicz, Kathleen Richardson, Benoît G. Bardy: eXtended Reality of socio-motor interactions: Current Trends and Ethical Considerations for Mixed Reality Environments Design. ICMI Companion (2023)
Stéphane Viollet, Chauvet Martin, Ingargiola Jean-Marc: LinLED: Low latency and accurate contactless gesture interaction. ICMI Companion (2023)
Weiyu Zhao, Liangxiao Hu, Shengping Zhang: DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models. ICMI Companion (2023)
Catherine Neubauer: HAT3: The Human Autonomy Team Trust Toolkit. ICMI Companion (2023)
Sean Andrist, Dan Bohus, Zongjian Li, Mohammad Soleymani: Platform for Situated Intelligence and OpenSense: A Tutorial on Building Multimodal Interactive Applications for Research. ICMI Companion (2023)
Rodolfo L. Tonoli, Leonardo B. de M. M. Marques, Lucas H. Ueda, Paula Dornhofer Paro Costa: Gesture Generation with Diffusion Models Aided by Speech Activity Information. ICMI Companion (2023)
Björn Severitt, Nora Jane Castner, Olga Lukashova-Sanz, Siegfried Wahl: Leveraging gaze for potential error prediction in AI-support systems: An exploratory analysis of interaction with a simulated robot. ICMI Companion (2023)
Dhia-Elhak Goumri, Thomas Janssoone, Leonor Becerra-Bonache, Abdellah Fourtassi: Automatic Detection of Gaze and Smile in Children's Video Calls. ICMI Companion (2023)
Tobias B. Ricken, Peter Bellmann, Sascha Gruss, Hans A. Kestler, Steffen Walter, Friedhelm Schwenker: Pain Recognition Differences between Female and Male Subjects: An Analysis based on the Physiological Signals of the X-ITE Pain Database. ICMI Companion (2023)
Crystal Yang, Karen Arredondo, Jung In Koh, Paul Taele, Tracy Hammond: HEARD-LE: An Intelligent Conversational Interface for Wordle. ICMI Companion (2023)
Fábio Barros, António J. S. Teixeira, Samuel S. Silva: Developing a Generic Focus Modality for Multimodal Interactive Environments. ICMI Companion (2023)
Sutirtha Chakraborty, Joseph Timoney: Multimodal Synchronization in Musical Ensembles: Investigating Audio and Visual Cues. ICMI Companion (2023)
Théo Deschamps-Berger, Lori Lamel, Laurence Devillers: Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations. ICMI Companion (2023)
Paul Pu Liang, Louis-Philippe Morency: Tutorial on Multimodal Machine Learning: Principles, Challenges, and Open Questions. ICMI Companion (2023)
Anna Lea Reinwarth, Tanja Schneeberger, Fabrizio Nunnari, Patrick Gebhard, Uwe Altmann, Janet Wessler: Look What I Made It Do - The ModelIT Method for Manually Modeling Nonverbal Behavior of Socially Interactive Agents. ICMI Companion (2023)
Alex-Razvan Ispas, Théo Deschamps-Berger, Laurence Devillers: A multi-task, multi-modal approach for predicting categorical and dimensional emotions. ICMI Companion (2023)
Eleonora Aida Beccaluva, Marta Curreri, Giulia Da Lisca, Pietro Crovari: Using Implicit Measures to Assess User Experience in Children: A Case Study on the Application of the Implicit Association Test (IAT). ICMI Companion (2023)
Nguyen Tan Viet Tuyen, Viktor Schmuck, Oya Çeliktutan: Gesticulating with NAO: Real-time Context-Aware Co-Speech Gesture Generation for Human-Robot Interaction. ICMI Companion (2023)
Nada Alalyani, Nikhil Krishnaswamy: A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. ICMI Companion (2023)