Ryo Ishii
Publication Activity (10 Years)
Years Active: 2006-2024
Publications (10 Years): 56
Top Topics
Spatial Arrangement
Body Motions
Spoken Language
Multimodal Information
Top Venues
IVA
ICMI
ICASSP
INTERSPEECH
Publications
Yukiko I. Nakano, Fumio Nihei, Ryo Ishii, Ryuichiro Higashinaka: Selecting Iconic Gesture Forms Based on Typical Entity Images. J. Inf. Process. 32 (2024)
Kenta Hama, Atsushi Otsuka, Ryo Ishii: Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model. HCI (20) (2024)
Koya Ito, Yoko Ishii, Ryo Ishii, Shin'ichiro Eitoku, Kazuhiro Otsuka: Exploring Multimodal Nonverbal Functional Features for Predicting the Subjective Impressions of Interlocutors. IEEE Access 12 (2024)
Takato Hayashi, Ryusei Kimura, Ryo Ishii, Fumio Nihei, Atsushi Fukayama, Shogo Okada: Rapport Prediction Using Pairwise Learning in Dyadic Conversations Among Strangers and Among Friends. HCI (22) (2024)
Chihiro Takayama, Shinichirou Eitoku, Fumio Nihei, Ryo Ishii, Yukiko I. Nakano, Atsushi Fukayama: Investigating the effect of video extraction summarization techniques on the accuracy of impression conveyance in group dialogue. OZCHI (2023)
Atsushi Ito, Yukiko I. Nakano, Fumio Nihei, Tatsuya Sakato, Ryo Ishii, Atsushi Fukayama, Takao Nakamura: Estimating and Visualizing Persuasiveness of Participants in Group Discussions. J. Inf. Process. 31 (2023)
Ryo Ishii, Akira Morikawa, Shin'ichiro Eitoku, Atsushi Fukayama, Takao Nakamura: How Far ahead Can Model Predict Gesture Pose from Speech and Spoken Text? IVA (2023)
Toshiki Onishi, Naoki Azuma, Shunichi Kinoshita, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: Prediction of Various Backchannel Utterances Based on Multimodal Information. IVA (2023)
Atsushi Otsuka, Kenta Hama, Narichika Nomoto, Ryo Ishii, Atsushi Fukayama, Takao Nakamura: Learning User Embeddings with Generating Context of Posted Social Network Service Texts. HCI (15) (2023)
Shumpei Otsuchi, Koya Ito, Yoko Ishii, Ryo Ishii, Shinichirou Eitoku, Kazuhiro Otsuka: Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features. ICMI (2023)
Takato Hayashi, Candy Olivia Mawalim, Ryo Ishii, Akira Morikawa, Atsushi Fukayama, Takao Nakamura, Shogo Okada: A Ranking Model for Evaluation of Conversation Partners Based on Rapport Levels. IEEE Access 11 (2023)
Fumio Nihei, Ryo Ishii, Yukiko I. Nakano, Atsushi Fukayama, Takao Nakamura: Whether Contribution of Features Differ Between Video-Mediated and In-Person Meetings in Important Utterance Estimation. ICASSP (2023)
Shunichi Kinoshita, Toshiki Onishi, Naoki Azuma, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: A Study of Prediction of Listener's Comprehension Based on Multimodal Information. IVA (2023)
Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency: Continual Learning for Personalized Co-Speech Gesture Generation. ICCV (2023)
Ryo Ishii, Fumio Nihei, Yoko Ishii, Atsushi Otsuka, Kazuya Matsuo, Narichika Nomoto, Atsushi Fukayama, Takao Nakamura: Prediction of Love-Like Scores After Speed Dating Based on Pre-obtainable Personal Characteristic Information. INTERACT (4) (2023)
Akira Morikawa, Ryo Ishii, Hajime Noto, Atsushi Fukayama, Takao Nakamura: Determining most suitable listener backchannel type for speaker's utterance. IVA (2022)
Toshiki Onishi, Arisa Yamauchi, Asahi Ogushi, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: Modeling Japanese Praising Behavior by Analyzing Audio and Visual Behaviors. Frontiers Comput. Sci. 4 (2022)
Toshiki Onishi, Asahi Ogushi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: A Comparison of Praising Skills in Face-to-Face and Remote Dialogues. LREC (2022)
Fumio Nihei, Ryo Ishii, Yukiko I. Nakano, Kyosuke Nishida, Ryo Masumura, Atsushi Fukayama, Takao Nakamura: Dialogue Acts Aided Important Utterance Detection Based on Multiparty and Multimodal Information. INTERSPEECH (2022)
Atsushi Ito, Yukiko I. Nakano, Fumio Nihei, Tatsuya Sakato, Ryo Ishii, Atsushi Fukayama, Takao Nakamura: Predicting Persuasiveness of Participants in Multiparty Conversations. IUI Companion (2022)
Asahi Ogushi, Toshiki Onishi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata: Analysis of praising skills focusing on utterance contents. INTERSPEECH (2022)
Bo Yang, Ryo Ishii, Zheng Wang, Tsutomu Kaizuka, Toshiyuki Sugimachi, Toshiaki Sakurai, Tetsuo Maki, Kimihiko Nakano: Evaluation of Driver Assistance System Presenting Information of Other Vehicles through Peripheral Vision at Unsignalized Intersections. Int. J. Intell. Transp. Syst. Res. 19 (1) (2021)
Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency: Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data. ACL/IJCNLP (1) (2021)
Ryo Ishii, Ryuichiro Higashinaka, Koh Mitsuda, Taichi Katayama, Masahiro Mizukami, Junji Tomita, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Yushi Aono: Methods for Efficiently Constructing Text-dialogue-agent System using Existing Anime Characters. J. Inf. Process. 29 (2021)
Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency: Multimodal and Multitask Approach to Listener's Backchannel Prediction: Can Prediction of Turn-changing and Turn-management Willingness Improve Backchannel Modeling? IVA (2021)
Chihiro Takayama, Mitsuhiro Goto, Shinichirou Eitoku, Ryo Ishii, Hajime Noto, Shiro Ozawa, Takao Nakamura: How People Distinguish Individuals from their Movements: Toward the Realization of Personalized Agents. HAI (2021)
Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency: Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data. CoRR (2021)
Ryo Ishii, Shiro Kumano, Ryuichiro Higashinaka, Shiro Ozawa, Testuya Kinebuchi: Estimation of Empathy Skill Level and Personal Traits Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (41) (2021)
Ryo Ishii, Chaitanya Ahuja, Yukiko I. Nakano, Louis-Philippe Morency: Impact of Personality on Nonverbal Behavior Generation. IVA (2020)
Terrance Liu, Paul Pu Liang, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas Allen, Louis-Philippe Morency: Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study. CoRR (2020)
Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata: Analyzing Nonverbal Behaviors along with Praising. ICMI (2020)
Ryo Ishii, Ryuichiro Higashinaka, Koh Mitsuda, Taichi Katayama, Masahiro Mizukami, Junji Tomita, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Yushi Aono: Methods of Efficiently Constructing Text-Dialogue-Agent System Using Existing Anime Character. HCI (45) (2020)
Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, Louis-Philippe Morency: No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures. EMNLP (Findings) (2020)
Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency: Can Prediction of Turn-management Willingness Improve Turn-changing Modeling? IVA (2020)
Ryo Masumura, Mana Ihori, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Takanobu Oba, Ryuichiro Higashinaka: Improving Speech-Based End-of-Turn Detection Via Cross-Modal Representation Learning with Punctuated Text Data. ASRU (2019)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing. HCI (14) (2019)
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Automatic Head-Nod Generation Using Utterance Text Considering Personality Traits. IWSDS (2019)
Fumio Nihei, Yukiko I. Nakano, Ryuichiro Higashinaka, Ryo Ishii: Determining Iconic Gesture Forms based on Entity Image Representation. ICMI (2019)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation. Multimodal Technol. Interact. 3 (4) (2019)
Ryo Ishii, Ryuichiro Higashinaka, Junji Tomita: Predicting Nods by using Dialogue Acts in Dialogue. LREC (2018)
Takahiro Matsumoto, Mitsuhiro Goto, Ryo Ishii, Tomoki Watanabe, Tomohiro Yamada, Michita Imai: Where Should Robots Talk?: Spatial Arrangement Study from a Participant Workload Perspective. HRI (2018)
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Automatic Generation of Head Nods using Utterance Texts. RO-MAN (2018)
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Generating Body Motions using Spoken Language in Dialogue. IVA (2018)
Ryo Ishii, Ryuichiro Higashinaka, Kyosuke Nishida, Taichi Katayama, Nozomi Kobayashi, Junji Tomita: Automatically Generating Head Nods with Linguistic Information. HCI (14) (2018)
Ryo Masumura, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Ryuichiro Higashinaka, Yushi Aono: Neural Dialogue Context Online End-of-Turn Detection. SIGDIAL Conference (2018)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita: Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level. ICMI (2018)
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita: Automatic Generation System of Virtual Agent's Motion using Natural Language. IVA (2018)
Ryo Masumura, Taichi Asami, Hirokazu Masataki, Ryo Ishii, Ryuichiro Higashinaka: Online End-of-Turn Detection from Speech Based on Stacked Time-Asynchronous Sequential Networks. INTERSPEECH (2017)
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Analyzing gaze behavior during turn-taking for estimating empathy skill level. ICMI (2017)
Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka: Computational model of idiosyncratic perception of others' emotions. ACII (2017)
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings. HAI (2017)
Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato: Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations. IEEE Trans. Multim. 19 (1) (2017)
Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka: Comparing empathy perceived by interlocutors in multiparty conversation and external observers. ACII (2017)
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. ICMI (2016)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Prediction of Who Will Be the Next Speaker and When Using Gaze Behavior in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6 (1) (2016)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Using Respiration to Predict Who Will Speak Next and When in Multiparty Meetings. ACM Trans. Interact. Intell. Syst. 6 (2) (2016)
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings. ICMI (2015)
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka: Predicting next speaker based on head movement in multi-party meetings. ICASSP (2015)
Ryo Ishii, Shiro Ozawa, Akira Kojima, Kazuhiro Otsuka, Yuki Hayashi, Yukiko I. Nakano: Design and Evaluation of Mirror Interface MIOSS to Overlay Remote 3D Spaces. INTERACT (4) (2015)
Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato: Automatic gaze analysis in multiparty conversations based on Collective First-Person Vision. FG (2015)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings. ICMI (2014)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis and modeling of next speaking start timing based on gaze behavior in multi-party meetings. ICASSP (2014)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato: Analysis of Timing Structure of Eye Contact in Turn-changing. GazeIn@ICMI (2014)
Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Ryo Ishii, Junji Yamato: Using a Probabilistic Topic Model to Link Observers' Perception Tendency to Personality. ACII (2013)
Ryo Ishii, Yukiko I. Nakano, Toyoaki Nishida: Gaze awareness in conversational agents: Estimating a user's conversational engagement from eye gaze. ACM Trans. Interact. Intell. Syst. 3 (2) (2013)
Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato: MM+Space: n x 4 degree-of-freedom kinetic display for recreating multiparty conversation spaces. ICMI (2013)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato: Predicting next speaker and timing from gaze transition patterns in multi-party meetings. ICMI (2013)
Ryo Ishii, Shiro Ozawa, Harumi Kawamura, Akira Kojima: MoPaCo: High telepresence video communication system using motion parallax with monocular camera. ICCV Workshops (2011)
Ryo Ishii, Shiro Ozawa, Takafumi Mukouchi, Norihiko Matsuura: MoPaCo: Pseudo 3D Video Communication System. HCI (12) (2011)
Ryota Ooko, Ryo Ishii, Yukiko I. Nakano: Estimating a User's Conversational Engagement Based on Head Pose Information. IVA (2011)
Yukiko I. Nakano, Ryo Ishii: Estimating user's engagement from eye-gaze behaviors in human-agent conversations. IUI (2010)
Ryo Ishii, Yukiko I. Nakano: Estimating User's Conversational Engagement Based on Gaze Behaviors. IVA (2008)
Ryo Ishii, Toshimitsu Miyajima, Kinya Fujita, Yukiko I. Nakano: Avatar's Gaze Control to Facilitate Conversational Turn-Taking in Virtual-Space Multi-user Voice Chat System. IVA (2006)