MYAMENS PERSONAL SITE
.01

ABOUT

PERSONAL DETAILS
yamen@myamens.com
Hello! I am a Research Scientist at SonyAI, based in Tokyo.

BIO

ABOUT ME

I received my M.Sc. and Ph.D. degrees in Media Design from Keio University in 2015 and 2018, respectively. After finishing my Ph.D., I worked as an assistant professor at Keio University from 2018 until 2020, then as Research and Development Director at AvatarIn Inc. until late 2022. Currently I am a Research Scientist at SonyAI. My research, namely “Radical Bodies”, expands on the topic of machines as an extension of our bodies, and emphasizes the role of technology and robotics in reshaping our innate abilities and cognitive capacities. My work, which is experience-driven, has been demonstrated and awarded at various international conferences such as SIGGRAPH, Augmented Human, and ICAT.

HOBBIES

INTERESTS

I love photography, hiking, and traveling whenever I have time!
.02

RESUME

ACADEMIC AND PROFESSIONAL POSITIONS
  • 2009
    2011
    SYRIA

    SENIOR PROGRAMMER, SYSTEM ANALYST

    JOY BOX

    System designer, analyst, and programmer at Joy Box, a game development company. Collaborated with administrators, 2D/3D designers, and programmers on various projects in entertainment, education, and media.
  • 2011
    2012
    SYRIA

    SENIOR PROGRAMMER, SYSTEM ANALYST

    SYRIATEL

    - Communicated with clients and stakeholders to understand their requirements. - Developed databases and performed data migration. - Developed both the front end and back end of the software. - Built reporting tools for stakeholders. - Liaised with upper management.
  • 2018
    2020
    JAPAN

    Assistant Professor

    KEIO UNIVERSITY GRADUATE SCHOOL OF MEDIA DESIGN

    Mentored master's and Ph.D. students' research and theses. Conducted and published research on human augmentation. Taught Machine Learning and Physical Prototyping classes.
  • 2020
    2022
    JAPAN

    AvatarCore Director

    AvatarIn Inc.

    Research and Development Director for the Avatar Core system: robotics software development, optimization, and control; real-time, low-latency communication over local networks and the internet; optimization for mass deployment; and leading the front-end development team.
  • 2022
    Now
    JAPAN

    Research Scientist

    SonyAI

EDUCATION
  • 2005
    2010
    SYRIA

    BACHELOR'S DEGREE IN COMPUTER SCIENCE

    DAMASCUS UNIVERSITY

    Received my degree in computer science, majoring in Artificial Intelligence, from Damascus University.
  • 2013
    2015
    JAPAN

    MASTER'S DEGREE IN MEDIA DESIGN

    KEIO UNIVERSITY

    I received my master's degree in Media Design from the Graduate School of Media Design, Keio University, with a GPA equivalent to 3.98/4.33, and was selected as one of the Dean's List students upon graduation.
  • 2015
    2018
    JAPAN

    PH.D. DEGREE IN MEDIA DESIGN

    KEIO UNIVERSITY

    Majored in Media Design at Keio University. Research activities involved haptics, telexistence, telepresence, and virtual/augmented reality.

HONORS AND AWARDS
  • 2012
    2018
    JAPAN

    MEXT SCHOLARSHIP

    MINISTRY OF EDUCATION IN JAPAN

    Fully funded scholarship covering master's and Ph.D. degrees at Japanese universities.
  • 2015
    SINGAPORE

    BEST DEMO AWARD

    AUGMENTED HUMAN 2015

    Received the Best Demo award at the International Conference on Augmented Human 2015 for the project titled "Mutual Hand Representation for Telexistence Robots Using Projected Virtual Hands".
  • 2015
    JAPAN

    DEAN'S LIST

    KEIO MEDIA DESIGN

    Honoured to be selected for the Dean's List of my graduate school.
  • 2015
    KYOTO - JAPAN

    BEST DEMO AWARD & HONORABLE MENTION AT ICAT'15

    EUROGRAPHICS - ICAT2015

    Won two awards, an Honorable Mention and the Best Demo award, at the 25th International Conference on Artificial Reality and Telexistence (ICAT 2015) in Kyoto, Japan, for the paper "Development of Mutual Telexistence System using Virtual Projection of Operator's Egocentric Body Images" by Mhd Yamen Saraiji, Charith Lasantha Fernando, Kouta Minamizawa, and Susumu Tachi.
  • 2015
    TOKYO - JAPAN

    PEPPER APP CHALLENGE - BEST AWARD

    SOFTBANK ROBOTICS

    Received the Best Award and the Best Care and Welfare Prize for the HUG Project.
  • 2016
    ANAHEIM - USA

    STUDENT RESEARCH COMPETITION - GOLD PRIZE

    ACM SIGGRAPH 2016

    Received the first prize in the Graduate Students category of the ACM Student Research Competition at SIGGRAPH 2016.
.03

PUBLICATIONS

PUBLICATIONS LIST
SlideFusion: Surrogacy Wheelchair with Implicit Eyegaze Modality Sharing

SIGGRAPH 2020

For mobility-impaired people, the wheelchair is a primary navigation and accessibility device. However, due to the inherent design of the chair, the user must use their hands throughout navigation, resulting in a dual impairment. Our proposed system, SlideFusion, expands on previous work on collaborative and assistive technologies for accessibility scenarios. SlideFusion focuses on remote collaboration for the mobility-impaired person: an operator can remotely access an avatar embedded in the wheelchair. To reduce the physical and cognitive load on the wheelchair user, we propose sharing the eye-gaze modality with the remote operator. Eye gaze enables implicit interactions that do not require the user to point or speak, thus leveraging indirect communication. This benefits not only wheelchair users and their caregivers, but also people with hearing impairments or pronunciation disorders.

Arque: artificial biomimicry-inspired tail for extending innate body functions

SIGGRAPH 2019

For most mammals and vertebrate animals, the tail plays an important role, providing varied functions that expand mobility or acting as a limb for manipulation and gripping. In this work, Arque, we propose an artificial, biomimicry-inspired anthropomorphic tail that allows us to alter our body momentum for assistive and haptic-feedback applications. The proposed tail consists of adjacent joints with a spring-based structure that handles shearing and tangential forces and allows adjusting the length and weight of the target tail. The internal structure of the tail is driven by four pneumatic artificial muscles that actuate the tail tip. We highlight potential applications of such a prosthetic tail as an extension of the human body, providing active momentum alteration in balancing situations or altering body momentum for full-body haptic feedback scenarios.

Naviarm: Augmenting the Learning of Motor Skills using a Backpack-type Robotic Arm System

AH2019

We present Naviarm, a wearable haptic-assistance robotic system for augmented motor learning. The system comprises two robotic arms mounted on the user's body that are used to transfer one person's motion to another offline. Naviarm pre-records the arm motion trajectories of an expert via the mounted robotic arms and then plays back these recorded trajectories to share the expert's body motion with a beginner. Naviarm is ungrounded, giving the user the mobility to perform a variety of motions. In this paper, we focus on the temporal aspect of motor skill and use a mime performance as a case-study learning task. We verified the system's effectiveness for motor learning through experiments, and the results suggest that the proposed system benefits the learning of sequential skills.

Fusion: full body surrogacy for collaborative communication

SIGGRAPH 2018

Effective communication is a key factor in social and professional contexts that involve sharing the skills and actions of more than one person. This research proposes a novel system that enables full-body sharing over a remotely operated wearable system, allowing one person to dive into someone else's body. “Fusion” enables body surrogacy by sharing the same point of view between two people, a surrogate and an operator, and extends the operator's limb mobility and actions through two robotic arms mounted on the surrogate's body. These arms can be used independently of the surrogate's arms for collaborative scenarios, or linked to the surrogate's arms for remote assistance and support scenarios. Using Fusion, we realize three levels of bodily driven communication: Direct, Enforced, and Induced. Through this system we demonstrate the possibility of truly embodying and transferring our body actions from one person to another, realizing true body communication.

MetaArms: Body remapping using feet-controlled artificial arms

UIST 2018

MetaArms are wearable anthropomorphic robotic arms and hands with six degrees of freedom, operated by the user's legs and feet. Our overall research goal is to re-imagine what our bodies can do with the aid of wearable robotics using a body-remapping approach. To this end, we present an initial exploratory case study. MetaArms' two robotic arms are controlled by the user's feet motion, and the robotic hands can grip objects according to the bending of the user's toes. Haptic feedback correlated with the objects touched by the robotic hands is presented on the user's feet, creating a closed-loop system. We present formal and informal evaluations of the system, the former using a 2D pointing task based on Fitts' law. The overall throughput for 12 users of the system is 1.01 bits/s (SD 0.39). We also present informal feedback from over 230 users. We find that MetaArms demonstrates the feasibility of the body-remapping approach in designing robotic limbs that may help us re-imagine what the human body can do.
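The throughput figure reported above follows the standard Fitts'-law formulation. As a minimal sketch (the trial data below are hypothetical, not taken from the study), per-trial throughput is the index of difficulty ID = log2(D/W + 1) divided by the movement time:

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput (bits/s) of one pointing trial: ID / MT,
    using the Shannon formulation ID = log2(D/W + 1)."""
    index_of_difficulty = math.log2(distance / width + 1.0)  # bits
    return index_of_difficulty / movement_time               # bits per second

# Hypothetical trials: (target distance in m, target width in m, movement time in s)
trials = [(0.30, 0.05, 2.9), (0.45, 0.05, 3.4), (0.20, 0.10, 1.6)]
mean_tp = sum(fitts_throughput(d, w, t) for d, w, t in trials) / len(trials)
```

A study-level figure such as the 1.01 bits/s reported here is the mean of such per-trial throughputs across users and conditions.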

02 Jul 2017

MetaLimbs: metamorphosis for multiple arms interaction using artificial limbs

Los Angeles, California

If we could edit or customize our body schema through technology, could our abilities and activities be enhanced? This research proposes a novel interaction that alters the body schema through substitution and metamorphosis with artificial limbs. In this work, two additional robotic arms are attached to the user's body and manipulated by leg movements. Limb control is achieved using two tracking systems: global motion tracking of the legs using an optical tracker, and local motion tracking for manipulation using a sock-type device. These data are mapped to the artificial limbs' arm, hand, and finger motion. Finally, force feedback from the manipulators' touch sensors is provided to the feet.

18 Mar 2017

Observation of mirror reflection and voluntary self-touch enhance self-recognition for a telexistence robot

Los Angeles, California

Virtual Reality (VR), 2017 IEEE

Poster: Yasuyuki Inoue, Fumihiro Kato, Mhd Yamen Saraiji, Charith Lasantha Fernando, Susumu Tachi

In this paper, we analyze the operator's subjective feelings about their body in a telexistence system. We investigate whether a mirror reflection and self-touch affect body ownership and agency for a surrogate robot avatar in a virtual reality experiment. Results showed that the presence of tactile sensations synchronized with the view of self-touch events enhanced mirror self-recognition.

09 Aug 2015

Twech: a mobile platform to search and share visuo-tactile experiences

Los Angeles, California

SIGGRAPH 2015

Poster: Nobuhisa Hanamitsu, Kanata Nakamura, Mhd Yamen Saraiji, Kouta Minamizawa, Susumu Tachi

Twech is a mobile platform that enables users to share visuo-tactile experiences and to search other experiences by tactile data. Using a visuo-tactile recording and display attachment for smartphones, users can instantly record and share visuo-tactile experiences, much like tweeting, and re-experience shared data through visuo-motor coupling. Twech's search engine, built on deep learning extended to recognize tactile materials, finds similar experiences for uploaded tactile data, such as scratched material surfaces or interactions with animals. Twech thus provides a way to share and discover haptic experiences and to re-experience uploaded visuo-tactile data from a cloud server.

23 Mar 2016

Changing body ownership using visual metamorphosis

Laval, France

Laval Virtual 2016

Poster: Tomoya Sasaki, MHD Yamen Saraiji, Kouta Minamizawa, Michiteru Kitazaki, Masahiko Inami

This paper presents a study of supernumerary-arm experiences in virtual reality applications. A system was developed that alters the user's body schema and motion mapping in real time as the user interacts with virtual content. The user's arms and hands are tracked and mapped onto several virtual arm instances generated from the user's first-person view (FPV) and deviated from the physical arms' positions at different angles. Participants reported a strong sense of body ownership toward the extra arms after interacting with and holding virtual content using them. We find that the perception of body ownership can be altered depending on the condition used. Interestingly, in this preliminary experiment participants also reported strong ownership toward the arm that was not actually holding the virtual object. This study contributes to the fields of augmented bodies, multi-limb applications, and prosthetic limbs.

01 Jul 2016

Layered Telepresence: Simultaneous Multi Presence Experience using Eye Gaze based Perceptual Awareness Blending

ANAHEIM - USA

ACM SIGGRAPH 2016 Emerging Technologies, Anaheim, USA, July 2016

Demonstrations, Poster (Selected): Mhd Yamen Saraiji, Shota Sugimoto, Charith Lasantha Fernando, Kouta Minamizawa, Susumu Tachi

We propose “Layered Telepresence”, a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created for each layer, based on objects recognized using image-processing techniques, and the most heavily weighted layer around the user's gaze is pushed into the foreground. All other layers are pushed to the background, providing an artificial depth-of-field effect. The proposed method not only works with robots: each layer could represent any audio-visual content, such as a video see-through HMD, a television screen, or even your PC screen, enabling true multitasking.
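The layer-selection step described above can be sketched in a few lines. This is an illustrative assumption, not the paper's implementation: each robot's video layer carries a saliency (weighted feature) map, and the layer with the highest mean saliency in a window around the gaze point wins the foreground:

```python
import numpy as np

def select_foreground(layers, gaze_xy, radius=32):
    """Pick the layer with the highest mean saliency near the gaze point.

    layers: list of (name, saliency_map) pairs, saliency_map is an HxW array
    gaze_xy: (x, y) gaze position in pixels
    """
    gx, gy = gaze_xy
    best_name, best_weight = None, -1.0
    for name, saliency in layers:
        h, w = saliency.shape
        y0, y1 = max(0, gy - radius), min(h, gy + radius)
        x0, x1 = max(0, gx - radius), min(w, gx + radius)
        weight = float(saliency[y0:y1, x0:x1].mean())  # attention weight at gaze
        if weight > best_weight:
            best_name, best_weight = name, weight
    return best_name

# Hypothetical example: robot_a shows a salient object near the gaze, robot_b does not.
a = np.zeros((128, 128)); a[40:90, 40:90] = 1.0
b = np.full((128, 128), 0.2)
foreground = select_foreground([("robot_a", a), ("robot_b", b)], gaze_xy=(64, 64))
```

In the actual system, the losing layers would then be blurred and composited behind the winner to produce the artificial depth-of-field effect.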


01 Oct 2015

Effectiveness of Spatial Coherent Remote Drive Experience with a Telexistence Backhoe for Construction Sites

KYOTO-JAPAN

ICAT-EGVE 2015 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 69-75, Oct 2015

Conferences: Charith Lasantha Fernando, Mhd Yamen Saraiji, Kouta Minamizawa, Susumu Tachi

In this paper, a spatially coherent remote driving system was designed and implemented to operate a telexistence backhoe over a wireless network. We developed a 6-degrees-of-freedom (DOF) slave robot that mimics human upper-body movement; a cockpit with a motion-tracking system and a head-mounted display (HMD), in which the operator receives HD720p ultra-low-latency video and audio feedback from the remote backhoe; and a controller to manipulate the backhoe. Spatially coherent driving could help operators manipulate heavy machinery without prior training and perform operations as if they were operating the machinery locally. Moreover, construction work could continue uninterrupted (24/7), with operators logging in remotely from all over the world. This paper describes the design requirements of the telexistence backhoe, followed by several field experiments carried out to verify the effectiveness of the spatially coherent remote driving experience on construction sites.

01 Oct 2015

Development of Mutual Telexistence System using Virtual Projection of Operator’s Egocentric Body Images

KYOTO-JAPAN

ICAT-EGVE 2015 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Kyoto, Japan. pp. 125-132 (Honorable Mention & Best Demo Award)

Conferences, Demonstrations (Selected): Mhd Yamen Saraiji, Charith Lasantha Fernando, Kouta Minamizawa, Susumu Tachi

In this paper, a mobile telexistence system that provides mutual embodiment of the user's body in a remote place is discussed. A fully mobile slave robot was designed and developed to deliver visual and motion mapping with the user's head and body. The user accesses the robot remotely using a head-mounted display (HMD) and a set of head trackers. The system addresses three main points: representing the user's body in the remote physical environment, preserving the user's body ownership during teleoperation, and presenting the user's body interactions and visuals on the remote side. These points were addressed by virtually projecting the user's body into the egocentric local view and projecting the body's visuals into the remote side. The system is intended for teleconferencing and remote social activities where no physical manipulation is required.

01 Mar 2015

Transparent Cockpit using Telexistence

ARLES - FRANCE

2015 IEEE Virtual Reality (VR), Arles, France. pp. 311-312

Poster: Takura Yanagi, Charith Lasantha Fernando, MHD Yamen Saraiji, Kouta Minamizawa, Susumu Tachi, Norimasa Kishi

We propose an indirect-vision, video-see-through augmented reality (AR) cockpit that uses telexistence technology to provide an AR-enriched, virtually transparent view of the surroundings through monitors instead of windows. Such a virtual view has the potential to enhance driving performance and experience beyond conventional glass and head-up-display-equipped cockpits by combining AR overlays with images obtained by future image sensors that are superior to human eyes. As a proof of concept, we replaced the front windshield of an experimental car with a large stereoscopic monitor. A robotic stereo camera pair that mimics the driver's head motions provides stereoscopic images with seamless motion parallax to the monitor. Initial driving tests at moderate speeds on roads within our research facility confirmed the illusion of transparency. We will conduct human-factors evaluations after implementing AR functions, in order to show whether an overall benefit over conventional cockpits can be achieved in spite of possible conceptual issues such as latency, shift of viewpoint, and the short distance between driver and display.

01 Mar 2015

Telexistence drone: design of a flight telexistence system for immersive aerial sports experience

SINGAPORE

Proceedings of the 6th Augmented Human International Conference, Singapore. pp. 171-172

Poster: Hirohiko Hayakawa, Charith Lasantha Fernando, MHD Yamen Saraiji, Kouta Minamizawa, and Susumu Tachi

In this paper, a new sports genre, “Aerial Sports”, is introduced, in which humans and robots collaborate to enjoy space as a whole new field. By integrating a flight unit with the user's voluntary motion, everyone can enjoy crossing physical limitations such as height and physique. The user dives into the drone by wearing an HMD and experiences binocular stereoscopic visuals and the sensation of flight, using their limbs effectively. The paper explains the requirements and design steps for synchronizing visual information and physical motion in a flight system, mainly for the aerial sports experience. These requirements can also be adapted to purposes such as search and rescue or entertainment, where coupled body motion has advantages.

27 Mar 2015

Virtual Embodied Telexistence: Telecommunication using Sensory Feedback and Virtual Body Representation

JAPAN

My master's thesis in Media Design

Theses: MHD Yamen Saraiji

With the advancement of Internet-based services, the demand for better communication tools that bridge the distance between people has grown, especially during daily activities and travel abroad. Traditional video/audio communication tools became widely available thanks to smartphone advancements, and Internet-based conferencing such as “Google Hangouts” made them even more reachable through web-based tools, allowing two or more participants to engage in a social meeting. However, there remained a demand for social tools that let participants move freely in the remote place. On one hand, telepresence robots stepped in to fill that need and opened new services to consumers, such as remotely visiting museums or attending exhibitions and conferences. On the other hand, telexistence systems provide more sophisticated capabilities by replicating several functions of the user's body in the remote place using mechanical parts. “Virtual Embodied Telexistence” is a system that combines a low-cost telexistence system with virtually embodied human functions, allowing the user to have immersive visual feedback of the remote place while remaining aware of their body through a visual representation of it.
“Virtual Embodiment”, proposed in this thesis, defines the key elements for replacing the physical representation of the human body with a virtual representation, as a way to increase the sense of presence in a different place. Increasing the sense of body presence in telecommunication helps the user act more naturally and intuitively using body motion.
This thesis explores the proposed concept of Virtual Embodiment for the human body and its tight link with presence. The design flow for realizing a virtually embodied telexistence system is discussed, as well as the implementation procedure. We show the efficiency of the system through objective user evaluation, and how intuitive it is for any user.

01 Mar 2015

Mutual hand representation for telexistence robots using projected virtual hands

SINGAPORE

Proceedings of the 6th Augmented Human International Conference, Singapore. pp. 221-222 (Best Demo Award)

Demonstrations (Selected): MHD Yamen Saraiji, Charith Lasantha Fernando, Kouta Minamizawa, and Susumu Tachi

In this paper, a mutual body representation for telexistence robots that lack physical arms is discussed. We propose a method of projecting the user's hands as a virtual superimposition seen not only by the user through an HMD, but also by remote participants, by projecting virtual hand images into the remote environment with a small projector aligned with the robot's eyes. These virtual hands are produced by capturing the user's hands from the first-person view (FPV) and segmenting them from the background. This method expands the user's physical body representation and allows mutual body communication between the user and remote participants, while providing a better understanding of the user's hand motion and intended interactions in the remote place.

01 Dec 2014

Enforced telexistence: teleoperating using photorealistic virtual body and haptic feedback

SHENZHEN - CHINA

SIGGRAPH Asia 2014 Emerging Technologies, Shenzhen, China, Article 7, pp. 2.

Demonstrations: Mhd Yamen Saraiji, Charith Lasantha Fernando, Yusuke Mizushina, Youichi Kamiyama, Kouta Minamizawa, Susumu Tachi

Telexistence systems require physical limbs for remote object manipulation. Arms and hands synchronized with voluntary movements allow the user to feel the robot's body as their own through visual and haptic sensation. Here we introduce a novel technique that provides virtual arms for existing telexistence systems that lack physical arms. Previous works [Mine et al. 1997; Poupyrev et al. 1998; Nedel et al. 2003] studied virtual representations of the user's hands for interaction in virtual environments. In this work, the virtual arms serve several interactions in a physical remote environment and, most importantly, give the user a sense of existence in that remote environment. The superimposed virtual arms follow the user's real-time arm movements and react to the dynamic lighting of the real environment, providing photorealistic rendering that adapts to the remote place's lighting. The user thus experiences embodied enforcement toward the remote environment. Furthermore, the virtual arms can be extended to touch and feel unreachable remote objects, and to grab a functional virtual copy of a physical instance where device control is possible. This method not only lets the user experience a non-existing arm in telexistence, but also gives the ability to enforce the remote environment in various ways.

01 Aug 2014

Enforced telexistence

VANCOUVER - CANADA

ACM SIGGRAPH 2014 Posters, Vancouver, Canada, Article No 49.

Poster: MHD Yamen Saraiji, Yusuke Mizushina, Charith Lasantha Fernando, Masahiro Furukawa, Youichi Kamiyama, Kouta Minamizawa, and Susumu Tachi

Telexistence systems require physical limbs for remote object manipulation. Arms and hands synchronized with voluntary movements allow the user to feel the robot's body as their own through visual and haptic sensation. Here we introduce a novel technique that provides virtual arms for existing telexistence systems that lack physical arms. Previous works [Mine et al. 1997; Poupyrev et al. 1998; Nedel et al. 2003] studied virtual representations of the user's hands for interaction in virtual environments. In this work, the virtual arms serve several interactions in a physical remote environment and, most importantly, give the user a sense of existence in that remote environment. The superimposed virtual arms follow the user's real-time arm movements and react to the dynamic lighting of the real environment, providing photorealistic rendering that adapts to the remote place's lighting. The user thus experiences embodied enforcement toward the remote environment. Furthermore, the virtual arms can be extended to touch and feel unreachable remote objects, and to grab a functional virtual copy of a physical instance where device control is possible. This method not only lets the user experience a non-existing arm in telexistence, but also gives the ability to enforce the remote environment in various ways.

01 Dec 2013

Real-time egocentric superimposition of operator’s own body on telexistence avatar in virtual environment

TOKYO - JAPAN

23rd International Conference on Artificial Reality and Telexistence (ICAT) 2013

Conferences: MHD Yamen Saraiji, Charith Lasantha Fernando, Masahiro Furukawa, Kouta Minamizawa, Susumu Tachi


01 Dec 2012

Virtual Telesar – Designing and Implementation of a Modular Based Immersive Virtual Telexistence Platform

FUKUOKA - JAPAN

IEEE Virtual Reality (VR) 2013, pp.595 - 598.

MHD Yamen Saraiji, Charith Lasantha Fernando, Masahiro Furukawa, Kouta Minamizawa, Susumu Tachi

In this paper, we focus on designing a customizable, modular-based virtual platform for modeling, simulating, and testing telexistence applications, where the physical parameters are preserved in the virtual environment, both for the motor control of the robot's physical characteristics and for visual, auditory, and haptic sensory feedback. We propose “Virtual Telesar”, which allows telexistence engineers to model a prototype system before manufacturing it and to experience how the final model will look and perform manipulations in the real world without building it. The platform consists of three features: first, the user can define a robot using predefined modular components; second, the user can customize and tune parameters; third, the user can have an immersive experience of operating the robot with visual, auditory, and haptic sensation. In this paper, we describe the design concept of the Virtual Telesar platform and report modeling results based on a physical robot, along with results of the immersive experience with it.

.05

TEACHING

CURRENT
  • 2018
    Japan

    Lecturer

    Keio University Graduate School of Media Design

    Practical Machine Learning
  • 2018
    Japan

    Lecturer

    Keio University Graduate School of Media Design

    Innovation Pipeline: Prototyping and Physical Computing
HISTORY
  • 2015
    2017
    Japan

    Teacher Assistant

    Graduate School of Media Design, Keio University

    Worked as a TA for Embodied Media class, supporting students in their class project as a mentor
  • 2012
    2012
    Syria - Damascus

    Teacher

    Syrian Computer Society (SCS)

    - Introducing young students to programming languages
    - Providing them with the skills required to analyze real-life problems and propose algorithmic solutions
    - Preparing them to participate in international programming competitions
.06

SKILLS

PROGRAMMING SKILLS
System Engineering >
LEVEL : ADVANCED EXPERIENCE : >10 YEARS
C++ C# Python
Robotics >
LEVEL : INTERMEDIATE EXPERIENCE : 4 YEARS
Kinematics Mechanical Design Embedded systems
DESIGN SKILLS
3D/2D Design >
LEVEL : INTERMEDIATE EXPERIENCE : 5 YEARS
Photoshop Maya 3D Studio Max
.07

PROJECTS

MY PORTFOLIO
15 Projects
Research

SlideFusion

SlideFusion: Surrogacy Wheelchair With Implicit Eyegaze Modality Sharing

For mobility-impaired people, the wheelchair is the main navigation and accessibility device. However, due to the inherent design of the chair, the user must use their hands throughout navigation, resulting in a dual impairment. Our proposed system, SlideFusion, expands on previous work in collaborative and assistive technologies for accessibility scenarios. SlideFusion focuses on remote collaboration for the mobility-impaired person, in which an operator can remotely access an avatar embedded into the wheelchair. To reduce the physical and cognitive load on the wheelchair user, we propose sharing the eye-gaze modality with the remote operator. Eye gaze enables implicit interactions that do not require the user to point or speak, thus leveraging indirect communication. In this way, accessibility can be provided not only to wheelchair users and their caregivers, but also to users with hearing impairments or pronunciation disorders.

Credits: Ryoichi Ando, Kouta Minamizawa, MHD Yamen Saraiji

Keio University Graduate School of Media Design

SIGGRAPH 2020 ETech page

Research

Arque

Arque: Artificial Biomimicry-Inspired Tail for Extending Innate Body Functions

Arque addresses a long-asked question: the lack of a tail on the human body. For most mammals and vertebrate animals, the tail plays an important role, providing varied functions that expand mobility or serve as a limb for manipulation and gripping. In this work, we propose Arque, an artificial biomimicry-inspired anthropomorphic tail that allows us to alter our body momentum for assistive and haptic feedback applications. The proposed tail consists of adjacent joints with a spring-based structure to handle shearing and tangential forces and to allow managing the length and weight of the target tail. The internal structure of the tail is driven by four pneumatic artificial muscles providing the actuation mechanism for the tail tip. We highlight potential applications for using such a prosthetic tail as an extension of the human body: providing active momentum alteration in balancing situations, or altering body momentum for full-body haptic feedback scenarios.

Credits: Junichi Nabeshima, Kouta Minamizawa, MHD Yamen Saraiji

Keio University Graduate School of Media Design
Radical Bodies Group & Embodied Media

SIGGRAPH 2019 ETech page

Research

Fusion

Effective communication is a key factor in social and professional contexts that involve sharing the skills and actions of more than one person. This research proposes a novel system to enable full-body sharing over a remotely operated wearable system, allowing one person to dive into someone else's body. “Fusion” enables body surrogacy by sharing the same point of view between two people, a surrogate and an operator, and it extends the operator's limb mobility and actions using two robotic arms mounted on the surrogate's body. These arms can be used independently of the surrogate's arms in collaborative scenarios, or can be linked to them for remote assisting and supporting scenarios. Using Fusion, we realize three levels of bodily driven communication: Direct, Enforced, and Induced. Through this system, we demonstrate the possibility of truly embodying and transferring our body actions from one person to another, realizing true body communication.

This project is done in collaboration between Keio University Graduate School of Media Design and The University of Tokyo.

SIGGRAPH 2018 ETech page

Media and Press:
ACM SIGGRAPH Blog
MIT Technology Review
IEEE Spectrum
Japan Science and Technology Agency (JST) – Japanese
Fast Company
Dezeen
hackster.io
Seamless – Japanese
DigitalTrends
YouFab Global Creative Award 2018
James Dyson Award 2019

DESIGN, PhD, Research

Telexistence Toolkit

Our world is becoming a connected village of information and social experiences, yet we still experience it as spectators, from a personal point of view. The Telexistence Toolkit, or TxKit, is a compact, novel communication device that virtually transports you from one physical location to another, allowing you to experience the remote world from your own point of view. Designed around essential sensory feedback and in a humanoid form factor, TxKit provides the means for stereoscopic vision, binaural audio, speaking, and head motion. Using this design, both the user and the remote participants can have natural, mutual communication as if both were in the same location.

MHD Yamen Saraiji, Charith Fernando, Kouta Minamizawa, Yasuyuki Inoue, Susumu Tachi

Designed in collaboration with Karakuri Products, Inc.

PhD, Research

MetaLimbs

“MetaLimbs” is a novel system that expands the number of arms through limb substitution. In this system, leg motion is mapped onto artificial robotic arms mounted on the user's back and used to control the arms' and hands' motion. The system provides immediate and intuitive control over the new limbs, and users can adapt to operating them without any training.

Tomoya Sasaki, MHD Yamen Saraiji, Charith Fernando, Kouta Minamizawa, Michiteru Kitazaki, Masahiko Inami

SIGGRAPH 2017 ETech page
Reuters
KMD Reference Page

PhD, Research

Layered Presence

“Layered Telepresence” is a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created for each layer based on the objects recognized using image-processing techniques, and the most heavily weighted layer around the user's gaze is pushed into the foreground. All other layers are pushed to the background, providing an artificial depth-of-field effect. The proposed method works not only with robots: each layer can represent any audio-visual content, such as a video see-through HMD, a television screen, or even your PC screen, enabling true multitasking.
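
The gaze-driven layer prioritization described above can be sketched roughly as follows. This is a minimal illustration, not the system's actual implementation: the weight-map representation, window size, and function name are all assumptions.

```python
import numpy as np

def select_foreground(layers, gaze_xy, radius=50):
    """Pick the layer with the highest feature weight around the gaze point.

    `layers` is a list of 2D weight maps (one per audio-visual stream);
    the shapes and names here are illustrative only.
    """
    x, y = gaze_xy
    scores = []
    for w in layers:
        h, wd = w.shape
        # Sum the weights inside a square window centered on the gaze point.
        y0, y1 = max(0, y - radius), min(h, y + radius)
        x0, x1 = max(0, x - radius), min(wd, x + radius)
        scores.append(w[y0:y1, x0:x1].sum())
    # The winning layer is brought to the foreground; all others would be
    # blurred and pushed back to create the artificial depth-of-field effect.
    return int(np.argmax(scores))
```
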

Gold Prize Award – ACM Student Research Competition at SIGGRAPH 2016, Anaheim
Best Aural Presentation – Virtual Reality Society of Japan, September 2016 (Japanese)

SIGGRAPH 2016 ETech page

Research

Telexistence Surveillance Project

Spatial Coherent Remote Driving Experience for Disaster Sites under Hazardous Conditions
Collaborative work with Obayashi Corporation; presented at ICAT 2015, ROBOMECH 2015, and VRSJ 2015 OS.

Research

Telexistence Marathon Runner

In collaboration with ASICS Corporation, we demonstrated real-time 360° video and audio streaming from a marathon runner on February 28th at the ASICS Store Tokyo. The project was initiated together with Yusuke Mizushina, a former KMD Master's student who currently works in the Wearable Device Business Operation Team, Corporate Strategy Department, at ASICS Corporation.

Delivering ultra-low-latency 360° video and audio content over a conventional LTE mobile network was powered by WebRTC technologies developed by NTT Communications' SkyWay service.

For more details, please refer to the Press Release (Japanese only).

DESIGNPhD

HUG – Connecting those who you love over the distance

The idea emerged from Mr. Norikazu Takagi (from Ducklings Japan), who wanted his grandmother, a resident of Nagoya Prefecture, to join an important personal event: his wedding ceremony, held in Tokyo. However, due to her medical condition, she could not attend the ceremony physically.


So, in a collaboration between our research group at Keio Media Design, CREATIVE ORCA, and FOVE, we designed a user-friendly system to let her “virtually” attend the ceremony. To realize this idea, we used the concept of telepresence, with a head-mounted display providing immersive stereo visuals of the event at the remote site. At the ceremony, we used a Pepper robot as the platform to build our system on. A custom 60 FPS stereo camera set was added to the robot's head, and a low-latency stereo video stream was sent to Nagoya (<180 ms). The user (the grandmother) controlled the robot's head using her eye movement, and could make simple hand gestures using a joystick.


During the event, the grandmother's reactions were priceless! She almost cried when she saw the bride and her grandson at the ceremony, and she expressed her feelings, saying, “I felt as if I were there with them.”
This type of feedback is what keeps me going and doing what I do!

The event was covered by NHK TV and broadcast across Japan.

Best Award and Best Care and Welfare Prize at SoftBank's Pepper App Challenge 2015 Winter (Japanese)


DESIGN

Microsoft Design Challenge: Hacking Mars

“The routine of being stuck on Mars can get dull quickly. Especially in the situation our character finds himself in. However, with the “NOVA” cards (a set of devices that he can scrap together with available equipment), he can connect to his loved ones by capturing and sharing single moments of his emotional states which include an image and haptic recording of his heartbeat. In return, the loved ones can stay connected with him by doing the same and sharing their moments and experiences to reassure him that they are waiting for his return.
The tangibility of the “NOVA” cards lets our character hang up a dedicated card for each of his loved ones, much like a traditional picture, and carry them around on his missions. The tactile message lets him feel their emotional state through the simulated heartbeat. We believe that the “NOVA” cards' one-message-per-person-per-day function will present a new challenge for our character each day and keep him occupied creating new positive moments until he finds his way back home.”


A challenge proposed by the Microsoft Inclusive Design Team, Hacking Mars, attracted many designers around the globe to come up with ideas to help an isolated astronaut survive. My team members (Jimi Okelana & Roshan Peiris) and I addressed this topic from a global perspective: how to maintain the emotional attachment between this astronaut and the people he loves. We began with an ideation session to generate ideas around this topic.


This project was submitted to Microsoft Design Challenge, #HackingMars.

Microsoft Design Challenge 2015 #HackingMars top three finalists

Research

Telexistence Drone

This drone is a type of telexistence system whereby the user can experience the feeling of flight. Telexistence is a technology that enables users to synchronize their motions and emotions with robots, so that they can be at a place other than where they actually exist, while being able to interact with a faraway remote environment.

In the case of the drone, a camera is attached to it, and whatever the camera captures is synchronized with the user's head-mounted display (HMD), so that the user can experience the drone's flight. By integrating the flight unit with the user and thus crossing physical limitations such as height and physique, everyone can enjoy a whole new concept of ‘space.’

DESIGN

TEDxTokyo 2014 – Connecting the Unconnected

“Do you see the connections?”

Inspired by the TEDxTokyo 2014 theme, “Connecting the Unconnected”, the TEDxTokyo design team this year aimed to create interactive visual graphics for the event.
After several meetings, we decided to make use of the real-time tweets written by the event participants and display them as dots connected to the TED speakers. Each dot holds the Twitter user's picture, thus creating visual information. When other users reply to a tweet, those users get *Connected* with each other, and this connection is visualized on the screen.


To decide which tweets to display, the application filters for tweets containing the tag “#TEDxTokyo”, so it knows the tweet relates to the event. A classification step then identifies which speaker each tweet is related to: the application searches each tweet for a speaker's name and, if found, connects the tweet to that speaker.
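
The filter-then-classify step can be sketched as below. This is a hedged reconstruction of the idea, not the installation's actual code; the data shape and function name are assumptions.

```python
def classify_tweets(tweets, speakers):
    """Group event tweets by the speaker they mention.

    `tweets` is a list of dicts with a 'text' field; the speaker list and
    data shape are illustrative, not the original installation's API.
    """
    buckets = {name: [] for name in speakers}
    unmatched = []
    for tweet in tweets:
        text = tweet["text"]
        if "#TEDxTokyo" not in text:
            continue  # keep only tweets tagged with the event hashtag
        for name in speakers:
            if name.lower() in text.lower():
                buckets[name].append(tweet)  # connect dot to this speaker
                break
        else:
            unmatched.append(tweet)  # event tweet with no speaker match
    return buckets, unmatched
```
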


For interaction, we used a Leap Motion controller placed in front of the screen (inspired by sci-fi style interaction): users can hover over the dots to select and read tweets, or they can change the speaker by sweeping over it.


On our design team website:
http://eatcreative.jp/en/connecting-the-unconnected/
http://eatcreative.jp/en/case_study/tedxtokyo/

Master

TELUBEE 2.0 (TELexistence platform for Ubiquitous Embodied Experience)

The successor of TELUBEE 1.0: a human-sized telexistence robot with a 3-axis rotatable head (tilt, yaw, roll) and a movable base (Roomba). In this version, the user can access the robot over the Internet and, using an Oculus VR headset, see real-time wide-field-of-view stereo images from the robot's side, along with real-time audio feedback. The robot is fully portable, currently using a Wi-Fi network connection to stream the video and receive control commands. TELUBEE 2.0 was exhibited at the International Conference on Artificial Reality and Telexistence (ICAT 2013) in Tokyo, Japan, while connected with France.

Master

TELUBEE 1.0 (TELexistence platform for Ubiquitous Embodied Experience)

We introduce a novel interface where a user can connect to a remote avatar-like robot, among multiple robots ubiquitously distributed around the world, and experience the distant world as if remotely existing there.

The system, “TELUBee”, includes distributed small-scale telexistence robots and a graphical user interface with a world map for selecting the desired geolocation to experience. By wearing an HMD, the user can see a 3D stereoscopic remote view and interact with remote participants using binaural audio communication. The remote robot's head motion is synchronized with the user's head, and the combination of audio-visual sensations allows the user to feel the same kinesthetic sensation as if physically there. The robots are small, low-cost, portable, and battery-powered; hence they can be used anywhere an Internet connection is present.

The “TELUBee” user interface can be a gateway to travel between many places around the world without spending much time, and will be useful for sightseeing, attending remote meetings, having face-to-face conversations, and so on.
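
The head-synchronization step described above can be sketched as a simple mapping from HMD orientation to neck servo targets. This is a minimal sketch under stated assumptions: the axis names, joint limits, and command format are hypothetical, not TELUBEE's actual software.

```python
def head_to_servo_commands(yaw_deg, pitch_deg, roll_deg, limits=(-90.0, 90.0)):
    """Map HMD head orientation (in degrees) to robot neck servo targets.

    The limits and command dictionary are illustrative assumptions.
    """
    lo, hi = limits

    def clamp(v):
        # Clamp each axis so the physical neck never exceeds its range of motion.
        return max(lo, min(hi, v))

    return {
        "yaw": clamp(yaw_deg),
        "pitch": clamp(pitch_deg),
        "roll": clamp(roll_deg),
    }
```

In a real loop, this mapping would run every frame on the operator side, with the resulting command sent to the robot over the network.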

 

Publications

International Conferences

  • Laval Virtual Revolution 2013

Domestic Conferences

  • The 17th Annual Conference of the Virtual Reality Society of Japan
  • The Japan Society of Mechanical Engineers, Robotics and Mechatronics Conference (ROBOMECH)

Awards

  • Invited Demonstration at Laval Virtual Revolution 2013
Research

Virtual Telesar

A virtual telexistence platform in which designers and engineers can define their prototype and experience it.

This work was published at System Integration 2012 in Kyushu, Japan.

ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6426950

.08

CONTACT

Drop me a line

GET IN TOUCH

Have a question, feedback, or a suggestion? Please don't hesitate to get in touch! Just drop me a message here and I will get back to you ASAP!