Category Archives: Uncategorized

Lab Retreat in Yoshino (吉野)

Hi everyone! Nice to meet you, I’m Nieda, an M1 student!

This time we held a lab retreat at Taiko-ban Hanamu Hanamu in Yoshino, Nara Prefecture, for two days on November 11 and 12.

I would like to introduce how the two days went!

Day 1 (M2 students present their research and new members introduce themselves)

M2 students presented their research and new members (Md Mustafizur Rahman, Tuwaemuesa Thapakorn, and intern Liu Jiayin) introduced themselves.

Self-introductions

Day 2 (Demonstration presentations by M1 and doctoral students)

In pairs, M1 and PhD students took on the challenge of “creating a demonstration of an interesting and fun idea” and presented their results.

Presentations

Demo exhibits

Reflections

It was a good opportunity for us to talk with professors, doctoral students, international students, and other people we don’t usually get many chances to speak with.

During free time, we went sightseeing in Yoshino and enjoyed the vast nature, famous places, and local specialties.

My one small regret is that we visited Yoshino just before the autumn leaves turned red….

We hope to hold the retreat again next year.

See you again.

A former IMD research student visited us!

A former IMD member, Mr. Daniel Eckhoff (currently at City University of Hong Kong, formerly at Universität Oldenburg), visited us from Hong Kong.

He gave a lecture on his research, which was a rare opportunity for M1 and M2 students to interact with an outside researcher.

Daniel’s research is on pseudo-tactile sensation; he presented work on human touch perception and on changing the visual appearance of hands (for example, showing a 🔥 effect on them).

(Maybe I can become a real Ende⚪︎r (My Hero Academia) in the future?)

I hope to see you in Hong Kong next time.

Open Campus Day 2023

Nihao(你好), everybody!

I’m your friendly lab mate, Ma, from IMD Lab!
Glad to be writing my first blog post and getting to know all of you!

We held this Open Campus event on May 13th, and our lab was lit up with many demos. So many curious minds dropped by for a look-see, thank you very much!

On the first floor, Akiyoshi, Noguchi, and Ueda were hustling, showing off our poster to the crowd.

We’ve all had that heart-racing moment of talking to strangers, right? Yoneyama’s research might just be the key to soothing your nerves!

Ever been stuck in a speech without feedback, clueless about your performance? Takahama is on the case to tackle this problem!

Then there’s Akira, diving deep into object recognition in the environment. 2D recognition is a piece of cake, but 3D, especially on not-so-powerful devices like HMDs, now that’s the real challenge. But Akira is doing his best to crack it!

And like magic, with Geert’s research, your iPad or phone screen can become transparent! As your head position changes, so does your screen display!

Ever wondered whether praise during gameplay pumps you up, or whether it’s the other way around? Taguchi’s demo focuses on using conversation robots in gaming.

Noguchi’s research is pretty cool. Estimating a sprinter’s posture through a 3D camera provides an even more detailed basis for athletic training.

Soshiro and Matoba are developing a general-purpose AR work support system. Sure, the idea of AR aiding work is mature by now, but actually building such a system is tough as nails! With their system, you can swiftly whip one up.

IMD is also into human-computer interaction research!

And guess what? We’ve got the only autonomous vehicle in NAIST. The biggest roadblock to the widespread adoption of autonomous vehicles might not be technology, but psychology! Our goal is to reduce the psychological stress of passengers riding in highly autonomous vehicles, promote the adoption of autonomous vehicle technology, and enhance passenger comfort. Ma and Shimizu are in the garage, eager to share this research!

Up on the seventh floor, Nieda and Liu are introducing haptic robots. Paired with AR or VR systems, these robots can offer you an entirely different experience!

IMD Lab is brimming with exciting research topics. If you’re interested, we’d love to have you pop in!

Last but not least, big thanks to our intern Guo for snapping some awesome photos!

Did you have fun at the Open Campus?! We hope to see you around IMD again!

Broadcast on Nanikore Chinhyakkei (ナニコレ珍百景)

Hi, I am Keishi Tainaka, a doctoral student.

Our research using TSUNDERE, a bit of Japanese subculture, was featured on “Nanikore Chinhyakkei”!!
This is one of the most famous and popular TV programs in Japan.

It usually introduces unusual or interesting things and scenes that you have never seen before.

This time, the program featured our research on motivating workers using TSUNDERE characteristics.


In the program, they introduced the highlights of NAIST, AR and VR technologies, and our TSUNDERE research in a very amusing way.

When I watched it live, I was very happy that the segment was more polished and impressive than I had imagined.

At the end of a segment, if the program’s three judges find it interesting and unusual, it is registered as a “Chinhyakkei” (“Chinhyakkei” means 100 unusual and interesting views).

And indeed, our research was registered as a “Chinhyakkei”!

We are proud to appear in such a program and get this title.

Thank you.

CNR 2022.07

Hello, I am Keishi Tainaka, D3 student.

This July, I attended a meeting of the Cloud Network Robotics (CNR) research group in Okinawa with Asst. Prof. Sawabe.

He presented “Evaluation of VR appearance changes and comfort impressions of multimodal interactive agents that ‘talk while stroking’.”

At the award ceremony, I presented my TSUNDERE research, since I had received the “Student Presentation Award” for last year’s presentation, “TSUNDERE Interaction – Investigation of the Influence of a Stroking AR Agent on Behavior Modification.”

During the 20 minutes I could speak freely, I talked mainly about my activities over the past year and my future work. The audience seemed tired since the talk was given in the evening, but “TSUNDERE” unexpectedly livened them up, and I was very satisfied with the presentation.

Also, listening to the other presentations, I found researchers focusing on “behavior modification using agents” from viewpoints different from ours, which was very helpful to us.

I also got to enjoy Okinawan specialties like root beer, hamburgers, and soki soba!
The weather was perfect, and we went snorkeling in a beautiful sea!

After all, it’s nice to present onsite!
I hope COVID-19 calms down and onsite presentations become the norm again~!

Cloud Network Robotics Research Group (CNR) 2022-3

Hello, this is Yamauchi, M1.

At the recent Cloud Network Robotics (CNR) research group meeting held on March 3-4, I gave a presentation on “analyzing human ‘talking and stroking’ behavior for touch care robots.”

This year’s event was co-sponsored by MVE and the Biometrics Research Group (BioX).

The meeting was originally planned to be held in Tokyo, but given the circumstances at the time, it was moved online.

Although the proceedings did not run as smoothly as they would have onsite, we received many comments from the moderator and participants during the Q&A session on topics such as the “realization of robots with interactivity,” which motivated us to continue our research.

We would like to build on the opinions and feedback we received this time and connect them to new learning.

HRI 2022

Hello, this is Kanda, M1.

I presented my research in the Late-Breaking Reports track at HRI (the ACM/IEEE International Conference on Human-Robot Interaction), held from March 7 to 10.

The title is “A Communication Robot for Playing Video Games Together to Boost Motivation for Daily-use.”

HRI 2022 was held online, and the Late-Breaking Reports were presented in a format where participants visited my presentation booth online, viewed slides and videos, and joined the discussion.

My slot started at 7:55 in the morning Japan time, so waking up early to prepare was a bit tough, but I was able to finish successfully, and I am glad I was able to present at my first international conference.

CICP2020

Hi, I’m Tetsuya Kodama, an M2 student.

I participated in CICP, a student-led, research-proposal-based project, with members of the IMD laboratory. In this project, our team proposed “Maintaining Work Motivation at Home with AR Avatar Integrated with Robotic Touch” and won the Best Award!

Commemorative photo

The team members are described below.
Team leader: Keishi Tainaka
Team members: Isidro Butaslac and Tetsuya Kodama
Supervisors: Taishi Sawabe and Masayuki Kanbara

First, let me briefly explain our research.
Based on operant conditioning, a method that promotes behavioral change by giving a combination of guilt and reward, we proposed “TSUNDERE interaction,” which combines TSUN (cold behavior) as the guilt and DERE (kind behavior) as the reward. If you would like to know more, please check our publication:

TSUNDERE Interaction
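As a rough, purely illustrative sketch (not our actual system), the operant-conditioning idea of switching between TSUN as guilt and DERE as reward could look like the toy function below; the function name, lines, and goal-based switching rule are all my own assumptions:

```python
# Hypothetical toy sketch of TSUN/DERE switching based on operant conditioning.
# The names, lines, and threshold rule here are illustrative assumptions,
# not the model used in the CICP project.

def agent_response(tasks_done: int, goal: int) -> str:
    """Return a TSUN (cold) response as the 'guilt' when the worker is behind,
    and a DERE (kind) response as the 'reward' once the goal is reached."""
    if tasks_done >= goal:
        return "DERE: Well done! ...N-not that I was worried about you."
    return "TSUN: Hmph, only {} of {} tasks? Get back to work!".format(tasks_done, goal)

print(agent_response(2, 5))  # cold response while behind schedule
print(agent_response(5, 5))  # kind response once the goal is met
```

In the real project, this switching is driven by the motion and voice data described below rather than a simple task counter.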


In this project, we had the cooperation of the maid café CCOちゃ and the Yoshimoto comedian Ms. Mari Kodera (小寺真理). To create the TSUNDERE interaction model, we collected motion and voice data from them. You can watch the data collection process on the CCOちゃ YouTube channel.

Data collection in CCOちゃ

With the CCOちゃ maid staff and Ms. Mari Kodera

Interacting with professionals in the field gave us opinions and insights we could not have gained otherwise, and I believe our discussions based on them led to the Best Award. Thank you very much for your cooperation. The CICP also gave me the opportunity to do honest, serious research driven by my own curiosity. Our research is still incomplete, so I would like to work hard to complete it this year!

ISMAR2020

Hi, I am Keishi Tainaka, a D1 student.

I attended the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) from November 9 to 13.
At this conference, I presented my paper, “Guideline and Tool for Designing an Assembly Task Support System Using Augmented Reality.”

If not for COVID-19, this conference would have been held in Brazil, with its beautiful tropical skies and seas. Instead, it was a remote conference held in a virtual world. Brazilian time is nearly the exact opposite of Japan time (JST), so sessions started around 8:00 pm and ended around 8:00 am JST, which wreaked havoc on my health.

Before the presentation, I created my avatar, giving myself blonde hair for a makeover.

In the presentation, I did this on the stage.

Even though it was virtual, I was very nervous. The Q&A session was held in English, which I am not very good at, so I prepared a variety of anticipated questions and answers beforehand. I would like to thank Fujimoto-sensei and the lab members for taking time out of their busy schedules to help.

As a result, I received some valuable questions from great professors, which was very helpful for my future work.

During the breaks, I danced, raced boats, and played soccer on the beach with my lab members.

Dancing on the beach with my seniors.


After my presentation, I also learned a lot about future research directions and presentation styles from the other talks.

The conference itself was streamed live on YouTube, and the tension and excitement of presenting in front of people from all over the world was something I had never felt before. It made me happy to be in the doctoral course.

Next time, I would like to experience that atmosphere again in a real place, so I would like to work hard on my research.

Thanks.


ICMI 2020

Hello, I’m Zhou Hangyu, a D2 student.

I participated in ICMI 2020 from October 25th to 30th. My short paper, “Effectiveness of Virtual Reality Playback in Public Speaking Training,” was accepted at its workshop, Social Affective Multimodal Interaction for Health (SAMIH).
It was my first time participating in an academic conference, so I felt a bit stressed, but since it was online it felt much more manageable, and everything went fine. The audience asked some questions.
The question I remember most was how facial expressions could impact users. This is an important question for our research, since we hide facial expressions in the recording. Of course facial expressions are an important part of public speaking, but capturing them would require additional devices that might interfere with the presentation, and they may not matter as much as other aspects of performance, so we decided to leave them out.
Still, it would be a very interesting topic to clarify how important each part of public speaking performance is in playback.

I also learned a lot from the keynotes at the workshop, which related closely to my current and future research. The speakers shared their work on social skills training and virtual doctors for mental health with great enthusiasm. It was a good chance to learn what research questions people around the world are thinking about and facing, and I strongly believe AR/VR can bring more possibilities.

The meeting was nice, but since it was held online, I’m sorry I cannot share any pictures here.