Demo: Yes, right there! A self-portrait application with sensor-assisted guiding for smartphones

Chi Chung Lo, Sz Pin Huang, Yi Ren, Yu-Chee Tseng

Research output: Conference contribution

2 Citations (Scopus)

Abstract

Taking a self-portrait on a smartphone is a lot of fun and can be easy when we know how (Fig. 1a). In addition, thanks to the self-timer, which inserts a delay between pressing the shutter release and the shutter's firing, we can take photos of ourselves when nobody is on hand to take them (Fig. 1b). However, we rarely get satisfying snapshots on the first try, because we usually have no idea whether we are in the right position in the camera frame (Figs. 1c, 2a, and 2c). Although the front camera lets us check our position in the frame, the snapshots it produces are of much lower quality than those taken by the back camera. In this demo we introduce a self-portrait App, "Yes, right there!". The App prevents faces from being cut out of the camera frame by giving suggestions to users until they are in a suitable position in the frame, as shown in Figs. 1d, 2b, and 2d. The suggestions are voice commands such as "Raise your hand!", "Come closer!", "Please move to the left", and so on. They depend on the face's position in the camera frame [1] and the inertial sensing data on the phone. Finally, a good picture is taken after the beep sounds.

More concretely, "Yes, right there!" supports two modes: self-portrait mode and self-timer mode. In self-portrait mode, the App initially asks users to take favorite photos as pre-training references; in the meantime, it measures the yaw, pitch, and roll values from the accelerometer, electronic compass, and gyroscope. Note that users usually like to take photos at a particular angle. When a user wants to take a self-portrait, she points the lens at herself so that her face appears in the camera frame. The App detects the face's position in the frame and measures the yaw, pitch, and roll values. The face's position and the inertial sensing data are then compared against the pre-training references, and the App suggests, via voice commands, how the user should change her posture. If the face moves to the right position and the yaw, pitch, and roll values are suitable, a good picture is taken automatically; otherwise, the App repeats the suggestions. Note that self-portrait mode can easily be extended to multiple users.

In self-timer mode, the user first specifies an area of the frame where she wants her face to appear. While the App runs, it compares the position of her face against the specified area and suggests, via voice commands, how she should change her location. As shown in Fig. 2a, the user's face is detected in grid 1 (outside the specified area of the frame). The voice interactive function is triggered to guide the user, e.g., "Please move to the left and then come closer", until she moves to a suitable position (Fig. 2b). When the App serves multiple users, as shown in Fig. 2c, it detects that a user on the left side is not in the specified area (grid 5), and the voice interactive function gives suggestions (e.g., "The user on the right side, please move to the left") until all users appear in the specified area of the frame (Fig. 2d). Fig. 3 shows our system model.

We will distribute Android phones to demo visitors, allowing real-time interaction with the user interface on these phones. We will also show how "Yes, right there!" guides users to take self-portraits without assistance from others.
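As a rough illustration of the guidance decision in self-portrait mode described above, the sketch below compares a detected face position and the current yaw/pitch/roll against a pre-training reference and picks a voice command. It is a minimal, framework-agnostic sketch in plain Java; the class names, tolerances, and the mapping from offsets to commands are assumptions for illustration, not the authors' implementation.

```java
// Minimal sketch of the self-portrait-mode guidance decision. Class names,
// thresholds, and the offset-to-command mapping are illustrative assumptions,
// not the authors' code.
public class SelfPortraitGuide {

    /** Pre-training reference captured from the user's favorite photos. */
    public static class Reference {
        double faceX, faceY;      // preferred face center, normalized to [0, 1]
        double yaw, pitch, roll;  // preferred phone attitude, in degrees
    }

    private static final double POSITION_TOL = 0.08;  // normalized frame units
    private static final double ANGLE_TOL = 5.0;      // degrees

    /**
     * Compares the detected face center and the current yaw/pitch/roll against
     * the reference; returns a voice command, or null when the shot is ready.
     * Left/right are in frame coordinates; a real app would mirror them when
     * addressing a user who faces the lens.
     */
    public String suggest(double faceX, double faceY,
                          double yaw, double pitch, double roll, Reference ref) {
        if (faceX < ref.faceX - POSITION_TOL) return "Please move to the right";
        if (faceX > ref.faceX + POSITION_TOL) return "Please move to the left";
        if (faceY < ref.faceY - POSITION_TOL) return "Raise your hand!";
        if (faceY > ref.faceY + POSITION_TOL) return "Lower your hand!";
        if (Math.abs(yaw - ref.yaw) > ANGLE_TOL
                || Math.abs(pitch - ref.pitch) > ANGLE_TOL
                || Math.abs(roll - ref.roll) > ANGLE_TOL) {
            return "Please adjust the camera angle";
        }
        return null;  // position and attitude match the reference: beep, then shoot
    }
}
```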
In addition, users can set the timer to different waiting intervals and specify the number and range of the grid cells that count as a suitable position. To the best of our knowledge, this is the first work that shows how to help users take self-portraits by detecting the face's position in the camera frame and measuring the inertial sensing data on the phone.
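In the same spirit, the self-timer mode's grid test could be sketched as follows, assuming a 3x3 grid numbered 1-9 in row-major order (the paper's figures mention grids 1 and 5 but do not fix the layout); the helper names and the simple steering policy are likewise illustrative assumptions rather than details from the paper.

```java
import java.util.Set;

// Sketch of the self-timer-mode grid test. The 3x3 layout and the steering
// policy below are illustrative assumptions, not taken from the paper.
public class SelfTimerGuide {

    /** Maps a normalized face center (x, y in [0, 1]) to a grid cell 1..9. */
    static int gridOf(double x, double y) {
        int col = Math.min(2, (int) (x * 3));
        int row = Math.min(2, (int) (y * 3));
        return row * 3 + col + 1;
    }

    /**
     * Returns a voice suggestion steering the face toward the user-specified
     * cells, or null when the face already lies inside the chosen area.
     * Left/right are in frame coordinates; guidance spoken to a user facing
     * the lens would mirror them.
     */
    static String suggest(double faceX, double faceY, Set<Integer> chosenCells) {
        int current = gridOf(faceX, faceY);
        if (chosenCells.contains(current)) {
            return null;  // in position: start the countdown and beep
        }
        int target = chosenCells.iterator().next();  // steer toward one chosen cell
        int targetCol = (target - 1) % 3, currentCol = (current - 1) % 3;
        if (currentCol > targetCol) return "Please move to the left";
        if (currentCol < targetCol) return "Please move to the right";
        // Same column: ask the user to change distance so the face shifts rows.
        return "Come closer!";
    }

    public static void main(String[] args) {
        // Example: the chosen area is the center cell (5); face detected in cell 3.
        System.out.println(suggest(0.9, 0.2, Set.of(5)));  // -> "Please move to the left"
    }
}
```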

Original language: English
Title of host publication: MobiSys 2013 - Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services
Pages: 505-506
Number of pages: 2
DOIs: https://doi.org/10.1145/2462456.2465705
Publication status: Published - 12 Aug 2013
Event: 11th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys 2013 - Taipei, Taiwan
Duration: 25 Jun 2013 to 28 Jun 2013

Publication series

Name: MobiSys 2013 - Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services

Conference

Conference: 11th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys 2013
Country: Taiwan
City: Taipei
Period: 25/06/13 to 28/06/13


  • Cite this

    Lo, C. C., Huang, S. P., Ren, Y., & Tseng, Y-C. (2013). Demo: Yes, right there! A self-portrait application with sensor-assisted guiding for smartphones. In MobiSys 2013 - Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (pp. 505-506). https://doi.org/10.1145/2462456.2465705