Thursday, 21 May 2020

Google uses AI to teach a "headphone cable" most of the functions of a touch screen

Google has never stopped developing wearable devices, such as the Commuter Trucker smart jacket launched in collaboration with Levi's.

A sensor is built into the jacket's cuff, and the user interacts with it over a Bluetooth link.

Gestures such as double-tapping and swiping can skip tracks and trigger other controls.



Building on that effort, Google wants to make the device smaller and more capable.

So Google set its sights on the headphone cable.

Google AI engineers have developed an interactive electronic textile (E-Textile) that lets people perform most of the functions of a touch screen through gestures such as pinching, rubbing, grabbing, and patting.



Obvious operations such as volume control and track skipping aside, the new capability points toward the next step in perceptual interaction, and the ultimate goal is to free our hands.

 Gesture dataset training process

The device Google developed is a combination of machine-learning algorithms and sensor hardware; the headphone cable is just the carrier.

In fact, the cable is not an ordinary headphone cable: it is a flexible electronic material with sensors woven into it, which is what makes human-computer interaction possible.

If you like, a hoodie drawstring can be transformed the same way.

First, Google recruited 12 participants for data collection; each performed 8 gestures and repeated each one 9 times, for a total of 864 experimental samples.

To offset the small sample size, the researchers used linear interpolation to resample each gesture's time series.

Sixteen features are extracted from each sample, and resampling yields 80 observations per gesture.
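To make the preprocessing concrete, here is a minimal sketch in Python, assuming each recorded gesture is a variable-length series of frames with 16 sensor features per frame; the function name and synthetic input are illustrative, not Google's code.

import numpy as np

def resample_gesture(samples: np.ndarray, target_len: int = 80) -> np.ndarray:
    """Linearly interpolate a (time, features) series to a fixed length."""
    t_old = np.linspace(0.0, 1.0, num=samples.shape[0])
    t_new = np.linspace(0.0, 1.0, num=target_len)
    # np.interp is 1-D, so interpolate each feature column separately.
    return np.stack(
        [np.interp(t_new, t_old, samples[:, c]) for c in range(samples.shape[1])],
        axis=1,
    )

# Example: one gesture captured as 123 frames of 16 features
raw = np.random.rand(123, 16)
fixed = resample_gesture(raw)
print(fixed.shape)  # (80, 16): 80 observations x 16 features, as quoted above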



After per-user training, the gesture recognizer can support 8 new discrete gestures.

The researchers collected not only quantitative figures but also the participants' personal impressions, hoping to deliver a human-centered interactive experience.

Participants provided qualitative feedback through rankings and comments, and proposed a variety of interaction methods, including sliding, flicking, pressing, pinching, pulling, and squeezing.



Quantitative analysis shows that the interactive textile is perceived as faster than existing headset button controls and comparable in speed to a touch screen.



Qualitative feedback likewise shows that e-textile interaction is preferred over in-line headphone controls.

With different usage scenarios in mind, the researchers developed a device for each:

e-textile USB-C earphones that control media playback on a phone, and a hoodie drawstring that invisibly adds music controls to clothing.
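As an illustration of how a single recognizer could drive both prototypes, here is a hypothetical gesture-to-action dispatch table; the gesture names and commands are assumptions, not the shipped control scheme.

# Hypothetical mapping from recognized cord gestures to media commands.
MEDIA_ACTIONS = {
    "pinch": "play_pause",
    "double_tap": "next_track",
    "slide_up": "volume_up",
    "slide_down": "volume_down",
}

def on_gesture(gesture: str) -> str:
    """Translate a recognized gesture into a media-player command."""
    return MEDIA_ACTIONS.get(gesture, "ignore")  # unknown gestures are ignored

print(on_gesture("pinch"))  # play_pause
print(on_gesture("tug"))    # ignore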

An algorithm for precise gesture recognition

The hard part of Google's electronic braid is not the machine-learning algorithm, but capturing gestures and supporting interaction on the cord itself.

The first challenge is size: a braided cord such as an earphone cable cannot carry large or numerous sensors, so its sensing and resolution capabilities are very limited.

The second is the ambiguity of hand gestures: how do you distinguish a pinch from a grab, or a slap from a pull?

Google's engineers formed a sensor matrix from 8 electrodes and evaluated it with cross-validation: for each gesture, 8 of the 9 repetitions served as training data and the remaining one as test data, rotating through all 9 splits.

They found that the sensor matrix has inherent structure well suited to machine-learning classification, which allows a classifier to be trained on a limited dataset; teaching the system to recognize a gesture takes only about 30 seconds.
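A minimal sketch of that evaluation scheme, assuming one participant's 8 gestures x 9 repetitions have already been resampled to 80 x 16 and flattened; the linear SVM is an assumed stand-in, since the article does not name Google's exact classifier.

import numpy as np
from sklearn.svm import SVC

def leave_one_repetition_out(X, y, reps, n_reps: int = 9) -> float:
    """Train on 8 repetitions, test on the held-out one, rotated over all 9."""
    accuracies = []
    for held_out in range(n_reps):
        train, test = reps != held_out, reps == held_out
        clf = SVC(kernel="linear").fit(X[train], y[train])
        accuracies.append(clf.score(X[test], y[test]))
    return float(np.mean(accuracies))

# Synthetic data shaped like one participant's session (8 gestures x 9 reps)
X = np.random.rand(8 * 9, 80 * 16)   # flattened resampled gestures
y = np.repeat(np.arange(8), 9)       # gesture label per sample
reps = np.tile(np.arange(9), 8)      # repetition index per sample
print(f"mean held-out accuracy: {leave_one_repetition_out(X, y, reps):.3f}")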



The final accuracy is 93.8%. Given the size of the dataset and the training time involved, that accuracy is sufficient for daily use.

 The next step in headset control

Google's work on the headphone cord combines gesture recognition with micro-interactions.

On touch-screen devices, the space beneath the screen can accommodate many sensors, such as Apple's 3D Touch module.

But in an external accessory such as an earphone cable, that is not so easy, because the number and size of the sensors are necessarily limited.

During the experiments, the engineers found that each gesture had to be trained multiple times, and that each individual gesture required multiple motion captures.



This study shows that accurate, small-scale movements can be sensed in a compact form factor, and we can look forward to the development of intelligent, interactive braids.

One day, micro-interactions on wearable interfaces and smart fabrics may be usable anywhere, with external devices following us like a shadow, ready to interact at any moment, finally freeing our hands.

 Are you looking forward to this day?
