
An Inside Look at Voice-enabled Edge Computing and Computer Vision

Sep 10, 2019

By Saleel Awsare


It’s that time of year again for the big IBC conference in Amsterdam, and we’re really excited to meet with our key customers and the service providers that deliver all of the wonderful entertainment to our smart living rooms and smart devices. This is an excellent show for Synaptics because it brings together under one roof just about all of our partners in the media streaming world, so we can collaborate on current and future projects. It’s also a great way for us to showcase our ideas on how they can deliver new and helpful user experiences to the consumers of their services.

A big part of our showcase will focus on our edge computing SoCs in the smart home, with an emphasis on helpful machine learning techniques that deliver both user convenience and potential new revenue streams for service providers. We call this Smart Edge AI, and it combines our powerful SoCs with neural network accelerators, intelligent computer vision, and voice processing. One of the greatest aspects of edge computing is the ability to operate devices with or without an internet cloud connection. Not only does this improve user privacy and security, it also delivers robust and reliable performance.

Let’s jump into what I consider to be the coolest demonstrations we are doing at IBC.

Voice identification
Most of us are now comfortable talking out loud to computers and getting feedback from voice assistants. Synaptics is a leading global provider of far-field voice technology for products like smart speakers and other smart home devices. So we took that technology, integrated it into our edge computing SoCs, and in this case placed voice into a media streaming device for televisions. Now imagine talking to your TV and it recognizes your voice. You do not even need to register your voice; the device distinguishes it from other voices using biometric information. Now that the media streamer knows who is talking, it can deliver a menu of personalized content preferences based on user history. The same thing happens if a different person is in the room. Pretty cool, right? And this is all done at the edge, in the privacy of your home. You can check out this video and see it in action.
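
To make the idea concrete, here is a minimal sketch of enrollment-free speaker identification. It is not Synaptics’ pipeline: the voice-embedding model is assumed to exist elsewhere, and the cosine-similarity matching and 0.75 threshold are illustrative choices only.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voice embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


class SpeakerIndex:
    """Keeps one running voice-embedding centroid per speaker heard so far, entirely on-device."""

    def __init__(self, match_threshold: float = 0.75):
        self.match_threshold = match_threshold   # illustrative value, not a product setting
        self.profiles: list[np.ndarray] = []     # one centroid per known speaker

    def identify(self, utterance_embedding: np.ndarray) -> int:
        """Return a speaker id, creating a new profile when no existing one matches."""
        for speaker_id, centroid in enumerate(self.profiles):
            if cosine_similarity(utterance_embedding, centroid) >= self.match_threshold:
                # Refine the centroid with the new sample and report the match.
                self.profiles[speaker_id] = (centroid + utterance_embedding) / 2.0
                return speaker_id
        self.profiles.append(utterance_embedding)
        return len(self.profiles) - 1
```

In a setup like this, the media streamer would key each speaker id to a watch history and surface that person’s menu, with no audio or identity data leaving the device.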

Face identification
Face identification is very similar to voice ID, but this time it uses a camera and computer vision intelligence. The device recognizes your face and delivers preferred content, much as I described with voice identification. You like sports, your spouse loves mysteries: it is all sorted out depending on who is in front of the camera. But what if you are both watching TV together? Well, it figures that out too through machine learning and displays a content menu related to programs that you normally watch together. Combine all this with voice and you have a powerful way to deliver user convenience. Watch our face identification demo video here.
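
As a rough sketch of the “who is watching” logic described above (the profile data structures and the fallback rule are hypothetical, not how our products store preferences):

```python
from typing import Dict, FrozenSet, List


def menu_for_viewers(viewers: FrozenSet[int],
                     individual: Dict[int, List[str]],
                     shared: Dict[FrozenSet[int], List[str]]) -> List[str]:
    """Pick a content menu for the set of faces currently in front of the camera."""
    if not viewers:
        return []
    if viewers in shared:
        return shared[viewers]                 # programs this exact group watches together
    if len(viewers) == 1:
        (viewer,) = viewers
        return individual.get(viewer, [])      # one person: their personal preferences
    # Unseen group: fall back to titles everyone in the room already likes.
    common = set.intersection(*(set(individual.get(v, [])) for v in viewers))
    return sorted(common)


# Example: viewer 0 likes sports, viewer 1 likes mysteries, together they watch cooking shows.
individual = {0: ["sports"], 1: ["mysteries"]}
shared = {frozenset({0, 1}): ["cooking shows"]}
print(menu_for_viewers(frozenset({0, 1}), individual, shared))   # ['cooking shows']
```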

Logo detection
Another computer vision demo we perform does not involve a camera at all. This time our Smart Edge AI technology “sees” what is playing on the TV. It can recognize certain content, such as a BMW or CNN logo, with 99% accuracy. If a service provider knows what its customers prefer to watch, it can deliver recommended content such as movies that require a payment. This is of course a great way for service providers to increase monetization, but it also delivers a better user experience. They could also deliver highly targeted, cost-efficient advertising at scale. You can see how we do this in a video demonstration here.
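
A minimal sketch of how frame-level logo detection could feed that kind of recommendation. The classify callback stands in for an on-device model; the labels and the confidence floor are illustrative, not product parameters.

```python
from collections import Counter
from typing import Callable, Iterable, Tuple

CONFIDENCE_FLOOR = 0.99   # illustrative threshold, echoing the accuracy figure above


def detect_logos(frames: Iterable,
                 classify: Callable[[object], Tuple[str, float]]) -> Counter:
    """Run an on-device classifier over decoded video frames and tally confident logo hits."""
    hits: Counter = Counter()
    for frame in frames:
        label, confidence = classify(frame)
        if label != "none" and confidence >= CONFIDENCE_FLOOR:
            hits[label] += 1
    return hits   # e.g. Counter({'CNN': 412, 'BMW': 37}) after a news block with a car ad
```

In an edge deployment, only an aggregate tally like this would need to inform recommendations, rather than the video itself.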

Event detection
Similar to logo detection, we can use machine learning to detect a variety of content. Let’s take, for example, a baseball game. I watch a lot of baseball but don’t have the time to indulge in several three-hour games. We took that problem and created an opportunity to watch multiple games in a fraction of the time. We trained the device to look for pitches, and through machine learning all of the pitches are stamped on a timeline of the game. Now I can simply skip to each pitch and watch all the excitement of three games in the time of one. You could consider this your personal sports highlight reel. This example is a great asset to any media streamer and something service providers could market to sell more devices. Watch how we do this here in a short video demonstration.
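
Here is a rough sketch of the timeline step: turning per-frame “pitch” scores into skippable chapter marks. The detector itself is out of scope, and the threshold, frame rate, and debounce window below are assumptions for illustration.

```python
from typing import List, Sequence


def pitch_timeline(frame_scores: Sequence[float],
                   fps: float = 30.0,
                   threshold: float = 0.8,
                   min_gap_s: float = 5.0) -> List[float]:
    """Convert per-frame pitch probabilities into timestamps (seconds) for skip-to-pitch playback."""
    timestamps: List[float] = []
    last_pitch = -min_gap_s
    for i, score in enumerate(frame_scores):
        t = i / fps
        # Record a pitch when the detector fires, but debounce nearby frames from the same pitch.
        if score >= threshold and (t - last_pitch) >= min_gap_s:
            timestamps.append(t)
            last_pitch = t
    return timestamps
```

A player could then treat each timestamp as a chapter mark and jump pitch to pitch, which is the skip-through viewing described above.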

Synaptics is innovating with edge computing AI. Please join us at IBC 2019 in Amsterdam, Sept. 13-17, where we will be showcasing our newest solutions for the helpful smart home. We are located on the second-floor balcony in Hall 1, Suite 16. To make an appointment, please contact your local Synaptics account representative.

About the Author


Saleel Awsare
Senior Vice President and General Manager, PC & Peripherals Division
