SwiftKey is adding animated emoji to its mobile keyboard, and the Microsoft-owned company is using artificial intelligence to mirror users' facial expressions on 3D animal avatars.
SwiftKey calls the newly launched feature Puppets. The 3D animal avatars include a panda, cat, dinosaur, owl, and dog, and they can currently mimic blinks, head movements, smiles, and more.
The company said that to build the feature, it collected thousands of pictures and videos from volunteers to train deep neural networks that recognize a person’s facial expressions and movements in real time.
This is very similar to Apple’s Animoji and Huawei’s AR Lens, but SwiftKey builds the capability directly into a popular third-party keyboard app, giving it far wider reach. Keyboard apps are among the most heavily used applications on mobile platforms, appearing in nearly every context from social networking to messaging, and turning a user’s real expression into a virtual one is a widely popular feature.
“People want to type faster, and SwiftKey has done a great job in this area. They want to express themselves in a more interesting way, and SwiftKey’s new feature meets that need,” Microsoft product manager Deepak Paramanand said in a blog post.
Three years ago, Microsoft acquired SwiftKey for $250 million. Since then, Microsoft has continued to add new features to the app, including web search built into the keyboard. The keyboard app is highly customizable: users can add features to the top menu bar, including GIF images, new themes, translation, and the clipboard. 3D animal avatars now join that list.
Currently, the feature is available only in the Android version of SwiftKey. Users can record up to 30 seconds of video with sound and then share the clip to other apps like an ordinary message.
Finally, a simple background behind the user helps the app recognize facial expressions more quickly, and only one face can be tracked at a time.