Three years ago, I wrote about a Google Patent that looked at a searcher’s reaction to search results to rank those results, in the post Biometric Parameters as a Ranking Signal in Google Search Results?
Since then, I’ve been keeping an eye out for Google patent filings that use a smartphone camera to look at a device user’s facial expression in order to better understand that person’s emotions.
A patent application about such a process has been filed. I’m wondering if most people would feel comfortable using the process described in this new patent filing.
The summary background for this new patent filing is one of the shortest I have seen, telling us:
“Some computing devices (e.g., mobile phones, tablet computers, etc.) provide graphical keyboards, handwriting recognition systems, speech-to-text systems, and other types of user interfaces (“UIs”) for composing electronic documents and messages. Such user interfaces may provide ways for a user to input text as well as some other limited forms of media content (e.g., emotion icons or so-called “emoticons”, graphical images, voice input, and other types of media content) interspersed within the text of the documents or messages.”
The patent application is:
Graphical Image Retrieval Based On Emotional State of a User of a Computing Device
Inventors: Matthias Grundmann, Karthik Raveendran, and Daniel Castro Chin
US Patent Application: 20190228031
Published: July 25, 2019
Filed: January 28, 2019
A computing device is described that includes a camera configured to capture an image of a user of the computing device, a memory configured to store the image of the user, at least one processor, and at least one module. The at least one module is operable by the at least one processor to obtain, from the memory, an indication of the image of the user of the computing device, determine, based on the image, a first emotion classification tag, and identify, based on the first emotion…
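The claim describes a simple pipeline: capture an image of the user with the device camera, determine an emotion classification tag from that image, and use the tag to retrieve matching media. Here is a minimal sketch of that flow in Python. This is purely illustrative, not Google’s implementation; the class name, the tag-to-media index, and the stand-in classifier are all my own hypothetical placeholders.

```python
# Illustrative sketch of the claimed flow: image -> emotion tag -> media.
# Not Google's implementation; all names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class EmotionImageRetriever:
    # Hypothetical tag -> candidate media index; a real system would back
    # this with an image store keyed by emotion classification tags.
    index: dict = field(default_factory=lambda: {
        "joy": ["party.gif", "thumbs_up.png"],
        "sadness": ["rain.gif"],
    })

    def classify(self, frame: dict) -> str:
        # Stand-in for an on-device emotion classifier; here we simply
        # read a label embedded in the fake camera frame.
        return frame.get("label", "neutral")

    def retrieve(self, frame: dict) -> list:
        tag = self.classify(frame)      # the "first emotion classification tag"
        return self.index.get(tag, [])  # media matching that emotion


# A placeholder for an image captured by the device camera.
camera_frame = {"label": "joy"}
print(EmotionImageRetriever().retrieve(camera_frame))  # ['party.gif', 'thumbs_up.png']
```

The point of the sketch is only to show the separation the claim draws: classification produces a tag, and retrieval is keyed on that tag rather than on the raw image.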
Read More Here