"...Unilever, an Anglo-Dutch consumer-goods giant, is using expression-analysis software to pinpoint how testers react to foods. Procter & Gamble, an American competitor, is using similar technology to decipher the expressions of focus groups viewing its advertisements."
So states the article "Machines that can see" in The Economist. Similarly, sensor-driven understanding of an environment can enable in-context communication with a prospective consumer. Further, remembering the consumer and the context allows a brand to reach them in a non-disruptive fashion, so that the flow of information or marketing is "continued". For example, as the article states:
"Digital billboards—the large TV screens that display advertisements in public places—already take into account the weather (touting cold drinks when it is hot) and the time of day (promoting wine in the evening). NICTA, a media laboratory funded by the Australian government, has gone a stage further. It has developed a digital sign called TABANAR, which sports an integrated camera. When a passer-by approaches, software determines his sex, approximate age and hair growth. Shoppers can then be enticed with highly targeted advertisements: action figures for little boys, for example, or razors for beardless men. If the person begins to turn away, TABANAR launches a different ad, perhaps with dramatic music. If he comes back later, TABANAR can show yet another advertisement. “You tend to go: ‘Wow, thanks, how did you know I needed that?’,” says Rob Fitzpatrick of NICTA."
At P&G, I led an effort in kiosks that enabled a perspective consumer to conduct virtual beauty care at the shelf for beauty care products. Such services have existed in South Korea in specific but they may be going main stream across the world in other CPG categories:
"Computer vision has even advanced to the point that it can perform internet searches with an image, rather than key words, as a search term. Later this year Accenture, a consulting firm, will launch a free service, called Accenture Mobile Object-Recognition Platform (AMORP), that will enable people to use images sent from mobile phones to look things up on the web. After sending an image of, say, a Chinese delicacy, a curious foodie might receive information gleaned from AsianFoodGrocer.com, for example. Fredrik Linaker, head of the AMORP project at Accenture’s research centre in Sofia Antipolis, France, likens the project to “physical-world hyperlinking”."
The opportunities for computer-vision-based applications beyond consumer-centric uses are tremendous as well. The article also discusses their applicability, with examples, in the intelligence and safety sectors.