The Dressler Blog

I have opinions. Lots of opinions.


AI falls into the Uncanny Valley

Machine learning is very good at pattern recognition, and you need to recognize patterns in order to recreate them. A number of startups have applied machine learning to the creation of artificial patterns in sound or images. Lyrebird got some viral buzz a few weeks ago when it used its voice-imitating technology to make it appear that Barack Obama, Hillary Clinton, and Donald Trump were endorsing the company. FaceApp, a company out of Russia, lets users add a smile to an existing photo of a person or add and subtract years of age. Finally, Face2Face out of Stanford can manipulate video footage so that a person’s facial expressions match those of a tracked individual. As a result, you can watch Donald Trump’s face burst into a goofy grin, an exact match for the expressions of a former Stanford student. To quote a recent MIT Technology Review article: “The result is often eerily realistic.”

Why does this matter? Well, it’s eerily something. Watching these demos or listening to Lyrebird’s viral promos is a deep dive into the uncanny valley. The uncanny valley is the theory that the closer a technology comes to recreating the human form without quite getting there, the more disturbing the result becomes. So South Park just looks like a silly cartoon, but Grand Moff Tarkin in Rogue One leaves many viewers feeling nauseated. (The actor who played Grand Moff Tarkin had died, so the character was digitally inserted into the movie.) Face2Face comes closest to creating something recognizably human, but it still looks wrong: the eyes don’t match the expressions, and the movement of the face is unnatural because the muscles aren’t moving in sync. FaceApp recreates amateurish airbrushing on an industrial scale, and Lyrebird sounds like a GPS system doing bad celebrity impressions. These applications are intriguing, but they are definitely not appropriate for consumer-facing communications.

In a nutshell: AI is really good at getting you all the way to the bottom of the uncanny valley.

Privacy is a choice

A recent article in Wired points out that many apps are manipulating permissions for smartphones’ microphones in order to listen for the inaudible pings of beacons. Beacons have become popular among retailers, who use them to push marketing messages to shoppers or to track who is coming into their stores. These beacons constantly send out pings that are inaudible to human ears but can be picked up by smartphone microphones. Apps can take advantage of those pings to track user behavior and then sell that information to marketers. Granted, there are a few justifiable use cases for allowing an app to use your microphone. But researchers are finding that many apps request this permission without justification.

Why does this matter? The smartphone has always had the potential to be “the spy in our pocket.” Every day, millions of users thoughtlessly grant permissions and sign user agreements that violate their privacy. I’m sure you don’t have anything to hide. But I’m uncomfortable with the idea that major technology companies are reducing the events of my life to a purchase funnel that is their intellectual property. Maybe you feel the same way. If so, you cannot rely on privacy activists to pressure tech companies into anonymizing or deleting user data. Privacy is a choice. You need to choose not to share your life and identity with your ISP, your phone manufacturer, an app developer, etc. Because the default setting is that everything is shared and then owned by those companies.
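Since the whole problem comes down to a single permission grant, here is a minimal Kotlin sketch of what “review all your settings” looks like programmatically on Android. It is my own illustration, not anything from the Wired article: it lists the installed apps that have been granted the microphone permission, using the standard PackageManager APIs. (On recent Android versions, package-visibility rules may limit which apps it can see.)

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageInfo
import android.content.pm.PackageManager

// Illustrative sketch: list installed apps that requested -- and were granted --
// RECORD_AUDIO, i.e. apps that are at least capable of hearing beacon pings.
fun appsWithMicrophoneAccess(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { pkg: PackageInfo ->
            val requested = pkg.requestedPermissions ?: return@filter false
            val flags = pkg.requestedPermissionsFlags ?: return@filter false
            requested.indices.any { i ->
                requested[i] == Manifest.permission.RECORD_AUDIO &&
                    (flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED) != 0
            }
        }
        .map { it.packageName }
}
```

You don’t need to write code to do this, of course; the permission manager in Android’s Settings app (or the microphone privacy screen on iOS) shows the same list. The point is simply that the list exists and is worth checking.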
In a nutshell: Review all your settings. Everywhere.

Social media marketing works!

After last year’s election, Facebook angrily insisted that fake news and propaganda on their platform were unrelated to the outcome of the election. Now, reluctantly, they have released a report that admits that fake profiles on Facebook were used by a state actor (Russia) to try to influence the election by spreading fake stories and hacked personal data from one of the candidates (Clinton). This is hardly surprising to anyone with a Facebook feed stuffed with bizarre, conspiracy-themed articles from mysterious websites, shared by an uncle or cousin or college friend.

Why does this matter? Much as I personally dislike it, this was the most successful social media campaign of all time. It is a marketing triumph. I know, I know, it’s terrible. But it’s also awe-inspiring. I have no idea what the Russian government spent on their “influencer network” or their “native content.” What I do know is that they took a campaign that has traditionally been decided by billions of dollars in television advertising and turned it on its head using social media. Ignore the fact that most of the news was fake; that’s incidental. The Russians created an artificial network of peer-influencers and used the naturally loose vetting standards of shared gossip on social networks to their advantage. Yes, they had to tap into the ingrained prejudices of a particularly vocal group of social media users to get their stories shared. But once they had started seeding their marketing message, they expanded their audience of credulous readers through sheer repetition. Yes, it’s gross and offensive. But it worked.

In a nutshell: Stop giving credit to Cambridge Analytica. The Russian secret police have mastered digital marketing.
