The Dressler Blog

I have opinions. Lots of opinions.

Base Reality over Dessert

Recently, I was out to dinner with a large group of friends. Included in the group were a successful technology entrepreneur, a senior executive at a major technology company, an economist working in a tech giant’s research division, and two senior consultants who specialize in the tech industry. These people all know more about technology than I do. Between the wine and my intellectual incapacity, I was barely managing to keep up my side of the conversation when it suddenly turned to Elon Musk’s assertion that there is a billion-to-one chance we are living in base reality. (To summarize Musk’s argument: Blah, blah, blah – AI – blah, blah, blah – video games – blah, blah, blah, simulation.) I have always operated under the assumption that everyone knows Musk was just being puckish, and that Musk himself knows very well that his numbers are crap and his argument is tendentious. So I was shocked to hear people who know so much more about technology than I do take his point seriously. I sputtered my incredulity and tried to talk about the Lucas-Penrose argument, but the evening was drawing to a close anyway. That night I sat up thinking. When knowledgeable and intelligent people think you’re wrong, it’s valuable to re-examine your assumptions. Over the next few days, I revisited Musk’s main points. They still felt prejudicial and speculative. But I couldn’t put my finger on why. So I was hugely relieved when Rodney Brooks published an article entitled “The Seven Deadly Sins of AI Predictions” in the MIT Technology Review. (Link below.)

Why does this matter? To quote the Silicon Valley character Richard Hendricks: “Not crazy, opposite.” Rodney Brooks is a former director of the Computer Science and Artificial Intelligence Laboratory at MIT and a founder of Rethink Robotics and iRobot. Like my dinner companions, he knows a lot more about technology than I do. I suspect he had Mr. Musk’s simulation argument in mind when he wrote his article showing how and why people get predictions about AI wrong. He begins with a reference to Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” He then draws on the history of technology to show how bad we are at understanding the true implications and effects of a new technology. He moves on to criticize AI futurists for creating almost magical scenarios for future technology, which, he correctly points out, is a faith-based rather than a rational argument. Next, he addresses the inherent ambiguity of “suitcase words” that carry multiple meanings. “Learning,” “intelligence,” and even “will” all have multiple meanings that are easily confused. But I believe the core of his argument (at least where it concerns Elon Musk) is exponentialism: progress may appear exponential for a period of time, until a physical limit is hit or there ceases to be an economic argument for further development (the short sketch below makes the point concrete). He follows this up with the absurdity of “Hollywood scenarios” (like a massive simulation?) and concludes with the practical limitations on the speed of deployment. Essentially, he makes all the arguments I would have made if I were smarter, more knowledgeable, and less drunk.

In a nutshell: You are living in base reality. Deal with it.
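To make the exponentialism point concrete, here is a minimal sketch in Python. It is my own illustration, not code from Brooks’s article, and the growth rate and ceiling are arbitrary numbers chosen for the demo: logistic growth tracks an exponential almost perfectly at first, then saturates at its limit, so extrapolating the early trend overshoots by orders of magnitude.

```python
import math

def exponential(t, rate=0.5):
    """Naive extrapolation: assumes growth compounds forever."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, limit=100.0):
    """Same early growth rate, but capped by a hard physical limit.
    rate and limit are made-up demo values."""
    return limit / (1.0 + (limit - 1.0) * math.exp(-rate * t))

# Early on the two curves are nearly identical; later they diverge wildly.
for t in range(0, 25, 4):
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:10.1f}  logistic={l:6.1f}  ratio={e / l:8.1f}")
```

Anyone fitting the first few data points could not tell the two curves apart; the difference only shows up once the limit starts to bite, which is exactly when the confident predictions fail.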
The Myth of Math

The criminal justice system has flaws. One particularly vexing problem is the role of human bias in sentencing and parole decisions. The ACLU has found that black men are incarcerated for an average of 20% longer than white men convicted of the same crimes. That’s pretty damning evidence of racism. To eliminate racial bias on an individual basis, researchers created an algorithm that calculates the likelihood of recidivism. The resulting scores supposedly offer an objective measure of whether a given inmate is likely to break the law again if released, regardless of race. Sound good? Not really. Among the factors the algorithm weighs are whether the inmate grew up in a high-crime neighborhood and whether they have a relative in prison. Those are basically proxies for race and class, meaning that the supposedly “fair and objective” algorithm created to eliminate racial bias just enshrines racial bias in a new form (a toy demonstration follows below). This is what mathematician Cathy O’Neil calls a “Weapon of Math Destruction.” These algorithms have three properties, according to O’Neil: (1) they are widespread and important, (2) they are mysterious in their scoring mechanism, and (3) they are destructive. As a mathematician, O’Neil is uniquely qualified to explain how any algorithm is only as objective or valuable as the data you put in. In a recent 99% Invisible podcast (link below), she talks about this and other examples from her new book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.”

Why does this matter? I come to technology by way of marketing. Marketers are not what you might call “rigorous.” So little of what we do can actually be quantified. If marketers are introduced to a quantification method (say, click-through rates), they are unlikely to ever question that method. This makes them uniquely susceptible to weapons of math destruction. If you sprinkle a little math on your marketing process and call it an algorithm, no one will question your data sources, your collection methods, or how you score the data. It’s math. It’s objective. Smart marketers understand that marketing is largely unquantifiable and use math to justify their gut instincts. Bad marketers believe the numbers and act on them. However, this unconsidered belief in the objectivity of numbers has spread beyond marketing. The benefit of my years in marketing is that I have become immensely suspicious of numbers that seem too convenient for someone’s argument. The processes of mathematics may be objective. But the data you feed into those processes is not.

In a nutshell: The existence of an algorithm isn’t evidence of truth or accuracy.
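Here is a toy sketch of how proxy features smuggle bias back in. The data and weights are entirely hypothetical (this is not the actual recidivism model, whose scoring mechanism is proprietary): the scoring function never sees race, yet because segregation and incarceration patterns make the proxy features correlate with race, the “race-blind” scores still diverge by group.

```python
# Hypothetical inmates. "race" is never passed to the scoring function,
# but in this made-up data the proxy features correlate with it.
inmates = [
    {"race": "black", "high_crime_neighborhood": 1, "relative_in_prison": 1},
    {"race": "black", "high_crime_neighborhood": 1, "relative_in_prison": 0},
    {"race": "black", "high_crime_neighborhood": 1, "relative_in_prison": 1},
    {"race": "white", "high_crime_neighborhood": 0, "relative_in_prison": 0},
    {"race": "white", "high_crime_neighborhood": 1, "relative_in_prison": 0},
    {"race": "white", "high_crime_neighborhood": 0, "relative_in_prison": 0},
]

def recidivism_score(inmate):
    """A 'race-blind' score built only from the two proxy features.
    The weights are invented for the demo."""
    return (0.6 * inmate["high_crime_neighborhood"]
            + 0.4 * inmate["relative_in_prison"])

# Race was dropped from the inputs, yet the group averages still diverge.
for group in ("black", "white"):
    scores = [recidivism_score(i) for i in inmates if i["race"] == group]
    print(f"{group}: mean 'objective' score = {sum(scores) / len(scores):.2f}")
```

The point is O’Neil’s, not mine: auditing such a system means asking what the inputs encode, not just confirming that a protected attribute was removed from the feature list.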
A Wedge in the Partisan Divide

Facebook, Google, and Twitter remain in the news due to ongoing investigations into the possibility that agents of the Russian government used their platforms to influence the 2016 presidential election. Facebook and Google actively campaigned for exceptions to FEC regulations that would have prevented the worst of these abuses. Political ads on television need to disclose who paid for the effort, even if the actual funders are hidden behind super PACs. Facebook and Google have become the dominant platforms in marketing by providing self-service ad campaigns. There simply isn’t a mechanism for either disclosing who paid for an ad or preventing foreign nationals from buying political ads. That’s not considered a bug; it’s the main feature of the service being offered. Considering the frictionless buying of political ads online, it will prove extremely difficult to tie specific efforts to Russia, and even more difficult to prove that the Russians acted with any collusion from the Trump campaign. Unless someone at Trump campaign HQ was dumb enough to write an email laying out the terms of the collusion, there will be no smoking gun. (Based on what I’ve heard about Cambridge Analytica from people who know, it’s entirely possible someone was exactly that dumb.)

Why does this matter? It is clear at this point that Russia interfered in the US elections by trying to sow social dissension that would help Trump as the “insurgent” candidate. Before we point an accusing finger at either Russia or the big technology companies, it’s worth considering that Russia did not manufacture social dissension in this country. We had that before they bought a single ad. A country that is willing to believe that the president was secretly born in Kenya, or that the moon landing was faked, or that global warming is a globalist conspiracy, or that drug companies are hiding that their vaccines cause autism, is a deeply, deeply damaged society. If we hand a geopolitical adversary like Russia such easy ways to undermine our society, we can hardly blame them for applying a little pressure. We shouldn’t have made it that easy. In addition, it’s absurd to get hysterical about Russian interference and ignore the larger point that unregulated campaign finance has made anonymous interference in our elections the norm. Does it really matter whether dissension is sown by the Russians or the Koch brothers or a large teachers’ union? There is no good money in the current system.

In a nutshell: Facebook and Google need to change their ad buying systems for political ads. But there’s plenty of blame to go around.
