The Dressler Blog

I have opinions. Lots of opinions.


The Google Conspiracy

During the Cold War, Americans struggled to understand the meaning of developments inside the Soviet Union. A class of experts known as Kremlinologists existed to explain the shifting alliances at the top of the Soviet government, revealed by things like position on the reviewing stand at the annual May Day celebration. As a youth, I imagined that the hazy and incomplete knowledge we had of the Soviet Union was due to the repressive and secretive nature of that state. Now we have Google. Google is not particularly repressive. They are not particularly secretive. And yet, Google's behavior and announcements are endlessly parsed and interpreted by a new class of pundits. (Googologists?) The news that Google plans to include an ad blocker in a future version of Chrome has been subjected to intensive interpretation. Because Google makes the majority of their money in advertising, pundits have suggested that they must have a secret (and probably underhanded) motivation for including an ad blocker. (See some theories in the link below.)

Why does this matter?

I have no special knowledge of Google. So I am reduced to interpreting this decision based only on obvious things. For example:

1. Ad blocking is popular technology, and if Chrome failed to integrate an ad blocker it might lose its dominance in the browser space.
2. Ad blocking is already available, but the vast majority of internet users don't bother installing it, so it is perhaps not such a threat to ad revenue.
3. Google believes that they make money when the internet works, so anything that screws up people's internet experience hurts Google.
4. There are too many ads online, and that is screwing up people's internet experience.
5. Google insists that ads that run on their platforms be relevant, useful, and not too invasive.
6. Google limits the number of ad units their publishers can feature on a page.
When you look at all these obvious things, the natural conclusion is that Google wants Chrome to stay the most popular browser and feels that the proliferation of ads is actually damaging to their core business. If this is a conspiracy, it's terribly hidden: it's in plain sight.

In a nutshell: Google doesn't want the internet to suck.

Read More

Thoughts on AR

Reading Benedict Evans is like binge-watching Richard Feynman lecture videos. Evans' lengthy blog post "The first decade of augmented reality" (link below) challenges assumptions about the direction of augmented reality. One must forgive him for ignoring the controversy about the Magic Leap demo videos. (His company is an investor.) But his conjectures about a possible future relationship between AR and VR are fascinating. It's easy to imagine a future state in which AR and VR are not separate technologies, but points on a continuum between pure reality and pure abstraction. Once AR has accomplished the task of recognizing and mapping against an environment, it is easy to see how such technology would solve some of VR's biggest problems. If VR could be mapped on top of the physical environment – turning the walls of your office into the walls of a cave – then the nausea problem gets solved. This isn't pure VR. It's a kind of reality-augmented virtual reality. (Forgive me, I couldn't resist.)

One quibble I have with Evans is his assumption that we would reach out to manipulate or activate things in AR. This is a fallacy that seems to have emerged from the movie Minority Report. A world where everyone walks around touching and grabbing invisible objects at chest level is unlikely and impractical for multiple reasons. It seems to me much more likely that the technology will evolve to allow people to manipulate AR by small movements against their forearm or knee, or on a desktop.

Why does this matter?

As Evans points out, new technologies are developed and adopted iteratively.
Yet we expect the next technological revolution to arrive "before Christmas," when it is far more likely to emerge by fits and starts over two to five years. In addition, the lessons of the failure of Google Glass haven't really been learned by the tech industry. They continue to develop technology that makes the user look foolish, working under the assumption that cool enough technology will change behavior. In real life, users wearing thick goggles would be gesturing at invisible objects. If you thought glassholes were creepy, this is on another level.

In a nutshell: AR will happen. Eventually.

Read More

Parsing Intelligence

MIT Technology Review recently posted a long article called "The Dark Secret at the Heart of AI." Despite the unjustified hysteria of the title, the article does dive into one of the biggest challenges in the practical application of machine learning: namely, we have no idea how machine learning algorithms figure things out. To take one example from the article, an algorithm trained on patient data from Mount Sinai Hospital in New York is capable of anticipating the onset of psychiatric disorders like schizophrenia. This is a boon for doctors, since schizophrenia has always been difficult to predict. But no one knows how the algorithm is making these predictions. Machine learning systems train themselves, and there are precious few ways to figure out how they are reaching their conclusions. Backpropagation does allow for the manipulation of individual nodes in a neural network, and some AI researchers are asking their machine learning systems to flag relevant data points. But the deeper the neural network, the more its processes become a black box. If the algorithm is making some kind of basic logical error or demonstrating bias, it cannot be corrected, because it is impossible to pinpoint the location and nature of the error.

Why does this matter?
Discussions of machine learning have always been marred by misunderstandings about the nature of the human brain. Many software engineers still imagine that the brain is a mechanistic logic machine capable of being reverse-engineered. Early work on artificial intelligence tried to duplicate inputs and outputs to match human cognition in a series of steps. This failed. Now that we have a technology that resembles the often-inscrutable leaps of logic of the human brain, we want to reduce it to a mechanistic process. But that reductionism is based on a false understanding of human intelligence and a false understanding of machine learning. Machine learning algorithms are tools. These tools may be more or less useful as they are more or less accurate. As tempting as it is to require an algorithm to "show its work," that isn't what makes deep learning effective.

In a nutshell: Sometimes, you just know something.

Read More
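For readers curious what "flagging relevant data points" can look like in practice, here is a minimal sketch of one common tactic: using the gradients that backpropagation already computes to score which input features a prediction was most sensitive to (so-called saliency). Everything here is hypothetical: a toy two-layer network with random stand-in weights, not the Mount Sinai model or any real clinical system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in "trained" weights: 4 input features, 3 hidden units, 1 output.
# In a real system these would come from training, not a random generator.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def predict(x):
    """Forward pass: input features -> a score in (0, 1)."""
    h = sigmoid(x @ W1)
    return sigmoid(h @ W2)[0]

def saliency(x):
    """Gradient of the output w.r.t. each input, via backpropagation by hand.

    Large-magnitude entries mark the features the prediction is most
    sensitive to: a crude way of asking the model "what mattered here?"
    """
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)[0]
    g = y * (1.0 - y) * W2[:, 0]   # through the output sigmoid
    g = g * h * (1.0 - h)          # through the hidden sigmoids
    return W1 @ g                  # through the first linear layer

x = rng.normal(size=4)
scores = saliency(x)
print("prediction:", predict(x))
print("most influential feature index:", int(np.argmax(np.abs(scores))))
```

Note that this only works cleanly because the toy network is two layers deep; as the article's sources point out, the deeper and more nonlinear the network, the less such per-feature attributions resemble an explanation.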