The Dressler Blog

I have opinions. Lots of opinions.

Hidden Bottlenecks

New York State is suing Charter/Spectrum/Time Warner for providing lower-than-advertised broadband speeds to their customers and for manipulating FCC speed tests. Among other alleged misdeeds, Time Warner intentionally deceived customers about the speeds they could achieve and artificially throttled traffic by supplying out-of-date modems. But far more damning, Time Warner hired the same contractor the FCC used to measure speeds across a panel of 800 Time Warner subscribers. That contractor shared its measurements with Time Warner before passing them to the FCC, allowing Time Warner to artificially boost speeds for only those 800 subscribers. As a result, the company could keep advertising speeds to potential customers that it could not possibly provide. Internal Time Warner communications seem to indicate that the company knew the practice was duplicitous, but it continued to mislead the FCC and the general public.

Why does this matter? Outrage, while understandable, is misplaced here. "Buyer beware," warns the old adage, and Time Warner customers are generally aware that advertised upload and download speeds are inflated. For people in technology, this lawsuit is significant because it reminds us of the real gap between our "official" technological infrastructure and the infrastructure that actually exists. Current approaches to digital design and development assume network speeds that simply do not exist for many urban customers during peak times. False assumptions about network speed already inform many projections about what technology will emerge in the coming year. Video content loads less reliably than advertised. IoT applications will respond more slowly and less reliably. In digital advertising, the assumption that polite loads leave headroom for higher-bandwidth rich media may be false. In general, we need data on network speeds that does not rely on self-reporting from industry players or a hopelessly overmatched FCC.

In a nutshell: Don't assume your resource-heavy site or application is loading as advertised.
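If you would rather not take any provider's word for it, measuring throughput yourself is straightforward. The sketch below is a minimal, unofficial check, not the FCC's methodology: it times the download of a test file and reports the observed rate. The URL is a hypothetical placeholder for an asset you host yourself.

```python
import time
import urllib.request

# Hypothetical test asset; point this at a reasonably large file you host yourself.
TEST_URL = "https://example.com/static/speed-test.bin"

def measure_throughput_mbps(url: str = TEST_URL) -> float:
    """Download the asset once and return the observed throughput in megabits per second."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        payload = response.read()
    elapsed = time.monotonic() - start
    return (len(payload) * 8) / elapsed / 1_000_000

if __name__ == "__main__":
    observed = measure_throughput_mbps()
    print(f"Observed throughput: {observed:.1f} Mbps")
```

Run it a few times at peak hours; the gap between the observed number and the advertised one is the bottleneck your designs should plan for.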
The R&D IPO

Last week, technology unicorn Snap announced plans to hold its initial public offering in March. In some ways, it is not surprising that this industry darling is determined to get its next round of financing through public markets. At this point, there is too much potential money on the table, and Snap's projected valuation is so high that investors are getting antsy. But Snap loses money ($514.6 million last year) and is projected to keep losing money. This is not Google. What Snap is offering potential investors is a chance to latch on to the company's vision, including smart glasses, augmented reality, and first-person video. Investors are essentially buying into Snap's culture of innovation, assuming that profit will eventually result from an application or platform yet to be announced.

Why does this matter? Research and development has traditionally been a division of a larger company. Legendary R&D facilities like Xerox PARC or Bell Labs created much of the technology we enjoy today, often generating little or no profit for the parent company. During the cost-cutting fervor of the '80s and '90s, many companies eliminated large-scale R&D as unnecessary overhead. Partly as a result, the center of innovation shifted from large corporations to small, nimble startups. This tended to profit both the startups and the corporations. The corporations could keep the overhead off their books until the technology and the market were proven, then turn around and buy the innovation at a substantial markup, but with full investor approval. Startups (and the VCs who love them) could assume the risks of innovation knowing that a substantial payout was likely if they succeeded. What Snap seems to be offering investors is something else entirely. Snap is selling itself as a research and development company with some promising products but no profits. If Snap succeeds in developing new products that generate a great deal of income, or in generating new income from its existing products, that may work out just fine for investors. Although, judging by the valuation, those anticipated profits may already be baked in. In a great economy, innovation alone may be enough to sustain equity value. But what happens in a bad economy?

In a nutshell: EBITDA. Like bootcut jeans, it will come back in style someday.

Actual Clouds and Machine Learning

Descartes Labs is a startup that combines satellite images with data about our planet to generate insights and forecasts. However, it quickly discovered that raw satellite images could not be successfully processed by machine learning algorithms. The presence of clouds (the white, puffy variety), or simply terrain colored differently by the natural movement of the sun, can alter different satellite images of the same location enough that they can't be handled programmatically. Descartes Labs solved this problem by "cleaning up" its satellite images, creating composites that establish common coloration and eliminate interference from weather. To automate this process, it needed to build a massively parallel processing system that creates composite images which change over time. After all, forecasting demands that you be able to see changes in a landscape, not have them averaged out as interference based on previous satellite images.

Why does this matter? Machine learning is the flavor of the week, and there are good reasons for that. Neural networks are incredibly powerful and can automate tasks that computers have traditionally performed poorly. However, machine learning depends on clean, well-labeled data and inputs. Too many companies are blindly pursuing machine learning initiatives without realizing that the "big data" they have spent years collecting is nowhere near detailed or clean enough to allow neural networks to perform their magic. The first step in any machine learning project is to evaluate the current state of your data. As in the case of Descartes Labs, the effort of preparing the inputs may exceed that of the actual machine learning.

In a nutshell: Any process can only be as good as its inputs. Machine learning included.
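As a concrete, if simplified, illustration of evaluating the current state of your data: Descartes Labs' inputs are satellite images, but for the tabular data most companies have collected, a quick audit before any modeling might look like the sketch below. The file name and the "label" column are hypothetical stand-ins, not anything from Descartes Labs' pipeline.

```python
import pandas as pd

def audit(path: str = "training_data.csv", label_col: str = "label") -> None:
    """Print a few basic health checks before any model sees this data."""
    df = pd.read_csv(path)

    # Columns with large shares of missing values won't help a neural network.
    missing = df.isna().mean().sort_values(ascending=False)
    print("Share of missing values per column:")
    print(missing[missing > 0])

    # Exact duplicate rows inflate apparent data volume without adding signal.
    print(f"\nDuplicate rows: {df.duplicated().sum()}")

    # A heavily skewed or absent label column is a warning sign before training.
    if label_col in df.columns:
        print("\nLabel distribution:")
        print(df[label_col].value_counts(normalize=True))
    else:
        print(f"\nNo '{label_col}' column found; the data may not be labeled at all.")

if __name__ == "__main__":
    audit()
```

If a five-minute audit like this turns up gaps, the cleanup work belongs in the project plan before the modeling does.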
Mobile 2.0 or Web 3.0 or neither

Ben Evans of Andreessen Horowitz is always insightful and enjoyable to read. In his latest blog post (link below), he draws an analogy between the famous Web 2.0 conference in 2004 and the ten-year anniversary of the launch of the iPhone. His point, I believe, is that it always takes time for the kinks to be worked out of a new platform, and for both the technology and our understanding of it to evolve enough that the platform is used to its potential. In the case of the internet, it took the ten years from the launch of the Netscape browser to the announcement of Web 2.0 for people in technology to start understanding the possibilities. Evans argues that mobile technology has reached a similar point: we are no longer building websites for mobile, but building truly mobile experiences (apps) that take advantage of native functionality and capabilities. He points out, for example, that the cell phone camera is used less as a traditional point-and-shoot and increasingly as an image input mechanism.

Why does this matter? People have always liked to put a definitive date and title on the long-term drift of historical currents. Web 2.0 didn't begin at Tim O'Reilly's conference, but it certainly didn't hurt his career, or the technology industry, to create a brand name that memorializes long-term trends. Similarly, if calling the current generation of mobile technologies "Mobile 2.0" helps people sell their products and get budgets approved, I'm all for it. But the general user's understanding of technology is constantly evolving, and with that understanding, the technology is able to evolve as well. Explanations and anachronistic design can be abandoned, and the potential of the new technology is slowly revealed. As users become comfortable, design and functionality nudge forward by imperceptible degrees, until finally it becomes useful to recognize the distance we have traveled with a catchy name, like Mobile 2.0.

In a nutshell: Yes, we're using mobile technology more effectively.
