Two words in this statement popped out to me like a flying dinosaur in a mixed-reality headset: when possible. When I flagged this in a subsequent call, Roku reassured me that a fix for my issue would happen. In the worst-case scenario, if the problem isn’t solved in the next OS, sufferers will be provided some incantation to have their televisions backdated to the previous operating system. (Does this mean we’re back to hitting that home button five times?) And if that doesn’t work, which Roku says totally won’t be the case, the company will make sure that everyone is somehow satisfied. The company was ready to satisfy me right away, offering me a new TV. I declined, since it wasn’t offering one to everyone whose Netflix was crashing.
I think Roku is dealing in good faith. I’d been happy with my Roku-powered smart TV, until I wasn’t, because it kept crashing. I take Roku at its word that it’s working on the problem and might actually fix it. I acknowledge that updating software on a static platform like a television set is a particular challenge. And God knows how common bugs are in software.
In any case, my inability to stream Netflix without resetting the TV every time I watch a movie is a pretty trivial problem. And you know what? Even if I never watched Netflix again, I’d live. Now that Netflix has added advertising to its business model, I’m dreading the day when everyone on the service is exposed to endless commercials, unless we pay even more than the already out-of-control monthly fee. Beef was great, but I’d pass if it were interrupted by pharma ads every 10 minutes.
Nevertheless, my Roku problem is a warning. Artificial intelligence is thrusting us into an era that intertwines our lives with digital technology more than ever. If you think that our current software is complicated, just wait until everything runs on neural nets! Even the people who create those are mystified about how they work. And, boy, can things go wrong with that stuff. Just this week, OpenAI suffered a few hours during which its chatbots blurted out incoherent comments, evoking the word salad of a stroke victim or the Republican front-runner. And Google had to temporarily stop its Gemini LLM from generating images of people, because of what it called “inaccuracies” in how it depicted the diversity of humanity. These are disturbing portents. We’re now in the process of turning over many of our activities to these systems. If they fail, “community discussions” won’t save us.
Time Travel
Digital technology is too damn complicated, and we’re doomed to a life of bug resolution. That was my observation 30 years ago when I wrote Insanely Great, in a passage spurred by a freezing problem I had with my Macintosh IIcx. As the Mac operating system struggled to handle a complicated ecosystem of extensions, boundary-pushing applications, and data at a scale its original designers had not imagined, bugs appeared that required Sherlock Holmes–level sleuthing to resolve.
This was the background to my Macintosh troubles: the computer had become more complicated than anyone had imagined. I enacted a short-term fix, stripping the system of possible offenders. I was stepping back in time, making the Mac emulate the simpler, though less useful, computer I once had. As I wiped out Super Boomerang, Background Printing, On Location, and Space Saver, I pictured myself as Astronaut Dave in 2001, determinedly yanking out the chips in the supercomputer H.A.L., with the uncomfortable feeling that I was deconstructing a personality. When I finished, my Macintosh IIcx was not so atavistic as to sing “Daisy,” but it was, in a Mac sense, no longer itself. On the other hand, it no longer hung.