What an AI future will look like

It's a very common trope for the future to be ruled by AI. Usually this is a dystopia, where the AI either enslaves humanity or decides that humans are a threat and must be exterminated. In other cases the AI is instead benevolent, providing for humans better than they could provide for themselves and offering a secular paradise where everyone can mindlessly consoom content forever. With things like machine learning producing some impressive results, there are probably more people than ever who think that an AI-ruled future is inevitable. And it doesn't seem like it's going to be the good one: after all, in the utopian AI futures one thing that humans generally remained responsible for was art, but even now we have AI algorithms that can produce artwork at least as good as a painter with a few years of experience.

The truth is that strong AI is philosophically impossible. An AI will never truly decide anything. And that's where the true horror lies: despite the fact that AIs can't make decisions or meaningful content, we will still push off more and more of our decisions and content creation onto them. Defending the thesis that strong AI is impossible would be an entirely different essay, requiring a development of metaphysics and the philosophy of mind, and that's not what I hope to do here. But if you look at examples of how current AI algorithms are used, you can at least see that our current AI is not thinking in any profound sense, and that people trust it too much anyway.

This ties into previous essays which touched on how search engines suck. It's definitely not the case that when you search for something online, either in a general search engine or on a specific site like Amazon or youtube, some computer is calmly thinking about what you want and then returning the results that best match both your actual search terms and what you were subconsciously thinking of. There have been many times where I have searched for a product on Amazon as specifically as could reasonably be expected, such as searching for an exact book title in quotes together with the author, and still not had the result show up (despite the book actually being sold on Amazon.) The idea that these sites are providing anything more than minimally useful results on most searches is absurd, but you still run into people who think they got some weird item in their search results because Amazon "really knew what I wanted." Similarly, there are people who think that "the algorithm" is making deep decisions about which youtube videos they would be interested in watching, rather than being essentially random in most cases (besides the obviously pushed political videos whenever there is a happening.) What makes this more plausible to these people is the fact that they give away so much personal information, not only through normal browsing of these sites but also through their smartphone, Alexa device, etc. The thought process goes "surely these companies wouldn't take all of this personal data and not use any of it in a way that benefits me!"

For something just a little more serious, consider "smart" appliances, in particular thermostats. People buy smart thermostats for one of two reasons: either they are tech cultists who absolutely must have the most "advanced" item, or they think that the thermostat will handle their heating and cooling so efficiently, using "machine learning algorithms," that their utility bills will be dramatically lower. This is pretty implausible on the face of it, since all a thermostat can do is run or not run the heating or air conditioning. There are perhaps some very minor optimizations to be made based on when your house is naturally cooler or warmer (based on things like which appliances you are running), but for the most part all that matters is what temperature you have set. There isn't a big difference between keeping a house cooled to a certain temperature and letting it get a little warmer by not running the AC: whatever the AC saves by sitting idle, it mostly pays back by running longer afterwards to bring the temperature back down. If you hired someone to sit next to your thermostat all day, switching it on and off according to whatever would be best for your power bill, there would be very little he could do to save you money. So there's no magic algorithm that an AI can use to save you tons of money.
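The energy accounting here can be made concrete with a toy simulation. This is a hypothetical sketch, not anything from a real thermostat: all the numbers (leak rate, AC cooling rate, outdoor temperature) are made up, the house leaks heat in proportion to the indoor/outdoor gap, and a plain bang-bang thermostat switches the AC. Even an aggressive 8-hour setback, far more than any "smart" schedule would quietly impose, only trims AC runtime by a modest fraction.

```python
# Toy model (made-up numbers, for illustration only): heat leaks in
# proportionally to the indoor/outdoor temperature gap, the AC removes
# heat at a fixed rate while running, and a bang-bang thermostat with
# a small hysteresis band switches the AC on and off.

def ac_runtime(setpoint_at, outdoor=35.0, leak=0.002, ac_rate=0.05,
               band=0.5, start=24.0, minutes=24 * 60):
    """Minutes of AC runtime over one day of one-minute steps.

    setpoint_at(minute) -> target temperature in degrees C.
    leak: fraction of the indoor/outdoor gap leaking in per minute.
    ac_rate: degrees C removed per minute while the AC runs.
    """
    temp, running, runtime = start, False, 0
    for minute in range(minutes):
        target = setpoint_at(minute)
        if temp > target + band:      # too warm: switch AC on
            running = True
        elif temp < target - band:    # cool enough: switch AC off
            running = False
        temp += leak * (outdoor - temp) - (ac_rate if running else 0.0)
        runtime += running            # bool counts as 0 or 1
    return runtime

# Hold 24 C all day vs. let the house drift up to 27 C for 8 hours.
hold = ac_runtime(lambda m: 24.0)
setback = ac_runtime(lambda m: 27.0 if m < 8 * 60 else 24.0)
print(f"hold 24 C all day: {hold} min of AC")
print(f"8 h setback to 27: {setback} min of AC")
print(f"difference: {100 * (hold - setback) / hold:.0f}%")
```

The setback does use somewhat less runtime, because a warmer house leaks in less heat while it's warm, but the recovery period claws most of it back; tweaking the (made-up) leak and cooling rates changes the exact percentage, not the shape of the result.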

And in fact they don't save you money: you end up with higher utility bills. But note the defense given in this article: it's claimed that they would work if people acted like "Commander Spock" and let external companies run their thermostats without any input of their own, but instead they act like "Homer Simpson" and make their own decisions about how hot their houses should be. However, the bills are still higher than with non-smart devices, even though people definitely make adjustments to non-smart devices too. Thus the magic AI algorithms make things less efficient, and you can only get any savings whatsoever by being forced to live with a less comfortable temperature. If you were okay with that, you could just as easily raise the temperature on your non-smart thermostat. Thus smart thermostats are useless according to their own metrics. The only way anyone believes that they work is by letting the AI do something that they would be unwilling to do themselves and then saying "this is what I really want, since the AI knows more about my preferences than I do." And consider that not only are you paying more for such a "smart" device, you are also giving a company the ability to shut down your thermostat whenever they like (or unintentionally, through server errors), possibly putting you into a life-or-death situation (such as having no heat in the middle of winter.)

But it gets even more serious than thermostats. Plenty of companies have handed pricing decisions over to AIs, and plenty of stock trades are done automatically by AIs. You will definitely see companies using AIs to sort job applications. Probably not the very final decision, though that day may come, but they will definitely use AI to automatically choose "the top 20 applications" or something. You will see AIs used to assist in medical diagnoses (the demand for "expert systems" in medicine has been constant since at least the 80s.) And that's not even getting into things like self-driving cars. AIs will make decent decisions in the usual case, since that can be done just by "averaging out" what worked in large data sets, but AI algorithms will also lead to absolutely catastrophic decisions. And many will defend those decisions by pretending that humans would have made decisions just as bad or worse.

Thus the horror of mass AI is not that it will be a hyperintelligence ruling over us (benevolently or not), but that it is a set of dumb algorithms that work only acceptably well in the normal case and screw us over repeatedly, and that we hand our decisions over to it anyway. This raises a question: why would people do this? The people who will trust AI will fall into these categories:

It's not all black pills: all of this relies on the normies going along. If another fad comes along that distracts them from artificial intelligence, they will go along with that instead. But currently that looks unlikely. Prepare for a future of being forced to follow the dictates of barely functional algorithms.

September 25, 2022
