Opinion: What’s The Issue With AI?

Photo: Steve Johnson / Unsplash

Lou Conover

Everyone talks about artificial intelligence, but hardly anyone seems to know what it actually is, hardly anyone but the computer scientists who created it. I’m long retired from the field, but it was my area of research, and although it couldn’t do much when I was involved in it, the change that has turned it into what it is today is essentially one of scale, not of new ideas.

The core of an AI system is a neural net. Without going into too much detail, a neural net is a web of virtual neurons, each consisting of nothing more than connections to other virtual neurons, with each connection carrying a numerical weight. The system learns by adjusting the weight of each connection up or down based only on whether the system as a whole produces the correct output for a given input. The inputs are the training data set. There’s no intelligence in any single element of the web. Right answer, adjust the weights one way; wrong answer, adjust them the other. Repeat ad almost infinitum until the system gradually homes in on the ability to get it right. The “knowledge” in such a system isn’t in any one element. It’s spread across that enormous set of numbers, no single one of which represents anything in particular.
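
To make that concrete, here is a minimal sketch in Python, a toy illustration of my own rather than anything drawn from a real system: a single virtual neuron learning the logical AND function by nothing more than nudging two weights and a bias up or down. Modern systems use billions of such units and a subtler adjustment rule, but the principle is the same.

    # Toy illustration only: one virtual neuron learning logical AND.
    # The (inputs, desired output) pairs are the training data set.
    training_data = [
        ((0, 0), 0),
        ((0, 1), 0),
        ((1, 0), 0),
        ((1, 1), 1),
    ]

    weights = [0.0, 0.0]  # one numerical weight per connection
    bias = 0.0
    step = 0.1            # how far to nudge on each wrong answer

    for epoch in range(100):  # "repeat ad almost infinitum"
        for (x1, x2), target in training_data:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output          # right answer: zero, no change
            weights[0] += step * error * x1  # wrong answer: adjust up or down
            weights[1] += step * error * x2
            bias += step * error

    print(weights, bias)  # the learned "knowledge" is just these numbers

After enough passes the neuron gets every answer right, yet nothing in the final numbers announces “this is AND”; the behavior emerges only from their combination.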

Therein lies the problem. Once the system is trained on a large set of data, it “knows” things about that data. It can reproduce facts about that data. It can answer analytic queries about the data. If the data set is human speech, the system can reproduce human speech. If the input is data about weather, the system can produce a forecast.

However, there is no single datum within the system that represents that knowledge, no single datum that represents the real-world facts of whatever the system is analyzing. There are no explicit rules, no decision points. The knowledge in a neural net is in the pattern of activation across the entire network. If a decision is “incorrect” in some way, there is nothing that can be pointed to as the source of the “error”. I put the words “incorrect” and “error” in quotes because as far as the system is concerned, there is no error. The output correctly represents the training data set. An error is an output that we don’t want, not one that is erroneous with respect to the training data set. There is no way to “fix” the system, no single point of failure, no single value that can be tweaked to make the error go away. The behavior of a neural net can’t be predicted by examining its internal workings. It is essentially a black box.
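
The contrast with conventional, rules-based software makes the point plain. The fragment below is hypothetical, a sketch of my own for illustration: in hand-written code, an unwanted decision has an address, a specific line that can be changed; in a trained network, the equivalent decision emerges from a vast collection of weights with no such address.

    # In explicit, rules-based code, an error can be traced and fixed:
    def loan_decision(income, debt):     # hypothetical example
        if debt > income * 0.4:          # wrong threshold? change it right here
            return "deny"
        return "approve"

    # In a trained neural net, the same decision is smeared across millions
    # or billions of weights (illustrative values, not from any real system).
    # There is no line to point at, no threshold to edit:
    weights = [0.113, -0.872, 0.054, 0.391, -0.207]  # ... and so on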

Because there is no way to know in detail how a neural net computes its output, its decisions can’t be questioned. They can only be accepted or rejected. When we cede decision making power to such a system, the only recourse we have when we disagree with the system’s decisions is to appeal to a higher power, which is typically the bureaucratic or business entity that has deployed the system to automate those decisions. But despite owning the system, that entity can’t know how the decisions are made and has already stepped away from involvement in the details that go into making those decisions. Is it willing or able to review the system’s decisions when it has explicitly employed the system to make decisions it doesn’t employ humans to make?

The current trend toward regulating AI through legislation is doomed to fail if it tries to address how AI works, if it tries to insert rules into a fundamentally non-rules-based system. Instead, regulation should address how AI systems are used. The output of an AI system should be explicitly labeled as such. Entities that employ AI systems should be made explicitly liable for the decisions the systems make. They should be required to have a system in place whereby AI decisions can be appealed, with humans managing the appeals process.

The inner workings of an AI system can’t be understood or explained, much less regulated, but the entities that employ them can be, and should be.

That said, artificial intelligence is not the aspect of digital technology that concerns me the most. I think AI will be easy to control if we just use it as a tool directed at specific problems under publicly known circumstances. That’s more likely every day, given the increasing difficulty of hiding anything. What is more worrisome to me is the extent to which digital technology is permeating our lives. It’s becoming more and more likely that any built environment is capable of listening to us and responding. It’s not surveillance that concerns me. I was recently in a large room with a couple of young people, one of whom was in the room for the first time. He wanted to know the time and started shouting out, “Siri, Alexa,” until I pointed out the clock. He quite reasonably thought there was a good chance that the digital world was as present in the room as he and I were.

This is an entirely new condition. Homo sapiens evolved in a natural environment. It’s what our biological and psychological systems are prepared for. Our minds aren’t adapted to an environment that is ready at every moment to serve us personally, but is not itself a person and is at least potentially present everywhere we go. Combine that with facial recognition and soon every room you enter will know you’re there and present you with a personalized list of all the options available to you in that room. No searching, no discovery. No interaction with the unknown. Everything tailored to the individual.

That will change us. There will be a constant mismatch between what we’re adapted to do for ourselves and what the environment will do for us. I don’t know where it will lead, but I can’t believe that it will be universally good for us. I’m not confident leaving it up to market forces that will encourage a race to the very bottom to produce the most unhealthy products and lifestyles. It may already be too late to try to steer the progress of this technology, but that doesn’t excuse us from trying.

Lou Conover has been a resident of Amherst since 1986. He had a twenty-year career in software, followed by twelve years teaching mathematics in China, and retired five years ago. He currently spends his time writing and making music and art. Some of his work can be seen at shingledesigns.com.

1 thought on “Opinion: What’s The Issue With AI?”

  1. I recommend two additional references to help folks think about the challenges that AI poses and whether we ought to be worried about it and paying closer attention.

    Today’s (5/30) episode of The Daily, from the New York Times, entitled “The Godfather of AI Has Some Regrets,” features an interview with Geoffrey Hinton, who helped invent the technology behind ChatGPT: https://www.nytimes.com/2023/05/30/podcasts/the-daily/chatgpt-hinton-ai.html. It’s about 30 minutes long and worth the time.

    And there was a humorous article in Sunday’s NYT (though apparently not humorous for the attorney in question) about a lawyer who had ChatGPT write a court filing for him. It did not go well. The AI made up all of the case law it quoted and the lawyer is now facing discipline.
    https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
