Many of Asimov’s stories include some form of a recurring character named Multivac.  Conjured in the days when computers were programmed with tapes, films, and punched cards, and ran on vacuum tubes that filled entire rooms and buildings, Multivac was that generation’s ultimate manifestation of the supercomputer.  In his short stories, Asimov shows us everything from science to government to art to prophecy delivered through Multivac.

With the recent discussions of artificial intelligence, I’ve unsurprisingly found myself thinking about those old Multivac stories.  Multivac was artificial intelligence before “AI” was a rigorous term, and unlike many others who have undertaken to examine the topic, Asimov actually understood computers and their logic.  Like today’s artificial intelligences, Multivac was never meant to examine ideas of computer consciousness, definitions of life, or machine rights and revolutions.  Those stories were about how people might interact with such an advanced computer, and that is the question we should be contemplating today.

Since ChatGPT’s debut, the AI conversation has escalated from a simmer to a roiling boil of misunderstanding, paranoia, media noise, attention-seeking behavior, and a good deal of fear.  The letter from “experts” elevating “generative” AI to the same threat level as nuclear weapons and the next bubonic plague has only exacerbated matters, and why not?  Fundamentally, we humans fear change, because change is the unknown, and the unknown can be dangerous.  Our amygdala looks at these technologies that make the future so uncertain and which we individually do not control or understand, and it assumes that ChatGPT is a saber-toothed tiger crouching in the tall grass.

Our capacity to reason means that we need not be slaves to the amygdala, the hippocampus, the primitive parts of our brain which respond with that colloquial concept of instinct rather than logic.  We can teach ourselves to tame the base impulses within us and account for them so that we see how they affect our reasoning.  That’s why we study critical thinking and logical fallacies.  It’s why we establish processes like the scientific method in order to elevate reason above the dense, incomprehensible nexus of uncertainties and impulses, and that is why I am writing this post.  Taking a middle-ground, unexcited position on the new AI technologies might not earn me many views or links, but my goal is the opposite of sensationalism.

Consider, first, what so-called generative AI technologies are not.  The headline is that they are not generative.  Despite the name, the technologies we call AI are fundamentally derivative in their functions.  They process enormous amounts of information and synthesize it in response to a prompt.  No creativity occurs, nothing new is formed; whatever appears as “new” is nothing but an amalgamation arising from what already exists.  The fact that it appears new is merely because we do not see the underlying algorithm in the result we are fed.  It would be more accurate to call them derivative AI technologies, but that would be far less attention-grabbing, wouldn’t it?  AIs like ChatGPT are nowhere near questions of computer consciousness, machine rights, or some kind of robotic revolution.  This is not the time for a debate about computer personhood; it is a time to talk about the implications of an incremental step forward in computing technology that will affect the ways in which computers are utilized.

Computers are powerful because they can iterate.  They can repeat a process over, and over, and over, and over, and over, and over, and over, and over, and over, and over, and over, and over, and over, and over, and over, and over again and again and again and again without interruption, without becoming bored, without losing track of a decimal or a negative sign, without leaving a possibility unexplored or making assumptions about the remaining results because of what has already been found.  That enables numerical analyses like those that form approximate solutions to the Navier-Stokes equations.  The same AI techniques that underpin ChatGPT-style AIs were employed, to far less consternation and drama, last year to iterate through new protein designs.  An algorithm programmed with the parameters by which proteins fold iterated through the possible permutations to discover proteins and functions that we had never developed or discovered ourselves.  Even here, though, nothing new was made: the algorithm was just iterating beyond the iterations we’d managed previously.

When you ask your question of one of these “generative” AI systems, it’s just going a step further than search engines were already capable of going.  Instead of a list of results based on your search terms, the AI can compare your search terms to similar searches others made, scan the contents of the results a traditional search engine would spit out at you, and compile a kind of “answer” based on what it reads.  It’s not inventing new information, and it’s not creating anything new.  Even when you ask it for a critique of your story or for a piece of artwork based on a prompt, it’s not creating anything new.  All it’s doing is scanning from a giant database of similar queries and source materials, and from there providing a recombination of what already exists.  It looks like generation, like original creation, because we don’t see all of the iterations that take place in the background.
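The retrieve-and-recombine behavior described above can be illustrated with a deliberately simple sketch.  This is not how any real AI system is built (the corpus, the word-overlap scoring, and the function names here are all invented for illustration); it only demonstrates the point that the output is assembled from material that already exists:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def derivative_answer(prompt: str, corpus: list[str], k: int = 1) -> str:
    """Rank stored snippets by similarity to the prompt and stitch the
    top k together.  Nothing new is generated; the 'answer' is a
    recombination of what is already in the corpus."""
    ranked = sorted(corpus, key=lambda doc: similarity(prompt, doc), reverse=True)
    return " ".join(ranked[:k])

# A toy corpus standing in for the vast training data of a real system.
corpus = [
    "Navier-Stokes solvers approximate fluid flow numerically.",
    "Protein folding can be explored by iterating over permutations.",
    "Search engines rank documents by matching query terms.",
]
print(derivative_answer("how do search engines match query terms", corpus))
```

Scaled up by many orders of magnitude, with far more sophisticated similarity measures and recombination steps, this is still the character of the process: iteration over what exists, hidden from the user, presented as a single polished result.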

Maybe this seems like splitting hairs to you, like mere semantics when you can see, right before your eyes, a computer giving you seemingly interactive feedback on your short story draft, or creating an apparently custom and original cover for your new novel.  After all, truly “generative” or not, these capabilities exist, and even if they’re not truly “creative,” they are close enough that they will change how the world works.  And there’s that word again, that terrifying word with fangs that drip with venom and legs that scuttle unnaturally in the night: change.  Jobs, for instance, will disappear because of this technology no matter how some random blogging author wants to explain the underlying function of the technology.

Maybe that’s true, and people have been fighting that kind of change for a long time.  The phrase “a wrench in the works” reputedly comes from early protests against automation in which upset workers literally threw wrenches into the machinery so that they could have their manual jobs back and not be replaced by a machine that could do the job faster, more consistently, and more cheaply.  This is nothing new, and I think people are more disturbed than usual by this particular technology’s impact on jobs because it could affect jobs people thought of as “safe” from the “dangers” of automation.  Well, I call that a lack of perspective.  Ever used PowerPoint?  The ability to easily create high-quality graphic presentations eliminated a slew of creative jobs an earlier generation thought were “safe” from automation.  Ever created a brochure in Publisher?  There used to be a whole industry dedicated to doing what anyone with a computer can now do with some degree of professional shine.  This is not new.

Don’t think that I’m unsympathetic just because I have an engineering job that can’t be automated with this version of AI.  Eventually, AI will be able to do my job.  It will make it so that anyone with the right software can sit down and design a satellite, zenith to nadir deck.  Maybe it already can, and it just hasn’t been tailored to the task.  I look forward to that day.  I eagerly anticipate it, because that’s the whole reason that we built computers in the first place and why we keep trying to improve them.  Take, for example, a science project I’ve been working on recently.  I built a device and needed to measure its output.  I could have done it by hand, taking imperfect, inconsistent measurements on my own time.  Instead, I wrote a computer program to collect the data for me.  When it’s collected, I’ll use another computer program to analyze the data and format it into a lovely graphic.  I could do all of that by hand, but it would take longer, be less effective, less useful, and a whole lot less legible.

Technology like AI doesn’t eliminate jobs.  It changes jobs.  It frees us up to do other things.  Sometimes that’s difficult – it’s change, again – but it will be worthwhile in the long run.  Ultimately, the best AI we have today is still our tool.  It works for us, not the other way around, and we will use it to enable our time to be spent more productively.  Being a scribe was once a job.  You could sit all day and copy the same page of text a hundred times, and copy the next page the next day.  Now, we can change a number on a computer program and print as many copies as we could want (assuming the printer works…).  If you think your job is in danger from AI, then it’s time to sit down and start thinking about how you can use that technology to make whatever you offer better.  AI doesn’t eliminate the need; it changes how the need can be filled.

The inevitable calls to regulate AI are born of the same fear of uncertainty.  It’s an attempt to put this nebulous unknown we’re afraid of into a safe box, a cage that will keep it away from us until we can force it into a familiar shape, until we can turn the saber-tooth into a housecat and keep it that way.  To effectively regulate something, though, there must be a clear goal, and a clear understanding of what is being regulated, neither of which yet exist.  This is a time for discussion and experimentation, not regulation.

I’m no prognosticator or television pundit to make predictions about what the future will hold, although I do dabble in science fiction, which is the same thing without the misplaced presumption of correctness.  This post is not about making prophecies or guessing what the world will look like in twenty years because of AI.  Rather, I want to lower the temperature.  I want to bring the discussion down from “AI could be an extinction level threat” to “AI is another step forward in computing capabilities.”  These algorithms and programs will change our relationship with computers, but that was already happening.  The sooner we can step back and think, instead of merely reacting, the better.

Really, that’s how we ensure this is a positive step.  No matter how we complain, or resist, or throw wrenches at our computer screens in infantile, ineffectual, regressive tantrums of impotence, these tools now exist, and they will only improve with time and effort.  AI at this stage is still merely a tool – we are nowhere near the point of AI consciousness, life, or personhood – and it is our tool to wield.  We could wield it on instinct, driven by fear, or we could take a deep breath, stop to think, and wield this powerful tool with finesse and discretion.

6 thoughts on “My Thoughts on Others’ Thoughts About AI”

  1. To an extent, I agree with you, but my personal gripe with AI is one I don’t see addressed here. These AIs are indeed derivative, and what are they derivative of? The material they get trained on… which often includes copyrighted works. Now, my position is that if the copyright holders gave permission for their work to be used this way (maybe even got paid for it), fine, let anybody do what they want with the AI that was trained that way.

    But right now, that’s not what’s happening. An automated factory might require fewer workers than when the process was manual, but those who create and operate machines for the automated factory still get paid. The factory can’t run without workers. AI in its current state can’t run without writers and artists. Those people should be paid.

    Imagine a world where very few buy books or art anymore because AI generates material that’s just as good for very cheap or free. I imagine that writers and artists who previously made a living from their creations will be forced to spend less time creating and more time doing whatever form of labor allows them to survive. Those who weren’t making a living from art or writing in the first place will stop trying to. This will drastically decrease the material AI can create new derivative works from, and the net result will be a negative impact on our culture.

    Generally speaking, I think regulations should exist to prevent negative impacts to our society. So, yes, I do support regulating AI. I hope some changes happen soon.


    1. Thank you for the insightful comment. I agree that the training material that the various algorithms are using is a cause for concern; however, that concern is new only in its scope, not in its nature. The concern that creative works could be copied without compensation to the original creator and thereby diminish the overall value of that form of creative expression has existed at least since the first printing presses, and the issue of copyrighted works being available on the internet without proper recompense or credit is not new. Just recently, I saw an article reporting that people are sharing entire pirated movies via TikTok one ultra-short snippet at a time. Is an AI system ingesting a pirated copy of a book, movie, or piece of visual art any different from an individual doing so and then making something based upon that work, except in scale? Not in my view.

      As for AI’s potential to saturate the “creative space,” I will also turn to a manufacturing analogy. Mass production, assembly lines, and international supply chains have all conspired to drastically reduce the costs and increase the availability of various consumer goods, from clothes to electronics to furniture. Doubtless, there are now fewer artisanal craftspeople than existed before the advent of mass production techniques, but they have not disappeared. Instead, they have found new niches, and the consumer benefits from access to both inexpensive, mass produced goods and high quality, artisanal products.

      I see the impact of AI on traditionally conceived creative endeavors as similar. Yes, AI has the potential to generate an almost limitless supply of material, be it books or visual art, at extremely low cost, such that individuals will not be able to compete on a cost basis. But won’t there be space for the skilled artisans of the written and visual worlds to compete on the basis of quality? I posit that people will continue to willingly pay for that quality and creativity that AI cannot yet attain. Perhaps not to the extent they do now, when there is no other option, but the market will exist, and so too will people to supply it.

      This will be a painful process. I do not deny that, and I agree that the issue of pirated content should be addressed, irrespective of AI’s use of that content. However, I do not think that AI as it currently exists will prevent new, high quality art from entering our culture, or the artists who produce that art from finding means to live from the fruits of their labors. And if, one day, AI can easily and immediately produce material of a quality such that people can no longer compete with it? Well, first of all, that seems to imply a degree of objective artistic taste, which may or may not exist, and, more importantly, it raises the question of why we should more highly value lower quality work just because it was produced by a human.


      1. Yes, I agree that pirating of content happens in other forms today, but just because it happens doesn’t make it ok. In other forms, like the TikTok example you gave, it is illegal, and there have been steps taken to prevent this sort of thing with each new technological advance I’ve seen. It used to be possible to upload entire songs to YouTube and make money off of them with very little fear. Now there are automated systems in place (imperfect systems, but still there) to make sure the money goes to the appropriate copyright holder instead. All I’m trying to say is that these advancements in AI require some similar advancements to ensure people are compensated fairly for the work they do, the content they produce, and the value it provides.

        I do not watch, listen to, or otherwise support pirated content. I won’t support AI until it raises its ethical standards (or is forced to) either.

        And, honestly, I think the economics will change entirely at that point. If AI companies have to pay writers and artists (and pay them fairly) in order to use their work, that will absolutely create new niches for writers and artists to make a living. It will also ensure fair competition because the AI companies will be forced to charge money for AI generated works they needed to pay money to produce. A fair system can be created, but I personally believe that it requires regulation.

        As for the issue of valuing lower quality work just because it was produced by a human, I never said we should. I believe you’re conflating my opinion with what you may have heard from others. My opinion is that if AI fails to pay the writers and artists it depends on, the quality of even AI generated works will decrease because there will be fewer books and works of art available for it to train itself from. I believe that failing to regulate this industry will create a worse outcome for everyone.

