Yury Molodtsov

COO and Partner @ MA Family where we help tech companies get into the news


Why AI Doomerism is Flawed and Misguided

The Internet favors simple opinions, meaning we’re stuck between AI doomers and e/acc people. And yet the most urgent and interesting questions relate not to AI’s potential capacity to kill us all, but to far more mundane things.

November 23, 2023

Most politicians are populists, which is the natural result of the basic rule of communications: the larger your audience is, the simpler your message should be. And with politicians specifically, there’s a straightforward line from the size of their following to their actual power.

Politicians are incentivized to find a slogan that resonates with people and run with it regardless of their personal opinion on the matter. As an aside, I doubt certain US Republicans have a significant issue with LGBT people themselves, but they adopt the message that works for the people who can vote them in (or out), however cynical that might sound.

The same is happening to AI. And it’s accelerated by AI becoming a major trend covered by practically every media publication. Not only do most have dedicated AI writers, but entire outlets (e.g. VentureBeat) have largely shifted their focus to AI.

And sometimes, they’re just broadcasting the opinions of crazy people like Eliezer Yudkowsky, who isn’t an expert in any modern area of AI. He’s a self-proclaimed expert on “AI safety”, which seemingly means regurgitating ideas from sci-fi novels with a particular outlook to convince everyone we’re doomed. Balanced opinions are complicated, and slogans are easy, which is why both doomers and AI boosters (i.e. the “e/acc” movement) dominate the discourse, even though all practical discussions sit somewhere in the middle (and should be happening on a completely different plane).

Whether Biden or Sunak actually believes AI is dangerous is irrelevant (although Biden might indeed have been spooked by an underperforming action movie). If politicians think doing something publicly will help them score, they’ll do it. OpenAI, Anthropic, Google, and all the others will be the first to tell you they support strict regulation of AI, precisely because it would help cement their hard-earned positions at the forefront of the industry.

There are copyright and safety questions, but all practical concerns fall into a completely different bucket than what the pundits talk about. Generative models won’t drop a nuke on Los Angeles (“The Creator” is a great movie, though), nor will they automate everyone’s job. But they revive the same old problem of algorithm-based decision-making and pose a new one that doesn’t map well onto our current copyright laws (in both letter and spirit).

Safety

Algorithms have been used to make decisions affecting people’s lives for decades. Sometimes they’re basic; sometimes they’re pretty advanced. Credit scores, mortgage assessments, and background checks aren’t done manually; they’re run by algorithms.

The actual problem worth discussing is that AI might ingest inherent bias from its training data and negatively affect someone’s life while obfuscating the process.

The old OCR algorithms used to recognize printed text were highly complicated and cumbersome, relying on computer vision and advanced math. AI solves such specific, complex problems differently, abstracting away the solution when it’s too complicated to build by hand. We use data to train more-or-less generic networks and then run primitive arithmetic operations many times, ultimately getting good results we wouldn’t achieve through classic algorithms.
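To make “primitive arithmetic operations many times” concrete, here’s a minimal sketch (in Python with NumPy, chosen purely for illustration) of a trained network’s forward pass. The toy network and its random weights are hypothetical; the point is that every step is just multiplication, addition, and clamping, with all the learned behavior sitting in the numbers rather than in any inspectable logic.

```python
import numpy as np

def relu(x):
    # A primitive elementwise operation: clamp negatives to zero.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Run an input through a stack of (weights, bias) pairs.

    Every step is plain arithmetic: multiply, add, clamp. All the
    "knowledge" lives in the numbers stored inside `layers`.
    """
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Hypothetical toy network: 4 inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 1)), np.zeros(1)),
]
print(forward(np.array([1.0, 0.5, -0.2, 0.3]), layers))
```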

Because of this, most modern AI networks are essentially black boxes. You know the input and the output, but you don’t see how the decision was made, and it’s challenging to dissect the model while it’s “thinking” to understand why.

So, if a financial institution uses AI for credit checks and fraud assessments, one could imagine flawed training data leading it to discriminate by race or other parameters. And since it’s a black box, only long-term statistical observation would reveal the problem, and even then, the people in charge would most likely claim the discrepancies are accidental.
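As a sketch of what such statistical observation could look like, here’s a hypothetical outcome audit in Python. We can’t open the black box, but we can compare its approval rates across groups; the 80% “four-fifths rule” threshold below is just one common fairness heuristic, not something prescribed by the scenario above, and the data is made up.

```python
import numpy as np

def approval_rates(decisions, groups):
    """Approval rate per demographic group for a black-box model.

    We can't inspect the model's reasoning, only audit its outputs:
    decisions[i] is 1 if applicant i was approved, 0 otherwise, and
    groups[i] labels the group the applicant belongs to.
    """
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

# Hypothetical audit data collected over time.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = approval_rates(decisions, groups)
best = max(rates.values())
for g, rate in rates.items():
    # Four-fifths rule heuristic: flag groups approved at less
    # than 80% of the most-favored group's rate.
    status = "flagged" if rate < 0.8 * best else "ok"
    print(f"group {g}: approval rate {rate:.0%} ({status})")
```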

This problem is more tedious, but it’s also far more real. AI won’t discriminate out of malice; it’s emotionless and doesn’t have a goal. The malevolent machine is a popular trope in sci-fi, but I don’t believe Artificial General Intelligence could be a cold, emotionless beast. Emotions give purpose, which is why ChatGPT doesn’t think anything until you tell it to. It’s simply not there yet. What we have now is way more basic.

Copyright

Neither large language models nor generative image networks fit well into our copyright frameworks. The line between plagiarism and inspiration has always been blurry. All human artists, including writers and painters, learn by observing the works that came before them and then produce something of their own.

Vanilla Ice used the bass riff from “Under Pressure”, originally composed by Queen and David Bowie. He claimed he’d added a beat between the notes, making his melody different from the original. It was ultimately decided that this was not the case, and he had to pay up, but only because the borrowing was too transparent. Musicians get inspired by others’ music all the time.

Generative networks aren’t too different from humans in this regard. But since this is machine intelligence, they can do it far better and at scale, giving everyone a quick and easy way to create pieces that might copy someone’s material too directly.

Writers like George R.R. Martin claim that OpenAI’s GPT-4 was trained on their material, which could very well be correct. Before, we had a simple delineation: a writer consumes existing material and hopefully uses it to write their own A Song of Ice and Fire. Nobody just starts writing one day without ever reading a single word imagined by others.

If you use others’ material too directly, we call it fan fiction and mostly don’t pursue it, as long as you aren’t trying to sell it. Once you do, it’s illegal.

But the ultimate use case of GPT-4 is not retelling some author’s book; it’s writing new stories at a user’s command. Yet the reason it’s capable of doing so is all the writing it consumed during training. So, does that break copyright?

Generative networks are stuck somewhere between humans and plain computers, and until we develop new frameworks and the common-law systems in the US and the UK set precedents, nobody is sure what to do.

***

Now, the emergence of generative networks might have already affected the job market globally. As usually happens with technology, the biggest threat is to low-level contractors, such as people who accept $5 orders on Upwork because they’re based in cheaper (and poorer) countries.

The scandal around the intro to Marvel’s Secret Invasion is a perfect illustration. The VFX people working on it purposely used poorly-made AI-generated images because their Uncanny Valley quality fit the story. Half of the Internet reacted as if the producers had fired everyone and gone off to write the prompts themselves (and manually done all the VFX on top to make the animation, I guess).

In fact, actual commercial writers, designers, and other content creators were the first to start using tools like Stable Diffusion and MidJourney in their work. A game development company I know uses these tools to accelerate its image pipeline, specifically helping designers spend less time on mundane things. For them, AI hasn’t replaced a single worker; it simply lets people produce more content and, most importantly, raises the floor of overall quality.

Sometimes, it simply acts as a reference. I’ve heard a musician say he now uses MidJourney to show his designer what he wants for his title art. And I myself use it to produce covers for my Spotify mixtapes. Again, technology always raises the floor, and MidJourney, along with other tools, allows people who can’t draw at all to turn their fantasies into reality (well, into a real image).

You probably do need to think about AI when making life-long choices. My aunt recently told me her friend’s daughter had just enrolled in a university to become a translator. Google Translate was shit ten years ago, got pretty good after that, and then was blown away by GPT-4. Does this mean we won’t need translators soon? No, but there will surely be fewer jobs, and they will primarily focus on high-value tasks, such as adaptations of movies, popular books, etc. Ultimately, this means fewer translators and less money for them.

What happens when AI goes upmarket and captures way more? We’ll probably have to deal with this by finding a balance. On the one hand, if we don’t automate things, the economy won’t grow, and everyone who is poor stays poor. On the other, as a society, we sometimes make decisions that might be suboptimal for the greater good. We’ll see what happens here, but even if one country regulates AI out of existence, others might embrace it.

Technology has replaced thousands of professions while creating new ones. We no longer have human “calculators” or telephone operators. Caution and the desire to preserve living standards for some people are welcome, but imagine if we had used the same approach a hundred years ago. We’d probably still have actual humans running elevators.

