Who Decides If AI Is Safe?


AI safety is complicated and bizarre.

Can we admit that sentence is a bit terrifying?

You know that I’m a huge AI optimist - but I’m also a food optimist and a car optimist. I have high hopes that food and cars are making our lives better, but I still want the FDA keeping carcinogens out of my peanut butter and regulations to ensure my brakes aren’t made out of tree bark.

Most products and tools are fairly straightforward to regulate. It's usually black and white when something benefits the consumer and when it doesn't.

AI though… that’s hazy.

Why?

Because generative AI is, at the moment, relatively harmless. (Relatively! Don’t @ me about bias in the system, I know about that, I’m saying nobody’s died or been maimed or anything.)

Think about it - genAI has been in headlines for over a year, and little if any real damage has been done. It’s been, overwhelmingly, just sort of a helper.

But there is no question that we are barreling toward a wild future that nobody is able to confidently predict - not even the folks who are creating this technology.

And that’s not a distant future either.

If you listen to Sam Altman (OpenAI) or Dario Amodei (Anthropic), you'll learn that we are perhaps 2-4 years away from what they call AGI. That's artificial general intelligence - the point at which AI becomes capable enough to reason and act on its own.

This is not something to take lightly. This is something we have to plan for.

And nobody really knows how to do it.

Before you start hoarding canned goods, let's take a step back and really think about how to think about all this.

THE OPTIMIST'S DILEMMA

As an eternal optimist, I'm a firm believer that AI is going to make the world a radically better place.

But here's the catch:

The first time something goes wrong (and let's be real, something always goes wrong), we're going to see a mass freak-out that makes the meltdown over self-driving car accidents look like a fender-bender…even though self-driving cars are statistically safer than having a human behind the wheel.

So the question quickly becomes:

What would "something going wrong" with AI even look like?

And perhaps more importantly, who gets to decide what "wrong" is in the first place?

Because as we will see, there’s a lot of vested interest in defining “wrong.”

THE RULES ARE STILL BEING WRITTEN (OR ARE THEY?)

We've got plenty of rules and regulations around all sorts of risks.

We've got arms treaties, product safety tests, the FDA - you name it, there's probably a thick binder of rules for it somewhere. But when it comes to AI? The government's rulebook is, shall we say, a bit more on the "work in progress" side.

Just recently, the Department of Homeland Security (DHS) announced the establishment of the Artificial Intelligence Safety and Security Board. This 22-member board, born out of President Biden's October 2023 AI executive order, is packed with tech industry titans like Sam Altman (OpenAI), Satya Nadella (Microsoft), and Sundar Pichai (Alphabet).

Their mission is to provide sage recommendations on the safe development and deployment of AI across our critical infrastructure.

Now, everyone is irritated by the makeup of this board, for various reasons. Let's not worry about that jar of pickles just yet. (Just made up that expression. I'm already obsessed with it.)

More importantly, the government really does seem to be trying here. They're trying to balance safety with not slowing down ingenuity and innovation. There are differences of opinion between the US and Europe (you can imagine).

They know that they can't really slow down if other countries (cough China cough) are still going full steam ahead.

But this is the question:

Is it the government that's actually going to decide this?

Outside of this freshly-minted board and all the white papers and regulatory proclamations, the fate of AI safety seems to rest largely in the hands of a select few individuals.

And I mean that literally - not like “the global economy is in the hands of a few people” which is sort of true but not exactly.

I mean this in the way that there are only a few mega companies that are building mega things, and inside those companies are the people who can make and understand the products that could be a literal existential risk to humanity.

We're talking about the Sam Altmans, the Dario Amodeis (of Anthropic), the Mustafa Suleymans (now of Microsoft), the Demis Hassabises (Google DeepMind), and even the Zucks and Elons of the world (when they're not hydrofoiling or whatever).

THE VISION VS. THE REALITY

This is where it gets really interesting.

You know how when you’re young and breezy and you have all these great ideals and you’re all like “I’m never gonna work for The Man!” or whatever?

I feel like we have a bit of that with AI.

And AI is quickly growing up, getting a corporate job, getting a taste for those fancy restaurants - and now all of a sudden it has a huge mortgage and golden handcuffs.

Here’s what I mean:

Let's rewind the clock a bit. When these AI companies were first dreamt up, the vision was lofty and idealistic:

If this tech becomes too powerful, too unwieldy, we'll simply hand over the reins for the greater good of the public.

OpenAI and Anthropic literally built their governing bodies with this in mind.

This clash of vision and reality has led to some pretty wild board structures cropping up at places like OpenAI and Anthropic. Structures that very nearly brought the OpenAI empire crashing down. At one point, just two OpenAI board members, Helen Toner and Tasha McCauley, held the power to send the whole thing tumbling like a house of cards.

Anthropic has a similar body, the Long-Term Benefit Trust: five members with no fiduciary duty to shareholders, tasked solely with deciding whether the work toward AGI should continue or whether it's getting too powerful.

Now, that's all well and good when you're young and AGI feels a million years away.

But now Amazon is into you for billions - and you think they want to slow anything down?

And OpenAI, with that lofty board structure - Microsoft now reportedly holds a 49% stake in its for-profit arm.

Get this:

OpenAI has stated that with Microsoft's multibillion-dollar investment, there's a clause that excludes Microsoft from controlling OpenAI's artificial general intelligence (AGI) technology, should they achieve that mind-blowing milestone.

You catch that?

OpenAI's nonprofit board will be the ones to determine when they've reached AGI, not Microsoft. That means that the pursuit of AGI will remain in the hands of an organization bound to the public good, not a profit-driven tech giant.

But do you think Microsoft is going to give all of this up right when the technology is at its peak?

I am - to put it gently - skeptical.

THE HAZY TIMELINE OF THE AI ENDGAME

In a recent interview with Ezra Klein, Anthropic's Dario Amodei dropped a bit of a bombshell: We're a mere 3-4 years away from the AI endgame.

Let that sink in for a moment. Think about what you were doing back in 2021. Doesn't feel like that long ago, does it? Now, I may be an optimist, but I'm also a realist.

The path forward is going to be messy, confusing, and filled with more twists and turns than the roller coasters at the American Dream Mall (just visited, sorry - it's top of mind).

We're not going to have a single, cinematic "eureka!" moment like we did with the splitting of the atom. The arrival of AGI is likely to be a gradual, hotly contested process filled with endless debates and pontificating think pieces.

Keep an eye on this one, friends. It’s gonna affect all of us.


AI NEWS OF THE WEEK

  1. OpenAI to Launch a Search Engine?

    OpenAI may launch a search engine, possibly as early as May 9. Unconfirmed, but there's some evidence: OpenAI dropped the login requirement for ChatGPT, Perplexity hit unicorn status, and Google I/O is coming up. Lotta interesting stuff here...

  2. Rabbit R1 “barely reviewable”

    I’m a fan of new tech. I love the big swings. But I’ve yet to find many folks who find the Rabbit R1 to be a useful piece of hardware. My fear is that it doesn’t do what it claims. And I support tech, but that would be really frustrating.

  3. ChatGPT - now with memory

    ChatGPT has always been a bit weird with conversations. It remembers nothing from one conversation to the next, but past chats remain cryogenically frozen - you can always go back to them. Well, they're changing that with a new memory feature. Will be interesting to see how it works…


Generative AI Tips

One of my favorite things to show audiences is ChatGPT’s voice function. It’s absolutely uncanny, and in a way it’s the big unlock moment for people to feel like they are talking to AI. My favorite use cases are things like hard conversations - you can role play with ChatGPT - but it’s also outstanding for practicing sales calls, negotiations, or giving feedback.

I also use it as a brainstorming friend in the car - it tracks your entire conversation for later.

Pick any one of five voices and have at it. It feels like sci-fi.
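
If you're more of a builder and want to tinker with the role-play idea outside the ChatGPT app, here's a minimal sketch using OpenAI's Python library. The model name, scenario, and prompts are my own assumptions - swap in whatever model and situation you like.

```python
# A rough sketch of the "role-play a hard conversation" tip, done through
# OpenAI's Python library instead of the ChatGPT app. The model name,
# scenario, and prompts are assumptions - substitute your own.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

messages = [
    {
        "role": "system",
        "content": (
            "You are role-playing a skeptical prospect on a sales call. "
            "Push back on price, ask pointed questions, and stay in character."
        ),
    },
    {
        "role": "user",
        "content": "Thanks for taking the call - can I walk you through our pricing?",
    },
]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumption: use whichever model you have access to
    messages=messages,
)

# Print the role-played reply; append it to `messages` and add your next
# line to keep the practice conversation going turn by turn.
print(response.choices[0].message.content)
```

The voice mode in the ChatGPT app gets you the same effect hands-free; the scripted version is just handy if you want to log transcripts of your practice runs.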

