AI Is the Paradigm Shift of the Next Decade

From OpenAI CEO Sam Altman

Join the community of 18,905 individuals who are cutting through the noise by subscribing here today:

Hey friends 👋 ,

Happy Wednesday and welcome to Through the Noise!

We've had some exponential growth over the last week with new people joining the party. Sunday Signal keeps getting better (I'm including an "Alex's Analysis" section this week by popular demand) and my deep fascination for AI is only growing stronger each day.

I came across an interview with Sam Altman, CEO of OpenAI, at AngelList Confidential a few weeks before they launched ChatGPT. Today, I'm distilling my insights from the conversation to save you 20 minutes.

Read time: 5 minutes

Today's Through the Noise is brought to you by 1440

One thing I seek out each day is signal in a world of information abundance. It's why I called this newsletter "Through the Noise". What if you could find a way of filtering the nonsense out of the news?

Every morning, 1440 sends a newsletter to over 2 million daily readers, edited to be as unbiased as humanly possible. It's a daily digest of all the most important info in culture, science, sports, politics, business, and everything in between. It's the fastest way to an informed and impartial point-of-view.​

Sign up for 1440 now and get your first five-minute read this minute. It's completely free: no catches, no nonsense, and absolutely no BS.

AI Is the Paradigm Shift of the Next Decade

DALL·E prompt: "a black and white sketch of the AI paradigm shift"

If you look at the Cambrian explosion of new things being built that are really delighting people and getting significant usage, it seems like AI is the platform that the industry has been waiting for.

AI is on-demand intelligence. It is way cheaper and way faster than what you can get from humans. What does this mean for us?

The role of humans

The cost of cognitive labour is heading toward zero. It will change the future of work. Altman goes on to say there are categories we want to remain expensive. People want wine to be expensive. They want certain art to be expensive. Luxury goods tied to status or exclusivity will remain undisplaced by AI.

There will also be categories where we want a human to be involved; we don't just want to talk to a computer. Humans are much better than AI at tasks involving creativity, flexibility, and common-sense reasoning.

The AI we're building right now is a very alien intelligence. Birds inspired us to build airplanes, yet airplanes work very differently from birds. There will be surprising strengths and weaknesses to both kinds of intelligence and, in turn, plenty of roles for humans.

What will the next 5 years look like as a programmer?

Altman expects AI to make you more and more efficient. Over time, it will learn to do more things with less supervision from you; eventually, it will get really good at the whole thing. For some tasks, this will happen over the next 5 years.

What is the limiting factor preventing us from doing it all?

OpenAI’s strategy for making progress: "We knock down whatever is in front of us". When we get stuck on something new, we go and do more research and figure that next thing out.

Everyone is saying AI is "cool", "it’s the new hot thing" and "it generates these images for me". People even say: maybe it’s going to do 90% of the world's cognitive labour at 1/1000th of the cost.

But if you actually stop and think about what that means, once AI learns how to do science, the rate of scientific discovery goes up by a factor of 1000.

Better humans or autonomous AI?

AI today is an amplifier for humans. It can assist humans in adding to the wealth of scientific knowledge. There’s another world where AI is adding to scientific knowledge on its own.

Altman doesn’t think it matters if it’s fully autonomous or if it’s helping humans. What actually matters is asking ourselves: “Is the pace at which scientific discovery is happening 1000x faster than the world today?”

If it is, that has breathtaking consequences for society, regardless of whether it's fully autonomous or not.

My take: Handing over control to the AI to ‘lead the charge’ for scientific discovery seems like a daunting notion. But actually, when we think about it, the person who has the most control is the one who controls the least. Using our human capability to train these systems to be autonomous will lead to a rate of change far greater than what we can achieve by simply being ‘augmented’ by AI. The human brain has a limit. When we hit that limit, we reach mental exhaustion and burn out. Machines can keep churning 24 hours a day, 365 days a year. Pair that with lower computing costs and faster processing times and you have a very clear pathway for what needs to be prioritised.

Altman hopes the cost of cognitive labour goes to near-zero. Maybe, for a bunch of other reasons, electricity gets super expensive, GPUs get super expensive, and there's a market price for intelligence that doesn't fall as fast as we'd hope. Altman doesn't think this will happen, but it could.

Assuming it doesn’t, think about how society has changed over the last few hundred years. For any new technology, there are limits to how fast society, institutions, and people can adapt to new processes and ways of thinking.

Imagine if all of the scientific progress we had since the beginning of the Enlightenment period until now could happen in a year.

This is an exponential curve that keeps getting steeper over the next 500 years. What if you were to compress that timeline, everything we'll do over the next 500 years, into a single year? It's chaotic and society has a hard time adapting, but:

  • We can cure all disease

  • We can travel to the stars

  • We have unlimited power

Who knows where the limit lies.

What is OpenAI doing to help society adapt to this rate of change?

The rate of change in AI is increasing year by year. If you look back over the last decade, society has had a hard time adapting. When you add all of the scientific knowledge over the past several hundred years into one year, things break.

With regard to helping our society, there have been lots of things that people have been talking about that Altman didn't want to rehash, including:

  • How do we re-skill people for new jobs?

  • How do we build resilience?

  • How do we build better judgement (discerning real vs fake content)?

Whilst all important, these are the 3 things that Altman believes are under-discussed:

1. How we share wealth

He believes the fundamental forces of capitalism will break down. As a solution, there could be some version of basic income or basic wealth redistribution. OpenAI are exploring this right now. People haven’t properly internalised what happens if the playing fields shift this much. People assume a small tweak in capitalism will work.

2. How access to these systems works

Altman believes the resource that matters most in a world with artificial general intelligence (AGI) is who gets to use it and what it gets used for; in other words, understanding how to provide access to a limited resource.

3. Governance

Who decides what you get to use the systems for? What are the rules? How do we reach global agreements and treaties? How do we agree on what this set of values is going to be?

We are currently not well prepared for these questions, and each will be important in the years to come.

How do we make sure governance doesn’t turn into “whoever has the biggest guns gets to control this system”?

What happens when we have something that threatens to upset the global equilibrium? There are a lot of possibilities. The basic assumption of capitalism starts to break down as we have AGI take off.

Let your wildest sci-fi dreams come to life for a second and consider this example:

Ask the AGI: “Okay AGI, start a new trillion-dollar company.” It figures out how to be CEO and raise capital; it does all the engineering, all the marketing, and all the distribution. You typed one thing into a prompt, pushed enter, and it did it.

How we think about a world like this is not yet obvious.

What are the quality of life impacts that we’ll see assuming we figure out the governance questions?

Society when we hit AGI and figure out governance

It’s all going to get a lot better, really quickly. Not always in the ways we think.

Before OpenAI launched Copilot, nobody believed models could write code that well. The lesson: when we see something useful that technology can do, we put it out there in the world and hope it keeps getting copied.

So the way we progress is by discovering new ways to use technology instead of imagining some crazy future.

How do you think about the value chain of startup creation with AI?

The most powerful models will be large. There will be a relatively small number of companies in the world that can train them. The value that is built on top of these with fine-tuning will be tremendous.

Once you have trained a good base model e.g., a general-purpose text model, you can train it to be an AI lawyer. But if you go out and train from scratch something that would do everything a first-year legal associate could do, it would be hard:

  1. The model wouldn't have learned basic reasoning

  2. It wouldn't have world knowledge

When you start with a model that knows everything and give it a little bit of data to push it in the direction of being a really good lawyer, that’s a much easier path.

  • Base model: I understand the whole world

  • Specific model: I understand the whole world + the nuances of the function of what I’m trying to achieve
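The intuition behind "start from a base model, then push it in the right direction" can be shown with a deliberately tiny toy. This is a sketch of my own, not anything OpenAI has published: a one-parameter model trained by gradient descent, where "fine-tuning" just means starting from a pretrained weight that already captures most of the structure, rather than from scratch.

```python
def train(w, data, lr=0.1, tol=1e-3, max_steps=10_000):
    """Fit a one-parameter model y = w * x by gradient descent.
    Returns (final_w, steps_taken_to_converge)."""
    for step in range(max_steps):
        # Mean gradient of squared error over the dataset
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        if abs(grad) < tol:
            return w, step
        w -= lr * grad
    return w, max_steps

# The specialised task: data generated by y = 3.1 * x
# (think: "first-year legal associate" behaviour)
task_data = [(x, 3.1 * x) for x in (0.5, 1.0, 1.5, 2.0)]

# Training from scratch: random/zero initialisation, far from the answer
w_scratch, steps_scratch = train(w=0.0, data=task_data)

# "Fine-tuning": the base model already learned y ≈ 3.0 * x from
# general data, so we only need a small nudge toward the task
w_finetuned, steps_finetuned = train(w=3.0, data=task_data)

print(steps_finetuned, steps_scratch)  # fine-tuning converges in fewer steps
```

The gap here is small because the model is trivial, but the shape of the argument is the one Altman makes: the pretrained starting point already encodes "basic reasoning and world knowledge", so the specialised data only has to supply the last nudge.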

Sam Altman’s advice for students or founders getting started today

Start doing anything in the field. Start building, just don’t miss out on this one.

The realisation of what this technology can do and probably will do in the coming years I find truly humbling.

Let me know what you think


You can find the full interview here.

A Little Something Extra

  • 🤝 Sponsorship: If your company would like to partner with the fastest-growing AI newsletter on planet earth, please fill out this partnership form.

That’s all for today friends!

As always feel free to reply to this email or reach out @thealexbanks as I’d love to hear your feedback.

Thanks for reading and I’ll catch you next time for some Sunday Signal.


If you liked this piece, subscribe below: