Five (+1) AI Ideas from Five AI Leaders

We don’t really know how AI models work, and we may never fully understand them.

That was one of the recurring ideas across hours of interviews I watched with several AI company CEOs.

On its face, the inability of people building AI models to understand how those systems work seems scary. Rampant AI systems that we don’t understand are the sci-fi nightmare of an AI doomsayer. I think the reality is a little more nuanced.

Dario Amodei, CEO of Anthropic, suggested that there are limits to understanding any complex reasoning process. We don’t fully understand how humans or animals reason, and we may never fully understand how computers learn to reason. That doesn’t mean we shouldn’t try, but the AI industry needs to earn the public’s trust that it can control how an AI acts even if it can’t fully describe why the AI does what it does.

The inability of AI leaders to explain why models produce the outputs they do also points to an investment area: AI interpretability.

Whether interpretability emerges as a moat built by individual model companies or as a third-party service, it’s a significant value-add area in AI. AI models that are better understood are likely to be more desirable to customers than volatile systems that do unexpected things. Wherever value is created for customers, some of that value can be captured by the businesses creating it.

Aside from the idea that we don’t know how AI models work, and that figuring out how they do creates a competitive advantage, several other ideas stuck out to me from these interviews. Here are the top five ideas I took away from leaders in AI.

Using a GPU is a Sign of Failure

Groq CEO Jonathan Ross

Ross says that customers view using GPUs as a failure because it means they couldn’t figure out how to do the same thing on a CPU, where development would be easier. He makes the further point that GPUs weren’t built for AI tasks; it’s accidental that they happen to be OK at them, even though they’re not great. We just don’t have any better solutions right now.

 

Ross’s characterization of using GPUs as a sign of failure is one of the most interesting frames we could put on the AI landscape right now. NVIDIA dominates AI compute, and the strong consensus view is that they’ll maintain that dominance.

But what if there’s a better way?

In my AI mental model framework, I explained that winners in the compute element of technological paradigm shifts always win on improvements in efficiency. Ross’s point about GPUs being accidentally good at AI compute raises a question: What if a chip solution emerges that is purpose-built for AI tasks?

It seems that specificity is the only potential danger to NVIDIA’s dominance. Expect a lot of funding to support these solutions given the potential reward for dethroning, or at least sitting next to, NVIDIA. My firm, Deepwater, is bullish on the potential of compute-in-memory solutions like those being developed by Rain.

Causal Understanding is Missing from Systems to Get to AGI

Rain CTO Jack Kendall

There are many perspectives on how far away we are from artificial general intelligence (AGI), or human-level intelligence. Geoff Hinton thinks it might happen in 20 years. Jack Kendall thinks 50.

Kendall outlines three requirements he believes an AGI would need:

  • An AGI can’t just understand text. It must also understand vision, speech, and motor control.
  • An AGI needs to be able to apply prior experience to learn things faster.
  • An AGI must have a causal understanding of the data generation process.

It’s that last one, causal understanding, that seems like the biggest leap.

 

Today’s systems understand the world via statistics. AI models are basically just prediction machines that look at patterns to discern what should come next in some sequence. Kendall makes the point that DALL-E doesn’t understand anything about the images it creates, only that they make statistical sense given how the model was trained.
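To make the “prediction machine” framing concrete, here is a minimal sketch of the idea: a toy bigram word predictor in Python. The corpus and function names are purely illustrative, and real systems like DALL-E and GPT learn far richer statistics with neural networks, but the underlying move, predicting what comes next from patterns in training data, is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the most
# frequent successor. This is "look at patterns to discern what should
# come next" in its simplest possible form.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in the corpus."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', purely because 'cat' most often followed 'the'
```

The toy model has no idea what a cat is; it only knows that “cat” tends to follow “the” in the data it has seen. That is exactly the gap Kendall is pointing at: statistical understanding without a causal understanding of how the data was generated.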

Kendall’s three requirements of an AGI all point to one thing — more compute. Understanding more modalities, applying prior experience (more data in the context of the model), and causal understanding all demand more computing resources. That’s probably why Kendall is focused on developing AI-specific chips at Rain.

AI is Like a Really Talented Improv Actor

Character.ai CEO Noam Shazeer

Shazeer’s lighthearted characterization of AI today is very much a reflection of the company he’s building. Character.ai lets people create AI characters with different personalities, from Elon Musk to Saul Goodman. When asked to predict what breakout applications we might see from AI in the near term, he said he thinks it will be helpful for people who are lonely or depressed. An intelligent improv friend could be a great antidote to loneliness and depression.

But Shazeer’s improv analogy holds true beyond just using AI for character creation.

 

Improv is an act of rapid prediction: actors react to a real-time situation and guess what will entertain the audience next. Given our earlier description of AIs as statistical prediction tools, that’s what those systems are doing too. AI is improvising to create things it thinks the user will find valuable. Like the improv actor, sometimes the AI gets it right and sometimes it doesn’t. And like the improv actor, the more the AI gets it right, the more the audience comes back.

Search isn’t Dead, but the Shape of it is Changing

Anthropic CEO Dario Amodei

A literal trillion-dollar question in AI is whether it will somehow disrupt search. The shape of search is changing because machines can answer questions in direct ways that traditional search results can’t. In some cases, a direct answer is a better experience for consumers. If some new platform could replace Google in the minds of consumers, it would be a trillion-dollar company.

But therein lies the challenge — replacing Google in the minds of consumers.

Google is so tightly integrated with consumers that we barely notice it. It’s the default way many of us interface with the internet via the search bars in Android and Apple phones, in the Chrome and Safari browsers, and so on. Disrupting Google’s dominance in search requires disrupting Google’s vast distribution as well. That might prove harder than developing a better search experience driven by LLMs.

 

On the point of a better product, Dario makes another observation that seems to favor Google keeping its search dominance rather than losing it. The most valuable data for AI has two properties: it’s relevant to the specific situation, and it’s not available anywhere else.

Google has collected decades of data on human interaction with search queries. They probably understand human intent better than any other company on the planet. There is a danger that OpenAI or someone else replaces Google, but if Google can turn their data advantage into a better, AI-enhanced search product with superior distribution, they’ll be hard to beat.

Statistical Understanding of the World is Understanding

OpenAI Chief Scientist Ilya Sutskever

A frequent criticism of current AI models is that they don’t understand the world as humans do. Rather, AIs are just statistical prediction tools that can’t compete with humans.

Sutskever argues the distinction between machine and human understanding may be incorrect. Paraphrasing him from a few different interviews, to predict is to “understand.” One cannot make useful predictions without some form of understanding. For an AI model to make a correct prediction, it must understand something about the underlying reality that led to the current state. It could be argued that the machine has some level of understanding of how the world created the statistics it is dealing with. The machine may not be able to express that understanding the way a human would, but it understands something well enough to predict.

 

This is an important idea as investors consider the opportunities in AI. To believe we need human-equivalent systems for machines to provide massive value through intelligence is misguided. It may be the case that most of the investment returns from AI breakthroughs happen well before we ever get to AGI. Don’t wait for human-level understanding to make bets. Statistical understanding is enough.

Bonus idea from Sutskever: Hardware isn’t a current limitation. He said, sure, he wishes that hardware were cheaper or that memory-to-processor bandwidth were better (see the compute-in-memory solutions mentioned above), but hardware is not creating a limitation for OpenAI’s current approaches.

Conclusion

AI raises so many big questions. Even the experts have differing opinions on many of them, from the importance of AI understanding the world to the state of hardware. For all the possibilities of what AI might bring, there’s one thing I’m certain of: AI will create a tremendous collision between the change brought by paradigm-shifting technology and the stasis of human nature. When change and stasis collide, it creates persistent growth opportunities that bring extraordinary investment returns.
