
Neural Network Effects: Scaling and Market Structure in Artificial Intelligence


As artificial intelligence reshapes our economy, policymakers must act swiftly to prevent a winner-take-all scenario in the rapidly evolving market for AI foundation models.

In the span of just two years, AI systems like ChatGPT, Claude, and Gemini have become powerful tools for knowledge workers, demonstrating capabilities that were once the realm of science fiction. The AI models powering these systems, known as foundation models, are already reshaping industries and sparking fierce competition among tech giants and startups alike. But as the dust begins to settle, a crucial question emerges: Will the market for AI follow the path of digital platforms, ultimately concentrating power in the hands of a few dominant players?

In a new INET Working Paper, “Concentrating Intelligence: Scaling and Market Structure in Artificial Intelligence,” coauthored with Jai Vipra, we examine the evolving structure and competition dynamics of the rapidly growing market for foundation models. Our findings paint a picture of an industry at a crossroads, with significant implications for the future of innovation, economic power, and societal well-being.

The AI Arms Race: A Snapshot of Fierce Competition

As of October 2024, the landscape for frontier AI models is remarkably dynamic. No fewer than 14 different companies have produced models surpassing the capabilities of the original GPT-4, according to popular benchmarks. They include tech giants like Google DeepMind and Meta, smaller labs like OpenAI and Anthropic, and new entrants like Elon Musk’s xAI.

The pace of innovation in the field is breathtaking. OpenAI, for instance, has released seven model updates since the first version of GPT-4 went public in March 2023, the latest being o1, a new model capable of advanced reasoning. These updates have not only improved the quality of responses but also increased processing speed more than 3-fold and expanded the amount of text the models can handle at once by a factor of 16, all while reducing the cost of generating output by 92%.

The fierce competition has kept prices low, with some observers noting that the leading AI labs are barely covering their variable costs. The dynamics resemble textbook Bertrand competition, in which firms selling near-identical products compete on price and drive prices down toward marginal cost.
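To make the textbook logic concrete (this is the standard Bertrand argument, not a result specific to our paper): when two firms sell an effectively interchangeable product at the same constant marginal cost c, whichever firm posts the lower price captures the whole market, so each has an incentive to undercut the other until neither can profitably go lower. In equilibrium,

\[ p_1^{*} = p_2^{*} = c, \qquad \pi_1 = \pi_2 = 0. \]

Any price above c invites undercutting; any price below c loses money on every sale. In the AI context, c maps loosely onto the cost of inference, which is consistent with the observation that API prices sit close to variable cost.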

The Seeds of Concentration: Economies of Scale and Scope

However, beneath this competitive frenzy lie economic forces that could push the industry towards significant market concentration: the production of foundation models exhibits massive economies of scale and scope. One important driver is the growing fixed cost of pre-training frontier AI models, which now runs to hundreds of millions of dollars, with many projections suggesting billion-dollar price tags within the next few years. Once a model is trained, the cost of operating it (inference) is relatively low. There are also significant economies of scope: a single foundation model can be adapted for a wide range of applications across different industries, from copyediting to coding to healthcare diagnostics.
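A simple cost function illustrates the scale economies (an illustrative formulation with hypothetical magnitudes, not estimates from the paper). If pre-training is a one-time fixed cost F and serving each query costs a roughly constant amount c, then the average cost of a query is

\[ AC(q) = \frac{F}{q} + c, \]

which falls toward c as the volume of queries q grows. With F in the hundreds of millions of dollars and c only a small per-query amount, a lab serving ten times more traffic than a rival spreads the same fixed cost over ten times the output, a cost advantage a smaller entrant cannot match without comparable volume.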

These characteristics create substantial first-mover advantages. Early leaders like OpenAI and Google DeepMind have not only gained technological expertise but have also locked up scarce assets crucial for AI development – vast amounts of compute resources, proprietary datasets, and top-tier AI talent.

The Bottleneck Triad: Compute, Data, and Talent

Three key inputs are shaping the competitive landscape:

Compute: The computational resources required to train frontier models are growing exponentially. Our research shows that AI labs have increased the compute used to train frontier models by a factor of about 4.1 per year over the past 15 years; a back-of-the-envelope calculation below shows what this compounds to. This trend is expected to continue for at least another 3-5 years, potentially longer.

Data: High-quality training data is becoming scarce. We’re approaching the limits of publicly available text on the internet, making proprietary datasets increasingly valuable. This gives an edge to large tech companies that control vast amounts of user-generated content.

Talent: The pool of researchers and engineers capable of building cutting-edge AI systems is limited. Competition for this talent is fierce, driving up costs and creating another barrier to entry.
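To put the compute trend in perspective, a rough compounding calculation (our illustration; the 4.1x annual growth rate is the only input, taken from the compute paragraph above):

\[ 4.1^{15} \approx 1.6 \times 10^{9}, \]

so fifteen years of growth at that rate amounts to an increase on the order of a billionfold in the compute devoted to training a frontier model, which is why the fixed cost of staying at the frontier keeps climbing so steeply.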

The Specter of Market Tipping

Given these dynamics, there’s a real risk that the market for foundation models could “tip” towards monopoly or oligopoly, much as we saw with digital platforms in the early 2010s. While network effects are less pronounced for AI models than for social media platforms, other forces could drive concentration. One is the data feedback loop: better models attract more users, generating more data, which in turn improves the models, a virtuous cycle for incumbents. Another is user inertia: once users become accustomed to a particular AI system, switching costs (both monetary and in terms of learning curve) can create lock-in. Finally, there is the potential for what we term “intelligence feedback loops,” which arise as AI systems become more capable: leading firms can use their most advanced systems to accelerate the development of the next generation, and so on, potentially allowing a leading lab to pull far ahead of its competitors.
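A stylized recurrence shows why such a loop could be decisive (our illustrative sketch, not a formal model from the paper). Suppose each lab’s capability q grows at a rate that increases with its current capability, because more capable systems help build their successors:

\[ q_{t+1}^{i} = \bigl(1 + g(q_{t}^{i})\bigr)\, q_{t}^{i}, \qquad g'(\cdot) > 0. \]

A lab that starts even slightly ahead then improves faster than its rivals, so the gap widens over time rather than closing, which is precisely the tipping dynamic described above.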

The Dangers of Vertical Integration

A second trend that may be concerning from a competition perspective is the increasing vertical integration between AI labs and large tech companies. Microsoft’s deep partnership with OpenAI, Google’s merger of DeepMind with its internal AI efforts, and Amazon’s ties to Anthropic all raise concerns. This integration could lead to the foreclosure of essential inputs: tech giants’ control of key resources like cloud computing or proprietary data could limit access for rival AI developers. Moreover, companies dominant in one sector (e.g., cloud services) might leverage that market power to gain an unfair advantage in the new market for AI. A further implication is that with fewer independent players there is less diversity in approaches to AI development, and likely less surplus for consumers.

Policy Imperatives: Fostering Competition, Innovation, and Safety

To prevent excessive concentration in the AI market and ensure its benefits are widely shared, policymakers have several options:

  • Promote Data Sharing: Mandating access to training data for all market participants could level the playing field. This may require rethinking some data privacy regulations that inadvertently reinforce the dominance of data-rich incumbents.
  • Encourage Interoperability: Common API standards and reduced switching costs can prevent user lock-in.
  • Support Open-Source AI: While balancing safety concerns, promoting open-source AI development for systems deemed safe can counteract the concentration of capabilities in a few dominant players.
  • Scrutinize Vertical Integration: Antitrust authorities must closely examine partnerships and acquisitions in the AI space, even when they fall short of traditional merger thresholds.
  • Invest in Public AI Infrastructure: Government-funded compute resources and research could help level the playing field for smaller players and academic institutions.
  • Address Safety Concerns: As AI systems become more powerful, safety considerations may necessitate some degree of centralized control. Policymakers must find ways to balance safety with the benefits of a competitive market.

The Stakes: Power, Progress, and Prosperity

The decisions we make now about the governance of AI will have profound implications for the future of our economy and society. If we allow the market for foundation models to become overly concentrated, we risk creating unprecedented accumulations of economic and political power. On the other hand, fostering a diverse ecosystem of AI providers could drive innovation, ensure that the benefits of AI are widely shared, and maintain important checks and balances on the power of these transformative technologies. The challenge for policymakers is to strike the right balance – promoting competition and innovation while also addressing concerns about AI safety and misuse. This will require unprecedented collaboration between technologists, economists, ethicists, and policymakers.

As we stand at this technological inflection point, one thing is clear: given that this new technology creates natural forces towards concentration and is rife with externalities, the invisible hand of the market alone will not be sufficient to guide the development of AI in a direction that maximizes societal benefits. Thoughtful, proactive policies are essential to ensure that the era of advanced AI is one of broadly shared prosperity, aligned with our collective values and aspirations for a more equitable world. The race to develop transformative AI capabilities is well underway. The question now is whether we can shape the rules of that race to benefit all of humanity, not just a select few at the finish line.
