Key Takeaways

•   Artificial intelligence (AI) is a foundational technology that is supercharging other scientific fields and, like electricity and the internet, has the potential to transform societies, economies, and politics worldwide.

•   Despite rapid progress in the past several years, even the most advanced AI models still have many failure modes and vulnerabilities to cyberattack that are unpredictable, not widely appreciated, not easily fixed, and capable of leading to unintended consequences.

•   Nations are competing to shape the global rules and standards for AI, making interoperability, sizeable national compute resources, and international governance frameworks critical levers of geopolitical influence.


Overview

Artificial intelligence (AI) is the ability of computers to perform functions associated with the human brain, including perceiving, reasoning, learning, interacting, problem solving, and exercising creativity. AI promises to be a fundamental enabler of technological advancement and progress in many fields, arguably as important as electricity or the internet. In 2024, the Nobel Prizes for Physics and Chemistry were awarded for work intimately related to AI.

Three of the most important subfields of AI are computer vision, machine learning, and natural language processing. The boundaries between them are often fluid.

  • Computer vision enables machines to recognize and understand visual information, convert pictures and videos into data, and make decisions based on the results.

  • Machine learning (ML) enables computers to perform tasks without explicit instructions, often by generalizing from patterns in data. ML includes deep learning, which relies on multilayered artificial neural networks to model and understand complex relationships within data (a brief code sketch follows this list).

  • Natural language processing (NLP) equips machines with capabilities to understand, interpret, and produce spoken words and written texts.
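
To make the machine learning bullet concrete, the following is a minimal sketch, not taken from the report, of a small multilayered neural network learning a pattern from synthetic data rather than from explicit instructions. It assumes the PyTorch library; the network sizes and the data are purely illustrative.

# A tiny "deep" network: two hidden layers trained on synthetic data.
import torch
import torch.nn as nn

# Synthetic dataset: 256 examples with 10 features each, two classes.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()

# Multilayered artificial neural network: two hidden layers with nonlinearities.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Learning from patterns in the data: repeatedly adjust the weights to reduce error.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())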

Although AI draws on other subfields, it is mostly based on ML, which requires data and computing power, often on an enormous scale. Data can take various forms, including text, images, videos, sensor readings, and more. The quality and quantity of data play a crucial role in determining the performance and capabilities of AI models. Models may generate inaccurate or biased outcomes, especially in the absence of sufficient high-quality data. Furthermore, the hardware costs of training leading AI models are substantial. Currently, only a select number of large US companies have the resources to build cutting-edge models from scratch.

 

Key Developments

Dominating the AI conversation since late 2022 are foundation models, which are large-scale systems trained on very large volumes of diverse data. Such training endows them with broad capabilities, and they can apply knowledge learned in one context to a different context, making them more flexible and efficient than traditional task-specific models.

Large language models (LLMs) are the most familiar type of foundation model and are trained on very large amounts of text. LLMs are an example of generative AI: based on their training and the inputs they are given, they produce new material by making statistical predictions about which words are most likely to follow the words that came before.

These models generate linguistic output surprisingly similar to that of humans across a wide range of subjects, including computer code, poetry, legal case summaries, and medical advice. Specialized foundation models have also been developed in other modalities such as audio, video, and images.
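
As a rough illustration of this statistical next-word prediction, the sketch below asks a small pretrained language model for the most probable continuations of a prompt. It assumes the Hugging Face transformers library and the GPT-2 model, neither of which is named in the report.

# Inspect an LLM's next-token probability distribution for a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The quick brown fox jumps over the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the final position's scores into probabilities and show the top five continuations.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>15}  p={p.item():.3f}")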

Taking full advantage of AI will require managing the risks associated with the technology, some of which include:

  • Explainability. Today’s AI is for the most part incapable of explaining how it arrives at a specific conclusion.

  • Bias and fairness. ML models are trained on existing datasets, which means that any bias in the data can skew results (a toy illustration appears after this list).

  • Deepfakes. AI provides the capability for generating highly realistic but entirely inauthentic audio and video, with concerning implications for courtroom evidence and political deception.

  • Hallucinations. AI models can generate results or answers that seem plausible but are completely made up, incorrect, or both.
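
As a toy illustration of how bias in training data can skew results, the sketch below trains a simple classifier on data in which one invented group is heavily underrepresented; the model then performs noticeably worse on that group. It assumes NumPy and scikit-learn, and every group, feature, and sample size is fabricated for illustration.

# Underrepresentation in training data leading to unequal model performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has a different feature distribution and decision rule.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples: accuracy is typically far lower for group B.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(500, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))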

 

Over the Horizon

AI agents are AI-based software entities that execute tasks, such as setting people’s daily agendas and coordinating software tools, with minimal human input and oversight. However, present-day AI agents face major limitations, such as reliability issues and an inability to communicate with one another.
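
The following is a bare-bones, self-contained sketch of the sense-plan-act loop behind such agents. A stub planner and stub tool calls stand in for the language model and external services a real agent would use, so nothing here reflects any particular agent framework.

# Minimal agent loop: observe, choose an action, act, repeat until done.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def plan(self, observation: str) -> str:
        # Stub planner: a real agent would query an LLM with its goal,
        # memory, and latest observation to choose the next action.
        steps = ["check calendar", "draft agenda", "send agenda"]
        remaining = [s for s in steps if s not in self.memory]
        return remaining[0] if remaining else "done"

    def act(self, action: str) -> str:
        # Stub tool call: a real agent would invoke calendar or email APIs here.
        self.memory.append(action)
        return f"completed: {action}"

agent = Agent(goal="set today's agenda")
observation = "start of day"
while True:
    action = agent.plan(observation)
    if action == "done":  # the loop stops itself; a human need only review the result
        break
    observation = agent.act(action)
    print(observation)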

Embodied AI means AI integrated into robots or other physical devices that are able to sense and act in the physical world, thus expanding the range of interactions robots have with that world. More advanced systems combining robots and AI could lead to applications in various fields such as logistics and domestic assistance.

POLICY ISSUES

AI and Jobs

A major challenge posed by AI involves the future of human work. AI models have already demonstrated how they can be used in a wide variety of fields, including law, customer support, coding, and journalism. This has led to concerns that AI’s impact on employment will be substantial, especially on jobs that involve knowledge work. In some cases, the technology will help workers to increase their productivity and job satisfaction; in others, AI will lead to job losses—and it is not yet clear what new jobs, if any, will arise to take their place.

Governance of AI

Over the past couple of years, nations have explored various possible regimes for governing the technology. In the United States, the Trump administration has taken executive action to promote innovation and leadership by eliminating previous executive restrictions and requirements that had been placed on AI. The administration also set forth America’s AI Action Plan to “accelerate innovation, build American AI infrastructure, and lead in international diplomacy and security.” The plan faces challenges, however, including reconciling its goals with concurrent proposals to reduce broader scientific research funding. US states are also experimenting with their own AI legislation, often proposing requirements that go well beyond federal guidance.

AI Talent

Talent remains a critical policy issue. A growing number of AI graduates are joining industry, particularly start-ups, shrinking the pool of researchers contributing to foundational AI research. The United States is thus experiencing an AI “brain drain” that does not favor the future of the US research enterprise or its innovation capacity.

AI and Geopolitical Competition

The technological race between the United States and China regarding AI is intensifying. China is aggressively pushing existing AI capabilities into every sector—from education to manufacturing to government—aiming to lock in large-scale network advantages at home and abroad. In response to these and other efforts, the United States is seeking to contain China’s growing technological prowess by using tools such as export controls on technologies that would facilitate Chinese advancement.

 

Report Preview: Artificial Intelligence

Faculty Council Advisor

Fei-Fei Li

Fei-Fei Li is the Sequoia Professor of Computer Science and professor, by courtesy, of psychology at Stanford University. She serves as codirector of Stanford’s Human-Centered AI Institute and as an affiliated faculty member at Stanford Bio-X. Her current research includes cognitively inspired AI, machine learning, computer vision, and ambient intelligent systems for health-care delivery. She received her PhD in electrical engineering from the California Institute of Technology.


Access the Complete Report

Read the complete report.
