AI predictions
 
* Epoch AI: [https://epoch.ai/trends Machine Learning Trends]
* AI Digest: [https://theaidigest.org/progress-and-dangers How fast is AI improving?]
* 2025-06: [https://80000hours.org/agi/guide/when-will-agi-arrive/ The case for AGI by 2030]
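Trend trackers like those above usually report growth as a multiplier per year, which compounds quickly. As a minimal sketch of how to read such numbers (the ~4x/year rate and the baseline below are illustrative assumptions, not figures quoted from the linked pages):

<syntaxhighlight lang="python">
# Compound-growth reading of an "Nx per year" compute trend.
# Baseline and growth rate are illustrative assumptions, not Epoch figures.

base_flop = 1e26       # assumed scale of a current frontier training run
growth_per_year = 4.0  # assumed trend multiplier per year

for years in range(1, 6):
    print(f"+{years} yr: ~{base_flop * growth_per_year**years:.0e} FLOP")

# At 4x/year the 5-year multiplier is 4**5 = 1024, i.e. roughly 1000x.
</syntaxhighlight>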
  
 
==AGI Definition==

==Economic and Political==
 
* 2025-04: [https://www.anthropic.com/research/impact-software-development Anthropic Economic Index: AI’s Impact on Software Development]
* 2025-05: [https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete Better at everything: how AI could make human beings irrelevant]
* 2025-05: Forethought: [https://www.forethought.org/research/the-industrial-explosion The Industrial Explosion]
  
 
==Job Loss==
 
* 2025-05: [https://www.oxfordeconomics.com/resource/educated-but-unemployed-a-rising-reality-for-us-college-grads/ Educated but unemployed, a rising reality for US college grads] Structural shifts in tech hiring and the growing impact of AI are driving higher unemployment among recent college graduates
* 2025-05: NY Times: [https://www.nytimes.com/2025/05/30/technology/ai-jobs-college-graduates.html?unlocked_article_code=1.LE8.LlC6.eT5XcpA9hxC2&smid=url-share For Some Recent Graduates, the A.I. Job Apocalypse May Already Be Here] The unemployment rate for recent college graduates has jumped as companies try to replace entry-level workers with artificial intelligence
* 2025-06: [https://80000hours.org/agi/guide/skills-ai-makes-valuable/ How not to lose your job to AI] The skills AI will make more valuable (and how to learn them)
* 2025-06: [https://arxiv.org/abs/2506.06576 Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce]
[[Image:0dab4c86-882d-4095-9d12-d19684ed5184 675x680.png|300px]]
  
 
==National Security==
 
* 2025-04: Jeremie Harris and Edouard Harris: [https://superintelligence.gladstone.ai/ America’s Superintelligence Project]

==AI Manhattan Project==

* 2024-06: [https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf Situational Awareness] ([https://www.forourposterity.com/ Leopold Aschenbrenner]) - [https://www.lesswrong.com/posts/nP5FFYFjtY8LgWymt/quotes-from-leopold-aschenbrenner-s-situational-awareness select quotes], [https://www.youtube.com/watch?v=zdbVtZIn9IM podcast], [https://danielmiessler.com/p/podcast-summary-dwarkesh-vs-leopold-aschenbrenner text summary of podcast]
* 2024-10: [https://thezvi.substack.com/p/ai-88-thanks-for-the-memos?open=false#%C2%A7thanks-for-the-memos-introduction-and-competitiveness White House Memo calls for action on AI]
* 2024-11: [https://www.uscc.gov/annual-report/2024-annual-report-congress 2024 Annual Report to Congress]: [https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/ calls] for "Manhattan Project-style" effort
* 2025-05-29: [https://x.com/ENERGY/status/1928085878561272223 DoE Tweet]: "AI is the next Manhattan Project, and THE UNITED STATES WILL WIN. 🇺🇸"
* 2025-07: [https://epoch.ai/gradient-updates/how-big-could-an-ai-manhattan-project-get How big could an “AI Manhattan Project” get?]
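The natural back-of-envelope behind that last question: take the Manhattan Project's peak share of GDP and convert the modern-day equivalent budget into training compute. The sketch below is illustrative only; the GDP share (~0.4%), GDP figure, and FLOP-per-dollar number are assumptions for this sketch, not values from the Epoch article:

<syntaxhighlight lang="python">
# Rough scale of a "Manhattan Project for AI", under stated assumptions.

gdp_fraction = 0.004     # assumed: Manhattan Project peaked near 0.4% of US GDP
us_gdp = 29e12           # assumed: US GDP ~ $29 trillion
flop_per_dollar = 3e17   # assumed: amortized training FLOP per dollar spent

annual_budget = gdp_fraction * us_gdp          # ~ $116B / year
annual_flop = annual_budget * flop_per_dollar  # ~ 3.5e28 FLOP / year

print(f"Budget: ${annual_budget/1e9:.0f}B/yr, compute: ~{annual_flop:.1e} FLOP/yr")
</syntaxhighlight>

For reference, GPT-4-scale training runs are commonly estimated at roughly 2e25 FLOP, so sustained spending at this level would sit a few orders of magnitude above today's largest disclosed runs.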
  
 
=Near-term=
 
*# The socioeconomic value of linearly increasing intelligence is super-exponential in nature
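One toy formalization of that claim (an illustrative assumption, not a model from the linked sources): suppose the number of task types <math>N</math> an AI can perform grows exponentially with intelligence <math>I</math>, and value scales with the number of ways those capabilities can be combined. Then linear growth in intelligence yields super-exponential (doubly exponential) growth in value:

<math>
I(t) = I_0 + ct, \qquad N(I) = N_0\, e^{b I}, \qquad V \propto 2^{N(I)} = 2^{N_0 e^{b(I_0 + ct)}}
</math>

Here <math>c</math>, <math>b</math>, and <math>N_0</math> are free parameters of the sketch; even modest linear gains in <math>I</math> then compound into faster-than-exponential gains in <math>V</math>.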
 
* 2025-03: [https://www.pathwaysai.org/p/glimpses-of-ai-progess Glimpses of AI Progress: Mental models for fast times]
* 2025-03: [https://www.nature.com/articles/s41598-025-92190-7 Navigating artificial general intelligence development: societal, technological, ethical, and brain-inspired pathways]
* 2025-04: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean: [https://ai-2027.com/ AI 2027] ([https://ai-2027.com/scenario.pdf pdf])
* 2025-04: Stanford HAI: [https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf Artificial Intelligence Index Report 2025]
* 2025-04: Dwarkesh Patel: [https://www.dwarkesh.com/p/questions-about-ai Questions about the Future of AI]
* 2025-05: [https://www.bondcap.com/report/pdf/Trends_Artificial_Intelligence.pdf Trends – Artificial Intelligence]
* 2025-06: IdeaFoundry: [https://ideafoundry.substack.com/p/evolution-vs-extinction-the-choice Evolution vs. Extinction: The Choice is Ours] The next 18 months will decide whether AI ends us or evolves us
  
 
=Overall=
 
* 2025-03: Kevin Roose (New York Times): [https://www.nytimes.com/2025/03/14/technology/why-im-feeling-the-agi.html?unlocked_article_code=1.304.TIEy.SmNhKYO4e9c7&smid=url-share Powerful A.I. Is Coming. We’re Not Ready.] Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
* 2025-03: Nicholas Carlini: [https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html My Thoughts on the Future of "AI"]: "I have very wide error bars on the potential future of large language models, and I think you should too."
* 2025-06: Sam Altman: [https://blog.samaltman.com/the-gentle-singularity The Gentle Singularity]
  
 
==Surveys of Opinions/Predictions==
 
==Intelligence Explosion==

* 2025-03: Future of Life Institute: [https://futureoflife.org/ai/are-we-close-to-an-intelligence-explosion/ Are we close to an intelligence explosion?] AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
* 2025-03: Forethought: [https://www.forethought.org/research/will-ai-r-and-d-automation-cause-a-software-intelligence-explosion Will AI R&D Automation Cause a Software Intelligence Explosion?] (a toy takeoff model follows this list)
[[Image:Gm-1jugbYAAtq Y.jpeg|450px]]
* 2025-05: [https://www.thelastinvention.ai/ The Last Invention] Why Humanity’s Final Creation Changes Everything
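Much of this debate can be compressed into a standard toy differential equation (a common illustration in takeoff discussions, not a model taken from the articles above): let <math>I</math> be AI capability and assume AI research output feeds back into capability with returns <math>\alpha</math>:

<math>
\frac{dI}{dt} = k I^{\alpha}
\;\;\Rightarrow\;\;
I(t) =
\begin{cases}
I_0\, e^{k t}, & \alpha = 1,\\[4pt]
\left(I_0^{\,1-\alpha} + (1-\alpha) k t\right)^{1/(1-\alpha)}, & \alpha \neq 1.
\end{cases}
</math>

For <math>\alpha > 1</math> the solution diverges in finite time (an "explosion"); for <math>\alpha < 1</math> growth is only polynomial. The linked analyses are, in essence, arguments about whether automated AI R&D pushes the effective <math>\alpha</math> above 1 once compute and data bottlenecks are included.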
 
  
 
==Superintelligence==
 
* 2025-04: Scott Alexander (Astral Codex Ten): [https://www.astralcodexten.com/p/the-colors-of-her-coat The Colors Of Her Coat] (response to [https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse semantic apocalypse] and semantic satiation)
* 2025-05: Helen Toner: [https://www.ai-frontiers.org/articles/were-arguing-about-ai-safety-wrong We’re Arguing About AI Safety Wrong]: Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions
* 2025-05: Joe Carlsmith: [https://joecarlsmith.substack.com/p/the-stakes-of-ai-moral-status The stakes of AI moral status]
  
 
==Research==
 
*# [https://joecarlsmith.substack.com/p/ai-for-ai-safety AI for AI safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16790183-ai-for-ai-safety audio version])
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating?utm_source=post-email-title&publication_id=1022275&post_id=162375391&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email video version])

* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]
  
 
==Strategic/Policy==

* 2025-05: [https://uncpga.world/agi-uncpga-report/ AGI UNCPGA Report]: Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly. Report for the Council of Presidents of the United Nations General Assembly (UNCPGA)
* 2025-06: [https://writing.antonleicht.me/p/ai-and-jobs-politics-without-policy AI & Jobs: Politics without Policy] Political support mounts - for a policy platform that does not yet exist
* 2025-06: [https://x.com/littIeramblings Sarah Hastings-Woodhouse]: [https://drive.google.com/file/d/1mmdHBE6M2yiyL21-ctTuRLNH5xOFjqWm/view Safety Features for a Centralized AGI Project]
* 2025-07: [https://writing.antonleicht.me/p/a-moving-target A Moving Target] Why we might not be quite ready to comprehensively regulate AI, and why it matters
  
 
=See Also=

* [[AI safety]]
