AI predictions


=Capability Scaling=
 
* 2025-09: [https://arxiv.org/abs/2509.09677 The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs]
 
* 2025-09: [https://www.julian.ac/blog/2025/09/27/failing-to-understand-the-exponential-again/ Failing to Understand the Exponential, Again] (see the sketch below)

* 2026-02: Ryan Greenblatt: [https://www.lesswrong.com/posts/rRbDNQLfihiHbXytf/distinguish-between-inference-scaling-and-larger-tasks-use Distinguish between inference scaling and "larger tasks use more compute"]
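
The two items above share one piece of arithmetic worth making explicit (a hedged sketch of the compounding argument, assuming an independent per-step success probability <math>p</math>, which is a simplification): the probability of executing an <math>n</math>-step task without error is <math>p^n</math>, so the longest task achievable at 50% reliability is

:<math>n_{1/2} = \frac{\ln 2}{-\ln p} \approx \frac{0.69}{1-p} \quad \text{as } p \to 1.</math>

Raising <math>p</math> from 0.99 to 0.999, a seemingly marginal gain in single-step accuracy, stretches this horizon roughly tenfold, from about 69 steps to about 693; steady per-step improvement therefore shows up as exponential growth in achievable task length.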
  
 
==Scaling Laws==
 
See: [[Scaling Laws]]
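
For orientation, a standard parametric form (a sketch quoting the Chinchilla fit of Hoffmann et al. 2022, not a formula taken from the linked page): loss as a function of parameter count <math>N</math> and training tokens <math>D</math> is modeled as

:<math>L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},</math>

with fitted exponents <math>\alpha \approx 0.34</math> and <math>\beta \approx 0.28</math>, implying that <math>N</math> and <math>D</math> should be scaled roughly in proportion under a fixed compute budget.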


==AGI Definition==

* 2023-11: Allan Dafoe, Shane Legg, et al.: [https://arxiv.org/abs/2311.02462 Levels of AGI for Operationalizing Progress on the Path to AGI]
 
* 2024-04: Bowen Xu: [https://arxiv.org/abs/2404.10731 What is Meant by AGI? On the Definition of Artificial General Intelligence]

* 2025-10: Dan Hendrycks et al.: [https://www.agidefinition.ai/paper.pdf A Definition of AGI]

* 2026-01: [https://arxiv.org/abs/2601.07364 On the universal definition of intelligence]
  
 
==Progress Models==
 
From [http://yager-research.ca/2025/04/ai-impact-predictions/ AI Impact Predictions]:

[[Image:AI impact models-2025 11 24.png|450px]]
  
 
=Economic and Political=
 
* 2019-11: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3482150 The Impact of Artificial Intelligence on the Labor Market]
 
* 2020-06: [https://www.openphilanthropy.org/research/modeling-the-human-trajectory/ Modeling the Human Trajectory] (GDP)

* 2021-06: [https://www.openphilanthropy.org/research/report-on-whether-ai-could-drive-explosive-economic-growth/ Report on Whether AI Could Drive Explosive Economic Growth]
 
* 2023-10: Marc Andreessen: [https://a16z.com/the-techno-optimist-manifesto/ The Techno-Optimist Manifesto]
 
* 2023-12: [https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html My techno-optimism]: "defensive acceleration" ([https://vitalik.eth.limo/index.html Vitalik Buterin])
 
* 2025-05: [https://arxiv.org/abs/2505.20273 Ten Principles of AI Agent Economics]
 
* 2025-07: [https://substack.com/home/post/p-167879696 What Economists Get Wrong about AI]: they ignore innovation effects, use outdated capability assumptions, and miss the robotics revolution

* 2025-07: [https://www.nber.org/books-and-chapters/economics-transformative-ai/we-wont-be-missed-work-and-growth-era-agi We Won't Be Missed: Work and Growth in the Era of AGI]

* 2025-07: [https://www.nber.org/papers/w34034 The Economics of Bicycles for the Mind]
 
* 2025-09: [https://conference.nber.org/conf_papers/f227491.pdf Genius on Demand: The Value of Transformative Artificial Intelligence]
 
* 2025-10: [https://peterwildeford.substack.com/p/ai-is-probably-not-a-bubble AI is probably not a bubble: AI companies have revenue, demand, and paths to immense value]

* 2025-11: [https://windowsontheory.org/2025/11/04/thoughts-by-a-non-economist-on-ai-and-economics/ Thoughts by a non-economist on AI and economics]

* 2025-11: [https://www.nber.org/papers/w34444 Artificial Intelligence, Competition, and Welfare]

* 2025-11: [https://www.anthropic.com/research/estimating-productivity-gains Estimating AI productivity gains from Claude conversations] (Anthropic)

* 2025-12: [https://benjamintodd.substack.com/p/how-ai-driven-feedback-loops-could How AI-driven feedback loops could make things very crazy, very fast]

* 2025-12: [https://philiptrammell.com/static/Existential_Risk_and_Growth.pdf Existential Risk and Growth] (Philip Trammell and Leopold Aschenbrenner)

* 2026-01: [https://www.anthropic.com/research/anthropic-economic-index-january-2026-report Anthropic Economic Index: new building blocks for understanding AI use]

* 2026-01: [https://www.anthropic.com/research/economic-index-primitives Anthropic Economic Index report: economic primitives]
  
 
==Job Loss==
 
* 2025-07: Harvard Business Review: [https://hbr.org/2025/06/what-gets-measured-ai-will-automate What Gets Measured, AI Will Automate]
 
* 2025-08: [https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/ Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence]

* 2025-10: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5560401 Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market]

* 2025-10: [https://www.siliconcontinent.com/p/the-ai-becker-problem The AI Becker problem: Who will train the next generation?]

* 2026-01: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6134506 AI, Automation, and Expertise]

* 2026-02: [https://arachnemag.substack.com/p/the-jevons-paradox-for-intelligence The Jevons Paradox for Intelligence: Fears of AI-induced job loss could not be more wrong] (see the sketch below)
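
The Jevons-paradox item above leans on a standard elasticity argument. As a hedged sketch (an illustration of the general mechanism, not math taken from the linked essay): suppose demand for cognitive work is iso-elastic, <math>Q(p) = A p^{-\varepsilon}</math>, at effective price <math>p</math>. Total spending is then

:<math>E(p) = p \, Q(p) = A \, p^{1-\varepsilon},</math>

which increases as <math>p</math> falls whenever <math>\varepsilon > 1</math>. If AI makes cognition cheaper and demand for it is elastic, total demand for cognitive work, including its human complements, can rise rather than fall.
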
==Productivity Impact==

* 2025-05: [https://www.nber.org/papers/w33777 Large Language Models, Small Labor Market Effects]
** Significant uptake, but very little economic impact so far

* 2026-02: [https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419dc5 The AI productivity take-off is finally visible] ([https://x.com/erikbryn/status/2023075588974735869?s=20 Erik Brynjolfsson])
** Businesses are finally beginning to reap some of AI's benefits.

* 2026-02: New York Times: [https://www.nytimes.com/2026/02/18/opinion/ai-software.html The A.I. Disruption We’ve Been Waiting for Has Arrived]
  
 
==National Security==
 
* 2025-06: IdeaFoundry: [https://ideafoundry.substack.com/p/evolution-vs-extinction-the-choice Evolution vs. Extinction: The Choice is Ours]: the next 18 months will decide whether AI ends us or evolves us
 
* 2025-07: [https://cfg.eu/advanced-ai-possible-futures/ Advanced AI: Possible futures]: five scenarios for how the AI transition could unfold

* 2025-11: [https://android-dreams.ai/ Android Dreams]


==Insightful Analysis of Current State==

* 2025-11: Andy Masley: [https://andymasley.substack.com/p/the-lump-of-cognition-fallacy The lump of cognition fallacy: The extended mind as the advance of civilization]

* 2026-02: Eric Jang: [https://evjang.com/2026/02/04/rocks.html As Rocks May Think]

* 2026-02: Matt Shumer: [https://x.com/mattshumer_/status/2021256989876109403 Something Big Is Happening]

* 2026-02: Minh Pham: [https://x.com/buckeyevn/status/2014171253045960803?s=20 Why Most Agent Harnesses Are Not Bitter Lesson Pilled]
  
 
=Overall=
 
=Positives & Optimism=
 
==Science & Technology Improvements==

* 2023-05: [https://www.planned-obsolescence.org/author/kelsey/ Kelsey Piper]: [https://www.planned-obsolescence.org/the-costs-of-caution/ The costs of caution]
 
* 2024-09: Sam Altman: [https://ia.samaltman.com/ The Intelligence Age]
 
* 2024-10: Dario Amodei: [https://darioamodei.com/machines-of-loving-grace Machines of Loving Grace]
 
==Social==
 
* 2025-09: [https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale Coasean Bargaining at Scale]: Decentralization, coordination, and co-existence with AGI
 
* 2025-10: [https://www.nber.org/system/files/chapters/c15309/c15309.pdf#page=15.23 The Coasean Singularity? Demand, Supply, and Market Design with AI Agents]


==Post-scarcity Society==

* 2004: Eliezer Yudkowsky (MIRI): [https://intelligence.org/files/CEV.pdf Coherent Extrapolated Volition] and [https://www.lesswrong.com/s/d3WgHDBAPYYScp5Em/p/K4aGvLnHvYgX9pZHS Fun Theory]

* 2019: John Danaher: [https://www.jstor.org/stable/j.ctvn5txpc Automation and Utopia: Human Flourishing in a World Without Work]


==The Grand Tradeoff==

* 2026-02: Nick Bostrom: [https://nickbostrom.com/optimal.pdf Optimal Timing for Superintelligence: Mundane Considerations for Existing People]
  
 
=Plans=
 
* Yoshua Bengio: [https://time.com/7283507/safer-ai-development/ A Potential Path to Safer AI Development]
** 2025-02: [https://arxiv.org/abs/2502.15657 Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?]

* 2026-01: Dario Amodei: [https://www.darioamodei.com/essay/the-adolescence-of-technology The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI]

* 2026-02: Ryan Greenblatt: [https://www.lesswrong.com/posts/vjAM7F8vMZS7oRrrh/how-do-we-more-safely-defer-to-ais How do we (more) safely defer to AIs?]
  
 
==Philosophy==
 
*# [https://joecarlsmith.substack.com/p/can-we-safely-automate-alignment Can we safely automate alignment research?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17069901-can-we-safely-automate-alignment-research audio version], [https://joecarlsmith.substack.com/p/video-and-transcript-of-talk-on-automating?utm_source=post-email-title&publication_id=1022275&post_id=162375391&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email video version])
*# [https://joecarlsmith.substack.com/p/giving-ais-safe-motivations?utm_source=post-email-title&publication_id=1022275&post_id=171250683&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email Giving AIs safe motivations] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17686921-giving-ais-safe-motivations audio version])
*# [https://joecarlsmith.com/2025/09/29/controlling-the-options-ais-can-pursue Controlling the options AIs can pursue] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/17909401-controlling-the-options-ais-can-pursue audio version])
*# [https://joecarlsmith.substack.com/p/how-human-like-do-safe-ai-motivations?utm_source=post-email-title&publication_id=1022275&post_id=178666988&utm_campaign=email-post-title&isFreemail=true&r=5av1bk&triedRedirect=true&utm_medium=email How human-like do safe AI motivations need to be?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18175429-how-human-like-do-safe-ai-motivations-need-to-be audio version])
*# [https://joecarlsmith.substack.com/p/building-ais-that-do-human-like-philosophy Building AIs that do human-like philosophy: AIs will face philosophical questions humans can't answer for them] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18591342-building-ais-that-do-human-like-philosophy audio version])

* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]
  
 
==Strategic/Technical==
 
* 2025-03: [https://resilience.baulab.info/docs/AI_Action_Plan_RFI.pdf AI Dominance Requires Interpretability and Standards for Transparency and Security]
 
* 2026-02: [https://www.gap-map.org/capabilities/?sort=bottlenecks Fundamental Development Gap Map v1.0]
  
 
==Strategic/Policy==
 
* 2025-07: [https://writing.antonleicht.me/p/a-moving-target A Moving Target]: why we might not be quite ready to comprehensively regulate AI, and why it matters

* 2025-07: [https://www-cdn.anthropic.com/0dc382a2086f6a054eeb17e8a531bd9625b8e6e5.pdf Anthropic: Build AI in America] ([https://www.anthropic.com/news/build-ai-in-america blog])

* 2025-12: [https://asi-prevention.com/ How middle powers may prevent the development of artificial superintelligence]
  
 
==Restriction==
 