Difference between revisions of "AI predictions"

From GISAXS
 
* 2026-01: [https://www.anthropic.com/research/economic-index-primitives Anthropic Economic Index report: economic primitives]
 
* 2026-02: Nate Silver: [https://www.natesilver.net/p/the-singularity-wont-be-gentle The singularity won't be gentle: If AI is even half as transformational as Silicon Valley assumes, politics will never be the same again]
 
* 2026-03: [https://www.anthropic.com/research/economic-index-march-2026-report Anthropic Economic Index report: Learning curves]
  
 
==Job Loss==
 
 
* 2026-01: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6134506 AI, Automation, and Expertise]
 
 
* 2026-02: [https://arachnemag.substack.com/p/the-jevons-paradox-for-intelligence The Jevons Paradox for Intelligence: Fears of AI-induced job loss could not be more wrong]
 
* 2026-03: [https://www.dropbox.com/scl/fo/689u1g785x8jp6c8v1s21/AKxZ_N15vUxMA3PBtpbr5nM?dl=0&e=1&preview=2026.03.24+Bundles.pdf&rlkey=ottgcu71u1t4mhn6tblvatu8w&st=dj6k0x2o Weak Bundle, Strong Bundle: How AI Redraws Job Boundaries]
  
 
==Productivity Impact==
 
 
==Alignment==
 
 
* 2023-03: Leopold Aschenbrenner: [https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/ Nobody’s on the ball on AGI alignment]
 
* 2024-03: [https://arxiv.org/abs/2404.10636 What are human values, and how do we align AI to them?] ([https://meaningalignment.substack.com/p/0480e023-98c0-4633-a604-990d3ac880ac blog])
 
* 2025: Joe Carlsmith: [https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem How do we solve the alignment problem?] Introduction to an essay series on paths to safe, useful superintelligence
 
 
*# [https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment What is it to solve the alignment problem?] Also: to avoid it? Handle it? Solve it forever? Solve it completely? ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/16617671-what-is-it-to-solve-the-alignment-problem audio version])
 
 
*# [https://joecarlsmith.substack.com/p/how-human-like-do-safe-ai-motivations How human-like do safe AI motivations need to be?] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18175429-how-human-like-do-safe-ai-motivations-need-to-be audio version])
 
 
*# [https://joecarlsmith.substack.com/p/building-ais-that-do-human-like-philosophy Building AIs that do human-like philosophy: AIs will face philosophical questions humans can't answer for them] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18591342-building-ais-that-do-human-like-philosophy audio version])
 
*# [https://joecarlsmith.substack.com/p/on-restraining-ai-development-for On restraining AI development for the sake of safety] ([https://joecarlsmithaudio.buzzsprout.com/2034731/episodes/18869440-on-restraining-ai-development-for-the-sake-of-safety audio version])
 
* 2025-04: Dario Amodei: [https://www.darioamodei.com/post/the-urgency-of-interpretability The Urgency of Interpretability]
 
  

Latest revision as of 14:59, 24 March 2026

Capability Scaling


Scaling Laws

See: Scaling Laws

AGI Achievable

AGI Definition

Recursive Self Improvement (RSI)

Progress Models

From AI Impact Predictions:

[Figure: AI impact models, 2025-11-24]

Economic and Political

Job Loss


Productivity Impact

National Security

AI Manhattan Project

Near-term

Insightful Analysis of Current State

Overall

Surveys of Opinions/Predictions

Bad Outcomes

Intelligence Explosion


Superintelligence

Long-range/Philosophy

Psychology

Positives & Optimism

Science & Technology Improvements

Social

Post-scarcity Society

The Grand Tradeoff

Plans

Philosophy


Research

Alignment

Strategic/Technical

Strategic/Policy

Restriction

See Also