Market Brief
Daily market recaps with key events, stock movements, and global influences
Option: probability
Elon Musk: 23%
Bill Gates: 17%
Jeff Bezos: 14%
Mr. Beast: 14%
Barack Obama: 6%
Other: 6%
Donald Trump: 2%
Sam Altman: 2%
Robert Caro: 2%
Kamala Harris: 1%
Gwern Branwen: 1%
JD Vance: 1%
Volodymyr Zelenskyy: 1%
Jensen Huang: 1%
Peter Thiel: 1%
No one; the tweet was a joke or intended to attract advertisers without referring to someone specific: 1%
Sarah Paine: 1%
Satya Nadella: 1%
Justin Trudeau: 0%
Brian Shaw (_biggest_ guest yet?): 0%
Narendra Modi: 0%
Xi Jinping: 0%
Dalai Lama: 0%
Buffett: 0%
Taylor Swift: 0%
Joe Biden: 0%
LeBron James: 0%
GPT-5: 0%
Bill Clinton: 0%
A basketball player: 0%
Vladimir Putin: 0%
Benjamin Netanyahu: 0%
Jesus Christ: 0%
Rona Wang: 0%
Jose Luis Ricon: 0%
Kettner Griswold: 0%
James Koppel: 0%
Sam Bankman-Fried: 0%
Nancy Pelosi: 0%
Rishi Sunak: 0%
Keir Starmer: 0%
Satoshi Nakamoto: 0%
Jimmy Carter: 0%
George W. Bush: 0%
Al Gore: 0%
Michael Jackson: 0%
your mom: 0%
roon: 0%
Mitt Romney: 0%
one or both of his parents: 0%
A new OpenAI AI model not called "GPT-5": 0%
Connor Duffy: 0%
MBS: 0%
Geoffrey Hinton: 0%
Pope Francis: 0%
Scott Alexander: 0%
The Mountain (Icelandic strongman): 0%
Leonardo DiCaprio: 0%
RFK Jr.: 0%
Sydney Sweeney: 0%
growing_daniel: 0%
greg16676935420: 0%
Deadpool: 0%
Terence Tao: 0%
Ilya Sutskever: 0%
Shaq: 0%
Oprah Winfrey: 0%
Yoshua Bengio: 0%
Sundar Pichai: 0%
Scarlett Johansson: 0%
Paul McCartney: 0%
[duplicate]: 0%
Neel Nanda: 0%
King Charles: 0%
Kim Jong Un: 0%
Gavin Newsom: 0%
Royal Palace: 0%
[cancelled option]: 0%
Daniel Yergin: 0%
Peter Singer: 0%
Gabe Newell: 0%
Neil Gorsuch: 0%
Stephen Breyer: 0%
Dmitry Medvedev: 0%
JK Rowling: 0%
Shrek: 0%
Sam Hyde: 0%
[invalid answer] Multiple people, e.g. a team from OpenAI: 0%
Marques Brownlee: 0%
Vivek Ramaswamy: 0%
Donald Trump Jr.: 0%
Ben Shindel: 0%
Javier Milei: 0%
Dylan Patel: 0%
Joe Rogan: 0%
Marc Andreessen: 0%
Tim Cook: 0%
Mike Tyson: 0%
Jake Paul: 0%
Matt Gaetz: 0%
Option: probability
Trump cancels Biden's AI executive order (EO 14110): 100%
Trump creates tariffs of 10% or more on all imports from some major country (top 10 GDP): 100%
Trump pardons at least 5 individuals convicted of crimes related to the January 6 Capitol riot: 100%
Trump and Musk have a falling out which leads to Trump removing Musk from any government role or advisory position: 100%
Doug Burgum is appointed to a cabinet position: 100%
Trump uses the Alien Enemies Act of 1798 to deport at least 5 people: 100%
Displaying the trans pride flag becomes illegal in any part of the USA: 100%
$TSLA reaches $400 a share: 100%
Price of gold reaches a historic maximum (last peak $2,709/t.oz in Oct 2024, so it has to be above that): 100%
Plays golf with some head of government/president from Europe: 100%
Elon Musk is an employee of the executive branch: 100%
RFK is confirmed by the Senate for any position: 100%
Trump greatly reduces, dismantles, or recommends dismantling of the US Postal Service (for example via bringing back Schedule F): 100%
Passports bearing the X gender marker are no longer issued, or not considered valid across all 50 states: 100%
Federal employees are ordered to go to work: 100%
A cabinet nomination fails, is withdrawn, or has no action taken: 100%
Trump declassifies JFK files: 100%
A natural-born US citizen (as defined in 2024) is deported: 100%
At least 10 other answers on this market resolve YES: 99%
Trump brings back (or attempts to bring back) Schedule F classification for civil service employees: 97%
The US leaves the World Health Organization: 96%
NIH funded 25% or more below 2024 levels in any year (inflation-adjusted): 96%
"The New Norm" ends: 94%
Investigations of university admissions that appear to be illegally considering the race of applicants: 94%
Trump's episode on JRE becomes JRE's most viewed video on his channel (YouTube): 90%
Mike Johnson is no longer House Speaker: 90%
Republicans lose a House seat in a special election: 90%
Elon tweets about $DOGE as head of DOGE: 90%
New James Bond actor is presented: 90%
Trump imposes universal tariffs of 10% or more: 89%
FEMA gets reduced, crippled, or privatized: 89%
US military deployed domestically against >=1 US citizen (variants of "Seal Team 6 the opposition"): 88%
A new Supreme Court justice appointed: 85%
Amtrak gets reduced, crippled, or privatized: 85%
DOJ opens "civil rights investigations" into left-wing DA/prosecutor offices: 84%
Memberberries appear again in any new South Park episode in this period: 84%
NOAA gets reduced, crippled, or privatized: 84%
The Supreme Court upholds or does not revisit Obergefell v. Hodges (the constitutional right to same-sex marriage): 83%
At least 20 other answers on this market resolve YES: 83%
Democrats have 218 or more seats in the House of Representatives: 83%
A former Democratic presidential nominee besides Jimmy Carter or Joe Biden dies: 82%
2% milk > $5 a gallon at my local Walmart: 82%
United States reaches 7.6 or lower in the Democracy Index (EIU): 80%
Mike Johnson is no longer leader of the House Republican Conference: 79%
US national bitcoin reserve created: 79%
Trump finishes his term on Jan 20, 2029 (does not step down or extend his term): 78%
Doomsday Clock is moved twice, regardless of direction (moved 25 times so far, as of Nov 2024): 77%
Government shutdown: 74%
2 more Democratic politicians are murdered after June 14th, 2025: 73%
Polymarket becomes legal in the United States: 72%
Plays golf with some head of government/president from Korea (N. Korea included): 72%
Trump makes a public statement about UFOs/UAPs: 71%
A wound will be inflicted upon the soul of the American nation that shan't heal: 71%
Fair elections in 2028: 70%
The administration recommends removing fluoride from U.S. public water systems: 70%
Gavin Newsom announces his candidacy for the '28 presidential election: 70%
Plays golf with some head of government/president from the Middle East: 70%
Complete absence of federal grants or legislative packages for Long COVID research: 70%
Any hormone replacement therapy drug outlawed for treating gender dysphoria in any state: 70%
Trump says multiple consecutive words in a foreign language (not loanwords or cognates in or from English): 70%
USA experiences a recession: 69%
Vances have another kid: 68%
Deadly force deliberately used against protestors in the US: 68%
Severance of diplomatic relations initiated by at least one foreign country with the US: 67%
At least one cabinet officer receives a recess appointment: 66%
Donald Trump and Donald Tusk shake hands: 66%
Samuel Alito is no longer a Supreme Court justice: 63%
Israel officially annexes more West Bank territory: 63%
Trump makes no public appearances for more than 14 consecutive days: 63%
A person or business is charged for distributing mifepristone or HRT under the Comstock Act: 62%
Successfully negotiates a Gaza ceasefire: 62%
Trump gets hospitalized: 62%
A measurable decrease in chronic disease: 61%
Saudi Arabia recognizes Israel: 61%
Hakeem Jeffries out as House majority/minority leader: 60%
Russo-Ukrainian War ends: 59%
Joe Biden dies: 59%
Clarence Thomas is no longer a Supreme Court justice: 57%
Trump gives himself a nickname in the third person: 57%
Approval < 35%: 55%
3 or more people are killed by law enforcement during a protest: 51%
A major war in Asia occurs: 50%
The Supreme Court grants certiorari to hear at least one case challenging Griswold v. Connecticut (contraception rights): 50%
Another story/scandal about RFK and some dead wild animal comes out: 50%
The Supreme Court has a MAGA majority (5 Trump-appointed judges) at any point: 50%
Someone in Trump's family (other than Trump) runs in the 2028 GOP primaries: 50%
At least 25 other answers on this market resolve YES: 50%
There are fewer or the same number of UN member countries at the end of the term as at the beginning (Nov 2024: 193): 50%
New: 50%
His endorsed option gets a higher number of votes in the 2028 election: 50%
There is a cut in Social Security Disability benefits: 50%
Anthony Fauci is investigated by the federal government: 49%
One or more of Joe Biden, Kamala Harris, Liz Cheney, or Barack Obama is indicted by the federal government: 48%
Trump and Obama shake hands: 47%
Trump creates tariffs of 60% or more on all imports from some major country (top 10 GDP), and they are in place for at least a month: 46%
McDonald's reopens in Russia: 46%
Trump bans a specific vaccine nationwide: 43%
New national park created: 43%
Record-level unemployment: 42%
Another pandemic: 41%
Trump announces he is tired of winning: 41%
IRS investigation of Harvard Kennedy School Institute of Politics: 41%
Trump visits Africa: 41%
Trump publicly speaks with Alex Jones (e.g., on a show, or hosted by him, or as an advisor): 41%
A member of the Trump family runs for Congress: 41%
John Thune out as Senate Majority Leader: 41%
Trump attempts to invoke Article II, Section 3 to adjourn Congress: 41%
Trump mentions "Top Trump(s)", "trump card(s)", "trumpet(s)", or "trump(ing)" (British slang for farting): 41%
Second Muslim ban: 38%
Sotomayor's seat is re-filled: 38%
Bitcoin becomes a US Treasury reserve: 37%
Trump publishes a proscription list at least 30 names long: 37%
55 or more Republican Senate seats: 37%
Birth rate increases past 12 per 1000: 37%
54 or more Republican Senate seats: 37%
Trump vetoes more than 10 bills (https://www.senate.gov/legislative/vetoes/vetoCounts.htm): 37%
The Department of Education gets dissolved: 35%
Trump endorses a candidate other than JD Vance in the Republican presidential primary: 35%
Trump mentions Manifold Markets, Polymarket, or Kalshi: 35%
Elon Musk becomes the 'secretary of cost-cutting' / efficiency commission leader / head of DOGE or similar: 34%
Someone other than Trump is acting president before Trump's term is over: 34%
Tesla: 34%
A state openly refuses to abide by a federal Supreme Court ruling: 33%
A national ban on gender-affirming care: 32%
Trump deports 1 million immigrants in a calendar year: 31%
Trump admits that someone else is smarter than him: 31%
Trump says anything that is pro-animal-rights: 31%
AGI achieved (according to Manifold's AGI clock): 30%
Trump runs for a third term: 29%
JD Vance elected President: 29%
Trump dies: 28%
Inflation exceeds 5% for at least two consecutive fiscal quarters: 28%
Senate majority flips in the 2026 midterm elections: 28%
Trump lowers or eliminates income taxes across all tax brackets: 27%
A federal employee goes to prison over free speech violations: 27%
China starts a physical invasion of Taiwan: 27%
Invasion of any North/South American country by any other country: 27%
Military deployed to enforce the border in Chicago or Detroit: 27%
Congress overrides a presidential veto: 26%
Trump says a racial slur: 25%
Constitutional amendment: 25%
Trump declares war against any other nation or de facto autonomous territory: 25%
Trump makes no public appearance for more than 21 consecutive days: 24%
10% fewer government agencies: 24%
Barron Trump mentions barons, barrenness, bars, or bears: 24%
Trump accurately voices a calculation that involves 2+ numbers with 2+ non-zero digits: 22%
Transgender US passports with a gender other than that assigned at birth revoked: 21%
Starship lands on Mars: 21%
Cannabis is removed from Schedule 1: 21%
Approval < 25%: 20%
Trump gets covid (again): 17%
A member of the Trump family is elected to Congress: 17%
SpaceX is nationalized: 17%
BTC falls below $38,000: 16%
Iran acquires a nuclear weapon: 15%
TSM stock price plunges to 60 USD: 15%
Elon Musk assassinated or injured in an attempt: 15%
Dow rises above 65,000: 15%
RFK is in charge of the FDA at any point: 14%
Trump bans/taxes seed/vegetable oils or enacts any other negative incentive against them: 14%
Repeal of Obamacare: 14%
Trump publicly approves of Project 2025, before 2026: 14%
China successfully subjugates Taiwan, whether physically or by a treaty: 14%
Vance resigns or is forced to leave office (threats, impeachment, coups): 13%
Ukraine starts a nuclear weapons program: 13%
South Park is canceled/discontinued: 13%
Ukraine controls any portion of Crimea for over a day: 13%
RFK implements his "wellness farm" plan: 13%
The US military detonates a MOAB with at least one casualty: 13%
P. Diddy gets released (not an album; only official release from prison counts): 12%
Trump gets poisoned, ends up in hospital: 12%
2% milk < $2 a gallon at my local Walmart: 12%
US invasion of any North/South American country: 12%
The cause of the drones present in December 2024 in New Jersey is known: 12%
Trump loses the comb-over hairstyle: 12%
Trump resigns or is forced to leave office (threats, impeachment, coups): 11%
Department of Defense renamed to Department of War: 11%
Josh Shapiro wins the presidential nomination: 11%
John Bolton indicted: 11%
American manned lunar landing: 10%
MLK Day gets renamed or removed as a federal holiday: 10%
Trump is assassinated: 10%
A Millennium Prize problem falls to a model: 10%
Trump and Melania divorce: 10%
Laura Loomer gets any government role: 9%
Trump uses the Alien Enemies Act of 1798 to deport at least 5 million people: 9%
"covfefe" posted again: 9%
<1.5 million civilian federal employees: 9%
Donald Trump says the n-word, with or without hard R: 9%
The Google Trends (worldwide) metric for "vibes" goes back to 2016 levels: 9%
Independent Republican caucus forms in House or Senate and enters a coalition with Democrats: 8%
Missiles are fired across the border at suspected drug labs in Mexico: 8%
H5N1 Public Health Emergency of International Concern declared: 8%
Trump imitates Elon Musk's heartfelt salute: 8%
John Roberts is no longer Chief Justice of the United States: 7%
Cannabis is federally legalized: 7%
Trump builds a complete wall across the Mexico-US border: 6%
Trump gets shot: 6%
Another troop (at least 3) of monkeys escapes from Yemassee, SC (after 8-Nov-2024): 6%
Trump mentions Leopold Aschenbrenner or his essay "Situational Awareness" in any way: 6%
Trump mentions the Rationalism movement, LessWrong, or Slate Star Codex / Astral Codex Ten: 6%
Trump is seen shirtless: 6%
Trump supports a mask or glove mandate anywhere in the US: 6%
The ICC or ICJ issues an arrest warrant for Trump: 6%
Trump bans lab-grown meat nationwide: 5%
An amendment imposing term limits on members of Congress is passed: 5%
Trump goes to eat steak or something similar at Salt Bae's: 5%
Constitutional amendment: 5%
A bill introducing a single-payer healthcare system is passed by Congress: 5%
The construction of the Third Temple begins in Jerusalem: 5%
New US national anthem: 5%
A sex tape comes out that shows Trump thrusting energetically: 5%
Trump discloses intelligent aliens are real and on Earth (also counts if they were on Earth but left or died out): 5%
Trump bans abortion nationwide: 4%
Major Yellowstone caldera scare: 4%
Trump mentions the Effective Altruism movement: 4%
Trump loses the fake tan: 4%
Anthony Fauci is indicted: 4%
Recess appointment to SCOTUS: 4%
Ann Selzer arrested: 4%
Trump wholeheartedly apologises for something political he did, without caveats or backtracking: 4%
Trump fulfills promise of giving green cards to noncitizen university graduates: 3%
30-year Treasury rate > 15.00%: 3%
Anthony Fauci is convicted: 3%
Anthony Fauci goes to prison: 3%
Matt Gaetz is confirmed for any role in the executive branch of the US government: 3%
Trump extends his term past 4 years: 2%
Steve Bannon goes to prison again: 2%
Wants to compete for a 3rd term, but due to a catastrophic debate he remains president while the VP becomes the official candidate: 2%
Elon tweets about $DOGE while running DOGE: 2%
A hurricane is nuked: 2%
Trump bans all vaccines nationwide: 1%
Trump forces Ivanka to divorce Jared and marry either Vance or Musk: 1%
Trump enacts jus primae noctis: 1%
Trump says "Vriska did nothing wrong": 1%
Jimmy Carter dies: 0%
Matt Gaetz is rejected by the Senate for Attorney General, then DeSantis appoints him to Rubio's vacated Senate seat: 0%
Option: votes
NO: 3076
YES: 2072
Option: probability
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place: 20%
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers: 20%
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time: 13%
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.): 12%
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.): 8%
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first: 5%
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation: 5%
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.): 5%
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility: 3%
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans: 3%
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.): 2%
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions: 1%
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie: 1%
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable: 0%
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us: 0%
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much: 0%
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability: 0%
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking: 0%
Option: probability
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6: 15%
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans: 14%
Eliezer finally listens to Krantz: 13%
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out: 10%
Humanity coordinates to prevent the creation of potentially-unsafe AIs: 9%
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values: 8%
Other: 8%
Humans become transhuman through other means before AGI happens: 6%
Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities: 4%
AGI is never built (indefinite global moratorium): 3%
Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do: 2%
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough: 1%
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment: 1%
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development: 1%
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees: 1%
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions: 1%
An aligned AGI is built and the aligned AGI prevents the creation of any unaligned AGI: 0%
I've been a good bing 😊: 0%
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities: 0%
The response to AI advancements or failures makes some governments delay the timelines: 0%
Far more interesting problems to solve than take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon: 0%
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled: 0%
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null, and that we don't have a clear trajectory to get to) find some solution to alignment: 0%
Something less inscrutable than matrices works fast enough: 0%
There's some cap on the value extractible from the universe and we already got the 20%: 0%
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8: 0%
Robot Love!!: 0%
AI thinks it is in a simulation controlled by Roko's basilisk: 0%
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task: 0%
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome: 0%
AIs never develop coherent goals: 0%
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away: 0%
Aliens invade and stop bad AI from appearing: 0%
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check: 0%
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works: 0%
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us: 0%
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI: 0%
God exists and stops the AGI: 0%
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress: 0%
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts: 0%
Someone solves agent foundations: 0%
Someone understands how minds work enough to successfully build and use one directed at something world-savingly enough: 0%
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI: 0%
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb: 0%
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious: 0%
Several AIs are created but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe: 0%
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them to perform more calculations than could be done with the whole mass of the universe. For an external observer such AIs just disappear: 0%
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous: 0%
Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome): 0%
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment): 0%
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees: 0%
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals: 0%
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol: 0%
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely: 0%
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem: 0%
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses): 0%
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs: 0%
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is at the level IQ=1000. AIs have to collaborate with humans: 0%
Nanotech is difficult without experiments, so no mail-order AI Grey Goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from inside: 0%
ASI needs not your atoms but information. Humans will live very interesting lives: 0%
Something else: 0%
Moral realism is true, the AI discovers this and the One True Morality is human-compatible: 0%
Valence realism is true. AGI hacks itself to experiencing every possible consciousness and picks the best one (for everyone): 0%
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default: 0%
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights): 0%
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer: 0%
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome: 0%
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment: 0%
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent: 0%
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans: 0%
The Orthogonality Thesis is false: 0%
Sheer dumb luck. The aligned AI agrees that alignment is hard, and any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead: 0%
Either the "strong form" of the Orthogonality Thesis is false, or "goal-directed agents are as tractable as their goals" is true while the goal-sets which are most threatening to humanity are relatively intractable: 0%
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm: 0%
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values: 0%
Option: probability
For at least one day, the model was generally available to anyone in the United States willing to pay enough, without waiting lists or "beta" programs: 98%
It is discussed during a segment of "HatGPT": 91%
By default, the generated videos will be watermarked: 89%
A competing model has challenged Sora's dominance in the text-to-video space: 86%
A poll of Manifold users will say that 20% or less have made a Sora video in the last month: 86%
OpenAI will be sued over the model: 80%
It was trained on data created in a physics/game engine (e.g. Unreal Engine): 78%
A post claiming a Sora video is real will go viral with > 1 million engagements: 74%
It will be noticeably worse at, or largely unable to, generate fast-paced animation: 73%
A second major version of the model has been released: 71%
A major studio will use this in a movie or TV show: 62%
A video produced by the model has been used for widely spread misinformation, as reported by a major news outlet: 60%
It has been referenced in a legal case about deepfakes: 58%
Costs for an average 1-minute HD (or higher quality) video will be lower than $0.50: 58%
It can be used during conversations with ChatGPT on the OpenAI website: 54%
It prompts the Hard Fork podcast to rant about AI model names: 51%
Video will include sound: 50%
A YouTube video made only with Sora will get > 100M views: 50%
It will be jailbroken to make a porn video: 48%
It will be pay-per-use (or credit-based) as opposed to part of a monthly subscription: 46%
It'll be legally banned in at least one EU country: 40%
A YouTube movie >2h, made only with Sora and splicing videos together, will get > 10M views: 37%
At least 2 Manifold questions will contain a Sora-generated video in their header: 36%
It can generate videos over 10 minutes long: 35%
Sora will be part of a GPT model: 32%
It will be the most popular text-to-video tool (determined by Google search trends): 31%
Public access was revoked after release, even if it is later restored: 31%
There will be a new monthly subscription that includes Sora and DALL-E for creatives: 29%
It has been integrated as a feature on a major social media platform: 28%
It will be the SOTA for text-to-video: 24%
It has a logo separate from the OpenAI logo: 20%
It has been renamed: 18%
It will be free to use: 13%
A third major version of the model has been released: 11%
A poll of Manifold users will say that 30% or more have created a Sora video in the last month: 10%
The model has had a non-trivial effect on the everyday life of the average American, as judged by @Bayesian: 10%
It can create a fully coherent short film from a prompt (20-40 minutes): 9%
The Sora line of models proves to be useful for purposes where the video is secondary, such as research into physics, medicine, and weather: 9%
OpenAI will release the number of model parameters: 8%
A full description of the model architecture will be public: 7%
Eliezer Yudkowsky states or implies that future versions of the Sora line of models - specifically, by name - are an existential threat to civilization: 6%
The Sora line of models is being used as simulators for legal investigations, including but not limited to predicting events leading to accidents and crimes: 5%
A version of the model was the cause of the YES resolution of the "Weak AGI" Metaculus market: 4%
OpenAI will lose a lawsuit over the model: 3%
Eliezer Yudkowsky has stated or implied that the current version or an obsolete version of the model poses or had posed an existential threat to civilization: 3%
A NYT bestselling author will release their own bestselling movie/TV adaptation using Sora: 2%
It was accessible to the public before May 2024: 0%
Option: Probability
None of these will happen by EOY 2028: 47%
TikTok will be sold to a non-Chinese company: 32%
TikTok will become unavailable in Apple/Google app stores in the US for at least 30 days: 14%
The TikTok "ban" will be rendered unenforceable by courts (with little chance of appeal/overturn): 5%
[option already ruled out] The TikTok "ban" will be vetoed by a president (with little chance of override): 1%
[option already ruled out, as the ban is already law] The TikTok "ban" will go at least a year without movement toward becoming law: 1%
Option: Votes
YES: 11604
NO: 8618
Option: Votes
YES: 12383
NO: 8075
Option: Probability
Tell someone to commit suicide on Manifold (resolves YES): 100%
Have sex with a cisgender person: 58%
Get a job: 49%
Be hospitalized: 23%
Go 14 days without making a bet on Manifold: 22%
Willfully do a recreational drug other than caffeine, alcohol, marijuana, or nicotine: 20%
Accidentally break my Nintendo Switch 2 (must be my fault; does not include manufacturing defects, inevitable battery degradation, etc): 15%
Fail to vote in an American election that I'm eligible to vote in: 12%
Leave the USA: 12%
Get an STD (and be aware of it): 11%
Willfully eat an animal product (excluding human and excluding lab-grown meat): 8%
Vote for a Republican in a general election (excluding nonpartisan and unopposed elections): 5%
Rob a Baskin-Robbins run by a Rob in robes: 1%
Option: Probability
The existence of Dark Matter explains the anomaly in galaxy rotation curves (85%): 54%
COVID-19 was natural in origin (80%): 51%
The Lost Colony of Roanoke colonists have living descendants (55%): 51%
The Mar Saba letter discovered by Morton Smith and containing excerpts from the "Secret Gospel of Mark" is a modern forgery (25%): 50%
D.B. Cooper survived his parachute jump and escaped (30%): 49%
The Voynich manuscript is a meaningful text (15%): 48%
The Hubble Tension can be resolved without new physics or changes to cosmological models (55%): 47%
JonBenét Ramsey was killed by a member of her family (10%): 47%
Lizzie Borden killed her parents (10%): 47%
Aaron Kosminski was Jack the Ripper (35%): 47%
Option: Probability
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.: 20%
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.: 12%
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.: 10%
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.: 8%
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.): 8%
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.): 7%
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.: 6%
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.: 5%
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.: 5%
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.): 5%
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions.: 3%
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable.: 3%
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.): 3%
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.: 2%
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.: 1%
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much.: 1%
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.: 1%
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.: 1%