Less doom, please.
P(doom) has become shorthand for the idea that you can assign a probability to the claim that AI will destroy us all. However, it’s rarely clear what someone’s time horizon is, or how they arrived at their number. It clearly isn’t a frequentist estimate: we have not run the future forward many times (and cannot) to count the fraction of runs in which AI destroys the world. It also isn’t clearly a Bayesian estimate. In Bayesian inference, you can estimate the probability of some hypothesis H given some evidence E as:
\[p(H \vert E) = \frac{p(E \vert H)p(H)}{p(E)}\]

Now, if you have two competing hypotheses, H1 being “AI doom” and H0 being “no AI doom,” you have to weigh each prior p(H) reasonably. That is, p(H0) should be very large and p(H1) should be very small. You can ignore p(E) because it’s the same for H0 and H1.
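One way to make the comparison explicit is the odds form of Bayes’ rule, in which p(E) cancels out:

\[\frac{p(H_1 \vert E)}{p(H_0 \vert E)} = \frac{p(E \vert H_1)}{p(E \vert H_0)} \cdot \frac{p(H_1)}{p(H_0)}\]

The posterior odds of doom are the likelihood ratio of the evidence times the prior odds, so a tiny prior ratio keeps the posterior small unless the evidence is vastly more likely under doom than under no doom.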
If someone sets their p(doom) very high, one of two things is true:
- they are zooming way out: humans will go extinct at some point, and yeah, AI could be the driver, but that could be thousands of years away (or more)
- they are overvaluing the prior p(H1) such that it is not in line with reasonable probability estimates for mass extinction events (which should be super low); the sketch below shows why that matters
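To see how heavily the prior weighs on the result, here is a minimal sketch with entirely made-up numbers; the prior and the likelihood ratio below are illustrative placeholders, not estimates of anything:

```python
# Toy Bayes update for H1 ("AI doom") vs. H0 ("no AI doom").
# Every number here is an illustrative placeholder, not a real estimate.

prior_doom = 1e-6                 # p(H1): deliberately tiny, as the post argues it should be
prior_no_doom = 1.0 - prior_doom  # p(H0)

# Suppose the evidence E (say, rapid AI progress) is judged 100x more likely
# under the doom hypothesis than under no doom.
likelihood_ratio = 100.0          # p(E | H1) / p(E | H0)

posterior_odds = likelihood_ratio * (prior_doom / prior_no_doom)
posterior_doom = posterior_odds / (1.0 + posterior_odds)

print(f"p(doom | E) = {posterior_doom:.6f}")  # about 0.0001
```

Even with evidence judged a hundred times more likely under doom, the posterior stays around one in ten thousand. Getting anywhere near the headline p(doom) figures requires starting from a much larger prior, which is exactly the overvaluation described above.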