Superintelligence: Paths, Dangers, Strategies

Category: Computer Science
Author: Nick Bostrom


by fossuser   2021-01-25
AGI = Artificial General Intelligence. Watch this for the main idea behind the goal alignment problem:

- Human Compatible:

- Life 3.0: Covers a lot of the same ideas, but from the opposite stylistic extreme to Superintelligence, which makes it more accessible:

Blog Posts:




The reason these groups overlap so much with AGI is that Eliezer Yudkowsky started LessWrong and founded MIRI (the Machine Intelligence Research Institute). He also formalized much of the thinking around the goal alignment problem and the existential risk of discovering how to create a self-improving AGI before figuring out how to align it with human goals.

Probably the most famous example of why this is hard is the paperclip maximizer:
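The paperclip maximizer can be sketched in a few lines. This is a toy illustration of my own (the action names and numbers are invented, not from Bostrom): the agent's objective counts only paperclips, so it rationally prefers an action that is catastrophic by any human measure, because nothing humans value appears in its reward.

```python
# Toy sketch (hypothetical values, not from the book) of a mis-specified
# objective: the reward counts only paperclips, so human welfare never
# influences the agent's choice.

ACTIONS = {
    # action: (paperclips produced, change in human welfare)
    "run_factory_normally":     (10,   0),
    "strip_mine_the_biosphere": (1000, -100),
    "shut_down":                (0,    0),
}

def best_action(actions):
    # The agent maximizes paperclips alone; the welfare term is ignored.
    return max(actions, key=lambda a: actions[a][0])

print(best_action(ACTIONS))  # strip_mine_the_biosphere
```

The fix is not to make the agent "smarter" — a smarter optimizer picks the catastrophic action even more reliably. The alignment problem is getting the human-welfare term into the objective in the first place.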

by SUOfficial   2019-07-21

This is SO important. We should be doing this faster than China.

One path to superintelligence is biological rather than digital: selective breeding and gene editing. Selecting embryos for genetic markers of intelligence could lead to rapid advances in human intelligence. In 'Superintelligence: Paths, Dangers, Strategies', the most recent book by Oxford professor Nick Bostrom, as well as in his paper 'Embryo Selection for Cognitive Enhancement', the case is made that even simple embryo selection for certain genetic attributes (or, going further, breeding for them) could yield a staggering payoff in raw intelligence.
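The statistical effect behind embryo selection can be illustrated with a short Monte Carlo sketch. This is my own toy model, not the numbers from Bostrom's paper: assume a normally distributed trait and a noisy polygenic predictor correlated with it at some assumed strength `predictor_r`, then pick the best-scoring of n embryos. The expected gain grows with n, but with diminishing returns.

```python
# Illustrative Monte Carlo sketch (my assumptions, not Bostrom's figures):
# selecting the highest-scoring of n embryos on a noisy predictor of a
# standard-normal trait. Gain is reported in standard-deviation units.
import random
import statistics

def selection_gain(n_embryos: int, predictor_r: float,
                   trials: int = 20000, seed: int = 0) -> float:
    """Mean trait advantage (in SDs) from picking the embryo with the
    highest predictor score; predictor_r is the assumed correlation
    between the predictor and the true trait."""
    rng = random.Random(seed)
    gains = []
    for _ in range(trials):
        best_score, best_trait = float("-inf"), 0.0
        for _ in range(n_embryos):
            trait = rng.gauss(0, 1)
            # score = r*trait + noise, so corr(score, trait) = r
            score = (predictor_r * trait
                     + (1 - predictor_r**2) ** 0.5 * rng.gauss(0, 1))
            if score > best_score:
                best_score, best_trait = score, trait
        gains.append(best_trait)
    return statistics.mean(gains)

# Gain rises with the number of embryos, but sublinearly:
for n in (1, 2, 10):
    print(n, round(selection_gain(n, predictor_r=0.5), 2))
```

With one embryo there is nothing to select, so the gain is ~0; doubling from 2 to 10 embryos adds much less than the first doubling, which is why the paper's case turns on iterating the procedure rather than screening ever-larger batches.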

by Scrybblyr   2019-07-21

Monster is a relative term. But if we manage to create an AI that figures out how to make itself more intelligent, and it ends up thousands of times more intelligent than humans (which could theoretically happen in a nanosecond), our survival would be wholly contingent on the decisions of that AI.

by Danihan   2017-08-19
You're not the only one who finds it scary; there are massively popular books on the topic.