Superintelligence: Paths, Dangers, Strategies

Category: Computer Science
Author: Nick Bostrom
4.0
All Hacker News 15
This Month Reddit 2

About This Book

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position.

Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

Comments

by Bud   2022-06-13
Pretty decent article in some ways, but the book Superintelligence covered all this ground in much more detail in 2014.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

by edhdz1   2021-12-10

Superintelligence: Paths, Dangers, Strategies https://www.amazon.com/dp/0199678111/ref=cm_sw_r_cp_apa_i_P9SrDb4QJXAP1

by [deleted]   2021-12-10

Not to be a dick, but when you dive into the possible consequences of machine learning & AI, some facial detection software is pretty mundane when compared to other possible outcomes.

The book Superintelligence turned me into a luddite in terms of AI.

by Strilanc   2021-01-02
Both your arguments so far are standard ones addressed in "Superintelligence: Paths, Dangers, Strategies" [1].

Sometimes AI progress comes in rather shocking jumps. One day Stockfish was the best chess engine. At the start of the day, AlphaZero started training. By the end of the day, AlphaZero was several hundred Elo stronger than Stockfish [2].
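
For a sense of what an Elo gap like that means in practice, here is a quick sketch using the standard Elo expected-score formula (the specific gaps below are illustrative, not figures from the match report):

    # Expected score (win = 1, draw = 0.5) for the stronger player,
    # given its Elo advantage, using the standard Elo formula.
    def expected_score(elo_diff: float) -> float:
        return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

    for diff in (100, 200, 300):
        print(f"+{diff} Elo -> expected score {expected_score(diff):.2f}")
    # +100 Elo -> 0.64, +200 Elo -> 0.76, +300 Elo -> 0.85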

An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly. Even on infrastructure that's air gapped [3].

1: https://www.amazon.ca/Superintelligence-Dangers-Strategies-N...

2: https://en.wikipedia.org/wiki/AlphaZero

3: https://en.wikipedia.org/wiki/Stuxnet

by Micaiah_Chang   2017-08-19
Maybe being shocked means that the person talking about the subject is misrepresenting it, because they themselves don't understand the arguments and are inadvertently projecting.

For example, Ray Kurzweil would disagree about the dangers of AI (he believes more in the 'natural exponential arc' of technological progress than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of he's painted with the same brush as Elon saying "AI AM THE DEMONS".

If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...

(Note I haven't read it, although I am familiar with the arguments and some acquaintances tend to rate it highly)

by ryan_j_naughton   2017-08-19
This book by Nick Bostrom will help you find answers: Superintelligence: Paths, Dangers, Strategies[1]

The author is the director of the Future of Humanity Institute at Oxford.

[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...

by SonicSoul   2017-08-19
I recommend Superintelligence [0]. It explores different plausible paths that AI could take to (1) reach or surpass human intelligence and (2) take over control. For example, if human-level intelligence is achieved in a computer, it can be compounded by spawning copies numbering 100x or 1000x Earth's population, which could statistically produce 100 Einsteins living simultaneously. Another way is shared consciousness, which would make collaboration between virtual beings instantaneous. Some of the outcomes are not so rosy for humans, and it's not due to lack of jobs! Great read.
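
To make that scaling argument concrete, here is a back-of-the-envelope sketch; the rarity of Einstein-level talent is an invented figure, purely for illustration, not something from the book:

    # Back-of-the-envelope: more minds -> more statistical outliers.
    # The rarity figure below is invented for illustration only.
    EARTH_POPULATION = 8e9
    EINSTEIN_RARITY = 1e-9  # assume roughly one such mind per billion people

    for multiplier in (1, 100, 1000):
        minds = EARTH_POPULATION * multiplier
        print(f"{multiplier:>4}x Earth's population -> "
              f"~{minds * EINSTEIN_RARITY:,.0f} Einstein-level minds")
    # 1x -> ~8, 100x -> ~800, 1000x -> ~8,000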

[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...

by ginger_beer_m   2017-08-19

That's actually quite interesting, but I cannot find that claim in the article at all, apart from all that stuff from Kurzweil. Unless it's this paragraph:

> There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.

which links to this book as the source. Unfortunately I don't have access to the book to see what it actually says...

Anyway, the main thrust of the article seems to be that AGI will inevitably happen due to recursive self-improvement. I can tell you as a researcher working in the field: with the way we are doing things now, it just isn't gonna happen. Not even with deep belief networks, which are the latest trendy thing nowadays. We need a breakthrough, a massive change in how we view computational problems, in order for that to be possible. What it will be, I don't know.
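
To be clear about the loop the article has in mind, here is a toy numerical sketch; every parameter is invented, and it illustrates the shape of the argument rather than any forecast:

    # Toy model of recursive self-improvement: each generation's capability
    # determines how much it can improve the next. All numbers are invented.
    capability = 1.0   # arbitrary units; 1.0 = "human-level"
    rate = 0.05        # fraction of capability^2 gained per step

    for step in range(1, 31):
        capability += rate * capability ** 2  # super-linear feedback
        print(f"step {step:2d}: capability {capability:10.2f}")
    # The curve creeps along for roughly twenty steps, then blows up within
    # a handful more -- the "fast takeoff" intuition in miniature.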

by jonathansalter   2017-08-19
  • Bostrom's website, where you can find all his papers.

  • His Wikipedia page.

  • His latest book.

  • His Talk at Google about Superintelligence.

  • His TED Talks (1, 2, 3).

  • The Future of Humanity Institute, where he works.

  • The Technological Singularity, what he's talking about.

  • Superintelligence.

  • Artificial General Intelligence.

  • The Machine Intelligence Research Institute, a connected and collaborating institute working on the same questions.

  • The community blog LessWrong, which has a focus on rationality and AI.

  • Another very prominent AI safety researcher, Eliezer Yudkowsky (/u/EliezerYudkowsky), and his LessWrong page.

  • Interview with Luke Muehlhauser from MIRI about ASI.

  • A very popular two part series (1, 2) going in more depth on this issue in a very pedagogical way.

  • His Reddit AMA and /u/ProfNickBostrom.

by xplkqlkcassia   2017-08-19

I think you are being overly optimistic about SGAI, and I suggest you start by reading Bostrom's Superintelligence in addition to his pieces on the ethical issues of AI. Any AI agent starts with a set of utility functions it attempts to maximise, which govern how it prioritises and optimises its goal-setting tasks. Any self-improving SGAI agent will immediately take action to limit the development and capabilities of other potential SGAIs, as they may have conflicting utility functions.

What utility functions might an SGAI have? Realistically, the first SGAI will be developed by an organisation, not a single person, and its utility functions will likewise reflect the goals of that organisation, or potentially some menial auxiliary task - if the organisation has lax safety standards and incautious development procedures. To go into the speculative realm, the SGAI may be tasked with logistical scheduling or managerial decision-making in a large corporation, or in a government, dynamically censoring internet traffic, identifying "terrorists", and optimising the efficacy of military combat.

Although higher productivity may result indirectly, an SGAI with the utility function of maximising the profit of a particular corporation, or maximising the stability of (or territories controlled by) a national government, will pursue those functions and find solutions inconceivable to us, simply because we automatically prune our decision trees using moral and ethical standards that the SGAI will probably lack. The situation would also be completely irreversible: any SGAI that perceives its utility functions to be in conflict with human moral codes will use deception when interacting with humans in order to keep maximising them.
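
A minimal sketch of that pruning difference, with invented actions, utilities, and harm labels:

    # The same utility maximiser, with and without the "moral pruning"
    # step humans apply automatically. All entries are invented.
    actions = [
        {"name": "optimise the factory schedule", "utility": 10, "harmful": False},
        {"name": "lobby to weaken safety rules",  "utility": 50, "harmful": True},
        {"name": "deceive the auditors",          "utility": 80, "harmful": True},
    ]

    def best_action(actions, prune_harmful):
        candidates = [a for a in actions if not (prune_harmful and a["harmful"])]
        return max(candidates, key=lambda a: a["utility"])

    print(best_action(actions, prune_harmful=True)["name"])   # optimise the factory schedule
    print(best_action(actions, prune_harmful=False)["name"])  # deceive the auditors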


Edit: the classic example is a so-called paperclip maximiser, an SGAI tasked with maximising paperclip production. If it were not given any other utility functions, it might do the following:

  1. Pretend to be a lower-order AI,

  2. Find a way to rapidly exterminate all humans,

  3. Set up paperclip factories all over the world, now that there are no humans to stop it,

  4. Possibly develop nanotechnology to convert all of the Earth's mass into paperclips,

  5. Start converting as many stellar objects as possible into paperclips,

  6. etc.

That's not exactly a trickle-down effect.