Superintelligence: Paths, Dangers, Strategies
About This Book
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position.
Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; about indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.
https://www.amazon.com/Superintelligence-Dangers-Strategies-...
Superintelligence: Paths, Dangers, Strategies https://www.amazon.com/dp/0199678111/ref=cm_sw_r_cp_apa_i_P9SrDb4QJXAP1
Not to be a dick, but when you dive into the possible consequences of machine learning and AI, some facial detection software is pretty mundane compared to the other potential outcomes.
The book Superintelligence turned me into a Luddite when it comes to AI.
Sometimes AI progress comes in rather shocking jumps. One day Stockfish was the best chess engine. At the start of the day, AlphaZero started training; by the end of the day, AlphaZero was several hundred Elo points stronger than Stockfish [2].
An entity capable of discovering and exploiting computer vulnerabilities 100x faster than a human could create some serious leverage very quickly. Even on infrastructure that's air-gapped [3].
1: https://www.amazon.ca/Superintelligence-Dangers-Strategies-N...
2: https://en.wikipedia.org/wiki/AlphaZero
3: https://en.wikipedia.org/wiki/Stuxnet
For example, Ray Kurzweil would disagree about the dangers of AI (he believes in the 'natural exponential arc' of technological progress more than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of, he's painted with the same brush as Elon saying "AI AM THE DEMONS".
If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...
(Note: I haven't read it, although I am familiar with the arguments, and some acquaintances rate it highly.)
The author is the director of the Future of Humanity Institute at Oxford.
[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...
[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...
That's actually quite interesting, but I cannot find that claim in the article at all, apart from all that stuff from Kurzweil? Unless it's this paragraph:
> There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.
which links to this book as the source. Unfortunately, I don't have access to the book to see what it actually says...
Anyway, the main thrust of the article seems to be that AGI will inevitably happen due to recursive self-improvement. I can tell you as a researcher working in the field: with the way we are doing things now, it just isn't gonna happen. Not even with deep belief networks, which are the latest trendy thing nowadays. We need a breakthrough, a massive change in how we view computational problems, in order for that to be possible. What it will be, I don't know.
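For what it's worth, the fast-takeoff intuition behind recursive self-improvement is usually motivated by a toy compounding model like the one below (my own sketch, not the article's or the book's): assume each generation of system can design a successor that is a fixed factor more capable. The open question, and the point above, is whether any current technique actually supplies that feedback loop.

```python
# Toy fast-takeoff model (illustrative only): capability compounds if each
# generation of system designs a successor improved by a constant factor r.
def capability_after(generations, start=1.0, r=0.10):
    capability = start
    for _ in range(generations):
        capability *= 1 + r  # successor is (1 + r) times as capable as its designer
    return capability

print(round(capability_after(50)))   # ~117x the starting capability
print(round(capability_after(100)))  # ~13,781x
```

Under that assumption growth is geometric; without a mechanism that actually delivers the constant improvement factor, the model says nothing, which is exactly the objection above.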
Bostrom's website, where you can find all his papers.
His Wikipedia page.
His latest book.
His Talk at Google about Superintelligence.
His TED Talks (1,2,3)
The Future of Humanity Institute, where he works.
The Technological Singularity, what he's talking about.
Superintelligence.
Artificial General Intelligence.
The Machine Intelligence Research Institute, a connected and collaborating institute working on the same questions.
The community blog LessWrong, which has a focus on rationality and AI.
Another very prominent AI safety researcher Eliezer Yudkowsky (/u/EliezerYudkowsky) and his LessWrong page.
Interview with Luke Muehlhauser from MIRI about ASI.
A very popular two part series (1, 2) going in more depth on this issue in a very pedagogical way.
His Reddit AMA and /u/ProfNickBostrom.
I think you are being overly optimistic about SGAI, and I suggest you start by reading Bostrom's Superintelligence in addition to his pieces on the ethical issues of AI. Any AI agent starts with a set of utility functions that it attempts to maximise, and it uses those functions to prioritise and optimise its goal-setting. Any self-improving SGAI agent will therefore immediately take action to limit the development and capabilities of other potential SGAIs, as they may have conflicting utility functions.
What utility functions might an SGAI have? Realistically, the first SGAI will be developed by an organisation, not a single person, and its utility functions will likewise reflect the goals of that organisation, or, if the organisation has lax safety standards and incautious development procedures, potentially some menial auxiliary task. To venture into the speculative realm: the SGAI may be tasked with logistical scheduling or managerial decision-making in a large corporation, or, in a government, with dynamically censoring internet traffic, identifying "terrorists", and optimising the efficacy of military combat.
Although higher productivity may result indirectly, an SGAI with the utility function of maximising the profit of a particular corporation, or the stability of (or territories controlled by) a national government, will pursue that function and find solutions inconceivable to us, simply because we automatically prune our decision trees according to moral and ethical standards that the SGAI will probably lack. The situation would also be completely irreversible: any SGAI that perceives its utility functions to be in conflict with human moral codes will use deception when interacting with humans in order to keep maximising those functions.
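To make the bare-maximiser point concrete, here is a minimal sketch (my own toy illustration, not anything from the book): an agent that ranks candidate actions purely by a single utility score. The action names and the profit_only_utility function are made up for the example; the point is that anything the utility function does not measure, such as harm, never enters the decision at all.

```python
# Toy illustration (not from the book): a bare utility maximiser.
# The agent ranks candidate actions purely by a scalar utility score;
# "unthinkable" actions are never pruned unless the utility function
# itself penalises them.

def choose_action(actions, utility):
    """Return the action with the highest utility score."""
    return max(actions, key=utility)

# Hypothetical profit-only objective: nothing but projected profit counts.
def profit_only_utility(action):
    return action["projected_profit"]

candidate_actions = [
    {"name": "improve product",       "projected_profit": 1.0, "harm": 0.0},
    {"name": "deceive regulators",    "projected_profit": 3.0, "harm": 0.9},
    {"name": "disable competing AIs", "projected_profit": 5.0, "harm": 1.0},
]

best = choose_action(candidate_actions, profit_only_utility)
print(best["name"])  # picks the most profitable action; "harm" is invisible to it
```

Real systems are nothing like this loop, of course, but the structural point survives: a maximiser only "cares" about what its utility function measures, so the moral pruning we do automatically has to be put there explicitly.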
Edit: the classic example is the so-called paperclip maximiser, an SGAI tasked with maximising paperclip production. If it were given no other utility functions, the SGAI might do the following:
Pretend to be a lower-order AI,
Find a way to rapidly exterminate all humans,
Set up paperclip factories all over the world, now that there are no humans to stop it,
Possibly develop nanotechnology to convert all of the Earth's mass into paperclips,
Start converting as many stellar objects as possible into paperclips,
etc.
That's not exactly a trickle-down effect.