
“Here Be Dragons: Science, Technology and the Future of Humanity” Conference – Ahrvid Engholm (Sweden)

(Image: the Institute’s façade with statue)

I went to the Institute for Future Studies (Institutet för Framtidsstudier) in Stockholm on February 15th to hear Professor Olle Häggström (Professor of Mathematical Statistics, Chalmers University of Technology, Gothenburg, Sweden) talk about his new book on future threats to mankind.

“The year is 2056, and scientists have just created the first computer with superhuman intelligence. Aware of the risks, the programmers trained it in ethics. The machine functions flawlessly. Aiming to maximise happiness in the universe, and calculating that sentient beings are happy less than half the time, the computer exterminates all sentient life. The balance of happiness increases from negative to zero – only there’s nobody left to enjoy it. Futurologists refer to this sort of misunderstanding as perverse instantiation, and Olle Häggström is concerned about it.”

The title of the book is “Here Be Dragons: Science, Technology and the Future of Humanity” (Oxford University Press, 2016; written in English – no Swedish translation yet).

(The quote above is from New Scientist’s review of the book, headlined “Here Be Dragons: Hostile aliens, tall people and black holes”.)

The lecture hall was packed, and Häggström began by bragging about the favourable reviews in New Scientist and the Financial Times.

(Image: Olle Häggström lecturing)

Häggström talked about many science-fiction subjects, everything from space colonies and nanotechnology to transhumanism and artificial intelligence. But mainly it was about threats to humanity’s future, things that could obliterate us totally. He divided these into external and internal threats.

The external threats were: asteroid or comet impacts, supernovae, our own sun, supervolcanoes, natural pandemics and attacks by aliens. About the last, he said aliens would be unlikely to bother with us unless they saw us as a threat to themselves, in which case they might want to get rid of us with a “pre-emptive strike”. But it’s a very unlikely threat. Despite SETI, the Drake equation and so on, we haven’t yet found the slightest trace of extraterrestrial life.

The internal threats were: nuclear war, global warming, a superintelligent AI or a robot uprising, the “grey goo” of nanorobots running wild, other nanotechnology, and dangerous physics experiments. We know very little about many of these things, and he strongly advised governments and scientists to direct more energy into basic research on the potential dangers. Recently we’ve seen the publication of an open warning letter, signed by 16,000 scientists, about the possible dangers of AI and autonomous military drones.

We don’t know what the results of developments in advanced technology with military applications may be, he said. We were lucky that the first weapon of mass destruction, the atomic bomb, turned out to be easy to contain. Only about ten states have nuclear weapons, because you need huge resources to produce the basic nuclear ingredients. But what if a terrorist group used some DNA technology in someone’s kitchen to produce a super virus that could wipe us all out? Future dangerous technologies may not be containable.

(Image: the debate after the lecture)

After Häggström’s opening notes, there was a heated debate to which Hannes Sjöblad (Swedish ambassador for Singularity University), Karim Jebari (philosopher at the Institute for Future Studies) and Anne-Sophie Crépin (economist at the Beijer Institute and the Stockholm Resilience Centre) were invited. The moderator was Christer Sturmark, chairman of the Humanisterna society and head of the publisher Fri Tanke.

Sjöblad and Häggström seemed to be the two with the most conflicting views, Sjöblad being a transhumanist and an optimist, Häggström more of a pessimist. Sjöblad started the debate by saying that Homo sapiens is one of the circa 30 species of the genus Homo that have existed – the other 29 are extinct. Maybe we should just accept that we will be another step on the evolutionary ladder: we will evolve into something new, and Homo sapiens as such will go extinct. At present, the most valuable substance in the known universe is the grey goo of the human brain, he said, but in the future we may be able to upload the contents of our brains to super-duper computers and live forever in digital form…

Sturmark noted that everything that may be a threat may also have positive applications. DNA tinkering may also produce a vaccine that saves millions of lives.

Häggström said it was impossible to ban AI research. It is estimated to be worth tens of trillions of dollars in the future. But maybe we could redirect such research just a little to make it safer, so that we won’t wake up one day and find that HAL 9000 v2.0 has taken over – and that we are unwanted, inferior beings.

Jebari noted that today’s AI research may be on the wrong track. We try to create AIs “bottom up”, i.e. by trying to teach computers things. That won’t work, he said. We need a “top-down” approach: first understand how thinking works.

Häggström talked about how we have been lucky so far. Earlier discoveries, like how to use fire, have been technologies with limited destructive force if let loose. A fire may destroy a forest or a few houses, but it will go out. We could use “trial and error” to learn to control fire. But with some advanced technologies we get only one chance, and if things go wrong it’s too late.

Sjöblad continued to promote his idea of uploading people to RAM memories (hm, what about computer viruses or electricity blackouts…?) and said it would be a way to conquer space. That way we could travel at the speed of light, as information, bits and bytes, on radio waves through the universe. We don’t need these “meat bags” that are our bodies.

Sturmark said he missed one thing among the threats mentioned: destructive memes. Memes (a term coined by Richard Dawkins) are ideas, ideas so strong that they spread. Häggström agreed that there have been examples of very destructive memes, for instance the ideas behind the Holocaust of World War II. He also went into what he called “synthetic biology”, and warned that it’s a technology that may make the mass-surveillance society necessary. (Hello, Mr Snowden, wherever you are!)

Crépin talked about what she called reversibility and the precautionary principle. We should plan actions so that they can be reversed if things go sour. But things could also go too far in the precautionary direction: some research with beneficial potential becomes difficult to do, for instance concerning stem cells or GMOs.

Häggström offered arguments against Sjöblad’s ideas of uploading human minds to computers. If we all become just computer files, society and all that would collapse. Take the labour market: you would only need to find one employee (= file) with the right qualifications, and then you just copy that file/employee.

The debate went more and more into philosophy. Someone pointed out that what you do in your own sphere also affects the entire society. Say that we can manipulate DNA to make children smarter. That will create pressure on everyone to use those methods on every child. The conclusion is that you can’t decide what you want to do with your own child without “forcing” everyone else to do the same.

The audience was invited to comment, and the comment I remember best came from a man who vehemently protested against human minds being uploaded to computers. That’s a destructive meme, he said, and we would become traitors if we abandoned our genes!

    The debate was a bit disorganised, but interesting and very much science fiction.

(Image: Olle Häggström and Anna Davour at the debate)

My fellow sf fan Anna Davour (on the right in the picture) was there, and when we chatted afterwards we agreed that most of the ideas put forth were Old Stuff in our genre. Mankind would benefit from reading a few more of those books with spaceships and aliens and planets on the covers.

    http://www.iffs.se/en/

“This book challenges the widely held but oversimplified and even dangerous conception that progress in science and technology is our salvation, and the more of it, the better. The future will offer huge changes due to such progress, but it is not certain that all changes will be for the better. The unprecedented rate of technological development that the 20th century witnessed has made our lives today vastly different from those in 1900. No slowdown is in sight, and the 21st century will most likely see even more revolutionary changes than the 20th, due to advances in science, technology and medicine. Areas where extraordinary and perhaps disruptive advances can be expected include biotechnology, nanotechnology and machine intelligence. We may also look forward to various ways to enhance human cognitive and other abilities using pharmaceuticals, genetic engineering or machine–brain interfaces—perhaps to the extent of changing human nature beyond what we currently think of as human, and into a posthuman era. The potential benefits of all these technologies are enormous, but so are the risks, including the possibility of human extinction. The currently dominant attitude towards scientific and technological advances is tantamount to running blindfold and at full speed into a minefield. This book is a passionate plea for doing our best to map the territories ahead of us, and for acting with foresight, so as to maximize our chances of reaping the benefits of the new technologies while avoiding the dangers.” – Oxford University Press

1. Science for good and science for bad
1.1. A horrible discovery
1.2. The ethical dilemma of hiding research findings
1.3. Some real-world examples
1.4. The need for informed research policy
1.5. A hopeless task?
1.6. Preview

    2. Our planet and its biosphere
    2.1. A note to the reader
    2.2. Dramatic changes in past climate
    2.3. Greenhouse warming
    2.4. Milankovitch cycles
    2.5. The role of carbon dioxide
    2.6. The need for action
    2.7. A geoengineering proposal: sulfur in the stratosphere
    2.8. Other forms of geoengineering
    2.9. No miracle solution
    2.10. Searching for solutions further outside the box

    3. Engineering better humans?
    3.1. Human enhancement
    3.2. Human dignity
    3.3. The wisdom of repugnance?
    3.4. Morphological freedom and the risk for arms races
    3.5. Genetic engineering
    3.6. Brain-machine interfaces
    3.7. Longer lives
    3.8. Uploading: philosophical issues
    3.9. Uploading: practical issues
    3.10. Cryonics

    4. Computer revolution
    4.1. Cantor
    4.2. Turing
    4.3. Computer revolution up to now
    4.4. Will robots take our jobs?
    4.5. Intelligence explosion
    4.6. The goals of a superintelligent machine
    4.7. Searle’s objection

    5. Going nano
    5.1. 3D printing
    5.2. Atomically precise manufacturing
    5.3. Nanobots in our bodies
    5.4. Grey goo and other dangers

    6. What is science?
    6.1. Bacon
    6.2. Are all ravens black?
    6.3. Popper
    6.4. A balanced view of Popperian falsificationism
    6.5. Is the study of a future intelligence explosion scientific?
    6.6. Statistical significance
    6.7. Decision makers need probabilities
    6.8. Bayesian statistics
    6.9. Is consistent Bayesianism possible?
    6.10. Science and engineering

    7. The fallacious Doomsday Argument
    7.1. The Doomsday Argument: basic version
    7.2. Why the basic version is wrong
    7.3. Frequentist version
    7.4. Bayesian version

    8. Doomsday nevertheless?
    8.1. Classifying and estimating concrete hazards: some difficulties
    8.2. Risks from nature
    8.3. Risks from human action
    8.4. How badly in trouble are we?

    9. Space colonization and the Fermi Paradox
    9.1. The Fermi Paradox
    9.2. The Great Filter
    9.3. Colonizing the universe
    9.4. Dysonian SETI
    9.5. Shouting at the cosmos

    10. What do we want and what should we do?
    10.1. Facts and values
    10.2. Discounting
    10.3. Existential risk prevention as global priority?
    10.4. I am not advocating Pascal’s Wager
    10.5. What to do?

http://www.math.chalmers.se/~olleh/