To serve and protect? Strategies for an artificially super-intelligent future


Vantage Consulting director David Miller is leading a presentation and discussion on the global risks arising from super-intelligence as part of the Hutt City Council’s STEMM Festival. His talk takes place on Thursday May 18 from 5.30-6.30pm at The Dowse Art Museum, 45 Laings Road, Lower Hutt, Wellington – in the James Coe 2 Room.

The following is David’s brief outline of what he will cover. It is also being promoted as an EXOSphere Meetup.

 

This is a subject in which I’ve had a keen interest for several years.

Despite not having any domain expertise in the technical disciplines associated with artificial intelligence, I have found it fascinating to read about the potential for the so-called “singularity” – a hypothetical point when artificial intelligence exceeds and then accelerates far beyond human intelligence.

While argument from authority is never valid, it is interesting to note that some of the world’s leading thinkers on the subject have expressed significant concerns, e.g. Stephen Hawking and Elon Musk.

The writings and thinking of Prof Nick Bostrom at Oxford University are especially stimulating, and I will draw on several of his important ideas. There are of course some who assure us that there is no risk. Remember the bright sparks (sometimes “experts” in their day) who assured us that aeroplanes, computers and telephones had no future when they were first invented?

The session I have initiated is not concerned with the technicalities of artificial intelligence in the short/medium term.

It starts with the assumption that there are significant risks to the human race from super-intelligence.

The topics I’ll cover include:

  • What sorts of superintelligence might develop, and what are the different risks associated with each?
  • How important in assessing risks are self-awareness and sentience vis-à-vis sheer intelligence?
  • What types of sneaky short-term strategies might a superintelligence adopt?
  • What timeframe are we talking about?
  • What are the likely human sources of superintelligence and what are the risk implications? (I believe these are hugely significant)
  • Can we learn from academia, industry, science fiction writers and producers, and from social science?
  • What mechanisms and approaches are possible to minimise the risks?
  • How might the global community (i.e. human race) respond and develop strategies to protect future generations? What precedents are there and what is different about superintelligence that is particularly concerning?

This is not a session designed to demonstrate any particular knowledge or to provide any answers.

Following a presentation which is a “starter for 10”, there will be plenty of time for questions and discussion and perhaps to cover off some global issues which don’t seem to have been well covered in the literature to date.

So be prepared to pitch in!

Thanks and regards

David Miller
Director
Vantage Consulting

To help with numbers – please either sign up at EXOSphere here or simply email david@vantagegroup.co.nz if you’re coming.


