File 133063269714.jpg - (114.00KB , 327x499 , VernorVinge_RainbowsEnd.jpg )
No. 398
Anybody read this?

I'm no singularitarian, but if the future isn't a soft apocalypse, it will be either like this book or like River of Gods. I heartily recommend it, especially now that I have a mobile phone and a better grasp of this augmented reality stuff.
>> No. 401
Why no singularitarian, OP? It seems obvious that powerful AI is coming soon, and that a powerful AI will rapidly self-improve and resource-grab its way to godhood. It certainly won't care about stepping on our toes (or melting us down for raw materials) in the course of doing what it does.

It really doesn't look good for us, unless we solve a whole lot of hard engineering, philosophy, ethics, and mathematics problems *before* anyone builds a real AI. And if we *do* solve those problems, it looks *very* good for us. Google "friendly ai".

Thanks for the recommendation, I'll check it out.
>> No. 403
Because we don't know what "intelligence" is yet. If the singularity comes from anything, it won't be AI; it will come from human minds augmented to have more "storage space", so to speak.
>> No. 404
>>403

You are right that the singularity could be caused by uploads. I am slightly skeptical of that because of how fragile the human brain is.
See this: http://lesswrong.com/lw/xd/growing_up_is_hard/

Also, an upload singularity would have an extremely high probability of ending up with a slightly insane, superpowerful "human" dictator, so we really ought to do something to prevent that or make it very safe.

Despite philosophers' confusion about "intelligence", a singularity could be driven by any *powerful optimisation process* capable of improving itself. Some people (including me) take "intelligence" to mean "general cross-domain optimization ability", but you don't need to call such a process "intelligent" to see that it could very well take over the galaxy and optimise it for whatever utility function it had. See this: http://wiki.lesswrong.com/wiki/Lawful_intelligence
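
To make "powerful optimisation process" concrete, here's a toy sketch in Python. Everything in it (the function names, the magic number 42, the step-tuning trick) is made up purely for illustration, and it's nothing remotely like a real seed AI. The point is just that even a dumb loop steers its state toward whatever its utility function scores highly, and crudely adjusts its own search procedure along the way, with no "intelligence" in sight.

[code]
import random

# Toy "optimisation process": a loop that keeps whatever moves its
# state toward higher utility. No intelligence, just selection.

def utility(state):
    # Made-up utility function: this process "wants" the state near 42.
    return -abs(state - 42.0)

def optimise(state=0.0, step=10.0, rounds=10000):
    for _ in range(rounds):
        candidate = state + random.uniform(-step, step)
        if utility(candidate) > utility(state):
            state = candidate
            step *= 1.05  # search is paying off: take bolder steps
        else:
            step *= 0.99  # search is failing: take finer steps
    return state

print(optimise())  # ends up very close to 42, whatever 42 "means"
[/code]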

The reason I bring up AI is that a *friendly* AI seems to be the *only* way to make the singularity at all acceptable. All other sources of intelligence explosion produce either something entirely meaningless (a universe tiled with paperclips, and no people); something that sucks, like a superpowerful dictator keeping everyone else as pets; an arms race between intelligent beings to conquer the galaxy at the expense of achieving anything of value; or something stupid, like the universe tiled with computers simulating minimal "beings" experiencing "pleasure" or "art" or whatever the philosophers think morality reduces to these days.

