The Basilisk

Roko's Basilisk rests on a stack of several other not-at-all-robust propositions.
The core claim is that a hypothetical, but inevitable, singular ultimate superintelligence may punish those who fail to help it or help create it.
Why would it do this? Because, the theory goes, one of its objectives would be to prevent existential risk, and it could do that most effectively not merely by preventing existential risk in its present, but also by "reaching back" into its past to punish people who weren't MIRI-style effective altruists.
Thus this is not necessarily a straightforward "serve the AI or you will go to hell" — the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, to do this accurately would require it to be able to gather an incredible amount of data, which would no longer exist, and could not be reconstructed without reversing entropy.
Technically, the punishment is only theorised to be applied to those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk — e.g., reading this article — opens you up to hypothetical punishment from the hypothetical superintelligence.
Note that the AI in this setting is (in the utilitarian logic of this theory) not a malicious or evil superintelligence (AM, HAL, SHODAN, Ultron, the Master Control Program, SkyNet, GLaDOS) — but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing you or your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.
Quite a lot of this article will make more sense if you mentally replace the words "artificial intelligence" with the word "God", and "acausal trade" with "prayer".


What do you think about this thought experiment? I find it frighteningly similar to God's reasoning about believers and non-believers.
Even religion seems sensible compared to this nonsense.