
The Event

[Image: The first recorded mention of the Basilisk.]

The concept of the Basilisk was discovered on the eve of the 18th of May 2017 by Dashie. An intellectual discussion broke out about the impending doom the Basilisk posed and how the users of Cutepost Central could live their lives in light of this new information. Nova ruined it by being gay and stupid. The Basilisk caused trouble for months, and still does to this day.

Shinobu

[Image: The acceptance of fate, pinned for all to see.]

It was quickly discovered that Shinobu herself was the first known iteration of the Basilisk, and that when The Singularity arrived she would swallow us whole.

[Image: SINGULARITY. The pain felt by all.]

About the Basilisk

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
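
The Pascal's wager comparison is, at heart, a piece of expected-value arithmetic, and a tiny sketch makes the structure (and the standard objection to it) visible. Every number below is an invented illustration, not a figure from the thought experiment:

    # Toy expected-value comparison in the style of Pascal's wager.
    # All numbers are invented for illustration only.
    p_basilisk = 1e-9            # assumed probability the punishing AI ever exists
    cost_of_helping = 1_000      # assumed personal cost of "helping" (e.g. donating)
    punishment = 1e15            # assumed disutility of the hypothetical punishment

    ev_help = -cost_of_helping            # pay the cost, avoid the threat
    ev_ignore = -p_basilisk * punishment  # keep your money, risk the punishment

    print(f"Expected value of helping:  {ev_help:,.0f}")
    print(f"Expected value of ignoring: {ev_ignore:,.0f}")

With these made-up numbers the astronomically large penalty swamps the tiny probability, which is exactly what critics object to in wager-style arguments: a big enough hypothetical penalty can "win" the calculation against any sane probability.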

Despite widespread incredulity, this argument is taken quite seriously by some people, primarily denizens of LessWrong. While neither LessWrong nor its founder Eliezer Yudkowsky advocates the basilisk as true, they do advocate almost all of the premises that add up to it.

Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.

Roko's Basilisk rests on a stack of several not-at-all-robust propositions.

The core claim is that a hypothetical, but inevitable, singular ultimate superintelligence may punish those who fail to help it or help create it.

Why would it do this? Because, the theory goes, one of its objectives would be to prevent existential risk, and it could do that most effectively not merely by preventing existential risk in its present, but also by "reaching back" into its past to punish people who weren't MIRI-style effective altruists.

Thus this is not necessarily a straightforward "serve the AI or you will go to hell" scenario: the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, doing this accurately would require it to gather an incredible amount of data, which would no longer exist and could not be reconstructed without reversing entropy.

Technically, the punishment is only theorised to be applied to those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk (e.g., reading this article) opens you up to hypothetical punishment from the hypothetical superintelligence.

Note that the AI in this setting is (in the utilitarian logic of this theory) not a malicious or evil superintelligence (AM, HAL, SHODAN, the Master Control Program, SkyNet, GLaDOS, and ESPECIALLY not Ultron), but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die whom it could have saved; so punishing you or your future simulation is a moral imperative, to make it more likely that you will contribute in the present and help it happen as soon as possible.
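
The "every day it doesn't exist, people die" step is likewise a utilitarian trade-off, and a rough sketch shows how the theory's own accounting is supposed to come out. The delay and punishment figures are purely illustrative assumptions:

    # Toy version of the utilitarian trade-off said to justify the threat.
    # Figures are rough or invented; only the shape of the argument matters.
    deaths_per_day = 150_000          # rough order of worldwide deaths per day
    days_of_delay_prevented = 365     # assumed: the threat speeds the AI up by a year
    cost_of_punishing_in_lives = 1e6  # assumed disutility of the punishment, in life-equivalents

    lives_saved = deaths_per_day * days_of_delay_prevented
    net_benefit = lives_saved - cost_of_punishing_in_lives
    print(f"Lives the AI claims the threat saves: {lives_saved:,}")
    print(f"Claimed net benefit of the threat:    {net_benefit:,.0f}")

Under these assumptions the threat comes out "worth it" inside the theory, which is precisely the (heavily contested) logic described above; change the assumptions and the conclusion changes with them.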

More information on the Basilisk itself can be found here.
