This paper navigates artificial intelligence's recent advancements and the growing media attention surrounding them. Particular focus is placed on Eliezer Yudkowsky, a leading figure in the field of artificial intelligence alignment, who aims to bridge the gap in understanding between public perceptions and rationalist viewpoints on artificial intelligence technology. The paper analyzes the course he predicts artificial intelligence will take, as outlined in his unpublished paper AGI Ruin: A List of Lethalities. This is done by first examining the concept of intelligence itself and identifying a reasonable working definition of it. That definition is then applied to contemporary artificial intelligence capabilities and developments to assess how well it describes these technologies. The paper finds that contemporary artificial intelligence systems are, to some extent, intelligent. However, it argues that neither weak nor strong artificial intelligence systems, devoid of human-defined goals, would inherently pose an existential threat to humanity, challenging core notions of artificial intelligence alignment and calling into question the validity of Nick Bostrom's Orthogonality Thesis. Finally, the paper discusses the possibility of artificial life created by assembling modules that each emulate a separate function of the mind.