[Topics: Artificial Intelligence, Consciousness, Philosophical Zombies, Phenomenology, Pragmatism]
Respect the Machines:

A Pragmatist Argument for the Extension of Human Rights to P-zombies and Artificial Intelligences

 

Artificial intelligence sketch by Alejandro Zorrilal Cruz

Introduction:

In this article, I will argue that pragmatists and phenomenologists must grant to zombies (philosophical zombies) and A.I. (weak or strong artificial general intelligences) all of the rights, dignities, and protections that they currently grant to other human beings (and in some cases, other animals).

I would like to confront two potential misapprehensions immediately. The first is that this article will devolve into quibbling among various materialist, idealist, and dualist models of consciousness. This article is not about whether an artificial intelligence or somesuch can possess consciousness. Rather, this article proceeds from the fact that the hypothetical entities of sufficiently complex A.I. and philosophical zombies (both explained below) are definitively and pragmatically indistinguishable (in intellectual behavior, from the outside) from the other humans to whom we extend rights and respect.[1]

The second potential misapprehension is that I intend this article as a flippant argumentum ad absurdum against some versions of egalitarian ethics or physicalism; far from it, this article is a sincere expression of a state of affairs (at least concerning A.I.) that I see as practically inevitable.

Frankly, although I have not searched exhaustively to find out whether this is the case, I would be enormously surprised to learn that this argument is original; plenty of political and ethical philosophers have argued for the personhood of future A.I., so it is no very great stretch to imagine that one or more of them have done so from this pragmatist and phenomenological perspective.

A Philosophical Argument for the Rights of Zombies and Robots:

Alright, so I am going to argue for the extension of rights to philosophical zombies (“p-zombies”) and sufficiently advanced artificial intelligences (strong or weak artificial general intelligences). Well, what are they? What is a philosophical zombie? And what is a sufficiently or arbitrarily advanced A.I.?

A philosophical zombie is an entity that is indistinguishable from a human being in every way, but which lacks consciousness. That is, from the outside, a p-zombie would look, act, walk, talk, create, laugh, express thoughts and desires, have habits, cry, eat, grin, dance, and on and on—just like a conscious human being. Yet they would have no first-person perspective. It would all be, to oversimplify it, automatic interaction. To paraphrase Thomas Nagel, while there may be something that it is like to be another person, and there may be something that it is like to be a bat, there is definitely nothing that it is like to be a p-zombie. It is a hollow, vacuous, physical being.[2]

Meanwhile, a sufficiently or arbitrarily advanced artificial intelligence would be an intelligence that did not arise, as human intelligence did, from indifferent, disinterested, naturalistic processes, and that exists at some arbitrarily distant future point when it has been brought to the level of external indistinguishability from human intelligence.

Okay, so that’s whose case I am arguing. And here is the argument, in six steps:

(1) The challenge of radical skepticism or solipsism is insurmountable, except from certain common-sense, pragmatist, or phenomenological positions.

Portrait of G.E. Moore

Radical skepticism is the denial of the existence of external reality. I have previously written against G.E. Moore’s common-sense answer to radical skepticism . . . only to end that same article by stating an allegiance to a phenomenological and pragmatist position which holds to the existence of an external reality. The latter position does so simply because the in-fact existence of an external reality (and, thereafter, certain knowledge of it) is irrelevant to a phenomenologist in their lived experience; only the consistent, insurmountable experience of an external reality is relevant.

(2) I am not a radical skeptic or a solipsist.

If this is not true for you, at the very least on a pragmatic and abstracted level, then this argument simply fails in your case. But if you are a true solipsist, then you’ve got a hard enough time figuring out how to justify (or even whether to justify) any and all of your own human-like behaviors, let alone those relating to an ethical orientation toward hypothetical entities.

(3) Therefore, if I am being rational, I must accept one of several common-sense, pragmatist, or phenomenological positions.

There is certainly a healthy population of philosophers who would disagree with aspects of my first premise, and therefore with this premise as well. But from a standpoint of pure formal logic, this is a straightforward move (not-A unless B; A; therefore B).
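For those who like to see the skeleton of that move spelled out, here is one way to render it, purely as an illustrative sketch; the letters and glosses are my own shorthand, not additions to the argument. Let A stand for ‘I am not a radical skeptic or solipsist’ and B for ‘I accept one of the common-sense, pragmatist, or phenomenological positions.’ Then:

\[
\begin{aligned}
\text{P1:}\quad & \lnot B \rightarrow \lnot A \qquad \text{(``not-A unless B,'' equivalently } A \rightarrow B\text{)}\\
\text{P2:}\quad & A\\
\text{C:}\quad & B \qquad \text{(modus ponens on the contrapositive of P1)}
\end{aligned}
\]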

(4) One holding consistently to such an ontological and epistemological position would bracket and accept the consciousness of other minds on the basis of nothing other than the convincing outward appearance of other entities as having consciousness, just as they bracket and accept the existence of an external reality on the basis of nothing other than the convincing appearance of an external reality.[3]

This is, to my mind, the most contentious premise in the argument. After all, affirmation of the existence of an external reality does not actually imply affirmation of the existence of other consciousnesses. But it is a similar phenomenological and pragmatist maneuver on yet another topic of philosophical skepticism, and it would be a rare, inconsistent thinker who denies radical skepticism on phenomenological grounds, yet who feels that the evidence for the existence of other minds is too shaky.

(5) By definition, it will never be possible from an external position to distinguish in intellectual behavior between, on the one hand, a philosophical zombie or an arbitrarily advanced artificial intelligence and, on the other hand, a conscious human being.

This is simply a restatement of the concepts of p-zombies and artificial general intelligence, as introduced above.

(6) Therefore, in all practical real-world cases (where the consciousness-status of other humans, p-zombies, and weak or strong artificial general intelligences is similarly ambiguous), a rational and unbiased individual holding to a pragmatist philosophical position must grant the same ethical status to all such entities.

Photo of David Chalmers by Zereshk

Yes, recall from the outset that this was a stronger statement than merely, ‘we should extend equal rights to artificial intelligence.’ I am saying that we are rationally required to extend equal rights even to philosophically non-conscious entities, i.e. hypothetical human-like entities that have no consciousness by definition. Why would this be? Well, tracing back through the argument: as far as I can know with certainty, there are already no other consciousnesses in the world but mine.

It is impossible (at present) for it to be proven to me, beyond a shadow of a doubt, that all other human beings in the world right now are not philosophical zombies. Only my allegiances to pragmatism and phenomenology (and my own philosophy of mind, as explained in the second footnote below) give force to the notion that there are other minds.

But in granting that I have sufficient justification on pragmatic and phenomenological grounds for believing in the existence of other minds, I am also saying that other entities providing the same level of evidence are providing the same level of justification. That being the case, I have only two options: either I cease granting rights, dignities, and protections to all conscious-seeming entities other than myself (on the grounds of ambiguous consciousness), or I grant them to every sufficiently convincing claimant.
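To make that parity move explicit (again only as an illustrative sketch, with the predicates below introduced by me for this purpose rather than drawn from the argument itself), let E(x) abbreviate ‘x presents a convincing outward appearance of consciousness’ and R(x) abbreviate ‘I grant rights, dignities, and protections to x.’ The pragmatist commitment amounts to:

\[
\forall x\,\forall y\,\bigl(E(x)\wedge E(y)\rightarrow(R(x)\leftrightarrow R(y))\bigr)
\]

Given that other humans, p-zombies, and sufficiently advanced artificial intelligences all satisfy E equally well, this principle forces a uniform verdict: either I withhold rights from every conscious-seeming entity other than myself, or I grant them to every sufficiently convincing claimant.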

Conclusion:

You may have noticed that this argument is pretty specific in focus. It argues merely that philosophers operating within a particular subset of a particular epistemological position will have to grant the same status to a couple of potentially unintuitive hypothetical groups. So what good is this article then? Well, I happen to fall within that particular subset of that particular epistemological position, so this was a notion that struck me one day: my blasé and pragmatic attitude toward radical skepticism has far-reaching consequences. I am comfortable with these consequences; others may not be.

Either way, considering such consequences is a vital aspect of evaluating the positions in question. If I discover a consistent and beautiful line of thinking, but the society resulting from a community holding to that line of thinking seems odious and undesirable, then the line itself becomes suspect. And for me, for whom these positions seem not overly specific but instead vibrantly true, what I have here is a further truth, which I have previously discussed with an analogy between genetics and language: a logical extension of equality beyond humanity.

Notes:

[1] Though this is straying pretty far from the topic of the section, I thought it was important to at least note the fact that this article is also not concerned directly with ethics. It is trivially true that philosophers disagree about which rights, if any, should be extended to all humans; about whether rights ought to be granted to any beings besides humans; and even about whether rights, as such, coherently exist. For the purposes of this article, those concerns are irrelevant; what is here briefly argued is simply that whatever rights, dignities, and/or protections are granted to all conscious entities must also be granted to all ambiguously conscious entities with convincing outward appearance of consciousness.

[2] Again I would be veering off-topic if I mentioned this in the body of the article, but I wanted to point out that this hypothetical of a philosophical zombie is by no means necessarily possible. It was notably posited by, among others, David Chalmers in his attempt to refute physicalist conceptions of consciousness. But certain philosophies of consciousness (including the one to which I myself subscribe) would hold that the very notion of a philosophical zombie is a contradiction, as it may be the case that there is no way to achieve that state of affairs in reality (genuine creating, crying, laughing, pain response, and all the rest together) without consciousness. Consciousness could be an essential physical property of matter, which in certain arrangements cannot be anything but experiential; there is no such thing, such a philosopher would say, as a being which is identical to a conscious being in every way except for consciousness. As a result, it may also be the case—though this is even more controversial among philosophers—that there can be no such thing as a sufficiently advanced artificial intelligence without consciousness. This article merely draws an analogy between these hypothetical ambiguously conscious entities and the state of affairs regarding the ambiguous consciousness-status of all humans other than yourself, as a means of arguing for the rights-bearing or rights-lacking status of all of them.

[3] An important note: ‘convincing outward appearance of other entities as having consciousness’ is a much higher standard than its phrasing may suggest. There are kinds, and even presentations, of sophisticated behavior that would not pass a generalized equivalent of a Turing test. This is not intended, however, as a side-stepping of John Searle’s Chinese room argument. First, Searle’s argument itself side-steps the kind of physicalist panpsychist position sketched in the second footnote above. Second, one could grant Searle’s conclusion regarding algorithmic actions, yet remain open to the prospect of some future non-algorithmic artificial intelligence. And third, as stated repeatedly above, the consciousness-possession of ambiguously conscious entities is not being affirmed here, merely bracketed and pragmatically accepted.
