


Does David Chalmers Think He (His Identity) Would Survive If His Consciousness Were To Be Uploaded?

The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function, and so too will our minds. This will result in our deaths. Little wonder, then, that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced substrate has proved so attractive to futurists and transhumanists.

But is it really feasible? This is a question I've looked at many times before, but the recent book Intelligence Unbound: The Future of Uploaded and Machine Minds offers perhaps the most detailed, sophisticated and thoughtful treatment of the topic. It is a collection of essays, from a diverse array of authors, probing the key issues from several different perspectives. I highly recommend it.

Within its pages you will find a pair of essays debating the philosophical aspects of mind-uploading (you'll find others too, but I want to zone in on this pair because one is a direct response to the other). The first of those essays comes from David Chalmers and is broadly optimistic about the prospect of mind-uploading. The second comes from Massimo Pigliucci and is much less enthusiastic. In this two-part series of posts, I want to examine the debate between Chalmers and Pigliucci. I start by looking at Chalmers's contribution.

1. Methods of Mind-Uploading and the Problems for Debate
Chalmers starts his essay by considering the different possible methods of mind-uploading. This is useful because it helps to clarify, to some extent, exactly what we are debating. He identifies three different methods (note: in a previous post I looked at work from Sim Bamford suggesting that there were more methods of uploading, but we can ignore those other possibilities for now):

Destructive Uploading: As the name suggests, this is a method of mind-uploading that involves the destruction of the original (biological) mind. An example would be uploading via serial sectioning. The brain is frozen and its structure is analyzed layer by layer. From this analysis, one builds up a detailed map of the connections between neurons (and other glial cells if necessary). This information is then used to build a functional computational model of the brain.

Gradual Uploading: This is a method of mind-uploading in which the original copy is gradually replaced by functionally equivalent components. One example of this would be nanotransfer. Nanotechnology devices could be inserted into the brain and attached to individual neurons (and other relevant cells if necessary). They could then learn how those cells work and use this information to simulate the behaviour of the neuron. This would lead to the construction of a functional analogue of the original neuron. Once the simulation is complete, the original neuron can be destroyed and the functional analogue can take its place. This process can be repeated for every neuron, until a complete copy of the original brain is constructed.

Nondestructive Uploading: This is a method of mind-uploading in which the original copy is retained. Some form of nanotechnology brain-scanning would be needed for this. This would build up a dynamical map of current brain function, without disrupting or destroying it, and use that dynamical map to construct a functional analogue.

Whether these forms of uploading are really technologically feasible is anyone's guess. They are certainly not completely implausible. I can certainly imagine a model of the brain being built from a highly detailed scan and analysis. It might take a huge amount of computational power and technical resources, but it seems within the realm of technological possibility. The deeper question is whether our minds would actually survive the process. This is where the philosophical debate kicks in.

There are, in fact, two philosophical issues to debate:

The Consciousness Issue: Would the uploaded mind be conscious? Would it experience the world in a roughly similar way to how we now experience the world?

The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process? Would our identities be preserved?

The two issues are connected. Consciousness is valuable to us. Indeed, it is arguably the most valuable thing of all: it is what allows us to enjoy our interactions with the world, and it is what confers moral status upon us. If consciousness were not preserved by the mind-uploading process, it is hard to see why we would care. So consciousness is a necessary condition for a valuable form of mind-uploading. That does not, however, make it a sufficient condition. After all, two beings can be conscious without sharing any important connection (you are conscious, and I am conscious, but your consciousness is not valuable to me in the same way that it is valuable to you). What we really want to preserve through uploading is our individual consciousnesses. That is to say: the stream of conscious experiences that constitutes our identity. But would this be preserved?

These two issues form the heart of the Chalmers-Pigliucci debate.

2. Would consciousness survive the uploading process?
So let's start by looking at Chalmers's take on the consciousness issue. Chalmers is famously one of the new-Mysterians, a group of philosophers who doubt our ability to have a fully scientific theory of consciousness. Indeed, he coined the term "The Hard Problem" of consciousness to describe the difficulty we have in accounting for the first-personal quality of conscious experience. Given his scepticism, one might have thought he'd have his doubts about the possibility of creating a conscious upload. But he actually thinks we have reason to be optimistic.

He notes that there are two leading contemporary views about the nature of consciousness (setting non-naturalist theories to the side). The first, which he calls the biological view, holds that consciousness is only instantiated in a particular kind of biological system: no nonbiological system is likely to be conscious. The second, which he (and everyone else) calls the functionalist view, holds that consciousness is instantiated in any system with the right causal structure and causal roles. The important thing is that the functionalist view allows for consciousness to be substrate independent, whereas the biological view does not. Substrate independence is necessary if an upload is going to be conscious.

So which of these views is correct? Chalmers favours the functionalist view and he has a somewhat elaborate argument for this. The argument starts with a thought experiment, and the thought experiment comes in two stages. The first stage asks us to imagine a "perfect upload of a brain inside a computer" (p. 105), by which is meant a model of the brain in which every relevant component of a biological brain has a functional analogue within the computer. This computer-brain is also hooked up to the external world through the same kinds of sensory input-output channels. The result is a computer model that is a functional isomorph of a real brain. Would we doubt that such a system was conscious if the real brain was conscious?

Maybe. That brings us to the second stage of the thought experiment. Now, we are asked to imagine the construction of a functional isomorph through gradual uploading:

Here we upload different components of the brain one by one, over time. This might involve gradual replacement of entire brain areas with computational circuits, or it might involve uploading neurons one at a time. The components might be replaced with silicon circuits in their original location… It might take place over months or years, or over hours.

If a gradual uploading process is executed correctly, each new component will perfectly emulate the component it replaces, and will interact with both biological and nonbiological components around it in just the same way that the previous component did. So the system will behave in exactly the same way that it would have without the uploading.
(Intelligence Unbound pp. 105-106)

Critical to this exercise in imagination is the fact that the process results in a functional isomorph, and that you can make the process exceptionally gradual, both in terms of the time taken and the size of the units being replaced.

With the building blocks in place, we now ask ourselves the critical question: if we were undergoing this process of gradual replacement, what would happen to our conscious experience? There are three possibilities. Either it would suddenly stop, or it would gradually fade out, or it would be retained. The first two possibilities are consistent with the biological view of consciousness; the last is not. It is only consistent with the functionalist view. Chalmers's argument is that the last possibility is the most plausible.

In other words, he defends the following argument:

  • (1) If the parts of our brain are gradually replaced by functionally isomorphic component parts, our conscious experience will either: (a) be suddenly lost; (b) gradually fade out; or (c) be retained throughout.
  • (2) Sudden loss and gradual fadeout are not plausible; retention is.
  • (3) Therefore, our conscious experience is likely to be retained throughout the process of gradual replacement.
  • (4) Retention of conscious experience is only compatible with the functionalist view.
  • (5) Therefore, the functionalist view is likely to be correct; and preservation of consciousness via mind-uploading is plausible.
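
As an aside, the logical skeleton of this argument is a simple argument by elimination. Here is a minimal sketch in Lean (the proposition names are my own hypothetical labels, and premise (2)'s plausibility claims are idealized as outright truths); it only confirms that the conclusion follows from the premises, so all the philosophical work is done by (1), (2) and (4):

```lean
-- Sketch of the argument's logical form. The names are hypothetical labels;
-- premise (2)'s "plausibility" claims are idealized here as outright truths.
theorem retention_argument
    (SuddenLoss Fadeout Retained Functionalism : Prop)
    (p1 : SuddenLoss ∨ Fadeout ∨ Retained)   -- premise (1): exhaustive options
    (p2a : ¬SuddenLoss)                      -- premise (2): sudden loss ruled out
    (p2b : ¬Fadeout)                         -- premise (2): fadeout ruled out
    (p4 : Retained → Functionalism) :        -- premise (4)
    Functionalism :=                         -- conclusion (5)
  match p1 with
  | Or.inl h => absurd h p2a
  | Or.inr (Or.inl h) => absurd h p2b
  | Or.inr (Or.inr h) => p4 h
```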

Chalmers adds some detail to the conclusion, which we'll talk about in a minute. The crucial thing for now is to focus on the key premise, number (2). What reason do we have for thinking that retention is the only plausible option?

With regard to sudden loss, Chalmers makes a simple argument. If we were to suppose, say, that the replacement of the 50,000th neuron led to the sudden loss of consciousness, we could break down the transition point into ever more gradual steps. So instead of replacing the 50,000th neuron in one go, we could divide the neuron itself into ten sub-components and replace them gradually and individually. Are we to suppose that consciousness would suddenly be lost in this process? If so, then break down those sub-components into other sub-components and start replacing them gradually. The point is that eventually we will reach some limit (e.g. when we are replacing the neuron molecule by molecule) where it is implausible to suppose that there will be a sudden loss of consciousness (unless you believe that one molecule makes a difference to consciousness: a belief that is refuted by reality, since we lose brain cells all the time without thereby losing consciousness). This casts the whole notion of sudden loss into doubt.

With regard to gradual fadeout, the argument is more subtle. Remember, it is critical to Chalmers's thought experiment that the upload is functionally isomorphic to the original brain: for every brain state that used to be associated with conscious experience there will be a functionally equivalent state in the uploaded version. If we accept gradual fadeout, we would have to suppose that, despite this equivalence, there is a gradual loss of certain conscious experiences (e.g. the ability to experience black and white, or certain high-pitched sounds, etc.) despite the presence of functionally equivalent states. Chalmers argues that this is implausible because it asks us to imagine a system that is deeply out of touch with its own conscious experiences. I find this slightly unsatisfactory insofar as it may presuppose the functionalist view that Chalmers is trying to defend.

But, in any event, Chalmers suggests that the process of partial uploading will convince people that retention of consciousness is likely. Once we have friends and family who have had parts of their brains replaced, and who seem to retain conscious experience (or, at least, all outward signs of having conscious experience), we are likely to accept that consciousness is preserved. After all, I don't doubt that people with cochlear or retinal implants have some sort of aural or visual experiences. Why should I doubt it if other parts of the brain are replaced by functional equivalents?

Chalmers concludes with the suggestion that all of this points to the likelihood of consciousness being an organizational invariant. What he means by this is that systems with the exact same patterns of causal organization are likely to have the same states of consciousness, no matter what those systems are made of.

I'll hold off on the major criticisms until part two, since this is the part of the argument about which Pigliucci has the most to say. Nevertheless, I will make one comment. I'm inclined towards functionalism myself, but it seems to me that in crafting the thought experiment that supports his argument, Chalmers helps himself to a pretty colossal assumption. He assumes that we know (or can imagine) what it takes to create a "perfect" functional analogue of a conscious system like the brain. But, of course, we don't really know what it takes. Any functional model is likely to simplify and abstract from the messy biological details. The problem is knowing which of those details is critical for ensuring functional equivalence. We can create functional models of the heart because all the critical elements of the heart are determinable from a third-person perspective (i.e. we know what is necessary to make the blood pump from a third-person perspective). That doesn't seem to be the case with consciousness. In fact, that's what Chalmers's Hard Problem is supposed to highlight.

3. Will our identities be preserved? Will we survive the process?
Let's assume Chalmers is right to be optimistic about consciousness. Does that mean he is right to be optimistic about identity/survival? Will the uploaded mind be the same as we are? Will it share our identity? Chalmers has more doubts about this, but once again he sees some reason to be optimistic.

He starts by noting that there are three different philosophical approaches to personal identity. The first is biologism (or animalism), which holds that preservation of one's identity depends on the preservation of the biological organism that one is. The second is psychological continuity, which holds that preservation of one's identity depends on maintaining threads of overlapping psychological states (memories, beliefs, desires etc.). The third, slightly more unusual, is Robert Nozick's "closest continuer" theory, which holds that preservation of identity depends on the existence of a closely-related subsequent entity (where "closeness" is defined in various ways).

Chalmers then defends two different arguments. The first gives some reason to be pessimistic about survival, at least in the case of destructive and nondestructive forms of uploading. The second gives some reason to be optimistic, at least in the case of gradual uploading. The end result is a qualified optimism about gradual uploading.

Let's start with the pessimistic argument. Again, it involves a thought experiment. Imagine a man named Dave. Suppose that one day Dave undergoes a nondestructive uploading process. A copy of his brain is made and uploaded to a computer, but the biological brain continues to exist. There are, thus, two Daves: BioDave and DigiDave. It seems natural to suppose that BioDave is the original, and his identity is preserved in this original biological form; and it is equally natural to suppose that DigiDave is simply a branchline copy. In other words, it seems natural to suppose that BioDave and DigiDave have separate identities.

But now suppose we imagine the same scenario, only this time the original biological copy is destroyed. Do we have any reason to change our view about identity and survival? Surely not. The only difference this time round is that BioDave is destroyed. DigiDave is the same as he was in the original thought experiment. That suggests the following argument (numbering follows on from the previous argument diagram):

  • (9) In nondestructive uploading, DigiDave is not identical to Dave.
  • (10) If in nondestructive uploading DigiDave is not identical to Dave, then in destructive uploading DigiDave is not identical to Dave.
  • (11) In destructive uploading, DigiDave is not identical to Dave.
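
The reasoning from (9) and (10) to (11) is just modus ponens, as the following Lean sketch makes explicit (the labels are my own hypothetical names); anyone who resists the conclusion therefore has to deny one of the premises:

```lean
-- The pessimistic argument is valid by modus ponens, so a critic must
-- deny premise (9) or premise (10). The labels are hypothetical.
theorem pessimistic_argument
    (NonIdenticalND NonIdenticalD : Prop)
    (p9  : NonIdenticalND)                    -- (9): the nondestructive case
    (p10 : NonIdenticalND → NonIdenticalD) :  -- (10): the bridging conditional
    NonIdenticalD :=                          -- (11): the destructive case
  p10 p9
```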

This looks pretty sound to me. And as we shall see in part two, Pigliucci takes a similar view. Nevertheless, there are two possible ways to escape the conclusion. The first would be to deny premise (10) by adopting the closest continuer theory of personal identity. The idea then would be that in destructive (but not nondestructive) uploading DigiDave is the closest continuer and hence the vessel in which identity is preserved. I think this just reveals how odd the closest continuer theory really is.

The other option would be to argue that this is a fission case. It is a scenario in which one original identity fissions into two subsequent identities. The concept of fissioning identities was originally discussed by Derek Parfit in the case of the severing and transplanting of brain hemispheres. In the brain hemisphere example, some part of the original person lives on in two separate forms. Neither is strictly identical to the original, but they do stand in "relation R" to the original, and that relation might be what is critical to survival. It is more difficult to say that nondestructive uploading involves fissioning. But it might be the best bet for the optimist. The argument then would be that the original Dave survives in two separate forms (BioDave and DigiDave), each of which stands in relation R to him. But I'd have to say this is quite a stretch, given that BioDave isn't really some new entity. He's simply the original Dave with a new name. The new name is unlikely to make an ontological difference.

Let's now turn our attention to the optimistic argument. This one requires us to imagine a gradual uploading process. Fortunately, we've done this already so you know the drill: imagine that the subcomponents of the brain are replaced gradually (say 1% at a time), over a period of several years. It seems highly likely that each step in the replacement process preserves identity with the previous step, which in turn suggests that identity is preserved once the process is complete.

To state this in more formal terms:

  • (14) For all due north < 100, Davenorth+1 is identical to Davedue north.
  • (fifteen) If for all n < 100, Daven+i is identical to Daven, then Dave100 is identical to Dave.
  • (sixteen) Therefore, Dave100 is identical to Dave.
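
Premise (15) amounts to an induction over the chain of replacement steps, using the transitivity of identity. Here is a minimal Lean sketch of that step (the names `Person` and `dave` are mine, `dave 0` stands for the original Dave, and modelling strict identity as equality is itself a substantive assumption):

```lean
-- If every one of the 100 replacement steps preserves strict identity
-- (modelled here as equality), the whole chain does. The philosophical
-- weight falls entirely on premise (14), i.e. the hypothesis `step`.
variable {Person : Type}

theorem gradual_identity (dave : Nat → Person)
    (step : ∀ n, n < 100 → dave (n + 1) = dave n) :  -- premise (14)
    dave 100 = dave 0 := by                          -- conclusion (16)
  have h : ∀ k, k ≤ 100 → dave k = dave 0 := by
    intro k
    induction k with
    | zero => intro _; rfl
    | succ n ih =>
      intro hk
      have hn : n < 100 := Nat.lt_of_succ_le hk
      -- chain this step's identity with the identity established so far
      exact (step n hn).trans (ih (Nat.le_of_lt hn))
  exact h 100 (Nat.le_refl 100)
```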

If you're not convinced by this 1%-at-a-time version of the argument, you can adapt it until it becomes more persuasive. In other words, setting aside certain extreme physical and temporal limits, you can make the process of gradual replacement as slow as you like. Surely there is some point at which the degree of change between the steps becomes so minimal that identity is clearly being preserved? If not, then how do you explain the fact that our identities are preserved as our body cells replace themselves over time? Maybe you explain it by appealing to the biological nature of the replacement. But if we have functionally equivalent technological analogues, it's hard to see where the problem is.

Chalmers adds other versions of this argument. These involve speeding up the process of replacement. His intuition is that if identity is preserved over the course of a really gradual replacement, then it may well be preserved over a much shorter period of replacement too, for example one that takes a few hours or a few minutes. That said, there may be important differences when the process is sped up. It may be that too much change takes place too quickly and the new components fail to smoothly integrate with the old ones. The result is a break in the strands of continuity that are necessary for identity-preservation. I have to say I would certainly be less enthusiastic about a fast replacement. I would like the time to see whether my identity is being preserved following each replacement.

4. Conclusion
That brings us to the end of Chalmers's contribution to the debate. He says more in his essay, particularly about cryopreservation and the possible legal and social implications of uploading. But there is no sense in addressing those topics here. Chalmers doesn't develop his thoughts at any great length and Pigliucci wisely ignores them in his reply. We'll be discussing Pigliucci's reply in part two.

Source: https://philosophicaldisquisitions.blogspot.com/2014/09/chalmers-vs-pigliucci-on-philosophy-of.html
