10 Comments
Cathy:

Hi Justin

What I meant by saying the theorem applies only to physicalist theories, not to systems, is that strictly we cannot define what we are dealing with unless we formalize it in terms of some theory. In other words, there is no formalizable equivalence between theories and the things they model. So strictly we cannot say anything one way or the other about reality itself; we can only make models and test them. The answer to your question therefore depends on whether you think there can exist physical systems which cannot be modelled. It's a little like the Church-Turing thesis.

Thanks for reading the paper.

Cathy

Justin T. Sampson:

Hi Cathy, thanks for sharing the revised paper privately. I've been giving it a lot of thought. Your approach gets more interesting the more I sit with it. You are touching on a general problem for philosophy of mind, which is that all the arguments of philosophy are made up of idealized logical propositions despite being generated and manipulated by imperfect physical minds. Now, the thing is, I think I'm simply okay with that. I don't believe that my self-awareness is perfectly logical, so my self-awareness is not inconsistent with physicalism. I don't identify as an illusionist, but I would expect any illusionist to happily accept your result, because they see self-awareness as fundamentally mistaken anyway. Does that seem right to you, or would you distance your result from illusionism in some way?

Cathy:

Hi Justin

I dealt with these points in a JCS paper last January, "The no-supervenience theorem and its implications for theories of consciousness". This is available from the journal website, or I can send you a preprint privately. The upshot is that a physicalist can believe they are self-aware if and only if they make a deliberate decision not to think rationally. This is a much sharper condition than a generic belief that brains can be irrational: it makes self-awareness contingent on being intentionally irrational.

I apply the theorem to illusionism in the JCS paper. The result is that any illusionist will have to abandon either rationality or self-awareness. So far I have not heard from any illusionist which of these they are prepared to let go.

Cathy

Justin T. Sampson:

That sounds like a very interesting paper. Now you’ve got me wondering: If it’s not rational to believe my mind is physical, is it any more rational to believe my mind is non-physical? Or is the only rational course to remain agnostic about the physicality of my mind?

Cathy:

I'm afraid it's worse than that. Even the epistemic possibility of physicalism is inconsistent with rationality and self-awareness. So if you are self-aware, and choose to be rational, then physicalism must be false.
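
Spelled out in standard modal notation, with $P$ for physicalism, $R$ for rationality and $S$ for self-awareness (these symbols are a gloss introduced here, not notation taken from the papers under discussion), the inference presumably runs:

$(R \land S) \to \lnot\Diamond P$ (claimed premise: even the epistemic possibility of physicalism is inconsistent with rationality and self-awareness)

$P \to \Diamond P$ (whatever is true is at least epistemically possible)

$\therefore\ (R \land S) \to \lnot P$ (for a rational, self-aware agent, physicalism is false)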

Cathy

Cathy:

For another example of a no-go theorem relating to consciousness, please see my preprint

https://arxiv.org/abs/2307.10178

I would be interested in your views.

Cathy R

Justin T. Sampson:

Hi Cathy, thanks for sharing! I gave it a quick read. I might be a physicalist, but I don't understand your formal definition of physicalism well enough yet to know whether it captures my view. At the very least, your paper gave me an excuse to refresh my memory of modal logic!

Here's one question, inspired by your "final point, which is subtle but extremely important." There, you clarify that your proof applies to physicalist models of systems, not to physical systems themselves. Does that mean that, on your view, a physical system CAN be conscious, without any non-physical dynamics, but we just can't MODEL conscious properties in a physicalist paradigm while staying consistent?

Johannes Kleiner:

Thanks again very much for thinking so deeply about our paper @Justin! Very much appreciated.

Here is my reply, now publicly accessible too: https://johanneskleiner.substack.com/p/on-our-no-go-theorem-for-ai-consciousness

Ariel Zeleznikow-Johnston:

This is a great writeup! Have you spoken to Johannes about it? Does he accept your criticisms?

Justin T. Sampson:

Thanks! Johannes and I had chatted about an earlier preprint but my thoughts were not fully formed until reading the final version of the paper that got published. I look forward to discussing further.
