I think I found a way for the infinite consciousness to be falsifiable:
The black hole information paradox.
Basically, Hawking radiation emitted by a black hole depends only on the hole's overall properties (mass, charge, spin), not on what fell into it, and hence cannot carry information about what fell into the hole.
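(For reference, and this is just the standard textbook result, not something from the paradox debate itself: the Hawking temperature of an uncharged, non-rotating black hole depends only on its mass M,

    T_H = ħc³ / (8πGMk_B),

so the outgoing thermal radiation carries no record of what fell in beyond mass, charge, and spin.)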
Now, if data/information can be destroyed, then the infinite consciousness is in a rut.
However, if it could be resolved by transferring the information to other universes (via Einstein-Rosen bridges), then I'm still good to go. Once the LHC is turned on, we may get better insight into the nature of black holes, since it may produce miniature ones.
Well, what do you think?
While I don't disagree per se, our level of consciousness allows for emotional feelings. I don't think consciousness can be pigeonholed into one level.
According to evolution, our level of consciousness developed in stages: from the first single-celled organism (which I would consider conscious, just not on our level) to modern-day humans, who feel emotions and are sentient.
Nope, because the computer program still needs input. The computer won't develop on its own; it needs researchers and such (conscious entities) to analyze the data for it and compile it into an algorithm or whatever they're using.
I don't think the computer has a choice. It won't independently reject programs that it should otherwise accept; that would put the computer on our level of consciousness (unless of course you're talking about Windows Vista >_>). Mice do have choices.
This reminds me of the book 'Do Androids Dream of Electric Sheep?' and the TV show Andromeda. In both, androids are pretty much indistinguishable from the human population.
I do doubt that technology will ever produce those kinds of results.
I'm not ruling out the possibility, of course, but unless I missed some dramatic revolution, I hold my doubts. I did hear Japan is making progress in robotics, but nowhere near androids of that sort.
Yes; you could think of a conscious system as a type of operating system. However, instead of running programs the job of this operating system is to make predictions about the world and test them. Analogues of anger, sadness, and frustration would develop as those expectations are violated. In order to relieve the tension, the operating system must adjust its predictive methods in order to produce better predictions. If its predictions are accurate, the machine could be said to be happy.
As an operating system, there is no superior chain of commands to return to. The program doesn't stop thinking until it is turned off, crashes, or its hardware breaks. If the memory is lost, it could be said to have died. This is a philosophical thing with no real truth to be found in it. Ethically, we shouldn't wipe sentient computers' memories. But I digress; the model is equivalent in function to cognition.
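To make that concrete, here's a toy sketch of the loop I'm describing, in Python. Everything in it (the class name, the update rule, the thresholds) is a made-up illustration of "adjust predictions to relieve tension", not a real cognitive architecture:

    import random

    class PredictiveOS:
        """Toy 'operating system' that predicts an observation and
        accumulates tension (frustration) when its predictions fail."""

        def __init__(self):
            self.estimate = 0.5   # current guess about the world's hidden value
            self.tension = 0.0    # accumulated prediction error ("frustration")

        def step(self, observation):
            error = abs(observation - self.estimate)
            self.tension += error
            # Relieve tension by adjusting the predictive method:
            # nudge the guess toward what was actually observed.
            self.estimate += 0.3 * (observation - self.estimate)
            self.tension *= 0.7   # tension decays as predictions improve
            return "happy" if self.tension < 0.2 else "frustrated"

    agent = PredictiveOS()
    hidden = 0.8  # the regularity the system is trying to predict
    for t in range(10):
        obs = hidden + random.uniform(-0.05, 0.05)
        mood = agent.step(obs)
        print(t, round(agent.estimate, 3), mood)

Run it and the mood starts out "frustrated" while expectations are being violated, then settles to "happy" as the estimate converges, which is the whole point of the analogy.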
If the operating system were coded into the hardware then it would not be a computer, having no capacity to run programs or perform calculations at the whim of its operators.
You're right. Unless the goal is to produce human analogues, it is unlikely that intelligences created by us will have any semblance of human features. As I mentioned, emotions are created by the pressures which drive us to change our thinking. It may be, and this is the craziest thing, that we will produce machines with a far greater range of emotions than we have. It would then be up to the machine to describe these emotions to us, something which could be a social problem.
With the advances we are making in artificial intelligence, saying that we will never achieve something like that is akin to saying that we would never find a cheaper light source than incandescent lighting, or that we would never split the atom. It's a bit of a leap for our time period, but far from an inconceivable design.