TransHumanist?
Posted on: June 22, 2011 - 12:06am
TransHumanist?
Any transhumanists in here?
Copyright Rational Response Squad 2006-2024.
I like it.
I had to look it up. Going by the Wikipedia definition, I guess I am mostly in favor, especially the bit about eliminating aging. Not enough so as to rush out and sign up, though. For those who are also clueless, the Wikipedia article is a decent place to start.
-- I feel so much better since I stopped trying to believe.
"We are entitled to our own opinions. We're not entitled to our own facts"- Al Franken
"If death isn't sweet oblivion, I will be severely disappointed" - Ruth M.
I needed informing. I like the idea. I love this recent Internet mining of human habits, like the idea of eternal electronic life (though I could not be bothered to create a digital me).
But a digital self in real time? Why the hell not. Would it be just as much fun living inside the matrix as out? If consciousness is what it appears to be - electro-chemical stimuli - then good-oh.
"Experiments are the only means of knowledge at our disposal. The rest is poetry, imagination." Max Planck
There's a famous thought experiment: if we were slowly to replace every dying neuron with a new artificial one, would we end up with an artificial brain, an artificial intelligence?
If the "artificial" neuron were indistinguishable from a "real" one, then no: we would just be a human with artificial neurons.
The idea behind artificial intelligence is to program a totally non-human machine to appear to be human. You have to be a programmer geek to get all the nuances of that discussion. I used to follow it years ago, but haven't kept up. I'm guessing some of the most recent networks have a glimmer of attaining artificial intelligence someday. Psychologists and neurologists are still working on the meaning of intelligence, so it is hard to say just what artificial intelligence looks like.
A key is the concept of self awareness. Why do living organisms attain self awareness, yet even the most sophisticated computers show no indications of even rudimentary gains in information, let alone being self aware? For me, the indication that we were headed in the right design direction would be a computer that could see the information and reports the humans valued, and then spontaneously generate new reports that gathered and presented the necessary data more efficiently, in a format with more usable information. I could get excited about that.
PS - data is not information - that is what gets the creationists all crosswise with their "DNA does not gain information" argument. They don't understand that information has the potential to be useful; data is just a jumble.
-- I feel so much better since I stopped trying to believe.
"We are entitled to our own opinions. We're not entitled to our own facts"- Al Franken
"If death isn't sweet oblivion, I will be severely disappointed" - Ruth M.
Well cj, as far as the current state of research goes, I know there is a guy in France with a Blue Gene blade farm who claims to have produced a single neural column modeled on that of a rat. According to him, the output from the model even "looks like brain waves", whatever that means. I shall try to hunt up some info on him.
Past that, you do raise an interesting question in that we don't even know what it means to be intelligent. This discussion has happened many times all over the internet and one of the cases which I make is related.
Specifically, if we don't have a good working definition, it is at least possible to conceive that it will happen before we can even say with certainty that it has. Many years ago, I remember that the researchers who handle Koko the gorilla put her in a chat room to see what would develop. Mind you, the actual conversation had to be handled by a team of grad students and who knows what bias they may have injected into the deal without even realizing it.
Even so, simple communication with at least primates has been documented well enough to think that something interesting is going on there. Not that we should expect primates to be doing advanced science any time soon but some low level of communication seems to be real enough.
As far as copying myself onto a computer, well I am kind of on the fence about that. One problem that I see is that I don't really know that I want to live forever. Don't get me wrong, an extra century or two might be kind of a kick but transhumanism has the potential to keep us going as long as we can sustain the economic level needed to maintain the technology.
Another issue would be the pace of technological change. Let's say that I have myself copied into a computer. OK, great, at that moment, I will be a snapshot of myself (or something, again we don't have the terminology worked out). What if, every couple of years, I upgrade the hardware? Am I still me? Am I going to become something fully machine over time? I don't think that Marvin the Paranoid Android got such a great deal from his manufacturer.
=
Strictly speaking, 'information' in the physical sense is just a record of the position/momentum of all the particles within a specified volume. It is apparently conserved, in the sense of the measure of the amount of data involved, possibly until some particles are swallowed by a black hole, which is a subject of concern to physicists.
In a more abstract sense, that is how I remember 'information' being defined when I did 'communication theory' as part of my Honors degree in Electrical Engineering. A guy called Claude Shannon figured prominently.
'Useful' information is another level of usage of the term, determined by the significance of specific patterns to particular outcomes which are important in some particular context. It is the conflation between this idea and the definitions above which creationists rely on.
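Shannon's statistical sense of information can be made concrete with his entropy measure, the average number of bits per symbol in a message. A rough sketch in Python (the example strings are made up); note that this measures statistical surprise, not usefulness, which is exactly the distinction the creationist argument blurs:

```python
from collections import Counter
from math import log2

def shannon_entropy(message):
    """Average bits of information per symbol: H = -sum(p * log2(p))."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A repetitive string carries few bits per symbol...
print(shannon_entropy("aaaaaaab"))   # low, well under 1 bit per symbol
# ...while a maximally varied one carries more.
print(shannon_entropy("abcdefgh"))   # 3.0 bits per symbol (8 equally likely symbols)
```

By this measure a random jumble actually scores *high*, which is why 'information' in the engineering sense and 'useful information' in the everyday sense must not be conflated.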
Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality
"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." - Sam Harris
The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me
From the sublime to the ridiculous: Science -> Philosophy -> Theology
Yes, I have heard of Shannon, and I am not about to pretend I understand the rigorous definitions. I can understand my usefulness definition. Admittedly it comes from years of writing database reports to convert vast quantities of data into information people can use when trying to run a business.
What I recall of discussions re information theory from university days is from the "Human Interaction with Computers" class. At any time, there are over a billion bits of data in our environment, some changing at geological rates, some at quantum speeds. The human brain usually manages to get one of those bits into long term memory every second. My professor thought this was a sad commentary on the limits of the organic brain. I thought it was brilliant that organisms with brains managed to remember enough of the right information to survive and reproduce. I only got a B in the class as I disagreed with the professor on a number of issues and I am/was too dumb to keep my mouth shut.
The exact numbers as I was taught may have been slightly different from the above anecdote. Cut me some slack for CRS as it was over 20 years ago.
I have looked at what talkorigins has up on the subject as it pertains to the creationist arguments. I don't profess to understand all of that either. Seems to me, those that spout off about information theory "proving" creationism really don't know what they are talking about. But what else is new?
edit: hopefully for clarity, should have previewed before posting - dumb ass
-- I feel so much better since I stopped trying to believe.
"We are entitled to our own opinions. We're not entitled to our own facts"- Al Franken
"If death isn't sweet oblivion, I will be severely disappointed" - Ruth M.
No possibility of repeating memory erasure and inventing new puzzles to solve "simultaneously" while working to solve them? (i.e. multitasking)
“A meritocratic society is one in which inequalities of wealth and social position solely reflect the unequal distribution of merit or skills amongst human beings, or are based upon factors beyond human control, for example luck or chance. Such a society is socially just because individuals are judged not by their gender, the colour of their skin or their religion, but according to their talents and willingness to work, or on what Martin Luther King called 'the content of their character'. By extension, social equality is unjust because it treats unequal individuals equally.” "Political Ideologies" by Andrew Heywood (2003)
I'm definitely a transhumanist. I can spend hours thinking about how humanity will turn out, like a giant computer trying to understand reality, where we have grown into one giant brain. But that's a little far into the future.
A friend of mine said transhumanists are fools, because they all wanted to freeze their brains and have a future civilisation unfreeze them, so they can see the future. Of course, I would love to see the future, but why would the future civilisation want to have me around? Even if the Earth were still a viable environment for me, if they could produce the food I need, and show me how they think (if they're all computers, I would never be able to see what they know, so it would still be pointless), why would they want to do that? Do we try to explain to other apes how the Earth formed, or how quantum mechanics works?
Also, I think he generalised the group...
Well, since I'm having you guys do some searching, have a look at this and see if you fall into it:
Extropianism
http://en.wikipedia.org/wiki/Extropianism
Well, do you mean that the second law of thermodynamics need not apply to the future?
I am tending to doubt that such would be possible, even over fairly short time scales.
Don't get me wrong here, I think that given the resources, it might be a kick to get to live for a few centuries. However, there are a few problems that I see coming up.
As I said before, if we get to the point where we cannot sustain the economy needed to keep all of the AI going, then eventually they have to get shut down. That would have to suck almost as much as dying does. The provision being that the biological survivors might be able to rebuild the economy at some point. But would the survivors actually turn the AI people back on?
Honestly, there is no sensible answer to that question until it actually happens. Even so, if the survivors know that we existed, then they might know what happened, and why wouldn't they blame us for it? Really, don't we know better than the biological critters? Possibly not, if we allow that shit to go down.
=
Holy crap, I know, but it would be a big jump.
Defining intelligence is indeed a very difficult task, and it could definitely shut down the Intelligent Design argument (seeing as we're atheists, I suppose this should be noted).
Like self-awareness, it seems to be an emergent phenomenon.
Could intelligence just be the recollection of memory to benefit one's survival?
http://www.ted.com/talks/lang/eng/jeff_hawkins_on_how_brain_science_will_change_computing.html
Jeff Hawkins' book On Intelligence solidified my understanding of intelligence, especially how a neural column works in the human brain, and I would highly recommend it for anyone interested in getting a clearer picture of what intelligence is, especially biological human intelligence (as Hawkins' goal is to replicate the brain's method of intelligence).
I already had inklings of how I thought of intelligence, and had a fairly good intuitive feel for it, but after reading On Intelligence and a few other articles in particular (don't know the titles off the top of my head), I settled on what is my current view, and it all settled nicely into the pragmatic philosophy stuff that I advocate these days.
Basically, long story short, intelligence is the ability to predict the future. The better the prediction, the greater the intelligence. The more variety of situations in which you can predict the future, the greater the intelligence. The more robust the predictions (getting one detail wrong doesn't throw the whole thing into chaos) the greater the intelligence.
The human brain, especially the neo-cortex (which is what neural columns are made of; or from another angle, the neo-cortex is made of neural columns), is essentially a massively parallel, naturally evolved, prediction machine.
Human culture, especially technology and science, are one level beyond the human personal intelligence of the brain (a massively parallel network of brains, aka a society), and represent a kind of technological intelligence that is already 'trans' human if you think about it. For example, a government or even a modern corporation, with its established 'personhood' does not depend on any particular person to 'be' that entity. It exists independently of a single human life-time, can exist for many decades, etc. And these kinds of organizations survive by predicting the future well enough to maintain their own stability under changing environmental conditions. The modern scientific enterprise might be considered a neo-neo-cortex, of sorts.
But these examples still depend on the biological substrate of human brains. The next step would obviously be to automate the functions of human brains (quite possibly by automating neurons themselves).
What holds it all together is the ability to predict the future, especially to be able to impose a will on it, to influence which possible futures will become more likely or less likely. To make conscious choices and to act on them. Essentially, any time you make any sort of choice and try to act on it, you are predicting a possible future and trying to influence the present moment in order to shift the likelihoods in the direction you chose.
If you want to bring about a better future, you try to increase the likelihoods of events that will in turn increase the overall probability that that particular future will occur. If you want to avoid a bad future, you try to decrease the likelihoods of events that will in turn decrease the probability of that particular bad future.
All of this requires fast, accurate, and robust prediction of the future (of possible, imagined futures, to be more precise).
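That "shift the likelihoods" idea can be sketched as a toy decision problem. Each available action induces some probability of the preferred future occurring, and the agent simply picks the action that makes that future most likely. All the action names and numbers below are invented purely for illustration:

```python
# Toy model: each action maps to an (invented) probability that the
# preferred future comes about.
actions = {
    "do_nothing":  0.20,
    "save_money":  0.55,
    "learn_skill": 0.70,
}

def best_action(action_probs):
    """Choose the action that maximizes P(preferred future)."""
    return max(action_probs, key=action_probs.get)

print(best_action(actions))  # -> learn_skill
```

A real agent would of course have to *estimate* those probabilities itself, which is where the hard part, prediction, comes in.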
If anyone's into science fiction (the real stuff, not 'sci-fi'), I would also highly recommend Vernor Vinge. I have read three of his novels and not been disappointed yet. They tend to be tales of 'possible futures to avoid' (or rather, the heroes just barely manage to avoid an incredibly bad future, for a better-but-still-dangerous one). The first was actually available free online, and older, but still rather awesome, especially considering when it was written (1981): True Names
The next two, both Hugo award winners, are A Fire Upon the Deep, and Rainbows End. A Fire Upon the Deep was particularly good, enormous in scale, but completely comprehensible due to Vinge's ability to relate big ideas into the lives of normal (but heroic) human characters. I especially liked his rat/wolf alien creatures who operate telepathically (with no magic or woo woo, you must read it) in small multi-mind packs of about 6-10 individuals who grow more intelligent (within limits) the more members the pack has (e.g. a pack with only 3 or so would be barely smarter than a frightened dog, but gaining 1 or 2 more members brings it to the equivalent of a talking chimp level, and 1 or 2 more might get it to full human level). Vinge handles this seemingly bizarre multi-mind with ease and immerses you into this distributed network persona better than any similar-sounding SF device I've encountered.
And that's not to mention the mega AI ... but that would be giving too much away.
Wonderist on Facebook — Support the idea of wonderism by 'liking' the Wonderism page — or join the open Wonderism group to take part in the discussion!
Gnu Atheism Facebook group — All gnu-friendly RRS members welcome (including Luminon!) — Try something gnu!
I think the other important aspect of our intelligence is predicting the likely reactions of other individuals in our society to the actions of ourselves and third parties, which requires modelling, with the help of the mirror-neurones, the thought processes and feelings of others, including what they might be thinking about what we are thinking...
BTW, I too like Vernor Vinge. I am not sure I have read 'Rainbows End'; it may not have been out when I finished 'A Fire Upon the Deep' - it was a few years ago now.
Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality
"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." - Sam Harris
The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me
From the sublime to the ridiculous: Science -> Philosophy -> Theology
Yes, I agree Bob. Human intelligence is particularly sophisticated, especially in social modelling.
Aside: One thing I'm always wary of when trying to understand intelligence is the tendency of a lot of people I've met to separate human intelligence from animal intelligence, or basically any non-human intelligence. It's a kind of human exceptionalism that verges on dualism. There's real (humans only!) intelligence, and then there's sorta-looks-intelligent-but-actually-it's-not-really intelligence. Pet peeve of mine. Had too many drunken arguments back in university days when I wasn't as good at expressing myself. Very frustrating.
I tend to think of general intelligence as something that exists on a continuum, such that humans are generally the most intelligent (but not in every measure), but there is no clear dividing line between humans and other animals. I think it's more of a difference of degree than a difference of kind. A jet fighter is faster than a paper airplane, but that doesn't mean the paper airplane 'can't really fly'.
Anyways, I realized that I completely forgot to mention the connection between intelligence and learning.
It's one thing to be able to predict something. I think I would revise what I said and characterize the ability to predict as more like 'knowledge' than generalized 'intelligence'. I would probably re-phrase myself to say that intelligence is the ability to *learn* how to predict the future based on prior experiences. Or, even shorter, intelligence is the ability to accumulate knowledge.
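The "ability to *learn* how to predict" can be illustrated with a minimal frequency-based next-symbol predictor: it starts out unable to guess at all and gets better as it accumulates experience. This is a deliberately crude sketch of the idea, not a model of the brain or of Hawkins' architecture:

```python
from collections import defaultdict, Counter

class BigramPredictor:
    """Learns which symbol tends to follow which, from observed pairs."""

    def __init__(self):
        # counts[current][next] = how often 'next' followed 'current'
        self.counts = defaultdict(Counter)

    def observe(self, sequence):
        """Accumulate experience from a sequence of symbols."""
        for cur, nxt in zip(sequence, sequence[1:]):
            self.counts[cur][nxt] += 1

    def predict(self, symbol):
        """Most frequently seen successor, or None with no experience."""
        seen = self.counts[symbol]
        return seen.most_common(1)[0][0] if seen else None

p = BigramPredictor()
p.observe("abcabcabc")
print(p.predict("a"))  # -> b    : prediction accumulated from experience
print(p.predict("z"))  # -> None : no experience, no prediction
```

The point of the sketch is the distinction drawn above: the `counts` table is the accumulated *knowledge*, while the capacity to build and improve that table from experience is the *intelligence*.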
Wonderist on Facebook — Support the idea of wonderism by 'liking' the Wonderism page — or join the open Wonderism group to take part in the discussion!
Gnu Atheism Facebook group — All gnu-friendly RRS members welcome (including Luminon!) — Try something gnu!