Ethics and science.
What role can/should science play in morality? Should science even have a role in determining morality? If so, why? Can science determine what is moral and immoral if morality is purely subjective?
I believe we can objectively determine, or at least approximate, whether moral propositions are true or false, in order to decide whether we should accept or reject people's moral claims. At the moment we allow people to say a behaviour or lifestyle is moral or immoral without any empirical justification or examination of their reasoning, despite the fact that they are inherently making empirical assertions.
Here are some examples of how we would go about this:
Someone says the homosexual lifestyle is immoral.
You ask them why it is immoral and they say it leads to the destruction/fall of society.
So they are claiming the following: We desire a functioning society and homosexuality does not lead to a functioning society.
That person is therefore ONLY telling the truth if two things are true:
(a) we as a matter of empirical fact desire a functioning society, and
(b) homosexuality as a matter of empirical fact does not lead to a functioning society.
So, the moral proposition "homosexuality is immoral" is ONLY true if both (a) and (b) are true. If either is false, then so is the moral proposition. The truth of the moral proposition necessarily follows from the truth of both (a) and (b). And since (a) and (b) are empirical statements, they are testable, and this is where I think science must come in. If science were to determine that both (a) and (b) were true, then the moral proposition would be confirmed, and homosexuality would indeed be immoral. If science were to determine that either (a) or (b) was false, then the moral proposition must be rejected. So the validity of the moral proposition necessarily depends on a scientific inquiry into those two underlying claims (predictions) that are being made.
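The truth-functional structure being proposed here can be sketched as a small check. This is only an illustration of the argument's shape; the function name and the boolean inputs are hypothetical placeholders for what actual scientific inquiry would have to supply:

```python
# Sketch of the conjunctive structure: a moral claim of the form
# "behaviour B is immoral because it undermines goal G" is treated
# as true only if BOTH empirical premises hold. The booleans below
# stand in for empirical findings; they are not real data.

def moral_claim_holds(we_desire_goal: bool, behaviour_undermines_goal: bool) -> bool:
    """The claim 'B is immoral' follows only from (a) AND (b) together."""
    return we_desire_goal and behaviour_undermines_goal

# If science falsified either premise, the claim must be rejected:
print(moral_claim_holds(True, True))   # both premises hold -> claim stands
print(moral_claim_holds(True, False))  # premise (b) fails  -> claim rejected
```

The point of the sketch is just that the moral conclusion is a conjunction of two testable premises, so falsifying either one refutes it.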
Another example:
It is moral to wear a seat belt.
You ask why and they say because it saves lives.
So they are claiming the following: We desire to be safe from harm and to not cause death to others, and if we wear seat belts we are more likely to be safe, and not cause the death of others.
That person is therefore ONLY telling the truth if two things are true:
(a) we desire as a matter of empirical fact to be safe from harm and to not cause death, and
(b) seat belts as a matter of empirical fact can increase our safety and decrease the death toll.
Anyone have problems with this method/process? The essential point I am making is that we can, and should, scientifically examine moral propositions and try to approximate, to the best extent that we can, whether they are true or false. We can ask people to empirically justify the predictions (the 'a' and 'b') of the moral proposition.
Now this only applies to specific reasoning/claims (e.g. homosexuality is immoral because it leads to destruction; seat belts are moral because they save lives, etc.), so how can we establish a 'universal' criterion/moral compass by which we can distinguish moral from immoral? (I use scare quotes for universal as I know it carries a lot of connotations in moral philosophy. I couldn't think of another word.)
We can make the following prediction (which is testable):
Humans desire to seek X and avoid Y, more than anything else.
X = happiness / contentment / satisfaction / well-being / joy / friendship / success / peace / etc / etc.
Y = harm / pain / dissatisfaction / hurt / loss / etc / etc.
There are necessarily some behaviours that maximise the former (X) and minimise the latter (Y). Whatever those behaviours are, they are by definition moral behaviours, because there is absolutely nothing else you would want more than this. Those behaviours therefore supersede everything else.
Some people may say that just because some behaviour helps me achieve (X), that does not make it moral. However, it does not make any sense to determine morals in contradiction to those things that humans desire more than anything, so what is moral must conform to what we desire most, and that is why behaviours which help us achieve (X) would, overall, be moral.
It is also important to point out that we would determine whether an action is moral or immoral by its overall and long-term effects on the individual, on others, and on society in general. So it doesn't matter that a bad action like murder may in some cases seem moral, or that telling the truth may sometimes bring harm, because overall murder would negate (X) and so would be immoral, and honesty would negate (Y) and so would be moral. This system therefore allows for context, but can nevertheless reach factual conclusions about morals.
"It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring" -- Carl Sagan
I think the issue I might have is this. Suppose that one can somehow do a "scientific morality". Presumably moral claims have something like the following propositional structure:
a)You ought to do A.
or
b) You ought not to do A.
A scientific approach to this would presumably have something to do with testing a hypothesis, in the sense that you form a hypothesis, state some variables, and test the hypothesis depending on those variables. This is somewhat of a bastardization of what scientists actually do, but my point is that testing hypotheses is essentially what you do in science and is a, if not THE, major part of what science is. The problem is I just don't see how you can say, AS A SCIENTIST, things like:
c) We ought to do x in Iraq.
How would we form a hypothesis like (c) and test that hypothesis?
A second worry is about the notion of practical interests and risks in decision making. When one says things like (a) or (b), one weighs the practical interests and risks of the agent, and depending on those practical interests and risks one makes the ethical decisions that one should. But what counts as "sufficient evidence" for choosing what is best seems to change depending on one's practical interests, stakes, and risks (this is a fairly new view in epistemology, explored further by Fantl & McGrath and by Jason Stanley). But when one does objective science, practical interests and stakes have nothing to do with evidence; evidence points to the truth of the proposition. "Sufficient evidence" does not shift depending on the scientist's practical interests or risks. One way of getting around this is to argue that practical interests and stakes do matter in science, and that what counts as "sufficient evidence" does shift depending on the scientist's interests. But I'm not sure a lot of scientists want to do this.
But for whom? The person committing the action? The society?
Anyway, saying X is moral because it's good for society may not be so easy to verify.
For example, I say it is moral to cut taxes because it will lead to a better society.
Now, in principle we should be able to verify this. However, cutting taxes may or may not work, due to other factors: maybe the government needs tax money to provide health care, and if we cut taxes health care will suffer, which is worse for society; or maybe the government is spending too much money, and cutting taxes won't hurt the government and will provide more jobs, leading to a better society.
[edit]
forgot.
Determining whether cutting taxes will hurt or help society could become an extremely complex process, and not so easy to verify given the complexity of modern economics.
[/edit]
Put another way, science cannot answer a moral question, but it can verify or refute moral claims. I think any moral foundation which does not embrace the scientific method is basically worthless since it defies falsification.
I believe it's possible that science could provide a moral code, but I am not prepared to defend this belief, and only propose it as being falsifiable in the future. I don't think we have enough data about human psychology yet. If I had to take a stab, though, this would be the basis of a scientific morality: people's wants and emotions are not always the same as people's beliefs about their wants and emotions. Science can illuminate where our beliefs about our desires are actually contrary to our true desires. Once these desires are uncovered, science can falsify predictions based on behaviors designed to achieve these desires. That which is moral is that which genuinely satisfies the genuine desires of the largest number of people. Of course, this has a lot of problems, and I suspect in the end, we will still be left with the problem of conflicting genuine desires. For instance, we genuinely want to raise a family, but we genuinely want to reduce starvation and suffering in the world. We cannot have both, and people vary considerably in how much they want one or the other. Any measure between the two would be arbitrary -- even if it was something as noble as "Saving the Human Species."
Still, science is quite useful within existing semi-arbitrary systems because it can flesh out what we think we want and what we really want.* I'm trying to think of a good example, and the kinds of things that are coming to mind are Christian prescriptions for happiness: monogamous lifelong marriage between one man and one woman, in which the wife dutifully submits to the will of the husband. Supposedly (according to many sects of Christians) this is the true route to happiness. Many people believe that they want this kind of relationship because they believe it will make them happy. Science can study large groups of people who believe this before and after marriage, measure their mental health, happiness, sense of sexual fulfillment, etc., and compare them with people who hold other beliefs. Science can then tell us that strategy X has a very low likelihood of increasing anybody's happiness, while strategy Y has a very high likelihood. Since morality can only be coherently expressed as an IF-THEN statement, we can say: IF increasing mental health, happiness, and sense of sexual fulfillment as much as possible is our goal, THEN we should act according to strategy Y.
I think it's very dangerous to start talking in terms of universal morals. We often take it as a given that such things ought to exist, but other than the claims made by various religions and philosophers, what evidence do we have that they do exist?
The most common answer to this question is that humans universally behave in certain ways -- there are no societies, for instance, where humans routinely kill one out of every three babies at random. This sounds satisfying at first, but it's not really an argument. It's just an observation:
1. Humans behave consistently with regard to some things deemed "moral."
2. ???
3. Therefore, there are universal moral principles.
Humans universally engage in war, but we don't often think of war as being a moral virtue. Morality can only be coherently expressed as an IF-THEN statement. Otherwise, it's just pronouncements, not logic or science. Suppose science could prove conclusively that the entire human race would die in three generations if we did not reduce the world population by a third within one generation. We could then make the following argument:
IF the human race is to survive, THEN we must at any cost reduce the population by a third.
IF voluntary population reduction does not occur immediately, THEN (If we are to survive, then we must act) we must forcibly reduce the population.
IF babies will live longer and use more resources than currently living adults, THEN it is better to kill babies than the elderly.
Etc, etc, etc... I don't need to spell the whole thing out. I'm just illustrating that even universal behaviors might be against "moral" behaviors, given extreme enough circumstances.
Consider the following moral "universal." It is always wrong to kill one's own child.
Now, consider a mother with identical twin babies who is accosted by a crazed gunman. (I just saw The Dark Knight last night, and I'm feeling very Joker-esque.) The gunman assures the woman that she has exactly five minutes to decide which of her children will live and which will die. She must kill one of her two babies, or the crazed gunman will kill both in drawn out painful ways.
Obviously, the mother ought to kill one of her own babies, even though it seems emotionally wrong. By killing one baby, she will be preserving as much life as possible, which seems to be a higher moral imperative at this time than always avoiding killing your own children.
However, contrast this with a group of ten terrorists holding one man hostage. The moral rule, "Criminals are worth less than innocents, and killing many criminals is acceptable to save few innocents," takes precedence over "preserve as much life as possible."
If we wanted to get really awful, we could imagine a father whose five sons have formed a gang and have kidnapped one innocent girl. We can imagine a situation in which a father would be forced to conclude that killing all five of his sons was the most moral thing to do.
To put it simply, I've never heard a moral universal which cannot be trumped by another moral code. Even if we apply our scientific method to these moral questions, science cannot tell us in all cases which of several moral imperatives is dominant.
Well, I think you're going to run into some trouble here, because many of these things are hopelessly relative. Suppose it makes me happy, content, and satisfied to have lots of gold, and the only way I'm going to get any gold is to kill the guy who has the gold. We can argue that I won't truly be happy and content with blood on my hands, but this claim must be scrutinized scientifically. Are there people who can commit crimes and maintain high levels of personal fulfillment? I don't think you'll like the answer in light of your current methodology.
How are you taking the happiness of others into the equation? It seems self-evident that there are times when I do things which diminish my overall happiness because of a sense of duty to other people. Suppose I found a suitcase with ten million dollars, and it had the name of the owner on the tag. I could move to another country and live extraordinarily well for the rest of my life. I could justify this by supposing that anyone who would put ten million dollars in a suitcase must have a lot more money than that to be able to do so. Still, most people would agree that even if the suitcase belonged to Bill Gates, and ten million dollars would hardly be noticed in comparison to his vast wealth, the moral thing to do would be to give up a life of ease and happiness in exchange for the grudging moral high ground of "doing the right thing."
What is the right thing in your model? I will almost certainly have a happier life by taking the money, and Bill Gates will almost certainly not miss it.
I guess I would ask by what method you would prioritize universals. If you don't have such a method, I'd argue that universals which only apply in limited situations are not universals, and if they are moved sufficiently down a list of priorities, we cannot consider them to still be on the list.
* Even this terminology gets fishy. Maybe we really want a monogamous submissive marriage, so our belief matches our desire, but our belief is based on the certainty with which we believe that our life will be happy if we have such a marriage. So, do we really want the marriage, or do we really want long term happiness?
Atheism isn't a lot like religion at all. Unless by "religion" you mean "not religion". --Ciarin
http://hambydammit.wordpress.com/
Books about atheism
Morality is a complete BS concept. Basically it says one should do or not do something and expect nothing in return except the promise of heaven instead of hell. Morality crusaders are just people that want something for nothing.
All cooperative and social relationships must be based on contracts, where all parties agree to the rules for cooperative behavior. All standards of behavior must be based on receiving a reward in this life, not in an imaginary next life.
So yes science can play a role in determining the potential outcomes of various social contracts and rules, but 'morality' is just a scam.
Taxation is the price we pay for failing to build a civilized society. The higher the tax level, the greater the failure. A centrally planned totalitarian state represents a complete defeat for the civilized world, while a totally voluntary society represents its ultimate success. --Mark Skousen
Well, we can look at the net effect of cutting taxes. We would also need to determine what it is that makes a society better (more freedoms, better health, etc.) in order to actually determine whether the action leads to a better society. I agree, however, that it would be complex.
To elaborate on my point. I am saying that there are objective and universal moral truths/facts, but not that morality in its entirety is objective.
I do not deny that there is some element of morality which is subjective/relative to context. I think morals come in two forms: those derived from human nature (which result in the similarities in our morals) and those derived from culture (which result in the differences in our morals). The argument that I present here relates to the former. And I am saying the former (human nature) should supersede the latter (culture), since morals based on actual facts about the human condition must be more true and/or significant than morals based on culture, tradition, or religion. Richard Carrier writes in an article that it might be better to call these relative morals (those that come from culture) "principles" rather than "morals", in order to reserve the title "morals" for only those principles that are universal. I would agree. It is certainly true that morals can come from both facts about human nature and from cultural elements, but since these have a widely different scope (e.g. the former is universal, at least for everyone of sane mind, i.e. sociopaths do not apply, whereas the latter only applies to certain individuals and/or certain situations) it would make sense to distinguish between them.
Morality could be like food (to quote Sam Harris). There are various types of food, but there is still an objective distinction between food and poison, and within the category of food we can objectively say whether something is healthy or unhealthy. To bring the analogy back, I am saying we can objectively determine/approximate moral from immoral, and that we can objectively place certain actions within the category of 'moral', whereas others will be the result of non-objective methods.
I agree, but I would suggest that this could be the result of ignorance. This inconsistency would likely be the result of a misunderstanding about the effects of a belief or behaviour.
I do not see how someone can believe something contradictory to some underlying desire, since that underlying desire (or desires) would drive their behaviours and their beliefs. There must be something about a belief that leads the person to think that it will satisfy this underlying desire.
To give you an example:
If humans desire happiness, and
murder does not result in happiness, then
we should not commit murder.
Then suppose someone murders my girlfriend.
I then wish to kill the murderer because I think revenge would make me happy.
I would put this down to ignorance. I believe murder will satisfy my desire for happiness; however, I would be ignorant of, or failing to take into account, the actual negative effects of murdering someone, both on myself, on others, and on society overall.
I don't see how these are mutually exclusive.
Either way I believe that if we had to choose between the two, most people, when fully informed, would want to take action to help reduce starvation, or at the very least would want their government to take actions on their behalf.
Yes, this is exactly the sort of thing the argument entails.
In a sense I propose two levels of scientific morality:
a) empirically testing the reasoning people give for their subjective morals statements (e.g. wearing a seat belt is moral because it saves lives.) Even if a moral statement is subjective and relative it is still the case that the moral statement they make either do or do not follow from the reasoning they give.
b) using science to discover the supreme desires of humans and to discover which actions have the greatest chance of achieving those supreme desires.
I'm sure most of us can agree to (a). I would agree that (b) would be harder in practical terms.
I think the idea of universal morality has been tainted by religion. I don't really have a problem with it in principle. If humans desire Z, and behaviour Y is statistically more likely to result in Z, then this would apply to everyone of sane mind (so, not those outside the natural spectrum of human disposition, e.g. sociopaths). I think it would be accurate to call this universal. It is still an IF-THEN statement. We can say: if Z, then Y, with Z applying to all non-sociopaths.
I do not think universal morality entails anything metaphysical, that applies to everyone regardless of the facts about their human nature/condition.
I appreciate that this is just an example to illustrate the point, although I must point out that this would violate other desires, like being safe, happy, etc.
When I've proposed this to others, some have taken it to mean that whatever I desire, whatever my goal is, is therefore moral, without taking into account the overall and long-term effects of our actions, how they affect others and society in general, etc.
I don't think it matters since it isn't purely about individual happiness, but rather the overall happiness unleashed, both on yourself, on others, and on society in general. Even if someone was continually happy with their criminal actions (such as raping or murdering), the level of unhappiness their behaviour would have generated would overall almost certainly be greater. Still, the hypothesis predicts that (assuming they are not sociopaths) if they knew how happy they could be as a non-criminal they would likely favour that lifestyle.
This is a really good example, and one that I'm not too sure how I would answer. I guess you could say that Bill Gates might call the police, forcing you to be on your guard from the law; you may raise suspicions as to where you got all that money from and perhaps would have to lie (and remember those lies).
Still, good example.
I would suggest those actions which have the greatest positive effect on yourself, others and society, in particular long-term effects, would be more important. Conversely, those actions which have the greatest negative effect on yourself, others and society, in particular long-term effects, would be the ones we would most want to avoid.
Also regarding universals in limited situations, would you regard what I wrote above (re: applying to all non-sociopaths) not be sufficient to be a universal?
Forgive my brevity, but I want to ask you one question, and I hope to return to this topic with some time in the next day or two.
Please explain this more. Why is the ubiquity of our instinctive feelings about morality more "true" than that which a culture could derive? Is it not possible that a culture which valued rationality above all else could develop a system of morality which, though it sometimes defied universal human morals, actually served humanity better than those universals?
As an example, if we gave up on the universal moral that childbearing is good, we could create a sustainable population, which would be better than what we have now.
Well, I guess I mean there is more justification for them (and from a universal human perspective, they are by definition more significant), in particular if you wish to make a moral claim as to how others should or should not behave: pointing to a moral proposition validated by empirical evidence would hold more weight than pointing to some purely cultural or religious value. (I think it could be said they are also more true, insofar as truth (in my view) constitutes correspondence to reality. Empirically derived moral facts could therefore in fact be called true; cultural/religious moral values couldn't, under that theory of truth.)
For example, do honour killings, or forcing women to be subservient, actually lead to a better, happier society, with more confident, self-assured children/women? That is an empirical question which science could answer. If the answer turns out to be no, then it trumps any cultural/religious moral argument as to what is best for humans, since we would then have a conclusion based on facts about the human condition; they would only have an ancient set of values based on tradition/authority.
With your question/example, I would say it fits with the scientific moral system being proposed, since it is okay if some moral system sometimes defies our underlying human desires, so long as the overall and long-term effects don't negate those desires.
Hmmm... Something doesn't sit quite right with me here. I'm still trying to get a hold of specifically what it is. The only justification you've presented thus far (that I can recall) is that certain feelings of "ought" are universal among humans. Consider:
TRUE: Humans virtually universally believe they ought to do X.
UNPROVEN: Humans ought to do what they believe they ought to do.
By the same token:
TRUE: There are some situations in which people's beliefs about what they ought to do do not correlate to outcomes that will be beneficial for them.
UNPROVEN: Humans ought to do other than what they believe they ought to do.
In both of these instances, there is a disconnect between the true and the unproven. Neither proves anything. All you've really done is claim that people's beliefs about what is true correspond to that which is more true, and from a certain point of view, that's certainly correct. An individual's perception of reality is the only reality they can have at any given moment, so there can be nothing more "true." However, if the goal is to transcend subjective feelings, it seems like you've just gone in a circle.
But what gives universal morals the power to trump both scientific and cultural/religious morals?
Ummm... there's something not right about the unproven sentence (i.e. it doesn't sound like what I am saying; it might be how you worded it).
The argument proposes this...
TRUE: Humans virtually universally desire X.
TRUE: Certain behaviours/lifestyles have a greater chance of achieving X.
so
IF humans desire X more than anything else,
THEN we should adopt those behaviours/lifestyles.
To put it formally...
P1) Morality is about what we should and should not do.
P2) We desire X more than anything else. (Empirically demonstrated)
P3) Behaviours that maximise our chances of achieving X supersede all other behaviours (by definition), because they have the greatest chance of achieving that supreme desire and there is nothing else we want more.
P4) Certain behaviours Y are statistically more likely to result in you achieving X. (Empirically demonstrated)
P5) Certain behaviours Z are statistically more likely to result in you negating X. (Empirically demonstrated)
C1) Therefore you should behave in accordance to Y, and not behave in accordance to Z. [via P2 and P3]
C2) Therefore Y are moral behaviours and Z are immoral behaviours. [via P1 and C1]
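The P1 to C2 schema above can be mirrored as a small decision procedure. This is only a toy rendering of the argument's shape; the behaviours, thresholds, and probabilities below are invented placeholders, not empirical findings:

```python
# Toy rendering of the P1-C2 schema: classify behaviours as moral (Y),
# immoral (Z), or neither, according to whether they statistically
# promote or negate the supreme desire X. All numbers are made up.

# Hypothetical probability that each behaviour leads to achieving X:
effect_on_X = {
    "honesty":   0.8,   # statistically promotes X -> belongs in Y
    "murder":    0.1,   # statistically negates X  -> belongs in Z
    "whistling": 0.5,   # no clear effect          -> neither
}

def classify(behaviour: str, promote_threshold: float = 0.6,
             negate_threshold: float = 0.4) -> str:
    """Apply P4/P5 empirically, then draw C1/C2 as a label."""
    p = effect_on_X[behaviour]
    if p >= promote_threshold:
        return "moral (Y)"      # C2: behaviours in Y are moral
    if p <= negate_threshold:
        return "immoral (Z)"    # C2: behaviours in Z are immoral
    return "neither"

print(classify("honesty"))
print(classify("murder"))
```

The empirical work in the argument sits entirely in P2, P4, and P5 (the desire and the probabilities); the classification step itself is mechanical once those facts are in.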
Right, which is the result of ignorance of the facts, or faulty reasoning. The argument predicts that with full knowledge of the facts and correct reasoning, you will always make the best decisions.
(And just so we're clear, I presume you're not suggesting that the thing they desire most would not be beneficial to them, but rather that their beliefs and behaviours may not correspond to that underlying desire. Right? If so, then see the previous sentence. If not, then I would deny that people could have an underlying desire which was not beneficial.)
Yes, they should, and if they had all the facts and were reasoning correctly, then they would; there would never be a conflict between their underlying desire and their beliefs/behaviours if these two conditions (knowledge and reasoning) were met. If they knew all the facts and reasoned correctly, they would never conclude anything which contradicted that underlying desire.
I'm saying universal morals are scientific morals/those derived from science, since it is by science that we determine their universality.
And I'm saying they trump the subjective/relative morals of cultures and religions if you're trying to make claims as to how others should behave (which includes if/when your behaviours directly interfere with others). Given a moral conflict, someone with objective empirical evidence to justify their moral stance takes precedence over purely subjective morals, such as in a debate over how people should act, or whether a bill should pass, etc.
I don't think science is the right word.
I think that morals are derived through reasoning, but science is only one particular form of reasoning.
History and Geography, for example, are rational pursuits but don't qualify as science.
I think that even psychology has been questioned as to whether it meets the conditions for it to qualify as a science.
There's a reason why we distinguish between science and philosophy, and I think for that same reason morality comes under the latter category.