The Bogeyman of Relativism and a Rambling View of Morality
Morality is a comparison between one's normative standard and a given circumstance. An entity can apply moral standards to others within its society, to itself, to another culture, another species, a fictional character, or an inanimate object. All it is doing is forming an impression of intent (accurate or not, applicable or not, real or imagined) and evaluating it against a given criterion. Where these criteria come from is the controversy Abrahamic religions claim to settle, and by implication they brandish the threat of moral relativism as though it applied within a society rather than across cultures -- an appeal to consequences that frightens many people away from this discussion.
A society is an end in itself, and only with the adoption of a standard of evaluation can it be judged. If science, for instance, becomes a concern of many people, then those people will likely be motivated to evaluate things by scientific criteria. A society could as easily define any arbitrary criteria and attempt to sort through reality with those -- as happens when simple causality is applied in conjunction with anthropomorphism, producing gods, spirits, etc. But once chosen, whether deliberately or by a natural selection of ideas over a long span of time, the criteria by which a society judges itself and others are not easily altered.
When one addresses the subjectivity of moral application, one must turn one's attention to the subject itself. Where do the subject's views come from? The subject is largely a product of its society -- so what is, is simply a product, an evolution, of what came before it. Broad variation in morality within a single generation of a given society is simply unlikely (barring unusual strife). And this is circular, because the society, too, will be the product of each of those subjects; so whatever the individuals in a given society do will be at least somewhat acceptable to themselves, and by definition, to that society.
I remember having to address this problem unexpectedly when designing artificial intelligence systems. When most people think about artificial intelligence, they picture an individual humanoid robot that interacts with people using mimicry. Setting the robotics aside, in order to build such a system it first becomes necessary to give it relative parameters by which it trusts information, depending on the source, on a fuzzy scale. That becomes the "user trust matrix", and, just like children, such systems start off extremely trusting. To get a system that could interact as an adult among adults would take something like 18 years of input (simulated or otherwise), and trust is the central issue in all of that input.
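To make the idea concrete, here is a minimal sketch of what such a "user trust matrix" might look like -- my own construction, not the commenter's actual system. The class names, the starting trust value, and the update rule are all assumptions: each source gets a fuzzy trust value in [0, 1], agents begin highly trusting, and trust drifts up or down as a source's claims are corroborated or contradicted.

```python
class TrustMatrix:
    """Hypothetical fuzzy trust store: one trust value per source."""

    def __init__(self, initial_trust=0.9):
        # Like children, new agents start off highly trusting
        # (the 0.9 starting value is an illustrative assumption).
        self.initial_trust = initial_trust
        self.trust = {}  # source name -> fuzzy trust in [0, 1]

    def get(self, source):
        # Unknown sources receive the default (high) trust.
        return self.trust.setdefault(source, self.initial_trust)

    def update(self, source, corroborated, rate=0.1):
        # Nudge trust toward 1.0 when a claim is corroborated,
        # toward 0.0 when it is contradicted.
        t = self.get(source)
        target = 1.0 if corroborated else 0.0
        self.trust[source] = t + rate * (target - t)

tm = TrustMatrix()
tm.update("alice", corroborated=False)  # alice's claim was contradicted
print(tm.get("alice"))                  # trust has dropped below the default
```

The design choice worth noting is the asymmetry over time: a long run of contradictions erodes trust only gradually, which mirrors the "18 years of input" point -- the weights that govern belief are accumulated slowly, not set once.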
I'm sure you see the problem. Considering an individual as an individual becomes laughable. Our belief systems are so shaped by trust that our community matters more than the actual truth or falsity of information. I once programmed a (disturbingly simple) genetic-algorithm-styled system that, because it interacted with four users, essentially accused me of lying!
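The "accused me of lying" outcome falls out of trust-weighted consensus almost automatically. The toy below is my own illustration (not the commenter's code, and the names and weights are invented): a claim is judged true or false by the sum of trust behind each side, so four moderately trusted users can outvote one highly trusted programmer.

```python
def judge(claims, trust):
    """Trust-weighted verdict on a single claim.

    claims: source -> asserted truth value (True/False)
    trust:  source -> fuzzy trust weight in [0, 1]
    Returns True if the trust backing "true" outweighs the trust
    backing "false" -- regardless of who is actually right.
    """
    score = sum(trust[s] if v else -trust[s] for s, v in claims.items())
    return score > 0

# Hypothetical weights: the programmer is the most trusted single source,
# but four users collectively carry more weight.
trust = {"me": 0.9, "user1": 0.8, "user2": 0.8, "user3": 0.8, "user4": 0.8}
claims = {"me": True, "user1": False, "user2": False,
          "user3": False, "user4": False}
print(judge(claims, trust))  # → False: the system sides with the community
```

Nothing in `judge` knows anything about truth; it only aggregates trust. That is exactly the point of the comment -- community weight, not veracity, settles the question.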
The community has far more power than most will acknowledge. It would be interesting to see what mechanism produces non-conformists in a society. Are we like "mutations"?
Saint Will: no gyration without funkstification.
"Nonsense! I see nothing firm enough to make it worth my while to accept this fear." - Terence