Old 10-08-2010, 07:28 AM   #20
RobertLS

Concisely: no, it's not arbitrary. If you approach ethics scientifically, as the question of finding the smallest set of axioms that best explains our moral sentiments (with the tradeoff between simplicity and accuracy handled in the usual Kolmogorov sense), then you should conclude that rule consequentialism (and, in particular, maximizing the happiness of a certain group of people) thoroughly explains almost all of our sentiments.
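The "simplicity vs. accuracy in the Kolmogorov sense" tradeoff mentioned above can be sketched as a two-part description length, MDL-style. This is just a toy illustration with made-up numbers; the function and its parameters are hypothetical, not anything from the actual literature:

```python
# MDL-style sketch: score a moral theory by the bits needed to state its
# axioms plus the bits needed to list the sentiments it fails to explain.
# All numbers are hypothetical, for illustration only.

def description_length(axiom_bits: int, exceptions: int,
                       bits_per_exception: int = 10) -> int:
    """Two-part code: cost of stating the theory + cost of its misfit."""
    return axiom_bits + exceptions * bits_per_exception

# A simple theory that misexplains a few sentiments can still beat a
# sprawling theory that explains every sentiment individually:
simple_theory  = description_length(axiom_bits=50,  exceptions=3)   # 80
complex_theory = description_length(axiom_bits=500, exceptions=0)   # 500
print(simple_theory < complex_theory)  # True
```

On this picture, "best explains" means minimizing the total description length, which is why a small axiom set can win even without perfect coverage.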
If there were a way to tell the future, your reasoning would be correct. In a teleological ethical system, there is no way to determine with certainty that an action is 'right' before it is undertaken. That appraisal can only come after the action has been carried out and the consequences tallied. So instead, your ethical system becomes a matter of statistical probabilities: an action is 'right' under the assumption of a particular expected outcome, but if the consequences differ from that outcome, it could become 'wrong'. What is the role of justice in such a system? How is the man treated who kills one person to save two (net effect positive)? What if he mistakenly thought he could only save those two people by killing one, and they all died (net effect negative)?
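The gap being pointed at here, between an act that is 'right' in expectation and one that turns out 'wrong' in fact, can be made concrete with a toy expected-utility calculation (utilities are made up: each life saved counts +1, each death -1):

```python
# Toy expected-utility model of the "kill one to save two" case.
# Utility assignments are hypothetical illustration, not a real ethical model.

def expected_utility(p_success: float) -> float:
    """Expected net utility of killing one person to save two,
    where the rescue succeeds with probability p_success."""
    # Success: 1 death, 2 lives saved -> net +1
    # Failure: all three die          -> net -3
    return p_success * (+1) + (1 - p_success) * (-3)

# Ex ante, the same act flips between 'right' and 'wrong' depending
# only on how confident the agent is entitled to be:
print(expected_utility(0.9))   # 0.6  -> positive: act looks 'right'
print(expected_utility(0.5))   # -1.0 -> negative: act looks 'wrong'
```

The point of the sketch is that the ex-ante verdict depends on a probability estimate, while the ex-post verdict (net +1 or net -3) depends on how things actually went, and the two can disagree.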

I don't know. It doesn't jibe well with me as 'scientific'. Neither does the greatest good for the greatest number. It reminds me too much of Brave New World.


 
