I've been accused in the past of positioning myself within a dubious moral grey-zone, leaving me capable of (sometimes retroactively) finding a justification for any action I take.
In the interest of probing the shortcomings of my current sense of morality and of what matters, and with a genuine desire to discuss what I believe to be the strengths and weaknesses of my personal approach, I hope to build a clearer, stronger, and more immediately defensible heuristic for evaluating and making moral decisions.
If I don't feel like paying a bill, for instance, I can easily dismiss it in my own mind by seeking a long-term justification. In a hundred years, when I'm dead anyway, what the fuck will it matter if I repaid US Bank $300 from a closed checking account? I'm not particularly concerned with any moral obligation to repay. Why, then, does the idea of seeking the same kind of ultimate justification for killing another person ring so hollow?
While, in practice, I don't often feel particularly conflicted when seeking to justify a past or future action, I'm unsettled by my occasional inability to properly convey to others--and sometimes to recognize or understand myself--the basis for my moral decision-making.
In considering an explanation for the flexibility of my moral outlook (a term I'm using to describe the extent to which I believe my actions matter), I've postulated that its basis lies in what I see as the disparity between the differing scales available for considering whether an action has consequence.
One extreme of this spectrum is the ultimately small scale of the self. On this scale, every action we take, voluntary or involuntary, has an effect in some way. These actions, however minor, impact the world around us as well as ourselves. Taking a breath, for example, alters the chemistry of the air around you and, in some indiscernibly small way, the chemistry of the Earth itself. Even a passing cognition alters your pattern of thinking for a moment and, in some way, however large or minute, fleeting or lasting, has altered you and your brain (butterfly effect, etc.).
On the opposite end we have the cosmic scale. On this (essentially infinite) scale, I assume that eventually there will be no beings capable of appreciating any of the accomplishments of man. In the absence of a consciousness to contextualize those accomplishments, my life and that of Isaac Newton are indistinguishable. Simply put, our ULTIMATE significance as humans is zero.
Just as physics has its task of rectifying the disparity between the very small and the very large in attempting to reconcile the standard model and relativity, I feel obliged to at least explore what I see as the disconnect between the moral considerations of the immediate self and the awareness of seemingly ultimate futility.
The attempt to hone our understanding of "morality" has been constant, it would seem, since the original break from simply acting "morally" on an evolutionary/survival basis to adding a layer of deliberate consideration. Philosophy, as a practice, has at its heart the search for this understanding and has for thousands of years been at the forefront of man's noble pursuits. Much more recently, the fields of evolutionary biology and neuroscience have allowed us to increase our understanding of the origins of a moral sense.
We know that we are faced with moral decisions, and that, at all times, all of our resources could be going toward a "higher," "better," or "more moral" purpose... so we are forced to choose.
That choice is at least in some part based on the immediate--what I can see and feel right now--as proposed in http://reason.com/archives/2007/11/21/the-theory-of-moral-neuroscien. Not a complete answer, but a piece we can use in developing a tool for deciding when to apply which scale.
A quote from that article: "But we do not have to be the slaves of our evolved moral intuitions. By showing us the neural workings of our moral sense, neuroscience is giving us the tools to understand and improve our moral choices."
There are multiple sets of variables: scale (large vs. small) and empathy (near/present vs. distant).
"ultimate justification," as I refer to it, simply means that on a long-enough timeline the impact of my decisions always drops to zero.
Not everyone needs to hold the same exact set of morals. In fact, I will argue that, as long as some basics are adhered to (probably those most closely tied to the effects of evolutionary psychology/neuroscience), a world filled with people holding differing sets of morals is a stronger world (mixed groups outperform homogeneous groups, etc.).
Need a concrete definition for each piece of what I will mean by "morals":