Free Capitalist Network - Community Archive
Mises Community Archive
An online community for fans of Austrian economics and libertarianism, featuring forums, user blogs, and more.

Ethics question, continued

This post has 52 Replies | 3 Followers

Top 500 Contributor
Female
Posts 260
Points 4,015
Lady Saiga Posted: Thu, Oct 27 2011 12:56 PM

I'm so sorry, but something has happened to the thread in which I just asked this:

Hey okay, so I'm finding the materials available here a little overwhelming...

I have a question that will help me understand the deontological approach to ethics.  In this viewpoint, how would an agent make a decision where 1) inaction would cause harm, and 2) all available actions would cause harm?   Harm being taken to mean restricting or removing individual liberties.  One could come up with all kinds of practical examples of this, it happens all the time.

I can see where a consequentialist approach would be to mitigate harm by sort of spreading it around, or maybe picking a scapegoat, right?  This would justify taking the initiative and voluntarily causing some sort of harm.  But if I understand this correctly it seems to put the sovereignty of one entity above another...which sounds utterly against the whole concept of libertarianism.

Can someone point me in the direction of an author, book, or other resource that deals with this?  I've skimmed through my copy of The Ethics of Liberty without finding a section that seems to answer my question, and I'm not at all practiced in reading philosophy.  I have a feeling this is a dumb question but as I say, I'm just trying to define my terms.  Help!

I want to respond to Autolykos and continue to ask the question, but I can't access the original thread without going to some sort of Canadian drug store.

Autolykos, I suspect you are questioning the validity of my original question.  You've answered specifics with specifics.  Do you believe that there are NEVER situations in which harm is inevitable?  How do you come to this conclusion?  I'm not saying my example was the greatest.  It's why I didn't give one to start with.  I want to ask a general question.

Also I believe you are confusing moral accountability with what I'm actually asking.  I have no trouble assigning blame in this (or most) cases based on what I understand of a deontological approach.  But in the situation of inevitable harm, I have trouble understanding how the deontological approach gives the agent a way to actually make a choice.  Does it suggest that all choices are equal if they all require harm?

In actuality you NEVER have all the information.  But if you believe in free will, you also believe that a rational decision can be made even when your options are limited and your information incomplete.

Again, I may be asking a stupid question. But please explain to me why it is stupid.

 

  • | Post Points: 35
Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Thu, Oct 27 2011 4:08 PM

sorry for interrupting,

There certainly are situations where harm is inevitable (I won't try to delve into specific examples for now), and at best the choice is between a lesser harm and a greater one. But that's just the natural world. If you cause someone harm, even if you didn't intend it that way but had no choice, well then, bad for you, and you must be ready to face the consequences. Sure, the victim has every right to forgive you, etc., but you can never escape personal responsibility.

"Dude... Roderick Long is the most anarchisty anarchist that has ever anarchisted!" - Evilsceptic

(english is not my native language, sorry for grammar.)

  • | Post Points: 35
Top 10 Contributor
Male
Posts 4,987
Points 89,745
Wheylous replied on Thu, Oct 27 2011 4:13 PM

1) We should not talk about arbitrary scenarios, because we lack the tools to answer you with specifics. Give examples and we can help.

2) Actively causing harm is different from inactively causing harm (which I believe is preferable). You should never willingly do harm.

3) I am pretty sure that even if some gigantic catastrophe were to occur, you can have voluntary cooperation.

  • | Post Points: 5
Top 500 Contributor
Female
Posts 260
Points 4,015

There's no interruption, MaikU; it was an open question, not a conversation. As to your thoughts: it sounds like you're suggesting that deontological libertarianism accepts a hierarchy of harm by degrees, but maybe in a different way than consequentialist libertarianism, and maybe that answers my question.  Whereas consequentialists judge the moral value of their actions by the RESULTS, and excuse rights-violations on that basis, deontologists would ignore the effects and make judgments based solely on the degree of the original action?

So if I, as a consequentialist, steal your silver spoons and sell them to buy food to keep myself from starving, I feel more "okay" about it than I do under deontology?  Less likely to seek a completely nonviolent solution, because the moral implications are less emphasized? 

If I understand this correctly, how is consequentialist libertarianism anything other than an excuse to stop beating yourself up over the tough decisions, or to get out of the negative repercussions?  Or seen another way I guess, how is deontological libertarianism anything other than an impossible goal to aim for without ever reaching?  I mean every action can be judged as harmful to someone in some way.

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

Oh hey, I think I’m getting this.   So the difference might be found in what they expected the injured party to rightfully do about the action?  Like a consequentialist would expect you to take the context into account, whereas the deontologist would figure it was the victim’s right to react as though the context didn’t matter?

  • | Post Points: 5
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Thu, Oct 27 2011 5:48 PM

Lady Saiga:
I want to respond to Autolykos and continue to ask the question, but I can't access the original thread without going to some sort of Canadian drug store.

Very bizarre. Did the original thread get hacked or what? Can a mod or admin look into this?

Lady Saiga:
Autolykos, I suspect you are questioning the validity of my original question.  You've answered specifics with specifics.  Do you believe that there are NEVER situations in which harm is inevitable?  How do you come to this conclusion?  I'm not saying my example was the greatest.  It's why I didn't give one to start with.  I want to ask a general question.

That's right. There are never situations in which harm is inevitable. Nothing is inevitable.

Lady Saiga:
Also I believe you are confusing moral accountability with what I'm actually asking.  I have no trouble assigning blame in this (or most) cases based on what I understand of a deontological approach.  But in the situation of inevitable harm, I have trouble understanding how the deontological approach gives the agent a way to actually make a choice.  Does it suggest that all choices are equal if they all require harm?

The deontological approach presumes that harm is wrong. If one does not want to do wrong, then one won't cause harm. Inaction does not cause harm, so it seems the decision is clear: if you believe that you would cause harm by acting (a certain way) in a given situation, and you don't want to cause any harm, then you won't act (that way) in that situation.

Lady Saiga:
In actuality you NEVER have all the information.  But if you believe in free will, you also believe that a rational decision can be made even when your options are limited and your information incomplete.

The Misesian view of rationality, which I follow, does not require one to have "all the information" in order to act rationally. Indeed, the very phrase "act rationally" in the Misesian view of rationality is necessarily redundant, because "act" itself implies "rational". All that the Misesian view of rationality involves is making (and presumably carrying out) decisions based on what one knows and/or what one believes to be true. Even a paranoid schizophrenic is rational in this sense.

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

  • | Post Points: 35
Top 500 Contributor
Female
Posts 260
Points 4,015

It's off topic, but I'd like to discuss that last comment.  How do you distinguish between an agent operating by instinct and one operating by reason?  If you don't, then animals could be defined as having free will as well.  And moral culpability is in question too; would you, as a deontologist, hold the paranoid schizophrenic to the same consequences as a normally functioning person?

  • | Post Points: 20
Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Thu, Oct 27 2011 9:58 PM

Lady Saiga:

There's no interruption, MaikU; it was an open question, not a conversation. As to your thoughts: it sounds like you're suggesting that deontological libertarianism accepts a hierarchy of harm by degrees, but maybe in a different way than consequentialist libertarianism, and maybe that answers my question.  Whereas consequentialists judge the moral value of their actions by the RESULTS, and excuse rights-violations on that basis, deontologists would ignore the effects and make judgments based solely on the degree of the original action?

So if I, as a consequentialist, steal your silver spoons and sell them to buy food to keep myself from starving, I feel more "okay" about it than I do under deontology?  Less likely to seek a completely nonviolent solution, because the moral implications are less emphasized? 

If I understand this correctly, how is consequentialist libertarianism anything other than an excuse to stop beating yourself up over the tough decisions, or to get out of the negative repercussions?  Or seen another way I guess, how is deontological libertarianism anything other than an impossible goal to aim for without ever reaching?  I mean every action can be judged as harmful to someone in some way.

 

Yeah, I wanted to imply exactly that: in a case where harm is inevitable, the best thing to do is to take the consequentialist side. But this is just part of my view of morality. I am not a consequentialist per se, but I am not a hard deontologist either, even though I'd prefer objective morality (which I advocate, in fact). The point is, you can never escape the consequentialist approach even if you believe in deontological ethics, because there are always situations with trade-offs. One choice can cause greater harm than another. And especially in extreme cases (I think we are talking about them now anyway), when there is no time to sit and rationally think about how to solve the problem, you have to act according to your instincts. Many folks would disagree with me here, however.

I'd like to be as clear as possible: I favor deontological ethics (a principled set of rules) but also think that it's bad to ignore the consequences of your actions. Well, not bad, just that humans cannot escape that; that's how we are made up (aka evolved). It's 5:53 AM here, so I should go to bed before I say something stupid. I'll probably check back later and re-read what I wrote.

And yeah, absolutism in deontological libertarianism is an impossible goal; I agree with you there. That is, if I am correctly imagining what this term means to you. But I prefer reaching for the impossible and getting the best results I can rather than sticking with pure consequentialism, which, by the way, to my mind, also has a "calculation problem" of a kind.

"Dude... Roderick Long is the most anarchisty anarchist that has ever anarchisted!" - Evilsceptic

(english is not my native language, sorry for grammar.)

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

Actually, I'm not sure you can combine compatibilism and deontology all that well.  Think about it: the compatibilist view must imply that free will exists by degrees, that when your choices are limited your free will is limited, and therefore your moral culpability is limited.

But as I said before, I don't think you can believe in free will and yet believe that you lose it when your options are all bad; as long as you have more than one option (and let's face it, you always do) you have the ability to originate your action; in that case you DO have free will and hence moral culpability.  It doesn't stop others from also being morally culpable in the scenario, if there is another aggressor acting too.

Plus my original question didn't specify "when you have NO options but one".  That's sort of a different question.

So I guess I fall on the side of deontology: harm is morally wrong even when all you have is a choice of bad options.  You choose the lesser harm if possible, and expect to be treated as the aggressor that you are; if the other guy is nice, maybe he won't treat you that way, but it's his option, and I can't ethically defend myself against HIS self-defense if it's appropriate to the action I took. 

So I guess I should expect to have to pay for those spoons.

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

Hey Autolykos, I still don't see where we've arrived at "inaction does not cause harm".  There are certainly cases where my choice not to do something can be harmful to others.  In fact inaction, if free will exists, can be treated as just a form of original action.  I chose not to pay for those spoons, etc...this is a separate "action" from the taking of the spoons.  It just happens to be, uh, a sort of inactive action.

Furthermore I still want to explore the idea that you can never ONLY have bad choices.  If I can play this out on a chessboard it's possible in our universe.  It's called checkmate.  It's a situation where the possibilities are strictly controlled. I still have many options, it's just that they're all equally bad.

  • | Post Points: 20
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Fri, Oct 28 2011 7:18 AM

Lady Saiga:
It's off topic but I'd like to discuss that last comment.  How do you distinguish between an agent operating by instinct and one operating by reason? If you don't, then animals could be defined as having free will as well.

I distinguish between conscious and unconscious behavior. Rational behavior, in the Misesian sense, is simply conscious behavior. I think all animals do have free will, in the sense that we can't predict (i.e. with absolute precision and with certainty) what an animal will do at any point in the future. Many animals are conscious, however - maybe even insects and the like have some form of consciousness.

Lady Saiga:
And moral culpability is in question too; would you, as a deontologist, hold the paranoid schizophrenic to the same consequences as a normally functioning person?

Yes. If the paranoid schizophrenic murdered someone due to his delusions (as happened in Canada not too long ago), I would hold him liable for murder.

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

  • | Post Points: 20
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Fri, Oct 28 2011 7:31 AM

Lady Saiga:
Hey Autolykos, I still don't see where we've arrived at "inaction does not cause harm".  There are certainly cases where my choice not to do something can be harmful to others.  In fact inaction, if free will exists, can be treated as just a form of original action.  I chose not to pay for those spoons, etc...this is a separate "action" from the taking of the spoons.  It just happens to be, uh, a sort of inactive action.

If I choose not to (try to) rescue a drowning person, have I then killed him? My answer is no, because I didn't put him in the situation where he started to drown. Had I not been there at all, he presumably still would've drowned.

Lady Saiga:
Furthermore I still want to explore the idea that you can never ONLY have bad choices.  If I can play this out on a chessboard it's possible in our universe.  It's called checkmate.  It's a situation where the possibilities are strictly controlled. I still have many options, it's just that they're all equally bad.

You have more options than possible moves in the chess game.

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

How do you conclude that conscious behavior is rational behavior?  Also, how do you conclude that instinctive behavior is conscious behavior and therefore (by your reasoning) rational behavior?

I would not classify all animals, or even most animals, as conscious.  As far as I can make out, the ability to think objectively is the best way to define rationality, and only a couple of animals have shown this kind of capacity: elephants, perhaps, and some primates.  The test of self-awareness using the Mirror Test is what I'm thinking of; Wikipedia reminds me that bottlenose dolphins and maybe some birds are able to pass this test.

In the case of a mentally ill human acting on the basis of their delusion, you'd want to verify that they can make objective decisions.  I know someone, for instance, whose medications cause him to hallucinate a lot: bugs on the wall, people looking dead and rotten, and so forth.  It's terrible for him, of course, and can cause him to make assumptions that aren't true, like "my kitchen is full of bugs so I can't eat in there today."  But he has the ability to make objective analyses, so he can either go to a restaurant or reason with himself that he's seeing a hallucination and should just go in and make a sandwich.  He's morally culpable.  A person who can't make that call, though, has gone past the point of having free will, in my opinion.

  • | Post Points: 20
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Fri, Oct 28 2011 7:56 AM

Lady Saiga:
How do you conclude that conscious behavior is rational behavior?

I don't. It's a premise, not a conclusion.

Lady Saiga:
Also, how do you conclude that instinctive behavior is conscious behavior and therefore (by your reasoning) rational behavior?

I thought I said that instinctive behavior is unconscious behavior. Sorry if that was unclear. But this is just another premise.

Lady Saiga:
I would not classify all animals, or even most animals, as conscious.  As far as I can make out, the ability to think objectively is the best way to define rationality, and only a couple of animals have shown this kind of capacity: elephants, perhaps, and some primates.  The test of self-awareness using the Mirror Test is what I'm thinking of; Wikipedia reminds me that bottlenose dolphins and maybe some birds are able to pass this test.

Okay, so you define "rational" differently from how I define it. That's fine, but it means we won't understand each other when that word is used in our discourse.

Lady Saiga:
In the case of a mentally ill human acting on the basis of their delusion, you'd want to verify that they can make objective decisions.  I know someone, for instance, whose medications cause him to hallucinate a lot: bugs on the wall, people looking dead and rotten, and so forth.  It's terrible for him, of course, and can cause him to make assumptions that aren't true, like "my kitchen is full of bugs so I can't eat in there today."  But he has the ability to make objective analyses, so he can either go to a restaurant or reason with himself that he's seeing a hallucination and should just go in and make a sandwich.  He's morally culpable.  A person who can't make that call, though, has gone past the point of having free will, in my opinion.

Again, I simply define "rational" differently. It also sounds like the same is true for "free will". All definitions are premises, not conclusions.

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

I pretty much entirely disagree with your analysis of the drowning man thing.  Once you know about the situation you have a choice, and your choosing is an original action that requires you to accept moral culpability.  You didn't create the situation; in fact you rarely create the situations you're reacting to in daily life; you only create your own reactions to them.  Especially in the case you mentioned, there IS no moral culpability until you come along and observe the person drowning; it was an accident based on the laws of nature, and nobody's fault until you ignored the event.

Now in chess, all options are possible moves.  If it's the king in checkmate the game ends solely because the game depends on the king; but the same situation occurs with other pieces too.  If that happens with my knight or bishop I still have to make a choice and it's still going to be a bad one but the options are still possible moves.

  • | Post Points: 65
Top 500 Contributor
Female
Posts 260
Points 4,015

I follow what you're saying about our definitions being different, but I have outlined the rationales for my premises.  I'd like it if you could do the same, because they're just not making sense to me.

I should also point out that the Mirror Test I mentioned might demonstrate consciousness but I'm not actually convinced that it demonstrates the ability to make objective choices, so I'm still not sure that you could consider an elephant to have free will based on the results of that test.

  • | Post Points: 20
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Fri, Oct 28 2011 8:17 AM

Lady Saiga:
I pretty much entirely disagree with your analysis of the drowning man thing.  Once you know about the situation you have a choice, and your choosing is an original action that requires you to accept moral culpability.  You didn't create the situation; in fact you rarely create the situations you're reacting to in daily life; you only create your own reactions to them.  Especially in the case you mentioned, there IS no moral culpability until you come along and observe the person drowning; it was an accident based on the laws of nature, and nobody's fault until you ignored the event.

Apparently our definitions of "morality" also differ. I'd say that I have no a priori or prima facie moral culpability for the drowning person, because I didn't put him in that situation. Not (attempting to) help him is in no way the same as putting him in that situation in the first place. If it was an accident, then it's never anyone's fault, even after I ignore the event.

Of course, you're not required to agree with me.

Lady Saiga:
Now in chess, all options are possible moves.  If it's the king in checkmate the game ends solely because the game depends on the king; but the same situation occurs with other pieces too.  If that happens with my knight or bishop I still have to make a choice and it's still going to be a bad one but the options are still possible moves.

My point was that life is bigger than a chess game. Perhaps a good choice for you, when facing checkmate in a chess match, is to punch your opponent in the face.

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

OK, we don't agree.  Your last statement was funny, but not a resolution to that particular problem.  I don't think your premises are capable of giving you a way to answer my original question, so personally I don't think they are valid.  But to each his own, and from now on I'll go armed to the chessboard.

  • | Post Points: 5
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Fri, Oct 28 2011 8:48 AM

Lady Saiga:
I follow what you're saying about our definitions being different, but I have outlined the rationales for my premises.  I'd like it if you could do the same, because they're just not making sense to me.

Where have you outlined the rationales for your premises? Sorry if that sounds obtuse.

Also, if your premises have rationales, are they really premises?

Lady Saiga:
I should also point out that the Mirror Test I mentioned might demonstrate consciousness but I'm not actually convinced that it demonstrates the ability to make objective choices, so I'm still not sure that you could consider an elephant to have free will based on the results of that test.

I, for one, am not concerned with the results of that test.

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

  • | Post Points: 20
Top 500 Contributor
Female
Posts 260
Points 4,015

My premises were:

1) that free will exists; I didn't rationalize this because my question was about how you'd react GIVEN this premise and a deontological libertarian approach.  I don't think I'll meet  many on this board who disagree with it, though.

2) that it is possible to be in a situation where all choices involve initiating harm: I used chess to demonstrate that this is possible.

3) that the existence of free will depends on the ability to make rational choices: you're right, I didn't express the rationale, but here goes.  Irrational actions, like instinct or the actions of an agent that cannot perceive value differences, are not "free" because it is impossible for the agent to do otherwise than they do.  The old Frank Herbert definition of "being human" works if we assume the Bene Gesserit are testing for free will; the ability to make a choice that supersedes instinctive choices, due to an objective analysis of value, tests for free will. 

4) that regardless of original accountability, once an agent must make a choice about any incident they assume moral culpability for their action.  In this case I referred to the laws of nature and the actions of other agents: as I mentioned, we never operate in a vacuum of solely our own choices.  We operate in a world in which lots of agents, conscious and  unconscious, set up situations for us.  Each choice is separately originated and can be judged on that basis; to think otherwise would invalidate free will.  If premise #1 was a requirement of the question, then this premise also looks like a requirement to me.

Certainly you could argue against all my premises by arguing against the existence of free will.  But I've illustrated reasons for them.  There are probably other arguments against them too, but you haven't provided any that seem to hold up to analysis. 

You seem to believe that you don't need a reason to accept a premise, or that a premise is valid regardless of how well it explains reality or holds up to logic.  This opinion strikes me as lazy and it means that your answers to my original question have no real value.

  • | Post Points: 20
Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Fri, Oct 28 2011 9:27 AM

Lady Saiga:

Actually, I'm not sure you can combine compatibilism and deontology all that well.  Think about it: the compatibilist view must imply that free will exists by degrees, that when your choices are limited your free will is limited, and therefore your moral culpability is limited.

 

I'll try to explain it briefly:

Compatibilists do not believe in free will. They only accept that there is an illusion of free will. What compatibilists claim is that people make choices, and many free-will-ists confuse that with having true free will, when it's not the case. So there are no degrees. By making a choice you take responsibility for it.

I am influenced by millions of things, so I can't truly freely choose; everything is connected (the determinism part). But that doesn't mean, again, that we can't make right or wrong choices. It's a complicated topic, I know, and I find it hard to explain my view.

Yes, harm is ALWAYS wrong; however, people are able to forgive harm, especially if it's done in an extreme situation where you had no other way or no time to think of a better one. Yes, you would have to pay for the spoons if the victim demanded it. But he can simply forgive it.

"Dude... Roderick Long is the most anarchisty anarchist that has ever anarchisted!" - Evilsceptic

(english is not my native language, sorry for grammar.)

  • | Post Points: 20
Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Fri, Oct 28 2011 9:34 AM

Lady Saiga:

I pretty much entirely disagree with your analysis of the drowning man thing.  Once you know about the situation you have a choice, and your choosing is an original action that requires you to accept moral culpability. 

 

 

Your views are problematic. Sure, not saving a person when you truly CAN save him is not a socially good decision; I mean, I'd say it's ethically wrong. But from a moral standpoint, if you caused no harm, you cannot be held liable for non-action (simply ignoring the situation). But I believe most people aren't that way and would save the person (well, maybe except some people in China: http://www.youtube.com/watch?v=jMb15V4yrV4).

Just in case: to me, morality and ethics are not the same thing, even though I sometimes use them as synonyms, out of laziness I suppose.

"Dude... Roderick Long is the most anarchisty anarchist that has ever anarchisted!" - Evilsceptic

(english is not my native language, sorry for grammar.)

  • | Post Points: 5
Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Fri, Oct 28 2011 9:42 AM

Lady Saiga:
My premises were:

1) that free will exists; I didn't rationalize this because my question was about how you'd react GIVEN this premise and a deontological libertarian approach.  I don't think I'll meet  many on this board who disagree with it, though.

Okay. How do you define "free will"?

Lady Saiga:
2) that it is possible to be in a situation where all choices involve initiating harm: I used chess to demonstrate that this is possible.

Given your chess example, it seems to me that you're equivocating over the meaning of "harm" here. A person is not physically harmed by being checkmated in a chess match.

Lady Saiga:
3) that the existence of free will depends on the ability to make rational choices: you're right, I didn't express the rationale, but here goes.  Irrational actions, like instinct or the actions of an agent that cannot perceive value differences, are not "free" because it is impossible for the agent to do otherwise than they do.  The old Frank Herbert definition of "being human" works if we assume the Bene Gesserit are testing for free will; the ability to make a choice that supersedes instinctive choices, due to an objective analysis of value, tests for free will.

What do you mean by "value differences"? How can "value" (whatever you mean by that) be analyzed objectively?

Lady Saiga:
4) that regardless of original accountability, once an agent must make a choice about any incident they assume moral culpability for their action.  In this case I referred to the laws of nature and the actions of other agents: as I mentioned, we never operate in a vacuum of solely our own choices.  We operate in a world in which lots of agents, conscious and  unconscious, set up situations for us.  Each choice is separately originated and can be judged on that basis; to think otherwise would invalidate free will.  If premise #1 was a requirement of the question, then this premise also looks like a requirement to me.

This is the premise that I completely and categorically reject. I think it confuses correlation with causation. Let me illustrate with an example.

Early in the movie Minority Report, John Anderton (played by Tom Cruise) and Danny Whitwer (played by Colin Farrell) have a discussion about predetermination. During the discussion, Anderton rolls a ball down a railing. Whitwer catches the ball before it - presumably - falls to the floor. "Why did you catch that?" Anderton then asks. "Because it was going to fall," Whitwer answers. My question to you now is, if Whitwer had refrained from catching the ball, and it subsequently fell to the floor, does that mean he caused it to do so? My own answer here is unequivocally no.

It appears to me that, despite claiming an adherence to deontological ethics, you're really a consequentialist. Then again, all ethics can be expressed in deontological terms, including consequentialism: "Don't act in such a way that [a given kind of] consequences follow" or "An action that is followed by [a given kind of] consequences is unethical." This is a premise, not a conclusion - and one that I am neither required to accept nor do accept.

Lady Saiga:
Certainly you could argue against all my premises by arguing against the existence of free will.  But I've illustrated reasons for them.  There are probably other arguments against them too, but you haven't provided any that seem to hold up to analysis.

A premise cannot be logically refuted, as it's merely assumed to be true. Hence it can only be accepted or rejected. If you're positing rationales for the propositions you're calling "premises", then I'd say they're not really premises. This then would invite the question, what are your real premises?

Otherwise, it's hard for me to argue against what you call "free will" when I'm not sure what that is.

Lady Saiga:
You seem to believe that you don't need a reason to accept a premise, or that a premise is valid regardless of how well it explains reality or holds up to logic.  This opinion strikes me as lazy and it means that your answers to my original question have no real value.

Premises - and logic in general - have no necessary connection to reality. I can just as easily presume that men are immortal as I can presume that they're mortal. What matters to logic is consistency, also known as non-contradiction. This doesn't mean consistency with reality - it simply means internal consistency. Do the conclusions contradict the premises or not?

The keyboard is mightier than the gun.

Non parit potestas ipsius auctoritatem.

Voluntaryism Forum

Top 500 Contributor
Female
Posts 260
Points 4,015
Lady Saiga replied on Fri, Oct 28 2011 10:03 AM

Thanks MaikU, I appreciate you clarifying your ideas.  I'd say that determinism means only one action is possible in any given scenario.  Having choices limited by agencies outside of my control does not mean I live in a deterministic universe.  Limited choices are not no choices.

Also, how can harm always be wrong if free will doesn't really exist but is an illusion?  Can you explain how you get to moral culpability without freedom of action?

If you make a choice not to do a thing, you instead choose to do a different thing.  In the drowning man scenario, you choose to walk away or pick your nose or whatever instead of jumping in or throwing the guy a rope.  These are actions that, issuing from a choice where harm is the result of ONE of the options, would be wrong from both a consequentialist AND a deontologist perspective, as far as I can figure.

And finally, how do you distinguish between ethics and morals?  Why would you think that socially/ethically good decisions could not be arrived at from a moral analysis grounded in the concept of free will?

Autolykos, this time you've given me something good to chew on, so I'll pick this up again at lunch methinks.

Thanks everyone, please keep challenging my ideas so I can make them clearer.

Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Fri, Oct 28 2011 11:50 AM

Lady Saiga:

Also, how can harm always be wrong if free will doesn't really exist but is an illusion?  Can you explain how you get to moral culpability without freedom of action?

 

I would define (complete, absolute) free will as being able to choose without outside-world influence, meaning that you truly get to choose, like God. This kind of free will does not exist. But the simple choices that humans make are not free at all. They are determined by the laws of the universe, by the neurons in the brain, by genetics, etc. I know it may sound deterministic, but that's how the world works, and I don't believe in hard determinism, though I am open to the possibility. As I said somewhere else, quantum mechanics may clarify this in the future, I hope.

So you are morally culpable just because you chose harm over no harm, etc. Humans are rational but also social animals, and harming your fellow humans is, well, bad :D There is no need for absolute free will for there to be morality (I think that sentence was grammatically incorrect, but whatever). Even some hard determinists believe in objective morality, though I don't know how the hell they rationalize it.

I believe in freedom of action, but I don't believe in freedom from outside influence.

Lady Saiga:

If you make a choice not to do a thing, you instead choose to do a different thing.  In the drowning man scenario, you choose to walk away or pick your nose or whatever instead of jumping in or throwing the guy a rope.  These are actions that, issuing from a choice where harm is the result of ONE of the choices, would be wrong from both a consequentialist AND a deontologist perspective as far as I can figure.

Yeah, mumbo jumbo: not doing something equals doing something. I strongly disagree with such a view of morality. If you believe in the NAP and self-ownership, then there is no problem, and you would agree with me; but I think that you think that humans have positive duties EVEN if they didn't voluntarily sign a contract or agree to it... I don't like to sound like I'm evading, but I'm not that type of deontologist. :)

Lady Saiga:
And finally, how do you distinguish between ethics and morals?  Why would you think that socially/ethically good decisions could not be arrived at from a moral analysis grounded in the concept of free will?

Not sure what you mean, so I'll pass.

 

And by the way, morality wouldn't magically vanish if scientists discovered tomorrow that we truly live in a hard-determined universe. Morality exists (in human society, at least) for completely different and much more important reasons.

"Dude... Roderick Long is the most anarchisty anarchist that has ever anarchisted!" - Evilsceptic

(english is not my native language, sorry for grammar.)

Top 500 Contributor
Female
Posts 260
Points 4,015
Lady Saiga replied on Fri, Oct 28 2011 12:49 PM

Okay guys, I'm going to put off my next reply until tomorrow.  This is going off in too many directions, so I want to address everything reasonably.  Thanks again!

Top 500 Contributor
Female
Posts 260
Points 4,015

There is too much.  I will sum up.

MaikU: You appear to be a soft determinist/compatibilist to me.  Your definition of free will requires less actual control than the indeterminist definition of free will.  You also do not derive a moral system from the concept of free will but from some other basic concept that is not under discussion here.  You do, however, believe that harm is wrong and that some or all agents can be assumed to have moral responsibility.  We have not discussed how this can be determined.  You also believe that harm can be evaluated by degree or value by an agent.  The final response, in your analysis, may or may not be deterministic in nature but still implies moral responsibility.

Autolykos: I'm having trouble with you.  You call your thinking Misesian and I'll take your word for it but I don't have an overall picture of what this means yet.  What I've worked out for sure is that you believe only the originator of an event can be held morally responsible for it, and only if their actions are conscious.  You define free will and free action as one and the same or inextricably linked I believe, which differentiates your ideas from MaikU's.  You define conscious behavior as not-instinctive behavior.  There may be more to this definition but it has not been presented yet.  You also believe that most or all animals have free will and thus, I believe, that most or all animals act from more than solely instinctive processes.  You do not credit the ability to think objectively as having any bearing on whether or not an agent has the ability to act consciously/rationally. 

By what I have read so far, I can see one thing I did not make clear.  I am not only looking for internally consistent arguments.  I am looking for arguments that can also be demonstrated to be consistent with reality.  MaikU is treating the discussion this way already; Autolykos is not.  There was no reason to assume as much, so I needed to clarify.

I still have not made any replies to anyone, just tried to summarize.  Please adjust my summary as necessary, all; then we'll continue!

Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Mon, Oct 31 2011 9:26 AM

Lady Saiga:
Autolykos: I'm having trouble with you.  You call your thinking Misesian and I'll take your word for it but I don't have an overall picture of what this means yet.  What I've worked out for sure is that you believe only the originator of an event can be held morally responsible for it, and only if their actions are conscious.  You define free will and free action as one and the same or inextricably linked I believe, which differentiates your ideas from MaikU's.  You define conscious behavior as not-instinctive behavior.  There may be more to this definition but it has not been presented yet.  You also believe that most or all animals have free will and thus, I believe, that most or all animals act from more than solely instinctive processes.  You do not credit the ability to think objectively as having any bearing on whether or not an agent has the ability to act consciously/rationally.

I'd say that sounds about right. So what are you having trouble with?

Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Mon, Oct 31 2011 10:28 AM

Lady Saiga:

There is too much.  I will sum up.

MaikU: You appear to be a soft determinist/compatibilist to me.  Your definition of free will requires less actual control than the indeterminist definition of free will.  You also do not derive a moral system from the concept of free will but from some other basic concept that is not under discussion here.  You do, however, believe that harm is wrong and that some or all agents can be assumed to have moral responsibility.  We have not discussed how this can be determined.  You also believe that harm can be evaluated by degree or value by an agent.  The final response, in your analysis, may or may not be deterministic in nature but still implies moral responsibility.

 

Yes, I can be called a "weak determinist"; that's basically what compatibilism is about. Speaking about morality, as I said before, absolute free will is not required. People only have to be able to make decisions based on rational/logical thinking. Also, morality can be grounded in a cultural/social context too, if that makes sense. But yeah, under "hard determinism" there is a serious problem for morality.

Also, it's true that I do not derive morality entirely from the concept of free will; however, I only believe in that kind of free will if it is defined as the ability to choose (i.e. between a good or bad outcome). Though I think those decisions are always influenced by Nature (capital N), and that is where determinism comes into play.

Top 500 Contributor
Female
Posts 260
Points 4,015
Lady Saiga replied on Mon, Oct 31 2011 10:28 AM

Okay, good Autolykos.  My trouble is working out what it all boils down to...as a Misesian, are you a consequentialist?  That would help to start with.  Then I'd like to ask if you therefore believe that free will is only an illusion but a good working principle, as MaikU indicates?  Believing this would imply that other considerations can supersede the primacy of self-ownership, and thus the NAP, if I'm not mistaken.  Analyze?

Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Mon, Oct 31 2011 10:42 AM

Lady Saiga:
Okay, good.  My trouble is working out what it all boils down to...as a Misesian, are you a consequentialist?  That would help to start with.  Then I'd like to ask if you therefore believe that free will is only an illusion but a good working principle, as MaikU indicates?  Believing this would imply that other considerations can supersede the primacy of self-ownership, and thus the NAP, if I'm not mistaken.  Analyze?

As I mentioned before, consequentialism is really just a form of deontologism. The Non-Aggression Principle, however, isn't consequentialist in terms of harm per se. Trespassing, for example, may well not cause any (discernible) physical damage to the property in question, but is still considered aggression and thus morally wrong by the NAP. On the other hand, just because I hold to Mises' view of "rationality" doesn't necessarily mean I'm a "Misesian" through and through. :P

I'm actually more or less with MaikU regarding free will. From the perspective of the universe as a whole - if such a perspective could exist - free will doesn't exist. We're no less at the mercy of the forces of nature than stars or stones. But from our perspective as conscious individuals existing within the universe and (presumably) interacting with each other, I'd say free will is an essential concept. None of us is able to predict what anyone will do at any given point in the future - including himself.

Top 500 Contributor
Female
Posts 260
Points 4,015
Lady Saiga replied on Mon, Oct 31 2011 11:06 AM

So you believe in the NAP and you don't derive this belief from your definition of free will, Autolykos.  Am I wrong?  Can you discuss what, in your opinion, makes NAP valid other than the existence of free will?

I can imagine, from your perspective, saying something like "NAP is valid and moral responsibility exists because we experience the world AS IF we had free will; but this is an illusion".

I DO believe that free will exists and I DO define free will as the real ability to take more than one action in any given situation.  I am also an indeterminist and I consider NAP to be valid because of these beliefs.  So if you could, try to frame your perspective in that light, because I have a hard time believing that NAP is valid in a deterministic setting; and feel free to tell me why you think my beliefs are bunk :)

Also I'd like somebody to take some pains to explain why I'm wrong in thinking that an observer, by having the OPTION of changing an event, gains a level of moral accountability.  MaikU indicated that this undermines self-ownership and I just don't see it.

All of this is why I'm in the "newbies" area.  And I'm curious, will I not find others who believe the way I do in this forum?

Top 25 Contributor
Male
Posts 4,922
Points 79,590
Autolykos replied on Mon, Oct 31 2011 12:25 PM

Lady Saiga:
So you believe in the NAP and you don't derive this belief from your definition of free will, Autolykos.  Am I wrong?  Can you discuss what, in your opinion, makes NAP valid other than the existence of free will?

I can imagine, from your perspective, saying something like "NAP is valid and moral responsibility exist because we experience the world AS IF we had free will; but this is illusion".

Objectively speaking, neither the NAP nor any other moral principle is valid in the sense of being a feature of the world at large. "Moral" and "immoral", "right" and "wrong", "good" and "bad", etc. are values that we impute to things. Furthermore, the imputation of these values depends upon what we think constitutes these values - or, one could say, on the definitions of these values. Since all definitions are arbitrary, it follows that all such values and the imputation thereof are also arbitrary.

Lady Saiga:
I DO believe that free will exists and I DO define free will as the real ability to take more than one action in any given situation.  I am also an indeterminist and I consider NAP to be valid because of these beliefs.  So if you could, try to frame your perspective in that light, because I have a hard time believing that NAP is valid in a deterministic setting; and feel free to tell me why you think my beliefs are bunk :)

I'd say we appear to have a real ability to take more than one action in any given situation (more accurately, at any given point in time) because the future is always uncertain in a calculatively predictive sense. Another way of putting this is that the future is indeterminate from our point of view. But that's only because our point of view is extremely limited with respect to the universe as a whole.

Lady Saiga:
Also I'd like somebody to take some pains to explain why I'm wrong in thinking that an observer, by having the OPTION of changing an event, gains a level of moral accountability.  MaikU indicated that this undermines self-ownership and I just don't see it.

The notion that an observer has the option of changing an event implies certainty about the future, which doesn't actually exist (for us, that is). Your notion of moral accountability undermines self-ownership in the sense that other people could now be said to have higher claims over you than you yourself do. If you aren't considered to have the highest claim over yourself, how can you be considered to own yourself?

Lady Saiga:
All of this is why I'm in the "newbies" area.  And I'm curious, will I not find others who believe the way I do in this forum?

There's always a chance. :)

Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Mon, Oct 31 2011 12:34 PM

In a libertarian sense, you can only be legitimately accountable for positive action. Self-ownership and the NAP only define what you cannot do (don't steal other people's property, don't harm, don't murder, don't rape). All of it involves force and/or property damage (that's why fraud would be punishable in Libertopia).

But I have a hard time understanding how you derive this: having an option makes you accountable... Hell, I have the option to feed 100 starving people each month (of course, then I would be starving myself), but am I accountable if I don't do it? Why do you believe what you believe? Just because you have an option doesn't mean you have a positive obligation. I didn't put those starving people there. I didn't take "their jobs" or homes or whatever.

Again, your worldview puts you in a very, very unpleasant situation. What if I said I don't have money and asked you to give me 100 dollars? You have the option to give me money or not. I could die without food. Do you think you would be accountable if I died?

By the way, how would you define "accountable"? Are you talking from a mere ethical perspective or in a legal sense (that a Government, PDA, DRO, or whatever could punish you for not giving me money)?

Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Mon, Oct 31 2011 12:39 PM

Lady Saiga:

All of this is why I'm in the "newbies" area.  And I'm curious, will I not find others who believe the way I do in this forum?

 

We're all alone in our thinking :) You can find similarities, but there are degrees of being "on the same trail" (if that's the correct expression). I kinda think I'm just the only one in the vast universe and you are just my imagination... :D

 

Top 500 Contributor
Female
Posts 260
Points 4,015
Lady Saiga replied on Mon, Oct 31 2011 12:51 PM

I don't believe in ANY accountability to any agency but myself.  That doesn't mean there won't be repercussions.  Everything I do has repercussions of one sort or another.  Moral accountability is another way of saying that a choice has been made that can be given a value.  THIS choice being better than THAT choice.  It is the way one makes decisions in a way that is fair to others.  It also makes me able to judge others' actions toward myself, giving me a standard by which I can decide on my reaction.

If you say you don't have money and ask me for 100 dollars I am accountable for my decision.  That doesn't mean that saying "no" is right or wrong.  It only means that I have made a decision that will have consequences for me and others.  Maybe the immediate result is that I inherit your property...or maybe I'm your tenant and I suddenly become homeless.  Those are consequences that I can accept some moral accountability for even though I didn't create the original scenario. 

It also doesn't mean that you have any claim over my self-ownership.  You can't call on moral accountability to make me give you that money.  You CAN point out that I'll go homeless if you starve.  That might affect my decision from a rational standpoint but it doesn't imply that anyone but me decides on my actions.

 

Top 200 Contributor
Male
Posts 480
Points 9,370
Moderator

Lady Saiga:
I pretty much entirely disagree with your analysis of the drowning man thing.  Once you know about the situation you have a choice,
That is correct, but with all due respect, your libertarian quest for knowledge has reached a brick wall that will never come down.

 

First of all, you are right.  The Good Samaritan has a choice, but in the context of libertarianism, he is morally correct to walk by and do nothing.  That is how libertarianism is defined.  What you are looking for is to define the concept of morality or ethics to encompass a greater form of virtuous human interaction, one that involves a form of self-sacrifice towards our fellow man.  Nothing wrong with that, except that your expectations contradict how libertarianism is defined.

Second, only God knows for certain what choices the Good Samaritan faces.  For all you know, the drowning man in the lake could actually be a set-up to trap the Good Samaritan into jumping in the water and making himself vulnerable to attack from the wingman hiding behind the tree.  How does your sense of a moral code incorporate healthy caution?

Before calling yourself a libertarian or an anarchist, read this.  
Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Mon, Oct 31 2011 2:32 PM

I think I get you now, Lady Saiga. You were talking from a determinist perspective, so yeah, I agree with you in the sense that our choices are all interconnected, so to speak; you basically described cause and effect here. So far so good.

 

Now I am interested in what you think of this situation in legal terms, i.e. legal accountability. Would you agree that it is legitimate to punish a man (with financial or physical punishment) who just walks by a drowning person and does nothing?

By the way, are you a utilitarian?

Top 500 Contributor
Female
Posts 260
Points 4,015

For all I know, the guy in the lake IS a setup.  If it is, and I jump in, then part of my next predicament is my own fault.  It's my obligation to myself to try to get my facts right, and when I fail, I pay the consequences.  Sometimes I knowingly choose unwisely, too.  If the guy in the lake is my dad, I'm likely to value his safety more highly than I would otherwise, but by risking myself or him, depending on my choice, I still take on responsibility.

I derive all of this from my understanding of free will so I fail to see why a libertarian perspective supposedly doesn't support it.

Top 75 Contributor
Male
Posts 1,289
Points 18,820
MaikU replied on Mon, Oct 31 2011 2:36 PM

Lady Saiga:

 

I derive all of this from my understanding of free will so I fail to see why a libertarian perspective supposedly doesn't support it.

 

The libertarian perspective doesn't support a positive obligation to save him; that's it. It says nothing more about ethics, though. I would probably jump into the water and try to save the man too, but hell, if that was a setup I am really a victim myself, not an aggressor, and there is no need to blame myself for trying to save a supposedly drowning man.
