Mises Community Archive

A Paradox of Utilitarianism

This post has 22 Replies | 3 Followers

Top 200 Contributor
Posts 424
Points 6,780
Azure Posted: Sun, Feb 27 2011 2:22 PM

A serious dilemma for any utilitarians out there, if there are any left. This is not the usual interpersonal-utility-comparison problem; that one can be solved, in my analysis, by making utilitarianism a moral philosophy which operates over yourself as an individual rather than over society. Eating a pizza would make me happy, so I should eat a pizza. Stealing it would make me feel guilty, and thus unhappy, so I should not steal one. I stuck to this idea for a long time, until the following thought came up.

How would you feel about having all the parts of your brain gutted out, with the exception of the pleasure centers and the bare minimum needed to sustain subjective experience, with the pleasure values blared to maximum at all times? The question is not how you would feel after the procedure: you'd obviously be absolutely ecstatic then. The question is how you, right now, with your current brain and your current values, feel about such a scenario.

I don't know how you feel about it, but I, and many others, do not like the prospect. Which, when you consider this from the perspective of personal utilitarianism, is quite a paradox. People want to be happy, don't they? If they went through with the procedure they'd be way happier than they could ever be by pursuing natural means. What's the problem here?

The problem is people don't want to be happy, per se. Happiness is an indicator, not the goal most people directly strive for. (Though plenty do; see the large number of drug addicts.) When people say they want to be "happy," they don't mean they want to spend the rest of their life doped up. They just use the term as a catch-all for the various things that happiness indicates. In the course of obtaining the referents, they get happiness as well.

Needless to say, this problem isn't insurmountable either. The obvious solution is to define utility by the presence of the referents, not the presence of the indicator. Here we tread heavily into objective-morality territory by divorcing utility from the content of anyone's actual brain. We define utility as the correct output of a computation the brain performs, not necessarily the output the brain actually produces (the brain is capable of returning an incorrect result). Changes to the brain change the brain's output, but not the correct result of the underlying computation.
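A minimal sketch of that distinction, in Python (all names and numbers here are hypothetical, just to make the idea concrete):

```python
# Hypothetical sketch: utility is the *correct* result of a computation the
# brain performs over the referents, not whatever the brain actually reports.

def true_utility(world):
    """The fixed rightness-function: scores the referents themselves."""
    return sum(world[r] for r in ("friends", "knowledge", "freedom"))

def brain_report(world, wireheaded=False):
    """What the brain outputs; a modified brain can return the wrong result."""
    if wireheaded:
        return float("inf")  # pleasure centers blared to maximum
    return true_utility(world)  # a healthy brain approximates the computation

jar = {"friends": 0, "knowledge": 0, "freedom": 0}  # brain-in-a-jar future

print(brain_report(jar, wireheaded=True))  # inf: the indicator is maxed out
print(true_utility(jar))                   # 0: the referents go unfulfilled
```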

But this just leads us to another problem.

We are not continuous streams of consciousness. I value very different things now than I did 5 years ago. Some of the kinds of choices I prefer to make now would have horrified my past self if she had considered the prospect. But why should my current self overrule my past preferences? After all, a change in the brain doesn't change the correct output of the utility computation, just my brain's output. If we reject this justification, then we're right back to the issue described before.

 

This analysis is, to my knowledge, original. The only way to have any sort of coherence in our reasoning is to pick a single rightness-function, not defined by any individual's preferences, and apply it universally. Once we accept this, we are no longer dealing with a brand of moral relativism, but with an objective morality.

  • | Post Points: 50
Top 100 Contributor
Male
Posts 917
Points 17,505

Or you can just accept egoism and stop trying to dance around with all this rationalization of what you do. You do what you do because you want to, and you don't have any other choice, and what is 'right' or 'wrong' has no bearing if it is not 'right' or 'wrong' according to your subjective preferences. All this moral philosophy is irrelevant nonsense; there are mores, not moralities; there are ethos, not ethics.

Libertarianism would be a lot better off if it just embraced its Nietzschean + contractarian-legalist nature instead of trying to rationalize it according to the religious and social mythologies of the herd. h/t William.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 20
Top 200 Contributor
Posts 447
Points 8,205

You could use happiness as an indicator for your example of a full lobotomy. I have a very basic survival instinct. Surviving makes me happy. The thought of not surviving (or transforming into a state that I equate with not surviving) makes me sad. Therefore my current decision-making process is based on this immediate sad/happy response.

Even long-term happiness/goals can be looked at in an immediate sense. For example, I know that working (something that makes me sad) will earn me money (something that makes me happy). However, knowing that I will be happy in the future makes me happy enough now to outweigh the sadness of working. So even here, I am deciding based on present happiness, not future happiness. In your lobotomy example, the prospect of future happiness isn't enough to outweigh the happiness brought on by surviving.

  • | Post Points: 5
Top 50 Contributor
Male
Posts 2,051
Points 36,080
Bert replied on Sun, Feb 27 2011 2:42 PM

Someone's subjective value preference just is the religious and social mythologies of the "herd"; they chose this because they see it yielding greater satisfaction than not; this is what they chose as individuals. What do you do when that doesn't fall in line with how you view libertarianism? PROBLEMS!!!111.

I had always been impressed by the fact that there are a surprising number of individuals who never use their minds if they can avoid it, and an equal number who do use their minds, but in an amazingly stupid way. - Carl Jung, Man and His Symbols
  • | Post Points: 20
Top 100 Contributor
Male
Posts 917
Points 17,505

Then I don't bother with them. I have no use for cranks and identitarians, left or right. As long as they stay out of my way, fine; but that doesn't mean I have any respect for their stupid crap.

This stuff is false and irrational, whether or not the leftoids or alt-right whackos are emotionally attracted to it. Frankly, the whole human species is a bit of a drag.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Sun, Feb 27 2011 3:04 PM

The fact of the matter is, I have desires that I feel I shouldn't have, and desires I feel I should have but don't. If what is right is merely what we want, then why would we ever want to change what we want? Whether these feelings are justifiable, or even sensible, I still want to understand them and how they work.

  • | Post Points: 35
Top 100 Contributor
Male
Posts 917
Points 17,505

The fact of the matter is, I have desires that I feel I shouldn't have, and desires I feel I should have but don't.

That's evolutionary psychology at work. The human mind is a real piece of work.

If what is right is merely what we want, then why would we ever want to change what we want?

Because we recognize that some desires aren't coherent with our broader aims, and we have to choose between various alternatives, many of which are mutually exclusive.

Whether these feelings are justifiable, or even sensible, I still want to understand them and how they work.

Social signaling and tropism.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Sun, Feb 27 2011 3:42 PM

Because we recognize that some desires aren't coherent with our broader aims, and we have to choose between various alternatives, many of which are mutually exclusive.

What defines our "broader aims?" If how we want to change what we want is defined by our meta-wants, then how do we choose between conflicting meta-wants? Meta-meta-wants? Where does the recursion end?
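The shape of that regress is like a recursion that needs a base case; a toy sketch in Python (names hypothetical):

```python
# Toy sketch of the regress: if each want is endorsed only by the want one
# level up, evaluation never bottoms out unless something stops the chain.

def endorsed(want, meta_wants):
    """A want counts as 'right' only if the level above endorses it."""
    if not meta_wants:
        return True  # whatever terminates the chain is the real standard
    return endorsed(meta_wants[0], meta_wants[1:])

print(endorsed("eat pizza", ["stay healthy", "live long", "value life"]))
# True -- but only because the chain happened to end at "value life"
```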

Social signaling and tropism.

Once upon a time, it was considered right to own slaves. In our current society, it's considered right to violate the property of business owners to satisfy the greater wants of "society."

If our sense of morality is defined by society, then how can any of us ever break from the pack?

  • | Post Points: 20
Top 100 Contributor
Male
Posts 917
Points 17,505

What defines our "broader aims?"

You do. This is something you have to think about. Or not. It's up to you.

 

Once upon a time, it was considered right to own slaves. In our current society, it's considered right to violate the property of business owners to satisfy the greater wants of "society."

Yep.

If our sense of morality is defined by society, then how can any of us ever break from the pack?

Variations, counter-signaling, shifts in incentives, breakdowns in power structures, and just plain random events and memetic shifts.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Sun, Feb 27 2011 3:55 PM

You do. This is something you have to think about. Or not. It's up to you.

I figured that much. The important question is how. Ideas and desires don't simply spring up out of nowhere. And if wants are defined by other wants, where does the recursion stop?

Variations, counter-signaling, shifts in incentives, breakdowns in power structures, and just plain random events and memetic shifts.

Quite a strong set of assertions. Plus, I think we've seen where coherence goes once we start using the "right" label to mean different things in different places.

  • | Post Points: 20
Top 100 Contributor
Male
Posts 917
Points 17,505

There really isn't a recursion; there are a bunch of related neural connections. Some of our ideas and desires are contrary to others; this is what compartmentalization is.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Sun, Feb 27 2011 4:17 PM

What decides which of the contrary desires is pursued and which is not?

  • | Post Points: 20
Top 100 Contributor
Male
Posts 917
Points 17,505

I'm not a neurobiologist, and I don't think they know, either. Some kind of strength and aversion ratio, I'd figure.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 5
Top 50 Contributor
Posts 2,162
Points 36,965
Moderator
I. Ryan replied on Sun, Feb 27 2011 7:44 PM

Azure:

I don't know how you feel about it, but I, and many others, do not like the prospect. Which, when you consider this from the perspective of personal utilitarianism, is quite a paradox.

Why exactly would it be a paradox for somebody to feel dissatisfaction about something, and thus not do it?

Azure:

If they went through with the procedure they'd be way happier than they could ever be by pursuing natural means.

But their current self mightn't be happy engaging in the operation, which is clearly enough to stop them from doing it.

Azure:

The obvious solution is to define utility by the presence of the referents, not the presence of the indicator.

What if somebody really does just prefer the indicator (almost everybody at some time or another)?

If I wrote it more than a few weeks ago, I probably hate it by now.

  • | Post Points: 35
Top 100 Contributor
Male
Posts 917
Points 17,505

Why exactly would it be a paradox for somebody to feel dissatisfaction about something, and thus not do it?

Right, I may want to take a million dollars from the bank, but I do not want to be shot in the process, so I do not.

I will break in the doors of hell and smash the bolts; there will be confusion of people, those above with those from the lower depths. I shall bring up the dead to eat food like the living; and the hosts of dead will outnumber the living.
  • | Post Points: 5
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Mon, Feb 28 2011 2:59 AM

Why exactly would it be a paradox for somebody to feel dissatisfaction about something, and thus not do it?

But their current self mightn't be happy engaging in the operation, which is clearly enough to stop them from doing it.

What if somebody really does just prefer the indicator (almost everybody at some time or another)?

Because, for this_brain.happiness, the payoff on the other side is infinite, stretching over the rest of eternity. If people really do seek only to raise their internal utility indicator (as all forms of utilitarianism imply), the only reason they wouldn't go through with it is if they were absolutely terrible at foreseeing future consequences. The discomfort of the procedure is finite and temporary. The only way that could win out over the reward is if they simply didn't understand it.

Yet people, even after understanding it, still don't want to go through with it. If the happiness-indicator is really all that matters for them, this doesn't make any sense at all.
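To put rough numbers on that, here is a hypothetical sketch (the values are invented; only the shape of the comparison matters):

```python
# Hypothetical arithmetic for the argument: a finite, temporary discomfort
# followed by a maxed-out indicator forever beats an ordinarily happy life,
# under any standard discounting of future happiness.

def discounted_sum(rewards, gamma=0.99):
    return sum(r * gamma**t for t, r in enumerate(rewards))

horizon = 10_000  # stand-in for "the rest of eternity"

ordinary_life = [5.0] * horizon                           # decent happiness
wirehead_life = [-100.0] * 10 + [100.0] * (horizon - 10)  # brief agony, then maximum

print(round(discounted_sum(ordinary_life)))  # ~500
print(round(discounted_sum(wirehead_life)))  # ~8100: the indicator-maximizer accepts
```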

  • | Post Points: 20
Top 50 Contributor
Posts 2,162
Points 36,965
Moderator
I. Ryan replied on Mon, Feb 28 2011 9:26 AM

Azure:

Because, for this_brain.happiness, the payoff on the other side is infinite, stretching over the rest of eternity. If people really do seek only to raise their internal utility indicator (as all forms of utilitarianism imply), the only reason they wouldn't go through with it is if they were absolutely terrible at foreseeing future consequences. The discomfort of the procedure is finite and temporary. The only way that could win out over the reward is if they simply didn't understand it.

Yet people, even after understanding it, still don't want to go through with it. If the happiness-indicator is really all that matters for them, this doesn't make any sense at all.

But it's only this moment that matters. If right now, I feel a bunch of dissatisfaction about engaging in that operation, I simply won't do it. It's not my future self who makes the decisions for me, so it's perfectly irrelevant what he would feel about it. As my current self, I just don't give a shit about what he would think past what it makes me think. Sure, it might influence my decision, but clearly it's not doing that in this example. Right now, I wouldn't want to have that operation, so I guess that I won't try to work toward it. And that's all that we need to resolve the supposed paradox.

Anyway, there are a couple more things too.

  • I can't even manage to imagine (much less want) a situation where "the pleasure center is blared to maximum at all times". It's perfectly meaningless to me.
  • How could I be sure that it would actually work? If I saw it happen to other people, wouldn't they just look like zombies? How the hell would I know that they're a bunch of super-happy zombies, and not just people who are almost perfectly brain-dead?

And, besides all of that, I'm not even sure what you're trying to replace personal utilitarianism with. Where would that universal rightness function come from? What would it even be? And so on.

If I wrote it more than a few weeks ago, I probably hate it by now.

  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Mon, Feb 28 2011 11:10 AM

But it's only this moment that matters. If right now, I feel a bunch of dissatisfaction about engaging in that operation, I simply won't do it. It's not my future self who makes the decisions for me, so it's perfectly irrelevant what he would feel about it. As my current self, I just don't give a shit about what he would think past what it makes me think. Sure, it might influence my decision, but clearly it's not doing that in this example. Right now, I wouldn't want to have that operation, so I guess that I won't try to work toward it. And that's all that we need to resolve the supposed paradox.

If we analyze this in an indicator-maximizing framework, this leads to problems. If the prospect of your future self being happy is not enough to compel you to undergo the procedure, then why do you worry about the consequences of your actions at all?

The proposed referent-maximizing model of utility, however, solves this problem. When you consider the future which results if you opt for the procedure, you see yourself floating in a jar, with your utility-referents going unfulfilled. Thus, in order to maximize the referents, you need to steer the future clear of this possible outcome.
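A sketch of that decision rule (hypothetical names and numbers): candidate futures are scored by the current self's referents, not by the future brain's indicator:

```python
# Sketch of referent-maximizing choice: futures are scored by the *current*
# utility function over referents, not by what the future brain would report.

futures = {
    "decline procedure": {"referents_fulfilled": 7, "future_indicator": 6},
    "accept procedure":  {"referents_fulfilled": 0, "future_indicator": 100},
}

choice = max(futures, key=lambda f: futures[f]["referents_fulfilled"])
print(choice)  # "decline procedure": the jar-future is steered away from
```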

As the case of drug addicts shows, this mechanism frequently breaks in actual humans.

And, besides all of that, I'm not even sure what you're trying to replace personal utilitarianism with. Where would that universal rightness function come from? What would it even be? And so on.

I'm not sure about this one myself. It's definitely something deserving of more thought.

  • | Post Points: 20
Top 50 Contributor
Posts 2,162
Points 36,965
Moderator
I. Ryan replied on Mon, Feb 28 2011 11:52 AM

Azure:

If we analyze this in an indicator-maximizing framework, this leads to problems. If the prospect of your future self being happy is not enough to compel you to undergo the procedure, then why do you worry about the consequences of your actions at all?

Good point.

Azure:

The proposed referent-maximizing model of utility, however, solves this problem. When you consider the future which results if you opt for the procedure, you see yourself floating in a jar, with your utility-referents going unfulfilled. Thus, in order to maximize the referents, you need to steer the future clear of this possible outcome.

In the past, I have put a lot of thought into this exact problem. I used to think that the people who prefer the indicators to the indicated were simply a bunch of morons. I steered perfectly clear of every form of "artificial" indicator maximizer (such as pain medication, fake food, drugs, and so on), and I still do. But once I found Mises, I realized that it's not nearly as clean as that sounds. If you're just looking for some sort of short-term enjoyment, then pizza, ice cream, or whatever will give that to you. If all that you're looking for is the good taste, then the taste isn't an indicator anymore (as it is for me), but the actual end in itself. It's only an indicator if you make it one.

But we're hard-wired to take our taste as an end in itself. In the past, when (I assume) we were in a situation where our options of what to eat were limited to just those which our taste system was built for, the indicator matched the indicated. It's sort of confusing, but it's like this. We are hard-wired to take our taste as an end in itself, but it's nevertheless an indicator. If that sounds contradictory, it's because I'm talking from 2 different points of view. For a single perspective, an indicator is necessarily not an end in itself.

But here's the catch. One point of view (I guess our genes or whatever) designed another point of view (our consciousness) to take it as an end in itself, but it did that for a reason (not as an end in itself!). Our taste is like our subconsciousness telling us whether we should continue eating whatever we're eating, but it breaks down (at least for its original purpose!) when you give it a bunch of input that it wasn't designed to handle (fake food in particular). It tends to be an end in itself for one perspective (our consciousness), but an indicator for another (our genes or whatever).

Now, the problem with your approach is that we're always trying to be referent-maximizing. If I take my taste as an end in itself, I might eat a bunch of junk food all the time, but that's because the taste is the referent. It's as un-Misesian as it gets to just arbitrarily call one thing the referent and the other the indicator. As I already said, it's only an indicator (or a referent!) if you make it one. If I really see a good taste (or anything like it) as an end in itself, it is the referent at that point. We always try to be referent-maximizing; it's just that the modern world throws people like me (and maybe you?) a huge barrage of formidable obstacles. It's a tricky situation, and it's no wonder that so many people like me (us?) fall for it anyway. But if, in general, you try to declare that the pleasure that somebody might get from some drug is simply "indicator maximizing" (and not "referent maximizing"), you're simply making an arbitrary value judgement.

Now, why don't I eat fake food, take drugs, watch emotionally manipulative movies, and so on? Is it because I'm some sort of genius who's super good at maximizing my referents, while the rest of the people are a bunch of moronic indicator maximizers? Well, not necessarily. I have a lot of goals which stretch into the extreme long term, and all of those things would simply interfere with them. If I'm remarkable in any respect, it's simply that I'm extremely focused. I pulled away from all of those things simply because I found them useless for my very specific purpose. Most people, if they had the same utility function, would have figured it all out too. Maybe.

I could go eat some pizza right now if I wanted, but the problem is that I wouldn't really enjoy it. I wouldn't be able to stop thinking about how short-sighted and idiotic I was being (and that would outweigh the good taste I'd be experiencing). But that doesn't mean that the other people around me who plan to order a pizza tonight are a bunch of high-time-preference morons. Not necessarily. Maybe they just have different goals than me. As condescending as I might come off in this (which, depending on the person reading, might range from not at all to extremely), I'm trying to be as value-free as possible. It's just that I'm using a language whose structure isn't very conducive to avoiding value judgements. At all. But hopefully that meta-point will solve that difficulty.

If I wrote it more than a few weeks ago, I probably hate it by now.

  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Mon, Feb 28 2011 2:37 PM

First of all, I'd like to say I don't necessarily disapprove of "indicator-maximizing." I love greasy food, cute pictures of kittens, and video games. I wouldn't want to write those things out of myself even if I could.

Self-referential utility is a special case. Within the space of possible utility-function designs, most don't self-reference. Humans happen to have evolved a partially self-referential design, since reinforcement learning is driven purely by the pain/pleasure indicators, which are themselves imperfect indicators of what does or does not increase inclusive reproductive fitness. But to say that the only things we want are things that give us neurological pleasure, or even that neurological pleasure is the only reason we want them, is to greatly oversimplify the whole issue. Reinforcement learning is not our only cognitive tool. Humans are far more complicated than that. Referent-based utility is a generalization of the usual definition, not the other way around.
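For what it's worth, here's a toy illustration (all numbers invented) of how a pure reinforcement learner, which sees only the pleasure signal, diverges from what the signal was built to indicate:

```python
# Toy sketch: reinforcement learning maximizes the pain/pleasure indicator,
# which is only an imperfect proxy for inclusive reproductive fitness.

actions = {
    "eat nutritious food": {"pleasure": 6,  "fitness": 8},
    "take a drug":         {"pleasure": 10, "fitness": -5},
}

# The learner sees only the indicator, never the fitness column...
chosen = max(actions, key=lambda a: actions[a]["pleasure"])
print(chosen)  # "take a drug": the indicator diverges from the indicated
```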

  • | Post Points: 20
Top 50 Contributor
Posts 2,162
Points 36,965
Moderator
I. Ryan replied on Mon, Feb 28 2011 7:01 PM

Azure:

But to say that the only things we want are things that give us neurological pleasure, or even that neurological pleasure is the only reason we want them, is to greatly oversimplify the whole issue.

Wait, whose philosophy are you calling "personal utilitarianism" anyway?

If I wrote it more than a few weeks ago, I probably hate it by now.

  • | Post Points: 20
Top 200 Contributor
Posts 424
Points 6,780
Azure replied on Tue, Mar 1 2011 6:25 AM

My old one. Though treating utility as happiness is common within ordinary utilitarianism as well.

  • | Post Points: 5
Top 50 Contributor
Posts 2,162
Points 36,965
Moderator
I. Ryan replied on Tue, Mar 1 2011 11:11 AM

Going back to an earlier post.

Azure:

The fact of the matter is, I have desires that I feel I shouldn't have, and desires I feel I should have but don't. If what is right is merely what we want, then why would we ever want to change what we want? Whether these feelings are justifiable, or even sensible, I still want to understand them and how they work.

It would seem contradictory to say that you want something that you don't want, but only until you add time to the analysis. In the cases where you have some desires that you feel you shouldn't have, what's going on? What does it mean to say that you have those desires but don't think that you should? Let's take the example of the video games. Even though you like them, and wouldn't want to write them out of yourself even if you could, I'm sure that at least sometimes you end up playing them when you "should" be doing something else. So what does that mean? It's where you indulge at the time (play even though your present self might anticipate that your self of a few hours later will regret it), but then regret it later. It's where one of your selves does something that a later one would wish otherwise.

 
But why would you ever do that? How could it happen that you would discount the utility function of your future selves? It's one thing if you make the wrong prediction (think that your future selves would approve), but it's a whole other thing if you make the right one (know that your future selves wouldn't approve) and still go through with it. If we're simply striving to achieve the most pleasure or something, why would we subject our future self to pain? If we don't care about the prospect of being happy in the future, what's left of the pleasure/pain idea? And, as you said, why would we think of the consequences at all? Well, I'm being pretty loose with the terms here. It's not that you might discount the utility function of your future selves in general; it's that you might discount it for some of your future selves in particular. This is where our time preference comes into play.

In the original example (the one about the video game), what's going on is that your present self is more interested in pleasing some of your closer future selves than some of your further ones. Your present self might be sure that your future self of a few hours later would disapprove, but for some reason you're more interested in the fact that you know that your future selves of the next hour or something wouldn't disapprove. We often use the word "indulge" to refer to giving a much higher consideration to your short term well-being (the pain/pleasure for your future selves in the near future) than to your long term well-being (the pain/pleasure for your future selves in the far future). If you say that you feel as if you have desires that you shouldn't have, it's only because the word "desires" is referring to one category of your future selves, and the phrase "feel as if you should" is referring to another category of your future selves. As your present self, it's possible to predict those things.

And if what's right is simply what we want, why would we ever want to change what we want? First of all, that's a bit misleading. To make it clearer, let's change it to, "If what's ultimately right for me is simply what I ultimately want, why would I ever want to change what I proximately want?" Well, now it's pretty damn obvious. If I ultimately want A, I could easily think that X would lead to it, but then change my mind to thinking that it would actually be Y that would lead to it. In that case, I would change from proximately wanting X to proximately wanting Y. Perfectly unremarkable, right? But what do you ultimately want, how could we figure that out, and could it shift over time? Really I have no idea. I'm not sure how to reduce the proximate/ultimate distinction to my perceptions or anything. For now, they're just convincing words. But I think that the whole thing has to do with the time structure of how our different selves interact with each other (time preference and whatever).
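One standard way to model that time structure is hyperbolic discounting, under which preferences reverse as a temptation draws near; a sketch with invented numbers:

```python
# Sketch of time preference producing "indulge now, regret later":
# hyperbolic discounting reverses preferences as the choice draws near.

def hyperbolic(value, delay, k=1.0):
    return value / (1 + k * delay)

small, large = 10, 30  # play games now-ish vs. finish the work later

# Viewed from a distance, the large, later reward wins:
print(hyperbolic(small, 10), hyperbolic(large, 15))  # ~0.91 < ~1.88

# At the moment of temptation, the small, immediate reward wins:
print(hyperbolic(small, 0), hyperbolic(large, 5))    # 10.0 > 5.0
```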

If I wrote it more than a few weeks ago, I probably hate it by now.

  • | Post Points: 5