Positive impact - Positive

This is the second part of my thoughts on how I understand “Positive Impact”, where I’m wondering what “Positive” means. The “Impact” part can be found here.

Signs

A positive value is one that is larger than zero. Not all that helpful. Though at least we know it’s not negative, which is something. The question is what (and how) is being measured. I think it’s safe to assume that a positive outcome is one that the evaluator would describe as “good”, while a negative one as “bad”. If so, then it seems like there are two big areas to ponder, i.e. what is “good” and how many dimensions of “good” can be had at once.

Dimensionality

Assuming that there is an algorithm that can output the value of an action according to some criterion, that isn’t enough to state whether it’s objectively positive or negative. A given action can have multiple dimensions of goodness. For example, building new apartments in a town with massively overinflated rents can be deemed good, as it lets more people afford to live there (assuming that is something of value, of course). At the same time, new buildings will increase the environmental footprint of the town, which will harm the local fauna and flora. Or two identical twins have the plague, but there is only enough antibiotic to cure one of them - which one gets the medicine? Or any trolley problem, for that matter. The basic problem is conflicts of interest - different criteria give different values, but they have to be combined into some total sum. I find it useful to view this as a bunch of vectors pointing in different directions - the total impact being a combination of them. Which then raises the matter of how to combine them, for which I don’t have a good answer. My intuition is to do a sort of weighted sum, where each input’s weight is assigned pretty much by how I feel about it. Usually the availability bias raises its ugly head and takes over.
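
To make the vector picture slightly more concrete, here is a toy sketch (in Python, with entirely made-up dimensions, weights and scores) of what such a weighted sum might look like. Nothing here is rigorous - it mostly shows that the arithmetic is trivial and the hard part is assigning the weights:

```python
# Toy sketch of the "weighted sum of impact vectors" idea.
# All dimensions, weights and scores are invented for illustration;
# choosing the weights is exactly the part there is no good answer for.

impacts = {
    "affordable_housing": +0.8,   # more people can afford to live in town
    "environmental_harm": -0.6,   # construction hurts local fauna and flora
    "local_character":    -0.2,   # intangible cost of densification
}

# Weights reflect how much the evaluator cares about each dimension -
# assigned "pretty much by how I feel about it".
weights = {
    "affordable_housing": 1.0,
    "environmental_harm": 0.7,
    "local_character":    0.3,
}

total_impact = sum(weights[dim] * score for dim, score in impacts.items())
print(f"total impact: {total_impact:+.2f}")  # positive => "good", negative => "bad"
```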

The previous examples seem to mix up two kinds of dimensionality problems. The first is where there are conflicting values (in the example cheap housing vs nature conservation) which are usually in and of themselves incompatible with each other, for neither can live while the other survives, etc. On the other hand, there are all kinds of problems where it’s pretty obvious what is good, the problem being how to divide the goodness up. Or how to dish out the badness, as in trolley problems. Though the difference is probably just a matter of framing.

The problem of how to dish out goodness seems to be proportional to how important the matter at hand is. In the case of Big Issues, which have a large impact, it seems imperative to choose the correct method of distribution of the good and bad. Which seems to be a moral question… When I think about it, I tend to be inclined towards a sort of value-based weight distribution: each being has a certain value based very loosely on their consciousness/feeling capabilities. This is very hand wavy, based on intuition and totally not rigorous in any way. So a sponge has some value, but not as much as a fly (which can maybe feel pain?) and certainly a lot less than a cephalopod. This sort of takes potentiality into account, too. So it’s better for a human to have a place to live than to worry about a fox den (though that also has moral weight), but it’s better for that human to not have a place to live than for a species to go extinct if that is the only place it can live (as a species going extinct will totally end its potentiality). Or something like that? Both seem bad and I don’t have a decent way of differentiating. This runs into problems fast. Is it better to save a drowning child or an old person? What about a rich person vs a poor one? An intelligent one vs a stupid one? A highly talented musician vs a caretaker? If asked, I’d tend to say that according to potentiality, it’s better to save the first item in each pair, as they’re more likely to have a large effect on the world. But that also seems abhorrent in some way…

Morality

This all assumes that there is a set of moral rules or something to decide which act is better or worse. This is a Problem. And a big one at that. It doesn’t help that everyone seems to have their own answer, which they often think is the only correct one. It helps that people tend to agree on which actions are good and which are bad, or at least have large overlapping areas of agreement. Though it would be good to finally solve philosophy and work out the basic rules of morality. Assuming there is such a thing as a basic set of correct morality, which for some totally obscure (to me) reason, I still do. Even when I have epistemological reasons to believe the opposite…

Omphaloskepsis

I once lost all of my moral foundations (i.e. God), which required rethinking everything from the beginning. This is something that I highly recommend, as it teaches quite a few very valuable lessons. My initial attempt at solving morality was that good is whatever brings greater pleasure into the world, while bad is whatever introduces pain. As a definition it’s a start, but only that. This is pretty much utilitarianism. There are quite obvious problems with this, which have been done to death, so no point in going into them here. My way of fixing them was to go in a more Buddhist direction and change it to “suffering is bad, good is whatever minimises pain”. As a moral system this is a lot better from my point of view, as it removes a lot of the Omelas factor. But it’s still overly simplistic - reducing morality to one dimension removes a lot of interesting aspects of humanity which seem to also have value, like beauty, freedom or justice. One could say that they are simply proxies for pleasure and pain - beauty and freedom bring pleasure, while the lack of freedom or injustice brings suffering. Though that doesn’t seem to be the whole story. A slave can be totally happy with their situation, yet most people would say that is bad. Injustice can often result in greater pleasure, but is often condemned (e.g. stealing from the rich to give to the poor). Of course one could argue that freedom is overrated and that freedom to starve is worthless, or that stealing from the rich is fine as there is no way they could have gotten rich morally. But that doesn’t ring true.

Values

While pleasure/pain are very important, they’re not the only important things. Overly focusing on them seems to lack a certain fullness. There are many other values that have intrinsic worth, or at least appear to. It doesn’t seem likely that the only reason people will fight and die for love, faith, loyalty, honor etc. is that they think that is the best way to maximise pleasure. Especially as fighting comes with the very real risk of pain, injury and death. Which can sort of be construed as suffering… One way out is to change pleasure/pain to “the greater good” or some such thing. Which doesn’t really change anything, as now we have to go through the whole rigamarole with “good”. Especially as the initial question was what “good” means…

Lawful vs chaotic good

No discussion of morality is complete without mentioning the basic types of ethics, i.e. virtue ethics, deontology and consequentialism. My basic understanding here is that virtue ethics says that something is good if a good person does it (e.g. Batman hitting baddies is good), deontology says that good acts are those that are moral (e.g. not lying to Nazis about Jews under the table is good, as the 9th commandment forbids lying), while consequentialism says that moral acts are those that are good (i.e. stealing from the rich to give to the poor is good as they’re better off). Utilitarianism is very much consequentialism - good acts are those that maximise utility. Kantian ethics (from what I have gathered from reading commentaries) is very much deontological - good actions are those that are in accordance with the objective and absolute laws of morality. Virtue ethics seems to be a lot less represented in modern thought - the ancients very much went in for it with their focus on arete. Contemporary examples seem to be superheroes and politicians.

What to do?

I seem to be naturally inclined towards deontology. I have a built-in urge to create moral laws by which I should abide, and to see the world through that lens. So far as to honestly think that it would be bad to lie about having hidden Jews from the Nazis. That being said, I think it would be even worse not to lie about it, so there is also a large amount of consequentialism there. I initially thought it’s a matter of applying different weights to different rules (i.e. “do not let others come to harm” trumps “do not lie”), but that’s not quite true. There are situations where I’d not lie even if it would harm someone else, which implies that the weights depend on the circumstances, i.e. consequentialism. I also find virtue ethics useful - maximising one’s arete is good, as that will result in one being better and so acting better.

Summary

This has been a long-winded way to say that I still have no proper answer as to how “positive” should be defined nor how best to approach it. Currently it’s like porn - I know it when I see it. And that a positive action is one that raises the average (or median or …) level of wellbeing of all affected. For some definition of “all” which doesn’t only include those dear to me but is a lot more general than many people would be comfortable with, potentially including animals, ETs, AIs etc. It also applies to those in the future, but not necessarily those in the past, as I don’t see a way for causality to work backwards (and things like honoring the dead are mainly for those that are still living). So to sum this all up, my current view is something like this:

  • All beings that can feel and think should be considered
  • Beings that can feel/think/? more should be accorded more weight
  • Locality (both spatial and temporal) shouldn’t affect a being’s value (other than in practical issues like deciding whom to help)
  • Just because I like someone more shouldn’t mean that they have more value - though in practice I don’t act this way
  • A moral system that doesn’t take consequences into account, doesn’t have general rules, or doesn’t encourage actively becoming more virtuous is worse than one that does
  • A positive action is one whose outcomes, according to the overarching moral system, would on balance be called good rather than bad
  • An action that results in a net positive but is not itself morally good is at best suspect, and is worse than an action that is itself moral and results in the same net positive - put simply, the ends don’t justify the means, though sometimes they sort of do