Positive impact - Impacts


I’ve been asked to define how I understand “positive impact”, which is an exceedingly good question, and an unexpectedly hard one. That shouldn’t be surprising, seeing as it’s pretty much the main subject of moral philosophy. So here’s my attempt at solving one of the hardest and most elusive problems of humanity in a single blog post. Trivial, I know. Just in case it wasn’t obvious, I’m not trying to create an overarching framework of morality here, just writing down my thoughts on the topic. The second part, where I pontificate on morality, can be found here.

Impact

This part seems clearer to me. I’m currently going with a general split into temporal, spatial and causal impacts, with impacts being characterized by their magnitude and duration. So an impact can either be spread out - over a wide area, over a long period, or affecting a lot of other events - or more of a point event (short duration, not affecting other things). It can also have a massive effect or hardly be noticed at all. I find the image of ripples very helpful here, as it can be applied to pretty much all the kinds of impacts.

Ripples

Imagine two sources of ripples: a small object bobbing up and down on the water, and a large underwater explosion. It’s possible for both sources to output the same amount of energy, as long as the bobber bobs long enough. I can’t be bothered to do any calculations, but it’s worth clarifying that “long enough” means “very, very long”. The difference, of course, lies in the different values of magnitude and duration. From which we get a simple equation: energy = magnitude x duration. This is pretty much just a different interpretation of work.
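To put rough numbers on “very, very long”, here’s a quick sketch of the energy = magnitude x duration idea. Both the explosion energy and the bobber’s power output are made-up illustrative values, not physical estimates:

```python
# Hypothetical numbers: a one-off 1 GJ underwater explosion vs a bobber
# radiating 0.5 W into ripples. How long until their total energies match?
explosion_energy_j = 1e9   # 1 GJ, assumed for illustration
bobber_power_w = 0.5       # 0.5 W, assumed for illustration

# energy = magnitude x duration, so duration = energy / magnitude
seconds_needed = explosion_energy_j / bobber_power_w
years_needed = seconds_needed / (365 * 24 * 3600)

print(f"The bobber needs about {years_needed:.0f} years to match the explosion")
```

Even with these toy numbers, the bobber needs decades of continuous bobbing - “very, very long” indeed.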

Impactors

The image of ripples doesn’t apply that well to spatial impacts, but the basic thought is the same - the total energy of a high-energy impact in a small area can be the same as that of a lot of small impacts spread over a large area. To illustrate this, I’ll show off my leet time-wasting skills and compare how much rainfall is equivalent to a meteoroid strike.

According to Wikipedia, a meteoroid that generates a 100 m crater will impart some 14.22 PJ of energy, which is A Lot™. According to this Reddit answer, 1 inch of rain over a 10 km x 10 km area applies some 127 GJ of energy. Which is only some 112,000 times smaller… That should come out to the equivalent of 1 inch of rain on an area of 1000 km x 10,000 km. Seeing as 10,000 km is the distance from the equator to the poles, that is a lot of rain. I’m not totally sure about those calculations, but give or take an order of magnitude or two, they should be ok, and more importantly, they give an idea of the differences in scale.
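The arithmetic above can be re-run in a few lines, using the two energy figures quoted from Wikipedia and Reddit as given:

```python
# Figures taken from the text: ~14.22 PJ for the meteoroid, ~127 GJ for
# 1 inch of rain on a 10 km x 10 km patch.
meteoroid_j = 14.22e15
rain_patch_j = 127e9

ratio = meteoroid_j / rain_patch_j
print(f"one meteoroid ~= {ratio:,.0f} rain patches")  # roughly 112,000

# Each patch is 10 km x 10 km = 100 km^2, so the equivalent rained-on area:
area_km2 = ratio * 100
print(f"equivalent area: {area_km2:,.0f} km^2")  # ~1.12e7 km^2, i.e. roughly 1000 km x 10,000 km
```

So the 1000 km x 10,000 km figure checks out to within rounding.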

Knock-on effects

Causal impacts are the interesting ones. Temporal and spatial impacts are really only important insofar as they help explain causal ones. Even if an impact is something that lasts for a long time, e.g. lifelong UBI, what’s really interesting is the effect it has on the well-being of the beneficiary, as well as how it impacts society at large. Or maybe causal impacts are simply combinations of temporal and spatial ones? I can’t think of any good examples that aren’t spatial or temporal (or a combination of the two). Maybe this category isn’t needed at all… Although as an abstraction it may still be useful.

Moral impact

The point of all these classifications is that though a flashy action like giving away a million dollars (at least it’s flashy for an average person) seems worthy of praise etc., the actual value is the same as if that person gave away 25k yearly for 40 years (ignoring inflation etc.), or if 10k people each gave $100 (ignoring coordination costs etc.). This has some interesting properties.
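The equivalence of the three giving patterns is just the magnitude x duration equation again, which a couple of lines make explicit (ignoring inflation and coordination costs, as above):

```python
# Three ways to give away a million dollars, compared by total amount.
lump_sum = 1_000_000          # one flashy donation
spread_out = 25_000 * 40      # 25k yearly for 40 years
crowd = 100 * 10_000          # 10k people giving $100 each

print(lump_sum == spread_out == crowd)  # -> True: same total impact
```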

Assumptions

Before going into details, it’s worth specifying my assumptions here:

  • all lives are equal etc. - this really deserves a whole separate treatment, but let’s leave it at that

  • sentience is of some value, with more sentient beings being worth more. This also needs a lot more exposition, but the basic idea is that a hamster is worthy of consideration, but a human a lot more so. With a lot of potential problems which I most certainly don’t want to address here

  • beauty has some intrinsic value - e.g. it’s better for pristine Mars scenery to exist than for it to be covered in rubbish. This is more of a feeling than a rationally arrived at conclusion, but it seems that a world without beauty is a lot less valuable than the alternative

  • there is some way to quantify impacts, or at least to compare them to each other

  • societal effects are ignored while thinking about all of this. Which is a massive oversimplification, as they have a large effect on everything. Here I assume that all actions are of the “let not thy left hand know what thy right hand doeth” type

Equivocations

With the above assumptions, plus the additional one that the total amount of help is the same in both cases, there is no real difference between helping your neighbor and helping a total stranger on the other side of the world. Which very much goes against people’s inborn biases.

A more difficult case is helping multiple people vs a single person. Assuming 60 years of helping, one could make dinner for someone 21,900 times. Does that have the same value as 21,900 people independently feeding the same person? The result is the same, which would suggest so. What about making dinner for 100 people 219 times? This seems a bit different, but that probably stems from the differences in inputs - making dinner for 100 people is a lot more work than doing it for one, even if the total time and effort spent is less than making dinner for one person 21,900 times (economies of scale come into play). This also starts to bring biases into the picture. One could say that biases should be expunged, but that isn’t how humans work, which makes things a lot murkier. That being said, it seems safe to say that this case is also equivalent, ignoring all the caveats.

What about helping someone today vs helping someone in a year’s time? If I help someone today, they are in a better position to help others, while if I wait a year, then I’m to a certain extent responsible for the negative consequences of them not being helped (lost opportunities, them being in a bad situation). What about actions that won’t bear fruit until some time in the future? And what about actions that won’t bear fruit until some time in the future and that have a certain probability of not having the desired impact?

The obvious problem here is that actions have different values. The effort involved in two actions may be the same, yet the values may differ: feeding a starving person is a lot more valuable than feeding a regular person, not to mention feeding someone who has just had dinner. The same applies to the actual effort. Is it better to get really tired making dinner for 100 people, but only having to do it 219 times, than to make dinner for one person 21,900 times? This is really only a matter of reframing the problem to take into account the effort involved, not just the impact, but it’s still worth mentioning. What this also means is that, all else being equal, lower-cost actions are better. Seems obvious when put that way, though…

A second problem is knock-on effects: how my actions will affect other actions, etc. Is it better to feed starving children, who might then go on to do great things, or to feed starving elderly people, which can serve as encouragement to others not to worry so much about whether they’ll be able to support themselves? If I feed a whole city, only to have them get killed by Putin the next day, was that worth it? Is palliative care worth bothering with? What is the point of taking care of the future with the heat death of the universe continuously looming over us (which I have a real problem with… :D)? Should we all just paint mandalas? I don’t have an answer for how this affects the impact or value of actions.

Another issue, which seems extremely interesting, is future effects. In principle there should be no difference between someone alive today and someone who will live in 10,000 years’ time (assuming no radical changes happen). Which implies that actions undertaken to improve the lives of future people are equivalent to those that improve the lives of people living today. That is a bit naive - I’d say the value of future people should be scaled down by the probability of their existence and by the probability that current actions will actually reach them. If there’s only a 20% chance that an action taken today will help someone living in 10,000 years’ time, then that action should be valued 5 times lower than an equivalent action for someone alive today. On the other hand, actions taken today can be massively important to future events - a successful colony on Mars would have a gigantic cumulative effect in 10k years’ time. This, of course, totally oversimplifies the case, but it serves to illustrate the point.
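The probability-scaling above can be sketched as a tiny function. The base value of 100 and the 20% success chance are the illustrative numbers from the paragraph, not anything principled:

```python
# Hypothetical sketch: scale an action's value by the probability
# that it actually ends up helping anyone.
def discounted_value(base_value: float, p_success: float) -> float:
    """Value of an action, discounted by its chance of having the desired impact."""
    return base_value * p_success

today = discounted_value(100.0, 1.0)       # helping someone alive now
far_future = discounted_value(100.0, 0.2)  # 20% chance of helping someone in 10,000 years

print(today / far_future)  # -> 5.0: the future action is valued 5x lower
```

A fuller model would multiply in the probability of the future person existing at all, per the text, but the structure stays the same.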

Summary

  • Lower cost actions are more worthwhile
  • A boring, slow and steady action can be just as good as (and often better than) a flashy show
  • People tend to focus on what is close geographically - this is a problem
  • People tend to focus on what is close socially - this is also a problem
  • People tend to focus on what is close in time - this is also a problem - who would have guessed?