Maintaining a model of the world
If I look at the sky and think that it’s probably going to rain, this means something like “there are lots of dark clouds - in the past, this often resulted in rain. When there weren’t clouds, it didn’t usually rain. So it seems worth preparing for rain”. I’m not sure it will rain, I’m not sure it won’t rain - I think either might happen, but if I had to choose only one, I’d go with rain? Maybe?
If it’s raining and I go out, I think I’ll get wet. My belief that I’ll get wet could be phrased as “it’s raining - in the past, when I went out into the rain, I got wet. Other people often also complain about this. So far, this has pretty much always been the case, the exceptions being things like the rain suddenly stopping, or someone happening to have an umbrella. Seems like it’s worth assuming that going out results in me being wet”.
In both cases I’m expressing a belief about how the world works, along with a prediction of what I expect to happen. The main difference is that in the second case, only one hypothesis seems to make sense (“I’ll get wet”), while in the first case both “it will rain” and “it won’t rain” seem like things that may happen.
A different, slightly more formal way of stating this is that, given:
- a model of how the world works (where “model” means “I think the world works in such a way that if I do X, then Y will happen” and “I believe that the world is in this specific configuration”)
- a way to generate hypotheses about what will happen in the future (e.g. “it will rain” vs “it won’t rain”)
- a way to work out a ranking of how likely each hypothesis is to turn out true (“it will rain” is more likely than “it won’t rain”)
then you can plan your actions in such a way that they turn out best for you (however you define “best”, of course).
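To make that a bit more concrete, here’s a minimal Python sketch of the loop. This is purely my own toy illustration - the actions, outcomes, numbers and the “score by value times plausibility” rule are all made up, and real planning is of course much messier:

```python
# Toy version of "model + hypotheses + ranking -> plan".
# Everything here (actions, outcomes, numbers) is invented for illustration.

# World model: for each action, what I believe will happen, with a rough
# plausibility score (not a proper, normalised probability).
world_model = {
    "take an umbrella": [("stay dry", 0.9), ("carry it around for nothing", 0.4)],
    "go out without one": [("get soaked", 0.7), ("stay dry", 0.3)],
}

# How much I value each outcome - this is where "best" gets defined.
values = {"stay dry": 10, "carry it around for nothing": -1, "get soaked": -20}

def score(action):
    """Rank an action by summing value * plausibility over its predicted outcomes."""
    return sum(values[outcome] * plausibility
               for outcome, plausibility in world_model[action])

# Plan: pick whichever action I predict will turn out best for me.
best_action = max(world_model, key=score)
print(best_action)  # -> "take an umbrella", with these made-up numbers
```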
One thing that is important to point out is that these assumptions (and the whole process) don’t rely on you knowing what will happen, or really knowing much about the world at all. They help you maintain what you believe about the world, and help you make predictions about what will come next. You can have a very simple model of the world and still come up with useful predictions. On the other hand, the usefulness of this process depends a lot on how good your model is and how well you manage the hypotheses. If your world model doesn’t describe reality well, then your predictions probably won’t be good.
A simple world model example
To illustrate the previous point, let’s assume you have the following world model (i.e. these are beliefs you hold):
- eating is good
- if you move towards food, you can probably eat something
- if you smell food, there’s probably something to eat there
- if you can’t smell food, you should start moving in a random direction
This is a simplified model of how bacteria work. It’s a trivial model, but surprisingly useful, as it describes their behaviour well. Assume a bacterium detects a strong food signal from the left, and a weaker one from the right. Some hypotheses that it could posit are:
- there’s more food to the left (probably, as the signal is stronger there)
- there’s more food to the right (maybe, but the signal is weaker, so maybe not?)
- there’s more food straight ahead (unlikely, as the signals are from the sides)
- there’s no food to be found (unlikely, what with these signals, but this is also possible)
Now it can sort them by how likely they seem, and choose the option it thinks best (probably go left?).
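For fun, here’s the bacterium’s “reasoning” as a few lines of Python. Again, this is my toy illustration - the signal strengths and scores are invented, and real bacteria do something closer to a biased random walk than explicit hypothesis ranking:

```python
# Toy bacterium: rank the hypotheses about where the food is, then act on the best one.
# The signal strengths and the scores are made up for illustration.

signals = {"left": 0.8, "right": 0.3}   # how strongly food is smelled on each side

hypotheses = {
    "more food to the left":    signals["left"],
    "more food to the right":   signals["right"],
    "more food straight ahead": 0.1,     # unlikely - the signals came from the sides
    "no food to be found":      0.05,    # unlikely, given the smells, but possible
}

# Sort by how likely each hypothesis seems, most likely first.
ranked = sorted(hypotheses.items(), key=lambda kv: kv[1], reverse=True)
best, _ = ranked[0]

# Act on the best hypothesis: move towards food if there seems to be any,
# otherwise fall back on the "move in a random direction" rule.
action = "move in a random direction" if best == "no food to be found" else f"act on: {best}"
print(action)   # -> with these numbers, it heads left
```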
Bad world model examples
A bad world model is one that doesn’t fit reality well. This will result in predictions that don’t work well - often they turn out to be actively bad for you. Think of your favourite incorrect conspiracy theory - its believers have a complicated model of how a part of reality is constructed (hence the “conspiracy” part), which doesn’t fit reality well (they are, after all, wrong). Predictions based on such a theory will consistently turn out to be false.
Gamblers with a system are also a good example. There are people who come up with very elaborate methods for when and how to place their bets, but usually end up losing over time. Superstitions are a different version of the same process - some combination of things or actions resulted in a good (or bad) outcome in the past, so the world model of a superstitious person has been updated to store various more or less complicated rules that supposedly change what will happen, even though they don’t change anything.
Generating hypotheses
Once you have a world model, you can move on to thinking about what will happen. Usually this is about future things, but you can also do the same in other contexts, e.g. Richard Carrier wrote a book which uses this approach to work out whether Jesus really existed. That being said, looking at non-future events can be reframed as making a prediction about what your future self will believe (or see), at which point we’re back to predicting the future. Again - this approach doesn’t tell you what is True; it helps you construct a better and better world model, which (hopefully) gets closer and closer to real truth, and which helps you make more and more accurate predictions about what will happen. The map is not the territory. This whole approach works on beliefs, not truth. Ideally your beliefs will be true, but being human, often they aren’t.
Going back to our rain example, after observing the sky, we could come up with the following hypotheses:
- it won’t rain
- there will be light showers
- there will be a proper rain
- it will rain cats and dogs
- there will be a storm
All of these talk about the same thing - how much water will fall from the sky. But to be thorough, we should consider all the possibilities for water falling from the sky. Other options could be hail, or snow. Or we could go for more far-fetched things like frogs or fish, or even posit things like the rain falling but stopping 10m above the ground, or something equally outrageous. As long as it’s possible (according to your world model), you should ideally include it. In practice, lots of hypotheses are so unlikely that you just ignore them, or maybe add a sort of catch-all “other options” hypothesis, as in the sketch below.
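In code, this pruning might look something like the following. The hypotheses and their rough plausibilities are entirely made up - the point is the cutoff and the catch-all bucket:

```python
# Toy hypothesis generation: keep anything plausible enough, lump the rest into a
# catch-all "other options" bucket. All the numbers are made up.

candidates = {
    "it won't rain":                0.20,
    "light showers":                0.30,
    "proper rain":                  0.35,
    "rain cats and dogs":           0.10,
    "a storm":                      0.04,
    "hail":                         0.005,
    "snow":                         0.003,
    "rain of frogs":                1e-9,
    "rain stops 10m above ground":  1e-12,
}

THRESHOLD = 0.01   # below this, don't bother tracking the hypothesis separately

kept = {h: p for h, p in candidates.items() if p >= THRESHOLD}
kept["other options"] = sum(p for p in candidates.values() if p < THRESHOLD)

print(kept)   # five "real" hypotheses plus a tiny catch-all
```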
Notice that I wrote “according to your world model” above. Not to beat a dead horse, but that is once again pointing out that this whole process is a method to update what you believe about the world, so that you can make better and better predictions about what will happen based on new evidence. You start with certain beliefs (whatever they are, and however valid they are), and any new evidence will be viewed through that framing. A hypothesis might be impossible according to your world view, in which case you (correctly in context!) won’t consider it, but it might actually be possible in the real world. Or a hypothesis might be possible according to your world view, but not actually be possible in the real world. Both of these mean that your world view is faulty and should be updated. But either way - you can only generate valid hypotheses within the frame of your world view. If you want to take this into account (you should!), then you can always add a “my world view is flawed and doesn’t take something into account” hypothesis to handle this case. Ideally this will be vanishingly unlikely, but it’s always worth keeping around (otherwise you won’t be able to fix these kinds of errors).
Labeling hypotheses
Now that you have your hypotheses worked out, you want to know how likely they are, so you can plan accordingly. What you do with this information is a large and separate topic, but generally speaking, you try to do whatever you think will turn out best (whatever that means). For this to work, you need some way to compare hypotheses. You could, as some people claim, just say that everything is either sure, or 50/50 (it either will or won’t happen, after all). This isn’t very useful. KSG might win the UEFA championship (who, you say? Exactly!), but I wouldn’t bet on it. Similarly - it’s possible that New Zealand will have a civil war next month. But I also wouldn’t bet on that. Some possibilities are more, well, possible than others. This seems important for planning things.
There are all kinds of ways of doing this. You can assign each hypothesis a probability (e.g. from 0 to 100%). You can assign odds (“3 to 1 this will happen”). You can give each hypothesis a label (like “very likely” or “certain”). There are many ways this can be represented.
The thing all these approaches have in common is that they allow you to order hypotheses by how likely they are. Some approaches give you more precision (e.g. “the probability of this is 0.13513”), others just allow you to group things together. Either way, this then lets you know which hypotheses are worth focusing on and which can be ignored. Going back to the rain example again, let’s order the hypotheses:
- there will be a proper rain - highly likely
- there will be a light shower - likely
- it won’t rain - possible
- it will rain cats and dogs - possible
- there will be a storm - unlikely
- other options - pretty much impossible - this is the catch-all from before, kept for consistency, so it will be ignored
This order isn’t really needed, per se, but it makes things a lot clearer. For most questions (here “how much water will fall from the sky?”) you can generate a whole bunch of possibilities, but most of them will be so unlikely that you can ignore them (practically speaking). Ordering them is a nice way of doing this, as you can set a cutoff (e.g. “anything above this level” or “the first 10 options”) to narrow the options down to a tractable number.
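If you want to see the ordering-and-cutoff step spelled out, here’s a small sketch. The labels mirror the list above; mapping labels to numbers is just one arbitrary way of making them sortable, not the only one:

```python
# Toy ordering of labelled hypotheses, plus a cutoff. The labels mirror the list
# above; the numeric ranks are an arbitrary way to make the labels comparable.

LABEL_RANK = {
    "highly likely": 5, "likely": 4, "possible": 3,
    "unlikely": 2, "pretty much impossible": 1,
}

hypotheses = [
    ("proper rain",        "highly likely"),
    ("light shower",       "likely"),
    ("it won't rain",      "possible"),
    ("rain cats and dogs", "possible"),
    ("a storm",            "unlikely"),
    ("other options",      "pretty much impossible"),   # the catch-all
]

# Order by label, most likely first, then keep only what seems worth planning for.
ordered = sorted(hypotheses, key=lambda h: LABEL_RANK[h[1]], reverse=True)
CUTOFF = "possible"   # ignore anything less likely than this
worth_considering = [h for h in ordered if LABEL_RANK[h[1]] >= LABEL_RANK[CUTOFF]]

print(worth_considering)   # proper rain, light shower, and the two "possible" options
```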
Updates
Once we have our hypotheses properly labeled, we can update our world model. We started with the fact that the sky is overcast; now we have a new fact - that it will likely rain. “But wait!”, you say. Overcast skies are a fact, granted. But how is “it is likely to rain” a fact? Here I admit to some additional hand waving. Going back to the point that what we’re operating on are beliefs about the world, not truths about the world: “overcast skies” is also a belief (albeit a very well-founded one, in which we can be very confident), while “it will rain” is a new belief, one we are less certain about. Both are treated as the same kind of input to our process, just that one is more certain than the other. This may seem strange, but it’s possible that those overcast skies are some kind of optical illusion, or you might have some kind of weird brain damage or something. Very unlikely, but possible (at least according to my world model…). So strictly speaking, you believe that the sky is overcast (and that you heard the weather report, and …), from which you also believe that it will soon rain. Your world model has gained a new belief (“it will rain”), with which you will proceed to produce new hypotheses, e.g. “staying outside will be uncomfortable”.
The main thing is that each new piece of information (including internal information, i.e. thoughts) results in a bunch of predictions about the future (or about the state of the real world - which amounts to the same thing), which get assigned various likelihood labels, and then result in yet another new piece of information that needs to be assimilated into your world model. Ideally, each such new nugget of information should trigger an update cascade of all the beliefs it affects. Ideally only, because in practice it’s infeasible to do this properly. Hence all the various stereotypes, heuristics etc. that people commonly use - it would cost too much to do a full, proper update on every new piece of information, so we use shortcuts.
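As a very rough sketch of that loop (new information → predictions → labels → a new belief, which is itself new information), and of why you need a shortcut to stop the cascade running forever, something like this - the structure is entirely my invention, not a claim about how brains actually implement it:

```python
# Toy belief-update loop: each piece of new information produces predictions, the
# best-supported prediction becomes a new belief, and that belief is in turn new
# information to process. Everything here is an invented illustration.

beliefs = {"the sky is overcast": "certain"}

# Stand-in for the world model: what a given belief predicts, with a likelihood label.
PREDICTIONS = {
    "the sky is overcast": [("it will rain soon", "likely")],
    "it will rain soon":   [("staying outside will be uncomfortable", "likely")],
}

queue = ["the sky is overcast"]   # new information waiting to be processed
MAX_STEPS = 10                    # the shortcut: don't let the cascade run forever

for _ in range(MAX_STEPS):
    if not queue:
        break
    info = queue.pop(0)
    for prediction, label in PREDICTIONS.get(info, []):
        if prediction not in beliefs:     # a genuinely new belief
            beliefs[prediction] = label
            queue.append(prediction)      # ...which itself triggers further updates

print(beliefs)
# -> overcast skies lead to "it will rain soon", which in turn leads to
#    "staying outside will be uncomfortable"
```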
Is this true?
This approach is in theory optimal, though obviously impractical to carry out properly. Human brains do a version of it, with modifications (much like how each tech company does “agile with slight modifications”). It’s a very useful model through which to view changing one’s mind and updating one’s beliefs on the basis of new information.
All of the statements in the previous paragraph are beliefs I hold with a likelihood label of “certain”. By that I don’t mean that they’re objectively, transcendentally, absolutely true. It means that my world model thinks so. I might be wrong! I’m pretty sure I’m not, though. And here we’re back to everything being viewed through the lens of your world model. When I was first exposed to these ideas, they were just possible hypotheses. Just like “it will rain” vs “it won’t rain”. Subsequent new information caused them to be labelled as more and more likely, while the alternatives became less and less likely, until I ended up sure of them, just as I ended up certain that going out in the rain means getting wet.