New to Theory Mapping?

Theory Mapping is a new and potentially revolutionary method to improve the quality of theories that society uses. It does this by improving the generation, communication, critique, refinement and selection of theories. It is particularly applicable to areas of inquiry which are not amenable to controlled experiment, where it provides a systematic approach to using empirical evidence and logic in the evaluation of theories.

It consists of drafting Argument Maps for each theory (in which ideas and arguments are represented by boxes connected by arrows) and then measuring how coherently they can explain agreed facts.

Whatbeliefs.com is the home of Theory Mapping. For more information, the best place to start is the FAQs, which link to all the various posts on the site.

Sunday, 28 December 2008

How can I find the Truth?

We have seen that the best definition of truth is the accurate prediction of sense experiences. In order to find the truth we therefore need a way to test whether a statement or theory is able to accurately predict sense experiences.


Truth test one: direct testing of predictions

The first method is of course to directly test its predictions in a manner that is publicly verifiable, i.e. reproducible by anyone following the same methodology. This is the test used in the natural sciences. However, even if the predictions of a theory are verified, scientists do not accept it as true, because new sense experiences may always come along in the future that it did not predict and which falsify it. All they can say is that it has not yet been falsified. To put it another way, if a theory is the best predictor of sense experiences, we can say that it has a higher probability of being true than any other existing theory.

However, it is often not possible to apply this test. For instance, in the social sciences it is much harder to conduct controlled experiments. And the prediction made by most theistic belief systems of life after death is very difficult to test directly. Some have argued that this makes such theories ‘unfalsifiable’ and that they should therefore be rejected. But this would violate a principle formulated by William James: “A rule of thinking which would absolutely prevent me from acknowledging certain kinds of truth, if those kinds of truth were really there, would be an irrational rule”. This seems a reasonable principle, and so we should try to find another test for the truth that can be applied to those theories not amenable to scientific testing.


Truth test two: explanation of sense experiences

A more practical test is to see how well a statement or theory is able to explain existing sense experiences. This is because prediction and explanation are two ways of saying the same thing, since if ‘A predicts B’ then ‘B is explained by A’. The only assumption needed for this test to work is that the underlying nature of reality is relatively constant over time, so that if A predicted/explained B in the past, it will do so in the future too. As we have seen, the scientific method faces this problem too, and it means that we can never say that a theory is proved to be true. However, if a theory is the best at explaining, we can say that it is more likely to be true than any other existing theory. Since we are primarily interested in truth in order to inform our actions, it would be rational to act on this theory.

So how can we operationalise this test? I believe that the best way to see how well a theory can explain existing sense experiences is to measure its coherence in explaining facts.

By facts, I mean publicly verifiable experiences which are reproducible by anyone, and which you want the theory to explain. For instance, on the question of whether UFOs exist or not, one of the main facts to be explained is that "Many people think they have seen UFOs with the circumstances documented in books X, Y and Z". Anyone can verify this by reading the books mentioned.

By explaining, I mean stating at least the immediate cause of the fact e.g. "Many people have actually seen UFOs" (for the UFO believer), or "Many people have seen certain things under certain conditions and mistaken them for UFOs" (for the UFO sceptic).

By coherence, I mean the extent to which all beliefs within the belief system, including the explanations, are logically consistent and logically justified.
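To make these three definitions concrete, here is one possible way of representing a belief system in code. This is purely an illustrative sketch in Python; the names and structure are my own, not something specified by the Theory Mapping method.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    text: str
    justified: bool = True  # is the belief adequately justified?
    premises: list = field(default_factory=list)  # beliefs it rests on

@dataclass
class BeliefSystem:
    beliefs: list       # every belief in the system, explanations included
    explanations: dict  # maps each agreed fact to the belief that explains it

# The UFO example from above: the sceptic explains the agreed fact by
# positing mistaken perception as its immediate cause.
fact = ("Many people think they have seen UFOs with the circumstances "
        "documented in books X, Y and Z")
mistake = Belief("Many people have seen certain things under certain "
                 "conditions and mistaken them for UFOs")
sceptic = BeliefSystem(beliefs=[mistake], explanations={fact: mistake})
```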

If any of this doesn't make sense then go to Coherent Explanation of Facts for a detailed explanation.


Coherence Quotient (CQ)

If the truth test is the level of coherent explanation of the facts, it would be very useful to have a quantitative measure of it that could be used to objectively assess the coherence of competing belief systems in explaining the same facts. I think that such a measure is relatively straightforward to construct, and call it the Coherence Quotient (CQ) of a theory. I suggest that it be the percentage of the total number of beliefs that are both consistent and justified, where each belief is weighted according to how important it is in the system. For instance, if there are 10 beliefs, one belief is not adequately justified and that belief is of average importance, then the CQ is 90%. If the belief is more important than average, then the CQ is lower than 90%, whereas if it is less important than average, the CQ would be higher than 90%. If, instead, there is an inconsistency between two beliefs, this would be counted as a problem with the less important of the two. If that belief is of average importance, the CQ would again be 90%.
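As a sanity check, the worked example above can be expressed in a few lines of code. This is only a sketch (the function and variable names are my own, and a real implementation would derive the weights from a Belief Map rather than taking them as given):

```python
def coherence_quotient(beliefs):
    """beliefs: list of (weight, ok) pairs. `weight` is the belief's
    importance in the system; `ok` is False if the belief is unjustified,
    or is the less important side of an inconsistency."""
    total_weight = sum(w for w, _ in beliefs)
    ok_weight = sum(w for w, ok in beliefs if ok)
    return 100.0 * ok_weight / total_weight

# Ten beliefs of equal importance, one inadequately justified: CQ = 90%.
print(coherence_quotient([(1.0, True)] * 9 + [(1.0, False)]))  # 90.0
# The same problem belief at twice the average importance lowers the CQ.
print(coherence_quotient([(1.0, True)] * 9 + [(2.0, False)]))  # ~81.8
```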

This is very similar to the dissonance ratio used by cognitive dissonance theorists in psychology, except that they only look at the consistency between one's beliefs.

The Belief System Debate method uses the CQ measure to judge between belief systems in a debate. It is therefore important that the measure provide the right incentives and not be open to manipulation. In particular, two issues with this measure will need further development over time:

  • How to calculate the weighting: there are actually two issues here. One is whether to measure the importance of a belief to the system in terms of the number of beliefs that directly depend on it as a premise (equal to the number of arrows of logic flowing from it in a Belief Map), or the percentage of total beliefs that directly depend on it as a premise. Another issue is whether to also consider those beliefs that indirectly depend on it as a premise. This would seem to make sense. For instance, imagine a belief A that justifies only one other belief B, but where belief B justifies many other beliefs, and compare this to a belief C that justifies only one other belief D, where this belief D only justifies one other belief. We would say that belief A is more important than belief C. Perhaps the ideal solution would be to take the weighted average of the beliefs that depend on it as a premise, where the weights decline the further away one goes (see the sketch after this list).
  • How to penalise belief systems that provide no explanation of a fact: in the CQ measure mentioned above, it is assumed that explanations are given for all facts. But if belief systems are competing to get the highest CQ score and they cannot provide a coherent explanation for a particular fact, they have an incentive not to provide any explanation at all. Probably the best response is to say that if a belief system provides an explanation for more facts than another, then it automatically wins the debate irrespective of their CQs. This gives every belief system an incentive to try to provide an explanation, even if its justification has low coherence. If no side in a debate can coherently explain a fact, the sides can always jointly agree to remove it from the facts to be explained.
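Both points can be sketched in code. The decay factor, the graph encoding and the tie-break rule below are illustrative assumptions of mine, not settled parts of the method:

```python
def importance(belief, supports, decay=0.5, depth=1, seen=None):
    """Weight a belief by the beliefs that depend on it as a premise,
    directly or indirectly, with weights declining by `decay` per step.
    `supports` maps each belief to the beliefs it directly justifies
    (the arrows of logic flowing from it in a Belief Map)."""
    if seen is None:
        seen = {belief}
    score = 0.0
    for dependent in supports.get(belief, []):
        if dependent in seen:
            continue  # guard against cycles in the map
        seen.add(dependent)
        score += decay ** depth  # nearer dependents count for more
        score += importance(dependent, supports, decay, depth + 1, seen)
    return score

def judge_debate(side_a, side_b):
    """Each side is a (facts_explained, cq) pair. The side explaining
    more facts wins outright; CQ only decides between equal coverage."""
    if side_a[0] != side_b[0]:
        return "A" if side_a[0] > side_b[0] else "B"
    return "A" if side_a[1] > side_b[1] else "B" if side_b[1] > side_a[1] else "tie"

# Belief A justifies B, which justifies four beliefs; C justifies only D,
# which justifies one further belief. A comes out as more important than C.
supports = {"A": ["B"], "B": ["E", "F", "G", "H"], "C": ["D"], "D": ["I"]}
print(importance("A", supports))  # 1.5
print(importance("C", supports))  # 0.75
print(judge_debate((5, 80.0), (4, 95.0)))  # "A": explaining more facts wins
```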

Given the complexity of calculating CQ, it would need to be done by a computer. In fact, it would be quite straightforward to add this feature to the Visual Concept software that is used for creating Belief Maps.

If this makes sense to you and you want a tool to help you to practically apply it, then go to Belief System Analysis.

1 comment:

Strahan Spencer said...

What follows is a copy of a discussion by email that I had with a friend Andrew back in 2004, which raises some key potential objections. The key point that Andrew is making is that a coherence test for the truth is no guarantee of finding the 'ultimate truth' about reality. I think he is right, and the definition of truth that I am suggesting on this website is based on the practical assumption that 'ultimate truth' is not possible (see post 'What is Truth?').

ANDREW: 3 problems with using coherence as a test for the truth:
1. You may get many equally coherent systems, especially if there are not very many coherent beliefs.
2. It would be easy to come up with fictional systems that were as coherent as, if not more coherent than, the world's established religions.
3. Coherence is used by our minds because it was useful for our survival in the past. This doesn't mean that it is useful for us now.

STRAHAN: To respond to each of these points:
1. This in itself would be a useful result, which would mean that each system would have to be more tolerant of other systems and not be able to claim a 'monopoly of truth'. The main basis for choosing between different systems would then be according to one's psychological needs or 'taste'.
2. Again, it would be useful to test this out, although I suspect that it might be harder than one thinks. There could be a competition to come up with the most coherent system.
3. Since coherence is fundamental to human thought, it cannot be questioned: any attack on it relies on coherence for its own force of argument. That is, Andrew believes in evolutionary psychology because he finds it coherent, and his argument will only have persuasive force for me if it is able to coherently explain beliefs which we share.

ANDREW: I think you're doing a disservice to my third argument. The crux of my objection is that coherence doesn't equal truth. If you succeed then what you'll discover is what belief system is a best fit to our minds, and not necessarily which one best describes the ultimate reality of the universe.

My argument is that our minds evolved to process information in ways that happened to be useful to us in surviving the arbitrary conditions of our evolution - had we evolved under different conditions, our minds would be constructed differently, we would see the world in different ways, and quite possibly (although we can obviously never know this for certain) different things would strike us as "coherent".

This is also why I talked about the possibility of having equally coherent but contradictory belief systems - it seems plausible to me that something might appear to the human mind as coherent and still be completely wrong.

I think what you'll be doing is investigating something interesting about human psychology and promoting more efficient debate. These are both very worthy goals, and if they're enough to satisfy you then that's great, but I don't think it can offer us any definitive answers to life's ultimate questions, if that's really what you're aiming for.

STRAHAN: The way you have just described your argument is how I understood it but didn't write it down clearly enough. My response is therefore the same, which is that your critique is relying on coherence in order to make its case. Why should I accept your argument from evolutionary psychology when it is just a theory? You will only convince me of it if it provides a coherent account of my beliefs.

What do you mean by a 'definitive answer to life's ultimate questions'? I guess in addition to promoting efficient debate, I would like to think that over time humanity might converge upon a single most coherent set of answers that would be definitive in the sense of not being possible to improve upon in terms of coherence. A sentence is 'true' if it 'corresponds to reality'. Since we are not directly connected to 'reality' but instead experience it indirectly through our senses, we can never check whether this correspondence is really there or not. All we can do is look at the evidence of our senses and construct the most coherent theory of what is behind them.

According to your viewpoint our minds are not equipped to deal with life's ultimate questions, and so we cannot hope to answer them. If this theory is part of the most coherent explanation of the evidence, then humanity will converge over time to believe it (and agree with you) using my suggested method (assuming that it comes with beliefs which can provide meaning to people's lives). Then the 'definitive answer to life's ultimate questions' will be that we cannot ever know!

ANDREW: I'm still not clear on whether or not you accept my argument that the most coherent belief system will not necessarily be the best fit to reality. You seem to accept it when you say we are not directly connected to reality, but then you imply that you would only accept this argument if it itself proved to be part of the most coherent belief system. This wouldn't be enough, as one could imagine that the most coherent belief system might be one that excludes any possibility that it might be wrong, and that it would nonetheless be wrong to do so!

Also I struggle to see what evidence could help us to assess the argument that our minds may be too inherently limited to understand life's ultimate questions - clearly we can't know what we can't know.

STRAHAN: Let me try and get my brain back in gear to answer this! I agree with you that I was being inconsistent in moving from an uncontroversial definition of truth as 'correspondence with reality' that can be treated as a 'common belief', to a more substantial theory of truth that fleshes out what is meant by the terms 'reality' and 'corresponds' and which different belief systems will debate over. For instance, some Hindus argue that there is no reality behind our senses, since reality is created by the mind.

To argue that the most coherent belief system will not necessarily be the best fit to reality requires adopting such a theory of truth, and so to remain neutral I will be agnostic about it.

Justification for the belief that our minds are too inherently limited (which I will call the 'limitations' belief) could come from two sources:
a) Common beliefs (shared across belief systems) that the limitations belief explains e.g. if after another 200 years scientists continue to be unable to come up with a 'theory of everything', the limitations belief might be a good explanation.
b) Beliefs within a belief system which explain the limitations belief, where the belief system is the most coherent e.g. if a form of atheism including evolutionary psychology turns out to be the most coherent explanation of common beliefs, then it would make the limitations belief more likely than not to be true.

Hope that's coherent enough!

ANDREW: I still get the feeling we're talking at cross purposes. You're saying that a belief in "limitations" can be justified only if it coherently explains other beliefs or is part of a coherent belief system. I'm saying that EVEN IF "limitations" is not coherent, we can have NO WAY OF KNOWING if that is because our minds are genuinely not limited or because they're too limited to understand how limited they are!

I don't think coherence has anything to say about limitations. Limitations will always be an imponderable, regardless of how confident we are in our judgements.

I'm trying to think of some illustrations -

- 500 years ago, the most coherent belief system would presumably have held that the sun revolves around the earth. The idea that "our current state of knowledge is too limited to determine whether the earth revolves around the sun or vice versa" might well have been dismissed as incoherent at the time. But it would have been right. How can we be more sure of ourselves than the intelligentsia of 500 years ago?

- much of the pathos in Watership Down comes from the descriptions of rabbits trying to understand things about their world that we, with our human minds, understand that the rabbit mind cannot ever grasp. The most coherent worldview the rabbits could come up with could well have eschewed a "limitations" belief, but we can see it would have been wrong to do so. How can we know that we aren't like the rabbits?

- because moles evolved underground, they can't see well. Suppose a group of moles get together and try to understand some phenomena which can only be properly understood with a sense of sight. The moles come up with ingenious explanations that seem perfectly coherent to them, but are doomed to be false because they don't have sight. How do we know we aren't also, like moles, missing some vital sense?
