Monday, 30 May 2016

LACUNA DIVE

Bruce Bueno de Mesquita, Predictioneer, Bodley Head, London 2009



Predictioneer was highly recommended by one of my correspondents as a ‘must read’ – and so read it I did.
And I was astounded right from the start, when reading in ‘Game Theory 101’:

“Bayes’ Theorem provides a way to calculate how people digest new information. It assumes that everyone uses such information to check whether what they believe is consistent with their new knowledge … in response to new information that reinforces or contradicts what we thought was true. In that way, the theorem, and the game theorists who rely on it, view beliefs as malleable rather than as unalterable biases lurking in a person’s head.

“In real life there are plenty of incentives for others (and for us) to lie… Therefore, to predict the future we have to reflect on when people are likely to lie and when they are most likely to tell the truth. In engineering the future, our task is to find the right incentives so that people tell the truth, or so that, when it helps our cause, they believe our lies.”
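
For anyone as unacquainted with the machinery as I then was: the update rule itself fits in a few lines. A minimal sketch in Python – the prior and the two likelihoods are made-up numbers for illustration, not anything from the book:

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        # Posterior P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
        evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
        return p_e_given_h * prior / evidence

    # Hypothetical numbers: we believe a source is honest with prior 0.7;
    # an honest source would make this claim with probability 0.9,
    # a dishonest one with probability 0.4. Hearing the claim updates us to:
    print(bayes_update(0.7, 0.9, 0.4))  # 0.84

So hearing the claim shifts our belief in the source’s honesty from 0.7 to 0.84 – beliefs as malleable, exactly as de Mesquita says.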

Definitely a book for me, I thought – great lacunae in my knowledge here.
So I read it cover to cover in one day (and a TYGER night).

And then a single line on page 215 hit me head-on with full mental tsunami power:

“Let’s take a look at the inconvenient truth that won Al Gore the Nobel Peace Prize.”

I’m surely not the only one who remembers that a UK court ruled at the time that Al Gore had been so ‘economical with the truth’ that his film could be shown in schools only when accompanied by guidance notes correcting its errors.

In my opinion – and not only mine – the whole fable of man-made global warming (AGW) is not just ‘irrational’ but amounts to no less than the biggest political and intellectual fraud ever [1] [2] [3].

And there I was, thinking myself on terra firma with my smatterings of thermodynamics, armed with Thomas Paine’s advice on sources of ‘truth’ in his Age of Reason [4]:

“The Creation speaketh an universal language, independently of human speech or human language, multiplied and various as they may be. It is an ever-existing original, which every man can read. It cannot be forged; it cannot be counterfeited; it cannot be lost; it cannot be altered; it cannot be suppressed. It does not depend upon the will of man whether it shall be published or not; it publishes itself from one end of the earth to the other;”

all further supported by Sir Karl Popper, who deserves special mention here if only by reference to these five tokens from his legacy: from The Logic of Scientific Discovery [1934] and The Open Society and its Enemies [1945], through Conjectures and Refutations [1963], to the latest and most important summaries, The Lessons of this Century [1997] and All Life is Problem Solving [1994, 1999] [4].

And then recalling Bruce Bueno de Mesquita:

“In real life there are plenty of incentives for others (and for us) to lie… Therefore, to predict the future we have to reflect on when people are likely to lie and when they are most likely to tell the truth. In engineering the future (1), our task is to find the right incentives so that people tell the truth, or so that, when it helps our cause, they believe our lies.”
(1) and not least the future of energy supplies.

So here I found this gaping black hole – the lacuna of my Bayesian ignorance – which needed diving into. I started from Meinong’s base tenet – truth is only a human construct, but facts are eternal – and remembered again that PANTA RHEI was written in big letters above the entrance portal of my erstwhile alma mater, the Technical University of Munich (TH in 1954), translating in the 1960s into “No man can step into the same river twice”. And I remembered the current definition of information, I = log(1/p), where p is the probability of the event in question occurring – a definition that inevitably leads to inquisitions, autos-da-fé and worse whenever the powers that be claim that p amounts to 100%, for at p = 1 the information content is log(1) = 0: a proclaimed certainty carries no information at all.
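
A minimal sketch of that definition in Python (taking the base-2 logarithm, so the unit is bits – an assumption on my part, since any base would do):

    import math

    def surprisal(p):
        # Information content I = log2(1/p), in bits, of an event with probability p
        if not 0.0 < p <= 1.0:
            raise ValueError("p must be in (0, 1]")
        return math.log2(1.0 / p)

    print(surprisal(0.5))   # 1.0 bit: a fair coin flip
    print(surprisal(0.01))  # ~6.64 bits: a rare event is highly informative
    print(surprisal(1.0))   # 0.0 bits: a claimed certainty tells us nothing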

Still diving and still on the way down, I discovered two things to start with:

An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem,

where Luke Muehlhauser ends by quoting Yudkowsky further, at [5]:

“The Bayesian revolution in the sciences is fuelled, not only by more and more cognitive scientists suddenly noticing that mental phenomena have Bayesian structure in them; not only by scientists in every field learning to judge their statistical methods by comparison with the Bayesian method; but also by the idea that science itself is a special case of Bayes’ Theorem; experimental evidence is Bayesian evidence.  The Bayesian revolutionaries hold that when you perform an experiment and get evidence that “confirms” or “disconfirms” your theory, this confirmation and disconfirmation is governed by the Bayesian rules.  For example, you have to take into account, not only whether your theory predicts the phenomenon, but whether other possible explanations also predict the phenomenon.

“Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism – this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if p(X|A) ~ 1 – if the theory makes a definite prediction – then observing ~X very strongly falsifies A.  On the other hand, if p(X|A) ~ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that p(X|B) ~ 1, in which case observing X doesn’t favor A over B.  For observing X to definitely confirm A, we would have to know, not that p(X|A) ~ 1, but that p(X|~A) ~ 0, which is something that we can’t know because we can’t range over all possible alternative explanations.  For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.
You can even formalize Popper’s philosophy mathematically.  The likelihood ratio for X, p(X|A)/p(X|~A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence.  Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, p(X|~A) - there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.  That’s the hidden gotcha that toppled Newton’s theory of gravity.  So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence.
On the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory.  If p(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal.  For example, if p(Y|A) is 0.0001%, and p(Y|~A) is 1%, then the likelihood ratio p(Y|A)/p(Y|~A) will be 1:10000.  -40 decibels of evidence!  Or flipping the likelihood ratio, if p(Y|A) is very small, then p(Y|~A)/p(Y|A) will be very large, meaning that observing Y greatly favours ~A over A.  Falsification is much stronger than confirmation.  This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X.  This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.

“Similarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ~X would have disconfirmed the theory to some extent.  If you try to interpret both X and ~X as “confirming” the theory, the Bayesian rules say this is impossible!  To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory.  On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect.  Bayes’ Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.

“So we find that many phenomena in the cognitive sciences, plus the statistical methods used by scientists, plus the scientific method itself, are all turning out to be special cases of Bayes’ Theorem.  Hence the Bayesian revolution.”
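
To check the arithmetic of that passage for myself – a minimal sketch in Python, assuming that ‘decibels of evidence’ means 10·log10 of the likelihood ratio, as Yudkowsky’s own 1:10000 = -40 dB example implies:

    import math

    def decibels_of_evidence(p_y_given_a, p_y_given_not_a):
        # Evidence for A from observing Y: 10 * log10 of the likelihood ratio
        # p(Y|A) / p(Y|~A); negative values weigh against A.
        return 10.0 * math.log10(p_y_given_a / p_y_given_not_a)

    # Yudkowsky's numbers: p(Y|A) = 0.0001% and p(Y|~A) = 1% give a
    # likelihood ratio of 1:10000, i.e. -40 decibels of evidence.
    print(decibels_of_evidence(0.000001, 0.01))  # -40.0

    # The asymmetry Popper intuited: confirmation is capped, because the
    # theory cannot control p(Y|~A), while a failed definite prediction
    # can drive the ratio arbitrarily low.
    print(decibels_of_evidence(1.0, 0.5))    # ~+3.0: a successful prediction
    print(decibels_of_evidence(0.001, 0.5))  # ~-27.0: a failed one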

I have to come up for air now before diving farther down again – and learning what everyone should at least have been acquainted with before leaving school, any school. But, however inadequately, I needed to say and post this a.s.a.p.

Meanwhile, sincere thanks to Werner, Philip and Bruce.

oooOOOooo