Category Archives: Lateral thoughts

Thoughts on other matters.

Bertrand Russell and the perils of forecasting

In Unpopular Essays (1950), Russell sets up the following trichotomy as a forecast to be realised before the end of the 20th century:

a) The end of human, and possibly all other, life;
b) A collapse in human numbers and a return to barbarism;
c) A unification of the world under a single government.

He prefaced these options by suggesting that something unforeseeable might also happen. And he rounded them off by suggesting that what he could say without hesitation was that humanity could not possibly continue as it was.

Ignoring the implicit option d) of something completely unforeseeable (which makes this not a trichotomy but whatever a four-option scenario is called), Russell has surely created a fallacious (and lesser-spotted) false trichotomy. For certainly none of his options came to pass by the year 2000, and arguably humanity had largely carried on as it was.

Russell has clearly buried an IED in his front lawn with this argument, to be trodden on at a later date. But forecasting is a perilous business, and superior intellect is no guarantee of accuracy. He was not the first, nor will he be the last, to blunder in this regard.

More surprising perhaps is the curious argument a few pages on. Here Russell argues that either Soviet communism or American capitalism will come to dominate the world. He expresses a preference for the latter, but backs his commitment with a very curious notion. He does not prefer Americanism because capitalism is inherently better than communism. Rather, he prefers it because of the respect Americanism affords ‘freedom of thought, freedom of inquiry, freedom of discussion, and humane feeling’, whereas the Soviet outlook values none of these things.
But surely there is a more serious informal fallacy at work here. Russell presumes that these freedoms have nothing whatsoever to do with capitalism, and that their absence has nothing to do with communism. He tacitly suggests that were America to adopt communism, these freedoms would continue to exist. This is a curious political philosophy that surely only a theoretical Marxist could reasonably argue. The evidence for liberal communist states is thin on the ground; for illiberal capitalist states we have the recent example of China. But on the whole, to presume the complete separability of capitalism and freedom of thought, inquiry and speech, whilst also asserting the compatibility of these freedoms with Soviet-style communism, seems more than just contrarian; it is surely to be wilfully oblivious to a mountain of evidence.
Unpopular Essays is an excellent collection, but this particular essay is a bit of a turkey and easily the most dated of the anthology.



Does Occam’s razor give too close a shave?

Occam’s razor is a reasonably well-known principle of methodology. The idea is to prefer parsimony to complexity when choosing between competing hypotheses of equal explanatory power. This, intuitively, seems a sensible idea. Occam’s razor has been re-packaged and re-phrased many times, whether as the lex parsimoniae or as Einstein’s recommendation to make things as simple as possible, but no simpler.
All sound advice. But note that Occam’s razor is not a law. The lex parsimoniae is only a law in the same sense that we talk of a Law of Averages; there is no actual law to speak of. Occam’s razor is more of a principle or a recommendation.
But there is a danger that this is forgotten and the negative side to the razor is overlooked. For there is a fairly obvious flaw with this idea if taken to extremes, namely the risk of rejecting an accurate, complex hypothesis in favour of a simpler hypothesis, which is also correct but less powerful.
For example, imagine Newton had proposed Einstein’s theory of general relativity instead of his mechanical, gravitational theory. Suppose he had explained general relativity precisely as Einstein had done and then had shown how in a particular, localised arrangement, the Newtonian equations of motion ‘fell out’ and that therefore he had a hypothesis for the motion of particles.
Now a devotee of Occam might suggest that a more parsimonious, yet equally powerful, explanation of the simple motion of particles is given by the Newtonian equations of motion alone. All the additional theory of relativity is completely untestable (in the late 1600s), so we should keep it simple and reject it for now. And herein lies the problem: we have just cut away a far more powerful and (presumably) more accurate theory with our razor.
My point is that parsimony is desirable, but it is not a logical demand; it is a normative preference. Efficiency and simplicity we naturally covet, but we must not be beholden to them. The world is complex, and much of the simple stuff has already been explained with simple theories. It is not unreasonable to think that more complex theories might be necessary in future to augment our knowledge, and an over-adherence to Occam might hold us back, by its embedded tendency to discard complex yet helpful theories.


On debugging software and Popper’s philosophy of science

Software containing bugs often produces undesirable results. When the a posteriori output from a programme does not meet our a priori expectations, either the programme logic is flawed, or the logic is fine but the programme design was flawed.
When we test a programme, we are effectively collecting empirical data from which we make inductive propositions about the programme. So, if the erroneous output exhibits a certain pattern, we might infer a rule that could be generating this data. This is inductive science: moving from a limited set of observations to a general principle or law. And inductive science is, philosophically speaking, not uncontroversial. Induction is subject to the risk of ‘black swan’ events; no matter how many examples we find in support of a supposition (“All swans are white”), induction alone can never prove its objective truth.
However, Popper noted that we can apply deductive reasoning to the results of induction. So when our supposition that “All swans are white” is contradicted by the discovery of a single black swan, we may deduce that “Not all swans are white”. This move has maximal justification.
When we debug code with known errors, a good procedure is to create test conditions that enable us to apply this deductive technique and thus isolate the faulty lines of logic. We induce a hypothesis such as “All inputs {A} when processed by logic {L1,L2..LN} produce output {B}”, when we expected to see output {C}. We induce this from the output we can see in testing; and remember, it is just an unproven hypothesis at this point. Now, if we can produce output {B*} from the same inputs and logic, we can deduce something far stronger: a concrete proposition that “Not all inputs {A} when processed by logic {L1,L2..LN} produce output {B}”, or its logical counterpart “Inputs {A} when processed…can produce {B} OR {B*}”.
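The induce-then-deduce loop can be sketched in a few lines of code. This is a minimal illustration only; the function `sign` and its planted bug are hypothetical, invented for the sketch. We induce a universal hypothesis from a limited sample of test outputs, then refute it deductively with a single counterexample.

```python
# A sketch of the inductive/deductive debugging loop, using a
# hypothetical function `sign` with a deliberately planted bug.

def sign(x):
    """Intended: return -1, 0 or 1 matching the sign of x."""
    if x > 0:
        return 1
    if x == 0:
        return 0
    return 1  # BUG: should be -1 for negative inputs

# Induction: from a limited sample we infer the 'law'
# "sign(x) always matches the sign of x" -- all swans so far are white.
sample = [0, 1, 7, 300]
assert all(sign(x) == (x > 0) - (x < 0) for x in sample)

# Deduction: one black swan refutes the universal hypothesis outright.
counterexample = next(x for x in range(-5, 6)
                      if sign(x) != (x > 0) - (x < 0))
print("Not all inputs behave as hypothesised; counterexample:", counterexample)
```

A single failing input is enough: from it we deduce, with maximal justification, that “not all inputs produce the expected output”, and the search for the faulty line can narrow accordingly.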
This is a powerful result; moving from supposition to fact, by deduction. And software debugging is particularly well-placed to benefit from its application.


On the productive use of randomised behaviour

It is boredom that drives Luke Rhinehart (Rhinehart, 1971) to act on the outcome of dice rolls in The Dice Man. He assigns actions to each side of a die which he then casts. His behaviour therefore is randomised in so far as it is not predictable in advance.
The cultural impact of The Dice Man was largely due to the shockingly violent and immoral behaviour in which Rhinehart engaged with inadequate justification. The randomness he has purposely introduced makes his actions impossible to justify; from one perspective they are de facto meaningless and incomprehensible. This is shocking because it is not ‘normal’. As with any senseless act, often the product of a crazed mind, it is our inability to ascribe meaning to it, or make sense of it, that disturbs us.
Part of what is unsettling about The Dice Man is the alien nature of the decision-making process Rhinehart employs. It is counter to our rationality to make choices on the basis of pure chance. This is not how humans work; we think things through and act for sensible and logical reasons. But do we and should we?

Putting evolution to work
Evolution has been described as design without a designer. An intelligent God would simply ‘build’ persons ready-made to a template. But evolution has no blueprint or end-product in mind as it ‘designs’. It is a process of trial and error; each model that rolls off the production line is tested to destruction and its offspring (if any) are partly modified by random means.
A child will inherit certain characteristics from its forebears, but in a manner that is deliberately randomised. The nature of the randomness (presumably itself the consequence of an evolutionary process) rewards and preserves certain qualities but allows scope enough for new and dramatically divergent qualities to emerge. In other words, our offspring will inherit a great many of our qualities in like proportions, but may still exhibit characteristics that are harder to attribute simply to either parent.
This mechanism is, in effect, employed by Nature as an efficient search strategy when seeking a ‘well-designed’ person, i.e. a person fit for the purpose of surviving in their environment. And the use of randomness in this regard is key.
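As a rough illustration of this search-by-randomised-inheritance (the fitness function and every parameter here are invented for the sketch, not drawn from biology), each ‘child’ inherits its parent’s trait plus a small random perturbation, and selection keeps whichever design fares better in the environment:

```python
# Illustrative sketch: evolution as randomised search.
# The fitness landscape and parameters are assumed, purely for demonstration.
import random

random.seed(0)

def fitness(trait):
    # Hypothetical environment: fitness peaks when trait == 10.
    return -(trait - 10) ** 2

parent = 0.0
for generation in range(200):
    child = parent + random.gauss(0, 1)   # randomised inheritance
    if fitness(child) >= fitness(parent):
        parent = child                    # the better-adapted design survives
print(round(parent, 1))  # drifts toward the optimum near 10
```

No blueprint is consulted at any point; repeated random variation plus selection is enough to home in on a ‘well-designed’ result.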

Global and local, minima and maxima
Imagine trying to locate the highest mountain in a range that one can only explore on foot. To do this, one might employ a simple strategy such as:
Strategy 1: “I shall walk forward five steps and review. If I have ascended at all, I shall repeat the strategy. If I have not ascended or descended, I shall turn to face left and repeat the strategy. If I have descended, I shall turn 180 degrees and repeat the strategy.”
Something like this might lead one to the summit of a high local peak. It will effectively interpret any ascent as good news. If we move away from the local peak, we turn around and head back up. Now this kind of search strategy (which is deterministic and not in any way randomised) may be successful at finding local peaks, but as soon as a local summit is reached, it marks an end to our search; thereafter any movement away from the summit is immediately reversed, as though we are tethered to the apex.
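Strategy 1 amounts to a deterministic hill climb. A minimal sketch (the two-peak landscape below is invented for illustration): the walker always steps toward higher ground and halts at the first summit it reaches, however modest.

```python
# Sketch of Strategy 1 as a deterministic hill climb.
# The landscape is assumed: a low local summit near x=2,
# and the true (global) summit near x=8.

def height(x):
    return max(5 - (x - 2) ** 2, 9 - 0.5 * (x - 8) ** 2)

def strategy_1(x, step=1):
    while True:
        if height(x + step) > height(x):
            x += step
        elif height(x - step) > height(x):
            x -= step
        else:
            return x  # tethered to the nearest apex

print(strategy_1(0))  # starting near the small peak, we stop at it (x=2)
```

Started at x=0, the climber reaches x=2 (height 5) and stays there forever, even though a summit of height 9 exists at x=8; no sequence of purely ascending steps can reach it.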
This strategy, I contend, is a rough approximation of how people ‘rationally’ live. With a goal in mind, we employ a reasonable strategy that we can test along the way, a strategy that in advance seems likely to lead to some sort of success. This strategy will, by design, lead to a local maximum and turn away from local minima.
But for an Evolutionary designer, this would not be enough. Because this strategy is likely to be sub-optimal in the long term. After all, we can be certain that this strategy will be an evolutionary dead-end. Once we reach a summit of any sort, we have no means of escape and a potentially better outcome can never be achieved.
In contrast, evolution allows such strategies to be subject to random alteration. So imagine now that the strategy above becomes:
Strategy 2: “I shall walk forward five steps and review. If I have ascended at all, I shall repeat the strategy. If I have not ascended or descended, I shall turn to face left and repeat the strategy. If I have descended, I shall continue for a random number of steps up to a total of five, turn about a random number of degrees and then repeat the strategy.”
This strategy still favours ascent. But it no longer sees descent as an evil to be avoided at all costs. The introduction of random behaviour allows for an infinite array of outcomes; this strategy will undoubtedly reach different, and higher, local maxima in time, and will probably reach the global maximum. It may forgo notable local maxima in the short term and spend long periods in local minima. But the possibility of better outcomes is now available, and the evolutionary dead-end has been avoided.
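Strategy 2 might be sketched as follows (again with an invented landscape and arbitrary parameters, chosen only for illustration): the walker still favours ascent, but descents are sometimes accepted at random, so it can escape the local summit that trapped Strategy 1.

```python
# Sketch of Strategy 2: a randomised hill climb that tolerates descent.
# Landscape and all parameters (step sizes, acceptance rate) are assumed.
import random

random.seed(1)

def height(x):
    # Same assumed terrain: local summit near x=2, global summit near x=8.
    return max(5 - (x - 2) ** 2, 9 - 0.5 * (x - 8) ** 2)

def strategy_2(x, steps=500):
    best = x
    for _ in range(steps):
        move = random.choice([-1, 1]) * random.randint(1, 5)
        # Accept every ascent, and a random fraction of descents too.
        if height(x + move) >= height(x) or random.random() < 0.3:
            x += move
        best = max(best, x, key=height)
    return best

print(round(height(strategy_2(0)), 1))
```

Unlike Strategy 1, this walker routinely leaves the small peak at x=2 and, given enough steps, finds higher ground near x=8; the price is time spent in valleys along the way.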

Using random behaviour to maximise outcomes
It is striking how little we allow our behaviour to follow even a mildly random path. If we intend to deploy a new marketing strategy, it will typically be planned in every detail, with nothing (literally) left to chance. And yet we could employ evolutionary techniques by allowing our strategies to adapt and modify randomly. This might mean knowingly altering a ‘successful’ strategy for the worse. And herein lies the rub: this creates a tension with our rational faculties. Why change a winning formula, especially when we can reason that the change will be detrimental? And why persist with a losing formula that has no obvious reasonable chance of success? The justification is that, by analogy, this is the mechanism employed by Nature for successful evolutionary design. Randomising variation between generations means that winning formulae can indeed be reversed and losing formulae made persistent. But this is the route to global maxima and to avoiding local dead-ends.
The question then really pertains to the appropriateness of the analogy. Our marketing strategy may only be expected to run for a few months in one or two locations. It would be undoubtedly foolish to choose a strategy that our cognitive faculties suggest will fail in all circumstances in this time and in these places. But if the backdrop is amenable, the duration is long enough, the number of instances adequate, then randomising the strategy to a certain extent is not merely preferable, but is likely to be optimal. Because Nature says so.
