New Scientist
March 1993
I write science fiction novels and mainstream novels. And I have a confession to make...because, having been asked to write about the science in my science fiction, I came to the embarrassing conclusion that there is probably more science in the mainstream books.
Well, more reliable science, anyway. The technology in the mainstream novels at least exists, and even when it’s a slightly off-the-wall pastiche of the scientific method at work, the results tend to be in accord with our present understanding of reality (for example, a character in a recent novel thought that just by humming he could make televisions malfunction, until he realised it was the vibrations he was setting up in his own eyeball that – by approximating the screen’s refresh rate – were producing the wavering effect he observed).
Future science is more problematic. As Arthur C Clarke famously observed, any sufficiently advanced technology will appear indistinguishable from magic to those further down the ladder of scientific understanding. But while SF writers can exploit the truth of this statement up to a point, they still have to pay at least lip service to the relevant physical laws, and the most irksome of those is hinted at, appropriately enough, in Clarke’s middle initial.
Like a lot of SF authors past, present and probably future, when I sit down to write I routinely tear up one of the most definite laws of physics: the one which states that c, the speed of light in a vacuum, is an absolute limit. Getting around the unhappy disparity between the mind-boggling size of the Universe – or even just this one Galaxy – and the comparatively ponderous maximum pace imposed upon anything attempting to travel from one part of it to another has long been one of the most awkward problems confronting the average skiffy author wishing to indulge in the sort of SF (subset, space opera) Brian Aldiss memorably termed Wide Screen Baroque. The plain fact is that something feels wrong; there’s a mismatch somewhere. At first glance it looks as if c was set too low when the physical laws banged into existence (assuming that’s what happened). On reflection, though, the real problem is probably just that as a species we don’t happen to live very long, so that the absolute minimum travel time of four years to our nearest neighbouring star (never mind the round trip or realistic speeds) seems an appallingly large fraction of one of our lifetimes.
It’s as though, product of our little globe as we are, we can accept the kind of limits implicit in our proportional relationship to the dimensions of Earth itself, but no more, regardless of the expanded context we may be considering. Certainly, when I was doing my own bit of physical-law revising for my first three SF books, I deliberately set the near-theoretical limit of interstellar velocity at a value which meant it would take roughly as long for a starship to travel to the far side of the galaxy and back as it used to take a sailing ship to circumnavigate the Earth; it was a value that felt right, and – in fiction – that’s often all that matters.
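To see roughly what that choice implies – the figures here are illustrative assumptions of mine, not numbers taken from the books – suppose the Galaxy is about 100,000 light years across and a sailing circumnavigation took around two years. The trip to the far side and back then implies a cruising speed of

\[
v \;\approx\; \frac{2 \times 100\,000\ \text{light years}}{2\ \text{years}} \;=\; 100\,000\,c
\]

In other words, a speed limit that “felt right” turns out to be of the order of a hundred thousand times lightspeed.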
There are – again, in fiction – lots of ways of achieving this kind of technically ludicrous velocity. One old favourite is hyperspace, where the assumption is usually that the value of c is much higher (though one enterprisingly ironic SF author wrote a story about a research programme to access hyperspace which culminated in the dismaying discovery that light there travels more slowly than in our own space-time fabric). Another is warp-drive, which implies it is possible to distort that fabric in such a way as to fold distant points together temporarily, making it possible to journey from one to the other without having to cross the intervening space. And there is the more recently fashionable and – as these things go – slightly less scientifically dubious idea of using singularities or wormholes to short-circuit space-time (there is also the distantly related possibility of using other universes to apparently travel within this one, rather as one might apparently travel through time by the same technique, as Marcus Chown explained in Science, New Scientist, 28 March 1992).
Of course, just admitting that there is no way round the problem might be best; facing facts after denying them for a long time often leads to a sensation of relief, and the processing power and imagination released when one is no longer spending all one’s time producing the equivalent of new epicycles can deliver surprisingly fruitful results when applied to more intrinsically productive areas of thought. An example is the scene in 2001: A Space Odyssey where the astronaut blasts his way into the airlock and – for the first and possibly only time in all filmed SF – there is no sound until the lock fills with air again. It’s brilliantly effective because Kubrick turned soundlessness from a bug into a feature.
There is also the point that – when we’re talking about Big Issues, and events and rules and the limits of physical scale – the very fact we feel that something must be the case is usually a pretty reliable indication that it isn’t; the outrageous counter-intuitiveness of quantum physics is just one example.
However, let’s not pour too much scorn on intuition. It is, after all, a vital component of the scientific method. A scientist with no imagination would find it impossible to form new hypotheses to test against results, and the intuitive leap that produces an explanation for previously inexplicable phenomena is arguably a form of applied hunch, even if, at some deep neurological level, it consists of a computer-like subconscious subroutine searching methodically through a hierarchically ordered, probably template-based series of possible explanations before presenting its results to the brain’s front office in the form of a bolt-from-the-blue brainwave... But there I go, powering up the law-and-logic-defying Extrapolation Drive and careering wildly off at lightspeed into the realms of iffy skiffy...