The next great paradigm shift: vehicle transport and pollution


I am stimulated to write on two closely linked topics, for two reasons. Firstly, there’s a great deal of noise and “demand for action” on the vexed issue of diesel vehicles,  with various proposals to ban private diesel cars from cities, or at least charge them at prohibitive rates for entering cities. Secondly, there seems to me to be a paradigm shift at play right now, which will affect how vehicle transport works in a very fundamental way – which if well understood, will allow a wholly different approach to how we move about, including what infrastructure we need to build for our future needs, how urban planning is tackled for the longer term, and what we should do to best tackle air pollution.

If we are prepared to stand back for a few moments and consider the evidence, we can see that the issue of emissions from diesel vehicles, and indeed internal combustion engines generally,  is a subset of a much bigger picture. I don’t mean to downplay the importance of air pollution. There is no question that it has a huge impact on health and quality of life, even if some of the numbers of deaths per year being quoted in the media might be challenged. I will come back to the subject later, and in detail  in Appendix 1 to this post, where I will attempt to explain the various types of emissions, their sources, and their effects.  I think it’s quite important to do that, as there is much confusion, even in the quality press, about what the problems are. If we don’t understand them properly, any action to mitigate is likely to be less effective than it might be; in the worst case action will merely be political expediency, to “be seen to be doing something”, leaving actual pollution levels largely unchecked, and causing harm/adding costs to activities which are an easy target but which aren’t the main problem.

But let me return now to the main thesis of this post, a paradigm shift in vehicle transport. To explain what I mean, let’s cast our minds back to London in the late 19th century, and the “great horse manure crisis” [1]. As in many major cities around the world, transport depended on horses, and the growth in their numbers was causing huge and seemingly intractable problems. For example, in 1894, The Times newspaper predicted… “In 50 years, every street in London will be buried under nine feet of manure.” All manner of experts were called in to devise a solution, but none was forthcoming. So things just stumbled along, worsening steadily, and the doom-mongers were having a field day. There was the smell of horse manure and urine everywhere, exacerbated by flies and the spread of disease, especially in hot weather.

What happened in practice to make the problem go away was a change of paradigm. One might say “something turned up”, but the cause of the paradigm shift was already out there, in the form of the gasoline engine; its real significance had just not been recognised.  Carl Benz had already designed and built his first gasoline engine, operating it for the first time on New Year’s eve 1879. Although it was a stationary engine, it was so successful that Benz was able to focus on his dream of using it to power a lightweight car.  In July 1886 the newspapers reported on the first public outing of the three-wheeled Benz Patent Motor Car, model No. 1.

The timing here is significant – the Times leader on the “great horse manure crisis” was written some 8 years after Benz’s motor car made its debut. The motor car at that time was regarded as no more than a curiosity, not a game changer – no-one had much idea of the impact it would later have. What eventually made the difference was affordability, and the mass market penetration which ensued. When Henry Ford started manufacturing his Model T in 1908, the motor car became cheap enough for the ordinary public to buy, and very quickly horse power was displaced by the internal combustion engine – a classic paradigm shift.

As I write, over 100 years later,  motor vehicles with petrol or diesel engines are ubiquitous, and it might be hard to imagine a world without their dominance. But come with me, as I explore what things will almost certainly look like not very far into the future – perhaps in another 10 years or so.

The paradigm shift for vehicle use

The essence of a paradigm shift is that it’s not about incremental improvement, or tinkering around the edges of an issue. It’s about a whole new way that things are done. We have seen many examples in living memory: motor vehicles replacing horses is one. They are not all necessarily a good thing – many come with huge risks and downsides, such as the atomic bomb, which changed the paradigm of international warfare. Some other examples of paradigm shifts, with their inevitable upsides and downsides, are:

  • The internet, making virtually all human knowledge ever recorded instantly available to anyone, with just a few keystrokes – what is the future for printed textbooks, and even for our libraries, now?
  • The electronic calculator: when I sat my Engineering degree final exams in 1972, I was still using log tables and a slide rule.
  • Mass passenger air travel as a means to get around the world quickly, providing ordinary citizens like me the opportunity to go to pretty much anywhere we want to on the planet.
  • The contraceptive pill, which became available in the early 1960’s, transforming the lives of women in the developed world, by allowing control of when, and if, pregnancy occurred.
  • Email and attachments, replacing handwritten letters and printed documents in the post.
  • Mobile phones, allowing near instant communication almost anywhere; it’s hard to imagine a world without them, yet the first cellphone systems appeared as recently as the early 1980’s, and the first smartphones at the beginning of the 21st century.
  • Digital photography, making the 35mm camera largely a museum piece, multiplying enormously the number of photos people take, and replacing images printed on paper with digital files, viewed on a screen. Who needs a camera any more, when you have a great app on your smartphone?

You can probably come up with quite a few others without too much head scratching.

So what is the paradigm shift for vehicle use? It’s not about more car sharing, or more cycle lanes, or urban fast-tram systems, or Uber, or even replacing internal combustion cars with electric (though that will follow as a consequence). It’s not even about more working from home, taking advantage of email and video conferencing. Those are all helpful, but they are not game changers. The game changer, the paradigm shift, is driverless cars.

I am well aware of the risk of trying to be prescient about a new paradigm. Even those who have seen many paradigm shifts in their lifetimes tend to scoff at predictions of new ones, because they seem at the time to be unlikely, to the point of being completely preposterous. So please bear with me, suspend your disbelief for a few minutes, and follow where this leads.


Why driverless cars?

If we think about the paradigm for car use that we’ve lived with ever since car ownership for the general public became commonplace, and consider for example the UK situation as it is today, it involves the following characteristics:

  1. People buy cars. The car they choose is typically a compromise between various functions – a handy vehicle for getting to the shops or to work, and easy to park, cheap to run, yet big enough to use for family holidays or to carry the occasional large load, for example. There are about 31.7m cars in the UK, out of a total population of 65m of whom about 52m are aged 17 or over, ie old enough to drive. That represents about 6 cars for every 10 people old enough to drive.
  2. Cars are status symbols – people who can afford it often want their car to make a statement about their wealth/ status. It’s desirable to have a stylish, expensive, and even fast, car in one’s drive, as a demonstration of one’s affluence. Most 4×4 vehicles rarely if ever leave the tarmac, and the speed and power capability of  most cars is way in excess of what can ever be utilized. Even the number plate indicates how new a car is, so people wait to make their purchase until the new plate comes out. This was a problem in the UK when a new plate letter was issued annually, as a disproportionate number of new registrations were issued in one month, distorting manufacturing schedules and the business model for dealerships. It is less acute now in the UK with twice yearly change, but it’s still a problem.
  3. Cars are an expensive purchase. For many people, a car is the second most expensive thing they ever buy, second only to their house. Unlike a house, the value depreciates: the average new car will have lost around 60% of its value by the end of its third year.
  4. Cars are only in use a small proportion of the time; the rest of the time they’re occupying space in a garage, a driveway, on the street, or in a car park. If we assume a typical car does 10,000 miles per year at an average speed of 25mph, that’s 400 hours of use per year, out of 8,760 hours – ie 4.6%. The other 95.4% of the time, it’s lying idle, using space, and of course depreciating while not offering a counteracting benefit, other than being available for the next time it’s needed. Street parking is a cause of congestion in towns and cities, limiting the amount of road space for traffic movement. One only has to walk down a typical suburban street in London to see that there’s hardly any kerb space not taken up by parked vehicles; two-lane roads are effectively narrowed to one lane available for moving traffic. Finding a parking space, whether as a resident or a visitor, is a regular headache.
  5. Congestion is a problem in most towns and cities, and on commuter routes, especially at peak times, so journey times can be unpredictable, and a lot longer than they would be if traffic were light. Average speed achievable in central London is now 7.4mph [2] – only about twice a brisk walking pace, and well under a normal cycling speed.
  6. The growth of car use has marginalized other transport services such as scheduled buses, rendering them uneconomic except on busy city routes. This has made life more difficult for those who live in villages or rural environments, if they have no access to a car, for example if they cannot drive, or cannot afford one.
  7. The growth of car use has led to proliferation of out-of-town retail parks and supermarkets, with an associated decline of local shops and services in the high street or village. This is very convenient for the households who do have access to a car, especially those who are money-rich but time-poor, as shopping can be done outside normal business hours. However, it has a negative effect on community cohesion, and on quality of life for older or disabled people, whose lives become more isolated.
  8. Safety and accidents: the UK is one of the safest countries in the world in terms of road vehicle deaths, at 2.97 deaths per 100,000 people (2014 data) [3]. In 2015, there were 1,732 deaths from road traffic incidents; however, there were 22,137 serious injuries, and 186,209 casualties of all severities [4]. The main cause of accidents was driver error [5], characterized by (1) failure to look properly, (2) failing to judge another person’s path and/or speed, (3) being careless, reckless or in a hurry, (4) losing control, and (5) alcohol. If you’re on a bicycle, you are 17 times more likely to be killed than if you’re in a car, and 75% of all cyclist serious accidents or deaths happen in urban areas [6]. Motor vehicle congestion, inconsiderate behaviour by drivers, and the associated safety concerns, provide a significant disincentive to commuting to work by bicycle.
  9. Car insurance is expensive. According to the AA, the average annual motor insurance premium stood at £747 in 2014. For younger drivers, the cost is even more, averaging £1,743 for 17-22 year olds.
  10. Alcohol and driving do not mix. Since drink-driving laws were introduced in the UK in 1967, the taboo associated with drink driving has steadily grown. During the 1980’s, the UK saw a 50% drop in drink-driving offences. Nowadays, most people going for an evening out which involves travelling by road and drinking alcohol will arrange to use a taxi, or delegate one member of the group to be the driver and not drink. The incidence of drinking and “taking a chance” has fortunately declined, though alcohol is still the 5th highest cause of vehicle accidents [5]. Another consequence of drink-driving laws is a general decline in the viability of pubs, restaurants, sports clubs, and social clubs, especially those accessible, for all practical purposes, only by car.
  11. All-electric or electric hybrid cars are a tiny share of the market. High capital cost and range limitation are key issues which militate against them becoming more popular. We seem wedded to the internal combustion engine.
  12. Vehicle transport is responsible for significant air pollution, especially in towns and cities, causing health problems, and probably significant numbers of premature deaths.
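As a side note, the utilization and depreciation arithmetic in points 3 and 4 above is easy to reproduce. Here is a short sketch, using the illustrative figures quoted (10,000 miles/year, 25mph average speed, 60% value lost over three years):

```python
# Rough check of the utilization and depreciation figures quoted above.

HOURS_PER_YEAR = 365.25 * 24          # ~8,766 hours in an average year

miles_per_year = 10_000               # typical annual mileage (illustrative)
avg_speed_mph = 25                    # assumed average speed

hours_in_use = miles_per_year / avg_speed_mph    # 400 hours of driving
utilization = hours_in_use / HOURS_PER_YEAR      # fraction of the year in use

print(f"In use {hours_in_use:.0f} h/year = {utilization:.1%} of the time")
# → In use 400 h/year = 4.6% of the time; idle the remaining ~95%

# If a car loses ~60% of its value over three years, the equivalent
# constant annual depreciation rate r satisfies (1 - r)**3 = 0.4:
annual_rate = 1 - 0.4 ** (1 / 3)
print(f"Equivalent annual depreciation: {annual_rate:.0%}")
# → Equivalent annual depreciation: 26%
```

The exact numbers are of course sensitive to the assumptions, but the conclusion is robust: under any plausible inputs, a privately owned car spends the overwhelming majority of its life parked.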

The above 12 points characterize, pretty adequately, I hope, our modern car-owning paradigm. Now let’s consider what the situation would be like with a completely different paradigm, namely autonomous vehicles, or driverless cars. The technology already largely exists [7], but what I’d like you to imagine is so-called “level 5” or full autonomy: where cars would be moving around our roads with no need for anyone on board to be in control, or even the need for anyone on board at all, as cars moved around to the location where they were required to be used. That might seem far-fetched, and there are indeed some crunchy issues to be resolved before it becomes reality. However, none of these issues is insurmountable, and the prize, as we’ll see, is enormous.

The issues I’d suggest are the greatest hurdles to fully driverless cars are as follows:

  • Technology being adequate to fully replace the human driver.
  • The interaction between the user and the vehicle, in terms of trust, and ability to intervene if required.
  • Regulation, legislation and insurance: while autonomous vehicles would be much safer than human driven vehicles, no technology is 100% reliable, and some accidents would occur. There is an important issue requiring clarity about the allocation of responsibility.
  • Cyber security: preventing the systems from being hacked into, with nefarious intent, such as taking control of vehicles and causing them to crash or behave in a way that the users do not want.

Before attempting to comment on our prospects of solving these issues, let’s think about what the new paradigm might look like, and what it would enable. All of us who wanted to would be able to summon a vehicle to turn up reliably at the time and place we chose, and take us to the destination we wanted; the vehicle could then go away again to serve another user. That would be ideal for a single journey such as going to work; we’d simply call up another one to take us home at the end of the working day, or even have a regular set booking for journeys we undertake regularly. If we wanted to keep hold of the same vehicle for extended use over a number of days, for example, and keep some personal possessions in it, then when we arrived at a destination we could send it away if necessary to park itself, to return when we needed it. We could choose the size, passenger and luggage capacity, and comfort level we wanted, so we’d be much more likely to be using a vehicle suited to our particular needs at the time, rather than the present system of compromise. Ultimately, as the technology became commonplace and accepted as reliable, the requirement for on-board driver intervention would recede, and we wouldn’t need a driving licence: children being born now would grow up in a world where “passing one’s driving test” would be a rather specialized activity, that most would have no need to bother with.

If we wanted to go out for the evening, we could have a drink if we wanted; the breathalyzer would be obsolete. While travelling by car we could do whatever we liked: admire the scenery, read a book, watch on-board TV, sleep, catch up with work on our laptop, or whatever. There would no longer be an issue of driving while tired. People at present unable or unfit to drive, eg because they are old and infirm, or disabled,  need no longer be isolated. City centre multi-storey car parks would become largely obsolete, and towns and cities would become much more pedestrian-friendly. There would no longer be problems finding a space in the hospital car park. Ambulances could focus on  their primary purpose, rather than routine transport of patients between home and hospital; specialist vehicles suitable for wheelchairs could be  available, and turn up right to the door. The road network use would be optimized, and traffic would flow better and faster – helped by a reduction in the sclerotic effect of street parking as far fewer people would choose to own a car, and by a parallel revolution in road freight transport, which would also be autonomous, and take place at times when most other road users were asleep in bed. Pricing by time of day / congestion would also serve to smooth out peak road-use.

Electric vehicles would come into their own, as they lend themselves well to automation, and problems of range and how/when to recharge would be much more manageable. A vehicle getting low on charge could just be parked up at the next depot or off-road space, and a freshly charged one summoned; longer journeys could be staged in a planned way, not relying on a single vehicle. Airports could be sited well away from population centres, as access would be readily available to everyone, and they wouldn’t need much car parking space. We wouldn’t be stuck on a path of ever expanding our road network, as what we already have would be more than adequate; instead, we’d be optimizing it for the new paradigm. The blight of air pollution from road vehicles in towns and cities would be eliminated. I could go on, but I’m sure you’ve got the idea. The possibilities are pretty much endless.

Who would be the providers, and how would we pay for our car use? All sorts of models are possible, but one is that companies would compete, much as car hire companies compete now. Indeed, it is likely that existing car hire companies would be first in the market with their driverless fleets. Each individual could have a unique user account, charged when the service was used. Use for most people could be via an app on their smartphone.

Let’s look at how the 12 descriptors of our present paradigm which I cited above would change with the new paradigm.

  1. People buying their own cars: with a reliable driverless system in place, there would be less incentive to own a car.  There would no longer be the compromise of having something too big for everyday use so that it would be suitable, for example,  for the annual holiday. There would no longer be discussions about “who has the car today”, as any adult member of the household could have access to a car when they wanted it. The total inventory of cars would reduce markedly, because utilization would increase.
  2. Cars as status symbols: there would be an inevitable period of transition, while privately-owned cars co-existed with fleets of driverless vehicles. But as the system of driverless vehicles gained the confidence of users, and evolved to best meet their personal needs, the advantages of owning and driving a car would recede. It is likely that cars chosen on the basis of their cachet as a status symbol would wither away. Cars would therefore be much better matched to their application, rather than being too large, having speed capability far in excess of what might ever be used on the road, or 4WD where only 2WD is required.
  3. Cars being an expensive purchase: as private car ownership declined, capital cost to the consumer would transfer to capital investment by providers, and the user would experience his motoring costs as a revenue item of everyday life, like the utility bill or food shopping. Overall costs would decline, as increased utilization equates to fewer cars being needed – probably far fewer than the 6 for every 10 adults we have in the UK at present.
  4. Cars being only in use a small proportion of the time,  wasteful of resource, and occupying space the rest of the time which would often be useful if it were freed up: it is hard to predict by how much utilization would increase with driverless cars, but there is enormous scope from the average of less than 5% at present. Hire cars within the present system typically have mileages about double the average private car, but that would be a pessimistic guide: driverless cars offer much better opportunities for optimizing utilization.
  5. Congestion in towns and cities and commuter routes: reduction in congestion would be dramatic, with ultimately hardly any cars parked on the street, less traffic in towns and cities, and, with computer optimized vehicle spacing, speed, and route choice, much smoother traffic flow on commuter routes.
  6. Marginalization of bus services, and isolation of people living in villages or rural communities: bus services would largely cease to be required and would wither away, being replaced by autonomous vehicles. But the effect would be very helpful to those people who find transport services inconvenient and infrequent, and who are isolated by the present paradigm. Transport would be available to all when required, at reasonable cost.
  7. Proliferation of out-of-town retail parks and supermarkets, with an associated decline of local shops and services in the high street or village: the effect of driverless cars on this issue might not be great; however, the isolation, choice, and cost disadvantages this brings at present to those without access to a car would be swept away.
  8. Safety and accidents: elimination of the main cause of motor vehicle accidents, ie human error [5], would lead to a significant reduction in death and injury. It would also make cycling, eg for commuting to work, safer and less stressful.
  9. Insurance: ultimately, insurance cost would be built in to the cost of driverless car use, and be indifferent to the age or driving experience of the user. It would therefore cease to discriminate against younger users.
  10. Drink driving, and alcohol-related accidents, would be eliminated. We could also see an upsurge in the viability of pubs, restaurants, and sports and social clubs, especially those largely dependent on access by car.
  11. Market share of electric cars would increase enormously, with beneficial effects on pollution and noise, especially in cities. There are several reasons that the driverless paradigm would foster a far greater market share for all-electric vehicles: they lend themselves to autonomous driving, with their technical simplicity / reliability; better matching of specific function (eg distance of journey planned) to vehicle choice would favour electric vehicles, as many journeys are short; capital cost would no longer fall directly on the user, but on the service supplier, who would be able to make vehicle choice based on an overall lifetime use cost model – with high utilization the running cost per mile would assume greater importance; and range would become less of an issue, with the user able, for example,  to summon a newly charged vehicle, or to stage a longer journey, dropping off a vehicle at a charging depot  and setting off with a newly charged one. In our new paradigm of driverless vehicles, the all-electric car is king.
  12. Air pollution in towns and cities would reduce, largely because most internal combustion engines would be replaced by electric, and would cease to have the impact on quality of life, health and premature death which it does today. Another benefit would be to reduce one of the  disincentives to cycling to work.

So I hope you will by now have bought into the attractiveness of a new paradigm for vehicle transport, the fully autonomous car. But probably, like me, you have some doubts about how feasible it really is. Let’s go back to the four main problems that would need to be solved: technology being adequate to fully replace the human driver; the interaction between the user and the vehicle, in terms of trust, and ability to intervene if required; regulation, legislation and insurance; and cyber security. Of these, far and away the most important is the first one, technical feasibility. I would argue that once that is proven, accepted, and observed to operate highly reliably in practice, the issues of user/vehicle interaction, and regulation and insurance, will tend to fall away quickly. Insurance, and allocation of cause in the event of accident, would be dealt with using universal on-board camera/dash-cam and black box data recording systems. The problem of cyber security would attract the development effort required to resolve it too; systems would be designed from the outset from the point of view of robustness against cyber threats.

So, what of technical feasibility? Many of us, myself included, are already experiencing some of the benefits as early features are built in to modern production cars, such as autonomous parking, adaptive cruise control, lane assist, and automatic recognition of speed limit signage. There is already a vast amount written on the topic. One approach which I like is exemplified by the innovation foundation Nesta [7], who take an objective look at the technology, its problems and limitations, and describe the various technologies which are developing in the field, such as Light Detection And Ranging (LiDAR).  Machine “vision” and decision making are the two key areas which determine whether the technology is truly workable, and they depend in turn on machine learning algorithms. This is an area of exponential progress, as it has applications in countless fields, not just driverless cars. It is probably true that we have not yet reached the stage where they are ready to fully replace human control, but we are not far away.

If you doubt this, a good reality check is to look at what manufacturers and transport companies are doing. [8] Autonomous vehicles are no longer just about Google or Tesla/Elon Musk. In 2016, GM invested $500m in Lyft (a US transportation network company which facilitates peer-to-peer ride sharing by connecting passengers with drivers who have a car), bought self-driving technology start-up Cruise Automation for >$1bn, and announced in July 2016 that it would build its first self-driving cars for use within the Lyft fleet as self-driving taxis. In May 2016, BMW announced that they would have a fully autonomous car on the market within 5 years. Uber, which acquired autonomous truck start-up Otto for $680m, is now beginning field trials of fully self-driving taxis in Pittsburgh: CEO Travis Kalanick has said Uber’s survival depends on being first to roll out a self-driving taxi network.

Ford has announced plans to provide mobility services with fully autonomous self-driving Fords by 2021. It is a huge commitment of resource: Ford is doubling its development staff in Silicon Valley, aims to have the largest fleet of self-driving car prototypes by the end of 2016, and will triple the size of this fleet again in 2017. It has also purchased three companies related to autonomous driving technology, and a stake in Velodyne, the leading manufacturer of LiDAR technology.

The momentum is unmistakable. Driverless cars are the future. The present car driving paradigm is in its death throes, just as much as the horse and carriage was in the early 1900’s. Electric cars will be a huge part of this future. Governments and local authorities need to be alive to the trend, to avoid expensive white-elephant investments, overtaken by a world which has moved on. They need to plan actively for the sort of infrastructure that the new paradigm requires. Individual citizens can make smarter choices, for example, about where to live, where they work and how they will get there, and even whether a drive or garage is an important facility to have. Don’t be saying in 5 or 10 years’ time “If only I’d realized…”



Appendix 1: The downsides of the internal combustion engine: congestion, pollution & climate change, and inexorable increasing demand on finite fossil fuel resources

As I write, over 100 years after the internal combustion engine displaced horse-drawn transport, all major cities of the world, and developed countries with high population density, face a set of problems we all recognize. No-one worries now about our streets being buried in horse manure, but we do worry about traffic congestion and air pollution, and increasing demand for fossil fuels. We see a seemingly unstoppable rise in the desire to own and move around by car. Worldwide, the issue looks to be out of control, as the increasing wealth of developing countries adds more and more aspiring car-users to the mix. The chart [9] shows that the increase is most significant in Asia, but hardly a large town or city on the planet is not troubled by the situation.


There is a large human cost to congestion, as more man-hours are wasted in traffic jams: people have less time to spend with their families and on the activities they’d like to be doing, are sometimes late for work and often exhausted when they get home. A recent report [2] shows that even in the past 4 years, average traffic speed in central London has declined by nearly 20%. Cities struggle to manage the trade-offs between more cycle, bus, and taxi lanes, and demand for road space for ordinary cars, and with road improvement works which exacerbate congestion while they are being carried out. And an extraneous factor, the mushrooming of ordering goods on the internet, with home delivery by the ubiquitous white van, has added further stress on the system.

There is a significant business cost too: the efficiency of commercial vehicles is reduced and delivery deadlines are missed; things take longer and cost more to achieve.

Then, on top of all that, there is of course the problem of vehicle emissions, from the point of view both of greenhouse gases (GHG), and of local pollution and impact on health and quality of life. In terms of climate change, road transport contributes about 20% of total greenhouse gas (GHG) emissions across the EU, and transport is the only major sector in the EU where GHG emissions are still rising. Pollution is also a major concern for its impact on health, especially in densely populated areas. The effects of NOx (oxides of nitrogen) are highly topical in the popular media today, with demonization of diesel cars, and calls for them to be banned en bloc from cities – irrespective of whether they are old and dirty or new and relatively clean. There is little doubt that NOx poses a real threat to health, but a knee-jerk reaction is not the best way to make the situation better. There is very clearly a lack of public discernment about what constitutes pollution. Various types of pollution are often conflated and confused, not only on TV, but even in the so called “quality” press. The right approach is to really understand and quantify the sources, tackle them rationally, and to keep measuring to ensure that our actions are delivering the required improvements. So let’s look at the various types of emissions, how they arise, and what we might do about them.


Pollution, greenhouse gases, etc: what are they?

While some emissions contribute both to pollution and to global warming, and there is no absolute demarcation between the two, we can broadly say that pollution is about localized effects, and greenhouse gas (GHG) emissions / climate change are about global effects. Greenhouse gases are predominantly carbon dioxide and methane, though some other gases contribute too. A key point is that it doesn’t really matter much, in terms of anthropogenic global warming (AGW), where on the planet the GHG’s are emitted: a ton of CO2 released in London will have the same impact on global GHG levels as a ton of CO2 released in New York or Moscow or Shanghai. Another point sometimes not understood is that not all GHG’s are equal: we tend to measure them in “equivalent CO2”, because CO2 is by far the greatest contributor. But different gases have different “Global Warming Potential”: weight for weight, methane (CH4) has an effect about 21 times greater than CO2, and nitrous oxide (N2O) about 310 times greater than CO2. Some man-made gases such as Chlorofluorocarbons (CFC’s) – often used as refrigerants – are much more potent still. A common one, CFC12 (CCl2F2), has a potency about 10,900 times that of CO2. The absolute amounts released are small compared to CO2, but the effects are significant. And CFC’s have another global effect in addition to global warming: they cause destruction of the ozone layer, an important factor in filtering out harmful UV radiation, and for that reason their manufacture is now banned.
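These Global Warming Potential factors make conversion to “equivalent CO2” a simple weighted sum. Here is a minimal sketch using the GWP values quoted above; the emissions inventory itself is invented purely for illustration:

```python
# Convert a basket of greenhouse-gas emissions to CO2-equivalent tonnes,
# using the 100-year Global Warming Potential factors quoted in the text.

GWP = {
    "CO2": 1,
    "CH4": 21,         # methane
    "N2O": 310,        # nitrous oxide
    "CFC-12": 10_900,  # dichlorodifluoromethane
}

def co2_equivalent(emissions_tonnes: dict) -> float:
    """Weighted sum of emissions (tonnes) by their GWP factors."""
    return sum(GWP[gas] * t for gas, t in emissions_tonnes.items())

# Hypothetical annual emissions inventory, in tonnes:
inventory = {"CO2": 1000, "CH4": 10, "N2O": 1, "CFC-12": 0.01}
print(f"{co2_equivalent(inventory):,.0f} t CO2e")
# → 1,629 t CO2e
# Note how even a tiny CFC-12 release matters: 0.01 t × 10,900 = 109 t CO2e
```

This is why small leaks of refrigerants, or modest methane releases, can loom surprisingly large in an emissions inventory that is dominated, by mass, by CO2.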

Global warming and ozone-layer destruction might seem like distant, obscure issues to many people, and there is indeed a school of thought which still has some political traction, especially in the USA, which claims that global warming is at best a hugely exaggerated problem, and at worst is a complete hoax.  I deal with such anti-science ideas in another post [10]. Fortunately, most governments are more responsible than to subscribe to them, and at the Paris climate conference (COP21) in December 2015, 195 countries adopted the first-ever universal, legally binding global climate deal. It became international law on 4th November, 2016 [11]. I view the climate change rhetoric of Donald Trump with dismay, as his intent seems to be to set international progress back by decades. If his expressed desire to encourage a return to widespread coal-fired power generation in the USA is realized,  history will judge him as reckless, backward-looking, ignorant, and wrong, and a force for great harm to humanity. As a professional engineer and scientist, who has spent much of his career in the energy business, I am well qualified to comment, and I am doing my best to contribute to acceptance of the science and commitment to address anthropogenic global warming   – but that’s another story.

Pollution, as opposed to GHG emissions and climate change, is something most people can relate to, and few would deny that it is a real problem. It covers a wide range of phenomena, and can be very obvious, such as foul smell, or imperceptible yet damaging to health, such as tiny particulates or oxides of nitrogen. It can be naturally occurring, such as sulphurous fumes from volcanic activity, or animal or human faeces in a water course used for drinking. Pollution is especially problematic when human beings live together in large numbers in cities, as over 50% of us worldwide now do. London has a rich history of pollution: in the summer of 1858, the “great stink” occurred when hot weather exacerbated the smell of untreated sewage and industrial effluent in the Thames. The problem had been getting worse for a long time, as most of the sewage from the city was discharged untreated into the river, and was associated not just with foul smell but also with transmission of disease: three outbreaks of cholera in London prior to the “great stink” were blamed on it. The solution was a radical system of new Victorian sewers and treatment plants, which serve rather well even up to the present day.

When we talk of pollution nowadays, however, we usually mean man-made, rather than from natural processes.

(Aside: I have not focussed here on events such as the Bhopal disaster of 1984, an accidental emission of methyl isocyanate gas with a huge death toll,  or the Chernobyl nuclear reactor meltdown in Ukraine in 1986. Although horrendous, these were the results of specific accidents, rather than ongoing “chronic” emissions, and are therefore somewhat off-topic for my present purpose.)

The first law restricting industrial emissions was the UK Alkali Act of 1863, in response to the emission of hydrochloric acid gas from the now obsolete Leblanc process for production of soda ash (sodium carbonate, used primarily in glass manufacture) and caustic soda. Extremely effective it was, too: prior to the Act, acid gas emissions from alkali works in England were almost 14,000 tons per year; after it came into force, they fell to only 45 tons. The technical solution was simple and cheap, but it took an act of parliament to make it happen. History is littered with dreadful examples of deadly pollution: Minamata disease in Japan in the 1950s/60s, from industrial emission of methyl mercury over a period of 36 years, which bio-accumulated in shellfish, an important part of the local diet; the leakage of hexavalent chromium into a water course in Hinkley, California, which was the subject of the well-known film Erin Brockovich and did much to bring industrial pollution into the popular consciousness; and the scandal of lead in drinking water in Flint, Michigan, which began as recently as April 2014. All these are high-profile, dramatic cases of the effects of industrial pollution, and serve to highlight the importance of understanding what we do when our ongoing activities release chemicals into the air we breathe or the water we drink.

I can’t just let “chemicals” pass here, however, as it has acquired the status of a dirty word in some circles, as if life could somehow be “chemical-free”. This is of course nonsense, for at least two reasons. Firstly, everything, including ourselves, is composed of chemicals. We are all about 70% dihydrogen monoxide, that dangerous chemical which accounts for about 3,500 deaths per year in the USA. I refer of course to deaths from accidental drowning: dihydrogen monoxide is none other than H2O, ie water. You mightn’t be too tempted if I offered you a plate of (C6H10O5)n laced with some NaCl and CH3COOH, but as “chips with salt and vinegar” it sounds a bit more appealing. The NaCl is sodium chloride, or common salt, a compound of two elements, sodium and chlorine, which individually are very harmful, but which combined into a simple molecule are perfectly safe to ingest – provided one doesn’t overdo it. Secondly, chemicals are useful – even essential – in our everyday lives. Chlorination of potable water, making it safe to drink, is arguably the biggest single contributor to the improvement of human health over the past 100 years.

So chemicals can be benign, or deadly, and everything in between. It depends on what the chemicals are, in what quantities, and in what circumstances. Consider the infamous “great smog” of London. On 5th December 1952, a suffocating pall settled over the city, and remained for 4 days, killing over 4,000 people. The circumstances were a deadly combination of very cold and still anti-cyclonic weather, and a temperature inversion, with a layer of cold air trapped beneath a warmer layer higher up. In response to the extreme cold, Londoners stoked up their coal fires, pushing smoke into the stagnant, foggy layer from which it couldn’t escape. Smog was not a new phenomenon – it was well-known in London and other major cities – but this one was so bad that “something had to be done”. In due course the Clean Air Act of 1956 successfully addressed the problem: deadly London smog became a thing of the past, and December sunshine in the city increased by 70%. Those of us UK householders who still burn coal in open fires are required to use smokeless fuel. But the lessons of London have not been transferred across the globe. Smog is still a huge problem in Chinese cities such as Beijing and Shanghai, and only recently (November 2016) in New Delhi, reportedly the most polluted city on the planet, schools were closed and construction work halted for three days, and shops ran out of face masks. “The smog is acrid, eye-stinging and throat-burning… Levels of the most dangerous particles, called PM 2.5, reached 700 micrograms per cubic meter on Monday, and over the weekend they soared in some places to 1,000, or more than 16 times the limit India’s government considers safe.” [12]

This brings me neatly back to my theme, which is road vehicles. Diesel engines operate at higher temperature than petrol engines. (For my American readers, what we call “petrol” in the UK, you call “gasoline”, or “gas” for short.) This brings a benefit of higher efficiency – more miles per gallon, or fewer litres per 100 kilometres (depending on which units you favour). This is great from the point of view of reduced use of fossil fuels and lower GHG emissions, and this efficiency argument has driven a significant shift from petrol (gasoline) to diesel for ordinary private cars over the past 20 years or so, despite the diesel engine being more expensive to manufacture. But diesel engines have two problems which make them potentially worse than petrol engines in terms of pollution: particulates, and oxides of nitrogen. We have all seen an old diesel lorry labouring up a hill belching out black fumes. The exhaust gases are laden with particulates: small particles of carbonaceous solids, ie soot. The more worn a diesel engine is, the more particulates it emits; but even modern diesel engines in good condition emit particulates, though these are not easily visible. The size of the particles is very important to the health hazard they pose. You might have seen them described as “PM10s” or “PM2.5s”: PM10 is particulate matter 10 μm or less in diameter, and PM2.5 is particulate matter 2.5 μm or less in diameter (a micrometre, μm, is 1/1000 of a millimetre). The smaller the particles, the longer they remain airborne, the more they penetrate and lodge in the air sacs (alveoli) in the deepest part of the lungs, and the more dangerous they are, causing respiratory diseases and even lung cancer. Large particles drop out before they can be breathed in, or are intercepted in the mucous membranes, and so don’t pose such a serious health hazard.
PM2.5s are typically considered the more important health hazard, which is why you will often see pollution levels reported in terms of PM2.5s, and why air quality standards quote permissible levels of them. Now, fortunately for diesel engines, the technology to clean up particulates from the exhaust gases is cheap, reliable, and very effective: diesel particulate filters, or DPFs. Worldwide, legislation is increasingly stringent in requiring that they are fitted and tested to be operating correctly. The particulate problem from diesel engines is worst where there are many older engines in use, and/or where legislation has yet to catch up and DPFs are not the norm. New Delhi is an example of where such measures are desperately needed.

But oxides of nitrogen are a more challenging problem for diesel engines: the high temperature of combustion, higher than in petrol engines, causes more of the nitrogen in the inlet air to be oxidized, forming nitric oxide (NO) and nitrogen dioxide (NO2). These gases are commonly referred to generically as NOx. This matters because NOx gases react to form smog and acid rain, and contribute to fine particulate formation and ground-level ozone, which have adverse health effects, especially respiratory irritation and disease. Modern catalytic converters can reduce NOx emissions, in addition to their other exhaust clean-up functions, but it is very difficult for diesel engines to meet modern EU and US emission standards for NOx without additional technology. The scandal which hit the headlines in September 2015, involving Volkswagen using software in its engine management systems to defeat emissions testing, has had a huge and chastening impact on the finances and image of the company and its products, and on the way diesel engines are viewed overall in terms of their environmental impact. Since the VW scandal erupted, diesel’s image has gone from “green” to “dirty”, and some cities are now considering draconian measures to limit diesel engine use within their jurisdictions. It needn’t necessarily be so, since technologies are available which drastically reduce NOx emissions, and alternatives and improvements are being developed all the time. An example is SCR (selective catalytic reduction), which involves injecting a fine mist of urea in water (also called diesel exhaust fluid or DEF, sold under the proprietary name AdBlue) into the engine’s exhaust stream, where, over a catalyst, it reacts with the NOx to turn it into nitrogen and water. New VW diesel vehicles sold in the UK now come with SCR as standard.
The downside is that the vehicle requires an additional separate reservoir to store the DEF, typically of 10 to 20 litre capacity, which requires refilling regularly. The consumption rate is about 1.5 litres per 620 miles, though this depends very much on the type of driving, whether long-distance or stop/start. The present cost is ~£1/litre, so it is not prohibitive, adding perhaps 2% to fuel costs. However, it all adds to the capital cost and complexity of the vehicle, and to hassle, as well as to operating costs, and it is understandable, though not forgivable, that manufacturers might try to beat the system, especially if they thought that regulators were colluding with other manufacturers, who might then gain an unfair competitive advantage.
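The arithmetic behind that “perhaps 2%” estimate is worth making explicit. A minimal sketch, using the DEF consumption and price figures above; the diesel price (£1.20/litre) and fuel economy (50 mpg) are my own illustrative assumptions, not figures from the post.

```python
# DEF (AdBlue) running cost, from the figures quoted above:
# ~1.5 litres per 620 miles, at ~£1 per litre.
def_cost_per_mile = (1.5 / 620) * 1.00        # ≈ £0.0024/mile

# Assumed diesel car for comparison: 50 mpg (imperial) at £1.20/litre.
litres_per_gallon = 4.546                      # imperial gallon
fuel_cost_per_mile = 1.20 * litres_per_gallon / 50   # ≈ £0.109/mile

pct = 100 * def_cost_per_mile / fuel_cost_per_mile
print(f"DEF adds {pct:.1f}% to fuel cost")     # prints: DEF adds 2.2% to fuel cost
```

So under these assumptions the DEF habit costs around a quarter of a penny per mile: real, but hardly prohibitive.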

In summary, internal combustion engines are now designed to comply with very stringent environmental standards. In the EU, this means Euro 6 [13], applicable to all new vehicles since September 2015. After the VW scandal, a manufacturer would be reckless indeed to risk its reputation and business by attempting to cheat. As older vehicles are progressively replaced, environmental pollution from the internal combustion engine can become a non-issue.


Appendix 2 : The future of electric vehicles – some more detail

I have described in Appendix 1 how diesel and petrol engines can be, and are being, made environmentally much cleaner than in the past, through technologies such as catalytic converters, particulate filters, engine management systems, and selective catalytic reduction to tackle NOx emissions.

What then is the future for electric vehicles? There will always be an argument for them in terms of pollution, since an all-electric car can deliver effectively zero local emissions, while internal combustion engines will always emit some measurable level of pollutants, however good the technology. There is also an argument in terms of fossil fuel use, as electricity can in principle be made entirely by non-fossil means; however, that is a long way from reality at present. The all-electric car cannot be truly claimed as emission-free, since the process of manufacture, and the electricity to charge it, involve GHG emissions, and will do for the foreseeable future. If you’re in the UK, you can see the grid carbon intensity on your smartphone or computer at any time, using the free app “GridCarbon”, which also displays the amount and percentage of feed to the UK grid from each type of generation, eg gas, nuclear, coal, wind, hydro, and import/export via the French interconnector. The average value for 2015 was 367 gCO2/kWh. [14]
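To put that grid figure in context, one can estimate the effective emissions of an electric car charged from the 2015 UK grid. This rough sketch uses the Nissan Leaf battery and range figures quoted later in this post (24 kWh, 124 miles); the ~2.3 kg of CO2 per litre of petrol burned is my own assumption, not a figure from the post.

```python
# Effective CO2 per mile for an EV charged from the average 2015 UK grid.
grid_g_per_kwh = 367                       # gCO2/kWh, 2015 UK average
ev_kwh_per_mile = 24 / 124                 # Leaf: 24 kWh battery, 124-mile range
ev_g_per_mile = grid_g_per_kwh * ev_kwh_per_mile

# Comparison: a 45 mpg petrol car, assuming ~2.3 kg CO2 per litre burned.
petrol_g_per_mile = 2300 * 4.546 / 45

print(f"EV: {ev_g_per_mile:.0f} g/mile vs petrol: {petrol_g_per_mile:.0f} g/mile")
# prints: EV: 71 g/mile vs petrol: 232 g/mile
```

Not emission-free, then, but on these assumptions a substantial improvement, and one which gets better as the grid decarbonizes.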

There are two major barriers which prevent all-electric cars from gaining market share, with the present paradigm of car ownership and utilization: capital cost, and range.

To illustrate the capital cost problem, consider the Nissan Leaf, the market-leading compact 5-door all-electric hatchback in the UK. A new one can be bought for just under £20,000; the nearest petrol equivalent, the Nissan Note, can be bought new for just under £11,000 (prices as at November 2016). There is of course a fuel cost saving with an electric car: a 24 kWh Nissan Leaf costs about £2.40 for a single full charge, assuming off-peak electricity at 10p/kWh. The nominal range on a full battery is 124 miles, so the approximate fuel cost is 2p/mile. The equivalent vehicle’s petrol cost would be about 11p/mile, assuming 45 mpg and current fuel prices. There are some other savings for the electric vehicle too, such as road tax and, if used in London, the congestion charge. But, using reasonable depreciation rates and annual mileages, if one does the arithmetic it is hard to see how the Nissan Leaf can compete with the Nissan Note on purely cost grounds. Although this is mitigated to some extent by Government financial inducements [15] to encourage electric car ownership, the high capital cost remains a significant disincentive.
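Doing that arithmetic explicitly, on fuel savings alone (ignoring depreciation, tax, and grants), shows how long it takes the Leaf’s lower running costs to repay the price gap:

```python
# Break-even mileage for the Leaf vs the Note, using the figures above.
price_gap = 20_000 - 11_000                 # £9,000 difference in purchase price
electric_cost_per_mile = 2.40 / 124         # ≈ £0.019/mile (full charge / range)
petrol_cost_per_mile = 0.11                 # £/mile (45 mpg, Nov 2016 prices)
saving_per_mile = petrol_cost_per_mile - electric_cost_per_mile

print(f"Break-even after ~{price_gap / saving_per_mile:,.0f} miles")
# prints: Break-even after ~99,288 miles
```

At around 100,000 miles of driving before the price gap is recovered, it is easy to see why the pure-cost case struggles for a typical private motorist.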

But the principal downside with the electric option is range. Despite huge effort and expense in technical development, battery technology remains a fundamental limitation for electric vehicles: batteries are heavy, expensive, poor on capacity, and slow to recharge, compared to filling your tank at a filling station. If one is dependent on charging at home, the electric car cannot be used on long journeys – about 60 miles from home is the maximum for a 24 kWh Nissan Leaf, before taking into account “stress-buffering”, the desire to keep at least 20% charge in reserve for fear of being stranded and having to be towed. There are charging points at motorway service areas, for example, and a fast charge to 80% full takes about 30 minutes; but longer journeys have to be planned with care to ensure one does not get stranded with a flat battery. The network is expanding, though. As of 7 October 2016, the UK had 11,903 public charging points at 4,215 locations, of which 2,140 were rapid charging points at 696 locations. These fast-charge locations are the crucial ones: the 696 compares to just under 8,500 filling stations in the UK, a ratio of about 1:12. Unless and until there is a much more complete network of charging points, market penetration for the all-electric car looks to be constrained not just by cost, but also by convenience and range.

So are these difficulties and constraints reflected in the numbers of electric vehicles actually being bought? Care has to be taken with the figures, to distinguish between hybrids such as the Toyota Prius and all-electric vehicles such as the Nissan Leaf and BMW i3. My purpose here is to focus on all-electric, but I’ll quote hybrids as well, for comparison. Market penetration has been growing quite sharply, from even lower levels in the past, so recent data is most relevant. In the UK, total new car registrations in the first 9 months of 2016 were 2,150,495 [16]. Of these, 29,185 (1.36%) were electric, including hybrids: 21,078 (0.98%) hybrid and 8,107 (0.38%) all-electric [17].
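As a quick sanity check, those percentages can be reproduced from the raw registration numbers:

```python
# 2016 (Jan-Sep) UK new car registrations, from the figures quoted above.
total = 2_150_495
hybrid = 21_078
all_electric = 8_107

for label, n in [("hybrid", hybrid),
                 ("all-electric", all_electric),
                 ("combined", hybrid + all_electric)]:
    print(f"{label}: {n:,} ({100 * n / total:.2f}%)")
# hybrid: 0.98%, all-electric: 0.38%, combined: 1.36% - as in the text
```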

This market share for electric and electric/hybrid vehicles was delivered with the assistance of the UK Plug-in Car Grant [15], which covers 35% of the cost of a car, up to a maximum of either £2,500 or £4,500 depending on the model, or 20% of the cost of a van, up to a maximum of £8,000.

Even the most ardent enthusiast for electric vehicle technology would have to admit that these numbers are somewhat disappointing.

It’s not possible to argue that this is due to a lack of effort by manufacturers to make attractive offerings. Driven by legislation [18] on emissions targets, there’s a wide range of options on the market [19], both all-electric and hybrid, covering most types of vehicle: a tiny city car (Renault Twizy), compacts (VW e-Up, Renault Zoë), family hatchbacks (Nissan Leaf, VW e-Golf, Toyota Prius, Audi A3 e-tron, BMW i3), larger family cars (Kia Optima, VW Passat GTE), crossovers/SUVs (Mitsubishi Outlander PHEV, Audi Q7 e-tron), and even up-market sports cars made by Elon Musk’s Tesla. Nor can you argue that using an electric vehicle will mark you out as a tree-hugging beard-and-sandals type – whether that image appeals to you or not.

One might argue from this evidence that electric vehicles are condemned forever to the margins of vehicle use, unless much greater financial incentives are made available, and/ or much more draconian legislation is introduced on vehicle emissions at the point of use. However, this ignores the paradigm shift which, as I’ve argued in the post, is almost upon us: autonomous vehicles. I hope that the main thrust of this post, that we are living through a paradigm shift, will convince you to be optimistic.



[1] The Great Horse Manure Crisis

[2] London’s Traffic Really Is Moving More Slowly

[3] Road Traffic Accidents Death Rate Per 100,000 Age Standardized

[4] Reported road casualties in Great Britain, main results: 2015

[5]  Major causes of UK road traffic accidents

[6]  Cycling Accidents Facts & Figures, RoSPA

[7] Are we there yet? The journey towards driverless cars

[8]  Driverless car market watch

[9]  Global car sales

[10]   Man-made Global warming is/ isn’t real

[11]  The historic Paris climate change agreement just became international law


[12]  Smog Chokes Delhi

[13]  Euro emissions standards

[14]  Variations in UK/GB Grid Electricity CO2 Intensity with Time

[15]  Plug-in car and van grants



[18] Cars and Carbon Dioxide

[19]  Choose your electric car












Creation Science Apologetics v Mainstream Science


There are many who claim their religious views are scientifically sound, based on “Creation Science”.

They might (on a good day) concede some sort of equivalence – a competition, say – between the religious apologist views in, for example, the UK Creationist movement, and scientific views presented elsewhere (everywhere else, in fact, other than in religious apologists’ work) which come to very different conclusions. However, I intend to show that there is no more such an equivalence than there is between the Stork Theory of Where Babies Come From and the mainstream biological one. Silly analogy? OK, maybe, but I want to get your attention.

I should point out that of course not all creationists subscribe to the “Young Earth” creationist view, that the earth is something less than 10,000 years old. Some are happy to accept a more rational view of the age of the earth and age of the universe, yet are still attached to a creationist view of how things came to be.

For ease of reference, I have in my title termed religious apologist views “Creation Science Apologetics”, and the rest “Mainstream Science”. The reader might accuse me of taking a biased view from the outset by choosing that terminology, but bear with me, and the justification will become clear.

I want now to draw some distinctions between apologetics and mainstream science.

  1. Religious apologists’ views usually start from a conclusion – “my religious text is true” – and interpret all information in such a way that it conforms with that conclusion. Mainstream science works without any preconceived view of what must be true, and follows wherever the evidence leads.
  2. Mainstream science works by peer review. Anything which is published is subject to such peer review, and is open to critique from anyone who wishes to present other evidence, or other interpretation. Thus the body of scientific knowledge is continually growing. Inevitably, what is accepted as correct at some point in time can later be rejected when new information is discovered. Sometimes this is presented as a weakness of science – “it keeps changing its mind; nothing is ever fixed” – but in reality it is a fundamental strength. Any scientific theory depends on falsifiability, the idea that it could be shown to be wrong. For example Dalton’s Atomic Theory, though still of great use in the understanding of chemistry and chemical reactions, said that all matter is made of atoms, which are indivisible and indestructible. This was later falsified when we understood more about the structure of atoms, and showed that they consist of component sub-atomic particles. Apologetics does not work by peer review: any views which conflict with apologetics do not find their way into apologetics texts or websites – they are roundly rejected, and only compliant views are accepted.
  3. Mainstream science works by developing hypotheses, and testing them to see if they are consistent with evidence. Once a hypothesis has been thoroughly tested and not found wanting, it acquires the status of a scientific theory. Any scientific theory provides predictions on what would be observed if it is sound. Those predictions are then tested by experiment. If the results are found to be repeatable, and to agree with the predictions, then the theory survives intact, for now. Note that I say “survives intact”, not “is deemed proven”. But as soon as results are found to be repeatable and inconsistent with the predictions, the theory has to be changed or even abandoned. The example I gave above of Dalton’s Atomic Theory is a case in point. So is the Theory of Evolution, except in this case all experimental evidence and further research, eg in DNA analysis, has so far been shown to be consistent with the theory. If this were not the case, then science would have rejected it. In practice, the Theory of Evolution has been confirmed, and extended in depth by further understanding. It is part of Mainstream Science. In contrast, apologetics does not make testable predictions; instead, it focuses on offering explanations and rationalizations of how observations of the way the world works can be interpreted, so as not to conflict with religious dogma. Young Earth Creationism, for example, holds that the earth is less than about 10,000 years old, based on a literal interpretation of the Bible, particularly the Book of Genesis. To maintain such a view, YEC Apologetics has to discredit or “re-interpret”, amongst other things (there are too many to list everything here), all of the following Mainstream Science disciplines and topics, most of which are independent of each other in the way they arrive at estimates of the timescales involved.
I have arranged them in rough order, from the shortest timescales (something over 10,000 years) to the longest (many millions, or even billions, of years).
  • Thermo-luminescence dating
  • Dendrochronology
  • Oxidisable carbon ratio dating
  • Widmanstätten patterns in crystals of Ni and Fe, found in some meteorites.
  • Mitochondrial DNA (early human female)
  • Ice cores/ ice layering
  • Fission track dating
  • Speed of mineral replacement in petrified wood
  • Growth rate of certain large crystals, eg gypsum
  • Cosmogenic nuclide dating
  • Growth rate of limestone stalactites
  • Geomagnetic polarity reversals
  • Geological erosion rates
  • Milankovitch astronomical cycles
  • Growth rates of corals
  • Plate tectonics and continental drift
  • Radiometric dating, eg uranium-lead (there are many others)
  • Distant starlight
  4. Mainstream science comprises a vast and ever-growing body of knowledge in interlinked disciplines. Findings in any one discipline often have links and applications in what might at first sight appear to be entirely separate fields of enquiry. Modern medicine, for example, benefits from developments in physics, as in Magnetic Resonance Imaging (MRI scanning), and in tribology and the chemistry of materials, as in prosthetic hip joints. A few moments’ thought will yield many other examples. These interdisciplinary links don’t just serve to offer us practical developments: they also serve as a continuous method of testing and verification of science – if it doesn’t work, ie doesn’t transfer across where it would be expected to, then its validity is called into question, and the science improves. A corollary of this is that one cannot take any one part of science in isolation, and choose to ignore or reject it on grounds of religious dogma, because other branches of science will already be heavily integrated with and dependent on it. Apologetics, on the other hand, attempts to do just that, by focusing on some aspect of science which it recognizes as conflicting with its religious dogma, and attempting to undermine or discredit it in isolation. Apologetics is therefore by its nature inward-looking, narrow, and small, without links to extending knowledge in any other field, except support for religious dogma.
  5. Mainstream science works to the benefit of us all by enabling the provision of much of what we take for granted and depend on in our modern and relatively long and secure lives, in comparison to even a few hundred years ago. Clean water, abundant crops, eradication or near-eradication of the scourge of some diseases (smallpox, leprosy, polio), cures and treatments for other diseases, electronic communications, transport, entertainment such as TV and radio, refrigeration to keep food edible and safe for longer, GPS, new engineering materials, forensic science by DNA analysis – the list goes on and on. Apologetics has produced nothing of value to benefit the human condition, unless you count support for religious dogma as “of value”.

So if you regard apologetics as worthwhile, and want to argue that your “creation science” has got it right and mainstream science (such as the Theory of Evolution) has got it wrong, ponder the above and ask yourself: “Why do I believe that? Am I really following the evidence where it leads, or am I a victim of confirmation bias, only interested in what reinforces my existing religious beliefs?”

If you are attached, for example, to Ken Ham’s claims and explanations, it is a simple matter to check out some websites where his ideas and statements are roundly debunked. Here’s an example, but if you look you’ll find plenty to keep you amused, and hopefully to get you doing some honest questioning.

Ken Ham’s 10 facts that prove creationism – Debunked




Democracy, and funding political parties in UK

The method of funding political parties in the UK has been controversial and troubled for a very long time. Outrage abounds in the media, as rich donors make large donations, with allegations of buying influence or honours, or both. Funding of the Labour Party by trade unions, via a levy on their members’ subscriptions, has also been controversial, both from the implied assumption that all the members of a trade union would be Labour supporters, and from the influence, or even control, of the political Party by trade union leaders through their power as the source of finance.

Membership of political parties has never been very high in the UK;  most of the electorate are not politically “active”, though most will vote in General  Elections. This chart [1] shows turnout in General Elections since 1945. It is arguable that the numbers show a worrying indifference to, or disenchantment with,  the political process, particularly since the turn of the millennium.

UK election turnout

According to Party press releases and media estimates [2] about a year ago, in August 2015,

  • The Conservative party had about 149,800 members (as at December 2013)
  • The Labour Party had about 270,000 members (as at August 2015)
  • The Scottish National Party had about 110,000 members (as at June 2015)
  • The Liberal Democrat Party had about 61,000 members (as at May 2015)
  • UKIP had about 42,000 members (as at January 2015)
  • The Green Party (England & Wales) had about 61,000 members (as at June 2015)


Membership of the 3 main political parties (Conservatives, Labour, and Liberal Democrats) was bumping along near a historic low, with ~1% of the electorate a member of one of these three parties in 2015, ~0.8% in 2011, compared to 3.8% in 1983: nearly a fourfold reduction.

However, these figures are in contrast to some stark increases in other parties.  The referendum on Scottish independence in September 2014 saw a large increase in political engagement in Scotland, with the SNP increasing its membership from around 25,000 at the end of 2013 to 110,000 in mid-2015. The turnout in that referendum of 84.6% was the highest recorded for an election or referendum in the UK, since the introduction of universal suffrage with the 1918 Representation of the People Act.

The Green Party has also been growing. In late 2013 its membership was about 13,800; by mid-2015 it had more than quadrupled, to around 61,000 – a rise of over 340% in only 18 months.

UKIP is a relatively new party: founded as the single-issue “leave the EU” Anti-Federalist League in 1991, it became UKIP in 1993, and has seen its membership grow from those small beginnings, through 32,000 in December 2013, to around 42,000 in January 2015. It remains to be seen whether such growth will continue, now that its main aim of achieving Brexit looks to have been won, or whether the party has an ongoing raison d’être at all.

But it’s in the Labour Party that recent membership upheaval has been by far the most dramatic and far-reaching, and it is this in particular which has driven me to think more deeply about political party membership and funding, and whether there might be a better way.

A radical change in Labour’s leadership election process, abandoning the electoral college system in favour of One Member One Vote (OMOV), was first moved by Ed Miliband in 2014, and was already in place for the election of Labour’s new leader after the party’s poor showing in the general election of May 2015 and Ed Miliband’s ensuing resignation. It seems a while ago now, but Labour fielded four candidates for Party Leader: three “mainstream” MPs, Andy Burnham, Yvette Cooper, and Liz Kendall, and a left-wing “wildcard” in Jeremy Corbyn, included apparently for “balance”, to ensure that the candidates covered the spectrum of the Party’s political views. It was in the gift of the Party’s MPs to decide which candidates would stand in the leadership election, and some of them decided to lend their nomination to Jeremy Corbyn to get him into the contest, even though they did not intend to vote for him in the leadership election itself. Notable amongst these was former Labour Foreign Secretary Margaret Beckett, without whose last-minute nomination Jeremy Corbyn’s name would not have made it on to the ballot paper. She is on record about how much she rues that decision. It might yet prove to be a blunder which contributes to the demise of the Labour Party, or not: time will tell.

The Labour Party had three classes of membership for their leadership election in September 2015: party members, affiliated supporters, and a new category called “registered supporters”.  In late August 2015, the Party reported that about 552,000 members and supporters were eligible to vote, as follows:

  • Full members: people who had joined the party and paid their membership subscriptions, numbering about 292,000. There had been a surge of new members in August 2015; the previous total was somewhere just over 200,000.
  • Affiliated supporters: members of trade unions and socialist societies who opted to affiliate, numbering about 148,000.
  • Registered supporters: people who signed up on payment of a £3 fee, entitling them to vote in the leadership election. In late August the number in this category was reported as about 112,000.

The introduction of the new category of registered supporters at short notice proved problematic to manage: at only £3 per head, there was ample temptation for abuse by people seeking to influence the result, whether in favour of Corbyn or in an attempt to swell the vote against him.  In the event, a huge number of these “supporters” (around 56,000) were rejected, for example because they could not be found on the electoral register, or because they were members of other political parties. This chaotic situation might have made the leadership contest result questionable, but in the event Corbyn won handsomely in a single round of voting, with nearly 50% of full members, nearly 58% of affiliated supporters, and nearly 84% of registered supporters.  The total turnout was 76.3% of eligible voters.

The events following Corbyn’s election as leader have not been edifying. He has certainly polarized opinion: on the one hand, a sizeable body of enthusiastic supporters who welcome his election as a much-to-be-desired break from the “old” politics of the so-called centre left, or “Tory-lite” as some would style it; on the other, the vast majority of the Parliamentary Labour Party, who see him as weak, ineffectual, and an electoral liability. Whether they are right, and the electorate at large would judge Corbyn non-credible as Prime Minister, has yet to be tested in practice.

The following view, expressed by columnist Janice Turner in the Times [3] (Saturday 23 July 2016), seems bang on the money to me:

“… among Corbyn supporters during this leadership contest all reason has gone: this is now a movement run on faith alone. Not so last summer. Then many thoughtful people, contemplating lacklustre alternatives, decided to give this quirky, old-time leftie a spin. They imagined a genial, grandfatherly figure heading a broad, inspiring progressive movement, welcoming opponents, gathering brilliant minds. Instead, one by one, these supporters have realised Corbyn is a shambolic, rather dim man, who surrounds himself with reptilian ideologues, will ensure ten more years (at least) of Tory rule and doesn’t even care. Such rational folk, fearing their party is about to die and seeing a confident new Tory government with no one to hold it to account, have ditched Corbyn.

What remains is a cult: its members cannot be persuaded by reason or argument. Some believers are benign and well-meaning, like the women I keep meeting who “just love Jeremy” and, like Mary Magdalene, would happily wash his feet with their hair. Or the young folk who (understandably) want to smash a system that has laden them with debt and poor prospects, and cannot comprehend that beyond their Facebook bubble millions of voters will never share their faith in JC.”

“Facts become malleable when steeped in faith. Corbyn announced he’d ensure drug development would be conducted by the state, not evil corporations, without knowing or caring that this would bankrupt the NHS. Just as he can go to Yorkshire and demand the pits be reopened; impossible when they’re filled and flooded. Or declare that Article 50 should be immediately invoked and later dispute he said it. Never mind the truth, enjoy the socialist vibe.”

I suspect that if you are one of the legion of traditional Labour voters up and down the land who has never taken a particularly active interest in politics, you might view with similar despair the turn the party has taken since September 2015. Even the avowed socialist author Robert Harris is equally scathing about Jeremy Corbyn’s abilities [3]:

“The current leader is about as useful at the dispatch box as a centre-forward with only one leg. The problem with Corbyn is not the policies, because there are no policies. They are simply soothing bromides for everybody. The problem is his sheer incapacity for the demands of the job, which require speed on one’s feet, cunning, skill in debate, wit, decisiveness. Almost everything it is necessary for a political leader to possess he does not possess. People say he’s like a geography teacher — he’s not qualified to be a geography teacher.”

Where am I going with all this? My point here is not to assert that Jeremy Corbyn is a cataclysmic choice as Labour Party leader. While that is my firm opinion, many others take the opposite view, and time will tell which is more accurate. No, my point is about the impact of the relatively low numbers of people involved in the selection process, and actively involved in politics in general. You might argue that 552,000 is a large, and therefore representative, number; however, it is something under 6% of the number of people who voted Labour at the last general election, in May 2015.

I can make the same point even more starkly by looking at the process of electing Theresa May as leader of the Conservative Party, and de facto as Prime Minister, in July 2016. The Conservative Party rules required the parliamentary party, in the event of more than two candidates standing, to whittle the field down to a final two; the party membership would then vote to select the leader from these two. One might argue that in our parliamentary democracy there is nothing wrong with such a process; nevertheless, the electorate in this case would have been about 150,000 individuals, ie approximately 1.3% of the number of people who voted Conservative at the general election in May 2015. Many people felt this was highly unsatisfactory. As it happened, one of the two final candidates (Andrea Leadsom) withdrew very early in the campaign, and we ended with a “coronation” of Theresa May, but this hardly detracts from the point: a small number of people making a key decision of great importance to the running of the country.
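The arithmetic behind these percentages is easily checked; here is a minimal sketch, using only the figures quoted above:

```python
# Quick check of the "selectorate" shares quoted in the text
# (figures as given above; purely illustrative).
labour_votes_2015 = 9_347_304         # Labour vote, May 2015 general election
conservative_votes_2015 = 11_334_576  # Conservative vote, May 2015

labour_selectorate = 552_000   # members + supporters eligible, August 2015
tory_selectorate = 150_000     # approximate Conservative membership, July 2016

print(f"{labour_selectorate / labour_votes_2015:.1%}")      # 5.9%: "something under 6%"
print(f"{tory_selectorate / conservative_votes_2015:.1%}")  # 1.3%
```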

A more recent outcry has arisen from the Labour Party leadership challenge in July 2016. In this case, the Party NEC decided again to allow registered supporters to vote in the election, but increased the fee from £3 to £25, and set a very brief window for registering. The arguments in this case included accusations of trying to make it harder for people to vote, in order to manipulate the result against the incumbent, and of seeking large financial gain from the fact that a leadership challenge was taking place: if 100,000 people registered to vote, the Party would gain a windfall income of £2.5m.

It is undeniable that our political parties always struggle to bring in enough money to fund their activities. There is a trade-off between keeping the membership subscription low enough not to discourage people from joining, and high enough to deliver the income required. In the case of the Labour Party, full standard membership costs £47/year; for the Conservative Party, it is £25/year. These might not seem very large subscriptions, and it is unclear whether membership would increase markedly if the fees were lower: there are all sorts of factors at play besides cost.

But funding of political parties has been a vexed question in the UK for a very long time. Funding via general taxation has been mooted from time to time, but always rejected; part of the problem has been how to deliver an equitable system, fair to all political parties, large or small. What is clear is that the present system doesn’t work well. A good arrangement would deliver:

  • reasonable but not excessive funding, roughly proportional to a party’s popular support;
  • reasonable predictability of income, to allow planning of expenditure;
  • better engagement of the public with the political process;
  • higher numbers of people involved in parties’ decision-making processes, such as selection of parliamentary candidates and party leaders;
  • complete transparency and fairness;
  • independence from the whims of a few wealthy individuals.

The good news is that such a system is available, relatively simple, and not costly, and that is what I propose.

How would it work?

The general principle is to attach a notional “political subscription” to every individual on the electoral register. There would be a register of UK political parties, to which any party could subscribe on payment of a reasonable fee (to discourage frivolous registration). Each year, when the electoral register was refreshed, each individual (not household) would be given the option to allocate their notional political subscription to any one of the registered political parties; their choice could, if they wished, be confidential to them, to the civil service department operating the system, and to the party to whom they had allocated their support. On allocating their subscription to a party, the individual would become a registered supporter of that party until the next annual refresh of the electoral register. This would of course be quite independent of how that individual might decide to vote in any election. The party receiving the allocation would receive one unit of funding for that year, paid out of general taxation, and would be notified of the names and contact details of all the people who had allocated their support to it for that year. If an individual chose not to allocate their subscription to any party, no funding would be delivered in their name that year.
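The mechanics of the scheme can be sketched in a few lines of Python (purely illustrative; the party and elector names are invented, and a real system would of course sit on the electoral register itself):

```python
# Sketch of the proposed scheme: each elector may allocate one notional
# subscription to one registered party per year; parties are funded per
# allocation and learn who their registered supporters are.
from collections import Counter

UNIT_OF_FUNDING = 5  # £ per allocation; the level would be set by Parliament

registered_parties = {"Party A", "Party B"}
allocations = {
    "elector 1": "Party A",
    "elector 2": "Party B",
    "elector 3": "Party A",
    "elector 4": None,  # chose not to allocate: no funding in their name
}

# Count only allocations to properly registered parties
tally = Counter(p for p in allocations.values() if p in registered_parties)
funding = {party: n * UNIT_OF_FUNDING for party, n in tally.items()}
supporters = {party: sorted(e for e, p in allocations.items() if p == party)
              for party in registered_parties}

print(funding)  # {'Party A': 10, 'Party B': 5}
```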

A key question, of course, is at what level the “unit of funding” should be set. The UK electorate was 44.7m in 2015. If, say, about 90% of the electorate chose to allocate their notional political subscription, then at a unit of funding of £5 the cost would be about £200m/year, equating to about 0.12% of UK income tax receipts, or 0.03% of total annual government revenue. It would, however, be somewhat offset by eliminating the need for “Short Money”[4], the annual payment made to Opposition parties in the UK House of Commons to help them with their costs. For those interested in the history, it was introduced by the Harold Wilson Government of 1974–76, and named after the then Leader of the House of Commons, Edward Short. Under this system, eligible parties receive annual funding comprising £16.7k for every seat won at the last election plus £33.33 for every 200 votes gained by the party; the funding given to the Labour Party, for example, following the 2015 general election, is £6.2m/year (including the £0.78m provided for the running costs of the Leader of the Opposition’s office).
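The arithmetic above can be sketched as follows (the Short Money rates are as quoted in the text, rounded; treat them as illustrative rather than the exact statutory amounts, which are updated annually):

```python
# Cost of the proposed scheme: 90% of a 44.7m electorate allocating a £5 unit.
electorate = 44_700_000
cost = electorate * 0.90 * 5
print(f"£{cost / 1e6:.0f}m")  # ~£201m per year

# Short Money entitlement, using the rates quoted above
# (£16.7k per seat won, plus £33.33 per 200 votes gained).
def short_money(seats, votes, leader_office=0.0):
    return seats * 16_700 + votes / 200 * 33.33 + leader_office

# Labour after the May 2015 election: 232 seats, 9,347,304 votes,
# plus ~£0.78m for the Leader of the Opposition's office.
labour = short_money(232, 9_347_304, leader_office=780_000)
print(f"£{labour / 1e6:.1f}m")  # ~£6.2m/year, matching the figure in the text
```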

If the numbers voting for the various parties at the 2015 general election were translated 1:1 into allocated notional political subscriptions, then the parties would receive annual income as follows:

Party | Votes (2015) | Seats won | Annual income
Conservative Party | 11,334,576 | 331 | £56.7m
Labour Party | 9,347,304 | 232 | £46.7m
UKIP | 3,881,129 | 1 | £19.4m
Liberal Democrats | 2,415,862 | 8 | £12.1m
SNP | 1,454,436 | 56 | £7.3m
Green Party (England & Wales) | 1,156,149 | 1 | £5.8m
Plaid Cymru | 184,694 | 3 | £0.9m
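As a quick check, the figures in the table above can be reproduced in a few lines of Python (a sketch; the vote totals are those in the table):

```python
# Annual income at £5 per allocated subscription, assuming 2015
# general-election votes translate 1:1 into allocations.
UNIT = 5  # £ per notional political subscription per year

votes_2015 = {
    "Conservative Party": 11_334_576,
    "Labour Party": 9_347_304,
    "UKIP": 3_881_129,
    "Liberal Democrats": 2_415_862,
    "SNP": 1_454_436,
    "Green Party (England & Wales)": 1_156_149,
    "Plaid Cymru": 184_694,
}

income_m = {party: round(votes * UNIT / 1e6, 1)
            for party, votes in votes_2015.items()}
for party, m in income_m.items():
    print(f"{party}: £{m}m")  # e.g. Conservative Party: £56.7m
```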



Benefits of the proposed system

  • Such an income would transform the financial fortunes of all political parties, allow them to offer full membership at very low rates, and go well beyond the “Short Money” [4] system in redressing the unfairness of the first-past-the-post system, whereby UKIP, for example, won only 1 seat despite gaining nearly 3.9m votes, whereas the Conservatives won a seat for roughly every 34,000 votes.
  • A very large potential increase in the number of people recorded as registered supporters of the various parties, and therefore the opportunity for decision making, such as selection of the party leader, to be much more representative of the views of the people who support a party, should parties want to adopt it.
  • Increase in public engagement with the political process, as people consider whether to allocate their notional political subscription, and if so, to whom.
  • A reduction in the presently disproportionate influence on party politics of a small number of wealthy donors, with the associated distortion of “honours for cash”.
  • A reduction in the arguably disproportionate influence of a small number of trade union leaders on the policies and decisions of the Labour Party.

Of course, £5 per unit of funding might be seen as too large, and too expensive for the public purse. A smaller amount might be chosen: at, say, £2.50 per unit, most of the benefits of the proposal would probably still accrue. The optimum level of funding would need to be set by Parliament, and reviewed as necessary, much as the existing “Short Money” is.





[3]          The Times, Saturday 23 July 2016

[4]          “Short Money”:




Man-made global warming is/isn’t real

It is fascinating to observe how information percolates into public consciousness: how “public opinion” is formed, and what relationship, if any, it has to objective information about the topic at hand. The issue of man-made global warming illustrates beautifully the processes which play out, how public opinion can be swayed or manipulated, and how the media and politicians can play an issue to their own ends.

Man-made global warming, often abbreviated to AGW (anthropogenic global warming), is such a vital issue that we cannot afford to get it wrong. There are two main schools of thought. One is that AGW is real, and poses a huge, and for practical purposes irreversible, threat to humanity: for example, loss of great swathes of highly populated land mass to rising sea levels; reduction in the ability of our planet to keep us in food and water, through widespread damage to marine life and changes in climate affecting the productivity of agriculture; and increased numbers of extreme weather events, such as hurricanes, droughts, and floods. On this view, the capacity of the planet to support our children and grandchildren is threatened. The opposing school of thought is that AGW is not happening at all, or if it is, it is a minor phenomenon that will have little impact in the short to medium term, and might even offer some benefits, so we can afford to wait and see what transpires; if a real problem does arise, we will have time to deal with it then. If this view is correct, we are engaged in a hugely expensive worldwide diversion of resources to address a non-problem, when those resources would be much better spent on real and present priorities. In this preamble I am dignifying both positions as “opposing schools of thought”, though as I proceed it will become clear on which side of the argument I stand. Anyone reading this might well have his/her own strongly held views already.

In trying to illustrate my point about how public opinion is formed and influenced, I was seeking a less controversial analogy, but it is hard to find one where some people will not be utterly convinced of the correctness of a demonstrably nonsensical point of view. One example lies at the “lunatic fringe” end of topics which people get interested in, and often very vociferous about: the chemtrail conspiracy theory, which asserts that the long-lasting trails left in the sky by high-flying aircraft are chemical or biological agents deliberately sprayed for sinister purposes undisclosed to the general public.

Equally wacky, yet more mainstream if measured by the number of subscribers in the USA, is Young Earth Creationism, which depends on convincing people that pretty much all of accepted science is wrong. If you subscribe to the idea that the earth is less than 10,000 years old, then I’m afraid there is little point in your reading further here: you are not going to be convinced by any argument that depends on an open-minded assessment of science and evidence; you have already demonstrated that you will reject any argument, however compelling, if it conflicts with a literal interpretation of Genesis. In the USA today we have would-be presidential candidates who feel it would be politically damaging to express their true views (eg Wisconsin Governor Scott Walker, when asked in the UK in 2015 if he “believed in evolution”), and others (eg Ted Cruz, Mike Huckabee, Ben Carson) who nail their colours firmly to a creationist/anti-evolution world view. I don’t disguise my despair that such anti-science nonsense can gain traction, but it is interesting to look at the way cynical money-makers can take in gullible people with even the most absurd ideas, given an audience conditioned to be receptive, in this case by religious indoctrination – see my posts “How old is the earth?”1 and “Atheism and the Theory of Evolution”2.

If you are still reading at this point, good; you are still with me, and we have left the nutters behind. The topic which I do want to use as an analogy for public opinion on AGW is vaccination, especially of babies and young children. This is a subject where public opinion has been manipulated away from a realistic perspective towards alarmist nonsense, with seriously bad effect. It hit the headlines over 17 years ago, when Andrew Wakefield, a British former surgeon and medical researcher, published a research paper in support of the now-discredited claim that there was a link between the measles, mumps and rubella (MMR) vaccine and autism and bowel disease. Wakefield’s claim that the MMR vaccine might cause autism led to a decline in vaccination rates in the UK, Ireland, and the USA, and a rise in measles and mumps, resulting in serious illness and deaths. Although Wakefield was disgraced, and struck off the medical register by the GMC, his original paper, the traction it gained, and his continued warnings against MMR vaccination have contributed to a climate of distrust of all vaccines, and the re-emergence of other previously controlled diseases. There exists even today an “anti-vax” movement, exemplified by organizations such as the Anti-Vaccination League of America, and campaigners (eg the American Sherri Tenpenny) peddling their alarmist views with scant regard for epidemiological evidence. The main premise of their case seems to be that if there is any putative risk, however unsubstantiated, of any child being harmed by vaccination, we must cease all vaccination of children, even though this will lead to the statistical certainty of disease, permanent damage, or death for large numbers of people. Think smallpox, or polio, or even measles, its killing and maiming power underestimated because of familiarity. The rise of the anti-vax lobby is worthy of a whole post on its own, but it’s not my subject here.

I raise it merely to illustrate how public opinion can be influenced by media coverage, where superficially plausible pseudo-science gets picked up and propagated by a popular media establishment, short on scientific expertise and responsible reporting, but long on seizing on any story which might be “sensational” to sell copy. The effect in this case was to drive public opinion quickly away from a logical and evidence-based understanding, with dreadful consequences: the same principle applies to AGW.

Returning to my topic, the issue is the man-made rise in global CO2 levels: is it a real effect, and if so, what problems might it create? My premise is that the answer depends on the science; it is not a matter for a democratic vote. Whether the majority of the public accept AGW as real has no bearing on whether it is real, but it certainly does affect how we respond to it. As John Oliver put it in his satirical programme “Last Week Tonight”, on US cable channel HBO, in 2014: you might as well have a vote on which number is bigger, 15 or 5. Or do owls exist? Or are there hats? In fact, the scientific consensus is so clear that it is rather superfluous to argue the case for AGW here. This has already been done in tremendous detail by many scientific bodies, as any internet search will show. Instead, I will merely offer a little background on how we have arrived at a consensus that AGW is real.

By the late 19th century, scientists were starting to argue, based on climate data, that increased emissions of so-called “greenhouse gases” (GHGs), mostly CO2 and methane (CH4), associated with human activity could change the climate. By the 1960s, evidence of the warming effect of GHGs had become increasingly convincing, although there was considerable debate in the scientific community about the meaning of the data and what other mechanisms might be involved. During the 1970s, scientific opinion increasingly favoured the AGW view, but the science was far from settled. By the 1990s, however, there were improvements in the quantity and quality of data and access to it, and in the power of computer modelling. There was also observational validation of the Milankovitch theory of global ice-age cycles, which provided a meaningful context against which to interpret the data. A scientific consensus emerged: GHGs related to human activity are deeply involved in global climate change, and the consequences of such change pose great threats to mankind, notably drought in some regions and flooding in others, increased frequency of extreme weather events, reduced agricultural yields, reduced fresh water availability, irreversible damage to coral reefs (which support about a quarter of marine life), increased seawater temperatures reducing the quantity and viability of the marine food chain, melting of polar ice caps causing sea level rise and loss of low-lying (and often heavily populated) coastal land, and potentially a runaway effect on global temperature.

Simple direct data, rather than predictions from sophisticated models, can provide an important part of the picture:

[Chart: CO2 trend, 1960–present]

The above chart3 shows the recorded monthly mean atmospheric carbon dioxide concentration at Mauna Loa Observatory, Hawaii.

It is interesting to look at directly measured data, charted along with earlier data from analysis of ice core samples4:

[Chart: CO2 trend, 1860–present]


In order to make predictions about the impact of this rise in CO2 levels into the future, it is of course necessary to use models. There are various models, developed by different teams, using a range of data sets, techniques, and, of course, assumptions. Not surprisingly, there is no single settled consensus on the exact quantification of the conclusions. For example, the Intergovernmental Panel on Climate Change5 (IPCC), which draws on more than 1,300 scientists worldwide, forecasts a temperature rise of 1.4 to 5.6°C over the next century: a wide range of outcomes. One possible response is to say “the scientists can’t even agree, so until they do, we should just park the whole subject”.

It is intensely frustrating for me to witness this type of response. As a scientist and engineer, with an education in maths and physical sciences, I have been taught to challenge methodologies and conclusions, and to assess conflicting data and opinions, and form judgments; I am used to using and understanding statistical data; I am schooled in the pitfalls of spurious correlations.  As such, I am comfortable with nuance, and expect scientific modelling to produce different predictions. In fact, if they all lined up exactly, I would immediately suspect that data was being manipulated. A useful analogy is weather forecasting: we are used in the UK to looking at our weather forecasts, living as we do in a climate where we can experience all four seasons in a single day.  We have no trouble seeing any forecast as a “best view” of what will probably happen. We know that things might turn out a bit different in practice, but our response isn’t to reject all weather forecasts as rubbish, but to see them as a useful guide. Furthermore, we have observed over recent decades a significant improvement in their accuracy, as the Met Office uses ever more sophisticated and powerful models, and ever better and more accurate real data input. So it is with climate change.

Despite the overwhelming agreement on the science, it is clear that there are many climate change deniers out there, and they do manage to achieve a certain amount of traction: not so much in Continental Europe, China, Brazil, India, or even now Russia; a few mavericks in the UK are quite vocal, but there is no strong media bandwagon propagating their views. Nevertheless, the UK “quality” press still publishes articles aimed at challenging AGW, or asserting that AGW is a benefit rather than a source of harm. Only last week, I read in the Sunday Times an item headed “CO2 emissions boost crops”6. I quote here the opening paragraphs:

THE CO2 emissions blamed for climate change are good for humanity because they boost crop growth, according to a US official who co-wrote the original United Nations climate treaty.

Indur Goklany, a scientist and former US delegate to the UN’s intergovernmental panel on climate change (IPCC), is to publish controversial claims that increases in the gas have boosted crop production with little impact on temperatures.

His report, Carbon Dioxide: The Good News, comes as the UN prepares for next month’s Climate Change Conference in Paris, where a global agreement on cutting greenhouse gas emissions will be sought.

However, if we read on, we see:

The report is published by the Global Warming Policy Foundation, a think tank that has tried to cast doubt (my bold emphasis) over the peer-reviewed science that suggests greenhouse gas emissions could cause dangerous temperature increases.

 Goklany, whose report has not (my bold emphasis) appeared in a peer-reviewed journal………

And at the foot of the article there are quotes from climate experts Professor Myles Allen, head of the climate dynamics group at Oxford University, and Andy Challinor, professor of climate impacts at Leeds University, which cast doubt on Goklany’s claims, and a final paragraph:

The concentration of atmospheric CO2 has risen from a pre-industrial value of 280 parts per million (ppm) to 398ppm now. It is rising at 5% a year because humanity emits 35bn tons of CO2 a year from burning fossil fuels and forests.

So we have the headline and main thrust of an article purporting to undermine accepted science, with the “balance” included towards the end. This sort of thing provides quote-mining climate-change deniers with more ammunition to support their views; only by reading the full article do we get the context with which to assess the likely meaningfulness of the headline and opening paragraphs.

The country with the most high-profile deniers is the USA, where many senior politicians and the populist media seem to work hard to undermine the scientific consensus, and present the issue either as a hoax/conspiracy, or at the very least a debate where the scientific consensus is on the back foot, being undermined all the time by “exposés” of malpractice and deceit by climate scientists. This raises two questions: what is the motivation for doing this, and are the objections valid?

Let’s start with motivations, and examine each in turn to explore the possible validity of the position they produce. Motivations can probably best be categorized as follows, though in any actual situation there is likely to be some sort of mix of these:

  1. Rejection on religious principle.
  2. Genuine science-based conclusions, which happen to differ from the scientific consensus. This is an open and academic approach, and is prepared to offer its work to peer review, and stand or fall on its scientific merits.
  3. Trying to make a name for oneself by finding flaws in accepted science – a pseudo-academic approach, rather than a scientific one, since it starts from a conclusion (“AGW is false”) and seeks to justify it.
  4. Business self-interest: vested interest of big business which perceives it would suffer financial harm through eg carbon tax, or through a public perception that its activities are environmentally damaging.
  5. Wishful thinking: the view that AGW is a real and serious threat to the future wellbeing of mankind on the planet, and to our children and grandchildren, is a very uncomfortable one, so we are disposed to clutch at any information which suggests we don’t need to worry about it.
  6. Journalistic expediency – making a “good story” that people might want to read/ see on TV.
  7. Political expediency – a “non-expert” biased interpretation of information, to present a view which is felt to be electorally popular – “telling people what they want to hear”.
  8. Social media reinforcement: posting comments which get “liked” or “retweeted” by others who hold similar views to one’s own provides a feel-good response, so we tend to operate in a “digital information bubble” which exacerbates our bias towards confirming our pre-existing beliefs instead of challenging them.

If you can think of others that don’t fit into any of these categories, I will be interested to hear your views, but in the meantime I want to take each of these in turn, to examine whether they shed any light on the question “Global warming is/isn’t real”.

  1. Rejection on religious principle, or religious determinism: this can be characterised by a view that God is in control, and will decide what the outcomes are for us and the planet. Also, for the Young Earth Creationists, remarkably prevalent in the USA, there is no such thing as long term data, and science which conflicts with a young earth view (eg plate tectonics, palaeontology, archaeology, geology, cosmology, astronomy, evolutionary biology /DNA, taxonomy, radiometric dating, paleoclimatology: in fact, every branch of science which has anything to say about long term timelines) is dismissed. Since religious determinism is by definition a non-scientific view of how the world works, it is not amenable to rational analysis based on scientific evidence. Those who hold a religious deterministic view might well dispute this, quoting their “creation science” apologists’ work, but to everyone else, myself included, it is irrelevant to the question “Global warming is/ isn’t real”, since any valid conclusion must be based on hard evidence rather than religious superstition.
  2. Genuine science-based conclusions: we should look for peer-reviewed academic papers in respectable journals, ie mainstream science, which provide conclusions suggesting that AGW is not a real effect, or that it is greatly exaggerated. For any branch of science requiring modelling and statistical analysis, one would expect a spectrum of results and conclusions. This is the case with climate science. However, when we examine this spectrum, the results are overwhelmingly skewed towards the position that AGW is real. Various studies have been carried out; my reference here [7] relates to an analysis of nearly 12,000 papers in peer-reviewed scientific literature from 1991 to 2011, and concludes as follows: “The number of papers rejecting the consensus on AGW is a vanishingly small proportion of the published research”. Of course it is possible to find many articles, other than in peer-reviewed scientific literature, which contest AGW, eg popular press, magazines, newspapers, and on-line publications. But here I am focussing on science, not on populist opinion, and the conclusion is not in doubt: any challenge to AGW based on substantiated science will not stand up.
  3. Trying to make a name for oneself by finding flaws in accepted science: this is an almost endless activity in various fields: evolution and climate change are prime targets. Any internet search will throw up a large range of articles disputing AGW, some couched in academic language, and some seizing on alleged distortion of data, or even suggesting deliberate falsification of statistics by climate scientists. Some of this is well written and has a veneer of plausibility. Many people are persuaded by such material. However, if one confines oneself to a reliance on substantiated science, as described in 2. above, it can be safely rejected. It is sometimes argued that climate scientists gain kudos (or even money) by making their name in the field, and therefore manipulate and distort the science to make it fit their objectives. Any financial argument here is not borne out by the facts [8]. The kudos argument doesn’t stand scrutiny either: since climate change is accepted science, any personal fame and kudos is much more likely to come from standing out by taking an anti-AGW view.
  4. Business self-interest: examples are legion of business vested interests sponsoring research to head off concerns about their products/activities, or to produce results favourable to their products. Big pharma is renowned for conducting many trials on drugs they have developed, only publishing those which provide the results they seek, and quietly dropping the others; the move to a public register of all drug trials before the event is an attempt to deal with this. The tobacco industry spent huge amounts attempting to discredit research showing the health risks of smoking. A parallel today is the carbonated drinks industry, which is active in sponsoring research to head off concerns about obesity and type 2 diabetes caused by their products’ high sugar content. Unfortunately, there are examples of the fossil fuel industry sponsoring climate-change-denying research. A prominent climate-change-denying scientist, Dr Wei-Hock “Willie” Soon, who worked at the Harvard-Smithsonian Center for Astrophysics, accepted $1.25m in funding from companies such as Exxon Mobil and the industry group the American Petroleum Institute [9]. The threat of a crackdown on fossil fuels, via carbon taxation and sponsorship of alternative “green” energy, is a strong motivation for the fossil fuel industry, and energy-intensive industries such as steel and heavy chemicals, to attempt to discredit AGW.
  5. Wishful thinking: If we accept that AGW is a real and serious effect, we are forced to take a view that our lifestyle of conspicuous consumption, particularly in the western developed world, can’t continue indefinitely, and we will have to make some very difficult choices, involving giving up or curtailing some of the benefits of our lifestyle. This is a very hard message to accept, especially when we perceive that what we might do individually – or even as a nation – would be ineffective if others don’t join in and make similar changes. It is much easier to look for information which suggests the problem is not real, or at least much overstated, so that we can put the whole idea out of our minds as a non-issue, or at least tell ourselves “the science isn’t clear yet” and so assuage any guilt we might be feeling about damaging the planet’s ability to sustain our future generations.
  6. Journalistic expediency: making a “good story” that people might want to read or see on TV. Sensationalism is a very common feature of popular journalism, as that’s where the money is. This frequently features gossip/scandal about personalities, but often it is seizing on a topic which is of significant public interest: examples are scares about vaccinations (feeding the anti-vax lobby) and reports of “miracle cures” for cancer. A common feature of such stories is a lack of scientific rigour: a journalist’s success rests on what copy he can sell, not on whether his story is scientifically sound – why let the facts spoil a good headline? There is a lot of public interest in stories which “debunk” AGW, especially if they involve a dose of conspiracy theory, for the reasons in 5. above, and such articles are hard to miss. I already mentioned above a Sunday Times article [6] reporting on the work of anti-AGW apologist Indur Goklany, and not surprisingly this one source gets quoted elsewhere too. Yesterday, Monday 19th October 2015, the London Times opinion column by climate change sceptic Matt Ridley was headed “Now Here’s the Good News on Global Warming” – rehashing selective comments about some possible positive effects and completely ignoring the main issues. Not surprisingly, a search failed to turn up any peer-reviewed papers by Indur Goklany which questioned AGW [10]:
Peer-reviewed skeptic papers by Indur Goklany

This page lists any peer-reviewed papers by Indur Goklany that take a negative or explicitly doubtful position on human-caused global warming.

There are no peer-reviewed climate papers by Indur Goklany that meet this definition.

  7. Political expediency: telling the electorate what they want to hear is a vote winner: people tend to support and agree with the expressed views of those who accord with their own bias. This creates a self-reinforcing “group think”. Telling uncomfortable truths, for example about the need to make difficult changes in response to an issue, is not necessarily a vote loser, but is much harder, especially if the topic is one where the public are likely to be sceptical. In the USA, public opinion is rather different from that in Western Europe. The following 2014 chart shows results on climate change scepticism. The data comes from the United Kingdom-based firm Ipsos MORI, as part of the company’s Global Trends study [11], which polled 16,000 people in 20 countries. The respondents were asked 200 questions about eight topics, including the environment. Here we have the result concerning respondents’ views on whether observed climate change is largely man-made.

[Chart: climate change belief by country]

A quote from the study says: “Just a week after a non-profit revealed that the U.S. is lagging behind other developed countries in energy efficiency, a research firm’s data shows that the nation is the leader in denying climate change”.

  8. Social media reinforcement: social media, as with the internet in general, provide a vast amount of information, much more than any individual can handle. People who are active on social media, such as Facebook and Twitter, have a forum to express any views they like, and can find others expressing similar views to their own, as well as all manner of counter views. The natural tendency is to gravitate to people who hold similar views to one’s own, to seek the “feel-good” from having one’s comments “liked” or “retweeted”. One can “unfriend” or “block” anyone who expresses views we dislike, or who challenges us. Here is an extract from an article on the subject, “How the web distorts reality and impairs our judgement skills”, by Tomas Chamorro-Premuzic [12]:

Given that it is impossible to attend to even a fraction of the information that is available on the web, most individuals prioritise information that is congruent with their current values, simply ignoring any discrepant information. Recent studies show that although most people consume information that matches their opinions, being exposed to conflicting views tends to reduce prejudice and enhance creative thinking. Yet the desire to prove ourselves right and maintain our current beliefs trumps any attempt to be creative or more open-minded. And besides, most people see themselves as open-minded and creative, anyway.

There is no requirement on social media to be scientifically accurate or to have objective justification for one’s views; it is a free-for-all, where one can argue from one’s own predisposition or bias, and obtain positive reinforcement of those views from like-minded people.



I have argued that the science on AGW is settled: that is not to say that all scientific models produce identical predictions, but it is the case that the overwhelming view from peer-reviewed science is that AGW is a real and serious issue, which must be addressed, or the consequences will be severe.

I have shown why there are many voices gainsaying the science, and why many are seduced by those voices into taking a climate-change sceptic/denier stance. If you fall into that category, I hope that reading this post might stimulate you to question your views and look again at what the evidence actually shows.


  7. Cook et al., “Quantifying the consensus on anthropogenic global warming in the scientific literature”, Environmental Research Letters (2013).