28. Measuring planning performance R.I.P.

Call me a nerd, but the measurement of planning performance is an interest of mine.  So interested am I, in fact, that between 2001 and 2003, at the height of the New Labour ‘Best Value’ performance regime, I undertook a research project resulting in the 2004 book, Measuring Quality in Planning.  The project was fascinating (if you like that sort of thing), and amongst its findings it revealed a very simple truth: planning is hugely complex and important, and any approach to measurement that is less than ‘holistic’ in its coverage is liable to misrepresent and potentially distort the service.  In essence, measurement of selective performance dimensions at a national level, however well meaning, would be distortionary and counterproductive to delivering an effective service at the local level; namely one that is efficient, accountable, responsive, positive and, above all, capable of influencing development outcomes for the better.

Despite the research, the previous Government persisted in attempting to deliver a national performance framework for planning until its dying day.  Thus, over the thirteen years or so of New Labour, they tried simple performance indicators (sometimes one, sometimes many), national surveys of user satisfaction, checklists, composite indicators, service inspections, comprehensive performance reviews (in various guises), incentive systems, name and shame; the list goes on.

Notwithstanding the many innovations in between, when Labour came to power in 1997, the national indicator set included little more than crude quantitative measures for planning, most notably the percentage of applications processed in 8 weeks (with some other data on appeal rates, departures and expenditure).  On leaving office in 2010, the single planning-specific indicator in the National Indicator Set was NI157, measuring the speed of determining planning applications; albeit now distinguishing between major (13 weeks) and minor (8 weeks) applications, a refinement originally suggested in 1975 by the then Dobry Commission.

A final hurrah

Almost 35 years later, the Killian Pretty Review recommended that Government should replace NI157 with a new indicator measuring satisfaction with the planning service (something already attempted, not very successfully, as part of the 2000/01 and 2003/04 Best Value Performance Indicator sets).  The Government’s response accepted the need to review the planning indicator, and in late 2009 consultants were appointed to examine how planning might be better represented in the national indicator set, with an expert group constituted to review their work (on to which I was invited).

In the briefing note, CLG stated, “the development of the National Indicator Set underpins the local government performance framework: by providing the measures by which Local Area Agreement (LAA) improvement targets are set and supplying robust, comparable and timely data which the inspectorates can draw on for the new Comprehensive Area Assessments (CAA)”.  They confirmed that the review aimed “to test options for measuring performance in a more holistic way”.

Despite aspirations for a more ‘holistic’ measurement of the service, the focus of the review was quickly restricted to development management, confirming the old truism that those areas that are easy to measure inevitably become the focus for measurement.  By March 2010 the consultants had duly reviewed the evidence, conducted user group seminars and come up with a long list of indicators for testing.  In essence (with some refinements), these looked remarkably similar to the totality of indicators that at one time or another have been tried and tested by successive Governments to measure planning performance, from the days of John Major’s Citizen’s Charter, to various incarnations of Best Value, to Comprehensive Performance Assessment, and beyond.

A clean sweep

At this point the thirteen years of New Labour came to a close, the Coalition Government was elected and Eric Pickles came along as the new Secretary of State.  He wasted no time in announcing the end of the planning system (at least as we had known it), abolishing the latest incarnation of national performance measurement (CAAs), and, just to put the final nail in the coffin, also abolishing the body charged with much national measurement, the Audit Commission.  In a speech on the 14th October, billed as denouncing Town Hall waste, he actually focused much of his fire on Whitehall demands:

“Time and time again, I hear complaints from councils about how much of a burden the national indicator set is. Not because measurement and targets are always a bad thing.  But national targets tend to mean that councils are constantly working on things which matter to Whitehall, regardless of what local residents think. I’d much rather councils were tackling local issues. The money being spent on form fillers and bean counters could be far better spent … instead of the National Indicator Set, and instead of every single department’s endless demands that you measure this, that or the other, there’s just going to be one list of every bit of data that Government needs”.

Whatever one might think about the wider reforms to planning, to my mind this is a positive move.  As I argued from my position of self-appointed sceptic on the expert group constituted to inform the recent review of NI157, the fundamental question that needs to be asked is: do we actually need a national indicator for planning at all?  In my view, we don’t, at least not one of the kind that we have become used to.

The issue here is that planning is essentially a local service that will differ in its emphasis, aspirations and delivery across the country.  Trying to measure performance at a national level runs counter to this basic characteristic, not least because it is impossible (or at least prohibitive in time and cost) to capture all the data one needs to measure the planning service at the national level.  This remains the case even if – as was proposed – measurement focuses only on one aspect of the service such as Development Management; something which is in itself highly problematic, running the danger of reducing planning once again to a basic regulatory function.

An alternative

For me, we should start by asking not how we measure planning performance at a national level, but instead what data is useful to gather at this level as a means to monitor national policy.  If use of brownfield land, or housing units delivered, or even the speed of processing planning applications are useful indicators for this purpose (or any other), then we should not hesitate to capture the data.  However, we should:

  1. Never pretend that these are a substitute for measuring planning performance (they only measure parts of that performance);
  2. Never use the data in a manner that distorts practice, for example in name-and-shame league tables or for punitive purposes.

Instead, if it is important, then national government should incentivise in a manner that does not distort local practice.  This is what the Planning Performance Grant did, and it proved very effective at driving performance in the areas on which it focused as a result (although arguably to the detriment of others).  If this is done, everything else can be left to the local level, to reflect the local aspirations, priorities and resources of a vital local service.

So, as the new Government moves forward towards its single list of data, we should:

  • Drop the ‘headline’ national indicator for planning altogether on the basis that it is distortionary and that it is in fact impossible to represent such a complex local service at the national level. Any new system, unless hugely complex, is bound to fall into the same trap as previous national indicators in planning, and we will be back to square one again
  • Carry on gathering important national planning data and use this solely for the purposes of intelligent incentivisation, not for distortionary naming and shaming or punitive practices
  • Work to put in place a series of tools that will allow local planning authorities to better measure their performance holistically and at a local level, and in a manner that is ultimately focused, not on speed, but on securing a more positive planning process and better outcomes from that process.

Matthew Carmona

Professor of Planning & Urban Design

The Bartlett School of Planning, UCL


October 2010