Let’s Think Deeply about Citizens’ Assemblies & Citizens’ Juries

Time for some healthy self-deliberation!

Paul Vittles
Nov 15, 2020

Many recent publications about Citizens’ Assemblies and Citizens’ Juries have focused on how to design and implement them, as the ‘Deliberative Wave’ promotion of these deliberative processes has tried to move us on quickly, often discouraging critical analysis. One prominent DelibWave advocate told me, worryingly, “the science is settled on Citizens’ Assemblies”!

Let’s look at some fundamental questions of ‘why’ deploy these tools, and ask the kinds of critical questions a healthy democracy should be asking.

Some people have clearly thought deeply about Citizens’ Assemblies & Citizens’ Juries, some for many years, even decades in my own case. The past couple of years, though (the ‘Deliberative Wave’), have often been characterised by uncritical promotion of these methods, and uncritical acceptance of them by those desperate for solutions to poor governance and to pressing issues like the Climate Emergency. Time to stop, reflect, think, reset!

In critical analysis and critical thinking disciplines, we go back to first principles and we also approach the issue from multiple perspectives.

Some evaluations of Citizens’ Assemblies (CAs) and Citizens’ Juries (CJs) have focused on what was done and how it was done, not why it was done, what was not done, and what the alternatives might have been. They’ve also sometimes been rather superficial and subjective, eg asking participants if they enjoyed taking part and asking post-rationalisation questions such as ‘did taking part in the CA/CJ make you…?’ rather than objectively measuring impact. We therefore need more rigorous evaluation as, effectively, another component of critical analysis.

For any given issue, many people approach it from a particular frame-of-reference and a particular way of thinking about the issue. It’s important to understand their frame-of-reference and thinking approach or thinking style, in order to then understand what they’re saying about the issue, and doing in response to it. Comprehensive critical analysis needs to be undertaken (with logical reasoning and empathy) within their frame, as well as yours.

Then for completeness of critical analysis, we need to consider the multiple perspectives, multiple start points, different frames-of-reference, different ways of thinking about the issue, different thinking styles and — importantly — different motivations driving thinking and behaviour as well as different knowledge levels, skills, and personality.

Once conclusions are reached — whether after lots of thought or very little — it’s inevitable that people will develop reinforcing narratives, often seek evidence that fits their worldview, reject evidence or arguments that challenge their worldview, and often not be able to see alternative perspectives, or flaws in their thinking. We all have blind spots!

In the world of Citizens’ Assemblies and, specifically, Climate Assemblies (the issue of ‘Climate Emergency’ driving a lot of recent Citizens’ Assemblies and Citizens’ Juries), we have the following frames (this is an illustrative list, it’s not meant to be comprehensive — you can probably think of others):

a) academics debating the role of CAs/CJs within a democracy, and studying CAs/CJs as case studies in testing political theory in practice;

b) applied researchers designing and evaluating models of researching, engaging with, and involving citizens and communities;

c) statisticians focused on the validity and reliability of the data;

d) political commentators focused on the credibility of the process from political, not just statistical, perspectives and also the key media narratives;

e) climate change activists (and other communities of interest) seeing CAs/CJs as a potential means to the end of climate action (some only interested in the method as a means to the ends, others becoming passionate believers in the process, with a range of motivations for that);

f) people (activists, politicians, media commentators, the broader citizenry) thinking the current political system has ‘failed’ or seeing ‘democracy in crisis’ and seeing CAs/CJs as a way forward, even ‘THE way forward’ (this includes some who advocate for CAs/CJs as part of a package of reforms and some who argue solely for CAs/CJs as ‘THE answer’);

g) consultants (in for-profit businesses and social enterprises) advocating for CAs/CJs as an appropriate and effective model — to some extent because they believe in the model, and are motivated by enhancing democracy, and to some extent because, as providers, it’s become a key part of their business model (several consultants or organisations now earn a large part of their income from CAs/CJs, and they have mouths to feed and bills to pay!);

h) think tanks and lobbyists, ranging from those driven largely by a belief that CAs/CJs can enhance democracy through to those who see CAs/CJs and the whole ‘DelibWave’ movement as an opportunity to influence the political agenda, with a range of motivations;

i) governments, parliaments, councils or other bodies (government agencies, partnership bodies, regulated utilities, inquiries, professional associations) thinking CAs/CJs could be a helpful way forward on a complex issue — ranging from positive motivations via a belief in the philosophy of involving stakeholders through to a cynical attempt at manipulation, with other motivations or contexts in between, such as breaking a political stalemate, running out of other ideas, seeing public opinion via polls suggesting it’s time for a particular social change, and so on.

[To reiterate, the above list is not meant to be comprehensive, just an illustration of the different frames-of-reference and motivations, and a reminder that, in our critical analysis and evaluation, we need to take these factors into account; in particular, we must not get blinkered or carried away on ‘the wave’ without thinking deeply about the context and all the (complex) factors involved here].

I’ve been asked for my comments on a range of issues, and have given those comments or will address them in further pieces. There has been particular interest in:

A) ground-up democracy and empowerment models and approaches, instead of, or complementing, the more top-down, expert-led models and approaches we’ve often seen in the past 2 years;

B) wide and deep (and long) democracy models and approaches that are at the core of a healthy democracy, and have a track record of big and lasting impact (see link at the foot of this piece), rather than narrow models and methods, including CA/CJ-only approaches, which have often delivered deeper deliberation at the expense of wider participation, so are not necessarily progress;

C) more focus, ideas and proposals on wider democratic or political system reforms (eg Qualified Voting or Conditional Voting, and my proposal for a standing, dynamic Transparency & Accountability Commission) not just a focus on ‘mini-publics’ which is just ‘one piece of the puzzle’ and often addressing the symptoms rather than causes of our broken democracy.

However, this particular piece is not written to give my own views, ideas or proposals, but to list the questions we should be asking ourselves and others, in the interests of (together) developing a healthy/healthier democracy.

[NB: I’ve recently partnered with the Speakers’ Corner Trust and we’ve got some grant funding for research into the impacts of COVID-19 on local democracy (in England). That will be an opportunity to explore other issues, the contemporary context and current state of participative democracy, and to generate insightful case studies. All will be written up and published].

The DelibWave Narrative

Over the past 2 years, I’ve had many conversations and written exchanges with advocates for Citizens’ Assemblies and Citizens’ Juries (not all, as some choose not to engage, but with many who, like me, value transparency and accountability in our definition of democracy, share a passion for involving people in decisions that affect their lives — not letting petty politics get in the way of that shared goal — and want to learn and be challenged).

Below is a summary of what these advocates and practitioners often tell me is the essence of sortition-based deliberative processes:

‘…the principle of sortition is important as well as the practice of random selection so we need to randomly invite people to give everyone a fair chance of selection…

…we also need to have a representative sample, and the demographic profile of those who respond can over-represent certain groups so we set demographic quotas and use stratified random sampling of those who’ve replied to the random invites to match the population profile…

…also we sometimes boost some traditionally under-represented groups in the recruitment to try and be as inclusive as possible…

…and we don’t include anyone not selected through this process because they might not be representative of the population and we don’t want to have a biased sample…

…our primary engagement mechanism is the Citizens’ Assembly or Citizens’ Jury so we want those 20, 50, 100 or 150 people to be fully representative of the population…

…we want to guide them through a structured process of considering the issues, being informed, deliberating, deciding, reporting and recommending, so we have a ‘mini-public’ or ‘microcosm of the public’ and we know what ‘an informed public’ would say, ie if we had the opportunity to take the whole population through this process rather than just this one sample, we can be confident they would come to the same conclusion because this is a fully representative sample and we’re trying to eliminate biases at each stage of the process…

…and, finally, we can have some wider engagement in the form of public submissions for input (including those randomly-invited and saying ‘yes’ but not selected for the Citizens’ Assembly) and also a wider public conversation, including any citizen being able to watch live streams (of presenters, not the participants deliberating), read reports, and comment on the recommendations’.
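The recruitment-and-selection pipeline this narrative describes (random invites, a self-selecting respondent pool, then quota-based stratified random sampling) can be sketched in code. This is a minimal illustration with invented quota categories and an invented pool, not any provider’s actual method:

```python
import random

random.seed(0)  # for reproducibility of the sketch only

# Hypothetical respondent pool: the ~5% who replied "yes" to random postal
# invites. Each respondent carries the attributes used for stratification.
respondents = [
    {"id": i,
     "gender": random.choice(["female", "male"]),
     "age_band": random.choice(["18-34", "35-54", "55+"])}
    for i in range(500)
]

# Illustrative quotas for a 30-person assembly, set to match a population profile.
quotas = {
    ("female", "18-34"): 4, ("female", "35-54"): 5, ("female", "55+"): 6,
    ("male", "18-34"): 4, ("male", "35-54"): 5, ("male", "55+"): 6,
}

def stratified_random_sample(pool, quotas):
    """Randomly draw the quota-specified number of people from each stratum."""
    selected = []
    for (gender, age_band), n in quotas.items():
        stratum = [p for p in pool
                   if p["gender"] == gender and p["age_band"] == age_band]
        selected += random.sample(stratum, min(n, len(stratum)))
    return selected

assembly = stratified_random_sample(respondents, quotas)
```

Note what the sketch makes visible: the randomness operates only within each stratum of those who already opted in, so however carefully the quotas are set, the selection can never correct for whoever chose not to reply in the first place.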

So, let’s take this common set of definitions, key components, methodological points, justifications, and motivations — and subject it to critical analysis.

Critical Analysis within the Frame of the Citizens’ Assembly Advocates

For the record, in the right circumstances, I’d propose a Citizens’ Assembly or Citizens’ Jury as they can be useful tools within a mixed methods approach to tackle some issues in some contexts. I ran my first Citizens’ Jury in 1991, and I’ve been involved in designing, facilitating, observing and evaluating many different deliberative processes since then.

So, in this sense, I’m a ‘supporter’ of the approach, although I always strive hard not to be ‘method led’ and always tailor each design to be ‘horses for courses’ and ‘fit-for-purpose’ so, in the abstract, I don’t have a view on the Citizens’ Assembly or Citizens’ Jury model, just as I think it would make little sense to describe myself as a supporter of ‘polls’ or ‘focus groups’. I support listening and engagement in all its forms, and I seek natural designs that are empowering but, without a specific context, I’m methodologically agnostic.

Approaching the ‘DelibWave narrative’ above with an open mind, including believing there’s potentially a valuable role for Citizens’ Assemblies or Citizens’ Juries (and possibly not), the questions I’d be asking in any objective critical analysis would be as follows:

‘…the principle of sortition is important as well as the practice of random selection so we need to randomly invite people to give everyone a fair chance of selection…

[Warning: this is going to get highly technical in parts, which may or may not interest you, and you may or may not think it’s relevant. However, several advocates for Citizens’ Assemblies and Citizens’ Juries have recently made the case for them based on what they regard as ‘scientific principles’ and/or ‘statistical science’; several prominent advocates have argued they should be adopted as a ‘scientific approach’ with ‘scientific sampling’; and when questions have been asked about the validity of CAs/CJs, the defence has often been based around the technical validity and statistical representativeness of the sample. So let’s have (more) critical analysis within this frame-of-reference. Interestingly, many think the ‘technical’ aspect is overblown and that the CA/CJ process is still valid and (politically) meaningful even without ‘random, representative samples’ but, for now, here, let’s assume it’s important].

A1. Why is the principle of sortition important (as well as the practice of random selection)?

A2. Is sortition a fundamental principle of democracy or just a method of selection? If the former, why?

A3. In your justification of sortition being a democratic principle, not just a method of selection, you often refer to Ancient Athens, but in Ancient Athens sortition was often used to select political officials or decision-makers after an open invitation — do you consider sortition of candidates after an open invitation to be a democratic process or must it be random invites?

A4. Does random invitation actually give people a fair chance of selection? It’s complicated, isn’t it? It might be ‘fair’ in theory, but is it ‘fair’ in practice? What do we mean by ‘fair’?

A5. In what ways could random invitations not be ‘fair’? Who might be disadvantaged by such a process?

A6. You often say that it will be fair in the long-term because, after many Citizens’ Assemblies on many different topics, everyone eventually will have a chance to participate, but what about in the short-term, eg those who want to be involved now on a current topic of interest for a given Citizens’ Assembly?

A7. How robust and transferable are the principles you are applying here? For example, in other fields, such as recruiting people for jobs, would you consider it appropriate and ‘fair’ to have random invitations and/or random selection from the pool of applicants?

A8. What is the order of priority here? Which is the overriding principle? Is it wanting to be ‘fair’ and therefore having random selection because you believe it’s the best route to ‘fairness’? Or is it a belief that sortition has higher level democratic properties and also happens to be ‘fair’?

A9. To what extent is it possible in practice to ‘randomly invite’ people? What are the practical barriers, eg incomplete postal address files or incomplete lists of telephone numbers for those in-scope; and can these barriers be overcome sufficiently to still justify ‘fairness’?

…we also need to have a representative sample, and the demographic profile of those who respond can over-represent certain groups so we set demographic quotas and use stratified random sampling of those who’ve replied to the random invites to match the population profile…

B1. Do you believe that random invitation and/or random selection is an important democratic principle or just a means to the end of achieving a ‘representative sample’?

B2. If you believe that random invitation is an important democratic principle, why is achieving a ‘representative sample’ so important — would it not be enough to just give a random sample an opportunity to be involved?

B3. You seem to be arguing that random invitations ensure dispersion in the sample (so technical reliability merit), and widen opportunities to be selected (so democracy merit), but that this is not sufficient and therefore there need to be additional measures to ensure a final sample of participants which matches the population profile, in order to be ‘representative’ of the population, hence the demographic quotas and stratified random sampling process from the pool of interested candidates — is that correct? Why is ‘representativeness’ so important?

B4. How do you define ‘representative’? For most Citizens’ Assemblies, it seems to be gender, age and location, then other selected criteria (eg household composition or socio-economic status) with some psychographic or behavioural criteria (eg attitudes to Climate Change for a ClimateAssembly or main transport mode for a town centre planning exercise) and some — especially with more recent concerns around diversity and inclusion — having quotas for disability and/or ethnic origin. What makes a sample ‘representative’ in your view? What are the minimum criteria and standards?

B5. As the number of selection criteria inevitably has to be limited, who decides which criteria are included and which not? Why?

B6. How effective do you think this process is in delivering a ‘representative sample’?

B7. How democratic do you think this process is?

B8. What do you consider to be essential selection criteria for all Citizens’ Assemblies/Citizens’ Juries (and why?) or does it depend on each Assembly, potentially being flexible for each CA/CJ with no set minimum requirements?

B9. What are other desirable selection criteria for all Citizens’ Assemblies/Citizens’ Juries in your view? Why?

B10. Given that you can’t control for every demographic, socio-economic, attitudinal or behavioural variable, what do you see as the key risks for a final sample not being (fully) representative?

B11. Which key groups do you think tend to be (consistently) under-represented, and how do you allow/correct for this?

B12. If certain groups are under-represented, to what extent does this matter — technically, politically; from the perspective of credibility with decision-makers, credibility with the media, credibility with peers, credibility with the public, confidence in the process?

B13. Some argue that the deliberative process is still robust even if the selected sample is not (fully) representative of the population. Do you agree or disagree? Why?

B14. Some argue that samples of participants should be weighted, eg ClimateAssemblies having a younger age profile to reflect the greater interest younger people have in the issue of ‘Climate Emergency’ and/or the stake they have in terms of how it affects them in their lifetime. Do you agree or disagree? Why?

B15. To what extent do you know if the sample of participants you select is representative of the population? What steps do you take to check, other than comparing the profile on broad demographic quotas (which is just one aspect of representativeness of a sample of course)?

B16. We know the response rate to random invitations for Citizens’ Assemblies and Citizens’ Juries is around 5%. We also know that, in order to assure representativeness, we need to check that the 5% who respond are not substantively different in their properties to the 95% who did not respond; otherwise there could be a significant non-response bias. What checks, if any, do you carry out to make sure there is no (significant) non-response bias, or at least to be aware of what the biases are?
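One basic check B16 points at is comparing the profile of those who responded against known population benchmarks (eg census data). A hedged sketch, with all figures invented purely for illustration:

```python
# Hypothetical profile comparison: respondents to random invites vs census
# benchmarks. Real checks would use actual census data and look at more than
# broad demographics (attitudes, education, civic engagement, etc).

population = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}   # census age profile
respondents = {"18-34": 0.15, "35-54": 0.30, "55+": 0.55}  # who replied "yes"

def non_response_skew(pop, resp):
    """Percentage-point gap per group: positive = over-represented among respondents."""
    return {g: round((resp[g] - pop[g]) * 100, 1) for g in pop}

skew = non_response_skew(population, respondents)
# With these invented figures, 55+ is over-represented by 17 points and 18-34
# under-represented by 13 points: a sizeable non-response bias that quota-based
# selection can mask in the final sample profile but cannot actually remove.
```

The design point the sketch illustrates: quotas can make the selected 30–150 people *look* like the population on the chosen criteria, while any unmeasured differences between responders and non-responders pass straight through into the assembly.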

B17. In your past recruitment exercises, what have been the main non-response biases?

B18. Even though your 5% who respond to a postal sortition invitation may be substantively different to the 95% who don’t, meaning you have significant non-response bias in your sample, and meaning that, in a technical, statistical sense, nothing you do subsequently in the selection process can make this a representative sample, do you think it matters? Morally? Politically? For the credibility of the CA/CJ process?

B19. It’s often highlighted that the court jury process has credibility despite not having the same steps applied in selecting people for Citizens’ Juries or Citizens’ Assemblies, and not being representative samples of the population in demographic terms, eg court juries are often 8 women and 4 men, so why does it matter if participants for Citizens’ Juries or Citizens’ Assemblies are not a full demographic cross-section of the population?

B20. TV programmes with public debates involving audiences often seek credibility by having ‘visually balanced’ audiences, ie making sure there are younger people and older people, men and women, and a representative number of black and Asian faces and cultural dress. To what extent is it important for the credibility of the Citizens’ Assembly or Citizens’ Jury process (for the participants and for those observing via video or livestreamed footage or photos) that it looks ‘visually representative’?

B21. What steps are taken to ensure representativeness among those whose first language is not English or who have a communications difficulty (from profoundly deaf people to those who are shy in groups) or who have episodic mental illness, or who don’t have internet access or who live in isolated rural areas or who have caring responsibilities…obviously we could go on, so where do you draw the line and say ‘this is sufficiently representative in the way we’re defining it’?

B22. When you set the demographic quotas, and undertake the stratified random sampling from the pool of available candidates (accepting this is a self-selecting pool — the 5% who responded to the random invites), what % of those selected and then invited accept the firmed-up invitations?

B23. And what % of these actually participate (for the first session, then all of the sessions)?

B24. What further biases, if any, are introduced on top of the initial non-response bias (and any bias from inaccurate postal records affecting the initial invitations) in terms of selection bias, non-participation bias, and attrition?

B25. Given all of these sources or potential sources of bias, and the extent to which you have carried out empirical research and evaluation to monitor and measure impact, how confident are you that your participating sample is (fully) representative of the in-scope population?

B26. From a ‘technical’ perspective, in terms of having a sufficiently robust process and participating sample, eg for presenting to your peers, including peer review, how much does it matter for the participating sample to be, and to be seen to be after scrutiny, a ‘representative sample’?

B27. From a ‘political’ perspective, in terms of having a sufficiently robust process and participating sample, eg for presenting to politicians, the media and the public, including full public transparency and accountability, how much does it matter for the participating sample to be, and to be seen to be after scrutiny, a ‘representative sample’?

B28. What do you see as the benefits and drawbacks of postal recruitment (random invites posted out), telephone recruitment (calling random numbers), physical location sampling (random addresses or random start points for door-to-door or street interviews), and online recruitment (random selection from access panels or random website pop-ups) or combinations of these, in delivering a sufficiently robust sample?

B29. Which is the most cost-effective recruitment and selection method?

B30. Which recruitment and selection method do you think has most credibility with decision-makers and influencers?

B31. If divergence, how do you best balance the ‘ideal method’ with the most cost-effective method, and explain this to a wider audience?

B32. Some argue that the recruitment and selection process needs to be technically and politically rigorous to the point of ensuring random invites and stratified random sampling; others argue that the means to the end is less important as long as it’s a (fully) representative sample; others say the CA/CJ process is still robust as long as participants are a broad cross-section of the population (it doesn’t need to be precisely statistically representative); and others say the CA/CJ process is still robust even with purposive samples (eg deliberately age-weighted to reflect the issue) or even if sampled from an open invite (as several councils have done, eg Leicester, and as happened in Ancient Athens). What’s your view? Which options are acceptable or not acceptable? Which would you recommend or not recommend?

…also we sometimes boost some traditionally under-represented groups in the recruitment to try and be as inclusive as possible…

C1. Despite previously arguing that random selection is ‘fair’ and the best method of selection in a democracy, you seem to have qualified your position. Why is this?

C2. There’s an acceptance that pure random sampling would not deliver ‘fairness’ or efficacy because it can produce skewed samples (in theory, a pure random selection could yield 100% men responding and, in practice, we know those responding to random postal invites skew older with higher levels of educational attainment), yet those advocating for CAs/CJs still often refer to ‘civic lotteries’ or selecting people ‘by lot’. Should we stop using that language to avoid confusion?

C3. If you think that random selection is a higher level democratic principle, not just a method of selecting participants for CAs/CJs, why would you not just have names drawn out of a hat or random number selection to decide the participants, ie giving everyone who wants to be involved an equal chance of selection, rather than imposing quotas?

C4. The argument seems to be that it’s important (in principle and/or in practice) to have a final sample which matches the desired profile (on demographics or other criteria you consider to be key) and the importance of having a ‘representative sample’ supersedes the importance of it being a ‘random selection’ — is this correct?

C5. You use stratified random selection along with the target quotas to achieve the desired sample profile, and you sometimes boost some groups known to be under-represented (either by sending out a disproportionately higher number of invites to people in these groups, where possible, or by increasing the chances of selection from the pool of interested citizens who are in these groups, or both). This again suggests the importance of having a ‘representative’ sample supersedes the importance of it being ‘randomly selected’ and the pragmatic selection process is more important than any higher level democratic principle of random selection — is this correct?
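The ‘boosting’ mechanism C5 describes, where members of under-represented groups are given a higher chance of selection from the pool, can be sketched as weighted draws. Group labels and the boost factor here are invented for illustration:

```python
import random

random.seed(0)  # for reproducibility of the sketch only

# Hypothetical respondent pool: 1 in 10 belongs to an under-represented group
# the recruiters want to boost.
pool = [{"id": i, "group": "boosted" if i % 10 == 0 else "general"}
        for i in range(200)]

weights = {"boosted": 3.0, "general": 1.0}  # illustrative boost factor

def weighted_draw(pool, weights, n):
    """Draw n distinct people, with per-group weights raising selection chances."""
    chosen, remaining = [], list(pool)
    for _ in range(n):
        w = [weights[p["group"]] for p in remaining]
        pick = random.choices(remaining, weights=w, k=1)[0]
        chosen.append(pick)
        remaining.remove(pick)
    return chosen

sample = weighted_draw(pool, weights, 30)
```

As the sketch makes explicit, once weights differ by group this is no longer an equal-probability lottery: exactly the tension C5 raises, where achieving a ‘representative’ profile has quietly superseded random selection as the governing principle.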

C6. Which groups do you typically find are under-represented in your recruitment and selection process, and why?

C7. Which groups do you typically try to boost in your recruitment and selection process, and how?

C8. Which other groups do you think might be under-represented but for which you don’t have hard evidence (more research needed)? Why do you suspect these groups are under-represented?

C9. You want to be ‘as inclusive as possible’ so what steps are you taking to maximise inclusivity in your design and implementation processes?

C10. Given your other objectives, logistical constraints, and budget constraints, what compromises do you feel you need to make? What is the order of priorities when you have to compromise or trade off?

C11. Which groups within the population are you inevitably not going to be able to represent due to the inherent limitations of the CA/CJ model?

C12. Do you have ways of including people living with disabilities, people whose first language is not English, people with intellectual disabilities, etc in your CA/CJ process?

C13. Are there some of these groups who could be engaged in other ways but, in practice, won’t be able to (fully) participate in your CA/CJ?

C14. Where there are some groups who clearly cannot be involved in the CA/CJ design/process, how important is it to include all of these groups in some way, in some supplementary process, or is it ok for a CA/CJ to credibly proceed with it being publicly known these groups are not represented?

…and we don’t include anyone not selected through this process because they might not be representative of the population and we don’t want to have a biased sample…

D1. So you send random invites in the post (or recruit via random telephone calls or via interviewers with random start points), identify the pool of people who want to take part in the CA/CJ, select the CA/CJ sample (via a combination of quotas for criteria you define as most important), invite the 20–150 people you think will comprise a ‘representative sample’ and don’t include any of the others wanting to take part in your CA/CJ because you think this would bias the sample — is that correct?

D2. You say they might not be ‘representative’ of the population — to what extent do you know for sure that they’re not ‘representative’ and to what extent are you making a judgement that they might not be without having all of the available information or data on which to make such a judgement?

D3. If you believe that a ‘civic lottery’ approach is desirable as a democratic principle, or that random selection is a higher democratic principle not just a method of selecting participants for a CA/CJ, why does it matter that the demographic profile or sample properties do not match the relevant population profile?

D4. If you believe that having a ‘fully representative sample’ of the population is the most important consideration, eg more important than randomly selecting from the pool of interested citizens (as was done in Ancient Athens) — to the point of excluding citizens who were randomly invited and expressed interest in participating in (what they thought was) a democratic process — how confident are you that you have a ‘fully representative sample’? And how are you defining ‘fully representative’ in this context?

D5. Given that the response rate to the initial random invitations is typically 5% and therefore the pool of available candidates to select from is unlikely to be fully representative of the population (you may know where it’s not representative or have a good idea or not know at all), is it actually possible to manufacture a ‘fully representative sample’ from this self-selecting (potentially) biased pool? If not, why exclude interested candidates based solely on the basis of their demographics?

D6. If it’s not possible to manufacture a ‘fully representative sample’ from what is a biased pool of available candidates, what are the relative merits of trying to make the final sample closer in profile to the total population profile or simply defining the pool as ‘the population of interested citizens’ and randomly selecting from that pool with every citizen having an equal chance of selection rather than you making judgements about whether or not they should be involved?

D7. Whatever your final decisions on who should or should not be included in the CA/CJ, what responsibility do you feel you have to the other citizens you’ve sent random invitations to who’ve said they want to be involved but you’ve chosen not to select for your CA/CJ?

D8. Is there an important (higher?) democratic principle here around random selection or is it so pragmatic as a methodology for recruitment and selection of participants for your CA/CJ that the only people who matter here are those selected for the CA/CJ and not those not selected?

D9. What do you usually say to those not selected? How do you explain/justify your decision?

D10. You said previously that you want to be ‘as inclusive as possible’, so how do you reconcile or balance the desire to be as inclusive as possible with excluding people who you’ve invited in the spirit of participative democracy and who want to be involved?

D11. Do you think there’s a trade-off between deliberative democracy as you’ve defined it (depth of deliberation among a sample you select) and participative democracy as it’s generally understood as the democratic right to be involved in decisions? If so, which is more important and why?

D12. Some advocates for sortition-based CAs/CJs argue for ‘randomly selected assemblies’ to replace elections, replace parliaments or second chambers in parliaments, or councils. What’s your view on this?

D13. If you believe that sortition-based assemblies should replace elections or forms of representative democracy (rather than complementing representative democracy), are you saying that random selection as a democratic principle is more important than citizens’ right to vote?

D14. You say you don’t want to include citizens who were randomly invited and said ‘yes please’ but were not selected, because they might not be ‘representative’ and might form a biased sample. Yet those you have chosen come from a self-selecting pool formed from the 5% response to your initial invites and are therefore also unlikely to be representative of the population at large, due to non-response biases. So how can you justify including one unrepresentative sample but not another?

D15. To what extent is this a pragmatic decision, eg. budget constraints, or a principled decision, ie you could involve them but choose not to?

D16. What other opportunities, if any, do you provide for those not selected for your CA/CJ to be involved in the process in some way?

…our primary engagement mechanism is the Citizens’ Assembly or Citizens’ Jury so we want those 20, 50, 100 or 150 people to be fully representative of the population…

E1. As outlined in the previous section, there are many ways in which those in the available pool of candidates (after the 5% response rate), and then selected against certain desired criteria, are not actually ‘fully representative of the population’. Some query whether this is even necessary, and CAs/CJs are a qualitative rather than quantitative methodology. So why is it so important to you to try to achieve a final sample of participants that is ‘fully representative of the population’ as you define it?

E2. Some CA/CJ advocates believe that the deliberative process is valid and robust without needing to have a ‘fully representative sample of the population’, as long as there’s a mix of participants or relevant purposive sample, sufficient numbers of participants, and a sufficiently robust design, including measuring the extent to which views change over the course of the CA/CJ. Why do some advocates or providers insist on trying to get a precise statistical match with the population at large?

E3. Those who insist on applying statistical principles to CAs/CJs (not all advocates or providers do, but many absolutely insist on it, and use this as part of their claim that CAs/CJs are a ‘science’ with ‘scientific sampling’) often focus largely, even solely, on the statistical validity of the selected sample. They argue that, thanks to random invites, demographic quotas, and stratified random sampling from the available pool, the sample is ‘fully representative of the population’. But what are the flaws in this argument, given the various biases, non-random factors, and design factors in the process which make it very difficult to achieve a ‘fully representative sample’, or even to know whether it is (fully) representative?

E4. Does it actually matter if it’s a ‘(fully) representative sample’? And, if so, why?

E5. From statistical and non-statistical perspectives, why do you choose 20, 50, 100, 150 or whatever number you settle on? What factors do you take into account, in what order of priority, and which are essential or desirable?

E6. Do you regard CAs/CJs as a qualitative methodology or a quantitative methodology or both?

E7. If you regard Citizens’ Assemblies as a quantitative methodology, requiring statistical validity and reliability, what do you regard as the minimum sample size and spread for robust data and credibility?

E8. Statistical principles are often used to support the validity of a CA/CJ sample but not often to report on the reliability of the results from a CA/CJ. Why is that?

E9. If a selected sample of 100 participants in a CA/CJ were demonstrated to have the properties of a random sample (which no sample does in practice) or a ‘fully representative sample’ (possible to some extent, though unlikely), then we could say that an initial survey among this sample of 100 is reliable in representing the views of the population to within +/- 10 percentage points at the 95% confidence level. How important, if at all, is it that their views actually represent those of the population?

E10. And is the +/- 10 margin of error (potentially higher given the sample biases) a problem?

E11. For a CJ with 20 participants, again subject to the sample being ‘random’ or ‘fully representative’ (and all the qualifications set out above), these 20 citizens would be able to express views in an initial survey (pre-conditioning, pre-attrition) that would be reliable in representing the views of the population as a whole to within +/- 22 percentage points at the 95% confidence level. Is the +/- 22 margin of error a problem for a CJ?
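For readers who want to check these margin-of-error figures, they come from the standard worst-case formula for a simple random sample: z × √(p(1−p)/n), with p = 0.5 (the proportion that maximises variance) and z ≈ 1.96 for the 95% confidence level. A minimal sketch, assuming (as the questions above note) that the sample genuinely behaves like a random one:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error, in percentage points, for a simple
    random sample of size n at the 95% confidence level (z = 1.96).
    Uses p = 0.5, the proportion that maximises sampling variance."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(100)))  # ~10 points, as for a 100-person CA
print(round(margin_of_error(20)))   # ~22 points, as for a 20-person CJ
```

Note that this is the best case: any of the non-response and selection biases discussed above would widen the true uncertainty beyond these figures.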

E12. There seems to be a selective use of research and statistical methods for CAs and CJs: statistical principles, research methods and the concept of ‘representativeness’ often play a prominent role in the recruitment and selection of the sample, but not so much after that. Do you think the whole CA/CJ process should be a robust, rigorous research methodology or do only some parts of the process need to be subject to such methodological rigour? And why?

E13. In particular, with research methodologies, it’s good practice to study patterns in data with either a sufficiently large sample (typically 1000+ for political opinion polls) or several data sources to compare (eg 8 lifestage focus groups — 2 groups of young people with no children, 2 groups of young families, 2 groups of older families/empty nesters, 2 groups of older people) and report on convergent and divergent patterns. With CAs and CJs, it’s almost exclusively the case that just one CJ (of just 20–25 participants) or just one CA (typically 50–150 participants) is proposed and implemented. How concerned are you, if at all, about the risk of just one CA/CJ being unrepresentative and not having ‘control samples’ or parallel processes to check if multiple CAs/CJs are reaching the same or different conclusions?

E14. If you had the opportunity (and budget) to have 3–4 CAs or CJs, would you replicate the process precisely to be able to, other things equal, see if independent parallel CAs/CJs reach the same or different verdicts, or would you have an alternative design?

E15. If you would recommend an alternative, what design do you think would be most valuable in generating insights and impact?

E16. If you choose to only have one Citizens’ Assembly or Citizens’ Jury, what do you see as the main risks, and how do you manage these risks?

…we want to guide them through a structured process of considering the issues, being informed, deliberating, deciding, reporting and recommending, so we have a ‘mini-public’ or ‘microcosm of the public’ and we know what ‘an informed public’ would say, ie if we had the opportunity to take the whole population through this process rather than just this one sample, we can be confident they would come to the same conclusion because this is a fully representative sample and we’re trying to eliminate biases at each stage of the process…

F1. When you say “guide them through a structured process”, what do you mean precisely by ‘guide’ (or some say ‘steer’ or ‘moderate’ or ‘facilitate’)?

F2. How much ‘guidance’ and structure to the process do you think is needed — what’s essential, desirable, adds value?

F3. What are the potential risks or drawbacks from too much or inappropriate structure or facilitation? Where can it be unhelpful, harmful, disempowering?

F4. What do you consider to be the minimum requirements for a CA/CJ process to qualify it as a CA/CJ as distinct from any other process or any other deliberative process?

F5. What do you consider to be the minimum requirements or standards for a CA/CJ to be able to deliver robust results that you feel you can defend publicly and to your peers?

F6. What kinds of information do you feel you need to provide to participants rather than simply asking them to spend time doing their own research?

F7. Who decides what information is provided to participants, and how it’s provided? What criteria do you use when choosing what information you provide and what you don’t provide?

F8. What are the risks and potential biases in what information is selected to be presented (or not presented) and the way it’s made available?

F9. Who decides which ‘experts’, ‘specialists’ or ‘professional presenters’ provide information to participants, and how that’s provided? What criteria do you use when choosing which presenters you have or don’t have?

F10. What are the risks and potential biases in the selection of presenters and the way they deliver their presentations?

F11. Who decides which advocates and interest groups provide information to participants, and how that’s provided? What criteria do you use when choosing which influencers you have or don’t have?

F12. What are the risks and potential biases in the selection of influencers and the way they deliver their presentations?

F13. To what extent can participants shape the agenda and process, and how important is this — for ‘technical process’ and as a democratic principle?

F14. To what extent can participants choose what information they receive and who the presenters are, and how important is this — for ‘technical process’ and as a democratic principle?

F15. To what extent can participants shape the decision-making process within the CA/CJ, and how important is this — for ‘technical process’ and as a democratic principle?

F16. To what extent can participants shape the way their decisions and recommendations are framed in the written report, and how important is this — for ‘technical process’ and as a democratic principle?

F17. To what extent does your CA/CJ process enable participants to ensure follow-through and impact, providing a scrutiny function through to implementation, not just delivering a report, and how important is this — as ‘technical process’ and as a democratic principle?

F18. Who else is involved in the CA/CJ process as observer, overseer, steering group, advisory group, stakeholder forums, etc, what influence do they have on design, input, deliberations and decisions, what potential biases are there from these sources, and how is this part of the process managed?

F19. To what extent can participants shape the core question or questions being considered by the CA/CJ, at the outset or at any stage through the process, eg challenging the date for achieving Net Zero Emissions? And how important is this for ‘technical process’ and as a democratic principle?

F20. What is the role of the facilitators — overall facilitators and breakout session or table facilitators?

F21. Who decides or influences what the role of the facilitators will be, and who is chosen as facilitators?

F22. What are the risks and potential biases in the selection of facilitators and the way they facilitate?

F23. Do you have unfacilitated sessions or allocated time for the participants to deliberate without anyone else in the room? If not, why not?

F24. At a recent ‘DelibWave event’, one of the presenters (from the Involve Foundation — one of the main providers of Citizens’ Assemblies) said “Every session, including all the breakout sessions, have to be facilitated of course”. Why “of course”? Why is it essential to have facilitation, especially in a process which is meant to be empowering citizens?

F25. When participants make the final decisions on what recommendations they want to make in the final report, is there facilitation? If a facilitated session, why, and what are the risks and potential biases? What are the risks if not facilitated?

F26. If you don’t think this final decision-making session should be left to participants solely, with no-one else in the room, why would a CA or CJ be different to a court jury?

F27. Based on your experience of CAs/CJs in practice, do you think they need more facilitation or less, and why?

…we want to guide them through a structured process of considering the issues, being informed, deliberating, deciding, reporting and recommending, so we have a ‘mini-public’ or ‘microcosm of the public’ and we know what ‘an informed public’ would say, ie if we had the opportunity to take the whole population through this process rather than just this one sample, we can be confident they would come to the same conclusion because this is a fully representative sample and we’re trying to eliminate biases at each stage of the process…

G1. When making the case to politicians, media, peers, decision-makers, etc, it’s often said that one of the great benefits of a Citizens’ Assembly or Citizens’ Jury is this argument above, ie the mini-public or microcosm telling us what the population as a whole would think if they could go through the CA/CJ process and have access to the same information. Do you think this is an entirely theoretical concept or do you think there are practical ways of ‘scaling’ it up for the population as a whole?

G2. What do you think are the flaws in the statement above? Given the limitations in the recruitment and selection process, is a CA or CJ truly a ‘microcosm of the public’, is it a ‘representative sample’, does it eliminate biases? Does it create biases in its design?

G3. How confident are you that if you replicated your process among several different samples recruited and selected the same way, the results and final recommendations would be the same? If so, why? If not, does it matter?

G4. Have you carried out empirical research, or studied available evidence, to check on the extent to which the CA/CJ process is replicable and scalable, with consistency of outcomes, when multiple CAs/CJs are carried out? If so, what were the results and what did you learn?

G5. If not, and you cannot support the claim that ‘this mini-public tells us what the population would think if they went through the same process’, does that matter? In other words, does it matter if different CAs/CJs generated different results?

G6. If different CAs/CJs do generate different results, what do you think the implications are? Would you still advocate for just one CA/CJ with democratic legitimacy, or recommend more than one (eg a ‘control sample’, or three so that at least a 2:1 verdict is possible), or recommend a broader engagement process with a CA or CJ as just one component?

G7. As standard, do you always have pre-post polls, deliberative polls, or tracking surveys throughout your CA/CJ process to check on how views are changing from beginning to end or as information is provided, or do you not think that is important or adding value, eg you just think the final collective view is important?

G8. If you do have pre-post polls, deliberative polls, or tracking surveys through the process, what has your experience been of the extent of change of views at the macro and micro level, which views have changed, and why?

G9. What are the key lessons learned, and key implications for CA/CJ design?

G10. Have you carried out micro impact evaluation to monitor and measure which specific information provided, which presenters or which part of the process has had most impact on the participants in changing their views or firming up their views, individually or as a group? If so, what lessons have you learned from this?

G11. If you haven’t carried out polls among participants at the very beginning of the process, covering all of the key issues and variables, how can you be confident in saying you have a (fully) representative sample?

G12. If you haven’t tracked views through the process or carried out micro impact evaluation, how can you know if you’ve eliminated or mitigated biases from the design or aspects of the design, eg particular information or presenters?

G13. What % of those who participate at the beginning of the process also complete it? Do you measure the impact of attrition? Do you allow substitutes for those who can’t complete? What implications does this have for the process, the outcomes, and the ability to argue that it’s still a ‘microcosm’ and ‘what the population would decide’?

G14. Do you measure all forms of ‘conditioning’ effects — information, presenters, facilitators, process, experience, environment, etc — and how this impacts on participants, and what implications this has for the ability to say ‘this is what the population as a whole would think’?

G15. Given that, because of the conditioning effects, the participating sample at the end of the CA/CJ process is by definition no longer a representative sample of the public, even if it was at the beginning, why is so much emphasis given, in the way CAs/CJs are promoted and designed, to it being a (fully) representative sample at the beginning of the process?

G16. If the sample wasn’t a full cross-section of the population as a whole, eg only people aged 40 and under, would you still argue that the outcome from your CA/CJ among people aged under 40 is what all people in the country aged under 40 would think or decide if they went through the same process?

G17. What else do you think can be done to improve the likelihood that the conclusions drawn by your CA/CJ sample and the conclusions drawn by the population as a whole would be the same, eg in the design, the way non-participants are informed or involved about the process, and the way non-participants are informed or involved in the outcomes?

G18. What do you think are the main risks in terms of the conclusions drawn by the participants in your CA/CJ not reflecting what the population as a whole would think or decide, and how do the risks need to be managed?

G19. How do you think the lessons learned from the process, eg micro impact evaluation, can be more effectively used in broader communication to ensure that there’s a broader community convergence? Is this desirable, democratic, ethical, likely to be effective?

…and, finally, we can have some wider engagement in the form of public submissions for input (including those randomly-invited and saying ‘yes’ but not selected for the Citizens’ Assembly) and also a wider public conversation, including any citizen being able to watch live streams (of presenters, not the participants deliberating), read reports, and comment on the recommendations…

H1. How comfortable are you — in terms of ‘technical process’ and trying to enhance democracy — just having a CA or CJ with no other involvement, or only having broader public input into the process, eg public submissions?

H2. To what extent are decisions to only have a CA or CJ, or only have wider participation in the form of input into the CA/CJ, driven by pragmatism (eg. budget, resources available) and to what extent driven by principle, ie believing you don’t need wider participation to achieve a technically robust and/or democratic outcome?

H3. To what extent do you think it’s important to aim to have future deliberative & participative democracy exercises, projects or programmes having wider involvement as well as deeper involvement via CAs/CJs?

H4. To what extent is your focus really just on the participants in the CA/CJ or also thinking about those watching via livestream or on video, non-participants making comments through the process and when the report is published, ensuring a broader public conversation before, during and after the CA/CJ?

H5. Do you think you’ve got the balance right?

H6. What has been your experience to date of the impact of CAs/CJs in terms of tangible evidence of practical change in policy and action to achieve the intended outcomes? How could impact be improved?

H7. Where there’s no evidence of tangible impact (yet) on policy or practical actions, do you still think CAs/CJs are ‘worth doing’ as a means of involving citizens and giving them the opportunity to participate, or can the investment only really be justified by clear evidence of (significant) impact?

H8. In what ways do you think a broader, more inclusive population could be more involved, including those who were invited to take part, wanted to take part, and weren’t selected, and the decision-makers you want to influence (given that the evidence suggests those directly involved in any process are more likely to be committed to the outcomes from that process)?

H9. Based on your experience with CAs/CJs so far, to what extent have they enhanced democracy would you say?

H10. And in what ways, if any, have they potentially harmed democracy, eg depth of deliberation at the expense of width of participation, any significant biases in the design or process, the way they’ve been facilitated, the way they’ve been promoted, the way the recommendations have been shared, the way critics have been responded to, the limitations on diversity & inclusion, the calls for the CA/CJ process to be ‘institutionalised’, the calls for sortition-based assemblies to replace elections and elected chambers, etc?

[Please feel free to contact me and suggest other questions we should be asking!]

Critical Analysis with the Frame of the Participative Democracy Advocate

As highlighted above, robust critical analysis examines issues from multiple perspectives with different frames-of-reference.

We know that some have a start point of ‘we need (more) Citizens’ Assemblies’ or ‘let’s have a Citizens’ Jury’ because they’ve already ‘bought in’ to the concept, convinced of the benefits, sub-consciously (or wilfully) blind to some of the drawbacks, partly lost in the passion…and this can lead to defensiveness or a narrow field of vision.

Many of those who advocate for (more) Citizens’ Assemblies or Citizens’ Juries, having already decided that it’s the right approach, don’t think about the alternatives available, and get locked in to a particular methodology, finding ways to justify that methodology. For example, they may involve only their selected participants, without opportunities for other citizens to be involved, because they’re convinced that their CA or CJ is ‘fair’, ‘right’, ‘representative’, and likely to get the desired outcome.

Public debates often get bogged down in technical detail such as the sample recruitment and selection process or the ‘representativeness’ of the final sample because advocates have argued a technical case for CAs or CJs so the questions and criticisms become technical. Or it gets bogged down in the detail of Ancient Athenian history, with selective drawing from that history to try and support the argument and counter-argument.

Others argue more from the basis of broader democratic theory, broader democratic principles, or broader reforms needed to our democracy and political systems but if they’re convinced of the value of CAs/CJs, they often keep the conversation anchored in the ‘technical’ merits and issues because their immediate goal is to promote the use of CAs/CJs.

The debate changes somewhat when the focus shifts to the importance of impact and evidence of impact. Some argue that ‘impact’ is everything — what matters is what works in facilitating change, such as effecting climate action. Others, with a different frame-of-reference, argue that it’s not just about CAs or CJs having impact on policy and action plans, that there are other dimensions around, for example, the impact on participants. However, the number of participants is very small, and most agree that impact is crucial.

Those writing and speaking publicly about CAs/CJs are split on evidence of impact. Some claim that there’s significant evidence of impact, others that there’s little or no evidence of impact. This inevitably leads to further qualified discussions about what we mean by ‘impact’ and by ‘evidence of impact’ with a common argument being ‘there’s not a lot of evidence of impact yet but we need to wait for governments and parliaments to consider the recommendations, eg of Climate Assemblies’.

Some argue that there’s already evidence of impact, often quoting the example of ‘Abortion in Ireland’, sometimes even making statements like “Ireland wouldn’t have changed its abortion laws without a Citizens’ Assembly” and even one article recently saying “the result of the Irish Citizens’ Assembly was a shock to everyone”.

Others, including those closely involved in the Irish CA, have argued that, firstly, it wasn’t ‘a shock to everyone’ as the change in public support had been clear in the polls for some time and was one of the reasons there were calls for a CA and/or referendum on abortion and, secondly, the change in the law on abortion was not ‘because of the Irish CA’ and could well have happened without the CA.

We just know there was evidence of public support for a change, the CA supported the change, the referendum supported the change, and the law was changed. Measuring the precise contribution of each of those components, plus political discourse and input from other stakeholders, is not an exact science of course!

So, if we are to undertake a comprehensive critical analysis of Citizens’ Assemblies and Citizens’ Juries, see them from multiple perspectives and a range of frames-of-reference, we at the very least need to take a step back, see the bigger picture, and examine crucial perspectives such as:

i) why are we doing this?

ii) what are we trying to change?

iii) is what we’re doing an appropriate approach?

iv) is what we’re doing an effective approach?

v) what alternative approaches are there?

vi) which is the best approach (for impact and meeting any other key goals)?

I’ll leave others to explore other perspectives, and just focus on two obvious frames-of-reference here.

Firstly, many people, including myself, believe that deliberative & participative democracy is fundamentally about people (citizens, communities, customers, service users, employees, etc) having the right to be — and the opportunity to be — involved in decisions that affect their lives. For more on my writings on this, click through the link at the foot of this piece.

Secondly, I believe in maximising impact: both on the issues being covered by, or suggested for, Citizens’ Assemblies or Citizens’ Juries, and in enhancing our democracy and political/decision-making system.

In other words, not only believing in practice that people have a right to be involved in decisions that affect their lives, and creating opportunities to be involved, but making sure that in practice there is a tangible outcome to make all the effort, investment, time and cost worthwhile.

In my other writings (and public talks, university lectures, workshops, etc), I’ve outlined my own views on these matters, specific proposals for more appropriate and effective approaches, and — most importantly — multiple examples of impact (big, lasting, transformational change) from the ‘wide and deep and long’ approach to deliberative & participative democracy or, as I often refer to it, ‘the engagement and empowerment project’.

The link at the foot of this piece takes you through to my previous published pieces if you want to read the details.

The key point I often find myself making in public debates and discussions — dare I say, deliberations! — is that for democracy in principle and democracy in practice and big, lasting change, we need to have:

a) width of participation (all those wanting to be involved having opportunities to be involved) as well as depth of deliberation (through a range of processes, including Citizens’ Assemblies and Citizens’ Juries but not just CAs/CJs)…

b) an extended engagement process providing the time and opportunities needed for both width and depth (hence ‘wide and deep and long’)…

c) designs for the involvement processes that are based on the underpinning principle of Empowerment (the key principle missing in the OECD Deliberative Wave Report — and rejected by the OECD when I raised it in their public consultation around the draft report)…

d) ideally with ‘natural designs’, ie research, engagement and involvement designs that are created and facilitated around the way people communicate and interact in the real world (which includes the ‘real digital world’), not just clinical research environments or artificially engineered engagement or involvement environments, which are usually more for the benefit of the designers and facilitators than the citizen participants.

Applying these principles in practice usually means starting with seeking the widest possible participation, as well as the deepest possible deliberation; maximising or optimising diversity & inclusion; focusing on the issues and audiences at hand; being totally open-minded and flexible about the approach; being creative in designing a tailored, fit-for-purpose approach with a ‘where there’s a will, there’s a way’ mentality in trying to maximise width and depth of involvement and inclusion.

This frame-of-reference, thinking style, start point, and motivation means I may or may not recommend a Citizens’ Assembly or a Citizens’ Jury. It means I would never recommend a CA or CJ ONLY as this cannot maximise participation and inclusion, and it would limit credibility and impact.

It also means I don’t get trapped in the way blinkered CA evangelists do. I often argue for involving all those citizens invited to take part who want to take part because this is what democracy and inclusion requires.

I get CA/CJ advocates saying ‘we can’t include some of them because we have to have a representative unbiased sample and we want our CA/CJ approach to be inclusive, not biased against any group’.

So they argue that they should exclude a large number of those who they’ve randomly invited and who wanted to be involved, and they exclude them on the grounds they want to be inclusive!

As soon as you stand back from this and take the blinkers off, you can see a different picture, and find ways to involve everyone who wants to be involved — not just in your CA/CJ, because just having one CA/CJ will always be an anti-democracy straitjacket, but somewhere within your deliberative and participative democracy design.

If you’ve read this entire piece, reflected on all of the questions, just read that last paragraph, and still think you should be excluding citizens who want to be included, and still think that just having one CA or CJ with no other opportunities for involvement (or just public submissions as input) is acceptable, then I can’t help you. I’ll probably just see you getting angry on Twitter!

Paul Vittles is a researcher, consultant, coach, counsellor, and facilitator. He has been an advocate for, practitioner in, and pioneer in, the field of deliberative & participative democracy for 35 years.

As the first Research & Engagement Manager for City of York Council, Paul helped devise and deliver a structure and culture of participative democracy and community empowerment which has shaped much of what is practised today around the world — from ‘participatory budgeting’ to ‘empowered choice’ policies and services, to ‘open conversation’ models.

Paul has worked with government at all levels in both the UK and Australia, including being a consultant for more than 80 councils; sat on national task forces, committees and steering groups; carried out strategic reviews and evaluations; written and spoken extensively in this field; and is currently undertaking a grant-funded project on ‘COVID19 impacts on participative democracy’, as well as applying ‘ground-up engagement & empowerment’ approaches to helping people and organisations ‘survive & thrive’ (from arts & music charities to suicide prevention). He is also a key catalyst in the ‘York — 100% Digital City’ initiative.

You can click on my ‘magnum opus’ below for more on the history and evolution of deliberative & participative democracy, including: ‘the York Revolution, 1989–1991’, a formative period for community engagement & empowerment; innovative ways of involving citizens (including so-called ‘hard-to-reach’ or ‘hard-to-engage’ groups); how extended online engagement has developed since 2008; the recent ‘Deliberative Wave’ boom in Citizens’ Assemblies and Citizens’ Juries; the pre-COVID19 #DelibWave biases, eg against digital democracy (those saying “Citizens’ Assemblies can’t be done online” are now ‘doing them online’!); and the downsides (eg greater depth of deliberation being at the expense of width of participation, and method-led approaches often starting with “we want/you need a Citizens’ Assembly” rather than “this is the issue we’re trying to address” or “this is what we’re seeking to change” and then “what’s the best way forward?”).
