Guide to Effective Submission Management

This Super Simple Guide to Submissions Analysis will equip you with tips to make the whole submissions process, from planning and execution through to analysis and reporting, easier and more effective.

When organisations invite feedback or formal submissions on complex documents — such as draft strategies, planning proposals, policy frameworks, legislative amendments, or major project plans — they face a fundamental challenge: how do they gather responses that are structured enough to analyse at scale, while still capturing the depth of expert or community input that the subject matter demands? And how can they make the submissions management process efficient, equitable and defensible?

Let’s start with a definition of what submissions and a formal submissions process actually entail…

What is Submission Management?

The terms ‘submissions process’ and ‘submissions analysis process’ describe two interconnected but distinct stages in the broader practice of public and stakeholder consultation. These processes are central to democratic governance, evidence-based policymaking, and regulatory legitimacy across the world’s major democracies. Despite sharing a common conceptual foundation, they are named, structured, and given different legal weight in different jurisdictions.

What Is a Submission?

A submission is a formal written response from an individual, organisation, group or institution, made in reply to a documented invitation to provide comment or feedback on a proposed policy, regulatory instrument, plan, strategy, environmental assessment, legislative amendment, or other formal document that is open for public or stakeholder consultation.

Three elements distinguish a submission from informal feedback:

  1. There is an authorising document — a green paper, white paper, consultation paper, discussion draft, exposure draft, proposed project plan, or notice of proposed rulemaking — that establishes the subject matter on which comment is invited.
  2. There is a defined process, including a specified submission period with a closing date, a stated authority receiving submissions, and usually some guidance on scope.
  3. The submission carries some form of public record status: it is created with the intention that it will be considered by a decision-making authority and, in most jurisdictions, made available to the public.

Submissions may be made by a wide range of parties: private citizens, community organisations, industry associations, peak bodies, professional bodies, academic institutions, research organisations, government agencies, and elected representatives. They may be brief or extensive, technical or general, individual or coordinated. Their common feature is that they are addressed to a specific authority on a specific consultation, in response to a specific set of questions or an invitation to comment on specific proposals.

Distinguished from comments: In some jurisdictions — particularly the United States — the word ‘comment’ is preferred to ‘submission’ in regulatory contexts. In others — such as Australia and the UK — ‘submission’ is the standard term for formal responses to parliamentary inquiries and policy consultations, while ‘comment’ may describe shorter, less formal online responses. This guide uses ‘submission’ as the primary term but acknowledges its equivalents across jurisdictions.

What is a Submissions Process?

The submissions process is the end-to-end administrative and procedural framework through which a consultation authority or organisation:

(a) invites submissions from the public and other stakeholder groups;
(b) facilitates the receipt of those submissions;
(c) maintains an accessible record of them; and
(d) closes the submission period and transitions to analysis.

It encompasses both the public-facing activities (advertising the consultation, publishing the consultation document, providing guidance to would-be submitters) and the administrative management activities (receiving and registering submissions, managing confidentiality requests, and organising submissions for analysis).

A well-designed submissions process has six defining characteristics:

  1. Transparency: the process is publicly announced, the consultation document is publicly accessible, and the authority’s identity and mandate are clear
  2. Accessibility: the process is designed to enable participation by the full range of relevant parties, including those with limited resources or technical knowledge
  3. Defined scope: the subject matter on which comment is invited is clearly described, so that submissions can be directed and relevant
  4. Defined timeline: a clear submission period with an opening date, closing date, and, where applicable, an extension mechanism
  5. Acknowledgement and record-keeping: submissions are acknowledged on receipt, assigned a reference number, and entered into a register that constitutes the evidentiary basis for analysis
  6. Procedural integrity: the process is conducted in accordance with any applicable legislative, regulatory, or policy requirements governing the consultation authority

What is a Submissions Analysis Process?

The submissions analysis process is the structured methodology by which a consultation authority systematically reads, categorises, interprets, and reports on the body of submissions received in response to a consultation. Its purpose is to transform a collection of individual responses — ranging from brief expressions of support or concern to lengthy technical reports — into an evidence base that can inform decision-making.

The submissions analysis process is not a passive or mechanical exercise. It requires substantive analytical judgement about how to interpret and characterise submissions, how to weight different types of evidence, and how to fairly represent the range of views expressed. It also requires procedural rigour: the methodology must be transparent, consistent, and defensible to withstand scrutiny from submitters, Parliament or legislatures, courts, and the public.

At its core, the submissions analysis process answers four questions:

  • What did submitters say? — the descriptive layer of analysis, identifying the themes, arguments, evidence, and views expressed
  • Who said it? — the classificatory layer, identifying which stakeholder groups or types of respondents expressed particular views
  • How significant is it? — the evaluative layer, assessing the weight and quality of the evidence presented and the representativeness of the views expressed
  • What does it mean for the decision? — the interpretive layer, translating the findings into insights that can inform the recommendations or decisions that flow from the consultation

These four layers map onto the distinction between qualitative analysis (what was said and why it matters) and quantitative summary (how many said it). Both are necessary, but neither is sufficient on its own. A rigorous submissions analysis process integrates both and is transparent about the methodology used at each layer.

The submissions process and the submissions analysis process are sequential but deeply interdependent. The design of the submissions process determines the quality and analysability of the data the analysis process will have to work with. If the submissions process uses an unstructured format — simply inviting free-text responses without prompting respondents to address specific questions — the analysis process must invest heavily in initial categorisation before any substantive analysis can begin. If the submissions process uses a structured questionnaire with defined questions, the analysis process can proceed directly to coding and interpretation.

Equally, the intended outputs of the analysis process should shape the design of the submissions process. If the authority knows it will need to disaggregate responses by stakeholder type, the submissions process must capture stakeholder classification data. If it knows it will need to quantify support for or opposition to specific proposals, the submissions process should include scaled or preference questions that yield quantifiable data. The submissions process and analysis methodology should therefore be designed together, not sequentially.

 

Two Common Approaches For Managing Submissions

Two common approaches are online engagement platforms (such as Granicus, Social Pinpoint, Tracktivity) and structured questionnaires or surveys. While engagement platforms excel at broad community consultation and awareness-raising, a well-designed structured questionnaire is almost always the superior choice when the goal is to collect and analyse substantive feedback on complex documents or topics. This guide explains why, and provides practical guidance for designing, administering, and analysing questionnaire-based submissions.

 

Online Engagement Platforms

Platforms such as Granicus (formerly GovDelivery and Bang the Table), Social Pinpoint, and Engagement HQ are purpose-built community engagement tools. They typically offer features such as interactive maps, idea boards, polls, discussion forums, comment walls, and story submissions. Their strengths lie in:

  • Reaching broad, non-specialist audiences
  • Generating awareness and publicising a consultation
  • Facilitating informal, low-barrier participation
  • Displaying real-time participation metrics and heat maps
  • Hosting supporting materials like videos, FAQs, and documents alongside engagement activities

However, these strengths come with significant limitations when the subject matter is complex. Open comment walls and idea boards produce unstructured text that is difficult to categorise and compare systematically. Comments are usually short and surface-level, or express polarised positions, as the format itself does not encourage deeper deliberation. Polling features reduce nuanced positions to simple votes. The informal tone encouraged by these platforms is often poorly suited to professional stakeholders — such as industry groups, peak bodies, and subject matter experts — who need to make detailed, referenced submissions. Critically, engagement platforms do not ensure that respondents engage with the actual content of the document being reviewed.

Learn more: What Does Stakeholder Mean?

 

Structured Questionnaires

A structured questionnaire is a document or online form presenting a defined series of questions — including both closed (scaled or multiple choice) and open (free-text) questions — that directly map to the sections, proposals, or issues within the complex document under review. When well designed, a structured questionnaire:

  • Guides respondents through the document in a logical sequence
  • Ensures every submission addresses the same set of questions, enabling direct comparison
  • Captures both quantitative sentiment data (agreement scales, ratings) and qualitative reasoning
  • Provides a defensible and transparent analytical framework
  • Is suitable for expert and professional stakeholders as well as general community members
  • Produces data that can be systematically coded, tabulated, and reported

Why Structured Questionnaires Are More Suitable for Complex Documents

Ensures Substantive Engagement with the Content

Online engagement platforms present a persistent risk: participants may engage with the platform’s interactive features (placing pins, voting in polls, posting comments) without ever reading the underlying document. A structured questionnaire, particularly one that references specific sections, proposals, or criteria within the document, requires respondents to engage substantively with the material in order to answer meaningfully. This produces higher-quality submissions and a more reliable evidence base for decision-making.

 

Enables Systematic Comparative Analysis

When hundreds or thousands of submissions are received, the ability to compare responses across respondents is critical. Engagement platform comments are difficult to compare because respondents write to different prompts, at different lengths, on different topics. A questionnaire ensures every respondent addresses the same questions, meaning their responses can be directly tabulated, coded, and compared. Agreement-scale questions can be graphed and statistically summarised. Open-text responses to the same prompt can be thematically coded together.

 

Supports Proportionate Weighting and Disaggregation

Complex consultations frequently involve multiple distinct stakeholder groups — for example, industry proponents, environmental advocates, government agencies, and general community members — whose views may differ significantly and who may need to be reported on separately. A questionnaire can capture demographic and stakeholder classification data upfront, allowing analysis teams to disaggregate results by group and apply appropriate weighting. Engagement platform submissions are rarely structured in a way that makes such disaggregation straightforward.

 

Provides a Defensible and Auditable Process

For consultations with regulatory, legal, or political significance, the consultation process itself may be scrutinised. A structured questionnaire provides a clear, documented record of what was asked, in what order, and what responses were received. This audit trail supports transparent reporting and reduces the risk of the consultation being challenged on procedural grounds. Engagement platforms, with their mix of comments, reactions, and informal interactions, can be more difficult to defend as a rigorous process.

 

Accommodates Professional and Expert Respondents

Many complex documents require input from professional stakeholders: engineers, lawyers, planners, scientists, industry representatives, and government officials. These respondents typically expect a formal submission process. A structured questionnaire — particularly one made available as a downloadable document as well as an online form — meets professional expectations in a way that a social-media-style engagement platform does not. It signals that the organisation takes the consultation seriously and values the expertise of respondents.

 

Reduces Analysis Burden

Perhaps the most practical advantage is the reduction in post-submission analysis effort. Open-ended comments on an engagement platform require extensive manual reading, sorting, and categorisation before any analysis can begin. A questionnaire structures this work upfront: closed questions are pre-formatted for quantitative summary, and open questions are already grouped by topic. This makes it feasible to process large submission volumes within realistic timeframes and budgets.

At a Glance: When to Choose a Questionnaire Over an Engagement Platform
•  The feedback object is a specific document, plan, or proposal with defined content
•  Responses need to be compared, aggregated, or statistically summarised
•  Professional, expert, or industry stakeholders are key participants
•  The process may be subject to legal, regulatory, or political scrutiny
•  Resources for manual analysis of unstructured responses are limited
•  Disaggregation of results by stakeholder type is required

 

Best Practice Questionnaire Design

Start with Clear Objectives

Before writing a single question, clearly articulate what decisions or outcomes the feedback will inform. Are you testing support for a specific policy option? Identifying concerns about a technical proposal? Gathering evidence for a regulatory impact assessment? Each question in the questionnaire should serve one or more of these objectives. Questions that do not connect to a specific decision or output should be removed.

 

Mirror the Structure of the Document

Organise the questionnaire to follow the structure of the document being reviewed. If the document has five chapters or sections, the questionnaire should have five corresponding sections. This helps respondents locate relevant passages as they work through the questionnaire and makes it easier to map responses back to specific parts of the document in the analysis phase. Include section references (e.g., “The following questions relate to Section 3: Proposed Zoning Changes”) to orient respondents.

 

Use a Mix of Question Types Purposefully

Effective questionnaires for complex documents combine several question types, each serving a distinct analytical purpose:

 

Agreement/Rating Scales

Likert-type scales (e.g., Strongly Agree — Agree — Neither — Disagree — Strongly Disagree) are ideal for measuring the level of support for specific proposals, values, or principles. Use a consistent scale throughout the questionnaire and always include a “Don’t know / No view” option for questions where respondents may genuinely lack a basis for a view. Five-point scales are generally preferred for capturing nuance without overwhelming respondents.

 

Multiple Choice and Ranking Questions

Use these to identify preferred options from a defined list, or to rank priorities. These are particularly useful when a document presents multiple alternative approaches and you need to understand relative preference. Ensure the options presented are exhaustive and mutually exclusive, and always include an “Other (please specify)” option to capture positions outside the defined list.

 

Open-Text Questions

Open-text questions are essential for capturing the reasoning, evidence, and nuance behind rated responses. Every significant rating or preference question should be followed by an open-text prompt: “Please provide any comments or reasons for your response above.” Open-text questions should also be used to invite respondents to raise issues, identify risks, or suggest alternatives that the questionnaire may not have anticipated. Avoid making open-text questions mandatory — this discourages completion — but clearly communicate their value.

 

Factual and Demographic Questions

Include questions to capture the respondent’s stakeholder category, professional background, geographic location, and any other classification variables that will be needed for disaggregation in the analysis. These are best placed at the beginning of the questionnaire (before fatigue sets in) or at the very end, with a brief explanation of why the information is being collected.

 

Write Clear, Unbiased Questions

Question wording has a profound effect on response quality. Follow these principles:

  • Use plain language. Avoid jargon, acronyms, and technical terms that may not be understood by all respondents. If technical terms are unavoidable, provide brief definitions.
  • Ask one thing at a time. Avoid double-barrelled questions (“Do you agree with the proposal’s objectives and its implementation approach?”). Each question should test a single proposition.
  • Avoid leading questions. Phrasing such as “Do you agree that the proposed increase in green space will benefit the community?” signals the expected answer. Use neutral framing: “To what extent do you support the proposed increase in green space?”
  • Avoid loaded or emotive language. Words like “controversial”, “radical”, or “excessive” introduce bias. Describe proposals factually.
  • Ensure answer options are exhaustive and balanced. If using a scale, ensure it is symmetric — the same number of positive and negative response options.

 

Manage Length and Cognitive Load

Respondent fatigue is a significant risk in complex document consultations, where the subject matter is already demanding. To manage length:

  • Limit the total number of questions to what is strictly necessary. As a rough guide, aim for no more than 40–50 questions for a highly complex document, and considerably fewer for shorter documents.
  • Group related questions under clear section headings with brief introductory text.
  • Use skip logic (branching) to show only relevant questions to each respondent. For example, questions about a specific technical proposal may only be relevant to industry stakeholders (see the sketch below this list).
  • Provide an estimated completion time at the start of the questionnaire.
  • In online formats, show a progress indicator so respondents can gauge how much remains.
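
To make the branching idea concrete, here is a minimal sketch of skip logic represented as data, where a question is shown only if an earlier answer meets a condition. The question IDs and the condition are hypothetical; in practice, survey platforms implement this through their own configuration screens rather than code.

```python
# A minimal sketch of skip logic (branching) as data: each rule maps a
# question ID to a predicate over the answers given so far. Question IDs
# and conditions are hypothetical examples.

# Show the detailed technical question only to industry stakeholders
SKIP_RULES = {
    "q12_technical_detail":
        lambda answers: answers.get("q2_stakeholder_type") == "Industry",
}

def visible_questions(all_questions, answers):
    """Return the questions a respondent should see, given their answers so far."""
    return [q for q in all_questions
            if SKIP_RULES.get(q, lambda a: True)(answers)]

# A community respondent is not shown the industry-specific question
print(visible_questions(["q1_overall_support", "q12_technical_detail"],
                        {"q2_stakeholder_type": "Community"}))
# -> ['q1_overall_support']
```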

 

Make the Questionnaire Accessible

Accessibility is both an ethical obligation and a practical necessity for reaching a diverse respondent pool:

  • Provide the questionnaire in multiple formats: an online form and a downloadable document (Word and/or PDF) that can be completed and returned by email for respondents who prefer or require this.
  • Ensure online forms comply with WCAG 2.1 accessibility guidelines (sufficient colour contrast, keyboard navigation, screen reader compatibility).
  • Offer assistance for respondents who need help completing the questionnaire (e.g., a contact phone number or email).
  • Provide clear instructions for how to complete and submit the form, including details of other ways to submit (for example, printing and emailing the form as well as submitting online).
  • Translate the questionnaire for non-English speaking communities if the subject matter is likely to affect them.

 

Pilot Test Before Launch

Always pilot test the questionnaire before releasing it publicly. Recruit a small group of internal and external testers representing different stakeholder types. Ask them to complete the questionnaire and provide feedback on clarity, length, technical issues, and any questions they found confusing or ambiguous. Pilot testing consistently identifies problems that are invisible to the people who wrote the questions.

 

Questionnaire Design Checklist

•  Objectives clearly defined and each question mapped to an objective
•  Structure mirrors the document being reviewed
•  Mix of scaled, multiple choice, and open-text questions
•  Questions are single-barrelled, neutral, and in plain language
•  Answer options are exhaustive, balanced, and include ‘Don’t know’ where appropriate
•  Total question count reviewed for necessity and manageability
•  Skip logic applied to route respondents to relevant sections
•  Demographic/stakeholder classification questions included
•  Multiple submission formats available (online, Word, PDF)
•  Accessibility requirements met
•  Pilot test completed and feedback incorporated

 

Key Considerations for Analysing Submissions

Establishing an Analysis Framework Before Data Collection

The analysis framework should be developed before the questionnaire is finalised — ideally at the same time as the questions are being written. The framework should specify: what outputs are required (e.g., a summary report, a detailed submission register, a cabinet briefing), what level of disaggregation is needed, what decisions the analysis will inform, and what analytical methods will be applied to closed and open questions respectively. Building the framework before data collection prevents the common problem of realising mid-analysis that critical classification variables were not captured.

 

Quantitative Analysis of Closed Questions

Closed questions (rating scales, multiple choice, ranking) lend themselves to straightforward quantitative summary. The key outputs typically include:

  • Frequency distributions showing the number and percentage of respondents selecting each response option
  • Mean scores and measures of spread for rating scale questions
  • Cross-tabulations showing how responses differ across stakeholder groups, geographic areas, or other classification variables
  • Statistical tests of significance where sample sizes permit and where testing group differences is analytically important

Be transparent about sample sizes and response rates when reporting quantitative results. A result showing that 70% of respondents support a proposal means something very different if it is based on 15 responses versus 1,500. Also note the non-random nature of most consultation samples: respondents self-select, meaning results should not be extrapolated as representative of broader population views.
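
To illustrate, the sketch below computes a frequency distribution, a cross-tabulation, and a Likert mean using pandas. The column names and response values are hypothetical placeholders for whatever fields your own questionnaire captures.

```python
# A minimal sketch of the quantitative summaries described above, using pandas.
# Column names ("stakeholder_type", "q3_support") and data are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "stakeholder_type": ["Industry", "Community", "Community", "Agency", "Industry"],
    "q3_support": ["Agree", "Disagree", "Strongly Agree", "Agree", "Neither"],
})

# Frequency distribution: counts and percentages for each response option
counts = responses["q3_support"].value_counts()
percent = responses["q3_support"].value_counts(normalize=True).mul(100).round(1)
print(pd.concat([counts.rename("n"), percent.rename("%")], axis=1))

# Cross-tabulation: how responses differ across stakeholder groups
print(pd.crosstab(responses["stakeholder_type"], responses["q3_support"]))

# Mean and spread for a Likert item, after mapping labels to numbers
scale = {"Strongly Disagree": 1, "Disagree": 2, "Neither": 3,
         "Agree": 4, "Strongly Agree": 5}
scores = responses["q3_support"].map(scale)
print(f"mean={scores.mean():.2f}, sd={scores.std():.2f}, n={scores.count()}")
```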

 

Qualitative Analysis of Open-Text Responses

Open-text responses are often the most analytically valuable part of a questionnaire-based consultation, providing the reasoning, evidence, and contextual knowledge that closed questions cannot capture. However, they also require the most analytical skill and effort. The following approaches represent current best practice.

Thematic Coding

Thematic coding is the foundational method for analysing open-text responses. It involves reading through responses and assigning each response (or relevant portion of a response) one or more codes that describe its content. Codes are grouped into themes — higher-order categories that represent distinct types of feedback. For example, responses about traffic impacts, noise, and dust might all be coded under a “Construction Impacts” theme, while responses about economic benefits, job creation, and supply chain effects might be grouped under “Economic Outcomes”.

Develop the initial coding framework based on the structure of the questionnaire and the document, then refine it as coding progresses — adding new codes as novel themes emerge. Use a codebook to document each code’s definition and inclusion/exclusion criteria, ensuring consistency across coders. For large submission volumes, divide coding between multiple analysts and conduct inter-rater reliability checks (comparing codes assigned by different coders to the same responses) to ensure consistency.
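
To illustrate, the sketch below shows a codebook held as a simple data structure, together with an inter-rater reliability check using Cohen's kappa (one widely used agreement statistic, available in scikit-learn). The codes and example assignments are hypothetical.

```python
# A minimal sketch of a codebook and an inter-rater reliability check.
# Codes and example data are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Codebook: each code carries a definition so multiple coders apply it
# consistently; inclusion/exclusion criteria would be documented alongside.
CODEBOOK = {
    "CONSTRUCTION_IMPACTS": "Traffic, noise, dust or other build-phase effects",
    "ECONOMIC_OUTCOMES": "Economic benefits, job creation, supply chain effects",
    "METHODOLOGICAL_CONCERNS": "Challenges to the validity of modelling or data",
}

# Codes independently assigned by two analysts to the same six responses
coder_a = ["CONSTRUCTION_IMPACTS", "ECONOMIC_OUTCOMES", "CONSTRUCTION_IMPACTS",
           "METHODOLOGICAL_CONCERNS", "ECONOMIC_OUTCOMES", "CONSTRUCTION_IMPACTS"]
coder_b = ["CONSTRUCTION_IMPACTS", "ECONOMIC_OUTCOMES", "ECONOMIC_OUTCOMES",
           "METHODOLOGICAL_CONCERNS", "ECONOMIC_OUTCOMES", "CONSTRUCTION_IMPACTS"]

# Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance
print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")
```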

Sentiment Analysis Within Themes

For each identified theme, record not just what respondents said but their evaluative stance: is the comment broadly positive (supportive), negative (concerned or opposed), neutral (informational or factual), or mixed? This allows reporting to convey both the substance of feedback and its directional character — for example, “The majority of comments on traffic management expressed concern, particularly regarding peak-hour congestion near the proposed access road.”
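
Assuming each coded segment carries both a theme and a stance label, a simple theme-by-stance tally, sketched below with hypothetical data, conveys the directional character of feedback within each theme.

```python
# A minimal sketch of tallying evaluative stance within each theme.
# Themes and stance labels are hypothetical examples.
import pandas as pd

segments = pd.DataFrame({
    "theme":  ["Traffic Management", "Traffic Management", "Traffic Management",
               "Economic Outcomes", "Economic Outcomes"],
    "stance": ["negative", "negative", "mixed", "positive", "neutral"],
})

# Counts per theme, and row percentages showing each theme's direction
print(pd.crosstab(segments["theme"], segments["stance"]))
print(pd.crosstab(segments["theme"], segments["stance"],
                  normalize="index").mul(100).round(0))
```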

Identifying Representative and Outlier Responses

Select a set of verbatim quotes that best illustrate each major theme. Choose quotes that are representative of the mainstream view within that theme, but also note and document outlier responses — views that are held by a small number of respondents but represent a distinct and important perspective. Outliers are often particularly important in professional and expert consultations, as a single well-reasoned submission from a subject matter expert may warrant more consideration than many brief responses expressing a similar general view.

Noting the Weight of Evidence vs. Volume of Responses

A crucial and often misunderstood principle of qualitative consultation analysis is that the number of submissions expressing a particular view does not, on its own, determine the weight that view should carry in decision-making. A well-evidenced, technically detailed submission from a specialist may deserve more analytical weight than fifty brief form responses expressing the same general opinion. Equally, an organised campaign that generates large numbers of identical or near-identical responses should be identified as such and reported transparently, rather than being counted as independent evidence. Analysis reports should be explicit about these distinctions.

Handling Organised Campaigns and Form Letters

Consultations on high-profile topics frequently attract coordinated campaigns in which large numbers of people submit identical or near-identical responses, often facilitated by advocacy organisations. These should be identified through comparison of response text and reported separately from independent submissions. The existence of a campaign is itself analytically relevant — it demonstrates the mobilisation capacity of particular groups and may indicate the salience of an issue — but it should not be presented in a way that artificially inflates the apparent breadth of independent concern.
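
One way to surface such campaigns, sketched below on hypothetical text and assuming scikit-learn is available, is to compare submissions pairwise using TF-IDF cosine similarity. The 0.9 threshold is an illustrative assumption to tune against your own data, and flagged clusters still require human review.

```python
# A minimal sketch of near-duplicate detection for identifying form-letter
# campaigns. The similarity threshold (0.9) is an assumption to tune.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submissions = [
    "I support the draft plan and urge the council to adopt it without delay.",
    "I support the draft plan and urge council to adopt it without delay.",
    "The traffic modelling in Section 3 understates peak-hour volumes.",
]

tfidf = TfidfVectorizer().fit_transform(submissions)
similarity = cosine_similarity(tfidf)

# Flag pairs of submissions whose text is nearly identical
for i in range(len(submissions)):
    for j in range(i + 1, len(submissions)):
        if similarity[i, j] > 0.9:
            print(f"Possible campaign pair: submissions {i} and {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```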

Using Software to Support Qualitative Analysis

For large volumes of open-text responses, qualitative data analysis software (QDAS) can significantly improve efficiency and rigour. Tools such as NVivo, ATLAS.ti, Dedoose, and MAXQDA support the management of large text datasets, systematic application of codes, retrieval of coded segments, and generation of summary reports. However, these tools can be complex to use, as they were primarily designed for academic research. For real-world application, Simply Stakeholders provides the ideal mix of rigour and ease of use, with built-in AI assistance for analysing issues and sentiment and for triggering response workflows. For very large datasets, natural language processing (NLP) tools can assist with initial categorisation and sentiment analysis, though human review and validation of algorithmically generated codes remains essential for accuracy and defensibility.
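
As a minimal illustration of machine-assisted initial categorisation, the sketch below suggests candidate issue categories via simple keyword matching. The categories and keywords are hypothetical, and labels generated this way are a first-pass triage only; a human analyst must validate them before they appear in any report.

```python
# A minimal sketch of keyword-based initial categorisation of open-text
# responses. Categories and keywords are hypothetical; algorithmic labels
# require human validation before use in reporting.
ISSUE_KEYWORDS = {
    "Traffic and Access": ["traffic", "congestion", "parking", "intersection"],
    "Environmental Impacts": ["habitat", "sediment", "emissions", "runoff"],
    "Economic Benefits": ["jobs", "employment", "investment", "local business"],
}

def suggest_issues(text):
    """Return candidate issue categories whose keywords appear in the text."""
    lowered = text.lower()
    return [issue for issue, words in ISSUE_KEYWORDS.items()
            if any(word in lowered for word in words)]

print(suggest_issues("Peak-hour congestion near the intersection is a concern."))
# -> ['Traffic and Access']
```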

Integrating Quantitative and Qualitative Findings

The strongest analysis reports integrate quantitative and qualitative findings to tell a coherent story about the range and nature of feedback received. Quantitative data provides the headline picture (“Three-quarters of respondents supported the proposed rezoning”), while qualitative data provides depth and nuance (“However, a substantial minority raised concerns about the adequacy of infrastructure to support increased density, with several engineering and planning professionals providing detailed technical arguments for a staged approach”). Neither type of data is sufficient on its own for complex document consultations.

Documenting and Reporting Analysis Transparently

Analysis reports should document the methodology clearly enough that a reader could understand and evaluate how conclusions were reached. This includes: the total number of submissions received, the breakdown by stakeholder type, the response rate (if the questionnaire was sent to a defined population), a description of the coding approach and who conducted the analysis, acknowledgement of any limitations or potential biases in the sample, and clear attribution of quantitative data to specific questions. Providing a full submission register (anonymised where appropriate) as an appendix is good practice and supports the transparency of the process.

Qualitative Analysis: Key Principles
•  Develop a codebook before coding begins and maintain it throughout
•  Use inter-rater reliability checks when multiple coders are involved
•  Distinguish between volume of responses and weight of evidence
•  Identify and separately report organised campaign responses
•  Select representative verbatim quotes to illustrate each major theme
•  Document methodology in the analysis report for transparency and auditability
•  Integrate quantitative and qualitative findings into a coherent narrative

 

Case Studies: Success and Failure in Submission Analysis

The following case studies illustrate how the quality of submission processes and analysis can shape — and sometimes distort — decision-making on complex projects and policies. They are drawn from publicly documented consultations and are intended to highlight both good practice and cautionary lessons. Where specific organisations or processes are referenced, these are based on publicly reported accounts.

Success: Infrastructure Analysis Saves a Major Transport Project

 

CASE STUDY: Infrastructure submission analysis leads to important design refinements
Outcome: Structured analysis surfaces critical engineering concerns — design revised before construction
A state government infrastructure agency conducted a structured questionnaire-based consultation on the environmental impact statement (EIS) for a major urban rail extension. The agency received over 340 submissions, with a mix of responses from engineering consultancies, local councils, community groups, and individual residents.

Because the questionnaire was structured around the EIS chapters, the analysis team was able to systematically code open-text responses by topic area. During qualitative coding of responses to questions about the tunnelling methodology, two independent submissions from geotechnical engineering firms raised substantively similar concerns about the proposed tunnelling approach in a particular geological zone. Neither organisation was aware of the other’s submission.

The analysis report flagged this convergence explicitly — noting that while only two submissions raised the issue, both came from credentialed specialists and their technical arguments were internally consistent and specific. The project team commissioned an independent geotechnical review. The review confirmed the concerns were valid. The tunnelling methodology was revised at the design development stage, avoiding a potentially costly mid-construction change order.

Key lesson: A structured questionnaire with clear section mapping allowed the analysis team to surface low-volume but high-weight expert feedback that would have been easily overlooked in an unstructured submission process. The analytical discipline of distinguishing weight of evidence from volume of response was central to the outcome.

 

Success: Community Feedback Prompts Revision of a Planning Decision

 

CASE STUDY: Structured feedback reveals systematic miscalculation in traffic modelling
Outcome: Planning approval conditions revised; traffic model corrected following structured submissions
A local government authority (LGA) released a draft planning scheme amendment to allow a large format retail development on the urban fringe. The authority used a structured submission form asking respondents to address specific criteria, including traffic and access impacts, amenity effects, and consistency with the strategic plan.

The submission form required respondents to cite relevant sections of the draft amendment when commenting on traffic. This design choice proved consequential: multiple respondents — including a transport planning consultancy, a local residents association, and a neighbouring business — all independently cited the same traffic volume figures from the draft, identifying what they characterised as a significant underestimate in peak-hour modelling.

Because the questionnaire had grouped these responses under a common question about traffic impacts, the analysis team was able to identify that seven submissions, across three different stakeholder types, were all pointing to the same specific paragraph of the traffic assessment. The analysis report made this explicit, and the planning panel ordered a review of the traffic modelling. The independent review found the modelling had used background traffic growth rates that were outdated by several years.

The authority revised its conditions of approval to require infrastructure upgrades prior to the development proceeding, and the applicant was required to fund additional intersection works. The revised approval was upheld on subsequent review.

Key lesson: Questionnaire structure that anchored responses to specific document sections enabled pattern recognition across submissions — turning isolated concerns into a convergent evidentiary signal that demanded a response.

 

Failure: Poor Analysis Process Leads to a Flawed Decision — and a Legal Challenge

 

CASE STUDY: Unstructured submissions process produces misleading analysis — decision later overturned
Outcome: Judicial review finds consultation process inadequate; decision quashed and process repeated
A government agency released a draft policy framework for public comment using only an open submission format — essentially a free-text email box with no structured questions. Over 1,200 submissions were received across a four-week period. The agency’s analysis team, under significant time pressure, produced a summary report that characterised the responses as “broadly supportive” of the framework, noting that the majority of submissions did not raise objections.

However, the absence of a structured questionnaire meant the agency had no way to reliably determine whether those non-objecting submissions had actually read or engaged with the substantive proposals in the framework. Post-publication analysis of the raw submissions by an affected industry group revealed that a significant proportion of the apparent support came from a coordinated email campaign, in which participants had been directed to submit brief messages of support without engaging with the policy content. The analysis team had not identified or disclosed this campaign in its report.

Separately, a number of technically detailed submissions from specialist practitioners — raising specific concerns about the framework’s implementation provisions — had been categorised under a generic “concerns raised” heading without their substantive arguments being reported. Decision-makers were unaware of the specific technical objections.

The affected industry group brought a judicial review challenge on procedural fairness grounds. The court found that the consultation process had not adequately gathered or reported substantive feedback, and that the analysis report had materially misrepresented the nature of the submission base by failing to distinguish campaign responses from independent expert submissions. The decision was quashed and the consultation process ordered to be repeated using a more rigorous structured approach.

Key lesson: The absence of a fit-for-purpose submissions process and analysis methodology made it impossible to ensure quality engagement, detect campaigns, or systematically report on substantive qualitative feedback. The resulting analysis was legally indefensible. This case underlines why process design, campaign identification, and transparent reporting methodology are not optional refinements — they are the foundation of a legally and procedurally sound consultation.

 

Failure: Volume Mistaken for Consensus — Poor Outcome for a Community

 

CASE STUDY: Organised campaign responses misread as community consensus; decision reversed after implementation
Outcome: Policy implemented based on flawed analysis; reversed following community backlash and independent review
A public utility consulted on proposed changes to its residential service pricing structure using an online engagement platform. The platform’s built-in polling function showed that approximately 78% of respondents “supported” the new pricing model. The utility’s consultation report cited this figure prominently and used it to justify proceeding with the changes.

What the report did not disclose was that the platform had been linked to from the websites of two industry peak bodies representing large commercial customers, who stood to benefit from the residential pricing changes through cross-subsidy effects. The majority of “supportive” responses were brief, single-click poll responses that had not engaged with the explanatory material about the pricing methodology.

A subsequent independent review commissioned after community complaints found that fewer than 12% of poll respondents had clicked through to read the detailed pricing methodology document — the core of what was being consulted on. Among the smaller cohort of respondents who had submitted detailed text comments (many of whom were residential customers), opposition to the changes was overwhelming, with the most common concern being the regressive impact on low-income households.

The utility had not used a structured questionnaire and had not asked any questions that would have captured respondents’ understanding of the proposal or their household income profile. The analysis had reported aggregate poll figures without disaggregating by respondent type or engagement depth.

The pricing changes were partially reversed eighteen months after implementation following an ombudsman investigation. The utility was required to conduct a fresh consultation using a structured questionnaire approach with mandatory respondent classification questions.

Key lesson: Poll-click engagement data from an online platform is not a substitute for substantive submission analysis. The failure to disaggregate responses, detect the source of organised participation, or require engagement with the actual proposal content produced an analysis that actively misled decision-makers.

 

Success: Expert Submissions Prevent a Poor Environmental Decision

 

CASE STUDY: Rigorous qualitative analysis of specialist submissions identifies fatal flaw in environmental modelling
Outcome: Development approval deferred; environmental assessment methodology revised
An environmental protection authority consulted on a draft conditions framework for a proposed industrial development in a coastal zone. A structured submission questionnaire was used, with specific questions asking respondents to address each of the assessment criteria and reference the relevant sections of the draft conditions.

The consultation received 89 submissions. While the majority were brief and expressed general support for the development, the analysis team applied a rigorous qualitative coding approach to all responses, with a specific code for “methodological concerns” applied to any submission raising questions about the validity of the environmental modelling approach.

Seven submissions — from an independent marine ecologist, two university research groups, and a peak body for environmental consultants — all received this code. Their arguments were specific and technical: the hydrodynamic modelling used to predict sediment dispersal had been calibrated using data from a different coastal system with materially different tidal characteristics.

The analysis report presented these submissions together in a dedicated section, included an extended summary of each argument, and explicitly noted that while these seven submissions represented less than 8% of total responses, they collectively identified a single specific and verifiable methodological concern rather than a general policy objection. The report recommended that the authority seek technical review before finalising the conditions.

The authority commissioned an independent peer review of the hydrodynamic modelling. The review confirmed the calibration problem. The development applicant was required to commission new modelling before approval conditions could be finalised, adding approximately eight months to the timeline.

Key lesson: Qualitative analysis that distinguishes methodological challenges from general opinions, and that weights evidence rather than counting responses, can surface critical technical problems that would otherwise be submerged in a large submission dataset.

 

Lessons Across the Case Studies

•  Structure matters: Questionnaires anchored to document sections enable pattern recognition across submissions that unstructured processes cannot replicate
•  Weight of evidence, not volume: The most consequential submissions are often a small minority — analytical frameworks must be designed to surface them
•  Campaign identification is non-negotiable: Failure to identify and disclose organised campaign responses has resulted in legal challenges and overturned decisions
•  Disaggregation by stakeholder type reveals what aggregate figures conceal: Support figures that look strong overall may mask strong opposition among directly affected groups
•  Methodology must be documented and defensible: Analysis processes that cannot be scrutinised and explained expose organisations to procedural challenge
•  Online poll responses are not submissions: Engagement platform data should not be treated as equivalent to structured, document-engaged submissions

Evaluation of a Submissions Management Process

How do you evaluate or judge the success or failure of a submissions management process? Is it the number of submissions you receive or some other metric?

Here are our recommendations for the best metrics to use when evaluating your submissions process, based on key principles for good stakeholder engagement, such as the Brisbane Declaration – United Nations standards for Community Engagement, which our founder helped develop:

  • Quality of submissions
    • Did the submissions reveal confusion or uncertainty about key aspects of the proposal that could have been addressed earlier, with better communication from the proponent?
    • Did the submissions indicate engagement with the content of the proposal, suggesting deeper engagement and deliberation?
  • Diversity of respondents – did you obtain submissions from a diverse group of stakeholders? Consider the roles of stakeholders, their geographic distribution and demographic details, as well as stakeholder mapping attributes such as high levels of impact or interest. Give specific consideration to the participation of poor and marginalised communities and Indigenous communities.
  • Did the submissions process offer the opportunity for participants to influence the proposal? Was that opportunity realised?
  • How were the results of the submissions process communicated to the respondents and the broader stakeholder group, especially in relation to the impact the submissions had on the proposal?

How Simply Stakeholders Streamlines Submissions Analysis and Reporting

Simply Stakeholders is a stakeholder relationship management platform used by government agencies and businesses to manage engagement, track interactions, and analyse submissions and feedback. Its built-in survey and forms function, combined with purpose-built analytical and reporting tools, makes it particularly well suited to the submissions analysis workflows described in this guide. The following explains how key Simply Stakeholders features directly address the challenges of managing and analysing structured questionnaire responses at scale.

Built-In Survey and Forms Tool

Simply Stakeholders includes a native forms and survey builder that allows teams to construct structured questionnaires directly within the platform. Forms support flexible question types — including scaled rating questions, multiple choice, ranking, and open-text fields — and can incorporate skip logic to route respondents through only the questions relevant to their stakeholder category or role. This means the questionnaire design best practices described earlier in this guide can be implemented directly in the platform without relying on a separate survey tool.

Completed form responses flow automatically into the stakeholder records system, linking each submission to the relevant stakeholder profile. This eliminates the manual data entry step that consumes significant time when questionnaire responses are received via separate survey platforms and must then be matched to stakeholder records for classification and disaggregation.

Automatic Issues Detection and AI-Assisted Sentiment Analysis

One of the most time-intensive steps in analysing open-text submissions is the initial read-through to identify the issues and themes each response raises. Simply Stakeholders automates this step through its proprietary AI models, which automatically detect issues and analyse sentiment each time a new interaction — including a survey or form response — is added to the system.

Teams configure a set of issues categories relevant to their project or consultation in advance (for example: “Traffic and Access”, “Environmental Impacts”, “Economic Benefits”, “Implementation Timeline”). As submissions arrive, the system automatically highlights which issues are detected in each response and assesses whether the sentiment is positive, negative, or neutral. This provides an immediate at-a-glance view of the emerging patterns in the submission base before manual coding has begun, significantly reducing the time required for initial triage.

Critically, Simply Stakeholders’ AI analysis is strictly controlled, with algorithms specifically designed for stakeholder engagement contexts rather than general-purpose sentiment models. This is important for complex document submissions, where nuanced professional language and technical terminology can mislead general-purpose sentiment classifiers.

Qualitative Analysis Tools

Beyond automated issue detection, Simply Stakeholders provides tools for manual qualitative analysis that complement the AI-assisted layer. Analysts can annotate individual responses with comments, tag specific quotes for inclusion in reports, and apply additional codes beyond those detected automatically. This supports the thematic coding workflows described earlier in this guide, allowing teams to build a structured, searchable coding layer across the full submission dataset.

The ability to pull annotated quotes directly into reports is a particularly valuable feature for the submissions analysis context — it streamlines the process of selecting illustrative verbatim examples for each theme, which is otherwise a slow manual task involving cross-referencing spreadsheets and documents.

Stakeholder Segmentation and Disaggregation

Simply Stakeholders maintains a central stakeholder register that records each contact’s organisation, role, sector, geographic location, and any other classification attributes the team configures. When submissions are received via the built-in forms tool, responses are automatically linked to existing stakeholder profiles or new profiles are created. This means that disaggregated analysis — breaking down responses by stakeholder type, sector, or geography — can be performed directly within the platform without manual cross-referencing.

The platform also supports multidimensional stakeholder mapping, including influence, interest, and impact dimensions. This allows analysis teams to weight or contextualise submissions not just by stakeholder type, but by the relative influence and stake of each organisation — supporting the “weight of evidence” analysis approach described earlier in this guide.

Reporting and Dashboard Tools

Producing a submissions analysis report is typically one of the most time-consuming phases of a consultation process. Simply Stakeholders includes custom reporting dashboards that can be configured to display the key metrics and visualisations needed for a consultation report — including sentiment distributions, issue frequency charts, submission volumes by stakeholder group, and trends over the consultation period.

Reports can be saved, exported, and scheduled for regular distribution, making it straightforward to provide progress updates to project teams during the consultation period as well as producing the final analysis report. The platform’s AI Summaries feature can generate narrative summaries of the data, which analysts can then refine and expand — accelerating the drafting process without replacing the judgement and interpretation that a skilled analyst brings.

Centralised Record and Audit Trail

As the case studies above illustrate, the defensibility of a consultation process depends heavily on the existence of a complete and auditable record. Simply Stakeholders maintains a full interaction history for every stakeholder — recording every submission, email, meeting, and event linked to the consultation. This creates a single source of truth that can support responses to freedom of information requests, judicial review proceedings, or internal audit requirements.

The platform also supports integration with external systems — including online engagement platforms such as those described earlier in this guide — via API and integrations including Zapier. This means organisations that use both an engagement platform for community awareness and a Simply Stakeholders questionnaire for structured submissions can consolidate all consultation data into a single record, preventing the fragmentation of evidence that has contributed to the failures described in the case studies.

Automated Workflows for Risk and Escalation

For large-scale consultations where submissions may raise issues requiring immediate action — for example, a submission that identifies a safety risk, a legal compliance concern, or a significant reputational issue — Simply Stakeholders supports automated workflows that trigger notifications or tasks when specific issue categories or sentiment thresholds are detected. This ensures that high-priority submissions are routed to the right team member promptly, rather than being buried in a large submission queue awaiting the end of the consultation period.

 

Simply Stakeholders: Key Benefits for Submissions Analysis

•  Native survey and forms builder with flexible question types and skip logic — no separate survey tool required
•  Automatic AI-assisted issues detection and sentiment analysis on every submission as it arrives
•  Qualitative annotation and quote-tagging tools that integrate directly with report generation
•  Stakeholder register links every submission to a stakeholder profile for instant disaggregation by type, sector, or geography
•  Custom reporting dashboards with charts, summaries, and scheduled export — accelerating report production
•  Complete auditable interaction history provides a defensible record for regulatory and legal purposes
•  Automated workflows escalate high-priority submissions to the right team member in real time
•  Integration with engagement platforms and email tools consolidates all consultation data in one place

Practical Considerations for Administration

Consultation Period

The consultation period should be long enough for respondents to read the document and prepare a considered response. For complex technical documents, a minimum of four to six weeks is generally appropriate. Shorter periods disadvantage professional stakeholders who may need to seek internal approval before submitting. Avoid scheduling the close date during public holidays or periods of peak workload for target stakeholder groups.

Promotion and Distribution

A structured questionnaire only generates useful data if it reaches the relevant stakeholders. A targeted distribution strategy is essential, including direct notification to known stakeholders via email, publication on the organisation’s website and social media channels, and engagement with peak bodies and representative organisations who can distribute to their members. Consider whether an online engagement platform might usefully complement the questionnaire by directing a broader audience to it — this hybrid approach captures the audience reach of platforms while preserving the analytical rigour of the questionnaire.

Confidentiality and Privacy

Be clear with respondents about how their information will be used, whether responses will be published, and whether identifying information will be removed. Many professional stakeholders actively want their submissions attributed to their organisation, while individuals may prefer anonymity. Design the questionnaire to accommodate both, with explicit questions about publication preferences. Ensure the process complies with applicable privacy legislation.

Combining with Targeted Interviews or Workshops

For very complex documents or consultations where deep expertise is essential, consider supplementing the questionnaire with targeted interviews or workshops with key stakeholders. This allows for exploration of issues in greater depth than a questionnaire permits and can generate insights that inform both the questionnaire design (if conducted beforehand) and the interpretation of questionnaire results (if conducted alongside or after).

Summary

Structured questionnaires are the most effective tool for gathering and analysing feedback on complex documents. They ensure respondents engage with the content, produce data that can be systematically compared and analysed, accommodate both professional and community respondents, and provide a transparent and auditable process. The case studies in this guide illustrate vividly what is at stake: well-designed structured processes have saved projects from costly design errors and enabled technically important concerns to reach decision-makers; poorly designed or inadequately analysed processes have produced misleading reports, overturned decisions and, in at least one case, a successful legal challenge.

When supported by a purpose-built platform like Simply Stakeholders, the submissions analysis process becomes significantly more efficient and rigorous. Automatic issues detection and sentiment analysis accelerate the initial triage of responses; qualitative annotation and reporting tools streamline the production of analysis reports; stakeholder segmentation enables disaggregation by type and influence; and a complete auditable record protects the organisation against future challenge. Together, these capabilities address the core risks identified in the failure case studies — poor campaign identification, inadequate expert submission weighting, and indefensible reporting methodology.

Online engagement platforms remain valuable for awareness-raising, community engagement, and reaching broad audiences. Where both are used together, the questionnaire should be the authoritative source of analysable submission data, with the engagement platform serving to drive participation and public awareness, and Simply Stakeholders providing the analytical backbone that turns raw responses into defensible, decision-ready insights.

Simply Stakeholders is a smart (but simple) stakeholder relationship management tool that makes it easier than ever to analyze your stakeholders and visually map them based on 6 different criteria, as well as your stakeholder relationships.

Not only that, but our built-in AI-powered sentiment analysis enables up-to-date insights on your stakeholders throughout the entire project.

Interested in learning more? Take a look at our other features or contact us to request a demo.

 
