Premium Campaign Stack

Surveys

The survey application is the backbone of the consulting system.

Every serious consulting recommendation starts with structured data collection. Our survey application is built to run CATI, CAPI, technical, objective, subjective, and mixed-format questionnaires in one controlled stack, with sampling, instrument design, field control, QA, and statistical correction working together.

Survey system tree
Tree diagram showing sampling, instrument, field operations, quality control, weighting, and insight.
People and control loop
Workflow diagram showing supervisor, caller, enumerator, respondent, and quality review loop.

Operational Methods

Choose the mode based on speed, depth, supervision, and risk.

Reliable survey operations are not single-mode by default. We choose CATI, CAPI, CAWI, or hybrid execution based on coverage, respondent behavior, and deadline constraints.

CATI (Phone Surveys)

Best for: Fast turnaround, centralized supervision, short trackers

Use when

  • Rapid political pulse and quick market sizing
  • Customer feedback loops and frequent trackers

Strengths

  • Live supervision, call recordings, standardized scripts
  • Real-time quotas and centralized ops control

Risks

  • Non-response and rushed answers
  • Social desirability bias in sensitive topics

Controls

  • Disposition codes and refusal-conversion scripts
  • Call scheduling logic, audio audits, anomaly flags, and back-check calls
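
As a sketch of how call scheduling logic can consume disposition codes, the snippet below maps each code to a retry delay and an attempt cap. The codes, delays, and cap are illustrative assumptions, not a fixed production scheme.

```python
from datetime import datetime, timedelta

# Illustrative disposition codes and callback delays in hours;
# real schemes and timings are project-specific.
CALLBACK_DELAY = {
    "NO_ANSWER": 4,      # retry later the same day
    "BUSY": 1,           # retry soon
    "SOFT_REFUSAL": 48,  # pause, then route to refusal-conversion script
}
TERMINAL = {"COMPLETE", "HARD_REFUSAL", "INELIGIBLE", "BAD_NUMBER"}

def next_attempt(disposition, attempted_at, attempts, max_attempts=5):
    """Return the next scheduled call time, or None if the case is closed."""
    if disposition in TERMINAL or attempts >= max_attempts:
        return None
    delay = CALLBACK_DELAY.get(disposition, 24)  # default: next day
    return attempted_at + timedelta(hours=delay)

print(next_attempt("NO_ANSWER", datetime(2024, 5, 1, 10, 0), attempts=1))
# 2024-05-01 14:00:00
```

A scheduler like this is what keeps refusal handling scripted rather than left to individual caller judgment.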

CAPI (In-Person / Assisted Digital)

Best for: Higher coverage, better comprehension, longer interviews

Use when

  • Rural or low-internet environments
  • Complex questionnaires and long-form sections

Strengths

  • Better respondent attention and stronger completion rates
  • Richer probing and assisted comprehension

Risks

  • Interviewer influence and falsification risk
  • Location fraud or weak field compliance

Controls

  • GPS and timestamp checks, route validation, paradata review
  • Supervisor ride-alongs, spot audits, re-contact verification, consent capture where needed
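
The GPS and timestamp checks can be sketched as a simple paradata flagger: interviews that finish implausibly fast or were captured outside the assigned area get flagged for audit. The field names, minimum duration, and radius threshold below are assumptions for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_interview(rec, assigned_lat, assigned_lon,
                   min_minutes=8, max_radius_km=3.0):
    """Return paradata flags for one interview record.

    The record keys (duration_min, lat, lon) are assumed field names,
    not a fixed schema.
    """
    flags = []
    if rec["duration_min"] < min_minutes:
        flags.append("too_fast")
    if haversine_km(rec["lat"], rec["lon"],
                    assigned_lat, assigned_lon) > max_radius_km:
        flags.append("outside_assigned_area")
    return flags

print(flag_interview({"duration_min": 4, "lat": 28.70, "lon": 77.10},
                     assigned_lat=28.61, assigned_lon=77.21))
# ['too_fast', 'outside_assigned_area']
```

Flagged cases feed the spot-audit and re-contact queues rather than being silently dropped.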

CAWI / WhatsApp / Link-Based Self-Serve

Best for: Low-cost scale for digitally reachable audiences

Use when

  • Urban audiences, product feedback, community panels
  • Rapid iteration and lower-cost response collection

Strengths

  • Low agent cost and automated skip logic
  • Fast deployment and reminder-based follow-up

Risks

  • Coverage bias and drop-offs
  • Low attention or duplicate participation

Controls

  • Attention checks, anti-duplication rules, time thresholds
  • Reminder workflows and short-form optimization
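
A minimal sketch of how attention checks, anti-duplication rules, and time thresholds combine into one screening pass. The field names, thresholds, and attention item are illustrative assumptions.

```python
def screen_responses(responses, min_seconds=60,
                     attention_key="q_attention", attention_expected="agree"):
    """Partition self-serve responses into kept vs rejected.

    Rejection reasons: completed too fast, failed the attention item,
    or a repeat submission from the same device fingerprint.
    """
    seen_devices = set()
    kept, rejected = [], []
    for r in responses:
        if r["seconds"] < min_seconds:
            rejected.append((r["id"], "speeder"))
        elif r.get(attention_key) != attention_expected:
            rejected.append((r["id"], "failed_attention_check"))
        elif r["device_fp"] in seen_devices:
            rejected.append((r["id"], "duplicate_device"))
        else:
            seen_devices.add(r["device_fp"])
            kept.append(r)
    return kept, rejected

responses = [
    {"id": 1, "seconds": 240, "q_attention": "agree", "device_fp": "a1"},
    {"id": 2, "seconds": 31,  "q_attention": "agree", "device_fp": "b2"},
    {"id": 3, "seconds": 180, "q_attention": "agree", "device_fp": "a1"},
]
kept, rejected = screen_responses(responses)
# kept ids: [1]; rejected: speeder (2), duplicate device (3)
```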

Hybrid (CATI + CAPI + Digital)

Best for: Representativeness and speed under tight timelines

Use when

  • Heterogeneous populations and state-wide studies
  • Projects where one-mode purity creates avoidable bias

Strengths

  • Better coverage and faster quota completion
  • Reduced reachability gaps across respondent types

Risks

  • Mode effects and higher operational complexity
  • Need for stronger coding and harmonization discipline

Controls

  • Mode harmonization rules and calibration weighting
  • Mode-specific scripts with common definitions and coding logic
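
A minimal sketch of mode harmonization: mode-specific raw captures are mapped into one common coding frame before analysis, so a CATI keypress, a CAPI shortcode, and a CAWI label all land on the same code. The answer labels below are invented for illustration.

```python
# Mode-specific answer capture mapped to a common coding frame.
COMMON_CODES = {
    "cati": {"1": "approve", "2": "disapprove", "3": "no_opinion"},
    "capi": {"YES": "approve", "NO": "disapprove", "DK": "no_opinion"},
    "cawi": {"approve": "approve", "disapprove": "disapprove",
             "not sure": "no_opinion"},
}

def harmonize(mode, raw_value):
    """Map a mode-specific raw answer to the shared code; None if unmapped."""
    return COMMON_CODES[mode].get(raw_value)

print(harmonize("capi", "DK"))  # no_opinion
```

Unmapped values returning None get surfaced in coding review instead of being silently recoded.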

Statistical Methods

So the results do not lie, drift, or overstate weak segments.

Sampling design, sample-size calculation, margin of error, weighting, trend control, and model-assisted estimation are built into the methodology layer before interpretation starts.

  • Simple random
  • Systematic
  • Stratified
  • Cluster
  • Quota
  • Oversampling
  • Design weights
  • Raking
  • Trimming
  • Significance testing
Sampling and estimation map
Diagram showing sampling approaches, margin of error, weighting, significance testing, and estimation flow.

Sampling Approaches

We select simple random, systematic, stratified, cluster, quota, or oversampling frameworks depending on list quality, geography, subgroup importance, and operational constraints.

Sample Size, Margin of Error, Confidence

Sample size is computed against required confidence levels, expected variability, and acceptable error at both headline and subgroup levels so thin cuts are not over-interpreted.
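
For a proportion, the headline calculation follows the standard formula n = z²·p(1−p)/e², with an optional finite population correction. A minimal sketch:

```python
import math

def sample_size(moe=0.03, confidence_z=1.96, p=0.5, population=None):
    """Required n for a proportion at a given margin of error.

    p=0.5 is the conservative (maximum-variance) default; pass the
    target population size to apply the finite population correction.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / moe ** 2
    if population:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size(moe=0.03))                    # 1068
print(sample_size(moe=0.05, population=20000))  # 377
```

The same calculation is repeated per subgroup, which is what exposes thin cuts before they get over-interpreted.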

Weighting and Correction

Design weights, non-response adjustments, post-stratification or raking, and weight trimming are used when the achieved sample drifts from the target population.
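
Raking (iterative proportional fitting) can be sketched as repeatedly scaling weights so each variable's weighted margins hit target proportions. This is a bare illustration without trimming or convergence diagnostics.

```python
def rake(weights, data, targets, max_iter=50, tol=1e-6):
    """Rake weights to match target margins on one or more variables.

    data: list of dicts of respondent attributes.
    targets: {variable: {category: target_proportion}}.
    """
    w = list(weights)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, props in targets.items():
            total = sum(w)
            for cat, target_prop in props.items():
                cur = sum(wi for wi, d in zip(w, data) if d[var] == cat)
                if cur == 0:
                    continue  # empty cell: nothing to scale
                factor = (target_prop * total) / cur
                max_shift = max(max_shift, abs(factor - 1))
                w = [wi * factor if d[var] == cat else wi
                     for wi, d in zip(w, data)]
        if max_shift < tol:
            break
    return w
```

In practice the raked weights then pass through trimming so no single respondent dominates an estimate.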

Significance, Trend, and Driver Analysis

We use significance testing, trend smoothing, segmentation, and regression or classification logic to separate signal from noise and identify meaningful drivers.
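
For tracker movement, the workhorse check is a two-proportion z-test: is the wave-over-wave shift larger than sampling noise would produce? A minimal sketch:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-sample z statistic for a difference in proportions
    (e.g. approval in wave 1 vs wave 2 of a tracker)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(0.47, 1000, 0.42, 1000)
print(abs(z) > 1.96)
# True: a 5-point move on n=1000 per wave clears the 5% threshold
```

The same logic, with weighted effective sample sizes in place of raw n, underpins the subgroup significance flags in the dashboards.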

Model-Assisted Estimation

For booth, ward, or other small-area reads, calibrated model-assisted estimation can be used when direct sample size per unit is too small, always with clear uncertainty limits.
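
The core idea can be illustrated with a deliberately simple precision-weighted shrinkage: a thin booth-level read is pulled toward its parent-area mean, with less shrinkage as the booth's own sample grows. This is a stand-in for a fitted small-area model, and the prior-strength constant k is an assumption.

```python
def shrink_estimate(unit_mean, unit_n, parent_mean, k=50):
    """Blend a thin small-area estimate with its parent-level mean.

    k acts as an assumed prior strength: larger k means more
    shrinkage toward the parent mean at a given unit_n.
    """
    w = unit_n / (unit_n + k)
    return w * unit_mean + (1 - w) * parent_mean

# A booth with only 10 interviews reading 0.70 is pulled toward the
# assembly-segment mean of 0.52:
print(round(shrink_estimate(0.70, 10, 0.52), 3))  # 0.55
```

Reporting the shrunken estimate alongside its uncertainty band is what keeps booth-level reads honest.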

Survey Flow Engineering

We build the flow carefully and correct it when live signals show drift.

A survey is a funnel. If the funnel leaks through bad sequencing, skip failures, or fatigue, the sample shifts and the estimate drifts. We treat flow like a product system.

  • Keep the first minute easy to reduce early drop-off
  • Place sensitive questions after trust is established
  • Reduce cognitive load by keeping one idea per question
  • Use neutral phrasing and avoid loaded options
  • Break long grids into smaller modules
  • Rotate options where order effects are likely
  • Use consistent recall windows
  • Keep skip logic simple enough for reliable execution
Flow correction loop
Diagram showing live signal detection, wording revisions, skip updates, retraining, and version control.

Skipping

Skip logic is enforced by the tool wherever possible so routing does not depend on agent memory.
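
Tool-enforced routing can be sketched as declarative rules the instrument evaluates itself, so the agent never decides where to jump. The question IDs and conditions below are invented for illustration.

```python
# Declarative skip rules evaluated by the tool, not the agent:
# (after answering `question`, if `condition` holds, jump to `target`).
SKIP_RULES = [
    ("q1_registered", lambda a: a.get("q1_registered") == "no", "q5_issues"),
    ("q3_decided",    lambda a: a.get("q3_decided") == "no",    "q5_issues"),
]

def next_question(current, answers, order):
    """Return the next question id, applying the first matching skip rule."""
    for q, cond, target in SKIP_RULES:
        if q == current and cond(answers):
            return target
    i = order.index(current)
    return order[i + 1] if i + 1 < len(order) else None

order = ["q1_registered", "q2_voted_last", "q3_decided",
         "q4_party_lean", "q5_issues", "q6_demo"]
print(next_question("q1_registered", {"q1_registered": "no"}, order))
# q5_issues: unregistered respondents skip the voting-history block
```

Keeping the rules as data also makes routing testable and versionable, which is what the flow correction loop depends on.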

Probing

Probes are standardized and neutral. They clarify meaning but never push respondents toward an answer.

Prompting

Prompting is used only for comprehension support, never for persuasion or answer shaping.

Refusal Handling

Refusal handling is scripted, ethical, and designed around respectful recontact or rescheduling, not pressure.

Quality System

Controls at design, field, and data layers keep the survey honest.

Quality is not a checklist. It is a prevention, detection, and correction system that reduces avoidable error before, during, and after fieldwork.

Quality timeline
Timeline diagram showing prevention before field, detection during field, and correction after field.

Before Field (Prevention)

We prevent avoidable errors before launch through cognitive review, pilots, role-play training, clear codebooks, and back-translation checks.

During Field (Detection)

We monitor live dashboards, listen-ins, spot visits, paradata anomalies, quota pace, and audit trails while the survey is running.

After Field (Correction)

We run back-checks, cleaning rules, coding checks, weighting, sensitivity review, and a documented QA report after closure.

Bias Control

Bias is managed actively, not assumed away.

We actively reduce interviewer, questionnaire, sampling, routing, coverage, recall, translation, and processing bias through design discipline and audit-led controls.

  • Agent behavior is monitored and standardized
  • Questionnaire design is tested before scale
  • Weighting and processing choices are documented and auditable
Bias matrix
Matrix diagram showing survey bias sources and the corresponding control mechanisms.

Agent / Interviewer Bias

Issue: Tone, paraphrasing, selective probing, or over-helping can reshape the response.

Control: We use tight scripts, standardized probes, monitoring, retraining, audio audits, and performance scorecards.

Questionnaire Bias

Issue: Leading, loaded, or double-barreled questions force distorted answers.

Control: We use neutral wording, split questions, balanced options, and cognitive testing.

Skipping / Routing Bias

Issue: Incorrect routing pushes respondents into the wrong sections and corrupts the instrument.

Control: We enforce tool-led skips, validations, mandatory checks, and simplified agent interfaces.

Non-Response Bias

Issue: Unreachable people may systematically differ from reachable respondents.

Control: We use callback schedules, mixed modes, reachability analysis, and non-response weighting.

Coverage Bias

Issue: A single mode can exclude parts of the target population.

Control: We improve frames, use hybrid modes, and report explicit caveats and corrections.

Social Desirability and Recall Bias

Issue: Respondents may answer what feels acceptable or remember events incorrectly.

Control: We sequence sensitive items carefully, use shorter recall windows, anchoring, privacy assurances, and self-admin modes where appropriate.

Order, Translation, and Processing Bias

Issue: Question order, language shifts, and inconsistent cleaning choices can distort outcomes.

Control: We use option rotation, back-translation, documented coding rules, dual coding, and reproducible audit-led pipelines.

Deliverables

What you get at the end of the survey program.

The output is not just a dataset. We deliver the instrument, operations structure, methodology note, reporting layers, and a documented integrity view.

Consulting-Ready Reports

Leadership summaries, issue rankings, respondent cuts, voter movement patterns, and recommendation briefs built from validated survey data.

Dashboards and Deep Dives

Interactive dashboards, trend views, booth or segment breakouts, and deeper analytical outputs that support both war-room reviews and long-form strategy work.