Your stakeholder consultation evidence was there — the survey responses, the findings, the numbers. Yet the evaluator still marked your Relevance score down. You have no idea why.
This post draws on Lesson 2.3 from Module 2 — European Evidence Sources and Data Literacy in the KA2NA course. In this lesson, we explore why stakeholder consultation evidence so often fails evaluators, and exactly what a compliant methodology statement must include to protect your score.
For more information, see the Needs Analysis resources. The AI Agent Node community shares practical guidance on what evaluators look for in this area.
Why Stakeholder Consultation Evidence Gets Marked Down
Most applicants describe their consultation in general terms. They write that they gathered feedback and confirmed a need. However, this approach tells the evaluator nothing about how that feedback was collected or who provided it.
A missing instrument description raises immediate doubt. Furthermore, if you do not state who responded — from which roles, sectors, and countries — the evaluator cannot judge whether the sample was representative. The consultation section looks like a claim, not evidence.
Planned consultations left unlabelled cause further damage. What reads as completed evidence may in fact describe something that has not happened yet. When an evaluator spots this, confidence in your entire Needs Analysis drops — not just in the consultation section.
What Evaluators Expect From Your Methodology Statement
A transparent methodology statement requires three specific elements. The first is the instrument type — whether you used a survey, structured interview, or focus group. The second is the respondent profile, covering role, sector, and country. The third is the geographic scope of the consultation.
Additionally, the distinction between completed and planned consultation must be explicitly labelled. Completed consultation must state the period it took place. Planned consultation must include a projected timeline. Without this labelling, evaluators cannot verify the quality of the evidence you are presenting.
The 2026 Erasmus+ Evaluator Guide is specific about this requirement. Notably, applications that describe findings vaguely — writing that “most respondents agreed” rather than stating a percentage — consistently score lower on the Relevance criterion. Numbers are required. Impressions are not enough.
Stakeholder Consultation Evidence and the Percentage Rule
Every finding must be presented in percentage-based language. Stating that 74% of respondents reported a specific skills gap, for example, carries real evidential weight. A phrase like “many stakeholders noted the same issue” carries none.
This is not a stylistic preference — it is a formatting requirement. Moreover, percentage-based findings allow evaluators to cross-reference your consultation data against the broader evidence in your Needs Analysis. Without numbers, that verification is simply not possible.
The good news is that this is a learnable process. Once you understand the exact components required, writing compliant consultation evidence becomes repeatable and reliable. You do not need to guess what evaluators want — because the framework tells you precisely.
The Shift That Changes Your Application
There is a structured way to meet every transparency requirement from the very start. Knowing the correct format before you begin your consultation — rather than trying to fix gaps afterwards — makes the difference between evidence that holds up and evidence that gets questioned.
The KA2NA training gives you direct access to that framework. Join a community of practitioners building Erasmus+ applications with real evidence structures that evaluators can verify and trust.
Conclusion
In conclusion, stakeholder consultation evidence is only as strong as the transparency behind it. Without a clear methodology, correct labelling, and percentage-based findings, even a well-conducted consultation will not protect your Relevance score. Join our Training Waiting List to gain access to the exact framework that evaluators expect.