How Not to Evaluate the Impact of Leadership Development

This informal CPD article ‘How Not to Evaluate the Impact of Leadership Development’ was provided by Dr. Maria Loumpourdi, Founder & Managing Director of Made From Within, an organisation that specialises in delivering leadership, team, and personal development programmes.

When organisations invest in leadership development, they understandably want to see a return. But how we evaluate that return is just as important as the development itself, and often, it’s done poorly.

Let’s start with the go-to model in most organisations: Kirkpatrick’s Four Levels of Evaluation (1). It’s been the gold standard for decades, offering a simple and practical way to assess training outcomes. And to be clear: this model isn’t the enemy. It provides a helpful framework, especially for measuring the depth of training impact at individual and organisational levels.

But when it comes to leadership development, relying solely on this model can be misleading, ineffective, and even counterproductive.

The Four Levels: A Quick Recap

  1. Reaction – Did participants like the training?
  2. Learning – What did they learn?
  3. Behaviour – Are they doing anything differently?
  4. Results – Has the business improved as a result?

Sounds logical. So, what’s the problem?

1. Mistaking Positive Reactions for Impact

Leadership development isn’t about whether participants liked the facilitator or enjoyed the group discussions. Yet, too often, that’s the headline finding from post-training feedback forms.

Satisfaction ≠ engagement. And engagement isn’t about enjoying a session; it’s about being mentally, emotionally, and behaviourally involved in the development experience. True engagement means participants are reflecting, practising, experimenting, and applying leadership principles, not just ticking smiley faces on an evaluation form.

Bottom line? Feedback forms are fine, but don’t equate them with real impact. They tell you the delivery didn’t flop, not that transformation occurred.

2. Overvaluing Quiz Scores as Proof of Learning

Many organisations assess learning through pre- and post-tests, often multiple choice. But memorising leadership theories doesn’t make someone a better leader.

You can score full marks on transformational leadership theory and still struggle to inspire your team. Equally, someone might flunk the quiz but be a phenomenal people leader.

In today’s world, where AI can recall any theory in milliseconds, the value of memorising content is fading. What matters now is not what leaders know, but how they apply that knowledge to build trust, vision, and action.

3. Assuming Behaviour Change Is Easily Measurable

Evaluating behaviour change is trickier than it looks. Sending out follow-up surveys 3–12 months after the programme, asking “To what extent has communication improved?” may sound solid, but it’s fraught with issues.

  • Ambiguity: What’s the real difference between “some extent” and “moderate extent”?
  • Bias: People tend to overrate themselves or conform to what they think is expected.
  • Shifting standards: After learning more, participants may rate themselves lower than before, masking genuine improvement.
  • Attribution errors: Any change might be due to a new manager, shifting priorities, or other unrelated factors.

A more reliable approach is to use open-ended questions in interviews or surveys to get specifics. These qualitative insights tell a richer story, revealing not just whether behaviour changed, but how and why.

Also: don’t overlook barriers. Often, the biggest obstacle to applying new skills isn’t the participant’s willingness; it’s the organisation itself. Lack of time, poor support, or a toxic culture can derail even the best leadership programmes.

4. Chasing Business Results Without Context

Level 4 evaluation focuses on business outcomes: productivity, profit, employee or customer satisfaction, retention, and so on. It answers the classic “So what?” question. But here’s the problem: leadership doesn’t operate in a vacuum.

Attributing business outcomes to a development programme is a complex, messy business. Was the improvement due to the leadership programme? Or did the company just land a major new contract? Or did a change in leadership reset the tone from the top?

We also need to stop tying leadership performance too tightly to job performance. Just because someone missed a quarterly target doesn’t mean they’re a bad leader. Maybe the market dipped. Maybe the goal was unrealistic. Or maybe they’re a brilliant leader stuck in a flawed system.

Leadership is about people, vision, and influence; those don’t always translate neatly into short-term numbers.

A Smarter Way Forward

None of this is to say you should abandon the Kirkpatrick model in training evaluation entirely. It can be a valuable tool in some cases. But it’s just that: a tool, not a strategy.

If you want to truly evaluate leadership development:

  • Focus on engagement, not just satisfaction.
  • Prioritise application and behavioural change over time instead of recall.
  • Use qualitative data to enrich behavioural evaluation.
  • Understand and remove (organisational) barriers to skill application.
  • Evaluate outcomes with context, not in isolation.

Leadership development is complex and nuanced. So your evaluation approach should be, too.

We hope this article was helpful. For more information from Made From Within, please visit their CPD Member Directory page. Alternatively, you can go to the CPD Industry Hubs for more articles, courses and events relevant to your Continuing Professional Development requirements.

REFERENCES

  1. https://www.ardentlearning.com/blog/what-is-the-kirkpatrick-model