Evaluating work-based learning (WBL) is increasingly proving to be a messy business! With a range of developments to evaluate, I need to focus the planning of evaluation structures at a level which attends to what we want to know, why we need to know it, and what process we must go through to gather and analyse the data. The messy bit is making choices about data collection, since initiatives vary by size, longevity, commercial sensitivity and teaching and learning arrangements (e.g. on campus, blended, third-party delivery). I like neat data; in a symmetrical world I would conduct parallel surveys for each case. However, the data collection decisions will have to be made on a case-by-case basis, using whatever means necessary to obtain meaningful data. I did so like the days of controlled experiments, when everything fitted into neat boxes 🙂

I have created a pattern of topics to address within four broad headings: evaluating teaching and learning, evaluating impact upon learners, evaluating impact upon employers, and evaluating impact upon college staff. Data will be sought to address each area.

Then, for each work-based learning initiative, a process to steer the evaluation (click to view):
evaluation process

It will be interesting to see how much consistency can be found in the methods of data collection across different WBL initiatives.

I have always liked Bassey's term ‘fuzzy generalisations’. When evaluating WBL, I think comparisons between initiatives can only ever be fuzzy: imperfect, but highly useful.