Tuesday, June 9, 2015

Beat E-Learning Fraud with The Recipe for Creating Successful E-learning




By Leon O'Ware
I once went for an interview at a fancy "new media" agency in the City of London, for an e-learning author job. I was at the top of my game then and was able to recite the recipe I'm about to give you off the top of my head. On hearing my method, my interviewer said "oh no, we don't work like that" and then ran a mile.

Sitting in a coffee bar later, I wondered how they had a business model if ensuring that learners learn was not at the very top of their priorities. Their attitude was also "fire and forget": build the e-learning as a one-off project and just get it out of the door, fast. Mulling this over led me to realise that they had a business because they were providing a service which clients wanted. That, in turn, led me to understand that clients saw e-learning as a single point solution and had no idea it could be made provably educationally effective on a test audience before it ships – you just have to gather the right metrics and tweak the thing before deployment. Later, it dawned on me that the e-learning market was going to be awash with "PowerPoints with questions" and stuff that you just can't learn anything from in a million years.

I was 33 at the time (48 now) and, after all the work that I'd put into learning e-learning, I was gutted. I knew that the technology would be wasted by these "new media" wotnots et al. However, by then I'd had a much, much bigger realisation and that, dear reader, you're going to have to earn; not because I'm a control freak or a power junkie or anything like that, it's just that you have to see it. So I've constructed this little journey below in the hope that you can see it, get it and then use it. However, as Deep Thought said in Douglas Adams' book "The Hitchhiker's Guide to the Galaxy": "You're not going to like this, you're really not going to like this".

Sadly, my predictions as a thirty-something came to pass and a whole new market of ineffective e-learning was spawned – and everyone and their uncle thought that they could do it.

Personally, I'm a very efficient learner and yet it took me three years to learn and master what's in this article, and that's with e-learning as a big chunk of my 10% research and development time at the office. I was, apparently, the first person to deploy e-learning in any major international law firm in the world, and I got huge push-back from the training department because, when they looked at e-learning, they saw what you see – a fraud, from an educational perspective.

However, my e-learning training started with the guys who build it for the UK and US military and for global corporations – and they can prove that their stuff works before you deploy it. They can do this because they use a recipe that builds a particular structure; one that's not only proven to be educationally effective, but is also maintainable and scalable. This structure consists of "content / question set pairings" that relate directly back to a Syllabus item, via Courses and the Modules that the Courses contain. On top of that, the system is self-monitoring and allows individualised learning pathways, with almost zero input from you.
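
To make that structure concrete, here is a minimal sketch (in Python, with names of my own choosing – treat them as illustrative rather than as any particular product's API) of the Syllabus → Course → Module → content / question set pairing tree:

```python
from dataclasses import dataclass, field

@dataclass
class QuestionSet:
    """Five questions that test one learning nugget (the industry standard)."""
    questions: list

@dataclass
class Nugget:
    """A 'content / question set pairing' tied to one Syllabus item."""
    syllabus_item: str          # the Syllabus point this pairing covers
    content: str                # the teaching content itself
    question_set: QuestionSet   # the five questions that test it

@dataclass
class Module:
    title: str
    nuggets: list = field(default_factory=list)

@dataclass
class Course:
    title: str
    modules: list = field(default_factory=list)

@dataclass
class Syllabus:
    title: str
    courses: list = field(default_factory=list)

    def trace(self, syllabus_item: str):
        """Walk the tree and report where a Syllabus item is covered."""
        for course in self.courses:
            for module in course.modules:
                for nugget in module.nuggets:
                    if nugget.syllabus_item == syllabus_item:
                        yield course.title, module.title
```

The point of the tree is exactly that traceability: every pairing answers to one Syllabus item, and every Syllabus item can be traced down through Courses and Modules to the pairings that cover it.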

There are two logical pre-requisites for reading this recipe, dear reader: first, you must be familiar with the functionality of an e-learning creation (or "authoring") system and, second, you must be familiar with the functionality of a Learner (or Learning) Management System – otherwise this would have to be a techy article and not an e-learning article. It doesn't matter which systems you're familiar with, but you may want to evaluate yours (or the one that you're thinking of) against what follows:

The System View of E-learning

I'd like to give you an overview of how the e-learning system that this recipe creates works as a system, and where it will probably always fail miserably – giving you that all-important "Oh no, I've gone too far" error check – just so that you won't be wasting your valuable resources building e-learning for inappropriate contexts.

Right up first, though, I have to give you a health warning: once you "get" this system, you are in danger of becoming überly geeky about the whole thing and going for perfection. Don't – it will keep you up at night. Given its "moving feast" nature, it's far better to approach the e-learning that you've just created as a management issue, as opposed to a project-based "point solution". If you do actually do what's written below, know first that you're in it for the long game, so don't peak too early and keep your initial project modest, so that you can really hone your process – and, in my experience, it takes 10 iterations (even in simulation) of any complex process to absolutely button down that process to the most efficient it can be. There are more jumps in your understanding of the system's efficiency, but they're the subtle ones, found hundreds, if not thousands, of iterations down the line, and by then you're into "Big Data" processing. E-learning, done properly, is anything but cheap: just look at its "externalities" (i.e. things that are not necessarily costed up with e-learning creation, such as video creation, translation services etc.) – these are expensive.

The “Education (Pedagogy) Bit” – The Context for all of this


If you cast your mind back to when compulsory education was introduced, you'll remember that learning by rote (i.e. continually repeating the same thing) was an important part of the education system. Moving forward, as educators have tried to "engage learners" to be more enthusiastic about "education", rote learning has mysteriously fallen by the wayside.

Today, the facts speak for themselves: whereas, before the introduction of compulsory education in the U.S.A., black literacy rates were in the high nineties, percentage-wise, today they are in the low teens (if memory serves). The statistics here in the U.K. also show a large decline in literacy, where we are now in a situation of having about half of our teenagers unable to read or write. It would appear trite, indeed, to pin these two phenomena down to the lack of rote learning but, actually, it's not.

Let me explain, by taking you on a journey of creating military grade e-learning and showing you the thought processes that went into creating it, in the first place.

Before compulsory education in the U.S.A., black kids were taught to read by sitting on their grandparents' knees and being read vast tomes like Moby Dick, with the grandparents pointing to each word as they read. So, in this system, when you see the word "mellifluous", say, for the thousandth time, you recognise it – and that's, obviously, rote learning.
Here in the UK, my generation learned to spell by rote. The current generation is taught "phonics" and their spelling is all over the place. Our teenagers now routinely confuse "to", "too" and "two", for example. Another common one is "there", "they're" and "their". I'm picking on literacy because the English language is particularly horrible to learn and students are rarely told that English is actually two languages: one written and one spoken. If you don't believe me, consider the words "bough", "cough", "plough" and "trough". Given these last four words, how is a learner to know the pronunciation of the word "thorough" on first seeing it? See what I mean? In English, you can be sealing a ceiling or be allowed to speak aloud. How confusing is that?
E-learning allows learners to learn by rote in subtle ways, and you can see how with the following diagram of the e-learning structure that's created by the recipe.

Structure and Function - It’s Really a Question of Questions


As you can see from the diagram above, the tests test the cumulative knowledge of the learner, so you'll have to think about testing in more breadth and depth as the journey through the Modules in a Course progresses. The industry standard is that you have five different questions for testing each learning point (or "learning nugget", as they're now called, mmm).

This enables you to deliver random versions of the same tests to different students, and to the same students across different Modules. However, the larger the Course gets, the more times a student will encounter identical questions, and this is a form of rote learning that helps them to actually get the content into their heads. The technical term in pedagogy (taken from operant conditioning) is "reinforcement", but it's done in such a way – and this occurs naturally, structurally, in this system, so you don't ever have to worry about it – that the learner never feels patronised or "set up"; five questions is deemed the minimum to give the learner a positive "aha" moment, rather than leading them to feel manipulated (as is the case when only three questions are offered), and to give you the minimum amount of data needed to actually test the tests (see later).
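
As an illustration of how that randomisation might work – a minimal sketch, assuming a bank of five questions per nugget as described above, with hypothetical nugget names of my own – the test builder simply samples one question per nugget each time a test is generated:

```python
import random

# Hypothetical question bank: five different questions per learning nugget.
QUESTION_BANK = {
    "light-the-gas": ["Q1", "Q2", "Q3", "Q4", "Q5"],
    "fill-the-pan":  ["Q1", "Q2", "Q3", "Q4", "Q5"],
    "time-the-egg":  ["Q1", "Q2", "Q3", "Q4", "Q5"],
}

def build_test(nuggets, rng=random):
    """Pick one question at random per nugget, so repeated sittings
    (and later Modules that re-test earlier nuggets) vary naturally."""
    return {nugget: rng.choice(QUESTION_BANK[nugget]) for nugget in nuggets}

# A post-test for one Module, and a pre-test for the next Module that
# deliberately re-tests the earlier nuggets as well (the reinforcement).
post_test = build_test(["light-the-gas", "fill-the-pan"])
pre_test_next = build_test(["light-the-gas", "fill-the-pan", "time-the-egg"])
```

Because later Modules keep drawing from the same pools, the learner inevitably meets familiar questions again, which is the reinforcement described above.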

The really amazing and beautiful thing about this type of e-learning is that its level of reinforcement scales up with the increasing complexity of the Syllabus, Courses and Modules – the learner keeps encountering the same content and the same questions, and that reinforces their learning. The opposite is also true: if things are trimmed from a Syllabus, then they are trimmed from the e-learning system, hence the phenomenon of "reinforcement" is diminished in smaller, less arduous Courses. This stuff scales, both up and down, so the productivity of your effort is maximised.

The big secret is that e-learning is a means of delivering operant conditioning (hence the big studies into gamification and their outcomes, like the ice hockey example mentioned in step three of the recipe, below) and of delivering it very cheaply, to anyone with a web connection – and that, properly done, it really does work.

Pre-tests test the suitability of the learner to enter the Module and post-tests test whether the learner has actually learned anything from the Module itself. In real life, pre-test questions are generally composed of questions from the previous Modules' post-tests, plus questions that weed out the fully competent. This allows learners to bypass the Module (or not, as the case may be) – thereby allowing them to learn at the fastest rate that they can.

The optimal time between taking the post-test of one Module and the pre-test of the next (which, of course, tests, and therefore reinforces, previous Modules' learning nuggets too) is considered to be a fortnight for the most effective learning to occur (though this is rarely practised in the real world, due to time pressure). However, the bottom line is that people do build on what they know and, if you don't leave any logical pre-requisites (more on that later) out of your system, it will work, providing that your content is accurate, complete, illuminating, illustrative, instructive or just mind-bogglingly good enough to get the point across – and the recipe allows you to check that you've got the point across, by constantly testing the validity of your tests.

As an aside, I've found that if you adhere to the golden ratio in your design and give the learner a thing of real beauty to look at – something out of the norm and "special" in terms of interface design – then you will always have a positive engagement with the system from the learner's perspective; learning becomes a treat, i.e. a reward mechanism in itself, within a wider operant conditioning strategy. You can see why the militaries love this stuff now, can't you? E-learning is Citizen Kane on steroids – if you do it right.

Anyway, back to the point: the "ideal" result for the test of a test is that a bell-shaped curve of correct answers is achieved. Several question types have what are called "distracters", and these are used in multiple choice, drag and drop and matching item question types.

The kicker here is that, in order to get a bell-shaped curve of responses to your questions, some answers have to be more correct than others – and you have to be able to measure that objectively. The industry standard for getting valid data from the testing system is that you have at least three distracters and one correct answer in each of the above question types – and although five distracters has been shown to be slightly more effective, think of the pain and cost outlined below.
The "oh no, I've gone too far" error check with these is that, if one distracter is blatantly false, then all you've done is present the learner with one correct answer and two distracters – and that is what makes distracters difficult and extremely "vexatious to the soul" to create... and this is why the big boys continually test their tests: if every learner always gets a particular question correct, then it's probably not testing what you intend it to test (either that or the content is incorrect). Likewise, if every learner always gets a question wrong, the same conclusion – that it is not functioning properly with its content – still stands.
The higher-level rationale here is that all learners learn at different rates, and it's natural that these rates follow the standard population distribution of a bell-shaped curve. So your system must show that, in order to be seen to reflect reality – and, after you've created your Courses, that's your continuing management task, right there: managing the effectiveness of the questions and updating the content, along with the appropriate new tests, so that the whole system, whichever way you look at it, shows a bell-curve distribution across its entire user base and question set, thereby achieving a "fractal".
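
A minimal sketch of that "test the tests" check (my own illustrative code, not any particular LMS's reporting API): compute the proportion of correct responses per question and flag anything that everyone gets right or everyone gets wrong.

```python
from collections import defaultdict

def flag_suspect_questions(responses, low=0.05, high=0.95, min_attempts=30):
    """responses: iterable of (question_id, was_correct) pairs taken from the
    LMS's raw answer data. Flags questions whose correct-rate sits at either
    extreme: they are probably not testing what you intend (or the content
    that pairs with them is wrong)."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for question_id, was_correct in responses:
        totals[question_id] += 1
        correct[question_id] += bool(was_correct)
    suspects = []
    for question_id, n in totals.items():
        if n < min_attempts:
            continue  # not enough data to judge this question yet
        rate = correct[question_id] / n
        if rate <= low or rate >= high:
            suspects.append((question_id, rate))
    return suspects
```

The thresholds are illustrative; the point is that the data to make this judgement falls out of the structure automatically, once every question is tied to a nugget.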

The “fractal” concept is vitally important in the understanding of e-learning and I’m going to walk you through that in a minute. 

Keeping to the point, though, this bell-shaped curve analysis method is particularly useful when assessing the effectiveness and consistency of the marking for essay questions; trying to achieve a bell-shaped curve with your test results is the standard guide used to assess the effectiveness of a test – and this is all thoroughly tested before deployment.

Did you get that? The data that revealed the bell-shaped curve came from those questions with properly constructed distracters, i.e. none with blatantly false options, and that data is now used to map against all other question types – learners who perform well on the distracter questions should, in theory, perform well on the other questions too. If there's a massive discrepancy, then you need to look at that, and this system gives you the data to do just that – look at it and see what's going on.
Notice some things, given this type of setup. First, you can tweak the e-learning to show any type of learner response curve you like. Second, you have a repository of tests for each learning nugget, and these can be included in as many Modules, in as many Courses, as you like – and each one relates directly back to a specific point in a Syllabus. Consider, for example, that the Module for "Resuscitation", say, is included in both the "Fire Officer" and the "First Aider" training Courses. Since the common Module is merely copied at the time it's called from the Course, you only have to update that Module once for it to be updated everywhere.
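
To illustrate that "update once, run everywhere" behaviour – again a minimal sketch with hypothetical names, not a real LMS API – Courses hold references into a single Module registry, and the concrete copy is only materialised when the Course is delivered:

```python
# A single registry of Modules, keyed by name. Courses store only the key,
# so editing the registry entry updates every Course that uses it.
MODULE_REGISTRY = {
    "Resuscitation": {"nuggets": ["recovery-position", "cpr-rhythm"]},
}

COURSES = {
    "Fire Officer": ["Resuscitation", "Evacuation Drill"],
    "First Aider":  ["Resuscitation", "Wound Dressing"],
}

def materialise_course(course_name):
    """Copy the latest version of each referenced Module at delivery time."""
    modules = []
    for module_name in COURSES[course_name]:
        module = MODULE_REGISTRY.get(module_name, {"nuggets": []})
        modules.append({"name": module_name, **module})
    return {"course": course_name, "modules": modules}
```

Edit "Resuscitation" in the registry and both the Fire Officer and First Aider Courses pick up the change the next time they are delivered.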

On a meta-level, pre- and post-tests can be constructed at the Course level, from existing Module question sets, to award a final certificate, for example, or to pre-test a learner into (or allow them to pass by) an entire Course.

The Learning Process


E-learning theory holds that "you build on what you know" – hence pre-tests, to see whether you are actually building on what you know, whether you're going to be way out of your depth, or whether you know this stuff anyway, in which case, move along.

There is a sound neuro-biological basis for this view and here are the fundamentals of it, in plain English: imagine a large pile of tangled electric cables in front of you. Imagine, then, that this pile represents all of the nerves and their connections in a brain (imagine the connections as the points where the wires in the pile touch each other). The brain of a new-born baby has a virtually uniform resistance to the electricity flowing in the wires. Perceptual differentiation is achieved by constantly stimulating a set of wires, because if a particular set is constantly stimulated, then that set changes the resistance between the wires in the group and the group separates itself from the others in the brain.
So a group of wires that's constantly stimulated (rote learning) forms its own clique of communication. This explains why newly qualified drivers find it difficult to listen to the stereo and drive at the same time, while experienced drivers can sing along too: over time, the pattern of driving is reinforced enough that, eventually, it becomes automatic.

This brings us to the learning concepts of "logical pre-requisites" and "scaffolding". If you're allowing someone to learn how to boil an egg, you don't miss out the logical pre-requisite of allowing the learner to learn how to light the gas or to control the electricity; instead, you "scaffold" the learner towards the position of having an egg boiling in a pan, by allowing them to learn the appropriate e-learning content for their cooker.

This is despite the fact that, arguably, lighting gas and playing with electricity are engineering things and not cookery things at all. At the end of this article, I will argue that e-learning became irrelevant in the mid- to late nineties (certainly in business, though not necessarily in academia), and the point above – that a system can present the learner with content appropriate to their circumstances – is a fundamental plank of my argument, for your consideration, so please bear it in mind: if we've got to the point where an e-learning system can adapt to a learner's position in the scheme of things, why can't we ditch the e-learning and just get our day-to-day systems to respond in that way instead? Think about it. If your journey through your smart TV was constructed as an individual learning pathway, you would be able to use the thing right off the bat, and you would boggle at how your kids use it, but they, too, would be able to use it off the bat – the thing would adapt to your respective states of learning.

The rate of travel through the structure shown here ultimately depends on the mental velocity (i.e. speed and direction) of the individual – which also, conveniently, has that bell-shaped curve mentioned above – and you may want to use this data upstream, for filtering out the potentially successful candidates, mainly by referring to your audience profile data (from step 3 in the Recipe) to elucidate who might be successful and by tweaking the questions that you ask up front.
The learner is building up a fractal (of closed areas of electrical resistance) in the structure of their brain, so another rate-limiting factor to individual learning is the number of logical pre-requisites and / or already ingrained bad habits that the learner has (the latter being part of the "direction" component of a learner's velocity). So a "Three Mode" system is common in the e-learning industry: "Show me", "Let me try" and "Test me".

“Show me” puts the e-learning in “playback” mode and you can opt to test the learner (or not) in this mode (purists would, for the data).

"Let me try" generally presents the learner with the situation that they would actually find if doing the Course for real (though with a different question set, maybe?). This enables the learner to test whether their fractal is complete.

Lastly, of course, is “Test me”, where the results achieved become the official level of achievement for a given learner.

The pernickety would say that you should have three different sets of five questions for each learning nugget, one for each of the three modes. This is rarely done in practice, but it does allow you to tweak the effectiveness of the content, so that learning is as fast as possible – and the confidence to go for "Test me" is reached as quickly as possible. However, the e-learning structure that I've just defined is normally considered optimal, since using the same questions for each mode does have an advantage in helping learning, by providing an "aha, that old chestnut" moment, thereby encouraging neuronal reinforcement of the fractal being promoted.
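
A small sketch of how those three modes might be wired up (illustrative names only, not a standard): the mode decides whether answers are recorded as the official result or merely kept as data for the purists.

```python
from enum import Enum

class Mode(Enum):
    SHOW_ME = "show_me"        # playback: content shown, optional scoring for data
    LET_ME_TRY = "let_me_try"  # practice: the real situation, results unofficial
    TEST_ME = "test_me"        # assessment: results become the official record

def record_attempt(learner_id, nugget_id, score, mode, official_results, telemetry):
    """Always keep the data (the purists would); only TEST_ME counts officially."""
    telemetry.append((learner_id, nugget_id, score, mode.value))
    if mode is Mode.TEST_ME:
        official_results[(learner_id, nugget_id)] = score

official, telemetry = {}, []
record_attempt("learner-1", "light-the-gas", 0.8, Mode.LET_ME_TRY, official, telemetry)
record_attempt("learner-1", "light-the-gas", 1.0, Mode.TEST_ME, official, telemetry)
```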

In short, e-learning enables the learner to build up their own fractal, in their own space and time, by having these three modes of delivery. Mathematicians understood the iterative mathematics behind fractals long before they could produce one, because producing one requires millions of calculations on a computer. A fractal is created by putting some data into an equation, taking the result and then putting the result back into the equation, for a few million iterations – and e-learning provides a learner with an optimal facility for an analogous process.

As you know, achieving anything worthwhile takes practice; this is you building up the fractal of electrical resistance in your brain that, eventually, enables you to excel at the task in hand. "Practice" is largely rote learning, and e-learning that allows a learner free access to that practice is powerful – you can see this by watching children's development, e.g. watching them learn to throw and catch a ball – same fractal equation, slightly different data each time. E-learning gives you the ability to deliver that facility to millions of people and to always be on top of how effective any part of the system really is – by testing the tests.

The free access part of e-learning is considered important because learners who are allowed to learn when they want to are much more engaged (and successful) than learners who are allocated time slots for their learning. Deploying this strategy gives an "easy win" when ensuring that the e-learning is as effective as possible.

Personalised Learning


Consider this scenario: you've analysed that e-learning is the appropriate solution for training lots of secretaries in the Organisation's procedures. You decide that everyone will get a Course on understanding the organisation itself. Some secretaries will also get training to use the big photocopiers in the corridor, and yet others will receive training for updating overtime on the payroll system.
 
When Courses and Modules are assigned to groups of learners, in a scenario where learners can belong to many groups, individual learning pathways are created by default. So, in setting the system up this way, you really don't have to worry about individual learning pathways; they appear naturally.
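
As a minimal sketch of that point (invented group and course names, purely for illustration): assign Courses to groups and learners to groups, and each learner's individual pathway falls out as the union of their groups' assignments.

```python
# Courses are assigned to groups, not to individuals.
GROUP_COURSES = {
    "all-staff":        ["Understanding the Organisation"],
    "corridor-copiers": ["Big Photocopier Operation"],
    "payroll-admin":    ["Recording Overtime on Payroll"],
}

# Learners can belong to many groups.
LEARNER_GROUPS = {
    "alice": ["all-staff", "corridor-copiers"],
    "bert":  ["all-staff", "payroll-admin"],
    "carol": ["all-staff", "corridor-copiers", "payroll-admin"],
}

def learning_pathway(learner_id):
    """The individual pathway is simply the union of the learner's groups'
    course assignments - no per-learner configuration required."""
    pathway = []
    for group in LEARNER_GROUPS.get(learner_id, []):
        for course in GROUP_COURSES.get(group, []):
            if course not in pathway:
                pathway.append(course)
    return pathway

print(learning_pathway("carol"))
# ['Understanding the Organisation', 'Big Photocopier Operation',
#  'Recording Overtime on Payroll']
```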

Now that many organisations have many years of question data, modern systems allow the answers to the questions with distracters to trigger jumps through the whole Course. This leads to a virtual environment where the learner can learn at their own rate, through the Course structure, along their own pathway.

Modern e-learning is very compelling on a cultural level, simply because its complete uniformity – which gives you a rational and maintainable system – is masked by an ostensible diversity that allows people to learn what they want, when they want and according to what they already know – and that's the strategic argument for e-learning, right there.

The Techie Bit


It's simple, really: create a tree-like structure from the Syllabus right down to the learning nugget / question set pairings. Take those and order them by logical pre-requisite. Go through the logical pre-requisites and create e-learning for those that are missing (plus the question sets, of course). Tweak the others so that it all flows nicely, and this gives you the foundation of your "rational and maintainable system", mentioned above.

Then start to create your Module structure, and this has to be of the right duration, so the learner can just dive in and learn a bit more whenever they like. Eight to ten minutes is considered optimal (oh, and do tell the learner how much time it's going to take them – at all points in the Module). Naturally, the "time remaining" metric can be calculated and displayed in real time, from the responses measured by the system. Analysing this metric allows you to see bottlenecks in your system, so it's worth collecting the data for how long a learner spends in any part of the system, so that you can speed up the slow spots.
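
A small sketch of that bottleneck analysis (my own illustrative code, assuming the system logs how long each learner spends on each nugget): compute the median dwell time per nugget and surface the slow spots.

```python
from collections import defaultdict
from statistics import median

def slow_spots(dwell_log, threshold_seconds=120):
    """dwell_log: iterable of (nugget_id, seconds_spent) records captured by
    the delivery system. Returns nuggets whose median dwell time exceeds the
    threshold - the candidates for rework or splitting."""
    by_nugget = defaultdict(list)
    for nugget_id, seconds in dwell_log:
        by_nugget[nugget_id].append(seconds)
    medians = {n: median(times) for n, times in by_nugget.items()}
    return sorted(((n, m) for n, m in medians.items() if m > threshold_seconds),
                  key=lambda item: item[1], reverse=True)

log = [("light-the-gas", 45), ("light-the-gas", 60),
       ("time-the-egg", 300), ("time-the-egg", 260), ("time-the-egg", 310)]
print(slow_spots(log))   # [('time-the-egg', 300)]
```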

Then see how your Modules fit into Courses that cover different areas of the Syllabus. Note that, for a Course of any complexity, you're likely to be flowing up and down this tree, re-flowing which content / question set pairs are in which Modules and which Modules are in which Courses.

Further, a "Decision Early Warning System" could be created to monitor when a question set appears to be deviating from its curve, and this is great for monitoring the effects of changes to content / question set pairs and for spotting which pairs are potentially ineffective.
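
A sketch of what such an early warning might look like (hypothetical thresholds and function names, building on the correct-rate idea above): compare each question set's recent correct-rate with its historical baseline and raise an alert when it drifts.

```python
def early_warnings(baseline_rates, recent_rates, max_drift=0.15):
    """baseline_rates / recent_rates: {question_set_id: correct_rate}.
    Flags question sets whose recent performance has drifted from the
    historical curve by more than max_drift - typically seen after a
    change to the paired content."""
    alerts = []
    for qset_id, baseline in baseline_rates.items():
        recent = recent_rates.get(qset_id)
        if recent is None:
            continue  # no recent data yet for this question set
        drift = abs(recent - baseline)
        if drift > max_drift:
            alerts.append({"question_set": qset_id,
                           "baseline": baseline,
                           "recent": recent,
                           "drift": round(drift, 3)})
    return alerts
```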

Where E-Learning Really Doesn’t Work


E-learning really doesn't work for "soft skills", where an un-scripted (though not necessarily unstructured) negotiation is going on between people. Examples of this are: a mortgage broker trying to sell a particular mortgage; a doctor negotiating treatments with a terminally ill patient; a defence lawyer trying to work out how to manipulate their client's testimony in order to get them off the hook; and so on. E-learning is also, in my view, far from ideal for getting across things such as the Rand Corporation's Delphi method and things like Neuro-Linguistic Programming. It also used to be recognised in the industry that about 10% of any population just would not participate in e-learning, for various reasons. The most common one, in my experience, is "I'm not having a machine teach me", though I suspect that this number is a little lower these days; the other most common ones were just a fear of computers in general and what appeared to be a fear of failure. Another aspect where e-learning doesn't work is where learners are overwhelmed with required learning.

The overloading problem has been solved. The optimum amount of time for training and personal research is deemed to be 10% of anyone's time, in order for an organisation to successfully compete and innovate in its particular ecosystem, so the e-learning management system has to be able to drip-feed learners, so that they don't exceed their 10% unless permission is given. This metric doesn't apply to students, of course.
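
A minimal sketch of that drip-feed rule (illustrative only – the 10% figure is the one quoted above, everything else is invented): before assigning more learning, check the learner's time budget for the period.

```python
def can_assign_more(learning_minutes_this_period, working_minutes_this_period,
                    next_module_minutes, cap=0.10, permission_given=False):
    """Drip-feed guard: keep each learner's e-learning time at or below the
    10% budget unless explicit permission has been granted."""
    if permission_given:
        return True
    projected = learning_minutes_this_period + next_module_minutes
    return projected <= cap * working_minutes_this_period

# A 37.5-hour week is 2250 minutes, so the budget is 225 minutes.
print(can_assign_more(200, 2250, 10))   # True  (210 <= 225)
print(can_assign_more(220, 2250, 10))   # False (230 > 225)
```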

System View Conclusion


Looking at the other end of the scale, e-learning really excels at offering the opportunity to learn all things procedural, and it's also brilliant for simulations. Many flight simulators and some underground train driving simulators are based on e-learning software, where the software is actually taking input from and manipulating hardware, so that a pilot, for example, really feels he's doing a victory roll.
Anecdotally, I used to have a nuclear reactor simulator (yes, a real one that was actually used in real life) and I blew the thing up every lunchtime for months on end, until I managed to master it. For all of that time, I was bouncing between "Show me" and "Let me try"; I never actually got to the "Test me" stage. The thing was huge, and it was huge because of the number of scenarios that played out in it, not because of the size of the code.

Note that there is a valid argument for allowing learners to review the learning that they've already been certified for, so that they have the opportunity to stay sharp. You might also want to think of an update strategy here, in the guise of "continuing professional development".

Finally, e-learning doesn't work if it's out of date: in order to work, any e-learning has to be authoritative and should lead on timeliness. A continuing professional development strategy is now considered essential in any e-learning plan, so you'll need to consider how you're going to keep your teams / students sharp.

This level of dynamic re-certification can be achieved simply by using version control on the learning nugget / question set pairings and instructing the Modules to get the latest version of each (i.e. you're changing the contents of a Module). Likewise, version control can be used to change the Modules in a Course (i.e. you're changing the contents of a Course), so the whole thing is fully configurable in a very simple way.
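
A sketch of that versioning idea (an invented structure, to show the shape rather than any particular system): store every version of each pairing, and let a Module either pin a specific version or ask for the latest.

```python
# Every version of each nugget / question set pairing is kept.
PAIRING_VERSIONS = {
    "light-the-gas": {1: "2014 content + questions", 2: "2015 content + questions"},
}

def resolve_pairing(nugget_id, pinned_version=None):
    """A Module either pins a version (for a frozen, certified Course) or
    takes the latest (for dynamic re-certification)."""
    versions = PAIRING_VERSIONS[nugget_id]
    version = pinned_version if pinned_version is not None else max(versions)
    return version, versions[version]

print(resolve_pairing("light-the-gas"))       # (2, '2015 content + questions')
print(resolve_pairing("light-the-gas", 1))    # (1, '2014 content + questions')
```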

However, that’s a lot of versions and a lot of stats to keep track of. Your e-learning system is now in danger of being a client of one of your big data systems – and this just adds to the cost. Do bear that in mind.

The Recipe


I feel that I must rose-tint your glasses before I really go into this. This recipe is so worth it because, if you adhere to it (in no particular order):

·        Your system scales seamlessly into however much hardware you can throw at it – theoretically, you could train the world.

·        You can pin down each point of the Syllabus to one "content / question set pairing" and test its effectiveness.

·        You can recycle any “content / question set pairs” into any Module or Course and update once, run everywhere.

·        Each learning nugget “content / question set pairing” can be monitored for effectiveness.
·        Learners' time bottlenecks are identified in the system.

·        You can demonstrate the effectiveness of each nugget of e-learning and the effectiveness of each Course and Module, against different audience demographics.

·        By allocating learners to Groups, and Groups to Syllabi, Courses and/or Modules (your choice), and allowing learners to belong to more than one group, you guarantee that you deliver individual learning pathways, just by the nature of the thing you've set up. Anyway, by the time you've done this, you'll have created the mother of all rods for your own back and you'll have enough to deal with, so be thankful for that bit of automation.

However, it is all extremely repetitive and stressful to produce and takes a very long time, though the maintenance is as slick as it can be – unlike a "new media" one-off system – and you do get a very large management footprint with this stuff: for each change in the system, you have to go through the whole recipe, dear reader – and that is when you get on to Amazon and order your Deluxe Personal Suicide Kit (Opium Edition) – or you become a world leader in e-learning creation (in 1995, my employer told me that they thought there were probably fewer than 35 e-learning authors in the world doing what I do). I can't lie to you: personally, I encountered the former situation, and alarm bells went off when I wanted to click on that "Buy with one click" button on Amazon, so I went off to design a (still) market-leading dot com (and its products) in the middle of the dot com bust that was engineered by the bankers in the late nineties – which all of the team that I was working with saw coming a mile off, so we avoided all of the bankers' silly shenanigans and just created a successful company instead. An aside that you may derive value from, but let's get down to it.

The recipe consists of 12 steps in this order:

1)      Conduct a "Needs Analysis" to ensure that e-learning is the solution and that you can't just change the rules, or tools, or something, to rectify the error or deficiency. Then come up with a series of needs statements like "18% of learners have never operated a cooker" or "33% of learners don't know that eggs come from chickens" (is the latter important in this "How to boil an egg" Module?). When you do this, always record whether the need is perceived or actual, and when and how you got your information. Then, one day, you'll have all the data you need to see which focus groups, for example, produce the most effective e-learning.

Prioritise the list of needs and then, for each need, write an instructional goal to sort it out, and then prioritise those goals for grouping in the next stage. Instructional goals are things like: "Teach all new interns to make a decent cup of tea", or "Teach all locum doctors the hygiene regulations for this ward".

Then, prioritise your instructional goals – use "Must have", "Should have", "Could have", "Would have" (MoSCoW) or something like "Critical", "High", "Medium", "Low".

This prioritised list of needs, each with its own prioritised instructional goals – which answer the question of how training addresses each need – is your Needs Analysis.
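
As a minimal sketch of what that artefact might look like in data form (purely illustrative field names, not a standard schema): a prioritised needs list, each need carrying its own prioritised instructional goals and a record of where the evidence came from.

```python
needs_analysis = [
    {
        "need": "18% of learners have never operated a cooker",
        "kind": "actual",                  # perceived or actual
        "evidence": {"source": "kitchen-staff focus group", "date": "2015-03"},
        "priority": "Must have",           # MoSCoW
        "instructional_goals": [
            {"goal": "Teach all learners to light and control their cooker",
             "priority": "Must have"},
        ],
    },
    {
        "need": "33% of learners don't know that eggs come from chickens",
        "kind": "perceived",
        "evidence": {"source": "manager survey", "date": "2015-02"},
        "priority": "Could have",
        "instructional_goals": [
            {"goal": "Explain where eggs come from", "priority": "Could have"},
        ],
    },
]

# Sort by MoSCoW priority to decide what gets built first.
MOSCOW_ORDER = {"Must have": 0, "Should have": 1, "Could have": 2, "Would have": 3}
needs_analysis.sort(key=lambda need: MOSCOW_ORDER[need["priority"]])
```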

2)      Write a "Mission Statement" (or "Scoping Statement") for each Course and Module, to determine which bits of a Syllabus a Course will cover and which bits of a Course a Module will cover. By creating this as a tree structure, it is easy to re-scope a Module or a Course and, of course, it allows you to ripple backwards and forwards between the Syllabus and the instructional goals (which will eventually be covered by content / question set pairs), via Courses and Modules, to ensure that you've covered everything and that there's no duplication.

3)      Profile the audience and work out what theme (or themes) you're going to use to present your e-learning – how about: the more questions you get right, the more points you get in your ice hockey game and the higher you go on the company leader board? I kid you not, this was done back in the nineties at an engineering firm in North America – and with great success. This approach is now called "gamification" and there are several great academic papers on it, which are well worth a read. I'll leave you to investigate further.

4)      Now, create the Performance Objectives for each Instructional Goal.

You can write these up as five-part objectives, structured as follows:
i)       Who needs to meet this objective (which group(s) of learners?) e.g. “first year physics students”
ii)      The circumstances under which the performance is to be measured e.g. “given a hypothetical situation the learner will” and “presented with a choice, the learner will”
iii)     Then write a verb that best describes the desired performance e.g. “distinguish” or “identify”
iv)     Next, describe what a learner has to do to demonstrate competency e.g. “between [content specific] and [content specific]”, perhaps “between, say, Hilbert space and any other co-ordinate system”
v)      Finally, specify the level of accuracy learners have to achieve to meet the learning objective. Bear in mind that this can be worked out from the responses to individual questions, as well as across every question that comes with each Performance Objective in the Module, or even across all the questions in a Module or a Course, so beware and really understand what it is that you are actually measuring. The golden rule is to gather all of the data, so that you can make sense of it at your leisure. This is why the NSA and GCHQ behave in the way that they do; they know that this is a standard IT industry approach.

You can see that understanding whether 90% of your students are actually achieving the Goal is not that hard and, besides, this is the initial data for your Decision Early Warning System for monitoring the questions – if learners' performance deviates from what is specified, then the system can notify you, so that you can get on to it.

Now obviously, in a military situation, where people's lives depend on them knowing their training, the level of accuracy will be set very high indeed. However, in a university setting, with a more "ideal" population, you'd want to see the responses to the questions showing a bell-shaped curve. Or maybe you wouldn't – but the important thing is that the choice is yours and the system just goes along with it.
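
A minimal sketch of a five-part Performance Objective as a data record (the field names are my own), along with a check of the accuracy criterion against observed results:

```python
from dataclasses import dataclass

@dataclass
class PerformanceObjective:
    who: str            # i)   which group(s) of learners
    circumstances: str  # ii)  conditions under which performance is measured
    verb: str           # iii) the desired performance, e.g. "distinguish"
    demonstration: str  # iv)  what the learner does to demonstrate competency
    accuracy: float     # v)   required accuracy, e.g. 0.9 for 90%

objective = PerformanceObjective(
    who="first year physics students",
    circumstances="given a hypothetical situation, the learner will",
    verb="distinguish",
    demonstration="between Hilbert space and any other co-ordinate system",
    accuracy=0.90,
)

def objective_met(results, objective):
    """results: per-learner scores (0.0 to 1.0) on the questions that test this
    objective. Here the criterion is the mean score - decide up front whether
    you mean per question, per Module or per Course."""
    return sum(results) / len(results) >= objective.accuracy

print(objective_met([1.0, 0.9, 0.8, 1.0], objective))  # True (mean 0.925)
```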

5)      Lay out the outline of the content of each Module as a list of headings. Fill in any gaps, weed out irrelevancies and then ripple up and down the outline, indenting and out-denting headings until you get the right structure (more or less). It's wise to either do this with a "subject matter expert" (SME) or to actually be the subject matter expert yourself. Finally, work out what the logical pre-requisites are for a person to actually get through your training.

Here, quality control lists are useful – e.g. for a Module involving tools, say, you need to check that the learner is going to know what each tool is called, what it's for and how it's used. Another approach I've used is: let's start from Homo erectus and work out what the logical pre-requisites are for this Course, then chop off the ones that apply universally to your audience, e.g. do they speak the language that your Course is written in? Do they know what a BACS transfer is? Etc.

6)     Lay out the Course map – here you lay out the map of what content / question set pairings go into your Course and then ripple up and down the tree structure until you have Modules of roughly equal duration throughout the Course. Additionally, here you might want to think about such things as Glossaries and listing each Module's learning objectives. A tip here, to reinforce the reinforcement, is to use that old adage used by politicians – tell 'em what you're going to tell 'em, tell 'em, and then tell 'em what you've just told 'em. So you could start with a screen stating "This is what you should learn from this Module" as a bulleted list and finish the Module with "This is what you should have learned from this Module" (the same bulleted list), placed before the post-tests. The belt and braces (but very tedious) back-check here is to ensure that all Instructional Goals have content and vice-versa.
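
That back-check is easy to automate – a minimal sketch, assuming you can list the Instructional Goal each pairing claims to cover (illustrative identifiers only):

```python
def coverage_back_check(instructional_goals, pairings):
    """pairings: {pairing_id: goal_id it claims to cover}. Returns goals with
    no content and content that maps to no declared goal."""
    goals = set(instructional_goals)
    covered = set(pairings.values())
    return {
        "goals_without_content": sorted(goals - covered),
        "content_without_goal": sorted(p for p, g in pairings.items()
                                       if g not in goals),
    }

goals = ["boil-an-egg", "light-the-gas", "make-tea"]
pairings = {"nugget-01": "light-the-gas", "nugget-02": "boil-an-egg",
            "nugget-03": "whisk-an-omelette"}   # stray content, missing goal
print(coverage_back_check(goals, pairings))
# {'goals_without_content': ['make-tea'], 'content_without_goal': ['nugget-03']}
```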

7)     Define the look and feel. This not only applies to visuals, it applies to the learning strategy used, e.g. are you going to "gamify" the thing, are tutors going to be online and at hand, etc. Here, I've found it useful to write a style guide, so that every word is used consistently, e.g. the word "click" is always used for a left mouse click and "right click" is always used for a right click. This ensures that the language is pared down to as few words as possible and that things like "fees" and "payments" are not confused with each other by the learner. The style guide also sets the tone of voice, and the tone that, in my experience, is most effective is the one I'm using now: you're reading this as a one-to-one communication between me and you. So no "oi you lot" type phrases, eh? It's "us", not "us and them" – a dictatorship in e-learning doesn't work, unless you have gulag-level sanctions at play in real life, i.e. outside the sphere of influence of your e-learning, to enforce participation.

8)      Decide on what question types to ask and associate them with the content nuggets. The industry standard is shown below (apologies for the poor scan, it's the best I could do from a very old and very well used book by Gloria Gery, listed below in the suggested reading section):


Then create the appropriate questions for the thing that you're testing. Don't forget: three distracters for each correct answer for the drag and drop, match item and multiple choice questions, and five different questions for each learning nugget.
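
Those two rules are easy to enforce mechanically – a minimal sketch (my own field names) that audits a question bank for five questions per nugget and three distracters per selected-response question:

```python
def audit_question_bank(bank):
    """bank: {nugget_id: [question dicts]}. Each selected-response question
    (multiple choice, drag and drop, match item) is expected to carry one
    correct answer and three distracters; each nugget needs five questions."""
    problems = []
    for nugget_id, questions in bank.items():
        if len(questions) != 5:
            problems.append(f"{nugget_id}: {len(questions)} questions, need 5")
        for q in questions:
            if q["type"] in ("multiple_choice", "drag_and_drop", "match_item"):
                if len(q.get("distracters", [])) != 3:
                    problems.append(f"{nugget_id}/{q['id']}: needs 3 distracters")
    return problems

bank = {"light-the-gas": [
    {"id": "q1", "type": "multiple_choice", "distracters": ["a", "b", "c"]},
    {"id": "q2", "type": "multiple_choice", "distracters": ["a", "b"]},
]}
print(audit_question_bank(bank))
# ['light-the-gas: 2 questions, need 5', 'light-the-gas/q2: needs 3 distracters']
```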

9)      Storyboarding – now lay it all out as you think it should look in real life, i.e. code it up in your authoring system (leaving out the expensive stuff like video and graphics and just putting placeholders there instead) to get a prototype / alpha build that you'll only show to your closest friends and allies – with "show notes", so that they can understand what is going to be in all of the placeholders, e.g. video dialogue transcripts, descriptions of graphics etc. There will be some stinking howlers in your work, just from the sheer volume of the information that you've processed up to this point – especially if late nights and sunny weekends have been involved. You also need to test the technological functionality of your creation with your IT department, so that it goes through firewalls, users have the right authentication / authorisation etc.

10)   Generate the media elements – in my view, this is the only place that "new media" companies belong in the e-learning production pipeline; get them to produce the video, graphics and icons that you need. Bear in mind that, if you're using outside contractors, your whole project could be stalled on contract negotiations, random delays (since you're now beholden to a third party), over-runs, fighting for budgetary approval – and the list goes on.

Since this material is likely to be very expensive and hard won, it's considered normal to hold your media in a "media library" that is catalogued and indexed for future use, so that you can amend the media elements later; you'll want to capture who produced what media and when, to give you the version control that you need throughout the system. Do make sure that you've replaced every placeholder with the real thing, though.

11)   Beta test it – you've now got all of your media and what you think is a finished product (and never forget that the product is software, so you need to test the techy aspects of the thing too, just the same as any IT shop has to). So beta test it amongst your alpha test audience and, when it passes muster with them, throw it out to the "pioneers" and / or "guinea pigs" in your audience, so that they can really hammer the thing and you can start to analyse your question set data on an audience that reflects the demographic that you'll be deploying to.

12)   Deploy and monitor – now you can prove that your system works from both the educational perspective and the technology perspective (and possibly the economic perspective, so it's worth tracking your costs along the way). You've suddenly got that big rod for your own back: proving that your system is continually effective and accommodates updates – bear in mind that any update has to go through this same 12-step process that I've just outlined, so what's your likely update frequency and volume, then? A rod for your back indeed. As I said at the beginning, make your first project modest.

Conclusion


You may be confused as to why I said earlier that I think e-learning is largely irrelevant, certainly in business, though not necessarily in academia, and yet you see that I can expound this method of creating e-learning to you with obvious conviction and experience. There's another level to all of this and it goes back to step one of the recipe – the Needs Analysis: is e-learning really the solution to your problems or is it just your belle du jour?

So, here, I'm going to give you three real-world examples of where e-learning is not the solution – but you'll really baulk at the first one because, ironically, it produced e-learning.

Scenario 1: Produce e-learning to train trainee solicitors (largely Oxford and Cambridge graduates) in shipping finance law. I mentioned to the partner in the law firm concerned that I had absolutely no clue about shipping finance law and that producing this material would take up substantial amounts of her time as I transcribed her expertise into a piece of software. Being a big "name" in the field, she didn't care about that; she just wanted it done and pointed out that, since I was fluent in property finance, shipping finance is just the same stuff applied to boats and, globally, it's all British law anyway, so what was my problem? I had the IT expertise and she had the expertise to vet my stuff – what a great team. However, her stance was: how come I couldn't just build it for her?
As a business analyst, I saw a scenario with "risk" and "re-work" written all over it, so I made the suggestion that I build her an electronic performance support system with which she could create her own e-learning. Being an over-achiever, she loved the idea, and so I built her a system, using the e-learning software (Toolbook Instructor), that enabled her to author educationally robust courses.

At this point, the training department started howling at us about what we were doing, so we made them a bet: we'd do the first two courses and train our trainees in parallel with the training department's classroom training, and then test everyone six weeks later for retention and understanding. Six weeks later, our cohort consistently showed 13% better retention and understanding than those trained by the training department (that was easy for me to remember because my birthday is on the 13th), and e-learning was in and now being taken seriously. However, note that my solution to an e-learning problem was to produce an electronic performance support system, not the e-learning itself.

Scenario 2: Get retail customers to specify the decking requirements for the back of their house and get the system to produce a parts list and instructions for that particular decking, and to select the parts from the warehouse. I once asked a very experienced engineer how long it would take him to come up with these results, given my measurements. Giving this some serious thought, he got through half a pint whilst coming up with the estimate for me. "About four hours", he said. I then told him about the Innovis DesignCentre – a big screen, a trackball and a button. At the time, this system was in use in over 150 retail outlets in the United States, and a customer with a sales assistant could do the job in an average of six minutes. Now, with a slight imperfection in the initial analysis, the conclusion could well have been to train customers in the process that my engineer friend was thinking of.

Scenario 3: Enable property professionals to request the right due diligence searches for any property that they are handling in a property transaction. By way of background, in the UK at the time there were 99 of these searches, from seeing whether the property is over a salt mine to seeing if it is polluted by radon gas from the granite it sits on. We asked the 10 largest law firms in the City of London to monitor their transactions for six weeks and then tell us the average time it took them to do this for each property. I'm mentioning this one because I actually used to train and mentor trainees through this – it was an obvious training need. Or was it?

This was an arduous task and most of the high street lawyers just didn't have the knowledge and the resources to do it properly. The 10 largest firms in the UK took 10 hours per property (we took eight, thanks to my training system, but we didn't tell them that, since we were only the 12th largest law firm on the planet at the time and didn't want to appear to be an upstart) and the process was fraught with errors.

Today, in the UK, the solicitor in the high street can perform this task to the same level of quality and in the same time as the largest law firms. How? I designed an electronic performance support system that was built and is now the market leader for this type of thing in the UK. Anyone who vaguely knows how to do this manually can now do the job to perfection in about seven minutes.

So, given the pain and cost that you're going to go through creating e-learning that actually allows people to learn something, you need to add electronic performance support systems to the mix and compare the economics of those with e-learning. When you do that, you'll see that the opportunities for e-learning are very limited indeed. I now consider e-learning useful for hardware training, e.g. loading a shell into the muzzle of a tank, and for software that doesn't have a set procedure, e.g. using a word processor, where you have a blank canvas with a couple of thousand functions sitting behind the thing that you can't immediately see. Today, I consider changing the system and creating an electronic performance support system before I consider e-learning.

Suggested Reading


Decision Analysis by Howard Raiffa – by far the best book I've ever come across about rippling up and down tree structures in business. Raiffa formalises this process, and learning his technique is invaluable in those "edge cases" on the meta-level where you just can't decide between two choices, e.g. do we give the learner wordage and graphics, or a really expensive video?

 
The Economist Style Guide – Ensures that you cut the vocabulary down to the minimum and keep it predictable for the learner. http://www.amazon.com/Style-Guide-Economist-Books/dp/1610395387/ref=sr_1_1?s=books&ie=UTF8&qid=1431247010&sr=1-1&keywords=The+Economist+Style+Guide
 
Electronic Performance Support Systems by Gloria Gery – this gives you the nous to re-design your systems, instead of investing heavily in e-learning when you don't need to. http://www.amazon.com/Electronic-Performance-Support-System-Gloria/dp/0961796812/ref=sr_1_1?s=books&ie=UTF8&qid=1431247058&sr=1-1&keywords=Electronic+Performance+Support+Systems+by+Gloria+Gery
 
NLP at Work by Sue Knight – personally, I consider using NLP on a population that's not familiar with it to be a form of psychological warfare and, therefore, immoral. However, this one is an essential primer if you're in an environment where NLP is de rigueur, and it allows you to tweak your style guide appropriately. http://www.amazon.com/NLP-Work-Essence-Excellence-Professionals/dp/1857885295/ref=sr_1_1?s=books&ie=UTF8&qid=1431247172&sr=1-1&keywords=NLP+at+Work

Leon O’Ware, May 2015
