Evaluating a learning platform

Perhaps the biggest unknown when choosing an online learning platform for your organization is whether people will actually use it. And beyond that: will they like it and find it useful? These are questions of user experience (often abbreviated as UX). For decision-makers this topic is a beast of a problem, as it is equal parts vague and important.

We often see buyers and evaluators try to assess the user experience of a learning platform during the purchasing process, but this usually produces platitudes and hand-waving instead of useful information. As a UX person, I have some insight into getting the answers you need to make good decisions. This is your hands-on guide to getting those answers.

Why UX matters when selecting a learning platform

First things first. Why should you care about UX?

Whatever term you use for the user experience of a learning platform (intuitiveness, usability, ease of use, perceived value, and end-user value are common rough synonyms), the effects on your business are the same.

Good user experience in the platform will positively affect:
  • How much the platform is used (which is good, unless you are billed by usage :))
  • Whether learners persist to the end of your course
  • Whether learners recommend your services to others
  • Whether learners are likely to make additional purchases

In a nutshell, learners and educators will not separate the learning platform from your content and pedagogical design; they experience them as one. So don’t waste your investment in content and pedagogy with a bad learning platform choice.

Good metrics

Okay, so what data should you collect, and how?

Piloting with a representative population of learners and educators is by far the best way to tell if the product is a good fit. When piloting, actually try the new affordances the platform creates; don’t just copy over your existing learning process. You wouldn’t buy a car without test-driving it first, and during the test drive you would also try the features your current car doesn’t have.

Collect feedback on the overall experience as well as on pain points, and actually use it to inform your decision; we often see organizations pilot learning platforms and then disregard the end-user feedback anyway. We recommend asking specifically about the following factors.

  • Do they understand how to use the platform? (measures perceived ease of use)
  • Do they feel they trust the platform? (measures perceived trustworthiness)
  • Do they feel they have the necessary skills and information to use the platform? (measures perceived capability)
  • Do they feel the platform is useful? (measures perceived usefulness)
  • Do they like the platform? (measures enjoyment and attitude)

These questions have been shown to be good predictors of actual use in e-learning systems (see the sources below).
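Once the pilot feedback is in, it helps to summarize it per factor rather than eyeballing raw responses. A minimal sketch, assuming each respondent rated every factor on a 1–5 Likert scale; the factor names, sample data, and the 3.5 follow-up threshold are illustrative, not from the article:

```python
# Aggregating pilot feedback on the five factors above.
# Assumption: each respondent rated every factor on a 1-5 Likert scale.
from statistics import mean

FACTORS = ["ease_of_use", "trustworthiness", "capability", "usefulness", "enjoyment"]

def summarize(responses):
    """Return the mean score per factor across all respondents."""
    return {f: round(mean(r[f] for r in responses), 2) for f in FACTORS}

# Invented sample responses for illustration.
pilot = [
    {"ease_of_use": 4, "trustworthiness": 5, "capability": 3, "usefulness": 4, "enjoyment": 4},
    {"ease_of_use": 2, "trustworthiness": 4, "capability": 3, "usefulness": 3, "enjoyment": 2},
]
scores = summarize(pilot)
print(scores)

# Factors scoring below an (arbitrary) threshold flag areas to probe
# further before deciding.
weak = [f for f, s in scores.items() if s < 3.5]
print("investigate:", weak)
```

Low-scoring factors don’t automatically disqualify a platform, but they tell you exactly where to dig with follow-up interviews.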


Of course, piloting and user tests are not always possible, usually because of money or time constraints. In that case, the following questions will give you some indication of the UX focus of the learning platform provider. They are intentionally concrete, to leave as little wiggle room for ambiguous answers as possible.

Ask about the ratio of designers to developers in the product team.

In a UX-focused team, there should be at least one designer for every five developers (a 1:5 designer-to-developer ratio) and, of course, the more designers the better (to a point, though we rarely see the opposite problem). Count combined roles (like a UX developer) as half. Front-end developers should almost never be counted as design resources, save for some unicorns.
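The counting rule above is easy to get wrong in conversation, so here is a small sketch of it; the staffing numbers are made up for illustration:

```python
# Checking the designer-to-developer ratio described above.
# Assumption: a combined role (e.g. a "UX developer") counts as half
# a designer and half a developer.
def design_ratio(designers, developers, combined=0):
    """Return designers per developer, counting combined roles as 0.5 each."""
    d = designers + 0.5 * combined
    dev = developers + 0.5 * combined
    return d / dev

# Hypothetical team: 1 designer, 8 developers, 2 UX developers.
ratio = design_ratio(designers=1, developers=8, combined=2)
print(f"{ratio:.2f} designers per developer")
print("meets the 1:5 bar:", ratio >= 1 / 5)
```

In this made-up example the two combined roles lift the team just over the 1:5 bar, which is exactly the kind of borderline case worth asking the provider about.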

Ask how many updates were made to the product during the last year.

In a truly agile and user-focused team, expect at least five major updates per year; size does not substitute for frequency. When counting, disregard bug fixes and security updates if release notes are available. Also make sure you will get continuous updates after the purchase, instead of being stuck with whatever version you buy.
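If the provider shares release notes, the counting rule above can be applied mechanically. A sketch under assumptions: the entry format and the keywords used to filter out maintenance releases are invented, not a real release-note schema:

```python
# Counting major updates from release notes, ignoring bug fixes and
# security patches as suggested above. Entry format and keyword
# filtering are assumptions for this example.
releases = [
    ("2023-02-01", "New discussion tool"),
    ("2023-03-10", "Bug fix: broken quiz timer"),
    ("2023-05-20", "Redesigned course dashboard"),
    ("2023-07-02", "Security update"),
    ("2023-09-15", "Mobile app offline mode"),
]

def major_updates(entries):
    """Keep entries that are not bug-fix or security releases."""
    skip = ("bug fix", "security")
    return [title for _, title in entries
            if not title.lower().startswith(skip)]

majors = major_updates(releases)
print(len(majors), "major updates:", majors)
```

In practice you would skim the notes by hand, but the principle is the same: count substantive changes, not maintenance releases.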

Ask about overall end-user feedback from organizations similar to your own.

Ask the prospective provider for statistics, not quotes or testimonials. It is especially useful to get an idea of customers’ and end users’ willingness to recommend the product. Results from formal usability tests are also a good indicator, but are rarely available.
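One common way such willingness-to-recommend statistics are reported is a Net Promoter Score. The article doesn’t prescribe NPS specifically; this is just a sketch of how the number is typically computed, with invented sample ratings:

```python
# Net Promoter Score: respondents rate 0-10 how likely they are to
# recommend the product; 9-10 are promoters, 0-6 are detractors.
def nps(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return round(100 * (promoters - detractors) / len(ratings))

# Invented ratings from eight hypothetical end users.
print(nps([10, 9, 8, 7, 6, 9, 10, 3]))
```

Whatever metric the provider uses, ask how it was collected and from whom; a score gathered only from administrators tells you little about learners.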

Bad metrics

There are, of course, many pitfalls and seemingly good questions that don’t really have anything to do with what you are trying to measure. Here are some common fallacies:

The number of matching feature bullet points has little to do with how useful the product is in actual use. Added value is created in a dialogue between the user’s needs and the opportunities the platform creates; these are not separate phenomena. Feature lists are important, but they should not trump results from real user testing.

Asking about the design approach is too vague. There is too much wiggle room here: any team can claim to be user-centered, mobile-first, agile, lean, or whatever else is the current hot topic.

Crude usability metrics, such as the number of clicks required to reach any point in the product, are minuscule parts of the bigger picture. They are definitely part of the user experience, but should not be mistaken for the whole of it.

First impressions from people who are not representative of end users (i.e. learners or educators) are unreliable. IT experts and administrators have important insight into the process, but they can’t be expected to predict the experience of a different kind of user.

Red flags during the sales process

Here are some worrying behaviors we sometimes see from representatives of platforms where the user experience has been an afterthought. These should help you weed out the worst choices from a UX standpoint.

When asking about user experience, pedagogy, or end-user feedback, the discussion is directed back to a “more substantive” topic, such as administration, infrastructure, pricing, or information security.

The sales deck has no real content about the learner experience or the pedagogical affordances created by the product, only a few buzzwords that don’t correspond to any features you see in the product demo.

The provider requires a sizeable commitment and investment before you can run a pilot or trial project with the platform.


Trying a prospective learning platform in a real scenario with actual learners and educators is the way to go, if at all possible. If it isn’t, ask concrete questions about end-user feedback, the provider’s design staff, and their product development schedule, with as little wiggle room for interpretation as possible.

Hopefully, this UX cheat sheet is useful to you! Let us know if you have other useful tips or insights.


Relevant sources

Lee, Y. H., Hsieh, Y. C., & Chen, Y. H. (2013). An investigation of employees’ use of e-learning systems: applying the technology acceptance model. Behaviour & Information Technology, 32(2), 173-189.

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27, 2067–2077.

Zaharias, P. (2009). Comprehensive usability design in the context of e-learning: A framework augmenting ‘traditional’ usability constructs with instructional design and motivation to learn. International Journal of Technology and Human Interaction, 5, 37–59.


About the Author

Miska Noponen worked as a user experience and concept development lead at Claned Group. He is a multi-talented UX designer, an ardent pedagogue, a computer programmer, and a lifelong learner.