1) The last decade has seen the rise of authoring tools that require less time and skill to create learning content, and the amount of that content is growing very quickly.

For many of us who remember 2008, this was the rise of the rapid authoring tools and the beginning of making it easier for subject matter experts to create learning content themselves. Create a PowerPoint you liked, convert it to a course with one of these tools, and you were done: the resulting trackable course was ready to run on your company’s LMS. Times have changed, though. Those tools were priced in the region of $500 to $1,000 per author, which both limited the amount of content produced by the small number of “authors” who felt comfortable using them and acted as a natural barrier, reserving content creation for those in the company considered worthy of such tools.

Now we are seeing the rise of all manner of content creation vehicles. Not only do more and more learning systems include authoring within them, but non-learning tools are being used for one-to-one and one-to-many job training as well: blogs, web-based manuals, PDFs, video shot on phones, and even the Slack messages, emails, or texts people write to answer a question. Meanwhile, slide decks in PowerPoint and Google Slides are not diminishing in use but growing. Much of this content may not take the form of trackable courses, but because it is so much easier to share and invites direct participation, these collaboration tools are how many people chiefly interact with their colleagues and, by extension, learn their jobs.

2) What we as micro learning vendors are loath to admit: there is a LOT of bad content mixed in with the good.

As learning content and learning system providers, we never want to think that the content produced by our customers is less than ideal for their learning audience. But bad content is the flip side of unleashing extremely easy-to-use authoring and collaboration tools. To date, this has been held in check by allowing only those deemed qualified to create, update, or manage content. As the amount of learning material grows, though, it only adds to the burden, time, and challenge of curating a company’s learning content.

3) What the learning content vendors never tell you: all content needs to be constantly updated and kept current.

Back when LMS vendors and third-party content vendors were focusing their content primarily on compliance and generalized training, using on-demand learning as a replacement for existing classroom training, few were concerned with the effort required to maintain and update the learning content produced. It still needed to be kept current; there was simply a lot less of it. Today all that new micro learning needs to be maintained as well. But as we all know, few of us have the time to do so, especially those in most learning departments. Even third-party content vendors run into this issue: as their libraries grow, so too does the effort and time required to keep every title relevant.

4) Current methods of measuring and optimizing learning content are no longer effective. They are good practice; it’s just getting harder to follow them.

Adopting a content strategy in which your team measures when content has passed its tipping point of usefulness, and then optimizes it through revision or review, sounds like a great idea. The limitation is the time such a strategy takes: increasingly, there are not enough team members with the time and resources to do this effectively across the myriad of learning content within your company.
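To make the scale problem concrete, here is a minimal sketch of what an automated version of such an audit might look like: flag any item that has not been updated recently or whose usage is trending down. The catalog, field names, and thresholds here are all hypothetical, not drawn from any particular LMS; a real system would pull this data from its reporting API.

```python
# Hypothetical staleness check for a learning content catalog.
# All field names and thresholds are illustrative.
from datetime import date, timedelta

catalog = [
    {"title": "CRM basics",        "last_updated": date(2016, 3, 1),
     "views_prev_quarter": 420,    "views_this_quarter": 90},
    {"title": "2018 benefits FAQ", "last_updated": date(2018, 1, 15),
     "views_prev_quarter": 150,    "views_this_quarter": 160},
]

MAX_AGE = timedelta(days=365)   # flag content older than one year...
USAGE_DROP = 0.5                # ...or whose views fell by half

def needs_review(item, today):
    stale = today - item["last_updated"] > MAX_AGE
    declining = item["views_this_quarter"] < USAGE_DROP * item["views_prev_quarter"]
    return stale or declining

for item in catalog:
    if needs_review(item, today=date(2018, 6, 1)):
        print("Needs review:", item["title"])
```

Even a check this simple still leaves a human to act on every flag, which is precisely the time most learning teams do not have.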

5) The big content collaboration vendors recognize this already. 

Back in September of 2017, Box.com, one of the world’s largest enterprise content collaboration platforms, announced it would be applying natural language processing (a form of artificial intelligence) to all 30 billion files it manages for its clients. The goal is to automatically identify, classify, and categorize the billions of photos those clients have stored, and to quickly organize these unstructured forms of content. Apple, Google, Amazon, and Dropbox of course have similar goals and processes. All of them realize that expecting us “humans” to manage this content deluge is not reasonable.

We need look no further than Amazon for success in managing a vast amount of varied content. As shoppers on their site, we have access to millions of book and product SKUs. Amazon applies three techniques: 1) social rating, what you experience on social sites like Facebook, measured through “likes,” shares, and votes; 2) collaborative filtering, based on users’ past actions; and 3) semantic analysis, which extracts the meaning of small segments of content and then, applying machine learning, analyzes the relationships between those segments for future suggestion or recall of specific content. Amazon combines all three to position the right book or product based on our behavior and our peers’ experiences, as well as a semantic understanding of the product page we’re viewing. This has doubtless taken many thousands of person-hours and petabytes of data to refine. But over the past few years we have also seen “unsupervised” AI-driven algorithms, known as unsupervised deep learning approaches, that can automatically categorize and organize content without the need for labeled training data and the time required to produce it.
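To see what these techniques look like in code, here is a minimal sketch, assuming numpy and scikit-learn, of two of the approaches just described: item-based collaborative filtering over a tiny user–rating matrix, and unsupervised clustering of content descriptions. Everything here is toy data illustrating the general ideas, not Amazon’s actual systems.

```python
# Toy collaborative filtering and unsupervised content clustering.
# Hypothetical data; requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# --- Collaborative filtering: rows are users, columns are items;
# values are ratings, with 0 meaning "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Item-to-item similarity, computed purely from users' past actions.
item_similarity = cosine_similarity(ratings.T)

# Score unrated items for user 0 by similarity-weighted ratings.
user = ratings[0]
scores = item_similarity @ user
scores[user > 0] = -np.inf        # exclude items already rated
print("Recommend item:", int(np.argmax(scores)))

# --- Unsupervised grouping: no labels are supplied; the model
# discovers topic clusters from the text alone.
docs = [
    "onboarding checklist for new sales hires",
    "sales pitch deck and objection handling tips",
    "expense report policy and reimbursement steps",
    "travel expense approval workflow",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
print("Cluster labels:", labels.tolist())
```

The clustering step is the “unsupervised” part: no labels or training data are supplied, and the algorithm discovers the groupings on its own, which is what makes these approaches attractive for organizing large, unlabeled content libraries.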

Thankfully, as the Amazon and Box.com examples above illustrate, this overall content curation and optimization problem is being tackled on many fronts. Automated solutions like these will certainly become part of the learning content curation landscape as well.