Important questions can get swept aside when the rumble of excitement about future possibilities is loudest. Nowhere is this clearer than in the emerging arena of learning analytics.
The fact that we can track information just about anytime, anyplace and on any device boggles the mind with possibilities. But it also raises a few questions: Does the fact that we will eventually be able to collect, track, link, visualize, tokenize, correlate, factor or parse data pulled through APIs mean that we should? When we can collect everything, will we know what to look for when applying the various analysis techniques in our research protocols? Will we recognize the answers we are looking for when we see them forming in the mists or hanging from the branch of a CHAID tree?
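To make the point concrete, here is a minimal sketch in Python of the kind of activity-stream record such tracking produces, and how trivially it can be aggregated. The record fields and values are hypothetical, loosely modeled on xAPI-style statements; this is an illustration of how cheap collection has become, not a reference implementation.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical activity-stream records: who did what, to which learning
# object, and when. Field names are illustrative, not a real schema.
statements = [
    {
        "actor": "learner-1138",
        "verb": "completed",
        "object": "course/safety-101/module-3",
        "score": 0.92,
        "timestamp": datetime(2013, 4, 2, 14, 7, tzinfo=timezone.utc),
    },
    {
        "actor": "learner-1138",
        "verb": "abandoned",
        "object": "course/safety-101/module-4",
        "score": None,
        "timestamp": datetime(2013, 4, 2, 14, 31, tzinfo=timezone.utc),
    },
]

# Counting and correlating is the cheap part.
verb_counts = Counter(s["verb"] for s in statements)
print(verb_counts)  # e.g. Counter({'completed': 1, 'abandoned': 1})

# The hard part is the question above: once "abandoned" shows up in the
# counts, what does the enterprise actually DO about it?
```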
Perhaps even more to the point: will we be able to do anything meaningful with what we find? What are we going to DO once we know?
The notion that all learning should be tracked, that there is value in capturing activity-stream-level information, or that Hadoop-like pattern-seeking technologies will serve the learning enterprise in meaningful ways must be weighed against what that capability is really worth.
It also presumes that once we DO recognize new patterns of loss, momentum and opportunity, we will actually be able to do something about what we have found.
But most people underestimate how long it will take the learning world at large to treat data as a friend, and to see the results of analyses as benchmarks for better understanding performance rather than as red flags requiring intervention. One can only wonder how many enterprises will be slow to collect learning data, worried that they cannot afford the Burden of Knowing what that data will reveal.