Natural Language Processing in Edtech

Natural language processing (NLP), a branch of machine learning, is embedded in our everyday lives: it powers search in our web browsers, corrects our spelling and grammar, and predicts the next words as we type an email.
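To make the spelling-correction example concrete, here is a minimal, illustrative sketch of how a corrector might pick the closest dictionary word to a misspelling. The tiny vocabulary and the distance-only scoring are hypothetical simplifications; real tools also weigh word frequency and context.

```python
# Illustrative spell-correction sketch: suggest the dictionary word
# closest to a misspelling by edit (Levenshtein) distance.
# The vocabulary and scoring here are hypothetical, not any product's method.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def suggest(word: str, vocabulary: list[str]) -> str:
    """Return the vocabulary word closest to the misspelled input."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

vocab = ["grammar", "spelling", "sentence", "language"]
print(suggest("grammer", vocab))  # -> grammar
```

Production spell checkers layer frequency models and keyboard-distance heuristics on top of this kind of distance measure, but the core idea is the same.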

The last ten years have seen rapid development, from home assistants to bots helping to answer our customer-service questions; yet this technology was not always widely accepted in our culture. Think back to when we first started speaking with bots on a bank website: it was not always plain sailing. That is hard to imagine now, when fintech is one of the most significant users of such technology to improve its service offerings.

That same process is starting to emerge in edtech as educators adopt tools such as Grammarly and Turnitin. The use of NLP in areas such as assessment marking is going through the same phase that chatbots went through in customer service. Institutions and awarding bodies are still debating the ethics of such a process, even as a growing body of research shows automated scores closely matching those of human markers.

Those who work in the vocational education space will be all too aware of some of the core issues teachers face: large caseloads of learners and the burden of standardisation, both of which affect the quality of teaching. NLP in assessment marking can help with these problem areas and create ways of working that increase tutor contact time. The aim is not to remove the teacher but to support the teacher's role, reducing workload and allowing more time for feedback to the end user: the learner.

One cultural issue the sector still needs to solve is the worry that it is not a human marking the work, even though many of the myths around automated marking have been debunked. Once the culture within the awarding bodies changes (and it is heading that way), we can start to see such technology improve the way we deliver education.

Outcourse is one more step towards this. We use our assignments from BRITEthink and Learning Care Hub as datasets to train a model against national awarding-body standards, producing a support tool that can help educators.
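As a very rough illustration of the "support tool, not replacement" idea described above, a marking assistant can surface the submissions that diverge most from a model answer so a human marker reviews them first. The sketch below uses simple bag-of-words cosine similarity; the example texts, threshold, and function names are hypothetical and are not the Outcourse model, which is trained on real assignment datasets.

```python
# Minimal, illustrative sketch of similarity-based assessment support.
# NOT a production marking model: texts, threshold, and names are hypothetical.
from collections import Counter
import math

def bag_of_words(text: str) -> Counter:
    """Lower-case the text and count word occurrences."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def flag_for_review(learner_answer: str, model_answer: str,
                    threshold: float = 0.3) -> bool:
    """Flag submissions that diverge strongly from the model answer,
    so a human marker looks at them first (supporting, not replacing,
    the teacher)."""
    sim = cosine_similarity(bag_of_words(learner_answer),
                            bag_of_words(model_answer))
    return sim < threshold

model = "safeguarding means protecting learners from harm and abuse"
print(flag_for_review(
    "safeguarding is about protecting learners from harm", model))  # -> False
```

A real system would use trained language models rather than word counts, but the workflow is the same: the tool triages, and the teacher keeps the final say.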