Unlocking the potential of technology in education

Agata Czopek, Piotr Pietrzak

Introduction

Over recent years, various attempts to establish technology's place in education have been observed, ranging from the simple digitization of school books, through the „iPad school" initiative in the Netherlands, to dedicated closed tablet solutions in the USA. At the end of 2014 the first adaptive learning solutions were introduced. Knewton's system, using a knowledge graph, started giving learners recommendations on which questions to work on next, provided learning analytics, and offered instructions predicting future results. In 2015 cognitive computing and artificial intelligence systems that use natural language processing and machine learning entered the market, and IBM started working on a comprehensive solution for personalized learning. Practical implementations of those systems can not only improve students' outcomes but also help to close skills gaps and raise employability. This article presents solutions that are being unlocked right now, showing examples of the new value technology creates in education.

The beginning of adaptive learning

Interest in what is now called adaptive learning can be traced back at least to educational research in the 1970s1 and received a boost when the large positive impact of one-on-one tutoring was experimentally quantified. The influential study by Bloom2 reported that the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students who learned via conventional instructional methods. This effect motivated a large body of work in adaptive tutoring research3 and raised a question: if a human tutor can improve learning outcomes so radically, then how can technology unlock a student's potential?

The answer to that question was not easy, due to the complicated nature of the research. Many early tutoring systems, e.g., Aleks (Assessment and Learning in Knowledge Spaces)4, AutoTutor5 and Andes (A Coached Problem Solving Environment for Physics)6, represented years of research effort and specialized in specific domains of study such as mathematics, chemistry, introductory statistics, business and physics. Aleks used the theory of knowledge spaces to develop a combinatorial understanding of the set of topics a student does or does not understand, based on the answers to its test questions. From this assessment, it determined the topics that the student was ready to learn and allowed the student to choose from interactive learning modules for these topics. AutoTutor focused on natural language dialog and used computational linguistics algorithms including latent semantic analysis, regular expression matching, and speech act classifiers. The Andes system encouraged students to construct new knowledge by providing hints, gave immediate feedback after each action to maximize the opportunities for learning and minimize time spent going down wrong paths, gave students flexibility in the order in which actions were performed, and allowed them to skip steps when appropriate.
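
To make the knowledge-spaces idea concrete, consider the sketch below. It is our illustration rather than Aleks code: the topic names and the prerequisite map are invented, and real knowledge spaces are richer combinatorial structures than a simple prerequisite relation.

# Illustrative sketch of the knowledge-spaces idea behind Aleks-style
# assessment (not actual Aleks code; topics and prerequisites invented).

# Each topic maps to the set of topics that must be mastered first.
PREREQUISITES = {
    "counting": set(),
    "addition": {"counting"},
    "subtraction": {"counting"},
    "multiplication": {"addition"},
    "division": {"multiplication", "subtraction"},
}

def ready_to_learn(known: set) -> set:
    """Topics not yet known whose prerequisites are all in `known`
    (the 'outer fringe' of the student's knowledge state)."""
    return {
        topic
        for topic, prereqs in PREREQUISITES.items()
        if topic not in known and prereqs <= known
    }

# A student who has mastered counting and addition is ready for
# subtraction and multiplication, but not yet for division.
print(ready_to_learn({"counting", "addition"}))
# prints {'subtraction', 'multiplication'} (in some order)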

Subsequent research conducted by VanLehn7 compared the effectiveness of human tutoring, computer tutoring, and no tutoring (where „no tutoring" referred to instruction that teaches the same content without tutoring). VanLehn divided computer tutoring systems by the granularity of their user interface interaction into answer-based, step-based, and substep-based systems. It was widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to „no tutoring", the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors were believed to be d = 0.3, 1.0, and 2.0 respectively. VanLehn's review did not confirm these beliefs. Instead, he found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the effect size of intelligent tutoring systems was 0.76, so they were nearly as effective as human tutoring.
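
For reference, the effect size d used throughout these studies is Cohen's d: the difference between the mean scores of the tutored and control groups, expressed in units of their pooled standard deviation (this is the standard definition, not a formula specific to VanLehn's paper):

d = \frac{\bar{x}_{\text{tutored}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}}

On this scale, d = 0.79 means the average tutored student outscored the average control student by 0.79 pooled standard deviations; Bloom's „2 sigma" result corresponds to d = 2.0.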

However, as promising as the results were, the systems created for research were not scalable to the massive amount of content across all educational domains. This limited the educational settings in which they could be used and, by extension, the number of students who could benefit from them. One of the main reasons was the problem of modeling student knowledge and understanding.

The knowledge graph

In 2013 the first adaptive learning solutions were introduced on a wider scale in the USA. Knewton's system gave learners recommendations on which questions to work on next, provided learning analytics for learners, and offered instructions predicting future results.

Knewton used a novel instrument to address the problems of student modeling that focused directly on scalability across content: the knowledge graph. The knowledge graph represented content in a semantic graph structure. This structure provided a way to diagnose student understanding and misunderstanding. It could also power intelligent tutoring strategies and remediate deep misconceptions. The knowledge graph was built using the adaptive ontology, an intuitive and flexible set of objects and relationships that is easy to learn, easy to express content relationships in, and powerful enough to serve as a basis for analytics and adaptive tutoring. Elements in the Knewton adaptive ontology consisted of modules (pieces of content), concepts (abstract but intuitive notions of the ideas that the content teaches and assesses), and relationships between the two. Since concepts were abstract, rather than tied to a particular book or pedagogy, they could be used to describe relationships between any modules, even those existing in different books, subject areas, or school years. One of the many implementations of the system was Adaptive Math, created together with Young Digital Planet, a content provider company.
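
The ontology described above can be sketched in a few lines of code. The following is a minimal illustration of the modules/concepts/relationships structure, not Knewton's implementation; all names are invented:

# Minimal sketch of a knowledge graph in the spirit of the adaptive
# ontology described above (not Knewton's code; names are invented).

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    prerequisites: list = field(default_factory=list)  # other Concepts

@dataclass
class Module:
    title: str
    source: str                                    # book or course
    teaches: list = field(default_factory=list)    # Concepts taught
    assesses: list = field(default_factory=list)   # Concepts assessed

# Concepts are abstract, so modules from different books can share them.
fractions = Concept("fractions")
ratios = Concept("ratios", prerequisites=[fractions])

m1 = Module("Intro to fractions", source="Book A", teaches=[fractions])
m2 = Module("Ratio word problems", source="Book B",
            teaches=[ratios], assesses=[fractions, ratios])

def modules_teaching(concept, modules):
    """Find remediation content for a concept, regardless of source."""
    return [m for m in modules if concept in m.teaches]

print([m.title for m in modules_teaching(fractions, [m1, m2])])
# prints ['Intro to fractions']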

Figure 1. Screenshot of the teacher's dashboard in the Adaptive Math course (a view that allows the teacher to break students into groups defined by different needs and levels of proficiency, and therefore to set different goals for each group)

Source: authors.

Figure 2. Screenshot of the teacher's dashboard in the Adaptive Math course (reporting the student's performance and engagement and predicting results)

Source: authors.

IBM in personalized education

IBM started its journey towards personalized education several years ago. Following the success of the Watson system, which won against humans in the Jeopardy contest powered by advanced analytics and DeepQA8 technologies, and building on the knowledge and experience of IBM researchers, the company decided in early 2014 to establish a dedicated Watson unit and invest heavily in the cognitive computing area. The initial focus was on healthcare, public safety and customer service (May-June 2012). From the education perspective, the best-known project was related to Watson Engagement Advisor for Education. Deakin University, based in Victoria, Australia, adopted IBM's Watson technology to become the first university worldwide to use a cognitive computing platform to enhance the student experience. Using Watson helps Deakin scale its services and offerings to its growing online and international student base. The Deakin solution, built on the Watson Engagement Advisor platform, was trained on over 7000 questions and allows existing and potential students to access a „conversation" service via any mobile device and to be given instant information on matters related to studying at Deakin and beyond, including career information about the suitability of courses of study for careers in particular industries. The application was delivered as a cloud service. For enrolled students, Watson was able to deliver personalised responses to student queries and engage in „conversation" with the student in full knowledge of their existing status and progress at the university. The system was implemented for Deakin's first-year intake at the start of the academic year in February 2015.

The way Watson's basic core elements work is different from what we have grown accustomed to in the era of programmatic computing. Watson finds its responses not through business rules or decision trees, but by hypothesizing, gathering evidence to support or refute those hypotheses, weighing the relative strength of evidence for possible responses, and then sharing its results with full transparency of evidence for the user to consider. Looking at this step by step (a simplified sketch follows the list):

  1. When a question is first presented to Watson, it parses the question to extract its major features.
  2. It generates a set of hypotheses by looking across the body of data for passages that have some potential for containing a valuable response.
  3. It performs a deep comparison of the language of the question and the language of each potential response by using various reasoning algorithms. This step is challenging. There are hundreds of reasoning algorithms, each of which does a different comparison. For example, some look at the matching of terms and synonyms, some look at the temporal and spatial features, and some look at relevant sources of contextual information.
  4. Each reasoning algorithm produces one or more scores, indicating the extent to which the potential response is inferred by the question based on the specific area of focus of that algorithm.
  5. Each resulting score is then weighted against a statistical model that captures how well that algorithm did at establishing the inferences between two similar passages for that domain during the „training period” for Watson. That statistical model can then be used to summarize a level of confidence that Watson has about the evidence that the candidate answer is inferred by the question.
  6. Watson repeats this process for each of the candidate answers until it can find responses that surface as being stronger candidates than the others.
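
A highly simplified sketch of this kind of evidence-scoring pipeline is shown below. It is our illustration of the idea, not IBM's implementation: the two toy scorers and the hard-coded weights stand in for the hundreds of reasoning algorithms and the statistically trained model described above.

# Simplified sketch of an evidence-scoring pipeline in the DeepQA
# spirit (not IBM's code; scorers and weights are invented toys).

def term_overlap_scorer(question: str, passage: str) -> float:
    """Toy scorer: fraction of question terms found in the passage."""
    q_terms = set(question.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / len(q_terms) if q_terms else 0.0

def length_scorer(question: str, passage: str) -> float:
    """Toy scorer: mildly prefer concise passages."""
    return 1.0 / (1.0 + len(passage.split()) / 50.0)

# In DeepQA the weights come from a statistical model fitted during
# the 'training period'; here they are hard-coded for readability.
SCORERS = [(term_overlap_scorer, 0.8), (length_scorer, 0.2)]

def rank_candidates(question: str, passages: list) -> list:
    """Score every candidate with every scorer, combine the weighted
    scores into a confidence, and rank the candidates."""
    ranked = [
        (sum(w * s(question, p) for s, w in SCORERS), p)
        for p in passages
    ]
    return sorted(ranked, reverse=True)

candidates = [
    "Watson finds responses by weighing evidence from many algorithms.",
    "Decision trees encode business rules explicitly.",
]
for confidence, passage in rank_candidates(
        "How does Watson find responses?", candidates):
    print(f"{confidence:.2f}  {passage}")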

Based on this core functionality, Watson presents the answer hierarchically, with the confidence level and the most important documents (classifiers) that influenced the reasoning process at the top of the list. The model can be trained and optimised further. This approach can be treated as a basic pipeline process. All of the surrounding solutions benefit from this core functionality when it comes to natural language processing (NLP).

In late 2015, the Watson for Education business unit was announced. One of the expected solutions will focus on personalised education powered by Watson cognitive technology, in which each and every student will be provided with a tailored curriculum, optimised and profiled for the student based on their past and current results and knowledge-gaining curve.

The core components of this framework are designed to be (a schematic sketch in code follows the list):

  1. Student Information Hub (SIH) that gathers the administrative data about a student and the dynamic interaction data about a student's activities.
  2. Learning Content Hub (LCH) that supports deep analytics across many sources of content, including automated meta-tagging, alignment to curricular standards, and assessment of aspects such as readability and complexity.
  3. Longitudinal population analytics and machine learning methods on student and content data to discover and articulate best practices for a particular cohort of learners, delivered through meaningful visualizations to fit the workflow of a teacher, advisor or learner.
  4. A platform that includes security and authentication, mobile device management, change management and delivery to mobile devices on the cloud.
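
These four components can be rendered schematically as a data model. The sketch below is our illustration only; all class and field names are invented, not IBM's API:

# Schematic sketch of the four components described above
# (our illustration, not IBM's code; all fields invented).

from dataclasses import dataclass, field

@dataclass
class StudentInformationHub:
    administrative: dict = field(default_factory=dict)  # e.g. enrollment
    interactions: list = field(default_factory=list)    # activity events

@dataclass
class LearningContentHub:
    items: list = field(default_factory=list)

    def tag(self, item, standards, readability):
        """Attach curriculum alignment and readability metadata."""
        self.items.append({"item": item, "standards": standards,
                           "readability": readability})

@dataclass
class CohortAnalytics:
    def best_practices(self, students, content):
        """Placeholder for the longitudinal machine-learning analysis."""
        raise NotImplementedError

@dataclass
class DeliveryPlatform:
    auth: str = "oauth"   # security and authentication
    mdm: bool = True      # mobile device management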

This set of components can support a teacher in identifying personalized approaches for at-risk students. As an example, the teacher logs on to her management system and asks to see information about her eighth-grade math class. She sees a visual representation of her students through a number of lenses, including their risk profiles for doing poorly in the class. She can select a particular student to learn more. The analytics provide her with the insight that this particular student does well in her class. She also learns that this student has reading and language challenges in other classes (something she might not know in the normal departmental structure of schools). She asks the system for suggestions about additional content and is shown a very detailed analysis of several different sources (content from publishers, her own materials or materials from other local teachers), rated for applicability but also scored for readability, density of concepts and success when historically used with similar students. The teacher chooses which pieces of additional content to highlight and will also be able to monitor the student's engagement and success with this new content. Not only does this support the teacher in personalizing instruction in a practical way, it also enables her to reach this student without slowing down the rest of the class or exposing the student to scrutiny by peers.
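
The scenario above can be reduced to two small functions: one that flags at-risk students and one that scores candidate content along several dimensions. The sketch below is a toy illustration under invented thresholds, field names and weights, not the actual analytics:

# Toy sketch of the scenario above: flag at-risk students, then score
# candidate content (thresholds, fields and weights are invented).

def at_risk(student: dict, threshold: float = 0.6) -> bool:
    """Flag a student whose predicted pass probability is low."""
    return student["predicted_pass_probability"] < threshold

def score_content(item: dict, student: dict) -> float:
    """Combine readability fit and historical success with
    similar students into a single recommendation score."""
    readability_fit = 1.0 - abs(
        item["reading_level"] - student["reading_level"]) / 12.0
    return 0.5 * readability_fit + 0.5 * item["success_with_similar"]

student = {"predicted_pass_probability": 0.45, "reading_level": 5}
items = [
    {"title": "Fractions with visual models", "reading_level": 5,
     "success_with_similar": 0.8},
    {"title": "Dense fractions workbook", "reading_level": 9,
     "success_with_similar": 0.5},
]
if at_risk(student):
    best = max(items, key=lambda i: score_content(i, student))
    print("Suggested:", best["title"])
    # prints: Suggested: Fractions with visual models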

What needs to be emphasised is that personalized learning is necessary for all students, including high performers, who require differentiated content and instruction to stay motivated and engaged. By focusing on student similarity and cohort analysis, and by analyzing each piece of learning content along many different dimensions to suit different categories of learners, such solutions can help ensure that all learners receive personalized recommendations and guidance.

The second use case for IBM's components focuses on skills and future jobs. A solution can aggregate career opportunities using big data gleaned from the media, job postings, and other sources. The skills and talents associated with these career opportunities can be identified based on successful employees already in these job spaces. Statistical processing can then identify the number of people moving into some of these areas, as well as likely opportunities in a given region and industry. Recommendations can be provided to individuals based on their own propensities, skills and talents, as gleaned from the Student Information Hub. Employees who have successfully transitioned to these careers can be analyzed with respect to their own experiences and training. A particular student and aspiring employee can then be mapped to an already successful employee with similar learning styles in the target space, and academic and internship opportunities can be recommended accordingly.
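
One plausible way to implement the mapping step is a similarity search between a student's skill profile and the profiles of employees who already succeeded in the target career. The sketch below uses cosine similarity over invented skill vectors; it is an illustration, not an IBM component:

# Toy sketch of matching a student's skill profile to profiles of
# employees who moved into target careers (invented skill vectors).

import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse skill vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

student = {"python": 0.7, "statistics": 0.5, "writing": 0.3}
employees = {
    "data analyst": {"python": 0.8, "statistics": 0.9},
    "technical writer": {"writing": 0.9, "python": 0.2},
}
best = max(employees, key=lambda role: cosine(student, employees[role]))
print("Closest successful-employee profile:", best)
# prints: Closest successful-employee profile: data analyst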

Implementations of the described systems depend on new partnerships and alliances; no single company can build the whole solution. The benefits for society are enormous, considering that more than a third of global companies have had difficulties filling open positions and report shortages of engineers, technicians and IT staff9. Unemployment statistics released by the International Labour Organisation support these concerns.

Will cognitive computing and artificial intelligence help us close skills gaps and improve student retention, graduation, performance and employability? The expectation is that this is finally the place where technology can add real value to education.

ABOUT THE AUTHORS

Agata Czopek

Agata Czopek, PhD, PMP, is a consultant and researcher with more than 10 years of professional experience. She currently holds the position of Strategy & Development Associate at Young Digital Planet, a company that delivers educational solutions to over 45 countries. She divides her time between research initiatives in the field of technology implementation in education, consultancy for country-wide projects (Saudi Arabia, UAE, South Africa) and managing strategic partnerships with tech giants (Microsoft, IBM, Intel). Agata helps organizations enhance efficiency through professional project management, program management and corporate governance. She is also a speaker (Project Management Days Romania, OPPI New York City, AgilebyExample Light Warsaw, IGEP Paris, Wolves Summit Gdynia, Frankfurt Book Fair).

Piotr Pietrzak

Piotr Pietrzak, Chief Technology Officer, IBM Poland and the Baltics, has 17 years of experience in the IT industry. Piotr specializes in enterprise digital transformation projects, business model optimisation, and the incubation and commercialization of innovative projects. He is responsible for collaboration with VC/PE investment firms, incubators and accelerators, and is engaged in IBM R&D work on a daily basis. In his current role as CTO he is responsible for developing the cloud and cognitive computing market in Poland. He is the author of many industry-specific and business publications, and a member of the Academy of Technology and the CAS Advisory Board at the Technical University of Gdansk.

Article information

DOI: https://doi.org/10.15219/em65.1245

In the printed version of the journal, the article appears on pp. 78-82.

References

1 S.G. Smith, B.A. Sherwood, Educational Uses of the PLATO Computer System, „Science" 1976, Vol. 192, No. 4237, pp. 344-352, dx.doi.org/10.1126/....

2 B.S. Bloom, The 2 Sigma Problem. The Search for Methods of Group Instruction as Effective as One-to-One Tutoring, „Educational Researcher” 1984, Vol. 13, No. 6, pp. 4-16.

3 K. VanLehn, The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems, „Educational Psychologist" 2011, Vol. 46, No. 4, pp. 197-221, dx.doi.org/10.1080/...; M.W. Evens, J. Michaels, One-on-one Tutoring by Humans and Computers, Lawrence Erlbaum, 2006.

4 J.P. Doignon, J.C. Falmagne, Knowledge Spaces, Springer, 1999.

5 A.C. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings, R. Kreuz, AutoTutor: A Simulation of a Human Tutor, „Cognitive Systems Research” 1999, Vol. 1, No. 1, pp. 35-51, dx.doi.org/10.1016/...(99)00005-4.

6 A.S. Gertner, A Coached Problem Solving Environment for Physics, [in:] Gauthier G., Frasson C., VanLehn K. (eds.), Intelligent Tutoring Systems, Springer-Verlag, 2000, pp. 133-142, dx.doi.org/10.1007/....

7 K. VanLehn, The Relative Effectiveness..., op.cit.

8 D. Ferrucci, et al., Building Watson: An Overview of the DeepQA Project, „AI Magazine” 2010, Fall, www.aaai.org/Magazi....

9 The Talent Shortage Continues: How the Ever Changing Role of HR Can Bridge the Gap, 2014, https://www.manpowe....