Popular Terms in AI
Many terms are associated with AI; the most common are machine learning, deep learning, supervised learning, unsupervised learning, reinforcement learning and large language models. Let’s analyse each of these to understand what they mean.


Artificial Intelligence (AI): AI is an umbrella term used to describe technology capable of carrying out, on its own, tasks that would normally require human intelligence.
Machine Learning (ML): ML is an application of AI based on the idea that machines should be given access to data from which they can learn and improve. ML is typically used with small to medium datasets, so training time is shorter than in deep learning.
Deep Learning (DL): DL is a subset of machine learning that uses neural networks with many layers (deep neural networks) to learn patterns from large amounts of data. DL is useful for complex patterns and tasks, but typically requires long training times.
Types of ML and DL
Both ML and DL can be divided further into three categories based on how the machine is trained. These are Supervised Learning, Unsupervised Learning and Reinforcement Learning.
The main goal in supervised learning is to classify or to calculate (predict) a value. When the goal is classification, the machine is programmed to identify the target class or category; this could be binary classification or multi-class classification. To achieve this, the data is labelled.
Example: Imagine you have rolled out new training on the organizational LMS/LXP and you want to know which learners/employees are likely to default on completing the training this quarter. In this situation, you use the data on learners who have dropped out (and those who have not) in the past as training data to build a classification model. You then run that model on the learners you’re curious about. The algorithm looks for learners whose attributes match the attribute patterns of previous drop-outs or non-drop-outs, and categorizes them according to which group they most closely match. You can then use these groupings as indicators of which learners are most likely to default, and design interventions to address the “at risk” learners.
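The drop-out example above can be sketched as a tiny classifier. This is a minimal illustration only: the learner features, labels, and numbers are all hypothetical, and it uses a simple nearest-neighbour vote rather than any particular production algorithm.

```python
from collections import Counter
import math

# Hypothetical labelled training data: (hours_logged_per_week, modules_completed)
# paired with the historical outcome. All values are illustrative.
training_data = [
    ((1.0, 1), "dropped"),
    ((0.5, 0), "dropped"),
    ((2.0, 2), "dropped"),
    ((6.0, 8), "completed"),
    ((5.5, 7), "completed"),
    ((7.0, 9), "completed"),
]

def classify(features, k=3):
    """Label a learner by majority vote among the k nearest training examples."""
    distances = sorted(
        (math.dist(features, x), label) for x, label in training_data
    )
    votes = Counter(label for _, label in distances[:k])
    return votes.most_common(1)[0][0]

# A learner with low activity matches the pattern of past drop-outs.
print(classify((1.5, 1)))   # dropped
print(classify((6.5, 8)))   # completed
```

The key supervised-learning ingredient is the labels ("dropped" / "completed") attached to the historical data; the model only generalizes patterns it has seen labelled examples of.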
The main goal of unsupervised learning is to discover whether natural groupings exist when there are no obvious groupings. Sometimes we suspect patterns, but we want the algorithm to reveal or confirm them. This type of ML can be helpful for making sense of complex, high-dimensional, or noisy data where human intuition might struggle.
Example: An unsupervised algorithm may be applied to learner data available within a learning management system (LMS); over time it will create clusters based on learner similarities, such as highly engaged learners, struggling learners and passive learners.
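The clustering idea can be sketched with a simple k-means loop. The engagement scores below are invented, and real LMS data would have many more dimensions; the point is that no labels are supplied, yet the groupings emerge.

```python
# Hypothetical weekly engagement scores (0-100) pulled from an LMS.
scores = [92, 88, 95, 40, 35, 45, 70, 68, 72]

def kmeans_1d(values, centroids, iterations=10):
    """Group values around k centroids; no labels are given in advance."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d(scores, centroids=[30, 60, 90])
# Three natural groupings emerge on their own:
# struggling (~40), passive (~70), highly engaged (~92).
print(sorted(round(c) for c in centroids))
```

Notice the contrast with the supervised example: here the algorithm is never told which learners are "struggling", it only discovers that the scores fall into three clumps.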
RL is a type of machine learning where a software program (an agent) tries to build a model of its environment (for example, the systems, network and users it interacts with) by trying out different actions under various circumstances. It does this by receiving rewards or penalties based on its actions.
Example: The RL agent (the LMS) assigns the learner a test. The result of this test is the current state of the learner. Based on this “state”, the RL agent recommends that the learner watch a video (this is the action) and complete a test based on the contents of the video. If the learner performs well on this test, the RL agent receives a reward signal. Over time, through trial and error and by maximizing cumulative rewards, the RL agent learns an optimal “policy” – a set of rules that dictate which action to take in a given learner state to maximize learning outcomes. This leads to a highly personalized and adaptive learning experience, where the LMS continuously adjusts to the individual needs and progress of each learner.
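The trial-and-error loop above can be sketched with tabular Q-learning on a toy environment. Everything here is illustrative: the states, actions, rewards, and learning rates are made up, and a real adaptive LMS would be far more complex.

```python
import random

random.seed(0)

# Toy environment (all names and values are illustrative): a learner
# advances through states when the recommended content matches their level.
states = ["novice", "intermediate", "advanced"]
actions = ["basic_video", "advanced_video"]

def step(state, action):
    """Return (next_state, reward). Matching content advances the learner."""
    if state == "novice" and action == "basic_video":
        return "intermediate", 1
    if state == "intermediate" and action == "advanced_video":
        return "advanced", 10
    return state, -1  # mismatched recommendation: no progress, small penalty

q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(200):                     # episodes of trial and error
    state = "novice"
    while state != "advanced":
        if random.random() < epsilon:    # explore a random action
            action = random.choice(actions)
        else:                            # exploit the best known action
            action = max(actions, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned "policy": which action to recommend in each learner state.
policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in states[:2]}
print(policy)
```

After enough episodes the agent settles on recommending basic content to novices and advanced content to intermediate learners, purely from reward signals rather than explicit rules.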
Large Language Model (LLM)
An LLM is a kind of neural network that’s really good at understanding and writing language — like a super-smart robot librarian who has read millions of books. LLMs use deep neural networks (especially transformers, which have many layers). An LLM learns how people use words, sentences, and grammar by reading tons of text — books, websites, articles, and more. It doesn’t “know” things the way people do, but it’s great at guessing what comes next in a sentence. It can answer questions, write poems, help with homework, and more — all by using what it has learned from reading.
Think of it like a giant “auto-complete” machine:
- You type: “Once upon a…”
- It thinks: “What usually comes next? Oh! ‘time’ sounds right!”
- Then it continues: “Once upon a time, there was a…”
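The auto-complete intuition can be sketched with the crudest possible next-word model: counting which word tends to follow which. The tiny corpus below is invented, and a real LLM predicts sub-word tokens with deep transformer networks trained on billions of words, not simple counts — but the "guess what comes next" idea is the same.

```python
from collections import Counter, defaultdict

# A tiny illustrative "training corpus".
corpus = (
    "once upon a time there was a princess . "
    "once upon a time there lived a king . "
    "once upon a hill stood a castle ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word):
    """Guess the most likely next word, like a giant auto-complete."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("upon"))  # a
print(autocomplete("a"))     # time
```

Because "time" follows "a" more often than any other word in this corpus, the model completes "Once upon a…" with "time" — just as the article describes.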
Question
As you may know, learning sticks when we connect new information with something we are already familiar with. So, here is a question to help you make such an association.