Not all AI is created equal.
Nor are traditional patient engagement solutions that rely on marketing techniques on par with those grounded in advanced behavioral science.
Large language models (LLMs) have contributed greatly to many of the recent advances in generative AI. And while LLMs can support patient engagement efforts, they fall short in helping organizations effectively move people to take specific actions that lead to better health outcomes. The reason is simple: when it comes to patient and member engagement, language is only part of the equation. Understanding human behavior is the rest.
With a team of renowned scientists, Lirio has developed the world’s first Large Behavior Model (LBM) for healthcare, pioneering the bridge between behavioral science and AI to significantly enhance health outcomes through hyper-personalized recommendations and communications.
The following definitions are our own based on extensive experience creating novel methods in these areas.
AI Terminology and Definitions
Large Behavior Model (LBM)
Large Behavior Models are the pioneering bridge between behavioral science and AI. LBMs are trained on vast datasets of human interactions, which enables them to learn the complex patterns that are particularly relevant to human decision-making and communication.
Machine Learning
Machine learning, deep learning, and neural networks are all subfields of AI. More precisely, neural networks are a subfield of machine learning, and deep learning is a subfield of neural networks.
- Machine learning (ML) is the field of study responsible for almost all of the modern innovations that are considered to be real AI. The field focuses on methods that allow a computer to learn from examples or feedback, as opposed to requiring that every step in a process be precisely pre-determined. Countless important problems would be impossible to solve if a programmer had to specify, step by step, what should happen for every possible variation of the inputs. For example, when transcribing speech to text automatically, it would be impossible for a human to write a program that explicitly mapped every combination of sounds to its written representation: there are infinitely many meaningful utterances and infinitely many noisy environments in which each utterance could occur. ML focuses on mathematical methods that solve these problems in a way that allows the captured knowledge to generalize. In other words, by observing many examples, a model learns to apply the core patterns accurately to new examples. Modern speech-to-text algorithms, for instance, are remarkably accurate even though they are constantly applied to speech they never observed in their training data. It is this focus on generalization ability that most clearly delineates machine learning from most other statistical and mathematical methods.
- Neural networks are a biologically inspired form of ML built from nodes and the connections between them, modeled on neurons and synapses in the human brain. The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton for their early work on artificial neural networks.
- Deep learning is a subfield of ML that uses multiple levels of neural network structure, allowing for complex interactions between inputs as well as the creation of hierarchical learning structures.
- Reinforcement learning (RL) is a machine learning subfield concerned with decision-making in interactive environments that provide feedback. Simple forms of RL include multi-armed bandits and contextual bandits. Full reinforcement learning involves modeling many states and the relationships between the choices made in those states and eventual rewards that may only be observed after many state transitions.
- Multi-armed bandits are a simplified form of RL in which a set of actions (arms) must be explored. It is a stateless setting that considers only immediate feedback. In other words, bandits solve problems that essentially consist of an exploration-versus-exploitation tradeoff to determine which action(s) lead to the highest reward(s) (a minimal sketch follows this list).
- Contextual bandits are a form of RL that considers only the immediate feedback from choosing an action in a given state, as opposed to full RL, which considers the potential rewards that may come several states later. A contextual bandit can also be described as a multi-armed bandit with states.
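To make the bandit setting concrete, here is a minimal sketch of an epsilon-greedy multi-armed bandit written in plain Python/NumPy. The three "arms" and their reward probabilities are invented purely for illustration; this is not a description of any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 3 arms (e.g., 3 message variants), each with an
# unknown probability of producing a positive response.
true_reward_probs = [0.2, 0.5, 0.35]   # unknown to the learner
n_arms = len(true_reward_probs)

counts = np.zeros(n_arms)              # times each arm was pulled
values = np.zeros(n_arms)              # running mean reward per arm
epsilon = 0.1                          # exploration rate

for t in range(10_000):
    # Explore with probability epsilon, otherwise exploit the best arm so far.
    if rng.random() < epsilon:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(values))

    # Observe immediate feedback (the only feedback a bandit uses).
    reward = float(rng.random() < true_reward_probs[arm])

    # Incremental update of the running mean for the chosen arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("estimated reward per arm:", values.round(3))
```

A contextual bandit extends this same loop by conditioning the arm choice on an observed state (the "context") rather than maintaining a single global estimate per arm.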
Generative AI
Any ML model that is designed to create novel content. Prominent examples of generative AI models include conversational models powered by Large Language Models (LLMs) that can produce novel, natural-sounding strings of text.
Generative Model
In general, any ML model that learns to model the probability distribution behind an observable phenomenon. This learned model can be used to predict the likelihood of an observation or a model output. This predictive ability can be used for multiple purposes, including generating novel outputs or aiding predictions of a discriminative model.
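One of the simplest possible generative models is a Gaussian fit to observed data: once its parameters are estimated, it can score how likely a new observation is and generate novel samples. The sketch below uses made-up one-dimensional data purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Made-up observations of some one-dimensional phenomenon.
observations = np.array([4.9, 5.1, 5.3, 4.7, 5.0, 5.2, 4.8])

# "Training" this generative model means estimating its parameters.
mu, sigma = observations.mean(), observations.std(ddof=1)
model = norm(loc=mu, scale=sigma)

# 1) Score the likelihood of a new observation.
print("likelihood of 5.05:", model.pdf(5.05))
print("likelihood of 9.00:", model.pdf(9.00))   # far less probable

# 2) Generate novel outputs by sampling from the learned distribution.
print("generated samples:", model.rvs(size=3, random_state=0).round(2))
```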
Foundation Model
A large domain-specific ML model that is trained to capture patterns that are predictive or useful in many applications in that domain.
Natural Language Processing (NLP)
NLP is an application area in which ML models are used to interpret or create ordinary human language. Applications include document summarization, machine translation, and question answering.
- Natural language understanding: A specific application within NLP in which the goal is to infer the meaning or intent of human questions and/or responses.
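As a small illustration of the kinds of NLP applications listed above, the sketch below uses the open-source Hugging Face transformers library (an assumption on our part, not a tool named in this glossary) to classify the sentiment of a short free-text response; the library downloads a default pretrained model the first time it runs.

```python
from transformers import pipeline

# One minimal NLP task: sentiment classification of free-text responses.
classifier = pipeline("sentiment-analysis")
print(classifier("Scheduling my screening was quick and easy."))
# -> e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```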
Autonomous Agents
Autonomous agents are a form of AI in which ML models are allowed to interact with their environment (and potentially with one another) to achieve some objective. Most references to autonomous agents are based on multi-agent reinforcement learning, but not all autonomous agents are RL-based, since they can be pre-trained to perform a function without the need for continuous self-improvement. An RL-based autonomous agent can continuously refine its internal model or policy to optimize some expected reward, and RL-based agents can interact with one another and learn to optimize an objective by cooperating in some fashion. Many future real-world applications of AI are likely to be deployed through frameworks of autonomous agents, and many conversational agents are already being deployed through such frameworks.
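The core of any such agent is an observe-act-receive-feedback loop. Here is a minimal sketch of that loop using the open-source Gymnasium library and a purely random policy; the environment choice and the random policy are illustrative assumptions only, not a description of how any real agent is built.

```python
import gymnasium as gym

# A toy environment: the agent observes a state, picks an action,
# and receives a reward plus the next state.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    # A real RL agent would map obs -> action with a learned policy;
    # here we act at random simply to show the interaction loop.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("episode reward:", total_reward)
env.close()
```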
Supervised Learning
The most common ML problem formulation, which covers any learning problem mapping input data (often called features or attributes) about examples to labels associated with said examples. This includes most classification problems, as well as regression problems.
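A minimal supervised-learning sketch with scikit-learn: feature vectors are mapped to labels, the model is fit on labeled training examples, and its ability to generalize is checked on held-out examples. The synthetic dataset is invented purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic examples: each row of X is a feature vector, each y value a label.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classifier on labeled examples, then check how well it generalizes.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```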
Semisupervised Learning
Algorithmic methods for solving supervised learning problems that attempt to leverage unlabeled data to speed up learning when labeled samples are limited or expensive.
Transfer Learning
A set of ML methods that attempt to transfer or share learned knowledge among multiple models. Multitask Learning is an important ML subfield that falls into this category. Another example of transfer learning is taking a Foundation Model and tuning it to a specific task; this tuning process is typically referred to as fine-tuning, and when it targets human preferences and values it is often called alignment.
Multitask Learning
An ML subfield that leverages the transfer of learned knowledge across multiple learning tasks. For example, a set of spam-filtering models might share patterns they each find to be common to spam messages, while each individual model learns which of these patterns pertain to its specific user.
Kernel Methods
A more traditional form of ML in which a transformation function (a kernel) projects the input into an alternate (often higher-dimensional) space for analysis/learning. The most commonly known kernel-based method is a support vector machine.
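A brief sketch of a kernel method using scikit-learn: the two-interleaving-moons dataset is not separable by a straight line in its original space, but an RBF kernel implicitly projects it into a space where a linear separator works. The dataset and parameter choices are illustrative only.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-moons: not separable by a straight line in input space.
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)

# The RBF kernel implicitly maps points into a higher-dimensional space
# where a linear boundary corresponds to a curved boundary in the original space.
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```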
Active Learning
An ML subfield related to experimental design that considers how best to obtain training samples to achieve optimal learning. This applies when the learning algorithm can have control over which samples are selected for labeling. Related subfields include budgeted learning and cost-sensitive learning.
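One common active-learning strategy is uncertainty sampling, in which the model itself selects the unlabeled example it is least sure about and requests its label. The following compact sketch assumes scikit-learn; the data, loop sizes, and starting label budget are all invented for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Start with only a handful of labeled examples; the rest are "unlabeled".
labeled = list(range(10))
unlabeled = list(range(10, len(X)))

model = LogisticRegression(max_iter=1000)
for _ in range(20):
    model.fit(X[labeled], y[labeled])

    # Uncertainty sampling: request the label of the example the current
    # model is least sure about (predicted probability closest to 0.5).
    probs = model.predict_proba(X[unlabeled])[:, 1]
    pick = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]

    labeled.append(pick)        # an oracle/labeler provides y[pick]
    unlabeled.remove(pick)

print("accuracy after active learning:", model.score(X, y))
```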
Discriminative Model
In contrast to a generative model, a model that takes an observation and attempts to distinguish between possible classifications of it, often making predictions conditioned on the observation.
Image Processing
An application area in which ML models are extremely influential. For example, deep learning networks are often used to build pixel-level relationships up into shape-level relationships and, ultimately, object-level relationships. Such networks often form the backbone of object recognition models and image generation models.
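A minimal sketch, using PyTorch, of the kind of stacked convolutional structure described above: early layers operate on pixels, and pooling plus deeper layers aggregate them into progressively larger-scale features. The layer sizes are arbitrary illustrative choices, not a reference architecture.

```python
import torch
import torch.nn as nn

class TinyImageNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # pixel-level patterns (edges)
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combinations of edges -> shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # shape-level features -> object class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# A batch of four 32x32 RGB images (random values, purely illustrative).
logits = TinyImageNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```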
Recommender Systems
A broad application area of ML that deals with prioritizing the choices presented to a user; ad placement is a prime example. There is often a focus on personalizing these recommendations, and many of these systems use some form of Reinforcement Learning, with contextual bandits being particularly common.
Manifold Learning
Learning techniques that assume that relationships between data points cannot necessarily be captured by a direct distance measure in the input space. Instead, they assume that geodesic distance along a path through the space is what determines things like class boundaries, as opposed to a traditional distance measure in the original space. A good example of the manifold assumption can be found in image analysis where an object is rotated 360°: although the raw distance between the pixel representations of two images of the object at 0° and 180° may be quite large, there is a clear path linking the two images if you track it through the intermediate rotations from 1° to 179°.
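A short sketch using scikit-learn's Isomap, which approximates geodesic distances along the data manifold. The classic "Swiss roll" dataset stands in for the rotated-object example above; both the dataset and the parameters are illustrative only.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Points lying on a rolled-up 2-D sheet embedded in 3-D space: straight-line
# distance between two points can be small even when they are far apart
# along the sheet itself.
X, _ = make_swiss_roll(n_samples=1500, random_state=0)

# Isomap builds a neighborhood graph and uses shortest paths through it to
# approximate geodesic distances, then embeds the data in 2 dimensions.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1500, 2)
```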
Artificial intelligence, and the terminology associated with it, is constantly evolving. The Lirio team will continue to contribute to this glossary and the groundbreaking science behind it.
Read more in “How are Machine Learning and Artificial Intelligence Used in Digital Behavior Change Interventions? A Scoping Review” from Lirio researchers featured in Mayo Clinic Proceedings Digital Health.