Hype can be an impediment to real understanding. It can mask a lack of substance, raising unrealistic expectations that eventually lead to disillusionment. Conversely, when something feels overhyped, it’s easy to dismiss, causing developments that should engender excitement to be overlooked. Given that agentic forms of AI have been around for decades and that the hype around generative AI and Large Language Models (LLMs) is still alive and well, what is genuinely significant about the sudden rise in the use of agentic AI?
What is Agentic Artificial Intelligence (AI)?
The basic characteristic that makes something agentic is the ability to perform tasks on behalf of a user or broader system. Therefore, any AI model that can interact with digital resources or physical environments to accomplish a goal that would otherwise require human involvement is potentially agentic.
In general, rather than thinking of an AI system or model as either agentic or not, it is more useful to view agency as a spectrum: a system sits at a higher or lower level depending on how autonomous it is, and also on how adaptable it is.
For example, a pre-trained, LLM-based static model that can autonomously schedule an appointment for a user (or for another AI agent) could be accurately classified as agentic, even if it is not able to adapt and improve automatically over time. Such an agentic scheduler could be designed to access an existing scheduling system digitally, or it could be designed to schedule appointments by communicating with a human scheduler. And such a system might only be semi-autonomous, requiring lots of feedback and prompting to complete its task(s).
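To make this concrete, here is a minimal sketch of what such a semi-autonomous, LLM-backed scheduler might look like. Everything in it is hypothetical: call_llm stands in for whatever model endpoint is used, and SchedulingSystem stands in for a client’s existing scheduling back end.

```python
import json

class SchedulingSystem:
    """Stand-in for an existing digital scheduling back end (hypothetical)."""
    def book(self, patient_id: str, slot: str) -> bool:
        # A real integration would call the client's scheduling API here.
        print(f"Booking {patient_id} into {slot}")
        return True

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint the agent uses; returns a structured action."""
    return json.dumps({"action": "book", "slot": "2025-07-01T09:00"})

def scheduling_agent(patient_id: str, request: str, backend: SchedulingSystem) -> str:
    """Semi-autonomous agent: the LLM proposes an action, the surrounding code
    executes it, and anything ambiguous is escalated for human feedback."""
    decision = json.loads(call_llm(f"Patient request: {request}. Propose a booking action."))
    if decision.get("action") == "book":
        return "confirmed" if backend.book(patient_id, decision["slot"]) else "failed"
    return "needs_human_review"

print(scheduling_agent("patient-123", "I need a check-up next week", SchedulingSystem()))
```

The point is simply that the agent acts on a goal (booking a slot) while still escalating to a human when it cannot proceed autonomously.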
A similar agentic model that is also adaptable could automatically retrain itself as its actions produce measurable outcomes, such as an appointment being scheduled and attended successfully. In other words, agentic AI systems that can automatically improve themselves over time based on feedback are agentic in a sense that also includes adaptability.
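One way to add that adaptability, sketched here under the assumption that a downstream success signal (the appointment was scheduled and attended) is available, is to log each action-outcome pair and periodically refit whichever policy sits on top of the LLM; the strategy names and success metric below are purely illustrative.

```python
from collections import defaultdict

class OutcomeTracker:
    """Accumulates action-outcome feedback so the agent's policy can be refit over time."""
    def __init__(self):
        self.stats = defaultdict(lambda: {"attempts": 0, "successes": 0})

    def record(self, strategy: str, attended: bool) -> None:
        self.stats[strategy]["attempts"] += 1
        self.stats[strategy]["successes"] += int(attended)

    def best_strategy(self) -> str:
        # Prefer the strategy with the highest observed success rate so far.
        return max(self.stats,
                   key=lambda s: self.stats[s]["successes"] / self.stats[s]["attempts"])

tracker = OutcomeTracker()
tracker.record("offer_morning_slots", attended=True)
tracker.record("offer_evening_slots", attended=False)
print(tracker.best_strategy())  # -> offer_morning_slots
```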
Agentic Platforms are Not New
Not only is the concept of agentic AI not novel; real-world applications of agentic AI systems are not even remotely new. In fact, it’s easy to point to an agentic AI application that has been ubiquitous in modern society for decades.
Telephonic voice recognition systems that ask you to say a number to route you to a department, operator, etc. are often frustrating to use, even today, but they are indeed a case of real AI being used to perform a task that a human would otherwise need to perform. Even the oldest versions of these systems are built using machine learning models that rely on the core principle of generalizing beyond the training data. These systems wouldn’t work if they couldn’t generalize, recognizing a digit only when it was said by the same voice with exactly the same tone, inflection, pitch, etc. as the examples in the data used to train the system. These systems often just pass a caller on to another AI-driven system, so even this simple example represents a case where multiple agentic AI tools might work together.
The Success of Generative AI Opened New Avenues to Automating Common Tasks
So, what is new? And why is there so much conversation around agentic AI now?
Recent advancements in LLMs have greatly enhanced the ability of humans to interface with computational models and other digital resources. Similarly, LLMs are being leveraged to simplify the engineering involved in having multiple machine learning agents interact with one another in meaningful ways. In other words, recent advancements in generative AI have done two things: (1) facilitated the creation of tools that can perform actions on behalf of humans (or other ML agents); and (2) accelerated the ability of such tools to interoperate.
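To make point (1) concrete, most LLM “tool calling” interfaces follow the pattern sketched below: the model emits a structured request naming a tool and its arguments, and a thin dispatch layer executes it. The tool name and the llm_tool_request stub here are invented placeholders; in a real system that structured output would come from the model provider’s tool-calling API.

```python
import json

TOOLS = {}  # registry of functions the model is allowed to invoke

def tool(fn):
    """Register a Python function as an LLM-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def schedule_appointment(patient_id: str, slot: str) -> str:
    return f"Scheduled {patient_id} for {slot}"

def llm_tool_request(user_message: str) -> str:
    """Stand-in for a model response; real LLM APIs return equivalent structured tool calls."""
    return json.dumps({"tool": "schedule_appointment",
                       "arguments": {"patient_id": "patient-123", "slot": "2025-07-01T09:00"}})

def dispatch(user_message: str) -> str:
    """Route the model's structured output to the matching registered tool."""
    call = json.loads(llm_tool_request(user_message))
    return TOOLS[call["tool"]](**call["arguments"])

print(dispatch("Book me a check-up next Tuesday morning."))
```

Point (2) follows almost for free: once tools expose themselves this way, any agent that can produce the same structured requests can use them.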
Momentum for a New Level of Seamless Integration
Beyond the promise of more autonomous and intelligent systems to help us perform real-world tasks, the real change that the hype around agentic AI points to is an evolution in the way that humans and systems can interface with one another. As just one example of how tools to develop and coordinate agentic AI models are evolving, Microsoft recently developed and released AutoGen, “a framework for creating multi-agent AI applications that can act autonomously or work alongside humans.” Even more recently, Microsoft released a more general-purpose, open-source agentic framework built on top of AutoGen, called Magentic-One.
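For a sense of what this looks like in code, the snippet below follows the two-agent pattern from AutoGen’s getting-started examples (the 0.2-era Python API; class names and configuration details vary across releases, and the model name and key shown are placeholders).

```python
from autogen import AssistantAgent, UserProxyAgent

# Placeholder model configuration; substitute your own provider, model, and key.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# An LLM-backed assistant plus a proxy agent that acts on behalf of the human user.
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",      # run autonomously; use "ALWAYS" to keep a human in the loop
    code_execution_config=False,   # no local code execution in this sketch
)

# The proxy opens a multi-turn conversation with the assistant agent.
user_proxy.initiate_chat(assistant, message="Draft a plan for rescheduling missed appointments.")
```

The design choice worth noting is that the “user proxy” is itself an agent, which is what lets the same conversation run fully autonomously or with a human approving each step.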
These, and similar, frameworks demonstrate an evolution in how AI tools can interoperate. As integrations through agentic interactions become more standardized, they allow for a new level of interaction among such tools. Moreover, as the interfaces among these systems evolve to reflect natural human language, they become easier to engineer.
More Ubiquitous Agentic AI Can Smooth Action Paths for Healthy Behavior Change
This movement toward a universe full of easy-to-integrate agentic AI models that automate previously cumbersome tasks for patients (or healthcare entities) greatly benefits an approach like Lirio’s, where the goal is to intelligently orchestrate an individual’s health journey in a personalized way. Lirio’s tools are built as agents that can interact with the environment to support healthy behaviors and achieve positive health outcomes. As the world becomes more agentic in general, it becomes much easier to leverage other agentic AI tools to handle tasks that smooth the action paths needed for various health-related behaviors. For example, Lirio systems are designed to guide the patient journey, but part of our approach is to work with existing tools to achieve outcomes. When those other tools are also agentic, it becomes increasingly easy to smooth the action path.
When Lirio’s platform can pass a patient on to a very effective and easy-to-use scheduling agent already in use by our clients, we can be more effective overall in achieving better health outcomes. When Lirio’s platform can be initiated by an agentic AI system acting on behalf of a doctor, we can be more effective.
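Purely as an illustration of that kind of handoff (this is not Lirio’s actual interface; the agent names and message format are invented), agent-to-agent delegation can be as simple as one agent emitting a structured request that another agent accepts:

```python
import json

def behavior_change_agent(patient_id: str, recommended_service: str) -> str:
    """Hypothetical agent that decides a patient should complete a service
    and delegates the scheduling step to whichever scheduling agent a client runs."""
    handoff = {"patient_id": patient_id, "task": "schedule", "service": recommended_service}
    return json.dumps(handoff)

def client_scheduling_agent(message: str) -> str:
    """Hypothetical third-party scheduling agent that accepts the structured handoff."""
    task = json.loads(message)
    return f"Appointment for {task['service']} booked for patient {task['patient_id']}"

# One agent smooths the action path by delegating the cumbersome step to another.
print(client_scheduling_agent(behavior_change_agent("patient-123", "annual wellness visit")))
```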
An Analogy to IBM’s Watson
When IBM’s Watson won on Jeopardy, it seemed to many outside the field of machine learning as though there had been a quantum leap forward in AI almost overnight. To those within the field, it was an obvious progression requiring no new machine learning capabilities. It was an impressive feat of engineering to be sure, but the AI techniques being used were well known, even if they needed to be heavily fine-tuned in order to make Watson effective.
I see the agentic AI wave similarly. It is an engineering story related to the productionization of AI models. Two major differences from the Watson case are the following: (1) The ML models themselves are part of the technical advancement that is making the coordination of multi-agent systems easier. (2) This is not a single corporation performing the engineering work, but rather an entire community pushing the boundaries of how we can cooperatively leverage agentic AI models in real-world applications. For these reasons, I expect that the general movement toward more agentic AI tools and interfaces is an important and significant trend that is likely to greatly facilitate the adoption of practical, effective AI systems.
Christopher Symons, Ph.D., MSc is the Chief Artificial Intelligence Scientist for Lirio.