Beach Reading - Deep Dive into Microsoft’s Agent Patterns from SaaS to IaaS
Microsoft’s AI Stack: Agent and Reasoning Patterns Across SaaS, PaaS, and IaaS
Executive Summary
AI agents are rapidly evolving into a new software pattern spanning user-facing services and deep infrastructure. Microsoft’s AI ecosystem exemplifies this trend, offering Copilot-branded SaaS agents, the Azure AI Agent Service at the PaaS layer, and open frameworks like Semantic Kernel and AutoGen at the IaaS level. Each layer uses natural language understanding and semantic context as a kind of universal protocol to orchestrate tasks and share information among distributed AI components.
This article explores how agent nomenclature and design differ by service level and why memory and shared context are foundational in enabling agents to collaborate. We predict many enterprises – including healthcare organizations – will leverage IaaS-level frameworks to build proprietary, deeply contextual AI agents. Strategic opportunities abound for those who harness these agentic capabilities, and so do new requirements for data readiness, security, and orchestration know-how.
From Chatbots to Autonomous Reasoning Agents
Intelligent reasoning agents have come a long way from simple chatbots. Today’s AI agents are sophisticated, collaborative systems capable of handling complex, multi-step processes with speed and accuracy. This marks a shift from isolated bots to dynamic, scalable agent workforces that can coordinate tasks, share context, and adapt in real time. In practical terms, instead of a single bot handling one task, organizations are deploying ecosystems of specialized agents that interact, reason, and respond to changing conditions with minimal human oversight.
Microsoft has been at the forefront of this evolution, embedding AI “copilots” into its software-as-a-service offerings and building out platform and infrastructure support for agentic AI. Before diving into Microsoft’s stack, it’s important to note why this agent paradigm is emerging now. Advances in large language models (LLMs) have given agents improved reasoning abilities, while new frameworks let multiple AI components cooperate on tasks.
At the same time, enterprises are pushing beyond basic chatbots to automate more complex workflows – from handling lengthy customer service processes to assisting clinicians with multi-step administrative tasks. When designed well, AI agents promise to optimize operations and elevate user experiences through AI-driven workflows. But realizing that promise requires rethinking how we build software: moving from static applications to context-aware, orchestrated AI services.
Natural language has become the connective tissue of this new approach. AI agents use language to interpret user goals, fetch information, and communicate with other agents or tools. In essence, language is becoming an API. This means that semantic understanding – the AI’s ability to parse meaning and intent – now serves as a protocol for orchestration, allowing diverse agents and services to work together more fluidly. As we’ll see, Microsoft’s agent ecosystem leverages natural language at every level, from end-user Copilots to behind-the-scenes agent frameworks.
In the following sections, we examine Microsoft’s AI agents across three layers: SaaS, PaaS, and IaaS. We’ll discuss how each layer’s nomenclature and design abstract complexity at different depths and how features like orchestration, memory, and tool use become increasingly customizable as we move down the stack. We highlight the critical role of context and memory, especially shared memory across agents, in enabling these systems. Finally, we consider strategic implications for enterprises and why many will adopt the IaaS-level frameworks to build their own interoperable AI agents.
SaaS Layer, Copilots – AI Agents as User-Facing Tools
At the SaaS layer, AI agents manifest as polished, user-facing assistants that hide the complexity of AI orchestration behind simple interfaces. Microsoft’s Copilot family is a prime example. Microsoft 365 Copilot, Windows Copilot, and other Copilot-branded features act as AI copilots embedded in familiar applications (like Office, Teams, or Windows). These agents take natural language commands from users and autonomously perform tasks or retrieve information across Microsoft’s apps. For instance, a finance executive can ask Microsoft 365 Copilot to “prepare a budget slide deck summarizing Q4 performance,” and the Copilot will fetch data from Excel, draft slides in PowerPoint, and format them – all in response to a simple prompt. The user doesn’t see the underlying calls to multiple services; the Copilot abstracts that complexity as a convenient SaaS feature.
Beyond the out-of-the-box copilots, Microsoft introduced Copilot Studio, a SaaS platform for organizations to build their own AI agents with low code. Copilot Studio provides a graphical interface for creating custom agents and defining their logic and integrations. According to Microsoft, “Copilot Studio is a graphical, low-code tool for building agents and agent flows.” It enables connecting to enterprise data sources via prebuilt or custom plugins, orchestrating logic, and tuning an agent’s behavior. Crucially, this is designed to be accessible even to non-developers – a business analyst or healthcare operations manager can create an agent without writing code. The agent concept here is essentially a powerful AI companion that coordinates language models with instructions, context, knowledge sources, and actions to accomplish goals.
For example, a hospital might use Copilot Studio to build an “IT Support Agent” that employees can chat with within Microsoft Teams to troubleshoot technical issues. Under the hood, that agent might use an LLM to understand the problem description, reference an internal knowledge base (via a plugin), and then trigger an agent flow – an automated workflow – to reset a password or create a support ticket. Copilot Studio supports such agent flows triggered by natural language or events, which can run automation or even act as tools that the agent can invoke. All of this is delivered as a managed SaaS experience: Microsoft handles the AI models, hosting, and security features, while the user focuses on high-level configuration.
Security and guardrails are a major focus at the SaaS agent layer, given these agents directly handle organizational data. Microsoft has built enterprise data protection into Copilot Studio agents – including encryption, data loss prevention, and controls against prompt injection – to ensure that autonomous capabilities remain compliant. This is especially vital in regulated industries like healthcare, where an agent accessing patient information must adhere to privacy laws.
PaaS Layer, Azure AI Agent Service – Orchestration Platforms for Developers
As organizations seek more tailored and interconnected AI solutions, the focus shifts to the platform-as-a-service layer. Microsoft’s Azure AI Agent Service (part of the Azure AI Foundry) exemplifies the PaaS approach to AI agents. Introduced in late 2024, this service gives developers a managed platform to build, deploy, and scale intelligent agents with more control than a SaaS solution. The key idea is to provide a flexible, secure foundation that abstracts much of the orchestration complexity while allowing customization to fit enterprise workflows.
In Azure AI Agent Service, developers can design complex workflows composed of multiple specialized agents. Microsoft highlights two core orchestration concepts:
Connected Agents – an agent can directly call another agent as a tool for a sub-task. For example, an “HR Assistant” agent handling employee queries might invoke a “Payroll Agent” to answer a salary question. This point-to-point delegation allows modular, specialist agents to collaborate.
Multi-Agent Workflows – a structured orchestration layer where multiple agents carry out a long-running, multi-step process with stateful context. In this mode, the platform itself manages context and sequencing so agents can hand off tasks and maintain continuity. This is ideal for processes like onboarding a new employee or processing a loan application, where the conversation or task can span many steps and decision-making.
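To make the connected-agents pattern concrete, here is a minimal sketch in plain Python of one agent exposing another as a callable tool. The class and method names are hypothetical illustrations, not the Azure AI Agent Service SDK; a real agent would route requests through an LLM rather than keyword matching.

```python
# Illustrative sketch of the "connected agents" pattern: a generalist agent
# registers a specialist agent as a tool and delegates matching sub-tasks.
# Names are hypothetical, not the Azure AI Agent Service API.
from typing import Callable, Dict, Optional

class Agent:
    def __init__(self, name: str, answer: Optional[Callable[[str], str]] = None):
        self.name = name
        self.answer = answer  # fixed behavior for specialist agents (stands in for an LLM)
        self.tools: Dict[str, Callable[[str], str]] = {}

    def add_connected_agent(self, keyword: str, other: "Agent") -> None:
        # Register another agent as a tool, keyed by a routing keyword.
        self.tools[keyword] = other.handle

    def handle(self, request: str) -> str:
        if self.answer is not None:
            return self.answer(request)
        for keyword, tool in self.tools.items():
            if keyword in request.lower():
                return tool(request)  # point-to-point delegation to the specialist
        return f"{self.name}: I can handle this myself."

payroll = Agent("PayrollAgent", answer=lambda r: "Salaries are paid on the 25th.")
hr = Agent("HRAssistant")
hr.add_connected_agent("salary", payroll)

print(hr.handle("When is my salary paid?"))  # delegated to PayrollAgent
```

The design point is that the generalist agent needs no knowledge of the specialist's internals – only a handle to invoke it, which is what makes the agents modular and composable.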
The Azure Agent Service addresses enterprise needs by integrating a broad set of tools and data. It can incorporate the latest foundation models from not only Microsoft and OpenAI but also other providers like Meta and Cohere, and it can ground agents in enterprise knowledge sources such as databases. To take action, agents can leverage Azure Logic Apps workflows, call Azure Functions, use APIs described by OpenAPI specs, or even execute code through a built-in Code Interpreter. In short, the service “brings together all the models, data, tools, and services” needed to automate business processes of any complexity.
From a developer’s perspective, Azure AI Agent Service provides an intuitive authoring experience (through Azure AI Studio/Foundry) to configure agents, define their tool stack, and set up monitoring. It offers enterprise-grade features like bring-your-own storage (so you can keep conversational data in your own database), virtual network isolation, and observability via OpenTelemetry. These features reflect the reality that running AI agents in production requires more than just a good model – it requires persistent memory, security, cost controls, and integration with existing systems.
A concrete use case in the PaaS context might be a healthcare provider deploying a suite of agents for patient support. One agent could greet patients on the website, answer questions, or collect intake information, then invoke another agent that schedules appointments via an EMR system’s API. A third agent might follow up with post-visit instructions or billing queries. Azure AI Agent Service would let developers compose this multi-agent workflow, handle the hand-offs with a shared context, and ensure all data access is secure and logged. Indeed, maintaining context across steps is a defining capability – multi-agent workflows in Azure’s service manage context state, error recovery, and long-running durability so that, for example, a patient’s information carries through the entire process reliably.
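The hand-off mechanics can be sketched in a few lines: each step is an agent function that reads and writes a shared context object, so information collected early (intake) is available to later steps (scheduling, follow-up) without re-asking the patient. This is an illustrative toy, not platform code; a managed service would also persist the context for durability and recovery.

```python
# Hypothetical sketch of a stateful multi-agent workflow over a shared context.
from typing import Callable, Dict, List

Context = Dict[str, str]
Step = Callable[[Context], Context]

def intake_agent(ctx: Context) -> Context:
    ctx["patient"] = "Jane Doe"
    ctx["reason"] = "annual checkup"
    return ctx

def scheduling_agent(ctx: Context) -> Context:
    # Reads context written by the previous agent instead of re-collecting it.
    ctx["appointment"] = f"{ctx['patient']}: {ctx['reason']} on 2025-07-01"
    return ctx

def followup_agent(ctx: Context) -> Context:
    ctx["followup"] = f"Reminder sent for {ctx['appointment']}"
    return ctx

def run_workflow(steps: List[Step]) -> Context:
    ctx: Context = {}
    for step in steps:
        ctx = step(ctx)  # a real platform would checkpoint ctx here for durability
    return ctx

result = run_workflow([intake_agent, scheduling_agent, followup_agent])
print(result["followup"])
```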
It’s worth noting that Microsoft’s PaaS shares DNA with its IaaS frameworks. In fact, Azure AI Agent Service integrates with a converged runtime that combines Semantic Kernel and AutoGen under the hood. This means developers get the benefit of advanced orchestration patterns (from AutoGen research) and a robust, enterprise-ready architecture (from Semantic Kernel) through a unified API. The platform thus offers both ease of use and the power of lower-level agentic libraries without requiring the developer to assemble those pieces from scratch.
In summary, at the PaaS layer, Microsoft provides a managed orchestration platform that abstracts many infrastructure headaches (scalability, security, integration plumbing) yet exposes hooks for customization. This allows enterprises to build orchestrated agent solutions tuned to their domain – for example, a financial services firm automating fraud investigations with multiple AI agents – faster than if they started from the ground up. Competitors like AWS and Google are on similar paths, but Microsoft’s advantage is its cohesive strategy linking SaaS, PaaS, and tools, all leveraging the same Azure OpenAI and security backbone.
IaaS Layer, Agentic Frameworks and Building Blocks (Semantic Kernel, AutoGen, etc.)
Dropping to the infrastructure-as-a-service and developer framework level, we find the toolkits that power agentic AI. Here, Microsoft offers open-source frameworks such as Semantic Kernel and AutoGen, which provide the primitives for building AI agents and multi-agent systems as part of custom applications.
Unlike SaaS Copilots or the Azure Agent Service, these IaaS-level tools require software development effort – but they also offer maximum flexibility and control. For organizations with unique needs or those aiming to develop proprietary AI capabilities (common in industries like healthcare, where data and context are highly specific), this layer is crucial.
Semantic Kernel (SK) is Microsoft’s general-purpose SDK for orchestrating AI workflows. It’s described as an “enterprise-ready orchestration framework” to build intelligent AI agents and multi-agent systems. Developers can use SK (available for Python, .NET, and Java) to integrate LLMs into their applications and to compose complex behaviors. Some key features of Semantic Kernel include:
Model-agnostic connectors: SK can work with Azure OpenAI, OpenAI’s API, Hugging Face models, and others. This gives flexibility to use various language models (including on-premises or open-source LLMs).
Agent framework with tools and memory: It provides constructs to create modular AI skills or functions and equip agents with tools/plugins (for example, the ability to call an API or database). Critically, SK has built-in support for long-term memory (via semantic memory stores like Azure Cognitive Search, Elasticsearch, or vector databases). By including memory as a first-class feature, SK allows an agent to remember past interactions or retrieve domain knowledge as it reasons.
Planning and workflows: SK can perform planning – breaking down a high-level request into steps or deciding which tool to use – which is essential for agent autonomy. It also supports multi-step process definitions (similar to workflows) for more deterministic automation.
Interoperability through plugins and protocols: Developers can extend SK with native code functions or define actions via OpenAPI specifications. SK also embraces emerging standards like the Model Context Protocol (MCP) for tool usage. (MCP, originally proposed by Anthropic, is a standard way for agents to call tools or APIs in a JSON-formatted “language” the model understands.) This means SK-based agents can use a wide range of external tools and knowledge sources in a standardized way.
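The semantic-memory idea behind these features can be illustrated with a toy vector store: text is saved with an embedding and later recalled by meaning via cosine similarity. The bag-of-words "embedding" below stands in for a real embedding model, and the class names are illustrative, not the Semantic Kernel API.

```python
# Toy sketch of semantic memory: save text with vector embeddings, retrieve
# by meaning. A real setup would use an embedding model and a vector store.
import math
from collections import Counter
from typing import Dict, List, Tuple

def embed(text: str) -> Dict[str, float]:
    # Crude bag-of-words vector; a stand-in for a real embedding service.
    return dict(Counter(text.lower().split()))

def cosine(a: Dict[str, float], b: Dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticMemory:
    def __init__(self) -> None:
        self.items: List[Tuple[str, Dict[str, float]]] = []

    def save(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def recall(self, query: str) -> str:
        # Return the stored text most similar in meaning to the query.
        q = embed(query)
        return max(self.items, key=lambda item: cosine(q, item[1]))[0]

memory = SemanticMemory()
memory.save("patient has a penicillin allergy")
memory.save("quarterly budget approved in March")
print(memory.recall("allergy to penicillin medication"))
```

Because retrieval is by similarity rather than exact keywords, an agent can surface relevant context even when the query is phrased differently from the stored fact.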
Another powerful piece at this layer is Microsoft Research’s AutoGen framework. AutoGen is an open-source project focused on enabling multi-agent conversations and complex agent behaviors. The research behind AutoGen emphasizes that agents can be made conversable and customizable and operate in different modes (some fully AI-driven, some with human input in the loop, and some leveraging external tools).
A hallmark of AutoGen is that it allows developers to compose systems where multiple LLM-powered agents talk to each other to solve tasks collaboratively. For example, one agent (“Analyst”) might be tasked with analyzing data, and another (“Reporter”) might be tasked with writing a summary; they can exchange information in natural language to produce a result together. AutoGen provides a framework to define such multi-agent dialogues and patterns using both natural language prompts and code rules.
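The Analyst/Reporter exchange can be sketched as follows. This is a self-contained toy modeled on the idea of conversable agents passing natural-language messages; the class and function names are illustrative, not AutoGen's actual API, and the reply functions stand in for LLM calls.

```python
# Minimal sketch of a two-agent dialogue: an Analyst produces findings and a
# Reporter summarizes them, exchanging messages in natural language.
from typing import Callable, List

class ConversableAgent:
    def __init__(self, name: str, reply_fn: Callable[[str], str]):
        self.name = name
        self.reply_fn = reply_fn  # stands in for an LLM call

    def reply(self, message: str) -> str:
        return self.reply_fn(message)

def chat(sender: ConversableAgent, receiver: ConversableAgent, task: str) -> List[str]:
    transcript = [f"{sender.name}: {task}"]
    analysis = sender.reply(task)
    transcript.append(f"{sender.name} -> {receiver.name}: {analysis}")
    summary = receiver.reply(analysis)
    transcript.append(f"{receiver.name}: {summary}")
    return transcript

analyst = ConversableAgent("Analyst", lambda msg: "Sales grew 12% quarter over quarter.")
reporter = ConversableAgent("Reporter", lambda msg: f"Summary: {msg}")

for line in chat(analyst, reporter, "Analyze Q4 sales data"):
    print(line)
```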
With the release of AutoGen v0.4 in early 2025, Microsoft signaled a push toward making these frameworks more enterprise-friendly. Microsoft’s strategy at the IaaS level appears to be a “best of both” approach: integrate AutoGen’s innovative multi-agent orchestration with Semantic Kernel’s mature, enterprise-ready features. The two teams have begun converging their runtimes and interfaces. Thanks to adapters that bridge the two, developers can already host AutoGen-style agents within Semantic Kernel and vice versa.
The result will be a unified framework where developers define agents and their conversations (from AutoGen’s paradigm) while leveraging Semantic Kernel’s connectors, memory stores, and hosting options. Notably, this shared runtime can be deployed in various ways – from in-process for simple scenarios to distributed systems like Dapr or Orleans for large-scale, multi-agent deployments. This flexibility essentially lets companies embed AI agent capabilities deep into their own infrastructure, much like any other software component.
It’s worth mentioning that Microsoft’s frameworks exist in a broader landscape of agentic libraries. LangChain (open-source) pioneered easy chaining of LLM calls and tools, and frameworks like CrewAI offer more visual, low-code approaches to agent building. Microsoft’s AutoGen and SK are differentiated by tight Azure integration and an emphasis on enterprise requirements (security, reliability).
In practice, many developers currently use these frameworks for prototyping, then re-implement stable agents in production code. However, as features mature and standards emerge, we expect more organizations will directly adopt frameworks like SK or AutoGen for production to save development time and ensure they are aligned with evolving best practices.
The Role of Natural Language and Shared Semantic Context
One unifying theme across all layers of Microsoft’s AI ecosystem is the use of natural language and semantic understanding as a communication layer. At the SaaS level, the user interacts with Copilot agents via plain English (or other languages), which the AI interprets and maps to actions. But beyond that, natural language is increasingly used internally for agents to talk to each other or to coordinate with tools.
For instance, when one agent calls another as a tool in Azure AI Agent Service, the invocation might be formulated as a natural language instruction or a standardized message that the second agent can understand. Microsoft’s introduction of an Agent-to-Agent (A2A) API in the Foundry Agent Service is telling: it enables multi-turn conversations between agents, even across different platforms or clouds. In other words, an agent running on Azure could engage with an agent on SAP’s platform (SAP Joule) or on Google’s Vertex AI, exchanging information to complete a user’s request.
The “protocol” here is essentially language (potentially structured in JSON or other formats for consistency), but the key is that semantic content is being exchanged rather than just raw data. This allows agents built by different vendors to interoperate without custom integration for each pair – much like how different web services interoperate over HTTP. Microsoft, IBM, and others are actively working on open agent communication protocols to avoid the “siloed island” problem of early AI agents.
IBM’s recent open-source Agent Communication Protocol (ACP), for example, aims to be the “HTTP for agent communication,” giving agents a shared language to coordinate on tasks. Microsoft’s A2A and adoption of standards like MCP (Model Context Protocol) in Semantic Kernel align with this industry push – MCP provides a standardized way for agents to represent tool calls, while A2A/ACP provides a way for agents to dialogue with each other.
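To illustrate what a "JSON-formatted language" for tool calls looks like, here is a sketch of a structured tool-call message and a dispatcher that routes it. The field names and the `lookup_coverage` tool are hypothetical, chosen for illustration – this is the general shape of the idea, not the exact MCP wire format.

```python
# Sketch of a JSON-formatted tool-call message of the kind MCP standardizes:
# the model emits a structured request naming a tool and its arguments.
import json
from typing import Any, Callable, Dict

def lookup_coverage(args: Dict[str, Any]) -> str:
    # Stand-in for a real insurance-coverage API call.
    return f"Therapy '{args['therapy']}' is covered for plan {args['plan']}."

TOOLS: Dict[str, Callable[[Dict[str, Any]], str]] = {
    "lookup_coverage": lookup_coverage,
}

# The agent's LLM would emit this structured call instead of free-form text.
message = json.dumps({
    "type": "tool_call",
    "tool": "lookup_coverage",
    "arguments": {"therapy": "physical therapy", "plan": "PPO-200"},
})

def dispatch(raw: str) -> str:
    call = json.loads(raw)
    assert call["type"] == "tool_call"
    return TOOLS[call["tool"]](call["arguments"])

print(dispatch(message))
```

The value of standardizing this envelope is that any agent that can emit the format can use any tool that accepts it, without pairwise integration work.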
Why does this matter for enterprises? It means that AI agents can become a distributed system of services that coordinate through a semantic layer rather than brittle, hard-coded integrations. In a hospital scenario, an administrative agent could ask a clinical agent (in natural language) for a patient data summary, which in turn might use a diagnostic agent’s output – all through a kind of common tongue that each AI understands.
Natural language (augmented with domain-specific protocols) is flexible and expressive, so it eases orchestration across different systems and contexts. As a result, contextual awareness improves – agents can share what they’ve learned or decided in one part of the system with others.
For example, an “authorization agent” could inform a “treatment recommendation agent” that a certain therapy is covered by insurance, affecting the recommendation. This is facilitated by shared semantic context rather than each agent operating in a vacuum.
Underpinning this is the concept of memory. In human teams, effective collaboration relies on shared knowledge – meeting notes, patient records, etc. Similarly, AI agents need memory to carry over context from one interaction to the next. Microsoft’s agentic platforms put heavy emphasis on memory:
In Copilot (SaaS), memory might be short-lived (e.g., the chat history with the user), but it ensures the agent remembers what the user asked earlier in the conversation.
In Azure AI Agent Service (PaaS), stateful orchestration means an agent workflow can retain information across a long process and recover context after interruptions. The service even provides knowledge integrations (like SharePoint or custom knowledge bases) so agents can fetch organizational context when needed.
In Semantic Kernel (IaaS), semantic memory is a core feature – embedding important pieces of text into a vector store so that any agent can later retrieve it by meaning. SK and AutoGen allow agents to share a memory store or “blackboard,” so to speak.
One agent can write a fact or intermediate result, and another can read it if relevant. Microsoft’s reference architecture for multi-agent systems (e.g., the Magentic-One demo) uses an orchestrator agent that maintains a task ledger of facts, plans, and progress, which all specialized agents refer to. This is essentially a shared memory design: an outer loop collects a knowledge base of the task, and inner loops handle step-by-step progress.
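The blackboard pattern described above can be sketched in a few lines: an orchestrator keeps a task ledger of facts that every specialist agent can read and append to. The names are illustrative, modeled loosely on the Magentic-One description rather than its actual code.

```python
# Sketch of the shared-memory "blackboard" pattern: a task ledger of facts
# written by one agent and read by another.
from typing import List

class TaskLedger:
    def __init__(self) -> None:
        self.facts: List[str] = []

    def record_fact(self, agent: str, fact: str) -> None:
        self.facts.append(f"[{agent}] {fact}")

    def known_facts(self) -> List[str]:
        return list(self.facts)

ledger = TaskLedger()

# The diagnostic agent writes a fact to the shared ledger...
ledger.record_fact("DiagnosticAgent", "patient is allergic to penicillin")

# ...and the treatment planner consults the ledger before recommending.
def plan_treatment(ledger: TaskLedger) -> str:
    if any("penicillin" in f for f in ledger.known_facts()):
        return "Recommend a non-penicillin antibiotic."
    return "Recommend standard penicillin course."

print(plan_treatment(ledger))
```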
The assertion that memory and context sharing are foundational is evident – without memory, an agent’s effectiveness is limited to single-turn commands, and multiple agents would easily talk past each other. With memory, an agent system can exhibit continuity of thought. For example, if a diagnostic agent finds a patient has an allergy, it can record that, and later, the treatment-planner agent will avoid recommending medications with that allergen.
All layers of Microsoft’s ecosystem support some form of this context preservation, but the depth and persistence of memory grows as we move from SaaS to IaaS. A Copilot might rely on short-term context or cached user data, whereas a Semantic Kernel application could integrate a full long-term memory repository (e.g., a vector database of medical literature for a research agent).
In practice, memory and NL protocols together enable what we might call “hive mind” capabilities in enterprise AI: a collection of agents that collectively know what the others have learned and communicate in a fluid, intelligent manner. This is powerful but also introduces responsibility – ensuring that what one agent writes to memory is accurate (avoiding the propagation of an error or hallucination) and securing that shared context (to prevent leaks of sensitive info).
These are active areas of development, and Microsoft’s inclusion of responsible AI controls, monitoring, and verification steps in its platform shows an understanding that trust is as important as capability.
Strategic Implications and Enterprise Opportunities
The emergence of agentic AI across SaaS, PaaS, and IaaS layers carries significant strategic implications for enterprises, particularly in data-intensive and process-heavy sectors like healthcare. Healthcare executives, for example, can see clear opportunities: coordinated AI agents could automate administrative workflows (prior authorizations, scheduling, patient inquiries), assist clinicians with decision support by collating data from multiple systems, and continuously learn from each interaction to improve over time.
Some key opportunities and considerations include:
Customization and Proprietary Advantage: Many organizations will not be content with one-size-fits-all AI assistants. They will adopt IaaS-level frameworks (like Semantic Kernel) to build proprietary agents infused with their unique data, terminology, and business rules. This deep customization can become a competitive advantage – imagine a healthcare provider with an AI agent that deeply understands its clinical guidelines and patient population or a pharmaceutical company with agents that have “read” all its research data.
By building on open frameworks, enterprises can keep these agents in-house (running on their secure infrastructure or cloud tenancy) and ensure interoperability with legacy systems. Indeed, industry analysts observe that early experiments often start in sandboxed frameworks like AutoGen or LangChain and then migrate into custom production environments as companies productize the solution. Microsoft’s approach is smoothing that path by offering frameworks that are open yet well-integrated with Azure tooling, making it easier to go from prototype to production without a complete rewrite.
Contextual Intelligence at Scale: By leveraging shared memory and multi-agent orchestration, enterprises can achieve a form of collective intelligence in their software. For instance, a hospital could have a constellation of agents – one monitors real-time bed availability, another analyzes incoming patient data, and a third forecasts staffing needs.
Together, orchestrated by a central planner agent, they could optimize hospital operations in ways no single bot could. This kind of scenario is becoming feasible as platforms like Azure AI Agent Service support multi-agent workflows with durability and error handling.
The strategic implication is that businesses should identify high-value workflows that could be enhanced or automated by such collaborative AI agents. Early wins might be in areas like customer service (as Microsoft’s NTT Data case showed, using agents to halve time-to-resolution) or internal knowledge management (as in Toyota’s use of multiple knowledge-sharing agents).
Data Readiness and Governance: While the technology is enticing, many enterprises are finding that success with AI agents depends on getting their data and governance in order first. As VentureBeat reported, organizations like the Mayo Clinic and Cleveland Clinic are currently focused on building robust data infrastructures before deploying autonomous agents. Clean, accessible data is fuel for these agents – without it, even the best AI will flounder or produce poor results. Moreover, controlled flow engineering (i.e., carefully managing how agents execute tasks and make decisions) remains critical.
Especially in healthcare and finance, companies will require that every action an agent takes is auditable and within policy. Executives must ensure that any agent platform adopted provides the necessary guardrails, audit logs, and compliance certifications. Microsoft’s inclusion of comprehensive security features (encryption, role-based access, content filtering, etc.) in Copilot Studio and Azure Agent Service speaks to this need. Similarly, governance processes should be established, for example, approving which tools or plugins an agent can use and instituting human review for high-stakes decisions.
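One simple governance mechanism worth sketching is a per-agent allow-list of approved tools combined with an audit log of every invocation. The policy structure below is hypothetical and deliberately minimal; real platforms enforce this through role-based access controls and managed audit trails.

```python
# Sketch of a governance guardrail: an allow-list of approved tools per agent
# plus an audit log recording every invocation attempt, allowed or not.
from datetime import datetime, timezone
from typing import Dict, List, Set

APPROVED_TOOLS: Dict[str, Set[str]] = {
    "BillingAgent": {"lookup_invoice", "send_reminder"},
}

audit_log: List[str] = []

def invoke_tool(agent: str, tool: str) -> bool:
    allowed = tool in APPROVED_TOOLS.get(agent, set())
    audit_log.append(
        f"{datetime.now(timezone.utc).isoformat()} agent={agent} "
        f"tool={tool} allowed={allowed}"
    )
    return allowed

invoke_tool("BillingAgent", "lookup_invoice")   # permitted and logged
invoke_tool("BillingAgent", "delete_records")   # blocked and logged
print(f"{len(audit_log)} invocations audited")
```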
Interoperability and Ecosystem Strategy: The push towards standard protocols (like A2A, ACP, MCP) means that in the near future, agents from different vendors might seamlessly work together. Enterprises should stay attuned to these developments; adopting platforms that embrace open standards will prevent lock-in and enable cross-platform AI workflows.
A healthcare network might one day have its internal AI agents interacting with an insurance company’s agents to automatically adjudicate claims – if both sides speak a common “AI language.” Strategic planning should include joining industry consortia or standards efforts to help shape these protocols and ensure they meet sector-specific needs (e.g., HL7/FHIR for patient data exchange, extended to AI agents).
Infrastructure and Cost Considerations: Running numerous AI agents, especially those employing large models, can be computationally intensive. Executives will need to weigh the cost-benefit and possibly invest in AI-optimized infrastructure. Notably, Microsoft is even developing specialized hardware (like the Azure Boost FPGA/DPU and AI chips) to support the demands of agentic AI workloads securely and efficiently.
Cloud providers will likely offer more cost-effective options for agent workloads (such as smaller domain-specific models or scheduling to run tasks during off-peak hours). Enterprise tech leaders should collaborate with their IT and cloud partners to optimize the runtime of agents, ensuring that this promising technology remains cost-effective at scale.
Conclusion
The evolution of AI agents – from simple chatbots to autonomous multi-agent systems – represents a significant shift in software architecture. Microsoft’s AI ecosystem illustrates how this evolution is playing out across different service levels. At the SaaS layer, agents like Microsoft Copilots deliver immediate value by abstracting complexity and acting as intelligent assistants for end-users.
Moving into PaaS, Azure AI Agent Service offers a platform for orchestrating multiple agents and customizing workflows, striking a balance between ease of use and flexibility. Finally, at the IaaS and framework level, tools like Semantic Kernel and AutoGen provide the building blocks to embed agentic capabilities directly into applications and infrastructure, with fine-grained control over memory, reasoning, and tool integration.
Across all these layers, the use of natural language and semantic context as a “lingua franca” is enabling a more organic form of system integration – one where AI components can understand the context and communicate actions in human-like terms.
Coupled with persistent memory stores and context-sharing mechanisms, this unlocks powerful emergent behaviors: agents can carry knowledge forward, coordinate with each other, and continually learn. For enterprises, and healthcare in particular, this means AI can transition from a point solution (like a lone chatbot) to a contextually aware network of assistants working in concert across the organization.
In conclusion, AI agents and agentic algorithms are more than a buzzword; they are quickly becoming an architectural pattern for the next generation of software. Microsoft’s approach, spanning Copilots to Semantic Kernel, provides a coherent blueprint for adopting these technologies.
Healthcare executives and their peers in other sectors should view this evolution as an opportunity to reimagine workflows and services with AI at the helm – not replacing humans but empowering them.
References
Microsoft AI Agent Stack - My list of training, learning and more.
Microsoft Tech Community – “Announcing General Availability of Azure AI Foundry Agent Service” (May 19, 2025), techcommunity.microsoft.com
Microsoft Tech Community – “Introducing Azure AI Agent Service” (Ignite 2024 announcement, Nov 19, 2024), techcommunity.microsoft.com
Microsoft Learn – “Copilot Studio overview” (Docs, May 15, 2025), learn.microsoft.com
VentureBeat – “Microsoft AutoGen v0.4: A turning point toward more intelligent AI agents for enterprise developers” (Jan 18, 2025), venturebeat.com
Microsoft Research – “AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation” (Publication, Aug 2024), microsoft.com
GitHub (Microsoft) – Semantic Kernel README (accessed 2025), github.com
CIO Dive – “Microsoft readies Copilot Studio for agentic AI” (Nov 19, 2024), ciodive.com
CIO Dive – “Microsoft readies Copilot Studio for agentic AI” – Security and adoption insights, ciodive.com
IBM Research Blog – “The simplest protocol for AI agents to work together” (May 28, 2025), research.ibm.com
VentureBeat – Enterprise readiness of agentic AI (Jan 2025), venturebeat.com