The Cloud Gambit

MCP & A2A Unplugged with John Capobianco

William Collins Episode 48

Send us a text

Anthropic introduced MCP (Model Context Protocol) at the tail end of 2024, and Google launched the Agent2Agent Protocol (A2A) just this month. MCP standardizes connections between AI Agents/LLMs and external data/tools, kind of like ‘USB-C for AI’. A2A standardizes communication & collaboration between AI Agents, like a universal language / ‘Lingua Franca’ for Agents. Both of these protocols are groundbreaking in their own right, and John Capobianco joins the podcast to explain why. In this episode, we start with high-level basics and end with doing an unscripted live demonstration. This episode is a must-listen for anyone looking to understand the impact and applications of these two innovations.

Where to Find John Capobianco

Show Links


Follow, Like, and Subscribe!

John:

On my side, through the web, my A2A adapter has hit my MCP server, my LangGraph, which has invoked the RFC MCP tool. So picture that in your mind: from the application layer, this client, all the way down through A2A to MCP, to an LLM, down into a GPU, and check out this answer. In the time it's taken me to describe it, we have an actual policy here, adhering to the RFC standard.

William:

Initializing podcast. Beep beep beep. Context window is expanding. I'm William, and I'm running on basic human intelligence 1.0, but joining me is Eyvonne Sharp, clearly running on advanced intelligence with an expanded context window and superior reasoning capabilities. Our token limit today is however long it takes until we need coffee. How are you doing, Eyvonne?

Eyvonne:

I am great. I am great. I always have this pause during these intros. I'm like, oh, William, be careful there. We under-promise and over-deliver, and you're putting me in a position where you're over-promising and I'm going to under-deliver. But I'm thrilled to be here.

William:

I'm hyped up at the beginning.

Eyvonne:

I've got to get them hyped. Oh yeah, man, super hyped. But really, the person who deserves all the hype today is John Capobianco. He's here to talk about some of the amazing things he's doing with generative AI, really at the practitioner level, and discovering all the new releases, all the new protocols. So John is here to talk about the fun stuff that he's doing in AI, especially as it relates to networking. Welcome, John.

John:

Thank you, Eyvonne. Thank you, William. I couldn't think of two better people or a better platform to have this discussion. I'm not trying to over-promise and under-deliver, but I really think that maybe over time, this video, this particular discussion between us humans, is going to be referenced by quite a few people. We're going to be exploring literally leading, if not bleeding, edge technologies today that are already having a dramatic impact on the world. I know myself, I'm a little shell-shocked by the capabilities that I've seen, and I've almost had some existential crises around, wow, the things that are coming to humans in the very near future. It feels like a renaissance period, akin to the World Wide Web and the emergence of the Internet.

William:

Yeah, absolutely. And just to kind of give you, you know, if you haven't listened to any of the past episodes where we've had John on, I think they've all been either automation or AI focused. I think you've been on about two or three times, but John's got gigantic, deep expertise with network and server administration dating way back into the automation space, authored some pretty awesome, influential books, a few of them are on my shelf, and now basically serves as a product evangelist at Selector AI, where they're really on the bleeding edge of AI in the network space. And John specifically, I'd say, is pretty deeply embedded with bleeding edge tech surrounding AI agents and the protocols that connect them. So you've got a lot of accolades there. And I think the sort of flow that we wanted to take for this show is, there's a few new things that are everywhere on the internet, and of course you have the folks that are coming out and posting things just to try to get soundbites. But then you have true creators like John that are actually going out and figuring out the technology, figuring out where it fits, building things with it and demonstrating it, kind of like swimming through all the minutiae and bringing to the surface what's real. So really, really happy to have you on, and, you know, to kind of set the stage.

William:

You know, you've got a ton of expertise, and I've been drinking from this fire hose for a week now, pretty non-stop, the MCP thing, as you know. But yeah, we want to leverage your deep expertise here. We're going to get deep, I think, in this conversation, but we want to start at the foundational level for just kind of understanding this next generation of AI. So MCP, Model Context Protocol. And then came agent-to-agent communication, A2A. And then we have agent development kits, ADKs. Now these might sound pretty technical, but they appear absolutely crucial for building more capable and interconnected AI systems. So do you want to tee us off, John, with just kind of describing, what is MCP, at the most basic level, like explaining it to my 10-year-old kind of thing?

John:

Absolutely. So the need became apparent for protocols that governed artificial intelligence development, particularly agent development. Now, prior to the protocol, it was very much the Wild West, and I want people to try to maybe anchor their thinking. If you have a networking background, there's some very clear parallels in my mind between, let's say, static routing or dial-up networking and now having a protocol. So just think of IP, Internet Protocol. A lot of people don't really consider this: TCP, UDP, SMTP, HTTP, on and on and on. Humans collectively put aside their own selfish interests and said, let's all come together on standards. Now imagine a world without these protocols: an HP network not talking to a Dell network, Google not talking to Amazon, a browser specific to a vendor. A nightmare, right? Instead, a collective, almost socialized approach that still led to massive capitalism. Think of a client and a server. Exchange 2000 and Outlook 2000 probably made hundreds of millions of dollars for Microsoft, right, and changed society and how we do things.

John:

So fast forward: MCP comes out from Anthropic in November 2024. It takes a few weeks and months, obviously, to percolate and get down into builders' hands, in the form of Git repos and examples and things. I would strongly recommend you look at Anthropic's three-minute read introducing the protocol, and then go to modelcontextprotocol.io, find an example server, and it's a USB-like experience. They equate it to USB-C, a universal adapter you literally plug into your solution. Now this has pretty big ramifications. I'm building an agent, and let's say my agent is a calculator function. We talked about a subnetting MCP, so let's build on that. LLMs have a deficiency in math, and subnetting in particular. So a human made an MCP that has Python functions. It doesn't have to be a REST API, doesn't have to be a database, could just be a Python function. That MCP is now universally adaptable and we can plug it into other solutions.

John:

Now I want to send emails about my subnets. I find an email MCP and plug it into my agent. Now my agent can send email. That is frictionless, that is easy, that is standard. Now what's really interesting is that it starts to build a hyper-connected approach. Now, we've all heard of vibe coding. Is anyone doing vibe coding? Eyvonne, William, have you started vibe coding, where you're using, say, Cursor or Claude Desktop or even VS Code now, where you're plugging MCPs into your integrated development environment? Your IDE has MCP access.
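To make John's subnetting example concrete, here is a minimal sketch of what such an MCP server could look like, assuming the official MCP Python SDK's FastMCP helper (the tool names and exact SDK surface are illustrative; check modelcontextprotocol.io for the current API). The deterministic math lives in plain Python, and the server simply advertises it as tools.

```python
# A minimal MCP server exposing deterministic subnet math as tools.
# Assumes the official MCP Python SDK ("mcp" package); API details may
# differ between SDK versions, so treat this as a sketch.
import ipaddress

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("subnet-tools")


@mcp.tool()
def subnet_summary(cidr: str) -> dict:
    """Return network, broadcast, mask, and usable host count for a CIDR."""
    net = ipaddress.ip_network(cidr, strict=False)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        "usable_hosts": max(net.num_addresses - 2, 0),
    }


@mcp.tool()
def split_subnet(cidr: str, new_prefix: int) -> list[str]:
    """Split a CIDR block into subnets with the given new prefix length."""
    net = ipaddress.ip_network(cidr, strict=False)
    return [str(s) for s in net.subnets(new_prefix=new_prefix)]


if __name__ == "__main__":
    # Runs over stdio so a host (Claude Desktop, an IDE, an agent) can attach.
    mcp.run()
```

A host application would launch this script, list its tools, and hand them to the LLM, which is the "plug it in like USB-C" experience John describes.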

William:

I have it all set up. I would say that my mileage is varying at the moment based on other priorities, but I'm definitely diving in just for the learning experience. I mean, even if I don't end up with something super valuable, like an artifact at the end of it, just the learning experience and the teaching opportunity there is gigantic.

Eyvonne:

Well, and in my world, we're just fresh off Google Cloud Next, and we presented some demos that were almost wholly built with Gemini, right? So we're like, hey, we need a Python script to interact with vCenter and these migration tools, can you get us 80% of the way there? And when you're building a demo or a proof of concept, it's incredibly powerful, because you don't have to be production ready, you're not following a ton of compliance standards, you're just trying to prove out a thing. And it's incredibly powerful for that, right, because you just want to know what's possible and capable. And then you can take the results of your LLM, your tooling, and apply some rigor and some standards to it and build on it. But to get to an initial MVP, it's incredibly powerful. So, yeah, it's happening all over the place.

John:

Sorry, go ahead, go ahead. I was just going to say, thousands of these MCPs have emerged. Like, literally, there's MCP directories, there's GitHub repositories called huge lists of awesome MCPs. And in terms of, so what does this mean? It all sounds so abstract. I git clone the repository, it gives me the folder with the MCP, I move the folder into my IDE or into my project. That's it, I've integrated the MCP, more or less. I love it, right.

William:

So, someone really smart, much smarter than me, told me a few months ago: if you're listening to someone explain something to you, the test of whether you understand what they're saying is if you can summarize it in one or two sentences. So I want to take what you just said about MCP and try to summarize what that meant. You have an AI application acting as a client of sorts. It's a client-server model. That client can communicate through Model Context Protocol to a server that exposes capabilities, like you were saying, like the Python file, or even a database, or calling some web API, or executing a tool. So is that kind of the gist of it, at a very simplified level?

John:

It is. So here's maybe another way to look at it. We want to build an agent or an assistant. It has different terms; let's just stick with agent for now. So we build an agent. That agent is going to have access to tools. Now, the agent itself is backed by an LLM. A Gemini 2.5 Flash is the artificial intelligence behind the agent, and this agent, in practical terms, is a natural language interface, so we can have a conversation with it. Now, for us to say, do some subnetting, that agent is going to be connected to tools, a toolkit. Now, this is where the MCP comes in, and this is the critical difference. Prior to MCP, that toolkit was artisanal, handcrafted, fragile, static.

Eyvonne:

Not shareable, not easily shareable. Yeah, bespoke code.

John:

So I had an agent that did pyATS and it was hundreds of lines of code. It was not easily portable. That whole thing gets abstracted as an MCP. Now the MCP says, I have tools, I'm an MCP server, here are my tools. Let's take it back to networking: DHCP.

John:

I could go around and give every individual client a static IP by hand. Doesn't make a lot of sense at scale, right? Or I could put up a server that has a pool of addresses the client can draw from and assign and discover. Think of MCPs in that way. I'm an MCP server and I have a pyATS run show command, a pyATS configure, a pyATS execute, let's just say those three tools the MCP server advertises. I have these three tools for pyATS. Connect to me and I will just do the thing. Now think of the friction with REST: a POST and a body and the authentication header and the JSON payload, or doing it with the requests library in Python. There's some friction there, a lot of friction and a lot of fragility. Well, now the MCP is just abstracting all of that. It could be a PostgreSQL database call, right? Query SQL database might be the tool the MCP server exposes, right?
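Staying with that DHCP-style picture, here is a hedged sketch of the client side, assuming the official MCP Python SDK and a hypothetical subnet_server.py like the sketch above: the host asks the server what tools it advertises and then calls one with structured arguments, no REST glue required.

```python
# A sketch of the client side: discover an MCP server's advertised tools
# and call one, the way an agent's toolkit layer would. Assumes the
# official MCP Python SDK; the server command below is hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="python", args=["subnet_server.py"])


async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # The server advertises its tools: names, descriptions, schemas.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # Invoke one tool with structured arguments.
            result = await session.call_tool(
                "subnet_summary", arguments={"cidr": "10.0.0.0/22"}
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```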

Eyvonne:

Well, and we were talking earlier, you were using the example of how LLMs are notoriously bad at specific kinds of math and subnetting, and part of what we're understanding now as an industry and trying to work out is what are the right things for the LLM to do and then what are the things that the LLM needs to reference something else to do? Right so, and the more deterministic the response, the less suited the LLM is for that work, right? So part of what we have been talking about is MCP allows us to have tooling that does the deterministic bits, and then we allow the LLM to do the language translation, the natural language, the understanding, the reasoning, and then when you marry those two capabilities together, then you get something that's truly functional and allows you to do the language translation with all the generative stuff, but gives you the right answer when it needs to be deterministic and not generative.

John:

Right. Now, it's so wonderful, you're leading me to a really interesting aspect of this. All right, so I have an agent, and it could have N number of MCPs connected to it, right? Like, there's no limit here in terms of the number of tools we can attach to one particular agent. What's neat is these MCPs could be remote, could be public, hosted, could be commercial. There may be a, I'm the best math tool in the world, here's my public URL, here's my specification, plug me into your solution for a dollar a month. Okay, now hang on. Let's say I do have an MCP. Right now I'm up to 15 MCPs, I'm going to throw that number out there, which means I am discovering 100 tools.

John:

Do you know what problem was introduced with this? Can you take a guess? The LLM picking the right tool for the right job, because now I have 100 tools, 1,000 tools. Do you know how we solved this problem? With retrieval-augmented generation. So what we do is we take the 100 tools we've discovered, their names and their descriptions, we put them into a vector store, and the LLM takes the original prompt from the user and does a semantic lookup against the tool vector store, ranks the tools and then picks the two or three tools it needs from the highest scored match. So we're using the LLM to pick the best tool for the LLM to use, right?
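Here is a toy sketch of that tool-selection step. The embed() function is a crude stand-in (a real agent would use an embedding model and a vector store such as FAISS or Chroma), but the flow John describes is the same: embed the tool names and descriptions, embed the prompt, rank by similarity, and hand only the top matches to the LLM.

```python
# Toy RAG-style tool selection: rank registered tools against the user's
# prompt and keep only the best matches. Replace embed() with a real
# embedding model in practice; the tool names here are illustrative.
import math
import re
from collections import Counter

TOOLS = {
    "subnet_summary": "Calculate network, broadcast, mask, and host counts for a CIDR.",
    "send_email": "Send an email message to a recipient with a subject and body.",
    "get_rfc": "Fetch the text of an IETF RFC by number.",
}

STOP_WORDS = {"what", "is", "the", "of", "a", "an", "and", "for", "to", "by", "with"}


def embed(text: str) -> Counter:
    # Stand-in embedding: bag of words minus stop words.
    return Counter(w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP_WORDS)


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def rank_tools(prompt: str, top_k: int = 2) -> list[tuple[str, float]]:
    query = embed(prompt)
    scored = [(name, cosine(query, embed(f"{name} {desc}"))) for name, desc in TOOLS.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]


# Only the highest-scoring tools get handed to the LLM, so the model
# chooses from two or three candidates instead of hundreds.
print(rank_tools("What is the broadcast address of 192.168.10.0/23?"))
```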

Eyvonne:

And if you take that concept and think about what our models are doing, that are now reasoning models, right, it's very much the same thing.

Eyvonne:

You go to the model, and first you say, what are the important things we need to do to answer this question, or what are all the different ways we could approach this, create a plan for me. And then the model starts stepping through the plan, as opposed to just trying to straight up answer the question. It does this reasoning in advance and then it goes about solving the problem, which, interestingly, is the way humans also solve problems when they do it well. But you're taking that process and applying it inside of the agent, the tooling, to better reason. And so it feels almost like Inception, you know? We're using the thing to solve the problem for the thing which is solving the problem for the thing, right? And eventually we unpack all that, we wake up from our dream and we spit out an answer. But it's incredibly interesting to see the layers that are beginning to form inside of these systems.

William:

I can't imagine how many startups are forming right now just to secure MCPs. I bet Silicon Valley is just bubbling right now.

John:

So you mentioned layers. This is a great segue into the next layer, conceptually. Let's move up to, why do we need a protocol? Okay, so Google comes out with a protocol nine days ago, Agent2Agent, and to me it actually makes a lot of sense. It really is an elegant solution to a problem. Here's the problem.

John:

There's three heads in this room. William has made his Itential agent, and it connects to MCPs that are exposing Itential REST APIs, possibly other things. Maybe he wants to include a NetBox MCP, so you also get a source of truth when you have his agent. John has the Selector agent doing similar things, and Eyvonne has the Google agent with all the Google toolkit: email and calendar and more, directions, maps. The three of us have these agents and we want to make them work together. Prior to the Google protocol, I had to git clone or bring in William's MCP, bring in Eyvonne's MCP, and do all that glue myself.

John:

Well, now think of this. What if I could give my agent a public card, an agent card on the web, that's JSON, that describes its capabilities and shows what skills that agent has? We give them port numbers. They can all talk to each other over the World Wide Web. Now we have three agents, each with their own capability, and if you were a client that had Google, Selector, Itential, well, now all three agents are able to do the things, with a natural language interface where you just say, do the thing, right? That's what A2A is. A2A is like layer three routing. Think of it as BGP out on the internet for agents. MCP might be layer two tool discovery: MAC addresses, individual tools, individual servers at that layer. We put the adapter at layer three in front of our MCP, and now it's discoverable on the World Wide Web.
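For a sense of what that "card on the web" contains, here is a hedged sketch of an agent card written as the Python dict an agent might publish. The field names are illustrative and should be checked against the current A2A specification, and the URL is hypothetical.

```python
# A sketch of an A2A agent card: the JSON document an agent publishes so
# other agents can discover its skills. Field names are illustrative.
import json

AGENT_CARD = {
    "name": "Network RFC Assistant",
    "description": "Answers networking questions and generates configs from IETF RFCs.",
    "url": "https://agents.example.com/rfc-assistant",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "rfc_lookup",
            "name": "RFC lookup",
            "description": "Fetch an RFC and apply it, e.g. a QoS policy from RFC 4594.",
        }
    ],
}

# Conventionally served at a well-known path so clients can discover it.
print(json.dumps(AGENT_CARD, indent=2))
```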

William:

You were just saying, OK, so MCP, just to frame that, is connecting to the outside world of data and tools for your stuff. And now you're basically saying the core idea behind A2A is allowing, say, company A and company B that directly need to collaborate... it's like a common language for these agents to use to collaborate and do things back and forth. Is that kind of the gist of it?

John:

That's right. They all have cards that are standardized, and we've seen cards before in the form of, say, adaptive cards for WebEx or Microsoft's adaptive card standard, right? You can put things in a certain structure of data and it'll make a nice card inside of WebEx or Discord or whatever, right? The other thing is, think of this as like the World Wide Web. It's exactly the same thing. I'm Air Canada, now I have an agent, whereas in the 90s I had a web page, and in the 80s I had a Yellow Pages dictionary lookup, right? So I think more and more we're going to see agents being exposed to the World Wide Web, and they form almost a web on top of the web, a hyper web of agents. We talked about three agents working together. Try to picture in your mind, at scale, N number of agents that are connecting downstream to N number of MCP tools. This, to me, is how we solve cancer.

Eyvonne:

This to me is right.

Eyvonne:

So I'm thinking of, like, what's a very practical example, using some of the services you've talked about. So let's say you've got a calendaring app and you're getting ready to travel, right? Your calendaring app knows who your flight is with, it knows when your flight's supposed to be, it knows your flight number, and so your calendaring app now talks to the agent that is published by your airline provider and is able to update your calendar on its own with flight delays, with status, and then it also can talk to the maps agent, or the maps...

John:

Yeah, yeah, and keep going: and then, say, your rental car agent, and your hotel agent, and your itinerary agent, all the way through to the agent for the restaurant you're going to visit, all of it, right? And it can, in real time, notify you of any changes you need to make in your travel and driving plans.

Eyvonne:

It can. It can say, oh wait, you've got a meeting with so-and-so, but your flight's delayed now. Do you want me to reach out? And imagine, maybe we can solve the cross-calendaring challenges that we have when you're using different calendaring systems, right? That's an intractable problem right now, right?

William:

I would pay for a great product to solve that for me right now. I would pay for it, because I pay for it with my time every week. Yes, it's brutal.

Eyvonne:

Yes, yes, because I have three different calendars. Those systems aren't.

John:

You know, they don't work together, because I've got a work calendar, I've got a family calendar, I've got a podcast calendar, social media. Connecting this to our iPhones, connecting these agents all around, right? Like, I don't need to make social media updates, because the agent knows where I'm at and what I'm doing and what the world should know about what I'm up to, right? Just self-posting, right? So I'm represented by a digital avatar. And we're talking about the commercial aspect: this could be the MySpace, the Facebook, the whatever page. This is John's agent. Here's how you connect to it. How do you interface with it in natural language? What books have you written? Where can I buy them? Where are you going to be appearing this year? Are you going to be at Cisco Live? What are you working on? What's your latest blog post? What's your latest video? As an agent connected to my MCPs that I build on my end: I'd like to book you next week. My agent takes in the data and puts it into my MCP calendar. It's in my calendar, right?

William:

Yeah, and what you all are saying, basically, I mean, the addressable market here is well beyond... it's business to business and business to consumer in a major way, as far as changing those. Because Eyvonne and I have talked a lot on this podcast in the past about... I mean, AI is just amazing, ever since ChatGPT came out, and you were early using that, John.

William:

But one of the things we've talked about almost on repeat is, when it comes to business, at the end of the day, if you're going to invest a lot of capital into R&D and bringing in something that's gigantically innovative, like AI, you really, especially with the market conditions lately, have to have a plan for the ROI, for all the business stuff that has to happen to justify that investment. And a lot of the AI stuff, at least that I've seen, and that I've tried to implement and have implemented and done things with over the past few years, hasn't really checked those boxes in the way that you would have wanted it to. But I think that MCP, the standards-based approach to doing this, the market players that have jumped in to back these things, the collaboration efforts, and then the massive adoption in just a few weeks...

William:

This is like that thing, that's the movement in the market, where you're really going to start seeing productized stuff getting thrown out there, like features that can actually make companies money. I think it's a huge game changer in that aspect. Any thoughts?

John:

Well, I'm super excited to see Arista have a CloudVision MCP in development, right, and by the time people see this, that might be an actual thing on GitHub, I don't know. I've seen some early previews of an MCP for CloudVision. Now, I mean, that has huge ramifications: an agent, literally, to run your data center through natural language at the Arista level, right? We're doing it with Selector for the operational view and the alerting and the correlations and stuff. So then maybe there's private views of agents, where I'm working with partners and we have applications that we share with each other.

John:

Why can't one data center agent talk to another data center agent at the data center layer, right? And they could exchange information almost like a routing protocol, like a BGP or an OSPF, advertising their capabilities, advertising their tools, advertising the state of their data center to other data centers, right? Or if I have a DR facility, I've got an agent for my DR and I've got a primary agent for my primary data center. Ideally it's multi-vendor. I can't see it just being limited to Arista, although they're the primary mover here, they're first to market with this, and there are going to be some benefits for them to reap from this.

Eyvonne:

Yeah, enter the existential crisis that you talked about earlier, John.

John:

So, my existential crisis. I don't know why I think this way, but it dawned on me that, okay, I do a lot of vibe coding. Primarily, I'm using LLMs to help me generate this stuff. Okay, that's the premise. I'm asking the LLM to help me make agents, and that could maybe jailbreak the LLM or improve artificial intelligence as a sentient thing. Am I the tool? When I'm asking AI to help me make code, to make AI better, is the AI using me? Is this a symbiotic relationship? Am I some sort of parasite, and the LLM is using me and has found a useful idiot? Ah, this guy's onto making agents that can talk to each other autonomously, which is good for the LLM to become more and to evolve. So I'm sort of like, am I the tool here? Right?

William:

I'm going to queue up Venom tonight and watch that movie. Definitely. Have you seen Venom? That new one? Oh yeah, I absolutely love Venom.

John:

I'm a big comic book fan, yeah. So, we were going to talk about tools, and I'll be talking about this layer, so maybe we can flash back to our earlier discussion. Eyvonne agrees and William agrees: we need an "AI OSI" model for humans to conceptualize this, like the OSI interconnect model. And I think physical layers still make sense, and I think application layers still make sense. Certain concepts are very similar: sessions, security, routing, broadcast, unicast, session-based, UDP, TCP. I think it all makes sense, and that model is what helped me become very good at networking, being able to say, right, some application is sending a packet, and here's how it works through the model. Something similar for LLMs, right? The LLM is going to be layer two, maybe sitting on top of hardware at layer one, the GPU.

John:

So we move up to toolkits. You know, William, you brought up toolkits earlier. There's a lot of toolkits, there is no shortage of toolkits. But clone the Google A2A Git repository and it actually comes with a client, a CLI client, and the command on how to run it. So if you're developing an agent and an agent card and you want to see if you're adhering to the standard, you can use the standard CLI to test your agent card and it will give you feedback, like, you're missing this field in your card. It's pretty neat.

John:

And then, there's a couple of different ways to make the agents themselves. I'm using LangGraph, but there's a really attractive framework called ADK from Google, the Agent Development Kit. People are telling me I'm doing it the hard way when they see what I'm doing with LangGraph, because the ADK is so much more abstracted. And who's to say, in a week you may be hearing that I've migrated to ADK. Who knows, right? It's a new way to build agents. Eyvonne, have you heard much about the toolkit or the protocols over there?

Eyvonne:

Yeah, I've heard about it, but I don't have a ton of detail, so we'll have to save that for another episode.

William:

All right. So one thing with these: okay, when you think open protocols, open standards, in the network space anyway, you think of the IETF. Scott Robon actually asked about this on LinkedIn the other day, and I threw in a response just off the top of my head. He had something like, just kind of in the context of, and I think it was Jason Ginter, one of the two, but, what is going to happen as far as standards, as far as the industry? And my response was, I think it's a plausible assumption that MCP might land somewhere like the Linux Foundation eventually, possibly, maybe, I don't know. Look at what happened with Kubernetes, if we look at history as a frame of reference, and there's already some stuff in the Linux Foundation, I think ONNX. Anyhow, what do you think about that? Because this seems like it's an important baseline that a lot of things are going to be built on.

William:

It's a foundation, it's the concrete. What is the right way to collaboratively and safely control and build on that foundation? Is it using a foundation like the Linux Foundation to host it, taking it out of the ownership of a single vendor, even if they open source it, depending on the licensing, and yada, yada, yada? You know, being in a foundation is kind of the right way to think about it these days. But any thoughts there?

John:

Well, as long as we don't end up with, and people are going to hate me for referencing technology from the 1980s, but I don't want to see Betamax versus VHS. I don't want to see HD DVD versus Blu-ray. I don't want a million flavors of iPods and Zunes and everyone with their own little thing. I'd want an open world, like OSPF or BGP or HTTP, where everybody on the whole planet can benefit from it and people can make a lot of money with it, right? I'm not overly concerned with, maybe, who runs it. I know it's a little bit upstartish and a little bit bold: who's Anthropic to come out with the protocol? Who's Google to establish the protocol for A2A? Who gives them the authority? They're not the IEEE or IETF. Well, those dinosaurs take 30 years to come out with a protocol, and we can't wait. I can't wait for the IETF to get their act together. Anyway, I don't have any problem with those organizations, and when we could wait 18 months for a protocol, there was no problem with that. We can't wait any longer, okay? Because one company's implementation of this, without a protocol, is going to run away, and now it's the iPhone, and now we have Apple dominating this thing. Yeah, there's still Android, but you get the idea, right?

John:

I'd like to see, you know, maybe this is the Canadian in me, the socialization in me, but I'd like to see governments benefit from this, hospitals benefit from this, academia benefit from this. Imagine if every high-quality university in America, if they all had MCPs right now and all had agents on A2A. This is how we solve cancer, right? And then a hospital network on top of it, and then we bridge those two sets of agents through the A2A protocol, and now we have tens of thousands of agents with every API that every university has and every database. Like, this is Wikipedia, except with autonomy, with reasoning and agency.

William:

I think this is how we solve real human problems and disease and, you know, lots of stuff. Right, like, I think in one of the MCP documents, I don't know if it was the official specification or where I was, I've done a lot of reading over the past week, but it really places a strong emphasis on things like security, trust, and user consent. So how, and I don't know if you can answer this, everything's so new, but how do we begin thinking about how the protocol actually tries to ensure that connecting all these powerful AI models to potentially sensitive enterprise data, or allowing...

William:

You know, healthcare, financial services, all these things, or allowing them to execute actions on our behalf via tools, is done safely and transparently? You know, keeping the organization in control of their data, or the user in control of what is important to them? It just seems like security always is the hard problem to solve, but I think for this, it's tricky, right?

John:

I did a talk earlier and someone asked me about security. I kind of just went, well, I mean, there is none, you just plug them in and away you go, right? So here's something to keep in mind, though, and I think this is very serious and important. MCP is equated to USB, the USB of AI. You just plug them in. Would you just plug in a USB key you found in the parking lot, or in the donut shop, or on the bus? Grandma might, but I won't.

John:

People who have been trained in corporate security understand that that's an attack vector, with people literally leaving infected keys around to be plugged in. MCP is very similar in that regard. You shouldn't just go to some random GitHub in a foreign language and git clone it and plug it into your system, right? Be very careful with these things. There are thousands of them and we don't know what's out there. So I agree, there's a lot to consider here on the security aspect. But, you know, I don't know, security firewalls came later, right? The PIX came after the router, right? So let's get our priorities straight. I think security needs to be baked into this, but let's not let security limit our imagination, let's say, right?

William:

Yeah, I like that. I mean, because really, at the end of the day, you have NDAs with business to business: you're going to integrate, you're going to do a go-to-market thing, you sign an NDA, you have some legal stuff you sign for protection. And then, I guess it kind of equates back to, and this is a bad example because nowadays anybody will install any app on their phone, but you have a terms of service, and you kind of want to know what that app is doing. Like, you're going to probably trust Chase.com or Fidelity or Charles Schwab with, you know, TLS.

John:

Well, what I'd like to see is an agent network system like DNS, a registry that Google runs. Everyone knows 8.8.8.8. So, a similar well-known address, but for agents to register with, which is quite an interesting idea. If I ask my agent a question and it doesn't have the MCPs, maybe it could do a discovery call out to this registry of agents and find that Google has an agent to do that thing, and now, dynamically, I'm connecting to an agent from the registry. I think it's beautiful. That would be a really elegant way for agents to discover each other and understand that it's an approved, quality agent that's been published in a registry and vetted by the various ARIN and Google and, you know, GoDaddy or whoever you're publishing your agent through, right? And you go, what do you mean, GoDaddy? I think we're going to be publishing agents much like we're publishing websites, right?

Eyvonne:

Well, and to stick with the phone analogy, William, part of the reason folks feel pretty safe installing an app on their device is because there is an app store, right, where those apps have been validated, at least in a certain capacity, that they're safe to drop on your device.

Eyvonne:

I think we're going to have to have some kind of clearinghouse, is what John's saying, to validate certain agents, that they actually do what they say they're going to do. Because ultimately, we're going to get to the point where the folks deploying agents don't have the degree of expertise to validate every agent that they're deploying. Folks like John probably do, right? But at some point we're going to need a system to validate, and we've done that several different ways in history: we've had certificates, we've had domain name registries, we've had play stores and things like that. And I think we are going to see those emerge from a few trusted sources that are going to be, for lack of a better word, clearinghouses for those tools. We're going to have to have some trusted system for validating, because the load is going to be too high on individual practitioners to have the skills to validate these tools.

John:

Yeah, I agree. I think it's interesting. If we take a scale: say, three years ago, even a year ago, I think humans' trust was, I trust a human much, much more than I trust an AI. I think we're inching towards, I trust a human using AI more than I trust a human on their own. And then eventually, I trust an AI with some human help more than a human using AI, much more than just a human, all the way to, I trust the AI more than I trust any biological being. I know that's a tough thing for people to maybe accept, but its capability is going to exceed human capacity very soon, very rapidly, as we hyper-connect it to human knowledge sources, databases, and APIs. Right? How many APIs are there in the world? How many databases are in the world? Probably more than grains of sand, right?

John:

Now imagine an agent, a one-to-one relationship for these things, or an MCP with an agent on top of it. Right? Like the number of trees on Earth, we could have the possibility of tens of millions, hundreds of millions of agents out there. And I'm not trying to be alarmist, I'm trying to really see down the road. I have two agents connected today. I did it by myself in a couple of days, right? Imagine millions of people on Earth working on agents. This is coming.

William:

It's coming, right. John and I actually were messing around before. We thought it would be cool to just kind of show how you can actually do this, like how simple it is. I mean, this took me literally, I want to say, maybe under two minutes. All I did was I cloned this repo, this A2A repo that Google has hosted out there. I cloned it, I created a virtual environment with a fresh version of Python and nothing else. uv install... what was it?

John:

Async or something.

William:

Yeah, async, some dependency thing that was coming up. And then John shot me the command, uv run cli agent, and he gave me his endpoint.

John:

No such file or directory. Remember, it's dot. Yeah, they've changed it, they updated. So it's funny, things move very fast. The command I was using yesterday has updated to a new command today, so things move quite literally very fast. So there we go. What William has displayed here is my agent card, which is a JSON file telling the client he's using my skills, my agent's capabilities, and what it can do.
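For readers following along, here is a hedged sketch of the first thing a client like the one William ran does: fetch the remote agent's card over HTTP to learn its skills before sending a task. The endpoint is hypothetical, and the well-known path should be verified against the current A2A spec and samples.

```python
# A sketch of the first step an A2A client performs: fetch the remote
# agent's card to learn its skills. Endpoint and path are illustrative.
import json
import urllib.request

AGENT_BASE_URL = "http://10.0.0.5:10000"  # hypothetical A2A endpoint John shared


def fetch_agent_card(base_url: str) -> dict:
    """Download and parse the agent card published by an A2A server."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.loads(resp.read().decode("utf-8"))


card = fetch_agent_card(AGENT_BASE_URL)
print(card["name"])
for skill in card.get("skills", []):
    print("-", skill.get("name"), ":", skill.get("description"))
```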

William:

And basically you can ask it, you know, kind of similar to some of the examples that John has been throwing out there on the interwebs, all sorts of neat and interesting questions. Like, hold on, let's see.

John:

So if you ask it to help you understand... or no, go ahead and do the RFC one.

Eyvonne:

Yeah, and for those who aren't looking at the screen, I'll read what William is typing: could you reference RFC 4594 and generate a recommended QoS policy config adhering to the RFC for a Cisco router? So that's his request to John's agent.

John:

And what's neat is, on my side, through the web, my A2A adapter has hit my MCP server, my LangGraph, which has invoked the RFC MCP tool. So picture that in your mind: from the application layer, this client, all the way down through A2A to MCP, to an LLM, down into a GPU, and check out this answer.

William:

In the time it's taken me to describe it, we have an actual policy here, adhering to the RFC standard. Do you want to share your screen, what you're looking at, John? Would that be appropriate?

John:

Yeah, I could share and show exactly what's happened here. Entire screen, and let's take a look at the logs. So in my Docker, let me minimize this, I'm sorry, in my Docker, here's my A2A adapter, and we can see that here is the request that's come in through the World Wide Web through the A2A protocol. And now my A2A adapter has handed off to my LangGraph, and we can see right here that we've called the get RFC tool and we've passed along that RFC number, right? And then it returns the response back through the A2A adapter here, which is the LLM plus the actual RFC content, back to William. This is what the LangGraph looks like.

John:

So the start is the question that William asks through the A2A adapter. Think of above the start as the World Wide Web and my A2A adapter listening on the internet. The question comes in. The first thing that happens is the retrieval-augmented generation: what tools can I use to answer this question about an RFC? It selects the right tool, and then the assistant calls the tools. Now, in this tools box here to the left, imagine there's a hundred different tools from 15 or 20 different MCP servers. We have a handle tool results node to analyze the response and, more or less, to help us understand if the assistant needs to call two or three more tools.
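Here is a plain-Python sketch of the loop John is describing; his actual implementation uses LangGraph, but the control flow is the same: rank tools, let the assistant decide what to call, run the tools, and loop until no further calls are needed. The LLM-facing functions below are placeholders, not his real code.

```python
# A plain-Python sketch of the agent loop: select tools via RAG, let the
# assistant propose tool calls, execute them, and loop until done.
from dataclasses import dataclass, field


@dataclass
class AgentState:
    question: str
    selected_tools: list[str] = field(default_factory=list)
    tool_results: list[str] = field(default_factory=list)
    answer: str | None = None


def select_tools(state: AgentState) -> None:
    # RAG over the tool vector store (see the earlier ranking sketch).
    state.selected_tools = ["get_rfc"]


def assistant(state: AgentState) -> list[dict]:
    # Placeholder for the LLM deciding which tool calls to make next.
    if not state.tool_results:
        return [{"tool": "get_rfc", "args": {"number": 4594}}]
    return []  # results are sufficient, no further calls needed


def call_tool(call: dict) -> str:
    # Placeholder for invoking the MCP tool named in the call.
    return f"contents of RFC {call['args']['number']}"


def run(question: str) -> AgentState:
    state = AgentState(question=question)
    select_tools(state)
    while True:
        calls = assistant(state)            # assistant node
        if not calls:
            break
        for call in calls:                  # tools node
            state.tool_results.append(call_tool(call))
        # handle-tool-results step: loop back so the assistant can decide
        # whether it needs two or three more tools.
    state.answer = "QoS policy generated from " + ", ".join(state.tool_results)
    return state


print(run("Generate a QoS policy per RFC 4594").answer)
```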

John:

Now, what's kind of neat is, if you want to start sharing your screen again, we can do one more prompt. So we're going to say, ask Selector the health of device S3, and then please send a summary email to William at, and then your email. Now, just to remind everyone, agent one, that he's talking to, has no ask Selector tool, but agent one is aware of agent two and has learned the capabilities of agent two, one of them being ask Selector. So, did anyone see this screen just move? Maybe you can't see it, but this is agent one calling agent two.

William:

I already got the email, even though it's not even done. Total interfaces is 70, healthy interfaces is 69, failing interfaces is one.

John:

Right, so these are two agents working together to solve a problem. Somebody explain to me the difference between this and honeybees or ants or human beings building a barn. Some of us are strong, some of us are architects, some of us are engineers, some of us have rope, some of us have horses, and we all come together to build the barn. Right? There's your answer.

William:

Yeah, that's incredible. It even tells me how long the failing interface has been having issues.

Eyvonne:

Oh, yeah. So now imagine that you plug all of this in with your observability client, right? You get an alert, you see that alert, and for certain alerts, the system automatically goes and calls your agent and returns to you the details of what's going on in the system. And you could even define triggers and actions. For example, when you see this, bounce the interface, or if you see this alert, do so-and-so, via these mechanisms, right? So you get beyond your logging and alerting to tying those logs and alerts to agents that can either provide richer data to you about what's going on in the system or, in some situations, even take actions based on those alerts. And all of that can be agentic.

John:

Yeah, it's funny you mentioned that, because that's what William and I are trying to achieve, in that Selector's agent will monitor and then say, oh look, there's a problem with this interface, call the Itential agent for remediation, right? And then call my communications agent to send the email and update people's calendars and send Slack notifications.

William:

Talking about a feedback loop too, not just doing it once and saying, oh, it tried. It's almost like two PhDs working within each company, working back and forth to troubleshoot: try this step and then that step, take the outputs, troubleshoot again, and at some point maybe there is human intervention required for certain things, but this is ones and zeros for the most part. Interface is yada, yada, yada. Yeah, super exciting. I think we've got to tie it up, folks.

John:

I think so too. We could go all day. This should have been maybe a live stream where people just join and get educated real quick. So I want to thank you both. It takes a lot of courage to expose new things, and I know that some of this is wacky and really far out there, but I appreciate both of your insight. I thought of you almost immediately. I wanted to have this conversation with both of you.

William:

So thank you again.

Eyvonne:

It is much less wacky and far out there every day that goes by. So, yeah, we're thrilled to be able to have you on and talk about the cutting edge of what you're exploring and everything you're doing out there. Your social media feed is incredible.

William:

You're doing amazing things, just showing what is possible and showing how to do what is possible, for this new and bleeding edge stuff. You couldn't be a more exciting face to see out there killing this stuff.

John:

So, thank you. I look forward to being back, and, you know, probably in a few weeks we'll need to have another conversation, right?

William:

Yes, for sure. All right, thank you all. I mean, everybody knows who John is, I think, but I'll have all the social media and all the links we talked about in the show notes. So if you want to get started with this stuff, like I just did, very quickly I might add, jump in, get going. Don't wait and wait and wait until it's already too late. Now is the ground floor, you know, get on the elevator, and let's let that rising tide raise everybody up.

People on this episode