
The Cloud Gambit
The Cloud Gambit Podcast unravels the state of cloud computing, markets, strategy, and emerging trends. Join William Collins and Eyvonne Sharp for valuable conversations with industry mavens that educate and empower listeners on the intricate field of innovation and opportunity.
Bridging the Gap: From Network Engineering to Application Networking with Marino Wijay
In this enlightening episode, we dive deep into the world of application networking with Marino Wijay, Staff Solutions Architect at Kong. Marino shares his journey from traditional network engineering to the cloud-native space, offering valuable insights for professionals looking to make a similar transition. We explore the parallels between VPNs and service meshes, discuss the evolution of networking technologies, and unpack the complexities of modern application architectures. Marino also shares his experiences as a CNCF ambassador and provides guidance for those interested in getting involved with the cloud-native community.
Where to Find Marino Wijay
- Twitter: https://x.com/virtualized6ix
- LinkedIn: https://www.linkedin.com/in/mwijay/
- Blog: https://marinow.hashnode.dev/
- Sessionize: https://sessionize.com/marinow/
- YouTube: https://www.youtube.com/@marinowijay
Show Links
- Container Lab: https://containerlab.dev/
- Cilium CNI: https://cilium.io/
- CNCF: https://www.cncf.io/
- KubeCon: https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/
- Istio Service Mesh: https://istio.io/
- Ambient Mesh: https://istio.io/latest/docs/ops/ambient/
- Tailscale: https://tailscale.com/
Follow, Like, and Subscribe!
Now these data planes are communicating with other data planes using a protocol called mTLS, but that mTLS protocol is simply a tunneling mechanism.
William:What is a VPN?
Marino:A tunneling mechanism.
William:Coming to you from the Cloud Gambit studio. This is your host, William, and with me, my co-host, the world-famous, highly distinguished cloud philosopher, Eyvonne Sharp. How are you doing today?
Eyvonne:Hello, I was trying to avoid the eye roll but I couldn't quite manage it. But thank you, William, for that introduction.
William:Awesome. And also with us is Marino Wijay. How are you doing today, Marino?
Marino:I'm doing very well. It's Friday and that's the best part. It's almost the weekend and, as I was talking about earlier, Friday is generally the busiest day. How are you both doing?
William:All good here, busy with kids this morning, getting them ready for school and all that noise. But other than that, yeah.
Eyvonne:In theory I have a light meeting day. I have high aspirations of getting some stuff done, so we'll see how that goes.
William:Good luck with that.
Marino:Yeah.
William:Yeah, so you come to us from beautiful Ontario, Canada, which is usually much colder than Kentucky, correct?
Marino:That is correct. I'm just outside of a very little city called Toronto. I live in Whitby, but Toronto is probably the most familiar city I can use and people would immediately recognize it. And I've lived here all my life. You know, I've kind of grown up in the area and seen how it's transformed and morphed into, you know, what it is today. It's a bit of a tech hub. There's a lot of thriving opportunity here and just a very diverse culture out this way. So I love being out here, but I also love traveling too. It's fun to get around and see what other people are doing and what other cities are up to.
William:Awesome, yeah, and you're not a hockey fan, are you?
Marino:I'm actually not a huge sports fan. You know what? I will go watch basketball games. If my sister's like, hey, I got some tickets, let's go hang out, I'm like, yeah, sure, whatever, three hours to kill in the stadium, let's hang out for a bit. But I don't have the patience or the, um, the focus level to pay attention to, like, baseball or hockey or any other sport. It's just, I don't know, just me.
William:I hear you. I had to ask. I'm a hockey fan, and every time I can find another hockey fan it's a special moment. There's not very many out there that I talk to day in and day out, so I'm always asking. But yeah, so kind of back to what I was saying. Before we hit the record button, I had two separate individuals reach out to me, probably within a month of each other, saying hey, you need to have Marino on the show, you need to reach out. So I love suggestions for awesome guests to have on in the future. If anyone in the audience has any ideas for future episodes, Eyvonne and I are all ears always. Um, yeah, so do you want to give us a brief overview, just kind of, of your background?
Marino:Yeah, absolutely so. Currently, as I reside in Canada, I work as a remote solutions architect for an API gateway company called Kong, and a lot of my career whether in the data center, in the cloud space or Kubernetes a lot of it had a lot to do with networking and that kind of brought me to this place, brought me to what I'm doing today. In a lot of ways, though, through my journey, I focused on certifications, took a variety of courses, changed job roles quite a lot to try different things and see how different roles function and how they interact with either the community or within the organization or across organizations in a cross-functional manner. So I've learned a lot. But I've also sat there and realized there's still so much to learn, tech-wise, environment-wise, to understand how organizations run.
Marino:And you know, I've come down to the realization that a lot of the decision-making that happens, especially in massive organizations, tends to be political decisions at times. And I sat there kind of going back and forth with this a few days ago, because I was asking one of my end users about some decision that they made a while back: why did you decide that you'd use this workload environment along with this one, when you could just accomplish everything with one? There was a lot of dancing around the answer, and I came to the conclusion that a lot of this wasn't their own decision. It was just coming from the top down, because of some agreement that was made. So to get to that point of realization, obviously I've had to work through different parts of my career, work in support, work in pre-sales, work in architecture, work in implementation and even the developer relations side of things, and it just opens your eyes to what goes on in the industry and how organizations make the decisions that they do.
Eyvonne:I made a pivotal kind of discovery at one point in my career. I naively believed for a long time that the right, the best technical solution wins, and the goal was to be right, to find the right answer, to find the best solution to meet all the requirements. That was what it was to be successful. And I had this moment of clarity where I'm like, wait a minute, you can do all of those things and still not get decision makers on board, and there's a human component to our work. And for me that's really when I started reading on organizational psychology and Kahneman and Tversky and trying to understand more of the human element of decision making. Because at the end of the day, people are implementing technology and making technology decisions, and that really shifted the trajectory of my career. So it's interesting to hear you bring that up this early in the conversation about your trajectory as well.
William:You've probably made a living, you know, somewhere in between the physical layer, layer one of the OSI model, and layer three, the network layer of the OSI model. So basically from that physical connectivity of bits, up to the boxes that are switching frames, and on up to the boxes that are routing packets. And I guess you've established yourself as kind of an expert in the area of application networking. So when you say that to someone that's just been in the data center, in the weeds with physical network gear and BGP connecting ISPs and stuff like that, and you say application networking, they're going to say, what do you mean? Can you take your best shot at defining to the audience what application networking is?
Marino:Absolutely. I actually want to go back a little bit before I even answer that one question, because a lot of what I do today is heavily influenced by a lot of what I used to do, especially in the layer one to three era. Now, when I first started getting into physical networking and messing around with switches and routers, we're going way back to, like, 2007-ish, 2008-ish. And that's when I really started to realize, look, when you plug something in, it doesn't just light up, there's actually some logic going on behind the scenes. And, oh, by the way, there's actually a configuration that needs to be loaded in to provide some additional logic. Like, we have to take this switch and compartmentalize it so it's doing different things, and you provide some isolation boundaries there. And when you think about that, you're doing that in service of security, for an application, to provide some QoS, to provide some boundaries, limit the crosstalk. And a lot of those patterns become very prevalent, especially when you work in the data center, when you work in various campus environments. But they become very repeatable, right? And you begin to realize that you're solving part of the problem. Part of the problem is building the road. The other part is knowing what is on that road, and I began to realize that a lot of what went on my roads were virtual machines and physical machines, a combination of both.
Marino:And it all came to a head when I was sitting in a hospital with a tech team and they were trying to design a little mini data center, and it all came down to routing and how routing was working. We're talking about a vSphere environment with standard vSwitches, nothing fancy. There's no special routing here, and in order to route you have to go to something top of rack, you have to leave the actual host, especially if you have a workload that might be adjacent to you on the same node. And I began to wonder: why don't we have anything software-based that can handle this routing? Because we do run routing in software anyways. It's not like this is foreign to us in any way. And I started to dig a little bit more and I stumbled upon this notion of software-defined networking. Software-defined networking was just all this networking in software. We're creating these virtual networks as we need to, on demand, for whatever applications that we need, keyword applications. And I began to realize that the application teams, the people who are building the applications, needed to provide guidance on how these networks should be built.
Marino:We can't just build networks for the sake of building them. We have to be very intentional and purposeful about them. If I want to build a very small, you know, /31 network, there's a reason I'm probably doing so. Obviously, I only want to build a point-to-point network and I only want two points to be able to communicate with each other. And this might be, you know, an occurrence in a WAN environment, whereas in a containerized environment with thousands of microservices, the networking is going to look different. And at the same time, you can't sit there and go plug in a wire every single time for every one of those containers that comes online and then, oh, by the way, disappears five minutes later because of a short-lived action or function.
Marino:So that's what I meant by let's focus on applications for a second, because they are dictating how networks should be formed. And with software-defined networking, you're basically codifying how you build your network: I'm going to build my network, but based off of this set of criteria, off of how this application is defined. But we're still talking about layer three, four. We're not talking about anything immense up at layer seven. Where that layer seven comes in is: wait, I need additional logic. This layer four stuff isn't going to give me what I need.
Marino:And that's where the application bit truly comes in, because we write applications with the mindset that they're going to talk to other applications. How they talk to those applications is through either HTTP or gRPC as protocols, but underneath all of that it's DNS, and then there's a TCP/IP layer right below that. And so when I go out and say that I'm an application network architect of some sorts, I'm not just thinking about layer one to three or layer one to four. I'm also thinking about these services that are making HTTP calls, using a variety of attributes to decide: hey, I'm going to communicate with this resource or that, but I'm not going to communicate with this third resource, because I don't have the right authentication in the header of that HTTP request I'm about to make. That's application networking, and it's so much more advanced than that, and I'm sorry it was such a long way to get there.
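To make that layer-7 decision concrete, here is a minimal sketch in Python. The service name, port, and token are hypothetical illustrations, not anything from the episode: one toy service accepts or rejects a call purely on the Authorization header of the HTTP request, which is the kind of attribute-driven decision Marino is describing.

```python
# Layer-7 "application networking" in miniature: the callee decides whether to
# accept a request based on an HTTP Authorization header.
# The service, port, and token below are hypothetical examples.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_TOKEN = "Bearer demo-token"  # hypothetical shared credential

class OrdersService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Policy lives at layer 7: no valid credential, no conversation.
        if self.headers.get("Authorization") != EXPECTED_TOKEN:
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"order list")

    def log_message(self, *args):  # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 8080), OrdersService)
threading.Thread(target=server.serve_forever, daemon=True).start()

def call(headers):
    req = urllib.request.Request("http://127.0.0.1:8080/orders", headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

print(call({}))                                 # 401: rejected at layer 7
print(call({"Authorization": EXPECTED_TOKEN}))  # 200: allowed
server.shutdown()
```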
Eyvonne:But as a network engineer, I feel like you need to understand layer one to seven, all the way through, to understand application networking. Calling out the evolution, and going back to the thinking behind moving networking to software: that's an evolution that's been happening for 15, 20 years at this point, with different iterations of SDN, if you go back to the Nicira days or even before then. And then there was an iteration of that which was SD-WAN, software-defining the WAN, and I know that you did a lot of work in that space as well. And so this is just the next iteration of how we need to think about how our networks serve applications, because, at the end of the day, the network is there to deliver applications, and we've created artificial barriers in our thinking that we need to evolve, remove, change.
Marino:Yes, you're absolutely correct. I mean, software provides substantially more flexibility in the way we can construct our networks. Hardware, on the other hand, is effectively powering that, and we're only able to get to this level because our CPUs got so much faster. We're able to fit more transistors on that little chip, and we can process a variety of different kinds of things in sub-seconds. And that becomes so much more important, especially when we're trying to consolidate more workloads into fewer devices.
Marino:You know, conserve power, even this notion of having to consolidate things into a few sets of cables. That was a huge thing when I was doing the data center stuff, and it still is. We don't want to have, like, 10 different cables leaving a server, we want two, for redundancy, for high availability, for active-active setups, and it's going to be like 100 gig per server or 200 gig, if you're thinking that way. Right, and it's because of that combination of fast CPUs and so much available bandwidth that we can write software any way we want to and develop the networks we want to. But we also can destroy them very quickly. Destroying is probably the key part too, and that ties heavily into the whole notion of microservices and cloud native and Kubernetes, because Kubernetes, and even other orchestration environments, like to treat objects as very ephemeral. Like, I should be able to toss this away if I don't need it anymore, and it's only because we don't want resources to be changing constantly.
Marino:If we need to change something, we change it elsewhere, in our source of truth, and then that gets honored and recognized in the environment that we run. And that's how we have to adapt networks as well. So a lot of the principles of foundational networking have made their way into cloud native. Think about container networking for a second. If we're building container networks, it's network namespaces that we have to construct, which is straight-up Linux networking, straight-up things we used to do and still do today. There's nothing different here. It's just the level at which we're doing it, macro or micro.
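That "straight-up Linux networking" claim is easy to verify yourself. The sketch below, a rough Python wrapper around iproute2 commands, builds the core of a container network by hand: one network namespace, one veth pair, and a /31 point-to-point address on each end, tying back to the /31 example from earlier. It assumes a Linux host with iproute2 and root privileges; the names ns1, veth0 and veth1 and the 10.0.0.0/31 range are arbitrary choices, not anything from the episode.

```python
# A "container network" by hand: a network namespace plus a veth pair,
# addressed as a /31 point-to-point link. Assumes Linux, iproute2, and root.
# Namespace/interface names and the 10.0.0.0/31 range are arbitrary choices.
import subprocess

def ip(*args):
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(["ip", *args], check=True)

ip("netns", "add", "ns1")                                  # the "container's" namespace
ip("link", "add", "veth0", "type", "veth", "peer", "name", "veth1")
ip("link", "set", "veth1", "netns", "ns1")                 # move one end inside

ip("addr", "add", "10.0.0.0/31", "dev", "veth0")           # /31: exactly two endpoints
ip("link", "set", "veth0", "up")
ip("netns", "exec", "ns1", "ip", "addr", "add", "10.0.0.1/31", "dev", "veth1")
ip("netns", "exec", "ns1", "ip", "link", "set", "veth1", "up")
ip("netns", "exec", "ns1", "ip", "link", "set", "lo", "up")

# Prove the namespace can reach the host side of the link, then clean up.
ip("netns", "exec", "ns1", "ping", "-c", "1", "10.0.0.0")
ip("netns", "del", "ns1")
```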
William:That's a good point, and that was a great sort of foundation, just kind of working your way from what network engineers see as network engineering all the way up to the application side. And one thing that's interesting, just to tack on another question: I feel like the tools and practices for managing this application-based networking are really different from what has been the status quo for network professionals in the past. And I think this is almost where a lot of the uncertainty of wanting to take that next step is, because all the network engineers I know, yeah, BGP, yeah, DNS, yeah, namespaces, yeah, all those things, yes, yes, yes. But when you talk about Git, like, oh hey, you got to do this, or you have to create a pull request, I feel like those are things you pretty much have to know if you're going to get into this space.
William:There's some fundamental tools that you kind of have to know a little bit, and they're not that hard, that's the thing. I think there's some fear and uncertainty there, but if you've gone into the CLI and you've been doing Cisco and Arista and working with all these complicated platforms, using Git is like low-hanging fruit at that point. It is not that hard. Do you see that as well? Or do you have any advice for professionals that are kind of wanting to take that next step to get into the cloud-native space?
Marino:Absolutely. I don't think you have to jump right away into Kubernetes or containers, but there are entry points to leverage cloud-native style operations with a lot of the networking stuff that you do today. So about a year ago I stumbled upon something called Container Lab, which I think the both of you are probably very familiar with, right?
Marino:And it allows you to deploy routers, and various versions of them from various vendors, within a Docker environment. You don't have to really know Docker, you just have to know how to work with Container Lab, and there's plenty of instructions on how to do so. But the purpose behind that is you're actually building desired state for how you want your topologies and networks to look. Now, where would you want to store that desired state? It's configuration; you want to store it somewhere. Do you want to store it on your laptop in Notepad, like I used to many, many years ago? One of the approaches many years ago was, let's load it up into some TFTP server so it can be pulled in. Other approaches use some version control of some sort. The standard today would be to use Git. Store this in Git, whether you're using Bitbucket or GitLab or GitHub. You store it there, and now it becomes a source of truth that multiple individuals on your team and other teams can review and interact with and even provide feedback across. Now, to learn Git is not very difficult in the sense of understanding what you're trying to do. You're effectively trying to create a file and then make sure that file gets uploaded somewhere, right, but you also have this ability to track what modifications you've made to that file. You can also track when someone else has made a modification to that file which might conflict with your modification. And these become important because it prevents overwrites, it prevents clashing, it fosters much more collaboration. So it's more of a collaboration tool and, in a lot of ways, a tool to help you build good practices around saving your documentation in more centralized repositories. Now, once you've gotten past learning Git, I honestly feel, and I'm going through this exercise too, right, learning a little bit of Python or some automation-style programming language really helps you simplify basic tasks. The most rudimentary task I can think of is having to log into a bunch of switches and then creating your VLANs, and this was a pain to do many, many years ago. You could do this very easily today with an API, a REST API, because a lot of devices today offer that up, and then you can use some of the tooling out there. Honestly, it escapes me now because I haven't touched them in a while, but you can use Ansible and Terraform to provision a lot of your infrastructure, and specifically your networking infrastructure. NAPALM is another one that helps you automate and deploy your network infrastructure as well. But Container Lab is a sandbox environment where you can test all of this out, and that's where I encourage network engineers to get started, because I found my path into cloud native through networking, through understanding how a CNI works, how a container network works. And the one notable CNI that provided a lot of familiarity to me was the Cilium CNI, the Cilium Container Networking Interface.
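Before getting to Cilium, that VLAN task is a good illustration of the kind of automation Marino means. The Python sketch below loops over a list of switches and pushes VLAN definitions through an HTTP API instead of manual CLI sessions. The endpoint path, payload shape, addresses, and token are all hypothetical; real platforms (RESTCONF, vendor controllers, and so on) each define their own API, so treat this purely as the shape of the pattern.

```python
# Sketch of "create VLANs via a REST API instead of CLI logins".
# The URL scheme, JSON payload, IPs, and token below are hypothetical; real
# gear (RESTCONF, Arista eAPI, vendor controllers) each define their own API.
# Requires: pip install requests
import requests

SWITCHES = ["10.10.0.1", "10.10.0.2", "10.10.0.3"]   # hypothetical management IPs
VLANS = [{"id": 110, "name": "users"}, {"id": 120, "name": "voice"}]
HEADERS = {"Authorization": "Bearer demo-token", "Content-Type": "application/json"}

for switch in SWITCHES:
    for vlan in VLANS:
        # One HTTP call per VLAN instead of a manual SSH session per switch.
        resp = requests.post(
            f"https://{switch}/api/v1/vlans",  # hypothetical endpoint
            json=vlan,
            headers=HEADERS,
            verify=False,                       # lab only; use real certs in production
            timeout=10,
        )
        resp.raise_for_status()
        print(f"{switch}: VLAN {vlan['id']} ({vlan['name']}) configured")
```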
Marino:Cilium was created by a company called Isovalent, which is now part of Cisco, and what's interesting is a lot of the technology and the backing of that technology is very network-engineering focused. So for a lot of network engineers trying to understand Kubernetes or microservices or how to do this multi-cloud thing, that's a great place to start. Spin up a Kubernetes cluster, deploy Cilium and learn how to work with it, and then it becomes very familiar to you how all of these different pieces fit and how you can tie that back into the physical world, which you can, by the way.
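As a rough starting point for that "spin up a Kubernetes cluster and deploy Cilium" exercise, here is one way to script it from Python, assuming the kind and cilium CLIs are already installed and on your PATH. Flags and config formats drift between versions, so check the current kind and Cilium documentation rather than treating this as authoritative.

```python
# Sketch of a local Cilium lab, assuming the `kind` and `cilium` CLIs are
# installed. Cluster name and node layout are arbitrary; flags may change
# between tool versions.
import subprocess
import tempfile

KIND_CONFIG = """\
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # let Cilium be the CNI instead of kindnet
nodes:
  - role: control-plane
  - role: worker
"""

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(KIND_CONFIG)
    config_path = f.name

run(["kind", "create", "cluster", "--name", "cilium-lab", "--config", config_path])
run(["cilium", "install"])               # install Cilium into the current kube context
run(["cilium", "status", "--wait"])      # block until the agent and operator are ready
run(["cilium", "connectivity", "test"])  # optional end-to-end datapath checks
```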
William:That's great. So Cilium, and I guess, for those maybe uninitiated, one of, or a few of, the technologies that have really been gaining traction in the market, one of which is eBPF, or extended Berkeley Packet Filter, which has basically led to the creation of amazing platforms like Cilium. And I think I was listening to one of your talks where you actually highlight that not everything is like a cloud-native architecture: you have containers, you have to connect workloads with VMs, and then things going into Kubernetes and coming out of Kubernetes. You know, hey, this is the real world.
William:So eBPF was adopted by the Linux Foundation, I think in like 2020 or something, thanks to Isovalent, Meta, I think Netflix was part of that, and a few other companies, and it has evolved beyond being just a packet filter to becoming really like a general-purpose computing machine inside the kernel. And Cilium, which is eBPF-based, is one of these awesome platforms. Something I see you talking a lot about is Cilium and eBPF. What is your experience with getting Cilium, like the control plane, up and running and setting up a brief environment?
Marino:It is so easy. I mean, you could be up in probably about five to 10 minutes. Now, this is for a test environment. If we're talking production, there's so much more to consider, right, and I bring this up because you're going to reach multiple forks in the road: do I go bare-metal Kubernetes, or do I do Kubernetes on VMs, or do I do this as a service? Do I use parts of Cilium? Do I use all of Cilium? And, quite honestly, there's also the Cilium Enterprise portion of it too; not all the features are available in open source.
Marino:I don't know the full list off the top of my head, but it's just a consideration, right, in the lens of what am I trying to do. Do I need more of the BGP functionality, or do I need to lean on just a lot of the magic that eBPF offers within Cilium natively? Now, there are so many considerations here when it comes to production too, because the reality is not everyone's running Kubernetes everywhere. It's not like my entire environment is Kubernetes. You'll run into situations where I've got a little bit of bare metal, I've got a little bit of virtual machines, I'm running native EC2 instances, I'm running ECS, and maybe Cilium isn't the right fit for all of that. It could probably capture 70 to 75% of that.
Marino:But how do you address that remainder? That's where a lot more of the design considerations come in, where you start to get a little bit hacky. Or maybe you begin to use some of the hooks, like the Cilium virtual machine onboarding, or maybe you extend the Cilium fabric into a physical switch, which you can do, and this has been done with VXLAN previously, which I think Cisco does today. VMware's NSX used to do it with a bunch of switches way back when; I don't know if they still can today, I don't know what's going on with them. But in any case, the idea comes down to the fact that getting started with Cilium is easy. Actually running it in production is a different conversation altogether, because it requires so many moving pieces: things like certificate authentication, your identity management, you have to define what your applications are going to look like, and just a bunch of other considerations as well.
Eyvonne:That's an interesting observation. You talk about the intersection of certificates and identity, and one of the things that I see as folks are making transitions, from on-premises infrastructure to cloud-based infrastructure or even more modern constructs, is that you don't have your own separate authentication for your networking constructs; it's a larger holistic system. You have to think about identity, you have to think about certificates, and you have to think about how all those services work together. DNS also is infinitely more complex than it was a decade ago. The network is no longer a wholly independent thing from all of those services and constructs, so I think it's really interesting that you call that out.
William:And speaking of, you know, we've talked briefly about service mesh, and in another talk you said something really interesting that I mentioned before we hit the record button.
William:You made this parallel, I guess, of service mesh being kind of like a VPN, which I think is an absolutely great callout. And I know that this is kind of a Debbie Downer, like, in my experience talking to app folks, because a lot of app folks that I've talked to, they're just like, VPN? No, it's dead, it's old technology. But to kind of give credence to this, I mean, you could easily make the case that these two solve for very different problems. You know, VPN is definitely in the context of connecting end users from the outside to private resources on the inside, where service mesh is more like connecting microservices, containerized workloads or services in the cloud. But do you want to talk through your thought process behind why even draw a parallel between these two?
Marino:Absolutely. So let's talk about VPNs for a second, and let's talk about what we were trying to achieve with them. Now, the first goal was to connect two islands, two unique, remote locations, together so that they can communicate, so you can have resources that can communicate with each other. And the best way to do that was by using branch devices that supported technologies like IPsec or some other similar flavor of VPN technology. Ideally, these two devices at their respective locations would communicate with each other, negotiate a set of protocols, and then all traffic that passed through these devices was effectively encrypted, so any third-party individual that was sniffing those packets wouldn't be able to replay them or understand what was going on or even decrypt the payload. Now, that kind of technology comes with many benefits, but many, many drawbacks as well. You're implementing a layer of encryption, which means additional latency on that payload and communication stream, which also means that the devices have to do both encryption and decryption. We solved that problem with external engines and offloads and all of that cool stuff, but that idea never left us. That idea, to be able to create a virtual private network of sorts, never really left us, because we saw it come up again with SD-WAN. We saw it widely used with SD-WAN, and we were trying to achieve the same goals, except for thousands of branches, for them to be able to connect to each other and communicate across each other, if required, with the right policy in place. And, you know, centralizing your management plane while distributing your data plane is effectively what the goal was with SD-WAN.
Marino:Now, while this is all happening, you go off to the side here and focus in on the microservices world, and specifically what services are trying to do with each other, and the common challenges that came up, probably about five years ago, were: how do we see what a service is doing, and how do we provide some security around it?
Marino:And then, how do we make sure that there's always connectivity to that service? Kind of similar to what a VPN does in a lot of ways, because with those devices you're still capturing some telemetry, you're still able to track connections and understand the source, destination, flow and several other bits as well. That's exactly what a service mesh is doing, except at a smaller scale, and it looks different. Now let me describe to you what a service mesh might look like. You have a control plane that's deployed somewhere, and then you have data planes that are also deployed alongside workloads, alongside services. Now these data planes are communicating with other data planes using a protocol called mTLS, but that mTLS protocol is simply a tunneling mechanism.
William:What is a VPN?
Marino:A tunneling mechanism between two devices. What did we do with clients? We actually did the same thing. We had VPN clients that would connect into a head-end device and then allow us to connect into the network, which is exactly what a service mesh is doing. Now there are some additional capabilities that service meshes offer, like resiliency capabilities, load balancing and whatnot, and it's just taking in some additional networking technologies and paradigms and just bringing them into a more consolidated platform within, say, Kubernetes.
Marino:Now, when you think about when a service has to communicate with another service in two distinct locations, what happens? There has to be a tunnel that's formed. The same situation happens with VPNs. Whenever there is one client or one server that needs to call out to another server, there is a connection that has to be formed. It's not like the tunnel is always on. The tunnel is just on demand and will come online for that TCP session and then will destroy itself afterwards, which is exactly how mTLS operates.
Marino:So network engineers are listening to this conversation and wondering, like, why the hell would I use a service mesh? Well, it's where you're using it. You're not using it to solve the branch location problem. You're using it to solve a similar problem, treating Kubernetes as your branch, and you're allowing Kubernetes clusters to communicate across each other, with the services and applications behind them, to do the exact same thing, but over a protected laneway, a secured laneway, much like a VPN is offering. So the step up to understand how a service mesh works, I've just explained it to you; to learn how to get it started and get going with it is not much more work either. I think it's just a matter of jumping in, understanding how Kubernetes works and understanding how the workloads are provisioned and deployed. It gets a little bit more complex there too, but fundamentally they're solving similar problems, just at different layers.
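To ground the "mTLS is just a tunneling mechanism" point, here is a minimal Python sketch of mutual TLS using only the standard library: both sides present a certificate, both sides verify the other, and the application bytes travel through the resulting encrypted channel. It assumes you have already generated ca.crt, server.crt/server.key and client.crt/client.key (for example with openssl), with the server certificate issued for "localhost"; the file names and port are placeholders.

```python
# Mutual TLS as a tunnel: each side proves its identity with a certificate,
# then application data flows over the encrypted channel, much like a VPN.
# Assumes ca.crt, server.crt/.key and client.crt/.key already exist and the
# server certificate is issued for "localhost"; paths and port are placeholders.
import socket
import ssl
import threading
import time

HOST, PORT = "127.0.0.1", 8443

def server():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")
    ctx.load_verify_locations("ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED  # the "mutual" part: the client must prove itself too
    with socket.create_server((HOST, PORT)) as sock, \
         ctx.wrap_socket(sock, server_side=True) as tls:
        conn, _ = tls.accept()
        with conn:
            print("server received:", conn.recv(1024).decode())
            conn.sendall(b"hello from the other data plane")

def client():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("ca.crt")              # verify the server's certificate
    ctx.load_cert_chain("client.crt", "client.key")  # and present our own
    with socket.create_connection((HOST, PORT)) as sock, \
         ctx.wrap_socket(sock, server_hostname="localhost") as tls:
        tls.sendall(b"hello over the tunnel")
        print("client received:", tls.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
time.sleep(0.5)  # crude wait for the listener to come up; fine for a sketch
client()
t.join()
```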
Eyvonne:That's an incredible comparison and one I think a lot of people haven't made, but when I think about the history of networking technology we talked about this before we started recording that there's always been tunneling, whether it's GRE, other types of tunneling that have happened in networking, since its inception, really, and VPNs added a layer of encryption over that tunneling. And now it's how do we get these two endpoints to see one another more directly when there's something more complex in between them, and service mesh is just iterating on that idea? So that was an incredible description, but I know that we're getting close on time. Before we go, though, I would really love to hear more about your work in the CNCF. I'd love to hear about what you're doing there and, yeah, your involvement.
Marino:So before I jump in, I'll provide some additional context around service mesh, and that ties into the CNCF too. So about a year ago, or maybe a year and a half ago, there was this open-source technology called Istio. It's still there, it's doing a lot of great things, but they actually had a pivot point where they decided, hey, we want to have a sidecarless approach to service mesh. In service mesh we deploy something called sidecars, which are simply proxies, or you could almost think mini routers, right beside services, so every service has a mini router. But that creates operational complexity, adds more CPU and memory overhead and is just intrusive in its own nature, so you have to really plan accordingly. Whereas a sidecarless approach, which has been offered before in different formats, in different ways, is something that Istio decided to offer. They called it Ambient Mesh, and I'm leading up to this because you mentioned GRE. Right, you brought up GRE, but within Ambient Mesh they have these artifacts called ztunnels. What ztunnels run is effectively this protocol called HBONE. What is HBONE? It is basically VXLAN. I bring that up for network engineers out there trying to understand tunneling. Well, that is a perfect example of how tunneling technologies like VXLAN are used just under the hood. We just don't really see it.
Marino:Now, how does that tie into the CNCF and the work that I've been doing? So for a long time I tried my best to really help network engineers understand where they could leverage and use and begin to see the power of service mesh. It's not a one-size-fits-all, but it could solve a variety of problems. But also I needed to help them step up a little bit more so they can understand Kubernetes primitives, Kubernetes networking and whatnot. So over the last, I'd say, three years, I developed a network foundations course that focuses in on Kubernetes networking and how to get from "I'm a network engineer" to working with networking inside of containers and whatnot, along with a lot of advocacy around just networking in general, right: how to use Cilium, how to use Istio, how to work with service mesh technologies, how does that tie into multi-cloud as well. And a lot of that was through getting on stage and just sharing a lot of my previous history and stories around how I used to do networking things and how I do them today, and it's been very powerful because that generates interesting side conversations.
Marino:Hey, I'm trying to solve this problem, but I didn't realize that we could solve this with Tailscale. Absolutely you can, because you don't want to sit there managing your control plane for VPN tunneling. Just use this SaaS service, especially if this is all you need and you begin to realize that there are so many ways to solve the same problem. It goes back to what I said much, much earlier, before we started recording. It's all about politics and the decisions that are made at the top level.
William:The last question I had about service mesh, and this is something I get asked all the time actually: when is it the right time to look at service mesh? Is it a level of complexity, or what? Do you have any ideas on that?
Marino:I have personal opinions, but I also have just industry views. So let me just stick to the industry views right now, because, personal opinion, you know what, there are so many different ways this can go. What I will say is that there has been an overemphasis on service mesh for probably a great amount of time. When Kubernetes launched, about a year later service mesh came out, and a lot of people that wanted to adopt it saw so many more challenges consuming it, managing the complexity of it, managing upgrades, lifecycling, and there have been a lot of learnings along the way. Autotrader is a perfect example of one of the first out there using service mesh, but also one of the first to experience a lot of the pains of it.
Marino:Now, five years later, service mesh is very mature, but we're building things very differently. We're not building on top of Kubernetes as much as we thought we would. We thought we'd be doing everything on top of Kubernetes, where service mesh would be the greatest fit, and we're beginning to realize that, while that is partially true, there are a million more workloads out there that don't require a mesh. And I say this because when you think about gateways, API gateways, for example, we could spend an entire conversation on API gateways. They fundamentally do the same thing as service meshes do, but API gateways are also like load balancers. They are load balancers, in fact. The networking world has seen this pattern so many times before with application load balancing. There's nothing different here. It's just a question of who actually uses this technology, and it's more so developers.
Marino:What I've noticed is developers just want to build. They don't want to sit there and build for microservices, they just want to build code that works, and so we can't force them into a microservices pattern. We can only provide them the right guardrails, the right pieces, the right environment where those applications can run but still have things like TLS. We can still make sure we terminate connections appropriately, or load balance them, or circuit break them, but we shouldn't have to depend on having Kubernetes. We shouldn't have to force ourselves into a mesh. Now, if folks are building on top of Kubernetes, I think mesh is a great technology for them to consume, because it's probably the easiest way to handle things like east-west routing, implement things like TLS and authentication as needed, and just scale appropriately. But if you're outside of Kubernetes, you'd probably want to consider other technologies, is my opinion. And Kubernetes, I should really say, and containerization, is probably like 5% to 10% of what's actually out there. There are so many other kinds of workloads.
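One of those guardrails, circuit breaking, is easy to picture with a toy example. The Python sketch below is a simplified client-side circuit breaker of the kind a gateway or mesh gives you out of the box: after a few consecutive failures it stops calling the upstream for a cooling-off period instead of piling on. The thresholds, timings and names are arbitrary illustration values, not a recommendation or anyone's actual implementation.

```python
# Toy client-side circuit breaker: stop calling a failing upstream for a while
# instead of hammering it. Thresholds and timings are arbitrary examples.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=10.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: upstream considered unhealthy")
            # Half-open: let one request through to probe the upstream.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                          # a success resets the count
        return result

# Usage: wrap any flaky call (an HTTP request, an RPC) with breaker.call.
breaker = CircuitBreaker()

def flaky():
    raise ConnectionError("upstream down")

for _ in range(5):
    try:
        breaker.call(flaky)
    except Exception as err:
        print(type(err).__name__, "-", err)
```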
William:Gotcha, and sorry for derailing that. Back to Eyvonne's question, I'm curious as well about your work with the CNCF. The CNCF is just doing great things right now, and Cilium's part of the CNCF. So yeah, what is it like to be an ambassador, and what are you working on these days?
Marino:So, you know what, there isn't a specific set of tasks you're expected to do as an ambassador. I just keep doing the same things that I'm doing anyways. I'll still write tweets about, you know, technologies, I'll still host Twitter Spaces, and if you want to hop on, let me know, I'm more than happy to host you both. And I think, for me, that's enough. At the same time, I'll participate in the local meetups that we have here and help organize them, and that's how you give back to the community, because I've been in tech for 20-plus years now and I've learned a lot. I've had a lot of people that have mentored me, given me so many different directions and pieces of advice that have gotten me to the point that I'm able to talk to the both of you, and that's massive appreciation, by the way, for that. But it's also time to give back where you can, and this is one of the best ways to do so, by providing paths forward for individuals that are trying to navigate early on in their career.
Marino:Where do I want to go? Should I focus on Kubernetes? Sure, but so many people are as well. Why not focus in on maybe a little bit more of the networking stuff that's out there, specifically proxies, service meshes, CNIs, maybe some of the automation behind that or the GitOps behind that as well? And that's generally where I find myself. I don't need to be the one that's actively contributing to code or to documentation, because we have plenty of people that can do that, but we don't have enough people out there talking about the benefits of this technology, or why it's useful, or how to use it, or how to get started even, and breaking down those barriers. That's what I'm here to do.
Eyvonne:So if there's somebody out there who has an interest, they've seen the CNCF, they're interested, they're curious, they want to get more involved, what would you recommend they do?
Marino:Now, if it's within your affordability range, I always recommend a KubeCon. KubeCons are probably the most immersive experiences to really get your career and your mindset jumpstarted onto what's going on in the CNCF. It's kind of what did it for me, actually. I went to my very first KubeCon in 2019 in San Diego. It was phenomenal, sorry for the background noise, it was phenomenal. And what I began to realize is how immersive the community is, how diverse the community is and how engaging they are.
Marino:Kubecon has come a long way since then. It's a lot more commercialized and it's just more oriented towards vendors, obviously because of how big it is, but there's a lot of the community element there. So, if you can go to a KubeCon, if you cannot, try to go to a local CNCF event like a KCDs or even a KubeDays, and if that's not within your availability, try to go to maybe even a CNCF meetup. There's probably a local chapter very close by and most meetups happen quarterly or even monthly. Try to go to one of those so you can get a taste of what the CNCF is like, and usually those sessions or those meetups give you the opportunity to see how other speakers talk about their technology even gives you insight as to what's going on as a whole in the program, like from updates to Kubernetes to how to be a CNCF ambassador, and usually you have ambassadors running those meetups as well.
William:Awesome. I know I want to be mindful of your time because I think you have to jump now, but do you want to tell the audience where to find you? And I think we need to do a part two, because there's so much more we could dig into, so maybe later on, maybe next year or sometime.
Marino:Yeah, I'd be happy to. If you want to find me, I am absolutely on Twitter. My handle is very unique: in some ways it's virtualized6ix, I use the number 6 to kind of complete it. Maybe Eyvonne or William can share that in the show notes later on. But I'm also on LinkedIn, so if you want to connect with me there and you don't want to be on Twitter, that's totally fine too. That's where I tend to hang around most, either LinkedIn or Twitter. I do occasionally make YouTube videos and live streams, but honestly, I don't have the time and I just prefer to invest my time gaming or doing other things at times. But anyways, it's been a pleasure being on the show. Thank you so much, thank you both. I really do appreciate it and hope to be back again.