The Benefits of Mocking Clouds Locally with Waldemar Hummer
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Until a bit over a year ago or so, I had a loud and some would say fairly obnoxious opinion around the futility of mocking cloud services locally. This is not to be confused with mocking cloud services on the internet, which is what I do in lieu of having a real personality. And then one day I stopped espousing that opinion, or frankly, any opinion at all. And I’m glad to be able to talk at long last about why that is. My guest today is Waldemar Hummer, CTO and co-founder at LocalStack. Waldemar, it is great to talk to you.
Waldemar: Hey, Corey. It’s so great to be on the show. Thank you so much for having me. We’re big fans of what you do at The Duckbill Group and Last Week in AWS. So really, you know, glad to be here with you today and have this conversation.
Corey: It is not uncommon for me to have strong opinions that I espouse—politely, to be clear; I’ll make fun of companies and not people as a general rule—but sometimes I find that I’ve not seen the full picture and I no longer stand by an opinion I once held. And you’re one of my favorite examples of this because, over the course of a 45-minute call with you and one of your business partners, I went from, “What you’re doing is a hilarious misstep and will never work,” to, “Okay, and do you have room for another investor?” And in the interest of full disclosure, the answer to that was yes, and I became one of your angel investors. It’s not exactly common for me to do that kind of a hard pivot. And I kind of suspect I’m not the only person who currently holds the opinion that I used to hold, so let’s talk a little bit about that. At the very beginning: what is LocalStack, and what would you say it is that you folks do?
Waldemar: So LocalStack, in a nutshell, is a cloud emulator that runs on your local machine. It’s basically like a sandbox environment where you can develop your applications locally. We currently provide a range of around 60 to 70 services, things like Lambda functions, DynamoDB, SQS, like, all the major AWS services. And to your point, it is indeed a pretty large undertaking to actually implement the cloud and run it locally, but with the right approach, it actually turns out that it is feasible and possible, and we’ve demonstrated this with LocalStack. And I’m glad that we’ve convinced you to think of it that way as well.
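To make that concrete: LocalStack exposes all of its emulated services on a single local edge endpoint, port 4566 by default, so a standard AWS SDK client only needs its endpoint URL redirected. A minimal sketch with Python and boto3, assuming LocalStack is already running locally (for example, via docker run -p 4566:4566 localstack/localstack); the bucket name and key are arbitrary:

```python
import boto3

# Point a standard boto3 client at LocalStack's default edge endpoint.
# LocalStack accepts dummy credentials; "test"/"test" is conventional.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# The same API calls you would make against real AWS now run locally.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from LocalStack")
print(s3.list_objects_v2(Bucket="demo-bucket")["Contents"][0]["Key"])
```

Nothing beyond the endpoint_url argument is LocalStack-specific, which is the point of the emulator.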
Corey: A couple of points that you made during that early conversation really stuck with me. The first is, “Yeah, AWS has two, no, three, no, four hundred different service offerings. But look at your customer base. How many of those services are customers using in any real depth? And of those services, yeah, the APIs are vast, and very much a sprawling pile of nonsense, but how many of those esoteric features are those folks actually using?” That was half of the argument that won me over.
The other half was, “Imagine that you’re an enormous company that’s an insurance company or a bank. And this year, you’re hiring 5,000 brand new developers, fresh out of school. Two to three thousand of those developers will still be working here in about a year, as the rest wind up either progressing in other directions, not completing internships, or going back to school after internships, or leaving for a variety of reasons. So, you have that many people that you need to teach how to use cloud in the context that we use cloud, combined with the question of how do you make sure that one of them doesn’t make a fun mistake that winds up bankrupting the entire company with a surprise AWS bill?” And those two things combined turned me from, “What you’re doing is ridiculous,” to, “Oh, my God. You’re absolutely right.”
And since then, I’ve encountered you in a number of my client environments. You were absolutely right. This is something that resonates deeply and profoundly with larger enterprise customers in particular, but also folks who just don’t want to wind up being beholden to, every time they deploy anything to test something out, “Yay, I get to spend more money on AWS services.”
Waldemar: Yeah, totally. That’s spot on. So, to your first point, we definitely have a core set of services that most people are using. So, things like Lambda, DynamoDB, SQS, like, the core serverless kind of APIs. And then there’s kind of a long tail of more exotic services that we support these days, things like QLDB, the Quantum Ledger Database, or, you know, Managed Streaming for Kafka.
But like, certainly, like, the core 15, 20 services are the ones that are really most used by the majority of people. And then in our pro offering we also have some very, sort of, advanced services for different use cases. So, that’s to your first point.
And the second point is, yeah, totally spot on. So, LocalStack, like, really enables you to experiment in the sandbox. So, we see it as both an experimentation and a development environment, where you don’t need to think about cloud costs. And this, I guess, will be very close to your heart in the work that you’re doing: the costs become really predictable as well, right? Because in the cloud (you know, I worked at different companies before doing LocalStack where we were using AWS resources), you can end up in a situation where overnight, you accumulate, you know, hundreds of thousands of dollars of AWS bill because you’ve turned on a certain feature, or some, you know, connectivity into some VPC or networking configuration that just turns out to be costly.
Also, one more thing that is worth mentioning: we want to encourage, like, frequent testing, and a lot of the cloud’s billing and cost structure is focused around, for example, hourly billing of resources, right? And if you have a test that just spins up resources that run for a couple of minutes, you still end up paying for the entire hour. With LocalStack, that really brings down the cloud bills significantly, because you can test frequently, the cycles become much faster, and it’s also, again, more efficient and more cost-effective.
Corey: There’s something useful to be said for, “Well, how do I make sure that I turn off resources when I’m done?” In cloud, it’s a bit of a game of guess-and-check. And you turn off things you think are there and you wait a few days and you check the bill again, and you go and turn more things off, and the cycle repeats. Or alternately, wait for the end of the month and wonder in perpetuity why you’re being billed 48 cents a month, and not be clear on why. Restarting the laptop is a lot more straightforward.
I also want to call out some of my own bias on this where I used to be a big believer in being able to build and deploy and iterate on things locally because well, what happens when I’m in a plane with terrible WiFi? Well, in the before times, I flew an awful lot and was writing a fair bit of, well, cloudy nonsense and I still never found that to be a particular blocker on most of what I was doing. So, it always felt a little bit precious to me when people were talking about, well, what if I can’t access the internet to wind up building and deploying these things? It’s now 2023. How often does that really happen? But is that a use case that you see a lot of?
Waldemar: It’s definitely a fair point. And probably, like, 95% of cloud development these days is done in a high-bandwidth environment, maybe some corporate network where you have really fast internet access. But that’s only a subset, I guess, of the world out there, right? So, there might be situations where, you know, you may have bad connectivity. Also, maybe you live in a region—or maybe you’re traveling, even, right? So, there are more and more people who are just, “digital nomads,” quote-unquote, right, who just like to work in remote places.
Corey: You’re absolutely right. My bias is that I live in San Francisco. I have symmetric gigabit internet at home. There’s not a lot of scenarios in my day-to-day life—except when I’m, you know, on the train or the bus traveling through the city—because thank you, Verizon—where I have impeded connectivity.
Waldemar: Right. Yeah, totally. And I think the other aspect of this is that developers just like to have things locally, right, because it gives them the feeling of, you know, better control over the code, like, being able to integrate into their IDEs, setting breakpoints, having these quick cycles of iteration. And again, this is something where there’s more and more tooling coming up in the cloud ecosystem, but it’s still inherently a remote execution that just, you know, takes the round trip of uploading your code, deploying, and so on, and that’s just basically the pain point that we’re addressing with LocalStack.
Corey: One thing that did surprise me as well was discovering that there was a lot more appetite for this sort of thing in enterprise-scale environments. I mean, some of the reference customers that you have on your website include divisions of the UK Government and 3M—you know, the Post-It note people—as well as a number of other very large environments. And at first, that didn’t make a whole lot of sense to me, but then it suddenly made an awful lot of sense because it seems—and please correct me if I’m wrong—that in order to use something like this at scale, and in a way where the administration of it isn’t more trouble than it’s worth, you need to progress past a certain point of scale. An individual developer on their side project is likely just going to iterate against AWS itself, whereas a team of thousands of developers might not want to be doing that because they almost certainly have their own workflows that make that process high friction.
Waldemar: Yeah, totally. So, what we see a lot, especially in larger enterprises, is dedicated developer experience teams, whose main job is to really set up a workflow and environment where developers can be most productive. And this can be, you know, on one side, like, setting up automated pipelines, provisioning maybe AWS sandbox and test accounts. And for some of these teams, when we introduce LocalStack, it’s really a game-changer because everything becomes much more decoupled and, like, you know, distributed. You can basically configure your CI pipeline to just, you know, spin up the container, run your tests, and tear it down again afterwards. So, you know, there are fewer dependencies.
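A sketch of the kind of CI test Waldemar is describing, assuming the pipeline has already started a LocalStack container on port 4566 (for example, as a service container in the CI job); the queue name and message are illustrative:

```python
import boto3
import pytest

LOCALSTACK_URL = "http://localhost:4566"  # edge endpoint of the CI-started container


@pytest.fixture
def sqs():
    # Dummy credentials; LocalStack does not validate them.
    return boto3.client(
        "sqs",
        endpoint_url=LOCALSTACK_URL,
        aws_access_key_id="test",
        aws_secret_access_key="test",
        region_name="us-east-1",
    )


def test_message_round_trip(sqs):
    # Provision a real queue in the emulator, exercise it, and clean up.
    queue_url = sqs.create_queue(QueueName="ci-demo-queue")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody="order-created")
    received = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    assert received["Messages"][0]["Body"] == "order-created"
    sqs.delete_queue(QueueUrl=queue_url)
```

No AWS account, IAM approval, or network egress is involved; when the job finishes, the container, and all of its state, goes away.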
And also, one aspect to consider is the aspect of cloud approvals. A lot of companies that we work with have, you know, very stringent processes around even getting access to the cloud. Some SRE team needs to enable their IAM permissions, and so on. With LocalStack, you can just get started from day one, get productive, and start testing from the local machine. So, I think those are patterns that we see a lot, especially in larger enterprise environments as well, where, you know, there might be some regulatory barriers and just, you know, process-wise steps as well.
Corey: When I started playing with LocalStack myself, one of the things that I found disturbingly irritating is, there’s a lot that AWS gets largely right with its AWS command-line utility. You can stuff a whole bunch of different options into the config for different profiles, and all the other tools that I use mostly wind up respecting that config. The few that extend it add custom lines to it, but everything else is mostly well-behaved and ignores the things it doesn’t understand. But there is no facility that lets you say, “For this particular profile, use this endpoint for AWS service calls instead of the normal ones in public regions.” In fact, to do that, you effectively have to pass specific endpoint URLs as arguments, and I believe the syntax on that is not globally consistent between different services.
It just feels like a living nightmare. At first, I was annoyed that you folks wound up having to ship your own command-line utility to wind up interfacing with this. Like, why don’t you just add a profile? And then I tried it myself and, oh, I’m not the only person who knows how this stuff works that has ever looked at this and had that idea. No, it’s because AWS is just unfortunate in that respect.
Waldemar: That is a very good point. And you’re touching upon one of the major pain points that we have, frankly, with the ecosystem. So, there are some pull requests against the AWS open-source repositories for the SDKs and various other tools, where folks—not only LocalStack, but other folks in the community—have asked for introducing, for example, an AWS endpoint URL environment variable. These proposals, unfortunately, were never merged. So, it would definitely make our lives a whole lot easier, but so far, we basically have to maintain these wrapper scripts, basically awslocal and cdklocal, which just, you know, point the client to the local endpoints. It’s a good workaround for now, but I would assume and hope that the world’s going to change in the upcoming years.
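What those wrapper scripts do is conceptually simple. A minimal sketch of the pattern in Python, where the AWS_ENDPOINT_URL variable name mirrors what those community pull requests proposed; at the time of this conversation it was not an official SDK feature, so treat it as illustrative:

```python
import os

import boto3


def make_client(service: str, **kwargs):
    """Create a boto3 client, redirecting it to a custom endpoint if one is set."""
    endpoint = os.environ.get("AWS_ENDPOINT_URL")  # hypothetical override variable
    if endpoint:
        kwargs.setdefault("endpoint_url", endpoint)
    return boto3.client(service, **kwargs)


# AWS_ENDPOINT_URL=http://localhost:4566 python app.py  -> talks to LocalStack
# (variable unset)                                       -> talks to real AWS
dynamodb = make_client("dynamodb", region_name="us-east-1")
print(dynamodb.list_tables()["TableNames"])
```

The whole trick is that one keyword argument; roughly speaking, what awslocal and cdklocal do amounts to injecting it on the user’s behalf.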
Corey: I really hope so because everything else I can think of is just bad. The idea of building a custom wrapper around the AWS command-line utility that winds up checking the profile section, and oh, if this profile is that one, call out to this tool, otherwise it just becomes a pass-through. That has security implications that aren’t necessarily terrific, you know, in large enterprise companies that care a lot about security. Yeah, pretend to be a binary you’re not is usually the kind of thing that makes people sad when security politely kicks their door in.
Waldemar: Yeah, we actually have pretty, like, big hopes for the v3 wave of the AWS SDKs, because there is some restructuring happening with the endpoint resolution. And also, you can, in your profile, by now have, you know, special resolvers for endpoints. But still, the case of just pointing all the SDKs and the CLI to a custom endpoint is just not yet resolved. And this is, frankly, quite disappointing, actually.
Corey: While we’re complaining about the CLI, I’ll throw in one of my recurring issues with it. I would love for it to adopt the Linux slash Unix paradigm of having a config.d directory that you can reference from within the primary config file, where any file within that directory in the proper syntax winds up getting adopted into what becomes a giant composable config file, generated dynamically. The reason being, I could have entire lists of profiles in separate files that I could then wind up dropping in and out on a client-by-client basis, so I don’t inadvertently expose who some of my clients are, in the event that winds up being part of the way that they have named their AWS accounts.
That is one of those things I would love but it feels like it’s not a common enough use case for there to be a whole lot of traction around it. And I guess some people would make a fair point if they were to say that the AWS CLI is the most widely deployed AWS open-source project, even though all it does is give money to AWS more efficiently.
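Corey’s config.d idea is straightforward to sketch outside the CLI itself: a small script that concatenates per-client profile fragments into the single config file the AWS CLI actually reads. The directory layout and file extension here are hypothetical, not an AWS CLI feature:

```python
from pathlib import Path

CONFIG_DIR = Path.home() / ".aws" / "config.d"  # hypothetical fragment directory
OUTPUT = Path.home() / ".aws" / "config"        # the file the AWS CLI reads


def compose_config() -> None:
    # Concatenate every fragment in sorted order into one generated config,
    # so client-specific profile files can be dropped in or removed at will.
    # Note: this overwrites ~/.aws/config, so it assumes that file is generated.
    fragments = sorted(CONFIG_DIR.glob("*.conf"))
    OUTPUT.write_text("\n\n".join(f.read_text().strip() for f in fragments) + "\n")


if __name__ == "__main__":
    compose_config()
```

Dropping a fragment in and re-running the script adds those profiles; deleting it removes them, which keeps client-specific profiles out of any file you might end up sharing.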
Waldemar: Yeah. Great point. Yeah, I think, like, having some way to customize and, like, merge or mangle your configurations in an easier fashion would be super useful. I guess it might be a slippery slope to getting, you know, into something like, I don’t know, Helm for EKS, and, like, really, you know, having to maintain a whole templating language for these configs. But I certainly agree with you: just, you know, at least having plug-in points for being able to customize the behavior of the SDKs and CLIs would be extremely helpful and valuable.
Corey: This is not—unfortunately—my first outing with the idea of trying to have AWS APIs done locally. In fact, almost a decade ago now, I did a build-out at a very large company of a… well, I would say that the build-out was not itself very large—it was about 300 nodes—that were all running Eucalyptus, which before it died on the vine, was imagined as a way of just emulating AWS APIs locally—done in Java, as I recall—and exposing local resources in ways that comported with how AWS did things. So, the idea being that you could write configuration to deploy any infrastructure you wanted in AWS, but also treat your local data center the same way. That idea unfortunately did not survive in the marketplace, which is kind of a shame, on some level. What was it that inspired you folks to wind up building this with an eye towards local development rather than run this as a private cloud in your data center instead?
Waldemar: Yeah, very interesting. And I do also have some experience [unintelligible 00:16:29] from my past university days with Eucalyptus, and OpenStack also, you know, running some workloads in an on-prem cluster. I think the main difference is, first of all, that these systems were extremely hard, notoriously hard, to set up and maintain, right? So, lots of moving parts: you had your image server, your compute system, and then your messaging subsystems. Lots of moving parts, and we wanted to have everything basically much more monolithic and in a single container.
And Docker really provides a great platform for us, which is: create everything in a single container, spin it up locally, make it very lightweight and easy to use. But I think in the really first days of LocalStack, the idea actually came from a use case of somebody on our team. Back then, I was working at Atlassian in the data engineering team, and we had folks on the team who were commuting to work on the train. And it was literally this use case that you mentioned before, about being able to work basically offline on your commute. And this is kind of where the first lines of code were written, and then kind of the idea evolved from there.
We put it out into open-source, and then, kind of, it was growing over the years. But it really started not as an on-prem, like, heavyweight server, but really as a lightweight system that is easily portable across different systems as well.
Corey: That is a good question. Very often, when I’m using various tools that are aimed at development use cases, it is very clear that one particular operating system is invariably going to be the first-class citizen and everything else is a best effort. Ehh, it might work; it might not. Does LocalStack feel that way? And if so, what’s the operating system that you want to be on?
Waldemar: I would say we definitely work best on Mac OS and Linux. It also works really well on Windows, but I think given that some of our tooling in the ecosystem is also pretty much geared towards Unix systems, I think those are the platforms it will work best with. Again, on the other hand, Docker is really a platform that helps us a lot in being compatible across operating systems and also CPU architectures. We have a multi-arch build now for AMD64 and ARM64. So, I think in that sense, we’re pretty broad in terms of the compatibility spectrum.
Corey: I do not have any insight into how the experience goes on Windows, given that I haven’t used that operating system in anger for, wow, 15 years now, but I will say that it’s been top-flight on Mac OS, which is where I spend most of my time, depressed that I’m using it; but for desktop experiences, it seems to work out fairly well. That said, having a focus on Windows seems like it would absolutely be a hard requirement, given that so many developer workstations in very large enterprises tend to skew very Windows-heavy. My hat is off to people who work with Linux and Linux-like systems in environments like that, where even line endings become psychotically challenging. I don’t envy them their problems. And I have nothing but respect for people who can power through it. I never had the patience.
Waldemar: Yeah. Same here. And definitely, I think everybody has their favorite operating system. For me, it’s also been mostly Linux and Mac in the last couple of years. But certainly, we want to be broad in terms of adoption, and working with large enterprises, you know, we want to fit into the existing landscape and environment that people work in. And we solve this with platform abstractions like Docker, for example, as I mentioned, and also, for example, Python, where some more of our tooling lives, which is also pretty nicely supported across platforms. But I do feel the same way as you, like, having been working with Windows for quite some time, especially for development purposes.
Corey: What have you noticed that your customer usage patterns slash requests have been saying about AWS service adoption? I have to imagine that everyone cares whether you can mock S3 effectively. EC2, DynamoDB, probably. SQS, of course. But beyond that very small baseline level of offering, what have you seen surprising demand for as, I guess, customer implementation of more esoteric services continues to climb?
Waldemar: Mm-hm. Yeah, so these days it’s actually pretty [laugh] pretty insane the level of coverage we already have for different services, including some very exotic ones, like QLDB as I mentioned, Kafka. We even have Managed Airflow, for example. I mean, a lot of these services are essentially mostly, like, wrappers around the API. This is essentially also what AWS is doing, right? So, they’re providing an API that basically provisions some underlying resources, some infrastructure.
Some of the more interesting parts, I guess, we’ve seen are in the data, or big data, ecosystem. So, things like Athena and Glue we’ve invested quite a lot of time in, you know, making available in LocalStack, so you can have your maybe CSV files or JSON files in an S3 bucket and you can query them from Athena with SQL, basically, right? And especially for these big data jobs that are very heavyweight on AWS, you can iterate very quickly in LocalStack. So, this is where we’re seeing a lot of adoption recently. And then also, obviously, things like, you know, Lambda and ECS, like, all the serverless and containerized applications, but I guess those are the more mainstream ones.
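A sketch of that Athena workflow, using the standard boto3 Athena calls against a local endpoint. The table name and output location are illustrative, and it assumes the data and table definition already exist in the emulator (Athena emulation sits in LocalStack’s pro offering):

```python
import time

import boto3

# Same dummy-credential pattern as any other LocalStack client.
athena = boto3.client(
    "athena",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# Kick off a SQL query over files sitting in a local S3 bucket.
execution_id = athena.start_query_execution(
    QueryString="SELECT * FROM orders LIMIT 10",
    ResultConfiguration={"OutputLocation": "s3://demo-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query leaves the QUEUED/RUNNING states.
while athena.get_query_execution(QueryExecutionId=execution_id)[
    "QueryExecution"
]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(1)

# Print each returned row; Athena returns cells as VarCharValue strings.
for row in athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]:
    print([col.get("VarCharValue") for col in row["Data"]])
```

The iteration-speed point is visible right in that polling loop: the same loop that can spin for a long while against a heavyweight cloud job typically returns in seconds locally.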
Corey: I imagine you probably get your fair share of requests for things like CloudFormation or CloudFront, where it’s, “This is great, but can you go ahead and add a very lengthy sleep right here, just because it returns way too fast and we don’t want people to get their hopes up when they use the real thing.” On some level, it feels like exact replication of the AWS customer experience isn’t quite in line with what makes sense from a developer productivity point of view.
Waldemar: Yeah, that’s a great point. And I’m sure that, like, a lot of code out there is probably littered with sleep statements that are just tailored to the specific timing in AWS. In fact, we recently opened an issue in the AWS Terraform provider repository to add a configuration option to configure the timings that Terraform is using for resource deployment. So, just as an example, an S3 bucket creation takes, like, more than a minute against [unintelligible 00:22:37] AWS. With LocalStack, it’s a second, basically, right?
And the AWS Terraform provider has these, like, relatively slow cycles of checking whether the bucket has already been created. And we want to make that configurable, to actually reduce the time it takes for local development, right? So, we have an open, sort of, feature request, and we’re probably going to contribute to the Terraform repository. But definitely, I share the sentiment that a lot of the tooling ecosystem is built and tailored and optimized towards the experience against the cloud, which often is just slow, and, you know, that’s what it is, right?
Corey: One thing that I didn’t expect, though in hindsight is blindingly obvious, is your support for a variety of different frameworks and deployment methodologies. I’ve found that it’s relatively straightforward to get up and running with the CDK deploying to LocalStack, for instance. And in hindsight, of course, that’s obvious. When you start out down that path, though, you tend to think—at least I don’t tend to think in that particular way—it’s, “Well, yeah, it’s just going to be a console-like experience, or I wind up doing CloudFormation or Terraform.” But yeah, the world is advancing relatively quickly and it’s nice to see that you are very comfortably keeping pace with that advancement.
Waldemar: Yeah, true. And I guess for us, really, the level of abstraction is sort of increasing. So, you know, once you have a solid foundation with the CloudFormation implementation, you can leverage a lot of tools that are sitting on top of it: the CDK, serverless frameworks. So, CloudFormation is almost becoming, like, the assembly language of the AWS cloud, right, and if you have very solid support for that, a lot of, sort of, tools in the ecosystem will natively be supported on LocalStack. And then, you know, you have things like Terraform, and the Terraform CDK, you know, some of these derived versions of Terraform, which are also very straightforward because you just need to point, you know, the target endpoint to localhost, and then the rest of the deployment loop just works out of the box, essentially.
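As an example of that “assembly language” point, a stock CDK v2 app needs nothing LocalStack-specific in the code at all; the sketch below is plain CDK in Python, and the construct names are arbitrary. Deploying it locally is just a matter of using the community cdklocal wrapper instead of the cdk command:

```python
# app.py - a stock AWS CDK v2 application; nothing here is LocalStack-specific.
from aws_cdk import App, Stack, aws_s3 as s3, aws_sqs as sqs
from constructs import Construct


class DemoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Two ordinary resources, synthesized to CloudFormation as usual.
        s3.Bucket(self, "UploadsBucket")
        sqs.Queue(self, "WorkQueue")


app = App()
DemoStack(app, "DemoStack")
app.synth()
```

Against real AWS you would run cdk bootstrap and cdk deploy; against LocalStack, cdklocal bootstrap and cdklocal deploy. The synthesized CloudFormation template is identical either way, which is why solid CloudFormation emulation buys support for the whole tool family sitting on top of it.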
So, I guess for us, it’s really mostly being able to focus on, like, the core emulation, making sure that we have very high parity with the real services. We put a lot of time and effort into what we call parity testing and snapshot testing. We make sure that our API responses are identical and really the same as they are in AWS. And this really gives us, you know, very strong confidence that a lot of tools in the ecosystem will work out-of-the-box against LocalStack as well.
Corey: I would also like to point out that I’m also a proud LocalStack contributor at this point because at the start of this year, I noticed, ah, in one of the pages, the copyright year was still saying 2022 and not 2023. So, a single-character pull request? Oh, yes, I am on the board now because that is how you ingratiate yourself with an open-source project.
Waldemar: Yeah. Eternal fame to you, and kudos for your contribution. But, [laugh] you know, in all seriousness, we do have quite an active community of contributors. We are an open-source-first project; like, we were born in open-source. We actually—maybe just touching upon this for a second—we use GitHub for our repository, and we use a lot of automation around, you know, pull requests and, you know, service owners.
We also participate in things like Hacktoberfest, which we took part in last year to really encourage contributions from the community, and we also host regular meetups with folks in the community, to really make sure that there’s an active ecosystem where people can contribute and make contributions like the one that you did with the documentation and all that, but also, like, actual features, testing, and, you know, contributions of different levels. So really, kudos and a shout-out to the entire community out there.
Corey: Do you feel that there’s an inherent tension between being an open-source product as well as being a commercial product that is available for sale? I find that a lot of companies feel vaguely uncomfortable with the various trade-offs that they make going down that particular path, but I haven’t seen anyone in the community upset with you folks, and it certainly hasn’t seemed to act as a brake on your enterprise adoption, either.
Waldemar: That is a very good point. So, we’re following an open-source-first model where, you know, the core of the codebase is available in the community version. And then we have pro extensions, which are commercial, and you basically, you know, sign up for a license. We are certainly having a lot of discussions on how to evolve this licensing model going forward, you know, which parts to feed back into the community version of LocalStack. And it’s certainly an ongoing, evolving model as well, but certainly, so far, the support from the community has been great.
And we definitely focus on getting a lot of the innovation that we’re doing back into our open-source repo, and making sure that it’s, like, really not only open-source but also open to contribution, for folks to make contributions. We also integrate with other third-party libraries. We’re built on the shoulders of giants, if I may say so: other open-source projects that are doing great work with emulators. To name just a few, there’s [unintelligible 00:27:33], which is a great project that we sort of use and depend upon. We have certain mocks and emulations, for Kinesis, for example, Kinesis Mock, and a bunch of other tools that we’ve been leveraging over the years, which are really great community efforts out there. And it’s great to see such an active community that’s really making this vision possible, of a truly local emulated cloud that gives the best experience to developers out there.
Corey: So, as of, well, now, when people are listening to this and the episode gets released, v2 of LocalStack is coming out. What are the big differences between LocalStack and now LocalStack 2: Electric Boogaloo, or whatever it is you’re calling the release?
Waldemar: Right. So, we’re super excited to release our v2 version of LocalStack. The planned release date is end of March 2023, so hopefully, we will make that timeline. We did release our first version of LocalStack in July 2022, so it’s been roughly seven months since then, and we try to have a cadence of roughly six to nine months for the major releases. And what you can expect is that we’ve invested a lot of time and effort in the last couple of months, and over the last year, to really make it a very rock-solid experience, with enhancements in the current services, a lot of performance optimizations, and a lot of investment in parity testing.
So, as I mentioned before, parity is really important for us, to make sure that we have high coverage of the different services and that they behave the same way as on AWS. And we’re also putting out an enhanced and completely polished version of our Cloud Pods experience. So, Cloud Pods is a state management mechanism in LocalStack. By default, the state in LocalStack is ephemeral, so when you restart the instance, you basically have a fresh state. But with Cloud Pods, we enable our users to take persistent snapshots of the state, save them to disk or to a server, and easily share them with team members.
And we have a very polished experience with Community Cloud Pods that makes it very easy to share state among team members and with the community. So, those are just some of the highlights of things that we’re going to be putting out in the tool. And we’re super excited to have it done by, you know, end of March. So, stay tuned for the v2 release.
Corey: I am looking forward to seeing how the experience shifts and evolves. I really want to thank you for taking time out of your day to wind up basically humoring me and effectively re-covering ground that you and I covered about a year and a half ago now. If people want to learn more, where should they go?
Waldemar: Yeah. So definitely, our Slack channel is a great way to get in touch with the community, also with the LocalStack team, if you have any technical questions. So, you can find it on our website, I think it’s slack.localstack.cloud.
We also host a Discourse forum. It’s discuss.localstack.cloud, where you can just, you know, make feature requests and participate in the general conversation.
And we do host monthly community meetups. Those are also available on our website. If you sign up, for example, for the newsletter, you will be notified when we have, you know, these webinars. They take about an hour or so, and we often have guest speakers from different companies, people who are doing, you know, cloud development, local cloud development, and just sharing their experiences of how the space is evolving. And we’re always super happy to accept contributions from the community in these meetups as well. And last but not least, our GitHub repository is a great way to file any issues you may have, make feature requests, and just get involved with the project itself.
Corey: And we will, of course, put links to that in the show notes. Thank you so much for taking the time to speak with me today. I appreciate it.
Waldemar: Thank you so much, Corey. It’s been a pleasure. Thanks for having me.
Corey: Waldemar Hummer, CTO and co-founder at LocalStack. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, presumably because your compensation structure requires people to spend ever-increasing amounts of money on AWS services.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.