The Infinite Possibilities of Amazon S3 with Kevin Miller

Kevin Miller, General Manager of Amazon S3, joins Corey to discuss the hard work and technical magic that has gone into S3’s evolution and the charity t-shirt fundraiser Corey is running featuring S3 as the eighth wonder of the world. Kevin explains the vital role testing plays in keeping S3 running and evolving successfully, and the astronomical number of states they must be ready to face at any given time. Kevin also reveals the benefits of Intelligent Tiering, his thoughts on using S3 as a database, and what really excites him about the transformations that are happening as a result of his work at S3.

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is brought to us in part by our friends at Datadog. Datadog is a SaaS monitoring and security platform that enables full-stack observability for modern infrastructure and applications at every scale. Datadog enables teams to see everything: dashboarding, alerting, application performance monitoring, infrastructure monitoring, UX monitoring, security monitoring, dog logos, and log management, in one tightly integrated platform. With 600-plus out-of-the-box integrations with technologies including all major cloud providers, databases, and web servers, Datadog allows you to aggregate all your data into one platform for seamless correlation, allowing teams to troubleshoot and collaborate together in one place, preventing downtime and enhancing performance and reliability. Get started with a free 14-day trial by visiting datadoghq.com/screaminginthecloud, and get a free t-shirt after installing the agent.

Corey: Managing shards. Maintenance windows. Overprovisioning. ElastiCache bills. I know, I know. It’s a spooky season and you’re already shaking. It’s time for caching to be simpler. Momento Serverless Cache lets you forget the backend to focus on good code and great user experiences. With true autoscaling and a pay-per-use pricing model, it makes caching easy. No matter your cloud provider, get going for free at gomomento.co/screaming. That’s GO M-O-M-E-N-T-O dot co slash screaming.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Right now, as I record this, we have just kicked off our annual charity t-shirt fundraiser. This year’s shirt showcases S3 as the eighth wonder of the world. And here to either defend or argue the point—we’re not quite sure yet—is Kevin Miller, AWS’s vice president and general manager for Amazon S3. Kevin, thank you for agreeing to suffer the slings and arrows that are no doubt going to be interpreted, misinterpreted, et cetera, for the next half hour or so.

Kevin: Oh, Corey, thanks for having me. And happy to do that, and really flattered that you’re thinking about S3 in this way. So, more than happy to chat with you.

Corey: It’s absolutely one of those services that is foundational to the cloud. It was the first AWS service that was put into general availability, although the beta folks are going to argue back and forth about no, no, that was SQS instead. I feel like now that Mai-Lan handles both SQS and S3 as part of her portfolio, she is now the final arbiter of that. I’m sure that’s an argument for a future day. But it’s impossible to imagine cloud without S3.

Kevin: I definitely think that’s true. It’s hard to imagine cloud, actually, with many of our foundational services, including SQS, of course, but we are—yes, we were the first generally available service with S3. And pretty happy with our anniversary being Pi Day, 3/14.

Corey: I’m also curious, your own personal trajectory has been not necessarily what folks would expect. You were the general manager of Amazon Glacier, and now you’re the general manager and vice president of S3. So, I’ve got to ask, because there are conflicting reports on this depending upon what angle you look at, are Glacier and S3 the same thing?

Kevin: Yes, I was the general manager for S3 Glacier prior to coming over to S3 proper, and the answer is no, they are not the same thing. We certainly have a number of technologies that we’re able to use on both S3 and Glacier, but there are certainly a number of things that are very distinct about Glacier and give us the ability to hit the ultra-low price points that we do, with Glacier Deep Archive being as low as $1 per terabyte-month. And so, there’s a lot of actual ingenuity up and down the stack, from hardware to software, everywhere in between, to really achieve that with Glacier. But then there’s other spots where S3 and Glacier have very similar needs, and then, of course, today many customers use Glacier through S3 as a storage class in S3, and so that’s a great way to do that. So, there’s definitely a lot of shared code, but certainly, when you get into it, there’s [unintelligible 00:04:59] to both of them.
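
That $1 per terabyte-month figure makes for easy back-of-the-envelope math. A quick sketch using only the rate quoted above (real bills also include retrieval, request, and minimum-duration charges, and pricing varies by region):

```python
# Back-of-the-envelope cost for S3 Glacier Deep Archive storage,
# using the $1 per terabyte-month figure quoted above.
# Retrieval, request, and early-deletion charges are ignored.

DEEP_ARCHIVE_USD_PER_TB_MONTH = 1.00

def deep_archive_storage_cost(terabytes: float, months: int) -> float:
    """Storage-only cost in USD for the given size and duration."""
    return terabytes * months * DEEP_ARCHIVE_USD_PER_TB_MONTH

# Keeping 100 TB of compliance archives for a year:
print(deep_archive_storage_cost(100, 12))  # 1200.0
```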

Corey: I ran a number of obnoxiously detailed financial analyses, and they all came away with, unless you have a very specific very nuanced understanding of your data lifecycle and/or it is less than 30 or 60 days depending upon a variety of different things, the default S3 storage class you should be using for virtually anything is Intelligent Tiering. That is my purely economic analysis of it. Do you agree with that? Disagree with that? And again, I understand that all of these storage classes are like your children, and I am inviting you to tell me which one of them is your favorite, but I’m absolutely prepared to do that.

Kevin: Well, we love Intelligent Tiering because it is very simple; customers are able to automatically save money using Intelligent Tiering for data that’s not being frequently accessed. And actually, since we launched it a few years ago, we’ve already saved customers more than $250 million using Intelligent Tiering. So, I would say today, it is our default recommendation in almost every case. I think the cases where we would recommend another storage class as the primary storage class tend to be specific to the use case, particularly use cases where customers really have a good understanding of the access patterns. And we do see some customers do that: for a certain dataset, they know it’s going to be heavily accessed for a fixed period of time, or the data is actually for archival and will never be accessed, or very rarely if ever accessed, maybe only in an emergency.

And in those kinds of use cases, I think customers are probably best served choosing one of the specific storage classes where they’re, sort of, paying the lower cost from day one. But again, I would say for the vast majority of cases that we see, the data access patterns are unpredictable and customers like the flexibility of being able to very quickly retrieve the data if they decide they need to use it. But in many cases, they’ll save a lot of money as the data is not being accessed, and so, Intelligent Tiering is a great choice for those cases.

Corey: I would take it a step further and say that even when customers believe they have a better understanding of their data flow patterns than Intelligent Tiering does and intend to do a deeper analysis, in practice, I see that they rarely do anything about it. It’s one of those things where they’re like, “Oh, yeah, we’re going to set up our own lifecycle policies real soon now.” Instead, just switch it over to Intelligent Tiering and never think about it again. People’s time is worth so much more than the infrastructure they’re working on in almost every case. It doesn’t seem to make a whole lot of sense to go and do that stuff by hand unless you have a very intentional, very urgent reason to in most cases.
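
The “switch it over and never think about it again” approach boils down to one lifecycle rule. A minimal sketch, assuming a hypothetical bucket and prefix; with boto3 the rule would be applied via `put_bucket_lifecycle_configuration`:

```python
# Sketch: a lifecycle rule that moves objects into INTELLIGENT_TIERING,
# so access-pattern-based tiering is handled automatically from then on.
# The bucket and prefix below are hypothetical placeholders.

def intelligent_tiering_rule(prefix: str) -> dict:
    """Build one lifecycle rule transitioning a prefix to Intelligent-Tiering."""
    return {
        "ID": f"tier-{prefix or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
        ],
    }

lifecycle_config = {"Rules": [intelligent_tiering_rule("logs/")]}

# With boto3 this would be applied as (not executed here):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",
#       LifecycleConfiguration=lifecycle_config,
#   )
```

New uploads can skip the lifecycle hop entirely by setting `StorageClass="INTELLIGENT_TIERING"` on the PUT itself.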

Kevin: Yeah, that’s right. I think I agree with you, Corey. And certainly, that is the recommendation we lead with customers.

Corey: In previous years, our charity t-shirt has focused on other areas of AWS, and one of them was based upon a joke that I’ve been telling for a while now, which is that the best database in the world is Route 53, storing TXT records inside of it. I don’t know if I ever mentioned this to you or not, but the first iteration of that joke centered around S3. The challenge that I had with it is that S3 Select is absolutely a thing where you can query S3 with SQL, which I don’t see people doing anymore because Athena is the easier, more, shall we say, well-articulated version of all of that. And no, no, that joke doesn’t work because it’s actually true. You can use S3 as a database. Does that statement fill you with dread? Regret? Am I misunderstanding something? Or are you effectively running a giant subversive database?

Kevin: Well, I think that certainly when most customers think about a database, they think about a collection of technology that’s applied to a given problem, and so I wouldn’t count S3 as providing the whole range of functionality that would really make up a database. But I think that certainly a lot of the primitives (S3 Select is a great example of a primitive) are available in S3. And we’re looking at adding, you know, additional primitives going forward to make it possible to, you know, build a database around S3. And as you see, other AWS services have done that in many ways. For example, obviously with Amazon Redshift having a lot of capability now to just directly access and use data in S3, and making that super seamless so that you can then run data warehousing type queries on top of S3 and on top of your other datasets.

So, I certainly think it’s a great building block. And one other thing I would actually just say that you may not know, Corey, is that one of the things we’ve been doing a lot more with S3 over the last couple of years is actually working to directly contribute improvements to open-source connector software that uses S3, to automatically make available some of the performance improvements that can be achieved using both the AWS SDK and things like S3 Select. So, we started with a few of those things with Select; you’re going to see more of that coming, most likely. And some of that, again, the idea there is you may not even necessarily know you’re using Select, but when we can identify that it will improve performance, we’re looking to contribute those kinds of improvements, or are already contributing them, directly to those open-source packages. So, one thing I would definitely recommend customers and developers do is keep that software up-to-date, because although it might seem like those are sort of one-and-done software integrations, there’s actually almost continuous improvement now going on around capabilities like that and others we come out with.
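
For reference, the S3 Select capability discussed here is exposed as the `SelectObjectContent` API. A hedged sketch of the request shape for a CSV object; the bucket, key, and column names below are made-up placeholders:

```python
# Sketch: parameters for S3 Select (SelectObjectContent) over a CSV object.
# Only the matching rows, not the whole object, cross the wire.
# Bucket, key, and column names are hypothetical.

def build_select_request(bucket: str, key: str, expression: str) -> dict:
    """Assemble the keyword arguments for a select_object_content call."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": expression,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"JSON": {}},
    }

request = build_select_request(
    "example-bucket",
    "events/2021/orders.csv",
    "SELECT s.order_id, s.total FROM S3Object s WHERE s.total > '100'",
)

# With boto3 (not executed here):
#   response = boto3.client("s3").select_object_content(**request)
#   for event in response["Payload"]:
#       if "Records" in event:
#           print(event["Records"]["Payload"].decode())
```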

Corey: What surprised me is just how broadly S3 has been adopted by a wide variety of different clients’ software packages out there. Back when I was running production environments in anger, I distinctly remember in one Ubuntu environment, we wound up installing a specific package that was designed to teach apt how to retrieve packages and its updates from S3, which was awesome. I don’t see that anymore, just because it seems that it is so easy to do it now, just with the native features that S3 offers, as well as an awful lot of software under the hood has learned to directly recognize S3 as its own thing, and can react accordingly.

Kevin: And just do the right thing. Exactly. No, we certainly see a lot of that. So that’s, you know—I mean, obviously making that simple for end customers to use and achieve what they’re trying to do, that’s the whole goal.

Corey: It’s always odd to me when I’m talking to one of my clients who is looking to understand and optimize their AWS bill to see outliers in either direction when it comes to S3 itself. When they’re driving large S3 bills, as in a majority of their spend, it’s, okay, that is very interesting. Let’s dive into that. But almost more interesting to me is when it’s effectively not being used at all: when, oh, we’re doing everything with EBS volumes or EFS.

And again, those are fine services. I don’t have any particular problem with them anymore, but the problem I have is that the cloud long ago took what amounts to an economic vote. There’s a tax savings for storing data in an object store the way that you—and by extension, most of your competitors—wind up pricing this, versus pricing on a volume basis, where you have to pre-provision things and you don’t get any form of durability that extends beyond the availability zone boundary. It just becomes an awful lot of, “Well, you could do it this way. But it gets really expensive really quickly.”

It just feels wild to me that there is that level of variance between S3 on just a raw storage basis, economically, as well as then just the, frankly, ridiculous levels of durability and availability that you offer on top of that. How did you get there? Was the service just mispriced at the beginning? Like, oh, we dropped a zero and probably should have put that in there somewhere.

Kevin: Well, no, I wouldn’t call it mispriced. I think that S3 came about when we spent a lot of time looking at the architecture for storage systems, knowing that we wanted a system that would provide the durability that comes with having three completely independent data centers, and the elasticity and capability where, you know, customers don’t have to provision the amount of storage they want; they can simply put data and the system keeps growing. And they can also delete data and stop paying for that storage when they’re not using it. And so, just all of that investment and sort of looking at that architecture holistically led us down the path to where we are with S3.

And we’ve definitely talked about this. In fact, in Peter’s keynote at re:Invent last year, we talked a little bit about how the system is designed under the hood, and one of the things you realize is that S3 gets a lot of the benefits that we do just from the overall scale. I think the stat is that at this point more than 10,000 customers have data that’s stored on more than a million hard drives in S3. And the way you get that scale and capability is through massive parallelization. Customers that are, you know, I would say building more traditional architectures typically end up with much more siloed architectures at a relatively small scale overall, with a lot of resources provisioned in sort of small chunks, so you never get to the scale where you can start to take advantage of the whole being more than the sum of the parts.

And so, I think that’s what the recognition was when we started out building S3. And then, of course, we offer that as an API on top of that, where customers can consume whatever they want. That is, I think, where S3, at the scale it operates, is able to do certain things, including on the economics, that are very difficult or even impossible to do at a much smaller scale.

Corey: One of the more egregious clown-shoe statements that I hear from time to time has been when people will come to me and say, “We’ve built a competitor to S3.” And my response is always one of those, “Oh, this should be good.” Because when people say that, they generally tend to be focusing on one or maybe two dimensions where it doesn’t work for a particular use case as well as it could. “Okay, what was your story around why this should be compared to S3?” “Well, it’s an object store. It has full S3 API compatibility.” “Does it really? Because I have to say, there are times where I’m not entirely convinced that S3 itself has full compatibility with the way that its API has been documented.”

And there’s an awful lot of magic that goes into this too. “Okay, great. You’re running an S3 competitor. Great. How many buildings does it live in?” Like, “Well, we have a problem with the s at the end of that word.” It’s, “Okay, great. If it fits on my desk, it is not a viable S3 competitor. If it fits in a single zip code, it is probably not a viable S3 competitor.” Now, can it be an object store? Absolutely. Does it provide a new interface to some existing data someone might have? Sure, why not? But I think that “oh, it’s S3 compatible” is something that gets tossed around far too lightly by folks who don’t really understand what it is that drives S3 and makes it special.

Kevin: Yeah, I mean, I would say certainly, there are a number of other implementations of the S3 API, and frankly we’re flattered that customers, competitors, and others recognize the simplicity of the API and go about implementing it. But to your point, I think there’s a lot more; it’s not just about the API, it’s really about everything surrounding S3: as you mentioned, the fact that the data in S3 is stored in three independent availability zones, all of which are separated by kilometers from each other; the resilience, the automatic failover, and the ability to withstand an unlikely impact to one of those facilities; as well as the scalability, and, you know, the fact that we put a lot of time and effort into making sure that the service continues scaling with our customers’ needs. And so, I think there’s a lot more that goes into what S3 is. Oftentimes a straight-up comparison is purely based on the APIs, and generally a small set of APIs, leaving aside those intangibles, or not intangibles, but all of the ‘-ilities,’ right, the elasticity and the durability and so forth that I just talked about. In addition to all that, certainly what we’re seeing for customers is that as they get into the petabyte, tens-of-petabytes, hundreds-of-petabytes scale, the services that we provide to manage that storage, whether it’s lifecycle and replication, or things like our batch operations to help update and maintain all the storage, become really essential to wrapping their arms around it, as well as visibility, things like Storage Lens to understand: what storage do I have? Who’s using it? How is it being used?

And those are all things that we provide to help customers manage at scale. And certainly, you know, oftentimes when I see claims around S3 compatibility, a lot of those advanced features are nowhere to be seen.

Corey: I also want to call out that a few years ago, Mai-Lan got on stage and talked about how, to my recollection, you folks have effectively rebuilt S3 under the hood into I think it was 235 distinct microservices at the time. There will not be a quiz on numbers later, I’m assuming. But what was wild to me about that is having done that for services that are orders of magnitude less complex, it absolutely is like changing the engine on a car without ever slowing down on the highway. Customers didn’t know that any of this was happening until she got on stage and announced it. That is wild to me. I would have said before this happened that there was no way that would have been possible except it clearly was. I have to ask, how did you do that in the broad sense?

Kevin: Well, it’s true. A lot of the underlying infrastructure that’s been part of S3, both hardware and software, has changed; if someone from S3 in 2006 came and looked at the system today, they would probably be very disoriented in terms of understanding what was there, because so much of it has changed. To answer your question, the long and short of it is a lot of testing. In fact, a lot of novel testing most recently, particularly with the use of formal logic and what we call automated reasoning. It’s also something we’ve talked a fair bit about at re:Invent.

And that is essentially where you prove the correctness of certain algorithms. And we’ve used that to spot some very interesting, one-in-a-trillion type cases that at S3 scale happen regularly, that you have to be ready for; you have to know how the system reacts, even in all those cases. I mean, I think one of our engineers did some calculations that, you know, the number of potential states for S3 sort of exceeds the number of atoms in the universe, or something crazy like that. But yet, using methods like automated reasoning, we can test that state space, we can understand what the system will do, and have a lot of confidence as we begin to swap, you know, pieces of the system.

And of course, nothing at S3 scale happens instantly. I would say that for a typical engineering effort within S3, there’s a certain amount of effort, obviously, in making the change, in writing and testing the new software, but there’s almost an equal amount of time that goes into, okay, what is the process for migrating from System A to System B? And that happens over a timescale of months, if not years, in some cases. And so, there’s just a lot of diligence that goes into not just the new systems, but also the process of, you know, literally, how do I swap that engine on the system. So, you know, it’s a lot of really hard-working engineers spending a lot of time working through these details every day.
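
Automated reasoning as Kevin describes it uses formal proof tooling, but the underlying idea of covering a state space exhaustively rather than sampling it can be illustrated with a toy model. This is purely illustrative, not how S3 is actually verified: enumerate every interleaving of concurrent writes against a last-writer-wins register and confirm they all converge to the same state.

```python
from itertools import permutations

# Toy model: a last-writer-wins register. Each write carries a version
# number and a value; writes may arrive in any order, and the register
# keeps the write with the highest version it has seen.

def apply_writes(order):
    """Apply writes in the given arrival order; return final (version, value)."""
    state = (0, None)
    for version, value in order:
        if version > state[0]:
            state = (version, value)
    return state

def check_convergence(writes):
    """Exhaustively check that EVERY arrival order yields the same state."""
    expected = max(writes)  # the highest-version write should win
    return all(apply_writes(p) == expected for p in permutations(writes))

writes = [(1, "a"), (2, "b"), (3, "c")]
print(check_convergence(writes))  # True: all 6 interleavings agree
```

The point is the quantifier: the check ranges over all interleavings, not a random sample, which is the spirit of proving properties of a state space rather than spot-testing it.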

Corey: I still view S3 through the lens of it is one of the easiest ways in the world to wind up building a static web server because you basically stuff the website files into a bucket and then you check a box. So, it feels on some level though, that it is about as accurate as saying that S3 is a database. It can be used or misused or pressed into service in a whole bunch of different use cases. What have you seen from customers that has, I guess, taught you something you didn’t expect to learn about your own service?

Kevin: Oh, I’d say we have those [laugh] meetings pretty regularly, when customers build their workloads and have unique patterns to them, whether it’s the type of data they’re retrieving or the access pattern on the data. You know, for example, some customers will make heavy use of our ability to do [ranged gets 00:22:47] on files and [unintelligible 00:22:48] objects. And that’s a pretty good capability, but it can be very much dependent on the type of file, right; certain files have structure, as far as, you know, a header or footer, and that data is being accessed in a certain order. Oftentimes, those may also be multi-part objects, and so customers make use of the multi-part features to upload different chunks of a file in parallel. And, you know, also certainly when customers get into things like our batch operations capability, where they can literally write a Lambda function and do what they want, we’ve seen some pretty interesting use cases where customers are running large-scale operations across, you know, billions, sometimes tens of billions of objects, and it can be pretty interesting what they’re able to do with them.

So, for something that is, you know, as simple and basic, in some sense, as a GET and PUT API, just all the capability around it ends up being pretty interesting, as far as how customers apply it and the different workloads they run on it.
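
Both the ranged-GET and multi-part patterns mentioned above reduce to carving an object’s byte range into chunks; S3 expresses a range as an inclusive `bytes=start-end` HTTP `Range` header. A small sketch with arbitrary example sizes:

```python
# Sketch: split an object into byte ranges for parallel ranged GETs,
# or for the parts of a multipart upload. S3 expresses ranges as
# inclusive "bytes=start-end" HTTP Range headers.

def part_ranges(object_size: int, part_size: int):
    """Yield (start, end) inclusive byte ranges covering the object."""
    for start in range(0, object_size, part_size):
        yield start, min(start + part_size, object_size) - 1

def range_header(start: int, end: int) -> str:
    return f"bytes={start}-{end}"

ranges = list(part_ranges(object_size=25, part_size=10))
print(ranges)                     # [(0, 9), (10, 19), (20, 24)]
print(range_header(*ranges[0]))   # bytes=0-9

# With boto3, each chunk would be fetched as (not executed here):
#   s3.get_object(Bucket="example-bucket", Key="big-file",
#                 Range=range_header(start, end))
```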

Corey: So, if you squint hard enough, what I’m hearing you tell me is that I can view all of this as, “Oh, yeah. S3 is also compute.” And it feels like a fast track to getting a question wrong on one of the certification exams. But I have to ask, from your point of view, is S3 storage? And whether it’s yes or no, what gets you excited about the space that it’s in?

Kevin: Yeah, well, I would say S3 is not compute, but we have some great compute services that are very well integrated with S3, which excites me, as do things like S3 Object Lambda, where we actually handle that integration with Lambda. So, you’re writing Lambda functions, and we’re executing them on the GET path. And so, that’s a pretty exciting feature for me. But you know, to sort of take a step back, what excites me is that customers around the world, in every industry, are really starting to recognize the value of data, and data at large scale. You know, I think that actually many customers in the world have terabytes or more of data that sort of flows through their fingers every day without them even realizing it.

And so, as customers realize what data they have, they can capture it, start to analyze it, and ultimately make better business decisions that really help drive their top line or reduce costs, whether it’s in manufacturing or, you know, other things that they’re doing. That’s what really excites me: seeing those customers take the raw capability and then apply it to transform not just how their business works, but even how they think about the business. Because in many cases, transformation is not just a technical transformation; it’s a people and cultural transformation inside these organizations. And that’s pretty cool to see as it unfolds.

Corey: One of the more interesting things that I’ve seen customers misunderstand, on some level, has been a number of S3 releases that focus around, “Oh, this is for your data lake.” And I’ve asked customers about that. “So, what’s your data lake strategy?” “Well, we don’t have one of those.” “You have, like, eight petabytes and climbing in S3? What do you call that?” It’s like, “Oh, yeah, that’s just a bunch of buckets we dump things into. Some are logs of our assets and the rest.” It’s—

Kevin: Right.

Corey: Yeah, it feels like no one thinks of themselves as having anything remotely resembling a structured place for all of the data that accumulates at a company.

Kevin: Mm-hm.

Corey: There is an evolution of people learning that oh, yeah, this is in fact, what it is that we’re doing, and this thing that they’re talking about does apply to us. But it almost feels like a customer communication challenge, just because, I don’t know about you, but with my legacy AWS account, I have dozens of buckets in there that I don’t remember what the heck they’re for. Fortunately, you folks don’t charge by the bucket, so I can smile, nod, remain blissfully ignorant, but it does make me wonder from time to time.

Kevin: Yeah, no, I think what you hear there is actually pretty consistent with what the reality is for a lot of customers. In distributed organizations, I think that’s bound to happen: you have different teams that are working to solve problems, and they are collecting data to analyze, they’re creating result datasets, and they’re storing those datasets. And then, of course, priorities can shift, and, you know, there’s not necessarily the day-to-day management around data that we might think would be expected if [we 00:26:56] sort of drew an architecture on a whiteboard. And so, I think that’s the reality we are in, and will be in, largely forever.

I mean, I think that at a smaller scale, that’s been happening for years. So, I think that, one, there’s a lot of capability in just being in the cloud. At the very least, you can now start to wrap your arms around it, right, where it used to be that it wasn’t even possible to understand what all that data was, because there was no way to centrally inventory it well. In AWS, with S3 inventory reports, you can get a list of all your storage, and we are going to continue to add capability to help customers get their arms around what they have, first off, and understand how it’s being used; that’s where things like Storage Lens really play a big role, in understanding exactly what data is and isn’t being accessed. We’re definitely listening to customers carefully around this, and when you think about the broader data management story, that’s a place where we’re spending a lot of time right now thinking about how we help customers get their arms around it: make sure they know the categorization of certain data, do I have some PII lurking here that I need to be very mindful of?

And then how do I get to a world where I’m—you know, I won’t say that it’s ever going to look like the perfect whiteboard picture you might draw on the wall. I don’t think that’s really ever achievable, but I think certainly getting to a point where customers have a real solid understanding of what data they have and that the right controls are in place around all that data, yeah, I think that’s directionally where I see us heading.

Corey: As you look around how far the service has come, it feels like, on some level, that there were some, I guess, I don’t want to say missteps, but things that you learned as you went along. Like, back when the service was in beta, for example, there was no per-request charge. To my understanding, that was changed in part because people were trying to use it as a file system, and wow, that suddenly caused a tremendous amount of load on some of the underlying systems. You originally launched with a BitTorrent endpoint as an option so that people could download large datasets through peer-to-peer approaches, and it turned out that wasn’t really the way the internet evolved, either. And I’m curious: if you somehow had to build this over from scratch, are there any other significant changes you would make in how the service was presented to customers and in how people talked about it in the early days? Effectively given a mulligan, what would you do differently?

Kevin: Well, I don’t know, Corey. I mean, just given where it’s grown to in macro terms, you know, I definitely would be worried that taking a mulligan [laugh] would change the overarching trajectory. Certainly, I think there are a few features here and there where, for whatever reason, it was exciting at the time and really spoke to what customers at the time were thinking, but over time, you know, those needs quickly moved to something a little bit different. And, you know, like you said, BitTorrent support is one where, at some level, it seems like a great technical architecture for the internet, but certainly not something that we’ve seen dominate in the way things are done. Instead, we largely have a world where there are a lot of caching layers, but it still ends up being mostly client-server kinds of connections. So, I certainly wouldn’t take a mulligan on any of the major functionality, and I think, you know, there’s a few things in the details where obviously, we’ve learned what really works in the end. I think we learned that we wanted bucket names to strictly conform to DNS naming rules, so that was a change that was made at some point. We would tweak things like that, but no major changes, certainly.

Corey: One subject of some debate while we were designing this year’s charity t-shirt—which, incidentally, if you’re listening to this, you can pick up for yourself at snark.cloud/shirt—was: is S3 itself dependent upon S3? Because we know that every other service out there is, but it is interesting to contemplate launching a whole new isolated region of S3 without S3 to lean on. That feels like an almost impossible bootstrapping problem.

Kevin: Well, S3 is not dependent on S3 to come up. There’s certainly a critical dependency tree that we look at and track, and we make sure that we keep an acyclic graph as we look at dependencies.

Corey: That is such a sophisticated way to say what I learned the hard way when I was significantly younger and working in production environments: don’t put the DNS servers needed to boot the hypervisor into VMs that require a working hypervisor. It’s one of those oh, yeah, in hindsight, that makes perfect sense, but you learn it right after that knowledge really would have been useful.
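
The DNS-and-hypervisor lesson is exactly a dependency cycle, and checking that a dependency graph stays acyclic is mechanical. A sketch using hypothetical service names; if resolving dependencies can’t cover every node, a cycle exists:

```python
# Sketch: verify that a service dependency graph is acyclic.
# A service can be "resolved" once everything it depends on is resolved;
# if some services can never be resolved, they form (or sit behind) a cycle.
# Service names below are hypothetical.

def is_acyclic(deps: dict) -> bool:
    """deps maps each service to the set of services it depends on."""
    graph = {svc: set(needs) for svc, needs in deps.items()}
    for needs in list(graph.values()):
        for need in needs:
            graph.setdefault(need, set())  # leaf deps with no entry of their own
    resolved = set()
    progress = True
    while progress:
        progress = False
        for svc, needs in graph.items():
            if svc not in resolved and needs <= resolved:
                resolved.add(svc)
                progress = True
    return len(resolved) == len(graph)

print(is_acyclic({"api": {"index"}, "index": {"storage"}, "storage": set()}))  # True
print(is_acyclic({"dns": {"hypervisor"}, "hypervisor": {"dns"}}))              # False
```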

Kevin: Yeah, absolutely. And one of the terms we use for that is static stability; that’s one of the techniques that can really help with isolating a dependency. We actually have an article about that in the Amazon Builders’ Library, which has a bunch of really good articles from very experienced operations-focused engineers in AWS. So, static stability is one of those key techniques, but there are other techniques; pure minimization of dependencies is one. And so, we were very, very thoughtful about that, particularly for that core layer.

I mean, you know, when you talk about S3 with 200-plus or 235-plus microservices, I would say not all of those services are critical for every single request. A small subset of those are required for every request, and the other services actually help manage and scale that inner core of services. And so, we look at dependencies on a service-by-service basis to really make sure that inner core is as minimized as possible. And then the outer layers can start to take some dependencies once you have that basic functionality up.
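
One common shape of static stability is caching the last-known-good answer from a dependency so that the caller keeps working when that dependency fails, rather than failing in sympathy. A toy sketch, not drawn from any AWS implementation:

```python
# Toy sketch of static stability: a client caches the last-known-good
# answer from a dependency and keeps serving it if the dependency starts
# failing, instead of taking a hard synchronous dependency on it.

class StaticallyStableClient:
    def __init__(self, fetch):
        self._fetch = fetch          # callable that may raise
        self._last_good = None
        self._have_value = False

    def get(self):
        try:
            self._last_good = self._fetch()
            self._have_value = True
        except Exception:
            if not self._have_value:
                raise                # no safe fallback exists yet
        return self._last_good

# Simulated dependency that succeeds once, then fails:
calls = {"n": 0}
def flaky_config():
    calls["n"] += 1
    if calls["n"] > 1:
        raise RuntimeError("dependency down")
    return {"endpoint": "10.0.0.1"}

client = StaticallyStableClient(flaky_config)
print(client.get())  # {'endpoint': '10.0.0.1'}  (fresh)
print(client.get())  # {'endpoint': '10.0.0.1'}  (stale, but stable)
```

The trade-off is staleness: the caller keeps operating on the last configuration it saw, which is usually far better than an outage that cascades from the dependency.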

Corey: I really want to thank you for being as generous with your time as you have been. If people want to learn more about you and about S3 itself, where should they go—after buying a t-shirt, of course.

Kevin: Well, certainly buy the t-shirt. First, I love the t-shirts and the charity that you work with on them. For S3, obviously, it’s aws.amazon.com/s3. And you can actually learn more about me: I have some YouTube videos, so you can search for me on YouTube and kind of get a sense of me.

Corey: We will put links to that into the show notes, of course. Thank you so much for being so generous with your time. I appreciate it.

Kevin: Absolutely. Yeah. Glad to spend some time. Thanks for the questions, Corey.

Corey: Kevin Miller, vice president and general manager for Amazon S3. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, ignorant comment talking about how your S3 compatible service is going to blow everyone’s socks off when it fails.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.


2021 Duckbill Group, LLC