SmugMug's Cloud Adventure with Andrew Shieh
===

[00:00:00] Andrew: I found that AWS, they enjoy these customer stories that are not typical cases.

[00:00:09] Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn, and my guest today is someone that I have been subtly, and then not so subtly, and then outright demanding, appear on this show for a long time now. Andrew Shieh is a principal engineer at SmugMug slash Flickr slash whatever else you’ve acquired recently. Andrew, thanks for joining me.

[00:00:29] Andrew: Thanks a lot, Corey.

[00:00:30] Corey: It’s great to talk to you.

[00:00:31] You have been someone who’s been a notable presence in the AWS community for far longer than I have. You’ve been in your current role for 12 years at this point, and you’ve been around the edges of a lot of very interesting AWS problems as a result. At one point, SmugMug was the largest S3 customer that I was aware of, back when S3 launched. Weren’t you the first customer, or something like that?

[00:00:54] Andrew: Yeah, we were the first enterprise customer of AWS way back in early-2006. There was a cold call from AWS. In fact, I think it was back when people thought of them as a bookseller. They just cold called a bunch of companies that they thought might be interested in storage.

[00:01:19] Corey: “Hey kid, want to buy some object store?” Yeah, it’s better than the first service they launched shortly before, which was, “Hey kid, want to buy a messaging queue?” Because SQS was the first service. And the correct response from almost everyone was, “What the hell is that?” At least storage is something, theoretically, people could understand.

[00:01:34] Andrew: And they sold our CEO, Don, on S3, and we just became their, like, key customer, especially in those early, early days. I didn’t join until six years later, but I got to see a lot of the effects of it over time. And in the last decade, I’ve seen similar changes, and all the things that have shipped, all of the things that we’ve had influence over, some services that we helped to kick off. It’s been really, really interesting to see it as that kind of customer. We’re still a small company, but we have this unusual level of influence in that cloud AWS world.

[00:02:17] Corey: Do you feel like you still do? Because one of the things that I found, at least from where I’m sitting, is that the way that AWS themselves engage with customers, regardless of what they say—if you ignore what they say and pay only attention to what they do—which is a great approach for dating, incidentally—then you see that there’s, at least from my perspective, there’s been a significant shift away from a lot of the historical customer obsession in a good way into what feels like customer obsession in a vaguely creepy way. I no longer have the same sense, on some level, that as a customer that AWS is in my corner the way that they once were.

[00:02:52] Andrew: It’s tricky to speak from my point of view because we see, I think, just as where we live as a customer—SmugMug and Flickr—we see the best parts of AWS. So, we generally—we talk to the best, the top people on the service teams, we get amazing account managers and TAMs, and the people we talk to are just, like, the best representatives of AWS. They really do talk about customer obsession and show it, so from my point of view, that’s been going really well, and is one of the only companies that I see really living up to what they claim in their values. So, you know, customer obsession is definitely always there. Anytime that I talk to a service team, they definitely have that kind of empathy that a lot of other companies don’t. They really try to understand what we’re trying—what our goals are, and put them in our shoes. I’ve seen small companies do that, but very few companies have that in their culture and actually exercise it.

[00:03:57] Corey: What have you seen shift over the decade-plus that you’ve been working with AWS? Because it’s very hard, on some level, to reconcile the current state of AWS with what it was the first time I touched it. I remember being overwhelmed by how many services there were, and there was no way I was going to learn what all of them did, and there were about 12. Now, it feels like there are—like, add an order of magnitude, double it, and keep going, and you get a lot closer to what’s there today. I still get as overwhelmed as I did back then. What have you seen that’s shifted on the AWS side of the world?

[00:04:31] Andrew: So, in terms of services, it definitely long ago went past the size of things I can keep track of, over time. We still try to get good highlights, and there’s still a lot of need for, like, hey, we need to know about all these new things that are coming out, but we’re kind of past the point where we try to track, like, every new service, and try to run every new service and see what it does.

[00:04:57] Corey: You must be so happy. My God.

[00:04:59] Andrew: We used to do a lot of that, just, like, hey, we’ll talk to every service team.

[00:05:04] Corey: For me, the big tipping point was Ground Station because as much as I advise people otherwise, I still fall into the trap myself of, when AWS announces a new service, that means I should kick the tires on it. And they announced Ground Station that talks to satellites in orbit. I’m like, “How the hell am I going to get a CubeSat up there?” Followed by, “Wait a minute. They’re building things that are for specific use cases among specific customers.” Power to them. That’s great, but that doesn’t mean it becomes my to-do list for the next few quarters of what to build.

[00:05:33] And every service is for someone; not everything is for everyone. And I don’t think, for example, the IoT stuff is aimed at me at all. I don’t have factory-sized problems. I don’t need to control 10,000 robots at this point in time. So, that means that I’m a very poor analog of the existing customers. Some things, like S3, every customer basically uses S3. In some cases, they don’t realize it, but they are. And other things get a little far-flung out there. And I think that S3 as a raw infrastructure service really does represent AWS at its absolute best.

[00:06:08] Andrew: In every way, S3 represents the best. It’s not just the service itself, but all of the people that are on that team, from leadership down to engineers, they all seem to have the answers for the questions that we have, even when they’ve had some problems and failures. But they, you know, they respond to those really well, too. That’s really the heart of AWS, and I hope that’s where the rest of the services are heading.

[00:06:35] Corey: Honestly, it’s the most Amazonian service I can think of, even down to having a bad name. Because S3 stands for Simple Storage Service, and it has gotten so intelligent that calling it ‘simple’ is really one of those areas that’s, okay, now it just sounds like outright mockery. “What, you don’t understand S3? But it’s so simple.” If you understand S3 thoroughly from end-to-end, every aspect of the service, they have a job waiting for you.

[00:07:00] But you’re right about the responsiveness. Back when Intelligent Tiering launched, I had issues with it from the perspective of, okay, that monitoring charge on a bunch of small objects that will never transition is ridiculous, and the fact that it’s going to charge you for a minimum of 30 days means that for anyone using it in a transitory way, it’s a complete non-starter. A year or so goes by, and they reach out. They’re like, “There. Look at our new pricing. Does that address your concerns?”

[00:07:20] Like, holy crap, I’m just a loudmouth on the internet. I use it, sure, but I’m not in the top… any percentage of customers as far as usage goes on this. And they’re right. And the answer was, “Yeah, mostly.” The only weird edge case is with objects just above the 128-kilobyte threshold—somewhere between 148 and 160 kilobytes; we have the math in a blog post on our website—where, if the object never changes, the monitoring charge will cost you more than the potential cost savings. But that was very difficult to get to, and it hits almost nobody. It’s a great default storage class.
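To put rough numbers on that break-even, here is a back-of-the-envelope sketch. Every price in it is an assumed, illustrative figure rather than a current list price—check the S3 pricing page for your region—and the blog-post math referenced above accounts for more factors than this does, but the shape of the calculation is the same: a flat per-object monitoring fee only pays off once the per-gigabyte savings of a cheaper tier exceed it.

```python
# Rough break-even sketch for the Intelligent-Tiering monitoring charge on a
# small object that never changes and never gets read. All unit prices below
# are assumed, illustrative numbers, not current list prices.

MONITORING_PER_OBJECT = 0.0025 / 1000   # $/object-month, assumed
STANDARD_PER_GB = 0.023                 # $/GB-month, assumed
DEEPEST_AUTO_TIER_PER_GB = 0.004        # $/GB-month, assumed (archive instant access tier)

def monthly_net_saving(object_size_bytes: float) -> float:
    """Storage saving from sitting in the cheaper tier, minus the monitoring fee."""
    size_gb = object_size_bytes / (1024 ** 3)
    return size_gb * (STANDARD_PER_GB - DEEPEST_AUTO_TIER_PER_GB) - MONITORING_PER_OBJECT

# Break-even size: below this, monitoring costs more than the tiering saves.
break_even_bytes = MONITORING_PER_OBJECT * (1024 ** 3) / (
    STANDARD_PER_GB - DEEPEST_AUTO_TIER_PER_GB
)
print(f"Break-even with these assumed prices: ~{break_even_bytes / 1024:.0f} KiB")
```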

[00:07:57] Andrew: Yeah. I think at first Intelligent Tiering, because it had the word intelligent in it, it sounded like some automated, like, hey, we’ll figure out what class to put your objects in. But when we looked at it further, it was definitely like, “Oh, that makes sense.” You know, you trade off some additional costs of, like, moving things back into S3 Standard for, like, the read charges.

[00:08:21] Corey: I find it useful for use cases that you might very well see—like, I don’t know how widespread the use case is—but back when I used to work at an expense management company, receipts were generally uploaded once and read either zero or one times, but you had to keep them forever. So yeah, transitioning them into infrequent access made an awful lot of sense. But what you’ll often see, I imagine, especially with a photo hosting site like yours, is that every once in a while, something that’s been there for years suddenly gets a massive amount of traffic.

[00:08:51] And if you write naive code—not that I would ever do such a thing—you wind up with every read coming from S3 because you don’t have caching, and suddenly it blows out your bill because, okay, it’s a five-megabyte photo that gets downloaded 20 million times in 24 hours; that starts to add up, so the intelligence around that starts to be helpful. You can beat S3 Intelligent-Tiering if you have an understanding of the workloads in the bucket, and a deep understanding of the lifecycle story, sure, but for most people, my recommendation has shifted to: if you don’t know where to start, start with this. There are remarkably few edge cases where you’ll find yourself regretting it.
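To give that “hot object with no cache” scenario a rough scale, here is a quick sketch of the arithmetic. The egress and request prices are assumed placeholders, not current list prices:

```python
# Back-of-the-envelope cost of serving one suddenly popular photo straight
# from S3 with no caching layer. Egress and request prices are assumed,
# illustrative numbers.

object_size_mb = 5
downloads = 20_000_000           # reads in 24 hours, per the example above
egress_per_gb = 0.09             # $/GB to the internet, assumed
get_price_per_1000 = 0.0004      # $/1,000 GET requests, assumed

data_out_gb = object_size_mb * downloads / 1024
egress_cost = data_out_gb * egress_per_gb
request_cost = downloads / 1000 * get_price_per_1000

print(f"Data out: {data_out_gb:,.0f} GB")   # ~97,656 GB
print(f"Egress:   ${egress_cost:,.0f}")     # ~$8,789 with these assumed prices
print(f"GETs:     ${request_cost:,.0f}")    # ~$8
# A CDN or an application-level cache in front of the bucket absorbs almost all of this.
```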

[00:09:31] Andrew: We're definitely in one of those cases. We spend a lot of time working on optimizing storage, figuring out how to class things appropriately. There’s a ton of caching in our layers, so it’s all about delivering all of our photos exactly how our customers want. And the photo model is an interesting one for S3, too, because—I think they’ve talked about it a few times in re:Invent keynotes—but the basic model is, both of our services at SmugMug and Flickr, we store the original files that the customer uploads because, in general, photographers want, you know, they want their original photo. They don’t want you to compress it down like Google Photos does. They want to be able to get back the original photo that they uploaded, bit to bit.

[00:10:19] So, we store those, but they’re generally not well compressed, and they’re usually too big for display, so we spend a lot of time processing them, doing a lot of GPU work to deliver them really quickly while minimizing both the S3 costs and the time it takes to deliver them across the network. There’s a ton of trade-offs there, and we spend a lot of time thinking about that. But when it works, that’s one of the most amazing parts of S3: how few engineers we really need who have a deep knowledge of it. We can run this entire storage business on top of S3 without actually knowing that much, without having to touch it very often, except for a few specific things like cost management and how to optimize the storage. But once it’s up, you know, it runs so much [laugh] easier than any kind of storage I’ve ever used before.

[00:11:19] Corey: Could you imagine using an NFS filer for this or a SAN somewhere? And this is also the danger. What you’re doing at SmugMug is super interesting, especially when it comes to S3, and I feel, at the same time, it makes terrible fodder for keynotes because you are the oldest S3 customer and one of the largest. So, taking lessons from what you do and mapping them to my own S3 usage—which on a monthly basis is about $90—is absolutely the wrong direction for me to [laugh] go in. Yeah—by the time the general guidance doesn’t apply to you, you’d know.

[00:11:55] There’s a difference between best practices, and okay, you’re now starting to push the boundaries of what is commonly accepted as standard or even possible. Yeah, at that point, if you tell me that my standard guidance for S3 doesn’t apply to you, I will nod and agree with you. But that’s part of the trick I’ve found of being a consultant: recognize the best-practice guidance, and also where it starts to fall down. Because at a certain point, everything winds up becoming a unicorn case.

[00:12:24] Andrew: I’ve found that AWS likes talking about those. They really enjoy these customer stories that are not typical cases. I think Peter DeSantis actually talked about storage classing in last year’s keynote—or the keynote at night.

[00:12:43] Corey: It was two years ago now. Welcome to 2024. Yeah, late-night surprise computer science lecture with Professor DeSantis. It’s one of my favorite, favorite things at re:Invent.

[00:12:52] Andrew: That was basically exactly the model we’ve been working on: you know, what to do with all this idle storage, and balancing it with the really high-volume stuff—all the high-volume uploads, high-volume usage—and then all of that storage volume that just sits there. Well, it’s not our problem to manage it. We have to help AWS come up with some of these ideas—how to achieve our low-cost, unlimited storage—working with them to make it into a viable business. It’s really our low-level goal there.

[00:13:28] Corey: I have to ask—because the only way to discover things like this is basically community legend and whatnot—AWS talks a lot about the eleven nines of durability that S3 has, which is basically losing one object every, what, trillion years or whatever it works out to. Now, let’s talk reality. I’ve lost data before because some moron—who I’m not going to identify because he’s me—wound up deleting the wrong object. It’s always control plane issues, and the rest. It’s not an actual drive failure. Which is why I think that metric gives people a false sense of security.

[00:14:01] Andrew: They’re calculating known possible failures.

[00:14:05] Corey: And DR does not factor into it because you have the same durability metric for S3 One Zone-Infrequent Access, which is going to be in a single availability zone. And the odds of that availability zone being destroyed by a meteor are significantly less than eleven nines, so let’s be clear here.

[00:14:22] Andrew: That was also something that DeSantis covered in that talk, which was that one zone isn’t necessarily one zone, which is, to most people, probably a surprise. But it’s in their interest: if they don’t have to move the blocks around, it’s cheaper for them to leave them where they are. So, if you’re transitioning files out, they might not do anything to them; they may stay in the exact same spot they were in when they were in S3 Standard. It just gives them the option to spread their blocks around more, and to plan more for the performance of their individual disks. That service, I think, is somewhat misnamed.
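As a rough reading of what those eleven nines actually work out to—just arithmetic on the published figure, not an official failure model—here is a small sketch:

```python
# Rough reading of S3's published 99.999999999% (eleven nines) annual durability.
# This is only arithmetic on the marketing number; it says nothing about
# control-plane mistakes, deleted buckets, or account-level disasters.

annual_loss_probability = 1 - 0.99999999999   # ~1e-11 per object, per year

objects_stored = 1_000_000_000                # say, a billion objects
expected_losses_per_year = objects_stored * annual_loss_probability
print(f"Expected objects lost per year: {expected_losses_per_year:.4f}")          # ~0.0100
print(f"Roughly one object lost every {1 / expected_losses_per_year:.0f} years")  # ~100

# For a single object, the expected wait is on the order of 1e11 years, which is
# why the fat-fingered delete dominates real-world data loss.
```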

[00:14:55] Corey: Yeah. It’s kind of unfortunate, but it’s the truth. And I think it’s also an example of a bunch of different use cases existing for this. I don’t know if a lot of people are aware of this, but my constant running joke is that Route 53 is a database. I changed that joke at the last minute to talk about Route 53; originally, it was S3. But enough analytics workloads live on top of S3—I do it myself—that using it as a database is not the most ridiculous thing in the world.

[00:15:23] I mean, there are—I have one workload that grabs an SQLite file, uses that as its database in a Lambda function, then returns it with any changes it’s made, every time it runs. And there’s logic in there to avoid race conditions and whatnot, but okay. That’s not a best practice for a lot of workloads, but it works really well for this particular one. And the joke I found is that using text records in DNS as a relational data store is objectively ridiculous. That is the sort of thing that no one should actually do, so it makes a great joke. It’s when people start to say, “Well, what about this other ide—” yeah, you’re trying to solve a problem that is better solved by other ways. Here’s what they are.
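A minimal sketch of the pattern Corey describes—pull the SQLite file down, work on it in the Lambda’s /tmp, push it back—with made-up bucket and key names, and with the race-condition handling he mentions deliberately left out:

```python
# Minimal sketch of the "SQLite file in S3 as a Lambda database" pattern
# described above. Bucket and key names are hypothetical; real use needs the
# race-condition handling (locking or conditional writes) mentioned above.
import sqlite3
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"          # hypothetical
KEY = "state/app.sqlite"           # hypothetical
LOCAL_PATH = "/tmp/app.sqlite"     # Lambda's writable scratch space

def handler(event, context):
    # Pull the current database file down from S3.
    s3.download_file(BUCKET, KEY, LOCAL_PATH)

    conn = sqlite3.connect(LOCAL_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
    )
    conn.execute("INSERT INTO events (payload) VALUES (?)", (str(event),))
    conn.commit()
    conn.close()

    # Push the modified file back. Concurrent invocations can clobber each
    # other here, which is exactly the race condition glossed over above.
    s3.upload_file(LOCAL_PATH, BUCKET, KEY)
    return {"status": "ok"}
```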

[00:16:05] Andrew: Yeah, and S3 was actually a very cheap database for a while. When it started, they didn’t charge for operations—only for storage volume—so you could build a very inexpensive database on top of S3. Not today, but back when it started, you could have all these zero-byte files.

[00:16:26] Corey: Yeah, during the beta. That’s why they started charging per request because as I recall, people were hammering the crap out of the front-ends as a result, and that wasn’t a use case we built the thing for. So, how do we re-architect around it? You aren’t going to change human nature, so okay, make it cost money. People will either stop doing it or you’ll have enough money to invest to fix the problem.

[00:16:45] So, what have you seen as far as, I guess, trends coming and going? I mean, you’ve been to re:Invent, I think, almost every time. What was your take on re:Invent this past year? What bigger trends did it speak to?

[00:16:59] Andrew: I’ve been to every re:Invent except for the very first one, and I missed that one because my first child had just been born, so I think there’s a good reason to miss it. But I’ve been to every one since, and so I think that’s ten… or eleven, if you count the virtual one. Vegas hasn’t changed. Vegas is still, like, my least favorite place to visit, but the people at re:Invent are why I go. It’s become this very, kind of, business-focused conference, versus what I would prefer, which is more of a grow-your-community, think-about-things kind of event.

[00:17:35] For me, it’s become more like, hey, we need to talk to these vendors, these teams at AWS, and it’s very packed with business meetings, not so much the interest-driven stuff. [laugh] It’s okay, but it’s very business- and sales-focused. This year, I also spoke at the Expo Hall, which was pretty unusual. I gave a talk at the AWS dev lounge all about learning about AWS. I called it “Learning Backwards” because it was about AWS trivia—a topic I think you know very well [laugh].

[00:18:13] Corey: Yeah. And it feels like that’s what a lot of the certification exams started as. It’s like, all right, do you know this one weird trick? Or can you remember exactly what the syntax is for something? It’s like, that is not how you test knowledge of these things.

[00:18:26] Something else you’ve talked about in that vein, incidentally, has been the value of having a broad engineering background as applied to tech, and what I always found fascinating about your approach to it was you have a background in civil and environmental engineering, but you don’t take a condescending approach whenever you have this conversation. It doesn’t come from, “Oh, yes. Because I have this background, I am better than you with the rest of it.” You talk about it being an advantage, while also accepting that people come from a bunch of different places. What’s your take on it?

[00:18:56] Andrew: So, I went to Stanford University, graduated in 1999. I think when I graduated, I knew nothing about the industry, and I was just kind of… just kind of out there. I probably had that attitude where, if you didn’t go to college, you didn’t know what you were doing. But over the years, just because I’ve worked with so many great people—our CEO, Don MacAskill, for example, didn’t go to college, and I’ve worked with so many other great engineers who didn’t go to college but had other really formative experiences that turned them into excellent engineers. I think one of the things a broad engineering education gives you is that, even if you don’t have the same kind of opportunities—like getting into some job early on and learning about weird engineering things hands-on—the education gives you a different view into it.

[00:19:55] I think my favorite example is that AWS has, like, different pricing formulas for every service, and a lot of people get overwhelmed by those. When I look at that, I’m like, oh, you know, it’s just another equation. I can toss it into a spreadsheet, or just do it on a calculator, and figure out the costs, do some estimates down the line.
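That “just another equation” framing translates pretty directly into a few lines of code or a spreadsheet row. As a sketch—every unit price below is an assumed placeholder, not a real list price—a monthly S3 estimate might look like:

```python
# A pricing formula treated as "just another equation": a rough monthly S3
# estimate as a plain function. All unit prices are assumed placeholders;
# plug in the real numbers from the pricing page for your region.

def estimate_s3_monthly(storage_gb: float,
                        put_requests: int,
                        get_requests: int,
                        egress_gb: float,
                        price_per_gb: float = 0.023,        # assumed
                        price_per_1k_put: float = 0.005,    # assumed
                        price_per_1k_get: float = 0.0004,   # assumed
                        price_egress_gb: float = 0.09) -> float:  # assumed
    return (storage_gb * price_per_gb
            + put_requests / 1000 * price_per_1k_put
            + get_requests / 1000 * price_per_1k_get
            + egress_gb * price_egress_gb)

# Example: 50 TB stored, 10M uploads, 200M reads, 20 TB out in a month.
print(f"${estimate_s3_monthly(50_000, 10_000_000, 200_000_000, 20_000):,.2f}")
```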

[00:20:17] Corey: The fact that it’s an equation is part of the challenge. Like, there are very few things you buy where you have that level of granularity and variability in pricing dimensions that affect others. It becomes a full system, as opposed to simple arithmetic.

[00:20:30] Andrew: But so much of my engineering course load, back in the day, was learning all these different equations—say, fluid mechanics or something like that. There are super complicated equations, and you have to figure out which ones to use, what goes where, what’s applicable. Compared to that, the AWS cost math is really, really much simpler. And you can kind of tell that the people who actually create those pricing formulas came from, like, finance or engineering-education worlds where, in their minds, it’s also very simple. But to most people, the pricing formulas are way too complicated.

[00:21:14] Corey: That feels like a trend from AWS that I’ve noticed: increasingly, it’s become apparent that every team has their own P&L to which they’re held accountable, which means that every service—even ones that should not necessarily be so—has to generate revenue. And they’re always looking to make sure there isn’t some weird edge case, like that zero-byte S3 challenge back when it first launched, so they have a bunch of different pricing dimensions that grow increasingly challenging to calculate. And the problem that I’ve discovered is the interplay between those things. Okay, you put an object into S3. Great. It doesn’t take much to look at the S3 pricing page and figure out what that’s going to cost.

[00:21:56] But now you have versioning turned on. Okay, what does that potentially mean? What does the delta look like? There’s bucket replication that adds data transfer; it causes Config events, which potentially cause rule evaluations. If you have CloudTrail data events turned on, then it costs 20 times more to record that data event than the actual data event cost you. And then those things, in turn, spit out logs on their own, which in turn generate further charges downstream. And if you’re sending them somewhere outside of AWS, there are egress charges to contend with. And it becomes this tertiary, quaternary, and-so-on level of downstream charges that on a per-object basis are tiny fractions of a penny, but that’s the fun thing about scale; it starts moving that decimal point right quick.

[00:22:40] Andrew: Yeah. Going back to the engineering education, I think that’s another thing: orders of magnitude. I look at our S3 usage, and, you know, people talk about millions of things; we have billions—multiple billions. When you’re casually talking about it, it doesn’t sound that different, but what I tell people is, if your million things takes one day, your billion things takes three years. [laugh] People don’t usually have a great way to conceptualize large numbers like that. Like, we’re doing, you know, a trillion operations a year. What is that? So, conceptualizing the orders of magnitude—that’s another thing I feel the broad engineering education has helped me with a lot.
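The rule of thumb checks out with simple arithmetic: a billion is a thousand millions, so whatever takes a day at the million scale takes about a thousand days—roughly three years—at the billion scale. A quick sketch:

```python
# Sanity-checking the "a million things takes a day, a billion takes three
# years" rule of thumb at a fixed processing rate.

seconds_per_day = 86_400
rate_per_second = 1_000_000 / seconds_per_day   # rate implied by "a million per day"

for count in (1_000_000, 1_000_000_000, 1_000_000_000_000):
    days = count / rate_per_second / seconds_per_day
    print(f"{count:>17,} items -> {days:>13,.1f} days (~{days / 365:,.1f} years)")

#         1,000,000 items ->           1.0 days (~0.0 years)
#     1,000,000,000 items ->       1,000.0 days (~2.7 years)
# 1,000,000,000,000 items ->   1,000,000.0 days (~2,739.7 years)
```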

[00:23:30] Corey: Yeah. There’s a lot to be said for folks with engineering backgrounds in this space. I’ve worked with some who are just phenomenally good at a lot of these problems and at thinking about them in a whole new way. That’s not all of them, but it’s one of those areas that’s indicative, and it makes me curious.

[00:23:47] It… let’s also be clear here, like the old joke: what do you call the last-place student to graduate from medical school? Doctor. It’s the same idea—different people align themselves with different areas. You take a very odd, holistic view of a lot of these things; despite the large number of engineers from Stanford I have worked with over the course of my career, almost no one else sees things quite the way that you do. So, I would wonder how much of that is the engineering background versus your own proclivities and talents.

[00:25:21] There’s a lot that can be addressed just from the perspective of thinking about things from a different point of view. The term itself has become almost radioactive as far as toxicity goes, but diversity of thought comes into this an awful lot: what is the background, what is the lens you view things through? I mean, I tend to look at almost every architectural issue these days from the perspective of cost. Now, that is not necessarily the best way to approach every problem, but it’s one that doesn’t tend to be as commonly found, and it lets me see things very differently. Conversely, that means that even on small-scale stuff where the spend rounds up to $1, I still avoid certain configurations because, oh, that’d be 20 cents of it. That’s going to be awful. It’s hard to stop thinking like this.

[00:26:10] Andrew: Yeah, but you can understand, though, what some AWS services are going for by their pricing. Like, if some service releases a serverless model, you can kind of tell by the pricing whether they want everyone to, like, default to this version of their service, or if it’s a niche product just for small usage, or if it’s intended for large usage. A lot of times the pricing can give that away.

[00:26:41] Corey: Oh, I was very excited about Kendra until I saw it started at 7500 bucks a month at launch. It’s like, “Oh, it’s not for me. Okay.” Yeah, price discrimination is a useful thing. Honestly, when I look at a new product, the first thing I pull up is—I ignore the marketing copy and go to their pricing page. Is there something that makes sense, like, for either a free or trial period? Is there a ‘call me for enterprise’ at the other end? And in between is a little less relevant. But if they don’t speak to both ends of that market, they’re targeting something very special. Let’s figure out what that is.

[00:27:11] Andrew: Yeah, and in my position, I look at the pricing side, and what it can do for our business is usually the bigger and more difficult question. But you really learn a lot from the pricing side of businesses. Something I hadn’t had to do until I took over the AWS part of our business is understanding that you can figure out a lot of things in reverse, by looking at how their business people think of their services.

[00:27:40] Corey: It's neat to see how that starts manifesting in customer experience, too. Honestly, one of the hard parts for me whenever I deal with AWS is remembering just the sheer scale of the company. I come from places where 200 employees is enormous, so that doesn’t necessarily track. And there’s no one person sitting there like a czar, deciding that this is going to be how they view it. Instead, it is a pure… it’s a pure story of different competing folks working together and collaborating in strange ways. It’s an odd one.

[00:28:10] One last topic I want to get into before we call this an episode. I ran a photo scavenger hunt on site at re:Invent, and you won. Prize to be released shortly. But I just want to say how fun it was—oh, good, people are using this. And also, who’s that person pulling way ahead of the pack?

[00:28:28] Effectively, despite doing all these fun things at re:Invent, you also walked around with a camera the whole time, which—you know, working at SmugMug—okay, I start to get it. You like photography. Awesome. But you got some great shots that I’ll be posting up, and we’ll talk about this a bit more in the coming weeks.

[00:28:44] Andrew: Great. That was a big surprise to me when you told me. I enjoy photography a lot. One of the great things about working at SmugMug and Flickr is being in the photography culture, working with photographers. I love going on photo walks, doing that kind of thing, so that was one of the more fun parts of re:Invent. I really enjoyed doing that. And it was also interesting to see how quickly you actually made that app.

[00:29:12] The other thing I noticed was that, in Werner’s keynote, he actually talked about a photo app that was uploading entire large photos across the network, and I think he was talking about you [laugh] because he was talking about the trade-offs between not doing any work on a photo and sending the whole thing to a customer, versus shrinking it down and sending it faster. And your app actually sends the entire photo over because, you know, I imagine developing photography apps is not your business.

[00:29:43] Corey: Yeah, it’s partially that. I also built the thing entirely via GenAI. I’m curious, as somebody who walked around there, dealing with the sometimes-challenging radio signals, was that a problem at any point during the conference for you? The fact that it didn’t compress or do any transformation on the photos before they went up?

[00:29:59] Andrew: No, because you only download the photo if you want to view it again, so I only did it a few times, just to see. I always poke around photo apps to see how they work, and I just noticed it because I was like, “Oh, this photo is coming down the line really slow.” And of course, it’s just the original thing that I sent. You know, which is fine.

[00:30:18] Corey: In hindsight, that’s blindingly obvious. Yeah, I should have definitely fixed that. There are existing libraries where, basically, it’s an include and call it good. It’s not… yeah, it’s not like it’s something that would have been a massive challenge. I just used a lot of the AWS Amplify primitives combined with a couple of generative AI tools because I don’t know front-end stuff, so let’s see if the robot does.
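For what it’s worth, the “include a library and call it good” fix could look something like this sketch with Pillow—shrink a web-friendly copy before upload and keep the original untouched. The size cap, quality setting, and file names are arbitrary illustrations:

```python
# Sketch of shrinking a photo to a web-friendly size before upload, the kind
# of "include a library and call it good" fix mentioned above. The 2048px cap,
# JPEG quality, and file names are arbitrary illustrative choices.
from PIL import Image  # pip install Pillow

def make_upload_copy(original_path: str, output_path: str,
                     max_dimension: int = 2048, quality: int = 85) -> None:
    with Image.open(original_path) as img:
        img = img.convert("RGB")                       # JPEG can't store alpha
        img.thumbnail((max_dimension, max_dimension))  # preserves aspect ratio
        img.save(output_path, "JPEG", quality=quality, optimize=True)

# The original file stays as-is; only the resized copy goes over the wire.
make_upload_copy("scavenger_hunt_shot.jpg", "scavenger_hunt_shot_web.jpg")
```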

[00:30:37] And it worked surprisingly well. Then, of course, I wound up with somebody who is not at all a robot, Danny Banks at AWS, who’s a principal developer—sorry a principal technologist, a principal design technologist—get the titles right—on the Amplify team, and he was great at, “Okay, that looks like crap. Let’s fix this.” Like, oh, thank God. Someone who’s good at this stuff. Turns out that one of the best approaches to getting something done right is to do a really crappy version that offends the sensibilities of experts.

[00:31:03] Andrew: That’s a tactic that I take frequently. I just used that recently.

[00:31:09] Corey: I really want to thank you for taking the time to speak with me. If people want to learn more about what you’re up to and how you view things, where’s the best place for them to find you?

[00:31:16] Andrew: That’s a great question. I used to be pretty active on Twitter, but no longer for some reason. Um—

[00:31:25] Corey: Can’t imagine why that would be.

[00:31:26] Andrew: So, I would guess, Mastodon. I’m @shandrew at hachyderm.io. And also, you can contact me on LinkedIn, on Flickr, and anywhere else. I’m usually under shandrew. S-H-A-N-D-R-E-W.

[00:31:43] Corey: And we will, of course, put links to that in the show notes. Thank you so much for taking the time to speak with me today. I really appreciate it.

[00:31:50] Andrew: Thank you very much, Corey. It was a pleasure talking to you, as always.

[00:31:54] Corey: Andrew Shieh, principal engineer at SmugMug slash Flickr. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment about how you don’t see the problem with downloading the full-size image every time someone wants to view it from S3 infrequent access.
