The Security Coat of Many Colors with Will Gregorian

Will Gregorian, Head of Security and Technical Operations at Rhino, sits down with Corey—despite the fact that they’ve crossed paths in the past! Will’s background working for startups has informed his current work in security, and his time at smaller companies has helped him craft his perspective in a valuable way—check in to hear how. Will and Corey talk about their shared history, Will’s approach to bringing security into the early stages of a startup, and how to spot the failures to avoid in the future. Will ponders the militarism in the language around security and how to change that conversation going forward, as well as the lessons that can be learned from doing security in healthcare. The forefront is where Will tries to stay, and he gives us the reasons why.

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by CircleCI. CircleCI is the leading platform for software innovation at scale. With intelligent automation and delivery tools, more than 25,000 engineering organizations worldwide—including most of the ones that you’ve heard of—are using CircleCI to radically reduce the time from idea to execution to—if you were Google—deprecating the entire product. Check out CircleCI and stop trying to build these things yourself from scratch, when people are solving this problem better than you are internally. I promise. To learn more, visit circleci.com.

Corey: Up next we’ve got the latest hits from Veeam. It’s climbing the charts everywhere and soon it’s going to climb right into your heart. Here it is!

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Sometimes I like to talk about my previous job being at a large regulated finance company. It’s true. I was employee number 41 at a small startup that got acquired by BlackRock. I was not exactly a culture fit, as you can probably imagine from basically every word that comes out of my mouth, juxtaposed against a highly regulated finance company.

Today, my guest is someone who knows me from those days because we worked together back in that era. Will Gregorian is the head of Information Security at Color Health, and is entirely too used to my nonsense, to the point where he becomes sick of it, and somehow came back around. Will, thanks for joining me.

Will: Hello. How are you?

Corey: It’s been a while, and so far, things are better now. It turns out that I don’t have—well, I was going to say I don’t have the same level of scrutiny around my social media usage that you do at large regulated finance companies anymore, but it turns out that when you basically spend your entire day shitposting about a $1.8 trillion company in the form of Amazon, oh, it turns out your tweets get an awful lot of scrutiny. Just, you know, not by the company that pays you.

Will: That’s very true. And you knew how to actually capitalize on that.

Corey: No, I sort of basically figured that one out by getting it wrong as I went from step to step to step. No, it was a wild and whirlwind time because I joined the company as employee 41. I was the first non-developer ops hire, which happens at startups a fair bit, and developers try to interview you and ask you a bunch of algorithm questions you don’t do very well at. And they say, “Well, I have no further questions. Do you?”

And of course, there’s nothing that says bad job interview like short job interview. “Yeah, just one. What are you actually working on in an ops context?” And we talked about, I think, migrating from EC2 Classic to VPC back in those days, and I started sketching on the whiteboard, “Let me guess it breaks here, here, and here.” And suddenly, there are three more people in the room watching me do the thing on the whiteboard.

Long story short, I get hired and things sort of progressed from there. The acquisition comes down and then, uh, it turns out we suddenly had this real pressing need for someone to do InfoSec on a full-time slash rigorous basis. Which is where you came in.

Will: That’s exactly where I came in. I came in a month after the acquisition, if I remember correctly. That was fun. I actually interviewed with you, didn’t I?

Corey: You did. You passed, clearly.

Will: I did pass. That’s pretty hard to pass.

Corey: It was fun, to be perfectly blunt. This is the whole problem with startup FinTech in some ways, where you’re dealing in regulated industries, but at what point do you start bringing security in as someone—where that becomes its own function? And how do you build that out? You can get surprisingly far without it, right up until you suddenly can’t. But for a startup in the finance space, your first breach can very much be something of a death knell for the company.

Will: That’s very true. And there’s no really good calculation on when you bring those security people in, which is probably the reason why—brace yourself—we’re talking about DevSecOps.

Corey: Oh, good. Let’s put more words into DevOps because that always goes well.

Will: Yeah. It does. It really does. I love it. You should look at my Twitter feed; I do make fun of it. But the thing is, it’s mostly about risk. And founders ought to know what that risk is, so maybe that’s the reason why they hired me because they felt like there’s existential risk around brand and reputation, which is the reason why I joined. But yeah, [sigh] fundamentally, the problem with that is that if you hire a security practitioner, especially the first one, it’s kind of like dating, in a way—

Corey: Oh, yes.

Will: If you don’t set them up correctly, then they’re doomed to fail, and there are plenty of complexities as a result. Imagine you’re a scrappy FinTech startup, you have a bunch of developers, they want to start writing code, they want to do big and great things, and all of a sudden security comes in and says, “Thou shalt not do the following things.” That’s where it fails. So, I think it’s part culture, part awareness from a founder perspective, part DevOps because, let’s face it, most of the stuff happens on the infra side. And that’s not to slam on anybody. And the list goes on.

Corey: Yeah. Something that I developed a keen appreciation for when I went into business for myself after that and started the Duckbill Group is that talking to attorneys was really the best way I found to frame it, because they’ve been doing this for 2000 years. It turns out InfoSec isn’t quite that old, although occasionally it feels like some of the practices are. Like, you know, password rotation every 30 days. I digress.

And lawyers will never tell you what to do, or at least anyone who’s been doing this for more than six months won’t. Instead, the answer to everything is, “It depends. Here are the risk factors to consider; here are the trade-offs.” My wife is a corporate attorney and I learned early on not to let her have any crack at my proposal documents in those days, because a proposal is fundamentally a sales document, but her point was, “Well, this exposes you to this risk, and this risk, and this risk, and this risk.” And it’s, “Yes, I’m aware of all of that. If I don’t know how to do what I do, I’m not going to be able to fulfill this. It’s not the contract; it’s the proposal, and worst case, I’ll give them their money back with an apology and life goes on.”

Because at that point, I was basically a tiny one-man band, and there was no real downside risk. Worst case, the entity gets sued into oblivion; I have to go get a real job again. Maybe Amazon’s hiring, I don’t know. And it’s sort of progressed from there. Taken to its logical conclusion, letting lawyers decide how everything is going to work becomes untenable, and it feels like InfoSec is something of the same story, where the InfoSec practitioners I’ve known would not be happy and satisfied until every computer was turned off, sunk into concrete, and then dropped into the Challenger Deep out in the Pacific.

Will: Yep. And that’s part of the issue is that InfoSec, generally speaking, hasn’t kept up with the modern practices, technologies, and advancements around even methodologies and culture. They’re still very much [unintelligible 00:06:32] approaching the information security conversation, militaristically speaking; everything is very much based on DOD standards. Therein lies the problem. And funny enough, you mentioned password rotation. I vividly remember we had that conversation. Do you remember that?

Corey: It does sound familiar. I’ve picked that fight so many times in so many different places. Yeah. My current thing that drives me up a wall is, in AWS’s IAM console, you get alerts for any IAM credentials older than 90 days and it’s not configurable. And it’s, yes, if I get a hold of someone’s IAM credentials, I’m going to be exploiting them within seconds.

And there are studies; you can prove this empirically. Turns out it’s super economical to mine Bitcoin in someone else’s cloud account. But the 90-day idea is just—all that does—the only good part of that, to me, is it enforces that you don’t have those credentials stashed somewhere where they become load-bearing and you don’t understand what’s going on in your infrastructure. But that’s not really the best-practice hill I would expect AWS to wind up staking out.
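For readers who want to see what that 90-day check actually looks for, here is a minimal sketch using boto3. It assumes standard AWS credentials are already configured locally, and the threshold is just the console’s fixed value, not a recommendation.

```python
# Minimal sketch: list IAM users whose active access keys are older than a
# given age, roughly what the console's non-configurable 90-day warning checks.
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90  # the console's fixed threshold; adjust for your own policy

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age} days old")
```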

Will: Precisely. And therein lies the problem: you have industry standards that really haven’t adopted the cloud mentality and methodologies. The 90-day rotation comes from the world of PCI as well as a few other frameworks out there. Yeah, I agree. It only takes a few seconds, and if somebody’s account—for example, in this case, an IAM account—has programmatic access, game over.

Yeah, they’re going to basically spin up a whole bunch of EC2 instances and start mining. And that’s the issue: you’re basically trying to bolt a very passé and archaic standard onto this fast-moving world of cloud. It just doesn’t work. So, things have gotten considerably better. I feel like our last conversation was, what, circa 2015, ’16?

Corey: Yeah. That was the year I left: 2016. And then it was all right, maybe this cloud thing has legs? Let’s find out.

Will: It does. It does. It actually really does. But it has gotten better and it has matured in dramatic ways, even on the cybersecurity side of the house. So, we’re no longer having to really argue our way through, “Why do we have to rotate passwords every 90 days?”

And I’ve been part of a few of these conversations with maybe the larger institutions to say, look, we have compensating controls—and I speak their language: ‘compensating controls’—you want to basically frame it that way and you want to basically try to rationalize why, technically speaking, that policy doesn’t make sense. And if it does, well, there is a better way to do it.

Corey: I feel very similarly about the idea of data being encrypted at rest in a cloud context. Yeah, in an old data center story—and this has happened—people will drive a pickup truck through the wall of the data center, haul a rack into the bed, and peel out of there. That’s not really a risk factor in a time of cloud, especially with things like S3, where it is pretty clear that your data does not all live in an easily accessible format in one facility. You’d have to grab multiple drives from different places and assemble it all together however it is they’re doing it—I presume—and great. I don’t actually need an encryption-at-rest story there. However, every compliance regime out there winds up demanding it, and it’s easier for me to just check the box and get the thing encrypted—which is super easy, and has no noticeable performance impact these days—than it is for me to sit here and have this argument with the auditor.

It’s one of the things I’ve learned that would arguably make me a way better employee than I was when we worked together is I’ve learned to pick my battles. Which fights do I really need to fight and which are, fine, whatever, click the ridiculous box. Life goes on.
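For context on just how low-effort “checking the box” can be, here is a minimal sketch of turning on default server-side encryption for an S3 bucket with boto3. The bucket name is a placeholder, and SSE-S3 (AES256) is only one of the available options.

```python
# Minimal sketch: enable default server-side encryption on an S3 bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```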

Will: Ah, the love of learning from mistakes. The basic model of learning.

Corey: Someday I aspire to learn from mistakes of others instead of my own. But, you know, baby steps.

Will: Exactly. And you know, what’s funny about it is that I just tweeted about this. EA had a data breach and apparently, their data breach was caused by a Slack conversation. Now, here’s my rebuttal. Why doesn’t the information security community come together and actually talk about those anti-patterns to learn from one another?

We all keep it in a very confidential mode. We lock it away, throw the keys away, and we never talk about why this thing happened. That’s one problem. But, yeah, going back to what you were talking about, yeah, it’s interesting. Choose your battles carefully, frankly speaking.

And I feel like there’s a lesson to be learned there—and I do experience this from time to time—is that, look, our hands are tied. We are basically in the world of relevance and we still have to make money. Some of these things don’t make sense. I wholeheartedly agree with my engineering counterparts where these things don’t make sense. For example, the encryption at rest.

Yeah, if you encrypt the EBS volume, does it really get you a whole lot? No. You have to encrypt the payload in order to secure the data you want to keep confidential, and that’s a massive lift. But we don’t ever talk about that. What we talk about, and how we basically optimize our conversations, at least in their current form, is: let’s harp on the compliance framework that doesn’t make sense.

But that compliance framework makes us the money. We have to generate revenue in order to remain employed, and we have to make sure that—let’s face it, we work in startups, at least I do—we can demonstrate at least some form of efficacy. This is the only thing that we have at our disposal right now. I wish that we would get to the world where we can, in fact, practice the true security practices that make a fundamental difference.

Corey: Absolutely. There’s a bunch of companies that would more or less look all the same on the floor of the RSA Expo—

Will: Yep.

Corey: —and you walk up and down and they’re selling what seems to be the same product, just different logos and different marketing taglines. Okay. And then AWS got into the game, where they offered a bunch of native tools that help around these things, like CloudTrail logs, et cetera, and then you have GuardDuty to wind up analyzing this, and Macie to analyze this—but that’s still [unintelligible 00:12:12]—and they have Detective on top of that, and Security Hub that ties it all together, and a few more. And then, because I’m a cloud economist, I wind up sitting here and doing the math out on this and yes, it does turn out the data breach would be cheaper. So, at what point do you stop hurling money into the InfoSec basket, on some level?

Because it’s similar to DR; it’s a bit of a white elephant you can throw any amount of money at and still get it wrong, as well as at some point you have now gone so far toward the security side of things that you have impaired usability for folks who are building things. Obviously, you need your data to be secure, but you also need that data to be useful.
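As a rough illustration of how these native services get switched on, here is a minimal sketch using boto3 that enables GuardDuty and Security Hub in a single account and region. A real deployment would normally go through an organization-wide, delegated-administrator setup, and the cost question raised above still applies afterward.

```python
# Minimal sketch: turn on GuardDuty and Security Hub in one account/region.
import boto3

guardduty = boto3.client("guardduty")
securityhub = boto3.client("securityhub")

# GuardDuty needs a detector per region; create one and enable it.
detector = guardduty.create_detector(Enable=True)
print("GuardDuty detector:", detector["DetectorId"])

# Security Hub aggregates findings; enable it with the default standards.
securityhub.enable_security_hub(EnableDefaultStandards=True)
print("Security Hub enabled with default standards")
```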

Will: Yep. The short answer to that is, I would like to find anybody who can give you a straight answer on that one. There is no [unintelligible 00:13:00] to any of this. You cannot basically say, “This is the point to stop,” if you will, from an expenditure perspective.

The fundamental difference right now is we’re trying to basically cross that chasm. Security has traditionally been in a silo. It hasn’t worked out really well. I think that security really needs to buck up and collaborate. It cannot basically remain in a control function, which is where we are right now.

A lot of security practitioners have the belief that they are the masters of everything and no one else is right. That fundamentally needs to stop. Then we can have conversations around when we can basically stop the expenditure on security. I think that’s where we are right now. Right now, it still feels very much disparate, in a not-so-good way.

It has gotten better, I think; the companies in the Valley are really trying to basically figure out how to do this correctly. I would say the larger organizations are still not there. And I want to really, sort of, sit on the sideline and watch the digital transformation thing happen. One of the larger institutions just announced that they’re going to go with AWS Cloud; I think you know who I’m talking about.

Corey: I do indeed.

Will: Yeah. [laugh]. So, I’m waiting to see what’s going to happen out of that. I think that a lot of their security practitioners are up for a moment of wake-up. [laugh].

Corey: They really are. And moving to cloud has been a fascinating case study in this. Back in 2012, when I was working in FinTech, we were doing a fair bit of work on AWS, so we did a deal with a large financial partner. And their response was, “So okay, what data centers are you using?” “Oh, yeah, we’re hosting in AWS.”

And their response was, “No, you’re not. Where are you hosting?” “Okay, then.” I checked recently and sure enough, that financial partner now is all-in on Cloud. Great. So, I said—when one of these deals was announced—that large finance companies are one of the bellwether institutions, that when they wind up publicly admitting that they can go all-in on cloud or use a cloud provider, that is a signal to a lot of companies that are no longer even finance-adjacent, but folks who look at that and say, “Okay, cloud is probably safe.”

Because when someone says, “Oh, our data is too sensitive to live on the cloud.” “Really? Because your government uses it, your tax authority uses it, your bank uses it, your insurance underwriter uses it, and your auditor uses it. So, what makes your data so much more special than that?” And there aren’t usually a lot of great answers other than just curmudgeonly stubbornness, which, hey, I’m as guilty of as anyone else.

Will: Well, I mean, there’s a bunch of risk people sitting there and trying to quantify what the risk is. That’s part of the issue: you have your business people who may actually be embracing it—and your technologists, frankly speaking—but then you have the entire risk arm, who is potentially reading some white paper and concluding that the cloud is insecure. I always challenge that.

Corey: Yeah, it’s who funded this paper, what are they trying to sell? Because no one says that without a vested interest.

Will: Well, I mean, there’s a bunch of server manufacturers that are going to be left out of the conversation.

Corey: A recurring pattern is that a big company will acquire a startup of some sort, and say, “Okay, so you’re on the cloud.” And they’ll view that through a lens of, “Well, obviously of course you’re on the cloud. You’re a startup; you can’t afford to do a data center build-out, but don’t worry. We’re here now. We can now finance the CapEx build-out.”

And they’re surprised to see pushback, because the thing that they miss is that it was not an economic decision that drove companies to cloud. If it started off that way, it very quickly stopped being that way. It’s a capability story: if I need to suddenly scale up an entire clone of the production environment to run a few tests and then shut it down, it doesn’t take me eight weeks and a whole bunch of arguing with procurement to get that. It takes me changing an argument to, ideally, a command line, or doing some pull request or something like that that does this all programmatically, waiting a few minutes, and then testing it there. And—this is the part everyone forgets, the cloud economics side—then turning it back off again so you don’t pay for it in perpetuity.

It really does offer a tremendous boost in terms of infrastructure, in terms of productivity, in terms of capability stories. So, “we’re going to move back to a data center now that you’ve been acquired” has never been a really viable strategy in many respects. For starters, a bunch of your engineers are not going to be super happy with that, and are going to take their extremely hard-to-find skill sets elsewhere as soon as that becomes a threat to what they’re doing.
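As a sketch of that spin-up, test, tear-down workflow, here is one hypothetical way to do it with boto3 and CloudFormation. The template file and stack name are illustrative placeholders, not anything from the conversation.

```python
# Minimal sketch: stand up a throwaway copy of an environment from a template,
# run tests against it, then delete it so it stops costing money.
import boto3

cfn = boto3.client("cloudformation")

with open("prod-clone.yaml") as f:  # hypothetical template describing the environment
    template_body = f.read()

cfn.create_stack(StackName="test-clone", TemplateBody=template_body)
cfn.get_waiter("stack_create_complete").wait(StackName="test-clone")

# ... run the tests against the clone here ...

cfn.delete_stack(StackName="test-clone")
cfn.get_waiter("stack_delete_complete").wait(StackName="test-clone")
```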

Will: Precisely. I have seen that pattern. And the second part to that pattern, [laugh] which is very interesting, is trying to figure out the compromise between cloud and on-prem. Meaning that you’re going to try to bolt your on-prem solutions onto the cloud solution, which equally doesn’t work—if anything, it makes it even worse. So, you end up with this quasi-hybrid model of sorts, and that doesn’t work. So, it’s all-in or nothing. Like I said, we’ve gotten to the point where the realization is that cloud is the way to do it.

Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the world’s most popular open source database, shifting from transactional to analytics required way too much overhead and, you know, work. With HeatWave you can run your OLTP and OLAP—don’t ask me to ever say those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.

Corey: For the most part, yes. There are occasional use cases where not being in cloud, or not being in a particular cloud, absolutely makes sense. And when companies come to me and tell me that this is their perspective and that’s why they do it, my default response is, “You’re probably right.” When I talk about these things, I’m speaking about the general case. But companies have usually put actual strategic thought into these things.

There’s some merit behind that, and some context and constraints that I’m missing. It’s the old Chesterton’s Fence story, which is a logic tool that says: okay, if you come to a fence in the middle of nowhere, the naive person says, “Oh, I’m going to remove this fence because it’s useless.” The smarter approach is, “Why is there a fence here? I should probably understand that before I take it down.” It’s one of those things about trying to make sure that you understand the constraints and the various strategic objectives that lend themselves to doing things in certain ways.

I think that nuance gets lost, particularly in mass media, where people want these nuanced observations somehow distilled down into something that fits in a tweet. And that’s hard to do.

Will: Yep. How many characters are we talking about now? 280.

Corey: 280 now, but you can also say a lot with gifs. So, that helps.

Will: Exactly, yeah. A hundred percent.

Corey: So, in your career, you’ve been in a lot of different places. Before you came over and did a lot of the financially regulated stuff, you were at Omada Health, where you were focused on the healthcare-regulated side of things. These days, you’re headed in a bit of a different direction, but what have you noticed that, I guess, keeps dragging you into various forms of regulated entities? Are those generally the companies that admit that, while still in the startup stage, they actually need someone to focus on security? Or is there more to it that draws you in?

Will: Yeah, I know. There are probably several different personas to every company that’s out there. You have your engineering-oriented companies that are wildly unregulated—I’m talking about, say, your autonomous vehicle companies, which have no regulations to follow; they have to figure it out on their own. Then you have your companies that are in highly regulated industries like healthcare and the financial industry, et cetera. I have found that my particular experience is more applicable to the latter, not the former.

I think when you basically end up in companies that are trying to figure it out, it’s more about engineering, less about regulations or frameworks, et cetera. So, for me, it’s been a blend between compliance and security and engineering. And that’s where I thrive. That doesn’t mean that I don’t know what I’m doing; it just means that I’m probably more effective in healthcare and FinTech. But I will say—you know, this is an interesting part—what used to take months to implement is now considerably shorter from an implementation timeline perspective.

And that’s the good news. So, you have more opportunities in healthcare and FinTech. You can do it nimbly; you can do things that you generally would have had to spend massive amounts of money and capital to implement. And it has gotten better. I find that, you know, I struggle less now, even in the AWS stack, trying to basically implement something that gets us close to what is required, at least from a bare-minimum perspective.

And by the way, the bare minimum is compliance.

Corey: Yes.

Will: That’s where it starts, but it doesn’t end there.

Corey: A lot of security folks start off thinking that, “Oh, it’s all about red team and pentesting and the rest, and no, no, an awful lot of InfoSec is in fact compliance.” It’s not just, do the right thing, but how do you demonstrate you’re doing the right thing? And that is not for everyone.

Will: I would caution anybody who wants to get into security to first consider how many different colors there are to the rainbow on the security side of the house, and then figure out what they really want to do. But there is a misconception around that: when you say security, often, to your point, people kind of default to, “Oh, it’s red teaming,” or, “It’s basically trying to break things, or zero-days.” Those happen seldom, although it seems they’re happening far more often than they should.

Corey: They just have better marketing now.

Will: Yeah. [laugh].

Corey: They get names and websites and a marketing campaign. And who knows, probably a Google Ad buy somewhere.

Will: Yep, exactly. So, you have to start with compliance. I also would caution my DevOps and my engineering counterparts and colleagues to maybe rethink the approach. When you approach a practitioner from the security side, it’s not all about compliance, and if you say to them, “Well, you only do compliance,” they may laugh at you. Think of it as all-inclusive.

It is compliance mixed with security, but in order for us to be able to demonstrate success, we have to start somewhere, and that’s where compliance is—that’s the starting point. That becomes, sort of, your north star from a reference perspective. Then you figure out, okay, how do we up our game? How do we refine this thing that we just implemented? So, it becomes evolving; it becomes a living entity within the company. That’s how I usually approach it.

Corey: I think that’s the only sensible way to go about these things. We’ve gone from a company of one to, at the time of this recording, I believe nine people—but don’t quote me on that; I don’t want to count noses. One of the watershed moments for us was when we started hiring people who—gasp, shock—did not have backgrounds as engineers themselves. It turns out that you can’t generally run most companies with only people who have spent the last 15 years staring at computers. Who knew? And it’s a different mindset; it’s a different approach to these things.

And because, again, it’s that same tension: you don’t want to be the Department of No. You don’t want to make it difficult for people to do their jobs. There’s some low-bar stuff, such as you don’t want people using a password of ‘kitty’ everywhere and then having it on a post-it note on the back of their laptop in an airport lounge, but you also don’t want them to have to sit there and go through years of InfoSec training to make this stuff make sense. So, building up processes like we have here, like security awareness training—about half of it is garbage; I’ve got to be perfectly honest. It doesn’t apply to how any of us do business. It has a whole bunch of stuff that presupposes that we have an office. We don’t. We’re full remote with no plans to change that. And it’s a lot of, frankly, terrible advice, like, “Never click a link in email.” It’s, yeah, in theory, that makes sense from a security perspective, but have you met humans?

Will: Yeah, exactly.

Corey: It’s this understanding of what you want to be doing idealistically versus what you can do with people trying to get jobs done because they are hired to serve a purpose for the company that is not security. “Security is everyone’s job,” is a great slogan and I understand where it’s going, but it’s not realistic.

Will: Nope, it’s not. It’s funny you mentioned that. I’m going through a similar experience from a security awareness training perspective, and I have been cycling through several vendors—one prominent one that has a Chief Hacking Officer of sorts—and amazingly enough, their content is so very badly written and so very badly optimized, built on the assumption that we’re still in this world of going to an office, or doing things that don’t make sense. “Don’t click the link?” You’re right. Who doesn’t click the link? [laugh].

Corey: Right. Oh, yeah. It’s a constant ongoing thing where you continually keep running into folks who just don’t get it, on some level. We all have that security practitioner friend who only ever sends you email that is GPG encrypted. And what do they say in those emails?

I don’t know. Who has the time to sit there and decrypt it? I’m not running anything that requires disclosure. I just don’t understand the mindset behind some of these things. The folks living off the grid as best they can, who don’t participate in society, who never have a smartphone, et cetera, et cetera. Having seen some of the things I’ve seen, I get it, but at some point, it’s one of those… you don’t have to like it, but accepting that we live in a society sort of becomes non-optional.

Will: Exactly. Therein lies the issue with security: you have your wonks who are overly paranoid; they’re effectively like your talented engineer types: they know what they’re talking about, and obviously, they use open-source projects like GPG, et cetera. And that’s all great, but they don’t necessarily fit into the contemporary context of the business world, and they’re seen as outliers who are relied on to do things that aren’t part of the normal day-to-day business operations. Then you have your folks who are just getting into it, and they’re reading their CISSP guides, and they’re saying, “This is the way we do things.” And then you have people who are basically trying to cross that chasm in between. [laugh].

And that’s where security is right now. It’s a cornucopia of different personalities, et cetera. It is getting better, but what we all have to collectively realize is that it is not perfect. To your point, there is no one true way of practicing security. It’s all based on how the business perceives security and what their needs are, first and foremost, and then trying to map the generalities of security into that business context.

Corey: That’s always the hardest part: so many engineering-focused solutions don’t take business context into account. I feel very aligned with this from the cost perspective. The reason I picked cost instead of something like security—because, frankly, I could be doing basically what I’m doing now with a different positioning of, “Oh, I will come in and absolutely clear up the mistakes you have made in your IAM policies.” And, “Oh, we haven’t made any mistakes in our IAM policies.” Have you ever met someone for whom not only is that true, but who is also confident enough to say it? Because, “Great. We’ll do an audit. You want to bet? If we don’t find anything, we’ll give you a refund.” [laugh]. And it’s fun, but are people going to call you with that in the middle of the night and wake you up? The cloud economics thing is strictly a business-hours problem.

Will: Yeah, yeah. It’s funny that you mention that. So, somebody makes a mistake in that IAM cloud policy. They say, “Everybody gets admin.” Next thing you know, yes, that ends up causing an auth event; you have a bunch of EC2 instances that were basically spun up by some bad actor, and now you have a $1 million bill that you have to pay.
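To make the “everybody gets admin” mistake concrete, here is an illustrative sketch of the two extremes as IAM policy documents created via boto3. Both policy documents and the names are hypothetical examples, not recommendations.

```python
# Illustrative sketch: an "everybody gets admin" policy versus a scoped-down one.
import json

import boto3

iam = boto3.client("iam")

# The mistake: full admin on everything.
admin_everything = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# A narrower grant: read-only access to one hypothetical bucket.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="scoped-read-only",  # hypothetical policy name
    PolicyDocument=json.dumps(scoped_policy),
)
```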

Corey: Right. And you can get adjustments to your bill by talking to AWS support and bending the knee. And you’re going to have to get yelled at, and they will make you clean up your security policies—which you really should shut down anyway—and that’s the end of it. For the most part.

Will: I remember I spun up Macie when it had just come out.

Corey: Oh, no.

Will: Oh, yeah.

Corey: That was $5 per gigabyte of data ingested, which is right around the breakeven point of hiring a bunch of college interns to do it by hand instead.

Will: Yeah, I remember the experience. It ended up costing $24,000 in a span of 24 hours.

Corey: Yep.

Will: [laugh].

Corey: And it was one of the most blindsidingly obvious things, to the point where they wound up releasing something like a 90% price cut with the second generation of billing. And the billing’s still not great on something like that. I was working with a client when that came out, and their account manager immediately starts pushing it to them, and they turn to me almost in unison: “Should we do it?” Good. We have them trained well. And I say, “Hang on”—envelope math—“Great. Running this on the data you have in S3 right now would cost, for the first month, $76 million, so I vote we go with Option B, which is literally anything that isn’t that, up to and including funding our own startup that will do this ourselves, have them go through your data, then declare failure on Medium with a slash-success post of ‘our incredible journey has come to an end; here’s what’s next.’ And then you pocket the difference and use it for something good.”

And then—this is at the table with the AWS account manager. Their response: “So, you’re saying we have a pricing problem with Macie?” And it’s like, “Well, whether it’s a problem or not really depends on which side of that transaction [laugh] you’re on, but I will say I’ll never use the thing.” And only four short years later, they fixed the pricing model.
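For anyone who wants to check the envelope math, here is the arithmetic behind those figures, assuming Macie’s original launch pricing of roughly $5 per gigabyte classified. The implied data volumes are derived from the dollar amounts quoted in the conversation, not from any real account.

```python
# Back-of-the-envelope math for the numbers quoted above.
PRICE_PER_GB = 5.0  # original Macie pricing, dollars per GB classified

first_month_bill = 76_000_000          # the $76M estimate for Corey's client
client_data_gb = first_month_bill / PRICE_PER_GB
print(f"Implied S3 footprint: {client_data_gb:,.0f} GB (~{client_data_gb / 1e6:.1f} PB)")

surprise_bill = 24_000                 # Will's $24k-in-24-hours experience
will_data_gb = surprise_bill / PRICE_PER_GB
print(f"Implied data scanned: {will_data_gb:,.0f} GB (~{will_data_gb / 1e3:.1f} TB)")
```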

Will: Finally. And that was the problem: you want to do good; you end up doing bad as a result. And that was my learning experience. And then I had to obviously talk to them and beg, borrow, and steal and try to explain to them why I made that mistake. [laugh]. And then finally, you know [crosstalk 00:29:52]—

Corey: Oh, yeah. It’s rare that you can make an honest, well-intentioned mistake and not get that taken care of. But that is not broadly well known. And they of course can’t make guarantees around it because as soon as you do that you’re going to open the door for all kinds of bad actors. But it’s something where, this is the whole problem with their billing model is they have made it feel dangerous to experiment with it. “Oh, you just released a new service. I’m not going to play with that yet.”

Not because you don’t trust the service and not because you don’t trust the results you’re going to get from it, but because there’s this haunting fear of a bill surprise. And after you’ve gone through that once or twice, the scars stick with you.

Will: Yep. PTSD. I actually learned from that mistake—and let’s face it, it was a mistake, and you learn from that. And I feel like I sort of homed in on the fact that I need to pay attention to your Twitter feed because you talk about this stuff. And that was really, like, the first and last mistake that I made with the AWS service stack.

Corey: Following my Twitter feed? Yeah, that’s the first and last mistake a lot of people make.

Will: Oh, I mean, it was—that, too, but you know, that’s a good mistake to make. [laugh]. But yeah, it was really enlightening in a good way. And actually—you know what’s funny about it—if you start with an AWS service that has just been released, be cautious and be very calculated around what you’re implementing and how you’re implementing it. And I’ll give you one example: AWS Shield.

Corey: Oh, yeah. The free version or the $3,000 per month with a one-year commitment?

Will: [unintelligible 00:31:15] version. Yeah, you start there, and then you quickly realize the web application firewall rules, et cetera, just aren’t there yet. And that needs to be refined. But would I pay $3,000 for AWS Shield Advanced or for something else? I’d probably go with something else.

Therein lies the issue: AWS is very quick to release new features and to corner that market, but—at least in the current form, from a security perspective—when you look at those services, they’re just not fast enough to refine them. And there is, maybe, an issue with that, at least from my experience. I would want them to pay a little more attention, not so much to your developers, but to your security practitioners, because they know what they’re looking for. But AWS is nowhere to be found on that side of the house.

Corey: Yeah. It’s a hard problem. And I’m not entirely sure the best way to solve for it, yet.

Will: Yeah, yeah. And that goes back to my comment about how we’re crossing that chasm right now… We’re just not there yet.

Corey: Yeah. One of these days. If people want to hear more about what you’re up to and how you view these things, where can they find you?

Will: Twitter.

Corey: Always a good decision. What’s your username? And we will, of course, throw a link to it in the [show notes 00:32:33].

Will: Yeah, @willgregorian. Don’t go to LinkedIn. [laugh].

Corey: No. No one likes—LinkedIn is trying to be a social network, but it’s not anywhere near getting there. Thank you so much for taking the time to basically reminisce with me, if nothing else.

Will: This was awesome.

Corey: Really was. Will Gregorian, head of information security at Color Health. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an ignorant comment telling me why I’m wrong about rotating passwords every 60 days.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
