Battling Back Against Data Breaches with Maya Levine
Maya Levine: The challenge is that all of these individual logs by themselves can often be, you know, just typical cloud operations. But if you can add some kind of logic on top of it, where it's looking at, okay, now this is atypical. You know, list S3 buckets, that's a normal call, but a hundred of them?
Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Sysdig, where Maya Levine is a product manager. Maya, thank you for joining me.
Maya Levine: Thanks so much for having me, Corey.
Sponsor: Sysdig secures cloud innovation with the power of runtime insights. From prevention to defense, Sysdig prioritizes the risks that matter most.
Secure Every Second with Sysdig. Learn more at Sysdig, S-Y-S-D-I-G, dot-com.
Our thanks as well to Sysdig for sponsoring this ridiculous podcast.
Corey Quinn: So, let's start at the very top. Product management means an awful lot of things to an awful lot of different companies.
Where do you start and stop?
Maya Levine: For me, product management is all about understanding what your pain points are. When I'm thinking of customers, what is hard for them? What are the problems that they really need help solving? And obviously, Sysdig is looking at that from a cloud and container security point of view.
Corey Quinn: And what have you found? What is the painful part about, I guess, cloud security, other than, to be unflattering, all of it?
Maya Levine: Yeah, I was gonna say, where do I start?
Corey Quinn: Yeah, it's like, it seems like a target rich environment.
Maya Levine: There's a lot of challenges. One thing that has come up a lot in this past year is just how quickly attackers are able to execute their attacks.
And we found that the average cloud attack takes about 10 minutes. And when we think about that, it becomes painfully obvious that we need some kind of cloud detection and response, or CDR, as, you know, this industry is so fond of its acronyms.
Corey Quinn: We do love our acronyms dearly. It's one of the things we won't get away from.
Maya Levine: Yes, too many.
Corey Quinn: Do you find the speed of attack increasing is in part, I guess, due to the fact that it is cloud, where there's a consistent set of standards of how things are deployed? If you have an S3 bucket, getting access to that looks an awful lot the same as getting access to another company's S3 bucket.
Whereas back in the days of data centers, where everyone ran their own bespoke little unicorn, it took a lot more time and was less automation-friendly, I guess, is probably the best way to frame that. Or do you think there's something else to it?
Maya Levine: At first, maybe, we saw that cloud attacks were harder for attackers because the complexity of cloud systems, I think, is harder to understand.
But what we've seen in recent times is that attackers are becoming more well versed in cloud native technologies, and they're embracing the same things that we are when we are adopting cloud, right? Ease of deployment and automation services and all of those things are being utilized and utilized well in different cloud breaches.
Corey Quinn: What I find interesting in some of the areas that you personally have been focusing on is the idea of identity. One of my frequent talking points that I trot out from time to time is that the internet long ago took a collective vote and decided that our email inboxes were the cornerstone of our online identities.
Get access to someone's inbox and for most purposes you can become them on the internet. How has that manifested in a more infrastructure-centric way when it comes to cloud? Because an awful lot of folks are talking about identity these days. What is, I guess, the impact of that on CDR, as you put it?
Maya Levine: The impact is that credentials are usually, from what we're seeing, the initial access point for attackers. And I think that we can't always control how attackers make their way into our environment. Where CDR is helpful is that we can be notified once they're in there and, you know, respond effectively.
But going back to the identities and the credentials, I always like to make the analogy of a key left under the mat. If I'm a robber and I'm, you know, scoping out homes, the first place I'm going to look is these known places where people are leaving their keys. And when it comes to secrets harvesting, attackers know where to look.
And this can be things like serverless function code or IaC software. These files often contain credentials or secrets or other sensitive information, and they're often overlooked.
Corey Quinn: Yeah, I've found a number of things doing suspicious work, some of them in production products, which is never terrific, where they would just scan your local dev environment, find any credentials that were being used to access AWS in this case, and then just silently send them up to the company's server so they could take action on your behalf.
Now, doing that with permission is one thing, but when people were surprised to discover this, the company never really had the traction it did before and hasn't been heard from since. There's a certain approach to, I guess, consent of customers. But there's also the sense, looking at most laptop or desktop environments that I've worked in,
that over time I start to see sloppiness, where I've stashed credentials in dot files all over the place because that's where they have to be; there's no better way for a lot of systems. I like to ideally hope that people clean up their hygiene once things start moving into production a little bit more, but the constant drumbeat of breaches seems to suggest otherwise.
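As an aside, the dot-file sloppiness being described is easy to check for yourself. Here's a minimal, illustrative sketch that greps the dot files in a home directory for AWS-style access key IDs; the regex covers only the classic AKIA key format, so treat it as a starting point, not a real secrets scanner:

```python
import re
from pathlib import Path

# AWS access key IDs are "AKIA" followed by 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return any strings in `text` that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

def scan_dotfiles(home: Path) -> dict[str, list[str]]:
    """Scan top-level dot files under `home` for key-shaped strings."""
    hits: dict[str, list[str]] = {}
    for path in home.glob(".*"):
        if path.is_file():
            try:
                matches = find_leaked_keys(path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file; skip rather than crash
            if matches:
                hits[path.name] = matches
    return hits
```

Running `scan_dotfiles(Path.home())` would surface exactly the kind of stash Corey is describing, though real tools also look for session tokens, secret keys, and other credential formats.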
Maya Levine: Yeah, I mean, SaaS applications are also a huge attack surface. We see credentials being left everywhere, from repos to AD to Slack. You know, I really do think that attackers are better than we are at secrets management, and the reason why is that the motivation is different. For them, getting access to a really privileged, long-term credential can be their golden ticket to get lots of money and execute their attack successfully.
And for defenders, this usually isn't the highest-priority item.
Corey Quinn: I have to give customers some credit here, because whenever I'm puttering around and building code badly, as a general rule, my options are: I can either embed the credential "temporarily" in the code, or I can go half an hour out of my way, bring in extra libraries, and do the responsible handling of credentials.
When I don't even know if the thing is going to work or not, the temptation to take the shortcut is terrific. Getting to a point of being able to stop doing that requires significant scaffolding on the development-experience side, and I can't necessarily blame people who are under time pressure for bypassing it.
Maya Levine: That's the constant pull between development and security that we often see, right? It's between the quickness with which we can deploy amazing features to our customers versus, you know, securing and locking things down and making sure it's done in a safe way. And I don't think one or the other is correct, right?
There needs to be a balance between the two. Think of something like employing a secrets management system. This just reduces your likelihood of credential leaks. If you can keep your keys and your credentials in a centralized location and provide an API to dynamically retrieve them, it'll reduce the likelihood that your credentials are going to be inadvertently left in files.
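A sketch of what "provide an API to dynamically retrieve them" can look like, assuming an AWS Secrets Manager-style client; the client is injected and the secret name is illustrative, so nothing here is tied to a real account:

```python
import json

def fetch_credentials(secrets_client, secret_id: str) -> dict:
    """Pull a JSON-encoded secret at runtime instead of baking it into files.

    `secrets_client` can be anything exposing get_secret_value(SecretId=...),
    e.g. boto3.client("secretsmanager"); it is passed in so this sketch stays
    testable without AWS access.
    """
    response = secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])
```

In an application you would call something like `fetch_credentials(boto3.client("secretsmanager"), "prod/db")` at startup, so rotation happens centrally and no long-lived key ever lands in a dot file.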
Corey Quinn: And on some level, I keep waiting for some IDE to come up with a simple drop-in replacement for this that's transparent, does all the correct things on the back end, but developers don't have to worry about it. Instead, it seems like developers aren't worrying about it. These things make their way into production, and that's where what you do seems to come into play.
Maya Levine: Yeah. And again, there are two aspects that I think security practitioners should be thinking about. The first is on the prevention side. This is where the identity hygiene and the misconfiguration part is going to be helpful. You know, how can I lock things down and harden and just prevent attacks from happening in the first place?
But it's not complete without the runtime security side. And this runtime security piece should be able to detect in real time. Because of how quickly we're seeing cloud attacks happen, it's not enough to be notified about weird malicious behavior an hour after it happened. You know, if an attack takes 10 minutes to execute, you can see why that's too late.
Corey Quinn: Even if you're catching these things relatively quickly, there's always the big question of how long it has been going on. What have they had access to? What in the environment can I trust versus what should be considered compromised? And I guess every time I've been looped in on the early stages of a breach, there's been this massive confused fog over everything where no one really knows what's going on.
People are making wild hypotheses and throwing them around to see what sticks, people in some cases are overreacting, misinformation runs rampant, et cetera. It always tends to feel like, regardless of how mature the processes are, there's a bit of a sudden surprise fire drill. Everyone's running around screaming for help.
Maya Levine: Totally. And I honestly recommend actually doing fire drills, you know, putting your systems under stress and seeing how things happen in a simulated way, so when things really do happen, at least you've kind of practiced it a couple of times. But yeah, I think it is a challenge.
It's hard to understand the scope, and that's where part of what you need in a cloud detection and response solution is actually the ability to see things after they've crossed a detection boundary. So what do I mean by that? Sysdig's Threat Research Team has actually observed attacks where the threat actors moved laterally from an AWS environment to an EC2 compute instance to an on-premises server.
And typically when we think of lateral movement, we think of going from one account to another. And the reason why this type of lateral movement is a challenge for defenders is that once an attacker moves from AWS into EC2, CloudTrail no longer provides any information about what the attacker is doing.
To see what the attacker is doing on the EC2 instance, you need, you know, security at runtime on that workload. And so the challenge here is that you need to be able to see these logs, see these detections occurring across all of these different areas, and ideally have a solution that can tie them together and show, you know, this is what happened in CloudTrail, and then this is what happened at runtime on the compute.
And the challenge is being able to correlate those actions and paint the picture of the attack.
Corey Quinn: Do you find that, I guess, modern cloud technologies, Kubernetes being one example of this, are leading to novel forms of breaches, or is it effectively still the same things we used to see in the old days of three-tier web apps running on mobile devices?
Maya Levine: We can't say it's the same. I think it depends on what you're talking about. Attackers' goals are usually financial. That's not going to change unless you're talking about, you know, espionage, which is a whole different thing. But most attackers are after your money, or after money in general, and they're just finding new ways to get money in the cloud.
It's more often going to be, you know, cryptominers, or we're still seeing ransomware happening in the cloud as well as on premises. So I think, yeah, the motives are the same, but the techniques are different.
Corey Quinn: Even along those lines, you folks recently had a blog post coining the term LLMjacking, where people would grab credentials for something like OpenAI or whatnot, and then use those credentials either for malicious purposes or to not have to pay for it themselves.
Can you give any color on what that looks like? It would not have occurred to me that that would be something people would use directly, since I guess I'm stuck in the old world where, yeah, I'm going to basically capture access to a bunch of compute resources and use them to mine cryptocurrency, which is super economical in someone else's account.
But I hadn't made the leap yet to using LLMs directly as a revenue generator.
Maya Levine: Yeah, it was an interesting attack. So we saw them gain access with stolen credentials, again, the initial access point, and start targeting specific LLM models that were hosted by cloud providers, like Anthropic's Claude. They were using scripts to check credentials against a bunch of different AI services, like OpenAI and Bedrock,
checking the capabilities and quotas of these stolen credentials without triggering any alarms, and using kind of a reverse proxy to basically manage and then sell access to these compromised LLM accounts.
Corey Quinn: Once these companies wind up being compromised on an LLM key, how long does the credential remain good for?
Because it always struck me as: you start getting access to something and you blow the bill to the stratosphere, people find out relatively quickly and turn it off. Ideally, that gets noticed within minutes or hours; in practice, days or a week or two. The way you're talking about this sounds almost like it's a more persistent, longer-term breach.
Maya Levine: You know, anyone who discovers this and shuts it down kind of eliminates that for themselves. But yeah, we were seeing that the attackers would disrupt legitimate LLM usage by maximizing quota limits and changing things in the settings, so maybe it doesn't trigger as many alarms as usual. There was potential for significant financial damage here, something like $46,000 a day.
So that's something that hopefully your billing would notify you about for sure. But even just one day, that can be a lot of money for an organization.
Corey Quinn: Weird confession here: I started noticing that with an OpenAI key of mine two months ago. I was using it for a system to generate internal placeholder text.
And it went from costing me about $10 to $15 a month or so to starting to see it cost me basically $8 to $10 every three days. It was increased usage, and I never bothered to track it down and figure out: okay, is this just due to a weird logic bug, am I seeing a lot more throughput through that system, or is something external doing it?
Because it's an OpenAI key, there's no auditability that I'm able to discover to figure out, all right, what are those queries, what is being used, what time of day, and correlate that. So I shrugged, wound up turning the key off, and didn't have to worry about it again. But it does make me wonder, now that you're saying that.
Maya Levine: With any new technology and anything new that's kind of buzzing in the industry, we can expect attackers to also be into the buzz. And so, especially for AI, security for AI, I say, starts with just a visibility aspect. Where do you have AI packages? Where do you have AI deployed in your environment? Are there any shadow deployments, you know, that developers spun up somewhere that you aren't aware of?
And then layer on top of that: what risks do these workloads have, right? Just typical risks. You know, are they misconfigured in some egregious way? Are they, you know...
Corey Quinn: The only place these credentials lived was inside of AWS Secrets Manager. So if that got popped, we all have different things to worry about.
But now I'm wondering, okay, where else would they have been picked up? Now, of course, thinking it through a bit more logically, the reason I generally dismissed it was that if this had been a breach, I have a sneaking suspicion that seven bucks every three days would not have been the sign of compromise.
It also is directionally in line, at least with the right order of magnitude, with traditional usage, which is why I was mostly okay letting it slide. But now I really am starting to wonder. I may have to dig into the CloudTrail logs.
Maya Levine: I mean, it's cool that you have these CloudTrail logs to look at, right? Some kind of audit and logging system to be able to go back to.
That's a problem with cloud. Often you're dealing with resources that may not even exist anymore, right? They're very dynamic. And so you need to have some kind of ability to look back and try to understand what happened.
Sponsor: In the cloud, every second counts. Sysdig stops cloud attacks in real time by instantly detecting changes in risk with runtime insights and open source Falco. We correlate signals across workloads, identities, and services to uncover hidden attack paths and prioritize the risks that matter most.
Discover more at Sysdig, S-Y-S-D-I-G, dot-com.
Corey Quinn: Audit logging is incredibly expensive at scale, but not having it can be even more expensive.
It's one of those areas where it's a complete waste of money until suddenly it's very much not, and that one incident pays for everything you've ever spent on it and then some. But it's sometimes difficult to get leaders on board with thinking through these responsibilities. That's part of the reason I believe that CISO is a C-level position.
It's not just some director of security somewhere. It has to be fought for on some level in boardrooms.
Maya Levine: A hundred percent. And I think there can be an element of prioritization here. We witnessed an attack recently this year where we saw an attacker, you know, get access into the environment and then check
to see if an S3 bucket had versioning enabled, which basically allows customers to easily restore data if something happens to it on that S3 bucket. And it was enabled, so they disabled it and then deleted all the data, exfiltrated it, the whole ransomware bit. And so, having CloudTrail on data events, like exfiltrating data, can be really, really expensive, but maybe it's worth prioritizing it for certain data storage where you know you have really critical data being stored:
customer data, HIPAA data, whatever it is.
Corey Quinn: And then ideally, defense in depth: we have an SCP that disables the removal of versioning policies, et cetera. There are ways to do this, but they're all very obvious after the fact, when you really should have thought about it beforehand. Because it seems to me that most people's understanding of attack vectors has not matured to the level of understanding that, no, no, no, not only are some of these attackers as good at cloud as you are, many of them are markedly better.
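For the SCP idea, here's a sketch of what such a guardrail policy could look like, expressed as the policy document itself. The `s3:PutBucketVersioning` action is AWS's documented call for changing versioning state; the Sid and the blanket `Resource: "*"` are illustrative. Note the trade-off: denying the action outright also blocks re-enabling versioning, so in practice you would scope the resource to your critical buckets.

```python
import json

# Hypothetical service control policy: deny any change to bucket versioning,
# which is the call an attacker would use to suspend it before deleting data.
DENY_VERSIONING_CHANGES = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyVersioningSuspension",
            "Effect": "Deny",
            "Action": "s3:PutBucketVersioning",
            "Resource": "*",
        }
    ],
}

print(json.dumps(DENY_VERSIONING_CHANGES, indent=2))
```

Attached at the organization level, this makes the "disable versioning, then delete everything" step of the attack Maya describes fail with an access-denied error, even for otherwise privileged credentials.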
Maya Levine: Yeah, I mean, it depends on the company, right? Some companies are completely born in cloud, cloud native, they live and breathe that, and a lot of companies are coming to us with old mindsets, right? There's no longer the same perimeter to secure. It's different in the cloud. And it is really difficult, actually, to understand the complexity of all of the systems involved, right?
We're asking people to be so in-depth with so many different technologies and tools and systems. And that's challenging. That's hard. I'm not here to say, oh, you know, humans, we suck and we're just really bad at this. I do think humans are the weakest link, but that's very understandable, because I think what we're asking of people is difficult.
Corey Quinn: We did not evolve in this environment; these are advancements of the last couple hundred years. And a quote I heard from SwiftOnSecurity on Twitter that I really love, that has resonated, is: of all the levers available to you, human nature is not one of them. You will not be able to change it. And that is the realistic truth of it.
I get very tired of security awareness trainings that basically castigate employees: if you click the wrong link in an email, you could destroy the company. Well, frankly, that sounds like a failure on the part of corporate security. If one person clicking a link brings everything down, maybe focus on the other side of that equation, because, spoiler, we all click the wrong link sometimes.
That, again, is human nature. The attacker only has to get lucky once; defenders only have to lose once. There's an unequal parity here. So, not to sound like a Debbie Downer, but I have to ask: if breach is inevitable, what can people do?
Maya Levine: Before I answer that question, I will say that it is proven that education, and those annoying trainings, actually does have an effect.
They significantly reduce the risk of people clicking on phishing links or doing those kinds of things. There was some Forbes study that I saw showing it really does reduce that likelihood. Again, the chance is not eliminated entirely.
Corey Quinn: Very fair. I am an overly negative cynic on that. I'm not trying to dunk on people who make mistakes.
We all make mistakes. I make them in triplicate, generally speaking. But yes, I definitely accept what you're saying.
Maya Levine: Part of why these trainings are annoying is that they're not coming at it from the perspective of: hey, it is so understandable to make this mistake, to click on a link that looked exactly like the font and button and copy and everything that this company usually has.
It is an understandable mistake to make. In fact, we're seeing things that are even more understandable. There was a breach where somebody stole MFA credentials to, I think it was, Uber's VPN, and they kept trying to log in repeatedly, then contacted that person on WhatsApp, pretended to be IT support, and convinced them to reset their credentials.
And all I'm saying is that if somebody can be tricked into clicking a phishing email link, they can almost certainly be tricked into accepting a notification from their employer's own MFA.
Corey Quinn: And overconfidence is absolutely one of those problems. Oh, I'm far too smart to ever click a link like that.
Really? You've never gotten a push notification that sounded important at two in the morning when you're getting home from the bar? Really? You're at your best then? What's your secret? I'd like to be functional then as well. But yeah, the idea of flooding MFA prompts, the click-here-to-accept through Duo: historically, attackers would just start spamming them until eventually people got annoyed and hit the OK button, and then they were in.
Because it's easier to believe that your IT team has misconfigured something that's spamming you than to know this is an actual attack. There is an awareness issue.
Maya Levine: A hundred percent. And also, I mean, we talked earlier about developers leaving their credentials in all sorts of locations.
That's also an awareness issue, right? I don't think many of them are thinking, hey, this is like me going out of town and leaving the key under the mat for weeks and weeks, right? I don't think they're thinking about it in those terms, and maybe just framing it in those terms can be helpful in reducing it.
But at the same time, I firmly believe that we should expect to be breached. We can't always control how attackers get in, right? There are zero-day threats that we learn about all the time. And so that's where the runtime piece comes in, right? You want to prevent, you want to harden, you want to do everything you can to make it hard for attackers to take advantage of your systems.
But you also need to operate under the assumption that things can go wrong, and ask: what do you do when that happens?
Corey Quinn: I like carrying things through one step further, to a level of turning things back around again. For example, when AWS releases a new service, sometimes what I like to do is very carefully bound an IAM policy to just that service, and then leak some credentials with that policy attached onto GitHub.
Then people will, of course, subvert it almost instantly, get the thing to mine cryptocurrency. Great. Now I can kill the credentials, I can stop whatever Bitcoin mining thing it is, and then figure it's mostly configured for the purposes that I need it, and off to production it goes. I'm mostly kidding here, but there's an uncomfortable element of truth to it.
And it appears that I'm not the only person who finds a lot of these things confusing. You've given a talk several times now called IAMConfused. Talk about titles that really resonate with me. My God, I am the confused deputy. Please tell me about your talk.
Maya Levine: Yeah. So one of the things I run at Sysdig is our identity security solution.
And I was kind of baffled at how many organizations I saw that recognized the importance of zero trust and least-permissive access but hadn't prioritized it as a major initiative; other things took higher precedence. So I created this talk where I walk through eight real-life examples of breaches that occurred in the past year or two that utilized identities in some way to achieve their goal,
just to highlight the fact that almost every single breach you hear about on the news took advantage of mismanaged identities or secrets or permissions in some way.
Corey Quinn: Yeah. Increasingly, people hear about these things: oh, there was a hack. Generally, it feels like they fall into two big categories.
One is the provider themselves wound up leaking data somewhere, or it's the credential-stuffing approach, where people wound up using the same password everywhere with poor password hygiene. Increasingly, though, it starts to seem like quote-unquote hacking is mostly taking advantage of either social engineering or else weak credential management, sometimes one combined with the other.
Is that materially changing in cloud? Are we seeing more of that, less of that, or is that always the way of things?
Maya Levine: Really, what we're seeing as the most common initial vector, the initial access point for attackers, is getting their hands on some kind of credentials, whether that's buying them on the dark web from, you know, a previous breach that got leaked, taking advantage of password reuse, or finding those long-term credentials that are stored somewhere.
There are many, many different ways that attackers can gain that initial access, but once they're in, there's all sorts of crazy stuff that they can and do do. But it's really about, I think for me, highlighting that, again, you can't always control how they get in, but if you are not adhering to least permissive, what you're doing is kind of giving them a gift to go do whatever they want to do, escalate however they want to escalate, and move wherever they want to move.
And that should scare you. That should be scary.
Corey Quinn: It's one of those unfortunate areas where there just isn't a lot of, I guess, wiggle room. It's unfortunate because you can't fix human nature, and there's also an inherent trust built into society as a whole. Bruce Schneier wound up writing a book about that a while back,
about the embedded trust that societies need to survive and thrive. People don't generally take a paranoid baseline of assuming every person talking to them is out with ill intent. So I do have a lot of empathy for these things. I'm also increasingly of the mind that a strong defense isn't enough.
Assume you're going to get breached; you need to be able to detect and respond to that rapidly. Otherwise, it leads to the M&M security model: once you're through the thick candy shell, everything inside is gooey and chocolatey, and you can basically have a field day in there until everything comes crumbling down around you.
I used to be fairly dismissive of response approaches, of detection rather than prevention. But now, yeah, the reality is: assume people will get in. How do you mitigate, detect, and stop that damage quickly?
Maya Levine: Yeah, and I know that you've talked to some other people about this 5/5/5 benchmark that Sysdig is talking about.
And this is really, I think, driving this point home. If cloud attacks are happening in 10 minutes or less, how can we match that speed, right? So you do need to detect, we're saying, within five seconds of that signal occurring, whether this is a log from an application or the cloud, a system call, or something from network traffic.
It needs to be real time, so that you can correlate and triage in around five minutes and initiate some kind of response within another five minutes.
Corey Quinn: The idea of being able to detect things in five minutes is tricky, especially when things like CloudTrail can often take 20 to wind up announcing: oh, hey, this thing changed.
To be fair to that team, their responsiveness has improved over the years, but there's still no guarantee. There's no public SLA around when you can count on events being there for things like that. So it's a bit of a best effort at the moment.
Maya Levine: That's true. I will say, though, on the correlating and triage piece, this is another place where good identity practices can help you,
because if you can stitch the actions the attacker took between identities and between different environments together with a common thread, which is the identity, that helps you to paint a picture of it. But more important, on what you just said about the response side: how do you respond in five minutes?
Much of this needs to be automated. You're not going to be able to respond manually at the speed of an attacker's automation. So there are auto-response actions that are available in the cloud. And some actions do need to be manual; obviously, you know, I wouldn't automate an action that could take down your entire website.
I would propagate that up to the right person to see if that's what's needed. But whatever you can automate, that is what's going to help you respond at the correct speed.
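A toy example of the kind of auto-response action being described, assuming an IAM-style API: on an alert, deactivate the suspect access key rather than delete it, so the attacker's credential stops working but the key record survives for forensics. The client is injected here; `boto3.client("iam")` would be the real thing.

```python
def quarantine_access_key(iam_client, user_name: str, access_key_id: str) -> dict:
    """Deactivate a suspect IAM access key in response to an alert.

    Setting Status="Inactive" rather than deleting preserves the key record
    for the investigation that follows. `iam_client` needs only an
    update_access_key method, matching boto3's IAM client signature.
    """
    return iam_client.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )
```

Wired to a detection rule, this is the sort of low-risk action worth automating: it is reversible, it cannot take down a website, and it buys the human responder the five minutes the benchmark allows.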
Corey Quinn: And you just alluded to something that's been the bane of my existence: uneven levels of audit coverage. Okay, we know someone got into the VPN and then connected from there to a bastion host and did some things.
We're not entirely sure what. We can analyze traffic patterns and see that there was some encrypted traffic going to the database, some to a web server. But we don't really know what it was or what it contained, so that's a big hole in our understanding. It's tricky to get holistic coverage of these things, of what you'll need from an audit perspective, but it absolutely needs to be done.
Maya Levine: Yeah, and I think that's one of the biggest challenges for us security vendors, right? It's a pain point that we're trying to solve: how can we help you find that needle in the haystack? You're drowning in noise, right? It's great that we have all these logs and all these audit trails, but the problem now is that we've become inundated and overwhelmed with them.
And sometimes the indicator of compromise that's within those logs gets buried in all of them.
Corey Quinn: There are no good answers that I can see here. I wish there were. I don't see a clearly better path forward yet; something will get there eventually, but it still seems like there's so much work to be done.
At the moment, being aware that this happens, accepting that computers are going to react faster than humans can, and having a plan seem to be the baseline. Do the work; everything else extends downstream from that.
Maya Levine: The good news is that the techniques attackers use aren't that different across different breaches.
For example, most attackers, once they make it into your environment, will do some reconnaissance. They'll enumerate; they'll try to find out: what can I access with my current credentials? What other credentials can I get access to? So they're making calls like list S3 buckets, list this, list that, right?
All of these are calls that are pretty typical in normal day-to-day cloud operations. What's not typical is seeing, say, a hundred of them within a minute. That should be an indication that there's potentially malicious activity here. And that's what we're trying to do.
We're trying to add extra logic on top of it, because the challenge is that all of these individual logs by themselves can often be just typical cloud operations. But if you can add some kind of logic on top that recognizes when it becomes atypical: listing an S3 bucket is a normal call, but a hundred of them? Not so much.
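The "a hundred List calls in a minute" heuristic can be sketched as a sliding-window threshold over events grouped by identity. This is an illustrative sketch only; the event shape, identity ARNs, threshold, and window size are assumptions to tune for a real environment, not Sysdig's actual detection logic.

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 100          # List* calls per window before we flag (assumed value)
WINDOW = timedelta(minutes=1)

def flag_enumeration_bursts(events, threshold=THRESHOLD, window=WINDOW):
    """Flag identities issuing an atypical burst of List* calls.
    Each individual call is normal cloud operation; the volume in a
    short window is what signals reconnaissance."""
    by_identity = defaultdict(list)
    for e in events:
        if e["action"].startswith("List"):
            by_identity[e["identity"]].append(datetime.fromisoformat(e["time"]))
    flagged = set()
    for identity, times in by_identity.items():
        times.sort()
        # Slide a window: if call i and call i+threshold-1 fall within
        # `window` of each other, that identity made `threshold` calls
        # inside one window.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(identity)
                break
    return flagged

# Synthetic demo: one identity fires 100 ListBuckets calls in under a
# minute; another makes a handful spread over several minutes.
base = datetime(2024, 1, 1, 10, 0, 0)
demo = (
    [{"identity": "arn:aws:iam::111122223333:user/recon", "action": "ListBuckets",
      "time": (base + timedelta(seconds=i * 0.5)).isoformat()} for i in range(100)]
    + [{"identity": "arn:aws:iam::111122223333:user/dev", "action": "ListBuckets",
        "time": (base + timedelta(minutes=i)).isoformat()} for i in range(5)]
)
flagged = flag_enumeration_bursts(demo)
```

The point of the sketch is the shape of the logic: no single event is suspicious, so the detection has to live a layer above the raw log stream, where rate and context are visible.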
Corey Quinn: And okay, that's one of our DevOps people; they're presumed trusted. Sure, but when their activity begins to look very different from what it normally does, and possibly with a correlation of, okay, their tooling tends to work through Python, so why are they suddenly making requests from Golang? That's a little odd. You start to be able to identify, heuristically, that something fishy might be afoot. It seems like such a great ideal, but getting there is the hard part.
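The Python-versus-Golang heuristic Corey describes can lean on the user-agent string that CloudTrail records with each API call. Below is a hedged sketch, not a production detector: the identity names are hypothetical, and collapsing user agents into coarse SDK "families" is one simple baseline approach among many.

```python
from collections import defaultdict

def agent_family(user_agent):
    """Collapse a raw user-agent string to a coarse SDK family."""
    ua = user_agent.lower()
    if "boto3" in ua or "botocore" in ua:
        return "python"
    if "aws-sdk-go" in ua:
        return "go"
    if "aws-cli" in ua:
        return "cli"
    return "other"

def new_agent_anomalies(baseline_events, recent_events):
    """Return (identity, family) pairs where an identity suddenly uses an
    SDK family never seen in its baseline, e.g. a principal whose tooling
    has always been Python abruptly calling from the Go SDK."""
    seen = defaultdict(set)
    for e in baseline_events:
        seen[e["identity"]].add(agent_family(e["userAgent"]))
    anomalies = set()
    for e in recent_events:
        fam = agent_family(e["userAgent"])
        if fam not in seen.get(e["identity"], set()):
            anomalies.add((e["identity"], fam))
    return anomalies

# Hypothetical principal whose history is all Python and CLI tooling.
baseline = [
    {"identity": "devops-jane", "userAgent": "Boto3/1.34.0 Python/3.11"},
    {"identity": "devops-jane", "userAgent": "aws-cli/2.15.0"},
]
recent = [
    {"identity": "devops-jane", "userAgent": "aws-sdk-go/1.50.0 (go1.21)"},
    {"identity": "devops-jane", "userAgent": "Boto3/1.34.0 Python/3.11"},
]
anomalies = new_agent_anomalies(baseline, recent)
```

A real system would weigh this signal alongside others (time of day, source IP, call volume) rather than alerting on it alone, since legitimate tooling changes happen too.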
Maya Levine: It is, but we are getting there. Security and attackers, that's always going to be a cat-and-mouse game, right? Threats innovate in some way, and then we innovate to defend against it, and there's always going to be that push and pull. But I think we can embrace these kinds of automations, this logic and machine learning, in a way that is actually effective for security.
I don't think AI is the silver bullet that everyone says it is, where we can just run it and that's it, we don't have to do our jobs anymore. But I do think there are use cases where AI can help us, and cutting through the noise to find these abnormalities is one of them.
Corey Quinn: I really want to thank you for taking the time to speak with me today. If people want to learn more, where's the best place for them to go?
Maya Levine: They can visit sysdig.com or reach out to me directly on LinkedIn.
Corey Quinn: And perfect, we'll put links to both of those in the show notes. Thank you so much for taking the time to speak with me today.
I really do appreciate it.
Maya Levine: Thank you, Corey.
Corey Quinn: Maya Levine, Product Manager at Sysdig. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that no doubt will not be attributed to you because that platform provider wound up getting breached due to poor credential management.
2021 Duckbill Group, LLC