Insights from a Vendor Insider with Ian Smith

Ian Smith: We hold ourselves to a really high bar internally. So we feel comfortable at these things. Maybe not entirely comfortable. It's still uncomfortable when someone says, well, tell us about a downtime incident. We're human. We've still had some downtime incidents, but we have incredibly high reliability.

Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn, back after a hiatus. This promoted guest episode is brought to us by our friends at Chronosphere, and they have brought a returning guest. When last I spoke to you, Ian Smith, you were the field CTO at Chronosphere, and now you're the chief of staff. Congratulations or condolences?

Ian Smith: Oh, definitely condolences. Definitely for Chronosphere, at least.

Sponsor: Complicated environments lead to limited insight. This means many businesses are flying blind instead of using their observability data to make decisions, and engineering teams struggle to remediate quickly as they sift through piles of unnecessary data. That’s why Chronosphere is on a mission to help you take back control with end-to-end visibility and centralized governance to choose and harness the most useful data. See why Chronosphere was named a leader in the 2024 Gartner Magic Quadrant for Observability Platforms at chronosphere.io.

Corey Quinn: So if you talk to a bunch of different folks who are various chiefs of staff, chief of staves, however it pluralizes, you're going to get more answers than people you ask about what the role is.

What is a chief of staff in the Chronosphere context?

Ian Smith: At least in the Chronosphere context, the Chief of Staff component of my role is really focused on the overall effectiveness of the executive team, making sure they're set up for success so that they can focus on their departmental needs and obviously the company's needs.

And then there's a second layer to it where it's sort of this hybrid role between Chief of Staff and Head of Strategy. I look overarchingly at, you know, what is the company doing? How are we gauging the market? Not just how are we selling, but what are we presenting to the market? What's our narrative there, and directionally, where are we going over the next one, two, three, five years?

So it's a very interesting role, multifaceted. Um, and I get to do a whole bunch of cool things like this podcast.

Corey Quinn: Which is always fun and exciting.

Other cool things you've done lately include making an appearance in the Gartner Magic Quadrant for, I'm sorry, I forget which one. They have so many quadrants these days that at this point it feels like they're inventing new dimensions.

They've gone three dimensional. Why they're still talking in terms of two, I don't know. But first, congratulations. That's a big deal.

Ian Smith: Yeah, thank you. Not just debuting in the Magic Quadrant, but debuting as a Leader in the observability space, given the relatively short tenure of the company overall, it's been a huge amount of work by the team.

I definitely don't take any credit for that, but it's been very gratifying to see that work pay off and obviously see the interest from the market based off that recognition by Gartner.

Corey Quinn: It's always interesting. I used to take, I guess, a lot of salt with the idea of, oh, this is whatever all the cool kids are talking about.

And okay, if someone's in the quadrant, is that just pay-for-play? What is it? And the more I talk to people, the less I believe that that's accurate. There's value, especially when you're a large company, in figuring out what other companies are doing in a realistic way. And frankly, in having something to point at that helps shore up and justify that decision with something a little bit more scientific than just vibes.

Ian Smith: Yeah, absolutely. And sometimes it can be a good starting point. And there's a lot of depth of analysis that Gartner goes through. I can say from our experience, they definitely make you jump through a lot of hoops. They introspect not just on what the product does, but on how you're positioning it.

Where are you going? There's obviously a vision component to the Magic Quadrant as well, but all of that should condense down into not just, you know, that nice graphic, but the depth of analysis behind it. And the report itself isn't just a pathway to what you buy, but to what's going to be relevant to you.

And obviously as you consider solutions, you'd be thinking about, okay, well, what's relevant to me? And then take that into reading something like the Gartner report and identify, well, they're strong at these things, but maybe these things aren't super important to me. Maybe someone who is placed elsewhere in the quadrant is an ideal fit for me based off what I need.

Corey Quinn: You've been talking a bit lately about the buying process, and that resonates with my philosophy on things. For a little bit of context on this, one of the things I do as a consultant is help negotiate AWS contracts with AWS on behalf of customers. And it's not necessarily that I'm a fantastic negotiator so much as it is that as a customer, you deal with this every, what, year, maybe two years if you have a number of different contracts.

AWS deals with this multiple times a week. So do we. So at some point you wind up sort of solving for that experience differential: you're doing this all the time, we're doing this all the time, great, we can pick out the ebbs and flows and the sea changes that happen. Whereas if you only do it the way you buy a car, once every five years, or ten, however often (I buy things and drive them into the ground), at that point it's, great, let's figure out how do I do this?

And I feel like a babe in the woods every time I go through that. Having someone who does this day in, day out is valuable. I hadn't thought about doing that from an observability perspective, which is what makes this a bit new. Tell me more.

Ian Smith: Yeah. I mean, just like the car analogy: like you, when I buy a car, I've spent a lot of time and effort on it.

It can be quite stressful, and I might keep the car for a while; in this case, I think I've actually had my most recent car for about 10 years. And I've been thinking about buying a new one recently. One of the things that I went looking for was, well, what material is out there for people like myself?

And obviously there's an explosion of content out there. I found an interesting trend on YouTube: there are these people who've spent 30, 40 years in the car-selling industry who have now pivoted to essentially be content creators. And they talk about, well, here are all the tricks, here are all the hints.

And yes, there are aspects around negotiation, right? And everyone has a procurement team when you work at a company. But there are those aspects of, well, how do you really think about what you need, how do you translate that into your purchasing, and then how do you go get the outcome that you want? Maybe it's a car that holds six people.

Maybe it's a car that allows you to take those great road trips. And I think in a similar fashion, there's a lot of things in the observability space that are somewhat stacked against you. You mentioned the fact that a vendor, let's say Amazon, or even Chronosphere or other vendors in the observability space, we do this all the time.

I've spent 10 years in pre-sales, and I've worked with so many different customers. And I'll be honest, there are some tricks that we perform, and there are some things that we bank on in terms of the unpreparedness of the buyer, generally the technical decision maker. It's not someone who buys for a living, unlike procurement, but even procurement, who do buy for a living, defer a lot to the technical decision maker.

They're saying you're the expert, you're the one who's got it.

Corey Quinn: Well, we can hope anyway. Sometimes that doesn't hold true and no one likes the results there.

Ian Smith: Exactly. But you're going through this process, you're trying to collect requirements, you're understanding where this, you know, solution might fit into the business, and you're essentially making the case that we need to go spend money.

And not just spend money, but spend time and effort doing an implementation. Most organizations already have an observability solution. So there's a migration component that's very top of mind for people, the disruption to all of the end users, which are generally all of engineering. So there's a lot of weight behind this, but when you think about it, just like with the car, every 10 years in my case, how often are you buying an observability solution?

And maybe there are multiple solutions in the mix, but if you're buying a platform, let's say, that has all of your key components in it, how often are you really doing that? As an individual, maybe across a 10 year period, you might do that once, twice, three times if you're hopping between a lot of organizations that really need improvement.

And meanwhile, on the vendor side, they're doing this all the time. Are you packaging all of this stuff up neatly? Do you have a clear process, or are you just going and talking to people and seeing sort of who checks out?

Corey Quinn: You talk about the car analogy, and I think it holds very well. I mentioned the contract negotiation analogy.

What makes observability special in this sense, as opposed to effectively any form of enterprise SaaS?

Ian Smith: I mean, I think there are lots of commonalities, but with observability in particular, I think about the impact observability has on, really, the core of what the company's doing. Most software companies, even most enterprise businesses, really rely on software to deliver experience to their customers and generate revenue, and it has a big impact on a lot of their employees and on their customers.

For me, observability, the reason why I've spent so long in it, is its ability to impact the industry and sort of the digital society we have as a whole. But are we really doing the right things as an observability sector in the industry? Are we doing the right things in terms of buying the right tools and pattern matching?

If you get the observability solution wrong, if, say, you buy something that's incredibly unreliable for you, it was great in the pilot, but it doesn't hit the reliability that you need, because you then need to be able to provide reliability to your own customers, your business is going to suffer.

The quality of life of your engineers is going to suffer. Your observability team is going to suffer as well. And why? Like we don't need that to happen. And ultimately, if the observability industry as a whole doesn't get better, then the software industry is not going to get better. That's me maybe tilting a little bit at windmills, but that's part of why I've been really passionate about observability.

So as we think about that sort of purchase and move to observability solutions, it comes down to picking the right solution. I work for Chronosphere, but I'm not saying Chronosphere is the right fit for every single person. Picking the right thing, using resources like Gartner, coming in prepared, focusing on those outcomes rather than tick-box features.

That's really important to getting, you know, that rock-solid software and facilitating development of all the things that we rely on on a daily basis.

Corey Quinn: I don't disagree with what you're saying, but you also work at a vendor. You clearly have a vested interest in that decision going a particular way.

So, is talking about that experience differential just you taking a victory lap? Are you trying to educate people as far as how to handle this? Is this just a, hee hee, here's what we're doing and there's nothing you can do to stop us? How are you envisioning this?

Ian Smith: Definitely not. And I would say that, you know, as we work with really sophisticated customers, it makes us better.

And Chronosphere is focused on large organizations, organizations with very sophisticated needs. And we take a very partnership-driven approach, right? We're not going out there trying to plunder these customers and trick them, and because of that, you know, we have amazing retention, all those kinds of things.

That's the victory lap component. But at the end of the day, us listening to what the customers want, them being able to push us in certain directions, has led to us having a more robust experience, and it means that we can be more guiding to these customers. So a customer comes to us and says, okay, I just want to sort of evaluate and take a look at something, and that's going to be great.

It's like, well, have you thought about requirements? Have you thought about all the different stakeholders in your organization? Oftentimes that's a great way of us building good credibility with the customer and also making sure that we are the right solution for them. The last thing as a vendor that you want to do, particularly as someone who provides a lot of white-glove engagement, is spend a lot of time, effort, and energy unnecessarily; in our case, we stand up dedicated infrastructure for our customers and even our pilots. So if we can have those good conversations, if we can help people be prepared, right time, right place, right ideas, then the buying process is actually going to be easier for us.

And at the end of the day, if the whole industry is sort of lifted up and the bar is raised, and we're already there, we're going to be in a good place as a vendor. Naturally, as I said before, I've spent the last decade in this space. I want the industry to get better. And I think that leads to better outcomes and happier customers.

Sponsor: Complicated environments lead to limited insight. This means many businesses are flying blind instead of using their observability data to make decisions, and engineering teams struggle to remediate quickly as they sift through piles of unnecessary data. That’s why Chronosphere is on a mission to help you take back control with end-to-end visibility and centralized governance to choose and harness the most useful data. See why Chronosphere was named a leader in the 2024 Gartner Magic Quadrant for Observability Platforms at chronosphere.io.

Corey Quinn: So, if I'm a customer and I'm looking at an observability solution, then generally, from my own experience living the SRE life, a few things are almost certainly true. One, I have a problem. It might look like a reliability problem, but blessed few places start on day one building an app with an eye toward, and as we go, we're going to instrument this thing. And even if they do, they get it hilariously wrong from the position of scale, how the product winds up evolving, et cetera. So there's a painful problem that they have to deal with.

And if there's one area that I think observability vendors compete in the most, and this is obviously, as someone who runs a sponsored podcast, more visible to me than to some others, it seems to be marketing dollars positioning what they do differently, more so than it is, in many cases, technical differentiation.

I'm not saying that you or any given company necessarily falls into this, but it does seem like there's a lot of "all of these options are pretty good along some baseline stuff," and then they tend to differentiate around the margins. So if you're a babe-in-the-woods buyer going in to purchase a solution in this space, because you have an expensive problem and you're most likely getting yelled at, what should they do?

What should I do as a new buyer, from your perspective?

Ian Smith: Yeah, I mean, as you point out, there are very solid, I would say commoditized, capabilities across the board, so you can think of the margins as becoming more important. A lot of organizations, or a lot of technical evaluators, think about products and technical features.

But instead, you should think about the outcomes. We've talked about this before: yes, the features lead to outcomes, but how do you evaluate that kind of thing? A lot of times people go, okay, give me a list of features, and then in the pilot, I'm going to check those individual features. A good pattern I see from a sophisticated buyer, and it can be hard to set up, is: hey, we want to go put data from a production system into the pilot environment.

And then we want to be able to compare the solution that we have now with a vendor, or maybe multiple vendor solutions. And let's say that we had an incident last week. What if we tried to investigate that thing from a workflow perspective? And importantly, there are layers to this, right? You might think, oh, that's a good idea, but then who do you put in the seat to investigate that thing?

One of the anti-patterns that I see a lot of is, oh, we'll put the most sophisticated observability user in the company in that hot seat. They already know about the incident. They're an expert in the current tool. They're intended to be an expert in the future tool. And that's a signal, but your experts are not indicative of the entire set of engineers in your organization.

Corey Quinn: This is a common problem that I've encountered where, especially when observability was new, it felt like, great, can you even explain to me what your company or product does? And the answer is yes, but first, go and do the tutorial, which is three to five years of experience as an SRE in a scaled-out environment.

Then you'll understand what we're talking about. Now, the state of explaining these things has dramatically improved, which is necessary for someone like me who's trying to forget those three to five years of experience. But it was always a "you must be at least this smart to even understand what we're talking about."

So you have the experts inside of a company going for this, and then they expect a bunch of disparate teams to suddenly all get on board with it. It's a recipe for disappointment.

Ian Smith: Right. And so this comes to, I guess, the summation of the point that I make on this, which is you need to think about what outcome you're trying to get.

And at the end of the day, it's generally about people. It's not about how many logs I can go send through a system; it's the value you're trying to get. The value I'm trying to get is that my engineers can respond to issues and investigate issues. Well, what do my engineers look like? How is the evaluation representative of my engineers?

How do I get that kind of signal? And then you think about this further. Okay, well, when do I need it the most? I need it most when things are busted and broken. And obviously Chronosphere is a SaaS solution, for example, and I'm slightly biased here, but a lot of smart engineers think, well, I love to tinker with things.

Maybe I should go and deploy it in my own environment. Okay, if my environment's having issues, and my observability solution is co-located on the same infrastructure as the application having issues, am I going to have visibility at all?

Corey Quinn: It's the bootstrapping problem, is how I like to think about that. I was working at a web hosting company many moons ago, and I was brought in as the voice of experience on a relatively junior team that was running data center ops.

Great, awesome. So where's the runbook on how to get this place up from a cold start? Oh, it's in Confluence. And where's Confluence? Oh, it's on that server over there. And I looked at the person, and they said, we should print that out and put it over here. Like, there we go. That's the idea. Make sure that, for example, if your hypervisor needs DNS servers to come up, you don't put the DNS resolvers on a VM that lives as a guest on that hypervisor.

It's the idea of making sure that, if nothing else, you can figure out that things are down. Every cloud provider that I'm aware of has something, somewhere, running on a competitor that just gives the outside perspective: are we suddenly all just talking to ourselves internally? Let us know if, from the rest of the internet's perspective, we drop off.

That's one of the first things I would always build out when I took a role, running on my Linux box at home. It was great: can I wind up hitting the website? If all else fails, let me know.
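(For illustration, a minimal version of the outside-in check Corey describes might look like the following sketch in Python; the endpoint URL and the alerting mechanism here are hypothetical placeholders.)

    # A minimal external uptime probe. Run it from infrastructure that does
    # not depend on the systems it watches (a box at home, another provider).
    import urllib.request

    SITE = "https://example.com/health"  # hypothetical endpoint to probe
    TIMEOUT_SECONDS = 10

    def site_is_up(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
                return 200 <= resp.status < 300
        except OSError:  # DNS failure, timeout, refused connection, HTTP error
            return False

    if __name__ == "__main__":
        if not site_is_up(SITE):
            # Alert through a channel that also lives outside the monitored
            # environment; otherwise the check inherits the same failure mode.
            print("ALERT: site unreachable from the outside")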

Ian Smith: Right. And so at the end of the day, this is a reliability problem, right? And again, there are many layers to it.

Do I want to be personally responsible for the observability solution's uptime when I'm also simultaneously trying to solve my own application's uptime? Maybe, maybe not. Okay, but then if I do go down the SaaS pathway, what does reliability mean? What kind of uptime am I guaranteed? That stuff's on paper.

So have you talked to someone who has actually used this solution for a long time, and what has their experience been? And there are nuances to this. It can be: okay, what happens when things go down? What kind of guarantees does the solution provide you? What kind of mitigation? Do they even notify you?

Right? Because if the observability solution goes quiet in the dark and you don't know about it, you get the whole "does a tree make a sound if it falls in the forest?" problem. It's a sketchy thing to rely upon. But again, there are many layers: what does reliability mean to you, and how do you get signal on it?

And on that signal piece, regardless of reliability or anything else, I think one of the most important things you can do is actually go and talk in depth with a customer that hopefully looks like you and prioritizes the same things as you.

Corey Quinn: My philosophy, and maybe this is actually a question for you from your perspective, having been on the selling side of it: I tend to be more skeptical of customer references that I'm pointed at by the vendor than ones that I find organically.

And it's not necessarily that I'm out there asking, all right, who has an axe to grind against a particular company? Invariably, when someone has too much of an axe to grind, they're generally a former employee and things didn't end well. But I want to hear the real story, not something you're doing out of some contractual commitment, because on the back end of enterprise deals there's always a "there's a reference clause, we can use you as a testimonial, and you'll say positive things" style approach.

I really want to get the real dope. How do you avoid the problem of, I guess, the perception that anyone you introduce me to is only going to say glowing things?

Ian Smith: I think there are, again, layers to it. One, there's the worst-case scenario in my mind, which is the written case study. You can't ask questions.

Everything's been heavily edited. Not the one that you're...

Corey Quinn: What I want to see in the written case study is the name of the person. And then I'm going to go track that person down and say, what's the real dope here? Like, wait, I have a case study? Great. Now we're cooking with gas.

Ian Smith: Right. So on the references thing, a live reference: one of the things that I think every customer should expect is that you should be able to ask for a reference pretty early on in the process, right?

It shouldn't be, oh, this is the last thing before we sign the contract, because, from a tactics perspective, I've worked at companies that do that. And the explicit tactic that's described internally is: look, even if that call doesn't go super well, they're already so far down the process. It's just got to tick a box.

It doesn't have to be a ringing endorsement. They don't need to get all of their questions answered. They're so close to signing. But if you have that reference earlier, that could have a much bigger impact. Another expectation you should have is you shouldn't expect to have anyone from the vendor on the call.

Corey Quinn: Oh, absolutely. No one is going to, well, not no one, I'm a jerk, but most people are not and will not talk smack about a vendor directly in front of the vendor. Whereas I have always found the most honest answers I've gotten have been over beers outside of an office, one on one. And to be clear, I don't want to give the wrong impression here, because it sounds like, all right, tell me everything that's terrible about it, and there is some of that.

But also, people are generally pretty even-handed and fair about these things. And even if you wind up with an overwhelmingly negative person, I've found it useful to say, okay, now tell me something positive about it. And if the answer is no, then okay, that tells me something. And also, you need to do this more than once.

You cannot make an informed decision based on one data point.

Ian Smith: Right. Other things that I would think about: does this customer look like me? In the sense of, do they prioritize the same things? Are they at a similar scale? Are they using the same product features? How long have they been a customer? If they've been a customer for three months, it's probably still the honeymoon phase.

Has it been long enough to see potentially bad behaviors? And to that point, what are the questions you're going to ask? If you just go in and say, okay, I'm going to do this reference call, one of the things that a vendor is going to do, and I've done this, is brief the reference really heavily: these are the things that I want you to talk about.

But if you, as the prospective customer, are talking to the reference and saying, look, that's great, but I have some very specific questions, there are questions I would recommend asking. For example: hey, tell me what happened when something actually went wrong. Because you can talk about the happy path all you want, but: tell me when an outage happened.

Tell me when they didn't manage to solve a problem for you. Maybe, in the observability space, you had a big outage and ultimately the tool didn't help you at all; when you took that problem back, what was the dialogue like? What happens when you file support tickets? What happens when you have a massive surge in data volumes?

What is that experience like? How has the negotiation been? Have you dealt with overages? So all of those potentially negative circumstances, and everyone's had them, right? If you work with vendor tools, you've had negative circumstances. As you say, just go and ask what it was like when those things happened, and have a list of those.

Have that subset of things you should be worried about for any given vendor, and particularly for observability vendors. If the answers are all, oh, that's never happened, maybe be a little suspect. Either they don't want to tell you, or it really hasn't happened, so they can't give you the signal you're looking for.

Corey Quinn: It is interesting to me, just as an aside, that you are advocating for this perspective, because it doesn't matter who the vendor is; this is a very agnostic thing. At Chronosphere, if a customer or prospective customer goes through this process, they will uncover negative things about Chronosphere, by definition.

That is the approach. I don't think that anything you're saying particularly advantages Chronosphere versus any other vendor, other than on the question of what legitimate customer experiences you have had in the marketplace. So it's laudable. I'm sort of surprised you're allowed to do it.

Ian Smith: So...

Corey Quinn: That's right! You're the chief of staff.

You're allowed to do whatever you say you are. My mistake.

Ian Smith: For a...

Corey Quinn: This is a great way for a junior employee to get themselves surprise fired.

Ian Smith: The theory, ultimately, as I mentioned before, is that it's raising that bar. And we hold ourselves to a really high bar internally, so we feel comfortable at these things. Maybe not entirely comfortable; it's still uncomfortable when someone says, well, tell us about a downtime incident.

We're human. We've still had some downtime, but we have incredibly high reliability. And we have things that we can point to that lead to that; it's not a matter of luck. You know, we've been supporting customers in production for years at this point, and some of them continuously. Being able to say, hey, we've provided between four and five nines of uptime to that customer, not on a monthly basis but over the entire duration of the customer's lifetime, that is not a mistake.
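(For a sense of scale on those numbers, here is the standard availability arithmetic behind "four and five nines"; this is a generic calculation, not a Chronosphere figure.)

    # Downtime budget implied by an availability target, per year.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

    for label, availability in [("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        allowed_down = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label}: about {allowed_down:.1f} minutes of downtime per year")

    # three nines: about 525.6 minutes per year (roughly 8.8 hours)
    # four nines:  about 52.6 minutes per year
    # five nines:  about 5.3 minutes per year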

And so if, for example, reliability is a really important thing to you, we have those data points to point to. And not just the "well, it's been this number," but the why. We can explain the why and the effort and investment that we put into these things, and everything should be backed up. So if we are willing to put our money where our mouth is and do these things, then everyone else should too.

And as I said before, you've raised the bar and you've set people's expectations high. Our belief is that we will be able to clear the bar. And for the types of customers that care about the things we've built our product around, and not just the product, but also the vendor experience, we will continue to be successful as we have been.

But ultimately, as I mentioned before, there is a slightly selfish aspect beyond the success of Chronosphere, which is that if customers come prepared, it actually makes our lives easier. What I see a lot of the time is that a customer will come to us, we'll have some of these conversations, and they'll be like, great, we hadn't thought about that stuff. We need to come back to you in three months' time.

Corey Quinn: Yeah. I guess the worst possible scenario for a vendor is when you have one prospect talking to an existing customer for a reference check, and they convince the existing customer to go in a different direction.

I mean, that's got to be a weird experience where it's like, hey, that's a good point. They were terrible at this. We should look into something else. I don't imagine that happens all that often, but the idea is funny.

Ian Smith: It doesn't happen a lot, but at the end of the day, as I said, as a company, as a business, you don't want to waste resources on something that shouldn't have happened in the first place.

And if you think about the opposite: someone who's maybe going to a competitor of Chronosphere hears this, takes some of this advice, drives all of this stuff, and goes and does research. As you said, maybe they go and find some of their own references, and talk to someone who evaluated that product and chose someone else.

Maybe that someone else chose Chronosphere, and maybe they get a sense of why they did that. So the hope is, okay, we may deflect one or two who may not have been a good fit for us, but for every one or two deflected, maybe three or four come back in.

Corey Quinn: There's a definite, I think, sense as well that it is not winner-take-all in the observability space.

Everyone I talk to has what could charitably be called an observability pipeline, but in practical terms it's generally something of a mess, where there's a huge number of products already deployed, and as much as they say, oh, this new vendor will help us consolidate some of these things, it's: nope, we're just adding one more to the count.

Ian Smith: Yes. Yeah. And it's definitely, I think, a very valid desire to want to consolidate down, but particularly for the enterprise, I see a fairly large problem, in the sense that oftentimes there are point solutions, or maybe solutions targeting a relatively narrow use case. They might be platforms, there might be multiple products in the suite, but they may be targeting a relatively narrow use case.

Maybe it's, hey, we still have some stuff in a data center in Europe, and that all has to be on-prem, so you're like, well, I want something that can be on-prem and SaaS as well. Is that really a good approach? Because now you're starting to get lowest-common-denominator approaches. And fascinatingly, for enterprises in this multifaceted transitionary phase: I would say two years ago, for those companies who had historically large investments in APM and were looking toward the future and at things like OpenTelemetry, there was a sense of, well, what I should look for is something that can do everything my APM solution did, but is very open-source compatible, very cloud-native suitable, and makes your youngest, most tech-savvy engineers, in terms of microservices and whatnot, very, very happy.

I want that. And we saw a lot of RFPs go out, sort of like a shotgun blast. We had conversations with some; some came on board as customers; and there was a large portion who were like, well, we're going to try something else. And recently, in the last, let's say, six to nine months, we've seen a lot of them, names that you would recognize in finance and healthcare, who are like, wow, we tried this thing of either going really far into the future and consolidating everything there, or leaning heavily on our historical vendor who said, hey, we're going to add open-source capabilities.

And it's not worked out, because lowest common denominator doesn't cut it. And this, I think, is a great example of what we've been talking about, which is: what are the outcomes you want, not what does the solution look like? Does it look like a box of donuts? No. You want to figure out how to feed people.

Corey Quinn: Yeah. It's a question of: what is the actual problem you're attempting to solve for here, as opposed to checking boxes?

I do want to go back, I guess, and come full circle here to the car analogy, where I think people often get confused. When you're buying a car, that is generally a one-off transaction.

Once the contract is signed and maybe the cooling-off period has expired, it's over. It's done. It would basically have to burn your house down in order to be unwound in any realistic way. Enterprise software, observability in particular, is not like that. Yes, there's some sunk cost in building it out and instrumenting things, but if it's bad enough, people will leave, and they certainly are going to complain about it.

So, as you said a few minutes ago, you don't want to have a solution in place that never should have been there; it causes you more harm down the road. There needs to be a longer-term view than this quarter's numbers.

Ian Smith: Yeah, absolutely. And as these environments get more and more complex, it's not a matter of "sign and I'm done." The cost isn't just whether I turn the old thing off or not; there's a big implementation cost.

And then, well, if I go somewhere else, I've got yet another implementation cost. It becomes very, very complex. And those are some of the aspects that I think people really need to think about in terms of the outcomes and the organization. Observability, and the adoption and purchase of it, is not a technical problem by itself.

The technical problems are manifestations of the organization's pain. And ultimately, if you aren't thinking about the organization, if you aren't thinking about the people, if you aren't thinking about the processes, and you're just focused on the other Ps, product or technology, you're really doing yourself a disservice, and you are very unlikely to be successful.

You might pick the right solution. The technology might be there, but did you pick the right timing? Did you have the right people involved? Did you have the right stakeholders there? Did you set things up for success for that implementation? Was it something that you needed to do at a particular time period?

Often, as we go through conversations with customers who have deeply entrenched SaaS solutions, they realize: oh, I thought I just needed to sign, and get that as close to my previous solution's cancellation date as possible, but actually I need multiple months to do some sort of migration or implementation.

And these are factors that may not be directly about what the product is going to do. There can be things on the product and technology side that help you import, but also, from the vendor: do they help you with that? And I'm not talking about getting nickel-and-dimed by professional services. Do they have experience in doing these migrations?

So again, talk to a customer, a reference, and dig into that. What was the migration experience like? Was it longer or shorter than you expected? How much effort did they expend on your behalf? Did they provide tooling? There are all of these things that really layer on top of just the "great, I have metrics, logs, and traces, done."

Corey Quinn: It's kind of nice to be able to just check boxes and move on with our day. But unfortunately, real-world problems don't look like that anymore. I wish they did.

Ian Smith: And that's where, as I said, sophisticated customers have really pushed us to make our buying process, well, our selling process, what it is.

And so we want to be able to push that back down, because it does make our lives easier. If people have clarity about what they're looking for, and they can have a real conversation with us about what that is, it's easy to get alignment straight away, and not have to waste a whole bunch of time and resources, or invest even more in bringing everyone up to speed.

Right? So this is part of that approach: can we get that message out there and get people thinking about it, even just incrementally a little bit more, before they come and have those conversations? There are obviously benefits, as I said; potentially people reflect back on us and go, hey, that sounds like an experience that we want.

But ultimately, if the whole industry gets this way, if every other vendor also promotes this kind of thing, then one, everyone's experience is going to be better, and two, our lives also get easier.

Corey Quinn: Indeed.

I really want to thank you for taking the time to speak with me. If people want to learn more, or, I don't know, attempt to retain you as a buying consultant for observability software, where's the best place for them to find you?

Ian Smith: Chronosphere.io is our website. I'm always floating around. Our team is absolutely happy to talk you through this process, and not just, hey, here's a demo, but: what does a buying process look like with Chronosphere? What does post-signing look like? What does implementation look like?

These are things that we're happy to talk about upfront. But yeah, you can find me on what used to be known as Twitter under Datasmithing, or find me on LinkedIn.

Corey Quinn: And we'll of course put links to that in the show notes. Thank you so much for taking the time to speak with me. I appreciate it.

Ian Smith: Thanks, Corey. Great to talk to you.

Corey Quinn: Ian Smith, Chief of Staff at Chronosphere. This promoted guest episode has of course been brought to us by our friends at Chronosphere. And I'm Corey Quinn. If you've enjoyed this podcast, please leave a five star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five star review on your podcast platform of choice, but I'm still going to wind up questioning the negative comment you leave by looking for several more testimonials.
