Datadog Q4 2025 Earnings Call - AI acceleration lifts bookings to a record $1.63B
Summary
Datadog closed 2025 on a strong note, reporting Q4 revenue of $953 million, up 29% year over year, and a record $1.63 billion in bookings as AI-native customers and broad cloud consolidation accelerated demand. The quarter blended healthy unit economics with heavy product momentum: three core pillars now each exceed $1 billion in ARR, log and APM traction is strong, and platform adoption continues to deepen across enterprises.
Management is doubling down on AI as both a product enhancer and a new observability frontier. Bits AI agents, the MCP server, LLM observability, and GPU-focused tooling are already in customer hands, and Datadog flagged rapid adoption metrics. The company guides 2026 revenue to $4.06 billion to $4.10 billion, implying 18% to 20% growth, and keeps a conservative posture by modeling slower growth for its single largest customer while expecting the broader business to grow at least 20%. Cash generation remains robust, with free cash flow of $291 million and a 31% FCF margin in Q4.
Key Takeaways
- Q4 revenue $953 million, up 29% year over year, beating the high end of guidance.
- Record bookings of $1.63 billion in Q4, up 37% year over year, including 18 deals >$10 million and two deals >$100 million.
- Management highlights an AI-driven inflection, with both AI-native customers and the broad customer base accelerating spend.
- Datadog now has three core pillars over $1 billion ARR: Infrastructure Monitoring (> $1.6B ARR), Log Management (> $1B ARR), and APM/DEM (> $1B ARR), with APM growing in the mid-30s% YoY and identified as the fastest-growing core pillar.
- Platform adoption is deepening: 84% of customers use two or more products, 55% use four or more, 33% use six or more, and 9% use 10 or more products, up materially year over year.
- Customer base and concentration: ~32,700 customers total, ~4,310 customers with ARR >= $100k generate about 90% of ARR, 48% of the Fortune 500 are customers, median Fortune 500 ARR still under $500k indicating expansion runway.
- AI product traction: Bits AI SRE Agent GA in December, over 2,000 trial and paying customers ran investigations in the past month; MCP server usage grew elevenfold in Q4 vs Q3; ~5,500 customers use at least one Datadog AI integration.
- LLM and AI observability momentum: over 1,000 customers using LLM/AI observability capabilities, the product ecosystem expanded with LLM experiments, playgrounds, prompt analysis and a forthcoming agents console.
- Log Management and Flex Logs: Log Management crossed $1B ARR, Flex Logs is nearing $100M ARR, and Datadog reported nearly 100 deals replacing a large legacy logging vendor, worth tens of millions of dollars in new revenue.
- Retention and unit economics remain healthy: trailing 12-month net revenue retention ~120%, gross retention in the mid- to high-90s, Q4 gross margin 81.4%, operating margin 24% and non-GAAP operating income $230 million.
- Strong cash generation: cash, cash equivalents and marketable securities $4.47 billion at quarter end; operating cash flow $327 million; free cash flow $291 million, FCF margin 31%.
- Billings $1.21 billion, up 34% YoY; RPO $3.46 billion, up 52% YoY, with current RPO growth ~40% YoY, and RPO duration increased due to more multiyear deals.
- 2026 guide: Q1 revenue $951M-$961M (25%-26% YoY); FY2026 revenue $4.06B-$4.10B (18%-20% YoY). Management models the core business, excluding the largest customer, to grow at least 20% in 2026 and emphasizes conservatism around the single largest customer.
- Go-to-market and investment posture: continued scaling of GTM headcount and geographic capacity while maintaining productivity, OpEx growth 29% YoY in Q4 as Datadog keeps investing in R&D and go-to-market execution.
- Competitive framing: management says customers are consolidating away from homegrown and legacy vendors, and that general-purpose LLMs are complementary but not a substitute for embedded, real-time observability that operates in the data plane.
Full Transcript
Michelle, Conference Call Operator: Good day, and welcome to the Q4 2025 Datadog Earnings Conference Call. At this time, all participants are in listen-only mode. After the speaker’s presentation, there’ll be a question and answer session. To ask a question, please press star one one. If your question has been answered and you’d like to remove yourself from the queue, please press star one one again. As a reminder, this call may be recorded. I’m now going to turn the call over to Yuka Broderick, Senior Vice President of Investor Relations. Please go ahead.
Yuka Broderick, Senior Vice President of Investor Relations, Datadog: Thank you, Michelle. Good morning, and thank you for joining us to review Datadog’s fourth quarter 2025 financial results, which we announced in our press release issued this morning. Joining me on the call today are Olivier Pomel, Datadog’s Co-founder and CEO, and David Obstler, Datadog’s CFO. During this call, we will make forward-looking statements, including statements related to our future financial performance, our outlook for the first quarter and fiscal year 2026, and related notes and assumptions, our product capabilities, and our ability to capitalize on market opportunities. The words anticipate, believe, continue, estimate, expect, intend, will, and similar expressions are intended to identify forward-looking statements or similar indications of future expectations. These statements reflect our views today and are subject to a variety of risks and uncertainties that could cause actual results to differ materially.
For a discussion of the material risks and other important factors that could affect our actual results, please refer to our Form 10-Q for the quarter ended September 30, 2025. Additional information will be made available in our upcoming Form 10-K for the fiscal year ended December 31, 2025, and other filings with the SEC. This information is also available on the Investor Relations section of our website, along with a replay of this call. We will discuss non-GAAP financial measures, which are reconciled to their most directly comparable GAAP financial measures in the tables in our earnings release, which is available at investors.datadoghq.com. With that, I’d like to turn the call over to Olivier.
Olivier Pomel, Co-founder and CEO, Datadog: Thanks, Yuka, and thank you all for joining us this morning to go over what was a very strong Q4 and overall a really productive 2025. Let me begin with this quarter’s business drivers. We continue to see broad-based positive trends in the demand environment with the ongoing momentum of cloud migration. We experienced these trends across our business, across our product lines, and across our diverse customer base. We saw a continued acceleration of our revenue growth. This acceleration was driven in large part by the inflection of our broad-based business outside of the AI native group of customers we discussed in the past. And we also continued to see very high growth within this AI native customer group as they go into production and grow in users, tokens, and new products.
Our go-to-market teams executed to a record $1.63 billion in bookings, up 37% year-over-year. This included some of the largest deals we’ve ever made. We signed 18 deals over $10 million in TCV this quarter, of which two were over $100 million, and one was an eight-figure land with a leading AI model company. Finally, churn has remained low, with gross revenue retention stable in the mid- to high 90s, highlighting the mission-critical nature of our platform for our customers. Regarding our Q4 financial performance and key metrics, revenue was $953 million, an increase of 29% year-over-year and above the high end of our guidance range. We ended Q4 with about 32,700 customers, up from about 30,000 a year ago.
We also ended Q4 with about 4,310 customers, with an ARR of $100,000 or more, up from about 3,610 a year ago. These customers generated about 90% of our ARR. We generated free cash flow of $291 million, with a free cash flow margin of 31%. Turning to product adoption, our platform strategy continues to resonate in the market. At the end of Q4, 84% of customers used two or more products, up from 83% a year ago. 55% of customers used four or more products, up from 50% a year ago. 33% of our customers used six or more products, up from 26% a year ago. 18% of our customers used eight or more products, up from 12% a year ago.
As a sign of continued penetration of our platform, 9% of our customers used 10 or more products, up from 6% a year ago. During 2025, we continued to land and expand with larger customers. As of December 2025, 48% of the Fortune 500 are Datadog customers. We think many of the largest enterprises are still very early in their journey to the cloud. The median Datadog ARR for our Fortune 500 customers is still less than $500,000, which leaves a very large opportunity for us to grow with these customers. We’re landing more customers and delivering more value, and we also see that with the ARR milestones we’re reaching with our products.
We continue to see strong growth dynamics with our core three pillars of observability: Infrastructure Monitoring, APM, and Log Management, as customers are adopting the cloud, AI, and modern technologies. Today, Infrastructure Monitoring contributes over $1.6 billion in ARR. This includes innovations that deliver visibility and insights across our customers’ environments, whether they are on-prem, virtualized servers, containerized hosts, serverless deployments, or parallelized GPU fleets. Meanwhile, Log Management is now over $1 billion in ARR, and this includes continued rapid growth with Flex Logs, which is nearing $100 million in ARR. Our third pillar, the end-to-end suite of APM and DEM products, also crossed $1 billion in ARR. This includes an acceleration of our core APM product into the mid-30s% year-over-year, making it currently our fastest-growing core pillar.
We have now enabled our customers with the easiest onboarding and implementation in the market, while delivering unified, deep, end-to-end visibility into their applications. Now, remember that even with these three pillars, we’re still just getting started, as about half of our customers do not buy all three pillars from us, or at least not yet. Moving on to R&D and what we built in 2025. We released over 400 new features and capabilities this year. That’s too much for us to cover today, but let’s go over just some of our innovations. We are executing relentlessly on our very ambitious AI roadmap, and I will split our AI efforts into two buckets: AI for Datadog and Datadog for AI. So first, let’s look at AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for customers.
We launched Bits AI SRE agent for general availability in December to accelerate root cause analysis and incident response. Over 2,000 trial and paying customers have run investigations in the past month, which indicates significant interest and showed great outcomes with Bits AI SRE. We’re well on our way with Bits AI Dev Agent, which detects code-level issues, generates fixes with production context, and can even help release and monitor a fix, and with Bits AI Security Agent, which autonomously triages SIEM signals, conducts investigations, and delivers recommendations. The Datadog MCP Server is being used by thousands of customers in preview. Our MCP server responds to AI agent and user prompts and uses real-time production data and rich Datadog context to drive troubleshooting, root cause analysis, and automation. We’re seeing explosive growth in MCP usage, with the number of tool calls growing elevenfold in Q4 compared to Q3.
Second, let’s talk about Datadog for AI. This includes capabilities that deliver end-to-end observability and security across the AI stack. We are seeing an acceleration in growth for LLM Observability. Over 1,000 customers are using the product, and usage has increased 10 times over the last six months. In 2025, we broadened the product to better support application development and iteration, adding capabilities such as LLM experiments, LLM playground, LLM prompt analysis, and custom LLM as a judge. We will soon release our AI agents console to monitor usage and adoption of AI agents and coding assistants. We’re working with design partners on GPU monitoring, and we are seeing GPU usage increase in our customer base overall. We’re building into our products the ability to secure the AI stack against prompt injection attacks, model hijacking, and data poisoning, among many other risks.
Overall, we continue to see increased interest among our customers in next-gen AI. Today, about 5,500 customers use one or more Datadog AI integrations to send us data about their machine learning, AI, and LLM usage. In 2025, our observability platform delivered deeper and broader capabilities for our customers. We reached a major milestone of more than 1,000 integrations, making it easy for our customers to bring in every type of data they need and engage with the latest technologies from cloud to AI. In log management, we’re seeing success in our consolidation motion. During 2025, we saw increasing demand to replace a large legacy vendor with Datadog in nearly 100 deals, worth tens of millions of dollars in new revenue. We improved log management with notebooks, reference tables, log patterns, calculated fields, and an improved Live Tail, among many other innovations.
We launched Data Observability for general availability. Data is becoming even more critical in the AI era. With Data Observability, we are enabling end-to-end visibility across the entire data lifecycle. We launched Storage Management last month, providing granular insights into cloud storage and recommendations to reduce spend. We delivered Kubernetes autoscaling, so users can quickly identify overprovisioned clusters and deployments and right-size their infrastructure. In the digital experience monitoring area, we launched path analytics to help product designers make better design decisions with clear data about user experience and behavior. And we delivered RUM without limits, giving front-end teams full visibility into user traffic and performance, and dynamically choosing the most useful sessions to retain. In security, we’re seeing increasing traction and are actively displacing existing market-leading solutions with Cloud SIEM in large enterprises.
This year, our engineers shipped many new capabilities, including a tripling of the amount of content packs built into the product, and most importantly, the tight integration with Bits AI Security Agent, which has already shown promise as a strong differentiator in the market. We launched Code Security, enabling customers to detect and remediate vulnerabilities in their code and open source libraries from development to production. We continue to advance our cloud security offering, adding infrastructure as code or IaC security, which detects and resolves security issues with Terraform. We launched our Security Graph to identify and evaluate attack paths. In software delivery, in January, we launched Feature Flags. They combine with our real-time observability to enable canary rollouts, so teams can deploy new code with confidence.
We expect them to gain importance in the future, as they serve as a foundation for automating the validation and release of applications in an AI agentic development world. We are also building out our internal developer portal, which includes software catalog and scorecards, to help developers navigate infrastructure and application complexity, provide rich context to AI development agents, and ultimately enable a faster release cadence. In cloud service management, we launched OnCall and now support over 3,000 customers with their incident response processes. And I already mentioned Bits AI SRE Agent, which pairs with OnCall to accelerate our customers’ incident resolution. As you can tell, we’ve been very busy, and I want to thank our engineers for a very productive 2025. And most importantly, I’m even more excited about our plans for 2026. So let’s move on to sales and marketing.
I want to highlight some of the great deals we closed this quarter. First, we landed an eight-figure annualized deal and our biggest new logo deal to date with one of the largest AI model companies. This customer had a fragmented observability stack and cumbersome monitoring workflows, leading to poor productivity. This is a consolidation of more than five open-source, commercial, hyperscaler, and in-house observability tools into the unified Datadog platform. That has returned meaningful time to developers and has enabled a more cohesive approach to observability. This customer is experiencing very rapid growth. Datadog allows them to focus on product development and supporting their users, which is critical to their business success. Next, we welcomed back a European data company in a nearly seven-figure annualized deal.
This customer’s log-focused observability solution had poor user experience and integrations, which led to limited user adoption and gaps in coverage. By returning to Datadog and consolidating seven observability tools, they expect to reduce tooling overhead and improve engineering productivity with faster incident resolution. They will adopt nine Datadog products at the start, including some of our newer products, such as Flex Logs, Observability Pipelines, Cloud Cost Management, Data Observability, and OnCall. Next, we signed an eight-figure annualized expansion with a leading e-commerce and digital payments platform. This customer’s products have an enormous reach, and its commercial APM solution had scaling issues, lacked correlation across silos, and had a pricing model that was difficult to understand or predict. With this expansion, they are standardizing on Datadog APM using OpenTelemetry, so their teams can correlate metrics, traces, and logs to detect and resolve issues faster.
And they have already seen meaningful impact, with a 40% reduction in resolution times by their own estimates. This customer has adopted 17 products across the Datadog platform. Next, we signed a seven-figure annualized expansion for an eight-figure annualized deal with a Fortune 500 food and beverage retailer. This long-time customer uses the Datadog platform across many products, but still has over 30 other observability tools and embarked on consolidating for cost savings and better outcomes. With this expansion, Datadog Log Management and Flex Logs will replace their legacy logging product for all ops use cases, with expected annual savings in the millions of dollars. This customer is expanding to 17 Datadog products. Next, we signed a seven-figure annualized expansion with a leading healthcare technology company. This company was facing reliability issues, impacting clinicians during critical workflows and putting customer trust at risk.
The customer will consolidate 6 tools and adopt 7 Datadog products, including LLM Observability, to support their AI initiatives, as well as Bits AI SRE Agent to further accelerate incident response. Next, we signed an eight-figure annualized expansion, more than quadrupling the annualized commitment with a major Latin American financial services company. Given its successful tool consolidation project and rapid adoption of Datadog products across all of its teams, this customer renewed early with us while expanding to additional products, including Data Observability, CI Visibility, Database Monitoring, and Observability Pipelines. With Datadog, this customer showed measurable improvements in cost, efficiency, customer experience, and conversion rates across multiple lines of business. That proof of value led them to broaden their commitment with us and has firmly established Datadog as their mission-critical observability partner.
Last but not least, we signed a seven-figure annualized expansion for an eight-figure annualized deal with a leading fintech company. With this expansion, the customer is moving their log data onto our unified platform, so teams can correlate telemetry in one place and save between hours and weeks in time to resolution for incidents. This customer has adopted 19 Datadog products across the platform, including all three pillars, as well as digital experience, security, software delivery, and service management. And that’s it for our wins. Congratulations to our entire go-to-market organization for a great 2025 and a record Q4. It was inspiring to see the whole team at our company kickoff last month and really exciting to embark on a very ambitious 2026. Before I turn it over to David for a financial review, I want to say a few words on our longer-term outlook.
There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. So we continue to extend our platform to solve our customers’ problems from end to end across their software development, production, data stack, user experience, and security needs. Meanwhile, we’re moving fast in AI by integrating AI into the Datadog platform to improve customer value and outcome, and by building products to observe, secure, and act across our customers’ AI stacks. In 2025, we executed very well and delivered for our customers against their most complex mission-critical problems. Our strong financial performance is an output of that effort.
We’re even more excited about 2026, as we are starting to see an inflection in our customers’ adoption of AI in their applications, and as our customers begin to adopt AI innovations, such as our Bits AI SRE Agent. To hear about all that in detail and much more, I welcome you all to join us at our next Investor Day, this Thursday in New York, between 1:00 P.M. and 5:00 P.M. I’ll be joined by our product and go-to-market leaders to share how we are serving our customers, how we innovate to broaden our platform, and how we are delivering greater value with AI. For more details, please refer to the press release announcing the event or head to investors.datadoghq.com. And with that, I will turn it over to our CFO, David.
David Obstler, CFO, Datadog: Thanks, Olivier. Our Q4 revenue was $953 million, up 29% year-over-year, and up 8% quarter-over-quarter. Now, to dive into some of the drivers of our Q4 revenue growth. First, overall, we saw robust sequential usage growth from existing customers in Q4. Revenue growth accelerated with our broad base of customers, excluding the AI natives, to 23% year-over-year, up from 20% in Q3. We saw strong growth across our customer base, with broad-based strength across customer size, spending bands, and industries, and we have seen this trend of accelerated revenue growth continue in January. Meanwhile, we are seeing continued strong adoption amongst AI native customers, with growth that significantly outpaces the rest of the business.
We see more AI-native customers using Datadog, with about 650 customers in this group, and we are seeing these customers grow with us, including 19 customers spending $1 million or more annually with Datadog. Among our AI customers are the largest companies in this space. As of today, 14 of the top 20 AI-native companies are Datadog customers. Next, we also saw continued strength from new customer contribution. Our new logo bookings were very strong again this quarter, our go-to-market teams converted a record number of new logos, and average new logo land sizes continue to grow strongly. Regarding retention metrics, our trailing 12-month net revenue retention percentage was about 120%, similar to last quarter, and our trailing 12-month gross revenue retention percentage remains in the mid- to high 90s. Now moving on to our financial results.
First, billings were $1.21 billion, up 34% year-over-year. Remaining performance obligations, or RPO, was $3.46 billion, up 52% year-over-year, and current RPO growth was about 40% year-over-year. RPO duration increased year-over-year, as the mix of multiyear deals increased in Q4. We continue to believe revenue is a better indication of our business trends than billing and RPO. Now, let’s review some of the key income statement results. Unless otherwise noted, all metrics are non-GAAP. We have provided a reconciliation of GAAP to non-GAAP financials in our earnings release. First, our Q4 gross profit was $776 million, with a gross margin percentage of 81.4%.
This compares to a gross margin of 81.2% last quarter and 81.7% in the year-ago quarter. Q4 OpEx grew 29% year over year, versus 32% last quarter and 30% in the year-ago quarter. We continue to grow our investments to pursue our long-term growth opportunities, and this OpEx growth is an indication of our successful execution on our hiring plans. Our Q4 operating income was $230 million, for a 24% operating margin, compared to 23% last quarter and 24% in the year-ago quarter. Now, turning to the balance sheet and cash flow statements. We ended the quarter with $4.47 billion in cash, cash equivalents, and marketable securities. Cash flow from operations was $327 million in the quarter.
After taking into consideration capital expenditures and capitalized software, free cash flow was $291 million, for a free cash flow margin of 31%. Now for our outlook for the first quarter and the full fiscal year 2026. Our guidance philosophy overall remains unchanged. As a reminder, we base our guidance on trends observed in recent months and apply conservatism on these growth trends. For the first quarter, we expect revenues to be in the range of $951 million-$961 million, which represents 25%-26% year-over-year growth. Non-GAAP operating income is expected to be in the range of $195 million-$205 million, which implies an operating margin of 21%.
Non-GAAP net income per share is expected to be in the $0.49-$0.51 per share range, based on approximately 367 million weighted average diluted shares outstanding. For the full fiscal year 2026, we expect revenues to be in the range of $4.06 billion-$4.10 billion, which represents 18%-20% year-over-year growth. This includes modeling within our guidance that our business, excluding our largest customer, grows at least 20% during the year. Non-GAAP operating income is expected to be in the range of $840 million-$880 million, which implies an operating margin of 21%.
Non-GAAP net income per share is expected to be in the range of $2.08-$2.16 per share, based on approximately 372 million weighted average diluted shares. Finally, some additional notes on our guidance. First, we expect net interest and other income for the fiscal year 2026 to be approximately $140 million. Next, we expect cash taxes in 2026 to be about $30 million-$40 million, and we continue to apply a 21% non-GAAP tax rate for 2026 and beyond. Finally, we expect capital expenditures and capitalized software together to be in the 4%-5% of revenue range in fiscal year 2026. To summarize, we are pleased with our strong execution in 2025.
Thank you to the Datadog teams worldwide for a great 2025, and I’m very excited about our plans for 2026. Finally, we look forward to seeing many of you on Thursday for our Investor Day. Now with that, we will open up our call for questions. Operator, let’s begin the Q&A.
Michelle, Conference Call Operator: Thank you. As a reminder, to ask a question, please press star one, one. Our first question comes from Sanjit Singh with Morgan Stanley. Your line is open.
Sanjit Singh, Analyst, Morgan Stanley: Thank you for taking the question, and congrats on a strong close to the year and a successful 2025. Olivier, I wanted to get your updated views in terms of where observability is headed in the context of a lot of advancements when it comes to agentic frameworks, agentic deployments, the stuff that we’ve seen from Anthropic and the new frontier models from OpenAI. Just in terms of, like, what this means for observability as a category, defensibility of it, in terms of can customers use these tools to build, you know, homegrown solutions for observability? So just get your latest comments on the defensibility of the category and how Datadog may potentially have to evolve in this new sort of agentic era.
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, I mean, look, there’s a few different ways to look at it. You know, one is there’s going to be many more applications than there were before. Like, people are building much more, they’re building much faster. You know, we’ve covered that in previous calls, but, you know, we think that this is nothing but an acceleration of the increase of productivity for developers in general, so you can build a lot faster. As a result, you create a lot more complexity because you build more than you can understand at any point in time.
You move a lot of the value from the act of writing the code, which now you actually don’t do yourself anymore, to validating, testing, making sure it works in production, making sure it’s safe, making sure it interacts well with the rest of the world, with the end users, make sure it does what it’s supposed to do for the business, you know, which is what we do with observability. So we see a lot more volume there, and we see that as, you know, what we do basically, where observability can help. The other part that’s interesting is that a lot more happens within these agents and these applications, and a lot of what we do as humans now starts to look like observability.
You know, basically, we’re here to understand. We’re trying to understand what the machine does. We try to make sure it’s aligned with us. We try to make sure, you know, the output is what we expected when we started, and that, you know, we didn’t break anything. And so we think it’s going to bring observability more widely in domains that it didn’t necessarily cover before. So we think that these are accelerants, and I mean, obviously, we have a horse in this one, but, you know, we think that observability and the contexts between the code, the applications, and the real world and production environments, and real users, and the real business is the most interesting, the most important part of the whole AI development lifecycle today.
Unnamed Analyst, Analyst: Maybe just one follow-up on that line of thinking. In a world where there’s a greater mix between human SREs and agentic SREs, is there any sort of evolution that we need to think about in terms of whether it’s UI or how workflows work in observability and how maybe Datadog sort of tries to align with that that evolution that’s likely to come in the next couple of years?
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, there’s going to be an evolution, that’s certain. You know, there’s going to be a lot more automation than we see today. Like, all the signs we see point to everything moving faster. You know, more data, but more and more interactions, more systems, more releases, more breakage, more resolutions of those breakages, more bugs, more vulnerabilities, everything, you know. So we see an acceleration there. At the end of the day, the humans will still have some form of UI to interact with all that, and a lot of the interaction will be automated by agents. So we’re building the products to satisfy both conditions.
So we have a lot of UIs, and we are able to present the humans with UIs that represent how the world works, what the options are, give them familiar ways to go through problems and to model the world. And we also are exposing a lot of our functionality to agents directly. You know, we mentioned on the call, we have an MCP server that is currently in preview, and that is really seeing explosive growth of usage from our customers. And so it’s a very likely future that part of our functionality is delivered to agents through MCP servers or the likes. Part of our functionality is directly implemented by our own agents, and part of our functionality is delivered to humans with UIs.
Unnamed Analyst, Analyst: Understood. Thank you, Oli.
Michelle, Conference Call Operator: Thank you. Our next question comes from Raimo Lenschow with Barclays. Your line is open.
Raimo Lenschow, Analyst, Barclays: Thank you. Congrats from me as well. Staying a little bit on that AI theme, Olivier, the eight-figure deal with a model company is really exciting. I assume they tried to do it with some open source tooling, et cetera, but actually went from paying almost nothing to paying you more money. What drove that thinking? What do you think convinced them to do that? It’s now the second one, after the other very big model provider. So clearly the debate in the market, that you can do this on the cheap somewhere, doesn’t quite hold. Could you speak to that, please? Thank you.
Olivier Pomel, Co-founder and CEO, Datadog: I mean, the situation is very similar to every single customer we land. Every customer we land has had some amount of homegrown tooling. They have some open source; they might still run some open source. That’s typically what we see everywhere. The idea that it’s cheaper to do it yourself is usually not the case: your engineers are typically very well compensated and a big part of the spend in these companies, and their velocity is what gates just about everything else in the business. So usually, when customers start engaging with us, we can very quickly show value that way. It’s not any different from what we see with any other customer.
And also, within the AI cohort, it’s not unusual at all. The AI cohort in general is a who’s who of the companies that are growing very fast and that are shaping the world in AI, and they’re all adopting our product for the same reasons. Sometimes with different volumes, because those companies have different scales, but the logic is the same.
Raimo Lenschow, Analyst, Barclays: That’s perfect. Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Gabriela Borges, with Goldman Sachs. Your line is open.
Unnamed Analyst, Analyst: Hi, good morning. Congratulations on the quarter, and thank you for taking my question. Oli, I wanted to follow up on Sanjit’s question on where the line is between what an LLM can do longer term and the domain experience that you have in observability. If I think about some of Anthropic’s recent announcements, they’re talking about LLMs as a broader anomaly detection tool, for example, on the security vulnerability management side. How do you think about the limiting factor to using LLMs as an anomaly detection tool that could potentially take share from observability players in the category? And how do you think about the moat that Datadog has that offers customers a better solution relative to where the roadmap for LLMs can go long term? Thank you.
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, that’s a very good question. We definitely see that LLMs are getting better and better, and we bet on them getting significantly better every few months, as we’ve seen over the past couple of years. As a result, they’re very, very good at looking at broad sets of data; that’s the first part. If you feed a lot of data to an LLM for analysis, you’re very likely to get something that is very good and that is going to get even better. So when you think of what we have that is fundamentally a moat here, there are two parts.
One is how we are able to assemble that context so we can feed it into those intelligence engines; that’s how we aggregate all the data we get. We parse out the dependencies, we understand how everything fits together, and we can feed that into the LLM. That’s in part what we do today: we expose these kinds of functionality behind our MCP server, so customers can recombine them in different ways using different intelligence tools. The other part is where we think the world is going for observability. Right now the SDLC is accelerating a lot, but it’s still somewhat slow, and so it’s okay to have incidents, run post-hoc analysis on those incidents, and maybe use some outside tooling for that.
Where the world is going, you’re going to have many more changes, many more things happening. You cannot actually afford to have incidents to look at for everything that’s happening in your system. So you’ll need to be proactive. You’ll need to run analysis in stream as all the data flows through. You’ll need to run detection and resolution before outages actually materialize. And for that, you’ll need to be embedded in the data plane, which is what we run. You also need to be able to run specialized models that can act on that data, as opposed to just taking everything and summarizing it after the fact, fifteen minutes later. That’s what we’re uniquely positioned to do. We’re building that.
We’re not quite there yet, but we think that, a few years from now, that’s what the world is going to run on, and that’s what makes us significantly different in terms of how we can apply anomaly detection, intelligence, and preemptive resolution in our systems.
Unnamed Analyst, Analyst: That makes a lot of sense. Thank you. My follow-up,
Olivier Pomel, Co-founder and CEO, Datadog: By the way, the data planes we’re talking about are very real time, and they are many orders of magnitude larger, in terms of data flows and data volumes, than what you typically feed into an LLM. So it’s a bit of a different problem to solve.
Unnamed Analyst, Analyst: Yeah, super interesting. Thank you. My follow-up is for both of you, Oli and David. You’ve mentioned a couple of times now some of the conversations you have with customers about value creation within the Datadog platform. Tell us a little bit about how some of those conversations evolve when the customer sees that, in order to do observability for more AI usage, the Datadog bill is going up. What are some of the steps you can take to make sure the customer still feels like they’re getting a ton of value out of the Datadog platform? Thank you.
Olivier Pomel, Co-founder and CEO, Datadog: Well, there are a few things. First, the rule of software always applies: there are only two reasons people buy your product, to make more money or to save money. So whenever customers use a new product, they need to see a cost saving somewhere, or they need to see that they are going to reach customers they wouldn’t reach otherwise. We have to prove that, and we always prove that. Anytime a customer buys a product, that’s what is happening behind the scenes. In general, when customers add to our platform, as opposed to bringing in another vendor or another product, they also spend less by doing it on our platform.
Unnamed Analyst, Analyst: I appreciate the color. Thank you very much.
Michelle, Conference Call Operator: Thank you. Our next question comes from Itay Kidron with Oppenheimer & Co. Your line is open.
Itay Kidron, Analyst, Oppenheimer & Co: Thanks, and congrats. Quite an impressive finish to the year. David, I wanted to dig a little bit into your ’26 guide, just to make sure I understand some of your assumptions. Maybe you could talk about the level of conservatism you’ve built into the guide for the year. You’ve also talked about at least 20% growth for the core, excluding the largest customer, but what should we assume for the large customer? And when you look at the AI cohort, excluding this large customer, are there any concentrations evolving there, given your strong success?
David Obstler, CFO, Datadog: Yeah, there are three questions in there. The first is on overall guidance, except for what we’re going to speak about next. We took the same approach as always: we looked at the organic growth rates, the attach rates, and the new logo accumulation rates, and discounted those. That covers the overall business, which is quite diversified; we talked about diversification by industry, by geography, and by SMB, mid-market, and enterprise. We noted that with the guidance being 18%-20% and the non-AI, or heavily diversified, business being 20%+, the implication is that the growth rate of that core business assumed in the guidance is higher than the growth rate of the large customer. That doesn’t mean the large customer will actually grow that way.
It’s just that in our consumption model we essentially don’t control that, so we took a very conservative assumption there. On the last point, the diversification: we said the 650 names in AI are quite diversified, essentially very similar to our overall business, in which we have a range of customers but not that concentration level. What we’re seeing there is significant growth, and, like our overall distributed customer base, potentially some customers working on how the product is being used, but nothing out of the ordinary relative to the overall customer base in that very diversified AI set of customers outside the largest one. Hopefully that’s helpful.
Itay Kidron, Analyst, Oppenheimer & Co: Okay, that’s great. And can you give us the percentage of revenue from the AI cohort this quarter?
David Obstler, CFO, Datadog: We haven’t put that out there.
Itay Kidron, Analyst, Oppenheimer & Co: Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Todd Coupland with CIBC. Your line is open.
Unnamed Analyst, Analyst: Oh, thank you, and good morning. I wanted to ask you about competition and how the LLM rise is impacting share shifts. Just talk about that and how Datadog will be impacted. Thanks a lot.
Olivier Pomel, Co-founder and CEO, Datadog: Yeah. I mean, there hasn’t been any particular change in competition in the market with customers: we see the same kinds of players, the positioning is relatively similar, and we’re pulling away. We’re taking share from everybody who has scale. I know there’s been noise; there were a couple of M&A deals that came up, and we got some questions about that. The companies involved were not particularly winning companies, not companies we saw in deals, not companies that had a large market impact. So we don’t see that changing the competitive dynamics for us in the near future.
We also know that competing in observability is a very, very full-time job. It’s a very innovative market, and we know exactly what we had to do, and have to do, to keep pulling away the way we are, so we’re very confident in our approach and in what we’re going to do there in the future. With the rise of LLMs, there is clearly more functionality to build and there are new ways to serve customers. We mentioned our LLM observability product; there are a few other products on the market for that. I think it’s still very early for that part of the market, and it’s still relatively undifferentiated in terms of the kinds of products on offer. But we expect that to shake out more in the future.
We think in the end there’s no reason to have observability for your LLMs that is different from the rest of your system, in great part because your LLMs don’t work in isolation. The way they implement their smarts is by using tools: tools on your existing applications, or on new applications you build for that purpose. So you need everything to be integrated in production, and we think we stand on very strong footing there.
David Obstler, CFO, Datadog: Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Mark Murphy with J.P. Morgan. Your line is open.
Mark Murphy, Analyst, J.P. Morgan: Thank you. Olivier, Amazon is targeting $200 billion in CapEx this year. If you include Microsoft and Google, that CapEx is gonna exceed $500 billion this year for the big three hyperscalers, and it’s growing 40%-60%. I’m wondering if you’ve collected enough signals from the last couple of years of CapEx, that trend, to estimate how much of that is training related and when it might convert to inferencing, where Datadog might be required. In other words, you know, are you looking at this wave of CapEx and able to say it’s gonna create a predictable ramp in your LLM observability revenue? Maybe what inning of that are we in? Then I have a follow-up.
Olivier Pomel, Co-founder and CEO, Datadog: Well, I think it’s probably too reductive to peg that to LLM observability. I think it points to way more applications, way more intelligence, way more of everything in the future. Now, it’s hard to directly map the CapEx from those companies to what part of the infrastructure is actually going to be used to deliver value two, three, or four years from now. So we’ll have to see what the conversion rate is on that. But look, it definitely points to very, very large increases in the complexity of the systems, the number of systems, and the reach of those systems in the economy.
And so we think it’s going to be of great help to our business. Let’s put it this way.
Mark Murphy, Analyst, J.P. Morgan: Yeah, great help. Okay. And then as a quick follow-up, there is an expectation developing that OpenAI is going to have a very strong competitor in Anthropic, kind of closing the gap, producing nearly as much revenue as OpenAI in the next one to two years. You mentioned an eight-figure land with an AI model company. Stepping back, do you see an opportunity to diversify that AI customer concentration? Sometimes it might be a direct customer relationship there, or it could be some of the products, like Claude Code, being adopted globally, creating more surface area to drive business to Datadog.
Can you comment on maybe what is happening there among the larger AI providers or whether you can diversify that out?
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, I mean, look, we’re not built as a business to be concentrated on a couple of customers. That’s not how we’ve become successful, and that’s probably not how we’ll be successful in the long term. So yes, at the end of the day, it would be irrational for customers, for all customers in the AI cohort, not to use our product. We have some great successes with the customers currently in that cohort. We see more, by the way: we have more inbound there, and more customers talking to us, from the largest, even hyperscaler-level, AI labs. And we expect to drive more business there in the future.
I think there’s no question about that.
David Obstler, CFO, Datadog: You’re seeing that in some of the metrics we’ve been giving in terms of the number of AI-native customers and the size of some of these customers. So, to echo what Oli said, we are essentially selling to many of the largest players, which results in greater size of the cohort and more diversification.
Mark Murphy, Analyst, J.P. Morgan: Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Matt Hedberg with RBC. Your line is open.
Matt Hedberg, Analyst, RBC: Great. Thanks for taking my question, guys. Congrats from me as well. David, a question for you. Your prior investments are clearly paying off with another quarter of acceleration, and it seems like you’re going to continue to invest ahead of the future opportunity. I think op margins are down maybe 100 basis points on your initial guide. I’m curious if you can comment on gross margin expectations this year, and how you might realize incremental OpEx synergies by using even more AI internally?
David Obstler, CFO, Datadog: Yeah, on gross margin, I think what we said is around the 80% mark. We try to engineer that: when we see opportunities for efficiency, we’ve been quite good at harvesting them. At the same time, we want to make sure we’re investing in the platform. So where we are today is very much in line with what we said we’re targeting. There may be opportunity longer term, but we are also trying to balance those opportunities with investment in the platform. And in terms of AI, we are using it in our internal operations today. The first signs of what we’re seeing are productivity and adoption.
We will continue to update everybody as we see opportunities in terms of the cost structure. Oli, anything else you want to go over?
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, I mean, look, the expectation in the short to midterm, anyway, should be that we keep investing heavily in R&D. We see great productivity gains with AI there, but at this point it helps us build more, faster, and solve more problems for our customers. And we’re very busy adopting AI across the organization.
Matt Hedberg, Analyst, RBC: Got it. Thanks, guys.
Michelle, Conference Call Operator: Thank you. Our next question comes from Koji Ikeda with Bank of America. Your line is open.
Koji Ikeda, Analyst, Bank of America: Yeah. Hey, guys. Thanks so much for taking the question. Olivier, maybe a question for you. A year ago, you talked about how, while some customers do want to take observability in-house, it’s really a cultural choice; it may not be rational unless you have tremendous scale, access to talent, and growth that is not limited by innovation bandwidth, which most companies do not have. It is a year later, and it does seem like the industry and the ecosystem and everything have changed quite a bit. So I was hoping to get your updated views on those thoughts, whether they have changed at all over the past year, and why? Thank you.
Olivier Pomel, Co-founder and CEO, Datadog: No, I mean, look, it’s something that happens sometimes, but it’s a small minority of cases. The general motion is that customers start with something homegrown, or attempts to do things themselves, then they move to our product, and they scale with our product. Sometimes they optimize a little bit along the way, but the general motion is they do more and more with us. They rely on us for solving more of their problems, and they outsource the problem, and increasingly the outcomes, to us. So I don’t think that’s changing. We’ll still see customers here and there that choose to in-source it and do it themselves, again usually for cultural reasons.
I would say economically or from a focus perspective, it doesn’t make sense for the very vast majority of companies. You know, we even see teams at, you know, hyperscalers that have all the tooling in the world, all the money in the world, all the know-how in the world, and that still choose to use our products because it gives them a more direct path to solving their problems.
Koji Ikeda, Analyst, Bank of America: Thank you.
Michelle, Conference Call Operator: Thank you. And our next question comes from Peter Weed with Bernstein Research. Your line is open. Peter, if your telephone is muted, please unmute. Our next question comes from Brad Reback with Stifel. Your line is open.
Brad Reback, Analyst, Stifel: Great. Thanks very much. Oli, the sustained acceleration in the core business is pretty impressive. Obviously, you’ve invested very aggressively in go-to-market over the last 18-24 months. Can you give us a sense of where you are on that productivity curve, and whether there are additional meaningful gains, you think, or is it incremental? And maybe where you see additional investments in the next 12-18 months? Thanks.
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, I mean, we feel good about the productivity. I think the main driver for us in the future is that we still need to scale, and we’re still scaling, the go-to-market team. We’re not at the scale we need to be in every single market and segment in the world right now, so we keep scaling there. The focus now is not necessarily to improve productivity; it’s to scale while maintaining productivity. And of course, there are still many, many things we can do. Even though we love our performance, there’s always a bunch of things that could be better: territories that could be better, productivity that could be better, things like that.
We have tons of work: tons of things we want to do, tons of things we want to fix, things we want to improve. But overall, we feel good about what happened, we feel good about scaling, and you should expect more scaling from us on the go-to-market side in the year to come.
Brad Reback, Analyst, Stifel: Great. Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Howard Ma with Guggenheim. Your line is open.
Howard Ma, Analyst, Guggenheim: Great. Thanks for taking the question. I have one for Olivier. The core APM product growing in the mid-30s% is pretty impressive, and I think better than many of us expected. The question is: is that a re-acceleration, and is the growth driven by AI-native companies that are using Datadog’s real user monitoring and other DEM features, as opposed to core enterprise customers that are building more applications?
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, I mean, look, APM in general has always been a bit of a steady Eddie in terms of growth. It’s a product that takes a little longer to deploy than others, because it reaches further into the applications, and so it takes a bit longer to penetrate the customer environment. That being said, we did a number of different things that helped with the growth there. One, we invested a lot in making that onboarding and deployment a lot simpler and faster; we think we have the best in the market for that, and it shows.
Second, we invested a lot in the digital experience side of it, which is very differentiated, something our customers love, and is driving a lot of adoption of the broader APM suite. We expect to see more of that in the future. And third, we made investments in go-to-market; we cover the market better, so we’re getting looks at more deals in more parts of the world. All of that combined helps that product reaccelerate growth quite a bit. So we feel very, very good about it, which is why we keep investing. Overall, we still have only a small part of the pure APM market.
That market is sized at about $10 billion, including DEM, but the broader market is larger. And so we think there’s a lot more we can do there.
David Obstler, CFO, Datadog: Yeah, I want to add, as I just mentioned, that we’re not fully penetrated across our customer base, and therefore we’re continuing to consolidate onto our platform. We have quite a number of wins where we already have other products, we already have infra and logs, and we’re consolidating APM.
Unnamed Analyst, Analyst: Thank you, guys. David, as a follow-up for you on margin, are the larger-
David Obstler, CFO, Datadog: Mm-hmm
Unnamed Analyst, Analyst: ... AI-native customers significantly diluted to gross margin? And when you think about the initial 2026 margin guide, how much of that reflects potentially lower gross margin type of those customers versus incremental investments?
David Obstler, CFO, Datadog: On a weighted average, they’re not. As we’ve always said, for larger customers it isn’t about AI native or non-AI native; it has to do with the size of the customer. We have a highly diversified customer base, so we’re essentially expecting a similar discount structure by size of customer going forward. And there are consistent, ongoing investments in our gross margin, including data centers and development of the platform. So I think it’s more or less what we’ve seen over the past couple of years, not really affected by whether customers are AI native or not.
Unnamed Analyst, Analyst: Okay. Thank you. Great quarter.
David Obstler, CFO, Datadog: Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Peter Weed with Bernstein Research. Your line is open.
Peter Weed, Analyst, Bernstein Research: Hello, can you hear me this time?
David Obstler, CFO, Datadog: Yes, you’re on.
Peter Weed, Analyst, Bernstein Research: Okay, thank you.
David Obstler, CFO, Datadog: You’re on.
Peter Weed, Analyst, Bernstein Research: Yeah, apologies for last time. Great quarter. Looking forward, I think one of your most interesting and exciting opportunities is really around Bits AI, and I’d love to hear how you think that opportunity shapes up. How do you get paid fair value for the productivity you’re bringing to the SRE and the broader operations team, and how do you see competition playing out in that space? Because obviously we’ve seen startups coming in, and there are questions about Anthropic and where they want to go. How does Datadog capture this value, and protect it, for the business?
Olivier Pomel, Co-founder and CEO, Datadog: Yeah, I mean, look, the way we currently sell a lot of these products is by showing the difference in time spent. When the alternative is that you try to solve a problem yourself, you have an outage and you start a bridge with 20 people on it, and they look for three hours for the root cause, waking people up in the middle of the night, it’s very expensive. It takes a lot of time, and there’s a lot of customer impact because the outages are long.
If the alternative is that in five minutes you have the answer, only the three right people are looking at it, and you have a fix within ten minutes, you have a much shorter impact on the customer, many fewer people internally involved, and lower cost. So it’s fairly easy to make that case, and that’s how we sell the value there. Longer term, as I was saying earlier, right now the state of the art for incident resolution is post-hoc.
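The time-and-people comparison Pomel sketches can be put in rough numbers. This is a back-of-envelope illustration only: the hourly rate is an assumption, while the headcounts and durations come from his anecdote.

```python
# Back-of-envelope incident cost: responders x duration x loaded rate.
# The $150/hour figure is a hypothetical fully loaded engineer cost.
HOURLY_RATE = 150.0

def incident_cost(people: int, hours: float, rate: float = HOURLY_RATE) -> float:
    """Engineering cost of an incident in dollars."""
    return people * hours * rate

manual = incident_cost(people=20, hours=3)         # 20 on a bridge for 3 hours
assisted = incident_cost(people=3, hours=10 / 60)  # 3 people for ~10 minutes
```

Under these assumptions the manual bridge costs $9,000 in engineering time against $75 for the assisted path, before counting the customer impact of the longer outage.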
You have an incident, you look into it, you diagnose it, and then you resolve it. So maybe you cut the customer impact from one hour to fifteen minutes, but you still have an issue, you still have impact, you still distract the team, you still have humans working on it. Longer term, what’s going to happen is that the systems will get in front of issues: they will auto-diagnose issues and help pre-mitigate or pre-remediate potential issues. And for that, the analysis will have to be run in stream, which is a very different thing.
You can massage data and give it to an LLM for post-hoc analysis, and a lot of the value is going to be in gathering the data, but you also have quite a bit of value in the smarts applied on the back end by the LLM. And that’s something that is done by the Anthropics and OpenAIs of the world today.
I think as you look at being in stream, looking at three, four, five orders of magnitude more data, looking at this data in real time and passing judgment in real time on what’s normal, what’s anomalous, and what might be going wrong, doing that hundreds, thousands, millions of times per second, that’s where our advantage is going to be, and where it’s going to be much harder for others to compete, especially general-purpose AI platforms.
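The in-stream judgment described here, scoring each data point as it arrives rather than after an incident, is classically approximated with running statistics. A minimal sketch, assuming an exponentially weighted (EWMA) z-score detector with hypothetical decay and threshold values, not Datadog's actual models:

```python
import math

class StreamDetector:
    """Flag anomalous points as they arrive, using exponentially
    weighted running estimates of mean and variance."""

    def __init__(self, alpha: float = 0.1, threshold: float = 3.0, warmup: int = 3):
        self.alpha = alpha          # weight given to the newest sample
        self.threshold = threshold  # z-score above which a point is flagged
        self.warmup = warmup        # samples to absorb before flagging
        self.count = 0
        self.mean = None
        self.var = 0.0

    def observe(self, x: float) -> bool:
        """Update running stats with x; return True if x looks anomalous."""
        self.count += 1
        if self.mean is None:
            self.mean = x
            return False
        delta = x - self.mean
        std = math.sqrt(self.var)
        anomalous = self.count > self.warmup and std > 0 and abs(delta) / std > self.threshold
        # EWMA updates; the variance update uses the pre-update deviation
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return anomalous

detector = StreamDetector()
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 50.0]
flags = [detector.observe(v) for v in stream]  # only the final spike is flagged
```

The design point matches the argument above: each verdict costs a constant amount of work and memory per point, so it can run inside the data plane at stream rates, unlike shipping everything to an LLM after the fact.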
Peter Weed, Analyst, Bernstein Research: Thank you.
Michelle, Conference Call Operator: Thank you. Our next question comes from Brent Thill with Jefferies. Your line is open.
Brent Thill, Analyst, Jefferies: Thanks. David, I think many gravitate back to that mid-20s% margin you put up a couple of years ago, and I know the last couple of years, including the guide, are looking at low 20s%. Can you talk to your true north, how you’re thinking about that? Obviously growth is number one, but how are you thinking about the framework on the bottom line? Thanks.
David Obstler, CFO, Datadog: Yep. The framework is that we try to plan with more conservative revenues, understanding that if revenues come in above the targets we give, it’s difficult in the short term to invest incrementally. So what we’re trying to do is invest first for the revenue growth, and then layer in additional investment if we see upside to target. Generally, it reflects continuing investment, which we think is paying off, both in the platform and R&D, including AI, as well as in go-to-market.
And then, as we’ve seen over the years in our beat-and-raise pattern, we’ve tended to have some of that flow through to the margin line, and then re-up again for the next phase of growth.
Brent Thill, Analyst, Jefferies: Any big changes in the go-to-market, or big investments you need to make this year, David, to address what’s happened in the AI cohort, or not?
David Obstler, CFO, Datadog: We’re continuing. It’s very similar to what we’ve been doing, which is to work with clients to prove value over time; that manifests itself in our account management and customer success, as well as our enterprise sales. So no, I think for this year we are looking at capacity growth, including geographic expansion, deepening the ways we interact with customers, and expanding channels, very much similar to what we’ve done in previous years.
Brent Thill, Analyst, Jefferies: Thanks.
Olivier Pomel, Co-founder and CEO, Datadog: All right. That’s going to be it for today. On that, I’d like to thank all of you for listening to this call, and I think we’ll meet many of you on Thursday for Investor Day. So thank you all. Bye.
David Obstler, CFO, Datadog: Thank you.
Michelle, Conference Call Operator: Thank you for your participation. You may now disconnect. Everyone, have a great day.