Advanced Micro Devices, Inc. (NasdaqGS:AMD) Analyst/Investor Day Transcript Thursday, March 05, 2020 10:00 PM

Executives

David Wang - Senior Vice President of Engineering-Radeon Technologies Group

Devinder Kumar - Senior VP, CFO & Treasurer

Forrest E. Norrod - Senior VP and GM of Datacenter & Embedded Solutions Business Group

Lisa T. Su - President, CEO & Non-Independent Director

Mark D. Papermaster - CTO and Executive VP of Technology & Engineering

Richard A. Bergman - Executive Vice President of Computing & Graphics Business Group

Ruth Cotter - Senior Vice President of Worldwide Marketing, Human Resources & Investor Relations

Unknown Executive



Analysts

Aaron Christopher Rakers - Wells Fargo Securities, LLC, Research Division

Blayne Peter Curtis - Barclays Bank PLC, Research Division

Harsh V. Kumar - Piper Sandler & Co., Research Division

Mitchell Toshiro Steves - RBC Capital Markets, Research Division

Nathan Brookwood - Insight 64, Research Fellow

Ross Clark Seymore - Deutsche Bank AG, Research Division

Timothy Michael Arcuri - UBS Investment Bank, Research Division

Trip Chowdhry - Global Equities Research, LLC, Co-Founder

Unknown Analyst



Presentation


Lisa T. Su


All right. Good afternoon, everyone. Thank you for joining us today for our 2020 Financial Analyst Day. We have a lot in store for you. So we appreciate everyone's time. And it's really an afternoon to take a step back and talk about the long term. We're going to talk a bit about our strategy, a lot about our technology and our technology road maps and product plans and then, of course, talk about our financial outlook over the next 4 or 5 years.



So let me start first with just a little bit of context. Some of you are new to AMD. Some of you have followed us for a while. But I'd like to say that we've been on a journey these last 5 years. I've been CEO just a little bit over 5 years. And I've actually shown this chart pretty often over the last few years because it is a set of guiding principles for us.



First and foremost, we decided to play to our strengths. For us, it's all about high performance, high-performance technologies, high-performance computing, high-performance products. And this is the fundamental DNA of our company. And so that's why this is our focus. We also set out a pretty ambitious set of goals. And we set out to build a culture within AMD of really focused execution, meeting our commitments to our stakeholders, employees, customers, shareholders, partners. And we believe with all of this, we would be able to create a company that would grow and grow profitability as well. And when you take a look at our strategy, actually, our strategy has been very consistent. Again, this has been sort of a fundamental tenet of what we focused on. And it's really around 3 core competencies. I like to call them our crown jewels. When you look at it, we invest in high-performance graphics for growing markets like gaming, cloud gaming, console gaming, for compute and AI and virtual and augmented reality. We think graphics will continue to be one of the most important elements of the high-performance technologies going forward. We also invest in high-performance CPUs. And when you think about that, think about that in our client systems. Think about that in our infrastructure and cloud environments, everybody needs more compute. And then when you really put these 2 together, it's really the underlying solutions that are very differentiating. And those solutions can be at the chip level, whether you're talking about semi-custom SoCs. Or they can be at the solution level, whether you're talking about platforms that put together our hardware and software or you're talking about partnerships, where we focus on deep co-design with some of the largest customers in the world. These are our 3 tenets: investing in high-performance graphics, investing in high-performance compute and bringing those together in very differentiated solutions.



Now why do we love high-performance computing so much? We really believe that this is a technology that is the enabler for both the present and the future. It actually drives what can be done over the next number of years. And whether you're talking about very, very big systems like supercomputers or the cloud and the hyperscale environments, or you're talking about new workloads like AI and big data analytics and visualization or you're talking about the things that we enjoy like gaming and new client devices, all of these have one thing in common: they all require high-performance computing, and they really play to our strengths. So this is our focus from a product standpoint.



Now as we formulated our strategy and plans over the last 5 years, the most important decisions that we had to make, and frankly, that any technology company has to make, are around those technology investments. It actually takes years to develop a new architecture and to create that foundation to build great products. And when you think about those strategic decisions, they really lead us to where we are today. So these investments include things like our Zen road map. With our leadership CPU road map, we invested in Zen. It was a big, big performance boost. We're now with Zen 2, and you're going to hear more about what's coming. We're investing in a new graphics architecture with RDNA. And RDNA is actually very unique because it spans consoles, PCs and even mobile gaming. And it will last us again for the next 5 years.



We also made a choice, and this was an important choice, to move our entire product portfolio to 7-nanometer, and very, very aggressively. And that has really paid off for us with now best-in-class manufacturing. And when we saw some of the constraints of Moore's Law, we said, "Hey, there's a different way to do this. There's a better way to do this." And it involves chiplet architecture, which allows you to put the best technologies on a package and, to some extent, break the constraints of Moore's Law. Now these things actually may seem pretty obvious today, but frankly, 5 years ago, they weren't so obvious. And they really are the strategic decisions that lead us to today's product road map.



So time lines are always interesting. There are lots and lots of products. If you take a look at the last few years, we first demonstrated Zen actually in August of 2016. So that seems like a long time ago. But if you look at over the last 4 years of products, you'll see a couple of things. You'll see consistency in the road map. So both on the PC side as well as on the server side, consistency with what we've been able to do on the CPU side as well as consistency in the rollout of our new graphics products and really a cadence of product innovation across the last 4 or 5 years.



And when you look at that for 2019, 2019 was actually a huge year. It was a huge year for AMD because we did introduce 7-nanometer across our entire portfolio. And in high-end desktops, that was third-generation Ryzen; in the HEDT market, that was third-generation Threadripper; in mobile processors, we introduced the 7-nanometer Ryzen 4000 series; in graphics, we introduced the new Navi products with RDNA; and in data center, we introduced Rome, or our second-generation EPYC. And when you look across this product set, this is performance leadership. This is performance leadership. And I know when people say that, you're like, well, what does performance leadership mean?



So let me just give you a view of how we think about performance leadership. This is actually a view of -- first, let me talk about the PC market. This is desktop and notebook performance. And what this shows is, let's call it, the last 5 years of products in the industry, which show relatively incremental performance. On the desktop side, this is multithreaded performance with Cinebench. On the notebook side, this is productivity performance plus graphics performance. But you can see it's been relatively, let's call it, incremental. When we introduced Ryzen, it changed. With first and second-generation Ryzen, we became very, very competitive. With third-generation Ryzen, both on the desktop side and on the notebook side, we have changed the performance trajectory. We have changed the performance trajectory. And by the way, that's what we mean by pushing the envelope on high-performance computing.



And so when you look at the business results of all of that, looking first at the PC market, look, we've had very strong results in PCs. When you look at desktops today, we're over 50% share in the premium segment at many of the top global e-tailers. When you look at mobile platforms, we increased the number of mobile platforms in 2019 by about 70% with Ryzen. And in high-end desktop, we have the best product in the industry. It is the best product in the industry. And what that has translated to in terms of business is we have consistently gained share every quarter for the last 8 quarters, 8 points of share in the last 8 quarters.



And now when you look at data center, you see the same trends. You've seen the same trend of incremental improvement in the industry over the last 5 or 6 years. This is looking at SPECint rate, which is a very -- which is a normalized benchmark for data center computing. With first-generation EPYC, we became very competitive. With second-generation EPYC, we've changed the industry curve. We've literally doubled the performance of our competition with the second generation of EPYC. And again, we're very excited about the progress in data center, when you look at some of the statistics. We love the cloud. We are expanding deployments in the cloud, with all of the top cloud providers. We doubled our number of cloud instances in 2019. We expect to be at over 150 in 2020. We've expanded our platforms in enterprise across a number of deployments in a number of OEMs. That pipeline is growing quickly. We doubled the number of platforms in 2019 in enterprise, and we expect to be at over 140 platforms in 2020. And in supercomputing, and let's call this a really, really good area for us, we're winning consistently the top deployments. That's the top deployments today and the top deployments over the next 3 or 4 years. And one of the things that I can say we're very, very proud of is the fact that we were selected for 2 of the largest DOE deployments for supercomputers with both Frontier last year at Oak Ridge National Labs, and just yesterday, we announced with Lawrence Livermore National Labs that they've selected AMD CPUs and AMD GPUs for El Capitan, which should be the most powerful supercomputer in the world in early 2023. So lots of good progress on the data center side.



Now a couple of points on graphics. Look, we are investing in graphics. We're investing in graphics. Last year, we delivered our first generation of the new graphics architecture, RDNA. We saw significant gains in performance per watt as well as overall performance. And you're going to see us continue to invest in graphics. And that has led to some very nice progress in gaming, including, if you look at our RDNA family, the Navi family, we're winning at 1080p and 1440p. This is a very important market for us. We're the exclusive discrete graphics provider for Macs, a very important partnership for us. We are being used in the cloud, in many places, very important partnerships for us. And of course, we love game consoles. Game consoles, if you think about all of the folks that have game consoles between Microsoft and Sony systems, in this current generation, we've shipped over 150 million units since we started in 2013. So a lot of momentum there overall.



Let me also spend a few minutes on how we think about customers. In addition to products, we have majored on building very deep customer relationships. And what that means is it's beyond a typical road map, customer, vendor relationship. It's really about how we do something special. How we co-develop, how we co-design, how we co-innovate. Microsoft and Sony are great examples of that. We love what we do in the console relationships. We've extended our Microsoft relationship to Surface. And that's a very important partnership for us. When we look at the cloud, the cloud is all about what we can do to optimize for the key workloads, and you'll hear more about what we're doing with those relationships. And then what we're doing with the OEMs as well to bring out the new user experiences. So lots of focus on deepening our customer relationships and creating competitive advantage in the ecosystem.



Now what has that translated into over the last 5 years? It's translated to exceptional performance. If you look at where we are -- we were in 2015, we were about a $4 billion company. We finished 2019 at about $6.7 billion. That's 14% annual growth rate over the last few years. And this has actually come from growth across our PC business, our discrete graphics business as well as our data center businesses. So really on the strength of those new products. And that growth has translated into significant margin expansion. We've expanded our margins by more than 10 points in the last few years. We've improved profitability, and we substantially strengthened the company and the balance sheet. So it's fair to say we're a much stronger company than we were a few years ago.
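The growth-rate arithmetic above can be checked with a quick compound-annual-growth-rate (CAGR) calculation. The revenue figures are as cited in the remarks (~$4B in 2015, ~$6.7B in 2019); the helper function and naming are our own sketch:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a starting and ending value."""
    return (end / start) ** (1 / years) - 1

# ~$4.0B (2015) to ~$6.7B (2019) is 4 years of compounding
growth = cagr(4.0, 6.7, 4)
print(f"{growth:.1%}")  # ~13.8%, consistent with the ~14% annual growth rate cited
```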



Now as much fun as the last 5 years have been, today is really about the future. And we'd like to talk about what we see over the next 5 years. I can say for sure, if you ask me or anybody on this leadership team, we are even more excited about the coming journey in terms of what we can do. And the reasons are very simple. First of all, the opportunities are larger. The impact we can make on the industry is larger, and our resources are much stronger. And so if you think about those things and what we've been able to accomplish, it's really exciting to think about what we will accomplish. So again, some of the guiding principles that we think about.



First and foremost, and you're going to hear us say it probably 100 times this afternoon, maybe 101, we will stay committed to high-performance computing leadership. That is our mantra. We are uniquely very, very good at it. And frankly, there are very few companies in the industry that can possibly do it. It is extraordinarily hard to stay at the bleeding edge. We actually see even more opportunities to combine our CPU and GPU solutions. We combine them today, by the way. If you look at our solutions today, in PCs and consoles, they put integrated CPUs and GPUs together. But we see that opportunity broadening and becoming more disruptive as we go over the next 5 years. We also will continue to prioritize very strong and predictable execution. We want to be a trusted partner for our customers. We want people to come to AMD first because they know that they can count on us. And then, frankly, from a business standpoint, our aspirations are to be a best-in-class growth franchise. And we don't take that as sort of a light expression. That is our aspiration.



So let's talk about the market and what's happening in the markets. We love our markets. Our markets are big. They're growing. And there are places where it's very clear who our competition is. When you look at our TAM, it's about an $80 billion TAM, and this is, let's call it, a 2023 number. We see data center at about $35 billion. Data center includes CPUs, it includes GPUs and includes some telco and infrastructure. We see PCs at about $32 billion. This is not necessarily a growing market, but it's a very large market, and it's a large, good market for high-performance computing. And we see gaming, and within gaming, we consider consumer graphics as well as game consoles. Again, we see this to be a good market. Lots of people are gaming. More people are gaming. And they all want better graphics. So it's an exciting TAM and an exciting opportunity for us.
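Only two of the three TAM segments above are given explicit figures; the gaming portion is not stated, but it can be backed out by subtraction (our inference, not a number from the remarks):

```python
# TAM figures cited in the remarks, in billions of dollars (~2023):
total, data_center, pcs = 80, 35, 32

# Gaming (consumer graphics + game consoles) is the remainder
gaming = total - data_center - pcs
print(f"Implied gaming TAM: ~${gaming}B")  # ~$13B, inferred by subtraction
```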



Now today, we're going to spend a lot of time talking about our technology investments. When I talk to you guys often, you're like, what makes you so confident that you can continue the leadership in products. And it really is about the choices that we're making, the choices that we have made, the choices that we will make. And if you look at it, there are really a couple of key areas, right? First and foremost, we're going to continue to invest significantly in our cores road map. That's our CPU, Zen road map, that's our RDNA road map. These are the baseline for what we have to do from a technology standpoint.



We're going to be aggressive with advanced technology that has played to our strength. That's where we are. To do high-performance compute, you have to be aggressive with advanced technology, and that's in process, packaging and interconnect. We're very excited about data center. We think the data center has some very unique characteristics as new workloads come in, and there's a lot of innovation to be had, across cloud, enterprise and accelerated computing. And we're equally excited about PC and gaming solutions because, again, there are lots of new user experiences. So these are the areas of our technology investments.



Now my team is going to talk about this in a lot more detail. So I'm just going to give you like a small preview over the next few minutes. But it gives you an idea of what we prioritize. So industry-leading CPU and GPU road maps. These are IP road maps. And David and Mark are going to go over these in more detail. The main thing that I want to say is that you can count on us to have a very strong cadence of continuing to innovate on the CPU and the GPU architecture. We can see the path. We see the path today, and our teams are working on these things today. In the area of advanced technology, I said we would be aggressive, and we'll continue to be aggressive. Mark will talk more about this. In process, we are in a very, very good position, a very good position with 7-nanometer, and we're committed to being aggressive with advanced process nodes. I think that is really part and parcel of the strategy. We're also very clear that packaging is key, that Moore's Law is slowing down and that packaging is a way to break some of those constraints. The chiplets are great. There's next-generation chiplets. There's work that we can do in terms of 3D die stacking. And we'll talk more about that. And probably the area that is somewhat underappreciated, when you talk about advanced technology, is interconnects. As sexy as the individual components are, how you put them together makes an incredible amount of difference. And we are uniquely positioned to really drive that interconnect architecture.



Moving on to the data center market. Look, I said we were really excited about this area. We are really excited about this area. Forrest is going to spend quite a bit of time talking to you about it. The excitement comes from the fact that there's just insatiable demand for more compute. Everybody needs more compute, no matter where you are. And it's not just more compute, but it's different compute. And there are new workloads and new problem sets and new ways to solve the problem, no matter where you're looking in this ecosystem. And our view is that there's a huge advantage if you think about solving the problems differently. And that's where we're focused in data center. Some of the key centers of data center leadership. Again, you can expect these, CPU road map. Today, Zen 2 is the best CPU in the market. Second-generation EPYC with Rome is the best x86 server processor in the market. We intend to continue that with Zen 3-based Milan, on track for later this year. Perhaps one of the things that may be very new today is what we're going to do in data center GPUs. We used to really share the architecture. So our GCN architecture was shared between consumer graphics and data center graphics. But when you look at the workloads going forward, there's really an opportunity to optimize. David is going to share with you our new compute road map for GPUs. We call it AMD CDNA. We like that. It stands for compute DNA. And what you can expect is this is a beginning of a new road map that's going to take our GPU compute architecture forward, particularly around HPC and machine learning. And you can expect a cadence like we've done on the CPU side with Zen, on the GPU side with CDNA. And then really exciting is how we put these system solutions together and really form, together with our CPU road map, our GPU road map and our new interconnect capability, the best system solutions in the industry. 
And that includes hardware solutions as well as investment in software solutions on the platform side. So that gives you an idea of how we view data center and our bets in the data center.



So moving on to PCs and gaming. Look, we are just as excited about the opportunities here. It's a different market. It's a market led more by user experiences, but the world is different today. People have a different expectation of what you can do on a notebook or a desktop or a gaming system. And in the PC market, people want more performance. They want more capability. They want more portability. They want more security. And those are things that we're good at. And for gaming, you're going to hear from Rick, there are more than 2 billion gamers in this world. And frankly, they have a lot of expectations. They want to be able to play their games anywhere, anytime on any form factor, while interacting with their friends and family, and they want to do it at high resolution. And that requires also plenty of CPU and GPU capability. A lot of technology here. And in this part of the business, what you're going to hear from me and what you're going to hear from Rick is we are committed to building the best products for PCs and gaming. And we think there's a lot of opportunity, a lot of opportunity. In the PC market, it's a large market that has lots of demands across both notebook and desktop. We've done very, very well in the desktop market, particularly in the DIY market. But we are still very underrepresented in consumer and commercial systems. And we'll talk about how we become more representative in those markets. In graphics, we have the RDNA road map, but the road map also needs a great set of products to go along with it. And as I hear all the time from gaming enthusiasts, we are committed to a top-to-bottom gaming portfolio. And that's a multiyear, multigenerational commitment.



And then on the console side. Look, we are honored to be partnered with Microsoft and Sony for their next-generation consoles. It's probably the most anticipated consumer launch of 2020. And what we do with each of them is really help them power their visions for the next generation of gaming with custom SoCs. And so that gives you an idea of what we have in store for PCs and gaming.



Okay. So lots of exciting technology and products that you'll hear about today. But I also want to make sure that I give you a preview of how we're thinking about driving shareholder returns. We believe we have great markets. We believe we have great products, and we also believe that we are underrepresented in the TAM. There is a lot more that we can do. And so with continued execution what we're driving towards is best-in-class growth. What we're driving towards is making those right investments not just for today but for the next 5 years. What we believe is with our continued expansion of our product portfolio, we will continue to expand margins and grow profitability. And we also believe we'll generate a significant amount of cash in that time frame.



Now Devinder is going to go through much more of this towards the end of the afternoon. But I thought I would, again, give you a preview of what that long-term financial model looks like. So in 2017, at our Analyst Day, we set out a model, which at the time, some asked whether it was a bit aggressive. It was double-digit annual revenue growth. It was gross margins at about 40% to 44%. We were about 30% margin in 2015, and it was about increasing profitability and cash. And I'm happy to say, if you look at the numbers for 2019, we more or less met these goals. And in some cases, we've exceeded these goals. And so we feel good about the progress. Now as we project out for the future, we're even more excited about what we think we can accomplish. And for our long-term model that we're going to talk about today, it actually has a few very important components. The first is we believe we can accelerate growth. With where our products are positioned with where we are positioned with our customers, we believe that growth will accelerate, and we estimate for the new model. We're calling this a long-term model, think about it as 4 years. So let's call it, like a 2023 model. We believe we can deliver approximately 20% compound annual growth rate over that time frame. As we expand what we do in the commercial markets, particularly in data center and commercial PCs, that will help expand our margins. And so we see margins increasing from where we ended 2019, approximately 43%, to greater than 50%. We think operating margins are in the mid-20s. So again, I think we're building a very balanced model, where you see growth, where you see investment, margin expansion on both the gross and the operating line. And again, our goal is to generate a significant amount of cash as we grow the business. So Devinder will have much more on this, but hopefully, that gives you an idea of what we're trying to achieve. 
And when you think about all the technology and all the products and all the markets, this is the financial model.
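Projecting the model's ~20% compound annual growth rate forward from the 2019 revenue base gives a rough sense of the implied 2023 scale. This is our extrapolation of the stated model parameters, not a company forecast:

```python
def project(base: float, rate: float, years: int) -> float:
    """Grow a base value forward at a constant compound annual growth rate."""
    return base * (1 + rate) ** years

# ~$6.7B 2019 revenue compounded at ~20% for 4 years (through ~2023)
revenue_2023 = project(6.7, 0.20, 4)
print(f"~${revenue_2023:.1f}B")  # roughly $13.9B implied by the model's assumptions
```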



So let me finish up here and just state that, look, we are very ambitious with where we think we can take AMD. That ambition is motivated by building the best. And so that's the model for today. It's about building the best on both the technology side as well as the business side. And that comes with leadership in our road maps, that comes with really execution excellence. It comes with market share gains across all of our markets and a commitment to strong shareholder returns.



So with that, let me turn it over to my team, so that we can give you a lot more detail on how we get there. And with that, let me introduce Mark Papermaster to the stage to talk about our technology. Mark.



Mark D. Papermaster


Well, thanks to all of you for joining us here today. I've had the opportunity to share with you our technical approach here over the last 8 years on our journey back to high performance here at AMD. And as you think about that journey, we've embraced it as a company, we've embraced it in our strategy and in our culture. And so it's exciting to be here today and to talk to you about our view of our status of how we've done, and more importantly, our journey going forward, staying on that pace to deliver high performance to the market. And so Lisa touched on that earlier, but it is so fundamental to what we do in our technology strategy at AMD that I'm going to spend just another minute talking about these workloads in the markets that we serve because it is an ever-changing landscape across each of these segments. But there's one thing in common, and that is that incredible demand for more performance, more high performance in each of these segments. In fact, it's actually an exponential growth in most of these segments. And it's not a surprise to you why that is, because you see it every day, you see smart devices around you. You have many appliances, you've got your smart home. You see it on the factory floor, where telemetry and smart functionality have been built in across the factory floor. You see it at the emerging edge of the market. When you look at how 5G will transform the market, it changes the analytics that has to occur at the edge of the network, in base stations, in the traditional telco closets across the industry. And it is the approaches that supercomputers have deployed, right, to bring massive-scale compute to solve the emerging workloads in AI and analytic challenges that we're facing. You look at decision-making, the massive data that's going into decision-making today is requiring large-scale simulations. What about content delivery? The Olympics are going to be broadcast later this year in 8K.
That's a tremendous demand in terms of delivery on that content, and of course, game serving, driving up the capabilities in our cloud server, and then bringing it down to the client interface level. You look at gamers, you look at content creation, you look at what we're all doing on our PC devices today. And we want more and more visualization, more clarity, more immersive experience and, of course, we want a higher capability and efficiency at that level -- at that client level. And so all of these factors come together, and that's what is driving our strategy. It's our role at AMD with the products we develop to enable our customers to harness that data and put it to work. And moreover, to do it with devices that are easy to program, that have been out there for years and have an entire ecosystem around them to bring those end solutions to bear. That is what drives our strategy. And we couldn't be where we are today, as Lisa mentioned earlier, without decisions that we made several years ago. And I will have to say that it wasn't hard for us, when we saw where the workloads were going, to call out that strategic focus on high performance; that was a strength that we knew we could tap, with such deep experience in products in that area. But we made a set of tough calls on investments. We fundamentally had to transform our ability to deliver that performance. So we had the building blocks, but we transformed that delivery process in a way that we could be trusted to deliver each generation after successive generation, be a trusted partner. So that was a change in our execution model. We changed our road map, and we changed the very way in which we put the IP blocks together through a modular approach. And so we'll spend a few minutes just looking at some of those accomplishments over the last 5-plus years.



I'll start with that reengineering of the engineering approach at AMD. Frankly, it's one of the key accomplishments; I'd say the changes we made in the delivery model at AMD were equal to the tough technology decisions. It was a shift to a culture of high performance, a culture of collaboration and a culture of top-flight engineering execution. And in the prior FAD, I shared with you some of the ways in which we were making those changes, and they have, in fact, proven to be highly effective. We talked before about this idea of leapfrogging teams, right? So what is a leapfrogging team? It's what we did on our CPU road map. We had multiple CPU lines, so we were split in our focus. We consolidated on a single Zen family, not just one product generation, but we were working on multiple product generations from the outset. And then what we do and what we still have today is we always have one generation going to market, one well along in the design and one in the conceptual phase. And so this has been fundamental to improving our ability, again, to be that trusted supplier to our customers. But we went much further than that. We changed the way in which we brought our IPs together. We historically had very different methodologies across the company. It was hard to collaborate when everyone is sort of rolling their own, doing it their own way. And so what happened when we adopted that modular approach? Yes, it was key to hitting the performance goals. That's what drove us: hitting the performance goals. But equally, it facilitated the cultural change because for modularity, you have to architect how the pieces come together, and that drives the collaboration across the company, drives that co-engineering of how we design going forward. And it frankly changed the plumbing of how we put products together forever going forward at AMD.
We also made a change in how we put our simulation together and how we verify our designs, because we needed to accelerate how we bring products to market. So we invested in our simulation and emulation techniques to drive earlier verification of the features that we're designing into our new products. And it turned out to be quite impactful. If you think about a schedule laid out in front of you, historically the validation of software features on top of the hardware sat to the right of a line, and that line is when the silicon comes back and you're testing that silicon. So that feature enablement was done in the bring-up in the lab with that silicon. What we've fundamentally done is shift that validation left. We've shifted it pre-silicon, leveraging these advanced simulation and emulation techniques that we've deployed. And yes, it had an immediate impact: when you validate pre-silicon and the hardware comes back, we're now completing that bring-up in hours and days versus the weeks and months of the methodologies we had historically used. It actually changed the way we even architect our solutions, because it enabled a parallelization of the hardware and software architecture in putting these solutions together. A great example of that is Modern Standby. Modern Standby, for any of you who have laptops, is what's giving you improved performance and energy efficiency across your device. And it's complex because of the high degree of software-hardware-firmware interaction needed to get it done. So we partnered from architecture through delivery with Microsoft: our AMD software team, our AMD firmware team, chip design teams and platform teams all partnered together and leveraged these approaches, and you see the result.
We are shipping Modern Standby on our devices today, and you see it: it's working flawlessly. So that's an excellent example of how we changed our approach. And frankly, this is the new normal at AMD. This is our expectation. This is how we're developing all of our products going forward. And look at the execution this has led to on 7-nanometer. Lisa called out that that decision, that choice, had to be made some years ago, and we delivered, as promised, over the course of 2019 a comprehensive portfolio, over 20 products in the market now on 7-nanometer. Look at what 7-nanometer did: by doubling the density, you could do this in roughly the same power envelope. So look at our Ryzen road map, where we doubled the number of cores; Ryzen HEDT, high-end desktop, doubled the number of cores; EPYC server doubled to 64 cores per socket. And in the most recently announced notebooks with Ryzen 4000, we doubled to 8 cores while still having the integrated graphics capability for leadership, and with all-day battery life. So 7-nanometer was successfully executed and rolled out across more than 20 platforms. And of course, there's Navi in our graphics line, with impressive performance-per-watt gains leveraging 7-nanometer. So it's across our portfolio at AMD.



And what I'd like to do now is dive a little deeper on our delivery of that new Zen 2 core. When you look at that architecture, as I said a moment ago, 7-nanometer was key, of course, given the density and energy efficiency, but the design was fundamental to both the performance and the scalability of our implementation. We leveraged the second-generation Infinity Architecture, taking that Infinity fabric and broadening its application: it still provides the on-chip, socket-to-socket and server connectivity that we had done historically with our Infinity approach, but now we leveraged it to implement a chiplet approach. So what does it allow us to do? You look at our server implementation and you see 8 smaller, easier-to-manufacture 7-nanometer CPU die connected with that Infinity Architecture to a central 12-nanometer I/O and memory chip. So it delivered performance. It delivered all the scalability. It actually made that performance easier to leverage, because it created a single NUMA domain. And on top of that came the performance enhancement of feeding those engines even better, with PCIe Gen 4 doubling the rate we had before. So that Infinity Architecture was fundamental to putting the whole picture together, delivering new CPU performance, a 15% instructions-per-clock gain, at a holistic level to make sure that we could deliver the core performance. And look at that core performance: when we introduced Zen, it was a breakthrough, a 52% instructions-per-clock gain. As I said a moment ago, Zen 2 added 15% instructions per clock. But the knock we had taken on that first generation was that we still had a gap on single-thread performance.
And we set out with Zen 2 to eliminate that gap. You can see on Cinebench single thread, which is a great representative benchmark for content creation, a 21% improvement on single-thread performance, the majority of it driven by the design improvements. We changed the branch prediction scheme to be more accurate, giving better code prediction. We improved the pipeline execution, expanding the width as well as our dispatch efficiency. We doubled our floating point to a 256-bit-wide datapath, and we doubled how we feed it, so it's a true doubling in floating point; with the doubled core density, that's actually 4x the floating-point capability on every Zen 2 product that we ship. And particularly important to gamers and to servers is the latency to memory. We improved that effective latency, and we doubled the L3 cache. So these are really important decisions that deliver real-world performance, and that's what's driving the rapid growth in acceptance of the Zen product family in the industry. I want to show you just a few stats on that. This is a unit count, of course, so clearly the highest volume is in the client products. With that Ryzen launch in early 2017, about a year later, 50 million Zen cores had shipped in the market. We bumped that performance with the key Zen+ release, and you can see that we jumped to almost 160 million devices with that second-generation Ryzen.
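As a back-of-envelope check, a short Python sketch (using only the figures quoted above) shows how the 2x floating-point width and 2x core density compound to 4x, and how the generational IPC gains stack multiplicatively:

```python
# Back-of-envelope arithmetic using the figures quoted in the transcript.

# Zen 2 doubled the floating-point datapath width and doubled the
# core density; the two factors multiply.
fp_width_gain = 2.0
core_density_gain = 2.0
fp_throughput_gain = fp_width_gain * core_density_gain
print(f"Peak FP capability gain: {fp_throughput_gain:.0f}x")  # 4x

# Generational IPC gains compound multiplicatively:
# Zen was a 52% IPC jump over its predecessor; Zen 2 added 15%.
zen_ipc_gain = 1.52
zen2_ipc_gain = 1.15
cumulative = zen_ipc_gain * zen2_ipc_gain
print(f"Cumulative IPC gain since pre-Zen: {cumulative:.2f}x")  # ~1.75x
```

This is only the compounding arithmetic implied by the quoted numbers, not a performance measurement.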



But critical was Zen 2 and the acceptance it has had, really blowing through the historic cap on CPU performance in the industry. In just about 8 months, you see that we're now at over 260 million Zen core family chips in the industry. So there's very wide acceptance, and with the strength of our road map, you'll see this curve do nothing but increase in steepness going forward.



And so as I wrap up, looking at that set of products and the execution that we've had in -- just in the recent past, I'd be remiss if I didn't take a moment and talk about product security because it is the foundation. You have to be trusted to be able to sell your compute devices. It's the bedrock of what we do. It's job 1 at AMD, and so of course, we include a dedicated secure processor in every device that we ship. We boot up in a trusted environment. We authenticate before we communicate and talk to any other device, and that continues strong. We continue to be very resilient in terms of the security of our CPU designs, and we remain focused every day.



You look at security events. We were affected by Spectre v1 and v2, like every CPU in the industry. We implemented software mitigations immediately, and we hardened against Spectre v1 and v2 in Zen 2. But the resiliency of our designs means that when you look at the other side-channel attacks, where you've been reading continued reports, our architecture was not affected by Meltdown. Our architecture was not affected by SWAPGS. We were not affected by ZombieLoad 1 or ZombieLoad 2. So we remain incredibly focused here, and we will continue to roll out enhanced security features.



Our encryption has been well accepted in the industry: Memory Guard, where we have full memory encryption, seamlessly implemented across our devices. The ability to then move to virtualized machines and have unique keys across unique instances in a cloud environment has been very well received, and it's now rolled out across the OS and hypervisor partners in the industry. And the way that we do that, of course, is without the need for any code modification. It does not require any lift of our end users' applications. And so you're seeing further adoption of this approach.



And then going forward, again, we're continuing to invest in modern security across our client devices. We will be increasing our security in those cloud environments, multi-tenant environments. We already have the leading-edge encryption capabilities I just described with SEV, but we're growing that to add a capability to protect against what's called a malicious hypervisor, the case where a bad actor somehow gets into even the hypervisor of those cloud applications. You'll see that coming in our road map soon.



And then lastly, I'm very pleased to announce that AMD has joined the Confidential Computing Consortium. We feel very proud of the advances I just described to you, but we want to join with others in the industry and work to close the final gap to protect data throughout its entire life cycle. So again, security will remain a bedrock and foundation of everything that we do.



Okay. Let's shift gears. Let's go forward and look at our investments, because as excited as I am about what we've done, I couldn't be more excited about what's coming in the future. How we put our solutions together around our CPU and GPU road maps will be very exciting as we leverage the process and packaging investments we're making, as well as our next-generation interconnect. And what's so special about these investments is how they come together to enable accelerated computing and how that will drive our platforms going forward at AMD, along with the software stack and other features that you'll hear about next from David Wang.



So let's start with the road map. Sustained execution: we talked about how critical that is to the very foundation of AMD. You saw the progress we've made with Zen 1 and Zen 2. Zen 3 is right on track. It's coming along well and is on track for delivery late this year. And what I'm really excited about is sharing that Zen 4, our next generation, is a 5-nanometer design. We're working with our foundry partner on 5-nanometer in the same close partnership that we had for 7-nanometer. We bring the know-how of how to marry design and foundry technologies for high performance, and so we're continuing that same partnership and that execution for 5-nanometer.



So look, we called out a very clear CPU strategy. Our road map is stable. We're executing. We are heads-down and executing, and this trajectory will keep us positioned to be that trusted supplier and to meet the demanding applications going forward.



And I have to tell you that when we designed that road map, we designed it assuming, as for most of my career, that we would have that continued gap versus the process capabilities of our x86 competitor, because that's the world we lived in. That's what we designed for, and that's what we put in place. We anticipated that FinFET would help close that gap, which it did; 14-nanometer FinFET made very good advancements. The plot you're looking at shows relative server product density and relative server performance per watt versus our competition. You can see that we had always learned to design with efficiency to account for some of the gaps that we had. But what was historic was 7-nanometer. We did not anticipate that at 7-nanometer we'd actually have a leadership process capability.



And so what we are doing going forward is continuing that partnership, executing very, very closely with our foundry partners, and we're assuming that our competitor will address this and come back. But once you have a gap, and once you have some of the issues that may cause a delay in a new technology, it's going to take some time, right? So we're assuming that the competition will come back at some point, but we will keep our trajectory at the same pace of competitiveness it's been on.



And in an era of Moore's law slowing down, once you have that historic level playing field, it is about how you put the solutions together. So it is about these innovations of integrating solutions, and that's, again, where AMD will not let up. We have been at the forefront of packaging technology. Go back to 2015, when we implemented our first-generation stacked high-bandwidth memory over a silicon carrier, 2.5D packaging with a GPU, leading in applying this approach, which lowers power and dramatically improves GPU performance. We've continued that approach with our high-end GPU products. On the CPU side, we've had excellent experience with multi-chip approaches and then innovated, as we talked about, with the Zen 2 chiplet approach to give us tremendous configurability as well as more performance and scalability going forward.



And what I'm really excited about is sharing with you our view of the future, because what we're working on with several of our future products is actually marrying those approaches together: improving the density and combining what we have practiced in 2.5D and 3D packaging with chiplets. We call this our X3D approach, and it plays perfectly into the modular approach that we have at AMD. So it's an investment in packaging, driving flexibility, density and efficiency going forward, together with our Infinity Architecture.



And so if you look at the road map of the Infinity Architecture, it first allowed us to have breakthrough CPU connectivity, which you saw in our first generation. With the second generation came that high-performance chiplet implementation in CPUs, and we applied it as well on our GPU road map to connect GPUs in 4-way or, forthcoming, up to 8-way configurations. So that was a huge step forward in terms of our scalability and our road map.



And very exciting to share with you today what will be coming out in future products is our third-generation Infinity architecture. And with that, we complete that connectivity across our CPU and our GPU road map of bringing that Infinity architecture linkage and bringing a coherency across those engines.



What does that mean? It means performance. It means efficiency. It gives us unprecedented bandwidth in the industry, which you'll see between those devices. It reduces latency, but more importantly, it unifies the addressability. As you cache from the GPU into the CPU, it looks like one unified pool of memory that's available, and with the coherency, it's easier to program; the programmer doesn't have to manage those transfers. It's very straightforward to access.
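To make the programming-model difference concrete, here is a deliberately simplified Python sketch, not AMD's API and with invented function names, contrasting explicit copy management with a single coherent address space:

```python
# Toy illustration (invented names, not a real GPU API): with separate
# address spaces, the programmer stages explicit copies; with a unified,
# coherent address space, both "devices" operate on the same buffer.

def saxpy(a, x, y):
    """The same kernel body runs in either model."""
    return [a * xi + yi for xi, yi in zip(x, y)]

# Discrete model: explicit host-to-device and device-to-host staging.
def run_discrete(a, x, y):
    dev_x = list(x)          # copy host -> device
    dev_y = list(y)          # copy host -> device
    dev_out = saxpy(a, dev_x, dev_y)
    return list(dev_out)     # copy device -> host

# Unified, coherent model: one pool of memory, no staging to manage.
def run_unified(a, x, y):
    return saxpy(a, x, y)    # CPU and GPU see the same addresses

x, y = [1.0, 2.0], [10.0, 20.0]
assert run_discrete(2.0, x, y) == run_unified(2.0, x, y) == [12.0, 24.0]
```

The results are identical; what coherency removes is the staging code and the bookkeeping around it.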



What do these types of improvements mean? Well, look at machine learning training workloads that demand massive amounts of data, where the models are growing tremendously. They demand this type of connectivity. Look at another example, motion picture rendering, which needs that combination of both CPU and GPU. So many other applications are coming up. We're very excited about what our third-generation Infinity Architecture will enable.



And you don't have to look further than accelerated computing to see how this is going to play out, and you don't have to think that far back, either. Go back to 2012. That's when gradient descent analysis sort of spun up this whole AI approach for rapidly managing data. That's where you start to see supercomputing take advantage of GPU and CPU together: the first wave of heterogeneous computing, leveraged for both the high-performance computing, or HPC, markets and machine learning.



But now the model sizes have exploded. The previous approach isn't good enough. It takes the combined, optimized system of CPU, GPU and the ability to bolt on accelerators that we're providing. That is the next era of computing: the exascale era of computing. And I will tell you that it was a heated competition for the systems that Lisa talked about at the Department of Energy, Frontier with Oak Ridge National Lab and El Capitan with Lawrence Livermore National Lab. And we love the fight at AMD. That is what we're made of. That fight drove us to improve our road map and our competitiveness, to go head to head and simply beat the competition. And it is laying the foundation for our next long-term investments at AMD, because it drove us to the edge of what was possible.



And that's why I'd like to end my comments on that point about the long term, because that's what we're about. We have a deep R&D commitment at AMD. We have deep experience at AMD, and now we've tied an incredible execution culture to that. Our core road maps will deliver relentless compute gains, as well as efficiency gains, generation after generation. We've led the industry in innovation with modular and chiplet approaches, and that will keep us on a Moore's law pace of performance even as Moore's law itself, the semiconductor node alone, is tailing off. We've invested here for the future. And our execution approach, our culture, is in fact a differentiator, and it's allowing us to be a trusted supplier to Fortune 500 companies across the globe.



And lastly, our successful journey to exascale computing is driving the next wave of innovation at AMD. We will not let up on the pace of development at R&D. Thank you very much.



And with that, I'm really pleased to introduce my partner in technology development, Senior Vice President of Radeon Technology Group, David Wang.



David Wang


Thank you, Mark, for covering the exciting CPU and interconnect technologies and the future of accelerated computing. So now it's my turn to talk to you about GPU.



Some of you may know I rejoined AMD 2 years ago. I have spent most of my career doing GPU development, and my top goal here is to drive GPU leadership. So I'm very happy to be here to share with you our GPU technology road map and our journey to drive leadership in gaming, in the data center and in accelerated computing.



Okay. Now let's dive in. Let's start with our vision. Our vision is very simple: we want to drive AMD Radeon technology everywhere. And indeed, we have made tremendous progress expanding our Radeon ecosystem across PCs, Macs, game consoles, the data center and mobile. This is a very, very broad ecosystem, spanning from cellphones to supercomputers. And driving GPU leadership across such a broad spectrum of workloads requires a huge focus on technology development. So next, I'll talk to you about our technology development strategy.



We have a very simple and clear strategy on process, architecture, efficiency and software. On the process side, we're driving aggressive adoption of advanced process technology, as Mark mentioned, because process is the foundation. But it's not enough. We also want to develop domain-specific architectures, so the architecture is optimized for its workload. More on that later. We want to drive aggressive performance-per-watt and performance-per-area efficiency improvements, because leadership in performance per watt drives higher ASPs, and improvement in performance per area continually drives down cost. And lastly, we want to leverage our open source software strategy to continue to expand our ecosystem.



So Mark talked about the process; I'll cover the rest in my presentation today. We want to develop domain-specific architectures because, with Moore's law slowing down, it's very, very challenging for a general-purpose architecture to achieve optimal efficiency for both gaming workloads and high-performance compute workloads. Therefore, we are shifting our strategy from a GPGPU type of architecture to domain-specific architectures: the RDNA architecture optimized for gaming, and the CDNA architecture optimized for compute, which Lisa mentioned earlier.



This allows us, through domain-specific optimization, to achieve optimal efficiency for gaming and for compute. It also means end users don't have to pay for performance and features they don't need for their application. So it's a win-win.



I'll cover RDNA next, followed by CDNA. We launched the RDNA architecture last year. It was an all-new architecture designed and optimized for gaming, with the objectives of driving efficiency and performance for modern gaming workloads, improving power and bandwidth efficiency, and providing a flexible platform to implement software features that enhance the gaming experience. And lastly, the architecture was made to be super scalable, able to support everything from mobile gaming to cloud gaming.



In order to achieve these objectives, we developed the following architectural innovations. We have a new compute unit design, a very, very efficient pipeline for diverse and dynamic gaming workloads. We developed a new multi-level cache that can feed data to the new compute array in an energy-efficient way. And we also optimized our graphics pipeline to deliver the highest performance per clock and improve the clock frequency. As a result, we were able to deliver a more than 50% performance-per-watt improvement from GCN to RDNA through a combination of architecture, 7-nanometer gains and design optimization.



So now let's look beyond RDNA. RDNA is the new foundation of our multi-year, multi-generational gaming GPU road map. RDNA 2, our next generation, will continue to innovate to raise the bar on performance per watt while adding advanced features such as ray tracing and variable rate shading. You're going to see RDNA 2-based products from AMD and from our partners later this year.



And with our strong development pipeline, RDNA 3 is also underway. We'll continue to push for higher perf-per-watt improvements and new features. So stay tuned.



Now let's take a closer look at how we are improving the RDNA perf per watt. We are leveraging the proven CPU design methodology from the Zen road map to make similar perf-per-watt improvements on the RDNA road map. We focus on 3 main areas: micro-architecture innovations to improve the perf per clock, or the IPC in CPU terminology; enhancing the logic, reducing logic complexity and switching power; and, putting it all together, a new physical design flow to drive the highest possible clock frequency, multi-gigahertz clock frequencies for our graphics engine.



With all these enhancements, our plan, our target, is to drive another 50% improvement from RDNA to RDNA 2. As you can see, we have established a very strong generation-over-generation perf-per-watt improvement road map, which will extend to RDNA 3. And this strong road map allows us to really drive desktop and notebook gaming leadership. It's also a key reason why Samsung has chosen our graphics IP even for their mobile applications.
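Those three levers multiply, which is how a 50% generational perf-per-watt target compounds. A small Python sketch with illustrative, made-up factors (not disclosed AMD numbers) shows the shape of the arithmetic:

```python
# Perf is roughly IPC * clock; dynamic power is dominated by switching.
# Perf/watt improves multiplicatively from each lever. The individual
# factors below are illustrative placeholders, not AMD's figures.
ipc_gain = 1.15          # micro-architecture (perf per clock)
clock_gain = 1.20        # new physical design flow (clock frequency)
power_factor = 0.92      # reduced logic complexity / switching power

perf_gain = ipc_gain * clock_gain
perf_per_watt_gain = perf_gain / power_factor
print(f"Perf/watt gain: {perf_per_watt_gain:.2f}x")  # 1.50x for these factors

# A ~50% gain per generation compounds across the road map:
rdna_to_rdna2 = 1.5
rdna_to_rdna3 = rdna_to_rdna2 * 1.5  # if RDNA 3 repeated the gain
print(f"Two generations of 50%: {rdna_to_rdna3:.2f}x")  # 2.25x
```

The point is that modest per-lever gains multiply into a large generational target.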



Now let's talk about ray tracing. Ray tracing is an interesting technology for gaming. However, as you all know, adoption has been slow, mostly because of a lack of content, a lack of hardware and the performance penalty when gamers [ turn down. ] So we developed an all-new hardware-accelerated ray tracing architecture as part of RDNA 2. It is a common architecture used in the next-generation game consoles. That will greatly simplify content development: developers can develop on one platform and easily port it to the other. This will definitely help speed up adoption.



And we also provide lower-level API support. That gives more control to developers so that they can extract more performance from the underlying hardware platforms. This will help mitigate the performance concern.



You can take a look at the image. It was rendered on RDNA 2 silicon running the latest Microsoft DXR 1.1 API, which was co-architected and co-developed by AMD and Microsoft to take full advantage of the common ray tracing architecture. This is a great proof point of the benefit of Radeon everywhere. I encourage you to check it out in the demo area after our presentation.



So now I want to switch gears to the compute GPU. We mentioned that we want to develop domain-specific architectures in order to improve efficiency, which is extremely critical for data center operations. That's why we designed the CDNA architecture, optimized for data center compute, with these objectives: to enhance performance for HPC and machine learning workloads with the specialized compute of the tensor operations we've added; to reduce the data center total cost of ownership through the same type of perf-per-watt efficiency improvement we borrowed from the RDNA methodology; and to add features that enhance enterprise-grade RAS, security and virtualization support, because all of these are critical for data center operations. And lastly, the architecture must be scalable in order to scale performance for multi-GPU and exascale computing, of course leveraging the Infinity Architecture that Mark mentioned earlier. So this is our multi-generational compute GPU architecture road map.



We introduced our first 7-nanometer GPU last year, based on the GCN architecture. This year, we'll launch our first CDNA-based product, equipped with the second-generation AMD Infinity Architecture that greatly enhances multi-GPU connectivity. The product will be optimized for HPC and machine learning applications. And moving forward, our next-generation CDNA 2 architecture will continue to push on performance per watt, but even more importantly, it will be equipped with our third-generation Infinity Architecture with CPU/GPU coherency, which will extend the architectural capabilities to support exascale computing.



So we have a very, very strong road map for data center GPUs. Now let's take a look at the software, because hardware without software doesn't go too far. Our data center compute software is based on what we call ROCm, the Radeon Open Compute platform. It is fully open source. Our partners and customers love open source because it enables them to innovate and differentiate, to create their own value-add solutions without being locked into proprietary solutions. ROCm also provides multi-platform support. People can develop in HIP code, a platform-agnostic open source API, and that can be compiled and run on any existing GPU. And we have also provided tools, translators, for people who want to convert their CUDA code to HIP code to preserve their investment while moving to an open platform.
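The translators mentioned above work largely by mechanical source translation. As a rough illustration (a toy sketch in Python, not the real hipify tool, which also handles kernel launch syntax, headers and library calls), the core idea is renaming CUDA runtime calls to their HIP equivalents:

```python
import re

# Toy sketch of the kind of mechanical renaming the hipify translators
# perform. The four API-name pairs below are real CUDA-to-HIP mappings;
# everything else about this implementation is simplified.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Replace known CUDA runtime calls with their HIP equivalents."""
    pattern = re.compile(r"\b(" + "|".join(CUDA_TO_HIP) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

cuda_src = "cudaMalloc(&p, n); cudaMemcpy(p, h, n, kind); cudaFree(p);"
print(hipify(cuda_src))
# hipMalloc(&p, n); hipMemcpy(p, h, n, kind); hipFree(p);
```

Because the HIP names mirror the CUDA names one-for-one, the port preserves the structure of the original code, which is what makes the investment-preserving claim credible.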



And lastly, the ROCm software architecture was built to scale. It scales multi-GPU performance, which is critical for exascale computing. And the software can also leverage the expanded bandwidth and coherency of the Infinity Architecture.



We have been increasing our software investment over the last few years, so let's look at our progress. The journey started in 2018. At that time, as you can tell, quite a few software components were still in the early phase of development, but we had built a solid foundation. Last year, we put our focus on building a complete software stack to accelerate machine learning, and by working closely with Google and Facebook, AMD GPUs are now officially supported by TensorFlow and PyTorch. The combination of the two represents more than 95% of machine learning applications, so I would say we're pretty well covered there.



And this year, our plan is to develop the complete exascale software solution to cover both machine learning and, extending further, HPC. This is to support our supercomputer design wins. And there's no better way than working with exascale customers to make sure our software is ready for large-scale deployment.



And lastly, our ROCm ecosystem is also growing. This is a partial list of our ecosystem partners in operating systems, compilers, libraries and applications. We have a great community support, which is important, so we can offer the end-to-end solutions for our customers.



Now let's quickly look at our performance. The chart on the left shows our continuous performance improvement efforts. You can see that from software release 2.0 to release 3.0, we almost doubled the performance running on the same hardware. This demonstrates the maturity of our machine learning software stack: the frameworks, the compilers and the libraries. The right-hand side of the chart shows our multi-GPU performance scalability. We can achieve almost linear scalability from 1 GPU all the way to 16 GPUs. Again, it demonstrates the maturity of our communication library and the benefit of the Infinity Architecture. And this scalability is critical for exascale computing.
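The scalability result quoted above reduces to a simple ratio. A quick Python sketch, using illustrative throughput numbers since the chart's absolute values aren't given here:

```python
# Scaling efficiency = speedup / GPU count. "Almost linear" from 1 to
# 16 GPUs means this ratio stays near 100%. The throughput numbers
# below are illustrative placeholders, not values from the chart.
def scaling_efficiency(one_gpu_throughput: float,
                       n_gpu_throughput: float,
                       n_gpus: int) -> float:
    speedup = n_gpu_throughput / one_gpu_throughput
    return speedup / n_gpus

# E.g., a 15x speedup on 16 GPUs:
eff = scaling_efficiency(100.0, 1500.0, 16)
print(f"Scaling efficiency: {eff:.0%}")  # 94%
```

Anything much below that ratio would indicate communication overhead dominating, which is exactly what the communication library and Infinity Architecture are meant to avoid.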



So pulling everything together, only AMD can provide the combined advantages of CPU plus GPU plus open source software. We can provide fully integrated CPU-plus-GPU systems and the unified tools that make it easy for people to develop their applications. And AMD's Infinity Architecture connects many CPUs and many GPUs together through enhanced bandwidth and cache coherency.



Together with our open source software, we can demonstrate a performance advantage compared to the competitors' combinations, as you can see in the chart. These combined advantages of accelerated computing really enable us to drive high-performance computing leadership, and they're a key reason why we were chosen for the supercomputer design wins.



Okay. In summary, we have developed a strong GPU technology road map based on advanced process technology, the RDNA and CDNA architectures, aggressive perf-per-watt efficiency enhancements and our open source software. This strong technology road map puts us on the path to leadership. And we are very, very focused on executing to our plan and to our commitments. This will enable the next wave of winning products.



That's the end of my presentation. Thank you very much. And with that, I'll introduce Rick Bergman to tell us about the exciting business and products for the PC and gaming business. Thank you.



Richard A. Bergman


Well, thank you very much, David. It's quite exciting to see your passion around GPUs.



So I thought I'd first start off by reintroducing myself. Like David, I recently joined AMD, after 8 years as President and CEO of Synaptics. Prior to that, I had a 30-year semiconductor background, starting out with Texas Instruments and then a decade at ATI and AMD. While at ATI, I led the business that grew our GPU share to #1 in the marketplace for discrete graphics. While at AMD, I led the team that developed the first APU, or accelerated processing unit, the first time an x86 core and a GPU were combined together.



But that's all in the past, and it's exciting in its own way. What I'm really excited about now is the opportunity in front of AMD, and that's what I'm going to talk about over the next little bit. So why am I so excited? It really comes down to 3 things. First, and you heard this from Mark, we've created this execution machine on processors, this regular cadence of leadership products coming out. Second, I can now see clearly that the same thing is happening on GPUs; David talked about our RDNA, or Navi, generation of products coming out in that similar fashion. And third, wow, we get to combine these into incredible APUs. So it's just a really, really exciting time in AMD's 50-year history.



And there's a strong path forward as we look at where we can take this. So my role as head of the Computing and Graphics group is to turn that into sustainable growth in our market share and our profitability with the platforms moving forward. There are 1.5 billion active PC users out there, an exciting user base, and they have different needs. You have some consumers that are looking for a thin-and-light laptop. Of course, we have the enthusiasts that want the absolute bleeding-edge technology. And then you have content creators that view their PC as a tool. We've kind of rewritten the rules there, going from 16 to 64 cores, but at the end of the day, AMD has to pull all that together. And there's no company positioned better than AMD to pull that together into solutions.



And so what is the opportunity? It's a $32 billion opportunity in the PC market for us, so a very sizable market. We started with the Ryzen 5 and 7 desktop parts and have since expanded to full TAM coverage with Ryzen products. We're focused on executing across markets. So whether it's desktop, notebook, high-end desktop, commercial or consumer, we have the opportunity to lead in all of those markets.



So as Lisa mentioned, we've disrupted the market. And in some ways, it's so exciting that we have the fastest solution in 3 distinct segments of the marketplace. Of course, our Ryzen desktop parts, using the chiplet approach and the power of 7 nanometers, are the clear leaders in desktop. There's Threadripper for the high-end desktop as well, the world's first 64-core desktop processor, allowing content developers, in some cases, to cut rendering time by 50%. And then in notebook, we launched the third-generation Ryzen mobile product back at CES, again setting a new level of performance for ultrathin notebooks.
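The 50% rendering-time claim above is consistent with simple parallel-scaling arithmetic. As a rough illustration (the 95% parallel fraction below is an assumption for the sketch, not an AMD figure), Amdahl's law predicts how render time shrinks going from 16 to 64 cores:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup of a workload of which only
    `parallel_fraction` can be spread across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical: a render job that is 95% parallelizable.
t16 = 1.0 / amdahl_speedup(0.95, 16)   # relative render time on 16 cores
t64 = 1.0 / amdahl_speedup(0.95, 64)   # relative render time on 64 cores
print(f"64-core render takes {t64 / t16:.0%} of the 16-core time")
```

With a 95% parallel workload, quadrupling the core count cuts render time by roughly 40%; the more parallel the renderer, the closer you get to the 50% reduction cited in the talk.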



So how does that all add up? Well, let's look at market share in unit shipments. We've almost doubled our unit shipments in just 2 years. As you'd expect, our market share has followed a similar trend. But in addition, our ASPs, or average selling prices, have also increased each year. So if you think about that, we've expanded our Ryzen stack, increased our unit shipments and also increased our ASPs, meaning we're moving up into the premium segments of the marketplace, going from 30% TAM coverage to full TAM coverage.



So what is our path forward over the next 5 years? Well, certainly, we want our growth to be long-term, sustainable and built across diverse segments. We've done a great job on the desktop side, but we're going to aggressively pursue the notebook and commercial segments as well. That's untapped potential for AMD. For commercial, we're a natural fit, as Mark touched on: we have the security technology, the reliability and the performance to be quite successful in that segment, and we offer multi-generational support with CPU cores based on Zen technology. Of course, we have the GPU cores as well, but what's really exciting, again, is pulling it all together into SoCs, and we clearly have the best SoC capability and products coming to market in 2020. I'll touch on a few of those through the course of the presentation. And we need software, of course, to pull it all together and deliver the full potential of the hardware.



So now let's touch on desktop leadership again. Almost any way you look at it, AMD has leadership in the key desktop markets. With our desktop Ryzen products, you can go out to the websites, see the reviews, see the prices we're getting, see our market share at retailers around the world. Or you can look at the high-end desktop solutions, where we have those 64 cores delivering 3x the performance of our competition. In virtually any workload, AMD is beating the competition. But don't just take our word for it. Go out to the respective websites or reviews, and you'll see review after review talking about how AMD has clear leadership in these segments.



So now how do we transfer that desktop leadership into notebooks? Notebooks represent 64% of the TAM out there. And certainly, performance is important in the notebook business, but we want to win on the entire user experience as well. We want our customers, when they get an AMD-based notebook, to love that experience from the day they take it home and open up the box all the way through the lifetime of that notebook. In the case of productivity, we've doubled the multi-threaded performance generation over generation. In the case of responsiveness, we've really worked on our drivers, and Mark talked quite a bit about modern standby: year-over-year, we have 5x the number of platforms that support modern standby versus the prior generation. And then, of course, battery life, where we've moved all the way up to as much as 18 hours with our new Ryzen product.



And so we announced at CES our third-generation AMD Ryzen mobile processor to quite a bit of fanfare and tremendous excitement at the OEMs. It's the world's first 8-core x86 mobile processor and the world's first 7-nanometer mobile processor, with tremendous graphics and tremendous battery life. The average lifetime of a notebook is about 4.8 years, so let's call it 5 years. If you go back 5 years and look at how far we've come, we've increased the compute power by 6x, we've increased the graphics by 3x, and the battery life of a notebook is now 3 to 4x longer. Just tremendous progress, and it tells you where we're going to be 5 years from now as well.



This is really a watershed moment for AMD when you look at the performance. Of course, we've led in graphics for a decade; it's just expected now that we'll have the best graphics solution in the industry. With the Zen 2 core, our multi-threaded performance has also been leadership. But what's new now is single-thread performance: AMD has taken over the leadership in single-thread performance. So in the 3 major categories of performance, AMD is the leader.



So what does that mean? At some point, the numbers start to speak. You can see we've more than doubled our market share in the notebook space. And keep in mind, we've been able to accomplish that without 7 nanometer. This is pre-third gen. Without 7 nanometer, without Zen 2, we've been able to more than double our market share. And now look at the number of platforms we have, going from 50 to 135 in 2020, an increase of roughly 170%. It's a leading indicator of where this business is going. And it's with all the top PC OEMs as well, HP, Dell, Lenovo and so forth, selecting us for everything from premium notebooks all the way through to entry-level notebooks. Clearly, great progress.



The other area I mentioned that's so important to us now is the commercial business. It represents 48% of the TAM, so it's a great opportunity to continue that growth momentum. Of course, the decision criteria for commercial products are a little different. Performance still matters; you want your workers to have snappy performance, of course. Security is absolutely critical, and I'll talk to that on the next slide. You want battery life that keeps your employees going all day long no matter where they are. And then, of course, manageability for deployment, imaging and management within a modern IT infrastructure.



So let's talk for a minute about what security requires. Security is just so important in today's world. We have a 3-level approach to security. The first level is the processor level. There, as Mark mentioned, we have a built-in, dedicated security coprocessor to protect your PC. Then, as you expand to the platform level, we have features like Memory Guard, full encryption of memory, to protect your data while it's resident in your PC's memory or moving across the memory bus in either direction.
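On Linux, support for these memory-encryption features shows up as CPU feature flags. A minimal sketch, assuming a cpuinfo-style flags line (the sample string below is abbreviated and made up; the flag names `sme` and `sev` are what the Linux kernel reports on capable AMD parts):

```python
# Sketch: check an x86 `flags` line (as found in /proc/cpuinfo on Linux)
# for AMD memory-encryption features. `sme` = Secure Memory Encryption,
# `sev` = Secure Encrypted Virtualization. The sample line is abbreviated,
# not real output from any specific CPU.
SAMPLE_FLAGS = "fpu vme de pse tsc msr sse2 ht syscall nx lm sme sev sev_es"

def encryption_features(flags_line: str) -> set:
    present = set(flags_line.split())
    return {f for f in ("sme", "sev", "sev_es") if f in present}

print(encryption_features(SAMPLE_FLAGS))
```

On a real system you would read the `flags` line from `/proc/cpuinfo` instead of the sample string.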



And then you take it up to the next level, the operating system level. We work with Microsoft and the OEMs to ensure that we have the best security at the enterprise level as well.



And so we support things like ThinkShield from Lenovo or Sure Start from HP. So how do the numbers look here? As you can see, once again, very strong share momentum. But again, let's look at the leading indicator: how many design wins do we have? In this case, more than double the number of commercial platforms, going from 35 to 70-plus in 2020. And I'll remind you, this was without the new Ryzen PRO solution: no 7 nanometer, no Zen 2 cores. We have that momentum with the prior products. So really expect things to pick up in this space as well, again working with top commercial OEMs like HP and Lenovo. Lenovo has already announced their intention to use Ryzen PRO in a ThinkPad product, so there will be a full lineup of ThinkPad solutions leveraging AMD processors.



So, the road map. Of course, there's a great deal of interest in the road map. Over the last 3 years, we've had 3 generations of Ryzen products, and the Zen core is the critical foundation for the success of our products in this area. As you heard, by the end of this year, we'll have the fourth generation of Ryzen with the Zen 3 core. So it's going to be a very exciting year for AMD across all the different platforms that we participate in.



So now I'm going to shift gears. I just talked about the PC market; now I'll talk about the gaming market. There are 2.5 billion gamers out there in the world. Often you have different images of who a gamer is. In some cases, it's a businessperson playing Candy Crush, sitting at a gate waiting for their flight to take off. Or it could be an enthusiast, always pushing for the latest and greatest; we love the enthusiasts. It could be a teenager playing Minecraft at home on a PC, or perhaps a 20-year-old playing The Sims, also on a PC. A range of environments, different areas, different performance levels, all possibilities for Radeon technology. And you might ask, where does Radeon technology play? Well, it's everywhere. We have an installed base of over 500 million, and it's clear we're on a path in the next several years to get to over 1 billion Radeon users. But what's really remarkable about this slide is who is adopting Radeon technology. You have Google up there, you have Microsoft, there's Sony, there's Apple, and of course the PC vendors as well. These are companies that do their homework. They go out and look at who's got the best technologies. And invariably, they come back to us: when you want the world's best graphics IP, the best graphics solutions for gaming, it's AMD.



And so the TAM in this business is also sizable. It's a $12 billion market, and it's growing. RDNA, as you heard from David, really forms the foundation for powering the next decade of gaming. It's going to enable a full stack of solutions from AMD. And of course, every gamer knows it's not just about the hardware; you also have to have great software. So our first RDNA solution, what we call Navi 1X, we introduced last year. It was a very ambitious introduction: a brand-new, ground-up architecture on a leading-edge 7-nanometer process, key features like GDDR6, and we also led the industry with PCIe 4.0. A very bold architecture. We targeted the largest audience in the PC market, really around 2 distinct segments: first, the 1080p market, a resolution that represents about 60% of PC gamers; and then the 1440p market, which is the majority of the balance of the gamers out there. Two product lines: the Radeon 5600 and the Radeon 5700.



Another thing to note here is the breadth of solutions. These are all the leading graphics add-in board vendors, and several of them also support us on the motherboard side as well. We have 3x the number of SKUs in this generation versus the prior generation due to the strong pull and interest for our Navi 1X products. And as further validation, the Radeon 5700 XT won GPU of the Year from PC Gamer. At the end of the day, what matters is how well you run games, and more importantly, modern, recently released games. So this is about a half dozen of the really critical games out there. Looking at our performance, as you can see, it's quite good, which means very fluid gameplay. And it also shows we're actually a little better than the competition, because we targeted the right area with the right architecture and the right solution.



And as I've mentioned software a few times, I just want to reemphasize the importance of software in this marketplace. We put a great deal of emphasis on it, because it really allows the performance and the features of our solutions to come out. So in December, we released our latest software, which we call Adrenalin, and enabled a new graphics user interface, making it easier for gamers to game. When brand-new hot titles come out, we have to be there on day 0 with optimized drivers that work flawlessly. That is our commitment. And then there are features. Software gives us a touch point with the gaming community, so we hear when they're looking for certain additional features, or maybe we innovate and come up with features ourselves. One example was Radeon Image Sharpening, which allows a better visual experience in games.



And so Navi shipped. But what's next, of course, is the big question. That will be our Navi 2X family of products, which we'll introduce at the end of this year. Clearly, we're stepping up, and we're targeting enthusiast-class performance as well, which means we have to have great performance at 4K resolution. Remember, I talked about 1080p and 1440p; the next big step is 4K. That means uncompromised performance and very fluid play. And we have to have stunning visuals as well. David talked about the importance of ray tracing for showing shadows and reflections while still gaming at a very fluid, consistent rate. We'll also add features like variable-rate shading to give you that performance uplift, which is so critical in this segment of the market.
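Variable-rate shading trades shading work for image detail by running the pixel shader once per block of pixels instead of once per pixel, then reusing that result across the block. A toy sketch of the idea (purely conceptual, not a real GPU pipeline; the hash "shader" is a stand-in for an expensive per-pixel computation):

```python
# Conceptual variable-rate shading: at a coarse 2x2 rate, shade once per
# block and replicate, cutting shader invocations ~4x in that region.
def shade(x: int, y: int) -> int:
    return (x * 31 + y * 17) % 256  # stand-in for an expensive pixel shader

def render(width: int, height: int, coarse: bool):
    img = [[0] * width for _ in range(height)]
    invocations = 0
    step = 2 if coarse else 1          # 2x2 blocks at the coarse rate
    for y in range(0, height, step):
        for x in range(0, width, step):
            c = shade(x, y)
            invocations += 1
            for dy in range(step):     # replicate the shade across the block
                for dx in range(step):
                    img[y + dy][x + dx] = c
    return img, invocations

_, full = render(8, 8, coarse=False)
_, vrs = render(8, 8, coarse=True)
print(full, vrs)  # 64 16
```

Real VRS hardware chooses the rate per screen region or per primitive, so full-rate shading is preserved where the eye notices detail and work is saved where it does not.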



And we'll take that Navi 2X stack, over time, across the entire top-to-bottom graphics lineup. So now, what does our GPU road map look like? It probably looks very familiar, because it mirrors the CPU road map I just showed. Remember, at the very beginning of my presentation, I said we're building that same clockwork, that same execution machine, on GPUs that we've had on CPUs. So we're going to have that cadence, that drumbeat. Navi 2X starts later this year, and the development team is already off working on the next generation, creating even better performance and an even better set of visuals, because it demonstrates our commitment not only to be in this market, but to win in this market.



And now let's talk about another key segment of the gaming market, which is game consoles. We have a 10-plus-year relationship with Sony and Microsoft, and as Lisa mentioned, we've shipped over 150 million units of the current generation. But we're on the cusp of the next generation. The next-generation consoles will use our latest Zen technology and our latest RDNA technology as well, creating a very immersive experience with ray tracing, 3D audio and fast load times that will really excite a new generation of gamers. So whether it's a Ryzen mobile processor, Navi 2X or one of these next-generation consoles, clearly 2020 is going to be a very, very exciting year for gamers.



And so I'll wrap it all up. You've heard about our great processors, you've heard about our great graphics and our incredible SoC capability. We'll have more APUs this year than ever in AMD's history, which gives us the opportunity to have leadership across all segments in PC and gaming. Remember, combined, that's a $44 billion market, and we have the opportunity to lead in every segment. As you walk the hallways here at AMD or at any site worldwide, it's absolutely clear that our employees are maniacally focused on executing to the road maps I showed today. There's really no reason at all that we can't seize the opportunity for sustained growth and success over the next decade at AMD.



Thank you very much. Now we'll take a brief break, and we'll come back with Forrest talking about our data center solutions.



[Break]



Forrest E. Norrod


Good afternoon, and welcome back. I'm incredibly pleased to be here to share with you an update on the progress we've made so far in the data center and the future that we see ahead of us. For us at AMD, when we think about our mission to deliver high-performance computing, there is perhaps no market for which that is more important than the data center. Because the data center market of today is a market seeing continuous innovation and continuous disruption, with new applications that weren't even dreamed of a few years ago coming to fruition all the time. With machine intelligence and with the advent of web-scale applications, it is truly an era of continuous change in the data center and an endless need for compute power and high-performance computing, which is a perfect match for our mission. And it's also a very large and interesting market. As Lisa said earlier, it's a $35 billion market in 2023, with a number of rapidly growing segments that we believe are a perfect fit for our technology and our direction.



And so I want to talk to you a little bit about how we're going to continue to attack this market, but let me first reflect on where we began this journey, this journey back to leadership in the data center. We embarked on this several years ago. We actually talked about this and our ambition to bring innovation back to the data center at our Financial Analyst Day back in 2015.



The first major milestone on this journey was when we introduced the first-generation AMD EPYC processor in 2017, which truly was the first competitive x86 server CPU to challenge our competitor in quite some time. That first EPYC processor, code-named Naples and incorporating the Zen core, began our tour of Italy and our return to the data center. We put up a road map showing what we were going to deliver from 2017 all the way out to 2020. One of the reasons we did that is because we knew that to be considered for the data center, much less to be a leader, we had to be not just a provider of high-performance components; we had to be a reliable partner, somebody that end customers could count on to be there with each generation, with new products and new innovations, maintaining the value of the investment they would have to put into any new entrant into the data center. And I'm incredibly pleased and proud to say that the teams have delivered with metronomic regularity on this road map: Naples in 2017, Rome, of course, last year, and Milan on track to ship this year, as Lisa mentioned earlier. So the execution is there, and the execution with the second-generation EPYC has delivered an amazing part. This is the highest-performing x86 processor ever, and it's not close. When we introduced Rome, it was twice the performance of the competitive x86 processor, which enables our customers, in many cases, to drive 25% to 50% lower total cost of ownership in providing data center services to their internal and external customers, allowing them to transform their data centers, their IT and their operations. And as Mark mentioned earlier, the Zen 2 core at the heart of Rome incorporates not just high performance but the security features that are so critical to securing the data center and our customers' valuable IT assets.
And so Rome has been a revolution in the data center.



And on the performance side, I'm incredibly proud to say that we have garnered over 140 world records so far. We're the ultimate in performance across a wide range of workloads, from enterprise IT through high-performance computing through cloud infrastructure. Rome delivers superior, world-record performance. If you take a closer look and consider the processors available for the vast majority of servers today, meaning 2-socket servers, and compare what we've delivered with Rome versus what the competitor has to offer today, including the processor refreshes they introduced just days or weeks ago, we clearly have an incredible performance lead: double the performance of the competitive offerings, inclusive of their most recent refreshes, in the 2-socket market. And it's the same story in 1 socket, where the unique value of Rome, and indeed Naples before it, was to establish a no-compromise 1-socket market. By not placing any artificial limitations on the availability or features of a single-socket processor, we have enabled customers who need all the reliability and resiliency of an enterprise-grade server processor to scale their processing to fit their application, and hence drive yet further TCO value.
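The TCO claim is easy to sanity-check with a back-of-the-envelope model. All numbers below are hypothetical illustrations, not AMD's figures; the point is that doubling per-server throughput roughly halves the server count needed for a fixed workload, and the saving compounds across purchase price and power:

```python
import math

# Hedged, toy TCO model: capex + energy over a service life.
# Every input here (throughput units, $8,000 server, 500 W, $0.10/kWh)
# is a made-up illustration, not a vendor figure.
def servers_needed(total_throughput: float, per_server: float) -> int:
    return math.ceil(total_throughput / per_server)

def tco(n_servers: int, capex: float, power_w: float, years: int = 3,
        usd_per_kwh: float = 0.10) -> float:
    energy = n_servers * power_w / 1000 * 24 * 365 * years * usd_per_kwh
    return n_servers * capex + energy

# Fixed workload of 1000 throughput units; the faster part delivers
# 2x per-server throughput at a modestly higher price.
baseline = tco(servers_needed(1000, 10), capex=8000, power_w=500)
doubled  = tco(servers_needed(1000, 20), capex=9000, power_w=500)
print(f"TCO reduction: {1 - doubled / baseline:.0%}")
```

Even with a higher per-unit price, the halved server count lands this toy example inside the 25% to 50% TCO-reduction range cited above; the real calculation would also fold in rack space, networking, cooling and licensing.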



Now Rick talked earlier about the various aspects of performance in the client market. And indeed, the performance metrics I've shown you so far demonstrate the throughput superiority, the multi-threaded superiority, of AMD EPYC over the competitive parts. That holds true even when you look at lower core counts and compare core to core. So in the heart of the market, where most enterprises buy, the 16-core processor segment, if you take a look at our performance there, again inclusive of the 2 brand-new parts our competitor just introduced, AMD EPYC has a performance lead and an unmatchable performance-per-dollar advantage: almost 3.5x the performance per dollar of the most recently introduced 16-core part from our competitor, a truly remarkable achievement.



So with that, I want to reflect on where we stand today. So today, we're in a place where we have demonstrated repeatedly predictable execution to the market, quite frankly, highlighted and thrown into high relief by uneven execution by others in this segment. We've demonstrated leadership performance on multiple dimensions, and we've demonstrated strong ecosystem support, where we have the OEMs, the software vendors and the IHVs that provide the rest of the ecosystem to compose an entire solution, all embracing EPYC. And so that's a great foundation. That's a great place to be. And so we're taking that foundation and our imperative, our mission now for the entire team is to continue accelerating growth of the AMD EPYC product portfolio. And so our mission is to broaden the deployments that have already begun, first with Naples and then more rapidly with Rome across enterprise, cloud and HPC. It's to work with our customers to continuously unlock all of the power and performance of our solutions by working with them to optimize their workloads and tune their software to get every last drop of performance out. It's to continue ramping our field and customer support organizations as our customer set grows and to support optimization and deployments. And then that's all in service, of course, of growing our market share.



Now on market share, the next significant milestone is one that we are going to hit next quarter when, as promised, we believe we will achieve double-digit server CPU market share in Q2 of this year. When we look in more detail across each of these segments, we've built a portfolio that really can demonstrate leadership performance in the workloads that matter. And the markets that matter, broadly speaking, are these 3: the cloud market, which now constitutes about half of the overall market for server processors; the enterprise IT market, which is a little north of 35%; and the HPC market, which constitutes most of the remainder. With the first-generation EPYC and that first-generation Zen core, we were able to produce a product, Naples, that effectively addressed about 60% of the workloads across those broad markets: the vast majority of HPC workloads, much of the cloud workloads and a good proportion of enterprise IT workloads. And for those, we had leadership performance and a great solution.



With Rome and the work that we've done with our ecosystem partners and end customers, we've expanded that footprint substantially. We see over 80% of the workloads across these segments being addressed best, in a leadership fashion, by second-generation EPYC Rome. To give you a few examples in each of those segments: on the enterprise side, we have a great performance story. With Naples, we had superior virtualization capability and superior TCO. With Rome, we've taken that to the next level, up to 50% TCO savings over the alternative. Beyond that, in many other application areas, in Java performance and database performance, we're providing nearly twice the performance of the competitive solutions. Even on things such as SAP, we've set world records and demonstrated demonstrably superior performance.



Couple that with the embrace of our OEM partners, who have greatly expanded the portfolio of platforms they make available to end customers, growing from 22 platforms available in 2017 with the first-generation EPYC processors to over 140 expected this year. We've got a winning solution for enterprise IT, and we've got broad support across the enterprise ecosystem on the OEM side, the software side and the IHV side. Our mission: keep driving, keep driving.



On the cloud computing side, it has been incredibly gratifying to watch the growth here as well. In 2018, our first instances launched with some of the major cloud providers; we had 18 public instances and services available on first-generation EPYC systems. This year, we expect over 150 public services and instances available, so that customers around the world who've embraced the cloud computing paradigm can do so with AMD EPYC and see the full performance of Rome. To support that, we have developed and deployed customized versions of the EPYC processors, uniquely tuned in many cases to ensure that the environment and the TCO are maximized for our cloud customers. And they, in turn, are using the fact that we can support 60% more virtual machines, each at the same performance level or better, that we deliver better Docker performance, and that we have much better performance per watt, which is critically important for any cloud provider operating at scale. And then we've got the memory bandwidth to unlock all of that performance and feed the beast, as Mark is wont to say, to keep all of those cores humming and to keep that performance available to end users.



With those advantages, it's no wonder that the world's leading clouds in the U.S., in Asia and in Europe have embraced AMD EPYC. A few examples: Google's largest general-purpose cloud instance, actually its largest instance of any type, is available on second-generation EPYC Rome. We've delivered more than 25% better TCO for Twitter. And Microsoft has a great set of instances, both general-purpose and purpose-specific instance types, including examples demonstrating over 60% higher SQL performance for companies embracing a move to the cloud. So a great place for us so far and a great place for us to continue to grow with our cloud customers.



On high-performance computing, of course, this is a real area of pride for us at AMD, as Lisa mentioned earlier. We have a part and a road map that have demonstrated leadership in HPC applications by virtue of a substantial lead in floating point performance, and demonstrated leads on commercial applications, with up to 72% higher structural analysis performance and 95% higher computational fluid dynamics performance than what's available from our competitor. Very importantly as well, and one of the things that has led to a number of major supercomputing design wins with large meteorological research institutions around the world, 120% faster weather forecasting performance.



And with the Frontier and El Capitan announcements, it is a real source of pride for the AMD team to be powering the world's fastest exascale systems and to be contributing to supercomputing and HPC clusters around the world, from industrial applications to research to defense.

A new area of focus for us has been to expand our reach beyond the traditional areas of cloud, enterprise and HPC and to dive deeper into the realm of telco and infrastructure products, which are themselves undergoing a revolution. Many of the systems powering today's networks have moved, or are moving, from proprietary hardware and proprietary ASICs to running in software on industry-standard servers. The thing enabling that is the incredible power of today's servers, which in turn gives the customers who have embraced that paradigm the opportunity to add significant agility and significantly better costs to their businesses. EPYC, particularly second-generation EPYC, is uniquely capable in this area: with double the I/O bandwidth per link, more links and more memory bandwidth, we demonstrate leadership networking data plane performance and are truly a perfect fit for the emerging 5G telco infrastructure.



Nokia, one of the leaders in telco infrastructure, recently demonstrated why they have embraced AMD Rome: they see twice the performance on their 5G infrastructure based on Rome than they see using our competitor's CPU. That's the type of performance Rome provides. So we've got a great part, we've got a great start in all of those markets, and we're just going to keep going. I'm very pleased to say that with the third-generation AMD EPYC part, code-named Milan, we expect to continue demonstrating performance leadership in both throughput and low-thread-count applications, and we expect to be shipping it later this year, as promised.



And with Milan, we will open up the aperture even further. I've talked about how, with Rome, we've got better than 80% coverage of the workloads across HPC, enterprise and cloud computing. With Milan and the work we're doing with our ecosystem partners, we believe we'll have leadership performance in virtually 100% of the workloads that power today's data center, which is incredibly exciting and continues our progress in every segment. But our tour of Italy doesn't end there. Rome is an incredible part, and Milan will be fantastic, but Mark and the team are completely committed to the CPU leadership that he talked about before. The Zen 4 core will show up in our next-next-generation server part, code-named Genoa, which you'll see in 5 nanometer and which, as we announced yesterday, will be the CPU at the heart of the El Capitan system. So the journey will continue, and the leadership products in the server will continue.



But beyond server, the other market that we've got to talk about is GPU. Data center GPU for us is a rapidly growing market that we believe by 2023 will constitute an $11 billion market opportunity. And in data center GPU, there are really a number of different application areas, or segments, where it is vital. The first is virtualization and cloud gaming. Many customers are making the move to virtual desktops and virtual gaming in the cloud, unlocking a new set of economics, new forms of collaboration and new points for TCO optimization. Beyond virtualization and cloud gaming, of course, the most actively discussed application of GPUs has got to be machine intelligence. And we are committed to driving our road map aggressively for machine intelligence. And then, of course, there is high-performance computing, where the top-end supercomputers have been the province of GPU-accelerated computing for the last decade.



Our start in this market has already been made. We have the world's first 7-nanometer GPUs for the data center. We are the only company to implement industry-standard, hardware-based virtualization that allows the resources of the GPU to be scaled up or down depending on the needs of a specific application, and that's critical for almost any application in the cloud. We're taking the same approach for GPUs that we took on the CPU side: not just developing but deploying and delivering a multigenerational road map. And when we look at the innovation that's happening in each one of these areas, we think that innovation is best fueled by an open software environment and an open software community. And AMD has made a strong commitment to completely open-source tool chains, from drivers to libraries, to enable each one of these segments.



In GPU virtualization, it was incredibly gratifying earlier this week to see Microsoft Azure introduce its latest virtualization instance, the NVv4, which provides accelerated VDI solutions via the cloud and which, by virtue of our hardware virtualization technology, scales up and down and delivers superior economics to any alternative. That same technology is also critically important to providing the scalability and the long-term economics that are so important to cloud gaming, to ensure that segment grows as well.



In machine intelligence, look, the applications of MI are just incredible. You all have seen it: the revolution in natural language processing, image recognition, recommendation engines and even industrial automation and robotics has been incredible. The types of capabilities that we're seeing now in these systems are things that people would not have dreamed of a decade ago. And we're seeing applications for artificial intelligence spring up everywhere. One of the interesting ones discussed the other day is using machine intelligence in concert with HPC to steer scientific research and traditional HPC computing, and by doing so, to get what was characterized as a force multiplier. So we believe that machine intelligence applications are going to continue to grow, but what's going to grow even faster is the insatiable need for performance for these applications. Because as researchers refine their algorithms, as they explore the limits of what machine intelligence can do, they're driving an exponentially increasing demand for performance, by virtue of larger models, longer run times and more data going into all of these models.



And that looks a lot, quite frankly, like HPC demands as well. Mark showed a portion of this data on HPC earlier. But just to put a finer point on it, we have seen, on the supercomputing side, a 10,000x increase in performance over the last 15 years. And to get from today's systems to the exascale era, we need another 10x increase in performance to break the exascale barrier. Similar things are happening in machine compute. If you look at the model sizes of some of the most interesting work being done out there, where you're seeing the most interesting results, model sizes have grown 300x in the last 3 years. And so for both of these segments, we see this unending, to use Mark's word, relentless demand for increased performance. And the only way to meet those performance needs is with accelerated solutions. Our road map is committed to providing that performance for those applications, and David showed this earlier.
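As a back-of-envelope sketch, the multipliers quoted above imply the following annual growth rates. This is my arithmetic on the stated figures, not AMD-published data:

```python
# Implied annual growth rates behind the multipliers quoted above:
# 10,000x supercomputer performance over 15 years, and 300x growth in
# machine-learning model sizes over 3 years. Illustrative arithmetic only.

def implied_annual_factor(total_multiplier: float, years: int) -> float:
    """Constant annual growth factor that compounds to total_multiplier."""
    return total_multiplier ** (1 / years)

hpc_factor = implied_annual_factor(10_000, 15)   # supercomputing
model_factor = implied_annual_factor(300, 3)     # ML model sizes

print(f"HPC: ~{(hpc_factor - 1):.0%} per year")   # ~85% per year
print(f"Models: ~{model_factor:.1f}x per year")   # ~6.7x per year
```

Both far outpace what general-purpose CPU scaling alone can deliver, which is the point of the accelerated-solutions argument here.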



We're already shipping our first 7-nanometer GPU for the data center, our MI50 product, based on the GCN architecture. Later this year, we'll be introducing our first CDNA product, also based on 7-nanometer and optimized for HPC and MI applications, to be a highly efficient, scalable product to meet those needs. CDNA 2 will be coming soon after to usher in the exascale era and to provide that 10x uplift in performance at the system level.



And it's not just about performance; it's about making sure that performance is usable. This was touched on earlier, but I want to turn up the contrast. Because, look, accelerated systems have now been around for about a decade. But what we have done is bolt accelerators onto a server system architecture that, quite frankly, was created for web-scale applications and databases. And so the CPUs and the GPUs are isolated from one another. They don't work well together. And although the performance is there, it's difficult to reach. It's difficult to effectively program applications to fully unlock the performance of the system with this topology. We're taking the first step in addressing this with our CDNA architecture, which provides better scalability and coherency amongst the GPUs, allowing up to 8 GPUs to scale more efficiently and work more efficiently with one another.



But with the CDNA 2 architecture, we get something truly special, where we extend the Infinity architecture, in this case Infinity Architecture 3, to couple the CPUs and the GPUs together into one unified data view. This not only provides additional performance; much more importantly, it allows programmers to stop worrying about the explicit management of data movement, of preemption, of a bunch of functions that they haven't had to worry about on CPUs for many, many years and that are acting as barriers to embracing accelerated computing. So we're super excited by this unified accelerated computing architecture that we're bringing to bear in CDNA 2. And it's that architecture, quite frankly, that led to the exascale systems that Lisa and Mark both mentioned earlier. First is Frontier, a 1.5-exaflop system capable of outperforming the top 100 supercomputers on today's supercomputing list combined. It will be deployed in the middle of next year, powered by AMD CPUs and AMD GPUs connected together in a coherent way. The only thing more exciting than Frontier and what can be done with Frontier is, of course, El Capitan, announced yesterday in an event with Lawrence Livermore National Labs and HPE. It will produce over 2 exaflops of performance, expected to be more powerful than the top 200 supercomputers on today's list, and it will be powered by AMD CPUs and GPUs shipping in 2022. Those 2 systems, to us, are a source of tremendous pride and a validation that we are making tremendous progress on our goal to be the new data center leader.



And I hope you see, and I think the industry is seeing, that AMD has produced leadership products with leadership execution, and we have a leadership road map that's going to continue into the future, delivering the best performance available across a wide range of workloads and leading the way to the future of accelerated computing by defining the architecture of tomorrow's accelerated system. And so with that, I think we are well on our way to putting any question to rest: we are the new data center leader.



And with that, I'd like to turn the stage over to my good friend, Devinder.



Devinder Kumar


Thank you, Forrest. It's been 3 years since we had our last Financial Analyst Day in 2017. And I can tell you, it's been quite an exciting journey. You've heard from Lisa and my colleagues about our plans to accelerate momentum, execute product and technology road maps and pursue the long-term growth that's ahead of us.



I will share with you the financials since 2017, in terms of the progress we have made, and then the next phase of our financial journey: our long-term financial model and our capital allocation strategy.



So let's get started. Here are exactly the priorities we laid out in 2017: grow revenue from the base that Lisa showed you earlier, just over $4 billion; expand gross margin; and exercise operating expense discipline to return to profitability. And from the standpoint of what we have done, we have grown our revenue, expanded gross margin steadily over the last few years, and exhibited the OpEx discipline to continue to increase our profitability.



Let's take a look at the details. On revenue, we've made great progress. We've increased revenue by $2.5 billion from where we were in 2016, which is 56% growth from 2016 to 2019.



It's been driven by the new leadership products that you heard my colleagues talk about: the AMD Ryzen, AMD Radeon and AMD EPYC products that have been introduced since that time.



Let's turn to margins and OpEx. If you look at the left, gross margin was coming off of 31% in 2016. And we've shown steady, accelerating gross margin improvement: 12 percentage points of improvement in gross margin in just 3 years. And since the Financial Analyst Day in 2017, every single quarter has shown year-over-year gross margin expansion.



We made the OpEx investments needed for R&D and, in particular, what you heard a lot about today, the multi-generation road map, whether you go from first generation or second generation to third and now on to the fourth generation of products.



In 2017 and 2018, our focus was R&D. In 2019, in addition to the investments in R&D, we accelerated our investments in go-to-market, and we had a lot of go-to-market activities in 2019. And that's why you see the $2.1 billion OpEx in 2019.



With that, if you take revenue, margin and OpEx, let's look at growth in profitability. On the left -- actually, I love this chart. First, looking at operating margin, coming off of 1% in 2016, we've had an 11-point increase in operating margin in 3 years, from 1% to 12%. And then on the right, EPS. We've made very good progress on the bottom line. Earnings per share has gone up; we've had growth in EPS for each of the last 3 years. And with that, the P&L has gotten better.



But now let's turn and look at the balance sheet, which I know a lot of you were asking me about when we met in 2017. If you look on the left of this chart, the $1.8 billion of debt has gone down by $1.2 billion over the 3-year period.



It's less than $600 million as we ended 2019. We ended 2019 with $1.5 billion of cash, which makes us net cash positive, for the first time in a long time, at about $900 million as we ended 2019.



The gross leverage target that we set in 2017 was to get under 2x. We've exceeded that target. The gross leverage is 0.5x, in particular because the EBITDA of the company, as we ended 2019 on a trailing 12-month basis, was more than $1 billion. And that gives us a very strong balance sheet foundation as we make the investments needed for the next few years, in terms of everything you've heard from my colleagues this afternoon.
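Using the round figures quoted here, the leverage and net cash arithmetic works out roughly as follows. The debt and EBITDA inputs are approximations of "less than $600 million" and "more than $1 billion", not exact year-end figures from AMD's filings:

```python
# Illustrative balance sheet arithmetic from the figures quoted above.
# debt and ebitda are approximations, chosen only to reproduce the ratios.
debt = 0.57     # $B, approximate year-end 2019 debt
cash = 1.5      # $B, stated year-end 2019 cash
ebitda = 1.1    # $B, approximate trailing-12-month EBITDA

gross_leverage = debt / ebitda   # ~0.5x, versus the <2x target set in 2017
net_cash = cash - debt           # ~$0.9B net cash positive

print(f"gross leverage ~{gross_leverage:.1f}x, net cash ~${net_cash:.1f}B")
```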



On the next slide, let me summarize what we said in 2017. The 2020 long-term target model was double-digit growth in revenue; as we ended 2019, we've had about a 16% compound annual growth rate in revenue. On gross margin, we had said we would get from the low 30s to 40% to 44%. We ended 2019 at the upper end of the range that we laid out in 2017.
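A quick consistency check on those numbers: the 2016 base here is inferred from the "just over $4 billion" comment earlier and the $2.5 billion increase, so this is my arithmetic, not a published figure:

```python
# Consistency check: ~$4.3B in 2016 growing to $6.7B in 2019 matches both
# the 56% total growth and the ~16% CAGR cited in the presentation.
rev_2016 = 4.3   # $B, inferred "just over $4 billion" base
rev_2019 = 6.7   # $B, stated 2019 revenue

total_growth = rev_2019 / rev_2016 - 1        # ~56% over 3 years
cagr = (rev_2019 / rev_2016) ** (1 / 3) - 1   # ~16% per year

print(f"total growth ~{total_growth:.0%}, CAGR ~{cagr:.0%}")
```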



EPS, strong growth every year. We've had solid momentum, solid financial momentum for the last 3 years, which brings us to today. And let me show you the priorities from a financial standpoint for our next 4 years.



You've heard a lot today from my colleagues. Mark and David shared with you the multi-generation CPU and GPU technology road map. You heard about architecture. You heard about our product road map from Rick and Forrest, and their plans to grow market share and grow revenue in many different areas. All of this sets us up for continued success. And for the next 4 years, it's all about growth. You saw Lisa put on the chart earlier about a 20% compound annual growth rate in revenue for the next 4 years. It's about growth.



Now with growth, we also want to focus on continued margin expansion, to go further from where we are today, and that will continue to be a focus. We want to further increase operating margin and increase profitability. And finally, we want to generate a significant amount of cash.



It really sets up for a very exciting 4 years, given where we have come from the last 3 years in terms of everything we have done. But it starts, as you've heard from all of us, with our market opportunities.



We are playing in markets today where the TAM is $79 billion. You've heard about the data center from all of us: a $35 billion TAM in the 2023 time frame. PCs, a large market: a $32 billion TAM. And then gaming, which is a combination of consumer graphics and our game console business: a $12 billion TAM. The opportunities ahead of us are pretty large. If we execute and grow the revenue, we can improve our financials in a significant manner.



Let me show you the long-term target model. Revenue growth 20% compound annual growth rate and a lot of it from increasing market share in the markets we already play in with the products you heard from today.



Gross margin greater than 50%: higher ASPs and higher-margin, premium products that we are introducing in premium markets, driving the gross margin higher. We invest in OpEx at 26% to 27% of revenue, prioritizing R&D and also go-to-market activities. And we expect to double the operating margin. You just saw me show you the 2019 operating margin at 12%, and at the mid-20s percent in this time frame, we double the operating margin from where we were in 2019. And we expect to generate free cash flow margins greater than 15% and generate significant cash over the long term. This long-term model projects a very exciting period for us from a financial standpoint for the next 4 years.
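Taking the targets at face value, the model compounds out roughly as follows. The 2023 figures are my extrapolations from the stated targets, not AMD guidance:

```python
# Extrapolating the long-term target model: 20% revenue CAGR from the
# $6.7B 2019 base, with operating margin doubling from 12% to mid-20s.
# Derived 2023 values are illustrative, not guidance.
rev_2019 = 6.7                               # $B, stated 2019 revenue
rev_2023 = rev_2019 * 1.20 ** 4              # ~$13.9B implied 2023 revenue

op_margin_2019 = 0.12
op_margin_2023 = 0.25                        # "mid-20s percent" target
op_income_2023 = rev_2023 * op_margin_2023   # ~$3.5B implied operating income

print(f"2023 revenue ~${rev_2023:.1f}B, operating income ~${op_income_2023:.1f}B")
```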



Let me show you a little bit about the revenue mix over the next 4 years. In 2019, we ended with $6.7 billion of revenue. And you've heard us say, data center revenue, CPU and GPU combined, is about 15% of that revenue. With the compound annual growth rate that we have for the company of 20%, the overall revenue of the company gets bigger.



The PCs and gaming part of it grows at a mid-teens percentage, but we expect that in the long-term model, data center becomes greater than 30% of revenue in the 2023 time frame. Everything you heard Forrest talk about drives data center to be greater than 30% of our revenue, which is much higher than where we were in 2019. In addition to revenue, we are also continuing to focus on gross margin.



Margin expansion. So let's talk about that. Where does the margin expansion come from? We ended 2019 at 43% gross margin. High-end gaming is slightly accretive, and those margins will improve as we build out our gaming portfolio. PC products are above corporate average and a significant contributor to margin growth in the long-term model. And data center margins, which are well above corporate average, are the largest contributor, with data center revenue growing to 30% or more of overall revenue. In combination, this drives our gross margin higher, and we would like to do, for the next 4 years, exactly what we have done for the last 3 years.



The tax rate. Let me just cover that. We expect that sometime during the long-term target model, given our consistent profitability, the tax rate on a non-GAAP basis will move up to approximately 15%. The long-term cash tax rate stays at about 3%, similar to what we had in 2019, and what we are guiding to in 2020, fundamentally due to the fact that we have $6.7 billion of net operating losses that carry forward, and allow us to pay at a lower tax rate, even though the non-GAAP tax rate is 15%. And those NOLs protect the approximately 3% cash tax rate through the period of the long-term model.
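A simplified sketch of those mechanics: the income and NOL-usage figures below are hypothetical round numbers, chosen only to illustrate how carryforwards separate the book rate from the cash rate:

```python
# Hypothetical illustration of NOL carryforwards: the book (non-GAAP) rate
# applies to all pre-tax income, while cash taxes are paid only on the
# portion not shielded by NOLs. All inputs here are made-up round numbers.
pretax_income = 2.0   # $B, hypothetical annual pre-tax income
nol_offset = 1.6      # $B, hypothetical NOLs applied against income this year
book_rate = 0.15      # ~15% non-GAAP tax rate from the model

cash_taxes = (pretax_income - nol_offset) * book_rate
cash_tax_rate = cash_taxes / pretax_income   # ~3%, as in the long-term model

print(f"book rate {book_rate:.0%}, cash tax rate ~{cash_tax_rate:.0%}")
```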



Let me move on to the capital allocation strategy. OpEx investment comes first. As you heard me say earlier, we will continue to invest in R&D and go-to-market acceleration. We think about many things beyond that, we have to fund the revenue growth in terms of the working capital needed, and then on the shareholder return side of it, we will consider shareholder return vehicles, including limiting share dilution and strategic initiatives. And lastly, building on the credit rating progress that we've made over the last few years, we want to have a goal of achieving investment-grade rating from a rating standpoint.



In a nutshell, our priorities are to invest in the business, drive growth and deliver shareholder returns.



So from an overall standpoint, in addition to the business momentum, the product momentum and the revenue momentum, here's a summary of the financial momentum. We have delivered great products and established great financial momentum.



We have been laser-focused on execution and on all the road maps you heard about today. We have significant opportunities ahead of us, with almost $80 billion of TAM. We are still in the early innings of market share growth across PCs, gaming and data center, and that's where you see ongoing market share gains as one of the drivers of financial momentum.



We want to accelerate the financial momentum, continuing to drive gross margin expansion, continuing to increase profitability and generate significant cash in the time frame of the long-term target model.



And finally, we want to deliver strong returns to our shareholders. Thank you very much.



So it's a pleasure to invite Lisa back on stage for closing remarks and the Q&A.



Lisa T. Su


Thank you, Devinder. All right. How's everyone doing? So look, I hope you've enjoyed the last couple of hours and got a feel for the excitement that we have and the products and the technology and the business. And I'm just going to spend just a couple of minutes and just summarize in a few key takeaways.



Hopefully, it's very, very clear. We are committed to leadership in high-performance computing. And that's across data center, that's across PCs and that's across gaming. And I hope it's also really clear that we are assuming the competition is going to be very, very strong. We have big competitors, and we respect them a lot. But at the end of the day, we're playing our game. And we know that, if we execute our road maps, we will see the growth that excites us.



And that growth is in a set of great markets that we are underrepresented in today. And so we come back to our ambitious view of delivering the best. And that's the best in technology and also best-in-class in terms of overall growth. So those are really the key takeaways.



Now I think we're going to turn it over to Q&A. I'm sure there are some questions. So let me have the team come back up, and I think we'll reset the stage for Q&A. Just give us a couple of minutes here.



Okay. So look, before we begin the Q&A, we spent the entire afternoon really focused on the long-term because that's what this is about.



It's about our strategy and our road maps, but I do want to address probably a topic that's on the minds of many of you, which is a little bit about what's happening in the short term. Obviously, there's a lot of volatility in the markets with the coronavirus, and we want to make some comments about that as well.



Our first priority, of course, like all of our colleagues, is to ensure the health and safety of our employees and our partners and our customers. And so that is our focus. And we have taken steps to minimize potential exposure at our global sites and with travel, like most of our peer companies have. From a business standpoint, it is a very dynamic situation. So let me give you some color to kind of give you, a view of what's going on.



From an overall supply chain standpoint, our supply chain is primarily focused in China, Malaysia as well as Taiwan. And I would say, it's a very robust supply chain. So we have taken a number of actions to ensure that we have continuity in that supply chain. And based on what we see today, we're actually back to near-normal supply capacity in our supply chain.



So that is something that we continue to be very focused on. We're also monitoring our customers, since a lot of our customers have supply chains that are very dependent on China and some of those operations. And we did see some disruptions, certainly through Chinese New Year and in the month of February. There's a lot of progress being made. I would say all of us in the ecosystem are trying to return those operations to as normal as possible, and we expect that to continue over the coming weeks.



Now let me turn to the demand standpoint. I think from a demand standpoint, again, this is a very fluid situation. So there are lots of puts and takes. What we have seen is, outside of China, the overall demand has actually been about what we expected for the first quarter.



In China, we have seen some reduction in consumer demand, particularly in the off-line channel networks, and those, I think, will continue for some time. We have also seen some other puts and takes, where the demand for infrastructure has increased beyond what we had originally expected. With all of that, we had guided the first quarter, at our first quarter earnings call, to $1.8 billion, plus or minus $50 million. We are not updating that as of now. Our best visibility is that the impact in the first quarter will be modest; perhaps we'll be in the lower half of the range, but still within the range of our original guidance, and we'll keep watching that. You also saw from Devinder that our 2020 guidance remains unchanged, and we see a very exciting growth path over the 2020 year.



So with that, let me turn it over to questions from the audience.



Question and Answer


Ruth Cotter


Great. Thank you, Lisa. So we have microphones in the room. We have Laura here in the middle and [ Saskit, Jason. ] So if you wouldn't mind putting your hand up if, you have a question, and we'll take questions in the room. Aaron?



Aaron Christopher Rakers


Yes. Aaron Rakers with Wells Fargo. I guess, I want to unpack the model a little bit more. One of the things that I'm a bit surprised by is that you've got this big push in the GPU side, particularly the data center side. How do we, as kind of analysts, start to think about modeling that out? Have you thought about separating that out from a segmentation perspective? And what's embedded in your expectations as well with regard to the margin profile, the gaming SOCs as Microsoft and Sony come on late this year into 2021?



Devinder Kumar


Yes. I think on the segment piece of it, we report the segments as we do with CG and EESC. We have provided additional color where we feel it's helpful, especially on the data center piece of it, which is a combination of CPUs and GPUs.



If you look at what I presented, we talked about PCs and gaming in totality: mid-teens growth over the time frame of the long-term model. And then the data center side of it is higher growth, and that's why it's becoming 30% of revenue on an overall standpoint. If you look at the numbers and do the math, I think you can get to the numbers in terms of how much of it is data center. On CPUs, generally, if you look at it from the viewpoint of the market, the market is essentially flat; GPUs are where the growth is on the data center side. And that's where I would leave it.
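Doing that math as suggested, with the figures stated in the presentation, gives roughly the following. The derived dollar amounts and the implied data center growth rate are my estimates, not AMD guidance:

```python
# "Do the math": 2019 revenue of $6.7B with ~15% from data center,
# ~20% overall CAGR, and data center reaching >30% of revenue by 2023.
# Derived values are rough estimates, not guidance.
rev_2019 = 6.7
dc_2019 = rev_2019 * 0.15                      # ~$1.0B data center in 2019

rev_2023 = rev_2019 * 1.20 ** 4                # ~$13.9B implied 2023 revenue
dc_2023 = rev_2023 * 0.30                      # ~$4.2B implied data center

dc_cagr = (dc_2023 / dc_2019) ** (1 / 4) - 1   # ~40%+ implied data center CAGR
print(f"data center ~${dc_2023:.1f}B by 2023, implied CAGR ~{dc_cagr:.0%}")
```

That ~$4 billion-plus figure matches the range an analyst arrives at later in the Q&A.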



Ruth Cotter


Ross?



Ross Clark Seymore


Ross Seymore from Deutsche Bank. I want to stick on the data center side of things and, maybe, Devinder, what you just said, a little clarification. Going from 15% of sales to 30% of sales, a little color on how you think that growth will be driven between the GPU side and the server CPU side? And then maybe a related follow-on on market share within servers as a whole. I think, Forrest, you talked about getting to double digits in the second quarter, hitting your target there. What sort of target should we think of as being next? And what about the market size itself, the full server market? Or are you still kind of judging the market as 2/3 of what people describe it as in its entirety?



Lisa T. Su


Yes. So lots of different questions there, Ross. Let me try to take a couple of them, and then maybe Forrest will respond as well. Look, when we think about the model, let me just take a step back and say that we are designing a model that has a number of different paths by which we can get there. And so that is, obviously, a lot of growth in data center, but also a lot of growth on the PC and gaming side.



Within data center, clearly, from a dollar-value standpoint, data center CPUs will be the larger number. From a growth-rate standpoint, given we're starting from a low base on the GPU side, the growth rate will be higher, and the market growth rate is higher as well. Relative to market share goals, I think our view is that our product portfolio, whether you're talking about data center or PCs or gaming, really supports very strong market share over the next number of years. We're not putting out a new market share target. But what I would say is that we believe our product portfolio is strong and can certainly meet our previous market share levels, and go beyond that, across a number of years. Forrest, did I...



Forrest E. Norrod


No. I think you hit it well, Lisa. I mean I think on the last point, we're certainly not done, when we hit double-digit market share. Our imperative is to continue to grow. And as Lisa mentioned, our ambition is not to stop any time soon, keep it rolling.



Ruth Cotter


[ Laura Tony ] here. Front row -- front...



Unknown Analyst


First question is on the supercomputer wins, specifically El Capitan and Oak Ridge. What I thought was interesting is you won both the CPU and the GPU at El Capitan. And this is 3 years out, so they're making decisions 3 years out on, obviously, a forward road map at 5-nanometer, and I'm presuming CDNA. So what was the reason why they chose you? I can understand the CPU side, but the GPU was a little surprising. Is it because of this GPU-CPU interconnect, and that was the trigger to win both sockets combined? And then is this a precursor to hyperscale design wins? Will that translate to combined hyperscale CPU and GPU wins? So that's my question.



Forrest E. Norrod


Maybe I'll take a first whack at that and maybe ask David and Mark as well to weigh in. Look, I think it was immensely gratifying to be chosen as the CPU and GPU provider for both of those. One of the things that makes it most gratifying is that these are extremely rigorous evaluations. They look at the technical characteristics of the proposed solution. They also, quite frankly, look at your execution capability and track record quite closely, because they are procuring things that have not yet finished design. And so I think David and the team did a marvelous job on the GPU side, developing an architecture that we think can scale up quite a bit, and maybe he'll talk about that in just a second. But then the ability to put them together, the ability that we talked about throughout Mark's, David's and my presentations, to have a unified accelerated computing architecture with the CPU and the GPUs working seamlessly together and dramatically simplifying the programming model, was something that I think was tremendously attractive to DOE and to the national labs.



David Wang


Yes. I think we didn't show the performance-per-watt progression on the CDNA side. But you can imagine, whatever we have done and what we're planning to do on RDNA 1, 2 and 3, that trajectory will be happening on the CDNA side as well. So we are aggressively enhancing the performance per watt and the types of operations that the data center and HPC customers care about, and we'll be scaling our technology very aggressively through the CDNA architecture migration. So I would say, besides the Infinity architecture, which provides all the benefits of cache coherency, programmability and unified memory, the GPU performance and so on was also a key factor in winning the deal.



Mark D. Papermaster


And then lastly, I would add that we really listened carefully to the requirements from the Department of Energy and their goal to have this system be optimized for both HPC and MI. Listening to them, and presenting to them how our software stack, with a truly open-source approach, could optimize across CPU and GPU, be optimized with the libraries we would provide, and enable them to enhance that even further, was also a key element.



Forrest E. Norrod


The one thing I would add, to answer the second part of your question, is that we do think that architecture is the right architecture for machine intelligence as well. Necessarily, when you look at those demands, we're going to have to move to clusters of accelerated systems. And this is the right architecture for those large-scale machine intelligence applications as well.



Ruth Cotter


Great. [ Saskit ] at the very back of the room.



Trip Chowdhry;Global Equities Research, LLC;Co-Founder


Crip (sic) [ Trip ] Chowdhry with Global Equities Research. A phenomenal presentation, a lot of learning. I was wondering, if you look at 2 industries, the semiconductor industry, the chip industry, and the software industry. For the semiconductor industry, the catalyst in data centers is that machine learning models are getting bigger. But when you look at the software industry, they are going with distributed training and transfer learning, and their motto is to extract as much power from your existing CPUs and GPUs, delaying the purchase. How do you see this evolving? Will the growth be linear? Or will it be a step function?



Forrest E. Norrod


Do you want to answer or what?



Mark D. Papermaster


Well, I'll just start from a workload standpoint. I think you're going to see the algorithms are still changing, and so this is dynamic. It's one of the reasons that you saw across our presentations that our whole approach is scalability. Our CPU road map runs unabated. Our GPU road map has relentless, unabated growth as well. And then it's how you put it together. So there's no question that certain workload applications will remain GPU-only or CPU-only. But what you're seeing in these supercomputer applications is the leading indicator of where the industry is going. If you go back through history, the problems that first needed a supercomputer class of machine -- think about the old supercomputers -- you're actually performing those operations on your phone today. So it is indeed, from my standpoint, and from what I'm hearing from CTO peers in the industry, the leading indicator of the approaches being used on many of these analytic and machine learning workloads.



Ruth Cotter


Thank you, Mark. Jason, Tim here. Thank you.



Timothy Michael Arcuri


Tim Arcuri, UBS. Devinder, I actually had 2. So first of all, I guess I'm a little surprised that OpEx is coming down so much as a percent of revenue from last year to 2023, given that, a, you have all these new architectures and initiatives that you have to support; and b, if you compare to, say, NVIDIA, they spend 28% of revenue on OpEx and they're supporting a single architecture. So can you sort of talk to how you're going to keep OpEx so low and yet grow revenue so much?



Devinder Kumar


I think it starts with some of the things that we talked about. Revenue is growing on a compound basis at 20%. And some of the things that Mark talked about in terms of the efficiencies, in terms of the engineering hubs, help from that standpoint. And we look overall at the investments needed. And our view is that out in that time frame -- not right away; I mean, we ended at 31% in 2019 -- but out in 2023, where the revenue has grown, we can manage it within the 26% to 27% OpEx rate.



Ruth Cotter


[ Matt ]?



Unknown Analyst


Thank you very much, everybody. I just wanted to say the balance sheet stuff, Devinder, congrats. I had one question and then, I guess, a clarification. The clarification bit was, Mark, on your slides. I think in the past, you guys had talked about Zen 3 being on 7 plus. And you talked today about it being on 7. Maybe you could just clarify if that's a nomenclature change or if that's an actual change in architecture?



And then backing up to the question, I think if you look through the long-term model there, 30% of revenue would be, I don't know, $4 billion, $4.5 billion in data center. Maybe you could talk about how much you think of that is cloud versus enterprise versus HPC, which seems to have a lot of momentum? And then you talked about wireless infrastructure today for the first time. Maybe you could just break that, say, $4 billion, $4.5 billion revenue down? And just how you're thinking about growth in those segments?



Mark D. Papermaster


Matt, the clarification is, in fact, nomenclature, as you said. So we work very closely with TSMC, and you've seen how their public road map on 7-nanometer evolved. There was at one time a 7-nanometer plus, and what often happens with these new full node changes like 7-nanometer is that some of the enhancements actually get folded into the base road map. So 7-nanometer has encompassed in that nomenclature several different grades, shall we say, of its development. And so we matched up with TSMC's nomenclature.



Forrest E. Norrod


Then with regard to the second question, I think, look, we think that split in the market is stabilizing. We do think that cloud currently constitutes about 50% of the overall market, enterprise about 35%, HPC about 15%. When we look out in that time frame, we don't see it dramatically changing. There'll probably be a few percentage points shifting. And certainly, our ambition is to participate in all of those segments quite strongly. So ideally, we would be looking at a relatively balanced spread across all 3 of those businesses.



Ruth Cotter


Jason, over here. Thank you.



Mitchell Toshiro Steves


Mitch Steves, RBC. So really 2 questions. The first one is really just regarding your competitor. I mean, every day there's a new leak in terms of what the specs are for both you guys and your competitors. So maybe you can provide some sort of high-level view on what you expect to happen over the next couple of years, because long-term road maps obviously have a lot of competition in them.



But then secondly, I'm sure you're getting a lot of questions on this already, but you have a 20% target over the next several years. I just want to confirm that's more of a long-term CAGR and not a 2021 number. What I mean specifically is that '21, based on my math, has got to be higher than that and it would decelerate, and I just want to make sure that's correct?



Lisa T. Su


Want to do the first one, and I'll do the second one?



Mark D. Papermaster


Well, on the roadmap, of course, there's always speculation and leaks, and we don't comment on speculation. What we really do is what I alluded to earlier -- and what Lisa said explicitly -- we, of course, have formidable competitors, and we're fighters here at AMD. So what we do is focus on getting the most competitive roadmap that we possibly can and really listening to our customers to make sure that what we're developing will address the workloads that they have coming at them.



Lisa T. Su


Yes. And Mitch, to your comment about the overall compound annual growth rate. I mean, look, it's fair to say that 20% is a very strong number. When you look across 4 years, if you just do the math, it's more than double the size of what the company was in 2019. So I would say it will depend on how exactly things ebb and flow, but we would be very, very pleased -- and it would take a lot of market share gain -- to achieve the CAGR of 20%.
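[Editor's note: the "more than double" arithmetic Lisa references can be checked directly. This is an illustrative calculation only, using the 20% CAGR and the 2019-to-2023 four-year span stated in the transcript.]

```python
# A 20% compound annual growth rate applied over the four years
# from 2019 to 2023 multiplies revenue by 1.2^4.
cagr = 0.20
years = 4
multiple = (1 + cagr) ** years
print(round(multiple, 2))  # 2.07 -- i.e., more than double 2019 revenue
```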



Ruth Cotter


[ Saskit ] at the back here.



Unknown Analyst


The question is about some of the developments in the startup space. And there's a lot of money going into new developments for semiconductors and also some of your customers are talking about their own silicon. Maybe can you talk about how you see that? And is this a new fight you have to take on? Or is this something that we should be thinking about more deeply? Or how do you think about that dynamic?



Lisa T. Su


Mark?



Mark D. Papermaster


Yes. I'll just take you back to the historic view of the industry: there has always been a need for specialized devices. There have been specialized ASICs out there, or designs starting off in an FPGA implementation. This is how new approaches, new workloads are typically implemented. But when you look at them, they're typically targeted at a more narrow set of workloads. And this won't diminish at all the need for the very high-performance, easy-to-program GPUs and CPUs we have out there. The code that's out there already leveraging these approaches is massive. And you also commented on some of the larger companies developing their own silicon. Again, in a market where there's insatiable growth of compute demand, there's room for these tailored solutions. And so it's an ecosystem. We're very, very confident in our growth, as we shared with you today, and there's plenty of room for tailored solutions in the industry.



Some of them will have staying power, others won't. But there will always be a need for these easy-to-program general-purpose solutions that stay on the competitive path that we set out and are implementing here at AMD.



Ruth Cotter


[ Jason, over ] here.



Unknown Analyst


Your R&D efficiency is really impressive, and you highlighted a couple of areas where you're getting there: emulation, simulation, concurrent software-hardware design and modularity. Everybody else is doing sort of the same thing. And I'm just wondering if you could give a little more color as to how you're getting there? And how you're getting these products out so fast relative to your competitors with fewer dollars?



Mark D. Papermaster


Well, I'll start and...



Devinder Kumar


Are you trying to get Mark a raise?



Mark D. Papermaster


The short answer is necessity is the mother of invention. Speaking candidly, when you look at the turnaround we were facing, we really couldn't have the kind of investment that we would have liked to have had. The things I talked about aren't something that we're just trying to do now. What I shared with you in my comments was looking back at what we implemented that allowed us to deliver these high-performance products to market. So it's really a credit to the team and how they responded to the challenges we faced, bringing AMD back to high performance. And the good news is, they love it. And so we don't see any lack of that kind of innovation on how to improve going forward.



Ruth Cotter


Great. All right.



Nathan Brookwood;Insight 64;Research Fellow


Thank you. Nathan Brookwood, Insight 64. You were very explicit when you talked about Genoa being based on 5-nanometer technology. But you were less specific when you talked about the advanced versions of RDNA and CDNA being on an advanced node. Can you give us some color on that advanced node? And why are you characterizing the one very explicitly and the other very generically?



Lisa T. Su


So maybe let me take that. I think when you look at our processor roadmap -- again, we have the win from El Capitan -- it is a 5-nanometer roadmap, to get some of the performance efficiencies that we need to get. As it relates to RDNA and CDNA, David has a lot of things in the hopper, and we will talk more about what node and what architecture and all that as we get closer to product.



Ruth Cotter


Great. Steven?



Blayne Peter Curtis


Blayne Curtis, Barclays. So I just want to ask on the data center GPU market. Maybe you could just talk about that split. So the CDNA, what exactly is different versus in graphics? Is it just -- you're adding acceleration? Or is there something more fundamentally different?



And then just from a competitive landscape -- like in CPU, your competitor should have 7-nanometer this year -- can you just describe where you see the differentiation in the data center markets for your product?



David Wang


Sure. I think, as I mentioned in my presentation, a big part of that CDNA optimization is, indeed, adding more higher-density [ math ], for HPC and for some of what we call the tensor ops, the matrix acceleration ops, that are needed for HPC and machine learning acceleration. So I would say that is a very big part of the innovation in CDNA. And of course, I will say, we are optimizing the architecture for compute. So that does mean we are putting less focus on other operations that are less important for high-performance computing. Those are the opportunities for us to reclaim silicon area to enhance the compute capabilities. I think the other big one, though, is what Mark talked about: Infinity architecture. As someone mentioned, with the explosion of AI and machine learning, the size of the training models really grows, and that is very, very important. We need a lot more GPUs, and they all need to be interconnected together with high bandwidth efficiency.



So that second-generation Infinity architecture that Mark talked about really allows us to interconnect multiple CDNA-based GPUs at much, much higher bandwidth and in flexible topologies, to allow these large training models to be run in a much more energy-efficient way. So I would say those are the key elements.



Ruth Cotter


Laura?



Unknown Analyst


You mentioned go-to-market investments and it strikes me that, that's a good idea because you wouldn't want something as mundane as sales people to get in the way of everything you're trying to achieve. So maybe you could flesh that out a little bit? What kind of hiring expectations you have in what areas? And maybe a sense for how flexible you can be and what you're trying to build towards? Or are you trying to do it as you go?



Lisa T. Su


Yes. Maybe let me take that. So we are at a place where go-to-market investments are very, very important for us. Darren Gray has our overall worldwide sales organization. And the focus is on customer-facing commercial both for clients as well as data center and enterprise as well as sort of field application engineering at top hyperscalers to help them optimize our capabilities. So lots of focus in this area. You will continue to see that be an area that we invest. And I think it's one of those places, again, that we can scale. As the company scales, we scale our capabilities there as well.



Ruth Cotter


Jason?



Unknown Analyst


Shane [indiscernible] with IDC. Forrest, my question is regarding data center GPU. Remember the 3 application segments you highlighted: virtualization, MI and HPC. Could you map your existing product line -- which product lines serve those 3 applications? And then, I think, you could allow a quick follow-up?



Forrest E. Norrod


Yes. Sure. So right now, if you look at the virtualization or cloud gaming space, we've got some of our current GCN-based MI parts in the virtualization space and in the cloud gaming space. We certainly see, looking forward, the CDNA parts -- particularly starting with the ones later this year -- building on the wins that we've already had with our existing architecture, and we'll focus those in the MI and HPC segments. So on the virtualization side, it'll be a mixture of parts going forward.



Unknown Analyst


If you allow me to map into more product names: you've got the V340 series, you've got business in cloud with Stadia, and you have Radeon Instinct. And I'm trying to map those into the applications that you highlighted earlier and get a sense for where you stand now, given where the strategy you just highlighted is going to take you as far as server GPU?



Forrest E. Norrod


I don't think we get into the details of the product mapping, particularly if they're embedded in our customers' end products. So I think I'll stand by what I said a moment ago, which is we've got some of the existing MI products in the virtualization space based on GCN architecture, and we certainly see CDNA driving heavily into MI and HPC going forward.



Unknown Analyst


And then if you allow, under MI, would you segment MI into training versus inferencing? And do you consider both of them under the TAM of MI?



Forrest E. Norrod


I think we definitely consider both of them under the TAM of accelerated MI.



Ruth Cotter


[Saskit?]



Harsh V. Kumar


Harsh Kumar, Piper Sandler. So you talked about CDNA and unified memory -- is that in Milan or the generation after that? And then also, my understanding from this is that the data will appear in the same way to a CPU and a GPU. Is that roughly accurate? And how big a deal is that to your end customers? And when your competitor comes out with their GPUs, how easy or hard is it for them to emulate something like this?



Forrest E. Norrod


A number of questions there. Maybe I'll take the first one and then leave the others to David.



All that we've said so far on the supercomputers is that Frontier is based on a future custom EPYC processor and El Capitan is based on Genoa. I think that's all that we've said thus far on the processor side.



David Wang


I think the key is that with CDNA 2, we have hardware coherence, which allows us to keep the data in its preferred location instead of needing to replicate it on the GPU and CPU or copy it back and forth. A very simple example is the CPU doing preprocessing on a set of data that then needs to be processed again by the GPU, and then passed back to the CPU for post-processing. Imagine now we have unified memory that's cache coherent: you can just keep that data in HBM memory for the GPU and do the whole processing without moving data back and forth between the CPU and the GPU. And that's tremendously helpful both from a programming point of view and from a performance point of view.



As far as our competitor-wise, I mean, it depends on how they work out the coherency scheme with whoever their partner is.
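[Editor's note: the copy-elimination David describes can be sketched abstractly. This is an illustrative model only, not AMD's implementation; `cpu_step` and `gpu_step` stand in for real kernels, and the point is simply how many buffer copies each memory model requires.]

```python
# Compare the hand-off pattern without and with cache-coherent unified memory.

def copy_model(data):
    # Non-coherent model: each CPU/GPU hand-off copies the buffer.
    copies = 0
    gpu_buf = list(data); copies += 1       # host -> device copy
    gpu_buf = [x * 2 for x in gpu_buf]      # GPU processing (stand-in kernel)
    cpu_buf = list(gpu_buf); copies += 1    # device -> host copy
    cpu_buf = [x + 1 for x in cpu_buf]      # CPU post-processing (stand-in)
    return cpu_buf, copies

def coherent_model(data):
    # Coherent unified memory: one shared buffer, zero explicit copies.
    buf = [x * 2 for x in data]             # GPU works on the shared buffer
    buf = [x + 1 for x in buf]              # CPU works on the same buffer
    return buf, 0

result_a, n_copies_a = copy_model([1, 2, 3])
result_b, n_copies_b = coherent_model([1, 2, 3])
assert result_a == result_b == [3, 5, 7]    # same answer either way
print(n_copies_a - n_copies_b)  # 2 -- transfers eliminated by coherence
```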



Ruth Cotter


Great. I think we have time for one last question.



Unknown Analyst


I actually had 2, but I don't know if that works. Just 1 nit question: why are the free cash flow margin and operating margin as much as 10 points apart? I would think, for you, they'd be pretty comparable. And then you also mentioned M&A as something to think about. Just any context on what type of M&A you might consider -- is that tuck-in type of stuff or something bigger? It seems like you have a pretty full portfolio already.



Devinder Kumar


Let me take the first one. So if we look at the operating margin, approximately mid-20s, as we call it: you have the approximately 3% cash tax rate, you have CapEx investments, you have interest expenses. And then obviously, as you're growing the business, there is some amount of money needed to fund the growth, with the significant CAGR that we showed from a revenue standpoint. And these are approximate numbers. So yes, you're right, there's a difference. But those are the 3 or 4 factors that drive the difference between operating margin and free cash flow margin.
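[Editor's note: Devinder's bridge from operating margin to free-cash-flow margin can be laid out as a simple subtraction. Only the mid-20s operating margin and ~3% cash tax rate come from the transcript; the CapEx, interest and growth-funding figures below are placeholder assumptions chosen purely to illustrate how the factors combine.]

```python
# Hypothetical bridge from operating margin to free-cash-flow margin,
# expressed as percentages of revenue.
operating_margin = 0.25   # mid-20s, per the long-term model (transcript)
cash_taxes       = 0.03   # ~3% cash tax rate (transcript)
capex            = 0.03   # assumed CapEx as % of revenue (illustrative)
interest         = 0.01   # assumed interest expense (illustrative)
growth_funding   = 0.03   # assumed working capital to fund growth (illustrative)

fcf_margin = operating_margin - cash_taxes - capex - interest - growth_funding
print(f"{fcf_margin:.0%}")  # 15% -- roughly 10 points below operating margin
```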



Lisa T. Su


Yes. I wouldn't expect, though, that there should be a 10-point difference. So I think that's just the way we built the model at this point.



And then, look, to your second question, hopefully what you've seen this afternoon is like, we're really excited about our organic growth opportunities. There's a lot of growth. There's a lot of technology. There's a lot of market opportunity. And we see that as really a great business model. Now the fact is the company is a lot stronger today than it was a few years ago. And so that balance sheet is also something that we're proud of. And we're always going to look at what are good priorities for us to continue that growth. And that's the way we think about strategic M&A.



Unknown Executive


Great. Well, thank you, everybody, for joining us today and to everybody for tuning in on the webcast. And for those of you here in person with us today, we're going to retire to a demo area out in the front lobby. And you can spend some time also with the presenters out there. Thank you.



Lisa T. Su


Fantastic.



Unknown Executive


Thank you.



Devinder Kumar


Thank you.
