CNBC Exclusive: CNBC Transcript: Microsoft CEO Satya Nadella Speaks with CNBC’s Jon Fortt on “The Exchange” Today

Breaking News from CNBC’s David Faber: Microsoft has offered to make small divestiture to meet objections of CMA – Sources

WHEN: Today, Wednesday, November 15, 2023

WHERE: CNBC’s “The Exchange”

Following is the unofficial transcript of a CNBC exclusive interview with Microsoft CEO Satya Nadella on CNBC’s “The Exchange” (M-F, 1PM-2PM ET) today, Wednesday, November 15.

Additional footage from the interview will air during the “CNBC Leaders: Satya Nadella” special on Monday, November 20th at 8pm ET.

All references must be sourced to CNBC.

JON FORTT: Thanks for having me back here in Seattle. You just got off the stage minutes ago.

SATYA NADELLA: Thank you so much, Jon. And thanks for coming out here. It’s sort of becoming a great habit for you now to be showing up multiple times. We love it.

FORTT: It is indeed – well, to talk to you, of course. Big announcements here. A year ago, OpenAI put out ChatGPT, and your stock is up around 50% since then. What’s been the most significant first wave of adoption in AI for you? You talked a lot about Copilots today. The general public and investors probably don’t think about those as much, but strategically for you, has that been the most significant?

NADELLA: Yeah, I would say both, Jon. I mean, the two real breakthroughs in some sense with this generation of AI – one is this natural user interface. You know, the first time people really got a sense for it was when ChatGPT launched, right? There is a complete new way to relate to information, whether it’s web information or information inside the enterprise. And that’s what we’re mainstreaming with our Copilot approach, and that definitely has caught the imagination. It’s becoming the new UI pretty much for everything, or the new agent, to not just get the knowledge but to act on the knowledge. But the other thing that’s also happening is a new reasoning capability. Just like, say, in the past we thought about databases, we now have a new reasoning capability, which is not doing relational algebra but doing neural algebra. And with that, you know, you can take an API and you can complete or send, you know, a paragraph, or you can do summarization or predictions. That’s a new capability that’s going to change pretty much every software category. So between both of these, you can see a lot more mainstream deployment of AI and the benefits of it, and GitHub Copilot perhaps is a good example of that.

FORTT: What I think a lot of people outside of the developer community don’t necessarily get is that there’s this AI tool that’s helping developers to write code. Jensen Huang – we’ll talk a little bit about this; he was on stage with you a few minutes ago – was saying that’s even accelerating the speed at which Nvidia is able to innovate. What’s the breakthrough there?

NADELLA: Yeah, I mean, for me, my own confidence about this generation of AI being different came when I first started seeing, I think with GPT-3 and 3.5, GitHub Copilot – because that was the first product. In fact, before ChatGPT, we built GitHub Copilot and deployed these models. And the fact that developers now can just do, you know, code completions – and even then, you know, one of the things we’ve done is we’re taking the toil out of some of the software development. You know, we bring back the joy, the flow, to stay in it… It’s unlike anything we’ve seen in the past, when you’re taking the most knowledge-work task, which is software development, and seeing 50-plus percent improvement. And so that’s what we’re trying to replicate with Copilot for the broad knowledge work and, I mean, frontline work. In fact, what Jensen is saying is they’re deploying both GitHub Copilot for the developers and Microsoft Copilot for all the employees at Nvidia. So he’s saying watch out: Nvidia, if you think it’s fast now, let’s see what happens a year from now.

FORTT: So there’s a scramble happening right now across enterprise software. So many companies I’m talking to are trying to add AI into their portfolios, enhance their existing product offerings with it, and then see kind of how much their customers are gonna be willing to pay for that added boost in, perhaps, productivity. You’ve been early on this and have some data showing how Microsoft customers feel about, you know, this AI being built into your software. What do you find so far?

NADELLA: It’s very, very, very promising. I mean, obviously, the developer one is the one where we have conclusive, I would say, data, and it’s becoming – the brand went from sort of, ah, well, this is a good idea, to mainstream just like that, because of the obvious benefits both individually and for organizations. I do believe, like, at firm-level performance, you will start seeing divergence if you are adopting or not adopting some of this technology. The next place, I think, is even things like customer service. We have self-deployed our Copilot for customer service, in fact, for Azure support. It turns out when you’re a customer service agent, by the time you’re trying to solve a problem, it’s already a very hard problem to solve, because the automatic bot didn’t solve it and it’s been escalated to you. So you have an irate customer on one end and a tough problem. So Copilot helping you there is fantastic.

FORTT: But then the idea being that the AI can go into a database, figure out when did they call before, what were their problems?

NADELLA: All the knowledge bases – and bring sort of the solution to you, so to speak, versus you going around for it. But here’s the interesting thing we’ve realized: it’s not just that that was hard, but it is also the pain every customer service agent had of summarizing everything they did to solve the problem at the end of the call, which took like half an hour with all the artifacts, the logs and what have you – and, well, that’s automated, right? So that’s real productivity gains. So we are seeing massive throughput. Same thing is happening in sales. Same thing is happening in finance. So broad strokes, I think – you know, at this conference, we are launching, with all the data we have already with the Copilot, it’s early days, but we’re very optimistic that this is probably the thing that we’ve been looking for. In fact, the last time information technology showed up for real in productivity data was when PCs became mainstream in the late ’90s and early 2000s, because work and workflow changed. I hope that that’s the same thing that gets replicated in the AI era. That’s –

FORTT: Yeah, it’s a generation ago. How long do you think before the data is conclusive enough that you’ll know, on the demand side, the customer-benefit side, kind of what the calculation is, and that’ll be able to aid your sales effort?

NADELLA: That’s a great question. In fact, one of the things we’re also developing is a bit of a methodology on how we go about measuring, because that’s kind of one of the things, right – what are the productivity measures here? By cohort, can you think about some evals, some tasks, and really look at – deploy the software, then follow the cohort, you know, in a month, in three months, and look at your own data. And that’s one of the other things we’re realizing: every business is different. Every workflow is different. Every business process is different, and zones are different in time. And so that’s why even having these customization tools – we’re really excited about Copilot Studio, because you need to be able to tailor these experiences for your specific business process needs. And so I think all of these will add up, and I’m hoping that calendar year ’24 is the year where we will have, what I call, classic commercial enterprise deployment and deployment data for all of us.

FORTT: Well, I wanted to start there because that’s sort of the top line, right – customer demand, what are the problems that it’s solving. But I also want to talk about the bottom line and costs, and that’s where some of your chip announcements come in. You talked about Azure Maia, Azure Cobalt. Start with Maia. AI accelerator, Arm-based. This is not competing with Nvidia necessarily, or Jensen wouldn’t have been onstage with you. But running – starting with, you said, Microsoft’s own workloads. The software that Microsoft is offering out in the cloud – this is going to help run that more efficiently. What kind of savings, what kind of efficiency is possible, do you think, with your own designed chip versus what you can get off the shelf?

NADELLA: Yeah, I mean, the thing, Jon, that we are seeing is – as a hyperscaler, you see the workload and you optimize the workload. That’s sort of what one does as a hyperscaler.

FORTT: Hyperscaler meaning it’s you –


FORTT: — it’s Amazon, it’s Google, you’re the cloud. You’ve got billions and billions of dollars spent on these data centers.

NADELLA: That’s right. I mean, so you’ve got a systems company at this point – everything from sort of how we source our power to how we think about the data center design, right? The data center is the computer. The cooling in it – everything is optimized for the workload. So the fact is, we saw these training workloads and inference workloads quite frankly first, right? We have a three-, four-year advantage of trying to sort of learn everything about this workload. To me, in a systems business, you have to be early to the next big workload that’s going to take over, so to speak. And that’s what we got right. And so we’ve been hard at work on it. The other thing – and you’re talking about AI – for us, OpenAI’s models are the ones that are deployed at scale. Obviously, those are the models that we are training at scale and deploying for inference at scale. It’s not just a bunch of models; it’s this one model. So we now have a great roadmap for how we think about Maia, how we think about AMD, how we think about Nvidia – all in our fleet. Like, right now as we speak, we have some of the Maia stuff powering GitHub Copilot, for example. So you will see us deploy our own accelerators and also take advantage. I mean, the other announcement today was the AMD announcement. We’re gonna introduce the MI300 into the fleet. It’s got some fantastic memory characteristics and memory-bandwidth characteristics, which I think are gonna make – and GPT-4 is already running on it. So we’re excited about, obviously, our cooperation and partnership, which is deep, with Nvidia, AMD and our own –

FORTT: Custom chips are the new black, right? AWS has Inferentia, Trainium. Google has its TPUs. What does it take to make yours better and get more benefit out of your systems working?

NADELLA: Yeah, I think the way I look at it is, you don’t enter the silicon business just to be in the silicon business. I think of the silicon business as a means to an end, which is ultimately delivering a differentiated workload. So, for example, that’s why I don’t even think of the silicon itself. I think about the cooling system. I don’t know if you caught that. What we did was we built an entire rack which is liquid-cooled for Maia. And everything – the thermal distribution of that entire rack – is very different from a traditional rack. Guess what? We built it because we can then deploy these in data centers we already have, as opposed to saying let’s wait for the next big data center design which is fully liquid-cooled – which, by the way, is also coming. So that’s the level – when I think about the advantages we will get, it’s not just going to be about one sort of silicon; it’s going to be the entirety of the system, optimized for high-scale workloads that are deployed broadly, like something like OpenAI inferencing.

FORTT: Now let’s take a global perspective. Not long after we’re done talking here, you’re getting on a plane to San Francisco. Chinese President Xi is there. He would like access to all of these innovations that Microsoft has been talking about, that Nvidia has been talking about. President Biden says no. What should happen from here that both allows trade to take place and protects intellectual property?

NADELLA: I think that’s a great question. I mean, at the end of the day, nation states are the ones who define their policies. I mean, it’s clear that the United States has a particular set of policy decisions that they’re making on what it means to both have trade and competition and national security. And so as the states decide – and in this case, obviously, we are subject to what the USG decides – we will really be compliant with it. And at the same time, we do have a global supply chain. The reality of tech as an industry today is it’s globalized. And the question is, how does it reconfigure as all of these new policies and trade restrictions play out? Whereas, at least for today, the majority of our business is in the United States and Europe and in the rest of Asia, and so we don’t see this as a major, major issue for us, quite frankly, other than any disruption to supply chains.

FORTT: The AI piece?

NADELLA: That’s right.

FORTT: That separation –

NADELLA: That’s right, because most of our business – in fact, a lot of the Chinese multinationals operating outside of China are our bigger AI customers, perhaps. But China is not a market that we’re focused on, per se, domestically. We’re mostly focused on the global market ex-China.

FORTT: For the customers, though, who have to operate in all of these different regions, all these different fields – as a hyperscaler, you’ve been building out data centers in those places so that you can abide by the rules. Does this friction make it more complicated? Or, in a way, does it benefit Microsoft’s more diverse global footprint – that you have more options to serve customers as these conflicts arise?

NADELLA: I mean, I think I’ve always felt that in order to be a global provider of something as critical as compute, you just need to localize. And that’s why we have more data center regions than anybody else. I always felt that data sovereignty – the legitimate reasons why any country would want it for their public sector, critical industry – was always going to be true. Also, let’s not forget the speed-of-light issues. You need to have a global footprint in order to be able to serve everyone in the world. And so yes, I think at this point, having invested, having gotten here and now gotten ahead on AI, it’s going to work to our competitive advantage. But I think that this is also about the, you know, the maturity that one needs in order to deal with the world as is. It’s not like we’re building one consumer service that’s reaching, you know, 2, 3 billion people. This is about reaching every enterprise and public sector workload in the world, with all of the compliance and security needs. And that’s a very different business than just having one hit consumer service.

FORTT: Now, it’s been about 25 years since Microsoft lost a big case versus the government, when it looked to some like Microsoft was about to get smaller. And yet we’re talking here after Microsoft just won a big legal case where you’re getting bigger, with the addition of Activision Blizzard. So I guess, in a way, congratulations. But also, there’s some work in the AI context here, and now to integrate this, particularly in AI – you talked about this a little bit on stage. What’s the challenge of integrating this into Microsoft, into Azure, into your status as a hyperscaler, in a way that you get the full benefit of all of that content, of the gaming developer community and of AI?

NADELLA: Yeah, I mean, to us, at the end of the day, you know, when I think about AI, it’s not just another technology you add on the side. It’s about sort of changing the nature of every software category, right? Whether it’s gaming, whether it is core Azure or Windows – we’re redefining every software category, where AI is going to be core to what we do and the value props we develop. The other important consideration is also to not think of safety as something that we do later, but to really shift left and build it into the core. Like, for example, when we think about Copilot, we built all the responsible AI and guardrails right at the very beginning, so that when people are deploying the Copilot, they know that they have the best safety around the Copilot built in. And so these are the things that we’re going to do up and down the stack. And that’s why, as I walked through today, you know, from infrastructure to data to Copilots, we’re thinking of AI as the main thing, with safety – as opposed to one more thing.

FORTT: With the stock recently at an all-time high – Satya Nadella, CEO of Microsoft. Thanks for joining us here on CNBC.

NADELLA: Thank you so much, Jon.
