LAHAINA, MAUI—The highlight of this year’s Snapdragon Summit is the unveiling of Qualcomm’s second-generation Snapdragon X2 Arm laptop processors, including a new “Extreme” tier of chips that looks poised to take on AMD’s, Intel’s, and Apple’s best. PCs with the new X2 Elite and X2 Elite Extreme processors won’t appear until the first half of 2026, but Qualcomm is setting some great expectations: 18-core and 12-core processors with serious multicore CPU muscle, a field-leading TOPS count on its neural processor (an 80 TOPS NPU), and a redesigned graphics core.
With Intel on the ropes but now poised to work more closely than ever with Nvidia on co-designed “RTX” SoCs, with AMD benefitting from the power vacuum left by Intel, and with Apple Silicon going strong, it’s the wildest time ever to cover laptop processors. PCMag’s John Burek and Wired’s Luke Larsen were given the opportunity to sit down with one of the company’s senior executives to chat about the new Snapdragon X2 Elite family. We quizzed Kedar Kondap, SVP and GM of Qualcomm’s Compute division, on various aspects of the new X2 chips: their makeup, the new Snapdragon Guardian tech designed to appeal to enterprise, and how to cool these fierce-looking chips. The interview has been slightly edited and shortened for clarity.
Market Successes for Snapdragon X So Far
PCMAG: We’ve seen various numbers from different sources—adoption numbers relative to the rest of the market—for the initial Snapdragon X Elite. Do you have anything that you can, want, or are able to share in terms of market percentages, numbers in terms of units sold, things of that sort?
Kedar Kondap, Snapdragon Summit 2025 keynote (Credit: John Burek)
KONDAP: I can get you the right numbers that we’ve shared publicly, but we announced that at earnings. When you look at categories where we were focused—which is devices that are thin-and-light, $600 and above, with integrated GPU—in certain markets I think we’ve done really well…
So, approximately 9% of Windows laptops above $600 in the US and the top five European countries. These are the regions we focused on primarily in the first launch. We wanted to make sure that we were focused in how we delivered X Elite into the market. And it was staggered, right? We launched X Elite in June 2024; we launched X Plus devices in September at IFA last year; and then the X came in January. We announced in January, and devices came a little bit after that.
WIRED: I’m curious—you probably don’t have the numbers for this—about consumer purchases. Obviously, with some of the stuff announced, you are trying to push into enterprise more, which is where the big numbers are. Can you talk about how you are doing for consumers, people buying directly from stores and online?
KONDAP: Yeah. The market, as you put it, is segmented between consumer and commercial. We focused the first launches on consumer. We’ve done a lot of pilots and enterprise trials. Those are in progress. We’re a lead partner, in IT, ourselves. We’ve deployed more than 16,000 laptops at Qualcomm, so we’re obviously leading the wave here. From a consumer standpoint, your question was more around, how did we approach the consumer segment?
WIRED: I’m just wondering if you guys are doing better in that specific target [market] than enterprise?
KONDAP: So our first focus was that we targeted consumer. Look at the investments and the strategy that we had. Products were very consumer-focused. We partnered with retailers, consumer retailers, globally. We had 50-plus retailers that we partnered with to have devices available. We focused on OEM dotcom channels. So, like, all of the OEM channels had devices. We announced more than 9,300 stores, and some of them even had a Snapdragon-branded kiosk. And the reason for us to do that is to build a relationship with the consumer. We wanted the consumer to understand two things: one, the brand, and two, these experiences. The AI stuff is new, and there’s not one specific app, or one use case, that’s going to fit everybody….
But in parallel, we have started enterprise trials. Like I said, Qualcomm has led the way, but we have other partners like SAP and many others that have already deployed or are testing actively. We know that takes a little bit longer. Part of why we introduced Snapdragon Guardian today is to showcase more benefits to enterprises and what we can offer.
About Snapdragon Guardian
WIRED: I assume Guardian will be offered in all laptops, not just those sold directly into commercial?
KONDAP: The use cases—obviously, it depends. For example, an OEM wants to sell it into the consumer space and add a capability with Guardian. You have the ability to track a PC, manage a PC. I’ll give you a good use case.
Kids, for example, right? Many schools allow for kids to carry laptops, but they don’t allow them to carry phones. So great use case: You know, today, on phones, people have Life360, or one of these apps that you can track where your kids are. You can geofence stuff. Think of it as something very similar. You have the ability to geofence where your kids are, have it on a laptop, and be able to manage it remotely. You can access what they’re doing. So we want to give that control to consumers. So it’s both a consumer and a commercial thing—but obviously it benefits largely commercial enterprises.
Kedar Kondap, Snapdragon Summit 2025 keynote (Credit: John Burek)
WIRED: Will it be that when people buy those laptops, will they experience that? As in, are OEMs going to use that technology, then rebrand it? Or are we going to see it directly, like a feature in every laptop?
KONDAP: I don’t think we’re ready to talk about that yet. But we wanted to showcase the technology and what we built because we have added the Guardian technology as a separate subsystem in our platform. So we’ve taken the steps to make sure that it is a secure subsystem within our SoC, and it provides the ability for our OEM partners to build on top of that. They can choose to offer it either in consumer or commercial. We’re providing the foundation, if you will.
PCMAG: About the Guardian stuff, I heard a mention in the presentation that it would be available even if the system was powered off. I was trying to figure out how that works. Are you able to talk on that?
KONDAP: It is a separate subsystem within the SoC, and it connects to a cellular modem. The whole system stays in a low-power island. You can wake the modem up, which will wake up the subsystem. Now, as you know, that can go through multiple scenarios—like, for example, if it’s a dead-battery situation, then obviously you can’t do anything about it. But then, at the same time, our intent is offering this to enterprise or IT administrators. We want to give them the control to manage devices better, and the risk of a malicious attack is low if you’re in a dead-battery situation. So as long as the user plugs in the laptop and brings it up to a certain threshold of battery, you have the ability to wake it up and just push a patch, or be able to do any activities…
PCMAG: Disable it, or whatever the case.
KONDAP: It’s totally up to the IT administrators.
Some SoC Particulars
PCMAG: Question on the SoC. I was looking at the specs we were given and noticed that there were different memory allocations for the three different X2 SKUs announced so far. The first, the Elite Extreme, is at 48GB. The others were listed as “device-dependent.” And I was just wondering why the 48GB ceiling was landed upon. Any particular reason?
KONDAP: I think the X2 Elite Extreme devices you saw, that we were running, had 48GB of memory. Honestly, we just picked the 48GB because that is still a pretty sweet spot. There’s no science behind it. The X2 Elite can address up to 128GB. 48GB is already big for most users. So there’s no science behind why we picked that.
(Credit: John Burek)
PCMAG: Is there anything you can speak to in terms of the Adreno GPU? The efficiency gains that were claimed on stage today are pretty impressive. And any insight into how you got to that point versus the first gen?
KONDAP: It’s a completely new Adreno GPU, designed ground up for this stuff. It has a new architecture, better shader pipelines. The entire GPU is new. It’s not iterative. It is a new generation. And of course, if you’re able to attend the sessions after this, we will go into a lot more technical details on exactly how it is done. But yes, it’s a completely new architecture, and that’s how we’re able to get the performance gains as well as the power efficiency.
PCMAG: One other thing that came up when I was talking with some of the folks in the benchmarking session that we had yesterday. The two reference desktops shown—one of the reps told me that they’re being cooled with AirJet, Frore Systems’ AirJet cooling?
KONDAP: We have the option to do both. I should say, technically, you have the option to do three things. One, you can have the option to enable this with a fanless design. You can get close to at least 12 watts TDP, if not a little bit more. Or, you can use a regular fan. Or, you can use AirJet. And so, right now, we have two of the SKUs, I believe, that we’re showing here. One of the designs is fanless, and the other one has AirJet, which gives you close to 25 watts TDP.
AirJet-cooled Qualcomm desktop reference design (Credit: John Burek)
It’s just an option that we want to showcase—that in the same X2 Elite or the X2 Elite Extreme, you can utilize the entire benefit and choose your design points, no different than laptops. And you can tell when you look at the form factors, the difference is insignificant in terms of what you can do, but you can still get 25 watts of performance at very low power.
PCMAG: Thoughts on using AirJet outside of these desktops, in things like laptops? There’s no reason you couldn’t do that?
KONDAP: No restriction. We just showcased this technology in the small form factors.
Market Positioning for the New X2 Chips
WIRED: Can you talk about who the X2 Elite Extreme is for, and the thought behind offering that as a separate configuration?
KONDAP: There’s swim lanes, right? You have certain price points in certain swim lanes. The X1 Elite was in the price band, or I’ll say the sweet spot, of $1,000 device SPs. Think of the X2 Elite as something very similar to that. The Extreme version with the 18-core CPU, with the much higher graphics core, will address a higher tier of experiences. That’s part of why we focus so much on talking about gaming and creator use cases. We want to start showcasing the performance, and that’s where you get the true benefit of running stuff.
(Credit: John Burek)
Everybody’s in search of this one AI app that is going to transform [everything]. But we believe the workloads are going to become agentic. And as you start looking at the whole scenario, all these different agents running on the device, we believe that it’s going to run across all the different cores (obviously, on the NPU for low power), and it’s going to run hybrid as necessary….Same reason why, for example, we added an 80 TOPS NPU. We believe that we’re capping out in terms of many of the use cases that we’re running, even at 45 TOPS. So we’re enhancing leadership in each of those areas.
WIRED: The 80 TOPS will be across the lineup, right? In the same way that 50 TOPS is across the [current Snapdragon X] lineup?
KONDAP: We’re keeping it constant. But the memory is different. In the X2 Elite and the X2 Elite Extreme, the memory configurations are different. DDR—the available bandwidth is different.
For example, you can address close to about 150, 152, gigabytes per second in the X2 Elite. The Extreme gives you about 225, so think of it as an eight-channel DDR and a 12-channel DDR. That’s the difference. In technical terms, it gives you more DDR bandwidth. And the reason to do that, obviously, is because many of the AI use cases are DDR-centric.
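If you want to see where those numbers come from, the arithmetic is simple: channel count times channel width times per-pin transfer rate. Here is a rough sketch in Python; the transfer rate and the 16-bit channel width are our assumptions (typical for LPDDR5X-class memory), while the 8-channel and 12-channel counts are Kondap's.

```python
# Back-of-the-envelope memory bandwidth estimate.
# Assumptions (ours, not from the interview): LPDDR5X-class memory at roughly
# 9,523 MT/s per pin, 16-bit-wide channels. The channel counts are Kondap's.

TRANSFER_RATE_MTS = 9523      # megatransfers per second per pin (assumed)
CHANNEL_WIDTH_BITS = 16       # bits per LPDDR channel (assumed)

def bandwidth_gbs(channels: int) -> float:
    """Peak bandwidth in gigabytes per second for a given channel count."""
    bits_per_second = channels * CHANNEL_WIDTH_BITS * TRANSFER_RATE_MTS * 1e6
    return bits_per_second / 8 / 1e9

print(f"8-channel  (X2 Elite):         ~{bandwidth_gbs(8):.0f} GB/s")   # ~152 GB/s
print(f"12-channel (X2 Elite Extreme): ~{bandwidth_gbs(12):.0f} GB/s")  # ~229 GB/s
```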
PCMAG: This might be a marketing question. I was looking at the initial three SKUs, and I noticed that the first SKU, the X2 Elite Extreme SKU, is 18-core, with everything else maxed out. The second one is also 18 cores, but not called Extreme. And then the other Elite is a 12-core, right? I was wondering what the marketing logic was of not making the first two 18-core chips both “Extreme”?
KONDAP: So the difference with the Extreme version is that it’s the higher DDR, and it is literally for “extreme” use cases, with AI and all of these things. The big difference between the three…all three SKUs support 80 TOPS, but the middle one and the lower one are both eight-channel. So addressability for memory is around 150 gigabytes per second, and the Extreme version is the one with 225.
PCMAG: Is the higher GPU performance also tied to the higher memory bandwidth?
KONDAP: No…to be fair, obviously DDR also drives the GPU, drives a lot of the pipeline, drives a lot in terms of the SoC. But that’s not why it is. Technically, yes, you are correct that games, or video, or those use cases, will scale because the bandwidth scales. But we’ve kept the graphics constant. You won’t necessarily get to the same output. We just want to have the option available for everybody in case they want to use the 18-core without the extra DDR.
PCMAG: About the Guardian hardware. It was said on stage that there is an SoC element that is part of the Guardian hardware. Is there any way of describing what that is?
KONDAP: It’s a dedicated processor within the SoC that’s isolated from the rest of the other cores. It has its own BIOS. You can manage the subsystem independently without having to access the rest. So it’s a more secure way of how we’re able to access an independent processor within the entire SoC.
Battery Life, and Why a Bigger NPU?
WIRED: I noticed there was very little talk about battery life, as opposed to in the first generation—that was, you know, the thing! Anything to say about how different the battery life we’re looking at [will be]? Or a kind of parity with the previous generation?
KONDAP: No, it’ll be better. We didn’t talk in terms of specifics; then, the challenge becomes, what use cases, what do you want to run? With the first generation, we had to showcase stuff because we hadn’t launched yet! [Laughter] Now, people have tested devices like the [HP] OmniBook 5, and the claims that are made of 34 hours of battery life, they’re tested. So there are third-party reports that have attested to this incredible battery life. But I showed some of the claims, as you heard; it depends on which platform you look at: 30%, 40%, 50% better performance in the Extreme at 60% lower power. You will see improvements and gains in battery life, not just in terms of performance. But again, as you know, it’s tied to use cases, how people are running it. But we will continue our leadership in performance per watt. We’re going to lead the way.
PCMAG: Question on the NPU. We are familiar with how things develop in CPU and GPU, but with NPU development, you’re going from one number to a much larger number. How does that happen? What are the factors in chip development that enable an NPU to go from that to that in a generation? I’m just not familiar with how NPU architecture works.
KONDAP: More details to come on Architecture Day for all of these, but look—AI is moving at such a rapid pace, and that’s why we had Steven [Bathiche] from Microsoft talk about it more technically. It’s funny, because when we talked about 45 TOPS, and were the first to introduce that in market, everybody said, “45 TOPS? Why do you need 45 TOPS?”
I don’t know if you guys have had a chance to go through our demo area, but now you have use cases there that are utilizing 100% of our 45 TOPS, right? For example, there’s an app called Collov, which basically helps you stage your home. There’s no one app that’s going to meet everybody’s needs, but there are apps that are going to meet every different use case. What’s happening is we’re seeing this huge demand for NPU workloads.
At the same time, the models that we’ve seen have been optimized significantly. For example, when we launched, we talked about a 13-billion-parameter model back then, running on our 45 TOPS NPU with the eight-channel DDR at 130 gigabytes per second. Now fast-forward. What has happened in two years is that models have shrunk. What was back then an INT8 model is now an INT4 model, sometimes an INT2 model. The accuracy is still very good. That’s what Steven also addressed. We’re also able to address bigger models now on the same device. Today, we’re able to, in our X1 platform, in many cases, fit like a 27-billion-parameter model.
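The memory math behind that claim is easy to sketch. The parameter counts below are the ones Kondap cites; the bytes-per-weight figures follow directly from the integer formats, and the sketch ignores activations, KV cache, and runtime overhead.

```python
# Rough model-memory footprint at different weight precisions.
# Parameter counts (13B then, ~27B now) are from the interview; bits per weight
# follow from the format (INT8 = 1 byte, INT4 = 0.5 bytes, INT2 = 0.25 bytes).

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB for a quantized model."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(f"13B @ INT8: ~{weights_gb(13, 8):.1f} GB")   # ~13.0 GB
print(f"13B @ INT4: ~{weights_gb(13, 4):.1f} GB")   # ~6.5 GB
print(f"27B @ INT4: ~{weights_gb(27, 4):.1f} GB")   # ~13.5 GB, about the same
                                                    # footprint as 13B at INT8
```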
So what’s happening is we looked at a macro level, at how the industry is shaping with the AI workloads and the models. Then we looked at what we believe is the future, looking at all of these agentic workloads. Let me give you an example. Think of a use case where you’re going to tell your PC: Look at my calendar, try to see if I can go to Snapdragon Summit. Am I available that week? And please help me schedule with these five folks, and help me do ABCDE, and give me flight options.
That use case has multiple agents running on the device. Each agent is different. There’s an agent that’s looking at your calendar; there’s an agent that’s looking at the conflicts that somebody else might have. There’s an agent that’s going to look at your travel preferences and all of that. There’s also a hybrid approach to this, because if your preference is Southwest, Alaska, Delta Airlines, whatever, it’s going to go in and access the web to see that. OK, John would like to fly on September 20, and it says, “Well, looks like you can make it. You’re supposed to go meet Luke at so-and-so place, but it looks like we can find another option.”
So those agents—I’m giving you a very rudimentary example—but think of that as workloads where we’re building up workloads internally to start modeling. And that, in partnership with Microsoft as well, and looking at partnering with all the model vendors, that’s how we sized up the NPU.
There’s more to it. I’m still talking very rudimentary. Now, you think about LVMs [large vision models] and multi-modalities with images and creating videos. Like: I type a text prompt, I want to write a little story, and I want to get it converted into a video. How does that happen? What parts of it run on-device? That’s how we start sizing up the NPU and start looking at how we want to architect this. But also, bringing it back to the higher DDR bandwidth, the reason is that we’re seeing that use case there. That’s why we wanted to have that option available for anybody who needs it.
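To make that scenario a bit more concrete, here is a deliberately toy sketch of the kind of multi-agent flow Kondap describes. Every function and class name in it is hypothetical; it is not Qualcomm's or Microsoft's agent framework, just an illustration of several single-purpose agents, most notionally running on-device and one reaching out to the web, feeding a single plan.

```python
# Toy sketch of the multi-agent scenario described above. Nothing here is a
# real Qualcomm or Microsoft API. The calendar and conflicts agents stand in
# for on-device (NPU) work; the travel agent stands in for the hybrid step
# that also reaches out to the web.

from dataclasses import dataclass

@dataclass
class AgentResult:
    agent: str
    output: str

def calendar_agent(week: str) -> AgentResult:
    # On-device: scan the local calendar for free days that week.
    return AgentResult("calendar", f"You look free Tue-Thu the week of {week}")

def conflicts_agent(attendees: list[str]) -> AgentResult:
    # On-device: check shared availability for the people you want to meet.
    return AgentResult("conflicts", f"All {len(attendees)} attendees are free Wednesday")

def travel_agent(origin: str, destination: str) -> AgentResult:
    # Hybrid: preferences are read locally, but fares would come from the web.
    return AgentResult("travel", f"Nonstop {origin} to {destination} on your preferred carrier")

def plan_trip(week: str, attendees: list[str]) -> list[AgentResult]:
    """A trivial orchestrator: run each agent in turn and collect its output."""
    return [
        calendar_agent(week),
        conflicts_agent(attendees),
        travel_agent("SJC", "OGG"),
    ]

for result in plan_trip("September 20", ["Luke", "Kedar", "John"]):
    print(f"[{result.agent}] {result.output}")
```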
PCMAG: Is it also a [factor] that multiple AI demands are being made at the same time? So, having a larger NPU enables not only multiple agents, but multiple processes that may be happening in the system at the same time? Being unsure how an NPU works, can it juggle [like a CPU]?
KONDAP: Yeah, it is able to in the same way. And you still have a very powerful CPU and a very powerful GPU, and you have an NPU. So the better part is now if you’re offloading all of these tasks, like the agentic stuff, to your NPU, you still have a lot of CPU and GPU headroom left for you to do other tasks, right? You’re still going to do your email, you’re still going to do other tasks that you want to do that you’ve offloaded to these other pipelines, including the video and audio and all of that.
We talked about adding this NPU even to our audio blocks. That’s where we look at echo cancellation, background, and all of that stuff. We’re going to start running it more on these smaller NPUs. Overall, like I said, that’s how we model our use cases right now.
(Credit: John Burek)
WIRED: [With NPU], are we still in a sort of “build it and they will come” situation? I know that initially, that’s what it was, right? You had to get these into these computers before people could start developing. Where are we at right now?
KONDAP: There are more ISVs wanting to port all their stuff than we can keep up with right now. And you see it specifically with these creator workloads—like Ableton Live and the big voice-model stuff—these are very intense things that really do take up a lot of the NPU. And so everybody is moving toward the same thing, like enterprises are moving to agents.
We have a large customer right now that has moved a lot of their workforce in many areas…all onto agents. Like, they’ve moved their entire payroll to an agentic AI. There is no more payroll for them. They have, like, one person in payroll. They shrunk it from 11 down to one person. And then making sure that the agents can run payroll, they’ve linked it up in the back end. There are all these different use cases….I don’t have to sit across the table and convince somebody that the future is AI. Those days are past us now.
WIRED: Specifically on-device, right?
KONDAP: It’s moving on-device….The example I gave you of writing a little story and then building a video: It costs a lot of money to build that story and build a video from that story if you run it 100% in the cloud. And there’s no reason to. But we’re not necessarily saying everything’s going to run on-device. We’re just saying that’s the optionality. You have the option to run a hybrid model, where you can run stuff on-device, and other parts in the cloud. That’s the beauty of how this industry is going to move.
PCMAG: Is there any world in which applications, AI applications, are load-balancing between the NPU and other parts of the CPU/SoC? Is that currently done today?
KONDAP: Yes, it’s done today. OK, but it’s more power, right? The reason for the NPU is that it’s a lot more power-efficient. Running anything on your NPU helps you with battery life significantly.
PCMAG: And these days, what’s managing the traffic around which part of the chip you’re using? Is that something that’s built into the app, is that something system-level? Who’s arbitrating that?
KONDAP: So the OS already has it. Like, we can go to your Task Manager today and see your NPU utilization relative to your CPU and GPU. So you can actually run something and see where it’s being run—just the way you could run it earlier on the CPU and GPU, now you can see it on the NPU. From an orchestration standpoint, Microsoft provided—if you guys saw the announcement—Windows ML, so it makes it easier now for developers to have a cross-referenced framework that they can use.
For Qualcomm, we have our own orchestration framework. So, for example, when an ISV comes in and balances a particular use case, we have the governance in terms of what runs on the CPU, the GPU, or the NPU, or within the NPU, how do we want to run all the models. That governance we already have. We provide the orchestration ourselves.
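Qualcomm's orchestration layer is not something most developers call directly, but for a sense of what targeting the NPU looks like in practice, here is a minimal sketch using ONNX Runtime with the QNN execution provider, which is one common route to the Hexagon NPU on Windows on Arm today. The model file and input shape below are placeholders, not anything from Qualcomm's demos.

```python
# One common way to target the Hexagon NPU today: ONNX Runtime with the QNN
# execution provider, with the CPU provider as a fallback. The model path and
# input tensor below are placeholders for illustration only.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder model file
    providers=[
        ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),  # NPU path
        "CPUExecutionProvider",                                    # fallback
    ],
)

# Confirm which providers the session actually loaded.
print("Active providers:", session.get_providers())

# Run a dummy inference; shapes depend entirely on the placeholder model.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})
print("Output shape:", outputs[0].shape)
```

The provider list doubles as a fallback order: if the QNN provider cannot be loaded on a given machine, the same session quietly runs on the CPU provider instead.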
Snapdragon X, Looking to 2029…
WIRED: Do you have a goal in mind for market share? What would you consider to be a success in this next generation?
KONDAP: We’ve said our North Star. We said at our investor day, that’s $4 billion in 2029.
PCMAG: $4 million by 2029?
KONDAP: $4 billion.
PCMAG: That’s a big number. [Laughter.]
(Credit: Qualcomm)
KONDAP: But we’re also talking about a different way of how people are going to look at the PC. It’s not the same as what we all see today. It will change. The interaction will change. You heard Cristiano [Amon, Qualcomm CEO] saying, “AI will be the new UI.” We’re seeing it. I can’t stress it enough.
(Note: PCMag is attending Qualcomm’s Snapdragon Summit by invitation, but in keeping with our ethics policy, we have assumed all costs for travel and lodging for the conference.)
About Our Expert
John Burek
Executive Editor and PC Labs Director
Experience
I have been a technology journalist for almost 30 years and have covered just about every kind of computer gear—from the 386SX to 64-core processors—in my long tenure as an editor, a writer, and an advice columnist. For almost a quarter-century, I worked on the seminal, gigantic Computer Shopper magazine (and later, its digital counterpart), aka the phone book for PC buyers, and the nemesis of every postal delivery person. I was Computer Shopper’s editor in chief for its final nine years, after which much of its digital content was folded into PCMag.com. I also served, briefly, as the editor in chief of the well-known hard-core tech site Tom’s Hardware.
During that time, I’ve built and torn down enough desktop PCs to equip a city block’s worth of internet cafes. Under race conditions, I’ve built PCs from bare-board to bootup in under 5 minutes. I never met a screwdriver I didn’t like.
I was also a copy chief and a fact checker early in my career. (Editing and polishing technical content to make it palatable for consumer audiences is my forte.) I also worked as an editor of scholarly science books, and as an editor of “Dummies”-style computer guidebooks for Brady Books (now, BradyGames). I’m a lifetime New Yorker, a graduate of New York University’s journalism program, and a member of Phi Beta Kappa.