Intel Wants to Boost Laptop Battery Life to 25 Hours With Ice Lake


At CES 2019 and at its Architecture Day back in December, Intel made it clear that its overall goal with its 10nm designs is to push CPU and platform design beyond the levels we’ve seen with 14nm chips. Sunny Cove, the new CPU architecture at the heart of Ice Lake, is one component of that shift: it can accelerate certain cryptographic operations by up to 75 percent, and it adds support for Intel’s VNNI (Vector Neural Network Instructions) to speed up AI inference workloads.

The implication of Intel’s overall focus is that the initial push for Ice Lake in consumer hardware will be mobile-first, echoing the approach the company took with Broadwell back in 2014. Ice Lake will support LPDDR4X memory as a method of improving RAM bandwidth, though it isn’t clear if Intel will support the highest speed grade, LPDDR4X-4266. That much bandwidth would actually give Intel’s IGPs decidedly more bandwidth than any of AMD’s equivalent APUs, especially in mobile, though the RAM in these systems wouldn’t be upgradeable.
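The bandwidth gap is easy to estimate: peak theoretical memory bandwidth is just the transfer rate multiplied by the bus width. Here is a back-of-the-envelope sketch assuming a 128-bit memory bus, which is typical for thin-and-light laptops (actual configurations, and effective bandwidth, vary by design):

```python
# Peak theoretical memory bandwidth = transfers/sec * bus width in bytes.
# Assumes a 128-bit (16-byte) total bus, typical for mobile parts; this is
# an illustrative estimate, not a figure Intel has published.

def peak_bandwidth_gbs(mega_transfers_per_sec, bus_bits=128):
    """Return peak bandwidth in GB/s for a given MT/s rate and bus width."""
    return mega_transfers_per_sec * 1e6 * (bus_bits // 8) / 1e9

lpddr4x = peak_bandwidth_gbs(4266)  # LPDDR4X-4266
ddr4 = peak_bandwidth_gbs(2400)     # dual-channel DDR4-2400 for comparison

print(f"LPDDR4X-4266:          {lpddr4x:.1f} GB/s")  # ~68.3 GB/s
print(f"Dual-channel DDR4-2400: {ddr4:.1f} GB/s")    # ~38.4 GB/s
```

Even against dual-channel DDR4-2400, the top LPDDR4X grade would offer roughly 1.8x the raw bandwidth, which is why it matters so much for bandwidth-starved integrated graphics.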

As part of its Architecture Day, Intel discussed some of these improvements. The company wants to see 1W displays in-market by 2020, which would represent a significant reduction from average display power in current models. It is taking other steps to improve overall energy efficiency as well, including plans to offload AV scaling to the GPU (something it has discussed before) and to integrate WiFi 6 and Thunderbolt 3 support directly into the platform. Pulling these functions onto the SoC will probably increase overall efficiency, since most I/O blocks benefit from this type of integration, at least at some level.
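It helps to see why display power matters so much at the system level: runtime is simply battery capacity divided by average draw, and at idle the display is often the single largest consumer. A rough sketch (the 50 Wh battery and 1.5 W rest-of-system figures below are illustrative assumptions for a thin-and-light laptop at light load, not Intel-published numbers):

```python
# Rough battery-life estimate: hours = battery watt-hours / average draw.
# The battery capacity and rest-of-system power are assumed values for
# illustration; real systems vary widely with workload and configuration.

def runtime_hours(battery_wh, display_w, rest_of_system_w):
    """Estimated runtime in hours at a constant average power draw."""
    return battery_wh / (display_w + rest_of_system_w)

BATTERY_WH = 50.0  # assumed thin-and-light battery capacity
REST_W = 1.5       # assumed SoC + RAM + WiFi + storage draw at light load

print(f"2.5 W panel: {runtime_hours(BATTERY_WH, 2.5, REST_W):.1f} h")  # 12.5 h
print(f"1.0 W panel: {runtime_hours(BATTERY_WH, 1.0, REST_W):.1f} h")  # 20.0 h
```

Under these assumptions, cutting the panel from 2.5W to 1W takes idle runtime from 12.5 hours to 20, which shows why a 1W display is central to any 25-hour claim.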

How Much Improvement Can Customers Expect?

The problem with Intel’s projections is how much of this work is out of the company’s hands. Few metrics are more susceptible to manipulation than battery life, precisely because the degree of optimization a manufacturer performs and the components chosen can have such a drastic impact on the final system.

This is most obviously visible when companies screw things up, but those failures are the exceptions that prove the rule. UEFI and driver optimizations matter. Display panel choice, skin temperature targets, and power consumption targets are all aspects of the device that the OEM controls. We’ve also begun to see deltas between sustained and burst performance large enough that laptop testing needs to account for them. Collectively, this gives manufacturers a great deal of flexibility to target various features and capabilities to build differentiated products. In practice, however, companies often fail to clearly differentiate which systems get which improvements, or they spend any power savings on increasing the power consumption of other parts of the system. Higher-resolution displays cost more power, and gains in panel efficiency aren’t always large enough to offset the impact of increased pixel counts.

Consumers want 25-hour battery life. Consumers also want high resolutions, excellent CPU and GPU performance, minimal thickness, small bezels, upgradeable parts, and affordable prices. OEMs want to make money. Mix it all together, and we have a situation in which we can say that yes, Intel’s 10nm process node, display technology improvements, and architectural efficiency gains should collectively deliver battery life gains. They will likely be more area- and workload-specific, however, than customers might like.
