Broadwell Vs Skylake Benchmark

Performance Claims:

+18% IPC vs. Skylake,
+47% Performance vs. Broadwell

With every new product generation, the company releasing it has to set some level of expectation for performance. Depending on the company, you'll either get a high-level number summarizing performance, or you'll get reams and reams of benchmark data. Intel did both, especially with a headline '+18%' value, but in recent months the company has also been on a charge about what sort of benchmarking is worth doing. I want to take a quick diversion down that road, and give my thoughts on the matter.

First, I want to define some terms, just so we're all on the same page.

  • A synthetic test is a benchmark engineered to probe a feature of the processor, often to observe its peak capability in one or several specific tasks. A synthetic test does not often reflect a real-world scenario, and likely doesn't use real-world software. Synthetic benchmarks are designed to be stable and repeatable, and the analysis often describes how a processor performs in an ideal scenario.
  • A real-world test uses software that the user ends up using, along with a representative workload for that software. These tests are commonly the most applicable to end-users looking to purchase a product, as they can see actual use-case results. Real-world tests have obvious pitfalls: it can be difficult to test across multiple machines with only a single license, and testing one piece of software gives no guarantee of performance in another.

A typical analysis of a processor covers two things: what it can do (synthetic) and how it performs (real-world). Users interested in the development of a platform and how it will expand and grow, engineers peering over the fence, or even investors looking at the direction the company is going, will look at what the products can do. People deciding what to buy and what to work with are more interested in the performance. Reviewers should get this concept, and companies like Intel should get it too; with Intel hiring a number of ex-reviewers of late, this is coming through.

A couple of months ago, Intel approached a subset of reviewers to discuss best benchmarking practices. On the table were real-world benchmarks, and which benchmarks represent the widest array of the market. Under fire was Cinebench, a semi-synthetic test (it uses a real-world engine on example data) that Intel believed didn't represent the performance of a processor.

Intel provided data from one of its commissioned surveys on software that people use. The data was based on a list of all consumers, from entry-level users up to prosumers, casual gamers, and enthusiasts, but also covering commercial use cases. At the top of the list were the obvious examples, such as the OS and browsers: Explorer.exe, Edge, Chrome. In the top set were important, widely distributed software packages, such as Photoshop (all versions), Steam, WinRAR, Office programs, and popular games like Overwatch. The point Intel was trying to make with this list is that a lot of reviewers run software that isn't popular, and should aim to cover as wide a market as possible.

The key point they were trying to make was that Cinebench, while based on Cinema4D and a rendering tool used by a number of the community, wasn't the be-all and end-all of performance. This is where Intel's explanation became bifurcated: despite this being a discussion on what benchmarks reviewers should consider using, Intel's perspective was that citing a single number, as Intel's competitors have done, doesn't represent true performance in all use cases. There was a general feeling that users were taking single numbers like this and jumping to conclusions. So despite the fact that the media in the room all test multiple software angles, Intel was clear that it didn't want a single number to dominate the headlines, especially when it comes from software that is ranked (according to Intel's survey) somewhere in the 1400s.

Needless to say, Intel got a bit of pushback from the press in the room at the time. Key criticisms were that those present, when they get hardware, test a variety of software, not just Cinebench, to try and give a more complete overall view. Other key points were that the survey covered all users, from consumer to commercial and workstation: a number of the press in the room have audiences that are enthusiasts, so they cater their benchmarks accordingly. There was also a discussion that a number of software packages listed in the top 100 are really hard to benchmark, due to licensing arrangements designed to stop repeated installs across multiple systems. Typically most software vendors aren't interested in working with the benchmark community to help evaluate performance, in case it exposes deficiencies in their code base. There was also the way in which readers' demands adapt over time: most focused readers want their specific software tested, and it is impossible to test 50 different software packages, so a few that can be streamlined in a benchmark suite are used as a representative sample; Cinebench is typically one of those in the rendering arena, alongside POV-Ray, Corona, etc.

Intel, at this stage in the discussion, nonetheless went on to show how the new hardware performs in a variety of tests. We've covered these images before on previous pages, but Intel stated a significant uplift in graphics compared to the current 14nm offerings, from 40% up to 108%:

As well as comparisons to the competition:

Aside from 3DMark, these are all 'real-world' tests.

Move forward a few weeks to Intel's Tech Day, where Ice Lake is discussed, and Intel brings up IPC.

Intel's big claim is that Sunny Cove, a 2019 product, offers 18% more instructions per clock than Skylake, a 2015 product. In order to come to that conclusion, as expected, Intel has to turn to synthetic testing: SPEC2006, SPEC2017, SYSmark 2014 SE, WebXPRT, and Cinebench R15. Wait, what was that last one? Cinebench?

So there are two topics to discuss here.

First is the 18% increase over four years; that's equivalent to a 4.2% compound annual growth rate. Some users will say that we should have had more, and that Intel's problems with its 10nm manufacturing process mean this should have been a 2017 product (which would have been an 8.6% CAGR). Ultimately Intel built enough of an IPC lead over the last decade to afford something like this, and it shows that there isn't an IPC wall just yet.
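As a quick sanity check on those growth figures, here is a minimal back-of-the-envelope sketch of the CAGR arithmetic; this is my illustration of the math, not Intel's methodology, and the function name is mine:

```python
# Back-of-the-envelope check on the quoted IPC growth rates.
# CAGR = (1 + total_gain) ** (1 / years) - 1

def ipc_cagr(total_gain: float, years: float) -> float:
    """Annualized growth rate implied by a cumulative gain over a number of years."""
    return (1.0 + total_gain) ** (1.0 / years) - 1.0

# Skylake (2015) -> Sunny Cove (2019): +18% IPC over four years
print(f"2015-2019: {ipc_cagr(0.18, 4):.1%} per year")   # ~4.2%

# Had Sunny Cove shipped as a 2017 product: the same +18% over two years
print(f"2015-2017: {ipc_cagr(0.18, 2):.1%} per year")   # ~8.6%
```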

Second is the use of Cinebench, and the previous version at that. Given what was discussed above, various conclusions could be drawn. I'll leave those up to you. Personally, I wouldn't have included it.

Aside from IPC, Intel also spoke about actual single-threaded performance of Sunny Cove in its 15W mode.

At a cursory glance, I would have expected this graph to be from real-world analysis. But the blurb at the bottom shows that these results are derived from SPEC2006, specifically 1-thread int_rate_base, which means that these are synthetic results, so we'll analyze them with that in mind. This test also gets a lot of benefit from turbo, with each test likely to fit within the turbo window of an adequately cooled system.

The baseline here is Broadwell, Intel's 5th Generation processor, which if you remember was the first Intel processor to have an integrated FIVR on the mobile parts for power efficiency. In this case we see that Intel puts Skylake at +9% above Broadwell, and then moving through Kaby Lake and Whiskey Lake we see the effect of increasing that peak turbo frequency and power budget: when we moved from dual-core to quad-core 15W mobile processors, the peak turbo power budget increased from 19W to 44W, allowing longer turbo. Overall we hit +42% for 8th Gen Whiskey Lake over Broadwell.

Ice Lake, by comparison, is +47% over Broadwell. For users moving from Broadwell to Ice Lake, which Intel expects most of its users to do, that's a sizable single-threaded performance jump, and I won't dispute that, although I will wait until we see real-world data to come to a better conclusion.

However, if we compare Ice Lake to Whiskey Lake, we see only a +3.5% increase in single-threaded performance. For a generation-on-generation increase, that's even lower than the four-year CAGR from Skylake. Some of you might be questioning why this is happening, and it all comes down to frequency.
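For anyone wondering where that +3.5% comes from, it falls straight out of the two Broadwell-relative figures quoted above; the +42% and +47% are Intel's numbers, the division is mine:

```python
# Single-thread SPEC2006 gains quoted relative to a Broadwell baseline of 1.00
whiskey_lake = 1.42   # +42% over Broadwell (8th Gen Whiskey Lake)
ice_lake = 1.47       # +47% over Broadwell (Ice Lake at 15W)

# Generation-on-generation uplift of Ice Lake over Whiskey Lake
gen_on_gen = ice_lake / whiskey_lake - 1.0
print(f"Ice Lake vs Whiskey Lake: {gen_on_gen:+.1%}")   # roughly +3.5%
```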

Intel's current 8th Gen Whiskey Lake part, the Core i7-8565U, has a peak turbo frequency of 4.8 GHz. In 15W mode, we understand that the peak frequency of Ice Lake is under 4.0 GHz, essentially handing Whiskey Lake a ~20% frequency advantage.
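To put rough numbers on that, here is a minimal sketch that assumes peak single-thread throughput simply scales as IPC x frequency. This is a deliberately naive model of mine, not Intel's: it ignores turbo residency, memory, and workload-specific IPC, which is why it won't line up exactly with the SPEC figures above, and the 4.0 GHz Ice Lake figure is the assumed upper bound mentioned in the text rather than a confirmed spec.

```python
# Naive peak single-thread model: performance ~ IPC x peak frequency
whiskey_lake_freq = 4.8   # GHz, Core i7-8565U peak turbo
ice_lake_freq = 4.0       # GHz, assumed upper bound for Ice Lake-U at 15W

# Whiskey Lake's clock advantage
print(f"Frequency advantage: {whiskey_lake_freq / ice_lake_freq - 1.0:+.0%}")   # ~+20%

# Ice Lake's +18% IPC gain fighting that clock deficit
relative_peak = 1.18 * (ice_lake_freq / whiskey_lake_freq)
print(f"Naive Ice Lake vs Whiskey Lake peak: {relative_peak - 1.0:+.1%}")   # ~-1.7%
```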

If this sounds odd, turn over to the next page. Intel is going to start tripping over itself with its new product lines, and we'll do the math.


Source: https://www.anandtech.com/show/14514/examining-intels-ice-lake-microarchitecture-and-sunny-cove/10
