Java 8 - External Iteration performing better than Internal Iteration?


So I was reading a book on Java 8 when I saw a comparison between external and internal iteration, and I thought about comparing the two performance-wise.

I have a method that just sums up a sequence of integers up to n.

Iterative one:

private static long iterativeSum(long n) {
    long startTime = System.nanoTime();
    long sum = 0;
    for (long i = 1; i <= n; i++) {
        sum += i;
    }
    long endTime = System.nanoTime();
    System.out.println("Iterative Sum Duration: " + (endTime - startTime) / 1000000);
    return sum;
}

Sequential one - using internal iteration:

private static long sequentialSum(long n) {
    long startTime = System.nanoTime();
    //long sum = LongStream.rangeClosed(1L, n)
    long sum = Stream.iterate(1L, i -> i + 1)
                     .limit(n)
                     .reduce(0L, (i, j) -> i + j);
    long endTime = System.nanoTime();
    System.out.println("Sequential Sum Duration: " + (endTime - startTime) / 1000000);
    return sum;
}

I tried to do some benchmarking on them, and it turns out that the version using external iteration performs far better than the one using internal iteration.

Here's my driver code:

public static void main(String[] args) {
    long n = 100000000L;
    for (int i = 0; i < 10000; i++) {
        iterativeSum(n);
        sequentialSum(n);
    }
    iterativeSum(n);
    sequentialSum(n);
}

The running time for the iterative one was always < 50 ms, whereas the execution time for the sequential one was always > 250 ms.

I am not able to understand why internal iteration is not outperforming external iteration here.

Accepted Answer


Even though the presented results are essentially meaningless (a hand-rolled timing loop like this is not a reliable benchmark), the observed effect is real: the Stream API does have an overhead which, for such simple tasks, cannot be totally eliminated in real applications even after warm-up. Let's write a JMH benchmark:

@Warmup(iterations = 5, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(3)
@State(Scope.Benchmark)
public class IterativeSum {
    @Param({ "100", "10000", "1000000" })
    private int n;

    public static long iterativeSum(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    @Benchmark
    public long is() {
        return iterativeSum(n);
    }
}

Here's the baseline test: a plain loop. The results on my box are the following:

Benchmark        (n)      Mode  Cnt    Score   Error  Units
IterativeSum.is  100      avgt   30    0.074 ± 0.001  us/op
IterativeSum.is  10000    avgt   30    6.361 ± 0.009  us/op
IterativeSum.is  1000000  avgt   30  688.527 ± 0.910  us/op
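If you want to rerun these benchmarks yourself, a minimal JMH launcher looks roughly like the sketch below (assuming the standard jmh-core dependency is on the classpath; the class name BenchmarkRunner is just illustrative):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

// Launches every @Benchmark method in the IterativeSum class above.
public class BenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(IterativeSum.class.getSimpleName()) // match benchmarks by class name
                .build();
        new Runner(opt).run();
    }
}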

Here's your version of Stream-API-based iteration:

public static long sequentialSumBoxed(long n) {
    return Stream.iterate(1L, i -> i + 1).limit(n)
                 .reduce(0L, (i, j) -> i + j);
}

@Benchmark
public long ssb() {
    return sequentialSumBoxed(n);
}

The results look like this:

Benchmark         (n)      Mode  Cnt     Score    Error  Units
IterativeSum.ssb  100      avgt   30     1.253 ±  0.084  us/op
IterativeSum.ssb  10000    avgt   30   134.959 ±  0.421  us/op
IterativeSum.ssb  1000000  avgt   30  9119.422 ± 22.817  us/op

Very disappointing: 13-21x slower. This version has many boxing operations inside; that's why the primitive stream specializations were created. Let's check the non-boxed version:

public static long sequentialSum(long n) {
    return LongStream.iterate(1L, i -> i + 1).limit(n)
                     .reduce(0L, (i, j) -> i + j);
}

@Benchmark
public long ss() {
    return sequentialSum(n);
}

The results are the following:

Benchmark        (n)      Mode  Cnt     Score    Error  Units
IterativeSum.ss  100      avgt   30     0.661 ±  0.001  us/op
IterativeSum.ss  10000    avgt   30    67.498 ±  5.732  us/op
IterativeSum.ss  1000000  avgt   30  1982.687 ± 38.501  us/op

Much better now, but still 2.8-10x slower. An alternative would be to use a range:

public static long rangeSum(long n) {
    return LongStream.rangeClosed(1, n).sum();
}

@Benchmark
public long rs() {
    return rangeSum(n);
}

The results are the following:

Benchmark        (n)      Mode  Cnt     Score     Error  Units
IterativeSum.rs  100      avgt   30     0.316 ±   0.001  us/op
IterativeSum.rs  10000    avgt   30    28.646 ±   0.065  us/op
IterativeSum.rs  1000000  avgt   30  2158.962 ± 514.780  us/op

Now it's 3.1-4.5x slower. The cause of this slowness is that the Stream API has a very long call chain which hits the MaxInlineLevel JVM limit, so it cannot be fully inlined by default. You may increase this limit with a setting like -XX:MaxInlineLevel=20 and get the following result:

Benchmark        (n)      Mode  Cnt    Score    Error  Units
IterativeSum.rs  100      avgt   30    0.111 ±  0.001  us/op
IterativeSum.rs  10000    avgt   30    9.552 ±  0.017  us/op
IterativeSum.rs  1000000  avgt   30  729.935 ± 31.915  us/op

Much better: now it's only 1.05-1.5x slower.
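If you'd rather not pass the flag to the whole JVM invocation, JMH can append JVM arguments to the forked benchmark processes; here's a minimal sketch of that approach (not necessarily how the numbers above were produced, and the class name RangeSumDeepInline is just illustrative):

import java.util.concurrent.TimeUnit;
import java.util.stream.LongStream;
import org.openjdk.jmh.annotations.*;

// Sketch: same rangeSum benchmark, but the inlining flag is applied
// only to the forked benchmark JVMs via jvmArgsAppend.
@Warmup(iterations = 5, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(value = 3, jvmArgsAppend = "-XX:MaxInlineLevel=20")
@State(Scope.Benchmark)
public class RangeSumDeepInline {
    @Param({ "100", "10000", "1000000" })
    private int n;

    @Benchmark
    public long rs() {
        return LongStream.rangeClosed(1, n).sum();
    }
}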

The problem with this test is that the loop body of the iterative version is very simple, so it can be efficiently unrolled and vectorized by the JIT compiler, while it's much harder to do this with the same efficiency for the more sophisticated Stream API code. However, in real applications you're unlikely to sum consecutive numbers in a loop (why not write n*(n+1)/2 instead?). With real problems the Stream API overhead is much lower, even with the default MaxInlineLevel setting.
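For completeness, the closed-form version mentioned above could look like this (closedFormSum is just an illustrative name):

// Closed-form sum 1 + 2 + ... + n = n * (n + 1) / 2.
// Note: n * (n + 1) overflows a long once n exceeds roughly 3 * 10^9,
// so it is safe for the n = 100000000 used in the question.
public static long closedFormSum(long n) {
    return n * (n + 1) / 2;
}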
