Spark Scala: Understanding reduceByKey(_ + _)

I can't understand reduceByKey(_ + _) in the first example of Spark with Scala:

    import org.apache.spark.SparkContext

    object WordCount {
      def main(args: Array[String]): Unit = {
        val inputPath = args(0)
        val outputPath = args(1)
        // No-arg constructor: master and app name are expected to come from spark-submit
        val sc = new SparkContext()
        val lines = sc.textFile(inputPath)
        val wordCounts = lines
          .flatMap(line => line.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _) // <-- I can't understand this line
        wordCounts.saveAsTextFile(outputPath)
      }
    }

Accepted Answer


Reduce takes two elements and produces a third by applying a function to the two parameters. In reduceByKey, that function is applied repeatedly to all values that share the same key, until a single value per key remains.
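
The same mechanism exists in plain Scala collections, which may make it easier to see what the function does before Spark distributes the work:

    // Plain Scala (no Spark): reduce repeatedly combines two elements into one
    val total = Seq(1, 2, 3, 4).reduce(_ + _) // evaluates here as ((1 + 2) + 3) + 4 = 10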

The code you showed is equivalent to the following:

    reduceByKey((x, y) => x + y)

Instead of defining dummy variables and writing a lambda, Scala is smart enough to figure out that what you are trying to achieve is to apply a function (sum, in this case) to any two parameters it receives; hence the syntax:

    reduceByKey(_ + _)
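
For completeness, here is a minimal, self-contained sketch showing that the two forms compute the same result. It assumes Spark is on the classpath; the local[*] master URL, the app name, and the ReduceByKeyDemo object are illustrative placeholders, not part of the original example:

    import org.apache.spark.{SparkConf, SparkContext}

    object ReduceByKeyDemo {
      def main(args: Array[String]): Unit = {
        // Run Spark locally, using as many worker threads as there are cores
        val conf = new SparkConf().setMaster("local[*]").setAppName("ReduceByKeyDemo")
        val sc = new SparkContext(conf)

        // A tiny (word, 1) pair RDD, like the one produced by the map step above
        val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))

        // Explicit lambda form
        val explicit = pairs.reduceByKey((x, y) => x + y).collect().toMap

        // Placeholder syntax: each underscore stands for one parameter, in order
        val shorthand = pairs.reduceByKey(_ + _).collect().toMap

        println(explicit)  // Map(a -> 2, b -> 1)
        println(shorthand) // Map(a -> 2, b -> 1)

        sc.stop()
      }
    }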
