Spark Scala: Understanding reduceByKey(_ + _)
I can't understand reduceByKey(_ + _) in the first Spark example with Scala:

```scala
import org.apache.spark.SparkContext

object WordCount {
  def main(args: Array[String]): Unit = {
    val inputPath = args(0)
    val outputPath = args(1)
    val sc = new SparkContext()
    val lines = sc.textFile(inputPath)
    val wordCounts = lines
      .flatMap { line => line.split(" ") }
      .map(word => (word, 1))
      .reduceByKey(_ + _) // <-- I can't understand this line
    wordCounts.saveAsTextFile(outputPath)
  }
}
```

**Best answer**
Reduce takes two elements and produces a third by applying a function to the two parameters.
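You can see this pairwise combining on an ordinary Scala collection, without Spark at all (a minimal illustration, not from the original answer):

```scala
// reduce repeatedly combines two elements into one until a single value remains
val total = List(1, 2, 3, 4).reduce(_ + _)
// evaluates as ((1 + 2) + 3) + 4
println(total) // 10
```

reduceByKey does the same thing, but separately for the values of each key in a pair RDD.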
The code you showed is equivalent to the following:

```scala
reduceByKey((x, y) => x + y)
```

Instead of defining dummy variables and writing a lambda, Scala is smart enough to figure out that what you are trying to achieve is applying a function (sum in this case) to any two parameters it receives, hence the syntax:

```scala
reduceByKey(_ + _)
```
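To see both spellings side by side, here is a minimal runnable sketch (assuming spark-core is on the classpath; the object name ReduceByKeyDemo and the local master setting are my own illustration, not part of the original post):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ReduceByKeyDemo {
  def main(args: Array[String]): Unit = {
    // Local SparkContext for a quick experiment; "local[*]" uses all cores on this machine
    val sc = new SparkContext(new SparkConf().setAppName("ReduceByKeyDemo").setMaster("local[*]"))

    // A tiny (word, 1) pair RDD, analogous to the map step in the WordCount example
    val pairs = sc.parallelize(Seq("a", "b", "a", "b", "a")).map(word => (word, 1))

    // Explicit lambda: both parameters named
    val explicit = pairs.reduceByKey((x, y) => x + y).collect().toMap

    // Placeholder syntax: each underscore stands for one parameter, in order
    val shorthand = pairs.reduceByKey(_ + _).collect().toMap

    println(explicit)              // Map(a -> 3, b -> 2)
    println(explicit == shorthand) // true: both spellings denote the same function

    sc.stop()
  }
}
```

The placeholder syntax is not specific to reduceByKey: anywhere Scala expects a two-parameter function, `_ + _` expands to a lambda whose underscores are bound to the parameters from left to right.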