Spark groupBy
Contents
- The Spark groupBy operator
- Worked example
- Exercise: implement wordCount with groupBy
The Spark groupBy operator
groupBy partitions elements by the return value of the given function: all values that map to the same key are placed into one iterator.
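The same contract holds for plain Scala collections, which makes the semantics easy to verify locally without starting a Spark job. A minimal sketch (plain Scala, no Spark dependency):

```scala
// groupBy on a local collection follows the same rule as the RDD operator:
// the grouping key is the function's return value, and every element that
// maps to the same key ends up in the same group.
val nums = List(1, 2, 3, 4, 5, 6, 7, 8, 9)

// Key 1 collects the odd numbers, key 0 the even numbers.
val grouped: Map[Int, List[Int]] = nums.groupBy(_ % 2)

println(grouped(1))  // odd numbers
println(grouped(0))  // even numbers
```

Note that within each group the original element order is preserved, matching what the RDD example below prints.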
Worked example
Requirement: split List(1, 2, 3, 4, 5, 6, 7, 8, 9) into odd and even numbers and print them to the console in the following form:
even: 2468
odd: 13579
package com.xcu.bigdata.spark.core.pg02_rdd.pg022_rdd_transform

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
 * @Package : com.xcu.bigdata.spark.core.pg02_rdd.pg022_rdd_transform
 * @Desc : Group by the return value of the given function; values with the same key go into one iterator
 */
object Spark05_GroupBy {
  def main(args: Array[String]): Unit = {
    // Create the configuration
    val conf: SparkConf = new SparkConf().setAppName("Spark05_GroupBy").setMaster("local[*]")
    // Create the SparkContext, the entry point for job submission
    val sc = new SparkContext(conf)
    // Create an RDD
    val rdd: RDD[Int] = sc.parallelize(List(1, 2, 3, 4, 5, 6, 7, 8, 9))
    // Group by parity
    val groupbyRDD: RDD[(Int, Iterable[Int])] = rdd.groupBy((x: Int) => x % 2)
    // Print the data
    groupbyRDD.collect().foreach((t: (Int, Iterable[Int])) => {
      t._1 match {
        case 1 =>
          print("odd: ")
          t._2.foreach(print(_))
          println()
        case _ =>
          print("even: ")
          t._2.foreach(print(_))
          println()
      }
    })
    // Release resources
    sc.stop()
  }
}
Exercise: implement wordCount with groupBy
package com.xcu.bigdata.spark.core.pg01_wordcount

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
 * @Desc : A small exercise dedicated to the groupBy operator
 */
object Spark02_WordCount {
  def main(args: Array[String]): Unit = {
    // Create the configuration
    val conf: SparkConf = new SparkConf().setAppName("Spark02_WordCount").setMaster("local[*]")
    // Create the SparkContext, the entry point for job submission
    val sc = new SparkContext(conf)
    // Create an RDD
    val rdd: RDD[String] = sc.makeRDD(List("Hello Scala", "Hello Spark", "Hello World"))
    // Flatten each line into words
    val flatMapRDD: RDD[String] = rdd.flatMap((s: String) => s.split(" "))
    // Group by the word itself
    val groupByRDD: RDD[(String, Iterable[String])] = flatMapRDD.groupBy((word: String) => word)
    // Count the occurrences of each word
    val mapRDD: RDD[(String, Int)] = groupByRDD.map((t: (String, Iterable[String])) => (t._1, t._2.size))
    // Print the result
    mapRDD.collect().foreach(println)
    // Release resources
    sc.stop()
  }
}
Output:
(Hello,3)
(World,1)
(Spark,1)
(Scala,1)
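One caveat worth knowing: a groupBy-based wordCount shuffles every individual word occurrence across the cluster, whereas the usual `map(word => (word, 1)).reduceByKey(_ + _)` pipeline combines counts on the map side first, which is why it is preferred in real jobs. The flatMap → groupBy → count logic itself can be checked locally with plain Scala collections (no Spark required):

```scala
// Local-collection version of the wordCount pipeline above:
// flatten lines into words, group identical words, then count each group.
val lines = List("Hello Scala", "Hello Spark", "Hello World")

val counts: Map[String, Int] =
  lines
    .flatMap(_.split(" "))        // flatten: "Hello Scala" -> "Hello", "Scala"
    .groupBy(identity)            // group identical words together
    .map { case (word, occurrences) => (word, occurrences.size) }

counts.foreach(println)
```

This mirrors the RDD code step for step: `groupBy(identity)` plays the role of `groupBy((word: String) => word)`, and taking each group's size replaces the `map` over `(t._1, t._2.size)`.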