This article shows how to flatten rows in Spark; it should be a useful reference for anyone facing the same problem.
Question
I am doing some testing for Spark using Scala. We usually read JSON files that need to be manipulated like the following example:
test.json:
{"a":1,"b":[2,3]}
val test = sqlContext.read.json("test.json")
How can I convert it to the following format:
{"a":1,"b":2}
{"a":1,"b":3}
Answer
You can use the explode function:
scala> import org.apache.spark.sql.functions.explode
import org.apache.spark.sql.functions.explode
scala> val test = sqlContext.read.json(sc.parallelize(Seq("""{"a":1,"b":[2,3]}""")))
test: org.apache.spark.sql.DataFrame = [a: bigint, b: array<bigint>]
scala> test.printSchema
root
|-- a: long (nullable = true)
|-- b: array (nullable = true)
| |-- element: long (containsNull = true)
scala> val flattened = test.withColumn("b", explode($"b"))
flattened: org.apache.spark.sql.DataFrame = [a: bigint, b: bigint]
scala> flattened.printSchema
root
|-- a: long (nullable = true)
|-- b: long (nullable = true)
scala> flattened.show
+---+---+
| a| b|
+---+---+
| 1| 2|
| 1| 3|
+---+---+
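Conceptually, explode acts like a per-row flatMap: for each row it emits one output row per element of the array column, copying the other columns alongside. The sketch below illustrates that behavior with plain Scala collections (no Spark required); the Row case class and field names are assumptions chosen to mirror the JSON above, not part of any Spark API.

```scala
// Hypothetical Row shape mirroring {"a":1,"b":[2,3]} from the example above.
case class Row(a: Long, b: Seq[Long])

val rows = Seq(Row(1L, Seq(2L, 3L)))

// For each row, pair every element of the array column b with the scalar a,
// which is what explode does to a DataFrame row.
val flattened = rows.flatMap(r => r.b.map(elem => (r.a, elem)))

println(flattened) // List((1,2), (1,3))
```

The output matches the flattened DataFrame shown above: one (a, b) pair per array element.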
That concludes this article on flattening rows in Spark; hopefully the answer above helps.