This article describes how to split rows in a Spark DataFrame/DataSet, and the approach below may be a useful reference for anyone solving a similar problem.
Problem description
Suppose my data set is as follows:
Name | Subject | Y1   | Y2
A    | math    | 1998 | 2000
B    |         | 1996 | 1999
     | science | 2004 | 2005
I want to split the rows of this data set so that the Y2 column is eliminated, like this:
Name | Subject | Y1
A    | math    | 1998
A    | math    | 1999
A    | math    | 2000
B    |         | 1996
B    |         | 1997
B    |         | 1998
B    |         | 1999
     | science | 2004
     | science | 2005
Can someone suggest something here? I hope I have made my query clear. Thanks in advance.
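For readers who want to try the snippets below, the sample data above can be built with something like the following. This is a minimal PySpark sketch, not part of the original question; it assumes an existing SparkSession named spark and uses None for the blank Name/Subject cells.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Sample data from the question; None stands for the empty cells.
    # A DDL schema string keeps Y1/Y2 as integers.
    df = spark.createDataFrame(
        [("A", "math", 1998, 2000),
         ("B", None, 1996, 1999),
         (None, "science", 2004, 2005)],
        "Name string, Subject string, Y1 int, Y2 int")
    df.show()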
Recommended answer
I think you only need to create a udf that builds the range of years. Then you can use explode to create the necessary rows:
    import org.apache.spark.sql.functions
    import org.apache.spark.sql.functions.udf
    import spark.implicits._  // assumes a SparkSession named spark; enables the $"..." syntax

    // Expand each [Y1, Y2] pair into the inclusive list of years, then explode it into rows
    val createRange = udf { (yearFrom: Int, yearTo: Int) => (yearFrom to yearTo).toList }
    df.select($"Name", $"Subject", functions.explode(createRange($"Y1", $"Y2"))).show()
The Python version of this code would be something like:
    from pyspark.sql.functions import udf, explode, col
    from pyspark.sql.types import ArrayType, IntegerType

    # Return the inclusive list of years so it can be exploded into one row per year
    createRange = udf(lambda yearFrom, yearTo: list(range(yearFrom, yearTo + 1)),
                      ArrayType(IntegerType()))

    df.select(col("Name"), col("Subject"), explode(createRange(col("Y1"), col("Y2")))).show()
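One small follow-up: explode names the generated column col by default, so if the output should show the Y1 header as in the desired result above, the exploded column can be renamed with alias. This is a minor addition to the snippet above, not part of the original answer:

    # Rename the exploded column so it appears as Y1 in the output
    df.select(col("Name"), col("Subject"),
              explode(createRange(col("Y1"), col("Y2"))).alias("Y1")).show()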