Sum of case when in pyspark
I am trying to convert an HQL script into PySpark. I am struggling with how to compute the sum of CASE WHEN statements in an aggregation after a GROUP BY clause, e.g.:

```python
dataframe1 = dataframe0.groupby(col0).agg(
    SUM(f.when((col1 == 'ABC' | col2 == 'XYZ'), 1).otherwise(0))
)
```

Is this possible in PySpark? I am getting an error when executing such a statement. Thanks.
Accepted answer
You can use withColumn to create a column holding the values you want summed, then aggregate on that column. For example:

```python
from pyspark.sql import functions as F, types as T

schema = T.StructType([
    T.StructField('key', T.IntegerType(), True),
    T.StructField('col1', T.StringType(), True),
    T.StructField('col2', T.StringType(), True),
])
data = [
    (1, 'ABC', 'DEF'),
    (1, 'DEF', 'XYZ'),
    (1, 'DEF', 'GHI'),
]
# Assumes an existing SparkContext `sc` and SQLContext `sqlContext`
rdd = sc.parallelize(data)
df = sqlContext.createDataFrame(rdd, schema)

# Flag each row matching the condition with 1, everything else with 0,
# then sum the flags per key
result = df.withColumn('value', F.when((df.col1 == 'ABC') | (df.col2 == 'XYZ'), 1).otherwise(0)) \
    .groupBy('key') \
    .agg(F.sum('value').alias('sum'))
result.show(100, False)
```

Which prints out this result:

```
+---+---+
|key|sum|
+---+---+
|1  |2  |
+---+---+
```