Problem description
I would like to perform a window function (concretely, a moving average) over all columns of a dataframe.
I can do it like this:

from pyspark.sql import SparkSession, functions as func

df = ...
df.select([func.avg(df[col]).over(windowSpec).alias(col) for col in df.columns])

but I'm afraid this isn't very efficient. Is there a better way to do it?
Answer

An alternative that may be better is to create a new df where you group by the columns in the window function and apply the average on the remaining columns, then do a left join. For large data frames where the df is being spilled to disk (or cannot be persisted in memory), this will definitely be more optimal.