How to bucketize a group of columns in PySpark?
Problem description
I am trying to bucketize the columns that contain the word "road" in a 5k-row dataset, and create a new dataframe.
I am not sure how to do that; here is what I have tried so far:
```python
from pyspark.ml.feature import Bucketizer

spike_cols = [col for col in df.columns if "road" in col]

for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")
    bucketedData = bucketizer.transform(df)
```

Recommended answer

You can modify df inside the loop (in your version each iteration transforms the original df, so only the last column's buckets survive):
```python
from pyspark.ml.feature import Bucketizer

for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")
    df = bucketizer.transform(df)
```

or use a Pipeline:
```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import Bucketizer

model = Pipeline(stages=[
    Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
               inputCol=x, outputCol=x + "bucket")
    for x in spike_cols
]).fit(df)

model.transform(df)
```
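To make the `splits` semantics concrete without needing a Spark session, here is a plain-Python sketch (my own illustration, not Spark code) of how `Bucketizer` assigns bucket indices: a value v lands in bucket i when splits[i] <= v < splits[i+1].

```python
from bisect import bisect_right

# Bucket boundaries matching the answer above:
# bucket 0 = (-inf, 10), bucket 1 = [10, 100), bucket 2 = [100, inf)
splits = [float("-inf"), 10, 100, float("inf")]

def bucket_index(value, splits):
    """Return the bucket index for `value`, mirroring Bucketizer's
    left-closed / right-open interval semantics."""
    return bisect_right(splits, value) - 1

print([bucket_index(v, splits) for v in [-5, 9.9, 10, 50, 100, 250]])
# → [0, 0, 1, 1, 2, 2]
```

This is why the answer uses `-float("inf")` and `float("inf")` as the outer splits: they guarantee every value falls into some bucket instead of raising an error for out-of-range inputs.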