I'm currently trying to join two DataFrames together but retain the same order in one of the DataFrames.
From Which operations preserve RDD order?, it seems (correct me if this is inaccurate, as I'm new to Spark) that joins do not preserve order: rows are joined and "arrive" at the final DataFrame in an unspecified order, because the data is spread across different partitions.
How could one perform a join of two DataFrames while preserving the order of one table?
For example

+------+------+
| col1 | col2 |
+------+------+
| 0    | a    |
| 1    | b    |
+------+------+

joined with

+------+------+
| col2 | col3 |
+------+------+
| b    | x    |
| a    | y    |
+------+------+

on col2 should give

+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| 0    | a    | y    |
| 1    | b    | x    |
+------+------+------+
I've heard some things about using coalesce or repartition, but I'm not sure. Any suggestions/methods/insights are appreciated.
Edit: would this be analogous to having one reducer in MapReduce? If so, how would that look like in Spark?
Answer

It can't. You can add monotonically_increasing_id and reorder the data after the join.