This article explains what controls the fixed task number seen in Spark SQL jobs, and may serve as a reference for anyone hitting the same issue.
Problem description
I keep seeing Apache Spark schedule a series of stages with a fixed 200 tasks involved. Since this keeps happening across a number of different jobs, I am guessing it is somehow related to one of the Spark configuration settings. Any suggestion as to what that configuration might be?
Answer

200 is the default number of partitions used during shuffles, and it is controlled by spark.sql.shuffle.partitions. Its value can be set at runtime using SQLContext.setConf:
```scala
sqlContext.setConf("spark.sql.shuffle.partitions", "42")
```

or with RuntimeConfig.set:
```scala
spark.conf.set("spark.sql.shuffle.partitions", 42)
```
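To see the effect end to end, here is a minimal, self-contained Scala sketch (the object name and the local[*] master are illustrative, not from the original answer). It assumes Spark 3.x with Adaptive Query Execution disabled, since AQE can coalesce shuffle partitions at runtime and mask the static setting:

```scala
import org.apache.spark.sql.SparkSession

object ShufflePartitionsDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("ShufflePartitionsDemo")
      // AQE (enabled by default in Spark 3.x) may coalesce shuffle
      // partitions at runtime; disable it so the setting is visible as-is.
      .config("spark.sql.adaptive.enabled", "false")
      .getOrCreate()

    val df = spark.range(1000).toDF("id")

    // With the default setting, any shuffle (groupBy, join, ...) produces
    // 200 partitions, hence the fixed 200 tasks in those stages.
    println(df.groupBy("id").count().rdd.getNumPartitions) // 200

    // Change the setting at runtime; subsequent shuffles pick up the new value.
    spark.conf.set("spark.sql.shuffle.partitions", 42)
    println(df.groupBy("id").count().rdd.getNumPartitions) // 42

    spark.stop()
  }
}
```

Note that on Spark 3.x with AQE left enabled, the number of post-shuffle partitions you observe may be smaller than the configured value, because Spark coalesces small shuffle partitions automatically.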