I have a Spark application that finishes without error, but once it's done, has saved all of its outputs, and the process terminates, the Spark standalone cluster master process becomes a CPU hog, using 16 CPUs full-time for hours, and the web UI becomes unresponsive. I have no idea what it could be doing. Is there some complicated clean-up step?
More details:
I've got a Spark standalone cluster (27 workers/nodes) that I've been successfully submitting jobs to for a while. I recently scaled up the size of my applications; the largest now takes 3.5 hours using 100 cores over 27 workers, and each worker does dozens of GB of shuffle read/write over the course of the job. Otherwise, the application is no different from the smaller jobs that have run successfully before.
This is a known issue with Spark's standalone cluster, and is caused by the massive event log created by large applications. You can read more at the issue tracking link below.
https://issues.apache.org/jira/browse/SPARK-12299
At the current time, the best work-around is to disable event logging for large jobs.
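Event logging is controlled by the standard `spark.eventLog.enabled` property, which can be overridden per job at submit time. A minimal sketch (the master host, application class, and jar name below are placeholders, not from the original post):

```shell
# Disable the event log for this one submission only; the cluster-wide
# default in conf/spark-defaults.conf is left untouched.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.eventLog.enabled=false \
  --class com.example.MyApp \
  my-app.jar
```

To disable it for every job instead, set `spark.eventLog.enabled false` in `conf/spark-defaults.conf` on the machine you submit from. The trade-off is that the history server / completed-application UI for those jobs is lost, since it is rebuilt from the event log.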