Dag

Updated: 2024-10-23 11:22:52
This article discusses how to handle the java.lang.OutOfMemoryError: unable to create new native thread error thrown from dag-scheduler-event-loop.

Problem description


I get the following error from the Spark driver program after it has been running for 5-6 hours. I am using Ubuntu 16.04 LTS and OpenJDK 8.

Exception in thread "ForkJoinPool-50-worker-11" Exception in thread "dag-scheduler-event-loop" Exception in thread "ForkJoinPool-50-worker-13" java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
	at scala.concurrent.forkjoin.ForkJoinPool.deregisterWorker(ForkJoinPool.java:1795)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:117)
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
	at scala.concurrent.forkjoin.ForkJoinPool.signalWork(ForkJoinPool.java:1966)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.push(ForkJoinPool.java:1072)
	at scala.concurrent.forkjoin.ForkJoinTask.fork(ForkJoinTask.java:654)
	at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.start(Tasks.scala:377)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.start(Tasks.scala:443)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$$anonfun$spawnSubtasks$1.apply(Tasks.scala:189)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$$anonfun$spawnSubtasks$1.apply(Tasks.scala:186)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.spawnSubtasks(Tasks.scala:186)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.spawnSubtasks(Tasks.scala:443)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.internal(Tasks.scala:157)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.internal(Tasks.scala:443)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:149)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
	at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinTask.doJoin(ForkJoinTask.java:341)
	at scala.concurrent.forkjoin.ForkJoinTask.join(ForkJoinTask.java:673)
	at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.sync(Tasks.scala:378)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:443)
	at scala.collection.parallel.ForkJoinTasks$class.executeAndWaitResult(Tasks.scala:426)
	at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:56)
	at scala.collection.parallel.ParIterableLike$ResultMapping.leaf(ParIterableLike.scala:958)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
	at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
	at scala.collection.parallel.ParIterableLike$ResultMapping.tryLeaf(ParIterableLike.scala:953)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
	at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
	at scala.concurrent.forkjoin.ForkJoinPool.deregisterWorker(ForkJoinPool.java:1795)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:117)


This error is produced by the Spark driver program, which runs in client mode by default. Some people suggest just increasing the heap size by passing the --driver-memory 3g flag. However, the message "unable to create new native thread" really means that the JVM asked the OS to create a new thread and the OS could no longer allocate one. The number of threads a JVM can create by requesting the OS is platform dependent, but it is typically around 32K threads on a 64-bit OS and JVM.
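A quick way to confirm that the driver really is leaking native threads is to watch its thread count at the OS level. A minimal sketch, assuming a Linux /proc filesystem; it inspects the current shell ($$) as a stand-in for the Spark driver's pid:

```shell
# Count the native threads of a process via /proc (Linux).
# $$ (this shell) stands in for the Spark driver's pid.
pid=$$
threads=$(ls /proc/$pid/task | wc -l)
echo "pid $pid has $threads native thread(s)"
```

Sampling this against the driver's pid every few minutes would show the count climbing steadily if a thread pool is leaking.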


When I ran ulimit -a, I got the following:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 120242
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 120242
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
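The two limits that matter most here are max user processes (-u), which on Linux counts threads, and stack size (-s). A sketch of how one might inspect them per session; the limits.conf lines in the comments are an assumption (the user name sparkuser is hypothetical), shown only to illustrate how the limit is usually raised persistently:

```shell
# Inspect the soft and hard per-user process limits (native threads count against -u).
ulimit -Su   # soft limit: the one the JVM actually hits
ulimit -Hu   # hard limit: the ceiling for the soft limit
# To raise it persistently, one would typically add lines to /etc/security/limits.conf:
#   sparkuser  soft  nproc  32768
#   sparkuser  hard  nproc  32768
```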


cat /proc/sys/kernel/pid_max

32768


cat /proc/sys/kernel/threads-max

240484
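Both kernel values above also bound thread creation: every native thread consumes a pid, so on this box pid_max (32768) is actually the tighter cap, not threads-max (240484). A sketch for reading them, assuming Linux; the sysctl value in the comment is an illustrative assumption and requires root:

```shell
# Each native thread consumes a pid, so pid_max caps threads system-wide.
cat /proc/sys/kernel/pid_max       # 32768 here: the tighter of the two limits
cat /proc/sys/kernel/threads-max   # 240484 here: system-wide thread cap
# With root, pid_max could be raised, e.g.:
#   sysctl -w kernel.pid_max=4194304
```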


"unable to create new native thread" clearly means it has nothing to do with heap. so I believe this is more of a OS issue.

Recommended answer


There seems to be a bug in the usage of ForkJoinPool in Spark 2.0.0 that creates far too many threads, specifically in UnionRDD.scala, which is used when you call a window operation on a DStream.


See issues.apache.org/jira/browse/SPARK-17396. Following that ticket, I upgraded to 2.0.1 and it fixed the issue.

Published: 2023-11-25 18:46:54
Link: https://www.elefans.com/category/jswz/34/1630912.html