I have some complex Oozie workflows to migrate from on-prem Hadoop to GCP Dataproc. Workflows consist of shell-scripts, Python scripts, Spark-Scala jobs, Sqoop jobs etc.
I have come across some potential solutions incorporating my workflow scheduling needs:
Please let me know which option would be most efficient in terms of performance, costing and migration complexities.
All 3 are reasonable options (though #2, Scheduler + Dataproc, is the most clunky). A few questions to consider: how often do your workflows run, how tolerant are you of idle VMs, how complex are your Oozie workflows, and how much time are you willing to invest in the migration?
Dataproc's Workflow Templates support branch/join but lack other Oozie features, such as error transitions on job failure, decision nodes, etc. If you use any of these, I would not even consider a direct migration to Workflow Templates; choose either #3 or the hybrid migration below.
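For reference, a Workflow Template expresses branch/join purely through `prerequisiteStepIds` between steps; there is no equivalent of Oozie's error transitions or `<decision>` nodes, so a failed step simply fails the whole workflow. A minimal sketch, in which all bucket paths and class names are hypothetical placeholders:

```yaml
# Dataproc Workflow Template sketch: "prep" fans out into two parallel
# branches, which join at "merge". Paths and class names are assumptions.
jobs:
  - stepId: prep
    sparkJob:
      mainClass: com.example.Prep
      jarFileUris: ["gs://my-bucket/jobs/etl.jar"]
  - stepId: branch-a
    prerequisiteStepIds: [prep]        # branch: runs after prep
    pysparkJob:
      mainPythonFileUri: gs://my-bucket/jobs/branch_a.py
  - stepId: branch-b
    prerequisiteStepIds: [prep]        # branch: runs in parallel with branch-a
    sparkJob:
      mainClass: com.example.BranchB
      jarFileUris: ["gs://my-bucket/jobs/etl.jar"]
  - stepId: merge
    prerequisiteStepIds: [branch-a, branch-b]   # join: waits for both branches
    sparkJob:
      mainClass: com.example.Merge
      jarFileUris: ["gs://my-bucket/jobs/etl.jar"]
```

Note there is no per-step "on failure go to X" field anywhere in the schema, which is exactly the gap relative to Oozie.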
A good place to start would be a hybrid migration (this assumes your clusters are sparsely used): keep your Oozie workflows, have Composer + Workflow Templates create a cluster with Oozie installed, use an init action to stage your Oozie XML files and job jars/artifacts, and add a single Pig `sh` job to the Workflow to trigger Oozie via its CLI.
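Sketched as a Workflow Template, this hybrid setup might look as follows; the init-action script, bucket paths, and Oozie properties file are assumptions, not a tested recipe:

```yaml
# Hybrid migration sketch: an ephemeral cluster gets Oozie installed and the
# workflow XML/artifacts staged by an init action; a single Pig "sh" job then
# kicks off the existing Oozie workflow via its CLI.
placement:
  managedCluster:
    clusterName: oozie-runner
    config:
      initializationActions:
        # Hypothetical script: installs Oozie and copies workflow.xml,
        # job.properties, and job jars from GCS onto the cluster.
        - executableFile: gs://my-bucket/init/install-oozie.sh
jobs:
  - stepId: trigger-oozie
    pigJob:
      queryList:
        queries:
          # Pig's "sh" command runs an arbitrary shell command on the cluster.
          - "sh oozie job -oozie http://localhost:11000/oozie -config /tmp/job.properties -run"
```

You could instantiate this with `gcloud dataproc workflow-templates instantiate-from-file --file template.yaml`, and have Composer run that command on a schedule. One caveat: `oozie job -run` returns as soon as the job is submitted, so to keep the ephemeral cluster alive until the Oozie workflow finishes you would need to poll its status in the same shell command rather than exit immediately.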