My use case involves fetching the job IDs of all streaming Dataflow jobs present in my project and cancelling them, then updating the sources for my Dataflow jobs and re-running them.
I am trying to achieve this using Python. I have not come across any useful documentation so far. As a workaround, I thought of using Python's subprocess library to execute the gcloud commands, but again I was not able to store the result and use it.
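For reference, this is roughly the subprocess workaround I had in mind. It is only a sketch: it assumes the gcloud CLI is installed and authenticated, uses a placeholder region, and assumes each listed entry exposes the job's 'id' field.

import json
import subprocess

# List active Dataflow jobs as JSON and capture the CLI output.
result = subprocess.run(
    ['gcloud', 'dataflow', 'jobs', 'list',
     '--status=active', '--region=us-central1', '--format=json'],
    capture_output=True, text=True, check=True)

jobs = json.loads(result.stdout)
job_ids = [job['id'] for job in jobs]  # assumes an 'id' field in each entry
print(job_ids)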
Can somebody please guide me as to what the best way of doing this is?
Answer:
In addition to using the REST API directly, you can use the generated Python bindings for the API in google-api-python-client. For simple calls it doesn't add that much value, but when passing in many parameters it can be easier to work with than a raw HTTP library.
With that library, the jobs list call would look like
from googleapiclient.discovery import build
import google.auth

credentials, project_id = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
df_service = build('dataflow', 'v1b3', credentials=credentials)
response = df_service.projects().locations().jobs().list(
    projectId=project_id,
    location='<region>').execute()
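The response is a plain dict, so the job IDs can be pulled out of it directly. To cover the cancel part of the use case, here is a hedged sketch along the same lines: it reuses df_service and project_id from above, keeps '<region>' as a placeholder, and requests cancellation by setting requestedState through jobs.update, which is how the v1b3 REST API exposes cancellation.

# Request cancellation of every streaming job returned by the list call above.
for job in response.get('jobs', []):
    if job.get('type') == 'JOB_TYPE_STREAMING':
        df_service.projects().locations().jobs().update(
            projectId=project_id,
            location='<region>',
            jobId=job['id'],
            # 'JOB_STATE_DRAINED' is the gentler alternative for streaming jobs.
            body={'requestedState': 'JOB_STATE_CANCELLED'},
        ).execute()
        print('Requested cancellation of job', job['id'])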