I can load data from the Hive server in the same cluster where Apache Spark is installed. But how can I load data into a DataFrame from a remote Hive server? Is the Hive JDBC connector the only option for doing so?
Any suggestions on how I can do this?
Solution: You can use org.apache.spark.sql.hive.HiveContext to run SQL queries over Hive tables.
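A minimal sketch of that approach, assuming the Spark 1.x API (where HiveContext lives at that package path); the database and table names are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Build a HiveContext on top of an ordinary SparkContext.
val conf = new SparkConf().setAppName("HiveQueryExample")
val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)

// Run a SQL query against a Hive table; the result comes back as a DataFrame.
val df = hiveContext.sql("SELECT * FROM mydb.my_table LIMIT 10")
df.show()
```

In Spark 2.x and later, the equivalent is a SparkSession created with enableHiveSupport().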
Alternatively, you can point Spark directly at the underlying HDFS directory where the data is actually stored. This can be more performant, since no SQL query has to be parsed and no schema has to be applied over the files.
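As a sketch of that alternative, assuming the table's files sit under the usual Hive warehouse path and are stored as Parquet (both the path and the format are assumptions; adjust them to match how your table is actually stored):

```scala
// Read the table's backing files straight from HDFS, bypassing the Hive metastore.
// The namenode address, warehouse path, and Parquet format are placeholders.
val df = hiveContext.read
  .parquet("hdfs://namenode:8020/user/hive/warehouse/mydb.db/my_table")
df.show()
```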
If the cluster is an external one, you'll need to set hive.metastore.uris so Spark can reach the remote Hive metastore.
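A hedged sketch of pointing Spark at a remote metastore; the hostname is a placeholder, and 9083 is the conventional Thrift metastore port:

```scala
// Tell the HiveContext where the remote Hive metastore's Thrift service lives.
// This property can also be set in hive-site.xml on Spark's classpath.
hiveContext.setConf("hive.metastore.uris", "thrift://remote-metastore-host:9083")

// Subsequent queries resolve table metadata through the remote metastore.
val counts = hiveContext.sql("SELECT COUNT(*) FROM mydb.my_table")
counts.show()
```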