PySpark sqlContext JSON: query all the values of an array

Updated: 2024-10-25 16:27:04

Problem description

I currently have a JSON file that I am trying to query with sqlContext.sql(); it looks something like this:

{
  "sample": {
    "persons": [
      { "id": "123" },
      { "id": "456" }
    ]
  }
}

If I just wanted the first value, I would type:

sqlContext.sql("SELECT sample.persons[0] FROM test")

But I want all the values of "persons" without having to write a loop. Loops just consume too much processing power, and given the size of these files, that would be impractical.

I thought I would be able to put a range inside the [] brackets, but I can't find any syntax for doing that.

Answer

If your schema looks like this:

root
 |-- sample: struct (nullable = true)
 |    |-- persons: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- id: string (nullable = true)

and you want to access the individual structs in the persons array, all you have to do is explode it:

from pyspark.sql.functions import explode

df.select(explode("sample.persons").alias("person")).select("person.id")
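To build intuition for what that pipeline does, here is a plain-Python sketch of the same two steps on the sample record from the question (illustration only; the real work happens inside Spark):

```python
# One row of the DataFrame, as a nested dict mirroring the JSON above.
row = {"sample": {"persons": [{"id": "123"}, {"id": "456"}]}}

# explode("sample.persons") emits one output row per array element...
exploded = [person for person in row["sample"]["persons"]]

# ...and select("person.id") then projects the struct field from each row.
ids = [person["id"] for person in exploded]
print(ids)  # -> ['123', '456']
```

The result is one row per person, with the id pulled out of each struct, so no explicit loop over array indices is ever written in the Spark job itself.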

See also: Querying Spark SQL DataFrame with complex types
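As an aside, Spark SQL can also project a field across an entire array of structs in one expression, without explode: extracting a field from an array-of-structs column yields an array of that field's values, one per element. A hedged sketch against the table name used in the question:

```sql
-- Returns one column "ids" per row, containing an array of every
-- person's id (e.g. ['123', '456']) -- no index and no loop needed.
SELECT sample.persons.id AS ids FROM test
```

Use this form when you want the values kept together as one array per row; use explode when you want one output row per array element.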

Published: 2023-10-25 00:06:27
Link: https://www.elefans.com/category/jswz/34/1525378.html