Read JSON file as PySpark DataFrame using PySpark?

How can I read the following JSON structure into a Spark DataFrame using PySpark?

My JSON structure:

{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}

I have tried:

df = spark.read.json('simple.json')
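
This reads the file, but results comes back as a single array-of-structs column rather than a, b and c. A sketch of what printSchema shows (assuming Spark 2.x with a SparkSession named spark, and the whole object on one line of the file):

df = spark.read.json('simple.json')
df.printSchema()
# root
#  |-- results: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- a: long (nullable = true)
#  |    |    |-- b: long (nullable = true)
#  |    |    |-- c: string (nullable = true)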

I want a, b and c as columns, with their values as the respective rows.

Thanks.

Accepted answer

JSON strings as variables

If you have the JSON string as a variable, then you can do:

from pyspark.sql import functions as F

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'
rddjson = sc.parallelize([simple_json])
df = sqlContext.read.json(rddjson)
df.select(F.explode(df.results).alias('results')).select('results.*').show(truncate=False)

which will give you

+---+---+----+
|a  |b  |c   |
+---+---+----+
|1  |2  |name|
|2  |5  |foo |
+---+---+----+
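
On PySpark 2.0+ the sc/sqlContext pair can be replaced by a SparkSession; a minimal sketch of the same steps under that assumption (session named spark):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'
# read.json also accepts an RDD of JSON strings, one document per element
df = spark.read.json(spark.sparkContext.parallelize([simple_json]))
df.select(F.explode('results').alias('results')).select('results.*').show(truncate=False)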

JSON strings as separate lines in a file (sparkContext and sqlContext)

If you have the JSON strings as separate lines in a file, then you can read them with sparkContext into an RDD[String] as above, and the rest of the process is the same:

rddjson = sc.textFile('/home/anahcolus/IdeaProjects/pythonSpark/test.csv')
df = sqlContext.read.json(rddjson)
df.select(F.explode(df['results']).alias('results')).select('results.*').show(truncate=False)
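
If instead one JSON document spans several lines of the file (as pretty-printed JSON does), Spark 2.2+ can read it directly with the multiLine option; a sketch assuming that version:

# multiLine=True parses the whole file as a single JSON document
# instead of expecting one document per line (Spark 2.2+)
df = sqlContext.read.json('simple.json', multiLine=True)
df.select(F.explode(df['results']).alias('results')).select('results.*').show(truncate=False)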

JSON strings as separate lines in a file (sqlContext only)

If you have the JSON strings as separate lines in a file, you can also use sqlContext alone. But the process is more involved, as you have to define the schema yourself:

from pyspark.sql import functions as F
from pyspark.sql import types as T

df = sqlContext.read.text('path to the file')
schema = T.StructType([
    T.StructField('results', T.ArrayType(T.StructType([
        T.StructField('a', T.IntegerType()),
        T.StructField('b', T.IntegerType()),
        T.StructField('c', T.StringType())
    ])))
])
df = df.select(F.from_json(df.value, schema).alias('results'))
df.select(F.explode(df['results.results']).alias('results')).select('results.*').show(truncate=False)

which should give you the same result as above.
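
Since Spark 2.3, from_json also accepts the schema as a DDL-formatted string, which is less verbose than building the StructType by hand; a sketch under that assumption:

# DDL-formatted schema string (Spark 2.3+)
ddl = 'results array<struct<a:int,b:int,c:string>>'
df = sqlContext.read.text('path to the file')
df = df.select(F.from_json(df.value, ddl).alias('results'))
df.select(F.explode(df['results.results']).alias('results')).select('results.*').show(truncate=False)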

I hope the answer is helpful.
