How can I read the following JSON structure into a Spark DataFrame using PySpark?
My JSON structure:

```json
{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}
```

I have tried:

```python
df = spark.read.json('simple.json')
```

I want the output with a, b, c as columns and the values as the respective rows.
Thanks.
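For reference, reading the file as-is only produces a single `results` column of array type, which is why the rows are not flat; a minimal check, assuming the JSON above sits on one line in simple.json:

```python
# Assumption: simple.json holds the whole JSON document on a single line.
df = spark.read.json('simple.json')
df.printSchema()
# root
#  |-- results: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- a: long (nullable = true)
#  |    |    |-- b: long (nullable = true)
#  |    |    |-- c: string (nullable = true)
```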
Best answer
JSON string as a variable

If you have the JSON string as a variable, then you can do:
```python
from pyspark.sql import functions as F

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'

# Turn the single JSON string into an RDD and let Spark parse it.
rddjson = sc.parallelize([simple_json])
df = sqlContext.read.json(rddjson)

# Explode the results array into one row per element, then flatten the struct into columns.
df.select(F.explode(df.results).alias('results')).select('results.*').show(truncate=False)
```

which will give you:

```
+---+---+----+
|a  |b  |c   |
+---+---+----+
|1  |2  |name|
|2  |5  |foo |
+---+---+----+
```
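On Spark 2.x+ the same flow works through a SparkSession instead of sc and sqlContext; a minimal sketch, assuming an existing session named spark:

```python
# Sketch (assumption): Spark 2.x+ with an existing SparkSession named `spark`.
from pyspark.sql import functions as F

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'
rdd = spark.sparkContext.parallelize([simple_json])
df = spark.read.json(rdd)
df.select(F.explode(df.results).alias('results')).select('results.*').show(truncate=False)
```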
JSON strings as separate lines in a file (sparkContext and sqlContext)
If you have the JSON strings as separate lines in a file, then you can read the file with sparkContext into an RDD[String] as above, and the rest of the process is the same:
```python
# Read the file line by line into an RDD of JSON strings, then parse it as before.
rddjson = sc.textFile('/home/anahcolus/IdeaProjects/pythonSpark/test.csv')
df = sqlContext.read.json(rddjson)
df.select(F.explode(df['results']).alias('results')).select('results.*').show(truncate=False)
```
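If every line of the file is a complete JSON document, the reader can also be pointed at the path directly, skipping the textFile step; a minimal sketch, assuming the same test.csv layout:

```python
# Sketch (assumption): each line of test.csv is a self-contained JSON object.
df = sqlContext.read.json('/home/anahcolus/IdeaProjects/pythonSpark/test.csv')
df.select(F.explode(df['results']).alias('results')).select('results.*').show(truncate=False)
```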
JSON strings as separate lines in a file (sqlContext only)

If you have the JSON strings as separate lines in a file, then you can also use sqlContext alone. But the process is more involved, as you have to define the schema for it yourself:
```python
from pyspark.sql import functions as F
from pyspark.sql import types as T

# Read each line of the file as plain text into a single `value` column.
df = sqlContext.read.text('path to the file')

# Define the schema of the JSON document explicitly.
schema = T.StructType([
    T.StructField('results', T.ArrayType(T.StructType([
        T.StructField('a', T.IntegerType()),
        T.StructField('b', T.IntegerType()),
        T.StructField('c', T.StringType())
    ])))
])

# Parse every line with the schema, then explode and flatten as before.
df = df.select(F.from_json(df.value, schema).alias('results'))
df.select(F.explode(df['results.results']).alias('results')).select('results.*').show(truncate=False)
```

which should give you the same result as above.
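As an aside, if the whole JSON document spans multiple lines in a single file (rather than one document per line), Spark 2.2+ can also read it directly with the multiLine option; a minimal sketch, assuming a SparkSession named spark:

```python
# Sketch (assumption): Spark 2.2+ and a file holding one multi-line JSON document.
from pyspark.sql import functions as F

df = spark.read.option('multiLine', True).json('simple.json')
df.select(F.explode(df['results']).alias('results')).select('results.*').show(truncate=False)
```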
I hope the answer is helpful.