I have parquet files generated for over a year with a Version1 schema, and after a recent schema change the newer parquet files have extra columns in the Version2 schema.
So when I load parquet files from the old version and the new version together and try to filter on the changed columns, I get an exception.
I would like Spark to read the old and new files and fill in null values where a column is not present. Is there a workaround where Spark fills in nulls when the column is not found?
Answer: There are two approaches you can try.
1. You can use a map transform, although this is not recommended, e.g.:

   spark.read.parquet("mypath").map { e =>
     if (e.isNullAt(e.fieldIndex("field"))) null else e.getAs[String]("field")
   }
2. The best way is to use the mergeSchema option, e.g.:

   spark.read.option("mergeSchema", "true").parquet(xxx).as[MyClass]

   ref: Schema Merging
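As a minimal, self-contained sketch of the mergeSchema approach (the paths, column names, and case data here are made up for illustration): write one parquet directory with the old schema and one with the new schema, then read both with mergeSchema enabled. Rows from the old files come back with null in the new column, so you can filter on it without an exception.

```scala
import org.apache.spark.sql.SparkSession

object MergeSchemaDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("merge-schema-demo")
      .getOrCreate()
    import spark.implicits._

    // Version1 schema: only (id, name)
    Seq((1, "a"), (2, "b")).toDF("id", "name")
      .write.mode("overwrite").parquet("/tmp/events/v1")

    // Version2 schema: an extra "score" column
    Seq((3, "c", 0.5)).toDF("id", "name", "score")
      .write.mode("overwrite").parquet("/tmp/events/v2")

    // mergeSchema unions the per-file schemas; rows from v1
    // simply have null for the "score" column they never wrote.
    val df = spark.read
      .option("mergeSchema", "true")
      .parquet("/tmp/events/v1", "/tmp/events/v2")

    df.filter($"score".isNull).show() // the two Version1 rows
    spark.stop()
  }
}
```

Note that mergeSchema costs an extra pass over file footers, which is why it is off by default; if all your files share a known superset schema you can instead pass that schema explicitly with spark.read.schema(...) and skip the merge.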