Problem description
Assume we have a dataset 'people' which contains Id and Age as a 2×3 matrix:

    Id  = 1  2  3
    Age = 21 18 30
In SparkR I want to create a new dataset people2 which contains all Ids whose Age is greater than 18. In this case that is Id 1 and 3. In SparkR I would do this:
    people2 <- people$Age > 18
but it does not work. How would you create the new dataset?
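The reason it "does not work" is that the comparison by itself evaluates to a boolean column (a mask), not to a filtered data frame; the mask still has to be handed to a subsetting call such as subset() or filter(). The same distinction exists in pandas, shown here as an analogue sketch (the people/people2 names mirror the question, not any SparkR API):

```python
import pandas as pd

# Same data as the question: three Ids with their Ages.
people = pd.DataFrame({"Id": [1, 2, 3], "Age": [21, 18, 30]})

# The comparison alone yields a boolean mask, not a subset of rows --
# analogous to why `people$Age > 18` by itself does not filter in SparkR.
mask = people["Age"] > 18
print(mask.tolist())           # [True, False, True]

# Applying the mask (the job subset()/filter() does in SparkR)
# selects the matching rows.
people2 = people[mask]
print(people2["Id"].tolist())  # [1, 3]
```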
Recommended answer
For those who appreciate R's multitude of options to do any given task, you can also use the SparkR::subset() function:
    > people <- createDataFrame(sqlContext, data.frame(Id=1:3, Age=c(21, 18, 30)))
    > people2 <- subset(people, people$Age > 18, select = c(1,2))
    > head(people2)
      Id Age
    1  1  21
    2  3  30
To answer the additional detail in the comment:
    id <- 1:99
    age <- 99:1
    myRDF <- data.frame(id, age)
    mySparkDF <- createDataFrame(sqlContext, myRDF)
    newSparkDF <- subset(mySparkDF,
                         mySparkDF$id==3  | mySparkDF$id==32 |
                         mySparkDF$id==43 | mySparkDF$id==55,
                         select = 1:2)
    take(newSparkDF, 5)
      id age
    1  3 97
    2 32 68
    3 43 57
    4 55 45
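The chain of == comparisons joined with | grows awkward as the list of wanted ids gets longer; a membership test is tidier (recent SparkR versions support %in% on columns, but check your version). Here is the same pattern sketched in pandas as an analogue, rebuilding the 99-row example data from above:

```python
import pandas as pd

# Rebuild the example data: age runs opposite to id, so age == 100 - id.
df = pd.DataFrame({"id": range(1, 100), "age": range(99, 0, -1)})

# isin() replaces the chain of == comparisons joined with |.
wanted = [3, 32, 43, 55]
subset_df = df[df["id"].isin(wanted)]
print(subset_df.to_string(index=False))
# Rows: (3, 97), (32, 68), (43, 57), (55, 45) -- same as the output above.
```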