"value $ is not a member of StringContext"

This article covers how to handle the error "value $ is not a member of StringContext" (am I missing a Scala plugin?). It should be a useful reference for anyone hitting the same problem.

Problem Description

I'm using Maven with the Scala archetype, and I'm getting this error:

"value $ is not a member of StringContext"

I already tried adding several things to the pom.xml, but nothing has worked so far...

My code:

import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

// To see less warnings
import org.apache.log4j._
Logger.getLogger("org").setLevel(Level.ERROR)

// Start a simple Spark Session
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().getOrCreate()

// Prepare training and test data.
val data = spark.read.option("header","true").option("inferSchema","true").format("csv").load("USA_Housing.csv")

// Check out the Data
data.printSchema()

// See an example of what the data looks like
// by printing out a Row
val colnames = data.columns
val firstrow = data.head(1)(0)
println("\n")
println("Example Data Row")
for(ind <- Range(1,colnames.length)){
  println(colnames(ind))
  println(firstrow(ind))
  println("\n")
}

////////////////////////////////////////////////////
//// Setting Up DataFrame for Machine Learning ////
//////////////////////////////////////////////////

// A few things we need to do before Spark can accept the data!
// It needs to be in the form of two columns
// ("label","features")

// This will allow us to join multiple feature columns
// into a single column of an array of feature values
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors

// Rename Price to label column for naming convention.
// Grab only numerical columns from the data
val df = data.select(data("Price").as("label"),$"Avg Area Income",$"Avg Area House Age",$"Avg Area Number of Rooms",$"Area Population")

// An assembler converts the input values to a vector
// A vector is what the ML algorithm reads to train a model

// Set the input columns from which we are supposed to read the values
// Set the name of the column where the vector will be stored
val assembler = new VectorAssembler().setInputCols(Array("Avg Area Income","Avg Area House Age","Avg Area Number of Rooms","Area Population")).setOutputCol("features")

// Use the assembler to transform our DataFrame to the two columns
val output = assembler.transform(df).select($"label",$"features")

// Create a Linear Regression Model object
val lr = new LinearRegression()

// Fit the model to the data
// Note: Later we will see why we should split
// the data first, but for now we will fit to all the data.
val lrModel = lr.fit(output)

// Print the coefficients and intercept for linear regression
println(s"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}")

// Summarize the model over the training set and print out some metrics!
// Explore this in the spark-shell for more methods to call
val trainingSummary = lrModel.summary
println(s"numIterations: ${trainingSummary.totalIterations}")
println(s"objectiveHistory: ${trainingSummary.objectiveHistory.toList}")
trainingSummary.residuals.show()
println(s"RMSE: ${trainingSummary.rootMeanSquaredError}")
println(s"MSE: ${trainingSummary.meanSquaredError}")
println(s"r2: ${trainingSummary.r2}")

and my pom.xml is this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>test</groupId>
  <artifactId>outrotest</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>${project.artifactId}</name>
  <description>My wonderfull scala app</description>
  <inceptionYear>2015</inceptionYear>
  <licenses>
    <license>
      <name>My License</name>
      <url>....</url>
      <distribution>repo</distribution>
    </license>
  </licenses>

  <properties>
    <maven.compiler.source>1.6</maven.compiler.source>
    <maven.compiler.target>1.6</maven.compiler.target>
    <encoding>UTF-8</encoding>
    <scala.version>2.11.5</scala.version>
    <scala.compat.version>2.11</scala.compat.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-mllib_2.11</artifactId>
      <version>2.0.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.0.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.0.2</version>
    </dependency>
    <dependency>
      <groupId>com.databricks</groupId>
      <artifactId>spark-csv_2.11</artifactId>
      <version>1.5.0</version>
    </dependency>

    <!-- Test -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.specs2</groupId>
      <artifactId>specs2-junit_${scala.compat.version}</artifactId>
      <version>2.4.16</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.specs2</groupId>
      <artifactId>specs2-core_${scala.compat.version}</artifactId>
      <version>2.4.16</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.scalatest</groupId>
      <artifactId>scalatest_${scala.compat.version}</artifactId>
      <version>2.2.4</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <!-- see http://davidb.github.com/scala-maven-plugin -->
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
            <configuration>
              <args>
                <!--<arg>-make:transitive</arg>-->
                <arg>-dependencyfile</arg>
                <arg>${project.build.directory}/.scala_dependencies</arg>
              </args>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.18.1</version>
        <configuration>
          <useFile>false</useFile>
          <disableXmlReport>true</disableXmlReport>
          <!-- If you have classpath issue like NoDefClassError,... -->
          <!-- useManifestOnlyJar>false</useManifestOnlyJar -->
          <includes>
            <include>**/*Test.*</include>
            <include>**/*Suite.*</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

I have no idea how to fix it. Does anybody have any ideas?

Recommended Answer

Just add this and it will work:

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._   // << add this
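This works because spark.implicits._ brings an implicit class into scope that adds the missing $ method to StringContext, so $"colName" produces a Column. A minimal sketch of that conversion, simplified from StringToColumn in org.apache.spark.sql.SQLImplicits (the wrapper object exists only so the sketch compiles on its own):

object StringToColumnSketch {
  import org.apache.spark.sql.ColumnName

  // Simplified version of what spark.implicits._ provides: the $
  // interpolator assembles the column name and wraps it in a ColumnName.
  implicit class StringToColumn(val sc: StringContext) {
    def $(args: Any*): ColumnName = new ColumnName(sc.s(args: _*))
  }
}

Note that the import has to come after spark is created, because the implicits are members of that SparkSession instance. If you would rather avoid the interpolator entirely, col(...) from org.apache.spark.sql.functions builds the same Column without importing spark.implicits._; the select from the question could then be written as:

import org.apache.spark.sql.functions.col

val df = data.select(
  data("Price").as("label"),
  col("Avg Area Income"),
  col("Avg Area House Age"),
  col("Avg Area Number of Rooms"),
  col("Area Population"))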
