scala on stackoverflow

most recent 30 from stackoverflow.com 2016-07-24T02:56:33Z

Basic scala - second degree polynomial

23 July 2016 - 3:11am

Could someone help me with the following question? I'm relatively new to Scala, and my lecturer didn't supply us with any exam paper answers, so I need to know a correct way to approach certain questions.

Using the Scala syntax, write a function for the second degree polynomial. Second degree polynomial is of form:

y=ax^2+bx+c

Your function should be named second and it should take 4 parameters: a, b, c and x. Parameters should accept real numbers and a real number should be returned. Also, write an anonymous version of this function using the Scala syntax.
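A minimal sketch of what is being asked for, assuming Double for "real numbers" (the names below are my own choices):

def second(a: Double, b: Double, c: Double, x: Double): Double =
  a * x * x + b * x + c

// Anonymous (function-literal) version of the same polynomial:
val secondAnon: (Double, Double, Double, Double) => Double =
  (a, b, c, x) => a * x * x + b * x + c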

Is it possible to specify the scala compiler used by sbt?

23 July 2016 - 1:59am

I modified the source code of the Scala compiler and built it. Now I want to test this compiler. However, many existing Scala projects use sbt as their build tool, so I wonder whether it is possible to replace the official Scala compiler used by sbt with the compiler I built myself.
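A hedged sketch of one way to point sbt at a locally built compiler: sbt 0.13's scalaHome setting makes it use the jars found in a local Scala distribution instead of the artifacts resolved from scalaVersion. The path below is hypothetical; it should contain lib/scala-compiler.jar and lib/scala-library.jar. Publishing the modified compiler locally under a custom version string and setting scalaVersion to that is another common route.

// build.sbt (sketch): use a locally built Scala distribution; the path is hypothetical
scalaVersion := "2.11.8"
scalaHome := Some(file("/home/me/scala/build/pack"))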

elegant global override for toString in scala

23 July 2016 - 12:50am

I frequently find myself pinning the console output of objects into test code as the expected output of unit tests. This forces me to manually quote any string literals, because Scala prints them to the console without quotes; other than that, a printed object is a perfect Scala expression.

With RTL languages this is a big annoyance, as Eclipse sucks at letting you navigate within RTL text when numbers are involved in the same text.

To the point, then: how can I elegantly override toString behavior in Scala (to include quotes around strings) rather than overriding each and every class of interest that contains strings? What would be the most elegant way?
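A hedged sketch of one workaround rather than a true global toString override: a small recursive printer that quotes strings and otherwise reproduces the case-class style of output, so no individual class has to change. The Example class below is only for illustration.

def show(value: Any): String = value match {
  case s: String  => "\"" + s + "\""   // quote string literals
  case p: Product => p.productPrefix + p.productIterator.map(show).mkString("(", ", ", ")")
  case other      => String.valueOf(other)
}

case class Example(name: String, id: Int)

// show(Example("dana", 42)) returns: Example("dana", 42)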

How to add dynamic cols to a data frame in spark in scala [on hold]

22 July 2016 - 9:46pm

I'm using Scala and Spark. I have a DataFrame and I want to add new columns to it dynamically, that is, to create a column by merging data from existing columns. Please tell me how to do it.
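A hedged sketch of the usual approach, with hypothetical column names and an existing DataFrame df assumed: withColumn plus functions such as concat derives a new column from existing ones.

import org.apache.spark.sql.functions.{concat, lit}

// df is an existing DataFrame with firstName and lastName columns (hypothetical)
val merged = df.withColumn("fullName", concat(df("firstName"), lit(" "), df("lastName")))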

Meaning of the Singleton word in Scala

22 July 2016 - 8:32pm

I have an understanding of what singleton objects are, but perusing a library I came across something that confused me: mixing in Singleton

trait Foo[A <: Bar with Singleton]

I can't seem to find info on what this means. Is A a subtype of Bar with Singleton mixed in? What does mixing in Singleton provide?
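A hedged sketch of how the bound reads: A must be both a subtype of Bar and a singleton type (the type of an object, or some x.type), not an ordinary class type. Bar and MyBar below are made up for illustration.

trait Bar
object MyBar extends Bar

trait Foo[A <: Bar with Singleton]

val ok = new Foo[MyBar.type] {}   // compiles: MyBar.type is a singleton subtype of Bar
// new Foo[Bar] {}                // would not compile: Bar itself is not a singleton type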

Can not define a Map in Intellij scala worksheet

22 July 2016 - 8:31pm

I was trying to create a Map in an IntelliJ Scala worksheet but I get this error: EmptyScope.enter

The code I tried and failed is very simple:

val a = Map("a" -> 1)

Another error message was Error:Error:uncaught exception during compilation: scala.reflect.internal.FatalError

I am using version 15.0.6

Please help me. Thanks

How can I memoize a whitebox scala macro?

22 July 2016 - 5:40pm

My general question is: how can I memoize the result of a whitebox scala macro, but still keep the more specific type than the declared type? I want to avoid recreating the return value during runtime every time the function is accessed.

I have a trait Persistable[T] that I use to extend the companion object of a case class, and that trait uses a def macro to create an anonymous instance of a class Properties[T] that has fields based on the original fields of the case class.

trait Persistable[T] { def properties: Properties[T] = macro myImpl }

and the actual properties type of a Persistable[User] object might be something like: Properties[User]{val id: Property[Int,User]; val username: Property[String,User]}

Thanks to whitebox macros the properties field has the more specific type of the anonymous class, but I don't want to recreate the instance every time it's accessed. I considered adding a private var _properties but then the type ends up as Properties[T] instead of the type of the anonymous class. I tried running typeCheck on the tree, but the anonymous class it returns doesn't actually exist during runtime.

Mongo casbah: cannot resolve "++"

22 July 2016 - 5:18pm

Casbah version: 2.8.0

Following example here: http://api.mongodb.com/scala/casbah/2.0/tutorial.html#combining-multiple-dbobjects

I'm using the following import statements.

import com.mongodb.casbah.AggregationOutput
import com.mongodb.casbah.Imports._
import com.mongodb.casbah.TypeImports._
import com.mongodb.casbah.commons.{MongoDBList, MongoDBObject}

And in the code below, ++ gets a "Cannot resolve symbol ++" error.

val basic = MongoDBObject(
  "id" -> "123",
  "project" -> "pp123"
)
val createdTime = MongoDBObject(
  "createdTime" -> MongoDBObject("$exists" -> false)
)
val query = basic ++ createdTime

I tried Googling but didn't find much, and the official documentation didn't help either...

I guess I'm just missing an import statement for ++, but I don't know which one to import.

How can I perform session based logging in Play Framework

22 July 2016 - 3:36pm

We are currently using the Play Framework with its standard logging mechanism. We have implemented an implicit context to pass the username and session id to all service methods, and we want to implement logging so that it is session based. This requires implementing our own logger, which works for our own log statements, but how do we do the same for the framework's basic exception handling and the logs it produces? Maybe there is a better way to capture this than with implicits, or maybe we can override the exception-handling logging. Essentially, we want as many log messages as possible to be associated with the session.
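A hedged sketch of one common approach, not Play's own API: push the session id into SLF4J's MDC so that every log line emitted while the request is being handled, including framework error logs, can include it via %X{sessionId} in the logback pattern. The object and method names below are hypothetical.

import org.slf4j.MDC

object SessionLogging {
  // Run a block with the session attached to the logging context of the current thread.
  def withSessionContext[A](sessionId: String, username: String)(block: => A): A = {
    MDC.put("sessionId", sessionId)
    MDC.put("username", username)
    try block
    finally MDC.clear()
  }
}

With Play's asynchronous execution, the MDC has to be propagated across thread switches, which usually means wrapping the ExecutionContext so it copies the MDC map onto the worker thread.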

Reading JSON object with JSONArray inside to RDD using Scala and without using Dataframe

22 July 2016 - 2:51pm

I was able to read a JSON object into an RDD using the following code, without using a DataFrame. Here is my JSON object:

{"first":"John","last":"Smith","address":{"line1":"1 main street","city":"San Francisco","state":"CA","zip":"94101"}}

Here is the code for reading it to RDD:

package com.spnotes.spark

import com.fasterxml.jackson.annotation.JsonProperty
import com.fasterxml.jackson.core.JsonParseException
import com.fasterxml.jackson.databind.ObjectMapper
import com.typesafe.scalalogging.Logger
import org.apache.spark.{SparkContext, SparkConf}
import org.slf4j.LoggerFactory

import scala.collection.mutable.ArrayBuffer

class Person {
  @JsonProperty var first: String = null
  @JsonProperty var last: String = null
  @JsonProperty var address: Address = null
  override def toString = s"Person(first=$first, last=$last, address=$address)"
}

class Address {
  @JsonProperty var line1: String = null
  @JsonProperty var line2: String = null
  @JsonProperty var city: String = null
  @JsonProperty var state: String = null
  @JsonProperty var zip: String = null
  override def toString = s"Address(line1=$line1, line2=$line2, city=$city, state=$state, zip=$zip)"
}

object JSONFileReaderWriter {
  // val logger = Logger(LoggerFactory.getLogger("JSONFileReaderWriter"))
  val mapper = new ObjectMapper()

  def main(argv: Array[String]): Unit = {
    if (argv.length != 2) {
      println("Please provide 2 parameters <inputfile> <outputfile>")
      System.exit(1)
    }
    val inputFile = argv(0)
    val outputFile = argv(1)
    println(inputFile)
    println(outputFile)
    //logger.debug(s"Read json from $inputFile and write to $outputFile")
    val sparkConf = new SparkConf().setMaster("local[1]").setAppName("JSONFileReaderWriter")
    val sparkContext = new SparkContext(sparkConf)
    val errorRecords = sparkContext.accumulator(0)
    val records = sparkContext.textFile(inputFile)
    var results = records.flatMap { record =>
      try {
        Some(mapper.readValue(record, classOf[Person]))
      } catch {
        case e: Exception => {
          errorRecords += 1
          None
        }
      }
    } //.filter(person => person.address.city.equals("mumbai"))
    results.saveAsTextFile(outputFile)
    println("Number of bad records " + errorRecords)
  }
}

But when there is a JSON array inside the JSON object, I could not figure out how to extend the code. Any help is really appreciated.

Here is the JSON object that I want to read into an RDD without using a DataFrame:

{"first":"John","last":"Smith","address":[{"line1":"1 main street","city":"San Francisco","state":"CA","zip":"94101"},{"line1":"2 main street","city":"Palo Alto","state":"CA","zip":"94305"}]}

I DO NOT want to use Spark SQL.
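A hedged sketch of one possible change, not the asker's final code: declare the address field as a java.util.List so plain Jackson can bind the JSON array from the field's generic signature (with jackson-module-scala registered on the mapper, a Scala List[Address] should work as well). PersonWithAddresses is a hypothetical name; Address is the class from the code above.

import com.fasterxml.jackson.annotation.JsonProperty

class PersonWithAddresses {
  @JsonProperty var first: String = null
  @JsonProperty var last: String = null
  @JsonProperty var address: java.util.List[Address] = null
  override def toString = s"Person(first=$first, last=$last, address=$address)"
}

// In the flatMap, the only change is the target class:
//   mapper.readValue(record, classOf[PersonWithAddresses])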

How can I get a column of JSON strings to hold structs instead?

20 July 2016 - 1:57pm

Working with Scala and Spark, I currently have a column called data that I create using this:

val decryptedPatientTable = patientTable.withColumn("data", decryptUDF(patientTable.col("data").cast(StringType)))

This function decrypts the data column from one string into another string with the syntax {"key": value, ... }, so what I'd like is to have it as a struct or something I'll be able to query later. Any ideas?
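A hedged sketch for Spark 1.x, reusing decryptedPatientTable from the question: feed the decrypted JSON strings back through the JSON reader, which infers a struct-like schema from the data and yields typed columns that can be queried.

val jsonStrings = decryptedPatientTable.select("data").rdd.map(_.getString(0))
val structured  = sqlContext.read.json(jsonStrings)   // schema inferred from the JSON text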

UNRESOLVED DEPENDENCY: import org.apache.spark.streaming in Spark Scala

20 July 2016 - 1:24pm

I'm trying to build a Scala jar file to run in Spark. When I try to build the jar using sbt, I get the following error:

[warn] :: org.apache.spark#spark-streaming-twitter-2.10_2.10;1.6.2: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Unresolved dependencies path:
[warn]     org.apache.spark:spark-streaming-twitter-2.10_2.10:1.6.2 (/home/hadoop/app/simple.sbt#L7-8)
[warn]       +- twittertophashtag:twittertophashtag_2.10:1.0
sbt.ResolveException: unresolved dependency: org.apache.spark#spark-streaming-twitter-2.10_2.10;1.6.2: not found

Here is the simple.sbt

version := "1.0" scalaVersion := "2.10.5" libraryDependencies ++= Seq("org.apache.spark" %% "spark-core" % "1.6.2","org.apache.spark" % "spark-streaming_2.10" % "1.6.2","org.apache.spark" %% "spark-streaming-twitter-2.10" % "1.6.2","org.elasticsearch" % "elasticsearch-hadoop" % "2.1.0.Beta4") resolvers += "clojars" at "https://clojars.org/repo" resolvers += "conjars" at "http://conjars.org/repo"

Your help would be much appreciated.

Thanks, Sal
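A hedged sketch of the likely fix: %% already appends the Scala binary-version suffix, so the "-2.10" in the artifact name gets doubled into spark-streaming-twitter-2.10_2.10, which does not exist. Dropping it from the name (and letting %% add the suffix for spark-streaming too) should resolve:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"              % "1.6.2",
  "org.apache.spark" %% "spark-streaming"         % "1.6.2",
  "org.apache.spark" %% "spark-streaming-twitter" % "1.6.2",
  "org.elasticsearch" %  "elasticsearch-hadoop"   % "2.1.0.Beta4"
)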

Pattern matching a domain name

20 July 2016 - 1:02pm

I don't use pattern matching as often as I should. I am matching a domain name for the following:

1. If it starts with www., then remove that portion and return: www.stackoverflow.com => "stackoverflow.com"
2. If it has either example.com or example.org, strip that out and return: blog.example.com => "blog"
3. Return request.domain: hello.world.com => "hello.world.com"

def filterDomain(request: RequestHeader): String = {
  request.domain match {
    case //?? case #1 => ?
    case //?? case #2 => ?
    case _ => request.domain
  }
}

How do I reference the value (request.domain) inside the expression and see if it starts with "www." like:

if request.domain.startsWith("www.") request.domain.substring(4)
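A hedged sketch of one way to write it, binding the value to a pattern variable and using guards; RequestHeader and .domain are taken from the question (Play-style), and endsWith is one reading of rule 2.

import play.api.mvc.RequestHeader

def filterDomain(request: RequestHeader): String = request.domain match {
  case d if d.startsWith("www.") => d.substring(4)
  case d if d.endsWith(".example.com") || d.endsWith(".example.org") =>
    d.stripSuffix(".example.com").stripSuffix(".example.org")
  case d => d
}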

Type class syntax for both Seq[A] and Map[A, Int]

20 July 2016 - 12:54pm

I am writing a type class to represent collections of experimental observations. My syntax trait to enrich Seq and Map seems poor. Can it be improved?

Goal:

assert(5 == Seq("a", "a", "b", "b", "c").numObs)
assert(5 == Map("a" -> 2, "b" -> 2, "c" -> 1).numObs)

My (heavily simplified) type class:

trait Observations[A, T[_]] { def numObs(t: T[A]): Int }

Instances for Seq and frequency tables:

object Observations {
  implicit def seqIsObservations[A]: Observations[A, Seq] =
    new Observations[A, Seq] {
      def numObs(t: Seq[A]) = t.size
    }

  implicit def freqTableIsObservations[A]: Observations[A, ({ type λ[A] = Map[A, Int] })#λ] =
    new Observations[A, ({ type λ[A] = Map[A, Int] })#λ] {
      def numObs(t: Map[A, Int]) = t.values.sum
    }
}

The best syntax I’ve come up with:

trait ObservationsSyntax {
  implicit class ObservationsOps[A, T[_]](thiz: T[A]) {
    def numObs(implicit instance: Observations[A, T]): Int = instance.numObs(thiz)
  }

  // Don't like having to have this special case
  implicit class ObservationsMAPOps[A](thiz: Map[A, Int]) {
    def numObs(implicit instance: Observations[A, ({ type λ[A] = Map[A, Int] })#λ]): Int =
      instance.numObs(thiz)
  }
}

Is there a way to use type lambdas to eliminate the second implicit in the syntax trait?

How to simplify future result handling in Akka/Futures?

20 July 2016 - 12:34pm

I want to simplify my for comprehension code to make it as simple as possible.

Here is the code

case object Message

class SimpleActor extends Actor {
  def receive = {
    case Message => sender ! Future { "Hello" }
  }
}

object SimpleActor extends App {
  val test = ActorSystem("Test")
  val sa = test.actorOf(Props[SimpleActor])
  implicit val timeout = Timeout(2.seconds)

  val fRes = for {
    f <- (sa ? Message).asInstanceOf[Future[Future[String]]]
    r <- f
  } yield r

  println {
    Await.result(fRes, 5.seconds)
  }
}

Is it possible to get rid of this part

.asInstanceOf[Future[Future[String]]]

?
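A hedged sketch of a common simplification, not the asker's code: have the actor pipe the completed value back instead of replying with the Future itself, so the ask side deals with a single Future[String] and mapTo replaces the cast.

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

case object Message

class SimpleActor extends Actor {
  def receive = {
    // pipeTo captures sender() immediately and replies when the Future completes
    case Message => Future { "Hello" } pipeTo sender()
  }
}

object SimpleMain extends App {
  val system = ActorSystem("Test")
  val sa = system.actorOf(Props[SimpleActor])
  implicit val timeout = Timeout(2.seconds)

  val fRes: Future[String] = (sa ? Message).mapTo[String]
  println(Await.result(fRes, 5.seconds))
}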

Transforming a column in an RDD/DataFrame

20 July 2016 - 12:25pm

I have this line:

val decryptedDFData = sqlContext.read.json(patientTable.select("data").map(row => decrypt(row.toString())))

This selects the "data" column from another DataFrame, "patientTable", applies my decryption function row by row, and creates another DataFrame. How can I either apply the decryption function to the original DataFrame, knowing that the schema isn't going to be fixed (but the "data" attribute will always be there), or insert each row of the new DataFrame as a struct into its corresponding row from before?

sbt 0.13.12 install failed to download jars on Ubuntu

20 July 2016 - 12:21pm

When installing the latest sbt version (0.13.12) on Ubuntu 16.04 I get the following errors:

download failed: org.scala-sbt#main;0.13.12!main.jar
download failed: org.scala-sbt#actions;0.13.12!actions.jar
download failed: org.scala-sbt#io;0.13.12!io.jar
download failed: org.scala-sbt#completion;0.13.12!completion.jar
download failed: org.scala-sbt#collections;0.13.12!collections.jar
download failed: org.scala-sbt#api;0.13.12!api.jar
download failed: org.scala-sbt#incremental-compiler;0.13.12!incremental-compiler.jar
download failed: org.scala-sbt#compile;0.13.12!compile.jar
download failed: org.scala-sbt#ivy;0.13.12!ivy.jar
download failed: org.scala-sbt#main-settings;0.13.12!main-settings.jar
download failed: org.scala-sbt#command;0.13.12!command.jar
download failed: org.scala-sbt#compiler-interface;0.13.12!compiler-interface.jar
Error during sbt execution: Error retrieving required libraries

When I try the links directly I get a 404. Example link

The instructions I used are from http://www.scala-sbt.org/download.html:

echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823 sudo apt-get update sudo apt-get install sbt

How can I keep track of the column index of a DataFrame after running .select() or .filter()

20 July 2016 - 11:56am

I have a DataFrame made from a Parquet file. I want to run df.select("firstName") and store the result in a new DataFrame, but I want to keep track of what the column index of "firstName" was originally. Any ideas?
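A hedged sketch, assuming df is the original DataFrame: the projected DataFrame does not remember source positions, so record the index from the original schema before the select.

val originalIndex = df.columns.indexOf("firstName")   // position in the source schema, -1 if absent
val firstNames    = df.select("firstName")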

Spark: Computing correlations of a DataFrame with missing values

20 July 2016 - 11:53am

I currently have a DataFrame of doubles with approximately 20% of the data being null values. I want to calculate the Pearson correlation of one column with every other column and return the columnId's of the top 10 columns in the DataFrame.

I want to filter out nulls using pairwise deletion, similar to R's pairwise.complete.obs option in its Pearson correlation function. That is, if one of the two vectors in any correlation calculation has a null at an index, I want to remove that row from both vectors.

I currently do the following:

val df = ... // my DataFrame
val cols = df.columns
df.registerTempTable("dataset")
val target = "Row1"
val mapped = cols.map { colId =>
  val results = sqlContext.sql(s"SELECT ${target}, ${colId} FROM dataset WHERE (${colId} IS NOT NULL AND ${target} IS NOT NULL)")
  (results.stat.corr(colId, target), colId)
}.sortWith(_._1 > _._1).take(11).map(_._2)

This runs very slowly, as every single map iteration is its own job. Is there a way to do this efficiently, perhaps using Statistics.corr in MLlib, as per this SO question (Spark 1.6 Pearson Correlation)?

getting a list of all files inside a zip/rar/7z file with Scala

20 July 2016 - 11:45am

Is there a way to get a list of all the files inside a compressed file without decompressing it? I don't mind using a Java library, but all the solutions I found performed a decompression. Also, if it is relevant, I know that the compressed file has subdirectories in it and I want to get the files from them as well.
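A hedged sketch for the zip case using only the JDK: ZipFile reads the archive's central directory, so entries (including files in subdirectories) are listed without decompressing any data. 7z can be handled similarly with Apache Commons Compress; rar generally needs a separate library.

import java.util.zip.ZipFile
import scala.collection.JavaConverters._

def listZipEntries(path: String): List[String] = {
  val zip = new ZipFile(path)
  try zip.entries().asScala.filterNot(_.isDirectory).map(_.getName).toList
  finally zip.close()
}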
