scala on stackoverflow

most recent 30 from stackoverflow.com 2016-07-24T02:56:33Z

For Expression with recursive generator

20 July 2016 - 11:35am

In the three code variations below, the for expression produces completely different output. The recursive generator appears to be sourced from the real values (A, B, C), yet in versions 2 and 3 of the function below none of those letters appear in the yielded output. What is the reason?

def permuteV1(coll: List[Char]): List[List[Char]] = {
  if (coll.isEmpty) List(List())
  else {
    for {
      pos  <- coll.indices.toList
      elem <- permuteV1(coll.filter(_ != coll(pos)))
    } yield coll(pos) :: elem
  }
}
permuteV1("ABC".toList)
//res1: List[List[Char]] = List(List(A, B, C), List(A, C, B), List(B, A, C), List(B, C, A), List(C, A, B), List(C, B, A))

def permuteV2(coll: List[Char]): List[List[Char]] = {
  if (coll.isEmpty) List(List())
  else {
    for {
      pos  <- coll.indices.toList
      elem <- permuteV2(coll.filter(_ != coll(pos)))
    } yield elem
  }
}
permuteV2("ABC".toList)
//res2: List[List[Char]] = List(List(), List(), List(), List(), List(), List())

def permuteV3(coll: List[Char]): List[List[Char]] = {
  if (coll.isEmpty) List(List())
  else {
    for {
      pos  <- coll.indices.toList
      elem <- permuteV3(coll.filter(_ != coll(pos)))
    } yield '-' :: elem
  }
}
permuteV3("ABC".toList)
//res3: List[List[Char]] = List(List(-, -, -), List(-, -, -), List(-, -, -), List(-, -, -), List(-, -, -), List(-, -, -))
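
For reference, a for expression with two generators desugars to nested flatMap/map calls; a minimal sketch of what permuteV2 expands to (names taken from the question) makes it easier to see that the yielded value is only ever what the innermost recursion produces, and the base case yields the empty list, so no letter is ever prepended:

// Roughly what the compiler produces for permuteV2's for expression:
def permuteV2Desugared(coll: List[Char]): List[List[Char]] =
  if (coll.isEmpty) List(List())
  else
    coll.indices.toList.flatMap { pos =>
      permuteV2Desugared(coll.filter(_ != coll(pos))).map { elem =>
        elem // nothing is prepended, so every result stays List()
      }
    }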

How can I modify config values while producing a fat jar with sbt assembly plugin?

20 July 2016 - 11:25am

Using the sbt-assembly plugin, I am trying to produce a fat jar containing akka-http. I have a concat merge strategy that concatenates all reference.conf files so I don't lose any default settings. However, when I rename akka into something via:

ShadeRule.rename("akka.**" -> "something.@1").inAll

I get the following error:

com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'something'

It seems like it is looking for the renamed config entries something.* and not finding them. Is there a way to rewrite all akka.* config values to something.* while concatenating all those configs?
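
For context, a rough sketch of how the two settings usually sit together in build.sbt with sbt-assembly 0.14.x (the actual build may differ). Note that shading rewrites class references in the bytecode, which is not necessarily the same as rewriting the akka.* keys inside the concatenated reference.conf, and that mismatch would produce exactly this kind of missing-key error:

assemblyShadeRules in assembly := Seq(
  // Rewrites class files, but not necessarily the HOCON keys in reference.conf
  ShadeRule.rename("akka.**" -> "something.@1").inAll
)

assemblyMergeStrategy in assembly := {
  case "reference.conf" => MergeStrategy.concat
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}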

How to create JSON object in Scala/Play

20 July 2016 - 11:24am

I need to create the following JSON object in Scala with the Play Framework, but I am having trouble:

{"employees":[ {"firstName":"John", "lastName":"Doe"} ]}

The data used to create the object comes from an HTML form (already implemented). So far my code creates the following:

val json: JsValue = Json.toJson(formContent) // returns {"firstName":"John", "lastName":"Doe"}

How can I add the key "employees" to this object?

Can anyone help me?

Thanks
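
A minimal sketch of one way to wrap the existing value, using Play's Json.obj and Json.arr (formContent is the form data from the question):

import play.api.libs.json._

val json: JsValue = Json.toJson(formContent)                    // {"firstName":"John","lastName":"Doe"}
val wrapped: JsValue = Json.obj("employees" -> Json.arr(json))
// wrapped: {"employees":[{"firstName":"John","lastName":"Doe"}]}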

Websocket Reverse Proxy without using Nginx by using only Play Microservice

20 July 2016 - 11:19am

I have a requirement that a Play microservice should act as a reverse proxy. Say a WebSocket server is running on a Cowboy server C and the Play microservice is P. When a client accesses the WebSocket, the connection should go via P to C.

Number of parameters that can be passed to a UDF in Scala

20 July 2016 - 11:14am

Is there a limit on the number of parameters that can be passed to a UDF in Scala?

Thank you. Sai
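
Assuming this refers to a Spark SQL UDF registered from Scala, a small illustrative sketch follows (Spark 2.x API; all names are hypothetical). The udf(...) helper only offers overloads up to a fixed arity, and Scala's own function types cap at 22 parameters, so the exact ceiling depends on the Spark version:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark = SparkSession.builder().appName("udf-arity-example").master("local[*]").getOrCreate()
import spark.implicits._

// Illustrative two-argument UDF; more columns are added the same way,
// up to whatever arity the udf(...) overloads allow.
val fullName = udf((first: String, last: String) => s"$first $last")

val df = Seq(("John", "Doe")).toDF("firstName", "lastName")
df.select(fullName($"firstName", $"lastName").as("fullName")).show()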

Scala: parse JSON file into List[DBObject]

20 July 2016 - 10:50am

1. The input is a JSON file that contains multiple records. Example:

[ {"user": "user1", "page": 1, "field": "some"}, {"user": "user2", "page": 2, "field": "some2"}, ... ]

2. I need to load each record from the file as a document into a MongoDB collection. Using Casbah to interact with Mongo, inserting the data may look like:

def saveCollection(inputListOfDbObjects: List[DBObject]) = {
  val xs = inputListOfDbObjects
  xs foreach (obj => {
    Collection.save(obj)
  })
}

Question: what is the correct way (using Scala) to parse the JSON and get the data as a List[DBObject]?

Any help is appreciated.
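
A minimal sketch of one possible approach, assuming the MongoDB Java driver's com.mongodb.util.JSON parser is on the classpath (it comes with Casbah); the file path is hypothetical:

import com.mongodb.{BasicDBList, DBObject}
import com.mongodb.util.JSON
import scala.collection.JavaConverters._
import scala.io.Source

val raw = Source.fromFile("/path/to/records.json").mkString      // hypothetical path
val parsed = JSON.parse(raw).asInstanceOf[BasicDBList]           // top-level JSON array
val dbObjects: List[DBObject] = parsed.asScala.toList.map(_.asInstanceOf[DBObject])

saveCollection(dbObjects)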

Parsing zip file using Scala in Spark

20 July 2016 - 9:51am

I want to read a zip file in Spark using Scala. I went through some posts on Stack Overflow but didn't find complete code anywhere. Can someone post complete code for reading a zip file in Spark using Scala?
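
A minimal sketch of one common approach, assuming the zip entries contain text and each file fits in executor memory (sc is an existing SparkContext; the path is hypothetical):

import java.util.zip.ZipInputStream
import scala.io.Source

val lines = sc.binaryFiles("/path/to/archive.zip").flatMap { case (_, stream) =>
  val zis = new ZipInputStream(stream.open())      // PortableDataStream -> InputStream
  Iterator.continually(zis.getNextEntry)
    .takeWhile(_ != null)
    .flatMap { _ =>
      // ZipInputStream.read stops at the end of the current entry,
      // so this reads exactly one entry's content as lines of text.
      Source.fromInputStream(zis).getLines().toList
    }
}
lines.take(10).foreach(println)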

Where is Stream in JavaConversions and JavaConverters?

20 July 2016 - 9:17am

First of all, I know how to use the Java libraries, and I can write loops that do what I need, but the question is more: why is there nothing for java.util.stream.Stream in JavaConversions or JavaConverters?
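
A minimal sketch of a manual bridge, assuming what is wanted is a Scala view of a java.util.stream.Stream (this goes through the stream's iterator, which is a terminal operation on the Java side):

import java.util.stream.{Stream => JStream}
import scala.collection.JavaConverters._

def toScalaStream[A](js: JStream[A]): Stream[A] =
  js.iterator().asScala.toStream   // iterator() consumes the Java stream

val s: Stream[String] = toScalaStream(JStream.of("a", "b", "c"))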

Converting Future-based API into Stream-based API

20 July 2016 - 9:06am

At the moment, a Future-based API is converted into a Stream-based one like this:

def getUpdates(offset: Option[Int] = None): Future[Update] = ???

def updateStream(offset: Option[Int] = None): Stream[Update] = {
  // Await.result needs an explicit timeout
  val head = Await.result(getUpdates(offset), Duration.Inf)
  head #:: updateStream(Some(head.update_id))
}

I'm not sure if using Await is a proper thing to do here. Is there a better way?
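
One alternative worth considering, sketched under the assumption that Akka Streams is available: Source.unfoldAsync chains the futures without blocking, carrying the last update_id forward as the state (getUpdates and Update are from the question; the global execution context import is a placeholder):

import akka.NotUsed
import akka.stream.scaladsl.Source
import scala.concurrent.ExecutionContext.Implicits.global   // placeholder execution context

def updateSource(offset: Option[Int] = None): Source[Update, NotUsed] =
  Source.unfoldAsync(offset) { current =>
    // Emit the received Update and carry its update_id forward as the next state;
    // always returning Some keeps the stream going, mirroring the infinite Stream above.
    getUpdates(current).map(u => Some((Option(u.update_id), u)))
  }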

Elastic Search Invalid Pattern Given

20 July 2016 - 8:57am

I have this code below:

def main(args: Array[String]) {
  val sparkConf = new SparkConf().setAppName("Spark-Hbase").setMaster("local[2]")
    .set("es.index.auto.create", "true")
    .set("es.resource", "test")
    .set("es.nodes", "127.0.0.1")
    .set("es.output.json", "true")

  /* More code */

  DStream.map { _._2 }.foreachRDD { (rdd: RDD[String]) =>
    EsSpark.saveJsonToEs(rdd, "127.0.0.1")
  }
}

I keep getting an error about the Elasticsearch es.nodes property:

Caused by: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: invalid pattern given 127.0.0.1/
    at org.elasticsearch.hadoop.util.Assert.isTrue(Assert.java:50)
    at org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.compile(AbstractIndexExtractor.java:51)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:398)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:40)
    at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
    at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:67)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    ... 3 more

Am I doing something wrong by putting 127.0.0.1? I tried adding the port as 127.0.0.1:9200, but it still didn't work. Does anyone have any pointers? Thanks.
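
For reference, a sketch of how the call is commonly written, under the assumption that the second argument of EsSpark.saveJsonToEs is the target index/type resource rather than a node address (the node is normally taken from es.nodes in the Spark configuration; the index and type names here are hypothetical):

import org.elasticsearch.spark.rdd.EsSpark

DStream.map { _._2 }.foreachRDD { (rdd: RDD[String]) =>
  // "myindex/mytype" is a hypothetical index/type resource, not a host name.
  EsSpark.saveJsonToEs(rdd, "myindex/mytype")
}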

Can I safely create a Thread in an Akka Actor?

20 July 2016 - 8:37am

I have an Akka Actor that I want to send "control" messages to. This Actor's core mission is to listen on a Kafka queue, which is a polling process inside a loop.

I've found that the following simply locks up the Actor and it won't receive the "stop" (or any other) message:

class Worker() extends Actor {
  private var done = false

  def receive = {
    case "stop" =>
      done = true
      kafkaConsumer.close()
    // other messages here
  }

  // Start digesting messages!
  while (!done) {
    kafkaConsumer.poll(100).iterator.map { cr: ConsumerRecord[Array[Byte], String] =>
      // process the record
    }
  }
}

I could wrap the loop in a Thread started by the Actor, but is it ok/safe to start a Thread from inside an Actor? Is there a better way?
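
One commonly suggested alternative, sketched below: rather than blocking in the constructor, let the actor send itself a poll message, process one batch per message, and re-enqueue the next poll, so a "stop" can be handled between batches (kafkaConsumer and the record handling are placeholders from the question):

import akka.actor.Actor
import scala.collection.JavaConverters._

class Worker() extends Actor {

  override def preStart(): Unit = self ! "poll"

  def receive = {
    case "poll" =>
      // One bounded batch per message keeps the mailbox responsive.
      kafkaConsumer.poll(100).iterator().asScala.foreach { cr =>
        // process the record
      }
      self ! "poll"

    case "stop" =>
      kafkaConsumer.close()
      context.stop(self)
    // other messages here
  }
}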

java.io.NotSerializableException: DStream checkpointing has been enabled but the DStreams with their functions are not serializable

20 July 2016 - 8:36am

When executing this code in Spark Streaming, I got a Serialization error (see below):

val endPoint = ConfigFactory.load("application.conf").getConfig("conf").getString("endPoint")
val operation = ConfigFactory.load("application.conf").getConfig("conf").getString("operation")
val param = ConfigFactory.load("application.conf").getConfig("conf").getString("param")

result.foreachRDD { jsrdd =>
  jsrdd.map(jsobj => {
    val docId = (jsobj \ "id").as[JsString].value
    val response: HttpResponse[String] =
      Http(apiURL + "/" + endPoint + "/" + docId + "/" + operation)
        .timeout(connTimeoutMs = 1000, readTimeoutMs = 5000)
        .param(param, jsobj.toString())
        .asString
    val output = Json.parse(response.body) \ "annotation" \ "tags"
    jsobj.as[JsObject] + ("tags", output.as[JsObject])
  })
}

So, as I understand it, the problem is with the scalaj-http API. How can I resolve this issue? Obviously I cannot change the API.

java.io.NotSerializableException: DStream checkpointing has been enabled but the DStreams with their functions are not serializable
org.consumer.kafka.KafkaJsonConsumer
Serialization stack:
    - object not serializable (class: org.consumer.kafka.KafkaJsonConsumer, value: org.consumer.kafka.KafkaJsonConsumer@f91da5e)
    - field (class: org.consumer.kafka.KafkaJsonConsumer$$anonfun$run$1, name: $outer, type: class org.consumer.kafka.KafkaJsonConsumer)
    - object (class org.consumer.kafka.KafkaJsonConsumer$$anonfun$run$1, )
    - field (class: org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3, name: cleanedF$1, type: interface scala.Function1)
    - object (class org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3, )
    - writeObject data (class: org.apache.spark.streaming.dstream.DStream)
    - object (class org.apache.spark.streaming.dstream.ForEachDStream, org.apache.spark.streaming.dstream.ForEachDStream@761956ac)
    - writeObject data (class: org.apache.spark.streaming.dstream.DStreamCheckpointData)
    - object (class org.apache.spark.streaming.dstream.DStreamCheckpointData, [ 0 checkpoint files ])
    - writeObject data (class: org.apache.spark.streaming.dstream.DStream)
    - object (class org.apache.spark.streaming.dstream.ForEachDStream, org.apache.spark.streaming.dstream.ForEachDStream@704641e3)
    - element of array (index: 0)
    - array (class [Ljava.lang.Object;, size 16)
    - field (class: scala.collection.mutable.ArrayBuffer, name: array, type: class [Ljava.lang.Object;)
    - object (class scala.collection.mutable.ArrayBuffer, ArrayBuffer(org.apache.spark.streaming.dstream.ForEachDStream@704641e3, org.apache.spark.streaming.dstream.ForEachDStream@761956ac))
    - writeObject data (class: org.apache.spark.streaming.dstream.DStreamCheckpointData)
    - object (class org.apache.spark.streaming.dstream.DStreamCheckpointData, [ 0 checkpoint files ])
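
For what it's worth, the trace points at the enclosing KafkaJsonConsumer being pulled into the closure rather than at scalaj-http itself. A hedged sketch of one common workaround: move the per-record work into a standalone serializable object and reference only local copies of the config values inside foreachRDD (the helper object name is hypothetical; everything else mirrors the question):

import play.api.libs.json._
import scalaj.http.{Http, HttpResponse}

// Hypothetical helper object; an object with no captured state ships cleanly to executors.
object TagEnricher extends Serializable {
  def enrich(jsobj: JsValue, apiURL: String, endPoint: String,
             operation: String, param: String): JsObject = {
    val docId = (jsobj \ "id").as[JsString].value
    val response: HttpResponse[String] =
      Http(s"$apiURL/$endPoint/$docId/$operation")
        .timeout(connTimeoutMs = 1000, readTimeoutMs = 5000)
        .param(param, jsobj.toString())
        .asString
    val output = Json.parse(response.body) \ "annotation" \ "tags"
    jsobj.as[JsObject] + ("tags" -> output.as[JsObject])
  }
}

// Inside the streaming job: capture plain local values, not `this`.
val url = apiURL; val ep = endPoint; val op = operation; val p = param
result.foreachRDD { jsrdd =>
  jsrdd.map(jsobj => TagEnricher.enrich(jsobj, url, ep, op, p))
}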

sending media data over the WebSocket in Play framework

20 July 2016 - 8:08am

I am using Akka-based WebSockets in Play 2.5.4 with Scala, similar to this:

import play.api.libs.json.JsValue
import play.api.mvc._
import play.api.libs.streams._

class Controller4 @Inject() (implicit system: ActorSystem, materializer: Materializer) {

  import akka.actor._

  class MyWebSocketActor(out: ActorRef) extends Actor {
    import play.api.libs.json.JsValue

    def receive = {
      case msg: JsValue => out ! msg
    }
  }

  object MyWebSocketActor {
    def props(out: ActorRef) = Props(new MyWebSocketActor(out))
  }

  def socket = WebSocket.accept[JsValue, JsValue] { request =>
    ActorFlow.actorRef(out => MyWebSocketActor.props(out))
  }
}

See https://www.playframework.com/documentation/2.5.x/ScalaWebSockets

My question is: if I want to send media data over the socket connection (such as video, audio, or files), how can I achieve that? I am sorry if this feels pretty basic or easy to you, but I have no clear idea about this. Thanks in advance.
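
A minimal sketch of one direction to explore, assuming Play 2.5 resolves an implicit MessageFlowTransformer for byte arrays so binary frames can be sent (chunking large files and any framing protocol are left out; the actor mirrors the controller structure from the question and the names are hypothetical):

def binarySocket = WebSocket.accept[Array[Byte], Array[Byte]] { request =>
  ActorFlow.actorRef(out => BinarySocketActor.props(out))
}

class BinarySocketActor(out: ActorRef) extends Actor {
  def receive = {
    case bytes: Array[Byte] =>
      // Echo the binary frame back; in a real app, push file/audio/video chunks here.
      out ! bytes
  }
}

object BinarySocketActor {
  def props(out: ActorRef) = Props(new BinarySocketActor(out))
}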

Update a multi-statement Slick 2.1 withDynSession block to Slick 3.0.3

20 July 2016 - 8:02am

I have this Slick 2.1 based repo method:

def addContentBySourceInfo(userId: UUID, adopted: Boolean, contentId: UUID,
                           contentInfo: ContentWithoutId): Either[ContentAlreadyExistsError, Content] = {
  getDatabase withDynSession {
    val content = contentInfo.toContent(contentId)

    Try {
      ContentTable.query += content
      UserContentTable.query += UserContentModel(userId, contentId, Some(adopted))
    } match {
      case Failure(e: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException) =>
        Left(ContentAlreadyExistsError(content.source, content.sourceId))
      case Failure(e) => throw e // could be some other error, and we should fail fast.
      case Success(s) => Right(content)
    }
  }
}

Where getDatabase simply returns Database.forURL(..) from slick.jdbc.JdbcBackend.

How would I convert this to be compatible with the DBIO api from Slick 3.x?

Note: I'd like to keep these methods synchronous until I'm ready to upgrade my entire repository layer to handle asynchronous calls (i.e. I don't want to break my repository API just yet).
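
A rough sketch of what the Slick 3 shape could look like while staying synchronous by blocking on the Future. This is an assumption-laden sketch rather than a drop-in replacement: db is assumed to be a slick.driver Database instance and the MySQL driver import is a guess based on the exception type in the original; the domain types come from the question:

import java.util.UUID
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.util.{Failure, Success}
import slick.driver.MySQLDriver.api._

def addContentBySourceInfo(userId: UUID, adopted: Boolean, contentId: UUID,
                           contentInfo: ContentWithoutId): Either[ContentAlreadyExistsError, Content] = {
  val content = contentInfo.toContent(contentId)

  // Both inserts as one transactional DBIO action; asTry turns failures into values.
  val action = DBIO.seq(
    ContentTable.query += content,
    UserContentTable.query += UserContentModel(userId, contentId, Some(adopted))
  ).transactionally.asTry

  Await.result(db.run(action), Duration.Inf) match {
    case Failure(e: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException) =>
      Left(ContentAlreadyExistsError(content.source, content.sourceId))
    case Failure(e) => throw e // could be some other error, and we should fail fast.
    case Success(_) => Right(content)
  }
}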

runtime exception on ScalaCheck + GeneratorDrivenPropertyChecks [duplicate]

20 July 2016 - 7:28am

This question already has an answer here:

build.sbt:

name := "myApp" version := "0.1" scalaVersion := "2.11.8" libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.13.2"

src/test/scala/MyAppCheck.scala

import org.scalatest._
import org.scalatest.prop.GeneratorDrivenPropertyChecks

class MySpec extends FlatSpec with Matchers with GeneratorDrivenPropertyChecks {
  "n+n" should "be n*2" in {
    forAll { (n: Int) =>
      whenever (n > 0) {
        (n + n) should be (n * 2)
      }
    }
  }
}

run test:

$ sbt test
[info] Exception encountered when attempting to run a suite with class name: org.scalatest.DeferredAbortedSuite *** ABORTED ***
[info]   java.lang.IncompatibleClassChangeError: Implementing class

Why?

Instead, running this from an Ammonite console works just fine:

load.ivy("org.scalatest" %% "scalatest" % "2.2.6") load.ivy("org.scalacheck" %% "scalacheck" % "1.13.2") import org.scalatest._ import org.scalatest.prop.GeneratorDrivenPropertyChecks class MySpec extends FlatSpec with Matchers with GeneratorDrivenPropertyChecks { "n+n" should "be n*2" in { forAll { (n: Int) => whenever (n > 0) { (n+n) should be (n*2) } } } } new MySpec().execute()

Publishing from sbt to a local maven repo

20 July 2016 - 6:40am

I added the following to build.sbt:

publishTo := Some(Resolver.file("file", new File(Path.userHome.absolutePath + "/.m2/repository")))

publishMavenStyle := true

Publishing does not seem to be affected: running

sbt publishLocal

still publishes to the Ivy repository instead of the .m2 repo:

Packaging /git/msSCSCTEL/streaming-reconciler/target/scala-2.10/streaming-reconciler_2.10-0.1.0-SNAPSHOT-javadoc.jar ...
Done packaging.
published streaming-reconciler_2.10 to /Users/myuser/.ivy2/local/com.mycomp/streaming-reconciler_2.10/0.1.0-SNAPSHOT/poms/streaming-reconciler_2.10.pom
published streaming-reconciler_2.10 to /Users/myuser/.ivy2/local/com.mycomp/streaming-reconciler_2.10/0.1.0-SNAPSHOT/jars/streaming-reconciler_2.10.jar
published streaming-reconciler_2.10 to /Users/myuser/.ivy2/local/com.mycomp/streaming-reconciler_2.10/0.1.0-SNAPSHOT/srcs/streaming-reconciler_2.10-sources.jar
published streaming-reconciler_2.10 to /Users/myuser/.ivy2/local/com.mycomp/streaming-reconciler_2.10/0.1.0-SNAPSHOT/docs/streaming-reconciler_2.10-javadoc.jar
published ivy to /Users/myuser/.ivy2/local/com.mycomp/streaming-reconciler_2.10/0.1.0-SNAPSHOT/ivys/ivy.xml

What is missing or incorrect to publish to Maven instead?
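
One thing worth checking, sketched below: publishLocal is tied to the local Ivy repository, while publishTo only influences the publish task; recent sbt versions also ship a publishM2 task for the local Maven repo (whether it is available depends on the sbt version, so treat both commands as suggestions to verify):

sbt publish      # honors publishTo (the file resolver pointing at ~/.m2/repository)
sbt publishM2    # if available, publishes straight to ~/.m2/repository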

Why Scala Enumeration does not work in Apache Zeppelin but works in Maven

20 July 2016 - 6:32am

Enumeration works as expected when I use it in a Maven project (with the same Scala version).

object t {
  object DashStyle extends Enumeration {
    val Solid, ShortDash = Value
  }

  def f(style: DashStyle.Value) = println(style)

  def main(args: Array[String]) = f(DashStyle.Solid)
}

But when it runs in Apache Zeppelin (Zeppelin 0.6, Spark 1.6, Scala 2.10, Java 1.8):

object DashStyle extends Enumeration {
  val Solid, ShortDash = Value
}

def f(style: DashStyle.Value) = println(style)

f(DashStyle.Solid)

It reports the following error, even though the found and required types look exactly the same:

<console>:130: error: type mismatch;
 found   : DashStyle.Value
 required: DashStyle.Value
              f(DashStyle.Solid)

Why, and how should I use it?
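
A hedged workaround sketch: the REPL-style wrapping Zeppelin applies to each snippet can leave the two DashStyle.Value references pointing at different wrapper instances, and replacing Enumeration with a sealed trait plus case objects avoids the path-dependent Value type altogether (this sidesteps the problem rather than fixing Enumeration itself):

sealed trait DashStyle
object DashStyle {
  case object Solid     extends DashStyle
  case object ShortDash extends DashStyle
}

def f(style: DashStyle) = println(style)

f(DashStyle.Solid)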

Type class API in Scala

20 July 2016 - 6:21am

This is a pattern I am considering using in Scala to define a contract without limiting the type, while still having a fluent API without all the verbose implicitly[..] calls.

The idea is to build an implicit class on top of a type class, like so:

implicit class NumberLikeApi[N : NumberLike](n: N) {
  def add(n2: N): N = implicitly[NumberLike[N]].add(n, n2)
}

Now with the right implicits in scope you can do:

val sum = n1.add(n2)

Instead of:

val sum = implicitly[NumberLike[N]].add(n1, n2)

My question: Is it somehow possible to automate / generate the implicit class part? It is basically a duplication of the type class.

I failed to find something in the language & standard library. Is there perhaps a macro in a library somewhere that can do this?
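
One library that reportedly does exactly this kind of generation is simulacrum, whose @typeclass macro annotation derives the ops/syntax wrapper from the type class itself. A small sketch (it requires the macro paradise compiler plugin, and the exact setup may vary by version):

import simulacrum._

@typeclass trait NumberLike[N] {
  def add(x: N, y: N): N
}

// Elsewhere, with an instance in scope:
import NumberLike.ops._

def sum[N: NumberLike](n1: N, n2: N): N = n1.add(n2)   // generated syntax, no implicitly[..]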

Task not serializable: Json strings processing using Spark Streaming

20 July 2016 - 5:47am

I am receiving JSON strings from Kafka in Spark Streaming (Scala). The processing of each string takes some time, so I want to distribute the processing over X clusters.

Currently I am just running tests on my laptop. So, for simplicity, let's assume that the processing I should apply to each JSON string is just some normalization of fields:

def normalize(json: String): String = {
  val parsedJson = Json.parse(json)
  val parsedRecord = (parsedJson \ "records")(0)
  val idField = parsedRecord \ "identifier"
  val titleField = parsedRecord \ "title"

  val output = Json.obj(
    "id" -> Json.parse(idField.get.toString().replace('/', '-')),
    "publicationTitle" -> titleField.get
  )
  output.toString()
}

This is my attempt to distribute the normalize operation over the "clusters" (each JSON string should be processed as a whole; the strings cannot be split). How do I deal with the Task not serializable issue at the line val callRDD = JSONstrings.map(normalize(_))?

val conf = new SparkConf().setAppName("My Spark Job").setMaster("local[*]")
val ssc = new StreamingContext(conf, Seconds(5))
val topicMap = topic.split(",").map((_, numThreads)).toMap
val JSONstrings = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
val callRDD = JSONstrings.map(normalize(_))
ssc.start()
ssc.awaitTermination()
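
A hedged sketch of the usual remedy: if normalize is a method of a non-serializable enclosing class, referencing it in map drags the whole instance into the closure; moving it into a standalone object keeps the closure clean (the object name is hypothetical, the body is the normalize from above):

import play.api.libs.json.Json

// Hypothetical top-level holder; an object's methods can be used in closures
// without capturing any outer, non-serializable instance.
object JsonNormalizer extends Serializable {
  def normalize(json: String): String = {
    val parsedJson = Json.parse(json)
    val parsedRecord = (parsedJson \ "records")(0)
    val idField = parsedRecord \ "identifier"
    val titleField = parsedRecord \ "title"

    val output = Json.obj(
      "id" -> Json.parse(idField.get.toString().replace('/', '-')),
      "publicationTitle" -> titleField.get
    )
    output.toString()
  }
}

val callRDD = JSONstrings.map(JsonNormalizer.normalize)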

Deserializing inner fields with Jackson

20 July 2016 - 5:22am

An application receives a JSON response of the following format:

{ "ok": true, "responseType": "Foo", "result": { /* pretty much anything could be here */ } }

The whole message is deserialized into case class Result(ok: Boolean, responseType: String, result: Any) with mapper.readValue[Result](json), and result ends up as a Map[String, Any], I guess. How can I deserialize the result field into a proper case class?
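
A minimal sketch of one option, assuming the Jackson Scala module is registered and the concrete payload type (Foo below, hypothetical) can be chosen from responseType: keep result as a JsonNode and convert it in a second step with treeToValue (Envelope is a hypothetical renaming of the question's Result):

import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}
import com.fasterxml.jackson.module.scala.DefaultScalaModule

case class Foo(name: String)                                            // hypothetical payload
case class Envelope(ok: Boolean, responseType: String, result: JsonNode)

val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

val envelope = mapper.readValue(json, classOf[Envelope])                // json is the raw String
val payload: Foo = mapper.treeToValue(envelope.result, classOf[Foo])    // second-step conversion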
