scala on stackoverflow

most recent 30 from stackoverflow.com 2016-07-24T02:56:33Z
Updated: 2 years 26 weeks ago

What are the rules for writing a sort comparator

18 July 2016 - 9:34am

I want to sort objects of class A based on the values of the members a, b, and c: a is given the most preference, then b, and c has the least preference.

The comparator is used via x = x.sortWith(comparator), where x is an ArrayBuffer[A].

class A { var a, b, c = 0 }

def comparator(f1: A, f2: A) = {
  if (f1.a == f2.a) {
    if (f1.b == f2.b) f1.c > f2.c
    else f1.b > f2.b
  } else f1.a > f2.a
}

Using this I am getting:

java.lang.IllegalArgumentException: Comparison method violates its general contract!
  at java.util.TimSort.mergeLo(TimSort.java:747)
  at java.util.TimSort.mergeAt(TimSort.java:483)
  at java.util.TimSort.mergeCollapse(TimSort.java:410)
  at java.util.TimSort.sort(TimSort.java:214)
  at java.util.TimSort.sort(TimSort.java:173)
  at java.util.Arrays.sort(Arrays.java:659)
  at scala.collection.SeqLike$class.sorted(SeqLike.scala:618)
  at scala.collection.AbstractSeq.sorted(Seq.scala:41)
  at scala.collection.SeqLike$class.sortWith(SeqLike.scala:575)
  at scala.collection.AbstractSeq.sortWith(Seq.scala:41)
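One way to make the intended total order explicit is to let the standard library derive it. Below is a minimal sketch (it assumes class A from the question and a descending order on a, then b, then c, matching the > comparisons above); a tuple-based Ordering is transitive and consistent by construction:

    import scala.collection.mutable.ArrayBuffer

    class A { var a, b, c = 0 }

    // Sort by (a, b, c) with a as the highest priority; .reverse gives the
    // descending order that the hand-written comparator expresses with >.
    val byFields: Ordering[A] =
      Ordering.by[A, (Int, Int, Int)](x => (x.a, x.b, x.c)).reverse

    val xs = ArrayBuffer(new A, new A)
    val sorted = xs.sorted(byFields)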

Akka stream - List to mapAsync of individual elements

18 July 2016 - 9:34am

My stream has a Flow whose outputs are List[Any] objects. I want to have a mapAsync followed by some other stages, each of which processes an individual element instead of the list. How can I do that?

Effectively I want to connect the output of

Flow[Any].map { msg => someListDerivedFrom(msg) }

to be consumed by:

Flow[Any].mapAsyncUnordered(4) { listElement => actorRef ? listElement }.someOtherStuff

How do I do this?
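One possible approach (a sketch; someListDerivedFrom and actorRef are the names from the question, and the ask timeout is an assumption) is to flatten each emitted List with mapConcat so that the asynchronous stage sees one element at a time:

    import akka.NotUsed
    import akka.pattern.ask
    import akka.stream.scaladsl.Flow
    import akka.util.Timeout
    import scala.concurrent.duration._

    implicit val timeout: Timeout = Timeout(5.seconds)  // assumed ask timeout

    // mapConcat turns each incoming message into many downstream elements
    // (it needs an immutable.Iterable, which a List already is), so the
    // mapAsyncUnordered stage processes individual list elements.
    val flow: Flow[Any, Any, NotUsed] =
      Flow[Any]
        .mapConcat(msg => someListDerivedFrom(msg))
        .mapAsyncUnordered(4)(listElement => actorRef ? listElement)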

How to generate case objects for every field in a Scala case class using macro?

18 July 2016 - 9:29am

I'm trying to generate case objects for every field of every child case class of a sealed trait. I'm able to generate the code in the macro, but I don't know how to use it in my code.

Example usecase:

sealed trait Item

sealed trait Field { val name: String }

case class Product(id: String, name: String) extends Item

It should generate the following case objects, which represent the fields of Product.

case object ProductIdField extends Field { val name = "Product Id" }

case object ProductNameField extends Field { val name = "Product Name" }

The macro so far, which generates the code:

import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

object FieldGenerator {
  def generator[A](): Product = macro generate[A]

  def generate[A: c.WeakTypeTag](c: Context)(): c.Tree = {
    import c.universe._
    val subclasses: Set[c.universe.Symbol] =
      c.weakTypeOf[A].typeSymbol.asClass.knownDirectSubclasses
    val fieldObjects: Set[String] = subclasses.flatMap { (subClass: c.universe.Symbol) =>
      val itemName = subClass.name.toString
      val sealedTraitName = s"${itemName}Field"
      val fieldSealedTrait: String = s"sealed trait $sealedTraitName extends Field"
      val fieldCaseObjects: Iterable[String] = subClass.info.decls.collect {
        case m: MethodSymbol if m.isCaseAccessor =>
          val fieldName = m.name.toString.capitalize
          s"""case object ${itemName + fieldName}Field extends $sealedTraitName {
             |  val name = "$itemName $fieldName"
             |}""".stripMargin
      }
      List(fieldSealedTrait) ++ fieldCaseObjects
    }
    fieldObjects.foreach(println)
    q"..$fieldObjects"
  }
}

Here is how I am calling it

FieldGenerator.generator[Item]()

And I get the following compile-time error:

a pure expression does nothing in statement position; you may be omitting necessary parentheses

How can I import the generated code?
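For reference, definitions synthesized inside a blackbox def macro are not visible by name at the call site; the usual route for introducing new top-level case objects is a macro annotation, which requires the macro paradise compiler plugin. Below is a minimal sketch of that shape only (the annotation name is hypothetical); it compiles but merely echoes its input, and the generation logic from the macro above would go where the comment indicates:

    import scala.annotation.{StaticAnnotation, compileTimeOnly}
    import scala.language.experimental.macros
    import scala.reflect.macros.whitebox

    @compileTimeOnly("enable the macro paradise plugin to expand macro annotations")
    class generateFields extends StaticAnnotation {
      def macroTransform(annottees: Any*): Any = macro GenerateFieldsImpl.impl
    }

    object GenerateFieldsImpl {
      def impl(c: whitebox.Context)(annottees: c.Tree*): c.Tree = {
        import c.universe._
        // The annotated trait arrives here as a tree; the ProductIdField /
        // ProductNameField case objects would be built with quasiquotes and
        // added alongside it instead of being assembled as strings.
        q"..$annottees"
      }
    }

It would be used as @generateFields on the sealed trait, and the build needs the macro paradise compiler plugin enabled for annotation expansion.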

Apache Flink - add to a set in parallel

18 July 2016 - 8:56am

I am new to Apache Flink and would like to know how to distribute the problem below. I have three different sets and I need to add elements to them in parallel; I am feeling a bit lost. For example, my code is:

myData.foreach {
  case x: Package.MyTypeA => typeASet ++= x.create(data)
  case x: Package.MyTypeB => typeBSet ++= x.create(data)
  case x: Package.MyTypeC => typeCSet ++= x.create(data)
}

where myData is the collection from which I update the three other collections (typeASet, typeBSet, typeCSet), and data is a Map. The create method returns an Iterable of TypeA, TypeB, or TypeC respectively. I use Scala.
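A minimal sketch of one Flink-style alternative, assuming the names from the question (Package.MyTypeA, create(data), and so on) and that data is small enough to ship to the workers: instead of mutating three shared sets, derive three DataSets by filtering on element type and flat-mapping create, so each one is built in parallel.

    import org.apache.flink.api.scala._

    val env = ExecutionEnvironment.getExecutionEnvironment

    // myData becomes a distributed DataSet; each derived set is computed in parallel.
    val input = env.fromCollection(myData)

    val typeASet = input
      .filter(_.isInstanceOf[Package.MyTypeA])
      .flatMap(x => x.asInstanceOf[Package.MyTypeA].create(data))

    val typeBSet = input
      .filter(_.isInstanceOf[Package.MyTypeB])
      .flatMap(x => x.asInstanceOf[Package.MyTypeB].create(data))

    val typeCSet = input
      .filter(_.isInstanceOf[Package.MyTypeC])
      .flatMap(x => x.asInstanceOf[Package.MyTypeC].create(data))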

Check interval time range which doesn't exist

18 July 2016 - 8:34am

Hello, I have a schema like this:

CREATE TABLE booking (
  id BIGSERIAL NOT NULL PRIMARY KEY,
  "from" TIME NOT NULL,
  "to" TIME NOT NULL
);

Sample data:

 id |   from   |    to
----+----------+----------
  1 | 08:00:00 | 09:00:00
  2 | 10:00:00 | 11:00:00

Now I would like to find all possible bookings which don't exist yet (one-hour intervals, with "to" no later than 22:00:00), such as 09:00:00 - 10:00:00.

Any ideas folks?
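Since this appears under the Scala tag, here is a sketch of the gap logic done in application code (the booked intervals are the sample rows above, and starting candidates at midnight is an assumption; on the database side the usual equivalent is to join a generated series of candidate slots against booking and keep those with no overlap):

    import java.time.LocalTime

    // Candidate one-hour slots whose "to" is no later than 22:00.
    val candidates = (0 to 21).map(h => (LocalTime.of(h, 0), LocalTime.of(h + 1, 0)))

    // Booked intervals from the sample data.
    val booked = Seq(
      (LocalTime.of(8, 0), LocalTime.of(9, 0)),
      (LocalTime.of(10, 0), LocalTime.of(11, 0))
    )

    // A slot is free when it overlaps no booking: start < bookedTo && bookedFrom < end.
    val free = candidates.filterNot { case (start, end) =>
      booked.exists { case (bFrom, bTo) => start.isBefore(bTo) && bFrom.isBefore(end) }
    }

    free.foreach { case (s, e) => println(s"$s - $e") }  // includes 09:00 - 10:00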

K-means with Elasticsearch-Spark type mismatch error

18 July 2016 - 8:21am

I'm new to Spark and Scala. I'm trying to read data from ES and run the K-means algorithm on that data. I can read the data with this code:

val source = sc.newAPIHadoopRDD(esconf, classOf[EsInputFormat[Text, MapWritable]], classOf[Text], classOf[MapWritable])

But I couldn't work out how to transform this "source" so that I can use it like this:

val clusters = KMeans.train(source, numClusters, numIterations)

I get an error like:

type mismatch;
 found   : org.apache.spark.rdd.RDD[(org.apache.hadoop.io.Text, org.apache.hadoop.io.MapWritable)]
 required: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]
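KMeans.train expects an RDD[Vector], so the (Text, MapWritable) pairs have to be mapped to vectors first. A minimal sketch, assuming each Elasticsearch document carries numeric fields (the field names "x" and "y" below are placeholders):

    import org.apache.hadoop.io.Text
    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    // Field names to feed into the model; replace with the real numeric fields.
    val numericFields = Seq("x", "y")

    // MapWritable behaves like a java.util.Map[Writable, Writable], so each
    // field is looked up by a Text key and parsed into a Double.
    val vectors = source.map { case (_, doc) =>
      Vectors.dense(numericFields.map(f => doc.get(new Text(f)).toString.toDouble).toArray)
    }.cache()

    val clusters = KMeans.train(vectors, numClusters, numIterations)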

Scala REPL from cmd: The syntax of the command is incorrect

18 July 2016 - 7:11am

My Scala (2.12) REPL recently stopped working. When starting it from cmd, it tells me the syntax of the command is incorrect. The only system changes I can think of were installing IntelliJ 2016.2 (which I've tried uninstalling, with no luck) and the standalone "Cmder" (http://cmder.net/) mini version, which shouldn't have changed anything. There could have been Windows updates. I've tried uninstalling Scala (I had 2.12.0-M3) and installing Scala 2.12.0-M5, and it didn't work. I uninstalled 2.12 and installed 2.11.8, and its REPL seems to work fine, so perhaps it's a bug in Scala. I may log an issue with them later if there's nothing to go on here.

The closest thing I found to my problem is "The input line is too long. The syntax of the command is incorrect", but I'm not trying to start an activator project, just the REPL from the command line.

These are the steps I performed (while 2.12 was installed):

I tried setting Java 6 as JAVA_HOME and got:

Exception in thread "main" java.lang.UnsupportedClassVersionError: scala/tools/nsc/MainGenericRunner : Unsupported major.minor version 52.0

which makes sense, as Java 8 is the minimum, so at least it tries to start a Java process and picks up that environment variable.

When I have Java 8 as JAVA_HOME I get: The syntax of the command is incorrect.

C:\Users\tombstone>scala " "
]==[-toolcp] was unexpected at this time.

Any other arguments I tried (including -version) give me "the syntax of the command is incorrect".

C:\Users\tombstone>where scala
C:\Program Files (x86)\scala\bin\scala
C:\Program Files (x86)\scala\bin\scala.bat

which seems correct to me

I'm using Windows 7 Enterprise x64

Scala Futures. Action if one of several fails

18 July 2016 - 7:06am

I have a situation where I want to execute several tasks concurrently as futures, so that if one of them fails, the others still execute. If one fails, I want to log its error. I want my parent thread to be able to tell whether each one has succeeded or not and then perform some action based on that, e.g. if one of the futures failed, print "Hey, one of the futures failed".

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val futureA = Future(doTaskThatReturnsABoolean)
val futureB = Future(doTaskThatReturnsABoolean)
val futureC = Future(doTaskThatReturnsABoolean)

futureA.onFailure { case t => println("future A failed: " + t.getMessage) }
futureB.onFailure { case t => println("future B failed: " + t.getMessage) }
futureC.onFailure { case t => println("future C failed: " + t.getMessage) }

if (oneOfTheseFuturesFailed) { // pseudocode: this is the part I don't know how to write
  println("One of the futures failed")
  throw new someNewError
}

If any or all of the futures fail, I want their stack traces logged, but I don't want my whole program to error until all the futures have had a chance to run. Because they could all fail for different reasons, I don't want to just rethrow their errors; I want to throw a new one.

I've been reading http://docs.scala-lang.org/overviews/core/futures.html but just can't get my head round this one.

I don't want to use Await, as that requires a timeout and I want to give the futures as much time as they need to run. Let's assume for now that they will complete, but in an undetermined timeframe that depends entirely on data size.
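One sketch of that shape (futureA, futureB, and futureC are the futures defined above): lift each future into a Try so that a failure cannot short-circuit the others, combine them with Future.sequence, and react once everything has finished, with no Await and no timeout.

    import scala.concurrent.{ExecutionContext, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.util.{Failure, Success, Try}

    // Turn a Future[A] into a Future[Try[A]] that always succeeds, so sequencing
    // over the group waits for every task regardless of individual failures.
    def lift[A](f: Future[A]): Future[Try[A]] =
      f.map(Success(_)).recover { case t => Failure(t) }

    val all: Future[Seq[Try[Boolean]]] =
      Future.sequence(Seq(futureA, futureB, futureC).map(lift))

    all.foreach { results =>
      results.collect { case Failure(t) => t.printStackTrace() }
      if (results.exists(_.isFailure)) {
        println("One of the futures failed")
        // throw a fresh error here if the surrounding logic needs one
      }
    }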

Structuring multi-module projects in SBT across multiple repositories

18 July 2016 - 6:55am

We are using SBT and Ivy, but are having issues with multi-module projects. I am not sure if there is a better way of doing what we are attempting to do.

We have a parent module, which aggregates the submodules:

lazy val `foo-parent` = (project in file("."))
  .aggregate(`foo-layout`, `foo-environment`, ...)
  .settings(commonSettings: _*)

And some submodules, e.g.:

lazy val `foo-layout` = Project("foo-layout", file("foo-layout"))
  .settings(commonSettings: _*)
  .settings(
    libraryDependencies ++= Seq(...),
    name := "foo-layout"
  )
  .dependsOn(`foo-config`)

We have a core module of libraries, aggregated in foo-parent, all in the same git repo, and "implementation" modules for clients in separate repos. In the implementation modules we used to import the submodules as library dependencies e.g.:

libraryDependencies += "com.foo" %% "foo-layout" % foo_version

The problem with this was that when changing anything in foo-parent you had to run sbt publishLocal manually, or set the modules as "module dependencies" in IntelliJ, which resets to using JARs every time the project is refreshed.

Now we have a symlink "foo-source" pointing to the directory "foo-parent" in which all of the submodules reside, and a somewhat wrong-feeling way of switching between using source and JARs:

val compileFooFromSource = false

lazy val foo_parent: Project = Project("foo-parent", file("foo-source"))
lazy val foo_dependencies = Seq("foo-environment", "foo-layout", ...)

lazy val test: Project = compileFooFromSource match {
  case false =>
    Project("test", file("."))
      .settings(artifactSettings: _*)
      .settings(
        libraryDependencies ++= foo_dependencies.map(mod => "com.foo" % mod % version.value)
      )
  case true =>
    Project("test", file("."))
      .settings(artifactSettings: _*)
      .dependsOn(foo_dependencies.map(s => classpathDependency(s)): _*)
}

This approach relies on symlinks, which could cause problems for Windows developers or CI (and we have to remember to change the source flag back to false before committing, or CI will break). Is this the only way of structuring a multi-module project like this across multiple repos? Is there a more SBT-like way?
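One alternative worth sketching (the repository URL and branch below are placeholders, not from the question): sbt can take a source dependency on a project living in another git repository via ProjectRef, which removes the symlink and lets sbt rebuild the module from source whenever it changes.

    // build.sbt of an implementation repo; sbt clones and builds foo-layout
    // from the referenced repository instead of resolving a published JAR.
    lazy val fooLayout =
      ProjectRef(uri("https://example.com/acme/foo-parent.git#master"), "foo-layout")

    lazy val test = (project in file("."))
      .settings(artifactSettings: _*)
      .dependsOn(fooLayout)

The trade-off is that the git revision is pinned in the build definition rather than toggled with a flag.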

Is it illegal to overload methods with multiple parameter lists? [duplicate]

18 July 2016 - 6:53am


I am trying to define two methods named foo, one taking two parameter lists and the other taking only one. It does not seem to work:

object Bar {
  def foo(a: Int)(b: Int): Int = a + b
  def foo(a: Int): Int = foo(a)(0)
}

error: ambiguous reference to overloaded definition,
both method foo in object Bar of type (a: Int)Int
and  method foo in object Bar of type (a: Int)(b: Int)Int
match argument types (Int)
       def foo(a: Int): Int = foo(a)(0)
                              ^

??? no, they don't ...
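A common workaround, sketched here rather than taken from the question, is to drop the overload and give the second parameter a default value, which keeps both call shapes without the ambiguity:

    object Bar {
      // One method covers both uses: Bar.foo(1) and Bar.foo(1, 2).
      def foo(a: Int, b: Int = 0): Int = a + b
    }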
