scala on stackoverflow

most recent 30 from stackoverflow.com 2016-07-24T02:56:33Z

Scala Syntax Partial

18 July 2016 - 6:24pm

I had a question about why, when I create a partially applied function, I can't immediately invoke it. Both res6 and res8 have the same type (Function1), so I'm not sure why res7 works (immediately invoking it) while what would be res9 fails:

scala> ((x: Int) => x + 1)
res6: Int => Int = <function1>

scala> ((x: Int) => x + 1)(1)
res7: Int = 2

scala> def adder(a: Int, b: Int) = a + b
adder: (a: Int, b: Int)Int

scala> adder(1, _: Int)
res8: Int => Int = <function1>

scala> adder(1, _: Int)(1)
<console>:12: error: Int does not take parameters
       adder(1, _: Int)(1)
                       ^

scala> (adder(1, _: Int))(1)
res10: Int = 2
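A likely explanation (my reading, not part of the original question): the _: Int placeholder expands to a function at the boundary of the smallest enclosing expression, so adder(1, _: Int)(1) is typed as the method adder applied to two argument lists, and its Int result cannot take (1). Extra parentheses delimit the expansion, forcing the partial application to become a Function1 first:

// the expected type (or the extra parentheses) delimits the placeholder expansion
val f: Int => Int = adder(1, _: Int)
f(1)                  // Int = 2

(adder(1, _: Int))(1) // parentheses force partial application first: Int = 2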

Validating entire string as json using jackson

18 July 2016 - 6:16pm

I'm using the following code to parse JSON:

new com.fasterxml.jackson.databind.ObjectMapper().readTree(jsonStr)

But it parses the following string successfully, since it apparently stops processing once it finds a valid tree:

{ "name": "test", }, "field": "c" }

Is there a way to make it consider the entire string or stream passed? I couldn't find an appropriate option in DeserializationFeature.

Note that the solution doesn't have to involve Jackson. If there's a simpler way to do this in Java or Scala, that'll suffice too.
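One approach (a sketch, not from the question): drive readTree through an explicitly created JsonParser, then verify that no tokens remain after the first complete document:

import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}

// parse one tree, then fail if anything follows it
def parseStrict(jsonStr: String): JsonNode = {
  val mapper = new ObjectMapper()
  val parser = mapper.getFactory.createParser(jsonStr)
  val tree: JsonNode = mapper.readTree(parser)
  if (parser.nextToken() != null) // null means end of input was reached
    throw new IllegalArgumentException("Trailing content after JSON document")
  tree
}

(Newer Jackson releases also added DeserializationFeature.FAIL_ON_TRAILING_TOKENS, which does this declaratively.)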

Define Compound Task in SBT

18 July 2016 - 4:23pm

I want to define a compound task in sbt so that all the tasks run in my CI job can be executed with a single command. For example, at the moment I am running:

clean coverage test scalastyle coverageReport package

However I'd like to just run

ci

Which would effectively be an alias for all of the above tasks. Furthermore, I'd like to define this in a Scala file (as opposed to build.sbt) so I can include it in an existing common Scala plugin, making it available to all my projects.

So far (after much reading of the docs) I've managed to get a task that depends just on scalastyle by doing:

lazy val ci = inputKey[Unit]("Runs all tasks for CI")

ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  println("In the CI task")
}

however if I attempt to add another task (say the publish task) e.g:

ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  val publishResult = (publish in Compile).evaluated
  println("In the CI task")
}

this fails with:

[error] [build.sbt]:52: illegal start of simple expression
[error] [build.sbt]:55: ')' expected but '}' found.

My first question is whether this approach is indeed the correct way to define a compound task.

If this is the case, then how can I make the ci task depend on all the tasks mentioned?
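One commonly used alternative (a sketch, assuming the scoverage and scalastyle plugins are already on the build classpath of every project): define a command alias in an AutoPlugin, so every project picks up a ci command that runs the tasks in sequence:

import sbt._

// a shared plugin: every project that loads it gets a `ci` command
object CiPlugin extends AutoPlugin {
  override def trigger = allRequirements
  override def buildSettings: Seq[Setting[_]] =
    addCommandAlias("ci", ";clean;coverage;test;scalastyle;coverageReport;package")
}

A command alias re-parses each step as if it were typed at the sbt prompt, which sidesteps the .evaluated wiring the error above comes from.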

How to submit request MapReduce job from Web Site using AngularJS

18 July 2016 - 4:11pm

We want to create an application where a user can submit a MapReduce request and pass input parameters from the UI, such as region and date range. The flow would be:

  1. The user submits a MapReduce request; the request details are saved in MySQL (RDBMS) and the request is submitted to the Hadoop/Spark system.
  2. On the dashboard UI, the user can see their list of requests that are still in progress.
  3. Once the MapReduce job completes, it updates the request details in MySQL.
  4. The dashboard then shows the request status as completed.
  5. On clicking a request, a WebApi call is made from AngularJS to read the MapReduce results from the Hadoop/Spark system.

Is that flow possible? We can easily integrate with MySQL through WebApi, but how can we submit a Spark MapReduce request and read its results from AngularJS?
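One possible shape for the submission piece (a hedged sketch, not from the question): put a REST job server such as Apache Livy in front of Spark, and have the WebApi backend that AngularJS already talks to POST batches to it. The host, jar path, class name, and arguments below are illustrative assumptions:

import scalaj.http.Http

// submit a Spark job through Livy's /batches endpoint
val response = Http("http://livy-host:8998/batches")
  .postData("""{"file":"hdfs:///jobs/report-job.jar","className":"com.example.ReportJob","args":["region=EU","from=2016-01-01","to=2016-06-30"]}""")
  .header("Content-Type", "application/json")
  .asString

// the JSON response carries a batch id that the dashboard can poll for status
println(response.body)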

How to check isEmpty on Column Data Spark scala

18 July 2016 - 3:52pm

My data looks like :

[null,223433,WrappedArray(),null,460036382,0,home,home,home]

How do I check whether col3 is empty when querying in Spark SQL? I tried to explode, but when I do that the rows with empty arrays disappear. Can someone suggest a way to do this?

I tried :

val homeSet = result.withColumn("subscriptionProvider", explode($"subscriptionProvider"))

where subscriptionProvider (the WrappedArray() column) holds an array of values, but some arrays can be empty. I need to keep the rows where subscriptionProvider is null or empty, as well as the rows where the subscriptionProvider array contains "Comcast".
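A sketch of one way to keep those rows (assuming result is the DataFrame from the question): filter with the array functions instead of exploding, since explode drops rows whose array is empty:

import org.apache.spark.sql.functions.{array_contains, col, size}

// keep rows whose array is null or empty, or contains "Comcast"
val homeSet = result.filter(
  col("subscriptionProvider").isNull ||
  size(col("subscriptionProvider")) === 0 ||
  array_contains(col("subscriptionProvider"), "Comcast")
)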

Mismatching keystrokes IntelliJ IDEA after installing Scala plugin

18 July 2016 - 2:11pm

I just installed the Scala Plug-in for IntelliJ IDEA Community Edition (version:2016.1.3).

After the installation, when I type the following symbols, different symbols appear in the editor inside class definitions:

" becomes @ ; becomes $ , becomes ? . becomes / ? becomes &

Similarly, a few more characters get printed like this.

The funny thing is, this happens inside class definitions and inside build.sbt files (it could happen in other files too). Outside class definitions, the keystrokes work fine.

How can I rectify this?

This happens in .java files and also in Scala files.

I have installed Scala 2.11.8 and SBT 0.13.12. Using Ubuntu 14.04 LTS with Oracle Java Version 8 Update 61.

Does anyone know why I get this error?

18 July 2016 - 1:41pm

Error:scalac: missing or invalid dependency detected while loading class file 'SpecificationStructure.class'. Could not access type ScalaObject in package scala, because it (or its dependencies) are missing. Check your build definition for missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.) A full rebuild may help if 'SpecificationStructure.class' was compiled against an incompatible version of scala.
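A hedged guess at the cause, since the question gives only the error: SpecificationStructure is a specs2 class, and scala.ScalaObject was removed after Scala 2.10, so this usually means a specs2 artifact built for an old Scala version is on the classpath. Declaring the dependency with %% lets sbt pick a binary-compatible build (the version below is illustrative):

// in build.sbt: %% selects the artifact matching scalaVersion
libraryDependencies += "org.specs2" %% "specs2-core" % "3.8.4" % Test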

How can I create a dataframe out of a nested JSON?

18 July 2016 - 1:10pm

So my initial schema looks like this:

root
|-- database: String
|-- table: String
|-- data: struct (nullable = true)
| |-- element1: Int
| |-- element2: Char

The show() result has a single data column that renders unhelpfully, e.g. [null,2,3].

What I want to do is make the data struct into its own DataFrame, so the nested JSON's data is spread out among columns. But something like

val dfNew = df.select("data")

only really gets me the same gross column when I use show(), instead of the multiple columns specified by the schema (element1, element2, etc.).

Is there a way to do this?
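A sketch of one way (column names taken from the schema above; the implicits import assumes a SparkSession named spark, or sqlContext on older Spark versions):

import spark.implicits._

// select the struct's fields directly, flattening them into top-level columns
val flattened = df.select($"data.element1", $"data.element2")

// or expand every field of the struct at once
val flattenedAll = df.select($"database", $"table", $"data.*")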

Validate if hazelcast indexes are used

18 July 2016 - 1:03pm

I am using Hazelcast's PredicateBuilder to query Hazelcast maps, with a query on a nested object's attribute. I have created an index on the nested field too in the config file.

EntryObject e = new PredicateBuilder().getEntryObject();
Predicate idPredicate = e.get( "id" ).equal( id );
Predicate predicate = e.get( "rel.id" ).equal( rel.id ).and( idPredicate );
return personMap.values( predicate );

where rel is an object with an id attribute and is itself an attribute of the Person object. The index configuration:

indexes = [
  {
    attribute = id
    isOrdered = false
  },
  {
    attribute = rel.id
    isOrdered = false
  }
]

The correct records are returned, but I want to make sure this query is actually using the index. Is there a way to verify that (an informational message or something)? I worked with DB2 before, where debug-level logging shows such messages. Any help is much appreciated; thanks in advance.
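One possibility, with the caveat that this is an assumption about newer Hazelcast releases (per-index statistics appear on LocalMapStats only from Hazelcast 3.10 onward; older versions expose nothing comparable):

import scala.collection.JavaConverters._

// a hedged sketch: inspect per-index hit counts on the member that owns the map
val indexStats = personMap.getLocalMapStats.getIndexStats.asScala
indexStats.foreach { case (name, stats) =>
  println(s"index $name: queries=${stats.getQueryCount}, hits=${stats.getHitCount}")
}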

Scala slick perform complex sql query

18 July 2016 - 12:20pm

I'm a new Slick user and I'm having trouble creating a Slick query. I need to perform this SQL query with Slick:

SELECT COUNT(dd.definition_id) accuracy, d.id, d.subject_id, d.creator_id, d.active
FROM definition_detail dd
INNER JOIN definition d ON dd.definition_id = d.id
WHERE dd.value_id IN (1,2,3)
GROUP BY dd.definition_id;

I tried to do it by myself but I managed to create only this:

db.run((definition join definitionDetailDAO.definitionDetail on ((d, dd) => d.id === dd.definition_id))
  .map { case (d, dd) => (d, dd) }
  .filter(_._2.value_id inSet valuesSeq)
  .map { case (d, dd) => (d.id, d.subject_id, d.creator_id, d.active) }
  .result)

Which corresponds to this query:

SELECT d.id, d.subject_id, d.creator_id, d.active
FROM definition_detail dd
INNER JOIN definition d ON dd.definition_id = d.id
WHERE dd.value_id IN (1,2,3);

Can anyone help me work out the grouped version?

P.S. I'm using these dependencies:

libraryDependencies += "com.typesafe.play" %% "play-slick" % "2.0.0"
libraryDependencies += "com.typesafe.play" %% "play-slick-evolutions" % "2.0.0"
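A sketch of what the grouped version might look like (untested; the table and column names are assumed from the question's SQL, and group.length compiles to a COUNT):

val query = definitionDetailDAO.definitionDetail
  .filter(_.value_id inSet valuesSeq)
  .join(definition).on(_.definition_id === _.id)
  .groupBy { case (dd, d) => (d.id, d.subject_id, d.creator_id, d.active) }
  .map { case ((id, subjectId, creatorId, active), group) =>
    (group.length, id, subjectId, creatorId, active)
  }

db.run(query.result)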

How to globally order multiple ordered observables in Monix

18 July 2016 - 11:50am

Suppose I have multiple iterators that are each ordered. How would I merge these iterators while globally ordering them (e.g. [(1,3,4), (2,4,5)] -> [1,2,3,4,4,5]) using Monix?
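As far as I know, Monix's merge interleaves sources non-deterministically, so there is no built-in globally ordered merge; one option is to merge at the iterator level and only then wrap the result in an Observable. A plain-Scala sketch of the ordered merge:

// merge already-sorted iterators into one globally sorted iterator
def mergeSorted[T: Ordering](iters: Seq[Iterator[T]]): Iterator[T] =
  new Iterator[T] {
    private val heads = iters.map(_.buffered)
    def hasNext: Boolean = heads.exists(_.hasNext)
    def next(): T = heads.filter(_.hasNext).minBy(_.head).next()
  }

mergeSorted(Seq(Iterator(1, 3, 4), Iterator(2, 4, 5))).toList
// List(1, 2, 3, 4, 4, 5)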

install scala 2.12 mac with homebrew

18 July 2016 - 11:48am

I'd like to install the latest version of Scala (version 2.12.0-M5) on my Mac using Homebrew. Is that possible?

When I run the command, this is what I get:

brew install scala
Warning: scala-2.11.6 already installed

Ruby's instance_eval equivalent in Scala for building DSLs

18 July 2016 - 11:20am

In Ruby, when you design an embedded DSL, a very useful trick is to leverage instance_eval. That way, one can offer special statements within a certain block by implementing them as private methods on a special object. This is very nice for contextual stuff.

For an example see: https://robots.thoughtbot.com/writing-a-domain-specific-language-in-ruby

I was wondering what the closest equivalent would be in a Scala EDSL? More specifically how would I offer parts of the syntax only within a certain context that is delimited by a block?
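One closely related pattern in Scala (a sketch, not the only approach): pass the block a context object whose methods are the special statements, and bring them into scope with an import, so the statements are only usable inside the block:

class TableContext {
  private var rows = Vector.empty[String]
  def row(value: String): Unit = rows :+= value // a "statement" valid only in the block
  def build: List[String] = rows.toList
}

def table(body: TableContext => Unit): List[String] = {
  val ctx = new TableContext
  body(ctx)
  ctx.build
}

// usage: `row` is in scope only inside the block
val t = table { ctx =>
  import ctx._
  row("a")
  row("b")
}

Implicit parameters can hide the explicit ctx handle, bringing this even closer to instance_eval.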

Spark tuning for Elasticsearch - how to increase Index/Ingest throughput

18 July 2016 - 11:19am

I'd like to know the relationship between Spark executors, cores, and the Elasticsearch batch size, and how to tune a Spark job optimally to get better indexing throughput.

I have 3.5B records in Parquet format that I would like to ingest into Elasticsearch, and I'm not getting more than a 20K/s index rate. I sometimes see 60K-70K, but it comes down immediately, and the average is around 15K-25K documents indexed per second.

A little more detail about my input:

  • Around 22,000 files in Parquet format
  • It contains around 3.2B records (around 3TB in size)
  • Currently running 18 executors (3 executors per node)

Details about my current ES setup:

  • 8 nodes, 1 master and 7 data nodes
  • Index with 70 shards
  • Index contains 49 fields (none of them are analyzed)
  • No replication
  • "indices.store.throttle.type" : "none"
  • "refresh_interval" : "-1"
  • es.batch.size.bytes: 100M (I tried with 500M also)

I'm very new to Elasticsearch, so I'm not sure how to tune my Spark job to get better performance.
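For reference, a hedged sketch of the es-hadoop write settings typically tuned for bulk indexing. The values are illustrative starting points rather than recommendations, and parquetDF stands for the DataFrame read from the Parquet files:

import org.elasticsearch.spark.sql._

val esConf = Map(
  "es.nodes"               -> "es-node-1:9200",
  "es.batch.size.bytes"    -> "10mb",   // many moderate bulks often beat very large ones
  "es.batch.size.entries"  -> "10000",
  "es.batch.write.refresh" -> "false"   // don't force a refresh after each bulk write
)

parquetDF.saveToEs("myindex/mytype", esConf)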

Why does Spark piping add zeros to the output?

18 July 2016 - 10:46am

I have some simple Spark code:

object PipeExample extends App {
  val rdd = sc.makeRDD(List("hi", "hello", "how", "are", "you"))
  val pipeRdd = rdd.pipe("/test/src/main/resources/len.sh")
  pipeRdd.collect().foreach(println)
}

It should pipe a list of words to my bash script,

#!/bin/sh
read input
len=${#input}
echo $len

which just prints the length of the input string. But what I get in the output is:

0
2
0
5
3
0
3
3

As you can see, there are zeros in the output. Where do the zeros come from, and how do I get rid of them?
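A probable explanation (my reading, not from the question): pipe launches the script once per partition and feeds every element of that partition to its stdin, but read consumes only the first line, and partitions holding no elements print the length of an empty string, i.e. 0. A sketch of a script that processes every line, so empty partitions simply print nothing:

#!/bin/sh
# read every line piped into this partition, not just the first
while read input; do
  echo ${#input}
done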

Scala returns no annotations for a field

18 July 2016 - 10:37am

I have this:

class Max(val value: Int) extends StaticAnnotation {}

class Child() extends Parent {
  @Max(5) val myMember = register("myMember")
}

abstract class Parent {
  def register(fieldName: String) = {
    val cls = getClass
    import scala.reflect.runtime.universe._
    val mirror = runtimeMirror(cls.getClassLoader)
    val clsSymbol = mirror.staticClass(cls.getCanonicalName)
    val fieldSymbol = clsSymbol.typeSignature.member(TermName(fieldName))
    println(s"${fieldSymbol.fullName} " + fieldSymbol.annotations.size)
  }
}

This does not work: somehow it returns 0 annotations. If instead I put the annotation on the class, then I can read it fine. Why?
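A hedged pointer at the likely cause: for a val, the annotation ends up on the backing field symbol rather than on the getter, and in runtime reflection the field's TermName carries a trailing space. Looking the member up under that name (a sketch, reusing clsSymbol and fieldName from the code above) should surface the annotation:

// the getter has no annotations; the backing field's name ends in a space
val fieldSymbol = clsSymbol.typeSignature.member(TermName(fieldName + " "))
println(fieldSymbol.annotations) // expected to include Max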

How can I extract the values that don't match when joining two RDD's in Spark?

18 July 2016 - 10:26am

I have two RDDs that look like this:

rdd1 = {(12, abcd, lmno), (45, wxyz, rstw), (67, asdf, wert)}
rdd2 = {(12, abcd, lmno), (87, whsh, jnmk), (45, wxyz, rstw)}

I need to create a new RDD that has all the values found in rdd2 that don't have corresponding matches in rdd1. So the created RDD should contain the following data:

rdd3 = {(87, whsh, jnmk)}

Does anyone know how to accomplish this?
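A sketch of two common options (assuming the elements are plain tuples as shown): subtract compares whole elements, while subtractByKey compares keys only:

// whole-tuple comparison: keep rdd2 elements that never appear in rdd1
val rdd3 = rdd2.subtract(rdd1)

// key-based comparison, if only the first field identifies a record
val rdd3ByKey = rdd2.keyBy(_._1).subtractByKey(rdd1.keyBy(_._1)).values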

Refactoring Scala to use search functions as arguments results in Option[Any] issue

18 July 2016 - 10:20am

So originally I had the following. It contains a lot of boilerplate:

private def getCollection(newState: Asset, currentState: Option[Asset]) =
  newState.assetGroup.flatMap(_.collection) match {
    case Some(collection) => Some(collection)
    case None => currentState match {
      case Some(state) => state.assetGroup.flatMap(_.collection)
      case None => None
    }
  }

private def getChildSource(newState: Asset, currentState: Option[Asset]) =
  newState.content.flatMap(_.contract.flatMap(_.childSource)) match {
    case Some(childSource) => Some(childSource)
    case None => currentState match {
      case Some(state) => state.content.flatMap(_.contract.flatMap(_.childSource))
      case None => None
    }
  }

private def getParentSource(newState: Asset, currentState: Option[Asset]) =
  newState.content.flatMap(_.contract.flatMap(_.parentSourceId)) match {
    case Some(childSource) => Some(childSource)
    case None => currentState match {
      case Some(state) => state.content.flatMap(_.contract.flatMap(_.parentSourceId))
      case None => None
    }
  }

So after some work I simplified it to the following:

private def getCurrentField[A](newState: Asset, currentState: Option[Asset],
                               searchFunction: Asset => Option[A]): Option[A] =
  newState.content.flatMap(_.contract.flatMap(_.childSource)) orElse {
    currentState match {
      case Some(state) => searchFunction(state)
      case None => None
    }
  }

val getCollection: Asset => Option[Collection] =
  (state: Asset) => state.assetGroup.flatMap(_.collection)

val getChildSource: Asset => Option[String] =
  (state: Asset) => state.content.flatMap(_.contract.flatMap(_.childSource))

...but this gives me a compiler error:

[warn] <filename_removed>.scala:68: a type was inferred to be `Any`; this may indicate a programming error.
[warn]     currentState match {
[warn]     ^
[error] _:67: type mismatch;
[error]  found   : Option[Any]
[error]  required: Option[A]
[error]     newState.content.flatMap(_.contract.flatMap(_.childSource)) orElse {
[error]     ^
[warn] one warning found
[error] one error found

If I remove the return type from getCurrentField, it compiles and the tests pass, but I still get the compiler warning that a type was inferred to be Any.

What's the best way to deal with type parameters in this situation?
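A sketch of the likely fix (my reading of the error, not from the question): the refactored method still hard-codes the childSource lookup on newState instead of calling the passed-in function, so orElse unifies Option[String] with Option[A] and infers Option[Any]. Applying the function on both sides keeps everything at A:

private def getCurrentField[A](newState: Asset, currentState: Option[Asset],
                               searchFunction: Asset => Option[A]): Option[A] =
  searchFunction(newState) orElse currentState.flatMap(searchFunction)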

Comparing XML in Scala while ignoring certain elements?

18 July 2016 - 10:19am

I'm reading in an XML file in Scala like this:

val ExpectedOdataOutput = XML.loadFile("./src/test/resources/expected-odata-output.xml")

The data in the file has the following structure:

<a:feed xmlns:a="http://www.w3.org/2005/Atom" xmlns:m="http://docs.oasis-open.org/odata/ns/metadata" xmlns:d="http://docs.oasis-open.org/odata/ns/data" m:context="$metadata#blah"> <a:id>http://localhost:8089/</a:id> <a:entry> <a:id>test('1')</a:id> <a:title/> <a:summary/> <a:updated>2016-07-05T13:32:36Z</a:updated> <a:author> <a:name/> </a:author> <a:link rel="edit" href="test('1')"/> <a:category scheme="http://docs.oasis-open.org/odata/ns/scheme" term="#Test"/> <a:content type="application/xml"> <m:properties> <d:ID>id1</d:ID> <d:NAME>name1</d:NAME> <d:URL>url1</d:URL> </m:properties> </a:content> </a:entry> <a:entry> <a:id>test('1')</a:id> <a:title/> <a:summary/> <a:updated>2016-07-05T13:32:36Z</a:updated> <a:author> <a:name/> </a:author> <a:link rel="edit" href="test('1')"/> <a:category scheme="http://docs.oasis-open.org/odata/ns/scheme" term="#Test"/> <a:content type="application/xml"> <m:properties> <d:ID>id2</d:ID> <d:NAME>name2</d:NAME> <d:URL>url2</d:URL> </m:properties> </a:content> </a:entry> </a:feed>

I'm writing a test to verify the output of an API call. The API call gets the XML like so:

val ApiResult: HttpResponse[String] = Http("http://localhost:8089/test").asString
val actual_data = scala.xml.XML.loadString(ApiResult.body)

I'm trying to figure out how to do two things:

  1. Check that the API call and test data from the file match exactly, except for the <a:updated> tag.

  2. Check that the tag names in the API call and test file match, ignoring the content in the tags.

How do I do this? I'm new to Scala and am having trouble making sense of its documentation for XML.
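For the first check, one sketch (whitespace differences may need normalizing, hence the Utility.trim calls): strip the <a:updated> elements from both trees with a rewrite rule, then compare what remains:

import scala.xml.{Elem, Node, NodeSeq, Utility}
import scala.xml.transform.{RewriteRule, RuleTransformer}

// drop every <updated> element; `label` does not include the namespace prefix
val stripUpdated = new RuleTransformer(new RewriteRule {
  override def transform(n: Node): Seq[Node] = n match {
    case e: Elem if e.label == "updated" => NodeSeq.Empty
    case other                           => other
  }
})

val matchesExceptUpdated =
  Utility.trim(stripUpdated(ExpectedOdataOutput)) == Utility.trim(stripUpdated(actual_data))

For the second check, comparing structure only, one could compare the element labels in document order, e.g. (tree \\ "_").map(_.label), on both sides.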

reformat DateTime Array

18 July 2016 - 9:43am

I have generated an Array of dates with the following code using Joda-Time:

import org.joda.time.{DateTime, Period}
import org.joda.time.format.DateTimeFormat
import java.text.SimpleDateFormat

def dateRange(from: DateTime, to: DateTime, step: Period): Iterator[DateTime] =
  Iterator.iterate(from)(_.plus(step)).takeWhile(!_.isAfter(to))

val from = new DateTime(2000, 6, 30, 0, 0, 0, 0)
val to = new DateTime(2001, 6, 30, 0, 0, 0, 0)
val by = new Period(0, 2, 0, 0, 0, 0, 0, 0)
val range = dateRange(from, to, by)
val dateRaw = range.toArray

How can I apply DateTimeFormat.forPattern("yyyyMMdd") to each value in order to get an Array of integers in yyyyMMdd format?

Array[Int] = Array(20000630,20000830,20001030...
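A sketch of one way, reusing dateRaw from above:

val fmt = DateTimeFormat.forPattern("yyyyMMdd")

// format each DateTime as yyyyMMdd, then parse the string to an Int
val dateInts: Array[Int] = dateRaw.map(d => fmt.print(d).toInt)
// Array(20000630, 20000830, 20001030, ...)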
