Friday, September 30, 2016

Creating presentations ...

In early November I am presenting at both the Dutch JFall and Devoxx BE conferences.
This year I wanted to get started early, so I sat down and ... "OK, what am I going to use to create the presentation? Keynote? Hmm ... If I want to present this at different conferences with a matching style, maybe I need something different."

So, I looked at some Markdown slide tools. RemarkJS looked nice.
I played a bit with it and after one night I had the Devoxx look in a template for RemarkJS.

Jekyll also looked nice, since it lets me split the Markdown and HTML into separate files. That might also make it easier to switch presentation layouts, I thought.
Jekyll is not that difficult either, and it was set up quickly for my sample presentation.
A disadvantage of using Jekyll and RemarkJS together is that they both use double curly braces for variables. Since Jekyll already tries to replace those, variables cannot be used for RemarkJS in this setup.

Finally, I wanted to be able to write the Markdown and interactively see the resulting slides in a browser. I found this blog post which explained a Gulp+BrowserSync setup.

So ... now I have a generic setup in which I can write in Markdown and see the result directly in the browser. The look can easily be customised with CSS, and the style can easily be switched using page variables.

I made the setup available in this git repo so it is freely available for anyone:

Now I can start on the real slides ... ;-)

Wednesday, July 13, 2016

Running Docker on Pine64

In the beginning of this year I backed the Kickstarter for the Pine64. The Pine64 is an interesting little board with a quad-core Allwinner 64-bit CPU, lots of IO, a gigabit network interface and up to 2GB of memory.
And that for less than the price of the latest Raspberry Pi 3: starting at just $15, up to $29 for the 2GB version.

Almost two weeks ago I received the five Pine64s from my pledge. Unfortunately I was not able to do anything with them until last weekend.
Recently the site was launched with lots of info about the board and how to get started.
I started out with the latest DebianBase image, but that did not seem to work; the board somehow did not start properly. The DebianMate image did work, but it is quite big and also has a UI which I do not need. It was quite a hassle to figure out how to disable the LightDM UI.
For those interested in knowing how:
$ systemctl get-default
# shows the current default target; with the UI enabled this is 'graphical.target'
$ systemctl set-default multi-user.target
# booting into the multi-user target disables the graphical UI
$ reboot
Then I had an issue with the screen resolution on my Eizo display, but that was solved by changing the screen size on the monitor itself.
If you'd like to adjust the resolution of the Pine, you can do so in the /boot/uEnv.txt file. By default it is set to '720p60', but you can change it to, for example, '1080p60', '1920x1080p' or '1080p50'. For more info about setting the resolution see Linux Sunxi Display, or see the code for all supported resolutions.

After the initial boot and upgrade of the packages, I checked whether Go was available, since Docker is built in Go. It was, and it installed.
Then I checked out Docker. It is available in the '' package, but unfortunately it does not install. I then looked into how to build Docker myself on armv7, but that's quite a hassle since it seems you need Docker to build a new Docker. :-/
And Etcd, which is needed by Kubernetes (my ultimate goal is to run Kubernetes on the Pine), is also not available for Debian, but that one is easy to build yourself.

Yesterday I tried out the UbuntuBase image, since it is much smaller than the DebianMate image (160 MB compared to 1.2 GB).
This Ubuntu version does have Go, Etcd and Docker packages available. And they all work!

After installing them, I wanted to verify that Docker really works. It was a bit of a search to find a BusyBox image for armhf, but I finally found Container4armhf on the Docker Hub, which provides one.
Then, running
$ docker run container4armhf/armhf-busybox echo "Hello Docker on Pine64"
prints 'Hello Docker on Pine64'. It works! :-D

Another option is to run:
$ docker run armv7/armhf-ubuntu echo "Hello from armv7 ubuntu"
which also prints out the echo string.

So ... Docker on the Pine64 works. Great!
Next steps will be to try to install Kubernetes to run on the Pine64 and try to build a cluster out of all the pines.

Wednesday, June 29, 2016

Talking to my home

My home is mostly automated using KNX. Yes, that’s a bit expensive, but it’s wired, very flexible, it always works and is a truly open standard so I can choose from many manufacturers. I always get my hardware from
I use OpenHab to extend the KNX installation a bit, for a simple but nice UI and to have an easily accessible playground.

Last weekend I started to look into controlling my home by voice. A few weeks ago I received a free Hue kit. After installing the lights and connecting them via the iOS app, they can already be controlled by Siri, but I wanted to see whether I could also control my KNX installation.

I quickly found this OpenHab-HomeKit bridge application, which allows you to control your OpenHab installation via Siri on an iPhone or iPad. The new macOS Sierra release this fall will also make Siri available on the Mac!

Setup was not difficult, but I was not able to get it running on my Pi, which also runs OpenHab, mainly because the bridge is a NodeJS app and I just could not get a NodeJS version on the Pi that worked with the bridge application. So I ended up running it on my Mac Mini server.
The only thing needed in OpenHab was to create a separate sitemap with the items to expose to Siri, since not all items are supported (yet).
After that it was just a matter of trying out several commands to figure out what works in the Dutch language. Controlling the lights, light color, dim level and outlets works fine. Only the roller shutters do not work yet, but that's because a wrong command is sent to OpenHab. It's sending a '0' or '100' value instead of 'UP' or 'DOWN', but I already saw that this can be fixed by adding a rule to OpenHab that translates the wrong command into a valid one.

Here is a short video of the result:

The reason some commands did not work was that, first, I was using a wrong command and, second, Siri only picked up part of the sentence.
Overall it worked really nicely. And, as an added bonus, because I also have an Apple TV, it also works remotely without having to set up anything for that! Not only for controlling the lights, but also for getting information about the house, like the temperature in the living room.

I also briefly looked into Google Voice, but since that API is not open it is not possible to use Google Voice to control OpenHab. I did find a blog post somewhere on how to set up Google Voice with home automation by using IFTTT, but I want my home to work standalone and not be dependent on internet services, so I do not want to duplicate my home into IFTTT. But I do want to try out some geofencing features with IFTTT and OpenHab soon.

Friday, January 29, 2016

Testing Neo4j 3 with embedded server with Bolt

Together with my colleague Stijn van Drunen, I'm working on a project in which we're using Neo4j 3, since we want to use Neo4j's new binary Bolt driver.

See Stijn's blogpost on how to use an embedded Neo4j to run integration tests where the application uses the new Bolt driver.

Friday, December 11, 2015

Building JavaCPP presets for OpenCV 3 for Raspberry Pi (linux-arm)

Native image processing

To speed up image processing in a Java/Scala application on a Raspberry Pi, we resorted to 'opencv'. OpenCV already provides a native Java binding. The disadvantage of this, however, is that you must manually load the native libraries in your Java application.

JavaCV/JavaCPP to the rescue!

JavaCV is a wrapper that uses the JavaCPP Presets, such as the one for OpenCV.
JavaCPP provides a way to use OpenCV without manually adding code to load the native library. It does this via a static initialiser in the JavaCPP classes, which makes sure the correct native library is loaded by the JVM. JavaCPP supports several C libraries, among which 'opencv'. The presets provide separate jar files containing the platform-dependent native libraries, for example for macosx, linux-x86, linux-x86_64, windows-x86, windows-x86_64, android-arm and android-x86 (see the JavaCPP presets in Maven Central).

The advantages of JavaCV over the native OpenCV bindings are that:
- JavaCV combines all C libraries in a single jar with a classifier per platform
- it is easy to include platform-specific dependencies in a project
- JavaCV comes with a tool to automatically load the C libraries from the jar (a short sketch of this follows below)
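As a rough sketch of what that automatic loading looks like in practice (a minimal, hypothetical check program; it assumes the opencv preset jar and the matching platform jar are on the classpath):

import org.bytedeco.javacpp.opencv_core.Mat
import org.bytedeco.javacpp.opencv_imgcodecs.imread

object LoadCheck extends App {
  // Referencing the opencv_* preset classes triggers JavaCPP's static initialiser,
  // which extracts the matching native library from the platform jar and loads it.
  val image: Mat = imread("/tmp/test.png") // any test image will do
  println(if (image.empty()) "could not read image" else s"loaded ${image.cols()}x${image.rows()} image")
}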

However, they do not provide a 'linux-arm' build, so you have to build it yourself. The easiest way, I think, is to build it on a real Pi, although this can take several hours.


If you just need the opencv JavaCPP preset, I provide these jars:
Note that these jars probably do not support video processing, since I did not build opencv with any video dependencies.

Note that JavaCPP 1.1 fixes a native-library-loading issue on amd64 Linux systems (virtual machines, Docker images, etc.) where the native libs of the linux-x86_64 packages did not get loaded on an 'amd64' architecture.

Building for the Pi

JavaCPP comes with a 'cppbuild' script to build the opencv sources and create the Java bindings for them, but 'linux-arm' is not supported yet. In the SNAPSHOT version there is some work in progress, but it seems this is only for cross compiling, which currently does not support creating the Java bindings.
So the only solution is to build it on the Pi itself.
The 'regular' opencv sources can be built for the Pi. These libs can then be used to create the javacpp-opencv jar.

In short, these are the steps to take:

  • build opencv on the Pi (building the 'real' opencv project for the Pi is easier than trying to tweak the javacpp/opencv cppbuild script; support for building linux-arm was added in the SNAPSHOT, but only for cross compilation, which does not work with the Java wrappers, see below, and does not run on a real Pi)
  • get opencv sources
  • run cmake to create native make files
  • run 'make -j5' to compile opencv on Pi. Use -j5 to use all cores!
  • run 'make install' to install libs in /usr/local/.. folders.
  • build javacpp for opencv using compiled opencv libs
  • make links to /usr/local's bin, include, lib and share folders in 'javacpp-presets/opencv/cppbuild/linux-arm'.
  • build javacpp jar: 'mvn package'


Some tips:
  • make sure to check out the same version (same git tag) in both the 'opencv' and 'opencv_contrib' source folders.
  • run the build inside 'screen' so it does not stop once you accidentally disconnect.
  • ANT is required to be able to build java wrappers. Set both ANT_HOME and JAVA_HOME.
  • javacpp requires libraries from 'opencv_contrib' (e.g. the 'face' module), so the opencv build must include those modules.

Building OpenCV

sudo apt-get update
We did not need video libraries, but add those when you need them.
sudo apt-get install build-essential cmake pkg-config libpng12-0 libpng12-dev libpng++-dev libpng3 libpnglite-dev zlib1g-dbg zlib1g zlib1g-dev pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-progs 
sudo apt-get install screen
sudo apt-get install ant
in a screen session:
screen -S opencv
OpenCV contrib is needed because of javacpp's 'face' dependency:
git clone https://github.com/opencv/opencv_contrib.git
checkout correct version to build (same as opencv version!)
cd opencv_contrib
git checkout 3.0.0
cd ..
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout 3.0.0

ANT_HOME and JAVA_HOME needed to be able to build Java Wrappers and Java Tests.
It's assumed Java 8 is already installed on your Pi!
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export ANT_HOME=/usr/share/ant
Create a dir to build into:
mkdir build
cd build
All build options are mentioned in this presentation (slide 12):
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_EXAMPLES=OFF -D BUILD_PNG=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules -D BUILD_opencv_face=ON -D BUILD_opencv_ximgproc=ON -D BUILD_opencv_optflow=ON ..
Check that the Java section of the cmake output shows:
--   Java:
--     ant:                         /usr/bin/ant (ver 1.8.2)
--     JNI:                         /usr/lib/jvm/java-8-oracle/include /usr/lib/jvm/java-8-oracle/include/linux /usr/lib/jvm/java-8-oracle/include
--     Java wrappers:               YES
--     Java tests:                  YES
Start compiling:
make -j5 (number of cores + 1)
Install bins, libs, includes, shares in /usr/local to be used by javacpp
sudo make install 

Building JavaCPP

Use the natively built OpenCV libs.
JavaCPP needs Face.hpp, which is in opencv_contrib!
cd  // to home dir
git clone https://github.com/bytedeco/javacpp-presets.git
cd javacpp-presets
checkout the version to build
git checkout 1.1
install main pom in repo (might not really be necessary)
mvn install -N 
make links in javacpp-presets/opencv/cppbuild/linux-arm to /usr/local/bin, /usr/local/share, /usr/local/lib, /usr/local/include
cd opencv/cppbuild/linux-arm
ln -s /usr/local/bin bin
ln -s /usr/local/include include
ln -s /usr/local/lib lib
ln -s /usr/local/share share
back to javacpp-presets/opencv folder
cd ../.. 
Build the javacpp library using the previously built opencv libs:
mvn clean package
Once completed, the 'target' folder contains an 'opencv-linux-arm.jar'.
mv target/opencv-linux-arm.jar target/opencv-3.0.0-1.1-linux-arm.jar
Install jar in local repo or Nexus using these artifact details (also see 'target/maven-archive/'):
groupId: org.bytedeco.javacpp-presets
artifactId: opencv
version: 3.0.0-1.1
classifier: linux-arm
packaging: jar

Now use this jar in your project using details as above.
Don't forget the classifier!!
or for SBT in build.sbt:
// Platform classifier for native library dependencies for javacpp-presets
lazy val platform = org.bytedeco.javacpp.Loader.getPlatform
in libraryDependencies:
"org.bytedeco" % "javacpp" % javacppVersion,
"org.bytedeco" % "javacv" % javacppVersion excludeAll(ExclusionRule(organization = "org.bytedeco.javacpp-presets")),
"org.bytedeco.javacpp-presets" % "opencv" % ("3.0.0-"+javacppVersion) classifier "",
"org.bytedeco.javacpp-presets" % "opencv" % ("3.0.0-"+javacppVersion) classifier platform,
"org.bytedeco.javacpp-presets" % "opencv" % ("3.0.0-"+javacppVersion) classifier "linux-arm",
in project/plugins.sbt
// `javacpp` is packaged with maven-plugin packaging; we need to make SBT aware that it should be added to the classpath.
classpathTypes += "maven-plugin"
// javacpp `Loader` is used to determine the `platform` classifier in the project's `build.sbt`.
// We define dependency here (in folder `project`) since it is used by the build itself.
libraryDependencies += "org.bytedeco" % "javacpp" % "1.0"

Note on cross compiling:

OpenCV can also be cross compiled. Philipz's Docker containers help a lot in setting up such an environment. The problem, however, is that you also want to build the Java wrappers. Because the build architecture is set to 'arm', the build will also look for an 'arm' version of the JVM and AWT libraries, whereas you must use the native (Ubuntu) Java instead. I was not able to get a cross compile working yet.
I had to hack /usr/local/cmake-2.8/Modules/FindJNI.cmake to make sure it found the 'amd64' Java libraries (it might be possible to set the JAVA_AWT_LIBRARY and JAVA_JVM_LIBRARY vars instead), so cmake would generate a build file that also builds the Java wrappers and tests.
So, for now, this opencv-3.0.0-1.1-linux-arm.jar was still built on a real Pi (v2) and yes, it took hours to complete. ;-)

Friday, September 4, 2015

Chaining rejection handlers

Serving resources via Spray is as easy as using the 'getFromFile' directive. If you want to fall back to an alternative when the file is not available, you can define a RejectionHandler to serve an alternative file.

In a web application I wanted to try some alternative names before falling back to a default image. The default solution requires you to nest all RejectionHandlers.
 def alternativesHandler = RejectionHandler {
  case rejection =>
   handleRejections(RejectionHandler {
    case rejection => getFromFile("second-alternative")
   }) {
    getFromFile("first-alternative")
   }
 }
This becomes quite messy very fast and is not very easy to read.
Chaining instead of nesting would improve it quite a bit.
 def alternativesHandler = chaining(RejectionHandler {  
  case rejection => getFromFile("first-alternative")  
 } >> RejectionHandler {  
  case rejection => getFromFile("second-alternative")  
 } >> RejectionHandler {  
  case rejection => getFromFile("third-alternative")
 })
Here is the code that makes this possible. It only needs to be imported where the route is defined. The 'chaining' method just tries each handler in the list; the implicit classes extend RejectionHandler and the handler list with a '>>' method, which allows the handlers to be chained into a list. A short usage sketch follows after the code.
 trait RejectionHandlingChain {
  type RejectionHandlerList = List[RejectionHandler]

  implicit class RejectionHandlerListExt(handler: RejectionHandlerList) {
   def >>(other: RejectionHandler): RejectionHandlerList = handler :+ other
  }

  implicit class RejectionHandlerExt(handler: RejectionHandler) {
   def >>(other: RejectionHandler): RejectionHandlerList = List(handler, other)
  }

  import spray.routing.directives.RouteDirectives.reject
  import spray.routing.directives.ExecutionDirectives.handleRejections

  final def chaining(handlers: RejectionHandlerList): RejectionHandler = RejectionHandler {
   case rejection => handlers match {
    case Nil          =>
     reject(rejection: _*)
    case head :: tail =>
     handleRejections(chaining(tail)) {
      head(rejection)
     }
   }
  }
 }

 object RejectionHandlingChain extends RejectionHandlingChain
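
To use it, import the object where the route is defined and wrap the route with the 'handleRejections' directive. Here is a rough usage sketch (the path and file names are just for illustration, and it assumes the route lives in something that mixes in spray.routing.Directives or HttpService):

 import RejectionHandlingChain._

 val route =
  path("images" / Segment) { name =>
   handleRejections(alternativesHandler) {
    getFromFile(s"images/$name")
   }
  }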

Wednesday, July 8, 2015

Using Akka Http to perform a Rest call and deserialise json

I have been playing with Akka Streams and Akka Http to create a flow which gets some data from a public Rest endpoint and deserialises the json using Json4s.
Since there are not that many examples yet, and the documentation only has a few, I'm sharing my little app here.

By default Akka Http only supports Spray Json, but fortunately Heiko already created the small akka-http-json library, which supports Json4s and Play Json.

Here is a small code sample showing how to create an Akka Streams flow and run it. This was just to test calling the Rest endpoint and deserialising the resulting json into a case class. The next step is to extend the flow to do something useful with the retrieved data. I'll be putting it into a time series database called Prometheus, and maybe also into Mongo.

package enphase

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, Uri}
import akka.http.scaladsl.unmarshalling.Unmarshal
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import de.heikoseeberger.akkahttpjson4s.Json4sSupport
import org.json4s.{DefaultFormats, Formats, Serialization, jackson}

import scala.concurrent.{Await, Future}

/**
 * Enphase API Client which gets Enphase data and puts it into InfluxDB:
 * - Start with an HTTP GET request to the Enphase API.
 * - Transform the response into json.
 * - Transform the json into time series data.
 * - Put the time series data into InfluxDB using an HTTP POST request.
 */
object Client extends App with Json4sSupport {

  val systemId = 999999 // replace with your system id
  val apiKey   = "replace-with-your-api-key"
  val userId   = "replace-with-your-user-id"

  val systemSummaryUrl = s"""/api/v2/systems/$systemId/summary?key=$apiKey&user_id=$userId"""
  println(s"Getting from: $systemSummaryUrl")

  implicit val system = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit val formats: Formats = DefaultFormats
  implicit val jacksonSerialization: Serialization = jackson.Serialization

  val httpClient = Http().outgoingConnectionTls(host = "api.enphaseenergy.com") // Enphase API host

  import system.dispatcher

  private val flow: Future[SystemSummary] = Source.single(HttpRequest(uri = Uri(systemSummaryUrl)))
      .via(httpClient)
      .mapAsync(1)(response => Unmarshal(response.entity).to[SystemSummary])
      .runWith(Sink.head)

  import concurrent.duration._

  val start = System.currentTimeMillis()
  val result = Await.result(flow, 15 seconds)
  val end = System.currentTimeMillis()

  println(s"Result in ${end-start} millis: $result")
}

/**
 * Entity for the system summary json:
 * {
 * "current_power": 3322,
 * "energy_lifetime": 19050353,
 * "energy_today": 25639,
 * "last_report_at": 1380632700,
 * "modules": 31,
 * "operational_at": 1201362300,
 * "size_w": 5250,
 * "source": "microinverters",
 * "status": "normal",
 * "summary_date": "2014-01-06",
 * "system_id": 123
 * }
 */
case class SystemSummary(system_id: Int, summary_date: String, status: String, source: String,
                          size_w: Int, operational_at: Long, modules: Int, last_report_at: Long,
                          energy_today: Int, energy_lifetime: Long, current_power: Int)

At first I could not get Heiko's Unmarshallers working, so I wrote my own Unmarshaller, which is not that difficult when looking at some other implementations. The problem was a very vague error saying something was missing, but not exactly what. Today I figured out it was just missing one of the required implicit arguments, the Json4s Serialization, and then it all worked nicely.

But here is how to implement a custom Unmarshaller which unmarshals an HttpResponse instance:

  implicit def responseUnmarshaller[T : Manifest]: FromResponseUnmarshaller[T] = {
    import concurrent.duration._
    import enphase.json.Json4sProtocol._
    import org.json4s.jackson.Serialization._

    new Unmarshaller[HttpResponse, T] {
      override def apply(resp: HttpResponse)(implicit ec: ExecutionContext): Future[T] = {
        resp.entity
            .toStrict(1 second)
            .map(_.data.utf8String)
            .map(json => { println(s"Deserialized to: $json"); json })
            .map(json => read[T](json))
      }
    }
  }
The only change needed in the application to use this unmarshaller is to replace the 'mapAsync' line with:

      .mapAsync(1)(response => Unmarshal(response).to[SystemSummary])

The project build.sbt contains these dependencies:

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  "com.typesafe.akka" % "akka-http-experimental_2.11" % "1.0-RC4",
  "de.heikoseeberger" %% "akka-http-json4s" % "0.9.1",
  "org.json4s" %% "json4s-jackson" % "3.2.11",
  "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"
)

Happy Akka-ing.