Message Queue Telemetry Transport (MQTT)

After various encounters and two projects which employed MQTT, I thought it was a good idea to write a summary about my experiences and my point of view. As my new job is very close to the IoT and Industry 4.0 sector, it did not take much time to encounter the hype.

Once upon a time

MQTT is, like AMQP, a messaging protocol for distributed systems, meaning that a node can pass a message to one or multiple other nodes. Having used RabbitMQ and AMQP for years, this principle is nothing seriously new to me, and especially as an RMQ user, a core point of MQTT does not sound very exciting at first glance – its simplicity. In AMQP, I have a whole bunch of functionality to get messages from A to B, including advanced routing and the, pardon me, awesome exchange functionality. MQTT lacks all of them – in exchange for a very small footprint, which makes it very tempting for embedded systems and small devices (such as sensors…) which do not possess much memory.

In the basic version of MQTT, developed in the late 1990s, the functionality was just publishing a message M from a client C to a broker B, which would dispatch it to all clients S0..Sn which had expressed interest in that type of message. A message M was defined as {Topic T, Payload P, QoS Q}, with Topic and Payload represented as strings. A broker B would maintain a set of subscriptions S0..Sn, do just a simple string comparison on the topic of any new message, and dispatch the message to all clients which had subscribed to that topic in the first place. Using a simple QoS mechanism, the publisher of a message was able to specify, on a per-message basis, how the broker should deal with it: fire-and-forget (at most once, and do not care), deliver at least once, or deliver exactly once.

On the transport side, MQTT could be sent over a plain wire (such as an RS-232 / RS-485 line or even teletype) or via TCP/IP – of course unencrypted and unauthenticated.

So, the protocol was plain simple and a truly minimalistic approach to setting up a message exchange – by no means applicable to our needs in the year 2016, and it should be forgotten, right?


My first applications with MQTT

My first experience with MQTT was in 2010, when I used it as a transport protocol for sensor data over an RS-485 line.

The project was a greenfield project, but the controller in the sensor was very limited, and my Atmel ATtiny had much more to care about than parsing, verifying and deserializing fancy XML data, even for the transport layer. Using full-blown frameworks such as Apache Thrift was not an option either, but MQTT was deployed very fast. Using topic names such as “config.sensitivity”, “config.sensorType” and “sensor[0].value” made the evaluation of data very easy, not to say trivial. On a 2-pin cable inside a machine cabinet, I also did not care very much about security, so it just made my life easier and saved me some hours of work implementing my own protocol. Cool, job done.

Then came the second project, which is mainly used for telemetry and event exchange with other devices, using more advanced features of today’s MQTT implementations:

  • Server and client certificates, automatically rolled out during device production and broker provisioning – which means: authentication by certificates. Of course, MQTT also supports plain username/password credentials.
  • Hierarchical topics. Topics are not just flat strings anymore – following the convention of naming them in URL-pattern style, such as devices/<id>/events/updates/caseTemperature, it is possible to subscribe on a hierarchical basis. For example, a subscriber registered for devices/<id>/# (the multi-level wildcard) would also receive messages published to sub-topics such as the one above.
  • Which brings us to access control lists, supported by a subset of the mainstream brokers. For example, why should a sensor be able to subscribe to its own readings? Or, more relevant: why shouldn’t one share an MQTT broker with other clients, as long as they are not likely to interfere?
  • Distributed, scalable brokers.

Which means, one could even use MQTT on the internet nowadays without being fired.



On the client side, I mainly used Eclipse Paho. After writing an AutoCloseable fluent-API facade for it, it was quite pleasant to work with.

Examples (Java)

Connect to a Broker
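A minimal sketch using the plain Paho API – broker URL and credentials are placeholders, not the values from my project:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class ConnectExample {
    public static void main(String[] args) throws MqttException {
        // tcp:// for a plain, unencrypted connection
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                MqttClient.generateClientId(), new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        options.setUserName("user");                  // plain username/password…
        options.setPassword("secret".toCharArray());  // …as mentioned above

        client.connect(options);
        System.out.println("connected: " + client.isConnected());
        client.disconnect();
    }
}
```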

Or with SSL:
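Roughly the same, but with the client certificate from the device rollout loaded into an SSLContext; file name and passwords are placeholders:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class SslConnectExample {
    public static void main(String[] args) throws Exception {
        // load the client certificate rolled out during device production
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(Paths.get("client.p12"))) {
            keyStore.load(in, "changeit".toCharArray());
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, "changeit".toCharArray());

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);

        MqttConnectOptions options = new MqttConnectOptions();
        options.setSocketFactory(sslContext.getSocketFactory());

        // ssl:// instead of tcp://, default port 8883 instead of 1883
        MqttClient client = new MqttClient("ssl://broker.example.com:8883",
                MqttClient.generateClientId());
        client.connect(options);
        client.disconnect();
    }
}
```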

Publishing a message:
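A sketch using the hierarchical topic from above; the device id 42 is made up:

```java
import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PublishExample {
    // publishes a temperature reading to the device's update topic
    public static void publishTemperature(MqttClient client, double celsius)
            throws MqttException {
        MqttMessage message =
                new MqttMessage(Double.toString(celsius).getBytes(StandardCharsets.UTF_8));
        message.setQos(1);         // deliver at least once
        message.setRetained(false);
        client.publish("devices/42/events/updates/caseTemperature", message);
    }
}
```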

Simple publish-receive test:
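A sketch of a round-trip test against a local broker (topic and timeout are arbitrary); the lambda-based subscribe overload is available since Paho 1.1:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PublishReceiveTest {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883",
                MqttClient.generateClientId());
        client.connect();

        CountDownLatch received = new CountDownLatch(1);
        client.subscribe("test/echo", (topic, message) -> {
            System.out.println("received: "
                    + new String(message.getPayload(), StandardCharsets.UTF_8));
            received.countDown();
        });

        client.publish("test/echo",
                new MqttMessage("ping".getBytes(StandardCharsets.UTF_8)));

        if (!received.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("message was not delivered within 5 seconds");
        }
        client.disconnect();
    }
}
```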



On the broker side, I used Moquette (easily embeddable into your own Java applications) and Mosquitto. As there is excellent documentation on both, I will not go into details here.


We have been through many approaches to making computers talk to each other. We have been through the SOAP ceremony and the CORBA hell, and by now we even manage to write beautiful RESTful webservices. We also laughed about Windows NT messaging. Personally, I like to model communication architectures using RabbitMQ, but I also have to confess that often a simple messaging protocol without any advanced features would have done the job.

On the other hand, I do not think much of hypes and of consultants who try to make a fortune selling the next big thing. So, I will just add MQTT to my toolbox, be happy with the current exciting project which employs MQTT for M2M and telemetry, and, as usual, choose the right tool for the job next time again, with or without a fancy title. As MQTT will gain more use-cases precisely because of its simplicity, I do not think this was our last encounter.

Test-Driving the Go Language

When I completed my compiler design studies, I did not feel it was appropriate to give my exercise projects fancy names and put them on GitHub. With this in mind, I was used to questioning any new language which came out of nowhere, and I would wait for a number of readings on my technology radar before considering dealing with it.

With a number of significant readings on Google Go, I decided to give it a pleasant test drive.

The programming language Go, originally a 20% project initiated by Google employees, quickly found a position in my mind – a slick alternative to C/C++ for smaller systems and fast backend services. Being used to Java/Spring Boot and .NET/Nancy, Go could be interesting to me if there was any gap it could fill – maybe a gap I was yet to discover.

To find out, I built a simple echo service which covers some aspects I would usually face on backend services:

  • HTTP Server
  • JSON
  • Commandline Arguments or Environment Variables
  • Logging (ELK friendly, please)
  • Easy to integrate in CI/CD environments

Setting up Go

Setting up Go takes 5 minutes on UMTS, or 4 hours on a crappy hotel WiFi. Download the package, install it, create your workspace and start coding. I used a plain text editor for writing Go code; only afterwards did I notice that there is an awesome set of plugins for Atom.

Right after creating the workspace, one thing becomes obvious – Go loves convention over configuration and implicit declarations. The workspace is organized in a way that feels familiar to Java developers: it works perfectly if you organize all your projects and external dependencies the way Java organizes namespaces.

Java developers will, on the other hand, not be comfortable with the missing concept of classes and the implicit private/public declarations: identifiers starting with a lowercase letter are private to their package, while their capitalized counterparts are exported (public).

The Echo Service

Let’s build a simple echo service which just listens to HTTP requests and echoes them back, while writing comprehensive logs and following a set of practices which appear to be a good idea in Java.

Enter Go..

Main file


Service Handler



or, to install the binary to $GOPATH/bin:
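Assuming the project lives inside the workspace, a plain go install does the trick:

```shell
# compiles the package and drops the binary into $GOPATH/bin
go install
```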

Go generates fat binaries which include all their dependencies, which means that deployment is a little bit easier, despite the bigger files.

Test Run
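A quick smoke test from a second terminal, assuming the default port 8080 and the /echo route from above:

```shell
# POST a small payload and let the service echo it back as JSON
curl -s -X POST -d 'hello' http://localhost:8080/echo
```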



I needed about 3 hours to learn Go, get my workspace running and develop the trivial service above. Not too far away from Spring Boot, when leaving the Java language out of consideration. To me, Go looks more than promising. I will definitely consider it as an alternative to C on my next project which requires a small and slick microservice backend for tasks I can quickly replace with an alternative implementation.

Building an iTunes Web Interface in 2 Hours with Java and Spring Boot

In my mind, the Spring Framework is positioned right next to the “Code less, do more” sign, promising to cut redundancy and boilerplate code and to let the developer focus on the actual work that needs to be done – just the same motivation as JEE. In this tutorial, without intending to encourage any kind of deathmatch between JEE and Spring, I am going to serve the contents of my iTunes repository through a Spring RESTful webservice and render it through an AngularJS frontend. The reason why I chose Spring for this tutorial is one really interesting project from the Spring world – Spring Boot, a framework which allows the developer to build and deploy self-contained standalone webservices very quickly.

Overall System Architecture

Exposing a music library through a web service can be achieved via various approaches; the selection of the initial approach is based on the following considerations:

  • The repository is maintained by iTunes, but talking about separation of concerns, this fact should be transparent to any consumer of the Music Library Access Layer. So we need a kind of interface or facade/ACL around the iTunes specific functionality to avoid an unintended vendor lock-in.
  • Even inside the iTunes domain, the task of acquiring the content does not necessarily mean walking one single path. The most trivial way is taking advantage of the predictable folder structure, but going through the SQLite database or the XML file is also viable and, judging from the features provided, superior to fetching the folder structure.
  • iTunes libraries can (and probably will) be huge, while querying them is expensive. So caching, or importing into a faster database, would be a good idea.

Considering the above, the chosen approach for the scope of this tutorial is:

  • Creating an interface which is ignorant of the type of media storage and extraction strategy used
  • Implementing the trivial file-based approach to get a working result quickly – if the overall solution is good, we will switch to a better one down the road.

From the decision above, the following modules are derived:

  1. Library Reader – Reads the Library and provides the content through a standard interface
  2. Web Service – Exposes the Library Reader through a RESTful webservice
  3. HTML5/JavaScript Frontend – Let there be humans!

Basic Project Setup

I use Maven for the task of collecting and wiring dependencies; however, the same could be achieved with any other build and dependency management solution (such as Gradle) as well.

I will be using Spring Boot’s “fat jar” approach, so the project will compile to a self-contained jar file that includes all dependencies, including an embedded Tomcat – so there is no need to set up a dedicated application server. Building a .war file for any dinosaur application server can be achieved with just one little tweak in the Maven configuration.
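For reference, the fat-jar packaging only needs the Spring Boot Maven plugin in the pom.xml:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
```

The .war tweak mentioned above consists of changing the packaging to war and marking the embedded Tomcat starter as provided.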

The basic version which I describe in this tutorial uses the following technology stack:

  • Spring Boot Web Starter
  • Spring Boot HATEOAS Starter (love Richardson Maturity Model)
  • Spring Boot Starter AOP (Aspect oriented Programming Support)
  • Spring Security for user authentication
  • Lombok (as I also develop in C#, I was a little jealous of the property system)
  • JSONDoc for automatic API documentation and a test client
  • A home-brewed WADL controller to provide a WADL file for testing suites, such as SoapUI
  • Spring Unit Testing support
  • Mockito for mocking Components (outside of Spring)
  • Angular.js for the JavaScript Frontend
  • Angular-material for the JavaScript User Interface

The full version (introduced in the next 2-hour sessions) also uses:

  • Neo4J Graph Database
  • Neo4J Spring support
  • Spring Boot Actuator for Health Monitoring
  • Logstash, Elasticsearch and Kibana for Log analysis

My weapons of choice concerning the general toolchain are:

  • Netbeans IDE (8.1)
  • Maven
  • Jenkins CI
  • SonarQube Code Inspection
  • Node, Bower and Gulp. Stop clinging to yesterday, JavaScript!
  • Git and Phabricator for versioning and housekeeping.


iTunes Interface

In iTunes, we are mainly dealing with Artists, Albums/Compilations and Tracks. For the purpose of retrieving those entities, a possible minimal interface is:
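A sketch of such an interface – the names are mine, and the entity types are reduced to bare stubs here so the snippet stands on its own:

```java
import java.util.List;

// Entity stubs – in the real project these are proper value objects
// (see the note on Lombok and Immutables below).
class Artist { String name; }
class Album { String title; }
class Track { String title; int number; }

// The storage-agnostic contract: no iTunes-specific type leaks through here.
public interface MusicLibraryReader {
    List<Artist> getArtists();
    List<Album> getAlbums(Artist artist);
    List<Track> getTracks(Album album);
}
```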

Sitting in front of a C# IDE every day as well, I appreciate properties and class initializers for the great job they do in cleaning up code from unnecessary IDE-generated bloat – we really should deprecate mandatory getters and setters in 2015. In plain Java 8, there is no such feature available, but there are libraries which provide it. I use Lombok and Immutables; usually I prefer Immutables, because it does not cause issues when using Spring AOP in the same project.

Expected Behavior of the repository reader:
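As a sketch, the core guarantee can be written down as plain assertions; the reader stand-in below is hypothetical so the snippet runs on its own:

```java
import java.util.Collections;
import java.util.List;

public class RepositoryReaderExpectations {

    // Stand-in for the repository reader.
    interface ArtistSource {
        List<String> artists();
    }

    public static void main(String[] args) {
        ArtistSource emptyLibrary = Collections::emptyList;

        // An empty or missing library yields an empty list – never null,
        // and never an exception.
        List<String> artists = emptyLibrary.artists();
        if (artists == null || !artists.isEmpty()) {
            throw new AssertionError("an empty library must yield an empty list");
        }
        System.out.println("expectations hold");
    }
}
```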



The webservice consumes the Media Library Reader and provides an HTTP/REST interface. With just a little Spring power, this task is quite easy.

Webservice Controller

In Spring, a class has to follow a simple pattern in order to become a REST service – adding the @RestController annotation suffices. Together with some documentation for my favorite auto-documentation tool, and with the Javadoc header stripped, the class below does the task of exposing the artist list of a provided music repository to any REST client.
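The skeleton of such a controller, with the JSONDoc annotations omitted, looks roughly like this – the class name is mine, and the route is reconstructed from the log output further down:

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/v1/artists")
public class ArtistController {

    private final MusicLibraryReader library;

    // Spring resolves this constructor dependency
    // (see "Initializing the Repository Reader" below)
    @Autowired
    public ArtistController(MusicLibraryReader library) {
        this.library = library;
    }

    // GET /api/v1/artists – the artist list as JSON
    @RequestMapping(method = RequestMethod.GET)
    public List<Artist> getArtists() {
        return library.getArtists();
    }
}
```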

Keeping JSONDoc annotations and actual service code in the same class might raise concerns – one could even extract an interface from the class and put the entire documentation there. This did not happen here, because:

  • I think that documentation and upper level service belong together – as close as possible
  • There are more annotations to be put here later – having them all in one place might be a good idea
  • Why not feel the urge to modularize the RestControllers themselves as soon as possible?

The @ApiVersion is a qualifier, which will be used for client-side version selection later.

Logging Aspects

The controller should not necessarily have to deal with the environment-specific way of logging – it simply is not its concern. Thanks to Spring AOP and the Pointcut API, a basic logging aspect can be implemented as below – Spring AOP will kindly intercept the requests matching the AspectJ expression in the pointcut annotations.
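A minimal version of such an aspect – the pointcut package is illustrative, and the key=value log layout matches the log sample further down:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class MusicApiLoggingAspect {

    private static final Logger LOG =
            LoggerFactory.getLogger(MusicApiLoggingAspect.class);

    // intercepts every public controller method matching the expression
    @Around("execution(public * de.example.musicapi.controller..*(..))")
    public Object logCall(ProceedingJoinPoint joinPoint) throws Throwable {
        Object result = joinPoint.proceed();
        // key=value layout is a light snack for the logstash kv{} filter
        LOG.trace("method={} message=completed",
                joinPoint.getSignature().toShortString());
        return result;
    }
}
```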

HATEOAS Support (the glory of REST)

In the Spring Boot/HATEOAS tutorial, the assembly of the resource is implemented in the controller. This might be fine for smaller projects, such as our iTunes server, but I would feel guilty of violating the Single Responsibility Principle if I actually took this approach. Fortunately, Spring provides an interface for entity-to-resource conversion, so I just implement the adapters outside and let Spring autowire them into my controller.

For example, assembling a HATEOAS compliant resource from an instance of Artist could be implemented as:
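A sketch against the Spring HATEOAS API of the time; the controller class and its getter are assumptions:

```java
import static org.springframework.hateoas.mvc.ControllerLinkBuilder.linkTo;
import static org.springframework.hateoas.mvc.ControllerLinkBuilder.methodOn;

import org.springframework.hateoas.Resource;
import org.springframework.hateoas.ResourceAssembler;
import org.springframework.stereotype.Component;

@Component
public class ArtistResourceAssembler
        implements ResourceAssembler<Artist, Resource<Artist>> {

    @Override
    public Resource<Artist> toResource(Artist artist) {
        // wrap the entity and add a self link pointing back to the controller
        return new Resource<>(artist,
                linkTo(methodOn(ArtistController.class).getArtists())
                        .withSelfRel());
    }
}
```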

As the assembler is annotated with @Component, it is trivial to autowire the desired implementation of the resource assembler using Spring’s dependency injection.

Initializing the Repository Reader

Again, Spring offers many ways to wire a working instance of the repository reader to the webservice controller. In this example, I will take the direct approach of providing a builder for a reader instance, which just returns an iTunes file-based reader. As it is annotated with @Configuration and @Bean, Spring will use the getLibraryReader() method to resolve the dependency the REST controller states during construction.
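A sketch of that configuration class – the reader implementation name and the library path are illustrative:

```java
import java.nio.file.Paths;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LibraryReaderConfiguration {

    // Returns the trivial file-based reader for now; swapping in the
    // SQLite- or XML-based reader later only touches this method.
    @Bean
    public MusicLibraryReader getLibraryReader() {
        return new ITunesFileLibraryReader(
                Paths.get(System.getProperty("user.home"), "Music", "iTunes"));
    }
}
```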


Run it

After a Maven clean build and run, the server is waiting for requests. Using a plain HTTP client or a full-blown test suite, such as SoapUI, we can retrieve artist, album and track lists:
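For example, with the default port and the URI taken from the log line below:

```shell
# fetch the artist list; albums and tracks work the same way
curl -s http://localhost:8080/api/v1/artists
```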

The logfile is also very concise – a fantastic light snack for the logstash kv{} filter:

(timestamp) TRACE 9192 — [m3potp:1] d.m.e.m.m.MusicApiLoggingAspect: client=0:0:0:0:0:0:0:1 http.status=200 method=GET uri=/api/v1/artists, message=found 771 artists


The service is now able to expose a media library. In the next part, we will add Spring Boot Actuator for health monitoring and Spring Security for authentication.

Java8 Streams Part 1: Performance considerations

Java 8 streams, together with lambda expressions, added a lot of power and comfort to the Java programming language. Being a Scala and C# developer as well, neither concept is new or uncommon to me, but I welcome the fact that Java is moving towards functional programming while staying universal at the same time.

In this first post in a series on Java Streams, I will look into basic performance considerations.

More than syntactic sugar

Code that uses the streams and lambda language features is, in most cases, much more beautiful and comprehensible than its counterparts relying on imperative code and anonymous classes. One could be tempted to simply convert all the “uncool” loops from imperative to functional code and end up with code that cherry-picks the best concepts from both worlds – but what about the performance?

The test case

In this test case, we investigate a piece of code that performs a simple operation on a set of Double values with a cardinality of 100,000. Our code has the simple task of computing the sign flag of the tangent value of each item. In imperative programming, a possible approach would be:
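A reconstructed sketch of the imperative variant; class and method names are mine:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class TangentSignsImperative {

    // For each value, compute whether the sign of its tangent is non-negative.
    public static List<Boolean> classify(Collection<Double> values) {
        List<Boolean> signs = new ArrayList<>(values.size());
        for (Double value : values) {
            signs.add(Math.tan(value) >= 0);
        }
        return signs;
    }
}
```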

In this case, a simple array of double values would have sufficed, too, but as we refer to typical production code, a derivative of Collection should be much more realistic.

Using the Java 8 Stream API, the same result can be expressed in a much more readable form:
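A sketch of the functional variant; the Classification result object is reduced to a minimal stand-in here:

```java
import java.util.List;
import java.util.stream.Collectors;

public class TangentSignsFunctional {

    // Minimal stand-in for the result object.
    public static class Classification {
        public final List<Boolean> signs;
        public Classification(List<Boolean> signs) {
            this.signs = signs;
        }
    }

    public static Classification classify(List<Double> values) {
        // map each value to its tangent-sign flag, collect with the default
        // collector and hand the list to the Classification constructor
        return new Classification(values.stream()
                .map(value -> Math.tan(value) >= 0)
                .collect(Collectors.toList()));
    }
}
```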

The code above is actually a cherry-pick between imperative and functional programming, opting to feed the results of the default mapping collector to the constructor of the result object instead. From a functional programming point of view, this is not the most popular way to perform the task – implementing an own Collector, or using the reduce function, would comply much more. In this case, however, neither is very attractive: reduce suffers from the object creation penalty (and its massive costs compared with the inexpensive operation we are performing), and implementing an own collector is much more complicated than simply passing a collection to a constructor.

For the sake of completeness, a possible implementation could use reduce to return a Classification instance directly, which, with an execution time of 6900ms, is the slowest choice.
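A sketch of that reduce-based variant; the immutable stand-in Classification makes the object creation penalty visible, as every accumulation step pays for a fresh instance:

```java
import java.util.ArrayList;
import java.util.List;

public class TangentSignsReduce {

    // Immutable stand-in result object.
    public static class Classification {
        public final List<Boolean> signs;

        Classification() {
            this.signs = new ArrayList<>();
        }

        Classification(List<Boolean> signs) {
            this.signs = signs;
        }

        // copy-and-append: a new instance per element
        Classification with(boolean sign) {
            List<Boolean> copy = new ArrayList<>(signs);
            copy.add(sign);
            return new Classification(copy);
        }

        Classification mergeWith(Classification other) {
            List<Boolean> merged = new ArrayList<>(signs);
            merged.addAll(other.signs);
            return new Classification(merged);
        }
    }

    public static Classification classify(List<Double> values) {
        return values.stream().reduce(
                new Classification(),
                (acc, value) -> acc.with(Math.tan(value) >= 0),
                Classification::mergeWith);
    }
}
```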

So, Java allows to use the right tool for the job.

Performance tests

I chose a trivial approach to benchmarking, executing the operations n+m times – n times for warm-up and m times for the actual measurement. This task could also be performed with JMH, but due to the estimated execution time of >2 s, the accuracy of the trivial benchmark is sufficient.

On a dual-core mobile i7, the results for 400 iterations after 600 iterations of warm-up and JIT-bait were:

– Imperative: 5410ms
– Functional: 6878ms

The functional code was actually slower than its imperative counterpart, which may simply result from the penalty caused by the more complex machinery performing a cheap operation while iterating over a set of values. This suggests that both approaches should converge towards equal processing times as the complexity of the operation performed in each iteration rises.

Parallel Streams

After the MHz wars of the 90s, CPU manufacturers began to increase the number of CPU cores and threads on a chip, so it is not uncommon to have 4 physical CPU cores with 2 threads per core even in a mobile computer.

Both our solutions above have in common that they do not really take advantage of this situation, but comparing them in terms of portability to distributed computing, it becomes obvious that the imperative variant is considerably more painful to convert than the functional one. Actually, compared with the map-shuffle-reduce approach of the MapReduce algorithm, the functional code perfectly describes how it can be executed on multiple processors. So, if we expose the set as a parallel stream, this is exactly what Java will do:
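The same pipeline as before, only the stream source changes (names are again mine):

```java
import java.util.List;
import java.util.stream.Collectors;

public class TangentSignsParallel {

    public static List<Boolean> classify(List<Double> values) {
        // parallelStream() lets the JDK fan the work out over the common
        // fork-join pool and reassemble the results in encounter order
        return values.parallelStream()
                .map(value -> Math.tan(value) >= 0)
                .collect(Collectors.toList());
    }
}
```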

Running that code on the same machine, now taking advantage of 4 cores instead of just one, the function completes after 2416 ms, which is less than half the time the imperative code took. There was no need to take care of threading or reassembly; actually, it was not even necessary to modify the code at all. We were also reminded of Amdahl’s and Gustafson’s laws in the event of any expectation of speed-ups close to 100%.

Conclusion: Java 8 streams offer syntactic sugar for many day-to-day tasks, and as soon as we take advantage of the parallel processing capabilities, we receive a speed-up without having to take care of threading ourselves. Due to the penalties involved for non-trivial iteration, shuffling and reassembly, it is still up to the developer to choose the right tool for the job, but the costs and risks of parallel computing have become much lower.

Les deux Alpes – Alpe d’Huez

Compared with other climbs in France, Alpe d’Huez could easily be overlooked. It does not offer the challenges of Mont Ventoux in terms of heat and wind, nor the height of the Col du Galibier, and the town itself appears as an artificially installed camp for winter sports enthusiasts. Its attractiveness for cyclists stems from the famous Tour de France stage, first won by the Italian cyclist Fausto Coppi in 1952, from its varying gradient that makes clean pacing upwards a challenge, and especially from the beautiful French Alps where this climb is situated.

I chose the ride from Les deux Alpes to Alpe d’Huez and back – a magnificent ride with a lot of climbs.