Category Archives: Spring

Ice cream sales break microservices, Hystrix to the rescue

In November 2015, we had the opportunity to spend three days with a greenfield project in order to get to know Spring Cloud Netflix. At comSysto, we always try to evaluate technologies before their potential use in customer projects to make sure we know their pros and cons. Of course, we had read about several aspects, but we never really got our hands dirty using it. This had to change!

Besides coming up with a simple scenario that can be completed within a few days, our main focus was on understanding potential problems in distributed systems. First of all, any distributed system comes with the ubiquitous problem of failing services that should not break the entire application. This is most prominently addressed by Netflix’ “Simian Army” which intentionally breaks random parts of the production environment.

However, we rather wanted to provoke problems arising under heavy load due to capacity limitations. Therefore, we intentionally designed a distributed application with a bottleneck that turned into an actual problem with many simultaneous requests.

Our Use Case

Our business case is about an ice-selling company that operates at locations worldwide. At each location there are ice-selling robots. At the company's headquarters, we want to show an aggregated report about the ice-selling activities for each country.

All our components are implemented as dedicated microservices using Spring Boot and Spring Cloud Netflix. Service discovery is implemented using Eureka server. The communication between the microservices is RESTful.


Architecture overview of our distributed system with the deployment setup during the experiments.

There is a basic location-service, which knows about all locations provided with ice-selling-robots. The data from all these locations has to be part of the report.

For every location, there is one instance of the corresponding microservice representing an ice-selling-robot. Every ice-selling-robot locally stores the total amount of ice cream sold and the remaining stock. Each of them continuously pushes this data to the central current-data-service. These pushes fail at a certain rate, which is configured via a central Config Server.

For the sake of simplicity, the current-data-service stores this information in-memory. Every time it receives an update from one of the ice-selling-robots, it takes the new value and forgets about the old one. Old values are also forgotten if their timestamp is too old.
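The latest-value semantics described above can be sketched in plain Java. This is a minimal illustration of the idea, not the actual service code; all class and method names are our own invention:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the current-data-service's in-memory store: keeps only the
// latest value per location and forgets entries whose timestamp is too old.
class CurrentDataStore {
    private static class Entry {
        final long value;
        final long timestampMillis;
        Entry(long value, long timestampMillis) {
            this.value = value;
            this.timestampMillis = timestampMillis;
        }
    }

    private final Map<String, Entry> latest = new ConcurrentHashMap<>();
    private final long maxAgeMillis;

    CurrentDataStore(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    // A new update simply replaces the old value for that location.
    void update(String locationId, long value, long nowMillis) {
        latest.put(locationId, new Entry(value, nowMillis));
    }

    // Returns null if there is no sufficiently fresh value for the location.
    Long get(String locationId, long nowMillis) {
        Entry e = latest.get(locationId);
        if (e == null || nowMillis - e.timestampMillis > maxAgeMillis) {
            latest.remove(locationId);
            return null;
        }
        return e.value;
    }
}
```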

The current-data-service offers an interface by which the current value of the total amount of ice cream sold or the remaining stock can be retrieved for a single location. This interface is used by an aggregator-service, which generates and delivers an aggregated report on demand: for all locations provided by the location-service, the current data is retrieved from the current-data-service and aggregated by summing up the individual values, grouped by the locations' country. The resulting report consists of the summed-up values per country and data type (total ice cream sold and remaining stock).
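The aggregation step itself boils down to a sum grouped by country. A minimal sketch in plain Java (the class and method names are illustrative, not taken from the actual service code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the aggregator's core logic: sum the current value of each
// location, grouped by the country that location belongs to.
class ReportAggregator {
    static Map<String, Long> sumByCountry(Map<String, String> locationToCountry,
                                          Map<String, Long> currentValues) {
        Map<String, Long> perCountry = new HashMap<>();
        for (Map.Entry<String, Long> e : currentValues.entrySet()) {
            String country = locationToCountry.get(e.getKey());
            // merge() adds the location's value to the running country total
            perCountry.merge(country, e.getValue(), Long::sum);
        }
        return perCountry;
    }
}
```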

Because the connection between the aggregator-service and the current-data-service is quite slow, generating the report takes a lot of time (we simply simulated this slow connection with a Wi-Fi connection, which is slow compared to an internal service call on the same machine). Therefore, an aggregated-report-cache has been implemented as a fallback. Switching to this fallback is implemented using Hystrix. At fixed intervals, a simple scheduled job provides the cache with the most current report.

The reporting-service is the only service with a graphical user interface. It generates a very simplistic HTML-based dashboard that the business section of our company can use to get an overview of all the different locations. The data presented to the user is retrieved from the aggregator-service. Because this service is expected to be slow and prone to failure, a fallback is implemented which retrieves the last report from the aggregated-report-cache. With this, the user can always request a report within an acceptable response time, even though it might be slightly outdated. This is a typical example of maintaining maximum service quality in case of partial failure.


The reporting “dashboard”.

We used a Spring Cloud Dashboard from the open source community for showing all registered services:


Spring Cloud Dashboard in action.

The circuit-breaker within the aggregator-service can be monitored from Hystrix dashboard.


Hystrix dashboard for reporting service under load. All circuits are closed, but 19% of all getReport requests failed and were hence successfully redirected to the cached version.

Understanding the Bottleneck

When using Hystrix, all connectors to external services typically have a thread pool of limited size to isolate system resources. As a result, the number of concurrent (or "parallel") calls from the reporting-service to the aggregator-service is limited by the size of the thread pool. This way we can easily overstress the capacity for on-demand generated reports, forcing the system to fall back to the cached report.
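The isolation mechanism can be illustrated without Hystrix itself: a bounded pool means that calls beyond its capacity are not queued indefinitely but diverted to the fallback right away. A plain-Java sketch with a Semaphore standing in for the command thread pool (all names are ours, not Hystrix API):

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustration of Hystrix-style isolation: at most `poolSize` concurrent
// primary calls; any call beyond that capacity immediately falls back.
class BoundedCall<T> {
    private final Semaphore permits;
    private final Supplier<T> primary;
    private final Supplier<T> fallback;

    BoundedCall(int poolSize, Supplier<T> primary, Supplier<T> fallback) {
        this.permits = new Semaphore(poolSize);
        this.primary = primary;
        this.fallback = fallback;
    }

    T call() {
        if (!permits.tryAcquire()) {
            return fallback.get();   // capacity exhausted -> e.g. cached report
        }
        try {
            return primary.get();    // e.g. on-demand generated report
        } finally {
            permits.release();
        }
    }
}
```

This also explains the test results below: as long as the number of simultaneous callers stays under the pool size, every call takes the primary path.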

The relevant part of the reporting-service’s internal declaration looks as depicted in the following code snippet (note the descriptive URLs that are resolved by Eureka). The primary method getReport() is annotated with @HystrixCommand and configured to use the cached report as fallbackMethod:

@HystrixCommand(fallbackMethod = "getCachedReport")
public Report getReport() {
    return restTemplate.getForObject("http://aggregator-service/", Report.class);
}

public Report getCachedReport() {
    return restTemplate.getForObject("http://aggregated-report-cache/", Report.class);
}

In order to be able to distinguish primary and fallback calls from the end user’s point of view, we decided to include a timestamp in every served report to indicate the delta between the creation and serving time of a report. Thus, as soon as the reporting-service delegates incoming requests to the fallback method, the age of the served report starts to increase.
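That age indicator is trivial but useful: a directly generated report has an age near zero, while a cached one can be up to one historize interval old. A plain-Java sketch (class and method names are our own, not the actual service code):

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the age indicator: every report carries its creation timestamp,
// and the serving side reports the delta between creation and serving time.
class ReportAge {
    static Duration age(Instant createdAt, Instant servedAt) {
        return Duration.between(createdAt, servedAt);
    }
}
```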


With our bottleneck set up, testing and observing the runtime behavior is fairly easy. Using JMeter we configured a testing scenario with simultaneous requests to the reporting-service.

Basic data of our scenario:

  • aggregation-server instances: 1
  • test duration: 60s
  • hit rate per thread: 500ms
  • historize-job-rate: 30s
  • thread pool size for the getReport command: 5

Using the described setup, we conducted different test runs with JMeter thread pool sizes (= number of concurrently simulated users) of 3, 5 and 7. Analyzing the served reports' timestamps leads us to the following conclusions:

Using a JMeter thread count below the size of the service thread pool results in a 100% success rate for the reporting-service calls. Setting both pool sizes equal already produces a small but noticeable error rate. Finally, setting the JMeter thread count higher than the service thread pool size results in growing numbers of failures and fallbacks, also forcing the circuit breaker into short-circuit states.

Our measured results are as follows (note that the average report age would be 15s when always using the cached version given our historize-job-rate of 30s):

  • 3 JMeter threads: 0.78s average report age
  • 5 JMeter threads: 1.08s average report age
  • 7 JMeter threads: 3.05s average report age

After gaining these results, we changed the setup in a way that eliminates the slow connection. We did so by deploying the current-data-service to the same machine as the aggregation-service. Thus, the slow connection has now been removed and replaced with an internal, fast connection. With the new setup we conducted an additional test run, gaining the following result:

  • 7 JMeter threads, fast network: 0.74s average report age

By eliminating one part of our bottleneck, the report age drops significantly, to a value just below that of the first test run.


The critical point of the entire system is the aggregation due to its slow connection. To address the issue, different measures can be taken.

First, it is possible to scale out by adding additional service instances. Unfortunately, this was hard to test given the hardware at hand.

Second, another approach would be to optimize the slow connection, as seen in our additional measurements.

Last but not least, we could also design our application for always using the cache assuming that all users should see the same report. In our simplistic scenario this would work, but of course that is not what we wanted to analyze in the first place.

Our Lessons Learned

Instead, let us explain a few take-aways based on our humble experience of building a simple example from scratch.

Spring Boot makes it really easy to build and run dozens of services, but really hard to figure out what is wrong when things do not work out of the box. Unfortunately, the available Spring Cloud documentation is not always sufficient. Nevertheless, Eureka works like a charm when it comes to service discovery. Simply use the name of the target service in a URL and pass it to a RestTemplate. That's all! Everything else is handled transparently, including client-side load balancing with Ribbon! In another lab on distributed systems, we spent a lot of time building a workaround for exactly this kind of service discovery. This time, everything was just right.

Furthermore, our poor deployment environment (3 MacBooks…) made serious performance analysis very hard. Measuring the effect of scaling out is nearly impossible on a developer machine due to its physical resource limitations. Having multiple instances of the same services doesn’t give you anything if one of them already pushes the CPU to its limits. Luckily, there are almost infinite resources in the cloud nowadays which can be allocated in no time if required. It could be worth considering this option right away when working on microservice applications.

In Brief: Should you use Spring Cloud Netflix?

So what is our recommendation after all?

First, we were totally impressed by the way Eureka makes service discovery as easy as it can be. Given you are running Spring Boot, starting the Eureka server and making each microservice a Eureka client is nothing more than dependencies and annotations. On the other hand, we did not evaluate its integration in other environments.

Second, Hystrix is very useful for preventing cascading errors throughout the system, but it cannot be used in a production environment without suitable monitoring unless you have a soft spot for flying blind. It also introduces a few pitfalls during development. For example, when you pause a Hystrix command in the debugger, the calling code will probably detect a timeout in the meantime, which can give you completely different behavior. However, if you have the tools and skills to handle the additional complexity, Hystrix is definitely a winner.

In fact, this restriction applies to microservice architectures in general. You have to go a long way before you are able to run one – but once you are, you can scale almost infinitely. Feel free to have a look at the code we produced on GitHub or to discuss whatever you are up to at one of our user groups.

Teamgeist on Android Wear

The entire IT world is currently talking about wearables, so I wanted to take a closer look at the Android Wear API in one of our labs. The first use case was quickly found: our Teamgeist app recently introduced the possibility to hand out kudos.


Kudos would look great on an Android Wear watch. There would be two actions: one to "vote" for a kudo, the other to open the Teamgeist app.

An integration with the Teamgeist app would require a new interface. To get to know the Android Wear API, we therefore content ourselves in the following with an Android app that creates and sends kudos.

After a short bit of research, it became clear that no dedicated Android Wear app is necessary for this use case at all. A regular Android app that sends messages directly to the watch via the Notifications API is sufficient. Apps written specifically for Android Wear will be examined more closely in a later tutorial.


A few things we need before we can get started:

  • IntelliJ (14) as IDE
  • Android SDK with the API packages for level 19 (4.4.2) and 20 (4.4W) and the Android Support Library V4 (20) installed

Android SDK

  • Lacking a real Android Wear device, we start an emulated one from the AVD Manager

AVD Wear

To pair with a phone, we need the Android Wear app from the Play Store on the phone. Pairing the emulated Wear device with a phone attached via USB only works once the following command has been entered on the command line (in the platform-tools directory of the android-sdk):

~/development/android-sdk-mac_86/platform-tools$ adb -d forward tcp:5601 tcp:5601

Only after the command has completed without errors can the emulated watch be connected to the phone from the Android Wear app on the phone. If the phone is disconnected from the computer and reattached, the command has to be executed again. A detailed description is available from Google or here.

Creating a new Android app

After we have successfully paired the emulator with the phone, the first notifications already show up on the watch, e.g. for incoming mail.

So that we can send notifications ourselves, we create a new project in IntelliJ. On the first screen we choose Android on the left and the Gradle: Android Module on the right. On the following page we have to make a few settings, e.g. the version of the target SDK.

Target SDK

Note: We could also have chosen 4.3 here, since the Android Wear app is supported from Android 4.3 onwards.

On the next pages we leave the settings as they are, and on the last screen we merely select the folder for our project.

Cleaning up the generated project

In our Teamgeist app, we of course need our Teamgeist first of all and add it to the drawables 🙂



In activity_main.xml we delete the TextView and create a Button instead.

<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Sende Kudos"
    android:id="@+id/kudo_button"
    android:layout_centerVertical="true"
    android:layout_centerHorizontal="true"/>

To work with the button in Java, we obtain a reference to it in the MainActivity#onCreate() method and immediately set an OnClickListener as well.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    Button kudoButton = (Button) findViewById(;
    kudoButton.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            // our notification code goes here
        }
    });
}
If we start our app now, it should open on the phone showing a "Sende Kudos" button on a white background.

Sending a first notification

To send a first notification, we still have to add the V4 support library to our project. For this, we add one line to the dependencies section of our build.gradle file.

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile ""
}

When the V4 support library is added to a project for the first time, IntelliJ detects this and, after a prompt, creates a dedicated repository for it.

Now we can access the Notification API in the onClick method of the previously created OnClickListener and add the following code.

public void onClick(View view) {
  //1. Create a Notification using NotificationCompat.Builder (builder pattern);
  //   the title text here is just an example
  Notification notification =
    new NotificationCompat.Builder(MainActivity.this)
      .setSmallIcon(R.drawable.teamgeist_logo)
      .setContentTitle("Teamgeist")
      .setContentText("Congratulations, you have sent your first notification")
      .build();

  //2. We need a NotificationManager
  NotificationManagerCompat notificationManager =
    NotificationManagerCompat.from(MainActivity.this);

  //3. Send the notification via the NotificationManager and the NotificationBuilder's result
  int notificationId = 1;
  notificationManager.notify(notificationId, notification);
}

  1. First, a notification is created using the NotificationCompat.Builder and the builder pattern. To begin with, we set a title, a text and an image here.
  2. Then we need a NotificationManager for sending. We obtain it by calling the from() method of the NotificationManagerCompat class.
  3. After that, we are ready to send the notification via the notify method of the NotificationManager. The notificationId serves to distinguish different notifications of the same app.

If we now deploy the app, start it and press "Sende Kudos", we get our first own notification on the watch.



Based on the app icon, Android determines a similar background color. A custom image looks much better, though. We achieve this by additionally calling setLargeIcon on the builder.

new NotificationCompat.Builder(MainActivity.this)
 .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.teamgeist_logo))
 .setContentText("Congratulations, you have sent your first notification")

With this, the notification on the watch also gets the Geist as its background.



We can add various user interactions to the notification. With a PendingIntent, for example, a specific activity of our app is invoked, and data is passed to it via "extras". We create the PendingIntent in a dedicated method.

private PendingIntent createContentIntent() {
    Intent viewIntent = new Intent(MainActivity.this, MainActivity.class);
    viewIntent.putExtra("EventNotified", "1");
    PendingIntent viewPendingIntent =
          PendingIntent.getActivity(MainActivity.this, 0, viewIntent, 0);
    return viewPendingIntent;
}

We pass this intent to the builder by calling setContentIntent.

new NotificationCompat.Builder(MainActivity.this)
 .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.teamgeist_logo))
 .setContentText("Congratulations, you have sent your first notification")
 .setContentIntent(createContentIntent())

Swiping the notification to the left reveals our new action.


If we now click "Open on phone", the configured activity opens on the phone, in our case the MainActivity. Unfortunately, the notification remains on the watch so far. To remove it there, we have to check whether the app was started through this user interaction and, in that case, cancel the notification. To do so, we create the method cancelNotificationOnUserInteraction and call it in the MainActivity#onCreate method.

private void cancelNotificationOnUserInteraction() {
    Intent intent = getIntent();
    Bundle extras = intent.getExtras();
    if (extras != null && "1".equals(extras.getString("EventNotified"))) {
        // cancel the notification shown earlier (same notificationId as above)
        NotificationManagerCompat.from(this).cancel(1);
    }
}

Besides this standard action, we can add further "actions". For this, we create an Action object with the following method,

private NotificationCompat.Action showInBrowser() {
    Intent browserIntent = new Intent(Intent.ACTION_VIEW);
    Uri geoUri = Uri.parse("");
    browserIntent.setData(geoUri);
    PendingIntent browserPendingIntent =
            PendingIntent.getActivity(this, 0, browserIntent, 0);

    return new NotificationCompat.Action(
            android.R.drawable.ic_dialog_map, "Open in Browser", browserPendingIntent);
}

and pass the object to the builder via the addAction method.

new NotificationCompat.Builder(MainActivity.this)
 .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.teamgeist_logo))
 .setContentText("Congratulations, you have sent your first notification")
 .addAction(showInBrowser())

We can now swipe the notification to the left twice and are offered an additional action. Clicking "Open in Browser" now opens our Teamgeist website on the phone.


With the help of such an action, we would implement the voting function. The app on the phone would then have to submit the vote to the Teamgeist server.

What else is there?

With this, we have reached the end of our first Android Wear lab. Besides these actions, there are some special Wear notification features: for one, the possibility to extend a notification by more than one "page", or to group notifications. Probably the best-known feature, however, is the ability to reply to a notification by voice.

All of these are potential topics for our next Android lab. And of course we want to connect the app to our Teamgeist server to receive real kudos and "vote" for them ;-).

Spring-Shell – an easy way to create your own shell

If your next mind blowing tool needs to get some user interaction, using command line arguments might not always be the best user experience. So if you want to provide a more convenient way for the user to interact with your program, then a shell can be a solution. Using a shell gives you the power to lead the user in the process of interaction by predefining commands and thereby giving a hint on what is possible. With Spring-Shell, it is pretty easy to create a shell with your own commands that gives you access to the functions of your program.

How it Works

The shell is based on the Spring-Framework and already provides default built in commands for basic functions like exiting the shell, getting a help page or even using unix/windows commands. It also has some converters for reading in different types of input, like boolean or date. Besides that it contains a plugin model which can be used to customize the shell. Therefore to use your own commands, you need to write a plugin, that will be read in by the plugin model. Each plugin has to contain the file Meta-Inf/spring/spring-shell-plugin.xml. In this file you have to declare where to find the classes that define your custom commands, e.g. with the spring component-scanning functionality.
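Such a plugin descriptor is an ordinary Spring XML application context. A minimal sketch might look as follows; the base-package value is a made-up example, and the classes found there would be your command classes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Meta-Inf/spring/spring-shell-plugin.xml (sketch) -->
<beans xmlns=""
       xmlns:xsi=""
       xmlns:context=""
       xsi:schemaLocation="
           
           
           
           ">
    <!-- let Spring discover the classes that define our custom commands -->
    <context:component-scan base-package="com.example.myshell.commands"/>
</beans>
```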

Continue reading

Spring Boot – my favorite timesaving convention-enabling autoconfig-creating bean-making classpath-shaking microcontainer

One of the common misconceptions when it comes to Spring-based Java applications is that these require a sheer amount of configuration before one can even start working on the actual domain problem that the application is supposed to solve. This is mainly because of XML configurations, which have already been greatly reduced by annotations. But still, if you want to set up a web application as quickly as possible without Spring (XML) configuration files, you need to download and configure a web server, set up a database connection, then write all the required beans, persistence.xml for Hibernate, web.xml, etc. Since what you actually wanted was to code the solution to your very own problem, you start asking yourself whether it really has to be so complicated!?

Continue reading

Eberhard Wolff shows how it's done – the Spring Master Class in Berlin


From September 24 to 26, 2014, a training course in the "Spring Framework" space took place for the first time that tries to position itself apart from the top dogs VMWare and FastLane. The hands-on training is offered and organized by the company Gedoplan, which was able to win the Spring evangelist and Java Champion Eberhard Wolff for this event in Berlin.

As the title already suggests, the contents aim at conveying advanced concepts in the use of the Spring Framework. A good starting point is the knowledge level of the Spring Core training plus roughly two to three projects' worth of practical experience. These help in picking up and putting into context the various questions that arise in the course of the training due to the extreme flexibility of the framework.

Continue reading

How to create your own ‘dynamic’ bean definitions in Spring

Recently, I joined a software project with a Spring/Hibernate-based software stack, which is shipped in a SaaS-like manner, but the databases of the customers need to be separated from each other. Sounds easy? Ok, let's see what we have in detail.

1. Baseline study

Requirement: There is a software product that can be sold to different customers, but the provider wants to keep each customer's sensitive data in a separate data source. Every customer/login has access to exactly one data source.

Continue reading

Spring Data Neo4j

Today we would like to introduce you to Spring Data Neo4j. To this end we implemented a little showcase application. The context of the showcase is a shop system, where it would be useful to calculate what other users also viewed – as known from popular e-commerce websites like Amazon. As these connections between users and products are easily displayed as a graph, we decided to use Neo4j to represent the nodes and the relationships between them. Continue reading