Category Archives: Lightweight Java

Ice cream sales break microservices, Hystrix to the rescue

In November 2015, we had the opportunity to spend three days with a greenfield project in order to get to know Spring Cloud Netflix. At comSysto, we always try to evaluate technologies before their potential use in customer projects to make sure we know their pros and cons. Of course, we had read about several aspects, but we never really got our hands dirty using it. This had to change!

Besides coming up with a simple scenario that can be completed within a few days, our main focus was on understanding potential problems in distributed systems. First of all, any distributed system comes with the ubiquitous problem of failing services that should not break the entire application. This is most prominently addressed by Netflix’ “Simian Army” which intentionally breaks random parts of the production environment.

However, we rather wanted to provoke problems arising under heavy load due to capacity limitations. Therefore, we intentionally designed a distributed application with a bottleneck that turned into an actual problem with many simultaneous requests.

Our Use Case

Our business case is an ice-selling company operating locations all over the world. At each location there are ice-selling robots. At the company's headquarters, we want to show an aggregated report of the ice-selling activities per country.

All our components are implemented as dedicated microservices using Spring Boot and Spring Cloud Netflix. Service discovery is implemented using Eureka server. The communication between the microservices is RESTful.

Architecture overview of our distributed system with the deployment setup during the experiments.

There is a basic location-service that knows about all locations equipped with ice-selling robots. The data from all these locations has to be part of the report.

For every location, there is one instance of the corresponding microservice representing an ice-selling robot. Every ice-selling robot locally stores the total amount of ice cream sold and the remaining stock. Each of them continuously pushes this data to the central current-data-service. These pushes fail at a certain rate, which is configured via a central Config Server.
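
As a rough sketch (not our actual code), such a push loop could be a simple scheduled task; the property name robot.failure-rate, the CurrentData type, the /current-data endpoint and the push interval are made up for illustration, and @EnableScheduling is assumed to be active:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class CurrentDataPusher {

    // failure rate served by the central Config Server (assumed property name)
    @Value("${robot.failure-rate:0.0}")
    private double failureRate;

    @Autowired
    private RestTemplate restTemplate;

    // push the robot's local figures to the current-data-service every 5 seconds
    @Scheduled(fixedRate = 5000)
    public void pushCurrentData() {
        if (Math.random() < failureRate) {
            // simulate the configured failure rate by failing this push
            throw new IllegalStateException("simulated push failure");
        }
        CurrentData data = new CurrentData("location-42", 1234, 56); // sold, stock
        restTemplate.postForObject("http://current-data-service/current-data", data, Void.class);
    }
}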

For the sake of simplicity, the current-data-service stores this information in-memory. Every time it receives an update from one of the ice-selling robots, it takes the new value and forgets about the old one. Old values are also discarded if their timestamp is too old.
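
A minimal sketch of such a store (types and the expiry threshold are illustrative, not our actual implementation):

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CurrentDataStore {

    // assumed expiry threshold after which a value counts as "too old"
    private static final Duration MAX_AGE = Duration.ofMinutes(5);

    private final ConcurrentMap<String, TimestampedValue> valuesByLocation = new ConcurrentHashMap<>();

    // a new update simply replaces the previous value for that location
    public void update(String locationId, long value) {
        valuesByLocation.put(locationId, new TimestampedValue(value, Instant.now()));
    }

    // values whose timestamp is too old are treated as absent
    public Long get(String locationId) {
        TimestampedValue entry = valuesByLocation.get(locationId);
        if (entry == null || entry.timestamp.isBefore(Instant.now().minus(MAX_AGE))) {
            return null;
        }
        return entry.value;
    }

    private static class TimestampedValue {
        final long value;
        final Instant timestamp;

        TimestampedValue(long value, Instant timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }
}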

The current-data-service offers an interface by which the current value of the total amount of ice cream sold or the remaining stock can be retrieved for one location. This interface is used by an aggregator-service, which is able to generate and deliver an aggregated report on demand. For all locations provided by the location-service, the current data is retrieved from the current-data-service and aggregated by summing up the individual values grouped by the locations' country. The resulting report consists of the summed-up values per country and data type (total ice cream sold and remaining stock).
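
The aggregation itself boils down to a group-and-sum; a minimal sketch with a hypothetical LocationData type (country plus value) could look like this:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ReportAggregator {

    // sum the per-location values retrieved from the current-data-service,
    // grouped by the country of each location
    public Map<String, Long> sumByCountry(List<LocationData> locationData) {
        return locationData.stream()
                .collect(Collectors.groupingBy(
                        LocationData::getCountry,
                        Collectors.summingLong(LocationData::getValue)));
    }
}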

Because the connection between the aggregator-service and the current-data-service is quite slow, the calculation of the report takes a lot of time (we simply simulated this slow connection with a Wi-Fi connection, which is slow compared to an internal service call on the same machine). Therefore, an aggregated-report-cache has been implemented as a fallback; switching to this fallback is handled by Hystrix. At fixed intervals, a simple scheduled job provides the cache with the most current report.
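
Such a job can be a few lines of Spring scheduling. The following is only a sketch, assuming the cache accepts the report via a simple POST (paths are illustrative; the 30-second rate matches the historize-job-rate used in our tests below):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class ReportHistorizeJob {

    @Autowired
    private RestTemplate restTemplate;

    // every 30 seconds, fetch a fresh report from the aggregator-service
    // and push it to the aggregated-report-cache
    @Scheduled(fixedRate = 30000)
    public void historizeReport() {
        Report report = restTemplate.getForObject("http://aggregator-service/", Report.class);
        restTemplate.postForObject("http://aggregated-report-cache/", report, Void.class);
    }
}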

The reporting-service is the only service with a graphical user interface. It generates a very simplistic HTML-based dashboard, which can be used by the business section of our company to get an overview of all the different locations. The data presented to the user is retrieved from the aggregator-service. Because this service is expected to be slow and prone to failure, a fallback is implemented which retrieves the last report from the aggregated-report-cache. With this, the user can always request a report within an acceptable response time, even though it might be slightly outdated. This is a typical example of maintaining maximum service quality in case of partial failure.

The reporting “dashboard”.

We used a Spring Cloud Dashboard from the open source community for showing all registered services:

Spring Cloud Dashboard in action.

The circuit breaker within the reporting-service can be monitored from the Hystrix dashboard.

Hystrix dashboard for reporting service under load. All circuits are closed, but 19% of all getReport requests failed and were hence successfully redirected to the cached version.

Understanding the Bottleneck

When using Hystrix, all connectors to external services typically have a thread pool of limited size to isolate system resources. As a result, the number of concurrent (or “parallel”) calls from the reporting-service to the aggregator-service is limited by the size of the thread pool. This way we can easily overstress the capacity for on-demand generated reports, forcing the system to fall back to the cached report.

The relevant part of the reporting-service’s internal declaration looks as depicted in the following code snippet (note the descriptive URLs that are resolved by Eureka). The primary method getReport() is annotated with @HystrixCommand and configured to use the cached report as fallbackMethod:

@HystrixCommand(
 fallbackMethod="getCachedReport",
 threadPoolKey="getReportPool"
)
public Report getReport() {
 return restTemplate.getForObject("http://aggregator-service/", Report.class);
}

public Report getCachedReport() {
 return restTemplate.getForObject("http://aggregated-report-cache/", Report.class);
}
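
We do not reproduce our full configuration here, but one way to pin the getReportPool to the 5 threads used in our tests is the threadPoolProperties attribute of the javanica annotation (the equivalent external property would be hystrix.threadpool.getReportPool.coreSize):

@HystrixCommand(
    fallbackMethod = "getCachedReport",
    threadPoolKey = "getReportPool",
    threadPoolProperties = {
        // limit concurrent getReport calls to 5 threads
        @HystrixProperty(name = "coreSize", value = "5")
    }
)
public Report getReport() {
    return restTemplate.getForObject("http://aggregator-service/", Report.class);
}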

In order to distinguish primary and fallback calls from the end user's point of view, we decided to include a timestamp in every served report indicating the delta between the creation and the serving time of a report. Thus, as soon as the reporting-service delegates incoming requests to the fallback method, the age of the served report starts to increase.
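
As a rough sketch, assuming the Report carries its creation timestamp in an accessor like getCreatedAt() (our actual field name may differ), the served age is simply the difference to the serving time:

// report age in seconds; getCreatedAt() is an assumed accessor on Report
public long reportAgeSeconds(Report report) {
    return java.time.Duration.between(report.getCreatedAt(), java.time.Instant.now()).getSeconds();
}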

Testing

With our bottleneck set up, testing and observing the runtime behavior is fairly easy. Using JMeter we configured a testing scenario with simultaneous requests to the reporting-service.

Basic data of our scenario:

  • aggregator-service instances: 1
  • test duration: 60s
  • hit rate per thread: 500ms
  • historize-job-rate: 30s
  • thread pool size for the getReport command: 5

Using the described setup we conducted different test runs with a JMeter thread pool size (= number of concurrent simulated users) of 3, 5 and 7. Analyzing the served reports' timestamps leads us to the following conclusion:

Using a JMeter thread count below the size of the service thread pool results in a 100% success rate for the reporting-service calls. Setting both pool sizes equal already produces a small but noticeable error rate. Finally, setting the JMeter thread count higher than the service thread pool size results in growing numbers of failures and fallbacks, eventually forcing the circuit breaker into short-circuit (open) states.

Our measured results are as follows (note that the average report age would be 15s when always using the cached version given our historize-job-rate of 30s):

  • 3 JMeter threads: 0.78s average report age
  • 5 JMeter threads: 1.08s average report age
  • 7 JMeter threads: 3.05s average report age

After gaining these results, we changed the setup in a way that eliminates the slow connection. We did so by deploying the current-data-service to the same machine as the aggregator-service. Thus, the slow connection has been removed and replaced with a fast, internal one. With the new setup we conducted an additional test run, gaining the following result:

  • 7 JMeter threads, fast network: 0.74s average report age

By eliminating one part of our bottleneck, the report age drops significantly to a value just below that of the first test run.

Remedies

The critical point of the entire system is the aggregation due to its slow connection. To address the issue, different measures can be taken.

First, it is possible to scale out by adding additional service instances. Unfortunately, this was hard to test given the hardware at hand.

Second, another approach would be to optimize the slow connection, as seen in our additional measurements.

Last but not least, we could also design our application for always using the cache assuming that all users should see the same report. In our simplistic scenario this would work, but of course that is not what we wanted to analyze in the first place.

Our Lessons Learned

Instead, let us explain a few take-aways based on our humble experience of building a simple example from scratch.

Spring Boot makes it really easy to build and run dozens of services, but really hard to figure out what is wrong when things do not work out of the box. Unfortunately, the available Spring Cloud documentation is not always sufficient. Nevertheless, Eureka works like a charm when it comes to service discovery. Simply use the name of the target service in a URL and put it into a RestTemplate. That's all! Everything else is handled transparently, including client-side load balancing with Ribbon! In another lab on distributed systems, we spent a lot of time working around this issue. This time, everything was just right.
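
For illustration, the client-side wiring can be as small as the following sketch; depending on the Spring Cloud version, the RestTemplate may need to be marked @LoadBalanced explicitly so that Ribbon resolves the Eureka service name to a concrete instance:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // calls like restTemplate.getForObject("http://aggregator-service/", Report.class)
    // are then load-balanced across the instances registered in Eureka
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}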

Furthermore, our poor deployment environment (3 MacBooks…) made serious performance analysis very hard. Measuring the effect of scaling out is nearly impossible on a developer machine due to its physical resource limitations. Having multiple instances of the same service doesn't give you anything if one of them already pushes the CPU to its limits. Luckily, there are almost infinite resources in the cloud nowadays, which can be allocated in no time if required. It could be worth considering this option right away when working on microservice applications.

In Brief: Should you use Spring Cloud Netflix?

So what is our recommendation after all?

First, we were totally impressed by the way Eureka makes service discovery as easy as it can be. Given you are running Spring Boot, starting the Eureka server and making each microservice a Eureka client is nothing more than adding dependencies and annotations. On the other hand, we did not evaluate its integration in other environments.

Second, Hystrix is very useful for preventing cascading errors throughout the system, but it cannot be used in a production environment without suitable monitoring unless you have a soft spot for flying blind. It also introduces a few pitfalls during development. For example, when debugging a Hystrix command, the calling code will probably detect a timeout in the meantime, which can give you completely different behavior. However, if you have the tools and skills to handle the additional complexity, Hystrix is definitely a winner.

In fact, this restriction applies to microservice architectures in general. You have to go a long way before you can run one, but once you do, you can scale almost infinitely. Feel free to have a look at the code we produced on GitHub or discuss whatever you are up to at one of our user groups.


Connecting your secured OAuth2 webapp with Android

In my last post I showed how to send custom notifications to an Android Wear watch.

To demonstrate this, I used the events from our Teamgeist app as a case example of what such notifications could look like.

I mocked the server-side data because the focus was on the notification part. Today, I want to connect to our server-side data from an Android app. Therefore, I need to authenticate the Android user with the server.

How the Google OAuth2 flow works for our web app

Our Teamgeist app uses OAuth2 and Google to authenticate a user and obtain some information about the user's profile, such as the profile picture or email address. The OAuth2 flow can be quite tricky the first time. The flow between our JavaScript client and the server part can be simplified to the following steps:

1. The first request from our JavaScript application to the server will be sent without a token in the HTTP header. Therefore, we redirect the user to the login page where we show the Google+ login button:

 


 

2. Depending on the current user login status, the next pages could be a login screen or an account selection page (e.g. if the user is using multiple Google accounts). Let's assume the user logs in. The next page will be our consent screen:


 

As you can see, our app wants access to the user's profile information. If the user grants the permission and accepts this screen, Google will generate an authorization code. During the OAuth2 setup, Google asked for a callback URL. The generated authorization code will be sent to this URL.

In exchange for the authorization code, the server retrieves an access token from Google. With this token the server can obtain information about the user's profile for a certain amount of time. Furthermore, the server gets a refresh token, which lasts longer and can be used to obtain a new access token.

The access and refresh tokens should be stored on the server. These tokens should never be handed to the client app (neither Android nor JavaScript). On the client side we store an application bearer token. We associate this token with the user and give it to the client. That's the only token the client needs to communicate with our server.

Connect Android to the existing flow

Let's assume that the user of the Android app has already registered via the web. In order to get any information from the server, e.g. events or kudos, we must initiate a request for a bearer token. We followed Google's step-by-step blog post about cross-client identity, which led us to two working solutions. Both have their restrictions.

For both of them you need to register an Android app INSIDE of your existing application project in the Google Developer Console. Don't create a new project, as the apps are linked together and must be part of the same project. Double-check the SHA1 key you enter for the Android app AND the package name of your Android application. In the beginning we started by refactoring the existing notification app, changing the package name from io.teamgeist.app to io.teamgeist.android. This led to frustrating INVALID_CLIENT_ID and INVALID_AUDIENCE errors. Once we changed back to .app and recreated the Android application in the developer console, everything started to work. We haven't tried renaming it back to .android, so we can't tell whether this is a forbidden keyword in the package name or whether we were simply too confident in our IDE's renaming of Android package names. If you struggle with any of these errors, check your keystore's SHA1 key and your package name. Also have a look at this blog post, which came in quite handy.

If you did everything right, you can obtain an authorization code or a GoogleIdToken from GoogleAuthUtil. For this, you will need the client id of the server or web application registered in your project.

Select the Google Account

Before you start, you need to let the user select a Google account. This is done by invoking a Choose Account intent from the AccountPicker class:

Intent intent = AccountPicker.newChooseAccountIntent(
        null, null,
        new String[]{"com.google"},
        false, null, null, null, null);
startActivityForResult(intent, PICK_ACCOUNT_CODE);

When the user picks an account, the onActivityResult method of the initiating activity will be triggered. In order to get the authorization code or GoogleIdToken, you need the user's e-mail address from the intent extras.

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
...
        if (resultCode == RESULT_OK) {
            String email = data.getStringExtra(AccountManager.KEY_ACCOUNT_NAME);
....

Authorization Code

The idea of using the same flow with the authorization code was tempting: get the code from your Android application and send it to the server, where it can be exchanged for a pair of refresh/access tokens. You can request an authorization code by calling:

GoogleAuthUtil.getToken(yourActivity, email,
        "oauth2:server:client_id:{server_client_id}:api_scope:");

This call is blocking and therefore has to be executed e.g. in an AsyncTask. Note that the server_client_id in the scope parameter must be the client id of the server.
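
A minimal sketch of such an AsyncTask (email comes from the account picker above; SERVER_CLIENT_ID and GRANT_PERMISSIONS_CODE are assumed constants of the activity):

private void fetchAuthorizationCode(final String email) {
    new AsyncTask<Void, Void, String>() {
        @Override
        protected String doInBackground(Void... params) {
            try {
                return GoogleAuthUtil.getToken(MainActivity.this, email,
                        "oauth2:server:client_id:" + SERVER_CLIENT_ID + ":api_scope:");
            } catch (UserRecoverableAuthException e) {
                // consent has not been given yet, start the consent screen (see below)
                startActivityForResult(e.getIntent(), GRANT_PERMISSIONS_CODE);
            } catch (IOException | GoogleAuthException e) {
                Log.e("OAuth", "could not fetch authorization code", e);
            }
            return null;
        }

        @Override
        protected void onPostExecute(String authorizationCode) {
            // send the authorization code to our server here
        }
    }.execute();
}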

When you call this, you will not get an authorization code but an exception of type UserRecoverableAuthException, because you first need to authorize your Android app for offline access. The exception already contains an intent to be triggered. It will launch a consent screen where the user has to grant the permissions requested by the app.

catch (UserRecoverableAuthException userRecoverableException) {
    yourActivity.startActivityForResult(userRecoverableException.getIntent(),
            GRANT_PERMISSIONS_CODE);
}


When you add more scopes to the scope string (see com.google.android.gms.common.Scopes for available permissions), the consent screen will contain more permission requests.

After the user accepts the consent screen, the onActivityResult of the initiating activity will be called. From the extras you get the authorization code:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == GRANT_PERMISSIONS_CODE) {
        if (resultCode == RESULT_OK) {
            Bundle extras = data.getExtras();
            String authtoken = extras.getString("authtoken");
        }
    }
}

The authorization code has a very short time-to-live (TTL) and can only be used once. Once you send the code to the server, it can be exchanged for a refresh and an access token. After that, create a bearer token and return it to the Android app, just as you would for your JavaScript app. Add the bearer token to the header of all your REST calls.
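
How you attach it depends on your HTTP client; a minimal sketch with a plain HttpURLConnection (the URL and the bearerToken variable are illustrative) could look like this:

// attach the application bearer token to a call against our server
URL url = new URL("https://our-teamgeist-server.example/api/events");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestProperty("Authorization", "Bearer " + bearerToken);
try {
    int status = connection.getResponseCode();
    // ... read and parse the response
} finally {
    connection.disconnect();
}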

The authorization code grants the app offline access through the refresh token. That's one of the reasons why we don't like this solution:

1. If the user logs out of the Android app (removing the bearer token), you will need to go through all the steps again, including the consent screen.

2. In our example we don't need offline access to user data (meaning the server can interact with Google without any user interaction), as we assume the user has already registered via the web and has granted the permission for offline access.

In our Android app we only want to fetch data stored on our server. So let's take a look at the GoogleIdToken approach next.

GoogleIdToken

The GoogleIdToken is a JSON Web Token (JWT). The JWT consists of three parts: header, payload and signature. The signature is computed over the header and the payload. With the public keys from Google (https://www.googleapis.com/oauth2/v1/certs), everybody can verify the signature and check whether it matches the header and the payload.

The GoogleIdToken payload contains several pieces of information regarding the user and the application. If you send the token to https://www.googleapis.com/oauth2/v1/tokeninfo?id_token=[jwt], the payload could look like this:

{
    "issuer": "accounts.google.com",
    "issued_to": "[Client id of your android application from the developer console]",
    "audience": "[Client id of your web application from the developer console]",
    "user_id": "10186*************",
    "expires_in": 3581,
    "issued_at": 1420230999,
    "email": "stefan.djurasic@comsysto.com",
    "email_verified": true
}

On the server you would verify the signature and then look into the payload.

1. If the signature check is ok, you know the token has been created by Google.

2. You know Google has already verified your Android app (it checks the SHA-1 key and the package name of your Android app and compares them with the Android client registered in the same project as the web/server application) and thus provided your app with the JWT for the user with the e-mail address in the payload.

This is why you have to check the “audience” field. It must contain the client id of your web/server application. You could also check the “issued_to” field (also called “azp”), which contains the client id of your Android application. But this is not really needed as long as you have only one client communicating this way with your server. Google says this field could be faked by rooted devices, although we don't know how that would be accomplished.

So let's come back to our app. We want to get the GoogleIdToken. You can obtain it from Android with the same method call with which you obtained the authorization code:

Change the scope parameter in the call from

"oauth2:server:client_id:{server_client_id}:api_scope:"

to

"audience:server:client_id:{server_client_id}"

Unlike the authorization code request, this time we directly get the response back. There is no need for a consent screen, as the user has already granted permission to the server app. On the server side, Google provides the verification of the token signature via a GoogleIdTokenVerifier. You should also provide the client id of your web/server application through the Builder of the GoogleIdTokenVerifier:

JsonFactory jsonFactory = new JacksonFactory();
NetHttpTransport transport = new NetHttpTransport();
String jwt = "thestringofthejwt";

GoogleIdTokenVerifier verifier = new GoogleIdTokenVerifier.Builder(transport, jsonFactory)
    .setAudience(Arrays.asList("YOUR SERVER CLIENT ID"))
    .setIssuer("accounts.google.com")
    .build();

GoogleIdToken googleIdToken = verifier.verify(jwt);
if (googleIdToken != null) {
    // the token is valid; you can add a check for the issued_to/azp field
    if (!"YOUR ANDROID CLIENT ID".equals(googleIdToken.getPayload().getAuthorizedParty())) {
        throw new OAuthException("Wrong authorized party returned");
    }
}

You have verified the user and the application and can send the bearer token to the Android app as explained before for the authorization code. The additional benefits you gain are:

1. You need no extra call to Google to verify the token.

2. You could even change your server application to accept not only the bearer token but also the GoogleIdToken. This way, you could spare yourself the creation of the bearer token and storing it in the database.

The only thing you need to do is check whether the user has already logged in from the web and look up the user's data in your user database using the social id or e-mail address from the JWT.

Drawbacks:

1. The user must have logged into the app via the authorization code flow from the webapp, where the user has to accept the consent screen.

2. The user is never logged out. When the JWT expires (after 60 minutes), the Android app can get a new JWT without any interaction with your server. Even if you invalidate all bearer tokens of the user, the JWT would still be valid. Blocking the user is only possible by adding a flag to the user in your database.

3. With the JWT, you can't access any additional data from Google on the server side.

4. Checking for the JWT in addition to the bearer token requires changes on our server side.

Despite all these drawbacks, we prefer the JWT approach. One suggestion is to combine the two possibilities: use an authorization code for user registration and for obtaining the access/refresh tokens, and use the GoogleIdToken solely for user identification.

In our next episode we will use this login to periodically gather events from our server and push them as notifications to an Android Wear smartwatch.

Feel free to share your thoughts and opinions in the comments section below.

 

Cross Language Benchmarking Part 3 – Git submodules and the single-command cross language benchmark

In my recent blog posts (part 1, part 2) I have described in detail how to do micro benchmarking for Java and C/C++ with JMH and Hayai. I have presented a common execution approach based on Gradle.

Today I want to improve the overall project structure. Last time I already mentioned that the project structure of the Gradle projects is not optimal. In the first part I will briefly recap the main goal and the approach from the past articles, then introduce some new requirements, and finally present a more flexible module structure to split production code and benchmarks, which will then be embedded in a cross-language super-project.
Continue reading

Cross-language benchmarking – Gradle loves native binaries!

Last time I gave you an introduction to my ideas about benchmarking. I explained that comparing performance between different compilers and implementations is as old as programming itself, and, above all, that it is not as simple to set up as it sounds. If you don't know what I am writing about, have a look at the former post on this topic. There, I explained how to set up a simple Gradle task which runs JMH benchmarks as part of a Gradle task chain. However, the first article is no prerequisite at all for this article!

Today I want to start from the other side. As a C++ developer who wants to participate in a performance challenge, I want

  • a framework that outputs numbers that can be compared with other benchmark results
  • to run all benchmarks at once
  • to compare my results with other implementations or compiler assemblies
  • to execute all in one task chain

Continue reading

Introduction to comSysto's cS ONE Conference

We had been planning this amazing event for a couple of months. On December 4th, 2014 it was finally time to start the cS ONE Conference. This event was the first ever internal conference for comSysto, so everyone was enthusiastic and very excited about the outcome.

The Idea Behind It / Motivation

The introduction to the conference was made during breakfast by Daniel Bartl, one of the owners of comSysto.


As proponents of the idea “New Work, New Culture”, we always try to find ways to give our colleagues a chance to do something they deeply care about and love, to transfer knowledge to each other, to socialize and to work together in teams.

The cS ONE Conference was all about knowledge transfer and team strengthening. comSysto employees had the chance to organize their own workshop or bring a certain topic up for discussion in a group setting, all conducted during working hours. The employees were very passionate about their workshops and group discussions and were looking forward to the conference. Everyone had the chance to sign up for the workshops and talks which took place that day.

The Agenda

In order to start our day with lots of energy, we kicked it off with a great breakfast, which was very delicious and, thanks to lunchbox catering, kept us going until the lunch break.


The talks as well as the workshops started at 10 am. See below for the schedule of each talk and workshop.

The topics of the talks were as follows:

  • JVM Deep Dive
  • Angular JS
  • Footfalls reloaded (Talk from Strata Barcelona)
  • Stress management & Burnout prevention
  • Agile at Emnos
  • DIY IOT Sensor Labs

The topics of the workshops were as follows:

  • How groups of people work
  • Shortcut Mania
  • comSysto moodboard
  • Modern Dev Tools
  • Building a mobile web app with famo.us
  • MOOC Mania

The topics of the tables were as follows:

  • comSysto Continuous Improvement Board
  • BIG PICTURE Guild
  • Wissenskombinat Guild
  • Marketing: Outlook for the 1st half year of 2015
  • Trainee @ comSysto + small workshop
  • Managing directors table
  • Introducing the new office team and their roles
  • GULP 2.0

The topic tables at comSysto were similar to booths at an exhibition, and each topic table covered a different subject matter. Each colleague had the chance to drop by and learn about a certain focus area. Many of the topics were work-related. As most of the talks contain a lot of internal information about clients and projects, we can only show you two of the sessions (GULP 2.0 and JVM Deep Dive).

The guild tables were run by groups of employees that share knowledge, tools and code about certain topics. In the BIG PICTURE Guild, employees explore data in small projects like Kaggle competitions, sensor data analysis, IoT, location tracking and anomaly detection. They basically try to extract knowledge from the data, mainly by using machine learning methods. Wissenskombinat is the guild that focuses on knowledge transfer and employee development. The aim of the Wissenskombinat is to improve how knowledge is handled (e.g. increasing efficiency, improving communication and transferring knowledge) and to find ways to learn better from each other. If you want to read more about our guilds (there are several more), please follow this link to our website.


Each attendee had the chance to rate the talks, workshops and topic tables with our mascot sticker.


Work Hard, Play Hard


The chillout corner was very popular. comSysto employees had the chance to play PlayStation, which, by the way, counts as team building :).

What do you think about comSysto’s One Conference?

Does it encourage you to start your own internal conference?

You are so convinced by it and want to join our team? 🙂

Share your thoughts with me.

Teamgeist on Android Wear

The whole IT world is currently talking about wearables. So I wanted to take a closer look at the Android Wear API in one of our labs. The first use case was quickly found: our Teamgeist app recently added the possibility to hand out kudos.

Kudos

Kudos would display nicely on an Android Wear watch. There would be two actions for them: one to “vote” for a kudo, the other to open the Teamgeist app.

For an integration with the Teamgeist app we would need a new interface. To get to know the Android Wear API, we therefore settle, in the following, for an Android app that creates and sends kudos.

After a short bit of research it became clear that no dedicated Android Wear app is necessary for this use case. A normal Android app that sends messages directly to the watch via the Notifications API is enough. Applications written specifically for Android Wear will be covered in more detail in a later tutorial.

Preparation

A few things we need before we can get started:

  • IntelliJ (14) as our IDE
  • Android SDK with the API packages for level 19 (4.4.2) and 20 (4.4W) installed, plus the Android Support Library V4 (20)

Android SDK

  • For lack of a real Android Wear watch, we start an emulated one from the AVD Manager

AVD Wear

For pairing with a phone we need the Android Wear app from the Play Store on the phone. Pairing the emulated watch with a phone connected via USB only works after the following command has been entered on the command line (in the platform-tools directory of the Android SDK):

~/development/android-sdk-mac_86/platform-tools$ adb -d forward tcp:5601 tcp:5601

Only when the command has been executed without errors can the emulated watch be paired with the phone from within the Android Wear app on the phone. If the phone is disconnected from the computer and reconnected, the command has to be executed again. A detailed description is available from Google or here.

Creating a new Android app

Once we have successfully paired the emulator with the phone, the first notifications already appear on the watch, e.g. the arrival of new mails.

So that we can send notifications ourselves, we create a new project in IntelliJ. On the first screen we select Android on the left and the Gradle: Android Module on the right. On the following page we have to make a few settings, e.g. the version of the target SDK.

Target SDK

Note: We could also have chosen 4.3 here, since the Android Wear app is supported from Android 4.3 onwards.

On the next pages we leave the settings as they are, and on the last screen we just select the folder for our project.

Cleaning up the generated project

In our Teamgeist app, the first thing we need is, of course, our Teamgeist mascot, which we add to the drawables 🙂

teamgeist_logo

 

In activity_main.xml we delete the TextView and create a Button instead.

<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Sende Kudos"
    android:id="@+id/kudo_button"
    android:layout_centerVertical="true"
    android:layout_centerHorizontal="true"/>

To work with the button in Java, we get a reference to it in the MainActivity#onCreate() method and also set an OnClickListener right away.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    Button kudoButton = (Button)findViewById(R.id.kudo_button);
    kudoButton.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
          // our notification code goes here
        }
    });
}

If we start our app now, it should open on the phone showing a “Sende Kudos” button on a white background.

Sending a first notification

To send a first notification, we still have to add the V4 Support Library to our project. For this we add one line to the dependencies section of our build.gradle file.

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile "com.android.support:support-v4:20.0.+"
}

The first time you add the V4 Support Library to a project, IntelliJ detects this and, after prompting you, creates a dedicated repository for it.

Now we can access the notification API in the onClick method of the previously created OnClickListener and add the following code.

@Override
public void onClick(View view) {
  // 1. Create the notification with a NotificationCompat.Builder (builder pattern)
  Notification notification =
    new NotificationCompat.Builder(MainActivity.this)
      .setSmallIcon(R.drawable.teamgeist_logo)
      .setContentTitle("Notifications?")
      .setContentText("Congratulations, you have sent your first notification")
      .build();

  // 2. We need a NotificationManager
  NotificationManagerCompat notificationManager =
    NotificationManagerCompat.from(MainActivity.this);

  // 3. Send the notification via the NotificationManager
  int notificationId = 1;
  notificationManager.notify(notificationId, notification);

}
  1. First, a notification is created using the NotificationCompat.Builder and the builder pattern. To start with, we set a title, a text and an image.
  2. To send it, we need a NotificationManager. We obtain one by calling the from() method of the NotificationManagerCompat class.
  3. After that we are ready to send the notification via the notify method of the NotificationManager. The notificationId is used to distinguish between different notifications of an app.

If we now deploy the app, start it and press “Sende Kudos”, we get our first own notification on the watch.

simple_notification

Background image

Based on the app icon, Android determines a matching background color. A custom image looks much better, though. We achieve this by additionally calling setLargeIcon on the builder.

new NotificationCompat.Builder(MainActivity.this)
 .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.teamgeist_logo))
 .setSmallIcon(R.drawable.teamgeist_logo)
 .setContentTitle("Notifications?")
 .setContentText("Congratulations, you have sent your first notification")
 .build();

With this, the notification on the watch also gets the Teamgeist mascot as its background.

simple_notification_with_background

User interaction

We can add various user interactions to the notification. With a PendingIntent, for example, a specific activity of our app is launched and data is passed to it via “extras”. We create the PendingIntent in a separate method.

private PendingIntent createContentIntent() {
    Intent viewIntent = new Intent(MainActivity.this, MainActivity.class);
    viewIntent.putExtra("EventNotified", "1");
    PendingIntent viewPendingIntent =
          PendingIntent.getActivity(MainActivity.this, 0, viewIntent, 0);
    return viewPendingIntent;
}

We pass this intent to the builder by calling setContentIntent.

new NotificationCompat.Builder(MainActivity.this)
 .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.teamgeist_logo))
 .setSmallIcon(R.drawable.teamgeist_logo)
 .setContentTitle("Notifications?")
 .setContentText("Congratulations, you have sent your first notification")
 .setContentIntent(createContentIntent())
 .build();

By swiping the notification to the left, our new action appears.

PendingIntent

If we now click “Open on phone”, the configured activity opens on the phone, in our case the MainActivity. Unfortunately, the notification so far remains on the watch. To remove it there, we have to check whether the app was started through this user interaction and, if so, cancel the notification. For this we create the method cancelNotificationOnUserInteraction and call it in the MainActivity#onCreate method.

private void cancelNotificationOnUserInteraction() {
    Intent intent = getIntent();
    Bundle extras = intent.getExtras();
    if (extras != null && "1".equals(extras.getString("EventNotified"))) {
        NotificationManagerCompat.from(this).cancel(1);
    }
}

In addition to this standard action, we can add further “actions”. For this we create an Action object with the following method,

private NotificationCompat.Action showInBrowser() {
    Intent browserIntent = new Intent(Intent.ACTION_VIEW);
    Uri geoUri = Uri.parse("http://app.teamgeist.io");
    browserIntent.setData(geoUri);
    PendingIntent browserPendingIntent =
            PendingIntent.getActivity(this, 0, browserIntent, 0);

    return new NotificationCompat.Action(
            android.R.drawable.ic_dialog_map, "Open in Browser", browserPendingIntent);
}

and pass the object to the builder via the addAction method.

new NotificationCompat.Builder(MainActivity.this)
 .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.teamgeist_logo))
 .setSmallIcon(R.drawable.teamgeist_logo)
 .setContentTitle("Notifications?")
 .setContentText("Congratulations, you have sent your first notification")
 .setContentIntent(createContentIntent())
 .addAction(showInBrowser())
 .build();

We can now swipe the notification to the left twice and get another action to choose from. Clicking “Open in Browser” opens our Teamgeist website on the phone.

OpenInBrowserAction

With an action like this, we would implement the voting function. The app on the phone would then have to transmit the vote to the Teamgeist server.
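
A sketch of what such a “vote” action could look like (VoteReceiver is a hypothetical BroadcastReceiver that would forward the vote to the Teamgeist server):

private NotificationCompat.Action voteAction(String kudoId) {
    Intent voteIntent = new Intent(MainActivity.this, VoteReceiver.class);
    voteIntent.putExtra("KUDO_ID", kudoId);
    PendingIntent votePendingIntent =
            PendingIntent.getBroadcast(MainActivity.this, 0, voteIntent, 0);

    return new NotificationCompat.Action(
            android.R.drawable.ic_menu_send, "Vote", votePendingIntent);
}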

What else is there?

With this we have reached the end of our first Android Wear lab. Besides these actions, there are special Wear notification features: for one, the possibility to extend a notification by more than one “page”, or to group notifications. However, probably the best-known feature is the possibility to reply to a notification by voice.

All of these are potential topics for our next Android lab. And of course we want to connect the app to our Teamgeist server in order to receive real kudos and “vote” for them ;-).