
ReactJs – Minesweeper App

Minesweeper is a well-known and quite popular computer game from the '90s. For the purposes of this blog post, I decided to revive it with ReactJs.

ReactJs is a JavaScript library providing a view for data rendered as HTML. React offers a model in which subcomponents cannot directly affect enclosing components (“data flows down”) and is very effective at refreshing a view when data changes. Its “virtual DOM” feature lets the framework re-render only the parts of the view that actually changed, which is why ReactJs reacts quickly to user input and provides a great user experience in front-end applications.

MVC architecture

A best practice when using ReactJs is to create a hierarchy with one master component at the top. That component represents the application itself and holds the global application state. In MVC architecture, that component acts as the controller: it handles changes to the model state and triggers rendering when necessary.

Creating a good model is a key step in developing any good application. The model holds the data that represents the current application state. Each object should represent one specific element and should have as few outer dependencies as possible. A simpler model makes data handling easier when changes occur.

The view is implemented by creating low-level React components that can be reused in multiple places in the application. Each view component always represents a single model object and holds that object as a property. Each view component is also responsible for catching HTML events and forwarding them to the controller. To be able to handle such events from input devices, each input device is also represented in the model. In Minesweeper's case it is the MouseModel that holds information about the current mouse state and alerts the controller when actions are made.

Image 1:  Minesweeper – MVC architecture


Handling state changes

The controller contains the whole global state of the application. It reacts to state changes of input devices, delegates actions to the game model and updates the application state when necessary. To be able to update the application state, the controller must be aware of any state change on the model. For that purpose, the ‘event listener’ pattern is implemented on the model.

Each model component implements an interface that contains two methods: addEventHandler(eventHandler) and fireEvent(eventName, event). Each model object sets an event handler callback on each of its children. So, when a model object changes its state, it can fire a ‘stateChanged’ event. Its parent handles that event and fires the same event to its own parent, and so on, until the event reaches the controller at the top of the hierarchy. The controller then handles the event by updating the global state and starting the render() procedure that updates the view.
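A minimal sketch of that pattern (with illustrative names, not taken from the actual source) could look like this:

// Minimal sketch of the 'event listener' pattern described above
// (ModelComponent, board and field are illustrative names).
function ModelComponent() {
    this.eventHandlers = [];
}

ModelComponent.prototype.addEventHandler = function (eventHandler) {
    this.eventHandlers.push(eventHandler);
};

ModelComponent.prototype.fireEvent = function (eventName, event) {
    this.eventHandlers.forEach(function (handler) {
        handler(eventName, event);
    });
};

// A field registers a handler with its parent board; when the field changes,
// the event bubbles up through the board until it reaches the controller.
var board = new ModelComponent();
var field = new ModelComponent();
field.addEventHandler(function (eventName, event) {
    board.fireEvent(eventName, event); // propagate to the parent
});
field.fireEvent('stateChanged', { revealed: true });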

Image 2:  Minesweeper – handling state changes 


This application was created as a pure front-end application and requires only a browser to run. It was developed for the Chrome browser and was not additionally adapted to other browsers. Although it is fully front-end, it also offers the option of adding a scoreboard, which would require a back-end REST service with some kind of persistence.

A live demo is available at: http://dobilinovic.comsysto.com/minesweeper/

Source code is available on GitHub: https://github.com/Obee88/minesweeper-react

Hope you enjoy it!

Davor Obilinović

 

Anatomy of a large Angular application

Do I really need a strategy?

Yes.

A fresh application always starts out as that one application that’s going to be designed for easy maintenance and development.
Unfortunately, it’s just a matter of time until that application becomes non-trivial and needs reorganisation and/or a rewrite. In those moments, it helps if you’ve designed your application in a way that’s easy to refactor and, with some forethought (and luck), a reorganisation might not even be necessary. A bigger application usually also means a bigger team consisting of people with varying degree of front-end and Angular knowledge. Having a clear set of guidelines regarding the architecture and coding style pays off very fast.

The aforementioned problems are exactly the problems we faced while building an application that gets more than 10 million visitors each month. After a while, developing a feature becomes a chore. The same questions always pop up:

Where do I put this piece of code?

How do I modify data?

How come this event changed my data and state?

Why does modifying a piece of code suddenly break more than half of my unit tests?

It was clear — we needed a new direction.

Setting a direction

Our goal at that point was to have something that’s easy to develop, maintain and test. If we accomplish that, there’s a good chance that our application is going to be future-proof as well.

This article aims to tell the story of a better architecture but also to provide a working example of all the principles discussed here. That’s why you’ll find an accompanying repository with an interactive demo application. Details of the repository and how it relates to this article will be discussed later.

Separation of concerns

Looking at the problem from a different angle, we noticed that the biggest problem was writing tests that are not too brittle. Easy testing means that mocking various parts of an application is easy, which led us to the conclusion that we needed better separation of concerns.

That also meant we needed a better data flow; one where it's completely clear who provides and modifies data and who (and how) triggers data changes. After a few initial sketches, we arrived at a rough data flow that resembled React's Flux. It's pretty clear how data flows in a flux(-like) application. In a nutshell — an event (e.g. a user action or callback) requests a data change from a service, which modifies the data and propagates the changes to components that need it. This in turn makes it easy to see who triggered a data change, and there's always one data source.

Better tooling

One thing that made our life easier was using a language that transpiles to JavaScript. That’s something I would seriously recommend. The top two contenders right now are TypeScript and Babel. We chose TypeScript because the tooling made it easier to notice errors at compile time and refactor bigger pieces of code.

Future proofing

Future proofing means having an application that’s easy to maintain but also reasonably easy to upgrade. It won’t be long until Angular 2 becomes production ready and a sane architecture with TypeScript goes a long way in making the gradual upgrade easier.

The bare necessities

What follows is a list of guidelines I expect developers of a sane Angular application to follow:

  • separate your concerns,
  • keep the flow of data unidirectional,
  • manage your UI state using data,
  • use a transpiled language,
  • have a build process in place,
  • test.

Let’s dive into each one of them.

Separating concerns

When each layer of an application can run as a separate entity, doesn't know too much about the system (layers that aren't in direct contact) and is easily testable, you'll have an application that's a joy to work with. Angular offers building blocks that lend themselves to such a separation of concerns. If you want a deep insight into the subject, check out this blog post.

Vertical separation

Concerns can be separated horizontally and vertically. Vertical separation happens when you split an application into verticals. Each vertical has a life of its own and internally should have horizontal separation. What worked best for us was completely separating parts of the application (e.g. separate home page, details page, configuration page, etc.) into standalone web pages that each initialise an Angular application. Communication between these modules is easy and achievable using standard techniques like sessions, URL parameters, etc.


Horizontal separation

Where it gets interesting is horizontal separation. That’s where you actually build up your Angular application and place all its building blocks. It’s important to note that each layer (and block inside a layer) only knows about the layer above itself and doesn’t care about layers underneath that are going to consume its exposed functionalities.

Each vertical features a similar structure:

  • services layer,
  • facade layer,
  • components layer.


Components layer

The components layer is the layer that the users can interact with.
It contains directives with accompanying HTML templates and controllers. When testing (and conceptually designing), directives and HTML templates build one block and controllers build the other block of this layer.

The reason is simple — testing controllers is easy because they can be tested without a dependency on Angular. This exact feature of controllers also makes them the perfect place to put any functionality your directive requires. The preferred way, then, would be to use controllerAs and bindToController in directives to build up components.
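For illustration, a directive built that way could look like the following sketch (the component name, template path and binding are made up for this example):

// Sketch of a directive using controllerAs and bindToController
// (deckListDirective and its template path are illustrative names).
export default function deckListDirective():ng.IDirective {
    return {
        restrict: 'E',
        templateUrl: 'components/deck-list/deck-list.html',
        scope: {
            title: '='
        },
        controller: 'DeckListController',
        controllerAs: 'vm',        // the template references controller members as vm.*
        bindToController: true     // 'title' lands on the controller instance, not the scope
    };
}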

Blocks in this layer get parts of the facade layer injected and, through these, can pull data and request data modification.


A question often pops up in this layer — do we pass data to a component through isolated scope, or do we inject a service and request the data from it?

The answer to that question is not always clear and involves using common sense.
Smaller, reusable components without child components are a clear candidate for getting data through isolated scope and directly using that data.
Components featuring child components or more logic often benefit much more from getting their data through an injected service because they don’t get coupled to their parent.

Facade layer

The facade layer is an abstraction layer. A facade is defined as follows:

A facade can (…) reduce dependencies of outside code on the inner workings of a library, since most code uses the facade, thus allowing more flexibility in developing the system.

In our architecture, its only job is abstracting the back facing part (services layer) from the front facing part of your application (components layer). The blocks in this layer are services whose methods get called from the components layer and are then redirected to corresponding services in the services layer.

It’s that simple.

But also powerful, because such an abstraction is easy to split up and changes done to the services layer never affect your components layer.

Services layer

The services layer features all the smart things your application is supposed to do. Be it data modification, async fetching, UI state modification, etc. This layer is also the layer where your data lives and gets handed to the components layer through the facade layer.


This layer is typically going to feature:

  • services that handle your data or UI state (e.g. DataService and UIStateService),
  • services that assist them in doing so (e.g. DataFetchService or LocalStorageService) and
  • other services that you may need like a service that’s going to tell you at which breakpoint in a responsive layout you are.

Keeping the flow of data unidirectional

Now is the time to explain how all the layers and blocks fit together in a unidirectional flow of data.


Getting data

The services layer features services that know how to get data. The initial set of data is either already present as part of the HTML, asynchronously fetched or hardcoded. This data gets transformed into objects (your models) and is available through methods present on the services in your services layer.

The blocks in the components layer can now make a request for the data through the facade layer, get the already parsed data and display it. Easy.
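As a sketch (with illustrative model and service names, not taken from the repository), such a data-providing service could look like this:

// ... imports (IUser and IDataFetchService are illustrative interfaces)

export default class DataService {
    private users:IUser[] = [];

    constructor(private dataFetchService:IDataFetchService) {
        // the initial data could also be parsed out of the HTML or hardcoded
        this.dataFetchService.fetchUsers()
            .then((users:IUser[]) => this.users = users);
    }

    // there's one data source — everyone reads (and later modifies) this array
    public getUsers():IUser[] {
        return this.users;
    }
}

DataService.$inject = ['DataFetchService'];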

Modifying data

If an event happens that should modify data, the blocks in the components layer make a request to the facade layer (e.g. “refresh list of users” or “update the contents of this article with this data”).

The facade layer passes the request to the correct service.

In the services layer, the request gets processed, the needed data gets modified and all the directives get the new data (because it was already bound to the directives). This works thanks to the digest cycle. Most events that happen are going to trigger a digest cycle which will then update the views. If you’ve got an event that doesn’t trigger the digest cycle (like a slider’s slide event), you can trigger a digest cycle manually.
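For example, assuming a hypothetical third-party slider and a facade method made up for this sketch, applying such a change manually could look like this:

// Sketch: a third-party slider event that Angular doesn't know about.
// $scope.$apply runs the change and then triggers a digest cycle.
slider.on('slide', (value:number) => {
    $scope.$apply(() => {
        facadeService.setVolume(value); // the data still flows through the usual path
    });
});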

Keep it flowing

As you can see, there’s only one place in your application that modifies your data (or a part of it). That same place provides that data and is the only part where something wrong with the data can happen which makes it much easier to debug.

Managing UI state using data

A larger Angular application is probably going to feature various states in which it can find itself. Clicking on a toggle can lead to the change of a tab, selection of a product and highlighting of a row in a table, all at the same time. Doing that on the DOM level (like jQuery manipulation) would be a bad idea because you lose the connection between your data and view.

Since we’ve already established a nice architecture, let’s use it to manage our UI state. You’d create a UIStateService in the services layer. That service would hold all relevant UI data and modify it if needed. Like already explained, that service would provide that data but also be in charge of modifying it. The facade layer would then delegate all needed changes to the correct service(s).

It’s important to note that a UIStateService might not be needed. Since views depend on data, most of the time it’s possible to just use that data and control the state of the views. A separate state service makes sense when you have to manage UI state that’s completely separated from your model.

Transpiling code

There are many benefits to transpiling from a language to JavaScript. A few obvious ones are:

  • using features that are coming in newer versions of ECMAScript,
  • abstraction of JavaScript quirks,
  • compile time errors,
  • better tooling…

You can transpile from future versions of ECMAScript with Babel or even add typing support with TypeScript or Flow. You can’t go wrong with either of these choices because, at the end of the day, you get usable JavaScript. If any of the tools no longer exist, you can continue working with the generated JavaScript.

TypeScript

Seeing as the Angular team has joined forces with Microsoft and is basing Angular 2 on TypeScript, it is safe to assume that the support for that stack is going to be really good. In that sense, it makes sense to get acquainted with TypeScript.

Aside from offering type safety, TypeScript has really good tooling support with editors like Sublime, Visual Studio Code or WebStorm, which all offer autocompletion, inline documentation, refactoring, etc. Most of them also have a built-in TypeScript compiler so you can find compile-time errors while coding. The great autocompletion and inline documentation are possible because of type definition files. You would typically get a type definition file, put it in your project and reference it — the mentioned features then work out of the box. Visit DefinitelyTyped to see which libraries and frameworks are supported (hint: odds are, you're going to find every library or framework you use there) and then use tsd to easily install them from the CLI.

The team at Angular is proposing a concept where libraries directly include the type definition files. The benefits of that approach are two-fold: there’s no need to search for type definition files and the type definition file you get with a version of a library always corresponds to the API of that version.

To get a quick look at all the benefits of developing with TypeScript, you can watch this video from Angular Connect.

A switch to TypeScript is mostly painless because valid JavaScript code is valid TypeScript code. Just change the file extensions to .ts, put a TypeScript compiler in your build process and you’re good to go.

Speaking of build process…

Having a build process in place

You do have a build process in place, don’t you?

If not, pick Grunt, Gulp, Webpack or whichever build/packaging tool you’d like to work with and get going. The repository accompanying this article uses Gulp, so you can get an idea how the code gets transpiled, packed for the web and tested. I won’t go into details on build tools because there are many articles out there detailing them.

build@2x

Testing

You should test all parts of your application.

I see quite often that people leave out testing HTML templates because they’ve got integration tests. Unfortunately, Angular won’t let you know if you’ve got a typo somewhere in your template and integration tests can get big and slow very fast while still not covering enough ground (not to mention the time needed to maintain them).

The point is — with a good architecture in place, testing is easy because you only test code you’ve written and mock away all dependencies. Angular’s dependency injection plays a big role as well and testing with Angular is straightforward.

A combination of Karma as test runner and Jasmine as testing framework is probably going to be enough for all of your test cases. Testing in your build process (between transpiling and packaging) is also going to make sure you’re not introducing regression bugs.

Testing directives means separately testing the directive definition with its accompanying template and controllers.
Controllers are easy to test because they just get instantiated with all of their dependencies mocked away and you can get straight to testing its insides. Most of the time, you’ll just be testing if your controllers delegated to the correct service in the facade layer.
Instantiating directives and mocking away their controller is also easy because the controller is present at the compiled element after Angular’s compilation. To test what’s happening in a template, change the controller or scope mock and run a digest cycle. The new values should be present.

Testing services in the facade or services layer is just as easy because you can mock away every dependency and really test only the code that’s present.

That’s also the main take-away here — test code that’s present in the component you’re testing. Tests should fail if you modify the public methods of a component, but only tests that are associated with that component and not half of all your tests. If writing tests is hard, you’re either testing too much (and not mocking away enough) or having a problem with the architecture of your application.

Real world example


Heroes of Warcraft is a trademark and Hearthstone is a trademark or registered trademark of Blizzard Entertainment, Inc., in the U.S. and/or other countries.

As part of this article, you can check out and play with a demo application here.

It’s a deck management application for card games. Games like Hearthstone, Magic the Gathering and similar have players building decks from an ever-growing collection of cards and battle against each other. You can create and manage decks with a pre-built array of custom made cards taken from HearthCards.

Source repository

What we’ll discuss here is the repository from which the demo application was built and you can find that repository here. The idea behind this repository is to give you a working application that explores the ideas discussed in this article and a nice cheat sheet when you’re not sure how to implement a feature in Angular using TypeScript.

To get started, clone the repository and follow the README. That’s going to start up your server and serve the compiled Angular modules.

For easier work later, I recommend starting a watcher in each vertical by running gulp watch. Now, each time you modify a file inside of a vertical, Gulp is going to compile and test your changes.

Vertical separation

The application is divided into three verticals: common, deckmanager and deckbuilder. Each of these verticals is an Angular module. The common module is a utility module and gets injected into other modules.

Horizontal separation

All verticals feature a similar structure which follows what we've already discussed in the article. You'll find the directories components and services: the components directory contains directives, controllers and templates, making it the components layer, while the services directory holds the facade and services layers.

Let’s explore the layers.

Services layer

The deckmanager vertical is a good candidate because it features a data managing service and a UI state managing service. Each of these services has its own model consisting of objects that they’ll manage and provide.

DataService, furthermore, gets LocalStorageService from the common module. This is where separation of concerns pays off — the data (decks and the cards in them) is going to be stored in local storage. Because our layers are decoupled, it's easy to replace that storage service with something completely different.

If you take a look at the DataService in the deckbuilder vertical, you’ll see that we’re also injecting a PageValueExtractorService. That service allows us to have pre-populated data in HTML that gets parsed and used right away. This is a powerful technique that can make application startup much faster. Once again, it’s easy to see how trivial it is to combine data storage strategies and, if we decide to change the concept completely, our components won’t notice it. They just care about getting the right data, not how it got there.

Facade layer

Let’s look at the facade layer and see how it works in practice.

// ... imports

export default class FacadeService implements IFacadeService {
    private dataService:IDataService;
    private uiStateService:IUIStateService;

    constructor(dataService:IDataService, uiStateService:IUIStateService) {
        this.dataService = dataService;
        this.uiStateService = uiStateService;
    }

    public getDecks():IDeck[] {
        return this.dataService.getDecks();
    }

    public createNewDeck(name:string):void {
        this.dataService.createNewDeck(name);
        this.uiStateService.setShowNewDeckForm(false);
    }

    // ... rest of service
}

FacadeService.$inject = ['DataService', 'UIStateService'];

The FacadeService gets the DataService and UIStateService by injection and can then further delegate logic between the other two layers.

If you look at the createNewDeck() method, you can see that the FacadeService isn’t necessarily just a delegation class. It can also decide simple things. The main idea is that we want a layer between components and services so that they don’t know anything about each other’s implementation.

Components layer

The structure of components includes the directive definition, a template and a controller. The template and controller are optional but, more often than not, they’re going to be present.

You can notice that the components are, for lack of a better word, dumb. They get their data and request modifications from the facade layer. Such a structure yields two big wins: less complexity and easier testing.

Take a look at a controller:

// ... imports

export default class DeckController {
    private facadeService:IFacadeService;

    constructor(facadeService:IFacadeService) {
        this.facadeService = facadeService;
    }

    public getDecks():IDeck[] {
        return this.facadeService.getDecks();
    }
    
    public addDeck():void {
        this.facadeService.setShowNewDeckForm(true);
    }
    
    public editDeck(deck:IDeck):void {
        this.facadeService.editDeck(deck);
    }
    
    public deleteDeck(deck:IDeck):void {
        this.facadeService.deleteDeck(deck);
    }
}

DeckController.$inject = ['FacadeService'];

A quick glance makes it obvious that this component provides CRUD functionalities for our game decks and that it’s going to be really easy to test this class.

Data flow

As discussed in the article, the data flow is going to feature components using the facade layer which is going to delegate those requests to the correct services and deliver results.

Because of the digest cycle, every modification is going to also update the values in the components.

To clarify, consider the following image:

app-data-flow@2x

This image shows the data flow when a user clicks on a card in the Deck Builder. Even before the user interacts with the card gallery, the application has to read the contents of the current deck and all cards supported in the application. So, the first step is the initial pull of data that happens from the components through the facade to the services.

After a user clicks on a card the facade layer gets notified that a user action needs to be delegated. The services layer gets notified and does the needed actions (updating the model, persisting the changes, etc.).

Because a user click using ngClick triggers a digest cycle, the views are going to get updated with fresh data just like it happened in the first step.

Under consideration

The application is tested and features a simple build process. I’m not going to dive deep into these topics because the article is big enough as is, but they are self-explanatory.

The build process consists of a main Gulp configuration file and little configuration files for each vertical. The main Gulp file uses the vertical files to build each vertical. The files are also heavily annotated and shouldn’t be a problem to follow.

The tests try to be limited just to files that they’re concerned with and mock everything else away.

What now?

The application has lots of places where it could be improved upon:

  • additional filtering of cards by cost, hit points, attack points or card rarity,
  • sorting by all possible criteria,
  • adding Bootstrap's Affix to the chosen cards in the deck builder,
  • developing a better Local Storage service with much better object checking and casting,
  • further improving the Page Value Extractor service to allow metadata in the JSON for better type association,
  • etc.

If you check the source code of the application, you’ll notice that there are comments marked with TODO. It’s possible to track these comments in IDEs and text editors (WebStorm and Visual Studio Code do it out of the box, Sublime has several plugins that support it). I’ve included several TODOs that range from new features to improvements and you’re very welcome to fix them and learn a few things along the way.

The devil is in the detail

The points discussed in this article mostly deal with big picture stuff.

If you want to find out about implementation details that can creep up while developing an Angular application, watch this entertaining video from Angular Connect about the usual errors in Angular applications.

Another great resource is this blog post by a developer who re-built the checkout flow at PayPal with Angular.


Back to the drawing board

We have a working application and an idea on how to structure our applications. It’s time to go back to the drawing board now and see if this can really be considered a win.

Consider the demo (tutorial) application that’s featured at the official Angular 2 page — John Papa’s Tour of Heroes. I’ve linked directly to the sources so you can click through the various parts of the application source code. What you’ll notice right away is how similar it feels to the application that’s part of this article. Also, you’ll notice that the take-aways from this article can easily be applied to this application as well — just take the logic out of the components and add layers for a better data flow.

The biggest advantage of developing a well-structured Angular application with TypeScript is the future-proofing that you get. Angular 2 is shaping up to be a great framework and easier to use than Angular 1 with lots of sugar (like annotating components).

Why not, then, upgrade our knowledge for things to come?

Introduction To The E-Commerce Backend commercetools platform

This blog post is an introduction to the e-commerce backend commercetools platform, a Platform as a Service (PaaS) offering by commercetools GmbH from Munich, and gives some ideas on how to use it.

First, the facts about commercetools and the commercetools platform:

commercetools GmbH is a Munich-based company situated in the north near Olympia Park, with further offices in Berlin and New York. The commercetools platform is a backend for all kinds of e-commerce use cases including online pure players, mobile and point-of-sale applications, couch commerce and marketplaces. commercetools began developing its platform in 2006 and has never stopped since.
I will first give an overview of the platform's UI with examples of how to use it, and then talk about the REST API it provides to access data for an imaginary online shop.

User interface of commercetools platform

The sign-up process is fairly easy and completed in about 5 minutes. You create an account and associate a project with it. One account can hold several projects and you can invite several accounts to one project. You will be asked whether you want to include test data in the project, which is advisable for your first project.


Dashboard commercetools platform

The self-explanatory UI allows access to all needed functionalities from Products to Orders to Settings and content for developers. The first thing you will see is the dashboard which gives you revenue statistics for any given time.

I will guide you through the account in the order a project setup workflow should follow:

  • Creating Product Types:
    First you have to understand the difference between product types and categories. Product types are used to describe common characteristics and, most importantly, common custom attributes, whereas categories are used to organize products in a hierarchical structure.


    Creating a product type

    Look at the product type drink I created. I added two attributes, alcohol as a boolean and volume as a number. Now every product created with this product type has to have these two attributes in addition to all the other attributes I will show you later.

  • Creating Categories:
    As mentioned, categories are used to organize the products in your project. This should be nothing spectacularly new.


    Creating categories

    I decided to use a root category containing all other categories as subcategories, to make my life easier later when retrieving the categories for the online shop. A category has just a name, a description, parents and children.

  • Creating Products:
    Now to the important part of the setup, the products themselves. When creating a product, you have to choose one of the previously created product types. Note that a product can only be of one product type.


    Creating a product

    After inserting the name, description, custom attributes and a few other fields, the product is created. You can now upload pictures, add categories, create product variants (for example for different colors), add prices and even define SEO attributes.

  • Everything else via API:
    Creating customers and orders is possible in the UI but is, in my opinion, more practical via API calls. This will be explained in the next part of this post.

REST API of commercetools platform

There are a lot of SDKs in different languages like Java, PHP and Node.js for access to the API (check out the git repository), but I decided to code directly against the REST API. The API is fully documented here. I wrote a one-page app with AngularJS and used the Angular $http service for my API calls, which I will show you in this part of my post. Data is transported in both directions in JSON format, which allows fast and reliable handling.

Authorization

A client has to obtain an access token via an OAuth2 service. There are several access scopes, such as view_products, manage_orders and view_customers, which allow different kinds of interaction with the platform. Normally you would implement a small server which handles authentication and authorization. Otherwise the token would have to be stored on the client side, which is not safe: with the manage_orders token a client can manage not only his own orders but all orders of the project. I ignored that for my test application and concentrated on the REST API.
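For completeness, here's a rough sketch of obtaining a token with the client credentials flow (clientId and clientSecret are assumed to be available; consult the API documentation for the exact endpoint and scopes):

// Rough sketch of fetching an access token — in a real application
// this call belongs on a small server, not in the browser.
$http.post('https://auth.sphere.io/oauth/token',
        'grant_type=client_credentials&scope=manage_project:testshop-rw', {
    headers: {
        'Authorization': 'Basic ' + btoa(clientId + ':' + clientSecret),
        'Content-Type': 'application/x-www-form-urlencoded'
    }
}).success(function (data) {
    // data.access_token is then sent with every API call
    // as 'Authorization: Bearer <token>'
    $scope.accessToken = data.access_token;
});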

Getting Products

To obtain the products from the platform I used Angular’s http service:

function loadProducts() {
    $http.get('https://api.sphere.io/testshop-rw/product-projections?current=true')
        .success(function (data) {
            $scope.loadProductsResponse = data;
            handleLoadProductsResponse($scope.loadProductsResponse);
        });
}

As response to this request you will receive a list of products with all parameters you can possibly need. Notable is the fast response time of the server which was never over 200 ms.

Carts, Customers and Orders

The most important task for an online shop is the handling of customers and their carts and orders. My test implementation creates an anonymous cart for every new user that enters the website:

// localStorage returns undefined for missing keys, so check for absence instead of null
if (!localStorage['cartId']) {
    $http.post('https://api.sphere.io/testshop-rw/carts', {'currency': 'EUR'/*, 'customerId': localStorage['customerId']*/})
        .success(function (data) { localStorage['cartId'] = data.id; });
}

As you can see, I use the localStorage feature to store data. That way the customer can come back later or refresh the website without losing previously obtained data. Once a customer logs in, the anonymous cart is merged into the existing cart of the customer.
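A login call along these lines (a sketch based on the documented login endpoint) takes care of that merge:

// Sketch of a login call — passing the anonymousCartId lets the platform
// merge the anonymous cart into the customer's existing cart.
function login(emailAddress, password) {
    $http.post('https://api.sphere.io/testshop-rw/login', {
        email: emailAddress,
        password: password,
        anonymousCartId: localStorage['cartId']
    }).success(function (data) {
        localStorage['customerId'] = data.customer.id;
    });
}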

Registration for a customer is as simple as this:

function signUp(emailAddress, password, lastName, firstName, streetName, streetNumber, routingCode, city) {
    $scope.registerCustomer = {
        email: emailAddress,
        firstName: firstName,
        lastName: lastName,
        password: password,
        anonymousCartId: localStorage['cartId'],
        addresses: [{
            email: emailAddress,
            firstName: firstName,
            lastName: lastName,
            streetName: streetName,
            streetNumber: streetNumber,
            postalCode: routingCode,
            city: city,
            country: 'DE'
        }]
    };
    // $http serializes the object to JSON automatically
    $http.post('https://api.sphere.io/testshop-rw/customers', $scope.registerCustomer)
        .success(function (data) {
            $scope.signUpResponse = data;
            signUpSuccess($scope.signUpResponse);
        })
        .error(function (data) {
            $scope.signUpResponse = data;
            handleError($scope.signUpResponse);
        });
}

The customer can add several addresses including shipping and billing addresses which allows him to select one of them for checkout.

An order is created from a cart or an anonymous cart via POST:

function cartToOrder(updateCartResponse) {
    $scope.makeOrder = {
        id: updateCartResponse.id,
        version: updateCartResponse.version
    };
    $http.post('https://api.sphere.io/testshop-rw/orders', $scope.makeOrder)
        .success(function (data) {
            $scope.cartToOrderResponse = data;
            orderSuccess($scope.cartToOrderResponse);
        });
}

The process a customer goes through until a product is ordered is fairly simple and only uses a few API calls.

Search

commercetools platform gives you built-in, fast search and filtering capabilities. Using NoSQL technology, the API allows you to create comprehensive product searches, after-search navigation and configuration. In addition, every change made to the product catalog is automatically indexed.
With the built-in facet technology you can enhance customer experience and usability with extended search and navigation capabilities. That way customers can find products faster – especially if you have a comprehensive and complex catalog.
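As a sketch, a full-text search with a facet over the custom volume attribute from earlier could look like this (parameter names follow the API documentation):

// Sketch: full-text search with a facet over the custom 'volume' attribute
// from the product type created earlier.
$http.get('https://api.sphere.io/testshop-rw/product-projections/search', {
    params: {
        'text.en': 'beer',
        'facet': 'variants.attributes.volume'
    }
}).success(function (data) {
    $scope.searchResults = data.results;
    $scope.volumeFacet = data.facets['variants.attributes.volume'];
});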

The operator's point of view

As the company operating the online shop, you have a pretty easy job, too. All products can be uploaded and updated via CSV files, which allows you to manipulate all products at once instead of one after the other. Orders can be given one of several payment statuses via the payment state.


Plug-in integrations

Orders can be downloaded in CSV or XML to feed them to your inventory control system and logistics provider.

Unfortunately, as of yet there are no plug-in payment methods, which is sad, but there is a silver lining: commercetools is working on that right now. The same goes for the direct integration of Hippo CMS, which would allow you to manage all content via Hippo.
Other than that, there are several ways to integrate the commercetools platform into your existing IT landscape (see graphic).

For more information on the commercetools platform, here are a few links which might be useful:

All in all, I enjoyed working with commercetools because of the complete API documentation, the fast and very helpful support, and the very fast and easily accessible API. Just sign up for a free trial and see for yourself.

If you want to learn more about AngularJS, register now for our Training and get Early Bird Tickets.

Building a desktop application with Electron

The how and what of JavaScript desktop applications

Desktop applications have always had a special place in my heart. Ever since browsers and mobile devices got powerful, there's been a steady decline of desktop applications, which are getting replaced by mobile and web applications. Still, there are a lot of upsides to writing desktop applications — they are always present once they're in your start menu or dock, they're alt(cmd)-tabbable (I hope that's a word) and they mostly connect better with the underlying operating system (with its shortcuts, notifications, etc.) than web applications.

In this article, I'll try to guide you through the process of building a simple desktop application and touch on important concepts for building desktop applications with JavaScript.

The main idea behind developing desktop applications with JavaScript is that you build one codebase and package it for each operating system separately. This abstracts away the knowledge needed to build native desktop applications and makes maintenance easier. Nowadays, developing a desktop application with JavaScript relies on either Electron or NW.js. Although both tools offer more or less the same features, I went with Electron because it has some advantages I found important. At the end of the day, you can’t go wrong with either.

Basic assumptions

I assume that you’ve got your basic text editor (or IDE) and Node.js/npm installed. I’ll also assume you’ve got HTML/CSS/JavaScript knowledge (Node.js knowledge with CommonJS modules would be great, but isn’t crucial) so we can focus on learning Electron concepts without worrying about building the user interface (which, as it turns out, are just common web pages). If not, you’ll probably feel somewhat lost and I recommend visiting my previous blog post to brush up on your basics.

A 10,000 foot view of Electron

In a nutshell, Electron provides a runtime to build desktop applications with pure JavaScript. The way it works is — Electron takes a main file defined in your package.json file and executes it. This main file (usually named main.js) then creates application windows which contain rendered web pages with the added power of interacting with the native GUI (graphical user interface) of your operating system.

In detail, once you start up an application using Electron, a main process is created. This main process is responsible for interacting with the native GUI of your operating system and creates the GUI of your application (your application windows).

Purely starting the main process doesn’t give the users of your application any application windows. Those are created by the main process in the main file by using something called a BrowserWindow module. Each browser window then runs its own renderer process. This renderer process takes a web page (an HTML file which references the usual CSS files, JavaScript files, images, etc.) and renders it in the window.

For example, if you only had a calculator application, your main process would instantiate a window with a web page where your actual web page (calculator) is.

Although it is said that only the main process interacts with the native GUI of your operating system, there are techniques to offload some of that work to renderer processes (we’ll look into building a feature leveraging such a technique).

The main process can access the native GUI through a series of modules available directly in Electron. Your desktop application can access all Node modules like the excellent node-notifier to show system notifications, request to make HTTP calls, etc.
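For example, showing a system notification from the main process with node-notifier boils down to this (the title and message are, of course, up to you):

// Showing a system notification with the node-notifier module.
var notifier = require('node-notifier');

notifier.notify({
    title: 'Sound machine',
    message: 'All set — ready to make some noise!'
});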

Hello, world!

Let’s get started with a traditional greeting and install all the necessary prerequisites.

Accompanying repository

This guide is accompanied by the sound-machine-tutorial repository.
Use the repository to follow along or continue at certain points. Clone the repository to get started:

git clone https://github.com/bojzi/sound-machine-electron-guide.git

and then you can jump to a git tag in the sound-machine-tutorial folder with:

git checkout <tag-name>

I’ll let you know when a tag is available with a code block:

Follow along:
git checkout 00-blank-repository

Once you clone/checkout your desired tag, run:

npm install

so that you aren’t missing any Node modules.

If you can’t switch to another tag, it would be easiest to just reset your repository state and then do the checkout:

git add -A
git reset --hard

Set up shop

Follow along with the tag 00-blank-repository:
git checkout 00-blank-repository

In the project folder, create a new package.json file with the following contents:

{
    "name": "sound_machine",
    "version": "0.1.0",
    "main": "./main.js",
    "scripts": {
        "start": "electron ."
    }
}

This barebones package.json:

  • sets up the name and version of the application,
  • lets Electron know which script the main process is going to run (main.js) and
  • sets up a useful shortcut — an npm script to run the application easily by running “npm start” in your CLI (terminal or command prompt).

It’s time to get Electron. The easiest way of accomplishing that is by installing a prebuilt binary for your operating system through npm and save it as a development dependency in your package.json (that happens automatically with –save-dev). Run the following in your CLI (in the project folder):

npm install --save-dev electron-prebuilt

The prebuilt binary is tailored to the operating system it’s being installed on and allows the running of “npm start”. We’re installing it as a development dependency because we will only need it during development.

That’s, more or less, everything you need to start developing with Electron.

Greeting the world

Create an app folder and an index.html file in that folder with the following contents:

<h1>Hello, world!</h1>

In the root of the project create a main.js file. That’s the file that Electron’s main process is going to spin up and allow the creation of our “Hello, world!” web page. Create the file with the following contents:

'use strict';

var app = require('app');
var BrowserWindow = require('browser-window');

var mainWindow = null;

app.on('ready', function() {
    mainWindow = new BrowserWindow({
        height: 600,
        width: 800
    });

    mainWindow.loadUrl('file://' + __dirname + '/app/index.html');
});

Nothing scary, right?
The app module controls your application life (for example — watching when your application is ready for creating windows).
The BrowserWindow module allows window creation.
The mainWindow object is going to be your main application window and is declared as null at the top level because the window would otherwise be closed once JavaScript's garbage collection kicks in.

Once app gets the ready event, we create a new 800 pixels wide and 600 pixels high window using BrowserWindow.
That window’s renderer process is going to render our index.html file.

Run our “Hello, World!” application by running the following in your CLI:

npm start

and bask in the glory that is your app.

Hello, world!

Hello indeed.

Developing a real application

A glorious sound machine

First things first — what’s a sound machine?
A sound machine is a little device that makes sounds when you press various buttons, mostly cartoon or reaction sounds. It’s a funny little tool to lighten up the mood in an office and a great use case to develop a desktop application as we’re going to explore quite a few concepts while developing it (and get a nifty sound machine to boot).


The features we’re going to build and concepts we’re going to explore are:

  • basic sound machine (basic browser window instantiation),
  • closing the sound machine (remote messages between main and renderer process),
  • playing sounds without having the application in focus (global keyboard shortcuts),
  • creating a settings screen for shortcut modifier keys (Shift, Ctrl and Alt) (storing user settings in home folder),
  • adding a tray icon (remotely creating native GUI elements and getting to know menus and tray icon) and
  • packaging your application (packaging your application for Mac, Windows and Linux).

Building the basic feature of a sound machine

Starting point and application organization

With a working “Hello, world!” application under your belt, it's high time to start building a sound machine.

A typical sound machine features several rows of buttons which respond to presses by making sounds. The sounds are mostly cartoonish and/or reaction based (laughter, clapping, glass breaking, etc.).

That’s also the very first feature we’ll build — a basic sound machine that responds to clicks.

Our application structure is going to be very straightforward.

In the root of the application we’ll keep the package.json file, the main.js file and any other application-wide files we need.

The app folder will house our HTML files and assets of various types within folders like css, js, wav and img.

To make things easier, all the files needed for web page design have already been included in the initial state of the repository. Please check the tag 01-start-project out. If you followed along and created the “Hello, world!” application, you’ll have to reset your repository and then do the checkout:

If you followed along with the "Hello, world!" example:
git add -A
git reset --hard
Follow along with the tag 01-start-project:
git checkout 01-start-project

To keep things simple, we’re going to have only two sounds but extending it to the full 16 sounds is simply a matter of finding extra sounds, extra icons and modifying index.html.

Defining the rest of the main process

Let’s revisit main.js to define the look of the sound machine. Replace the contents of the file with:

'use strict';

var app = require('app');
var BrowserWindow = require('browser-window');

var mainWindow = null;

app.on('ready', function() {
    mainWindow = new BrowserWindow({
        frame: false,
        height: 700,
        resizable: false,
        width: 368
    });

    mainWindow.loadUrl('file://' + __dirname + '/app/index.html');
});

We’re customizing the window we’re creating by giving it a dimension, making it non-resizable and frameless. It’s going to look like a real sound machine hovering on your desktop.

The question now is — how to move a frameless window (with no title bar) and close it?
I’ll talk about custom window (and application) closing very soon (and introduce a way of communicating between the main process and a renderer process), but the dragging part is easy. If you look at the index.css file (in app/css), you’ll see the following:

html,
body {
    ...
    -webkit-app-region: drag;
    ...
}

-webkit-app-region: drag; allows the whole html to be a draggable object. There is a problem now, though — you can't click buttons on a draggable object. The other piece of the puzzle is -webkit-app-region: no-drag; which allows you to define undraggable (and thus clickable) elements. Consider the following excerpt from index.css:

.button-sound {
    ...
    -webkit-app-region: no-drag;
}

Displaying the sound machine in its own window

The main.js file can now make a new window and display the sound machine. And really, if you start your application with npm start, you’ll see the sound machine come alive. Of course, there’s nothing happening right now because we just have a static web page.

Put the following in the index.js file (located in app/js) to get the interactivity going:

'use strict';

var soundButtons = document.querySelectorAll('.button-sound');

for (var i = 0; i < soundButtons.length; i++) {
    var soundButton = soundButtons[i];
    var soundName = soundButton.attributes['data-sound'].value;

    prepareButton(soundButton, soundName);
}

function prepareButton(buttonEl, soundName) {
    buttonEl.querySelector('span').style.backgroundImage = 'url("img/icons/' + soundName + '.png")';

    var audio = new Audio(__dirname + '/wav/' + soundName + '.wav');
    buttonEl.addEventListener('click', function () {
        audio.currentTime = 0;
        audio.play();
    });
}

This code is pretty simple. We:

  • query for the sound buttons,
  • iterate through the buttons reading out the data-sound attribute,
  • add a background image to each button and
  • add a click event to each button that plays audio (using the HTMLAudioElement interface).

Test out your application by running the following in your CLI:

npm start

Sound machine

A working sound machine!

Closing the application from a browser window via remote events

Follow along with the tag 02-basic-sound-machine:
git checkout 02-basic-sound-machine

To reiterate — application windows (more exactly their renderer process) shouldn’t be interacting with the GUI (and that’s what closing a window is). The official Electron quick start guide says:

In web pages, it is not allowed to call native GUI related APIs because managing native GUI resources in web pages is very dangerous and it is easy to leak resources. If you want to perform GUI operations in a web page, the renderer process of the web page must communicate with the main process to request the main process perform those operations.

Electron provides the ipc (inter-process communication) module for that type of communication. ipc allows subscribing to messages on a channel and sending messages to subscribers of a channel. A channel is used to differentiate between receivers of messages and is represented by a string (for example “channel-1”, “channel-2”…). The message can also contain data. Upon receiving a message, the subscriber can react by doing some work and can even answer. The biggest benefit of messaging is separation of concerns — the main process doesn’t have to know which renderer processes there are or which one sent a message.

Messaging

That’s exactly what we’ll do here — subscribe the main process (main.js) to the “close-main-window” channel and send a message on that channel from the renderer process (index.js) when someone clicks the close button.

Add the following to main.js to subscribe to a channel:

var ipc = require('ipc');

ipc.on('close-main-window', function () {
    app.quit();
});

After requiring the module, subscribing to messages on a channel is very easy and involves using the on() method with the channel name and a callback function.

To send a message on that channel, add the following to index.js:

var ipc = require('ipc');

var closeEl = document.querySelector('.close');
closeEl.addEventListener('click', function () {
    ipc.send('close-main-window');
});

Again, we require the ipc module and bind a click event to the element with the close button. On clicking the close button we send a message via the “close-main-window” channel with the send() method.

There’s one more detail that could bite you and we’ve talked about it already — the clickability of draggable areas. index.css has to define the close button as non-draggable.

.settings {
    ...
    -webkit-app-region: no-drag;
}

That’s all, our application can now be closed via the close button. Communicating via ipc can get complicated by examining the event or passing arguments and we’ll see an example of passing arguments later.

Playing sounds via global keyboard shortcuts

Follow along with the tag 03-closable-sound-machine:
git checkout 03-closable-sound-machine

Our basic sound machine is working great. But we do have a usability issue — what use is a sound machine that has to sit in front of all your windows the whole time and be clicked repeatedly?

This is where global keyboard shortcuts come in. Electron provides a global shortcut module which allows you to listen to custom keyboard combinations and react. The keyboard combinations are known as Accelerators and are string representations of a combination of keypresses (for example “Ctrl+Shift+1”).

Since we want to catch a native GUI event (a global keyboard shortcut) and trigger an application window event (playing a sound), we'll use our trusted ipc module to send a message from the main process to the renderer process.

Before diving into the code, there are two things to consider:

  1. global shortcuts have to be registered after the app “ready” event (the code should be in that block) and
  2. when sending messages via ipc from the main process to a renderer process you have to use the reference to that window (something like “createdWindow.webContents.send(‘channel’)”)

With that in mind, let’s alter our main.js and add the following code:

var globalShortcut = require('global-shortcut');

app.on('ready', function() {
    ... // existing code from earlier

    globalShortcut.register('ctrl+shift+1', function () {
        mainWindow.webContents.send('global-shortcut', 0);
    });
    globalShortcut.register('ctrl+shift+2', function () {
        mainWindow.webContents.send('global-shortcut', 1);
    });
});

First, we require the global-shortcut module. Then, once our application is ready, we register two shortcuts — one that will respond to pressing Ctrl, Shift and 1 together and the other that will respond to pressing Ctrl, Shift and 2 together. Each of those will send a message on the “global-shortcut” channel with an argument. We’ll use that argument to play the correct sound. Add the following to index.js:

ipc.on('global-shortcut', function (arg) {
    var event = new MouseEvent('click');
    soundButtons[arg].dispatchEvent(event);
});

To keep things simple, we're going to simulate a button click and use the soundButtons selector that we've created while binding buttons to playing sounds. Once a message comes with an argument of 1, we'll take the soundButtons[1] element and trigger a mouse click on it (note: in a production application, you'd want to encapsulate the sound playing code and execute that — a sketch of that follows).
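Such an encapsulation could look like this sketch (a refactoring of the earlier index.js code, not the tutorial's actual implementation):

// Sketch: encapsulating the sound playing instead of simulating clicks.
var sounds = [];

function prepareButton(buttonEl, soundName) {
    buttonEl.querySelector('span').style.backgroundImage = 'url("img/icons/' + soundName + '.png")';

    var audio = new Audio(__dirname + '/wav/' + soundName + '.wav');
    sounds.push(audio);
    buttonEl.addEventListener('click', function () {
        playSound(audio);
    });
}

function playSound(audio) {
    audio.currentTime = 0;
    audio.play();
}

// the global shortcut and the button click now share the same code path
ipc.on('global-shortcut', function (arg) {
    playSound(sounds[arg]);
});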

Configuring modifier keys via user settings in a new window

Follow along with the tag 04-global-shortcuts-bound:
git checkout 04-global-shortcuts-bound

With so many applications running at the same time, it could very well be that the shortcuts we’ve envisioned are already taken. That’s why we’re going to introduce a settings screen and store which modifiers (Ctrl, Alt and/or Shift) we’re going to use.

To accomplish all of that, we’ll need the following:

  • a settings button in our main window,
  • a settings window (with accompanying HTML, CSS and JavaScript files),
  • ipc messages to open and close the settings window and update our global shortcuts and
  • storing/reading of a settings JSON file from the user system.

Phew, that’s quite a list.

Settings button and settings window

Similar to closing the main window, we’re going to send messages on a channel from index.js when the settings button gets clicked. Add the following to index.js:

var settingsEl = document.querySelector('.settings');
settingsEl.addEventListener('click', function () {
    ipc.send('open-settings-window');
});

After clicking the settings button, a message on the channel “open-settings-window” gets sent. main.js can now react to that event and open up the new window. Add the following to main.js:

var settingsWindow = null;

ipc.on('open-settings-window', function () {
    if (settingsWindow) {
        return;
    }

    settingsWindow = new BrowserWindow({
        frame: false,
        height: 200,
        resizable: false,
        width: 200
    });

    settingsWindow.loadUrl('file://' + __dirname + '/app/settings.html');

    settingsWindow.on('closed', function () {
        settingsWindow = null;
    });
});

Nothing new to see here: we’re opening a new window just like we did with the main window. The only difference is that we check whether the settings window is already open so that we don’t open two instances.

Once that works, we need a way of closing that settings window. Again, we’ll send a message on a channel, but this time from settings.js (as that is where the settings close button is located). Create (or replace the contents of) settings.js with the following:

'use strict';

var ipc = require('ipc');

var closeEl = document.querySelector('.close');
closeEl.addEventListener('click', function (e) {
    ipc.send('close-settings-window');
});

And listen on that channel in main.js. Add the following:

ipc.on('close-settings-window', function () {
    if (settingsWindow) {
        settingsWindow.close();
    }
});

Our settings window is now ready to implement its own logic.

Storing and reading user settings

Follow along with the tag 05-settings-window-working:
git checkout 05-settings-window-working

The process of interacting with the settings window, storing the settings and promoting them to our application will look like this:

  • create a way of storing and reading user settings in a JSON file,
  • use these settings to display the initial state of the settings window,
  • update the settings upon user interaction and
  • let the main process know of the changes.

We could just implement the storing and reading of settings in our main.js file but it sounds like a great use case for writing a little module that we can then include in various places.

Working with a JSON configuration 

That’s why we’re going to create a configuration.js file and require it wherever we need it. Node.js uses the CommonJS module pattern, which means that you export only your API and other files require and use the functions available on that API.



To make storing and reading easier, we’ll use the nconf module which abstracts the reading and writing of a JSON file for us. It’s a great fit. But first, we have to include it in the project with the following command executed in the CLI:

npm install --save nconf

This tells npm to install the nconf module as an application dependency; it will be included and used when we package our application for an end user (in contrast to installing with the --save-dev flag, which only includes modules needed during development).

The configuration.js file is pretty simple, so let’s examine it fully. Create a configuration.js file in the root of the project with the following contents:

'use strict';

var nconf = require('nconf').file({file: getUserHome() + '/sound-machine-config.json'});

function saveSettings(settingKey, settingValue) {
    nconf.set(settingKey, settingValue);
    nconf.save();
}

function readSettings(settingKey) {
    nconf.load();
    return nconf.get(settingKey);
}

function getUserHome() {
    return process.env[(process.platform == 'win32') ? 'USERPROFILE' : 'HOME'];
}

module.exports = {
    saveSettings: saveSettings,
    readSettings: readSettings
};

nconf only wants to know where to store your settings and we’re giving it the location of the user home folder and a file name. Getting the user home folder is simply a matter of asking Node.js (process.env) and differentiating between various platforms (as observed in the getUserHome() function).

Storing or reading settings is then accomplished with the built-in methods of nconf (set() for storing, get() for reading with save() and load() for file operations) and exporting the API by using the standard CommonJS module.exports syntax.
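To illustrate the module’s API, usage from any other file could look like this sketch (the key name matches the one we’ll use below):

var configuration = require('./configuration');

// Store an array under a key...
configuration.saveSettings('shortcutKeys', ['ctrl', 'shift']);

// ...and read it back later, even from another process.
var keys = configuration.readSettings('shortcutKeys'); // ['ctrl', 'shift']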

Initializing default shortcut key modifiers

Before moving on with settings interaction, let’s initialize the settings in case we’re starting the application for the first time. We’ll store the modifier keys as an array with the key “shortcutKeys” and initialise it in main.js. For all of that to work, we must first require our configuration module:

'use strict';

var configuration = require('./configuration');

app.on('ready', function () {
    if (!configuration.readSettings('shortcutKeys')) {
        configuration.saveSettings('shortcutKeys', ['ctrl', 'shift']);
    }
    ...
});

We check whether anything is stored under the setting key “shortcutKeys”; if not, we set an initial value.

As an additional thing in main.js, we’ll rewrite the registering of global shortcut keys as a function that we can call later when we update our settings. Remove the registering of shortcut keys from main.js and alter the file this way:

app.on('ready', function () {
    ...
    setGlobalShortcuts();
});

function setGlobalShortcuts() {
    globalShortcut.unregisterAll();

    var shortcutKeysSetting = configuration.readSettings('shortcutKeys');
    var shortcutPrefix = shortcutKeysSetting.length === 0 ? '' : shortcutKeysSetting.join('+') + '+';

    globalShortcut.register(shortcutPrefix + '1', function () {
        mainWindow.webContents.send('global-shortcut', 0);
    });
    globalShortcut.register(shortcutPrefix + '2', function () {
        mainWindow.webContents.send('global-shortcut', 1);
    });
}

The function resets the global shortcuts so that we can set new ones, reads the modifier keys array from settings, transforms it into an Accelerator-compatible string and does the usual global shortcut registration. For example, the stored array ['ctrl', 'shift'] becomes the prefix 'ctrl+shift+', yielding the accelerators 'ctrl+shift+1' and 'ctrl+shift+2'.

Interaction in the settings window

Back in the settings.js file, we need to bind click events which are going to change our global shortcuts. First, we’ll iterate through the checkboxes and mark the active ones (reading the values from the configuration module):

var configuration = require('../configuration.js');

var modifierCheckboxes = document.querySelectorAll('.global-shortcut');

for (var i = 0; i < modifierCheckboxes.length; i++) {
    var shortcutKeys = configuration.readSettings('shortcutKeys');
    var modifierKey = modifierCheckboxes[i].attributes['data-modifier-key'].value;
    modifierCheckboxes[i].checked = shortcutKeys.indexOf(modifierKey) !== -1;

... // Binding of clicks comes here
}

And now we’ll bind the checkbox behavior. Take into consideration that the settings window (and its renderer process) is not allowed to register global shortcuts; that has to happen in the main process. That means we’ll need to send an ipc message from settings.js (and handle that message later):

for (var i = 0; i < modifierCheckboxes.length; i++) {
...

    modifierCheckboxes[i].addEventListener('click', function (e) {
        bindModifierCheckboxes(e);
    });
}

function bindModifierCheckboxes(e) {
    var shortcutKeys = configuration.readSettings('shortcutKeys');
    var modifierKey = e.target.attributes['data-modifier-key'].value;

    if (shortcutKeys.indexOf(modifierKey) !== -1) {
        var shortcutKeyIndex = shortcutKeys.indexOf(modifierKey);
        shortcutKeys.splice(shortcutKeyIndex, 1);
    }
    else {
        shortcutKeys.push(modifierKey);
    }

    configuration.saveSettings('shortcutKeys', shortcutKeys);
    ipc.send('set-global-shortcuts');
}

It’s a bigger piece of code but still pretty simple. We iterate through all the checkboxes, bind a click event, and on each click check whether the settings array already contains the modifier key. Depending on the result, we modify the array, save it to settings and send a message to the main process, which should then update our global shortcuts.

All that’s left to do is subscribe to the ipc channel “set-global-shortcuts” in main.js and update our global shortcuts:

ipc.on('set-global-shortcuts', function () {
    setGlobalShortcuts();
});

Easy. And with that, our global shortcut keys are configurable!

What’s on the menu?

Follow along with the tag 06-shortcuts-configurable:
git checkout 06-shortcuts-configurable

Another important concept in desktop applications is menus. There’s the always useful context menu (AKA right-click menu), the tray menu (bound to a tray icon), the application menu (on OS X), etc.

In this guide we’ll add a tray icon with a menu. We’ll also use this opportunity to explore another way of inter-process communication: the remote module.

The remote module lets a renderer process make RPC calls to the main process. In practice, it means that you remotely require native GUI modules from main.js and call methods on them. That way, you could require the BrowserWindow module from the main process and instantiate a new browser window from a renderer process. Behind the scenes, that’s still a synchronous ipc message, but it provides a very good tool for organizing your code.
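As a minimal sketch (using the pre-1.0 Electron API this guide is based on), opening a window from a renderer process could look like this; the helper.html file name is purely illustrative:

// Sketch: requiring a native GUI module from the main process
// inside a renderer process via the remote module.
var remote = require('remote');
var BrowserWindow = remote.require('browser-window');

var helperWindow = new BrowserWindow({ width: 300, height: 300 });
helperWindow.loadUrl('file://' + __dirname + '/helper.html');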


Let’s see how we’d create a menu and bind it to a tray icon while doing it in a renderer process. Add the following to index.js:

var remote = require('remote');
var Tray = remote.require('tray');
var Menu = remote.require('menu');
var path = require('path');

var trayIcon = null;

if (process.platform === 'darwin') {
    trayIcon = new Tray(path.join(__dirname, 'img/tray-iconTemplate.png'));
}
else {
    trayIcon = new Tray(path.join(__dirname, 'img/tray-icon-alt.png'));
}

var trayMenuTemplate = [
    {
        label: 'Sound machine',
        enabled: false
    },
    {
        label: 'Settings',
        click: function () {
            ipc.send('open-settings-window');
        }
    },
    {
        label: 'Quit',
        click: function () {
            ipc.send('close-main-window');
        }
    }
];
var trayMenu = Menu.buildFromTemplate(trayMenuTemplate);
trayIcon.setContextMenu(trayMenu);

The native GUI modules (menu and tray) were required remotely, so it’s safe to use them here.

A tray icon is defined through its icon. OS X supports image templates (by convention, an image is considered a template image if its filename ends with “Template”) which makes it easy to work with the dark and light themes. Other OSes get a regular icon.

There are multiple ways of building a menu in Electron. This way creates a menu template (a simple array with menu items) and builds a menu from that template. At the end, the new menu is attached to the tray icon.

Packaging your application

Follow along with the tag 07-ready-for-packaging:
git checkout 07-ready-for-packaging

What’s the use of an application that you can’t let people download and use?

Packaging your application for all platforms is easy using electron-packager. In a nutshell, electron-packager abstracts away all the work of wrapping your app with Electron and generates packages for all the platforms you’re going to publish on.

It can be used as a CLI application or as part of a build process. Building a more complicated build scenario is not in the scope of this article, but we’ll leverage the power of npm scripts to make packaging easier. Using electron-packager is trivial; the general form when packaging an application is:

electron-packager <location of project> <name of project> <platform> <architecture> <electron version> <optional options>

where:

  • location of project points to the folder where your project is,
  • name of project defines the name of your project,
  • platform decides for which platforms to build (all to build for Windows, Mac and Linux),
  • architecture decides for which architectures to build (x86 or x64, all for both) and
  • electron version lets you choose which Electron version to use.

The first packaging run is going to take a while because all the binaries for all platforms have to be downloaded. Subsequent runs are much faster.

I typically package the sound machine like this (on a Mac):

electron-packager ~/Projects/sound-machine SoundMachine --all --version=0.30.2 --out=~/Desktop --overwrite --icon=~/Projects/sound-machine/app/img/app-icon.icns

The new options included in the command are self-explanatory. To get a nice icon, you’ll first have to convert it to .icns (for Mac) and/or .ico (for Windows). Just search for a tool to convert your PNG file to these formats like this one (be sure to download the file with the .icns extension and not .hqx). If packaging for Windows from a non-Windows OS, you’ll need wine on your path (Mac users can use brew, while Linux users can use apt-get).

It doesn’t make sense to run that big command every time. We can add another script to our package.json. First of all, install electron-packager as a development dependency:

npm install --save-dev electron-packager

Now we can add a new script to our package.json file:

"scripts": {
    "start": "electron .",
    "package": "electron-packager ./ SoundMachine --all --out ~/Desktop/SoundMachine --version 0.30.2 --overwrite --icon=./app/img/app-icon.icns"
}

And then run the following in CLI:

npm run-script package

The package command starts electron-packager, looks in the current directory and builds to the Desktop. The script should be changed if you’re using Windows, but that’s trivial.

The sound machine in its current state ends up weighing a whopping 100 MB. Don’t worry, once you archive it (zip or an archive type of your choice), it’ll lose more than half its size.

If you really want to go to town, take a look at electron-builder which takes the packages produced by electron-packager and creates automated installers.

Additional features to add

With the application packaged and ready to go, you can now start developing your own features.

Here are some ideas:

  • a help screen with info about the app, its shortcuts and author,
  • an icon and a menu entry to open that info screen,
  • a nicer packaging script for faster builds and distribution,
  • notifications via node-notifier to let users know which sound they’re playing,
  • greater use of lodash for a cleaner code base (like iterating through arrays),
  • CSS and JavaScript minified with a build tool before packaging and
  • the aforementioned node-notifier combined with a server call to check for new versions of your app and notify users…

For a nice challenge, try extracting your Sound machine browser window logic and using something like browserify to create a web page with the same sound machine you’ve just created. One code base, two products (a desktop application and a web application). Nifty!

Diving deeper into Electron

We’ve only scratched the surface of what Electron brings to the table. It’s pretty easy to do things like watching for power events on the host machine or getting various information on the screen (like cursor position).

For all of those built-in utilities (and generally while developing applications with Electron), check out the Electron API docs.

These Electron API docs are a part of the docs folder at the Electron GitHub repository and that folder is well worth checking out.

Sindre Sorhus maintains an awesome list of Electron resources where you can find really cool projects and information, like an excellent overview of a typical Electron application architecture that can serve as a refresher on the code we’ve been developing up until now.

In the end, Electron is based on io.js (which is going to be merged back into Node.js), and most Node.js modules are compatible and can be used to extend your application. Just browse npmjs.com and grab what you need.

Is that all?

Not by a long shot.

Now it’s time to build a bigger application. I’ve mostly skipped using extra libraries and build tools in this guide to concentrate on the important issues, but you can easily write your app in ES6 or TypeScript, use Angular or React and simplify your build with gulp or Grunt.

With your favorite language, framework and build tool, why not build a Flickr sync desktop application using the Flickr API and node-flickrapi or a GMail client using Google’s official Node.js client library?

Pick an idea that’s going to motivate you, init a repository and just do it.

Overview of the JavaScript ecosystem

What can I accomplish with JavaScript?

The ecosystem of JavaScript has grown. Long gone are the days of simply inserting jQuery into your website and fading stuff in or out.

Entering the world of JavaScript today is an overwhelming task with lots of possibilities. It also means that it’s a world that’s brimming with opportunity. In the words of Jeff Atwood (http://blog.codinghorror.com/the-principle-of-least-power/):

Any application that can be written in JavaScript, will eventually be written in JavaScript.

The different aspects of JavaScript

There has never been a better time to find a niche within the JavaScript ecosystem. Here’s a list of aspects you can dive into and which this article will explore deeper:

  1. Front-end development
  2. Command line interface (CLI) applications
  3. Desktop (GUI) applications
  4. Mobile applications
  5. Back-end development
  6. Any combination of the above

Front-end development

AngularJS

Developing the user-facing part of websites has become increasingly complex: sites are highly interactive and offload traditional server-side tasks to the front-end. It was once unfathomable that we’d be running the likes of Google Maps, Spotify or YouTube in our web browsers, but here we are, with a varied toolset for building complex web applications.

Front-end web development has grown exponentially in the last few years and I’ll offer just a glimpse of that here.

The basics of front-end web development

For a long time, JavaScript was used solely for DOM manipulation, with the odd animation thrown in for good measure. And from the beginning, there were big discrepancies between browsers’ feature sets.

jQuery started a revolution by abstracting away those browser differences and making DOM manipulation easy, while also bringing quite a few utilities to the table.

Nowadays, it’s quite easy to manipulate the DOM with pure JavaScript and there’s a very nice cheat sheet just for that purpose.
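For example, adding a class to an element, once a classic jQuery task, is now a one-liner in plain JavaScript (modern browsers assumed):

// jQuery
$('#content').addClass('active');

// Pure JavaScript
document.querySelector('#content').classList.add('active');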

Efficiency through frameworks

With the growing complexity of websites, and websites growing into web applications, there was a need to address the complex issues of web applications (state handling, data binding, view rendering, etc.). Many frameworks rose to that challenge, and the two that are probably most popular today are AngularJS and React.

It’s no surprise that Angular and React gained such traction, since the former is backed by Google and the latter by Facebook. While Angular covers the whole MVC paradigm, React is somewhat leaner and mostly considered the V of MVC.

New frameworks show up all the time and time will only tell which one will reign supreme (of course, if something like that even happens).

What’s in a name?

There’s a good chance that you won’t be writing plain JavaScript any more, but one of the languages that transpile to JavaScript, like:

  • EcmaScript 6 — the newest spec of JavaScript
  • TypeScript — Microsoft’s superset of JavaScript featuring types

Apart from just adding new features to the language, there’s a good chance you’ll be modularising your application using ES6 native modules, CommonJS (mostly for Node.js development) or RequireJS (async module loading, mostly for websites).

Transpilation and connecting of modularised applications is done via build tools (Gulp and Grunt, covered in more detail later), transpilers (like Babel or Traceur) and module bundlers (like Browserify or Webpack). You’ll most likely transpile and bundle your modules in every aspect of JavaScript development.

There’s a boatload of tools that weren’t mentioned. Exploring them is left to the reader, and a good starting place is the awesome list of front-end development.


Command line interface (CLI) applications

Gulp running a gulpfile

Many developers rely mostly on the CLI in their day-to-day development — be it code linting, task running or starting a server, there’s a certain beauty in the efficiency of executing a task purely from the command line.

CLI applications are written using Node.js (or io.js, a fork of Node.js which is going to be merged back into Node.js soon). Node.js is an open source, cross-platform runtime environment that lets you execute JavaScript anywhere via Chrome’s V8 JavaScript engine, not just in the browser like before. In essence, once someone installs Node.js and gets your CLI application (package), they can execute it.

Package managers

It would be really bad if you had to write every functionality of every app from scratch. That’s where npm steps in. npm is a package manager for Node.js modules and using packages is really simple: install and require them.

The CLI application that you write can also be packaged as a Node.js module and distributed via npm. That is the preferred way of getting your CLI application (or Node.js modules for that matter) to other people.
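As a sketch, a CLI package announces its executable through the "bin" field of its package.json (all names here are hypothetical):

{
  "name": "my-cli-tool",
  "version": "1.0.0",
  "bin": {
    "my-cli-tool": "./bin/cli.js"
  }
}

After a global install (npm install -g my-cli-tool), the my-cli-tool command becomes available on the user’s path.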

Many popular libraries and tools have CLI applications for easier use, like Gulp or Grunt. There’s also a list of awesome Node.js CLI apps.

Build tools

Build tools (and task runners) get a special mention because they’re the most basic tools you’ll be using no matter what type of application you’re building.

The most popular build tools nowadays are Grunt and Gulp, which make the process of transforming your code into something usable much easier. A typical scenario, sketched in the gulpfile after this list, is to:

  • transpile your JavaScript from EcmaScript 6 to EcmaScript 5
  • compile your SCSS to CSS
  • minify and concatenate the resulting files
  • copy everything to a distribution folder
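A gulpfile covering that scenario could look roughly like this (a sketch assuming gulp 3 and the gulp-babel, gulp-sass, gulp-concat and gulp-uglify plugins are installed):

var gulp = require('gulp');
var babel = require('gulp-babel');
var sass = require('gulp-sass');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
    return gulp.src('src/js/**/*.js')
        .pipe(babel())            // transpile EcmaScript 6 to EcmaScript 5
        .pipe(concat('app.js'))   // concatenate into a single file
        .pipe(uglify())           // minify
        .pipe(gulp.dest('dist/js'));
});

gulp.task('styles', function () {
    return gulp.src('src/scss/**/*.scss')
        .pipe(sass())             // compile SCSS to CSS
        .pipe(gulp.dest('dist/css'));
});

gulp.task('default', ['scripts', 'styles']);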

Desktop (GUI) applications

Slack

Applications are mostly moving to the web or onto mobile devices. Still, desktop applications offer an immersion mostly unavailable to web applications.

The biggest advantage of writing your desktop applications in JavaScript is the abstraction of the platform you’re coding for. Your applications are cross-platform, and the modules you use simplify the usage of typical desktop features (such as tray icons, notifications, global keyboard shortcuts, etc.).

Having a good project structure allows you a lot of code reuse between your web and desktop application. That in turn leads to easier maintenance.

Available tools

There are two popular projects which allow you to write a desktop application via HTML/JS:

  • NW.js — formerly known as node-webkit, it’s the most popular way of writing native desktop applications
  • Electron — a newer contender made by GitHub which already gained big traction in the same space

Notable applications

Both of the mentioned projects are used in quite a few popular desktop applications.

Notable applications done with NW.js or Electron include Slack, Game Dev Tycoon, GitHub Atom, WhatsApp Desktop, Facebook Messenger Desktop, Popcorn Time and Microsoft Visual Studio Code. There’s an extensive list of projects made with NW.js and an extensive list of projects made with Electron (both containing links to repositories for learning or contributing purposes).


Mobile applications

Facebook Mobile applications made with React Native

With such a booming market, it makes sense to develop mobile applications. The JavaScript ecosystem provides a few solutions for developing cross-platform (iOS, Android and Windows Phone) applications. The most popular projects for cross-platform mobile applications are Ionic, Phonegap and React Native.

Ionic and Phonegap use a browser wrapper around your HTML/JS and provide access to otherwise unavailable features of the platform (camera, various sensors, etc.). Ionic leverages the power of Angular to provide a well-tested and stable platform.

Facebook’s React Native takes an interesting approach: it renders your application to higher-level platform-specific components to achieve a truly native look. This means you’ll have to write a separate view layer for each platform, but you’ll do it in a consistent manner. In the words of Tom Occhino, a software engineer at Facebook, they’re trying the approach of “learn once, write anywhere”, which is completely in the spirit of such a diverse ecosystem as this one.

Notable applications

While React Native doesn’t support Android just yet, it’s great that Facebook is using it in their own apps already (Facebook Groups and Facebook Ads Manager). Android support should arrive in less than two months.

Mobile applications written in Ionic or Phonegap include popular applications such as Sworkit, Mallzee, Chefsteps, Snowbuddy and Rormix. There are extensive lists of applications built with Ionic and applications built with Phonegap.


Back-end development

Node.js

Node.js is also the main driving force in back-end development in JavaScript.

The main advantage of Node.js is its event-driven, non-blocking I/O model, which makes it great at handling data-intensive real-time applications with many concurrent requests. Node.js handles all these concurrent requests on a single thread, thereby greatly reducing the needed system resources and allowing for great scalability.

A typical example of these benefits is a chat application: it requires uninterrupted connections from clients to a chat room (real-time, non-blocking) and notifications of new messages (event-driven), while supporting large numbers of clients and rooms (data-intensive).

You can also write a fairly decent web server in JavaScript. The main takeaway is that its main purpose shouldn’t be CPU-intensive tasks or connections to a relational database, but handling a high volume of connections.

The most popular modules associated with back-end development are:

  • express — simple web framework for Node.js
  • socket.io — module for building real-time web applications
  • forever — module for ensuring that a given Node.js script runs continuously

How these modules fit together

First of all, you need a web server which can process typical HTTP requests on various routes like http://localhost:3000/about. That’s where express comes in.

To keep an uninterrupted connection with the server, socket.io is used, with a server-side and a client-side component for establishing connections.

Since express runs on one thread, we must ensure that an exception doesn’t stop the process (and the server altogether). For that purpose, we use forever.
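Putting it all together, a minimal chat-style back-end could look like this sketch (assuming express and socket.io are installed via npm):

var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

// express handles the HTTP routes...
app.get('/about', function (req, res) {
    res.send('About this server');
});

// ...while socket.io keeps an uninterrupted connection to each client.
io.on('connection', function (socket) {
    socket.on('chat message', function (msg) {
        io.emit('chat message', msg); // broadcast to all connected clients
    });
});

server.listen(3000);

In production, you’d then keep the process alive with: forever start server.js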

To learn more about these modules, visit their respective websites which feature many tutorials (socket.io even has you building a chat server and client as a hello world application).


Any combination of the above

Meteor JavaScript app platform

It’s easy to imagine how all these aspects come together.

One of the most popular ways of combining them is using a full-stack JavaScript framework like MEAN or Meteor.
MEAN combines express, Angular, Node.js and MongoDB to offer a web platform whose back-end as well as front-end are written in JavaScript.
Meteor is a platform offering full-stack web and mobile application development in JavaScript.

Another example could be a JavaScript minifier for which you write the base module for Node.js and then use that module in a CLI application, a desktop application, a mobile application and a web application (served by express, of course) — all in JavaScript.

The possibilities are endless and we’re probably just scratching the surface. This ecosystem is exploding with new techniques, frameworks, modules and even language specs being defined all the time. It’s really exciting.


Where should I start?

That depends on how familiar with JavaScript you are right now.

Should you only be starting out, there are great resources for learning JavaScript (or programming, for that matter) like Codecademy, Nodeschool or Codeschool, which are all interactive and fun.

If you’ve got some jQuery knowledge under your belt and have been dabbling with pure JavaScript, know that no framework or library is ever going to replace a good understanding of core JavaScript. Start with really digging into the nitty-gritty of JavaScript. For that purpose, I can’t recommend Kyle Simpson’s You don’t know JS series enough. It’s open source and available on GitHub. The open source nature makes it really easy for you to contribute with errors you notice in the books. The books are also available as hard copies if you prefer reading that way with the added benefit of supporting the author.

With a strong JavaScript core, it would be wise to brush up on Node.js. As you’ve seen, it’s the basis for almost all of the aspects. Node.js promotes asynchronous programming which takes a while to get accustomed to but solves the problem of blocking UI. The aforementioned learning sources (Nodeschool and Codeschool) can also be used here.

After that, just follow the path that seems the most interesting. Chances are, you’ll fall deeper down the rabbit hole, discover new things and enjoy the experience even more.

comSysto loves JavaScript

Getting Started with D3.js

You are thinking about including some nice charts and graphics into your current project? Maybe you heard about D3.js, as some people claim it the universal JavaScript visualization framework. Maybe you also heard about a steep learning curve. Let’s see if this is really true!

First of all, what is D3.js?
D3.js is an open source JavaScript framework written by Mike Bostock helping you to manipulate documents based on data.

Okay, let’s first have a look at the syntax
Let’s look at the following hello world example. It will append an <h1> element saying ‘Hello World!’ to the content <div> element.

<!DOCTYPE html>
<html>
    <head>
        <script src="http://d3js.org/d3.v3.min.js"></script>
    </head>
    <body>
        <div id="content"></div>
        <script> 
            d3.select('#content')
                .append('h1')
                .text('Hello World!');
        </script>
    </body>
</html>

As you can see, the syntax is very similar to frameworks like jQuery, and it obviously saves you a lot of lines of code by offering a nice fluent API.

But let’s see how we can bind data to it:

d3.select('#content')
   .selectAll('h1')
   .data(['Sarah', 'Robert', 'Maria', 'Marc'])
   .enter()
   .append('h1')
   .text(function(name) {return 'Hello ' + name + '!'});

What happens? The data function gets our names array as a parameter and for each name we append an <h1> element with a personalized greeting message. For a second, we ignore the selectAll('h1') and enter() method calls, as we will explore them later. Looking into the browser, we see the following:

Hello Sarah!
Hello Robert!
Hello Maria!
Hello Marc!

Not bad for a start! Inspecting the element in the browser, we see the following generated markup:

[...]
    <div id="content">
        <h1>Hello Sarah!</h1>
        <h1>Hello Robert!</h1>
        <h1>Hello Maria!</h1>
        <h1>Hello Marc!</h1>
    </div>
[...]

This already shows one enormous advantage of D3.js: you actually see the generated code and can spot errors easily.

Now, let’s have a closer look at the data-document connection
As mentioned in the beginning, D3.js helps you manipulate documents based on data. We only need to hand the right data over to D3.js and the framework does the magic for us. To understand how D3.js handles data, we’ll first have a look at how data might change over time. Let’s take the document from our last example: every name is one data entry.

Data-Document Example 1

Easy. Now let’s assume new data comes in:

Data-Document Example 2

As new data comes in, the document needs to be updated. The entries for Robert and Maria need to be removed, Sarah and Marc can stay unchanged, and Mike, Sam and Nora each need a new entry. Fortunately, with D3.js we don’t have to figure out which nodes need to be added and removed; D3.js takes care of that. It will also reuse old nodes to improve performance. This is one key benefit of D3.js.

So how can we tell D3.js what to do when?
To let D3.js update our data, we initially need a data join, so D3.js knows our data. For that, we select all existing nodes and connect them with our data. We can also hand over a function, so D3.js knows how to identify data nodes. As we initially don’t have <h1> nodes, the selectAll function will return an empty set.

var textElements = svg.selectAll('h1').data(data, function(d) { return d; });

After the first iteration, the selectAll will hand over the existing nodes, in our case Sarah, Robert, Marc and Maria. So we can now update these existing nodes. For example, we can change their CSS class to grey:

textElements.attr({'class': 'grey'});

Additionally, we can tell D3.js what to do with entering nodes, in our case Mike, Sam and Nora. For example, we can add an <h1> element for each of them and set its CSS class to green:

textElements.enter().append('h1').attr({'class': 'green'});

As D3.js has now updated the old nodes and added the new ones, we can define what happens to both groups. In our case this affects the nodes of Sarah, Marc, Mike, Sam and Nora. For example, we can rotate them:

textElements.attr({'transform': 'rotate(30 20,40)'});

Furthermore, we can specify what D3.js will do with nodes like Robert and Maria that are no longer contained in the data set. Let’s change their CSS class to red:

textElements.exit().attr({'class': 'red'});

You can find the full example code to illustrate the data-document connection of D3.js as JSFiddle here: https://jsfiddle.net/q5sgh4rs/1/

But how to visualize data with D3.js?
Now that we know the basics of D3.js, let’s get to the most interesting part: drawing graphics. To do so, we use SVG, which stands for scalable vector graphics. Maybe you already know it from other contexts. In a nutshell, it’s an XML-based vector image language supporting animation and interaction. Fortunately, we can just add SVG tags to our HTML and all common browsers will display them directly. This also facilitates debugging, as we can inspect the generated elements in the browser. In the following, we see some basic SVG elements and their attributes:

SVG elements

To get a better understanding of what SVG looks like, we’ll have a look at a basic example of SVG code generating a rectangle, a line and a circle.

<svg>
 <rect x="10" y="15" width="60" height="20" />
 <line x1="95" y1="35" x2="105" y2="15" />
 <circle cx="130" cy="25" r="6" />
</svg>

To generate the same code using D3.js, we need to add an SVG to our content <div> and then append the three elements with their attributes like this:

var svg = d3.select('#content').append('svg');
svg.append('rect').attr({x: 10, y: 15, width: 60, height: 20});
svg.append('line').attr({x1: 95, y1: 35, x2: 105, y2: 15});
svg.append('circle').attr({cx: 130, cy: 25, r: 6});

Of course, for static SVG code, we wouldn’t do this, but as we already saw, D3.js can fill attributes with our data. So we are now able to create charts! Let’s see how this works:

<div id="content"></div>
<script>
 d3.select('#content')
        .append('svg')
            .selectAll('rect')
            .data([100, 200, 150, 60, 50])
            .enter()
            .append('rect')
                .attr('x', 0)
                .attr('y', function(data, index) {return index * 25})
                .attr('height', 20)
                .attr('width', function(data, index) {return data});
</script>

This will draw our first bar chart for us! Have a look at it: https://jsfiddle.net/tLhomz11/2/

How to turn this basic bar chart into an amazing one?
Now that we’ve started drawing charts, we can make use of all the nice features D3.js offers. First of all, we’ll adjust the width of each bar to fill the available space by using a linear scale, so we don’t have to scale our values by hand. To do so, we specify the range we want to map values into and the domain our data comes from. In our case, the data is between 0 and 200 and we would like to scale it to a range of 0 to 400, like this:

var xScale = d3.scale.linear().range([0, 400]).domain([0,200]);

If we now specify x values, we just use this function and get an equivalent value in the right range. If we don’t know the maximum value for the domain, we can use the d3.max() function to calculate it based on the data set we want to display.
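For example, deriving the domain’s upper bound from the data itself could look like this:

var data = [100, 200, 150, 60, 50];
var xScale = d3.scale.linear()
    .range([0, 400])
    .domain([0, d3.max(data)]); // upper bound computed from the data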

To add an axis to our bar chart, we can use the following function and call it on our SVG. To get it into the right position, we need to move it below the chart using a transform.

[svg from above].call(d3.svg.axis().scale(xScale).orient("bottom"));

Now we can also add interaction and react to user input. For example, we can show an alert if someone clicks on our chart:

[svg from above].on("click", function () {
    alert("Houston, we get attention here!");
})

Adding a text node for each line, we get the following chart rendered in the browser:

Coding Example Result

If you would like to play around with it, here is the code: https://jsfiddle.net/Loco5ddt/

If you would like to see even more D3.js code, using the same data to display a pie chart and adding an update button, look at the following one: https://jsfiddle.net/4eqzyquL/

Data import
Finally, we can import our data in CSV, TSV or JSON format. To import a JSON file, for example, use the following code. Of course, you can also fetch your JSON via a server call instead of importing a static file.

d3.json("data.json", function(data) {
    [access your data using the data variable]
});

What else does D3.js offer?
Just to name a few things, D3.js helps you with layouts, geometry, scales, ranges, data transformation, array and math functions, colors, time formatting and scales, geography, as well as drag & drop.

There are a lot of examples online: https://github.com/mbostock/d3/wiki/Gallery

TL;DR
+ based on web standards
+ totally flexible
+ easy to debug
+ many, many examples online
+ libraries built on D3.js (NVD3.js, C3.js or IVML)
– a lot of code compared to other libraries
– too heavyweight for standard charts

Learning more
As this blog post is based on a presentation held at the MunichJS Meetup, you can find the original slides here: http://slides.com/elisabethengel/d3js#/ The recording is available on YouTube: https://www.youtube.com/watch?v=EYmJEsReewo

For further information, have a look at: