
Ice cream sales break microservices, Hystrix to the rescue

In November 2015, we had the opportunity to spend three days with a greenfield project in order to get to know Spring Cloud Netflix. At comSysto, we always try to evaluate technologies before their potential use in customer projects to make sure we know their pros and cons. Of course, we had read about several aspects, but we never really got our hands dirty using it. This had to change!

Besides coming up with a simple scenario that can be completed within a few days, our main focus was on understanding potential problems in distributed systems. First of all, any distributed system comes with the ubiquitous problem of failing services that should not break the entire application. This is most prominently addressed by Netflix’ “Simian Army” which intentionally breaks random parts of the production environment.

However, we were more interested in provoking problems that arise under heavy load due to capacity limitations. Therefore, we intentionally designed a distributed application with a bottleneck that turns into an actual problem under many simultaneous requests.

Our Use Case

Our business case is an ice cream selling company that operates locations worldwide. At each location there are ice-selling robots. At the company's headquarters, we want to show an aggregated report about the ice selling activities of each country.

All our components are implemented as dedicated microservices using Spring Boot and Spring Cloud Netflix. Service discovery is implemented using Eureka server. The communication between the microservices is RESTful.


Architecture overview of our distributed system with the deployment setup during the experiments.

There is a basic location-service, which knows about all locations equipped with ice-selling robots. The data from all these locations has to be part of the report.

For every location, there is one instance of the corresponding microservice representing an ice-selling robot. Every robot locally stores the total amount of ice cream it has sold and its remaining stock, and continuously pushes this data to the central current-data-service. These updates fail at a certain rate, which is configured via a central Config Server.

For the sake of simplicity, the current-data-service stores this information in-memory. Every time it receives an update from one of the ice-selling-robots, it takes the new value and forgets about the old one. Old values are also forgotten if their timestamp is too old.
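To illustrate this behavior, a minimal in-memory store could look like the following sketch (class and method names as well as the retention time are our own assumptions, not the actual service code):

import java.util.Map;
import java.util.OptionalDouble;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the current-data-service's store: only the latest
// value per location is kept, and values older than a maximum age are ignored.
public class CurrentDataStore {

    private static final long MAX_AGE_MS = 60_000; // assumed retention of 60 seconds

    private final Map<String, TimestampedValue> latestByLocation = new ConcurrentHashMap<>();

    public void update(String locationId, double value) {
        // a newer update simply replaces the previous value for this location
        latestByLocation.put(locationId, new TimestampedValue(value, System.currentTimeMillis()));
    }

    public OptionalDouble currentValue(String locationId) {
        TimestampedValue entry = latestByLocation.get(locationId);
        if (entry == null || System.currentTimeMillis() - entry.timestamp > MAX_AGE_MS) {
            return OptionalDouble.empty(); // no value known, or the value is too old
        }
        return OptionalDouble.of(entry.value);
    }

    private static final class TimestampedValue {
        final double value;
        final long timestamp;

        TimestampedValue(double value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }
}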

The current-data-service offers an interface to retrieve the current value of either the total amount of sold ice cream or the remaining stock for a single location. This interface is used by an aggregator-service, which generates and delivers an aggregated report on demand: for all locations provided by the location-service, it retrieves the current data from the current-data-service and aggregates it by summing up the values from the individual locations, grouped by the locations' country. The resulting report consists of the summed-up values per country and data type (total sold ice cream and remaining stock).
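The aggregation itself boils down to grouping by country and summing, roughly as in this sketch (the Location type and the method names are illustrative assumptions, not our actual code):

import java.util.Map;
import java.util.stream.Collectors;

public class ReportAggregator {

    // Sum the current value (e.g. sold ice cream) of each location, grouped by country.
    public Map<String, Double> aggregateByCountry(Map<Location, Double> currentValuePerLocation) {
        return currentValuePerLocation.entrySet().stream()
                .collect(Collectors.groupingBy(
                        entry -> entry.getKey().getCountry(),
                        Collectors.summingDouble(Map.Entry::getValue)));
    }

    // Minimal location representation for this example
    public static class Location {
        private final String country;

        public Location(String country) {
            this.country = country;
        }

        public String getCountry() {
            return country;
        }
    }
}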

Because the connection between the aggregator-service and the current-data-service is quite slow, calculating the report takes a lot of time (we simulated this slow connection by routing it over wifi, which is slow compared to an internal service call on the same machine). Therefore, an aggregated-report-cache has been implemented as a fallback, and switching to this fallback is handled by Hystrix. At fixed intervals, a simple scheduled job provides the cache with the most current report.
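Such a historization job can be a plain Spring @Scheduled method, roughly as sketched below (the endpoint URLs and the 30-second rate mirror our setup, Report is the project's report type, everything else is illustrative and assumes @EnableScheduling is active):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class ReportHistorizeJob {

    private final RestTemplate restTemplate;

    public ReportHistorizeJob(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Every 30 seconds: fetch a fresh report from the aggregator-service
    // and push it into the aggregated-report-cache as the latest fallback.
    @Scheduled(fixedRate = 30_000)
    public void historizeReport() {
        Report report = restTemplate.getForObject("http://aggregator-service/", Report.class);
        restTemplate.postForObject("http://aggregated-report-cache/", report, Void.class);
    }
}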

The reporting-service is the only service with a graphical user interface. It generates a very simplistic HTML-based dashboard, which the business section of our company can use to get an overview of all the different locations. The data presented to the user is retrieved from the aggregator-service. Because this service is expected to be slow and prone to failure, a fallback is implemented which retrieves the last report from the aggregated-report-cache. With this, the user can always request a report within an acceptable response time, even though it might be slightly outdated. This is a typical example of maintaining maximum service quality in the case of partial failure.


The reporting “dashboard”.

We used a Spring Cloud Dashboard from the open source community to show all registered services:


Spring Cloud Dashboard in action.

The circuit breaker within the reporting-service can be monitored on the Hystrix dashboard.


Hystrix dashboard for reporting service under load. All circuits are closed, but 19% of all getReport requests failed and were hence successfully redirected to the cached version.

Understanding the Bottleneck

When using Hystrix, all connectors to external services typically have a thread pool of limited size to isolate system resources. As a result, the number of concurrent (or "parallel") calls from the reporting-service to the aggregator-service is limited by the size of that thread pool. This way we can easily overstress the capacity for on-demand generated reports, forcing the system to fall back to the cached report.

The relevant part of the reporting-service’s internal declaration looks as depicted in the following code snippet (note the descriptive URLs that are resolved by Eureka). The primary method getReport() is annotated with @HystrixCommand and configured to use the cached report as fallbackMethod:

@HystrixCommand(
    fallbackMethod = "getCachedReport",
    threadPoolKey = "getReportPool"
)
public Report getReport() {
    return restTemplate.getForObject("http://aggregator-service/", Report.class);
}

public Report getCachedReport() {
    return restTemplate.getForObject("http://aggregated-report-cache/", Report.class);
}
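The size of the underlying thread pool can be configured on the same command. The following sketch shows how the getReportPool could be limited to five concurrent executions (the value we use in the tests below) via hystrix-javanica's threadPoolProperties; this is an illustration, not necessarily how we configured it:

@HystrixCommand(
    fallbackMethod = "getCachedReport",
    threadPoolKey = "getReportPool",
    // limit concurrent getReport() executions; 5 matches our test setup
    threadPoolProperties = {
        @HystrixProperty(name = "coreSize", value = "5")
    }
)
public Report getReport() {
    return restTemplate.getForObject("http://aggregator-service/", Report.class);
}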

In order to be able to distinguish primary and fallback calls from the end user’s point of view, we decided to include a timestamp in every served report to indicate the delta between the creation and serving time of a report. Thus, as soon as the reporting-service delegates incoming requests to the fallback method, the age of the served report starts to increase.

Testing

With our bottleneck set up, testing and observing the runtime behavior is fairly easy. Using JMeter we configured a testing scenario with simultaneous requests to the reporting-service.

Basic data of our scenario:

  • aggregator-service instances: 1
  • test duration: 60s
  • hit rate per thread: 500ms
  • historize-job-rate: 30s
  • thread pool size for the getReport command: 5

Using the described setup, we conducted different test runs with a JMeter thread pool size (= number of concurrent simulated users) of 3, 5 and 7. Analyzing the timestamps of the served reports leads us to the following conclusions:

Using a JMeter thread count below the size of the service thread pool results in a 100% success rate for the reporting-service calls. Setting both pool sizes equal already produces a small but noticeable error rate. Finally, setting the JMeter thread count higher than the Hystrix thread pool size results in a growing number of failures and fallbacks, also forcing the circuit breaker into short-circuited (open) states.

Our measured results are as follows (note that the average report age would be 15s when always using the cached version given our historize-job-rate of 30s):

  • 3 JMeter threads: 0.78s average report age
  • 5 JMeter threads: 1.08s average report age
  • 7 JMeter threads: 3.05s average report age

After obtaining these results, we changed the setup to eliminate the slow connection by deploying the current-data-service to the same machine as the aggregator-service. The slow connection was thereby replaced with a fast, machine-internal one. With the new setup we conducted an additional test run, with the following result:

  • 7 JMeter threads, fast network: 0.74s average report age

By eliminating this part of our bottleneck, the report age drops significantly, to a value just below that of the first test run.

Remedies

The critical point of the entire system is the aggregation due to its slow connection. To address the issue, different measures can be taken.

First, it is possible to scale out by adding additional service instances. Unfortunately, this was hard to test given the hardware at hand.

Second, the slow connection itself could be optimized, as our additional measurement shows.

Last but not least, we could also design our application to always use the cache, assuming that all users should see the same report. In our simplistic scenario this would work, but of course that is not what we wanted to analyze in the first place.

Our Lessons Learned

Let us instead share a few takeaways based on our humble experience of building a simple example from scratch.

Spring Boot makes it really easy to build and run dozens of services, but really hard to figure out what is wrong when things do not work out of the box. Unfortunately, the available Spring Cloud documentation is not always sufficient. Nevertheless, Eureka works like a charm when it comes to service discovery: simply use the name of the target service in a URL and put it into a RestTemplate. That's all! Everything else is handled transparently, including client-side load balancing with Ribbon! In another lab on distributed systems, we had spent a lot of time working around exactly this issue; this time, everything was just right.
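For completeness, the wiring behind this is roughly the following sketch (not necessarily our exact code): a RestTemplate bean annotated with @LoadBalanced resolves logical service names via Eureka and load-balances calls with Ribbon.

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // URLs like "http://aggregator-service/" are resolved against Eureka
    // and the calls are load-balanced client-side by Ribbon.
    @LoadBalanced
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}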

Furthermore, our poor deployment environment (3 MacBooks…) made serious performance analysis very hard. Measuring the effect of scaling out is nearly impossible on a developer machine because of its physical resource limitations; having multiple instances of the same service doesn't gain you anything if one of them already pushes the CPU to its limits. Luckily, there are almost infinite resources in the cloud nowadays, which can be allocated in no time if required. It could be worth considering this option right away when working on microservice applications.

In Brief: Should you use Spring Cloud Netflix?

So what is our recommendation after all?

First, we were totally impressed by the way Eureka makes service discovery as easy as it can be. Given that you are running Spring Boot, starting the Eureka server and making each microservice a Eureka client is nothing more than a few dependencies and annotations. On the other hand, we did not evaluate its integration in other environments.
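To illustrate how little is needed, a Eureka server boils down to something like the following sketch (assuming the spring-cloud-starter-eureka-server dependency; the class name is ours):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

Each microservice then registers itself by adding @EnableEurekaClient (or the more generic @EnableDiscoveryClient) to its own @SpringBootApplication class and pointing it to the Eureka server in its configuration.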

Second, Hystrix is very useful for preventing cascading errors throughout the system, but it cannot be used in a production environment without suitable monitoring unless you have a soft spot for flying blind. It also introduces a few pitfalls during development. For example, while you are paused in a debugger inside a Hystrix command, the calling code will probably detect a timeout in the meantime, which can lead to completely different behavior. However, if you have the tools and skills to handle the additional complexity, Hystrix is definitely a winner.

In fact, this restriction applies to microservice architectures in general. You have to go a long way to be able to run one – but once you do, you can scale almost infinitely. Feel free to have a look at the code we produced on GitHub or discuss whatever you are up to at one of our user groups.

Machine Learning with Spark: Kaggle’s Driver Telematics Competition

Do you want to learn how to apply high-performance distributed computing to real-world machine learning problems? Then this article on how we used Apache Spark to participate in an exciting Kaggle competition might be of interest.

The Lab

At comSysto we regularly engage in labs, where we assess emerging technologies and share our experiences afterwards. While planning our next lab, kaggle.com came out with an interesting data science challenge:

AXA has provided a dataset of over 50,000 anonymized driver trips. The intent of this competition is to develop an algorithmic signature of driving type. Does a driver drive long trips? Short trips? Highway trips? Back roads? Do they accelerate hard from stops? Do they take turns at high speed? The answers to these questions combine to form an aggregate profile that potentially makes each driver unique. [1]

We signed up for the competition to take our chances and to get more hands-on experience with Spark. For more information on how Kaggle works, check out their data science competitions.

This first post describes our approach to exploring the data set, the feature extraction process we used, and how we identified drivers given the features. We mostly used APIs and libraries provided by Spark. Spark is a "fast and general computation engine for large scale data processing" that provides APIs for Python, Scala, Java and, most recently, R, as well as an interactive REPL (spark-shell). What makes Spark attractive is the proposition of a "unified stack" that covers multiple processing models on a local machine or a cluster: batch processing, streaming data, machine learning, graph processing, SQL queries and interactive ad-hoc analysis.

For computations on the entire data set we used a comSysto cluster with 3 nodes at 8 cores (i7) and 16GB RAM each, providing us with 24 cores and 48GB RAM in total. The cluster is running the MapR Hadoop distribution with MapR provided Spark libraries. The main advantage of this setup is a high-performance file system (mapr-fs) which also offers regular NFS access. For more details on the technical insights and challenges stay tuned for the second part of this post.

Telematic Data

Let's look at the data provided for the competition. We first expected the data to contain various features regarding drivers and their trips, but the raw data only contained pairs of anonymized coordinates (x, y) per trip, e.g. (1.3, 4.4), (2.1, 4.8), (2.9, 5.2), … The trips were re-centered to the same origin (0, 0) and randomly rotated around the origin (see Figure 1).

Figure 1: Anonymized driver data from Kaggle's Driver Telematics competition [1]

At this point our enthusiasm suffered a little setback: how should we identify a driver simply by looking at anonymized trip coordinates?

Defining a Telematic Fingerprint

It seemed that if we wanted useful and significant machine learning data, we would have to derive it ourselves using the provided raw data. Our first approach was to establish a “telematic fingerprint” for each driver. This fingerprint was composed of a list of features that we found meaningful and distinguishing. In order to get the driver’s fingerprint we used the following features:

Distance: The sum of the Euclidean distances between every two consecutive coordinates.

Absolute Distance: The Euclidean distance between the first and the last point.

Trip's total time stopped: The total time during which the driver was standing still.

Trip's total time: The total number of entries for a certain trip (assuming that each trip's records are taken once per second, the number of entries equals the trip's duration in seconds).

Speed: To calculate the speed at a certain point, we computed the Euclidean distance between that coordinate and the previous one. Assuming the coordinates are given in meters and recorded at a frequency of one entry per second, this yields a value in m/s. The actual unit is irrelevant, though, since we do not do any semantic analysis on it and only compare it across drivers and trips. For the speed we stored the percentiles 10, 25, 50, 80 and 98; we did the same for acceleration, deceleration and centripetal acceleration.

Acceleration: We set the acceleration to the difference between the speed at one coordinate and the speed at the previous one (when speed is increasing).

Deceleration: We set the deceleration to the difference between the speed at one coordinate and the speed at the previous one (when speed is decreasing).

Centripetal acceleration: We used the formula

a = v² / r

where v is the speed and r is the radius of the circle that the turning path would form. We already have the speed at every point, so the only thing missing is the radius. To calculate the radius we take the current, previous and subsequent coordinates. This feature is an indicator of "aggressiveness" in driving style: a high average centripetal acceleration indicates turning at higher speeds.
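To make the derivation concrete, the following plain-Java sketch (class and method names are ours) shows how speed and acceleration can be computed from the raw coordinates under the one-second/meters assumption; the percentiles mentioned above are then taken from these series per trip:

import java.util.ArrayList;
import java.util.List;

public class TripFeatures {

    // Speed per sample: Euclidean distance between consecutive points,
    // which equals m/s under the 1 Hz sampling assumption.
    public static List<Double> speeds(double[][] points) {
        List<Double> speeds = new ArrayList<>();
        for (int i = 1; i < points.length; i++) {
            double dx = points[i][0] - points[i - 1][0];
            double dy = points[i][1] - points[i - 1][1];
            speeds.add(Math.sqrt(dx * dx + dy * dy));
        }
        return speeds;
    }

    // Difference between consecutive speeds: positive values are acceleration,
    // negative values are deceleration.
    public static List<Double> speedDeltas(List<Double> speeds) {
        List<Double> deltas = new ArrayList<>();
        for (int i = 1; i < speeds.size(); i++) {
            deltas.add(speeds.get(i) - speeds.get(i - 1));
        }
        return deltas;
    }
}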

From all derived features we computed a driver profile ("telematic fingerprint") over all trips of that driver. From experience we know that the average speed when driving in the city differs from the average speed on the highway, so the average speed over all trips of a driver may not reveal much. For better results we would need to map trip features such as average speed or maximum speed to different trip types like inner city trips, long distance highway trips, rural road trips, etc.

Data Statistics: Around 2700 drivers with 200 trips each, resulting in about 540,000 trips. All trips together contain 360 million X/Y coordinates, which means – as they are tracked per second – we have 100,000 hours of trip data.

Machine Learning

After the initial data preparation and feature extraction, we could turn towards selecting and testing machine learning models for driver prediction.

Clustering

The first task was to categorize the trips: we decided to use an automated clustering algorithm (k-means) to build categories which should reflect the different trip types. The categories were derived from all trips of all drivers, which means they are not specific to a certain driver. A first look at the extracted features and computed categories revealed that some of the categories are indeed dependent on the trip length, which is an indicator for the trip type. From the cross validation results we decided to use 8 categories for our final computations. The computed cluster IDs were added to the features of every trip and used for further analysis.
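With Spark MLlib the clustering step itself is only a few lines. A sketch under our assumptions (feature vectors already extracted per trip; 8 clusters as chosen via cross validation, the iteration count is purely illustrative):

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class TripClustering {

    // Cluster the trip feature vectors into 8 categories using MLlib k-means.
    public static KMeansModel clusterTrips(JavaRDD<double[]> tripFeatures) {
        JavaRDD<Vector> vectors = tripFeatures.map(features -> Vectors.dense(features)).cache();
        int numClusters = 8;    // chosen via cross validation
        int numIterations = 20; // illustrative value
        return KMeans.train(vectors.rdd(), numClusters, numIterations);
    }
}

The resulting KMeansModel can then assign each trip its cluster ID via model.predict(vector).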

Prediction

For the driver prediction we used a Random Forest algorithm to train a model for each driver, which can predict the probability that a given trip (identified by its features) belongs to that driver. The first task was to build a training set. This was done by taking all (around 200) trips of a driver and labeling them with "1" (match), and then randomly choosing (also about 200) trips of other drivers and labeling them with "0" (no match). This training set is then fed into the Random Forest training algorithm, which results in a Random Forest model per driver. Afterwards the model was used for cross validation (i.e. evaluating the error rate on an unseen test data set) and to compute the submission for the Kaggle competition. From the cross validation results we decided to use 10 trees and a maximum tree depth of 12 for the Random Forest model (having 23 features).
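The per-driver training call against MLlib's RDD-based API looks roughly like the following sketch (10 trees and a maximum depth of 12 as stated above; the remaining parameters are assumptions or MLlib defaults):

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.RandomForest;
import org.apache.spark.mllib.tree.model.RandomForestModel;

public class DriverModelTraining {

    // One binary classifier per driver: label 1.0 = trip of this driver,
    // label 0.0 = randomly sampled trip of another driver.
    public static RandomForestModel trainDriverModel(JavaRDD<LabeledPoint> trainingData) {
        int numClasses = 2;
        Map<Integer, Integer> categoricalFeaturesInfo = new HashMap<>(); // assumption: treat all features as continuous
        int numTrees = 10;                     // chosen via cross validation
        String featureSubsetStrategy = "auto";
        String impurity = "gini";
        int maxDepth = 12;                     // chosen via cross validation
        int maxBins = 32;                      // MLlib default
        int seed = 12345;
        return RandomForest.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo,
                numTrees, featureSubsetStrategy, impurity, maxDepth, maxBins, seed);
    }
}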

An interesting comparison of the different ensemble learning algorithms for prediction (Random Forest and Gradient-Boosted Trees (GBT) from Spark's Machine Learning Library, MLlib) can be found on the Databricks blog.

Pipeline

Our workflow is split into several self-contained steps, implemented as small Java applications that can be directly submitted to Spark via the "spark-submit" command. We used Hadoop Sequence files and CSV files for input and output. The steps are as follows:


Figure 2: ML pipeline for predicting drivers

Converting the raw input files: We are faced with about 550,000 small CSV files, each containing a single trip of one driver. Loading all these files for each run of our model would be a major performance issue, so we converted all input files into a single Hadoop Sequence file, which is served from the mapr-fs file system.
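One way such a conversion can be written against the Spark Java API is sketched below (paths, class and method names are ours):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class RawDataConverter {

    // Read all small per-trip CSV files at once and write them into a single
    // Hadoop Sequence file of (file name, file content) pairs.
    public static void convert(JavaSparkContext sc, String inputDir, String outputPath) {
        JavaPairRDD<Text, Text> trips = sc.wholeTextFiles(inputDir)
                .mapToPair(file -> new Tuple2<>(new Text(file._1()), new Text(file._2())));
        trips.saveAsHadoopFile(outputPath, Text.class, Text.class, SequenceFileOutputFormat.class);
    }
}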

Extracting the features and computing statistics: We load the trip data from the sequence file, compute all the features described above as well as statistics such as variance and mean of features using the Spark RDD transformation API and write the results to a CSV file.

Computing the clusters: We load the trip features and statistics and use the Spark MLlib API to compute the clusters that categorize the trips using k-means. The features CSV is enriched with the clusterID for each trip.

Random Forest Training: For the actual model training we load the features of each trip together with some configuration values for the model parameters (e.g. maxDepth, crossValidation) and start a Random Forest model training for each driver with labeled training data and optional test data for cross-validation analysis. We serialize each Random Forest model to disk using Java serialization. In its current version, Spark provides native saving and loading of model instances, as well as options for configuring alternative serialization strategies.
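Persisting a trained model with plain Java serialization, as we did, can be as simple as the following sketch (class name ours):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

import org.apache.spark.mllib.tree.model.RandomForestModel;

public class ModelStore {

    // Write one per-driver model to disk; it is read back the same way
    // with an ObjectInputStream for prediction and submission generation.
    public static void save(RandomForestModel model, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(model);
        }
    }
}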

For the actual Kaggle submission we simply load the serialized models, predict the likelihood of each trip belonging to a given driver, and save the result in the required CSV format.

Results and Conclusions

This blog post describes our approach and methodology for solving the Kaggle Driver Telematics competition using Apache Spark. Our prediction model based on Random Forest decision trees was able to predict the driver with an accuracy of around 74 percent, which placed us at position 670 on the Kaggle leaderboard at the time of submission. Not bad for two days of work; nevertheless, we identified many possible improvements during the lab.

To learn more about the implementation details, technical challenges and lessons learned regarding Spark stay tuned for the second part of this post.

Do you want to shape a fundamental change in dealing with data in Germany? Then join our Big Data Community Alliance!

Sources:
[1] https://www.kaggle.com/c/axa-driver-telematics-analysis

Agile Project Management at the PMCamp

What is the PMCamp?

The PMCamp is the most important unconference in project management, open and diverse, and it took place in Munich at the end of July. The PMCamp brings people together as equals to learn from and with each other and to jointly shape the future of project management. The declared goal of this barcamp is to share knowledge and to grow knowledge; it builds a bridge between (seemingly) conflicting aspects of project management as it is actually practiced. We think the PMCamp is an event worth supporting in every respect, which is why we were delighted to take part this year as participants, session hosts and sponsor.

Why was comSysto at the PMCamp?

comSysto helps many organizations build expertise in modern technologies and become more agile themselves by means of advanced methods. For us, the PMCamp was an excellent opportunity and exactly the right platform to exchange ideas with like-minded people and to both contribute and take away new impulses.

We, that is Manuela, Florian, Tobias, Christian and I, went there as a group with a colorful mix of skills and shared areas of interest: Lean Java Experts, professionals with Scrum Master skills and roles in projects, project managers working in an agile way, Agile Coaches and Lean Change Managers. Each of us had several topics in mind, very concrete ones from our projects, that we are currently dealing with or want to tackle in the near future. Our goal for the PMCamp was to gather innovative input on agile project management for our daily work, to share from our own agile experience, to network, and thereby to create value for everyone present.

The comSysto team at the PMCamp

The agenda and our sessions in detail

The PMCampMuc agenda

The agenda was a colorful mix of discussion topics, workshops, prepared talks and fun sessions. What moved us:

Christian:

"The PMCamp, as a counterpart to classic conferences, is a real enrichment. Instead of polished slides and frontal presentations, the focus here is on exchanging experiences as equals, from the most diverse perspectives. Thanks to the open barcamp culture you do not need much preparation to hold a session spontaneously; interest in a topic and at least a handful of participants who also want to discuss it are enough. I myself proposed, and then successfully ran, a discussion session on "Project Management with Agile Teams – Planning, Metrics and Reporting", spread over two slots on two days. The PMCamp was the perfect stage for it, since participants from different industries, departments and corporate cultures could share their experiences and learn from each other. Together we can make a valuable contribution to establishing modern management methods and agile thinking in larger companies as well. I will definitely be back next year!"


Manuela:

"In the session on "distributed teams" we discussed which problems arise when teams work at several locations, possibly even spread across the whole world. Communication then happens largely via e-mail and online (video) conferencing tools, and the larger the cultural and language differences, the harder communication and building a basis of trust become. That makes it all the more important to organize face-to-face meetings, even if team members have to be sent halfway around the world. The participants contributed very vivid examples, for instance of projects that went badly for years until, finally, all project members were brought to the same place for a few weeks of workshops. Also interesting was the question why these problems do not seem to be as pronounced in large open source projects such as Wikipedia or Linux. Exciting topics that will definitely stay with us over the coming years.

For me, the great thing about the PMCamp was the diversity of the participants: project managers from large corporations, managers from mid-sized companies, software developers, freelancers, consultants and students were all equally represented. This resulted in very exciting and varied sessions and discussions. A real enrichment!"


Tobias:

"The field of participants was very heterogeneous (classic waterfall project managers, civil engineers, agency people, company-level process consultants, many freelancers). On the one hand this is very positive, because you get many insights into work environments you do not know and you learn how problems and challenges are approached and solved there. A slight downside, of course, was that you could not always take something away for your own work. In general, I found the socializing factor higher than at classic conferences.

"Process Change – Curse or Blessing": My session was designed as a discussion round in which I wanted to talk about the other participants' experiences, problems and solutions regarding the adaptability of process guidelines in their own team or department. When, at which point in which iteration, are adjustments to your own process necessary? Which internal and external problems arise? It quickly became apparent that the participants were, as in the whole barcamp, very heterogeneous. A lively and at times quite controversial discussion quickly developed. While we adjust our process monthly, another participant reported that they need eight months for changes. All in all a fun experience with an interesting exchange of views.

Conclusion: In general I find the barcamp concept very successful. There is a far greater exchange of experience and opinions between the participants, from which I, at least, often profit more than from classic conferences with frontal presentations."


Florian:

"The PMCamp: a great event for exchanging experiences with people responsible for projects at other companies. Classic project management does not play a big role for us, so I was all the more surprised to see how many projects are still 'managed' the classic way, even though the pros and cons of classic versus agile methods are well known. Do many companies still not dare to become more agile?

In our own session "How do I make the change happen?" we then talked with other participants about the challenges that arise during an agile transition within a company. Many of the participants could confirm our experience: teams very often work in an agile way, whereas the organization around the teams often follows classic approaches. Teams therefore often have to live with decisions being passed down from the top without involving those who will have to live with them in their daily work. This can mean that the added value of real agility is never realized: product development supported by the teams can only react to change to a limited extent, and the product does not deliver the desired value."


What follows after the PMCamp?

Each of us takes insights for our daily work away from the sessions we ran ourselves and from those we attended. Publicly, Christian's sessions will reappear and be continued in a similar form at one of the next Management 3.0 Stammtisch meetups. The official group can be found on Xing.

The next Management 3.0 Stammtisch will again take place at comSysto. A special highlight: we will get the opportunity to meet Jeff Sutherland, the "inventor" of Scrum, in person and to listen to his talk on "Scrum@Scale and Leadership Development".

More about the PMCamp Munich

More about Management 3.0 and training dates