Nowadays, 'microservices' is one of the most popular buzzwords in the field of software architecture. There are quite a lot of learning materials on the fundamentals and the benefits of microservices, but there are very few resources on how you can use microservices in real-world enterprise scenarios.
In this post, I'm planning to cover the key architectural concepts of the Microservices Architecture (MSA) and how you can use those architectural principles in practice.

Monolithic Architecture 

Enterprise software applications are designed to facilitate numerous business requirements. Hence, a given software application offers hundreds of functionalities, and all such functionalities are piled into a single monolithic application. For example, ERPs, CRMs and various other software systems are built as a monolith with several hundred functionalities. The deployment, troubleshooting, scaling and upgrading of such monstrous software applications is a nightmare.

Service Oriented Architecture (SOA) was designed to overcome some of the aforementioned limitations by introducing the concept of a 'service', which is an aggregation and grouping of similar functionalities offered by an application. Hence, with SOA, a software application is designed as a combination of 'coarse-grained' services. However, in SOA, the scope of a service is very broad. That leads to complex and mammoth services with several dozen operations (functionalities), along with complex message formats and standards (e.g., the WS-* standards).

Figure 1 : Monolithic Architecture

In most cases, services in SOA are independent from each other, yet they are deployed in the same runtime along with all the other services (think of several web applications deployed into the same Tomcat instance). Similar to monolithic software applications, these services have a habit of growing over time by accumulating various functionalities. Literally, that turns those applications into monolithic globs which are no different from conventional monolithic applications such as ERPs. Figure 1 shows a retail software application which comprises multiple services. All these services are deployed into the same application runtime, so it is a very good example of a monolithic architecture. Here are some of the characteristics of such applications, which are based on monolithic architecture.
  • Monolithic applications are designed, developed and deployed as a single unit.
  • Monolithic applications are overwhelmingly complex, which leads to nightmares in maintaining, upgrading and adding new features.
  • It is hard to practice agile development and delivery methodologies with monolithic architecture.
  • The entire application has to be redeployed in order to update any part of it.
  • Scaling: the application has to be scaled as a single unit, and it is difficult to scale components with conflicting resource requirements (e.g., one service requires more CPU while another requires more memory).
  • Reliability: one unstable service can bring the whole application down.
  • Hard to innovate: it is really difficult to adopt new technologies and frameworks, as all the functionalities have to be built on homogeneous technologies/frameworks.
These characteristics of monolithic architecture have led to the Microservices Architecture.

      Microservices Architecture 

      The foundation of the microservices architecture (MSA) is developing a single application as a suite of small and independent services, each running in its own process and developed and deployed independently.

      In most definitions of the microservices architecture, it is explained as the process of segregating the services available in the monolith into a set of independent services. However, in my opinion, microservices is not just about splitting the services available in a monolith into independent services.

      The key idea is that by looking at the functionalities offered by the monolith, we can identify the required business capabilities. Those business capabilities can then be implemented as fully independent, fine-grained and self-contained (micro)services. They might be implemented on top of different technology stacks, and each service addresses a very specific and limited business scope.
      Therefore, the online retail system scenario that we explained above can be realized with the microservices architecture, as depicted in figure 2. With the microservices architecture, the retail software application is implemented as a suite of microservices. As you can see in figure 2, based on the business requirements, there is an additional microservice created from the original set of services in the monolith. So it is quite obvious that using the microservices architecture is something beyond merely splitting up the services in the monolith.
      Figure 2 : Microservice Architecture

      So, let's dive deep into the key architectural principles of microservices and more importantly, let's focus on how they can be used in practice.

      Designing Microservices : Size, scope and capabilities

      You may be building your software application from scratch using the Microservices Architecture, or you may be converting existing applications/services into microservices. Either way, it is quite important that you properly decide the size, scope and capabilities of your microservices. This is probably the hardest thing that you initially encounter when you implement the Microservices Architecture in practice.

      Let's discuss some of the key practical concerns and misconceptions related to the size, scope and capabilities of microservices.
      • Lines of code/team size are lousy metrics: There are several discussions on deciding the size of a microservice based on the lines of code of its implementation or its team's size (i.e., the two-pizza team). However, these are considered very impractical and lousy metrics, because we can still develop services with little code, or with a two-pizza team, while totally violating the microservices architectural principles.
      • 'Micro' is a bit of a misleading term: Most developers tend to think that they should make the service as small as possible. This is a misinterpretation.
      • In the SOA context, services are often implemented as monolithic globs with support for several dozen operations/functionalities. So, having SOA-like services and rebranding them as microservices is not going to give you any benefits of the microservices architecture.
      So, then how should we properly design services in Microservices Architecture?

      Guidelines for Designing microservices 

      • Single Responsibility Principle (SRP): Having a limited and focused business scope for a microservice helps us achieve agility in the development and delivery of services.
      • During the design phase of the microservices, we should find their boundaries and align them with the business capabilities (also known as bounded contexts in Domain-Driven Design).
      • Make sure the microservices design ensures the agile/independent development and deployment of the service.
      • Our focus should be on the scope of the microservice, not on making the service smaller. The (right) size of a service should be whatever is required to facilitate a given business capability.
      • Unlike services in SOA, a given microservice should have very few operations/functionalities and a simple message format.
      • It is often a good practice to start with relatively broad service boundaries and refactor to smaller ones (based on business requirements) as time goes on.
      In our retail use case, you can find that we have split the functionalities of the monolith into four different microservices, namely 'inventory', 'accounting', 'shipping' and 'store'. Each of them addresses a limited but focused business scope, so that the services are fully decoupled from each other and ensure agility in development and deployment.

      Messaging in Microservices 

      In monolithic applications, business functionalities of different processes/components are invoked using function calls or language-level method calls. In SOA, this was shifted towards much more loosely coupled web-service-level messaging, which is primarily based on SOAP on top of different protocols such as HTTP and JMS. Web services with several dozen operations and complex message schemas were a key resistive force against the popularity of web services. For the microservices architecture, it is required to have a simple and lightweight messaging mechanism.

      Synchronous Messaging - REST, Thrift

      For synchronous messaging (the client expects a timely response from the service and waits till it gets one) in the microservices architecture, REST is the unanimous choice, as it provides a simple messaging style implemented with HTTP request-response, based on a resource API style. Therefore, most microservices implementations use HTTP along with resource-API-based styles (every functionality is represented with a resource, and operations are carried out on top of those resources).
      Figure 3 : Using REST interfaces to expose microservices
      Thrift (in which you define an interface definition for your microservice) is used as an alternative to REST/HTTP synchronous messaging.
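To make the resource-API style concrete, here is a minimal sketch of a synchronous REST call using only the Python standard library. The 'inventory' service, its URL layout and the sample SKU are all illustrative, not part of any real system:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A hypothetical 'inventory' microservice exposing one resource over HTTP.
INVENTORY = {"sku-1001": {"name": "Laptop", "stock": 42}}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Resource-style URL: /items/<sku>
        sku = self.path.rsplit("/", 1)[-1]
        item = INVENTORY.get(sku)
        body = json.dumps(item if item else {"error": "not found"}).encode()
        self.send_response(200 if item else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("localhost", 0), InventoryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client blocks until the response arrives -- that is the synchronous part.
url = f"http://localhost:{server.server_port}/items/sku-1001"
response = json.loads(urlopen(url).read())
print(response["stock"])  # -> 42
server.shutdown()
```

In a real deployment the client would, of course, call a remote host rather than an in-process server; the point is only the request/response shape of the interaction.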

      Asynchronous Messaging - AMQP, STOMP, MQTT

      For some microservices scenarios, it is required to use asynchronous messaging techniques (the client doesn't expect a response immediately, or doesn't accept a response at all). In such scenarios, asynchronous messaging protocols such as AMQP, STOMP or MQTT are widely used.

      Message Formats - JSON, XML, Thrift, ProtoBuf, Avro 

      Deciding on the most suitable message format for microservices is another key factor. Traditional monolithic applications use complex binary formats, while SOA/web-services-based applications use text messages based on complex message formats (SOAP) and schemas (XSD). Most microservices-based applications use simple text-based message formats such as JSON and XML on top of the HTTP resource API style. In cases where we need a binary message format (text messages can become verbose in some use cases), microservices can leverage binary message formats such as binary Thrift, ProtoBuf or Avro.
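The text-versus-binary trade-off can be illustrated with a toy example: the same message serialized as JSON and as a fixed-layout binary record (a hand-rolled stand-in for what Thrift, ProtoBuf or Avro would generate from a schema). The field names and layout are hypothetical:

```python
import json
import struct

# The same 'inventory update' message in both encodings.
message = {"sku": 1001, "stock": 42, "price_cents": 129999}

# Text format: self-describing but verbose.
text_encoding = json.dumps(message).encode("utf-8")

# Binary format: three unsigned 32-bit integers, field order fixed by the
# (hypothetical) schema -- compact, but meaningless without that schema.
binary_encoding = struct.pack("!III",
                              message["sku"],
                              message["stock"],
                              message["price_cents"])

print(len(text_encoding), len(binary_encoding))  # binary is several times smaller
decoded = struct.unpack("!III", binary_encoding)
assert decoded == (1001, 42, 129999)
```

The binary record is only 12 bytes here, but both sides must agree on the schema out of band, which is exactly what the IDL-based formats provide.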

      Service Contracts - Defining the service interfaces - Swagger, RAML, Thrift IDL

      When you have a business capability implemented as a service, you need to define and publish the service contract. In traditional monolithic applications, we barely find such features for defining the business capabilities of an application. In the SOA/web services world, WSDL is used to define the service contract, but as we all know, WSDL is not the ideal solution for defining a microservices contract, as it is insanely complex and tightly coupled to SOAP.
      Since we build microservices on top of the REST architectural style, we can use the same REST API definition techniques to define the contract of the microservices. Therefore, microservices use standard REST API definition languages such as Swagger and RAML to define the service contracts.


      For other microservices implementations which are not based on HTTP/REST, such as Thrift, we can use the protocol-level 'Interface Definition Languages' (IDLs) (e.g., Thrift IDL).

      Integrating Microservices (Inter-service/process Communication)

      In the microservices architecture, software applications are built as a suite of independent services. So, in order to realize a business use case, there need to be communication structures between the different microservices/processes. That's why inter-service/process communication between microservices is such a vital aspect.

      In SOA implementations, the inter-service communication between services is facilitated with an Enterprise Service Bus (ESB), and most of the business logic resides in that intermediate layer (message routing, transformation and orchestration). However, the microservices architecture promotes eliminating the central message bus/ESB and moving the 'smartness', or business logic, into the services and clients (known as 'smart endpoints').
      Since microservices use standard protocols such as HTTP and formats such as JSON, the requirement of integrating disparate protocols is minimal when it comes to the communication among microservices. Another alternative approach in microservices communication is to use a lightweight message bus or gateway with minimal routing capabilities, acting merely as a 'dumb pipe' with no business logic implemented in the gateway. Based on these styles, several communication patterns have emerged in the microservices architecture.

      Point-to-point style - Invoking services directly 

      In the point-to-point style, the entire message routing logic resides at each endpoint and the services communicate directly. Each microservice exposes a REST API, and a given microservice or an external client can invoke another microservice through its REST API.

      Figure 4 : Inter-service communication with point-to-point connectivity. 
      Obviously, this model works for relatively simple microservices-based applications, but as the number of services increases, it becomes overwhelmingly complex. After all, that's the exact reason for using an ESB in traditional SOA implementations: to get rid of messy point-to-point integration links. Let's try to summarize the key drawbacks of the point-to-point style for microservices communication.
      • Non-functional requirements such as end-user authentication, throttling, monitoring etc. have to be implemented at each and every microservice level.
      • As a result of duplicating common functionalities, each microservice implementation can become complex.
      • There is no control whatsoever over the communication between the services and clients (not even for monitoring, tracing or filtering).
      • The direct communication style is often considered a microservices anti-pattern for large-scale microservices implementations.
      Therefore, for complex microservices use cases, rather than having point-to-point connectivity or a central ESB, we could have a lightweight central message bus which provides an abstraction layer for the microservices and can be used to implement various non-functional capabilities. This style is known as the API Gateway style.

      API-Gateway style

      The key idea behind the API Gateway style is to use a lightweight message gateway as the main entry point for all clients/consumers and to implement the common non-functional requirements at the gateway level. In general, an API Gateway allows you to consume a managed API over REST/HTTP. Therefore, we can expose our business functionalities, which are implemented as microservices, through the API-GW as managed APIs. In fact, this is a combination of the microservices architecture and API management, which gives you the best of both worlds.
      Figure 5: All microservices are exposed through an API-GW. 
      In our retail business scenario, as depicted in figure 5, all the microservices are exposed through an API-GW, which is the single entry point for all the clients. If a microservice wants to consume another microservice, that also needs to be done through the API-GW.
      API-GW style gives you the following advantages.
      • Ability to provide the required abstractions at the gateway level for the existing microservices. For example, rather than providing a one-size-fits-all style API, the API gateway can expose a different API for each client.
      • Lightweight message routing/transformations at the gateway level.
      • A central place to apply non-functional capabilities such as security, monitoring and throttling.
      • With the use of the API-GW pattern, the microservices become even more lightweight, as all the non-functional requirements are implemented at the gateway level.
      The API-GW style could well be the most widely used pattern in most microservice implementations.  
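As an illustration only, here is a toy in-process sketch of the API-GW style: routing plus one shared non-functional requirement (token checking) handled once at the gateway rather than duplicated in every microservice. The service names, responses and token scheme are all made up for the example:

```python
# Two stand-in microservices; in reality these would be remote HTTP calls.
def inventory_service(request):
    return {"stock": 42}

def shipping_service(request):
    return {"eta_days": 3}

ROUTES = {"/inventory": inventory_service, "/shipping": shipping_service}
VALID_TOKENS = {"secret-token"}  # illustrative; a real GW would verify OAuth2 tokens

def api_gateway(path, token, request=None):
    # Non-functional requirement (authentication) applied centrally, once.
    if token not in VALID_TOKENS:
        return {"status": 401, "body": {"error": "unauthorized"}}
    service = ROUTES.get(path)
    if service is None:
        return {"status": 404, "body": {"error": "no such service"}}
    # Lightweight routing: forward the request to the right microservice.
    return {"status": 200, "body": service(request)}

print(api_gateway("/inventory", "secret-token"))  # routed to the inventory service
print(api_gateway("/inventory", "bad-token"))     # rejected at the gateway
```

Because the token check lives only in `api_gateway`, the services themselves stay free of security plumbing, which is exactly the point of the pattern.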

      Message Broker style

      Microservices can be integrated in asynchronous messaging scenarios, such as one-way requests and publish-subscribe messaging, using queues or topics. A given microservice can be the message producer and asynchronously send messages to a queue or topic. The consuming microservice can then consume messages from that queue or topic. This style decouples message producers from message consumers, and the intermediate message broker buffers messages until the consumer is able to process them. Producer microservices are completely unaware of the consumer microservices.
      Figure 6 : Asynchronous messaging based integration using pub-sub.  

      The communication between the consumers/producers is facilitated through a message broker which is based on asynchronous messaging standards such as AMQP and MQTT.
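The decoupling described above can be sketched with a tiny in-memory stand-in for the broker (a real deployment would use an AMQP/MQTT broker such as RabbitMQ or Mosquitto; the topic and service names here are illustrative):

```python
import queue

# A toy broker: a topic is a list of subscriber queues, so the producer
# never knows who (if anyone) consumes its messages.
class Broker:
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        # Fan the message out to every subscriber; the queues buffer it
        # until each consumer is ready to process it.
        for q in self.topics.get(topic, []):
            q.put(message)

broker = Broker()
shipping_inbox = broker.subscribe("orders")    # 'shipping' microservice
accounting_inbox = broker.subscribe("orders")  # 'accounting' microservice

# The 'store' microservice publishes without knowing its consumers.
broker.publish("orders", {"order_id": 7, "sku": "sku-1001"})

shipping_msg = shipping_inbox.get()
accounting_msg = accounting_inbox.get()
print(shipping_msg["order_id"], accounting_msg["order_id"])  # -> 7 7
```

Swapping in a real broker changes only the transport; the producer-side code still publishes to a topic and remains unaware of the subscribers.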

      Decentralized Data Management 

      In the monolithic architecture, the application stores data in a single, centralized database to implement the various functionalities/capabilities of the application.


      Figure 7 : Monolithic application uses a centralized database to implement all its features. 
      In the microservices architecture, the functionalities are dispersed across multiple microservices, and if we used the same centralized database, the microservices would no longer be independent from each other (for instance, if the database schema is changed by a given microservice, that will break several other services). Therefore, each microservice has to have its own database.
      Figure 8 : Each microservice has its own private database, and they can't directly access the databases owned by other microservices.
      Here are the key aspects of implementing decentralized data management in the microservices architecture.
      • Each microservice can have a private database to persist the data that is required to implement the business functionality it offers.
      • A given microservice can only access its own dedicated private database, not the databases of other microservices.
      • In some business scenarios, you might have to update several databases for a single transaction. In such scenarios, the databases of other microservices should be updated through their service APIs only (direct database access is not allowed).
      Decentralized data management gives you fully decoupled microservices and the liberty of choosing disparate data management techniques (SQL or NoSQL, different database management systems for each service, etc.). However, for complex transactional use cases that involve multiple microservices, the transactional behavior has to be implemented using the APIs offered by each service, and the logic resides either at the client or intermediary (GW) level.
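The database-per-service rule can be sketched as follows, with an in-memory SQLite database standing in for each service's private store and plain method calls standing in for the service APIs. The schemas, service names and sample SKU are illustrative:

```python
import sqlite3

class InventoryService:
    def __init__(self):
        # This connection is private to the service; no other service sees it.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE items (sku TEXT PRIMARY KEY, stock INTEGER)")
        self.db.execute("INSERT INTO items VALUES ('sku-1001', 42)")

    def reserve(self, sku):
        # The only way other services may affect inventory data is this API.
        cur = self.db.execute(
            "UPDATE items SET stock = stock - 1 WHERE sku = ? AND stock > 0", (sku,))
        return cur.rowcount == 1

class StoreService:
    def __init__(self, inventory_api):
        self.inventory = inventory_api  # depends on the API, not on the database

    def place_order(self, sku):
        return "accepted" if self.inventory.reserve(sku) else "out of stock"

store = StoreService(InventoryService())
result = store.place_order("sku-1001")
print(result)  # -> accepted
```

If the inventory schema changes, only `InventoryService` has to change; the store service keeps working as long as the `reserve` API is honored.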

      Decentralized Governance

      Microservices architecture favors decentralized governance. 
      In general, 'governance' means establishing and enforcing how people and solutions work together to achieve organizational objectives. In the context of SOA, SOA governance guides the development of reusable services, establishing how services will be designed and developed and how those services will change over time. It establishes agreements between the providers of services and the consumers of those services, telling the consumers what they can expect and the providers what they're obligated to provide. In SOA governance, there are two types of governance in common use:
      • Design-time governance - defining and controlling service creation, design and the implementation of service policies
      • Run-time governance - the ability to enforce service policies during execution
      So what does governance in the microservices context really mean? In the microservices architecture, the microservices are built as fully independent and decoupled services using a variety of technologies and platforms. So there is no need to define common standards for service design and development. We can summarize the decentralized governance capabilities of microservices as follows.
      • In the microservices architecture, there is no requirement for centralized design-time governance.
      • Microservices can make their own decisions about their design and implementation.
      • The microservices architecture fosters the sharing of common/reusable services.
      • Some run-time governance aspects, such as SLAs, throttling, monitoring, common security requirements and service discovery, may be implemented at the API-GW level.

      Service Registry and Service Discovery 

      In the microservices architecture, the number of microservices that you need to deal with is quite high, and their locations change dynamically owing to the rapid and agile development/deployment nature of microservices. Therefore, you need to find the location of a microservice at runtime. The solution to this problem is to use a service registry.

      Service Registry

      The service registry holds the microservice instances and their locations. Microservice instances are registered with the service registry on startup and deregistered on shutdown. Consumers can find the available microservices and their locations through the service registry.

      Service Discovery

      To find the available microservices and their locations, we need a service discovery mechanism. There are two types of service discovery mechanisms: client-side discovery and server-side discovery. Let's have a closer look at them.

      Client-side Discovery
      In this approach the client or the API-GW obtains the location of a service instance by querying a Service Registry. 
      Figure 9 - Client-side discovery 
      Here the client/API-GW has to implement the service discovery logic by calling the Service-Registry component. 

      Server-side Discovery
      With this approach, the client/API-GW sends the request to a component (such as a load balancer) that runs at a well-known location. That component calls the service registry and determines the absolute location of the microservice.

      Figure 10 - Server-side discovery

      Microservices deployment solutions such as Kubernetes (http://kubernetes.io/v1.1/docs/user-guide/services.html) offer server-side discovery mechanisms.
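The registry's register/deregister/lookup lifecycle can be sketched in a few lines. This is a toy, single-process illustration (production registries such as those built into Kubernetes also handle health checks, leases and replication); the service names and addresses are made up:

```python
class ServiceRegistry:
    def __init__(self):
        self.instances = {}  # service name -> {instance id: location}

    def register(self, name, instance_id, location):
        # Called by an instance on startup.
        self.instances.setdefault(name, {})[instance_id] = location

    def deregister(self, name, instance_id):
        # Called by an instance on (graceful) shutdown.
        self.instances.get(name, {}).pop(instance_id, None)

    def lookup(self, name):
        # A client or API-GW queries this at request time (client-side
        # discovery) and picks one instance, e.g. round-robin.
        return list(self.instances.get(name, {}).values())

registry = ServiceRegistry()
registry.register("inventory", "inv-1", "http://10.0.0.5:8080")
registry.register("inventory", "inv-2", "http://10.0.0.6:8080")
print(len(registry.lookup("inventory")))  # -> 2

registry.deregister("inventory", "inv-1")  # one instance shuts down
remaining = registry.lookup("inventory")
print(remaining)  # -> ['http://10.0.0.6:8080']
```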

      Deployment

      When it comes to microservices architecture, the deployment of microservices plays a critical role and has the following key requirements.
      • The ability to deploy/undeploy independently of other microservices.
      • It must be possible to scale at the individual microservice level (a given service may get more traffic than the others).
      • Building and deploying microservices quickly.
      • A failure in one microservice must not affect any of the other services.
      Docker (an open source engine that lets developers and system administrators deploy self-sufficient application containers in Linux environments) provides a great way to deploy microservices addressing the above requirements. The key steps involved are as follows. 
      • Package the microservice as a (Docker) container image.
      • Deploy each service instance as a container.
      • Scaling is done by changing the number of container instances.
      • Building, deploying and starting microservices will be much faster as we are using Docker containers (which are much faster to start than regular VMs).
      Kubernetes extends Docker's capabilities by allowing you to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple hosts, and offering co-location of containers, service discovery and replication control. As you can see, most of these features are essential in our microservices context too. Hence, using Kubernetes (on top of Docker) for microservices deployment has become an extremely powerful approach, especially for large-scale microservices deployments.
      Figure 11 : Building and deploying microservices as containers. 
      Figure 11 shows an overview of the deployment of the microservices of the retail application. Each microservice instance is deployed as a container, and there are two containers per host. You can arbitrarily change the number of containers that you run on a given host.

      Security

      Securing microservices is quite a common requirement when you use microservices in real-world scenarios. Before jumping into microservices security, let's have a quick look at how we normally implement security at the monolithic application level.

      • In a typical monolithic application, security is about finding out 'who is the caller', 'what can the caller do' and 'how do we propagate that information'.
      • This is usually implemented in a common security component at the beginning of the request handling chain, and that component populates the required information with the use of an underlying user repository (or user store).
      So, can we directly translate this pattern into the microservices architecture? Yes, but that requires a security component implemented at each microservice level which talks to a centralized/shared user repository and retrieves the required information. That is a very tedious approach to solving the microservices security problem.
      Instead, we can leverage widely used API security standards such as OAuth2 and OpenID Connect to find a better solution to the microservices security problem. Before diving deep into that, let me just summarize the purpose of each standard and how we can use them.
      • OAuth2 is an access delegation protocol. The client authenticates with the authorization server and gets an opaque token, which is known as an 'access token'. The access token carries zero information about the user/client; it only has a reference to the user information, which can only be resolved by the authorization server. Hence, this is known as a 'by-reference token', and it is safe to use this token even on the public network/internet.
      • OpenID Connect behaves similarly to OAuth2, but in addition to the access token, the authorization server issues an ID token, which contains information about the user. This is often implemented as a JWT (JSON Web Token) signed by the authorization server, which ensures the trust between the authorization server and the client. The JWT is therefore known as a 'by-value token', as it contains the information about the user, and obviously it is not safe to use it outside the internal network.
      Now, lets see how we can use these standards to secure microservices in our retail example. 
      Figure 12 : Microservice security with OAuth2 and OpenID Connect


      As shown in figure 12, these are the key steps involved in implementing microservices security.

      • Leave authentication to the OAuth2/OpenID Connect server (the authorization server), so that microservices only grant access when the caller has the right to use the data.
      • Use the API-GW style, in which there is a single entry point for all the client requests.
      • The client connects to the authorization server and obtains the access token (by-reference token). It then sends the access token to the API-GW along with the request.
      • Token translation at the gateway - the API-GW extracts the access token and sends it to the authorization server to retrieve the JWT (by-value token).
      • The GW then passes this JWT along with the request to the microservices layer.
      • The JWT contains the necessary information to help with storing user sessions etc. If each service can understand a JSON Web Token, then you have distributed your identity mechanism, which allows you to transport identity throughout your system.
      • At each microservice level, we can have a component that processes the JWT, which is quite a trivial implementation.
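The by-value token idea can be sketched with the standard library alone: the authorization server signs the user claims, and each microservice verifies the signature locally, without calling back. This is a deliberately simplified illustration (a single shared HMAC key, no expiry, no header); a real deployment should use a proper JWT library such as PyJWT with asymmetric signatures:

```python
import base64
import hashlib
import hmac
import json

# Illustrative shared key; real JWTs are usually signed with the authorization
# server's private key and verified with its public key.
SECRET = b"authorization-server-key"

def issue_token(claims):
    # What the authorization server does: encode the claims and sign them.
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature

def verify_token(token):
    # What each microservice does: check the signature, then trust the claims.
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"sub": "alice", "scope": "inventory:read"})
claims = verify_token(token)
print(claims["sub"])  # -> alice
```

The point is that identity travels with the request: any service holding the verification key can establish who the caller is without a round trip to a central user store.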

      Transactions 

      What about transaction support in microservices? In fact, supporting distributed transactions across multiple microservices is an exceptionally complex task. The microservices architecture itself encourages transaction-less coordination between services.

      The idea is that a given service is fully self-contained and based on the single responsibility principle. The need for distributed transactions across multiple microservices is often a symptom of a design flaw in the microservices architecture, and it can usually be sorted out by refactoring the scopes of the microservices.
      However, if there is a mandatory requirement to have distributed transactions across multiple services, then such scenarios can be realized by introducing 'compensating operations' at each microservice level. The key idea is that a given microservice is based on the single responsibility principle, and if a given microservice fails to execute a given operation, we can consider that a failure of the entire distributed operation. Then all the other (upstream) operations have to be undone by invoking the respective compensating operation of each of those microservices.
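The compensating-operations idea can be sketched as follows: each step in the distributed 'transaction' pairs a service call with an undo action, and when a downstream step fails, the completed upstream steps are reversed in order. The service operations here are stubs invented for the example:

```python
log = []  # records the order in which (stub) service operations run

def charge_card(order):
    log.append("charged")
    return True

def refund_card(order):       # compensating operation for charge_card
    log.append("refunded")

def reserve_stock(order):
    log.append("reserved")
    return True

def release_stock(order):     # compensating operation for reserve_stock
    log.append("released")

def ship_order(order):
    return False              # the shipping microservice fails

# Each step: (operation, its compensating operation).
steps = [(charge_card, refund_card),
         (reserve_stock, release_stock),
         (ship_order, None)]

def run_transaction(order):
    completed = []
    for operation, compensate in steps:
        if operation(order):
            completed.append(compensate)
        else:
            # A step failed: undo the upstream operations in reverse order.
            for undo in reversed(completed):
                undo(order)
            return False
    return True

ok = run_transaction({"order_id": 7})
print(ok)   # -> False
print(log)  # -> ['charged', 'reserved', 'released', 'refunded']
```

Note that compensation gives eventual consistency rather than the atomicity of a database transaction: other services may briefly observe the intermediate states.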

      Design for Failures

      The microservices architecture introduces a dispersed set of services and, compared to a monolithic design, that increases the possibility of having failures at each service level. A given microservice can fail due to network issues, unavailability of the underlying resources, etc. An unavailable or unresponsive microservice should not bring the whole microservices-based application down. Thus, microservices should be fault tolerant, be able to recover when that is possible, and the client has to handle failures gracefully.
      Also, since services can fail at any time, it's important to be able to detect the failures quickly (real-time monitoring) and, if possible, automatically restore the services.
      There are several commonly used patterns in handling errors in Microservices context.

      Circuit Breaker

      When you are making an external call to a microservice, you configure a fault monitor component with each invocation, and when the failures reach a certain threshold, that component stops any further invocations of the service (it trips the circuit). After a certain time, or number of trial requests, in the open state (which you can configure), the circuit is changed back to the closed state.
      This pattern is quite useful for avoiding unnecessary resource consumption and request delays due to timeouts, and it also gives us the chance to monitor the system (based on the active open-circuit states).
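A minimal circuit breaker might look like the following sketch: after a configurable number of consecutive failures, the circuit opens and further calls fail fast without touching the service. (Production implementations, e.g. Netflix's Hystrix, also add the half-open state and time-based reset, which this toy omits.)

```python
class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.state = "closed"

    def call(self, func):
        if self.state == "open":
            # Fail fast: don't even try the (presumably dead) service.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.state = "open"  # trip the circuit
            raise
        self.failures = 0  # any success resets the failure count
        return result

def flaky_service():
    raise ConnectionError("service unavailable")

breaker = CircuitBreaker(threshold=3)
for _ in range(3):
    try:
        breaker.call(flaky_service)
    except ConnectionError:
        pass  # the first failures still reach the service

print(breaker.state)  # -> open; the fourth call would now fail fast
```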

      Bulkhead

      As a microservices application comprises a number of microservices, the failure of one part of the application should not affect the rest of it. The bulkhead pattern is about isolating different parts of your application so that a failure of a service in one part does not affect any of the other services.
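One common way to realize a bulkhead, sketched below, is to give each downstream dependency its own small, bounded thread pool, so a hanging service cannot exhaust the threads needed to call the healthy ones. The pool sizes and service names are illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# One bounded pool per downstream service: the bulkheads.
pools = {
    "shipping": ThreadPoolExecutor(max_workers=2),
    "inventory": ThreadPoolExecutor(max_workers=2),
}

def call_service(name, func):
    return pools[name].submit(func)

def slow_shipping():
    time.sleep(2)  # the shipping service is hanging

def fast_inventory():
    return 42

# Saturate the shipping pool with hanging calls...
for _ in range(4):
    call_service("shipping", slow_shipping)

# ...and inventory calls still complete immediately in their own pool.
stock = call_service("inventory", fast_inventory).result(timeout=1)
print(stock)  # -> 42
```

With a single shared pool, the four hanging shipping calls would have consumed every worker and the inventory call would have queued behind them.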

      Timeout

      The timeout pattern is a mechanism that allows you to stop waiting for a response from a microservice when you think that it won't come. Here you can configure the time interval that you wish to wait.
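As a sketch, the pattern amounts to bounding the wait and substituting a fallback when the deadline passes. The service stub and the half-second budget are invented for the example:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow_microservice():
    time.sleep(3)  # this service won't answer in time
    return "too late"

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(slow_microservice)
try:
    # Wait at most half a second for the response.
    reply = future.result(timeout=0.5)
except TimeoutError:
    reply = "fallback response"  # stop waiting and degrade gracefully

print(reply)  # -> fallback response
executor.shutdown(wait=False)
```

Note that the timeout frees the caller, not the worker: the abandoned call still runs to completion in the background, which is one reason timeouts are usually combined with circuit breakers and bulkheads.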

      So, where and how do we use these patterns with microservices? In most cases, most of these patterns are applicable at the gateway level. That means when the microservices are not available or not responding, at the gateway level we can decide whether to send the request to the microservice, using the circuit breaker or timeout pattern. It is also quite important to have patterns such as bulkhead implemented at the gateway level, as it is the single entry point for all the client requests, so a failure in a given service should not affect the invocation of the other microservices.

      In addition, the gateway can be used as the central point at which we can obtain the status of, and monitor, each microservice, as every microservice is invoked through the gateway.

      Microservices, Enterprise Integration, API Management and beyond. 

      We have discussed various characteristics of Microservices architecture and how you could implement them in the modern enterprise IT landscape. However, we should keep in mind that Microservices is not a panacea. The blind adoption of buzzword concepts is not going to solve your 'real' enterprise IT problems. As you have seen throughout this blog post, there are quite a lot of advantages to microservices, and we should leverage them. But we also have to keep in mind that it is not realistic to solve all enterprise IT problems with microservices. For instance, Microservices architecture promotes eliminating the ESB as the central bus, but in real-world IT there are quite a lot of existing applications/services which are not based on microservices. To integrate with them, we need some sort of integration bus. So, ideally, a hybrid approach of microservices and other enterprise architectural concepts such as integration would be more realistic. I will discuss this further in a separate blog post.

      Hope this gives you a much clearer idea of how you can use microservices in your enterprise.

      References 

      http://microservices.io/patterns/microservices.html
      http://martinfowler.com/articles/microservices.html
      http://techblog.netflix.com/2011/12/making-netflix-api-more-resilient.html
      http://www.infoq.com/articles/seven-uservices-antipatterns
      https://pragprog.com/book/mnee/release-it


      1. I'm glad to announce that my first book "Beginning WSO2 ESB" has been published with Apress Publications. In this post, I would like to share some of the key highlights of the book.

        Why "Beginning WSO2 ESB"? 

        The book 'Beginning WSO2 ESB' provides comprehensive coverage of the fundamentals of WSO2 ESB and its capabilities, through real-world enterprise integration use cases. Hence this book is well suited both for readers who are getting started with WSO2 ESB and for those who are already familiar with it.

        Also, WSO2 ESB has drastically evolved over the last several years and is now in its 5th generation. Many new features have been added, while some of the existing features have been deprecated. Therefore, there is an increasing demand for a central place to understand the new features along with the best practices for using the existing ones. This book is fully up to date with the latest WSO2 ESB 5.0 version, so you will learn the fully state-of-the-art capabilities of WSO2 ESB and the recommended, proven best practices.

        Use case-oriented learning experience

        One of the easiest ways to understand any given concept is to study a related example and then dive deep into the theoretical aspects. We practice that approach extensively throughout the entire book. Almost all the concepts are explained through real-world use cases, and you can try out most of the sample use cases while you read the book. Most of these use cases are inspired by real-world applications of WSO2 ESB.

        Best practices and recommended patterns

        Almost all the examples in the book illustrate the most suitable constructs to select for building a given integration scenario, and how to design them in the most optimal way.
        Over the years, many enhancements have been added to WSO2 ESB, and users are really keen to know the best approach to building a given integration scenario. Since the book follows a use-case-oriented model, readers can easily understand the use cases given in the book and directly map and extend them to the real integration scenarios that they want to build.

        Future? 

        This book is coming out in an era when some people think that the ESB concept is reaching its end of life, owing to the advent of microservices architecture (this applies universally to all ESB vendors). However, in my opinion, the ESB as an integration solution will continue to dominate the enterprise integration space for the next several years. But ESBs and integration platforms will morph to adopt new architectural concepts such as microservices and container architecture. Stay tuned for more interesting stuff on this :).


        Acknowledgement 


        Before I wrap up this post, I must thank all the people who have helped me in writing and publishing this book. 
        First of all, I would like to thank Dr. Sanjiva Weerawarana who is the founder, CEO, and chief architect of WSO2, for all the guidance that he has provided throughout all these years at WSO2. 
        I am also grateful to Apress Publications for giving me the opportunity to write a book for them, and I would especially like to thank acquisitions editor Pramila Balan for all the support given.

        I'm grateful to Prabath Siriwardana, who gave me the initial idea of writing a book and shared all his experiences of writing one. I would also like to thank Isuru Udana for his contribution as the technical reviewer, and all the other ESB team members for their support. I express my gratitude to Selvaratnam Uthaiyashankar and Samisa Abeysinghe for all their help and guidance.

        I’m grateful to my beloved wife Imesha, my parents, and my sister, who are the main driving forces behind all my success. Last but not least, thank you to everyone who supported me in many different ways.

        Get your copy! 

        "Beginning WSO2 ESB" is available for purchase at all the leading online book retailers, including Amazon, Apress, Google Books and Barnes & Noble.

        If you loved the book and have a minute to spare, I would really appreciate a short review on the page or site where you bought the book. Your help in spreading the word is greatly appreciated.  



        Thank you!

        - Kasun Indrasiri









      2. As microservices architecture is gaining so much traction, most enterprise IT stakeholders are wondering how it affects other architectural paradigms such as Enterprise Integration and API Management. The objective of this post is to provide an insight into what a modern enterprise architecture would look like once we introduce Microservices architecture into it. (If you are still new to Microservices architecture, you may refer to my previous blog post, 'Microservices in Practice'.)
        I would like to start this discussion of how Microservices fits into the overall IT landscape by elaborating on some views from Gartner on Microservices.
        The microservices architecture essentially removes quite a lot of complexity when it comes to the design, development, deployment and inter-service/system communication of the (micro)services.
        However, the complexity which is removed from the Microservices layer has to be absorbed by some other component/layer. For example, since MSA does not recommend using an ESB as the centralized bus, all the tasks previously done by the ESB, such as service orchestration, routing and integration with disparate systems, have to be done by other components, including the microservices themselves.

        Inner and Outer Architecture 

        In order to use MSA in real-world IT solutions, we need to address both of the above-mentioned requirements. Gartner suggests that MSA has two different architectural domains.
        • Inner Architecture : The pure microservices components, which are less complex, are categorized under the 'Inner Architecture'.
        • Outer Architecture : This delivers the platform capabilities that are required to build a solution around the microservices that we build.
        In fact, the combination of the inner and outer architectures forms the modern microservices-based enterprise IT architecture.

        The Modern Enterprise Architecture with Microservices

        The microservices architecture encourages enterprises to build all their IT solutions as microservices and not to use any intermediary integration products such as an ESB. However, unless you are a green-field startup with no internal proprietary or legacy systems, that's not a realistic approach. If you consider any large organization or corporation, you simply can't convert all your software systems, services and solutions to microservices. But such organizations still want to leverage the microservices architecture to build agile and scalable software solutions. So, what we really need is a mix of Microservices architecture blended with the conventional monolithic architecture of the existing systems.
        Figure 1.1 : The modern Enterprise Architecture with Microservices, Enterprise Integration and API Management
        Figure 1.1 illustrates a high-level overview of a modern enterprise IT architecture. Here you can see that we have used a hybrid architecture which comprises both microservices and existing systems. This is similar to what Gartner has presented with their inner and outer architecture model.
        Here are the key design decisions that you need to make when you introduce Microservices architecture to your organization.
        • Use microservices architecture to build solutions whenever required, and try to gain the full power that MSA brings.
        • Enterprise Integration is still needed : Since we are going with a hybrid approach, you will still need to integrate all your internal systems and services using integration software such as an ESB.
        • You won't be able to throw away most of the existing systems, but the new microservices may need to call such monolithic systems to facilitate various business requirements. In this case, you may use the underlying integration software/ESB, and a microservice can call the integration server to connect to disparate systems.
        • The 'new' ESBs : Although integration software such as an ESB may still be needed in the modern enterprise architecture, such tools won't be the central integration bus any more. Organizations should look for lightweight, high-performance and scalable integration software instead of heavyweight integration frameworks.
        • API Management : Microservices can be exposed via the Gateway, and all the API Management techniques can be applied at that layer. All the other requirements, such as security, throttling, caching, monetization and monitoring, have to be handled at the Gateway layer. The non-microservice-based services (traditional SOA stuff) can also be exposed through the API Gateway.
        Now let's have a closer look at the microservices layer and see how microservices interact with each other in real-world scenarios.

        Integrating Microservices

        The most commonly asked questions in the microservices space are 'Can microservices talk to each other?' and 'How do we build new microservices by leveraging existing microservices?'.
        In fact, microservices architecture fosters building each microservice with a limited and focused business scope. Therefore, when it comes to building IT solutions on top of Microservices architecture, it is inevitable that we have to reuse the existing microservices. The interaction between microservices can be done in a conventional point-to-point style; however, that approach becomes quite brittle for microservices solutions with many services. Therefore we need to adhere to the best practices of integrating microservices.
        • Using a Gateway to expose microservices : Use a Gateway to front all your microservices, and all consumers use the microservices through the Gateway only.
        • No direct calls among microservices : Microservices cannot invoke other microservices directly. All calls must go through the Gateway.
        Now let's have a look at the techniques related to the interaction between microservices.

        Orchestration at Microservices Layer 

        When you have to call multiple microservices to support a given business requirement, you can build another microservice (which again addresses a limited business scope) which orchestrates the calls to the required microservices, aggregates the final response and sends it back to the original consumer.
        For example, figure 1.2 depicts a scenario in which we have a few microservices A, B, C and D. Now we want to introduce a new business functionality which requires calling microservices A and C sequentially and providing an aggregated response. For that, we can build a new microservice (microservice E), into which the orchestration logic of calling services A and C is embedded. All microservice invocations are done through the Gateway. If the new functionality has to be scaled, that can be done by scaling microservices E, A and C as required.
        Figure 1.2 : Service orchestration implemented at the microservices level 
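A sketch of microservice E's orchestration logic; `invoke_via_gateway` and the service names are hypothetical stand-ins for whatever Gateway client you use (e.g. an HTTP POST to the Gateway's route for that service):

```python
def make_orchestrator(invoke_via_gateway):
    """Microservice E's handler: calls A and C sequentially through the
    Gateway and aggregates both responses into a single payload.
    invoke_via_gateway(service, payload) stands in for the Gateway call."""
    def handle(request):
        a_response = invoke_via_gateway("service-a", request)
        # The second call can use the first call's result (sequential flow).
        c_response = invoke_via_gateway("service-c", a_response)
        return {"a": a_response, "c": c_response}  # aggregated response
    return handle
```

Because the orchestration logic lives in its own microservice, it can be versioned, deployed and scaled independently of A and C.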

        Orchestration at the Gateway Layer

        Another possible approach is to implement the same orchestration scenario by bringing the orchestration logic into the Gateway level. In this case, we don't have to introduce a new microservice; instead, a virtual service layer hosted in the Gateway takes care of the orchestration.

         For example, as shown in figure 1.3, the service calls to microservices A and C can be implemented inside the Gateway layer (most microservice gateway implementations do support this feature).
         When it comes to scaling the newly introduced business functionality, we have to scale the Gateway and microservices A and C. With this approach, the Gateway is in danger of becoming a monolith, as it is also responsible for routing all the other microservice requests.

        Figure 1.3 : Service orchestration implemented at Gateway level. 

        Choreography Style

        Another possible approach is to build the interactions among microservices using an asynchronous messaging style, for example over MQTT or Kafka. In this case there is no central component that takes care of the service interactions; the various services do publisher-subscriber based messaging using messaging protocols.
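As an illustration of the choreography style, here is an in-process stand-in for broker-based publish-subscribe; the topic names and handlers are illustrative, and in production the publish/subscribe calls would go to MQTT, Kafka, etc.:

```python
from collections import defaultdict

subscribers = defaultdict(list)  # topic -> list of handler callbacks

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    # No central orchestrator: every service that subscribed to this
    # topic reacts on its own.
    for handler in subscribers[topic]:
        handler(event)

# The shipping service reacts to 'order.placed' events without the
# store service ever calling it directly.
shipped = []
subscribe("order.placed", lambda event: shipped.append(event["order_id"]))
publish("order.placed", {"order_id": 42})
```

The key property is that the publisher never names its consumers; new services can join the choreography just by subscribing to the relevant topics.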

        Conclusion

        To conclude this discussion of how you can use Microservices architecture in the modern enterprise IT landscape, we can summarize the following key aspects.

        • Microservices is not a panacea : It won't solve all your enterprise IT needs, so we need to use it alongside other existing architectures.
        • Most enterprises won't be able to convert their entire IT systems to microservices. Instead, they will use microservices to address some business use cases where they can leverage the power of Microservices Architecture.
        • Enterprise Integration never goes away : You need integration software such as an ESB to cater to all your enterprise integration needs.
        • All the business functionalities should be exposed as APIs by leveraging API Management techniques.
        • Interaction between microservices should be supported via a Gateway.
        • Service orchestration between microservices may be required for some business use cases; it can be implemented inside another microservice, or the Gateway layer can do the orchestration.


      3. Nowadays, Microservices is one of the most popular buzzwords in the field of software architecture. There are quite a lot of learning materials on the fundamentals and the benefits of microservices, but there are very few resources on how you can use microservices in real-world enterprise scenarios.
        In this post, I'm planning to cover the key architectural concepts of the Microservices Architecture (MSA) and how you can use those architectural principles in practice.

        Monolithic Architecture 

        Enterprise software applications are designed to facilitate numerous business requirements. Hence, a given software application offers hundreds of functionalities, and all such functionalities are piled into a single monolithic application. For example, ERPs, CRMs and various other software systems are built as monoliths with several hundred functionalities. The deployment, troubleshooting, scaling and upgrading of such monstrous software applications is a nightmare.

        Service Oriented Architecture (SOA) was designed to overcome some of the aforementioned limitations by introducing the concept of a 'service', which is an aggregation and grouping of similar functionalities offered by an application. Hence, with SOA, a software application is designed as a combination of 'coarse-grained' services. However, in SOA, the scope of a service is very broad. That leads to complex and mammoth services with several dozens of operations (functionalities), along with complex message formats and standards (e.g., all the WS-* standards).

        Figure 1 : Monolithic Architecture

        In most cases, services in SOA are independent of each other, yet they are deployed in the same runtime along with all the other services (just think about several web applications deployed into the same Tomcat instance). Similar to monolithic software applications, these services have a habit of growing over time by accumulating various functionalities. Literally, that turns those applications into monolithic globs which are no different from conventional monolithic applications such as ERPs. Figure 1 shows a retail software application which comprises multiple services. All these services are deployed into the same application runtime, so it's a very good example of a monolithic architecture. Here are some of the characteristics of such applications based on monolithic architecture.
        • Monolithic applications are designed, developed and deployed as a single unit.
        • Monolithic applications are overwhelmingly complex, which leads to nightmares in maintaining, upgrading and adding new features.
        • It is hard to practice agile development and delivery methodologies with monolithic architecture.
        • The entire application has to be redeployed in order to update any part of it.
        • Scaling : The application has to be scaled as a single unit, and it is difficult to scale with conflicting resource requirements (e.g., one service requires more CPU while another requires more memory).
        • Reliability : One unstable service can bring the whole application down.
        • Hard to innovate : It's really difficult to adopt new technologies and frameworks, as all the functionalities have to be built on homogeneous technologies/frameworks.
        These characteristics of monolithic architecture have led to the Microservices Architecture.

            Microservices Architecture 

            The foundation of microservices architecture (MSA) is developing a single application as a suite of small, independent services, each running in its own process and developed and deployed independently.

            In most definitions of microservices architecture, it is explained as the process of segregating the services available in a monolith into a set of independent services. However, in my opinion, microservices architecture is not just about splitting the services available in a monolith into independent services.

            The key idea is that, by looking at the functionalities offered by the monolith, we can identify the required business capabilities. Those business capabilities can then be implemented as fully independent, fine-grained and self-contained (micro)services. They might be implemented on top of different technology stacks, and each service addresses a very specific and limited business scope.
            Therefore, the online retail system scenario that we explained above can be realized with microservices architecture as depicted in figure 2. With the microservices architecture, the retail software application is implemented as a suite of microservices. As you can see in figure 2, based on the business requirements, there is an additional microservice created beyond the original set of services in the monolith. So, it is quite obvious that using microservices architecture is something beyond splitting up the services in the monolith.
            Figure 2 : Microservice Architecture

            So, let's dive deep into the key architectural principles of microservices and more importantly, let's focus on how they can be used in practice.

            Designing Microservices : Size, scope and capabilities

            You may be building your software application from scratch using Microservices Architecture, or you may be converting existing applications/services into microservices. Either way, it is quite important that you properly decide the size, scope and capabilities of the microservices. That is probably the hardest thing you will initially encounter when you implement Microservices Architecture in practice.

            Let's discuss some of the key practical concerns and misconceptions related to the size, scope and capabilities of microservices.
            • Lines of code/team size are lousy metrics : There are several discussions on deciding the size of a microservice based on the lines of code of its implementation or the size of its team (i.e. the two-pizza team). However, these are considered very impractical and lousy metrics, because we can still develop services with little code, or with a two-pizza-sized team, while totally violating the microservices architectural principles.
            • 'Micro' is a bit of a misleading term : Most developers tend to think that they should try to make the service as small as possible. This is a misinterpretation.
            • In the SOA context, services are often implemented as monolithic globs with support for several dozens of operations/functionalities. So, having SOA-like services and rebranding them as microservices is not going to give you any of the benefits of microservices architecture.
            So, then how should we properly design services in Microservices Architecture?

            Guidelines for Designing microservices 

            • Single Responsibility Principle (SRP) : Having a limited, focused business scope for a microservice helps us achieve agility in the development and delivery of services.
            • During the design phase of the microservices, we should find their boundaries and align them with the business capabilities (also known as bounded contexts in Domain-Driven Design).
            • Make sure the microservices design ensures the agile/independent development and deployment of the service.
            • Our focus should be on the scope of the microservice, not on making the service smaller. The (right) size of a service should be whatever is required to facilitate a given business capability.
            • Unlike services in SOA, a given microservice should have very few operations/functionalities and a simple message format.
            • It is often a good practice to start with relatively broad service boundaries and refactor to smaller ones (based on business requirements) as time goes on.
            In our retail use case, you can see that we have split the functionalities of the monolith into four different microservices, namely 'inventory', 'accounting', 'shipping' and 'store'. Each addresses a limited but focused business scope, so that the services are fully decoupled from each other and ensure agility in development and deployment.

            Messaging in Microservices 

            In monolithic applications, the business functionalities of different processors/components are invoked using function calls or language-level method calls. In SOA, this shifted towards much more loosely coupled web-service-level messaging, primarily based on SOAP on top of different protocols such as HTTP and JMS. Web services with several dozens of operations and complex message schemas were a key resistive force against the popularity of web services. For Microservices architecture, it is required to have a simple and lightweight messaging mechanism.

            Synchronous Messaging - REST, Thrift

            For synchronous messaging (the client expects a timely response from the service and waits until it gets one) in Microservices Architecture, REST is the unanimous choice, as it provides a simple messaging style implemented with HTTP request-response, based on a resource API style. Therefore, most microservices implementations use HTTP along with resource-API-based styles (every functionality is represented as a resource, with operations carried out on top of those resources).
            Figure 3 : Using REST interfaces to expose microservices
            Thrift (in which you can define an interface definition for your microservice) is used as an alternative to REST/HTTP synchronous messaging.
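To make the resource-API style concrete, here is a sketch of a tiny 'inventory' microservice's dispatch logic, with HTTP verbs acting as the operations on a resource; the resource names and in-memory store are illustrative, and a real service would sit behind an HTTP server:

```python
# In-memory stand-in for the service's own data store.
inventory = {"widget": 10}

def get_item(item_id):
    # GET /items/{id} -> read the resource
    return {"id": item_id, "stock": inventory.get(item_id, 0)}

def put_item(item_id, body):
    # PUT /items/{id} -> replace the resource's state
    inventory[item_id] = body["stock"]
    return {"id": item_id, "stock": inventory[item_id]}

def dispatch(method, item_id, body=None):
    """Map (HTTP verb, resource) onto the operation for that resource."""
    if method == "GET":
        return get_item(item_id)
    if method == "PUT":
        return put_item(item_id, body)
    raise ValueError("unsupported method: " + method)
```

The point of the style is that the API surface stays small and uniform: new capabilities become new resources, not new ad-hoc operations.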

            Asynchronous Messaging - AMQP, STOMP, MQTT

            Some microservices scenarios require asynchronous messaging techniques (the client doesn't expect a response immediately, or doesn't accept a response at all). In such scenarios, asynchronous messaging protocols such as AMQP, STOMP or MQTT are widely used.

            Message Formats - JSON, XML, Thrift, ProtoBuf, Avro 

            Deciding on the most suitable message format for microservices is another key factor. Traditional monolithic applications use complex binary formats, while SOA/web-services-based applications use text messages based on complex message formats (SOAP) and schemas (XSD). Most microservices-based applications use simple, text-based message formats such as JSON and XML on top of the HTTP resource API style. In cases where we need binary message formats (text messages can become verbose in some use cases), microservices can leverage binary formats such as binary Thrift, ProtoBuf or Avro.

            Service Contracts - Defining the service interfaces - Swagger, RAML, Thrift IDL

            When you have a business capability implemented as a service, you need to define and publish the service contract. In traditional monolithic applications, we barely find such a feature for defining the business capabilities of an application. In the SOA/web services world, WSDL is used to define the service contract, but as we all know, WSDL is not the ideal solution for defining a microservices contract, as it is insanely complex and tightly coupled to SOAP.
            Since we build microservices on top of the REST architectural style, we can use the same REST API definition techniques to define the contract of a microservice. Therefore, microservices use standard REST API definition languages such as Swagger and RAML to define their service contracts.


            For other microservice implementations which are not based on HTTP/REST, such as Thrift, we can use the protocol-level 'Interface Definition Languages (IDLs)' (e.g., Thrift IDL).
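For example, a Swagger contract for a hypothetical 'inventory' microservice might look like this (the paths and fields are illustrative only):

```yaml
swagger: "2.0"
info:
  title: Inventory Microservice
  version: "1.0"
basePath: /inventory
paths:
  /items/{itemId}:
    get:
      summary: Get the stock level of an item
      parameters:
        - name: itemId
          in: path
          required: true
          type: string
      responses:
        "200":
          description: Current stock for the item
```

Such a contract can be published alongside the service, and consumers can generate clients or validate requests against it.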

            Integrating Microservices (Inter-service/process Communication)

            In Microservices architecture, software applications are built as a suite of independent services. So, in order to realize a business use case, it is necessary to have communication structures between the different microservices/processes. That's why inter-service/process communication between microservices is such a vital aspect.

            In SOA implementations, the inter-service communication between services is facilitated by an Enterprise Service Bus (ESB), and most of the business logic (message routing, transformation and orchestration) resides in that intermediate layer. However, Microservices architecture promotes eliminating the central message bus/ESB and moving the 'smartness' or business logic to the services and clients (known as 'smart endpoints').
            Since microservices use standard protocols and formats such as HTTP and JSON, the requirement of integrating with disparate protocols is minimal when it comes to communication among microservices. An alternative approach in microservices communication is to use a lightweight message bus or gateway with minimal routing capabilities, acting as a 'dumb pipe' with no business logic implemented on the gateway. Based on these styles, several communication patterns have emerged in microservices architecture.

            Point-to-point style - Invoking services directly 

            In the point-to-point style, the entire message-routing logic resides in the endpoints and the services communicate directly. Each microservice exposes a REST API, and a given microservice or an external client can invoke another microservice through its REST API.

            Figure 4 : Inter-service communication with point-to-point connectivity. 
            Obviously, this model works for relatively simple microservices-based applications, but as the number of services increases, it becomes overwhelmingly complex. After all, that's exactly why an ESB is used in traditional SOA implementations: to get rid of these messy point-to-point integration links. Let's summarize the key drawbacks of the point-to-point style for microservices communication.
            • Non-functional requirements such as end-user authentication, throttling and monitoring have to be implemented at each and every microservice level.
            • As a result of duplicating common functionalities, each microservice implementation can become complex.
            • There is no control whatsoever over the communication between the services and clients (even for monitoring, tracing or filtering).
            • The direct communication style is often considered a microservices anti-pattern for large-scale microservice implementations.
            Therefore, for complex microservices use cases, rather than having point-to-point connectivity or a central ESB, we could have a lightweight central messaging bus which provides an abstraction layer for the microservices and can be used to implement various non-functional capabilities. This style is known as the API Gateway style.

            API-Gateway style

            The key idea behind the API Gateway style is to use a lightweight message gateway as the main entry point for all clients/consumers and to implement the common non-functional requirements at the Gateway level. In general, an API Gateway allows you to consume a managed API over REST/HTTP. Therefore, we can expose our business functionalities, which are implemented as microservices, through the API-GW as managed APIs. In fact, this is a combination of Microservices architecture and API Management, which gives you the best of both worlds.
            Figure 5: All microservices are exposed through an API-GW. 
            In our retail business scenario, as depicted in figure 5, all the microservices are exposed through an API-GW, which is the single entry point for all the clients. If a microservice wants to consume another microservice, that also needs to be done through the API-GW.
            The API-GW style gives you the following advantages.
            • Ability to provide the required abstractions at the gateway level for the existing microservices. For example, rather than providing a one-size-fits-all style API, the API gateway can expose a different API for each client.
            • Lightweight message routing/transformations at the gateway level.
            • A central place to apply non-functional capabilities such as security, monitoring and throttling.
            • With the use of the API-GW pattern, the microservices become even more lightweight, as all the non-functional requirements are implemented at the Gateway level.
            The API-GW style could well be the most widely used pattern in most microservice implementations.  
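The pattern can be sketched as follows; the routing rule and the `authenticate` and `rate_limiter` hooks are illustrative placeholders for your real security and throttling policies:

```python
def make_gateway(services, authenticate, rate_limiter):
    """A sketch of an API Gateway: the single entry point that applies
    non-functional concerns (auth, throttling) before routing to the
    backing microservice. `services` maps an API prefix to a callable."""
    def handle(token, path, request):
        if not authenticate(token):
            return {"status": 401}
        if not rate_limiter(token):
            return {"status": 429}  # throttled
        prefix = path.split("/")[1]  # e.g. '/inventory/items' -> 'inventory'
        backend = services.get(prefix)
        if backend is None:
            return {"status": 404}
        return {"status": 200, "body": backend(request)}
    return handle
```

Note how the backing services stay free of auth and throttling code; those concerns live in exactly one place.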

            Message Broker style

            Microservices can also be integrated using asynchronous messaging scenarios such as one-way requests and publish-subscribe messaging, based on queues or topics. A given microservice can be a message producer that asynchronously sends messages to a queue or topic, and the consuming microservice then consumes messages from that queue or topic. This style decouples message producers from message consumers, and the intermediate message broker buffers messages until the consumer is able to process them. Producer microservices are completely unaware of the consumer microservices. 
            Figure 6 : Asynchronous messaging based integration using pub-sub.  

            The communication between the consumers/producers is facilitated through a message broker that is based on asynchronous messaging standards such as AMQP and MQTT. 
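            The decoupling described above can be sketched with a minimal in-memory stand-in for a broker topic (the class `TopicBroker` is hypothetical; a real system would use an AMQP/MQTT broker): producers publish without knowing who, if anyone, consumes, and each subscriber gets its own buffering queue.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// In-memory pub-sub sketch: the broker buffers messages per subscriber,
// so producer and consumer microservices never reference each other.
public class TopicBroker {
    private final Map<String, List<BlockingQueue<String>>> topics = new ConcurrentHashMap<>();

    // a consuming microservice subscribes and receives its own buffer queue
    public BlockingQueue<String> subscribe(String topic) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(queue);
        return queue;
    }

    // a producing microservice publishes without knowing the consumers
    public void publish(String topic, String message) {
        for (BlockingQueue<String> queue : topics.getOrDefault(topic, List.of())) {
            queue.offer(message);
        }
    }
}
```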

            Decentralized Data Management 

            In a monolithic architecture, the application stores data in a single, centralized database to implement the various functionalities/capabilities of the application.


            Figure 7 : Monolithic application uses a centralized database to implement all its features. 
            In the Microservices architecture, the functionalities are dispersed across multiple microservices. If we use the same centralized database, the microservices will no longer be independent of each other (for instance, if the database schema is changed by a given microservice, that will break several other services). Therefore, each microservice has to have its own database.
            Figure 8 : Each microservice has its own private database and cannot directly access the databases owned by other microservices. 
            Here are the key aspects of implementing decentralized data management in the microservices architecture. 
            • Each microservice can have a private database to persist the data that it requires to implement the business functionality it offers.
            • A given microservice can access only its dedicated private database, not the databases of other microservices.
            • In some business scenarios, you might have to update several databases for a single transaction. In such scenarios, the databases of other microservices should be updated only through their service APIs (direct database access is not allowed).
            Decentralized data management gives you fully decoupled microservices and the liberty of choosing disparate data management techniques (SQL or NoSQL, different database management systems for each service, etc.). However, for complex transactional use cases that involve multiple microservices, the transactional behavior has to be implemented using the APIs offered by each service, and the logic resides either at the client or at an intermediary (GW) level.
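            The database-per-service rule can be sketched as follows (the services, their stores, and method names are all hypothetical): each service encapsulates its own store as a private field, and other services can reach that data only through the owning service's API.

```java
import java.util.HashMap;
import java.util.Map;

// Each microservice owns its database; here a HashMap stands in for it.
public class ProductService {
    private final Map<String, Double> productDb = new HashMap<>(); // private to ProductService

    public void addProduct(String id, double price) {
        productDb.put(id, price);
    }

    // the ONLY way other services may read product data: via the service API
    public double getPrice(String id) {
        return productDb.get(id);
    }
}

class OrderService {
    private final Map<String, Double> orderDb = new HashMap<>(); // OrderService's own database
    private final ProductService products;                       // reached via its API only

    OrderService(ProductService products) {
        this.products = products;
    }

    public double placeOrder(String orderId, String productId) {
        double total = products.getPrice(productId); // service call, never a cross-database join
        orderDb.put(orderId, total);
        return total;
    }
}
```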

            Decentralized Governance

            Microservices architecture favors decentralized governance. 
            In general 'governance' means establishing and enforcing how people and solutions work together to achieve organizational objectives. In the context of SOA, SOA governance guides the development of reusable services, establishing how services will be designed and developed and how those services will change over time. It establishes agreements between the providers of services and the consumers of those services, telling the consumers what they can expect and the providers what they're obligated to provide. In SOA Governance there are two types of governance that are in common use:
            • Design-time governance - defining and controlling the service creations, design and implementation of service policies
            • Run-time governance - the ability to enforce service policies during execution
            So what does governance in the Microservices context really mean? In the microservices architecture, the microservices are built as fully independent and decoupled services using a variety of technologies and platforms. So there is no need to define common standards for service design and development. We can summarize the decentralized governance capabilities of Microservices as follows. 
            • In the microservices architecture there is no requirement for centralized design-time governance.
            • Microservices can make their own decisions about their design and implementation.
            • The microservices architecture fosters the sharing of common/reusable services.
            • Some run-time governance aspects, such as SLAs, throttling, monitoring, common security requirements and service discovery, may be implemented at the API-GW level. 

            Service Registry and Service Discovery 

            In the Microservices architecture, the number of microservices that you need to deal with is quite high, and their locations change dynamically owing to the rapid and agile development/deployment nature of microservices. Therefore, you need to find the location of a microservice at runtime. The solution to this problem is to use a Service Registry.

            Service Registry

            The Service Registry holds the microservice instances and their locations. Microservice instances are registered with the service registry on startup and deregistered on shutdown. Consumers can find the available microservices and their locations through the service registry. 

            Service Discovery

            To find the available microservices and their locations, we need a service discovery mechanism. There are two types of service discovery mechanisms: client-side discovery and server-side discovery. Let's have a closer look at each of them. 

            Client-side Discovery
            In this approach the client or the API-GW obtains the location of a service instance by querying a Service Registry. 
            Figure 9 - Client-side discovery 
            Here the client/API-GW has to implement the service discovery logic by calling the Service Registry component. 
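            The registry-plus-client-side-discovery idea can be sketched like this (the class `ServiceRegistry` and the host names are made up; real deployments use tools such as Consul, Eureka or etcd): instances register on startup and deregister on shutdown, and the client picks one of the known locations itself.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ThreadLocalRandom;

// Minimal service registry: service name -> live instance locations.
public class ServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    public void register(String service, String location) { // called on instance startup
        instances.computeIfAbsent(service, s -> new CopyOnWriteArrayList<>()).add(location);
    }

    public void deregister(String service, String location) { // called on instance shutdown
        List<String> found = instances.get(service);
        if (found != null) {
            found.remove(location);
        }
    }

    // client-side discovery: the caller load-balances across known instances
    public String lookup(String service) {
        List<String> found = instances.get(service);
        if (found == null || found.isEmpty()) {
            throw new IllegalStateException("no live instance of " + service);
        }
        return found.get(ThreadLocalRandom.current().nextInt(found.size()));
    }
}
```

            With server-side discovery, the `lookup` call would instead live behind a load balancer at a well-known address, and the client would stay unaware of the registry.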

            Server-side Discovery
            With this approach, the client/API-GW sends the request to a component (such as a load balancer) that runs at a well-known location. That component calls the service registry and determines the absolute location of the microservice.  

            Figure 10 - Server-side discovery

            Microservices deployment solutions such as Kubernetes (http://kubernetes.io/v1.1/docs/user-guide/services.html) offer server-side discovery mechanisms.

            Deployment

            When it comes to microservices architecture, the deployment of microservices plays a critical role and has the following key requirements.
            • Ability to deploy/un-deploy independently of other microservices.
            • Must be able to scale each microservice independently (a given service may get more traffic than other services). 
            • Building and deploying microservices quickly. 
            • Failure in one microservice must not affect any of the other services.
            Docker (an open source engine that lets developers and system administrators deploy self-sufficient application containers in Linux environments) provides a great way to deploy microservices addressing the above requirements. The key steps involved are as follows. 
            • Package the microservice as a (Docker) container image.
            • Deploy each service instance as a container.
            • Scaling is done based on changing the number of container instances.
            • Building, deploying and starting a microservice will be much faster as we are using Docker containers (which are much faster to start than a regular VM).
            Kubernetes extends Docker's capabilities by allowing a cluster of Linux containers to be managed as a single system: it manages and runs Docker containers across multiple hosts and offers co-location of containers, service discovery and replication control. As you can see, most of these features are essential in our microservices context too. Hence, using Kubernetes (on top of Docker) for microservices deployment has become an extremely powerful approach, especially for large-scale microservices deployments. 
            Figure 11 : Building and deploying microservices as containers. 
            Figure 11 shows an overview of the deployment of the microservices of the retail application. Each microservice instance is deployed as a container, and there are two containers per host. You can arbitrarily change the number of containers that run on a given host. 

            Security

            Securing microservices is quite a common requirement when you use microservices in real-world scenarios. Before jumping into microservices security, let's have a quick look at how we normally implement security at the monolithic application level. 

            • In a typical monolithic application, security is about finding out 'who is the caller', 'what can the caller do' and 'how do we propagate that information'. 
            • This is usually implemented in a common security component at the beginning of the request-handling chain, which populates the required information with the use of an underlying user repository (or user store). 
            So, can we directly translate this pattern into the microservices architecture? Yes, but that requires a security component implemented at each microservice level which talks to a centralized/shared user repository to retrieve the required information. That is a very tedious way of solving the microservices security problem. 
            Instead, we can leverage widely used API security standards such as OAuth2 and OpenID Connect to find a better solution to the microservices security problem. Before diving deep into that, let me summarize the purpose of each standard and how we can use it. 
            • OAuth2 is an access delegation protocol. The client authenticates with the authorization server and gets an opaque token known as an 'access token'. The access token carries zero information about the user/client; it only holds a reference to user information that can be resolved only by the authorization server. Hence this is known as a 'by-reference token', and it is safe to use even on the public network/internet.  
            • OpenID Connect behaves similarly to OAuth2, but in addition to the access token, the authorization server issues an ID token, which contains information about the user. This is often implemented as a JWT (JSON Web Token) signed by the authorization server, which establishes the trust between the authorization server and the client. The JWT is therefore known as a 'by-value token', as it contains information about the user, and it is obviously not safe to use outside the internal network. 
            Now, let's see how we can use these standards to secure the microservices in our retail example. 
            Figure 12 : Microservice security with OAuth2 and OpenID Connect


            As shown in figure 12, these are the key steps involved in implementing microservices security.

            • Leave authentication to the OAuth2/OpenID Connect server (the authorization server), so that microservices grant access only to callers who have the right to use the data.
            • Use the API-GW style, in which there is a single entry point for all client requests. 
            • The client connects to the authorization server and obtains an access token (a by-reference token), then sends that access token to the API-GW along with the request. 
            • Token translation at the gateway - the API-GW extracts the access token and sends it to the authorization server to retrieve the JWT (a by-value token). 
            • The GW then passes this JWT along with the request to the microservices layer. 
            • The JWT contains the necessary information to help in storing user sessions etc. If each service can understand a JSON Web Token, then you have distributed your identity mechanism, allowing you to transport identity throughout your system.
            • At each microservice level, we can have a component that processes the JWT, which is quite a trivial implementation. 
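            As a hedged illustration of that last step, the sketch below (class name `JwtPayloadDecoder` is made up for this example) only decodes the payload segment of a JWT, which is three base64url-encoded segments separated by dots (header.payload.signature). A real microservice must additionally verify the token's signature against the authorization server's key, typically with a JOSE library.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decodes the claims (payload) segment of a JWT; signature verification
// is deliberately out of scope for this sketch.
public class JwtPayloadDecoder {
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("not a JWT");
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]); // payload is base64url JSON
        return new String(json, StandardCharsets.UTF_8);
    }
}
```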

            Transactions 

            How about transaction support in microservices? In fact, supporting distributed transactions across multiple microservices is an exceptionally complex task. The microservice architecture itself encourages transaction-less coordination between services. 

            The idea is that a given service is fully self-contained and based on the single responsibility principle. The need for distributed transactions across multiple microservices is often a symptom of a design flaw in the microservice architecture and can usually be sorted out by refactoring the scopes of the microservices. 
            However, if there is a mandatory requirement for distributed transactions across multiple services, such scenarios can be realized by introducing 'compensating operations' at each microservice level. The key idea is that a given microservice is based on the single responsibility principle, so if it fails to execute a given operation, we can consider that a failure of the entire microservice. Then all the other (upstream) operations have to be undone by invoking the respective compensating operations of those microservices. 
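            The compensation idea can be sketched as follows (the `Step` interface and `CompensatingRunner` are hypothetical names for this example): each step knows how to undo itself, and when a step fails, the already-completed steps are compensated in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Run a sequence of operations; on failure, invoke the compensating
// operation of every step that already completed, in reverse order.
public class CompensatingRunner {
    public interface Step {
        void execute() throws Exception;
        void compensate(); // the undo of execute()
    }

    public static boolean run(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (Exception failed) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate(); // undo in reverse order
                }
                return false;
            }
        }
        return true;
    }
}
```

            In a microservices setting, `execute` and `compensate` would each be a call to a service API (e.g. debit vs. refund), not a local method.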

            Design for Failures

            The microservice architecture introduces a dispersed set of services and, compared to a monolithic design, that increases the possibility of failures at each service level. A given microservice can fail due to network issues, unavailability of underlying resources, etc. An unavailable or unresponsive microservice should not bring the whole microservices-based application down. Thus, microservices should be fault tolerant and able to recover when that is possible, and the client has to handle failures gracefully.
            Also, since services can fail at any time, it's important to be able to detect failures quickly (real-time monitoring) and, if possible, automatically restore the services.
            There are several commonly used patterns for handling errors in the microservices context.

            Circuit Breaker

            When you are making an external call to a microservice, you configure a fault monitor component with each invocation; when failures reach a certain threshold, that component stops any further invocations of the service (it trips the circuit open). After a certain number of requests in the open state (which you can configure), the circuit changes back to the closed state.
            This pattern is quite useful for avoiding unnecessary resource consumption and request delays due to timeouts, and it also gives us the chance to monitor the system (based on the active open-circuit states). 
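            A minimal circuit breaker sketch follows (not any particular library; the class name, thresholds and return strings are illustrative): after `threshold` consecutive failures the circuit opens and calls fail fast, and after `retryAfter` rejected attempts it half-opens and lets one call through again.

```java
import java.util.concurrent.Callable;

// Consecutive-failure circuit breaker: open = reject immediately,
// half-open = allow one trial call after a configured number of rejections.
public class CircuitBreaker {
    private final int threshold;   // consecutive failures before tripping
    private final int retryAfter;  // rejected attempts before half-opening
    private int failures = 0;
    private int rejected = 0;
    private boolean open = false;

    public CircuitBreaker(int threshold, int retryAfter) {
        this.threshold = threshold;
        this.retryAfter = retryAfter;
    }

    public synchronized String call(Callable<String> service) {
        if (open) {
            if (++rejected < retryAfter) {
                return "rejected: circuit open"; // fail fast, no resource wasted
            }
            open = false; failures = 0; rejected = 0; // half-open: try the service again
        }
        try {
            String result = service.call();
            failures = 0; // success resets the failure count
            return result;
        } catch (Exception e) {
            if (++failures >= threshold) {
                open = true; // trip the circuit
            }
            return "failed";
        }
    }
}
```

            Production-grade breakers usually use a time window rather than a call count to decide when to half-open, but the state machine is the same.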

            Bulkhead

            As a microservices application comprises a number of microservices, a failure in one part of the application should not affect the rest of it. The Bulkhead pattern is about isolating different parts of your application so that a failure of a service in one part does not affect the services in the other parts.  
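            A common way to realize bulkheads is to give each downstream service its own bounded thread pool, so that exhausting the capacity for one service cannot starve calls to the others. The sketch below is illustrative (service names and pool sizes are made up):

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Bulkhead sketch: one bounded pool per downstream service, so a slow or
// failing service only saturates its own pool.
public class Bulkhead {
    private final Map<String, ExecutorService> pools = Map.of(
            "inventory", Executors.newFixedThreadPool(2),
            "payments", Executors.newFixedThreadPool(2));

    public String invoke(String service, Callable<String> call) {
        try {
            return pools.get(service).submit(call).get(); // runs on that service's pool only
        } catch (Exception e) {
            return "bulkhead error";
        }
    }

    public void shutdown() {
        pools.values().forEach(ExecutorService::shutdown);
    }
}
```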

            Timeout

            The Timeout pattern is a mechanism that allows you to stop waiting for a response from a microservice when you think it won't come. Here you can configure the time interval that you wish to wait.
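            With plain JDK concurrency utilities, the timeout pattern can be sketched like this (the class `TimeoutCall` and its return strings are made up for the example): the caller waits on a `Future` for at most the configured interval and gives up afterwards.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Timeout sketch: stop waiting for a slow microservice call after a fixed
// interval instead of blocking indefinitely.
public class TimeoutCall {
    public static String callWithTimeout(Callable<String> service, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> response = pool.submit(service);
            return response.get(millis, TimeUnit.MILLISECONDS); // wait at most `millis`
        } catch (TimeoutException e) {
            return "timed out";
        } catch (Exception e) {
            return "failed";
        } finally {
            pool.shutdownNow(); // also interrupts the still-running call
        }
    }
}
```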

            So, where and how do we use these patterns with microservices? In most cases, these patterns are applicable at the gateway level: when the microservices are not available or not responding, the gateway can decide whether to send the request to the microservice by using the circuit breaker or timeout pattern. It is also quite important to have patterns such as Bulkhead implemented at the gateway level; since it is the single entry point for all client requests, a failure in a given service should not affect the invocation of the other microservices.

            In addition, the gateway can serve as the central point from which we can obtain the status of, and monitor, each microservice, as every microservice is invoked through the gateway. 

            Microservices, Enterprise Integration, API Management and beyond. 

            We have discussed various characteristics of the Microservices architecture and how you can implement them in the modern enterprise IT landscape. However, we should keep in mind that Microservices is not a panacea. The blind adoption of buzzword concepts is not going to solve your 'real' enterprise IT problems. As you have seen throughout this blog post, there are quite a lot of advantages to microservices, and we should leverage them. But we also have to keep in mind that it is not realistic to solve all enterprise IT problems with microservices. For instance, the Microservices architecture promotes eliminating the ESB as the central bus, but in real-world IT there are quite a lot of existing applications/services which are not based on Microservices. To integrate with them, we still need some sort of integration bus. So, ideally, a hybrid approach of Microservices and other enterprise architectural concepts, such as Integration, would be more realistic. I will discuss this further in a separate blog post. 

            I hope this gives you a much clearer idea of how you can use Microservices in your enterprise. 

            References 

            http://microservices.io/patterns/microservices.html
            http://martinfowler.com/articles/microservices.html
            http://techblog.netflix.com/2011/12/making-netflix-api-more-resilient.html
            http://www.infoq.com/articles/seven-uservices-antipatterns
            https://pragprog.com/book/mnee/release-it



          • This is a recent white paper that I have written on building a Connected Retail IT Architecture using a middleware platform. Here we discuss the typical structure of a Retail IT system and the common problems that we face when building an agile Retail IT system.




            It provides a reference architecture for any Retail IT system, which I then realize with the WSO2 Middleware platform.

            To become a successful business in the ever-changing retail market, a retail business must have a connected retail IT architecture. This retail IT system should be able to connect with existing legacy systems, services, and data. At the same time, it needs to cater to demanding customer needs, provide flexibility and agility in exploring new markets, expand IT capabilities inside and outside the organization (SaaS), and enhance the overall performance of the IT system.
            This white paper will discuss the importance of creating a connected retail system today and explain how a complete middleware platform can help address these challenges and meet your retail IT requirements. It will discuss a connected retail reference architecture with actual customer use cases.

            You can download the "Connected Retail Reference Architecture" white paper from http://wso2.com/whitepapers/connected-retail-reference-architecture/



          • Recently I wrote a white paper on the evolution of Enterprise Integration, where I discussed the history of Enterprise Integration, SOA, ESB, API Management, and the future of integration technologies such as iPaaS (Integration Platform as a Service).

            The need to seamlessly connect cloud and mobile apps, multiple data streams, social media, and on-premises systems is challenging IT groups to keep pace with a rapid evolution in enterprise integration. A new white paper from WSO2 addresses this demand by presenting how to implement a modern integration platform that empowers the enterprise to build an internally and externally connected business.
            The white paper, “The Evolution of Integration: A Comprehensive Platform for a Connected Business,” was written by WSO2 Software Architect Kasun Indrasiri. It begins by discussing the several architectural approaches to integration. The paper then explores how the bus architecture and the advent of the service-oriented architecture (SOA) led to the evolution of the enterprise service bus (ESB) as the enterprise integration backbone.
            Next, the paper examines how managed APIs serve to extend the capabilities of a SOA and ESB by exposing business functions in a managed, accessible, monitored and adaptive way. It then looks at how organizations can build effective enterprise integration solutions by leveraging an ESB and API management through an API façade pattern.
            Additionally, the paper explores hybrid integration platforms as a new way to connect cloud based, mobile and on-premise systems. It also provides an overview of a lean, modern and cost-effective integration platform, powered by WSO2 software, in practice.




            You can find the white paper at http://wso2.com/whitepapers/the-evolution-of-Integration-a-comprehensive-platform-for-a-connected-business/



          • Why non-blocking IO

            A typical server application, such as a web server, needs to process thousands of requests concurrently. Therefore, a modern web server needs to meet the following requirements.
            • Handling of thousands of connections simultaneously (significant number of connections may be in idle state as well)
            • Handling high latency connections
            • Request/response handling needs to be decoupled
            • Minimize latency, maximize throughput and avoid unnecessary CPU cycles
            Hence, to cater to such requirements, there can be several possibilities in the server application architecture.
            • Having a pool of sockets for each client and periodically polling them: This is the most straightforward approach; however, it is more or less impractical without non-blocking sockets, it is extremely inefficient, and it never scales with an increasing number of connections.
            • Thread per socket: This is the conventional approach, initially used in some applications, and it is the only practical solution with blocking sockets. Having a thread per client connection has several drawbacks, such as a large amount of thread-scheduling overhead. Another flaw in this approach is that as the number of connections/clients increases, the number of threads has to increase as well. Therefore, this too hardly scales.
            • Readiness selection: This is the ability to choose a socket that will not block when read or written. It is a very efficient way of handling thousands of concurrent clients and scales well.
            The readiness selection approach was originally presented as an object behavioral pattern by Schmidt [1] in the paper 'Reactor: An Object Behavioral Pattern for Demultiplexing and Dispatching Handles for Synchronous Events'. Therefore, in order to understand readiness selection better, we need to take a closer look at the Reactor pattern.

            Reactor pattern

            The Reactor design pattern was introduced as a general architecture for implementing event-driven systems. To solve our original problem of implementing a server application that can handle thousands of simultaneous client connections, the Reactor pattern provides a way to listen for events (incoming connections/requests) with a synchronous demultiplexing strategy, so that when an incoming event occurs, it is dispatched to a service provider (handler) that can handle it.
            Let's have a detailed look at each of the key participants in the Reactor pattern, which are depicted in the following class diagram.
            In the Reactor pattern, the initiation dispatcher is the most crucial component; it is often also known as the 'Reactor'. For each type of service that the server application offers, it introduces a separate event handler that can process that particular event type. All these event handlers are registered with the initiation dispatcher. The initiation dispatcher uses a demultiplexer that listens for all incoming events and notifies the initiation dispatcher accordingly. The demultiplexer uses 'handles' to identify the events that occur on a given resource, such as a network connection. Handles are often used to identify OS-managed resources, such as network connections, open files, etc.
            The behavior of the demultiplexer is synchronous in that it blocks while waiting for events to occur on a set of handles. However, once an event occurs, it simply notifies the initiation dispatcher, which hands the event over to the respective concrete event handler type.

            Handle

            A handle identifies a resource that is managed by the operating system, such as a network connection or an open file. Handles are used by the demultiplexer to wait for events to occur on them.

            Demultiplexer

            The demultiplexer works in synchronous mode, waiting for events to occur on the handles. This is synchronous blocking behavior, but it only blocks when no events are queued up at the handles. In all other cases, when there is an event for a given handle, the demultiplexer notifies the initiation dispatcher to call back the appropriate event handler.
            A very common realization of a demultiplexer in Unix is the 'select()' system call, which is used to examine the status of file descriptors.

            Initiation dispatcher/reactor

            The initiation dispatcher provides an API for registering, removing, and dispatching event handler objects. The various types of event handlers are registered with the initiation dispatcher, and it also initiates the demultiplexer so that it can receive notifications when the demultiplexer detects any event.
            When events such as connection acceptance, data input/output, or timeouts occur at the demultiplexer, it notifies the initiation dispatcher. Thereafter, the initiation dispatcher invokes the respective concrete event handler.

            Event handler

            This is merely an interface that represents the dispatching operation for a specific event.

            Concrete event handler

            This is derived from the abstract event handler and each implementation comprises a specific method of processing a specific event type. It is important to keep in mind that these concrete event handlers are often run on a dedicated thread pool, which is independent from the initiation dispatcher and the demultiplexer.

            Reactor pattern in Java NIO

            When it comes to developing server applications with Java, we need an underlying framework that supports a realization of the Reactor pattern. With the Java NIO framework, the JDK provides the necessary building blocks to implement the Reactor pattern in Java.

            Non-blocking echo server with Java NIO

            In order to understand how the Reactor pattern can be implemented with Java NIO, let's take the example of a simple echo server and client, in which the server is implemented based on the readiness selection strategy.

            Selector (demultiplexer)

            The Selector is the Java building block analogous to the demultiplexer in the Reactor pattern. The Selector is where you register your interest in various I/O events, and it tells you when those events occur.

            Reactor/initiation dispatcher

            We use the Java NIO Selector inside the dispatcher/reactor. For this, we can introduce our own dispatcher/reactor implementation called 'Reactor'. The Reactor comprises a java.nio.channels.Selector and a map of registered handlers. As per the definition of the dispatcher/reactor, 'Reactor' calls Selector.select() while waiting for an IO event to occur.

            Handle

            In the Java NIO scope, the Handle in the Reactor pattern is realized in the form of a SelectionKey.

            Event

            The events triggered by the various IO operations are classified as SelectionKey.OP_READ, SelectionKey.OP_ACCEPT, etc.

            Handler

            A handler is often implemented as a Runnable or Callable in Java.

            Structure of the sample scenario

            In our sample scenario, we have introduced an abstract event handler with the abstract method 'public void handleEvent(SelectionKey handle)', which is implemented by the respective concrete event handlers. AcceptEventHandler, ReadEventHandler, and WriteEventHandler are the concrete handler implementations.
            ReactorManager - this is where we initialize the Reactor and execute the server-side operations.
            ReactorManager#startReactor performs the following steps:
            • Create the ServerSocketChannel, bind it to a port and configure non-blocking behavior.
            • Initialize the Reactor and register the server channel for SelectionKey.OP_ACCEPT events.
            • Register all the concrete event handlers with the Reactor.
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(port));
            server.configureBlocking(false);
            Reactor reactor = new Reactor();
            reactor.registerChannel(SelectionKey.OP_ACCEPT, server);
            reactor.registerEventHandler(
                    SelectionKey.OP_ACCEPT, new AcceptEventHandler(
                    reactor.getDemultiplexer()));
            reactor.registerEventHandler(
                    SelectionKey.OP_READ, new ReadEventHandler(
                    reactor.getDemultiplexer()));
            reactor.registerEventHandler(
                    SelectionKey.OP_WRITE, new WriteEventHandler());
            reactor.run();
            Invoke the run() method of the Reactor. This method calls the select() method of the Selector/Demultiplexer in an indefinite loop, and as new events occur they are retrieved with the selectedKeys() method of the Selector. Then, for each selected key, it invokes the respective event handler.
                package org.panorama.kasun;
            import java.nio.channels.SelectableChannel;
            import java.nio.channels.SelectionKey;
            import java.nio.channels.Selector;
            import java.util.Iterator;
            import java.util.Map;
            import java.util.Set;
            import java.util.concurrent.ConcurrentHashMap;
            public class Reactor {
                private Map<Integer, EventHandler> registeredHandlers =
                        new ConcurrentHashMap<Integer, EventHandler>();
                private Selector demultiplexer;
                public Reactor() throws Exception {
                    demultiplexer = Selector.open();
                }
                public Selector getDemultiplexer() {
                    return demultiplexer;
                }
                public void registerEventHandler(
                        int eventType, EventHandler eventHandler) {
                    registeredHandlers.put(eventType, eventHandler);
                }
                public void registerChannel(
                        int eventType, SelectableChannel channel) throws Exception {
                    channel.register(demultiplexer, eventType);
                }
                public void run() {
                    try {
                        while (true) { // Loop indefinitely
                            demultiplexer.select();
                            Set<SelectionKey> readyHandles =
                                    demultiplexer.selectedKeys();
                            Iterator<SelectionKey> handleIterator =
                                    readyHandles.iterator();
                            while (handleIterator.hasNext()) {
                                SelectionKey handle = handleIterator.next();
                                if (handle.isAcceptable()) {
                                    EventHandler handler =
                                            registeredHandlers.get(SelectionKey.OP_ACCEPT);
                                    handler.handleEvent(handle);
                                    // The server channel key stays registered for
                                    // OP_ACCEPT, so it is not removed here
                                }
                                if (handle.isReadable()) {
                                    EventHandler handler =
                                            registeredHandlers.get(SelectionKey.OP_READ);
                                    handler.handleEvent(handle);
                                    handleIterator.remove();
                                }
                                if (handle.isWritable()) {
                                    EventHandler handler =
                                            registeredHandlers.get(SelectionKey.OP_WRITE);
                                    handler.handleEvent(handle);
                                    handleIterator.remove();
                                }
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }

            A handle is represented by SelectionKey, and events are represented by SelectionKey.OP_ACCEPT, SelectionKey.OP_READ, SelectionKey.OP_WRITE, etc.
            Source code for the above sample is available at https://github.com/kasun04/rnd/tree/master/nio-reactor.
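The Reactor above dispatches each ready key to an EventHandler. The concrete handlers (AcceptEventHandler, ReadEventHandler, WriteEventHandler) live in the linked repository; as a minimal sketch, the interface they implement looks roughly like this (the exact signature in the repository may differ slightly):

```java
import java.nio.channels.SelectionKey;

// Minimal sketch of the handler abstraction the Reactor dispatches to.
// Each concrete handler reacts to one event type (accept, read or write)
// using the SelectionKey (handle) it is given.
public interface EventHandler {
    void handleEvent(SelectionKey handle);
}
```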





          • Recently, I got a chance to read 'Instant MapReduce Patterns - Hadoop Essentials How-to' by Dr Srinath Perera (thanks to Packt Publishing, who offered me a free e-book for this).

            I found this book a pretty handy read for anybody who wants to get started with MapReduce on Hadoop. There are many MapReduce books full of theory and very abstract use cases; the gap this book tries to fill is giving readers a complete end-to-end experience of how to program with MapReduce.


            When you are solving a problem with MapReduce/Hadoop, you often find that knowing a few HelloWorld-type examples is not enough, and you can hardly find resources on how to solve different types of MapReduce problems using Hadoop. This book addresses that requirement and provides a concise introduction to solving different types of problems using MapReduce.

            Book is available for purchase at :

            Here I have summarized the various types of recipes presented in the book; each comes with complete code samples and all the instructions needed to run them.

            Word Count

            This would be the very first program you write when you start learning MapReduce, and it is a very good starting point to learn the basics of MapReduce and understand the Hadoop Mapper and Reducer APIs. Throughout the book, a real data set (1 GB) from
            http://snap.stanford.edu/data/#amazon is used for all examples. Using that sort of data set makes more sense to readers, as it gives the feeling of dealing with real 'large' data sets.


            Installing Hadoop in a distributed setup and running Word Count 

            This example gives the readers a good understanding about a typical Hadoop deployment. The responsibilities of name node, data nodes, job tracker, and task trackers are clearly explained. The example given in the book can be deployed either in a single machine or a set of machines. 

            Formatters

            When we run MapReduce jobs on a given set of data, by default Hadoop reads the input data line by line. You can write your own formatter to process data that spans multiple lines of the data set and then feed that into the MapReduce jobs. This recipe provides a complete example of how to use formatters with Hadoop.

            Analytics 

            This sample is about doing a statistical analysis of a given data set. It shows how, from a given data set, we can formulate a frequency-distribution histogram with Hadoop and then plot the results with gnuplot.

            Relational Operations - Join two data sets 

            It is often required to process two large data sets and merge them with a relational operation such as a join. The example provided to demonstrate joins combines two data sets, a 'list of most frequent customers' and 'items bought by each customer', and then finds the 'items bought by the 100 most frequent customers'.


            In addition to the above recipes, you will find complete examples of Set Operations, Cross Correlation, Simple Search, Graph Operations and K-Means in the 'Instant MapReduce Patterns - Hadoop Essentials How-to' book.

            So, in summary, 'Instant MapReduce Patterns - Hadoop Essentials How-to' is a book that anyone willing to take a deep dive into MapReduce with Hadoop must have :).







          • WSO2 has just released the latest version of its Enterprise Service Bus, WSO2 ESB 4.7.0. This is the first major release of WSO2 ESB after the 4.6.0 release, which was purely focused on major performance enhancements.

            As a member of the WSO2 ESB team, I would like to discuss some of the key aspects of the 4.7.0 release, which will be helpful for all the integration geeks and all existing users of WSO2 ESB.


            Comprehensive RESTful Integration 


            With the heavy adoption of RESTful APIs all over the world, any integration platform should offer a solid foundation for integrating RESTful services. WSO2 ESB initially came up with the REST API implementation, which is a robust and clean way to expose REST APIs via the ESB. This feature has been extensively used in many integration scenarios, and WSO2 API Manager successfully leverages this functionality in its API Gateway.

              
            REST Triangle

            There are three main concepts that we need to understand in REST (Representational State Transfer): Nouns, Verbs and Data Types.

            Nouns are used as identifiers for resources, and a noun is generally a URL (e.g., to find a specific type of pizza in a pizza-ordering RESTful service, one could use
             http://localhost:8080/PizzaShopServlet/restapi/PizzaWS/menu?category=pizza&type=pan).

            Verbs are the operations, and they generally encompass everything that can be done to a piece of data: CRUD (Create, Read, Update, Delete). In the HTTP scope, these are mapped to POST, GET, PUT and DELETE.

            Data Types provide the format for the data that takes part in your RESTful conversation. The most common data types are XML, JSON and CSV.

            HTTP Endpoint (Nouns and Verbs)

            REST APIs in WSO2 ESB are all about exposing REST APIs, but what about integrating existing RESTful services? Prior to the 4.7 release, WSO2 ESB had support for integrating RESTful services, but users had to do a lot of tweaking of various properties defined in the ESB configuration. Therefore, we introduced a new endpoint type called the 'HTTP Endpoint', where users can specify a URI template that dynamically populates the final URI for the RESTful service invocation. Also, users can manipulate the HTTP method of the outgoing request. For more info, please refer to [1].
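As a rough sketch, reusing the hypothetical pizza service from above (the endpoint name and URI variables here are my own; see [1] for the exact syntax), such an endpoint definition could look like:

```xml
<endpoint name="PizzaEndpoint">
    <!-- {uri.var.*} placeholders are filled in at runtime from
         message context properties such as uri.var.category -->
    <http method="GET"
          uri-template="http://localhost:8080/PizzaShopServlet/restapi/PizzaWS/menu?category={uri.var.category}&amp;type={uri.var.type}"/>
</endpoint>
```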



            HTTP Endpoint for Dynamic Address Resolution 
            In fact, the HTTP Endpoint can talk to any SOAP-based back end as well. With the HTTP method set to 'POST', you can create the URI as per your preference. So, this endpoint type is a prime candidate for scenarios where you want back-end addresses that are resolved at runtime (e.g., from a system variable).

            JSON support for Payload Factory Mediator (Data Types)

            When dealing with RESTful services, data formats play an important role. In particular, many RESTful APIs use JSON as the data format, and that is what led us to think about supporting multiple media types in the Payload Factory mediator. Since the Payload Factory mediator was introduced into the ESB, it has increasingly been used to implement many transformation scenarios (where the transformation logic is not that complex). With the ESB 4.7 release, the Payload Factory mediator supports multiple media types, XML and JSON, and caters for the following transformation scenarios. When dealing with multiple media types, we introduced something called an 'evaluator' to distinguish between XML and JSON expressions (i.e., xpath for XML and json_path for JSON). The media-type-based transformation does not use any intermediate data representation format, which makes it more efficient. Please refer to [2] for more info.

            JSON -> JSON
            JSON -> XML
            XML -> JSON
            XML -> XML
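For illustration, an XML-to-JSON transformation with the Payload Factory mediator could be sketched as follows (the payload shape and XPath expressions are hypothetical; see [2] for the exact syntax):

```xml
<payloadFactory media-type="json">
    <!-- $1, $2, ... are replaced by the evaluated arg values below -->
    <format>{"pizza": {"name": "$1", "price": "$2"}}</format>
    <args>
        <arg evaluator="xml" expression="//name/text()"/>
        <arg evaluator="xml" expression="//price/text()"/>
    </args>
</payloadFactory>
```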


            High Performance Multitenant REST APIs with PassThru Transport 

            With the ESB 4.6 release, we made the high-performance Pass-Thru Transport (PTT) the default transport of the ESB. However, we did not support most of the multitenant (MT) scenarios with PTT in the 4.6 release; in particular, REST APIs in MT were not supported in 4.6.
            In WSO2 ESB 4.7, we have added full support for multitenant REST APIs, which allows you to create blazing-fast REST APIs in multitenant cloud environments.

            Transport Headers with Header Mediator 

            Often we have to deal with various HTTP headers when integrating RESTful back-end services. In such cases, manipulating HTTP headers through a mediator is very handy. We added this functionality to the Header mediator (which originally supported transforming SOAP headers only), so that users can set whatever transport headers they need with the Header mediator. (One could do the same with the Property mediator, but using the Header mediator makes your config more readable and manageable.)

            Better Store and Forward Story - Guaranteed Delivery and Rate Matching 


            The Store and Forward pattern is used in many integration solutions. It is becoming more and more frequent that we have to deal with guaranteed-delivery, in-order delivery or rate-matching scenarios in messaging. The general approach to the store-and-forward problem is to use a persistent or non-persistent store for the messages, and then have a separate component that forwards/processes the messages based on a predefined strategy.


            Store and Forward 

            WSO2 ESB has supported these use cases since the introduction of Message Stores and Processors. In 4.7, we have revamped the implementation of message stores and processors so that it caters to high-performance guaranteed-delivery and rate-matching scenarios.

            Most integration scenarios require a robust and efficient guaranteed-delivery infrastructure, and such use cases also call for a high-performance message broker solution such as WSO2 Message Broker. So, in the 4.7 release we put extra effort into providing seamless integration between WSO2 ESB and WSO2 MB. The improvements took place in the JMS Message Store, the Forwarding Message Processor (guaranteed delivery) and the Sampling Message Processor (rate matching). In future ESB releases we are planning to strengthen Message Stores and Processors further. For more info on WSO2 ESB/WSO2 MB integration, please refer to [3].

            SSL Tunneling Support

            When your ESB connects to a back-end server through a proxy server, you can enable secure sockets layer (SSL) tunneling through the proxy server, which prevents any intermediary proxy services from interfering with the communication.


            WSO2 ESB 4.7.0 supports such scenarios and you can configure either Pass-Thru or NHTTP transports to support SSL tunneling. Please refer [4].

            SSL Inbound Profiles with Multi-HTTPS Transport

            The Multi-HTTPS transport is similar to the HTTPS-NIO Transport, but it allows you to have different inbound SSL profiles with separate trust stores and key stores for different hosts using the same ESB. The ESB can listen to different host IPs and ports for incoming HTTPS connections, and each IP/Port will have a separate SSL profile configured. [5]



            Inbound Connection Throttling Support for PTT and NHTTP 

            WSO2 ESB uses the non-blocking HTTP transports (PTT, NHTTP), which are based on HttpCore NIO. When clients keep sending requests to the ESB, the ListeningIOReactor that resides on the transport listener side keeps accepting connections. We have encountered production scenarios where the ESB should throttle the number of accepted connections, so we introduced a new throttling parameter for both the PTT and NHTTP transports.

            You can specify it in the passthru.properties or nhttp.properties file to throttle the number of inbound connections: max_open_connections = 1000. Internally, if the connection count goes above the specified limit, the ListeningIOReactor is periodically paused. By default, max_open_connections throttling is not applied, meaning the ESB accepts incoming connections without limit.

            OCSP/CRL

            OCSP (Online Certificate Status Protocol) and CRL (Certificate Revocation List) are two protocols used to check whether a given X509 certificate has been revoked by its issuer. At the verification phase of the SSL handshake, different verification procedures are used to make sure the connecting party is trusted. OCSP/CRL certificate verification is mandatory where high security is concerned. This feature is implemented for the NHTTP transport of ESB 4.7 and can be enabled in axis2.xml by setting the HTTPS transport sender parameter 'CertificateRevocationVerifier' to true.

            Less Bugs 

            In the WSO2 ESB 4.7 release, we brought the issue count down drastically by fixing nearly 400 bugs and resolving over 550 public JIRAs, which in turn brings the total outstanding issue count to fewer than 150 issues.

            Performance 

            Although the ESB 4.6 release was solely a performance-enhancement effort, the 4.7 release did not introduce any transport-level performance improvements. However, from 4.6 to 4.7 we have ensured the same performance level as 4.6, which is capable of outperforming all open source ESB vendors. The following performance comparison between 4.6 and 4.7 shows identical TPS for each scenario.

            WSO2 ESB 4.6 vs 4.7 (performance comparison chart)

            HL7 

            In the HL7 space we also made several improvements, so that it now supports Accept-Ack and Application-Ack scenarios. [7]

            Invoking Sequence/Proxy Service via Scheduled Tasks

            With ESB 4.7, you can now invoke any named sequence or proxy service with ease. You only have to give the name of the sequence or proxy service along with the task configuration. Please refer to [8].


            images courtesy of : 
            http://www.ansoncheunghk.info/article/web-apis-resource-oriented-architecture
            http://www.altisinc.com/resources/IE/images/storfor.gif
            http://windowsitpro.com/










          • I thought of compiling some of the basic concepts related to MapReduce and Hadoop, along with a trivial sample, to get started with Hadoop. I would also like to discuss the new MapReduce API of Hadoop; the samples will be based on the new API.

            Why Map Reduce?

            Nowadays, we are surrounded by huge amounts of data, and each of us keeps consuming and generating data every second. Facebook, YouTube, Twitter, LinkedIn, Googling and every other thing we do on the internet deals with huge amounts of data. The main challenge is to analyze these huge volumes of data and make decisions based on the analysis. Google was the first to come up with an abstraction called MapReduce to address the challenges of parallel processing of high volumes of data.

            Fundamentals of Map Reduce

            MapReduce is a programming model and an associated implementation for processing and generating large data sets. It was originally developed by Google (MapReduce: Simplified Data Processing on Large Clusters) and built on well-known principles of parallel and distributed processing. Since then, MapReduce has been extensively adopted for analyzing large data sets through its open source flavor, Hadoop.




            The original motivation behind MapReduce arose from Google's requirement to process large amounts of raw data, such as crawled documents and web request logs, and to build inverted indexes or graph representations. Although those computations are not very complex, owing to the high volume of data they have to be distributed across multiple machines (normally hundreds or thousands), along with support for parallelization, fault tolerance, data distribution and load balancing.

            MapReduce provides a new abstraction for all types of high-volume data processing which allows users to express the simple computations they are trying to perform, while hiding the messy details of parallelization, fault tolerance, data distribution and load balancing in a library.

            In most computations on high data volumes, it is observed that two main phases are common to most data-processing components. The original authors of MapReduce spotted this commonality and created the abstraction phases of the MapReduce model, called 'mappers' and 'reducers' (the original idea was inspired by programming languages such as Lisp).

            When processing large data sets, for each logical record in the input data it is often required to apply a mapping function to create intermediate key-value pairs. Then another phase, called 'reduce', is applied to all the data that shares the same key, to combine the derived data appropriately.

            Mapper

            The mapper is applied to every input key-value pair (split across an arbitrary number of files) to generate an arbitrary number of intermediate key-value pairs. The standard representation of this is as follows:


            map: (k1, v1) → [(k2, v2)]

            So, if we take the example of processing a large set of text files to obtain word frequencies (word count), the input to the map function is the file name and the file content, denoted by k1 and v1. Within the map function, the user may emit any arbitrary key-value pairs, denoted by the list [(k2, v2)].

            eg: Mapper for Word Count sample
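The embedded gist for this sample is not available in this archived copy, so here is a plain-Java sketch of the mapper logic (the class and method names are my own; a real Hadoop mapper would extend org.apache.hadoop.mapreduce.Mapper and emit via Context.write):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;

public class WordCountMapper {
    // map: (k1, v1) -> [(k2, v2)]
    // k1 = byte offset of the line (unused here), v1 = the line's text;
    // emits one (word, 1) pair for every token in the line.
    public static List<Entry<String, Integer>> map(long offset, String line) {
        List<Entry<String, Integer>> emitted = new ArrayList<>();
        for (String token : line.trim().split("\\s+")) {
            if (!token.isEmpty()) {
                emitted.add(new SimpleEntry<>(token, 1));
            }
        }
        return emitted;
    }
}
```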


            Reducer

            The reducer is applied to all values associated with the same intermediate key to generate output key-value pairs. It is important to keep in mind that between the map and reduce jobs there is an implicit distributed "group by" operation on the intermediate keys, and the intermediate data arrive at each reducer in order, sorted by key.

            Since we have an intermediate 'group by' operation, the input to the reduce function is a key-value pair where the key k2 is the one emitted from the mapper, together with the list of values [v2] that share the same key.

            reduce: (k2, [v2]) → [(k3, v3)]

            eg: Reducer for Word Count sample

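Again, the embedded gist is not available here, so this is a plain-Java sketch of the reducer logic (names are my own; a real Hadoop reducer would extend org.apache.hadoop.mapreduce.Reducer):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map.Entry;

public class WordCountReducer {
    // reduce: (k2, [v2]) -> [(k3, v3)]
    // k2 = a word, [v2] = all counts emitted for that word by the mappers;
    // produces a single (word, total count) pair.
    public static Entry<String, Integer> reduce(String word, List<Integer> counts) {
        int sum = 0;
        for (int count : counts) {
            sum += count;
        }
        return new SimpleEntry<>(word, sum);
    }
}
```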



            WordCount with Map Reduce


            So, as we have covered all the fundamentals related to MapReduce, it is time to write our simple MapReduce program to count the word frequencies of a given data set.
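To tie the phases together, here is a self-contained, single-machine sketch of the whole word-count pipeline: a map phase, the implicit "group by" shuffle, and a reduce phase (everything here is illustrative plain Java; on Hadoop, the framework performs the shuffle and runs the mappers and reducers in a distributed fashion):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCount {
    public static Map<String, Integer> run(List<String> lines) {
        // Map phase: emit an intermediate (word, 1) pair for every token.
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String line : lines) {
            for (String token : line.trim().split("\\s+")) {
                if (!token.isEmpty()) {
                    intermediate.add(new SimpleEntry<>(token, 1));
                }
            }
        }
        // Shuffle phase: the implicit "group by" on intermediate keys,
        // which Hadoop performs between the map and reduce jobs.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : intermediate) {
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                    .add(pair.getValue());
        }
        // Reduce phase: sum the grouped counts for each word.
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> entry : grouped.entrySet()) {
            int sum = 0;
            for (int value : entry.getValue()) {
                sum += value;
            }
            counts.put(entry.getKey(), sum);
        }
        return counts;
    }
}
```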

            images courtesy of: http://dme.rwth-aachen.de/de/research/projects/mapreduce
            http://technorati.com/technology/article/massively-parallel-processing-of-big-data1/
            http://www.rabidgremlin.com/data20/


          About Me
          Colombo, Sri Lanka
          I'm a Software Architect, working for WSO2 Inc. - The Open Source SOA Company. I completed my B.Sc. Engineering degree in Computer Science and Engineering at the University of Moratuwa, Sri Lanka. At WSO2, I'm currently working on the WSO2 ESB team. I'm also a committer and a PMC member of Apache Synapse and have contributed to the Apache Rampart/C and Axis2/C projects. I'm a cricket and music fan, and also interested in travelling and photography.