Kenzan has created many microservice applications over the years, with many of those running thousands of instances. Our experience is that a lot of organizations want to take advantage of the increased scalability and other benefits that microservices offer, only they don’t know where to start. How do they set up data? How will microservices affect their deployments? What technologies should they use? The task can feel quite daunting! The reality is that it’s much simpler than it seems. To prove this point, Kenzan created an open source microservices project called Million Song Library (MSL).
Not familiar with the benefits of microservices yet? No problem! Check out our blog series on microservices for a deeper dive. In a nutshell, microservices are a set of small services that make up a full application stack. The services are typically broken down by functional area within the business. This brings several core benefits:
- Targeted scalability where and when needed
- Decoupling of the functional areas of an application
- Facilitation of continuous delivery
- Cleaner code management
This is just a high level overview of the benefits, so be sure to read Microservices for a Macro World if you’d like a more detailed discussion.
So what is Million Song Library? At a basic level, MSL is a microservices-based application that lets users navigate through large sets of music (albums, artists, and songs) while also tracking and rating their favorites. That said, the functionality of the application is actually secondary to the main goal: to show how easy it can be to create microservices and run them both locally and up in the cloud. As you’ll see, with the help of good, solid architectural patterns, it is simple to create and maintain a fully functioning microservices application.
Over the next couple of weeks, you can look forward to a series of blog posts covering the different aspects of MSL. At the end of the series, our hope is that you’ll have a good familiarity with microservices, the technologies used in MSL, and its core architectural patterns. You’ll also be able to run MSL locally as well as within your own AWS environment.
At this point you probably want to know more about the architecture of MSL and what makes it tick. So let’s get to it!
For MSL, it’s best to review the stack from the bottom up. The data layer leverages Cassandra NoSQL data stores, with a separate data collection client for each functional area. The services are likewise broken up by functional area, each one related to managing a library of a million songs: catalog services, login services, and even ranking services. Each of these services is fully RESTful, and each (ideally) has a very small set of responsibilities. The task of managing access to these services belongs to a proxy layer (Zuul) that handles all API traffic coming into the application as well as discovery of the correct microservice.
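To make the "small set of responsibilities" idea concrete, here is a minimal sketch of what one such service endpoint looks like. This is not MSL's actual code (the real catalog service is built on Jersey and backed by Cassandra); it's a self-contained illustration using the JDK's built-in `com.sun.net.httpserver`, with a made-up `/catalog/songs` path and a canned payload standing in for a database query:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CatalogService {

    // Start a tiny HTTP service exposing a single RESTful resource.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/catalog/songs", exchange -> {
            // A real MSL service would query Cassandra here; we return a canned payload.
            byte[] body = "[{\"id\":\"1\",\"title\":\"Example Song\"}]"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(8080);
        System.out.println("Catalog service listening on port 8080");
        server.stop(0);
    }
}
```

The point is the shape, not the plumbing: one service, one narrow functional area, a plain HTTP/JSON contract that the Zuul proxy layer can route to without knowing anything about the implementation behind it.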
As it stands now, MSL is easily deployed into AWS, but it was intentionally built to be deployable into other environments as well. Want to drop the routing layer logic, the services layer, and the data tier into another cloud or data center environment? You can do that!
From a front-end perspective, MSL is composed of a few core technologies. Our goal was to use a technology stack that we find easy to work with and that is common in the marketplace:
- AngularJS 1.4 – Front-end framework
- Less – CSS preprocessing
- Gulp – Build management
- npm – Front-end package manager
- Webpack – Bundler for modules and dependencies
- ESLint – Style guide linter tool
- Material Design – Standard design toolkit
The back-end technology was architected with the same simplicity in mind. Given the robustness of the Netflix OSS stack, we decided to stick with Java and gain the benefits of a solid Netflix OSS ecosystem:
- Java 8 – Server side language
- Jersey – JAX-RS reference implementation used for the web service layer
- JUnit – Server side unit testing
- DataStax Java Driver – Database driver for Cassandra
- Netflix OSS – Components for building microservices applications:
  - Eureka – Enables application discovery within the microservices environment
  - Hystrix – Handles circuit breaking within the application
  - Zuul – Routes API calls into the environment (proxy server)
  - Ribbon – Manages software load balancing (client library)
  - Archaius – Offers dynamic properties management
  - Karyon – Provides a base container for all microservices
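Of these, circuit breaking is the pattern that tends to need the most explanation. The idea Hystrix implements is simple: after a dependency fails repeatedly, stop calling it and fail fast to a fallback, so one sick service can't drag down the whole application. The following is a deliberately bare-bones, plain-Java sketch of that pattern (not the Hystrix API itself, which adds thread isolation, timeouts, metrics, and recovery windows on top):

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch illustrating the pattern Hystrix implements.
public class CircuitBreaker<T> {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= failureThreshold) {
            return fallback.get();          // circuit open: fail fast, skip the primary
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0;        // success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;          // count the failure, serve the fallback
            return fallback.get();
        }
    }
}
```

In MSL, Hystrix provides this behavior (and much more) out of the box, so the services get resilience without hand-rolling logic like the above.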
At Kenzan, we believe that documentation is as important as the code we write. For that reason, the MSL project uses some critical tools to facilitate documentation:
- Swagger – Framework for generating API documentation alongside code
- KSS – Specification for generating CSS style guides
- AsciiDoc – Markup language for generating general user documentation
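In MSL, Swagger definitions are generated alongside the code, but it helps to see what the resulting API documentation describes. Below is a small hand-written Swagger (OpenAPI 2.0) fragment for a hypothetical catalog endpoint; the path and fields are illustrative, not MSL's actual spec:

```yaml
swagger: "2.0"
info:
  title: MSL Catalog API (illustrative)
  version: "1.0"
paths:
  /catalog/songs:
    get:
      summary: List songs in the catalog
      produces:
        - application/json
      responses:
        200:
          description: A JSON array of songs
```

Because the real definitions live next to the code, the documentation stays current as the services evolve, which is exactly the point of generating it rather than writing it by hand.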
Currently, you can deploy MSL in several ways, with additional methods on the horizon:
- Maven – Local builds
- Docker – Containerized deployments
- RPM – Manual deployments into AWS
- Spinnaker – Pipelines for continuous integration (CI) and continuous delivery (CD)
This all probably seems like a lot, but don’t worry! By the end of this blog series, you will be familiar with all these technologies, along with the recommended architectural patterns to use for the microservices. We are excited to release MSL into the open source community and watch it take flight. And, we’re glad you came along on the adventure!
Craig Martin is the Vice President of Engineering at Kenzan. He is based out of Denver and oversees all engineering activities within Kenzan. Much of his current work focuses on architecting highly scalable, cloud-based microservice applications and leading the projects that create them.