
Kenzan has created many microservice applications over the years, with many of those running thousands of instances. Our experience is that a lot of organizations want to take advantage of the increased scalability and other benefits that microservices offer, only they don’t know where to start. How do they set up data? How will microservices affect their deployments? What technologies should they use? The task can feel quite daunting! The reality is that it’s much simpler than it seems. To prove this point, Kenzan created an open source microservices project called Million Song Library (MSL).


Not familiar with the benefits of microservices yet? No problem! Check out our blog series on microservices for a deeper dive. In a nutshell, microservices are a set of small services that make up a full application stack. The services are typically broken down by functional area within the business. This brings several core benefits:

  • Targeted scalability where and when needed
  • Decoupling of the functional areas of an application
  • Facilitation of continuous delivery
  • Cleaner code management

This is just a high-level overview of the benefits, so be sure to read Microservices for a Macro World if you’d like a more detailed discussion.


So what is Million Song Library? At a basic level, MSL is a microservices-based application that lets users navigate through large sets of music (albums, artists, and songs) while also tracking and rating their favorites. That said, the functionality of the application is actually secondary to the main goal: to show how easy it can be to create microservices and run them both locally and up in the cloud. As you’ll see, with the help of good, solid architectural patterns, it is simple to create and maintain a fully functioning microservices application.

Over the next couple of weeks, you can look forward to a series of blog posts covering the different aspects of MSL. At the end of the series, our hope is that you’ll have a good familiarity with microservices, the technologies used in MSL, and its core architectural patterns. You’ll also be able to run MSL locally as well as within your own AWS environment.

At this point you probably want to know more about the architecture of MSL and what makes it tick. So let’s get to it!

For MSL, it’s best to review the stack from a bottom-up approach. The data layer leverages Cassandra NoSQL data stores, and there is a separate data collection client for each functional area. The services are also broken up into functional areas, each one related to managing a library with a million songs. These include things like catalog services, login services, and even ranking services. Each of these services is fully RESTful, and each (ideally) has a very small set of responsibilities. The task of managing access to these services belongs to a proxy layer (Zuul) that handles all the API traffic coming into the application as well as discovery of the correct microservice.
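
To make this concrete, below is a minimal sketch of what one such RESTful endpoint might look like using Jersey (JAX-RS), which MSL uses for its web service layer. The resource, paths, and placeholder response are hypothetical illustrations, not MSL’s actual code:

import java.util.Collections;
import java.util.Map;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

//hypothetical catalog resource, for illustration only
@Path("/catalog/songs")
public class SongResource {

  @GET
  @Path("/{songId}")
  @Produces(MediaType.APPLICATION_JSON)
  public Response getSong(@PathParam("songId") String songId) {
    //a real service would delegate to its Cassandra-backed data collection client here
    Map<String, String> song = Collections.singletonMap("id", songId);
    return Response.ok(song).build();
  }
}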

As it stands now, MSL is easily deployed into AWS, but it was intentionally built to be deployable into other environments as well. Want to drop the routing layer logic, the services layer, and the data tier into another cloud or data center environment? You can do that!

From a front-end perspective, MSL is composed of a few core technologies. Our goal was to use a technology stack that we find easy to work with and that is common in the marketplace:

  • AngularJS 1.4 – Front-end framework
  • ES6 – JavaScript environment
  • Less – CSS preprocessing
  • Gulp – Build management
  • NPM – Front-end package manager
  • Webpack – Bundler for modules and dependencies
  • Karma – Test runner for JavaScript
  • ESLint – Style guide linter tool
  • Material Design – Standard design toolkit

The back-end technology was architected with the same simplicity in mind. Given the robustness of the Netflix OSS stack, we decided to stick with Java and gain the benefits of a solid Netflix OSS ecosystem:

  • Java 8 – Server side language
  • Jersey – Web service layer implementing JAX-RS
  • JUnit – Server side unit testing
  • DataStax – Database driver for Cassandra
  • Netflix OSS – Components for building microservices applications:
    • Eureka – Enables application discovery within the microservices environment
    • Hystrix – Handles circuit breaking within the application (see the sketch after this list)
    • Zuul – Routes API calls into the environment (proxy server)
    • Ribbon – Manages software load balancing (client library)
    • Archaius – Offers dynamic properties management
    • Karyon – Provides a base container for all microservices
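
To make the Hystrix piece concrete, here is a hedged sketch of a command that wraps a downstream call with circuit breaking. The class, group, and service names are ours for illustration, not MSL’s:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

//illustrative command wrapping a call to a downstream catalog service
public class GetCatalogCommand extends HystrixCommand<String> {

  private final String songId;

  public GetCatalogCommand(String songId) {
    super(HystrixCommandGroupKey.Factory.asKey("CatalogGroup"));
    this.songId = songId;
  }

  @Override
  protected String run() throws Exception {
    //the protected call; repeated failures or timeouts here trip the circuit
    return callCatalogService(songId);
  }

  @Override
  protected String getFallback() {
    //served when the circuit is open or run() fails, keeping callers healthy
    return "{}";
  }

  private String callCatalogService(String songId) throws Exception {
    //stand-in for an HTTP call to the catalog microservice
    throw new UnsupportedOperationException("stubbed for illustration");
  }
}

A caller simply invokes new GetCatalogCommand("42").execute() and transparently receives the fallback whenever the catalog service is unhealthy.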

At Kenzan, we believe that documentation is as important as the code we write. For that reason, the MSL project uses some critical tools to facilitate documentation:

  • Swagger – Framework for generating API documentation alongside code
  • KSS – Specification for generating CSS style guides
  • AsciiDoc – Markup language for generating general user documentation

Currently, you can deploy MSL in several ways, with additional methods on the horizon:

  • Maven – Local builds
  • Docker – Containerized deployments
  • RPM – Manual deployments into AWS
  • Spinnaker – Pipelines for continuous integration (CI) and continuous delivery (CD)

This all probably seems like a lot, but don’t worry! By the end of this blog series, you will be familiar with all these technologies, along with the recommended architectural patterns to use for the microservices. We are excited to release MSL into the open source community and watch it take flight. And, we’re glad you came along on the adventure!


Craig Martin is the Vice President of Engineering at Kenzan. He is based out of Denver and oversees all engineering activities within Kenzan. Many of his current responsibilities focus on architecting and leading projects to create highly scalable, cloud-based microservice applications.

In our first post of this series, we looked at how microservices help you build applications with an eye towards the future. And in our second post, we took a deep dive into how microservices work. But what if you currently have a monolithic application? How do you know if it’s time to move to microservices? And if so, how do you get there?

As we mentioned before, knowing why to build a microservice is only half the battle. Knowing when (or when not) to build one can be a trickier question. There’s no one-size-fits-all answer. That said, there are a few questions you can ask to help decide if you’re ready for microservices. We’ll explore them in this final post of our series.

Reading the Signs

Is your organization or team experiencing exceptional growth in both employee base and technical stack?
As a codebase grows to a certain size, more people are needed to maintain it. But anyone who has worked on a single codebase with ten other engineers knows the struggle of dealing with merge conflicts during code reviews. If a team is losing valuable time trying to resolve conflicts, it might be time for microservices. Microservices help distribute development teams more efficiently, as complex backend systems can be built faster by dividing workloads into multiple small teams, with each team responsible for a given set of microservices.

Is your application down often?
Every time you deploy an application, it breaks. Or the backend system is seemingly down all the time for maintenance. If these scenarios sound all too familiar, it might be time to shift from a monolithic architecture to microservices.

Are you having difficulty scaling hardware to optimize application performance?
Larger and more complex applications require additional hardware to perform well. Some parts of an application might put more demand on the hardware than others, and the whole application can face degraded performance due to a single poorly-performing feature. Microservices allow individual units of business logic to be assigned their own dedicated hardware, and they can be scaled independently of other services. A slower-performing process can be isolated and capped to a fixed amount of CPU, memory, disk, and network bandwidth, which means it can’t steal resources from other features.

Choosing the Moment

Once you’ve decided to build a microservice, the next questions are when to build it and when to switch over. That’s a hard question to answer for any organization. The monolith architecture works for small applications and small engineering teams. But when does the monolith stop working?

It turns out growth is often the death of the monolith. Organizations undergoing exceptional growth, in both employee base and technology stack, can sustain that growth by switching to microservices. If the engineering team is losing valuable time resolving merge conflicts, it might be time for microservices. If the growing backend system is down all the time for maintenance, microservices might be the answer. In short, the time to build a microservice is when growth is expected.

Hardware provides a great example of this, as a monolith can only be scaled horizontally so much. Maybe those high-CPU and high-memory servers are too expensive and too underutilized during non-peak hours. The cloud becomes more appealing with the ability to provision hardware as needed, and smaller servers with lightweight workloads are easier to bring up and tear down in response to changing demands. Microservices have lower hardware requirements, and they can start up quickly to meet peak demand.

Making the Move

Monoliths present a number of challenges that you’ll need to tackle in the move to microservices. A monolith has a tendency toward tight coupling of components, as well as stateful behavior. The application was never intended to live as separate and isolated components working in concert to make a system. The monolith may also be rooted to the infrastructure it lives on, depending on system resources such as a local filesystem and network. What’s more, you need to keep everything up and running during the transition. So how do you move to microservices? Let’s break it down.

Step 1: Determine Your Domains

Your first task is to look at all the features of the application and organize them into logical business units, or domains. For example, your application may have major features related to login/logout, user data, and session management. These features can be logically organized into a single business domain called Authentication. Repeat this step for all of the other domains in your application.

Step 2: Prepare the Monolith

Next, get your application ready for the big breakup. To do this, decouple components within the application along the lines of the business domains you came up with. This will result in a number of edge and middle modules within the application, each dedicated to a particular domain, like our Authentication example. You’ll also need to move stateful in-memory stores into shared datastores. Finally, put a router in front of the monolith to smooth the rollout of microservices.

Step 3: Work From the Bottom Up

As you begin breaking out microservices from the monolith, it’s always best to start at the bottom and work your way up. First, create a separate database to store data for the domains you are moving to microservices. Next, break out the data access modules that access this data into middle microservices. Finally, break out edge modules that consume this data into edge microservices. When everything’s ready to go, use the router to toggle redirection to the new edge services. Keep repeating this same process for each domain until all of your modules are broken out and the monolith is no more.

In the diagram below, the Login, Users, and Sessions features were first decomposed into modules within the monolith, and then broken out as separate microservices.

[Diagram: breaking up the monolith]

One challenge you’ll encounter during the transition is the need to support both the microservices and the monolith side by side. That also means duplicating work, as you’ll often have to make changes or fixes in both places. The good news is that organizing your monolith similarly to your microservices makes it easier to copy-and-paste code between them.

Conclusion

If your application and organization are both set for growth, microservices can help you meet the challenge of building modular, scalable, highly available solutions. When you’re ready, you need to organize your application’s features into domain-specific modules, then break them out one by one.

Just remember that transitioning from a monolith won’t happen in a single overnight deployment. You need to strategize as you cherry-pick domains out of the monolith. The key is to complete the transition to microservices and not leave your application in limbo. If you’d like to learn more about how to make the move to microservices, or if you need some help, feel free to ask us. That’s what we’re here for.

In the quickly evolving world of front-end development, it can be overwhelming to choose from the multitude of frameworks. It is, by extension, downright baffling to build a whole project from scratch – which is exactly what we did for Kenzan.io.

In this article we’ll walk through the technologies we chose, why we chose them, and what we thought.

Kenzan.io Scope
Before diving in, let’s take a look at the scope of the project. We needed to build a fast, sleek, and responsive website that could integrate with other Kenzan sites. The website also needed to be easily updated by our marketing department. Kenzan.io was simple in terms of business logic, with very little state maintained, and most of the complexity held in the views.

Front-End Architecture
This leads into our first design decision: React for our front-end view library. React gave us the scaffolding we needed to design component-based views within a single-page application without weighing down the project. Most importantly, React’s one-way data binding, paired with the virtual DOM, made our image- and animation-rich site run with impressive speed on all browsers. We styled our views using Sass to take advantage of variables, mixins, and other advanced CSS features. The Sass was compiled down to CSS in our build process, and vendor prefixes were added using Autoprefixer. These few pieces led to fast and aesthetic pages.

Since React only handles our views, we needed a way to handle the model and controller portions of our front-end. We decided to use React Router for our SPA routing, jQuery for advanced DOM interaction and HTTP calls, and ES6 for all other business logic in the site. React Router has quickly gained steam as the widely accepted routing package for React applications, and we found it easy to learn and incorporate. We combined React Router, jQuery, and vanilla JS to build a scroll-based navigation for the website, called scroll jacking. This feature is often handled with CSS and HTML sections, but we decided to incorporate it into a single-page application architecture by pairing scrolling with view routing. We also used jQuery to handle our AJAX calls because the library was already present in the project. For the sake of learning, a dive into Fetch or Thunk would have been interesting, but ultimately would have added unnecessary weight to our application. Finally, we chose ES6 over ES5 for all the new features, including JavaScript modules, arrow functions, and classes. With the help of the very opinionated Airbnb style guide, we found ES6 syntax to be more concise than ES5. We compiled our ES6 using Babel and handled module loading with Webpack streams in our Gulp build process. Both libraries had fairly simple configuration and required no work once the boilerplate was assembled.

Back-End Architecture
With our front-end architected, we began looking at ways to integrate Kenzan.io with our other Kenzan sites and make it friendly for marketing updates. Since all Kenzan sites are WordPress sites and our marketing team is very familiar with the WordPress content management system, we decided to pursue the bleeding-edge WordPress API. With the API, all data was entered, stored, and retrieved from the WordPress CMS, and we had access to data from all of Kenzan’s pages. Most importantly, we found the WordPress API extremely easy to use. The API had good documentation, was straightforward to integrate into a single-page application, and made updating content quick and easy.

Testing
We saved the best for last with unit testing. This does not follow test-driven development, but given our time constraints, we wanted to get all content on the site before testing so we could evaluate the time remaining before making a testing plan. When we found ourselves with a couple weeks left, we decided to branch out again and try a new test runner called Ava. The allure of Ava is the ability to run unit tests concurrently, each test with an isolated scope in a separate Node process. This means no interference between tests, with faster test suite execution. For pure JS, we tested with Ava and Sinon, the latter used for spies and stubs. For React components, we paired Ava with Enzyme, an extension of ReactTestUtils, and BrowserEnv, which provides a virtual browser in Node. This trifecta allowed for quick and seamless testing of our React components, including rendering the DOM, testing lifecycle methods, updating the state, and re-rendering the component. All the testing libraries required very little boilerplate code to get started and were easy to work with when writing the tests.

Finally, we wanted to add a last layer of confidence with a suite of E2E tests. Prior to this project, most of our front-end development experience had been in Angular, with E2E testing handled by Protractor. Unfortunately, Protractor is not friendly with React, so this was another chance to learn something new. We found an E2E library called Nightwatch.js that integrated with React and ran on Node, making configuration and execution not too different from Protractor. The creation of these tests was handled by our QA team and is a topic for another blog post, but their inclusion helped ensure no bugs made it out to production.

The Final Product
After six weeks and many scrums, we met our goal of delivering a responsive, performant website with a WordPress back-end and full unit and E2E test suites. However, our most important accomplishment was diving deep into new technologies and expanding our knowledge here at Kenzan.

To check out the website, click here: https://kenzan.io


The author of this post is Marie Schmidt, a junior front-end developer at Kenzan. She’s featured in our employee spotlight.


We’re looking for some talented developers, architects, and engineers to help us build more cool stuff like the Million Song Library. Click here to see open positions.


In our first post in this series, we showed how microservices can help you architect applications with the future in mind.

Knowing what a microservice is, and what purpose it serves, is a big part of building a successful architecture. But to truly fit your business needs and meet your goals, understanding the variations of microservices is key.

While there is no precise way to define the architectural style of microservices, by looking at specific characteristics, we can better understand what makes an application a microservice. In this post, we’ll go into more detail about how microservices work, and how to make them work for you.

How Big is Micro?

It’s possible to build what you think is a microservice, only to find that what you’ve actually created is a distributed monolith: individual applications that become so coupled together that we start referring to them as a single noun. In other cases, we end up seeing microservices get so large that they themselves become monolithic, resulting in a monolith army.

The size of a microservice isn’t determined by the number of lines of code or the amount of functionality, but by the amount of volatility a microservice has—the amount of change that is expected to occur over the life of a microservice. If changing one microservice also requires changing another microservice, it means those microservices have been incorrectly decoupled from one another and should instead be combined. On the other hand, if a microservice is composed of features that are fundamentally dissimilar and volatile, it means those features have been incorrectly coupled together, and they should instead be split up.

In a nutshell, features of a microservice should be grouped by similarity and the likeliness of change. If changes to a microservice keep causing backwards incompatibility or constantly require refactoring, it probably needs to be decomposed into multiple microservices.

Life on the Edge (and in the Middle)

Another part of defining a microservice is understanding what the types of microservices are and the rules they should follow. These can vary depending on your architecture. One architectural solution we’ve employed with success defines two types of microservices: data-driven services in the middle and business-driven services on the edge.

[Diagram: microservices cloud architecture]

Middle Tier: Driving Data

Data-driven services, called middle tier services, are each assigned to and solely responsible for a single data source. In this architecture, the only way to access any given data source is through the corresponding middle tier service. Because each service owns exactly one data source, an outage in one data source won’t take the other services down with it. The only responsibility of a middle tier service is to make data available to other microservices, which can also mean applying caching or fallback scenarios to keep the data flowing.
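
As a hedged sketch of what such a middle tier accessor might look like, here is a Java example using the DataStax Cassandra driver. The keyspace, table, and fallback value are invented for illustration:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

//illustrative middle tier accessor: one service, one data source
public class SongTitleService {

  private final Session session;

  public SongTitleService(String contactPoint) {
    Cluster cluster = Cluster.builder().addContactPoint(contactPoint).build();
    this.session = cluster.connect("music"); //hypothetical keyspace
  }

  public String getTitle(String songId) {
    try {
      Row row = session.execute("SELECT title FROM songs WHERE id = ?", songId).one();
      return row != null ? row.getString("title") : "unknown";
    } catch (Exception e) {
      //a simple fallback keeps data flowing to callers when Cassandra is unhealthy
      return "unknown";
    }
  }
}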

Edge Tier: Driving Business

We refer to business-driven services as edge tier services, which drive the business logic of an application. Edge tier services typically exhibit the most volatility-based decomposition, as the business logic for each edge tier service can vary greatly depending on the systems it supports.

While choosing when to build a middle tier service is easy (if you have a data source, you need a microservice to go with it), choosing edge tier services requires more attention to the amount of volatility in the business logic. Functionality is driven by edge services, and in this type of architecture the number of edge tier services will always be greater than the number of middle tier services. Edge tier services connect with one or more middle tier services, but typically won’t connect with another edge tier service. In this particular solution, proper decomposition of microservices based on volatility shouldn’t require edge services to depend on other edge services, or middle services to depend on more than one data source.

While we’ve put this type of architecture to good use, other solutions are certainly possible—more on that in a bit.

Finding Each Other in the Cloud

Deploying microservices to the cloud enables clusters of microservices to be scaled up as load increases, and scaled down after load diminishes. This means that IP addressing of machines running microservices is constantly changing. So the challenge becomes: how do we route fixed traffic to a moving target?

In the past, a common pattern was to reference hosts using domain names rather than IP addresses. In this case, when a host IP changes, the DNS record is updated with the new IP address. Tools such as Consul can be employed to propagate DNS changes. Alternatively, redundantly-deployed services can register with a load balancing appliance, and the group of services is then referred to using the address of the load balancer. However, in a microservices architecture, this can rapidly become costly due to the large number of load balancers required.

To better solve this issue, we must get creative and route traffic in a more dynamic way. Service discovery is a pattern that lets us identify a group of microservices by name rather than by IP address. With service discovery, a centralized registry is used to store the locations of all services in the surrounding environment.

The most common pattern we implement is to have microservices push their own information to the registry on startup and say, “Hey, my name is service-a, and here is my IP address and port”. Another service can ask the registry for all IP addresses for services named “service-a”. The requesting service can then strategize which one of the services named “service-a” to talk to. This type of push-based discovery pattern can be implemented with Eureka (part of the NetflixOSS stack), and it requires discovery-enabled apps to use a client library to talk to the discovery service.
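
As a rough sketch of the consuming side, here’s how a service might look up a peer through Eureka’s Java client. The ServiceLocator class is ours for illustration, and we assume an already-configured EurekaClient instance:

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.EurekaClient;

//illustrative lookup of a registered instance by name
public class ServiceLocator {

  private final EurekaClient eurekaClient;

  public ServiceLocator(EurekaClient eurekaClient) {
    this.eurekaClient = eurekaClient;
  }

  public String urlFor(String serviceName) {
    //ask the registry for an instance registered under serviceName, e.g. "service-a";
    //the client library handles fetching and caching the registry
    InstanceInfo instance = eurekaClient.getNextServerFromEureka(serviceName, false);
    return "http://" + instance.getHostName() + ":" + instance.getPort();
  }
}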

API gateways performing reverse proxy operations can also take advantage of this service discovery feature and can route all traffic for a specific path to a group of microservices of the same name. These gateways act as the router for all inbound HTTP requests, which consolidates Internet-facing traffic under a single domain name. The router can also apply rules or filters to enforce security for edge services, for example, to require authentication. This gives you a fine degree of control over how different types of traffic are routed.
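
As an illustration of that last point, here is a hedged sketch of a Zuul pre-routing filter that rejects unauthenticated requests. The bare header-presence check is a stand-in for real authentication logic:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;

//illustrative pre-routing filter; real auth would validate a token, not just its presence
public class AuthFilter extends ZuulFilter {

  @Override
  public String filterType() {
    return "pre"; //run before the request is routed to a service
  }

  @Override
  public int filterOrder() {
    return 1;
  }

  @Override
  public boolean shouldFilter() {
    return true; //apply to every inbound request
  }

  @Override
  public Object run() {
    RequestContext ctx = RequestContext.getCurrentContext();
    String authHeader = ctx.getRequest().getHeader("Authorization");
    if (authHeader == null) {
      ctx.setSendZuulResponse(false); //stop routing this request
      ctx.setResponseStatusCode(401);
    }
    return null;
  }
}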

Exploring Alternate Designs

At Kenzan, we are always thinking of ways we can improve on our architectural patterns. Lately we have been exploring alternatives to the edge/middle microservice design pattern described in this post. The details of this alternate design will hopefully make an appearance in a future blog post, but in the meantime here’s a sneak peek.

We like to refer to this design as the one-edge/many-middle pattern. It might even become a three-tiered design as it evolves, with edge services, middle services, and data services. In this scenario, data services provide the data abstraction layer, middle services drive business logic, and edge services drive domain logic. The edge services have similarities to an API gateway, but they also make domain-specific decisions on which middle services to use. The result is a better decoupling of microservices and a clearer design with fewer edge services.

As we are learning, a particular microservice architecture may work for one organization but not for another. We will continue to explore different microservice designs, and evaluate the benefits and challenges of each.

Conclusion

To make microservices work for your business, it’s important to compose them correctly and place the right functionality in the right tier according to your chosen architecture. There are many ways to architect and develop microservices, and we are constantly talking about ways to improve the design.

When starting with a microservice architecture, plan a consistent strategy and define guidelines for the entire organization to use. Share best practices and lessons learned with the other teams, and continue to evolve microservice patterns. Consider the scaling capabilities, and build in capabilities like service discovery.

Going with microservices from the start can be a great approach. But what if you have an existing application you want to migrate to microservices? In our next post, we’ll talk about how to break up a monolith into manageable chunks.

Architecting for the Future

At Kenzan, we often work with large, consumer-facing companies whose customers are demanding richer, more interactive, and friendlier experiences on all of their devices. As trends in software architecture evolve to adjust to these expectations, organizations need to create applications that are both highly available and highly scalable.

We’ve been building web applications with microservices for a while now and have found they’re easy to manage, can be scaled independently or across servers, and offer flexibility to quickly deliver features.

In this blog series, we’ll look at how you can architect for the future using microservices, some of the challenges you may face along the way, and strategies for mitigating these challenges.

What Are Microservices?

Microservices have been around for a few years, but you may not have encountered them yet.

In a traditional architecture, all functionality resides in a single, monolithic server-side application. This makes it easier for one developer to understand the whole system or run it locally, but it can quickly lead to issues as the application grows in size.

In a microservices architecture, functionality is distributed across small, self-contained, modular services that communicate with one another over the network.

[Diagram: monolith vs. microservices]

What Are the Advantages?

Adopting a microservices architecture brings a lot of benefits – let’s check them out.

Up and Running Today

High availability and uptime are two big selling points for microservices. That’s because they allow for smaller components or services to be upgraded without risk of taking down an entire system.

Imagine a customer-facing application that has one monolith for all services, including account information, identity data, and login portals. Maintaining and deploying updates to just one of those services means you’re at risk of bringing down the entire application if there’s an issue. With microservices, if one service goes offline, the rest of the application doesn’t have to go down with it.

Scale to Meet the Future

The first thought that comes to mind when talking about scale is the amount of load a system can handle before falling down. But that’s not the only type of scale to consider when you’re building a microservice. When you’re dealing with big data, a microservices architecture lends itself to scale for velocity (number of active users on a given system) and for volume (how much data you have). With microservices, you can scale your applications just at the point of demand. You don’t have to scale everything together. For example, edge services that communicate directly with web clients often have higher load requirements than back end services, especially if caching strategies are used to reduce communication with the back end.

Microservices can also help scale your organization along with your application. Using microservices allows for distributing development teams more efficiently. Complex backend systems can be built faster by dividing workloads into multiple small teams, with each team responsible for a given set of microservices. Clear domain ownership means you always know which team to go to when there’s a problem with a microservice. Likewise, teams are empowered to operate and deploy independently at a delivery cadence that makes sense for them rather than coordinating large releases across many teams.

Every Service For Itself!

Let’s look at our imaginary monolithic application again. The many features of this application share resources like CPU, memory, network bandwidth, and I/O, and as a result could easily overwhelm these resources, causing all the services to suffer. In a microservices architecture, many of these resources are isolated to the individual service and can be optimized more easily. Through containerization with frameworks like Docker, or serverless solutions like AWS Lambda and API Gateway, resource isolation is a much more cost-efficient practice in microservices. Services scale according to their own needs rather than to the needs of the busiest component.

Use of common protocols like HTTP, and standard interfaces like REST for communication with other services, allows microservices to be technology agnostic. Teams can choose to build microservices in the language they are most comfortable developing in. Having a microservice stack including both Node.js and Java applications is becoming more common. This lets organizations tap into a wider range of skillsets. (Choose technologies wisely, though, as some may not play nicely with other technologies in the microservice stack.)

What Are the Challenges?

Knowing why you should build a microservice is only half the battle. Knowing when to (or when not to) build one can be a trickier question. Every engagement is different, and there’s no one-size-fits-all answer. Here are some of the key factors to consider:

  • Infrastructure – Adopting a microservice stack can require a bit of scaffolding to get up and running. Considerations such as infrastructure requirements, deployment strategies, monitoring, and configuration management come into play when building a custom microservices solution. Platform-as-a-Service products like Amazon’s Elastic Beanstalk can reduce the time required to develop a microservice platform, and they streamline the process of developers deploying code.
  • Code Compatibility – Microservices can suffer from code compatibility issues just as monoliths can, but the single-codebase nature of a monolith does allow compatibility issues to be caught at compile time. In contrast, changing the interface of a microservice may go unnoticed until another microservice breaks after deployment. Maintaining API contract tests is a good way to ensure consistency of a microservice’s external API and avoid unintentional changes (a sketch follows this list).
  • Shared Libraries – There will be many microservices doing similar things, and it often makes sense to put common patterns into libraries to be shared with other teams. Sharing code with other teams is like sharing open source code on GitHub: the code must be useful enough to share, and it should be ready to accept contributions from other teams.
  • Coding Practices – Many different teams working in siloed environments can result in vastly diverged coding practices. Whether or not this is an issue can be left to the discretion of the organization and their development culture, but establishing company-wide standards and periodic discussions of new patterns can help keep teams in sync.

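On the contract-testing point above, here is a minimal sketch of the idea in JUnit 4: a test that fails if a consumed endpoint stops returning the fields a client depends on. The URL and field names are hypothetical, and a real test would parse JSON rather than string-match:

import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

//illustrative contract test: fails the build if service-a's response shape drifts
public class UserServiceContractTest {

  @Test
  public void userEndpointStillReturnsExpectedFields() throws Exception {
    URL url = new URL("http://service-a.internal/users/123"); //hypothetical host and path
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    StringBuilder body = new StringBuilder();
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = reader.readLine()) != null) {
        body.append(line);
      }
    }
    //presence checks keep the sketch dependency-free; real tests should assert full schema
    assertTrue(body.toString().contains("\"id\""));
    assertTrue(body.toString().contains("\"email\""));
  }
}
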
Finally, microservices aren’t right for every application, like those with a limited scope, a small number of users, or a single data repository.

Conclusion

When applied to the right problems, microservices are a flexible solution that many companies are adopting to architect for the future. Given the complexities, you might decide to have an experienced guide on your journey.

In our next post, we’ll talk about how to put microservices to work to solve challenges now and prepare for future needs.


Have a question about building and implementing microservices at your company? Email us at info@kenzan.com


Darren Bathgate is a technical architect at Kenzan. Over the course of his 5+ years at Kenzan, Darren has worked extensively with Java, MySQL, PHP, Cassandra, Node.js, Oracle, Jenkins, Netflix OSS, and Docker.

As we enter a new era in the Internet of Things, millions of devices are sending out streams of data. Analyzing this data can be a huge undertaking, but by combining AWS services, one can create a scalable dashboard. 

We hosted another Lunch and Learn a few weeks ago, where Nicholas Sledgianowski and Charles Palczak gave an overview and demo, which you can find below. 

In order to create innovative, scalable, and intelligent solutions for our clients, Kenzan believes that continued development is crucial. Learning from fellow Kenzanites is just one way that our employees gain new skills, so each week we host a Lunch and Learn that employees across all four of our offices can join.

Stay tuned for more videos from our Lunch & Learns.

School may still be out for the summer, but at Kenzan, the learning continues. We’re big believers in skill development, and there’s no better way to learn than from our fellow techies. So in late June, we sent some Kenzanites off to Dinosaur JS, a JavaScript conference in Denver.

They brought back some new skills that they shared with the rest of Kenzan during one of our Lunch and Learn programs. Their entire presentation, covering refactoring, accessibility, and Electron, was recorded. Check it out below:


As Kenzan works with our clients to implement scalable and reliable systems in the cloud, we’ve often turned to the NetflixOSS stack for the myriad tools and libraries it offers. A major component of our implementations has been Asgard, an instrumental tool that allows for efficient, automated deployment of applications. While Asgard has been a significant factor in the success of our Netflix OSS implementations, especially with its support for naming conventions, immutable infrastructure, application awareness, and red-black deployments, it has been showing its age, with shortcomings in several areas, including a weak API and support that is limited to AWS.

Fortunately, Netflix has been working on a successor to Asgard: Spinnaker (http://spinnaker.io/). Spinnaker represents the evolution of both Asgard and Mimir, another internal Netflix project focused on pipelines. Spinnaker takes a very different approach from Asgard, focusing much more narrowly on deployments while offering a rich set of tools for creating pipelines to manage deployments across environments. In addition to being strongly API driven, another huge part of Spinnaker is its implementation of multiple back-end cloud drivers for different infrastructure providers. Spinnaker launched with support for AWS and GCP, with follow-up support coming for Azure and Pivotal.

When Kenzan first heard about Spinnaker, we jumped at the chance to work with it, and we knew we wanted to contribute to the open source project. One tangible Kenzan contribution to the Spinnaker release was in the area of usability. Typically, projects like Spinnaker are released in such a way that they require significant effort just to install and start up, let alone do anything useful with. Our contribution for launch was to work with the Google team and, building on work they had already done, come up with a straightforward, OS-native install process for Spinnaker. Beyond this, we published, and continue to maintain, a public image for Spinnaker on AWS.

Shortly after launch, we contributed to the documentation with a tutorial showing a complete continuous-delivery setup with a simple sample application (http://spinnaker.io/documentation/hello-spinnaker.html). The tutorial goes through the complete process of setting up Jenkins, an Apt repository for baking, Spinnaker, and the jobs and pipelines needed to give a feel for what Spinnaker can really do when it is set up and integrated with a real project in Jenkins.

However, the tutorial is quite lengthy, and a fair number of moving parts are needed to get set up. This is understandable, since Spinnaker is a complex product trying to orchestrate the often complex details of cloud infrastructure. But it leaves a high barrier to entry for people who want to see Spinnaker as it should be: with pipelines and code deployment, rather than just deploying OS images.

So we set out to improve the experience and reduce the barrier to entry. That’s where Terraform comes in. A HashiCorp product, Terraform allows you to describe and manage infrastructure. Using relatively simple text configurations and back-ends for different cloud providers, it is possible to easily set up infrastructure from scratch. We decided to use Terraform to create an example Spinnaker install that gives a complete build-to-deploy experience with minimal setup on the part of the user. The project (at https://github.com/kenzanlabs/spinnaker-terraform) takes advantage of Terraform to offer versions on both AWS and GCP. After running through the instructions, you get the following:

  • Networking and security
  • Bastion host
  • Jenkins
  • Local Apt repo
  • Spinnaker
  • Jenkins Jobs
  • Spinnaker Pipeline

[Diagram: Spinnaker infrastructure provisioned by Terraform]

Since it is an automated setup, the configuration is slightly different from the tutorial’s, as there isn’t a fork of the sample app. It would be very easy to modify the installation to match the tutorial after it is set up.

We’ll be adding more example pipelines to this demo to provide real-world, continuous delivery scenarios, including multiple pipelines with triggers and testing.
——
You love technology, and so do we. Join the team that’s helping to Make Next Possible.

Want to check out some of the other work we’ve been doing? Visit our page on GitHub: https://github.com/kenzanlabs


While there is a plethora of amazing open source tools out there, recent advances in the ECMAScript language specification (ES6/ES2015) have brought amazing power and expressiveness to JavaScript on the web. Language features like import, module loading, and classes have made writing JavaScript cleaner, more consistent, and less reliant on opinionated tooling, thus making code written today in ES6 resistant to churn and less likely to become “legacy,” even after a few years. While these claims are bold, they are, in fact, very realistic, well-reasoned, and do not prevent teams from using their preferred build tools or any of the popular frameworks/libraries available.

The main concepts covered are:

  • Language Specification – Embrace ES6 or, ideally, Typescript
  • Dependency Management – Consume your dependencies in a universal format, be it from NPM or Github, while still being standards compliant
  • Module Loading – Load not only your JavaScript but also your CSS, in a standards-compliant way.

ES6 and TypeScript
ES6/ES2015 is the current iteration of JavaScript, which brings with it a number of useful features like classes, import/export statements, fat arrow functions, and improved support for variable scoping. This is a direct acknowledgment by the standards committee of the needs and experience of developers, as our JavaScript applications have gotten bigger and more complex over the past few years. However, there is also a need to incorporate tried-and-true conventions from mature, strictly typed languages like Java, where type errors can be caught before the code even reaches the browser.

While an excellent tool like Babel will make sure that one can write the ECMAScript of tomorrow today, TypeScript will do the same while also supporting interfaces, member/method privacy, and member/argument types. As applications grow and become more “distributed” (component libraries/micro UIs), and client-side data management, consistency, and durability become more mission-critical, TypeScript will ensure continuity and give developers compile-time feedback.

Below is an example of a React component written in ES6. Inline comments added here to highlight key language features:

'use strict';

//we can import all our dependencies explicitly per module / component
import React from 'react';
 
//we can even load our CSS!
import './user-details.css!';

import { GithubStore } from '../../stores/github/github-store';

//here we use ES6 classes and merely extend the React library.  This is the only usage of React in this entire file
//we now align our components with a concrete language feature.  Our components are just classes at the end of the day

class UserDetails extends React.Component {

  //we no longer have init, activate, or an IIFE to kick off our component; the language now provides a consistent standard
  constructor() {
    super();

    this.state = {
      avatar: '',
      name: ''
    };

    this.getUserDetails();
  }

  getUserDetails() {
    let store = new GithubStore();

    store.getUserDetails().then(response => {
      this.setState(response.data);
    });
  }

  render() {
    return (
      <div className="user-details">
        <img className="user-avatar img-responsive" src={this.state.avatar}/>
        <h1><span className="user-name">{this.state.name}</span></h1>
      </div>
    );
  }

}

//ES6 way to expose our class for others to consume
export default UserDetails;

Below are some TypeScript code samples, based on the Angular 2 Tour of Heroes guide. Inline comments added here to highlight key language features.

//we now have support for strictly typed language features like interfaces
export interface Hero {
  id: number;
  name: string;
}

//an example of an Angular 2 Component
import { Component, OnInit } from 'angular2/core';
import { Router } from 'angular2/router';

import { Hero } from './hero.interface';
import { HeroDetailComponent } from './hero-detail.component';
import { HeroService } from './hero.service';

//here we are using ES7 decorators (supported in TypeScript) to annotate our component
@Component({
  selector: 'heroes-list',
  templateUrl: 'src/components/heroes/heroes-list.component.html',
  styleUrls: ['src/components/heroes/heroes-list.component.css'],
  directives: [HeroDetailComponent]
})
export class HeroesComponent implements OnInit {

  //with TypeScript, we are able to enforce member types, as defined by our Hero interface
  heroes: Hero[];
  selectedHero: Hero;

  //with TypeScript, we are able to enforce privacy and argument types
  constructor(private router: Router, private heroService: HeroService) {
  }

  getHeroes() {
    this.heroService.getHeroes().then(heroes => this.heroes = heroes);
  }

  ngOnInit() {
    this.getHeroes();
  }

  onSelect(hero: Hero) { this.selectedHero = hero; }

  gotoDetail() {
    this.router.navigate(['HeroDetail', {
      id: this.selectedHero.id
    }]);
  }
}


Dependency Management
Dependency management for JavaScript applications have conventionally been managed through Bower, and more recently through just NPM itself alone. Either way, package management has relied on ultimately delivering all static assets (concatenated or not) to be included via  <script> and <link> tags in an HTML file.  But!  No more!  JSPM was specifically developed to support SystemJS (covered in the next section) as the de facto package manager for client side JavaScript.  It can:

  1. Install a package / module / library / etc from NPM or Github.
  2. Save a single configuration file (JS) that captures the entire dependency graph for an application, such that all dependencies can easily be loaded in the browser with just a single JS file in a <script> tag.
  3. Bundle all dependencies to be loaded into the browser; either for development or production (minified and concatenated).

Module Loading
Module loading and dependency management, although separate concerns, often go hand in hand in the build process and the runtime of an application. Though it didn’t make the final cut of the current ES6 spec, a module loader specification called System, which can be polyfilled today using SystemJS, will be coming in ES7. This allows us to greatly reduce the overhead in our code by letting us use ES6 import today with all our client-side static assets, and to load all our third-party vendor dependencies, along with the underlying dependency graph needed to support them, with a single <script> include.

Although there are tools like Webpack or Browserify that aim to encapsulate the entire build and packaging lifecycle of an application, they are not specifically built around ES6 like SystemJS is. For that reason, we chose SystemJS over Webpack or Browserify specifically because of its primary motivation to support ES6/ES7 while forgoing any sort of “vendor lock-in.” Both SystemJS and JSPM are also authored by the same developer, Guy Bedford.

Below is an example of what the index.html of a SPA looks like when using SystemJS / JSPM.  Inline comments added here to highlight certain features.

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Github Dashboard</title>
    <base href="/">
    <meta charset="utf-8">
    <meta name="description" content="Github Dashboard"/>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>

    <!-- single line to import the System polyfill -->
    <script src="./jspm_packages/system.js"></script>

    <!-- import our vendor dependencies, as generated for us by JSPM -->
    <script src="./config.js"></script>

    <!-- kick off our application, and that's it! -->
    <script>
      System.import('src/bootstrap.js');
    </script>
  </head>

  <body>
    <div class="container-fluid">
      <div class="col-md-*">
        <section id="content"></section>
      </div>
    </div>
  </body>
</html>

Rounding Things Out
As mentioned, this guide is opinionated only as it relates to future-proofing of how one writes and maintains their JavaScript in order to be as accommodating to standards and specifications as possible. Intentionally left out of this article are two key layers of the stack:

  1. Build / task runners
  2. UI library / frameworks

While there is no opinion on these, there are recommendations and considerations. For your build, it is recommended to use Gulp via Keystone, as it is fast, composable, has a robust ecosystem of plugins, and can be installed and managed with NPM. For the UI, any library or framework that plays nicely with the underlying stack here, like React, Angular 2, or Aurelia, will work as long as it is the right one for your application’s needs. Whatever you substitute, the only requirement should be that it plays well with ES6 and SystemJS/JSPM, so as to make the next few years of your JavaScript development as frictionless as possible.

So putting it all together, here is our stack of the future, today!

[Diagram: the front-end stack]

This post was written by Owen Buckley, a software engineer based out of Kenzan’s Rhode Island office.