JavaScript in 2017

Tech Skills

JavaScript still reigns as king for many developers. In Stack Overflow’s recent survey of 64,000 developers across the globe, JavaScript came out on top for the fifth year in a row as the most common programming language.

But with so many tools, libraries and frameworks in the JavaScript ecosystem, how does a developer make sense of it all?

Kenzan’s director of engineering, Owen Buckley, shared his insight into the world of JavaScript during one of Kenzan’s latest tech meetups in Providence, Rhode Island. Owen covers some important aspects, including language & specification, libraries & frameworks, and development & tooling.

If you didn’t get a chance to hear it live, check out the recording below:

Visit our meetup page to RSVP for Kenzan’s next meetup! 

The Digital Skills Gap: Rethinking Tech Talent

Tech Culture

For companies looking to grow their digital footprint, an increased investment in new tools and technology is a no-brainer. This may also mean expanding technical teams in order to support the needs of an organization. That said, finding and retaining top tech talent can be a major concern for businesses.

Despite thousands of new jobs opening up in software development, engineering and architecture, companies are hitting a hiring wall, unable to fill the roles they need to take their digital game to the next level. Failure to meet hiring needs has managers and CTOs alike nervous about meeting business goals. 83% of hiring managers cite the inability to fill roles as having a negative effect on revenue, market expansion, product development and employee turnover.

Likewise, job seekers are struggling to meet the criteria set by hiring companies. With a laundry list of requirements, many applicants simply don’t check off all the technical boxes in a job description.

What’s standing between companies and job seekers?
On February 23, Kenzan and Dev Bootcamp hosted an event, “Rethinking Tech Talent” to address that question. We brought together speakers from our own organizations, as well as from Uncubed and Andela, to discuss the current tech recruiting climate and actionable solutions to close the skills gap.

“Rethinking Tech Talent” on February 23 at Dev Bootcamp. Panelists included Uncubed, Andela, Dev Bootcamp and Kenzan.

We wanted to share some key takeaways that came out of that conversation.

Alternative Education
One reason for the skills gap: Education just can’t keep up with technology. The tools and skills needed to develop software are evolving quicker than most colleges and universities can teach them. By the time a course is complete in one technology, another has emerged.

Instead of turning to higher education, job seekers are now starting to approach learning differently. Alternative training, like the immersive coding courses that Dev Bootcamp offers, is becoming an increasingly popular choice among those wishing to get into technology. In 2016, there were almost 18,000 graduates from coding bootcamps across the US and Canada, and that number is likely to increase in 2017.

Training and Mentorship
It’s not just job seekers who are seeking more education. According to Stack Overflow, 70% of working developers say that learning new technology is a priority. Companies looking to retain top tech talent would do well to look at continued learning opportunities for their current workforce and to invest in programs that help employees skill up. By introducing more employer-sponsored education, companies will not only be able to keep workers happy, but will also be able to provide less-experienced developers with on-the-job training.

Rethinking Recruiting
Our panel was lucky to be joined by Andela, an organization that is helping companies look beyond the usual recruiting sources in order to tap into a market with plenty of tech talent: the African continent. Rather than focusing on education, the organization vets developers based on skills, putting applicants through a rigorous assessment before presenting them to hiring companies.

While organizations like Andela are getting more attention, many companies are still hindered by a limited definition of what it means to be highly qualified, looking solely at candidates from specific colleges or with experience at a well-known brand. Even as the number of people gaining skills from coding bootcamps and similar technical schools increases, more than half of employers still say that a computer science degree is the most important qualification. Instead of focusing on an applicant’s education, companies could benefit by shaking off those narrow criteria in favor of a more holistic, inclusive hiring policy.

More than technical skills
Gone is the image of the hoodie-wearing developer, secluded behind his computer, headphones blaring, locked into a coding marathon. In 2017, developers work on cross-functional teams, connect with clients, and give demos and presentations in public venues. Collaboration and communication are among the crucial soft skills developers need to possess.

Bring hiring companies and education together
Despite the growing popularity of alternative education and a change in recruiting policies, the biggest change can come from companies and educational sources working together.

Uncubed is an organization helping to facilitate that kind of dialogue and also, as it turns out, was a panelist at our event. As a video-first jobs platform, they know all too well the challenges both companies and job seekers face. Beyond their recruiting tools, Uncubed addresses the tech skills gap by bringing together educators and companies to develop more effective education that meets the needs of hiring companies and better prepares students for a career in the digital economy.

Creating a Spinnaker Appliance with Kubernetes

Tech Skills

Continuous delivery has quickly become the choice method for faster, safer and more frequent software deployments. As more and more tools come into play in this arena, developers are looking for new ways to enable this kind of software delivery and maximize its benefits. This guide will show how I created a bare-metal continuous-delivery appliance using Spinnaker, running on a Kubernetes cluster of “mini PCs”.

Why build a bare-metal cluster?

Because it’s fun! Many times when using a cloud platform, much of the magic gets abstracted behind dashboards and APIs. When you set up your own cluster from scratch, it really helps to connect the dots and learn about how the pieces fit together.

The cloud is not cheap. Running Spinnaker in the cloud is quite pricey due to the resource requirements. When we run it on our own hardware, we pay up front roughly the cost of one month in the cloud, but we can run it forever!

Free up resources. There are tools like minikube that allow you to set up your own single-node Kubernetes cluster on your laptop. However, this ties up your laptop’s resources. It’s very nice to have an always-running cluster on your network without needing to constantly “spin up and spin down” environments.

Total control. Running a cluster via minikube or GKE is very convenient. However, with bare metal we can tweak our setup to our heart’s content. Want to install an NFS server for persistent volumes on a node? Go for it! Want to experiment with the Ubuntu Kubernetes distribution? Install the ISO on a node! There is less magic and more “nitty gritty”. It really helps you understand how things work at the core.

Put your spare compute to work. It’s really satisfying to have your own “on prem” equipment. How many of us have Raspberry Pis or old desktops and laptops just lying around? Instead of letting them collect dust, we can add these nodes to our fleet and run jobs or applications that distribute load across them. You can put the cluster behind your router and host websites or run home automation applications, all without the overhead of the cloud. Plus it looks really cool sitting on your desk!

What is Spinnaker?

Spinnaker (http://www.spinnaker.io/) is a set of microservices that makes it easy to build continuous delivery pipelines. Contributors to the project include Netflix, Google, Microsoft and Kenzan.

The project brings together best practices and patterns for easily deploying software in an immutable-infrastructure style. Deploy targets can be instances or containers running on a multitude of platforms, including AWS, GCP, Azure and Kubernetes.

Why Kubernetes?

Spinnaker can be run from any of the above platforms; however, due to the nature of the resources needed, it can be quite expensive. Kubernetes allows us to set up our own “cloud” on bare metal. We can then use our Spinnaker instance to easily deploy to other cloud platforms or clusters. It’s also pretty neat having a self-contained “appliance” running Spinnaker. Hardware prices are constantly falling, and it is pretty fun experimenting with software on our own “datacenter”.

Choosing hardware

While I was able to get the cluster turned up with the first version of this guide, the “stick PCs” proved to be too weak on the memory requirement. I needed nodes with at least 4GB of memory. After some searching, I decided on three “Nexbox” PCs:

https://www.aliexpress.com/store/product/1Set-Nexbox-T9-Smart-TV-Box-Z8300-1-84GHz-4-Cores-Win-10-Mini-PC-4GB/2130214_32658221265.html

  • 4GB memory
  • Quad-core Atom processor
  • 64GB SSD
  • Ethernet port

It took a while for the boxes to arrive from Aliexpress but I was excited to get started when they did.

Installing Ubuntu

Unlike the TV sticks, these boxes came with Windows installed. That was no good for the cluster, so I began by trying to install Ubuntu Server. Unfortunately, since the chipset in these machines was “Cherry Trail”, it had limited Linux compatibility. The NIC did not work at all during the install.

Thankfully, after some searching I was able to find an Ubuntu image with a modified kernel to support the chipset: http://linuxiumcomau.blogspot.com/search?updated-min=2017-01-01T00:00:00%2B11:00&updated-max=2018-01-01T00:00:00%2B11:00&max-results=2

After burning the image to a thumb drive, I was able to hold the ESC key at boot, boot from the drive, and install.
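
If you haven’t written an image to USB before, dd is the simplest tool for the job. Here’s a minimal sketch, assuming the downloaded image is named ubuntu-linuxium.iso and the thumb drive shows up as /dev/sdb (both names are illustrative; check yours with lsblk first):

$ lsblk                                                    # identify the thumb drive device first
$ sudo dd if=ubuntu-linuxium.iso of=/dev/sdb bs=4M status=progress
$ sync                                                     # flush writes before unplugging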

With Ubuntu installed on the boxes, I was able to install Docker on each node:

apt-get update
apt-get install docker.io

Installing Kubernetes

I leveraged the docker-multinode scripts to get Kubernetes installed, along with Heapster and the dashboard.

https://github.com/kubernetes/kube-deploy

Installation is relatively simple: you run the master.sh script on one node and the worker.sh script on the remaining two nodes.
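
As a rough sketch of that process, assuming the scripts’ documented usage at the time (the MASTER_IP value below is illustrative):

# on the master node
$ git clone https://github.com/kubernetes/kube-deploy
$ cd kube-deploy/docker-multinode
$ ./master.sh

# on each worker node, pointing at the master’s address
$ export MASTER_IP=192.168.1.50
$ ./worker.sh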

After tunneling port 8080 into the master node, the dashboard displayed like a charm.
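
For anyone following along, the tunnel is a one-liner over SSH; a quick sketch, with an illustrative user and hostname:

$ ssh -L 8080:localhost:8080 ubuntu@k8s-master
# then browse to http://localhost:8080/ui for the dashboard (path may vary by version)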

Installing Spinnaker

I was then able to make some minor modifications to the “spinikube” specs and get Spinnaker installed.

https://github.com/kenzanlabs/spinikube

Overall, I’m very happy with how the cluster turned out. It’s great to have a dedicated cluster without wasting resources on VMs. I’m looking forward to running more workloads and monitoring performance. The next step will be to experiment with getting persistent volume storage in place with Ceph or Gluster. It will be great to take advantage of all the storage on the nodes. Stay tuned for part 2 of this guide, where we leverage Spinnaker to do a deploy on the cluster, along with some other advanced functionality.


Chad Moon is a platform engineer at Kenzan, based out of the Denver office. Specialties include crafting continuous delivery pipelines and containerizing all the things. Current work includes integrating Jenkins, Spinnaker and Kubernetes for large enterprise clients.


Have questions about building your own Spinnaker-Kubernetes cluster, or just about Continuous Delivery in general? Comment here or tweet at us: @kenzanmedia.

For more information about Kenzan services, contact info@kenzan.com

 

Hacking Docker: Discovering Containers

Tech Skills

As more businesses prepare to make a digital transformation, containers have become the choice cloud computing architecture for faster, more portable and reliable deployments. With the growing interest in containerization, the question arises of how containers integrate with existing infrastructure. In this post, we will look at how containerization affects service discovery and present a network routing solution that allows NetflixOSS Eureka to provide unified discovery across both containers and VM-based services.

Kenzan specializes in cloud technologies with extensive experience in Amazon Web Services (AWS). We have adopted a number of tools from the NetflixOSS stack for use in AWS, such as Zuul, Ribbon, and Eureka. The discovery service feature of Eureka allows us to build dynamically scalable AWS environments without the need to set up fixed routing and load balancing infrastructure.

Docker introduced a new networking layer that changes everything we know about networking in the cloud. This makes discovery and routing with Eureka challenging. Containers have their own IP addresses, belong on a different subnet, and are only routable from the host running the Docker daemon.

We have experimented with tools like flanneld that create virtual networking layers between Docker hosts, allowing for cross-host communication between Docker containers. Flanneld is easy to set up and works as advertised, but any host wishing to network with containers must run the flannel daemon. All-in-one Docker solutions like Kubernetes do everything from networking containers to orchestrating and managing multiple Docker hosts, but services outside the Kubernetes cluster still cannot network with the containers.

What we are looking for is a way to spin up containers like we do EC2 instances, have them register with a discovery service, and be able to send them traffic from anywhere in the VPC. Let’s start simple with a cluster of Docker hosts, which is something that EC2 Container Service (ECS) will provide us. We launch a few applications as Docker instances using ECS, but we get containers that can’t talk to each other and can’t be reached from any external service.

As an application in a discovery-based world, you need to tell the discovery service who you are and how others can reach you. This is easy on EC2 instances because the application can provide the host IP address and the port it’s listening on. Containers in Docker are given IP addresses that are only routable to containers running on the same Docker daemon. We can expose internal container ports as host ports, but each host port has to be static and non-conflicting with other containers on the same host. We want to be dynamic and not have to remember which ports are used versus which ports are free.
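
Docker can actually pick a free host port for us, which shows the dynamic behavior we’re after. A quick sketch with stock Docker commands:

$ docker run -d -P --name web nginx   # -P publishes exposed ports to random free host ports
$ docker port web 80                  # ask Docker which host port was chosen
0.0.0.0:32768                         # example output; the chosen port varies

The catch is that nothing outside the host knows that port 32768 means “web”, which is exactly the discovery problem we set out to solve.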

ECS provides an option to expose containers through an Elastic Load Balancer (ELB). Issues with ELBs include consuming several VPC IP addresses, requiring management of limits on how many ELBs can be created, and adding additional AWS costs. Our applications now have to remember a series of hostnames representing ELB endpoints for each environment, adding more configuration overhead. ELBs do have the advantage of security groups, which is something we may expect to lose in a container world. The new networking layer on top of Docker does not play nicely with network-based firewalls like security groups in AWS.

The goal is to find a non-conflicting, dynamic way for containers deployed to a cluster of Docker hosts to identify themselves to a discovery service and have their identity be reachable. To achieve this, we need a tool that will find other containers on the host and route traffic based on a series of rules. Traefik is that tool. Traefik is a discovery-based HTTP reverse proxy and load balancer that can discover Docker containers with minimal configuration, as well as through several other means of discovery.

Let’s look at what it takes to get Traefik running with a Docker-based backend:

$ docker run -d -p 80:80 -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock traefik --web --docker --docker.domain=docker.localhost

That’s it. The front end is now listening on port 80. Notice that we mounted the docker.sock file as a volume when launching the container. This gives Traefik API access to the Docker daemon so that it can find other containers.
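
As a sanity check, the web provider we enabled with the --web flag also exposes a small REST API on port 8080; a quick sketch, assuming Traefik 1.x running locally:

$ curl http://localhost:8080/api/providers   # lists the frontends and backends Traefik has discovered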

Alright, now it is time to add some containers for Traefik to find:

$ docker run -d --name nginx -l traefik.port=80 nginx

We added an Nginx container with a label of traefik.port=80. Traefik uses the Docker API exposed through the mounted Unix socket to find container labels, and it routes traffic to port 80, where Nginx is listening.

The Nginx container can be seen on the Traefik admin page, which listens on port 8080.

Notice how the rule is Host:nginx.docker.localhost. The nginx part comes from the container name we provided with the --name argument, and the docker.localhost part of the domain comes from the --docker.domain=docker.localhost argument we gave to Traefik at startup.

Running a curl to the Docker host with a Host header of nginx.docker.localhost returns the Nginx welcome page.

$ curl -H "Host: nginx.docker.localhost" 192.168.33.10
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
….

Traefik is routing requests with this Host header to our Nginx container. We didn’t tell Traefik to do that, but it did (and that’s alright).

We can use other labels attached to the Nginx container to control how Traefik routes requests to it. Let’s try a path based rule.

$ docker run --name nginx -l traefik.port=80 -l traefik.frontend.rule=PathPrefix:/nginx nginx

Now all requests with the path prefix of /nginx will go to our Nginx container:

$ curl 192.168.33.10/nginx/test
<html>
<head><title>404 Not Found</title></head>
….

We of course get a 404, since our Nginx doesn’t have a /test resource, but we can see in the Nginx logs that the requests are coming through.

172.17.0.2 - - [10/Jan/2017:22:48:37 +0000] "GET /nginx/test HTTP/1.1" 404 169 "-" "curl/7.49.1" "192.168.33.1"

That was a quick proof-of-concept to show how quickly we got Traefik running as a discovery-based routing service with Docker. By putting a Traefik container on every Docker host, we can dynamically set up routes to our containers running on those hosts. All we need to do now is identify our containers to the discovery service as the host IP address and the port that Traefik is listening on. The applications calling the services have to remember to include the path prefix in the request. Below is an architecture diagram showing Traefik set up on multiple Docker hosts.
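
To close the loop, each Docker host can register itself with Eureka using the host IP and Traefik’s port. A minimal sketch against Eureka’s v2 REST API (the hostnames, ports, and app name below are illustrative):

$ curl -X POST http://eureka-host:8761/eureka/v2/apps/NGINX \
  -H "Content-Type: application/json" \
  -d '{"instance": {"hostName": "192.168.33.10", "app": "NGINX",
       "ipAddr": "192.168.33.10", "vipAddress": "nginx", "status": "UP",
       "port": {"$": 80, "@enabled": "true"},
       "dataCenterInfo": {"@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo", "name": "MyOwn"}}}'
# callers then resolve NGINX through Eureka and include the /nginx path prefix in requests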


Darren Bathgate is a technical architect at Kenzan. Over the course of his 5+ years at Kenzan, Darren has worked extensively with Java, MySQL, PHP, Cassandra, Node.js, Oracle, Jenkins, Netflix OSS and Docker.

Closing the Tech Skills Gap

Tech Culture

As more and more business are going digital, the question many companies are asking themselves isn’t how they’ll make this transformation, but who will help them do it.

That question is becoming increasingly difficult to answer. While technology is evolving with lightning speed, the demand for highly-skilled employees who will support this work is far outpacing the supply.

Kenzan and General Assembly are bringing together Denver-area technology companies and educational resources to address the tech skills gap in Colorado. On January 19, at Kenzan’s office in downtown Denver, we’ll discuss actionable solutions to the challenges both job seekers and hiring companies are facing. Our panel of industry experts from Galvanize, Skillful, Istonish and GoSpotCheck will talk about:

  • The challenges tech companies face when looking for tech talent
  • Why you shouldn’t disregard a candidate who doesn’t have a degree in computer science
  • In-demand skills and where you can find employees who have them
  • How tech companies and educational organizations can work together to grow the talent pipeline
  • Supporting tech workers through continued learning and development programs

RSVP for Closing the Skills Gap:
Join the conversation on January 19th at Kenzan:
1743 Wazee St, Suite 200, at 6pm


Rona is a Director of Engineering for Kenzan where she leads technical teams in the production of web applications. She is active in the hiring of development resources and helps drive initiatives that encourage the growth of Kenzan’s employees.

Mythbusted: 4 Misconceptions of a Project Manager

Tech Culture, Tech Skills

From the cloud, to the Internet of Things, to big data, companies are embracing digital transformation to help them quickly react to marketplace shifts, consumer demands and new opportunities. The roadmap for digital transformation is different for all companies and often calls for customized software, which can be a significant investment of time, money, attention and resources.

Software development can sometimes be seen as a slow and expensive process, especially at the enterprise level. Managing these factors is a fundamental priority, and few people are better equipped to keep things under control than project managers.

Many organizations that are on a path towards digital transformation don’t fully recognize the value PMs add to software development teams. According to a 2016 report from the Project Management Institute titled “The High Cost of Low Performance”, less than two in five companies surveyed place a high priority on creating cultures that recognize the importance of project management as a driver of better project performance.

That’s not the case at Kenzan, where our project managers play a central role on each development team and are vital for success. We’re not the only ones that have seen the positive results of project managers. Organizations that invest in project management waste 13 times less money because strategic initiatives are completed more successfully.

But for those who might disagree, we’re here to dispel some myths about Project Managers:

Myth: Project managers are clueless.

Busted: Technical comprehension is as critical to the success of a project as leadership and strategic business management. While a PM doesn’t write code, they spend their days working with those that do. In order for a PM to adapt quickly to changing conditions, assign and re-prioritize tasks, communicate effectively and spot risks, they need an understanding of relevant tools and technologies that the team uses.

Myth: Project Managers just schedule meetings and manage calendars.

Busted: Project managers guide the structure, scope, quality and budget of a project while also representing the interests of the product owner. The real value of a project manager is as a leader, liaison and mentor. With a big picture point of view, the project manager balances and guides software development teams while defining requirements and goals to ensure the teams meet expectations — on time and on budget.

Myth: Project managers are just paper pushers.

Busted: Project managers define policies and procedures that enable success. Without the structure of process, a project can easily fall apart. A project manager knows the operational requirements (things like time sheets, budgets and resource allocation) and can navigate communication channels with clients so technical team members can focus on what they do best – developing software.

Myth: A Project manager’s top priority is execution.

Busted: Project managers support the team from start to finish. While the software delivery lifecycle (SDLC) is the guiding framework for development, it doesn’t stand alone in supporting and achieving business objectives. Project management methodologies work in tandem with the SDLC for the initiating, planning, monitoring and delivery of projects, which provides consistency and stability of process through every phase of the SDLC.

Now that we’ve dispelled some of these major misconceptions about project managers, it’s (hopefully) clear how important they really are. Think about all the projects you or your organization have worked on that went over budget, missed a deadline or derailed entirely. It may be too late to go back and right those wrongs, but it’s not too late to consider how future projects — and your company as a whole — will benefit from the guidance of project managers.

For companies that are looking to get or stay competitive, strong project management practices can play a crucial role in driving the business forward through digital transformation. Contact info@kenzan.com to learn more about our project management and other services.


As a certified Scrum Master in Agile methodologies, Jennifer Aczualdez leads Kenzan’s project management and business analysis team. She manages the scope, budget and timelines of projects and acts as a central point of contact for both internal and client teams. She is involved in the full software delivery cycle, from initiation and planning to monitoring, delivering and closing.

Million Song Library: A Case Study in Microservices

Tech Skills

Kenzan has created many microservice applications over the years, with many of those running thousands of instances. Our experience is that a lot of organizations want to take advantage of the increased scalability and other benefits that microservices offer, only they don’t know where to start. How do they set up data? How will microservices affect their deployments? What technologies should they use? The task can feel quite daunting! The reality is that it’s much simpler than it seems. To prove this point, Kenzan created an open source microservices project called Million Song Library (MSL).


Not familiar with the benefits of microservices yet? No problem! Check out our blog series on microservices for a deeper dive. In a nutshell, microservices are a set of small services that make up a full application stack. The services are typically broken down by functional area within the business. This brings several core benefits:

  • Targeted scalability where and when needed
  • Decoupling of the functional areas of an application
  • Facilitation of continuous delivery
  • Cleaner code management

This is just a high level overview of the benefits, so be sure to read Microservices for a Macro World if you’d like a more detailed discussion.


So what is Million Song Library? At a basic level, MSL is a microservices-based application that lets users navigate through large sets of music (albums, artists, and songs) while also tracking and rating their favorites. That said, the functionality of the application is actually secondary to the main goal: to show how easy it can be to create microservices and run them both locally and up in the cloud. As you’ll see, with the help of good, solid architectural patterns, it is simple to create and maintain a fully functioning microservices application.

Over the next couple of weeks, you can look forward to a series of blog posts covering the different aspects of MSL. At the end of the series, our hope is that you’ll have a good familiarity with microservices, the technologies used in MSL, and its core architectural patterns. You’ll also be able to run MSL locally as well as within your own AWS environment.

At this point you probably want to know more about the architecture of MSL and what makes it tick. So let’s get to it!

For MSL, it’s best to review the stack from the bottom up. The data layer leverages Cassandra NoSQL data stores, and there is a separate data collection client for each functional area. The services are also broken up into functional areas, each one related to managing a library with a million songs. These include things like catalog services, login services, and even ranking services. Each of these services is fully RESTful, and each (ideally) has a very small set of responsibilities. The task of managing access to these services belongs to a proxy layer (Zuul) that handles all the API traffic coming into the application as well as discovery of the correct microservice.
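
To make the flow concrete, here is a hypothetical request through the proxy layer; the port and route below are illustrative, not MSL’s actual API:

$ curl http://localhost:9000/catalog-edge/browse?pagesize=25
# Zuul matches the route prefix, asks the discovery service for a healthy
# instance of the catalog edge service, and forwards the request to it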

As it stands now, MSL is easily deployed into AWS, but it was intentionally built to be deployed into other environments. Want to drop the routing layer logic, the services layer, and the data tier into another cloud or data center environment? You can do that!

From a front-end perspective, MSL is composed of a few core technologies. Our goal was to use a technology stack that we find easy to work with and that is common in the marketplace:

  • AngularJS 1.4 – Front end framework
  • ES6 – JavaScript environment
  • Less – CSS preprocessing
  • Gulp – Build management
  • NPM – Front end package manager
  • Webpack – Bundler for modules and dependencies
  • Karma – Test runner for JavaScript
  • ESlint – Style guide linter tool
  • Material Design – Standard design toolkit

The back-end technology was architected with the same simplicity in mind. Given the robustness of the Netflix OSS stack, we decided to stick with Java and gain the benefits of a solid Netflix OSS ecosystem:

  • Java 8 – Server side language
  • Jersey – Web service layer that extends JAX-RS
  • JUnit – Server side unit testing
  • Datastax – Database driver (Cassandra)
  • Netflix OSS – Components for building microservices applications:
    • Eureka – Enables application discovery within the microservices environment
    • Hystrix – Handles circuit breaking within the application
    • Zuul – Routes API calls into the environment (proxy server)
    • Ribbon – Manages software load balancing (client library)
    • Archaius – Offers dynamic properties management
    • Karyon – Provides a base container for all microservices

At Kenzan, we believe that documentation is as important as the code we write. For that reason, the MSL project uses some critical tools to facilitate documentation:

  • Swagger – Framework for generating API documentation alongside code
  • KSS – Specification for generating CSS style guides
  • AsciiDoc – Markup language for generating general user documentation

Currently, you can deploy MSL in several ways, with additional methods on the horizon (a brief build sketch follows the list):

  • Maven – Local builds
  • Docker – Containerized deployments
  • RPM – Manual deployments into AWS
  • Spinnaker – Pipelines for continuous integration (CI) and continuous delivery (CD)
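
For a taste of the first two options, here is a minimal sketch assuming a standard Maven layout; the module and image names are illustrative, not MSL’s actual ones:

$ mvn clean package                       # build the service artifacts locally
$ docker build -t msl/catalog-service .   # containerize one service (name illustrative)
$ docker run -d -p 9003:9003 msl/catalog-service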

This all probably seems like a lot, but don’t worry! By the end of this blog series, you will be familiar with all these technologies, along with the recommended architectural patterns to use for the microservices. We are excited to release MSL into the open source community and watch it take flight. And, we’re glad you came along on the adventure!


Craig Martin is the Vice President of Engineering at Kenzan. He is based out of Denver and oversees all engineering activities within Kenzan. Many of his current responsibilities focus on architecting and leading projects to create highly scalable cloud-based microservice applications.

Breaking Up the Monolith

Tech Skills

In our first post of this series, we looked at how microservices help you build applications with an eye towards the future. And in our second post, we took a deep dive into how microservices work. But what if you currently have a monolithic application? How do you know if it’s time to move to microservices? And if so, how do you get there?

As we mentioned before, knowing why to build a microservice is only half the battle. Knowing when (or when not) to build one can be a trickier question. There’s no one-size-fits-all answer. That said, there are a few questions you can ask to help decide if you’re ready for microservices. We’ll explore them in this final post of our series.

Reading the Signs

Is your organization or team experiencing exceptional growth in both employee base and technical stack?
As a codebase grows to a certain size, more people are needed to maintain it. But anyone who has worked on a single codebase with ten other engineers knows the struggle of dealing with merge conflicts during code reviews. If a team is losing valuable time trying to resolve conflicts, it might be time for microservices. Microservices help distribute development teams more efficiently, as complex backend systems can be built faster by dividing workloads into multiple small teams, with each team responsible for a given set of microservices.

Is your application down often?
Every time you deploy an application, it breaks. Or the backend system is seemingly down all the time for maintenance. If these scenarios sound all too familiar, it might be time to shift from a monolithic architecture to microservices.

Are you having difficulty scaling hardware to optimize application performance?
Larger and more complex applications require additional hardware to perform well. Some parts of an application might put more demand on the hardware than others, and the whole application can face degraded performance due to a single poorly-performing feature. Microservices allow individual units of business logic to be assigned their own dedicated hardware, and they can be scaled independently of other services. A slower-performing process can be isolated and capped to a fixed amount of CPU, memory, disk, and network bandwidth, which means it can’t steal resources from other features.

Choosing the Moment

Once you’ve decided to build a microservice, the next questions are when to build it and when to switch over. That’s a hard question to answer for any organization. The monolith architecture works for small applications and small engineering teams. But when does the monolith stop working?

It turns out growth is often the death of the monolith. Organizations undergoing exceptional growth, in both the employee base and the technology stack, can sustain that growth by switching to microservices. If the engineering team is losing valuable time resolving merge conflicts, then it might be time for microservices. If the growing backend system is down all the time for maintenance, then microservices might be the answer. In short, the time to build a microservice is when growth is expected.

Hardware provides a great example of this, as it can only be scaled horizontally so much. Maybe those high-CPU and high-memory servers are too expensive and too underutilized during non-peak hours. The cloud is becoming more appealing with the ability to provision hardware as needed, and smaller servers with lightweight workloads become easier to bring up and tear down in response to changing demands. Microservices have lower hardware requirements, and they can start up quickly to meet peak demands.

Making the Move

Monoliths present a number of challenges that you’ll need to tackle in the move to microservices. A monolith has a tendency toward tight coupling of components, as well as stateful behavior. The application was never intended to live as separate and isolated components working in concert to make a system. The monolith may also be rooted to the infrastructure it lives on, depending on system resources such as a local filesystem and network. What’s more, you need to keep everything up and running during the transition. So how do you move to microservices? Let’s break it down.

Step 1: Determine Your Domains

Your first task is to look at all the features of the application and organize them into logical business units, or domains. For example, your application may have major features related to login/logout, user data, and session management. These features can be logically organized into a single business domain called Authentication. Repeat this step for all of the other domains in your application.

Step 2: Prepare the Monolith

Next, get your application ready for the big break up. To do this, decouple components within the application along the lines of the business domains you came up with. This will result in a number of edge and middle modules within the application, each dedicated to a particular domain, like our Authentication example. You’ll also need to move stateful in-memory stores into shared datastores. Finally, put a router in front of the monolith to smooth the rollout of microservices.

Step 3: Work From the Bottom Up

As you begin breaking out microservices from the monolith, it’s always best to start at the bottom and work your way up. First, create a separate database to store data for the domains you are moving to microservices. Next, break out the data access modules that access this data into middle microservices. Finally, break out edge modules that consume this data into edge microservices. When everything’s ready to go, use the router to toggle redirection to the new edge services. Keep repeating this same process for each domain until all of your modules are broken out and the monolith is no more.

In the diagram below, the Login, Users, and Sessions modules were decomposed into modules within the monolith, and then broken out as separate microservices.


One challenge you’ll encounter during the transition is the need to support both the microservices and the monolith side by side. That also means duplicating work, as you’ll often have to make changes or fixes in both places. The good news is that organizing your monolith similarly to your microservices makes it easier to copy-and-paste code between them.

Conclusion

If your application and organization are both set for growth, microservices can help you meet the challenge of building modular, scalable, highly-available solutions. When you’re ready, you need to organize your application’s features into domain-specific modules, then break them out one by one.

Just remember that transitioning from a monolith won’t happen in a single overnight deployment. You need to strategize as you cherry-pick domains out of the monolith. The key is to complete the transition to microservices and not leave your application in limbo. If you’d like to learn more about how to make the move to microservices, or if you need some help, feel free to ask us. That’s what we’re here for.

Kenzan.IO: A Guide to a Website From Scratch

Tech Skills

In the quickly evolving world of front-end development, it can be overwhelming to choose from the multitude of frameworks. It is, by extension, downright baffling to build a whole project from scratch – which is exactly what we did for Kenzan.io.

In this article we’ll walk through the technologies we chose, why we chose them, and what we thought.

Kenzan.io Scope
Before diving in, let’s take a look at the scope of the project. We needed to build a fast, sleek, and responsive website that could integrate with other Kenzan sites. The website also needed to be easily updated by our marketing department. Kenzan.io was simple in terms of business logic, with very little state maintained, and most of the complexity held in the views.

Front-End Architecture
This leads into our first design decision: React for our front-end view library. React gave us the scaffolding we needed to design component-based views within a single-page application without weighing down the project. Most importantly, React’s one-way data binding paired with the virtual DOM made our image- and animation-rich site run with impressive speed on all browsers. We styled our views using Sass to take advantage of variables, mixins, and other advanced CSS features. The Sass was compiled down to CSS in our build process, and vendor prefixes were added using Autoprefixer. These few pieces led to fast and aesthetic pages.

Since React only handles our views, we needed a way to handle the model and controller portions of our front-end. We decided to use React Router for our SPA routing, jQuery for advanced DOM interaction and HTTP calls, and ES6 for all other business logic in the site. React Router has quickly gained steam as the widely accepted routing package for React applications, and we found it easy to learn and incorporate. We combined React Router, jQuery, and vanilla JS to build a scroll-based navigation for the website, called scroll jacking. This feature is often handled with CSS and HTML sections, but we decided to incorporate it into a single-page application architecture by pairing scrolling with view routing. We also used jQuery to handle our AJAX calls because the library was already present in the project. For the sake of learning, a dive into Fetch or Thunk would have been interesting, but ultimately would have added unnecessary weight to our application. Finally, we chose ES6 over ES5 for all the new features, including JavaScript modules, arrow functions, and classes. With the help of the very opinionated Airbnb style guide, we found ES6 syntax to be more concise when compared to ES5. We compiled our ES6 using Babel and handled module loading with webpack streams in our Gulp build process. Both libraries had fairly simple configuration and required no work once the boilerplate was assembled.

Back-End Architecture
With our front-end architected, we began looking at ways to integrate Kenzan.io with our other Kenzan sites and make it friendly for marketing updates. Since all Kenzan sites are WordPress sites and our marketing team is very familiar with the WordPress content management system, we decided to pursue the bleeding-edge WordPress API. With the API, all data was entered, stored, and retrieved from the WordPress CMS, and we had access to data from all of Kenzan’s pages. Most importantly, we found the WordPress API extremely easy to use. The API had good documentation, was straightforward to integrate into a single-page application, and updating content was quick and easy.

Testing
We saved the best for last with unit testing. This does not follow test-driven development, but given our time constraints, we wanted to get all content on the site before testing so we could evaluate the time remaining before making a testing plan. When we found ourselves with a couple weeks left, we decided to branch out again and try a new test runner called Ava. The allure of Ava is the ability to run unit tests concurrently, each test with an isolated scope in a separate Node thread. This means no interference between tests and faster test suite execution. For pure JS, we tested with Ava and Sinon, which we used for spies and stubs. For React components, we paired Ava with Enzyme, an extension of ReactTestUtils, and BrowserEnv, for a virtual browser in Node. This trifecta allowed for quick and seamless testing of our React components, including rendering the DOM, testing lifecycle methods, updating the state, and re-rendering the component. All the testing libraries had very little boilerplate code to get started and were easy to work with when writing the tests.

Finally, we wanted to add a last layer of confidence with a suite of E2E tests. Prior to this project, most of our front-end development experience had been in Angular, with E2E testing handled by Protractor. Unfortunately, Protractor is not friendly with React, so this was another chance to learn something new. We found an E2E library called Nightwatch.js that integrated with React and ran off Node, making configuration and execution not too different from Protractor. The creation of these tests was handled by our QA team, and is a topic for another blog post, but their inclusion helped ensure no bugs made it out to production.

The Final Product
After six weeks and many scrums, we met our goal of delivering a responsive, performant website with a WordPress back-end and full unit and E2E test suites. However, our most important accomplishment was diving deep into new technologies and expanding our knowledge here at Kenzan.

To check out the website, click here: https://kenzan.io


The author of this post is Marie Schmidt, a junior front-end developer at Kenzan. She’s featured in our employee spotlight.


We’re looking for some talented developers, architects and engineers to help us build more cool stuff like the Million Song Library. Click here to see open positions.

 

Making Microservices Work

Tech Skills

In our first post in this series, we showed how microservices can help you architect applications with the future in mind.

Knowing what a microservice is, and what purpose it serves, is a big part of building a successful architecture. But to truly fit your business needs and meet your goals, understanding the variations of microservices is key.

While there is no precise way to define the architectural style of microservices, by looking at specific characteristics, we can better understand what makes an application a microservice. In this post, we’ll go into more detail about how microservices work, and how to make them work for you.

How Big is Micro?

It’s easy to build what you think is a microservice, only to find that what you have actually created is a distributed monolith: individual applications become so coupled together that we start referring to them as a single noun. In other cases, we end up seeing microservices get so large that they themselves become monolithic, resulting in a monolith army.

The size of a microservice isn’t determined by the number of lines of code or the amount of functionality, but by the amount of volatility a microservice has—the amount of change that is expected to occur over the life of a microservice. If changing one microservice also requires changing another microservice, it means those microservices have been incorrectly decoupled from one another and should instead be combined. On the other hand, if a microservice is composed of features that are fundamentally dissimilar and volatile, it means those features have been incorrectly coupled together, and they should instead be split up.

In a nutshell, features of a microservice should be grouped by similarity and the likeliness of change. If changes to a microservice keep causing backwards incompatibility or constantly require refactoring, it probably needs to be decomposed into multiple microservices.

Life on the Edge (and in the Middle)

Another part of defining a microservice is understanding what the types of microservices are and the rules they should follow. These can vary depending on your architecture. One architectural solution we’ve employed with success defines two types of microservices: data-driven services in the middle and business-driven services on the edge.

Middle Tier: Driving Data

Data-driven services are called middle tier services; each is assigned to, and solely responsible for, a single data source. In this architecture, the only way to access any given data source is through the corresponding middle tier service. Because each service is tied to a single data source, one data source going down will not take the other services with it in the event of an outage. The only responsibility of a middle tier service is to make data available to other microservices. This could also mean applying caching or fallback scenarios to keep the data flowing.

Edge Tier: Driving Business

We refer to business-driven services as edge tier services, which drive the business logic of an application. Edge tier services typically exhibit the most amount of volatility-based decomposition, as business logic for each edge tier service can vary greatly depending on the systems it supports.

While choosing when to build a middle tier service is easy (if you have a data source, you need a microservice to go with it), choosing edge tier services requires more attention to the amount of volatility in the business logic. Functionality is driven by edge services, and in this type of architecture the number of edge tier services will always be greater than the number of middle tier services. Edge tier services connect with one or more middle tier services, but typically won’t connect with another edge tier service. In this particular solution, proper decomposition of microservices based on volatility shouldn’t require edge services to depend on other edge services, or middle services to depend on more than one data source.

While we’ve put this type of architecture to good use, other solutions are certainly possible—more on that in a bit.

Finding Each Other in the Cloud

Deploying microservices to the cloud enables clusters of microservices to be scaled up as load increases, and scaled down after load diminishes. This means that IP addressing of machines running microservices is constantly changing. So the challenge becomes: how do we route fixed traffic to a moving target?

In the past, a common pattern was to reference hosts using domain names rather than IP addresses. In this case, when a host IP changes, the DNS record is updated with the new IP address. Tools such as Consul can be employed to propagate DNS changes. Alternatively, redundantly-deployed services can register with a load balancing appliance, and the group of services is then referred to using the address of the load balancer. However, in a microservices architecture, this can rapidly become costly due to the large number of load balancers required.
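
Consul, for example, answers service lookups over DNS. A quick sketch, assuming a local Consul agent with its default DNS port (the service name is illustrative):

$ dig @127.0.0.1 -p 8600 service-a.service.consul
# returns the current IPs of healthy registered instances of service-a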

To better solve this issue, we must get creative and route traffic in a more dynamic way. Service discovery is a pattern that lets us identify a group of microservices by name rather than by IP address. With service discovery, a centralized registry is used to store the locations of all services in the surrounding environment.

The most common pattern we implement is to have microservices push their own information to the registry on startup and say, “Hey, my name is service-a, and here is my IP address and port”. Another service can ask the registry for all IP addresses for services named “service-a”. The requesting service can then strategize which one of the services named “service-a” to talk to. This type of push-based discovery pattern can be implemented with Eureka (part of the NetflixOSS stack), and it requires discovery-enabled apps to use a client library to talk to the discovery service.
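
The lookup side of that conversation is a plain REST call in Eureka. A minimal sketch, assuming a Eureka server on its default port (the host and service name are illustrative):

$ curl -H "Accept: application/json" http://eureka-host:8761/eureka/v2/apps/SERVICE-A
# returns every registered instance of service-a with its IP address, port, and status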

API Gateways performing reverse proxy operations can also take advantage of this service discovery feature, and can route all traffic for a specific path to a group of microservices of the same name. These gateways are utilized as a router of all inbound HTTP requests, which consolidates Internet-facing traffic under a single domain name. The router can also apply rules or filters to enforce security for edge services, for example, to require authentication. This gives you a fine degree of control over how different types of traffic are routed.

Exploring Alternate Designs

At Kenzan, we are always thinking of ways we can improve on our architectural patterns. Lately we have been exploring alternatives to the edge/middle microservice design pattern described in this post. The details of this alternate design will hopefully make an appearance in a future blog post, but in the meantime here’s a sneak peek.

We like to refer to this design as the one-edge/many-middle pattern. It might even become a three-tiered design as it evolves, with edge services, middle services, and data services. In this scenario, data services provide the data abstraction layer, middle services drive business logic, and edge services drive domain logic. The edge services have similarities to an API gateway, but they also make domain-specific decisions on which middle services to use. The result is a better decoupling of microservices and a clearer design with fewer edge services.

As we are learning, a particular microservice architecture may work for one organization but not for another. We will continue to explore different microservice designs, and evaluate the benefits and challenges of each.

Conclusion

To make microservices work for your business, it’s important to compose them correctly and place the right functionality in the right tier according to your chosen architecture. There are many ways to architect and develop microservices, and we are constantly talking about ways to improve the design.

When starting with a microservice architecture, plan a consistent strategy and define guidelines for the entire organization to use. Share best practices and lessons learned with the other teams, and continue to evolve microservice patterns. Consider the scaling capabilities, and build in capabilities like service discovery.

Going with microservices from the start can be a great approach. But what if you have an existing application you want to migrate to microservices? In our next post, we’ll talk about how to break up a monolith into manageable chunks.