Qualicom Innovations Inc.


Cloud Implementation

To improve time to market for new products and deliver a premium customer experience for its wireless service users, Qualicom’s client—one of Canada’s largest telecom companies—determined that it needed a more agile, flexible, and scalable platform on which to build its applications. A key aspect of this entailed moving to the cloud.

While this is an enterprise-wide strategy, Qualicom worked with one particular business unit to help develop an architectural framework and implement an application supporting its mobile device trade-in and upgrade program. The program relies on valuation data from a third-party logistics company whose APIs do not support geographic redundancy, so a key goal of the project was to improve fault tolerance and reduce potential service disruptions.

What we did

In our initial engagement, we proposed using the Google Cloud Platform (GCP) in a hybrid approach that would make use of native cloud tools to build new applications that could be deployed quickly in the cloud or on-premises. We worked closely with the client—with support from Google engineers—to design a cloud-native application architecture using a domain-driven OpenAPI design and to develop a proof of concept.
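To give a sense of the domain-driven OpenAPI approach, the sketch below shows a minimal definition for a trade-in catalog endpoint. The path follows the TMF 620 Product Catalog convention, but the title, version, and response details are illustrative, not the client’s actual specification:

```yaml
openapi: 3.0.3
info:
  title: Trade-In Product Catalog (illustrative)
  version: "1.0"
paths:
  /productCatalogManagement/v4/productOffering:
    get:
      summary: List trade-in product offerings
      responses:
        "200":
          description: A list of product offerings
```

Defining the API contract first, in domain terms, lets each microservice be built and tested against the specification independently.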

To address the fault-tolerance requirement, we built new cloud-native microservices that provide seamless failover support for the trade-in product catalog, deployed on Google Kubernetes Engine (GKE). The software was decomposed into smaller deployment units so that individual components could be upgraded or scaled in a production environment with minimal disruption.
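The failover behaviour can be sketched in plain Java: try the primary (third-party) source first, and fall back to the last known-good copy of the catalog when the call fails. This is an illustrative simplification, not the client’s implementation; the class and method names are hypothetical.

```java
import java.util.function.Supplier;

// Illustrative sketch (not the production code): serve the trade-in
// catalog from the primary third-party source when it is healthy,
// and fall back to the last known-good copy when the call fails.
class CatalogFailover {

    private final Supplier<String> primarySource; // e.g. wraps the third-party valuation API
    private String cachedCatalog = "{}";          // last known-good response (hypothetical default)

    CatalogFailover(Supplier<String> primarySource) {
        this.primarySource = primarySource;
    }

    String fetchCatalog() {
        try {
            String fresh = primarySource.get();
            cachedCatalog = fresh;                // refresh the fallback copy
            return fresh;
        } catch (RuntimeException apiDown) {
            return cachedCatalog;                 // seamless failover to the cached catalog
        }
    }
}
```

In production, the cached copy would typically live in a shared store rather than in memory, but the pattern is the same: callers never see the third-party outage.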

We used the following tools, technologies, and processes in the project:

  • Domain-Driven Design to implement TMF 620 Product Catalog specification.
  • TMF 630 REST API Design Guidelines for specification and best practices.
  • Spring Boot and Java 8 for building scalable microservices.
  • Google Cloud SQL (PostgreSQL 11) for cloud storage with full-text search support.
  • Cloud-native application deployment in Google Kubernetes Engine (GKE).
  • Continuous Integration (CI in DevOps) using Google Cloud Build.
  • Continuous Delivery (CD in DevOps) using Spinnaker.
  • Karate test automation framework for unit and integration testing in the CI/CD pipeline.
  • Cloud-native provisioning using Terraform, following Infrastructure as Code (IaC) best practices in GitHub.
  • API monitoring using Google Cloud Monitoring.
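As a rough illustration of the Infrastructure-as-Code approach, a Terraform definition for a GKE cluster might look like the following. The project, region, machine type, and names here are placeholders, not the client’s actual configuration:

```
# Illustrative Terraform sketch: provisioning a GKE cluster as code.
resource "google_container_cluster" "tradein" {
  name     = "tradein-catalog"            # placeholder name
  location = "northamerica-northeast1"    # placeholder region

  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary" {
  name       = "primary-pool"
  cluster    = google_container_cluster.tradein.name
  location   = google_container_cluster.tradein.location
  node_count = 2

  node_config {
    machine_type = "e2-standard-4"        # placeholder machine type
  }
}
```

Keeping such definitions in GitHub means every environment change is reviewed, versioned, and reproducible.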

How it helped

  • The client can quickly and easily manage cloud computing resources such as processing capacity and data storage through GCP’s self-service interface. Previously, it could have taken weeks to procure, install, and configure new application servers.
  • Costs were reduced significantly because no equipment needs to be purchased up front in a cloud environment. The client pays for resources as used or reserved, easily scaling up or down according to current usage.
  • The low-level security for these resources (physical security of the premises and hardware, integrity of the physical network, data encryption on disk, and so on) is now handled by Google, freeing up client resources that were previously allocated to such activities.
  • The use of separate geographical zones with application failover results in higher availability.
  • The CI/CD pipeline that automates the build and deployment process reduces deployment time to a matter of seconds, increasing efficiency and productivity for the client’s IT teams, developers, and administrators, and reducing application time to market.
  • The new environment and its ability to quickly scale will provide a platform for faster and more resilient systems integration with other business partners in the future.