Cloud Journey — Part 1

Chris Shayan
9 min read · Oct 24, 2021

Whether you are headed for a public cloud, private cloud or a multi-cloud solution, chances are that you are either ready to begin your cloud journey or already well on your way. In my series of “Cloud Journey” articles, I would like to share my experiences of cloud adoption.

My experience of using the cloud spans different areas: rolling out SaaS ERP/POS in the retail industry with hundreds of stores and thousands of employees, SaaS core banking for a fintech business with over 5 million customers, and more development-specific work such as building eCommerce platforms with online-to-offline capabilities (across industries like jewellery, mattresses, pharmacy and generic eCommerce for low-income customer segments), a loyalty app (similar to CVS ExtraCare), HRtech, edutech, job boards, property listings, online and offline lending, business intelligence, loan origination, retail banking and corporate banking. Some of the companies where I have been involved in and driven cloud (private, public and multi-cloud) adoption are f88, pharmacity, btj, iCare, 4P’s, yola, vietnamworks, vuanem, jobnet, shweproperty and, most recently, Techcombank. I have worked with these companies as a CTO or in a CTO-in-residence model. Along the way there have been plenty of lessons learned, and in this series of “Cloud Journey” articles I would like to share them, hoping they can help some.

Introduction

Relying on someone else, such as a cloud service provider (CSP), to store and process data requires trust and a willingness to give up control (if you are not ready for that, better to stop the cloud journey now). There are different reasons why people are sometimes willing to do so. Often, someone else has more expertise to do something, so people are willing to let them do it. Sometimes, someone else can do a task cheaper or faster, so others are willing to hand the task off. At other times, people may just do something because apparently everybody else is doing it. :)

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud are known as hyperscale CSPs, with firms like Alibaba and Tencent playing a similar role in China.

What is the Cloud?

At its most basic level, the cloud is simply someone else’s more powerful computer that does work for others. There is no one single cloud, so while it might be accurate to say that data crosses the internet, it is not correct to say that such data is stored in an ephemeral form, hovering somewhere in the sky. In fact, the cloud stores and transports data across a global infrastructure of data centers and networks. A more accurate description of the cloud is that cloud services are an abstraction of a parallel system of computers, data centers, cables, infrastructure, and networks that provides the power to run modern enterprises’ and organizations’ digital operations and to store their data. Building the necessary infrastructure for cloud services on a truly global scale has been one of the most significant architectural achievements of the past decade, and it mostly exists behind the scenes, out of common knowledge.

Amazon, Google, and Microsoft particularly focus on providing the basic elements of IT infrastructure (server space and computing power) that are highly scalable, custom-configurable, and capable of being rapidly deployed. However, cloud computing encompasses a wide range of service types. These various services can be grouped into three principal types of cloud services (a short sketch follows the list below). In practice, the major CSPs offer different services spanning all three of these categories.

  • Infrastructure as a service (IaaS): CSPs provide basic access to storage, networking, servers, or other computing resources.
  • Platform as a service (PaaS): CSPs provide an environment (a platform) for customers to build and deliver applications.
  • Software as a service (SaaS): CSPs build, run, and host applications delivered over the internet, which customers pay to access.
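
To make the split concrete, here is a minimal sketch of what the IaaS level looks like in code, using Python and boto3 against AWS EC2. The AMI ID, region and tag are hypothetical placeholders, and it assumes AWS credentials are already configured. At the IaaS level you ask the CSP for a raw virtual server and everything above it (OS patching, runtime, application) is still yours to manage; with PaaS you would only push application code, and with SaaS you would only consume the finished product.

```python
# Minimal IaaS sketch: provision a raw virtual server on AWS EC2 with boto3.
# Assumes AWS credentials are configured; AMI ID, region and tag are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID, not a recommendation
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "cloud-journey-demo"}],
    }],
)

# The CSP hands back a bare instance; everything above the OS is still your job.
print(response["Instances"][0]["InstanceId"])
```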

How do CSPs do it?

While modern CSPs rely on highly complex systems for allocating, managing, and deploying resources among millions of customers, three key technologies are essential for understanding how cloud services work at scale: virtualization, hypervisors, and containerization.

  1. Virtualization allows for abstraction between physical hardware and individual computers. Essentially, virtualization allows for multiple computers, referred to as virtual machines, to exist on the same physical server. Beyond computational tasks and storage, entire networks can be built through virtualization.
  2. Hypervisors are programs that manage virtual machines, servers, the connection between those virtual machines and servers, and the allocation of resources to the virtual machines. Thus, it becomes possible to have a whole bank of physical servers, each running a hypervisor, and then create virtual machines across this bank of servers.
  3. Containerization is a refinement of virtualization that works by running discrete containers within the same operating system, essentially moving the abstraction provided by virtualization up one level. Containerization caught on around 2014 with the introduction of a new tool called Docker, which made it much more convenient and efficient to implement containerization for business uses. While a virtual machine includes an entirely virtual operating system, a container is an isolated environment within one single operating system. In terms of layers, while a hypervisor lies between the hardware and the virtual machines (each with its own operating system inside), a container sits on top of a container engine, which in turn runs on a single operating system on top of the hardware (see the short sketch after this list).
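
To make the virtual machine versus container distinction tangible, here is a minimal sketch using the Docker SDK for Python. It assumes Docker is installed and running locally, and the image name is just an example. The container shares the host kernel instead of booting a guest operating system, which is why it starts in seconds.

```python
# Minimal containerization sketch using the Docker SDK for Python.
# Assumes the Docker daemon is running locally and the image is publicly available.
import docker

client = docker.from_env()

output = client.containers.run(
    image="python:3.12-slim",  # example image; any public image works
    command=["python", "-c", "print('hello from an isolated container')"],
    remove=True,               # clean up the container once it exits
)

# No guest OS was booted: only processes, filesystem and network are isolated.
print(output.decode())
```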

In the following figure (source: Carnegie Endowment), the infrastructure for the cloud is visualized according to a layers model. The table presents a hierarchy of layers from the physical data centers at the bottom to the entirely virtual application layer at the top, allowing the various parts of the cloud to be simplified. The descriptions for each layer present the key technologies operating at that level, along with several examples.

How do you use CSPs, a.k.a. Cloud Deployment?

CSPs also offer their services in three main deployment models: the public cloud, a private cloud, or a hybrid cloud.

  1. In the public cloud, customers share the same infrastructure available to be rented out to the public. A CSP manages the infrastructure and allows prospective customers to purchase resources. The customer uses the same CSP-allocated infrastructure as other customers.
  2. A private cloud arrangement is designed for a single customer, essentially housing resources on premises or off premises but still isolated from other customers. An organization may set up its own private cloud, or it may contract with a CSP to do this.
  3. Some organizations choose a hybrid cloud, in which they combine a public cloud service from a CSP with either a private cloud setup or a more traditional data center so they can communicate and share data and applications. Some organizations opt for this arrangement because they have sensitive data that they consider too risky to store in a public-cloud-only environment, but they still want to take advantage of the public cloud’s computing power to run applications.

and Data Governance

The field of data governance encompasses a large set of issues involving the control, access, protection, and regulation of personal and commercial data by both technology companies and governments. As huge amounts of both individual and business data have started being stored and processed in the cloud, this transition has raised questions about the access that governments can exercise to data in the cloud, particularly for CSPs that store data overseas.

The nature of cloud services is such that data belonging to one country’s citizens (PII, personally identifiable information) or government may be stored in different countries and jurisdictions.

Data governance is one of the most challenging hurdles in public cloud adoption, which is why many organizations still prefer “hybrid cloud adoption”.

Public Cloud

  • If you use it right (I highly recommend tools like https://www.nops.io/), you can achieve significant cost savings
  • Good support for disaster recovery and high availability; many IaaS offerings provide pay-as-you-go disaster recovery that is only chargeable when it is used

Private Cloud

  • Cost Reduction by avoiding capacity vacancy
  • Total control, especially when it comes to Data Governance
  • Better performance

Hybrid Cloud

  • Scalable & Cost-Effective
  • Architectural Flexibility
  • Increased Compliance & Security over only Public Cloud
  • Enabled for Disaster Recovery, Business Continuity and High Availability

now, Architecture

During cloud adoption, you will have various options and tools at your disposal (sometimes overwhelmingly so). You must design a smart integration architecture (I prefer MuleSoft and RESTful APIs), with applications hosted on your legacy systems, private cloud, public cloud, or even newly purchased SaaS tools. Gartner has a great article on this topic. These are the four essential components in any multi-cloud architecture (a small integration sketch follows the list):

  • Technology silos to host each particular application and its data
  • Integration levels across technology silos
  • Mechanisms to interconnect the different silos
  • Management requirements to access, monitor, measure and control the silos
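
As an illustration of the integration and interconnection components, here is a hedged sketch of a call between two silos exposed as RESTful services. The URLs, payload fields and bearer-token auth are hypothetical, and in practice I would put this logic into an integration platform such as MuleSoft rather than ad hoc scripts.

```python
# Hypothetical sketch of an integration flow between two technology silos.
# URLs, payloads and auth are placeholders, not real endpoints.
import requests

PRIVATE_CLOUD_API = "https://erp.internal.example.com/api/orders"  # private-cloud silo
PUBLIC_CLOUD_API = "https://loyalty.example.com/api/points"        # public-cloud / SaaS silo

def sync_order_to_loyalty(order_id: str, api_token: str) -> dict:
    """Read an order from the private silo and push loyalty points to the public silo."""
    headers = {"Authorization": f"Bearer {api_token}"}

    # 1. Pull the order from the private-cloud system of record.
    order = requests.get(f"{PRIVATE_CLOUD_API}/{order_id}", headers=headers, timeout=10)
    order.raise_for_status()

    # 2. Push a derived event to the public-cloud loyalty service.
    points = int(order.json()["total_amount"]) // 1000
    result = requests.post(
        PUBLIC_CLOUD_API,
        json={"order_id": order_id, "points": points},
        headers=headers,
        timeout=10,
    )
    result.raise_for_status()
    return result.json()
```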

Network connectivity is also worth noting. It is quite important, and I have sometimes seen it not thought through properly, so I like to treat it separately. You might have different needs for connectivity across your assets based on your demands for security, performance, bandwidth elasticity and so on. Regardless, you need to select your “interconnection mechanism” carefully. Basic options like VPN and MPLS are still available, but they were not created for cloud connectivity. You should think about options like:

  • wan-to-cloud
  • software-defined-wan
  • virtual remote access (vRAS)
  • Value-added connectivity services that can offer Bandwidth on Demand (BoD), Virtual Network Functions (VNF) and Virtual WAN Optimization Services (vWOS)

Using Pace-Layered Architecture as your Cloud Strategy

Gartner’s Pace-Layered Application Strategy is a methodology for categorizing, selecting, managing and governing applications to support business change, differentiation and innovation. If you would like to know more about it, please refer to here and to the following figure for a quick refresher.

It is very common, and even more practical, to have a different cloud strategy for your legacy systems and the other layers of your architecture. Driving digital innovation that delivers business results and enables the business to experiment and scale in a jiffy is a key requirement for any CTO/CIO. Many of these initiatives will require a different DevSecOps setup, shorter development times, an exploratory software development style, flexible governance and higher speed. Often, these applications use whatever assets are available on the public or private cloud to deliver the business value (hence note the network connectivity). For instance, if you want to build a feature that lets end users scan an invoice on their mobile, extract some values out of the image, and store the image on a safe network, this is a great use case for hybrid: use your existing PII microservices on the private cloud and utilize services like S3, AWS Rekognition and others.
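
Here is a hedged sketch of that invoice-scanning scenario in Python with boto3. The bucket name and the private-cloud endpoint are hypothetical placeholders; the point is that image analysis and archival run on public-cloud services, while the record linking the data to a customer stays with the PII microservice on the private cloud.

```python
# Hedged hybrid-cloud sketch: public-cloud OCR and storage, private-cloud PII service.
# Bucket name and private endpoint are placeholders; assumes AWS credentials are configured.
import boto3
import requests

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")

def process_invoice(image_bytes: bytes, customer_id: str) -> None:
    # 1. Extract text from the invoice image with a managed public-cloud service.
    detection = rekognition.detect_text(Image={"Bytes": image_bytes})
    lines = [d["DetectedText"] for d in detection["TextDetections"] if d["Type"] == "LINE"]

    # 2. Archive the raw image in public-cloud object storage.
    s3.put_object(
        Bucket="invoice-archive-example",          # placeholder bucket
        Key=f"{customer_id}/invoice.jpg",
        Body=image_bytes,
    )

    # 3. Send the extracted values, linked to the customer, to the PII
    #    microservice that stays on the private cloud.
    requests.post(
        "https://pii.internal.example.com/invoices",  # placeholder private endpoint
        json={"customer_id": customer_id, "extracted_lines": lines},
        timeout=10,
    ).raise_for_status()
```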

6 Application Migration Strategies: “The 6 R’s”

Originally Gartner designed the 5 R’s, but AWS extended it to the 6 R’s, which I respect and have found very practical over the years.

For each system in your SoI, SoD and SoR (systems of innovation, differentiation and record), you can design a path and roadmap based on “The 6 R’s”. Please do not confuse “Cloud First” with every system having to go to the cloud in whatever form of IaaS, PaaS or SaaS.

Use “The 6 R’s” and, more importantly, understand your organization’s vision, business objectives and priorities; not everything needs to be in the cloud (a minimal portfolio sketch follows the list below).

  1. Rehosting — Otherwise known as “lift-and-shift.”
  2. Replatforming — I sometimes call this “lift-tinker-and-shift.”
  3. Repurchasing — Moving to a different product.
  4. Refactoring / Re-architecting — Re-imagining how the application is architected and developed, typically using cloud-native features.
  5. Retire — Get rid of.
  6. Retain — Usually this means “revisit” or do nothing (for now).
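
As a starting point for the roadmap mentioned above, here is a minimal sketch of how one might catalog a portfolio against the 6 R’s before drawing a migration plan. The applications and pace-layer assignments are made-up examples, not recommendations.

```python
# Minimal sketch: cataloging an application portfolio against the 6 R's.
# Applications, layers and chosen strategies below are illustrative only.
from enum import Enum

class Strategy(Enum):
    REHOST = "rehost"            # lift-and-shift
    REPLATFORM = "replatform"    # lift-tinker-and-shift
    REPURCHASE = "repurchase"    # move to a different product (often SaaS)
    REFACTOR = "refactor"        # re-architect with cloud-native features
    RETIRE = "retire"            # get rid of it
    RETAIN = "retain"            # revisit later / do nothing for now

portfolio = [
    # (application, pace layer, chosen strategy)
    ("core-banking-ledger", "system of record", Strategy.RETAIN),
    ("loan-origination", "system of differentiation", Strategy.REPLATFORM),
    ("customer-mobile-app", "system of innovation", Strategy.REFACTOR),
    ("legacy-intranet-portal", "system of record", Strategy.RETIRE),
]

for app, layer, strategy in portfolio:
    print(f"{app:<25} {layer:<28} -> {strategy.value}")
```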

Originally published at https://www.linkedin.com.
