Cloud Journey — Part 6 | Foundations of Cloud Architecture

Introduction

There are a few things you need to get right to boost your cloud journey and any other initiatives you have planned to deliver a better experience. These are the topics that, in my humble opinion, form the foundations of a cloud architecture for a firm going through a digital transformation:

  1. Containerization, for example with Red Hat OpenShift
  2. An API gateway like MuleSoft, especially if you are planning to use Salesforce solutions as your CRM in the future
  3. DevSecOps, MicroServices, MicroFrontends (Mesh Apps) and events with a broker like Kafka
  4. Your Agile Ways of Working
  5. DataOps and your data ways of working

Containerization

Many organizations are now deploying applications packaged in containers at scale. Containers can provide significant agility benefits, including faster software delivery through streamlined life cycle management, and they can help refactor legacy applications toward modern architectures. Successfully adopting containers, however, requires new skill sets, processes and operational models, along with cultural and organizational changes.

Containers are a virtualization technology that enable multiple applications to share an OS kernel so that it appears as if each application had its own copy of the OS. Unlike VMs, which simulate an entire computer system in software, including hardware, containers virtualize just the operating system, which allows applications to maintain their own copies of specific OS libraries. The current interest in container technologies is being driven by agility requirements — agile development, rapid provisioning and real-time horizontal scaling. While containers were originally used primarily for new applications based on microservices architecture and cloud computing, they are increasingly being considered for improving the life cycle management of traditional monolithic applications as well. Containers provide two fundamental benefits:

  • Consistent application packaging: Containerized applications can be moved quickly and easily across heterogeneous infrastructure, and between private and public cloud environments from multiple providers. Containers therefore provide a smooth path for applications throughout their entire life cycle, allowing them to run in an identical environment at every stage, from a developer’s laptop to different testing, staging and production environments.
  • Streamlined configuration management: The contents of containers and the manifests for orchestrating them on Kubernetes are defined in configuration files (e.g., Dockerfiles, docker-compose files and the YAML manifests used to configure Kubernetes orchestration). These artifacts can be treated the same way as application source code, i.e., integrated into CI/CD pipelines and transactionally updated with version control systems, with changes selectively applied using canary or blue-green testing methods (a canary-promotion sketch follows this list).
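To make the canary idea concrete, here is a minimal sketch using the official Kubernetes Python client. The deployment names, namespace, container name and image tag are all hypothetical, and a real rollout would gate promotion on proper SLO checks rather than a fixed sleep:

```python
"""Minimal canary-promotion sketch with the official Kubernetes Python client.
All names below (namespace, deployments, image) are hypothetical."""
import time
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config()
apps = client.AppsV1Api()

NAMESPACE = "shop"                                    # assumption
CANARY, STABLE = "checkout-canary", "checkout"        # assumption
NEW_IMAGE = "registry.example.com/checkout:1.8.0"     # assumption

def set_image(deployment: str, image: str) -> None:
    """Declaratively patch the container image ('app' is an assumed name)."""
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "app", "image": image}]}}}}
    apps.patch_namespaced_deployment(deployment, NAMESPACE, patch)

def healthy(deployment: str) -> bool:
    """Consider the rollout healthy when all replicas are available."""
    dep = apps.read_namespaced_deployment(deployment, NAMESPACE)
    return dep.status.available_replicas == dep.spec.replicas

# 1. Roll the new image out to the small canary deployment first.
set_image(CANARY, NEW_IMAGE)
time.sleep(60)                       # crude bake time; use real SLO checks

# 2. Promote to the stable deployment only if the canary looks healthy.
if healthy(CANARY):
    set_image(STABLE, NEW_IMAGE)
else:
    print("canary unhealthy; stable deployment left untouched")
```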

When using a public cloud CaaS platform to deploy Kubernetes, the cloud service provider is responsible for provisioning and managing the container orchestrator control plane. Depending on which CaaS approach is used, customers may also be responsible for managing the data plane, i.e., the worker nodes that host the containers. In the “serverless” approach, by contrast, administrators do not have to configure worker nodes at all: the public cloud Kubernetes service automatically provisions the resources for running the pods scheduled by the Kubernetes orchestrator.

API Gateway, DevSecOps, MicroServices, MicroFrontends (Mesh Apps) and Events

Containers change the technology stack, but they also fundamentally change the way that I&O, applications, and security teams work together. A move to containers will require IT organizations to simultaneously make a cultural transition and develop new workflows. As part of this adaptation, the organizations will have to embrace:

  • Cross-team collaboration and DevOps culture and practices
  • Continuous integration and delivery/deployment (CI/CD) of applications and infrastructure
  • Extensive automation throughout the application delivery pipeline, including testing, security, and infrastructure provisioning and maintenance

A CI pipeline is a necessary first step in building a DevSecOps culture and enabling CD practices to be put in place. The main purpose of CI is to integrate changes more frequently in order to quickly provide feedback to the team. Therefore, CI’s prime directive is that a software system must always compile, and that unit tests must always execute successfully. CI overcomes the challenges of code integration while minimizing risk throughout all phases of the software development life cycle.
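As a minimal illustration of that prime directive, the sketch below wires the two checks into a single fail-fast gate. The src/ and tests/ paths are assumptions; a real pipeline would run something like this on every push:

```python
"""Minimal CI gate sketch: fail fast unless the code base 'compiles'
and the unit tests pass. Paths and commands are illustrative."""
import subprocess
import sys

STEPS = [
    # 1. "Compile": byte-compile every module to catch syntax errors early.
    ["python", "-m", "compileall", "-q", "src/"],
    # 2. Unit tests must always execute successfully.
    ["python", "-m", "pytest", "tests/", "--quiet"],
]

for step in STEPS:
    result = subprocess.run(step)
    if result.returncode != 0:
        # Fail the pipeline immediately so the team gets fast feedback.
        sys.exit(result.returncode)

print("CI gate passed: integrate the change")
```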

Continuous customer experiences across multiple channels and changing business needs mean application technical professionals must use an agile architecture for enterprise applications. Applying mesh app and service architecture can deliver a seamless user experience and enable integration agility. Mesh app (micro-frontend) and service architecture (MASA) incorporates best practices and patterns from API-centric, microservice architecture and multichannel user experience design to build “mesh” applications from multiple apps, APIs and services.

To improve customer experience continuity and application integration agility, technical professionals responsible for delivering enterprise applications should:

  • Apply MASA principles to enable greater agility when adaptability to change, modernization, faster delivery cadence and unique UX are key business drivers.
  • Use MASA for enterprise applications when there is a clear business need for improved agility and when you have the required skills. Teams must evaluate the cost of added complexity against the benefit of agility.
  • Adopt agile and DevSecOps practices and develop skills in API design, API-based integrations and multiexperience application development as prerequisites to your MASA implementation.
  • Apply MASA iteratively to enterprise applications and avoid the temptation to fix direct integrations all at once; start with small, well-defined experiences that result in customer or business value.

MASA’s modular approach provides the following benefits when implementing enterprise applications:

  • Agility: A decoupled component architecture enables application teams to adapt to changing business priorities and replace aging components. An example would be a legacy service replaced by a new service built with modern technologies and frameworks without affecting any other service or app. The API for the new service must be consistent with the legacy service.
  • Cohesive UX: MASA enables cohesive user experiences by providing touch-points across multiple devices and channels, integrated with business functionality designed to span enterprise application silos. For example, customers may initiate a shopping experience by clicking a link in an email or SMS message to view a sales promotion provided by marketing services. This takes them to a mobile webpage where they view the promotion, add items to their cart and complete the purchase through commerce services. Later, they use the web application at home to view and track the order, or even to open a follow-up request via customer support services. The MASA approach uses modular services that support both the customer and agent side of this example, which allows for experience consistency that reaches beyond multiexperience into the total-experience realm.
  • Integration: Using a mediated consumer-centric integration approach provides the ability to connect front-end technologies with enterprise application services, functionality and data using APIs. This allows individual mesh apps to consume capabilities from a variety of systems and services to better support the UX requirements.
  • Innovation: Individual fit-for-purpose apps allow you to select optimal technologies (web, mobile, augmented reality/virtual reality, voice, chat, wearable, Internet of Things) for each app experience. This enables application teams to optimize UI capabilities, support new user/customer channels, and create optimal experiences in the user’s preferred channels.
  • Faster delivery: Decoupling the apps and services from each other using APIs enables independent release cadences. You have the option to release new user-facing capabilities faster, get feedback, adapt and innovate for some apps, while using a slower cadence for apps that rarely change.
  • Scalability: Scalability lets you efficiently expand or collapse your solution’s capacity for individual features or capabilities in a fine-grained and dynamic manner. Independent services allow you to scale specific capabilities independently of others, whereas large back-end systems require you to launch entire instances of your application just to support increased demand in a narrow scope of functionality. For example, an independent reporting service can dynamically increase its instances to support heavy loads at specific times, such as when generating end-of-month or end-of-year reports.
  • Extend legacy systems: MASA’s modular service architecture allows you to create new independent app experiences and custom integrations using legacy system capabilities, as well as add capabilities from other services to enhance legacy applications. For example, you could encapsulate a legacy reporting system with an API to feed the reporting data to a modern data visualization app.
  • Modernization: MASA’s distributed nature of abstracted discrete components allows you to modernize individual parts of the application architecture at a pace your organization can tolerate.
  • Consistency: Using an API gateway to mediate APIs simplifies security and access policy enforcement, traffic management, monitoring and logging by providing a single place to configure policies and monitor the APIs.
  • Reduced technical debt: MASA supports the use of open standards technologies and the ability to minimize proprietary implementations that result in technical debt. Additionally, a modular architecture allows teams to iteratively reduce technical debt over time rather than a “big bang” approach, which, realistically, rarely happens.

The technique of consuming APIs from a variety of sources to deliver a new UX, workflow or other feature is called composite architecture and is core to the concepts of MASA.
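As a sketch of that composite idea, the snippet below shows a thin backend-for-frontend that composes two service APIs into one response shaped for a single app experience. The service URLs and payload fields are hypothetical, and it assumes Flask and requests are available:

```python
"""Composite (mesh app) pattern sketch: a thin backend-for-frontend
aggregates several service APIs behind one UX-shaped endpoint.
URLs and field names are hypothetical."""
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Assumption: these services sit behind an API gateway.
ORDERS_API = "https://gateway.example.com/orders/v1"
CUSTOMER_API = "https://gateway.example.com/customers/v1"

@app.get("/api/account-overview/<customer_id>")
def account_overview(customer_id: str):
    """Compose one response from two independent services."""
    customer = requests.get(f"{CUSTOMER_API}/{customer_id}", timeout=2).json()
    orders = requests.get(f"{ORDERS_API}?customer={customer_id}", timeout=2).json()
    return jsonify({
        "name": customer.get("name"),
        "openOrders": [o for o in orders if o.get("status") == "open"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```

Because the composition lives in one small service, each underlying API can evolve or be replaced without touching the front end, which is exactly the decoupling MASA is after.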

Business priorities are critical determining factors when selecting a MASA approach for delivering apps and integrations using enterprise applications. Identify and clearly define the business priorities before choosing MASA to implement integrations or user experiences enabled by the enterprise application. Essentially, priorities that require flexibility and customizability for apps that provide commercial differentiation are a better fit for MASA. Conversely, an organizational focus on lowering IT operational complexity better aligns with traditional commercial-off-the-shelf (COTS), non-MASA approaches.

The specific skills required for each of these will vary depending on the capabilities of the enterprise applications, additional technologies you choose and the requirements of the mesh application. For example, web/mobile development frameworks, digital experience platforms (DXPs) and multiexperience development platforms (MXDPs) all allow you to implement custom experience UIs. However, the skills required and approaches are very different.

The design of mesh applications requires an understanding of the overall customer journey through the fit-for-purpose apps. The customer journey may be initiated from different points, consist of a number of touch-points that happen at different times, in different channels and on different devices, and have multiple branches.

These app experiences may include apps and channels outside the enterprise app UI. For example, a customer journey might be initiated in a promotional email, where customers click on a web link to view the promotion in the browser. Clicking on the promotional item then links them to a mobile app to complete the purchase. Later, while viewing the item in the mobile app, they click on an “operation manual” link that opens the mobile web browser to view the item’s manual in PDF form. The journey traverses email, web apps and mobile apps — all linked together.

There is a real risk of creating individual app experiences that are so different that the customer journey through them is disjointed, even jarring. In such cases, although an individual experience may be exceptional, the overall customer experience may be terrible. This is especially true if you are trying to provide a unified brand experience. Avoid disjointed app experiences by providing a consistent look and feel, as well as contextual navigation.

A benefit of MASA is that you don’t have to just pick a single approach for building custom app experiences. Using multiple approaches provides additional flexibility in technologies and design, but multiple options require multiple skill sets and implementation models, as well as a wider scope of support and maintenance. Use the optimal UI approach when you have the available skills and are able to manage the additional complexity. However, if your skill sets are limited, use the approach that best fits the overall UX requirements of the enterprise application.

In addition to API-driven request/response communication, services in MASA communicate state changes and other actions using event-driven models. An event-driven model is extremely lightweight to operate, but implementing complex integration capabilities with it requires high effort. That said, event-driven integrations result in even less coupling than integrations that use request/response APIs. An event-driven communication model provides additional agility and scalability and is a natural fit for some of the distributed communication between the enterprise application and other services in a MASA. But it is not the optimal approach for synchronous communication or for simply getting resources from a service. The following event implementations enable a two-way flow of events between enterprise applications and MASA services:

  • Event mediation: Use this approach when the enterprise application supports event messaging but uses its own internal proprietary event messaging system. The approach involves applying event mediation that connects the internal event messaging system in the enterprise application with the MASA services event broker. Event mediation decouples the constraints and implementation details of the enterprise application from the rest of the MASA services, providing better agility and fewer dependencies. Event mediation may include using a connector in the enterprise application that links it to the event broker, or a plug-in in the event broker. Or it may be implemented as a custom service that subscribes to the enterprise application’s event messaging system, converts the event and publishes it to the MASA services event broker (see the sketch after this list). The implementation will depend on the enterprise application capabilities and the event broker technology used. Just one example of many is using Confluent’s Kafka Connect Salesforce connector to publish Salesforce events to Kafka.
  • Custom service detection: Use this approach when the enterprise application does not support direct event publishing or event mediation, or when the enterprise application does not support capturing and publishing the events that you need to communicate to other MASA services. Publishing involves creating a custom service that detects when specific events occur in the enterprise application (without impacting application performance) and then publishes those events to the event broker. Subscribing involves using a custom service to subscribe to an event channel and processing events by applying the appropriate enterprise application business logic. Custom services have the flexibility of using the optimal approach to detect the event. The exact approach will depend on the enterprise application capabilities and event-driven design requirement. Using the enterprise application APIs, applying an extract, transform, load (ETL) procedure to an enterprise application log, or directly connecting to the enterprise application, are common methods of custom event implementation.
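Here is a minimal sketch of the event-mediation pattern from the list above: a custom service polls the enterprise application’s proprietary event feed, converts each event into a neutral schema and republishes it to the MASA event broker (Kafka). The feed URL, topic name and payload shape are assumptions; it uses the kafka-python package:

```python
"""Event-mediation sketch: bridge a proprietary enterprise-app event feed
into Kafka. Feed URL, topic and payload fields are hypothetical."""
import json
import time
import requests
from kafka import KafkaProducer

FEED_URL = "https://erp.example.com/api/events"   # hypothetical feed
producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

last_seen = 0
while True:
    # Poll the proprietary event source for anything new.
    events = requests.get(FEED_URL, params={"after": last_seen}, timeout=5).json()
    for event in events:
        # Convert the internal format into a neutral, documented schema.
        producer.send("erp.order.events", {
            "type": event["eventType"],
            "orderId": event["payload"]["id"],
            "occurredAt": event["timestamp"],
        })
        last_seen = max(last_seen, event["sequence"])
    producer.flush()
    time.sleep(5)
```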

Creating modular applications using multiple independent components requires additional coordination and cooperation, as well as possibly governance, across multiple roles, teams and potentially different groups. Breaking down the traditional enterprise application silos (better agile way of working) requires coordination and cooperation, and a sound technical plan for building the MASA across those silos.

MASA involves the architecture, design, integration and deployment of multiple independent components, which makes the application more operationally complex than a monolithic enterprise application implementation in a single technology. This makes MASA a poor fit for organizations looking for an operationally simple, turnkey approach. Applying MASA to enterprise applications requires additional effort to ensure consistent security, as well as identity and access management, across components and experiences.

Your Agile Ways of Working

I have written two very in-depth pieces on how to design your agile ways of working, inspired by the Spotify model and by my own failures and success stories:

  1. Scaling Engineering Teams & Rise of Platform Engineering Squads. How to use the Spotify agile model to scale up engineering teams while maximizing efficiency, reusability, accountability and time to market. That is the rise of “Platform Engineering Squads”, which ultimately gets you to the nirvana: innovation pipeline management.
  2. Scaling Agility & Product Mindset. I am in pursuit of scalable and practical methods for scaling agile that really work, away from all the corporate/consulting bullsh*t and the theoretical world; something that works in practice. I have seen the pain of setting up this structure for over 300 engineers and product folks. It is not easy, so please share your thoughts.

DataOps and your data ways of working

The goal of XOps (data, ML, model and platform ops for AI) is to achieve efficiencies and economies of scale using DevSecOps best practices and ensure reliability, reusability and repeatability while reducing duplication of technology and enabling automation. Data operationalization (DataOps) is a collaborative data management practice focused on improving the communication, integration and automation of data pipelines within the organization. The goal of DataOps is to deliver value faster by creating predictable delivery and change management of data, data models and related artifacts. DataOps uses technology to automate the design, deployment, management and delivery of data with the appropriate levels of governance and metadata to improve the use and value of data in a dynamic environment.

Data pipelines usually suffer from misinterpretation; this typically doesn’t mean you’re losing the actual data, just that the context is lost. As data moves through the pipeline from source to target, what that data means and represents may change at each step, each individual initiative or each business use case. The data values may be handled differently, leaving the next stage to figure out what the individual elements or the collection of data means, and this recurs throughout the pipeline. The challenge is that each stage has its own understanding of the data based on its use, whether analytics, data science or machine learning. The result is brittle pipelines that are incredibly slow to react to change.
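One way to keep that context from getting lost is to ship an explicit, versioned contract with every hand-off, so downstream stages validate meaning instead of guessing it. A minimal sketch, with illustrative field names:

```python
"""Sketch: make the meaning of data explicit between pipeline stages by
attaching a small, versioned contract to every hand-off. Field names
and rules are illustrative."""
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """What the data means, not just what it contains."""
    name: str
    version: str
    fields: dict          # column -> human-readable meaning and unit

ORDERS_V2 = Contract(
    name="orders",
    version="2.0",
    fields={
        "amount": "gross order value in EUR, tax included",
        "ts": "order creation time, UTC, ISO 8601",
    },
)

def validate(batch: list[dict], contract: Contract) -> list[dict]:
    """Reject records that silently drop contracted fields."""
    missing = [f for f in contract.fields if any(f not in r for r in batch)]
    if missing:
        raise ValueError(f"{contract.name} v{contract.version}: missing {missing}")
    return batch

validate([{"amount": 42.0, "ts": "2023-01-31T10:00:00Z"}], ORDERS_V2)
```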

Like DevSecOps, DataOps represents a change in culture that focuses on improving collaboration and accelerating service delivery by adopting lean and iterative practices, where appropriate, to scale data pipeline operations from development to delivery. A typical data pipeline consists of data extraction, integration, transformation and analysis stages. Hence, to implement DataOps, technical professionals need to focus primarily on:

  • People and culture: Ensure collaboration across a cross-functional team that works toward shared outcomes in the form of data accessibility in a governed fashion. The goal is to bring the development and operations sides of the house together so they can work seamlessly:
      • Development: Data engineers, data architects, data scientists, citizen data scientists and developers
      • Operations: Data infrastructure engineers and platform engineers
  • Governance: Bring the appropriate balance of control, accessibility, accountability and traceability of data usage behavior through access control, metadata management and data versioning.
  • Tools and products: As part of operationalization, you are looking for tools to support the following capabilities. These capabilities could be supported by a single platform vendor or may require integration across multiple products to cover development- and operations-specific needs:
      • ETL store to capture integration and transformation scripts
      • Data quality tool to identify, understand and correct flaws within the data across the development and operations pipeline
      • Metadata management and data versioning tool to capture and maintain the metadata to support lineage and traceability
      • Tool to support automated testing of the ETL scripts once generated and transitioned across environments (see the sketch after this list)
      • An orchestration tool to support CI/CD as scripts and data are moved across environments and business-domain-specific data pipelines
      • ETL process monitoring to ensure the orchestration and jobs have been executed successfully
  • Platform (a data management and operations platform): To support the ability to build quick prototypes, build and deliver data products and services across a diverse ecosystem, and automate data pipeline development using the rich metadata.
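As a small illustration of the automated-testing capability above, here is a pytest-style data quality gate over a toy transformation; the rules and column names are assumptions, and the test would run whenever the script is promoted across environments:

```python
"""Sketch of an automated data quality gate in a DataOps pipeline.
Run with pytest; table and column names are illustrative."""
import pandas as pd

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """The transformation step under test: clean and de-duplicate."""
    out = raw.dropna(subset=["customer_id"]).drop_duplicates("order_id").copy()
    out["amount"] = out["amount"].round(2)
    return out

def test_transform_enforces_quality_rules():
    raw = pd.DataFrame({
        "order_id": [1, 1, 2],
        "customer_id": ["a", "a", None],
        "amount": [10.129, 10.129, 7.0],
    })
    result = transform(raw)
    assert result["order_id"].is_unique                    # no duplicate orders
    assert result["customer_id"].notna().all()             # no orphan records
    assert abs(result["amount"].iloc[0] - 10.13) < 1e-9    # amounts normalized
```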

MLOps

Machine learning operationalization (MLOps) is the process and framework for operationalizing the ML pipeline. It aims to streamline and standardize the deployment, management and execution of ML models, supporting the release, activation, monitoring, performance tracking, management, reuse, update, maintenance and governance of ML artifacts. The MLOps process supports the CI/CD framework, and it derives its core principles from the best practices of DevSecOps.

The model management system forms the fundamental basis for supporting MLOps. A typical ML pipeline consists of acquire, organize, analyze and deliver stages, where data needs to be acquired and presented to data scientists for building the models that are embedded (delivered) in enterprise applications or analytic reports/dashboards. MLOps embraces DevSecOps’ continuous integration and continuous delivery best practices, but replaces the continuous testing phase with continuous training and evaluation. Hence, technical professionals need to build the following to implement an MLOps framework:

  • Model management: This allows technical professionals to manage different versions of ML artifacts, the manifestation and delivery of the models (APIs, containers), and model monitoring.
  • Model monitoring: Models decay, which reduces their performance in production as new real-world data arrives during the inference cycle (a drift-detection sketch follows this list). This also creates the need for “explainability” as to how and why a model makes certain predictions, an important consideration for audit and regulatory compliance.
  • CI/CD tools: The continuous training of new models requires automating the redeployment of new iterations of models and all of the technical effort that goes into building the delivery pipeline. The continuous development and enhancements to the model — driven by business requirements — requires orchestration, retraining and redeployment of new models in inferencing mode. MLOps embraces the idea that the model will constantly and inevitably change, which means organizations need to implement MLOps strategies to support continuous training, integration and deployment within production applications.
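A minimal sketch of the monitoring idea from the list above: track how far live input data drifts from the training baseline and flag the model for retraining once drift crosses a threshold. It uses only numpy and scipy; the feature, distributions and threshold are assumptions:

```python
"""Drift-detection sketch for model monitoring. The baseline, live data
and threshold are synthetic assumptions for illustration."""
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 5000)     # baseline from training time
live_feature = rng.normal(0.4, 1.0, 1000)      # shifted production data

# Kolmogorov-Smirnov test: has the live distribution drifted?
stat, p_value = ks_2samp(train_feature, live_feature)

DRIFT_P_THRESHOLD = 0.01   # assumption: tune to your tolerance
if p_value < DRIFT_P_THRESHOLD:
    print(f"drift detected (KS={stat:.3f}); schedule retraining")
else:
    print("inputs still match the training distribution")
```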

Let’s look at ModelOps, which in principle is similar to MLOps but with an extension on being able to support not just ML models but all AI models.

ModelOps

ModelOps brings governance and life cycle management of all AI (graphs, linguistic, rule-based systems) and decision models. Core capabilities include management of model repository, champion-challenger testing, model rollout/rollback and CI/CD integration. In addition, ModelOps provides business domain experts autonomy to interpret the outcomes and validate KPIs of AI models in production. It also provides the ability to promote or demote the models for inferencing without a full dependency on data scientists or ML engineers.
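To make champion-challenger testing concrete, here is a minimal sketch: a small share of traffic is routed to the challenger, a business KPI is compared, and a domain expert (not only a data scientist) reviews the report before promotion or rollback. The models, traffic share and KPI are illustrative stand-ins:

```python
"""Champion-challenger sketch: shadow a share of traffic to the
challenger model and compare a KPI. All numbers are illustrative."""
import random

def champion(x: float) -> float:      # current production model
    return 0.8 * x

def challenger(x: float) -> float:    # candidate model
    return 0.75 * x + 0.1

CHALLENGER_SHARE = 0.1                # assumption: 10% shadow traffic
kpi = {"champion": [], "challenger": []}

for _ in range(10_000):
    x = random.random()
    truth = 0.77 * x + 0.08           # pretend ground truth arrives later
    if random.random() < CHALLENGER_SHARE:
        kpi["challenger"].append(abs(challenger(x) - truth))
    else:
        kpi["champion"].append(abs(champion(x) - truth))

report = {name: sum(errs) / len(errs) for name, errs in kpi.items()}
print(report)   # a business owner reviews this before promote/rollback
```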

With ModelOps in place, technical professionals can:

  • Help avoid rework by keeping a deployment scenario in mind when creating models.
  • Retain data lineage and track-back information for governance and audit compliance.
  • Help focus on monitoring when deploying models so that business users can monitor and work closely with data scientists to retrain models as they degrade.

And most importantly, technical professionals can bring all of these Ops practices together under Platform Ops for AI, which is covered next.

Platform Ops for AI

The purpose of AI orchestration platforms is to provide an integrated continuous integration/continuous delivery (CI/CD) pipeline across all of the different stages of building AI-based systems — supporting reproducibility, reusability, rollback/rollout, lineage and a secure environment. These orchestration platforms are the backbone of some of the leading innovative tech organizations, which have also stepped forth and open-sourced their in-house AI orchestration platforms to benefit the broader technology community. This will lead to a mature marketplace of AI platforms and accelerate the delivery and adoption of AI-based solutions.

Platform Ops for AI can help enterprises:

  • Architect AI-augmented systems that are resilient and accept that disruptive change will be the norm of the future.
  • Scale AI initiatives by modularizing and orchestrating the underlying platform for business outcomes.
  • Provide autonomy to business units by enabling discoverable and reusable AI artifacts (ETL, feature, model, stores) across multiple use cases (a minimal registry sketch follows this list).
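Here is a minimal sketch of that “discoverable and reusable artifacts” idea: a registry that versions artifacts (feature definitions, models, ETL scripts) so teams can find and reuse them. It is in-memory only; a real platform would back this with durable storage and access control:

```python
"""In-memory artifact registry sketch. Artifact kinds, names and URIs
are hypothetical examples."""
from dataclasses import dataclass, field

@dataclass
class Registry:
    _items: dict = field(default_factory=dict)

    def publish(self, kind: str, name: str, version: str, uri: str) -> None:
        """Register a versioned artifact under (kind, name)."""
        self._items.setdefault((kind, name), {})[version] = uri

    def discover(self, kind: str) -> list:
        """List artifact names and versions of a given kind."""
        return [(n, sorted(v)) for (k, n), v in self._items.items() if k == kind]

registry = Registry()
registry.publish("feature", "customer_lifetime_value", "1.2", "s3://feat/clv/1.2")
registry.publish("model", "churn_xgb", "3.0", "s3://models/churn/3.0")
print(registry.discover("feature"))
```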

Finally, we will address AIOps, which at times causes a bit of ambiguity in terms of what it stands for.

AIOps

AIOps platforms help transform the way we operate our IT infrastructure. The power of machine learning and analytics can turn the overwhelming mountains of metrics, logs and traces into proactive operations that deliver unprecedented levels of availability and efficiency. Enterprises adopting AIOps platforms use them to enhance and, occasionally, augment classical application performance monitoring (APM) and network performance monitoring and diagnostics (NPMD) tools. AIOps might not be accomplished using just one platform; it is usually represented as a range of disciplines, each focused on a specific aspect of IT operations. It considers data generated from across various assets within an enterprise technology landscape: apps, business operations, network data and infrastructure. It is important to establish AIOps projects around clear, discrete problems with achievable solutions, rather than around existing data from across all domains.
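As a toy illustration of turning raw telemetry into proactive signals, the sketch below applies a rolling z-score to a latency stream and alerts on outliers. The window size and threshold are assumptions; real AIOps platforms apply far richer models across metrics, logs and traces:

```python
"""Rolling z-score anomaly sketch over a metric stream. Window size,
threshold and sample values are illustrative assumptions."""
from collections import deque
from statistics import mean, stdev

WINDOW, THRESHOLD = 60, 3.0          # one-minute window, 3-sigma alert
history = deque(maxlen=WINDOW)

def observe(latency_ms: float) -> None:
    """Feed one metric sample; alert when it deviates from the baseline."""
    if len(history) >= 10 and stdev(history) > 0:
        z = (latency_ms - mean(history)) / stdev(history)
        if abs(z) > THRESHOLD:
            print(f"anomaly: latency {latency_ms:.0f} ms (z={z:.1f})")
    history.append(latency_ms)

for sample in [100, 102, 99, 101, 98, 100, 103, 97, 101, 99, 100, 450]:
    observe(sample)
```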

MicroFrontends/MicroServices involve the architecture, design, integration and deployment of multiple independent components, which makes the application more operationally complex than a monolithic enterprise application implemented in a single technology. This makes MicroFrontends/MicroServices a poor fit for organizations looking for an operationally simple, turnkey approach. However, this is the only approach that delivers multiexperience solutions for customers, as well as a solid foundation for the cloud journey.
