Cloud Journey — Part 6 | Foundations of Cloud Architecture

Another piece of art by Gizem Vural

Introduction

There are a few things you need to get right to boost your cloud journey and any other initiatives you have planned to deliver a better experience. These are the topics that, in my humble opinion, form the foundations of a cloud architecture for a firm going through a digital transformation.

  1. Containerization, with a platform like Red Hat OpenShift
  2. An API gateway like MuleSoft, especially if you are planning to use Salesforce solutions as your CRM in the future
  3. DevSecOps, MicroServices, MicroFrontends (Mesh Apps) and event streaming with a platform like Kafka
  4. Your Agile Ways of Working
  5. DataOps and your data ways of working

Containerization

Many organizations are now deploying applications packaged in containers at scale. Containers provide many agility benefits, including faster software delivery through streamlined life cycle management, and they help refactor legacy applications toward modern architectures. Successfully adopting containers, however, requires new skill sets, processes and operational models, along with cultural and organizational changes.

  • Consistent application packaging: Containerized applications can be moved quickly and easily across heterogeneous infrastructure, and between private and public cloud environments from multiple providers. Containers therefore provide a smooth path for applications throughout their entire life cycle, allowing them to run in an identical environment at every stage, from a developer’s laptop to different testing, staging and production environments.
  • Streamlined configuration management: The contents of containers and the manifests for orchestrating them on Kubernetes are defined in configuration files (e.g., Dockerfiles, docker-compose files and Kubernetes YAML manifests). These artifacts can be treated the same way as application source code: integrated into CI/CD pipelines and transactionally updated with version control systems, with changes selectively applied using canary or blue-green testing methods, as sketched below.
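
To make the "configuration as code" idea concrete, here is a minimal sketch of a CI gate written in Python that lints a Kubernetes manifest before it is promoted. The file path, required keys and image-tag policy are illustrative assumptions, not a standard; it assumes the PyYAML package is installed.

```python
# Minimal sketch: treating a Kubernetes manifest like source code by
# linting it in CI. The path and policy rules below are hypothetical.
import sys
import yaml

REQUIRED_TOP_LEVEL = ("apiVersion", "kind", "metadata", "spec")

def lint_manifest(path: str) -> list[str]:
    """Return a list of problems found in a single-document manifest."""
    errors = []
    with open(path) as f:
        doc = yaml.safe_load(f)
    for key in REQUIRED_TOP_LEVEL:
        if key not in doc:
            errors.append(f"{path}: missing required key '{key}'")
    # Example policy check: every Deployment must pin an image tag, so
    # canary and blue-green rollouts (and rollbacks) are reproducible.
    if doc.get("kind") == "Deployment":
        for c in doc["spec"]["template"]["spec"]["containers"]:
            image = c.get("image", "")
            if ":" not in image or image.endswith(":latest"):
                errors.append(f"{path}: container '{c.get('name')}' must pin a non-latest image tag")
    return errors

if __name__ == "__main__":
    problems = lint_manifest(sys.argv[1])
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the pipeline stage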

API Gateway, DevSecOps, MicroServices, MicroFrontends (Mesh Apps) and Events

Containers change the technology stack, but they also fundamentally change the way that I&O (infrastructure and operations), applications and security teams work together. A move to containers requires IT organizations to simultaneously make a cultural transition and develop new workflows. As part of this adaptation, organizations will have to embrace:

  • Cross-team collaboration and DevOps culture and practices
  • Continuous integration and delivery/deployment (CI/CD) of applications and infrastructure
  • Extensive automation throughout the application delivery pipeline, including testing, security, and infrastructure provisioning and maintenance
  • Apply MASA (mesh app and service architecture) principles to enable greater agility when adaptability to change, modernization, a faster delivery cadence and a unique UX are key business drivers.
  • Use MASA for enterprise applications when there is a clear business need for improved agility and when you have the required skills. Teams must evaluate the cost of added complexity against the benefit of agility.
  • Adopt agile and DevSecOps practices and develop skills in API design, API-based integrations and multi-experience application development as prerequisites to your MASA implementation.
  • Apply MASA iteratively to enterprise applications, avoid the temptation to fix direct integrations all at once — start with small, well-defined experiences that result in customer or business value.
  • Agility: A decoupled component architecture enables application teams to adapt to changing business priorities and replace aging components. For example, a legacy service can be replaced by a new service built with modern technologies and frameworks without affecting any other service or app, as long as the new service's API stays consistent with the legacy one.
  • Cohesive UX: MASA enables cohesive user experiences by providing touchpoints across multiple devices and channels, integrated with business functionality designed to span enterprise application silos. For example, customers may initiate a shopping experience by clicking a link in an email or SMS message to view a sales promotion provided by marketing services. This takes them to a mobile webpage where they view the promotion, add items to their cart and purchase them, all provided by commerce services. Later, they use the web application at home to view and track the order, or even to open a follow-up request via customer support services. The MASA approach uses modular services that support both the customer and agent side of this example, which allows for experience consistency that reaches beyond multiexperience into the total experience realm.
  • Integration: Using a mediated consumer-centric integration approach provides the ability to connect front-end technologies with enterprise application services, functionality and data using APIs. This allows individual mesh apps to consume capabilities from a variety of systems and services to better support the UX requirements.
  • Innovation: Individual fit-for-purpose apps allow you to select optimal technologies (web, mobile, augmented reality/virtual reality, voice, chat, wearable, Internet of Things) for each app experience. This enables application teams to optimize UI capabilities, support new user/customer channels, and create optimal experiences in the user’s preferred channels.
  • Faster delivery: Decoupling the apps and services from each other using APIs enables independent release cadences. You have the option to release new user-facing capabilities faster, get feedback, adapt and innovate for some apps, while using a slower cadence for apps that rarely change.
  • Scalability: Scalability lets you efficiently expand or collapse your solution's capacity for individual features or capabilities in a fine-grained and dynamic manner. Independent services allow you to scale specific capabilities independently of others, whereas large back-end systems require you to launch entire instances of your application just to support increased demand in a narrow scope of functionality. For example, an independent reporting service can dynamically increase its instances to support the heavy loads of end-of-month or end-of-year report generation.
  • Extend legacy systems: MASA’s modular service architecture allows you to create new independent app experiences and custom integrations using legacy system capabilities, as well as add capabilities from other services to enhance legacy applications. For example, you could encapsulate a legacy reporting system with an API to feed the reporting data to a modern data visualization app.
  • Modernization: MASA’s distributed nature of abstracted discrete components allows you to modernize individual parts of the application architecture at a pace your organization can tolerate.
  • Consistency: Using an API gateway to mediate APIs simplifies security and access policy enforcement, traffic management, monitoring and logging by providing a single place to configure policies and monitor the APIs.
  • Reduced technical debt: MASA supports the use of open standards technologies and the ability to minimize proprietary implementations that result in technical debt. Additionally, a modular architecture allows teams to iteratively reduce technical debt over time rather than a “big bang” approach, which, realistically, rarely happens.
  • Event mediation: Use this approach when the enterprise application supports event messaging but uses its own internal proprietary event messaging system. The approach involves applying event mediation that connects the internal event messaging system in the enterprise application with the MASA services event broker. Event mediation decouples the constraints and implementation details of the enterprise application from the rest of the MASA services, providing better agility and fewer dependencies. Event mediation may use a connector in the enterprise application that links it to the event broker, or a plug-in in the event broker. Or it may be implemented as a custom service that subscribes to the enterprise application's event messaging system, converts the event and publishes it to the MASA services event broker. The implementation will depend on the enterprise application's capabilities and the event broker technology used. Just one example of many is using Confluent's Kafka Connect Salesforce connector to publish Salesforce events to Kafka.
  • Custom service detection: Use this approach when the enterprise application does not support direct event publishing or event mediation, or when it does not support capturing and publishing the events you need to communicate to other MASA services. Publishing involves creating a custom service that detects when specific events occur in the enterprise application (without impacting application performance) and then publishes those events to the event broker. Subscribing involves using a custom service to subscribe to an event channel and process events by applying the appropriate enterprise application business logic. Custom services have the flexibility to use the optimal approach to detect the event; the exact approach will depend on the enterprise application's capabilities and your event-driven design requirements. Using the enterprise application APIs, applying an extract, transform, load (ETL) procedure to an enterprise application log, or directly connecting to the enterprise application are common methods of custom event implementation (a minimal sketch of this pattern follows this list).
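
Here is a minimal Python sketch of the custom-service pattern above: it polls a hypothetical enterprise application API for new events, converts them to a neutral shape and republishes them to a Kafka broker. The endpoint, topic name and event fields are assumptions for illustration; it assumes the requests and kafka-python packages.

```python
# Minimal sketch of a custom event-detection service: poll an enterprise
# application's API for new events and republish them to a Kafka broker.
import json
import time

import requests
from kafka import KafkaProducer

APP_EVENTS_URL = "https://erp.example.internal/api/events"  # hypothetical endpoint
TOPIC = "masa.order-events"                                 # hypothetical topic

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

last_seen_id = 0
while True:
    # Detect new events without touching the application's internals.
    resp = requests.get(APP_EVENTS_URL, params={"after": last_seen_id}, timeout=10)
    resp.raise_for_status()
    for event in resp.json():
        # Convert the proprietary event into a neutral shape before publishing,
        # so downstream MASA services never see application-specific fields.
        producer.send(TOPIC, {"id": event["id"], "type": event["type"], "payload": event["data"]})
        last_seen_id = max(last_seen_id, event["id"])
    producer.flush()
    time.sleep(5)  # polling interval; a webhook or log tail would avoid polling entirely
```

Polling is only one detection method; as noted above, tailing an application log via ETL or subscribing to a native hook would follow the same publish-to-broker shape.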

Your Agile Ways of Working

I have written two in-depth pieces on how to design your agile ways of working, inspired by the Spotify model and my own failures and success stories:

  1. Scaling Engineering Teams & Rise of Platform Engineering Squads. How to use the Spotify agile model to scale up engineering teams while maximizing efficiency, reusability, accountability and time to market. That is the rise of "Platform Engineering Squads", which ultimately gets you to the nirvana: innovation pipeline management.
  2. Scaling Agility & Product Mindset. I am in pursuit of scalable and practical methods for scaling agile that really work, away from all the corporate/consulting bullsh*t and the theoretical world. I have seen the pain of setting up a structure for over 300 engineers and product folks. It is not easy, so please share your thoughts.

DataOps and your data ways of working

The goal of XOps (data, ML, model and platform ops for AI) is to achieve efficiencies and economies of scale using DevSecOps best practices and ensure reliability, reusability and repeatability while reducing duplication of technology and enabling automation. Data operationalization (DataOps) is a collaborative data management practice focused on improving the communication, integration and automation of data pipelines within the organization. The goal of DataOps is to deliver value faster by creating predictable delivery and change management of data, data models and related artifacts. DataOps uses technology to automate the design, deployment, management and delivery of data with the appropriate levels of governance and metadata to improve the use and value of data in a dynamic environment.

  • People and culture: Ensure collaboration across a cross-functional team that works toward shared outcomes in the form of data accessibility in a governed fashion. The goal is to bring the development and operations sides of the house together so they can work seamlessly.
      • Development: Data engineers, data architects, data scientists, citizen data scientists and developers
      • Operations: Data infrastructure engineers and platform engineers
  • Governance: Bring the appropriate balance of control, accessibility, accountability and traceability of data usage behavior through access control, metadata management and data versioning.
  • Tools and products: As part of operationalization, you are looking for tools to support the following capabilities. These could be covered by a single platform vendor or may require integration across multiple products to support development- and operations-specific capabilities.
      • An ETL store to capture integration and transformation scripts
      • A data quality tool to identify, understand and correct flaws in the data across the development and operations pipeline (a minimal sketch follows this list)
      • A metadata management and data versioning tool to capture and maintain metadata in support of lineage and traceability
      • A tool to support automated testing of ETL scripts once they are generated and transitioned across environments
      • An orchestration tool to support CI/CD as scripts and data move across environments and business-domain-specific data pipelines
      • ETL process monitoring to ensure the orchestration and jobs have executed successfully
  • Platform (a data management and operations platform): To support the ability to build quick prototypes, build and deliver data products and services across a diverse ecosystem, and automate data pipeline development using the rich metadata.
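
As an illustration of the data quality and automated testing capabilities above, here is a minimal Python sketch of a quality gate that a pipeline could run before promoting an extract between environments. The file path, column names and thresholds are hypothetical; it assumes pandas is installed.

```python
# Minimal sketch of an automated data quality gate for a DataOps pipeline.
# The dataset shape (orders with order_id, customer_id, amount) is assumed.
import sys

import pandas as pd

def quality_gate(path: str) -> list[str]:
    """Return data quality violations for a hypothetical orders extract."""
    df = pd.read_csv(path)
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # tolerate at most 1% missing customer IDs
        failures.append(f"customer_id null rate too high: {null_rate:.2%}")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures

if __name__ == "__main__":
    issues = quality_gate(sys.argv[1])
    for issue in issues:
        print("DATA QUALITY FAILURE:", issue)
    sys.exit(1 if issues else 0)  # non-zero exit blocks the promotion
```

The same gate can run at every environment boundary, which is what makes the pipeline's behavior predictable rather than dependent on manual inspection.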

MLOps

Machine learning operationalization (MLOps) aims to streamline the deployment, operationalization and execution of ML models. It supports the release, activation, monitoring, performance tracking, management, reuse, update, maintenance and governance of ML models.

  • Model management: This allows technical professionals to manage different versions of ML artifacts, the manifestation and delivery of the models (APIs, containers), and model monitoring.
  • Model monitoring: Models undergo decay: their performance in production degrades as new real-world data is brought in during the inference cycle. This creates the need for "explainability" as to how and why a model makes certain predictions, which is also an important consideration for audit and regulatory compliance. A minimal drift-detection sketch follows this list.
  • CI/CD tools: The continuous training of new models requires automating the redeployment of new iterations of models and all of the technical effort that goes into building the delivery pipeline. The continuous development and enhancements to the model — driven by business requirements — requires orchestration, retraining and redeployment of new models in inferencing mode. MLOps embraces the idea that the model will constantly and inevitably change, which means organizations need to implement MLOps strategies to support continuous training, integration and deployment within production applications.
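
As a concrete illustration of model monitoring, here is a minimal drift-detection sketch: it compares the distribution of model scores seen at training time against production scores using a two-sample Kolmogorov-Smirnov test. The data is synthetic and the threshold is an illustrative assumption; it assumes numpy and scipy.

```python
# Minimal sketch of model decay monitoring via distribution drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_scores = rng.normal(loc=0.30, scale=0.1, size=5_000)    # stand-in for training-time scores
production_scores = rng.normal(loc=0.45, scale=0.1, size=1_000)  # drifted production scores

statistic, p_value = ks_2samp(training_scores, production_scores)
if p_value < 0.01:
    # In a real MLOps setup this alert would trigger the retraining pipeline.
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}): schedule retraining")
else:
    print("No significant drift detected")
```

The same comparison can be applied per input feature rather than to scores, which often localizes which upstream data source has shifted.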

ModelOps

ModelOps brings governance and life cycle management to all AI models (graph, linguistic and rule-based systems) and decision models. Core capabilities include management of the model repository, champion-challenger testing, model rollout/rollback and CI/CD integration. In addition, ModelOps gives business domain experts the autonomy to interpret the outcomes and validate KPIs of AI models in production. It also provides the ability to promote or demote models for inferencing without a full dependency on data scientists or ML engineers.

  • Help avoid rework by keeping a deployment scenario in mind when creating models.
  • Retain data lineage and track-back information for governance and audit compliance.
  • Help focus on monitoring when deploying models so that business users can monitor and work closely with data scientists to retrain models as they degrade.
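
To make champion-challenger testing concrete, here is a minimal routing sketch in Python. The models are stand-in functions and the traffic split is an illustrative assumption; in a real ModelOps setup the artifacts would come from a versioned model repository and the challenger's scores would feed a KPI dashboard for business validation.

```python
# Minimal sketch of champion-challenger routing: a fraction of scoring
# traffic also exercises the challenger, but the champion keeps deciding
# until the challenger is explicitly promoted.
import random

def champion_model(features: dict) -> float:
    return 0.20  # placeholder score for the current production model

def challenger_model(features: dict) -> float:
    return 0.35  # placeholder score for the candidate model

CHALLENGER_TRAFFIC = 0.10  # 10% of requests shadow-test the challenger

def score(features: dict) -> float:
    if random.random() < CHALLENGER_TRAFFIC:
        prediction = challenger_model(features)
        # Log the challenger's prediction for KPI comparison; it does not
        # affect the decision returned to the caller.
        print(f"challenger scored {prediction}")
    return champion_model(features)

print(score({"amount": 120.0}))
```

Because promotion is just a configuration change (swapping which function is "champion"), rollback is equally cheap, which is the operational point of the pattern.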

Platform Ops for AI

The purpose of AI orchestration platforms is to provide an integrated continuous integration/continuous delivery (CI/CD) pipeline across all the different stages of building AI-based systems, supporting reproducibility, reusability, rollback/rollout, lineage and a secure environment. These orchestration platforms are the backbone of some of the leading innovative tech organizations, several of which have open-sourced their in-house AI orchestration platforms to benefit the broader technology community. This will lead to a mature marketplace of AI platforms and accelerate the delivery and adoption of AI-based solutions.

  • Architect AI-augmented systems that are resilient and accept that disruptive change will be the norm of the future.
  • Scale AI initiatives by modularizing and orchestrating the underlying platform for business outcomes.
  • Provide autonomy to business units by enabling discoverable and reusable AI artifacts (ETL, feature and model stores) toward multiple use cases.
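
One way to picture the lineage and reproducibility such a platform provides is a pipeline in which every stage records content hashes of its inputs and outputs. The stages and artifacts below are hypothetical; this is a sketch of the idea, not any particular platform's API.

```python
# Minimal sketch of lineage tracking in an AI pipeline: each stage records
# a content hash of its inputs and outputs, so any model artifact can be
# traced back to the exact data that produced it.
import hashlib
import json

LINEAGE: list[dict] = []

def run_stage(name: str, inputs: bytes, transform) -> bytes:
    output = transform(inputs)
    LINEAGE.append({
        "stage": name,
        "input_sha256": hashlib.sha256(inputs).hexdigest(),
        "output_sha256": hashlib.sha256(output).hexdigest(),
    })
    return output

raw = b"raw training data"                              # stand-in for an extract
clean = run_stage("clean", raw, lambda b: b.upper())    # stand-in transforms
model = run_stage("train", clean, lambda b: b[::-1])

# The lineage record is what makes reproducibility and rollback auditable.
print(json.dumps(LINEAGE, indent=2))
```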

AIOps

AIOps platforms help transform the way we operate our IT infrastructure. The power of machine learning and analytics can turn overwhelming mountains of metrics, logs and traces into proactive operations that deliver unprecedented levels of availability and efficiency. Enterprises adopting AIOps platforms use them to enhance and, occasionally, augment classical application performance monitoring (APM) and network performance monitoring and diagnostics (NPMD) tools. AIOps might not be accomplished using just one platform; it is usually represented as a range of disciplines, each focused on a specific aspect of IT operations. It considers data generated across various assets within an enterprise technology landscape: apps, business operations, network data and infrastructure. It is important to establish AIOps projects around clear, discrete problems with achievable solutions, rather than around existing data from across all domains.
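
As a small illustration of the analytics involved, here is a Python sketch that flags anomalous latency samples against a rolling baseline, the kind of raw signal an AIOps platform would correlate across metrics, logs and traces. The window size, threshold and injected incident are illustrative assumptions; it assumes numpy.

```python
# Minimal sketch of metric anomaly detection with a rolling z-score.
import numpy as np

rng = np.random.default_rng(7)
latency_ms = rng.normal(loc=100, scale=8, size=200)  # synthetic service latency
latency_ms[150:155] = 300                            # injected incident

WINDOW, THRESHOLD = 30, 4.0
for i in range(WINDOW, len(latency_ms)):
    baseline = latency_ms[i - WINDOW:i]
    z = (latency_ms[i] - baseline.mean()) / (baseline.std() + 1e-9)
    if abs(z) > THRESHOLD:
        # A real platform would correlate this with logs, traces and
        # change events before paging anyone.
        print(f"sample {i}: {latency_ms[i]:.0f} ms looks anomalous (z={z:.1f})")
```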
