Decoupling is a leading architectural principle for developing microservices-based applications. Among other things, it involves securing the microservices with zero-trust methods, separating data per application, and more. For IT systems, decoupling is also a practice for reducing operational dependencies across different systems: it gives organizations the means to make system changes more easily, standardize API-based system integrations, and provide a centralized place for creating new services. Decoupling also allows securing, monitoring and adding basic business logic inside the orchestration and workflows that define those new services.
As part of the ongoing omnichannel digital transformation, organizations empower their users with self-service capabilities that rely on an API layer delivering accurate business data, fast. While internal users may tolerate some latency, external users are usually unforgiving: if they don’t like their app experience, they can leave bad reviews or move to a competitor. Organizations also use non-business hours for heavy batch operations, and these batch processes slow systems down dramatically. Additionally, IT system maintenance usually includes downtime that doesn’t impact business continuity from the employees’ perspective. For external users, however, this is unacceptable. They expect continuous access to their apps, including outside business hours, and do not tolerate stale data that serves as a ‘placeholder’ until the next batch update.
To provide an optimal customer experience, organizations must optimize their systems for scalability, high service availability and performance, while also ensuring data integrity, with data that is always accurate and delivered in close to real time. To ensure a consistent, accurate experience across all channels, it is critical that data is unified throughout the organization.
Data Consistency & Low Latency: Solutions that partially address these issues
The approaches discussed below can partially address the need for fast, accurate delivery of disparate data to digital services across multiple channels, where data consistency, low latency and user experience are of the essence. The first approach relies on an ESB or iPaaS focused on API virtualization; the second relies on data virtualization as a way to decouple the data services from the logic required to integrate disparate data into the unified result set the application needs.
Using an integration platform such as an ESB (Enterprise Service Bus) or an iPaaS (Integration Platform as a Service) can help ensure that data remains coherent and correct, giving API-based digital channels timely access to business data that is primarily stored in IT operational systems. With this approach, external applications are decoupled from IT systems at the network and protocol levels. The REST/SOAP API facade provides digital channels with the data and functionality they need while keeping the backend systems abstract, whether they are JDBC, SOAP, PL/SQL, BAPI, file or FTP based. This works well until the digital platform is widely adopted: at that point the load on backend systems increases significantly, since each customer interaction creates a chain of requests that goes all the way down into the IT systems.
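To make the chain-of-requests problem concrete, here is a minimal sketch of such a facade in Java, using the JDK’s built-in HTTP server and a JDBC call. The endpoint, connection string and query are hypothetical; the point is that every API call still translates into a request against the backend store:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative REST facade: each API call is translated into a backend query,
// so every client request still reaches the underlying IT system.
public class CustomerApiFacade {
    // Hypothetical backend connection string -- stands in for the real operational store.
    private static final String BACKEND_JDBC_URL = "jdbc:postgresql://backend-db:5432/crm";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/customers/", exchange -> {
            String id = exchange.getRequestURI().getPath().substring("/customers/".length());
            try (Connection conn = DriverManager.getConnection(BACKEND_JDBC_URL);
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                ps.setString(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    String body = rs.next()
                            ? "{\"id\":\"" + id + "\",\"name\":\"" + rs.getString("name") + "\"}"
                            : "{}";
                    byte[] bytes = body.getBytes();
                    exchange.getResponseHeaders().add("Content-Type", "application/json");
                    exchange.sendResponseHeaders(200, bytes.length);
                    exchange.getResponseBody().write(bytes);
                }
            } catch (Exception e) {
                exchange.sendResponseHeaders(500, -1);   // backend failure surfaces to the client
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}
```

Note that the availability and latency of the whole endpoint are dictated by the backend call inside the handler, which is exactly the limitation described next.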
Ultimately, this solution is only as good as its weakest link, or in this case, its weakest backend data store. As a result, it cannot scale, and its performance will usually fail to meet expectations. Although maintenance windows and batch processes run outside business hours, digital users now expect accurate data 24/7 and are not forgiving of any downtime. At this point it becomes evident that a different approach is needed to give customers a truly successful, always-on digital service.
Improving Data Accessibility with Data Virtualization
Data virtualization improves data accessibility by providing single-point access to data that is distributed across different IT systems. However, it can struggle when accessing large volumes of data or working with real-time data, and it involves complex implementation and complex ongoing maintenance whenever the underlying systems change. Data virtualization also depends on the availability and performance of the underlying data sources, adding overhead as it transforms data in real time on every request.
To minimize the impact of the gap between what IT systems were designed to provide and what digital applications require, organizations tend to combine the above methods with some form of caching and/or throttling at the middleware layer. The purpose of throttling is either to flatten peaks of requests or to deny requests that may cause the IT systems to become unstable and underperform. Caching helps reduce the read load on backend systems, but at the price of data accuracy, as the client has no way of knowing whether the information it received came from the actual IT system or from a stale cache.
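A minimal sketch of what this middleware-layer mitigation typically looks like, assuming a semaphore-based throttle and a simple TTL cache (class and parameter names are illustrative, not taken from any specific product):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative middleware mitigation: a semaphore caps how many requests may reach
// the backend at once (throttling), and a TTL cache answers repeat reads without
// touching the backend at all -- possibly with stale data.
public class ThrottlingCache<K, V> {

    private record Entry<T>(T value, long expiresAtMillis) {}

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final Semaphore backendPermits;
    private final long ttlMillis;

    public ThrottlingCache(int maxConcurrentBackendCalls, long ttlMillis) {
        this.backendPermits = new Semaphore(maxConcurrentBackendCalls);
        this.ttlMillis = ttlMillis;
    }

    public V get(K key, Supplier<V> backendLoader) {
        Entry<V> cached = cache.get(key);
        if (cached != null && cached.expiresAtMillis() > System.currentTimeMillis()) {
            return cached.value();            // served from cache: fast, but may be stale
        }
        if (!backendPermits.tryAcquire()) {
            // Backend is saturated: either fall back to stale data or fail fast.
            if (cached != null) {
                return cached.value();
            }
            throw new IllegalStateException("Backend throttled, no cached value available");
        }
        try {
            V fresh = backendLoader.get();    // the only path that actually hits the IT system
            cache.put(key, new Entry<>(fresh, System.currentTimeMillis() + ttlMillis));
            return fresh;
        } finally {
            backendPermits.release();
        }
    }
}
```

The trade-off is visible in the two early-return paths: both avoid loading the backend, but neither can tell the caller whether the value reflects the system of record.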
True Isolation – Implemented by Smart DIH
Most legacy IT systems were designed to provide services to internal users (employees). But in today’s online digital business environment, apps and services need to serve thousands or even millions of external users concurrently. The volume and speed required to meet this level of performance cannot be achieved with legacy systems.
GigaSpaces’ Smart DIH solution solves this issue by taking decoupling to the next level: achieving true isolation. With Smart DIH, the digital platform is isolated from the underlying IT systems and serves the external digital channels with an always-accurate copy of the data, in near real time, directly from its built-in in-memory data grid. This degree of isolation allows top-notch performance for read operations without any dependency on the availability of, or load on, the backend IT systems. Since users are primarily outside the organization and their apps must communicate with internal systems, isolation also helps address security concerns.
Smart DIH is an out-of-the-box, self-sufficient middleware platform that offloads all read workloads from traditional IT systems. The platform exposes the data as REST, SOAP or SQL, regardless of how it is stored on the backend systems. Data is served from an ultra-fast, proven, in-memory data grid designed for extreme concurrency workloads, so that even a massive surge of parallel requests hitting the API will not degrade the end-user experience.
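As an illustration, a read against the grid might look like the sketch below. It assumes the GigaSpaces Java client API (GigaSpaceConfigurer, SpaceProxyConfigurer, readById); the Space name and data model are hypothetical and not taken from the article:

```java
import org.openspaces.core.GigaSpace;
import org.openspaces.core.GigaSpaceConfigurer;
import org.openspaces.core.space.SpaceProxyConfigurer;
import com.gigaspaces.annotation.pojo.SpaceClass;
import com.gigaspaces.annotation.pojo.SpaceId;

// Illustrative read path: the API layer answers from the in-memory data grid (the "Space"),
// never from the backend IT system. Space name and data model are hypothetical.
@SpaceClass
class Customer {
    private String id;
    private String name;

    public Customer() {}                       // no-arg constructor required for grid POJOs

    @SpaceId(autoGenerate = false)
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class GridReadExample {
    public static void main(String[] args) {
        // Connect to a (hypothetical) Space named "dih-space" holding the replicated data.
        GigaSpace gigaSpace = new GigaSpaceConfigurer(
                new SpaceProxyConfigurer("dih-space")).gigaSpace();

        // A read by id is served entirely from memory, independent of backend availability.
        Customer customer = gigaSpace.readById(Customer.class, "42");
        System.out.println(customer != null ? customer.getName() : "not found");
    }
}
```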
Learn more about how Smart DIH enables full isolation by decoupling applications and decoupling microservices.
Smart DIH supports Docker-based processing unit deployments and offers native Kubernetes support implemented with Kubernetes microservices design patterns. In addition, the platform offers full integration with OpenShift, supporting a wide range of topologies for on-prem, cloud, hybrid and multi-cloud deployments.
Smart DIH’s data integration offers built-in integrations with third-party and proprietary connectors, including IBM’s IIDR and Kafka. The implemented data integration pattern guarantees delivery of captured change-data event messages from IIDR to the Space within Smart DIH, where the data is stored in memory and on local SSDs in a redundant, highly available pattern. The event-based data ingestion, data cleansing and validation, and built-in reconciliation mechanisms ensure that the conflicts that can occur when relying solely on ESB or iPaaS solutions are not an issue.
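The ingestion side of this pattern can be sketched as a Kafka consumer loop that applies each change event and commits offsets only afterwards, giving at-least-once delivery. The broker address, topic name and the applyChangeEvent step are assumptions for illustration:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Illustrative CDC ingestion loop: change events published by the CDC tool are consumed
// from Kafka and applied to the in-memory store. Offsets are committed only after the
// events are applied, so a failure leads to redelivery rather than data loss.
public class CdcIngestionLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");                  // hypothetical broker
        props.put("group.id", "dih-ingestion");
        props.put("enable.auto.commit", "false");                      // commit only after apply
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("crm.customers.cdc"));          // hypothetical CDC topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    applyChangeEvent(record.key(), record.value());    // upsert/delete in the grid
                }
                consumer.commitSync();                                 // acknowledge applied events
            }
        }
    }

    // Placeholder for validation, cleansing and the actual write to the in-memory store.
    private static void applyChangeEvent(String key, String payload) {
        System.out.printf("applying change for key %s: %s%n", key, payload);
    }
}
```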
Utilizing Event-Driven Architecture
As data from the various systems is loaded and updated via CDC into Smart DIH, aggregation, joins, business logic, machine learning inference and additional operations can be executed as part of the event-driven architecture flow. This ensures that the system maintains an always-accurate, enriched copy of the relevant operational data. Once the data is incorporated into Smart DIH, it benefits from a high degree of resiliency, throughput and persistence, also enabling full disaster recovery. High availability of services and data is maintained under all circumstances and conditions, such as hardware failures, system upgrades, a backend IT system that is down or underperforming, and extreme peaks of requests.
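The enrichment flow can be illustrated with a small, self-contained pattern sketch (plain Java, not the product API): each incoming change event is joined with reference data and scored on arrival, so the serving layer always holds ready-to-serve, enriched records. All names and values are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative event-driven enrichment stage: work happens when the event arrives,
// not at query time, so reads stay fast and consistent.
public class EnrichmentStage {

    record ChangeEvent(String customerId, double orderAmount) {}
    record EnrichedRecord(String customerId, String segment, double riskScore) {}

    private final BlockingQueue<ChangeEvent> events = new LinkedBlockingQueue<>();
    private final Map<String, String> segmentsByCustomerId = new ConcurrentHashMap<>(
            Map.of("42", "enterprise"));                         // hypothetical reference data
    private final Map<String, EnrichedRecord> servingStore = new ConcurrentHashMap<>();

    public void submit(ChangeEvent event) {
        events.add(event);
    }

    public void runOnce() throws InterruptedException {
        ChangeEvent event = events.take();
        // Join with reference data and apply business logic / model inference on arrival.
        String segment = segmentsByCustomerId.getOrDefault(event.customerId(), "standard");
        double riskScore = event.orderAmount() > 10_000 ? 0.8 : 0.2; // stand-in for ML inference
        servingStore.put(event.customerId(),
                new EnrichedRecord(event.customerId(), segment, riskScore));
    }

    public static void main(String[] args) throws InterruptedException {
        EnrichmentStage stage = new EnrichmentStage();
        stage.submit(new ChangeEvent("42", 12_500));
        stage.runOnce();
        System.out.println(stage.servingStore.get("42"));
    }
}
```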
By offering true isolation, Smart DIH achieves the highest possible level of decoupling between clients and the IT systems. Smart DIH’s decoupling and full isolation shields clients from dependencies on backend systems and their configuration, availability, and performance. Smart DIH also supports geographic replication, so organizations can replicate their data into regions that provide lower latency. The result is a positive and consistent user experience from business applications that is not impacted by the performance, throughput or availability of backend systems.