Ampool 2.0 – Introducing Data Without Borders

You have probably heard of Doctors Without Borders (Médecins Sans Frontières). This incredible Nobel Peace Prize-winning organization is committed to bringing quality medical care to people caught in crisis, regardless of geographic borders. Its actions are guided by the principles of neutrality and impartiality, and some of its stories are simply amazing. At Ampool, we have been busy executing an equally bold vision, "Data Without Borders," and we are excited to introduce Ampool 2.0 in a series of blog posts.

Trends: The Data World Is Getting Siloed

We are now living in a multi-cloud and hybrid world. Customer data is distributed across public clouds (PaaS, SaaS, IaaS) and on-premises data centers, and data velocity, volume, and variety keep increasing. 80% of customers using public cloud are looking at more than one cloud service provider (Gartner), and many enterprise customers with deep investments in on-premises data centers are adopting hybrid cloud (Forrester). Add to the mix the growing security exposure of data spread across silos: many of our customers are concerned about their PII or PHI data getting exposed in this new multi-cloud and hybrid world.

Let's take a trip down memory lane to see how we got here. In the early 1970s, the first relational database systems became a viable alternative to traditional mainframe-based data management. Data workloads proliferated, and specialized platforms emerged to handle different kinds of workloads. The seminal paper "One Size Fits All": An Idea Whose Time Has Come and Gone, by Turing Award winner Michael Stonebraker, pronounced the end of the single enterprise-wide data management system.

The last fifteen years have produced and popularized hundreds of data management platforms, each specialized for a particular volume and velocity of data. Massively parallel data warehouses (Teradata, Greenplum, Vertica), Apache Hadoop-based data lakes (Cloudera), NoSQL systems (MongoDB, Cassandra), and event streaming systems (Apache Kafka) have all become ubiquitous in the last decade alone. Fueled by omnipresent Internet connectivity, Software-as-a-Service enterprise applications (Salesforce, Workday) added to the data fragmentation by providing hosted applications over managed enterprise data. Beyond SaaS, the last five years have seen an explosion of managed data platforms provided by cloud service providers (CSPs) such as AWS, Azure, and GCP. For every traditionally on-premises data management system, there are now at least three CSP-managed alternatives.

Country-specific data residency requirements and regulatory restrictions (e.g., GDPR) are contributing immensely to this data fragmentation. New state-specific privacy laws within the same country (e.g., CCPA in California) will only accelerate it.

The paradox is that businesses now need insights from their data faster than ever to stay competitive, as evidenced by how quickly S&P 500 companies are being replaced. To achieve business agility, business units within an enterprise are making their own data platform choices, mostly in the cloud, which only makes the existing fragmentation problem worse.

Stress on our Data Heroes 

We talked to many Data Engineers, Business Analysts, Data Scientists, and Security Administrators. In the new multi-cloud and hybrid cloud world, it takes Data Engineers far longer to ETL (extract, transform, load) data from the various silos into a centralized location; given the volume and real-time nature of the data, ETL is often not a pragmatic option at all. Business Analysts cannot deliver insights quickly: they wait months for the Data Engineers to get the data ready, and their favorite BI tools slow down because the data is spread across geographies. Security Administrators, meanwhile, worry about exposing PHI, PII, and other sensitive data when it moves without anonymization.
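To make the pain concrete, here is the ETL pattern in miniature, including the anonymization step security teams ask for before PII leaves a silo. The schema, the in-memory SQLite stores, and the hash-based anonymization are purely illustrative assumptions, not Ampool's implementation:

```python
import hashlib
import sqlite3

# Hypothetical "silo": a source system holding PII alongside health data.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE patients (id INTEGER, name TEXT, diagnosis TEXT)")
source.executemany(
    "INSERT INTO patients VALUES (?, ?, ?)",
    [(1, "Alice", "flu"), (2, "Bob", "cold")],
)

# Hypothetical centralized target the Data Engineer must copy into.
target = sqlite3.connect(":memory:")
target.execute(
    "CREATE TABLE patients_anon (id INTEGER, name_hash TEXT, diagnosis TEXT)"
)

# Extract: pull every row out of the silo.
rows = source.execute("SELECT id, name, diagnosis FROM patients").fetchall()

# Transform: anonymize the PII column before it leaves the silo.
def anonymize(name: str) -> str:
    return hashlib.sha256(name.encode()).hexdigest()[:12]

transformed = [(rid, anonymize(name), dx) for rid, name, dx in rows]

# Load: write the anonymized rows into the central store.
target.executemany("INSERT INTO patients_anon VALUES (?, ?, ?)", transformed)
target.commit()

loaded = target.execute("SELECT COUNT(*) FROM patients_anon").fetchone()[0]
print(loaded)  # 2
```

Even this toy pipeline has to be rebuilt, scheduled, and secured for every new silo, which is exactly the overhead that grows with each additional cloud and region.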

Introducing Ampool Borderless Data Mart 

Ampool Borderless Data Mart provides a unified view across data lakes, data streams, data warehouses, and RDBMS sources deployed in multiple cloud and on-premises environments. Data Engineers, Business Analysts, and Data Scientists can now reduce time to insights from months to days across real-time and batch data, with faster performance and stronger security, simply by pointing their favorite tools at Ampool Hub, a cloud-hosted single pane of glass.

Next Steps 

Ampool Borderless Data Mart is now generally available. Our customers can supercharge their favorite BI tools by 30x and cut their time to insights by 180x (from months to hours). Please visit our new website and sign up for a free trial to gain a competitive edge in the new multi-cloud and hybrid cloud world.


©2019 Copyright – Ampool, Inc.