From flat files to deconstructed databases: The evolution and future of the big data ecosystem
Author: Le Dem, Julien
Format: Online
Language: English
Published: [Place of publication not identified]: O'Reilly Media, Inc., 2019 (Sebastopol, CA: O'Reilly Media, Inc.)
Edition: 1st edition
Subjects:
Online access: https://learning.oreilly.com/library/view/-/0636920339847 ; https://learning.oreilly.com/library/view/-/0636920339847/?ar
Summary: Over the past 10 years, big data infrastructure has evolved from flat files in a distributed filesystem to an efficient ecosystem and, finally, to a fully deconstructed and open source database with reusable components. With Hadoop, we started from a system that was good at looking for a needle in a haystack using snowplows. We had a lot of horsepower and scalability but lacked the subtlety and efficiency of relational databases. But since Hadoop provided the ultimate flexibility compared to the more constrained and rigid RDBMSs, we didn't mind and plowed through. However, machine learning, recommendations, matching, abuse detection, and data-driven products in general require a more flexible infrastructure. Over time, we started applying everything that had been known to the database world for decades to this new environment. We'd been told loudly enough that Hadoop was a huge step backward, and it was true to some degree. The key difference was the flexibility of the Hadoop stack. A relational database contains many highly integrated components, and decoupling them took some time. Today, we see the emergence of key components, such as optimizers, columnar storage, in-memory representation, table abstraction, and batch and streaming execution, as standards that provide the glue between the options available to process, analyze, and learn from our data. We've been deconstructing the tightly integrated relational database into flexible, reusable open source components. Storage, compute, multitenancy, and batch or streaming execution are all decoupled and can be modified independently to fit every use case. Julien Le Dem (WeWork) discusses the key open source components of the big data ecosystem, including Apache Calcite, Parquet, Arrow, Avro, and Kafka as well as batch and streaming systems, and explains how they relate to each other and how they make the ecosystem more of a database and less of a filesystem. (Parquet is the columnar data layout to optimize data at rest for querying. Arrow is the in-memory representation for maximum throughput execution and overhead-free data exchange. Calcite is the optimizer to make the most of our infrastructure capabilities. A minimal Parquet/Arrow sketch follows this record.) Julien also explores the emerging components that are still missing or haven't become standard yet to fully materialize the transformation into an extremely flexible database that lets you innovate with your data. This session was recorded at the 2019 O'Reilly Strata Data Conference in San Francisco.
Description: 1 online resource (1 video file, approximately 44 min.)
ISBN: 0636920339823
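To make the Parquet-at-rest / Arrow-in-memory split mentioned in the summary concrete, here is a minimal sketch using the pyarrow library. It is not taken from the talk; the table contents, column names, and file path are invented for illustration.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Build a table in Arrow memory: Arrow is the columnar in-memory
# representation used for high-throughput execution and low-overhead
# data exchange between engines.
events = pa.table({
    "user_id": pa.array([1, 2, 3], type=pa.int64()),
    "event": pa.array(["click", "view", "click"]),
})

# Persist it as Parquet: Parquet is the columnar layout that optimizes
# data at rest for querying (compression, column pruning, statistics).
pq.write_table(events, "events.parquet")

# Read back only the columns a query needs, straight into Arrow memory.
clicks = pq.read_table("events.parquet", columns=["event"])
print(clicks.column("event"))
```

Reading back only the required columns into a shared in-memory format, rather than scanning whole flat files, is one small example of the "more of a database, less of a filesystem" shift the talk describes.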