Gluent Data Platform
We conceived the idea of transparent data virtualization and founded Gluent way back in 2013. Hadoop was already widely used in startups and large Internet companies, but enterprise IT shops were still struggling to understand how to make use of this new and promising technology.
Fast forward to today and Hadoop has become more mature, adding features such as security, encryption and SQL engines suitable for interactive use.
Gluent Data Platform has now been implemented at some of the largest finance, telecom, retail and healthcare customers around the world.
Offloading your data to Hadoop and accessing it, along with other Hadoop data, through Gluent Data Platform gives you a number of benefits:
- Billions of rows can be scanned per query with high parallelism on inexpensive Hadoop cluster hardware and software
- Only relevant columns and rows will be returned to the database for further processing
- These efficiencies improve your query performance and free up your expensive enterprise database and SAN storage resources
- Gluent Smart Connector does not require any changes to your existing application code
- No need to port your applications or edit those 20,000 reports accumulated by your business over the years
- As no application re-design is needed, Gluent is the fastest and lowest-risk option for offloading your data and workloads to Hadoop
Future-proofing With No Lock-in:
- Gluent offloads data to industry-standard open source Hadoop using open data formats, meaning that you will not be locked in
- Choose from and use multiple data engines (like Impala, Hive and Spark) to process your data
- Open data formats give you the freedom to use any application for processing: one data, many engines
- No data conversion or export/import is needed when using new engines on Hadoop
- You finally own your data and are in control of its future
Gluent Advisor informs you which data can be safely offloaded to Hadoop and optionally dropped from your databases.
Encryption of data at rest and in motion, as well as role-based access control, is fully supported.
Gluent Offload Engine automates offloading data from enterprise databases to Hadoop. You can have an up-to-date copy of your data, in a familiar data model, ready for analytics in the powerful ecosystem of Hadoop.
Use Gluent Offload Engine toolsets to:
- Copy database tables to Hadoop and keep them synchronized
- Move old data to Hadoop, allowing it to be dropped from your database
- Offload only a part of a table (such as old data in a data warehouse fact table), keeping recent data in your relational database
- Convert data to open data formats (Parquet, ORC), columnar-compressed for space savings and partitioned for optimal query performance
- Access offloaded data without any application changes
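As an illustration, an offload of this kind is driven by a single command. The sketch below is an assumption for illustration only: the table name, date boundary, and option names are hypothetical placeholders, not exact Gluent syntax:

```shell
# Illustrative sketch only: table name and option names are assumptions,
# not exact Gluent syntax. The idea: offload rows older than a given date
# from a fact table to Hadoop, keeping recent data in the database.
$OFFLOAD_HOME/bin/offload -t SH.SALES --older-than-date=2015-01-01
```

A partial offload like this leaves recent partitions in the relational database while historical rows live in Hadoop, and queries continue to see one logical table.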
Have a look at the Gluent Offload video demonstrations to learn more.
Gluent Smart Connector is the core building block of our product offering. It is the underlying engine that allows access to both the database and offloaded Hadoop data in a single query. SQL queries are still executed by the original applications in their usual databases with no modifications. Under the hood much of the processing is pushed to Hadoop.
A better name for our Smart Connector would be a Really Smart Connector. Because we want to offload a significant amount of your databases’ reporting, batch, and analytic heavy-duty processing to Hadoop, we need to understand the running SQL statements and their execution plans intimately.
We have gone to great lengths to develop a product that extracts detailed information about your SQL statements in real time, direct from database engine memory. For the database geeks, we are talking about column projection, filter predicates, bind variables and even table join conditions and aggregations.
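To make this concrete, consider ordinary application SQL against a hypothetical offloaded fact table (the schema and column names here are illustrative, not from the product):

```sql
-- Hypothetical schema for illustration; this is unmodified application SQL.
SELECT   customer_id, SUM(amount)      -- column projection: only these columns return
FROM     sh.sales
WHERE    sale_date < DATE '2015-01-01' -- filter predicate: evaluated in Hadoop
GROUP BY customer_id;                  -- aggregation: pushed down where possible
```

The application issues the statement unchanged; the connector reads the projection, predicates, and aggregation from the live execution plan and pushes that work to the Hadoop engine, so only the reduced result travels back to the database.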
Gluent’s Present component allows your relational databases to query tables and views of any big data source in Hadoop, including tables offloaded from other databases.
Use Gluent’s Present toolset to:
- Present any Hadoop data source to your relational databases
- Access the presented data (hybrid tables) using native SQL
- Utilize Hadoop resources for filtering, aggregations and joins of presented data
- Have on-demand access to Hadoop data and avoid traditional data export/import
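For example, once a Hadoop data set is presented as a hybrid table, it can be joined to ordinary database tables with native SQL. The names below are hypothetical, chosen only to illustrate the pattern:

```sql
-- Illustrative only: weblogs stands for a hypothetical Hadoop data set
-- presented to the database as a hybrid table; customers is a regular
-- relational table. The join and aggregation on the presented side can
-- be processed with Hadoop resources.
SELECT   c.customer_name, COUNT(*) AS page_views
FROM     customers c
JOIN     weblogs   w ON w.customer_id = c.customer_id
GROUP BY c.customer_name;
```

No export/import step is involved: the Hadoop data is read on demand at query time.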
Single-command database offloading: no need to hire ETL developers.
Open Data Formats
Data is no longer stored in proprietary formats; you finally own your data.