Introduction to HoraeDB's Architecture
Target
- Provide an overview of HoraeDB for developers who want to learn more about it but have no idea where to start.
- Briefly introduce the important modules of HoraeDB and the connections between them, without going into implementation details.
Motivation
HoraeDB is a timeseries database (TSDB). However, unlike a classic TSDB, which usually performs poorly on analytic workloads, HoraeDB aims to handle both timeseries and analytic workloads well.
In a classic timeseries database, the Tag columns (InfluxDB calls them Tags and Prometheus calls them Labels) are normally indexed with an inverted index. However, the cardinality of Tags varies across scenarios, and in some scenarios it is very high (we call this case an analytic workload), which makes the inverted index very expensive to store and retrieve. On the other hand, the scan+prune approach commonly used by analytical databases handles such analytic workloads well.
The basic design idea of HoraeDB is to adopt a hybrid storage format and a corresponding query method for better performance on both timeseries and analytic workloads.
Architecture
(figure: architecture of the HoraeDB stand-alone service)
The figure above shows the architecture of the HoraeDB stand-alone service; the details of some important modules are described in the following parts.
RPC Layer
module path: https://github.com/apache/incubator-horaedb/tree/main/server
The current RPC layer supports multiple protocols, including HTTP, gRPC, and MySQL.
Basically, HTTP and MySQL are used to debug HoraeDB, run queries manually, and perform DDL operations (such as creating and dropping tables). The gRPC protocol can be regarded as a customized protocol for high performance, suitable for massive read and write operations.
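For example, a SQL query can be issued over the HTTP protocol. The sketch below assumes the default HTTP port (5440) and a `/sql` endpoint; both are assumptions that may differ across versions and deployments.

```rust
// Hedged sketch: the port (5440) and the `/sql` endpoint are assumptions
// based on the default configuration; adjust them to your deployment.
// Cargo deps: reqwest (with the "json" feature), tokio, serde_json
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let body = serde_json::json!({ "query": "SHOW TABLES" });
    let resp = reqwest::Client::new()
        .post("http://127.0.0.1:5440/sql")
        .json(&body)
        .send()
        .await?
        .text()
        .await?;
    println!("{resp}");
    Ok(())
}
```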
SQL Layer
module path: https://github.com/apache/incubator-horaedb/tree/main/query_frontend
The SQL layer is responsible for parsing SQL and generating the query plan.
Based on sqlparser, a SQL dialect is provided that introduces some key concepts, including Tag and Timestamp, for processing timeseries data. By utilizing DataFusion, the planner is able to generate both regular logical plans and tailored ones used to implement the special operators defined by timeseries queries, e.g. PromQL.
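As an illustration of the first step, plain sqlparser usage looks like this. HoraeDB builds its own dialect on top of this crate so that concepts like Tag and Timestamp are recognized; the sketch below just uses the stock `GenericDialect`.

```rust
// Plain sqlparser usage, not HoraeDB's extended dialect.
// Cargo dep: sqlparser
use sqlparser::{dialect::GenericDialect, parser::Parser};

fn main() -> Result<(), sqlparser::parser::ParserError> {
    let sql = "SELECT host, avg(value) FROM cpu WHERE ts > 1000 GROUP BY host";
    // One AST statement is produced per SQL statement in the input.
    let statements = Parser::parse_sql(&GenericDialect {}, sql)?;
    println!("{statements:#?}");
    Ok(())
}
```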
Interpreter
module path: https://github.com/apache/incubator-horaedb/tree/main/interpreters
The Interpreter module encapsulates the SQL CRUD operations. In the query procedure, a SQL statement received by HoraeDB is parsed, converted into a query plan, and then executed by a specific interpreter, such as SelectInterpreter, InsertInterpreter, etc.
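The abstraction can be sketched roughly as follows; the names and signatures here are illustrative, not the exact definitions in the `interpreters` crate.

```rust
// Illustrative sketch only; the real trait has richer Output/Error types
// and construction logic.
use async_trait::async_trait;

pub struct Output; // e.g. record batches for SELECT, affected rows for INSERT
pub struct Error;

#[async_trait]
pub trait Interpreter {
    async fn execute(self: Box<Self>) -> Result<Output, Error>;
}

pub struct SelectInterpreter; // would hold the plan and a query engine handle

#[async_trait]
impl Interpreter for SelectInterpreter {
    async fn execute(self: Box<Self>) -> Result<Output, Error> {
        // Optimize and run the plan through the query engine,
        // then wrap the results into `Output`.
        Ok(Output)
    }
}
```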
Catalog
module path: https://github.com/apache/incubator-horaedb/tree/main/catalog_impls
Catalog is the module that manages metadata. The levels of metadata adopted by HoraeDB are similar to PostgreSQL's: Catalog > Schema > Table, but they are currently only used as namespaces.
At present, Catalog and Schema have two different implementations, for standalone and distributed mode, because some strategies for generating IDs and ways of persisting metadata differ between the two modes.
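Since the three levels act purely as namespaces, the structure can be pictured as nested maps. A minimal illustration with hypothetical types (the real traits live in the `catalog` / `catalog_impls` crates):

```rust
use std::collections::HashMap;

// Hypothetical types illustrating the Catalog > Schema > Table namespaces.
struct Table;
struct Schema { tables: HashMap<String, Table> }
struct Catalog { schemas: HashMap<String, Schema> }

/// Resolve a fully qualified table name, e.g. ("horaedb", "public", "cpu").
fn resolve<'a>(
    catalogs: &'a HashMap<String, Catalog>,
    catalog: &str,
    schema: &str,
    table: &str,
) -> Option<&'a Table> {
    catalogs.get(catalog)?.schemas.get(schema)?.tables.get(table)
}
```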
Query Engine
module path: https://github.com/apache/incubator-horaedb/tree/main/query_engine
The Query Engine is responsible for optimizing and executing a query plan, given the basic plan produced by the SQL layer; currently this work is mainly delegated to DataFusion.
In addition to basic SQL functionality, HoraeDB also defines some customized query protocols and optimization rules for specific query plans by utilizing the extensibility provided by DataFusion. For example, PromQL support is implemented this way; read the relevant code if you are interested.
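As an illustration of the delegated machinery, the DataFusion-only sketch below runs logical optimization, physical planning, and execution. HoraeDB wires its own rules and table providers into this pipeline; exact APIs vary across DataFusion versions, and the CSV file is assumed to exist.

```rust
// Pure DataFusion sketch of the delegated pipeline; API details vary by
// DataFusion version.
use datafusion::physical_plan::collect;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    ctx.register_csv("metrics", "metrics.csv", CsvReadOptions::new()).await?;

    let df = ctx.sql("SELECT count(*) FROM metrics").await?;
    let logical = df.logical_plan().clone();

    let optimized = ctx.state().optimize(&logical)?; // optimize the logical plan
    let physical = ctx.state().create_physical_plan(&optimized).await?; // plan physically
    let batches = collect(physical, ctx.task_ctx()).await?; // execute
    println!("{batches:?}");
    Ok(())
}
```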
Pluggable Table Engine
module path: https://github.com/apache/incubator-horaedb/tree/main/table_engine
The Table Engine is the storage engine for managing tables in HoraeDB, and its pluggability is a core design decision that matters for our long-term targets, e.g. supporting log or tracing workloads by implementing new storage engines. HoraeDB will have multiple kinds of Table Engine for different workloads, and the most appropriate one should be chosen as the storage engine according to the workload pattern.
Currently, the requirements for a Table Engine are (a simplified sketch of such an engine trait follows the list):
- Manage all the shared resources under the engine:
  - Memory
  - Storage
  - CPU
- Manage the metadata of tables, such as the table schema and table options;
- Provide `Table` instances which provide `read` and `write` methods;
- Take responsibility for creating, opening, dropping, and closing `Table` instances;
- ...
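Below is a heavily simplified sketch of what such an engine trait could look like. The trait and method names are modeled on the list above; the real definition in the `table_engine` crate uses dedicated request/response types.

```rust
// Hypothetical, simplified sketch; the real `TableEngine` trait takes
// dedicated request structs and returns engine-specific errors.
use std::sync::Arc;
use async_trait::async_trait;

pub struct TableSchema; // schema + table options in the real engine
pub struct Error;

#[async_trait]
pub trait Table: Send + Sync {
    async fn write(&self /* , write request */) -> Result<usize, Error>;
    async fn read(&self /* , read request */) -> Result<(), Error>;
}

#[async_trait]
pub trait TableEngine: Send + Sync {
    async fn create_table(&self, schema: TableSchema) -> Result<Arc<dyn Table>, Error>;
    async fn open_table(&self, table_id: u64) -> Result<Option<Arc<dyn Table>>, Error>;
    async fn close_table(&self, table_id: u64) -> Result<(), Error>;
    async fn drop_table(&self, table_id: u64) -> Result<bool, Error>;
}
```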
In practice, the responsibilities of a Table Engine are fairly complicated. At present, HoraeDB provides only one Table Engine, called Analytic, which does a good job on analytic workloads but is not yet ready to handle timeseries workloads (we plan to enhance it by adding indexes that help with timeseries workloads).
The following parts describe the details of the Analytic Table Engine.
WAL
module path: https://github.com/apache/incubator-horaedb/tree/main/wal
HoraeDB processes data using the WAL + MemTable model: newly written data is written to the WAL first and then to the MemTable, and after a certain amount of data has accumulated in the MemTable, that data is reorganized into a query-friendly form and persisted.
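The write path can be sketched roughly as follows. All names here (`Wal`, `MemTable`, `TableData`, `flush_threshold`) are hypothetical stand-ins, not the actual types in `analytic_engine`.

```rust
// Rough, hypothetical sketch of the WAL + MemTable write path; the real
// code lives under analytic_engine and is considerably more involved.
struct WriteBatch;
struct Error;

struct Wal;
impl Wal {
    // Returns the sequence number assigned to the batch.
    fn write(&self, _batch: &WriteBatch) -> Result<u64, Error> { Ok(42) }
}

struct MemTable;
impl MemTable {
    fn put(&self, _sequence: u64, _batch: &WriteBatch) -> Result<(), Error> { Ok(()) }
    fn approximate_memory_usage(&self) -> usize { 0 }
}

struct TableData { wal: Wal, memtable: MemTable, flush_threshold: usize }

impl TableData {
    fn write(&self, batch: WriteBatch) -> Result<(), Error> {
        // 1. Persist to the WAL first so the write survives a crash.
        let sequence = self.wal.write(&batch)?;
        // 2. Apply to the MemTable so the data is immediately queryable.
        self.memtable.put(sequence, &batch)?;
        // 3. Once the MemTable is large enough, a flush to SST is scheduled.
        if self.memtable.approximate_memory_usage() > self.flush_threshold {
            // schedule_flush() would hand the MemTable to the flush worker.
        }
        Ok(())
    }
}
```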
Now three implementations of `WAL` are provided for standalone and distributed mode:
- For standalone mode, the `WAL` is based on `RocksDB`, and data is persisted on the local disk.
- For distributed mode, the `WAL` is required to be a distributed component responsible for the durability of newly written data, so we provide an implementation based on OceanBase.
- For distributed mode, in addition to OceanBase, we also provide a more lightweight implementation based on Apache Kafka.
MemTable
module path: https://github.com/apache/incubator-horaedb/tree/main/analytic_engine/src/memtable
Since the WAL cannot provide efficient data retrieval, newly written data is also stored in the MemTable for that purpose. After a certain amount of data has accumulated, HoraeDB organizes the data in the MemTable into a query-friendly storage format (SST) and stores it on the persistent device.
The current MemTable implementation is based on agatedb's skiplist. It allows concurrent reads and writes, and can control memory usage through an Arena.
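As a stand-in illustration of such a structure, the sketch below uses the crossbeam-skiplist crate, a concurrent ordered map. HoraeDB itself uses agatedb's skiplist, which additionally allocates from an Arena to bound memory usage.

```rust
// Illustration with crossbeam-skiplist as a stand-in for the MemTable's
// skiplist. Cargo dep: crossbeam-skiplist
use std::{sync::Arc, thread};
use crossbeam_skiplist::SkipMap;

fn main() {
    let memtable: Arc<SkipMap<u64, &'static str>> = Arc::new(SkipMap::new());

    // Writers can insert concurrently without external locking.
    let handles: Vec<_> = (0..4u64)
        .map(|w| {
            let m = Arc::clone(&memtable);
            thread::spawn(move || {
                for i in 0..10u64 {
                    m.insert(w * 10 + i, "payload");
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // Readers see keys in sorted order, which makes range scans cheap.
    for entry in memtable.range(5..15) {
        println!("key = {}", entry.key());
    }
}
```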
Flush
module path: https://github.com/apache/incubator-horaedb/blob/main/analytic_engine/src/instance/flush_compaction.rs
When the memory usage of MemTables reaches a threshold, Flush selects some MemTables and flushes them into query-friendly SSTs saved on the persistent device.
During the flushing procedure, the data is divided by a certain time range (configured by the table option Segment Duration), and every SST is guaranteed to contain only data whose timestamps fall within a single segment. This is a common operation in most timeseries databases: organizing data along the time dimension speeds up subsequent time-related operations, such as querying data over a time range or purging data outside the TTL.
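The time bucketing itself is simple. A sketch (illustrative, not the engine's exact code):

```rust
/// Illustrative sketch: map a timestamp to the start of its segment so that
/// every SST produced by a flush only holds rows of a single segment.
fn segment_start_ms(timestamp_ms: i64, segment_duration_ms: i64) -> i64 {
    timestamp_ms - timestamp_ms.rem_euclid(segment_duration_ms)
}

fn main() {
    let two_hours_ms = 2 * 60 * 60 * 1000;
    // 03:30 falls into the [02:00, 04:00) segment.
    assert_eq!(segment_start_ms(12_600_000, two_hours_ms), 7_200_000);
    // 01:59:59.999 falls into the [00:00, 02:00) segment.
    assert_eq!(segment_start_ms(7_199_999, two_hours_ms), 0);
}
```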
Compaction
module path: https://github.com/apache/incubator-horaedb/tree/main/analytic_engine/src/compaction
The data in MemTables is flushed as SSTs, but a recently flushed SST may be very small, and SSTs that are too small or too numerous lead to poor query performance. Therefore, Compaction is introduced to rearrange the SSTs so that multiple smaller SST files can be compacted into larger ones.
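As a toy illustration of the idea (not HoraeDB's actual picking strategy, which lives in the module above):

```rust
/// Toy size-based picking sketch: gather the smallest SSTs below a size
/// threshold so they can be merged into one larger file.
fn pick_compaction_inputs(
    mut ssts: Vec<(u64 /* file id */, u64 /* size in bytes */)>,
    small_threshold: u64,
    max_inputs: usize,
) -> Vec<u64> {
    ssts.retain(|&(_, size)| size < small_threshold);
    ssts.sort_by_key(|&(_, size)| size);
    ssts.into_iter().take(max_inputs).map(|(id, _)| id).collect()
}

fn main() {
    let ssts = vec![(1, 4 << 20), (2, 512 << 10), (3, 256 << 10), (4, 128 << 20)];
    // Picks files 3 and 2: the small ones worth merging.
    assert_eq!(pick_compaction_inputs(ssts, 1 << 20, 4), vec![3, 2]);
}
```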
Manifest
module path: https://github.com/apache/incubator-horaedb/tree/main/analytic_engine/src/meta
Manifest is responsible for managing the table metadata of the Analytic Engine, including:
- Table schema and table options;
- The sequence number up to which the latest flush has completed;
- The information of all the `SST`s belonging to the table.
Currently the Manifest is based on the WAL and Object Storage. Newly written updates to the Manifest are persisted as logs in the WAL, and to avoid infinite growth of the Manifest (every flush actually produces an update), a Snapshot mechanism is introduced to compact the history of metadata updates; the generated snapshots are saved to Object Storage.
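To make this concrete, here is a hypothetical shape of the update records; the names are illustrative, and the real types live under `analytic_engine/src/meta`.

```rust
// Hypothetical shapes, for illustration only. Each flush appends an edit to
// the WAL, and a snapshot periodically folds the log into Object Storage.
#[allow(dead_code)]
struct FileMeta {
    file_id: u64,
    time_range: (i64, i64),
    size: u64,
}

#[allow(dead_code)]
enum MetaUpdate {
    AddTable {
        table_id: u64,
        // table schema and options ...
    },
    VersionEdit {
        table_id: u64,
        /// WAL entries up to this sequence are covered by flushed SSTs
        /// and can be purged.
        flushed_sequence: u64,
        files_to_add: Vec<FileMeta>,
        files_to_delete: Vec<u64>,
    },
    DropTable {
        table_id: u64,
    },
}
```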
Object Storage
module path: https://github.com/apache/incubator-horaedb/tree/main/components/object_store
The SSTs generated by Flush need to be persisted, and the abstraction of the persistent storage device is `ObjectStore`, which has multiple implementations:
- Based on local file system;
- Based on Alibaba Cloud OSS.
The distributed architecture of HoraeDB separates storage and compute, which requires the Object Store to be a highly available and reliable service independent of HoraeDB. Therefore, storage systems like Amazon S3 and Alibaba Cloud OSS are good choices, and implementations for the storage systems of other cloud providers are planned.
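As an illustration, the upstream `object_store` crate offers this flavor of abstraction. Note the `put` signature shown here matches its 0.x releases that accept `Bytes`; newer releases take a `PutPayload` instead.

```rust
// Example against the upstream `object_store` crate.
// Cargo deps: object_store = "0.5", bytes, tokio
use bytes::Bytes;
use object_store::{local::LocalFileSystem, path::Path, ObjectStore};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Swap this for an S3/OSS-backed store in a real deployment.
    let store = LocalFileSystem::new_with_prefix("/tmp/horaedb-demo")?;

    let location = Path::from("table_1/sst/42.sst");
    store.put(&location, Bytes::from_static(b"sst bytes")).await?;

    let data = store.get(&location).await?.bytes().await?;
    assert_eq!(&data[..], b"sst bytes");
    Ok(())
}
```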
SST
module path: https://github.com/apache/incubator-horaedb/tree/main/analytic_engine/src/sst
SST is actually an abstraction that can have multiple specific implementations. The current implementation is based on Parquet, a column-oriented data file format designed for efficient data storage and retrieval.
The format of the SST is critical for retrieving data and is also the most important factor in performing well on both timeseries and analytic workloads. At present, our Parquet-based implementation is good at processing analytic workloads but poor at timeseries workloads. In our roadmap, we will explore more storage formats in order to achieve good performance on both.
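To make the storage format concrete, the sketch below writes a tiny Parquet file with the arrow and parquet crates; the column names are illustrative, not HoraeDB's exact SST schema.

```rust
// Writing a tiny Parquet file with the arrow/parquet crates.
// Cargo deps: arrow, parquet
use std::{fs::File, sync::Arc};
use arrow::array::{ArrayRef, Float64Array, Int64Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Illustrative columns: a series id, a timestamp, and a value.
    let schema = Arc::new(Schema::new(vec![
        Field::new("tsid", DataType::Int64, false),
        Field::new("timestamp", DataType::Int64, false),
        Field::new("value", DataType::Float64, false),
    ]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(Int64Array::from(vec![1, 1, 2])) as ArrayRef,
            Arc::new(Int64Array::from(vec![1_000, 2_000, 1_000])) as ArrayRef,
            Arc::new(Float64Array::from(vec![0.5, 0.7, 1.2])) as ArrayRef,
        ],
    )?;

    let file = File::create("/tmp/demo.sst.parquet")?;
    let mut writer = ArrowWriter::try_new(file, schema, None)?;
    writer.write(&batch)?;
    writer.close()?; // writes the Parquet footer
    Ok(())
}
```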
Space
module path: https://github.com/apache/incubator-horaedb/blob/main/analytic_engine/src/space.rs
In the Analytic Engine, there is a concept called space; here is an explanation of it to resolve some ambiguities when reading the source code. The Analytic Engine actually has no concept of catalog or schema and only provides two levels of relationship: space and table. In the implementation, the schema id (which should be unique across all catalogs) of the upper layer is mapped to the space id.
The space in the Analytic Engine mainly serves to isolate resources, such as memory usage, between different tenants.
Critical Path
After this brief introduction to some important modules of HoraeDB, we will describe some critical paths in the code, hoping to provide interested developers with a guide for reading it.
Query
(figure: query procedure across the modules)
Take a SELECT SQL statement as an example. The figure above shows the query procedure, and the numbers in it indicate the order of calls between the modules.
Here are the details:
- The Server module chooses a proper RPC module (HTTP, gRPC, or MySQL) to process the request according to the protocol it uses;
- The SQL in the request is parsed by the parser;
- With the parsed SQL and the information provided by the catalog/schema module, DataFusion can generate the logical plan;
- With the logical plan, the corresponding `Interpreter` is created, and the logical plan will be executed by it;
- For the logical plan of a normal `SELECT` SQL, it will be executed through `SelectInterpreter`;
- In the `SelectInterpreter`, the specific query logic is executed by the `Query Engine`:
  - Optimize the logical plan;
  - Generate the physical plan;
  - Optimize the physical plan;
  - Execute the physical plan;
- The execution of the physical plan involves the `Analytic Engine`:
  - Data is obtained by the `read` method of the `Table` instance provided by the `Analytic Engine`;
  - The sources of the table data are `SST`s and `Memtable`s, and the data can be filtered by the pushed-down predicates;
  - After retrieving the table data, the `Query Engine` completes the specific computation and generates the final results;
- `SelectInterpreter` gets the results and feeds them to the protocol module;
- After the protocol layer converts the results, the server module responds to the client with them.
The following is the flow of function calls in version v1.2.2:
┌───────────────────────◀─────────────┐ ┌───────────────────────┐
│ handle_sql │────────┐ │ │ parse_sql │
└───────────────────────┘ │ │ └────────────────┬──────┘
│ ▲ │ │ ▲ │
│ │ │ │ │ │
│ │ │ └36───┐ │ 11
1│ │ │ │ │ │
│ 8│ │ │ │ │
│ │ │ │ 10 │
│ │ │ │ │ │
▼ │ │ │ │ ▼
┌─────────────────┴─────┐ 9│ ┌┴─────┴────────────────┐───────12─────────▶┌───────────────────────┐
│maybe_forward_sql_query│ └────────▶│fetch_sql_query_output │ │ statement_to_plan │
└───┬───────────────────┘ └────┬──────────────────┘◀───────19─────────└───────────────────────┘
│ ▲ │ ▲ │ ▲
│ │ │ │ │ │
│ │ │ │ │ │
│ │ │ 35 13 18
2│ 7│ 20 │ │ │
│ │ │ │ │ │
│ │ │ │ │ │
│ │ │ │ ▼ │
▼ │ ▼ │ ┌───────────────────────┐
┌───────────────────────┐───────────6───────▶┌─────────────────┴─────┐ ┌─────────────────┴─────┐ │Planner::statement_to_p│
│ forward_with_endpoint │ │ forward │ │execute_plan_involving_│ │ lan │
└───────────────────────┘◀────────5──────────└───┬───────────────────┘ ┌──│ partition_table │◀────────┐ └───┬───────────────────┘
│ ▲ │ └───────────────────────┘ │ │ ▲
│ │ │ │ ▲ │ │ │
│ │ │ │ │ │ 14 17
┌───────────────────────┐ │ 4│ │ │ │ │ │ │
┌─────│ PhysicalPlan::execute │ 3│ │ │ 21 │ │ │ │
│ └───────────────────────┘◀──┐ │ │ │ │ 22 │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ │ │ │ │ │ │ │ ┌────────────────────────┐
│ │ ▼ │ │ ▼ │ 34 │sql_statement_to_datafus│
│ ┌───────────────────────┐ 30 ┌─────────────────┴─────┐ │ ┌─────────────────┴─────┐ │ │ ion_plan │
31 │ build_df_session_ctx │ │ │ route │ │ │ build_interpreter │ │ └────────────────────────┘
│ └────┬──────────────────┘ │ └───────────────────────┘ │ └───────────────────────┘ │ │ ▲
│ │ ▲ │ │ │ │ │
│ 27 26 │ 23 │ 15 16
│ ▼ │ │ │ │ │ │
└────▶┌────────────────┴──────┐ │ ┌───────────────────────┐ │ │ │ │
│ execute_logical_plan ├───┴────32────────▶│ execute │──────────┐ │ ┌───────────────────────┐ │ ▼ │
└────┬──────────────────┘◀────────────25────┴───────────────────────┘ 33 │ │interpreter_execute_pla│ │ ┌────────────────────────┐
│ ▲ ▲ └──────┴──▶│ n │────────┘ │SqlToRel::sql_statement_│
28 │ └──────────24────────────────┴───────────────────────┘ │ to_datafusion_plan │
│ 29 └────────────────────────┘
▼ │
┌────────────────┴──────┐
│ optimize_plan │
└───────────────────────┘
1. The received request is forwarded to `handle_sql` after various protocol conversions; since the request may not be processable by this node, it may need to be forwarded, and `maybe_forward_sql_query` handles the forwarding logic.
2. After constructing the `ForwardRequest` in `maybe_forward_sql_query`, call `forward`.
3. After constructing the `RouteRequest` in `forward`, call `route`.
4. Use `route` to get the destination node `endpoint` and return to `forward`.
5. Call `forward_with_endpoint` to forward the request.
6. Return to `forward`.
7. Return to `maybe_forward_sql_query`.
8. Return to `handle_sql`.
9. If this is a `Local` request, call `fetch_sql_query_output` to process it.
10. Call `parse_sql` to parse the `sql` into a `Statement`.
11. Return to `fetch_sql_query_output`.
12. Call `statement_to_plan` with the `Statement`.
13. Construct the `Planner` with `ctx` and the `Statement`, and call the `statement_to_plan` method of the `Planner`.
14. The planner calls the corresponding planner method for the category of the request; at this point our `sql` is a query, so `sql_statement_to_plan` is called.
15. Call `sql_statement_to_datafusion_plan`, which generates the `datafusion` object, and then call `SqlToRel::sql_statement_to_plan`.
16. The generated logical plan is returned from `SqlToRel::sql_statement_to_plan`.
17. Return.
18. Return.
19. Return.
20. Call `execute_plan_involving_partition_table` (in the default configuration) for subsequent optimization and execution of this logical plan.
21. Call `build_interpreter` to generate the `Interpreter`.
22. Return.
23. Call the `Interpreter`'s `interpreter_execute_plan` method to execute the logical plan.
24. The corresponding `execute` function is called; since the `sql` is a query, the `execute` of `SelectInterpreter` is called.
25. Call `execute_logical_plan`, which calls `build_df_session_ctx` to generate the optimizer.
26. `build_df_session_ctx` uses the `config` information to generate the corresponding context: it first uses DataFusion and some custom optimization rules (in `logical_optimize_rules()`) to generate the logical plan optimizer, then uses `apply_adapters_for_physical_optimize_rules` to generate the physical plan optimizer.
27. Return the optimizer.
28. Call `optimize_plan`, using the optimizer just generated to optimize first the logical plan and then the physical plan.
29. Return the optimized physical plan.
30. Execute the physical plan.
31. Return after execution.
32. After collecting the results of all slices, return.
33. Return.
34. Return.
35. Return.
36. Return to the upper layer for network protocol conversion, and finally return to the request sender.
Write
(figure: write procedure across the modules)
Take an INSERT SQL statement as an example. The figure above shows the write procedure, and the numbers in it indicate the order of calls between the modules.
Here are the details:
- The Server module chooses a proper RPC module (HTTP, gRPC, or MySQL) to process the request according to the protocol it uses;
- The SQL in the request is parsed by the parser;
- With the parsed SQL and the catalog/schema module, DataFusion can generate the logical plan;
- With the logical plan, the corresponding `Interpreter` is created, and the logical plan will be executed by it;
- For the logical plan of a normal `INSERT` SQL, it will be executed through `InsertInterpreter`;
- In the `InsertInterpreter`, the `write` method of the `Table` provided by the `Analytic Engine` is called:
  - Write the data into the `WAL` first;
  - Write the data into the `MemTable` then;
- Before writing to the `MemTable`, the memory usage will be checked. If the memory usage is too high, the flush process will be triggered:
  - Persist some old MemTables as `SST`s;
  - Store updates about the new `SST`s and the flushed sequence number of the `WAL` to the `Manifest`;
  - Delete the corresponding `WAL` entries;
- Server module responds to the client with the execution result.