The database is a vital part of any application. Cloud Spanner is the first and only enterprise-grade, globally distributed, and strongly consistent database service. It uniquely combines the benefits of a relational database structure (transactions, SQL queries, and schemas) with the scalability and availability of non-relational, or NoSQL, databases.
How does Cloud Spanner work?
In Figure 1, you can see a four-node regional Cloud Spanner instance hosting two databases. A node is a measure of compute capacity in Cloud Spanner. Node servers serve read and write/commit transaction requests, but they do not store the data. Each node is replicated across three zones of the region, and so is the database storage. A node's replicas are responsible for reads and writes to the storage in their zone. The data itself is stored in Google's underlying Colossus distributed replicated file system, which provides a huge advantage when it comes to redistributing load, because the data is not linked to any individual node. If a node or a zone fails, the database remains available and is served by the remaining nodes, with no manual intervention needed to maintain availability.
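The separation of compute (nodes) from storage (Colossus) described above can be sketched in a few lines of Python. This is a toy model for illustration only; the class and variable names are invented, and real Spanner nodes and Colossus are of course far more involved:

```python
# Toy model (hypothetical names): nodes serve requests but the data
# lives in shared, replicated storage, so losing a node loses no data.

class ColossusStorage:
    """Stands in for the shared, replicated Colossus file system."""
    def __init__(self):
        self.rows = {}

class Node:
    def __init__(self, name, storage):
        self.name = name
        self.storage = storage   # nodes reference storage; they don't own it
        self.healthy = True

    def read(self, key):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return self.storage.rows.get(key)

# A four-node instance sharing one storage layer, as in Figure 1.
storage = ColossusStorage()
nodes = [Node(f"node-{i}", storage) for i in range(1, 5)]
storage.rows["user:1"] = "Alice"

# If node-1 fails, any remaining node still serves the same data,
# because the data is not tied to a single node.
nodes[0].healthy = False
serving = next(n for n in nodes if n.healthy)
print(serving.read("user:1"))  # Alice
```

The key design point the sketch captures: because storage is shared and replicated below the node layer, failover is just picking another healthy node.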
How does Spanner provide high availability and scalability?
Each table in the database is stored sorted by primary key and divided by primary key ranges into segments called splits. Each split is managed completely independently by a different Spanner node. The number of splits for a table varies with the amount of data: an empty table has only a single split. Splits are rebalanced dynamically depending on the amount of data and the load. But remember that both the tables and the nodes are replicated across three zones. How does that work?
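The splitting idea can be sketched as follows. This is a hypothetical illustration: the row limit, routing rule, and names are made up, and real split boundaries are chosen by Spanner based on size and load, not a fixed row count:

```python
# Toy sketch: a table sorted by primary key is divided into splits
# (contiguous key ranges), and an oversized split divides in two.

import bisect

SPLIT_LIMIT = 3  # invented limit: max rows per split before it divides

class Split:
    def __init__(self, keys):
        self.keys = sorted(keys)

def rebalance(splits):
    """Divide any split that has grown past the limit."""
    out = []
    for s in splits:
        if len(s.keys) > SPLIT_LIMIT:
            mid = len(s.keys) // 2
            out.append(Split(s.keys[:mid]))
            out.append(Split(s.keys[mid:]))
        else:
            out.append(s)
    return out

# An empty table has exactly one split.
splits = [Split([])]
for key in ["a", "d", "b", "e", "c", "f"]:
    # route the insert to the last split whose range starts at or
    # before the key (toy routing), else the first split
    target = splits[0]
    for s in splits:
        if s.keys and s.keys[0] <= key:
            target = s
    bisect.insort(target.keys, key)
    splits = rebalance(splits)

print([s.keys for s in splits])  # [['a', 'b', 'c'], ['d', 'e', 'f']]
```

Each resulting key range would then be handed to a different node, which is what lets load spread across the instance.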
Everything is replicated across the three zones, and so is split management. Each split's replicas are associated with a group that spans zones and uses the Paxos consensus protocol, in which one zone's replica is elected the leader. The leader is responsible for managing write transactions for that split, while the other replicas can be used for reads. If the leader fails, consensus is re-established and a new leader may be elected. For different splits, different zones can act as leaders, thus distributing the leadership roles among all the Spanner compute nodes. A node may be a leader for some splits and a replica for others. Through this distributed mechanism of splits, leaders, and replicas, Spanner achieves both high availability and scalability.
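A minimal sketch of this leadership arrangement, with invented zone names and a deliberately simplified "election" (real Paxos elections involve quorum voting, which is omitted here):

```python
# Toy sketch: each split has one replica per zone; one zone's replica
# leads that split, and leadership is spread across zones.

ZONES = ["zone-a", "zone-b", "zone-c"]

class PaxosGroup:
    """One group per split, with a replica in every zone."""
    def __init__(self, split_id):
        self.split_id = split_id
        # toy "election": rotate leadership so no zone leads every split
        self.leader = ZONES[split_id % len(ZONES)]

    def fail_leader(self):
        # if the leader's zone fails, consensus elects a new leader
        survivors = [z for z in ZONES if z != self.leader]
        self.leader = survivors[0]

groups = [PaxosGroup(i) for i in range(6)]
print({g.split_id: g.leader for g in groups})

# Writes for a split go to its leader; if that zone fails,
# another replica takes over without manual intervention.
groups[0].fail_leader()
print(groups[0].leader)  # zone-b
```

The point of rotating leadership is that no single zone becomes a write bottleneck: every node does leader work for some splits and replica work for others.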
Types of reads in Spanner
There are two types of reads in Cloud Spanner.
Strong reads: used when the absolute latest value needs to be read. Here is how it works:
1. The Cloud Spanner API identifies the split, looks up the Paxos group for the split, and routes the request to one of the replicas (usually in the same zone as the client). In this example, the request is sent to the read-only replica in zone 1.
2. The replica asks the leader whether it is OK to read, requesting the TrueTime timestamp of the latest transaction on this row.
3. The leader responds, and the replica compares the response with its own state.
4. If the row is up to date, the replica can return the result. Otherwise, it needs to wait for the leader to send the update.
5. The response is returned to the client.
In some cases, for example when the row has just been updated while the read request was in transit, the state of the replica is sufficiently up to date that it does not even need to ask the leader for the latest transaction.
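The steps above can be sketched as a small simulation. Timestamps here are plain integers and the leader/replica classes are invented stand-ins; real Spanner uses TrueTime timestamps and Paxos state:

```python
# Toy sketch of a strong read: the replica asks the leader for the
# latest transaction timestamp on the row, compares it with its own
# state, and only waits for the update if it is behind.

class Leader:
    def __init__(self):
        self.latest_ts = {"row1": 105}        # ts of latest txn per row
        self.values = {"row1": "new-value"}

    def latest_timestamp(self, key):
        return self.latest_ts[key]

class Replica:
    def __init__(self, leader):
        self.leader = leader
        self.applied_ts = {"row1": 100}       # what this replica has applied
        self.values = {"row1": "old-value"}

    def strong_read(self, key):
        # 1) ask the leader for the timestamp of the latest txn on this row
        leader_ts = self.leader.latest_timestamp(key)
        # 2) if behind, wait for the update to arrive (simulated instantly)
        if self.applied_ts[key] < leader_ts:
            self.values[key] = self.leader.values[key]
            self.applied_ts[key] = leader_ts
        # 3) return the up-to-date value to the client
        return self.values[key]

replica = Replica(Leader())
print(replica.strong_read("row1"))  # new-value
```

Note that the replica never takes a row lock: it only compares timestamps, which is the property the next section relies on.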
Stale reads: used when low read latency is more important than getting the latest values, so some staleness in the data can be tolerated. In a stale read, the client does not request the absolute latest version, only the most recent data (for example, up to n seconds old). If the staleness factor is at least 15 seconds, in most cases the replica can simply return the data without even querying the leader, because its internal state will show that the data is sufficiently up to date. Notice that in neither type of read request was any row locking required; the ability of any node to respond to reads is what makes Cloud Spanner so fast and scalable.
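A stale read can be sketched the same way; the clock values and the replica model below are invented for illustration:

```python
# Toy sketch of a stale read: the client tolerates data up to
# max_staleness seconds old, so a replica whose state is recent
# enough answers locally, with no round trip to the leader.

NOW = 1000.0  # simulated "current time" in seconds

class Replica:
    def __init__(self, last_synced_at, value):
        self.last_synced_at = last_synced_at  # when state was last known fresh
        self.value = value
        self.asked_leader = False

    def stale_read(self, max_staleness):
        if NOW - self.last_synced_at <= max_staleness:
            return self.value      # serve locally: within the staleness bound
        self.asked_leader = True   # otherwise fall back to the leader path
        return self.value

replica = Replica(last_synced_at=NOW - 5, value="v1")
print(replica.stale_read(max_staleness=15))  # v1, leader never contacted
print(replica.asked_leader)                  # False
```

In the official google-cloud-spanner Python client, this trade-off is expressed through snapshot options such as an `exact_staleness` bound when creating a read snapshot, though the exact call shape depends on the client version you use.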
How does Spanner provide global consistency?
TrueTime is a way to synchronize clocks in all machines across multiple data centers. The system uses a combination of GPS and atomic clocks, each correcting for the failure modes of the other. Combining the two sources (with multiple redundancy, of course) gives an accurate source of time for all Google applications. But clock drift can still occur on each individual machine: even with a sync every 30 seconds, the difference between the server's clock and the reference clock can be as much as 2 ms. The drift looks like a sawtooth graph, with the uncertainty increasing until it is corrected by the next clock sync. Since 2 ms is quite a long duration (in computing terms, at least), TrueTime includes this uncertainty as part of the time signal.
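The core idea of including uncertainty in the time signal can be sketched in a few lines. The numbers below are illustrative, and the two helper functions are simplified stand-ins for TrueTime's interval API:

```python
# Toy sketch of TrueTime's idea: instead of a single instant, the
# clock returns an interval [earliest, latest] guaranteed to contain
# the true time. The uncertainty grows between syncs (the "sawtooth"
# described above) and shrinks again after each sync.

def tt_now(server_clock, epsilon):
    """Return the TrueTime-style interval for a server clock reading."""
    return (server_clock - epsilon, server_clock + epsilon)

def tt_after(interval, t):
    """True only if the real time is definitely past t."""
    earliest, _ = interval
    return earliest > t

# Just before the next sync, drift may be ~2 ms, so the interval is wide.
interval = tt_now(10.000, 0.002)
print(interval)
print(tt_after(interval, 9.997))  # True: even the earliest possible time is past 9.997
print(tt_after(interval, 9.999))  # False: 9.999 may still be in the future
```

Reasoning over intervals rather than instants is what lets a system order transactions consistently across machines whose clocks disagree by a couple of milliseconds: a timestamp is only treated as "in the past" when the whole uncertainty interval has passed it.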
If your application requires a highly scalable relational database, consider Cloud Spanner. To learn more about Cloud Spanner, please refer to the official documentation.
Official documentation: https://cloud.google.com/spanner/docs