# Disk, memory and bandwidth

Background: These notes summarize some knowledge around Redis, and along the way take a look at the evolution of storage media.

### Disk:

For a disk, a complete IO operation proceeds like this: when the controller sends an IO command to the disk, the actuator arm carrying the read/write head leaves the landing zone (an area on the inner circle that holds no data) and moves to a position just above the track where the first data block to be operated on is located. This process is called seeking, and the time it consumes is the seek time. Finding the right track is not enough to read data immediately: the head must wait until the platter rotates so that the sector containing the first data block passes directly under it. The time spent waiting for the platter to rotate is the rotational latency (rotational delay). Finally, as the platter continues to rotate, the head reads or writes the data blocks until all the data required by the IO operation has been handled; this is the data transfer phase, and the time it takes is the transfer time. After these three steps, one IO operation is complete.

The two most important parameters for measuring disk performance are IOPS and throughput.

Therefore, given the size of a single IO, we know how much time the disk spends on data transfer: IO Chunk Size / Max Transfer Rate.

Now we can get such a formula for calculating the single IO time.

IO Time = Seek Time + (60 sec / Rotational Speed) / 2 + IO Chunk Size / Transfer Rate

So we can calculate IOPS like this.

IOPS = 1 / IO Time = 1 / (Seek Time + (60 sec / Rotational Speed) / 2 + IO Chunk Size / Transfer Rate)
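As a rough sketch, the formula above can be evaluated for a hypothetical 7200 RPM disk. The seek time, transfer rate, and IO size below are illustrative assumptions, not measured values:

```python
# Rough single-IO time and IOPS for a hypothetical spinning disk.
# All parameter values are illustrative assumptions.
seek_time = 0.004                   # average seek time: 4 ms
rpm = 7200                          # rotational speed
transfer_rate = 200 * 1024 * 1024   # 200 MB/s sustained transfer rate
io_chunk = 4 * 1024                 # 4 KB per IO

rotational_latency = 60.0 / rpm / 2   # on average, wait half a rotation
transfer_time = io_chunk / transfer_rate

io_time = seek_time + rotational_latency + transfer_time
iops = 1.0 / io_time

print(f"IO time: {io_time * 1000:.3f} ms, IOPS: {iops:.0f}")
```

With these numbers, the seek time and rotational latency dominate, and the transfer time of a 4 KB chunk is almost negligible; the resulting IOPS is on the order of a hundred or so.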

IOPS (Input/Output Operations Per Second) is the number of input/output (read or write) operations per second, and is one of the main indicators of disk performance. It measures how many I/O requests, typically read or write operations, the system can handle per unit time. For applications with frequent random reads and writes, such as OLTP (Online Transaction Processing), IOPS is the key metric. The other important indicator is data throughput, the amount of data that can be successfully transferred per unit time. Applications dominated by large sequential reads and writes, such as VOD (Video On Demand), care more about throughput.

In short:

• Disk IOPS: how many I/O read/write operations the disk completes in one second.
• Disk throughput: the rate of the data stream when the disk (or a device such as a router or switch) is transferring data, i.e. the amount of data read plus written per second. Even on the same disk, writing IOs of different sizes yields different throughput figures.
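The two metrics are linked by the IO size: throughput equals IOPS times the size of each IO. A quick sanity check, with assumed illustrative numbers:

```python
# Throughput = IOPS x IO size.
# With small random IOs, a disk can be IOPS-bound while its interface
# bandwidth sits almost idle.
iops = 120             # assumed random-IO capability of a spinning disk
io_size = 4 * 1024     # 4 KB per random IO

throughput = iops * io_size   # bytes per second
print(f"{throughput / 1024:.0f} KB/s")
```

At 120 IOPS and 4 KB per IO, the disk delivers only about 480 KB/s of random-read throughput, far below what the same disk achieves on large sequential transfers.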

It can be seen from the formula above that the smaller a single IO is, the less time it consumes, and the higher the corresponding IOPS. However, the numbers above rest on an ideal assumption: that every IO pays the average seek time and the average rotational latency. This assumption actually matches random reads and writes fairly well; in random IO, the seek time and rotational latency of each operation cannot be ignored.

Writing 10,000 files of 1KB each takes more time than writing a single 10MB file, because the 10,000 files require tens of thousands of IOs, while the 10MB file is stored contiguously and needs only a few dozen IOs.

For writing the 10,000 small files, the required IO rate is very high, so a disk with higher IOPS speeds things up considerably.

Writing the single 10MB file does not get faster with higher IOPS, because only a small number of IOs are needed; here only a larger transfer bandwidth shows an advantage.
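A minimal sketch of this contrast, scaled down to 200 files of 1 KB versus one file of the same total size so it runs quickly (the file names and counts are arbitrary):

```python
import os
import tempfile

KB = 1024
n_files, small_size = 200, 1 * KB

with tempfile.TemporaryDirectory() as d:
    # Many small files: one open/write/close (and its own IOs) per file.
    for i in range(n_files):
        with open(os.path.join(d, f"small_{i}.bin"), "wb") as f:
            f.write(b"x" * small_size)

    # One large file of the same total size: sequential, far fewer IOs.
    with open(os.path.join(d, "large.bin"), "wb") as f:
        f.write(b"x" * (n_files * small_size))

    total_small = sum(
        os.path.getsize(os.path.join(d, p))
        for p in os.listdir(d)
        if p.startswith("small_")
    )
    # Same number of bytes on disk -- but very different IO patterns.
    assert total_small == os.path.getsize(os.path.join(d, "large.bin"))
```

Both paths write the same number of bytes; the difference in wall-clock time on a real spinning disk comes entirely from the per-file metadata and seek overhead of the small-file path.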

②Bandwidth: how many bytes can be transmitted per unit time, measured in MB/s or GB/s.
• High transfer bandwidth has the advantage when moving large blocks of contiguous data.
• High IOPS has the advantage when moving small, non-contiguous pieces of data.

### RAM:

①Addressing: nanosecond (ns) level. 1 second = 1,000 milliseconds = 1,000,000 microseconds = 1,000,000,000 nanoseconds. In terms of addressing, disks are roughly 100,000 times slower than memory.

Disk I/O carries the cost of mechanical movement, so its time consumption is huge. Memory is built from transistors (so is the CPU); a transistor's defining characteristic is that the on/off state of a switch represents 1 and 0, and combinations of gate circuits can represent numbers and implement complex logic. Memory is mainly used to hold data temporarily, while the CPU handles the logic. Because a transistor must be powered to hold its state, with the amount of charge (potential level) after charging and discharging corresponding to the binary values 0 and 1, data survives only while the power is on. Once memory loses power, its transistors fall into an unknown state and the data is useless, whereas the magnetic material on a disk keeps its state after power-off.

However, some non-volatile storage media are now emerging, on which data is not lost even when the power is cut.

The following are some of the current mainstream storage media I have summarized:

②Bandwidth: very large.

I/O Buffer:

A disk's tracks are divided into sectors of 512 bytes each. If disk capacity grows while sectors stay this small, the cost of indexing them (think of it as numbering every sector) inevitably rises. And no matter how little data is requested, the operating system reads from the disk in 4KB units.

As a file grows larger, access becomes slower, and disk IO becomes the bottleneck.
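The 4 KB unit shows up directly in how code typically reads files. A small sketch of page-sized reads (the helper name and the 10 KB example are invented for illustration):

```python
import io

PAGE = 4096  # the OS moves data to and from disk in 4 KB pages

def read_in_pages(f):
    """Read a file-like object one 4 KB page at a time; return the page count."""
    pages = 0
    while f.read(PAGE):
        pages += 1
    return pages

# 10 KB of data occupies three 4 KB pages (the last one only partially filled).
data = io.BytesIO(b"x" * (10 * 1024))
print(read_in_pages(data))  # 3
```

Even a 1-byte read costs a full 4 KB page of disk traffic, which is why many tiny scattered reads are so much more expensive than one contiguous read of the same total size.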

### Database:

Databases emerged to ease the disk IO bottleneck. But on the whole, the total amount of data that must come off the disk is unchanged, which is why the concept of an index exists: without an index, merely creating databases and tables does not help much; queries are still very slow.

The smallest storage unit in a database is the page, sized in 4KB units.

To build a table in a relational database, the schema and data types (byte widths) must be given first, and row-oriented storage is commonly used. Because byte widths are fixed up front, space is reserved for each column, so an insert or update can overwrite data in place without moving other data.

Indexes are also data, stored on disk just like table data. A B+ tree is kept in memory to hold index ranges and offsets, while the index and data themselves live on disk, since memory is limited and cannot hold it all. The index speeds up traversal and lookup, reducing disk IO and seek time, but the data is still fetched from disk.
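The idea that an in-memory ordered index tells you exactly where on disk to look can be sketched with a sorted list of keys and their offsets, a stand-in for a B+ tree (the keys and offsets here are invented for illustration):

```python
import bisect

# In-memory "index": sorted keys paired with the disk offset of each row.
# A real B+ tree holds ranges and offsets; a sorted list shows the same idea.
keys = [10, 23, 45, 77, 90]
offsets = [0, 4096, 8192, 12288, 16384]

def lookup(key):
    """Return the row's disk offset, or None if the key is absent."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return offsets[i]    # one targeted disk read instead of a full scan
    return None

print(lookup(45))   # 8192
print(lookup(46))   # None
```

The search over the index happens entirely in memory; the disk is touched only once, at the offset the index returns, instead of scanning every page of the table.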

### Memory Database

Although memory loses its contents on power-off, if it is used only to answer queries and temporarily hold small pieces of data, it can greatly reduce disk IO and increase concurrency. Hence in-memory databases such as Redis appeared.
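In miniature, this is the pattern Redis serves: answer hot queries from memory and fall back to the slow store only on a miss. A minimal sketch (the `slow_lookup` function stands in for a disk-backed database and is purely illustrative):

```python
# Minimal read-through cache: serve repeated reads from memory,
# counting how often the "disk" is actually touched.
disk_reads = 0

def slow_lookup(key):
    """Stand-in for a disk-backed database query."""
    global disk_reads
    disk_reads += 1
    return f"value-of-{key}"

cache = {}

def get(key):
    if key not in cache:             # miss: pay the disk IO once
        cache[key] = slow_lookup(key)
    return cache[key]                # hit: pure memory access

for _ in range(1000):
    get("user:42")
print(disk_reads)  # 1 -- 999 of the 1000 reads never touched "disk"
```

One disk read serves a thousand requests; the rest are memory hits, which is exactly why putting hot query data in memory raises the concurrency a system can sustain.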