Currently, a double data rate synchronous dynamic random access memory (DDR SDRAM) is commonly used as the memory of a mobile terminal. Generally, the DDR SDRAM is referred to as DDR for short; the clock rate of the DDR is variable, and changes with the working frequency of a central processing unit.
FIG. 1 is a schematic structural diagram of a combination of a DDR memory and a System-On-a-Chip (SoC) that includes a DDR interface. The SoC system includes a central processing unit (CPU), a service processor, a phase-locked loop (PLL), a DDR controller, and the DDR interface. The DDR controller, the DDR interface (also referred to as a DDR PHY), and the DDR memory form a DDR system. A data channel (DATA channel shown in FIG. 1), a clock channel (CLK channel shown in FIG. 1), a command channel (CMD channel shown in FIG. 1), and the like exist between the DDR interface and the DDR memory.

The CPU and at least one service processor are both connected to the DDR controller. The CPU is mainly responsible for running an operating system and scheduling service software, and needs to obtain instructions and data for processing from the DDR memory. There may be at least one service processor. For example, a service processor 1 may be a video encoder or decoder, which needs to read data from and write data to the DDR memory, and generally includes some internal cache units to cache received data and data to be sent. For another example, a service processor 2 may be a network unit, an audio unit, or the like, which likewise needs to read data from and write data to the DDR memory, and also generally includes some internal cache units to cache received data and data to be sent.

The DDR controller accepts DDR read/write requests from the CPU or the service processor, is responsible for scheduling the priorities of the read/write requests, and returns, to the CPU or the service processor, data that is obtained from the DDR memory in response to a read request. The DDR interface sends the read/write commands of the DDR controller to a DDR storage unit according to the required physical timing, and is responsible for returning data from the DDR memory to the DDR controller according to that timing.
Generally, caches with particular capacities are provided in the CPU and the service processor, and are configured to store data read from or written to the DDR interface. The greater the demand for reading/writing data from the DDR memory, and the longer the read/write delay, the larger these caches need to be. If the DDR read/write delay increases, performance of the CPU and the service processor becomes worse. Therefore, to reduce system power consumption and improve system performance, the working frequency of the DDR interface is generally adjusted dynamically according to the bandwidth requirement of the system; this is referred to as DDR dynamic frequency adjustment (DFA). For example, when the CPU in the SoC system requires a large bandwidth for accessing the DDR memory, the DDR interface is set to a relatively high clock rate; and when the CPU in the SoC system requires only a small bandwidth for accessing the DDR memory, the DDR interface is set to a relatively low clock rate.
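A DFA policy of this kind can be sketched as a simple rate-selection function. The following C fragment is a minimal illustration only; the supported rates, the 60% efficiency factor, and the function name are assumptions for this sketch, not part of the described system, and a real policy would read hardware bandwidth counters.

```c
/* Sketch of a DFA rate-selection policy: pick the lowest supported DDR
 * data rate whose usable bandwidth still covers the measured demand,
 * so power consumption is minimized. All names and values here are
 * hypothetical. */
#include <stdint.h>

/* Hypothetical supported DDR data rates in Mbps, lowest to highest. */
static const uint32_t ddr_rates_mbps[] = { 800, 1200, 1600 };
#define NUM_RATES (sizeof(ddr_rates_mbps) / sizeof(ddr_rates_mbps[0]))

uint32_t dfa_select_rate(uint32_t demand_mbps)
{
    for (unsigned i = 0; i < NUM_RATES; i++) {
        /* Assume only ~60% of the raw rate is usable bandwidth. */
        uint32_t usable_mbps = ddr_rates_mbps[i] * 6 / 10;
        if (usable_mbps >= demand_mbps)
            return ddr_rates_mbps[i];
    }
    return ddr_rates_mbps[NUM_RATES - 1]; /* saturate at the highest rate */
}
```

For instance, a measured demand of 500 Mbps would select the 1200 Mbps rate here, because 60% of 800 Mbps (480 Mbps) is insufficient.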
As shown in FIG. 2, the adjustment of the working frequency of the DDR interface generally includes the following phases.
Phase 1: The DDR interface works normally at a frequency A.
After a request from the SoC system to switch the frequency of the DDR interface is received, the frequency adjustment process of the DDR interface begins.
Phase 2: DDR frequency adjustment phase. In this phase, the DDR memory cannot be accessed. Generally, the process includes the following steps.
Phase 2.1. The DDR controller uses back pressure to suspend data reads/writes of the service processor, and empties the data cached in the DDR controller.
Phase 2.2. The DDR memory starts self-refresh.
Phase 2.3. The DDR memory switches the clock rate from the frequency A to a frequency B.
Phase 2.4. Wait until working clocks of the DDR interface and the DDR memory become stable again at the frequency B.
Phase 3: The DDR memory exits self-refresh, the back pressure is released, and the DDR interface resumes working at the frequency B.
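The phases above can be sketched as a driver-style sequence. The C fragment below is a simulation only: each helper name is hypothetical, and each stub merely records the step it stands for, where a real driver would program DDR controller/PHY registers and poll hardware status.

```c
/* Simulated sketch of the DDR frequency-switch sequence (phases 2-3).
 * Steps are recorded in order rather than touching real hardware;
 * all step names are hypothetical. */
#define MAX_STEPS 8
static const char *step_log[MAX_STEPS];
static int step_count;

static void step(const char *name) { step_log[step_count++] = name; }

void ddr_switch_frequency(void)
{
    step("backpressure_and_drain"); /* 2.1: suspend requests, empty controller caches */
    step("enter_self_refresh");     /* 2.2: DDR retains contents without external clocking */
    step("reprogram_clock");        /* 2.3: switch the clock from frequency A to B */
    step("wait_clock_stable");      /* 2.4: wait for the clocks to stabilize at B */
    step("exit_and_resume");        /* 3:   exit self-refresh, release back pressure */
}
```

The key ordering constraint the sketch encodes is that the memory must already be in self-refresh before the clock is reprogrammed, and the clock must be stable before self-refresh is exited.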
In the foregoing solution, during the frequency adjustment of the DDR interface, that is, during the phase 2, operations of accessing the DDR memory need to be suspended, and read/write access of the service processor and the CPU is suspended. In this case, the capacity of the cache in the service processor needs to be increased to ensure continuity of a service. Using a 64-bit DDR3 interface as an example, when the system needs to reduce power consumption by stepping down from 1600 megabits per second (Mbps) to 1200 Mbps, if the foregoing method is used, access suspension of the entire DDR memory lasts at least 50 microseconds (μs) (actually, DDR parameters differ between systems; 50 μs is a conservative estimate, and in many systems the time is longer than 50 μs), and during this period, it is necessary to consider increasing the capacities of the caches in all online service processors (such as network, video input, and display), so as to store data that needs to be received or sent in real time over the DDR interface. Assuming that the efficiency of the DDR is 60%, during access suspension of the DDR, the total amount of data to be cached in the service modules is 1600*64*50*0.6/1000000 ≈ 3 megabits (Mbits), and this places a relatively high requirement on the caches in the service processors. In addition, because the CPU cannot access the DDR memory during the frequency adjustment of the DDR interface, processing performance of the CPU is affected.
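The cache-size estimate can be checked with a short calculation. The sketch below assumes the pre-adjustment rate of 1600 Mbps across the full 64-bit bus, a 50 μs suspension, and 60% efficiency, the figures from the example; the function name is an illustration, not part of the described system.

```c
/* Worked arithmetic for the cache-size estimate: the amount of data
 * that accumulates while the DDR interface is suspended. The Mbps
 * scale factor (1e6) and the microsecond scale factor (1e-6) cancel,
 * so the result is directly in bits. */
#include <stdint.h>

uint64_t suspended_data_bits(uint64_t rate_mbps, uint64_t bus_bits,
                             uint64_t suspend_us, uint64_t eff_percent)
{
    return rate_mbps * bus_bits * suspend_us * eff_percent / 100;
}
```

With the example figures, suspended_data_bits(1600, 64, 50, 60) evaluates to 3,072,000 bits, i.e. approximately 3 Mbits.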