At present, as database systems are applied more and more widely, data transfer between a variety of databases is becoming increasingly common and important. Moreover, a single database application system typically uses a plurality of databases based on a variety of platforms. Therefore, there is an urgent need for a technology for transferring data smoothly between homogeneous/heterogeneous databases. Commercial databases provide some replication capabilities, but these work only in ideal situations, are subject to many limitations, and are thus greatly restricted. Independent data replication software adopts a point-to-point replication architecture, and therefore cannot flexibly implement data transfer among a plurality of homogeneous/heterogeneous databases with a complicated topology. Furthermore, there is also data transfer software developed specifically for a certain application system, but such software lacks generality.
In summary, existing technologies for transferring data between databases have the following deficiencies: the cost is high; the influence on the databases or data tables is great, and the degree of coupling is high (that is, data transfer software suitable for one type of database or data table, such as type A, is not suitable for another type, such as type B); moreover, in some cases there is even a need to create triggers on the source database or to depend on certain functionality of particular database products, so these solutions lack generality and expansibility; these solutions have a single function and thus cannot support both a quasi real-time transfer mode (that is, minute-level transfer) and a real-time transfer mode (that is, second-level transfer) at the same time; and it is difficult or even impossible to implement data transfer between heterogeneous databases, that is, to implement the filtering and transformation of data, so the database system has poor disaster tolerance and recovery capabilities.
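The filtering and transformation needed for transfer between heterogeneous databases can be illustrated by a minimal sketch in Python (all table, column, and function names here are hypothetical, chosen only for illustration): rows read from a source schema are filtered by a predicate and mapped into the target schema before being forwarded.

```python
def transfer(rows, predicate, transform):
    """Filter source rows and transform each surviving row
    into the target schema before forwarding."""
    return [transform(r) for r in rows if predicate(r)]

# Hypothetical source rows: keep only active ("A") accounts,
# rename columns, and convert units for the target schema.
source_rows = [
    {"acct_no": 1, "status": "A", "amount_cents": 1250},
    {"acct_no": 2, "status": "C", "amount_cents": 800},
]
target_rows = transfer(
    source_rows,
    predicate=lambda r: r["status"] == "A",              # filtering
    transform=lambda r: {"id": r["acct_no"],             # transformation:
                         "amount": r["amount_cents"] / 100},  # new names/units
)
print(target_rows)  # → [{'id': 1, 'amount': 12.5}]
```

Because the predicate and the transform are supplied as parameters, the same transfer routine can serve different source/target database pairs, which is precisely the generality the solutions criticized above lack.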
Furthermore, with the rapid advance of database technology, all large database systems support concurrent operations in order to meet ever-growing application requirements. Meanwhile, in order to further improve the performance of insert operations, many database products have begun to support batch inserting. Existing methods for rapidly inserting a large amount of dynamic data into a specified target database can be implemented by one of the following: (1) saving the dynamic data to be inserted into a database file, and then importing the data in batches by means of database backup or a loading tool provided by the database; (2) inserting the data into the database over a plurality of connections by means of a number of concurrent processes; (3) inserting the data in batches, that is, submitting multiple records at a time. However, each of the above methods has its own drawbacks: in method (1), additional disk space is needed, and the input/output operations performed when saving the file are time consuming; in method (2), although a number of processes execute concurrently, each process submits only one record at a time, so the efficiency is low; in method (3), if a large amount of data is submitted at the same time, a large log space is required, and if the submission fails, all of the submitted data may be rolled back together. Therefore, there is also an urgent commercial need for a batch inserting technology that can maximize the performance of the databases.
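Method (3) and a common mitigation of its drawback can be sketched as follows, using Python's standard `sqlite3` module (the table name, column names, and batch size are illustrative assumptions, not part of the original description): rows are submitted a batch at a time with `executemany`, and each batch is committed as its own transaction, so the log space per commit is bounded and a failed submission rolls back only the current batch rather than the entire load.

```python
import sqlite3

def batch_insert(conn, rows, batch_size=3):
    """Insert rows in fixed-size batches; each batch is one transaction,
    so a failed batch rolls back only its own records."""
    inserted = 0
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        try:
            conn.executemany("INSERT INTO target(id, val) VALUES (?, ?)", batch)
            conn.commit()
            inserted += len(batch)
        except sqlite3.Error:
            conn.rollback()  # only this batch is lost, not the whole load
    return inserted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target(id INTEGER PRIMARY KEY, val TEXT)")
rows = [(i, "v%d" % i) for i in range(10)]
print(batch_insert(conn, rows))  # → 10
```

Choosing the batch size trades off throughput against log space and failure scope: larger batches amortize per-commit overhead, while smaller batches limit how much work is lost on a failed submission.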