1. Field of the Invention
This invention relates generally to telephone switching systems and methods and more particularly to telephonic switching systems and methods in which access of different special processes to a shared data memory is controlled to prevent access to shared data by one special process when the data is in the process of being altered by another special process.
2. Description of the Related Art Including Information Disclosed Under 37 CFR §§ 1.97-1.99
In modern computer controlled communication switching systems there are a plurality of different special functions which must be performed. These special functions can each be performed by separate devices or by separate processes of a single central processing unit. In either event, often a plurality of special processes require access to a single data memory which is shared on a time slicing basis among all the special processes. In performing their special functions, some of the special processes alter the data which is shared with other special processes. In known telephonic systems, access to the shared data memory is automatically and periodically shifted from one special process to another without regard to whether the shared data is being altered and is incomplete at the time of the periodic shift in access. This disadvantageously can result in erroneous reading or storage of data and consequent malfunctions of the telephonic system.
This periodic shifting of access is commonly known as "time slicing". In a central processor based system, time slicing refers to the allocation of central processing unit time among a number of processes requesting CPU time. When a time slice occurs, access to the data stored in a data memory through the CPU is shifted from the special process currently accessing the data to another special process which has requested access to the data. The operating system of the central processing unit allocates read lines through the central processing unit among the processes requesting access to data. The processing of an event can require the updating of multiple pieces of data, and if a time slice occurs in the middle of this updating interval, another process could read the data erroneously after the time slice and cause a software failure.
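The failure mode described above can be sketched in a few lines. The following is a hypothetical illustration, not code from the patent: a call record whose two fields must be updated together is interrupted by a simulated time slice between the two writes, so a reader observes an inconsistent, half-updated record. The record fields and the generator-based "slice point" are my own illustrative choices.

```python
# Shared data whose invariant is that both fields always match.
record = {"trunk": 1, "line": 1}

def update_steps(new_value):
    """Yield after each write so a simulated 'time slice' can occur mid-update."""
    record["trunk"] = new_value
    yield                      # <-- a time slice here leaves the record torn
    record["line"] = new_value
    yield

updater = update_steps(2)
next(updater)                  # updater writes one field, then is time sliced

# Another process now reads the shared data and sees a torn update.
torn = record["trunk"] != record["line"]
print("reader sees inconsistent record:", torn)   # True

next(updater)                  # updater resumes and completes the update
print("consistent after completion:", record["trunk"] == record["line"])  # True
```

The generator's `yield` stands in for the scheduler's preemption point; in a real switch the interruption is involuntary, which is precisely why the data can be read erroneously.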
The known solution to this problem in a CPU controlled telephonic switch is to administer data access flags which determine when a piece of data can be accessed. Disadvantageously, with this known approach to time slicing in a multiuser or multiprocess system, numerous extra data bits are required in the data itself to serve as flags. When the operating system indicates that the time allocated to a process has lapsed and access to the CPU has shifted upon a time slice, these extra flag bits are needed to direct the special process back to its last completed step when it is allowed to resume processing. These extra data bits, or semaphores, associated with the accessed data are also used to inform other special processes, by setting a flag, that the data associated with these bits is possibly in the act of being changed and that the other processes should not use the data, as it is not necessarily accurate at that moment.
This known flag technique disadvantageously requires the expenditure of a substantial amount of real-time overhead in administering semaphores for every access of data to prevent multiple accesses resulting from time slicing, leading to undesirable real-time utilization. Known multiprocess systems require the special processes to account for the possibility of being time sliced at any point in their execution through the use of software protocols between the processes which access shared data. This makes the interaction between application processes which share data more complex.
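The per-access overhead of the known flag technique can be made concrete with a minimal sketch. The class and method names below are my own (the patent names no API); the point is that every access, whether or not contention exists, must administer the extra semaphore bits stored alongside the data, and a counter here tallies that administrative work.

```python
class FlaggedData:
    """Shared data carrying extra 'semaphore' bits, per the known technique."""
    def __init__(self, value):
        self.value = value
        self.locked = False        # extra flag bit stored with the data itself
        self.flag_operations = 0   # tally of per-access flag administration

    def try_read(self):
        self.flag_operations += 1  # every read must first check the flag
        if self.locked:
            return None            # data may be mid-change: caller must back off
        return self.value

    def begin_update(self):
        self.flag_operations += 1  # set the flag before altering the data
        self.locked = True

    def end_update(self, value):
        self.value = value
        self.flag_operations += 1  # clear the flag after the alteration
        self.locked = False

shared = FlaggedData(10)
shared.begin_update()              # process A starts altering the data
blocked = shared.try_read()        # process B is told not to use the data
shared.end_update(20)
after = shared.try_read()          # process B now reads the completed value

print(blocked, after, shared.flag_operations)  # None 20 4
```

Four flag operations for one update and two reads: this bookkeeping on every access is the real-time overhead the passage describes.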
Referring to FIG. 1, a block diagram of the known method for multiple processes to access data from the same data area is shown.
In step 5, when a special process attempts to access data, the special process first asks whether the data area of the data memory has been locked because the data is being accessed by another process.
In step 6, if the data memory area is not locked by another process, the special process begins execution by first setting a data access flag, thereby locking the data area itself.
In step 7, the special process then accesses the data from the data area of the data memory. Finally, after the process has completed all of its requested data accesses, the flag associated with the data is removed and the data area is unlocked in step 8.

If, on the other hand, the data is being accessed by another special process, a delay results in step 9: the special process reads a flag associated with the data indicating that the data is locked by another process, and no further access to this data may occur until the other process completes its access and unlocks the data area. Therefore, if a time slice occurs while a special process is still accessing data, and thereby still locking the data area, the next process which is allocated a read line through the CPU as a result of the time slice will not be able to access the data area, because the originally accessing process which was time sliced has kept the data area locked.
As a result, all successive special processes which desire to access the locked data have their execution delayed, because they cannot obtain the data required to run until the system eventually returns to the original accessing process, that process completes its access from the data area, and it finally unlocks the data area by removing its flag. Thus, in addition to requiring high real-time overhead, the known method of FIG. 1 leads to long response times for real-time events, because when a process is time sliced while accessing data, no other process can access that data until the time sliced process returns to execution and completes its access. Since there is no integration of time slicing with the prevention of access to shared data, the time slicing of processes actually adds to delays and long response times.
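The delay described above can be sketched with a simulated round-robin scheduler. This is a hypothetical illustration of FIG. 1's drawback, not the patent's own code: process A is time sliced while holding the data-area lock, so every slice granted to process B is wasted waiting, and B only proceeds once the scheduler returns to A and A unlocks the area. The generator-based scheduler and step counts are my own assumptions.

```python
lock_held = False       # the data-area lock of FIG. 1 (steps 6 and 8)
wasted_slices = 0       # time slices spent waiting on the locked area

def process_a(steps):
    """Locks the data area, then needs several time slices to finish accessing."""
    global lock_held
    lock_held = True              # step 6: lock the data area
    for _ in range(steps):
        yield "accessing"         # step 7: access spans multiple time slices
    lock_held = False             # step 8: unlock the data area
    yield "done"

def process_b():
    """Must wait (step 9) for every slice it receives while the area is locked."""
    global wasted_slices
    while lock_held:
        wasted_slices += 1
        yield "waiting"
    yield "done"

# Round-robin "time slicing" between the two processes.
a, b = process_a(steps=3), process_b()
schedule = [next(proc) for proc in (a, b, a, b, a, b, a, b)]

print(schedule)
print("slices wasted by B:", wasted_slices)
```

Every slice B receives while A holds the lock is pure delay, which is why time slicing, far from helping, lengthens response times under the known method.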