The JVT standard (also known as H.264 and MPEG-4 AVC) is the first video compression standard to adopt a Weighted Prediction (“WP”) feature. In video compression standards prior to JVT, such as MPEG-1, MPEG-2 and MPEG-4, when a single reference picture prediction was used for predictive (“P”) pictures or slices, the prediction was not scaled. When bi-directional prediction was used for bi-predictive (“B”) pictures or slices, predictions were formed from two different pictures, and then the two predictions were averaged together, using equal weighting factors of (½, ½), to form a single averaged prediction. In JVT, multiple reference pictures may be used for inter-prediction, with a reference picture index coded to indicate which of the multiple reference pictures is used.
In P pictures or slices, only single directional prediction is used, and the allowable reference pictures are managed in list 0. In B pictures or slices, two lists of reference pictures are managed, list 0 and list 1. In B pictures or slices, single directional prediction using either list 0 or list 1 is allowed, or bi-prediction using both list 0 and list 1 is allowed. When bi-prediction is used, the list 0 and the list 1 predictors are averaged together to form a final predictor. Building on this reference picture indexing, the JVT WP tool allows arbitrary multiplicative weighting factors and additive offsets to be applied to reference picture predictions in both P and B pictures.
Weighted prediction is supported in the Main and Extended profiles of the JVT standard. Use of weighted prediction is indicated in the picture parameter set for P, SP (switching P) and B slices. There are two WP modes—explicit mode, which is supported in P, SP, and B slices, and implicit mode, which is supported in B slices only.
Explicit Mode
In explicit mode, the WP parameters are coded in the slice header. A multiplicative weighting factor and additive offset for each color component may be coded for each of the allowable reference pictures in list 0 for P slices, and in list 0 and list 1 for B slices. However, different macroblocks in the same picture can use different weighting factors even when predicted from the same reference picture store. This can be accomplished by using reference picture reordering and memory management control operation (“MMCO”) commands to associate more than one reference picture index with a particular reference picture store.
The same weighting parameters that are used for single prediction are used in combination for bi-prediction. The final inter prediction is formed for the pixels of each macroblock or macroblock partition, based on the prediction type used. For single directional prediction from list 0,

SampleP = Clip1(((SampleP0 · W0 + 2^(LWD−1)) >> LWD) + O0)  (1)

for single directional prediction from list 1,

SampleP = Clip1(((SampleP1 · W1 + 2^(LWD−1)) >> LWD) + O1)  (2)

and for bi-prediction,

SampleP = Clip1(((SampleP0 · W0 + SampleP1 · W1 + 2^LWD) >> (LWD + 1)) + ((O0 + O1 + 1) >> 1))  (3)
where Clip1 ( ) is an operator that clips to the range [0, 255], W0 and O0 are the list 0 reference picture weighting factor and offset, and W1 and O1 are the list 1 reference picture weighting factor and offset, and LWD is the log weight denominator rounding factor. SampleP0 and SampleP1 are the list 0 and list 1 initial predictors, and SampleP is the weighted predictor.
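Equations (1) through (3) can be sketched in code as follows. This is an illustrative implementation, not the reference software; variable names follow the text, and 8-bit samples are assumed.

```python
# Sketch of the JVT explicit weighted prediction formulas (1)-(3).
# Assumes 8-bit samples; LWD is the log weight denominator.

def clip1(x):
    """Clip to the 8-bit sample range [0, 255]."""
    return max(0, min(255, x))

def wp_single(sample, w, o, lwd):
    """Single directional weighted prediction, equations (1)/(2)."""
    return clip1(((sample * w + (1 << (lwd - 1))) >> lwd) + o)

def wp_bi(sample0, sample1, w0, w1, o0, o1, lwd):
    """Bi-predictive weighted prediction, equation (3)."""
    return clip1(((sample0 * w0 + sample1 * w1 + (1 << lwd)) >> (lwd + 1))
                 + ((o0 + o1 + 1) >> 1))
```

With the default parameters (LWD = 5, so the weight denominator is 32, and W = 32, O = 0), `wp_single` reduces to the identity on the initial predictor and `wp_bi` reduces to the rounded average of the two predictors, matching the pre-JVT behavior described above.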
Implicit Mode
In WP implicit mode, weighting factors are not explicitly transmitted in the slice header, but instead are derived based on relative distances between the current picture and the reference pictures. Implicit mode is used only for bi-predictively coded macroblocks and macroblock partitions in B slices, including those using direct mode. The same formula for bi-prediction as given in the preceding explicit mode section for bi-prediction is used, except that the offset values O0 and O1 are equal to zero, and the weighting factors W0 and W1 are derived using the formulas below.

X = (16384 + (TDD >> 1)) / TDD
Z = clip3(−1024, 1023, (TDB · X + 32) >> 6)
W1 = Z >> 2, W0 = 64 − W1  (4)
This is a division-free, 16-bit-safe implementation of

W1 = (64 · TDB) / TDD  (5)
where TDD is the temporal difference between the list 1 reference picture and the list 0 reference picture, clipped to the range [−128, 127], and TDB is the temporal difference between the current picture and the list 0 reference picture, clipped to the range [−128, 127].
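The derivation in equation (4) can be sketched as below. This is an illustrative implementation, assuming TDD and TDB have already been clipped to [−128, 127] and that TDD is nonzero.

```python
# Sketch of the implicit-mode weight derivation, equation (4).
# tdd: distance from list 0 to list 1 reference; tdb: distance from
# list 0 reference to current picture (both assumed pre-clipped).

def clip3(lo, hi, x):
    """Clip x to the range [lo, hi]."""
    return max(lo, min(hi, x))

def implicit_weights(tdd, tdb):
    """Derive (w0, w1); offsets are zero in implicit mode."""
    x = (16384 + (tdd >> 1)) // tdd           # approx. 16384 / TDD, rounded
    z = clip3(-1024, 1023, (tdb * x + 32) >> 6)
    w1 = z >> 2                               # approx. (64 * TDB) / TDD
    w0 = 64 - w1
    return w0, w1
```

For a current picture halfway between its two references (TDD = 2, TDB = 1), the derivation yields equal weights (32, 32), i.e. ordinary averaging; a picture one quarter of the way along (TDD = 4, TDB = 1) yields (48, 16), weighting the nearer list 0 reference more heavily.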
Approaches for application of weight parameters are described by equations 6 through 8.
For simplicity, we write weighted prediction for list 0 prediction as

SampleP = SampleP0 · w0 + o0,  (6)

for list 1 prediction as

SampleP = SampleP1 · w1 + o1,  (7)

and for bi-prediction

SampleP = (SampleP0 · w0 + SampleP1 · w1 + o0 + o1) / 2,  (8)

where wi is a weighting factor and oi is a weighting offset.
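The simplified forms (6) through (8) drop the rounding and clipping of equations (1) through (3); a minimal sketch, for illustration only:

```python
# Sketch of the simplified weighted prediction forms (6)-(8).
# No rounding or clipping; weights and offsets are plain numbers.

def wp_list0(s0, w0, o0):
    return s0 * w0 + o0                         # equation (6)

def wp_list1(s1, w1, o1):
    return s1 * w1 + o1                         # equation (7)

def wp_bi_simple(s0, s1, w0, w1, o0, o1):
    return (s0 * w0 + s1 * w1 + o0 + o1) / 2    # equation (8)
```

With unit weights and zero offsets, equation (8) reduces to the simple average of the two predictors, e.g. predictors 100 and 120 yield 110.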
Accordingly, what is needed is an apparatus and new class of methods for determining weighted prediction parameters.