The future of intelligent information retrieval appears to rest on machine learning techniques such as the Artificial Neural Network (ANN, or simply NN). An ANN's ability to express non-linear relationships in data yields better classification, making it well suited to information retrieval applications such as pattern recognition, prediction, and classification.
The ANN technique attempts to emulate the architecture and information representation schemes of the human brain, and its architecture depends on the goal to be achieved. Learning in an ANN can be either supervised or unsupervised. In supervised learning (SL), the correct result for each input is known in advance, like a teacher instructing a pupil. We present an input, compare the network's output against the expected output, and then adjust the connection strengths (weights) in the input-to-output mapping until the correct output is produced. This is applied to all inputs until the network becomes as error-free as possible. The SL method therefore requires an output class declaration for each of the inputs.
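The present-compare-adjust cycle described above can be sketched with a single-neuron network trained on the logical AND function. This is a minimal illustration, not a method from the text; the learning rate and epoch limit are assumed values.

```python
# Supervised learning sketch: a single-neuron network learns the
# logical AND function. Each input pattern carries a declared output
# class, as the SL method requires.

def step(s):
    """Threshold activation: output 1 if the weighted sum is positive."""
    return 1 if s > 0 else 0

# Training set: (input pattern, declared output class).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # learning rate (illustrative choice)

for epoch in range(50):  # epoch limit (illustrative choice)
    errors = 0
    for (x1, x2), target in data:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output
        if error != 0:
            # Adjust connection strengths toward the correct output.
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error
            errors += 1
    if errors == 0:  # network is as error-free as possible
        break

predictions = [step(weights[0] * x1 + weights[1] * x2 + bias)
               for (x1, x2), _ in data]
print(predictions)  # prints [0, 0, 0, 1]
```

The loop stops once a full pass over the inputs produces no errors, which is the stabilization criterion the paragraph alludes to.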
Present SL methods can handle either offline (static) or online (dynamic/time-series) data, but not both. Current SL methods also take a long time to learn and require a significantly greater number of iterations to stabilize. For static data, present SL methods use the Conjugate Generalized Delta Rule (Conjugate GDR) and are not guaranteed to find a global optimum. The GDR variant based on stochastic approximation, used for time-series data, is complex and unreliable because the underlying GDR can handle only offline data.
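For context, the Generalized Delta Rule referred to above is a gradient-descent weight update. The sketch below shows the standard GDR update for a single sigmoid unit on a static batch; the learning rate, iteration count, and toy data are assumptions for illustration, not values from the text.

```python
import math

def sigmoid(s):
    """Smooth activation whose derivative o*(1 - o) drives the GDR update."""
    return 1.0 / (1.0 + math.exp(-s))

# Static (offline) toy batch: (input pattern, target output).
data = [((0.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]

w = [0.0, 0.0]
b = 0.0
eta = 0.5  # learning rate (assumed)

# Many iterations are needed to stabilize, as the text notes.
for _ in range(1000):
    for (x1, x2), t in data:
        o = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # GDR delta term: error scaled by the sigmoid derivative.
        delta = (t - o) * o * (1.0 - o)
        w[0] += eta * delta * x1
        w[1] += eta * delta * x2
        b += eta * delta

# Outputs approach the targets on this toy problem, but gradient
# descent offers no general guarantee of reaching a global optimum.
low = sigmoid(b)                 # response to (0, 0)
high = sigmoid(w[0] + w[1] + b)  # response to (1, 1)
```

Note that the update is derived for a fixed training batch, which is why a GDR-based scheme handles offline data naturally but must be adapted (e.g., via stochastic approximation) for time-series input.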
Therefore, there is a need in the art for an SL technique that can handle both static and time-series data. Further, there is a need in the art for an SL technique that can reduce the dimensionality of the received data to improve the machine learning rate and overall system performance.