The automatic evaluation and assessment of skin has been an area of intense investigation for several decades [1-9]. Numerous algorithms and methods have been proposed to detect problem areas on the skin and to measure and monitor how these areas change over time [10]. In recent years, a combination of factors, including the wide availability of smartphones with significant processing capability and high-definition cameras, has increased the level of interest in automatic skin assessment [9].
Prior skin evaluation methods can generally be divided into three groups. The first group applies image filters or transforms to highlight specific concerns, which can then be closely investigated on the filtered image [2,3,4,6]. These methods are fairly efficient, simple, and usually yield good results. A second group of methods enables users to provide feedback on a particular area, which is then closely investigated (through region segmentation, color analysis, or other methods) [12,13]. This provides a more focused and accurate evaluation, but requires user intervention, which may not always be possible or ideal. A third group of methods relies on machine learning to learn the characteristics of different skin conditions, which are then employed to classify different parts of the skin [1,5,7,8,11]. These methods offer significant potential for automatic skin diagnosis, but require extensive labelled skin images, which are usually not available [7,8,11].
In this work, we focus on the first approach, namely applying a hierarchical filter to the skin image, from which we extract quantitative coefficients related to different sets of general skin conditions, including texture/evenness, wrinkles, and spots. Our goal at this stage of our research is to obtain a high-level understanding of the skin rather than focus on a particular skin anomaly. This work is indirectly related (by similarity of subject matter) to our prior work on video filters for skin evaluation [9], although the actual problem and methods presented in this paper are entirely different from those in [9].
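The paper does not spell out the hierarchical filter at this point, but the idea of decomposing a skin image into levels and summarizing each level with a quantitative coefficient can be sketched with a simple Laplacian-pyramid-style decomposition. The function names, the NumPy-only blur/downsample scheme, and the loose mapping of fine bands to texture/evenness and coarser bands to wrinkles and spots are our own illustrative assumptions, not the authors' actual filter.

```python
import numpy as np

def blur_downsample(img):
    # Cheap 2x2 box blur combined with factor-2 downsampling
    # (stand-in for the Gaussian step of a real pyramid).
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img):
    # Nearest-neighbour upsampling back to twice the size.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def skin_coefficients(gray, levels=3):
    """Decompose a grayscale skin patch into band-pass levels and
    return the RMS energy of each band as a coarse descriptor.
    Hypothetically, fine bands relate to texture/evenness and
    coarser bands to wrinkles and spots."""
    coeffs = []
    current = gray.astype(np.float64)
    for _ in range(levels):
        # Crop to even dimensions so the down/up-sampled shapes match.
        current = current[:current.shape[0] // 2 * 2,
                          :current.shape[1] // 2 * 2]
        down = blur_downsample(current)
        detail = current - upsample(down)   # band-pass residual
        coeffs.append(float(np.sqrt(np.mean(detail ** 2))))
        current = down                      # recurse on the coarse level
    return coeffs
```

A perfectly uniform patch yields near-zero coefficients at every level, while texture, wrinkles, or spots raise the energy in the corresponding bands, which is the kind of high-level summary the quantitative coefficients aim at.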