Edited Conference Papers

Facial age estimation is an essential feature in many applications that must serve users content appropriate to their age. However, building an age estimator that is both inclusive and high-performing is challenging because many different factors influence facial appearance. This article leverages Deep Sets for Symmetric Elements (DSS) to propose an approach that extracts a reliable set of rich feature vectors for age estimation. It combines a DSS feature extractor, a ternary classifier, and a race determiner. More precisely, the extractor consists of a siamese-like layer that applies a shared convolutional neural network to the input images and an aggregation module that sums the images and adds the result to the siamese layer's output. To estimate age, the ternary classifier takes the feature vectors and classifies each comparison into one of three outcomes: younger than, similar to, or older than the reference. The comparison is performed on matched pairs of input and reference images belonging to the same race, and it yields a score indicating how similar the two images are: the higher the score, the closer the similarity. With accuracies of 94.8%, 95.2%, and 90.5% on MORPH II, a race-inclusive dataset, and FG-NET, respectively, we demonstrate that our proposal is effective for facial age estimation, particularly when race is taken into account.
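
A rough sketch of how such a DSS-style extractor could be wired is given below, following the standard DSS layer pattern of a per-element branch plus an aggregation branch. The backbone architecture, the feature dimension, and the class names SmallCNN and DSSExtractor are illustrative assumptions, not the paper's implementation.

```python
# Minimal PyTorch sketch of a DSS-style feature extractor: a shared
# (siamese-like) CNN applied to each image, plus an aggregation branch that
# sums the image set and is added back to the per-image outputs.
# All layer sizes and names here are assumptions for illustration.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Shared per-image convolutional encoder (assumed backbone)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class DSSExtractor(nn.Module):
    """DSS-style layer: elementwise branch plus an aggregation branch whose
    output is added to every element's output."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.elementwise = SmallCNN(out_dim)  # siamese-like shared weights
        self.aggregate = SmallCNN(out_dim)    # applied to the summed set

    def forward(self, image_set):
        # image_set: (set_size, 3, H, W)
        per_image = self.elementwise(image_set)                       # (set_size, out_dim)
        pooled = self.aggregate(image_set.sum(dim=0, keepdim=True))   # (1, out_dim)
        return per_image + pooled                                     # broadcast add
```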

We introduce an age-group estimation scheme called DeepComp, which combines an Early Information-Sharing Feature Aggregation (EISFA) mechanism with a ternary classifier. The EISFA part is a feature extractor that applies a siamese layer to the input images together with an aggregation module that sums all of the images. The ternary classifier then compares the resulting image representations and assigns one of three outcomes: younger, similar, or older. From these comparisons we derive a score indicating the similarity between an input image and the reference images: the higher the score, the closer the similarity. Experiments show that DeepComp achieves 94.9% accuracy on the Adience benchmark dataset while using a minimal number of reference images per age group. Moreover, we demonstrate the generality of our method on the MORPH II dataset with equally strong results. Altogether, we show that our method compares favourably with other facial age-group estimation schemes.
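
The comparison-and-scoring step could look roughly like the sketch below, where an input feature vector is compared against a few reference feature vectors per age group and the group with the highest "similar" probability wins. The names TernaryComparator and score_against_references, and the use of the mean "similar" probability as the group score, are assumptions for illustration rather than the authors' exact formulation.

```python
# Minimal PyTorch sketch of a ternary comparison head and group scoring.
import torch
import torch.nn as nn

class TernaryComparator(nn.Module):
    """Takes an (input, reference) feature pair and predicts one of three
    outcomes: younger than, similar to, or older than the reference."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3),  # logits for [younger, similar, older]
        )

    def forward(self, input_feat, ref_feat):
        return self.head(torch.cat([input_feat, ref_feat], dim=-1))

def score_against_references(comparator, input_feat, ref_feats_by_group):
    """Score an input against a few reference vectors per age group and
    return the group whose references look most 'similar'."""
    scores = {}
    for group, ref_feats in ref_feats_by_group.items():
        logits = comparator(input_feat.expand(ref_feats.size(0), -1), ref_feats)
        # Mean probability of the 'similar' outcome serves as the group score.
        scores[group] = logits.softmax(dim=-1)[:, 1].mean().item()
    return max(scores, key=scores.get)  # predicted age group
```

In this sketch, returning the group with the highest mean "similar" probability mirrors the higher-score-means-closer-similarity rule described in the abstract.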
