Large-Scale Face Image Retrieval Using Semantic Facial Attributes And Deep Transferred Descriptors

Citation

Banaeeyan, Rasoul (2018) Large-Scale Face Image Retrieval Using Semantic Facial Attributes And Deep Transferred Descriptors. PhD thesis, Multimedia University.

Full text not available from this repository.

Abstract

With the ever-increasing popularity of social networks, a colossal number of images containing human faces is being uploaded to the digital world. Analysis of such faces has led to the expansion of fascinating, enabling technologies in spheres such as social science, entertainment, and security. Face retrieval is one such technology: given a query face, it aims to locate the indices of one or more identical faces in a gallery. The performance of face retrieval systems relies heavily on the careful analysis of different facial components (eyes, nose, mouth, etc.) and, at a higher level, facial attributes (gender, race, age, hair color, eye color, etc.), because these semantic attributes help tolerate some degree of geometric distortion, illumination change, expression, and partial occlusion. However, employing facial attribute classifiers alone fails to provide scalability in the presence of thousands of distracting face images, even when these classifiers are highly accurate. In addition, owing to the discriminative power of Convolutional Neural Network (CNN) features, recent works have employed a complete set of deep transferred CNN features (taken from fully-connected layers) of high dimensionality to obtain enhanced performance; yet, due to the curse of dimensionality, such retrieval systems demand high computational power and are very resource-intensive at retrieval time. Therefore, this study aims to exploit the distinctive capability of all facial attribute classifiers, while their results are further refined by a proposed sequential subset feature selection that reduces the dimensionality of the features extracted from a very deep pre-trained CNN model (VGG-face).
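The full text of the thesis is not available here, so the exact selection procedure cannot be reproduced; as a rough illustration of the general idea, the following minimal sketch shows a classic sequential forward feature selection over descriptor dimensions, using leave-one-out nearest-neighbour accuracy as a cheap stand-in for retrieval quality. The toy data, the proxy objective, and all function names are assumptions for illustration only, not the method of the thesis.

```python
import numpy as np

def nn_accuracy(X, y):
    """Leave-one-out 1-NN accuracy: a simple proxy for retrieval quality."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # a sample may not match itself
    return float(np.mean(y[np.argmin(D, axis=1)] == y))

def sequential_forward_selection(X, y, k):
    """Greedily pick k feature dimensions that maximise the proxy score."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_dim, best_score = None, -1.0
        for d in remaining:
            score = nn_accuracy(X[:, selected + [d]], y)
            if score > best_score:
                best_dim, best_score = d, score
        selected.append(best_dim)
        remaining.remove(best_dim)
    return selected

# Toy example (hypothetical data): 5-D descriptors where only
# dimension 2 separates the two identities.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 5))
X[:, 2] += 4.0 * y                       # informative dimension

selected = sequential_forward_selection(X, y, 2)
print(selected)                          # dimension 2 is chosen first
```

In a real system, the 4096-D VGG-face fully-connected activations would play the role of `X`, and the proxy objective would be a retrieval metric (e.g. mean average precision) on a validation set rather than 1-NN accuracy.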

Item Type: Thesis (PhD)
Additional Information: Call No.: Z699.5.P53 R37 2018
Uncontrolled Keywords: Content-based image retrieval
Subjects: Z Bibliography. Library Science. Information Resources > Z665 Library Science. Information Science
Divisions: Faculty of Engineering (FOE)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 17 Sep 2020 04:29
Last Modified: 17 Sep 2020 05:39
URI: http://shdl.mmu.edu.my/id/eprint/7720
