Overview

Welcome to Makeup Datasets, a collection of datasets of female face images assembled for studying the impact of makeup on face recognition.
We have assembled four datasets:
  • YMU (YouTube Makeup): face images of subjects were obtained from YouTube video makeup tutorials. We also provide the YouTube URLs.
  • VMU (Virtual Makeup): face images of Caucasian female subjects in the FRGC repository (http://www.nist.gov/itl/iad/ig/frgc.cfm) were synthetically modified to simulate the application of makeup. Publicly available software (www.taaz.com) was used to perform this alteration.
  • MIW (Makeup in the "wild"): face images of subjects with and without makeup were obtained from the internet.
  • MIFS (Makeup Induced Face Spoofing): face images of subjects were obtained from YouTube video makeup tutorials, and face images of the associated target subjects were obtained from the internet.

Dataset | Subjects                   | Images per subject                                         | Total number of images
YMU     | 151                        | 4 (2 before and 2 after makeup application)                | 604
VMU     | 51                         | 4 (1 no makeup, 1 lipstick, 1 eye makeup, 1 full makeover) | 204
MIW     | 125                        | 1-2                                                        | 154 (77 with makeup, 77 without makeup)
MIFS    | 107 subjects + 107 targets | 4 (2 before, 2 after makeup application) + 2 target        | 642
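As a quick sanity check, the totals in the table follow directly from the per-subject counts (MIW is excluded because its per-subject count varies between 1 and 2). The snippet below is plain arithmetic over the numbers listed above, not an official loader for the datasets:

```python
# Sanity-check the image totals listed in the table above.
# All counts are taken directly from the table.
datasets = {
    "YMU":  {"subjects": 151, "images_per_subject": 4},
    "VMU":  {"subjects": 51,  "images_per_subject": 4},
    "MIFS": {"subjects": 107, "images_per_subject": 4, "target_images": 2},
}

totals = {}
for name, d in datasets.items():
    total = d["subjects"] * d["images_per_subject"]
    # MIFS additionally includes 2 images for each of the 107 target subjects.
    total += d["subjects"] * d.get("target_images", 0)
    totals[name] = total

print(totals)  # {'YMU': 604, 'VMU': 204, 'MIFS': 642}
```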


Datasets

  • YMU: We assembled a dataset consisting of 151 subjects, specifically Caucasian females, from YouTube makeup tutorials. We captured images of the subjects before and after the application of makeup. There are four shots per subject: two shots before the application of makeup and two shots after the application of makeup. For a few subjects, we were able to obtain three shots each before and after the application of makeup. The makeup in these face images varies from subtle to heavy. The cosmetic alteration is mainly in the ocular area, where the eyes have been accentuated by diverse eye makeup products. Additional changes involve skin quality, due to the application of foundation, and lip color. This dataset includes some variations in expression and pose. The illumination condition is reasonably constant over multiple shots of the same subject. In a few cases, the hair style before and after makeup changes drastically. More details about this dataset can be found in:
    1. A. Dantcheva, C. Chen, A. Ross, "Can Facial Cosmetics Affect the Matching Accuracy of Face Recognition Systems?," Proc. of 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Washington DC, USA), September 2012.
    2. C. Chen, A. Dantcheva, A. Ross, "Automatic Facial Makeup Detection with Application in Face Recognition," Proc. of 6th IAPR International Conference on Biometrics (ICB), (Madrid, Spain), June 2013.

  • VMU: The VMU dataset was assembled by synthetically adding makeup to 51 female Caucasian subjects in the FRGC dataset. We added makeup by using a publicly available tool from Taaz. We created three virtual makeovers: (a) application of lipstick only; (b) application of eye makeup only; and (c) application of full makeup consisting of lipstick, foundation, blush, and eye makeup. Hence, the assembled dataset contains four images per subject: one before-makeup shot and three after-makeup shots. More details about this dataset can be found in:
    1. A. Dantcheva, C. Chen, A. Ross, "Can Facial Cosmetics Affect the Matching Accuracy of Face Recognition Systems?," Proc. of 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Washington DC, USA), September 2012.
  • MIW: The images were obtained from the internet, and the faces are unconstrained. More details about this dataset can be found in:
    1. C. Chen, A. Dantcheva, A. Ross, "Automatic Facial Makeup Detection with Application in Face Recognition," Proc. of 6th IAPR International Conference on Biometrics (ICB), (Madrid, Spain), June 2013.

  • MIFS: We assembled a dataset consisting of 107 makeup transformations taken from random YouTube makeup video tutorials. Each subject is attempting to spoof a target identity. Hence, we provide three sets of face images: images of a subject before makeup; images of the same subject after makeup with the intention of spoofing; and images of the target subject who is being spoofed. More details about this dataset can be found in:
    1. C. Chen, A. Dantcheva, T. Swearingen, A. Ross, "Spoofing Faces Using Makeup: An Investigative Study," Proc. of 3rd IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), (New Delhi, India), February 2017.


Download the datasets
Please send an email to MU_data@antitza.com, CC: rossarun@msu.edu, providing the following details:
  • Name,
  • Affiliation,
  • Email address,
  • Requested dataset,
  • Reason for requesting the dataset.
We request this information so that we can offer updates or contact you if we organize a related workshop. We will respond to your email with the download details.
Thanks for your interest.

References
When using these datasets in your research, please cite the following papers:
  1. A. Dantcheva, C. Chen, A. Ross, "Can Facial Cosmetics Affect the Matching Accuracy of Face Recognition Systems?," Proc. of 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Washington DC, USA), September 2012.
  2. C. Chen, A. Dantcheva, A. Ross, "Automatic Facial Makeup Detection with Application in Face Recognition," Proc. of 6th IAPR International Conference on Biometrics (ICB), (Madrid, Spain), June 2013.
  3. C. Chen, A. Dantcheva, A. Ross, "An Ensemble of Patch-based Subspaces for Makeup-Robust Face Recognition," Information Fusion Journal, Vol. 32, pp. 80 - 92, November 2016.
  4. C. Chen, A. Dantcheva, T. Swearingen, A. Ross, "Spoofing Faces Using Makeup: An Investigative Study," Proc. of 3rd IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), (New Delhi, India), February 2017.