UPDATE

  • Join the workshop online on November 2 from 08:30 a.m. to 12:00 noon (Canadian time, EDT)!
    • Due to the many visa issues surrounding this year’s ACM Multimedia, many of the participants in our grand challenge and workshop cannot present in person. Dr. Adrian K. Davison will introduce the workshop and let presenters present live where they are available; otherwise, a video prepared by the participant will be shown.
    • We will also try to include some interactive elements in the workshop, to encourage discussion and to improve future workshops and challenges.
  • The submission system for the Facial Micro-Expression workshop is open here: https://easychair.org/conferences/?conf=fme2023.
  • Submission deadline: 28th July 2023 (AoE), extended from 21st July 2023.

Facial Micro-Expression (FME) Workshop 2023

- Advanced Techniques for Multi-Modal Facial Expression Analysis

Click here to download the CFP.

Facial micro-expressions (MEs) are involuntary facial movements that occur spontaneously when a person experiences an emotion but attempts to suppress the facial expression, typically in a high-stakes environment. MEs are very short, generally lasting no more than 500 milliseconds (ms); this brevity is the telltale sign that distinguishes them from ordinary facial expressions. Computational analysis and the automation of ME tasks are emerging areas in face research, with strong interest appearing as recently as 2014. The availability of a few spontaneously induced facial micro-expression datasets has provided the impetus for further advances on the computational side. Because eliciting and manually annotating MEs are both challenging, the number of labeled ME samples is limited: so far, there are only around 1,162 (video) samples across seven public spontaneous databases. Moreover, it is difficult to standardize ME labeling across different annotators. To tackle these problems, we expect that recent advances in pattern recognition, such as self-supervised learning, one-shot learning, and artificial data generation, can help improve ME spotting and recognition performance despite the relatively low number of samples.
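
As an illustration of the 500 ms duration criterion described above, here is a minimal, hypothetical Python sketch that spots candidate micro-expression intervals by thresholding a crude frame-difference motion score and keeping only bursts that are short enough. The motion score, thresholds, and function names are illustrative assumptions, not a method used or endorsed by the workshop.

    # Minimal sketch (illustrative only): use the <= 500 ms duration
    # criterion to separate micro- from macro-expression intervals.
    import numpy as np

    def spot_micro_expressions(frames, fps,
                               motion_thresh=8.0, max_duration_ms=500.0):
        """Return (start, end) frame intervals whose motion score exceeds
        `motion_thresh` for no longer than `max_duration_ms` (the telltale
        brevity of a micro-expression)."""
        # Mean absolute frame difference as a crude per-frame motion score.
        motion = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
        active = motion > motion_thresh

        intervals, start = [], None
        for i, on in enumerate(active):
            if on and start is None:
                start = i
            elif not on and start is not None:
                duration_ms = (i - start) / fps * 1000.0
                if duration_ms <= max_duration_ms:
                    intervals.append((start, i))
                start = None
        return intervals

    # Example: 2 s of synthetic 64x64 grayscale video at 100 fps with a
    # 300 ms burst of "motion" between frames 50 and 80.
    rng = np.random.default_rng(0)
    video = np.zeros((200, 64, 64), dtype=np.uint8)
    video[50:80] = rng.integers(0, 255, size=(30, 64, 64), dtype=np.uint8)
    print(spot_micro_expressions(video, fps=100.0))  # [(49, 80)]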

Micro-expression analysis (MEA) also faces many challenges. First, the mechanism by which micro-expressions are generated is still not precisely understood, and the value of micro-expressions for lie detection is not sufficiently clear. Second, micro-expression samples with high ecological validity are difficult to induce, and data labeling is time-consuming and labor-intensive; this leads to small sample sizes and imbalanced distributions in MEA tasks. With the development of imaging devices, MEA is no longer limited to traditional RGB video: new data and research trends point toward combining facial data captured by multiple and various sensors, e.g., depth and thermal cameras, so that different features can be fused for MEA. MEA is thus an interdisciplinary field that benefits from multi-modality research in two ways. First, multi-modal data, such as depth information and physiological signals, can improve micro-expression analysis performance. Second, multi-modal micro-expression analysis enables more in-depth research on face and emotion analysis.
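
To make the idea of fusing features from different sensors concrete, the following minimal Python sketch performs feature-level (early) fusion: each modality (here, hypothetical RGB, depth, and heart-rate inputs) is pooled into a small statistics vector, rescaled, and concatenated into a single descriptor for a downstream classifier. The modalities, dimensions, and names are illustrative assumptions, not a workshop baseline.

    # Minimal sketch (illustrative only) of feature-level fusion for
    # multi-modal micro-expression analysis.
    import numpy as np

    def pooled_features(signal):
        """Summarize a per-frame signal by simple temporal statistics."""
        s = signal.reshape(signal.shape[0], -1).mean(axis=1)  # per-frame mean
        return np.array([s.mean(), s.std(), s.max() - s.min()])

    def fuse(rgb, depth, heart_rate):
        """Concatenate per-modality feature vectors into one descriptor,
        rescaling each so that no single sensor dominates by magnitude."""
        parts = [pooled_features(m) for m in (rgb, depth, heart_rate)]
        parts = [(p - p.mean()) / (p.std() + 1e-8) for p in parts]
        return np.concatenate(parts)

    # Example: 100 frames of RGB and depth video plus a heart-rate trace.
    rng = np.random.default_rng(0)
    rgb = rng.random((100, 32, 32, 3))
    depth = rng.random((100, 32, 32))
    heart_rate = 60 + 5 * rng.random(100)
    print(fuse(rgb, depth, heart_rate).shape)  # (9,): 3 stats x 3 modalities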

Agenda

  • To organize a Facial Micro-Expression (FME) Workshop for facial micro-expression research, covering FME recognition, spotting, and generation.
  • To solicit original works that address a variety of challenges in facial expression research, including but not limited to:
    • Facial expression (both micro- and macro-expression) detection/spotting
    • Facial expression recognition
    • Multi-modal micro-expression analysis, combining modalities such as depth information, heart-rate signals, etc.
    • FME feature representation and computational analysis
    • Unified FME spot-and-recognize schemes
    • Deep learning techniques for FME detection and recognition
    • New objective classes for FME analysis
    • New FME datasets
    • Facial expression data synthesis
    • Psychology of FME research
    • Facial Action Unit (AU) detection and recognition
    • Emotion recognition using AUs
    • FME applications

This workshop explores the intelligent analysis of personal emotions through facial expressions, with particular emphasis on micro-expression analysis for studying hidden emotions. Multi-modal approaches and novel generation techniques are especially encouraged.

Submission

Please note: the submission deadline is 11:59 p.m. on the stated deadline date, Anywhere on Earth (AoE).

  • Submission Deadline: 28th July 2023 (AoE), extended from 21st July 2023
  • Notification: 30th July 2023
  • Camera-ready: 6th August 2023
  • Submission guidelines:
    • FME2023 workshop papers will go through a double-blind review process.
    • Paper format and page limit: The template is the same as the one used for the main conference (ACMMM23) track. Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template), as used by regular ACMMM submissions. Please use the template in traditional double-column format to prepare your submission. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template (\documentclass[sigconf,anonymous]{acmart}); a minimal skeleton is sketched after this list. The page limit is 8 pages.
    • Submission system: https://easychair.org/conferences/?conf=fme2023.
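
For convenience, here is a minimal LaTeX skeleton consistent with the guidelines above: the standard ACM sample-sigconf template with the anonymous option enabled for double-blind review. The title, author block, and body text are placeholders only.

    % Minimal skeleton (placeholders only) for an FME2023 submission.
    \documentclass[sigconf,anonymous]{acmart}

    \begin{document}

    \title{Your FME2023 Paper Title}

    % acmart still requires author metadata; the "anonymous" class option
    % hides it in the compiled PDF for double-blind review.
    \author{Anonymous Author(s)}
    \affiliation{%
      \institution{Anonymous Institution}
      \country{Anonymous Country}}

    % In acmart, the abstract must appear before \maketitle.
    \begin{abstract}
    Abstract text here.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Paper body here, within the 8-page limit.

    \end{document}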