UPDATE: The leaderboard will close on 13th July, 9:00, Anywhere on Earth (AoE), and the top 3 entries on the leaderboard will be invited to submit papers to the conference by 14th July AoE (the official hard deadline of ACM MM grand challenges). For anyone outside the top 3, we highly encourage you to submit to our sister workshop: https://megc2023.github.io/workshop.html. The submission system for the Facial Micro-Expression workshop is open here: https://easychair.org/conferences/?conf=fme2023.
UPDATE: The deadline for submitting your results to the leaderboard will be the same as the paper submission date: 14th July 2023 AoE.
After this deadline, the leaderboard rankings will be made public. The best submission score from each participant will be used.
NOTICE ON EVALUATION: To facilitate paper reviewing while the leaderboard scores are hidden, participants may state their results on SAMM-LV, CAS(ME)^2, CAS(ME)^3, and/or the MEGC 2022 unseen dataset at the paper submission stage. Using all of these datasets for this purpose is not mandatory; participants are free to choose a subset or another dataset if they wish. Final scores can be added in the camera-ready version.
[update!!] The leaderboard's closing date has been postponed.
[update] The unseen dataset is now available for application! Click here for more details.
[update] The leaderboard is now open for result submission! Click here for more details.
Micro-Expression (ME) and Macro-Expression Spotting Task
Click here to download the CFP.
Recommended Training Databases
- SAMM Long Videos with 147 long videos at 200 fps (average duration: 35.5s).
- To download the dataset, please visit: http://www2.docm.mmu.ac.uk/STAFF/M.Yap/dataset.php. Download and fill in the license agreement form, then email it to M.Yap@mmu.ac.uk with the subject line: SAMM long videos.
- Reference: Yap, C. H., Kendrick, C., & Yap, M. H. (2020, November). SAMM long videos: A spontaneous facial micro- and macro-expressions dataset. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) (pp. 771-776). IEEE.
- CAS(ME)^2 with 97 long videos at 30 fps (average duration: 148s).
- To download the dataset, please visit: http://casme.psych.ac.cn/casme/e3. Download and fill in the license agreement form, then submit it through the website.
- Reference: Qu, F., Wang, S. J., Yan, W. J., Li, H., Wu, S., & Fu, X. (2017). CAS(ME)^2: A database for spontaneous macro-expression and micro-expression spotting and recognition. IEEE Transactions on Affective Computing, 9(4), 424-436.
- SMIC-E-long with 162 long videos at 100 fps (average duration: 22s).
- To download the dataset, please visit: https://www.oulu.fi/cmvs/node/41319. Download and fill in the license agreement form (please indicate which version/subset you need), then email it to Xiaobai.Li@oulu.fi.
- Reference: Tran, T. K., Vo, Q. N., Hong, X., Li, X., & Zhao, G. (2021). Micro-expression spotting: A new benchmark. Neurocomputing, 443, 356-368.
- CAS(ME)^3 with 1300 long videos at 30 fps (average duration: 98s).
- To download the dataset, please visit: http://casme.psych.ac.cn/casme/e4. Download and fill in the license agreement form, then submit it through the website.
- Reference: Li, J., Dong, Z., Lu, S., Wang, S. J., Yan, W. J., Ma, Y., ... & Fu, X. (2022). CAS(ME)^3: A third generation facial spontaneous micro-expression database with depth information and high ecological validity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3), 2782-2800. doi: 10.1109/TPAMI.2022.3174895.
- 4DME with 270 long videos at 60 fps (average duration: 2.5s).
- To download the dataset, please visit: https://www.oulu.fi/en/university/faculties-and-units/faculty-information-technology-and-electrical-engineering/center-machine-vision-and-signal-analysis. Download and fill in the license agreement form, then email it to Xiaobai.Li@oulu.fi.
- Reference: Li, X., Cheng, S., Li, Y., Behzad, M., Shen, J., Zafeiriou, S., ... & Zhao, G. (2022). 4DME: A spontaneous 4D micro-expression dataset with multimodalities. IEEE Transactions on Affective Computing.
Unseen Test Dataset
- This year, in order to evaluate algorithms' performance more fairly, and building on the experience gained with SAMM Long Videos, CAS(ME)^2, SMIC-E-long, CAS(ME)^3, and 4DME, we have built an unseen cross-cultural long-video test set whose sample size is triple that of last year's challenge.
- The unseen test set (MEGC2023-testSet) contains 30 long videos: 10 long videos from SAMM (the SAMM Challenge dataset) and 20 clips cropped from different, previously unreleased videos in CAS(ME)^3. The frame rate of the SAMM Challenge dataset is 200 fps, and that of CAS(ME)^3 is 30 fps. Participants should test on this unseen dataset.
- To download the MEGC2023-testSet, download and fill in the license agreement form of the SAMM Challenge dataset and the license agreement form of CAS(ME)^3_clip, then upload the files through this link: https://www.wjx.top/vm/PpmFKf7.aspx# .
- For requests from a bank or company, participants need to ask their director or CEO to sign the form.
- Reference:
- Li, J., Dong, Z., Lu, S., Wang, S. J., Yan, W. J., Ma, Y., Liu, Y., Huang, C., & Fu, X. (2023). CAS(ME)^3: A third generation facial spontaneous micro-expression database with depth information and high ecological validity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3), 2782-2800. doi: 10.1109/TPAMI.2022.3174895.
- Davison, A. K., Lansley, C., Costen, N., Tan, K., & Yap, M. H. (2016). SAMM: A spontaneous micro-facial movement dataset. IEEE Transactions on Affective Computing, 9(1), 116-129.
Evaluation Protocol
- Participants should test their proposed algorithm on the unseen dataset and upload the results to the leaderboard (https://codalab.lisn.upsaclay.fr/competitions/14254) for evaluation.
- Baseline method: please cite:
  - Zhang, L. W., Li, J., Wang, S. J., Duan, X. H., Yan, W. J., Xie, H. Y., & Huang, S. C. (2020, November). Spatio-temporal fusion for macro- and micro-expression spotting in long video sequences. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) (pp. 734-741). IEEE.
- Baseline result: available on the leaderboard.
- Leaderboard submission format: a zip file containing your predicted CSV files, named as follows:
  - cas_pred.csv
  - samm_pred.csv
- An example submission can be seen at example_submission and example_submission_withoutExpType.
- Note: For submissions without expression-type labels (me or mae), the labelling will be done automatically using an ME duration threshold of 0.5s (15 frames for CAS and 100 frames for SAMM); a packaging sketch illustrating this rule follows this list.
- Participants can upload their results, and the leaderboard will calculate the metrics.
- The evaluation results of other participants and the ranking will not be provided during this stage. You can compare your result with the provided baseline result.
- Results uploaded after the submission deadline will not be considered by ACM MEGC2023 for the final ranking of the competition.
- However, any research team interested in the spotting task can still upload results to validate the performance of their method.
- The leaderboard will calculate and display the uploaded results and a real-time ranking.
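As a rough illustration, here is a minimal Python sketch of how a submission archive could be assembled and how the 0.5s ME threshold above translates into frames. The CSV column layout used here (video, onset, offset, type) is an assumption for illustration only; the authoritative format is given by the linked example_submission files.

```python
# Hypothetical packaging sketch (illustration only). The real CSV column
# layout is defined by the example_submission files linked above; the
# columns used here -- video, onset, offset, type -- are an assumption.
import csv
import zipfile

# The 0.5 s ME threshold from the note above, expressed in frames:
# 15 frames for CAS (30 fps) and 100 frames for SAMM (200 fps).
ME_THRESHOLD_FRAMES = {"cas": 15, "samm": 100}

def label_type(dataset: str, onset: int, offset: int) -> str:
    """One plausible reading of the automatic rule: label by interval duration."""
    duration = offset - onset + 1
    return "me" if duration <= ME_THRESHOLD_FRAMES[dataset] else "mae"

def write_predictions(path: str, dataset: str, intervals) -> None:
    """Write one prediction CSV; `intervals` holds (video, onset, offset) tuples."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["video", "onset", "offset", "type"])  # assumed header
        for video, onset, offset in intervals:
            writer.writerow([video, onset, offset, label_type(dataset, onset, offset)])

# Hypothetical example predictions (video names are placeholders).
write_predictions("cas_pred.csv", "cas", [("cas_video_01", 100, 112)])
write_predictions("samm_pred.csv", "samm", [("samm_video_01", 2000, 2450)])

# The leaderboard expects a single zip archive containing the two CSV files.
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("cas_pred.csv")
    zf.write("samm_pred.csv")
```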
Submission
Please note: The submission deadline is 11:59 p.m. on the stated deadline date, Anywhere on Earth (AoE).
- Submission platform: TBD
- Submission Deadline: 14th July 2023
- Notification: TBD
- Camera-ready: 6th August 2023 (hard deadline; extended from 31st July 2023)
- Submission guidelines:
- Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template), as used by regular ACM MM submissions. Please use the template in the traditional double-column format to prepare your submission. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template.
- Grand challenge papers will go through a single-blind review process. Each grand challenge paper submission is limited to 4 pages with 1-2 extra pages for references only.
- For all files required by the different tasks, except for the paper itself, please submit a single zip file and upload it to the submission system as supplementary material, containing:
- A GitHub repository URL containing the code of your implemented method, and all other relevant files such as feature/parameter data.
- CSV files reporting the results.
Top three winners
Rank | Contestant | Affiliation | Article Title | GitHub Link |
---|---|---|---|---|
1st Place | Ke Xu (1,2), Kang Chen (1,3), Licai Sun (1,2), Zheng Lian (1,2), Bin Liu (1,2), Gong Chen (1,2), Haiyang Sun (1,2), Mingyu Xu (1,2), and Jianhua Tao (4,5) | (1) The State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences; (3) Peking University; (4) Department of Automation, Tsinghua University; (5) Beijing National Research Center for Information Science and Technology, Tsinghua University | Integrating VideoMAE based model and Optical Flow for Micro- and Macro-expression Spotting | https://github.com/AAAuthenticator/MESpotting |
2nd Place | Wenfeng Qin, Bochao Zou, Xin Li, Weiping Wang, Huimin Ma | University of Science and Technology Beijing | Micro-Expression Spotting with Face Alignment and Optical Flow | https://github.com/qin123xyz/MEGC2023_macro-and-micro-expression-spotting |
3rd Place | Jun Yu (1), Zhongpeng Cai (1), Shenshen Du (1), Xiaxin Shen (1), Lei Wang (1), and Fang Gao (2) | (1) University of Science and Technology of China; (2) Guangxi University | Efficient Micro-Expression Spotting Based on Main Directional Mean Optical Flow Feature | https://github.com/CZP-1/MEGC2023-3rd |
Frequently Asked Questions
- Q: How should spotted intervals that overlap be handled?
A: We consider that each ground-truth interval corresponds to at most one spotted interval. If your algorithm detects multiple overlapping intervals, you should merge them into a single optimal interval. The fusion method is part of your algorithm, and the final evaluation considers only the merged interval it produces. A minimal merging sketch is shown below.
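As one possible fusion strategy (an illustration, not a prescribed method: the challenge leaves interval fusion to each participant), overlapping detections can be sorted by onset and merged greedily:

```python
# A minimal sketch of one possible fusion strategy: merge any spotted
# (onset, offset) frame intervals that overlap into a single interval.
def merge_overlapping(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge overlapping (onset, offset) frame intervals."""
    merged: list[tuple[int, int]] = []
    for onset, offset in sorted(intervals):
        if merged and onset <= merged[-1][1]:  # overlaps the previous interval
            merged[-1] = (merged[-1][0], max(merged[-1][1], offset))
        else:
            merged.append((onset, offset))
    return merged

# Example: three overlapping detections collapse into one interval.
print(merge_overlapping([(100, 130), (120, 150), (145, 160)]))  # [(100, 160)]
```

Other strategies (e.g., keeping the interval with the highest confidence) are equally valid; only the single resulting interval per detection is evaluated.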