Realistic Single Image Recovering in Adverse Weather
Mar. 1, 2019: Web page now online.
Apr. 17, 2019: Registration now open.
Apr. 26, 2019: Training data is now available.
Apr. 29, 2019: Testing data will be released on May 12.
May 13, 2019: Testing data has been released.
Outdoor scenes are often affected by fog, haze, rain, and smog; suspended particles in the atmosphere cause poor visibility.
This challenge is meant to consolidate research efforts on single-image recovery in adverse weather,
especially hazy and rainy conditions. The challenge consists of two tracks: Hazy Image Recovering (HIR) and Rainy Image
Recovering (RIR). In both tracks, participants are required to recover sharp images from given
degraded (hazy or rainy) inputs.
The dataset consists of two parts: a rainy dataset and a hazy dataset. Each includes training,
validation, and test data.
Hazy dataset: We provide 3,000 real-world hazy images collected from traffic surveillance scenes,
all of which are labeled with object bounding boxes and categories (car, bus, bicycle, motorcycle,
and pedestrian) for validation and testing purposes. Training: [Dropbox] [Baidu Yun] Password: w54h Testing: [Google Drive]
Rainy dataset: We provide 2,495 real rainy images from high-resolution driving videos.
They were captured in diverse real traffic locations and scenes during multiple drives, all of which
are labeled with object bounding boxes and categories: car, person, bus, bicycle, and motorcycle. Training: [Google Drive] Testing: [Google Drive]
1. Additional training data is allowed in this challenge.
2. Please KEEP the NAMES and the original RESOLUTION of the testing images when submitting your
results. You can share the derained/dehazed results and your code with us via Baidu Yun, Google Drive,
or Dropbox; only the last submission will be valid. Please send the shared link, together with your
team name, to firstname.lastname@example.org. You may visit the paper submission site
to submit your accompanying paper. The deadlines for both the testing-image submission and the accompanying paper
(paper submission is optional) are May 27.
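Since submissions that rename test images or change their resolution are invalid, it may help to sanity-check a results folder before sending the link. The sketch below is a hypothetical helper (not part of the challenge kit): `check_submission` and `png_size` are names chosen here for illustration, and it assumes PNG images so that dimensions can be read from the file header with the standard library alone.

```python
# Hypothetical pre-submission check: verify that the results folder keeps the
# NAMES and the original RESOLUTION of the released testing images.
# Assumes PNG files so sizes can be read from the IHDR chunk without Pillow.
import os
import struct


def png_size(path):
    """Return (width, height) read from a PNG's IHDR chunk (bytes 16-24)."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError(f"{path} is not a PNG file")
    return struct.unpack(">II", header[16:24])


def check_submission(test_dir, result_dir):
    """Return a list of problems; an empty list means the folder looks valid."""
    problems = []
    test_names = set(os.listdir(test_dir))
    result_names = set(os.listdir(result_dir))
    # Every test image must have a result with exactly the same file name.
    for name in sorted(test_names - result_names):
        problems.append(f"missing result for {name}")
    for name in sorted(result_names - test_names):
        problems.append(f"unexpected file {name} (names must match the test set)")
    # Resolutions must match the originals.
    for name in sorted(test_names & result_names):
        if name.lower().endswith(".png"):
            if png_size(os.path.join(test_dir, name)) != png_size(
                os.path.join(result_dir, name)
            ):
                problems.append(f"resolution changed for {name}")
    return problems
```

Running `check_submission(test_dir, result_dir)` on the released test folder and your output folder should return an empty list before you share the results link.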
Each team is asked to register prior to the submission period. Registration is now open; to
register, please submit the Registration Form to email@example.com.
⦁ May 12, Release of testing data.
⦁ May 27, Test image submission deadline.
⦁ May 27, Deadline for the accompanying papers for possible publication at ICIP 2019. (optional)
⦁ July 1, Announcement of the evaluation results.
⦁ July 1, Notification for the acceptance of the accompanying papers. (optional)
⦁ July 15, Deadline for camera-ready submission of accepted papers. (optional)
Topics Of Interest
The challenge focuses on analysis of daily indoor activities from skeleton data captured by 3D
cameras for two different tasks.
⦁ Segmented Action Recognition Challenge:
Given a well-segmented
skeleton video clip, predict the label of the activity present in the video clip.
⦁ Untrimmed Action Detection Challenge:
Given a long skeleton video,
predict the action intervals with labels of the activities.
Activity analysis is an important area in computer vision and strongly relevant to multimedia.
Different from other topics in the main conference, this workshop focuses on 3D human activity
analysis which has been shown to have a potentially large impact in broad practical applications
like visual surveillance, human-robot interaction, elderly assistance systems, etc.
Dr. Jiaying Liu
Associate Professor, Institute of Computer Science and Technology
Peking University, Beijing, P.R. China
Dr. Wenqi Ren
Assistant Professor, Institute of Information Engineering
Chinese Academy of Sciences, Beijing, P.R. China
Dr. Zhangyang Wang
Assistant Professor, Department of Computer Science & Engineering
Texas A&M University, US
This workshop is funded by Microsoft Research Asia, project ID FY17-RES-THEME-013.