Original language | English
---|---
Pages (from-to) | 1-2
Number of pages | 2
Journal | Computer Vision and Image Understanding
Volume | 168
DOIs | 10.1016/j.cviu.2018.02.007
State | Published - Mar 2018
In: Computer Vision and Image Understanding, Vol. 168, 03.2018, p. 1-2.
Research output: Contribution to journal › Editorial
TY - JOUR
T1 - Guest Editorial
T2 - Vision and Computational Photography and Graphics
AU - Timofte, Radu
AU - Van Gool, Luc
AU - Yang, Ming-Hsuan
AU - Avidan, Shai
AU - Matsushita, Yasuyuki
AU - Yang, Qingxiong
N1 - Computational photography is a young and rapidly developing research field. It aims to overcome the limitations of the traditional camera by recording more information and processing that information afterwards. Computational photography lies at the convergence of computer graphics, computer vision, and photography, and many of its techniques first appeared in the computer vision literature. Many of the latest exciting developments in computational photography are closely related to computer vision, e.g., computational cameras that use object detection and visual tracking to better focus and expose the image. In this context, we present a CVIU special issue on “Vision and Computational Photography and Graphics”. This special issue was intended to cover a wide range of topics in computational photography, with the common denominator of applying computer vision techniques to computational photography. Its scope is interdisciplinary: we especially welcomed collaborations between academic and industrial experts in the areas of image sensors, photonics, information theory, signal processing, computer vision, and machine learning/data mining. We received a total of 29 submissions from 15 different countries (Austria, Brazil, Canada, Chile, China, France, Hong Kong, India, Italy, Morocco, Portugal, South Africa, Taiwan, Thailand, United Kingdom).
After a thorough reviewing process with up to three rounds of revision, 14 papers were selected. Together they give a good picture of the exciting research going on at the confluence of vision, computational photography, and graphics.

In “Towards an Automatic Correction of Over-Exposure in Photographs: Application to Tone-Mapping” by Mekides A. Abebe, Alexandra Booth, Jonathan Kervec, Tania Pouli, and Mohamed-Chaker Larabi, a solution is proposed for the over-exposure artifact in photographs, whose symptoms are flat areas lacking detail. The authors adaptively estimate a per-image clipping threshold based on the image white point and automatically classify over-exposed regions as light sources, specular highlights, or diffuse surfaces. Several applications are explored, including video extension and preprocessing for reverse tone mapping.

In “A Bio-inspired Synergistic Virtual Retina Model for Tone Mapping” by Marco Benzi, Maria-Jose Escobar, and Pierre Kornprobst, it is shown how to enrich the Virtual Retina model from computational neuroscience with new features so that it can serve as a tone mapping operator. These features include color management, luminance adaptation at the photoreceptor level, and readout from a heterogeneous population activity.

In “Clustering based content and color adaptive tone mapping” by Hui Li, Xixi Jia, and Lei Zhang, a statistical, clustering-based tone mapping method is proposed that adapts more faithfully to local image content and colors.

In “Simultaneous deconvolution and denoising using a second order variational approach applied to image super resolution” by Amine Laghrib, Mahmoud Ezzaki, Mohammed El Rhabi, Abdelilah Hakim, Pascal Monasse, and Said Raghay, a multi-frame image super-resolution algorithm is proposed based on a convex combination of bilateral total variation and a non-smooth second-order variational regularization.
A proof is given of the existence of a minimizer of the proposed energy functional, and results on simulated and real images show that the algorithm avoids undesirable artifacts.

In “Modified Non-local Means for Super-Resolution of Hybrid Videos” by Yawei Li, Xiaofeng Li, and Zhizhong Fu, novel criteria are proposed for choosing the parameters of a non-local means framework used for super-resolving hybrid videos, i.e., videos with periodically alternating low- and high-resolution frames (the latter used for guidance).

In “Video Super-Resolution Based on Spatial-Temporal Recurrent Residual Networks” by Wenhan Yang, Jiashi Feng, Guosen Xie, Jiaying Liu, Zongming Guo, and Shuicheng Yan, intra-frame redundancy and inter-frame motion context are jointly modeled in a unified deep network to super-resolve videos.

In “Optimized Sampling for View Interpolation in Light Fields Using Local Dictionaries” by David Christian Schedl, Clemens Birklbauer, and Oliver Bimber, an angular super-resolution method based on local dictionaries is introduced for light fields captured with a sparse camera array. The desired output perspectives and the number of available cameras can be specified arbitrarily.

In “Depth Range Accuracy for Plenoptic Cameras” by Nuno Barroso Monteiro, Simão Marto, João P. Barreto, and José Gaspar, a forward projection model is formalized and projective geometry cues are used to improve a metric reconstruction methodology for a calibrated standard plenoptic camera. The validation assesses depth estimation accuracy under various zoom and focus settings.

In “A Novel Framework for Highlight Reflectance Transformation Imaging” by Andrea Giachetti, Irina M. Ciortan, Claudia Daffara, Giacomo Marchioro, Ruggero Pintus, and Enrico Gobbetti, a novel pipeline and related software tools are presented for processing multi-light image collections (MLICs).
Such collections are acquired in different application contexts (such as cultural heritage) to obtain shape and appearance information of the captured surfaces, as well as to derive compact relightable representations.

In “Specular Highlight Reduction With Known Surface Geometry” by Xing Wei, Xiaobin Xu, Jiawei Zhang, and Yihong Gong, the surface geometry is assumed known and a method is proposed to simultaneously separate specularities and estimate the positions of light sources. The method uses a novel objective function based on robust principal component analysis.

In “Atmospheric Light Estimation in Hazy Images Based on Color-Planes Model” by Ming-Zhu Zhu, Bing-Wei He, and Li-Wei Zhang, a novel air light recovery method is described and applied to single-image dehazing. A color-plane model combines a color-line model and a haze-line model.

In “Underwater image and video dehazing with pure haze region segmentation” by Simon Emberton, Lars Chittka, and Andrea Cavallaro, a novel dehazing method is proposed that improves visibility in images and videos by detecting and segmenting image regions that contain only water. Extensive subjective evaluation tests validate the approach.

In “Blind Image Deblurring Using Elastic-Net based Rank Prior” by Hongyan Wang, Jinshan Pan, Zhixun Su, and Songxin Liang, an image prior is proposed based on similar patches of an image and an elastic-net regularization of singular values. The prior is applied to both uniform and non-uniform blind image deblurring, achieving top performance.

In “A Bi-directional Evaluation-based Approach for Image Retargeting Quality Assessment” by Saulo A. F. Oliveira, Shara Alves, João Gomes, and Ajalmar Rocha Neto, a bi-directional approach to assessing the quality of retargeted images is proposed within a feature fusion framework. The approach is validated on a well-known state-of-the-art dataset, including human observers' opinions on perceptual quality.
We wish to thank CVIU's Editor-in-Chief and Editorial Staff for the continuous support that made this Special Issue possible. Our gratitude also goes to our numerous Reviewers, experts in their respective fields, whose high-quality reviews underpin the success of this Special Issue.
PY - 2018/3
Y1 - 2018/3
UR - http://www.scopus.com/inward/record.url?scp=85044162617&partnerID=8YFLogxK
U2 - 10.1016/j.cviu.2018.02.007
DO - 10.1016/j.cviu.2018.02.007
M3 - Editorial
AN - SCOPUS:85044162617
SN - 1077-3142
VL - 168
SP - 1
EP - 2
JO - Computer Vision and Image Understanding
JF - Computer Vision and Image Understanding
ER -