Technical Papers

    Image & Video Editing

    Thursday, 21 November

    16:15 - 18:00

    Convention Hall B

    Inverse Image Editing: Recovering a Semantic Editing History from a Before-and-After Image Pair

    We recover a semantically meaningful editing history from a source image and an edited copy. Our method supports commonly used linear and non-linear geometric and color transforms, as well as spatially varying adjustment brushes. A user study suggests that the recovered histories are semantically comparable to those created by artists.


    Shi-Min Hu, Tsinghua University
    Kun Xu, Tsinghua University
    Li-Qian Ma, Tsinghua University
    Bin Liu, Tsinghua University
    Bi-Ye Jiang, Tsinghua University
    Jue Wang, Adobe Research
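
    As a rough, hedged illustration of the recovery problem above (not the authors' algorithm), one ingredient of an editing history, a global affine color transform between the before and after images, could be fit by least squares. The function name and interface below are invented for this sketch; the paper additionally recovers geometric transforms and spatially varying brush edits.

        import numpy as np

        def fit_global_color_transform(before, after):
            """Fit an affine color transform mapping `before` RGB values to `after`.

            Both inputs are float arrays of shape (H, W, 3) with pixel-wise
            correspondence. The returned matrix M has shape (4, 3) and maps
            homogeneous colors [r, g, b, 1] to edited colors.
            """
            src = before.reshape(-1, 3)
            dst = after.reshape(-1, 3)
            # Append a constant column so the fit can absorb brightness offsets.
            src_h = np.hstack([src, np.ones((src.shape[0], 1))])
            # Least-squares solution of src_h @ M = dst.
            M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
            return M

        # Illustration with a synthetic "edit": a contrast boost plus a warm color cast.
        rng = np.random.default_rng(0)
        before = rng.random((64, 64, 3))
        after = np.clip(1.2 * before + np.array([0.05, 0.02, -0.03]), 0.0, 1.0)
        print(fit_global_color_transform(before, after).round(3))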

    3-Sweep: Extracting Editable Objects from a Single Photo

    We introduce an interactive technique for manipulating simple 3D shapes by extracting them from a single photograph. The extracted 3D objects can be quickly edited and placed back into photos or 3D scenes, enabling object-driven photo editing tasks that are impossible to perform in image space.


    Tao Chen, Tsinghua University
    Zhe Zhu, Tsinghua University
    Ariel Shamir, Interdisciplinary Center (IDC) Herzliya
    Shi-Min Hu, Tsinghua University
    Daniel Cohen-Or, Tel Aviv University

    PatchNet: A Patch-based Image Representation for Interactive Library-driven Image Editing

    We introduce PatchNet, a compact, hierarchical representation that describes the structural and appearance characteristics of image regions. This representation serves as a basis for interactive, library-driven image editing.


    Shi-Min Hu, Tsinghua University
    Fang-Lue Zhang, Tsinghua University
    Miao Wang, Tsinghua University
    Ralph Martin, Cardiff University
    Jue Wang, Adobe Research
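
    The toy data structure below is only a hedged sketch of what a hierarchical, patch-based description of image regions might look like in code; the class and function names are invented here, and the actual PatchNet representation (including the contextual relations between regions that drive library-driven editing) is defined in the paper.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        import numpy as np

        @dataclass
        class PatchNode:
            """A toy hierarchical region descriptor, loosely in the spirit of PatchNet.

            Each node covers an image region, stores a small representative patch
            as an appearance summary, and keeps child nodes for its sub-regions.
            """
            bbox: Tuple[int, int, int, int]        # (top, left, height, width)
            appearance: np.ndarray                 # downsampled RGB patch for the region
            children: List["PatchNode"] = field(default_factory=list)

        def build_node(image, bbox, min_size=32, patch=8):
            """Recursively split a region into quadrants until it is small enough."""
            top, left, h, w = bbox
            region = image[top:top + h, left:left + w]
            # Downsample the region to a fixed-size patch as its appearance summary.
            ys = np.linspace(0, h - 1, patch).astype(int)
            xs = np.linspace(0, w - 1, patch).astype(int)
            node = PatchNode(bbox=bbox, appearance=region[np.ix_(ys, xs)])
            if h > min_size and w > min_size:
                hh, hw = h // 2, w // 2
                for dt, dl, ch, cw in [(0, 0, hh, hw), (0, hw, hh, w - hw),
                                       (hh, 0, h - hh, hw), (hh, hw, h - hh, w - hw)]:
                    node.children.append(
                        build_node(image, (top + dt, left + dl, ch, cw), min_size, patch))
            return node

        # Example: build a small hierarchy for a random test image.
        root = build_node(np.random.default_rng(1).random((128, 128, 3)), (0, 0, 128, 128))
        print(len(root.children), "top-level sub-regions")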

    A Sparse Control Model for Image and Video Editing

    We propose a new edit-propagation approach that automatically determines the influence of edit samples across the whole image by jointly considering spatial distance, sample location, and appearance. It greatly reduces the number of required samples while allowing a reasonable degree of global and local manipulation and reducing propagation ambiguity.


    Li Xu, The Chinese University of Hong Kong
    Qiong Yan, The Chinese University of Hong Kong
    Jiaya Jia, The Chinese University of Hong Kong
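
    As a generic, hedged illustration of edit propagation (a plain Gaussian-affinity blend, not the sparse control model proposed in the paper), the sketch below spreads a few user-supplied edit values across an image using weights that jointly depend on spatial distance and appearance difference; all names and parameters are invented for the example.

        import numpy as np

        def propagate_edits(image, sample_xy, sample_values,
                            sigma_spatial=30.0, sigma_color=0.1):
            """Spread sparse edit values over an image.

            image:          float array (H, W, 3) with values in [0, 1].
            sample_xy:      (N, 2) array of (row, col) positions of edit samples.
            sample_values:  (N,) array of edit strengths at those positions.

            Each pixel receives a weighted average of the sample values, with
            Gaussian weights over both spatial distance and color (appearance)
            difference. This simple blend conveys how spatial and appearance
            terms jointly control a sample's influence, but it is far simpler
            than the paper's model.
            """
            H, W, _ = image.shape
            rows, cols = np.mgrid[0:H, 0:W]
            coords = np.stack([rows, cols], axis=-1).astype(float)   # (H, W, 2)

            result = np.zeros((H, W))
            weight_sum = np.full((H, W), 1e-8)                       # avoid divide-by-zero
            for (r, c), v in zip(sample_xy, sample_values):
                d_spatial = np.sum((coords - np.array([r, c], dtype=float)) ** 2, axis=-1)
                d_color = np.sum((image - image[int(r), int(c)]) ** 2, axis=-1)
                w = np.exp(-d_spatial / (2 * sigma_spatial ** 2)
                           - d_color / (2 * sigma_color ** 2))
                result += w * v
                weight_sum += w
            return result / weight_sum

        # Example: brighten near the top-left sample and darken near the bottom-right one.
        img = np.random.default_rng(2).random((96, 96, 3))
        edit_map = propagate_edits(img, np.array([[10, 10], [85, 85]]), np.array([0.3, -0.3]))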

    WYSIWYG Computational Photography via Viewfinder Editing

    We describe a framework for interactively editing images or videos directly on a camera's live viewfinder, before or during capture. The viewfinder reflects the global and local edits the user has specified, helping the user frame the shot. These edits also guide parameter selection for stack photography.


    Jongmin Baek, Stanford University
    Dawid Pająk, NVIDIA Research
    Kihwan Kim, NVIDIA Research
    Kari Pulli, NVIDIA Research
    Marc Levoy, Stanford University