CLiC: Concept Learning in Context

Mehdi Safaee 1
Aryan Mikaeili 1
Or Patashnik 2
Daniel Cohen-Or 2
Ali Mahdavi-Amiri 1
1 Simon Fraser University
2 Tel Aviv University

CVPR 2024 (Highlight)



TL;DR: We focus on learning a specific local pattern from a single image (like the ornaments on the red chair) through a unique text token. This token can then be used to transfer the learned pattern onto various other objects (right) or to create new objects featuring that pattern (left).

Abstract

This paper addresses the challenge of learning a local visual pattern of an object from a single image and generating images depicting objects with that pattern. Learning a localized concept and placing it on an object in a target image is a nontrivial task, as the objects may have different orientations and shapes. Our approach builds on recent advancements in visual concept learning: it acquires a visual concept (e.g., an ornament) from a source image and subsequently applies it to an object (e.g., a chair) in a target image. Our key idea is to perform in-context concept learning, acquiring the local visual concept within the broader context of the object it belongs to. To localize the concept learning, we employ soft masks that cover both the concept and part of its surrounding image area. We demonstrate our approach through object generation within an image, showcasing plausible embedding of in-context learned concepts. We also introduce methods for directing acquired concepts to specific locations within target images, employing cross-attention mechanisms and establishing correspondences between source and target objects. The effectiveness of our method is demonstrated through quantitative and qualitative experiments, along with comparisons against baseline techniques.
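The in-context idea above, weighting the concept region more heavily than (but not to the exclusion of) its surroundings, can be sketched as a soft-mask-weighted reconstruction loss. This is a minimal illustrative sketch, not the paper's implementation: the function name, the inside/outside weights, and the toy mask shapes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_denoising_loss(noise_pred, noise_true, soft_mask,
                          in_weight=1.0, out_weight=0.1):
    """Weighted MSE: pixels inside the soft mask (the concept) dominate the
    loss, while the surrounding context still contributes with a smaller
    weight, so the token is learned *in context* rather than in isolation.
    The 1.0/0.1 weights are illustrative, not from the paper."""
    weights = out_weight + (in_weight - out_weight) * soft_mask
    return float(np.mean(weights * (noise_pred - noise_true) ** 2))

# Toy 8x8 "latent": a soft mask that is 1 over the concept region and
# falls off toward the surrounding area (0.5 on a ring, 0 elsewhere).
soft_mask = np.zeros((8, 8))
soft_mask[1:7, 1:7] = 0.5
soft_mask[2:6, 2:6] = 1.0

noise_true = rng.standard_normal((8, 8))
noise_pred = noise_true + 0.1  # a slightly wrong noise prediction
loss = masked_denoising_loss(noise_pred, noise_true, soft_mask)
```

A perfect prediction gives zero loss, and the same prediction error costs ten times more inside the concept region than in the far background, which is the in-context weighting the abstract describes.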

Transfer Results


Generation Results


Method


Additional Results


Presentation

Citation

@inproceedings{safaee2023clic,
    title={CLiC: Concept Learning in Context},
    author={Safaee, Mehdi and Mikaeili, Aryan and Patashnik, Or and Cohen-Or, Daniel and Mahdavi-Amiri, Ali},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024}
}