## New Image Retrieval Method Leverages Model’s ‘Inductive Knowledge’ to Find Images Based on Conditions

*Mon Sep 09 2024*

**Seoul, South Korea** – Researchers at Yonsei University have developed a new image retrieval method called “Backward Search” that allows for efficient and accurate retrieval of images based on user-specified conditions. This method leverages the model’s “inductive knowledge,” meaning it uses the model’s learned relationships between images and their labels to find images that satisfy specific criteria.

Traditional image retrieval methods often rely on complex datasets with triplets of images and text, or on heavily annotated image-text pairs, making them expensive and time-consuming to build. The Backward Search method, by contrast, requires only images paired with image-level labels, making it more readily adaptable to a variety of datasets.

The key innovation of Backward Search lies in its inverse mapping approach. Instead of searching the database for images that match the query, the model updates the embedding of the query image itself until it conforms to the specified condition. This is achieved by minimizing the difference between the label the model predicts for the query and the condition label, thereby aligning the query’s embedding with the desired criteria.
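The inverse-mapping idea can be sketched as gradient descent on the query embedding against a frozen classifier head. This is a minimal illustration, not the authors’ implementation: the names `backward_search_update` and `classifier`, and the hyperparameters `steps` and `lr`, are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def backward_search_update(query_emb, condition_label, classifier,
                           steps=200, lr=0.1):
    """Illustrative sketch: optimize the query embedding so a frozen
    classifier predicts the condition label (hypothetical names)."""
    z = query_emb.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    target = torch.tensor([condition_label])
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(z)                   # frozen: embedding -> label logits
        loss = F.cross_entropy(logits, target)   # distance to the condition label
        loss.backward()                          # gradient flows only into z
        opt.step()
    return z.detach()

# toy usage: a frozen linear "classifier" over 8-dim embeddings, 3 labels
torch.manual_seed(0)
clf = torch.nn.Linear(8, 3)
for p in clf.parameters():
    p.requires_grad_(False)
z0 = torch.randn(1, 8)
z1 = backward_search_update(z0, condition_label=2, classifier=clf)
print(clf(z1).argmax().item())  # the updated embedding now satisfies the condition
```

The retrieval step would then rank database images by their embedding distance to the updated query embedding `z1`.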

“Our proposed method enables both single and multi-conditional image retrieval, meaning users can specify one or multiple conditions for the images they want to find,” explains Dr. Dong-Hyun Lee, lead author of the study published in PLOS ONE. “Furthermore, we have effectively reduced the computational time by distilling the knowledge from the Backward Search process into a simpler student model.”

The researchers tested their method on three benchmark datasets: WikiArt, aPY, and CUB. The results showed that the Backward Search method outperformed existing composed image retrieval (CoIR) methods in terms of mean average precision (mAP@10), demonstrating its effectiveness in accurately retrieving images based on specified conditions.
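For readers unfamiliar with the metric, mAP@10 averages, over all queries, the precision at each rank (up to 10) where a relevant image appears. The sketch below uses one common normalization convention (dividing by `min(k, #relevant)`); implementations, possibly including the paper’s, differ in this detail.

```python
def average_precision_at_k(retrieved, relevant, k=10):
    """AP@k for one query: average precision at each rank where a relevant
    item appears, normalized by min(k, number of relevant items)."""
    hits, score = 0, 0.0
    for rank, item in enumerate(retrieved[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at this rank
    return score / min(k, len(relevant)) if relevant else 0.0

def mean_average_precision_at_k(all_retrieved, all_relevant, k=10):
    """mAP@k: mean of per-query AP@k scores."""
    aps = [average_precision_at_k(r, rel, k)
           for r, rel in zip(all_retrieved, all_relevant)]
    return sum(aps) / len(aps)

# toy example with two queries
retrieved = [["a", "x", "b"], ["y", "a"]]
relevant = [{"a", "b"}, {"a"}]
print(mean_average_precision_at_k(retrieved, relevant, k=3))  # → 2/3
```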

The study also highlights the benefit of knowledge distillation for accelerating retrieval: the distilled student model reduced computation time by a factor of up to 160 with only a slight decrease in performance.

This research has potential implications for various image-related applications, including image search engines, content-based image retrieval systems, and automated image analysis tasks. The researchers believe that the Backward Search method can be further enhanced by incorporating natural language processing techniques, allowing users to specify conditions in a more flexible and intuitive way.

The implementation of the Backward Search method is available on GitHub: [https://github.com/dhlee-work/BackwardSearch](https://github.com/dhlee-work/BackwardSearch)

**Citation:** Lee D, Kim W (2024) Backward induction-based deep image search. PLoS ONE 19(9): e0310098. https://doi.org/10.1371/journal.pone.0310098
