This article explores the methods and datasets used to build a benchmark for content-based image retrieval (CBIR) in medical imaging. It examines vector databases, the challenges of large-scale similarity search, and indexing techniques such as flat search, Locality Sensitive Hashing (LSH), and Hierarchical Navigable Small World (HNSW). The Facebook AI Similarity Search (FAISS) library is used to implement efficient approximate nearest neighbor (ANN) search. Using the TotalSegmentator dataset of over 1,200 CT volumes, embeddings were extracted slice-by-slice and indexed, enabling rapid, metadata-free retrieval across more than 290,000 image embeddings.

Building a CBIR Benchmark with TotalSegmentator and FAISS

Abstract and 1. Introduction

2. Materials and Methods

  2.1 Vector Database and Indexing

  2.2 Feature Extractors

  2.3 Dataset and Pre-processing

  2.4 Search and Retrieval

  2.5 Re-ranking retrieval and evaluation

3. Evaluation and 3.1 Search and Retrieval

  3.2 Re-ranking

4. Discussion

  4.1 Dataset and 4.2 Re-ranking

  4.3 Embeddings

  4.4 Volume-based, Region-based and Localized Retrieval and 4.5 Localization-ratio

5. Conclusion, Acknowledgement, and References

2 Materials and Methods

2.1 Vector Database and Indexing

In the context of image search, a database typically constitutes the central location where all representations of the images, a.k.a. embeddings, are stored together with their metadata, including annotations. A query allows the user or the system to request specific images in various ways, e.g., by providing a reference image or a textual description.

Figure 1: Overview of a retrieval system based on Khun Jush et al. [2023]. Step 1: 2D slices are extracted from the 3D volumes. Step 2: Feature extractors compute embeddings for the database slices and query volumes. Step 3: Database embeddings are indexed using HNSW or LSH. Step 4: Search and slice retrieval are performed, and a hit-table is saved (the hit-table records the occurrence of volume-ids per query volume or region, along with the sum of their total scores). Step 5: The results from slice retrieval are aggregated to retrieve the final volume.

The goal is to search the database for images that match the query. Accordingly, in this study the search process compares a query image with the images in the database and identifies the most similar ones via the cosine similarity of their embeddings. Throughout this process, we do not depend on any metadata at any stage. This metadata-independence is an intentional design choice, in contrast to widely used metadata-based image retrieval solutions, which frequently lack the necessary specificity in real-world retrieval applications. For small collections, similarity search is easy, but the complexity grows with the size of the database. Accuracy and speed are the key factors in search; naive approaches therefore typically fail on huge datasets.
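The cosine-similarity comparison described above can be sketched in a few lines of NumPy (the vectors here are toy values, not real embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0, 1.0])
candidate = np.array([2.0, 0.0, 2.0])
print(cosine_similarity(query, candidate))  # same direction -> 1.0
```

Because cosine similarity ignores vector magnitude, embeddings are often L2-normalized up front so that a plain inner product gives the same ranking.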

Indexing in the context of content-based image search involves creating a structured system that allows efficient storage and retrieval of images based on their visual content. A flat index is the simplest form of indexing: the vectors are used for search without any modification. In flat indexing, the query vector is compared to every full-size vector in the database and their distances are calculated; the k closest vectors are then returned as the k-nearest neighbors (kNN). While this method is the most accurate, it comes at the cost of significant search time [Aumüller et al., 2020]. To improve search time, two approaches can be employed: reducing the vector size through dimensionality reduction, e.g., by reducing the number of bits representing each vector, or reducing the search scope by clustering or organizing vectors into tree structures based on similarity or distance. Both yield an approximation of the true nearest neighbors, known as approximate nearest neighbor (ANN) search [Aumüller et al., 2020].
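Flat (exhaustive) kNN search can be illustrated with plain NumPy; this sketch uses synthetic vectors, and normalizing both sides makes the inner product equal to cosine similarity:

```python
import numpy as np

def knn_flat(query: np.ndarray, database: np.ndarray, k: int = 5):
    """Exhaustive kNN: compare the query against every database vector."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                     # one dot product per database vector: O(N*d)
    topk = np.argsort(-sims)[:k]      # indices of the k most similar vectors
    return topk, sims[topk]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128)).astype("float32")
q = db[42] + 0.01 * rng.normal(size=128).astype("float32")  # near-duplicate of row 42
idx, scores = knn_flat(q, db, k=3)
print(idx[0])  # 42 -- the perturbed source vector is the nearest neighbor
```

The full scan is what makes flat search exact but slow: every query touches all N vectors, which is exactly the cost that ANN methods trade away for speed.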

Several ANN methods are available. In the context of content-based volumetric medical image retrieval, Khun Jush et al. [2023] compared Locality Sensitive Hashing (LSH) [Charikar, 2002] and Hierarchical Navigable Small World (HNSW) [Malkov and Yashunin, 2018] for indexing and search. LSH hashes data points such that similar points are mapped to the same buckets with high probability, which makes the nearest-neighbor search more efficient by reducing the number of candidates to be examined. HNSW [Malkov and Yashunin, 2018] organizes data into a hierarchical graph structure: the upper layers are sparse and contain long-range connections, while the lower layers become progressively denser, down to the bottom layer that contains all data points. This structure enables efficient greedy navigation during search. Compared to LSH, HNSW typically enables faster search and requires less memory [Taha et al., 2024]. Based on the findings in [Khun Jush et al., 2023], HNSW was chosen as the indexing method in this study due to its speed advantage over LSH at comparable recall. Various index solutions are available to store and search vectors; in this study, we used the Facebook AI Similarity Search (FAISS) package, which enables fast similarity search [Johnson et al., 2019]. The indexing process involves running the feature extractors on the slices of each volumetric image and storing the output embeddings per slice. The produced representations are then added to the search index, which is later used for vector-similarity-based retrieval.

2.3 Dataset and Pre-processing

We designed a CBIR benchmark relying on the publicly available TotalSegmentator (TS) dataset [Wasserthal et al., 2023], version 1. This dataset comprises a total of 1,204 computed tomography (CT) volumes covering 104 annotated anatomical structures (TS, V1). The original dataset contains several fine-grained sub-classes, which we aggregated into coarser common classes where reasonable, e.g., all rib classes are mapped to a single class ‘rib’. The coarse organ labels help identify similarities and potential mismatches between neighboring anatomical regions, providing valuable insight into the proximity information of the target organ. Table 1 shows the mapping of the original TS classes to the coarse aggregated classes. For the sake of reproducibility, the query cases are sourced from the original TS test split, while the cases contained in the original TS train and validation splits serve as the database for searching. The search is assessed on the retrieval rate of the 29 coarse anatomical structures and the 104 original TS anatomical structures.
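The fine-to-coarse aggregation can be sketched as a simple lookup table (the label names below are illustrative stand-ins, not the exact TS class names or the paper's full mapping from Table 1):

```python
# Hypothetical fragment of the coarse-label aggregation described above:
# every fine-grained class maps onto one coarse class.
FINE_TO_COARSE = {
    # all rib labels collapse into a single 'rib' class
    **{f"rib_left_{i}": "rib" for i in range(1, 13)},
    **{f"rib_right_{i}": "rib" for i in range(1, 13)},
    "vertebrae_L1": "vertebrae",
    "vertebrae_L2": "vertebrae",
}

def to_coarse(label: str) -> str:
    """Map a fine-grained TS label to its coarse class; pass through unmapped labels."""
    return FINE_TO_COARSE.get(label, label)

print(to_coarse("rib_left_3"))   # -> 'rib'
print(to_coarse("liver"))        # unmapped labels stay unchanged -> 'liver'
```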

The models presented in Section 2.2 are 2D models used without fine-tuning to extract the embeddings. Thus, for each 3D volume, embeddings are extracted from its individual 2D slices. The input size for all models is 224 × 224 pixels, with the image replicated along the RGB channel axis. For all ViT-based models and the ResNet50 trained on fractal images, images are normalized to the ImageNet mean and standard deviation of (0.485, 0.456, 0.406) and (0.229, 0.224, 0.225), respectively. For the SwinTransformer and the ResNet50 model pre-trained on the RadImageNet dataset, images are normalized to a mean of 0.5 and a standard deviation of 0.5, following Mei et al. [2022]. The total size of the database is 290,757 embeddings, while the final query set derived from the test split comprises 20,442 embeddings.

Figure 2: Volume-based retrieval: for a query volume Vq covering a range of anatomical regions, a volume is retrieved that should cover the same anatomical regions. The similarity search is based on all slices from the query volume.
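The per-slice pre-processing described above can be sketched as follows (a minimal NumPy illustration; the function name is hypothetical, and the resizing of the slice to 224 × 224 with intensity scaling to [0, 1] is assumed to have happened beforehand):

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess_slice(slice_2d: np.ndarray, use_imagenet_stats: bool = True) -> np.ndarray:
    """Replicate a single CT slice along the channel axis and normalize.

    Assumes `slice_2d` is already 224x224 and scaled to [0, 1].
    """
    img = np.repeat(slice_2d[..., None], 3, axis=-1)  # grayscale -> 3-channel RGB
    if use_imagenet_stats:
        return (img - IMAGENET_MEAN) / IMAGENET_STD   # ViT-based / fractal ResNet50
    return (img - 0.5) / 0.5                          # SwinTransformer / RadImageNet ResNet50

x = preprocess_slice(np.full((224, 224), 0.5, dtype=np.float32))
print(x.shape)  # (224, 224, 3)
```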


:::info Authors:

(1) Farnaz Khun Jush, Bayer AG, Berlin, Germany (farnaz.khunjush@bayer.com);

(2) Steffen Vogler, Bayer AG, Berlin, Germany (steffen.vogler@bayer.com);

(3) Tuan Truong, Bayer AG, Berlin, Germany (tuan.truong@bayer.com);

(4) Matthias Lenga, Bayer AG, Berlin, Germany (matthias.lenga@bayer.com).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

