- Introduction
- Data
- Methods
- Results
- Conclusion
- Requirements
- Installation
- How to Run
- Data Access
- How to Cite
This repository presents an artificial intelligence (AI)-driven approach for the precise segmentation and quantification of histological features observed during the microscopic examination of tissue-engineered vascular grafts (TEVGs). The development of next-generation TEVGs is a leading trend in translational medicine, offering minimally invasive surgical interventions and reducing the long-term risk of device failure. However, the analysis of regenerated tissue architecture poses challenges, necessitating AI-assisted tools for accurate histological evaluation.
The study utilized a dataset comprising 104 Whole Slide Images (WSIs) obtained from biodegradable TEVGs implanted into the carotid arteries of 20 sheep. After six months, the sheep were euthanized to assess vascular tissue regeneration patterns. The WSIs were automatically sliced into 99,831 patches, which underwent filtering and manual annotation by pathologists. A total of 1,401 patches were annotated, identifying nine histological features: arteriole lumen (AL), arteriole media (AM), arteriole adventitia (AA), venule lumen (VL), venule wall (VW), capillary lumen (CL), capillary wall (CW), immune cells (IC), and nerve trunks (NT) (Figure 1). These annotations were meticulously verified by a senior pathologist, ensuring accuracy and consistency.
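The exact slicing parameters are not published in this summary; as an illustration, non-overlapping square patch extraction from a WSI region can be sketched as follows (the 256-px patch size and image dimensions below are assumptions, not the study's actual settings):

```python
def patch_grid(width: int, height: int, patch_size: int):
    """Yield top-left (x, y) coordinates of non-overlapping square patches.

    Patches that would extend past the image border are skipped.
    """
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            yield x, y

# A 1024x768 slide region with 256-px patches yields a 4x3 grid of 12 patches.
coords = list(patch_grid(1024, 768, 256))
```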
Figure 1. Annotation methodology for histology patches (top row) depicting features associated with blood vessel regeneration (replacement of a biodegradable polymer by de novo formed vascular tissue). Histological annotations delineated with segmentation masks (bottom row) include arteriole lumen (red), arteriole media (pink), arteriole adventitia (light pink), venule lumen (blue), venule wall (light blue), capillary lumen (brown), capillary wall (tan), immune cells (lime), and nerve trunks (yellow).
The methodology involved two main stages: hyperparameter tuning and model training. Six deep learning models (U-Net, LinkNet, FPN, PSPNet, DeepLabV3, and MA-Net) were rigorously tuned across 200 configurations to achieve optimal performance. Hyperparameters such as encoder architecture, input image size, optimizer, and learning rate were extensively explored using Bayesian optimization and HyperBand early termination strategies.
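HyperBand's early-termination component builds on successive halving: train many configurations briefly, keep the best fraction, and repeat with a larger budget. A minimal sketch of that core idea (the toy objective, budgets, and learning-rate grid below are illustrative only, not the study's actual search space):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Keep the best 1/eta of configurations each round, growing the budget.

    `evaluate(config, budget)` returns a loss (lower is better).
    """
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scores = [(evaluate(cfg, budget), cfg) for cfg in survivors]
        scores.sort(key=lambda pair: pair[0])
        survivors = [cfg for _, cfg in scores[: max(1, len(scores) // eta)]]
        budget *= eta
    return survivors[0]

# Toy objective: loss shrinks toward the config's intrinsic quality as budget grows.
configs = [{"lr": lr} for lr in (1.0, 0.1, 0.01, 0.001)]
best = successive_halving(configs, lambda c, b: abs(c["lr"] - 0.01) + 1.0 / b)
```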
Following the tuning stage, the models were trained and evaluated on the entire dataset using a 5-fold cross-validation approach (Figure 2). The folds preserved the integrity of subject groups within each subset, preventing data leakage. During training, various augmentation techniques were applied to expand the dataset and mitigate overfitting. In addition, the batch size was adjusted to keep GPU memory utilization at approximately 90-100%.
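A subject-grouped split (all patches from one sheep stay in one fold) can be sketched in plain Python; scikit-learn's `GroupKFold` provides the same guarantee in practice:

```python
def group_folds(sample_subjects, n_folds=5):
    """Assign each subject (e.g. a sheep ID) wholly to one fold, round-robin.

    Returns a list mapping sample index -> fold index, so no subject's
    patches are split across training and test folds.
    """
    subject_to_fold = {}
    fold_of = []
    for subj in sample_subjects:
        if subj not in subject_to_fold:
            subject_to_fold[subj] = len(subject_to_fold) % n_folds
        fold_of.append(subject_to_fold[subj])
    return fold_of

# Patches from 6 subjects spread over 5 folds; subject "s6" wraps back to fold 0.
folds = group_folds(["s1", "s1", "s2", "s3", "s4", "s5", "s6", "s6"])
```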
Figure 2. Comparative analysis of loss and DSC evolution during training and testing phases over 5-fold cross-validation with 95% confidence interval.
The MA-Net model achieved the highest mean Dice Similarity Coefficient (DSC) of 0.875, excelling in arteriole segmentation (Table 1). DeepLabV3 performed well in segmenting venous and capillary structures, while FPN exhibited proficiency in identifying immune cells and nerve trunks. An ensemble of these three models attained an average DSC of 0.889, surpassing their individual performances.
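One common way to build such an ensemble is to average the per-pixel class probabilities of the member models before taking the argmax; a minimal sketch for a single pixel (the probability values are made up for illustration):

```python
def ensemble_argmax(prob_maps):
    """Average per-class probabilities across models and pick the top class.

    `prob_maps` is a list of per-model probability vectors for one pixel.
    """
    n_models = len(prob_maps)
    n_classes = len(prob_maps[0])
    mean = [sum(p[c] for p in prob_maps) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__)

# Three models disagree on a pixel; averaging favours class 1.
pixel_class = ensemble_argmax([[0.2, 0.5, 0.3], [0.1, 0.6, 0.3], [0.4, 0.35, 0.25]])
```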
Table 1. Feature-specific and average Dice Similarity Coefficients of the studied models.
| Model | AL | AM | AA | VL | VW | CL | CW | IC | NT | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| U-Net | 0.931 | 0.907 | 0.820 | 0.797 | 0.766 | 0.801 | 0.783 | 0.920 | 0.966 | 0.855 |
| LinkNet | 0.898 | 0.881 | 0.825 | 0.799 | 0.773 | 0.778 | 0.774 | 0.935 | 0.925 | 0.843 |
| FPN | 0.919 | 0.904 | 0.805 | 0.852 | 0.800 | 0.756 | 0.755 | 0.955 | 0.981 | 0.859 |
| PSPNet | 0.872 | 0.838 | 0.830 | 0.784 | 0.734 | 0.728 | 0.722 | 0.937 | 0.959 | 0.823 |
| DeepLabV3 | 0.872 | 0.861 | 0.803 | 0.900 | 0.861 | 0.815 | 0.793 | 0.895 | 0.975 | 0.864 |
| MA-Net | 0.939 | 0.893 | 0.860 | 0.848 | 0.830 | 0.806 | 0.787 | 0.937 | 0.978 | 0.875 |
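For reference, the Dice Similarity Coefficient on binary masks is 2|A ∩ B| / (|A| + |B|); a minimal implementation over flattened 0/1 masks:

```python
def dice(pred, target):
    """Dice Similarity Coefficient for two binary masks, flattened to 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Both masks empty: define DSC as a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Two masks overlapping on 2 pixels, with 3 positive pixels each: 2*2 / (3+3)
score = dice([1, 1, 1, 0, 0], [0, 1, 1, 1, 0])
```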
Figure 3. Comparison of models for microvascular segmentation in tissue-engineered vascular grafts.
To illustrate the network predictions, we provide three patches showcasing the segmentation of the studied histological features (Figure 4). This figure presents predictions derived from the optimal solution: an ensemble of three models (MA-Net, DeepLabV3, and FPN).
Figure 4. Comparison between ground truth segmentation and ensemble predictions.
This study demonstrates the potential of deep learning models for precise segmentation of histological features in regenerated tissues, paving the way for improved AI-assisted workflows during the analysis of tissue-engineered medical devices. The obtained findings foster further research in this field, contributing to the advancement of translational medicine and the implementation of next-generation tissue-engineered constructs.
- Operating System
- macOS
- Linux
- Windows (limited testing carried out)
- Python 3.11.x
- Required core libraries: listed in `environment.yaml`
Step 1: Install Miniconda
Installation guide: https://docs.conda.io/projects/miniconda/en/latest/index.html#quick-command-line-install
Step 2: Clone the repository and change the current working directory
```bash
git clone https://github.com/ViacheslavDanilov/histology_segmentation.git
cd histology_segmentation
```
Step 3: Set up an environment and install the necessary packages
```bash
chmod +x make_env.sh
./make_env.sh
```
Specify the `data_path` and `save_dir` parameters in the `predict.yaml` configuration file. By default, all images within the specified `data_path` are processed and saved to the `save_dir` directory.

Available `data_path` options:
- Option 1 - Directory with images (default): `data/demo/input`
- Option 2 - Single image: `data/demo/input/011_0123.jpg`
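A minimal sketch of the relevant part of `predict.yaml`, showing only the two parameters described above (the `save_dir` value is a hypothetical example, not the repository default):

```yaml
# predict.yaml (sketch; only the two documented parameters are shown)
data_path: data/demo/input   # directory of images, or a path to a single image
save_dir: data/demo/output   # hypothetical output directory
```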
To run the pipeline, execute `predict.py` from your IDE or the command line:

```bash
python src/models/smp/predict.py
```
All essential components of the study, including the curated dataset and trained models, have been made publicly available:
- Dataset: https://zenodo.org/doi/10.5281/zenodo.10838383
- Models: https://zenodo.org/doi/10.5281/zenodo.10838431
Please cite our paper if you find our data, methods, or results helpful for your research:
Danilov V.V., Laptev V.V., Klyshnikov K.Yu., Stepanov A.D., Bogdanov L.A., Antonova L.V., Krivkina E.O., Kutikhin A.G., Ovcharenko E.A. (2024). AI-driven segmentation of microvascular features during histological examination of tissue-engineered vascular grafts. Frontiers in Cell and Developmental Biology. DOI: TO.BE.UPDATED.SOON