Deep learning-based transformation of H&E stained tissues into special stains


Training of stain transformation network

All of the stain transformation networks and virtual staining networks used in this paper were trained using GANs. Each of these GANs consists of a generator (G) and a discriminator (D). The generator performs the transformation of the input images \(x_{\mathrm{input}}\), while the discriminator helps train the network to generate images that match the distribution of the ground truth stained images; it does so by trying to discriminate between the generated images \(G(x_{\mathrm{input}})\) and the ground truth images \(z_{\mathrm{label}}\). The generator is in turn taught to generate images that cannot be classified correctly by the discriminator. This GAN loss is used in conjunction with two additional losses: a mean absolute error (L1) loss and a total variation (TV) loss. The L1 loss ensures that the transformations are performed accurately in space and color, while the TV loss acts as a regularizer that reduces noise created by the GAN loss. Together, the overall loss function is described as:

$$l_{\mathrm{generator}}=L_{1}\{z_{\mathrm{label}},G(x_{\mathrm{input}})\}+\alpha \times \mathrm{TV}\{G(x_{\mathrm{input}})\}+\beta \times \left(1-D(G(x_{\mathrm{input}}))\right)^{2}$$

(1)

where α and β are constants used to balance the terms of the loss function. The stain transformation networks are tuned such that the L1 loss makes up ~1% of the overall loss, the TV loss ~0.03%, and the discriminator loss the remaining ~99% (these relative ratios change over the course of training). The L1 portion of the loss can be written as:

$$L_{1}\{z,G\}=\frac{1}{P\times Q}\sum_{p}\sum_{q}\left|z_{p,q}-G(x_{\mathrm{input}})_{p,q}\right|$$

(2)

where p and q are the pixel indices and P and Q are the number of pixels along each dimension of the image. The total variation loss is defined as:

$$\mathrm{TV}(G(x_{\mathrm{input}}))=\sum_{p}\sum_{q}\left|G(x_{\mathrm{input}})_{p+1,q}-G(x_{\mathrm{input}})_{p,q}\right|+\left|G(x_{\mathrm{input}})_{p,q+1}-G(x_{\mathrm{input}})_{p,q}\right|$$

(3)

The discriminator network has a separate loss function which is defined as:

$$l_{\mathrm{discriminator}}=D(G(x_{\mathrm{input}}))^{2}+\left(1-D(z_{\mathrm{label}})\right)^{2}$$

(4)
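
For concreteness, Eqs. (1)–(4) translate directly into code. The following is a minimal NumPy sketch, assuming image arrays and scalar discriminator outputs; the weights alpha and beta shown are placeholders, not the values used in the paper.

```python
import numpy as np

def l1_loss(z_label, g_output):
    """Mean absolute error over all pixels, Eq. (2)."""
    return np.mean(np.abs(z_label - g_output))

def tv_loss(g_output):
    """Anisotropic total variation of the generated image, Eq. (3)."""
    dh = np.abs(g_output[1:, :] - g_output[:-1, :])   # vertical finite differences
    dw = np.abs(g_output[:, 1:] - g_output[:, :-1])   # horizontal finite differences
    return dh.sum() + dw.sum()

def generator_loss(z_label, g_output, d_of_g, alpha=0.02, beta=2000.0):
    """Overall generator loss, Eq. (1); alpha and beta are placeholder weights."""
    return (l1_loss(z_label, g_output)
            + alpha * tv_loss(g_output)
            + beta * (1.0 - d_of_g) ** 2)

def discriminator_loss(d_of_g, d_of_label):
    """Discriminator loss, Eq. (4)."""
    return d_of_g ** 2 + (1.0 - d_of_label) ** 2
```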

A modified U-net1 neural network architecture was used for the generator, while the discriminator used a VGG-style2 network. The U-net architecture uses four down-blocks and four up-blocks, each containing three convolutional layers with a 3 × 3 kernel size, activated by the LeakyReLU activation function, which is defined as:

$$\mathrm{LeakyReLU}(x)=\begin{cases}x & \text{for } x>0\\ 0.1\,x & \text{otherwise}\end{cases}$$

(5)

The first down-block increases the number of channels to 32, while each subsequent down-block increases the number of channels by a factor of two. Each down-block ends with an average pooling layer that has both a stride and a kernel size of two. The up-blocks begin with a bicubic up-sampling prior to the application of the convolutional layers. Between the down-block and up-block at each level, a skip connection passes data through the network without it having to go through all of the blocks. After the final up-block, a convolutional layer maps back to three channels.
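
The generator described above can be sketched with the tf.keras functional API. This is an illustrative re-implementation only (the paper used TensorFlow 1.8); the bicubic up-sampling is approximated here with bilinear up-sampling, and any unstated details are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, channels):
    """Three 3x3 convolutions, each followed by LeakyReLU with slope 0.1 (Eq. 5)."""
    for _ in range(3):
        x = layers.Conv2D(channels, 3, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
    return x

def build_generator(input_shape=(256, 256, 3), base_channels=32):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []

    # Four down-blocks: the first maps to 32 channels, the rest double the channel
    # count; each ends with a 2x2 average pooling layer of stride 2.
    channels = base_channels
    for _ in range(4):
        x = conv_block(x, channels)
        skips.append(x)                                   # skip connection to the matching up-block
        x = layers.AveragePooling2D(pool_size=2, strides=2)(x)
        channels *= 2

    # Four up-blocks: up-sample (bicubic in the paper; bilinear here for brevity),
    # concatenate the skip connection, then apply the convolutional block.
    for skip in reversed(skips):
        channels //= 2
        x = layers.UpSampling2D(size=2, interpolation="bilinear")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, channels)

    # Final convolution maps back to three output channels (a YCbCr image).
    outputs = layers.Conv2D(3, 3, padding="same")(x)
    return tf.keras.Model(inputs, outputs)
```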

The discriminator is made up of five blocks. Each block contains two convolutional layer/LeakyReLU pairs, which together increase the number of channels by a factor of two, followed by an average pooling layer with a stride of two. After the five blocks, two fully connected layers reduce the output dimensionality to a single value, which in turn is fed into a sigmoid activation function to calculate the probability that the input to the discriminator network is real, i.e., not generated.
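
A matching sketch of the VGG-style discriminator, again in tf.keras and for illustration only; the starting channel count and the width of the first fully connected layer are not stated in this excerpt and are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(input_shape=(256, 256, 3), base_channels=32):
    """Five conv blocks, two fully connected layers, sigmoid output (assumed widths)."""
    inputs = layers.Input(shape=input_shape)
    x, channels = inputs, base_channels
    for _ in range(5):
        # Two convolution + LeakyReLU pairs that together double the channel count,
        # followed by average pooling with a stride of two.
        x = layers.Conv2D(channels, 3, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
        channels *= 2
        x = layers.Conv2D(channels, 3, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
        x = layers.AveragePooling2D(pool_size=2, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(channels)(x)                     # first fully connected layer (assumed width)
    x = layers.LeakyReLU(alpha=0.1)(x)
    x = layers.Dense(1, activation="sigmoid")(x)      # probability that the input is real
    return tf.keras.Model(inputs, x)
```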

Both the generator and discriminator were trained using the adaptive moment estimation (Adam)26 optimizer to update the learnable parameters. A learning rate of 1 × 10⁻⁵ was used for the discriminator network, while a rate of 1 × 10⁻⁴ was used for the generator network. For each iteration of discriminator training, the generator network was trained for seven iterations. This ratio was reduced by one every 4000 discriminator iterations, down to a minimum of three generator iterations per discriminator iteration. The network was trained for 50,000 iterations of the discriminator, with the model being saved every 1000 iterations. The best generator model was chosen manually from these saved models by visually comparing their outputs. For all three of the generator networks (MT, PAS, and JMS), the 15,000th iteration of the discriminator was chosen as the optimal model.
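
The alternating update schedule can be expressed as a small helper returning how many generator updates to run per discriminator iteration. This is a minimal sketch assuming the ratio decays from 7:1 by one every 4000 discriminator iterations down to 3:1, as described above; the Adam learning rates themselves (1 × 10⁻⁴ generator, 1 × 10⁻⁵ discriminator) would be set on the respective optimizers.

```python
def generator_iters_per_disc_iter(disc_iteration, start_ratio=7, min_ratio=3, step=4000):
    """Generator updates per discriminator update, decaying by one every `step`
    discriminator iterations from `start_ratio` down to `min_ratio`."""
    return max(min_ratio, start_ratio - disc_iteration // step)

# Example: 7 generator updates at iteration 0, 6 at 4000, ..., 3 from 16000 onward.
for it in (0, 3999, 4000, 12000, 16000, 49999):
    print(it, generator_iters_per_disc_iter(it))
```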

The stain transformation networks were trained using pairs of 256 × 256-pixel image patches generated by the class conditional virtual staining network (label-free), downsampled by a factor of 2 (to match 20× magnification). These patches were randomly cropped from one of 1013 images of 712 × 712 pixels coming from ten unique tissue sections, yielding ~7836 unique patches usable for training. Seventy-six additional images from three unique tissue sections were used to validate the network. These images were augmented using the eight stain augmentation networks and further augmented through random rotation and flipping. The diagnoses of the samples used for training and validation are given in Supplementary Tables 2 and 3. Each of the three stain transformation networks (MT, PAS, and JMS) was trained using images generated by the label-free virtual staining networks from the same input autofluorescence images. Furthermore, the images were converted to the YCbCr color space27 before being used as either the input or ground truth for the neural networks.
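
Data preparation along these lines (2× down-sampling, YCbCr conversion, random 256 × 256 crops, and random flip/rotation) might look as follows. Pillow is used here purely for illustration; the exact pipeline, file formats, and sampling details are assumptions.

```python
import numpy as np
from PIL import Image

def random_ycbcr_patch(image_path, patch=256, downsample=2, rng=None):
    """Load a virtually stained field of view, downsample by 2 (40x -> 20x equivalent),
    convert RGB to YCbCr, and return a random 256 x 256 crop with flip/rotation augmentation."""
    rng = rng or np.random.default_rng()
    img = Image.open(image_path)
    img = img.resize((img.width // downsample, img.height // downsample), Image.BICUBIC)
    ycbcr = np.asarray(img.convert("YCbCr"), dtype=np.float32)
    top = int(rng.integers(0, ycbcr.shape[0] - patch + 1))
    left = int(rng.integers(0, ycbcr.shape[1] - patch + 1))
    crop = ycbcr[top:top + patch, left:left + patch]
    # Random flip and 90-degree rotation (8 orientations in total).
    if rng.random() < 0.5:
        crop = crop[:, ::-1]
    crop = np.rot90(crop, k=int(rng.integers(0, 4)))
    return crop
```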

As this stain transformation neural network performs an image-to-image transformation, it learns to transform specific structures using the ~513 million pixels in the dataset, each of which is independently accounted for in the loss function. Furthermore, since the network learns to convert structures that are common across many different types of samples, it can be applied to tissues with diseases that the network was not trained on. When used in conjunction with the eight data augmentation networks, which alter the values of these pixels, as well as random rotation and flipping (an additional 8× augmentation), many billions of pixels are effectively available to learn the desired stain-to-stain transformation. Because of these advantages, a much smaller number of training samples from unique patients can be used than would be required for a typical classification neural network.
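
As a back-of-the-envelope check of the pixel counts quoted above (the 8 × 8 multiplier reflects the eight stain augmentation networks and the eight flip/rotation orientations):

$$1013\times 712\times 712\approx 5.1\times 10^{8}\approx 513\ \mathrm{million\ pixels}$$

$$5.1\times 10^{8}\times 8\ (\mathrm{stain\ augmentations})\times 8\ (\mathrm{rotations/flips})\approx 3.3\times 10^{10}\ \mathrm{pixels}$$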

Image data acquisition

All of the neural networks were trained using data obtained by microscopic imaging of thin tissue sections coming from needle core kidney biopsies. Unlabeled tissue sections were obtained from the UCLA Translational Pathology Core Laboratory (TPCL) under UCLA IRB 18-001029, from an existing specimen. The autofluorescence images were captured using an Olympus IX-83 microscope (controlled with the MetaMorph microscope automation software, version 7.10.161), using a DAPI filter cube (Semrock OSFI3-DAPI5060C, EX 377/50 nm EM 447/60 nm) as well as a Texas Red filter cube (Semrock OSFI3-TXRED-4040C, EX 562/40 nm EM 624/40 nm) to generate the second autofluorescence image channel.

In order to create the training dataset for the virtual staining network, pairs of matched unlabeled autofluorescence images and brightfield images of the histochemically stained tissue were obtained. H&E, MT, and PAS histochemical staining were performed by the Tissue Technology Shared Resource at UC San Diego Moores Cancer Center. The JMS staining was performed by the Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA. These stained slides were digitally scanned using a brightfield slide scanning microscope (Leica Biosystems Aperio AT2, 40×/0.75NA objective). All of the slides and digitized slide images were prepared from existing specimens; therefore, this work did not interfere with standard practices of care or sample collection procedures. The H&E image dataset used for the study came from the existing UCLA pathology database containing WSIs of stained kidney needle core biopsies, under UCLA IRB 18-001029. These slides were similarly imaged using Aperio AT2 slide scanning microscopes.

Image co-registration

To train the label-free virtual staining networks, the autofluorescence images of unlabeled tissue were co-registered to brightfield images of the same tissue after it had been histochemically stained. This image co-registration was done through a multistep process28, beginning with a coarse matching that was progressively refined until subpixel-level accuracy was achieved. The registration process first used a cross-correlation-based method to extract the most similar portions of the two images. Next, the matching was improved using multimodal image registration29; this step applied an affine transformation to the images of the histochemically stained tissue to correct for any changes in size or rotation. To achieve pixel-level co-registration accuracy, an elastic registration algorithm was then applied. However, this algorithm relies upon local correlation-based matching; therefore, to ensure that this matching could be performed accurately, an initial rough virtual staining network was applied to the autofluorescence images7,8. These roughly stained images were then co-registered to the brightfield images of the stained tissue using a correlation-based elastic pyramidal co-registration algorithm30.

Once the image co-registration was complete, the autofluorescence images were normalized by subtracting the average pixel value of the tissue area of the WSI and then dividing by the standard deviation of the pixel values in the tissue area.
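
Although the co-registration in the paper was implemented in MATLAB (see Implementation details), the coarse correlation-based matching and the final autofluorescence normalization can be illustrated in Python. In this sketch, skimage's phase_cross_correlation stands in for the coarse matching step, both inputs are assumed to be single-channel arrays of the same shape, and the tissue mask is assumed to be available.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def coarse_shift(autofluorescence, brightfield_gray):
    """Estimate the translation between the two modalities via cross-correlation.
    Stand-in for the coarse matching step; the paper's pipeline is MATLAB-based."""
    shift, error, _ = phase_cross_correlation(autofluorescence, brightfield_gray)
    return shift  # (row, column) offset

def normalize_autofluorescence(wsi, tissue_mask):
    """Subtract the mean and divide by the standard deviation of the tissue-area pixels."""
    tissue_pixels = wsi[tissue_mask]
    return (wsi - tissue_pixels.mean()) / tissue_pixels.std()
```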

Class conditional virtual staining of label-free tissue

A class conditional GAN was used to generate both the input and the ground truth images used during the training of the presented stain transformation networks (Fig. 2a). This class conditional GAN allows multiple stains to be created simultaneously using a single deep neural network8. To ensure that the features of the virtually stained images are highly consistent between stains, a single network must be used to generate the stain transformation network input (virtual H&E) and the corresponding ground truth images (virtual special stains); these are automatically registered to each other because the information source is the same image. This is only required for the training of the stain transformation neural networks and is beneficial, as it allows the H&E and special stains to be perfectly matched. By contrast, an alternative image dataset made up of co-registered virtually stained and histochemically stained fields of view would suffer from imperfect co-registration and from deformations caused by the staining process; these issues are eliminated by using a single class conditional GAN to generate both the input and the ground truth images.

This network uses the same general architecture as the network described in the previous section, with the addition of a digital staining matrix concatenated to the network input for both the generator and discriminator8. This staining matrix defines the stain to be generated at each coordinate within a given image FOV. Therefore, the loss functions for the generator and discriminator are:

$$l_{\mathrm{generator}}=L_{1}\{z_{\mathrm{label}},G(x_{\mathrm{input}},\widetilde{\mathbf{c}})\}+\alpha \times \mathrm{TV}\{G(x_{\mathrm{input}},\widetilde{\mathbf{c}})\}+\beta \times \left(1-D(G(x_{\mathrm{input}},\widetilde{\mathbf{c}}),\widetilde{\mathbf{c}})\right)^{2}$$

(6)

$$l_{\mathrm{discriminator}}=D(G(x_{\mathrm{input}},\widetilde{\mathbf{c}}),\widetilde{\mathbf{c}})^{2}+\left(1-D(z_{\mathrm{label}},\widetilde{\mathbf{c}})\right)^{2}$$

(7)

where \(\widetilde{\mathbf{c}}\) is a one-hot encoded digital staining matrix with the same pixel dimensions as the input image. When used in the testing phase, the one-hot encoding allows the network to generate two separate stains (H&E and the corresponding special stain) for each FOV.

The number of channels in each layer used by this deep neural network was increased by a factor of two compared to the stain transformation architecture described above to account for the larger dataset size and the need for the network to perform two distinct stain transformations.
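
The digital staining matrix is a per-pixel one-hot class map concatenated to the network input. A minimal sketch of how such a conditioning tensor might be built follows; the number of stain classes, their ordering, and the two-channel autofluorescence input shape are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def staining_matrix(height, width, stain_index, num_stains=4):
    """One-hot digital staining matrix with the same pixel dimensions as the input.
    stain_index selects the target stain (e.g. 0 = H&E, 1 = MT, 2 = PAS, 3 = JMS;
    this ordering is illustrative only)."""
    c = np.zeros((height, width, num_stains), dtype=np.float32)
    c[..., stain_index] = 1.0
    return c

# The conditioning map is concatenated to the network input along the channel axis
# before being fed to the generator (and, together with its output, to the discriminator).
x_input = np.zeros((256, 256, 2), dtype=np.float32)   # assumed two autofluorescence channels
conditioned = np.concatenate([x_input, staining_matrix(256, 256, stain_index=1)], axis=-1)
```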

A set of four adjacent tissue sections was used to train the virtual staining networks for H&E and the three special stains. The H&E portion of all three networks was trained with 1058 images of 1424 × 1424 pixels from ten unique patients, the PAS network with 946 such images from 11 unique patients, the Jones network with 816 such images from ten unique patients, and the MT network with 966 such images from ten unique patients. A list of the samples used to train the various networks, along with the original diagnoses of the patients, is given in Supplementary Table 2. All of the stains were validated using the same three validation slides.

Style transfer for H&E image data augmentation

In order to ensure that the stain transformation neural network can be applied to a wide variety of histochemically stained H&E images, we use the CycleGAN18 model to augment the training dataset by performing style transfer (Fig. 2b). As discussed, these CycleGAN networks only augment the image data used as inputs during the training phase. The CycleGAN model learns to map between two domains \(X\) and \(Y\) given training samples \(x\) and \(y\), where \(X\) is the domain of the original virtually stained H&E images and \(Y\) is the domain of H&E images generated by a different lab or hospital. The model performs two mappings, \(G:X\to Y\) and \(F:Y\to X\). In addition, two adversarial discriminators, \(D_X\) and \(D_Y\), are introduced. A diagram showing the relationship between these networks is shown in Supplementary Fig. 4.

The loss function of the generator, \(l_{\mathrm{generator}}\), contains two types of terms: adversarial losses \(l_{\mathrm{adv}}\), which match the stain style of the generated images to the style of the histochemically stained images in the target domain, and cycle consistency losses \(l_{\mathrm{cycle}}\), which prevent the learned mappings \(G\) and \(F\) from contradicting each other. The overall loss is therefore described by:

$$l_{\mathrm{generator}}=\lambda \times l_{\mathrm{cycle}}+\varphi \times l_{\mathrm{adv}}$$

(8)

where \(\lambda\) and \(\varphi\) are relative weights/constants. For each of the networks, we set \(\lambda = 10\) and \(\varphi = 1\). Each generator is associated with a discriminator, which ensures that the generated image matches the distribution of the ground truth. The adversarial losses for each of the generator networks can be written as:

$$l_{\mathrm{adv},X\to Y}=\left(1-D_{Y}(G(x))\right)^{2}$$

(9)

$$l_{\mathrm{adv},Y\to X}=\left(1-D_{X}(F(y))\right)^{2}$$

(10)

And the cycle consistency loss can be described as:

$$l_{\mathrm{cycle}}=L_{1}\{y,G(F(y))\}+L_{1}\{x,F(G(x))\}$$

(11)

The adversarial loss terms used to train \(D_X\) and \(D_Y\) are defined as:

$$l_{D_{X}}=\left(1-D_{X}(x)\right)^{2}+D_{X}(F(y))^{2}$$

(12)

$$l_{D_{Y}}=\left(1-D_{Y}(y)\right)^{2}+D_{Y}(G(x))^{2}$$

(13)

For these CycleGAN models, \(G\) and \(F\) use U-net architectures similar to that of the stain transformation network: each consists of three down-blocks followed by three up-blocks, identical to the corresponding blocks in the stain transformation network. \(D_X\) and \(D_Y\) also have architectures similar to the discriminator of the stain transformation network, but with four blocks rather than five.
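
Eqs. (8)–(13) translate directly into code. The sketch below assumes scalar discriminator outputs and callable networks, and uses λ = 10 and φ = 1 as stated above; it is illustrative rather than the paper's implementation.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two image arrays."""
    return np.mean(np.abs(a - b))

def cyclegan_generator_loss(x, y, G, F, D_X, D_Y, lam=10.0, phi=1.0):
    """Combined CycleGAN generator loss, Eqs. (8)-(11)."""
    adv = (1.0 - D_Y(G(x))) ** 2 + (1.0 - D_X(F(y))) ** 2      # Eqs. (9) and (10)
    cycle = l1(y, G(F(y))) + l1(x, F(G(x)))                    # Eq. (11)
    return lam * cycle + phi * adv                             # Eq. (8)

def cyclegan_discriminator_losses(x, y, G, F, D_X, D_Y):
    """Adversarial losses for the two discriminators, Eqs. (12) and (13)."""
    l_dx = (1.0 - D_X(x)) ** 2 + D_X(F(y)) ** 2
    l_dy = (1.0 - D_Y(y)) ** 2 + D_Y(G(x)) ** 2
    return l_dx, l_dy
```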

During training, the Adam optimizer was used to update the learnable parameters with a learning rate of 2 × 10⁻⁵ for both the generator and discriminator networks. For each step of discriminator training, one iteration of generator training was performed, and the training batch size was set to 6.

A list of the original diagnoses of the samples used to train the CycleGAN stain augmentation networks can be seen in Supplementary Table 3. The same table also indicates how many FOVs were used for each sample used to train the CycleGAN network.

Training of single-stain virtual staining networks

In addition to the network that performs multiple virtual stains with a single neural network, separate networks that each generate only one individual virtual stain were also trained. These networks were used to perform the rough virtual staining that enables the elastic co-registration. They use the same general architecture as the stain transformation networks, with the only difference being that the first block in both the generator and the discriminator increases the number of channels to 64. The input and output images are the autofluorescence images and the histochemically stained images, respectively, processed using the registration procedure described in the image co-registration section.

Implementation details

The image co-registration was implemented in MATLAB using version R2018a (The MathWorks Inc.). The neural networks were trained and implemented using Python version 3.6.2 with TensorFlow version 1.8.0. The timing was measured on a Windows 10 computer with two Nvidia GeForce GTX 1080 Ti GPUs, 64GB of RAM, and an Intel I9-7900X CPU.

Pathologic evaluation of kidney biopsies

An initial study of 16 sections, comparing the diagnoses made with H&E only against the diagnoses made with H&E as well as the stain-transformed special stains, was first performed to determine the feasibility of the technique. For this initial evaluation, 16 nonneoplastic kidney cases were selected by a board-certified kidney pathologist (J.E.Z.) to represent a variety of kidney diseases (listed in Supplementary Data 1). For each case, the WSI of the histochemically stained H&E slide, along with a worksheet that included a brief clinical history, was presented to three board-certified renal pathologists (W.D.W., M.F.P.D., and A.E.S.). The diagnostic worksheet can be seen in Supplementary Table 4. The WSIs were exported to the Zoomify format31 and uploaded to the GIGAmacro32 website to allow the pathologists to confidentially view the images using a standard web browser. The WSIs were viewed using standard displays (e.g., FullHD LCD monitors, 1920 × 1080 pixels).

In the diagnostic worksheet, the reviewers were given the H&E WSI and a brief patient history and asked to make a preliminary diagnosis, quantify certain features of the biopsy (i.e., the number of glomeruli and arteries), and provide additional comments if necessary. After a >3-week washout period to reduce the pathologists’ familiarity with the cases, the three reviewing pathologists received, in addition to the same histochemically stained H&E WSIs and the same patient medical history, three computationally generated special stain WSIs for each case: MT, PAS, and JMS. Given these slides, they were asked to provide a preliminary diagnosis for a second time. This >3-week washout period was chosen to be 1 week longer than that recommended by the College of American Pathologists Pathology and Laboratory Quality Center guidelines33, ensuring that the pathologists were not influenced by their previous diagnoses.

To test the hypothesis that additional stain-transformed WSIs can improve the preliminary diagnosis, an adjudicator pathologist (J.E.Z.), who was not among the three diagnosticians, judged each case as a Concordance (C), Discordance (D), or Improvement (I) between the quality of the first and second round of preliminary diagnoses provided by the diagnosticians (see Supplementary Table 4).

To expand the total number of cases to 58 and perform the third study (Fig. 3), the same set of steps was repeated. To allow for higher throughput, the WSIs were in this case uploaded to a custom-built online file viewing server based on the Orthanc server package34. Using this online server, the user is able to swap between the various cases. For each case, the patient history is presented, along with the WSI and the option to swap between the various stains, where applicable. The pathologists were asked to enter their diagnosis, the chronicity, and any comments into text boxes within the interface.

Once the pathologists had completed the diagnoses with H&E only as well as with H&E and the stain-transformed special stains, another >3-week washout period was observed. Following this second washout period, the pathologists were given WSIs of the original histochemically stained H&E along with the three histochemically stained special stains from serial tissue sections. Two of the cases used in the preliminary study were excluded from the final analysis because WSIs of the three special stains could not be obtained from serial tissue sections. For the first of these excluded cases, all of the pathologists’ diagnoses were improved using the stain-to-stain transformation; for the second, one of the diagnoses was improved while the other two pathologists’ diagnoses were concordant.

The pathologists’ diagnoses and comments can be found in Supplementary Data 1. Pathologist 2 was replaced for the expanded study due to limited time availability; a separate page therefore contains the initial study diagnoses for this pathologist.

Statistical analysis

Using the preliminary study of 16 samples, we calculated that a total of 41 samples would be needed to show statistical significance (using a power of 0.8, an alpha level of 0.05, and a one-tailed t-test). Therefore, the total number of patients was increased to 58 to ensure that the study was sufficiently powered.
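
A sample-size calculation of this kind can be reproduced with statsmodels. The effect size below is a hypothetical placeholder, since the effect size estimated from the preliminary study is not reported in this excerpt; the call only illustrates the procedure.

```python
from statsmodels.stats.power import TTestPower

# Hypothetical Cohen's d estimated from a preliminary study; NOT the value used in
# the paper, which is not reported here.
effect_size = 0.4

n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8,
                             alternative="larger")
print(round(n))  # required number of cases for a one-tailed, one-sample t-test
```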

A one-tailed t-test was used to determine whether a statistically significant number of improvements were made when using either [H&E and stain-transformed special stains] or [H&E and histochemically stained special stains] over [H&E] images alone. The statistical analysis assigned a score of +1 to any improvement, −1 to any discordance, and 0 to any concordance. The score for each case was then averaged among the three pathologists who evaluated the case, and the test showed that the amount of improvement (i.e., whether the average score is greater than zero) across the 58 cases was statistically significant.

A chi-squared test with two degrees of freedom was used to compare the proportions of improvements, concordances, and discordances between the methods tested above. The improvements, concordances, and discordances for each pathologist were compared individually.

For all tests, a P value of 0.05 or less was considered to be significant.
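
The scoring and the two tests described above could be run with SciPy as follows. The score array and the contingency counts are illustrative data, not the study results; only the scoring scheme (+1/0/−1 averaged over three pathologists across 58 cases) and the test choices follow the description above.

```python
import numpy as np
from scipy import stats

# Per-case scores averaged over the three pathologists: +1 improvement,
# -1 discordance, 0 concordance. The values below are illustrative only.
case_scores = np.array([1.0, 0.0, 1/3, 0.0, 2/3, 0.0, 1.0, 1/3] * 7 + [0.0, 1.0])
assert len(case_scores) == 58

# One-tailed, one-sample t-test of whether the mean score exceeds zero.
t_stat, p_value = stats.ttest_1samp(case_scores, popmean=0.0, alternative="greater")

# Chi-squared test (2 degrees of freedom) comparing the proportions of
# improvements / concordances / discordances between two methods for one
# pathologist (counts are illustrative).
counts = np.array([[20, 30, 8],    # method A: improvements, concordances, discordances
                   [28, 26, 4]])   # method B
chi2, p_chi, dof, _ = stats.chi2_contingency(counts)
```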

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
