
Positron Emission Tomography

PET basics

Positron Emission Tomography, or PET, uses trace amounts of short-lived, radioactively labeled molecules, usually called tracers or radiotracers, which are injected into the bloodstream to map functional processes in the brain. When the material undergoes radioactive decay, a positron is emitted, which can be picked up by the detectors.

Areas of high radioactivity are associated with brain activity: because blood flow is higher in activated brain areas than in inactive ones, the scanner will detect more gamma rays (see below) coming from parts of the brain that are working harder. On a PET scan, regions of the brain show up in different colors depending on the degree of activity in those regions. Yellow and red regions are “hot” and indicate high brain activity, while blue and black “cold” regions indicate little to no brain activity.

The scanner

The scanner consists of a ring of detectors that surround the subject, just like the MRI scanner magnets! But they work differently: the PET scanner detectors contain crystals that scintillate (give off light) in response to gamma rays, which are extremely high-energy rays of light. Each time a crystal in a detector absorbs a gamma ray, this is called an event. When two detectors exactly opposite each other on the ring simultaneously detect a gamma ray, a computer hooked up to the scanner records this as a coincidence event. A coincidence event represents a line in space connecting those two detectors, and it is assumed that the source of the two gamma rays lies somewhere along that line. The computer records all of the coincidence events that occur during the imaging period and then reconstructs these data to produce cross-sectional images, which are used to construct a 3D volume.
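
For intuition, here is a minimal, illustrative sketch (not scanner software) of how a coincidence event maps to a line of response between two opposing detectors on a ring; the detector count and geometry are assumptions made for the example.

    import numpy as np

    N_DETECTORS = 64  # illustrative ring size

    def detector_position(index, n=N_DETECTORS):
        # Detectors are assumed to sit evenly spaced on a ring of unit radius.
        theta = 2 * np.pi * index / n
        return np.array([np.cos(theta), np.sin(theta)])

    def line_of_response(det_a, det_b):
        # A coincidence event: the source is assumed to lie somewhere on the
        # segment joining the two detectors that fired simultaneously.
        return detector_position(det_a), detector_position(det_b)

    # Example: detectors 3 and 35 fire together (35 = 3 + 64/2, i.e., opposite).
    p1, p2 = line_of_response(3, 35)

Reconstruction then amounts to accumulating many such lines and solving for the activity distribution that best explains them.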

[Image: PET scanner]

The tracer

The tracer is usually a substance that can be broken down (metabolized) by cells in the body, labeled with a radioactive isotope (radioisotope). The brain function measured by a PET scan varies according to the type of radioisotope that is used. There is minimal risk involved, since the dose of radiation is low and the isotope is quickly eliminated from the body through urination. More specifically, radioactive tracers are made up of carrier molecules that are bonded tightly to a radioactive atom. These carrier molecules vary greatly depending on the purpose of the scan. Some tracers employ molecules that interact with a specific protein or sugar (like glucose) in the body and can even employ the patient’s own cells.

The radiotracer most widely used to study brain function is fluorodeoxyglucose, or FDG, which is fluorine-18 attached to a glucose molecule and gives information about sugar metabolism in the brain. Many more radiotracers exist, and which one is chosen for a specific PET scan depends on what type of brain function a researcher wants to study.

Here is a complete list of PET radiotracers

The scan

After it has been injected into the bloodstream, the isotope, which is very unstable, starts to decay, becoming less radioactive over time. In the process it emits a positron (the positively charged antiparticle of the electron). When a positron collides with an electron, the two particles annihilate each other, producing two gamma rays with the same energy but traveling in opposite directions. These gamma rays leave the subject’s body and are sensed by two detectors positioned 180 degrees from each other on the scanner, which is recorded as a coincidence event. A computer can then determine where the gamma rays came from in the brain and generate a three-dimensional image.
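
For reference, the energy of each gamma ray follows directly from the electron rest mass (a standard physics fact, not specific to this text): since the electron and positron are essentially at rest when they annihilate,

    E_\gamma = m_e c^2 \approx 511\,\mathrm{keV}

and momentum conservation forces the two photons to travel back-to-back, which is exactly what the coincidence detection described above exploits.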

The quality of a PET scan is not affected by small movements, so the subject does not have to remain as still for a PET scan as they would for an fMRI or MRI scan.

[Image: PET scan]

Considerations

PET scans are considered to have relatively poor spatial resolution, so the images may not be very clear.  Due to this, it is common for PET to be used together with CT or MRI.

PET images are also affected by the physics of the scanner, which makes PET scans look smoother than the underlying activity. This should be taken into account in the processing steps before analysis.

Also, the use of radiation, even in a small dose, always involves a slight risk.


PET processing/analysis (static)

PET frames averaging and motion correction

Sometimes, when dynamic scans (time-series data) are acquired, they must be corrected for patient motion by rigidly registering each subsequent frame to the image’s first frame. A common next step is to average the resulting co-registered frames to produce a single static image. This average image is then used as the reference for motion correction (see the dynamic part for an aim other than averaging), normally using a rigid (6-degrees-of-freedom) transformation. A minimal sketch is given below.
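
This sketch assumes the 4D data are already loaded as a numpy array and that a 6-parameter rigid registration routine (e.g., SPM realign or FSL mcflirt) is available as a black box called rigid_register (a placeholder, not a real API):

    import numpy as np

    def align_frames(frames, rigid_register):
        # frames: 4D array (x, y, z, time); the first frame is the target.
        reference = frames[..., 0]
        aligned = [reference]
        for t in range(1, frames.shape[-1]):
            aligned.append(rigid_register(frames[..., t], reference))
        return np.stack(aligned, axis=-1)

    def average_image(aligned_frames):
        # The co-registered frames are averaged into a single static image.
        return aligned_frames.mean(axis=-1)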

PET between-modality coregistration

For each subject, the static PET image is then co-registered with the corresponding MRI image (or pre-processed MRI) by means of a between-modality coregistration method. The PET images are finally re-sampled to the higher resolution of the MRI. The most reliable coregistration algorithms use information theory. The recommendation here is to employ the Normalized Mutual Information cost function to estimate a 12-parameter (degrees of freedom) affine transformation matrix that maps voxels from PET to MRI space, as implemented in SPM. Some other (non-free) programs, such as PMOD, allow the use of the same between-modality transformations as SPM.
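
For intuition, here is a minimal numpy sketch of the Normalized Mutual Information measure itself (the bin count is an arbitrary illustrative choice, and SPM's actual implementation differs in its details):

    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=64):
        # Joint intensity histogram of the two (already overlapping) images.
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        # Marginal and joint entropies (0 * log 0 is treated as 0).
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
        return (hx + hy) / hxy  # larger when the images are well aligned

The coregistration algorithm searches over the 12 affine parameters for the transformation that maximizes this quantity.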

PET Partial Volume Effects Correction

PET images suffer from the limited and relatively low spatial resolution of the scanner. As a result, in structures with small dimensions, such as the neocortex, the apparent radiotracer concentration is influenced by surrounding structures (spill-over or cross-contamination), a phenomenon known as the partial-volume effect (PVE), which affects the quantitative accuracy of the observed images. The degree to which a structure suffers from PVEs depends on its size: smaller structures tend to be more severely affected than larger ones. This effect is particularly critical when the relative proportion of brain tissue components is altered, such as when imaging degenerative diseases in which cortical (gray matter) atrophy is present, e.g., in mild cognitive impairment (MCI) and Alzheimer’s disease (AD).

Normally abbreviated PVC or PVEc, partial volume effects correction (hereafter called PVEc) tries to deal with such effects and compensate for the effects of resolution, improving quantitative accuracy. PVEc techniques are either data-driven or make use of anatomical information from other modalities, such as magnetic resonance (MR) imaging. Region-based corrections are an extension of the anatomy-based PVEc techniques. To date, several methods to correct for PVEs have been proposed, but I will only mention the most widely and commonly used.

There are many toolboxes/programs for this purpose, both free and non-free, with PMOD and PVEout/PVElab being the most commonly used; I also provide/suggest a new SPM toolbox (PETPVE12, see the Programs & Code section of this web for more info). Even when the provided algorithm is the same, the programs vary in the way the algorithm was implemented; as such, the results may vary quantitatively a bit, and for this reason it is not uncommon to find articles that compare the PVEc algorithms across programs.

  • Point-spread function

The spatial resolution of PET images is usually characterized by the point-spread function (PSF), which essentially corresponds to the image of a point source. Generally, it is spatially variant, which means that the distribution of values around the position of the source depends on the location of the source within the field of view (FOV) of the scanner.

Thus, the final aim of PVEc is to reverse the effect of the system (scanner) PSF on a PET image and thereby restore the true activity distribution, qualitatively and quantitatively.
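
For intuition, the forward effect of the PSF can be simulated by blurring a "true" activity image with a Gaussian whose width is given by the scanner's full width at half maximum (FWHM); the 6 mm FWHM and 2 mm voxel size below are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fwhm_to_sigma(fwhm_mm, voxel_mm):
        # Standard Gaussian relation: FWHM = 2 * sqrt(2 * ln 2) * sigma.
        return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

    def apply_psf(true_image, fwhm_mm=6.0, voxel_mm=2.0):
        # Simulates the resolution loss: true activity -> observed PET image.
        return gaussian_filter(true_image, sigma=fwhm_to_sigma(fwhm_mm, voxel_mm))

PVEc methods try to invert exactly this operation.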

  • Data-driven Correction

This can be done either in the image domain (post-reconstruction) or during iterative image reconstruction by incorporating the PSF in the system matrix (during reconstruction). The first approach results in noise amplification, while the second has superior noise performance (PVEs reduced in magnitude). In both cases, however, the resulting images often suffer from so-called ‘Gibbs artefacts’, corresponding to ringing in the vicinity of sharp boundaries, which is related to missing high-frequency information. Application has largely been in neurological studies, where registration tends to be straightforward. The best-known methods are deconvolution for the first approach and Bayesian or maximum a posteriori (MAP) reconstruction for the second.
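
As an illustration of the image-domain route, here is a minimal Van Cittert-style deconvolution sketch under a Gaussian PSF (the iteration count and step size are arbitrary illustrative choices, not recommended settings):

    from scipy.ndimage import gaussian_filter

    def van_cittert(observed, sigma, n_iter=10, alpha=0.5):
        # Iteratively adds back the residual between the observed image and
        # the current estimate re-blurred with the PSF.
        estimate = observed.copy()
        for _ in range(n_iter):
            reblurred = gaussian_filter(estimate, sigma)
            estimate = estimate + alpha * (observed - reblurred)
        return estimate  # sharper, but noise and Gibbs ringing are amplified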

  • Anatomy-based Correction

The aim of anatomy-based PVEc methods is to utilize structural information from other imaging modalities as a priori information in order to stabilize the solution.

The first step here is to segment (sometimes misleadingly called parcellation) the anatomical image (typically a T1) into a number of tissue compartments that are usually considered to possess uniform activity. This does not mean that the activity concentration has to be identical at each point in a region, but that the variability within a tissue segment should be small in comparison to the potential variability between different regions, so that the net effect of the cross-talk between voxels within one single region is negligible. Three common methods/algorithms must be outlined here. The first, from Meltzer et al. (1990), considers only two compartments: brain (including GM and WM) and non-brain tissue (including CSF). Muller-Gartner et al. (1992) then extended the method to three regions (GM, WM and CSF); this is known as the Muller-Gartner method (MG). These two methods have a few disadvantages: i) the correction is valid only for voxels within the target region (GM); ii) the activity is considered to be invariant along whole segments, which is not completely true, as brain activity changes across brain regions; and iii) the mean values in the background regions (WM and CSF) would, preferably, be obtained from regions large enough that their central part is unaffected by PVEs, like the centrum semiovale for WM. The third algorithm presents a solution to this by using the GTM (see below) to determine the mean values of the background regions; these mean values then account for the spill-over across all regions in the WM and CSF. This method is known as the modified MG method (Rousset et al., 1998; described in Quarantelli et al., 2004). A minimal sketch of the MG correction is given below.
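
To make the recipe concrete, here is a minimal, illustrative sketch of the MG correction under an isotropic Gaussian PSF; the array names and the 0.3 GM threshold are assumptions made for the example, not values prescribed by the cited papers:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def muller_gartner(pet, gm, wm, csf, mu_wm, mu_csf, sigma):
        # Spill-in from the background compartments, estimated by blurring
        # the tissue maps with the scanner PSF.
        spill_in = (mu_wm * gaussian_filter(wm.astype(float), sigma)
                    + mu_csf * gaussian_filter(csf.astype(float), sigma))
        gm_blurred = gaussian_filter(gm.astype(float), sigma)
        # Subtract spill-in, then divide by the GM recovery factor; the
        # correction is only valid inside GM, so other voxels are zeroed.
        return np.where(gm_blurred > 0.3,
                        (pet - spill_in) / np.maximum(gm_blurred, 1e-6),
                        0.0)

In the modified MG variant, mu_wm and mu_csf would come from the GTM step described next rather than from hand-picked regions.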

  • Region-based Correction

Originally known as volume-of-interest (VOI) based correction, it consists in parcellating/dividing the GM into several regions of interest, or ROIs, to correct for cross-contamination between N regions. It is widely known as the geometric transfer matrix (GTM) method (Rousset et al., 1998); it is used for regional analysis and has the advantage that it can account for spill-over (spill-in and spill-out) effects between multiple regions. However, it does not produce a PVE-corrected PET volume.

Here, the cross-talk and recovery factors are computed by convolving the binary maps of the regions of interest with the PSF of the imaging system, as sketched below.
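
A minimal numpy/scipy sketch of the GTM computation, assuming binary ROI masks already in PET space and an isotropic Gaussian PSF (all names are illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gtm_correct(rois, observed_means, sigma):
        # rois: list of binary masks; observed_means: uncorrected ROI means.
        n = len(rois)
        w = np.zeros((n, n))
        for j, roi_j in enumerate(rois):
            blurred_j = gaussian_filter(roi_j.astype(float), sigma)
            for i, roi_i in enumerate(rois):
                # Fraction of region j's activity observed inside region i.
                w[i, j] = blurred_j[roi_i > 0].mean()
        # observed = W @ true, so invert the transfer matrix.
        return np.linalg.solve(w, np.asarray(observed_means))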

PET image intensity normalization

This process basically consists in dividing the observed (or ROI-mean) radiotracer uptake by the mean uptake in a reference region.
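
A minimal sketch of this division, assuming a binary mask of the reference region (named cerebellum_mask here purely for illustration):

    import numpy as np

    def intensity_normalize(pet, cerebellum_mask):
        # Scale the whole image by the mean uptake in the reference region.
        reference_mean = pet[cerebellum_mask > 0].mean()
        return pet / reference_mean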

PET image normalization is often performed relative to the cerebral global mean or to the whole-brain cortex mean. However, due to the nature of some disease processes, patients with certain metabolic syndromes, such as MCI and AD, have a lower glucose metabolic rate than normal subjects across the whole brain. Normalization to the cerebral global or cortex mean therefore artificially scales up values from patients while scaling down those from normal subjects, resulting in an under-estimation of the relative hypometabolism in patients compared to normal subjects.

Recent work suggests that improved group discrimination can be achieved by normalizing to the signal intensity in the cerebellum, brainstem, basal ganglia, or sensorimotor cortex, regions that are relatively preserved in disease. The most common choices are the cerebellar cortex and the brainstem.

I recommend the cerebellar cortex as the reference region. This recommendation assumes that the PET images have previously been corrected for PV effects, a process that includes the removal of WM spill-in, after which no WM voxels survive. Normalization to the brainstem would therefore require performing the normalization first and the PVE correction afterwards, if the PET images are still to be used for quantitative analysis. If mean metabolic rates from ROIs are to be used, and they are extracted from raw PET images (i.e., without the GTM method), any reference region can be used.

PET quantification
  • Functional volumetric (voxel-based) quantification

This includes the use of volumetric images, normally PVE-corrected and reference-intensity normalized. Several programs exist for such analyses, although the most widely used is SPM.

  • Functional ROI (Atlas-based) quantification

This includes the GTM method, or the extraction of metabolic rates for a series of regions and performing statistics on the resulting vectors of data, e.g., in SPSS, Statistica, R, etc. A small sketch of the extraction step follows.
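
This sketch assumes an integer-labelled atlas image already resampled to PET space (names are illustrative):

    import numpy as np

    def regional_means(pet, atlas):
        # One mean uptake value per atlas region, ready for export to
        # conventional statistics packages (R, SPSS, ...).
        labels = np.unique(atlas)
        labels = labels[labels != 0]  # 0 is assumed to be background
        return {int(lab): float(pet[atlas == lab].mean()) for lab in labels}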

PET processing/analysis (dynamic)

This is quite similar to what was described before. However, some further steps may be required to increase the reliability of the data.

Motion correction

This is a non-trivial step, given that it allows us to apply the common preprocessing steps mentioned earlier to each frame of the dynamic PET image. This means that, after this step, all other previous descriptions remain valid here and will not be repeated.
