EE290T: Advanced Reconstruction Methods for Magnetic
Resonance Imaging
Martin Uecker
Introduction
Topics:
- Image Reconstruction as Inverse Problem
- Parallel Imaging
- Non-Cartesian MRI
- Subspace Methods
- Model-based Reconstruction
- Compressed Sensing
Tentative Syllabus
01: Jan 27 Introduction
02: Feb 03 Parallel Imaging as Inverse Problem
03: Feb 10 Iterative Reconstruction Algorithms
–: Feb 17 (holiday)
04: Feb 24 Non-Cartesian MRI
05: Mar 03 Nonlinear Inverse Reconstruction
06: Mar 10 Reconstruction in k-space
07: Mar 17 Reconstruction in k-space
–: Mar 24 (spring recess)
08: Mar 31 Subspace methods
09: Apr 07 Model-based Reconstruction
10: Apr 14 Compressed Sensing
11: Apr 21 Compressed Sensing
12: Apr 28 TBA
Today
- Review of last lecture
- Noise Propagation
- Iterative Reconstruction Algorithms
- Software
Phased Array
Signal is the Fourier transform of the image ρ weighted by the coil sensitivities c_j:

s_j(t) = ∫ dx ρ(x) c_j(x) e^{−i2π k(t)·x}

Images of a human brain from an eight-channel array:
[Figure: individual channel images of a human brain]
Channel Combination
[Figure: channel combination — RSS, MVUE, and MMSE]
Parallel MRI
Goal: Reduction of measurement time
- Subsampling of k-space
- Simultaneous acquisition with multiple receive coils
- Coil sensitivities provide spatial information
- Compensation for missing k-space data

1. DK Sodickson, WJ Manning. Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays. Magn Reson Med; 38:591–603 (1997)
2. KP Pruessmann, M Weiger, MB Scheidegger, P Boesiger. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med; 42:952–962 (1999)
3. MA Griswold, PM Jakob, RM Heidemann, M Nittka, V Jellus, J Wang, B Kiefer, A Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med; 47:1202–10 (2002)
Parallel MRI: Undersampling
[Figure: undersampling along k_phase (axes k_read, k_phase, k_partition) and the resulting aliasing]
Parallel Imaging as Inverse Problem
Model: Signal from multiple coils (image ρ, sensitivities c_j):

s_j(t) = ∫_Ω dx ρ(x) c_j(x) e^{−i2π x·k(t)} + n_j(t)

Assumptions:
- Image is a square-integrable function ρ ∈ L²(Ω, ℂ)
- Additive multi-variate Gaussian white noise n
Problem: Find the best approximate/regularized solution in L²(Ω, ℂ).
Discretization of Linear Inverse Problems
Continuous integral operator F: f ↦ g with kernel K:

g(t) = ∫_a^b ds K(t, s) f(s)

Discrete system of linear equations:

y = Ax

Considerations:
- Discretization error
- Efficient computation
- Implicit regularization

             operator   unknown   data
continuous   F          f         g
discrete     A          x         y
Discretization for Parallel Imaging
Discrete Fourier basis:

f(x, y) ≈ Σ_{l=−N}^{N} Σ_{k=−N}^{N} â_{l,k} e^{i2π (kx/FOV_x + ly/FOV_y)}

- Efficient computation (FFT)
- Approximates R(F^H) extremely well (for smooth c_j)
- Voxels: Dirichlet kernel D_N(x/FOV_x) D_N(y/FOV_y)

[Figure: rectangular field of view Ω with extents FOV_x × FOV_y]
SENSE: Discretization
Weak voxel condition: Images from the discretized subspace should be recovered exactly (from noiseless data).
Common choice:

f(x, y) ≈ Σ_{r,s} f̄_{r,s} δ(x − r FOV_x/N_x) δ(y − s FOV_y/N_y)

- Efficient computation using the FFT algorithm
- Periodic sampling (⇒ decoupling)
Problem: Periodically extended k-space
⇒ error at the k-space boundary!
SENSE: Decoupling
Periodic sampling yields a system of decoupled equations:

[ m_1(x, y) ]   [ c_1(x, y_1)  c_1(x, y_2) ]   [ ρ(x, y_1) ]
[    ...    ] = [     ...          ...     ] · [ ρ(x, y_2) ]
[ m_n(x, y) ]   [ c_n(x, y_1)  c_n(x, y_2) ]

[Figure: spin density ρ and sensitivities c_i — the rows at y_1 and y_2 alias onto each other in the measured coil images m_1, m_2]
Discretization: Summary
- Continuous reconstruction (from finite data) is ill-posed!
- Discretization error
- Implicit regularization (the discretized problem might be well-conditioned)
Attention: Be careful when simulating data! Using the same discretization for simulation and reconstruction
⇒ misleading results (inverse crime)
Today
- Review of last lecture
- Noise Propagation
- Iterative Reconstruction Algorithms
- Software
Complex Gaussian Distribution
Random variable Z = X + iY
Proper complex Gaussian: Z ∼ CN(µ, σ²)

p(Z) = 1/(πσ²) e^{−|Z−µ|²/σ²}

- mean: µ = E[Z]
- variance: σ² = E[(Z − µ)(Z − µ)*]
- proper: pseudo-variance E[(Z − µ)(Z − µ)] = 0
Multi-Variate Complex Gaussian Distribution
Random vector Z = X + iY
Multi-variate proper complex Gaussian distribution: Z ∼ CN(µ, Σ)
- mean: µ = E[Z]
- covariance: Σ = Cov[Z, Z] = E[(Z − µ)(Z − µ)^H]
- pseudo-covariance: E[(Z − µ)(Z − µ)^T] = 0
Multi-Variate Complex Gaussian Distribution
Linear reconstruction: Ẑ = FZ

Ẑ ∼ CN(Fµ, FΣF^H)

Full covariance matrix for all pixels: FΣF^H
⇒ not (always) practical (2D: size ∝ 10⁹, 3D: size ∝ 10¹⁴)
Geometry Factor
Quantity of interest: noise variance of the pixel values:

σ(x_i) = √((FΣF^H)_{ii})

Spatially dependent noise:

σ_und(x) = g(x) √R σ_full(x)

with acceleration factor R and geometry factor g(x).
Practical estimation: Monte-Carlo method,

σ̂²(x) = N^{−1} Σ_j |(F n_j)(x)|²

with Gaussian white noise n_j.
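As a concrete illustration (not from the slides), a minimal NumPy sketch of this Monte-Carlo estimate; recon stands for any linear reconstruction mapping multi-coil k-space to an image (a hypothetical placeholder):

import numpy as np

def monte_carlo_noise_map(recon, shape, ncoils, N=100, seed=0):
    # estimate the per-pixel noise std of a linear reconstruction recon,
    # which maps (ncoils, *shape) k-space noise to an image of shape `shape`
    rng = np.random.default_rng(seed)
    acc = np.zeros(shape)
    for _ in range(N):
        # proper complex white noise with unit variance per sample
        n = (rng.standard_normal((ncoils,) + shape)
             + 1j * rng.standard_normal((ncoils,) + shape)) / np.sqrt(2)
        acc += np.abs(recon(n)) ** 2
    return np.sqrt(acc / N)   # sigma_hat(x) = sqrt(N^-1 sum_j |F n_j|^2)

Dividing this map by the noise map of the fully-sampled reconstruction (times √R) would give a g-factor estimate.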
Noise Amplification in Parallel Imaging
[Figure: images and g-factor maps for R=1 and R=3; Pruessmann et al. Magn Reson Med; 42:952–962 (1999)]
Local noise amplification depends on:
- Sampling pattern
- Coil sensitivities
Parallel MRI: Regularization
- General problem: bad conditioning
- Noise amplification during image reconstruction
- L2 regularization (Tikhonov):

argmin_x ‖Ax − y‖₂² + α‖x‖₂²  ⇔  (A^H A + αI) x = A^H y

Influence of the regularization parameter α:
[Figure: reconstructions for small, medium, and large α — noise vs. bias trade-off]
Parallel MRI: Nonlinear Regularization
- Good noise suppression
- Edge-preserving
⇒ sparsity, nonlinear regularization:

argmin_x ‖Ax − y‖₂² + α R(x)

Regularization: R(x) = TV(x), R(x) = ‖Wx‖₁, ...

1. JV Velikina. VAMPIRE: variation minimizing parallel imaging reconstruction. Proc. 13th ISMRM; 2424 (2005)
2. G Landi, EL Piccolomini. A total variation regularization strategy in dynamic MRI. Optimization Methods and Software; 20:545–558 (2005)
3. B Liu, L Ying, M Steckner, J Xie, J Sheng. Regularized SENSE reconstruction using iteratively refined total variation method. ISBI; 121–123 (2007)
4. A Raj, G Singh, R Zabih, B Kressler, Y Wang, N Schuff, M Weiner. Bayesian parallel imaging with edge-preserving priors. Magn Reson Med; 57:8–21 (2007)
5. M Uecker, KT Block, J Frahm. Nonlinear Inversion with L1-Wavelet Regularization - Application to Autocalibrated Parallel Imaging. ISMRM; 1479 (2008)
...
Parallel MRI: Nonlinear Regularization
- L2-, L1-wavelet-, and TV-regularization
- 3D-FLASH, 12-channel head coil
- 2D-reduction factor of 12 (phantom) and 6 (brain)
Correlated Noise - Whitening
Cholesky decomposition into a lower triangular matrix L:

Σ = LL^H

Transform with W = L^{−1} to uncorrelated noise of equal variance:

CN(0, Σ)  ⇒  CN(0, WΣW^H) = CN(0, I)

Reconstruction problem: Ax = y  ⇒  WAx = Wy
Normal equations:

A^H W^H W A x = A^H W^H W y  ⇔  argmin_x ‖WAx − Wy‖₂²

In MRI: noise correlations between receive channels.
Whitening ⇒ uncorrelated virtual channels: Wy
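A minimal NumPy sketch of this pre-whitening step, assuming the channel covariance Σ is estimated from noise-only samples (function and array names are illustrative):

import numpy as np

def whiten(data, noise):
    # data:  (ncoils, nsamples) measured k-space samples
    # noise: (ncoils, nnoise) noise-only samples for estimating Sigma
    Sigma = np.cov(noise)            # channel covariance E[(n-mu)(n-mu)^H]
    L = np.linalg.cholesky(Sigma)    # Sigma = L L^H
    W = np.linalg.inv(L)             # whitening matrix W = L^{-1}
    return W @ data                  # uncorrelated virtual channels

Since the number of channels is small, the explicit inverse of L is harmless here; a triangular solve would also work.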
Today
- Review of last lecture
- Noise Propagation
- Iterative Reconstruction Algorithms
- Software
Parallel MRI: Iterative Algorithms
Signal equation:

s_i(k) = ∫_V dx ρ(x) c_i(x) e^{−i2π x·k}

where the products c_i(x) e^{−i2π x·k} are the encoding functions.
Discretization:

A = P_k F C

A has size 256² × (8 · 256²)
⇒ iterative methods built from the matrix-vector products Ax, A^H y
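A possible NumPy sketch of this discretized operator and its adjoint (not the toolbox implementation); the array shapes are assumptions: coils (nc, ny, nx), image (ny, nx), sampling mask (ny, nx):

import numpy as np

def fft2c(x):
    # unitary, centered 2D FFT over the last two axes
    return np.fft.fftshift(np.fft.fft2(
        np.fft.ifftshift(x, axes=(-2, -1)), norm="ortho"), axes=(-2, -1))

def ifft2c(y):
    return np.fft.fftshift(np.fft.ifft2(
        np.fft.ifftshift(y, axes=(-2, -1)), norm="ortho"), axes=(-2, -1))

def A(x, coils, mask):
    # A = P_k F C: multiply with sensitivities, FFT, sample
    return mask * fft2c(coils * x)

def AH(y, coils, mask):
    # A^H = C^H F^{-1} P_k: sample, inverse FFT, combine channels
    return np.sum(np.conj(coils) * ifft2c(mask * y), axis=0)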
Landweber Iteration
Gradient descent:

φ(x) = ½ ‖Ax − y‖₂²
∇φ(x) = A^H (Ax − y)

Iteration rule (with µ ‖A^H A‖ ≤ 1):

x_{n+1} = x_n − µ A^H (A x_n − y)
        = (I − µ A^H A) x_n + µ A^H y

Explicit formula for x_0 = 0:

x_n = Σ_{j=0}^{n−1} (I − µ A^H A)^j µ A^H y
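Reusing the hypothetical A/A^H sketch from above, the Landweber iteration is only a few lines (µ = 1.0 assumes the sensitivities are normalized so that ‖A^H A‖ ≤ 1):

def landweber(y, coils, mask, niter=50, mu=1.0):
    # x_{n+1} = x_n - mu A^H (A x_n - y), starting from x_0 = 0;
    # converges for mu ||A^H A|| <= 1
    x = np.zeros(y.shape[-2:], dtype=complex)
    for _ in range(niter):
        x = x - mu * AH(A(x, coils, mask) - y, coils, mask)
    return x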
Landweber Iteration
[Figure: reconstruction of the right singular vectors after iterations 1, 2, 10, and 50 (µ = 0.9)]

With the SVD A = Σ_j σ_j U_j V_j^H and the geometric series Σ_{j=0}^{n−1} x^j = (1 − x^n)/(1 − x):

x_n = Σ_{j=0}^{n−1} (I − µ A^H A)^j µ A^H A x = Σ_l (1 − (1 − µσ_l²)^n) V_l V_l^H x
Projection Onto Convex Sets (POCS)
Iterative projection onto two convex sets A and B:

y_{n+1} = P_B P_A y_n

[Figure: alternating projections onto the sets A and B]
Problems: slow, not stable, possibly A ∩ B = ∅
POCSENSE
Iteration on the multi-coil k-space ŷ:

ŷ_{n+1} = P_C P_y ŷ_n

Projection onto the sensitivities (normalized sensitivities):

P_C = F C C^H F^{−1}

Projection onto the data y (sampling mask M):

P_y ŷ = My + (1 − M) ŷ

AA Samsonov, EG Kholmovski, DL Parker, CR Johnson. POCSENSE: POCS-based reconstruction for sensitivity encoded magnetic resonance imaging. Magn Reson Med; 52:1397–1406 (2004)
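Under the same assumptions as the sketches above (normalized sensitivities, unitary centered FFT, hypothetical fft2c/ifft2c helpers), a possible POCSENSE iteration:

def pocsense(y0, coils, mask, niter=50):
    # iterate yhat <- P_C P_y yhat on the multi-coil k-space,
    # starting from the measured (zero-filled) data y0
    yhat = y0.copy()
    for _ in range(niter):
        yhat = mask * y0 + (1 - mask) * yhat     # P_y: restore measured samples
        x = np.sum(np.conj(coils) * ifft2c(yhat), axis=0)
        yhat = fft2c(coils * x)                  # P_C = F C C^H F^{-1}
    return np.sum(np.conj(coils) * ifft2c(yhat), axis=0)   # final image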
POCSENSE
Fully-sampled data should be consistent with the sensitivities:

P_C y_full = y_full  ⇒  F (CC^H − I) F^{−1} y_full = 0

But in practice it is not:
[Figure: F^{−1} y_full, sensitivities C, and the residual (CC^H − I) F^{−1} y_full]
Note: Can be used to validate coil sensitivities.
POCS and Landweber
For parallel MRI, POCS corresponds to Landweber with µ = 1 and normalized sensitivities:

y_{n+1} = P_C P_y y_n = F C C^H F^{−1} ((I − P_k) y_n + y_0)

Rewrite with y_n = F C x_n:

x_{n+1} = C^H F^{−1} ((I − P_k) F C x_n + y_0)
        = C^H C x_n − C^H F^{−1} P_k F C x_n + C^H F^{−1} P_k y_0    (with P_k y_0 = y_0)
        = x_n − A^H (A x_n − y_0)                                    (with C^H C = I, A = P_k F C)
Krylov Subspace Methods
Krylov subspace:

K_n = span{T^i b : i = 0, …, n}

⇒ repeated application of T.
Examples: Landweber, Arnoldi, Lanczos, conjugate gradients, ...
Conjugate Gradients
T symmetric (or Hermitian)
Initialization:

r_0 = b − T x_0
d_0 = r_0

Iteration:

q_i ← T d_i
α = |r_i|² / ℜ(r_i^H q_i)
x_{i+1} ← x_i + α d_i
r_{i+1} ← r_i − α q_i
β = |r_{i+1}|² / |r_i|²
d_{i+1} ← r_{i+1} + β d_i
Conjugate Gradients
T symmetric or Hermitian
Directions form a conjugate basis:

p_i^H T p_j = δ_{ij}

[Figure: conjugate directions (Wikipedia)]
Coefficients can be computed directly:

b = T x = T Σ_i α_i p_i = Σ_i α_i T p_i
p_j^H b = Σ_i α_i p_j^H T p_i = α_j p_j^H T p_j
⇒ α_j = p_j^H b / (p_j^H T p_j)
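A generic NumPy sketch of this algorithm for a Hermitian operator passed as a function T; the step size uses d_i^H q_i, which by conjugacy of the directions equals the r_i^H q_i on the slide:

def conjugate_gradients(T, b, x0, niter=30):
    # solve T x = b for Hermitian positive (semi-)definite T
    x = x0.copy()
    r = b - T(x)
    d = r.copy()
    rr = np.vdot(r, r).real
    for _ in range(niter):
        q = T(d)
        alpha = rr / np.vdot(d, q).real    # step size
        x = x + alpha * d
        r = r - alpha * q                  # update residual
        rr_new = np.vdot(r, r).real
        beta = rr_new / rr                 # new conjugate direction
        d = r + beta * d
        rr = rr_new
    return x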
Conjugate Gradients on Normal Equations
Normal equations:

A^H A x = A^H y  ⇔  x = argmin_x ‖Ax − y‖₂²

Tikhonov:

(A^H A + αI) x = A^H y  ⇔  x = argmin_x ‖Ax − y‖₂² + α‖x‖₂²

CGNE: solve T x = b with T = A^H A and b = A^H y.
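Combining the sketches above, Tikhonov-regularized CGNE might look like this (α = 0.01 is an arbitrary placeholder; coils, mask, y as in the earlier hypothetical sketches):

# normal operator T = A^H A + alpha I and right-hand side b = A^H y
alpha = 0.01
T = lambda x: AH(A(x, coils, mask), coils, mask) + alpha * x
b = AH(y, coils, mask)
x = conjugate_gradients(T, b, np.zeros_like(b), niter=30)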
Implementation
[Diagram: data flow of the normal operator A^H A — distribute the image, point-wise multiplication with the sensitivities, multi-dimensional FFT, sampling mask (or projection onto the data), inverse FFT, point-wise multiplication with the conjugate sensitivities, sum over the channels; repeat]
Conjugate Gradient Algorithm vs Landweber
[Figure: step-by-step reconstruction of singular vectors with singular values 1, 1.414, 1.732, and 2 — conjugate gradients vs. Landweber]
Conjugate Gradient Algorithm vs Landweber
Size: 41400 × 41400 (complex)
[Plot: error on a log scale (1 down to 0.0001) over 100 iterations — CG converges much faster than Landweber]
Nonlinear Methods
- Non-linear conjugate gradient
- Landweber: x_{n+1} = x_n + µ DF(x_n)^H (y − F(x_n))
- Iteratively Regularized Gauss-Newton Method (IRGNM)
- ...
Project 1: Iterative SENSE
Project: Implement and study Cartesian iterative SENSE
- Tools: Matlab, reconstruction toolbox, python, ...
- Deadline: Feb 24
- Hand in: working code and plots/results with description
- See website for data and instructions
Project 1: Iterative SENSE
Step 1: Implement the model

A = PFS
A^H = S^H F^{−1} P^H

Hints:
- Use a unitary and centered (fftshift) FFT: ‖Fx‖₂ = ‖x‖₂
- Implement undersampling as a mask; store the data with zeros at the unsampled positions
- Check ⟨x, Ay⟩ = ⟨A^H x, y⟩ for random vectors x, y (see the sketch below)
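A possible adjoint check, reusing the hypothetical A/A^H sketch from above; the relative error should be on the order of machine precision:

def adjoint_test(coils, mask, seed=0):
    # verify <A x, y> == <x, A^H y> for random x (image) and y (k-space)
    rng = np.random.default_rng(seed)
    shape = coils.shape[-2:]
    x = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    y = (rng.standard_normal(coils.shape)
         + 1j * rng.standard_normal(coils.shape))
    lhs = np.vdot(A(x, coils, mask), y)    # np.vdot conjugates its first argument
    rhs = np.vdot(x, AH(y, coils, mask))
    return abs(lhs - rhs) / abs(lhs)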
Project 1: Iterative SENSE
Step 2: Implement the reconstruction
Landweber (gradient descent) [1]:

x_{n+1} = x_n + α A^H (y − A x_n)

Conjugate gradient algorithm [2]

Step 3: Experiments
- Noise, errors, and convergence speed
- Different sampling
- Regularization

1. L Landweber. An iteration formula for Fredholm integral equations of the first kind. Amer J Math; 73:615–624 (1951)
2. MR Hestenes, E Stiefel. Methods of conjugate gradients for solving linear systems. J Res Natl Bur Stand; 49:409–436 (1952)
Today
- Review of last lecture
- Noise Propagation
- Iterative Reconstruction Algorithms
- Software
Software Toolbox
- Rapid prototyping (similar to Matlab, Octave, ...)
- Reproducible research (i.e., scripts to reproduce experiments)
- Robustness and clinically feasible runtime (C/C++, OpenMP, GPU programming)

Programming library
- Consistent API based on multi-dimensional arrays
- FFT and wavelet transform
- Generic iterative algorithms (conjugate gradients, IST, IRGNM, ...)
- Transparent GPU acceleration of most functions

Command-line tools
- Simple file format
- Interoperability with Matlab
- Basic operations: fft, crop, resize, slice, ...
- Sensitivity calibration and image reconstruction
Software
- Available for Linux and Mac OS X (64 bit):
  http://www.eecs.berkeley.edu/~uecker/toolbox.html
- Requirements: FFTW, GSL, LAPACK (CUDA, ACML)
Ubuntu:
sudo apt-get install libfftw3-dev
sudo apt-get install libgsl0-dev
sudo apt-get install liblapack-dev
Mac OS X:
sudo port install fftw-3-single
sudo port install gsl
sudo port install gcc47
Data Files
Data files store multi-dimensional arrays:
example.hdr ⇐ text header
example.cfl ⇐ data: complex single-precision floats

Text header:
# Dimensions
1 230 180 8 2 1 1 1 1 1 1 1 1 1 1 1

Matlab functions:
data = readcfl('example');
writecfl('example', data)

C functions (using memory-mapped IO)
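For use outside Matlab, a small Python reader can be written directly from the format described above (a sketch: it assumes the header layout shown on this slide and column-major order for Matlab compatibility):

import numpy as np

def readcfl(name):
    # header: a comment line "# Dimensions" followed by the sizes
    with open(name + ".hdr") as f:
        dims = [int(d) for d in f.read().splitlines()[1].split()]
    # data file: complex single-precision floats
    data = np.fromfile(name + ".cfl", dtype=np.complex64)
    return data.reshape(dims, order="F")   # column-major, as in Matlab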
Rapid Prototyping
Data processing using command-line tools:
# resize 0 320 tmp in
# fft -i 7 out tmp

Load the result into Matlab/Octave:
data = squeeze(readcfl('out'));
imshow3(abs(data), []);
C Programming Example
#include <complex.h>
#include "num/fft.h"
#include "misc/mmio.h"

int main()
{
	int N = 16;
	long dims[N];

	/* map input file as multi-dimensional array */
	complex float* in = load_cfl("in", N, dims);

	/* create output file of the same dimensions */
	complex float* out = create_cfl("out", N, dims);

	/* centered FFT along the first three dimensions (flags 1 + 2 + 4) */
	fftc(N, dims, 1 + 2 + 4, out, in);

	return 0;
}
Reconstruction Algorithms
- Iterative SENSE [1]
- Nonlinear inversion [2]
- ESPIRiT calibration and reconstruction [3]
- Regularization: L2 and L1-wavelet

1. KP Pruessmann et al. Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med; 46:638–651 (2001)
2. M Uecker et al. Image Reconstruction by Regularized Nonlinear Inversion - Joint Estimation of Coil Sensitivities and Image Content. Magn Reson Med; 60:674–682 (2008)
3. M Uecker, P Lai, et al. ESPIRiT - An Eigenvalue Approach to Autocalibrating Parallel MRI: Where SENSE meets GRAPPA. Magn Reson Med; EPub (2013)