Say we have a vector $\vec{z}\in\mathbb{C}^n$ whose elements are complex, $z_i=a_i+ib_i$,

$$\vec{z}=\begin{bmatrix} z_1\\z_2\\ \vdots \\z_n \end{bmatrix}$$

How can we find the length of the vector $\vec{z}$?

The magnitude of a single complex number $z_i$ satisfies

$$|z_i|^2=\overline{z_i}\,z_i=(a_i-ib_i)(a_i+ib_i)=a_i^2+b_i^2$$

so $|z_i|=\sqrt{a_i^2+b_i^2}$.
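A quick numerical check of this identity, using an arbitrary example value $z_i = 3+4i$ (chosen only for illustration):

```python
# |z_i|^2 = conj(z_i) * z_i = a_i^2 + b_i^2, checked for the example value 3 + 4i
z_i = 3 + 4j
print((z_i.conjugate() * z_i).real)  # 25.0 -> a^2 + b^2 = 9 + 16
print(abs(z_i) ** 2)                 # 25.0 -> same quantity via the built-in magnitude
```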
We can get the magnitude of $\vec{z}$ by taking the dot product $\vec{z}\cdot\vec{z}$, but how do we take the dot product of two complex vectors?

We can not compute the dot product as $\vec{z}\cdot\vec{z}=\vec{z}^T\vec{z}$.

Say $\vec{z}=\begin{bmatrix} 1\\i \end{bmatrix}$, then $\vec{z}^T\vec{z}=1^2+i^2=0$.

But we can clearly see that the magnitude of $\vec{z}$ is not $0$.
The squared magnitude of a complex vector is given as

$$\|\mathbf{z}\|^2=\mathbf{z}\cdot\mathbf{z}=\overline{\mathbf{z}}^T\mathbf{z};\quad \mathbf{z}\in\mathbb{C}^n$$

So for $\vec{z}=\begin{bmatrix} 1\\i \end{bmatrix}$ the squared magnitude of $\mathbf{z}$ is $\|\mathbf{z}\|^2=\overline{\mathbf{z}}^T\mathbf{z}$:

$$\|\mathbf{z}\|^2=\begin{bmatrix} \overline{1} & \overline{i} \end{bmatrix}\begin{bmatrix} 1\\i \end{bmatrix}$$

$$\|\mathbf{z}\|^2=1\times\overline{1}+i\,\overline{i}$$

$$\|\mathbf{z}\|^2=1-i^2$$

$$\|\mathbf{z}\|^2=1+1=2$$

so $\|\mathbf{z}\|=\sqrt{2}$.
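A minimal NumPy sketch of this example, contrasting the naive $\vec{z}^T\vec{z}$ with the conjugate-transpose product:

```python
import numpy as np

# Example vector z = [1, i] from the text
z = np.array([1, 1j])

print(z.T @ z)            # 0       -> naive z^T z, wrongly suggests zero length
print(np.vdot(z, z))      # 2       -> conjugate dot product z-bar^T z = |z|^2
print(np.linalg.norm(z))  # 1.4142... = sqrt(2), the actual length
```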
$\overline{\mathbf{z}}^T\mathbf{z}$ is also written as $\mathbf{z}^H\mathbf{z}$, pronounced "z Hermitian z".

The dot product between two complex vectors $\mathbf{u},\mathbf{v}\in\mathbb{C}^n$ is given as

$$\mathbf{u}\cdot\mathbf{v}=\overline{\mathbf{u}}^T\mathbf{v}=\mathbf{u}^H\mathbf{v};\quad \mathbf{u},\mathbf{v}\in\mathbb{C}^n$$

What we have seen for a symmetric matrix is $A^T=A$, but that is only the right notion when all the elements of $A$ are real. The complex analogue of a symmetric matrix satisfies

$$\overline{A}^T=A$$

and these matrices are known as **Hermitian matrices**.
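A small hand-picked example (real diagonal, conjugate-symmetric off-diagonal entries) to illustrate the condition:

```python
import numpy as np

# A Hermitian example: real diagonal, conjugate-symmetric off-diagonal entries
A = np.array([[2.0,    1 - 3j],
              [1 + 3j, 5.0   ]])

print(np.allclose(A.conj().T, A))  # True  -> A^H = A, so A is Hermitian
print(np.allclose(A.T, A))         # False -> plain transpose symmetry fails here
```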
Say we have a matrix $Q\in\mathbb{C}^{n\times n}$ whose column vectors are perpendicular to each other (the complex analogue of an orthogonal matrix, called a unitary matrix),

$$Q=\begin{bmatrix} \vdots & \vdots & & \vdots \\ q_1 & q_2 & \cdots & q_n \\ \vdots & \vdots & & \vdots \end{bmatrix}$$

All column vectors are orthonormal to each other,

$$\overline{q_i}^Tq_j=\left\{\begin{matrix} 0 & \text{if }i\neq j\\ 1 & \text{if }i=j \end{matrix}\right.$$

and we can also say

$$\overline{Q}^TQ=Q^HQ=\mathcal{I}$$
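A minimal sketch that builds such a $Q$ numerically (via the QR factorization of a random complex matrix, chosen only for illustration) and checks $Q^HQ=\mathcal{I}$:

```python
import numpy as np

# Build a unitary Q from the QR factorization of a random complex matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)

print(np.allclose(Q.conj().T @ Q, np.eye(4)))  # True  -> Q^H Q = I
print(np.allclose(Q.T @ Q, np.eye(4)))         # False -> plain Q^T Q is not I in general
```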
Earlier we discussed the Fourier series,

$$f(x)=a_0\cdot 1+a_1\cos(x)+b_1\sin(x)+a_2\cos(2x)+b_2\sin(2x)+\cdots$$

Now assume we have $n$ samples $(f_0,f_1,\cdots,f_{n-1})$ from a signal $f$. We only have some realizations of $f$, so we can only compute a **Discrete Fourier Transform**.

Using these $n$ samples we want to approximate the Fourier coefficients $(\hat{f}_0,\hat{f}_1,\cdots,\hat{f}_{n-1})$:

$$\hat{f}_k=\sum_{j=0}^{n-1}f_j \exp\left(\frac{-i2\pi jk}{n}\right) \qquad f_k=\frac{1}{n}\left(\sum_{j=0}^{n-1}\hat{f}_j \exp\left(\frac{i2\pi jk}{n}\right)\right)$$

Say

$$\omega_n=\exp\left(\frac{-i2\pi}{n}\right)$$

(the minus sign matches the forward transform above). We can rewrite the forward equation as

$$\begin{bmatrix} \hat{f}_0\\\hat{f}_1\\\hat{f}_2\\ \vdots \\\hat{f}_{n-1} \end{bmatrix}= \underbrace{\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega_n & \omega_n^2 & \cdots & \omega_n^{n-1}\\ 1 & \omega_n^2 & \omega_n^4 & \cdots & \omega_n^{2(n-1)}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega_n^{n-1} & \omega_n^{2(n-1)} & \cdots & \omega_n^{(n-1)^2} \end{bmatrix}}_{\text{DFT matrix, say } F_n} \begin{bmatrix} f_0\\f_1\\f_2\\ \vdots \\f_{n-1} \end{bmatrix}$$

$\omega_n$ is a complex number, so the DFT matrix is a complex matrix, and our Fourier coefficients $(\hat{f}_0,\hat{f}_1,\cdots,\hat{f}_{n-1})$ are also complex numbers.
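A minimal NumPy sketch that builds $F_n$ with this $\omega_n$, checks it against `np.fft.fft` on a small random signal, and verifies the column-orthogonality property used below (the value $n=8$ is arbitrary):

```python
import numpy as np

# Build the n x n DFT matrix F with omega_n = exp(-2*pi*i/n)
n = 8
omega = np.exp(-2j * np.pi / n)
j, k = np.meshgrid(np.arange(n), np.arange(n))
F = omega ** (j * k)                               # F[k, j] = omega_n^(j*k)

f = np.random.default_rng(1).standard_normal(n)    # a small random "signal"
print(np.allclose(F @ f, np.fft.fft(f)))           # True -> F_n f are the DFT coefficients
print(np.allclose(F.conj().T @ F, n * np.eye(n)))  # True -> columns are mutually orthogonal
```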
The magnitude of $\hat{f}_i$ tells us the amplitude of the $i^{th}$ $\sin$ and $\cos$ component, and the phase of $\hat{f}_i$ tells us the phase of the $i^{th}$ $\sin$ and $\cos$. The columns of the DFT matrix are all orthogonal to each other,

$$\overline{F}_n^TF_n=F_n^HF_n=n\,\mathcal{I}_n$$

so $\frac{1}{\sqrt{n}}F_n$ is a unitary matrix (this is where the $\frac{1}{n}$ in the inverse transform comes from). Now the problem is this matrix multiplication: the time complexity of the matrix-vector product is $O(n^2)$, so for a DFT matrix of size $1024\times 1024$ it requires $1,048,576$ multiplications.
This problem is solved by the **Fast Fourier Transform** (FFT).

The idea behind the FFT is that we can write $F_n$ as

$$F_n = \begin{bmatrix} \mathcal{I}_{n/2} & D_{n/2}\\ \mathcal{I}_{n/2} & -D_{n/2} \end{bmatrix} \begin{bmatrix} F_{n/2} & 0\\ 0 & F_{n/2} \end{bmatrix} P_n$$

where $D_{n/2}$ is the $\frac{n}{2}\times\frac{n}{2}$ diagonal matrix of twiddle factors $\omega_n^k$,

$$D_{n/2}=\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & \omega_{n} & 0 & \cdots & 0 \\ 0 & 0 & \omega_{n}^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \omega_{n}^{n/2-1} \end{bmatrix}$$
$P_n$ is an $n\times n$ even-odd permutation matrix: it reorders the samples so that the even-indexed entries come first, followed by the odd-indexed ones.
$$P_n= \begin{bmatrix} \color{red}{1} & \color{red}{0} & \color{red}{0} & \color{red}{0} & \cdots & \color{red}{0} & \color{red}{0} \\ \color{red}{0} & \color{red}{0} & \color{red}{1} & \color{red}{0} & \cdots & \color{red}{0} & \color{red}{0} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \color{red}{0} & \color{red}{0} & \color{red}{0} & \color{red}{0} & \cdots & \color{red}{1} & \color{red}{0} \\ \color{blue}{0} & \color{blue}{1} & \color{blue}{0} & \color{blue}{0} & \cdots & \color{blue}{0} & \color{blue}{0} \\ \color{blue}{0} & \color{blue}{0} & \color{blue}{0} & \color{blue}{1} & \cdots & \color{blue}{0} & \color{blue}{0} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \color{blue}{0} & \color{blue}{0} & \color{blue}{0} & \color{blue}{0} & \cdots & \color{blue}{0} & \color{blue}{1} \end{bmatrix}$$
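A minimal recursive sketch of this factorization, assuming $n$ is a power of two: it splits the samples into even and odd halves (the $P_n$ step), transforms each half (the two $F_{n/2}$ blocks), and recombines them with the twiddle factors (the $\pm D_{n/2}$ blocks). NumPy is used only to check the result against `np.fft.fft`.

```python
import numpy as np

def fft_radix2(f):
    """Radix-2 FFT of f, assuming len(f) is a power of two (clarity over speed)."""
    n = len(f)
    if n == 1:
        return np.asarray(f, dtype=complex)
    even = fft_radix2(f[0::2])   # F_{n/2} applied to the even-indexed samples
    odd = fft_radix2(f[1::2])    # F_{n/2} applied to the odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors 1, w_n, ..., w_n^{n/2-1}
    return np.concatenate([even + w * odd,           # top half:    E_k + w_n^k O_k
                           even - w * odd])          # bottom half: E_k - w_n^k O_k

f = np.random.default_rng(2).standard_normal(16)
print(np.allclose(fft_radix2(f), np.fft.fft(f)))     # True
```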
The FFT only has to make about $\frac{1}{2}n\log_2(n)$ multiplications, giving a time complexity of $O(n\log_2(n))$.

For $n=1024=2^{10}$, the DFT has to make $n^2=1,048,576$ multiplications, while the FFT has to make only $\frac{1}{2}n\log_2(n)=5,120$.

So here the FFT is more than $200\times$ faster than the DFT.
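A quick check of these counts:

```python
import math

# Operation counts quoted above for n = 1024
n = 1024
dft_ops = n ** 2                    # 1,048,576
fft_ops = n * math.log2(n) / 2      # 5,120.0
print(dft_ops, fft_ops, dft_ops / fft_ops)  # 1048576 5120.0 204.8
```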