\chapter{Radio-Frequency Fundamentals}
% ###########################################################################
% ###########################################################################
% ###########################################################################
\section{Introduction}
In this book we will use the International System of units (SI), which differs from most plasma physics and plasma wave reference books, in which CGS units are used instead. In this system of units, the unit of length is the meter, the unit of time is the second and the unit of mass is the kilogram. This choice is motivated by the fact that most engineering tools, such as electromagnetic solvers, also use SI units by default. Moreover, these units are also the ones used in practice when performing measurements.
% ###########################################################################
% ###########################################################################
% ###########################################################################
\section{The Decibel}
% ###########################################################################
% ###########################################################################
% ###########################################################################
\section{Basic Equations}
% ###########################################################################
% ###########################################################################
\subsection{Maxwell Equations}
We shall start with the Maxwell equations in their most general form, before recasting them to fit our needs. The electromagnetic field is usually expressed in terms of six quantities:
\begin{itemize}
\item $\boldsymbol{\mathcal{E}}$: the electric field intensity (in $V/m$)
\item $\boldsymbol{\mathcal{H}}$: the magnetic field intensity (in $A/m$)
\item $\boldsymbol{\mathcal{D}}$: the electric flux density (in $A\cdot s/m^2=C/m^2$)
\item $\boldsymbol{\mathcal{B}}$: the magnetic flux density (in $V\cdot s/m^2=Wb/m^2$, also known as Tesla $T$)
\item $\boldsymbol{\mathcal{J}}$: the electric current density (in $A/m^2$)
\item $q_v$: the electric charge density (in $C/m^3$)
\end{itemize}
where all quantities are functions of space and time, e.g. $\boldsymbol{\mathcal{E}}=\boldsymbol{\mathcal{E}}(\mathbf{r},t)$.
Since James Clerk Maxwell formulated the full set of mathematical laws describing electromagnetic fields, many mathematicians, physicists and engineers have proposed different frameworks for representing field and wave equations\parencite{Lindell2004, Warnick2014}. For day-to-day work in electromagnetic engineering, Heaviside's vector representation is commonly used. Within this framework, the Maxwell equations can be stated as a set of local differential equations:
\begin{subequations}
\begin{align}
\boldsymbol{\nabla} \times \boldsymbol{\mathcal{E}} &= -\frac{\partial \boldsymbol{\mathcal{B}}}{\partial t} \label{eq:Maxwell-Faraday}\\
\boldsymbol{\nabla} \times \boldsymbol{\mathcal{H}} &= \frac{\partial \boldsymbol{\mathcal{D}}}{\partial t} + \boldsymbol{\mathcal{J}} \label{eq:Maxwell-Ampere} \\
\boldsymbol{\nabla} \cdot \boldsymbol{\mathcal{D}} &= q_v \label{eq:Maxwell-Gauss} \\
\boldsymbol{\nabla} \cdot \boldsymbol{\mathcal{B}} &= 0 \label{eq:Maxwell-Gauss-Magnetism}
\end{align}
\label{eq:MaxwellEquations}
\end{subequations}
The Maxwell-Faraday law (\ref{eq:Maxwell-Faraday}) relates the magnetic flux density to the electric field, describing how a time-varying magnetic flux induces an electric field.
The Maxwell-Amp\`ere law (\ref{eq:Maxwell-Ampere}) relates the current to the magnetic field. It states that a magnetic field can be generated both by a changing electric flux density and by an electric current.
The Maxwell-Gauss law (\ref{eq:Maxwell-Gauss}) describes the relationship between the electric flux density and the electric charges that cause it.
The Maxwell-Gauss law for magnetism (\ref{eq:Maxwell-Gauss-Magnetism}) states that, unlike electric charges, no magnetic charges exist.
The corresponding equations in integral form are:
\begin{subequations}
\begin{align}
\oint \boldsymbol{\mathcal{E}} \cdot \diff \mathbf{l}
=&
- \frac{\diff }{\diff t} \iint \boldsymbol{\mathcal{B}} \cdot \diff \mathbf{S}
\\
\oint \boldsymbol{\mathcal{B}} \cdot \diff \mathbf{l}
=&
\frac{\diff }{\diff t} \iint \boldsymbol{\mathcal{D}} \cdot \diff \mathbf{S}
+ \iint \boldsymbol{\mathcal{J}} \cdot \diff \mathbf{S}
\\
\oiint \boldsymbol{\mathcal{B}} \cdot \diff \mathbf{S}
=& \; 0
\\
\oiint \boldsymbol{\mathcal{D}} \cdot \diff \mathbf{S}
=&
\iiint q_v \diff v
\end{align}
\label{eq:MaxwellEquationsIntegral}
\end{subequations}
One can define the following \emph{circuit} quantities associated with each field quantity\parencite{Harrington2001}:
\begin{itemize}
\item $v$, the \emph{voltage} in $V$
\item $i$, the \emph{current} in $A$
\item $q$, the \emph{electric charge} in $C$
\item $\psi$, the \emph{magnetic flux} in $Wb$
\item $\psi_e$, the \emph{electric flux} in $C$
\item $u$, the \emph{magnetomotive force} in $A$
\end{itemize}
defined by:
\begin{subequations}
\begin{align}
v =& \int \boldsymbol{\mathcal{E}} \cdot \diff \mathbf{l} \\
i =& \iint \boldsymbol{\mathcal{J}} \cdot \diff \mathbf{S} \\
q =& \iiint q_v \diff v \\
\psi =& \iint \boldsymbol{\mathcal{B}} \cdot \diff \mathbf{S} \\
\psi_e=& \iint \boldsymbol{\mathcal{D}} \cdot \diff \mathbf{S} \\
u =& \int \boldsymbol{\mathcal{H}} \cdot \diff \mathbf{l}
\end{align}
\label{eq:CircuitQuantities}
\end{subequations}
In order to solve equations (\ref{eq:MaxwellEquations}), one needs to specify the relationships between the electric and magnetic flux densities ($\mathcal{D}$, $\mathcal{B}$) and the electric current density $\mathcal{J}$ on one hand, and the electric and magnetic field intensities ($\mathcal{E}$, $\mathcal{H}$) on the other\footnote{
The choice made here is to treat both $\mathbf{E}$ and $\mathbf{H}$ as the same kind of field, distinct from $\mathbf{D}$ and $\mathbf{B}$, as is done for example in \parencite{Pozar1998} or \parencite{Harrington2001}. This choice has symbolic advantages when dealing with RF networks in SI units, where the electric and magnetic field intensities can be associated with voltages and currents as they are measured (tbc).
However, many authors have chosen instead to treat $\mathbf{E}$ and $\mathbf{B}$ as the \emph{fundamental} quantities and $\mathbf{D}$ and $\mathbf{H}$ as \emph{auxiliary} (or convenience) quantities denoting the average fields over macroscopically small regions\parencite{Lindell1995, Griffiths2005, Jackson1999}. Indeed, both $\mathbf{D}$ and $\mathbf{H}$ allow one to write the Gauss and Amp\`ere laws in terms of the \emph{free} charges and currents alone, thus ``incorporating'' the \emph{bound} charge and current contributions. In these macroscopic Maxwell equations, only the external charges and currents brought into the system from outside are considered, without keeping track of the average charge and current distributions in the medium, which can be a convenient mathematical tool. This choice also makes sense because one cannot turn off the bound contributions as one can the free ones \parencite[sec.6.3]{Griffiths2005}. In such a case, one would write instead $\mathbf{H}=\boldsymbol{\mu}^{-1}\mathbf{B}$. Finally, the magnetic flux density $\mathbf{B}$ can be interpreted as the manifestation of an electric field in a co-moving frame\parencite{Schwinger1998}.
The latter choice can also originate from the interpretation of the electric and magnetic fields as forces acting on a test charge via the Lorentz force $q(\mathbf{E}+\mathbf{v}\times\mathbf{B})$, leading naturally to the pair $(\mathbf{E},\mathbf{B})$. However, if one uses an energy picture based on differential forms, interpreting the electromagnetic field as the change of energy experienced by a test charge as it moves through the field, the pairs $(\mathbf{E},\mathbf{H})$ and $(\mathbf{D},\mathbf{B})$ are represented by different mathematical objects\parencite{Warnick2014}.
}. These relations depend on the properties of the medium in which the field exists and are called \emph{constitutive relations}. Explicit forms of these relationships have been found from experimentation or deduced from atomic considerations \parencite[sec.5]{Schwinger1998}, and are discussed in a later section.
% ###########################################################################
% ###########################################################################
% ###########################################################################
\subsection{Time Harmonic Electromagnetic Fields}
% ###########################################################################
% ###########################################################################
\subsubsection{Phasors}
In most cases in magnetic fusion plasma heating and current drive, the radio-frequency source excitation varies sinusoidally in time with a single frequency (\emph{AC}, for Alternating Current). Such time-varying electromagnetic fields are referred to as \emph{time harmonic} or \emph{monochromatic} fields. In this case, the mathematical analysis is simplified by using complex quantities. A scalar quantity $a$ can be defined as\footnote{The convention $ a \stackrel{\Delta}{=} \sqrt{2} |A| \sin (\omega t + \alpha) = \sqrt{2} \Im\left[A e^{j \omega t} \right]$ could also have been used.}:
\begin{eqnarray}
a(\mathbf{r},t)
&\stackrel{\Delta}{=}&
\sqrt{2} |A(\mathbf{r})| \cos (\omega t + \alpha) = \sqrt{2} \Re\left[A(\mathbf{r}) e^{j \omega t} \right]
\label{eq:phasor}
\end{eqnarray}
where $a=a(\mathbf{r}, t)$ is called the \emph{instantaneous quantity} and $A=|A|e^{j\alpha}$ is called the \emph{complex quantity} or \emph{phasor}. Note that the complex quantity $A$ does \emph{not} depend on time, but it may be a function of position, i.e. $A=A(\mathbf{r})$. We also have the inverse mapping:
\begin{equation}
A(\mathbf{r}) = \frac{1}{\sqrt{2}}\left( a(\mathbf{r},0) - j\, a\!\left(\mathbf{r}, \frac{\pi}{2\omega}\right) \right)
\end{equation}
This definition, taken from \parencite{Harrington2001}, calls for a few remarks. In electrical engineering, it is more practical to use time-averaged power (over any integer number of cycles) than instantaneous power, since voltages and currents are time-varying functions. The $\sqrt{2}$ factor, also known as the \emph{crest factor}, comes from the choice made for the magnitude $|A|$ of the complex quantity $A$ to be the \emph{effective} (or RMS, for Root-Mean-Square) value of the sinusoidally time-varying quantity $a$\footnote{This can also be shown from the time average of the product of two instantaneous quantities $a$ and $b$ as defined by (\ref{eq:phasor}), which is
$$
\frac{1}{T} \int_0^T a(t) b(t)\diff t = \Re[AB^*] = \frac{1}{2}\left(AB^* + A^* B \right)
$$
Then, one deduces that
$$
\frac{1}{T} \int_0^T \left[a(t)\right]^2 \diff t = \Re[AA^*] = |A|^2
$$}:
\begin{subequations}
\begin{align}
a_{\mathrm{rms}}
&= \sqrt{\frac{1}{T} \int_0^T \left[a(t) \right]^2 \diff t} \nonumber \\
&= \sqrt{\frac{1}{T} \int_0^T \left[\sqrt{2} |A| \cos (\omega t + \alpha) \right]^2 \diff t} \nonumber \\
&= \sqrt{\frac{|A|^2}{T} \int_0^T \left[ 1+\cos (2\omega t + 2\alpha) \right] \diff t} \nonumber \\
&= |A| \nonumber
\end{align}
\end{subequations}
Dropping the factor $\sqrt{2}$ in (\ref{eq:phasor}) would instead make $|A|$ the peak value of $a$\footnote{Which is the case for example in ANSYS HFSS (while the Poynting vector remains a time-averaged quantity).}.
When calculating the complex power, one advantage of the previous definition is that it preserves the same proportionality factors as for the instantaneous counterparts, i.e. $p=vi$ for the instantaneous power and $P=VI^*$ for the complex power. Otherwise, a factor $1/2$ would appear in the complex power had peak values been used for $|V|$ and $|I|$.
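These conventions are easily checked numerically. The following minimal Python sketch (with arbitrarily chosen phasors and frequency, purely for illustration) verifies that the RMS value of the instantaneous quantity equals the phasor magnitude, and that the time-averaged power equals $\Re[VI^*]$:

```python
import numpy as np

# Check of the RMS phasor convention: a(t) = sqrt(2)|A| cos(wt + alpha)
# has RMS value |A|, and the time-averaged power <v i> equals Re[V I*].
# Amplitudes, phases and frequency are arbitrary illustration values.
V = 3.0 * np.exp(1j * 0.4)      # voltage phasor (RMS magnitude 3.0)
I = 1.5 * np.exp(-1j * 1.1)     # current phasor (RMS magnitude 1.5)
omega = 2 * np.pi * 50.0
T = 2 * np.pi / omega

N = 100_000
t = np.arange(N) * T / N        # one full period, endpoint excluded

v = np.sqrt(2) * np.abs(V) * np.cos(omega * t + np.angle(V))
i = np.sqrt(2) * np.abs(I) * np.cos(omega * t + np.angle(I))

v_rms = np.sqrt(np.mean(v**2))  # -> |V| = 3.0
p_avg = np.mean(v * i)          # -> Re[V I*]
```

Averaging over an exact period makes both identities hold to machine precision; with peak-valued phasors, `p_avg` would instead equal $\frac{1}{2}\Re[VI^*]$.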
This definition can be extended to vector quantities having sinusoidal time variation:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r},t)
=
\sqrt{2} \Re \left[ \mathbf{E}(\mathbf{r}) e^{j\omega t}\right]
\label{eq:definitionTimeHarmonicField}
\end{equation}
which means that each component of $\mathbf{E}$ is related to the corresponding component of $\boldsymbol{\mathcal{E}}$ by relation (\ref{eq:phasor}). For example, in a Cartesian frame, the components of $\boldsymbol{\mathcal{E}}$ and $\mathbf{E}$ are related by:
\begin{eqnarray*}
\mathcal{E}_x = \sqrt{2} \Re \left[ E_x e^{j\omega t}\right] = \sqrt{2} |E_x| \cos (\omega t + \phi_x) \\
\mathcal{E}_y = \sqrt{2} \Re \left[ E_y e^{j\omega t}\right] = \sqrt{2} |E_y| \cos (\omega t + \phi_y) \\
\mathcal{E}_z = \sqrt{2} \Re \left[ E_z e^{j\omega t}\right] = \sqrt{2} |E_z| \cos (\omega t + \phi_z)
\end{eqnarray*}
which leads to:
\begin{subequations}
\begin{align}
\mathcal{E}_{\mathrm{rms}}
=& \sqrt{\frac{1}{T} \int_0^T \left[ \boldsymbol{\mathcal{E}}(t) \right]^2 \diff t} \nonumber \\
=& \sqrt{\frac{1}{T} \int_0^T \left[ \boldsymbol{\mathcal{E}}(t) \cdot \boldsymbol{\mathcal{E}}(t) \right] \diff t} \nonumber \\
=& \sqrt{\frac{1}{T} \int_0^T \left[ \mathcal{E}_x^2 + \mathcal{E}_y^2 + \mathcal{E}_z^2 \right] \diff t} \nonumber \\
=& \sqrt{|E_x|^2 + |E_y|^2 + |E_z|^2} \nonumber \\
=& \sqrt{ \mathbf{E} \cdot \mathbf{E}^* } \nonumber \\
=& \left| \mathbf{E} \right| \nonumber
\end{align}
\end{subequations}
Note that the phases $\phi_x, \phi_y, \phi_z$ are not necessarily equal. This leads to an important remark on the evaluation of peak values of time-harmonic vector fields. For a sinusoidally time-varying \emph{scalar} complex quantity, the \emph{peak} value can be obtained from:
\begin{equation}
A_{\mathrm{peak}} = \sqrt{2} |A|
\end{equation}
However, this relation does not hold in general for vector fields, except in the particular case of linearly polarized fields\footnote{Conversely, had we defined (\ref{eq:phasor}) without the $\sqrt{2}$ factor, the rms value would not, in general, be $1/\sqrt{2}$ times the peak value.}. For a circularly polarized field, for example, the field magnitude is constant in time and thus equal to the rms value\parencite{Faria2008}.
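This distinction between polarizations can be illustrated numerically. The sketch below (arbitrary unit amplitudes, Python/numpy) compares the peak and rms magnitudes of a linearly and a circularly polarized field built from (\ref{eq:definitionTimeHarmonicField}):

```python
import numpy as np

# Peak vs RMS magnitude of a time-harmonic *vector* field.
# Linear polarization: peak |E(t)| = sqrt(2) * rms.
# Circular polarization: |E(t)| is constant, i.e. peak = rms.
omega = 1.0
t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)  # one period

def instantaneous(E_phasor):
    """E(t) = sqrt(2) Re[E exp(j w t)], component by component."""
    return np.sqrt(2) * np.real(np.outer(np.exp(1j * omega * t), E_phasor))

E_lin = np.array([1.0, 0.0, 0.0])                # linear, |E| = 1 (rms)
E_circ = np.array([1.0, 1j, 0.0]) / np.sqrt(2)   # circular, |E| = 1 (rms)

results = {}
for name, E in (("linear", E_lin), ("circular", E_circ)):
    mag = np.linalg.norm(instantaneous(E), axis=1)
    results[name] = (mag.max(), np.sqrt(np.mean(mag**2)))
```

For the linear case the peak is $\sqrt{2}$ and the rms is $1$; for the circular case the instantaneous magnitude stays at $1$ throughout the cycle.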
Finally, note that in electrical engineering the time convention in (\ref{eq:phasor}) is usually $e^{j\omega t}$, while in physics $e^{-j\omega t}$ is preferred\parencite{Bradley2007, Michelsen2017}. The former convention is adopted here, motivated by the fact that most electromagnetic solver packages use it, and so to avoid any confusion. Note however that most physics books on plasma waves adopt the latter convention \parencite{Swanson2003, Stix1992, Brambilla1998}.
% ###########################################################################
% ###########################################################################
\subsubsection{Time Harmonic Maxwell Equations}
For time-harmonic fields, phasor analysis yields the single-frequency steady-state response. Using the mathematical properties of the real part operator $\Re$, the Maxwell equations can be reformulated in terms of complex phasors (see sec.1-8 of \parencite{Harrington2001} for a complete derivation). In this process, time derivatives are replaced by a $j\omega$ multiplier.
Thus, the Maxwell equations (\ref{eq:Maxwell-Faraday}, \ref{eq:Maxwell-Ampere}, \ref{eq:Maxwell-Gauss}, \ref{eq:Maxwell-Gauss-Magnetism}) become, for time-harmonic fields:
\begin{subequations}
\begin{align}
\boldsymbol{\nabla} \times \mathbf{E} (\mathbf{r}) &= -j\omega\mathbf{B}(\mathbf{r}) \label{eq:Maxwell-Faraday-Harmonic} \\
\boldsymbol{\nabla} \times \mathbf{H} (\mathbf{r}) &= j\omega\mathbf{D}(\mathbf{r}) + \mathbf{J}(\mathbf{r}) \label{eq:Maxwell-Ampere-Harmonic} \\
\boldsymbol{\nabla} \cdot \mathbf{D} (\mathbf{r}) &= q_v(\mathbf{r}) \label{eq:Maxwell-Gauss-Harmonic} \\
\boldsymbol{\nabla} \cdot \mathbf{B} (\mathbf{r}) &= 0 \label{eq:Maxwell-Gauss-Magnetism-Harmonic}
\end{align}
\label{eq:MaxwellEquationsTimeHarmonic}
\end{subequations}
% #######################################
\subsubsection{Finite Bandwidth Solutions}
Let us generalize to field solutions of finite bandwidth, i.e. with a continuous distribution of frequencies $\omega$. Similarly to definition (\ref{eq:definitionTimeHarmonicField}), we express the field as a summation of time-harmonic solutions over all frequencies:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r}, t)
=
\int
\Re \left[
\boldsymbol{\mathcal{E}}(\mathbf{r}, \omega)
e^{j\omega t}
\right]
\diff \omega
\end{equation}
The real part operator can be put outside the integral:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r}, t)
=
\Re \left[
\int
\boldsymbol{\mathcal{E}}(\mathbf{r}, \omega)
e^{j\omega t}
\diff \omega
\right]
\end{equation}
which can then be seen as a time-domain Fourier transform, whose pair is defined as:
\begin{subequations}
\begin{align}
f(t) =& \int_{-\infty}^{+\infty} f(\omega) e^{j\omega t} \diff \omega \\
f(\omega) =& \frac{1}{2\pi}\int_{-\infty}^{+\infty} f(t) e^{-j\omega t} \diff t
\end{align}
\label{eq:FourierTransformTime}
\end{subequations}
If $f(t)$ is a real function, then a property of its Fourier transform $f(\omega)$ is\footnote{More on Fourier transforms in \parencite[sec.1.3]{Clemmow1996} and \parencite[chap.4]{Harrington2001}.}:
\begin{equation}
f(-\omega) = f^*(\omega)
\label{eq:FourierPropertyRealFunction}
\end{equation}
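This conjugate-symmetry property can be verified with a discrete Fourier transform, whose forward kernel $e^{-j\omega t}$ matches the convention used here. In the sketch below (an arbitrary real test signal in Python/numpy), bin $N-k$ of the DFT plays the role of frequency $-\omega_k$:

```python
import numpy as np

# Check of the property f(-w) = f*(w) for a real signal, using the DFT:
# for real input, bin N-k (i.e. frequency -w_k) equals conj(bin k).
rng = np.random.default_rng(0)
f_t = rng.standard_normal(256)      # arbitrary real-valued signal
f_w = np.fft.fft(f_t)

# Compare the reversed nonzero-frequency bins with the conjugate spectrum
sym_ok = np.allclose(f_w[1:][::-1], np.conj(f_w[1:]))
```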
If we require property (\ref{eq:FourierPropertyRealFunction}) to hold for $\boldsymbol{\mathcal{E}}(\omega)$, i.e. $
\boldsymbol{\mathcal{E}}(\mathbf{r}, -\omega)
\overset{\Delta}{=}
\boldsymbol{\mathcal{E}}^*(\mathbf{r}, \omega)
$,
then we can eliminate the real part operator $\Re$:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r}, t)
=
\int
\boldsymbol{\mathcal{E}}(\mathbf{r}, \omega)
e^{j\omega t}
\diff \omega
\label{eq:FourierTimeHarmonicDef}
\end{equation}
so $\boldsymbol{\mathcal{E}}(\mathbf{r}, t)$ forms a Fourier transform pair with $\boldsymbol{\mathcal{E}}(\mathbf{r}, \omega)$, defined by:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r}, \omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}
\boldsymbol{\mathcal{E}}(\mathbf{r}, t)
e^{-j\omega t} \diff t
\label{eq:TimeFourierTransformE}
\end{equation}
Introducing the Fourier transform into the Maxwell equations (\ref{eq:MaxwellEquations}), one obtains the time-spectral representation of the Maxwell equations:
\begin{subequations}
\begin{align}
\boldsymbol{\nabla} \times \boldsymbol{\mathcal{H}} (\mathbf{r}, \omega) &= j\omega\boldsymbol{\mathcal{D}} (\mathbf{r}, \omega) + \boldsymbol{\mathcal{J}} (\mathbf{r}, \omega) \\
\boldsymbol{\nabla} \times \boldsymbol{\mathcal{E}} (\mathbf{r}, \omega) &= -j\omega\boldsymbol{\mathcal{B}} (\mathbf{r}, \omega) \\
\boldsymbol{\nabla} \cdot \boldsymbol{\mathcal{D}} (\mathbf{r}, \omega) &= q_v(\mathbf{r}, \omega) \\
\boldsymbol{\nabla} \cdot \boldsymbol{\mathcal{B}} (\mathbf{r}, \omega) &= 0
\end{align}
\label{eq:MaxwellEquationTimeSpectral}
\end{subequations}
which have the same form as (\ref{eq:MaxwellEquationsTimeHarmonic}), but with different quantities, since they do not have the same units\parencite[sec.1.7.5]{Smith1997}. For example, the electric field phasor in (\ref{eq:MaxwellEquationsTimeHarmonic}) is in $V/m$, while here the spectral field is in $V/m/Hz$, that is, field intensity per unit frequency.
We can recover (\ref{eq:MaxwellEquationsTimeHarmonic}) in the case of a time-harmonic field, i.e. if $\boldsymbol{\mathcal{E}}(\mathbf{r}, t) = \Re[\mathbf{E}(\mathbf{r})e^{j\omega_0 t}]$, for which (\ref{eq:TimeFourierTransformE}) becomes\footnote{We have dropped the $\sqrt{2}$ for convenience but without loss of generality.}:
\begin{eqnarray}
\boldsymbol{\mathcal{E}}(\mathbf{r},\omega)
&=&
\frac{1}{2\pi}\int_{-\infty}^{+\infty} \diff t \;
\Re[\mathbf{E}(\mathbf{r})e^{j\omega_0 t}]
e^{-j\omega t}
\nonumber
\\
&=&
\frac{1}{4\pi}\int_{-\infty}^{+\infty} \diff t \;
\left[
\mathbf{E}(\mathbf{r}) e^{-j(\omega-\omega_0 ) t}
+
\mathbf{E}^*(\mathbf{r}) e^{-j(\omega+\omega_0 ) t}
\right]
\nonumber
\\
&=&
\frac{1}{2} \left[
\mathbf{E}(\mathbf{r}) \delta(\omega - \omega_0)
+
\mathbf{E}^*(\mathbf{r}) \delta(\omega + \omega_0)
\right]
\nonumber
\end{eqnarray}
where we have used the integral representation of the Dirac distribution:
\begin{eqnarray}
\delta(x - x_0)
= \frac{1}{2\pi}
\int
e^{j (x-x_0) u} \diff u
\label{eq:DiracDefinition}
\end{eqnarray}
Moreover, the two terms are consistent with property (\ref{eq:FourierPropertyRealFunction}), since the Dirac distribution is even: the negative-frequency peak carries no independent information. Restricting to positive frequencies, we are left with:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r},\omega)
=
\frac{1}{2}
\mathbf{E}(\mathbf{r}) \delta(\omega - \omega_0), \qquad \omega > 0
\end{equation}
Doing the same for the other fields and plugging into (\ref{eq:MaxwellEquationTimeSpectral}) recovers (\ref{eq:MaxwellEquationsTimeHarmonic}), provided the sources and materials do not depend on the frequency (no time dispersion).
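The two-peak structure of the spectrum of a time-harmonic signal has an exact discrete analogue, which the following Python sketch illustrates (amplitude, phase and bin index are arbitrary illustration values):

```python
import numpy as np

# Discrete analogue of the pair of Dirac peaks: the normalized DFT of
# A cos(w0 t + phi) has exactly two nonzero bins, at +w0 and -w0, with
# conjugate amplitudes (A/2) exp(+/- j phi).
N = 1024
k0 = 37                          # w0 = 2*pi*k0/N (integer bin: no leakage)
A, phi = 2.0, 0.6
n = np.arange(N)
a = A * np.cos(2 * np.pi * k0 * n / N + phi)

spec = np.fft.fft(a) / N         # normalized DFT
peak_pos = spec[k0]              # ~ (A/2) exp(+j phi)
peak_neg = spec[N - k0]          # ~ (A/2) exp(-j phi); bin N-k0 <-> -w0
leakage = np.delete(np.abs(spec), [k0, N - k0]).max()  # ~ 0 elsewhere
```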
% #######################################
\subsubsection{$k-\omega$ Fields Representation}\label{sec:spectralRepresentation}
Let us generalize to the case where the field solution is represented by a summation of many plane (or evanescent) waves characterized by their wavevector $\mathbf{k}=(k_x, k_y, k_z)$. We construct a solution of the form:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{A}}(\mathbf{r}, t) =& \Re \left[
\int \diff \omega \int \diff \mathbf{k} \;
\mathbf{\boldsymbol{\mathcal{A}}}(\mathbf{k}, \omega) e^{j(\omega t - \mathbf{k}\cdot\mathbf{r})}
\right]
\end{align}
\label{eq:k-spectralDefinition}
\end{subequations}
Since $\boldsymbol{\mathcal{A}}(\mathbf{r}, t)$ is a real function, one can drop the real part operator (as previously, provided we impose the conjugate-symmetry condition $\boldsymbol{\mathcal{A}}(-\mathbf{k}, -\omega) = \boldsymbol{\mathcal{A}}^*(\mathbf{k}, \omega)$), which leads to a four-dimensional Fourier transform whose pair is defined by:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{A}}(\mathbf{k}, \omega) =&
\frac{1}{(2\pi)^4}
\int \diff t \int \diff \mathbf{r} \;
\mathbf{\boldsymbol{\mathcal{A}}}(\mathbf{r}, t) e^{-j(\omega t - \mathbf{k}\cdot\mathbf{r})}
\end{align}
\end{subequations}
Using (\ref{eq:k-spectralDefinition}), the divergence and curl operators take simpler expressions in the $k-\omega$ domain\footnote{Note that the derivation requires an additional condition on the solution, namely that it vanishes at infinity: an integration by parts is used when evaluating the nabla operator.}:
\begin{eqnarray}
\boldsymbol{\nabla} \cdot\boldsymbol{\mathcal{A}}(\mathbf{r}, t)
&\leftrightarrow&
-j\mathbf{k}\cdot \boldsymbol{\mathcal{A}} (\mathbf{k}, \omega)
\\
\boldsymbol{\nabla} \times \boldsymbol{\mathcal{A}} (\mathbf{r}, t)
&\leftrightarrow&
-j\mathbf{k}\times \boldsymbol{\mathcal{A}} (\mathbf{k}, \omega)
\\
\frac{\partial}{\partial t} \boldsymbol{\mathcal{A}} (\mathbf{r}, t)
&\leftrightarrow&
j \omega \boldsymbol{\mathcal{A}} (\mathbf{k}, \omega)
\end{eqnarray}
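The spatial correspondence can be checked in one dimension: differentiating the plane wave $e^{j(\omega t - kx)}$ in $x$ multiplies it by $-jk$. A minimal Python sketch, with arbitrarily chosen $k$, $\omega$, time and grid:

```python
import numpy as np

# 1-D sketch of grad <-> -j k: the x-derivative of exp(j(w t - k x))
# equals -j k times the wave itself.
k, omega, t = 3.0, 2.0, 0.7
x = np.linspace(0.0, 2 * np.pi, 4001)
A = np.exp(1j * (omega * t - k * x))   # plane wave at fixed time t

dA_dx = np.gradient(A, x)              # finite-difference d/dx
max_err = np.max(np.abs(dA_dx - (-1j) * k * A))
```

The residual `max_err` is limited only by the finite-difference accuracy of the grid.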
Plugging such a solution into the Maxwell equations (\ref{eq:MaxwellEquations}) leads to algebraic equations:
\begin{subequations}
\begin{align}
\mathbf{k} \times \boldsymbol{\mathcal{E}} (\mathbf{k}, \omega)
=&
\omega \boldsymbol{\mathcal{B}} (\mathbf{k}, \omega)
\\
\mathbf{k} \times \boldsymbol{\mathcal{H}} (\mathbf{k}, \omega)
=&
-\omega \boldsymbol{\mathcal{D}} (\mathbf{k}, \omega)
+
j\boldsymbol{\mathcal{J}} (\mathbf{k}, \omega)
\\
\mathbf{k} \cdot \boldsymbol{\mathcal{D}} (\mathbf{k}, \omega)
=& jq_v(\mathbf{k}, \omega)
\\
\mathbf{k} \cdot \boldsymbol{\mathcal{B}} (\mathbf{k}, \omega)
=& 0
\end{align}
\label{eq:k-omegaMaxwellEquations}
\end{subequations}
Not only are the equations simpler to solve, but we will see in the section devoted to the constitutive relations that, in some cases, the medium properties are also simpler in this domain.
In the case of a single plane-wave solution, such as:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{r},t)
=
\Re\left[
\mathbf{E}_0
e^{j(\omega_0 t - \mathbf{k}_0\cdot\mathbf{r})}
\right]
\end{equation}
then we obtain, for $\omega > 0$:
\begin{equation}
\boldsymbol{\mathcal{E}}(\mathbf{k},\omega)
=
\frac{1}{2}
\mathbf{E}_0\,
\delta(\omega - \omega_0)\,
\delta(\mathbf{k} - \mathbf{k}_0)
\end{equation}
and the Maxwell equations (\ref{eq:MaxwellEquations}) become:
\begin{subequations}
\begin{align}
\mathbf{k} \times\mathbf{E}_0
=&
\omega \mathbf{B}_0
\\
% \mathbf{k} \times \mathbf{H}
% =&
% -\omega \mathbf{D}
% +
% j\mathbf{J}
% \\
% \mathbf{k} \cdot \mathbf{D}
% =& jq_v(\mathbf{k}, \omega)
% \\
\mathbf{k} \cdot \mathbf{B}_0
=& 0
\end{align}
\label{eq:k-omegaMaxwellEquationsPhasor}
\end{subequations}
where $\mathbf{B}_0 = (\mathbf{k}\times\mathbf{E}_0)/\omega$. Without further information on the constitutive relations, we stop the derivation here.
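These plane-wave relations are purely algebraic and can be checked directly. The sketch below (arbitrary wavevector and complex field amplitude, Python/numpy) verifies both $\mathbf{k}\times\mathbf{E}_0 = \omega\mathbf{B}_0$ and $\mathbf{k}\cdot\mathbf{B}_0 = 0$:

```python
import numpy as np

# Single plane wave: B0 = (k x E0)/w from Faraday's law, and then
# k . B0 = 0 holds identically (B0 is transverse to k).
omega = 2 * np.pi * 1e9                   # arbitrary angular frequency
k0 = np.array([0.0, 2.0, 5.0])            # arbitrary real wavevector (1/m)
E0 = np.array([1.0 + 0.5j, 0.3, -0.2j])   # arbitrary complex amplitude (V/m)

B0 = np.cross(k0, E0) / omega             # Faraday: B0 = (k x E0)/w

faraday_ok = np.allclose(np.cross(k0, E0), omega * B0)
transverse_B = abs(np.vdot(k0, B0))       # |k . B0| -> 0
```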
% ###########################################################################
% ###########################################################################
\subsection{The Constitutive Relations}
Flux densities ($\mathcal{D}$,$\mathcal{B}$) differ from field intensities ($\mathcal{E}$,$\mathcal{H}$) inside a material in both relative magnitude and direction. Flux densities can be interpreted as the response of the medium to an applied excitation\footnote{If we recall the Gauss law $ Q = \oiint \boldsymbol{\mathcal{D}} \cdot \diff \mathbf{S}$, the flux of $\boldsymbol{\mathcal{D}}$ depends on the charge inside the closed surface and not on the material itself, whereas the field intensity does depend on the material.}. Thus, the constitutive relationships can be written generally as:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}} =& \boldsymbol{\mathcal{D}}(\boldsymbol{\mathcal{E}},\boldsymbol{\mathcal{H}}) \\
\boldsymbol{\mathcal{B}} =& \boldsymbol{\mathcal{B}}(\boldsymbol{\mathcal{E}},\boldsymbol{\mathcal{H}}) \\
\boldsymbol{\mathcal{J}} =& \boldsymbol{\mathcal{J}}(\boldsymbol{\mathcal{E}},\boldsymbol{\mathcal{H}})
\end{align}
\end{subequations}
In simple media these reduce to linear relations (introduced below), which hold only if the time rate of change of the electromagnetic field is small enough. Otherwise, one needs to extend the definition of linearity using linear differential relations\parencite{Harrington2001, Jackson1998}:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}} &= \varepsilon \boldsymbol{\mathcal{E}} + \varepsilon_1 \frac{\partial \boldsymbol{\mathcal{E}}}{\partial t} + \varepsilon_2 \frac{\partial^2 \boldsymbol{\mathcal{E}}}{\partial t^2} + \ldots \\
\boldsymbol{\mathcal{B}} &= \mu \boldsymbol{\mathcal{H}} + \mu_1 \frac{\partial \boldsymbol{\mathcal{H}}}{\partial t} + \mu_2 \frac{\partial^2 \boldsymbol{\mathcal{H}}}{\partial t^2} + \ldots
\end{align}
\end{subequations}
Such a situation typically arises when high-intensity RF fields are used, which leads to nonlinear phenomena such as the \emph{ponderomotive effect}\parencite{Krapchev1979}.
%% ###########################################################################
%\subsubsection{General linear medium}
%\begin{subequations}
% \begin{align}
% \mathbf{D} =& \boldsymbol{\varepsilon}\mathbf{E} + \mathbf{P} \\
% \mathbf{B} =& \boldsymbol{\mu}\left( \mathbf{H} + \mathbf{M} \right)
% \end{align}
%\end{subequations}
%where:
%\begin{itemize}
% \item $\mathbf{P}$ is the electric polarization of the medium, caused by displacement of bounds charges, measured in $C/m^2$.
% \item $\mathbf{M}$ is the magnetisation of the medium (or magnetic polarization), which corresponds to the distribution of magnetic moments per unit volume, measured in $A/m$\footnote{Note that $\mathbf{M}$ is included in the parenthesis, while $\mathbf{P}$ is not, for a matter of historical definition. This of course affects the units of these quantities. }.
%\end{itemize}
% ###########################################################################
\subsubsection{Vacuum}
In vacuum, or in any other medium with similar characteristics (such as air), the constitutive relationships take their simplest form:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}} &= \varepsilon_0 \boldsymbol{\mathcal{E}} \\
\boldsymbol{\mathcal{B}} &= \mu_0 \boldsymbol{\mathcal{H}} \\
\boldsymbol{\mathcal{J}} &= 0
\end{align}
\end{subequations}
where $\varepsilon_0$ is the vacuum \emph{permittivity} and $\mu_0$ the vacuum \emph{permeability}.
% ###########################################################################
\subsubsection{Isotropic Linear Media}
In a standard isotropic linear medium, the constitutive relationships become linear:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}} &= \varepsilon \boldsymbol{\mathcal{E}} \\
\boldsymbol{\mathcal{B}} &= \mu \boldsymbol{\mathcal{H}} \\
\boldsymbol{\mathcal{J}} &= \sigma \boldsymbol{\mathcal{E}}
\end{align}
\end{subequations}
where $\varepsilon$ and $\mu$ are the medium permittivity and permeability respectively. The parameter $\sigma$ is called the \emph{conductivity} of the medium. Note that these relationships generally do not hold when the field intensities are very large or in a time-varying medium.
A simple material medium can be classified according to its values of $\varepsilon$, $\mu$ and $\sigma$. Materials with a high conductivity $\sigma$ are called \emph{conductors}, while those having a small value are referred to as \emph{dielectrics} or \emph{insulators}. In electromagnetic models, good conductors are often approximated as \emph{perfect conductors}, characterized by the limit $\sigma\to\infty$. On the other hand, \emph{perfect dielectrics} assume $\sigma=0$.
The medium permittivity $\varepsilon$ can never be less than the vacuum permittivity $\varepsilon_0$. The \emph{relative permittivity} is defined as $\varepsilon_r=\varepsilon/\varepsilon_0$. The permittivity of a conductor is hard to measure but appears to be unity\parencite{Harrington2001}. A similar definition holds for the \emph{relative permeability} $\mu_r=\mu/\mu_0$. For almost all materials except \emph{ferromagnetic} ones, $\mu=\mu_0$.
% ###########################################################################
\subsubsection{Anisotropic media}
If the response of the medium depends on the direction of the oscillating field, the medium is called \emph{anisotropic}. In anisotropic linear media, the constitutive relationships become tensor relationships:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}} &= \boldsymbol{\varepsilon} \cdot \boldsymbol{\mathcal{E}}
\label{eq:disp_relation_anisotropic_stationnary_D} \\
\boldsymbol{\mathcal{B}} &= \boldsymbol{\mu} \cdot \boldsymbol{\mathcal{H}}
\label{eq:disp_relation_anisotropic_stationnary_B}\\
\boldsymbol{\mathcal{J}} &= \boldsymbol{\sigma} \cdot \boldsymbol{\mathcal{E}}
\label{eq:disp_relation_anisotropic_stationnary_J}
\end{align}
\end{subequations}
where $\boldsymbol{\varepsilon}$, $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are the dielectric tensor, the permeability tensor and the conductivity tensor respectively, which can be represented as $3\times 3$ matrices\parencite{Swanson2003}.
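As a minimal numerical sketch (the diagonal tensor values below are arbitrary, hypothetical choices), the tensor relationship implies that $\boldsymbol{\mathcal{D}}$ is in general no longer parallel to $\boldsymbol{\mathcal{E}}$:

```python
import numpy as np

# hypothetical anisotropic dielectric tensor, expressed in its principal axes
eps = np.diag([2.0, 5.0, 9.0])      # relative permittivities along x, y, z
E = np.array([1.0, 1.0, 0.0])       # applied field, not along a principal axis

D = eps @ E                         # tensor constitutive relation D = eps . E
cos_angle = (D @ E) / (np.linalg.norm(D) * np.linalg.norm(E))
print(D)                            # [2. 5. 0.]
print(cos_angle)                    # < 1: D is tilted away from E
```

A scalar (isotropic) permittivity would give $\cos$ of the angle equal to one; here the anisotropy rotates $\boldsymbol{\mathcal{D}}$ away from $\boldsymbol{\mathcal{E}}$.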
% ###########################################################################
\subsubsection{Nonlocal media}
If the response of a medium to an electromagnetic excitation is delayed in time or distributed in space, the medium is said to be \emph{nonlocal}, or \emph{dispersive}, with respect to time and space respectively. In a \emph{time-dispersive} medium, the explicit time dependence arises as a delay between the imposition of the electric field and the resulting polarization of the medium. This delay is due to the inertia of the charged particles responding to the time-varying field\parencite{Mackay2010, Brambilla1998}. In a \emph{space-dispersive} medium, the response at location $\mathbf{r}$ and time $t$ depends not only on the field at $(\mathbf{r}, t)$, but also on the field in the vicinity $\mathbf{r}'$ and at all previous instants $t'$. Spatial non-locality can be significant when the wavelength is comparable to some characteristic length-scale of the medium. In a plasma, the thermal agitation of the species adds an erratic motion to the particle trajectories, so that the particles are influenced by the field in the whole domain explored by their motion\parencite{Brambilla1998}. This space dispersion can be omitted in the limit where temperature effects are negligible.
Thus, the constitutive relations of a general non-local anisotropic linear medium should be stated as:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}}(\mathbf{r}, t)
= &
\int_{t'=-\infty}^t \diff t'
\int \diff \mathbf{r}' \;
\boldsymbol{\varepsilon}(\mathbf{r},\mathbf{r}', t,t') \cdot \boldsymbol{\mathcal{E}}(\mathbf{r}', t')
\\
\boldsymbol{\mathcal{B}}(\mathbf{r}, t)
=&
\int_{t'=-\infty}^t \diff t'
\int \diff \mathbf{r}' \;
\boldsymbol{\mu}(\mathbf{r},\mathbf{r}', t,t') \cdot \boldsymbol{\mathcal{H}}(\mathbf{r}', t')
\\
\boldsymbol{\mathcal{J}}(\mathbf{r}, t)
=&
\int_{t'=-\infty}^t \diff t'
\int \diff \mathbf{r}' \;
\boldsymbol{\sigma}(\mathbf{r},\mathbf{r}', t,t') \cdot \boldsymbol{\mathcal{E}}(\mathbf{r}', t')
\end{align}
\end{subequations}
The restriction of the time integration to times $t'<t$ expresses causality, which imposes that quantities at time $t$ can only be influenced by quantities at previous instants.
If invariance with respect to the choice of origin in space (homogeneity) and in time (stationarity) can be asserted, the kernels depend only on the differences of the space-time coordinates, i.e. they verify $\mathbf{u}(\r,\r',t,t')=\mathbf{u}(\r-\r', t-t')$. This means that the response depends only on the distance (time elapsed) between the excitation location (time) $(\r', t')$ and the response location (time) $(\r,t)$\parencite{Dumont2017}. The previous relations can then be expressed as convolutions\parencite[p.19]{Brambilla1998}:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}}(\mathbf{r}, t)
= &
\int_{t'=-\infty}^t \diff t'
\int \diff \mathbf{r}' \;
\boldsymbol{\varepsilon}(\mathbf{r}-\mathbf{r}', t-t') \cdot \boldsymbol{\mathcal{E}}(\mathbf{r}', t')
\\
\boldsymbol{\mathcal{B}}(\mathbf{r}, t)
=&
\int_{t'=-\infty}^t \diff t'
\int \diff \mathbf{r}' \;
\boldsymbol{\mu}(\mathbf{r}-\mathbf{r}', t-t') \cdot \boldsymbol{\mathcal{H}}(\mathbf{r}', t')
\\
\boldsymbol{\mathcal{J}}(\mathbf{r}, t)
=&
\int_{t'=-\infty}^t \diff t'
\int \diff \mathbf{r}' \;
\boldsymbol{\sigma}(\mathbf{r}-\mathbf{r}', t-t') \cdot \boldsymbol{\mathcal{E}}(\mathbf{r}', t')
\end{align}
\label{eq:disp_relation_dispersive_homogeneous}
\end{subequations}
If one assumes that the fields and currents can be represented by a continuous spectrum of time-harmonic plane waves, as done in section \ref{sec:spectralRepresentation}, the representation takes the appearance of a four-dimensional Fourier transform\parencite{Clemmow1996}:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{E}}(\mathbf{r}, t) =& \Re \left[
\int \diff \omega \int \diff \mathbf{k} \;
\boldsymbol{\mathcal{E}}(\mathbf{k}, \omega) e^{j(\omega t - \mathbf{k}\cdot\mathbf{r})}
\right]
\\
\boldsymbol{\mathcal{H}}(\mathbf{r}, t) =& \Re \left[
\int \diff \omega \int \diff \mathbf{k} \;
\boldsymbol{\mathcal{H}}(\mathbf{k}, \omega) e^{j(\omega t - \mathbf{k}\cdot\mathbf{r})}
\right]
\\
\boldsymbol{\mathcal{J}}(\mathbf{r}, t) =& \Re \left[ \int \diff \omega \int \diff \mathbf{k} \;
\boldsymbol{\mathcal{J}}(\mathbf{k}, \omega) e^{j(\omega t - \mathbf{k}\cdot\mathbf{r})} \right]
\end{align}
\end{subequations}
Replacing the fields by this representation then leads, in the $(\mathbf{k},\omega)$ domain, to simpler algebraic relationships thanks to the convolution theorem:
\begin{subequations}
\begin{align}
\boldsymbol{\mathcal{D}}(\mathbf{k}, \omega)
=&
\boldsymbol{\varepsilon}(\mathbf{k}, \omega) \cdot \boldsymbol{\mathcal{E}}(\mathbf{k}, \omega)
\label{eq:disp_relation_kw_D}
\\
\boldsymbol{\mathcal{B}}(\mathbf{k}, \omega)
=&
\boldsymbol{\mu}(\mathbf{k}, \omega) \cdot \boldsymbol{\mathcal{H}}(\mathbf{k}, \omega)
\label{eq:disp_relation_kw_B}
\\
\boldsymbol{\mathcal{J}}(\mathbf{k}, \omega)
=&
\boldsymbol{\sigma}(\mathbf{k}, \omega) \cdot \boldsymbol{\mathcal{E}}(\mathbf{k}, \omega)
\label{eq:disp_relation_kw_J}
\end{align}
\label{eq:k-omegaDispersionRelation}
\end{subequations}
where the tensors are defined by Fourier transforms of the form:
\begin{subequations}
\begin{align}
\boldsymbol{\sigma}(\mathbf{k}, \omega)
=
\frac{1}{(2\pi)^4}
\int_0^\infty \diff t \int \diff \mathbf{r} \;
\boldsymbol{\sigma}(\mathbf{r}, t) e^{-j(\omega t - \mathbf{k}\cdot\mathbf{r})}
\end{align}
\end{subequations}
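The passage from the convolution form \ref{eq:disp_relation_dispersive_homogeneous} to the algebraic form \ref{eq:k-omegaDispersionRelation} rests on the convolution theorem, which can be illustrated numerically in one (time) dimension; the causal kernel and driving field below are hypothetical, chosen only to exercise the theorem:

```python
import numpy as np

n, dt = 4096, 0.01
t = np.arange(n) * dt
eps_t = np.exp(-t)                             # causal response kernel (t >= 0)
E_t = np.sin(2*np.pi*1.5*t) * np.exp(-0.1*t)   # some driving field

# time-domain convolution: D(t) = integral of eps(t - t') E(t') dt'
D_time = np.convolve(eps_t, E_t)[:n] * dt

# frequency-domain product (convolution theorem),
# zero-padded to 2n so the circular FFT convolution equals the linear one
m = 2 * n
D_freq = np.fft.ifft(np.fft.fft(eps_t, m) * np.fft.fft(E_t, m)).real[:n] * dt

print(np.max(np.abs(D_time - D_freq)))         # agreement at machine precision
```

The same factorization is what turns the space-time convolution kernels into the multiplicative tensors $\boldsymbol{\varepsilon}(\mathbf{k},\omega)$, $\boldsymbol{\mu}(\mathbf{k},\omega)$ and $\boldsymbol{\sigma}(\mathbf{k},\omega)$.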
\subsubsection{Energy density and power flow}
Following \parencite[p.78]{Felsen1994}, taking the dot product of the source-free ($\boldsymbol{\mathcal{J}}=\boldsymbol{0}$) equations \ref{eq:Maxwell-Faraday} and \ref{eq:Maxwell-Ampere} with $\boldsymbol{\mathcal{H}}$ and $\boldsymbol{\mathcal{E}}$ respectively, and using the following vector calculus identity:
$$
\boldsymbol{\nabla}\cdot (\mathbf{A} \times \mathbf{B})
=
(\boldsymbol{\nabla} \times \mathbf{A})\cdot\mathbf{B} - \mathbf{A}\cdot (\boldsymbol{\nabla}\times \mathbf{B})
$$
one derives the following relation:
\begin{equation}
\boldsymbol{\nabla}\cdot(\boldsymbol{\mathcal{E}}\times\boldsymbol{\mathcal{H}})
=
- \left(
\boldsymbol{\mathcal{E}}\cdot\frac{\partial \boldsymbol{\mathcal{D}}}{\partial t}
+
\boldsymbol{\mathcal{H}}\cdot\frac{\partial \boldsymbol{\mathcal{B}}}{\partial t}
\right)
\label{eq:poynting_theorem_without_source}
\end{equation}
where $\boldsymbol{\mathcal{S}}=\boldsymbol{\mathcal{E}}\times\boldsymbol{\mathcal{H}}$ is the instantaneous density of electromagnetic power flow at $(\mathbf{r},t)$, known as the Poynting vector. By definition of a power flow, the right-hand side should correspond to the time derivative of an instantaneous energy density $\mathcal{W}$. This is however difficult to identify in the present form since, as seen in the previous section, the constitutive relationships of a generic time-dispersive medium are quite involved. The analysis simplifies if one considers time-harmonic fields:
$$
\calE(\r,t) = \sqrt{2} \Re\left[\E(\r) e^{j\omega t}\right]
\;\;\;\;\;\;
\calH(\r,t) = \sqrt{2} \Re\left[\H(\r) e^{j\omega t}\right]
$$
then:
\begin{eqnarray}
\frac{\partial \calD}{\partial t}
&=&
\sqrt{2} \Re\left[j\omega \, \eps \cdot \E(\r) \, e^{j\omega t}\right]
\\
\frac{\partial \calB}{\partial t}
&=&
\sqrt{2} \Re\left[j\omega \, \mut \cdot \H(\r) \, e^{j\omega t}\right]
\end{eqnarray}
Recalling that the time-average of the product of two harmonic quantities $\mathcal{A}=\sqrt{2}\Re[Ae^{j\omega t}]$ and $\mathcal{B}=\sqrt{2}\Re[Be^{j\omega t}]$, with $T=2\pi/\omega$, is:
\begin{equation}
\left< \mathcal{A} \mathcal{B}\right>
=
\frac{1}{T} \int_0^T \mathcal{A} \mathcal{B} \, \diff t
= \Re[AB^*] = \frac{1}{2}\left(AB^* + A^* B \right)
\end{equation}
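This identity can be checked numerically for arbitrary (RMS) phasors; the values below are arbitrary:

```python
import numpy as np

A, B = 1.3 - 0.7j, -0.4 + 2.1j        # arbitrary complex RMS amplitudes
w = 2*np.pi*50.0
T = 2*np.pi/w

N = 200000
t = np.arange(N) * T / N              # one full period, uniformly sampled
a = np.sqrt(2) * np.real(A * np.exp(1j*w*t))
b = np.sqrt(2) * np.real(B * np.exp(1j*w*t))

print(np.mean(a*b))                   # numerical time average over one period
print((A * np.conj(B)).real)          # Re[A B*]; both give -1.99
```

The $\sqrt{2}$ factors make $A$ and $B$ RMS amplitudes, which is why no extra factor $1/2$ appears in the time average.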
The time-average of \ref{eq:poynting_theorem_without_source} is then:
\begin{eqnarray*}
\del\cdot \Re \left[ \E \times \H^* \right]
&=&
\frac{j \omega}{2} \left[
\E\cdot\eps^* \cdot \E^* - \E^*\cdot\eps \cdot \E
+ \H\cdot\mut^* \cdot \H^* - \H^*\cdot\mut \cdot \H
\right]
\\
&=&
-\frac{j \omega}{2} \left[
\E^*\cdot\left(\eps - \eps^H\right) \cdot \E +
\H^*\cdot\left(\mut - \mut^H\right) \cdot \H
\right]
\end{eqnarray*}
where $\mathbf{M}^H$ is the complex conjugate transpose of $\mathbf{M}$, also known as the \emph{Hermitian} transpose\footnote{This last relationship can be demonstrated from the following:
$$
\E\cdot\eps^*\E^* = \sum_j E_j \sum_{k} \varepsilon^{*}_{jk} E_k^* = \sum_j E_j^* \sum_{k} \varepsilon^{*}_{kj} E_k=\E^*\cdot\eps^H\E
$$ since $\sum_{k} \varepsilon^{*}_{kj} E_k=(\eps^H \E)_j$ per definition of $\eps^H$.}. From the latter relation, if the permittivity and permeability tensors are Hermitian, that is if they verify $\eps=\eps^H$ and $\mut=\mut^H$, then the divergence of the time-averaged power flow vanishes everywhere, so no net electromagnetic power is deposited in any volume of the medium. Thus, if the medium has Hermitian permittivity and permeability tensors, the medium is lossless.
Although this property may seem anecdotal, the Hermiticity of these tensors has many interesting consequences for time-invariant lossless media, since the following properties of Hermitian matrices can be demonstrated:
\begin{itemize}
\item The diagonal elements are necessarily real, since they must equal their own complex conjugates.
\item Because of the conjugation, complex-valued off-diagonal elements cannot be symmetric; however, a matrix with only real entries is Hermitian if and only if it is symmetric with respect to the main diagonal.
\item A Hermitian matrix can be diagonalized by a unitary matrix, and the resulting diagonal matrix has only real entries. This implies that all eigenvalues of a Hermitian matrix $\mathbf{A}$ of dimension $n$ are real, that $\mathbf{A}$ has $n$ linearly independent (mutually orthogonal) eigenvectors, and that its determinant is real.
\end{itemize}
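These properties can be verified numerically on a randomly generated Hermitian tensor (a generic sketch, not tied to any particular medium):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
eps = A + A.conj().T                  # Hermitian by construction: eps == eps^H

w, U = np.linalg.eig(eps)             # general eigendecomposition
print(np.allclose(np.diag(eps).imag, 0))              # diagonal entries real
print(np.max(np.abs(w.imag)))                         # eigenvalues real (~0)
print(abs(np.linalg.det(eps).imag))                   # determinant real (~0)
print(np.allclose(U.conj().T @ U, np.eye(3), atol=1e-6))  # orthonormal eigenvectors
```

Note that `np.linalg.eigh`, which exploits Hermiticity, returns the real eigenvalues directly.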
\subsubsection{Can a lossless dielectric exist? The Kramers-Kronig relationships}
A dielectric is a material that can be polarized by an applied electric field. This polarization involves the partial separation of electric charges within the material, a process which requires energy: polarizing a dielectric means storing energy in it. In a lossy dielectric, part of this energy is absorbed, for example converted into heat. In a lossless dielectric, one could theoretically recover all the energy put into it.
The frequency dependence of dispersive dielectrics comes from the fact that the polarization response of the material to a time-varying electric field cannot be instantaneous. As in signal processing, such a dynamic response can be described by a convolution like \ref{eq:disp_relation_dispersive_homogeneous}:
\begin{equation}
\label{eq:convolution_D1}
\calD(\r,t)
= \int_{-\infty}^t \eps(t-t')\E(\r,t') \, \diff t'
= \varepsilon_0 \int_{-\infty}^t \eps_r(t-t')\E(\r,t') \, \diff t'
\end{equation}
Equation (\ref{eq:convolution_D1}) means that the value of $\calD(\r,t)$ at the present time $t$ only depends on the past values of $\E(\r,t')$, with $t'\leq t$. This equation can be equivalently expressed as a standard convolution by extending the integration range to all times if the dielectric response is a causal function, that is if $\eps_r(t)=0$ for $t<0$:
\begin{equation}
\label{eq:convolution_D}
\calD(\r,t)
= \varepsilon_0 \int_{-\infty}^{+\infty} \eps_r(t-t')\E(\r,t') \, \diff t'
\end{equation}
The causality condition can be expressed in terms of the unit step function $u(t)$:
\begin{equation}
\eps_r(t) = \eps_r(t) u(t)
\end{equation}
where $u(t) = 1$ for $t \geq 0$ and $u(t) = 0$ otherwise. Since the Fourier transform of a product of two functions is the convolution of their Fourier transforms, we have:
\begin{equation}
\label{eq:convolution_frequency_epsr}
\eps_r(\omega)
=
\frac{1}{2\pi} \int_{-\infty}^{+\infty}
\eps_r(\omega') U(\omega - \omega') \, \diff \omega'
\end{equation}
where $U(\omega)$ is the Fourier transform of the unit step function $u(t)$:
\begin{equation}
U(\omega) = \lim_{e\to 0^+} \frac{1}{j\omega + e} = \mathcal{P}\frac{1}{j\omega} + \pi \delta(\omega)
\end{equation}
where the symbol $\mathcal{P}$ denotes the Cauchy principal value. Inserting the latter into \ref{eq:convolution_frequency_epsr}:
\begin{eqnarray}
\eps_r(\omega)
&=&
\frac{1}{2\pi} \int_{-\infty}^{+\infty}
\eps_r(\omega')
\left[
\mathcal{P}\frac{1}{j(\omega-\omega')} + \pi \delta(\omega - \omega')
\right]
\, \diff \omega'
\\
&=&
\frac{1}{2\pi j}
\mathcal{P}
\int_{-\infty}^{+\infty}
\frac{\eps_r(\omega')}{\omega-\omega'}
\, \diff \omega'
+ \frac{1}{2} \eps_r(\omega)
\end{eqnarray}
Rearranging the terms leads to:
\begin{eqnarray}
\label{eq:Kramers-Kronig-complex}
\eps_r(\omega)
&=&
\frac{1}{\pi j}
\mathcal{P}
\int_{-\infty}^{+\infty}
\frac{\eps_r(\omega')}{\omega-\omega'}
\, \diff \omega'
\end{eqnarray}
which is the complex-valued formulation of the Kramers-Kronig relation. Setting $\eps_r = \eps_r' + j \eps_r''$ and separating \ref{eq:Kramers-Kronig-complex} into its real and imaginary parts, we obtain the conventional Kramers-Kronig relations:
\begin{eqnarray}
\eps_r'(\omega)
&=&
\frac{1}{\pi}
\mathcal{P}
\int_{-\infty}^{+\infty}
\frac{\eps_r''(\omega')}{\omega-\omega'}
\, \diff \omega'
\\
\eps_r''(\omega)
&=&
-\frac{1}{\pi}
\mathcal{P}
\int_{-\infty}^{+\infty}
\frac{\eps_r'(\omega')}{\omega-\omega'}
\, \diff \omega'
\label{eq:Kramers-Kronig}
\end{eqnarray}
The Kramers-Kronig relations relate the real and imaginary parts of the relative permittivity. They are the direct expression, in the frequency domain, of causality. An interesting consequence of the Kramers-Kronig relations is that there cannot exist a dispersive medium that is purely lossless, i.e. with $\eps_r''(\omega)=0$ for all $\omega$, since this would also require $\eps_r'(\omega)=0$. In many cases, however, the losses can be small enough within a given bandwidth to be considered locally negligible.
A second consequence is that causality implies the analyticity of $\eps_r$: in order to satisfy the Kramers-Kronig relations, $\eps_r$ must be analytic in one closed half-plane of complex $\omega$ (the lower half-plane with the $e^{j\omega t}$ convention adopted here). This result is of interest for complex integration when dealing with lossy plasmas\parencite{Brambilla1998}.
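The relation \ref{eq:Kramers-Kronig-complex} can be checked numerically on an analytically known causal response. The kernel $e^{-\gamma t}u(t)$ below is a hypothetical example chosen because its spectrum $1/(\gamma + j\omega)$ is known in closed form; the Cauchy principal value is realized by evaluating at a point halfway between the integration grid nodes, so that the singularity is sampled symmetrically:

```python
import numpy as np

gamma = 1.0
# spectrum of the causal kernel eps_r(t) = exp(-gamma t) u(t),
# with the e^{+j omega t} convention: eps_r(omega) = 1/(gamma + j omega)
h = 2e-3
wp = np.arange(-1000.0, 1000.0, h)        # integration grid for omega'
chi = 1.0 / (gamma + 1j * wp)

# evaluate halfway between grid nodes: the nodes around the singular point
# omega' = omega are then symmetric, which realizes the principal value
w = 2.0 + h / 2
reconstructed = np.sum(chi.imag / (w - wp)) * h / np.pi
exact = (1.0 / (gamma + 1j * w)).real     # gamma / (gamma^2 + w^2)

print(reconstructed, exact)               # agree to ~1e-3 (finite grid/tails)
```

The residual discrepancy comes from truncating the integration range; it shrinks as the grid is extended.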
% TODO: demonstration using signum function? (can be graphical)
% TODO: expressing KK relation for positive $\omega$. Useful?
% ###########################################################################
% ###########################################################################
\subsection{Boundary conditions}
% ###########################################################################
% ###########################################################################
% ###########################################################################
\section{Transmission Line Theory}
\subsection{Transmission Lines}
\subsubsection{Coaxial Line}
\paragraph{Power Handling}
Notes from https://microwaves101.com/encyclopedias/coax-power-handling: for a coaxial line, the characteristic impedance minimizing the peak electric field (and hence maximizing the breakdown voltage) is about 60~$\Omega$, while the impedance maximizing the peak power handling is about 30~$\Omega$. The average power handling is instead limited by heat dissipation in the line.
\subsubsection{Rectangular Waveguides}
\paragraph{Power Handling}
\subsubsection{Circular Waveguides}
\paragraph{Power Handling}
% ###########################################################################
% ###########################################################################
% ###########################################################################
\section{Microwave Measurements}
Source: E.A. Wolff and R. Kaul, \emph{Microwave Engineering and Systems Applications}, 1988. doi:10.1109/MAP.1989.6095665.
VSWR (Voltage Standing Wave Ratio) and SWR (Standing Wave Ratio) denote exactly the same quantity. Since it is more practical to measure RF waves by detecting voltage rather than current, most instruments measure the voltage and then deduce the power ($P=V^2/R$) or the current from the line impedance ($I=V/Z$).
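For reference, the standard conversion from the reflection coefficient magnitude $|\Gamma|$ to the VSWR can be sketched as a small helper (the function name is chosen here for illustration):

```python
def vswr(gamma):
    """VSWR from the reflection coefficient magnitude |Gamma| (< 1)."""
    g = abs(gamma)
    return (1 + g) / (1 - g)

print(vswr(0.0))        # matched load: VSWR = 1.0
print(vswr(1/3))        # VSWR = 2.0
print(vswr(0.5))        # VSWR = 3.0
```

A matched load ($\Gamma=0$) gives a VSWR of 1, while a total reflection ($|\Gamma|\to 1$) makes it diverge.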
An antenna is a device exhibiting resistance and reactance, i.e. an impedance. Its resistance can be considered as the sum of a loss resistance and a radiation resistance: the loss resistance is due to ohmic RF current losses in the antenna material, while the radiation resistance accounts for the power transferred between the antenna and the impedance of the medium facing it.
\section{Dipole antenna}
Reactance occurs when the antenna is operated away from its resonant point. At resonance, the antenna behaves like a pure resistance.
The antenna input can be modelled as a capacitor, an inductor and a resistor in series. At resonance, $X_C$ and $X_L$ have equal magnitude and opposite sign, so they cancel out, leaving just the resistance $R$. Below resonance the capacitive reactance dominates and above resonance the inductive reactance dominates; hence in these conditions the load appears either capacitive or inductive.
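This behaviour is easy to see on a series-RLC sketch of the feed-point impedance (the component values below are arbitrary illustrations, not a fitted dipole model):

```python
import numpy as np

R, L, C = 73.0, 1e-6, 1e-12            # ohms, henries, farads (illustrative)
w0 = 1 / np.sqrt(L * C)                # series resonance, where X_L = -X_C

def Z(w):
    """Series-RLC input impedance Z = R + j(wL - 1/(wC))."""
    return R + 1j * (w * L - 1 / (w * C))

print(Z(w0))             # purely resistive at resonance: ~ (73+0j)
print(Z(0.5 * w0).imag)  # negative reactance below resonance (capacitive)
print(Z(2.0 * w0).imag)  # positive reactance above resonance (inductive)
```

The sign of the reactance flips across $\omega_0$, which is exactly the capacitive-below / inductive-above behaviour described in the text.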
% ###########################################################################
% ###########################################################################
% ###########################################################################
\section{Microwave safety}
\parencite[sec.5.8.3]{Benford2015}