
Inverse Modeling - Atmospheric Chemistry - Lecture Slides

Major topics of the Atmospheric Chemistry course are Acid Rain, Aerosols, Aerosol Optics, Geochemical Cycles, Global Models, Tropospheric Ozone Pollution, and many others. These lecture slides contain the following keywords: Inverse Modeling, Atmospheric Composition, Bayes' Theorem, Scalar, Jacobian Matrix, Jacobian Matrix for Forward Model, Gaussian PDFs for Vectors, Averaging Kernel Matrix, Application to Satellite Retrievals.


INVERSE MODELING OF ATMOSPHERIC COMPOSITION DATA


THE INVERSE MODELING PROBLEM

Optimize values of an ensemble of variables (state vector $\mathbf{x}$) using observations:

- a priori estimate: $\mathbf{x}_a + \boldsymbol{\epsilon}_a$
- observation vector: $\mathbf{y}$
- forward model: $\mathbf{y} = F(\mathbf{x}) + \boldsymbol{\epsilon}$
- Bayes' theorem combines these into the "MAP solution" / "optimal estimate" / "retrieval": $\hat{\mathbf{x}} + \hat{\boldsymbol{\epsilon}}$

THREE MAIN APPLICATIONS FOR ATMOSPHERIC COMPOSITION:

1. Retrieve atmospheric concentrations ($\mathbf{x}$) from observed atmospheric radiances ($\mathbf{y}$) using a radiative transfer model as forward model
2. Invert sources ($\mathbf{x}$) from observed atmospheric concentrations ($\mathbf{y}$) using a CTM as forward model
3. Construct a continuous field of concentrations ($\mathbf{x}$) by assimilation of sparse observations ($\mathbf{y}$) using a forecast model (initial-value CTM) as forward model


SIMPLE LINEAR INVERSE PROBLEM FOR A SCALAR

Use a single measurement to optimize a single source:

- a priori bottom-up estimate: $x_a \pm \sigma_a$
- Monitoring site measures concentration $y$
- Forward model gives $y = kx$
- "Observational error" $\sigma_\epsilon$: instrument + forward model

Assume random Gaussian errors, and let $x$ be the true value. Bayes' theorem:

$$\ln P(x \mid y) = \ln P(y \mid x) + \ln P(x) + c = -\frac{(y - kx)^2}{2\sigma_\epsilon^2} - \frac{(x - x_a)^2}{2\sigma_a^2} + c'$$

Max of $P(x \mid y)$ is given by minimum of cost function

$$J(x) = \frac{(x - x_a)^2}{\sigma_a^2} + \frac{(y - kx)^2}{\sigma_\epsilon^2}$$

Solve for $dJ/dx = 0$. Solution:

$$\hat{x} = x_a + g\,(y - k x_a)$$

where $g$ is a gain factor:

$$g = \frac{k \sigma_a^2}{k^2 \sigma_a^2 + \sigma_\epsilon^2}$$

Variance of solution:

$$\hat{\sigma}^2 = \left( \frac{1}{\sigma_a^2} + \frac{k^2}{\sigma_\epsilon^2} \right)^{-1}$$

Alternate expression of solution:

$$\hat{x} = a x + (1 - a)\,x_a + g\epsilon$$

where $a = gk$ is an averaging kernel.

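As a quick numerical check, here is a minimal Python/NumPy sketch of the scalar MAP solution; all numerical values are made-up illustrative inputs, not taken from the slides:

```python
import numpy as np

# Hypothetical inputs: a priori source, its error, forward model slope,
# observational error, and one measurement.
x_a, sigma_a = 100.0, 30.0   # a priori bottom-up estimate and its 1-sigma error
k = 0.5                      # forward model: y = k * x
sigma_eps = 10.0             # observational error (instrument + forward model)
y = 65.0                     # observed concentration

# Gain factor g = k*sigma_a^2 / (k^2*sigma_a^2 + sigma_eps^2)
g = k * sigma_a**2 / (k**2 * sigma_a**2 + sigma_eps**2)

x_hat = x_a + g * (y - k * x_a)                            # MAP solution
sigma_hat2 = 1.0 / (1.0 / sigma_a**2 + k**2 / sigma_eps**2)  # posterior variance
a = g * k                    # averaging kernel: weight of observation vs. a priori
print(x_hat, np.sqrt(sigma_hat2), a)
```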

GENERALIZATION: CONSTRAINING n SOURCES WITH m OBSERVATIONS

Linear forward model:

$$y_i = \sum_{j=1}^{n} k_{ij} x_j + \epsilon_i$$

A cost function defined as

$$J(\mathbf{x}) = \sum_{i=1}^{n} \frac{(x_i - x_{a,i})^2}{\sigma_{a,i}^2} + \sum_{j=1}^{m} \frac{\left(y_j - \sum_{i=1}^{n} k_{ji} x_i\right)^2}{\sigma_{\epsilon,j}^2}$$

is generally not adequate because it does not account for correlation between sources or between observations. Need vector-matrix formalism:

$$\mathbf{x} = (x_1, \ldots, x_n)^T, \qquad \mathbf{y} = (y_1, \ldots, y_m)^T, \qquad \mathbf{y} = \mathbf{K}\mathbf{x} + \boldsymbol{\epsilon}$$

where $\mathbf{K}$ is the Jacobian matrix.

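A small sketch of this vector-matrix setup in NumPy, with hypothetical dimensions and values chosen only for illustration (later sketches reuse these names):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 5                                # n sources, m observations (hypothetical)
K = rng.uniform(0.1, 1.0, size=(m, n))     # Jacobian matrix of the linear forward model
x_true = np.array([80.0, 120.0, 60.0])     # "true" sources for a synthetic test
S_eps = np.diag(np.full(m, 10.0**2))       # observational error covariance (diagonal here)

# Simulate observations: y = K x + eps
eps = rng.multivariate_normal(np.zeros(m), S_eps)
y = K @ x_true + eps
```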

GAUSSIAN PDFs FOR VECTORS

A priori pdf for $x$:

Scalar $x$:

$$P(x) = \frac{1}{\sigma_a \sqrt{2\pi}} \exp\left[ -\frac{(x - x_a)^2}{2\sigma_a^2} \right]$$

Vector $\mathbf{x} = (x_1, \ldots, x_n)^T$:

$$P(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} \, |\mathbf{S}_a|^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \mathbf{x}_a)^T \mathbf{S}_a^{-1} (\mathbf{x} - \mathbf{x}_a) \right]$$

where $\mathbf{S}_a$ is the a priori error covariance matrix describing error statistics on $(\mathbf{x} - \mathbf{x}_a)$:

$$\mathbf{S}_a = \begin{pmatrix} \mathrm{var}(x_1 - x_{a,1}) & \cdots & \mathrm{cov}(x_1 - x_{a,1},\, x_n - x_{a,n}) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(x_n - x_{a,n},\, x_1 - x_{a,1}) & \cdots & \mathrm{var}(x_n - x_{a,n}) \end{pmatrix}$$

In log space:

$$-2 \ln P(\mathbf{x}) = (\mathbf{x} - \mathbf{x}_a)^T \mathbf{S}_a^{-1} (\mathbf{x} - \mathbf{x}_a) + c$$

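The vector pdf can be evaluated directly with SciPy; a sketch assuming a hypothetical 3-source prior with 50% errors and an assumed inter-source error correlation of 0.3 (none of these numbers come from the slides):

```python
import numpy as np
from scipy.stats import multivariate_normal

x_a = np.array([100.0, 150.0, 80.0])           # a priori sources (hypothetical)
sig = 0.5 * x_a                                # assumed 50% a priori errors
corr = 0.3                                     # assumed inter-source error correlation
S_a = corr * np.outer(sig, sig) + (1 - corr) * np.diag(sig**2)  # a priori covariance

P_x = multivariate_normal(mean=x_a, cov=S_a)   # the vector pdf P(x) above
print(P_x.logpdf(x_a))                         # log-density at the a priori (its maximum)
```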

OBSERVATIONAL ERROR COVARIANCE MATRIX

$$\mathbf{y} = F(\mathbf{x}) + \boldsymbol{\epsilon}_i + \boldsymbol{\epsilon}_m$$

observation = true value + instrument error + forward model error; the observational error is

$$\boldsymbol{\epsilon} = \boldsymbol{\epsilon}_i + \boldsymbol{\epsilon}_m$$

Observational error covariance matrix

$$\mathbf{S}_\epsilon = \begin{pmatrix} \mathrm{var}(\epsilon_1) & \cdots & \mathrm{cov}(\epsilon_1, \epsilon_m) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(\epsilon_m, \epsilon_1) & \cdots & \mathrm{var}(\epsilon_m) \end{pmatrix}$$

is the sum of the instrument and forward model error covariance matrices:

$$\mathbf{S}_\epsilon = \mathbf{S}_i + \mathbf{S}_m$$

How well can the observing system constrain the true value of $\mathbf{x}$? Corresponding pdf, in log space:

$$-2 \ln P(\mathbf{y} \mid \mathbf{x}) = (\mathbf{y} - \mathbf{K}\mathbf{x})^T \mathbf{S}_\epsilon^{-1} (\mathbf{y} - \mathbf{K}\mathbf{x}) + c$$

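Combining the two log-space pdfs gives the vector cost function that the MAP solution minimizes; a minimal sketch (the function name `cost` is mine, not from the slides):

```python
import numpy as np

def cost(x, x_a, S_a, y, K, S_eps):
    """J(x) = (x - x_a)^T S_a^-1 (x - x_a) + (y - Kx)^T S_eps^-1 (y - Kx)."""
    dx = x - x_a
    dy = y - K @ x
    return dx @ np.linalg.solve(S_a, dx) + dy @ np.linalg.solve(S_eps, dy)
```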

PARALLEL BETWEEN VECTOR-MATRIX AND SCALAR SOLUTIONS

MAP solution, scalar problem:

- $\hat{x} = x_a + g\,(y - k x_a)$
- Gain factor: $g = \dfrac{k\sigma_a^2}{k^2\sigma_a^2 + \sigma_\epsilon^2}$
- A posteriori error: $\hat{\sigma}^2 = \left(\dfrac{1}{\sigma_a^2} + \dfrac{k^2}{\sigma_\epsilon^2}\right)^{-1}$
- Averaging kernel: $\hat{x} = ax + (1 - a)\,x_a + g\epsilon$ with $a = gk$

MAP solution, vector-matrix problem:

- $\hat{\mathbf{x}} = \mathbf{x}_a + \mathbf{G}\,(\mathbf{y} - \mathbf{K}\mathbf{x}_a)$
- Gain matrix: $\mathbf{G} = \mathbf{S}_a \mathbf{K}^T (\mathbf{K}\mathbf{S}_a\mathbf{K}^T + \mathbf{S}_\epsilon)^{-1}$
- A posteriori error: $\hat{\mathbf{S}} = (\mathbf{K}^T \mathbf{S}_\epsilon^{-1} \mathbf{K} + \mathbf{S}_a^{-1})^{-1}$
- Averaging kernel: $\hat{\mathbf{x}} = \mathbf{A}\mathbf{x} + (\mathbf{I}_n - \mathbf{A})\,\mathbf{x}_a + \mathbf{G}\boldsymbol{\epsilon}$ with $\mathbf{A} = \mathbf{G}\mathbf{K}$

- Jacobian matrix $\mathbf{K} = \partial\mathbf{y}/\partial\mathbf{x}$: sensitivity of observations to true state
- Gain matrix $\mathbf{G} = \partial\hat{\mathbf{x}}/\partial\mathbf{y}$: sensitivity of retrieval to observations
- Averaging kernel matrix $\mathbf{A} = \partial\hat{\mathbf{x}}/\partial\mathbf{x}$: sensitivity of retrieval to true state

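A direct NumPy transcription of the vector-matrix MAP formulas; `map_solution` is a hypothetical helper name, and using `solve` instead of an explicit inverse for the posterior term is a standard numerical choice, not something the slides specify:

```python
import numpy as np

def map_solution(x_a, S_a, y, K, S_eps):
    """MAP retrieval: returns the estimate, gain matrix, and a posteriori covariance."""
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_eps)   # gain matrix G
    x_hat = x_a + G @ (y - K @ x_a)                        # MAP estimate
    S_hat = np.linalg.inv(K.T @ np.linalg.solve(S_eps, K) + np.linalg.inv(S_a))
    return x_hat, G, S_hat
```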

A LITTLE MORE ON THE AVERAGING KERNEL MATRIX

$\mathbf{A}$ describes the sensitivity of the retrieval to the true state:

$$\mathbf{A} = \frac{\partial\hat{\mathbf{x}}}{\partial\mathbf{x}} = \begin{pmatrix} \partial\hat{x}_1/\partial x_1 & \cdots & \partial\hat{x}_1/\partial x_n \\ \vdots & \ddots & \vdots \\ \partial\hat{x}_n/\partial x_1 & \cdots & \partial\hat{x}_n/\partial x_n \end{pmatrix}$$

and hence the smoothing of the solution:

$$\hat{\mathbf{x}} = \mathbf{A}\mathbf{x} + (\mathbf{I}_n - \mathbf{A})\,\mathbf{x}_a + \mathbf{G}\boldsymbol{\epsilon}$$

where the $(\mathbf{I}_n - \mathbf{A})\,\mathbf{x}_a$ term carries the smoothing error and $\mathbf{G}\boldsymbol{\epsilon}$ the retrieval error.

MAP retrieval gives $\mathbf{A}$ as part of the retrieval:

$$\mathbf{A} = \mathbf{G}\mathbf{K} = \mathbf{S}_a \mathbf{K}^T (\mathbf{K}\mathbf{S}_a\mathbf{K}^T + \mathbf{S}_\epsilon)^{-1} \mathbf{K}$$

Other retrieval methods (e.g., neural network, adjoint method) do not provide $\mathbf{A}$.

Number of pieces of information in a retrieval = degrees of freedom for signal (DOFS) = trace($\mathbf{A}$)

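Given the gain matrix from the MAP sketch above, the averaging kernel and DOFS follow in two lines; a hedged continuation of that sketch (reusing the hypothetical `map_solution` and problem arrays defined earlier):

```python
import numpy as np

# Continuing the MAP sketch: A and the DOFS come at no extra cost.
x_hat, G, S_hat = map_solution(x_a, S_a, y, K, S_eps)
A = G @ K                  # averaging kernel matrix A = GK
dofs = np.trace(A)         # degrees of freedom for signal
print(dofs)
```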

INVERSE ANALYSIS OF MOPITT AND TRACE-P (AIRCRAFT) DATA TO CONSTRAIN ASIAN SOURCES OF CO

[Flowchart: bottom-up emissions customized for TRACE-P — fossil fuel and biofuel plus daily biomass burning from satellite fire counts — feed the GEOS-Chem CTM; TRACE-P CO data (G.W. Sachse) and MOPITT CO enter an inverse analysis that provides validation, chemical forecasts, and top-down constraints for the optimization of sources.]

Streets et al. [2003]; Heald et al. [2003a]


COMPARE TRACE-P OBSERVATIONS WITH CTM RESULTS USING A PRIORI SOURCES

- Model is low north of 30°N: suggests Chinese source is low.
- Model is high in the free troposphere south of 30°N: suggests biomass burning source is high.
- Assume that the Relative Residual Error (RRE) after the bias is removed describes the observational error variance (20-30%).
- Assume that the difference between successive GEOS-Chem CO forecasts during TRACE-P ($t_0$ + 48 h and $t_0$ + 24 h) describes the covariant error structure ("NMC method").

Palmer et al. [2003], Jones et al. [2003]


COMPARATIVE INVERSE ANALYSIS OF ASIAN CO SOURCES USING DAILY MOPITT AND TRACE-P DATA

CO observations from Spring 2001, GEOS-Chem CTM as forward model:

- TRACE-P aircraft CO: 4 degrees of freedom
- MOPITT CO columns: 10 degrees of freedom
- MOPITT has higher information content than TRACE-P because it observes source regions and Indian outflow.
- Don't trust the a posteriori error covariance matrix; ensemble modeling indicates 10-40% error on a posteriori sources (from validation).

"Ensemble modeling": repeat the inversion with different values of forward model and covariance parameters to span the uncertainty range.

Heald et al. [2004]


Analytical solution to inverse problem

$$\hat{\mathbf{x}} = \mathbf{x}_a + \mathbf{S}_a \mathbf{K}^T (\mathbf{K}\mathbf{S}_a\mathbf{K}^T + \mathbf{S}_\epsilon)^{-1} (\mathbf{y} - \mathbf{K}\mathbf{x}_a)$$

requires (iterative) numerical construction of the Jacobian matrix $\mathbf{K}$ and matrix operations of dimension $(m \times n)$; this limits the size of $n$, i.e., the number of variables that you can optimize.

Address this limitation with a Kalman filter (for time-dependent $\mathbf{x}$) or with the adjoint method, which works with the cost function gradient

$$\nabla_{\mathbf{x}} J = 2\,\mathbf{S}_a^{-1}(\mathbf{x} - \mathbf{x}_a) + 2\,\mathbf{K}^T \mathbf{S}_\epsilon^{-1} (\mathbf{K}\mathbf{x} - \mathbf{y})$$

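A sketch of this gradient for the linear forward model $F(\mathbf{x}) = \mathbf{K}\mathbf{x}$, matching the `cost` function sketched earlier (the helper name is mine):

```python
import numpy as np

def cost_gradient(x, x_a, S_a, y, K, S_eps):
    """Gradient of J(x) for the linear forward model F(x) = Kx."""
    return (2.0 * np.linalg.solve(S_a, x - x_a)
            + 2.0 * K.T @ np.linalg.solve(S_eps, K @ x - y))
```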

ADJOINT INVERSION (4-D VAR)

[Figure: contours of the cost function J; successive guesses x_a → x_1 → x_2 → x_3 descend toward the minimum of J.]

At the minimum of the cost function $J$:

$$\nabla_{\mathbf{x}} J(\mathbf{x}) = 2\,\mathbf{S}_a^{-1}(\mathbf{x} - \mathbf{x}_a) + 2\,\mathbf{K}^T \mathbf{S}_\epsilon^{-1} (F(\mathbf{x}) - \mathbf{y}) = \mathbf{0}$$

Solve numerically rather than analytically:

1. Starting from the a priori $\mathbf{x}_a$, calculate $\nabla_{\mathbf{x}} J(\mathbf{x}_a)$.
2. Using an optimization algorithm (BFGS), get next guess $\mathbf{x}_1$.
3. Calculate $\nabla_{\mathbf{x}} J(\mathbf{x}_1)$, get next guess $\mathbf{x}_2$.
4. Iterate until convergence.

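A sketch of this iteration using SciPy's BFGS implementation, reusing the hypothetical `cost` and `cost_gradient` helpers from the earlier sketches (a real 4-D Var system would wrap a CTM and its adjoint instead of a linear K):

```python
from scipy.optimize import minimize

def variational_inversion(x_a, S_a, y, K, S_eps):
    """Minimize J(x) numerically instead of using the analytical solution."""
    res = minimize(cost, x0=x_a, jac=cost_gradient,
                   args=(x_a, S_a, y, K, S_eps),
                   method="BFGS")      # quasi-Newton algorithm named on the slide
    return res.x                       # converged estimate x_hat
```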

NUMERICAL CALCULATION OF COST FUNCTION GRADIENT

$$\nabla_{\mathbf{x}} J = 2\,\mathbf{S}_a^{-1}(\mathbf{x} - \mathbf{x}_a) + 2\,\mathbf{K}^T \mathbf{S}_\epsilon^{-1} (F(\mathbf{x}) - \mathbf{y})$$

The adjoint model is applied to the error-weighted difference between model and observations (the "adjoint forcing")... but we want to avoid explicit construction of $\mathbf{K}$.

The sensitivity of $\mathbf{y}^{(i)}$ to $\mathbf{x}^{(0)}$ at time $t_0$ can be written as a product of tangent linear models:

$$\mathbf{K}^{(i)} = \frac{\partial \mathbf{y}^{(i)}}{\partial \mathbf{x}^{(0)}} = \frac{\partial \mathbf{y}^{(i)}}{\partial \mathbf{y}^{(i-1)}} \, \frac{\partial \mathbf{y}^{(i-1)}}{\partial \mathbf{y}^{(i-2)}} \cdots \frac{\partial \mathbf{y}^{(1)}}{\partial \mathbf{y}^{(0)}} \, \frac{\partial \mathbf{y}^{(0)}}{\partial \mathbf{x}^{(0)}}$$

where $\partial \mathbf{y}^{(i)} / \partial \mathbf{y}^{(i-1)}$ is the tangent linear model of the forward model, describing the evolution of the concentration field over the time interval $[t_{i-1}, t_i]$.

And since $(\mathbf{A}\mathbf{B})^T = \mathbf{B}^T \mathbf{A}^T$, the adjoint of the forward model is

$$\left(\mathbf{K}^{(i)}\right)^T = \left(\frac{\partial \mathbf{y}^{(0)}}{\partial \mathbf{x}^{(0)}}\right)^T \left(\frac{\partial \mathbf{y}^{(1)}}{\partial \mathbf{y}^{(0)}}\right)^T \cdots \left(\frac{\partial \mathbf{y}^{(i)}}{\partial \mathbf{y}^{(i-1)}}\right)^T$$

Apply the transpose of the tangent linear model to the adjoint forcings; for the time interval $[t_0, t_n]$, start from the observations at $t_n$ and work backward in time until $t_0$, picking up new observations (adjoint forcings) along the way.

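A schematic of the backward sweep; `adjoint_step` and the `forcings` list are hypothetical stand-ins for a real CTM adjoint and its error-weighted model-minus-observation residuals, so this is a structural sketch only:

```python
import numpy as np

def adjoint_gradient(x, x_a, S_a, forcings, adjoint_step):
    """Structural sketch of the backward sweep that avoids forming K.

    forcings[i]  -- adjoint forcing S_eps^-1 (F(x) - y) at time t_i (hypothetical)
    adjoint_step -- applies the transpose of the tangent linear model over
                    [t_{i-1}, t_i]; stand-in for a real CTM adjoint
    Assumes the state x is the initial concentration field, so the observation
    operator at t_0 is the identity.
    """
    n = len(forcings) - 1
    w = forcings[n]                       # start from observations at t_n
    for i in range(n, 0, -1):             # work backward in time toward t_0
        w = adjoint_step(w, i)            # transpose of TLM for [t_{i-1}, t_i]
        w = w + forcings[i - 1]           # pick up new adjoint forcing on the way
    return 2.0 * np.linalg.solve(S_a, x - x_a) + 2.0 * w   # add the prior term
```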