$ t_k $ : arrival time of the rupture front at grid $k$.
Note: the problem thus reduces to estimating the coefficients $ a_{qkl} $.
The basis functions need not be orthogonal; here we adopt Bézier basis functions,
so the shape is controlled by neighbouring grid nodes, which makes smoothing constraints easy to apply.
$$
\begin{aligned}
u_j\left(t\right)&=\sum_{q=1}^2 \int_s G_{qj}^0\left(t,\xi\right)* \color{red}{ \dot{D}_q^0\left(t,\xi\right) } \color{black}{~d\xi + e_{bj}\left( t \right)} \\
&=\sum_{q=1}^2 \int_s G_{qj}^0\left(t,\xi\right)* \color{red}{ \sum_{k=1}^K \sum_{l=1}^L a_{qkl}X_k\left(\xi\right)T_l\left(t-t_k\right)} \color{black}{~d\xi +e_{bj}\left( t \right)}\\
&=\sum_{q=1}^2 \sum_{k=1}^K \sum_{l=1}^L a_{qkl} T_l\left(t-t_k\right) * \color{red}{ \int_s X_k\left(\xi\right)G_{qj}^0\left(t,\xi\right) ~d\xi} \color{black}{+e_{bj}\left( t \right)}\\
&=\sum_{q=1}^2 \sum_{k=1}^K \sum_{l=1}^L a_{qkl} T_l \left( t - t_k \right) * \color{red}{ g_{qkj}^0\left( t \right)} \color{black}{+e_{bj}\left( t \right)} \\
\end{aligned}
$$
with
$X_k$ has an analytical expression, but $g_{qkj}$ does not, so the discretization still introduces some error.
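The last line of the derivation above can be sketched numerically: once the kernels $g_{qkj}^0(t)$ are precomputed, the synthetic $u_j(t)$ is a sum of discrete convolutions weighted by the $a_{qkl}$. Everything below (the dimensions, the triangular stand-in for $T_l$, the random kernels) is a hypothetical placeholder, not the notes' actual data:

```python
import numpy as np

Q, K, L, nt = 2, 4, 3, 200          # slip components, space nodes, time windows, samples
dt = 0.05
t = np.arange(nt) * dt

rng = np.random.default_rng(0)
a = rng.normal(size=(Q, K, L))      # coefficients a_qkl (the unknowns)
t_k = np.linspace(0.0, 2.0, K)      # rupture-front arrival time at each grid k
g = rng.normal(size=(Q, K, nt))     # stand-in for the precomputed kernels g_qkj(t)

def T_l(l, tau):
    """Placeholder triangular time basis T_l; swap in the actual basis functions."""
    centre = 0.5 * l                # assumed window spacing of 0.5 s
    return np.maximum(0.0, 1.0 - np.abs(tau - centre) / 0.5)

u = np.zeros(nt)
for q in range(Q):
    for k in range(K):
        for l in range(L):
            # a_qkl * [T_l(t - t_k) * g_qkj](t) as a discrete convolution
            u += a[q, k, l] * np.convolve(T_l(l, t - t_k[k]), g[q, k], mode="full")[:nt] * dt
```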
A NURBS surface is obtained as the tensor product of two NURBS curves, and thus uses two independent parameters $u$ and $v$ (with indices $i$ and $j$, respectively):
with
as rational basis functions.
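A small sketch of this tensor-product construction, using the Cox-de Boor recursion for the non-rational basis and then forming the rational (weighted, normalized) basis; the clamped quadratic 3×3 example at the end is ours, not from the notes:

```python
import numpy as np

def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion for the (non-rational) B-spline basis N_{i,p}(t).
    Note: the half-open support convention means t at the right end of a
    clamped knot vector evaluates to 0."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    if knots[i + p] > knots[i]:
        out += (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, t)
    if knots[i + p + 1] > knots[i + 1]:
        out += (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline_basis(i + 1, p - 1, knots, t)
    return out

def nurbs_point(u, v, ctrl, w, ku, kv, p):
    """S(u,v) = sum_ij R_ij(u,v) P_ij, with rational basis
    R_ij = N_i(u) N_j(v) w_ij / sum_kl N_k(u) N_l(v) w_kl."""
    n, m = w.shape
    Nu = np.array([bspline_basis(i, p, ku, u) for i in range(n)])
    Nv = np.array([bspline_basis(j, p, kv, v) for j in range(m)])
    R = w * np.outer(Nu, Nv)
    R /= R.sum()                           # normalization makes the basis rational
    return np.tensordot(R, ctrl, axes=([0, 1], [0, 1]))

# 3x3 grid of control points on the plane z = 0, equal weights, clamped quadratic knots.
ku = kv = [0, 0, 0, 1, 1, 1]
ctrl = np.array([[[i, j, 0.0] for j in range(3)] for i in range(3)], dtype=float)
w = np.ones((3, 3))
S = nurbs_point(0.5, 0.5, ctrl, w, ku, kv, p=2)
```

With equal weights the surface point is a convex combination of the control points, so the example stays on the plane z = 0.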
A brief introduction to Bayes' rule
Joint probability of two events A and B:
In Bayesian probability theory, one of the "events" is the hypothesis and the other is the data, so:
$P\left(D|H\right)$: the likelihood function, as it assesses the probability of the observed data arising under the hypothesis.
$P\left(H\right)$: the prior, as it reflects one's knowledge before the data are considered.
$P\left(H|D\right)$: the posterior, as its name suggests; it reflects the probability of the hypothesis after the data are considered.
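A minimal numeric illustration (all numbers hypothetical): with prior $P(H)=0.3$ and likelihoods $P(D|H)=0.8$, $P(D|\neg H)=0.2$, the posterior follows from Bayes' rule plus the law of total probability:

```python
# Bayes' rule: P(H|D) = P(D|H) P(H) / P(D), with hypothetical numbers.
p_H = 0.3                       # prior P(H)
p_D_given_H = 0.8               # likelihood P(D|H)
p_D_given_notH = 0.2            # likelihood under the alternative

# Law of total probability gives the evidence P(D).
p_D = p_D_given_H * p_H + p_D_given_notH * (1.0 - p_H)
p_H_given_D = p_D_given_H * p_H / p_D   # posterior P(H|D)
```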
A simple example
Let's say we have some quantity in the world, $x$, and our observation of this quantity, $y$, is corrupted by additive Gaussian noise $e$:
We might want to pick the value of $x$ that maximizes this distribution;
alternatively, if we want to minimize the mean squared error of our guesses, we should pick the mean of $P(x|y)$.
Let's draw on our existing knowledge: $x$ has mean 12 and variance 1. Thus,
The $x$ that maximizes $P(x|y)$ is the same as the one that minimizes the exponent in brackets, which simple algebraic manipulation shows to be:
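As a numerical check of this example (assuming a noise variance of 1 and a hypothetical observation $y = 14$, neither of which is fixed by the notes):

```python
# Conjugate Gaussian update: prior x ~ N(mu0, s0sq), observation y = x + e, e ~ N(0, sesq).
mu0, s0sq = 12.0, 1.0    # prior mean and variance from the text
sesq = 1.0               # assumed noise variance (not specified in the notes)
y = 14.0                 # hypothetical observation

# The posterior is Gaussian; its mean (also the MAP estimate, since the maximizer
# of P(x|y) coincides with the minimizer of the exponent) is the
# precision-weighted average of the prior mean and the observation.
post_var = 1.0 / (1.0 / s0sq + 1.0 / sesq)
post_mean = post_var * (mu0 / s0sq + y / sesq)
```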
normalized bicubic B-splines
where
where $ M_{4,j}(s) $ is the B-spline function of order 4 (degree 3)
Red: $M_4$; green: first derivative; blue: second derivative
value of center point
value of grid
integration of $M_{r,j}$
roughness
suppose coordinate on fault then $f_1=f_2=0, h_1=h_2=1$
Integral, derivative, and maximum of $M_{4,i}$
Integral and derivative of $M_{4,j} \cdot M_{4,j}$
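These properties are easy to verify numerically. Below is the standard normalized cubic B-spline with unit knot spacing (peak $2/3$ at the centre, value $1/6$ at the adjacent knots, unit integral); the closed form used here is the textbook one, stated independently of the notes' own expressions:

```python
import numpy as np

def M4(s):
    """Normalized uniform cubic B-spline (order 4, degree 3), unit knot spacing."""
    a = np.abs(s)
    return np.where(a < 1, 2/3 - a**2 + a**3 / 2,
           np.where(a < 2, (2 - a)**3 / 6, 0.0))

s = np.linspace(-2.0, 2.0, 40001)
ds = s[1] - s[0]
integral = M4(s).sum() * ds          # Riemann sum; the basis integrates to 1
peak = M4(np.array([0.0]))[0]        # maximum value 2/3 at the centre
```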
Suppose:
the errors $e$ are Gaussian with
$\sigma ^2$ an unknown scale factor
likelihood function:
Here $ || \mathbf{E} || $ denotes the absolute value of the determinant of $ \mathbf{E} $.
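A direct transcription of this Gaussian likelihood in code, working in logs for numerical stability (`slogdet` returns $\log|\det\mathbf{E}|$):

```python
import numpy as np

def log_likelihood(r, E, sigma_sq):
    """log p(r) for residuals r ~ N(0, sigma^2 E), with E known and
    sigma^2 the unknown scale factor."""
    N = r.size
    _, logdet = np.linalg.slogdet(E)         # log |det E|
    quad = r @ np.linalg.solve(E, r)         # r^T E^{-1} r without forming E^{-1}
    return -0.5 * (N * np.log(2 * np.pi * sigma_sq) + logdet + quad / sigma_sq)

# With E = I and zero residuals this reduces to -(N/2) log(2*pi*sigma^2).
ll = log_likelihood(np.zeros(3), np.eye(3), 1.0)
```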
prior information:
posterior pdf
Marginal likelihood
Akaike's Bayesian Information Criterion (ABIC)
with
For given values of $\sigma ^2 $ and $\alpha ^2 $, $s(\mathbf{a})$ is minimized by setting $\frac{\partial s(\mathbf{a})}{\partial \mathbf{a}}=0 $;
the solution is:
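Numerically, assuming the usual penalized least-squares form $s(\mathbf{a}) = \|\mathbf{d}-\mathbf{H}\mathbf{a}\|^2 + \alpha^2\|\mathbf{G}\mathbf{a}\|^2$ (the matrix names here are ours, not the notes'), the zero-gradient condition gives the normal equations, solved directly:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(30, 8))       # hypothetical data kernel
G = np.eye(8)                      # simplest possible smoothing operator
d = rng.normal(size=30)            # hypothetical data vector
alpha_sq = 0.5                     # smoothing hyperparameter alpha^2

# Normal equations: (H^T H + alpha^2 G^T G) a = H^T d
a_star = np.linalg.solve(H.T @ H + alpha_sq * G.T @ G, H.T @ d)
```

At the solution the gradient of $s(\mathbf{a})$ vanishes, which is easy to verify.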
then, denoting
rewrite $s(\mathbf{a}) $
Gaussian integral
Suppose A is a symmetric positive-definite (hence invertible) $n \times n$ covariance matrix. Then,
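The statement being invoked is the standard multivariate Gaussian integral (presumably the notes' eq. (10)):

$$
\int_{\mathbb{R}^n} \exp\left(-\tfrac{1}{2}\,\mathbf{x}^\mathsf{T}\mathbf{A}^{-1}\mathbf{x}\right) d\mathbf{x} = \left(2\pi\right)^{n/2} \left|\det \mathbf{A}\right|^{1/2}
$$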
Substituting equation (9) into equation (4) and carrying out the integration in the marginal-likelihood equation (3) (see the Gaussian integral, eq. (10)):
Consider a vector-valued function $\mathbb{R}^n\mapsto\mathbb{R}^m$ of order $m \times 1$, such that
Differentiating such a vector-valued function $\mathbf{F}(\mathbf{x})$ with respect to another vector $\mathbf{x}$ of order $n\times1$ is ambiguous, in the sense that the derivative can be expressed either as an $m \times n$ matrix (numerator layout convention) or as an $n \times m$ matrix (denominator layout convention), such that we have
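Writing the two conventions out explicitly (these are the standard matrix-calculus definitions):

$$
\frac{\partial \mathbf{F}}{\partial \mathbf{x}} =
\underbrace{\begin{bmatrix}
\frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial F_m}{\partial x_1} & \cdots & \frac{\partial F_m}{\partial x_n}
\end{bmatrix}}_{\text{numerator layout, } m\times n}
\quad\text{or}\quad
\underbrace{\begin{bmatrix}
\frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_m}{\partial x_1}\\
\vdots & \ddots & \vdots\\
\frac{\partial F_1}{\partial x_n} & \cdots & \frac{\partial F_m}{\partial x_n}
\end{bmatrix}}_{\text{denominator layout, } n\times m}
$$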
draft
$$
\definecolor{rgb}{RGB}{220,220,220}
\begin{aligned}
\Lambda_0 &=
\begin{cases}
\quad \, s^3+6\Delta s s^2 + 12 \Delta s^2 s + 8 \Delta s^3 \color{rgb}{= (s+2\Delta s)^3} & [-2 \Delta s, - \Delta s)\\
-3 s^3 -6 \Delta s s^2 + 4 \Delta s^3 & [-\Delta s , 0 \quad )\\
\enspace \, 3 s^3 -6 \Delta s s^2 + 4 \Delta s^3 & [ 0 \quad , \Delta s )\\
\enspace -s^3+6\Delta s s^2 - 12 \Delta s^2 s + 8 \Delta s^3 \color{rgb}{= (2\Delta s-s)^3} & [ \Delta s, 2 \Delta s]
\end{cases} \\ \\
\Lambda_1 &=
\begin{cases}
\enspace \, 3 s^2 + 12 \Delta s s + 12 \Delta s^2 & \hspace{9.3em} [-2 \Delta s, - \Delta s)\\
-9 s^2 -12 \Delta s s & \hspace{9.3em} [- \Delta s , 0 )\\
\enspace \, 9 s^2 -12 \Delta s s & \hspace{9.3em} [ 0 , \Delta s )\\
-3 s^2 +12 \Delta s s - 12 \Delta s^2 & \hspace{9.3em} [ \Delta s, 2 \Delta s)
\end{cases} \\ \\
\Lambda_2 &=
\begin{cases}
\enspace \, 6 s \enspace + 12 \Delta s & \hspace{13.6em} [-2 \Delta s, - \Delta s)\\
-18 s -12 \Delta s & \hspace{13.6em} [- \Delta s , 0 )\\
\enspace \, 18 s -12 \Delta s & \hspace{13.6em} [ 0 , \Delta s )\\
-6 s \enspace +12 \Delta s & \hspace{13.6em} [ \Delta s, 2 \Delta s )
\end{cases} \\
\end{aligned}
$$