The root problem of drift detection and correction is predicting sensor measurements. This can usually be accomplished in two ways:
This requires one or more time series of data and an algorithm that consumes these time series and produces a prediction for the value a sensor should measure next. By far the most commonly used model for this is the \emph{Kalman filter}, which consists of two phases:
Given the previous state of knowledge at step $k-1$ (estimated system state and uncertainty), we calculate a prediction for the next system state and its uncertainty; this is the prediction phase. We then observe a new (possibly skewed) measurement and compute an updated estimate of the actual current state and its uncertainty (update phase). The algorithm is recursive in nature and can be computed in real time on limited hardware.
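To illustrate this recursive structure, the following schematic sketch shows how the two phases alternate. Here \texttt{predict} and \texttt{update} are placeholders for the phases made precise later in this section, and \texttt{x0}, \texttt{P0} and \texttt{measurements} stand for an initial estimate, its covariance and the incoming sensor readings:

\begin{verbatim}
# Schematic sketch of the recursive filter loop; `predict` and `update`
# are the two phases derived below, the remaining names are placeholders.
x_est, P_est = x0, P0                  # initial state estimate and covariance
for z in measurements:                 # one (possibly skewed) reading per step
    x_pred, P_pred = predict(x_est, P_est)      # prediction phase
    x_est, P_est = update(x_pred, P_pred, z)    # update phase
\end{verbatim}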
The Kalman filter is based on a linear dynamical system on a discrete time domain and represents the system state as vectors and matrices of real numbers. In order to use a Kalman filter, the observed process must be modeled with a specific structure:
\begin{itemize}
\item $F_k$, the state transition model for the $k$-th step
\item $H_k$, the observation model for the $k$-th step
\item $Q_k$, the covariance of the process noise
\item $R_k$, the covariance of the observation noise
\item Sometimes a control input model $B_k$
\end{itemize}
These models determine the true state $x$ and the observation $z$ in the $k$-th step according to:
\begin{align*}
x_k &= F_kx_{k-1} + B_ku_k + w_k \\
z_k &= H_kx_k + v_k
\end{align*}
where $w_k$ and $v_k$ are noise terms drawn from zero-mean multivariate normal distributions with covariances $Q_k$ and $R_k$ respectively ($w_k \sim \mathcal{N}(0,Q_k)$ and $v_k \sim \mathcal{N}(0,R_k)$).
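As a purely illustrative example (not tied to any particular sensor setup in this text), a one-dimensional constant-velocity model tracking position $p$ and velocity $\dot{p}$ with a position-only sensor and time step $\Delta t$ could use

\begin{align*}
x_k &= \begin{pmatrix} p_k \\ \dot{p}_k \end{pmatrix}, &
F_k &= \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}, &
H_k &= \begin{pmatrix} 1 & 0 \end{pmatrix},
\end{align*}

with $Q_k$ and $R_k$ chosen according to the expected process and sensor noise; there is no control input in this example, so the $B_ku_k$ term is omitted.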
The Kalman filter state is represented by two variables $\hat{x}_{k|j}$ and $P_{k|j}$, which are the state estimate and its covariance at step $k$ given observations up to and including step $j$.
When entering step $k$, we can now define the two phases. \textbf{Prediction phase:}
\begin{align*}
\hat{x}_{k|k-1} &= F_k \hat{x}_{k-1|k-1} + B_ku_k \\
P_{k|k-1} &= F_kP_{k-1|k-1} F_k^\intercal + Q_k
\end{align*}
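As a sketch of how this phase might look in code, assuming the model matrices are available as NumPy arrays (the function and variable names are illustrative, not taken from any existing implementation):

\begin{verbatim}
import numpy as np

def predict(x_est, P_est, F, Q, B=None, u=None):
    """Prediction phase: propagate the state estimate and its covariance."""
    x_pred = F @ x_est                  # x_{k|k-1} = F_k x_{k-1|k-1} (+ B_k u_k)
    if B is not None and u is not None:
        x_pred = x_pred + B @ u
    P_pred = F @ P_est @ F.T + Q        # P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k
    return x_pred, P_pred
\end{verbatim}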
Here we predict the next state and quantify our confidence in that prediction. Once we are given the measurement $z_k$, we enter the next phase. \textbf{Update phase:}
\begin{align*}
\tilde{y}_k &= z_k - H_k\hat{x}_{k|k-1} & \text{Innovation (forecast residual)} \\
S_k &= H_kP_{k|k-1} H_k^\intercal + R_k & \text{Innovation covariance} \\
K_k &= P_{k|k-1}H_k^\intercal S_k^{-1} & \text{Optimal Kalman gain} \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\tilde{y}_k & \text{State estimate} \\
P_{k|k} &= (I-K_kH_k)P_{k|k-1} & \text{Covariance estimate}
\end{align*}
After the update phase, we obtain $\hat{x}_{k|k}$, our best estimate of the actual current state.
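Continuing the sketch from the prediction phase (NumPy still imported as \texttt{np}; the names remain illustrative), the update phase translates directly into:

\begin{verbatim}
def update(x_pred, P_pred, z, H, R):
    """Update phase: correct the predicted state with the measurement z."""
    y = z - H @ x_pred                      # innovation (forecast residual)
    S = H @ P_pred @ H.T + R                # innovation covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain K_k
    x_est = x_pred + K @ y                  # state estimate x_{k|k}
    P_est = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred   # covariance P_{k|k}
    return x_est, P_est
\end{verbatim}

Calling \texttt{predict} and \texttt{update} once per incoming measurement realizes the recursive loop sketched at the beginning of this section.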