## Abstract

For linear and well-defined estimation problems with Gaussian white noise, the Kalman filter (KF) yields the best result in terms of estimation accuracy. However, the KF performance degrades and can fail in cases involving large uncertainties such as modeling errors in the estimation process. The smooth variable structure filter (SVSF) is a relatively new estimation strategy based on sliding mode theory and has been shown to be robust to modeling uncertainties. The SVSF makes use of an existence subspace and of a smoothing boundary layer to keep the estimates bounded within a region of the true state trajectory. Currently, the width of the smoothing boundary layer is chosen based on designer knowledge of the upper bound of modeling uncertainties, such as maximum noise levels and parametric errors. This is a conservative choice, as a more precisely defined smoothing boundary layer yields more accurate results. In this paper, the state error covariance matrix of the SVSF is used for the derivation of an optimal time-varying smoothing boundary layer. The robustness and accuracy of the new form of the SVSF were validated and compared with the KF and the standard SVSF by testing it on a linear electrohydrostatic actuator (EHA).

## Introduction

The successful control of a mechanical or electrical system depends on the knowledge of the system states and parameters. Observations of the system are made through the use of sensors that provide measurements which contain information on the variables of interest. Filters are used to remove unwanted components such as noise in an effort to provide an accurate estimate of the states [1]. Advanced filtering and estimation methods are model-based and as such are sensitive to modeling uncertainties. The most popular and well-studied estimation method is the Kalman filter (KF), which was introduced in the 1960s [2,3]. The KF yields a statistically optimal solution for linear estimation problems, as defined by Eqs. (1.1) and (1.2), in the presence of Gaussian noise, where $w_k \sim N(0, Q_k)$ and $v_k \sim N(0, R_k)$. A typical model is represented by the following equations:
$x_{k+1} = A x_k + B u_k + w_k$
(1.1)
$z_{k+1} = C x_{k+1} + v_{k+1}$
(1.2)

A list of the Nomenclature used throughout this paper is provided at the end. It is the goal of a filter to remove the effects that the system noise $w_k$ and measurement noise $v_k$ have on extracting the true state values $x_k$ from the measurements $z_k$. The KF is formulated in a predictor-corrector manner. The states are first estimated using the system model, termed a priori estimates, meaning "prior to" knowledge of the observations. A correction term is then added based on the innovation (also called residuals or measurement errors), thus forming the updated or a posteriori (meaning "subsequent to" the observations) state estimates.

The KF has been broadly applied to problems covering state and parameter estimation, signal processing, target tracking, fault detection and diagnosis, and even financial analysis [4,5]. The success of the KF comes from the optimality of the Kalman gain in minimizing the trace of the a posteriori state error covariance matrix [6,7]. The trace is taken because it represents the state error vector in the estimation process [8]. The following five equations form the core of the KF algorithm and are used in an iterative fashion. Equations (1.3) and (1.4) define, respectively, the a priori state estimate $\hat{x}_{k+1|k}$, based on knowledge of the system matrix A, the previous state estimate $\hat{x}_{k|k}$, the input matrix B, and the input $u_k$, and the corresponding state error covariance matrix $P_{k+1|k}$
$\hat{x}_{k+1|k} = A \hat{x}_{k|k} + B u_k$
(1.3)
$P_{k+1|k} = A P_{k|k} A^T + Q_k$
(1.4)
The Kalman gain $K_{k+1}$ is defined by Eq. (1.5), and is used to update the state estimate $\hat{x}_{k+1|k+1}$ as shown in Eq. (1.6). The gain makes use of the innovation covariance $S_{k+1}$, which appears as the inverted term in the following equation:
$K_{k+1} = P_{k+1|k} C^T \left[C P_{k+1|k} C^T + R_{k+1}\right]^{-1}$
(1.5)
$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}\left[z_{k+1} - C \hat{x}_{k+1|k}\right]$
(1.6)
The a posteriori state error covariance matrix $P_{k+1|k+1}$ is then calculated by Eq. (1.7), and is used iteratively, as per Eq. (1.4).
$P_{k+1|k+1} = \left[I - K_{k+1} C\right] P_{k+1|k}$
(1.7)
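
The five equations above can be collected into a single predictor-corrector step. The following NumPy sketch is illustrative only; the function and variable names are our own, not from the paper:

```python
import numpy as np

def kf_step(x, P, z, u, A, B, C, Q, R):
    """One predictor-corrector iteration of the KF, Eqs. (1.3)-(1.7)."""
    # A priori state estimate and covariance, Eqs. (1.3) and (1.4)
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    # Kalman gain, Eq. (1.5); the bracketed term is the innovation covariance S
    S = C @ P_prior @ C.T + R
    K = P_prior @ C.T @ np.linalg.inv(S)
    # A posteriori state estimate, Eq. (1.6)
    x_post = x_prior + K @ (z - C @ x_prior)
    # A posteriori state error covariance, Eq. (1.7)
    P_post = (np.eye(len(x)) - K @ C) @ P_prior
    return x_post, P_post
```

Each call consumes the previous a posteriori estimate and covariance and produces the next, matching the iterative use of Eqs. (1.4) and (1.7).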

A number of different methods have extended the classical KF to nonlinear systems, with the most popular and simplest method being the extended Kalman filter (EKF) [9,10]. The EKF is conceptually similar to the KF; however, the nonlinear system is linearized according to its Jacobian. This linearization process introduces uncertainties that can sometimes cause instability [10]. For the purposes of this paper, only linear systems will be considered.

The optimality of the KF comes at a price of stability and robustness. The KF assumes that the system model is known and linear, the system and measurement noises are white, and the states have initial conditions with known means and variances [9,11]. However, the previous assumptions do not always hold in real applications. If these assumptions are violated, the KF yields suboptimal results and can become unstable [12]. Furthermore, the KF is sensitive to computer precision and the complexity of computations involving matrix inversions [13]. In an effort to further increase stability, the KF has been combined with a variety of square root algorithms and methods, such as Cholesky decomposition, unit diagonal-factorization, and triangularization algorithms [14–17]. These methods are based on reformulating the KF equations by using numerically stable implementations to mathematically increase the arithmetic precision of the computation [13]. Increasing the arithmetic precision reduces the effects of round-off errors, which improves the overall numerical stability of the filter.

Other methods have been proposed to reduce the effects of modeling errors [18,19]. These techniques are based on increasing the a priori covariance matrix, which increases the gain value. This approach puts more emphasis on the measurements, as opposed to the model used by the filter [9].

The effects due to assuming Gaussian noise distributions may be minimized by implementing a Gaussian sum. This method is used to approximate the non-Gaussian probability density function (PDF) by a finite number of Gaussian PDFs [20]. This approach is computationally complex due to the number of filters used to approximate the overall estimate; however, it has been shown to work well.

A recent robust filtering strategy that is less susceptible to uncertainties and is computationally efficient is the variable structure filter (VSF) [21]. Variable structure system theory originated in the Soviet Union in the 1940s [22,23]. A special subcategory of it, referred to as sliding mode control (SMC), is commonly used in control applications as it provides enhanced robustness and stability. In a typical sliding mode controller, a discontinuous switching gain is used to maintain the states along some desired trajectory [23]. The discontinuous gain is determined based on the distance of the states from a switching hyperplane. The gain forces the states to converge onto the hyperplane and slide along it [24]. While on the hyperplane and under ideal conditions, the state trajectory becomes insensitive to disturbances and uncertainties. The discontinuous switching brings an inherent amount of stability to the control, while in practice it introduces chattering due to limitations and delays in switching. To remove chattering, a smoothing boundary layer is introduced along the sliding surface in order to interpolate and scale the discontinuous gain within the boundary region. This results in the discontinuous gain being applied outside the smoothing boundary layer, while inside it a continuous corrective action is applied. A number of sliding mode observers and filters have been proposed in the literature [25,26]. In 2002, an optimal sliding mode filter design was introduced; however, the derivation led to more of a robust control strategy than an estimator [27]. The estimation strategy discussed in this paper is significantly different.

The smooth variable structure filter (SVSF) is a relatively new estimation strategy based on sliding mode theory, and has been shown to be robust to modeling uncertainties. Similarly to SMC, the SVSF uses a discontinuous gain and a smoothing boundary layer ψ in its formulation. In this paper, an “optimal” smoothing boundary layer is derived for the SVSF with respect to the state error covariance matrix. Section 2 provides a brief overview of the SVSF, followed by the derivation of an optimal smoothing boundary layer equation. A linear electrohydrostatic actuator (EHA) estimation problem is then described, and the new form of the SVSF is compared with the KF and the standard SVSF in terms of estimation accuracy and robustness to uncertainties. The paper then concludes with a summary of the results.

## The Smooth Variable Structure Filter

A revised form of the VSF, referred to as the SVSF, was presented in 2007 [28]. The SVSF strategy is also a predictor-corrector estimator based on sliding mode concepts, and can be applied to both linear and nonlinear systems and measurements. As shown in Fig. 1, and similar to the VSF, it utilizes a switching gain to converge the estimates to within a boundary of the true state values (i.e., the existence subspace) [28]. The SVSF has been shown to be stable and robust to modeling uncertainties and noise, given an upper bound on the level of unmodeled dynamics and noise [21,28]. The SVSF name originates from the requirement that the system be differentiable (or "smooth") [28,29]. Furthermore, it is assumed that the system under consideration is observable [28].

Fig. 1
Consider the following process for the SVSF estimation strategy, as applied to a linear system with a linear measurement equation. Note that this formulation includes the state error covariance equations presented in Ref. [30], which were not part of the original standard SVSF form [28]. The predicted state estimates $\hat{x}_{k+1|k}$ are first calculated as follows:
$\hat{x}_{k+1|k} = A \hat{x}_{k|k} + B u_k$
(2.1)
Similar to the KF, the a priori state error covariance matrix $P_{k+1|k}$ may be found as follows:
$P_{k+1|k} = A P_{k|k} A^T + Q_k$
(2.2)
Utilizing the predicted state estimates $\hat{x}_{k+1|k}$, the corresponding predicted measurements $\hat{z}_{k+1|k}$ and measurement error vector $e_{z,k+1|k}$ may be calculated
$\hat{z}_{k+1|k} = C \hat{x}_{k+1|k}$
(2.3)
$e_{z,k+1|k} = z_{k+1} - \hat{z}_{k+1|k}$
(2.4)
Next, the SVSF gain is calculated as follows [6]:
$K_{k+1} = C^{+}\,\mathrm{diag}\!\left[\left(|e_{z,k+1|k}|_{Abs} + \gamma\,|e_{z,k|k}|_{Abs}\right) \circ \mathrm{sign}\!\left(\frac{e_{z,k+1|k}}{\psi_i}\right)\right]\left[\mathrm{diag}\!\left(e_{z,k+1|k}\right)\right]^{-1}$
(2.5)
The SVSF gain is a function of: the a priori and a posteriori measurement error vectors $e_{z,k+1|k}$ and $e_{z,k|k}$; the smoothing boundary layer widths $\psi_i$, where i refers to the ith width; the SVSF "memory" or convergence rate $\gamma$, with elements $0 < \gamma_{ii} \le 1$; and the linear measurement matrix C. However, for numerical stability, it is important to ensure that one does not divide by zero in Eq. (2.5). This can be accomplished using a simple if statement with a very small threshold (i.e., $1 \times 10^{-12}$). The SVSF gain is used to refine the state estimates as follows:
$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}\,e_{z,k+1|k}$
(2.6)
Following this, the a posteriori state error covariance matrix $P_{k+1|k+1}$ is calculated as follows [6]:
$P_{k+1|k+1} = \left(I - K_{k+1} C\right) P_{k+1|k} \left(I - K_{k+1} C\right)^T + K_{k+1} R_{k+1} K_{k+1}^T$
(2.7)
Next, the updated measurement estimates $\hat{z}_{k+1|k+1}$ and corresponding errors $e_{z,k+1|k+1}$ are calculated
$\hat{z}_{k+1|k+1} = C \hat{x}_{k+1|k+1}$
(2.8)
$e_{z,k+1|k+1} = z_{k+1} - \hat{z}_{k+1|k+1}$
(2.9)
The SVSF process may be summarized by Eqs. (2.1)–(2.9), and is repeated iteratively. According to Ref. [28], the estimation process is stable and converges to the existence subspace if the following condition is satisfied:
$|e_{k|k}|_{Abs} > |e_{k+1|k+1}|_{Abs}$
(2.10)

Note that $|e|_{Abs}$ is the elementwise absolute value of the vector e, and is equal to $|e|_{Abs} = e \cdot \mathrm{sign}(e)$. The proof, as described in Refs. [28,29], yields the derivation of the SVSF gain from the stability condition of Eq. (2.10). The SVSF results in the state estimates converging to within a region of the state trajectory, referred to as the existence subspace. Thereafter, it switches back and forth across the state trajectory, as shown earlier in Fig. 1. The existence subspace shown in Figs. 1–3 represents the amount of uncertainties present in the estimation process, in terms of modeling errors or the presence of noise. The width of the existence subspace $\beta$ is a function of the uncertain dynamics associated with the inaccuracy of the internal model of the filter as well as the measurement model, and varies with time [28]. Typically this value is not exactly known, but an upper bound may be selected based on a priori knowledge.

Once within the existence boundary subspace, the estimated states are forced (by the SVSF gain) to switch back and forth along the true state trajectory. As mentioned earlier, high-frequency switching caused by the SVSF gain is referred to as chattering, and in most cases, is undesirable for obtaining accurate estimates [28]. However, the effects of chattering may be minimized by the introduction of a smoothing boundary layer ψ. The selection of the smoothing boundary layer width reflects the level of uncertainties in the filter and the disturbances (i.e., system and measurement noise, and unmodeled dynamics). The effect of the smoothing boundary layer is shown in Figs. 2 and 3. When the smoothing boundary layer is defined larger than the existence subspace boundary, the estimated state trajectory is smoothed. However, when the smoothing term is too small, chattering remains due to the uncertainties being underestimated. Similar to the VSF strategy, the smoothing boundary layer ψ modifies the SVSF gain as follows [28]:
Fig. 2

Smoothed estimated trajectory ψ ≥ β [28]

Fig. 3

Presence of chattering effect ψ < β [28]
$K_{k+1} = C^{+}\,\mathrm{diag}\!\left[\left(|e_{z,k+1|k}|_{Abs} + \gamma\,|e_{z,k|k}|_{Abs}\right) \circ \mathrm{sat}\!\left(\frac{e_{z,k+1|k}}{\psi_i}\right)\right]\left[\mathrm{diag}\!\left(e_{z,k+1|k}\right)\right]^{-1}$
(2.11)

The SVSF gain is considerably less complex than its predecessor (VSF), which allows it to be implemented more easily (mathematically and conceptually). Furthermore, the SVSF estimation process is inherently robust and stable to modeling uncertainties due to the switching effect of the gain. This makes for a powerful estimation strategy, particularly when the system is not well known. Note that for systems that have fewer measurements than states, a “reduced order” approach is taken to formulate a full measurement matrix [28,31]. Essentially “artificial measurements” are created and used throughout the estimation process.
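
The standard SVSF cycle of Eqs. (2.1)–(2.9), using the smoothed gain of Eq. (2.11) and the divide-by-zero guard mentioned above, can be sketched as follows. This is an illustrative implementation with our own naming; ψ is treated as a vector of fixed boundary layer widths, as in the standard form:

```python
import numpy as np

def svsf_step(x, P, e_post_prev, z, u, A, B, C, Q, R, psi, gamma, eps=1e-12):
    """One standard SVSF iteration, Eqs. (2.1)-(2.9), with the smoothed
    gain of Eq. (2.11). psi holds the fixed boundary layer widths."""
    # Prediction, Eqs. (2.1)-(2.4)
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    e_prior = z - C @ x_prior
    # Divide-by-zero guard (the small-threshold check described in the text)
    e_safe = np.where(np.abs(e_prior) < eps, eps, e_prior)
    # Smoothed SVSF gain, Eq. (2.11): sat() interpolates inside the layer
    sat = np.clip(e_prior / psi, -1.0, 1.0)
    K = np.linalg.pinv(C) \
        @ np.diag((np.abs(e_prior) + gamma * np.abs(e_post_prev)) * sat) \
        @ np.diag(1.0 / e_safe)
    # Update, Eqs. (2.6)-(2.9)
    x_post = x_prior + K @ e_prior
    I = np.eye(len(x))
    P_post = (I - K @ C) @ P_prior @ (I - K @ C).T + K @ R @ K.T
    e_post = z - C @ x_post
    return x_post, P_post, e_post
```

The returned a posteriori error vector is fed back in as `e_post_prev` on the next iteration, which is how the γ "memory" term of Eq. (2.11) enters the gain.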

## Derivation of an Optimal Smoothing Boundary Layer

The partial derivative of the a posteriori covariance (trace) with respect to the smoothing boundary layer term ψ is the basis for obtaining a strategy for the specification of ψ. The approach taken is similar to determining an optimal gain for the KF. The following derivation is applicable to any measurement case provided that the measurement matrix is completely observable [32]. For the case when there are fewer measurements than states, one needs to implement a reduced order form of the SVSF as shown in Ref. [28]. This allows the creation of a full measurement matrix, typically in the form of an identity. For the case when there are more measurements than states, the system output can be multiplied by the inverse of the measurement matrix, thus mapping the measurements to the states. One could then use a full measurement matrix (i.e., identity) in the estimation process.

Previous forms of the SVSF included a vector form of ψ, which had a single smoothing boundary layer term for each corresponding measurement error [28]. Essentially, the boundary layer terms were independent of each other, such that each measurement error was only directly used for calculating its corresponding gain element. The coupling effects are not explicitly considered, thus preventing an optimal derivation. A "near-optimal" formulation of the SVSF could be created using a vector form of ψ; however, this would lead to a minimization of only the diagonal elements of the state error covariance matrix [32]. In this paper, in an effort to obtain a smoothing boundary layer equation that yields optimal state estimates for linear systems (like the KF), a full smoothing boundary layer matrix is proposed. Hence, consider the full matrix form of the smoothing boundary layer
$\psi = \begin{bmatrix} \psi_{11} & \psi_{12} & \dots & \psi_{1m} \\ \psi_{21} & \psi_{22} & \dots & \psi_{2m} \\ \vdots & \vdots & & \vdots \\ \psi_{m1} & \psi_{m2} & \dots & \psi_{mm} \end{bmatrix}$
(3.1)
Note that the off-diagonal terms of Eq. (3.1) are zero for the standard SVSF (presented in Sec. 2 and in Ref. [28]), whereas this is not the case for the algorithm presented here. This definition includes terms that relate one smoothing boundary layer to another (i.e., off-diagonal terms). To solve for a time-varying smoothing boundary layer (variable boundary layer (VBL)) based on Eq. (3.1), consider
$\frac{\partial\left(\mathrm{trace}\left[P_{k+1|k+1}\right]\right)}{\partial \psi} = 0$
(3.2)
To solve Eq. (3.2), first consider the following modification of the SVSF gain defined by Eq. (2.11), where the system is fully measured. Note that the gain structure remains the same
$K_{k+1} = C^{-1}\left\{\mathrm{diag}(E) \cdot \mathrm{sat}\!\left(\psi^{-1}\,\mathrm{diag}\!\left[e_{z,k+1|k}\right]\right)\right\}\left[\mathrm{diag}\!\left(e_{z,k+1|k}\right)\right]^{-1}$
(3.3)
where E is a “vector of errors,” defined as follows:
$E = \left(|e_{z,k+1|k}|_{Abs} + \gamma\,|e_{z,k|k}|_{Abs}\right)$
(3.4)
In an effort to avoid significant chattering or switching, consider only the region inside the saturation term of the SVSF gain, Eq. (3.5). Furthermore, as will be demonstrated later, this will improve the overall SVSF estimation accuracy. Also, consider the bar notation $\bar{a}$ to signify a diagonal matrix formed from the vector a, such that $\bar{a} = \mathrm{diag}(a)$
$\mathrm{sat}\!\left(\psi^{-1}\,\bar{e}_{z,k+1|k}\right) = \psi^{-1}\,\bar{e}_{z,k+1|k}$
(3.5)
Applying Eq. (3.5) to Eq. (3.3) yields
$K_{k+1} = C^{-1}\,\bar{E}\,\psi^{-1}\,\bar{e}_{z,k+1|k}\left(\bar{e}_{z,k+1|k}\right)^{-1}$
(3.6)
In an effort to help visualize (3.6), consider a system with two states and measurements (where C = I), such that Eq. (3.6) becomes
$K_{k+1} = \begin{bmatrix} E_{1} & 0 \\ 0 & E_{2} \end{bmatrix} \begin{bmatrix} \psi_{11} & \psi_{12} \\ \psi_{21} & \psi_{22} \end{bmatrix}^{-1} \begin{bmatrix} e_{z1} & 0 \\ 0 & e_{z2} \end{bmatrix} \begin{bmatrix} \dfrac{1}{e_{z1}} & 0 \\ 0 & \dfrac{1}{e_{z2}} \end{bmatrix} = \begin{bmatrix} E_{1} & 0 \\ 0 & E_{2} \end{bmatrix} \begin{bmatrix} \psi_{11} & \psi_{12} \\ \psi_{21} & \psi_{22} \end{bmatrix}^{-1}$
(3.7)
Note that the notation of Eq. (3.3) does not impact the gain formulation or the state update equation, since the error terms $e_{z,k+1|k}$ eventually cancel out. Simplifying Eq. (3.6), using Eq. (3.7) to visualize, yields the following definition for the SVSF gain:
$K_{k+1} = C^{-1}\,\bar{E}\,\psi^{-1}$
(3.8)
In evaluating Eq. (3.2), consider an expansion of the a posteriori covariance equation (2.7) as follows:
$P_{k+1|k+1} = P_{k+1|k} - K_{k+1} C P_{k+1|k} - P_{k+1|k} C^T K_{k+1}^T + K_{k+1} C P_{k+1|k} C^T K_{k+1}^T + K_{k+1} R_{k+1} K_{k+1}^T$
(3.9)
Note that the measurement covariance $R_{k+1}$ and the state error covariance $P_{k+1|k}$ are symmetric. Furthermore, recall the definition of the innovation (or measurement error) covariance matrix:
$S_{k+1} = C P_{k+1|k} C^T + R_{k+1}$
(3.10)
Equation (3.10) can be used to simplify Eq. (3.9) as follows:
$P_{k+1|k+1} = P_{k+1|k} - K_{k+1} C P_{k+1|k} - P_{k+1|k} C^T K_{k+1}^T + K_{k+1} S_{k+1} K_{k+1}^T$
(3.11)
Substitution of Eq. (3.8) into Eq. (3.11) yields
$P_{k+1|k+1} = P_{k+1|k} - C^{-1}\bar{E}\psi^{-1} C P_{k+1|k} - P_{k+1|k} C^T \left(C^{-1}\bar{E}\psi^{-1}\right)^T + C^{-1}\bar{E}\psi^{-1} S_{k+1} \left(C^{-1}\bar{E}\psi^{-1}\right)^T$
(3.12)
Next, to solve Eq. (3.2), i.e., $\partial\left(\mathrm{trace}\left[P_{k+1|k+1}\right]\right)/\partial\psi$, the individual terms of Eq. (3.12) are considered, respectively, as follows [33]:
$\frac{\partial\left(\mathrm{trace}\left[P_{k+1|k}\right]\right)}{\partial \psi} = 0$
(3.13)
$\frac{\partial\left(\mathrm{trace}\left[-C^{-1}\bar{E}\psi^{-1} C P_{k+1|k}\right]\right)}{\partial \psi} = \psi^{-T}\bar{E} C^{-T} P_{k+1|k} C^T \psi^{-T}$
(3.14)
$\frac{\partial\left(\mathrm{trace}\left[-P_{k+1|k} C^T \left(C^{-1}\bar{E}\psi^{-1}\right)^T\right]\right)}{\partial \psi} = \psi^{-T}\bar{E} C^{-T} P_{k+1|k} C^T \psi^{-T}$
(3.15)
$\frac{\partial\left(\mathrm{trace}\left[C^{-1}\bar{E}\psi^{-1} S_{k+1} \left(C^{-1}\bar{E}\psi^{-1}\right)^T\right]\right)}{\partial \psi} = -2\,\psi^{-T}\bar{E} C^{-T} C^{-1} \bar{E}\,\psi^{-1} S_{k+1}\,\psi^{-T}$
(3.16)
Combining Eqs. (3.13)–(3.16) into Eqs. (3.2) and (3.12) yields
$\frac{\partial\left(\mathrm{trace}\left[P_{k+1|k+1}\right]\right)}{\partial \psi} = 2\,\psi^{-T}\bar{E} C^{-T} P_{k+1|k} C^T \psi^{-T} - 2\,\psi^{-T}\bar{E} C^{-T} C^{-1} \bar{E}\,\psi^{-1} S_{k+1}\,\psi^{-T} = 0$
(3.17)
What remains is to simplify Eq. (3.17) and solve for the smoothing boundary layer ψ. First, multiply from the left by $\frac{1}{2}\left(\psi^{-T}\right)^{-1}$, and then from the right by $\left(\psi^{-T}\right)^{-1}$
$\bar{E} C^{-T} P_{k+1|k} C^T - \bar{E} C^{-T} C^{-1} \bar{E}\,\psi^{-1} S_{k+1} = 0$
(3.18)
Next, multiply Eq. (3.18) from the left by $\bar{E}^{-1}$
$C^{-T} P_{k+1|k} C^T - C^{-T} C^{-1} \bar{E}\,\psi^{-1} S_{k+1} = 0$
(3.19)
Simplify Eq. (3.19) further by multiplying from the left by $\bar{E}^{-1} C \left(C^{-T}\right)^{-1}$, which yields
$\bar{E}^{-1} C P_{k+1|k} C^T - \psi^{-1} S_{k+1} = 0$
(3.20)
Rearranging Eq. (3.20) yields a solution for the inverse of the smoothing boundary layer
$\psi^{-1} = \bar{E}^{-1} C P_{k+1|k} C^T S_{k+1}^{-1}$
(3.21)
Finally, a solution for the full smoothing boundary layer matrix may be found as follows:
$\psi_{k+1} = \left(\bar{E}^{-1} C P_{k+1|k} C^T S_{k+1}^{-1}\right)^{-1}$
(3.22)
Note that the square matrix (3.22) is invertible if $\bar{E}^{-1} C P_{k+1|k} C^T S_{k+1}^{-1}$ is nonsingular, that is, if its determinant is nonzero. Performing a dimensionality check verifies the correct dimensions
$\psi_{k+1} = \left((m\times m)(m\times n)(n\times n)(n\times m)(m\times m)\right)^{-1} = (m\times m)$
(3.23)

The proposed smoothing boundary layer equation (3.22) is found to be a function of the a priori state error covariance $P_{k+1|k}$, the innovation covariance $S_{k+1}$, the measurement matrix C, the a priori and previous a posteriori measurement error vectors ($e_{z,k+1|k}$ and $e_{z,k|k}$), and the convergence rate or SVSF "memory" $\gamma$. It appears that the width of the boundary layer is therefore directly related to the level of modeling uncertainties (by virtue of the errors), as well as the estimated system and measurement noise (captured by $P_{k+1|k}$ and $S_{k+1}$). The smoothing boundary layer widths can now be obtained according to Eq. (3.22) at each time step, in an optimal fashion, as opposed to the constant (conservative) width presented in Ref. [28]. As shown in the Appendix, the units and values of the smoothing boundary layer matrix have been studied.
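
The boundary layer computation itself is compact; the sketch below (our own naming) builds $S_{k+1}$, $E$, and $\psi_{k+1}$ from Eqs. (3.10), (3.4), and (3.22), with the divide-by-zero guard described in Sec. 2:

```python
import numpy as np

def vbl_psi(e_prior, e_post_prev, P_prior, C, R, gamma):
    """Time-varying smoothing boundary layer, Eq. (3.22)."""
    S = C @ P_prior @ C.T + R                          # innovation covariance, Eq. (3.10)
    E = np.abs(e_prior) + gamma * np.abs(e_post_prev)  # error vector, Eq. (3.4)
    E_bar_inv = np.diag(1.0 / np.maximum(E, 1e-12))    # guard against division by zero
    # Eq. (3.22): psi = (E_bar^{-1} C P_prior C^T S^{-1})^{-1}
    return np.linalg.inv(E_bar_inv @ C @ P_prior @ C.T @ np.linalg.inv(S))
```

Substituting this ψ back into the gain of Eq. (3.8) collapses it to $C^{-1}\bar{E}\,\bar{E}^{-1} C P_{k+1|k} C^T S_{k+1}^{-1} = P_{k+1|k} C^T S_{k+1}^{-1}$, which is the observation examined further in the next section and the Appendix.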

## A Robust Filtering Strategy for Linear Systems

### Description of the SVSF–VBL Strategy.

As per the previous results and as shown in the Appendix, it appears that the time-varying smoothing boundary layer (VBL) for the SVSF yields the KF solution (gain) for linear systems. In this case, robustness to modeling uncertainties using the SVSF strategy is lost. It is hence beneficial to propose a combined strategy, referred to here as the SVSF–VBL, such that an accurate estimate is maintained (i.e., using the VBL calculation or KF gain) while ensuring the estimate remains stable (i.e., using the standard SVSF gain). This strategy is implemented by imposing a saturation limit on the optimal smoothing boundary layer as follows. Outside the limit the robustness and stability of the SVSF is maintained, while inside the boundary layer the optimal gain is applied. Consider the following sets of figures to help describe the overall implementation of the SVSF–VBL strategy.

Figure 4 illustrates the case when a limit is imposed on the smoothing boundary layer width (a conservative value) and the time-varying (optimal) smoothing boundary layer per Eq. (3.22) falls within this limit. In the standard SVSF, the smoothing boundary layer width is made equal to the limit, such that the difference between the limit and the optimal variable boundary layer quantifies the loss in optimality. Essentially, in this case, the SVSF–VBL (or KF) gain should be used to obtain the best result. Another way to understand this process is to consider the SVSF–VBL as using a time-varying boundary layer with saturated limits to ensure stability.

Fig. 4

Figure 5 illustrates the case when the optimal time-varying smoothing boundary layer is larger than the limit imposed on the smoothing boundary layer. This typically occurs when there is modeling uncertainty (which leads to a loss in optimality) or when the limit on the smoothing boundary layer is underestimated. This strategy is useful for applications such as fault detection. Recall that the width of the smoothing boundary layer (3.22) is directly related to the level of modeling uncertainties (by virtue of the errors), as well as the estimated system and measurement noise (captured by $P_{k+1|k}$ and $S_{k+1}$). Therefore, the VBL creates another indicator of performance for the SVSF: the widths may be used to determine the presence of modeling uncertainties, as well as detect any changes in the system.

Fig. 5

To summarize the estimation strategy (SVSF–VBL) proposed in this section, consider Fig. 6. Essentially, in a well-defined case, the gain used to correct the estimate is the SVSF–VBL (or KF) gain. When the smoothing boundary layer calculated by Eq. (3.22) or Eq. (4.7) goes beyond the limits, the smoothing boundary layer width is saturated.

Fig. 6

### The Computational Process for the SVSF–VBL.

This section briefly summarizes the proposed SVSF–VBL strategy and equations. Consider the prediction stage for a linear system as described earlier, where the state estimates and covariance are first calculated as per Eqs. (4.1) and (4.2), respectively.
$\hat{x}_{k+1|k} = \hat{A}\hat{x}_{k|k} + \hat{B} u_k$
(4.1)
$P_{k+1|k} = A P_{k|k} A^T + Q_k$
(4.2)
The a priori measurement estimate (4.3) and errors (4.4) are then calculated
$\hat{z}_{k+1|k} = C \hat{x}_{k+1|k}$
(4.3)
$e_{z,k+1|k} = z_{k+1} - \hat{z}_{k+1|k}$
(4.4)
The update stage is then defined by the following sets of equations. The innovation covariance (4.5) and combined error vector (4.6) are calculated, and then used in Eq. (4.7) to determine the smoothing boundary layer matrix. Recall that a "divide by zero" check should be performed on Eq. (4.6) to avoid inversion of zero in Eq. (4.7). As described earlier, this can be accomplished using a simple if statement with a very small threshold (i.e., $1 \times 10^{-12}$).
$S_{k+1} = C P_{k+1|k} C^T + R_{k+1}$
(4.5)
$E_{k+1} = |e_{z,k+1|k}|_{Abs} + \gamma\,|e_{z,k|k}|_{Abs}$
(4.6)
$\psi_{k+1} = \left(\bar{E}_{k+1}^{-1} C P_{k+1|k} C^T S_{k+1}^{-1}\right)^{-1}$
(4.7)
The SVSF gain is then calculated (4.8), and then used to update the state estimates (4.9).
$K_{k+1} = C^{-1}\,\bar{E}_{k+1}\,\psi_{k+1}^{-1}$
(4.8)
$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}\,e_{z,k+1|k}$
(4.9)
Finally, the a posteriori state error covariance (4.10), updated measurement estimate (4.11), and a posteriori errors (4.12) are calculated.
$P_{k+1|k+1} = \left(I - K_{k+1} C\right) P_{k+1|k} \left(I - K_{k+1} C\right)^T + K_{k+1} R_{k+1} K_{k+1}^T$
(4.10)
$\hat{z}_{k+1|k+1} = C \hat{x}_{k+1|k+1}$
(4.11)
$e_{z,k+1|k+1} = z_{k+1} - \hat{z}_{k+1|k+1}$
(4.12)

Equations (4.1)–(4.12) summarize the proposed SVSF–VBL strategy.
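
The full cycle of Eqs. (4.1)–(4.12), together with the boundary layer saturation logic described above, can be sketched as follows. This is an illustrative implementation with our own naming; in particular, the within-limit test on the diagonal widths is our reading of the saturation step of Fig. 6:

```python
import numpy as np

def svsf_vbl_step(x, P, e_post_prev, z, u, A, B, C, Q, R, gamma, psi_limit,
                  eps=1e-12):
    """One SVSF-VBL iteration, Eqs. (4.1)-(4.12), with boundary layer
    saturation; psi_limit holds the conservative width limits."""
    # Prediction, Eqs. (4.1)-(4.4)
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    e_prior = z - C @ x_prior
    # Innovation covariance and combined error vector, Eqs. (4.5) and (4.6)
    S = C @ P_prior @ C.T + R
    E = np.maximum(np.abs(e_prior) + gamma * np.abs(e_post_prev), eps)
    # Variable boundary layer, Eq. (4.7)
    psi = np.linalg.inv(np.diag(1.0 / E) @ C @ P_prior @ C.T @ np.linalg.inv(S))
    if np.all(np.abs(np.diag(psi)) <= psi_limit):
        # Within the limits: optimal (KF-equivalent) gain, Eq. (4.8)
        K = np.linalg.inv(C) @ np.diag(E) @ np.linalg.inv(psi)
    else:
        # Beyond the limits: saturate, reverting to the robust standard
        # SVSF gain evaluated with the limit widths
        e_safe = np.where(np.abs(e_prior) < eps, eps, e_prior)
        sat = np.clip(e_prior / psi_limit, -1.0, 1.0)
        K = np.linalg.inv(C) @ np.diag(E * sat) @ np.diag(1.0 / e_safe)
    # Update, Eqs. (4.9)-(4.12)
    x_post = x_prior + K @ e_prior
    I = np.eye(len(x))
    P_post = (I - K @ C) @ P_prior @ (I - K @ C).T + K @ R @ K.T
    e_post = z - C @ x_post
    return x_post, P_post, e_post
```

When the computed widths stay inside `psi_limit`, the update reduces algebraically to the KF correction, consistent with the equivalence noted in this section.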

## Simulation Results

### Description of the Linear Estimation Problem.

In this section, the proposed algorithm is applied for state estimation on an EHA. This example uses computer simulations in order to allow a detailed investigation of the effects of parametric uncertainties. The EHA model is based on an actual prototype built for experimentation [28,34]. The purpose of this example is to demonstrate that the new SVSF–VBL estimation process is functional, and that the resulting estimation process is comparable to the KF for linear and known systems. Furthermore, the addition of modeling errors will demonstrate its robustness. For this computer experiment, the input to the system is a random signal with amplitude in the range of ±1 rad/s, superimposed onto a unit step occurring at 0.5 s [28].

The EHA has been modeled as a third-order linear system with state variables related to its position, velocity, and acceleration [28]. Initially, it is assumed that all three states have measurements associated with them (i.e., C = I ). The sample time of the system is T = 0.001 s, and the discrete-time state space system equation may be defined as follows [28]:
$x_{k+1} = \begin{bmatrix} 1 & 0.001 & 0 \\ 0 & 1 & 0.001 \\ -557.02 & -28.616 & 0.9418 \end{bmatrix} x_k + \begin{bmatrix} 0 \\ 0 \\ 557.02 \end{bmatrix} u_k$
(5.1)
For this case, the corresponding measurement equation is defined by
$z_{k+1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} x_{k+1}$
(5.2)
The initial state values are set to zero. The system and measurement noises (w and v) are considered to be Gaussian, with zero mean and covariances Q and R, respectively. The initial state error covariance $P_{0|0}$, system noise covariance Q, and measurement noise covariance R are defined, respectively, as follows:
$P_{0|0} = 10\,Q$
(5.3)
$Q = \begin{bmatrix} 1\times10^{-5} & 0 & 0 \\ 0 & 1\times10^{-3} & 0 \\ 0 & 0 & 1\times10^{-1} \end{bmatrix}$
(5.4)
$R = \begin{bmatrix} 1\times10^{-4} & 0 & 0 \\ 0 & 1\times10^{-2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$
(5.5)

For the standard SVSF estimation process, the memory or convergence rate was set to γ = 0.1, and the limits for the smoothing boundary layer widths (diagonal elements) were defined as $\psi = [0.05 \;\; 0.5 \;\; 5]^T$. These parameters were selected based on the distribution of the system and measurement noises. For example, the limit for the smoothing boundary layer width ψ was set to 5 times the maximum system noise, or approximately equal to the measurement noise. The initial state estimates for the filters were defined randomly by a normal distribution around the true initial state values $x_0$, using the initial state error covariance $P_{0|0}$. Two different cases were studied in this section. The first case was considered "normal," and the second included system modeling error half-way through the simulation.
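
The simulation setup of Eqs. (5.1)–(5.5) translates directly into code. In the sketch below, the random seed and variable names are our own choices; the matrices and parameters are those stated above:

```python
import numpy as np

# EHA model and filter parameters from Eqs. (5.1)-(5.5); T = 0.001 s
A = np.array([[1.0, 0.001, 0.0],
              [0.0, 1.0, 0.001],
              [-557.02, -28.616, 0.9418]])
B = np.array([[0.0], [0.0], [557.02]])
C = np.eye(3)                           # all three states measured, Eq. (5.2)
Q = np.diag([1e-5, 1e-3, 1e-1])         # system noise covariance, Eq. (5.4)
R = np.diag([1e-4, 1e-2, 1.0])          # measurement noise covariance, Eq. (5.5)
P0 = 10.0 * Q                           # initial state error covariance, Eq. (5.3)
gamma = 0.1                             # SVSF memory / convergence rate
psi_limit = np.array([0.05, 0.5, 5.0])  # boundary layer width limits

# Input: random +/-1 rad/s signal superimposed on a unit step at 0.5 s
rng = np.random.default_rng(0)          # seed is an arbitrary choice of ours
t = np.arange(0.0, 1.0, 0.001)
u = rng.uniform(-1.0, 1.0, t.size) + (t >= 0.5).astype(float)
```

A simulation loop would then propagate the true states through (A, B) with noise drawn from Q and R, and feed the noisy measurements to each filter under comparison.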

### Normal Case.

The main results of applying the KF, SVSF, and SVSF–VBL are shown in Fig. 7. This figure shows the true position of the EHA, with the corresponding filter estimates. The estimation results of all filters are practically the same (note that the lines nearly overlap and are thus difficult to distinguish). It is important to note that the KF provides the best (i.e., optimal) estimate for a linear and known system subject to Gaussian noise. Consequently, the SVSF–VBL yielded the same results, since the derived gain (4.8) is the same as the KF gain. Although the standard SVSF yielded good results, the estimates were not optimal. The velocity and acceleration estimates were relatively similar, and were thus omitted due to space constraints. As shown in Table 1, in the normal (standard) case, the KF and SVSF–VBL provide optimal results (in terms of estimation accuracy). The SVSF–VBL improved on the SVSF with a constant boundary layer width by roughly 40% (in the position estimate). This is a significant improvement in terms of estimation accuracy. However, note that after some tuning by trial-and-error, it may be possible to improve the SVSF results.

Table 1

RMSE computer experiment results (normal case)

| Filter | Position (m) | Velocity (m/s) | Acceleration (m/s²) |
| --- | --- | --- | --- |
| KF | 3.72 × 10⁻³ | 4.89 × 10⁻² | 0.87 |
| SVSF–VBL | 3.72 × 10⁻³ | 4.89 × 10⁻² | 0.87 |
| SVSF | 6.11 × 10⁻³ | 5.93 × 10⁻² | 1.21 |
Fig. 7

The root mean squared error (RMSE) results of running the computer experiment are shown in Table 1.
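
The RMSE values reported in Table 1 can be reproduced with a simple per-state helper (our own naming):

```python
import numpy as np

def rmse(estimates, truth):
    """Per-state root mean squared error over a simulation run.

    estimates and truth are arrays of shape (timesteps, n_states).
    """
    err = np.asarray(estimates) - np.asarray(truth)
    return np.sqrt(np.mean(err ** 2, axis=0))
```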

Figure 8 provides an illustration of the individual smoothing boundary layer widths (found within the ψ matrix), as they evolve with time. The standard SVSF results could be improved if the information contained along the diagonal of the smoothing boundary layer matrix were used to tune the standard SVSF boundary layer widths.

Fig. 8

In its current form, the SVSF–VBL is equivalent to the KF. However, as shown in the following example, some cases exist such that the KF no longer provides an optimal and reliable estimate.

### Modeling Uncertainties Case.

As per Ref. [28], consider the introduction of modeling error or uncertainty, such that the system used by the filters is modified (5.6) at 0.5 s. The model changes at this point to coincide with the input step, in order to exaggerate the effects of modeling uncertainty.
$x_{k+1} = \begin{bmatrix} 1 & 0.001 & 0 \\ 0 & 1 & 0.001 \\ -240 & -28 & 0.9418 \end{bmatrix} x_k + \begin{bmatrix} 0 \\ 0 \\ 557.02 \end{bmatrix} u_k$
(5.6)

The corresponding position estimates for this case are shown in Fig. 9.

Fig. 9

An interesting result occurs when studying the elements of the smoothing boundary layer matrix. As shown in Fig. 10, the smoothing boundary layer widths corresponding to the acceleration state grow larger at the inception of the modeling uncertainty (0.5 s). This is due to the fact that the width of the smoothing boundary layer is directly related to the level of modeling uncertainties (by virtue of the errors), as well as the estimated system and measurement noise (captured by $P_{k+1|k}$ and $S_{k+1}$), as described in Eq. (3.22). Furthermore, this can be seen by looking at the value of Eq. (4.6) at the onset of modeling uncertainties. The average value in E corresponding to the third state, $E_3$, increased by nearly 100 times, which in turn drastically increased the smoothing boundary layer width. The system modeling error leads to an incorrect a priori state covariance $P_{k+1|k}$, which propagates to the smoothing boundary layer calculation. The smoothing boundary layer matrix $\psi_{k+1}$ therefore provides an alternative method for fault detection, as demonstrated by the immediate changes at the inception of the system modeling uncertainties.
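
A minimal fault indicator based on this observation might simply compare the diagonal boundary layer widths against their limits at each time step (an illustrative sketch of ours, not a detector proposed in the paper):

```python
import numpy as np

def fault_indicator(psi, psi_limit):
    """Flag states whose smoothing boundary layer width has grown past its
    conservative limit, signaling possible modeling uncertainty or a fault."""
    return np.abs(np.diag(psi)) > np.asarray(psi_limit)
```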

Fig. 10

The smoothing boundary layers grow to accommodate the increased uncertainties at 0.5 s. The injection of uncertainties leads to a loss of optimality, as the basic assumption of a known model no longer applies. As shown in Fig. 9, at the inception of the modeling error (0.5 s), the KF failed to yield a reasonable estimate. However, the SVSF–VBL and the SVSF retained their robust stability, and their estimates remained bounded within a region of the true state trajectory. In terms of RMSE, the SVSF–VBL estimation strategy yielded the best results, as shown in Table 2.

Table 2

RMSE simulation results (uncertainties case)

| Filter | Position (m) | Velocity (m/s) | Acceleration (m/s²) |
| --- | --- | --- | --- |
| SVSF–VBL | 4.96 × 10⁻³ | 5.43 × 10⁻² | 0.98 |
| SVSF | 6.01 × 10⁻³ | 5.75 × 10⁻² | 1.12 |
| KF | 0.31 | 3.49 | 17.9 |

As shown in Table 2, the KF provides the worst result in terms of estimation accuracy, whereas the standard SVSF and the SVSF–VBL remain relatively stable (when compared with Table 1). These results have significant implications for the accurate control of a mechanical or electrical system.
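The RMSE figures in Tables 1 and 2 follow the usual per-state definition over the simulated trajectory. A minimal sketch (Python, with hypothetical trajectory arrays) is:

```python
import numpy as np

def rmse(x_true, x_est):
    """Per-state root-mean-square error over a trajectory of shape (N, n)."""
    x_true = np.asarray(x_true, dtype=float)
    x_est = np.asarray(x_est, dtype=float)
    return np.sqrt(np.mean((x_true - x_est) ** 2, axis=0))
```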

## Conclusions

This paper introduced the derivation of an optimal smoothing boundary layer width for the smooth variable structure filter. A new estimation strategy, referred to as the SVSF–VBL, was presented; it combines the optimality of the KF with the robustness of the SVSF. Prior to this work, a variable smoothing boundary layer did not exist for the SVSF. In the standard SVSF, the smoothing boundary layer widths were selected based on upper bounds of the uncertainties in the estimation process. This was a conservative choice, which resulted in a loss of optimality. In this paper, a variable smoothing boundary layer was derived in an optimal fashion by minimizing the state error covariance matrix with respect to the smoothing boundary layer term. The robustness and accuracy of the new form of the SVSF were demonstrated and compared with the KF on a linear EHA estimation problem. The SVSF–VBL was shown to perform exactly the same as the KF in the absence of modeling error. In the presence of system modeling uncertainties (or a fault), the SVSF–VBL outperformed both the KF and the standard SVSF, yielding accurate and stable estimates.

### NOMENCLATURE

- x = state vector or values
- z = measurement (system output) vector or values
- y = artificial measurement vector or values
- u = input to the system
- w = system noise vector
- v = measurement noise vector
- A = linear system transition matrix
- B = input gain matrix
- C = linear measurement (output) matrix
- E = combination of measurement error vectors
- K = filter gain matrix (i.e., KF or SVSF)
- P = state error covariance matrix
- Q = system noise covariance matrix
- R = measurement noise covariance matrix
- S = innovation covariance matrix
- e = measurement (output) error vector
- diag(a) or ā = defines a diagonal matrix of some vector a
- sat(a) = defines a saturation of the term a
- γ = SVSF "convergence" or memory parameter
- ψ = SVSF smoothing boundary layer width
- |a| = absolute value of some parameter a
- E{·} = expectation of some vector or value
- T = transpose of some vector or matrix
- ∧ = estimated vector or values
- k + 1|k = a priori time step (i.e., before applied gain)
- k + 1|k + 1 = a posteriori time step (i.e., after update)

### Appendix A: Closer Look at the Saturation Term

A closer examination of the SVSF gain Kk+1 defined by Eq. (3.3) reveals that the derivation of $ψ$ removes the need for the saturation term in the gain, as follows. Consider the saturation term of Eq. (3.3) with Eq. (3.21) as follows:
$\mathrm{sat}\left(\bar{\psi}^{-1}\,\mathrm{diag}[e_{z,k+1|k}]\right)=\mathrm{sat}\left(\bar{E}^{-1}CP_{k+1|k}C^{T}S_{k+1}^{-1}\,\mathrm{diag}[e_{z,k+1|k}]\right)$
(A1)
From Eq. (A1), consider the following two terms:
$\mathrm{term}_{1}=CP_{k+1|k}C^{T}S_{k+1}^{-1}$
(A2)
$\mathrm{term}_{2}=\bar{E}^{-1}\,\mathrm{diag}[e_{z,k+1|k}]$
(A3)
Analyzing the first term (A2) and recalling that $S_{k+1}=CP_{k+1|k}C^{T}+R_{k+1}$, consider the following:
$CP_{k+1|k}C^{T}S_{k+1}^{-1}=(S_{k+1}-R_{k+1})S_{k+1}^{-1}$
(A4)
From Eq. (3.10), it is known that $S_{k+1}\ge R_{k+1}$. Hence, Eq. (A2) is bounded between 0 and 1 as per Eq. (A4). Next, the second term defined by Eq. (A3) will be studied. Note the following definition:
$|e_{z,k+1|k}|_{Abs}+\gamma|e_{z,k|k}|_{Abs}\ge e_{z,k+1|k}$
(A5)
Due to the definition in Eq. (A5), the second term (A3) may only yield values equal to or between −1 and 1, depending on the value of the convergence rate γ. This can be confirmed by looking at the diagonal elements i of Eq. (A3) for any system:
$\left[\bar{E}^{-1}\,\overline{e_{z,k+1|k}}\right]_{i}=\frac{e_{z,k+1|k,i}}{|e_{z,k+1|k}|_{i}+\gamma_{i}|e_{z,k|k}|_{i}}$
(A6)

If the convergence rate γ is set to zero, Eq. (A6) simply yields the sign function of the measurement error (i.e., −1, 0, or 1). If the convergence rate γ is nonzero (but bounded between 0 and 1), Eq. (A6) yields a value strictly between −1 and 1. The same argument holds for Eq. (A3). Given the above discussion, when calculating a time-varying smoothing boundary layer using Eq. (3.22), the argument inside the saturation term will always lie between −1 and 1. Hence, the saturation function used in Eq. (A3) is redundant given the definition of ψ provided in Eq. (3.22). Note that this is consistent with the earlier assumption (3.5) that the region of interest for the smoothing boundary layer lies inside the saturation limits (i.e., between −1 and 1).
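The boundedness argument can be spot-checked numerically. The sketch below (Python, using a diagonal C and randomly drawn diagonal covariances — an illustrative setting, not a proof) evaluates the argument of the saturation term in Eq. (A1) and tracks its largest magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)

def sat_argument(P, C, R, e_prior, e_post, gamma):
    """Diagonal of the saturation argument in Eq. (A1):
    E_bar^{-1} C P C^T S^{-1} diag(e_{z,k+1|k})."""
    S = C @ P @ C.T + R
    E_bar = np.diag(np.abs(e_prior) + gamma * np.abs(e_post))
    M = np.linalg.inv(E_bar) @ C @ P @ C.T @ np.linalg.inv(S) @ np.diag(e_prior)
    return np.diag(M)

# Randomized spot-check: record the largest magnitude seen over many draws
max_abs = 0.0
for _ in range(1000):
    P = np.diag(rng.uniform(0.1, 5.0, 3))
    R = np.diag(rng.uniform(0.1, 5.0, 3))
    arg = sat_argument(P, np.eye(3), R, rng.normal(0, 3, 3), rng.normal(0, 3, 3), 0.5)
    max_abs = max(max_abs, np.abs(arg).max())
print(max_abs)
```

In this diagonal setting each element is the product of a factor in (0, 1) from term (A2) and a factor in [−1, 1] from term (A3), so the saturation never clips.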

### Appendix B: Studying the Revised SVSF Gain

In an effort to study the effects of the time-varying smoothing boundary layer term on the SVSF gain, consider the following. Substituting Eq. (3.22) into Eq. (3.8) yields the revised gain, based on the above derivation:
$K_{k+1}=C^{-1}\bar{E}\left[\bar{E}^{-1}CP_{k+1|k}C^{T}S_{k+1}^{-1}\right]$
(B1)
Note that Eq. (B1) easily simplifies to the following:
$K_{k+1}=P_{k+1|k}C^{T}S_{k+1}^{-1}$
(B2)

Therefore, based on a full smoothing boundary layer matrix defined by Eq. (3.22), the gain (3.8) becomes the KF gain (1.5), which yields the optimal solution for well-defined linear systems. This is to be expected as the KF yields the best possible estimate for linear, known systems with Gaussian noise. This implies that the robustness of the SVSF is lost with the use of an optimal smoothing boundary layer that would make the saturation function redundant.
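The simplification from Eq. (B1) to Eq. (B2) is an algebraic identity: the factor E̅ E̅⁻¹ = I cancels regardless of the boundary layer values, leaving the Kalman gain. A quick numerical check confirms this; the random matrices below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 3
C = rng.normal(size=(n, n))             # invertible measurement matrix (almost surely)
P = np.diag(rng.uniform(0.5, 2.0, n))   # a priori covariance (diagonal for simplicity)
R = np.diag(rng.uniform(0.5, 2.0, n))
S = C @ P @ C.T + R                     # innovation covariance
E_bar = np.diag(rng.uniform(0.1, 3.0, n))  # any invertible diagonal E_bar

# Revised SVSF gain, Eq. (B1), versus the Kalman gain, Eq. (B2)
K_b1 = np.linalg.inv(C) @ E_bar @ (np.linalg.inv(E_bar) @ C @ P @ C.T @ np.linalg.inv(S))
K_kf = P @ C.T @ np.linalg.inv(S)
```

The two gains agree to floating-point precision for any choice of E̅, which is the point of Appendix B.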

## References

1.
Nise
,
N.
, 2004,
Control Systems Engineering
, 4th ed.,
John Wiley and Sons, Inc.
,
New York
.
2.
Kalman
,
R. E.
, 1960, “
A New Approach to Linear Filtering and Prediction Problems
,”
ASME J. Basic Eng.
,
82
,
pp.
35
45
.10.1115/1.3662552
3.
Anderson
,
B. D. O.
, and
Moore
,
J. B.
, 1979,
Optimal Filtering
,
Prentice-Hall
,
Englewood Cliffs, NJ
.
4.
Ristic
,
B.
,
Arulampalam
,
S.
, and
Gordon
,
N.
, 2004,
Beyond the Kalman Filter: Particle Filters for Tracking Applications
,
Artech House
,
Boston
.
5.
Simon
Haykin
, 2001,
Kalman Filtering and Neural Networks
,
John Wiley and Sons, Inc.
,
New York
.
6.
,
S. A.
,
2011
, “
Smooth Variable Structure Filtering: Theory and Applications
,”
Ph.D. thesis, Department of Mechanical Engineering
,
McMaster University
,
Hamilton, Ontario
.
7.
,
J. A.
,
Al-Shabir
,
M.
, and
Habibi
,
S. R.
,
2011
, “
Estimation Strategies for the Condition Monitoring of a Battery System in a Hybrid Electric Vehicle
,”
ISRN Signal Processing
, 2011, p. 12035110.5402/2011/120351.
8.
Gelb
,
A.
, 1974,
Applied Optimal Estimation
,
MIT Press
,
Cambridge, MA
.
9.
Simon
,
D.
, 2006,
Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches
,
Wiley-Interscience
, New Jersey.
10.
Welch
,
G.
, and
Bishop
,
G.
, 2006, “
An Introduction to the Kalman Filter
,”
Department of Computer Science, University of North Carolina
, Chapel Hill, NC, Report.
11.
Bar-Shalom
,
Y.
,
Li
,
X.-R.
, and
Kirubarajan
,
T.
, 2001,
Estimation With Applications to Tracking and Navigation
,
John Wiley and Sons, Inc.
,
New York
.10.1002/0471221279
12.
Julier
,
S. J.
,
Ulhmann
,
J. K.
, and
Durrant-Whyte
,
H. F.
,
2000
, “
A New Method for Nonlinear Transformation of Means and Covariances in Filters and Estimators
,”
IEEE Trans. Autom. Control
,
45
,
pp.
472
482
.10.1109/9.847726
13.
Grewal
,
M. S.
, and
Andrews
,
A. P.
, 2008,
Kalman Filtering: Theory and Practice Using MATLAB
, 3rd ed.,
John Wiley and Sons, Inc.
,
New York
.
14.
Kaminski
,
P.
,
Bryson
,
A.
, and
Schmidt
,
S.
,
1971
, “
Discrete Square Root Filtering: A Survey of Current Techniques
,”
IEEE Trans. Autom. Control
,
16
,
pp.
727
786
.10.1109/TAC.1971.1099816
15.
Hammarling
,
S.
,
1977
, “
A Survey of Numerical Aspects of Plane Rotations
,”
Middlesex Polytechnic, Report, Maths
(1). pp.
1
35
.
16.
Wang
,
H.
, and
Gregory
,
R.
,
1964
, “
On the Reduction of an Arbitrary Real Square Matrix to Tridiagonal Form
,”
Math. Comput.
,
18
(
87
),
pp.
501
505
.10.1090/S0025-5718-1964-0165670-0
17.
Chandrasekar
,
J.
,
Kim
,
I. S.
,
Bernstein
,
D. S.
, and
Ridley
,
A. J.
, 2008, “
Cholesky-Based Reduced-Rank Square-Root Kalman Filtering
,” Proceedings of American Control Conference (
ACC
),
pp.
3987
3992
.10.1109/ACC.2008.4587116
18.
Xie
,
L.
,
Soh
,
C.
, and
Souza
,
C. E.
,
1994
, “
Robust Kalman Filtering for Uncertain Discrete-Time Systems
,”
IEEE Trans. Autom Control
,
39
(
6
),
pp.
1310
1314
.10.1109/9.293203
19.
Zhu
,
X.
,
Soh
,
Y. C.
, and
Xie
,
L.
,
2002
, “
Design and Analysis of Discrete-Time Robust Kalman Filters
,”
Automatica
,
38
(
6
),
pp.
1069
1077
.10.1016/S0005-1098(01)00298-9
20.
Berndt
,
B.
,
Evans
,
R.
, and
Williams
,
K.
, 1998,
Gauss and Jacobi Sums
,
John Wiley & Sons, Inc.
,
New York
.
21.
Habibi
,
S. R.
, and
Burton
,
R.
,
2003
, “
The Variable Structure Filter
,”
ASME J. Dyn. Sys., Meas., Control
,
125
,
pp.
287
293
.10.1115/1.1590682
22.
Utkin
,
V. I.
,
1977
, “
Variable Structure Systems With Sliding Mode: A Survey
,”
IEEE Trans. Autom. Control
,
22
,
pp.
212
222
.10.1109/TAC.1977.1101446
23.
Utkin
,
V. I.
,
1978
,
Sliding Mode and Their Application in Variable Structure Systems
, English Translation ed.,
Mir Publication
,
Moscow, U.S.S.R
.
24.
Slotine
,
J. J.
, and
Li
,
W.
, 1991,
Applied Nonlinear Control
,
Prentice-Hall
,
Englewood Cliffs, NJ
.
25.
Basin
,
M. V.
,
Ferreira
,
A.
, and
Fridman
,
L.
,
2007
, “
Sliding Mode Identification and Control for Linear Uncertain Stochastic Systems
,”
Int. J. Syst. Sci.
,
pp.
861
869
.10.1080/00207720701409363
26.
Spurgeon
,
S. K.
,
2008
, “
Sliding Mode Observers: A Survey
,”
Int. J. Syst. Sci.
,
pp.
751
764
.10.1080/00207720701847638
27.
Basin
,
M. V.
,
Fridman
,
L.
, and
Skliar
,
M.
,
2002
, “
Optimal and Robust Integral Sliding Mode Filter Design for Systems With Continuous and Delayed Measurements
,” Proceedings of the 41st
IEEE
Conference on Decision and Control,
Las Vegas, NV
,
pp.
2594
2599
.10.1109/CDC.2002.1184229
28.
Habibi
,
S. R.
,
2007
, “
The Smooth Variable Structure Filter
,”
Proc. IEEE
,
95
(
5
),
pp.
1026
1059
.10.1109/JPROC.2007.893255
29.
Al-Shabi
,
M.
,
2011
, “
The General Toeplitz/Observability SVSF
,”
Ph.D. thesis
,
Department of Mechanical Engineering, McMaster University
,
Hamilton, Ontario
.
30.
,
S. A.
, and
Habibi
,
S. R.
,
2010
, “
A New Form of the Smooth Variable Structure Filter With a Covariance Derivation
,”
IEEE
Conference on Decision and Control,
Atlanta, GA
.10.1109/CDC.2010.5717397
31.
Luenberger
,
D. G.
, 1979,
Introduction to Dynamic Systems
,
John Wiley
,
New York.
32.
,
S. A.
,
El Sayed
,
M.
, and
Habibi
,
S. R.
,
2011
, “
Derivation of an Optimal Boundary Layer Width for the Smooth Variable Structure Filter
,”
American Control Conference (ACC)
,
San Francisco, CA
.
33.
Petersen
,
K. B.
, and
Pedersen
,
M. S.
, 2008,
The Matrix Cookbook
,
Technical University of Denmark
,
Copenhagen, Denmark.
34.
Habibi
,
S. R.
, and
Burton
,
R.
,
2007
, “
Parameter Identification for a High Performance Hydrostatic Actuation System Using the Variable Structure Filter Concept
,”
ASME J. Dyn. Sys., Meas., Control
,
129
,
pp.
229–235.10.1115/1.2431816