A Stable Approach to Numerical Differentiation by the Local Regularization Method with Regularization Parameter Selection Strategies

The local regularization method for solving the first-order numerical differentiation problem is considered in this paper. A-priori and a-posteriori selection strategies for the regularization parameter are introduced, and the convergence rate of the local regularization solution is given under some assumptions on the exact derivative. Numerical comparison experiments show that the local regularization method can capture sharp variations and oscillations of the exact derivative while effectively suppressing the noise in the given data.


Introduction
Numerical differentiation, which aims to approximately compute the derivative of a function from its measured data, has extensive applications in scientific studies and engineering practice. For example, the differential operator method is the most common approach to image edge detection [1]. In general, image edges can be detected by finding the maxima of the first-order derivatives or the zero-crossings of the second-order derivatives of the image intensity [2].
Numerical differentiation is a classical ill-posed problem, and its main difficulty is the instability of numerical derivatives: small perturbations of the data may cause large errors in the computed derivative. Many stable methods have been developed for solving this problem. Generally speaking, they can be categorized into the finite difference method [3], the regularization method [4], the mollification method [5], the Lanczos integral method [6,7] and so on.
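As a quick illustration of this instability (with a hypothetical test function and noise level, not taken from the paper), the following Python sketch applies central differences to noisy samples of $\sin x$: the total error behaves roughly like $h^{2}/6+\delta/h$, so refining the grid beyond a certain point amplifies the noise instead of improving accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1e-3  # assumed noise level, for illustration only

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = np.arange(0.0, 1.0 + h, h)
    f_delta = np.sin(x) + delta * rng.standard_normal(x.size)  # noisy samples
    du = (f_delta[2:] - f_delta[:-2]) / (2 * h)                # central differences
    err = np.max(np.abs(du - np.cos(x[1:-1])))
    print(f"h = {h:.0e}, max error = {err:.3e}")  # grows like delta/h as h -> 0
```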
Assume that $f(x)$ is the exact data on $[0,1]$ and that the measured data $f^{\delta}(x)$ satisfies $\|f^{\delta}-f\|_{L^{2}[0,1]}\le\delta$ with noise level $\delta>0$. Computing the derivative $u=f'$ is equivalent to solving the Volterra integral equation of the first kind
$$\int_{0}^{x}u(s)\,\mathrm{d}s=f(x),\qquad x\in[0,1].\tag{1.1}$$
As we know, the solution of Tikhonov regularization is too smooth, while the solution of Lavrentiev regularization is too sensitive to the noise. In order to ensure the calculation accuracy while suppressing the noise, some eclectic method should be introduced.
The local regularization method can be used to solve Volterra integral equations and Fredholm integral equations of the first kind [9][10][11][12]. For equation (1.1), the local regularization method adopts the information of $f^{\delta}(x)$ on a small future interval $[x,x+r]$ when computing $u(x)$, which avoids the overuse of $f^{\delta}(x)$ in Tikhonov regularization and the underuse of $f^{\delta}(x)$ in Lavrentiev regularization. Therefore, the local regularization method will be adopted to solve the Volterra equation (1.1). In this paper, we give a-priori and a-posteriori parameter selection strategies in the $L^{2}[0,1]$ space together with the convergence rate of the local regularization solution, where the a-priori assumption is that $u$ belongs to a Sobolev space. The a-posteriori parameter selection strategy we use is an extension of the generalized discrepancy principle for the Lavrentiev regularization method given in [13][14][15].
The paper is organized as follows. In Section 2, the local regularization method for solving the numerical differentiation problem is given. The a-priori and a-posteriori selection strategies for the regularization parameter, together with the convergence rate of the local regularization solution, are given in Section 3. Finally, numerical comparison experiments are presented in Section 4.

Local Regularization for Numerical Differentiation
For $0<r\le R$ and $x\in[0,1]$, the exact relation $f(x+r)=\int_{0}^{x+r}u(s)\,\mathrm{d}s$ can be split as
$$\int_{0}^{x}u(s)\,\mathrm{d}s+\int_{0}^{r}u(x+s)\,\mathrm{d}s=f(x+r).\tag{2.2}$$
Notice that equation (2.2) is still an equation that the exact solution $u=f'$ satisfies exactly. In the text that follows, we assume that $0<r\le R$ is small enough. In order to solve (1.1) stably and obtain a desirable approximate derivative, we replace $u(x+s)$ in the second term of (2.2) by $u(x)$, and thus obtain the local regularization equation
$$r\,u_{r}^{\delta}(x)+\int_{0}^{x}u_{r}^{\delta}(s)\,\mathrm{d}s=f^{\delta}(x+r),\tag{2.3}$$
whose solution $u_{r}^{\delta}$ serves as the regularized approximation of $f'$, with the interval length $r$ playing the role of the regularization parameter.
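To make the scheme concrete, here is a minimal numerical sketch of (2.3). The function name, grid, and test function are illustrative assumptions, not the paper's exact scheme: the integral is discretized by a left rectangle rule on a uniform grid, which makes the discrete system lower triangular so that $u_{r}^{\delta}$ can be computed by forward substitution. Note that recovering $u$ on all of $[0,1]$ would require data on $[0,1+r]$; the sketch simply restricts the reconstruction to $[0,1-r]$.

```python
import numpy as np

def local_reg_derivative(f_delta, h, r):
    """Approximate u = f' from noisy samples f_delta on a uniform grid of
    spacing h by solving the local regularization equation (2.3):
        r*u(x) + int_0^x u(s) ds = f_delta(x + r).
    A left rectangle rule makes the discrete system lower triangular, so it
    is solved by forward substitution. One possible discretization sketch.
    """
    m = int(round(r / h))          # future-interval length in grid steps
    n = f_delta.size - m           # u is recovered on x_0, ..., x_{n-1}
    u = np.zeros(n)
    integral = 0.0                 # running value of h * sum_{j<i} u_j
    for i in range(n):
        u[i] = (f_delta[i + m] - integral) / r
        integral += h * u[i]
    return u

# usage sketch with an assumed test function f(x) = sin(2*pi*x)
rng = np.random.default_rng(1)
h, r, delta = 1e-3, 2e-2, 1e-3
x = np.arange(0.0, 1.0 + h, h)
f_delta = np.sin(2 * np.pi * x) + delta * rng.standard_normal(x.size)
u = local_reg_derivative(f_delta, h, r)
exact = 2 * np.pi * np.cos(2 * np.pi * x[:u.size])
print("relative L2 error:", np.linalg.norm(u - exact) / np.linalg.norm(exact))
```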

The Selection Strategies of the Regularization Parameter
In the following, we first introduce the a-priori selection strategy of the regularization parameter $r$ and the convergence rate of the regularization solution $u_{r}^{\delta}$. For simplicity of notation, the norm $\|\cdot\|$ without a subscript denotes the norm of $L^{2}[0,1]$. The a-priori selection strategy of the regularization parameter given in Theorem 1 relies on the a-priori assumption on the exact solution $u=f'$, which is usually unknown in many practical problems. Compared with the a-priori selection strategy, the a-posteriori selection strategy generally relies only on the given data and its noise level, and is therefore more useful. There have been many a-posteriori choice strategies for the regularization parameter, such as the discrepancy principle and its generalizations [13,16], the generalized cross-validation method [17], the L-curve criterion [18] and Arcangeli's method [19]. Next, we extend the generalized discrepancy principle for the Lavrentiev regularization method [13][14][15] to the local regularization method.
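The precise form of the extended discrepancy equation and the corresponding theorem are in the original paper; as an illustration only, the following sketch selects $r$ in the spirit of the classical discrepancy principle: starting from a small value, $r$ is increased on a geometric grid until the residual $\|\int_{0}^{x}u_{r}^{\delta}(s)\,\mathrm{d}s-f^{\delta}(x)\|$ first reaches $\tau\delta$ for some $\tau>1$. The solver is the same discretization as in the Section 2 sketch, inlined to keep the example self-contained; all names and default values are assumptions.

```python
import numpy as np

def solve_local(f_delta, h, r):
    """Forward-substitution solver for the discretized equation (2.3)
    (left rectangle rule; same sketch as in Section 2)."""
    m = int(round(r / h))
    n = f_delta.size - m
    u = np.zeros(n)
    integral = 0.0
    for i in range(n):
        u[i] = (f_delta[i + m] - integral) / r
        integral += h * u[i]
    return u

def residual_norm(f_delta, h, r):
    """Discrete L2 norm of int_0^x u_r(s) ds - f_delta(x) on [0, 1-r]."""
    u = solve_local(f_delta, h, r)
    antider = h * np.cumsum(u) - h * u   # left rectangle rule for int_0^{x_i} u
    return np.sqrt(h) * np.linalg.norm(antider - f_delta[:u.size])

def choose_r(f_delta, h, delta, tau=1.1, r_max=0.2):
    """Illustrative a-posteriori rule: smallest r on a geometric grid whose
    residual reaches tau*delta (a stand-in for the paper's exact equation)."""
    r = 2 * h
    while r <= r_max and residual_norm(f_delta, h, r) < tau * delta:
        r *= 1.5
    return min(r, r_max)
```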

Numerical Experiments
As shown in Figure 3, the result of the a-posteriori selection strategy is acceptable, although it is not optimal.
Example 2. Consider a family of functions $f(x)$ indexed by an integer $k$. The noisy data $f^{\delta}(x)$ is generated in the same way as in Example 1.
In this example, the integer $k$ reflects the oscillation of $u(x)$: the bigger the value of $k$, the stronger the oscillation of $u(x)$. In Table 1, the relative errors of $u_{\mathrm{Tik}}$, $u_{\mathrm{Lav}}$ and $u_{\mathrm{Loc}}$ are shown for different $k$ when the relative error of the noisy data is $5\%$. From Table 1 we can see the superiority of the local regularization method over the other two methods, and also the stability of $u_{\mathrm{Loc}}$ with respect to different oscillations of $f^{\delta}(x)$.
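For readers who want to reproduce the flavor of Table 1, here is a self-contained sketch of such a comparison. The test function, noise model, and parameter values are assumptions for illustration (the paper's exact settings are not reproduced): $A$ is a rectangle-rule discretization of the integration operator, Tikhonov solves $(\alpha I+A^{T}A)u=A^{T}f^{\delta}$, Lavrentiev solves $(\alpha I+A)u=f^{\delta}$, and the local method is the forward-substitution scheme from Section 2.

```python
import numpy as np

rng = np.random.default_rng(2)
h = 1e-2
x = np.arange(0.0, 1.0 + h, h)
n = x.size
k = 4                                            # oscillation parameter (assumed role)
f = np.sin(2 * np.pi * k * x)                    # assumed test data
u_exact = 2 * np.pi * k * np.cos(2 * np.pi * k * x)
noise = rng.standard_normal(n)
f_delta = f + 0.05 * np.linalg.norm(f) * noise / np.linalg.norm(noise)  # 5% relative noise

A = h * np.tril(np.ones((n, n)))                 # rectangle-rule discretization of int_0^x
alpha = 1e-3                                     # regularization parameter (assumed, untuned)

u_tik = np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ f_delta)
u_lav = np.linalg.solve(alpha * np.eye(n) + A, f_delta)

r = 0.05                                         # local regularization parameter (assumed)
m = int(round(r / h))
u_loc = np.zeros(n - m)
integral = 0.0
for i in range(n - m):
    u_loc[i] = (f_delta[i + m] - integral) / r
    integral += h * u_loc[i]

def rel_err(u_num, u_ref):
    return np.linalg.norm(u_num - u_ref) / np.linalg.norm(u_ref)

print("Tikhonov  :", rel_err(u_tik, u_exact))
print("Lavrentiev:", rel_err(u_lav, u_exact))
print("Local     :", rel_err(u_loc, u_exact[:n - m]))
```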