PLoS ONE
Poisson noisy image restoration via overlapping group sparse and nonconvex second-order total variation priors

Competing Interests: The authors have declared that no competing interests exist.

Article Type: Research Article
Abstract

The restoration of Poisson noisy images is an essential task in many imaging applications, owing to the uncertainty in the number of discrete particles incident on the image sensor. In this paper, we consider a hybrid regularizer for Poisson noisy image restoration. The proposed regularizer, which combines the overlapping group sparse (OGS) total variation with the high-order nonconvex total variation, can alleviate staircase artifacts while preserving the original sharp edges. We design an efficient minimization algorithm for the proposed model within the framework of the alternating direction method of multipliers. Since the objective function is the sum of a non-quadratic log-likelihood and a nonconvex nondifferentiable regularizer, we propose to solve the two intractable subproblems by the majorization-minimization (MM) method and the iteratively reweighted least squares (IRLS) algorithm, respectively. Numerical experiments show the efficiency of the proposed method for Poissonian image restoration, including denoising and deblurring.

Jon, Liu, Lv, Zhu, and Zeng: Poisson noisy image restoration via overlapping group sparse and nonconvex second-order total variation priors

1 Introduction

In many real applications, the measurement of light during image recording inevitably introduces uncertainty in the number of particles striking the image sensor. In other words, the finite number of electrons or photons carrying energy in an image sensor causes statistical fluctuations, which are usually modeled by the Poisson distribution [1–3]. As another degradation factor, image formation may involve undesirable blurring such as motion or out-of-focus blur. By rewriting the images under consideration into column-major vectorized form, we can regard the observed image g ∈ R^{n²} as a realization of a Poisson random vector with expected value Hf + b, where H is an n² × n² convolution matrix corresponding to the point spread function (PSF) that models the blur effect, f ∈ R^{n²} is the original image, and b ∈ R^{n²} is a nonnegative constant background [4–6].
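To make the degradation model concrete, the following sketch simulates g as a Poisson realization with mean Hf + b. This is hypothetical NumPy code, not from the paper: the function name `degrade` and the FFT-based periodic convolution are our own assumptions, chosen to match the periodic boundary conditions assumed later in the paper.

```python
import numpy as np

def degrade(f, psf, b=0.0, seed=None):
    """Simulate g ~ Poisson(Hf + b), with H a periodic convolution (a sketch)."""
    rng = np.random.default_rng(seed)
    pad = np.zeros_like(f, dtype=float)
    k = psf.shape[0]
    pad[:k, :k] = psf
    # center the PSF so that the blur introduces no spatial shift
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    # periodic convolution Hf computed in the Fourier domain
    Hf = np.real(np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(f)))
    return rng.poisson(np.maximum(Hf + b, 0.0)).astype(float)
```

Note that, unlike additive Gaussian noise, the noise here is signal-dependent: brighter regions of Hf + b receive larger fluctuations.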

Poissonian image restoration calls for applying an inverse procedure to approximate f from an observation g degraded by blur and Poisson noise. A general option to tackle this problem is to use the maximum a posteriori (MAP) estimation from a Bayesian perspective. Taking the Poisson statistics of the noise into account, we can write the conditional distribution of the observed data g as

P(g | f) = ∏_{i,j} ( (Hf + b)_{i,j}^{g_{i,j}} e^{−(Hf + b)_{i,j}} ) / g_{i,j}!,
where (⋅)_{i,j} indicates the vector element corresponding to position (i, j) hereafter. On the premise that we adopt a Gibbs prior [7–9] of the form

P(f) ∝ exp(−ϕ(f))

for f, the use of Bayes' rule and Stirling's approximation leads to the following minimization problem [10, 11]:

min_{f ≥ 0} λ D_KL(Hf||g) + ϕ(f),    (3)

where λ > 0 is a regularization parameter, ϕ(f) is a nonnegative given function (often called the regularizer), and D_KL(Hf||g) denotes the generalized Kullback-Leibler (KL) divergence of Hf from g as shown below:

D_KL(Hf||g) = Σ_{i,j} [ (Hf)_{i,j} − g_{i,j} + g_{i,j} ln( g_{i,j} / (Hf)_{i,j} ) ].
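As a quick numerical check, the generalized KL data term can be evaluated directly. The following is a minimal NumPy sketch; the helper name `kl_div` is ours, and the small `eps` guarding the logarithm at zero is our addition.

```python
import numpy as np

def kl_div(Hf, g, eps=1e-12):
    """Generalized KL divergence: sum over pixels of (Hf) - g + g*log(g/Hf)."""
    Hf = np.asarray(Hf, dtype=float)
    g = np.asarray(g, dtype=float)
    return float(np.sum(Hf - g + g * np.log((g + eps) / (Hf + eps))))
```

The divergence is nonnegative and vanishes exactly when Hf = g, which is why it serves as a data fidelity term for Poisson statistics.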
Since the efficiency of a restoration model hinges on the choice of image regularizer, numerous authors have proposed different regularization methods. Tikhonov regularization, of the form ϕ(f) = ‖Γf‖₂² (usually, Γ is an identity matrix or a difference matrix), is probably one of the most classical methods. In the literature, several methods have been proposed to efficiently solve Eq (3) with the Tikhonov regularizer, including the scaled gradient projection method [12], the split-gradient method [13], the projected Newton-conjugate gradient (PNCG) method [2], the quasi-Newton projection method [14], the hybrid gradient projection-reduced Newton method [15, 16] and the scaled gradient projection-type method [17]. Although the KL divergence and the Tikhonov regularizer are well-known to be convex, so that a unique solution to Eq (3) is guaranteed at inexpensive computational cost, this regularizer tends to over-smooth important details in the restored images.

Another classical method, the total variation (TV) regularizer ϕ(f) = ‖fTV (see Section 2), has been widely accepted since the noisy signals have larger TV than the original signal [18]. Bardsley et al. [19] proved that minimizing the Poisson likelihood function in conjunction with TV regularization is well-posed, and they also verified its convergence by a nonnegatively constrained and projected quasi-Newton minimization algorithm. Landi et al. [20] also adopted the TV regularizer for denoising the medical images collected by the photon-counting devices and extended the PNCG method introduced in [2]. Tao et al. [21] appended a non-negativity constraint and used the half-quadratic splitting method to solve the related minimization problem. In addition to the methods mentioned above, other authors employed the effective optimization techniques for solving the TV regularized Poisson deblurring model, including the alternating extra-gradient method [22], the primal-dual method [23, 24] and the alternating direction method of multipliers (ADMM) [25, 26]. It is well-known that the TV regularization methods could preserve fine details such as sharp edges, but they often exhibit false jump discontinuities causing spurious staircase artifacts in smooth regions of the restored images. This may be due to the fact that the TV regularization tends to transform the smooth regions of the solution into piecewise constant ones while solving the minimization problem.

One possible remedy for the staircase effect is to introduce a high-order TV, which penalizes jumps more strongly. Zhou et al. [27] selected the second-order TV ϕ(f) = ‖f‖_HTV (see Section 2) as a regularizer and solved the resulting problem with an alternating minimization scheme. Their numerical experiments indicated the advantage of the second-order TV in avoiding staircase artifacts, compared with the classical TV regularization [28]. However, the high-order TV usually over-smooths the restored signal [29], so hybrid regularizations combining the high-order TV with other regularizers have also been considered. Among the hybrid models, Zhang et al. [30] obtained better results than the model with a strongly convex term ‖f‖₂² proposed in [31], by replacing the first-order TV with a second-order TV. Besides, Jiang et al. [11] combined the first-order and second-order TV priors to restore images contaminated by Poisson noise and solved the minimization model by the ADMM. To further improve the restoration results, Liu et al. [5] studied a spatially adapted regularization parameter updating scheme. As an adaptive balancing scheme between the first and second derivatives, Wang et al. [32] established a Poisson noise removal framework based on an iteratively reweighted total generalized variation (TGV) model and the EM algorithm for the denoising case. Zhang et al. [33] combined the fractional-order TV with non-local TV to alleviate the staircase artifacts in the cartoon component while preserving the details of the texture component. Ma et al. [34] proposed a hybrid regularizer containing a patch-based sparsity-promoting prior over a learned dictionary and a pixel-based total variation prior, but it requires additional strategies to reduce the high computational cost.

Considering that the tight framelet framework can facilitate sparse representations of images [35], Shi et al. [36] combined framelet regularization with non-local TV and a non-negativity constraint in the non-blind stage of blind Poisson deblurring. Zhang et al. [37] proposed a nonconvex and discontinuous model with the ℓ0 norm of the tight wavelet representation as a regularizer. Fang and Yan [38] combined the ℓ1 norm of framelet coefficients with TV for Poissonian image deconvolution to reduce the artifacts yielded by using TV regularization alone. In addition to the transform-domain sparsity mentioned above, overlapping group sparsity, which describes the natural tendency of large values to arise near other large values rather than in isolation, has attracted wide attention in image restoration due to its remarkable ability to exploit the structural information of the natural image gradient [4, 39–41]. In particular, for Poisson noisy restoration, Lv et al. [4] focused on regularization by the total variation with overlapping group sparsity (TVOGS) and showed that the solution obtained within the ADMM framework is superior to the first-order TV regularized methods [25, 42] and the high-order regularized methods [27].

To sum up, the high-order TV and the overlapping group sparse prior are the most feasible options for alleviating the staircase artifacts. Nonetheless, due to the over-penalizing tendency of the ℓ1 norm [43], the high-order TV often causes over-smoothing at the edges, while the overlapping group sparsity tends to smooth out blockiness in the restored images more globally. Motivated by the work of Chen et al. [44], Adam and Paramesran proposed a hybrid regularization model that combines the overlapping group sparse total variation with the high-order nonconvex TV to denoise images corrupted by Gaussian noise [40]; recently, they extended it to non-blind deblurring under Gaussian noise [45]. In this paper, we present an extension of Adam's regularizer to the problem of restoring Poisson noisy images. This regularizer takes advantage of both the high-order nonconvex regularizer and the overlapping group sparsity regularizer, i.e., it can simultaneously promote pixel-level and structured sparsity of the natural image in the gradient domain. Therefore, we expect that it can not only effectively reduce staircase artifacts but also preserve sharp edges in the restored image. However, its optimization is more challenging than Gaussian deblurring due to the ill-conditioned non-quadratic data fidelity term. We employ the alternating direction method of multipliers to solve the derived minimization problem. In particular, during the ADMM iterations, we solve two intractable subproblems: one arises from the overlapping group sparse prior and is solved by the majorization-minimization (MM) method with a well-known quadratic majorizer; the other arises from the nonconvex ℓp (0 < p < 1) quasi-norm and is solved by the iteratively reweighted least squares (IRLS) algorithm, motivated by the fact that IRLS is guaranteed to converge to a local minimum and has better theoretical and practical properties than the iteratively reweighted ℓ1 (IRL1) algorithm [52–55].

The rest of this paper is organized as follows. In Section 2, we introduce the notations used to formulate our proposed method. We also briefly review the essential concepts and tools, including the overlapping group sparsity prior, the ADMM algorithm, the MM method, and the IRLS algorithm. Section 3 establishes a novel Poissonian image deblurring model which comprises the generalized Kullback-Leibler (KL) divergence as the data fidelity term, and combined first- and second-order total variation priors with overlapping group sparsity as the regularization term. We then derive an effective algorithm for minimizing the nonconvex and nonsmooth objective function under the ADMM optimization framework. Section 4 demonstrates the superiority of our method via numerical experiments, followed by an analysis of the parameter settings and convergence behavior. We finally conclude this paper in Section 5.

2 Preliminaries

2.1 Notations

It is necessary to introduce some notations used throughout this paper. Assuming that all images under consideration are gray-scale images of size n × n, we lexicographically stack the columns of an image matrix into a vector form. For example, the (i, j)th pixel of image f becomes the ((j − 1)n + i)th entry of the vector, still written as f_{i,j}. Under the periodic boundary conditions for image f, we introduce the discrete forward difference operator D⁺ : R^{n×n} → (R^{n×n}, R^{n×n}) defined by

D⁺f = (D₁⁺f, D₂⁺f),

and similarly the backward difference operator D⁻ : R^{n×n} → (R^{n×n}, R^{n×n}),

D⁻f = (D₁⁻f, D₂⁻f),

where the forward and backward sub-operators are defined separately (all indices taken modulo n) as:

(D₁⁺f)_{i,j} = f_{i+1,j} − f_{i,j},  (D₂⁺f)_{i,j} = f_{i,j+1} − f_{i,j},
(D₁⁻f)_{i,j} = f_{i,j} − f_{i−1,j},  (D₂⁻f)_{i,j} = f_{i,j} − f_{i,j−1}.
Second-order difference operators can be defined recursively from the first-order difference operators, for instance

D₁₁⁻⁺ = D₁⁻ D₁⁺,  D₁₂⁺⁺ = D₁⁺ D₂⁺,

and the other second-order difference operators can be similarly defined. Based on the above definitions, we denote the first- and second-order TV of f as

‖f‖_TV = Σ_{i,j=1}^{n} ‖(∇f)_{i,j}‖₂,  ‖f‖_HTV = Σ_{i,j=1}^{n} ‖(∇²f)_{i,j}‖₂,

where ‖⋅‖₂ means the Euclidean norm, (∇f)_{i,j} = ((D₁⁺f)_{i,j}, (D₂⁺f)_{i,j}), and (∇²f)_{i,j} = ((D₁₁⁻⁺f)_{i,j}, (D₁₂⁺⁺f)_{i,j}, (D₂₁⁺⁺f)_{i,j}, (D₂₂⁻⁺f)_{i,j}).
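Under the periodic boundary convention, the difference operators and both TV semi-norms take only a few lines of NumPy. The sketch below is ours (function names included); `np.roll` implements the modulo-n indexing.

```python
import numpy as np

def forward_diff(f):
    """Periodic forward differences (D1+ f, D2+ f) along the two axes."""
    return np.roll(f, -1, axis=0) - f, np.roll(f, -1, axis=1) - f

def tv_norm(f):
    """Isotropic first-order TV: sum over pixels of ||(grad f)_{i,j}||_2."""
    d1, d2 = forward_diff(f)
    return float(np.sum(np.sqrt(d1 ** 2 + d2 ** 2)))

def htv_norm(f):
    """Second-order TV built recursively from first-order differences."""
    d1, d2 = forward_diff(f)
    d11 = d1 - np.roll(d1, 1, axis=0)   # D1- D1+ f (backward of forward)
    d12 = np.roll(d2, -1, axis=0) - d2  # D1+ D2+ f
    d21 = np.roll(d1, -1, axis=1) - d1  # D2+ D1+ f
    d22 = d2 - np.roll(d2, 1, axis=1)   # D2- D2+ f
    return float(np.sum(np.sqrt(d11**2 + d12**2 + d21**2 + d22**2)))
```

Both semi-norms vanish on constant images, and the first-order TV of a (periodic) two-level step image counts the jump magnitude along both discontinuity columns.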

2.2 TVOGS and MM algorithm

To describe the structural sparsity of the image gradient, we define a pixel-group ṽ_{(i,j),K} of size K × K (K is called the group size) centered at every position (i, j) of a two-dimensional array v = (v_{i,j})_{n×n} as

ṽ_{(i,j),K} = ( v_{i+k₁, j+k₂} )_{−m₁ ≤ k₁, k₂ ≤ m₂},

where m₁ = ⌊(K − 1)/2⌋, m₂ = ⌊K/2⌋, and ⌊⋅⌋ denotes the floor function, i.e., it maps any real number to the nearest integer less than or equal to it. Let v_{(i,j),K} be the column-major vectorized form of ṽ_{(i,j),K}, i.e., v_{(i,j),K} = ṽ_{(i,j),K}(:). Then, as in Liu et al.'s work [46], we can denote the TVOGS regularizer ϕ_TO(f) as

ϕ_TO(f) = ϕ_O(∇₁f) + ϕ_O(∇₂f),    (11)

where ∇₁f ≔ D₁⁺f and ∇₂f ≔ D₂⁺f denote the horizontal and vertical gradients of f, respectively, and ϕ_O(v) = Σ_{i,j=1}^{n} ‖v_{(i,j),K}‖₂ is the K-group OGS function of v ∈ R^{n²}. For notational simplicity, we write ϕ_O(∇f) = ϕ_O(∇₁f) + ϕ_O(∇₂f).
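A direct (unoptimized) implementation of the K-group OGS function ϕ_O, looping over all periodically wrapped K × K groups, may help clarify the definition. This is our own sketch with a hypothetical helper name.

```python
import numpy as np

def ogs_phi(v, K=3):
    """phi_O(v): sum over every pixel of the l2 norm of its K x K group."""
    m1, m2 = (K - 1) // 2, K // 2
    n = v.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            # K x K group centered at (i, j), indices wrapped periodically
            rows = [(i + k) % n for k in range(-m1, m2 + 1)]
            cols = [(j + k) % n for k in range(-m1, m2 + 1)]
            group = np.array([v[r, c] for r in rows for c in cols])
            total += float(np.linalg.norm(group))
    return total
```

Because the groups overlap, a single large entry contributes to K² group norms, which is exactly how the penalty discourages isolated large gradient values; with K = 1 it reduces to the usual ℓ1 (anisotropic TV) penalty.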

Next, we review a minimization problem of the form

min_v (1/2) ‖v − v₀‖₂² + λ ϕ_O(v),    (12)

and we denote the objective function of Eq (12) by R(v).

To solve the above sophisticated OGS model, the majorization-minimization approach is used with

ψ(v|u) = (1/2) Σ_{i,j=1}^{n} [ ‖v_{(i,j),K}‖₂² / ‖u_{(i,j),K}‖₂ + ‖u_{(i,j),K}‖₂ ]

as a quadratic majorizer of ϕ_O(v), namely, ψ(v|u) ≥ ϕ_O(v) for all v and all u with nonzero group norms, with equality if v = u. Therefore, instead of the intractable direct optimization of R(v), we can approximately solve it by iteratively minimizing a sequence of surrogate convex problems:

v^{(k+1)} = argmin_v (1/2) ‖v − v₀‖₂² + (λ/2) ‖Λ(v^{(k)}) v‖₂²,    (13)

where Λ(u) is a diagonal matrix containing the following entries along its diagonal:

[Λ(u)]_{l,l} = ( Σ_{i,j=−m₁}^{m₂} [ Σ_{k₁,k₂=−m₁}^{m₂} |u_{r−i+k₁, t−j+k₂}|² ]^{−1/2} )^{1/2},

with l = (r − 1)n + t, for r, t = 1, 2, ⋯, n. Eq (13) has the closed-form solution

v^{(k+1)} = ( I_{n²} + λ Λ(v^{(k)})ᵀ Λ(v^{(k)}) )⁻¹ v₀,

which is the input for the next MM iteration. Since I_{n²} + λ Λ(v^{(k)})ᵀ Λ(v^{(k)}) is diagonal, its inversion reduces to simple component-wise calculations. In summary, we obtain the following algorithm for Eq (12).

Algorithm 1 MM algorithm for minimizing Eq (12)

input λ > 0, K, N_in (number of inner iterations)

initialize v(0) = v0, k = 0.

while k < Nin do             ▹ MM inner loop

 1: [Λ(v^{(k)})]_{l,l} = ( Σ_{i,j=−m₁}^{m₂} [ Σ_{k₁,k₂=−m₁}^{m₂} |v^{(k)}_{r−i+k₁, t−j+k₂}|² ]^{−1/2} )^{1/2};

 2: v^{(k+1)} = ( I_{n²} + λ Λ(v^{(k)})ᵀ Λ(v^{(k)}) )⁻¹ v₀;

 3: k = k + 1;

end while
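Algorithm 1 vectorizes naturally with periodic box sums, since Λ(v)ᵀΛ(v) is diagonal. The following NumPy sketch is ours (periodic wrapping, a small `eps` guard against all-zero groups, and the function name are assumptions, not the paper's reference implementation):

```python
import numpy as np

def mm_ogs(v0, lam, K=3, n_in=5):
    """MM iterations (Algorithm 1 sketch) for
    min_v 0.5*||v - v0||^2 + lam * phi_O(v), with periodic group wrapping."""
    m1, m2 = (K - 1) // 2, K // 2
    v = v0.astype(float).copy()
    eps = 1e-12                                  # guards all-zero groups
    for _ in range(n_in):
        sq = v ** 2
        # box[r, t] = sum of |v|^2 over the K x K group centered at (r, t)
        box = np.zeros_like(sq)
        for k1 in range(-m1, m2 + 1):
            for k2 in range(-m1, m2 + 1):
                box += np.roll(sq, (-k1, -k2), axis=(0, 1))
        # w = diag(Lambda^T Lambda): each pixel sums over the K^2 groups containing it
        w = np.zeros_like(sq)
        for i in range(-m1, m2 + 1):
            for j in range(-m1, m2 + 1):
                w += 1.0 / np.sqrt(np.roll(box, (i, j), axis=(0, 1)) + eps)
        v = v0 / (1.0 + lam * w)   # closed-form MM step: diagonal inversion
    return v
```

Each MM step shrinks the iterate toward zero while keeping the data-proximity term, which is the expected behavior of the surrogate problems in Eq (13).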

2.3 Alternating direction method of multipliers

The standard form of the ADMM [47] is designed to tackle distributed convex optimization problems with linear equality constraints, and many variants have been developed [48–50]. Recently, Wang et al. [51] extended it to problems with a nonconvex and nonsmooth multi-block objective, as follows:

min_{x,y} Σ_{i=1}^{s} F_i(x_i) + G(y)  s.t.  Σ_{i=1}^{s} A_i x_i + B y = 0,    (16)

where x = [x₁; ⋯; x_s] ∈ R^N (here x_i ∈ R^{n_i}, N = Σ_{i=1}^{s} n_i) and y ∈ R^L are primal variables with corresponding coefficient matrices A_i ∈ R^{M×n_i} and B ∈ R^{M×L}, respectively, F_i : R^{n_i} → R is a continuous function, and G : R^L → R is a smooth function.

In general, F_i can be nonsmooth and nonconvex, and G can be nonconvex. By introducing a Lagrangian multiplier w ∈ R^M and a penalty parameter δ > 0 for the linear constraint Σ_{i=1}^{s} A_i x_i + By = 0, we obtain the augmented Lagrangian in a scaled form,

L_δ(x, y; w) = Σ_{i=1}^{s} F_i(x_i) + G(y) + (δ/2) ‖ Σ_{i=1}^{s} A_i x_i + By + w ‖₂² − (δ/2) ‖w‖₂².    (17)

The ADMM algorithm alternately updates each variable (x^{(k+1)}, y^{(k+1)}) until it reaches a stationary point of Eq (17) or meets a stopping criterion. Then, we have the following iterative scheme:

x_i^{(k+1)} = argmin_{x_i} L_δ(x₁^{(k+1)}, ⋯, x_{i−1}^{(k+1)}, x_i, x_{i+1}^{(k)}, ⋯, x_s^{(k)}, y^{(k)}; w^{(k)}),  i = 1, ⋯, s,
y^{(k+1)} = argmin_y L_δ(x^{(k+1)}, y; w^{(k)}),
w^{(k+1)} = w^{(k)} + Σ_{i=1}^{s} A_i x_i^{(k+1)} + B y^{(k+1)}.    (18)
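As a toy illustration of this alternating scheme (with s = 1, A₁ = I, B = −I), the sketch below solves min_x (1/2)‖x − a‖² + λ‖x‖₁ by splitting x from its copy y. This convex example is entirely our own choice, picked only because every update has a closed form; it is not the paper's model.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def admm_l1_prox(a, lam, delta=1.0, n_iter=200):
    """ADMM for min_x 0.5*||x - a||^2 + lam*||x||_1 via the split x = y."""
    a = np.asarray(a, dtype=float)
    x, y = a.copy(), a.copy()
    w = np.zeros_like(a)                              # scaled multiplier
    for _ in range(n_iter):
        x = (a + delta * (y - w)) / (1.0 + delta)     # x-update (quadratic)
        y = soft(x + w, lam / delta)                  # y-update (prox of l1)
        w = w + x - y                                 # multiplier update
    return y
```

The iterates converge to the soft-thresholding of a, the known closed-form solution, so the scheme can be checked end to end.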
The following lemma establishes the convergence result of ADMM with the nonconvex and nonsmooth objective [51].

Lemma 1. Let F(x) = F₁(x₁) + ⋯ + F_s(x_s) and A = [A₁ ⋯ A_s]. Suppose that the following assumptions S1–S5 hold; then the sequence generated by Eq (18) with any sufficiently large δ and any initialization has at least one limit point, and each limit point is a stationary point of Eq (17).

S1 (coercivity) Let D = {(x, y) ∈ R^{N+L} : Ax + By = 0} be the nonempty feasible set; F(x) + G(y) is coercive over D;

S2 (feasibility) Im(A) ⊆ Im(B), where Im(⋅) is defined as the image of a matrix;

S3 (Lipschitz sub-minimization paths)

(a) For any fixed x, there exists a Lipschitz continuous map G̃ : Im(B) → R^L obeying G̃(u) = argmin_y {F(x) + G(y) : By = u}, provided the minimizer is unique;

(b) For i = 1, ⋯, s and fixed x_{≠i} and y, there exists a Lipschitz continuous map F̃_i : Im(A_i) → R^{n_i} obeying F̃_i(u) = argmin_{x_i} {F(x) + G(y) : A_i x_i = u}, provided the minimizer is unique, where x_{≠i} ≔ [x₁; ⋯; x_{i−1}; x_{i+1}; ⋯; x_s];

S4 (objective-F regularity) F is lower semi-continuous or F1 is lower semi-continuous and Fi,i=2,,s is restricted prox-regular;

S5 (objective-G regularity) G is Lipschitz differentiable with constant L_G > 0.

2.4 Iteratively reweighted least squares algorithm

The IRLS algorithm [52, 53] solves the following nonconvex ℓp-norm sparsity problem:

min_z (1/2) ‖z − z̊‖₂² + λ ‖z‖_p^p,    (19)

where z, z̊ ∈ R^{n²}, and 0 < p < 1 makes the model nonconvex. It has been shown that IRLS is guaranteed to converge to a local minimum, with better theoretical and practical properties than the iteratively reweighted ℓ1 (IRL1) algorithm [52–55]. The problem Eq (19) can then be approximated by

min_z (1/2) ‖z − z̊‖₂² + (1/2) Σ_{i=1}^{n²} ω_i z_i²,

with the weight ω_i = λp (z_i² + ϵ)^{p/2−1}, where ϵ ≪ 1 is a small positive number that avoids division by zero. Given the k-th estimate z^{(k)} = (z₁^{(k)}, ⋯, z_{n²}^{(k)}), IRLS first calculates the weight ω^{(k)} = (ω₁^{(k)}, ⋯, ω_{n²}^{(k)}), then updates z by solving the following problem:

z^{(k+1)} = argmin_z (1/2) ‖z − z̊‖₂² + (1/2) Σ_{i=1}^{n²} ω_i^{(k)} z_i² = ( I_{n²} + diag(ω^{(k)}) )⁻¹ z̊.
We summarize the IRLS method in Algorithm 2.

Algorithm 2 IRLS algorithm for minimizing Eq (19)

input z̊ ∈ R^{n²}, p ∈ (0, 1), λ > 0, ϵ ≪ 1.

initialize z^{(0)} = z̊, k = 0.

while not converged do

 1: update weight: ω(k) = λp((z(k))2 + ϵ)p/2−1;

 2: update z: z^{(k+1)} = ( I_{n²} + diag(ω^{(k)}) )⁻¹ z̊;

 3: k = k + 1;

end while
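Algorithm 2 in NumPy form, as a sketch under our assumptions: the fixed iteration count replaces the "while not converged" condition, and the function name is ours.

```python
import numpy as np

def irls_lp(z0, lam, p=0.1, eps=1e-8, n_iter=100):
    """IRLS sketch for min_z 0.5*||z - z0||^2 + lam*||z||_p^p, 0 < p < 1."""
    z0 = np.asarray(z0, dtype=float)
    z = z0.copy()
    for _ in range(n_iter):
        w = lam * p * (z ** 2 + eps) ** (p / 2.0 - 1.0)  # reweighting step
        z = z0 / (1.0 + w)          # (I + diag(w))^{-1} z0, element-wise
    return z
```

The update exhibits the sparsifying behavior expected of the ℓp quasi-norm: large entries are nearly preserved, while small entries are driven toward zero.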

3 Proposed model and optimization method

In this section, we first present the proposed model and then solve it in the framework of nonconvex and nonsmooth ADMM.

3.1 Model formulation

Considering the advantages of the overlapping group sparse total variation and the high-order nonconvex total variation, we investigate the Poisson noisy image restoration problem with the following regularizer:

ϕ(f) = ϕ_TO(f) + η ϕ_HTVp(f),    (22)

where ϕ_TO(f) is the TVOGS term, ϕ_HTVp(f) ≔ ‖∇²f‖_p^p = Σ_{i,j=1}^{n} ‖(∇²f)_{i,j}‖_p^p (0 < p < 1) is the nonconvex second-order TV term, and η > 0 is a regularization parameter. Substituting the hybrid regularizer Eq (22) into Eq (3), and taking Eq (11) into account, we obtain

min_f λ D_KL(Hf||g) + ϕ_O(∇f) + η ‖∇²f‖_p^p.    (23)
In the model above, λ and η control the data fidelity term and the nonconvex second-order regularizer, respectively. The data fidelity term is an ill-conditioned non-quadratic log-likelihood, while the regularizer is nonconvex and nonsmooth due to the presence of the ℓp (0 < p < 1) quasi-norm. This increases the difficulty of minimizing the model. To circumvent this difficulty, we propose an efficient algorithm based on the framework of nonconvex and nonsmooth ADMM in the following subsection.

3.2 Optimization

In order to apply the variable splitting scheme, we introduce three auxiliary variables to transform the original complicated minimization problem into the following equivalent linearly constrained minimization problem:

min_{x₁,x₂,x₃,f} λ D_KL(x₁||g) + ϕ_O(x₂) + η ‖x₃‖_p^p  s.t.  x₁ = Hf, x₂ = ∇f, x₃ = ∇²f.    (24)
We establish a corresponding relation between the variables to conform with the ADMM framework Eq (16), as shown below:

    (1) x₁ = Hf ∈ R^{n²}, x₂ = ∇f ∈ R^{2n²}, x₃ = ∇²f ∈ R^{4n²}, y = f ∈ R^{n²};

    (2) F₁(x₁) = λ D_KL(x₁||g), F₂(x₂) = ϕ_O(x₂), F₃(x₃) = η ‖x₃‖_p^p, G(y) = 0;

    (3) A₁ = [−I_{n²}; 0₃ᵀ; 0₂],  A₂ = [0₃; −I_{2n²}; 0₁ᵀ],  A₃ = [0₂ᵀ; 0₁; −I_{4n²}],  B = [H; ∇; ∇²],

where 0₁, 0₂, 0₃ denote zero matrices of size 2n² × 4n², 4n² × n², and n² × 2n², respectively, and I_{n²} is the identity matrix of order n². By introducing the Lagrangian multiplier w = [w₁; w₂; w₃] ∈ R^{7n²} (here w₁ ∈ R^{n²}, w₂ ∈ R^{2n²}, w₃ ∈ R^{4n²}) and the positive penalty parameter vector δ = (δ₁, δ₂, δ₃)ᵀ ∈ R³, we can turn Eq (24) into the unconstrained minimization of the following augmented Lagrangian function:

L_δ(x₁, x₂, x₃, f; w) = λ D_KL(x₁||g) + ϕ_O(x₂) + η ‖x₃‖_p^p
  + (δ₁/2)( ‖Hf − x₁ + w₁‖₂² − ‖w₁‖₂² ) + (δ₂/2)( ‖∇f − x₂ + w₂‖₂² − ‖w₂‖₂² ) + (δ₃/2)( ‖∇²f − x₃ + w₃‖₂² − ‖w₃‖₂² ).    (25)
Then, the ADMM for solving Eq (24) iterates to update every variable by minimizing Eq (25) alternately. This iterative scheme can be split into several subproblems.

3.2.1 x1-subproblem

According to the ADMM scheme, pulling out the terms containing x₁ from Eq (25) yields

x₁^{(k+1)} = argmin_{x₁} λ D_KL(x₁||g) + (δ₁/2) ‖ Hf^{(k)} − x₁ + w₁^{(k)} ‖₂².
It is apparent that each component of x₁ can be solved separately. Setting the derivative of each per-component objective to zero yields a quadratic equation; through basic mathematical manipulations, in the case b = 0 we obtain

x₁^{(k+1)} = ( δ₁ c − λ + sqrt( (δ₁ c − λ)² + 4 δ₁ λ g ) ) / (2 δ₁),  c ≔ Hf^{(k)} + w₁^{(k)},    (27)

where all operations are element-wise.
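The element-wise update can be verified against its per-pixel optimality condition λ(1 − g/x) + δ₁(x − c) = 0. The sketch below is ours (b = 0 assumed, hypothetical function name):

```python
import numpy as np

def update_x1(Hf, w1, g, lam, delta1):
    """Closed-form minimizer of lam*(x - g*log x) + (delta1/2)*(x - c)^2."""
    c = Hf + w1
    # positive root of the quadratic delta1*x^2 + (lam - delta1*c)*x - lam*g = 0
    disc = (delta1 * c - lam) ** 2 + 4.0 * delta1 * lam * g
    return ((delta1 * c - lam) + np.sqrt(disc)) / (2.0 * delta1)
```

Taking the positive root keeps the update strictly positive whenever g > 0, which is required for the log-likelihood to be defined.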
3.2.2 x2-subproblem

Minimizing L_δ(x₁, x₂, x₃, f; w) with respect to x₂ leads to the overlapping group sparse problem

x₂^{(k+1)} = argmin_{x₂} ϕ_O(x₂) + (δ₂/2) ‖ ∇f^{(k)} − x₂ + w₂^{(k)} ‖₂².    (28)
It is easy to see that this minimization problem matches the framework of Eq (12), thus x2 can be solved by Algorithm 1.

3.2.3 x3-subproblem

By omitting the terms that do not involve x₃, we have

x₃^{(k+1)} = argmin_{x₃} η ‖x₃‖_p^p + (δ₃/2) ‖ ∇²f^{(k)} − x₃ + w₃^{(k)} ‖₂².    (29)
Since this problem involves the nonconvex ℓp norm, minimizing over x₃ is not straightforward. However, after some simple rescaling, it matches the form of Eq (19) and we can apply the IRLS method of Algorithm 2.

3.2.4 f-subproblem

Considering Eq (25) with respect to f, we have

f^{(k+1)} = argmin_f (δ₁/2) ‖Hf − x₁^{(k+1)} + w₁^{(k)}‖₂² + (δ₂/2) ‖∇f − x₂^{(k+1)} + w₂^{(k)}‖₂² + (δ₃/2) ‖∇²f − x₃^{(k+1)} + w₃^{(k)}‖₂²,

which can be solved via the following normal equation:

( δ₁ HᵀH + δ₂ ∇ᵀ∇ + δ₃ (∇²)ᵀ∇² ) f = δ₁ Hᵀ( x₁^{(k+1)} − w₁^{(k)} ) + δ₂ ∇ᵀ( x₂^{(k+1)} − w₂^{(k)} ) + δ₃ (∇²)ᵀ( x₃^{(k+1)} − w₃^{(k)} ).    (31)
We assume that both the image and the convolution matrix are periodically extended, so that every operator in Eq (31) is diagonalized by the well-known fast Fourier transform (FFT), which can therefore be adopted to solve Eq (31) efficiently [4, 56, 57]. This results in the optimal solution

f^{(k+1)} = F⁻¹( F(ρ^{(k)}) / Δ ),    (32)

with

ρ^{(k)} = δ₁ Hᵀ( x₁^{(k+1)} − w₁^{(k)} ) + δ₂ ∇ᵀ( x₂^{(k+1)} − w₂^{(k)} ) + δ₃ (∇²)ᵀ( x₃^{(k+1)} − w₃^{(k)} ),
Δ = δ₁ F(H)* ∘ F(H) + δ₂ Σ_{i=1}^{2} F(D_i⁺)* ∘ F(D_i⁺) + δ₃ Σ_{D ∈ {D₁₁⁻⁺, D₁₂⁺⁺, D₂₁⁺⁺, D₂₂⁻⁺}} F(D)* ∘ F(D),    (33)

where F denotes the FFT operator, * and ∘ stand for complex conjugation and element-wise multiplication, respectively. The division is computed in a component-wise fashion. Note that many of these subterms need to be computed only once during the iterative updates.
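Because every operator in the normal equation is a periodic convolution, the whole system diagonalizes under the 2-D FFT. A generic sketch of this solve (our own helper, assuming the kernels are given as small arrays anchored at the origin and the right-hand side is already assembled):

```python
import numpy as np

def solve_periodic_normal_eq(kernels, weights, rhs):
    """Solve (sum_i w_i K_i^T K_i) f = rhs, each K_i a periodic convolution."""
    denom = np.zeros(rhs.shape)
    for k, w in zip(kernels, weights):
        Kf = np.fft.fft2(k, s=rhs.shape)   # transfer function of K_i
        denom += w * np.abs(Kf) ** 2       # F(K_i)^* . F(K_i), real and >= 0
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

With a single identity kernel (a Dirac delta) and weight w, the solve reduces to dividing the right-hand side by w, which gives an easy sanity check.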

3.2.5 w-subproblem

Finally, the updating scheme of the Lagrangian multipliers can be written as

w₁^{(k+1)} = w₁^{(k)} + Hf^{(k+1)} − x₁^{(k+1)},
w₂^{(k+1)} = w₂^{(k)} + ∇f^{(k+1)} − x₂^{(k+1)},
w₃^{(k+1)} = w₃^{(k)} + ∇²f^{(k+1)} − x₃^{(k+1)}.    (34)
In summary, the key steps of the ADMM algorithm for solving the suggested model are described in Algorithm 3.

Algorithm 3 The ADMM algorithm for solving Eq (23)

input p, λ, η, δ, K, Nin.

initialize f(0), k = 0, w(0) = 0.

while the stopping criterion is not satisfied do             ▹ outer loop

 1: Update x1(k+1) according to Eq (27).

 2: Update x2(k+1) according to Eq (28) using Algorithm 1.

 3: Update x3(k+1) according to Eq (29) using Algorithm 2.

 4: Update f(k+1) according to Eq (32).

 5: Update wi(k+1),i=1,2,3 according to Eq (34).

 6: k = k + 1.

end while

We discuss the convergence of the proposed algorithm. Inspired by [51], by verifying the assumptions in Lemma 1, we can obtain the following convergence result for Algorithm 3.

Theorem 1. The sequence of (x1, x2, x3, f) generated by Algorithm 3 with any sufficiently large δ and any start point will converge to a stationary point of the augmented Lagrangian Lδ.

Proof. Since our model fits the framework of Eq (16), it remains only to check S1–S5 in Lemma 1. It is easy to verify that D_KL(x₁||g) is coercive [58], as are ϕ_O(x₂) and η‖x₃‖_p^p; thus S1 holds. The linear constraint Ax + By = 0 implies S2. S3 holds because both A and B are full column rank matrices. D_KL(x₁||g) is lower semi-continuous, the ℓp quasi-norm is restricted prox-regular [51], and ϕ_O(x₂) is convex, hence restricted prox-regular; thus S4 holds. It is clear that S5 holds since G ≡ 0, which completes the proof.

The convergence of Algorithm 3 can also be verified experimentally.

4 Numerical experiments

In this section, we present numerical experiments to illustrate the effectiveness of the proposed method for Poisson noisy image restoration, compared with the closely related state-of-the-art methods TVOGS [4], SAHTV [5], SB-FA [38] and FT-ADMM [37]. The TVOGS method introduced the overlapping group sparse TV prior combined with a box constraint as a regularizer and solved the optimization model under the ADMM framework, although denoising was outside the scope of that work. In the SAHTV method, Liu et al. [5] combined the first- and second-order TV priors and updated their pixel-wise weighting coefficients with local information during the consecutive estimation of the latent image. FT-ADMM, whose regularizer is the ℓ0 norm of the wavelet representation of the image, was also proposed to efficiently eliminate the staircase artifacts. Fang and Yan [38] proposed to combine the ℓ1 norm of the framelet representation of the latent image with the TV prior, and solved the resulting model by the split Bregman method. Since their numerical experiments recommended the SB-FA algorithm without the TV prior, we adopt SB-FA as a competing method rather than SB-FATV with the TV prior.

All experiments in this paper are implemented on Windows 10 64-bit and Matlab R2016b running on a desktop computer with an Intel Core i5-4590 CPU 3.3GHz and 8GB of RAM. We uploaded the code for our algorithm on https://github.com/KSJhon/PoissonDeblur_hybrid.

To proceed with the simulation experiments, we intentionally degrade clean images to construct the corrupted ones. More specifically, an original clean image is first scaled to a preset peak value MAXf, which determines the Poisson noise level. After that, the scaled image is convolved with a given PSF to simulate the blurring effect, and further contaminated by signal-dependent Poisson noise. In the following experiments, we set MAXf to 100, 200, 300, and 350, where a lower MAXf indicates a relatively higher noise level. For the deblurring simulations, we consider three types of blur kernels: (1) a 9 × 9 Gaussian kernel with standard deviation 1; (2) a linear motion blur with motion length 5 and angle 45° in the counterclockwise direction; (3) a kernel from Levin et al.'s public dataset [59]. The first two blur kernels are generated by the Matlab built-in function "fspecial", and the Poisson noise is generated by "poissrnd" without any additional parameters. The test images of various sizes are shown in Fig 1.

Fig 1
Test images used in our experiments.

(a): “beauty” (256 × 256). This image was republished from https://www.flickr.com/photos/90471071@N00/3201337190 under a CC BY license, with permission from Irina Gheorghita, original copyright 2009; (b): “lotus” (500 × 500). This image was republished from https://www.flickr.com/photos/robertlylebolton/7797693474 under a CC BY license, with permission from Robert Lyle Bolton, original copyright 2012; (c): “dolphin” (512 × 512). This image was republished from https://www.flickr.com/photos/grovecrest/6981622176 under a CC BY license, with permission from Simon Lewis, original copyright 2011.

We quantitatively evaluate the performance of the proposed method and the competing methods by means of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), defined as

PSNR = 10 log₁₀( n² MAX_f² / ‖f − f̂‖₂² ),  SSIM = ( (2 μ_f μ_f̂ + C₁)(2 σ_{f,f̂} + C₂) ) / ( (μ_f² + μ_f̂² + C₁)(σ_f² + σ_f̂² + C₂) ),
where f is the original image, f̂ is the restored image, μ_f, μ_f̂ are the respective means, σ_f², σ_f̂² are the respective variances, σ_{f,f̂} is the covariance of f and f̂, and C₁ = MAX_f²/10⁴, C₂ = 9·MAX_f²/10⁴ by default. Generally, the larger the PSNR and the closer the SSIM is to 1, the better the quality of the restored image. In the experiments, we terminate the iteration in Algorithm 3 when the relative error (RelErr) between two consecutive estimates falls below a predefined tolerance level ε, as follows:

RelErr = ‖f^{(k+1)} − f^{(k)}‖₂ / ‖f^{(k+1)}‖₂ ≤ ε.
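The quantitative metric and the stopping rule are straightforward to compute. A NumPy sketch (our own helper names; `psnr` uses the MAX_f peak value as in the text, and a full SSIM implementation with its local-window statistics is omitted for brevity):

```python
import numpy as np

def psnr(f, f_hat, peak):
    """PSNR in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(f, dtype=float) - np.asarray(f_hat, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rel_err(f_new, f_old):
    """Relative change between consecutive iterates, used as stopping rule."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new)
```

For example, a restoration that is off by exactly one gray level everywhere at peak 100 scores 10·log₁₀(100²/1) = 40 dB.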
4.1 Selection of parameters

Since the convergence of the nonconvex ADMM and its suboptimization in Algorithm 3 depends on the values of parameters, they require careful tuning.

For the choice of p, as in the experiments reported in [40], we set p = 0.1 throughout the experiments, considering an important aspect of the ℓp norm: taking p sufficiently close to 0 helps preserve sharp edges in the restored image.

The group size K also plays an important role in balancing the trade-off between the global noise filtering performance and the computational cost. To find the optimal K, we conduct a denoising experiment varying K while keeping the other parameters fixed, as shown in Fig 2. Clearly, K = 3 is the best choice for all noise levels, so we fix K = 3 in the following experiments. In addition, the number of inner iterations N_in also affects the MM algorithm; we fix N_in = 5 as indicated in [4, 29, 39, 40]. To balance accuracy and speed, we empirically set N_out = 50 and ε = 10⁻³. The remaining parameters, including the regularization parameters (λ, η) and the penalty parameter δ, are manually adjusted to achieve the highest PSNR and SSIM values. In the following subsections, we compare the proposed method (named HTVp-OGS) with the state-of-the-art methods for denoising and deblurring images under Poisson noise. Unless otherwise indicated, all parameters of the competing algorithms are carefully selected near their given default values to give the best respective performance.

Fig 2
Evolution of PSNR values of denoised images with MAXf = 350, 300, 200 and 100 along the varying group sizes.

(a): “beauty” image; (b): “lotus” image; (c): “dolphin” image.

4.2 Denoising

Here, we show that our model provides better results in Poisson noise removal through the comparison with the state-of-the-art methods: SB-FA [38], SAHTV [5], and FT-ADMM [37].

To evaluate the denoising performance, we simply set H as the identity matrix and empirically fix δ = (5 · 10⁻³, 3 · 10⁻², 2 · 10⁻⁴)ᵀ. We observe that λ = 3 · MAXf brings satisfactory results for Eq (23). The other regularization parameter η is hand-tuned for the best denoising performance according to the noise level. The denoising results, in terms of PSNR and SSIM values, are shown in Table 1, from which it is obvious that the PSNR and SSIM values of the images denoised by our model are higher than those of the other three methods (SB-FA, SAHTV, FT-ADMM). Fig 3 displays the denoised images and zoom-in regions obtained by our method and the competing methods from the noisy images with noise level MAXf = 100.

Fig 3
Denoised images and a zoom-in region from the Poisson noisy images with the noise level MAXf = 100.

first column: Poisson noisy images; second column: denoised images by SB-FA [38]; third column: denoised images by SAHTV [5]; fourth column: denoised images by FT-ADMM [37]; fifth column: denoised images by HTVp-OGS.

Table 1
The PSNR and SSIM values for denoised images by different methods.
MAXf | clean(f) | noisy(g)    | SB-FA       | SAHTV       | FT-ADMM     | HTVp-OGS
350  | beauty   | 27.97/0.649 | 34.96/0.929 | 35.41/0.941 | 34.17/0.920 | 37.08/0.955
     | lotus    | 28.12/0.530 | 36.34/0.913 | 37.42/0.936 | 34.96/0.898 | 38.64/0.957
     | dolphin  | 27.95/0.465 | 38.18/0.926 | 38.41/0.944 | 36.75/0.910 | 40.14/0.966
300  | beauty   | 27.31/0.620 | 34.29/0.912 | 34.77/0.931 | 33.73/0.906 | 36.82/0.953
     | lotus    | 27.46/0.498 | 35.63/0.890 | 36.73/0.922 | 34.42/0.876 | 38.18/0.955
     | dolphin  | 27.28/0.431 | 37.04/0.898 | 37.89/0.933 | 35.67/0.884 | 39.61/0.964
200  | beauty   | 25.55/0.539 | 32.16/0.844 | 32.81/0.880 | 31.85/0.846 | 35.73/0.944
     | lotus    | 25.70/0.412 | 33.23/0.796 | 34.08/0.843 | 32.38/0.786 | 37.12/0.945
     | dolphin  | 25.52/0.348 | 33.94/0.787 | 35.00/0.855 | 33.06/0.779 | 38.44/0.956
100  | beauty   | 22.53/0.402 | 27.74/0.645 | 28.06/0.676 | 28.18/0.660 | 33.01/0.917
     | lotus    | 22.68/0.283 | 28.35/0.542 | 28.45/0.562 | 28.49/0.542 | 35.06/0.925
     | dolphin  | 22.51/0.227 | 28.41/0.500 | 28.66/0.539 | 28.58/0.544 | 36.70/0.939

4.3 Deblurring

In our deblurring experiments, we consider three types of blur kernels. As mentioned above, the first two are synthetic, mimicking the effects of out-of-focus blur and motion blur respectively, and the last is a blur kernel from Levin et al.'s dataset [59]. For a fair comparison with the other methods, we set the tolerance level ε = 10⁻³ and the iteration number N_out = 50 in all experiments. As in the denoising experiments, we fix λ = 3 · MAXf and select the penalty parameter δ as (0.01, 0.1, 0.01)ᵀ. The values of δ and λ are kept constant, and the other parameter η is selected by a grid search to obtain high-quality restored images in the following three cases:

    Setting 1. Gaussian blur case: We set η = 18, 14, 6, and 2 according to the noise levels MAXf = 350, 300, 200, and 100, respectively. The detailed comparison of the five methods is provided in Table 2.

    Setting 2. Linear motion blur case: We keep δ and λ unchanged and empirically set η = 8, 6, 4, and 1; the detailed results are summarized in Table 3.

    Setting 3. Ground truth blur case: In this case, we set η = 4, 3, 2, and 0.2, and the detailed results are summarized in Table 4.
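The grid search over η can be sketched as follows; `restore` and `psnr` are placeholders standing in for the HTVp-OGS solver and the quality metric (the toy callables below only illustrate the selection logic):

```python
import numpy as np

def grid_search_eta(g, f_true, restore, psnr, candidates):
    """Pick the eta giving the highest score on a reference image.

    `restore(g, eta)` and `psnr(clean, est)` are placeholders; any
    callables with these signatures work.
    """
    scores = [(psnr(f_true, restore(g, eta)), eta) for eta in candidates]
    best_score, best_eta = max(scores)
    return best_eta, best_score

# Toy illustration: "restoration" just rescales the degraded input, so
# the best eta is the one that recovers f_true exactly (eta = 0.5).
f_true = np.ones((4, 4))
g = 2.0 * f_true
restore = lambda g, eta: eta * g
score = lambda a, b: -np.sum((a - b) ** 2)  # higher is better
eta, _ = grid_search_eta(g, f_true, restore, score, [0.25, 0.5, 1.0])
print(eta)  # 0.5
```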

Table 2
The PSNR/SSIM values for the Poisson image restoration by different methods in the case of the Gaussian blur with kernel size 9 × 9 and variance 1.

| MAXf | clean (f) | degraded (g) | SB-FA      | SAHTV      | TVOGS      | FT-ADMM    | HTVp-OGS   |
|------|-----------|--------------|------------|------------|------------|------------|------------|
| 350  | beauty    | 27.00/0.62   | 32.81/0.92 | 32.45/0.93 | 31.19/0.92 | 32.39/0.93 | 32.85/0.94 |
|      | lotus     | 27.68/0.51   | 35.82/0.92 | 36.31/0.94 | 34.64/0.93 | 34.90/0.93 | 36.66/0.94 |
|      | dolphin   | 27.55/0.45   | 36.70/0.93 | 36.97/0.94 | 36.67/0.95 | 35.87/0.95 | 37.69/0.95 |
| 300  | beauty    | 26.47/0.59   | 32.58/0.92 | 32.19/0.92 | 31.16/0.92 | 32.33/0.93 | 32.60/0.93 |
|      | lotus     | 27.08/0.47   | 35.43/0.91 | 36.02/0.93 | 34.61/0.93 | 34.84/0.93 | 36.37/0.94 |
|      | dolphin   | 26.93/0.41   | 36.09/0.92 | 36.64/0.94 | 36.50/0.95 | 35.83/0.94 | 37.37/0.95 |
| 200  | beauty    | 24.95/0.51   | 31.51/0.88 | 31.41/0.90 | 31.02/0.91 | 31.96/0.91 | 31.92/0.92 |
|      | lotus     | 25.44/0.39   | 33.98/0.87 | 34.96/0.91 | 34.36/0.92 | 34.50/0.92 | 35.48/0.93 |
|      | dolphin   | 25.27/0.33   | 34.49/0.87 | 35.67/0.92 | 35.99/0.94 | 35.66/0.93 | 36.58/0.95 |
| 100  | beauty    | 22.22/0.37   | 28.50/0.75 | 29.78/0.85 | 30.42/0.90 | 30.66/0.87 | 30.57/0.90 |
|      | lotus     | 22.55/0.27   | 29.98/0.70 | 32.44/0.84 | 33.28/0.90 | 32.86/0.85 | 33.87/0.92 |
|      | dolphin   | 22.39/0.22   | 30.28/0.68 | 33.19/0.85 | 34.53/0.91 | 33.78/0.86 | 35.29/0.94 |
Table 3
The PSNR/SSIM values for the Poisson image restoration by different methods in the case of the motion blur with length 5 and angle 45°.

| MAXf | clean (f) | degraded (g) | SB-FA      | SAHTV      | TVOGS      | FT-ADMM    | HTVp-OGS   |
|------|-----------|--------------|------------|------------|------------|------------|------------|
| 350  | beauty    | 26.64/0.61   | 31.34/0.92 | 31.90/0.92 | 29.96/0.90 | 31.76/0.92 | 32.33/0.93 |
|      | lotus     | 27.54/0.50   | 34.22/0.93 | 35.62/0.93 | 33.39/0.92 | 34.56/0.93 | 36.31/0.94 |
|      | dolphin   | 27.45/0.44   | 36.07/0.94 | 37.06/0.94 | 35.63/0.94 | 35.62/0.95 | 37.68/0.95 |
| 300  | beauty    | 26.13/0.58   | 31.32/0.92 | 31.71/0.92 | 29.98/0.90 | 31.58/0.92 | 32.12/0.93 |
|      | lotus     | 26.94/0.47   | 34.21/0.92 | 35.33/0.93 | 33.39/0.92 | 34.27/0.93 | 36.04/0.94 |
|      | dolphin   | 26.85/0.41   | 36.01/0.94 | 36.72/0.94 | 35.54/0.94 | 35.49/0.93 | 37.46/0.95 |
| 200  | beauty    | 24.74/0.50   | 31.11/0.91 | 30.97/0.90 | 29.95/0.90 | 31.12/0.90 | 31.56/0.92 |
|      | lotus     | 25.34/0.39   | 34.08/0.92 | 34.57/0.92 | 33.24/0.91 | 33.79/0.91 | 35.32/0.93 |
|      | dolphin   | 25.23/0.33   | 35.78/0.94 | 35.88/0.93 | 35.19/0.94 | 35.19/0.92 | 36.58/0.95 |
| 100  | beauty    | 22.10/0.37   | 30.39/0.88 | 29.88/0.87 | 29.67/0.89 | 29.82/0.84 | 30.37/0.90 |
|      | lotus     | 22.79/0.34   | 31.48/0.84 | 31.51/0.83 | 31.17/0.84 | 30.77/0.80 | 31.79/0.85 |
|      | dolphin   | 22.37/0.21   | 34.56/0.91 | 33.81/0.88 | 33.92/0.91 | 33.23/0.84 | 35.34/0.94 |
Table 4
The PSNR/SSIM values for the Poisson image restoration by different methods in the case of the ground truth blur.

| MAXf | clean (f) | degraded (g) | SB-FA      | SAHTV      | TVOGS      | FT-ADMM    | HTVp-OGS   |
|------|-----------|--------------|------------|------------|------------|------------|------------|
| 350  | beauty    | 25.44/0.56   | 29.81/0.89 | 30.96/0.90 | 29.22/0.87 | 30.78/0.91 | 31.17/0.91 |
|      | lotus     | 26.49/0.46   | 32.69/0.91 | 34.78/0.92 | 32.70/0.91 | 33.27/0.92 | 34.95/0.93 |
|      | dolphin   | 26.83/0.42   | 34.58/0.94 | 35.84/0.94 | 34.85/0.94 | 34.38/0.94 | 36.43/0.95 |
| 300  | beauty    | 25.06/0.53   | 29.80/0.89 | 30.74/0.90 | 29.22/0.87 | 30.66/0.90 | 30.97/0.91 |
|      | lotus     | 26.03/0.43   | 32.66/0.91 | 34.45/0.92 | 32.67/0.91 | 33.24/0.91 | 34.62/0.92 |
|      | dolphin   | 26.31/0.39   | 34.47/0.94 | 35.57/0.93 | 34.79/0.94 | 34.38/0.94 | 36.18/0.94 |
| 200  | beauty    | 23.93/0.46   | 29.72/0.88 | 30.01/0.88 | 29.21/0.87 | 30.27/0.88 | 30.31/0.89 |
|      | lotus     | 24.69/0.36   | 32.67/0.91 | 33.49/0.90 | 32.51/0.90 | 33.01/0.90 | 33.90/0.92 |
|      | dolphin   | 24.84/0.31   | 34.42/0.93 | 34.67/0.91 | 34.54/0.93 | 34.43/0.93 | 35.22/0.94 |
| 100  | beauty    | 21.62/0.33   | 29.30/0.87 | 28.74/0.83 | 28.96/0.87 | 29.26/0.84 | 29.20/0.87 |
|      | lotus     | 22.14/0.24   | 32.19/0.89 | 31.43/0.84 | 31.91/0.89 | 32.04/0.86 | 32.55/0.90 |
|      | dolphin   | 22.17/0.20   | 33.77/0.92 | 32.19/0.86 | 33.57/0.91 | 33.42/0.88 | 34.18/0.93 |

In Fig 4, we show the degraded “dolphin” images, which are blurred by the corresponding kernels and further corrupted by the Poisson noise of level 200, together with their restored versions obtained by our method and the competing methods. Fig 5(a) depicts the RelErr curves for the sequence of intermediate estimates of the “lotus” image produced during the HTVp-OGS iterations. In Fig 5(b), we further show that the PSNR and SSIM values steadily increase as the iteration proceeds. From the results of the above three experiments, we observe that in most cases our method provides the best results in terms of the PSNR and SSIM values; in the remaining cases, it attains a slightly lower PSNR value than a competitor, but the SSIM value remains the best. It is worth noting that the transform-based methods (SB-FA and FT-ADMM) are quite time-consuming because they inherently involve complicated nonlinear operations [3], whereas our method works in the gradient domain and therefore requires relatively little running time.
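The RelErr quantity tracked in Fig 5(a) also serves as the stopping test with tolerance ε = 10−3 and cap Nout = 50. A sketch under the usual definition ‖f_k − f_{k−1}‖₂/‖f_k‖₂, with a placeholder `update` standing in for one ADMM iteration of the solver:

```python
import numpy as np

def run_until_converged(f0, update, tol=1e-3, max_iter=50):
    """Iterate until the relative change drops below tol (cf. eps, Nout)."""
    f_prev = f0
    rel_errs = []
    for _ in range(max_iter):
        f = update(f_prev)
        rel = np.linalg.norm(f - f_prev) / max(np.linalg.norm(f), 1e-12)
        rel_errs.append(rel)
        if rel < tol:
            break
        f_prev = f
    return f, rel_errs

# Toy contraction toward the constant image 1.0: the relative error
# decays geometrically, so the loop stops well before max_iter.
update = lambda f: 0.5 * (f + 1.0)
f, errs = run_until_converged(np.zeros(4), update)
print(len(errs), errs[-1] < 1e-3)
```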

Fig 4
Restored “dolphin” images and zoom-in regions from the degraded images with different blur kernels and the Poisson noise of level MAXf = 200.

First row: Gaussian blur; second row: motion blur; third row: ground truth blur; first column: degraded images (the PSF is shown at the bottom-left corner); second column: restored images by SB-FA [38]; third column: restored images by SAHTV [5]; fourth column: restored images by TVOGS [4]; fifth column: restored images by FT-ADMM [37]; sixth column: restored images by HTVp-OGS.

Fig 5
Plot of performance along the HTVp-OGS iterations for restoring “lotus” images blurred by different kernels and corrupted by the Poisson noise of level MAXf = 200.

(a) relative error in log scale; (b) PSNR and SSIM values.

5 Conclusions

In this paper, we propose a new method to restore Poissonian images (including denoising and deblurring) using a hybrid regularizer that combines the overlapping group sparse total variation with the high-order nonconvex total variation. This regularizer exploits the advantages of both terms, alleviating the staircase artifacts while preserving the original sharp edges. The model, derived from the Bayesian perspective, is effectively solved within the nonconvex and nonsmooth ADMM optimization framework. We adopt the MM and IRLS algorithms to solve the subproblems associated with the OGS and nonconvex second-order total variation priors, respectively. Numerical experiments demonstrate that the proposed method outperforms the related methods in terms of the PSNR and SSIM values. Adaptively determining the optimal parameters according to the image content and noise level will be explored in future work.

Acknowledgements

The authors would like to thank Dr. Houzhang Fang and Dr. Haimiao Zhang for supplying us with the codes used in our numerical comparisons.

References

1. Vardi Y, Shepp LA, Kaufman L. A statistical model for positron emission tomography. Journal of the American Statistical Association. 1985;80(389):8-20. 10.2307/2288037

2. Landi G, Piccolomini EL. A projected Newton-CG method for nonnegative astronomical image deblurring. Numerical Algorithms. 2008;48(4):279-300. 10.1007/s11075-008-9198-3

3. James AP, Dasarathy BV. Medical image fusion: A survey of the state of the art. Information Fusion. 2014;19:4-19. 10.1016/j.inffus.2013.12.002

4. Lv XG, Jiang L, Liu J. Deblurring Poisson noisy images by total variation with overlapping group sparsity. Applied Mathematics and Computation. 2016;289:132-148. 10.1016/j.amc.2016.03.029

5. Liu J, Huang TZ, Lv XG, Wang S. High-order total variation-based Poissonian image deconvolution with spatially adapted regularization parameter. Applied Mathematical Modelling. 2017;45:516-529. 10.1016/j.apm.2017.01.009

6. Kumar A, Ahmad MO, Swamy M. A framework for image denoising using first and second order fractional overlapping group sparsity (HF-OLGS) regularizer. IEEE Access. 2019;7:26200-26217. 10.1109/ACCESS.2019.2901691

7. Hebert TJ, Leahy R. Statistic-based MAP image-reconstruction from Poisson data using Gibbs priors. IEEE Transactions on Signal Processing. 1992;40(9):2290-2303. 10.1109/78.157228

8. Sebastiani G, Godtliebsen F. On the use of Gibbs priors for Bayesian image restoration. Signal Processing. 1997;56(1):111-118. 10.1016/S0165-1684(97)00002-9

9. Benvenuto F, La Camera A, Theys C, Ferrari A, Lantéri H, Bertero M. The study of an iterative method for the reconstruction of images corrupted by Poisson and Gaussian noise. Inverse Problems. 2008;24(3):035016. 10.1088/0266-5611/24/3/035016

10. Jiang L, Huang J, Lv XG, Liu J. Restoring Poissonian images by a combined first-order and second-order variation approach. Journal of Mathematics. 2013;2013:1-11. 10.1155/2013/274573

11. Jiang L, Huang J, Lv XG, Liu J. Alternating direction method for the high-order total variation-based Poisson noise removal problem. Numerical Algorithms. 2015;69(3):495-516. 10.1007/s11075-014-9908-y

12. Bertero M, Lantéri H, Zanni L, et al. Iterative image reconstruction: A point of view. Mathematical Methods in Biomedical Imaging and Intensity-Modulated Radiation Therapy (IMRT). 2008;7:37-63.

13. Lanteri H, Roche M, Aime C. Penalized maximum likelihood image restoration with positivity constraints: multiplicative algorithms. Inverse Problems. 2002;18(5):1397. 10.1088/0266-5611/18/5/313

14. Landi G, Piccolomini EL. An improved Newton projection method for nonnegative deblurring of Poisson-corrupted images with Tikhonov regularization. Numerical Algorithms. 2012;60(1):169-188. 10.1007/s11075-011-9517-y

15. Bardsley JM, Vogel CR. A nonnegatively constrained convex programming method for image reconstruction. SIAM Journal on Scientific Computing. 2004;25(4):1326-1343. 10.1137/S1064827502410451

16. Bardsley JM, Laobeul N. Tikhonov regularized Poisson likelihood estimation: theoretical justification and a computational method. Inverse Problems in Science and Engineering. 2008;16(2):199-215. 10.1080/17415970701404235

17. Bonettini S, Landi G, Piccolomini EL, Zanni L. Scaling techniques for gradient projection-type methods in astronomical image deblurring. International Journal of Computer Mathematics. 2013;90(1):9-29. 10.1080/00207160.2012.716513

18. Chen K. Introduction to variational image-processing models and applications. International Journal of Computer Mathematics. 2013;90:1-8. 10.1080/00207160.2012.757073

19. Bardsley JM, Luttman A. Total variation-penalized Poisson likelihood estimation for ill-posed problems. Advances in Computational Mathematics. 2009;31(1-3):35. 10.1007/s10444-008-9081-8

20. Landi G, Piccolomini EL. An efficient method for nonnegatively constrained Total Variation-based denoising of medical images corrupted by Poisson noise. Computerized Medical Imaging and Graphics. 2012;36(1):38-46. 10.1016/j.compmedimag.2011.07.002

21. Tao S, Dong W, Xu Z, Tang Z. Fast total variation deconvolution for blurred image contaminated by Poisson noise. Journal of Visual Communication and Image Representation. 2016;38:582-594. 10.1016/j.jvcir.2016.04.005

22. Bonettini S, Ruggiero V. An alternating extragradient method for total variation-based image restoration from Poisson data. Inverse Problems. 2011;27(9):095001.

23. Bonettini S, Ruggiero V. On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. Journal of Mathematical Imaging and Vision. 2012;44(3):236-253. 10.1007/s10851-011-0324-9

24. Wen Y, Chan RH, Zeng T. Primal-dual algorithms for total variation based image restoration under Poisson noise. Science China Mathematics. 2016;59(1):141-160. 10.1007/s11425-015-5079-0

25. Setzer S, Steidl G, Teuber T. Deblurring Poissonian images by split Bregman techniques. Journal of Visual Communication and Image Representation. 2010;21(3):193-199. 10.1016/j.jvcir.2009.10.006

26. Chang H, Lou Y, Duan Y, Marchesini S. Total variation-based phase retrieval for Poisson noise removal. SIAM Journal on Imaging Sciences. 2018;11(1):24-55. 10.1137/16M1103270

27. Zhou W, Li Q. Poisson noise removal scheme based on fourth-order PDE by alternating minimization algorithm. Abstract and Applied Analysis. 2012;2012(Special Issue):1-14.

28. Le T, Chartrand R, Asaki TJ. A variational approach to reconstructing images corrupted by Poisson noise. Journal of Mathematical Imaging and Vision. 2007;27(3):257-263. 10.1007/s10851-007-0652-y

29. Liu G, Huang TZ, Liu J, Lv XG. Total variation with overlapping group sparsity for image deblurring under impulse noise. PLoS ONE. 2015;10(4):e0122562. 10.1371/journal.pone.0122562

30. Zhang J, Ma M, Wu Z, Deng C. High-order total bounded variation model and its fast algorithm for Poissonian image restoration. Mathematical Problems in Engineering. 2019;2019:1-11.

31. Liu X, Huang L. Total bounded variation-based Poissonian images recovery by split Bregman iteration. Mathematical Methods in the Applied Sciences. 2012;35(5):520-529. 10.1002/mma.1588

32. Wang XD, Feng XC, Wang WW, Zhang WJ. Iterative reweighted total generalized variation based Poisson noise removal model. Applied Mathematics and Computation. 2013;223:264-277. 10.1016/j.amc.2013.07.090

33. Zhang Z, Zhang J, Wei Z, Xiao L. Cartoon-texture composite regularization based non-blind deblurring method for partly-textured blurred images with Poisson noise. Signal Processing. 2015;116:127-140. 10.1016/j.sigpro.2015.04.020

34. Ma L, Moisan L, Yu J, Zeng T. A dictionary learning approach for Poisson image deblurring. IEEE Transactions on Medical Imaging. 2013;32(7):1277-1289. 10.1109/TMI.2013.2255883

35. Fang H, Yan L, Liu H, Chang Y. Blind Poissonian images deconvolution with framelet regularization. Optics Letters. 2013;38(4):389-391. 10.1364/OL.38.000389

36. Shi Y, Song J, Hua X. Poissonian image deblurring method by non-local total variation and framelet regularization constraint. Computers & Electrical Engineering. 2017;62:319-329. 10.1016/j.compeleceng.2016.09.032

37. Zhang H, Dong Y, Fan Q. Wavelet frame based Poisson noise removal and image deblurring. Signal Processing. 2017;137:363-372. 10.1016/j.sigpro.2017.01.025

38. Fang H, Yan L. Poissonian image deconvolution with analysis sparsity priors. Journal of Electronic Imaging. 2013;22(2):023033. 10.1117/1.JEI.22.2.023033

39. Liu J, Huang TZ, Liu G, Wang S, Lv XG. Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing. 2016;216:502-513. 10.1016/j.neucom.2016.07.049

40. Adam T, Paramesran R. Image denoising using combined higher order non-convex total variation with overlapping group sparsity. Multidimensional Systems and Signal Processing. 2019;30(1):503-527. 10.1007/s11045-018-0567-3

41. Ding M, Huang TZ, Wang S, Mei JJ, Zhao XL. Total variation with overlapping group sparsity for deblurring images under Cauchy noise. Applied Mathematics and Computation. 2019;341:128-147. 10.1016/j.amc.2018.08.014

42. Figueiredo MA, Bioucas-Dias JM. Restoration of Poissonian images using alternating direction optimization. IEEE Transactions on Image Processing. 2010;19(12):3133-3145. 10.1109/TIP.2010.2053941

43. Gong P, Zhang C, Lu Z, Huang J, Ye J. A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning; 2013. p. 37-45.

44. Chen PY, Selesnick IW. Group-sparse signal denoising: non-convex regularization, convex optimization. IEEE Transactions on Signal Processing. 2014;62(13):3464-3478. 10.1109/TSP.2014.2329274

45. Adam T, Paramesran R. Hybrid non-convex second-order total variation with applications to non-blind image deblurring. Signal, Image and Video Processing. 2020;14(1):115-123. 10.1007/s11760-019-01531-3

46. Liu J, Huang TZ, Selesnick IW, Lv XG, Chen PY. Image restoration using total variation with overlapping group sparsity. Information Sciences. 2015;295:232-246.

47. Chan TFC, Glowinski R. Finite element approximation and iterative solution of a class of mildly non-linear elliptic equations. Computer Science Department, Stanford University; 1978.

48. Xiu X, Liu W, Li L, Kong L. Alternating direction method of multipliers for nonconvex fused regression problems. Computational Statistics & Data Analysis. 2019;136:59-71. 10.1016/j.csda.2019.01.002

49. Lu C, Feng J, Yan S, Lin Z. A unified alternating direction method of multipliers by majorization minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018;40(3):527-541. 10.1109/TPAMI.2017.2689021

50. Deng W, Yin W. On the global and linear convergence of the generalized alternating direction method of multipliers. Journal of Scientific Computing. 2016;66(3):889-916. 10.1007/s10915-015-0048-x

51. Wang Y, Yin W, Zeng J. Global convergence of ADMM in nonconvex nonsmooth optimization. Journal of Scientific Computing. 2019;78(1):29-63. 10.1007/s10915-018-0757-z

52. Lyu Q, Lin Z, She Y, Zhang C. A comparison of typical ℓp minimization algorithms. Neurocomputing. 2013;119:413-424. 10.1016/j.neucom.2013.03.017

53. Lai MJ, Xu Y, Yin W. Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization. SIAM Journal on Numerical Analysis. 2013;51(2):927-957. 10.1137/110840364

54. Foucart S, Lai MJ. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis. 2009;26(3):395-407.

55. Chartrand R, Yin W. Iteratively reweighted algorithms for compressive sensing. In: IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE; 2008. p. 3869-3872.

56. Lv XG, Li F. An iterative decoupled method with weighted nuclear norm minimization for image restoration. International Journal of Computer Mathematics. 2020;97(3):602-623. 10.1080/00207160.2019.1581178

57. Figueiredo MA, Bioucas-Dias JM, Nowak RD. Majorization-minimization algorithms for wavelet-based image restoration. IEEE Transactions on Image Processing. 2007;16(12):2980-2991. 10.1109/TIP.2007.909318

58. Woo H. A characterization of the domain of Beta-Divergence and its connection to Bregman variational model. Entropy. 2017;19(9):482. 10.3390/e19090482

59. Levin A, Weiss Y, Durand F, Freeman WT. Understanding and evaluating blind deconvolution algorithms. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2009. p. 1964-1971.