Nonconvex optimization has been shown to require substantially fewer measurements than l1 minimization for exact recovery under a fixed transform or overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model while subjecting the fidelity term to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) efficiently solves the model with the approximated lp-norm penalty. Specifically, under the formulations of iteratively reweighted l1 and l2 minimization, the algorithms converge after a relatively small number of iterations. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and offers advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
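The abstract does not spell out the algorithm, but the iteratively reweighted l1 idea it builds on can be illustrated with a minimal sketch. The function name `irl1_recover`, the majorization weights w_i = p/(|x_i| + eps)^(1-p), and the plain proximal-gradient (ISTA) inner loop below are illustrative assumptions, not the authors' WTBMDU method, which additionally involves Bregman iterations and dictionary updating.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of a weighted l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def irl1_recover(A, b, p=0.5, lam=0.01, eps=1e-3, outer=10, inner=100):
    """Sketch of iteratively reweighted l1 for min lam*||x||_p^p + 0.5*||Ax - b||^2.

    Each outer pass refreshes the weights w_i = p / (|x_i| + eps)^(1-p),
    the standard majorizer of the lp penalty; each inner pass runs ISTA
    steps with those weights held fixed.
    """
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    for _ in range(outer):
        w = p / (np.abs(x) + eps) ** (1.0 - p)  # reweighting from the lp majorizer
        for _ in range(inner):
            grad = A.T @ (A @ x - b)  # gradient of the quadratic fidelity term
            x = soft_threshold(x - step * grad, step * lam * w)
    return x
```

As the abstract notes for the reweighted formulation, small entries receive large weights and are driven to zero quickly, so in practice few outer reweighting passes are needed.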