In this article we prove a maximal L^p-regularity result for stochastic convolutions, which extends Krylov's basic mixed L^p(L^q)-inequality for the Laplace operator on R^d to large classes of elliptic operators, both on R^d and on bounded domains in R^d with various boundary conditions. Our method of proof is based on McIntosh's H^∞-functional calculus, R-boundedness techniques and sharp L^p(L^q)-square function estimates for stochastic integrals in L^q-spaces. Under an additional invertibility assumption on A, a maximal space-time L^p-regularity result is obtained as well.
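For orientation, the prototype estimate being extended, Krylov's mixed L^p(L^q) inequality for A = -Δ on R^d, can be stated schematically as follows (this is a rough sketch; the precise formulation, the admissible range 2 ≤ q ≤ p < ∞, and the constants are as in the source):

```latex
% Stochastic convolution against a cylindrical Brownian motion W:
%   u(t) = \int_0^t e^{(t-s)\Delta} g(s)\, dW(s)
% Schematic Krylov-type mixed L^p(L^q) maximal regularity estimate:
\mathbb{E}\int_0^T \bigl\| (-\Delta)^{1/2} u(t) \bigr\|_{L^q(\mathbb{R}^d)}^{p}\, dt
  \;\le\; C\, \mathbb{E}\int_0^T \bigl\| g(t) \bigr\|_{L^q(\mathbb{R}^d)}^{p}\, dt .
```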
Kong, Jun
Liu, Chenhua
Jiang, Min
Wu, Jiao
Tian, Shengwei
Lai, Huicheng
Collaborative representation has been successfully applied to visual tracking, as it makes full use of all the PCA basis vectors in the target subspace for object representation. However, collaborative representation often retains redundant features that may degrade tracking performance. In this paper, a visual tracking algorithm is proposed that solves a generalized l^p-regularized (0 ≤ p ≤ 1) problem within a Bayesian inference framework to reduce redundant features. To solve the l^p-regularized minimization problem efficiently, the Generalized Soft-Thresholding (GST) operator is applied within an iterative Accelerated Proximal Gradient (APG) scheme. Moreover, the GST operator provides a unified framework for observing the effects of different sparsity levels on visual tracking. To show the feasibility of the l^p-regularizer, we choose the representative l^{0.5}-norm as the regularizer for the target coefficients and tune the corresponding sparsity to an appropriate level. Furthermore, we introduce an additional l^0-regularized tracker to observe the effect of excessive sparsity within the same framework. Experimental results on several challenging sequences demonstrate that the proposed tracker achieves more favorable performance in terms of accuracy measures, namely overlap ratio and center location error. (C) 2016 Elsevier B.V. All rights reserved.
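The scalar building block of the GST-within-APG step above, the proximal map of λ|t|^p, can be sketched as follows. The threshold formula and fixed-point iteration follow the generalized soft-thresholding scheme in the literature (Zuo et al.); the function name, tolerances, and iteration count are our own illustrative choices, not the paper's code.

```python
import math

def gst(x, lam, p, iters=50):
    """Generalized soft-thresholding (sketch): approximately solve
    argmin_t 0.5*(t - |x|)^2 + lam*|t|^p via the fixed-point
    iteration t <- |x| - lam*p*t^(p-1), with a closed-form
    threshold below which the minimiser is exactly 0.
    p = 1 recovers soft thresholding; p = 0 recovers hard thresholding."""
    if lam == 0:
        return x
    ax = abs(x)
    if p < 1:
        # threshold of the l^p proximal map (Zuo et al. style formula)
        t0 = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
        tau = t0 + lam * p * t0 ** (p - 1.0)
    else:
        tau = lam  # p = 1: ordinary soft threshold
    if ax <= tau:
        return 0.0
    t = ax
    for _ in range(iters):
        t = ax - lam * p * t ** (p - 1.0)
    return math.copysign(t, x)
```

For p = 1 this shrinks by exactly λ; for p = 0.5 (the regularizer chosen in the abstract) it shrinks less aggressively for large inputs while still zeroing small ones.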
Let f be a p-integrable function on K, a compact subset of R, and let mu be a sigma-finite positive measure. For p > 1, the one-sided L^p norm is defined as follows:
$$\|f\|_p \;=\; \max\left\{ \left(\int_{\{f>0\}} |f|^p \, d\mu\right)^{1/p},\ \left(\int_{\{f<0\}} |f|^p \, d\mu\right)^{1/p} \right\}.$$
We first show that the above definition is indeed a norm and then study best approximation in the one-sided L^p norms. Among other results, characterization and uniqueness of best approximation are discussed.
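For intuition, the one-sided norm can be evaluated for a discrete measure (a weighted finite set of sample points). The sketch below assumes the formula above, with the two integrals taken over {f > 0} and {f < 0} respectively; the names are illustrative.

```python
def one_sided_lp_norm(values, weights, p):
    """One-sided L^p norm of a function given by samples `values`
    under a discrete measure `weights`: the larger of the p-th
    moments of the positive and negative parts, raised to 1/p."""
    pos = sum(w * abs(v) ** p for v, w in zip(values, weights) if v > 0)
    neg = sum(w * abs(v) ** p for v, w in zip(values, weights) if v < 0)
    return max(pos, neg) ** (1.0 / p)
```

Note that the result is positively homogeneous, as a norm must be: scaling all values by c > 0 scales the output by c.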
Estaji, Ali Akbar
Abedi, Mostafa
Darghadam, Ahmad Mahmoudi
Let FPL := Frm(P(R); L). We show that if L is a P-frame then FPL is an aleph_0-self-injective ring. We prove that a zero-dimensional frame L is extremally disconnected if and only if FPL is a self-injective ring. Finally, it is shown that FPL is a Baer ring if and only if FPL is a continuous ring, if and only if FPL is a complete ring, if and only if FPL is a CS-ring. (C) 2019 Mathematical Institute Slovak Academy of Sciences
The notion of metric compactification was introduced by Gromov and later rediscovered by Rieffel. It has been mainly studied on proper geodesic metric spaces. We present here a generalization of the metric compactification that can be applied to infinite-dimensional Banach spaces. Thereafter we give a complete description of the metric compactification of the infinite-dimensional l^p spaces for all 1 ≤ p < ∞. We also give a full characterization of the metric compactification of infinite-dimensional Hilbert spaces.
In this paper, we use Maynard's method [5] to prove that for any positive integer m, there exist infinitely many integers n with at most two prime factors that can be written in the form p + 2^l in at least m + 1 different ways. (C) 2019 Elsevier Inc. All rights reserved.
Jiang, Shan
Fang, Shu-Cherng
Nie, Tiantian
Xing, Wenxun
In this paper, we study the linearly constrained l^p minimization problem with p ∈ (0, 1). Unlike known works in the literature that propose solving relaxed ε-KKT conditions, we introduce a scaled KKT condition that involves no relaxation of the optimality conditions. A gradient-descent-based algorithm that operates only on the positive entries of the variables is then proposed to find solutions satisfying the scaled KKT condition without invoking the nondifferentiability issue. A convergence proof and complexity analysis of the proposed algorithm are provided. Computational experiments support that the proposed algorithm achieves much better sparse recovery in reasonable computational time than state-of-the-art interior-point-based algorithms. (C) 2019 Elsevier B.V. All rights reserved.
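The key device, taking gradients only on entries away from zero so the nondifferentiability of |x_i|^p at 0 is never touched, can be illustrated with a simplified penalized-gradient sketch. This is not the paper's algorithm (which works with the linear constraints and a scaled KKT condition directly); all names, parameters, and thresholds are our own illustrative choices.

```python
import numpy as np

def lp_penalized_descent(A, b, lam=1e-3, p=0.5, iters=200, eps=1e-8):
    """Sketch: minimize ||Ax - b||^2 / 2 + lam * sum_i |x_i|^p by
    gradient descent, applying the penalty gradient only to entries
    bounded away from zero (where |x|^p is differentiable)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares start
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # smooth data term
        active = np.abs(x) > eps                  # skip the kink at 0
        pen = lam * p * np.sign(x) * np.maximum(np.abs(x), eps) ** (p - 1.0)
        x = x - step * (grad + active * pen)
        x[np.abs(x) < 1e-6] = 0.0                 # prune tiny entries
    return x
```

The pruning step mimics the sparsity-promoting effect of the l^p penalty; entries driven near zero are frozen there and excluded from further penalty gradients.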
The study of high-dimensional distributions is of interest in probability theory, statistics, and asymptotic convex geometry, where the object of interest is the uniform distribution on a convex set in high dimensions. The l^p-spaces and norms are of particular interest in this setting. In this paper we establish a limit theorem for distributions on l^p-spheres, conditioned on a rare event, in a high-dimensional geometric setting. As part of our proof, we establish a certain large deviation principle that is also relevant to the study of the tail behavior of random projections of l^p-balls in a high-dimensional Euclidean space.
In this work, we initiate the study of the geometry of the variable exponent sequence space l^{p(·)} when inf_n p(n) = 1. In 1931 Orlicz introduced the variable exponent sequence spaces l^{p(·)} while studying lacunary Fourier series. Since then, much progress has been made in the understanding of these spaces and of their continuous counterpart. In particular, it is well known that l^{p(·)} is uniformly convex if and only if the exponent is bounded away from 1 and infinity. The geometry of l^{p(·)} when either inf_n p(n) = 1 or sup_n p(n) = ∞ remains largely ill-understood. We state and prove a modular version of the geometric property of l^{p(·)} when inf_n p(n) = 1 known as uniform convexity in every direction. We present specific applications to fixed point theory. In particular, we obtain an analogue of the classical Kirk fixed point theorem in l^{p(·)} when inf_n p(n) = 1.
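For context, the modular underlying the "modular version" mentioned above is the standard Luxemburg-Nakano modular on l^{p(·)} (a standard definition, not specific to this paper):

```latex
% Modular, space, and Luxemburg norm for a variable exponent p(\cdot):
\rho(x) \;=\; \sum_{n=1}^{\infty} |x_n|^{p(n)}, \qquad
\ell^{p(\cdot)} \;=\; \bigl\{\, x : \rho(\lambda x) < \infty
  \text{ for some } \lambda > 0 \,\bigr\}, \qquad
\|x\|_{p(\cdot)} \;=\; \inf\bigl\{\, \lambda > 0 : \rho(x/\lambda) \le 1 \,\bigr\}.
```

When p(n) is bounded away from 1 and ∞, norm and modular convexity notions essentially coincide; the endpoint case inf_n p(n) = 1 is precisely where the modular formulation becomes essential.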
We study the L^p-mean distortion functionals for Sobolev self-homeomorphisms of the unit disk with prescribed boundary values and pointwise distortion function. Here we discuss aspects of the existence, regularity and uniqueness questions for minimisers, and discuss the diffeomorphic critical points of these functionals, presenting results we know and making some conjectures. Remarkably, smooth minimisers of the L^p-mean distortion functionals have inverses which are harmonic with respect to a metric induced by the distortion of the mapping. From this we are able to deduce that the complex conjugate Beltrami coefficient of a smooth minimiser is locally quasiregular, and we identify the quasilinear equation it solves. This has other consequences, such as a maximum principle for the distortion.
Wang, Qianqian
Gao, Quanxue
Gao, Xinbo
Nie, Feiping
Recently, many l^1-norm-based PCA approaches have been developed to improve the robustness of PCA. However, most existing approaches obtain the optimal projection matrix by maximizing an l^1-norm-based variance and do not directly minimize the reconstruction error, which is the true goal of PCA. Moreover, they are not rotationally invariant. To handle these problems, we propose a generalized robust metric learning for PCA, namely l^{2,p}-PCA, which employs the l^{2,p}-norm as the distance metric for the reconstruction error. The proposed method is not only robust to outliers but also retains PCA's desirable properties. For example, the solutions are the principal eigenvectors of a robust covariance matrix, and the low-dimensional representations are rotationally invariant. These properties are not shared by l^1-norm-based PCA methods. A new iterative algorithm is presented to solve l^{2,p}-PCA efficiently. Experimental results illustrate that the proposed method is more effective and robust than PCA, PCA-L1 greedy, PCA-L1 nongreedy, and HQ-PCA.
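The l^{2,p} reconstruction-error objective described above can be sketched as follows. The reweighting step that turns it into a "robust covariance" eigenproblem is only indicated schematically; the names and the specific weights are our own illustration, not the authors' algorithm.

```python
import numpy as np

def l2p_reconstruction_error(X, W, p):
    """l_{2,p} reconstruction error sum_i ||x_i - W W^T x_i||_2^p
    for data rows X (n x d) and an orthonormal basis W (d x k).
    Rotational invariance: replacing W by W R (R orthogonal, k x k)
    leaves the projector W W^T, hence the error, unchanged."""
    R = X - X @ W @ W.T
    return float(np.sum(np.linalg.norm(R, axis=1) ** p))

def robust_pca_step(X, W, p, eps=1e-8):
    """One illustrative reweighting step: weight each sample by
    ||r_i||^{p-2} (damping outliers for p < 2) and return the top-k
    eigenvectors of the reweighted covariance matrix."""
    r = np.linalg.norm(X - X @ W @ W.T, axis=1)
    w = np.maximum(r, eps) ** (p - 2.0)
    C = (X * w[:, None]).T @ X
    vals, vecs = np.linalg.eigh(C)          # ascending eigenvalues
    return vecs[:, -W.shape[1]:]            # top-k eigenvectors
```

For p < 2, samples with large residuals receive small weights, which is what gives the method its robustness to outliers relative to classical PCA (p = 2).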