Now showing items 1 - 12 of 12

  • Forward Stability of ResNet and Its Variants

    Zhang, Linan   Schaeffer, Hayden  

    The residual neural network (ResNet) is a popular deep network architecture which has the ability to obtain high-accuracy results on several image processing problems. In order to analyze the behavior and structure of ResNet, recent work has focused on establishing connections between ResNets and continuous-time optimal control problems. In this work, we show that the post-activation ResNet is related to an optimal control problem with differential inclusions and provide continuous-time stability results for the differential inclusion associated with ResNet. Motivated by the stability conditions, we show that alterations of either the architecture or the optimization problem can generate variants of ResNet which improve the theoretical stability bounds. In addition, we establish stability bounds for the full (discrete) network associated with two variants of ResNet, in particular, bounds on the growth of the features and a measure of the sensitivity of the features with respect to perturbations. These results also help to show the relationship between the depth, regularization, and stability of the feature space. Computational experiments on the proposed variants show that the accuracy of ResNet is preserved and that the accuracy seems to be monotone with respect to the depth and various corruptions.
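    A minimal NumPy sketch of a post-activation residual update of the form x <- relu(x + h*(W x + b)), tracking how the feature norm evolves with depth. The ReLU activation, the step size h, and the random weights are illustrative assumptions, not the exact architecture or variants analyzed in the paper.

      import numpy as np

      def relu(z):
          return np.maximum(z, 0.0)

      def resnet_forward(x, weights, biases, h=0.1):
          """Toy post-activation residual network: x <- relu(x + h * (W @ x + b)).

          Returns the final features and the feature norm after each layer,
          a quantity related to the feature-growth bounds discussed in the abstract.
          """
          norms = [np.linalg.norm(x)]
          for W, b in zip(weights, biases):
              x = relu(x + h * (W @ x + b))
              norms.append(np.linalg.norm(x))
          return x, norms

      # Example: 20 layers of width 8 with small random weights.
      rng = np.random.default_rng(0)
      d, depth = 8, 20
      weights = [0.1 * rng.standard_normal((d, d)) for _ in range(depth)]
      biases = [np.zeros(d) for _ in range(depth)]
      x_final, norms = resnet_forward(rng.standard_normal(d), weights, biases)
      print(norms[0], norms[-1])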
  • On the Convergence of the SINDy Algorithm

    Zhang, Linan   Schaeffer, Hayden  

    One way to understand time-series data is to identify the underlying dynamical system which generates it. This task can be done by selecting an appropriate model and a set of parameters which best fit the dynamics while providing the simplest representation (i.e., the smallest number of terms). One such approach is the sparse identification of nonlinear dynamics framework [6], which uses a sparsity-promoting algorithm that iterates between a partial least-squares fit and a thresholding (sparsity-promoting) step. In this work, we provide some theoretical results on the behavior and convergence of the algorithm proposed in [S. L. Brunton, J. L. Proctor, and J. N. Kutz, Proc. Nat. Acad. Sci. USA, 113 (2016), pp. 3932-3937]. In particular, we prove that the algorithm approximates local minimizers of an unconstrained ℓ0-penalized least-squares problem. From this, we provide sufficient conditions for general convergence, a rate of convergence, conditions for one-step recovery, and a recovery result with respect to the condition number and noise. Examples illustrate that the rates of convergence are sharp. In addition, our results extend to other algorithms related to the algorithm in [S. L. Brunton, J. L. Proctor, and J. N. Kutz, Proc. Nat. Acad. Sci. USA, 113 (2016), pp. 3932-3937], and provide theoretical verification of several observed phenomena.
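    A minimal sketch of the sequentially thresholded least-squares iteration described above, which alternates a least-squares fit on the currently active terms with hard thresholding of small coefficients. The threshold value, iteration count, and variable names are illustrative assumptions; the paper analyzes the algorithm itself, not this particular code.

      import numpy as np

      def stls(Theta, dXdt, threshold=0.1, max_iter=10):
          """Sequentially thresholded least squares (SINDy-style sketch).

          Theta : (m, p) library of candidate functions evaluated on the data
          dXdt  : (m, n) time derivatives of the n state variables
          Returns a sparse coefficient matrix Xi approximating a local
          minimizer of an l0-penalized least-squares problem.
          """
          Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
          for _ in range(max_iter):
              small = np.abs(Xi) < threshold          # hard-thresholding step
              Xi[small] = 0.0
              for k in range(dXdt.shape[1]):          # refit each state on its active set
                  active = ~small[:, k]
                  if active.any():
                      Xi[active, k] = np.linalg.lstsq(
                          Theta[:, active], dXdt[:, k], rcond=None)[0]
          return Xi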
  • Extracting Sparse High-Dimensional Dynamics from Limited Data

    Schaeffer, Hayden   Tran, Giang   Ward, Rachel  

  • Sparse model selection via integral terms

    Schaeffer, Hayden   McCalla, Scott G.  

  • Stability and error estimates of BV solutions to the Abel inverse problem

    Zhang, Linan   Schaeffer, Hayden  

    Reconstructing images from ill-posed inverse problems often utilizes total variation regularization in order to recover discontinuities in the data while also removing noise and other artifacts. Total variation regularization has been successful in recovering images for (noisy) Abel transformed data, where object boundaries and data support will lead to sharp edges in the reconstructed image. In this work, we analyze the behavior of BV solutions to the Abel inverse problem, deriving a priori estimates on the recovery. In particular, we provide L2-stability bounds on BV solutions to the Abel inverse problem. These bounds yield error estimates on images reconstructed from a proposed total variation regularized minimization problem.
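    A rough sketch of a total variation regularized reconstruction of the kind referred to above, using a smoothed 1-D TV term and plain gradient descent on 0.5*||A u - f||^2 + lam*TV(u). The forward matrix A stands in for a discretized Abel transform (not constructed here), and the weight lam, smoothing parameter eps, step size, and iteration count are placeholders, not values from the paper.

      import numpy as np

      def tv_smooth(u, eps=1e-3):
          """Smoothed 1-D total variation of u and its gradient."""
          du = np.diff(u)
          tv = np.sum(np.sqrt(du**2 + eps**2))
          g = du / np.sqrt(du**2 + eps**2)
          grad = np.zeros_like(u)
          grad[:-1] -= g               # d(du_i)/d(u_i)     = -1
          grad[1:] += g                # d(du_i)/d(u_{i+1}) = +1
          return tv, grad

      def tv_reconstruct(A, f, lam=0.05, step=1e-3, iters=5000):
          """Gradient descent on 0.5*||A u - f||^2 + lam * TV_eps(u).

          The step size must be small enough for the descent to be stable;
          in practice it would be tuned to the operator A.
          """
          u = np.zeros(A.shape[1])
          for _ in range(iters):
              _, gtv = tv_smooth(u)
              u -= step * (A.T @ (A @ u - f) + lam * gtv)
          return u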
  • An Accelerated Method for Nonlinear Elliptic PDE

    Schaeffer, Hayden   Hou, Thomas Y.  

  • Space-Time Regularization for Video Decompression

    Schaeffer, Hayden   Yang, Yi   Osher, Stanley  

    We consider the problem of reconstructing frames from a video which has been compressed using the video compressive sensing (VCS) method. In VCS data, each frame comes from first subsampling the original video data in space and then averaging the subsampled sequence in time. This results in a large linear system of equations whose inversion is ill-posed. We introduce a convex regularizer to invert the system, where the spatial component is regularized by the total variation seminorm, and the temporal component is regularized by enforcing sparsity on the difference between the spatial gradients of each frame. Since the regularizers are L1-like norms, the model can be written in the form of an easy-to-solve saddle point problem. The saddle point problem is solved by the primal-dual algorithm, whose implementation calls for nearly pointwise operations (i.e., no direct linear inversion) and has a simple parallel version. Results show that our model decompresses videos more accurately than other popular models, with PSNR gains of several dB.
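    A small sketch that only evaluates a space-time regularizer of the type described above on a video array: isotropic total variation of each frame plus an l1 penalty on the change of the spatial gradients between consecutive frames. The forward-difference discretization, the boundary handling, and the weight mu are assumptions, and the primal-dual solver itself is not reproduced here.

      import numpy as np

      def vcs_regularizer(video, mu=1.0):
          """Spatial TV per frame + sparsity of the temporal change of gradients.

          video : array of shape (T, H, W)
          """
          gx = np.diff(video, axis=2, append=video[:, :, -1:])   # horizontal differences
          gy = np.diff(video, axis=1, append=video[:, -1:, :])   # vertical differences
          tv_space = np.sum(np.sqrt(gx**2 + gy**2))               # isotropic TV of each frame
          dgx = np.diff(gx, axis=0)                               # gradient change between frames
          dgy = np.diff(gy, axis=0)
          return tv_space + mu * (np.sum(np.abs(dgx)) + np.sum(np.abs(dgy)))

      video = np.random.default_rng(1).random((5, 32, 32))
      print(vcs_regularizer(video, mu=0.5))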
  • Real-Time Adaptive Video Compression

    Schaeffer, Hayden   Yang, Yi   Zhao, Hongkai   Osher, Stanley  

  • Active Contours with Free Endpoints

    Schaeffer, Hayden   Vese, Luminita  

    Image segmentation methods with length regularized edge sets are known to have segments whose endpoints either terminate perpendicularly to the boundary of the domain, terminate at a triple junction where three segments connect, or terminate at a free endpoint where the segment does not connect to any other edges. However, level set based segmentation methods are only able to capture edge structures which contain the first two types of segments. In this work, we propose an extension to the level set based image segmentation method in order to detect free endpoint structures. By generalizing the curve representation used in Chan and Vese (Trans. Image Process. 10(2):266-277, 2001; Int. J. Comput. Vis. 50(3):271-293, 2002) to also include free endpoint structures, we are able to segment a larger class of edge types. Since our model is formulated using the level set framework, the curve evolution inherits useful properties such as the ability to change its topology by splitting and merging. The numerical method is provided as well as experimental results on both synthetic and real images.
  • Variational Dynamics of Free Triple Junctions

    Schaeffer, Hayden   Vese, Luminita  

    We propose a level set framework for representing triple junctions either with or without free endpoints. For triple junctions without free endpoints, our method uses two level set functions to represent the three segments that constitute the structure. For free triple junctions, we extend our method using the free curve work of Schaeffer and Vese (J Math Imaging Vis, 1-17, 2013) and Smereka (Phys D Nonlinear Phenom 138(3-4):282-301, 2000). For curves moving under length minimizing flows, it is well known that the endpoints either intersect perpendicularly to the boundary, do not intersect the boundary of the domain or the curve itself (free endpoints), or meet at triple junctions. Although many of these cases can be formulated within the level set framework, the case of free triple junctions does not appear in the literature. Therefore, the proposed free triple junction formulation completes the important curve structure representations within the level set framework. We derive an evolution equation for the dynamics of the triple junction under length and area minimizing flow. The resulting system of partial differential equations is both coupled and highly nonlinear, so the system is solved numerically using Sobolev preconditioned descent. Qualitative numerical experiments are presented on various triple junction and free triple junction configurations, as well as an example with a quadruple junction instability. Quantitative results show convergence of the preconditioned algorithm to the correct solutions.
  • A Low Patch-Rank Interpretation of Texture

    Schaeffer, Hayden   Osher, Stanley  

    We propose a novel cartoon-texture separation model using a sparse low-rank decomposition. Our texture model connects the separate ideas of robust principal component analysis (PCA) [E. J. Candes, X. Li, Y. Ma, and J. Wright, J. ACM, 58 (2011), 11], nonlocal methods [A. Buades, B. Coll, and J.-M. Morel, Multiscale Model. Simul., 4 (2005), pp. 490-530], [A. Buades, B. Coll, and J.-M. Morel, Numer. Math., 105 (2006), pp. 1-34], [G. Gilboa and S. Osher, Multiscale Model. Simul., 6 (2007), pp. 595-630], [G. Gilboa and S. Osher, Multiscale Model. Simul., 7 (2008), pp. 1005-1028], and cartoon-texture decompositions in an interesting way, taking advantage of each of these methodologies. We define our texture norm using the nuclear norm applied to patches in the image, interpreting the texture patches to be low-rank. In particular, this norm is easier to implement than many of the weak function space norms in the literature and is computationally faster than nonlocal methods since there is no explicit weight function to compute. This norm is used as an additional regularizer in several image recovery models. Using total variation as the cartoon norm and our new texture norm, we solve the proposed variational problems using the split Bregman algorithm [T. Goldstein and S. Osher, SIAM J. Imaging Sci., 2 (2009), pp. 323-343]. Since both of our regularizers are of L1 type, a double splitting provides a fast algorithm that is simple to implement. Based on experimental results, we demonstrate our algorithm's success on a wide range of textures. Also, our particular cartoon-texture decomposition model has the advantage of separating noise from texture. Our proposed texture norm is shown to better reconstruct texture for other applications such as denoising, deblurring, sparse reconstruction, and pattern regularization.
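    A short sketch of the two ingredients behind the texture norm described above: stacking image patches into a matrix and soft-thresholding its singular values, which is the proximal step for the nuclear norm. Non-overlapping patches and the threshold tau are simplifying assumptions, and the full split Bregman cartoon-texture iteration is not reproduced.

      import numpy as np

      def patch_matrix(img, p):
          """Stack non-overlapping p x p patches of img as columns of a matrix."""
          H, W = img.shape
          patches = [img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)]
          return np.column_stack(patches)

      def singular_value_threshold(M, tau):
          """Proximal operator of the nuclear norm: soft-threshold the singular values."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      img = np.random.default_rng(2).random((64, 64))
      M_low_rank = singular_value_threshold(patch_matrix(img, p=8), tau=1.0)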
  • Learning partial differential equations via data discovery and sparse optimization

    Schaeffer, Hayden  

    We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
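    A compact sketch of the pipeline described above: build a library of candidate terms from the data and its spatial derivatives, then fit the time derivative with a sparsity-promoting regression. The specific library terms, the finite-difference derivatives, and the use of scikit-learn's Lasso as the sparse solver are assumptions for illustration; the paper's feature construction and optimization method may differ.

      import numpy as np
      from sklearn.linear_model import Lasso

      def build_library(u, dx):
          """Candidate features built from u(x, t) and its spatial derivatives.

          u : array of shape (nt, nx); returns an (nt*nx, n_features) matrix
          and the corresponding term names.
          """
          ux = np.gradient(u, dx, axis=1)
          uxx = np.gradient(ux, dx, axis=1)
          feats = [np.ones_like(u), u, u**2, ux, uxx, u * ux]
          names = ["1", "u", "u^2", "u_x", "u_xx", "u*u_x"]
          return np.column_stack([f.ravel() for f in feats]), names

      def identify_pde(u, dx, dt, alpha=1e-3):
          """Fit u_t ~ Theta(u) c with an l1 penalty so only a few terms survive."""
          ut = np.gradient(u, dt, axis=0).ravel()
          Theta, names = build_library(u, dx)
          model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(Theta, ut)
          return {n: c for n, c in zip(names, model.coef_) if abs(c) > 1e-8}

    On data generated by, say, Burgers' equation u_t = -u*u_x + nu*u_xx, such a fit would ideally leave only the u*u_x and u_xx coefficients nonzero.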