Now showing items 1 - 16 of 104

  • Semi-Supervised Video Object Segmentation with Super-Trajectories

    Wang, Wenguan   Shen, Jianbing   Porikli, Fatih   Yang, Ruigang  

    We introduce a semi-supervised video segmentation approach based on an efficient video representation called the "super-trajectory". A super-trajectory corresponds to a group of compact point trajectories that exhibit consistent motion patterns, similar appearances, and close spatiotemporal relationships. We generate the compact trajectories using a probabilistic model, which handles occlusions and drifts effectively. To reliably group point trajectories, we adopt a density-peaks-based clustering algorithm that captures rich spatiotemporal relations among trajectories during clustering. We further incorporate two intuitive mechanisms, reverse tracking and object re-occurrence, to improve robustness and boost performance. Building on this video representation, our segmentation method is discriminative enough to accurately propagate the initial first-frame annotations onto the remaining frames. Extensive experiments on three challenging benchmarks demonstrate that our method extracts the target objects from complex backgrounds, and even re-identifies them after prolonged occlusions, producing high-quality video object segments. The code and results are available at: https://github.com/wenguanwang/SupertrajectorySeg.
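
    The grouping step named above, density-peaks clustering of point trajectories, can be sketched compactly. The following Python/NumPy fragment is only a minimal illustration of that clustering rule on generic trajectory descriptors; the descriptor layout, distance function, and thresholds are hypothetical placeholders rather than the authors' actual choices.

        import numpy as np

        def density_peaks_cluster(desc, dc, rho_min, delta_min):
            """Density-peaks clustering of trajectory descriptors (rows of `desc`)."""
            n = len(desc)
            dist = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=-1)

            # Local density of each trajectory: Gaussian kernel with cutoff distance dc.
            rho = np.exp(-(dist / dc) ** 2).sum(axis=1) - 1.0

            # delta: distance to the nearest trajectory of higher density.
            order = np.argsort(-rho)                  # indices in decreasing-density order
            delta = np.full(n, dist.max())
            nearest_higher = np.full(n, order[0])
            for rank, i in enumerate(order[1:], start=1):
                higher = order[:rank]
                j = higher[np.argmin(dist[i, higher])]
                delta[i], nearest_higher[i] = dist[i, j], j

            # Cluster centers: simultaneously high density and high delta.
            centers = np.flatnonzero((rho > rho_min) & (delta > delta_min))
            if order[0] not in centers:               # guarantee at least one cluster
                centers = np.append(centers, order[0])
            labels = np.full(n, -1)
            labels[centers] = np.arange(len(centers))

            # Assign the rest, in decreasing-density order, to the label of the
            # nearest higher-density trajectory.
            for i in order:
                if labels[i] == -1:
                    labels[i] = labels[nearest_higher[i]]
            return labels

        # Toy usage: each row is a hypothetical motion/appearance descriptor of one trajectory.
        labels = density_peaks_cluster(np.random.rand(60, 4), dc=0.3, rho_min=3.0, delta_min=0.4)
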
  • Feature Mask Network for Person Re-identification

    Ding, Guodong   Khan, Salman   Tang, Zhenmin   Porikli, Fatih  

  • Identity-Preserving Face Recovery from Stylized Portraits

    Shiri, Fatemeh   Yu, Xin   Porikli, Fatih   Hartley, Richard   Koniusz, Piotr  

  • Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey

    Naseer, Muzammal   Khan, Salman   Porikli, Fatih  

  • Real-time Deep Tracking via Corrective Domain Adaptation

    Li, Hanxi   Wang, Xinyu   Shen, Fumin   Li, Yi   Porikli, Fatih   Wang, Mingwen  

  • Robust Visual Tracking with Channel Attention and Focal Loss

    Li, Dongdong   Wen, Gongjian   Kuai, Yangliu   Zhu, Lingxiao   Porikli, Fatih  

  • Learning Padless Correlation Filters for Boundary-Effect Free Tracking

    Li, Dongdong   Wen, Gongjian   Kuai, Yangliu   Porikli, Fatih  

    Recently, discriminative correlation filters (DCFs) have achieved enormous popularity in the tracking community due to their high accuracy and beyond-real-time speed. Among the DCF variants, spatially regularized discriminative correlation filters (SRDCFs) are particularly effective at suppressing the boundary effects induced by circularly shifted training samples. However, SRDCF has two drawbacks that may bottleneck further performance improvement. First, SRDCF needs to construct an element-wise regularization weight map, which can lead to poor tracking performance without careful tuning. Second, SRDCF does not guarantee zero correlation filter values outside the target bounding box. These small but nonzero filter values away from the filter center hardly contribute to target localization but still induce boundary effects. To tackle these drawbacks, we revisit the standard SRDCF formulation and introduce padless correlation filters (PCFs), which completely remove boundary effects. Whereas SRDCF penalizes filter values with spatial regularization weights, PCF directly enforces zero filter values outside the target bounding box with a binary mask. Experimental results on the OTB2013, OTB2015 and VOT2016 datasets demonstrate that PCF achieves real-time frame rates and favorable tracking performance compared with state-of-the-art trackers.
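
    The core constraint, zero filter values outside the target box enforced by a binary mask, can be approximated very simply. The sketch below assumes a single-channel filter learned by projected gradient descent on a ridge-regression objective; the actual PCF optimization is not specified here and may differ.

        import numpy as np
        from numpy.fft import fft2, ifft2

        def train_padless_filter(x, y, mask, lam=1e-2, lr=1e-3, n_iter=200):
            """Projected gradient descent for a single-channel correlation filter whose
            values are forced to zero outside the target box (the binary `mask`).

            x : training patch (H x W), y : desired Gaussian response (H x W),
            mask : H x W array, 1 inside the target bounding box and 0 elsewhere.
            The step size `lr` may need tuning to the patch energy.
            """
            X, Y = fft2(x), fft2(y)
            h = np.zeros_like(x, dtype=float)
            for _ in range(n_iter):
                R = fft2(h) * X                              # circular response in the Fourier domain
                grad = np.real(ifft2(np.conj(X) * (R - Y))) + lam * h
                h = mask * (h - lr * grad)                   # gradient step, then project onto the mask
            return h

        def detect(h, z):
            """Response map of the learned filter on a search patch z; the peak gives the target shift."""
            return np.real(ifft2(fft2(h) * fft2(z)))
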
  • A Cascaded Convolutional Neural Network for Single Image Dehazing

    Li, Chongyi   Guo, Jichang   Porikli, Fatih   Fu, Huazhu   Pang, Yanwei  

    Images captured in outdoor scenes usually suffer from low contrast and limited visibility due to suspended atmospheric particles, which directly degrades the quality of photographs. Although numerous image dehazing methods have been proposed, effective hazy image restoration remains a challenging problem. Existing learning-based methods usually predict the medium transmission with convolutional neural networks (CNNs) but ignore the key global atmospheric light. In contrast to previous learning-based methods, we propose a flexible cascaded CNN for single hazy image restoration that jointly estimates the medium transmission and the global atmospheric light with two task-driven subnetworks. Specifically, the medium transmission estimation subnetwork is inspired by densely connected CNNs, while the global atmospheric light estimation subnetwork is a lightweight CNN. The two subnetworks are cascaded by sharing common features. Finally, with the estimated model parameters, the haze-free image is obtained by inverting the atmospheric scattering model, which yields more accurate and effective restoration. Qualitative and quantitative results on synthetic and real-world hazy images demonstrate that the proposed method effectively removes haze and outperforms several state-of-the-art dehazing methods.
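
    The final restoration step inverts the atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)). A minimal sketch of that inversion, assuming the transmission map t and the atmospheric light A have already been predicted by the two subnetworks:

        import numpy as np

        def recover_haze_free(I, t, A, t_min=0.1):
            """Invert the atmospheric scattering model  I = J * t + A * (1 - t).

            I : hazy image, H x W x 3, floats in [0, 1]
            t : medium transmission predicted by the transmission subnetwork, H x W
            A : global atmospheric light predicted by the light subnetwork, length-3 array
            """
            t = np.clip(t, t_min, 1.0)[..., None]   # lower-bound t to avoid amplifying noise
            J = (I - A) / t + A                     # equivalent to (I - A * (1 - t)) / t
            return np.clip(J, 0.0, 1.0)
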
  • Quadruplet Network with One-Shot Learning for Fast Visual Object Tracking

    Dong, Xingping   Shen, Jianbing   Wu, Dongming   Guo, Kan   Jin, Xiaogang   Porikli, Fatih  

  • Underwater scene prior inspired deep underwater image and video enhancement

    Li, Chongyi   Anwar, Saeed   Porikli, Fatih  

  • Video Saliency Detection via Sparsity-based Reconstruction and Propagation

    Cong, Runmin   Lei, Jianjun   Fu, Huazhu   Porikli, Fatih   Huang, Qingming   Hou, Chunping  

  • One-shot Action Localization by Learning Sequence Matching Network

    Yang, Hongtao   He, Xuming   Porikli, Fatih  

    Learning-based temporal action localization methods require vast amounts of training data. However, such large-scale video datasets, which are expected to capture the dynamics of every action category, are not only very expensive to acquire but also impractical, simply because the number of possible action classes is essentially unbounded. This poses a critical restriction on current methods when training samples are few and rare (e.g., when the target action classes are not present in publicly available datasets). To address this challenge, we formulate a new example-based action detection problem in which only a few examples are provided, and the goal is to find the occurrences of these examples in an untrimmed video sequence. Towards this objective, we introduce a novel one-shot action localization method that alleviates the need for large amounts of training samples. Our solution adopts the one-shot learning technique of Matching Networks and utilizes correlations to mine and localize actions of previously unseen classes. We evaluate our one-shot action localization method on the THUMOS14 and ActivityNet datasets, whose configurations we modified to fit our one-shot problem setup.
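
    As a rough illustration of the matching idea (not the authors' exact network), the sketch below slides a window over clip embeddings of an untrimmed video and scores each window by cosine similarity against a single support-example embedding; the embedding function itself is assumed to be given.

        import numpy as np

        def localize_one_shot(video_emb, example_emb, win_len, thresh=0.7):
            """Score sliding windows of clip embeddings against one support example.

            video_emb   : (T, d) array, one embedding per short clip of the untrimmed video
            example_emb : (d,) embedding of the single trimmed example action
            win_len     : number of consecutive clips per candidate window
            Returns [(start, end, score)] for windows whose cosine similarity exceeds `thresh`.
            """
            e = example_emb / np.linalg.norm(example_emb)
            detections = []
            for s in range(len(video_emb) - win_len + 1):
                w = video_emb[s:s + win_len].mean(axis=0)      # average-pool the window
                score = float(w @ e / (np.linalg.norm(w) + 1e-8))
                if score > thresh:
                    detections.append((s, s + win_len, score))
            return detections
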
  • A Deeper Look at Power Normalizations

    Koniusz, Piotr   Zhang, Hongguang   Porikli, Fatih  

    Power Normalizations (PN) are useful non-linear operators for Bag-of-Words data representations, as they tackle problems such as feature imbalance. In this paper, we reconsider these operators in the deep learning setting by introducing a novel layer that implements PN for non-linear pooling of feature maps. Specifically, using a kernel formulation, our layer combines the feature vectors and their respective spatial locations in the feature maps produced by the last convolutional layer of a CNN. Linearizing such a kernel yields a positive definite matrix capturing the second-order statistics of the feature vectors, to which the PN operators are applied. We study two types of PN functions, namely (i) MaxExp and (ii) Gamma, addressing their role and meaning in the context of non-linear pooling. We also provide a probabilistic interpretation of these operators and derive surrogates with well-behaved gradients for end-to-end CNN learning. We apply our theory in practice by implementing the PN layer on a ResNet-50 model and report experiments on four benchmarks for fine-grained recognition, scene recognition, and material classification. Our results demonstrate state-of-the-art performance across all these tasks.
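
    For reference, second-order pooling followed by the two PN operators named above can be written in a few lines. The sketch below follows the commonly used definitions MaxExp(p) = 1 - (1 - p)^eta and Gamma(p) = p^gamma, applied element-wise to a normalized autocorrelation matrix; the normalization and the hyperparameter values are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def second_order_pn(feats, eta=10.0, gamma=0.5, mode="maxexp"):
            """Second-order pooling of CNN features followed by a Power Normalization.

            feats : (N, d) array of non-negative feature vectors (e.g. post-ReLU activations
                    from the last convolutional layer, one row per spatial location).
            """
            M = feats.T @ feats / feats.shape[0]     # second-order statistic (d x d, PSD)
            M = M / (M.max() + 1e-12)                # scale entries into [0, 1] (illustrative choice)
            if mode == "maxexp":
                return 1.0 - (1.0 - M) ** eta        # MaxExp operator
            return M ** gamma                        # Gamma (power) operator
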
  • Learning Target-Aware Correlation Filters for Visual Tracking

    Li, Dongdong   Wen, Gongjian   Kuai, Yangliu   Xiao, Jingjing   Porikli, Fatih  

  • Video Representation Learning Using Discriminative Pooling

    Wang, Jue   Cherian, Anoop   Porikli, Fatih   Gould, Stephen  

    Popular deep models for action recognition in videos generate independent predictions for short clips, which are then pooled heuristically to assign an action label to the full video segment. As not all frames may characterize the underlying action (indeed, many are common across multiple actions), pooling schemes that impose equal importance on all frames can be unfavorable. To tackle this problem, we propose discriminative pooling, based on the notion that among the deep features generated from all short clips, there is at least one that characterizes the action. To this end, we learn a (nonlinear) hyperplane that separates this unknown, yet discriminative, feature from the rest. Applying multiple instance learning in a large-margin setup, we use the parameters of this separating hyperplane as a descriptor for the full video segment. Since these parameters are directly related to the support vectors in a max-margin framework, they serve as robust representations for pooling the features. We formulate a joint objective, and an efficient solver, that learns these hyperplanes per video together with the corresponding action classifiers over the hyperplanes. Our pooling scheme is end-to-end trainable within a deep framework. We report results on three benchmark datasets spanning a variety of challenges and demonstrate state-of-the-art performance across these tasks.
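
    A simplified, non-end-to-end sketch of the pooling idea, assuming scikit-learn: fit a linear max-margin separator between one video's clip features (the positive bag) and a fixed pool of background features, and use the hyperplane parameters as the video descriptor. The shared negative pool and the off-the-shelf SVM solver are stand-ins for the paper's joint multiple-instance formulation.

        import numpy as np
        from sklearn.svm import LinearSVC

        def discriminative_pool(clip_feats, negative_pool, C=1.0):
            """Return the separating-hyperplane parameters as a fixed-length video descriptor.

            clip_feats    : (n_clips, d) deep features of one video's short clips
            negative_pool : (n_neg, d) background/"rest" features shared across videos
            """
            X = np.vstack([clip_feats, negative_pool])
            y = np.concatenate([np.ones(len(clip_feats)), np.zeros(len(negative_pool))])
            svm = LinearSVC(C=C).fit(X, y)
            # The hyperplane normal (plus bias) acts as the pooled representation of the video.
            return np.concatenate([svm.coef_.ravel(), svm.intercept_])
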
  • Identity-preserving Face Recovery from Portraits

    Shiri, Fatemeh   Yu, Xin   Porikli, Fatih   Hartley, Richard   Koniusz, Piotr  

    Recovering latent photorealistic faces from their artistic portraits aids human perception and facial analysis. However, a recovery process that preserves identity is challenging because the fine details of real faces can be distorted or lost in stylized images. In this paper, we present a new Identity-preserving Face Recovery from Portraits (IFRP) method to recover latent photorealistic faces from unaligned stylized portraits. Our IFRP method consists of two components: a Style Removal Network (SRN) and a Discriminative Network (DN). The SRN is designed to map feature maps of stylized images to the feature maps of the corresponding photorealistic faces. By embedding spatial transformer networks into the SRN, our method automatically compensates for misalignments of stylized faces and outputs aligned realistic face images. The role of the DN is to enforce that recovered faces are similar to authentic faces. To ensure identity preservation, we encourage the recovered and ground-truth faces to share similar visual features via a distance measure that compares features of the recovered and ground-truth faces extracted from a pre-trained VGG network. We evaluate our method on a large-scale synthesized dataset of real and stylized face pairs and attain state-of-the-art results. In addition, our method can recover photorealistic faces from previously unseen stylized portraits, original paintings, and human-drawn sketches.
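
    The identity-preserving term compares deep features of recovered and ground-truth faces. A minimal PyTorch sketch of such a perceptual distance follows, assuming a recent torchvision; the choice of VGG-19, the layer cut-off, and the mean-squared distance are illustrative assumptions rather than necessarily the paper's exact configuration.

        import torch
        import torch.nn.functional as F
        from torchvision import models

        # Feature extractor: the first 16 layers of a pre-trained VGG-19 (an illustrative cut-off).
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)

        def identity_loss(recovered, ground_truth):
            """L2 distance between VGG feature maps of recovered and ground-truth faces.

            Both inputs are (B, 3, H, W) tensors already normalized for the VGG network.
            """
            return F.mse_loss(vgg(recovered), vgg(ground_truth))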