Abstract

l1-based regularization methods such as the Lasso and the Dantzig Selector succeed in two respects. First, the sparsity induced by the l1 penalty accords with the underlying nature of high-dimensional data; second, convexity makes computation feasible in high dimensions. James and Radchenko developed an algorithm to solve the Dantzig Selector for generalized linear models, and Fan extended this framework to general convex loss functions. To fill the gap in theoretical support for this framework, we derive non-asymptotic error bounds under the logistic loss. We term the resulting classifier the High Confidence Set Selector (HCS). An implicit assumption of HCS is that the data are measured precisely; in practice, however, data are inevitably contaminated by measurement error. To address this challenge, we introduce a new methodology, abbreviated MHCS, that accounts for measurement error, and we derive non-asymptotic error bounds for it as well. Our simulation study shows that MHCS outperforms competing classifiers, especially as measurement error grows, providing numerical support for our theory. Owing to their inherently linear structure, HCS and MHCS are versatile and can be combined with state-of-the-art techniques such as word vectors, deep networks, and transfer learning.
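For context, a standard formulation of the generalized Dantzig selector with logistic loss reads as follows; the notation here is illustrative and may differ from the paper's exact construction. With labels y_i in {-1, +1} and covariates x_i, it selects the sparsest coefficient vector whose score vector is uniformly small:

\hat{\beta} = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \|\beta\|_1
\quad \text{subject to} \quad
\|\nabla \ell_n(\beta)\|_\infty \le \lambda,
\qquad
\ell_n(\beta) = \frac{1}{n} \sum_{i=1}^n \log\!\bigl(1 + e^{-y_i x_i^\top \beta}\bigr),

where \lambda is a tuning parameter controlling the radius of the constraint region. The feasible set \{\beta : \|\nabla \ell_n(\beta)\|_\infty \le \lambda\} is plausibly the "high confidence set" behind the method's name.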
