Algorithm-Dependent Generalization Bounds for Multi-Task Learning.

Tongliang Liu, Dacheng Tao, Mingli Song, Stephen J. Maybank

Abstract

Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order O(1/n), where n is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order O(1/T), where T is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for predicting, which enables the learning algorithms to generalize fast and correctly from a few examples.
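The abstract describes MTL predictors built from a common parameter shared across tasks, where the other tasks act as a regularizer for any one task. As an illustrative sketch only (not the paper's exact algorithm), the classic regularized-MTL formulation with predictors w_t = w0 + v_t captures this idea: the shared parameter w0 is fit on all tasks jointly, while each task-specific offset v_t is penalized toward it. All function names and the alternating solver below are assumptions for illustration.

```python
import numpy as np

def mtl_shared_ridge(Xs, ys, lam_shared=0.1, lam_task=1.0, iters=200):
    """Alternating ridge solver for task predictors w_t = w0 + v_t.

    Minimizes (a hypothetical illustrative objective, not the paper's):
        sum_t ||X_t (w0 + v_t) - y_t||^2 / n_t
        + lam_task * sum_t ||v_t||^2 + lam_shared * ||w0||^2
    Penalizing v_t keeps each task's predictor close to the shared w0,
    so the other tasks effectively regularize each individual task.
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    w0 = np.zeros(d)
    vs = [np.zeros(d) for _ in range(T)]
    for _ in range(iters):
        # Update each task-specific offset v_t with w0 fixed (ridge solve).
        for t in range(T):
            X, y = Xs[t], ys[t]
            n = len(y)
            r = y - X @ w0  # residual the offset must explain
            A = X.T @ X / n + lam_task * np.eye(d)
            vs[t] = np.linalg.solve(A, X.T @ r / n)
        # Update the shared parameter w0 using all tasks, with v_t fixed.
        A = sum(X.T @ X / len(y) for X, y in zip(Xs, ys)) + lam_shared * np.eye(d)
        b = sum(X.T @ (y - X @ v) / len(y) for X, y, v in zip(Xs, ys, vs))
        w0 = np.linalg.solve(A, b)
    return w0, vs
```

When the tasks genuinely share feature structure, the jointly fitted w0 averages information over all T tasks, which is the mechanism behind the O(1/T) average-performance bound discussed in the abstract.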

Year:  2016        PMID: 27019472     DOI: 10.1109/TPAMI.2016.2544314

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles:  4 in total

1.  Algorithmic Stability and Generalization of an Unsupervised Feature Selection Algorithm.

Authors:  Xinxing Wu; Qiang Cheng
Journal:  Adv Neural Inf Process Syst       Date:  2021-12

2.  TEMImageNet training library and AtomSegNet deep-learning models for high-precision atom segmentation, localization, denoising, and deblurring of atomic-resolution images.

Authors:  Ruoqian Lin; Rui Zhang; Chunyang Wang; Xiao-Qing Yang; Huolin L Xin
Journal:  Sci Rep       Date:  2021-03-08       Impact factor: 4.379

3.  Multi-Task Learning Based on Stochastic Configuration Neural Networks.

Authors:  Xue-Mei Dong; Xudong Kong; Xiaoping Zhang
Journal:  Front Bioeng Biotechnol       Date:  2022-08-04

4.  Representation Learning for Fine-Grained Change Detection. (Review)

Authors:  Niall O'Mahony; Sean Campbell; Lenka Krpalkova; Anderson Carvalho; Joseph Walsh; Daniel Riordan
Journal:  Sensors (Basel)       Date:  2021-06-30       Impact factor: 3.576

