Open Access Article

A Review on Multi-Task clustering with self-adaptive and Model Relation Learning

C.Rupakumar¹, S. Girinath²

Section: Review Paper, Product Type: Journal Paper
Volume-07 , Issue-06 , Page no. 106-108, Mar-2019

CrossRef-DOI:   https://doi.org/10.26438/ijcse/v7si6.106108

Online published on Mar 20, 2019

Copyright © C.Rupakumar, S. Girinath . This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: C.Rupakumar, S. Girinath, “A Review on Multi-Task clustering with self-adaptive and Model Relation Learning,” International Journal of Computer Sciences and Engineering, Vol.07, Issue.06, pp.106-108, 2019.

MLA Style Citation: C.Rupakumar, S. Girinath. "A Review on Multi-Task clustering with self-adaptive and Model Relation Learning." International Journal of Computer Sciences and Engineering 07.06 (2019): 106-108.

APA Style Citation: C.Rupakumar, S. Girinath, (2019). A Review on Multi-Task clustering with self-adaptive and Model Relation Learning. International Journal of Computer Sciences and Engineering, 07(06), 106-108.

BibTex Style Citation:
@article{Girinath_2019,
author = {C.Rupakumar, S. Girinath},
title = {A Review on Multi-Task clustering with self-adaptive and Model Relation Learning},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {March 2019},
volume = {07},
issue = {06},
month = {3},
year = {2019},
issn = {2347-2693},
pages = {106-108},
url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=878},
doi = {https://doi.org/10.26438/ijcse/v7i6.106108},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v7i6.106108
UR - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=878
TI - A Review on Multi-Task clustering with self-adaptive and Model Relation Learning
T2 - International Journal of Computer Sciences and Engineering
AU - C.Rupakumar
AU - S. Girinath
PY - 2019
DA - 2019/03/20
PB - IJCSE, Indore, INDIA
SP - 106
EP - 108
IS - 06
VL - 07
SN - 2347-2693
ER -

           

Abstract

Multi-task clustering improves the clustering performance of each task by transferring knowledge among related tasks. An important aspect of multi-task clustering is assessing task relatedness; however, to our knowledge, only two previous works have assessed task relatedness, and both have limitations. In this paper, we propose two multi-task clustering methods for partially related tasks: the self-adapted multi-task clustering (SAMTC) method and the manifold regularized coding multi-task clustering (MRCMTC) method, which can automatically identify and transfer related instances among the tasks, thereby avoiding negative transfer. Both SAMTC and MRCMTC construct a similarity matrix for each target task by exploiting useful information from the source tasks through related-instance transfer, and both adopt spectral clustering to obtain the final clustering results; they differ in how they learn the related instances from the source tasks.
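The pipeline described in the abstract ends with spectral clustering applied to a per-task similarity matrix. As a minimal sketch of that final step only (not the paper's SAMTC/MRCMTC implementation), the following NumPy code builds an RBF similarity matrix, forms the graph Laplacian, and bipartitions the data by the sign of the Fiedler vector; the function name and RBF bandwidth are illustrative assumptions.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split the rows of X into two clusters via the Fiedler vector
    of the graph Laplacian of an RBF similarity matrix (illustrative
    sketch, not the paper's method)."""
    # Pairwise squared distances -> RBF (Gaussian) similarity matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W
    # Eigenvectors in ascending eigenvalue order; the second one
    # (the Fiedler vector) encodes the two-way partition
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

# Two well-separated blobs: the sign pattern of the Fiedler vector
# recovers the grouping (up to a label flip).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)),
               rng.normal(5.0, 0.1, (5, 2))])
labels = spectral_bipartition(X)
```

For k > 2 clusters, the standard generalization takes the first k eigenvectors as a low-dimensional embedding and runs k-means on its rows.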

Key-Words / Index Term

Multi-task Clustering, Partially Related Tasks, Negative Transfer, Instance Transfer

References

[1] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, 2010.
[2] J. Zhang and C. Zhang, “Multitask Bregman clustering,” in Proc. 24th AAAI Conf. Artif. Intell., 2010, pp. 655–660.
[3] X. Zhang and X. Zhang, “Smart multi-task Bregman clustering and multi-task kernel clustering,” in Proc. 27th AAAI Conf. Artif. Intell., 2013, pp. 1034–1040.
[4] X. Zhang, “Convex discriminative multitask clustering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 1, pp. 28–40, 2015.
[5] J. Lin, “Divergence measures based on the Shannon entropy,” IEEE Trans. Inform. Theory, vol. 37, no. 1, pp. 145–151, 1991.
[6] J. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf, “Correcting sample selection bias by unlabeled data,” in Proc. 20th Adv. Neural Inform. Process. Syst., 2006, pp. 601–608.
[7] X. Zhang, X. Zhang, and H. Liu, “Self-adapted multi-task clustering,” in Proc. 25th Int. Joint Conf. Artif. Intell., 2016, pp. 2357–2363.
[8] R. Caruana, “Multitask learning,” Mach. Learn., vol. 28, no. 1, pp. 41–75, 1997.
[9] R. K. Ando and T. Zhang, “A framework for learning predictive structures from multiple tasks and unlabeled data,” J. Mach. Learn. Res., vol. 6, pp. 1817–1853, 2005.
[10] A. Argyriou, T. Evgeniou, and M. Pontil, “Multi-task feature learning,” in Proc. 20th Adv. Neural Inform. Process. Syst., 2006, pp. 41–48.
[11] J. Chen, L. Tang, J. Liu, and J. Ye, “A convex formulation for learning shared structures from multiple tasks,” in Proc. 26th Int. Conf. Mach. Learn., 2009, pp. 137–144.
[12] T. Evgeniou and M. Pontil, “Regularized multi–task learning,” in Proc. 10th ACM SIGKDD Int. Conf. Knowl. Disc. Data Min., 2004, pp. 109–117.
[13] C. A. Micchelli and M. Pontil, “Kernels for multi–task learning,” in Proc. 18th Adv. Neural Inform. Process. Syst., 2004.
[14] T. Evgeniou, C. A. Micchelli, and M. Pontil, “Learning multiple tasks with kernel methods,” J. Mach. Learn. Res., vol. 6, pp. 615–637, 2005.
[15] A. Barzilai and K. Crammer, “Convex multi-task learning by clustering,” in Proc. 18th Int. Conf. Artif. Intell. and Stat., 2015, pp. 65–73.
[16] N. D. Lawrence and J. C. Platt, “Learning to learn with the informative vector machine,” in Proc. 21st Int. Conf. Mach. Learn., 2004.
[17] E. V. Bonilla, K. M. A. Chai, and C. K. I. Williams, “Multitask Gaussian process prediction,” in Proc. 21st Adv. Neural Inform. Process. Syst., 2007, pp. 153–160.
[18] B. Zadrozny, “Learning and evaluating classifiers under sample selection bias,” in Proc. 21st Int. Conf. Mach. Learn., 2004, pp. 114–121.
[19] W. Dai, Q. Yang, G. Xue, and Y. Yu, “Boosting for transfer learning,” in Proc. 24th Int. Conf. Mach. Learn., 2007, pp. 193–200.
[20] W. Dai, G. Xue, Q. Yang, and Y. Yu, “Transferring naive Bayes classifiers for text classification,” in Proc. AAAI Conf. Artif. Intell., 2007, pp. 540–545.