Matlab implementation for the AAAI-21 paper: Hierarchical Multiple Kernel Clustering
Abstract
Current multiple kernel clustering algorithms compute a partition from a consensus kernel or graph learned from the pre-specified ones, while the emerging late-fusion methods first construct multiple partitions from each kernel separately and then obtain a consensus one from them. However, both approaches directly distill the clustering information from kernels or graphs of size $\mathbb{R}^{n\times n}$ into partition matrices of size $\mathbb{R}^{n\times k}$, where $n$ and $k$ are the numbers of samples and clusters, respectively. This sudden drop in dimension results in the loss of details that are advantageous for clustering. In this paper, we provide a brief insight into the aforementioned issue and propose a hierarchical approach that performs clustering while maximally preserving these details. Specifically, we gradually group samples into $\{c_t\}_{t=1}^{s}$ clusters, generating a sequence of intermediary matrices of size $\mathbb{R}^{n\times c_t}$, where $n>c_1>\cdots>c_s>k$. A consensus partition of size $\mathbb{R}^{n\times k}$ is simultaneously learned and conversely guides the construction of the intermediary matrices. This cyclic process is modeled as a unified objective, and an alternate optimization algorithm is designed to solve it. In addition, the proposed method is validated and compared with other representative multiple kernel clustering algorithms on benchmark datasets, demonstrating state-of-the-art performance by a large margin.
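To make the "gradual dimension drop" concrete, here is a minimal, language-agnostic numpy sketch (not the paper's algorithm, which is implemented in the MATLAB files of this repo). It uses plain spectral truncation as a stand-in for the paper's intermediary-matrix and consensus-partition updates; the function names and the choice of eigendecomposition are illustrative assumptions only.

```python
import numpy as np

def top_eigvecs(M, c):
    # Leading c eigenvectors of a symmetric matrix
    # (np.linalg.eigh returns eigenvalues in ascending order).
    _, V = np.linalg.eigh(M)
    return V[:, -c:]

def hierarchical_partition(K, sizes):
    """Distill an n x n kernel into an n x k partition gradually.

    sizes: decreasing list [c_1, ..., c_s, k] with n > c_1 > ... > c_s > k.
    Stage t builds an intermediary matrix H_t of size n x c_t from the
    similarity implied by the previous stage, so the dimension shrinks
    step by step instead of in one sudden drop. Hypothetical sketch:
    the paper instead learns all stages jointly with a consensus
    partition in a unified objective.
    """
    H = top_eigvecs(K, sizes[0])         # first intermediary, n x c_1
    for c in sizes[1:]:
        H = top_eigvecs(H @ H.T, c)      # coarsen: n x c_prev -> n x c
    return H                             # final n x k partition matrix
```

Each stage only discards the dimensions the previous, finer stage already deemed least informative, which is the intuition behind preserving clustering details across the sequence $n>c_1>\cdots>c_s>k$.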
Structure

- eval/
- fig/
- demo.m
- iterative_multiple_kernel_clustering.m
- cal_obj.m
- kcenter.m
- knorm.m
- my_kernel_kmeans.m
- update_H.m
- update_HP_nor.m
- update_HP_rev.m
- update_beta.m
- update_gamma.m
If you find our code useful, please cite:
@inproceedings{DBLP:conf/aaai/JiyuanAAAI21Hierarchical,
  author    = {Jiyuan Liu and
               Xinwang Liu and
               Yuexiang Yang and
               Siwei Wang and
               Sihang Zhou},
  title     = {Hierarchical Multiple Kernel Clustering},
  booktitle = {Proceedings of the Thirty-Fifth {AAAI} Conference on Artificial Intelligence (AAAI-21), Virtually, February 2-9, 2021},
  year      = {2021},
}