Scalable and interpretable kernel methods based on random Fourier features

Publisher

Universidade Federal de São Carlos

Abstract

Kernel methods are a class of statistical machine learning models based on positive semidefinite kernels, which serve as a measure of similarity between data features. Examples of kernel methods include kernel ridge regression, support vector machines, and smoothing splines. Despite their widespread use, kernel methods face two main challenges. First, because they operate on all pairs of observations, they require large amounts of memory and computation, making them unsuitable for large datasets. This issue can be addressed by approximating the kernel function via random Fourier features or by using preconditioners. Second, most commonly used kernels treat all features as equally relevant, regardless of their actual impact on the prediction. This decreases interpretability, as the influence of irrelevant features is not mitigated. In this work, we extend the random Fourier features framework to Automatic Relevance Determination (ARD) kernels and propose a new kernel method that integrates the optimization of kernel parameters into training. The kernel parameters reduce the effect of irrelevant features and can be used for post hoc variable selection. The proposed method is evaluated on several datasets and compared to conventional machine learning algorithms.
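To illustrate the idea described in the abstract, here is a minimal sketch of a random Fourier features map for an ARD Gaussian (RBF) kernel, where each input dimension gets its own lengthscale so that large lengthscales suppress irrelevant features. This is not the thesis's implementation; the function name, the Gaussian kernel choice, and all parameter values are assumptions for illustration only.

```python
import numpy as np

def rff_features(X, n_features, lengthscales, rng):
    """Random Fourier feature map approximating an ARD Gaussian kernel.

    k(x, y) = exp(-0.5 * sum_d ((x_d - y_d) / lengthscales[d])**2)
    is approximated by z(x) @ z(y), where z is the map below.
    Each input dimension d is scaled by 1 / lengthscales[d], so a large
    lengthscale damps that feature's contribution (the ARD idea).
    """
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density (a Gaussian),
    # with per-dimension scaling by the ARD lengthscales.
    W = rng.standard_normal((d, n_features)) / lengthscales[:, None]
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
ls = np.array([1.0, 1.0, 1.0])  # illustrative lengthscales

Z = rff_features(X, 5000, ls, rng)
K_approx = Z @ Z.T  # approximate kernel matrix via inner products

# Exact ARD Gaussian kernel matrix, for comparison.
sq = ((X[:, None, :] - X[None, :, :]) / ls) ** 2
K_exact = np.exp(-0.5 * sq.sum(axis=2))
```

With enough random features, `K_approx` is close to `K_exact`, but downstream models (e.g. ridge regression on `Z`) scale with the number of features rather than with the number of observation pairs, which is the source of the scalability gain.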

Citation

OTTO, Mateus Piovezan. Scalable and interpretable kernel methods based on random Fourier features. 2023. Dissertação (Mestrado em Estatística) – Universidade Federal de São Carlos, São Carlos, 2023. Disponível em: https://repositorio.ufscar.br/handle/20.500.14289/17579.

Creative Commons License

Except where otherwise noted, this item's license is described as Attribution 3.0 Brazil.