[ABE-L] Fwd: PPGEST-UnB Seminar Series

Associação Estatística secretaria at redeabe.org.br
Tue Nov 29 10:03:05 -03 2016


---------- Forwarded message ----------
From: Cibele <cibeleqs at unb.br>
Date: 2016-11-29 9:49 GMT-02:00
Subject: PPGEST-UnB Seminar Series
To: abe-l-owner at ime.usp.br


Dear colleagues,

I am forwarding the announcement of two talks this Thursday in the
PPGEST-UnB Seminar Series.

The speakers are Gabriela Guimarães Olinto and André Lobato Ramos,
graduates of the B.Sc. in Statistics at UnB and of the Rochester
Institute of Technology/Center for Quality and Applied Statistics, both
advised by Prof. Ernest Fokoué.


Best regards,


Cibele


1)  “Kernelized Cost-Sensitive Listwise Ranking”

Speaker: Gabriela Guimarães Olinto
(Data Scientist - Soleo Communications)

M.Sc., School of Mathematical Sciences, College of Science,
Rochester Institute of Technology.

DATE: December 1, 2016 (Thursday)
TIME: 14:30
VENUE: Sala Multiuso EST

Abstract

“Learning to Rank is an application area of machine learning, typically
supervised, in which ranking models are built for Information Retrieval
systems. The training data consist of lists of items with a partial
order induced by an ordinal score or a binary judgment (relevant/not
relevant). The model's purpose is to produce a permutation of the items
in each list that is close to the rankings in the training data. Several
approaches have been proposed for this task, including the listwise
approach. A cost-sensitive version adapts this framework by treating the
documents within a list with different probabilities, i.e., by imposing
larger weights on the documents with higher cost. We take this algorithm
further by kernelizing the loss and exploring the optimization in
different spaces. We show how Kernel Cost-Sensitive ListMLE performs
compared with the baselines Plain Cost-Sensitive ListMLE, ListNet, and
RankSVM, and examine the behavior of the proposed loss function under
different families of kernels.”
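The listwise loss at the heart of ListMLE is the negative Plackett-Luce log-likelihood of the ideal permutation. The sketch below is illustrative only: the function name, the simple multiplicative per-document weighting, and the toy relevance vector are my assumptions, not the speaker's implementation.

```python
import numpy as np

def listmle_loss(scores, relevance, weights=None):
    """Negative Plackett-Luce log-likelihood (the ListMLE loss).

    scores:    model scores, one per document in the list
    relevance: ground-truth relevance; the target permutation sorts by it
    weights:   optional per-document costs (a cost-sensitive variant)
    """
    order = np.argsort(-relevance)                # ideal permutation
    s = scores[order]
    w = np.ones_like(s) if weights is None else np.asarray(weights, float)[order]
    loss = 0.0
    for i in range(len(s)):
        tail = s[i:]
        m = tail.max()                            # stabilized log-sum-exp
        lse = m + np.log(np.exp(tail - m).sum())
        loss += w[i] * (lse - s[i])               # -log P(item i ranked here)
    return loss

# Scores aligned with the true ranking incur a smaller loss.
rel = np.array([2.0, 1.0, 0.0])
good = listmle_loss(np.array([3.0, 2.0, 1.0]), rel)
bad = listmle_loss(np.array([1.0, 2.0, 3.0]), rel)
```

In the kernelized variant described in the talk, the linear scores would presumably be replaced by scores of the form K @ alpha for a kernel matrix K, so the same loss is optimized over a reproducing-kernel function space rather than over linear scoring functions.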




2) “Evolutionary Weights for Random Subspace Learning”

Speaker: André Lobato Ramos
(Marketing Analyst - Par Corretora)

M.Sc., School of Mathematical Sciences, College of Science,
Rochester Institute of Technology.

DATE: December 1, 2016 (Thursday)
TIME: 15:15
VENUE: Sala Multiuso EST


Abstract:


“Ensemble learning is a widely used technique in Data Mining; it allows
us to aggregate models to reduce prediction error. Among the many model
aggregation methods is Random Subspace Learning, which builds each model
on a subspace of the feature space. Selecting good subspaces, and in
turn producing good models for better prediction, can be a daunting
task, so we propose a new method to accomplish it. The proposed method
provides an automated, data-driven way to attribute weights to the
variables in the feature space, in order to select the variables that
prove important in reducing the prediction error.”
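A minimal sketch of random subspace learning with weighted feature sampling follows. The 1-nearest-neighbour base learner, the fixed weight vector, and the toy data are stand-ins of my own; the talk's evolutionary algorithm for adapting the weights is not reproduced here.

```python
import numpy as np

def random_subspace_predict(X_train, y_train, X_test, weights,
                            n_models=25, d=2, seed=0):
    """Random Subspace ensemble with weighted feature sampling.

    Each base model sees only `d` features, drawn with probability
    proportional to `weights`; predictions are aggregated by majority
    vote. A 1-nearest-neighbour base learner keeps the sketch
    self-contained.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, float)
    p = p / p.sum()
    votes = []
    for _ in range(n_models):
        cols = rng.choice(X_train.shape[1], size=d, replace=False, p=p)
        Xtr, Xte = X_train[:, cols], X_test[:, cols]
        dist = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
        votes.append(y_train[dist.argmin(axis=1)])   # 1-NN prediction
    votes = np.stack(votes)
    # majority vote across the ensemble, per test point
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy data: feature 0 separates the classes, features 1-2 are noise,
# so weights favouring feature 0 tend to produce good subspaces.
X_train = np.array([[0., 0., 1.], [0., 1., 0.], [10., 0., 0.], [10., 1., 1.]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.5, 0.5, 0.5], [9.5, 0.5, 0.5]])
pred = random_subspace_predict(X_train, y_train, X_test,
                               weights=[0.9, 0.05, 0.05])
```

The evolutionary component in the talk would, as I understand the abstract, iterate on the `weights` vector itself, rewarding weightings whose sampled subspaces reduce prediction error.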





-- 
Luana Takahashi
ABE Secretary
Tel: 11 3091-6261
Rua do Matão, 1010, room 250A
www.redeabe.org.br

