[ABE-L] Seminário do PPGE/UFPE - Salimeh Yasaei Sekeh (University of Maine - USA) - 06/10 (16h00)

'Pablo Martin Rodriguez' via abe-l@ime.usp.br abe-l em ime.usp.br
Ter Out 5 14:04:50 -03 2021


Good afternoon, everyone,

This message announces the next talk in the PPGE/UFPE Seminar Series, to be
held next Wednesday, 06/10 (tomorrow), via Google Meet. For more information
about the series, see:
https://sites.google.com/view/seminarios-ppge-ufpe

About the talk:
Compression Strategies for Efficient Learning in Deep Neural Networks

*Date:* 06/10/2021 - 16h00 | *Link:* meet.google.com/cwr-oaco-mmk
<https://meet.google.com/cwr-oaco-mmk?hs=224>

*Speaker:* Salimeh Yasaei Sekeh, University of Maine - USA

*Abstract:* The life cycle of deep learning applications consists of multiple
phases, including new architecture design and training. Proposed solutions in
these phases are implicitly bound to constraints imposed by the restricted
space and memory available on custom hardware implementations. These factors
are at odds with the general research goal of high performance in deep neural
networks (DNNs), which is often achieved by increasing the overall size and
capacity of the DNN and is thus inconsistent with efficient learning. This
work focuses on deep neural network model training and proposes new
techniques to increase its efficiency. The trade-off between hardware-based
constraints and high performance is often managed through architecture search
or pruning. Architecture search algorithms need extremely precise definitions
of rules and optimize a vast search space, which requires a large amount of
time, on the order of days, and resources, thus limiting their usability and
flexibility. On the other hand, existing pruning algorithms operate more
efficiently than architecture search algorithms but are not effective at
simultaneously reducing the redundancy in the flow of information between
layers and modeling the sensitivity of filters to being removed. In this
work, we propose a single-shot model pruning approach that rapidly modifies
any existing DNN architecture to match hardware constraints while using a
hybrid formulation that leverages a probabilistic model to reduce redundancy,
and weight-based constraints to model sensitivity. Further, we tackle more
practical challenges built into pruning pipelines, such as the time
complexity of determining the upper pruning limit for each layer of the DNN,
as well as analyzing the sensitivity of different filters to pruning. Our
main takeaway is a fully automated, state-of-the-art single-shot model
pruning pipeline that achieves 72.90% on ILSVRC2012 when ResNet50 is pruned
by 64.26%.
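For readers unfamiliar with filter pruning, the sketch below illustrates the general idea of single-shot, magnitude-based filter pruning: score each convolutional filter by a sensitivity proxy (here, its L1 norm) and keep only the top fraction in one pass. This is a minimal generic illustration, not the speaker's hybrid probabilistic method; the function name, shapes, and keep ratio are assumptions made for the example.

```python
# Generic single-shot filter pruning sketch (NOT the speaker's method):
# rank filters by L1 norm and keep the top fraction in one pass.
import numpy as np

def prune_filters(weights, keep_ratio):
    """weights: array of shape (num_filters, ...) for one conv layer.
    Returns a boolean mask marking which filters to keep."""
    # Score each filter by the L1 norm of its weights, a common
    # (if crude) proxy for the filter's sensitivity to removal.
    scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Keep the highest-scoring filters; the rest are pruned in a single shot,
    # with no iterative prune-retrain cycles.
    keep = np.zeros(weights.shape[0], dtype=bool)
    keep[np.argsort(scores)[-num_keep:]] = True
    return keep

rng = np.random.default_rng(0)
layer = rng.normal(size=(8, 3, 3, 3))        # 8 filters of shape 3x3x3
mask = prune_filters(layer, keep_ratio=0.5)  # keeps 4 of 8 filters
```

A per-layer keep ratio like this is exactly the "upper pruning limit" the abstract mentions; choosing it automatically for every layer is part of what makes a fully automated pipeline nontrivial.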

*About the speaker:* Salimeh is an Assistant Professor of Computer Science in
the School of Computing and Information Science (SCIS) at the University of
Maine. Prior to UMaine, she was a postdoctoral research fellow in the
Electrical Engineering and Computer Science Department at the University of
Michigan, Ann Arbor, working with Alfred O. Hero. She was a CAPES-PNPD funded
postdoctoral fellow at the Federal University of Sao Carlos (UFSCar) in 2014
and 2015, and she was a visiting scholar at the Polytechnic University of
Turin between 2011 and 2013. She is the director of the Sekeh Lab, and her
team's research spans various theoretical and practical aspects of machine
learning algorithms and deep learning techniques. Her research focuses on the
foundations of efficient and robust artificial intelligence methods,
especially deep neural networks, and on advancing their applications in
real-world problems and decision making.

Please forward this to anyone who may be interested.

Best regards,

Pablo Rodriguez