2017 GTC San Jose

S7182 - Sparse Persistent RNN

Session Speakers
Session Description

Recurrent Neural Networks (RNNs) are a powerful tool for solving sequence-based problems, but their execution time depends on the size of the network. Baidu introduced persistent RNNs to address this issue by minimizing the bandwidth required for the network parameters, but the size of on-chip storage imposes a strict upper limit on the network size. Model pruning can significantly reduce the number of RNN parameters, which makes the network sparse. We design an efficient method for accelerating sparse RNNs that includes several optimizations: Lamport barriers, wide memory loads, and a bank-aware weight layout. With these optimizations, on GP100 we achieve 1) ~4.5 TFLOP/s for a hidden layer of size 1792, a batch size of 4, and a density of 10%; and 2) 218 TFLOP/s (a 36x speedup over the cuDNN RNN) with 45 SMs for a hidden layer of size 5760, a batch size of 2, and a density of 1%.
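To make the sparse part of the abstract concrete, the hypothetical CUDA sketch below computes a single recurrent step with the pruned weight matrix stored in CSR format. It is only an illustration of the general technique; the kernel name and all parameter names (sparse_rnn_step, csr_row_ptr, and so on) are invented here and are not from the session.

// Hypothetical sketch: one recurrent step h_t = relu(W_sparse * h_{t-1} + x_t),
// with the pruned weight matrix W stored in CSR format.
// One thread accumulates one output row over its surviving (unpruned) weights.
#include <cuda_runtime.h>

__global__ void sparse_rnn_step(const int*   __restrict__ csr_row_ptr,  // N+1 row offsets
                                const int*   __restrict__ csr_col_idx,  // column index per nonzero
                                const float* __restrict__ csr_val,      // nonzero weight values
                                const float* __restrict__ h_prev,       // hidden state at t-1 (length N)
                                const float* __restrict__ x_t,          // pre-projected input at t (length N)
                                float*       __restrict__ h_next,       // hidden state at t (length N)
                                int N)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= N) return;

    float acc = x_t[row];
    // Accumulate only the nonzero weights that survived pruning in this row.
    for (int k = csr_row_ptr[row]; k < csr_row_ptr[row + 1]; ++k) {
        acc += csr_val[k] * h_prev[csr_col_idx[k]];
    }
    h_next[row] = fmaxf(acc, 0.0f);  // ReLU nonlinearity
}

A persistent implementation along the lines the session describes would instead launch once for the whole sequence, keep the nonzero weights resident in registers or shared memory across timesteps, and synchronize between timesteps with Lamport-style barriers rather than relaunching a kernel per step.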


Additional Session Information
Level: Beginner
Type: Talk
Topic: Deep Learning and AI Performance Optimization
Audience: General
Length: 25 minutes
Session Schedule