Research Computing Workshop
- Thu, Mar 19, 2020, 10:00 am to 4:30 pm
Research Computing recently installed four Intel FPGAs on the Della cluster. After attending this workshop, you should have the skills needed to start using these devices.
- Wed, Mar 18, 2020, 12:00 pm to 5:00 pm
Learn deep learning techniques for a range of computer vision tasks, including training and deploying neural networks.
- Wed, Feb 26, 2020, 4:30 pm
Of the many deep learning frameworks, PyTorch has largely emerged as the first choice for researchers. This workshop will show participants how to implement and train common network architectures in PyTorch. Special topics will be included as time permits. Participants should have some knowledge of Python, NumPy, and deep learning theory. Bring a laptop if you would like to do the hands-on exercises, and review the installation instructions in advance.
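A minimal sketch of the kind of model definition and training loop such a workshop covers; the architecture, toy data, and hyperparameters here are illustrative stand-ins, not workshop materials:

```python
import torch
import torch.nn as nn

# A small feed-forward classifier (sizes are arbitrary for illustration).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# A random toy batch standing in for a real DataLoader.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))

for step in range(100):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass and loss
    loss.backward()              # backpropagate
    optimizer.step()             # update parameters
```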
- Fri, Nov 15, 2019, 2:00 pm
Please join us for this 90-minute workshop, taught at an intermediate level. We will briefly introduce TensorFlow 2.0, then dive into writing a few flavors of neural networks. Attendees will need a laptop and an internet connection. There is nothing to install in advance; we will use https://colab.research.google.com for the examples.
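For a sense of the level, a small TensorFlow 2.0 model in the Keras style that runs as-is in a Colab notebook; the dataset and layer sizes are illustrative assumptions, not the workshop's actual notebook:

```python
import tensorflow as tf

# Load MNIST as a stand-in dataset and scale pixels to [0, 1].
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# One "flavor" of network: a plain feed-forward classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
```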
- Thu, Oct 17, 2019, 4:30 pm
Accelerating automated modeling and design with stochastic optimization and neural networks (20-minute talk)
Modeling Human Sequential Decision-Making (20-minute talk)
- Tue, Sep 17, 2019, 12:00 pm
Michael Zephyr, AI Developer Evangelist at Intel, will offer a survey of the company's AI tools, including Intel® Optimized TensorFlow* (CPU-only), the Intel® OpenVINO™ Toolkit for computer vision, and Intel® AI libraries. The workshop is open to all members of the campus research community. Please see the attached poster for the speaker bio and more details.
- Thu, Sep 12, 2019, 4:30 pm
JAX is a system for high-performance machine learning research. It offers the familiarity of Python+NumPy and the speed of hardware accelerators, and it enables the definition and composition of function transformations useful for machine learning programs. In particular, these transformations include automatic differentiation, automatic batching, end-to-end compilation (via XLA), and even parallelization over multiple accelerators. They are the key to JAX's power and to its relative simplicity.
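To make those transformations concrete, a short sketch composing three of them; the loss function here is an arbitrary toy example, not from the talk:

```python
import jax
import jax.numpy as jnp

# An arbitrary toy loss over weights w for a single input x.
def loss(w, x):
    return jnp.sum(jnp.dot(x, w) ** 2)

grad_loss = jax.grad(loss)                            # automatic differentiation
per_example = jax.vmap(grad_loss, in_axes=(None, 0))  # automatic batching over x
fast = jax.jit(per_example)                           # end-to-end compilation via XLA

w = jnp.ones(3)
xs = jnp.ones((8, 3))     # a batch of 8 inputs
print(fast(w, xs).shape)  # (8, 3): one gradient per example
```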