Research Computing Workshop

  • FPGA Training with Intel

    Thu, Mar 19, 2020, 10:00 am to 4:30 pm

    Research Computing recently installed four Intel FPGAs on the Della cluster. After attending this workshop, you should have the skills needed to start using these devices.

  • Foundations of Deep Learning with PyTorch

    Wed, Feb 26, 2020, 4:30 pm

    Of the many deep learning frameworks, PyTorch has largely emerged as the first choice for researchers. This workshop will show participants how to implement and train common network architectures in PyTorch. Special topics will be included as time permits. Participants should have some knowledge of Python, NumPy, and deep learning theory. Bring a laptop if you would like to do the hands-on exercises, and review the installation instructions beforehand.
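
    As an illustration only, here is a minimal sketch of the kind of exercise such a session might include: defining and training a small network in PyTorch. The architecture, dummy data, and hyperparameters are assumptions for illustration, not the workshop's actual material.

    ```python
    # Minimal PyTorch training loop on random dummy data (illustrative only).
    import torch
    import torch.nn as nn

    # A small feed-forward classifier: 20 input features, one hidden layer,
    # 2 output classes.
    model = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Linear(64, 2),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(32, 20)         # a batch of 32 dummy examples
    y = torch.randint(0, 2, (32,))  # dummy integer class labels

    for step in range(100):
        optimizer.zero_grad()        # clear gradients from the previous step
        loss = loss_fn(model(x), y)  # forward pass and loss computation
        loss.backward()              # backpropagation
        optimizer.step()             # parameter update
    ```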

  • Diving into TensorFlow 2.0

    Fri, Nov 15, 2019, 2:00 pm

    Please join us for this 90-minute workshop, taught at an intermediate level. We will briefly introduce TensorFlow 2.0, then dive into writing a few flavors of neural networks. Attendees will need a laptop and an internet connection. There is nothing to install in advance; we will use https://colab.research.google.com for the examples.
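
    For a sense of what can be written in such a session, here is a minimal sketch of a small Keras classifier in TensorFlow 2.0 that runs as-is in a Colab notebook. The dataset and layer sizes are illustrative assumptions, not the workshop's actual examples.

    ```python
    # Minimal TensorFlow 2.0 / Keras classifier, runnable in a Colab notebook
    # (illustrative only; the dataset and layer sizes are assumptions).
    import tensorflow as tf

    # MNIST ships with Keras, so nothing needs to be installed in advance.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train / 255.0  # scale pixel values to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)
    ```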

  • AI Journey with Intel Workshop

    Tue, Sep 17, 2019, 12:00 pm

    Michael Zephyr, AI Developer Evangelist at Intel, will offer a survey of the company's AI tools, including Intel® Optimized TensorFlow* (CPU-only), the Intel® OpenVINO™ Toolkit for computer vision, and Intel® AI libraries. The workshop is open to all members of the campus research community. Please see the attached poster for the speaker's bio and more details.

  • TensorFlow and PyTorch User Group

    Thu, Sep 12, 2019, 4:30 pm

    JAX is a system for high-performance machine learning research. It offers the familiarity of Python+NumPy and the speed of hardware accelerators, and it enables the definition and composition of function transformations useful for machine learning programs. In particular, these transformations include automatic differentiation, automatic batching, end-to-end compilation (via XLA), and even parallelization over multiple accelerators. They are the key to JAX's power and to its relative simplicity.
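
    Here is a minimal sketch composing the transformations described above; the model, data, and composition order are illustrative assumptions rather than material from the meeting.

    ```python
    # Minimal sketch composing jax.grad (automatic differentiation),
    # jax.vmap (automatic batching), and jax.jit (XLA compilation).
    # The model and data are arbitrary illustrations.
    import jax
    import jax.numpy as jnp

    def predict(w, x):
        return jnp.tanh(jnp.dot(w, x))   # a tiny "model": one weight vector

    def loss(w, x, y):
        return (predict(w, x) - y) ** 2  # squared error for a single example

    # Differentiate with respect to w, batch over (x, y) pairs, and compile
    # the whole composition with XLA.
    batched_grad = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

    w = jnp.ones(3)
    xs = jnp.arange(12.0).reshape(4, 3)   # a batch of 4 inputs
    ys = jnp.array([0.0, 1.0, 0.0, 1.0])  # a batch of 4 targets
    print(batched_grad(w, xs, ys))        # per-example gradients, shape (4, 3)
    ```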