
SuperComputing 2012

Thanks for visiting our booth at SC2012 in Salt Lake City. It was good to meet with you. Please let us know your successes or if you are having any difficulties building or using Open|SpeedShop.

HPC-Admin Magazine Article: How to Use Open|SpeedShop

The article describes how to use Open|SpeedShop through step-by-step examples that illustrate how to find a number of different performance bottlenecks. It also describes the tool’s most common usage model (workflow) and presents several performance-data viewing options.

See “Look for Bottlenecks with Open|SpeedShop”

Overview
Open|SpeedShop is a community effort led by the Krell Institute with current direct funding from DOE’s NNSA and Office of Science. It builds on a broad set of community infrastructures, most notably Dyninst and MRNet from UW, libmonitor from Rice, and PAPI from UTK. Open|SpeedShop is an open-source, multi-platform Linux performance tool initially targeted at performance analysis of applications running on both single-node and large-scale IA64, IA32, EM64T, AMD64, and IBM PowerPC platforms. Support for the Cray XT platform was added in release 1.9.3.4, and support for the IBM Blue Gene platforms was added in the 2.0.0 release (November 2010). Further updates for the Blue Gene and Cray platforms were delivered in the 2.0.1 release (December 2011), which also added support for shared/dynamic executables on the Cray XE platforms.

Open|SpeedShop is explicitly designed with usability in mind and is aimed at application developers and computer scientists. The base functionality includes:

  • Sampling Experiments
  • Support for Callstack Analysis
  • Hardware Performance Counters
  • MPI Profiling and Tracing
  • I/O Profiling and Tracing
  • Floating Point Exception Analysis

In addition, Open|SpeedShop is designed to be modular and extensible. It supports several levels of plug-ins which allow users to add their own performance experiments.
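As a concrete illustration of the workflow, experiments such as program-counter sampling are typically launched through convenience scripts that wrap an unmodified application run. The sketch below is illustrative only: the application name, arguments, and database file name are placeholders, and it assumes Open|SpeedShop is installed with its scripts on PATH.

```shell
# Sketch of a typical Open|SpeedShop session (./my_app is a placeholder binary).

# Run a program-counter sampling (pcsamp) experiment on a sequential run:
osspcsamp "./my_app input.dat"

# The same script wraps the normal MPI launch line for parallel jobs:
osspcsamp "mpirun -np 64 ./my_app input.dat"

# Results are written to a database file that can be reopened later for
# post-run analysis, e.g.:
openss -f my_app-pcsamp.openss
```

Because the scripts take the application’s normal launch command as a quoted string, no recompilation or source change is needed, matching the usage model described above.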

Open|SpeedShop development is hosted by the Krell Institute. The infrastructure and base components of Open|SpeedShop are released as open source code primarily under LGPL.

Highlights

  • Comprehensive performance analysis for sequential, multithreaded, and MPI applications
  • No need to recompile the user’s application
  • Supports both first analysis steps as well as deeper analysis options for performance experts
  • Easy to use GUI and fully scriptable through a command line interface and Python
  • Supports Linux Systems and Clusters with Intel and AMD processors
  • Extensible through new performance analysis plugins ensuring consistent look and feel
  • In production use on all major cluster platforms at LANL, LLNL, and SNL

Features

  • Four user interface options: batch, command-line interface, graphical user interface, and Python scripting API.
  • Supports multi-platform single system image (SSI) and traditional clusters.
  • Scales to large numbers of processes, threads, and ranks.
  • Ability to automatically create and attach to both sequential and parallel jobs from within Open|SpeedShop.
  • View performance data using multiple customizable views.
  • View intermediate performance measurement data while the experiment is running.
  • Save and restore performance experiment data and symbol information for post-experiment performance analysis.
  • View performance data for the application’s entire lifetime or for smaller time slices.
  • Compare performance results between processes, threads, or ranks, or between a previous experiment and the current experiment.
  • GUI Wizard facility and context sensitive help.
  • Interactive CLI help facility which lists the CLI commands, syntax, and typical usage.
  • Python Scripting API accesses Open|SpeedShop functionality corresponding to CLI commands.
  • Option to automatically group similarly performing processes, threads, or ranks.
  • Create traces in OTF (Open Trace Format).
  • Comprehensive installation scripts.
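The command-line interface listed above can also be used interactively. The session below is a sketch: the application path and saved-file name are placeholders, and the command spellings follow the CLI’s expCreate/expGo/expView family of commands.

```
$ openss -cli
openss>> expcreate -f "./my_app" usertime   # create a call-stack sampling experiment
openss>> expgo                              # run the application under the experiment
openss>> expview                            # display the default performance view
openss>> expsave -f my_app-usertime.openss  # save the data for later analysis
```

The Python scripting API exposes functionality corresponding to these same CLI commands, so a script can drive the identical create/run/view cycle non-interactively.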
