PVF® User's Guide
Parallel Fortran for Scientists and Engineers
Release 2010
The Portland Group
While every precaution has been taken in the preparation of this document, The Portland Group® (PGI®), a wholly-owned subsidiary of STMicroelectronics, Inc., makes no warranty for the use of its products and assumes no responsibility for any errors that may appear, or for damages resulting from the use of the information contained herein. The Portland Group retains the right to make changes to this information at any time, without notice.

The software described in this document is distributed under license from STMicroelectronics and/or The Portland Group and may be used or copied only in accordance with the terms of the license agreement ("EULA").

PGI Workstation, PGI Server, PGI Accelerator, PGF95, PGF90, PGFORTRAN, and PGI Unified Binary are trademarks; and PGI, PGHPF, PGF77, PGCC, PGC++, PGI Visual Fortran, PVF, PGI CDK, Cluster Development Kit, PGPROF, PGDBG, and The Portland Group are registered trademarks of The Portland Group Incorporated.

No part of this document may be reproduced or transmitted in any form or by any means, for any purpose other than the purchaser's or the end user's personal use without the express written permission of STMicroelectronics and/or The Portland Group.
PVF® User’s Guide
Copyright © 2006-2010 STMicroelectronics, Inc. All rights reserved.
Printed in the United States of America

First Printing: Release 6.1, December 2005
Second Printing: Release 6.2, August 2006
Third Printing: Release 7.0, December 2006
Fourth Printing: Release 7.1, October 2007
Fifth Printing: Release 7.2, May 2008
Sixth Printing: Release 8.0, November 2008
Seventh Printing: Release 9.0, June 2009
Eighth Printing: Release 2010, November 2009
Ninth Printing: Release 2010, 10.2, February 2010
Tenth Printing: Release 2010, 10.3, March 2010
Eleventh Printing: Release 2010, 10.4, April 2010
Twelfth Printing: Release 2010, 10.5, May 2010
Thirteenth Printing: Release 2010, 10.6, June 2010
Technical support: [email protected]
Sales: [email protected]
Web: www.pgroup.com
ID: 101531642
Contents

Preface ..... xxiii
    Audience Description ..... xxiii
    Compatibility and Conformance to Standards ..... xxiii
    Organization ..... xxiv
    Hardware and Software Constraints ..... xxv
    Conventions ..... xxv
    Related Publications ..... xxvii
1. Getting Started with PVF ..... 1
    PVF on the Start Menu ..... 1
        Shortcuts to Launch PVF ..... 1
        Commands Submenu ..... 2
        Profiler Submenu ..... 2
        Documentation Submenu ..... 2
        Licensing Submenu ..... 3
    Introduction to PVF ..... 3
        Visual Studio Settings ..... 3
        Solutions and Projects ..... 3
    Creating a Hello World Project ..... 4
    Using PVF Help ..... 6
    PVF Sample Projects ..... 6
    Compatibility ..... 7
        Win32 API Support (dfwin) ..... 8
        Unix/Linux Portability Interfaces (dflib, dfport) ..... 8
        Windows Applications and Graphical User Interfaces ..... 8
2. Build with PVF ..... 11
    Creating a PVF Project ..... 11
        PVF Project Types ..... 11
        Creating a New Project ..... 11
    PVF Solution Explorer ..... 12
    Adding Files to a PVF Project ..... 12
        Add a New File ..... 12
        Add an Existing File ..... 13
    Adding a New Project to a Solution ..... 13
    Project Dependencies and Build Order ..... 14
    Configurations ..... 14
    Platforms ..... 14
    Setting Global User Options ..... 14
    Setting Configuration Options using Property Pages ..... 15
    Property Pages ..... 15
    Setting File Properties Using the Properties Window ..... 20
    Setting Fixed Format ..... 21
    Building a Project with PVF ..... 22
        Order of PVF Build Operations ..... 22
    Build Events and Custom Build Steps ..... 22
        Build Events ..... 22
        Custom Build Steps ..... 23
    PVF Build Macros ..... 23
    Static and Dynamic Linking ..... 23
    VC++ Interoperability ..... 24
    Linking PVF and VC++ Projects ..... 24
    Common Link-time Errors ..... 24
    Migrating an Existing Application to PVF ..... 25
    Fortran Editing Features ..... 25
3. Debug with PVF ..... 27
    Windows Used in Debugging ..... 27
        Autos Window ..... 27
        Breakpoints Window ..... 27
        Call Stack Window ..... 28
        Disassembly Window ..... 28
        Immediate Window ..... 28
        Locals Window ..... 29
        Memory Window ..... 29
        Modules Window ..... 29
        Output Window ..... 29
        Processes Window ..... 29
        Registers Window ..... 30
        Threads Window ..... 30
        Watch Window ..... 30
    Variable Rollover ..... 30
        Scalar Variables ..... 31
        Array Variables ..... 31
        User-Defined Type Variables ..... 31
    Debugging an MPI Application in PVF ..... 31
    Attaching the PVF Debugger to a Running Application ..... 31
        Attach to a Native Windows Application ..... 32
    Using PVF to Debug a Standalone Executable ..... 33
        Launch PGI Visual Fortran from a Native Windows Command Prompt ..... 33
        Using PGI Visual Fortran After a Command Line Launch ..... 34
        Tips on Launching PVF from the Command Line ..... 34
4. Using MPI in PVF ..... 35
    MPI Overview ..... 35
    System and Software Requirements ..... 35
        Local MPI ..... 35
        Cluster MPI ..... 35
    Compile using MSMPI ..... 36
    Enable MPI Execution ..... 36
        MPI Debugging Property Options ..... 36
    Launch an MPI Application ..... 36
    Debug an MPI Application ..... 37
    Tips and Tricks ..... 38
    Profile an MPI Application ..... 38
    MSMPI Environment ..... 39
5. Getting Started with the Command Line Compilers ..... 41
    Overview ..... 41
    Invoking the Command-level PGI Compilers ..... 42
        Command-line Syntax ..... 42
        Command-line Options ..... 43
        Fortran Directives ..... 43
    Filename Conventions ..... 44
        Input Files ..... 44
        Output Files ..... 45
    Fortran Data Types ..... 46
    Parallel Programming Using the PGI Compilers ..... 47
        Running SMP Parallel Programs ..... 47
    Site-specific Customization of the Compilers ..... 47
        Using siterc Files ..... 47
        Using User rc Files ..... 48
    Common Development Tasks ..... 48
6. Using Command Line Options ..... 51
    Command Line Option Overview ..... 51
        Command-line Options Syntax ..... 51
        Command-line Suboptions ..... 52
        Command-line Conflicting Options ..... 52
    Help with Command-line Options ..... 52
    Getting Started with Performance ..... 54
        Using –fast and –fastsse Options ..... 54
        Other Performance-related Options ..... 55
    Targeting Multiple Systems - Using the -tp Option ..... 55
    Frequently-used Options ..... 55
7. Optimizing & Parallelizing ..... 57
    Overview of Optimization ..... 58
        Local Optimization ..... 58
        Global Optimization ..... 58
        Loop Optimization: Unrolling, Vectorization, and Parallelization ..... 58
        Interprocedural Analysis (IPA) and Optimization ..... 58
        Function Inlining ..... 59
        Profile-Feedback Optimization (PFO) ..... 59
    Getting Started with Optimizations ..... 59
    Common Compiler Feedback Format (CCFF) ..... 61
    Local and Global Optimization using -O ..... 61
        Scalar SSE Code Generation ..... 63
    Loop Unrolling using –Munroll ..... 63
    Vectorization using –Mvect ..... 64
        Vectorization Sub-options ..... 65
        Vectorization Example Using SSE/SSE2 Instructions ..... 67
    Auto-Parallelization using -Mconcur ..... 69
        Auto-parallelization Sub-options ..... 69
        Loops That Fail to Parallelize ..... 70
    Processor-Specific Optimization & the Unified Binary ..... 73
    Interprocedural Analysis and Optimization using –Mipa ..... 74
        Building a Program Without IPA – Single Step ..... 74
        Building a Program Without IPA - Several Steps ..... 74
        Building a Program Without IPA Using Make ..... 75
        Building a Program with IPA ..... 75
        Building a Program with IPA - Single Step ..... 76
        Building a Program with IPA - Several Steps ..... 76
        Building a Program with IPA Using Make ..... 77
        Questions about IPA ..... 77
    Profile-Feedback Optimization using –Mpfi/–Mpfo ..... 78
    Default Optimization Levels ..... 79
    Local Optimization Using Directives ..... 79
    Execution Timing and Instruction Counting ..... 79
8. Using Function Inlining ..... 81
    Invoking Function Inlining ..... 81
        Using an Inline Library ..... 82
    Creating an Inline Library ..... 83
        Working with Inline Libraries ..... 84
        Updating Inline Libraries - Makefiles ..... 84
    Error Detection during Inlining ..... 85
    Examples ..... 85
    Restrictions on Inlining ..... 85
9. Using OpenMP ..... 87
    OpenMP Overview ..... 87
        OpenMP Shared-Memory Parallel Programming Model ..... 87
        Terminology ..... 88
        OpenMP Example ..... 89
    Task Overview ..... 90
    Fortran Parallelization Directives ..... 90
    Directive Recognition ..... 91
    Directive Summary Table ..... 91
    Directive Clauses ..... 93
    Run-time Library Routines ..... 95
    Environment Variables ..... 99
10. Using an Accelerator ..... 101
    Overview ..... 101
        Components ..... 101
        Availability ..... 101
        User-directed Accelerator Programming ..... 102
        Features Not Covered or Implemented ..... 102
    Terminology ..... 102
    System Requirements ..... 104
    Supported Processors and GPUs ..... 104
    Installation and Licensing ..... 104
        Enable Accelerator Compilation ..... 104
    Execution Model ..... 105
        Host Functions ..... 105
        Levels of Parallelism ..... 105
    Memory Model ..... 106
        Separate Host and Accelerator Memory Considerations ..... 106
        Accelerator Memory ..... 106
        Cache Management ..... 106
    Running an Accelerator Program ..... 107
    Accelerator Directives ..... 107
        Enable Accelerator Directives ..... 107
        Format ..... 107
        Free-Form Fortran Directives ..... 108
        Fixed-Form Fortran Directives ..... 109
        Accelerator Directive Summary ..... 109
    Accelerator Directive Clauses ..... 111
    PGI Accelerator Compilers Runtime Libraries ..... 113
        Runtime Library Definitions ..... 113
        Runtime Library Routines ..... 114
    Environment Variables ..... 114
    Applicable PVF Property Pages ..... 115
    Applicable Command Line Options ..... 115
    PGI Unified Binary for Accelerators ..... 116
        Multiple Processor Targets ..... 117
    Profiling Accelerator Kernels ..... 118
    Related Accelerator Programming Tools ..... 119
        PGPROF pgcollect ..... 119
        NVIDIA CUDA Profile ..... 119
        TAU - Tuning and Analysis Utility ..... 119
    Supported Intrinsics ..... 119
        Supported Fortran Intrinsics Summary Table ..... 119
    References related to Accelerators ..... 121
11. Using Directives ..... 123
    PGI Proprietary Fortran Directives ..... 123
    PGI Proprietary Optimization Directive Summary ..... 124
    Scope of Fortran Directives and Command-Line options ..... 125
    Prefetch Directives ..... 126
        Prefetch Directive Syntax ..... 126
        Prefetch Directive Format Requirements ..... 127
        Sample Usage of Prefetch Directive ..... 127
    !DEC$ Directives ..... 127
        Format Requirements ..... 128
        Summary Table ..... 128
12. Creating and Using Libraries ..... 129
    PGI Runtime Libraries on Windows ..... 129
    Creating and Using Static Libraries on Windows ..... 130
        ar command ..... 130
        ranlib command ..... 131
    Creating and Using Dynamic-Link Libraries on Windows ..... 131
    Using LIB3F ..... 136
    LAPACK, BLAS and FFTs ..... 136
13. Using Environment Variables ..... 137
    Setting Environment Variables ..... 137
        Setting Environment Variables on Windows ..... 137
    PGI-Related Environment Variables ..... 138
    PGI Environment Variables ..... 139
        FLEXLM_BATCH ..... 140
        FORTRANOPT ..... 140
        LM_LICENSE_FILE ..... 140
        MPSTKZ ..... 141
        MP_BIND ..... 141
        MP_BLIST ..... 141
        MP_SPIN ..... 141
        MP_WARN ..... 141
        NCPUS ..... 142
        NCPUS_MAX ..... 142
        NO_STOP_MESSAGE ..... 142
        PATH ..... 142
        PGI ..... 143
        PGI_CONTINUE .................................................................................................. 143
        PGI_OBJSUFFIX ................................................................................................. 143
        PGI_STACK_USAGE ............................................................................................ 143
        PGI_TERM ......................................................................................................... 143
        PGI_TERM_DEBUG ............................................................................................ 145
        STATIC_RANDOM_SEED .................................................................................... 145
        TMP .................................................................................................................. 145
        TMPDIR ............................................................................................................. 145
    Stack Traceback and JIT Debugging ........................................................................ 145
14. Distributing Files - Deployment .............................................................................. 147
    Deploying Applications on Windows ....................................................................... 147
        PGI Redistributables .......................................................................................... 147
        Microsoft Redistributables ................................................................................. 148
    Code Generation and Processor Architecture .......................................................... 148
        Generating Generic x86 Code ............................................................................ 148
        Generating Code for a Specific Processor .......................................................... 148
    Generating One Executable for Multiple Types of Processors ................................. 148
        PGI Unified Binary Command-line Switches ....................................................... 149
        PGI Unified Binary Directives ............................................................................. 149
15. Inter-language Calling ............................................................................................ 151
    Overview of Calling Conventions ............................................................................ 151
    Inter-language Calling Considerations .................................................................... 152
    Functions and Subroutines ..................................................................................... 152
    Upper and Lower Case Conventions, Underscores .................................................. 152
    Compatible Data Types ........................................................................................... 153
        Fortran Named Common Blocks ........................................................................ 154
    Argument Passing and Return Values ..................................................................... 154
        Passing by Value (%VAL) ................................................................................... 155
        Character Return Values .................................................................................... 155
        Complex Return Values ..................................................................................... 155
    Array Indices .......................................................................................................... 156
    Examples ................................................................................................................ 156
        Example - Fortran Calling C ............................................................................... 156
        Example - C Calling Fortran ............................................................................... 157
        Example - Fortran Calling C++ .......................................................................... 158
        Example - C++ Calling Fortran .......................................................................... 159
    Win32 Calling Conventions ..................................................................................... 160
        Win32 Fortran Calling Conventions .................................................................... 160
        Symbol Name Construction and Calling Example ............................................... 161
        Using the Default Calling Convention ................................................................. 162
        Using the STDCALL Calling Convention .............................................................. 162
        Using the C Calling Convention .......................................................................... 163
        Using the UNIX Calling Convention .................................................................... 163
        Using the CREF Calling Convention .................................................................... 163
16. Programming Considerations for 64-Bit Environments ......................................... 165
    Data Types in the 64-Bit Environment .................................................................... 165
        Fortran Data Types ............................................................................................ 165
    Large Dynamically Allocated Data ........................................................................... 166
    Compiler Options for 64-bit Programming ............................................................. 166
    Practical Limitations of Large Array Programming ................................................. 167
    Large Array and Small Memory Model in Fortran ................................................... 167
17. Fortran Data Types ................................................................................................. 169
    Fortran Data Types ................................................................................................. 169
        Fortran Scalars .................................................................................................. 169
        FORTRAN 77 Aggregate Data Type Extensions ................................................... 171
        Fortran 90 Aggregate Data Types (Derived Types) ............................................. 172
18. Command-Line Options Reference ......................................................................... 173
    PGI Compiler Option Summary .............................................................................. 173
        Build-Related PGI Options ................................................................................. 173
        PGI Debug-Related Compiler Options ................................................................ 175
        PGI Optimization-Related Compiler Options ...................................................... 176
        PGI Linking and Runtime-Related Compiler Options .......................................... 177
    Generic PGI Compiler Options ............................................................................... 177
    –M Options by Category ......................................................................................... 213
        Code Generation Controls .................................................................................. 213
        Environment Controls ........................................................................................ 217
        Fortran Language Controls ................................................................................. 218
        Inlining Controls ................................................................................................ 221
        Optimization Controls ........................................................................................ 223
        Miscellaneous Controls ...................................................................................... 233
19. OpenMP Reference Information ............................................................................. 241
    Tasks ...................................................................................................................... 241
        Task Characteristics and Activities ..................................................................... 241
        Task Scheduling Points ...................................................................................... 241
        Task Construct ................................................................................................... 242
    Parallelization Directives ........................................................................................ 244
        ATOMIC ............................................................................................................. 244
        BARRIER ........................................................................................................... 244
        CRITICAL ... END CRITICAL ............................................................................... 245
        C$DOACROSS .................................................................................................... 246
        DO ... END DO ................................................................................................... 247
        FLUSH ............................................................................................................... 248
        MASTER ... END MASTER ................................................................................... 249
        ORDERED .......................................................................................................... 249
        PARALLEL ... END PARALLEL ............................................................................. 250
        PARALLEL DO .................................................................................................... 251
        PARALLEL SECTIONS ......................................................................................... 252
        PARALLEL WORKSHARE ... END PARALLEL WORKSHARE .................................. 253
        SECTIONS ... END SECTIONS ............................................................................. 253
        SINGLE ... END SINGLE ...................................................................................... 254
        TASK .................................................................................................................. 255
        TASKWAIT .......................................................................................................... 256
        THREADPRIVATE ............................................................................................... 257
        WORKSHARE ... END WORKSHARE .................................................................... 257
    Directive Clauses .................................................................................................... 258
        COLLAPSE (n) .................................................................................................... 258
        COPYIN (list) ..................................................................................................... 258
        COPYPRIVATE(list) ............................................................................................. 259
        DEFAULT ........................................................................................................... 259
        FIRSTPRIVATE(list) ............................................................................................ 259
        IF() ................................................................................................................... 260
        LASTPRIVATE(list) ............................................................................................. 260
        NOWAIT ............................................................................................................. 260
        NUM_THREADS ................................................................................................. 260
        ORDERED .......................................................................................................... 260
        PRIVATE ............................................................................................................ 260
        REDUCTION ....................................................................................................... 261
        SCHEDULE ......................................................................................................... 261
        SHARED ............................................................................................................. 262
        UNTIED ............................................................................................................. 262
    OpenMP Environment Variables ............................................................................. 262
        OMP_DYNAMIC .................................................................................................. 262
        OMP_NESTED .................................................................................................... 262
        OMP_MAX_ACTIVE_LEVELS ............................................................................... 263
        OMP_NUM_THREADS ........................................................................................ 263
        OMP_SCHEDULE ................................................................................................ 263
        OMP_STACKSIZE ................................................................................................ 263
        OMP_THREAD_LIMIT ......................................................................................... 264
        OMP_WAIT_POLICY ........................................................................................... 264
20. PGI Accelerator Compilers Reference .................................................................... 265
    PGI Accelerator Directives ...................................................................................... 265
        Accelerator Compute Region Directive ............................................................... 266
        Accelerator Data Region Directive ..................................................................... 267
        Accelerator Loop Mapping Directive .................................................................. 268
        Combined Directive ........................................................................................... 268
        Accelerator Declarative Data Directive ............................................................... 269
        Accelerator Update Directive ............................................................................. 270
    PGI Accelerator Directive Clauses .......................................................................... 271
        if (condition) ..................................................................................................... 271
        Data Clauses ...................................................................................................... 271
        copy (list) .......................................................................................................... 272
        copyin (list) ....................................................................................................... 272
        copyout (list) ..................................................................................................... 273
        local (list) ......................................................................................................... 273
        mirror (list) ....................................................................................................... 273
        updatein|updateout (list) ................................................................................... 273
        Loop Scheduling Clauses ................................................................................... 274
        cache (list) ........................................................................................................ 275
        host [(width)] .................................................................................................... 275
        independent ...................................................................................................... 276
        kernel ................................................................................................................ 276
        parallel [(width)] ............................................................................................... 276
        private (list) ...................................................................................................... 276
        seq [(width)] ..................................................................................................... 277
        unroll [(width)] ................................................................................................. 277
        vector [(width)] ................................................................................................. 277
        Declarative Data Directive Clauses ..................................................................... 277
        reflected (list) ................................................................................................... 278
        Update Directive Clauses ................................................................................... 278
        device (list) ....................................................................................................... 278
        host (list) .......................................................................................................... 278
    PGI Accelerator Runtime Routines ......................................................................... 279
        acc_get_device ................................................................................................... 279
        acc_get_device_num .......................................................................................... 279
        acc_get_num_devices ......................................................................................... 280
        acc_init .............................................................................................................. 280
        acc_set_device ................................................................................................... 281
        acc_set_device_num .......................................................................................... 281
        acc_shutdown .................................................................................................... 282
        acc_on_device ................................................................................................... 282
    Accelerator Environment Variables ........................................................................ 283
        ACC_DEVICE ...................................................................................................... 283
        ACC_DEVICE_NUM ............................................................................................ 283
        ACC_NOTIFY ...................................................................................................... 284
    pgcudainit Utility .................................................................................................... 284
21. Directives Reference .............................................................................................. 285
    PGI Proprietary Fortran Directive Summary ........................................................... 285
        altcode (noaltcode) ........................................................................................... 286
        assoc (noassoc) ................................................................................................. 286
        bounds (nobounds) ........................................................................................... 287
        cncall (nocncall) ................................................................................................ 287
        concur (noconcur) ............................................................................................ 287
        depchk (nodepchk) ........................................................................................... 287
        eqvchk (noeqvchk) ............................................................................................ 287
        invarif (noinvarif) .............................................................................................. 287
        ivdep ................................................................................................................. 287
        lstval (nolstval) .................................................................................................. 288
        prefetch ............................................................................................................. 288
        opt .................................................................................................................... 288
        safe_lastval ........................................................................................................ 288
        tp ...................................................................................................................... 289
        unroll (nounroll) ............................................................................................... 289
        vector (novector) ............................................................................................... 290
        vintr (novintr) ................................................................................................... 290
    Prefetch Directives ................................................................................................. 290
    !DEC$ Directives .................................................................................................... 290
        ALIAS Directive .................................................................................................. 291
        ATTRIBUTES Directive ....................................................................................... 291
        DECORATE Directive .......................................................................................... 292
        DISTRIBUTE Directive ........................................................................................ 292
        IGNORE_TKR Directive ...................................................................................... 293
22. Run-time Environment ........................................................................................... 295
    Win32 Programming Model ................................................................................... 295
        Function Calling Sequence ................................................................................. 295
        Function Return Values ...................................................................................... 298
        Argument Passing .............................................................................................. 299
    Win64 Programming Model ................................................................................... 301
        Function Calling Sequence ................................................................................. 301
        Function Return Values ...................................................................................... 304
        Argument Passing .............................................................................................. 304
        Win64 Fortran Supplement ................................................................................ 307
23. PVF Properties ....................................................................................................... 313
    General Property Page ........................................................................................... 327
        Output Directory ................................................................................................ 327
        Intermediate Directory ...................................................................................... 327
        Extensions to Delete on Clean ........................................................................... 327
        Configuration Type ............................................................................................. 327
        Build Log File .................................................................................................... 327
        Build Log Level .................................................................................................. 327
    Debugging Property Page ....................................................................................... 328
        Application Command ........................................................................................ 328
        Application Arguments ....................................................................................... 328
        Environment ...................................................................................................... 328
        Merge Environment ........................................................................................... 328
        MPI Debugging .................................................................................................. 329
        Working Directory ............................................................................................. 329
        Number of Processes ........................................................................................ 329
        Working Directory ............................................................................................. 329
        Additional Arguments: mpiexec ......................................................................... 329
        Location of mpiexec .......................................................................................... 330
        Number of Cores ............................................................................................... 330
        Working Directory ............................................................................................. 330
        Standard Input
        Standard Output
        Standard Error
        Additional Arguments: job submit
        Additional Arguments: mpiexec
        Location of job.exe
Fortran Property Pages
    Fortran | General
        Display Startup Banner
        Additional Include Directories
        Module Path
        Object File Name
        Debug Information Format
        Optimization
    Fortran | Optimization
        Optimization
        Global Optimizations
        Vectorization
        Inlining
        Use Frame Pointer
        Loop Unroll Count
        Auto-Parallelization
    Fortran | Preprocessing
        Preprocess Source File
        Additional Include Directories
        Ignore Standard Include Path
        Preprocessor Definitions
        Undefine Preprocessor Definitions
    Fortran | Code Generation
        Runtime Library
    Fortran | Language
        Fortran Dialect
        Treat Backslash as Character
        Extend Line Length
        Process OpenMP Directives
        MPI
        Enable CUDA Fortran
        CUDA Fortran Register Limit
        CUDA Fortran Use Fused Multiply-Adds
        CUDA Fortran Use Fast Math Library
        CUDA Fortran Toolkit
        CUDA Fortran Compute Capability
        CUDA Fortran CC 1.0
        CUDA Fortran CC 1.1
        CUDA Fortran CC 1.2
        CUDA Fortran CC 1.3
        CUDA Fortran CC 2.0
        CUDA Fortran Keep Binary
        CUDA Fortran Keep Kernel Source
        CUDA Fortran Keep PTX
        CUDA Fortran Emulation
    Fortran | Floating Point Options
        Floating Point Exception Handling
        Floating Point Consistency
        Flush Denormalized Results to Zero
        Treat Denormalized Values as Zero
        IEEE Arithmetic
    Fortran | External Procedures
        Calling Convention
        String Length Arguments
        Case of External Names
    Fortran | Target Processors
        AMD Athlon
        AMD Barcelona
        AMD Istanbul
        AMD Shanghai
        Intel Core 2
        Intel Core i7
        Intel Penryn
        Intel Pentium 4
        Generic x86 [Win32 only]
        Generic x86-64 [x64 only]
    Fortran | Target Accelerators
        Target NVIDIA Accelerator
        NVIDIA: Register Limit
        NVIDIA: Use Fused Multiply-Adds
        NVIDIA: Use Fast Math Library
        NVIDIA: Use 24-bit Subscript Multiplication
        NVIDIA: Synchronous Kernel Launch
        NVIDIA: CUDA Toolkit
        NVIDIA: Compute Capability
        NVIDIA: CC 1.0
        NVIDIA: CC 1.1
        NVIDIA: CC 1.2
        NVIDIA: CC 1.3
        NVIDIA: CC 2.0
        NVIDIA: Keep Kernel Binary
        NVIDIA: Keep Kernel Source
        NVIDIA: Keep Kernel PTX
        NVIDIA: Enable Profiling
        NVIDIA: Analysis Only
        Target Host
    Fortran | Diagnostics
        Warning Level
        Generate Assembly
        Annotate Assembly
        Accelerator Information
        CCFF Information
        Fortran Language Information
        Inlining Information
        IPA Information
        Loop Intensity Information
        Loop Optimization Information
        LRE Information
        OpenMP Information
        Optimization Information
        Parallelization Information
        Unified Binary Information
        Vectorization Information
    Fortran | Profiling
        Function-Level Profiling
        Line-Level Profiling
        MPI
        Suppress CCFF Information
        Enable Limited DWARF
    Fortran | Command Line
        Command Line
Linker Property Pages
    Linker | General
        Output File
        Additional Library Directories
        Export Symbols
    Linker | Input
        Additional Dependencies
    Linker | Command Line
        Command Line
Librarian Property Pages
    Librarian | General
        Output File
        Additional Library Directories
        Additional Dependencies
    Librarian | Command Line
        Command Line
Resources Property Page
    Resources | Command Line
        Command Line
Build Events Property Page
    Build Event
        Command Line
        Description
        Excluded From Build
Custom Build Step Property Page
    Custom Build Step | General
        Command Line
        Description
        Outputs
        Additional Dependencies
24. PVF Build Macros
25. Fortran Module/Library Interfaces for Windows
    Source Files
    Data Types
    Using DFLIB and DFPORT
        DFLIB
        DFPORT
    Using the DFWIN module
    Supported Libraries and Modules
        advapi32
        comdlg32
        dfwbase
        dfwinty
        gdi32
        kernel32
        shell32
        user32
        winver
        wsock32
26. Messages
    Diagnostic Messages
    Phase Invocation Messages
    Fortran Compiler Error Messages
        Message Format
        Message List
    Fortran Run-time Error Messages
        Message Format
        Message List
Index
Tables
1. PGI Compilers and Commands
1.1. PVF Win32 API Module Mappings
2.1. Property Summary by Property Page
2.2. PVF Project File Properties
2.3. Runtime Library Values for PVF and VC++ Projects
5.1. Stop-after Options, Inputs and Outputs
6.1. Commonly Used Command Line Options
7.1. Optimization and –O, –g and –M Options
9.1. Directive Summary Table
9.2. Directive Clauses Summary Table
9.3. Run-time Library Routines Summary
9.4. OpenMP-related Environment Variable Summary Table
10.1. PGI Accelerator Directive Summary Table
10.2. Directive Clauses Summary
10.3. Accelerator Runtime Library Routines
10.4. Accelerator Environment Variables
10.5. Supported Fortran Intrinsics
11.1. Proprietary Optimization-Related Fortran Directive Summary
11.2. !DEC$ Directives Summary Table
13.1. PGI-Related Environment Variable Summary
13.2. Supported PGI_TERM Values
15.1. Fortran and C/C++ Data Type Compatibility
15.2. Fortran and C/C++ Representation of the COMPLEX Type
15.3. Calling Conventions Supported by the PGI Fortran Compilers
16.1. 64-bit Compiler Options
16.2. Effects of Options on Memory and Array Sizes
16.3. 64-Bit Limitations
17.1. Representation of Fortran Data Types
17.2. Real Data Type Ranges
17.3. Scalar Type Alignment
18.1. PGI Build-Related Compiler Options
18.2. PGI Debug-Related Compiler Options
18.3. Optimization-Related PGI Compiler Options
18.4. Linking and Runtime-Related PGI Compiler Options
18.5. Subgroups for –help Option
18.6. –M Options Summary
18.7. Optimization and –O, –g, –Mvect, and –Mconcur Options
19.1. Initialization of REDUCTION Variables
21.1. IGNORE_TKR Example
22.1. Register Allocation
22.2. Standard Stack Frame
22.3. Stack Contents for Functions Returning struct/union
22.4. Integral and Pointer Arguments
22.5. Floating-point Arguments
22.6. Structure and Union Arguments
22.7. Register Allocation
22.8. Standard Stack Frame
22.9. Register Allocation for Example A-4
22.10. Win64 Fortran Fundamental Types
22.11. Fortran and C/C++ Data Type Compatibility
22.12. Fortran and C/C++ Representation of the COMPLEX Type
23.1. PVF Property Page Summary
24.1. PVF Build Macros
25.1. Fortran Data Type Mappings
25.2. DFLIB Function Summary
25.3. DFPORT Functions
25.4. DFWIN advapi32 Functions
Examples
1.1. PVF WinMain for Win32
1.2. PVF WinMain for x64
3.1. Use PVF to Debug an Application
3.2. Use PVF to Debug an Application with Arguments
5.1. Hello program
6.1. Makefiles with Options
7.1. Dot Product Code
7.2. Unrolled Dot Product Code
7.3. Vector operation using SSE instructions
7.4. Using SYSTEM_CLOCK code fragment
8.1. Sample Makefile
9.1. OpenMP Loop Example
10.1. Accelerator Kernel Timing Data
11.1. Prefetch Directive Use
12.1. Build a DLL: Fortran
12.2. Build DLLs Containing Mutual Imports: Fortran
12.3. Import a Fortran module from a DLL
15.1. Character Return Parameters
15.2. COMPLEX Return Values
15.3. Fortran Main Program f2c_main.f
15.4. C function f2c_func_
15.5. C Main Program c2f_main.c
15.6. Fortran Subroutine c2f_sub.f
15.7. Fortran Main Program f2cp_main.f calling a C++ function
15.8. C++ function f2cp_func.C
15.9. C++ main program cp2f_main.C
15.10. Fortran Subroutine cp2f_func.f
16.1. Large Array and Small Memory Model in Fortran
19.1. OpenMP Task Fortran Example
22.1. C Program Calling an Assembly-language Routine
22.2. Parameter Passing
22.3. C Program Calling an Assembly-language Routine
Preface

This guide is part of a set of manuals that describe how to use The Portland Group (PGI) Fortran compilers and program development tools integrated with Microsoft Visual Studio. These tools, combined with Visual Studio and assorted libraries, are collectively known as PGI Visual Fortran®, or PVF®. You can use PVF to edit, compile, debug, optimize, and profile serial and parallel applications for x86 (Intel Pentium II/III/4/M, Intel Centrino, Intel Xeon, AMD Athlon XP/MP) or x64 (AMD Athlon64/Opteron/Turion, Intel EM64T/Core 2) processor-based systems.

The PVF User's Guide provides operating instructions both for the Visual Studio integrated development environment and for command-level compilation, as well as general information about PGI's implementation of the Fortran language. This guide does not teach the Fortran programming language.
Audience Description

This manual is intended for scientists and engineers using PGI Visual Fortran. To fully understand this guide, you should be aware of the role of high-level languages, such as Fortran, in the software development process, and you should have some understanding of programming. PGI Visual Fortran is available on a variety of x86 and x64 hardware platforms and variants of the Windows operating system. You need to be familiar with the basic commands available on your system.
Compatibility and Conformance to Standards

Your system needs to be running a properly installed and configured version of PVF. For information on installing PVF, refer to the Release Notes and Installation Guide included with your software.

For further information, refer to the following:

• American National Standard Programming Language FORTRAN, ANSI X3.9-1978 (1978).
• ISO/IEC 1539-1:1991, Information technology – Programming Languages – Fortran, Geneva, 1991 (Fortran 90).
• ISO/IEC 1539-1:1997, Information technology – Programming Languages – Fortran, Geneva, 1997 (Fortran 95).
• Fortran 95 Handbook Complete ISO/ANSI Reference, Adams et al., The MIT Press, Cambridge, Mass., 1997.
• OpenMP Application Program Interface, Version 2.5, May 2005, http://www.openmp.org.
• Programming in VAX Fortran, Version 4.0, Digital Equipment Corporation (September, 1984).
• IBM VS Fortran, IBM Corporation, Rev. GC26-4119.
• Military Standard, Fortran, DOD Supplement to American National Standard Programming Language Fortran, ANSI X3.9-1978, MIL-STD-1753 (November 9, 1978).
Organization

Users typically begin by wanting to know how to use a product, and then find that they need more detailed information about specific areas of it. Knowing how, as well as why, you might use certain options or perform certain tasks is key to using the PGI compilers and tools effectively and efficiently. To facilitate ease of use, this manual is divided into the following two parts:
• Part I, Compiler Usage, contains the essential information on how to use the compiler.
• Part II, Reference Information, contains more detailed reference information about specific aspects of the compiler, such as the details of compiler options, directives, and more.

Part I, Compiler Usage, contains these chapters:

Chapter 1, “Getting Started with PVF” gives an overview of the Visual Studio environment and how to use PGI Visual Fortran in that environment.
Chapter 2, “Build with PVF” gives an overview of how to use PGI Visual Fortran (PVF) within the Microsoft Visual Studio IDE (Integrated Development Environment) to create and build a PVF project.
Chapter 3, “Debug with PVF” gives an overview of how to use the custom debug engine that provides the language-specific debugging capability required for Fortran.
Chapter 4, “Using MPI in PVF” describes how to use MPI with PGI Visual Fortran.
Chapter 5, “Getting Started with the Command Line Compilers” provides an introduction to the PGI compilers and describes their use and overall features.
Chapter 6, “Using Command Line Options” provides an overview of the command-line options as well as task-related lists of options.
Chapter 7, “Optimizing & Parallelizing” describes standard optimization techniques that, with little effort, allow users to significantly improve the performance of programs.
Chapter 8, “Using Function Inlining” describes how to use function inlining and shows how to create an inline library.
Chapter 9, “Using OpenMP” provides a description of the OpenMP Fortran parallelization directives and shows examples of their use.
Chapter 10, “Using an Accelerator” describes how to use the PGI Accelerator compilers.
Chapter 11, “Using Directives” provides a description of each Fortran optimization directive and shows examples of their use.
Chapter 12, “Creating and Using Libraries” discusses PGI support libraries, shared object files, and environment variables that affect the behavior of the PGI compilers.
Chapter 13, “Using Environment Variables” describes the environment variables that affect the behavior of the PGI compilers.
Chapter 14, “Distributing Files - Deployment” describes the deployment of your files once you have built, debugged and compiled them successfully.
Chapter 15, “Inter-language Calling” provides examples showing how to place C language calls in a Fortran program and Fortran language calls in a C program.
Chapter 16, “Programming Considerations for 64-Bit Environments” discusses issues of which programmers should be aware when targeting 64-bit processors.

Part II, Reference Information, contains these chapters:

Chapter 17, “Fortran Data Types” describes the data types that are supported by the PGI Fortran compilers.
Chapter 18, “Command-Line Options Reference” provides a detailed description of each command-line option.
Chapter 19, “OpenMP Reference Information” contains detailed descriptions of each of the OpenMP directives that PGI supports.
Chapter 21, “Directives Reference” contains detailed descriptions of PGI’s proprietary directives.
Chapter 22, “Run-time Environment” describes the assembly language calling conventions and examples of assembly language calls.
Chapter 23, “PVF Properties” provides a description of the Property Pages that PGI supports.
Chapter 24, “PVF Build Macros” provides a description of the build macros that PVF supports.
Chapter 25, “Fortran Module/Library Interfaces for Windows” provides a description of the Fortran module library interfaces that PVF supports.
Chapter 26, “Messages” provides a list of compiler error messages.
Hardware and Software Constraints

This guide describes versions of PGI Visual Fortran that are intended for use on x86 and x64 processor-based systems. Details concerning environment-specific values and defaults and system-specific features or limitations are presented in the release notes delivered with PGI Visual Fortran.
Conventions

The PVF User's Guide uses the following conventions:
italic
    Italic font is for emphasis.
Constant Width
    Constant width font is for commands, filenames, directories, examples and for language statements in the text, including assembly language statements.
[ item1 ]
    Square brackets indicate optional items. In this case item1 is optional.
{ item2 | item3 }
    Braces indicate that a selection is required. In this case, you must select either item2 or item3.
filename...
    Ellipsis indicate a repetition. Zero or more of the preceding item may occur. In this example, multiple filenames are allowed.
FORTRAN
    Fortran language statements are shown in the text of this guide using upper-case characters and a reduced point size.

The PGI compilers and tools are supported on both 32-bit and 64-bit variants of Windows operating systems on a variety of x86-compatible processors. There are a wide variety of releases and distributions of each of these types of operating systems. Further, The PVF User’s Guide uses a number of terms with respect to these platforms. For a complete definition of these terms and other terms in this guide with which you may be unfamiliar, PGI provides a glossary of terms which you can access at www.pgroup.com/support/definitions.htm:

AMD64, barcelona, DLL, driver, dynamic library, EM64T, hyperthreading (HT), IA32, Large arrays, -mcmodel=small, -mcmodel=medium, MPI, MPICH, multi-core, NUMA, shared library, SSE, SSE1, SSE2, SSE3, SSE4A and ABM, SSSE3, static linking, Win32, Win64, Windows, x64, x86, x87
The following table lists the PGI compilers and tools and their corresponding commands:

Table 1. PGI Compilers and Commands

Compiler or Tool    Language or Function    Command
PGF77               FORTRAN 77              pgf77
PGF95               Fortran 90/95           pgf95
Note

The commands pgf95 and pgfortran are equivalent. In general, the designation PGI Fortran is used to refer to The Portland Group’s Fortran 90/95 compiler, and pgfortran is used to refer to the command that invokes the compiler. A similar convention is used for each of the PGI compilers and tools. For simplicity, examples of command-line invocation of the compilers generally reference the pgfortran command, and most source code examples are written in Fortran. Usage of the PGF77 compiler, whose features are a subset of PGF95 or PGFORTRAN, is similar.

There are a wide variety of x86-compatible processors in use. All are supported by the PGI compilers and tools. Most of these processors are forward-compatible, but not backward-compatible, meaning that code compiled to target a given processor will not necessarily execute correctly on a previous-generation processor. A table listing the processor options that PGI supports is available in the Release Notes. The table also includes the features utilized by the PGI compilers that distinguish them from a compatibility standpoint.

In this manual, the convention is to use "x86" to specify the group of processors that are "32-bit" but not "64-bit." The convention is to use "x64" to specify the group of processors that are both "32-bit" and "64-bit." x86 processor-based systems can run only 32-bit operating systems. x64 processor-based systems can run either 32-bit or 64-bit operating systems, and can execute all 32-bit x86 binaries in either case. x64 processors have additional registers and 64-bit addressing capabilities that are utilized by the PGI compilers and tools when running on a 64-bit operating system. The prefetch, SSE1, SSE2 and SSE3 processor features further distinguish the various processors. Where such distinctions are important with respect to a given compiler option or feature, it is explicitly noted in this manual.
Note

The default for performing scalar floating-point arithmetic is to use SSE instructions on targets that support SSE1 and SSE2.
Related Publications

The following documents contain additional information related to the x86 and x64 architectures, and the compilers and tools available from The Portland Group.
• PGI Fortran Reference manual describes the FORTRAN 77, Fortran 90/95, and HPF statements, data types, input/output format specifiers, and additional reference material related to use of the PGI Fortran compilers.
• System V Application Binary Interface Processor Supplement by AT&T UNIX System Laboratories, Inc. (Prentice Hall, Inc.).
• System V Application Binary Interface X86-64 Architecture Processor Supplement, www.x86-64.org/abi.pdf.
• Fortran 95 Handbook Complete ISO/ANSI Reference, Adams et al., The MIT Press, Cambridge, Mass., 1997.
• Programming in VAX Fortran, Version 4.0, Digital Equipment Corporation (September, 1984).
• IBM VS Fortran, IBM Corporation, Rev. GC26-4119.
• OpenMP Application Program Interface, Version 2.5, May 2005 (OpenMP Architecture Review Board, 1997-2005).
Part I. Compiler Usage

Users typically begin by wanting to know how to use a product and often then find that they need more information and facts about specific areas of the product. Knowing how as well as why you might use certain options or perform certain tasks is key to using the PGI compilers and tools effectively and efficiently. In the chapters in this part of the guide you learn how to:
• Get started using the PGI compilers, as described in Chapter 1, “Getting Started” on page 1.
• Use the most common command line options and learn why specific ones are especially beneficial for you to use, as described in Chapter 2, “Using Command Line Options” on page 17.
• Use optimization and parallelization to increase the performance of your program, as described in Chapter 3, “Optimizing & Parallelizing” on page 23.
• Invoke function inlining and create an inline library, as described in Chapter 4, “Using Function Inlining” on page 49.
• Use OpenMP directives, pragmas, run-time libraries, and environment variables, as described in Chapter 5, “Using OpenMP” on page 55.
• Use MPI, including compiling, linking and generating MPI profile data, as described in Chapter 6, “Using MPI” on page 73.
• Use the PGI Accelerator compilers, as described in Chapter 7, “Using an Accelerator” on page 83.
• Use PGI directives and pragmas, as described in Chapter 8, “Using Directives and Pragmas” on page 107.
• Create and use libraries, as described in Chapter 9, “Creating and Using Libraries” on page 117.
• Create and use environment variables to control the behavior of PGI software, as described in Chapter 10, “Using Environment Variables” on page 131.
• Distribute files and deploy your applications, as described in Chapter 11, “Distributing Files - Deployment” on page 145.
• Make inter-language calls, as described in Chapter 12, “Inter-language Calling” on page 151.
• Incorporate programming considerations for 64-bit environments, as described in Chapter 13, “Programming Considerations for 64-bit Environments” on page 167.
• Properly use C/C++ inline assembly instructions and intrinsics, as described in Chapter 14, “C/C++ Inline Assembly and Intrinsics” on page 175.
Chapter 1. Getting Started with PVF

This chapter describes how to use PGI Visual Fortran (PVF) within the Microsoft Visual Studio IDE (Integrated Development Environment). For information on general use of Visual Studio, refer to Microsoft’s documentation. There are three products in the PVF product family. Each product is integrated with a particular version of Microsoft Visual Studio. PGI Visual Fortran 2010 is integrated with Microsoft Visual Studio 2010 (VS 2010), PGI Visual Fortran 2008 is integrated with Microsoft Visual Studio 2008 (VS 2008), and PGI Visual Fortran 2005 is integrated with Microsoft Visual Studio 2005 (VS 2005). Throughout this document, "PGI Visual Fortran" or "PVF" refers to all PVF products collectively. Further, “Microsoft Visual Studio” refers to the compatible version of Visual Studio that you are using with PVF. The following sections provide a general overview of the many features and capabilities available to you once you have installed PVF. Exploring the menus and trying the sample program in this section provide a quick way to get started using PVF.
PVF on the Start Menu

PVF provides a Start menu entry with access to each installed version of PVF as well as easy access to the PGI profiler, documentation, and licensing. This section provides a quick overview of the PVF menu selections. To access the PGI Visual Fortran menu, from the Start menu, select Start | All Programs | PGI Visual Fortran.
Shortcuts to Launch PVF

From the PGI Visual Fortran menu, you have access to each version of PGI Visual Fortran you have installed. For example, if you have all three PVF products installed at the same time, then you have a shortcut for PVF 2010, PVF 2008 and PVF 2005. PVF runs within Visual Studio, so to invoke PVF you must invoke Visual Studio. If other languages such as Visual C++ or Visual Basic are installed, they are available in the same instance of Visual Studio as PVF. The PVF shortcuts include the following:
• PGI Visual Fortran 2010 – Select this option to invoke PGI Visual Fortran 2010.
• PGI Visual Fortran 2008 – Select this option to invoke PGI Visual Fortran 2008.
• PGI Visual Fortran 2005 – Select this option to invoke PGI Visual Fortran 2005.
Commands Submenu

From the Commands menu, you have access to PVF command shells for each version of PVF installed on your system. For example, if you have PVF 2005 and PVF 2008 installed, then you have a selection for each of these versions. These shortcuts invoke a command shell with the environment configured for the PGI compilers and tools. The command line compilers and graphical tools may be invoked from any of these command shells without any further configuration.
Important

If you invoke a generic Command Prompt using Start | All Programs | Accessories | Command Prompt, then the environment is not pre-configured for PGI products.
• PVF 2010 Cmd (64) – Select this option to launch a command shell in which the environment is pre-initialized to use PVF 2010 and the 64-bit PGI compilers and tools. The default environment variables are already set and available. (Available only on x64 systems.)
• PVF 2010 Cmd – Select this option to launch a command shell in which the environment is pre-initialized to use PVF 2010 and the 32-bit PGI compilers and tools. The default environment variables are already set and available.
• PVF 2008 Cmd (64) – Select this option to launch a command shell in which the environment is pre-initialized to use PVF 2008 and the 64-bit PGI compilers and tools. The default environment variables are already set and available. (Available only on x64 systems.)
• PVF 2008 Cmd – Select this option to launch a command shell in which the environment is pre-initialized to use PVF 2008 and the 32-bit PGI compilers and tools. The default environment variables are already set and available.
• PVF 2005 Cmd (64) – Select this option to launch a command shell in which the environment is pre-initialized to use PVF 2005 and the 64-bit PGI compilers and tools. The default environment variables are already set and available. (Available only on x64 systems.)
• PVF 2005 Cmd – Select this option to launch a command shell in which the environment is pre-initialized to use PVF 2005 and the 32-bit PGI compilers and tools. The default environment variables are already set and available.
Profiler Submenu

Use the profiler menu to launch the PGPROF Performance Profiler. PGPROF provides a way to visualize and diagnose the performance of the components of your program, and includes features that help you understand why certain parts of your program have high execution times.
Documentation Submenu

From the Documentation menu, you have access to all PGI documentation that is useful for PVF users. The documentation that is available includes the following:
• AMD Core Math Library – Select this option to display documentation that describes elements of the AMD Core Math Library, a software development library released by AMD that includes a set of useful mathematical routines optimized for AMD processors.
• CUDA Fortran Reference – Select this option to display the CUDA Fortran Programming Guide and Reference. This document describes CUDA Fortran, a small set of extensions to Fortran that support and build upon the CUDA computing architecture.
• Fortran Language Reference – Select this option to display the PGI Fortran Reference for PGI Visual Fortran. This document describes The Portland Group's implementation of the FORTRAN 77 and Fortran 90/95 languages and presents the Fortran language statements, intrinsics, and extension directives.
• Installation Guide – Select this option to display the PVF Installation Guide. This document provides an overview of the steps required to successfully install and license PGI Visual Fortran.
• Release Notes – Select this option to display the latest PVF Release Notes. This document describes the new features of the PVF IDE interface, differences in the compilers and tools from previous releases, and late-breaking information not included in the standard product documentation.
• User's Guide – Select this option to display the PVF User's Guide. This document provides operating instructions for both the Visual Studio integrated development environment and command-level compilation, as well as general information about PGI’s implementation of the Fortran language.
Licensing Submenu

From the Licensing menu, you have access to the PGI License Agreement and an automated license generating tool:
• Generate License – Select this option to display the PGI License Setup dialog that walks you through the steps required to download and install a license for PVF. To complete this process you need an internet connection.
• License Agreement – Select this option to display the license agreement that is associated with use of PGI software.
Introduction to PVF

This section provides an introduction to PGI Visual Fortran as well as basic information about how things work in Visual Studio. It contains an example of how to create a PVF project that builds a simple application, along with information on how to run and debug this application from within PVF. If you’re already familiar with PVF or are comfortable with project creation in VS, you may want to skip ahead to the next section.
Visual Studio Settings

PVF projects and settings are available as with any other language. The first time Visual Studio is started, it may display a list of default settings from which to choose; select General Development Settings. If Visual Studio was installed prior to the PVF install, it starts as usual after PVF is installed, except that PVF projects and settings are now available.
Solutions and Projects

The Visual Studio IDE frequently uses the terms solution and project. For consistency of terminology, it is useful to discuss these here.

solution
    All the things you need to build your application, including source code, configuration settings, and build rules. You can see the graphical representation of your solution in the Solution Explorer window in the VS IDE.
project
    Every solution contains one or more projects. Each project produces one output, where an output is an executable, a static library, or a dynamic-link library (DLL). Each project is specific to a single programming language, like PGI Visual Fortran or Microsoft Visual C++, but you can have projects of different languages in the same solution.

We examine the relationship between a solution and its projects in more detail by using an example. But first let’s look at an overview of the process. Typically there are these steps:
1. Create a new PVF project.
2. Modify the source.
3. Build the solution.
4. Run the application.
5. Debug the application.
Creating a Hello World Project

Let’s walk through how to create a PVF solution for a simple program that prints "Hello World".
Step 1: Create Hello World Project

Follow these steps to create a PVF project to run "Hello World".
1. Select File | New | Project from the Visual Studio main menu. The New Project dialog appears.
2. In the Project types window located in the left pane of the dialog box, expand PGI Visual Fortran, and then select Win32.
3. In the Templates window located in the right pane of the dialog box, select Console Application (32-bit).
4. In the Name field located at the bottom of the dialog box, type: HelloWorld.
5. Click OK.
You should see the Solution Explorer window in PVF. If not, you can open it now using View | Solution Explorer from the main menu. In this window you should see a solution named HelloWorld that contains a PVF project, which is also named HelloWorld.
Step 2: Modify the Hello World Source

The project contains a single source file called ConsoleApp.f90. If the source file is not already opened in the editor, open it by double-clicking the file name in the Solution Explorer. The source code in this file should look similar to this:

program main
    implicit none

    ! Variables

    ! Body

end program main
Now add a print statement to the body of the main program so this application produces output. For example, the new program may look similar to this:

program main
    implicit none

    ! Variables

    ! Body
    print *, "Hello World"

end program main
Step 3: Build the Solution

You are now ready to build a solution. To do this, from the main menu, select Build | Build Solution. The View | Output window shows the results of the build.
Step 4: Run the Application

To run the application, select Debug | Start Without Debugging. This action launches a command window in which you see the output of the program. It looks similar to this:

Hello World
Press any key to continue . . .
Step 5: View the Solution, Project, and Source File Properties

The solution, projects, and source files that make up your application have properties associated with them.
Note

The set of property pages and properties may vary depending on whether you are looking at a solution, a project, or a file. For a description of the property pages that PVF supports, refer to Chapter 23, “PVF Properties,” on page 313.

To see a solution’s properties:
1. Select the solution in the Solution Explorer.
2. Right-click to bring up a context menu.
3. Select the Properties option.
This action brings up the Property Pages dialog.
To see the properties for a project or file:
1. Select a project or a file in the Solution Explorer.
2. Right-click to bring up a context menu.
3. Select the Properties option.
This action brings up the Property Pages dialog.

At the top of the Property Pages dialog there is a box labeled Configuration. In a PVF project, two configurations are created by default:
• The Debug configuration has properties set to build a version of your application that can be easily examined and controlled using the PVF debugger.
• The Release configuration has properties set so a version of your application is built with some general optimizations.
When a project is initially created, the Debug configuration is the active configuration. When you built the HelloWorld solution in “Creating a Hello World Project,” on page 4, you built and ran the Debug configuration of your project. Let’s look now at how to debug this application.
Step 6: Run the Application Using the Debugger

To debug an application in PVF:
1. Set a breakpoint on the print statement in ConsoleApp.f90. To set a breakpoint, left-click in the far left side of the editor on the line where you want the breakpoint. A red circle appears to indicate that the breakpoint is set.
2. Select Debug | Start Debugging from the main menu to start the PGI Visual Fortran debug engine. The debug engine stops execution at the breakpoint set in Step 1.
3. Select Debug | Step Over to step over the print statement. Notice that the program output appears in a PGI Visual Fortran console window.
4. Select Debug | Continue to continue execution. The program should exit.
For more information about building and debugging your application, refer to Chapter 2, “Build with PVF” and Chapter 3, “Debug with PVF”. Now that you have seen a complete example, let’s take a look at more of the functionality available in several areas of PVF.
Using PVF Help

The PVF User’s Guide and PGI Fortran Reference are accessible in PDF form from the Visual Studio Help menu:
Help | PGI Visual Fortran Help
Help | PGI Fortran Reference
Both of these documents, and all other PGI documentation installed with PVF, are also available from the PGI Visual Fortran | Documentation folder off the Start Menu. Context-sensitive help is not currently supported in PVF.
PVF Sample Projects

The PVF installation includes several sample solutions, available from the PVF installation directory, typically in a directory called Samples:

$(VSInstallDir)\PGI Visual Fortran\Samples\
These samples provide simple demonstrations of specific PVF project and solution types.

In the dlls subdirectory of the Samples directory, you find this sample program:

pvf_dll
    Creates a DLL that exports routines written in Fortran.

In the gpu subdirectory of the Samples directory, you find these sample programs, which require a PGI Accelerator license to compile and a GPU to run:

AccelPM_Matmul
    Uses directives from the PGI Accelerator Programming Model to offload a matmul computation to a GPU.
CUDAFor_Matmul
    Uses CUDA Fortran to offload a matmul computation to a GPU.

In the interlanguage subdirectory of the Samples directory, you find these sample programs, which require that Visual C++ be installed to build and run:

pvf_calling_vc
    Creates a solution containing a Visual C++ static library, where the source is compiled as C, and a PVF main program that calls it.
vcmain_calling_pvfdll
    Calls a routine in a PVF DLL from a main program compiled by VC++.

In the win32api subdirectory of the Samples directory, you find this sample program:

menu_dialog
    Uses a resource file and Win32 API calls to create and control a menu and a dialog box.
Compatibility

PGI Visual Fortran provides features that are compatible with those supported by older Windows Fortran products, such as Compaq® Visual Fortran. These include:
• Win32 API Support (dfwin)
• Unix/Linux Portability Support (dflib, dfport)
• Graphical User Interface Support

PVF provides access to a number of libraries that export C interfaces by using Fortran modules. This is the mechanism used by PVF to support the Win32 Application Programming Interface (API) and Unix/Linux portability libraries. If C: is your system drive and <target> is your target system, such as win64, then source code containing the interfaces in these modules is located here:

C:\Program Files\PGI\<target>\<release>\src\
For more information about the specific functions in dfwin, dflib, and dfport, refer to Chapter 25, “Fortran Module/Library Interfaces for Windows”.
Win32 API Support (dfwin)

The Microsoft Windows operating system interface (the system call and library interface) is known collectively as the Win32 API. This is true for both the 32-bit and 64-bit versions of Windows; there is no "Win64 API" for 64-bit Windows. The only difference on 64-bit systems is that pointers are 64 bits rather than the 32-bit pointers found on 32-bit Windows. PGI Visual Fortran provides access to the Win32 API using Fortran modules. For details on specific Win32 API routines, refer to the Microsoft MSDN website. For ease of use, the only module you need to use to access the Fortran interfaces to the Win32 API is dfwin. To use this module, simply add the following line to your Fortran code.

use dfwin
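As an illustration, a minimal console program can call a Win32 routine through dfwin. The sketch below assumes that the MessageBox interface and the MB_OK constant are among those exposed by dfwin (MessageBox belongs to the user32 mappings); the exact argument kinds can differ between 32-bit and 64-bit targets, so treat this as illustrative rather than definitive.

```fortran
! Illustrative sketch only: assumes dfwin provides MessageBox and MB_OK.
program hello_box
    use dfwin
    implicit none
    integer :: ret
    ! Display a simple message box. The first argument (the parent
    ! window handle) is null because this program creates no windows.
    ! "..."C denotes a null-terminated string, a DEC/PGI extension.
    ret = MessageBox(0, "Hello from PVF"C, "Greeting"C, MB_OK)
end program hello_box
```

Build this as a Console Application; the message box appears when the program runs.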
Table 1.1 lists all of the Win32 API modules and the Win32 libraries to which they correspond.

Table 1.1. PVF Win32 API Module Mappings

PVF Fortran Module    C Win32 API Lib    C Header File
advapi32              advapi32.lib       WinBase.h
comdlg32              comdlg32.lib       ComDlg.h
gdi32                 gdi32.lib          WinGDI.h
kernel32              kernel32.lib       WinBase.h
shell32               shell32.lib        ShellAPI.h
user32                user32.lib         WinUser.h
winver                winver.lib         WinVer.h
wsock32               wsock32.lib        WinSock.h
Unix/Linux Portability Interfaces (dflib, dfport)

PVF also includes Fortran module interfaces to libraries supporting some standard C library and Unix/Linux system call functionality. These functions are provided by the dflib and dfport modules. To utilize these modules, add the appropriate use statement:
use dflib
use dfport
For more information about the specific functions in dflib and dfport, refer to Chapter 25, “Fortran Module/Library Interfaces for Windows”.
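The portability modules are used like any other Fortran module. For instance, the following sketch assumes that dfport provides the Unix-style getenv subroutine, one of the portability routines of the kind documented in Chapter 25; verify the exact interface in your release before relying on it.

```fortran
! Illustrative sketch only: assumes dfport provides a Unix-style getenv.
program show_path
    use dfport
    implicit none
    character(len=256) :: val
    ! Retrieve the value of the PATH environment variable.
    call getenv("PATH", val)
    print *, "PATH = ", trim(val)
end program show_path
```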
Windows Applications and Graphical User Interfaces

Programs that manage graphical user interface components using Fortran code are referred to as Windows Applications within PVF. PVF Windows Applications are characterized by the lack of a PROGRAM statement. Instead, Windows Applications must provide a WinMain function like the following:

Example 1.1. PVF WinMain for Win32

integer(4) function WinMain (hInstance, &
                             hPrevInstance, lpszCmdLine, nCmdShow)
  integer(4) hInstance
  integer(4) hPrevInstance
  integer(4) lpszCmdLine
  integer(4) nCmdShow
Example 1.2. PVF WinMain for x64

integer(4) function WinMain (hInstance, &
                             hPrevInstance, lpszCmdLine, nCmdShow)
  integer(8) hInstance
  integer(8) hPrevInstance
  integer(8) lpszCmdLine
  integer(4) nCmdShow
nCmdShow is an integer specifying how the window is to be shown. Since hInstance, hPrevInstance, and lpszCmdLine are all pointers, in a 32-bit program they must be 4-byte integers; in a 64-bit program, they must be 8-byte integers. For more details, you can look up WinMain in the Microsoft Platform SDK documentation.

You can create a PVF Windows Application template by selecting Windows Application in the PVF New Project dialog. The project type of this name provides a default implementation of WinMain, and the project’s properties are configured appropriately. You can also change the Configuration Type property of another project type to Windows Application using the General property page, described in “General Property Page,” on page 327. If you do this, the configuration settings change to expect WinMain instead of PROGRAM, but a WinMain implementation is not provided. For an illustration of how to build a small application that uses WinMain, see the menu_dialog sample program available in the sample programs area:

$(VSInstallDir)\PGI Visual Fortran\Samples\win32api\menu_dialog
Building Windows Applications from the Command Line

Windows Applications can also be built using a command line version of pgfortran. To enable this feature, add the -winapp option to the compiler driver command line when linking the application. This option causes the linker to include the correct libraries and object files needed to support a Windows Application. However,
it does not add any additional system libraries to the link line. Add any required system libraries by adding the option -defaultlib:<library> to the link command line for each library. Here, <library> can be any of the following: advapi32, comdlg32, gdi32, kernel32, shell32, user32, winver, or wsock32. For more information about the specific functions in each of these libraries, refer to Chapter 25, “Fortran Module/Library Interfaces for Windows”.

Menus, Dialog Boxes, and Resources

The use of resources in PVF is similar to their use in Visual C++. The resource files that control menus and dialog boxes have the file extension .rc. These files are processed with the Microsoft Resource Compiler to produce binary .res files. A .res file is then directly passed to the linker, which incorporates the resources into the output file. See the PVF sample project menu_dialog for details on how resources are used within a Windows Application.
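For instance, a Windows Application whose WinMain calls user32 routines and that uses a compiled resource file might be linked from a PVF command shell as follows; the file names here are illustrative, not part of any PVF sample.

```shell
REM Illustrative command line: link a Windows Application that calls
REM user32 routines and incorporates a compiled resource file.
pgfortran -winapp -defaultlib:user32 winmain.f90 menu.res -o myapp.exe
```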
Note The complete Visual C++ Resource Editor is not available in PVF. Although you can edit files like icons (.ico) and bitmaps (.bmp) directly, the .rc file is not updated automatically by the environment. You must either install Visual C++, in which case the resource editor is fully functional, or you must edit .rc files using the source code (text) editor.
Chapter 2. Build with PVF This chapter describes how to use PGI Visual Fortran (PVF) within the Microsoft Visual Studio IDE (Integrated Development Environment) to create and build a PVF project. For information on general use of Visual Studio, see the Visual Studio integrated help. PVF runs within Visual Studio, so to invoke PVF you must invoke Visual Studio. If other languages such as Visual C++ or Visual Basic are installed, they will be available in the same instance of Visual Studio as PVF.
Creating a PVF Project PVF Project Types Once Visual Studio is running, you can use it to create a PGI Visual Fortran project. PVF supports a variety of project types: • Console Application - An application (.exe) that runs in a console window, using text input and output. • Dynamic Library - A dynamically-linked library file (.dll) that provides routines that can be loaded on demand when called by the program that needs them. • Static Library - An archive file (.lib) containing one or more object files that can be linked to create an executable. • Windows Application - An application (.exe) that supports a graphical user interface that makes use of components like windows, dialog boxes, menus, and so on. The name of the program entry point for such applications is WinMain. • Empty Project - A skeletal project intended to allow migration of existing applications to PVF. This project type does not include any source files. By default, an empty project is set to build an application (.exe).
Creating a New Project To create a new project, follow these steps: 1. Select File | New | Project from the File menu. The New Project dialog appears. 2. In the left-hand pane of the dialog, select PGI Visual Fortran.
The right-hand pane displays the icons that correspond to the project types listed in “PVF Project Types,” on page 11.
Note On x64 systems, 32-bit and 64-bit project types are clearly labeled. These types may be filtered using the 32-bit and 64-bit folders in the left-hand navigation pane of the dialog. 3. Select the project type icon corresponding to the project type you want to create. 4. Name the project in the edit box labeled Name.
Tip The name of the first project in a solution is also used as the name of the solution itself. 5. Select where to create the project in the edit box labeled Location. 6. Click OK and the project is created. Now look in the Solution Explorer to see the newly created project files and folders.
PVF Solution Explorer PVF uses the standard Visual Studio Solution Explorer to organize files in PVF projects.
Tip If the Solution Explorer is not already visible in the VS IDE, open it by selecting View | Solution Explorer. Visual Studio uses the term project to refer to a set of files, build rules, and so on that are used to create an output like an executable, DLL, or static library. Projects are collected into a solution, which is composed of one or more projects that are usually related in some way. PVF projects are reference-based projects, which means that although there can be folders in the representation of the project in the Solution Explorer, there are not necessarily any corresponding folders in the file system. Similarly, files added to the project can be located anywhere in the file system; adding them to the project does not copy them or move them to a project folder in the file system. The PVF project system keeps an internal record of the location of all the files added to a project.
Adding Files to a PVF Project This section describes how to add a new file to a project and how to add an existing file to a project.
Add a New File To add a new file to a PVF project, follow these steps: 1. Use the Solution Explorer to select the PVF project to which you want to add the new file.
2. Right-click on this PVF project to bring up a context menu. 3. Select Add => New Item... 4. In the Add New Item dialog box, select a file type from the available templates. 5. A default name for this new file will be in the Name box. Type in a new name if you do not want to use the default. 6. Click Add.
Add an Existing File To add an existing file to a PVF project, follow these steps: 1. Use the Solution Explorer to select the PVF project to which you want to add the file. 2. Right-click on this PVF project to bring up a context menu. 3. Select Add => Existing Item… 4. In the Browse window that appears, navigate to the location of the file you want to add. 5. Select the file and click Add.
Tip You can add more than one file at a time by selecting multiple files.
Adding a New Project to a Solution Each project produces one output, where an output is an executable, a static library, or a dynamic-link library (DLL). For example, if you want one solution to build both a library and an application that links against that library, you need two projects in the solution. To add a project to a solution, follow these steps: 1. Use the Solution Explorer to select the solution. 2. Right-click on the solution to bring up a context menu. 3. Select Add => New Project... The Add New Project dialog appears. To learn how to use this dialog, refer to “Creating a PVF Project,” on page 11. 4. In the Add New Project dialog box, select a project type from the available templates. 5. When you have selected and named the new project, click OK.
Note Each project is specific to a single programming language, like PGI Visual Fortran or Microsoft Visual C++, but you can have projects that use different languages in the same solution.
Project Dependencies and Build Order If your solution contains more than one project, set up the dependencies for each project to ensure that projects are built in the correct order. To set project dependencies: 1. Right-click a project in the Solution Explorer. 2. From the resulting context menu select Project Dependencies. The dialog box that opens has two tabs: Dependencies and Build Order. a. Use the Dependencies tab to put a check next to the projects on which the current project depends. b. Use the Build Order tab to verify the order in which projects will be built.
Configurations Visual Studio projects are generally created with two default configurations: Debug and Release. The Debug configuration is set up to build a version of your application that can be easily debugged. The Release configuration is set up to build a generally-optimized version of your application. Other configurations may be created as desired using the Configuration Manager.
Platforms In Visual Studio, the platform refers to the operating system for which you are building your application. In a PVF project on a system running a 32-bit Windows OS, only the Win32 platform is available. In a PVF project on a system running a 64-bit Windows OS, both the Win32 and x64 platforms are available. When you create a new project, you select its default platform. When more than one platform is available, you can add additional platforms to your project once it exists. To do this, you use the Configuration Manager.
Setting Global User Options Global user options are settings that affect all Visual Studio sessions for a particular user, regardless of which project they have open. PVF supports several global user settings which affect the directories that are searched for executables, include files, and library files. To access these: 1. From the main menu, select Tools | Options... 2. From the Options dialog, expand Projects and Solutions. 3. Select PVF Directories in the dialog’s navigation pane. The PVF Directories page has two combo boxes at the top: • Platform allows selection of the platform (e.g., x64). • Show directories for allows selection of the search path to edit. Search paths that can be edited include the Executable files path, the Include and module files path, and the Library files path.
Tip It is good practice to ensure that all three paths contain directories from the same release of the PGI compilers; mixing and matching different releases of the compiler executables, include files, and libraries can have undefined results.
Setting Configuration Options using Property Pages Visual Studio makes extensive use of property pages to specify configuration options. Property pages are used to set options for compilation, optimization and linking, as well as how and where other tools like the debugger operate in the Visual Studio environment. Some property pages apply to the whole project, while others apply to a single file and can override the project-wide properties. You can invoke the Property Page dialog in several ways: • Select Project | Properties to invoke the property pages for the currently selected item in the Solution Explorer. This item may be a project, a file, a folder, or the solution itself. • Right-click a project node in the Solution Explorer and select Properties from the resulting context menu to invoke that project’s property pages. • Right-click a file node in the Solution Explorer and select Properties from the context menu to invoke that file’s property pages. The Property Page dialog has two combo boxes at the top: Configuration and Platform. You can change the configuration box to All Configurations so the property is changed for all configurations.
Tip A common error is to change a property like ‘Additional Include Directories’ for the Debug configuration but not the Release configuration, thereby breaking the build of the Release configuration. Chapter 18, “Command-Line Options Reference,” on page 173 contains descriptions of compiler options in terms of the corresponding command-line switches. For compiler options that can be set using the PVF property pages, the description of the option includes instructions on how to do so.
Property Pages Properties, or configuration options, are grouped into property pages. Further, property pages are grouped into categories. Depending on the type of project, the set of available categories and property pages vary. The property pages in a PVF project are organized into the following categories:
• General
• Debugging
• Fortran
• Linker
• Librarian
• Resources
• Build Events
• Custom Build Step
Tip The Fortran, Linker and Librarian categories contain a Command Line property page where the command line derived from the properties can be seen. Options that are not supported by the PVF property pages can be added to the command line from this property page by entering them in the Additional Options field. Table 2.1 shows the properties associated with each property page, listing them in the order in which you see them in the Properties dialog box. For a complete description of each property, refer to Chapter 23, “PVF Properties,” on page 313. Table 2.1. Property Summary by Property Page
General Property Page
  Output Directory; Intermediate Directory; Extensions to Delete on Clean; Configuration Type; Build Log File; Build Log Level
Debugging Property Page
  Application Command; Application Arguments; Environment; Merge Environment; MPI Debugging; Working Directory [Serial]; Number of Processes [Local MPI]; Working Directory [Local MPI]; Additional Arguments: mpiexec [Local MPI]; Location of mpiexec [Local MPI]; Number of Cores [Cluster MPI]; Working Directory [Cluster MPI]; Standard Input [Cluster MPI]; Standard Output [Cluster MPI]; Standard Error [Cluster MPI]; Additional Arguments: job submit [Cluster MPI]; Additional Arguments: mpiexec [Cluster MPI]; Location of job.exe [Cluster MPI]
Fortran | General
  Display Startup Banner; Additional Include Directories; Module Path; Object File Name; Debug Information Format; Optimization
Fortran | Optimization
  Optimization; Global Optimizations; Vectorization; Inlining; Use Frame Pointer; Loop Unroll Count; Auto-Parallelization
Fortran | Preprocessing
  Preprocess Source File; Additional Include Directories; Ignore Standard Include Path; Preprocessor Definitions; Undefine Preprocessor Definitions
Fortran | Code Generation
  Runtime Library
Fortran | Language
  Fortran Dialect; Treat Backslash as Character; Extend Line Length; Process OpenMP Directives; MPI; Enable CUDA Fortran; CUDA Fortran Register Limit; CUDA Fortran Use Fused Multiply-Adds; CUDA Fortran Use Fast Math Library; CUDA Fortran Toolkit; CUDA Fortran Compute Capability; CUDA Fortran CC 1.0; CUDA Fortran CC 1.1; CUDA Fortran CC 1.2; CUDA Fortran CC 1.3; CUDA Fortran CC 2.0; CUDA Fortran Keep Binary; CUDA Fortran Keep Kernel Source; CUDA Fortran Keep PTX; CUDA Fortran Emulation
Fortran | Floating Point Options
  Floating Point Exception Handling; Floating Point Consistency; Flush Denormalized Results to Zero; Treat Denormalized Values as Zero; IEEE Arithmetic
Fortran | External Procedures
  Calling Convention; String Length Arguments; Case of External Names
Fortran | Target Processors
  AMD Athlon; AMD Barcelona; AMD Istanbul; AMD Shanghai; Intel Core 2; Intel Core i7; Intel Penryn; Intel Pentium 4; Generic x86 [Win32 only]; Generic x86-64 [x64 only]
Fortran | Target Accelerators
  Target NVIDIA Accelerator; NVIDIA: Register Limit; NVIDIA: Use Fused Multiply-Adds; NVIDIA: Use Fast Math Library; NVIDIA: Use 24-bit Subscript Multiplication; NVIDIA: Synchronous Kernel Launch; NVIDIA: CUDA Toolkit; NVIDIA: Compute Capability; NVIDIA: CC 1.0; NVIDIA: CC 1.1; NVIDIA: CC 1.2; NVIDIA: CC 1.3; NVIDIA: CC 2.0; NVIDIA: Keep Kernel Binary; NVIDIA: Keep Kernel Source; NVIDIA: Keep Kernel PTX; NVIDIA: Enable Profiling; NVIDIA: Analysis Only; Target Host
Fortran | Diagnostics
  Warning Level; Generate Assembly; Annotate Assembly; Accelerator Information; CCFF Information; Fortran Language Information; Inlining Information; IPA Information; Loop Intensity Information; Loop Optimization Information; LRE Information; OpenMP Information; Optimization Information; Parallelization Information; Unified Binary Information; Vectorization Information
Fortran | Profiling
  Function-Level Profiling; Line-Level Profiling; MPI; Suppress CCFF Information; Enable Limited DWARF
Fortran | Command Line
  All options (read-only contents box); Additional options (contents box you can modify)
Linker | General
  Output File; Additional Library Directories; Export Symbols
Linker | Input
  Additional Dependencies
Linker | Command Line
  All options (read-only contents box); Additional options (contents box you can modify)
Librarian | General
  Output File; Additional Library Directories; Additional Dependencies
Librarian | Command Line
  All options (read-only contents box); Additional options (contents box you can modify)
Resources | Command Line
  All options (read-only contents box); Additional options (contents box you can modify)
Build Events | Pre-Build Event
  Command Line; Description; Excluded From Build
Build Events | Pre-Link Event
  Command Line; Description; Excluded From Build
Build Events | Post-Build Event
  Command Line; Description; Excluded From Build
Custom Build Step | General
  Command Line; Description; Outputs; Additional Dependencies
Setting File Properties Using the Properties Window Properties accessed from the Property Pages dialog allow you to change the configuration options for a project or file. The term property, however, has another meaning in the context of the Properties Window, where it means an attribute or characteristic of the file. To see a file's properties, do this: 1. Select the file in the Solution Explorer. 2. From the View menu, open the Properties Window. Some file properties can be modified, while others are read-only. The values of the properties in the Properties Window remain constant regardless of the Configuration (Debug, Release) or Platform (Win32, x64) selected. Table 2.2 lists the file properties that are available in a PVF project. Table 2.2. PVF Project File Properties
Name
  Shows the name of the selected file.
Filename
  Shows the name of the selected file.
FilePath
  Shows the absolute path to the file on disk. (Read-only)
FileType
  Shows the registered type of the file, which is determined by the file’s extension. (Read-only)
IsFixedFormat
  Determines whether the Fortran file is fixed format. True indicates fixed format and False indicates free format. To change whether a source file is compiled as fixed or free format source, set this property appropriately. PVF initially uses file extensions to determine format style: the .f and .for extensions imply fixed format, while other extensions such as .f90 or .f95 imply free format. Note: The 'C' and '*' comment characters are only valid for fixed format compilation.
IsIncludeFile
  A boolean value that indicates whether the file is an include file. When True, PVF considers the file to be an include file and does not attempt to compile it. When False, if the filename has a supported Fortran or Resource file extension, PVF compiles the file as part of the build. Tip: You can use this property to exclude a source file from a build.
IsOutput
  Indicates whether a file is produced by the build. (Read-only)
ModifiedDate
  Contains the date and time that the file was last saved to disk. (Read-only)
ReadOnly
  Indicates the status of the Read-Only attribute of the file on disk.
Size
  Describes the size of the file on disk.
Setting Fixed Format Some Fortran source is written in fixed-format style. If your fixed-format code does not compile, check that it is designated as fixed-format in PVF. Procedure 2.1. To check fixed-format in PVF, follow these steps:
1. Use the Solution Explorer to select a file: View | Solution Explorer.
2. Open the Properties Window: View | Other Windows | Properties Window.
3. From the dropdown list for the file property IsFixedFormat, select True.
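For reference, the fragment below follows the classic fixed-format column rules: a 'C' in column 1 introduces a comment, statement labels occupy columns 1-5, and statements begin in column 7:

```fortran
C     FIXED-FORMAT EXAMPLE: 'C' IN COLUMN 1 MARKS A COMMENT
      PROGRAM FIXED
      INTEGER I, TOTAL
      TOTAL = 0
      DO 10 I = 1, 5
         TOTAL = TOTAL + I
   10 CONTINUE
      PRINT *, 'TOTAL = ', TOTAL
      END
```

Saving such code with a .f or .for extension causes PVF to treat it as fixed format automatically; with any other extension, set IsFixedFormat to True as described above.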
Building a Project with PVF Once a PVF project has been created, populated with source files, and any necessary configuration settings have been made, the project can be built. The easiest way to start a build is to use the Build | Build Solution menu selection; all projects in the solution will be built. If there are compile-time errors, the Error List window is displayed, showing a summary of the errors that were encountered. If the error message shows a line number, then double-clicking the error record in the Error List window will navigate to the location of the error in the editor. When a project is built for the first time, PVF must determine the build dependencies. Build dependencies are the result of USE or INCLUDE statements or #include preprocessor directives in the source. In particular, if file A contains a USE statement referring to a Fortran module defined in file B, file B must be compiled successfully before file A will compile. To determine the build dependencies, PVF begins compiling files in alphabetical order. If a compile fails due to an unsatisfied module dependency, the offending file is placed back on the build queue and a message is printed to the Output Window, but not to the Error List. In a correct Fortran program, all dependencies will eventually be met, and the project will be built successfully. Otherwise, errors will be printed to the Error List as usual. Unless the build dependencies change, subsequent builds use the build dependency information generated during the course of the initial build.
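As a sketch of such a dependency, consider the following two hypothetical source files. The USE statement in main_a.f90 forces mod_b.f90 to compile first, an ordering PVF works out during the initial build:

```fortran
! mod_b.f90 - defines a module, so it must compile before its users
module constants
  implicit none
  real, parameter :: pi = 3.14159265
end module constants
```

```fortran
! main_a.f90 - the USE statement creates a build dependency on mod_b.f90
program main_a
  use constants
  implicit none
  print *, 'pi = ', pi
end program main_a
```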
Order of PVF Build Operations In the default PVF project build, the build operations are executed in the following order: 1. Pre-Build Event 2. Custom Build Steps for Files 3. Build Resources 4. Compile Fortran Files to Objects (using the PGI Fortran compiler) 5. Pre-Link Event 6. Build Output Files (using linker or librarian) 7. Custom Build Step for Project 8. Post-Build Event
Build Events and Custom Build Steps PVF provides default build rules for Fortran files and Resource files. Other files are ignored unless a build action is specified using a Build Event or a Custom Build Step.
Build Events Build events allow definition of a specific command to be executed at a predetermined point during the project build. You define build events using the property pages for the project. Build events can be specified as Pre-Build, Pre-Link, or Post-Build. For specific information about where build events are run in the PVF build, refer to “Order of PVF Build Operations,” on page 22. Build events are always run unless the project is up to date. There is no dependency checking for build events.
Custom Build Steps Custom build steps are defined using the Custom Build Step Property Page. You can specify a custom build step for an entire project or for an individual file, provided the file is not a Fortran or Resource file. When a custom build step is defined for a project, dependencies are not checked during a build. As a result, the custom build step only runs when the project itself is out of date. Under these conditions, the custom build step is very similar to the post-build event. When a custom build step is defined for an individual file, dependencies may be specified. In this case, the dependencies must be out of date for the custom build step to run.
Note The 'Outputs' property for a file-level custom build step must be defined or the custom build step is skipped.
PVF Build Macros PVF implements a subset of the build macros supported by Visual C++ along with a few PVF-specific macros. The macro names are not case-sensitive, and they should be usable in any string field in a property page. Unless otherwise noted, macros that evaluate to directory names end with a trailing backslash ('\'). In general these items can only be changed if there is an associated PVF project or file property. For example, $(VCInstallDir) cannot be changed, while $(IntDir) can be changed by modifying the General | Intermediate Directory property. For the names and descriptions of the build macros that PVF supports, refer to Chapter 24, “PVF Build Macros”.
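For instance, a Post-Build Event command line might use build macros to copy the project output to a solution-level directory. The staging\ folder below is purely illustrative, and this sketch assumes $(TargetPath) and $(SolutionDir) are among the VC++ macros PVF supports:

```
copy "$(TargetPath)" "$(SolutionDir)staging\"
```

Because the macros expand at build time, such a command continues to work unchanged if the configuration, platform, or output directory is later modified.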
Static and Dynamic Linking PVF supports both static and dynamic linking to the PGI and Microsoft runtime. In versions prior to release 7.1, only dynamic linking was supported. The Fortran | Code Generation | Runtime Library property in a project's property pages determines which runtime library the project targets. • For executable and static library projects, the default value of this property is static linking (-Bstatic). A statically-linked executable can be run on any system for which it is built; neither the PGI nor the Microsoft redistributable libraries need be installed on the target system. • For dynamically linked library projects, the default value of this property is dynamic linking (-Bdynamic). A dynamically-linked executable can only be run on a system on which the PGI and Microsoft runtime redistributables have been installed.
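The same choice can be made with the pgfortran driver directly; in the hypothetical commands below, the first links the PGI and Microsoft runtimes statically, while the second links them dynamically:

```
pgfortran -Bstatic  -o myapp.exe main.f90
pgfortran -Bdynamic -o myapp.exe main.f90
```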
For more information on deploying PGI-compiled applications to other systems, refer to Chapter 14, “Distributing Files - Deployment”.
VC++ Interoperability If Visual C++ is installed along with PVF, Visual Studio solutions containing both PVF and VC++ projects can be created. Each project, though, must be purely PVF or VC++; Fortran and C/C++ code cannot be mixed in a single project. This constraint is purely an organizational issue. Fortran subprograms may call C functions and C functions may call Fortran subprograms as outlined in Chapter 15, “Inter-language Calling”. For an example of how to create a solution containing a VC++ static library, where the source is compiled as C, and a PVF main program that calls into it, refer to the PVF sample project pvf_calling_vc.
Note Because calling Visual C++ code (as opposed to C code) from Fortran is very complicated, it is only recommended for the advanced programmer. Further, to make interfaces easy to call from Fortran, Visual C++ code should export the interfaces using extern "C".
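The following sketch (all names hypothetical) illustrates the pattern. On the VC++ side, extern "C" suppresses C++ name mangling so the routine is exported under a plain C name:

```cpp
// sum_ints.cpp - compiled in a VC++ project; extern "C" exports
// the routine with C linkage so a Fortran caller can bind to it.
extern "C" int sum_ints(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}
```

On the PVF side, one way to declare the routine is an interface block with BIND(C), which matches the exported name and argument-passing convention:

```fortran
! caller.f90 - compiled in a PVF project
program caller
  use iso_c_binding, only: c_int
  implicit none
  interface
    function sum_ints(a, n) bind(c, name='sum_ints') result(s)
      import :: c_int
      integer(c_int), intent(in) :: a(*)
      integer(c_int), value      :: n
      integer(c_int)             :: s
    end function sum_ints
  end interface
  integer(c_int) :: v(3) = (/ 1_c_int, 2_c_int, 3_c_int /)
  print *, 'sum = ', sum_ints(v, 3_c_int)
end program caller
```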
Linking PVF and VC++ Projects If you have multiple projects in a solution, be certain to use the same type of runtime library for all the projects. Further, if you have Microsoft VC++ projects in your solution, you need to be certain to match the runtime library types in the PVF projects to those of the VC++ projects. PVF's property Fortran | Code Generation | Runtime Library corresponds to the Microsoft VC++ property named C/C++ | Code Generation | Runtime Library. Table 2.3 lists the appropriate combinations of Runtime Library property values when mixing PVF and VC++ projects.
Table 2.3. Runtime Library Values for PVF and VC++ Projects
If PVF uses...                   VC++ should use...
Multi-threaded (-Bstatic)        Multi-threaded (/MT)
Multi-threaded DLL (-Bdynamic)   Multi-threaded DLL (/MD)
Multi-threaded DLL (-Bdynamic)   Multi-threaded debug DLL (/MDd)
Common Link-time Errors The runtime libraries specified for all projects in a solution should be the same. If both PVF and VC++ projects exist in the same solution, the runtime libraries targeted should be compatible. Keep in mind the following guidelines: • Projects that produce DLLs should use the Multi-threaded DLL (-Bdynamic) runtime. • Projects that produce executables or static libraries can use either type of linking.
The following examples provide a look at some of the link-time errors you might see when the runtime library targeted by a PVF project is not compatible with the runtime library targeted by a VC++ project. To resolve these errors, refer to Table 2.3 and set the Runtime Library properties for the PVF and VC++ projects accordingly.
Errors seen when linking a PVF project using -Bstatic and a VC++ library project using /MDd:
MSVCRTD.lib(MSVCR80D.dll) : error LNK2005: _printf already defined in libcmt.lib(printf.obj)
LINK : warning LNK4098: defaultlib 'MSVCRTD' conflicts with use of other libs; use /NODEFAULTLIB:library
test.exe : fatal error LNK1169: one or more multiply defined symbols found
Errors seen when linking a PVF project using -Bstatic and a VC++ project using /MTd:
LIBCMTD.lib(dbgheap.obj) : error LNK2005: _malloc already defined in libcmt.lib(malloc.obj)
…
LINK : warning LNK4098: defaultlib 'LIBCMTD' conflicts with use of other libs; use /NODEFAULTLIB:library
test.exe : fatal error LNK1169: one or more multiply defined symbols found
Migrating an Existing Application to PVF An existing non-PVF Fortran application or library project can be migrated to PVF. This section provides a rough outline of how one might go about such a migration.
Tip Depending on your level of experience with Visual Studio and the complexity of your existing application, you might want to experiment with a practice project first to become familiar with the project directory structure and the process of adding existing files. Start your project migration by creating a new Empty Project. Add the existing source and include files associated with your application to the project. If some of your source files build a library, while other files build the application itself, you will need to create a separate project within your solution for the files that build the library. Set the configuration options using the property pages. You may need to add include paths, module paths, library dependency paths and library dependency files. If your solution contains more than one project, you will want to set up the dependencies between projects to ensure that the projects are built in the correct order. When you are ready to try a build, select Build | Build Solution from the main menu. This action starts a full build. If there are compiler or linker errors, you will probably have a bit more build or configuration work to do.
Fortran Editing Features PVF provides several Fortran-aware features to ease the task of entering and examining Fortran code in the Visual Studio Editor. Source Colorization – Fortran source is colorized, so keywords, comments, and strings are distinguished from other language elements. You can use the Tools | Options | Environment | Fonts and Colors dialog to assign colors for identifiers and numeric constants, and to modify the default colors for strings, keywords and comments.
Method Tips – Fortran intrinsic functions are supported with method tips. When an opening parenthesis is entered in the source editor following an intrinsic name, a method tip pop-up is displayed that shows the data types of the arguments to the intrinsic function. If the intrinsic is a generic function supporting more than one set of arguments, the method tip window supports scrolling through the supported argument lists. Keyword Completion – Fortran keywords are supported with keyword completion. When entering a keyword into the source editor, typing <Ctrl>+<Space> will open a pop-up list displaying the possible completions for the portion of the keyword entered so far. Use the up or down arrow keys or the mouse to select one of the displayed items; type <Enter> or double-click to enter the remainder of the highlighted keyword into the source. Type additional characters to narrow the keyword list or use <Backspace> to expand it.
Chapter 3. Debug with PVF PVF utilizes the Visual Studio debugger for debugging Fortran programs. PGI has implemented a custom debug engine that provides the language-specific debugging capability required for Fortran. This debug engine also supports Visual C++. The Debug configuration is usually used for debugging. By default, this configuration will build the application so that debugging information is provided. The debugger can be started by clicking on the green arrow in the toolbar (looks like the ‘play’ button on a CD or DVD player) or by selecting Debug | Start Debugging. Then use the Visual Studio debugger controls as usual.
Windows Used in Debugging Visual Studio uses a number of different windows to provide debugging information. Only a subset of these is opened by default in your initial debugging session. Use the Debug | Windows menu option to see a list of all the windows available and to select the one you want to open. This section provides an overview of most of the debugging windows you can use to get information about your debug session, along with a few tips about working with some of these windows.
Autos Window The autos window provides information about a changing set of variables as determined by the current debugging location. This window is supported for VC++ code but will not contain any information when debugging in a Fortran source file.
Breakpoints Window The breakpoints window contains all the breakpoints that have been set in the current application. You use the breakpoints window to manage the application's breakpoints.
Note This window is available even when the application is not being debugged. You can disable, enable or delete any or all breakpoints from within this window.
• Double-clicking on a breakpoint opens the editor to the place in the source where the breakpoint is set. • Right-clicking on a breakpoint brings up a context menu display that shows the conditions that are set for the breakpoint. You can update these conditions via this display. • During debugging, each breakpoint's status is shown in this window. Breakpoint States A breakpoint can be enabled, disabled, or in an error state. A breakpoint in an error state indicates that it failed to bind to a code location when the program was loaded. An error breakpoint can be caused by a variety of things. Two of the most common reasons a breakpoint fails to bind are these: • The code containing the breakpoint may be in a module (DLL) that has not yet been loaded. • A breakpoint condition may contain a syntax error. Breakpoints in Multi-Process Programs When debugging a multi-process program, each user-specified breakpoint is bound on a per-process basis. When this situation occurs, the breakpoints in the breakpoints window can be expanded to reveal each bound breakpoint.
Call Stack Window
The call stack window shows the call stack based on the current debugging location. Call frames are listed from the top down, with the innermost function on top. Double-click on a call frame to select it.
• The yellow arrow is the instruction pointer, which indicates the current location.
• A green arrow beside a frame indicates that the frame is selected but is not the current frame.
Disassembly Window
The disassembly window shows the assembly code corresponding to the source code under debug. Using Step and Step Into in the disassembly window moves the instruction pointer one assembly instruction at a time instead of one source line at a time. Whenever possible, source lines are interleaved with the disassembly.
Immediate Window
The immediate window provides direct communication with the debug engine. You can type help in this window to get a list of supported commands.

Variable Values in Multi-Process Programs
When debugging a multi-process program, use the print command in the immediate window with a process/thread set to display the values of a variable across all processes at once. For example, the following command prints the value of iVar for all processes and their threads.

[*.*] print iVar
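Building on the same syntax, a process/thread set can also name a subset of processes. The following is a sketch, assuming MPI ranks are numbered from 0 as shown in the processes window:

```
[0.*] print iVar
```

This form would print the value of iVar only for the threads of process 0.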
Chapter 3. Debug with PVF
Locals Window
The locals window lists all variables in the current scope, providing each variable's name, value, and type. You can expand variables of array, record, structure, union, and derived type to view all of their members. The variables listed include any Fortran module variables that are referenced in the current scope.
Memory Window
The memory window lists the contents of memory at a specified address. Type an address into the memory window's Address box to display the contents of memory at that address.
Modules Window
In Visual Studio, the term module means a program unit such as a DLL; it is unrelated to the Fortran concept of a module. The modules window displays the DLLs that were loaded when the application itself was loaded. You can view information such as whether or not symbol information is provided for a given module.
Output Window
The output window displays a variety of status messages. When building an application, this window displays build information. When debugging, the output window displays information about loading and unloading modules, and about exiting processes and threads. The output window does not receive application output such as standard out or standard error. In serial and local MPI debugging, such output is directed to a console window. In cluster MPI debugging, such output is directed to user-specified files. For more information, refer to the "Standard Output" and "Standard Error" properties, described on page 330.
Processes Window
The processes window displays each process that is currently being debugged. For serial debugging, only one process is displayed. For MPI debugging, the number of processes specified in the Debugging property page determines the number of processes that display in this window. The Title column of the processes window contains the rank of each process, as well as the name of the system on which the process is running and the process id.

Switching Processes in Multi-Process Programs
Many of the debugging windows display information for one process at a time. During multi-process debugging, the information in these windows pertains to the process with focus in the processes window. The process with focus has a yellow arrow next to it. You can change the focus from one process to another by selecting the desired process in one of these ways:
• Double-click on the process.
• Highlight the process and press Enter.
Registers Window
The registers window is available during debugging so you can see the values of the OS registers. Registers are shown in functional groups. The first time you use the registers window, the CPU registers are shown by default.
• To show other register sets, follow these steps:
1. Right-click in the registers window to bring up a context menu.
2. From the context menu, select the group of registers to add to the registers window display.
• To remove a group from the display, follow these steps:
1. Right-click in the registers window to bring up a context menu.
2. From the context menu, deselect the group of registers to remove from the registers window display.
Threads Window
The threads window lists the active threads in the current process. Threads are named by process and thread rank using the form "process.thread".
Note Not all threads may be executing in user code at any given time.
Watch Window
You use the watch window during debugging to display a user-selected set of variables.
Note If a watched variable is no longer in scope, its value is no longer valid in the watch window, although the variable itself remains listed until you remove it.
Variable Rollover
Visual Studio provides a debugging feature called variable rollover. This feature is available when an application in debug mode stops at a breakpoint or is otherwise suspended. To activate variable rollover, use the mouse pointer to hover over a variable in the source code editor. After a moment, the value of the variable appears as a data tip next to the mouse pointer. The first data tip that you see is often upper-level information, such as an array address or possibly the members of a user-defined type. If additional information is available for a variable, you see a plus sign in the data tip. Hovering over the plus sign expands the information. Once the expansion reaches the maximum number of lines available for display, about fifteen lines, the data tip has up and down triangles which allow you to scroll to see additional information. You can use variable rollover to obtain information about scalars, arrays, and array elements, as well as user-defined type variables and their members.
Scalar Variables
If you roll over a scalar variable, such as an integer or a real, the data tip displays the scalar's value.
Array Variables
If you roll over an array, the data tip displays the array's address. To see the elements of an array, either roll over the specific array element's subscript operator (parentheses), or roll over the array and then expand the data tip by moving the mouse over the plus sign in the data tip. The expanded view displays the individual array elements. The data tip can display up to about fifteen array elements at a time. For arrays with more than fifteen elements, use the up and down arrows on the top and bottom of the expanded data tip to scroll through the other elements.
Fortran character arrays work slightly differently.
• When rolling over a single-element character array, the data tip displays the value of the string. To see the individual character elements, expand the string.
• When rolling over a multi-element character array, the initial data tip contains the array's address. To see the elements of the array, expand the array. Each expanded element appears as a string, which is also expandable.
User-Defined Type Variables
User-defined types include derived types, records, structs, and unions. When rolling over a user-defined type variable, the initial data tip displays a condensed form of the variable's value, which is also expandable. To see a member of a user-defined type, you can either roll over the specific member directly, or roll over the variable and then expand the data tip by moving the mouse over the plus sign in the data tip. The expanded view displays the individual members of the variable and their values. The data tip can display up to about fifteen user-defined type members at a time. For user-defined types with more than fifteen members, use the up and down arrows on the top and bottom of the expanded data tip to scroll through the other members.
Debugging an MPI Application in PVF
PVF has full debugging support for MPI applications running locally and on a cluster. For specific information on how to do this, refer to "Debug an MPI Application," on page 37.
Attaching the PVF Debugger to a Running Application
PGI Visual Fortran can debug a running application using the PVF "Attach to Process" option. PVF supports attaching to Fortran applications built for 32-bit and 64-bit native Windows systems. PVF includes PGI compilers that build 32-bit and, on Win64, 64-bit native Windows applications. A PVF installation is all that is required to use PVF to attach to PGI-compiled native Windows applications.
The following instructions describe how to use PVF to attach to a running native Windows application. As with other forms of debugging, the richest debugging experience is obtained when the application being debugged has been compiled with debug information enabled.
Attach to a Native Windows Application
To attach to a native Windows application, follow these steps:
1. Open PVF from the Start menu, as described in "PVF on the Start Menu," on page 1.
2. From the main Tools menu, select Attach to Process…
3. In the Attach to: box of the Attach to Process dialog, verify that PGI Debug Engine is selected. If it is not selected, follow these steps to select it:
   a. Click the Select button.
   b. In the Select Code dialog box that appears, choose Debug these code types.
   c. Deselect any options that are selected and select PGI Debug Engine.
   d. Click OK.
4. In the Available Processes box of the Attach to Process dialog, select the application to which you want to attach PVF. This area of the dialog box contains the system's running processes. If the application to which you want to attach PVF is missing from this list, try this procedure to locate it:
   a. Depending on where the process may be located, select Show processes in all sessions or Show processes from all users. You can select both.
   b. Click Refresh.
5. With the application to attach to selected, click Attach.
PVF should now be attached to the application. To debug, there are two ways to stop the application:
• Set a breakpoint using Debug | New Breakpoint | Break at Function… and let execution stop when the breakpoint is hit.
Tip
Be certain to set the breakpoint at a line in the function that has yet to be executed.

• Use Debug | Break All to stop execution. With this method, if you see a message box that reads There is no source code available for the current location., click OK. Use Step Over (F10) to advance to a line for which source is available.
Note To detach PVF from the application and stop debugging, select Debug | Stop Debugging.
Using PVF to Debug a Standalone Executable
You can invoke the PVF debug engine to debug an executable that was not created by a PVF project. To do this, you invoke Visual Studio from a command shell with special arguments implemented by PVF. You can use this method in any native Windows command prompt environment. PGI Visual Fortran includes PGI compilers that build both 32-bit and, on Win64, 64-bit native Windows applications. A PVF installation is all that is required to use the PVF standalone executable debugging feature with PGI-compiled native Windows applications. The following instructions describe how to invoke the PGI Visual Fortran debug engine from a native Windows prompt.
Tip The richest debugging experience is obtained when the application being debugged has been compiled and linked with debug information enabled.
Launch PGI Visual Fortran from a Native Windows Command Prompt
To launch PGI Visual Fortran from a native Windows Command Prompt, follow these steps:
1. Set the environment by opening a PVF Command Prompt window using the PVF Start menu, as described in "Shortcuts to Launch PVF," on page 1.
   • To debug a 32-bit executable, choose the 32-bit command prompt: PVF Cmd.
   • To debug a 64-bit executable, choose the 64-bit command prompt: PVF Cmd (64).
   The environment in the command prompt you choose is automatically set to debug a native Windows application.
2. Start PGI Visual Fortran using the executable devenv.exe. If you followed Step 1 to open the PVF Command Prompt, this executable should already be on your path. In the PVF Command Prompt window, you must supply the switch /PVF:DebugExe, your executable, and any arguments that your executable requires. The following examples illustrate this requirement.

Example 3.1. Use PVF to Debug an Application
This example uses PVF to debug an application, MyApp1.exe, that requires no arguments.

CMD> devenv /PVF:DebugExe MyApp1
Example 3.2. Use PVF to Debug an Application with Arguments
This example uses PVF to debug an application, MyApp2.exe, and pass it two arguments: arg1 and arg2.

CMD> devenv /PVF:DebugExe MyApp2 arg1 arg2
Once PVF starts, you should see a Solution and Project with the same name as the executable you passed in on the command line, such as MyApp2 in the previous example.
You are now ready to use PGI Visual Fortran after a command line launch, as described in the next section.
Using PGI Visual Fortran After a Command Line Launch
Once you have started PVF from the command line, regardless of how you started it, you are ready to run and debug your application from within PVF.
To run your application from within PVF, select Debug | Start Without Debugging from the main menu.
To debug your application using PVF:
1. Set a breakpoint using the Debug | New Breakpoint | Break at Function dialog box.
2. Enter either a function, or a function and line, that you know will be executed.
Tip You can always use the routine name MAIN for the program’s entry point (i.e. main program) in a Fortran program compiled by PGI compilers. 3. Start the application using Debug | Start Debugging. When the debugger hits the breakpoint, execution stops and, if available, the source file containing the breakpoint is opened in the PVF editor.
Tips on Launching PVF from the Command Line If you choose to launch PVF from a command line, here are a few tips to help you be successful: • The path to the executable you want to debug must be specified using a full or relative path. Further, paths containing spaces must be quoted using double quotes ("). • If you specify an executable that does not exist, PVF starts up with a warning message and no solution is created. • If you specify a file to debug that exists but is not in an executable format, PVF starts up with a warning message and no solution is created.
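Combining these rules, a debug launch of an executable whose path contains spaces might look like the following sketch; the path and the executable name MyApp3.exe are illustrative only:

```
CMD> devenv /PVF:DebugExe "C:\My Projects\MyApp3.exe" arg1
```

Without the double quotes, the portion of the path after the space would be treated as a separate argument and PVF would report that the executable does not exist.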
Chapter 4. Using MPI in PVF
Message Passing Interface (MPI) is an industry-standard application programming interface designed for rapid data exchange between processors in a cluster application; it is the software layer that allows the many computers in a cluster to communicate with one another. PGI provides MPI support with PGI compilers and tools. You can build, run, debug, and profile MPI applications on Windows using PVF and Microsoft's implementation of MPI, MSMPI. This chapter describes how to use these capabilities and indicates some of their limitations, provides the requirements for using MPI in PVF, explains how to compile for and enable MPI execution, and describes how to launch, debug, and profile your MPI application. In addition, there are tips on how to get the most out of PVF's MPI capabilities.
MPI Overview
MPI is a set of function calls and libraries used to send messages between multiple processes. These processes can be located on the same system or on a collection of distributed servers. Unlike OpenMP, MPI is distributed in nature, which allows it to work in almost any parallel environment. Further, distributed execution of a program does not necessarily mean that an MPI job must run on multiple machines. PVF has built-in support for Microsoft's version of MPI: MSMPI.
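As a concrete illustration, a minimal MPI program in Fortran might look like the following sketch. It assumes only the standard MPI Fortran interface provided by the MSMPI headers and libraries described below; the program and variable names are illustrative.

```fortran
program mpi_hello
   implicit none
   include 'mpif.h'
   integer :: ierr, rank, nprocs

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

   ! Each process reports its rank; MPI ranks are numbered from 0.
   print *, 'hello from process', rank, 'of', nprocs

   call MPI_FINALIZE(ierr)
end program mpi_hello
```

When built with the MPI property described in "Compile using MSMPI" and launched with multiple processes, each process prints its own rank.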
System and Software Requirements
To use PVF's MPI features, additional Microsoft software, available for download from Microsoft, must be installed on your system. There are two types of MPI support in PVF: local MPI and cluster MPI. The type of support you need determines your system requirements.
Local MPI
To use PVF's local MPI capabilities, the Microsoft HPC Pack 2008 SDK must be installed on your system. The HPC Pack 2008 SDK contains the MSMPI headers and libraries. It also contains mpiexec, which PVF uses to launch MPI applications.
Cluster MPI
To use PVF's cluster MPI capabilities:
• Both the Microsoft HPC Pack 2008 SDK and the HPC Pack must be installed on your system. • The HPC Pack 2008 SDK contains the MSMPI headers and libraries. It also contains mpiexec, which PVF uses to launch MPI applications. • The HPC Pack contains, among other things, the HPC Job Manager, which is used to launch MPI applications on a cluster. • A Windows HPC Server 2008 cluster must be running. • Your PGI license must be enabled to run and debug on remote nodes. For information on upgrading your current license to a cluster license, please contact [email protected].
Compile using MSMPI
The PVF Fortran | Language | MPI property enables MPI compilation and linking with the Microsoft MPI headers and libraries. Set this property to Microsoft MPI to enable an MPI build.
Enable MPI Execution
Once your MPI application is built, you can run and debug it. The PVF Debugging property page is the key to both running and debugging an MPI application. For simplicity, in this section we use the term execution to mean either running or debugging the application. Use the MPI Debugging property to select the type of execution you need, provided you have the appropriate system configuration and license.
MPI Debugging Property Options
The MPI Debugging property can be set to one of three options: Disabled, Local, or Cluster.

Disabled
When Disabled is selected, execution is performed serially.

Local
When Local is selected, MPI execution is performed locally. That is, multiple processes are used, but all of them run on the local host.

Cluster
When Cluster is selected, MPI execution is performed using multiple processes, some or all of which can be running on a remote cluster node.

Additional MPI properties become available when you select either the Local or Cluster MPI Debugging option. For more information about these properties, refer to "Debugging Property Page," on page 328.
Launch an MPI Application
As soon as you have built your MPI application and selected Local or Cluster MPI Debugging, you can launch your executable using the Debug | Start Without Debugging menu option.
PVF uses Microsoft's version of mpiexec to support Local MPI execution, and Microsoft's HPC Job Manager to support Cluster MPI execution. PVF submits jobs to the HPC Job Manager when you execute an application. To view the status of jobs you have submitted, open the HPC Job Manager interface from Start | All Programs | Microsoft HPC Pack | HPC Job Manager.
Note It is not necessary to have the HPC Job Manager interface open to use MPI in PVF.
Debug an MPI Application
To debug your MPI application, select Debug | Start Debugging or hit F5. As with running your MPI application, PVF uses mpiexec for Local MPI jobs and the HPC Job Manager for Cluster MPI jobs. PVF supports two styles of multi-process debugging: Run Altogether and Run One At a Time.

Run Altogether
With this style of debugging, execution of all processes occurs at the same time. When you select Continue, all processes are continued. When one process hits a breakpoint, it stops. The other processes do not stop, however, until they hit a breakpoint or some other type of barrier. When you select Step, all processes are stepped. Control returns to you as soon as one or more processes finishes its step. If some process does not finish its step when the other processes are finished, it continues execution until it completes.

Run One At a Time
With this style of debugging, processes are shown as advancing one at a time. For example, on startup all processes run to a breakpoint. In the source pane, you see the first process that reaches the breakpoint. When you select Continue, execution switches and you see the next process at that same breakpoint. When you select Continue again, execution switches to show you the next process at that same breakpoint, and so on. Once all processes have been shown at the first breakpoint, selecting Continue advances the processes one at a time to the next breakpoint. In this style of debugging, Step works in a manner similar to Continue, stepping each process one at a time.
Note This style of debugging is not available for use with Microsoft Visual Studio 2005 or Microsoft Visual Studio 2008. To select the debugging style, open the Tools | Options menu and navigate to the Debugging | General page. To enable the Run Altogether style, check the box labeled Break All Processes When One Process Breaks. To enable the Run One At a Time style, uncheck that box.
Note
Changing styles during a debug session is not supported. Checking or unchecking the Break All Processes option has no effect until the next time you debug.
Additionally, PVF 2005 supports Run Altogether only; the status of the Break All Processes option has no effect with PVF 2005.
Tips and Tricks
To get the most out of PVF's MPI capabilities, here are a few techniques to keep in mind.
• Cluster debugging can only be launched from a working directory that is designated as shared among all cluster nodes. If the working directory is not shared, your application may fail to launch. Specify the working directory using the PVF Debugging property page. If you need assistance in designating your working directory as shared, contact your system or cluster administrator.
• Standard I/O in cluster debugging comes from and goes to files specified in PVF's Debugging property pages. If you do not specify a file for standard output, you can view the output of your application using the HPC Job Manager interface.
• Use the HPC Job Manager to view the status of jobs launched from PVF.
Note
Jobs that queue at the start of a debug session prevent PVF from completing its launch; that is, if you start to debug and see no activity, it may seem as if PVF is not working. Use the HPC Job Manager to determine why the job has queued, and to unqueue or cancel it.
• To run a dynamically-linked application on a cluster from within PVF, the application must be able to find the dynamically-linked libraries on which it depends. To ensure this is true, do the following:
1. Copy the PGI runtime DLLs, located in the PGI compilers' REDIST directory, to all the nodes. In the following paths, <release> is the release version, such as 10.0-1.
32-bit Windows:
C:\Program Files\PGI\win32\<release>\REDIST
64-bit Windows:
C:\Program Files\PGI\win64\<release>\REDIST
C:\Program Files (x86)\PGI\win32\<release>\REDIST
2. Make certain that the system PATH environment variable on every node, including the head node, includes the path to the PGI runtime DLLs.
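As an illustration of step 1, the copy might be done from a command prompt as follows. This is a sketch only: the node name node01, the administrative share, and the 10.6 release directory are hypothetical, so substitute your own node names, share paths, and release version:

```
CMD> xcopy /Y "C:\Program Files\PGI\win64\10.6\REDIST\*.dll" ^
     "\\node01\C$\Program Files\PGI\win64\10.6\REDIST\"
```

Repeat the copy for each node in the cluster, or script it over your node list.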
Profile an MPI Application
The PGI profiling tool PGPROF is included with PVF. The process of profiling involves these basic steps:
1. Build the application with profiling options enabled.
2. Run the application to generate profiling output.
3. Analyze the profiling output with a profiler.
To build a profiling-enabled application in PVF, use the Profiling property page. Enable MPICH-style profiling for Microsoft MPI by setting the MPI property to Microsoft MPI. Doing this adds the -Mprof=msmpi option to compilation and linking. You must also enable either Function-Level Profiling (-Mprof=func) or Line-Level Profiling (-Mprof=lines).
The profile data generated by running an application built with the option -Mprof=msmpi contains information about the number of sends and receives, as well as the number of bytes sent and received, correlated with the source location associated with the sends and receives.
Once you have built your application with the profiling properties enabled, run it to generate the profiling information files. These files, named pgprof*.out, contain the profiled output of the application's run. Launch PGPROF from the PVF Start menu via Start | All Programs | PGI Visual Fortran | Profiler | PGPROF Performance Profiler. Once PGPROF is running, use the File | New Profiling Session menu option to specify the location of the pgprof*.out files and your application. For details on using PGPROF, refer to the online Help available within the application.
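For reference, the Profiling property settings described above correspond to compiler options that could also be supplied on a PVF command line. The following is a sketch with an illustrative source file name; the option spellings are the ones named above, but verify the exact combination against your compiler's documentation:

```
PGI$ pgfortran -Mmpi=msmpi -Mprof=msmpi,func -o myapp myapp.f90
```

Running the resulting myapp under mpiexec produces the pgprof*.out files that PGPROF reads.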
MSMPI Environment
Tip
This section provides information about the MSMPI environment that is set up for you behind the scenes. It is not necessary for you to define or set default values for these variables, but knowing they exist may be useful if your programs have difficulty locating include files or libraries.
When the Microsoft HPC Pack and the HPC Pack 2008 SDK are installed, some system environment variables are set. Further, there are two environment variables available to help you specify directory locations associated with using MSMPI on Windows: CCP_HOME and CCP_SDK.
• CCP_HOME specifies the root directory of the Microsoft cluster management software for systems on which the Microsoft HPC Pack 2008 is installed.
• CCP_SDK specifies the root directory of the MSMPI software for systems on which Microsoft's HPC Pack 2008 SDK is installed.
If the appropriate environment variable is set for the version of MSMPI that you are using, then the PVF MPI property page's use of -Mmpi=msmpi automatically brings in the appropriate include files and libraries.
Chapter 5. Getting Started with the Command Line Compilers
This chapter describes how to use the command-line PGI compilers. The PGI Visual Fortran IDE invokes the PGI compilers when you build a PVF project. You can invoke the compilers directly from the Start menu using the appropriate command prompt for your system, as described in "Shortcuts to Launch PVF," on page 1. The command used to invoke a compiler, such as the pgfortran command, is called a compiler driver. The compiler driver controls the following phases of compilation: preprocessing, compiling, assembling, and linking. Once a file is compiled and an executable file is produced, you can execute, debug, or profile the program on your system. Executables produced by the PGI compilers are unconstrained, meaning they can be executed on any compatible x86 or x64 processor-based system, regardless of whether the PGI compilers are installed on that system.
Overview
In general, using a PGI compiler involves three steps:
1. Produce program source code in a file with a .f extension or another appropriate extension, as described in "Input Files," on page 44. This program may be one that you have written or one that you are modifying.
2. Compile the program using the appropriate compiler command.
3. Execute, debug, or profile the executable file on your system.
You might also want to deploy your application, though this is not a required step.
The PGI compilers allow many variations on these general program development steps. These variations include the following:
• Stop the compilation after preprocessing, compiling, or assembling to save and examine intermediate results.
• Provide options to the driver that control compiler optimization or that specify various features or limitations.
• Include as input intermediate files such as preprocessor output, compiler output, or assembler output.
Invoking the Command-level PGI Compilers
To translate and link a Fortran language program, the pgf77, pgf95, and pgfortran commands do the following:
1. Preprocess the source text file.
2. Check the syntax of the source text.
3. Generate an assembly language file.
4. Pass control to the subsequent assembly and linking steps.

Example 5.1. Hello program
Let's look at a simple example of using the PGI compiler to create, compile, and execute a program that prints hello.
Step 1: Create your program. For this example, suppose you enter the following simple Fortran program in the file hello.f:

print *, "hello"
end
Step 2: Compile the program. When you created your program, you called it hello.f. In this example, we compile it from a shell command prompt using the default pgfortran driver options. Use the following syntax:

PGI$ pgfortran hello.f
PGI$
By default, the executable output is placed in a filename based on the name of the first source or object file on the command line. However, you can specify an output file name by using the –o option. To place the executable output in the file hello, use this command:

PGI$ pgfortran -o hello hello.f
PGI$
Step 3: Execute the program. To execute the resulting hello program, simply type the filename at the command prompt and press the Return or Enter key on your keyboard:

PGI$ hello
hello
PGI$
Command-line Syntax
The compiler command-line syntax, using pgfortran as an example, is:
pgfortran [options] [path]filename [...]
Where:
options
is one or more command-line options, all of which are described in detail in Chapter 6, "Using Command Line Options".
path
is the pathname to the directory containing the file named by filename. If you do not specify the path for a filename, the compiler uses the current directory. You must specify the path separately for each filename not in the current directory.
filename
is the name of a source file, preprocessed source file, assembly-language file, object file, or library to be processed by the compilation system. You can specify more than one [path]filename.
Command-line Options
The command-line options control various aspects of the compilation process. For a complete alphabetical listing and a description of all the command-line options, refer to Chapter 6, "Using Command Line Options".
The following list provides important information about proper use of command-line options.
• Case is significant for options and their arguments.
• The compiler drivers recognize characters preceded by a hyphen (-) as command-line options. For example, the –Mlist option specifies that the compiler creates a listing file.
Note The convention for the text of this manual is to show command-line options using a dash instead of a hyphen; for example, you see –Mlist. • The order of options and the filename is flexible. That is, you can place options before and after the filename argument on the command line. However, the placement of some options is significant, such as the –l option, in which the order of the filenames determines the search order.
Note If two or more options contradict each other, the last one in the command line takes precedence.
Fortran Directives
You can insert Fortran directives in program source code to alter the effects of certain command-line options and to control various aspects of the compilation process for a specific routine or a specific program loop. For more information on Fortran directives, refer to Chapter 9, "Using OpenMP" and Chapter 11, "Using Directives".
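For instance, a loop-scoped directive can override a command-line optimization setting for a single loop. The sketch below assumes the !pgi$ directive prefix and the novector directive covered in Chapter 11; check that chapter for the exact spellings your compiler release accepts:

```fortran
!pgi$l novector
      do i = 1, n
         ! This loop is compiled without vectorization, even if
         ! vectorization is enabled on the command line.
         a(i) = b(i) + c(i)
      end do
```

The l scope suffix applies the directive to the loop that immediately follows it, leaving the rest of the file unaffected.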
Filename Conventions
The PGI compilers use the filenames that you specify on the command line to find and to create input and output files. This section describes the input and output filename conventions for the phases of the compilation process.
Input Files
You can specify assembly-language files, preprocessed source files, Fortran source files, object files, and libraries as inputs on the command line. The compiler driver determines the type of each input file by examining the filename extension.
Note
For systems with a case-insensitive file system, use the –Mpreprocess option, described in Chapter 18, "Command-Line Options Reference", under the commands for Fortran preprocessing.

The drivers use the following conventions:
filename.f
  indicates a Fortran source file.
filename.F
  indicates a Fortran source file that can contain macros and preprocessor directives (to be preprocessed).
filename.FOR
  indicates a Fortran source file that can contain macros and preprocessor directives (to be preprocessed).
filename.F95
  indicates a Fortran 90/95 source file that can contain macros and preprocessor directives (to be preprocessed).
filename.f90
  indicates a Fortran 90/95 source file that is in freeform format.
filename.f95
  indicates a Fortran 90/95 source file that is in freeform format.
filename.cuf
  indicates a Fortran 90/95 source file in free format with CUDA Fortran extensions.
filename.CUF
  indicates a Fortran 90/95 source file in free format with CUDA Fortran extensions and that can contain macros and preprocessor directives (to be preprocessed).
filename.s
  indicates an assembly-language file.
filename.obj
  (Windows systems only) indicates an object file.
filename.lib
  (Windows systems only) indicates a statically-linked library of object files or an import library.
Chapter 5. Getting Started with the Command Line Compilers
filename.dll  (Windows systems only) indicates a dynamically-linked library.

The driver passes files with .s extensions to the assembler and files with .obj, .dll, and .lib extensions to the linker. Input files with unrecognized extensions, or no extension, are also passed to the linker. Files with a .fpp suffix are first preprocessed by the Fortran compilers, and the output is passed to the compilation phase. The Fortran preprocessor is built into the Fortran compilers. This design ensures consistency in the preprocessing step regardless of the type or revision of operating system under which you're compiling. Any input files not needed for a particular phase of processing are not processed. For example, if on the command line you specify an assembly-language file (filename.s) together with the –S option to stop before the assembly phase, the compiler takes no action on the assembly-language file: processing stops after compilation, and the assembler does not run. In this scenario, the compilation must have been completed in a previous pass that created the .s file. For a complete description of the –S option, refer to the following section: “Output Files”. In addition to specifying primary input files on the command line, you can compile code contained in other files by using the INCLUDE statement in a Fortran source file or the preprocessor #include directive in Fortran source files that use a .F extension. When linking a program with a library, the linker extracts only those library components that the program needs. The compiler drivers link in several libraries by default. For more information about libraries, refer to Chapter 12, “Creating and Using Libraries”.
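As a small sketch of the preprocessing convention for .F files (the file name and the DEBUG macro are hypothetical, invented for illustration):

```fortran
! debug_demo.F -- the .F extension tells the driver to run the Fortran
! preprocessor first, so the #ifdef below is resolved before compilation.
! Hypothetical example; a command line such as
!     pgfortran -DDEBUG debug_demo.F
! would select the first branch.
      program debug_demo
#ifdef DEBUG
      print *, 'debug build'
#else
      print *, 'release build'
#endif
      end program debug_demo
```

Compiling the same source with a .f extension would leave the # lines unprocessed and cause compilation errors, which is why the case of the extension matters.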
Output Files
By default, an executable output file produced by one of the PGI compilers is placed in the file a.out, or, on Windows, in a filename based on the name of the first source or object file on the command line. As the example in the preceding section shows, you can use the –o option to specify the output file name. If you use one of the options –F (Fortran only), –S, or –c, the compiler produces a file containing the output of the last completed phase for each input file, as specified by the option supplied. The output file is a preprocessed source file, an assembly-language file, or an unlinked object file, respectively. Similarly, the –E option does not produce a file, but displays the preprocessed source file on the standard output. With any of these options, the –o option is valid only if you specify a single input file. If no errors occur during processing, you can use the files created by these options as input to a future invocation of any of the PGI compiler drivers. The following table lists the stop-after options and the output files that the compilers create when you use these options. It also describes the accepted input files.

Table 5.1. Stop-after Options, Inputs and Outputs

Option  Stop after      Input                                    Output
–E      preprocessing   Source files                             preprocessed file to standard out
–F      preprocessing   Source files                             preprocessed file (.f)
–S      compilation     Source files or preprocessed files       assembly-language file (.s)
–c      assembly        Source files, preprocessed files, or     unlinked object file (.obj)
                        assembly-language files
none    linking         Source files, preprocessed files,        executable file (.exe)
                        assembly-language files, object files,
                        or libraries
If you specify multiple input files or do not specify an object filename, the compiler uses the input filenames to derive corresponding default output filenames of the following form, where filename is the input filename without its extension:
filename.f    indicates a preprocessed file, if you compiled a Fortran file using the –F option.
filename.i    indicates a preprocessed file, if you compiled using the –P option.
filename.lst  indicates a listing file from the –Mlist option.
filename.obj  indicates an object file from the –c option.
filename.s    indicates an assembly-language file from the –S option.
Note Unless you specify otherwise, the destination directory for any output file is the current working directory. If the file exists in the destination directory, the compiler overwrites it. The following example demonstrates the use of output filename extensions. $ pgfortran -c proto.f proto1.F
This produces the output files proto.obj and proto1.obj, both of which are binary object files. Prior to compilation, the file proto1.F is preprocessed because it has a .F filename extension.
Fortran Data Types
The PGI Fortran compilers recognize scalar and aggregate data types. A scalar data type holds a single value, such as the integer value 42 or the real value 112.6. An aggregate data type consists of one or more scalar data type objects, such as an array of integer values. For information about the format and alignment of each data type in memory, and the range of values each type can have on x86 or x64 processor-based systems running a 32-bit operating system, refer to Chapter 17, “Fortran Data Types”. For more information on x86-specific data representation, refer to the System V Application Binary Interface Processor Supplement by AT&T UNIX System Laboratories, Inc. (Prentice Hall, Inc.).
This manual specifically does not address x64 processor-based systems running a 64-bit operating system, because the application binary interface (ABI) for those systems is still evolving. For the latest version of this ABI, see www.x86-64.org/abi.pdf.
Parallel Programming Using the PGI Compilers
The PGI Visual Fortran compilers support two styles of parallel programming:
• Automatic shared-memory parallel programs compiled using the –Mconcur option to pgf77, pgf95, or pgfortran — parallel programs of this variety can be run on shared-memory parallel (SMP) systems such as dual-core or multi-processor workstations.
• OpenMP shared-memory parallel programs compiled using the –mp option to pgf77, pgf95, or pgfortran — parallel programs of this variety can be run on SMP systems. Carefully coded user-directed parallel programs using OpenMP directives can often achieve significant speed-ups on dual-core workstations or on large numbers of processors on SMP server systems. Chapter 9, “Using OpenMP” contains complete descriptions of user-directed parallel programming.

On a single silicon die, some newer CPUs incorporate two or more complete processor cores: functional units, registers, level 1 cache, level 2 cache, and so on. These CPUs are known as multi-core processors. For purposes of threads or OpenMP parallelism, these cores function as two or more distinct processors. However, the processing cores are on a single chip occupying a single socket on a system motherboard. For purposes of PGI software licensing, a multi-core processor is treated as a single CPU.
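A minimal OpenMP program of the second style might look like the following sketch (file name hypothetical; the reduction clause keeps the shared sum correct across threads):

```fortran
! omp_sum.f90 -- hypothetical example of the OpenMP style described above;
! built with a command line such as: pgfortran -mp omp_sum.f90
program omp_sum
  implicit none
  integer :: i
  real :: a(1000), total
  a = 1.0
  total = 0.0
!$omp parallel do reduction(+:total)
  do i = 1, 1000              ! iterations are divided among the threads
     total = total + a(i)
  end do
!$omp end parallel do
  print *, 'sum = ', total    ! same result regardless of thread count
end program omp_sum
```

Without –mp, the !$omp lines are treated as ordinary comments and the program runs serially, which is a convenient property of directive-based parallelism.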
Running SMP Parallel Programs
When you execute an SMP parallel program, by default it uses only one processor. To run on more than one processor, set the NCPUS environment variable to the desired number of processors, subject to a maximum of four for PGI’s workstation-class products. For information on how to set environment variables, refer to “Setting Environment Variables,” on page 137.
Note If you set NCPUS to a number larger than the number of physical processors, your program may execute very slowly.
Site-specific Customization of the Compilers If you are using the PGI compilers and want all your users to have access to specific libraries or other files, there are special files that allow you to customize the compilers for your site.
Using siterc Files
The PGI compiler drivers utilize a file named siterc to enable site-specific customization of the behavior of the PGI compilers. The siterc file is located in the bin subdirectory of the PGI installation directory. Using siterc, you can control how the compiler drivers invoke the various components in the compilation tool chain.
Using User rc Files In addition to the siterc file, user rc files can reside in a given user’s home directory, as specified by the user’s HOME environment variable. You can use these files to control the respective PGI compilers. All of these files are optional. On Windows, these files are named mypgf77rc, mypgf90rc, mypgf95rc, and mypgfortranrc.
Common Development Tasks Now that you have a brief introduction to the compiler, let’s look at some common development tasks that you might wish to perform. • When you compile code you can specify a number of options on the command line that define specific characteristics related to how the program is compiled and linked, typically enhancing or overriding the default behavior of the compiler. For a list of the most common command line options and information on all the command line options, refer to Chapter 6, “Using Command Line Options”. • Code optimization and parallelization allows the compiler to organize your code for efficient execution. While possibly increasing compilation time and making the code more difficult to debug, these techniques typically produce code that runs significantly faster than code that does not use them. For more information on optimization and parallelization, refer to Chapter 7, “Optimizing & Parallelizing”. • Function inlining, a special type of optimization, replaces a call to a function or a subroutine with the body of the function or subroutine. This process can speed up execution by eliminating parameter passing and the function or subroutine call and return overhead. In addition, function inlining allows the compiler to optimize the function with the rest of the code. However, function inlining may also result in much larger code size with no increase in execution speed. For more information on function inlining, refer to Chapter 8, “Using Function Inlining”. • Directives allow users to place hints in the source code to help the compiler generate better assembly code. You typically use directives to control the actions of the compiler in a particular portion of a program without affecting the program as a whole. You place them in your source code where you want them to take effect. 
A directive typically stays in effect from the point where it is included until the end of the compilation unit, or until another directive changes its status. For more information on directives, refer to Chapter 9, “Using OpenMP” and Chapter 11, “Using Directives”. • A library is a collection of functions or subprograms used to develop software. Libraries contain "helper" code and data, which provide services to independent programs, allowing code and data to be shared and changed in a modular fashion. The functions and programs in a library are grouped for ease of use and linking. When creating your programs, it is often useful to incorporate standard libraries or proprietary ones. For more information on this topic, refer to Chapter 12, “Creating and Using Libraries”. • Environment variables define a set of dynamic values that can affect the way running processes behave on a computer. It is often useful to use these variables to set and pass information that alters the default behavior of the PGI compilers and the executables they generate. For more information on these variables, refer to Chapter 13, “Using Environment Variables”. • Deployment, though possibly an infrequent task, can present some unique issues related to porting code to other systems. Deployment, in this context, involves distribution of a specific file
or set of files that are already compiled and configured. The distribution must occur in such a way that the application executes correctly on another system that may not be configured exactly the same as the system on which the code was created. For more information on what you might need to know to successfully deploy your code, refer to Chapter 14, “Distributing Files - Deployment”.
Chapter 6. Using Command Line Options A command line option allows you to control specific behavior when a program is compiled and linked. This chapter describes the syntax for properly using command-line options and provides a brief overview of a few of the more common options.
Note For a complete list of command-line options, their descriptions and use, refer to Chapter 18, “Command-Line Options Reference,” on page 173.
Command Line Option Overview Before looking at all the command-line options, first become familiar with the syntax for these options. There are a large number of options available to you, yet most users only use a few of them. So, start simple and progress into using the more advanced options. By default, the PGI compilers generate code that is optimized for the type of processor on which compilation is performed, the compilation host. Before adding options to your command-line, review the sections “Help with Command-line Options,” on page 52 and “Frequently-used Options,” on page 55.
Command-line Options Syntax
On the command line, options must be preceded by a hyphen (-). If the compiler does not recognize an option, it passes the option to the linker. This document uses the following notation when describing options:
[item]
Square brackets indicate that the enclosed item is optional.
{item | item}
Braces indicate that you must select one and only one of the enclosed items. A vertical bar (|) separates the choices.
... Horizontal ellipses indicate that zero or more instances of the preceding item are valid.
NOTE Some options do not allow a space between the option and its argument or within an argument. When applicable, the syntax section of the option description in Chapter 18, “Command-Line Options Reference,” on page 173 contains this information.
Command-line Suboptions Some options accept several suboptions. You can specify these suboptions either by using the full option statement multiple times or by using a comma-separated list for the suboptions. The following two command lines are equivalent: pgfortran -Mvect=sse -Mvect=noaltcode pgfortran -Mvect=sse,noaltcode
Command-line Conflicting Options
Some options have an opposite or negated counterpart. For example, both –Mvect and –Mnovect are available. –Mvect enables vectorization and –Mnovect disables it. If you used both of these options on a command line, they would conflict.
Note Rule: When you use conflicting options on a command line, the last encountered option takes precedence over any previous one. This rule is important for a number of reasons. • Some options, such as –fast, include other options. Therefore, it is possible for you to be unaware that you have conflicting options. • You can use this rule to create makefiles that apply specific flags to a set of files, as shown in Example 6.1. Example 6.1. Makefiles with Options In this makefile fragment, CCFLAGS uses vectorization. CCNOVECTFLAGS uses the flags defined for CCFLAGS but disables vectorization. CCFLAGS=c -Mvect=sse CCNOVECTFLAGS=$(CCFLAGS) -Mnovect
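The two flag sets might then be applied to different groups of files; a hypothetical continuation of the fragment (target and file names invented for illustration):

```make
# Apply the vectorizing flags to the numerical kernels, but not to the
# setup code. Because -Mnovect appears last in CCNOVECTFLAGS, it takes
# precedence over the -Mvect=sse inherited from CCFLAGS, per the rule above.
kernels.obj: kernels.f
	pgfortran $(CCFLAGS) kernels.f
setup.obj: setup.f
	pgfortran $(CCNOVECTFLAGS) setup.f
```

This pattern lets a single makefile keep one canonical flag list and selectively negate parts of it per target.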
Help with Command-line Options
If you are just getting started with the PGI compilers and tools, it is helpful to know which options are available, when to use them, and which options most users find effective.
Using –help
The –help option is useful because it provides information about all options supported by a given compiler. You can use –help in one of three ways:
• Use –help with no parameters to obtain a list of all the available options with a brief one-line description of each.
• Add a parameter to –help to restrict the output to information about a specific option. The syntax for this usage is:
–help <command-line option>
Suppose you use the following command to restrict the output to information about the -fast option: $ pgfortran -help -fast
The output you see is similar to this: -fast Common optimizations; includes -O2 -Munroll=c:1 -Mnoframe -Mlre
In the following example, we add the –help parameter to restrict the output to information about the help command. The usage information for –help shows how groups of options can be listed or examined according to function. $ pgfortran -help -help -help[=groups|asm|debug|language|linker|opt|other| overall|phase|prepro|suffix|switch|target|variable] Show compiler switches
• Add a parameter to –help to restrict the output to a specific set of options or to a phase of the build process. The syntax for this usage is:
-help=<subgroup>
By using the command pgfortran -help -help, as previously shown, we can see output that shows the available subgroups. You can use the following command to restrict the –help output to information about only one group of options, such as those related to debug information generation.
$ pgfortran -help=debug
The output you see is similar to this: Debugging switches: -M[no]bounds Generate code to check array bounds -Mchkfpstk Check consistency of floating point stack at subprogram calls (32-bit only) -Mchkstk Check for sufficient stack space upon subprogram entry -Mcoff Generate COFF format object -Mdwarf1 Generate DWARF1 debug information with -g -Mdwarf2 Generate DWARF2 debug information with -g -Mdwarf3 Generate DWARF3 debug information with -g -Melf Generate ELF format object -g Generate information for debugger -gopt Generate information for debugger without disabling optimizations
For a complete description of subgroups, refer to “–help <subgroup>,” on page 184.
Getting Started with Performance One of the top priorities of most users is performance and optimization. This section provides a quick overview of a few of the command-line options that are useful in improving performance.
Using –fast and –fastsse Options
PGI compilers implement a wide range of options that give users fine-grained control over each optimization phase. When it comes to optimizing code, the quickest way to start is to use the options –fast and –fastsse. These options create a generally optimal set of flags for targets that support SSE/SSE2 capability. They incorporate optimization options that enable use of vector streaming SIMD (SSE/SSE2) instructions for 64-bit targets. They enable vectorization with SSE instructions, cache alignment, and SSE arithmetic in flush-to-zero mode.
Note
The contents of the –fast and –fastsse options are host-dependent. Further, you should use these options on both the compile and link command lines.

• –fast and –fastsse typically include these options:
–O2            Specifies a code optimization level of 2.
–Munroll=c:1   Unrolls loops, executing multiple instances of the loop during each iteration.
–Mnoframe      Indicates to not generate code to set up a stack frame.
–Mlre          Indicates loop-carried redundancy elimination.
–Mpre          Indicates partial redundancy elimination.

• These additional options are also typically available when using –fast for 64-bit targets or –fastsse for both 32- and 64-bit targets:
–Mvect=sse     Generates SSE instructions.
–Mscalarsse    Generates scalar SSE code with xmm registers; implies –Mflushz.
–Mcache_align  Aligns long objects on cache-line boundaries.
–Mflushz       Sets SSE to flush-to-zero mode.
–M[no]vect     Controls automatic vector pipelining.
Note For best performance on processors that support SSE instructions, use the PGFORTRAN compiler, even for FORTRAN 77 code, and the –fast option. To see the specific behavior of –fast for your target, use the following command: $ pgfortran -help -fast
Other Performance-related Options
While –fast and –fastsse are designed to be the quickest route to best performance, they are limited to routine boundaries. Depending on the nature and writing style of the source code, the compiler can often perform further optimization by knowing the global context of usage of a given routine. For instance, determining the possible value range of actual parameters of a routine could enable a loop to be vectorized; similarly, determining the static occurrence of calls helps to decide which routines are beneficial to inline. These types of global optimizations are under the control of Interprocedural Analysis (IPA) in the PGI compilers. The option –Mipa enables interprocedural analysis, and –Mipa=fast is the recommended option for the best performance from global optimization. You can also add the suboption inline to enable automatic global inlining across files: you might consider using –Mipa=fast,inline. This option for interprocedural analysis and global optimization can improve performance. You may also obtain further performance improvements by experimenting with the –M options described in the section “–M Options by Category,” on page 213. These options include, but are not limited to, –Mconcur, –Mvect, –Munroll, –Minline, and –Mpfi/–Mpfo. However, performance improvements using these options are typically application- and system-dependent. It is important to time your application carefully when using these options to ensure no performance degradations occur. For more information on optimization, refer to Chapter 7, “Optimizing & Parallelizing,” on page 57. For specific information about these options, refer to “Optimization Controls,” on page 223.
Targeting Multiple Systems - Using the -tp Option
The –tp option allows you to set the target architecture. By default, the PGI compiler uses all supported instructions wherever possible when compiling on a given system. As a result, executables created on a given system may not be usable on previous-generation systems. For example, executables created on a Pentium 4 may fail to execute on a Pentium III or Pentium II. Processor-specific optimizations can be specified or limited explicitly by using the -tp option, so it is possible to create executables that are usable on previous-generation systems. With the exception of k8-64, k8-64e, p7-64, and x64, any of these suboptions are valid on any x86 or x64 processor-based system. The k8-64, k8-64e, p7-64, and x64 suboptions are valid only on x64 processor-based systems. For more information about the -tp option, refer to “–tp <target>[,target...],” on page 206.
Frequently-used Options In addition to overall performance, there are a number of other options that many users find useful when getting started. The following table provides a brief summary of these options. For more information on these options, refer to the complete description of each option available in Chapter 18, “Command-Line Options Reference,” on page 173. Also, there are a number of suboptions available with each of the –M options listed. For more information on those options, refer to “–M Options by Category,” on page 213.
Table 6.1. Commonly Used Command Line Options

–fast, –fastsse
  These options create a generally optimal set of flags for targets that support SSE/SSE2 capability. They incorporate optimization options to enable use of vector streaming SIMD instructions (64-bit targets) and enable vectorization with SSE instructions, cache alignment, and flush-to-zero mode.
–g
  Instructs the compiler to include symbolic debugging information in the object module.
–gopt
  Instructs the compiler to include symbolic debugging information in the object file, and to generate optimized code identical to that generated when –g is not specified.
–Mconcur
  Instructs the compiler to enable auto-concurrentization of loops. If specified, the compiler uses multiple processors to execute loops that it determines to be parallelizable; thus, loop iterations are split to execute optimally in a multithreaded execution context.
–Minfo
  Instructs the compiler to produce information on standard error.
–Minline
  Enables function inlining.
–Mipa=fast,inline
  Enables interprocedural analysis and optimization. Also enables automatic procedure inlining.
–Mpfi or –Mpfo
  Enable profile-feedback-driven optimizations.
–Mkeepasm
  Keeps the generated assembly files.
–Munroll
  Invokes the loop unroller to unroll loops, executing multiple instances of the loop during each iteration. This also sets the optimization level to 2 if the level is set to less than 2, or if no –O or –g options are supplied.
–M[no]vect
  Enables/disables the code vectorizer.
–o
  Names the output file.
–O<level>
  Specifies the code optimization level, where <level> is 0, 1, 2, 3, or 4.
Chapter 7. Optimizing & Parallelizing
Source code that is readable, maintainable, and produces correct results is not always organized for efficient execution. Normally, the first step in the program development process involves producing code that executes and produces the correct results. This first step usually involves compiling without much worry about optimization. After code is compiled and debugged, code optimization and parallelization become an issue. Invoking one of the PGI compiler commands with certain options instructs the compiler to generate optimized code. Optimization is not always performed since it increases compilation time and may make debugging difficult. However, optimization produces more efficient code that usually runs significantly faster than code that is not optimized.
Note
PGI provides a profiler, PGPROF, that provides a way to visualize the performance of the components of your program. Using tables and graphs, PGPROF associates execution time and resource utilization data with the source code and instructions of your program, allowing you to see where execution time is spent. Through resource utilization data and compiler analysis information, PGPROF helps you to understand why certain parts of your program have high execution times. To launch PGPROF, use the shortcut on the PVF Start menu: Start | All Programs | PGI Visual Fortran | Profiler | PGPROF Performance Profiler.

The compilers optimize code according to the specified optimization level. In PVF, you use the Fortran | Optimization property page to specify optimization levels; on the command line, the options you commonly use are –O, –Mvect, –Mipa, and –Mconcur. You can also use several –M switches to control specific types of optimization and parallelization. You can set the options not supported by the Fortran | Optimization property page by using the Additional Options field of the Fortran | Command Line property page. For more information on these property pages, refer to “Fortran Property Pages”. This chapter describes the optimization options displayed in the following list:
–fast
–Mconcur
–Minfo
–Minline
–Mipa=fast
–Mneginfo
–Mpfi
–Mpfo
–Munroll
–Mvect
–O
This chapter also describes how to choose optimization options to use with the PGI compilers. This overview will help if you are just getting started with one of the PGI compilers, or wish to experiment with individual optimizations. A complete specification of each of these options is available in Chapter 18, “Command-Line Options Reference”.
Overview of Optimization In general, optimization involves using transformations and replacements that generate more efficient code. This is done by the compiler and involves replacements that are independent of the particular target processor’s architecture as well as replacements that take advantage of the x86 or x64 architecture, instruction set and registers. For the discussion in this and the following chapters, optimization is divided into the following categories:
Local Optimization This optimization is performed on a block-by-block basis within a program’s basic blocks. A basic block is a sequence of statements in which the flow of control enters at the beginning and leaves at the end without the possibility of branching, except at the end. The PGI compilers perform many types of local optimization including: algebraic identity removal, constant folding, common sub-expression elimination, redundant load and store elimination, scheduling, strength reduction, and peephole optimizations.
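As an illustrative sketch (names invented, not PGI-specific), a single basic block such as the following gives the optimizer several of these opportunities:

```fortran
! Hypothetical fragment: within this one basic block the compiler can fold
! the constant expression and compute the common subexpression (b + c) once.
subroutine local_demo(a, d, b, c)
  implicit none
  real, intent(in)  :: b, c
  real, intent(out) :: a, d
  a = (b + c) * (3.0 * 2.0)   ! 3.0 * 2.0 folds to 6.0 at compile time
  d = (b + c) + 1.0           ! (b + c) is reused after CSE, not recomputed
end subroutine local_demo
```

Because there is no branching between the two assignments, the optimizer can safely reuse the intermediate value without any control-flow analysis.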
Global Optimization This optimization is performed on a program unit over all its basic blocks. The optimizer performs control-flow and data-flow analysis for an entire program unit. All loops, including those formed by IFs and GOTOs, are detected and optimized. Global optimization includes: constant propagation, copy propagation, dead store elimination, global register allocation, invariant code motion, and induction variable elimination.
Loop Optimization: Unrolling, Vectorization, and Parallelization The performance of certain classes of loops may be improved through vectorization or unrolling options. Vectorization transforms loops to improve memory access performance and make use of packed SSE instructions which perform the same operation on multiple data items concurrently. Unrolling replicates the body of loops to reduce loop branching overhead and provide better opportunities for local optimization, vectorization and scheduling of instructions. Performance for loops on systems with multiple processors may also improve using the parallelization features of the PGI compilers.
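A loop of the kind these transformations target might look like this hypothetical example (a SAXPY-style kernel with unit stride and no loop-carried dependences):

```fortran
! Hypothetical vectorization candidate: each iteration is independent and
! memory is accessed with unit stride, so the loop body can be mapped onto
! packed SSE instructions or unrolled to reduce branch overhead.
subroutine saxpy(n, a, x, y)
  implicit none
  integer, intent(in) :: n
  real, intent(in)    :: a, x(n)
  real, intent(inout) :: y(n)
  integer :: i
  do i = 1, n
     y(i) = a * x(i) + y(i)
  end do
end subroutine saxpy
```

Compiling such a routine with –Minfo typically reports whether the loop was vectorized, unrolled, or parallelized.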
Interprocedural Analysis (IPA) and Optimization Interprocedural analysis (IPA) allows use of information across function call boundaries to perform optimizations that would otherwise be unavailable. For example, if the actual argument to a function is in fact a constant in the caller, it may be possible to propagate that constant into the callee and perform optimizations that are not valid if the dummy argument is treated as a variable. A wide range of optimizations are enabled or improved by using IPA, including but not limited to data alignment optimizations, argument removal, constant propagation, pointer disambiguation, pure function detection, F90/F95 array shape propagation, data placement, vestigial function removal, automatic function inlining, inlining of functions from pre-compiled libraries, and interprocedural optimization of functions from pre-compiled libraries.
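As a hedged illustration of the constant-propagation case described above (routine and constant invented for illustration):

```fortran
! If every call site in the program happens to pass n = 1024, interprocedural
! analysis can propagate that constant into scale(), giving the optimizer a
! known trip count for unrolling and vectorization decisions.
subroutine scale(x, n)
  implicit none
  integer, intent(in) :: n
  real, intent(inout) :: x(n)
  integer :: i
  do i = 1, n
     x(i) = 0.5 * x(i)
  end do
end subroutine scale
```

Treating n as an unknown variable, by contrast, forces the compiler to generate general-purpose loop code.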
Function Inlining This optimization allows a call to a function to be replaced by a copy of the body of that function. This optimization will sometimes speed up execution by eliminating the function call and return overhead. Function inlining may also create opportunities for other types of optimization. Function inlining is not always beneficial. When used improperly it may increase code size and generate less efficient code.
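A small routine like the following hypothetical one is a typical inlining candidate, since the call-and-return overhead can exceed the cost of the body itself:

```fortran
! A one-line function; when called inside a hot loop, inlining eliminates
! the call and return and lets the compiler optimize the loop body as a whole.
real function f_to_c(f)
  implicit none
  real, intent(in) :: f
  f_to_c = (f - 32.0) / 1.8
end function f_to_c
```

Conversely, inlining a large routine at many call sites mostly inflates code size, which is the trade-off the text warns about.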
Profile-Feedback Optimization (PFO) Profile-feedback optimization (PFO) makes use of information from a trace file produced by specially instrumented executables which capture and save information on branch frequency, function and subroutine call frequency, semi-invariant values, loop index ranges, and other input data dependent information that can only be collected dynamically during execution of a program. By definition, use of profile-feedback optimization is a two-phase process: compilation and execution of a specially-instrumented executable, followed by a subsequent compilation which reads a trace file generated during the first phase and uses the information in that trace file to guide compiler optimizations.
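The two phases translate into a build recipe along these lines (a sketch only: the program name is a placeholder, and the commands require the PGI tools to run):

```shell
# Phase 1: build and run an instrumented executable; the run writes a trace file.
pgfortran -fast -Mpfi -o prog prog.f
prog
# Phase 2: recompile, letting the compiler read the trace to guide optimization.
pgfortran -fast -Mpfo -o prog prog.f
```

For the feedback to be useful, the phase-1 run should exercise the program on representative input data.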
Getting Started with Optimizations Your first concern should be getting your program to execute and produce correct results. To get your program running, start by compiling and linking without optimization. Use the optimization level –O0 or select –g to perform minimal optimization. At this level, you will be able to debug your program easily and isolate any coding errors exposed during porting to x86 or x64 platforms. If you want to get started quickly with optimization, a good set of options to use with any of the PGI compilers is –fast –Mipa=fast. For example: $ pgfortran -fast -Mipa=fast prog.f
In PVF, similar options may be accessed using the Optimization property in the Fortran | Optimization property page. For more information on these property pages, refer to “Optimization,” on page 334. For all of the PGI Fortran compilers, the –fast –Mipa=fast options generally produce code that is well-optimized without the possibility of significant slowdowns due to pathological cases. • The –fast option is an aggregate option that includes a number of individual PGI compiler options; which PGI compiler options are included depends on the target for which compilation is performed. • The –Mipa=fast option invokes interprocedural analysis, including several IPA suboptions. By experimenting with individual compiler options on a file-by-file basis, further significant performance gains can sometimes be realized. However, depending on the coding style, individual optimizations can sometimes cause slowdowns and must be used carefully to ensure performance improvements. In addition to -fast, the optimization flags most likely to further improve performance are -O3, -Mpfi, -Mpfo, and -Minline; on targets with multiple processors, you can also use -Mconcur. In PVF, you may access the -O3, -Minline, and -Mconcur options by using the Global Optimizations, Inlining, and Auto-Parallelization properties on the Fortran | Optimization property page, respectively. For more information on these property pages, refer to “Optimization,” on page 334. Three other extremely useful options are -help, -Minfo, and -dryrun.
–help

As described in “Help with Command-line Options,” on page 52, you can see a specification of any command-line option by invoking any of the PGI compilers with -help in combination with the option in question, without specifying any input files. For example, you might want information on -O:

$ pgfortran -help -O
The resulting output is similar to this:

Reading rcfile /usr/pgi/linux86-64/7.0/bin/.pgfortranrc
-O[]    Set optimization level, -O0 to -O4, default -O2
Or you can see the full functionality of -help itself, which can return information on either an individual option or groups of options:

$ pgfortran -help -help
The resulting output is similar to this:

Reading rcfile /usr/pgi_rel/linux86-64/7.0/bin/.pgfortranrc
-help[=groups|asm|debug|language|linker|opt|other|overall|phase|prepro|suffix|switch|target|variable]
In PVF these options may be accessed via the Fortran | Command Line property page or, perhaps more appropriately for the –help option, via a Build Event or Custom Build Step. For more information on these property pages, refer to “Command Line,” on page 352.
–Minfo

You can use the -Minfo option to display compile-time optimization listings. When this option is used, the PGI compilers issue informational messages to stderr as compilation proceeds. From these messages, you can determine which loops are optimized using unrolling, SSE instructions, vectorization, parallelization, interprocedural optimizations and various miscellaneous optimizations. You can also see where and whether functions are inlined. For more information on –Minfo, refer to “Optimization Controls,” on page 223.
–Mneginfo

You can use the -Mneginfo option to display informational messages listing why certain optimizations are inhibited. In PVF, you can use the Warning Level property available in the Fortran | General property page to specify the option -Mneginfo. For more information on –Mneginfo, refer to “Optimization Controls,” on page 223.
Chapter 7. Optimizing & Parallelizing

–dryrun

The –dryrun option can be useful as a diagnostic tool if you need to see the steps used by the compiler driver to preprocess, compile, assemble and link in the presence of a given set of command-line inputs. When you specify the –dryrun option, these steps are printed to stderr but are not actually performed. For example, you can use this option to inspect the default and user-specified libraries that are searched during the link phase, and the order in which they are searched by the linker.

The remainder of this chapter describes the –O options, the loop unroller option –Munroll, the vectorizer option –Mvect, the auto-parallelization option –Mconcur, the interprocedural analysis optimization –Mipa, and the profile-feedback instrumentation (–Mpfi) and optimization (–Mpfo) options. You should be able to get very near optimal compiled performance using some combination of these switches.
Common Compiler Feedback Format (CCFF)

Using the Common Compiler Feedback Format (CCFF), PGI compilers save information about how your program was optimized, or why a particular optimization was not made, in the executable file. To append this information to the object file, use the compiler option –Minfo=ccff. If you choose to use PGPROF to aid with your optimization, PGPROF can extract this information and associate it with source code and other performance data, allowing you to view all of this information simultaneously in one of the available profiler panels.
Local and Global Optimization using -O

Using the PGI compiler commands with the –Olevel option (the capital O is for Optimize), you can specify any of the following optimization levels:

–O0  Level zero specifies no optimization. A basic block is generated for each language statement.
–O1  Level one specifies local optimization. Scheduling of basic blocks is performed. Register allocation is performed.
–O2  Level two specifies global optimization. This level performs all level-one local optimization as well as level-two global optimization. If optimization is specified on the command line without a level, level 2 is the default.
–O3  Level three specifies aggressive global optimization. This level performs all level-one and level-two optimizations and enables more aggressive hoisting and scalar replacement optimizations that may or may not be profitable.
–O4  Level four performs all level-one, level-two, and level-three optimizations and enables hoisting of guarded invariant floating point expressions.
Note
If you use the –O option to specify optimization and do not specify a level, then level-two optimization (–O2) is the default.
Level-zero optimization specifies no optimization (–O0). At this level, the compiler generates a basic block for each statement. Performance will almost always be slowest using this optimization level. This level is useful for the initial execution of a program. It is also useful for debugging, since there is a direct correlation between the program text and the code generated.

Level-one optimization specifies local optimization (–O1). The compiler performs scheduling of basic blocks as well as register allocation. Local optimization is a good choice when the code is very irregular, such as code that contains many short statements containing IF statements and does not contain loops (DO or DO WHILE statements). Although this case rarely occurs, for certain types of code, this optimization level may perform better than level-two (–O2).

The PGI compilers perform many different types of local optimizations, including but not limited to:
- Algebraic identity removal
- Peephole optimizations
- Constant folding
- Redundant load and store elimination
- Common subexpression elimination
- Strength reductions
- Local register optimization

Level-two optimization (–O2 or –O) specifies global optimization. The –fast option generally will specify global optimization; however, the –fast switch varies from release to release, depending on a reasonable selection of switches for any one particular release. The –O or –O2 level performs all level-one local optimizations as well as global optimizations. Control flow analysis is applied and global registers are allocated for all functions and subroutines. Loop regions are given special consideration. This optimization level is a good choice when the program contains loops, the loops are short, and the structure of the code is regular.

The PGI compilers perform many different types of global optimizations, including but not limited to:
- Branch to branch elimination
- Global register allocation
- Constant propagation
- Invariant code motion
- Copy propagation
- Induction variable elimination
- Dead store elimination

You can explicitly select the optimization level on the command line. For example, the following command line specifies level-two optimization, which results in global optimization:

$ pgfortran -O2 prog.f
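As an illustration of one of these transformations, the following sketch (hypothetical code, not taken from the compiler's actual output) shows conceptually what invariant code motion does; the compiler performs this internally, and your source code is not changed:

```fortran
! Before: the product s*t does not change inside the loop,
! but is recomputed on every iteration.
      do i = 1, n
         a(i) = b(i) + s*t
      enddo

! After invariant code motion, conceptually: the invariant
! expression is computed once, outside the loop.
      tmp = s*t
      do i = 1, n
         a(i) = b(i) + tmp
      enddo
```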
Specifying –O on the command-line without a level designation is equivalent to –O2. The default optimization level changes depending on which options you select on the command line. For example, when you select the –g debugging option, the default optimization level is set to level-zero (–O0). However, if you need to debug optimized code, you can use the -gopt option to generate debug information without perturbing optimization. Refer to “Default Optimization Levels,” on page 79 for a description of the default levels.

As noted previously, the –fast option includes –O2 on all x86 and x64 targets. If you want to override the default for –fast with –O3 while maintaining all other elements of –fast, simply compile as follows:

$ pgfortran -fast -O3 prog.f
Scalar SSE Code Generation

For all processors prior to Intel Pentium 4 and AMD Opteron/Athlon64, for example Intel Pentium III and AMD AthlonXP/MP processors, scalar floating-point arithmetic as generated by the PGI Workstation compilers is performed using x87 floating-point stack instructions. With the advent of SSE/SSE2 instructions on Intel Pentium 4/Xeon and AMD Opteron/Athlon64, it is possible to perform all scalar floating-point arithmetic using SSE/SSE2 instructions. In most cases, this is beneficial from a performance standpoint.

The default on 32-bit Intel Pentium II/III (options –tp p6, –tp piii, and so on) or on AMD AthlonXP/MP (option –tp k7) is to use x87 instructions for scalar floating-point arithmetic. The default on Intel Pentium 4/Xeon or Intel EM64T running a 32-bit operating system (–tp p7), AMD Opteron/Athlon64 running a 32-bit operating system (–tp k8-32), or AMD Opteron/Athlon64 or Intel EM64T processors running a 64-bit operating system (using –tp k8-64 and –tp p7-64 respectively) is to use SSE/SSE2 instructions for scalar floating-point arithmetic. The only way to override this default on AMD Opteron/Athlon64 or Intel EM64T processors running a 64-bit operating system is to specify an older 32-bit target. For example, you can use –tp k7 or –tp piii.

In PVF, the –tp option is accessed using the Fortran | Target Processors and Fortran | Target Accelerators property pages. For more information on these property pages, refer to “Fortran | Target Processors,” on page 344 and to “Fortran | Target Accelerators,” on page 345.
Note
There can be significant arithmetic differences between calculations performed using x87 instructions versus SSE/SSE2. By default, all floating-point data is promoted to IEEE 80-bit format when stored on the x87 floating-point stack, and all x87 operations are performed register-to-register in this same format. Values are converted back to IEEE 32-bit or IEEE 64-bit when stored back to memory (for REAL/float and DOUBLE PRECISION/double data respectively). The default precision of the x87 floating-point stack can be reduced to IEEE 32-bit or IEEE 64-bit globally by compiling the main program with the –pc {32|64} option to the PGI compilers, which is described in detail in Chapter 6, “Using Command Line Options”. However, there is no way to ensure that operations performed in mixed precision will match those produced on a traditional load-store RISC/UNIX system which implements IEEE 64-bit and IEEE 32-bit registers and associated floating-point arithmetic instructions.

In contrast, arithmetic results produced on Intel Pentium 4/Xeon, AMD Opteron/Athlon64 or Intel EM64T processors will usually closely match or be identical to those produced on a traditional RISC/UNIX system if all scalar arithmetic is performed using SSE/SSE2 instructions. You should keep this in mind when porting applications to and from systems which support both x87 and full SSE/SSE2 floating-point arithmetic. Many subtle issues can arise which affect your numerical results, sometimes to several digits of accuracy.
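The following small program is a hypothetical sketch (not from the original text) of the kind of code where x87 and SSE results can diverge: a REAL*4 accumulator kept in an 80-bit x87 register may continue to absorb tiny increments, while the same accumulation performed in 32-bit SSE registers can lose them to rounding. Actual results depend on the compiler, options, and target.

```fortran
      program precision_demo
      real*4 s
      integer i
      s = 1.0e0
      do i = 1, 1000000
         s = s + 1.0e-8
      enddo
! Compare the printed value when compiled for x87 (e.g. -tp piii)
! versus scalar SSE code (e.g. -tp k8-64); the low-order digits
! may differ.
      print *, s
      end
```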
Loop Unrolling using –Munroll

This optimization unrolls loops, executing multiple instances of the loop during each iteration. This reduces branch overhead, and can improve execution speed by creating better opportunities for instruction scheduling. A loop with a constant count may be completely unrolled or partially unrolled. A loop with a non-constant count may also be unrolled. A candidate loop must be an innermost loop containing one to four blocks of code.
The following example shows the use of the –Munroll option:

$ pgfortran -Munroll prog.f
The –Munroll option is included as part of –fast on all x86 and x64 targets. The loop unroller expands the contents of a loop and reduces the number of times a loop is executed. Branching overhead is reduced when a loop is unrolled two or more times, since each iteration of the unrolled loop corresponds to two or more iterations of the original loop; the number of branch instructions executed is proportionately reduced. When a loop is unrolled completely, the loop’s branch overhead is eliminated altogether.

In PVF, this option is accessed using the Loop Unroll Count property in the Fortran | Optimization property page. For more information on these property pages, refer to “Fortran | Optimization,” on page 334.

Loop unrolling may be beneficial for the instruction scheduler. When a loop is completely unrolled or unrolled two or more times, opportunities for improved scheduling may be presented. The code generator can take advantage of more possibilities for instruction grouping or filling instruction delays found within the loop. Example 7.1, “Dot Product Code” and Example 7.2, “Unrolled Dot Product Code” show the effect of code unrolling on a segment that computes a dot product.

Example 7.1. Dot Product Code

      REAL*4 A(100), B(100), Z
      INTEGER I
      DO I=1, 100
        Z = Z + A(i) * B(i)
      END DO
      END
Example 7.2. Unrolled Dot Product Code

      REAL*4 A(100), B(100), Z
      INTEGER I
      DO I=1, 100, 2
        Z = Z + A(i) * B(i)
        Z = Z + A(i+1) * B(i+1)
      END DO
      END
Using the –Minfo option, the compiler informs you when a loop is being unrolled. For example, a message similar to the following, indicating the line number and the number of times the code is unrolled, displays when a loop is unrolled:

dot:
     5, Loop unrolled 5 times
Using the c: and n: sub-options to –Munroll, or using –Mnounroll, you can control whether and how loops are unrolled on a file-by-file basis. Using directives or pragmas as specified in Chapter 11, “Using Directives”, you can precisely control whether and how a given loop is unrolled. For a detailed description of the –Munroll option, refer to Chapter 6, “Using Command Line Options”.
Vectorization using –Mvect

The –Mvect option is included as part of –fast on all x86 and x64 targets. If your program contains computationally-intensive loops, the –Mvect option may be helpful. If in addition you specify –Minfo, and your code contains loops that can be vectorized, the compiler reports relevant information on the optimizations applied.

When a PGI compiler command is invoked with the –Mvect option, the vectorizer scans code searching for loops that are candidates for high-level transformations such as loop distribution, loop interchange, cache tiling, and idiom recognition (replacement of a recognizable code sequence, such as a reduction loop, with
optimized code sequences or function calls). When the vectorizer finds vectorization opportunities, it internally rearranges or replaces sections of loops (the vectorizer changes the code generated; your source code’s loops are not altered). In addition to performing these loop transformations, the vectorizer produces extensive data dependence information for use by other phases of compilation and detects opportunities to use vector or packed Streaming SIMD Extensions (SSE) instructions on processors where these are supported. The –Mvect option can speed up code which contains well-behaved countable loops which operate on large REAL, REAL*4, REAL*8, INTEGER*4, COMPLEX or COMPLEX DOUBLE arrays in Fortran. However, it is possible that some codes will show a decrease in performance when compiled with the –Mvect option due to the generation of conditionally executed code segments, inability to determine data alignment, and other code generation factors. For this reason, it is recommended that you check carefully whether particular program units or loops show improved performance when compiled with this option enabled. In PVF, you access the basic forms of this option using the Vectorization property in the Fortran | Optimization property page. For more advanced use of this option, use the Fortran | Command Line property page. For more information on these property pages, refer to “Fortran Property Pages,” on page 331.
Vectorization Sub-options

The vectorizer performs high-level loop transformations on countable loops. A loop is countable if the number of iterations is set only before loop execution and cannot be modified during loop execution. Some of the vectorizer transformations can be controlled by arguments to the –Mvect command line option. The following sections describe the arguments that affect the operation of the vectorizer. In addition, some of these vectorizer operations can be controlled from within code using directives and pragmas. For details on the use of directives and pragmas, refer to Chapter 11, “Using Directives,” on page 123.

The vectorizer performs the following operations:
• Loop interchange
• Loop splitting
• Loop fusion
• Memory-hierarchy (cache tiling) optimizations
• Generation of SSE instructions on processors where these are supported
• Generation of prefetch instructions on processors where these are supported
• Loop iteration peeling to maximize vector alignment
• Alternate code generation

By default, –Mvect without any sub-options is equivalent to:

-Mvect=assoc,cachesize=c
where c is the actual cache size of the machine. This enables the options for nested loop transformation and various other vectorizer options. These defaults may vary depending on the target system.
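To make the countability requirement concrete, here is a hypothetical sketch (not from the original text). The first loop is countable because its trip count is fixed before the loop is entered; the second is not, because the number of iterations depends on data examined during execution:

```fortran
      do i = 1, n
         a(i) = b(i) * 2.0      ! countable: candidate for -Mvect
      enddo

      i = 1
      do while (a(i) .gt. 0.0)  ! not countable: exit test depends
         i = i + 1              ! on data computed at run time
      enddo
```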
Assoc Option

The option –Mvect=assoc instructs the vectorizer to perform associativity conversions that can change the results of a computation due to a round-off error (–Mvect=noassoc disables this option). For example, a typical optimization is to change one arithmetic operation to another arithmetic operation that is mathematically correct, but can be computationally different and generate faster code. This option is provided to enable or disable this transformation, since a round-off error for such associativity conversions may produce unacceptable results.

Cachesize Option

The option –Mvect=cachesize:n instructs the vectorizer to tile nested loop operations assuming a data cache size of n bytes. By default, the vectorizer attempts to tile nested loop operations, such as matrix multiply, using multi-dimensional strip-mining techniques to maximize re-use of items in the data cache.

SSE Option

The option –Mvect=sse instructs the vectorizer to automatically generate packed SSE (Streaming SIMD Extensions), SSE2, and prefetch instructions when vectorizable loops are encountered. SSE instructions, first introduced on Pentium III and AthlonXP processors, operate on single-precision floating-point data, and hence apply only to vectorizable loops that operate on single-precision floating-point data. SSE2 instructions, first introduced on Pentium 4, Xeon and Opteron processors, operate on double-precision floating-point data. Prefetch instructions, first introduced on Pentium III and AthlonXP processors, can be used to improve the performance of vectorizable loops that operate on either 32-bit or 64-bit floating-point data. Refer to the PGI Release Notes for a concise list of processors that support SSE, SSE2 and prefetch instructions.
Note
Program units compiled with –Mvect=sse will not execute on Pentium, Pentium Pro, Pentium II or first generation AMD Athlon processors. They will only execute correctly on Pentium III, Pentium 4, Xeon, EM64T, AthlonXP, Athlon64 and Opteron systems running an SSE-enabled operating system.

Prefetch Option

The option –Mvect=prefetch instructs the vectorizer to automatically generate prefetch instructions when vectorizable loops are encountered, even in cases where SSE or SSE2 instructions are not generated. Usually, explicit prefetching is not necessary on Pentium 4, Xeon and Opteron because these processors support hardware prefetching; nonetheless, it sometimes can be worthwhile to experiment with explicit prefetching. Prefetching can be controlled on a loop-by-loop level using prefetch directives, which are described in detail in “Prefetch Directives,” on page 126.
Note
Program units compiled with –Mvect=prefetch will not execute correctly on Pentium, Pentium Pro, or Pentium II processors. They will execute correctly only on Pentium III, Pentium 4, Xeon, EM64T, AthlonXP, Athlon64 or Opteron systems. In addition, the prefetch instruction is only supported on AthlonXP, Athlon64 or Opteron systems and can cause instruction faults on non-AMD processors. For this reason, the PGI compilers do not generate prefetch instructions by default on any target.
In addition to these sub-options to –Mvect, several other sub-options are supported. Refer to the description of –M[no]vect in Chapter 18, “Command-Line Options Reference” for a detailed description of all available sub-options.
Vectorization Example Using SSE/SSE2 Instructions

One of the most important vectorization options is -Mvect=sse. When you use this option, the compiler automatically generates SSE and SSE2 instructions, where possible, when targeting processors on which these instructions are supported. This process can improve performance by up to a factor of two compared with the equivalent scalar code. All of the PGI Fortran, C and C++ compilers support this capability. The PGI Release Notes show which x86 and x64 processors support these instructions.

Prior to release 7.0, -Mvect=sse was omitted from the compiler switch -fast but was included in the switch -fastsse. Since release 7.0, -fast is synonymous with -fastsse; therefore, both options include -Mvect=sse.

In the program in Example 7.3, “Vector operation using SSE instructions”, the vectorizer recognizes the vector operation in subroutine 'loop' when either the compiler switch -Mvect=sse or -fast is used. This example shows the compilation, informational messages, and run-time results using the SSE instructions on an AMD Opteron processor-based system, along with issues that affect SSE performance.

First note that the arrays in Example 7.3 are single-precision and that the vector operation is done using a unit stride loop. Thus, this loop can potentially be vectorized using SSE instructions on any processor that supports SSE or SSE2 instructions. SSE operations can be used to operate on pairs of single-precision floating-point numbers, and do not apply to double-precision floating-point numbers. SSE2 instructions can be used to operate on quads of single-precision floating-point numbers or on pairs of double-precision floating-point numbers.

Loops vectorized using SSE or SSE2 instructions operate much more efficiently when processing vectors that are aligned to a cache-line boundary. You can cause unconstrained data objects of size 16 bytes or greater to be cache-aligned by compiling with the –Mcache_align switch.
An unconstrained data object is a data object that is not a common block member and not a member of an aggregate data structure.
Note
For stack-based local variables to be properly aligned, the main program or function must be compiled with –Mcache_align.

The –Mcache_align switch has no effect on the alignment of Fortran allocatable or automatic arrays. If you have arrays that are constrained, such as vectors that are members of Fortran common blocks, you must specifically pad your data structures to ensure proper cache alignment. You can use –Mcache_align for only the beginning address of each common block to be cache-aligned.

The following examples show the results of compiling the sample code in Example 7.3 both with and without the option –Mvect=sse.

Example 7.3. Vector operation using SSE instructions

      program vector_op
      parameter (N = 9999)
      real*4 x(N), y(N), z(N), W(N)
      do i = 1, n
         y(i) = i
         z(i) = 2*i
         w(i) = 4*i
      enddo
      do j = 1, 200000
         call loop(x,y,z,w,1.0e0,N)
      enddo
      print *, x(1),x(771),x(3618),x(6498),x(9999)
      end

      subroutine loop(a,b,c,d,s,n)
      integer i, n
      real*4 a(n), b(n), c(n), d(n), s
      do i = 1, n
         a(i) = b(i) + c(i) - s * d(i)
      enddo
      end
Assume the preceding program is compiled as follows, where -Mvect=nosse disables SSE vectorization:

% pgfortran -fast -Mvect=nosse -Minfo vadd.f
vector_op:
      4, Loop unrolled 4 times
loop:
     18, Loop unrolled 4 times
The following output shows a sample result if the generated executable is run and timed on a standalone AMD Opteron 2.2 GHz system:

% /bin/time vadd
 -1.000000   -771.000   -3618.000   -6498.00   -9999.00
5.39user 0.00system 0:05.40elapsed 99%CPU
Now, recompile with SSE vectorization enabled, and you see results similar to these:

% pgfortran -fast -Minfo vadd.f -o vadd
vector_op:
      4, Unrolled inner loop 8 times
         Loop unrolled 7 times (completely unrolled)
loop:
     18, Generated 4 alternate loops for the inner loop
         Generated vector sse code for inner loop
         Generated 3 prefetch instructions for this loop
Notice the informational message for the loop at line 18. • The first two lines of the message indicate that the loop was vectorized, SSE instructions were generated, and four alternate versions of the loop were also generated. The loop count and alignments of the arrays determine which of these versions is executed. • The last line of the informational message indicates that prefetch instructions have been generated for three loads to minimize latency of data transfers from main memory. Executing again, you should see results similar to the following: % /bin/time vadd -1.000000 -771.000 -3618.00 -6498.00 -9999.0 3.59user 0.00system 0:03.59elapsed 100%CPU
The result is a 50% speed-up over the equivalent scalar, that is, the non-SSE, version of the program.
Speed-up realized by a given loop or program can vary widely based on a number of factors: • When the vectors of data are resident in the data cache, performance improvement using vector SSE or SSE2 instructions is most effective. • If data is aligned properly, performance will be better in general than when using vector SSE operations on unaligned data. • If the compiler can guarantee that data is aligned properly, even more efficient sequences of SSE instructions can be generated. • The efficiency of loops that operate on single-precision data can be higher. SSE2 vector instructions can operate on four single-precision elements concurrently, but only two double-precision elements.
Note Compiling with –Mvect=sse can result in numerical differences from the executables generated with less optimization. Certain vectorizable operations, for example dot products, are sensitive to order of operations and the associative transformations necessary to enable vectorization (or parallelization).
Auto-Parallelization using -Mconcur

With the -Mconcur option the compiler scans code searching for loops that are candidates for auto-parallelization. -Mconcur must be used at both compile-time and link-time. When the parallelizer finds opportunities for auto-parallelization, it parallelizes loops and you are informed of the line or loop being parallelized if the -Minfo option is present on the compile line. See “Optimization Controls,” on page 223, for a complete specification of -Mconcur.

In PVF, the basic form of this option is accessed using the Auto-Parallelization property of the Fortran | Optimization property page. For more advanced auto-parallelization, use the Fortran | Command Line property page. For more information on these property pages, refer to “Fortran Property Pages,” on page 331.

A loop is considered parallelizable if it doesn't contain any cross-iteration data dependencies. Cross-iteration dependencies from reductions and expandable scalars are excluded from consideration, enabling more loops to be parallelizable. In general, loops with calls are not parallelized due to unknown side effects. Also, loops with low trip counts are not parallelized since the overhead in setting up and starting a parallel loop will likely outweigh the potential benefits. In addition, the default is to not parallelize innermost loops, since these often by definition are vectorizable using SSE instructions and it is seldom profitable to both vectorize and parallelize the same loop, especially on multi-core processors. Compiler switches and directives are available to let you override most of these restrictions on auto-parallelization.
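The following loops are a hypothetical sketch (not from the original text) of how these rules play out. The first loop has independent iterations and is a candidate for auto-parallelization; the second carries a cross-iteration dependence through a(i-1) and is not parallelized; the third is a sum reduction, a dependence the parallelizer knows how to handle:

```fortran
      do i = 1, n
         a(i) = b(i) + c(i)     ! independent iterations: parallelizable
      enddo

      do i = 2, n
         a(i) = a(i-1) + b(i)   ! cross-iteration dependence: not parallelized
      enddo

      s = 0.0
      do i = 1, n
         s = s + a(i)           ! reduction: excluded from dependence consideration
      enddo
```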
Auto-parallelization Sub-options

The parallelizer performs various operations that can be controlled by arguments to the –Mconcur command line option. The following sections describe the arguments that affect the operation of the parallelizer. In addition, these parallelizer operations can be controlled from within code using directives and pragmas. For details on the use of directives and pragmas, refer to Chapter 11, “Using Directives”.

By default, –Mconcur without any sub-options is equivalent to:
-Mconcur=dist:block
This enables parallelization of loops with blocked iteration allocation across the available threads of execution. These defaults may vary depending on the target system.

Altcode Option

The option –Mconcur=altcode instructs the parallelizer to generate alternate serial code for parallelized loops. If altcode is specified without arguments, the parallelizer determines an appropriate cutoff length and generates serial code to be executed whenever the loop count is less than or equal to that length. If altcode:n is specified, the serial altcode is executed whenever the loop count is less than or equal to n. If noaltcode is specified, no alternate serial code is generated.

Dist Option

The option –Mconcur=dist:{block|cyclic} specifies whether to assign loop iterations to the available threads in blocks or in a cyclic (round-robin) fashion. Block distribution is the default. If cyclic is specified, iterations are allocated to processors cyclically. That is, processor 0 performs iterations 0, 3, 6, etc.; processor 1 performs iterations 1, 4, 7, etc.; and processor 2 performs iterations 2, 5, 8, etc.

Cncall Option

The option –Mconcur=cncall specifies that it is safe to parallelize loops that contain subroutine or function calls. By default, such loops are excluded from consideration for auto-parallelization. Also, no minimum loop count threshold must be satisfied before parallelization will occur, and last values of scalars are assumed to be safe.

The environment variable NCPUS is checked at run-time for a parallel program. If NCPUS is set to 1, a parallel program runs serially, but will use the parallel routines generated during compilation. If NCPUS is set to a value greater than 1, the specified number of processors will be used to execute the program. Setting NCPUS to a value exceeding the number of physical processors can produce inefficient execution.
Executing a program on multiple processors in an environment where some of the processors are being time-shared with another executing job can also result in inefficient execution. As with the vectorizer, the -Mconcur option can speed up code if it contains well-behaved countable loops and/or computationally intensive nested loops that operate on arrays. However, it is possible that some codes will show a decrease in performance on multi-processor systems when compiled with -Mconcur due to parallelization overheads, memory bandwidth limitations in the target system, false-sharing of cache lines, or other architectural or code-generation factors. For this reason, it is recommended that you check carefully whether particular program units or loops show improved performance when compiled using this option. If the compiler is not able to successfully auto-parallelize your application, you should refer to Chapter 9, “Using OpenMP”. It is possible that insertion of explicit parallelization directives or pragmas, and use of the compiler option –mp might enable the application to run in parallel.
Loops That Fail to Parallelize

In spite of the sophisticated analysis and transformations performed by the compiler, programmers may notice loops that are seemingly parallel, but are not parallelized. In this subsection, we look at some examples of common situations where parallelization does not occur.
Innermost Loops

As noted earlier in this chapter, the PGI compilers will not parallelize innermost loops by default, because it is usually not profitable. However, you can override this default using the –Mconcur=innermost command-line option.

Timing Loops

Often, loops occur in programs that are similar to timing loops. The outer loop in the following example is one such loop.

      do j = 1, 2
         do i = 1, n
            a(i) = b(i) + c(i)
         enddo
      enddo
The outer loop in the preceding example is not parallelized because the compiler detects a cross-iteration dependence in the assignment to a(i). Suppose the outer loop were parallelized. Then both processors would simultaneously attempt to make assignments into a(1:n). Now in general the values computed by each processor for a(1:n) will differ, so that simultaneous assignment into a(1:n) will produce values different from sequential execution of the loops.

In this example, values computed for a(1:n) don’t depend on j, so that simultaneous assignment by both processors will not yield incorrect results. However, it is beyond the scope of the compilers’ dependence analysis to determine that values computed in one iteration of a loop don’t differ from values computed in another iteration. So the worst case is assumed, and different iterations of the outer loop are assumed to compute different values for a(1:n). Is this assumption too pessimistic? If j doesn’t occur anywhere within a loop, the loop exists only to cause some delay, most probably to improve timing resolution. It is not usually valid to parallelize timing loops; to do so would distort the timing information for the inner loops.

Scalars

Quite often, scalars will inhibit parallelization of non-innermost loops. There are two separate cases that present problems. In the first case, scalars appear to be expandable, but appear in non-innermost loops, as in the following example.

   do j = 1, n
      x = b(j)
      do i = 1, n
         a(i,j) = x + c(i,j)
      enddo
   enddo
There are a number of technical problems to be resolved in order to recognize expandable scalars in non-innermost loops. Until this generalization occurs, scalars like x in the preceding code segment inhibit parallelization of loops in which they are assigned. In the following example, scalar k is not expandable, and it is not an accumulator for a reduction.
   k = 1
   do i = 1, n
      do j = 1, n
1        a(j,i) = b(k) * x
      enddo
2     k = i
      if (i .gt. n/2) k = n - (i - n/2)
   enddo
If the outer loop is parallelized, conflicting values are stored into k by the various processors. The variable k cannot be made local to each processor because its value must remain coherent among the processors. It is possible the loop could be parallelized if all assignments to k are placed in critical sections. However, it is not clear where critical sections should be introduced because in general the value for k could depend on another scalar (or on k itself), and code to obtain the value of other scalars must reside in the same critical section.

In the previous example, the assignment to k within a conditional at label 2 prevents k from being recognized as an induction variable. If the conditional statement at label 2 is removed, k would be an induction variable whose value varies linearly with j, and the loop could be parallelized.

Scalar Last Values

During parallelization, scalars within loops often need to be privatized; that is, each execution thread has its own independent copy of the scalar. Problems can arise if a privatized scalar is accessed outside the loop. For example, consider the following loop:

   do I = 1, N
      if (x(I) > 5.0) then
         t = I
      endif
   enddo
   v = t
The value of t may not be computed on the last iteration of the loop. Normally, if a scalar is assigned within a loop and used following the loop, the PGI compilers save the last value of the scalar. However, if the loop is parallelized and the scalar is not assigned on every iteration, it may be difficult, without resorting to costly critical sections, to determine on what iteration t is last assigned. Analysis allows the compiler to determine that a scalar is assigned on each iteration and hence that the loop is safe to parallelize if the scalar is used later, as illustrated in the following example.

   do I = 1, N
      if (x(I) > 0.0) then
         t = 2.0
      else
         t = 3.0
         y(i) = t
      endif
   enddo
   v = t
where t is assigned on every iteration of the loop. However, there are cases where a scalar may be privatizable, but if it is used after the loop, it is unsafe to parallelize. Examine the following loop in which each use of t within the loop is reached by a definition from the same iteration.

   do I = 1, N
      if (x(I) > 0.0) then
         t = x(I)
         ...
         y(i) = ... t
      endif
   enddo
   v = t
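When this last-value problem matters in practice, one common manual workaround (our sketch, not a transformation performed by the compiler) is to record the index of the last assignment, which parallelizes safely as a MAX reduction, and then reconstruct the scalar's final value after the loop:

```fortran
! Sketch: 'last' tracks the highest iteration that assigned t.
! Taking the maximum is a reduction, which parallelizes safely.
last = 0
do I = 1, N
   if (x(I) > 0.0) then
      t = x(I)
      y(I) = t
      last = max(last, I)
   endif
enddo
if (last > 0) v = x(last)   ! recompute the final value of t
```

This removes the dependence on which thread happened to execute the assigning iteration last.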
Here t is privatizable, but the use of t outside the loop may yield incorrect results, since the compiler may not be able to detect on which iteration of the parallelized loop t is last assigned. The compiler detects the previous cases. When a scalar is used after the loop but is not defined on every iteration of the loop, parallelization does not occur.

When the programmer knows that the scalar is assigned on the last iteration of the loop, the programmer may use a directive to let the compiler know the loop is safe to parallelize. The Fortran directive safe_lastval informs the compiler that, for a given loop, all scalars are assigned in the last iteration of the loop; thus, it is safe to parallelize the loop. We could add the following line to any of our previous examples.

cpgi$l safe_lastval
The resulting code looks similar to this:

cpgi$l safe_lastval
...
   do I = 1, N
      if (x(I) > 5.0) then
         t = I
      endif
   enddo
   v = t
In addition, the command-line option –Msafe_lastval provides this information for all loops within the routines being compiled, essentially giving the directive global scope.
Processor-Specific Optimization & the Unified Binary

Different processors have differences, some subtle, in hardware features such as instruction sets and cache size. The compilers make architecture-specific decisions about things such as instruction selection, instruction scheduling, and vectorization. By default, the PGI compilers produce code specifically targeted to the type of processor on which the compilation is performed. That is, the default is to use all supported instructions wherever possible when compiling on a given system. As a result, executables created on a given system may not be usable on previous-generation systems. For example, executables created on a Pentium 4 may fail to execute on a Pentium III or Pentium II.

All PGI compilers have the capability of generating unified binaries, which provide a low-overhead means for generating a single executable that is compatible with and has good performance on more than one hardware platform. You can use the –tp option to control compilation behavior by specifying the processor or processors with which the generated code is compatible. The compilers generate and combine into one executable multiple binary code streams, each optimized for a specific platform. At run-time, the one executable senses the environment and dynamically selects the appropriate code stream. For specific information on the –tp option, refer to –tp [,target...].

Executable size is automatically controlled via unified binary culling. Only those functions and subroutines where the target affects the generated code have unique binary images, resulting in code-size savings of 10% to 90% compared to generating full copies of code for each target. Programs can use the PGI Unified Binary even if all of the object files and libraries are not compiled as unified binaries. Like any other object file, you can use PGI Unified Binary object files to create programs or libraries.
No special start-up code is needed; support is linked in from the PGI libraries.
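For example, a unified binary might be built by naming more than one target with –tp (the target spelling shown here is illustrative; refer to the –tp [,target...] documentation for the exact values supported by your release):

```
% pgfortran -tp x64 -fast -o prog.exe prog.f95
```

The resulting prog.exe contains a code stream per target and selects the appropriate one at run-time, as described above.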
The –Mipa option disables generation of PGI Unified Binary. Instead, the default target auto-detect rules for the host are used to select the target processor.
Interprocedural Analysis and Optimization using –Mipa

The PGI Fortran compilers use interprocedural analysis (IPA) that results in minimal changes to makefiles and the standard edit-build-run application development cycle. Other than adding –Mipa to the command line or selecting the appropriate value for the PVF Optimization property from the property page Fortran | Optimization, no other changes are required. For reference and background, the process of building a program without IPA is described later in this section, followed by the minor modifications required to use IPA with the PGI compilers.
Note PVF’s internal build engine uses the method described in “Building a Program with IPA - Several Steps,” on page 76.
Building a Program Without IPA – Single Step

Using the pgfortran command-level compiler driver, multiple source files can be compiled and linked into a single executable with one command. The following example compiles and links three source files:

% pgfortran -o file1.exe file1.f95 file2.f95 file3.f95
In actuality, the pgfortran driver executes several steps to produce the assembly code and object files corresponding to each source file, and subsequently to link the object files together into a single executable file. This command is roughly equivalent to the following commands performed individually:

% pgfortran -S -o file1.s file1.f95
% as -o file1.obj file1.s
% pgfortran -S -o file2.s file2.f95
% as -o file2.obj file2.s
% pgfortran -S -o file3.s file3.f95
% as -o file3.obj file3.s
% pgfortran -o file1.exe file1.obj file2.obj file3.obj
If any of the three source files is edited, the executable can be rebuilt with the same command line:

% pgfortran -o file1.exe file1.f95 file2.f95 file3.f95
Note

This always works as intended, but has the side-effect of recompiling all of the source files, even if only one has changed. For applications with a large number of source files, this can be time-consuming and inefficient.
Building a Program Without IPA - Several Steps

It is also possible to use individual pgfortran commands to compile each source file into a corresponding object file, and one to link the resulting object files into an executable:

% pgfortran -c file1.f95
% pgfortran -c file2.f95
% pgfortran -c file3.f95
% pgfortran -o file1.exe file1.obj file2.obj file3.obj
The pgfortran driver invokes the compiler and assembler as required to process each source file, and invokes the linker for the final link command. If you modify one of the source files, the executable can be rebuilt by compiling just that file and then relinking:

% pgfortran -c file1.f95
% pgfortran -o file1.exe file1.obj file2.obj file3.obj
Building a Program Without IPA Using Make

The program compilation and linking process can be simplified greatly using the make utility on systems where it is supported. Suppose you create a makefile containing the following lines:

file1.exe: file1.obj file2.obj file3.obj
	pgfortran $(OPT) -o file1.exe file1.obj file2.obj file3.obj
file1.obj: file1.f95
	pgfortran $(OPT) -c file1.f95
file2.obj: file2.f95
	pgfortran $(OPT) -c file2.f95
file3.obj: file3.f95
	pgfortran $(OPT) -c file3.f95
It is then possible to type a single make command: % make
The make utility determines which object files are out of date with respect to their corresponding source files, and invokes the compiler to recompile only those source files and to relink the executable. If you subsequently edit one or more source files, the executable can be rebuilt with the minimum number of recompilations using the same single make command.
Building a Program with IPA

Interprocedural analysis and optimization (IPA) by the PGI compilers alters the standard and make utility command-level interfaces as little as possible. IPA occurs in three phases:

• Collection: Create a summary of each function or procedure, collecting the useful information for interprocedural optimizations. This is done during the compile step if the –Mipa switch is present on the command line; summary information is collected and stored in the object file.

• Propagation: Process all the object files to propagate the interprocedural summary information across function and file boundaries. This is done during the link step, when all the object files are combined, if the –Mipa switch is present on the link command line.

• Recompile/Optimization: Recompile each of the object files with the propagated interprocedural information, producing a specialized object file. This process is also performed during the link step when the –Mipa switch is present on the link command line.

When linking with –Mipa, the PGI compilers automatically regenerate IPA-optimized versions of each object file, essentially recompiling each file. If there are IPA-optimized objects from a previous build, the compilers will minimize the recompile time by reusing those objects if they are still valid. They will still be valid if the IPA-optimized object is newer than the original object file, and the propagated IPA information for that file has not changed since it was optimized.
After each object file has been recompiled, the regular linker is invoked to build the application with the IPA-optimized object files. The IPA-optimized object files are saved in the same directory as the original object files, for use in subsequent program builds.
Building a Program with IPA - Single Step

By adding the –Mipa command line switch, several source files can be compiled and linked with interprocedural optimizations with one command:

% pgfortran -Mipa=fast -o file1.exe file1.f95 file2.f95 file3.f95
Just like compiling without –Mipa, the driver executes several steps to produce the assembly and object files to create the executable:

% pgfortran -Mipa=fast -S -o file1.s file1.f95
% as -o file1.obj file1.s
% pgfortran -Mipa=fast -S -o file2.s file2.f95
% as -o file2.obj file2.s
% pgfortran -Mipa=fast -S -o file3.s file3.f95
% as -o file3.obj file3.s
% pgfortran -Mipa=fast -o file1.exe file1.obj file2.obj file3.obj
In the last step, an IPA linker is invoked to read all the IPA summary information and perform the interprocedural propagation. The IPA linker reinvokes the compiler on each of the object files to recompile them with interprocedural information. This creates three new objects with mangled names:

file1_ipa5_file1.exe.obj
file2_ipa5_file1.exe.obj
file3_ipa5_file1.exe.obj
The system linker is then invoked to link these IPA-optimized objects into the final executable. Later, if one of the three source files is edited, the executable can be rebuilt with the same command line:

% pgfortran -Mipa=fast -o file1.exe file1.f95 file2.f95 file3.f95
This will work, but again has the side-effect of compiling each source file, and recompiling each object file at link time.
Building a Program with IPA - Several Steps

Just by adding the –Mipa command-line switch, it is possible to use individual pgfortran commands to compile each source file, followed by a command to link the resulting object files into an executable:

% pgfortran -Mipa=fast -c file1.f95
% pgfortran -Mipa=fast -c file2.f95
% pgfortran -Mipa=fast -c file3.f95
% pgfortran -Mipa=fast -o file1.exe file1.obj file2.obj file3.obj
The pgfortran driver invokes the compiler and assembler as required to process each source file, and invokes the IPA linker for the final link command. If you modify one of the source files, the executable can be rebuilt by compiling just that file and then relinking:

% pgfortran -Mipa=fast -c file1.f95
% pgfortran -Mipa=fast -o file1.exe file1.obj file2.obj file3.obj
When the IPA linker is invoked, it determines that the IPA-optimized object for file1.obj (file1_ipa5_a.out.obj) is stale, since it is older than the object file1.obj, and hence will need to be rebuilt, and will reinvoke the compiler to generate it. In addition, depending on the nature of the changes to the source file file1.f95, the interprocedural optimizations previously performed for file2 and file3 may now be inaccurate. For instance, IPA may have propagated a constant argument value in a call from a function in file1.f95 to a function in file2.f95; if the value of the argument has changed, any optimizations based on that constant value are invalid. The IPA linker determines which of any previously created IPA-optimized objects need to be regenerated, and reinvokes the compiler as appropriate to regenerate them. Only those objects that are stale or which have new or different IPA information will be regenerated, which saves on compile time.
Building a Program with IPA Using Make

As in the previous two sections, programs can be built with IPA using the make utility. Just add the command-line switch –Mipa, as shown here:

OPT=-Mipa=fast
file1.exe: file1.obj file2.obj file3.obj
	pgfortran $(OPT) -o file1.exe file1.obj file2.obj file3.obj
file1.obj: file1.f95
	pgfortran $(OPT) -c file1.f95
file2.obj: file2.f95
	pgfortran $(OPT) -c file2.f95
file3.obj: file3.f95
	pgfortran $(OPT) -c file3.f95
A single make command invokes the compiler to generate any object files that are out-of-date, then invokes pgfortran to link the objects into the executable; at link time, pgfortran calls the IPA linker to regenerate any stale or invalid IPA-optimized objects.

% make
Questions about IPA

1. Why is the object file so large?

An object file created with –Mipa contains several additional sections. One is the summary information used to drive the interprocedural analysis. In addition, the object file contains the compiler internal representation of the source file, so the file can be recompiled at link time with interprocedural optimizations. There may be additional information when inlining is enabled. The total size of the object file may be 5-10 times its original size. The extra sections are not added to the final executable.

2. What if I compile with –Mipa and link without –Mipa?

The PGI compilers generate a legal object file, even when the source file is compiled with –Mipa. If you compile with –Mipa and link without –Mipa, the linker is invoked on the original object files. A legal executable will be generated; while this will not have the benefit of interprocedural optimizations, any other optimizations will apply.

3. What if I compile without –Mipa and link with –Mipa?

At link time, the IPA linker must have summary information about all the functions or routines used in the program. This information is created only when a file is compiled with –Mipa. If you compile a file without –Mipa and then try to get interprocedural optimizations by linking with –Mipa, the IPA linker will issue a message that some routines have no IPA summary information, and will proceed to run the system linker using the original object files. If some files were compiled with –Mipa and others were not, it will determine the safest approximation of the IPA summary information for those files not compiled with –Mipa, and use that to recompile the other files using interprocedural optimizations.

4. Can I build multiple applications in the same directory with –Mipa?

Yes. Suppose you have three source files: main1.f95, main2.f95, and sub.f95, where sub.f95 is shared between the two applications. Suppose you build the first application with –Mipa, using this command:

% pgfortran -Mipa=fast -o app1 main1.f95 sub.f95
Then the IPA linker creates two IPA-optimized object files and uses them to build the first application:

main1_ipa4_app1.exe.oobj
sub_ipa4_app1.exe.oobj
Now suppose you build the second application using this command:

% pgfortran -Mipa=fast -o app2 main2.f95 sub.f95
The IPA linker creates two more IPA-optimized object files:

main2_ipa4_app2.exe.oobj
sub_ipa4_app2.exe.oobj
Note: There are now three object files for sub.f95: the original sub.obj, and two IPA-optimized objects, one for each application in which it appears.

5. How is the mangled name for the IPA-optimized object files generated?

The mangled name has '_ipa' appended, followed by the decimal number of the length of the executable file name, followed by an underscore and the executable file name itself. The suffix is changed to .oobj so that linking *.obj does not pull in the IPA-optimized objects. If the IPA linker determines that the file would not benefit from any interprocedural optimizations, it does not have to recompile the file at link time and uses the original object.
Profile-Feedback Optimization using –Mpfi/–Mpfo

The PGI compilers support many common profile-feedback optimizations, including semi-invariant value optimizations and block placement. These are performed under control of the –Mpfi/–Mpfo command-line options.

When invoked with the –Mpfi option, the PGI compilers instrument the generated executable for collection of profile and data feedback information. This information can be used in subsequent compilations that include the –Mpfo optimization option. –Mpfi must be used at both compile-time and link-time. Programs compiled with –Mpfi include extra code to collect run-time statistics and write them out to a trace file. When the resulting program is executed, a profile feedback trace file pgfi.out is generated in the current working directory.
Note Programs compiled and linked with –Mpfi execute more slowly due to the instrumentation and data collection overhead. You should use executables compiled with –Mpfi only for execution of training runs.
When invoked with the –Mpfo option, the PGI compilers use data from a pgfi.out profile feedback tracefile to enable or enhance certain performance optimizations. Use of this option requires the presence of a pgfi.out trace file in the current working directory.
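Taken together, –Mpfi and –Mpfo imply a three-step build cycle, sketched below with an illustrative program name:

```
% pgfortran -Mpfi -o prog.exe prog.f95    # instrumented build (compile and link)
% prog.exe                                # training run; writes pgfi.out
% pgfortran -Mpfo -o prog.exe prog.f95    # optimized rebuild reading pgfi.out
```

The training run should exercise representative inputs, since the optimizations applied by –Mpfo are driven by the statistics recorded in pgfi.out.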
Default Optimization Levels

The following table shows the interaction between the –O, –g, and –M<opt> options. In the table, level can be 0, 1, 2, 3 or 4, and <opt> can be vect, concur, unroll or ipa. The default optimization level is dependent upon these command-line options.

Table 7.1. Optimization and –O, –g and –M<opt> Options

Optimize Option    Debug Option    –M<opt> Option    Optimization Level
none               none            none              1
none               none            –M<opt>           2
none               –g              none              0
–O                 none or –g      none              2
–O<level>          none or –g      none              level
–O<level> <= 2     none or –g      –M<opt>           2

Code that is not optimized yet compiled using the option –O0 can be significantly slower than code generated at other optimization levels. The –M<opt> option, where <opt> is vect, concur, unroll or ipa, sets the optimization level to 2 if no –O options are supplied. Both the –fast and the –fastsse options set the optimization level to a target-dependent optimization level if no –O options are supplied.
Local Optimization Using Directives

Command-line options let you specify optimizations for an entire source file. Directives supplied within a Fortran source file provide information to the compiler and alter the effects of certain command-line options or the default behavior of the compiler. (Many directives have a corresponding command-line option.)

While a command-line option affects the entire source file that is being compiled, directives let you do the following:

• Apply, or disable, the effects of a particular command-line option to selected subprograms or to selected loops in the source file (for example, an optimization).

• Globally override command-line options.

• Tune selected routines or loops based on your knowledge or on information obtained through profiling.

Chapter 11, “Using Directives” provides details on how to add directives and pragmas to your source files.
Execution Timing and Instruction Counting

As this chapter describes, once you have a program that compiles, executes and gives correct results, you may optimize your code for execution efficiency.
Selecting the correct optimization level requires some thought and may require that you compare several optimization levels before arriving at the best solution. To compare optimization levels, you need to measure the execution time for your program. There are several approaches you can take for timing execution.

• You can use shell commands that provide execution time statistics.

• You can include function calls in your code that provide timing information.

• You can profile sections of code.

Timing functions available with the PGI compilers include these:

• 3F timing routines

• The SECNDS pre-declared function in PGF77, PGF95, or PGFORTRAN

• The SYSTEM_CLOCK or CPU_CLOCK intrinsics in PGF95

In general, when timing a program, you should try to eliminate or reduce the amount of system-level activities such as I/O, program loading, and task switching. The following example shows a fragment that indicates how to use SYSTEM_CLOCK effectively within an F90/F95 program unit.

Example 7.4. Using SYSTEM_CLOCK code fragment

      . . .
      integer :: nprocs, hz, clock0, clock1
      real :: time
      integer, allocatable :: t(:)
#if defined (F95)
      allocate (t(number_of_processors()))
#elif defined (_OPENMP)
      allocate (t(OMP_GET_NUM_THREADS()))
#else
      allocate (t(1))
#endif
      call system_clock (count_rate=hz)
      call system_clock (count=clock0)
      < do work >
      call system_clock (count=clock1)
      t = (clock1 - clock0)
      time = real (sum(t)) / (real(hz) * size(t))
      . . .
Chapter 8. Using Function Inlining

Function inlining replaces a call to a function or a subroutine with the body of the function or subroutine. This can speed up execution by eliminating parameter passing and function/subroutine call and return overhead. It also allows the compiler to optimize the function with the rest of the code. Note that using function inlining indiscriminately can result in much larger code size and no increase in execution speed.

The PGI compilers provide two categories of inlining:

• Automatic inlining - During the compilation process, a hidden pass precedes the compilation pass. This hidden pass extracts functions that are candidates for inlining. The inlining of functions occurs as the source files are compiled.

• Inline libraries - You create inline libraries, for example using the pgfortran compiler driver and the –o and –Mextract options. There is no hidden extract pass but you must ensure that any files that depend on the inline library use the latest version of the inline library.

There are important restrictions on inlining. Inlining only applies to certain types of functions. Refer to “Restrictions on Inlining,” on page 85 for more details on function inlining limitations.

This chapter describes how to use the following options related to function inlining:

–Mextract
–Minline
–Mrecursive
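As a minimal illustration of what inlining does (our own sketch, not actual compiler output), consider a one-line function and its call site:

```fortran
! Before inlining: the call incurs argument passing and
! call/return overhead.
real function add1(x)
   real :: x
   add1 = x + 1.0
end function

! Call site:          y = add1(z)
! After inlining, the call is conceptually replaced by the body:
!                     y = z + 1.0
! The compiler can then optimize this statement together with
! the surrounding code.
```

For a function this small, the inlined form is both faster and no larger than the call; for larger functions the code-size trade-off noted above applies.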
Invoking Function Inlining

To invoke the function inliner, use the -Minline option. If you do not specify an inline library, the compiler performs a special prepass on all source files named on the compiler command line before it compiles any of them. This pass extracts functions that meet the requirements for inlining and puts them in a temporary inline library for use by the compilation pass.

In PVF, inlining can be turned on using the Inlining property in the Fortran | Optimization property page. For more advanced configuration of inlining, use the Fortran | Command Line property page. For more information on these property pages, refer to “Fortran Property Pages,” on page 331.
Several -Minline suboptions let you determine the selection criteria for functions to be inlined. These suboptions include:

except:func
   Inlines all eligible functions except func, a function in the source text. You can use a comma-separated list to specify multiple functions.

[name:]func
   Inlines all functions in the source text whose name matches func. You can use a comma-separated list to specify multiple functions.

[size:]n
   Inlines functions with a statement count less than or equal to n, the specified size.

   Note: The size n may not exactly equal the number of statements in a selected function; the size parameter is merely a rough gauge.

levels:n
   Inlines n levels of function calls. The default number is one (1). Using a level greater than one indicates that function calls within inlined functions may be replaced with inlined code. This approach allows the function inliner to automatically perform a sequence of inline and extract processes.

[lib:]file.ext
   Instructs the inliner to inline the functions within the library file file.ext. If no inline library is specified, functions are extracted from a temporary library created during an extract prepass.

   Tip: Create the library file using the -Mextract option.

If you specify both a function name and a size n, the compiler inlines functions that match the function name or have n or fewer statements. If a name is used without a keyword, then a name with a period is assumed to be an inline library and a name without a period is assumed to be a function name. If a number is used without a keyword, the number is assumed to be a size.

In the following example, the compiler inlines functions with fewer than approximately 100 statements in the source file myprog.f and writes the executable code in the output file myprog.exe.

$ pgfortran -Minline=size:100 myprog.f -o myprog
Refer to “–M Options by Category,” on page 213 for more information on the -Minline options.
Using an Inline Library

If you specify one or more inline libraries on the command line with the -Minline option, the compiler does not perform an initial extract pass. The compiler selects functions to inline from the specified inline library. If you also specify a size or function name, all functions in the inline library meeting the selection criteria are selected for inline expansion at points in the source text where they are called.
If you do not specify a function name or a size limitation for the -Minline option, the compiler inlines every function in the inline library that matches a function in the source text. In the following example, the compiler inlines the function proc from the inline library lib.il and writes the executable code in the output file myprog.exe.

$ pgfortran -Minline=name:proc,lib:lib.il myprog.f -o myprog
The following command line is equivalent to the preceding one, except that it does not use the keywords name: and lib:. You typically use keywords to avoid name conflicts when you use an inline library name that does not contain a period. Otherwise, without the keywords, a period informs the compiler that the file on the command line is an inline library.

$ pgfortran -Minline=proc,lib.il myprog.f -o myprog
Creating an Inline Library

You can create or update an inline library using the -Mextract command-line option. If you do not specify selection criteria with the -Mextract option, the compiler attempts to extract all subprograms. Several -Mextract options let you determine the selection criteria for creating or updating an inline library. These selection criteria include:

func
   Extracts the function func. You can use a comma-separated list to specify multiple functions.

[name:]func
   Extracts the functions whose name matches func, a function in the source text.

[size:]n
   Limits the size of the extracted functions to functions with a statement count less than or equal to n, the specified size.

   Note: The size n may not exactly equal the number of statements in a selected function; the size parameter is merely a rough gauge.

[lib:]ext.lib
   Stores the extracted information in the library directory ext.lib. If no inline library is specified, functions are extracted to a temporary library created during an extract prepass for use during the compilation stage.

When you use the -Mextract option, only the extract phase is performed; the compile and link phases are not performed. The output of an extract pass is a library of functions available for inlining. This output is placed in the inline library file specified on the command line with the –o filename specification. If the library file exists, new information is appended to it. If the file does not exist, it is created. You can use a command similar to the following:

$ pgfortran -Mextract=lib:lib.il myfunc.f
You can use the -Minline option with the -Mextract option. In this case, the extracted library of functions can have other functions inlined into the library. Using both options enables you to obtain more than one 83
level of inlining. In this situation, if you do not specify a library with the –Minline option, the inline process consists of two extract passes. The first pass is a hidden pass implied by the –Minline option, during which the compiler extracts functions and places them into a temporary library. The second pass uses the results of the first pass but puts its results into the library that you specify with the –o option.
Working with Inline Libraries
An inline library is implemented as a directory, with each inline function in the library stored as a file using an encoded form of the inlinable function. A special file named TOC in the inline library directory serves as a table of contents for the inline library. This is a printable, ASCII file which you can examine to locate information about the library contents, such as names and sizes of functions, the source file from which they were extracted, the version number of the extractor which created the entry, and so on.
Libraries and their elements can be manipulated using ordinary system commands.
• Inline libraries can be copied or renamed.
• Elements of libraries can be deleted or copied from one library to another.
• The ls or dir command can be used to determine the last-change date of a library entry.
Dependencies
When a library is created or updated using one of the PGI compilers, the last-change date of the library directory is updated. This allows a library to be listed as a dependence in a makefile or a PVF property and ensures that the necessary compilations are performed when a library is changed.
Updating Inline Libraries - Makefiles
If you use inline libraries you must be certain that they remain up-to-date with the source files into which they are inlined. One way to assure inline libraries are updated is to include them in a makefile. The makefile fragment in the following example assumes the file utils.f contains a number of small functions used in the files parser.f and alloc.f. This portion of the makefile:
• Maintains the inline library utils.il.
• Updates the library whenever you change utils.f or one of the include files it uses.
• Compiles parser.f and alloc.f whenever you update the library.
Example 8.1. Sample Makefile
SRC = mydir
FC = pgfortran
FFLAGS = -O2
main.o: $(SRC)/main.f $(SRC)/global.h
	$(FC) $(FFLAGS) -c $(SRC)/main.f
utils.o: $(SRC)/utils.f $(SRC)/global.h $(SRC)/utils.h
	$(FC) $(FFLAGS) -c $(SRC)/utils.f
utils.il: $(SRC)/utils.f $(SRC)/global.h $(SRC)/utils.h
	$(FC) $(FFLAGS) -Mextract=15 -o utils.il utils.f
parser.o: $(SRC)/parser.f $(SRC)/global.h utils.il
	$(FC) $(FFLAGS) -Minline=utils.il -c $(SRC)/parser.f
alloc.o: $(SRC)/alloc.f $(SRC)/global.h utils.il
	$(FC) $(FFLAGS) -Minline=utils.il -c $(SRC)/alloc.f
myprog: main.o utils.o parser.o alloc.o
	$(FC) -o myprog main.o utils.o parser.o alloc.o
Error Detection during Inlining You can specify the –Minfo=inline option to request inlining information from the compiler when you invoke the inliner. For example: $ pgfortran -Minline=mylib.il -Minfo=inline myext.f
Examples Assume the program dhry consists of a single source file dhry.f. The following command line builds an executable file for dhry in which proc7 is inlined wherever it is called: $ pgfortran dhry.f -Minline=proc7
The following command lines build an executable file for dhry in which proc7 plus any functions of approximately 10 or fewer statements are inlined (one level only).
Note The specified functions are inlined only if they are previously placed in the inline library, temp.il, during the extract phase. $ pgfortran dhry.f -Mextract=lib:temp.il $ pgfortran dhry.f -Minline=10,proc7,temp.il
Using the same source file dhry.f, the following example builds an executable for dhry in which all functions of roughly ten or fewer statements are inlined. Two levels of inlining are performed. This means that if function A calls function B, and B calls C, and both B and C are inlinable, then the version of B which is inlined into A will have had C inlined into it. $ pgfortran dhry.f -Minline=size:10,levels:2
Restrictions on Inlining
The following Fortran subprograms cannot be extracted:
• Main or BLOCK DATA programs.
• Subprograms containing alternate return, assigned GO TO, DATA, SAVE, or EQUIVALENCE statements.
• Subprograms containing FORMAT statements.
• Subprograms containing multiple entries.
A Fortran subprogram is not inlined if any of the following applies: • It is referenced in a statement function. • A common block mismatch exists; in other words, the caller must contain all common blocks specified in the callee, and elements of the common blocks must agree in name, order, and type (except that the caller's common block can have additional members appended to the end of the common block). • An argument mismatch exists; in other words, the number and type (size) of actual and formal parameters must be equal. • A name clash exists, such as a call to subroutine xyz in the extracted subprogram and a variable named xyz in the caller.
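To make the common block rule concrete, here is a minimal sketch (the subprogram and common block names are invented for illustration):

```fortran
c     bump can be inlined into a caller only if the caller also
c     declares common block /vals/ with matching leading members
      subroutine bump
      common /vals/ a, b
      real a, b
      a = a + b
      end

c     eligible caller: /vals/ matches in name, order, and type;
c     the extra trailing member is permitted
      subroutine okcall
      common /vals/ a, b, extra
      real a, b, extra
      call bump
      end

c     ineligible caller: /vals/ is absent, so the common block
c     check fails and bump is invoked as an ordinary call
      subroutine nocall
      call bump
      end
```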
Chapter 9. Using OpenMP
The PGF77, PGF95, and PGFORTRAN Fortran compilers support the OpenMP Fortran Application Program Interface. OpenMP is a specification for a set of compiler directives, an application programming interface (API), and a set of environment variables that can be used to specify shared-memory parallelism in Fortran programs. OpenMP may be used to obtain most of the parallel performance you can expect from your code, or it may serve as a stepping stone to parallelizing an entire application with MPI. This chapter provides information on OpenMP as it is supported by PGI compilers. Use the -mp compiler switch to enable processing of the OpenMP directives listed in this chapter. You must also link with the -mp switch to link in the OpenMP runtime library. This chapter describes how to use the following option related to OpenMP: –mp
OpenMP Overview Let’s look at the OpenMP shared-memory parallel programming model and some common OpenMP terminology.
OpenMP Shared-Memory Parallel Programming Model
The OpenMP shared-memory programming model is a collection of compiler directives, library routines, and environment variables that can be used to specify shared-memory parallelism in Fortran programs.
Fortran directives
Allow users to mark sections of code that can be executed in parallel when the code is compiled using the –mp switch. When this switch is not present, the compiler ignores these directives. OpenMP Fortran directives begin with !$OMP, C$OMP, or *$OMP, beginning in column 1. This format allows the user to have a single source for use with or without the –mp switch, as these lines are then merely viewed as comments when –mp is not present or the compilers are not capable of handling directives.
These directives allow the user to create task, loop, and parallel section work-sharing constructs and synchronization constructs. They also allow the user to define how data is shared or copied between parallel threads of execution. Fortran directives include a parallel region construct for writing coarse grain SPMD programs, worksharing constructs which specify that DO loop iterations should be split among the available threads of execution, and synchronization constructs.
Note The data environment is controlled either by using clauses on the directives or with additional directives. Run-time library routines Are available to query the parallel run-time environment, for example to determine how many threads are participating in execution of a parallel region. Environment variables Are available to control the execution behavior of parallel programs. For more information on OpenMP, see www.openmp.org. Macro substitution C and C++ omp pragmas are subject to macro replacement after #pragma omp.
Terminology
For OpenMP 3.0 there are a number of terms for which it is useful to have common definitions.
Thread
    An execution entity with a stack and associated static memory, called threadprivate memory.
    • An OpenMP thread is a thread that is managed by the OpenMP runtime system.
    • A thread-safe routine is a routine that performs the intended function even when executed concurrently, that is, by more than one thread.
Region
    All code encountered during a specific instance of the execution of a given construct or of an OpenMP library routine. A region includes any code in called routines as well as any implicit code introduced by the OpenMP implementation. Regions are nested if one region is (dynamically) enclosed by another region, that is, a region is encountered during the execution of another region. PGI currently does not support nested parallel regions.
Parallel region
    In OpenMP 3.0 there is a distinction between a parallel region and an active parallel region. A parallel region can be either inactive or active.
    • An inactive parallel region is executed by a single thread.
    • An active parallel region is a parallel region that is executed by a team consisting of more than one thread.
Note
The definition of an active parallel region changed between OpenMP 2.5 and OpenMP 3.0. In OpenMP 2.5, the definition was a parallel region whose IF clause evaluates to true. To examine the significance of this change, look at the following example:
      program test
      logical omp_in_parallel
!$omp parallel
      print *, omp_in_parallel()
!$omp end parallel
      stop
      end
Suppose we run this program with OMP_NUM_THREADS set to one. In OpenMP 2.5, this program yields T, while in OpenMP 3.0 it yields F: execution is not occurring by more than one thread. This change in definition may mean that previous programs require modification. PGI currently does not support nested parallel regions, so there is currently only one level of active parallel regions.
Task
A specific instance of executable code and its data environment, generated when a thread encounters a task construct or a parallel construct.
OpenMP Example
Look at the following simple OpenMP example involving loops.
Example 9.1. OpenMP Loop Example
      PROGRAM MAIN
      INTEGER I, N, OMP_GET_THREAD_NUM
      REAL*8 V(1000), GSUM, LSUM
      GSUM = 0.0D0
      N = 1000
      DO I = 1, N
         V(I) = DBLE(I)
      ENDDO
!$OMP PARALLEL PRIVATE(I,LSUM) SHARED(V,GSUM,N)
      LSUM = 0.0D0
!$OMP DO
      DO I = 1, N
         LSUM = LSUM + V(I)
      ENDDO
!$OMP END DO
!$OMP CRITICAL
      print *, "Thread ",OMP_GET_THREAD_NUM()," local sum: ",LSUM
      GSUM = GSUM + LSUM
!$OMP END CRITICAL
!$OMP END PARALLEL
      PRINT *, "Global Sum: ",GSUM
      STOP
      END
If you execute this example with the environment variable OMP_NUM_THREADS set to 4, then the output looks similar to this:
Thread 0 local sum: 31375.00000000000
Thread 1 local sum: 93875.00000000000
Thread 2 local sum: 156375.0000000000
Thread 3 local sum: 218875.0000000000
Global Sum: 500500.0000000000
FORTRAN STOP
Task Overview
Every part of an OpenMP program is part of a task. A task, whose execution can be performed immediately or delayed, has these characteristics:
• Code to execute
• A data environment - that is, it owns its data
• An assigned thread that executes the code and uses the data.
There are two activities associated with tasks: packaging and execution.
• Packaging: Each encountering thread packages a new instance of a task - code and data.
• Execution: Some thread in the team executes the task at some later time.
In the following sections, we use this terminology:
Task
    The package of code and instructions for allocating data created when a thread encounters a task construct. A task can be implicit or explicit.
    • An explicit task is a task generated when a task construct is encountered during execution.
    • An implicit task is a task generated by the implicit parallel region or generated when a parallel construct is encountered during execution.
Task construct
    A task directive plus a structured block.
Task region
    The dynamic sequence of instructions produced by the execution of a task by a thread.
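The packaging/execution split described above can be sketched with an explicit task construct. This is a minimal illustrative example (the loop bound and output text are invented; it requires compilation with -mp):

```fortran
      program taskdemo
      integer i
!$omp parallel
!$omp single
c     the single encountering thread packages one explicit task
c     per iteration; any thread in the team may execute them later
      do i = 1, 4
!$omp task firstprivate(i)
      print *, 'executing task ', i
!$omp end task
      enddo
c     wait for completion of all child tasks generated above
!$omp taskwait
!$omp end single
!$omp end parallel
      end
```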
Fortran Parallelization Directives Parallelization directives are comments in a program that are interpreted by the PGI Fortran compilers when the option –mp is specified on the command line. The form of a parallelization directive is: sentinel directive_name [clauses]
With the exception of the SGI-compatible DOACROSS directive, the sentinel must comply with these rules:
• Be one of these: !$OMP, C$OMP, or *$OMP.
• Must start in column 1 (one).
• Must appear as a single word without embedded white space.
• The sentinel marking a DOACROSS directive is C$.
The directive_name can be any of the directives listed in Table 9.1, “Directive Summary Table,” on page 92. The valid clauses depend on the directive. Chapter 19, “OpenMP Reference Information” provides a list of directives and their clauses, their usage, and examples.
In addition to the sentinel rules, the directive must also comply with these rules:
• Standard Fortran syntax restrictions, such as line length, case insensitivity, and so on, apply to the directive line.
• Initial directive lines must have a space or zero in column six.
• Continuation directive lines must have a character other than a space or a zero in column six. Continuation lines for C$DOACROSS directives are specified using the C$& sentinel.
• Directives which are presented in pairs must be used in pairs.
Clauses associated with directives have these characteristics:
• The order in which clauses appear in the parallelization directives is not significant.
• Commas separate clauses within the directives, but commas are not allowed between the directive name and the first clause.
• Clauses on directives may be repeated as needed, subject to the restrictions listed in the description of each clause.
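As a fixed-form illustration of the initial/continuation rules (the array names are invented), the sentinel occupies columns 1 through 5; column 6 is blank on the initial line and non-blank on the continuation line:

```fortran
!$OMP PARALLEL DO SHARED(A,B,N)
!$OMP&PRIVATE(I)
      DO I = 1, N
         A(I) = 2.0 * B(I)
      ENDDO
```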
Directive Recognition The compiler option –mp enables recognition of the parallelization directives. The use of this option also implies: –Mreentrant
Local variables are placed on the stack and optimizations, such as -Mnoframe, that may result in nonreentrant code are disabled. –Miomutex
For directives, critical sections are generated around Fortran I/O statements. In PVF, you set the –mp option by using the Process OpenMP Directives property in the Fortran | Language property page. For more information on these property pages, refer to “Fortran Property Pages,” on page 331.
Directive Summary Table
The following table provides a brief summary of the directives that PGI supports. For complete information on these statements and examples, refer to Chapter 19, “OpenMP Reference Information”.
Table 9.1. Directive Summary Table
ATOMIC
    Semantically equivalent to enclosing a single statement in the CRITICAL...END CRITICAL directive. Note: Only certain statements are allowed.
BARRIER
    Synchronizes all threads at a specific point in a program so that all threads complete work to that point before any thread continues.
CRITICAL ... END CRITICAL
    Defines a subsection of code within a parallel region, a critical section, which is executed one thread at a time.
DO...END DO
    Provides a mechanism for distribution of loop iterations across the available threads in a parallel region.
C$DOACROSS
    Specifies that the compiler should parallelize the loop to which it applies, even though that loop is not contained within a parallel region.
FLUSH
    When this appears, all processor-visible data items, or, when a list is present (FLUSH [list]), only those specified in the list, are written to memory, thus ensuring that all the threads in a team have a consistent view of certain objects in memory.
MASTER ... END MASTER
    Designates code that executes on the master thread and that is skipped by the other threads.
ORDERED
    Defines a code block that is executed by only one thread at a time, and in the order of the loop iterations; this makes the ordered code block sequential, while allowing parallel execution of statements outside the code block.
PARALLEL DO
    Enables you to specify which loops the compiler should parallelize.
PARALLEL ... END PARALLEL
    Supports a fork/join execution model in which a single thread executes all statements until a parallel region is encountered.
PARALLEL SECTIONS
    Defines a non-iterative work-sharing construct without the need to define an enclosing parallel region.
PARALLEL WORKSHARE ... END PARALLEL WORKSHARE
    Provides a short form method for including a WORKSHARE directive inside a PARALLEL construct.
SECTIONS ... END SECTIONS
    Defines a non-iterative work-sharing construct within a parallel region.
SINGLE ... END SINGLE
    Designates code that executes on a single thread and that is skipped by the other threads.
TASK
    Defines an explicit task.
TASKWAIT
    Specifies a wait on the completion of child tasks generated since the beginning of the current task.
THREADPRIVATE
    When a common block or variable that is initialized appears in this directive, each thread’s copy is initialized once prior to its first use.
WORKSHARE ... END WORKSHARE
    Provides a mechanism to effect parallel execution of non-iterative but implicitly data parallel constructs.
Directive Clauses
Some directives accept clauses that further allow a user to control the scope attributes of variables for the duration of the directive. Not all clauses are allowed on all directives, so the clauses that are valid are included with the description of each directive. The following table provides a brief summary of the clauses associated with OpenMP directives that PGI supports.
Table 9.2. Directive Clauses Summary Table
COLLAPSE (n)
    Applies to: DO...END DO; PARALLEL DO ... END PARALLEL DO; PARALLEL WORKSHARE
    Specifies how many loops are associated with the loop construct.
COPYIN (list)
    Applies to: PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE
    Allows threads to access the master thread's value for a threadprivate variable. You assign the same value to threadprivate variables for each thread in the team executing the parallel region. Then, for each variable specified, the value of the variable in the master thread of the team is copied to the threadprivate copies at the beginning of the parallel region.
COPYPRIVATE(list)
    Applies to: END SINGLE
    Specifies that one or more variables should be shared among all threads. This clause provides a mechanism to use a private variable to broadcast a value from one member of a team to the other members.
DEFAULT
    Applies to: PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE
    Specifies the behavior of unscoped variables in a parallel region, such as the data-sharing attributes of variables.
FIRSTPRIVATE(list)
    Applies to: DO; PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE; SECTIONS; SINGLE
    Specifies that each thread should have its own instance of a variable, and that each variable in the list should be initialized with the value of the original variable as it exists before the parallel construct.
IF()
    Applies to: PARALLEL ... END PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE
    Specifies whether a region should be executed in parallel or in serial.
LASTPRIVATE(list)
    Applies to: DO; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; SECTIONS
    Specifies that the enclosing context's version of the variable is set equal to the private version of whichever thread executes the final iteration of the loop construct.
NOWAIT
    Applies to: DO ... END DO; SECTIONS; SINGLE; WORKSHARE ... END WORKSHARE
    Overrides the barrier implicit in a directive.
NUM_THREADS
    Applies to: PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE
    Sets the number of threads in a thread team.
ORDERED
    Applies to: DO...END DO; PARALLEL DO ... END PARALLEL DO
    Required on a parallel DO statement if an ORDERED directive is used in the loop.
PRIVATE
    Applies to: DO; PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE; SECTIONS; SINGLE
    Specifies that each thread should have its own instance of a variable.
REDUCTION({operator | intrinsic} : list)
    Applies to: DO; PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE; SECTIONS
    Specifies that one or more variables that are private to each thread are the subject of a reduction operation at the end of the parallel region.
SCHEDULE(type [,chunk])
    Applies to: DO ... END DO; PARALLEL DO ... END PARALLEL DO
    Allows the user to specify the chunking method for parallelization. Work is assigned to threads in different manners depending on the scheduling type or chunk size used.
SHARED
    Applies to: PARALLEL; PARALLEL DO ... END PARALLEL DO; PARALLEL SECTIONS ... END PARALLEL SECTIONS; PARALLEL WORKSHARE
    Specifies that one or more variables should be shared among all threads. All threads within a team access the same storage area for shared variables.
UNTIED
    Applies to: TASK; TASKWAIT
    Specifies that any thread in the team can resume the task region after a suspension.
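A brief sketch showing the REDUCTION and SCHEDULE clauses together (a minimal example with invented data; it requires compilation with -mp):

```fortran
      program redsum
      integer i, n
      real*8 total, v(1000)
      n = 1000
      do i = 1, n
         v(i) = dble(i)
      enddo
      total = 0.0d0
c     each thread accumulates into a private copy of total;
c     the private copies are combined with + after the loop
!$omp parallel do reduction(+:total) schedule(static,100)
      do i = 1, n
         total = total + v(i)
      enddo
      print *, 'total = ', total
      end
```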
For complete information on these clauses, refer to the OpenMP documentation available on the World Wide Web.
Run-time Library Routines User-callable functions are available to the programmer to query and alter the parallel execution environment.
Note
Unlimited OpenMP thread counts are available in all PGI configurations. The number of threads is unlicensed in the OpenMP run-time libraries - up to the hard limit of 64 threads. The following table summarizes the run-time library calls.
Table 9.3. Run-time Library Routines Summary Run-time Library Routines with Examples omp_get_num_threads Returns the number of threads in the team executing the parallel region from which it is called. When called from a serial region, this function returns 1. A nested parallel region is the same as a single parallel region. By default, the value returned by this function is equal to the value of the environment variable OMP_NUM_THREADS or to the value set by the last previous call to omp_set_num_threads(). Fortran
integer function omp_get_num_threads()
omp_set_num_threads Sets the number of threads to use for the next parallel region. This subroutine can only be called from a serial region of code. If it is called from within a parallel region, or from within a subroutine that is called from within a parallel region, the results are undefined. Further, this subroutine has precedence over the OMP_NUM_THREADS environment variable. Fortran
subroutine omp_set_num_threads(scalar_integer_exp)
omp_get_thread_num Returns the thread number within the team. The thread number lies between 0 and omp_get_num_threads()-1. When called from a serial region, this function returns 0. A nested parallel region is the same as a single parallel region. Fortran
integer function omp_get_thread_num()
omp_get_ancestor_thread_num Returns, for a given nested level of the current thread, the thread number of the ancestor. Fortran
integer function omp_get_ancestor_thread_num(level)
integer level
omp_get_active_level Returns the number of enclosing active parallel regions enclosing the task that contains the call. PGI currently supports only one level of active parallel regions, so the return value currently is 1. Fortran
integer function omp_get_active_level()
omp_get_level Returns the number of parallel regions enclosing the task that contains the call. Fortran
integer function omp_get_level()
omp_get_max_threads Returns the maximum value that can be returned by calls to omp_get_num_threads(). If omp_set_num_threads() is used to change the number of threads, subsequent calls to omp_get_max_threads() return the new value. Further, this function returns the maximum value whether executing from a parallel or serial region of code.
Run-time Library Routines with Examples Fortran
integer function omp_get_max_threads()
omp_get_num_procs Returns the number of processors that are available to the program Fortran
integer function omp_get_num_procs()
omp_get_stack_size Returns the value of the OpenMP internal control variable that specifies the size that is used to create a stack for a newly created thread. This value may not be the size of the stack of the current thread. Fortran
! omp_get_stack_size
interface
   function omp_get_stack_size ()
      use omp_lib_kinds
      integer ( kind=OMP_STACK_SIZE_KIND ) :: omp_get_stack_size
   end function omp_get_stack_size
end interface
omp_set_stack_size Changes the value of the OpenMP internal control variable that specifies the size to be used to create a stack for a newly created thread. The integer argument specifies the stack size in kilobytes. The size of the stack of the current thread cannot be changed. In the PGI implementation, all OpenMP or auto-parallelization threads are created just prior to the first parallel region; therefore, only calls to omp_set_stack_size() that occur prior to the first region have an effect. Fortran
subroutine omp_set_stack_size(integer(KIND=OMP_STACK_SIZE_KIND))
omp_get_team_size Returns, for a given nested level of the current thread, the size of the thread team to which the ancestor belongs. Fortran
integer function omp_get_team_size (level)
integer level
omp_in_parallel Returns whether or not the call is within a parallel region. Returns .TRUE. if called from within a parallel region and .FALSE. if called outside of a parallel region. When called from within a parallel region that is serialized, for example in the presence of an IF clause evaluating .FALSE., the function returns .FALSE.. Fortran
logical function omp_in_parallel()
omp_set_dynamic Allows automatic dynamic adjustment of the number of threads used for execution of parallel regions. This function is recognized, but currently has no effect. Fortran
subroutine omp_set_dynamic(scalar_logical_exp)
Run-time Library Routines with Examples omp_get_dynamic Allows the user to query whether automatic dynamic adjustment of the number of threads used for execution of parallel regions is enabled. This function is recognized, but currently always returns .FALSE.. Fortran
logical function omp_get_dynamic()
omp_set_nested Allows enabling/disabling of nested parallel regions. This function is recognized, but currently has no effect. Fortran
subroutine omp_set_nested(nested)
logical nested
omp_get_nested Allows the user to query whether dynamic adjustment of the number of threads available for execution of parallel regions is enabled. This function is recognized, but currently always returns .FALSE.. Fortran
logical function omp_get_nested()
omp_set_schedule Sets the value of the run_sched_var. Fortran
subroutine omp_set_schedule(kind, modifier)
include 'omp_lib_kinds.h'
integer (kind=omp_sched_kind) kind
integer modifier
omp_get_schedule Retrieves the value of the run_sched_var. Fortran
subroutine omp_get_schedule(kind, modifier)
include 'omp_lib_kinds.h'
integer (kind=omp_sched_kind) kind
integer modifier
omp_get_wtime Returns the elapsed wall clock time, in seconds, as a DOUBLE PRECISION value. Times returned are per-thread times, and are not necessarily globally consistent across all threads. Fortran
double precision function omp_get_wtime()
omp_get_wtick Returns the resolution of omp_get_wtime(), in seconds, as a DOUBLE PRECISION value. Fortran
double precision function omp_get_wtick()
Run-time Library Routines with Examples omp_init_lock Initializes a lock associated with the variable lock for use in subsequent calls to lock routines. The initial state of the lock is unlocked. If the variable is already associated with a lock, it is illegal to make a call to this routine. Fortran
subroutine omp_init_lock(lock)
include 'omp_lib_kinds.h'
integer(kind=omp_lock_kind) lock
omp_destroy_lock Disassociates a lock associated with the variable. Fortran
subroutine omp_destroy_lock(lock)
include 'omp_lib_kinds.h'
integer(kind=omp_lock_kind) lock
omp_set_lock Causes the calling thread to wait until the specified lock is available. The thread gains ownership of the lock when it is available. If the variable is not already associated with a lock, it is illegal to make a call to this routine. Fortran
subroutine omp_set_lock(lock)
include 'omp_lib_kinds.h'
integer(kind=omp_lock_kind) lock
omp_unset_lock Causes the calling thread to release ownership of the lock associated with the variable. If the variable is not already associated with a lock, it is illegal to make a call to this routine. Fortran
subroutine omp_unset_lock(lock)
include 'omp_lib_kinds.h'
integer(kind=omp_lock_kind) lock
omp_test_lock Causes the calling thread to try to gain ownership of the lock associated with the variable. The function returns .TRUE. if the thread gains ownership of the lock; otherwise it returns .FALSE.. If the variable is not already associated with a lock, it is illegal to make a call to this routine. Fortran
logical function omp_test_lock(lock)
include 'omp_lib_kinds.h'
integer(kind=omp_lock_kind) lock
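The lock routines above can be combined as in this minimal sketch (the counter is invented for illustration; it requires compilation with -mp):

```fortran
      program lockdemo
      include 'omp_lib_kinds.h'
      integer(kind=omp_lock_kind) lck
      integer count
      count = 0
      call omp_init_lock(lck)
!$omp parallel shared(count, lck)
c     the lock serializes the update so that no increment is lost;
c     each thread in the team adds one to the shared counter
      call omp_set_lock(lck)
      count = count + 1
      call omp_unset_lock(lck)
!$omp end parallel
      call omp_destroy_lock(lck)
      print *, 'increments: ', count
      end
```

After the parallel region, count equals the number of threads in the team.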
Environment Variables You can use OpenMP environment variables to control the behavior of OpenMP programs. These environment variables allow you to set and pass information that can alter the behavior of directives.
Note
To set the environment for programs run from within PVF, whether or not they are run in the debugger, use the environment properties available in the “Debugging Property Page,” on page 328.
The following summary table is a quick reference for the OpenMP environment variables that PGI uses. For detailed descriptions of each of these variables, refer to “OpenMP Environment Variables,” on page 262.
Table 9.4. OpenMP-related Environment Variable Summary Table
OMP_DYNAMIC (default: FALSE)
    Currently has no effect. Typically enables (TRUE) or disables (FALSE) the dynamic adjustment of the number of threads.
OMP_NESTED (default: FALSE)
    Currently has no effect. Typically enables (TRUE) or disables (FALSE) nested parallelism.
OMP_MAX_ACTIVE_LEVELS
    Currently has no effect. Typically specifies the maximum number of nested parallel regions.
OMP_NUM_THREADS (default: 1)
    Specifies the number of threads to use during execution of parallel regions.
OMP_SCHEDULE (default: STATIC with chunk size of 1)
    Specifies the type of iteration scheduling and optionally the chunk size to use for loops that include the run-time schedule clause. The supported schedule types, which can be specified in upper- or lowercase, are static, dynamic, guided, and auto.
OMP_STACKSIZE
    Overrides the default stack size for a newly created thread.
OMP_THREAD_LIMIT (default: 64)
    Specifies the absolute maximum number of threads that can be used in a program.
OMP_WAIT_POLICY (default: ACTIVE)
    Sets the behavior of idle threads, defining whether they spin or sleep when idle. The values are ACTIVE and PASSIVE.
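For example, a session outside PVF (e.g., in a bash shell on Linux) might set these variables before launching an OpenMP executable (myprog is a hypothetical program built with -mp):

```shell
export OMP_NUM_THREADS=4          # four threads per parallel region
export OMP_SCHEDULE="dynamic,8"   # used by loops with the RUNTIME schedule
export OMP_STACKSIZE=8M           # per-thread stack size override
export OMP_WAIT_POLICY=PASSIVE    # idle threads sleep instead of spinning
# ./myprog
```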
Chapter 10. Using an Accelerator An accelerator is a special-purpose co-processor attached to a CPU and to which the CPU can offload data and executable kernels to perform compute-intensive calculations. This chapter describes a collection of compiler directives used to specify regions of code in Fortran that can be offloaded from a host CPU to an attached accelerator.
Overview The programming model and directives described in this chapter allow programmers to create high-level host+accelerator programs without the need to explicitly initialize the accelerator, manage data or program transfers between the host and accelerator, or initiate accelerator startup and shutdown. Rather, all of these details are implicit in the programming model and are managed by the PGI Fortran accelerator compilers. The method described provides a model for accelerator programming that is portable across operating systems and various host CPUs and accelerators. The directives allow a programmer to migrate applications incrementally to accelerator targets using standards-compliant Fortran. This programming model allows the programmer to augment information available to the compilers, including specification of data local to an accelerator region, guidance on mapping of loops onto an accelerator, and similar performance-related details.
Components
The PGI Accelerator compiler technology includes the following components:
• PGF95 auto-parallelizing accelerator-enabled Fortran 90/95 compiler
• NVIDIA CUDA Toolkit components
• PVF Target Accelerators property page
• A simple command-line tool to detect whether the system has an appropriate GPU or accelerator card
No accelerator-enabled debugger is included with this release.
Availability
The PGI 10.0 Fortran Accelerator compilers are available only on x86 processor-based workstations and servers with an attached NVIDIA CUDA-enabled GPU or Tesla card. These compilers target all platforms that PGI supports except 64-bit Mac OS X. All examples included in this chapter are developed and presented on such a platform. For a list of supported GPUs, refer to the Accelerator Installation and Supported Platforms list in the latest PVF Release Notes.
User-directed Accelerator Programming
In user-directed accelerator programming the user specifies the regions of a host program to be targeted for offloading to an accelerator device. The bulk of a user's program, as well as regions containing constructs that are not supported on the targeted accelerator, are executed on the host. This chapter concentrates on specification of loops and regions of code to be offloaded to an accelerator.
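As a minimal sketch of this style of programming, the following hypothetical subroutine (not taken from the PGI distribution) marks a single loop for offload; under this model the compiler generates the kernel and manages the transfers of x and y implicitly:

```fortran
! Hypothetical sketch: offload a vector update to the accelerator.
! The compiler allocates device copies of x and y, copies them to the
! device, runs the loop as a kernel, and copies y back at region exit.
subroutine saxpy(n, a, x, y)
  implicit none
  integer :: n, i
  real :: a, x(n), y(n)
!$acc region
  do i = 1, n
     y(i) = a * x(i) + y(i)
  end do
!$acc end region
end subroutine saxpy
```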
Features Not Covered or Implemented
This chapter does not describe features or limitations of the host programming environment as a whole. Further, it does not cover automatic detection and offloading of regions of code to an accelerator by a compiler or other tool. While future versions of the PGI compilers may allow for automatic offloading, or for multiple accelerators of different types, these features are not currently supported.
Terminology
Clear and consistent terminology is important in describing any programming model. This section provides definitions of the terms required for you to effectively use this chapter and the associated programming model.

Accelerator
a special-purpose co-processor attached to a CPU and to which the CPU can offload data and executable kernels to perform compute-intensive calculations.

Compute intensity
for a given loop, region, or program unit, the ratio of the number of arithmetic operations performed on computed data to the number of memory transfers required to move that data between two levels of a memory hierarchy.

Compute region
a region defined by an Accelerator compute region directive. A compute region is a structured block containing loops which are compiled for the accelerator. A compute region may require device memory to be allocated and data to be copied from host to device upon region entry, and data to be copied from device to host memory and device memory deallocated upon exit. Compute regions may not contain other compute regions or data regions.

CUDA
stands for Compute Unified Device Architecture; the CUDA environment from NVIDIA is a C-like programming environment used to explicitly control and program an NVIDIA GPU.

Data region
a region defined by an Accelerator data region directive, or an implicit data region for a function or subroutine containing Accelerator directives. Data regions typically require device memory to be allocated and data to be copied from host to device memory upon entry, and data to be copied from device to host memory and device memory deallocated upon exit. Data regions may contain other data regions and compute regions.
Device
a general reference to any type of accelerator.

Device memory
memory attached to an accelerator which is physically separate from the host memory.

Directive
a specially formatted comment statement that is interpreted by a compiler to augment information about or specify the behavior of the program.

DMA
Direct Memory Access, a method to move data between physically separate memories; this is typically performed by a DMA engine, separate from the host CPU, that can access the host physical memory as well as an IO device or GPU physical memory.

GPU
a Graphics Processing Unit; one type of accelerator device.

GPGPU
General Purpose computation on Graphics Processing Units.

Host
the main CPU that in this context has an attached accelerator device. The host CPU controls the program regions and data loaded into and executed on the device.

Loop trip count
the number of times a particular loop executes.

OpenCL - Open Compute Language
a proposed standard C-like programming environment similar to CUDA that enables portable low-level general-purpose programming on GPUs and other accelerators.

Private data
with respect to an iterative loop, data which is used only during a particular loop iteration. With respect to a more general region of code, data which is used within the region but is not initialized prior to the region and is re-initialized prior to any use after the region.

Region
a structured block identified by the programmer or implicitly defined by the language. Certain actions may occur when program execution reaches the start and end of a region, such as device memory allocation or data movement between the host and device memory. Loops in a compute region are targeted for execution on the accelerator.

Structured block
a block of executable statements with a single entry at the top and a single exit at the bottom.

Vector operation
a single operation or sequence of operations applied uniformly to each element of an array.

Visible device copy
a copy of a variable, array, or subarray allocated in device memory, that is visible to the program unit being compiled.
System Requirements
To use the PGI Accelerator compiler features, you must install the NVIDIA drivers. You may download these components from the NVIDIA website at www.nvidia.com/cuda.
These are not PGI products, and are licensed and supported by NVIDIA.
Note You must be using an operating system that is supported by both the current PGI release and by the CUDA software and drivers.
Supported Processors and GPUs
This PGI Accelerator compiler release supports all AMD64 and Intel 64 host processors supported by Release 9.0 or higher of the PGI compilers and tools. You can use the –tp flag as documented in the release to specify the target processor. Use the –ta=nvidia flag to enable the accelerator directives and target the NVIDIA GPU. You can then use the generated code on any system with CUDA installed that has a CUDA-enabled GeForce, Quadro, or Tesla card.
In PVF you can use the PVF Target Accelerators property page to enable accelerator compilation. For more information on the properties, refer to "Fortran | Target Accelerators," on page 345. For more information on these flags as they relate to accelerator technology, refer to "Applicable Command Line Options," on page 115.
For a complete list of supported GPUs, refer to the NVIDIA website at: www.nvidia.com/object/cuda_learn_products.html
You can detect whether the system has CUDA properly installed and has an attached graphics card by running the pgaccelinfo command, which is delivered as part of the PGI Accelerator compilers software package.
Installation and Licensing

Note
The PGI Accelerator compilers require a separate license key in addition to a normal PGI Visual Fortran license.
Enable Accelerator Compilation
Once you have installed PVF Release 2010, you can enable accelerator compilation by using the properties available on the Fortran | Target Accelerators property page. For more information about these properties, refer to "Fortran | Target Accelerators".
Execution Model
The execution model targeted by the PGI Accelerator compilers is host-directed execution with an attached accelerator device, such as a GPU. The bulk of a user application executes on the host. Compute-intensive regions are offloaded to the accelerator device under control of the host. The accelerator device executes kernels, which may be as simple as a tightly-nested loop, or as complex as a subroutine, depending on the accelerator hardware.
Host Functions
Even in accelerator-targeted regions, the host must orchestrate the execution; it:
• allocates memory on the accelerator device
• initiates data transfer
• sends the kernel code to the accelerator
• passes kernel arguments
• queues the kernel
• waits for completion
• transfers results back to the host
• deallocates memory
Note In most cases, the host can queue a sequence of kernels to be executed on the device, one after the other.
Levels of Parallelism
Most current GPUs support two levels of parallelism:
• an outer doall (fully parallel) loop level
• an inner synchronous (SIMD or vector) loop level
Each level can be multidimensional with 2 or 3 dimensions, but the domain must be strictly rectangular. The synchronous level may not be fully implemented with SIMD or vector operations, so explicit synchronization is supported and required across this level. No synchronization is supported between parallel threads across the doall level.
The execution model on the device side exposes these two levels of parallelism, and the programmer is required to understand the difference between, for example, a fully parallel loop and a loop that is vectorizable but requires synchronization across iterations. All fully parallel loops can be scheduled for either doall or synchronous parallel execution, but by definition SIMD vector loops that require synchronization can only be scheduled for synchronous parallel execution.
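The two levels can be sketched with explicit loop mapping directives. In this hypothetical fragment the outer loop is scheduled in doall (parallel) mode and the inner loop in synchronous vector mode with a width of 256; the loop bounds form a strictly rectangular domain:

```fortran
! Sketch: explicit two-level mapping of a doubly-nested loop.
! The outer loop runs as fully parallel (doall) iterations; the
! inner loop runs as a synchronous vector loop of width 256.
subroutine scale2d(n, m, a, b)
  implicit none
  integer :: n, m, i, j
  real :: a(n,m), b(n,m)
!$acc region
!$acc do parallel
  do j = 1, m
!$acc do vector(256)
     do i = 1, n
        b(i,j) = 2.0 * a(i,j)
     end do
  end do
!$acc end region
end subroutine scale2d
```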
Memory Model
The most significant difference between a host-only program and a host+accelerator program is that the memory on the accelerator can be completely separate from host memory, which is the case on most current GPUs. For example:
• The host cannot read or write accelerator memory by reference because it is not mapped into the virtual memory space of the host.
• All data movement between host memory and accelerator memory must be performed by the host through runtime library calls that explicitly move data between the separate memories.
• It is not valid to assume the accelerator can read or write host memory, though this may be supported by accelerators in the future.
Separate Host and Accelerator Memory Considerations
The concept of separate host and accelerator memories is very apparent in low-level accelerator programming models such as CUDA or OpenCL, in which data movement between the memories dominates user code. In the PGI Accelerator programming model, data movement between the memories is implicit and managed by the compiler. The programmer must be aware of the potentially separate memories for many reasons, including but not limited to:
• Memory bandwidth between host memory and accelerator memory determines the compute intensity required to effectively accelerate a given region of code.
• The limited size of accelerator memory may prohibit offloading of regions of code that operate on very large amounts of data.
Accelerator Memory
On the accelerator side, current GPUs implement a weak memory model. In particular, they do not support memory coherence between threads unless those threads are parallel only at the synchronous level and the memory operations are separated by an explicit barrier. Otherwise, if one thread updates a memory location and another reads the same location, or two threads store a value to the same location, the hardware does not guarantee the results. While the results of running such a program might be inconsistent, it is not accurate to say that the results are incorrect; rather, such programs are by definition in error. While a compiler can detect some potential errors of this nature, it is nonetheless possible to write an accelerator region that produces inconsistent numerical results.
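A hypothetical sketch of such an in-error region: every parallel iteration below stores to the same scalar, so under the weak memory model the hardware does not guarantee which value the scalar holds afterward.

```fortran
! Sketch of a program that is in error under the weak memory model:
! all iterations of the parallel loop store to the same location s,
! so the final value of s is not guaranteed.
subroutine race(n, a, s)
  implicit none
  integer :: n, i
  real :: a(n), s
!$acc region
  do i = 1, n
     s = a(i)      ! conflicting stores across parallel iterations
  end do
!$acc end region
end subroutine race
```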
Cache Management
Some current GPUs have a software-managed cache, some have hardware-managed caches, and most have hardware caches that can be used only in certain situations and are limited to read-only data. In low-level programming models such as CUDA or OpenCL, it is up to the programmer to manage these caches. In the PGI Accelerator programming model, however, the compiler manages these caches using hints from the programmer in the form of directives.
Running an Accelerator Program
In PVF you can use the PVF Target Accelerators property page to enable accelerator compilation. For more information on the properties, refer to "Fortran | Target Accelerators," on page 345. Running a program that has accelerator directives and was compiled and linked with the -ta=nvidia flag is the same as running the program compiled without the -ta=nvidia flag.
• The program looks for and dynamically loads the CUDA libraries. If the libraries are not available, or if they are in a different directory than they were when the program was compiled, you may need to append the CUDA library directory to your PATH environment variable on Windows.
• On Linux, if you have no server running on your NVIDIA GPU, when your program reaches its first accelerator region, there may be a 0.5 to 1.5 second pause to warm up the GPU from a power-off condition. You can avoid this delay by running the pgcudainit program in the background, which keeps the GPU powered on.
• If you run an accelerated program on a system without a CUDA-enabled NVIDIA GPU, or without the CUDA software installed in a directory where the runtime library can find it, the program fails at runtime with an error message.
• If you set the environment variable ACC_NOTIFY to a nonzero integer value, the runtime library prints a line to standard error every time it launches a kernel.
Accelerator Directives
This section provides an overview of the Fortran directives used to delineate accelerator regions and to augment information available to the compiler for scheduling of loops and classification of data. For complete descriptions of each accelerator directive, refer to "PGI Accelerator Directives," on page 265.
Enable Accelerator Directives
PGI Accelerator compilers enable accelerator directives with the –ta command line option. In PVF, use the "Fortran | Target Accelerators" page to enable the –ta option. For more information on this option as it relates to the Accelerator, refer to "Applicable Command Line Options," on page 115.
Note
The syntax used to define directives allows compilers to ignore accelerator directives if support is disabled or not provided.

_ACCEL macro
The _ACCEL macro name is defined to have a value yyyymm where yyyy is the year and mm is the month designation of the version of the Accelerator directives supported by the implementation. For example, the version for May, 2009 is 200905. This macro must be defined by a compiler when accelerator directives are enabled.
Format
The specific format of the directive depends on the language and the format or form of the source.
Directives include a name and clauses, and the format of the directive depends on the type:
• Free-form Fortran directives, described in "Free-Form Fortran Directives"
• Fixed-form Fortran directives, described in "Fixed-Form Fortran Directives"
Note
This document uses free form for all PGI Accelerator compiler Fortran directive examples.

Rules
The following rules apply to all PGI Accelerator compiler directives:
• Only one directive-name can be specified per directive.
• The order in which clauses appear is not significant.
• Clauses may be repeated unless otherwise specified.
• For clauses that have a list argument, a list is a comma-separated list of variable names, array names, or, in some cases, subarrays with subscript ranges.
Free-Form Fortran Directives
PGI Accelerator compiler Fortran directives can be either Free-Form or Fixed-Form directives. Free-Form Accelerator directives are specified with the !$acc mechanism.

Syntax
The syntax of directives in free-form source files is:
!$acc directive-name [clause [,clause]…]
Rules
In addition to the general directive rules, the following rules apply to PGI Accelerator compiler Free-Form Fortran directives:
• The comment prefix (!) may appear in any column, but may only be preceded by white space (spaces and tabs).
• The sentinel (!$acc) must appear as a single word, with no intervening white space.
• Line length, white space, and continuation rules apply to the directive line.
• Initial directive lines must have a space after the sentinel.
• Continued directive lines must have an ampersand (&) as the last nonblank character on the line, prior to any comment placed in the directive.
• Comments may appear on the same line as the directive, starting with an exclamation point and extending to the end of the line.
• If the first nonblank character after the sentinel is an exclamation point, the line is ignored.
• Directives are case-insensitive.
• Directives cannot be embedded within continued statements.
• Statements must not be embedded within continued directives.
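Several of these rules appear together in the following hypothetical fragment, which continues a directive across two lines (the initial line ends with an ampersand and the continuation line repeats the sentinel) and carries a trailing comment:

```fortran
! Sketch: a continued free-form directive with a trailing comment.
subroutine addone(n, a, b)
  implicit none
  integer :: n, i
  real :: a(n), b(n)
!$acc region copyin(a(1:n)), &
!$acc copyout(b(1:n))          ! comment allowed after the clauses
  do i = 1, n
     b(i) = a(i) + 1.0
  end do
!$acc end region
end subroutine addone
```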
Fixed-Form Fortran Directives
Fixed-Form Accelerator directives are specified using one of three formats.

Syntax
The syntax of directives in fixed-form source files is one of these three formats:
!$acc directive-name [clause [,clause]…]
c$acc directive-name [clause [,clause]…]
*$acc directive-name [clause [,clause]…]
Rules
In addition to the general directive rules, the following rules apply to Accelerator Fixed-Form Fortran directives:
• The sentinel (!$acc, c$acc, or *$acc) must occupy columns 1-5.
• Fixed form line length, white space, continuation, and column rules apply to the directive line.
• Initial directive lines must have a space or zero in column 6, and continuation directive lines must have a character other than a space or zero in column 6.
• Comments may appear on the same line as a directive, starting with an exclamation point on or after column 7 and continuing to the end of the line.
• Directives are case-insensitive.
• Directives cannot be embedded within continued statements.
• Statements must not be embedded within continued directives.
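A hypothetical fixed-form fragment illustrating these rules, with the c$acc sentinel in columns 1-5 and a space in column 6 on the initial directive lines:

```fortran
c     Sketch: fixed-form accelerator directives around a simple loop.
      subroutine addone(n, a, b)
      integer n, i
      real a(n), b(n)
c$acc region
      do 10 i = 1, n
         b(i) = a(i) + 1.0
   10 continue
c$acc end region
      end
```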
Accelerator Directive Summary
PGI currently supports these types of accelerator directives, which are defined in more detail in "PGI Accelerator Directives," on page 265:
• Accelerator Compute Region Directive
• Accelerator Data Region Directive
• Accelerator Loop Mapping Directive
• Combined Directive
• Accelerator Declarative Data Directive
• Accelerator Update Directive
Table 10.1 lists and briefly describes each of the accelerator directives that PGI currently supports. For a complete description of each directive, refer to "PGI Accelerator Directives," on page 265.
Table 10.1. PGI Accelerator Directive Summary Table

Accelerator Compute Region Directive
Clauses: if( condition ), copy( list ), copyin( list ), copyout( list ), local( list ), updatein( list ), updateout( list )
Functionality: Defines the region of the program that should be compiled for execution on the accelerator device.
Fortran Syntax:
!$acc region [clause [, clause]…]
    structured block
!$acc end region

Accelerator Data Region Directive
Clauses: copy( list ), copyin( list ), copyout( list ), local( list ), mirror( list ), updatein( list ), updateout( list )
Functionality: Defines data, typically arrays, that should be allocated in the device memory for the duration of the data region, whether data should be copied from the host to the device memory upon region entry, and copied from the device to host memory upon region exit.
Fortran Syntax:
!$acc data region [clause [, clause]…]
    structured block
!$acc end data region

Accelerator Loop Mapping Directive
Clauses: cache( list ), host [(width)], independent, kernel, parallel [(width)], private( list ), seq [(width)], unroll [(width)], vector [(width)]
Functionality: Describes what type of parallelism to use to execute the loop and declares loop-private variables and arrays. Applies to a loop, which must appear on the following line.
Fortran Syntax:
!$acc do [clause [,clause]…]
    do loop

Combined Directive
Clauses: Any clause that is allowed on a region directive or a loop directive is allowed on a combined directive.
Functionality: Is a shortcut for specifying a loop directive nested immediately inside an accelerator compute region directive. The meaning is identical to explicitly specifying a region construct containing a loop directive.
Fortran Syntax:
!$acc region do [clause [, clause]...]
    do loop

Accelerator Declarative Data Directive
Clauses: copy( list ), copyin( list ), copyout( list ), local( list ), mirror( list ), reflected( list )
Functionality: Specifies that an array or arrays are to be allocated in the device memory for the duration of the implicit data region of a function, subroutine, or program. Specifies whether the data values are to be transferred from the host to the device memory upon entry to the implicit data region, and from the device to the host memory upon exit from the implicit data region. Creates a visible device copy of the variable or array.
Fortran Syntax:
!$acc declclause [,declclause]...

Accelerator Update Directive
Clauses: host( list ), device( list )
Functionality: Used within an explicit or implicit data region to update all or part of a host memory array with values from the corresponding array in device memory, or to update all or part of a device memory array with values from the corresponding array in host memory.
Fortran Syntax:
!$acc update updateclause [,updateclause]...
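The data region and update directives above can be combined as in the following hypothetical sketch, in which the array a stays resident on the device across two compute regions and an update refreshes the host copy between them (inspect is a hypothetical host-side routine):

```fortran
! Sketch: one data region spanning two compute regions, with an
! update directive refreshing the host copy of a between kernels.
subroutine twophase(n, a)
  implicit none
  integer :: n, i
  real :: a(n)
!$acc data region copy(a(1:n))
!$acc region
  do i = 1, n
     a(i) = a(i) * 2.0
  end do
!$acc end region
!$acc update host(a(1:n))
  call inspect(a, n)       ! hypothetical host-side routine
!$acc region
  do i = 1, n
     a(i) = a(i) + 1.0
  end do
!$acc end region
!$acc end data region
end subroutine twophase
```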
Accelerator Directive Clauses
Table 10.2 provides an alphabetical listing and brief description of each clause that is applicable for the various Accelerator directives. The table also indicates for which directives the clause is applicable. For more information on the restrictions and use of each clause, refer to "PGI Accelerator Directive Clauses," on page 271.

Table 10.2. Directive Clauses Summary

cache (list)
Directives: Accelerator Loop Mapping
Provides a hint to the compiler to try to move the variables, arrays, or subarrays in the list to the highest level of the memory hierarchy.

copy (list)
Directives: Accelerator Data Region, Declarative Data
Declares that the variables, arrays, or subarrays in the list have values in the host memory that need to be copied to the accelerator memory, and are assigned values on the accelerator that need to be copied back to the host.

copyin (list)
Directives: Accelerator Data Region, Declarative Data
Declares that the variables, arrays, or subarrays in the list have values in the host memory that need to be copied to the accelerator memory.

copyout (list)
Directives: Accelerator Data Region, Declarative Data
Declares that the variables, arrays, or subarrays in the list are assigned or contain values in the accelerator memory that need to be copied back to the host memory at the end of the accelerator region.

device (list)
Directives: Update
Copies the variables, arrays, or subarrays in the list argument from host memory to the visible device copy of the variables, arrays, or subarrays in device memory. The copy occurs before beginning execution of the compute or data region.

host (list)
Directives: Update
Copies the visible device copies of the variables, arrays, or subarrays in the list argument to the associated host memory locations. The copy occurs after completion of the compute or data region.

host [(width)]
Directives: Accelerator Loop Mapping
Tells the compiler to execute the loop sequentially on the host processor.

if (condition)
Directives: Accelerator Compute Region, Data Region
When present, tells the compiler to generate two copies of the region, one for the accelerator and one for the host, and to generate code to decide which copy to execute.

independent
Directives: Accelerator Loop Mapping
Tells the compiler that the iterations of this loop are data-independent of each other, thus allowing the compiler to generate code to execute the iterations in parallel, without synchronization.

kernel
Directives: Accelerator Loop Mapping
Tells the compiler that the body of this loop is to be the body of the computational kernel. Any loops contained within the kernel loop are executed sequentially on the accelerator.

local (list)
Directives: Accelerator Data Region, Declarative Data
Declares that the variables, arrays, or subarrays in the list need to be allocated in the accelerator memory, but the values in the host memory are not needed on the accelerator, and the values computed and assigned on the accelerator are not needed on the host.

mirror (list)
Directives: Accelerator Data Region, Declarative Data
Declares that the arrays in the list need to mirror the allocation state of the host array within the region. Valid only in Fortran on the Accelerator data region directive.

parallel [(width)]
Directives: Accelerator Loop Mapping
Tells the compiler to execute this loop in parallel mode on the accelerator. There may be a target-specific limit on the number of iterations in a parallel loop or on the number of parallel loops allowed in a given kernel.

private (list)
Directives: Accelerator Loop Mapping
Declares that the variables, arrays, or subarrays in the list argument need to be allocated in the accelerator memory with one copy for each iteration of the loop.

reflected (list)
Directives: Declarative Data
Declares that the actual argument arrays that are bound to the dummy argument arrays in the list need to have a visible copy at the call site.

seq [(width)]
Directives: Accelerator Loop Mapping
Tells the compiler to execute this loop sequentially on the accelerator. There is no maximum number of iterations for a seq schedule.

unroll [(width)]
Directives: Accelerator Loop Mapping
Tells the compiler to unroll width iterations for sequential execution on the accelerator. The width argument must be a compile-time positive constant integer.

updatein (list)
Directives: Accelerator Data Region
Copies the variables, arrays, or subarrays in the list argument from host memory to the visible device copy of the variables, arrays, or subarrays in device memory, before beginning execution of the compute or data region.

updateout (list)
Directives: Accelerator Data Region
Copies the visible device copies of the variables, arrays, or subarrays in the list argument to the associated host memory locations, after completion of the compute or data region.

vector [(width)]
Directives: Accelerator Loop Mapping
Tells the compiler to execute this loop in vector mode on the accelerator.
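Several of these clauses can be seen working together in the following hypothetical sketch: a is input-only (copyin), b is output-only (copyout), and the scratch variable t is private to each iteration of the parallel loop:

```fortran
! Sketch combining copyin, copyout, parallel, and private clauses.
subroutine squares(n, a, b)
  implicit none
  integer :: n, i
  real :: a(n), b(n), t
!$acc region copyin(a(1:n)), copyout(b(1:n))
!$acc do parallel, private(t)
  do i = 1, n
     t = a(i) * a(i)        ! t is a per-iteration scratch value
     b(i) = t + 1.0
  end do
!$acc end region
end subroutine squares
```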
PGI Accelerator Compilers Runtime Libraries
This section provides an overview of the user-callable functions and library routines that are available for use by programmers to query the accelerator features and to control behavior of accelerator-enabled programs at runtime.
Note In Fortran, none of the PGI Accelerator compilers runtime library routines may be called from a PURE or ELEMENTAL procedure.
Runtime Library Definitions
There are separate runtime library files for Fortran.
Fortran Runtime Library Files
In Fortran, interface declarations are provided in a Fortran include file named accel_lib.h and in a Fortran module named accel_lib. These files define:
• Interfaces for all routines in this section.
• Integer parameters to define integer kinds for arguments to those routines.
• Integer parameters to describe types of accelerators.
• The integer parameter accel_version with a value yyyymm where yyyy and mm are the year and month designations of the version of the Accelerator programming model supported. This value matches the value of the preprocessor variable _ACCEL.
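A minimal sketch of pulling in the module and reading the version parameter it defines:

```fortran
! Sketch: use the accel_lib module and print the programming-model
! version parameter (yyyymm) that it defines.
program show_version
  use accel_lib
  implicit none
  print *, 'Accelerator model version: ', accel_version
end program show_version
```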
Runtime Library Routines
Table 10.3 lists and briefly describes the supported PGI Accelerator compilers runtime library routines. For a complete description of these routines, refer to "PGI Accelerator Runtime Routines," on page 279.

Table 10.3. Accelerator Runtime Library Routines

acc_get_device
Returns the type of accelerator device used to run the next accelerator region, if one is selected.

acc_get_device_num
Returns the number of the device being used to execute an accelerator region.

acc_get_num_devices
Returns the number of accelerator devices of the given type attached to the host.

acc_init
Connects to and initializes the accelerator device and allocates control structures in the accelerator library.

acc_on_device
Tells the program whether it is executing on a particular device.

acc_set_device
Tells the runtime which type of device to use when executing an accelerator compute region.

acc_set_device_num
Tells the runtime which device of the given type to use among those that are attached.

acc_shutdown
Tells the runtime to shut down the connection to the given accelerator device, and free up any runtime resources.
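These routines might be combined as in the following hypothetical sketch, which queries for attached NVIDIA devices and initializes one before the first accelerator region; the device-type constant acc_device_nvidia is assumed to be among the integer parameters that accel_lib defines for accelerator types:

```fortran
! Sketch: query for devices, select one, and initialize it up front
! so the first accelerator region does not pay the startup cost.
program pick_device
  use accel_lib
  implicit none
  integer :: ndev
  ndev = acc_get_num_devices(acc_device_nvidia)
  if (ndev > 0) then
     call acc_set_device_num(0, acc_device_nvidia)
     call acc_init(acc_device_nvidia)
  end if
end program pick_device
```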
Environment Variables
PGI supports environment variables that modify the behavior of accelerator regions. This section defines the user-settable environment variables used to control the behavior of accelerator-enabled programs at execution. These environment variables must comply with these rules:
• The names of the environment variables must be upper case.
• The values assigned to environment variables are case insensitive and may have leading and trailing white space.
• The behavior is implementation-defined if the values of the environment variables change after the program has started, even if the program itself modifies the values.

Table 10.4 lists and briefly describes the Accelerator environment variables that PGI supports.

Table 10.4. Accelerator Environment Variables

ACC_DEVICE
Controls which accelerator device to use when executing accelerator regions, if the program has been compiled to use more than one different type of device. The value of this environment variable is implementation-defined, and currently may be the string NVIDIA or HOST.

ACC_DEVICE_NUM
Controls the default device number to use when executing accelerator regions. The value of this environment variable must be a nonnegative integer between zero and the number of devices attached to the host.

ACC_NOTIFY
When set to a non-negative integer, indicates to print a message to standard output when a kernel is executed on an accelerator.
Applicable PVF Property Pages
The following property pages are applicable specifically when working with accelerators.

"Fortran | Target Accelerators"
Use the –ta option to enable recognition of Accelerator directives.

"Fortran | Target Processors"
Use the –tp option to specify the target host processor architecture.

"Fortran | Diagnostics"
Use the –Minfo option to see messages about the success or failure of the compiler in translating the accelerator region into GPU kernels.

For more information about the many suboptions available with these options, refer to the respective sections in "Fortran Property Pages," on page 331.
Applicable Command Line Options
The following command line options are applicable specifically when working with accelerators. Each of these command line options is available through the property pages described in the previous section, "Applicable PVF Property Pages".

–ta
Use this option to enable recognition of the !$ACC directives in Fortran.
–tp
Use this option to specify the target host processor architecture.

–Minfo or –Minfo=accel
Use this option to see messages about the success or failure of the compiler in translating the accelerator region into GPU kernels.

The –ta flag has the following accelerator-related suboptions:

nvidia
Select NVIDIA accelerator target. This option has a number of suboptions:

analysis
Perform loop analysis only; do not generate GPU code.
cc10, cc11, cc12, cc13, cc20
Generate code for compute capability 1.0, 1.1, 1.2, 1.3, or 2.0 respectively; multiple selections are valid.
cuda2.3 or 2.3
Specify the CUDA 2.3 version of the toolkit.
cuda3.0 or 3.0
Specify the CUDA 3.0 version of the toolkit.
fastmath
Use routines from the fast math library.
keepbin
Keep the binary (.bin) files.
keepgpu
Keep the kernel source (.gpu) files.
keepptx
Keep the portable assembly (.ptx) file for the GPU code.
maxregcount:n
Specify the maximum number of registers to use on the GPU. Leaving n blank indicates no limit.
mul24
Use 24-bit multiplication for subscripting.
nofma
Do not generate fused multiply-add instructions.
time
Link in a limited-profiling library, as described in “Profiling Accelerator Kernels,” on page 118.
[no]wait
Wait (or, with nowait, do not wait) for each kernel to finish before continuing in the host program.
host
Select NO accelerator target. Generate PGI Unified Binary Code, as described in "PGI Unified Binary for Accelerators," on page 116.
The compiler automatically invokes the necessary CUDA software tools to create the kernel code and embeds the kernels in the object file.
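As an illustrative sketch of the kind of code these options apply to (the routine is hypothetical, using the PGI Accelerator region directives this guide describes; one might compile it with, for example, –ta=nvidia –Minfo=accel):

      subroutine saxpy(n, a, x, y)
      integer n, i
      real a, x(n), y(n)
!$acc region
      do i = 1, n
        y(i) = a * x(i) + y(i)
      enddo
!$acc end region
      end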
PGI Unified Binary for Accelerators
Note
The information and capabilities described in this section are only supported for 64-bit systems.
PGI compilers support the PGI Unified Binary feature to generate executables with functions optimized for different host processors, all packed into a single binary. This release extends the PGI Unified Binary technology to accelerators. Specifically, you can generate a single binary that includes two versions of functions:
Chapter 10. Using an Accelerator
• one is optimized for the accelerator
• one runs on the host processor when the accelerator is not available or when you want to compare host to accelerator execution.
To enable this feature, use the properties available on the "Fortran | Target Accelerators" property page. This flag tells the compiler to generate two versions of functions that have valid accelerator regions:
• A compiled version that targets the accelerator.
• A compiled version that ignores the accelerator directives and targets the host processor.
If you use the –Minfo flag, which you enable in PVF through the "Fortran | Diagnostics" property page, you get messages similar to the following:
s1:
     12, PGI Unified Binary version for -tp=barcelona-64 -ta=host
     18, Generated an alternate loop for the inner loop
         Generated vector sse code for inner loop
         Generated 1 prefetch instructions for this loop
s1:
     12, PGI Unified Binary version for -tp=barcelona-64 -ta=nvidia
     15, Generating copy(b(:,2:90))
         Generating copyin(a(:,2:90))
     16, Loop is parallelizable
     18, Loop is parallelizable
         Parallelization requires privatization of array t(2:90)
         Accelerator kernel generated
         16, !$acc do parallel
         18, !$acc do parallel, vector(256)
             Using register for t
The PGI Unified Binary message shows that two versions of the subprogram s1 were generated:
• one for no accelerator (–ta=host)
• one for the NVIDIA GPU (–ta=nvidia)
At run time, the program tries to load the NVIDIA CUDA dynamic libraries and test for the presence of a GPU. If the libraries are not available or no GPU is found, the program runs the host version. You can also set an environment variable to tell the program to run on the NVIDIA GPU. To do this, set ACC_DEVICE to the value NVIDIA or nvidia. Any other value of the environment variable causes the program to use the host version.
Note The only supported –ta targets for this release are nvidia and host.
Multiple Processor Targets
With 64-bit processors, you can use the –tp flag with multiple processor targets along with the –ta flag. You see the following behavior:
• If you specify one –tp value and one –ta value:
One version of each subprogram is generated for that specific target processor and target accelerator.
• If you specify one –tp value and multiple –ta values:
The compiler generates two versions of subprograms that contain accelerator regions, for the specified target processor and each target accelerator.
• If you specify multiple –tp values and one –ta value:
The compiler generates up to that many versions of each subprogram, one for each target processor, and each version also targets the selected accelerator.
• If you specify multiple –tp values and multiple –ta values:
With N –tp values and two –ta values, the compiler generates up to N+1 versions of each subprogram. It first generates up to N versions, one for each –tp value, ignoring the accelerator regions, which is equivalent to using –ta=host. It then generates one additional version that targets the accelerator.
Profiling Accelerator Kernels
This release supports the command line option:
-ta=nvidia,time
The time suboption links in a timer library, which collects and prints out simple timing information about the accelerator regions and generated kernels.
Example 10.1. Accelerator Kernel Timing Data
bb04.f90
  s1
    15: region entered 1 times
        time(us): total=1490738 init=1489138 region=1600
                  kernels=155 data=1445
        w/o init: total=1600 max=1600 min=1600 avg=1600
    18: kernel launched 1 times
        time(us): total=155 max=155 min=155 avg=155
In this example, a number of things are occurring:
• For each accelerator region, the file name /proj/qa/tests/accel/bb04.f90 and the subroutine or function name s1 are printed, along with the line number of the accelerator region, which in the example is 15.
• The library counts how many times the region is entered (1 in the example) and the microseconds spent in the region (in this example 1490738), which is split into initialization time (in this example 1489138) and execution time (in this example 1600).
• The execution time is then divided into kernel execution time and data transfer time between the host and GPU.
• For each kernel, the line number is given (18 in the example), along with a count of kernel launches and the total, maximum, minimum, and average time spent in the kernel, all of which are 155 in this example.
Related Accelerator Programming Tools
PGPROF pgcollect
The PGI profiler, PGPROF, has an Accelerator tab that displays profiling information provided by the accelerator. This information is available in the file pgprof.out and is collected by using pgcollect on an executable binary compiled for an accelerator target. For more information on pgcollect, refer to Chapter 22, "pgcollect Reference," of the PGI Tools Guide.
NVIDIA CUDA Profiler
You can use the NVIDIA CUDA Profiler with PGI-generated code for the NVIDIA GPUs. You may download the CUDA Profiler from the same website as the CUDA software:
www.nvidia.com/cuda
Documentation and support are provided by NVIDIA.
TAU - Tuning and Analysis Utility
You can use TAU (Tuning and Analysis Utility), version 2.18.1 or later, with PGI-generated accelerator code. TAU instruments code at the function or loop level, and version 2.18.1 is enhanced with support to track performance in accelerator regions. TAU software and documentation are available at this website:
http://tau.uoregon.edu
Supported Intrinsics An intrinsic is a function available in a given language whose implementation is handled specifically by the compiler. Typically, an intrinsic substitutes a sequence of automatically-generated instructions for the original function call. Since the compiler has an intimate knowledge of the intrinsic function, it can better integrate it and optimize it for the situation. Intrinsics make the use of processor-specific enhancements easier because they provide a language interface to assembly instructions. In doing so, the compiler manages things that the user would normally have to be concerned with, such as register names, register allocations, and memory locations of data. This section contains an overview of the Fortran intrinsics that the accelerator supports.
Supported Fortran Intrinsics Summary Table
Table 10.5 is an alphabetical summary of the Fortran intrinsics that the accelerator supports. These functions are specific to Fortran 90/95 unless otherwise specified.
Note
For complete descriptions of these intrinsics, refer to Chapter 6, "Fortran Intrinsics," of the PGI Fortran Reference.
In most cases PGI provides support for all the data types for which the intrinsic is valid. When support is available for only certain data types, the middle column of the table specifies which ones, using the following codes:
I for integer
C for single precision complex
S for single precision real
Z for double precision complex
D for double precision real

Table 10.5. Supported Fortran Intrinsics

This intrinsic        Returns this value ...
ABS        I,S,D      absolute value of the supplied argument.
ACOS                  arccosine of the specified value.
AINT                  truncation of the supplied value to a whole number.
ANINT                 nearest whole number to the supplied argument.
ASIN                  arcsine of the specified value.
ATAN                  arctangent of the specified value.
ATAN2                 arctangent of the specified value.
COS        S,D        cosine of the specified value.
COSH                  hyperbolic cosine of the specified value.
DBLE       S,D        conversion of the value to double precision real.
DPROD                 double precision real product.
EXP        S,D        exponential value of the argument.
IAND                  result of a bit-by-bit logical AND on the arguments.
IEOR                  result of a bit-by-bit logical exclusive OR on the arguments.
INT        I,S,D      conversion of the value to integer type.
IOR                   result of a bit-by-bit logical OR on the arguments.
LOG        S,D        natural logarithm of the specified value.
LOG10                 base-10 logarithm of the specified value.
MAX                   maximum value of the supplied arguments.
MIN                   minimum value of the supplied arguments.
MOD        I          remainder of the division.
NINT                  nearest integer to the real argument.
NOT                   result of a bit-by-bit logical complement on the argument.
REAL       I,S,D      conversion of the argument to real.
SIGN                  absolute value of A times the sign of B.
SIN        S,D        value of the sine of the argument.
SINH                  hyperbolic sine of the argument.
SQRT       S,D        square root of the argument.
TAN                   tangent of the specified value.
TANH                  hyperbolic tangent of the specified value.
References related to Accelerators
• ISO/IEC 1539-1:1997, Information Technology - Programming Languages - Fortran, Geneva, 1997 (Fortran 95).
• American National Standard Programming Language C, ANSI X3.159-1989 (ANSI C).
• ISO/IEC 9899:1999, Information Technology - Programming Languages - C, Geneva, 1999 (C99).
• PGI Tools Guide, The Portland Group, Release 10.0, November 2009. Available online at http://www.pgroup.com/doc/pgitools.pdf.
• PGI Fortran Reference, The Portland Group, Release 10.0, November 2009. Available online at http://www.pgroup.com/doc/pgifortref.pdf.
Chapter 11. Using Directives
It is often useful to be able to alter the effects of certain command line options or the default behavior of the compiler. Fortran directives provide pragmatic information that controls the actions of the compiler in a particular portion of a program without affecting the program as a whole. That is, while a command line option affects the entire source file that is being compiled, directives apply, or disable, the effects of a command line option for selected subprograms or selected loops in the source file, for example, to optimize a specific area of code. Use directives to tune selected routines or loops.
PGI Proprietary Fortran Directives
PGI Fortran compilers support proprietary directives that may have any of the following forms:
!pgi$g directive
!pgi$r directive
!pgi$l directive
!pgi$  directive
Note
If the input is in fixed format, the comment character must begin in column 1 and either * or C is allowed in place of !.
The scope indicator controls the scope of the directive. This indicator occurs after the $. Some directives ignore the scope indicator. The valid scopes, shown in the previous forms of the directive, are these:
g (global)
    indicates the directive applies to the end of the source file.
r (routine)
    indicates the directive applies to the next subprogram.
l (loop)
    indicates the directive applies to the next loop, but not to any loop contained within the loop body. Loop-scoped directives are only applied to DO loops.
blank
    indicates that the default scope for the directive is applied.
The body of the directive may immediately follow the scope indicator. Alternatively, any number of blanks may precede the name of the directive. Any names in the body of the directive, including the directive name, may not contain embedded blanks. Blanks may surround any special characters, such as a comma or an equal sign. The directive name, including the directive prefix, may contain upper or lower case letters, and the case is not significant. Case is significant for any variable names that appear in the body of the directive if the command line option –Mupcase is selected. For compatibility with other vendors’ directives, the prefix cpgi$ may be substituted with cdir$ or cvd$.
PGI Proprietary Optimization Directive Summary
The following table summarizes the supported Fortran directives. The following terms are useful in understanding the table.
• Functionality is a brief summary of the way to use the directive. For a complete description, refer to Chapter 21, "Directives Reference," on page 285.
• Many of the directives can be preceded by NO. The default entry indicates the default for the directive. N/A appears if a default does not apply.
• The scope entry indicates the allowed scope indicators for each directive, with l for loop, r for routine, and g for global. The default scope is surrounded by parentheses.
Note
The "*" in the scope indicates this:
For a routine-scoped directive, the scope includes the code following the directive until the end of the routine.
For a globally-scoped directive, the scope includes the code following the directive until the end of the file rather than the entire file.
Note
The name of a directive may also be prefixed with –M. For example, you can use the directive –Mbounds, which is equivalent to the directive bounds, and you can use –Mopt, which is equivalent to opt.
Table 11.1. Proprietary Optimization-Related Fortran Directive Summary
Directive              Functionality                                          Default      Scope
altcode (noaltcode)    Do/don't generate alternate code for vectorized        altcode      (l)rg
                       and parallelized loops.
assoc (noassoc)        Do/don't perform associative transformations.          assoc        (l)rg
bounds (nobounds)      Do/don't perform array bounds checking.                nobounds     (r)g*
cncall (nocncall)      Loops are considered for parallelization, even if      nocncall     (l)rg
                       they contain calls to user-defined subroutines or
                       functions, or if their loop counts do not exceed
                       usual thresholds.
concur (noconcur)      Do/don't enable auto-concurrentization of loops.       concur       (l)rg
depchk (nodepchk)      Do/don't ignore potential data dependencies.           depchk       (l)rg
eqvchk (noeqvchk)      Do/don't check EQUIVALENCEs for data                   eqvchk       (l)rg
                       dependencies.
invarif (noinvarif)    Do/don't remove invariant if constructs from loops.    invarif      (l)rg
ivdep                  Ignore potential data dependencies.                    ivdep        (l)rg
lstval (nolstval)      Do/don't compute last values.                          lstval       (l)rg
prefetch               Control how prefetch instructions are emitted.
opt                    Select optimization level.                             N/A          (r)g
safe_lastval           Parallelize when loop contains a scalar used           not enabled  (l)
                       outside of the loop.
tp                     Generate PGI Unified Binary code optimized for         N/A          (r)g
                       specified targets.
unroll (nounroll)      Do/don't unroll loops.                                 nounroll     (l)rg
vector (novector)      Do/don't perform vectorizations.                       vector       (l)rg*
vintr (novintr)        Do/don't recognize vector intrinsics.                  vintr        (l)rg
Scope of Fortran Directives and Command-Line Options
During compilation the effect of a directive may be to either turn an option on or turn an option off. Directives apply to the section of code following the directive, corresponding to the specified scope, which may include the following loop, the following routine, or the rest of the program. This section presents several examples that show the effect of directives as well as their scope. Consider the following Fortran code:

      integer maxtime, time
      parameter (n = 1000, maxtime = 10)
      double precision a(n,n), b(n,n), c(n,n)
      do time = 1, maxtime
        do i = 1, n
          do j = 1, n
            c(i,j) = a(i,j) + b(i,j)
          enddo
        enddo
      enddo
      end

When compiled with –Mvect, both interior loops are interchanged with the outer loop.
$ pgfortran -Mvect dirvect1.f
Directives alter this behavior either globally or on a routine or loop by loop basis. To assure that vectorization is not applied, use the novector directive with global scope.

cpgi$g novector
      integer maxtime, time
      parameter (n = 1000, maxtime = 10)
      double precision a(n,n), b(n,n), c(n,n)
      do time = 1, maxtime
        do i = 1, n
          do j = 1, n
            c(i,j) = a(i,j) + b(i,j)
          enddo
        enddo
      enddo
      end

In this version, the compiler disables vectorization for the entire source file. Another use of the directive scoping mechanism turns an option on or off locally, either for a specific procedure or for a specific loop:

      integer maxtime, time
      parameter (n = 1000, maxtime = 10)
      double precision a(n,n), b(n,n), c(n,n)
cpgi$l novector
      do time = 1, maxtime
        do i = 1, n
          do j = 1, n
            c(i,j) = a(i,j) + b(i,j)
          enddo
        enddo
      enddo
      end
Loop level scoping does not apply to nested loops. That is, the directive only applies to the following loop. In this example, the directive turns off vector transformations for the top-level loop. If the outer loop were a timing loop, this would be a practical use for a loop-scoped directive.
Prefetch Directives Today’s processors are so fast that it is difficult to bring data into them quickly enough to keep them busy. Prefetch instructions can increase the speed of an application substantially by bringing data into cache so that it is available when the processor needs it. When vectorization is enabled using the –Mvect or –Mprefetch compiler options, or an aggregate option such as –fast that incorporates –Mvect, the PGI compilers selectively emit instructions to explicitly prefetch data into the data cache prior to first use. You can control how these prefetch instructions are emitted by using prefetch directives. For a list of processors that support prefetch instructions refer to the PGI Release Notes.
Prefetch Directive Syntax
The syntax of a prefetch directive is as follows:
c$mem prefetch <var1>[,<var2>[,...]]
where <varn> is any valid variable, member, or array element reference.
Prefetch Directive Format Requirements
Note
The sentinel for prefetch directives is c$mem, which is distinct from the cpgi$ sentinel used for optimization directives. Any prefetch directives that use the cpgi$ sentinel are ignored by the PGI compilers.
• The "c" must be in column 1.
• Either * or ! is allowed in place of c.
• The scope indicators g, r and l used with the cpgi$ sentinel are not supported.
• The directive name, including the directive prefix, may contain upper or lower case letters and is case insensitive (case is not significant).
• If the command line option –Mupcase is used, any variable names that appear in the body of the directive are case sensitive.
Sample Usage of Prefetch Directive
Example 11.1. Prefetch Directive Use
This example uses prefetch directives to prefetch data in a matrix multiplication inner loop where a row of one source matrix has been gathered into a contiguous vector.

      real*8 a(m,n), b(n,p), c(m,p), arow(n)
...
      do j = 1, p
c$mem prefetch arow(1),b(1,j)
c$mem prefetch arow(5),b(5,j)
c$mem prefetch arow(9),b(9,j)
        do k = 1, n, 4
c$mem prefetch arow(k+12),b(k+12,j)
          c(i,j) = c(i,j) + arow(k) * b(k,j)
          c(i,j) = c(i,j) + arow(k+1) * b(k+1,j)
          c(i,j) = c(i,j) + arow(k+2) * b(k+2,j)
          c(i,j) = c(i,j) + arow(k+3) * b(k+3,j)
        enddo
      enddo
With this pattern of prefetch directives, the compiler emits prefetch instructions whereby elements of arow and b are fetched into the data cache starting four iterations prior to first use. By varying the prefetch distance in this way, it is sometimes possible to reduce the effects of main memory latency and improve performance.
!DEC$ Directives PGI Fortran compilers for Microsoft Windows support several de-facto standard Fortran directives that help with inter-language calling and importing and exporting routines to and from DLLs. These directives all take the form: !DEC$ directive
Format Requirements
You must follow these format requirements for the directive to be recognized in your program:
• The directive must begin in column 1 when the file is fixed format or compiled with –Mfixed.
• The directive prefix !DEC$ requires a space between the prefix and the directive keyword, such as ATTRIBUTES.
• The ! must begin the prefix when compiling Fortran 90/95 free-form format.
• The characters C or * can be used in place of ! in either form of the prefix when compiling F77-style fixed-form format.
• The directives are completely case insensitive.
Summary Table The following table summarizes the supported !DEC$ directives. For a complete description of each directive, refer to the section “!DEC$ Directives,” on page 290 in Chapter 21, “Directives Reference”. Table 11.2. !DEC$ Directives Summary Table
Directive      Functionality
ALIAS          Specifies an alternative name with which to resolve a routine.
ATTRIBUTES     Lets you specify properties for data objects and procedures.
DECORATE       Specifies that the name specified in the ALIAS directive should have the prefix
               and postfix decorations performed on it that are associated with the calling
               conventions that are in effect. This directive has no effect if ALIAS is not
               specified.
DISTRIBUTE     Tells the compiler at what point within a loop to split into two loops.
IGNORE_TKR     Directs the compiler to ignore the type, kind, and/or rank (TKR) of specified
               dummy arguments in a procedure interface.
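As an illustrative sketch of the directive form (the routine name and external symbol name here are hypothetical; refer to the "!DEC$ Directives" reference section cited above for the exact option syntax):

      subroutine mysub(i)
! Hypothetical example: resolve calls to mysub against the
! external symbol name 'c_work' at link time
!DEC$ ATTRIBUTES ALIAS : 'c_work' :: mysub
      integer i
      end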
Chapter 12. Creating and Using Libraries
A library is a collection of functions or subprograms that are grouped for reference and ease of linking. This chapter discusses issues related to PGI-supplied compiler libraries. Specifically, it addresses the creation of dynamically linked libraries, known as shared objects or shared libraries, and math libraries.
Note
This chapter does not duplicate material related to using libraries for inlining, described in "Creating an Inline Library," on page 83, or information related to run-time library routines available to OpenMP programmers, described in "Run-time Library Routines," on page 95.
PGI provides libraries that export C interfaces by using Fortran modules. It also provides additions to the supported library functionality, specifically NARGS, a run-time function included in DFLIB. NARGS returns the total number of command-line arguments, including the command. The result is of type INTEGER(4). For example, NARGS returns 4 for the command-line invocation of PROG1 -g -c -a.
This chapter has examples that include the following options related to creating and using libraries:
–Bdynamic    –def         –implib      –Mmakeimplib
–Bstatic     –dynamiclib  –l           –o
–c           –fpic        –Mmakedll    –shared
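For instance, a minimal sketch of NARGS use (the program name argdemo is hypothetical):

      program argdemo
      use dflib
      integer(4) :: n
! NARGS counts the command itself plus all arguments,
! so invoking "argdemo -g -c -a" yields 4
      n = nargs()
      print *, 'argument count (including command): ', n
      end program argdemo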
PGI Runtime Libraries on Windows
The PGI runtime libraries on Windows are available in both static and dynamically-linked (DLL) versions. The static libraries are used by default.
• You can use the dynamically-linked version of the run-time by specifying –Bdynamic at both compile and link time.
Note
C++ on Windows does not support –Bdynamic.
• You can explicitly specify static linking, the default, by using –Bstatic at compile and link time. For details on why you might choose one type of linking over another type, refer to “Creating and Using Dynamic-Link Libraries on Windows,” on page 131.
Creating and Using Static Libraries on Windows
The Microsoft Library Manager (LIB.EXE) is the tool that is typically used to create and manage a static library of object files on Windows. LIB is provided with the PGI compilers as part of the Microsoft Open Tools. Refer to www.msdn2.com for a complete LIB reference - search for LIB.EXE. For a list of available options, invoke LIB with the /? switch.
For compatibility with legacy makefiles, PGI provides a wrapper for LIB and LINK called ar. This version of ar is compatible with Windows and object-file formats. PGI also provides ranlib as a placeholder for legacy makefile support.
ar command
The ar command is a legacy archive wrapper that interprets legacy ar command line options and translates these to LINK/LIB options. You can use it to create libraries of object files.
Syntax:
The syntax for the ar command is this:
ar [options] [archive] [object file].
Where:
• The first argument must be a command line switch, and the leading dash on the first option is optional.
• The single character options, such as –d and –v, may be combined into a single option, as –dv. Thus, ar dv, ar -dv, and ar -d -v all mean the same thing.
• The first non-switch argument must be the library name.
• One (and only one) of –d, –r, –t, or –x must appear on the command line.
Options
The options available for the ar command are these:
–c
    This switch is for compatibility; it is ignored.
–d
    Deletes the named object files from the library.
–r
    Replaces in or adds the named object files to the library.
Chapter 12. Creating and Using Libraries
–t
    Writes a table of contents of the library to standard out.
–v
    Writes a verbose file-by-file description of the making of the new library to standard out.
–x
    Extracts the named files by copying them into the current directory.
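As a sketch of the legacy option syntax this wrapper accepts (demonstrated here with GNU ar, whose -r, -t, -d, and -x option semantics match; the PGI wrapper translates such options to LIB/LINK operations, and the member files here are placeholders standing in for object files):

```shell
# Two placeholder member files standing in for object files
rm -f demo.a
echo "alpha" > m1.txt
echo "beta"  > m2.txt
# -r: add (or replace) the named files in the archive; -v: verbose
ar -rv demo.a m1.txt m2.txt
# -t: print the archive's table of contents
ar -t demo.a
# -d: delete a member from the archive
ar -dv demo.a m2.txt
ar -t demo.a
```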
ranlib command
The ranlib command is a wrapper that allows use of legacy scripts and makefiles that use the ranlib command. The command actually does nothing; it merely exists for compatibility.
Syntax:
The syntax for the ranlib command is this:
ranlib [options] [archive]
Options
The options available for the ranlib command are these:
–help
    Short help information is printed out.
–V
    Version information is printed out.
Creating and Using Dynamic-Link Libraries on Windows
There are several differences between static and dynamic-link libraries on Windows. Libraries of either type are used when resolving external references for linking an executable, but the process differs for each type of library. When linking with a static library, the code needed from the library is incorporated into the executable. When linking with a DLL, external references are resolved using the DLL's import library, not the DLL itself. The code in the DLL associated with the external references does not become a part of the executable.
The DLL is loaded when the executable that needs it is run. For the DLL to be loaded in this manner, the DLL must be in your path.
Static libraries and DLLs also handle global data differently. Global data in static libraries is automatically accessible to other objects linked into an executable. Global data in a DLL can only be accessed from outside the DLL if the DLL exports the data and the image that uses the data imports it.
The PGI Fortran compilers support the DEC$ ATTRIBUTES extensions DLLIMPORT and DLLEXPORT:
cDEC$ ATTRIBUTES DLLEXPORT :: object [,object] ...
cDEC$ ATTRIBUTES DLLIMPORT :: object [,object] ...
Here c is one of C, c, !, or *. object is the name of the subprogram or common block that is exported or imported. Further, common block names are enclosed within slashes (/), as shown here:
cDEC$ ATTRIBUTES DLLIMPORT :: intfunc
!DEC$ ATTRIBUTES DLLEXPORT :: /fdata/
For more information on these extensions, refer to "!DEC$ Directives," on page 127. The examples in this section further illustrate the use of these extensions.
To create a DLL in PVF, select File | New | Project..., then select PGI Visual Fortran, and create a new Dynamic Library project. To create a DLL from the command line, use the –Mmakedll option.
The following switches apply to making and using DLLs with the PGI compilers:
–Bdynamic
    Compile for and link to the DLL version of the PGI runtime libraries. This flag is required when linking with any DLL built by the PGI compilers. This flag corresponds to the /MD flag used by Microsoft's cl compilers.
    When you use the PGI compiler flag –Bdynamic to create an executable that links to the DLL form of the runtime, the executable built is smaller than one built without –Bdynamic. The PGI runtime DLLs, however, must be available on the system where the executable is run. You must use the –Bdynamic flag when linking an executable against a DLL built by the PGI compilers.
Note
C++ on Windows does not support –Bdynamic.
–Bstatic
    Compile for and link to the static version of the PGI runtime libraries. This flag corresponds to the /MT flag used by Microsoft's cl compilers. On Windows, you must use –Bstatic for both compiling and linking.
–Mmakedll
    Generate a dynamic-link library or DLL. Implies –Bdynamic.
–Mmakeimplib
    Generate an import library without generating a DLL. Use this flag when you want to generate an import library for a DLL but are not yet ready to build the DLL itself. This situation might arise, for example, when building DLLs with mutual imports, as shown in Example 12.2, "Build DLLs Containing Mutual Imports: Fortran," on page 134.
–o <file>
    Passed to the linker. Name the DLL or import library <file>.
–def <file>
    When used with –Mmakedll, this flag is passed to the linker and a .def file named <file> is generated for the DLL. The .def file contains the symbols exported by the DLL. Generating a .def file is not required when building a DLL but can be a useful debugging tool if the DLL does not contain the symbols that you expect it to contain.
    When used with –Mmakeimplib, this flag is passed to lib, which requires a .def file to create an import library. The .def file can be empty if the list of symbols to export is passed to lib on the command line or explicitly marked as DLLEXPORT in the source code.
–implib <file>
    Passed to the linker. Generate an import library named <file> for the DLL. A DLL's import library is the interface used when linking an executable that depends on routines in a DLL.
To use the PGI compilers to create an executable that links to the DLL form of the runtime, use the compiler flag –Bdynamic. The executable built will be smaller than one built without –Bdynamic; the PGI runtime DLLs, however, must be available on the system where the executable is run. The –Bdynamic flag must be used when an executable is linked against a DLL built by the PGI compilers.
The following examples outline how to use –Bdynamic, –Mmakedll and –Mmakeimplib to build and use DLLs with the PGI compilers.
Note
C++ on Windows does not support –Bdynamic.
Example 12.1. Build a DLL: Fortran
This example builds a DLL from a single source file, object1.f, which exports data and a subroutine using DLLEXPORT. The source file, prog1.f, uses DLLIMPORT to import the data and subroutine from the DLL.
object1.f
      subroutine sub1(i)
!DEC$ ATTRIBUTES DLLEXPORT :: sub1
      integer i
      common /acommon/ adata
      integer adata
!DEC$ ATTRIBUTES DLLEXPORT :: /acommon/
      print *, "sub1 adata", adata
      print *, "sub1 i ", i
      adata = i
      end
prog1.f
      program prog1
      common /acommon/ adata
      integer adata
      external sub1
!DEC$ ATTRIBUTES DLLIMPORT :: sub1, /acommon/
      adata = 11
      call sub1(12)
      print *, "main adata", adata
      end
Step 1: Create the DLL obj1.dll and its import library obj1.lib using the following series of commands:
% pgfortran -Bdynamic -c object1.f
% pgfortran -Mmakedll object1.obj -o obj1.dll
Step 2: Compile the main program:
% pgfortran -Bdynamic -o prog1 prog1.f -defaultlib:obj1
The –Bdynamic and –Mmakedll switches cause the compiler to link against the PGI runtime DLLs instead of the PGI runtime static libraries. The –Bdynamic switch is required when linking against any PGI-compiled DLL, such as obj1.dll. The -defaultlib: switch specifies that obj1.lib, the DLL's import library, should be used to resolve imports.
Step 3: Ensure that obj1.dll is in your path, then run the executable prog1 to determine if the DLL was successfully created and linked:
% prog1
sub1 adata 11
sub1 i 12
main adata 12
Should you wish to change obj1.dll without changing the subroutine or function interfaces, no rebuilding of prog1 is necessary. Just recreate obj1.dll and the new obj1.dll is loaded at runtime.
Example 12.2. Build DLLs Containing Mutual Imports: Fortran
In this example we build two DLLs, each dependent on the other, and use them to build the main program. In the following source files, object2.f95 makes calls to routines defined in object3.f95, and vice versa. This situation of mutual imports requires two steps to build each DLL.

To link the first DLL, the import library for the second DLL must be available. Usually an import library is created when a DLL is linked. In this case, however, the second DLL cannot be linked without the import library for the first DLL. When such circular imports exist, an import library for one of the DLLs must be created in a separate step without creating the DLL. The PGI drivers call the Microsoft lib tool to create import libraries in this situation. Once the DLLs are built, we can use them to build the main program.

object2.f95
subroutine func_2a
external func_3b
!DEC$ ATTRIBUTES DLLEXPORT :: func_2a
!DEC$ ATTRIBUTES DLLIMPORT :: func_3b
print*,"func_2a, calling a routine in obj3.dll"
call func_3b()
end subroutine

subroutine func_2b
!DEC$ ATTRIBUTES DLLEXPORT :: func_2b
print*,"func_2b"
end subroutine
object3.f95
subroutine func_3a
external func_2b
!DEC$ ATTRIBUTES DLLEXPORT :: func_3a
!DEC$ ATTRIBUTES DLLIMPORT :: func_2b
print*,"func_3a, calling a routine in obj2.dll"
call func_2b()
end subroutine

subroutine func_3b
!DEC$ ATTRIBUTES DLLEXPORT :: func_3b
print*,"func_3b"
end subroutine
prog2.f95
program prog2
external func_2a
external func_3a
!DEC$ ATTRIBUTES DLLIMPORT :: func_2a
!DEC$ ATTRIBUTES DLLIMPORT :: func_3a
call func_2a()
call func_3a()
end program
Step 1: Use –Mmakeimplib with the PGI compilers to build an import library for the first DLL without building the DLL itself.
% pgfortran -Bdynamic -c object2.f95
% pgfortran -Mmakeimplib -o obj2.lib object2.obj
Tip
The -def= option can also be used with –Mmakeimplib. Use a .def file when you need to export additional symbols from the DLL. A .def file is not needed in this example because all symbols are exported using DLLEXPORT.

Step 2: Use the import library, obj2.lib, created in Step 1, to link the second DLL.
% pgfortran -Bdynamic -c object3.f95
% pgfortran -Mmakedll -o obj3.dll object3.obj -defaultlib:obj2
Step 3: Use the import library, obj3.lib, created in Step 2, to link the first DLL.
% pgfortran -Mmakedll -o obj2.dll object2.obj -defaultlib:obj3
Step 4: Compile the main program and link against the import libraries for the two DLLs.
% pgfortran -Bdynamic prog2.f95 -o prog2 -defaultlib:obj2 -defaultlib:obj3
Step 5: Execute prog2 to ensure that the DLLs were created properly:
% prog2
func_2a, calling a routine in obj3.dll
func_3b
func_3a, calling a routine in obj2.dll
func_2b
Example 12.3. Import a Fortran module from a DLL
In this example we import a Fortran module from a DLL. We use the source file defmod.f90 to create a DLL containing a Fortran module. We then use the source file usemod.f90 to build a program that imports and uses the Fortran module from defmod.f90.

defmod.f90
module testm
type a_type
integer :: an_int
end type a_type
type(a_type) :: a, b
!DEC$ ATTRIBUTES DLLEXPORT :: a,b
contains
subroutine print_a
!DEC$ ATTRIBUTES DLLEXPORT :: print_a
write(*,*) a%an_int
end subroutine
subroutine print_b
!DEC$ ATTRIBUTES DLLEXPORT :: print_b
write(*,*) b%an_int
end subroutine
end module
usemod.f90
use testm
a%an_int = 1
b%an_int = 2
call print_a
call print_b
end
Step 1: Create the DLL.
% pgf90 -Mmakedll -o defmod.dll defmod.f90
Creating library defmod.lib and object defmod.exp
Step 2: Create the exe and link against the import library for the imported DLL.
% pgf90 -Bdynamic -o usemod usemod.f90 -defaultlib:defmod.lib
Step 3: Run the exe to ensure that the module was imported from the DLL properly.
% usemod
1
2
Using LIB3F
The PGI Fortran compilers include complete support for the de facto standard LIB3F library routines on both Linux and Windows operating systems. See the PGI Fortran Reference manual for a complete list of available routines in the PGI implementation of LIB3F.
LAPACK, BLAS and FFTs
Pre-compiled versions of the public domain LAPACK and BLAS libraries are included with the PGI compilers. The LAPACK library is called liblapack.a or, on Windows, liblapack.lib. The BLAS library is called libblas.a or, on Windows, libblas.lib. These libraries are installed to $PGI\<target>\lib, where <target> is replaced with the appropriate target name (win32, win64). To use these libraries, simply link them in using the –l option when linking your main program:
% pgfortran myprog.f -llapack -lblas
Highly optimized assembly-coded versions of BLAS and certain FFT routines may be available for your platform. In some cases, these are shipped with the PGI compilers. See the current release notes for the PGI compilers you are using to determine if these optimized libraries exist, where they can be downloaded (if necessary), and how to incorporate them into your installation as the default.
Chapter 13. Using Environment Variables
Environment variables allow you to set and pass information that can alter the default behavior of the PGI compilers and the executables which they generate. This chapter includes explanations of the environment variables specific to PGI compilers. Other environment variables are referenced and documented in other sections of this User’s Guide or the PGI Tools Guide.
• You use OpenMP environment variables to control the behavior of OpenMP programs. For consistency related to the OpenMP environment, the details of the OpenMP-related environment variables are included in Chapter 9, “Using OpenMP”.
• You can use environment variables to control the behavior of the PGDBG debugger or PGPROF profiler. For a description of environment variables that affect these tools, refer to the PGI Tools Guide.
Setting Environment Variables
Before we look at the environment variables that you might use with the PGI compilers and tools, let’s look at how to set environment variables. To illustrate, the following sections show how a user might initialize the shell environment prior to using the PGI compilers and tools.
Setting Environment Variables on Windows
When you open the PVF Command Prompt, as described in “Commands Submenu”, the environment is preinitialized to use the PGI compilers and tools. You may want to use other environment variables, such as the OpenMP ones. This section explains how to do that.
Suppose that your home directory is C:\tmp. The following examples show how you might set the temporary directory to your home directory, and then verify that it is set.
DOS> set TMPDIR=C:\tmp
DOS> echo %TMPDIR%
C:\tmp
DOS>
Note
To set the environment for programs run from within PVF, whether or not they are run in the debugger, use the environment properties described in the “Debugging Property Page,” on page 328.
PGI-Related Environment Variables
For easy reference, the following table provides a quick listing of some OpenMP and all PGI compiler-related environment variables. This section provides more detailed descriptions of the environment variables specific to PGI compilers and the executables they generate. For information specific to OpenMP environment variables, refer to Table 9.4, “OpenMP-related Environment Variable Summary Table,” on page 100 and to the complete descriptions in “OpenMP Environment Variables”.
Table 13.1. PGI-Related Environment Variable Summary
Environment Variable
Description
FLEXLM_BATCH
(Windows only) When set to 1, prevents interactive pop-ups from appearing by sending all licensing errors and warnings to standard out rather than to a pop-up window.
FORTRANOPT
Allows the user to specify that the PGI Fortran compilers use VAX I/O conventions.
LM_LICENSE_FILE
Specifies the full path of the license file that is required for running the PGI software. On Windows, LM_LICENSE_FILE does not need to be set.
MPSTKZ
Increases the size of the stacks used by threads executing in parallel regions. The value should be an integer n concatenated with M or m to specify stack sizes of n megabytes.
MP_BIND
Specifies whether to bind processes or threads executing in a parallel region to a physical processor.
MP_BLIST
When MP_BIND is yes, this variable specifically defines the thread-CPU relationship, overriding the default values.
MP_SPIN
Specifies the number of times to check a semaphore before calling _sleep().
MP_WARN
Allows you to eliminate certain default warning messages.
NCPUS
Sets the number of processes or threads used in parallel regions.
NCPUS_MAX
Limits the maximum number of processors or threads that can be used in a parallel region.
NO_STOP_MESSAGE
If used, the execution of a plain STOP statement does not produce the message FORTRAN STOP.
OMP_DYNAMIC
Currently has no effect. Enables (TRUE) or disables (FALSE) the dynamic adjustment of the number of threads. The default is FALSE.
OMP_MAX_ACTIVE_LEVELS
Currently has no effect. Specifies the maximum number of nested active parallel regions.
OMP_NUM_THREADS
Specifies the number of threads to use during execution of parallel regions. Default is 1.
OMP_SCHEDULE
Specifies the type of iteration scheduling and, optionally, the chunk size to use for omp for and omp parallel for loops that include the run-time schedule clause. The default is STATIC with chunk size = 1.
OMP_STACKSIZE
Overrides the default stack size for a newly created thread.
OMP_WAIT_POLICY
Sets the behavior of idle threads, defining whether they spin or sleep when idle. The values are ACTIVE and PASSIVE. The default is ACTIVE.
PATH
Determines which locations are searched for commands the user may type.
PGI
Specifies, at compile-time, the root directory where the PGI compilers and tools are installed.
PGI_CONTINUE
If set, when a program compiled with –Mchkfpstk is executed, the stack is automatically cleaned up and execution then continues.
PGI_OBJSUFFIX
Allows you to control the suffix on generated object files.
PGI_STACK_USAGE
(Windows only) Allows you to explicitly set stack properties for your program.
PGI_TERM
Controls the stack traceback and just-in-time debugging functionality.
PGI_TERM_DEBUG
Overrides the default behavior when PGI_TERM is set to debug.
STATIC_RANDOM_SEED
Forces the seed returned by RANDOM_SEED to be constant.
TMP
Sets the directory to use for temporary files created during execution of the PGI compilers and tools; interchangeable with TMPDIR.
TMPDIR
Sets the directory to use for temporary files created during execution of the PGI compilers and tools.
PGI Environment Variables
You use the environment variables listed in Table 13.1 to alter the default behavior of the PGI compilers and the executables which they generate. This section provides more detailed descriptions about the variables in this table that are not OpenMP environment variables.
FLEXLM_BATCH
By default, on Windows the license server creates interactive pop-up messages to issue warnings and errors. You can use the environment variable FLEXLM_BATCH to prevent interactive pop-up windows. To do this, set the environment variable FLEXLM_BATCH to 1.
The following csh example prevents interactive pop-up messages for licensing warnings and errors:
% setenv FLEXLM_BATCH 1
FORTRANOPT
FORTRANOPT allows the user to adjust the behavior of the PGI Fortran compilers.
• If FORTRANOPT exists and contains the value vaxio, the record length in the open statement is in units of 4-byte words, and the $ edit descriptor only has an effect for lines beginning with a space or a plus sign (+).
• If FORTRANOPT exists and contains the value format_relaxed, an I/O item corresponding to a numerical edit descriptor (such as F, E, I, and so on) is not required to be a type implied by the descriptor.
• In a non-Windows environment, if FORTRANOPT exists and contains the value crif, a sequential formatted or list-directed record is allowed to be terminated with the character sequence \r\n (carriage return, newline). This approach is useful when reading records from a file produced on a Windows system.
The following example causes the PGI Fortran compilers to use VAX I/O conventions:
% setenv FORTRANOPT vaxio
LM_LICENSE_FILE
The LM_LICENSE_FILE variable specifies the full path of the license file that is required for running the PGI software.
Note
LM_LICENSE_FILE is not required for PVF, but you can use it.
To set the environment variable LM_LICENSE_FILE to the full path of the license key file, do this:
1. Open the System Properties dialog: Start | Control Panel | System.
2. Select the Advanced tab.
3. Click the Environment Variables button.
• If LM_LICENSE_FILE is not already an environment variable, create a new system variable for it. Set its value to the full path, including the name of the license key file, license.dat.
• If LM_LICENSE_FILE already exists as an environment variable, append the path to the license file to the variable’s current value using a semi-colon to separate entries.
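From a command shell, the same append can be done directly. The following sh-style sketch uses an illustrative path only; on Windows, entries in LM_LICENSE_FILE are separated by semicolons:

```shell
# Append an example license-file path to any existing value.
# The path C:\PGI\license.dat is illustrative, not a required location.
export LM_LICENSE_FILE="$LM_LICENSE_FILE;C:\PGI\license.dat"
```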
MPSTKZ
MPSTKZ increases the size of the stacks used by threads executing in parallel regions. You typically use this variable with programs that utilize large amounts of thread-local storage in the form of private variables or local variables in functions or subroutines called within parallel regions. The value should be an integer n concatenated with M or m to specify stack sizes of n megabytes.
For example, the following setting specifies a stack size of 8 megabytes:
% setenv MPSTKZ 8M
MP_BIND
You can set MP_BIND to yes or y to bind processes or threads executing in a parallel region to a physical processor. Set it to no or n to disable such binding. The default is to not bind processes to processors. This variable is an execution-time environment variable interpreted by the PGI run-time support libraries. It does not affect the behavior of the PGI compilers in any way.
Note
The MP_BIND environment variable is not supported on all platforms.
% setenv MP_BIND y
MP_BLIST
MP_BLIST allows you to specifically define the thread-CPU relationship.
Note
This variable is only in effect when MP_BIND is yes.
While the MP_BIND variable binds processes or threads to a physical processor, MP_BLIST allows you to specifically define which thread is associated with which processor. The list defines the processor-thread relationship order, beginning with thread 0. This list overrides the default binding. For example, the following setting for MP_BLIST maps CPUs 3, 2, 1 and 0 to threads 0, 1, 2 and 3 respectively:
% setenv MP_BLIST 3,2,1,0
MP_SPIN
When a thread executing in a parallel region enters a barrier, it spins on a semaphore. You can use MP_SPIN to specify the number of times it checks the semaphore before calling _sleep(). These calls cause the thread to be re-scheduled, allowing other processes to run. The default value is 10000.
% setenv MP_SPIN 200
MP_WARN
MP_WARN allows you to eliminate certain default warning messages.
By default, a warning is printed to stderr if you execute an OpenMP or auto-parallelized program with NCPUS or OMP_NUM_THREADS set to a value larger than the number of physical processors in the system.
For example, if you produce a parallelized executable a.exe and execute as follows on a system with only one processor, you get a warning message.
> set OMP_NUM_THREADS=2
> a.exe
Warning: OMP_NUM_THREADS or NCPUS (2) greater than available cpus (1)
FORTRAN STOP
Setting MP_WARN to NO eliminates these warning messages.
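As with the other MP_ variables, MP_WARN is set in the shell before the program runs. A minimal sh-style sketch (under csh, use setenv MP_WARN NO):

```shell
# Suppress the "greater than available cpus" warning shown above
export MP_WARN=NO
```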
NCPUS
You can use the NCPUS environment variable to set the number of processes or threads used in parallel regions. The default is to use only one process or thread, which is known as serial mode.
Note
OMP_NUM_THREADS has the same functionality as NCPUS. For historical reasons, PGI supports the environment variable NCPUS. If both OMP_NUM_THREADS and NCPUS are set, the value of OMP_NUM_THREADS takes precedence.
Warning
Setting NCPUS to a value larger than the number of physical processors or cores in your system can cause parallel programs to run very slowly.
NCPUS_MAX
You can use the NCPUS_MAX environment variable to limit the maximum number of processes or threads used in a parallel program. Attempts to dynamically set the number of processes or threads to a higher value, for example using omp_set_num_threads(), will cause the number of processes or threads to be set at the value of NCPUS_MAX rather than the value specified in the function call.
NO_STOP_MESSAGE
If the NO_STOP_MESSAGE variable exists, the execution of a plain STOP statement does not produce the message FORTRAN STOP. The default behavior of the PGI Fortran compilers is to issue this message.
PATH
The PATH variable sets the directories that are searched for commands that the user types. When using PGI products, it is important that you set your PATH to include the location of the PGI products.
Within the PVF IDE, the PATH variable can be set using the Environment and MPI Debugging properties on the “Debugging Property Page”. The PVF Command Prompt, accessible from the PVF submenus of Start | All Programs | PGI Visual Fortran, opens with the PATH variable pre-configured for use of the PGI products.
Important
If you invoke a generic Command Prompt using Start | All Programs | Accessories | Command Prompt, then the environment is not pre-configured for PGI products.
PGI
The PGI environment variable specifies the root directory where the PGI compilers and tools are installed. This variable is recognized at compile-time. If it is not set, the default value depends on your system as well as which compilers are installed:
• On Windows, the default value is C:\Program Files\PGI, where C represents the system drive. If both 32- and 64-bit compilers are installed, the 32-bit compilers are in C:\Program Files (x86)\PGI.
In most cases, if the PGI environment variable is not set, the PGI compilers and tools dynamically determine the location of this root directory based on the instance of the compiler or tool that was invoked.
PGI_CONTINUE
You set the PGI_CONTINUE variable to specify the actions to take before continuing with execution. For example, if the PGI_CONTINUE environment variable is set and then a program that is compiled with –Mchkfpstk is executed, the stack is automatically cleaned up and execution then continues. If PGI_CONTINUE is set to verbose, the stack is automatically cleaned up, a warning message is printed, and then execution continues.
Note
There is a performance penalty associated with the stack cleanup.
PGI_OBJSUFFIX
You can set the PGI_OBJSUFFIX environment variable to generate object files that have a specific suffix. For example, if you set PGI_OBJSUFFIX to .o, the object files have a suffix of .o rather than .obj.
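For instance, a build that should produce Linux-style object names could set the variable before compiling. This is a sketch in sh-style syntax (under csh, use setenv PGI_OBJSUFFIX .o):

```shell
# Object files will be named foo.o rather than foo.obj
export PGI_OBJSUFFIX=.o
```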
PGI_STACK_USAGE
(Windows only) The PGI_STACK_USAGE variable allows you to explicitly set stack properties for your program. When the user compiles a program with the –Mchkstk option and sets the PGI_STACK_USAGE environment variable to any value, the program displays the stack space allocated and used after the program exits. You might see something similar to the following message:
thread 0 stack: max 8180KB, used 48KB
This message indicates that the program used 48KB of an 8180KB allocated stack. For more information on the –Mchkstk option, refer to –Mchkstk.
PGI_TERM
The PGI_TERM environment variable controls the stack traceback and just-in-time debugging functionality. The runtime libraries use the value of PGI_TERM to determine what action to take when a program abnormally terminates.
The value of PGI_TERM is a comma-separated list of options. The commands for setting the environment variable follow.
• In csh:
% setenv PGI_TERM option[,option...]
• In bash, sh, zsh, or ksh:
$ PGI_TERM=option[,option...]
$ export PGI_TERM
• In the Windows Command Prompt:
C:\> set PGI_TERM=option[,option...]
Table 13.2 lists the supported values for option. Following the table is a complete description of each option that indicates specifically how you might apply the option. By default, all of these options are disabled.

Table 13.2. Supported PGI_TERM Values
[no]debug    Enables/disables just-in-time debugging (debugging invoked on error)
[no]trace    Enables/disables stack traceback on error
[no]signal   Enables/disables establishment of signal handlers for common signals that cause program termination
[no]abort    Enables/disables calling the system termination routine abort()
[no]debug
This enables/disables just-in-time debugging. The default is nodebug. When PGI_TERM is set to debug, the following command is invoked on error, unless you use PGI_TERM_DEBUG to override this default:
pgdbg -text -attach <pid>
<pid> is the process ID of the process being debugged.
The PGI_TERM_DEBUG environment variable may be set to override the default setting. For more information, refer to “PGI_TERM_DEBUG,” on page 145.

[no]trace
This enables/disables the stack traceback. The default is notrace.

[no]signal
This enables/disables establishing signal handlers for the most common signals that cause program termination. The default is nosignal. Setting trace and debug automatically enables signal. Specifically setting nosignal allows you to override this behavior.

[no]abort
This enables/disables calling the system termination routine abort(). The default is noabort. When noabort is in effect the process terminates by calling _exit(127).
A few runtime errors just print an error message and call exit(127), regardless of the status of PGI_TERM. These are mainly errors such as specifying an invalid environment variable value where a traceback would not be useful. The abort routine exits with the status of the exception received; for example, if the program receives an access violation abort exits with status 0xC0000005. For more information on why to use this variable, refer to “Stack Traceback and JIT Debugging,” on page 145.
PGI_TERM_DEBUG
The PGI_TERM_DEBUG variable may be set to override the default behavior when PGI_TERM is set to debug.
The value of PGI_TERM_DEBUG should be set to the command line used to invoke the program. For example:
gdb --quiet --pid %d
The first occurrence of %d in the PGI_TERM_DEBUG string is replaced by the process id. The program named in the PGI_TERM_DEBUG string must be found on the current PATH or specified with a full path name.
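Putting the two variables together, a hypothetical session that substitutes gdb for the default pgdbg invocation might look like this (sh-style syntax; gdb must be on your PATH):

```shell
# Enable JIT debugging and traceback, then name the debugger command;
# %d is replaced with the ID of the failing process at runtime
export PGI_TERM=debug,trace
export PGI_TERM_DEBUG='gdb --quiet --pid %d'
```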
STATIC_RANDOM_SEED
You can use STATIC_RANDOM_SEED to force the seed returned by the Fortran 90/95 RANDOM_SEED intrinsic to be constant. The first call to RANDOM_SEED without arguments resets the random seed to a default value, then advances the seed by a variable amount based on time. Subsequent calls to RANDOM_SEED without arguments reset the random seed to the same initial value as the first call. Unless the time is exactly the same, each time a program is run a different random number sequence is generated. Setting the environment variable STATIC_RANDOM_SEED to YES forces the seed returned by RANDOM_SEED to be constant, thereby generating the same sequence of random numbers at each execution of the program.
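To make runs repeatable during debugging, set the variable before executing the program. This sketch uses sh-style syntax (under csh, use setenv STATIC_RANDOM_SEED YES):

```shell
# Every run now produces the same random number sequence
export STATIC_RANDOM_SEED=YES
```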
TMP
You can use TMP to specify the directory to use for placement of any temporary files created during execution of the PGI compilers and tools. This variable is interchangeable with TMPDIR.
TMPDIR
You can use TMPDIR to specify the directory to use for placement of any temporary files created during execution of the PGI compilers and tools.
Stack Traceback and JIT Debugging
When a programming error results in a run-time error message or an application exception, a program will usually exit, perhaps with an error message. The PGI run-time library includes a mechanism to override this default action and instead print a stack traceback or start a debugger.
The stack traceback and just-in-time debugging functionality is controlled by an environment variable, PGI_TERM, described in “PGI_TERM,” on page 143. The run-time libraries use the value of PGI_TERM to determine what action to take when a program abnormally terminates.
When the PGI runtime library detects an error or catches a signal, it calls the routine pgi_stop_here() prior to generating a stack traceback or starting the debugger. The pgi_stop_here() routine is a convenient spot to set a breakpoint when debugging a program.
Chapter 14. Distributing Files - Deployment
Once you have successfully built, debugged and tuned your application, you may want to distribute it to users who need to run it on a variety of systems. This chapter addresses how to effectively distribute applications built using PGI compilers and tools. The application must be installed in such a way that it executes accurately on a system other than the one on which it was built, and which may be configured differently.
Deploying Applications on Windows
Windows programs may be linked statically or dynamically.
• A statically linked program is completely self-contained, created by linking to static versions of the PGI and Microsoft runtime libraries.
• A dynamically linked program depends on separate dynamically-linked libraries (DLLs) that must be installed on a system for the application to run on that system.
Although it may be simpler to install a statically linked executable, there are advantages to using the DLL versions of the runtime, including these:
• Executable binary file size is smaller.
• Multiple processes can use DLLs at once, saving system resources.
• New versions of the runtime can be installed and used by the application without rebuilding the application.
Dynamically-linked Windows programs built with PGI compilers depend on dynamic run-time library files (DLLs). These DLLs must be distributed with such programs to enable them to execute on systems where the PGI compilers are not installed. These redistributable libraries include both PGI runtime libraries and Microsoft runtime libraries.
PGI Redistributables
PGI redistributable directories contain all of the PGI Linux runtime library shared object files or Windows dynamically-linked libraries that can be re-distributed by PGI 10.0 licensees under the terms of the PGI End-user License Agreement (EULA).
Microsoft Redistributables
The PGI products on Windows include Microsoft Open Tools. The Microsoft Open Tools directory contains a subdirectory named redist. PGI licensees may redistribute the files contained in this directory in accordance with the terms of the PGI End-User License Agreement.
Microsoft supplies installation packages, vcredist_x86.exe and vcredist_x64.exe, containing these runtime files. These files are available in the redist directory.
Code Generation and Processor Architecture
The PGI compilers can generate much more efficient code if they know the specific x86 processor architecture on which the program will run. When preparing to deploy your application, you should determine whether you want the application to run on the widest possible set of x86 processors, or if you want to restrict the application to run on a specific processor or set of processors. The restricted approach allows you to optimize performance for that set of processors.
Different processors have differences, some subtle, in hardware features, such as instruction sets and cache size. The compilers make architecture-specific decisions about such things as instruction selection, instruction scheduling, and vectorization, all of which can have a profound effect on the performance of your application.
Processor-specific code generation is controlled by the -tp option, described in “–tp <target>[,target...],” on page 206. When an application is compiled without any -tp options, the compiler generates code for the type of processor on which the compiler is run.
Generating Generic x86 Code
To generate generic x86 code, use one of the following forms of the -tp option on your command line:
-tp px    ! generate code for any x86 cpu type
-tp p6    ! generate code for Pentium 2 or greater
While both of these examples are good choices for portable execution, most users have Pentium 2 or greater CPUs.
Generating Code for a Specific Processor
You can use the -tp option to request that the compiler generate code optimized for a specific processor. The PGI Release Notes contains a list of supported processors.
Generating One Executable for Multiple Types of Processors
PGI unified binaries provide a low-overhead method for a single program to run well on a number of hardware platforms. All 64-bit PGI compilers for Windows can produce PGI Unified Binary programs that contain code streams fully optimized and supported for both AMD64 and Intel EM64T processors using the -tp target option.
You specify this option using PVF's Fortran | Target Processors property page. For more information on this property page, refer to “Fortran | Target Processors,” on page 344.
The compilers generate and combine multiple binary code streams into one executable, where each stream is optimized for a specific platform. At runtime, this one executable senses the environment and dynamically selects the appropriate code stream. Executable size is automatically controlled via unified binary culling. Only those functions and subroutines where the target affects the generated code have unique binary images, resulting in a code-size savings of 10-90% compared to generating full copies of code for each target. Programs can use PGI Unified Binary technology even if all of the object files and libraries are not compiled as unified binaries. Like any other object file, you can use PGI Unified Binary object files to create programs or libraries. No special start up code is needed; support is linked in from the PGI libraries. The -Mpfi option disables generation of PGI Unified Binary object files. Instead, the default target auto-detect rules for the host are used to select the target processor.
PGI Unified Binary Command-line Switches
The PGI Unified Binary command-line switch is an extension of the target processor switch, -tp, which may be applied to individual files during compilation using the PVF property pages described in Chapter 23, “PVF Properties,” on page 313.
The target processor switch, -tp, accepts a comma-separated list of 64-bit targets and generates code optimized for each listed target. The following example generates optimized code for three targets:
-tp k8-64,p7-64,core2-64
To use multiple -tp options within a PVF project, specify the comma-separated -tp list on both the Fortran | Command Line and the Linker | Command Line property pages, described in Chapter 23, “PVF Properties,” on page 313. A special target switch, -tp x64, is the same as -tp k8-64,p7-64.
PGI Unified Binary Directives
PGI Unified Binary directives may be applied to functions, subroutines, or whole files. The directives and pragmas cause the compiler to generate PGI Unified Binary code optimized for one or more targets. No special command line options are needed for these pragmas and directives to take effect. The syntax of the Fortran directive is this:
pgi$[g|r| ] pgi tp [target]...
where the scope is g (global), r (routine) or blank. The default is r, routine.
For example, the following syntax indicates that the whole file, represented by g, should be optimized for both k8_64 and p7_64.
pgi$g pgi tp k8_64 p7_64
Chapter 15. Inter-language Calling
This chapter describes inter-language calling conventions for C, C++, and Fortran programs using the PGI compilers. The following sections describe how to call a Fortran function or subroutine from a C or C++ program and how to call a C or C++ function from a Fortran program. For information on calling assembly language programs, refer to Chapter 22, “Run-time Environment”.
This chapter provides examples that use the following options related to inter-language calling. For more information on these options, refer to Chapter 18, “Command-Line Options Reference,” on page 173.
-Mnomain
-Miface
-Mupcase
Overview of Calling Conventions
This chapter includes information on the following topics:
• Functions and subroutines in Fortran, C, and C++
• Naming and case conversion conventions
• Compatible data types
• Argument passing and special return values
• Arrays and indexes
• Win32 calling conventions
The sections beginning with “Inter-language Calling Considerations,” on page 152 describe how to perform inter-language calling using the Win64 convention. Default Fortran calling conventions for Win32 differ, although Win32 programs compiled using the -Miface=unix Fortran command-line option use the Win64 convention rather than the default Win32 conventions. All information in those sections pertaining to compatibility of arguments applies to Win32 as well. For details on the symbol name and argument passing conventions used on Win32 platforms, refer to “Win32 Calling Conventions,” on page 160.
The concepts in this chapter apply equally to using inter-language calling in PVF. While all of the examples given are shown as being compiled at the command line, they can also be used within PVF. The primary difference for you to note is this: Visual Studio projects are limited to a single language. To mix languages, create a multi-project solution.
Tip For inter-language examples that are specific to PVF, look in the directory: $(VSInstallDir)\PGI Visual Fortran\Samples\interlanguage\
Inter-language Calling Considerations In general, when argument data types and function return values agree, you can call a C or C++ function from Fortran as well as call a Fortran function from C or C++. When data types for arguments do not agree, you may need to develop custom mechanisms to handle them. For example, the Fortran COMPLEX type has a matching type in C99 but does not have a matching type in C90; however, it is still possible to provide inter-language calls but there are no general calling conventions for such cases.
Note • If a C++ function contains objects with constructors and destructors, calling such a function from Fortran is not possible unless the initialization in the main program is performed from a C++ program in which constructors and destructors are properly initialized. • C++ member functions cannot be declared extern, since their names will always be mangled. Therefore, C++ member functions cannot be called from C or Fortran.
Functions and Subroutines Fortran, C, and C++ define functions and subroutines differently. For a Fortran program calling a C or C++ function, observe the following return value convention: • When a C or C++ function returns a value, call it from Fortran as a function. • When a C or C++ function does not return a value, call it as a subroutine. For a C/C++ program calling a Fortran function, the call should return a similar type. Table 15.1, “Fortran and C/C++ Data Type Compatibility,” on page 153 lists compatible types. If the call is to a Fortran subroutine, a Fortran CHARACTER function, or a Fortran COMPLEX function, call it from C/C++ as a function that returns void. The exception to this convention is when a Fortran subroutine has alternate returns; call such a subroutine from C/C++ as a function returning int whose value is the value of the integer expression specified in the alternate RETURN statement.
Upper and Lower Case Conventions, Underscores By default on Linux, Win64, and OSX systems, all Fortran symbol names are converted to lower case. C and C++ are case sensitive, so upper-case function names stay upper-case. When you use inter-language calling, you can either name your C/C++ functions with lower-case names, or invoke the Fortran compiler command with the option –Mupcase, in which case it will not convert symbol names to lower-case. When programs are compiled using one of the PGI Fortran compilers on Linux, Win64, and OSX systems, an underscore is appended to Fortran global names (names of functions, subroutines, and common blocks). This mechanism distinguishes the Fortran name space from the C/C++ name space. Use these naming conventions:
• If you call a C/C++ function from Fortran, you should rename the C/C++ function by appending an underscore. • If you call a Fortran function from C/C++, you should append an underscore to the Fortran function name in the calling program.
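As a minimal sketch of these conventions (assuming the default behavior described above, without –Mupcase; the function name add_pair is a hypothetical example, not part of any PGI library), a C function intended to be referenced from Fortran as the integer function add_pair could be written:

```c
#include <assert.h>

/* Callable from Fortran as:  i = add_pair(n, m)
   The symbol is lower-case with a trailing underscore, matching the
   Fortran name mangling described above; the arguments arrive as
   pointers because Fortran passes by reference. */
int add_pair_(int *a, int *b)
{
    return *a + *b;
}
```

On the Fortran side the function would simply be declared `integer add_pair` and called without the underscore; the compiler appends it when resolving the symbol.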
Compatible Data Types Table 15.1 shows compatible data types between Fortran and C/C++. Table 15.2, “Fortran and C/C++ Representation of the COMPLEX Type,” on page 153 shows how the Fortran COMPLEX type may be represented in C/C++.
Tip
If you can make your function/subroutine parameters as well as your return values match types, you should be able to use inter-language calling.

Table 15.1. Fortran and C/C++ Data Type Compatibility

Fortran Type (lower case)   C/C++ Type       Size (bytes)
character x                 char x           1
character*n x               char x[n]        n
real x                      float x          4
real*4 x                    float x          4
real*8 x                    double x         8
double precision x          double x         8
integer x                   int x            4
integer*1 x                 signed char x    1
integer*2 x                 short x          2
integer*4 x                 int x            4
integer*8 x                 long long x      8
logical x                   int x            4
logical*1 x                 char x           1
logical*2 x                 short x          2
logical*4 x                 int x            4
logical*8 x                 long x           8
Table 15.2. Fortran and C/C++ Representation of the COMPLEX Type

Fortran Type (lower case)   C/C++ Type                   Size (bytes)
complex x                   struct {float r,i;} x;       8
                            float complex x;             8
complex*8 x                 struct {float r,i;} x;       8
                            float complex x;             8
double complex x            struct {double dr,di;} x;    16
                            double complex x;            16
complex*16 x                struct {double dr,di;} x;    16
                            double complex x;            16
Note For C/C++, the complex type implies C99 or later.
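With a C99 compiler, the struct form and the _Complex form in Table 15.2 describe the same memory layout. This small sketch (the struct name fcomplex is ours, purely illustrative) checks the size equivalence:

```c
#include <complex.h>

/* Both C representations of Fortran COMPLEX occupy the same storage:
   two consecutive floats (8 bytes on typical targets). */
struct fcomplex { float r, i; };

int sizes_match(void)
{
    return sizeof(float complex) == sizeof(struct fcomplex)
        && sizeof(float complex) == 2 * sizeof(float);
}
```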
Fortran Named Common Blocks
A named Fortran common block can be represented in C/C++ by a structure whose members correspond to the members of the common block. The name of the structure in C/C++ must have the added underscore. For example, here is a Fortran common block:

      INTEGER I
      COMPLEX C
      DOUBLE COMPLEX CD
      DOUBLE PRECISION D
      COMMON /COM/ i, c, cd, d
This Fortran common block is represented in C with the following equivalent:

extern struct {
  int i;
  struct {float real, imag;} c;
  struct {double real, imag;} cd;
  double d;
} com_;
This same Fortran common block is represented in C++ with the following equivalent:

extern "C" struct {
  int i;
  struct {float real, imag;} c;
  struct {double real, imag;} cd;
  double d;
} com_;
Tip For global or external data sharing, extern "C" is not required.
Argument Passing and Return Values
In Fortran, arguments are passed by reference; that is, the address of the argument is passed, rather than the argument itself. In C/C++, arguments are passed by value, except for strings and arrays, which are passed by reference. Due to the flexibility provided in C/C++, you can work around these differences. Solving the parameter passing differences generally involves intelligent use of the & and * operators in argument passing when C/C++ calls Fortran and in argument declarations when Fortran calls C/C++.
For strings declared in Fortran as type CHARACTER, an argument representing the length of the string is also passed to a calling function.
Passing by Value (%VAL) When passing parameters from a Fortran subprogram to a C/C++ function, it is possible to pass by value using the %VAL function. If you enclose a Fortran parameter with %VAL(), the parameter is passed by value. For example, the following call passes the integer i and the logical bvar by value. integer*1 i logical*1 bvar call cvalue (%VAL(i), %VAL(bvar))
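On the C side, the called function must then declare plain value parameters rather than pointers. The following sketch shows a possible C implementation matching the cvalue call above; the body, which merely records the received values, is illustrative, not part of any PGI interface:

```c
/* Receives the arguments from
       call cvalue (%VAL(i), %VAL(bvar))
   Because the Fortran caller wraps each argument in %VAL(), the values
   themselves arrive, not addresses; integer*1 and logical*1 both map
   to char. The statics just stash what was received, for illustration. */
static int last_i, last_b;

void cvalue_(char i, char bvar)
{
    last_i = i;
    last_b = bvar;
}
```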
Character Return Values
“Functions and Subroutines,” on page 152 describes the general rules for return values for C/C++ and Fortran inter-language calling. There is a special return value to consider. When a Fortran function returns a character, two arguments need to be added at the beginning of the C/C++ calling function’s argument list:
• The address of the return character or characters
• The length of the return character
Example 15.1, “Character Return Parameters” illustrates the extra parameters, tmp and 10, supplied by the caller:

Example 15.1. Character Return Parameters

! Fortran function returns a character
      CHARACTER*(*) FUNCTION CHF(C1,I)
      CHARACTER*(*) C1
      INTEGER I
      END

/* C declaration of Fortran function */
extern void chf_();
char tmp[10];
char c1[9];
int i;
chf_(tmp, 10, c1, &i, 9);
If the Fortran function is declared to return a character value of constant length, for example CHARACTER*4 FUNCTION CHF(), the second extra parameter representing the length must still be supplied, but is not used.
Note The value of the character function is not automatically NULL-terminated.
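Because the returned characters are not NUL-terminated, a C caller that wants to use the result as an ordinary C string must add the terminator itself, typically trimming trailing blanks as well. A sketch of such a helper (the function name is ours, not part of the PGI interface):

```c
#include <string.h>

/* Copy a fixed-length Fortran CHARACTER result into a C string:
   drop trailing blanks and append the terminating NUL that the
   Fortran side does not provide. */
void fchar_to_cstr(const char *buf, int len, char *out)
{
    int n = len;
    while (n > 0 && buf[n - 1] == ' ')
        n--;
    memcpy(out, buf, (size_t)n);
    out[n] = '\0';
}
```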
Complex Return Values
When a Fortran function returns a complex value, an argument needs to be added at the beginning of the C/C++ calling function’s argument list; this argument is the address of the complex return value. Example 15.2, “COMPLEX Return Values” illustrates the extra parameter, cplx, supplied by the caller.
Example 15.2. COMPLEX Return Values

      COMPLEX FUNCTION CF(C, I)
      INTEGER I
      . . .
      END

extern void cf_();
typedef struct {float real, imag;} cplx;
cplx c1;
int i;
cf_(&c1, &i);
Array Indices
C/C++ arrays and Fortran arrays use different default initial array index values. By default, C/C++ arrays start at 0 and Fortran arrays start at 1. If you adjust your array comparisons so that a Fortran second element is compared to a C/C++ first element, and adjust similarly for other elements, you should not have problems working with this difference. If this is not satisfactory, you can declare your Fortran arrays to start at zero.
Another difference between Fortran and C/C++ arrays is the storage method used. Fortran uses column-major order and C/C++ uses row-major order. For one-dimensional arrays, this poses no problems. For two-dimensional arrays, where there are an equal number of rows and columns, row and column indexes can simply be reversed. For arrays other than single-dimensional arrays and square two-dimensional arrays, inter-language function mixing is not recommended.
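A sketch of both differences at once, with illustrative values: a Fortran array A(2,3), filled column-major with A(i,j) = 10*i + j, occupies the same memory as a C array declared with the dimensions reversed, and Fortran element A(i,j) is reached in C as a[j-1][i-1]:

```c
/* Memory in Fortran storage order for A(2,3):
   A(1,1), A(2,1), A(1,2), A(2,2), A(1,3), A(2,3) */
static int a[3][2] = {
    {11, 21},   /* column 1: A(1,1), A(2,1) */
    {12, 22},   /* column 2: A(1,2), A(2,2) */
    {13, 23}    /* column 3: A(1,3), A(2,3) */
};

/* Fetch Fortran-style element A(i,j) from the C array:
   reverse the subscripts and shift to zero-based indexing. */
int fortran_elem(int i, int j)
{
    return a[j - 1][i - 1];
}
```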
Examples This section contains examples that illustrate inter-language calling.
Example - Fortran Calling C

Note
There are other solutions to calling C from Fortran than the one presented in this section. For example, you can use the iso_c_binding intrinsic module, which PGI supports. For more information on this module and for examples of how to use it, search the web using the keyword iso_c_binding.

Example 15.4, “C function f2c_func_” shows a C function that is called by the Fortran main program shown in Example 15.3, “Fortran Main Program f2c_main.f”. Notice that each argument is defined as a pointer, since Fortran passes by reference. Also notice that the C function name uses all lower-case and a trailing "_".

Example 15.3. Fortran Main Program f2c_main.f

      logical*1 bool1
      character letter1
      integer*4 numint1, numint2
      real numfloat1
      double precision numdoub1
      integer*2 numshor1
      external f2c_func
      call f2c_func (bool1, letter1, numint1, numint2,
     +               numfloat1, numdoub1, numshor1)
      write( *, "(L2, A2, I5, I5, F6.1, F6.1, I5)")
     +      bool1, letter1, numint1, numint2, numfloat1, numdoub1, numshor1
      end
Example 15.4. C function f2c_func_

#define TRUE 0xff
#define FALSE 0
void f2c_func_( bool1, letter1, numint1, numint2, numfloat1,
                numdoub1, numshor1, len_letter1)
 char *bool1, *letter1;
 int *numint1, *numint2;
 float *numfloat1;
 double *numdoub1;
 short *numshor1;
 int len_letter1;
{
 *bool1 = TRUE;
 *letter1 = 'v';
 *numint1 = 11;
 *numint2 = -44;
 *numfloat1 = 39.6;
 *numdoub1 = 39.2;
 *numshor1 = 981;
}
Compile and execute the program f2c_main.f with the call to f2c_func_ using the following command lines:

$ pgcc -c f2c_func.c
$ pgfortran f2c_func.o f2c_main.f
Executing the f2c_main.exe file should produce the following output: T v 11 -44 39.6 39.2 981
Example - C Calling Fortran
Example 15.5, “C Main Program c2f_main.c” shows a C main program that calls the Fortran subroutine shown in Example 15.6, “Fortran Subroutine c2f_sub.f”. Notice that each call uses the & operator to pass by reference. Also notice that the call to the Fortran subroutine uses all lower-case and a trailing "_".

Example 15.5. C Main Program c2f_main.c

void main ()
{
 char bool1, letter1;
 int numint1, numint2;
 float numfloat1;
 double numdoub1;
 short numshor1;
 extern void c2f_func_();
 c2f_func_(&bool1,&letter1,&numint1,&numint2,&numfloat1,
           &numdoub1,&numshor1, 1);
 printf(" %s %c %d %d %3.1f %.0f %d\n",
        bool1?"TRUE":"FALSE", letter1, numint1, numint2,
        numfloat1, numdoub1, numshor1);
}
Example 15.6. Fortran Subroutine c2f_sub.f

      subroutine c2f_func ( bool1, letter1, numint1, numint2,
     +                      numfloat1, numdoub1, numshor1)
      logical*1 bool1
      character letter1
      integer numint1, numint2
      double precision numdoub1
      real numfloat1
      integer*2 numshor1
      bool1 = .true.
      letter1 = "v"
      numint1 = 11
      numint2 = -44
      numdoub1 = 902
      numfloat1 = 39.6
      numshor1 = 299
      return
      end
To compile this Fortran subroutine and C program, use the following commands:

$ pgcc -c c2f_main.c
$ pgfortran -Mnomain c2f_main.o c2f_sub.f
Executing the resulting c2f_main.exe file should produce the following output:
TRUE v 11 -44 39.6 902 299
Example - Fortran Calling C++
The Fortran main program shown in Example 15.7, “Fortran Main Program f2cp_main.f calling a C++ function” calls the C++ function shown in Example 15.8, “C++ function f2cp_func.C”. Notice:
• Each argument is defined as a pointer in the C++ function, since Fortran passes by reference.
• The C++ function name uses all lower-case and a trailing "_":

Example 15.7. Fortran Main Program f2cp_main.f calling a C++ function

      logical*1 bool1
      character letter1
      integer*4 numint1, numint2
      real numfloat1
      double precision numdoub1
      integer*2 numshor1
      external f2cp_func
      call f2cp_func (bool1, letter1, numint1,
     +                numint2, numfloat1, numdoub1, numshor1)
      write( *, "(L2, A2, I5, I5, F6.1, F6.1, I5)")
     +      bool1, letter1, numint1, numint2, numfloat1,
     +      numdoub1, numshor1
      end
Example 15.8. C++ function f2cp_func.C

#define TRUE 0xff
#define FALSE 0
extern "C"
{
 extern void f2cp_func_ (
  char *bool1, char *letter1,
  int *numint1, int *numint2,
  float *numfloat1, double *numdoub1,
  short *numshort1, int len_letter1)
 {
  *bool1 = TRUE;
  *letter1 = 'v';
  *numint1 = 11;
  *numint2 = -44;
  *numfloat1 = 39.6;
  *numdoub1 = 39.2;
  *numshort1 = 981;
 }
}
Assuming the Fortran program is in the file f2cp_main.f and the C++ function is in the file f2cp_func.C, create an executable using the following command lines:

$ pgcpp -c f2cp_func.C
$ pgfortran f2cp_func.o f2cp_main.f -pgcpplibs
Executing the f2cp_main.exe file should produce the following output:
T v 11 -44 39.6 39.2 981
Example - C++ Calling Fortran
Example 15.10, “Fortran Subroutine cp2f_func.f” shows a Fortran subroutine called by the C++ main program shown in Example 15.9, “C++ main program cp2f_main.C”. Notice that each call uses the & operator to pass by reference. Also notice that the call to the Fortran subroutine uses all lower-case and a trailing "_":

Example 15.9. C++ main program cp2f_main.C

#include <iostream>
extern "C"
{
 extern void cp2f_func_(char *,char *,int *,int *,
                        float *,double *,short *);
}
main ()
{
 char bool1, letter1;
 int numint1, numint2;
 float numfloat1;
 double numdoub1;
 short numshor1;
 cp2f_func_(&bool1,&letter1,&numint1,&numint2,&numfloat1,
            &numdoub1,&numshor1);
 cout << " bool1 = "; bool1 ? cout << "TRUE " : cout << "FALSE ";
 cout <
Defines a preprocessor macro.
–dryrun
Shows but does not execute driver commands.
–drystdinc
Displays the standard include directories and then exits the compiler.
–E
Stops after the preprocessing phase and displays the preprocessed file on the standard output.
–F
Stops after the preprocessing phase and saves the preprocessed file in filename.f (this option is only valid for the PGI Fortran compilers).
--flagcheck
Simply return zero status if flags are correct.
–flags
Display valid driver options.
–I
Adds a directory to the search path for #include files.
–i2, –i4 and –i8
–i2: Treat INTEGER variables as 2 bytes. –i4: Treat INTEGER variables as 4 bytes. –i8: Treat INTEGER and LOGICAL variables as 8 bytes and use 64 bits for INTEGER*8 operations.
–K
Requests special compilation semantics with regard to conformance to IEEE 754.
--keeplnk
If the compiler generates a temporary indirect file for a long linker command, preserves the temporary file instead of deleting it.
–L
Specifies a library directory.
–l
Loads a library.
–m
Displays a link map on the standard output.
–M
Selects variations for code generation and optimization.
–module
Save/search for module files in directory <dir>.
Chapter 18. Command-Line Options Reference
–mp[=all, align,bind, [no]numa]
Interpret and process user-inserted shared-memory parallel programming directives (see Chapters 5 and 6).
–noswitcherror
Ignore unknown command line switches after printing a warning message.
–o
Names the object file.
–pc
(–tp px/p5/p6/piii targets only) Set precision globally for x87 floating-point calculations; must be used when compiling the main program. The precision may be one of 32, 64 or 80.
–pgf77libs
Append PGF77 runtime libraries to the link line.
–pgf90libs
Append PGF90/PGF95/PGFORTRAN runtime libraries to the link line.
–r4 and –r8
–r4: Interpret DOUBLE PRECISION variables as REAL. –r8: Interpret REAL variables as DOUBLE PRECISION.
–rc file
Specifies the name of the driver's startup file.
–S
Stops after the compiling phase and saves the assembly–language code in filename.s.
–show
Display driver's configuration parameters after startup.
–silent
Do not print warning messages.
–time
Print execution times for the various compilation steps.
–u
Initializes the symbol table with <symbol>, which is undefined for the linker. An undefined symbol triggers loading of the first member of an archive library.
–U
Undefine a preprocessor macro.
–V[release_number]
Displays the version messages and other information, or allows invocation of a version of the compiler other than the default.
–v
Displays the compiler, assembler, and linker phase invocations.
–W
Passes arguments to a specific phase.
–w
Do not print warning messages.
PGI Debug-Related Compiler Options The options included in the following table are the ones you typically use when you are debugging your program or application. Table 18.2. PGI Debug-Related Compiler Options Option
Description
–C
(Fortran only) Generates code to check array bounds.
–c
Instrument the generated executable to perform array bounds checking at runtime.
PGI Compiler Option Summary
–E
Stops after the preprocessing phase and displays the preprocessed file on the standard output.
--flagcheck
Simply return zero status if flags are correct.
–flags
Display valid driver options.
–g
Includes debugging information in the object module.
–gopt
Includes debugging information in the object module, but forces assembly code generation identical to that obtained when –gopt is not present on the command line.
–K
Requests special compilation semantics with regard to conformance to IEEE 754.
--keeplnk
If the compiler generates a temporary indirect file for a long linker command, preserves the temporary file instead of deleting it.
–M
Selects variations for code generation and optimization.
–pc
(–tp px/p5/p6/piii targets only) Set precision globally for x87 floating-point calculations; must be used when compiling the main program. The precision may be one of 32, 64 or 80.
–[no]traceback
Adds debug information for runtime traceback for use with the environment variable PGI_TERM.
PGI Optimization-Related Compiler Options The options included in the following table are the ones you typically use when you are optimizing your program or application code. Table 18.3. Optimization-Related PGI Compiler Options
Option
Description
–fast
Generally optimal set of flags for targets that support SSE capability.
–fastsse
Generally optimal set of flags for targets that include SSE/SSE2 capability.
–M
Selects variations for code generation and optimization.
–mp[=all, align,bind, [no]numa]
Interpret and process user-inserted shared-memory parallel programming directives (see Chapters 5 and 6).
–O
Specifies code optimization level where is 0, 1, 2, 3, or 4.
–pc
(–tp px/p5/p6/piii targets only) Set precision globally for x87 floating-point calculations; must be used when compiling the main program. The precision may be one of 32, 64 or 80.
PGI Linking and Runtime-Related Compiler Options The options included in the following table are the ones you typically use to define parameters related to linking and running your program or application code. Table 18.4. Linking and Runtime-Related PGI Compiler Options Option
Description
–Bdynamic
Compiles for and links to the DLL version of the PGI runtime libraries.
–Bstatic_pgi
Compiles for and links to the static version of the PGI runtime libraries.
–byteswapio
(Fortran only) Swap bytes from big-endian to little-endian or vice versa on input/output of unformatted data
–i2
Treat INTEGER variables as 2 bytes.
–i4
Treat INTEGER variables as 4 bytes.
–i8
Treat INTEGER and LOGICAL variables as 8 bytes and use 64-bits for INTEGER*8 operations.
–K
Requests special compilation semantics with regard to conformance to IEEE 754.
–M
Selects variations for code generation and optimization.
Generic PGI Compiler Options The following descriptions are for all the PGI options. For easy reference, the options are arranged in alphabetical order. For a list of options by tasks, refer to the tables in the beginning of this chapter as well as to Chapter 6, “Using Command Line Options,” on page 51.
–# Displays the invocations of the compiler, assembler and linker. Default: The compiler does not display individual phase invocations. Usage: The following command-line requests verbose invocation information. $ pgfortran -# prog.f
Description: The –# option displays the invocations of the compiler, assembler and linker. These invocations are command-lines created by the driver from your command-line input and the default value. Related options: –Minfo, –V, –v.
–### Displays the invocations of the compiler, assembler and linker, but does not execute them. Default: The compiler does not display individual phase invocations.
Usage: The following command-line requests verbose invocation information. $ pgfortran -### myprog.f
Description: Use the –### option to display the invocations of the compiler, assembler and linker but not to execute them. These invocations are command lines created by the compiler driver from the rc files and the command-line options. Related options: –#, –dryrun, –Minfo, –V
–Bdynamic Compiles for and links to the DLL version of the PGI runtime libraries. Default: The compiler uses static libraries. Usage: You can create the DLL obj1.dll and its import library obj1.lib using the following series of commands:
% pgfortran -Bdynamic -c object1.f
% pgfortran -Mmakedll object1.obj -o obj1.dll
Then compile the main program using this command:
% pgfortran -Bdynamic prog.f obj1.lib
For a complete example, refer to Example 12.1, “Build a DLL: Fortran,” on page 133. Description: Use this option to compile for and link to the DLL version of the PGI runtime libraries. This flag is required when linking with any DLL built by the PGI compilers. This flag corresponds to the /MD flag used by Microsoft’s cl compilers.
Note On Windows, –Bdynamic must be used for both compiling and linking. When you use the PGI compiler flag –Bdynamic to create an executable that links to the DLL form of the runtime, the executable built is smaller than one built without –Bdynamic. The PGI runtime DLLs, however, must be available on the system where the executable is run. The –Bdynamic flag must be used when an executable is linked against a DLL built by the PGI compilers.
Note C++ on Windows does not support –Bdynamic. Related options:–Bstatic, –Mmakedll
–Bstatic Compiles for and links to the static version of the PGI runtime libraries. Default: The compiler uses static libraries. Usage: The following command line explicitly compiles for and links to the static version of the PGI runtime libraries:
% pgfortran -Bstatic -c object1.f
Description: You can use this option to explicitly compile for and link to the static version of the PGI runtime libraries.
Note On Windows, –Bstatic must be used for both compiling and linking. For more information on using static libraries on Windows, refer to “Creating and Using Static Libraries on Windows,” on page 130. Related options: –Bdynamic, –Bstatic_pgi, –Mdll
–Bstatic_pgi Linux only. Compiles for and links to the static version of the PGI runtime libraries. Implies –Mnorpath. Default: The compiler uses static libraries. Usage: The following command line explicitly compiles for and links to the static version of the PGI runtime libraries:
% pgfortran -Bstatic_pgi -c object1.f
Description: You can use this option to explicitly compile for and link to the static version of the PGI runtime libraries.
Note On Linux, –Bstatic_pgi results in code that runs on most Linux systems without requiring a Portability package. For more information on using static libraries on Linux, refer to “Creating and Using Static Libraries on Windows,” on page 130. Related options: –Bdynamic, –Bstatic, –Mdll
–byteswapio Swaps the byte-order of data in unformatted Fortran data files on input/output. Default: The compiler does not byte-swap data on input/output. Usage: The following command-line requests that byte-swapping be performed on input/output. $ pgfortran -byteswapio myprog.f
Description: Use the –byteswapio option to swap the byte-order of data in unformatted Fortran data files on input/output. When this option is used, the order of bytes is swapped in both the data and record control words; the latter occurs in unformatted sequential files. You can use this option to convert big-endian format data files produced by most RISC workstations and high-end servers to the little-endian format used on x86 or x64 systems on the fly during file reads/writes.
This option assumes that the record layouts of unformatted sequential access and direct access files are the same on the systems. It further assumes that the IEEE representation is used for floating-point numbers. In particular, the format of unformatted data files produced by PGI Fortran compilers is identical to the format used on Sun and SGI workstations; this format allows you to read and write unformatted Fortran data files produced on those platforms from a program compiled for an x86 or x64 platform using the –byteswapio option. Related options: None.
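Conceptually, the per-item transformation applied to each 4-byte datum (and to the record control words) is a straight byte reversal, as in this sketch:

```c
#include <stdint.h>

/* Reverse the byte order of one 32-bit word: the per-item operation
   that -byteswapio performs on 4-byte data during reads and writes. */
uint32_t swap32(uint32_t v)
{
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}
```

Applying the swap twice returns the original value, which is why the same option works for both reading foreign files and writing them.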
–C Enables array bounds checking. This option only applies to the PGI Fortran compilers. Default: The compiler does not enable array bounds checking. Usage: In this example, the compiler instruments the executable produced from myprog.f to perform array bounds checking at runtime: $ pgfortran -C myprog.f
Description: Use this option to enable array bounds checking. If an array is an assumed size array, the bounds checking only applies to the lower bound. If an array bounds violation occurs during execution, an error message describing the error is printed and the program terminates. The text of the error message includes the name of the array, the location where the error occurred (the source file and the line number in the source), and information about the out of bounds subscript (its value, its lower and upper bounds, and its dimension). Related options: –M[no]bounds.
–c Halts the compilation process after the assembling phase and writes the object code to a file. Default: The compiler produces an executable file (does not use the –c option). Usage: In this example, the compiler produces the object file myprog.obj in the current directory. $ pgfortran -c myprog.f
Description: Use the –c option to halt the compilation process after the assembling phase and write the object code to a file. If the input file is filename.f, the output file is filename.obj. Related options: –E, –Mkeepasm, –o, and –S.
–D Creates a preprocessor macro with a given value.
Note You can use the –D option more than once on a compiler command line. The number of active macro definitions is limited only by available memory.
Syntax: -Dname[=value]
Where name is the symbolic name and value is either an integer value or a character string. Default: If you define a macro name without specifying a value, the preprocessor assigns the string 1 to the macro name. Usage: In the following example, the macro PATHLENGTH has the value 256 until a subsequent compilation. If the –D option is not used, PATHLENGTH is set to 128. $ pgfortran -DPATHLENGTH=256 myprog.F
The source text in myprog.F is this:

#ifndef PATHLENGTH
#define PATHLENGTH 128
#endif
      SUBROUTINE SUB
      CHARACTER*PATHLENGTH path
      ...
      END
Description: Use the –D option to create a preprocessor macro with a given value. The value must be either an integer or a character string. You can use macros with conditional compilation to select source text during preprocessing. A macro defined in the compiler invocation remains in effect for each module on the command line, unless you remove the macro with an #undef preprocessor directive or with the –U option. The compiler processes all of the –U options in a command line after processing the –D options. To set this option in PVF, use the Fortran | Preprocessor | Preprocessor Definitions property, described in “Preprocessor Definitions,” on page 336. Related options: –U
–dryrun Displays the invocations of the compiler, assembler, and linker but does not execute them. Default: The compiler does not display individual phase invocations. Usage: The following command line requests verbose invocation information. $ pgfortran -dryrun myprog.f
Description: Use the –dryrun option to display the invocations of the compiler, assembler, and linker but not have them executed. These invocations are command lines created by the compiler driver from the rc files and the command-line supplied with –dryrun. Related options: –Minfo, –V, –###
–drystdinc Displays the standard include directories and then exits the compiler.
Default: The compiler does not display standard include directories. Usage: The following command line requests a display of the standard include directories. $ pgfortran -drystdinc myprog.f
Description: Use the –drystdinc option to display the standard include directories and then exit the compiler. Related options:None.
–E Halts the compilation process after the preprocessing phase and displays the preprocessed output on the standard output. Default: The compiler produces an executable file. Usage: In the following example the compiler displays the preprocessed myprog.f on the standard output. $ pgfortran -E myprog.f
Description: Use the –E option to halt the compilation process after the preprocessing phase and display the preprocessed output on the standard output. Related options: –C, –c, –Mkeepasm, –o, –F, –S.
–F Stops compilation after the preprocessing phase. Default: The compiler produces an executable file. Usage: In the following example the compiler produces the preprocessed file myprog.f in the current directory. $ pgfortran -F myprog.F
Description: Use the –F option to halt the compilation process after preprocessing and write the preprocessed output to a file. If the input file is filename.F, then the output file is filename.f. Related options: –c,–E, –Mkeepasm, –o, –S
–fast Enables vectorization with SSE instructions, cache alignment, and flushz for 64-bit targets. Default: The compiler enables vectorization with SSE instructions, cache alignment, and flushz. Usage: In the following example the compiler produces vector SSE code when targeting a 64-bit machine. $ pgfortran -fast vadd.f95
Description: When you use this option, a generally optimal set of options is chosen for targets that support SSE capability. In addition, the appropriate –tp option is automatically included to enable generation of code
Chapter 18. Command-Line Options Reference
optimized for the type of system on which compilation is performed. This option enables vectorization with SSE instructions, cache alignment, and flushz.
Note Auto-selection of the appropriate –tp option means that programs built using the –fastsse option on a given system are not necessarily backward-compatible with older systems.
Note C/C++ compilers enable –Mautoinline with –fast. To set this option in PVF, use the Fortran | General | Optimization property, described in “Optimization,” on page 333. Related options: –O, –Munroll, –Mnoframe, –Mscalarsse, –Mvect, –Mcache_align, –tp, -M[no]autoinline
–fastsse Synonymous with –fast.
--flagcheck Causes the compiler to check that flags are correct, then exit without any compilation occurring. Default: The compiler begins compiling without first validating that flags are correct. Usage: In the following example the compiler checks that flags are correct, and then exits. $ pgfortran --flagcheck myprog.f
Description: Use this option to make the compiler check that flags are correct and then exit. If flags are all correct then the compiler returns a zero status. No compilation occurs. Related options: None
–flags Displays driver options on the standard output. Default: The compiler does not display the driver options. Usage: In the following example the user requests information about the known switches. $ pgfortran -flags
Description: Use this option to display driver options on the standard output. When you use this option with –v, in addition to the valid options, the compiler lists options that are recognized and ignored. Related options: –#, –###, –v
–g Instructs the compiler to include symbolic debugging information in the object module.
Default: The compiler does not put debugging information into the object module. Usage: In the following example, the object file myprog.obj contains symbolic debugging information. $ pgfortran -c -g myprog.f
Description: Use the –g option to instruct the compiler to include symbolic debugging information in the object module. Debuggers, such as PGDBG, require symbolic debugging information in the object module to display and manipulate program variables and source code. If you specify the –g option on the command-line, the compiler sets the optimization level to –O0 (zero), unless you specify the –O option. For more information on the interaction between the –g and –O options, see the –O entry. Symbolic debugging may give confusing results if an optimization level other than zero is selected.
Note Including symbolic debugging information increases the size of the object module. To set this option in PVF, use the Fortran | General | Debug Information Format property, described in “Debug Information Format,” on page 333. Related options: –O, –gopt
–gopt Instructs the compiler to include symbolic debugging information in the object file, and to generate optimized code identical to that generated when –g is not specified. Default: The compiler does not put debugging information into the object module. Usage: In the following example, the object file myprog.obj contains symbolic debugging information. $ pgfortran -c -gopt myprog.f
Description: Using –g alters how optimized code is generated in ways that are intended to enable or improve debugging of optimized code. The –gopt option instructs the compiler to include symbolic debugging information in the object file, and to generate optimized code identical to that generated when –g is not specified. To set this option in PVF, use the Fortran | General | Debug Information Format property described in “Debug Information Format”. Related options: –g, –M
–help Used with no other options, –help displays options recognized by the driver on the standard output. When used in combination with one or more additional options, usage information for those options is displayed to standard output. Default: The compiler does not display usage information. Usage: In the following example, usage information for –Minline is printed to standard output.
$ pgcc -help -Minline
-Minline[=lib:<extlib>|<func>|except:<func>|name:<func>|size:<n>|levels:<n>]
                     Enable function inlining
    lib:<extlib>     Use extracted functions from extlib
    <func>           Inline function func
    except:<func>    Do not inline function func
    name:<func>      Inline function func
    size:<n>         Inline only functions smaller than n
    levels:<n>       Inline n levels of functions
-Minline             Inline all functions that were extracted
In the following example, usage information for –help shows how groups of options can be listed or examined according to function.
$ pgcc -help -help
-help[=groups|asm|debug|language|linker|opt|other|overall|phase|prepro|suffix|switch|target|variable]
Description: Use the –help option to obtain information about available options and their syntax. You can use –help in one of three ways: • Use –help with no parameters to obtain a list of all the available options with a brief one-line description of each. • Add a parameter to –help to restrict the output to information about a specific option. The syntax for this usage is this: -help <command line option>
• Add a parameter to –help to restrict the output to a specific set of options or to a building process. The syntax for this usage is this: -help=<subgroup>
The following table lists and describes the subgroups available with –help.

Table 18.5. Subgroups for –help Option

–help=asm       A list of options specific to the assembly phase.
–help=debug     A list of options related to debug information generation.
–help=groups    A list of available switch classifications.
–help=language  A list of language-specific options.
–help=linker    A list of options specific to the link phase.
–help=opt       A list of options specific to the optimization phase.
–help=other     A list of other options, such as ANSI conformance and pointer aliasing for C.
–help=overall   A list of options generic to any PGI compiler.
–help=phase     A list of build process phases and the compilers to which they apply.
–help=prepro    A list of options specific to the preprocessing phase.
–help=suffix    A list of known file suffixes and the phases to which they apply.
–help=switch    A list of all known options; this is equivalent to usage of –help without any parameter.
–help=target    A list of options specific to the target processor.
–help=variable  A list of all variables and their current values. They can be redefined on the command line using the syntax VAR=VALUE.
Related options: –#, –###, –show, –V, –flags
–I Adds a directory to the search path for files that are included using either the INCLUDE statement or the preprocessor directive #include. Default: The compiler searches only certain directories for included files. Syntax: -Idirectory
Where directory is the name of the directory added to the standard search path for include files. Usage: In the following example, the compiler first searches the directory mydir and then searches the default directories for include files. $ pgfortran -Imydir
Description: Adds a directory to the search path for files that are included using the INCLUDE statement or the preprocessor directive #include. Use the –I option to add a directory to the list of directories searched for included files. The compiler searches the directory specified by the –I option before the default directories. The Fortran INCLUDE statement directs the compiler to begin reading from another file. The compiler uses two rules to locate the file:
1. If the file name specified in the INCLUDE statement includes a path name, the compiler begins reading from the file it specifies.
2. If no path name is provided in the INCLUDE statement, the compiler searches (in order):
• Any directories specified using the –I option (in the order specified)
• The directory containing the source file
• The current directory
For example, the compiler applies rule (1) to the following statements:
INCLUDE '/bob/include/file1' (absolute path name)
INCLUDE '../../file1' (relative path name)
and rule (2) to this statement: INCLUDE 'file1'
To set this option in PVF, use the Fortran | General | Additional Include Directories property, described in “Additional Include Directories,” on page 332, or the Fortran | Preprocessor | Additional Include Directories property, described in “Additional Include Directories,” on page 335. Related options: –Mnostdinc
–i2, –i4, and –i8 Treat INTEGER and LOGICAL variables as two, four, or eight bytes. Default: The compiler treats INTEGER and LOGICAL variables as four bytes. Usage: In the following example, using the –i8 switch causes the integer variables to be treated as 64 bits. $ pgfortran -i8 int.f
int.f is a program similar to this:
      print *, "Integer size:", bit_size(i)
      end
Description: Use this option to treat INTEGER and LOGICAL variables as either two, four, or eight bytes. INTEGER*8 values not only occupy 8 bytes of storage, but operations use 64 bits, instead of 32 bits. Related options: None
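The widening described here can be illustrated in C, the other language the PGI compilers process. The following is an illustrative sketch (the function names are invented for the example), not PGI-specific code:

```c
#include <stdint.h>

/* Squaring 100000 needs 64 bits: carried out in 32-bit arithmetic the
   product wraps, while a 64-bit multiply (analogous to what -i8
   arranges for Fortran INTEGER operations) is exact. */
uint32_t square32(uint32_t x) {
    return x * x;                     /* wraps modulo 2^32 */
}

int64_t square64(int32_t x) {
    return (int64_t)x * (int64_t)x;   /* full 64-bit product */
}
```

For x = 100000, square64 returns the exact product 10000000000, while square32 wraps to 1410065408.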
–K Requests that the compiler provide special compilation semantics. Default: The compiler does not provide special compilation semantics. Syntax: –K<flag> Where flag is one of the following: ieee
Perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled, and on some systems a more accurate math library is linked if –Kieee is used during the link step. To set this option in PVF, use the Fortran | Floating Point Options | IEEE Arithmetic property, described in “IEEE Arithmetic,” on page 342.
noieee
Default flag. Use the fastest available means to perform floating-point operations, link in faster non-IEEE libraries if available, and disable underflow traps.
trap=option [,option]...
Controls the behavior of the processor when floating-point exceptions occur. Possible options include:
• fp
• align (ignored)
• inv
• denorm
• divz
• ovf
• unf
• inexact
Usage: In the following example, the compiler performs floating-point operations in strict conformance with the IEEE 754 standard. $ pgfortran -Kieee myprog.f
Description: Use –K to instruct the compiler to provide special compilation semantics. The default is –Knoieee. –Ktrap is only processed by the compilers when compiling main functions or programs. The options inv,
denorm, divz, ovf, unf, and inexact correspond to the processor’s exception mask bits: invalid operation, denormalized operand, divide-by-zero, overflow, underflow, and precision, respectively. Normally, the processor’s exception mask bits are on, meaning that floating-point exceptions are masked—the processor recovers from the exceptions and continues. If a floating-point exception occurs and its corresponding mask bit is off, or "unmasked", execution terminates with an arithmetic exception (C's SIGFPE signal). –Ktrap=fp is equivalent to –Ktrap=inv,divz,ovf.
To set this option in PVF, use the Fortran | Floating Point Options | Floating Point Exception Handling property, described in “Floating Point Exception Handling,” on page 342
Note The PGI compilers do not support exception-free execution for –Ktrap=inexact. This hardware support is intended for users who have specific uses for its execution, along with the appropriate signal handlers for handling the exceptions it produces. It is not designed to support normal floating-point operation code. Related options: None.
--keeplnk (Windows only.) Preserves the temporary file when the compiler generates a temporary indirect file for a long linker command. Usage: In the following example the compiler preserves each temporary file rather than deleting it.
$ pgfortran --keeplnk myprog.f
Description: If the compiler generates a temporary indirect file for a long linker command, use this option to instruct the compiler to preserve the temporary file instead of deleting it. Related options: None.
–L Specifies a directory to search for libraries.
Note Multiple –L options are valid. However, the position of multiple –L options is important relative to –l options supplied. Syntax: -Ldirectory
Where directory is the name of the library directory. Default: The compiler searches the standard library directory. Usage: In the following example, the library directory is /lib and the linker links in the standard libraries required by PGFORTRAN from this directory. $ pgfortran -L/lib myprog.f
In the following example, the library directory /lib is searched for the library file libx.a and both the directories /lib and /libz are searched for liby.a. $ pgfortran -L/lib -lx -L/libz -ly myprog.f
Description: Use the –L option to specify a directory to search for libraries. Using –L allows you to add directories to the search path for library files. Related options: –l
–l Instructs the linker to load the specified library. The linker searches the specified library in addition to the standard libraries.
Note The linker searches the libraries specified with –l in order of appearance before searching the standard libraries. Syntax: -llibrary
Where library is the name of the library to search. Usage: In the following example, if the standard library directory is /lib, the linker loads the library /lib/libmylib.a in addition to the standard libraries. $ pgfortran myprog.f -lmylib
Description: Use this option to instruct the linker to load the specified library. The compiler prepends the characters lib to the library name and adds the .a extension following the library name. The linker searches each library specified before searching the standard libraries. Related options: –L
–m Displays a link map on the standard output. Default: The compiler does not display the link map. Usage: When the following example is executed on Windows, pgfortran creates a link map in the file myprog.map. $ pgfortran -m myprog.f
Description: Use this option to display a link map. • On Linux, the map is written to stdout. • On Windows, the map is written to a .map file whose name depends on the executable. If the executable is myprog.exe, the map file is myprog.map. Related options: –c, –o, –s, –u
–m32 Use the 32-bit compiler for the default processor type. Usage: When the following example is executed on Windows, pgfortran uses the 32-bit compiler for the default processor type. $ pgfortran -m32 myprog.f
Description: Use this option to specify the 32-bit compiler as the default processor type.
–m64 Use the 64-bit compiler for the default processor type. Usage: When the following example is executed on Windows, pgfortran uses the 64-bit compiler for the default processor type. $ pgfortran -m64 myprog.f
Description: Use this option to specify the 64-bit compiler as the default processor type.
–M Selects options for code generation. The options are divided into the following categories:
• Code Generation
• Fortran Language Controls
• Optimization
• Environment
• C/C++ Language Controls
• Miscellaneous
• Inlining
The following table lists and briefly describes the options alphabetically and includes a field showing the category. For more details about the options as they relate to these categories, refer to "–M Options by Category," on page 213.

Table 18.6. –M Options Summary

allocatable=95|03  Controls whether to use Fortran 95 or Fortran 2003 semantics in allocatable array assignments. (Fortran Language)
anno  Annotate the assembly code with source code. (Miscellaneous)
[no]autoinline  In C/C++, when a function is declared with the inline keyword, inline it at –O2 and above. (Inlining)
[no]backslash  Determines how the backslash character is treated in quoted strings (pgf77, pgf95, pgfortran, and pghpf only). (Fortran Language)
[no]bounds  Specifies whether array bounds checking is enabled or disabled. (Miscellaneous)
byteswapio  Swap byte order (big-endian to little-endian or vice versa) during I/O of Fortran unformatted data. (Miscellaneous)
cache_align  Where possible, align data objects of size greater than or equal to 16 bytes on cache-line boundaries. (Optimization)
chkfpstk  Check for internal consistency of the x87 FP stack in the prologue of a function and after returning from a function or subroutine call (–tp px/p5/p6/piii targets only). (Miscellaneous)
chkptr  Check for NULL pointers (pgf95, pgfortran, and pghpf only). (Miscellaneous)
chkstk  Check the stack for available space upon entry to and before the start of a parallel region. Useful when many private variables are declared. (Miscellaneous)
concur  Enable auto-concurrentization of loops. Multiple processors or cores will be used to execute parallelizable loops. (Optimization)
cpp  Run the PGI cpp-like preprocessor without performing subsequent compilation steps. (Miscellaneous)
cray  Force Cray Fortran (CF77) compatibility (pgf77, pgf95, pgfortran, and pghpf only). (Optimization)
[no]daz  Do/don't treat denormalized numbers as zero. (Code Generation)
[no]dclchk  Determines whether all program variables must be declared (pgf77, pgf95, pgfortran, and pghpf only). (Fortran Language)
[no]defaultunit  Determines how the asterisk character ("*") is treated in relation to standard input and standard output, regardless of the status of I/O units 5 and 6 (pgf77, pgf95, pgfortran, and pghpf only). (Fortran Language)
[no]depchk  Checks for potential data dependencies. (Optimization)
[no]dse  Enables [disables] the dead store elimination phase for programs making extensive use of function inlining. (Optimization)
[no]dlines  Determines whether the compiler treats lines containing the letter "D" in column one as executable statements (pgf77, pgf95, pgfortran, and pghpf only). (Fortran Language)
dollar,char  Specifies the character to which the compiler maps the dollar sign code. (Fortran Language)
[no]dwarf  Specifies not to add DWARF debug information. (Code Generation)
dwarf1  When used with –g, generate DWARF1 format debug information. (Code Generation)
dwarf2  When used with –g, generate DWARF2 format debug information. (Code Generation)
dwarf3  When used with –g, generate DWARF3 format debug information. (Code Generation)
extend  Instructs the compiler to accept 132-column source code; otherwise it accepts 72-column code (pgf77, pgf95, pgfortran, and pghpf only). (Fortran Language)
extract  Invokes the function extractor. (Inlining)
fixed  Instructs the compiler to assume F77-style fixed format source code (pgf95, pgfortran, and pghpf only). (Fortran Language)
[no]flushz  Do/don't set SSE flush-to-zero mode. (Code Generation)
[no]fpapprox  Specifies not to use low-precision fp approximation operations. (Optimization)
[no]fprelaxed[=option]  Perform certain floating-point intrinsic functions using relaxed precision. (Optimization)
free  Instructs the compiler to assume F90-style free format source code (pgf95, pgfortran, and pghpf only). (Fortran Language)
func32  The compiler aligns all functions to 32-byte boundaries. (Code Generation)
gccbug[s]  Matches behavior of certain gcc bugs. (Miscellaneous)
info  Prints informational messages regarding optimization and code generation to standard output as compilation proceeds. (Miscellaneous)
inform  Specifies the minimum level of error severity that the compiler displays. (Miscellaneous)
inline  Invokes the function inliner. (Inlining)
instrumentation  Generates code to enable instrumentation of functions. (Miscellaneous)
[no]ipa  Invokes interprocedural analysis and optimization. (Optimization)
[no]iomutex  Determines whether critical sections are generated around Fortran I/O calls. (Fortran Language)
keepasm  Instructs the compiler to keep the assembly file. (Miscellaneous)
largeaddressaware  Enables support for 64-bit indexing and single static data objects of size larger than 2GB. (Code Generation)
[no]large_arrays  Enables support for 64-bit indexing and single static data objects of size larger than 2GB. (Code Generation)
[no]loop32  Aligns/does not align innermost loops on 32-byte boundaries with –tp barcelona. (Code Generation)
[no]lre  Disable/enable loop-carried redundancy elimination. (Optimization)
list  Specifies whether the compiler creates a listing file. (Miscellaneous)
makedll  Generate a dynamic link library (DLL). (Miscellaneous)
makeimplib  Passes the -def switch to the librarian without a deffile, when used without –def:deffile. (Miscellaneous)
mpi=option  Link to MPI libraries: MPICH1, MPICH2, or Microsoft MPI libraries. (Code Generation)
neginfo  Instructs the compiler to produce information on why certain optimizations are not performed. (Miscellaneous)
noframe  Eliminates operations that set up a true stack frame pointer for functions. (Optimization)
noi4  Determines how the compiler treats INTEGER variables. (Optimization)
nomain  When the link step is called, don't include the object file that calls the Fortran main program. (Code Generation)
noopenmp  When used in combination with the –mp option, the compiler ignores OpenMP parallelization directives or pragmas, but still processes SGI-style parallelization directives or pragmas. (Miscellaneous)
nopgdllmain  Do not link the module containing the default DllMain() into the DLL. (Miscellaneous)
nosgimp  When used in combination with the –mp option, the compiler ignores SGI-style parallelization directives or pragmas, but still processes OpenMP directives or pragmas. (Miscellaneous)
nostdinc  Instructs the compiler to not search the standard location for include files. To set this option in PVF, use the Fortran | Preprocessor | Ignore Standard Include Path property. (Environment)
nostdlib  Instructs the linker to not link in the standard libraries. (Environment)
[no]onetrip  Determines whether each DO loop executes at least once. (Fortran Language)
novintr  Disable idiom recognition and generation of calls to optimized vector functions. (Optimization)
pfi  Instrument the generated code and link in libraries for dynamic collection of profile and data information at runtime. (Optimization)
pfo  Read a pgfi.out trace file and use the information to enable or guide optimizations. (Optimization)
[no]pre  Force/disable generation of non-temporal moves and prefetching. (Code Generation)
[no]prefetch  Enable/disable generation of prefetch instructions. (Optimization)
preprocess  Perform cpp-like preprocessing on assembly language and Fortran input source files. (Miscellaneous)
prof  Set profile options; function-level and line-level profiling are supported. (Code Generation)
[no]r8  Determines whether the compiler promotes REAL variables and constants to DOUBLE PRECISION. (Optimization)
[no]r8intrinsics  Determines how the compiler treats the intrinsics CMPLX and REAL. (Optimization)
[no]recursive  Allocate/do not allocate local variables on the stack; this allows recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch. (Code Generation)
[no]reentrant  Specifies whether the compiler avoids optimizations that can prevent code from being reentrant. (Code Generation)
[no]ref_externals  Do/don't force references to names appearing in EXTERNAL statements. (Code Generation)
safe_lastval  In the case where a scalar is used after a loop, but is not defined on every iteration of the loop, the compiler does not by default parallelize the loop. This option tells the compiler it is safe to parallelize the loop; that is, for a given loop, the last value computed for all scalars makes it safe to parallelize the loop. (Code Generation)
[no]save  Determines whether the compiler assumes that all local variables are subject to the SAVE statement. (Fortran Language)
[no]scalarsse  Do/don't use SSE/SSE2 instructions to perform scalar floating-point arithmetic. (Optimization)
[no]second_underscore  Do/don't add a second underscore to the name of a Fortran global if its name already contains an underscore. (Code Generation)
[no]signextend  Do/don't extend the sign bit, if it is set. (Code Generation)
[no]smart  Do/don't enable the optional post-pass assembly optimizer. (Optimization)
[no]smartalloc[=huge|huge:<n>|hugebss]  Add a call to the routine mallopt in the main routine. Supports large TLBs on Linux and Windows. Tip: to be effective, this switch must be specified when compiling the file containing the Fortran, C, or C++ main program. (Environment)
standard  Causes the compiler to flag source code that does not conform to the ANSI standard. (Fortran Language)
[no]stride0  Do/do not generate alternate code for a loop that contains an induction variable whose increment may be zero. (Code Generation)
unix  Uses UNIX calling and naming conventions for Fortran subprograms. (Code Generation)
[no]unixlogical  Determines how the compiler treats logical values. (Fortran Language)
[no]unroll  Controls loop unrolling. (Optimization)
[no]upcase  Determines whether the compiler preserves uppercase letters in identifiers. (Fortran Language)
varargs  Forces Fortran program units to assume calls are to C functions with a varargs type interface. (Code Generation)
[no]vect  Do/don't invoke the code vectorizer. (Optimization)
Generic PGI Compiler Options
–module Allows you to specify a particular directory in which generated intermediate .mod files should be placed. Default: The compiler places .mod files in the current working directory, and searches only in the current working directory for pre-compiled intermediate .mod files. Usage: The following command line requests that any intermediate module file produced during compilation of myprog.f be placed in the directory mymods; specifically, the file ./mymods/myprog.mod is used. $ pgfortran -module mymods myprog.f
Description: Use the –module option to specify a particular directory in which generated intermediate .mod files should be placed. If the –module option is present, and USE statements are present in a compiled program unit, then the specified directory is searched for .mod intermediate files prior to a search in the default local directory. To set this option in PVF, use the Fortran | Output | Module Path property, described in "Module Path," on page 332. Related options: None.
–mp[=align,allcores,bind,[no]numa] Instructs the compiler to interpret user-inserted OpenMP shared-memory parallel programming directives, and to generate an executable file which will utilize multiple processors in a shared-memory parallel system. Default: The compiler ignores user-inserted shared-memory parallel programming directives. Usage: The following command line requests processing of any shared-memory directives present in myprog.f: $ pgfortran -mp myprog.f
Description: Use the –mp option to instruct the compiler to interpret user-inserted OpenMP shared-memory parallel programming directives and to generate an executable file which utilizes multiple processors in a shared-memory parallel system. The sub-options are one or more of the following:
align
Forces loop iterations to be allocated to OpenMP processes using an algorithm that maximizes alignment of vector sub-sections in loops that are both parallelized and vectorized for SSE. This allocation can improve performance in program units that include many such loops. It can also result in load-balancing problems that significantly decrease performance in program units with relatively short loops that contain a large amount of work in each iteration.
allcores
Instructs the compiler to use all available cores. You specify this sub-option at link time.
bind
Instructs the compiler to bind threads to cores. You specify this sub-option at link time.
[no]numa Uses [does not use] libnuma on systems where it is available. For a detailed description of this programming model and the associated directives, refer to Chapter 9, “Using OpenMP”. To set this option in PVF, use the Fortran | Language | Process OpenMP Directives property, described in “Process OpenMP Directives,” on page 338. Related options: –Mconcur, –Mvect
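As a sketch of the directives –mp processes, the hypothetical C loop below carries an OpenMP work-sharing pragma (a Fortran !$OMP PARALLEL DO directive plays the same role). Compiled without –mp the directive is ignored and the loop runs serially; the numerical result is the same either way because every iteration is independent:

```c
/* Each iteration writes a distinct element of out, so the iterations
   can be divided among threads when OpenMP processing is enabled
   (for example with pgcc -mp). */
void scale(double *out, const double *in, double s, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        out[i] = s * in[i];
}
```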
–noswitcherror Issues warnings instead of errors for unknown switches. Ignores unknown command line switches after printing a warning message. Default: The compiler prints an error message and then halts. Usage: In the following example, the compiler ignores unknown command line switches after printing a warning message. $ pgfortran -noswitcherror myprog.f
Description: Use this option to instruct the compiler to ignore unknown command line switches after printing a warning message.
Tip You can configure this behavior in the siterc file by adding: set NOSWITCHERROR=1. Related options: None.
–O Invokes code optimization at the specified level. Default: The compiler optimizes at level 2. Syntax: –O [level] Where level is an integer from 0 to 4. Usage: In the following example, since no –O option is specified, the compiler sets the optimization to level 1. $ pgfortran myprog.f
In the following example, since no optimization level is specified and a –O option is specified, the compiler sets the optimization to level 2. $ pgfortran -O myprog.f
Description: Use this option to invoke code optimization at the specified level - one of the following:
0
Creates a basic block for each statement. Neither scheduling nor global optimization is done. To specify this level, supply a 0 (zero) argument to the –O option.
1
Schedules within basic blocks and performs some register allocations, but does no global optimization.
2
Performs all level-one optimizations, and also performs global scalar optimizations such as induction variable elimination and loop invariant movement.
3
Specifies aggressive global optimization. This level performs all level-one and level-two optimizations and enables more aggressive hoisting and scalar replacement optimizations that may or may not be profitable.
4
Performs all level-one, level-two, and level-three optimizations and enables hoisting of guarded invariant floating point expressions.
To set this option (–O2 or –O3) in PVF, use the Fortran | Optimization | Global Optimizations property, described in "Global Optimizations," on page 334. Table 18.7 shows the interaction between the –O, –g, –Mvect, and –Mconcur options.

Table 18.7. Optimization and –O, –g, –Mvect, and –Mconcur Options

Optimize Option   Debug Option   –M Option   Optimization Level
none              none           none        1
none              none           –Mvect      2
none              none           –Mconcur    2
none              –g             none        0
–O                none or –g     none        2
–Olevel           none or –g     none        level
–Olevel < 2       none or –g     –Mvect      2
–Olevel < 2       none or –g     –Mconcur    2
Unoptimized code compiled using the option –O0 can be significantly slower than code generated at other optimization levels. Like the –Mvect option, the –Munroll option sets the optimization level to level-2 if no –O or –g options are supplied. The –gopt option is recommended for generation of debug information with optimized code. For more information on optimization, see Chapter 7, “Optimizing & Parallelizing”. Related options: –g, –M, –gopt
–o Names the executable file. Use the –o option to specify the filename of the compiler object file. The final output is the result of linking.
Syntax: –o filename Where filename is the name of the file for the compilation output. The filename must not have a .f extension. Default: The compiler creates executable filenames as needed. If you do not specify the –o option, the default filename is the linker output file with a name comprised of the base file name, such as myprog, plus the extension .exe, for example: myprog.exe. Usage: In the following example, the executable file is myp.exe instead of the default myprog.exe. $ pgfortran myprog.f -o myp
To set this option in PVF, use the Fortran | Output | Object File Name property, described in “Object File Name,” on page 333. Related options: –c, –E, –F, –S
–pc Note This option is available only for –tp px/p5/p6/piii targets. Allows you to control the precision of operations performed using the x87 floating point unit, and their representation on the x87 floating point stack. Syntax: –pc { 32 | 64 | 80 } Usage: $ pgfortran -pc 64 myprog.f
Description: The x87 architecture implements a floating-point stack using eight 80-bit registers. Each register uses bits 0-63 as the significand, bits 64-78 for the exponent, and bit 79 as the sign bit. This 80-bit real format is the default format, called the extended format. When values are loaded into the floating point stack they are automatically converted into extended real format. The precision of the floating point stack can be controlled, however, by setting the precision control bits (bits 8 and 9) of the floating control word appropriately. In this way, you can explicitly set the precision to standard IEEE double-precision using 64 bits, or to single precision using 32 bits.1 The default precision is system dependent. To alter the precision in a given program unit, the main program must be compiled with the same -pc option. The command line option –pc val lets the programmer set the compiler’s precision preference. Valid values for val are: 32 single precision
64 double precision
80 extended precision
Extended Precision Option – Operations performed exclusively on the floating-point stack using extended precision, without storing into or loading from memory, can cause problems with the accumulated values within the extra 16 bits of extended precision values. This can lead to answers, when rounded, that do not match expected results. For example, if the argument to sin is the result of previous calculations performed on the floating-point stack, then an 80-bit value used instead of a 64-bit value can result in slight discrepancies. Results can even change sign due to the sin curve being too close to an x-intercept value when evaluated. To maintain consistency in this case, you can ensure that the compiler generates code that calls a function. According to the x86 ABI, a function call must push its arguments on the stack; in this way memory is guaranteed to be accessed, even if the argument is an actual constant. Thus, even if the called function simply performs the inline expansion, using the function call as a wrapper to sin has the effect of trimming the argument precision down to the expected size. Using the –Mnobuiltin option on the command line for C accomplishes this task by resolving all math routines in the library libm, performing a function call of necessity. The other method of generating a function call for math routines, but one that may still produce the inline instructions, is to use the –Kieee switch. A second example illustrates the precision control problem using a section of code to determine machine precision:

program find_precision
 w = 1.0
 100 w=w+w
 y=w+1
 z=y-w
 if (z .gt. 0) goto 100
C now w is just big enough that |((w+1)-w)-1| >= 1 ...
 print*,w
 end

1 According to Intel documentation, this only affects the x87 operations of add, subtract, multiply, divide, and square root. In particular, it does not appear to affect the x87 transcendental instructions.
In this case, where the variables are implicitly real*4, operations are performed on the floating-point stack where optimization removes unnecessary loads and stores from memory. The general case of copy propagation being performed follows this pattern:

a = x
y = 2.0 + a
Instead of storing x into a, then loading a to perform the addition, the value of x can be left on the floating-point stack and added to 2.0. Thus, memory accesses in some cases can be avoided, leaving answers in the extended real format. If copy propagation is disabled, stores of all left-hand sides are performed automatically and reloaded when needed. This has the effect of rounding any results to their declared sizes. When executed using default (extended) precision, the find_precision program has a value of 1.8446744E+19. If, however, –Kieee is set, the value becomes 1.6777216E+07 (single precision). This difference is due to the fact that –Kieee disables copy propagation, so all intermediate results are stored into memory, then reloaded when needed. Copy propagation is disabled only for floating-point operations, not integer. With this particular example, setting the –pc switch also adjusts the result. The –Kieee switch also has the effect of making function calls to perform all transcendental operations. Except when the –Mnobuiltin switch is set in C, the function still produces the x86 machine instruction for computation, and arguments are passed on the stack, which results in a memory store and load.
Finally, –Kieee also disables reciprocal division for constant divisors. That is, for a/b with unknown a and constant b, the expression is usually converted at compile time to a*(1/b), thus turning an expensive divide into a relatively fast scalar multiplication. However, numerical discrepancies can occur when this optimization is used. Understanding and correctly using the –pc, –Mnobuiltin, and –Kieee switches should enable you to produce the desired and expected precision for calculations which utilize floating-point operations. Related options: –Kieee, –Mnobuiltin
--pedantic Prints warnings from included system header files. Syntax: --pedantic Default: The compiler prints the warnings from the included system header files. Usage: In the following example, the compiler prints the warnings from the included system header files. $ pgfortran --pedantic myprog.f
Related options:
–pgcpplibs Instructs the compiler to append C++ runtime libraries to the link line for programs built using either PGF90 or PGF77. Default: The Fortran compilers do not append the C++ runtime libraries to the link line. Usage: In the following example, the C++ runtime libraries are linked with a Fortran main program and a C++ object file. $ pgf90 main.f90 mycpp.o -pgcpplibs
Description: Use this option to instruct the compiler to append C++ runtime libraries to the link line for programs built using either PGF90 or PGF77. Related options: –pgf90libs, –pgf77libs
–pgf77libs Instructs the compiler to append PGF77 runtime libraries to the link line. Default: The C/C++ compilers do not append the PGF77 runtime libraries to the link line. Usage: In the following example a .c main program is linked with an object file compiled with pgf77. $ pgcc main.c myf77.o -pgf77libs
Description: Use this option to instruct the compiler to append PGF77 runtime libraries to the link line. Related options: –pgcpplibs, –pgf90libs
Generic PGI Compiler Options
–pgf90libs Instructs the compiler to append PGF90/PGF95/PGFORTRAN runtime libraries to the link line. Default: The C/C++ compilers do not append the PGF90/PGF95/PGFORTRAN runtime libraries to the link line. Usage: In the following example a .c main program is linked with an object file compiled with pgfortran. $ pgcc main.c myf95.o -pgf90libs
Description: Use this option to instruct the compiler to append PGF90/PGF95/PGFORTRAN runtime libraries to the link line. Related options: –pgcpplibs, –pgf77libs
–r4 and –r8 Interprets DOUBLE PRECISION variables as REAL (–r4), or interprets REAL variables as DOUBLE PRECISION (–r8). Usage: In this example, the double precision variables are interpreted as REAL. $ pgfortran -r4 myprog.f
Description: Interpret DOUBLE PRECISION variables as REAL (–r4) or REAL variables as DOUBLE PRECISION (–r8). Related options: –i2, –i4, –i8, –nor8
–rc Specifies the name of the driver startup configuration file. If the file or pathname supplied is not a full pathname, the path for the configuration file loaded is relative to the $DRIVER path (the path of the currently executing driver). If a full pathname is supplied, that file is used for the driver configuration file. Syntax: -rc [path] filename
Where path is either a relative pathname, relative to the value of $DRIVER, or a full pathname beginning with "/". Filename is the driver configuration file. Default: The driver uses the configuration file .pgirc. Usage: In the following example, the file .pgfortranrctest, relative to /usr/pgi/linux86/bin, the value of $DRIVER, is the driver configuration file. $ pgfortran -rc .pgfortranrctest myprog.f
Description: Use this option to specify the name of the driver startup configuration file. If the file or pathname supplied is not a full pathname, the path for the configuration file loaded is relative to the $DRIVER path (the path of the currently executing driver). If a full pathname is supplied, that file is used for the driver configuration file. Related options: –show
–S Stops compilation after the compiling phase and writes the assembly-language output to a file. Default: The compiler does not produce a .s file. Usage: In this example, pgfortran produces the file myprog.s in the current directory. $ pgfortran -S myprog.f
Description: Use this option to stop compilation after the compiling phase and then write the assembly-language output to a file. If the input file is filename.f, then the output file is filename.s. Related options: –c, –E, –F, –Mkeepasm, –o
–show Produces driver help information describing the current driver configuration. Default: The compiler does not show driver help information. Usage: In the following example, the driver displays configuration information to the standard output after processing the driver configuration file. $ pgfortran -show myprog.f
Description: Use this option to produce driver help information describing the current driver configuration. Related options: –V, –v, –###, –help, –rc
–silent Do not print warning messages. Default: The compiler prints warning messages. Usage: In the following example, the driver does not display warning messages. $ pgfortran -silent myprog.f
Description: Use this option to suppress warning messages. Related options: –v, –V, –w
–stack (Windows only.) Allows you to explicitly set stack properties for your program. Default: If –stack is not specified, then the defaults are as follows: Win32 Setting is -stack:2097152,2097152, which is approximately 2MB for reserved and committed bytes. Win64 No default setting Syntax:
-stack={ (reserved bytes)[,(committed bytes)] }{, [no]check }
Usage: The following example demonstrates how to reserve 524,288 stack bytes (512KB), commit 262,144 stack bytes for each routine (256KB), and disable the stack initialization code with the nocheck argument. $ pgfortran -stack=524288,262144,nocheck myprog.f
Description: Use this option to explicitly set stack properties for your program. The –stack option takes one or more arguments: (reserved bytes), (committed bytes), [no]check.
reserved bytes
  Specifies the total stack bytes required in your program.
committed bytes
  Specifies the number of stack bytes that the operating system will allocate for each routine in your program. This value must be less than or equal to the stack reserved bytes value. The default for this argument is 4096 bytes.
[no]check
  Instructs the compiler to generate or not to generate stack initialization code upon entry of each routine. Check is the default, so stack initialization code is generated. Stack initialization code is required when a routine's stack exceeds the committed bytes size. When your committed bytes is equal to the reserved bytes, or equal to the stack bytes required for each routine, you can turn off the stack initialization code using the -stack=nocheck compiler option. If you do this, the compiler assumes that you are specifying enough committed stack space, and therefore your program does not have to manage its own stack size. For more information on determining the amount of stack required by your program, refer to the –Mchkstk compiler option, described in “Miscellaneous Controls”.
Note -stack=(reserved bytes),(committed bytes) are linker options. -stack=[no]check is a compiler option.
If you specify -stack=(reserved bytes),(committed bytes) on your compile line, it is only used during the link step of your build. Similarly, -stack=[no]check can be specified on your link line, but it is only used during the compile step of your build. Related options: –Mchkstk
–ta=nvidia(,nvidia_suboptions),host Defines the target accelerator.
Note This flag is valid only for Fortran and C. Default: The compiler uses NVIDIA.
Usage: In the following example, NVIDIA is the accelerator target architecture and the accelerator generates code for compute capability 1.3. $ pgfortran -ta=nvidia,cc13
Description: Use this option to select the accelerator target and, optionally, to define the type of code to generate. The –ta flag has the following options:
nvidia
  Select the NVIDIA accelerator target. This option has the following nvidia-suboptions:
  analysis
    Perform loop analysis only; do not generate GPU code.
  cc10
    Generate code for compute capability 1.0.
  cc11
    Generate code for compute capability 1.1.
  cc12
    Generate code for compute capability 1.2.
  cc13
    Generate code for compute capability 1.3.
  cc20
    Generate code for compute capability 2.0.
  cuda2.3 or 2.3
    Specify the NVIDIA CUDA 2.3 version of the toolkit.
  cuda3.0 or 3.0
    Specify the NVIDIA CUDA 3.0 version of the toolkit.
Note Compiling with the CUDA 3.0 toolkit, either by using the -ta=nvidia:cuda3.0 option or by adding set CUDAVERSION=3.0 to the siterc file, generates binaries that may not work on machines with a 2.3 CUDA driver. pgaccelinfo prints the driver version as the first line of output.
For a 2.3 driver: CUDA Driver Version 2030
For a 3.0 driver: CUDA Driver Version 3000
  fastmath
    Use routines from the fast math library.
  keepbin
    Keep the binary (.bin) files.
  keepgpu
    Keep the kernel source (.gpu) files.
  keepptx
    Keep the portable assembly (.ptx) file for the GPU code.
  maxregcount:n
    Specify the maximum number of registers to use on the GPU. Leaving this blank indicates no limit.
  mul24
    Use 24-bit multiplication for subscripting.
  nofma
    Do not generate fused multiply-add instructions.
  time
    Link in a limited-profiling library, as described in “Profiling Accelerator Kernels,” on page 118.
  [no]wait
    Wait for each kernel to finish before continuing in the host program.
host
  Select NO accelerator target. Generate PGI Unified Binary Code, as described in “PGI Unified Binary for Accelerators,” on page 116.
Related options: –#
–time Print execution times for various compilation steps. Default: The compiler does not print execution times for compilation steps. Usage: In the following example, pgfortran prints the execution times for the various compilation steps. $ pgfortran -time myprog.f
Description: Use this option to print execution times for various compilation steps. Related options: –#
–tp target[,target...] Sets the target architecture. Default: The PGI compilers produce code specifically targeted to the type of processor on which the compilation is performed. In particular, the default is to use all supported instructions wherever possible when compiling on a given system. The default style of code generation is auto-selected depending on the type of processor on which compilation is performed. Further, the –tp x64 style of unified binary code generation is only enabled by an explicit –tp x64 option.
Note Executables created on a given system may not be usable on previous generation systems. (For example, executables created on a Pentium 4 may fail to execute on a Pentium III or Pentium II.)
Usage: In the following example, pgfortran sets the target architecture to EM64T: $ pgfortran -tp p7-64 myprog.f
Description: Use this option to set the target architecture. By default, the PGI compiler uses all supported instructions wherever possible when compiling on a given system. As a result, executables created on a given system may not be usable on previous generation systems. For example, executables created on a Pentium 4 may fail to execute on a Pentium III or Pentium II. Processor-specific optimizations can be specified or limited explicitly by using the –tp option. Thus, it is possible to create executables that are usable on previous generation systems. With the exception of k8-64, k8-64e, p7-64, and x64, any of these sub-options are valid on any x86 or x64 processor-based system. The k8-64, k8-64e, p7-64 and x64 options are valid only on x64 processor-based systems. The –tp x64 option generates unified binary object and executable files, as described in the section called “Using –tp to Generate a Unified Binary”. To set this option in PVF, use the Fortran | Target Processors | Unified Binary Information property, described in “Unified Binary Information,” on page 351. The following list contains the possible sub-options for –tp and the processors that each sub-option is intended to target:
athlon
  generate 32-bit code for AMD Athlon XP/MP and compatible processors.
barcelona
  generate 32-bit code for AMD Opteron/Quadcore and compatible processors.
barcelona-32
  generate 32-bit code for AMD Opteron/Quadcore and compatible processors. Same as the barcelona sub-option.
barcelona-64
  generate 64-bit code for AMD Opteron/Quadcore and compatible processors.
core2
  generate 32-bit code for Intel Core 2 Duo and compatible processors.
core2-32
  generate 32-bit code for Intel Core 2 Duo and compatible processors. Same as the core2 sub-option.
core2-64
  generate 64-bit code for Intel Core 2 Duo EM64T and compatible processors.
istanbul
  generate 32-bit code that is usable on any Istanbul processor-based system.
istanbul-32
  generate 32-bit code that is usable on any Istanbul processor-based system.
istanbul-64
  generate 64-bit code that is usable on any Istanbul processor-based system.
k8-32
  generate 32-bit code for AMD Athlon64, AMD Opteron and compatible processors.
k8-64
  generate 64-bit code for AMD Athlon64, AMD Opteron and compatible processors.
k8-64e
  generate 64-bit code for AMD Opteron Revision E, AMD Turion, and compatible processors.
nehalem
  generate 32-bit code that is usable on any Nehalem processor-based system.
nehalem-32
  generate 32-bit code that is usable on any Nehalem processor-based system.
nehalem-64
  generate 64-bit code that is usable on any Nehalem processor-based system.
p6
  generate 32-bit code for Pentium Pro/II/III and AthlonXP compatible processors.
p7
  generate 32-bit code for Pentium 4 and compatible processors.
p7-32
  generate 32-bit code for Pentium 4 and compatible processors. Same as p7 option.
p7-64
  generate 64-bit code for Intel P4/Xeon EM64T and compatible processors.
penryn
  generate 32-bit code for Intel Penryn Architecture and compatible processors.
penryn-32
  generate 32-bit code for Intel Penryn Architecture and compatible processors. Same as penryn suboption.
penryn-64
  generate 64-bit code for Intel Penryn Architecture and compatible processors.
piii
  generate 32-bit code for Pentium III and compatible processors, including support for single-precision vector code using SSE instructions.
px
  generate 32-bit code that is usable on any x86 processor-based system.
px-32
  generate 32-bit code that is usable on any x86 processor-based system. Same as px suboption.
shanghai
  generate 32-bit code that is usable on any AMD Shanghai processor-based system.
shanghai-32
  generate 32-bit code that is usable on any AMD Shanghai processor-based system.
shanghai-64
  generate 64-bit code that is usable on any AMD Shanghai processor-based system.
x64
  generate 64-bit unified binary code including full optimizations and support for both AMD and Intel x64 processors.
Refer to the PGI Release Notes for a concise list of the features of these processors that distinguish them as separate targets when using the PGI compilers and tools. The syntax for 64-bit and 32-bit targets is similar, even though the target information varies. Syntax for 64-bit targets: -tp {k8-64 | k8-64e | p7-64 | core2-64 | x64}
Syntax for 32-bit targets: -tp {k8-32 | p6 | p7 | core2 | piii | px}
Using –tp to Generate a Unified Binary Processors differ, sometimes subtly, in hardware features such as instruction sets and cache size. The compilers make architecture-specific decisions about such things as instruction selection, instruction scheduling, and vectorization. Any of these decisions can have significant effects on performance and compatibility. PGI unified binaries provide a low-overhead means for a single program to run well on a number of hardware platforms. You can use the –tp option to produce PGI Unified Binary programs. The compilers generate, and combine into one executable, multiple binary code streams, each optimized for a specific platform. At runtime, this one executable senses the environment and dynamically selects the appropriate code stream. The target processor switch, –tp, accepts a comma-separated list of 64-bit targets and will generate code optimized for each listed target. For example, the following switch generates optimized code for three targets: k8-64, p7-64, and core2-64. Syntax for optimizing for multiple targets: -tp k8-64,p7-64,core2-64
The –tp k8-64 and –tp k8-64e options result in generation of code supported on and optimized for AMD x64 processors, while the –tp p7-64 option results in generation of code that is supported on and optimized for Intel x64 processors. Performance of k8-64 or k8-64e code executed on Intel x64 processors, or of p7-64 code executed on AMD x64 processors, can often be significantly less than that obtained with a native binary. The special –tp x64 option is equivalent to –tp k8-64,p7-64. This switch produces PGI Unified Binary programs containing code streams fully optimized and supported for both AMD64 and Intel EM64T processors. For more information on unified binaries, refer to “Processor-Specific Optimization & the Unified Binary,” on page 73. Related options: –M options that control environments
–[no]traceback Adds debug information for runtime traceback for use with the environment variable PGI_TERM. Default: The compiler enables traceback for FORTRAN 77 and Fortran 90/95 and disables traceback for C and C++. Syntax: -traceback
Usage: In this example, pgfortran enables traceback for the program myprog.f. $ pgfortran -traceback myprog.f
Description: Use this option to enable or disable runtime traceback information for use with the environment variable PGI_TERM. Setting set TRACEBACK=OFF; in siterc or .mypg*rc also disables default traceback. Using ON instead of OFF enables default traceback.
–u Initializes the symbol table with symbol, which is undefined for the linker. Default: The compiler does not use the –u option. Syntax: -usymbol
Where symbol is a symbolic name. Usage: In this example, pgfortran initializes the symbol table with test. $ pgfortran -utest myprog.f
Description: Use this option to initialize the symbol table with symbol, which is undefined for the linker. An undefined symbol triggers loading of the first member of an archive library. Related options: –c, –o, –s
–U Undefines a preprocessor macro. Syntax: -Usymbol
Where symbol is a symbolic name. Usage: The following examples undefine the macro test. $ pgfortran -Utest myprog.F $ pgfortran -Dtest -Utest myprog.F
Description: Use this option to undefine a preprocessor macro. You can also use the #undef preprocessor directive to undefine macros. To set this option in PVF, use the Fortran | Preprocessor | Undefine Preprocessor Definitions property, described in “Undefine Preprocessor Definitions,” on page 336. Related options: –D, –Mnostddef.
–V[release_number] Displays additional information, including version messages. Further, if a release_number is appended, the compiler driver attempts to compile using the specified release instead of the default release.
Note There can be no space between -V and release_number. Default: The compiler does not display version information and uses the release specified by your path to compile. Usage: The following command-line shows the output using the -V option. % pgfortran -V myprog.f
The following command-line causes pgcc to compile using the 5.2 release instead of the default release. % pgcc -V5.2 myprog.c
Description: Use this option to display additional information, including version messages or, if a release_number is appended, to instruct the compiler driver to attempt to compile using the specified release instead of the default release. The specified release must be co-installed with the default release, and must have a release number greater than or equal to 4.1, which was the first release that supported this functionality. To set this option in PVF, use the Fortran | General | Display Startup Banner property, described in “Display Startup Banner,” on page 332. Related options: –Minfo, –v
–v Displays the invocations of the compiler, assembler, and linker. Default: The compiler does not display individual phase invocations. Usage: In the following example you use –v to see the commands sent to compiler tools, assembler, and linker. $ pgfortran -v myprog.f90
Description: Use the –v option to display the invocations of the compiler, assembler, and linker. These invocations are command lines created by the compiler driver from the files and the –W options you specify on the compiler command-line.
Related options: –dryrun, –Minfo, –V, –W
–W Passes arguments to a specific phase.
Syntax: -W{0 | a | l},option[,option...]
Note You cannot have a space between the –W and the single-letter pass identifier, between the identifier and the comma, or between the comma and the option. 0 (the number zero) specifies the compiler. a specifies the assembler. l (lowercase letter l) specifies the linker. option is a string that is passed to and interpreted by the compiler, assembler or linker. Options separated by commas are passed as separate command line arguments. Usage: In the following example the linker loads the text segment at address 0xffc00000 and the data segment at address 0xffe00000. $ pgfortran -Wl,-k,-t,0xffc00000,-d,0xffe00000 myprog.f
Description: Use this option to pass arguments to a specific phase. You can use the –W option to specify options for the assembler, compiler, or linker.
Note A given PGI compiler command invokes the compiler driver, which parses the command-line, and generates the appropriate commands for the compiler, assembler, and linker. Related options: –Minfo, –V, –v
–w Do not print warning messages. Default: The compiler prints warning messages. Usage: In the following example no warning messages are printed. $ pgfortran -w myprog.f
Description: Use the –w option to not print warning messages. Sometimes the compiler issues many warnings in which you may have no interest. You can use this option to suppress those warnings. Related options: –silent
–M Options by Category This section describes each of the options available with –M by the categories: Code generation
Fortran Language Controls
Optimization
C/C++ Language Controls
Inlining
Miscellaneous
Environment
For a complete alphabetical list of all the options, refer to “ –M Options Summary,” on page 191. The following sections provide detailed descriptions of several, but not all, of the –M options. These options are grouped according to categories and are listed with exact syntax, defaults, and notes concerning similar or related options.
Code Generation Controls This section describes the –M options that control code generation. Default: For arguments that you do not specify, the default code generation controls are these: nodaz
norecursive
nosecond_underscore
noflushz
noreentrant
nostride0
largeaddressaware
noref_externals
signextend
Related options: –D, –I, –L, –l, –U
The following list provides the syntax for each –M option that controls code generation. Each option has a description and, if appropriate, any related options.
–Mdaz
  Set IEEE denormalized input values to zero; there is a performance benefit, but misleading results can occur, such as when dividing a small normalized number by a denormalized number. To take effect, this option must be set for the main program. To set this option in PVF, use the Fortran | Floating Point Options | Treat Denormalized Values as Zero property, described in “Treat Denormalized Values as Zero,” on page 342.
–Mnodaz
  Do not treat denormalized numbers as zero. To take effect, this option must be set for the main program.
–Mnodwarf
  Specifies not to add DWARF debug information; must be used in combination with –g.
–Mdwarf1
  Generate DWARF1 format debug information; must be used in combination with –g.
–Mdwarf2
  Generate DWARF2 format debug information; must be used in combination with –g.
–Mdwarf3
  Generate DWARF3 format debug information; must be used in combination with –g.
–Mflushz
  Set SSE flush-to-zero mode; if a floating-point underflow occurs, the value is set to zero. To take effect, this option must be set for the main program. To set this option in PVF, use the Fortran | Floating Point Options | Flush Denormalized Results to Zero property, described in “Flush Denormalized Results to Zero,” on page 342.
–Mnoflushz
  Do not set SSE flush-to-zero mode; generate underflows. To take effect, this option must be set for the main program.
–Mfunc32
  Align functions on 32-byte boundaries.
–Minstrument[=functions]
  (linux86-64 only) Generate additional code to enable instrumentation of functions. The option –Minstrument=functions is the same as –Minstrument. Implies –Minfo=ccff and –Mframe.
–Mlargeaddressaware=[no]
  (Win64 only) Generates code that allows for addresses greater than 2GB, using RIP-relative addressing. Use –Mlargeaddressaware=no for a direct addressing mechanism that restricts the total addressable memory.
Note Do not use –Mlargeaddressaware=no if the object file will be placed in a DLL. If –Mlargeaddressaware=no is used to compile any object file, it must also be used when linking.
–Mlarge_arrays
  Enable support for 64-bit indexing and single static data objects larger than 2GB in size. This option is the default in the presence of –mcmodel=medium. It can also be used with the default small memory model for certain 64-bit applications that manage their own memory space. For more information, refer to Chapter 16, “Programming Considerations for 64-Bit Environments”.
–Mmpi=option
  –Mmpi adds the include and library options to the compile and link commands necessary to build an MPI application using MPI header files and libraries. To use –Mmpi, you must have a version of MPI installed on your system. This option tells the compiler to use the headers and libraries for the specified version of MPI. On Windows, PGI compilers and tools support Microsoft’s implementation of MPI, MSMPI. This version of MPI is available with Microsoft’s HPC Pack 2008 SDK.
–Mmpi=msmpi - Select Microsoft MPI libraries. For more information on these options, refer to Chapter 4, “Using MPI in PVF,” on page 35.
Note
On Windows, the user can set the appropriate environment variable, either CCP_HOME or CCP_SDK, to override the default location for the directory associated with using MSMPI. For –Mmpi=msmpi to work, the CCP_HOME environment variable must be set. When the Microsoft HPC Pack 2008 SDK is installed, this variable is typically set to point to the MSMPI library directory.
–Mnolarge_arrays
Disable support for 64-bit indexing and single static data objects larger than 2GB in size. When placed after –mcmodel=medium on the command line, this option disables the use of 64-bit indexing for applications that have no single data object larger than 2GB.
–Mnomain
Instructs the compiler not to include the object file that calls the Fortran main program as part of the link step. This option is useful for linking programs in which the main program is written in C/C++ and one or more subroutines are written in Fortran (pgf77, pgf95, pgfortran, and pghpf only).
–M[no]movnt
Instructs the compiler to generate nontemporal move and prefetch instructions even in cases where the compiler cannot determine statically at compile-time that these instructions will be beneficial.
–M[no]pre
Enables or disables partial redundancy elimination.
–Mprof[=option[,option,...]]
Set performance profiling options. Use of these options causes the resulting executable to create a performance profile that can be viewed and analyzed with the PGPROF performance profiler. In the descriptions that follow, PGI-style profiling implies compiler-generated source instrumentation. MPICH-style profiling implies the use of instrumented wrappers for MPI library routines. The option argument can be any of the following:
[no]ccff
Enable [disable] common compiler feedback format, CCFF, information.
dwarf
Generate limited DWARF symbol information sufficient for most performance profilers.
func
Perform PGI-style function-level profiling.
lines
Perform PGI-style line-level profiling.
msmpi
Perform MPICH-style profiling for Microsoft MPI. Implies –Mmpi=msmpi. This option is valid only if Microsoft HPC Pack 2008 SDK is installed.
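A profiling build using the –Mprof suboptions above might be sketched as follows; the file and executable names are hypothetical.

```shell
# Hypothetical sketch: build with PGI-style function-level profiling
# plus CCFF information, run to produce profile data, then examine
# the result with the PGPROF performance profiler.
pgfortran -Mprof=func,ccff -o myapp myapp.f90
./myapp
pgprof -exe myapp
```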
–M Options by Category
For more information, refer to Chapter 4, “Using MPI in PVF”. To set this option in PVF, use the Fortran | General | Profiling property, described in “Line-Level Profiling,” on page 352. To enable profiling you must also set the Linker | General | Profiling property, described in “Line-Level Profiling,” on page 352.
–Mrecursive
instructs the compiler to allow Fortran subprograms to be called recursively.
–Mnorecursive
Fortran subprograms may not be called recursively.
–Mref_externals
force references to names appearing in EXTERNAL statements.
–Mnoref_externals
do not force references to names appearing in EXTERNAL statements (pgf77, pgf95, pgfortran, and pghpf only).
–Mreentrant
instructs the compiler to avoid optimizations that can prevent code from being reentrant.
–Mnoreentrant
instructs the compiler not to avoid optimizations that can prevent code from being reentrant.
–Msecond_underscore
instructs the compiler to add a second underscore to the name of a Fortran global symbol if its name already contains an underscore. This option is useful for maintaining compatibility with object code compiled using g77, which uses this convention by default.
–Mnosecond_underscore
instructs the compiler not to add a second underscore to the name of a Fortran global symbol if its name already contains an underscore.
–Msignextend
instructs the compiler to extend the sign bit that is set as a result of converting an object of one data type to an object of a larger signed data type.
–Mnosignextend
instructs the compiler not to extend the sign bit that is set as the result of converting an object of one data type to an object of a larger data type.
–Msafe_lastval
When a scalar is used after a loop but is not defined on every iteration of the loop, the compiler does not, by default, parallelize the loop. This option tells the compiler that it is safe to parallelize such a loop; use it only when the last value computed for every such scalar is in fact safe.
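The pattern that –Msafe_lastval addresses can be sketched as follows; the file name and loop are hypothetical illustrations.

```shell
# Hypothetical sketch: the scalar t is used after the loop but is not
# assigned on every iteration, so the compiler will not auto-parallelize
# the loop unless -Msafe_lastval asserts that doing so is safe.
cat > lastval.f90 <<'EOF'
program lastval
  real :: x(100), t
  integer :: i
  x = 1.0
  t = 0.0
  do i = 1, 100
     if (x(i) > 0.0) t = x(i)   ! t not defined on every iteration
  end do
  print *, t
end program lastval
EOF
pgfortran -Mconcur -Msafe_lastval -o lastval lastval.f90
```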
–Mstride0
instructs the compiler to inhibit certain optimizations and to allow for stride 0 array references. This option may degrade performance and should only be used if zero-stride induction variables are possible.
–Mnostride0
instructs the compiler to perform certain optimizations and to disallow stride 0 array references.
–Munix use UNIX symbol and parameter passing conventions for Fortran subprograms. –Mvarargs force Fortran program units to assume procedure calls are to C functions with a varargs-type interface.
Environment Controls
This section describes the –M options that control environments. Default: For arguments that you do not specify, the default environment option depends on your configuration. The following list provides the syntax for each –M option that controls environments. Each option has a description and, if appropriate, a list of any related options.
–Mnostartup
instructs the linker not to link in the standard startup routine that contains the entry point (_start) for the program.
Note
If you use the –Mnostartup option and do not supply an entry point, the linker issues the following error message: Warning: cannot find entry symbol _start
–M[no]smartalloc[=huge|huge:n|hugebss|nohuge]
adds a call to the routine mallopt in the main routine. This option supports large TLBs on Linux and Windows. This option must be used to compile the main routine to enable optimized malloc routines. The option arguments can be any of the following:
huge
Link in the huge page runtime library. Enables large 2-megabyte pages to be allocated. The effect is to reduce the number of TLB entries required to execute a program. This option is most effective on Barcelona and Core 2 systems; older architectures do not have enough TLB entries for this option to be beneficial. By itself, the huge suboption tries to allocate as many huge pages as required.
huge:n
Link in the huge page runtime library and allocate n huge pages. Use this suboption to limit the number of huge pages allocated to n. You can also limit the pages allocated by using the environment variable PGI_HUGE_PAGES.
hugebss
(64-bit only) Puts the BSS section in huge pages; attempts to put a program's uninitialized data section into huge pages.
Note
This flag dynamically links the library libhugetlbfs_pgi even if –Bstatic is used.
nohuge Overrides a previous –Msmartalloc=huge setting.
Tip To be effective, this switch must be specified when compiling the file containing the Fortran, C, or C++ main program.
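A –Msmartalloc build might be sketched as follows; the file names and page count are hypothetical, and, per the tip above, the flag is applied when compiling the file containing the main program.

```shell
# Hypothetical sketch: enable huge pages, capped at 512 pages, when
# compiling the main program and again at link time.
pgfortran -c -Msmartalloc=huge:512 main.f90
pgfortran -c compute.f90
pgfortran -Msmartalloc=huge:512 -o myapp main.o compute.o
# PGI_HUGE_PAGES can further cap the pages allocated at run time:
PGI_HUGE_PAGES=256 ./myapp
```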
–Mnostdinc instructs the compiler to not search the standard location for include files. To set this option in PVF, use the Fortran | Preprocessor | Ignore Standard Include Path property, described in “Ignore Standard Include Path,” on page 336. –Mnostdlib instructs the linker not to link in the standard libraries in the library directory lib within the standard directory. You can link in your own library with the –l option or specify a library directory with the –L option.
Fortran Language Controls
This section describes the –M options that affect Fortran language interpretations by the PGI Fortran compilers. These options are valid only for the pgf77, pgf95, and pgfortran compiler drivers. Default: For arguments that you do not specify, the defaults are as follows: backslash
nodefaultunit
dollar,_
noonetrip
nounixlogical
nodclchk
nodlines
noiomutex
nosave
noupcase
The following list provides the syntax for each –M option that affects Fortran language interpretations. Each option has a description and, if appropriate, a list of any related options.
–Mallocatable=95|03
controls whether Fortran 95 or Fortran 2003 semantics are used in allocatable array assignments. The default behavior is to use Fortran 95 semantics; the 03 option instructs the compiler to use Fortran 2003 semantics.
–Mbackslash
the compiler treats the backslash as a normal character, and not as an escape character in quoted strings.
–Mnobackslash
the compiler recognizes a backslash as an escape character in quoted strings (in accordance with standard C usage).
–Mcuda
the compiler enables CUDA Fortran. The following suboptions exist:
Note
If more than one option is on the command line, all the specified options take effect.
cc10
Generate code for compute capability 1.0.
cc11
Generate code for compute capability 1.1.
cc12
Generate code for compute capability 1.2.
cc13
Generate code for compute capability 1.3.
cc20
Generate code for compute capability 2.0.
cuda2.3 or 2.3
Sets the toolkit compatibility version to 2.3.
cuda3.0 or 3.0
Sets the toolkit compatibility version to 3.0.
Note
Compiling with the CUDA 3.0 toolkit, either by using the -ta=nvidia:cuda3.0 option or by adding set CUDAVERSION=3.0 to the siterc file, generates binaries that may not work on machines with a 2.3 CUDA driver. pgaccelinfo prints the driver version as the first line of output.
For a 2.3 driver: CUDA Driver Version 2030
For a 3.0 driver: CUDA Driver Version 3000
emu
Enable CUDA Fortran emulation mode.
fastmath
Use routines from the fast math library.
keepbin
Keep the generated binary (.bin) file for CUDA Fortran.
keepgpu
Keep the generated GPU code for CUDA Fortran.
keepptx
Keep the portable assembly (.ptx) file for the GPU code.
maxregcount:n
Specify the maximum number of registers to use on the GPU. Leaving this blank indicates no limit.
nofma
Do not generate fused multiply-add instructions.
–Mdclchk
the compiler requires that all program variables be declared.
–Mnodclchk
the compiler does not require that all program variables be declared.
–Mdefaultunit
the compiler treats "*" as a synonym for standard input for reading and standard output for writing.
–Mnodefaultunit
the compiler treats "*" as a synonym for unit 5 on input and unit 6 on output.
–Mdlines
the compiler treats lines containing "D" in column 1 as executable statements (ignoring the "D").
–Mnodlines
the compiler does not treat lines containing "D" in column 1 as executable statements (does not ignore the "D").
–Mdollar,char
char specifies the character to which the compiler maps the dollar sign. The compiler allows the dollar sign in names.
–Mextend
the compiler accepts 132-column source code; otherwise it accepts 72-column code.
–Mfixed
the compiler assumes input source files are in FORTRAN 77-style fixed form format.
–Mfree
the compiler assumes the input source files are in Fortran 90/95 freeform format.
–Miomutex
the compiler generates critical section calls around Fortran I/O statements.
–Mnoiomutex
the compiler does not generate critical section calls around Fortran I/O statements.
–Monetrip
the compiler forces each DO loop to execute at least once. This option is useful for programs written for earlier versions of Fortran.
–Mnoonetrip
the compiler does not force each DO loop to execute at least once.
–Msave
the compiler assumes that all local variables are subject to the SAVE statement. Note that this may allow older Fortran programs to run, but it can greatly reduce performance.
–Mnosave
the compiler does not assume that all local variables are subject to the SAVE statement.
–Mstandard
the compiler flags non-ANSI-conforming source code.
–Munixlogical
directs the compiler to treat logical values as true if the value is non-zero and false if the value is zero (UNIX F77 convention). When –Munixlogical is enabled, a logical value or test that is non-zero is .TRUE., and a value or test that is zero is .FALSE.. In addition, the value of a logical expression is guaranteed to be one (1) when the result is .TRUE..
–Mnounixlogical
directs the compiler to use the VMS convention for logical values: odd values are true and even values are false.
–Mupcase
the compiler preserves uppercase letters in identifiers. With –Mupcase, the identifiers "X" and "x" are different. Keywords must be in lower case. This selection affects the linking process: if you compile and link the same source code using –Mupcase on one occasion and –Mnoupcase on another, you may get two different executables, depending on whether the source contains uppercase letters. The standard libraries are compiled using the default –Mnoupcase.
–Mnoupcase
the compiler converts all identifiers to lower case. This selection affects the linking process: if you compile and link the same source code using –Mupcase on one occasion and –Mnoupcase on another, you may get two different executables, depending on whether the source contains uppercase letters. The standard libraries are compiled using –Mnoupcase.
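Several of the language-control options in this section are typically combined when porting legacy code. A sketch, with a hypothetical file name:

```shell
# Hypothetical sketch: compile legacy FORTRAN 77 code that assumes
# one-trip DO loops, SAVEd local variables, and 132-column
# fixed-form source.
pgfortran -Mfixed -Mextend -Monetrip -Msave -o legacy legacy.f
```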
Inlining Controls
This section describes the –M options that control function inlining. Before looking at all the options, let’s look at a couple of examples. Usage: In the following example, the compiler extracts functions that have 500 or fewer statements from the source file myprog.f and saves them in the file extract.il. $ pgfortran -Mextract=500 -o extract.il myprog.f
In the following example, the compiler inlines functions with fewer than approximately 100 statements in the source file myprog.f. $ pgfortran -Minline=size:100 myprog.f
Related options: –o, –Mextract
The following list provides the syntax for each –M option that controls function inlining. Each option has a description and, if appropriate, a list of any related options.
–M[no]autoinline[=option[,option,...]]
instructs the compiler to inline [not to inline] a C/C++ function at –O2, where the option can be any of these:
levels:n
instructs the compiler to perform n levels of inlining. The default number of levels is 10.
maxsize:n
instructs the compiler not to inline functions of size > n. The default size is 100.
totalsize:n
instructs the compiler to stop inlining when the size equals n. The default size is 800.
–Mextract[=option[,option,...]]
Extracts functions from the file indicated on the command line and creates or appends to the specified extract directory, where option can be any of:
name:func
instructs the extractor to extract function func from the file.
size:number
instructs the extractor to extract functions with number or fewer statements from the file.
lib:filename.ext
Use directory filename.ext as the extract directory (required in order to save and re-use inline libraries).
If you specify both name and size, the compiler extracts functions that match func, or that have number or fewer statements. For examples of extracting functions, see Chapter 8, “Using Function Inlining”.
–Minline[=option[,option,...]]
This passes options to the function inliner, where the option can be any of these:
except:func
instructs the inliner to inline all eligible functions except func, a function in the source text. Multiple functions can be listed, comma-separated.
[name:]func
instructs the inliner to inline the function func. The func name should be a non-numeric string that does not contain a period. You can also use a name: prefix followed by the function name. If name: is specified, what follows is always the name of a function.
[lib:]filename.ext
instructs the inliner to inline the functions within the library file filename.ext. The compiler assumes that a filename.ext option containing a period is a library file. Create the library file using the –Mextract option. You can also use a lib: prefix followed by the library name. If lib: is specified, no period is necessary in the library name. Functions from the specified library are inlined. If no library is specified, functions are extracted from a temporary library created during an extract prepass.
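The extract-then-inline workflow using an inline library might be sketched as follows; the file names are hypothetical.

```shell
# Hypothetical sketch of the two-step inline-library workflow: first
# extract inlinable functions into a library directory with -Mextract,
# then inline from that library when building the application.
pgfortran -Mextract=lib:mylib.il utils.f90
pgfortran -Minline=lib:mylib.il -o myapp main.f90 utils.f90
```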
levels:number
instructs the inliner to perform number levels of inlining. The default number is 1.
[no]reshape
instructs the inliner to allow (disallow) inlining in Fortran even when array shapes do not match. The default is –Minline=noreshape, except with –Mconcur or –mp, where the default is –Minline=reshape.
[size:]number instructs the inliner to inline functions with number or fewer statements. You can also use a size: prefix followed by a number. If size: is specified, what follows is always taken as a number. If you specify both func and number, the compiler inlines functions that match the function name or have number or fewer statements. For examples of inlining functions, refer to Chapter 8, “Using Function Inlining”. To set this option in PVF, use the Fortran | Optimization | Inlining property, described in “Inlining,” on page 334.
Optimization Controls
This section describes the –M options that control optimization. Before looking at all the options, let’s look at the defaults. Default: For arguments that you do not specify, the default optimization control options are as follows: depchk
noipa
nounroll
nor8
i4
nolre
novect
nor8intrinsics
nofprelaxed
noprefetch
Note If you do not supply an option to –Mvect, the compiler uses defaults that are dependent upon the target system. Usage: In this example, the compiler invokes the vectorizer with use of packed SSE instructions enabled. $ pgfortran -Mvect=sse -Mcache_align myprog.f
Related options: –g, –O
The following list provides the syntax for each –M option that controls optimization. Each option has a description and, if appropriate, a list of any related options.
–Mcache_align
Align unconstrained objects of length greater than or equal to 16 bytes on cache-line boundaries. An unconstrained object is a data object that is not a member of an aggregate structure or common block. This option does not affect the alignment of allocatable or automatic arrays.
Note
To effect cache-line alignment of stack-based local variables, the main program or function must be compiled with –Mcache_align.
–Mconcur[=option[,option,...]]
Instructs the compiler to enable auto-concurrentization of loops. If –Mconcur is specified, multiple processors will be used to execute loops that the compiler determines to be parallelizable, where option is one of the following:
allcores
Instructs the compiler to use all available cores. Use this option at link time.
[no]altcode:n
Instructs the parallelizer to generate alternate serial code for parallelized loops. If altcode is specified without arguments, the parallelizer determines an appropriate cutoff length and generates serial code to be executed whenever the loop count is less than or equal to that length. If altcode:n is specified, the serial altcode is executed whenever the loop count is less than or equal to n. If noaltcode is specified, the parallelized version of the loop is always executed regardless of the loop count.
bind
Instructs the parallelizer to bind threads to cores. Use this option at link time.
cncall
Calls in parallel loops are safe to parallelize. Loops containing calls are candidates for parallelization. Also, no minimum loop count threshold must be satisfied before parallelization will occur, and last values of scalars are assumed to be safe.
dist:block
Parallelize with block distribution (this is the default). Contiguous blocks of iterations of a parallelizable loop are assigned to the available processors.
dist:cyclic
Parallelize with cyclic distribution. The outermost parallelizable loop in any loop nest is parallelized. If a parallelized loop is innermost, its iterations are allocated to processors cyclically. For example, if there are 3 processors executing a loop, processor 0 performs iterations 0, 3, 6, etc.; processor 1 performs iterations 1, 4, 7, etc.; and processor 2 performs iterations 2, 5, 8, etc.
[no]innermost
Enable parallelization of innermost loops. The default is to not parallelize innermost loops, since it is usually not profitable on dual-core processors.
noassoc
Disables parallelization of loops with reductions.
When linking, the –Mconcur switch must be specified or unresolved references will result. The NCPUS environment variable controls how many processors or cores are used to execute parallelized loops.
To set this option in PVF, use the Fortran | Optimization | Auto-Parallelization property, described in “AutoParallelization,” on page 335.
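An auto-parallelized build using –Mconcur and the NCPUS environment variable might be sketched as follows; the file name and cutoff value are hypothetical.

```shell
# Hypothetical sketch: auto-parallelize loops, generate serial fallback
# code for loop counts of 100 or fewer, bind threads to cores, and run
# on 4 cores via NCPUS. Note -Mconcur is required at link time as well.
pgfortran -Mconcur=altcode:100,bind -o myapp myapp.f90
NCPUS=4 ./myapp
```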
Note
This option applies only on shared-memory multi-processor (SMP) or multi-core processor-based systems.
–Mcray[=option[,option,...]]
Force Cray Fortran (CF77) compatibility with respect to the listed options. Possible values of option include:
pointer
for purposes of optimization, it is assumed that pointer-based variables do not overlay the storage of any other variable.
–Mdepchk
instructs the compiler to assume unresolved data dependencies actually conflict.
–Mnodepchk
instructs the compiler to assume potential data dependencies do not conflict. However, if data dependencies exist, this option can produce incorrect code.
–Mdse
Enables a dead store elimination phase that is useful for programs that rely on extensive use of inline function calls for performance. This is disabled by default.
–Mnodse
Disables the dead store elimination phase. This is the default.
–M[no]fpapprox[=option]
Perform certain floating-point operations using low-precision approximation. –Mnofpapprox specifies not to use low-precision fp approximation operations.
By default –Mfpapprox is not used. If –Mfpapprox is used without suboptions, it defaults to use approximate div, sqrt, and rsqrt. The available suboptions are these:
div
Approximate floating point division.
sqrt
Approximate floating point square root.
rsqrt
Approximate floating point reciprocal square root.
–M[no]fpmisalign
Instructs the compiler to allow (not allow) vector arithmetic instructions with memory operands that are not aligned on 16-byte boundaries. The default is –Mnofpmisalign on all processors.
Note
Applicable only with one of these options: –tp barcelona or –tp barcelona-64
–Mfprelaxed[=option]
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. This can result in improved performance at the expense of numerical accuracy. To set this option in PVF, use the Fortran | Floating Point Options | Floating Point Consistency property. For more information on this property, refer to “Floating Point Consistency,” on page 342. The possible values for option are:
div
Perform divide using relaxed precision.
noorder
Perform reciprocal square root (1/sqrt) using relaxed precision.
order
Perform reciprocal square root (1/sqrt) using relaxed precision.
recip
Perform reciprocal using relaxed precision.
rsqrt
Perform reciprocal square root (1/sqrt) using relaxed precision.
sqrt
Perform square root with relaxed precision.
With no options, –Mfprelaxed generates relaxed precision code for those operations that generate a significant performance improvement, depending on the target processor.
–Mnofprelaxed
(default) instructs the compiler to not use relaxed precision in the calculation of intrinsic functions.
–Mi4
the compiler treats INTEGER variables as INTEGER*4.
–Mipa=