





IBM Tivoli Composite Application Manager for Transactions V7.4 Cognos Reports README
Document Version 2.2
January 2014

What's new

This readme describes how to run the ITCAM for Transactions V7.4 Cognos data model, Cognos reports, and BIRT reports in Tivoli Common Reporting v3.1. Tivoli Common Reporting (TCR) v3.1 integrates Cognos and BIRT reporting into a single user interface by providing tools to import BIRT reports into the Cognos environment; there, BIRT reports are run in the same way as Cognos reports. The ITCAM for Transactions V7.4 Cognos reports are similar to previous releases; however, the Client Response Time Cognos reports are deprecated in this release.

Table of Contents

What's New
Section 1 Introduction
  1.1 Getting started
    1.1.1 Introduction
    1.1.2 Database Prerequisites & setup
    1.1.3 Importing the Report package into TCR
    1.1.4 Configuring the data source
    1.1.5 Data Source Sign-on Name
    1.1.6 Running a Report
    1.1.7 ITCAM for Transactions (Query) package
    1.1.8 ITCAM for Transactions (Analysis) package
    1.1.9 Editing the Data Model in IBM Cognos Framework Manager
    1.1.10 Importing BIRT reports into Cognos
    1.1.11 Setting up JDBC data sources for BIRT reports
    1.1.12 Running BIRT reports
Section 2 Sample Reports
  2.1 Application Scorecard
    2.1.1 Summary
    2.1.2 User Experience
    2.1.3 Errors
    2.1.4 Clients
    2.1.5 Impacted Users
    2.1.6 Transactions
    2.1.7 Backend Servers
    2.1.8 Web Servers
    2.1.9 Network
  2.2 Robotic Steps and Performance
    2.2.1 Robotic Steps Summary
    2.2.2 Robotic Steps Client Breakdown
    2.2.3 Robotic Steps Detailed Breakdown
    2.2.4 Robotic Steps Script Summary
  2.3 User Analysis
    2.3.1 User Analysis
  2.4 Internet Service Availability
    2.4.1 Internet Service Availability
Section 3 Debugging and Known Problems
  3.1 Errors with the File System
    3.1.1 Too Much Data
    3.1.2 No Relationship Defined
  3.2 Arithmetic Overflow Errors in Ad Hoc Querying
  3.3 'No Data Available' in Ad Hoc Querying on Querying Two Tables, but Data Shows Up if the Two Tables Are Queried Individually
  3.4 Error for Missing Table or Attribute
  3.5 Failure to Create Tables in the Warehouse
  3.6 Performance Considerations

Section 1 Introduction

1.1 Getting started

1.1.1 Introduction

The ITCAM for Transactions Cognos reports are historical reports that run against raw and summarized data collected in the Tivoli Data Warehouse with any supported version of IBM Tivoli Monitoring. These reports are built to run against the ITCAM for Transactions agents, including Application Management Console, Web Response Time, Robotic Response Time, Internet Service Monitoring, and Transaction Tracking.

Tivoli Common Reporting supports multiple data sources for reports, as determined by Cognos and the operating systems supported by Jazz for Service Management. Tivoli Common Reporting 3.1 provides the reporting services in Jazz for Service Management 1.1 and supports all database servers that IBM Cognos Business Intelligence 10.2 supports. View the currently supported relational databases for each operating system for IBM Cognos Business Intelligence 10.2 here: http://www-01.ibm.com/support/docview.wss?uid=swg27027080 or, alternatively, complete the following steps:

1. Go to the Detailed system requirements for a product page: http://publib.boulder.ibm.com/infocenter/prodguid/v1r0/clarity/softwareReqsForProduct.html
2. In the Full or partial product name field, enter 'IBM Cognos Business Intelligence' and click Search. From the Search Results list, select Cognos Business Intelligence.
3. In the Version field, select '10.2'.
4. From the Scope of Report field, select Operating system family, and from the Operating system family field, select your operating system.

Tivoli Common Reporting Information Center: the reports can be administered and run on Tivoli Common Reporting v3.1. For more information about installing, configuring, and running Tivoli Common Reporting v3.1, visit the Jazz for Service Management 1.1 knowledge center: http://www-01.ibm.com/support/knowledgecenter/SSEKCU_1.1.0/com.ibm.psc.doc_1.1.0/qsg/tcr_qsg.html?lang=en

1.1.2 Database Prerequisites & setup

The Tivoli Common Reporting Cognos reports require certain tables to exist in the database for connectivity. These tables are described in section 1.1.2.1, "Enable Historical Data Collection". If all of these tables are not already created in your database, see section 1.1.2.2, "Pre-create all warehouse tables" for information about creating every ITCAM for Transactions database table (including summarization tables). After you create the tables, run a script provided with the reports to create indexes that improve database performance (see section 1.1.2.3, "Add database performance indexes"). Finally, some specific tables required by the Tivoli Common Reporting Cognos reports are created using a script described in section 1.1.2.4, "Set up database with Tivoli Reporting and Analytics Model tables."

1.1.2.1 Enable Historical Data Collection

ITCAM for Transactions agents must be installed and configured to run with historical data collection enabled, so that summarized tables are created and populated in the Tivoli Data Warehouse. At a minimum, the following tables MUST exist in the warehouse database to run the reports.

Application Management Console (raw data, hourly):
1. AMC_Application
2. AMC_Client
3. AMC_Transaction
4. AMC_Server
5. AMC_Internet_Service
6. AMC_Internet_Service_Agent
7. AMC_Internet_Service_Element

Client Response Time (raw data, hourly):
1. CRT_Application_Status
2. CRT_Transaction_Status

Note: Client Response Time is deprecated in ITCAM for Transactions V7.4. Because the Cognos reports rely on all components being installed, you must run scripts from the integration package, included in the Cognos report package, to create dummy Client Response Time tables in the Tivoli Data Warehouse. Depending on which database you use, in utilities/mssql, utilities/db2, or utilities/oracle, run the following scripts to create the tables:
1. Run tdw_schema_table.sql
2. Run tdw_schema_view.sql

Web Response Time (raw data, hourly, daily):
1. WRT_Application_Status
2. WRT_Transaction_Status
3. WRT_User_Sessions

Robotic Response Time (raw data, hourly):
1. RRT_Application_Status
2. RRT_Transaction_Status
3. RRT_SubTransaction_Status
4. RRT_Robotic_Playback_Events (raw data only)

Transaction Tracking (raw data, hourly):
1. Aggregates
2. Interactions

If any of these tables are not present, some report pages will fail to run.

1.1.2.2 Pre-create all warehouse tables

If you do not already have all of the required warehouse and summarization tables defined in the ITCAM for Transactions database (for example, if you only have the Robotic Response Time agent installed), you can create them by running scripts provided in the reports package; a quick way to check which required tables already exist is sketched below. Note that these scripts create every possible ITCAM for Transactions table.
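Before running the pre-create scripts, you can confirm which of the required tables are already present. The following is a minimal sketch for DB2 only; the ITMUSER schema name is an assumption, so replace it with your warehouse user, and adapt the catalog query for Oracle (ALL_TABLES) or SQL Server (sys.tables):

   -- Hedged sketch (DB2): list which of the required report tables already exist.
   -- 'ITMUSER' is an assumed schema name; replace it with your warehouse user.
   SELECT TABNAME
     FROM SYSCAT.TABLES
    WHERE TABSCHEMA = 'ITMUSER'
      AND (TABNAME LIKE 'AMC%' OR TABNAME LIKE 'CRT%'
        OR TABNAME LIKE 'WRT%' OR TABNAME LIKE 'RRT%'
        OR UPPER(TABNAME) IN ('AGGREGATES', 'INTERACTIONS'))
    ORDER BY TABNAME;

Any table from the lists in section 1.1.2.1 that is missing from the result set must be created by the scripts that follow.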
The scripts to pre-create all warehouse tables are in the \utilities directory. The scripts to set up the database with Tivoli Reporting and Analytics Model tables are in the \schema directory. The following list displays the files you will see in the reports package:

cognos\sql\utilities\mssql\itcamt_mssql_indexes.sql
cognos\sql\utilities\mssql\mssql_quotedIdentifier.sql
cognos\sql\utilities\mssql\tdw_schema_function.sql
cognos\sql\utilities\mssql\tdw_schema_index.sql
cognos\sql\utilities\mssql\tdw_schema_insert.sql
cognos\sql\utilities\mssql\tdw_schema_table.sql
cognos\sql\utilities\mssql\tdw_schema_view.sql
cognos\sql\utilities\db2\itcamt_db2_indexes.sql
cognos\sql\utilities\db2\tdw_schema_function.sql
cognos\sql\utilities\db2\tdw_schema_index.sql
cognos\sql\utilities\db2\tdw_schema_insert.sql
cognos\sql\utilities\db2\tdw_schema_table.sql
cognos\sql\utilities\db2\tdw_schema_view.sql
cognos\sql\utilities\oracle\itcamt_oracle_indexes.sql
cognos\sql\utilities\oracle\tdw_schema_function.sql
cognos\sql\utilities\oracle\tdw_schema_index.sql
cognos\sql\utilities\oracle\tdw_schema_insert.sql
cognos\sql\utilities\oracle\tdw_schema_table.sql
cognos\sql\utilities\oracle\tdw_schema_view.sql
cognos\sql\schema\mssql\clean.sql
cognos\sql\schema\mssql\createProcedure.sql
cognos\sql\schema\mssql\createSchema.sql
cognos\sql\schema\mssql\populateTimeDimension.sql
cognos\sql\schema\mssql\README.txt
cognos\sql\schema\db2\clean.db2
cognos\sql\schema\db2\create_schema_IBM_TRAM.db2
cognos\sql\schema\db2\gen_time_dim_granularity_min.db2
cognos\sql\schema\db2\README.txt
cognos\sql\schema\oracle\clean.sql
cognos\sql\schema\oracle\create_IBM_TRAM.sql
cognos\sql\schema\oracle\create_schema.sql
cognos\sql\schema\oracle\gen_time_dim_granularity_hr.sql
cognos\sql\schema\oracle\grant_IBM_TRAM.sql
cognos\sql\schema\oracle\local_setup_IBM_TRAM.sql
cognos\sql\schema\oracle\populateLookup.sql
cognos\sql\schema\oracle\populateTimeDimension.sql
cognos\sql\schema\oracle\setup_IBM_TRAM.sql
cognos\sql\schema\oracle\README.txt

DB2:

1. Log in as the DB2 instance owner and source the db2profile. For example:

   $ su - db2inst1
   [db2inst1@81 ~]$ cd sqllib/
   [db2inst1@81 sqllib]$ . ./db2profile

2. Connect to the DB2 warehouse database as the ITM Tivoli Data Warehouse user and run these SQL scripts in the following order:

   utilities\db2\tdw_schema_table.sql
   utilities\db2\tdw_schema_view.sql
   utilities\db2\tdw_schema_insert.sql
   utilities\db2\tdw_schema_function.sql
   utilities\db2\itcamt_db2_indexes.sql
   utilities\db2\tdw_schema_index.sql

The following example shows how to log in as the DB2 instance owner, source the db2profile, and run the SQL scripts:

   $ su - db2inst1
   [db2inst1@81 ~]$ cd sqllib/
   [db2inst1@81 sqllib]$ . ./db2profile
   [db2inst1@81 sqllib]$ cd /tmp/tcr
   [db2inst1@81 tcr]$ db2 connect to warehous user itmuser using <password>
   [db2inst1@81 tcr]$ db2 -tf tdw_schema_table.sql
   [db2inst1@81 tcr]$ db2 -tf tdw_schema_view.sql
   [db2inst1@81 tcr]$ db2 -tf tdw_schema_insert.sql
   [db2inst1@81 tcr]$ db2 -td# -f tdw_schema_function.sql
   [db2inst1@81 tcr]$ db2 -tf itcamt_db2_indexes.sql
   [db2inst1@81 tcr]$ db2 -tf tdw_schema_index.sql

The new database tables, views, and indexes are created in the schema of the logged-in user.
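Optionally, you can confirm from the same DB2 connection that the tables and performance indexes were created before moving on. A minimal sketch; the table-name patterns are taken from the lists above, and SYSCAT.INDEXES is the standard DB2 catalog view:

   -- Hedged sketch (DB2): list the indexes created on the ITCAM for Transactions tables.
   -- CURRENT SCHEMA resolves to the user you connected as (itmuser in the example above).
   SELECT INDNAME, TABNAME
     FROM SYSCAT.INDEXES
    WHERE TABSCHEMA = CURRENT SCHEMA
      AND (TABNAME LIKE 'AMC%' OR TABNAME LIKE 'WRT%' OR TABNAME LIKE 'RRT%')
    ORDER BY TABNAME, INDNAME;

An empty result suggests that the index scripts did not run against the schema you expected.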
Oracle:

Connect to the Oracle warehouse database as the ITM Tivoli Data Warehouse user and run these SQL scripts in the following order:

1. utilities\oracle\tdw_oracle_schema.sql
2. utilities\oracle\tdw_oracle_function.sql

For example:

   [oracle]$ sqlplus
   SQL*Plus: Release 11.1.0.6.0 - Production on Wed Mar 10 13:36:29 2010
   Copyright (c) 1982, 2007, Oracle. All rights reserved.
   Enter user-name: itmuser
   Enter password:
   Connected to:
   Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
   With the Partitioning, OLAP, Data Mining and Real Application Testing options
   SQL> @tdw_oracle_schema.sql
   SQL> @tdw_oracle_function.sql

The new database tables, views, and indexes are created in the tablespace of the logged-in user.

Microsoft® SQL Server:

Connect to the MS SQL warehouse database as the ITM Tivoli Data Warehouse user and run these SQL scripts in the following order:

1. utilities\mssql\tdw_mssql_schema.sql
2. utilities\mssql\tdw_mssql_function.sql

For example, on SQL Server 2000:

   C:\>isql -i tdw_mssql_schema.sql -U ITMuser -P <password> -d WAREHOUS
   C:\>isql -i tdw_mssql_function.sql -U ITMuser -P <password> -d WAREHOUS

On SQL Server 2005:

   C:\>sqlcmd -i tdw_mssql_schema.sql -U ITMuser -P <password> -d WAREHOUS
   C:\>sqlcmd -i tdw_mssql_function.sql -U ITMuser -P <password> -d WAREHOUS

The owner of these tables must be ITMUser.

1.1.2.3 Add database performance indexes

Certain reports require database indexes to be added to improve performance. Run one of the following scripts, depending on your database type.

DB2: While connected to the WAREHOUS database as the ITM TDW owner:

   db2 -tf itcamt_db2_indexes.sql

Oracle: From the SQL*Plus command line, after connecting to the WAREHOUS database as the ITM owner:

   SQL> @itcamt_oracle_indexes.sql

Microsoft® SQL Server: From the SQL Server command line, run the following script.

For MSSQL 2000:

   isql -i itcamt_mssql_indexes.sql -U ITMuser -P <password> -d WAREHOUS

For MSSQL 2005:

   sqlcmd -i itcamt_mssql_indexes.sql -U ITMuser -P <password> -d WAREHOUS

1.1.2.4 Set up database with Tivoli Reporting and Analytics Model tables

In addition to the tables created in the previous sections, you must create the Tivoli Reporting and Analytics Model (TRAM) tables in the warehouse database for the data model and reports to work. TRAM tables are common tables that are shared across Tivoli products (for example, the IBM_TRAM.TIME_DIMENSION table). If you have IBM Tivoli Monitoring installed and have already run these scripts, skip to the next section. Otherwise, continue and run the script that matches your database system.

The scripts needed to create the additional tables in the warehouse are located under the \schema directory on the installation media. These scripts are separated into the following folders, one for each supported database type:
 db2
 mssql
 oracle

Follow these instructions to set up the TRAM tables, or refer to the readme available here: schema\db2\README.txt.

DB2:

Use the scripts to create the following items:
1. Schema IBM_TRAM.
2. Table TIME_DIMENSION: time dimensional data spanning up to several years, with a granularity of N minutes (you specify N). Each row of this table is a unique minute key with various related dimensions, such as hour, weekday, day of month, and quarter.
3. Table MONTH_LOOKUP: globalized month names for the time dimension.
4. Table WEEKDAY_LOOKUP: globalized weekday names for the time dimension.
5. Other dimensions conforming to Tivoli's Common Data Model (for example, ComputerSystem, BusinessService, and SiteInfo).

You will need administrative access to create the IBM_TRAM schema.
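As a reference point: after you complete the creation and population steps that follow, you can sanity-check the time dimension with a simple row count. A minimal sketch in DB2 syntax; the expected figure assumes a two-year range at 5-minute granularity, so adjust it for your own dates:

   -- Hedged sanity check: run after TIME_DIMENSION has been populated (steps below).
   -- Two years at 5-minute granularity is about 2 * 365 * 24 * 12 = 210,240 rows.
   SELECT COUNT(*) AS ROWS_LOADED FROM IBM_TRAM.TIME_DIMENSION;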
Log in as the instance owner and source the db2profile. Connect to the database in which you want to create these dimension tables; it can be your Tivoli Data Warehouse or any other database. (For example, use the following command for DB2: db2 connect to WAREHOUS)

1. Create the schema and tables by running the following command:

   db2 -tf create_schema_IBM_TRAM.db2

   After this command runs successfully you should see several tables under IBM_TRAM, such as TIME_DIMENSION, MONTH_LOOKUP, WEEKDAY_LOOKUP, ComputerSystem, BusinessService, and SiteInfo.

2. Populate the TIME_DIMENSION table. Run the following command to create the stored procedure that generates the time dimension:

   db2 -td@ -vf gen_time_dim_granularity_min.db2

3. Call the time dimension stored procedure with start and end dates and a granularity to generate the timestamps. For example, the following command generates timestamps from 01/01/2010 to 12/31/2012 at a 5-minute granularity. You can generate up to 5 years at a time, or have it regenerate the data every day.

   db2 "call IBM_TRAM.CREATE_TIME_DIMENSION('2010-01-01-00.00.00.000000','2012-12-31-00.00.00.000000', 5)"

   Calling this stored procedure might require the size of the database log file to be increased. If the DB2 message "SQL0964C The transaction log for the database is full." is encountered, the LOGFILSIZ DB2 parameter might need to be increased using the following command:

   db2 UPDATE DB CFG USING LOGFILSIZ 10000

   A value of 10000 should be enough to create a time dimension of five years at a five-minute interval. Increase the value and rerun if necessary.

To clean up the database changes, run clean.sql.

Oracle:

MANUAL INSTALLATION:

Before starting, determine whether you can connect remotely as the SYS or SYSTEM user. If you can, perform the steps in (a); if you cannot, perform the steps in (b).

a) REMOTE CONNECTION
   1) Start sqlplus.
   2) Run the script @<path>\setup_IBM_TRAM.sql and provide all information that the script requires.

b) LOCAL CONNECTION
   1) Start sqlplus.
   2) Run the script @<path>\local_setup_IBM_TRAM.sql and provide all information that the script requires.

BATCH INSTALLATION:

The setup scripts above are interactive, prompting you for installation details and then calling the appropriate scripts. To install in "batch mode" from inside your own installer, call the following scripts in the order shown below. NOTE: Take care to ensure that you enter the parameters correctly and that the scripts create the proper connections (the scripts run as different users).

1) Create user IBM_TRAM. The script must be called by a user with system rights (for example, SYS or SYSTEM).
   create_IBM_TRAM.sql
   Parameters:
   - password for the new user IBM_TRAM
   - default user tablespace name (must already exist)
   - default temporary tablespace name (must already exist)

2) Create the IBM_TRAM tables. The script must be called by the IBM_TRAM user (created in the previous step).
   create_schema.sql
   Parameter:
   - default user tablespace name (must already exist)

3) Grant privileges to your user (for example, ITMUSER). The script must be called by the IBM_TRAM user.
   grant_IBM_TRAM.sql
   Parameter:
   - name of the user that is to receive the grants

4) Create the procedure. The script must be called by the IBM_TRAM user.
   gen_time_dim_granularity_hr.sql

5) Load the lookup data. The script must be called by the IBM_TRAM user.
   populateLookup.sql

6) Generate the time dimension. Run the script as the IBM_TRAM user, passing parameters in the format shown in the example below.
   populateTimeDimension.sql

   Parameters (varchar):
   - StartDate: varchar in format 'yyyy-mm-dd HH:MM'
   - EndDate: varchar in format 'yyyy-mm-dd HH:MM'
   - Granularity (in minutes): varchar

   For example:
   populateTimeDimension.sql '2010-12-31 00:00' '2012-12-31 00:00' '60'

To clean up the database, run clean.sql (as the system user; no one can be logged in as the IBM_TRAM user).

MSSQL:

1. Customize the scripts:

   a) createSchema.sql
      Modify the default database name in the USE statement ('USE IBM_TRAM') if it is different from the default.
   b) createProcedure.sql
      Modify the default database name in the USE statement ('USE IBM_TRAM') if it is different from the default.
   c) populateTimeDimension.sql
      Modify the default database name in the USE statement ('USE IBM_TRAM') if it is different from the default. Modify the boundaries for the time dimension and the granularity parameter. If Monday must be the first day of the week, add a fourth parameter equal to 1; otherwise, leave just the three parameters.
   d) clean.sql
      Modify the default database name in the USE statement ('USE IBM_TRAM') if it is different from the default.

2. Run the scripts in the following order using the MS SQL command line (sqlcmd -i <script> [-U <user> -P <password>] [-H <host>]):

   a) createSchema.sql
   b) createProcedure.sql
   c) populateTimeDimension.sql

To clean up the database, run clean.sql.

1.1.3 Importing the Report package into Tivoli Common Reporting

The ITCAM for Transactions reports are provided in compressed file packages named ITCAMT074000_TCR_Cognos.zip for DB2 and Oracle, and ITCAMT074000_TCR_Cognos_MSSQL.zip for Microsoft SQL Server, located in the integration/tcr/cognos folder on the installation media. Import the reports package into Tivoli Common Reporting by completing the following steps:

1. Unpack the ITCAMT074000_TCR_Cognos.zip or ITCAMT074000_TCR_Cognos_MSSQL.zip package for your database type. Each of these packages contains two additional compressed packages: the ITCAM for Transactions (Analysis) package contains the dimensional model, which allows drill-through capabilities; the ITCAM for Transactions (Query) package contains the relational model. The ITCAM for Transactions reports are located in the (Query) package.
2. Copy both of these packages into the following directory: C:\Program Files\IBM\JazzSM\reporting\cognos\deployment
3. Access Tivoli Common Reporting v3.1 by pointing your Web browser to http://<hostname>:16310/ibm/console.
4. Log in to Tivoli Common Reporting v3.1 using a valid user name and password.
5. In the left navigation pane, expand the Reporting node and select Common Reporting.
6. On the top right of the screen, click Launch → Administration.
7. Select the Configuration tab.
8. Select Content Administration.
9. Click the New Import icon.
10. The ITCAM for Transactions packages should be displayed, as shown in the following figure. Select the ITCAM for Transactions (Analysis) package to be imported, and click Next.
11. You are prompted for a name for the deployment specification (it is filled in by default) and an optional description and screen tip. Accept the defaults and click Next.
12. Select the check box next to the public folders content name and click Next.
13. Accept the defaults for general options and click Next.
14. Confirm the summary information and click Next to perform the import operation.
15. Select the Save and run once option and click Finish.
16. Run the import step at the end.

Repeat these steps to import the ITCAM for Transactions (Query) package. Additional information about importing reports can be found in the Tivoli Common Reporting Information Center.
After the Import operation completes, click the Home icon to display the imported packages in the Public Folders listing.

1.1.4 Configuring the data source

For a quick setup, you can follow the instructions below. Essentially there are two steps:
 Connect to the warehouse database
 Create a data source for the warehouse database

For more details on how to create and configure your data source, see the Tivoli Common Reporting knowledge center: http://www-01.ibm.com/support/knowledgecenter/SSEKCU_1.1.0/com.ibm.psc.doc_1.1.0/qsg/tcr_qsg.html?lang=en

1.1.4.1 Connecting to your warehouse database

First, you need to be able to connect to your warehouse database. If your Tivoli warehouse database is running on a different server than Cognos, you need to have a native database client installed and a local database alias created. The setup might vary depending on the database type.

Connecting to a remote DB2 database

If your Tivoli warehouse database is running on a different server than Cognos, run the following DB2 commands to connect to the remote warehouse database:

   db2 catalog tcpip node <node> remote host.domain.com server 50000
   db2 catalog database WAREHOUS as <alias> at node <node> authentication server

In the above command examples, <node> is the short hostname of the server where your warehouse database is running (host.domain.com is its fully qualified hostname), <alias> can be the same as WAREHOUS or any name of your choice, and WAREHOUS is your warehouse database name.

Example:

   db2 => catalog tcpip node dbserver remote dbserver.domain.com server 50000
   db2 => catalog database WAREHOUS as WAREHOU1 at node dbserver authentication server

Connecting to an Oracle database

Install the required Oracle Client software on the machine where Tivoli Common Reporting is installed. Using the Net Configuration Assistant, create the database alias to the remote Oracle database, then complete and finish the wizard.

Connecting to a Microsoft SQL Server database

Install the Microsoft SQL Server 2005 native client on the computer where Tivoli Common Reporting is installed. Creating a database alias is not necessary if the required connection information is provided during data source creation in the Cognos Administration page (see the next section for details).

1.1.4.2 Creating a data source for your warehouse database

After you have verified that you can connect to your database, use the Administration page in Cognos to create a data source named TDW in Cognos:

1. From the Common Reporting portlet, go to the Launch drop-down list and select Administration.
2. In the Administration page, select the Configuration tab.
3. Select Data Source Connections.
4. Select New Data Source.
5. Enter TDW as the data source name.
6. Select your database type.
7. Specify your warehouse database name or your local alias name.
8. Specify a valid user name and password.

See the examples in the following sections.
Creating a DB2 data source

The following figures show an example of creating the data source for DB2.

Creating an Oracle data source

To create an Oracle data source, select Oracle as the database type, as shown in the following figure. In the connections page, for the SQL*Net connection string, specify the Service Name (WAREHOUS) provided in the Net Configuration Assistant during the Oracle alias creation, as shown in the following figure.

Creating a Microsoft SQL Server data source

To create a Microsoft SQL Server data source connection using the native client, select Microsoft SQL Server (SQL 2005 Native Client) as the database type, as shown in the following figure. In the Connections page, for the Server name field, specify the hostname of the SQL Server, similar to the example in the following figure. For the Database name field, specify the warehouse database name (usually WAREHOUS), and specify the sign-on information as needed. After successful creation, you should see the data source displayed similar to the following example.

1.1.5 Data Source Sign-on Name

If the user name that is configured in the data source Sign-on page is different than the schema or tablespace name in the database, additional configuration is needed:

1. In the IBM Cognos Connection page, select Configuration.
2. Select Data Source Connections.
3. Click the data source name (for example, TDW).
4. Under Actions -> More, click Set properties.
5. In the Connection tab, click the Commands link to expand that section.
6. Click the Set link for the Open connection commands line and add the following XML database command, where itmuser is the name of the database schema or tablespace: SET CURRENT SCHEMA itmuser
7. Click OK.
8. Select the Open connection commands check box.
9. Click OK.

1.1.6 Running a Report

After you have set up your environment with historical data being collected in your warehouse database, you can run the reports by going to the Tivoli Common Reporting Cognos home page. Select the ITCAM for Transactions (Query) package. The list of available reports is displayed, similar to the following example. Click a report name to run that report. You can also click the green arrow icon to the right of a report to run it with options, such as changing the output format. Further instructions on running Cognos reports can be found in the Tivoli Common Reporting Information Center.

1.1.7 ITCAM for Transactions (Query) package

You can use this package to create ad hoc reports in Query Studio, or to create reports in Report Studio. The ITCAM for Transactions relational data model, which is included in this package, is a star schema with dimensions separated from facts, or metrics. Metrics are measurable numeric attributes that can be aggregated by dimensions. The relationships between the metric tables are defined using various dimensions, including the common dimension, Time.

Data Model: The ITCAM for Transactions agents collect a variety of metrics. The dimensions and metrics are organized under separate namespaces for each agent for easier navigation in the tree. For example, the following figure shows Response Time dimensions along with metrics defined for each Response Time agent. You can see either the raw metrics or summarized (hourly, daily, etc.) metrics. The metrics are organized in two sets, Base Metrics and Extended Metrics. Extended Metrics include all metrics, while Base Metrics is a subset of Extended Metrics and contains only commonly used metrics.
Using Base Metrics makes metric selection easier because there are fewer metrics from which to choose.

Dimensions: Dimensions are used to link the metrics data across agents. Each ITCAM for Transactions agent has a set of dimensions to use with its data. The following tables summarize these dimensions with descriptions, and specify the tables that are related to each dimension. When you select a dimension (typically by left-clicking the dimension and dragging it to the report area with your mouse cursor) followed by a metric, the metric must come from one of the tables related to the dimension; otherwise no data is displayed.

Response Time Dimensions:

Dimension: Time
Description: This is a Tivoli Reporting and Analytics Model shared dimension and includes various attributes of time by which the metrics can be grouped, such as Date (08/14/2009), Standard Timestamp (08/14/2009 12:00 AM), Weekday (Friday), Month (August), Quarter (3), and Year (2009). This has a relationship to the metric tables for all agents.
Related Metrics Tables: Metric tables for all agents.

Dimension: Applications
Description: Includes Application, listing unique application names from the AMC_Application table.
Related Metrics Tables: Raw data tables: Client Transactions. Raw data and summarization tables: AMC_Application, AMC_Agent, AMC_Client, AMC_Server, AMC_Transaction, WRT_Application_Status, WRT_Transaction_Status, WRT_User_Sessions, RRT_Application_Status, RRT_Transaction_Status, RRT_Robotic_Playback_Status, RRT_Robotic_Playback_Events

Dimension: Transactions
Description: Includes unique application and transaction names from the Client, Web and Robotic agent Transaction Status tables, joined as unions. Can be used to show subtransactions data.
Related Metrics Tables: Raw data tables: WRT_Transaction_Instance, WRT_SubTransaction_Instance, RRT_Transaction_Instance, RRT_SubTransaction_Instance. Raw data and summarization tables: WRT_SubTransaction_Status, RRT_SubTransaction_Status

Dimension: Clients
Description: Includes Client, which lists unique client names from the AMC_Client table.
Related Metrics Tables: Raw data and summarization tables: AMC_Client, WRT_Transaction_Status, WRT_Client_Status, RRT_Transaction_Status

Dimension: Servers
Description: Includes Server, which lists unique server names from the AMC_Server table.
Related Metrics Tables: Raw data and summarization tables: AMC_Server, WRT_Transaction_Status, WRT_Server_Status, RRT_Transaction_Status

Dimension: Robotic Scripts
Description: Includes unique names for application, client and transaction from the RRT_Transaction_Status_H (hourly) table. Use this to show robotic steps (first-level subtransactions) data.
Related Metrics Tables: Raw data tables: RRT_Robotic_Playback_Status, RRT_SubTransaction_Instance. Raw data and summarization tables: RRT_Transaction_Status, RRT_SubTransaction_Status, RRT_Robotic_Playback_Events

Dimension: User Application Client
Description: Includes unique names for user, application and client from the WRT_User_Sessions_H table. Use this to show unique users and related performance data.
Related Metrics Tables: Raw data and summarization tables: WRT_User_Sessions

Dimension: Components
Description: Includes unique component names from WRT_TCP_Status.
Related Metrics Tables: Raw data and summarization tables: WRT_TCP_Status

Dimension: Transaction Groups
Description: Includes unique transaction group names from WRT_Trans_Group.
Related Metrics Tables: Raw data and summarization tables: WRT_Trans_Group, WRT_Trans_Group_Instance, WRT_Trans_Step, WRT_Trans_Step_Instance, WRT Raw Trans Instance

Dimension: Internet Service Profiles
Description: Includes unique names for profile, host, service, element and agent from the AMC_Internet_Service_Agent table. Use this to show internet service monitoring data collected by the AMC.
Related Metrics Tables: Raw data and summarization tables: AMC_Internet_Service, AMC_Internet_Service_Agent, AMC_Internet_Service_Element

Internet Service Monitoring Dimensions:

Dimension: Time
Description: This is a shortcut to the Tivoli Reporting and Analytics Model shared dimension used in the Response Time dimensions.
Related Metrics Tables: Metric tables for all agents.

Dimension: Internet Service Profiles
Description: Includes unique names for profile, host, service, element and agent from the AMC_Internet_Service_Agent table. Use this to show data from the individual internet service monitoring tables.
Related Metrics Tables: Raw data and summarization tables: KIS_HOST_STATISTICS (Host), KIS_SERVICE_INSTANCE_STATISTICS (Profile, Service), KIS_SERVICE_STATISTICS (Service), KIS_HTTP (Profile, Host, Service), KIS_ICMP (Profile, Host, Service). Parentheses show the relationship attributes.

Note: There are no dimensions defined for KIS_MONITOR_STATUS. To see data from this table, drag the service type and metric from the table.

Transaction Tracking Dimensions:

Dimension: Time
Description: This is a shortcut to the Tivoli Reporting and Analytics Model shared dimension used in the Response Time dimensions.
Related Metrics Tables: Metric tables for all agents.

Dimension: Servers
Description: Includes server names from the Aggregates table. There might be duplicate names depending on collected data. Use this to show data for servers. Data from the Aggregates table shows aggregates for the server, and data from the Interactions table shows interactions for the server.
Related Metrics Tables: Raw data and summarization tables: Aggregates, Interactions

Dimension: Components
Description: Includes component names from the Aggregates table. There might be duplicate names depending on collected data. Note: There are two sets of metrics, Source Interactions (Interactions from Server) and Destination Interactions (Interactions to Server). Same usage as Servers.
Related Metrics Tables: Same as Servers.

Dimension: Applications
Description: Includes application names from the Aggregates table. There might be duplicate names depending on collected data. Same usage as Servers.
Related Metrics Tables: Same as Servers.

Dimension: Transactions
Description: Includes transaction names from the Aggregates table. There might be duplicate names depending on collected data. Same usage as Servers.
Related Metrics Tables: Same as Servers.

Dimension: Server Components
Description: Includes server, component and application names from the Aggregates table. One row per unique server, component and application is returned. This is used by the Application Scorecard report. Can only be used with the Aggregates hourly table.
Related Metrics Tables: Aggregates_H

Using Query Studio to create ad hoc reports:

When doing an ad hoc query, you can either see the data live as you drag selected metrics and dimensions into the report area, or you can switch the mode to show placeholders for the data and then run the report. To run the report with no data or limited data, select Run Report from the navigation tree and select Preview with limited data or Preview with No Data.

To start an ad hoc query, drag the Application dimension followed by any of the Key Metrics. This action groups the data by application. If you drag Application followed by Timestamp, No Data Available is shown because there is no direct relationship between the two dimensions; the dimensions are related to the metrics. For this reason, always drag the metric into the report area first, followed by dimensions.

Using Report Studio to create reports:
Alternatively, select Report Studio under the Launch menu. After editing your report, when you save it, the report is stored in the same location. For more details on how to use Query and Report Studio, refer to Cognos product manuals. More information can also be found in the Tivoli Common Reporting 3.1 knowledge center: http://www-01.ibm.com/support/knowledgecenter/SSEKCU_1.1.0/com.ibm.psc.doc_1.1.0/qsg/tcr_qsg.html?lang=en 28 1.1.8 ITCAM for Transactions (Analysis) package This package contains the dimensional model that you can use in the Query and Report studio to create reports with automatic drill through. For example, if you have your data aggregated by month, you can click on the month and it will show you data aggregated by day, and so on. These drill up and drill downs are created automatically because of the dimensional modeling. Data Model: The dimensions and metrics are organized under separate namespaces for each agent. For example the following figure shows Response Time dimensions along with the metrics. For the metrics, raw data and hourly metrics are provided. The metrics are organized in Base and Extended Metrics in the same way as the Query package. Dimensions: The Dimensions in Analysis package provides drill through capabilities. Response Time Dimensions: Dimension Time (year, quarter, month, week, day, hour, min) Description This is a Tivoli Reporting and Analytics Model shared dimension allowing drill through from month to week, from week to day, etc. This has a relationship to the metric tables for all agents. Related Metrics Tables Metric tables for all agents. 29 Drill down path: year -> quarter -> month -> week -> day -> hour -> min Applications (application, transaction, agent) Includes application, transaction and agent names from the AMC_Transaction table. Allows drill down from the application to the transaction and from the transaction to the agent. Raw data and Hourly tables: AMC_Application AMC_Application_H Drill down path: application -> transaction -> agent Servers (application, server) Includes application and transaction names from the AMC_Server table. Allows drill down from application to server. Raw data and Hourly tables: AMC_Server AMC_Server_H Drill down path: application -> server Clients (application, client, user) Includes application and client from the AMC_Client table and also includes application, client and users from the WRT_User_Sessions_H(hourly) table. Note: Data is pulled from two different tables. When an application with no user sessions data is selected, no user data is displayed. Raw data and Hourly tables: AMC_Client AMC_Client_H WRT_User_Sessions WRT_User_Sessions_H Drill down path: application -> client -> user. Internet Service Profiles (profile, service, host, element, agent) Includes profile, service, host, element and agent from the AMC_Internet_Service_Agent table. Internet Service Hosts (profile, host, service, element, agent) Includes profile, service, host, element and agent from the AMC_Internet_Service_Agent table. Robotic Transactions (application, transaction, subtransaction, agent) Includes application and transaction names from the RRT_Transaction_Status_H(hourly) table, and subtransaction and agent from the RRT_SubTransaction_Status_H(hourly) table. Raw data and Hourly tables: AMC_Internet_Service_Agent AMC_Internet_Service_Agent_H Drill down path: Profile -> service -> host -> element -> agent. 
Related Metrics Tables: Raw data and Hourly tables: AMC_Internet_Service_Agent, AMC_Internet_Service_Agent_H. Drill down path: profile -> host -> service -> element -> agent.

Dimension: Robotic Transactions (application, transaction, subtransaction, agent)
Description: Includes application and transaction names from the RRT_Transaction_Status_H (hourly) table, and subtransaction and agent from the RRT_SubTransaction_Status_H (hourly) table. Drill down path: application -> transaction -> subtransaction -> agent.
Related Metrics Tables: Raw data and Hourly tables: RRT_Transaction_Status, RRT_Transaction_Status_H, RRT_SubTransaction_Status, RRT_SubTransaction_Status_H

Transaction Tracking Dimensions:

Dimension: Time
Description: This is a shortcut to the Tivoli Reporting and Analytics Model shared dimension used in the Response Time dimensions.
Related Metrics Tables: Metric tables for all agents.

Dimension: Components (server, component, application, transaction)
Description: Includes server, component, application and transaction names from the Aggregates table. Use this dimension to see aggregated data by server, component, application and transaction from the Aggregates table. Drill down path: server -> component -> application -> transaction.
Related Metrics Tables: Raw data and Hourly tables: Aggregates, Aggregates_H

Using Query Studio to create ad hoc reports:

When doing an ad hoc query, you can see that the dimensions are hyperlinks, allowing you to click for drill-through. The example below shows a table where Month from the Time dimension and Application from the Application dimension have been added. Clicking an application shows transactions for the selected application, with response time now aggregated by application and its transactions. Now, click a month to show the data aggregated by weeks. You can keep drilling down, or drill up.

Using Report Studio to create reports:

You can create reports the same way you create reports using the Query package. The difference is that your reports will have dimensions that allow drill-through.

1.1.9 Editing the Data Model in IBM Cognos Framework Manager

You can edit the data model to add any custom models specific to your report requirements. The model files are found on the installation media as ITCAM for Transactions Model.zip (DB2 and Oracle) and /mssql/ITCAM for Transactions Model.zip (MSSQL). Unpack the appropriate compressed package and import ITCAM for Transactions Model.cpf into your Cognos Framework Manager.

Note: Do not modify any existing models and structures; only add your new models. Editing existing models might impact model upgrades to future releases.

See the Tivoli Common Reporting knowledge center for more details on connecting IBM Cognos Framework Manager with Tivoli Common Reporting: http://www-01.ibm.com/support/knowledgecenter/SSEKCU_1.1.0/com.ibm.psc.doc_1.1.0/qsg/tcr_qsg.html?lang=en

1.1.10 Importing BIRT reports into Cognos

Tivoli Common Reporting v3.1 installs TCR components into C:\Program Files\IBM\JazzSM\reporting by default. Use the trcmd command found in ~\reporting\bin to import BIRT reports into Cognos.

Command syntax:

   trcmd -import -bulk pkgFile [-reportSetBase rsBase] [-resourceBase resourceBase] [-designBase designBase] [-help]

For example, the following command imports the reports into a folder named BIRT:

   trcmd -import -bulk C:\download\ITCAMT07400000_TCR.zip -reportSetBase /content/folder[@name='BIRT'] -user tipadmin -password admin

As another example, the following command imports the reports into the Public Folder:

   trcmd -import -bulk C:\download\ITCAMT07400000_TCR.zip -reportSetBase /content -user tipadmin -password admin

1.1.11 Setting up JDBC data sources for BIRT reports

There are two methods to configure JDBC data sources for BIRT reports: using JNDI, or configuring JDBC for direct access.
JDBC for direct access is explained in this document; refer to the TCR Information Center for details on JNDI configuration.

To set up direct JDBC access, copy the required JDBC driver files to the following directory, where tcr_install_dir is the Tivoli Common Reporting installation directory (C:\Program Files\IBM\JazzSM\reporting by default):

   tcr_install_dir\lib\birt-runtime-2_2_2\ReportEngine\plugins\org.eclipse.birt.report.data.oda.jdbc_2.2.2.r22x_v20071206\drivers

For a DB2 data source, copy the DB2 JDBC drivers as well as the license jar file to the same location. You can either download the db2jcc.jar and db2jcc_license_cu.jar files from the DB2 website, or you can copy them from the following location on the DB2 server system: C:\Program Files\IBM\SQLLIB\java

Use the trcmd -modify command to set the data source for the reports. Go to the following link for more details on how to run this command: http://www-01.ibm.com/support/knowledgecenter/SSEKCU_1.1.0/com.ibm.psc.doc_1.1.0/tcr_original/ctcr_birt_reps_in_cog.html?lang=en

1.1.12 Running BIRT reports

BIRT reports are generated in the same way as Cognos reports. Run a report and check your infrastructure status. See the Tivoli Common Reporting Information Center for complete coverage of BIRT reports in Cognos: http://www-01.ibm.com/support/knowledgecenter/SSEKCU_1.1.0/com.ibm.psc.doc_1.1.0/qsg/tcr_qsg.html?lang=en

Section 2 Sample Reports

2.1 Application Scorecard

All reports require input parameters. For example, the Application Scorecard report requires you to specify a Report Day and an application name. Selection fields for these required parameters are displayed in the prompt page the first time that the report is run, similar to the following example.

The Application Scorecard report consists of several sub-reports:
 Summary
 User Experience
 Errors
 Clients
 Impacted Users
 Transactions
 Backend Servers
 Web Servers
 Network

2.1.1 Summary

Name: Summary
Description: This report is displayed in the first page of the Application Scorecard report. It displays a table view with different application metrics for the selected day and the previous day for quick comparison. The metrics are displayed in microcharts for trending information and also as a number, representing either the sum or the average for the day. This report also includes three similar stacked bar charts with average response time for different days for quick comparison of data trends.
Purpose: Gives an overview of the selected application status.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Source: AMC_Application_H, WRT_Application_Status_H, AMC_Transaction_H
Usage: The application owner can quickly compare status and data trends for today (or for the selected day) against the previous day and 7 days ago.

Chart: Application Metrics

Metric: Total Requests
Source: AMC_Application_H.SUM_Current_Requests
Definition: The total number of requests during the current data interval, displayed for the highest priority monitoring agent (if multiple agents are monitoring the same application). The order of precedence is: 1) Transaction Record, 2) Web Response Time, 3) Client Response Time, 4) Robotic Response Time. For example, if two robotic agents and a Web Response Time monitoring agent all monitor the same application, the status for the Web Response Time monitoring agent takes precedence.
Metric: Total Errors
Source: AMC_Application_H.SUM_Bad_Requests
Definition: The number of transactions that did not complete correctly or reported an error during the data interval.

Metric: Percent Available
Source: AMC_Application_H.AVG_Percent_Available
Definition: The percentage of transactions with a transaction status of Good or Slow, but not Failed. The sum of this attribute value and Percent Failed should total 100 percent. Any failure is considered important, so the table cell for this attribute is displayed in the TEP with a green background only when the value for Percent Available is 100 percent. Any value less than 100 percent is displayed in the TEP with a red background.

Metric: Percent Slow
Source: AMC_Application_H.AVG_Percent_Slow
Definition: The percentage of transactions for which requests were marked as Slow. This value is calculated by dividing Slow Requests by Total Requests and multiplying by 100%. The sum of this attribute value and the value of the Percent Good attribute should equal the value of the Percent Available attribute. Any value for this attribute that is greater than 0 percent is displayed with a yellow background in the TEP.

Metric: Percent Failed

Metric: Average Response Time
Source: AMC_Application_H.AVG_Response_Time
Definition: Reports the end user response time status as Good, Fair, or Poor on the Application Management Console.

Metric: Average Concurrent Users
Source: WRT_Application_Status_H.MAX_Average_Users
Definition: The total number of unique users for the time period. A user who experiences a Good, Failed, or Slow performance for a single Web Response Time transaction is counted once. For attribute groups that monitor a specific time interval, the value is the actual count for the time period. For the Current Status and Summary attribute groups, the values are averages.

Metric: Average Impacted Users
Source: WRT_Application_Status_H.MAX_Average_Failed_Users + WRT_Application_Status_H.MAX_Average_Slow_Users
Definition: Average_Failed_Users: The total number of unique users experiencing Failed performance (that is, a failed transaction). For example, if the user at IP address 128.1.2.3 experiences a Failed performance for a single WRT transaction, and the same user later experiences a Failed performance during the same time period, that user is counted only once in both the Failed count and the All count. For the attribute groups that monitor a specific time interval, the value is the actual count for the time period. For all Current Status and Summary attribute groups, the values are averages. Average_Slow_Users: The total number of unique users experiencing slow performance for the time period. The count includes transactions that were slower than the minimum response time threshold, but not those that failed. For example, if the user at IP address 128.1.2.3 experiences a slow performance for a single WRT transaction, and the same user later experiences slow performance during the same time period, that user is counted only once in both the Slow count and the All count. For the attribute groups that monitor a specific time interval, the value is the actual count for the time period. For the Current Status and Summary attribute groups, the values are averages.
Chart: Selected Day, Previous Day, 7 days ago

Metric: Total User Logins
Source: WRT_Application_Status_H.SUM_User_Logins

Metric: Unique Transactions
Source: count(distinct(AMC_Transaction_H.Transaction))

Metric: Failed
Source: AMC_Application_H.SUM_Bad_Requests

Metric: Slow
Source: AMC_Application_H.SUM_Slow_Requests

Metric: Good
Source: AMC_Application_H.SUM_Good_Requests

Metric: Response Time
Source: AMC_Application_H.AVG_Response_Time

See the ITCAM for Transactions knowledge center for more information about the listed attributes: http://www-01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html

2.1.2 User Experience

Name: User Experience
Description: This report is displayed in the second page of the Application Scorecard report. The report displays a table view with user related metrics for the selected day and the previous day for quick comparison. The metrics are displayed in charts for trending information and also as a number which represents summary, average, or maximum amounts. This report also has three similar stacked bar charts with user error, session, and user counts, and additional metrics for the selected day.
Purpose: Gives an overview of user activities on the selected application.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Source: AMC_Application, WRT_Application_Status_H, WRT_Transaction_Status_H, RRT_Transaction_Status_H
Usage: The application owner can quickly observe the status of user activity and see trends of any user errors.

Chart: Summary

Metric: Total Users
Source: WRT_Transaction_Status_H.MAX_Average_Users
Definition: The total number of unique users for the time period. A user who experiences a Good, Failed, or Slow performance for a single Web Response Time transaction is counted once. For attribute groups that monitor a specific time interval, the value is the actual count for the time period. For the Current Status and Summary attribute groups, the values are averages.
42 Chart Impacted Users Metric Source Definition Number of Errors WRT_Application_Status_H.SUM_Failed_Requests The number of recorded transactions that did not complete correctly, reported an error during the monitoring interval, or whose response time was greater than or equal to the Maximum Response Time Threshold. Failed status is indicated by a transaction Status Code with a value greater than 0. This value is added to the values of the Slow Requests and Good Requests attributes to obtain the value of the Total Requests attribute. Average Session Duration WRT_Application_Status_H.AVG _Average_Session_Duration The average duration, in seconds, of user sessions for the time period. Average Page Views WRT_Application_Status_H.AVG _SESSPAGES The average number of page views per user session. Failed Users WRT_Application_Status_H.SUM_Average_Failed_Users The total number of unique users experiencing Failed performance (a failed transaction). For example, if the user at IP address 128.1.2.3 experiences a Failed performance for a single WRT transaction, and the same user later experiences Failed performance during the same time period, that user is counted only once in both the Failed count and the All count. For the attribute groups that monitor a specific time interval, the value is the actual count for the time period. For all Current Status and Summary attribute groups, the values are averages. Slow Users WRT_Application_Status_H.SUM_Average_Slow_Users The total number of unique users experiencing slow performance for the time period. The count includes transactions that were slower than the minimum response time threshold, but not those that failed. For example, if the user at IP address 128.1.2.3 experiences a slow performance for a single WRT transaction, and the same user later experiences slow performance during the same time period, that user is counted only once in both the Slow count and the All count. For the attribute groups that monitor a specific time interval, the value is the actual count for the time period. For the Current Status and Summary attribute groups, the values are averages. Overall Errors WRT_Application_Status_H.SUM_Failed_Requests The number of recorded transactions that did not complete correctly, report43 Chart Metric Source Definition ed an error during the monitoring interval, or whose response time was greater than or equal to the Maximum Response Time Threshold. Failed status is indicated by a transaction Status Code with a value greater than 0. This value is added to the values of the Slow Requests and Good Requests attributes to obtain the value of the Total Requests attribute. Session Activity User Activity Failed Sessions WRT_Application_Status_H.SUM_Number_Failed_Sessions The number of failed user sessions. Slow Sessions WRT_Application_Status_H.SUM_Number_Slow_Sessions The number of slow user sessions. Good Sessions WRT_Application_Status_H.SUM_Number_Good_Sessions The number of good user sessions. Session Duration WRT_Application_Status_H.AVG _Average_Session_Duration The average duration, in seconds, of user sessions for the time period. Total Users WRT_Application_Status_H.SUM_Average_Users The total number of unique users for the time period. A user who experiences a Good, Failed, or Slow performance for a single Web Response Time transaction is counted once. For attribute groups that monitor a specific time interval, the value is the actual count for the time period. 
For the Current Status and Summary attribute groups, the values are averages Page Views per session WRT_Application_Status_H.SUM_SESSPAGES The average number of page views per user session. See ITCAM for Transactions Information Center for more information about list attributes: http://www01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html 2.1.3 Errors Name Errors Description This report is displayed in the third page of the Application Scorecard report. The report displays a table view with different types of errors that were encountered for the selected day and the previous day for quick comparison. The metrics are displayed in charts for trending information, and also as a number which represents summary, average, or maximum amounts. The report also includes a stacked bar chart with content errors and robotic errors over time. Purpose Gives an overview of all errors for the selected application that the user experienced. Parameters Report Day: select a day from the calendar. 44 Application name: unique application name from AMC_Application table. Source AMC_Application, WRT_Application_Status_H, RRT_Robotic_Playback_Event Usage The application owner can get an overview of all errors for the application. Chart Summary Selected Day Metric Source Definition Total 403's WRT_Application_Status_H.SUM_Number_of_403s The number of HTTP requests with a 403 status code. Total 404's WRT_Application_Status_H.SUM_Number_of_404s The number of HTTP requests with a 404 status code. Total 500's WRT_Application_Status_H.SUM_Number_of_500s The number of HTTP requests with a 500 status code. Total Content Errors WRT_Application_Status_H.SUM_NUMCCERR The number of requests with content check errors. Total Robotic Errors count(RRT_Robotic_Playback_Ev ents.Event_Type) The type of event (Timeout, Return Code, HTTP Return Code, Content Size, Content Failure, Expected Text Failure, Expected Data Failure, Expected Property Failure, Expected Image Failure, Component Failure, Authentication Failure, Page Title Failure, Custom Failure, URL Unavailable Failure, Generic Failure, or Verification Point Failure). Number of 403's WRT_Application_Status_H.SUM_Number_of_403s The number of HTTP requests with a 403 status code. Number of 404's WRT_Application_Status_H.SUM_Number_of_404s The number of HTTP requests with a 404 status code. Number of 500's WRT_Application_Status_H.SUM_Number_of_500s The number of HTTP requests with a 500 status code. 45 Chart Metric Number of Content Errors Robotic Monitored Events Source Definition WRT_Application_Status_H.SUM_NUMCCERR The number of requests with content check errors. RRT_Robotic_Playback_Events.E vent_Type The type of event (Timeout, Return Code, HTTP Return Code, Content Size, Content Failure, Expected Text Failure, Expected Data Failure, Expected Property Failure, Expected Image Failure, Component Failure, Authentication Failure, Page Title Failure, Custom Failure, URL Unavailable Failure, Generic Failure, or Verification Point Failure). See ITCAM for Transactions Information Center for more information about list attributes: http://www01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html 2.1.4 Clients Name Clients Description This report is displayed on the fourth page of the Application Scorecard report. The report displays a table view with different metrics aggregated by clients for Web Response Time, and Robotic Response Time data. 
2.1.4 Clients

Name: Clients
Description: This report is displayed on the fourth page of the Application Scorecard report. The report displays a table view with different metrics aggregated by client for Web Response Time and Robotic Response Time data. The report also includes a multi-line chart per client with average response time for trending information, and a crosstab table showing all transactions aggregated by unique client.
Purpose: Gives an overview of all clients defined in the Response Time agents.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Source: AMC_Application, WRT_Transaction_Status_H, RRT_Transaction_Status_H
Usage: The application owner can get an overview of all client activity.

Chart: Client Metrics

Metric: Client
Source: CRT_Transaction_Status_H.Client AND WRT_Transaction_Status_H.Client AND RRT_Transaction_Status_H.Client
Definition: The name of the client that initiated the request (or transaction).

Metric: Failed Requests
Source: CRT_Transaction_Status_H.SUM_Failed_Requests + WRT_Transaction_Status_H.SUM_Failed_Requests + RRT_Transaction_Status_H.SUM_Failed_Requests
Definition: The number of recorded transactions that did not complete correctly, reported an error during the monitoring interval, or whose response time was greater than or equal to the Maximum Response Time Threshold. Failed status is indicated by a transaction Status Code with a value greater than 0. This value is added to the values of the Slow Requests and Good Requests attributes to obtain the value of the Total Requests attribute.

Metric: Slow Requests
Source: CRT_Transaction_Status_H.SUM_Slow_Requests + WRT_Transaction_Status_H.SUM_Slow_Requests + RRT_Transaction_Status_H.SUM_Slow_Requests
Definition: The number of recorded transactions that are slow during the current data interval.

Metric: Average Response Time
Source: average(CRT_Transaction_Status_H.AVG_Average_Response_Time, WRT_Transaction_Status_H.AVG_Average_Response_Time, RRT_Transaction_Status_H.AVG_Average_Response_Time)
Definition: The average response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction.

Metric: Total kBytes
Source: WRT_Transaction_Status_H.SUM_Total_kBytes
Definition: The total number of bytes transferred for all requests during the time period.

Metric: Unique Users
Source: WRT_Transaction_Status_H.MAX_Average_Users
Definition: The total number of unique users for the time period. A user who experiences a Good, Failed, or Slow performance for a single Web Response Time transaction is counted once. For attribute groups that monitor a specific time interval, the value is the actual count for the time period. For the Current Status and Summary attribute groups, the values are averages.

Metric: Slow Users
Source: WRT_Transaction_Status_H.MAX_Average_Slow_Users
Definition: The total number of unique users experiencing slow performance for the time period. The count includes transactions that were slower than the minimum response time threshold, but not those that failed. For example, if the user at IP address 128.1.2.3 experiences a slow performance for a single WRT transaction, and the same user later experiences slow performance again during the same time period, that user is counted only once in both the Slow count and the All count. For the attribute groups that monitor a specific time interval, the value is the actual count for the time period. For the Current Status and Summary attribute groups, the values are averages.
Metric: Failed Users
Source: WRT_Transaction_Status_H.MAX_Average_Failed_Users
Definition: The total number of unique users experiencing Failed performance (a failed transaction). For example, if the user at IP address 128.1.2.3 experiences a Failed performance for a single WRT transaction, and the same user later experiences Failed performance during the same time period, that user is counted only once in both the Failed count and the All count. For the attribute groups that monitor a specific time interval, the value is the actual count for the time period. For all Current Status and Summary attribute groups, the values are averages.

Chart: Client Response Time

Metric: Client
Source: CRT_Transaction_Status_H.Client AND WRT_Transaction_Status_H.Client AND RRT_Transaction_Status_H.Client
Definition: The name of the client that initiated the request (or transaction).

Metric: Response Time
Source: average(CRT_Transaction_Status_H.AVG_Average_Response_Time, WRT_Transaction_Status_H.AVG_Average_Response_Time, RRT_Transaction_Status_H.AVG_Average_Response_Time)
Definition: The average response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction.

Chart: Client Availability

Metric: Clients
Source: CRT_Transaction_Status_H.Client AND WRT_Transaction_Status_H.Client AND RRT_Transaction_Status_H.Client
Definition: The name of the client that initiated the request (or transaction).

Metric: Transactions
Source: CRT_Transaction_Status_H.Transaction AND WRT_Transaction_Status_H.Transaction AND RRT_Transaction_Status_H.Transaction
Definition: A user-defined name of the monitored transaction. When defining a transaction pattern, if you select to aggregate by pattern, the transaction that matches the defined pattern is replaced by the Transaction Name and aggregated together with all other unique transactions that also match the defined pattern.

Metric: Percent Available
Source: average(CRT_Transaction_Status_H.AVG_Percent_Available, WRT_Transaction_Status_H.AVG_Percent_Available, RRT_Transaction_Status_H.AVG_Percent_Available)
Definition: The percentage of transactions with a transaction status of Good or Slow, but not Failed. The sum of this attribute value and Percent Failed should total 100 percent. Any failure is considered important, so the table cell for this attribute is displayed in the TEP with a green background only when the value for Percent Available is 100 percent. Any value less than 100 percent is displayed in the TEP with a red background.

See the ITCAM for Transactions Information Center for more information about the listed attributes: http://www-01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html
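The average() expressions above combine three per-agent summary tables. One straightforward way to reproduce that combination in SQL is to stack the tables with UNION ALL and aggregate the result, as in this hedged sketch. Model-label names are assumed again, and note that averaging pre-averaged rows is an unweighted mean, which mirrors the report expression rather than a true per-request average.

    -- Sketch: per-client response time across the three agent tables.
    SELECT Client,
           AVG(AVG_Average_Response_Time) AS avg_response_time_sec
      FROM (SELECT Client, AVG_Average_Response_Time
              FROM CRT_Transaction_Status_H
            UNION ALL
            SELECT Client, AVG_Average_Response_Time
              FROM WRT_Transaction_Status_H
            UNION ALL
            SELECT Client, AVG_Average_Response_Time
              FROM RRT_Transaction_Status_H) AS combined
     GROUP BY Client
     ORDER BY avg_response_time_sec DESC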
Failed Requests WRT_User_Sessions_D.SUM_Fail ed_Requests The number of recorded transactions that either did not complete correctly, or reported an error during the monitoring interval, or whose response time was greater than or equal to the Maximum Response Time Threshold. Failed status is indicated by a transaction Status Code with a value greater than 0. This value is added to the values of the Slow Requests and Good Requests attributes to obtain the value of the Total Requests attribute. 50 Chart Metric Source Definition Slow Requests WRT_User_Sessions_D.SUM_Slo w_Requests The number of recorded transactions that completed successfully, but whose response time was greater than or equal to the Minimum Response Time Threshold. This value is added to the values of the Good Requests and Failed Requests attributes to obtain the value of the Total Requests attribute. Total Requests WRT_User_Sessions_D.SUM_Total_Requests The total number of recorded transactions observed during the monitoring interval. The value for this attribute is the sum of the Good Requests, Slow Requests, and Failed Requests attributes. Minimum Response Time WRT_User_Sessions_D.MIN_Min imum_Response_Time The minimum response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction. Average Response Time WRT_User_Sessions_D.AVG_Average_Response_Time The average response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction. Maximum Response Time WRT_User_Sessions_D.MAX_Ma ximum_Response_Time The maximum response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction. 51 Chart Metric Source Definition Total kBytes WRT_User_Sessions_D.SUM_Total_kBytes The total number of bytes transferred for all request during the time period. kBytes Retransmitted WRT_User_Sessions_D.SUM_Kil oBytes_Retransmitted The number of kilobytes that were retransmitted. # 403s WRT_User_Sessions_D.SUM_Nu mber_of_403s The number of HTTP requests with the status code 403. # 404s WRT_User_Sessions_D.SUM_Nu mber_of_404s The number of HTTP requests with the status code 404. # 500s WRT_User_Sessions_D.SUM_Nu mber_of_500s The number of HTTP requests with the status code 500. # Content Errors WRT_User_Sessions_D.SUM_NU MCCERR The number of requests with content check errors. See ITCAM for Transactions Information Center for more information about list attributes: http://www01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html 2.1.6 Transactions Name Transactions Description This report shows all transactions and associated metrics for Web Response Time and Robotic Response Time agents transaction data combined. A line graph shows performance over time for each transaction. A table shows availability over time. 
2.1.6 Transactions

Name: Transactions
Description: This report shows all transactions and their associated metrics, combining transaction data from the Web Response Time and Robotic Response Time agents. A line graph shows performance over time for each transaction. A table shows availability over time.
Purpose: Gives an overview of all transactions, displaying their performance and availability.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Source: AMC_Application, WRT_Transaction_Status_H, RRT_Transaction_Status_H
Usage: The application owner can get an overview of the status of all transactions.

Chart: Transaction Metrics

Metric: Transaction
Source: CRT_Transaction_Status_H.Transaction AND WRT_Transaction_Status_H.Transaction AND RRT_Transaction_Status_H.Transaction
Definition: A user-defined name for the monitored transaction. When defining a transaction pattern, if you aggregate by pattern, the transaction that matches the defined pattern is replaced by the Transaction Name and aggregated together with all other unique transactions that also match the defined pattern.

Metric: Average Response Time
Source: average(CRT_Transaction_Status_H.AVG_Average_Response_Time, WRT_Transaction_Status_H.AVG_Average_Response_Time, RRT_Transaction_Status_H.AVG_Average_Response_Time)
Definition: The average response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction.

Metric: Maximum Average Response Time
Source: CRT_Transaction_Status_H.MAX_Average_Response_Time + WRT_Transaction_Status_H.MAX_Average_Response_Time + RRT_Transaction_Status_H.MAX_Average_Response_Time
Definition: The maximum average response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction.

Metric: Failed Requests
Source: CRT_Transaction_Status_H.SUM_Failed_Requests + WRT_Transaction_Status_H.SUM_Failed_Requests + RRT_Transaction_Status_H.SUM_Failed_Requests
Definition: The number of recorded transactions that did not complete correctly, reported an error during the monitoring interval, or whose response time was greater than or equal to the Maximum Response Time Threshold. Failed status is indicated by a transaction Status Code with a value greater than 0. This value is added to the values of the Slow Requests and Good Requests attributes to obtain the value of the Total Requests attribute.

Metric: Slow Requests
Source: CRT_Transaction_Status_H.SUM_Slow_Requests + WRT_Transaction_Status_H.SUM_Slow_Requests + RRT_Transaction_Status_H.SUM_Slow_Requests
Definition: The number of recorded transactions that completed successfully, but whose response time was greater than or equal to the Minimum Response Time Threshold. This value is added to the values of the Good Requests and Failed Requests attributes to obtain the value of the Total Requests attribute.

Metric: Total Requests
Source: CRT_Transaction_Status_H.SUM_Total_Requests + WRT_Transaction_Status_H.SUM_Total_Requests + RRT_Transaction_Status_H.SUM_Total_Requests
Definition: The total number of recorded transactions observed during the monitoring interval. The value for this attribute is the sum of the Good Requests, Slow Requests, and Failed Requests attributes.
Chart: Transaction Response Time

Metric: Transaction
Source: CRT_Transaction_Status_H.Transaction AND WRT_Transaction_Status_H.Transaction AND RRT_Transaction_Status_H.Transaction
Definition: A user-defined name of the monitored transaction. When defining a transaction pattern, if you select to aggregate by pattern, the transaction that matches the defined pattern is replaced by the Transaction Name and aggregated together with all other unique transactions that also match the defined pattern.

Metric: Average Response Time
Source: average(CRT_Transaction_Status_H.AVG_Average_Response_Time, WRT_Transaction_Status_H.AVG_Average_Response_Time, RRT_Transaction_Status_H.AVG_Average_Response_Time)
Definition: The average response time, in seconds, for a single transaction instance that was observed during the monitoring interval. During each monitoring interval, minimum, maximum, and average response times for the aggregate records are recorded. Use these attributes to analyze the range of response times for the transaction.

Chart: Transaction Availability

Metric: Transactions
Source: CRT_Transaction_Status_H.Transaction AND WRT_Transaction_Status_H.Transaction AND RRT_Transaction_Status_H.Transaction
Definition: A user-defined name for the monitored transaction. When defining a transaction pattern, if you select to aggregate by pattern, the transaction that matches the defined pattern is replaced by the Transaction Name and aggregated together with all other unique transactions that also match the defined pattern.

Metric: Availability
Source: average(CRT_Transaction_Status_H.AVG_Percent_Available, WRT_Transaction_Status_H.AVG_Percent_Available, RRT_Transaction_Status_H.AVG_Percent_Available)
Definition: The percentage of transactions with a transaction status of Good or Slow, but not Failed. The sum of this attribute value and Percent Failed should total 100 percent. Any failure is considered important, so the table cell for this attribute is displayed in the TEP with a green background only when the value for Percent Available is 100 percent. Any value less than 100 percent is displayed in the TEP with a red background.

See the ITCAM for Transactions Information Center for more information about the listed attributes: http://www-01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html
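The availability definition above implies a simple sanity check: Percent Available and Percent Failed should sum to 100 for every row, and anything under 100 is flagged red in the TEP. The following hedged sketch applies that coloring rule to the Web Response Time summary table, using the model labels and an assumed timestamp column.

    -- Sketch: hourly availability per transaction, with the TEP coloring rule.
    SELECT Transaction,
           Standard_Timestamp,
           AVG_Percent_Available,
           CASE WHEN AVG_Percent_Available < 100 THEN 'red'
                ELSE 'green'
           END AS tep_background
      FROM WRT_Transaction_Status_H
     WHERE Standard_Timestamp BETWEEN ? AND ?
     ORDER BY Transaction, Standard_Timestamp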
2.1.7 Backend Servers

Name: Backend Servers
Description: This report focuses on Transaction Tracking agent data, showing all servers with their associated components. Metrics are displayed for each component in a table view.
Purpose: Gives an overview of all servers and components.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Tables Used: AMC_Application, Aggregates_H
Usage: The application owner can get an overview of all Transaction Tracking servers and components for the selected application.

Chart: Backend Servers

Metric: Server/Component
Source: Aggregates_H.Aggregate AND Aggregates_H.Aggregate_ID AND Aggregates_H.Enclosing_ID
Definition:
Aggregate can refer to the following circumstances:
- The name of the aggregate. The label 'Aggregate' corresponds to the label 'Name' shown in the workspace.
- If the aggregate has a source aggregate, the name of the destination aggregate in the interaction.
Aggregate ID can refer to the following circumstances:
- In a simple aggregate, the identifier of the aggregate.
- If there is a source aggregate, the identifier of the destination aggregate in the interaction.
- If it is part of an instance, the identifier of the aggregate transaction.
- If there is a source transaction instance, the identifier of the destination aggregate transaction.
- For a situation, the identifier of the source aggregate.
Enclosing ID: the identifier of the enclosing aggregate.

Metric: Transaction Count
Source: Aggregates_H.SUM_Transaction_Count
Definition: The number of transaction instances, not including failed transactions.

Metric: Failed Requests
Source: Aggregates_H.SUM_Failed
Definition: Failed can refer to one of the following circumstances:
- In a simple aggregate, the number of transaction instances that failed.
- The number of failed transactions in the destination aggregate when initiated by a transaction in the source aggregate.

Metric: Slow Requests
Source: Aggregates_H.SUM_Slow
Definition: Slow can refer to the following circumstances:
- In a simple aggregate, the number of transaction instances that were slow.
- The number of slow transactions in the destination aggregate when initiated by a transaction in the source aggregate.

Metric: Availability
Source: Aggregates_H.AVG_Percent_Slow + Aggregates_H.AVG_Percent_Good
Definition:
Slow can refer to the following circumstances:
- In a simple aggregate, the number of transaction instances that were slow.
- The number of slow transactions in the destination aggregate when initiated by a transaction in the source aggregate.
Good can refer to the following circumstances:
- In a simple aggregate, the number of transaction instances that were good.
- The number of good transactions in the destination aggregate when initiated by a transaction in the source aggregate.

Metric: Average Response Time
Source: Aggregates_H.AVG_Total_Time
Definition: Total Time can refer to the following circumstances:
- The average total transaction time of the transactions that make up the aggregate.
- The total response time for this transaction instance.

Metric: Average Response Time Deviation
Source: Aggregates_H.AVG_Total_Time_Deviation
Definition: The deviation of the total response time from the determined baseline, measured as a percentage from the baseline.

Metric: Max Response Time Deviation
Source: Aggregates_H.MAX_Total_Time_Deviation
Definition: The deviation of the maximum response time from the determined baseline, measured as a percentage from the baseline.

Metric: Average Percent Deviation
Source: percent(Aggregates_H.AVG_Total_Time_Deviation, Aggregates_H.AVG_Total_Time)
Definition: The deviation of the average response time from the determined baseline, measured as a percentage from the baseline.

See the ITCAM for Transactions Information Center for more information about the listed attributes: http://www-01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html
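Note that the Availability column is defined arithmetically as the sum of the slow and good percentages, that is, everything that is not failed. The following hedged sketch makes that formula explicit against Aggregates_H, again using the model labels above and an assumed timestamp filter.

    -- Sketch: per-component availability from the Transaction Tracking table.
    -- Availability = percent slow + percent good, i.e. 100 - percent failed.
    SELECT Aggregate,
           SUM(SUM_Transaction_Count)               AS transactions,
           AVG(AVG_Percent_Slow + AVG_Percent_Good) AS percent_available,
           AVG(AVG_Total_Time)                      AS avg_response_time_sec
      FROM Aggregates_H
     WHERE Standard_Timestamp BETWEEN ? AND ?
     GROUP BY Aggregate
     ORDER BY percent_available ASC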
2.1.8 Web Servers

Name: Web Servers
Description: This report focuses on Web servers to help spot trends. The Server Availability heat map highlights problems, and the line graphs provide volume and response time information.
Purpose: Gives an overview of all servers with performance and availability information.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Source: AMC_Application, AMC_Server_H
Usage: The application owner can get an overview of all server status.

Chart: Server Availability

Metric: Server
Source: AMC_Server_H.Server
Definition: The name of the server that processed the transaction. This could be the host name of the physical machine, the IP address, or the Sysplex.

Metric: Availability
Source: AMC_Server_H.AVG_Percent_Available
Definition: The percentage of transactions with a transaction status of Good or Slow, but not Failed. The sum of this attribute value and Percent Failed should total 100 percent. Any failure is considered important, so the table cell for this attribute is displayed in the TEP with a green background only when the value for Percent Available is 100 percent. Any value less than 100 percent is displayed in the TEP with a red background.

Chart: Server Total Requests

Metric: Server
Source: AMC_Server_H.Server
Definition: The name of the server that processed the transaction. This could be the host name of the physical machine, the IP address, or the Sysplex.

Metric: Requests
Source: AMC_Server_H.AVG_Average_Requests
Definition: The average number of requests for a data interval during the time span for which data is displayed.

Chart: Server Response Time

Metric: Server
Source: AMC_Server_H.Server
Definition: The name of the server that processed the transaction. This could be the host name of the physical machine, the IP address, or the Sysplex.

Metric: Response Time
Source: AMC_Server_H.AVG_Response_Time
Definition: The elapsed time, in seconds, required for the transaction to complete.

See the ITCAM for Transactions Information Center for more information about the listed attributes: http://www-01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html

2.1.9 Network

Name: Network
Description: This report focuses on Web Response Time agent data, and displays network time and bandwidth used, including retransmission bandwidth.
Purpose: Gives an overview of network performance.
Parameters: Report Day: select a day from the calendar. Application name: unique application name from the AMC_Application table.
Source: AMC_Application, WRT_Application_Status_H
Usage: The application owner can get an overview of the time and bandwidth used on the network for the selected application.

Chart: Network Time

Metric: Minimum network time
Source: WRT_Application_Status_H.MIN_Average_Network_Time
Definition: The minimum elapsed time, in seconds, spent transmitting all required data through the network. This is a calculated time. For instance data, this field is an absolute value, not an average.

Metric: Average network time
Source: WRT_Application_Status_H.AVG_Average_Network_Time
Definition: The average elapsed time, in seconds, spent transmitting all required data through the network. This is a calculated time. For instance data, this field is an absolute value, not an average.

Metric: Maximum network time
Source: WRT_Application_Status_H.MAX_Average_Network_Time
Definition: The maximum elapsed time, in seconds, spent transmitting all required data through the network. This is a calculated time. For instance data, this field is an absolute value, not an average.

Chart: Network Bandwidth

Metric: Sent kBytes
Source: WRT_Application_Status_H.SUM_Request_kBytes
Definition: The total number of bytes in the request during the data interval.

Metric: Received kBytes
Source: WRT_Application_Status_H.SUM_Reply_kBytes
Definition: The total number of bytes in each reply of the request during the data interval.

Metric: Retransmitted kBytes
Source: WRT_Application_Status_H.SUM_KiloBytes_Retransmitted
Definition: The number of kilobytes that were retransmitted.

Metric: Total packet count
Source: WRT_Application_Status_H.SUM_Request_Ack_Packet_Count + WRT_Application_Status_H.SUM_Request_Packet_Count + WRT_Application_Status_H.SUM_Reply_Packet_Count + WRT_Application_Status_H.SUM_Reply_Ack_Packet_Count
Definition:
Request_Ack_Packet_Count: the average number of acknowledgement packets in a request during the data interval. For instance data, this field is an absolute value, not an average.
Request_Packet_Count: the average number of packets in the request during the data interval. For instance data, this field is an absolute value, not an average.
Reply_Packet_Count: the average number of reply packets returned from the server for requests made during the data interval.
Reply_Ack_Packet_Count: the average number of acknowledgement packets from the server for requests made during the data interval. For instance data, this field is an absolute value, not an average.

See the ITCAM for Transactions Information Center for more information about the listed attributes: http://www-01.ibm.com/support/knowledgecenter/SS5MD2_7.4.0/com.ibm.itcamt.doc_7.4.0.0/itcamtrans_welcome.html
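Because the total packet count is defined as the sum of the four packet-count attributes, the whole bandwidth picture can be assembled in a single query. As before, this is a hedged sketch that uses the model labels and an assumed application and timestamp filter, not the report's generated SQL.

    -- Sketch: network bandwidth rollup for one application and day.
    SELECT SUM(SUM_Request_kBytes)          AS sent_kbytes,
           SUM(SUM_Reply_kBytes)            AS received_kbytes,
           SUM(SUM_KiloBytes_Retransmitted) AS retransmitted_kbytes,
           SUM(SUM_Request_Packet_Count
             + SUM_Request_Ack_Packet_Count
             + SUM_Reply_Packet_Count
             + SUM_Reply_Ack_Packet_Count)  AS total_packets
      FROM WRT_Application_Status_H
     WHERE Application = ?                    -- hypothetical filter column
       AND Standard_Timestamp BETWEEN ? AND ?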
2.2 Robotic Steps and Performance

This report focuses on Robotic scripts and their steps, or first-level sub-transactions. Required inputs are Report Day, Application name, and Script name. The Robotic Steps and Performance report consists of several sub-reports:
- Robotic Steps Summary Report
- Robotic Steps Client Breakdown Report
- Robotic Steps Detailed Breakdown Report
- Robotic Steps Script Summary Report

2.2.1 Robotic Steps Summary

Name: Robotic Steps Summary
Description: This report is displayed on the first page of the Robotic Steps and Performance report. The report displays a table view with the average response time and volume for each unique step. The report also shows verification point failure types (the event type as defined in the Robotic Playback Events table) in a stacked bar chart over time, and a line chart showing the response time for each step.
Purpose: Gives an overview of the steps for the selected script.
Parameters: Report Day: select a day from the calendar. Application name: unique application name. Script: transaction name.
Source: RRT_Transaction_Status_H, RRT_SubTransaction_Status_H
Usage: The application owner can get an overview of all the steps for the selected application and script, and any errors and error types.

2.2.2 Robotic Steps Client Breakdown

Name: Robotic Steps Client Breakdown
Description: This report is displayed on the second page of the Robotic Steps and Performance report. The report shows availability data aggregated by the first two step levels and by client groups. The last two columns show totals for the transaction and client groups.
Purpose: Gives an overview of the availability of each script step by client.
Parameters: Report Day: select a day from the calendar. Application name: unique application name. Script: transaction name.
Source: RRT_Transaction_Status_H, RRT_SubTransaction_Status_H
Usage: The application owner can get an overview of availability by client.

2.2.3 Robotic Steps Detailed Breakdown

Name: Robotic Steps Detailed Breakdown
Description: This report is displayed on the third page of the Robotic Steps and Performance report. The report displays a table view with different performance metrics for each step, aggregated by timestamp.
Purpose: Gives an overview of the steps for the selected script over time.
Parameters: Report Day: select a day from the calendar. Application name: unique application name. Script: transaction name.
Source: RRT_Transaction_Status_H, RRT_SubTransaction_Status_H, RRT_Transaction_Status
Usage: The application owner can get an overview of all the steps with a failed or warning status by timestamp. The application owner can observe a failed step and the time when error or warning conditions occurred.

2.2.4 Robotic Steps Script Summary

Name: Robotic Steps Script Summary
Description: This report displays a summary of the average response time, good and total requests, and percent good for the whole script.
Purpose: Gives an overview of the steps for the selected script.
Parameters: Report Day: select a day from the calendar. Application name: unique application name. Script: transaction name.
Source: RRT_SubTransaction_Status, RRT_Transaction_Status, RRT_Transaction_Status_H
Usage: The application owner can view a summary of the selected script.
2.3 User Analysis

2.3.1 User Analysis

Name: User Analysis
Description: This report shows the response time and bandwidth consumed by the selected user, along with other metrics.
Purpose: Gives an overview of user activity and experience.
Parameters: Report Time: select a time range from the calendar. User: unique user name. Application: application name (optional). Client: client (optional).
Source: WRT_User_Sessions_H
Usage: The user can see an overview of all applications and transactions that a selected user accessed, with response time and bandwidth consumed, in addition to other information.

2.4 Internet Service Availability

2.4.1 Internet Service Availability

Name: Internet Service Availability
Description: This report shows availability and performance for the services and hosts monitored by the selected Internet Service Monitoring profile.
Purpose: Gives an overview of the services monitored by the selected profile.
Parameters: Report Time: select a time range from the calendar. Profile: unique profile name. Host: host name (optional). Service: service name (optional). Element: element (optional).
Source: AMC_Internet_Service_Agent, AMC_Internet_Service_Element
Usage: The user can observe status trends on the services monitored by ISM profiles.

Section 3 Debugging and Known Problems

3.1 Errors with the file system

This section describes several errors that might occur because of problems with the data in the file system.

3.1.1 Too Much Data

This error might occur after you drag a dimension and a metric, and usually indicates that there is too much data to process for the query. Set a filter on your dimension to limit the amount of data processed. Your database system might also need to be tuned to process larger amounts of data.

3.1.2 No Relationship Defined

This error might occur when you associate a metric and a dimension that have no relationship. For example, you might choose a metric from the Robotic Scripts table and attempt to associate it with an Application from the Application Status table. Because there is no relationship between the two tables, an error similar to the example shown below occurs. To resolve this problem, refer to section 1.1.7 for the list of relationships between Dimensions and Metrics.

3.2 Arithmetic Overflow Errors in Ad Hoc Querying

If you drag certain columns during an ad hoc query and an arithmetic overflow error is returned, switch to Limited Data or No Data preview and add Standard Timestamp to the query. Certain columns might average or add up to a total that is greater than the maximum size supported by the database, in which case an SQL arithmetic overflow error is returned. If you view the data by hourly or daily timestamp, or if you set a query to limit the data, the aggregated value is forced to be within the supported size.

3.3 'No data available' in Ad Hoc Querying when querying two tables, but data shows up if the two tables are queried individually

This happens if there is no relationship defined between the two tables. Ensure that all of your ad hoc queries have at least one dimension.

3.4 Error for missing table or attribute

Ensure that all of the prerequisites are met and that the warehouse is collecting historical data.

3.5 Failure to create tables in the warehouse

If you receive errors because of table space limitations when you run the Tivoli Data Warehouse schema scripts to load the tables into the warehouse database, see the ITCAM for Transactions Troubleshooting Guide for information about resolving this issue.

3.6 Performance Considerations

The Tivoli Data Warehouse can contain large amounts of data that the reports must process. Consult your database administrator about keeping the database tuned. For DB2, this involves running REORGCHK frequently to keep the statistics updated on the key tables, as in the sketch below.
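A minimal sketch of that DB2 housekeeping, written as a command-line processor (CLP) script that could be run with db2 -tvf. The database name WAREHOUS (a common Tivoli Data Warehouse default) and the schema-qualified table name are assumptions; substitute your own warehouse database and the summary tables your reports query most heavily.

    -- Sketch: refresh statistics on the warehouse tables used by the reports.
    CONNECT TO WAREHOUS;
    -- Check reorganization need and refresh statistics for all tables:
    REORGCHK UPDATE STATISTICS ON TABLE ALL;
    -- Or target individual report tables (schema name is illustrative):
    REORGCHK UPDATE STATISTICS ON TABLE ITMUSER.WRT_Transaction_Status_H;
    TERMINATE;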
Trademarks

IBM, the IBM logo, Tivoli, and DB2 are trademarks of International Business Machines Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Other product and service names might be trademarks of IBM or other companies.

© Copyright IBM Corporation 2004, 2014. All rights reserved.

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PAPER “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

Information in this paper as to the availability of products (including portlets) was believed accurate as of the time of publication. IBM cannot guarantee that identified products (including portlets) will continue to be made available by their suppliers. This information could include technical inaccuracies or typographical errors. Changes may be made periodically to the information herein; these changes may be incorporated in subsequent versions of the paper. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this paper at any time without notice.

Any references in this document to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
4205 South Miami Boulevard
Research Triangle Park, NC 27709
U.S.A.