
Teradata Viewpoint 14.10 Release

This article is the official release announcement of Teradata Viewpoint 14.10, with an effective release date of May 6th, 2013. With new enhancements in the Alerting, Workload Management, and Monitoring areas, Viewpoint 14.10 continues to expand its scope, providing the ability to monitor Hadoop systems alongside Aster and Teradata systems.

Summary

The primary themes of the Viewpoint 14.10 release are providing the front end and visualization for new Teradata Database 14.10 features and adding Hadoop system monitoring. There are enhancements in the Alerting, Monitoring, and Management areas. The highlights of Viewpoint 14.10 are:

  1. Stats Manager
  2. Hadoop System Monitoring
  3. Workload Management enhancements (group throttles, new classifications, the ability to unlock rulesets, etc.)
  4. Reports in the Query Monitor portlet
  5. Alerting Enhancements

Browser support has also been updated to reflect support for Firefox 18, Chrome 24, Safari 5.1, and IE 8.x and 9.x.

Stats Manager

The Stats Manager portlet complements the Auto Stats feature of Teradata Database 14.10 and works with release 14.10 and later. Stats Manager allows DBAs and users to efficiently manage their statistics collection process. It appears as a new option in the Add Content | Tools menu.

Before we go into the details of this new feature, let’s discuss why it is needed. Accurate cardinality and cost estimates help the Teradata Optimizer choose an optimal plan, and statistics provide that cardinality information. Cardinality can change significantly with bulk load jobs, making stats stale and inaccurate. Sometimes it is challenging even for an experienced DBA to know which object stats would be beneficial, which can result in collecting extra stats or missing critical ones. Collect stats jobs are usually resource intensive because they contain many collect stats statements, so it is always good to know what is needed and what is not, and save some CPU cycles. Due to scheduling constraints, the user may not have enough time to complete the collect stats job, so there is a need to prioritize and run collect stats for important or stale stats first. The Stats Manager tool simplifies these tasks and helps users automate the stats collection process. The Stats Manager portlet can be used to:

  • View statistics on a system
  • Schedule statistic collection jobs
  • Identify missing stats
  • Detect and refresh stale statistics
  • Identify and discontinue collecting unused statistics
  • View when statistics were last collected and are scheduled for collection again
  • Set the priority of a collect stats statement with regard to other collect stats statements
  • View the CPU utilization of collect stats jobs, allowing the user to analyze whether a particular job consumes more CPU than anticipated

There are two main tabs in Stats Manager: Statistics and Jobs.

Statistics Tab

The Statistics tab shows all objects (e.g. databases and tables) on the system that have at least one statistic or at least one outstanding recommendation. The user can drill down in the data grid to navigate between databases, tables, and columns. Figure 1 is an example of the Statistics by Database view.

Figure 1

The Actions menu has three options: Automate enables statistics to be collected by collect jobs, Deautomate stops statistics from being collected by collect jobs, and Edit Collect Settings allows the user to edit thresholds, sampling, and histogram settings. The information bar displays the percentage of statistics that are approved for automation, allowing the user to determine if more statistics need to be approved, and the percentage of automated stats that have collect jobs, allowing the user to determine if additional collect jobs are needed. Recommendations displays a list of the recommendations made by an analyze job; by clicking the link, the user can approve or reject them. The Statistics table displays all objects with at least one statistic, or with at least one recommendation that has not been approved or rejected, and is configured using Configure Columns from the Table Actions menu. The user can automate any object for the stats collection process in this tab, which approves its statistics for collection by collect jobs. The user can also view statistics detail reports by drilling down to a stats object, see Figure 2.

Figure 2

Jobs Tab

The Jobs tab displays the list of user-defined collect and analyze job definitions. From this view, the user can create collect stats and analyze jobs, manage existing jobs, and review job reports. Figure 3 represents the top Jobs tab layout. The Actions menu has three options: New Collect Job lets the user define a job to collect statistics, New Analyze Job lets the user define a job to evaluate statistic use and make recommendations, and View History lists the run status and reports for collect and analyze jobs over time.

Figure 3

The Job Definitions table displays summary information about jobs and allows drill down to show the details. The Job Schedule displays a nine-day view of jobs that are running, scheduled to run, or have already run. Mouse over a date to show the list of jobs for that day.

A collect job generates and submits COLLECT STATISTICS statements to the Teradata Database for objects that were approved for automation in the Statistics tab. The user can assign a priority to individual COLLECT STATISTICS statements (see Figure 4).

Figure 4
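To make the mechanics concrete, below is a minimal sketch (not Viewpoint's actual implementation) of submitting one such generated COLLECT STATISTICS statement over JDBC. The host, credentials, database, and column are illustrative assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CollectStatsSketch
{
    public static void main(String[] args) throws Exception
    {
        // Load the Teradata JDBC driver (not needed with JDBC 4.0 auto-loading).
        Class.forName("com.teradata.jdbc.TeraDriver");

        // Illustrative connection details only; replace with your own system and credentials.
        String url = "jdbc:teradata://tdsystem.example.com/DATABASE=Sales";

        try (Connection conn = DriverManager.getConnection(url, "dbadmin", "secret");
             Statement stmt = conn.createStatement())
        {
            // One of the statements a collect job could generate for an automated statistic.
            stmt.execute("COLLECT STATISTICS COLUMN (Order_Date) ON Sales.Orders");
        }
    }
}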

The user can schedule a job to run for a limited time and then have a new schedule resume the job at a different time of the day (see Figure 5).

Figure 5

An analyze job allows the user to evaluate statistics status and get statistic-related recommendations. Analyzing objects enables the user to determine where additional statistics might be useful and to identify existing statistics that are used frequently or are stale. Once the recommendations are generated, the user can review them and automate the objects for the stats collection process in the Statistics tab. See Figure 6 for the various functions that an analyze job can perform.

Figure 6

The Viewpoint Log Table Clean Up feature can be used to clean up job results stored in the TDStats database on the Teradata Database.

Hadoop System Monitoring

Teradata Viewpoint 14.10 supports Hadoop system monitoring for the Hortonworks-provided Hadoop solution packaged as part of the Aster Big Analytics Appliance 3. A new Hadoop Services portlet allows users to monitor the status of the various services running on Hadoop systems. Using the expandable service view for MapReduce, HDFS, and HBase, users can view key metric details for the selected services (see Figure 7).

Figure 7

The Aster Node Monitor portlet has been renamed Node Monitor, as it now monitors both Aster and Hadoop systems. Using the Node Monitor portlet for Hadoop systems, users can view node-level metrics, the available Hadoop services, and the status of services for each node on the system. Users can also view hardware statistics such as CPU usage, memory usage, and network activity. Navigating through the Hadoop system topology, users can also view detailed service component and JVM metrics for the HDFS and MapReduce services (see Figure 8).

Figure 8

Like Aster system monitoring, Hadoop system monitoring has been integrated with existing portlets. The usability and the look and feel of the portlets are maintained, but the underlying data and metrics correspond to the monitored Hadoop system. Below are the existing portlets that were modified to support Hadoop system monitoring:

  • Alert Viewer – View all the alerts logged for Hadoop systems.
  • Capacity Heatmap – Displays trends for key metric usage related to the system, HDFS, and MapReduce.
  • Metrics Analysis – Displays and compares trends for key metric usage related to the system, HDFS, and MapReduce in a graphical format across different Hadoop systems.
  • Metrics Graph – Displays trends for key metric usage related to the system, HDFS, and MapReduce in a graphical format.
  • Space Usage – Monitors space usage on a node, such as total space, current space, percent in use, and available space.
  • Admin – Provides the ability to add Hadoop systems and define alerts for Hadoop systems.
  • System Health – Hadoop systems can be identified by an "H" in the system's icon, and drilling down shows all the key metrics related to the Hadoop system. See Figure 9.

Figure 9

Reports in Query Monitor

In Viewpoint 14.10 we added three new reports to Query Monitor.

  1. Multi-Session report: A new option in Query Monitor, By Utility | By Job, was added to display all the utility jobs that are running, with drill-down capabilities to the individual sessions logged on by a particular utility job and the ability to drill down further to see session details. (See Figure 10)
  2. Hot AMP report: A new option, By Vproc | By Skewed AMP, displays the AMPs with the most skewed sessions that exceeded the CPU skew threshold set in the PREFERENCES view. (See Figure 10)
  3. By PE report: A new option, By Vproc | By PE, displays the total number of sessions logged on to each PE and the CPU value for the PE. (See Figure 10)

Figure 10

Teradata Workload Management enhancements

Teradata Viewpoint 14.10 introduces group throttles, where a user can define a throttle on a group of workloads. We also added new classifications by UDF, UDM, memory usage, and collect stats. These features depend on Teradata 14.10. In Teradata Viewpoint 14.10, users can now unlock any ruleset if they have the appropriate permissions. Users can also model a system ruleset, which is useful for comparing the workload management features of different platforms (Appliance vs. EDW) or of different versions of Teradata.

Alerting Enhancements

Various new alert options and alert types were added in this release of Viewpoint.

  • An option to send an alert on Teradata Database restart was added.
  • An include or exclude users option was added to the Session alert type. If a user wants to define a session alert for a small set of users, they no longer need to add every other user to the exclude list; the include users option can be used instead. It also supports the splat (*) wildcard. (See Figure 11)

Figure 11

  • Users can now send an alert for long-running sessions using the newly added Active time alert option in the Session alert type.
  • A Spool space (MB) alert option was added to the Session alert type to send an alert if a session uses more than the anticipated amount of spool space.
  • A Delta I/O (logical I/Os) alert option was added to send an alert when a session consumes excessive logical I/O during the last collection interval.
  • In the Database Space alert type, users can now specify thresholds for Current Spool Space (%) and Peak Spool Space (%) to send an alert when either exceeds its threshold. Splat wildcard support was added to the Database Space include/exclude user list (a sketch of how such wildcard matching might work follows this list).
  • A new alert type, Table space, was added late in the Viewpoint 14.01 release, with a new alert option on the DBC.TransientJournal table and the ability to specify current perm and skew thresholds.
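As a rough illustration of how splat-wildcard matching against an include/exclude user list could work (this is not Viewpoint's internal code), a pattern such as ETL* can be translated into a regular expression and applied to the include list first, then the exclude list:

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class UserFilterSketch
{
    // Converts a splat pattern such as "ETL*" into a case-insensitive regular expression.
    private static Pattern toPattern(String splat)
    {
        String regex = "\\Q" + splat.replace("*", "\\E.*\\Q") + "\\E";
        return Pattern.compile(regex, Pattern.CASE_INSENSITIVE);
    }

    private static boolean matchesAny(String user, List<String> patterns)
    {
        for (String pattern : patterns)
        {
            if (toPattern(pattern).matcher(user).matches())
            {
                return true;
            }
        }
        return false;
    }

    // A session qualifies for the alert if its user matches the include list and not the exclude list.
    public static boolean qualifies(String user, List<String> includes, List<String> excludes)
    {
        boolean included = includes.isEmpty() || matchesAny(user, includes);
        return included && !matchesAny(user, excludes);
    }

    public static void main(String[] args)
    {
        List<String> includes = Arrays.asList("ETL*");
        List<String> excludes = Arrays.asList("ETL_TEST*");

        System.out.println(qualifies("ETL_LOAD1", includes, excludes)); // true
        System.out.println(qualifies("ETL_TEST9", includes, excludes)); // false
    }
}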

Lock Logger

In Viewpoint 14.10 we modified the Lock Logger architecture for Teradata Database 14.10 and follow-on releases. When Viewpoint 14.10 is used with Teradata Database 14.10, the Lock Info collector uses the data written to the DBQL lock log table to capture lock information; therefore, DBQL query logging must be enabled with the "WITH LOCK" option.

Finally, please refer to the associated Viewpoint Configuration Guide for details of the upgrade process and to the User Guide for details of the new features.

This is another voluminous release, with features across a number of strategic areas. We hope you take advantage of the new additions and improvements in Teradata Viewpoint 14.10. As always, we look forward to your thoughts and comments.


Viewpoint Integration with Apache Ambari for Hadoop Monitoring


Teradata’s Unified Data Architecture is a powerful combination of Teradata, Aster, and Hadoop in a single platform.  Viewpoint has always provided monitoring and management of Teradata systems and launched support for monitoring of Aster in Viewpoint 14.01.  In order to complete Viewpoint’s monitoring of the different systems in Teradata’s Unified Data Architecture, Viewpoint 14.10 includes support for monitoring of Hadoop running in this architecture.

The biggest technical challenge Viewpoint faced when monitoring a Hadoop system was how to reliably and easily collect the necessary data from Hadoop.  The different components of Hadoop expose their data in a variety of different ways, including using Ganglia, Nagios, JMX, and some really ugly web interfaces.  There are two primary issues with using these existing technologies for Hadoop monitoring: parsing the data from each different interface and being able to locate and connect to these interfaces on each Hadoop node.  Each of these technologies exposes their data in a different format, and it would take significant development time to properly parse the data from each source.  There’s also a challenge in locating and communicating with the nodes to obtain this data.  Just to collect data from the namenode and jobtracker, the location of these services would have to be configured or discovered, and then failover would have to be accounted for as well.  Expanding the monitoring solution beyond that to collect data from every node poses both connectivity and security issues as well.  Surely there must be a better way!

Luckily Apache Ambari addresses all of these technical challenges by providing a collection of RESTful APIs from which a plethora of Hadoop monitoring data can be obtained.  Ambari handles the work of collecting the monitoring data from a variety of the monitoring technologies mentioned above.  It then aggregates this data and provides a series of RESTful APIs.  These APIs can all be accessed by making web service calls against a central node in the Hadoop cluster.  All data is provided in JSON format so it can easily be parsed by just about any programming language.

Since Viewpoint is written in Java and uses the Spring Framework quite extensively, Spring’s RestTemplate class was a natural choice for calling the RESTful APIs and parsing the results into Java model objects.  Here is some sample code to demonstrate the collection of the number of running MapReduce jobs, map tasks, and reduce tasks from Ambari.

 

package com.teradata.viewpoint.ambari;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.codec.binary.Base64;
import org.codehaus.jackson.annotate.JsonProperty;
import org.codehaus.jackson.map.DeserializationConfig;
import org.springframework.http.MediaType;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJacksonHttpMessageConverter;
import org.springframework.web.client.RestTemplate;

public class AmbariClient
{
    private String host;

    private String clusterName;

    private String user;

    private String password;

    private RestTemplate restTemplate;

    public AmbariClient(String host, String clusterName, String user, String password)
    {
        this.host = host;
        this.clusterName = clusterName;
        this.user = user;
        this.password = password;

        List<MediaType> supportedMediaTypes = new ArrayList<MediaType>();
        MediaType plainTextType = new MediaType("text", "plain");
        MediaType jsonType = new MediaType("application", "json");

        supportedMediaTypes.add(plainTextType);
        supportedMediaTypes.add(jsonType);

        MappingJacksonHttpMessageConverter mappingJacksonHttpMessageConverter = new MappingJacksonHttpMessageConverter();
        mappingJacksonHttpMessageConverter.setSupportedMediaTypes(supportedMediaTypes);
        mappingJacksonHttpMessageConverter.getObjectMapper().configure(
                DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

        List<HttpMessageConverter<?>> messageConverters = new ArrayList<HttpMessageConverter<?>>();
        messageConverters.add(mappingJacksonHttpMessageConverter);

        restTemplate = new RestTemplate();
        restTemplate.setMessageConverters(messageConverters);
    }

    public <T> T getAmbariHadoopObject(String url, Class<T> clazz)
    {
        // Add HTTP Basic authentication and timeouts to every request made against Ambari.
        SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory()
        {
            @Override
            protected void prepareConnection(HttpURLConnection connection, String httpMethod)
                    throws IOException
            {
                super.prepareConnection(connection, httpMethod);

                String authorisation = user + ":" + password;
                String encodedAuthorisation = Base64.encodeBase64String(authorisation.getBytes());
                connection.setRequestProperty("Authorization", "Basic " + encodedAuthorisation);
                connection.setConnectTimeout(30000);
                connection.setReadTimeout(120000);
            }
        };

        restTemplate.setRequestFactory(requestFactory);

        // Ambari exposes its monitoring data under /api/v1/clusters/<cluster>/...
        String fullUrl = "http://" + host + "/api/v1/clusters/" + clusterName + url;
        return restTemplate.getForObject(fullUrl, clazz);
    }

    /**
     * Model class to hold the data from the JSON response. The nested classes are declared static
     * so that Jackson can instantiate them during deserialization.
     */
    private static final class JobTrackerData
    {
        public static class Metrics
        {
            public static class MapReduce
            {
                public static class JobTracker
                {
                    @JsonProperty("jobs_running")
                    private Integer jobsRunning;

                    @JsonProperty("running_maps")
                    private Integer runningMaps;

                    @JsonProperty("running_reduces")
                    private Integer runningReduces;
                }

                @JsonProperty("jobtracker")
                private JobTracker jobTracker;
            }

            @JsonProperty("mapred")
            private MapReduce mapReduce;
        }

        @JsonProperty("metrics")
        private Metrics metrics;
    }

    public static void main(String[] args)
    {
        AmbariClient client = new AmbariClient("ambari.teradata.com",
                "clustername", "admin", "admin");
        JobTrackerData data = client.getAmbariHadoopObject(
                "/services/MAPREDUCE/components/JOBTRACKER", JobTrackerData.class);
        System.out.println("Jobs running: " + data.metrics.mapReduce.jobTracker.jobsRunning);
        System.out.println("Map tasks running: " + data.metrics.mapReduce.jobTracker.runningMaps);
        System.out.println("Reduce tasks running: "
                + data.metrics.mapReduce.jobTracker.runningReduces);
    }
}

Following Viewpoint’s standard data collection practices, all of the data collected from Ambari is stored in the Viewpoint database.  The data is collected from Ambari every minute by default, so the database holds a view of the state of the Hadoop system over the course of an hour, day, or week.  This historical data is used to generate a variety of charts in the Viewpoint web portal, and it also powers Rewind, so users can go back and see exactly what was occurring on the Hadoop cluster at a specific point in time.
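As a minimal sketch of that collection cadence (illustrative only, not the actual Viewpoint collector), a scheduler could poll Ambari once per minute and hand each sample to a persistence layer; the collectAndStoreSample method is a hypothetical stand-in for the call to AmbariClient and the database write.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AmbariCollectorSketch
{
    public static void main(String[] args)
    {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Poll Ambari once per minute, the default collection interval described above.
        scheduler.scheduleAtFixedRate(new Runnable()
        {
            public void run()
            {
                collectAndStoreSample();
            }
        }, 0, 1, TimeUnit.MINUTES);
    }

    private static void collectAndStoreSample()
    {
        // Hypothetical placeholder: the real collector would call AmbariClient (see the listing
        // above) and write the returned metrics to the Viewpoint database for trending and Rewind.
    }
}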

By using Ambari for monitoring of a Hadoop cluster, Viewpoint was able to deliver a comprehensive Hadoop monitoring solution in a relatively short amount of time.  Viewpoint’s Java and web developers were able to focus on the tasks at which they excel: getting the data from the source system (Ambari) and displaying it in Viewpoint’s portlets.  No time was wasted trying to get up to speed on Ganglia, JMX, or many of the details of Hadoop’s inner workings.  Ambari was a critical piece of technology to help Viewpoint roll out this solution and enhance Viewpoint’s support of Teradata’s Unified Data Architecture.


Teradata Alerts (CAM) 14.10 Release


This article describes what's new in the Teradata Alerts 14.10 release (also known internally as CAM). This release was made available on May 30th 2013.

Summary

The Teradata Alerts 14.10 release restructures the delivery types in the Viewpoint "Admin" -> "Alert Setup" portlet to make them more modular and introduces a new timeout setting for various delivery types.  The remainder of this article provides feature details for all the highlights of the Teradata Alerts 14.10 release.

  1. New modular look for delivery settings.
  2. Timeout options for alert actions such as BTEQ scripts, Run a Program, and SQL Queries.
  3. New display name option when sending an e-mail.
  4. If needed, the SMTP configuration can now be cleared or disabled.
  5. Big Numbers support for the Alert Viewer portlet.

Feature Details

The Delivery Settings layout in the Alert Setup portlet has been restructured. The BTEQ/SQL Login configuration has moved to the new Authentication area and has been renamed Teradata Login (see Figure 1). A new Notification Service area can be used to easily identify user-defined scripts and programs running on the notification server.

 Figure 1

Users now have the option to terminate, or be notified about, a program or script that runs longer than anticipated. In the Alert Setup portlet, the user can select the Notify option when setting up the SQL Queries delivery type, or when setting up a notification service such as BTEQ scripts or programs, to be notified of long-running or hung scripts and programs. Users can also choose to terminate a hung program or script immediately, or after a certain period of time, using the Terminate option (see Figure 2). In the delivery type settings, both the Notify and Terminate options are available for the SQL Queries delivery type; a rough sketch of the timeout concept follows the figure.

Figure 2
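As a rough sketch of the timeout-and-terminate concept (not the actual notification service implementation), a hung script can be detected and killed along these lines; the script path and the five-minute timeout are illustrative assumptions.

import java.util.concurrent.TimeUnit;

public class ScriptTimeoutSketch
{
    public static void main(String[] args) throws Exception
    {
        // Illustrative script path only.
        Process process = new ProcessBuilder("/opt/alerts/notify.sh")
                .redirectErrorStream(true)
                .start();

        // Wait up to five minutes; if the script is still running, treat it as hung and terminate it.
        if (!process.waitFor(5, TimeUnit.MINUTES))
        {
            process.destroyForcibly();
            System.out.println("Script exceeded its timeout and was terminated");
        }
    }
}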

For easy identification of e-mails, a new display name option has been added (see Figure 3).

Figure 3

Delivery types can now be enabled or disabled. For example, an administrator can now disable an already configured SNMP delivery type. For easy identification while setting up action sets, disabled delivery types are displayed in red, and disabled scripts are indicated as disabled (see Figure 4).

Figure 4

Finally, please refer to the associated Alert Configuration Guide for full details of the upgrade process and to the User Guide for all new feature details.

We hope these new additions and improvements to the alerting mechanisms are helpful to you. As always, we look forward to your thoughts and comments.


Teradata Viewpoint 15.00 Release Article


This article is the official release announcement for Teradata Viewpoint 15.00 with an effective release date of April 9th 2014. The Viewpoint 15.00 release has a whole new look and feel. The upgraded infrastructure embraces newer web technologies, improves performance, and enhances user accessibility, interaction, and discovery. As the versioning suggests, Viewpoint 15.00 supports the Teradata Database 15.00 release. 

Summary

The themes of the Teradata Viewpoint 15.00 release are currency with the latest web technologies, support for Teradata Database 15.00, and formally addressing Section 508 compliance and the Web Content Accessibility Guidelines (WCAG). As such, there have been significant modifications to the entire Viewpoint look and feel. Highlights:

Viewpoint New Look

Viewpoint has undergone a significant foundational re-architecture. However, the majority of the portlet monitoring and management logic (functions and flow within portlets) remains the same. Below are some of the foundational changes you will enjoy:

  • Flat design with a new color scheme.
  • The font has been changed from Verdana to Arial.
  • New icons for each portlet.
  • The chrome has been redesigned, increasing the vertical real estate.

Here is a snapshot of the new Viewpoint 15.00 look:

 

Header Changes: The next view displays how the header and Rewind bar have changed. The circled header icons represent access to "Help", the "Viewpoint Admin" menu, and a pull-down for the Viewpoint "Profile" and "Log Out" options. Notice the fresh new look of Rewind: all the discrete time increments are clearly shown as separate buttons. Lastly, the Rewind bar now stays visible even when scrolling down a page.

 

Add Content: The Viewpoint Add Content menu has been redesigned, significantly improving user interaction and discovery. One can add one or more portlets in a single operation, including multiple instances of the same portlet if desired. There is also a search option at the top to assist in finding the right portlet. New portlet category groupings assist in searching and in understanding portlet relationships. Lastly, notice that all portlets now have a new, unique, representative icon.

 

New "Help"including on-line search capability as well as context sensitive help directly within portlets taking you automatically to that portlet assistance.

   

Teradata Database 15.00 New Features: Here is an overview of the Viewpoint additions related to this new Teradata release.

New Query Monitor Report – By Blocker View: This report is very useful for understanding the blocking contention on a Teradata 15.00 or newer system. It lists all sessions that are blocking other sessions or are blocked by other sessions. The sessions are grouped into three categories (a small sketch of the grouping logic follows the list):

  • Root Cause – Sessions that are blocking other sessions.
  • Granted – Sessions that are blocked but are also blocking other sessions; consider the case of a BT-ET transaction where there can be multiple SQL statements.
  • Waiting – Sessions that are blocked and are waiting.
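A minimal sketch of that grouping logic (illustrative only, not Viewpoint's code) classifies a session from two facts: whether it is blocking other sessions and whether it is itself blocked.

public class BlockerCategorySketch
{
    enum Category { ROOT_CAUSE, GRANTED, WAITING, NONE }

    static Category classify(boolean isBlockingOthers, boolean isBlocked)
    {
        if (isBlockingOthers && !isBlocked)
        {
            return Category.ROOT_CAUSE; // blocking other sessions, not blocked itself
        }
        if (isBlockingOthers)
        {
            return Category.GRANTED;    // blocked, but also blocking other sessions
        }
        if (isBlocked)
        {
            return Category.WAITING;    // blocked and simply waiting
        }
        return Category.NONE;           // not involved in any blocking
    }

    public static void main(String[] args)
    {
        System.out.println(classify(true, false)); // ROOT_CAUSE
        System.out.println(classify(true, true));  // GRANTED
        System.out.println(classify(false, true)); // WAITING
    }
}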

Blocking Tab: A new Blocking tab appears when drilling down on a session in Query Monitor that is blocking other sessions. It shows information about the locks held by the session, the count of all sessions blocked by this session, and how long it has been blocking other sessions. It also lists the sessions that are blocked by this session, so the user can see exactly what it is blocking and take appropriate action.

Blocked By Tab in Query Monitor: This tab was redesigned to list all the sessions that are blocking the current session.

 

Workload Management: Teradata Viewpoint 15.00 supports Teradata Database 15.00 Workload Management features such as:

  • One can throttle a request at the virtual partition level.
  • One can specify a maximum estimated step processing time.
  • One can sub-classify on the percent of a table accessed.
  • One can classify on the usage of a table in a particular statement.
  • A new report displays resource allocation across all SLG tier workloads in all virtual partitions for a planned environment.

For users of Teradata Integrated Workload Management, you can now define planned environments as state matrix options. 

Please refer to Teradata Database 15.00 documentation for further information on this exciting new Database release.

Aster Workload Management

Teradata Viewpoint 15.00 supports Workload Management for Aster 6.0 and newer. With this new addition, the Viewpoint Workload Designer portlet provides an alternate method of configuring rulesets as well as providing additional functionality such as:

  • Support for more than one named ruleset, which can be edited by multiple users to make incremental updates
  • Lock/unlock capabilities
  • The ability to export, import, and clone rulesets

Ruleset features such as throttles and workloads have also been added.

New Metric Heatmap portlet: The prior Capacity Heatmap and Metrics Graph portlets have been merged into one super metric portlet called Metric Heatmap (even the name is integrated). It provides a view toggle for easy transition between the different displays, as shown below, where the system CPU usage is shown in two different views within the same portlet.

Alert Viewer portlet hide alerts: As a new type of filter, a hide option has been built into the Alert Viewer portlet, allowing certain alerts to be hidden from view. One may use this to selectively hide duplicates or possibly as part of tracking resolved issues. The hide option can be executed for an individual alert or through a Table Actions menu bulk operation (via check boxes). There is a new setting that controls whether hidden alerts should be displayed; if they are displayed, they will have a strike-through representing the exception. All of these aspects are shown below.

Enhanced Node Resources portlet: The Node Resources portlet has been re-designed but still serves the same purpose of helping to identify over- and under-utilized nodes and vprocs. This new version is much easier to navigate and understand. The changes were significant enough to warrant their own article; please refer to the "Node Resources Take-2" article for more details.

With the underlying infrastructural changes, all product portlets need to be upgraded in sync. The listing below documents the minimal product versions necessary for Teradata Viewpoint 15.00 compatibility.

  • Viewpoint 15.00
  • Data Lab 15.00
  • DSA 15.00
  • Unity Ecosystem Manager 15.00
  • Unity Data Mover 14.11
  • Unity Director / Unity Loader 14.11

Please refer to the Viewpoint Configuration Guide for details of the upgrade process and the User Guide for details of new features.

We sincerely hope you like the new Teradata Viewpoint 15.00 changes and how they help in the discovery and usability of the product. As always, we look forward to your thoughts and comments.


Node Resources Portlet - Take 2


As part of the Viewpoint 15.00 release, the Viewpoint team built a brand new version of the Node Resources portlet.  The primary purpose of this portlet continues to be to identify skew on a Teradata Database system.  The original incarnation of this portlet required a fair amount of manual intervention in order to achieve this goal.  The new version of this portlet includes a simpler user interface and a new algorithm to identify skewed resources (or “outliers”) automatically.

Since the Teradata Database is a massively parallel architecture, it’s important that all of the units of parallelism are performing approximately the same amount of work.  If some of the nodes or VPROCs within the system are performing too much or too little work when compared with the system-wide average, this is called skew.  When work for a specific query is skewed, the query isn’t taking full advantage of the power of the system, and therefore doesn’t complete as quickly as possible.  When the work on nodes or VPROCs is skewed, this can affect the performance of the system and also reduce the effective capacity of the system.

There are three primary enhancements to the Node Resources portlet.  The first is the use of a histogram to visually display the data distribution for a particular metric.  The automatic calculation of “outliers” based upon the data distribution is the second improvement.  The final significant change is the ability to analyze the data over a time range instead of just the last sample of data.

The visualization in the previous version of this portlet depicted a square for each node or VPROC on the system.  For larger systems it was hard to see all the squares on a single screen, and this representation of the data didn’t really add much insight into the actual data for a particular metric.  The new version of the portlet instead uses a histogram to plot the data for the selected metric.  The histogram contains 20 buckets of equal size, and the height of each bar represents the number of nodes or VPROCs that fall into each bucket or range.

The red bars in the histogram represent the buckets that contain “outliers”, which are nodes or VPROCs that are significantly skewed.  Outliers are calculated as resources that fall more than 1.5 times the interquartile range below the first quartile or above the third quartile, a standard statistical technique for finding outliers in a data set.  In this way, the portlet automatically identifies any nodes or VPROCs that are significantly skewed for the selected metric.  For a system that is working in a reasonably parallel fashion, it’s entirely possible that you won’t see any outliers in the histogram.  If the histogram does show outliers, you might want to investigate further to discover the cause of the skewing on your system.
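To make the outlier rule concrete, here is a small illustrative sketch (not the portlet's actual code) that flags values falling more than 1.5 times the interquartile range below the first quartile or above the third quartile; the per-node CPU values are made up.

import java.util.Arrays;

public class OutlierSketch
{
    // Simple quantile using linear interpolation between the two nearest ranks of the sorted data.
    static double quantile(double[] sorted, double q)
    {
        double pos = q * (sorted.length - 1);
        int lower = (int) Math.floor(pos);
        int upper = (int) Math.ceil(pos);
        return sorted[lower] + (pos - lower) * (sorted[upper] - sorted[lower]);
    }

    public static void main(String[] args)
    {
        // Illustrative per-node CPU percentages; one node is clearly skewed.
        double[] cpu = { 41, 43, 44, 45, 45, 46, 47, 48, 49, 95 };
        double[] sorted = cpu.clone();
        Arrays.sort(sorted);

        double q1 = quantile(sorted, 0.25);
        double q3 = quantile(sorted, 0.75);
        double iqr = q3 - q1;
        double lowerFence = q1 - 1.5 * iqr;
        double upperFence = q3 + 1.5 * iqr;

        for (double value : cpu)
        {
            if (value < lowerFence || value > upperFence)
            {
                System.out.println("Outlier: " + value); // prints "Outlier: 95.0"
            }
        }
    }
}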

The third significant change is the ability to analyze up to an hour’s worth of data while using this portlet.  In Viewpoint 14.10 and earlier, the Node Resources portlet only reported data for the last sample period.  This data typically represented the data for a minute or less of elapsed time on your system, which is too short a time period to reliably discover significant skewing issues on a system.  The new version of the portlet lets you choose the last collection time as before, but also an aggregation of 5, 15, 30, or 60 minutes of data.

While viewing the main screen of the portlet, you can click on any of the bars in the histogram to drill down and view the data for just the nodes or VPROCs in that particular bucket.  From the main screen you can also click the “Down” or “Outliers” bubbles to change the filter for the data grid so that only those particular resources are displayed.  You can click on any of the rows in either of the data grids to drill down to a detail screen that displays all of the metrics for that particular node or VPROC.  The detail screen is different for nodes, AMPs, PEs and other VPROC types so that only the applicable metrics for that particular resource are displayed.

This new version of Node Resources should make it much simpler to monitor and identify potential skewing issues across the nodes and VPROCs of your Teradata Database system.

Note that the Node Resources portlet only applies to Teradata DB systems whereas the Node Monitor portlet provides monitoring aspects for Aster or Hadoop system nodes.

 

 


Automated Statistics Management

Course Number: 
51253
Training Format: 
Recorded webcast

Managing Teradata Optimizer statistics can be labor intensive and error prone. Many users struggle to know what columns and indexes to collect statistics on, how to detect and prevent statistics from becoming stale, and how to know if collected statistics are being used.

Enter AutoStats! Automated Statistics Management (or AutoStats for short) is a new feature in Teradata 14.10 that helps automate statistics collection and provides intelligence surrounding statistics management. Using the new “Stats Manager” portlet in Teradata Viewpoint 14.10, users can schedule jobs to collect statistics and analyze a system for missing, stale, and unused statistics. Join us for a demo of the new “Stats Manager” portlet in Teradata Viewpoint and learn what AutoStats can do for you! The session includes discussions on best practices and how AutoStats works behind the scenes.

Presenters:
Eric Scheie - Teradata Corporation
Louis Burger - Teradata Corporation

Price: 
$195
Credit Hours: 
2

Performance Testing - When is it time for a System Tune Up?

Course Number: 
49658
Training Format: 
Recorded webcast

How do you determine your system’s performance prior to the dreaded customer call asking why their queries are running longer?  What’s the impact to the business when the warehouse environment is under performing?

This presentation addresses best practices in performance testing and monitoring and the realistic actions that can be taken.  Performance baselines, benchmarks, workload queries, and canary queries are covered.  When and how to test are also described. Short-term and long-term ‘tune up’ performance ideas are given, including capacity planning.

Note: This was a 2010 Partners Conference session, updated and re-recorded in 2014

Presenter: Jim Blair, Teradata Customer Briefing Team

Audience: 
Data Warehouse Administrator, Data Warehouse Technical Specialist, Data Warehouse Project/Program Mgmt
Price: 
$195
Credit Hours: 
1

What's New With Viewpoint 15.00 Shared Pages


The Shared Pages feature in Viewpoint has been updated to make it simpler and more intuitive. Now, if you want to create a template page to share with others or to ensure that new users are not greeted with a blank Home page, you can find everything you need in the new and improved Shared Pages admin portlet.

For anyone not familiar with the Shared Pages feature, it allows admin users to create template Viewpoint pages with full control over the layout of portlets on the page and over the settings and views within the portlets. You can allow users in a role to take a copy of a shared page and customize it, have them view a read-only page that cannot be customized so your users get a consistent view of data, or automatically add pages to the portal.

The main changes in the Shared Pages interface from the last release are:

  • Features and functions have been consolidated in one area – Now the Shared Pages admin portlet is a one stop shop for creating and managing shared pages
  • Removed banner messages that appeared to end users on portal pages – The message banners that took up valuable screen real estate for little value have been eliminated
  • Permission to administer shared pages is now a normal portlet permission – It is no longer a 'special' permission that lives in a different location from other permissions

The Making of a Shared Page

To get started, ensure the Shared Pages portlet is enabled and that you belong to a role that has permission to use the portlet.

Upon opening the portlet, you see a list of all of the shared pages that have been created and some useful data points for each.

Page Properties

When you click the Add Shared Page button you are presented with some options for how the page should be applied. Normally, shared pages are editable and can be added or removed at any time.

Assign to Role

This is how you choose who can use the page. After you create the page, you cannot change the role, since different roles can have very different access rights and switching roles could get messy.

Enable page

This specifies whether or not the page is accessible to users. Remember that as soon as you click Create, the page is generated. If you don't want end users to see that page until after you have added and edited the portlets, do not enable the page until you are happy with the end result.

Show this page the first time a user logs in

This option helps new Viewpoint users by automatically displaying a useful portal page rather than a blank page.

Read-only

This ensures that everyone who adds the shared page to the portal will see the same portlets in the same configuration.

Mandatory

This is a variant of a read-only page that is automatically added for everyone in the target role and cannot be removed.

Edit the Page

Now that Viewpoint knows the target role and its associated permissions, you can build a shared page and see how it will look to users. The page edit screen looks nearly identical to a normal page.


In the first step, you already configured the Page Properties so the next step is to add portlets to the page. The Add Content interface allows you to add only the portlets available to the target role. This removes some of the confusion of the previous release where you could add any portlet but it might not appear to the user.

When customizing the shared page, 'Edit and Preview' is the normal mode and ensures you see exactly which features and functions are available to users in the target role. It is recommended that you do most page configuration in the preview mode and that you verify the shared page in that mode before saving.

The gear icons allow you to configure additional settings that might not be available to the target role such as the ability to configure which table columns appear, select a target system, and configure portlet Settings (previously known as Preferences).

When you are happy with your creation, ensure the page is enabled and click Save; the new page will be available to all users of the target role.

The Taking of a Shared Page

After the page has been created, it can be added (assuming it is a normal or read-only page) using the Add Page menu at the top of the portal (the plus icon to the right of the page tabs). Mandatory pages do not appear in the list because they are added automatically for all users in the role.

 


Unified Data Architecture Monitoring & Management

Course Number: 
51539
Training Format: 
Recorded webcast

The Unified Data Architecture enables Teradata, Aster and Hadoop to deliver unparalleled value.

Teradata Viewpoint, Studio, Table Operators, and the Unity product suite provide enabling technologies for connectivity, monitoring, and management for all these systems. This presentation highlights the features and capabilities of these client enabling solutions.


Presenter: Gary Ryback - Teradata Corporation

Price: 
$195
Credit Hours: 
2

Introduction to TASM/Workload Management Portlets (SLES11)

Course Number: 
50178
Training Format: 
Recorded webcast

This session provides an overview and demonstration of the TASM portlets and TASM feature functionalities, focusing primarily on the new features introduced in Teradata 14.0 SLES 10 and SLES 11.

It demonstrates how to use the Workload Designer portlet to configure TASM rule sets that exercise these features. It also highlights where these features are available in the Workload Monitor portlet while monitoring workloads on a Teradata system. This session covers in depth the TASM portlet changes that support the new workload management methods introduced in Teradata 14.0 SLES 11.

This presentation was updated and re-recorded in October 2014.

Presenter: Betsy Cherian, Engineering Manager, Viewpoint Core - Teradata Corporation

Audience: 
Data Warehouse Administrator, Data Warehouse Application Specialist
Price: 
$195
Credit Hours: 
1

Introduction to Teradata Active System Management (TASM)

Course Number: 
39365
Training Format: 
Recorded webcast

TASM gives customers the ability to manage Teradata resource usage and performance on platforms executing diverse types of work.

This session discusses how to get started using TASM, as well as what workload management rules are available within TASM to control concurrency, to determine what queries are allowed to run, or to differentiate the priorities of active workloads. Examples of commonly used options from current TASM users are shared.

Key Points

  • Overview of TASM
  • TASM Features
  • Frequently used TASM options 

Presenter: Carrie Ballinger, Sr. Technical Consultant – Teradata Corporation

Audience: 
Data Warehouse Architects, Data Warehouse System Administrators, DBAs, Enterprise or IT Architects
Price: 
$195
Credit Hours: 
1

Ad Hoc Workloads and Business Performance SLAs

Course Number: 
47902
Training Format: 
Recorded webcast

How do you determine who gets priority when every user thinks their work should come first? What’s the impact to the business when the warehouse environment is thought to be under-performing? Or is it just that one user?

This presentation addresses best practices in establishing workload SLAs, both tactical and strategic, in an ad hoc environment and how to engage the business users in determining priority. Included are realistic testing methods and what testing can and should be performed prior to query implementation. Query SLAs, both tactical and strategic, are discussed, as are methodologies for gaining consensus on priority. Ongoing SLA monitoring, tracking, and reporting are also discussed.

Updated and re-recorded in January 2015.

Presenter: Jim Blair, Data Warehouse Consultant – Teradata Corporation

 

Audience: 
Data Warehouse Administrators, Data Warehouse Business Users, Data Warehouse Technical Specialists
Price: 
$195
Credit Hours: 
1

Teradata Data Lab (15.10) Overview

Course Number: 
53411
Training Format: 
Recorded webcast

This course provides an overview of the Data Lab product concepts and value proposition and how it applies to DBAs, Power Users, and the business community.

It comprises a slide presentation as well as a live product demonstration. The demonstration shows the new and improved look and feel introduced with the Data Lab 15.10 release. The course also reviews how other Teradata technologies can interact with Teradata Data Lab, extending the self-service aspects of data analytics and exploration.

Presenter: Gary Ryback, Product Management Director - Teradata Corporation

Audience: 
This course is blended to apply to both the DBA / IT staff, power users, as well as the analytical business community.
Price: 
$195
Credit Hours: 
1

Teradata Data Lab 15.10 Release


We are very pleased to announce the formal release of Teradata Data Lab 15.10, effective April 2015! Teradata Data Lab 15.10 is an exciting release, as it is a significant product evolution in feature functionality, navigation, performance, and ease of use. The Data Lab Viewpoint interface has been combined into one redesigned double-wide portlet offering more information, context-sensitive requests, and enhanced navigation. Read on to discover more about this exciting new Data Lab product release.

Data Lab Concept:

To review, the key concept behind Teradata Data Lab is to stop moving production data out of production to feed analytical islands such as spread marts or data marts. Instead, move the analytical proofing data into an intelligent sandbox environment within production, where Data Lab provides production protection for the DBA while allowing the self-service provisioning and governance capabilities desired by the analytical community. This is a much better approach in terms of data security, user efficiency, system resource usage, and proofing confidence, to name just a few of the benefits.

Data Lab 15.10 Release Overview:

Here are the highlights of the Data Lab 15.10 Release:

See below for more details on all of these exciting new features.

The New Data Labs Portlet

With the Data Lab 15.10 release, the Data Labs portlet is now all-inclusive, covering all functionality of the prior release's Lab Group Setup and Data Labs portlets. In addition to combining the two portlets, we have expanded the real estate of the portlet to allow for more column displays as well as improved navigation between lab groups, data labs, and tables. Within this new architecture, there are also performance improvements! Here's the new portlet look showing all three levels of Data Lab (lab groups, labs, and tables) in a single "Monitoring" view.

There are a number of new columns available for display across the lab group, lab, and table views, in particular at the lab group level. As a reminder, you select and configure your column preferences through the "Configure Columns" option in the upper right "Table Actions" pull-down menu, as highlighted above. Here are the available columns across the three levels:

In addition to the new format, we have added situational awareness for easy informational display and request creation and submission, whether that is requesting a display of lab details or extending an existing lab. These new menus, triggered via the pull-down in the left margin, are available at all three levels as shown below. Lab group or data lab creation is now done through the "+" in the blue left-margin section headers for Lab Groups and Labs. Note that the Requests tab has been maintained, so you can still generate and submit requests there as well.

In the view above, you will also notice a new lab request option, "Reinstate Lab". Since the first Data Lab release, we have always offered lab expirations, reinforcing the point that data labs should not be permanent. A lab that expires simply has its access permissions removed, so the underlying tables and data remain intact. In the past, if you wanted to transition an expired lab back to active, the access permissions had to be manually rebuilt. With Data Lab 15.10, the access permissions are captured as part of the expiration, and the new Reinstate Lab request provides an easy, automated way to reapply them. This should make resurrecting expired labs much less painful.

Data Lab 15.10 also offers new owner and user/role access displays at both the lab group and lab levels. These displays are initiated through the View Lab Group Details and View Lab Details requests; the resulting displays are shown below. Worth noting: to enable these new displays, there is a new lab group "Enable display of users and roles for the lab group and labs" toggle checkbox. Refer to the New Lab Group Features section of this article for the exact location of this checkbox.

Here are two views of the View Lab Group Details request, the first showing the "General" lab group informational display and the other showing the user/role access at the lab group level. The "Owners" display option would, of course, display the Lab Group Owners listing. The View Lab Details request shows very similar information, but for the labs themselves.

 

New Lab Group Features

There are a number of new lab group features in the Data Lab 15.10 release. Many of these were direct customer requests, and we listened! So let's walk through them by looking at some of the setup steps in the refreshed "new look" lab group six-step setup wizard.

Step #1 in setting up a lab group is to define the aspects of the lab group itself. For this step in Data Lab 15.10, you will see the new lab group description field, which will assist users in those Data Lab installations where the number of lab groups is increasing. There is also a new database search capability when selecting the "Parent database", which makes it easier to navigate the Teradata system objects. This setup step is also where the new toggle checkbox for user/role displays is located.

Step #2 in setting up a lab group is to define the settings and defaults related to the labs within this lab group. This step in particular has a number of customer-requested additions; hopefully it is not getting too busy. The first enhancement to mention is the configurable day settings for expirations, deletions, and the new maximum cumulative life. The number-of-days selection in the past was limited to a specific pick list (i.e. 30, 60, 90 ...), which just didn't provide enough flexibility for all the different customer use cases. Although we didn't abandon the pick list, due to its ease-of-use value, you can now manually enter whatever number of days you want; hence the "99" days to expiration and "33" days to deletion in the screenshot below.

We have also added the "Remove no expiration option from requests" checkbox. This was yet another customer request, where those pesky users would ALWAYS select no expiration when requesting a lab extension. Customers that believe labs should be temporary would never approve a "no expiration" extension request, so why even offer it? Now you no longer have to. Another checkbox addition in this setup step is "Grant lab users CREATE TABLE privilege". This is for customers that don't allow users to create objects due to concerns around object ownership and grant access capability. Those customers that require objects to be created in a different manner, through a stored procedure for instance, can now implement Data Lab without object ownership concerns.

The most significant addition in this step is Request Limits. Request limits implement maximum setting controls for both the life and the size of a data lab within this lab group. In the majority of cases, a data lab should be temporary, and the new maximum cumulative life provides a secondary level of control. For instance, a lab owner may keep requesting lab extensions of a length that is automatically approved; the request that exceeds the cumulative life will go for approval regardless of any automated approval settings. The same idea applies to the maximum size: certain size increases will be automatically approved, but only to a certain limit, after which they go for formal approval regardless. These new limits should help both DBAs and Lab Group Owners in managing the lab groups and labs in their environment. A simplified sketch of this approval logic follows.
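Here is that simplified sketch of how such auto-approval logic behaves (illustrative only; the day values are assumptions, not product defaults):

public class RequestLimitSketch
{
    // Returns true if the extension can be auto-approved, false if it must go for formal approval.
    static boolean autoApproveExtension(int currentLifeDays, int requestedExtensionDays,
            int autoApproveExtensionDays, int maxCumulativeLifeDays)
    {
        // Extensions within the auto-approval threshold are normally approved automatically...
        boolean withinThreshold = requestedExtensionDays <= autoApproveExtensionDays;

        // ...but any request that would push the lab past its maximum cumulative life
        // goes for formal approval regardless of the automated approval settings.
        boolean exceedsCumulativeLife =
                currentLifeDays + requestedExtensionDays > maxCumulativeLifeDays;

        return withinThreshold && !exceedsCumulativeLife;
    }

    public static void main(String[] args)
    {
        // Assumed settings: auto-approve extensions up to 30 days, maximum cumulative life of 180 days.
        System.out.println(autoApproveExtension(90, 30, 30, 180));  // true  - within both limits
        System.out.println(autoApproveExtension(170, 30, 30, 180)); // false - exceeds cumulative life
    }
}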

The other four steps (#3-#6) of the lab group setup wizard have not really changed in Data Lab 15.10 with one minor exception on step #6. Below is a quick review of these other steps and information on the minor addition to step #6.

  • Setup step #3: Owners - The description of this step is "Select which Viewpoint users and roles own the lab group." This assigns Lab Group Owners, who are then able to approve and thus manage lab group requests within this lab group. Note that this is a Viewpoint assignment, so no additional Teradata permissions are granted as part of this operation. Another point of confusion: Lab Group Owners are NOT able to create new or edit existing lab groups.
  • Setup step #4: Private - This optional setup step is described as "Restrict which Teradata users and roles can access the lab group." A private lab group is not only controlled from a visibility standpoint but is also limited in which Teradata users and roles can be included in lab group requests, for instance an "Add User" request.
  • Setup step #5: Default Users - Access within Data Lab can be at the lab group or lab level. As the step description mentions, "Select which Teradata users and roles have access to all labs in this lab group." This is access assignment at the lab group level; note that access at the lab level is done during a new lab request or a subsequent add user request.
  • Setup step #6: Approvers - The approvers step defines which requests at which thresholds are automatically approved or require approval, and by whom they may be approved, either a lab owner or a lab group owner. The step description describes this as "Set the thresholds and approver for each request type." There is one informative addition to this screen: an informational notation for the lab default size setting (100MB in the graphic below). The reason we added this is to encourage compliance with the presented defaults in an add lab request. For the lab group below, when a request for a new lab is generated, the size default presented is 100MB. The self-service auto-provisioning thinking here is that if the requester chooses the default or something smaller, then that should be a consideration for automatic approval. This simply saves having to go back to step #2 to retrieve that lab setting.

Enhanced My Notifications:

My Notifications is a preference setting that allows you to manage the emails generated by the Data Lab product. As with other Viewpoint preferences, it is located in the "Portlet menu" pull-down in the upper right corner of the Data Labs portlet. The enhanced preferences now offer the following email configuration options:

Release Compatibility:

  • Teradata Data Lab 15.10 requires Teradata Viewpoint 15.10 and vice versa.
  • Teradata Viewpoint and Data Lab 15.10 are supported with Teradata Database 15.10, 15.00, 14.10, 14.00, and 13.10 releases.
  • Teradata Data Lab is not currently supported with Teradata Aster or Hadoop but both are future roadmap considerations.

Thanks and let us know what you think of this most excellent Data Lab release.

 


Overview of Teradata Viewpoint 15.10

Short teaser: 
This article is the official release announcement for Teradata Viewpoint 15.10
Cover Image: 

This article is the official release announcement for Teradata Viewpoint 15.10 with an effective release date of April 9th 2015. In addition to providing support for various Teradata products, the Viewpoint 15.10 release offers two new strategic reporting portlets, Query Log and Application Queries.  

Summary

The primary themes of the Viewpoint 15.10 release are to enhance Teradata DB reporting capabilities and to support various products including Teradata Database 15.10. The highlights of the Viewpoint 15.10 release are:

Performance Reports

Two new portlets were added in Viewpoint 15.10 which support Teradata Database 14.10 and newer. These portlets use data from the Performance Data Collection and Reporting (PDCR) infrastructure.

Query Log

The Query Log portlet enables Teradata Database administrators to view key reports based on the historical DBQL data in the PDCRDATA.DBQLogTbl_Hst table in Teradata Database. The screenshot below shows that 4 applications or 3 users utilized the system named "paper" on May 18th 2015. Next to it, the bar chart displays a visual representation of the number of queries that fall into each category for the selected metric. The chart provides details such as the number of queries that had single-AMP, two-AMP, or all-AMP steps, or the number of queries that resulted in an error. The trend chart next to the bar chart helps analyze key performance indicators, aggregated by day over a period of time. The trend chart also helps users understand the impact of certain events, such as a Teradata version upgrade or a TASM ruleset change, on a key metric. Toward the bottom of the screen, the Logged Queries tab provides key metrics for queries logged on the selected date. The Suspect Queries tab displays information for all logged queries that are designated as suspect. Suspect queries are those whose values surpass thresholds defined for the Query Log data collector in the Monitored Systems portlet.

Drilling down to Users or Applications displays details about the users or applications that were running on the system, such as the number of logged queries or how many queries were classified as suspect for a particular user or application. The screenshot below is from a drill down on Users.

Drilling down further to a particular user gives summary stats about the user, such as the number of logged queries and the number of queries classified as suspect. The Queries tab lists all the queries submitted by the user as well as all the queries that were classified as suspect. The Trend tab can be used to plot multiple trend charts, which helps analyze key performance indicators aggregated daily or weekly over a period of time. It also helps in analyzing the impact of certain events, such as a Teradata version change or a TASM ruleset change, on the key metrics for a user. The trend chart for applications also helps analyze the impact of an application version change if the Teradata recommended QueryBand format is followed, as Viewpoint picks up the application name and version from the QueryBand.

In the Queries tab, drilling down further to a query gives query-level stats such as time spent in the delay queue, KPIs, and workload details; in the SQL and Query Band tabs, one can see the SQL text and QueryBand information.

Application Queries

The Application Queries portlet helps application users understand their application's performance. The Application Queries view displays summary information for each application, and each of its versions, that submitted queries on a selected date. Drilling down to a particular application gives summary stats about the application, such as the number of logged queries and the number of queries classified as suspect. The Queries tab lists all the queries submitted by the application and all the queries that were classified as suspect. The Trend tab allows users to plot multiple trend charts, which helps analyze key performance indicators, aggregated daily or weekly over a period of time, for the application. Users can also see the impact of certain events such as a Teradata version change, a TASM ruleset change, or an application version change.

In the Queries tab, drilling down further to a query gives query-level stats such as time spent in the delay queue, KPIs, and workload details, while the SQL and QueryBand tabs show the SQL text and QueryBand information.

If the Teradata recommended QueryBand format is followed, Viewpoint automatically generates the application name from the QueryBand information; an administrator can then assign Viewpoint users or roles to those applications in the Query Group Setup portlet so that application users can see details about their applications in the Application Queries portlet. If the Teradata recommended QueryBand format is not followed, one can define applications in the Query Group Setup portlet and assign Viewpoint users or roles to them.

Performance Data Collection and Reporting (PDCR) Scheduling

PDCR is a Teradata PS offering which collects historical data from various Teradata system tables (ResUsage, QueryLog, AmpUsage, LogonOff, TDWM) and stores it in a PDCR database defined on the Teradata system. This database is then used to generate a series of customer performance analysis reports (the Excel Toolkit and the PS Viewpoint portlets).

PDCR involves:

  • Creation of the PDCR infrastructure - this can be done using the PDCR dip script, which is part of Teradata Database 14.00 and newer
  • Regular scheduling of the maintenance job that moves data from the Teradata system tables to the PDCR database. This can now be done using Viewpoint 15.10 on Teradata Database 15.00 and newer. Prior to this new offering, PS-developed scripts were used for maintenance jobs.
  • Upgrade and migration of the PDCR database is still handled by Teradata PS
  • The PDCR Excel toolkit reports and PDCR Viewpoint reporting portlets remain a Teradata PS offering.
  • The Query Log and Application Queries portlets provided in Viewpoint 15.10 use the PDCR data repository.

The PDCR scheduling portlet allows you to create, monitor, and manage PDCR scheduling jobs. You can see when a particular job was last executed, whether it succeeded or failed, what the error was if it failed, how many rows were accessed, which tables were loaded, and so on. It also allows you to send alerts for failed jobs or when the PDCR staging/reporting database reaches space limit thresholds.

Products Support - Teradata Database 15.10 Support

The following features in Viewpoint 15.10 require Teradata Database 15.10 or above.

Secure Zone

Teradata Database 15.10 added the Secure Zones feature to support multi-tenancy or sandbox environments. This feature restricts user access to a set of database objects. Viewpoint 15.10 is built with Secure Zone awareness: zones are assigned to a role in the Roles Manager portlet. Once a zone is assigned, Viewpoint users in that role will, when accessing the portlets listed below, only see details for queries accessing the objects assigned to that zone.

  • Queries Portlets (Query Monitor, MyQueries, etc.)
  • Query Log
  • Application Queries
  • Space Usage
  • Lock Viewer

In the screenshot below, the "0 of 2" for the "WD" system means that of the 2 zones defined on the WD system, none are assigned to the Administrator role. One can click the "0 of 2" indicator to assign zones to a role.

Partition Level Locks

Teradata Database 15.10 introduced a new locking mechanism to improve partition level access. Viewpoint will display these locks in the Query Monitor and Lock Viewer portlets.

The QueryBand Option in Profiles

Teradata Database 15.10 added an option to set a QueryBand in a profile, which becomes the default QueryBand for a session. In Viewpoint, the user can view the profile QueryBand in the QueryBand tab of the Query Monitor portlet.

Stored Procedure Monitoring

Viewpoint 15.10 can now distinguish SQL that is part of a stored procedure from SQL that is not. The SQL tab in the Query Monitor drill down will now display the stored procedure name.

Proxy User Information

Teradata Database 15.10 allows users to log on as a proxy user and use the access rights of that proxy user. Viewpoint 15.10 shows the proxy user details in the Query Monitor overview drill-down tab and in the Workload Monitor portlet. See the screenshot below.

Request Level Skew

Viewpoint 15.10, along with Teradata Database 15.10, can now report request-level CPU and I/O skew in addition to snapshot skew information. To accommodate this, the overview tab in the Query Monitor portlet was rearranged.

Products Support – Workload Management

Unless stated otherwise, all new workload management features require Teradata Database 15.10 or newer. Viewpoint 15.10 supports the following workload management features:

  • Users now have an option to prioritize the delay queue based on workload priority. Users can now choose to release queries based on workload priority instead of only FIFO order.
  • Users can now separately classify backup and restore jobs.
  • Users can now classify on the MloadX utility.
  • Users can now define an AMP Worker Task (AWT) throttle for Utility Name, Request Source, Query Band, DSA job type, or a combination of these criteria.
  • Users now have a new minimum response time option, which allows them to hold a query in a response state until the minimum workload response time threshold is met.

In the Workload Monitor portlet, when drilling into the delayed request throttle view, there are additional tabs as shown in the screenshot below. This is supported for Teradata Database 13.10 and newer.

  • By Workload - same as the previous view; displays all the sessions that are currently delayed
  • By Throttle - displays all queries included in a throttle counter. A query that is included in a throttle counter might still be executing; it is only delayed if the limit is exceeded.
  • By Throttle Count - displays the counters for each active throttle. For Teradata Database 15.10, this will now also display the system default throttle.

Products Support – Aster 6.10 Support

The Viewpoint 15.10 release supports Aster Database 6.10. With this release, users can cancel any process or query running on Aster Database 6.10 and above. This is done by having Viewpoint submit an asynchronous abort to the database.

Alerts

  • An Include/Exclude by Account String option was added for session alerts, allowing account strings to be included or excluded when defining events for session alerts.
  • Users can now send alerts for a session stuck in the responding state.

Online Restore and Server Migration

Migrating to a new Viewpoint server or restoring a Viewpoint server has been made easier, with minimized downtime. This was accomplished by only taking Viewpoint services offline while configuration data is restored. The system is then made available while the historical data is restored in the background.

Progress of the restore or migration can be monitored in the Viewpoint portal notification area.

Below are three restore/migration options that are now supported:

  • A configuration-only restore
  • A configuration-only restore or migration into a clean database
  • A full restore or migration

Cluster Notification 

A list of e-mail addresses can now be configured to receive cluster-related e-mail notifications.

Please refer to the compatibility matrix and the associated Viewpoint Configuration Guide for details of the upgrade process, and to the User Guide for details of the new features.

We hope you like these new changes in Teradata Viewpoint 15.10. We always look forward to your thoughts and comments.


What's New in Viewpoint 15.10

Course Number: 
54208
Training Format: 
Recorded webcast

This presentation provides an overview of the Teradata Viewpoint 15.10 release and includes a live demonstration of some of the new portlets added in this release.

Presenter: Shrity Verma, Product Manager - Teradata

Audience: 
Database Administrators, Application Developers, Business Users
Price: 
$195
Credit Hours: 
1

Viewpoint 15.11 Charting Update

Short teaser: 
Viewpoint has adopted an open source charting framework to replace the proprietary TjsChart.
Cover Image: 

Technology Overview

The technologies used in this framework make charts highly responsive to data manipulation.

The new charting framework uses the following third-party client-side libraries:

  • crossfilter.js - Crossfilter is a JavaScript library for exploring large multivariate datasets in the browser.
  • d3.js - D3.js is a JavaScript library for manipulating documents based on data.
  • dc.js - dc.js is a JavaScript charting library with native crossfilter support, allowing highly efficient exploration of large multi-dimensional datasets. It leverages the d3 engine to render charts in CSS-friendly scalable vector graphics (SVG) format.

These technologies are configured to work with the current Viewpoint Portal. Hammer.js is used to provide support for chart balloons on mobile platforms.
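
As a rough, self-contained illustration of how these libraries fit together (this is generic crossfilter/dc.js usage, not Viewpoint's wrapper code; the data shape and element id are assumptions), a minimal bar chart driven by a crossfilter dimension might look like this:

    // Assumes the crossfilter, d3 (v3) and dc libraries are loaded on the page.
    var samples = [
      { hour: 0, cpu: 12 }, { hour: 1, cpu: 30 }, { hour: 2, cpu: 18 } // ...
    ];

    var ndx      = crossfilter(samples);                            // multivariate dataset
    var hourDim  = ndx.dimension(function (d) { return d.hour; });  // dimension to chart on
    var cpuGroup = hourDim.group().reduceSum(function (d) { return d.cpu; });

    dc.barChart('#cpu-chart')                        // renders SVG into the selected element
      .width(480)
      .height(160)
      .dimension(hourDim)
      .group(cpuGroup)
      .x(d3.scale.linear().domain([0, 24]))          // linear X scale, as supported below
      .elasticY(true);

    dc.renderAll();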

Features Supported

The following scales are supported on the two axes:

  • X-Axis Linear and Time Scales
  • Y-Axis Linear and Percent Scales

Other basic features such as coloring, grid lines, rendering areas, rendering data points, and configuring min-max values are supported.
The framework is flexible enough to expose the native D3 chart object if required, in order to add special features to D3 charts.

Special features added by Viewpoint include:

  • threshold-based coloring
  • area charts (stacked min-avg-max)
  • support for a line segment to the last point before the chart start
  • support for a partial-width first bar
  • support for "off the chart" indicators
  • null values and line graphs with data islands (nulls on either side)
  • custom axis labelling (to support Viewpoint Profile adjust time)
  • detail information balloons
  • NOW line and label

Data for the views can be provided as a Backbone model or as raw data in chartConfig.
By default, a chart takes the size of the container in which it is rendered. The client of the chartView can also provide 'width' and 'height' options if the chart needs to be rendered with specific dimensions.


If the size of the chart container changes (resize), the chart needs to be re-rendered to fit the new container size. Whenever new data is available from the server (refresh), the chart updates its crossfilter with the new data and re-draws.
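
For example, constructing and refreshing one of these chart views might look roughly like the following. This is only a sketch: 'LineChartView' and the sample data are hypothetical, and only the 'chartData', 'width', and 'height' options are taken from the description above.

    // Hypothetical construction of a chart view with raw data and explicit dimensions.
    var trendSamples = [{ time: 0, sessions: 4 }, { time: 1, sessions: 7 }];

    var view = new LineChartView({
      el: '#session-trend',
      chartData: trendSamples,   // raw data; a Backbone model is also accepted
      width: 480,                // optional - defaults to the container size
      height: 160
    });
    view.render();

    // On a container resize the chart must be re-rendered to fit the new size;
    // on a data refresh the view updates its crossfilter and redraws.
    window.addEventListener('resize', function () { view.render(); });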

Developing with Chart Views

In 15.11, most of the portlets with charts were converted to use our Backbone views. The new chart stack is not implemented in any old-style portlets.

There are two ways in which a chart view can be constructed: it can be configured directly as a childView, or it can be extended to add other views or functionality to that view. If the chart is configured as a childView, it is the parent view's responsibility to fetch the data and pass it to the chart as a 'chartData' option. When the view is extended, the extending view fetches the data and uses it directly as its chartData variable (see the sketch below).
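
A rough sketch of the two construction patterns follows. The class names (BarChartView, CpuTrendModel) and everything beyond the 'chartData' option are hypothetical.

    // Pattern 1: configure the chart as a childView; the parent view fetches the
    // data and passes it in through the 'chartData' option.
    var parentData = [{ hour: 0, cpu: 12 }, { hour: 1, cpu: 30 }]; // normally fetched by the parent
    var childChart = new BarChartView({ el: '#cpu-by-hour', chartData: parentData });
    childChart.render();

    // Pattern 2: extend the chart view; the extending view fetches its own data
    // and uses it directly as its chartData.
    var CpuTrendView = BarChartView.extend({
      initialize: function (options) {
        BarChartView.prototype.initialize.call(this, options);
        this.model = new CpuTrendModel({ systemId: options.systemId });
        this.listenTo(this.model, 'sync', this.onData);   // standard Backbone event wiring
        this.model.fetch();
      },
      onData: function () {
        this.chartData = this.model.get('samples');
        this.render();
      }
    });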

There are two categories of charts. Simple dc-based charts with one series and no thresholds can use the streamlined barChartView or lineChartView.

Charts with multiple series or with thresholds need to use the more involved compositeChartView.js, which uses an array of sub-charts.
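
A hypothetical compositeChartView configuration might look like the following; the sub-chart option names are assumptions made for illustration, not the documented chartConfig options.

    // Hypothetical multi-series chart with thresholds, built from an array of sub-charts.
    var samples = [{ time: 0, cpu: 55, io: 20 }, { time: 1, cpu: 82, io: 35 }];

    var healthChart = new CompositeChartView({
      el: '#system-health',
      chartData: samples,
      chartConfig: {
        charts: [
          { type: 'line', metric: 'cpu', thresholds: [70, 90] },  // threshold-based coloring
          { type: 'bar',  metric: 'io' }
        ]
      }
    });
    healthChart.render();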

Chart configuration options are documented in baseChartView.js.

 

 


System Performance Monitoring

Course Number: 
54351
Training Format: 
Recorded webcast

This session focuses on Monitoring for System Performance. 

We look at how to leverage Viewpoint and Performance Data Collection in concert in order to identify performance bottlenecks and focus performance tuning efforts.

Presenter: Dan Fritz - Teradata Corporation

Price: 
$195
Credit Hours: 
1

Teradata Data Lab 15.11 Release

Short teaser: 
Provides an overview of new feature functionality introduced with the Data Lab 15.11 release
Cover Image: 

The Teradata Data Lab 15.11 release is now available, with formal GCA release in early November 2015. This article discusses the latest enhancements. Note that Data Lab 15.11 was a relatively "light" release due to the significant changes made in the Teradata Data Lab 15.10 release in both visualization and feature functionality. There is still some good stuff here, however, so read on.

Data Lab Concept:

To review, the key concept behind Teradata Data Lab is to stop moving production data out of production to feed analytical islands, such as spread marts or data marts. Instead, move the analytical proofing data into a production intelligent sandbox environment where Data Lab provides production protection for the DBA while allowing the self-service provisioning and governance capabilities desired by the analytical community. This is a much better approach in terms of data security, user efficiency, system resource usage, and proofing confidence, to name just a few of the benefits.

Data Lab 15.11:

This latest release of Data Lab provides some nice updates but, as mentioned, was a light release to allow some settling after the extensive changes in the Data Lab 15.10 release. This article discusses the four new features released with Data Lab 15.11: expediting a data lab expiration, the lab group access display, Data Lab access reports, and Data Lab migration.

So let's take a look at each of these in more detail.

Expedite a Data Lab Expiration:

Teradata Data Lab has always had the concept of an "expiration date" for data labs, which included the ability to request an extension to postpone the expiration. However, in some customer situations the expiration date actually needed to be moved in instead of moved out, for instance when a lab group owner inadvertently approves a longer extension than desired. When this happened in previous releases, there was no easy way to expedite, or pull in, that extended date. This new feature allows data lab owners to change the expiration date to an earlier date through a simple menu option. It does not, however, allow one to extend the date; data lab extensions work the same as they did before. The new option is offered as part of the Edit Lab Details selection, as shown below, and is only visible if you have full permissions to the data lab. Note the expiration date for the Americas data lab is a year later than the Asia and Europe data labs and, let's say, therefore needs an adjustment.

Below is the updated "Edit Lab Details" screen including the new section stating the existing data lab expiration date and the ability to set a new expedited (earlier) expiration date. A request for a later date will result in an error stating that the "New expiration must predate current expiration".

Lab Group Access Display:

Teradata Data Lab offers two types of access (read-only or full permissions) at two different levels (lab group or data lab). The ability to easily view these access levels was added in the Data Lab 15.10 release through the "View Lab Group Details" and "View Lab Details" requests. It is worth reminding everyone that these views are not enabled by default and need to be enabled within the configuration of each lab group through a simple checkbox. With the Data Lab 15.11 release, we modified "View Lab Details" to display two sections: one for access at the data lab level and another for access granted at the lab group level. This now presents a full view of who has access and from what level it has been granted. Here is an example of the new "View Lab Details" access display:

Data Lab Access Reports:

The new Data Lab "Access Reports" (offered as a new tab in the Data Labs portlet) provide an easy way for permission-enabled users to understand Teradata Database access rights within the Teradata system's Data Lab infrastructure: the ability to easily see user access across all aspects of one or more lab groups without having to look at each data lab individually. The "Report type" includes three options (User, Role, and Lab group) to report on user access, role access, or all access within a specified lab group or groups. One could use these new reports, for example, to get an access layout of an entire lab group or to understand where a particular user has access across all lab groups. As an example, here is the creation of a report to determine where Ann and Sam have access and their associated permissions:

And the resulting report. Note there is an export option if this information needs to be used outside of Viewpoint.

As mentioned, viewing Data Lab Access Reports is a permissioned operation within Viewpoint Administration - Roles Manager, specifically within the Data Lab settings. This is the same location used for granting lab group modification privileges.

Data Lab Migration:

This new process solution, leveraging the new "migratedatalabssystem" command, can be used in conjunction with Teradata Database system "floor sweeps" / system migrations, where a new Teradata Database system is replacing an older one and the customer also wants their Data Lab environment to migrate to the new system. There are two parts to success here. The first is that the Teradata Database system Data Lab objects, users, roles, etc. must move over as part of the Teradata Database system NPARC. The second is a migration of the "Data Lab" infrastructure within the Viewpoint instance (which must be at the Viewpoint / Data Lab 15.11 version). This is the new process that can now automatically migrate the "Data Lab" infrastructure with the floor sweep. Some points to understand about this new feature: the Viewpoint "Data Lab" migration must occur on the same Viewpoint instance, so there is no export/import from one Viewpoint instance to another. Also, this feature should not be perceived as a "quick start" way to implement a new Data Lab environment, as it is a transfer of the infrastructure, not a copy. The process for a Data Lab migration is documented in the Teradata Viewpoint Software Installation, Configuration, and Upgrade Guide for Customers, Release 15.11.

Some compatibility aspects are worth mentioning. Data Lab 15.11 requires Viewpoint 15.11 and vice versa. Viewpoint and Data Lab 15.11 are supported with the following versions of Teradata Database: 15.10, 15.00, 14.10, 14.00, and 13.10. Teradata Data Lab is not currently offered with Aster or Hadoop systems.

Thanks for taking the time to peruse this article!


An Overview of the new Viewpoint 15.11 Dashboard

Cover Image: 

Summary

The Viewpoint Dashboard (released with Viewpoint 15.11) is designed to give you an at-a-glance overview of monitored systems while you perform your everyday tasks. If something catches your eye you can quickly switch to the Dashboard for a system and begin a more in-depth investigation.

The Viewpoint Dashboard pulls together data from the System Health, Query Monitor, Alert Viewer, Metrics Analysis, and Workload Monitor portlets in order to support monitoring and management of Teradata, Aster, and Hadoop systems. To get started with the dashboard, be sure it is enabled in Roles Manager and then look for System Health status icons stacked vertically on the left side of the screen. Hover over a status icon to see the system name and health state. When you want to see more detail about a system, click its health icon.

Dashboard System Overview

When expanded, the Dashboard initially shows an overview for the selected system. For this at-a-glance system overview, there are 5 main content areas:

  1. Trend graphs for key metrics
  2. System Health metrics that have exceeded thresholds
  3. Workload details such as the current ruleset, state, and top active workloads
  4. Query details showing counts of queries in each state and the top 5 lists for queries including
    • Highest Request CPU
    • Highest CPU Skew Overhead
    • Longest Duration
    • Longest Delayed
  5. Alert details showing counts of alerts in each state

 

System Health Details

Clicking the System Health section of the system overview screen takes you to the System Health details page, which displays detailed information about each key performance indicator metric used to evaluate the overall health of a system.

To navigate back to the system overview, click the system name in the list on the left. To navigate to other data associated with the current system, click one of the boxes to the right of the list of system names. Each box contains a summary of the most important information from each section so that you can continue to see a system overview while investigating the details of individual sections.

 

Workload Details

To see more information about workload management workloads, click the name of an active workload on the Workloads section of the system overview screen. In addition to the ruleset, state, and active workload information from the system overview, new information is available such as counts for current queries, cumulative data for queries that have been processed, and details for all workloads in the active ruleset. Click a workload name to display trend graphs for various metrics associated with the workload.

Query Details

For queries, there are a couple of ways to see detailed information depending on what piques your interest. At the top of the Queries section on the system overview screen, click a box containing a count of queries in a given state to see all queries that are in the selected state. You can also click a query in the Top 5 section to see details for that query. Once you are on the details page, you can filter the list of queries just like in Query Monitor. Click any query to see details below. The down arrow next to the session number in the details area provides standard functions such as aborting and changing workloads.

Alert Details

The Alerts detail page allows you to view triggered alerts for the selected system. To access the alert details, click the box containing an alert count on the Alerts section of the system overview screen. Once you are on the details page, click an alert to see more information such as general alert information (e.g. when the alert occurred and the alert criteria), what triggered the alert (e.g. details about the actual issue), and any additional messages.

Dashboard Rewind

In addition to viewing current data in the dashboard, you can use the Viewpoint Rewind feature to roll back and see data from the past just like you can for portlets outside of the dashboard. When using Rewind with the full dashboard displayed, all data for all systems is rewound, not just the current system. To help prevent confusion when the dashboard is minimized, the dashboard icons show the current status regardless of whether or not Rewind is enabled for the dashboard.

The Viewpoint 15.11 dashboard should make Teradata system monitoring easier than ever before. Enjoy. 


