Channel: Teradata Developer Exchange - Viewpoint

Utilization Tips for Viewpoint Rewind


This blog discusses tips, tricks, and use cases for Teradata Viewpoint's "Rewind" feature. Hopefully this blog will give you some new ideas on more efficient navigation and different ways of utilizing rewind that you may have overlooked in the past.  



printf() in JavaScript???


Well not exactly printf(), but still pretty darn cool. I consider myself a decent JavaScript programmer with a fair amount of debugger knowledge. However, I stumbled across an interesting bit of new (to me) information last night. Maybe everybody else is already hip to this. If so then I am an uninformed idiot; well I guess this isn’t exactly the straw that broke the camel’s back on that front. Anyway…

Are you tired of writing the following?


Currying via ECMAScript 5 Binding


This article will examine function currying in JavaScript and introduce currying via native binding.


Event binding in jQuery 1.7+


This article introduces jQuery.on and jQuery.off as the preferred methods for binding events in jQuery 1.7+.


Is your Internal Monitor monitoring (TMSM 13.10)?


The Teradata Multi-System Manager (TMSM) product monitors and controls hardware components, processes, and data loads. Ever wonder who is monitoring the monitor? The Internal Monitor should not be confused with the external failover monitor, which is responsible for monitoring the TMSM Master.

This article would be useful to anyone attempting to better understand the TMSM Internal Monitor.


Teradata Viewpoint 14.01 Release


This article is the official release announcement for Teradata Viewpoint/CAM 14.01 with an effective release date of October 12th, 2012. There are some great new features, in particular enhancements for usability, supportability, administration, and extensions to monitoring capabilities. We here at Teradata are really psyched with everything we brought you in this exciting TENTH Viewpoint release! Aligned with this release is also a new release of Teradata Data Lab 14.01. Check it out.


Teradata Data Lab 14.01 Release


This article is the official release announcement for Teradata Data Lab 14.01 (the second product release) with an effective release date of October 12th, 2012. As a reminder, Data Lab provides automation, management, and governance for sandboxing within a Teradata production system. Here's the link to the initial Data Lab 14.0 release article in case you want to review all the product features.  


Killer queries: Track them, find them, fix them


With increased workload complexity, it is common for a system to run at full capacity. There is always a need to find resource-consuming sessions and badly behaving queries. The intention of this article is to discuss a few ways to identify resource-consuming sessions or badly behaving queries using Teradata Viewpoint.



Teradata Viewpoint & Teradata Alerts Configuration Guides


The Teradata Viewpoint Configuration Guide is intended for Teradata Customer Services, Teradata Professional Services, and customers configuring and upgrading a Teradata Viewpoint instance. The guide covers all aspects of installing, configuring, and enabling the Teradata Viewpoint solution.


Teradata Viewpoint & Teradata Alerts User Guides


The Teradata Viewpoint User Guide provides information about how to use the Teradata Viewpoint portal and select portlet bundles. This user guide is essentially the content found in the Teradata Viewpoint online help, offered in PDF format.


Upgrading a Viewpoint 14.00 Portlet to 14.01


A portlet developed for Viewpoint 14.0 will, for the most part, work correctly on Viewpoint 14.01. The most impactful change to consider will be to make your portlet support horizontal resizing, and to make any code changes to use the latest versions of jQuery and jQuery UI included with Viewpoint 14.01.

 


Displaying Tabular Data using the DataGrid and BigNumbers Widget


The following tutorial is a guide on how to implement the Viewpoint DataGrid and BigNumbers Widget.  In this example we will show how these widgets were incorporated into the SkewedSessions Portlet.  The actual source code can be found in the SkewedSessions Portlet supplied in PDK version 14.01.

 


How to Support "Resizing" for Viewpoint Portlets


Teradata Viewpoint 14.01 introduces a wider, more flexible layout, which better utilizes space on wide screen monitors, and gives the user more flexibility in controlling how much information is being displayed.

 


Introduction to Teradata Viewpoint


Teradata Viewpoint is a state-of-the-art web portal solution built as an SOV (Single Operational View) for the Teradata Ecosystem and Unified Data Architecture.

Audience: 
Data Warehouse Administrator, Data Warehouse Application Specialist, Data Warehouse Architect/Designer, Data Warehouse Business Users
Training details
This course is offered by the Teradata Education Network. To enroll online, click the Training URL link below to go to the TEN site and Log in. If you're not a member click browse, select your region, and search on the Course Number. Or to enroll by phone, call the Enrollment Center at 1-937-242-4460. Note: You must be a member to register for a course.
Course Number: 
45704
Training Format: 
Recorded webcast
Price: 
$195
Credit Hours: 
1


Teradata Viewpoint 14.10 Release


This article is the official release announcement of Teradata Viewpoint 14.10, with an effective release date of May 6th, 2013. With new enhancements in the Alerting, Workload Management, and Monitoring areas, Viewpoint 14.10 continues to expand its scope, adding the ability to monitor Hadoop systems alongside Aster and Teradata systems.

Summary

The primary themes of the Viewpoint 14.10 release are providing the front end and visualization for new Teradata Database 14.10 features, and Hadoop system monitoring. There are also enhancements in the Alerting, Monitoring, and Management areas. The highlights of Viewpoint 14.10 are:

  1. Stats Manager
  2. Hadoop System Monitoring
  3. Workload Management enhancements (group throttles, new classifications, the ability to unlock rulesets, etc.)
  4. Reports in Query Monitor portlet
  5. Alerting Enhancement

Browser support has also been updated to reflect support for Firefox 18, Chrome 24, Safari 5.1, IE 8.x and 9.x.

Stats Manager

The Stats Manager portlet complements the Auto Stats feature of Teradata Database 14.10 and works with release 14.10 and later. Stats Manager allows DBAs and users to efficiently manage their statistics collection process. It is a new option in the Add Content | Tools menu.

Before we go into the details of this new feature, let’s discuss why it is needed. Accurate cardinality and cost estimates help the Teradata optimizer choose an optimal plan, and statistics provide that cardinality information. Cardinality can change significantly with bulk load jobs, making stats stale and inaccurate. Sometimes it is challenging even for an experienced DBA to know which object stats would be beneficial, which can result in collecting extra stats or missing collections of critical stats. Collect stats jobs are usually resource intensive because they contain many collect stats statements, so it is always good to know what is needed and what is not, and save some CPU cycles. Due to scheduling constraints, the user may not have enough time to complete a collect stats job, so there is a need to prioritize and collect important or stale stats first. The Stats Manager tool simplifies these tasks and helps users automate the stats collection process. The Stats Manager portlet can be used to:

  • View statistics on a system
  • Schedule statistics collection jobs
  • Identify missing stats
  • Detect and refresh stale statistics
  • Identify and discontinue collecting unused statistics
  • View when statistics were last collected and when they are scheduled for collection again
  • Set the priority of a collect stats statement relative to other collect stats statements
  • View the CPU utilization of collect stats jobs, allowing the user to analyze whether a particular job consumes more CPU than anticipated
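Under the covers, a collect job simply generates and submits standard COLLECT STATISTICS requests. As a rough, hypothetical illustration (the table and column names are invented, and the available syntax options vary by Teradata release):

```sql
-- Illustrative only: the kind of statements a collect job generates.
-- Full collection on a single column:
COLLECT STATISTICS ON sales.orders COLUMN (order_date);

-- Sampled collection, trading some accuracy for fewer CPU cycles:
COLLECT STATISTICS USING SAMPLE ON sales.orders COLUMN (customer_id);
```

Stats Manager's value lies in deciding which of these statements are worth running, and when.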

There are two main tabs in Stats Manager – Statistics and Job.

Statistics Tab

The Statistics tab shows all objects (e.g., databases and tables) on the system that have at least one statistic or at least one outstanding recommendation. The user can drill down on the data grid to navigate between databases, tables, and columns. Figure 1 is an example of the Statistics by Database view.

Figure 1

The Actions menu has three options: Automate enables statistics to be collected by collect jobs, Deautomate stops statistics from being collected by collect jobs, and Edit Collect Settings allows the user to edit thresholds, sampling, and histogram settings. The information bar displays the percentage of statistics that are approved for automation, allowing the user to determine whether more statistics need to be approved, and the percentage of automated stats that have collect jobs, allowing the user to determine whether additional collect jobs are needed. Recommendations displays a list of the recommendations made by an analyze job; by clicking the link, the user can approve or reject them. The statistics table displays all objects with at least one statistic, or with one recommendation that has not been approved or rejected; the table is configured using Configure Columns from the Table Actions menu. In this tab, the user can automate any object for the stats collection process, which approves its statistics for collection by collect jobs. The user can also view statistics detail reports by drilling down to a stats object; see Figure 2.

Figure 2

Job Tab

The Job tab displays the list of user-defined collect and analyze job definitions. From this view, the user can create collect stats and analyze jobs, manage existing jobs, and review job reports. Figure 3 shows the top-level Job tab layout. The Actions menu has three options: New Collect Job enables the user to define a job to collect statistics, New Analyze Job enables the user to define a job to evaluate statistics use and make recommendations, and View History lists the run status and reports for collect and analyze jobs over time.

Figure 3

The Job Definitions table displays summary information about jobs; drilling down shows the details. The Job Schedule displays a nine-day view of jobs that are running, scheduled to run, or have already run. Mousing over a date shows the list of jobs.

A Collect job generates and submits COLLECT STATISTICS statements to the Teradata Database for objects that were approved for automation in the Statistics tab. The user can assign a priority to individual COLLECT STATISTICS statements. See Figure 4.

Figure 4

The user can schedule a job to run for a limited time and then define a new schedule to resume the job at a different time of day. See Figure 5.

Figure 5

An Analyze job allows the user to evaluate statistics status and get statistics-related recommendations. Analyzing objects enables the user to determine where additional statistics might be useful and to identify existing statistics that are used frequently or are stale. Once the recommendations are generated, the user can review them and automate the objects for the stats collection process in the Statistics tab. See Figure 6 for the various functions that an Analyze job can perform.

Figure 6

The Viewpoint Log Table Clean Up feature can be used to clean up job results stored in the DBS TDStats database.

Hadoop System Monitoring

Teradata Viewpoint 14.10 supports Hadoop system monitoring for Hortonworks-provided Hadoop solutions packaged as part of the Aster Big Analytics Appliance 3. A new Hadoop Services portlet allows users to monitor the status of the various services running on Hadoop systems. Using the expandable service view for MapReduce, HDFS, and HBase, users can view key metric details for the selected services (see Figure 7).

Figure 7

The Aster Node Monitor portlet has been renamed the Node Monitor portlet and now monitors both Aster and Hadoop systems. Using the Node Monitor portlet for Hadoop systems, users can view node-level metrics, available Hadoop services, and the status of services for each node on the system. Users can also view hardware statistics such as CPU usage, memory usage, and network activity. Navigating through the Hadoop system topology, users can also view detailed service component and JVM metrics for the HDFS and MapReduce services (see Figure 8).

Figure 8

Like Aster system monitoring, Hadoop system monitoring was also integrated with the existing portlets. The usability and look and feel of the portlets were maintained, but the underlying data and metrics correspond to the monitored system, which is Hadoop in this case. Below are the existing portlets that were modified to support Hadoop system monitoring:

  • Alert Viewer – View all the alerts logged for Hadoop systems.
  • Capacity Heatmap – Displays trends for key metric usage related to the system, HDFS, and MapReduce.
  • Metrics Analysis – Displays and compares trends for key metric usage related to the system, HDFS, and MapReduce in a graphical format across different Hadoop systems.
  • Metrics Graph – Displays trends for key metric usage related to the system, HDFS, and MapReduce in a graphical format.
  • Space Usage – Monitors space usage on a node, such as total space, current space, percent in use, and available space.
  • Admin – Provides the ability to add Hadoop systems and define alerts for Hadoop systems.
  • System Health – Hadoop systems can be identified by an “H” in the system's icon, and drilling down shows all the key metrics related to the Hadoop system. See Figure 9.

Figure 9

Reports in Query Monitor

In Viewpoint 14.10 we added three new reports to Query Monitor.

  1. Multi-Session report: A new By Utility | By Job option in Query Monitor displays all running utility jobs, with drill-down to the individual sessions logged on by a particular utility job and further drill-down to session details. (See Figure 10)
  2. Hot AMP report: A new By Vproc | By Skewed AMP option displays the AMPs with the most skewed sessions that exceeded the CPU skew threshold set in the Preferences view. (See Figure 10)
  3. By PE report: A new By Vproc | By PE option displays the total number of sessions logged on to each PE and the CPU value for the PE. (See Figure 10)

Figure 10

Teradata Workload Management enhancements

Teradata Viewpoint 14.10 introduces group throttles, whereby a user can define a throttle on a group of workloads. We also added new classifications by UDF, UDM, memory usage, and collect stats. These features depend on Teradata Database 14.10. In Viewpoint 14.10, users can now unlock any ruleset if they have the appropriate permissions. Users can also model a system ruleset, which is useful for comparing workload management features across different platforms (Appliance vs. EDW) or different versions of Teradata.

Alerting Enhancement

Various new alert options and alert types were added in this release of Viewpoint.

  • An option to send an alert on Teradata Database restart was added.
  • An include or exclude users option was added to session alerts. If a user wants to define a session alert for a small set of users, they no longer need to add all other users to the exclude list; the include users option can be used instead. It also supports the * (splat) wildcard. (See Figure 11)

Figure 11

  • Users can now send an alert for long-running sessions using the newly added Active time alert option in the Session alert type.
  • A Spool space (MB) alert option was added to session alerts to send an alert if a session uses more spool space than anticipated.
  • A Delta I/O (logical I/Os) alert option was added to send an alert for a session consuming excessive logical I/O during the last collection interval.
  • In the Database Space alert type, users can now specify thresholds for Current Spool Space (%) and Peak Spool Space (%) to send an alert when either exceeds its threshold. Splat wildcard support was added to the database space include/exclude user list.
  • A new Table space alert type was added late in the Viewpoint 14.01 release, with a new alert option on the DBC.TransientJournal table and the ability to specify current perm and skew thresholds.

Lock Logger

In Viewpoint 14.10 we modified the Lock Logger architecture for Teradata Database 14.10 and later. When Viewpoint 14.10 is used with Teradata Database 14.10 or later, the Lock Info collector uses the data written to the DBQL lock log table to capture lock information; therefore DBQL query logging must be enabled with the “WITH LOCK” option.
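As a hedged sketch of what enabling this might look like (the threshold value and the `ON ALL` scope are illustrative; consult the DBQL documentation for your release for the exact syntax):

```sql
-- Illustrative only: enable DBQL query logging with lock logging
-- so the Viewpoint Lock Info collector has data to read.
BEGIN QUERY LOGGING WITH LOCK=5 ON ALL;
```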

Finally, please refer to the associated Viewpoint Configuration Guide for details of the upgrade process and to the User Guide for details of the new features.

This continues to be a voluminous release, with copious features across a number of strategic areas. We hope you take advantage of the new additions and improvements in Teradata Viewpoint 14.10. We always look forward to your thoughts and comments.


Viewpoint Integration with Apache Ambari for Hadoop Monitoring


Teradata’s Unified Data Architecture is a powerful combination of Teradata, Aster, and Hadoop in a single platform.  Viewpoint has always provided monitoring and management of Teradata systems and launched support for monitoring of Aster in Viewpoint 14.01.  In order to complete Viewpoint’s monitoring of the different systems in Teradata’s Unified Data Architecture, Viewpoint 14.10 includes support for monitoring of Hadoop running in this architecture.

The biggest technical challenge Viewpoint faced when monitoring a Hadoop system was how to reliably and easily collect the necessary data from Hadoop.  The different components of Hadoop expose their data in a variety of different ways, including using Ganglia, Nagios, JMX, and some really ugly web interfaces.  There are two primary issues with using these existing technologies for Hadoop monitoring: parsing the data from each different interface and being able to locate and connect to these interfaces on each Hadoop node.  Each of these technologies exposes their data in a different format, and it would take significant development time to properly parse the data from each source.  There’s also a challenge in locating and communicating with the nodes to obtain this data.  Just to collect data from the namenode and jobtracker, the location of these services would have to be configured or discovered, and then failover would have to be accounted for as well.  Expanding the monitoring solution beyond that to collect data from every node poses both connectivity and security issues as well.  Surely there must be a better way!

Luckily Apache Ambari addresses all of these technical challenges by providing a collection of RESTful APIs from which a plethora of Hadoop monitoring data can be obtained.  Ambari handles the work of collecting the monitoring data from a variety of the monitoring technologies mentioned above.  It then aggregates this data and provides a series of RESTful APIs.  These APIs can all be accessed by making web service calls against a central node in the Hadoop cluster.  All data is provided in JSON format so it can easily be parsed by just about any programming language.

Since Viewpoint is written in Java and uses the Spring Framework quite extensively, Spring’s RestTemplate class was a natural choice for calling the RESTful APIs and parsing the results into Java model objects.  Here is some sample code to demonstrate the collection of the number of running MapReduce jobs, map tasks, and reduce tasks from Ambari.

 

package com.teradata.viewpoint.ambari;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.codec.binary.Base64;
import org.codehaus.jackson.annotate.JsonProperty;
import org.codehaus.jackson.map.DeserializationConfig;
import org.springframework.http.MediaType;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJacksonHttpMessageConverter;
import org.springframework.web.client.RestTemplate;

public class AmbariClient
{
    private String host;

    private String clusterName;

    private String user;

    private String password;

    private RestTemplate restTemplate;

    public AmbariClient(String host, String clusterName, String user, String password)
    {
        this.host = host;
        this.clusterName = clusterName;
        this.user = user;
        this.password = password;

        List<MediaType> supportedMediaTypes = new ArrayList<MediaType>();
        MediaType plainTextType = new MediaType("text", "plain");
        MediaType jsonType = new MediaType("application", "json");

        supportedMediaTypes.add(plainTextType);
        supportedMediaTypes.add(jsonType);

        MappingJacksonHttpMessageConverter mappingJacksonHttpMessageConverter = new MappingJacksonHttpMessageConverter();
        mappingJacksonHttpMessageConverter.setSupportedMediaTypes(supportedMediaTypes);
        mappingJacksonHttpMessageConverter.getObjectMapper().configure(
                DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

        List<HttpMessageConverter<?>> messageConverters = new ArrayList<HttpMessageConverter<?>>();
        messageConverters.add(mappingJacksonHttpMessageConverter);

        restTemplate = new RestTemplate();
        restTemplate.setMessageConverters(messageConverters);
    }

    @SuppressWarnings("unchecked") // cast is safe for callers requesting Class<T>
    public <T> T getAmbariHadoopObject(String url, Class<?> clazz)
    {
        SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory()
        {
            @Override
            protected void prepareConnection(HttpURLConnection connection, String httpMethod)
                    throws IOException
            {
                super.prepareConnection(connection, httpMethod);

                String authorisation = user + ":" + password;
                String encodedAuthorisation = Base64.encodeBase64String(authorisation.getBytes());
                connection.setRequestProperty("Authorization", "Basic " + encodedAuthorisation);
                connection.setConnectTimeout(30000);
                connection.setReadTimeout(120000);
            }
        };

        restTemplate.setRequestFactory(requestFactory);

        String fullUrl = "http://" + host + "/api/v1/clusters/" + clusterName + url;
        return (T) restTemplate.getForObject(fullUrl, clazz);
    }

    /**
     * Model class to hold the data from the JSON response. The nested classes
     * are declared static so Jackson can instantiate them during deserialization
     * (non-static inner classes have no usable default constructor).
     */
    private static final class JobTrackerData
    {
        public static class Metrics
        {
            public static class MapReduce
            {
                public static class JobTracker
                {
                    @JsonProperty("jobs_running")
                    private Integer jobsRunning;

                    @JsonProperty("running_maps")
                    private Integer runningMaps;

                    @JsonProperty("running_reduces")
                    private Integer runningReduces;
                }

                @JsonProperty("jobtracker")
                private JobTracker jobTracker;
            }

            @JsonProperty("mapred")
            private MapReduce mapReduce;
        }

        @JsonProperty("metrics")
        private Metrics metrics;
    }

    public static void main(String[] args)
    {
        AmbariClient client = new AmbariClient("ambari.teradata.com",
                "clustername", "admin", "admin");
        JobTrackerData data = client.getAmbariHadoopObject(
                "/services/MAPREDUCE/components/JOBTRACKER", JobTrackerData.class);
        System.out.println("Jobs running: " + data.metrics.mapReduce.jobTracker.jobsRunning);
        System.out.println("Map tasks running: " + data.metrics.mapReduce.jobTracker.runningMaps);
        System.out.println("Reduce tasks running: "
                + data.metrics.mapReduce.jobTracker.runningReduces);
    }
}

Following Viewpoint’s standard data collection practices, all of the data collected from Ambari is stored in the Viewpoint database. The data is collected from Ambari every minute by default, so the database has a view of the state of the Hadoop system over the course of an hour, day, or week. This historical data is used to generate a variety of charts in the Viewpoint web portal, and also powers Rewind, letting users go back and see exactly what was occurring on the Hadoop cluster at a specific point in time.

By using Ambari for monitoring of a Hadoop cluster, Viewpoint was able to deliver a comprehensive Hadoop monitoring solution in a relatively short amount of time.  Viewpoint’s Java and web developers were able to focus on the tasks at which they excel: getting the data from the source system (Ambari) and displaying it in Viewpoint’s portlets.  No time was wasted trying to get up to speed on Ganglia, JMX, or many of the details of Hadoop’s inner workings.  Ambari was a critical piece of technology to help Viewpoint roll out this solution and enhance Viewpoint’s support of Teradata’s Unified Data Architecture.


Teradata Alerts (CAM) 14.10 Release


This article describes what's new in the Teradata Alerts 14.10 release (also known internally as CAM). This release was made available on May 30th 2013.

Summary

The Teradata Alerts 14.10 release restructures the alert delivery types in the Viewpoint "Admin" -> "Alert Setup" portlet to make them more modular, and introduces a new timeout setting for various delivery types. The remainder of this article provides feature details for all the highlights of the Teradata Alerts 14.10 release.

  1. New modular look for delivery setting.
  2. Timeout options for Alert action such as BTEQ scripts, Run a Program and SQL Queries.
  3. New display name option when sending an e-mail.
  4. If needed, SMTP configuration can now be cleared or disabled.
  5. Big Numbers support for Alert Viewer portlet.

Feature Details:

The Delivery Settings layout in the Alert Setup portlet has been restructured. The BTEQ/SQL Login configuration has moved to the new Authentication area and been renamed Teradata Login (see Figure 1). A new Notification Service area makes it easy to identify user-defined scripts and programs running on the notification server.

 Figure 1

Users now have the option to be notified, or to terminate the action, if a program or script runs longer than anticipated. In the Alert Setup portlet, the user can select the Notify option when setting up the delivery types for SQL queries, or when setting up a notification service such as BTEQ scripts or programs, to be notified of long-running or hung scripts and programs. Users can also choose to terminate a hung program or script immediately, or after a certain period of time, using the Terminate option (see Figure 2). In the delivery type settings, the Notify and Terminate options are available for the SQL Queries delivery type. See below.

Figure 2

For easy identification of e-mails, a new display name option has been added (see Figure 3).

Figure 3

Delivery types can now be enabled or disabled. For example, an administrator can now disable an already configured SNMP delivery type. For easy identification while setting up action sets, disabled delivery types are displayed in red, and disabled scripts are indicated as such (see Figure 4).

Figure 4

Finally, please refer to the associated Alert Configuration Guide for full details of the upgrade process and to the User Guide for all new feature details.

We hope these new additions and improvements to the alerting mechanisms are helpful to you. We always look forward to your thoughts and comments.


Managing Teradata Data Lab users with TASM


This article describes how Database Administrators can use Teradata Active System Management (TASM) to manage queries that execute against tables in Teradata Data Labs. This allows analysts to obtain the information they need without negatively impacting production applications.

Although this article focuses on TASM management, the general workload management guidelines can apply to other Teradata platform workload management strategies, not only those running TASM.

What is Teradata Data Labs?

A data lab is a separate dedicated space within a production data warehouse for agile development of new analytic queries that can combine personal, ad hoc, or temporary data with production data. The Teradata Data Lab product provides Viewpoint portlets to assist and automate the operations in doing this "sandboxing" in production and is intended for use by all the users of data labs.

Refer to the Teradata Data Lab release article for more information on the Teradata Data Lab product.

Why do Data Labs users need special consideration?

By its nature as an environment that fosters quick, agile development of new queries and new data, Data Labs provides the analyst with the opportunity to gain new insight. While these insights may yield great benefits for the corporation, it is up to the DBA to make sure that these untested queries that may be running against skewed lab tables do not negatively impact production operation of the data warehouse. TASM provides the DBA with the tools to balance the needs of the Data Lab Analyst and the normal production workloads.

What do I need to know?

Before you can manage a Data Lab, you need to know how Data Labs works under the covers. The following concepts are central to understanding Teradata Data Labs.
As a DBA, you will likely be responsible for creating a few Lab Groups. A Lab Group is a container for one or more labs. For example, you might create a Lab Group named Finance for use by the finance department. When you create this lab group, you will be prompted for a prefix name. This is the key to creating TASM rules. In the Lab Group Setup screen shot below, you will see that we have created a prefix of findpt. This prefix will be prepended to the name of every lab that is created in this lab group. For example, if a user creates a lab named FY2011, then the Data Labs portlet will create a corresponding database named findptFY2011. This will be the database that contains all of the tables created in this lab.

Since we know the prefix that will be applied to all databases in a lab group, we can create TASM classification criteria that will match all queries that target tables in this database. If you are using Teradata 13.0 or higher this will be easy because classification criteria can contain wildcards. For previous releases, you will have to create a classification criteria for each Data Lab.

What is the process for creating new TASM rules?

Modification of your TASM ruleset should not be taken lightly. TASM controls how resources are applied to all your workloads. If you are not comfortable with using TASM, you may want to consider contacting Teradata Professional Services. They have people who are experts in Data Labs and TASM that can provide analysis, advice, and training specific to your needs.

In order to know where TASM rules are needed, we need to collect data about how the Data Labs are being used. We can do that by enabling logging of all queries to the Database Query Log (DBQL). The Performance Management Guide describes how to enable and utilize the Query Log. After query logging has been enabled for a week or two, you can use the Viewpoint Query Spotlight portlet to determine if there are specific users or tables that might need special attention. In the meantime, let’s take a look at how to create a workload that restricts the impact of queries running in our Finance Lab Group.

Creating a Workload

When working with TASM, we always want to be able to revert our changes if they cause unexpected resource shifts. So we will start by cloning the active ruleset. If the current active ruleset is Production.v6, then we would choose Clone from the drop down menu associated with Production.v6. In the new cloned ruleset, change the ruleset name to Production.v7.

In order to prevent Data Labs queries from impacting your production workloads, it may be necessary to have them run at a lower priority. To do this we need to create a new workload. In Workload Designer, click on your cloned ruleset and navigate to the Workloads page. Then click on the Create Workload button to create a workload.

Give the workload an appropriate name and description. Then choose an Enforcement Priority. In this example, the Background priority has been selected so that queries that run in this workload will run at the lowest default priority.

Next, we need to define classification criteria that will match queries running against the tables in the Finance Lab Group. Click on the Classification tab, set the criteria type to Target, and click Add. The following dialog will be displayed.

We want our workload to match all queries running against tables in Data Labs that are members of the Finance Lab Group, so we create a classification criterion that matches all queries targeting a table in a Data Lab database whose name starts with findpt.

Congratulations, you have defined a workload for the Finance Lab Group. To put your changes into effect, navigate up to the top page in the Workload Designer portlet and choose Make Active from the drop down menu associated with your new cloned ruleset. Once the ruleset is activated, all queries that match the classification criteria will run at the priority associated with the Background Enforcement Priority.

Note: if you find that some queries that run against Data Labs in the Finance Lab Group are being classified to other workloads, you may need to adjust the Workload Evaluation Order. To do this, navigate to the Workloads page, select the Evaluation Order tab, and move your Lab Group workload up or down as needed.

Creating a Throttle

The next step that you might want to take is to place a limit on the number of concurrent queries that can execute against the tables in a lab group. This will help to even out the load on the system. For example, if one of your users has several very long queries to run and decides to start them all before leaving for lunch, a TASM workload throttle can reduce the impact on other workloads by preventing them from all running at the same time.
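The throttling idea above can be sketched in a few lines. This is a simplified analogy (not the TASM implementation): at most a fixed number of queries run concurrently, and any extra queries wait until a slot frees up, just as TASM delays queries beyond the throttle limit.

```python
# Minimal sketch of a concurrency throttle: at most `limit` queries run
# at once; the rest block (are "delayed") until a slot is released.
import threading

class WorkloadThrottle:
    def __init__(self, limit):
        self.slots = threading.Semaphore(limit)

    def run(self, query):
        with self.slots:   # blocks the caller if the limit is reached
            return query()

throttle = WorkloadThrottle(limit=2)
results = []

def long_query(n):
    return f"query {n} done"

threads = [
    threading.Thread(target=lambda n=n: results.append(throttle.run(lambda: long_query(n))))
    for n in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```

All five queries still complete; the throttle only spreads them out so no more than two execute at the same time, which is the effect a TASM workload throttle has on a burst of lab-group queries.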

Let’s create a throttle for the Finance lab group. Navigate back to the Workloads page and click on your workload’s name to drill down to the detail page. Click on the Throttles tab to configure a throttle.

In this example, we have limited the number of concurrent queries that can run in this workload to two. In addition, when the system is in the Critical state, only one concurrent query will be permitted.

That’s all you need to do to create a throttle. You’ll need to activate your ruleset again before the throttle will take effect.

Conclusion

By combining the simplified sandbox management provided by Teradata Data Lab with the workload management features of TASM and Workload Designer, we can safely permit analysts to extract additional value from production data by running new queries and combining production data with data in the data lab.


Teradata Viewpoint 15.00 Release Article


This article is the official release announcement for Teradata Viewpoint 15.00 with an effective release date of April 9th 2014. The Viewpoint 15.00 release has a whole new look and feel. The upgraded infrastructure embraces newer web technologies, improves performance, and enhances user accessibility, interaction, and discovery. As the versioning suggests, Viewpoint 15.00 supports the Teradata Database 15.00 release. 

Summary

The themes of the Teradata Viewpoint 15.00 release are currency with the latest web technologies, support for Teradata Database 15.00, and formal compliance with Section 508 and the Web Content Accessibility Guidelines (WCAG). As such, there have been significant modifications to the entire Viewpoint look and feel. Highlights:

Viewpoint New Look

Viewpoint has undergone a significant foundational re-architecture. However, the majority of portlet monitoring and management logic (functions and flow within portlets) remains the same. Below are some of the foundational changes you will enjoy:

  • Flat design with a new color scheme.
  • The font has changed from Verdana to Arial.
  • New icons for each portlet.
  • The chrome has been redesigned, increasing the vertical real estate.

Here is a snapshot of the new Viewpoint 15.00 look:

 

Header Changes. The next view displays how the header and Rewind bar have changed. The circled header icons represent access to "Help", the "Viewpoint Admin" menu, and a pull down for the Viewpoint "Profile" and "Log Out" options.  Notice the fresh new look of Rewind. All the discrete time increments are clearly shown as separate buttons. Lastly, the Rewind bar now stays visible even when scrolling down a page.

 

Add Content: The Viewpoint Add Content menu has been redesigned, significantly improving user interaction and discovery. One can add one or more portlets in a single operation, including multiple instances of the same portlet if desired. There is also a search option at the top to assist in finding the right portlet. New portlet category groupings assist in search and also in understanding portlet relationships. Lastly, notice that every portlet now has a new, unique representative icon.

 

New "Help" includes online search capability as well as context-sensitive help directly within portlets, taking you automatically to assistance for that portlet.

   

Teradata Database 15.00 New Features: Here is an overview of the Viewpoint additions related to this new Teradata release.

New Query Monitor Report – By Blocker View: This report is very useful in understanding the blocking contention on a Teradata 15.00 or newer system. It lists all sessions that are blocking other sessions or are blocked by other sessions. The sessions are grouped into three categories:

  • Root Cause – Sessions that are blocking other sessions.
  • Granted – Sessions that are blocked but are also blocking other sessions, as in a BT-ET transaction containing multiple SQL statements.
  • Waiting – Sessions that are blocked and are waiting.

Blocking Tab: A new Blocking tab is added when a user drills down on a session in Query Monitor that is blocking other sessions. It shows information about the locks held by the session, the count of all sessions blocked by this session, and how long it has been blocking other sessions. It also lists the sessions that are blocked by this session so that the user can see the sessions it is blocking and take appropriate action.

Blocked By Tab in Query Monitor: This tab was redesigned to list all the sessions that are blocking the current session.

 

Workload Management:  Teradata Viewpoint 15.00 will support Teradata Database 15.00 Workload Management features such as:

  • One can throttle a request at virtual partition level.
  • One can specify a maximum estimated step processing time.
  • One can sub classify on percent of table accessed.
  • One can classify on usage of a Table in a particular statement.
  • Added a new report displaying resource allocation across all SLG tier workloads in all Virtual Partitions for a Planned Environment.

For users of Teradata Integrated Workload Management, you can now define planned environments as state matrix options. 

Please refer to Teradata Database 15.00 documentation for further information on this exciting new Database release.

Aster Workload Management

Teradata Viewpoint 15.00 supports Workload Management for Aster 6.0 and newer. With this new addition, the Viewpoint Workload Designer portlet provides an alternate method of configuring rulesets as well as providing additional functionality such as:

  • Have more than one named ruleset, each editable by multiple users to make incremental updates
  • Lock/unlock capabilities
  • Export, import, and clone rulesets

Ruleset features such as throttles and workloads have been added.

New Metric Heatmap portlet: The prior Capacity Heatmap and Metrics Graph metric portlets have been merged into one super metric portlet called Metric Heatmap (even the name is integrated). It provides a view toggle for an easy transition between the different displays, as shown below, where system CPU usage is shown in two different views within the same portlet.

Alert Viewer portlet hide alerts: As a new type of filter, a hide option has been built into the Alert Viewer portlet, allowing certain alerts to be hidden from view. One may use this to selectively hide duplicates or possibly as part of tracking resolved issues. The hide option can be executed for an individual alert or through a table action menu bulk operation (via check boxes). A new setting then controls whether hidden alerts are displayed. If hidden alerts are displayed, they will have a strike-through representing the exception. All of these aspects are shown below.

Enhanced Node Resources portlet: The Node Resources portlet has been re-designed but still serves the same purpose of helping to identify over/under-utilized nodes and vprocs. This new version is much easier to navigate and understand. The changes were significant enough to warrant their own article. Please refer to the "Node Resources Take-2" article for more details.

With the underlying infrastructural changes, all product portlets need to be upgraded in sync. The listing below documents the minimal product versions necessary for Teradata Viewpoint 15.00 compatibility.

  • Viewpoint 15.00
  • Data Lab 15.00
  • DSA 15.00
  • Unity Ecosystem Manager 15.00
  • Unity Data Mover 14.11
  • Unity Director / Unity Loader 14.11

Please refer to the Viewpoint Configuration Guide for details of the upgrade process and the User Guide for details of new features.

We sincerely hope you like the new Teradata Viewpoint 15.00 changes and how it helps in discovery and usability of the product. We always look forward to your thoughts and comments.


Node Resources Portlet - Take 2


As part of the Viewpoint 15.00 release, the Viewpoint team built a brand new version of the Node Resources portlet.  The primary purpose of this portlet continues to be to identify skew on a Teradata Database system.  The original incarnation of this portlet required a fair amount of manual intervention in order to achieve this goal.  The new version of this portlet includes a simpler user interface and a new algorithm to identify skewed resources (or “outliers”) automatically.

Since the Teradata Database is a massively parallel architecture, it’s important that all of the units of parallelism are performing approximately the same amount of work.  If some of the nodes or VPROCs within the system are performing too much or too little work when compared with the system-wide average, this is called skew.  When work for a specific query is skewed, the query isn’t taking full advantage of the power of the system, and therefore doesn’t complete as quickly as possible.  When the work on nodes or VPROCs is skewed, this can affect the performance of the system and also reduce the effective capacity of the system.

There are three primary enhancements to the Node Resources portlet.  The first is the use of a histogram to visually display the data distribution for a particular metric.  The automatic calculation of “outliers” based upon the data distribution is the second improvement.  The final significant change is the ability to analyze the data over a time range instead of just the last sample of data.

The visualization in the previous version of this portlet depicted a square for each node or VPROC on the system.  For larger systems it was hard to see all the squares on a single screen, and this representation of the data didn’t really add much insight into the actual data for a particular metric.  The new version of the portlet instead uses a histogram to plot the data for the selected metric.  The histogram contains 20 buckets of equal size, and the height of each bar represents the number of nodes or VPROCs that fall into each bucket or range.
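The bucketing described above can be sketched simply. This is an illustrative approximation (with hypothetical per-node CPU values), not the portlet's actual code: the observed range is split into 20 equal-width buckets, and each node's value increments the count for its bucket.

```python
# Sketch of the 20-bucket histogram: each node's metric value lands in
# one of 20 equal-width buckets spanning the observed range.
def histogram(values, buckets=20):
    lo, hi = min(values), max(values)
    width = (hi - lo) / buckets or 1       # avoid zero width if all equal
    counts = [0] * buckets
    for v in values:
        i = min(int((v - lo) / width), buckets - 1)  # clamp max into last bucket
        counts[i] += 1
    return counts

cpu = [48, 50, 51, 49, 52, 50, 47, 95]    # hypothetical per-node CPU %
counts = histogram(cpu)
print(sum(counts))  # 8 -- every node lands in exactly one bucket
```

The bar height in the portlet corresponds to these counts, so a single node far from the cluster of values (the 95 here) shows up as a lone bar at the edge of the chart.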

The red bars in the histogram represent the buckets that contain “outliers”, which are nodes or VPROCs that are significantly skewed.  Outliers are calculated as resources whose values fall more than 1.5 times the interquartile range below the first quartile or above the third quartile.  This is a standard statistical technique for finding outliers in a data set.  In this way, the portlet automatically calculates any nodes or VPROCs that are significantly skewed for the selected metric.  For a system that is working in a reasonably parallel fashion, it’s entirely possible that you won’t see any outliers in the histogram.  If the histogram does show outliers, you might want to investigate further to discover the cause of the skewing on your system.
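The 1.5×IQR rule can be demonstrated in a few lines. This sketch (with hypothetical per-node CPU values, and a simple interpolated quartile rather than whatever method the portlet uses internally) flags any value outside the standard Tukey fences:

```python
# Sketch of the outlier rule: flag values more than 1.5x the
# interquartile range (IQR) below Q1 or above Q3 (Tukey fences).
def outliers(values):
    s = sorted(values)
    def quartile(q):  # linear interpolation between adjacent ranks
        idx = q * (len(s) - 1)
        lo_i, frac = int(idx), idx - int(idx)
        return s[lo_i] + frac * (s[min(lo_i + 1, len(s) - 1)] - s[lo_i])
    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo_fence or v > hi_fence]

cpu = [48, 50, 51, 49, 52, 50, 47, 95]  # hypothetical per-node CPU %
print(outliers(cpu))  # [95] -- the skewed node stands out
```

Because the fences scale with the spread of the data, a tightly clustered system produces no outliers at all, which matches the portlet's behavior on a well-balanced system.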

The third significant change is the ability to analyze up to an hour’s worth of data while using this portlet.  In Viewpoint 14.10 and earlier, the Node Resources portlet only reported data for the last sample period.  This data typically represented the data for a minute or less of elapsed time on your system, which is too short a time period to reliably discover significant skewing issues on a system.  The new version of the portlet lets you choose the last collection time as before, but also an aggregation of 5, 15, 30, or 60 minutes of data.

While viewing the main screen of the portlet, you can click on any of the bars in the histogram to drill down and view the data for just the nodes or VPROCs in that particular bucket.  From the main screen you can also click the “Down” or “Outliers” bubbles to change the filter for the data grid so that only those particular resources are displayed.  You can click on any of the rows in either of the data grids to drill down to a detail screen that displays all of the metrics for that particular node or VPROC.  The detail screen is different for nodes, AMPs, PEs and other VPROC types so that only the applicable metrics for that particular resource are displayed.

This new version of Node Resources should make it much simpler to monitor and identify potential skewing issues across the nodes and VPROCs of your Teradata Database system.

Note that the Node Resources portlet only applies to Teradata Database systems, whereas the Node Monitor portlet provides monitoring for Aster or Hadoop system nodes.