Whilst at the Teradata Partners 2011 show last October we bought several copies of a book on Teradata indexes called ‘Teradata Database Index Essentials’, written by Alison Torres of Teradata, no less.
This book on Teradata indexes even has its own web site at http://www.teradataindexes.com/.
So, predictably, we raced each other to read the Teradata indexes book on the flight back to the UK from San Diego. Even more predictably, after reading the Teradata indexes book, I then watched 3 films and arrived back at Heathrow in the early hours having had no sleep…and we still had to travel up to Manchester. Doh!
Anyway, back to Teradata indexes. The first chapter of the Teradata indexes book focuses on the primary index (PI), as you would probably expect if you knew even a little about Teradata, and Teradata indexes in particular. The primary index is important, very important. In fact, it is by far the most important of all Teradata indexes. If you are going to have a favourite amongst the Teradata indexes, make it the PI!
The PI is used by Teradata to hash distribute the data across all of the virtual processors or VPROCS (well actually the disks belonging to the virtual processors) in the system. There is a new capability to create a table without a PI – called a No Primary Index (NOPI) table – but that’s another story. Almost all tables in all Teradata systems are hash distributed, are likely to continue to be, and should be, in my opinion. The usefulness or otherwise of randomly distributed fact tables is a debate for another day.
The PI choice, which must be made for each table, is perhaps the most important thing to get right in a Teradata system, and should be coded explicitly. Yes, the system will take action if you don’t explicitly specify a PI, but who would ever forget something that important?
Given the importance of the Teradata PI, we started to think about other pearls of wisdom on the subject, which we believe aren’t covered in the Teradata indexes book, so here is our guidance on that most important of all the Teradata indexes, the PI:
1. Impact of data type on distribution
Users often change the data type of a column. Sometimes that’s a good thing, sometimes it’s a bad thing, especially when it’s a PI column.
Many a time we’ve seen production tables with a PI column that has INTEGER as the data type, for instance, and a user version where the only material difference is that INTEGER has been changed to CHAR(10), or something similar, for the PI data type.
So what? They prefer working with CHARs in their queries!
Maybe so, but hashing – and therefore data distribution – is based on the PI input value and its data type. By changing the PI data type, the production table and the ‘identical’ user table no longer have the same distribution, which is usually unintentional. Table skewing should stay the same, as the PI values have the same cardinality in both cases.
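A quick way to see this for yourself (a minimal sketch – the literal value 12345 is just an illustration):

```sql
-- HASHROW returns the row hash Teradata would compute for a value.
-- The same logical value hashes differently depending on its data type,
-- because hashing works on the internal byte representation.
SELECT HASHROW(12345)                    AS int_hash
     , HASHROW(CAST(12345 AS CHAR(10))) AS char_hash;
-- The two hash values will generally differ, so rows with the 'same'
-- PI value land on different AMPs in the two tables.
```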
So what? Read on to the second point, that’s what!

2. Co-location of joined tables

When considering the PI choices available for a table, it’s important to consider how the table will be joined to other related tables – this is a relational database system, remember?
A classic example is the relationship between an order table and an order item table. Both tables could have a PI that consists of the primary key (PK) columns, which would guarantee even distribution (i.e. no skewing) in both cases. On the face of it that would be ‘a good thing’.
However, let’s consider that queries frequently join the order and order item tables, as you might expect. For each order there are many order items, possibly hundreds or even thousands, who knows. If both tables were hash distributed on their PKs, there would be only a small percentage of random cases where an order row and the corresponding order item rows are stored on the same AMP vproc. These cases are a happy accident.
All of the rows in one of the tables would have to ‘move’ so that the rows between the tables could be joined. This happens either via re-distribution or table duplication, which involves making a copy on each AMP. In either case, shipping rows across the bynet is unavoidable. This has to happen because there is no ‘co-location’ of rows between the order and order item table.
An alternative approach might be to use a common PI of order_id or account_id and purchase_date for both tables. As both tables have the same PI column(s) the rows that join are guaranteed to be ‘AMP local’ or co-located so there is no redistribution required, and therefore no bynet traffic.
The choice of PI for a table must not be made in isolation. A little bit of skewing – say up to 10% – is perfectly acceptable if co-location of frequently joined fact tables can be achieved. Balancing skew and co-location is the goal.
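The co-location idea can be sketched as follows (table and column names are hypothetical):

```sql
-- Both tables distributed on order_id: an order row and its item rows
-- hash to the same AMP, so the join below needs no redistribution
-- or duplication across the bynet.
CREATE TABLE orders
( order_id   INTEGER NOT NULL
, order_date DATE    NOT NULL
) PRIMARY INDEX (order_id);

CREATE TABLE order_items
( order_id   INTEGER NOT NULL
, item_no    INTEGER NOT NULL
, quantity   INTEGER
) PRIMARY INDEX (order_id);   -- NUPI: same PI as orders, so co-located

-- AMP-local join: no row movement required.
SELECT o.order_id
     , COUNT(*) AS item_count
FROM   orders o
JOIN   order_items oi
ON     o.order_id = oi.order_id
GROUP  BY 1;
```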
3. Hash functions
Within the Teradata indexes capability there are several Teradata functions that can be used to interrogate hashing – namely HASHAMP (and HASHBAKAMP), HASHBUCKET and HASHROW. Taken together, they can be used to find the AMP to which a given value hashes:
SELECT COL1, HASHAMP(HASHBUCKET(HASHROW(COL1))) AS AMP_NO FROM TAB1;
Armed with Teradata’s hashing functions it is possible to investigate various hashing options without having to create multiple copies of a table.
Note that 0, null and spaces all tend to hash to the highest AMP number in a system. If the highest AMP in the system looks like it has a lot of data for a given table, it could be caused by default values being present.
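Building on the example query above, the same functions can preview how a candidate PI column would distribute an existing table across the AMPs (TAB1 and COL1 as before):

```sql
-- Rows per AMP if TAB1 were distributed on COL1.
-- A heavy count against the highest AMP number often points at
-- nulls, zeros or spaces in the candidate column.
SELECT HASHAMP(HASHBUCKET(HASHROW(COL1))) AS amp_no
     , COUNT(*)                           AS row_count
FROM   TAB1
GROUP  BY 1
ORDER  BY 2 DESC;
```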
4. Number of PI columns
For a given table, we’d normally expect to see 1, 2 or 3 columns in the PI, with 4 columns as the maximum. In 90% of cases we’d expect 1-2 columns.
Although there is no hard and fast rule on this, a maximum of 4 columns in a composite PI is the guidance we’d recommend. On certain non-Teradata MPP systems the maximum that can be specified in the ‘distribution key’ (PI equivalent) used to hash the data is…wait for it…4 columns. Great minds think alike etc…
Hopefully, through deeper understanding of Teradata indexes, we can avoid the 48 column PI that we’ve seen recently at a customer site (no names!).
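Offenders like that can be hunted down via the data dictionary. A sketch against the DBC.IndicesV view (assuming IndexType ‘P’ for a non-partitioned PI and ‘Q’ for a partitioned one):

```sql
-- Tables whose primary index has more than 4 columns.
SELECT DatabaseName
     , TableName
     , COUNT(*) AS pi_column_count
FROM   DBC.IndicesV
WHERE  IndexType IN ('P', 'Q')   -- primary index rows, one per PI column
GROUP  BY 1, 2
HAVING COUNT(*) > 4
ORDER  BY 3 DESC;
```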
5. Non-explicit PI columns
NOPI tables aside, if we create a table without explicitly specifying a PI the system has to create one for us in order to hash distribute the rows across the AMPs.
The system will use the following, in order:
- primary key (PK)
- unique constraint
- first column
In the absence of explicit guidance in the DDL, primary keys or unique constraints make good PI candidates as they guarantee even table distribution, i.e. no skewing. However, in practice, neither is used very often. So, in reality, the first column in the table becomes the PI used to hash distribute the data – no matter how bad a choice that is.
If the first column contains nulls or some other default value then ALL of the data in the table will hash to the same AMP. That is not a good thing. Such an extreme case is unlikely, although possible.
Skewing as a result of a system-assigned PI tends to be what happens when we get lazy/sloppy. Use of ‘CREATE TABLE AS’ (CTAS) without an explicit PI in the SQL is often where this kicks in. The resultant table is often silently created with bad skewing as a result of the first column being a poor PI choice.
So, how do we avoid such an unhappy event? Explicitly code a PI in the table create SQL, whether it’s straight DDL or DML in the form of a CTAS query. Simples!
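A sketch of the difference (hypothetical table names):

```sql
-- CTAS without a PI clause: the first column silently becomes the NUPI,
-- however bad a choice it is.
-- CREATE TABLE user_orders AS (SELECT * FROM orders) WITH DATA;

-- CTAS with the PI coded explicitly - no surprises:
CREATE TABLE user_orders AS
( SELECT * FROM orders )
WITH DATA
PRIMARY INDEX (order_id);
```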
6. Table create DDL
Following on from the point above, a Teradata indexes standard we have used for many years is to code PI column(s) first in table create DDL (often followed by the non-PI columns in alphabetic order).
CREATE TABLE FOO
(PI_COL INTEGER NOT NULL)
UNIQUE PRIMARY INDEX (PI_COL);
This can save the day when the PI clause is left out in CTAS operations, or when the DDL is copied and used elsewhere but the PI clause is deleted for whatever reason.
7. Row hash collisions
Row hash collisions, or synonyms, occur when two or more input values generate the same hash value, e.g. ‘foo’ and ‘bar’ both hashing to the same value X.
Given that the hash value is non-unique, Teradata uses a row id to uniquely identify a row within a table. The row id is the hash value plus a uniqueness value. For a given hash value, the first occurrence has a uniqueness value of 1, which is then incremented as further hash collisions occur.
Hash collisions cause a reduction in data access efficiency on the AMPs. They can be minimised by avoiding a non-unique primary index (NUPI) on highly non-unique column values.
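A simple demographic check on a candidate NUPI column can flag trouble in advance (table and column names are hypothetical):

```sql
-- Rows per value for a candidate NUPI column: a handful of values
-- owning most of the rows signals heavy collisions and skew ahead.
SELECT candidate_col
     , COUNT(*) AS rows_per_value
FROM   some_table
GROUP  BY 1
ORDER  BY 2 DESC;
```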
8. Nullable PI columns
Another Teradata indexes standard we have used for many years is that all PI columns should always be ‘NOT NULL’ – that’s the law!
9. Duplicate row checking
Teradata, by default, does not allow duplicate rows in a table. This is because ‘SET’ tables are created by default. In order to allow duplicate rows in a table, it must be created as a ‘MULTISET’ table rather than a ‘SET’ table.
If a SET table has a non-unique primary index (NUPI), the system has to ensure that no duplicate rows are created during the insert operation. This duplicate row checking can get very costly in terms of CPU demand. It is possible to witness 10x performance differences between identical insert operations where the only difference is SET/MULTISET for the target table.
Alternatively, using a unique primary index (UPI) rather than a non-unique primary index (NUPI) might be considered, as the UPI uniqueness check removes the need for the duplicate row check.
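Both options can be sketched as follows (hypothetical staging tables):

```sql
-- Option 1: MULTISET with a NUPI - duplicate rows are allowed,
-- so no duplicate row check is performed on insert.
CREATE MULTISET TABLE order_items_stg
( order_id INTEGER NOT NULL
, item_no  INTEGER NOT NULL
) PRIMARY INDEX (order_id);

-- Option 2: keep SET semantics but use a UPI - the cheap uniqueness
-- check on the UPI replaces the costly full duplicate row check.
CREATE SET TABLE order_items_uq
( order_id INTEGER NOT NULL
, item_no  INTEGER NOT NULL
) UNIQUE PRIMARY INDEX (order_id, item_no);
```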
So, there you have it. A few more hints and tips to consider relating to Teradata indexes, and the Teradata primary index in particular.
To quote my own tutor, and first published Teradata author, Brian Marshall:
“While the physical implementation team will probably receive few accolades if they choose indexes wisely, they will certainly be noticed if they do not.”
Brian further states that we should “…avoid the kind of errors which, through ignorance or carelessness, prevent the Teradata Database from performing to its true potential”.
Some wisdom never changes.