Friday, July 3, 2015

Set operations in SQL Server - Part 2: ANY() and ALL()

In the previous post Set operations in SQL Server - Part 1, we looked at the basic set operations Union, Intersection and Complement (or subtraction) and how they can be used and combined.  We also looked at the problem of determining table equality in a relational database -- especially when duplicate rows are permitted.  In this installment, we will look at various ways of testing set membership.

Set membership is a property of an item that may or may not be a member of a given set.  In database terms, membership implies that there exists some row in a table that satisfies the membership test.  This usually means that one or more columns contain data that satisfy some constraint such as equality or inequality.  Testing for set membership is an essential operation in processing data in a relational database, such as SQL Server.

Starting easy

Let's start off with a simple example.  Suppose you have a table that contains information on bicycles.  Perhaps it looks something like this:

CREATE TABLE Bicycle (
    BicycleId INT
  , ModelName VARCHAR(50)
  , Manufacturer VARCHAR(50)
  , Cost DECIMAL(6, 2)
)

-- Add some data
INSERT INTO Bicycle (BicycleId, ModelName, Manufacturer, Cost)
VALUES (1, '700', 'Trek', 1000.01),
       (2, '500', 'Giant', 600.00)

Now, suppose this table was populated with various bike data and you want to see if there are any bicycles that cost over $1,000.00.  A typical query might look like:

IF EXISTS (SELECT * FROM Bicycle WHERE Cost > 1000.00)
    PRINT 'There is at least one bicycle costing over $1,000.00'

This is a simple test for membership that prints a message if the constraint is satisfied. Note that it uses the function EXISTS(). There are actually four functions available to help us with membership testing like this: EXISTS(), IN(), ANY() (a.k.a. SOME()) and ALL(). You're probably familiar with EXISTS() and IN() and may have used them before. Perhaps you are curious about the ANY() and ALL() functions and what they can do for you.
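
For contrast, EXISTS() and IN() approach the same kind of question a little differently: EXISTS() asks whether the subquery returns any row at all, while IN() checks a value for equality against the values the subquery returns. A quick sketch against the Bicycle table above (the 'Giant' and $600.00 values are just illustrations drawn from the sample data):

-- EXISTS(): is there any row at all that satisfies the predicate?
IF EXISTS (SELECT * FROM Bicycle WHERE Manufacturer = 'Giant')
    PRINT 'There is at least one Giant bicycle'

-- IN(): equality-based membership against the subquery's values
IF 600.00 IN (SELECT Cost FROM Bicycle)
    PRINT 'There is at least one bicycle costing exactly $600.00'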

Using ANY()

Let's rewrite the sample query using the ANY() function instead:

IF 1000.00 < ANY (SELECT Cost FROM Bicycle)
    PRINT 'There is at least one bicycle costing over $1,000.00'

This query returns the same result as the first. It is actually a more direct translation of the requirement ("is any cost greater than $1,000.00?") and is shorter to boot! Note the shape of the expression: T-SQL puts the value being tested and the comparison operator on the left, followed by ANY() and its subquery.

Why point that out? Well, sometimes the query inside the call to ANY() can be long, making it harder to spot the condition being tested, so it pays to get used to reading the comparison first.

Using ALL()

Now that we've seen how ANY() operates, what about ALL()?  It turns out that this is fairly intuitive. ALL() performs the desired comparison against every row returned by the subquery and returns True only if the comparison holds for every member examined. Sticking with our bicycle example:

IF 1000 < ALL (SELECT Cost FROM Bicycle)
    PRINT 'All bicycles cost more than $1,000.00'

If you try this against the sample data inserted above, you should receive no message, since there is at least one bicycle (the Giant 500) that costs less than $1,000.00.

Negating ANY() and ALL()

SQL Server allows these comparison operators with ANY() and ALL():
    { = | <> | != | > | >= | !> | < | <= | !< } 
 
However, the first time you use the negated forms, you may be surprised at the results.  For example, if you run this query:

IF 1000 !< ANY (SELECT Cost FROM Bicycle)
    PRINT 'No bicycle costs less than $1,000.00'

you might be surprised to see that the message is indeed printed! If so, that is probably because you are reading the query in the ordinary English meaning of the command, "If there is no bicycle that costs less than $1,000.00, print the message." However, SQL sees it differently. You see,

    value <operator> ANY (subquery)

is really shorthand for:

    value <operator> first item OR value <operator> second item OR ...

With our sample data, negating that:

    1000.00 !< ANY(...)

is executed as:

    1000.00 !< 1000.01 OR 1000.00 !< 600.00

Clearly, 1000.00 is not less than 600.00, so the second comparison is True, the OR is satisfied, and the whole expression evaluates to True -- so the message is printed.  You might have expected behavior similar to:

IF NOT EXISTS (SELECT * FROM Bicycle WHERE Cost < 1000.00)
    PRINT 'No bicycle costs less than $1,000.00'

which would result in no message, but NOT EXISTS() is not the same as !< ANY() as shown. In fact, this is a case where using the alias SOME() can make the intent clearer.  That is:

IF 1000 !< SOME (SELECT Cost FROM Bicycle)
    PRINT 'No bicycle costs less than $1,000.00'

more clearly indicates what we're asking for. What about negating the ALL() function? Well, similar to ANY(), ALL() is evaluated like this:

    value <operator> first item AND value <operator> second item AND ...

So, this query:

IF 1000.00 !< ALL (SELECT Cost FROM Bicycle)
    PRINT 'No bicycle costs less than $1,000.00'

is executed as:

    1000.00 !< 1000.01 AND 1000.00 !< 600.00

which evaluates to False, so the message is not printed.

Note:  We can rewrite the !< ANY() query (that didn't work) to achieve the desired result by negating the entire comparison with NOT instead of negating the operator:

IF NOT 1000 > ANY (SELECT Cost FROM Bicycle)
    PRINT 'No bicycle costs less than $1,000.00'

This reads as "if it is not the case that some bicycle costs less than $1,000.00, print the message" -- which, when you think about it, makes perfect sense!  With our sample data it prints nothing, just like the NOT EXISTS() version.

Summary

ANY() and ALL() are valuable functions in SQL Server that can be used to write succinct queries that reflect the business logic.  However, we need to be careful when using them with negated comparisons, since their execution differs from the natural English reading.

Friday, June 26, 2015

Set operations in SQL Server - Part 1: Testing for equality

Sets and set operations are fundamental to relational databases -- both in theory and in practical day-to-day operation.  A table ("relation" in mathematical terms) represents a set (more properly, a multiset, or bag) and the rows of the table ("tuples") represent the members of a set.  It's no surprise, then, that SQL contains many operators designed to operate on sets.  Some of these are familiar to anyone who recalls basic set theory from first-year college courses:
  • Union (UNION): Return a combination of two sets, eliminating duplicates
  • Intersection(INTERSECT): Return a set consisting of members present in both of two sets
  • Complement(EXCEPT): Return a set consisting of members present in one set but not the other
  • Symmetric difference (A EXCEPT B UNION B EXCEPT A): Union of the two complements
  • Cartesian product (A CROSS JOIN B): Set of all possible tuples from two sets
  • Cardinality (COUNT(*)): The number of members in a set
Note that I mentioned that in a relational database, sets are more properly called "multisets" or "bags".  This is because an RDBMS typically allows duplicate rows (that is, when a primary key is not defined).

Testing set equality

One interesting problem is how to determine the equality of two sets in an RDBMS while allowing for duplicate rows.  In set theory, two sets are equal if every member of set A is also a member of set B and every member of set B is also a member of set A.  That means we must have:

A \ B = B \ A = {}, the empty set, where "\" is the operator for the complement (set difference) operation.  We can also look at the cardinalities:

|A\B| = |B\A| = 0

For tables in an RDBMS with a primary key (which can have no duplicate rows), we can check equality simply:

( SELECT * FROM A
  EXCEPT
  SELECT * FROM B )
UNION
( SELECT * FROM B
  EXCEPT
  SELECT * FROM A )

which you might recognize as computing the symmetric difference. If this query returns no rows, the two tables are equal. (The parentheses are needed to overcome the default operator precedence in SQL Server. See EXCEPT and INTERSECT (Transact-SQL) for details.)

For tables without a primary key to enforce uniqueness, we need to use the cardinalities to account for cases where some rows are duplicated.  In SQL we can write:

SELECT
    ( SELECT COUNT(*)
      FROM ( SELECT * FROM A
             EXCEPT
             SELECT * FROM B ) [A\B]
    ) AS [|A\B|]
  , ( SELECT COUNT(*)
      FROM ( SELECT * FROM B
             EXCEPT
             SELECT * FROM A ) [B\A]
    ) AS [|B\A|]

but this won't work if the two tables have the same unique rows but different duplicated rows. For example, if we include the above logic in a WITH statement such as this:

WITH A AS
    ( SELECT * FROM (VALUES (1),(1),(2),(3),(4)) v(n) )
, B AS
    ( SELECT * FROM (VALUES (1),(2),(2),(3),(4)) v(n) )
-- query from previous example --

we can see that the results indicate equality. That is, the cardinality of each complement is indeed 0, yet the tables are not equal, since table A has two rows of 1's while table B has two rows of 2's. What is happening here? SQL removes duplicates as part of the set operations UNION, EXCEPT and INTERSECT!
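
You can see that duplicate removal in isolation with a tiny sketch (the values are chosen purely for illustration):

-- A has two 1's; EXCEPT still returns the value 1 only once
SELECT n FROM (VALUES (1), (1), (2)) AS A(n)
EXCEPT
SELECT n FROM (VALUES (2)) AS B(n);
-- Result: a single row containing 1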

To solve this problem, we need to compare the two tables row-by-row.  However, since there is no primary key, how should we match them up?  One simple way is to impose an ordering on the tables and assign a surrogate key.  In this case we simply use the ROW_NUMBER() function.  e.g.

WITH A(col1) AS
    ( SELECT * FROM (VALUES (1),(1),(2),(3),(4)) v(n) )
, B(col1) AS
    ( SELECT * FROM (VALUES (1),(2),(2),(3),(4)) v(n) )
, [A'] AS
    ( SELECT *, ROW_NUMBER() OVER(ORDER BY col1) AS rn FROM A )
, [B'] AS
    ( SELECT *, ROW_NUMBER() OVER(ORDER BY col1) AS rn FROM B )
SELECT
    ( SELECT COUNT(*)
      FROM [A'] a
      INNER JOIN [B'] b
          ON a.rn = b.rn
         AND a.col1 = b.col1
    ) AS [|A join B|]
  , ( SELECT COUNT(*) FROM A ) AS [|A|]
  , ( SELECT COUNT(*) FROM B ) AS [|B|]

This produces the results:

|A join B|    |A|    |B|
----------    ---    ---
         4      5      5

We can see that, even though A and B have the same cardinality, the JOIN produces a different cardinality, showing us that at some point, the two sets diverge. Hence the two sets are not equal.
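
If you would rather get a single verdict than three counts to compare by eye, the final SELECT can be wrapped in a CASE expression -- a minimal sketch using the same sample CTEs:

WITH A(col1) AS ( SELECT * FROM (VALUES (1),(1),(2),(3),(4)) v(n) )
, B(col1) AS ( SELECT * FROM (VALUES (1),(2),(2),(3),(4)) v(n) )
, [A'] AS ( SELECT col1, ROW_NUMBER() OVER(ORDER BY col1) AS rn FROM A )
, [B'] AS ( SELECT col1, ROW_NUMBER() OVER(ORDER BY col1) AS rn FROM B )
SELECT CASE
           WHEN (SELECT COUNT(*) FROM [A'] a INNER JOIN [B'] b ON a.rn = b.rn AND a.col1 = b.col1)
                = (SELECT COUNT(*) FROM A)
            AND (SELECT COUNT(*) FROM A) = (SELECT COUNT(*) FROM B)
           THEN 'Equal'
           ELSE 'Not equal'
       END AS Verdict;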

Saturday, June 20, 2015

Re-imagining Linked Servers

Introduction


Linked servers are an essential part of the SQL Server developer's tool kit.  They are often necessary for pulling/pushing data to/from remote servers, whether those servers run SQL Server, Oracle, DB2, PostgreSQL or any other RDBMS. Linked servers are flexible and can adapt to many different types of "servers", including Excel workbooks and MS-Access databases, among others.

Creating a linked server is relatively easy.  There is both a GUI method in SSMS (under Server Objects/Linked Servers) and a T-SQL method, using the system stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin.
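
For reference, the T-SQL route looks roughly like this (the server, provider, login and password values below are purely illustrative):

-- Define the linked server (here, another SQL Server instance reached via the native client provider)
EXEC sp_addlinkedserver
    @server = N'REMOTESRV',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remotehost\SQL2012';

-- Map a local login to a login on the remote server
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTESRV',
    @useself = 'FALSE',
    @locallogin = N'CORP\USER1',
    @rmtuser = N'remote_login',
    @rmtpassword = N'********';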

Problem

Securing access to linked servers falls outside the normal SQL Server permissions controls.  For example, to give some user -- say CORP\USER1 -- access to a linked server, you need to add that login id as a linked server login and map it to some login on the linked server.  That means that, for every user who needs to access the linked server, you potentially need to add a new linked server login.

There are options available to mitigate this somewhat.  If the remote user is in the same AD domain as the local user, you can specify that the connection should be made using their credentials.  This cannot work across domains unless those domains are in a trust relationship.  You can also specify a catch-all remote user which is used for connections where there is no explicit mapping specified. Also, you can simply deny access for any users trying to connect that are not in the explicit login list. (I'm using the terms "user" and "login" rather loosely here.  Strictly speaking, linked servers only know about logins since they are a server-level object.)

Solution

I find these options deficient and inflexible.  I do not want to edit the linked server object every time I need to give a new login access or remove an old login.  I'd rather use SQL Server's roles and credentials.  Here's how I envision this working:


  1. When you create a new login for a linked server, you should have additional options for the local login, specifically:
    1. An AD group that is in the Server Logins.  This would mean that any member of the group that can connect to the server (that is, not explicitly denied) can use the linked server.
    2. A SQL Server user-defined role.  This is the most flexible.  Any member of the role would have access to the linked server.  Note that you can add AD groups to roles if those groups are included in the server login list.
  2. In addition to specifying a specific remote login, you should have the ability to specify a Credential. 

These additional options, should they be added, would make the management of linked servers easier and more consistent with the SQL Server security model.  With these in place, permitting a login to use the linked server would be a simple matter of adding it to the AD group or adding it directly to the server role.  Likewise, the remote side of the mapping would be managed through credentials.

Conclusion

I'm convinced that it is time to align linked server management with the SQL Server roles and credentials security model.  Let's see if we can raise this issue for SQL Server Next!

Friday, June 12, 2015

Query performance scuttled by ARITHABORT OFF

Part of my daily routine is to have a look at jobs that ran overnight to see which ones ran longer than seems reasonable.  I know, "seems reasonable" is a little vague!  What I mean is: how long did the job run relative to previous runs and relative to the amount of data processed?

The other day while going through the overnight jobs, I found one that had been running between 90 minutes and 3.5 hours, yet only processed about 25,000 rows.  I suddenly got very interested in that job!  What was it doing that could possibly take so long?

Digging into SSIS

The job in question runs an SSIS package that loads a table into our data warehouse that is needed by many daily, weekly and monthly reports -- some of which go to C-level executives and above.  One might say this is important!  The job doesn't do anything very special:  it gets data -- previously staged -- from the same server it loads it to.  Sounds simple, right?  Well, actually it is simple.  So, why the long run time?

Digging into the package I discovered that the data flow uses an OLE DB source transformation, specifying Table or View as data access mode and referencing a view.  "OK," I thought, "What's up with that view?"  I opened the view definition in SSMS to look at it.  It involved a join of six tables but nothing suspicious.  Some of the tables could use extra indexes but the amount of data (25,000 rows) was small enough that I figured those indexes wouldn't make much difference.  It turned out that I was right and that the problem was something else entirely.

It's the plan, dummy!

Before trying to run the view in SSMS, I took a look at the estimated execution plan.  Essentially, it was a series of hash joins, which are usually a good choice.  With a little trepidation, I ran the code in the view.  To my surprise, it returned results in under 10 seconds!  So, where did the 3.5 hours come from?  Good question!

Next, I decided to rerun the package to see if it finished in 10 seconds, as my standalone query had.  I waited and waited and waited some more.  Finally I killed it.  Something else was going on.  I recalled that I had installed Adam Machanic's excellent sp_whoisactive stored procedure some time ago.  You can find the current release version here: sp_whoisactive

This procedure has a number  of options to control its output.  I noticed a couple I thought I could use:

--If 1, gets the full stored procedure or running batch, when available
--If 0, gets only the actual statement that is currently running in the batch or procedure
@get_full_inner_text BIT = 0,

--Get associated query plans for running tasks, if available
--If @get_plans = 1, gets the plan based on the request's statement offset
--If @get_plans = 2, gets the entire plan based on the request's plan_handle
@get_plans TINYINT = 0,
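
A call along these lines (a sketch, using the parameter names shown in the excerpt above) surfaces both the full batch text and the complete cached plan:

EXEC sp_whoisactive
    @get_full_inner_text = 1,  -- full batch/procedure text
    @get_plans = 2;            -- entire plan via the request's plan_handle
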
So, I started up the package again, and while it was running, ran sp_whoisactive in a second session using those options.  The procedure returned a row for my running package with a link to the query plan.  Using that link, I opened the query plan to see what it looked like.  It didn't look at all like the plan from my first, interactive session.  Instead of hash joins the plan had nested loops!  Given the sizes of the tables involved, no wonder it ran for 3.5 hours!

So, why the nested loops?  Digging further, I discovered that the optimizer had timed out trying to find a good plan, leaving us with what it had at the moment.  Not only that, but the estimated row counts for two of the tables were 1 -- that's right, one!  I knew that those tables had thousands of rows, so nested loops would kill performance.  However, the optimizer had given up before finding a good plan. Why? For that matter, why did it not simply reuse the plan from my (fast) interactive query?

ARITHABORT OFF messes up the plan

The next thing to look at was the settings in use when both plans were generated.  They were the same except for one: ARITHABORT.  The plan that caused the optimizer to time out was generated with SET ARITHABORT OFF in effect. Googling this setting led me to the article ARITHABORT and performance, where it is stated clearly:

Client applications setting ARITHABORT to OFF can receive different query plans making it difficult to troubleshoot poorly performing queries.
Sooooo, I knew the reason.  Now, what about the root cause?  It turns out that the default setting for ARITHABORT in the model database is -- you guessed it! -- OFF.  My database inherited that setting.  I could have changed the database setting (and I still might do that) but I did not want to break any queries that might depend on it, so I changed the package instead.
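
For completeness, checking and changing the database-level default would look something like this (the database name MyDataWarehouse is purely illustrative):

-- Is ARITHABORT ON by default for this database?
SELECT name, is_arithabort_on
FROM sys.databases
WHERE name = 'MyDataWarehouse';

-- Flip the database default (I held off on this to avoid breaking queries that rely on OFF)
ALTER DATABASE MyDataWarehouse SET ARITHABORT ON;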

The fix is in!

I changed the Data Access mode to SQL Command and wrote:

SET ARITHABORT ON;
SELECT * FROM dbo.myview;

I deployed the package to the server, deleted the old plans from the cache and reran the package.  It ran in under ten seconds!  Worrying that it might be a fluke, I monitored the package for the next several days and not once did it run longer than a minute.  Problem solved!

Summary

The moral of the story?  Don't always assume that the problem is the way the query is written or how the tables are indexed.  The problem can be subtler than that, and you need to look at the query plan and the options used to generate it.

Saturday, May 30, 2015

A diversion: Python

For almost ten years I've been an ardent devotee of the Python language, initially developed by Guido van Rossum at Centrum Wiskunde & Informatica in Amsterdam, the Netherlands. I learned it from the ground up during my participation in the GRAMPS project, motivated by the unfolding history of my remarkable family.
A few years back, I was approached by Raul Chong of IBM's Big Data University to help author an introductory book on the language. That book is nearing completion!
Along with the book, I was invited to give a two-hour introductory class on Python at Ryerson University. That class went very well, with more than 100 students hanging on to the end. Naturally two hours is hardly enough to do more than scratch the surface, especially for non-Computer Science students, many of whom had little programming experience. Still, I was very much energized by the opportunity and the response.
I was able to use Visual Studio 2013 Express with Python support added via NuGet as my demonstration IDE. Though there are some evident growing pains (especially with respect to debugging), it has the potential to become one of the best (free) Python IDEs available.

Update: Link to book: Getting Started with Python

Sunday, May 24, 2015

Integration Services, MySql and SET FMTONLY ON



Like many large organizations, the company I work for has a large, heterogeneous database environment.  We have at least four RDBMSs (five, if you count MS Access) in use to support the various lines of business.  That list includes multiple major instances of SQL Server, Oracle, DB2 and MySql.  Some of those came as part of certain vendor packages.  This article concerns my adventures integrating data from MySql into SQL Server for reporting purposes.

One of the instances of MySql that I need to pull data from to populate a data warehouse on SQL Server is the database used by SANscreen, developed by Onaro and now part of the NetApp product suite.  It provides information on storage allocations and usage in our SAN. Developing an ETL process for this data load has been challenging -- even frustrating at times -- but very satisfying as the barriers we encountered along the way began to fall.

The first thing we did was install the MySql ODBC drivers on our development instance of SQL Server and verify that we could at least connect to the SANScreen database.  That worked and we could see the SANScreen tables under the Linked Servers node of Server Objects in SSMS.  On to the queries!

We learned right away that we could not use SQL Server's four-part naming to query the MySql tables, since the MySql ODBC driver does not support the required interfaces.  So, we built views using OPENQUERY that work quite well.  A typical view looks like this:
CREATE VIEW MySqlTable AS
SELECT col1, col2, col3
FROM OPENQUERY(SANSCREEN, '
    SELECT col1, col2, col3
    FROM MySqlSchema.MySqlTable;
');
No problem!  We used the views to build ETL packages in Integration Services and the world was a happier place. After a few runs, however, it became obvious that we were pulling largely the same data every night for some of the biggest tables.  With 20/20 hindsight we realized that this was not unexpected, since those tables are essentially data warehouse fact tables.  Only a few hundred or thousand rows were new for any given run.  The rest (millions of rows, in some cases) were from older transactions.  An incremental load approach was needed.

Writing incremental loads in Integration Services means that you need some column you can use to define the start of the new data.  That is commonly a column containing a transaction date or something that can be mapped to such a date.  You want to be able to restrict your query to rows that are new with respect to that column.  Then, you want to save the new, maximum value of that column to use for the next incremental load.  We found columns that mapped to dates and decided to use those.  Adding those dates to our query created a new challenge.
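
The bookkeeping half of that pattern is straightforward. A minimal sketch, assuming a hypothetical control table etl.Watermark and a local staging table dbo.Staged_MySqlTable (both names are illustrative):

-- One row per source table, holding the high-water mark from the last successful load
-- CREATE TABLE etl.Watermark (SourceTable sysname PRIMARY KEY, LastLoadedDate datetime);

UPDATE etl.Watermark
SET LastLoadedDate = (SELECT MAX(datecol) FROM dbo.Staged_MySqlTable)
WHERE SourceTable = 'MySqlTable';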

We didn't want to retrieve the whole table from MySql and then filter out the new rows.  That would defeat one of our key goals in the incremental load approach: only pull data from MySql that we need.  So, we needed to pass that date to the call to OPENQUERY.  Unfortunately, OPENQUERY does not support a variable as its second parameter.  Our nice, neat views would not do the job.  We needed new stored procedures.  The procedures have the general form:
CREATE PROCEDURE usp_MySqlTable
    @StartDate datetime = NULL
AS
DECLARE @query varchar(max) = '
    SELECT col1, col2, col3
    FROM MySqlSchema.MySqlTable
    WHERE datecol >= ''{StartDate}'';
';
DECLARE @sql nvarchar(max);

SET @query = REPLACE(@query, '{StartDate}', CONVERT(varchar(20), @StartDate, 120));
-- Wrap the MySql query in an OPENQUERY call against the linked server
-- (inner quotes are doubled so they survive inside the outer literal)
SET @sql = N'SELECT * FROM OPENQUERY(SANSCREEN, ''' + REPLACE(@query, '''', '''''') + N''');';
EXEC sp_executesql @sql;
(Note that I like to declare a variable containing the whole query including place-holders for variables, then replace those variables before executing the query.  This way, I can see the whole query at a glance.  I find this easier to follow than alternatives that assemble the query from parts.  YMMV.)

Voila!  A stored procedure that calls MySql with a variable start date!  Worked like a charm in all our unit tests.  So, now we went on to build SSIS packages that use the new procs.  More pain ensued...

Since we were no longer working with views, there had to be some way for Integration Services to figure out the metadata for the source adapter. Since we are using OLE DB Sources in our packages, we changed the data access mode from Table or View to SQL Command. The commands look like:
EXEC usp_MySqlTable ?
and we put a variable holding a default starting date in the Parameters section.  (As an aside: for some reason, when you do this, the Integration Services designer cannot preview the results.  It throws an error, complaining that parameters were not supplied, even though they clearly are.) Once we had changed the view reference to the call to the stored procedure, I clicked on Columns to make sure that everything was still OK. We waited...and waited...and waited some more...until the designer threw an error saying that the query had timed out!  What on earth was going on?  To find out, we fired up SQL Server Profiler to observe.  We found a sequence like this (reformatted for readability):
DECLARE @p1 int
SET @p1 = 0
EXEC sp_prepare @p1 OUTPUT
    , N'@P1 datetime'
    , N'exec [dbo].[usp_MySqlTable] @StartDate = @P1;'
    , 1
SELECT @p1
SET NO_BROWSETABLE ON
SET FMTONLY ON
EXEC sp_execute 1, '01-01-1992 01:01:01:000'
Of course, SSIS needs to know the metadata, hence the SET FMTONLY ON.  But where did that weird date come from?  We had not specified January 1, 1992 as the start date!  (Note that this remains a mystery.)  The consequence of the bogus date is that MySql is asked to return the entire table, all four million rows of it.  Since this is inside an OPENQUERY, SQL Server cannot know what the metadata is until something is returned.  MySql chose to process a large portion of the table before returning anything to SQL Server, hence the timeout.

What to do?  Well, we thought if we could somehow limit the results returned by MySql, in spite of the bogus date in 1992, we might be able to avoid the timeout and make the package design less frustrating as well.  Reading through the MySql docs, we saw that there is a LIMIT clause available for MySql queries.  That's it, we thought!  Now we needed a way to include that in the query -- but conditionally, since we only wanted to do that when FMTONLY was ON.  So how could we know that?  BOL was no help, but some Internet searches turned up a gem about how SQL Server evaluates conditional logic when FMTONLY is ON.


In a nutshell, when FMTONLY is ON, SQL Server tries all conditional branches in the stored procedure, looking for metadata that it can use.  That means that this statement:
SET FMTONLY ON;
DECLARE @fmtonly bit = 0;
IF 1 = 0
    SET @fmtonly = 1;
will actually set the variable @fmtonly to 1 if FMTONLY is indeed ON.  What we wanted to do was add a LIMIT clause.  So, to begin with, we modified the query template in the stored procedure to:
DECLARE @query varchar(max) = '
    SELECT col1, col2, col3
    FROM MySqlSchema.MySqlTable
    WHERE datecol >= ''{StartDate}'' /* LIMIT */;
';

Note that adding a comment to the query passed to MySql does not have any ill effects.  However, it gives us a place to insert a proper limit clause.  So, before actually executing the dynamic SQL, we added this one line:
IF 1=0 SET @query = replace(@query, '/* LIMIT */', ' LIMIT 1');
The effect?  It worked!  Now, when SSIS does its FMTONLY run, only one row is returned from MySql, almost instantly.  No more lengthy waits and time outs!


Friday, May 22, 2015

Problems in SSIS land

Using SQL Server 2012, SSDT with Visual Studio 2012

There's an interesting article on MSDN: OLE DB Source discussing, among other things, how to use parameters when calling a stored procedure in an OLE DB Source component. The section "Specifying Parameters by Using Ordinal Positions" (about halfway down the page), shows an example using the AdventureWorks database. Unfortunately, the example doesn't work, at least not if you try to Preview the results.

Note that I want to use Preview as a sanity check that I've set things up correctly. Instead, it's driving me insane!

I set up my test using exactly the setup described:
  1. OLE DB Connection Manager to AdventureWorks (I used the 2012 version for the test) 
  2. OLE DB Source Component using the Connection Manager in step 1 
  3. Data Access mode: SQL Command 
  4. SQL Command text: EXEC uspGetWhereUsedProductID ?, ? -- copied from the web page 
  5. Two variables created: User::StartProductId (int) and User::CheckDate (datetime), set to 0 and getdate(), respectively 
  6. Two parameters in the SQL query Parameters window: Parameter0 mapped to StartProductId; Parameter1 mapped to CheckDate 
  7. back in the OLE DB Source Editor, clicked Preview ------->>> BOOM!  

"No value given for one or more required parameters."

 
So, it seems that the documentation is wrong (or incomplete) on this point. Curious to see what was going on, I fired up Profiler to trace the calls. I was surprised to see this sequence:


declare @p1 int
set @p1=0
exec sp_prepare @p1 output,N'@P1 int,@P2 datetime',N'EXEC uspGetWhereUsedProductID @P1, @P2',1
select @p1
exec sp_execute 1,0,'01-01-1992 01:01:01:000' 

So, I learned that the Preview did not use the variable values I had set. Why?

Since the ordinal positions method didn't work, I tried the second option: Specifying Parameters by Using Names

To do this, I changed my SQL command to:



EXEC uspGetWhereUsedProductID 
    @StartProductID=?,
    @CheckDate=?
;
 
 
And changed the mappings to:

@StartProductID --> User::StartProductID
 @CheckDate --> User::CheckDate

Then I clicked Preview again. Same result: BOOM! Same error message, and it still didn't use my variable settings. So, this is totally messed up.

HOWEVER! When I actually execute the task under debug in SSDT, it runs fine, uses the variable values I set, etc.

So, the big questions:
  1. Why does Preview bomb? From what I can see in the trace, it shouldn't. That is, it properly prepares the call to the stored procedure (except for my variable values). FWIW, I copied the statements from Profiler and ran them fine in SSMS. So why does it fail in the SSDT designer's Preview? 
  2. Why doesn't it use my variable values? 
  3. Where did the date '01-01-1992 01:01:01:000' come from? Is it just a historical reference to SQL-92?