MS Terminology:
MS Standards:
Objective |
Group data into entities by applying normalization rules |
C3L1 |
Identify primary keys |
C3L1, C5L1, C5L2 |
Normalisation Rules:
Normalisation Benefits:
MS book also lists: "Compact and narrow indexes", which, presuming it differs from (1), is meaningless.
Objective: |
|
Choose the foreign key that will enforce a relationship between entities and that will ensure referential integrity |
xx
Objective: |
|
Identify the business rules that relate to data integrity |
xx
Objective: |
|
Incorporate business rules and constraints into the data model |
xx
Objective: |
C3L1, C3L4, C15L1 |
In a given situation, decide whether denormalization is appropriate |
MS says:
"Denormalisation is acceptable when it creates definitive benefits and appropriate steps are taken to ensure data integrity."
"... reasonable normalisation often helps rather than hinders performance."
"As a rough rule-of-thumb, extensive normalisation is recommended unless it causes many queries to join in excess of four ways."
To ensure data integrity use constraints, rules and triggers.
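For example, a trigger can keep a denormalised total in step with its detail rows. A minimal sketch, assuming hypothetical Orders/OrderDetails tables where Orders carries a denormalised OrderTotal column:

CREATE TRIGGER trgMaintainOrderTotal ON OrderDetails
FOR INSERT, UPDATE, DELETE
AS
    -- Recalculate the stored total for every order touched by this statement
    UPDATE Orders
    SET OrderTotal = (SELECT SUM(od.Qty * od.UnitPrice)
                      FROM OrderDetails od
                      WHERE od.OrderID = Orders.OrderID)
    WHERE OrderID IN (SELECT OrderID FROM inserted
                      UNION
                      SELECT OrderID FROM deleted)
GO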
Objective: |
C3L1 |
Assess the potential impact of the logical design on performance, maintainability, extensibility, scalability, availability and security |
xx
Objective: |
C2L3 |
Access data by using the dynamic SQL model |
Dynamic SQL
Options, tables and variables created by dynamically constructed SQL are automatically destroyed at the end of execution of the statement.
Formatting:
<---- Wrong place?
Objective: Access data by using the Stored Procedure model |
xx
Objective: |
C9L2 |
Manipulate data by using Transact-SQL cursors:
|
Cursor Support
For ADO, OLE DB, ODBC & DB-Library
Transact-SQL Cursors
Forward-Only Cursors | Static Cursors (aka snapshot or insensitive) |
Dynamic (aka sensitive) | Keyset-Driven |
DECLARE CURSOR (ANSI)
DECLARE name [INSENSITIVE] [SCROLL] CURSOR FOR SQLSelectStatement
[FOR {READ ONLY | UPDATE [OF col [, col2, ..., colN]]}]
DECLARE CURSOR (SQL Server)
DECLARE name CURSOR [LOCAL | GLOBAL] [FORWARD_ONLY | SCROLL]
[STATIC | KEYSET | DYNAMIC | FAST_FORWARD]
[READ_ONLY | SCROLL_LOCKS | OPTIMISTIC] [TYPE_WARNING]
FOR SQLSelectStatement
[FOR UPDATE [OF col [, col2, ..., colN]]]
Parameter | Notes |
SQLSelectStatement | Keywords such as DISTINCT, TOP, UNION, GROUP BY and HAVING force the cursor type to static |
LOCAL | The cursor is local to the stored procedure, trigger or batch in which it is defined. When that unit is exited, the cursor is destroyed and cannot be referenced externally. |
GLOBAL | The cursor is not limited to the stored procedure, trigger or batch in which it is defined and can be referenced until the connection in which it was defined is dropped (or the cursor is closed/deallocated). Note: Although GLOBAL is the default if neither LOCAL nor GLOBAL is specified, this behaviour can be overridden with SET CURSOR_DEFAULT LOCAL and SET CURSOR_DEFAULT GLOBAL. |
FORWARD_ONLY | Data can only be read sequentially and by FETCH NEXT. If neither FORWARD_ONLY nor SCROLL is specified, FORWARD_ONLY is the default only if the cursor type clause (STATIC/KEYSET/etc.) is not used. FORWARD_ONLY cannot be used with SCROLL in either the SQL Server or ANSI 92 definitions. |
SCROLL | All fetch options are available |
STATIC | Cursor is created in tempdb; changes made to the base tables are not reflected in the cursor |
KEYSET | Key fields for records in the cursor are stored in tempdb, but not other values. Changes made by other users are visible, but inserts are not (because the keyset records in tempdb are static). Records that have been deleted (or have had their key fields changed) by other users set @@FETCH_STATUS to -2. |
DYNAMIC | Changes made by other users, including inserts and deletes, are visible to every fetch. |
FAST_FORWARD | Combination of FORWARD_ONLY, READ_ONLY and some additional internal performance optimisations. |
READ_ONLY | Prevents updates through the cursor even if another clause indicates writability |
SCROLL_LOCKS | ?? Locks the rows as they are read into the cursor to guarantee the ability to update those rows. (?? this is the MS Help file version, the MS Book says something else) |
OPTIMISTIC | Rows are not locked as they are read; instead the update fails if the row is modified outside of the cursor. |
TYPE_WARNING | Tells the user if the selected cursor type was changed (for example, by use of one of the keywords listed in SQLSelectStatement). |
UPDATE | Entire cursor is writable |
UPDATE OF col [, col...] | Only the specified columns are writable |
OPEN
OPEN
OPEN [GLOBAL] name
OPEN @variableName
Parameter | Notes |
GLOBAL | If there are both local and global cursors with the same name, the local one will be opened unless the GLOBAL clause is specified. |
@variableName | If the cursor was generated through SET @variableName = cursorName or SET @variableName = CURSOR [FORWARD_ONLY | SCROLL] [STATIC | KEYSET | DYNAMIC | FAST_FORWARD] ... |
<NONE> | ?? Help file and MS book say this syntax is legal, but neither says what it means. |
FETCH
FETCH [[NEXT | PRIOR | FIRST | LAST | ABSOLUTE n | RELATIVE n] FROM]
{[GLOBAL] cursorName | @variableName}
[INTO @variableName [, @var2 ...]]
Parameter | Notes |
GLOBAL | As above |
n | A number (can be a @variable) |
ABSOLUTE | Cannot be used with dynamic cursors |
NEXT | If this is the first fetch performed on the cursor it effectively performs a FETCH FIRST. FETCH NEXT returns the current record before moving forward. |
PRIOR | If performed on the first record in the cursor, leaves us on a fictional BOF record. FETCH PRIOR moves then returns the record. |
<NONE> | ?? Presumably if NEXT, PRIOR, etc. is not specified the record is returned without moving the pointer. Need to test because it's not covered by the help file or book. |
@@FETCH_STATUS
Value | Notes |
0 | Success |
-1 | Failed, BOF or EOF |
-2 | Row is missing (e.g. deleted by another user) |
CLOSE
CLOSE
CLOSE {[GLOBAL] cursorName | @variable}
DEALLOCATE
DEALLOCATE
DEALLOCATE {[GLOBAL] cursorName | @variable}
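Putting the statements above together, a typical cursor lifecycle looks like this (a sketch against the Northwind Orders table; the fetch-before-the-loop, fetch-at-the-bottom pattern is the standard idiom):

DECLARE @id int
DECLARE curOrders CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM Orders WHERE CustomerID = 'HANAR'
OPEN curOrders
FETCH NEXT FROM curOrders INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CONVERT(varchar(10), @id)    -- process the current row
    FETCH NEXT FROM curOrders INTO @id
END
CLOSE curOrders
DEALLOCATE curOrders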
Updating Cursors
The current record can be updated using the WHERE CURRENT OF ... clause,
provided the cursor has been declared updateable. For example:
UPDATE table SET field = newValue WHERE CURRENT OF cursorBasedOnTable
?? Need more detail
?? Need detail on passing local cursors out of stored procedures (p280)
Cursor Stored Procedures
sp_cursor_list | Returns a list of open cursors (on the current connection) and their attributes |
sp_describe_cursor | Returns attributes of a cursor |
sp_describe_cursor_columns | Returns attributes of columns in a cursor |
sp_describe_cursor_tables | Returns information about the base tables used by the specified cursor |
Objective: |
|
Create and manage explicit, implicit and distributed transactions to
ensure data consistency and recoverability:
|
Transactions
Full syntax is:
BEGIN TRAN[SACTION] [transactionName [WITH MARK ['description']]]
where transactionName makes the code more readable.
WITH MARK is used to mark the transaction log with a string that can be referred to in restore options.
@@TRANCOUNT returns the transaction level. DBCC OPENTRAN can be used to return more information on active transactions.
ROLLBACK rolls back all outstanding transactions, not just the last one begun.
Not all stored procedures (e.g. sp_dboption) can be issued in a transaction.
Not all commands can be issued in a transaction (including, but not limited to: ALTER DATABASE, BACKUP LOG, CREATE DATABASE, DROP DATABASE, RECONFIGURE, RESTORE DATABASE, RESTORE LOG, UPDATE STATISTICS).
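A minimal explicit transaction using the pieces above (table and mark name are illustrative; error checking via @@ERROR since this predates TRY/CATCH):

BEGIN TRAN PriceRise WITH MARK 'before 10% price rise'
    UPDATE Products SET UnitPrice = UnitPrice * 1.1
    IF @@ERROR <> 0
        ROLLBACK TRAN          -- rolls back to the outermost BEGIN TRAN
    ELSE
        COMMIT TRAN PriceRise
SELECT @@TRANCOUNT             -- back to 0 either way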
Isolation Levels
The transaction isolation level is set at the connection level and specifies the default locking behaviour for that connection. It is set by the SET TRANSACTION ISOLATION LEVEL level command (note that the syntax is level, not TO level). level can have the following values:
READ UNCOMMITTED | Dirty reads are allowed, no shared locks are issued and exclusive locks are ignored. Equivalent to stating the NOLOCK hint for every table in the select statement. |
READ COMMITTED | The default. Shared locks are issued and all locks are honoured. Dirty reads cannot occur. |
REPEATABLE READ | As per READ COMMITTED, except that shared locks caused by reads are not released until the transaction completes. Thus a read operation is "repeatable" since another connection cannot change the data while the first has it locked. Note that this does not apply to new rows added by another connection (i.e. REPEATABLE READ does not prevent phantoms). |
SERIALIZABLE | As per REPEATABLE READ except it also prevents other connections adding rows that would be returned by WHERE clauses issued in statements by the locking connection. |
Values in the above table are shown with the least restrictive locking mechanism first through to the most restrictive mechanism last.
DBCC USEROPTIONS returns (amongst other things) the current transaction isolation level. If the default isolation level has not been overridden, it will not be listed by DBCC USEROPTIONS.
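For example, to make reads repeatable for the duration of one transaction and then confirm the setting:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
    SELECT UnitsInStock FROM Products WHERE ProductID = 1
    -- other connections cannot change this row until we commit
COMMIT TRAN
DBCC USEROPTIONS    -- now lists the isolation level, since the default has been overridden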
Recovery and Checkpoints
When the system recovers from a crash the following occurs:
Transactions completed before last checkpoint | No action required, database already contains this information. |
Transactions completed after last checkpoint, but before system failure | Read from transaction log and apply to database (aka "Roll Forward") |
Transactions still outstanding at system failure | No action is taken; the database is left at its checkpoint (plus above changes) state. Aka "Roll Back" |
Note that this does mean that a short transaction started after a long slow transaction can be rolled forward while the slow transaction is rolled back. Since inner transactions are never committed until the outermost transaction is committed, this is not a problem.
SET IMPLICIT_TRANSACTIONS ON | OFF
Levels of Locks (finest granularity first)
Object | Scope |
Row Identifier | Single row in a table |
Key | Single row in an index ?? doesn't sound right |
Page | Single page |
Extent | Single extent (8 pages) |
Table | Entire table and all its indexes |
Database | Entire database |
Concurrency Problems Prevented by Locks
Name | Description |
Lost Update | Two users update the same record and only the second set of changes is saved. |
Uncommitted Dependency / Dirty Read | Where one transaction reads the uncommitted changes of another transaction |
Inconsistent Analysis / Non-repeatable Read | Where one transaction has to read the same record several times, but another transaction has updated the record in the meantime. |
Phantoms | Where one transaction adds records that would be visible to a concurrent transaction |
Types of Lock
Name | Description |
Shared Locks | Applied automatically by SQL during operations that do not update data. A shared lock prevents other transactions updating the locked data. Additional transactions can acquire shared locks on the same data even while the first transaction is still active. |
Exclusive Locks | Applied automatically by SQL during INSERT, DELETE & UPDATE statements. Once a transaction has an exclusive lock no other transaction can obtain a lock of any type on that data. Exclusively locked data cannot be read by other transactions. |
Intent Locks | Applied automatically by SQL to prevent long-winded searches for locks. For example, if a transaction acquires a row lock it will also obtain a table-level intent lock. While one transaction holds an intent lock, no other transaction can acquire an exclusive lock at that level. This means that if another transaction required an exclusive table-level lock it would be refused by just checking for intent locks rather than scanning the entire table looking for row locks. SQL supports the following levels of intent lock: Intent Share (IS), Intent Exclusive (IX) and Shared with Intent Exclusive (SIX). ?? Need to amplify SIX |
Update Locks | Applied at the page level automatically by SQL when it plans to update a page later. When the time to update the page actually arrives the update lock is upgraded to an exclusive lock. ?? This somehow prevents the presence of lots of shared locks blocking an exclusive lock, but I don't understand how |
Schema Locks | A schema lock prevents a table or index being dropped or modified by another transaction. SQL supports two types of schema lock: Schema Stability (Sch-S) which prevents dropping and Schema Modification (Sch-M) which prevents modification. |
Lock Compatibility Matrix
Existing Lock In Another Transaction:
Requested Lock | IS | IX | SIX | S | U | X |
Intent Shared (IS) | Y | Y | Y | Y | Y | N |
Intent Exclusive (IX) | Y | N | N | N | N | N |
Shared With IX (SIX) | Y | N | N | N | N | N |
Shared (S) | Y | N | N | Y | Y | N |
Update (U) | Y | N | N | Y | N | N |
Exclusive (X) | N | N | N | N | N | N |
Notes:
- Table shows diagonal symmetry. I.e. opA -> opB is same as opB -> opA
- Sch-M is not compatible with any other type of lock
- Sch-S is compatible with all locks except Sch-M
Optimistic & Pessimistic Concurrency
MS book's terms for standard optimistic and pessimistic locking. Under SQL Server you (or the client interface you are using) are responsible for implementing both sorts of locking.
SET LOCK_TIMEOUT timeInMilliseconds
By default SQL Server will wait forever for a required resource to be unlocked. This default behaviour can be overridden by SET LOCK_TIMEOUT n and restored by a SET LOCK_TIMEOUT -1. The current value of SET LOCK_TIMEOUT can be obtained from @@LOCK_TIMEOUT.
Locking Hints
Locking hints can be applied to the FROM clause in SELECT, INSERT, DELETE and UPDATE statements.
Take the form FROM table WITH (hint [, hint ...])
where hint can be:
ROWLOCK Use Row Locks PAGLOCK Use Page Locks (Note: PAGLOCK is correct (no E)) TABLOCK Use Shared Table Lock TABLOCKX Use Exclusive Table Lock NOLOCK Equivalent to READ UNCOMMITTED HOLDLOCK Equivalent to SERIALIZABLE. These locks remain applied until the end of the transaction. UPDLOCK Use update locks instead of shared locks when reading. These locks remain applied until the end of the transaction.
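For example, reading with UPDLOCK reserves the row for a later update without blocking other readers (illustrative use of the Northwind Products table):

BEGIN TRAN
    SELECT UnitsInStock
    FROM Products WITH (UPDLOCK)
    WHERE ProductID = 1
    -- no other transaction can take an update or exclusive lock on this row now
    UPDATE Products SET UnitsInStock = UnitsInStock - 1
    WHERE ProductID = 1
COMMIT TRAN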
Deadlocks
SQL Server terminates (and rolls back) whichever transaction has had the least time invested in it (that's elapsed time, not CPU time ?? confirm). Error 1205 is sent to the terminated transaction's connection.
Reduce deadlocks by:
Misc. Locking Information
Objective: |
C9L3, AppD |
Write INSERT, DELETE, UPDATE and SELECT statements that retrieve and modify data |
INSERT [INTO] table SELECT ...
DELETE [FROM] tableOrView [FROM tableOrView2 [FROM ...]] [{WHERE expr | WHERE CURRENT OF cursorName}]
UPDATE tableOrView SET field = expr [FROM tableOrView2 [FROM ...]] [{WHERE expr | WHERE CURRENT OF cursorName}]
Objective: Write Transact-SQL statements that use joins or sub-queries to combined data from multiple tables. |
Join Abbreviations
Cross Join = Cartesian product = no "ON" clause
SELECT INTO
Subqueries
Examples (silly examples because they could all be done better without a subselect):
SELECT EmployeeID,
       (SELECT COUNT(*) FROM Orders o WHERE o.EmployeeID = p.EmployeeID)
FROM Orders p
GROUP BY EmployeeID

SELECT DISTINCT EmployeeID FROM Orders
WHERE EmployeeID IN
      (SELECT EmployeeID FROM Orders WHERE CustomerID = 'HANAR')
Objective: Create scripts using Transact-SQL. Programming elements including control-of-flow methods, local and global variables, functions and error handling methods |
Standard Identifiers
Naming Guidelines
Object References
Are of the form server.database.owner.object. All 4 elements together represent a fully qualified name. All elements except object can be omitted and will be defaulted as follows:
server Defaults to local server database Defaults to current database owner Defaults to the user name associated with the user login in that database.
Elements in the middle of references can be omitted by just including the periods, for example: server...object
During creation the user's role can be specified in place of owner thereby making that role the owner. This greatly simplifies administration. For example, a member of the dbo group might issue CREATE TABLE MyDatabase.dbo.MyTable (...)
Local Variables
T-SQL Syntax:
DECLARE @variable_name data_type
SET @variable_name = expression
T-SQL Example #1:
DECLARE @cWelcome char(20)
SET @cWelcome = 'Hello World'
SELECT * FROM messages WHERE msgtxt = @cWelcome
T-SQL Example #2:
DECLARE @nCount int
SELECT @nCount = Count(*) FROM messages
Note: If the above SQL returned more than one record (obviously not possible in the above example) the variable is set to the value returned in the last row.
T-SQL Example #3:
DECLARE @myspid integer
SELECT @myspid = @@spid
EXECUTE sp_who @myspid
Global Variables
No longer used in Transact-SQL. Now called functions (after all, if they were a variable you should have been able to assign a value to them). Note that although the terminology has changed the syntax has not. For example, it is still SELECT @@VERSION and not Select Version(). Global Variables are still used in DTS.
Operators
Logicals: The usual (=, +, -, *, /, %, <, >, <>, AND, NOT) plus the unusual (!=, !<, !>).
Binaries: ^ (XOR), | (OR), & (AND)
SQLs: ALL, ANY, BETWEEN, IN, LIKE, OR, SOME
Note that Logical, Binary, SQL is the operator precedence order, except that AND and NOT actually belong between Binaries and SQLs, and multiplication/division/modulus has higher precedence than addition/subtraction.
Functions
In SQL speak there are three types of function:
Useful functions:
Function | Notes |
@@VERSION | Human readable version info |
@@TRANCOUNT | Current transaction level |
@@ROWCOUNT | Number of rows affected by the last executed query. |
@@SPID | The process ID associated with the connection |
CONVERT(toType, value, flags) | Convert from one datatype to another. For example SELECT CONVERT(varchar(30), GETDATE(), 102) converts a date to a character string using style 102 (ANSI date format) |
SUSER_SNAME() | The user name for the connection this command is executed from. |
APP_NAME() | The application name for the connection this command was executed from. There is nothing magical about this value. The application making the connection calls a corresponding function to set the value early on. For example, Query Analyzer sets this to "SQL Query Analyzer". |
OBJECT_ID(tablename) | Returns the number that SQL uses to reference the table internally. Not particularly useful in itself, this value is needed by other functions. For example, to determine if the "myField" field in the "myTable" table accepts NULLs we could SELECT COLUMNPROPERTY(OBJECT_ID('myTable'), 'myField', 'AllowsNull') |
DB_NAME() | Current database name |
DB_ID([databaseName]) | Database ID |
USER_ID([userName]) | User userName's user id. |
USER_NAME([userID]) | User userID's name |
COL_LENGTH(table, column) | Returns column width |
DATALENGTH(expression) | Returns width of expression in bytes. Expression can be a table column name. Unlike COL_LENGTH, DATALENGTH requires a from clause if used with a column name. For example: select datalength(surname) from voters |
STATS_DATE | ?? |
NEWID() | Returns a GUID. Often used in association with uniqueidentifier columns |
Advanced (or unique to SQL) functions:
SQL Advanced Functions | Equivalent In a Language I Already Know |
CASE WHEN bValue THEN result [WHEN ...] [ELSE result] END |
FUNCTION SQLCASE1 DO CASE CASE bValue luResult = result [CASE ...] [OTHERWISE luResult = result] ENDCASE RETURN luResult |
CASE expression WHEN value THEN result [WHEN ...] [ELSE result] END |
Function SQLCASE2 DO CASE CASE expression = value luResult = result [CASE expression = ...] [OTHERWISE luResult = result] ENDCASE RETURN luResult |
Notes:
Control Structures
SQL Structure | Equivalent In a Language I Already Know |
BEGIN ... END |
{ ... } |
IF bExpression command [ELSE command] ENDIF |
|
WHILE bExpression command |
DO WHILE bExpression command ENDDO |
RETURN |
Notes:
Multiple commands
There is no line terminator. Multiple commands can be included on a single line. For example:
EXECUTE sp_who (select @@spid)
doesn't call sp_who with the result of select @@spid as the parameter, but instead executes "execute sp_who" and then executes "select @@spid" as a separate command. The following achieves the desired result on a single line, but is unreadable:
DECLARE @myspid integer select @myspid=@@spid EXECUTE sp_who @myspid
Comments
Single line comments: Double hyphen (--). Can be used at the end of an
existing line, just like VFP's &&
Multiline comments: As per C (/* & */)
xx C2L2
Objective: Design, create, use and alter views:
|
VIEWS
CREATE VIEW
ALTER VIEW
DROP VIEW
Updateable View
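A minimal lifecycle sketch (Northwind-flavoured names; WITH CHECK OPTION stops updates made through the view from moving rows outside its WHERE clause):

CREATE VIEW dbo.UKCustomers
AS
    SELECT CustomerID, CompanyName, City
    FROM Customers
    WHERE Country = 'UK'
GO
ALTER VIEW dbo.UKCustomers
AS
    SELECT CustomerID, CompanyName, City
    FROM Customers
    WHERE Country = 'UK'
    WITH CHECK OPTION
GO
DROP VIEW dbo.UKCustomers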
Objective: |
C14L1 |
Create and execute stored procedures to enforce business rules, to modify
data in multiple tables, to perform calculations and to use input and
output parameters:
|
SQL Server supports 5 types of stored procedure:
Procedural Cache
The procedural cache holds the query plans for stored procedures. Its size isn't fixed, but when it runs out of room the least recently used procedure is dropped from it to make room.
Recompile
Stored procedures are automatically recompiled when:
Why use Stored Procedures?
CREATE PROCEDURE
ALTER PROC[EDURE] ...
DROP PROC[EDURE] procName [, procName2 ...]
EXEC[UTE]
Other stuff
Example
CREATE PROCEDURE dbo.MyExample
@var1 int, @var2 int, @result varchar(30) = NULL OUTPUT
AS
IF @var1 > @var2
SET @result = 'First number is bigger'
ELSE IF @var2 > @var1
SET @result = 'Second number is bigger'
ELSE
SET @result = 'Numbers are the same'
GO
DECLARE @returnVar varchar(30)
EXEC MyExample 1,2,@returnVar OUTPUT
SELECT @returnVar
> Second number is bigger
Error Handling
Note that RAISERROR contains only one E.
User defined messages can be created in master.dbo.sysmessages by using the
sp_addmessage stored procedure and removed using sp_dropmessage.
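A sketch of both forms — an ad hoc error and a user defined message (message number 50001 is illustrative; user defined numbers must be greater than 50000):

DECLARE @qty int
SET @qty = 99
-- Ad hoc: message text, severity 16, state 1, with a substitution argument
RAISERROR ('Quantity %d is out of range', 16, 1, @qty)
-- Via master.dbo.sysmessages
EXEC sp_addmessage 50001, 16, 'Quantity %d is out of range'
RAISERROR (50001, 16, 1, @qty)
EXEC sp_dropmessage 50001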
Objective: Create triggers that implement rules, that enforce data integrity and that perform cascading updates and deletes:
|
Triggers
Stored Procedures
sp_helptext triggerName | Displays trigger definition (provided it's not defined with the WITH ENCRYPTION clause) |
sp_depends triggerName | Displays details of objects referenced by the trigger |
sp_helptrigger tableName | Displays all triggers for the specified table |
sp_configure 'nested triggers', N | Enables nested triggers (N=1) or disables nested triggers (N=0) |
sp_dboption 'database', 'recursive triggers', {TRUE | FALSE} | Enables (TRUE) or disables (FALSE) recursive triggering for database database |
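For example, a cascading delete implemented as a trigger (Northwind-flavoured; the deleted pseudo-table supplies the keys of the removed parent rows):

CREATE TRIGGER trgCascadeOrderDelete ON Orders
FOR DELETE
AS
    -- Remove child rows for every order deleted by this statement
    DELETE od
    FROM [Order Details] od
    JOIN deleted d ON od.OrderID = d.OrderID
GO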
Objective: |
C10 |
Create result sets that provide summary data. Query types include TOP n PERCENT and GROUP BY, specifically HAVING, CUBE and ROLLUP |
Aggregate Functions
Function | Notes |
AVG | Average |
COUNT | Excludes nulls. Can't be used with uniqueidentifier, text, image or ntext. |
COUNT(*) | Includes nulls, otherwise as above |
MAX | Can be used with (var)char, datetime and numerics (except bit) |
MIN | As above |
SUM | |
STDEV | Standard DEViation |
STDEVP | Standard DEViation for the Population |
VAR | Statistical VARiance |
VARP | Statistical VARiance for the Population |
Group By
Group By ALL fieldList
Group By fieldList WITH ROLLUP
For example:

Code:
Create Table #Test (f1 integer, f2 integer, f3 integer)
Insert Into #Test Values (1,1,1)
Insert Into #Test Values (1,1,2)
Insert Into #Test Values (1,1,3)
Insert Into #Test Values (1,2,1)
Insert Into #Test Values (1,2,2)
Insert Into #Test Values (1,2,3)
Insert Into #Test Values (2,1,1)
Insert Into #Test Values (2,1,1)
Select f1, f2, sum(f3) from #Test group by f1,f2,f3 with rollup order by f1, f2

Output | Output Notes |
NULL NULL 14 | Grand Summary |
1 NULL 12 | Summary record for f1 = 1 |
1 1 1 | |
1 1 2 | |
1 1 3 | |
1 1 6 | Summary record for f1 = 1, f2 = 1 |
1 2 1 | |
1 2 2 | |
1 2 3 | |
1 2 6 | Summary record for f1 = 1, f2 = 2 |
2 NULL 2 | Summary record for f1 = 2 |
2 1 2 | |
2 1 2 | Summary record for f1 = 2, f2 = 1 |
Group By fieldList WITH CUBE
As per "WITH ROLLUP" except all combinations of fields in fieldList are used rather than just removing a field at a time.
Code:
Select f1, f2, sum(f3) from #Test group by f1,f2,f3 with cube order by f1, f2

Output | Output Notes |
NULL NULL 14 | Summary Record for everything |
NULL NULL 4 | Summary Record for something |
NULL NULL 6 | Summary Record for something |
NULL 1 8 | Summary Record for f1 = *, f2 = 1 |
NULL 1 3 | Summary Record for something |
NULL 1 2 | Summary Record for something |
NULL 1 3 | Summary Record for something |
NULL 2 1 | Summary Record for something |
NULL 2 2 | Summary Record for something |
NULL 2 3 | Summary Record for something |
1 NULL 6 | Summary Record for something |
1 NULL 4 | Summary Record for something |
1 NULL 12 | Summary Record for something |
1 1 1 | |
1 1 2 | |
1 1 6 | |
1 2 1 | |
1 2 2 | |
1 2 3 | |
1 2 6 | |
2 NULL 2 | Summary Record for something |
2 NULL 2 | Summary Record for something |
2 1 2 | |
2 1 2 | |
Code:
Select f1, f2, sum(f3) from #Test group by f1,f2 with cube order by f1, f2
(f3 removed from grouping)

Output | Output Notes |
NULL NULL 14 | Summary record for everything |
NULL 1 8 | Summary record for f2 = 1 |
NULL 2 6 | Summary record for f2 = 2 |
1 NULL 12 | Summary record for f1 = 1 |
1 1 6 | Aggregated result |
1 2 6 | Aggregated result |
2 NULL 2 | Summary record for f1 = 2 |
2 1 2 | Aggregated result |
GROUPING(fieldName)
The GROUPING function makes WITH ROLLUP and WITH CUBE usable with data that may contain NULLs. If the row is generated as the result of a WITH ROLLUP or WITH CUBE clause the function returns 1, otherwise it returns 0.
Code:
Select grouping(f1), f1, grouping(f2), f2, sum(f3)
from #Test
group by f1,f2 with cube
order by f1, f2

Output | Output Notes |
1 NULL 1 NULL 14 | Summary record for everything |
1 NULL 0 1 8 | Summary record for f2 = 1 |
1 NULL 0 2 6 | Summary record for f2 = 2 |
0 1 1 NULL 12 | Summary record for f1 = 1 |
0 1 0 1 6 | Aggregated result |
0 1 0 2 6 | Aggregated result |
0 2 1 NULL 2 | Summary record for f1 = 2 |
0 2 0 1 2 | Aggregated result |

Normally, you are only interested in whether the record is a summary record or not, in which case the number of columns can be reduced using "|". E.g. SELECT GROUPING(f1) | GROUPING(f2) as IsSummary.
Having
Compute / Compute By
Example:
Select Product, Qty, OrderNo From OrderHistory Order By Product, OrderNo
Compute Sum(Qty)
> Product Qty OrderNo
> --------------------------------
> beans 10 1
> beans 5 1
> oranges 7 1
>
> sum
> ----
> 22
Top n [PERCENT] [WITH TIES]
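For example (WITH TIES requires an ORDER BY, and extends the result to include any rows that tie with the last row returned):

SELECT TOP 10 PERCENT WITH TIES ProductName, UnitPrice
FROM Products
ORDER BY UnitPrice DESC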
Objective: |
|
Configure session-level options |
SQL default is "not null", ANSI default is allow nulls.
SET ANSI_NULLS controls interpretation of null values. When ON, expr = NULL and expr <> NULL always returns false. ANSI_NULLS is ON by default. (Note: This handling of nulls is not the same as VFP which handles NULLs correctly, i.e. expr = NULL and expr = NULL returns NULL)
xxC2L3
Objective: Access data from static or dynamic sources by using remote stored procedures, linked servers and openrowset:
|
MSDTC (Microsoft Distributed Transaction Coordinator)
Replication
Distributed Queries
OPENROWSET
providerName | datasource | Description |
SQLOLEDB | SQL Server Network Name | SQL Server 7.0 |
Microsoft.Jet.OLEDB.4.0 | Full path and filename of an Access database | Access |
MSDAORA | SQL*Net alias | MS provided OLE DB for Oracle |
MSDASQL | System DSN or ODBC connection string | Any ODBC datasource |
Examples:
SELECT SomeRemoteTable.*
FROM OPENROWSET('SQLOLEDB', 'SPIKE_NT';'sa';'',
'select productid from northwind.dbo.products') as SomeRemoteTable
SELECT SomeRemoteTable.*
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'C:\Program Files\Microsoft Office\Office10\Samples\Northwind.mdb';'Admin';'',
Orders) AS SomeRemoteTable
Linked Servers
sp_addlinkedserver @server='NameToCallLinkedServerWhenCreated',
@srvproduct = 'OLE DB Product Name',
@provider = 'OLE DB Provider Name (as per OPENROWSET)',
@datasrc = 'Data Source Name (as per OPENROWSET)',
@location = '
@provider_string = '
@catalog = 'Catalogue (SQL Database Name)'
Most parameters are optional, as is the "@xxx=" portion of commands (if dropped, position must be maintained). When connecting to SQL Server, only the server & srvproduct names are required.
Examples:
sp_addlinkedserver 'LONDON', 'SVR_LDN_02'
sp_addlinkedserver 'test', 'Microsoft Access', 'Microsoft.Jet.OLEDB.4.0',
'C:\Program Files\Microsoft Office\Office10\Samples\Northwind.mdb'
sp_addlinkedsrvlogin @rmtsrvname = 'LinkedServerName',
@useself = 'TRUE|FALSE',
-- TRUE = use the current login's own credentials, FALSE = use the userId/pwd supplied in this command
@locallogin = 'LocalUserName',
-- If NULL, the userId/pwd supplied will be used by all users, otherwise just the specified user
@rmtuser = 'UserName',
@rmtpassword = 'Password'
Example:
sp_addlinkedsrvlogin 'test',
'false', NULL, 'Admin', ''
Objective: |
C2L3 |
Create and manage files, file groups and transaction logs that define a database |
There are 3 types of database files:
Sizes:
Misc:
Creating Databases
Note that just about everything is optional. For example, "CREATE DATABASE naff" creates (on my test system) a 0.75MB data file called naff.mdf and a 0.25MB log file called naff_log.ldf.
Deleting Databases
<--- Wrong section?
Viewing Database Information
xx Need V7 for GUI instructions
sp_helpdb [database] | Summary information (one record per database). Includes database name, size, owner, id, creation date, status & compatibility level. If database is specified only one of the above summary lines is produced, followed by a summary line per (data/log) file in database. Note that the size figure returned is a combined figure for data and log. |
sp_spaceused [objectname] | Returns summary information on space used. Can be restricted to an object such as a table. |
Database Options
Syntax:
sp_dboption ['database' [, 'option_name' [, 'value']]]
Where:
option_name | |
dbo use only | Only the database owner can use the database |
read only | Typically used for decision support databases |
select into / bulkcopy | Allows non-logged operations during bcp and SELECT INTOs |
single user | Only one user can use the database at a time |
trunc. log on chkpt. | Truncates log every time the checkpoint process occurs. If set, full backups are required to ensure recovery. Note that setting is as shown (abbreviations and periods included) |
autoshrink | Determines if the database size is automatically shrunk |
When value is "on/off" it's true/false rather than 1/0, e.g. sp_dboption 'Northwind', 'trunc. log on chkpt.', true
---->
ALTER DATABASE
Alters a database file rather than properties of the database itself.
ALTER DATABASE database command
where:
database is the name of the database the file being modified is associated (or will be associated) with.
command is one (and only one per ALTER DATABASE statement) of:
- ADD FILE (filespec) [TO FILEGROUP filegroup] [FOR RESTORE]
Adds a database file to the database. xx doc filegroup and FOR RESTORE
- ADD LOG FILE (filespec)
- REMOVE FILE logicalFilename
- MODIFY FILE (filespec)
- CREATE FILEGROUP filegroupName
- DROP FILEGROUP filegroupName
Note that for files it's "REMOVE" but for groups it's "DROP".
filespec is always enclosed in round brackets and has the syntax:
- (NAME = 'logicalFilename' [, FILENAME = 'physicalFilename'] [, SIZE = size] [, MAXSIZE = {size | UNLIMITED}] [, FILEGROWTH = growth])
Note that SIZE is really "minimum size" and that this figure cannot be decreased by ALTER DATABASE (or DBCC SHRINKFILE). Unlike the other size parameters, to change an existing file's MAXSIZE to unlimited use the keyword UNLIMITED rather than omitting the parameter.
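For example, adding a second data file (names, path and sizes illustrative):

ALTER DATABASE Sales
ADD FILE
    (NAME = 'Sales_Data2',
     FILENAME = 'C:\MSSQL7\DATA\Sales_Data2.ndf',
     SIZE = 10MB,
     MAXSIZE = UNLIMITED,
     FILEGROWTH = 5MB)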
Log File Performance
1. Use Performance Monitor:
Log Bytes Per Flush | Size of log buffer when it was last flushed |
Log Flushes/sec | Number of times (per second) the log was flushed |
Log Flush Wait Time | Time (in milliseconds) between flushes |
Log Flush Waits/sec | Number of commits (per second) waiting for the log to flush |
2. Watch out for instructions or processes that:
- Perform (logged) bulk loads
- Modify large amounts of data (especially data that is indexed or involves text and image fields (e.g. WRITETEXT & UPDATETEXT))
3. You can quickly reduce the amount of space used in the log file by backing up or truncating the log, but you cannot force the log file to be shrunk there and then (it is a deferred operation).
DBCC SHRINKFILE (fileNameOrID [, targetSize] [, {EMPTYFILE | NOTRUNCATE | TRUNCATEONLY}])
DBCC SHRINKDATABASE ('databaseName' [, targetFreeSpace%] [, {NOTRUNCATE | TRUNCATEONLY}])
Filegroups
sp_helpfile ['logicalFileName'] | Returns information on the named data file or all data files (in the current database) |
sp_helpfilegroup ['fileGroup'] | Returns information on the named filegroup or all filegroups (in the current database) |
<---- Wrong place?
Database Catalog
Metadata contained in each database containing information about that database. Even master contains a database catalog
System Catalog
Metadata contained in master that is about or is for all databases. All system tables are prefixed with "sys".
To ensure future compatibility (and SQL 92 compatibility) access the system catalog tables through the information schema views only:
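For example, listing user tables through the views rather than reading sysobjects directly:

```sql
-- SQL-92 style catalog access; scoped to the current database
SELECT TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_NAME
```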
---->
Objective: |
C2L3, C4L1, C4L2, C5L1, C5L2 |
Create tables that enforce data integrity and referential integrity.
|
Pre-defined Data Types
Name | Description | ANSI equivalent |
char(length) | Fixed length string of up to 8,000 characters (requiring 8,000 bytes) | character(length) |
varchar(length) | Variable length string of up to 8,000 characters (requiring 8,000 bytes) | character varying(length) |
nchar(length) | Fixed length string of up to 4,000 characters (requiring 8,000 bytes) | national character |
nvarchar(length) | Variable length string of up to 4,000 characters (requiring 8,000 bytes) | national character varying |
smalldatetime | 2 x 2 bytes. The first block of 2 bytes holds the number of days since 1/1/1900 and the second block the number of minutes past midnight. That gives a range of 1/1/1900 to 6/6/2079 with minute resolution. | NONE |
datetime | 2 x 4 bytes. The first block of 4 bytes holds the number of days since 1/1/1900 and the second block of 4 bytes the number of 1/300-second clock ticks since midnight (i.e. 3.33ms resolution). Range is 1/1/1753 to 31/12/9999. | NONE |
tinyint | 0-255 in 1 byte | NONE |
smallint | -32768 to 32767 in 2 bytes | NONE |
int | 4 bytes | integer |
decimal(p, s) | Exact numeric in 5 to 17 bytes | dec |
numeric(p, s) | Synonym for decimal | NONE |
float(n) | Approximate numeric in 4 to 8 bytes | double precision, float(n) |
real | Approximate numeric in 4 bytes | float(n) |
money | 8 bytes with 4 decimal places | |
smallmoney | An int with 4 decimal places | |
uniqueidentifier | 128-bit GUID (16 bytes) | NONE |
binary(length) | Binary data (up to 8,000 bytes) | NONE |
varbinary(length) | Binary data (up to 8,000 bytes) | binary varying(length) |
text | 0-2GB(-2bytes) of text. 16 byte pointer in row | NONE |
ntext | 0-2GB(-2bytes) of Unicode text. 16 byte pointer in row | national text |
image | 0-2GB(-2bytes) of binary data. 16 byte pointer in row | NONE |
bit | 1 or 0. Bit columns are packed, so up to 8 bit columns in a table share a single byte | NONE |
cursor | Reference to a cursor. Can only be used for variables and stored procedure parameters, not for table columns | NONE |
sysname | System-supplied type equivalent to nvarchar(128). Used for object names | NONE |
timestamp | 8-byte value, unique within the database, automatically updated whenever the row is inserted or updated. Nothing to do with dates or times | NONE |
Note: (length) is actually optional. For example, in the case of char it defaults to 1 character
User-defined Data Types
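A minimal sketch of creating and using a user-defined data type (the type and table names are invented):

```sql
-- A user-defined type is an alias for a system type plus nullability
EXEC sp_addtype 'phone_number', 'varchar(20)', 'NOT NULL'

-- It can then be used like any pre-defined type
CREATE TABLE Contacts (id int NOT NULL, phone phone_number)

-- And removed again once nothing references it
-- EXEC sp_droptype 'phone_number'
```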
Tables
Integrities (RUDE)
plus:
Data (All of the above as one group)
Integrity can be enforced declaratively (by constraints and field definitions) and
procedurally (with defaults, rules, triggers and stored procedures).
Constraints
There is no "ALTER TABLE... ALTER CONSTRAINT...". You have to drop the old constraint then add the new one.
Examples above are restricted to ALTER TABLE for clarity. They can be used in other statements, such as CREATE TABLE:
CREATE TABLE myTable (id int NOT NULL, name char(50) NOT NULL, dob datetime, CONSTRAINT pk PRIMARY KEY(id),
CONSTRAINT chk CHECK(name IN ('Dave', 'John')), CONSTRAINT dob CHECK (dob <= GetDate()))
When used in a CREATE TABLE statement the CONSTRAINT clause can either follow on from the field definition or be listed separately (in which case it needs to be comma delimited to prevent accidental connection to the last field). For example:
CREATE TABLE myTable (id int NOT NULL CONSTRAINT pk PRIMARY KEY, name char(50) NOT NULL CONSTRAINT chk
CHECK(name<>'')) is legal and:
CREATE TABLE myTable (id int NOT NULL, name char(50) NOT NULL, CONSTRAINT pk PRIMARY KEY(id), CONSTRAINT chk
CHECK(name<>'')) is legal (and achieves exactly the same results), but:
CREATE TABLE myTable(id int NOT NULL, name char(50) CONSTRAINT pk PRIMARY KEY(id) CONSTRAINT chk CHECK(name<>'')) has two errors (firstly the pk constraint is trying to place a constraint on the name field based on the id field and secondly the two constraints need to be separated with a comma)
For performance purposes, checking of existing data can be skipped when adding a new constraint using the WITH NOCHECK option. WITH NOCHECK can only be used with CHECK and FOREIGN KEY. WITH NOCHECK should be placed in the command before the constraint definition.
Constraints can be turned off (for example, before a big data upload) by ALTER TABLE table NOCHECK CONSTRAINT constraintName. They can be re-enabled later using the ALTER TABLE table CHECK CONSTRAINT constraintName command.
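A sketch combining WITH NOCHECK and constraint disabling (table, column and constraint names are invented):

```sql
-- Add a foreign key without checking the rows already in the table
ALTER TABLE Orders WITH NOCHECK
ADD CONSTRAINT fk_cust FOREIGN KEY (custID) REFERENCES Customers (custID)

-- Turn the constraint off for a big data upload...
ALTER TABLE Orders NOCHECK CONSTRAINT fk_cust
-- ...load data here...
-- ...then turn it back on
ALTER TABLE Orders CHECK CONSTRAINT fk_cust
```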
Defaults
Rules
Objective: |
C6ALL |
Create and maintain indexes:
|
Physical Storage of Data
Tables that have a clustered index store the data in key order. Pages are stored in a doubly-linked list (i.e. each page has a reference to the page before and the page after). By altering the links of existing pages, new pages can be inserted into the sequence. Such tables always have associated index pages because of the clustered index.
Data pages for tables that do not have a clustered index are stored in heaps. Data is stored in no particular order and new pages are just added to the heap. Heaps only have index pages if indexes are specifically added to the heap.
When a page is full and SQL is asked to add more data to it, it creates a new page and moves approximately half the records from the old page to the new page. This process is called a page split. Page splitting occurs for both data and index pages.
Table Scan
A table-scan is a data location method that simply starts at the first data page in a table and works through the pages in sequence. Every row of every page is examined and those that meet the search criteria are extracted. Table scans are efficient when dealing with small tables or when a large proportion of the table is to be returned.
Indexed Access
Indexed access navigates the index tree to locate the index entries for the records to be returned. The corresponding rows from data pages are then read and returned. Indexed access is efficient when dealing with single rows or small percentages of the entire table. If a clustered index exists on the table then all further (non-clustered) indexes use the clustering key, rather than a physical row identifier, as their row locator (i.e. they are based on that clustered index).
Query Optimiser
Is the component responsible for determining whether to use a table scan or indexed access. The decision to use Indexed Access depends on more than whether the corresponding index exists or not.
Composite Indexes
Are Indexes Worthwhile?
Indexes generally accelerate queries that join tables, sort data or group data. Indexes are required to support column uniqueness. However, indexes also consume disk space, incur overheads and require automatic updating when modifying indexed data. In fact, on really small tables, it can take longer to read the index and locate the record (Indexed Access) than to read the entire table (Table Scan).
In short:
Miscellaneous Information about Indexes
Index Creation
Index Destruction
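The two headers above can be sketched as follows (table, column and index names are invented):

```sql
-- Unique clustered index; FILLFACTOR = 80 leaves 20% free space per leaf page
CREATE UNIQUE CLUSTERED INDEX idx_id
ON myTable (id)
WITH FILLFACTOR = 80

-- Non-clustered composite index
CREATE INDEX idx_name_dob ON myTable (name, dob)

-- Destruction: note the table.index naming
DROP INDEX myTable.idx_name_dob
```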
Objective: |
C13L1 |
Populate the database with data from an external data source. Methods include bulk copy program and Data Transformation Services (DTS) |
BCP
bcp {[[database_name.][owner].]{table_name | view_name} | "query"}
    {in | out | queryout | format} data_file
    [-m max_errors] [-f format_file] [-e err_file]
    [-F first_row] [-L last_row] [-b batch_size]
    [-n] [-c] [-w] [-N] [-V (60 | 65 | 70)] [-6]
    [-q] [-C code_page] [-t field_term] [-r row_term]
    [-i input_file] [-o output_file] [-a packet_size]
    [-S server_name[\instance_name]] [-U login_id] [-P password]
    [-T] [-v] [-R] [-k] [-E] [-h "hint [,...n]"]
-m | Max errors before bcp is terminated. Defaults to 10 |
-f | Format file |
-e | Error file where more detailed error information is logged |
-F | Row number in source file to start from |
-L | Row number in source file to stop after |
-b | Batch size (in rows). If not specified, all rows are read and committed in a single transaction |
-n | Use native datatypes ?? No idea |
-c | Use character datatype. \t is used for field delimiting, \n for row terminator |
-w | Use UNICODE datatype. \t is used for field delimiting, \n for row terminator |
-N | Use native datatypes, except for char where UNICODE will be used instead. ?? No idea |
-V x | Use datatypes from a particular version of SQL Server (x = 60, 65 or 70). ?? Used with dates / ODBC ?? |
-6 | Equivalent to -V 60 / -V 65 |
-q | Equivalent to SET QUOTED_IDENTIFIERS ON |
-C x | Specify code page (obsolete. Use -f and specify in format file instead). x = ACP, OEM, RAW or code page number |
-t | Specify field delimiter |
-r | Specify row delimiter |
-i | File containing responses to prompts from bcp |
-o | File to contain output from bcp |
-a | Packet size. Default is 4096 |
-S | Server name |
-U | User name |
-P | Password |
-T | Use trusted connection |
-v | Display version info for bcp |
-R | Use PC's regional settings when interpreting data |
-k | Retain NULLs and don't use any default values |
-E | Use values in import file for identity columns rather than SQL Server automatically generating them. ?? SQL2K only ?? |
-h | Import hints: ORDER(...), ROWS_PER_BATCH = x, KILOBYTES_PER_BATCH = x, TABLOCK, CHECK_CONSTRAINTS, FIRE_TRIGGERS |
Non-logged bulk copying can only occur when:
- The select into/bulkcopy database option is set
- The target table has no indexes
- The target table is not marked for replication
SQL7 BCP uses ODBC (previous versions used DB-Library)
DTS
Objective: |
C7L3 |
Implement full-text search |
Overview
The following stored procedures are used to configure full-text indexing:
Name | Description |
sp_fulltext_database | Create or remove full-text catalogues for a database |
sp_fulltext_catalog | Create or remove a full-text catalogue; start / stop indexing of a catalogue |
sp_fulltext_table | Add or remove a table from full-text indexing |
sp_fulltext_column | Add or remove a column from a table for full-text indexing |
sp_fulltext_service | Control the properties and behaviour of the MS Search service |
The following stored procedures are used to examine full-text indexing:
Name | Description |
sp_help_fulltext_catalogs | Returns info on a catalogue |
sp_help_fulltext_tables | Lists tables involved in full-text indexing |
sp_help_fulltext_columns | Lists columns involved in full-text indexing |
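A sketch of wiring up full-text indexing with the configuration procedures above (catalogue, table, column and key names are invented):

```sql
-- Enable full-text support for the current database
EXEC sp_fulltext_database 'enable'

-- Create a catalogue and register a table and column with it
EXEC sp_fulltext_catalog 'myCatalog', 'create'
EXEC sp_fulltext_table 'Products', 'create', 'myCatalog', 'pk_Products'
EXEC sp_fulltext_column 'Products', 'Description', 'add'

-- Activate the table and start a full population of the catalogue
EXEC sp_fulltext_table 'Products', 'activate'
EXEC sp_fulltext_catalog 'myCatalog', 'start_full'
```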
Full-text Queries
SearchExpr | Description |
"Phrase" | Phrases can be searched for by enclosing the text in quotes. e.g. WHERE CONTAINS(*, '"A fist full of dollars"') Phrase searches are case insensitive and noise words and punctuation are ignored. For example, the above will match "A fist? Full of dollars!". |
"Phrase*" | Phrases beginning with Phrase followed by zero or more additional characters. Each word of Phrase is considered a separate prefix so "fist full*" really means "fist* full*". Example: WHERE CONTAINS(*, '"fi*"') |
FORMSOF(INFLECTIONAL, "Phrase") | Matches plurals and different forms of verbs. |
AND, AND NOT, OR | As you would expect. Note that AND is evaluated before OR and NOT can only be included as part of AND NOT (i.e. it is impossible to say CONTAINS(*, 'NOT "Dave"')). |
"Phrase1" NEAR "Phrase2" "Phrase1" ~ "Phrase2" "Phrase1" NEAR "Phrase2" NEAR "Phrase3"... | Returns a higher score the nearer each term is to the next term. |
ISABOUT(term WEIGHT(value)) | Where term is any of the above and WEIGHT is the associated weighting. For example: CONTAINS(Description, 'ISABOUT (good WEIGHT (.8), bad WEIGHT (.4), ugly WEIGHT (.2))') |
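A few example queries using the expressions above (table and column names are invented):

```sql
-- Phrase search: case insensitive, noise words and punctuation ignored
SELECT ProductName FROM Products
WHERE CONTAINS(Description, '"mountain bike"')

-- Prefix plus inflectional forms, combined with AND
SELECT ProductName FROM Products
WHERE CONTAINS(Description, '"light*" AND FORMSOF(INFLECTIONAL, "ride")')

-- FREETEXT matches on meaning rather than exact wording
SELECT ProductName FROM Products
WHERE FREETEXT(Description, 'comfortable touring bicycle')
```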
Objective: |
C7L2 |
Evaluate and optimise the performance of an execution plan by using DBCC SHOWCONTIG, SHOWPLAN_TEXT, SHOWPLAN_ALL and UPDATE STATISTICS |
Data Fragmentation & Rebuilding indexes
As page splits occur, data and index pages no longer appear sequentially on disk. Because not all pages are updated equally, the amount of free space per page (the effective FILLFACTOR) rapidly comes to differ between pages. This increases the number of reads required to complete a query. This is called data fragmentation.
The extent of data fragmentation can be seen by issuing the DBCC SHOWCONTIG([tableID [, indexID]]) statement. Example output from the Northwind database using "declare @objid int / set @objid = OBJECT_ID('Order Details') / dbcc showcontig(@objid)" (because "dbcc showcontig(OBJECT_ID('Order Details'))" doesn't work and dbcc showcontig('Order Details') is SQL2K only):
DBCC SHOWCONTIG scanning 'Order Details' table...
Table: 'Order Details' (325576198); index ID: 1, database ID: 6
TABLE level scan performed.
- Pages Scanned................................: 9
- Extents Scanned..............................: 6
- Extent Switches..............................: 5
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 33.33% [2:6]
- Logical Scan Fragmentation ..................: 11.11%
- Extent Scan Fragmentation ...................: 16.67%
- Avg. Bytes Free per Page.....................: 673.2
- Avg. Page Density (full).....................: 91.68%
Where:
- Pages Scanned: Number of pages in the table or index
- Extents Scanned: Number of extents in the table or index
- Extent Switches: Number of times DBCC had to change extent while following page links through the table or index
- Avg. Pages per Extent: Number of pages per extent
- Scan Density [Best Count:Actual Count]: Best Count / Actual Count. A figure of 100% indicates no fragmentation and minimum extent usage. Best Count is the number of extent switches that would occur if the data was not fragmented and only the minimum possible number of extents had been created. Actual Count is the number of extent switches that actually occurred (?? why isn't this the same as the figure for Extent Switches ??)
- Logical Scan Fragmentation: Percentage of pages that are not in sequential order
- Extent Scan Fragmentation: Percentage of extents that are not in sequential order
- Avg. Bytes Free per Page: Free space per page. Note that a large record size can make this figure large because the remaining space is insufficient to store a whole record
- Avg. Page Density (full): Percentage full. This is based on actual row size
- Index ID: If no index ID is supplied in the DBCC statement, 0 if the table does not have a clustered index (i.e. a heap), 1 if it does
Rebuilding indexes reorganises the pages. If it's a clustered index, this includes the data pages. Pages and records are reallocated such that the required FILLFACTOR is restored.
Indexes can be rebuilt by:
- Dropping and re-creating them
- CREATE INDEX ... WITH DROP_EXISTING
- DBCC DBREINDEX
System indexes (i.e. those on system tables) cannot be rebuilt.
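A sketch of the in-place rebuild methods (table, index names and the FILLFACTOR are invented):

```sql
-- Rebuild one index, restoring a FILLFACTOR of 80
DBCC DBREINDEX ('myTable', 'idx_name_dob', 80)

-- Or rebuild via CREATE INDEX without having to drop the index first
CREATE INDEX idx_name_dob ON myTable (name, dob)
WITH DROP_EXISTING, FILLFACTOR = 80
```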
Index Tuning
Users have to be in the sysadmin role to use the Index Tuning Wizard.
The Index Tuner can produce the following report:
Statistics
CREATE STATISTICS statsName ON table (field1 [, field2 [...]])
[WITH [FULLSCAN | SAMPLE sampleSize {PERCENT | ROWS}] [,] [NORECOMPUTE]]
where
If neither FULLSCAN nor SAMPLE is included, SQL decides for itself how many rows to use.
DROP STATISTICS table.statsName removes statistics that you have finished with.
To update statistics either use the:
UPDATE STATISTICS table
[index | (statsName [, statsName2 [, ...]])]
[WITH [FULLSCAN | SAMPLE sampleSize {PERCENT | ROWS},]
[ALL | COLUMNS | INDEX,]
[NORECOMPUTE]]
After populating a table which had indexes prior to being populated, it is recommended to manually update that table's statistics.
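A sketch of the statistics statements (table, column and statistics names are invented):

```sql
-- Column statistics on an unindexed column, sampled from half the rows
CREATE STATISTICS stats_dob ON myTable (dob) WITH SAMPLE 50 PERCENT

-- Refresh everything on the table after a bulk load
UPDATE STATISTICS myTable WITH FULLSCAN

-- Tidy up; note the table.statsName naming
DROP STATISTICS myTable.stats_dob
```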
Objective: |
C2L3 & C7L2 |
Evaluate and optimise the performance of query execution plans |
PROCE:
Ad hoc / uncached queries follow PROCE. Cached queries PROCE only the first time; thereafter SQL skips PRO entirely, reads the compiled plan from the procedure cache and performs the Execute stage as before.
Query Plans
Form the PROC of PROCE. Defines access method and order (i.e. which tables first). Query plans can be seen graphically in Query Analyzer (options under Query menu) or programmatically using the following SET options (these can only be used in batch commands, i.e. not via ODBC or in Stored Procedures, and only one can be active at a time) :
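The SHOWPLAN options named in the objective can be sketched as follows (the query is invented):

```sql
-- Return the estimated plan as indented text; the query itself is NOT executed
SET SHOWPLAN_TEXT ON
GO
SELECT * FROM Orders WHERE OrderDate > '19980101'
GO
SET SHOWPLAN_TEXT OFF
GO

-- SHOWPLAN_ALL works the same way but adds cost estimate columns to the output
```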
Caching
Compiled queries are stored in the procedure cache. Memory in the cache is freed based on least usage and compilation cost. SQL uses three forms of caching:
Optimiser Hints
Optimiser hints allow overriding of specific decisions normally made by the query optimiser. For example:
SELECT ... FROM ... WITH (INDEX(indexNumberOrName))
tells SQL to use the nominated index. (Specifying indexNumber as 0 tells SQL to use a table scan. If a clustered index is present specifying 1 tells SQL to use a clustered index scan. Specifying 1 if no clustered index is present will generate an error).
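For example (table, column and index names are invented):

```sql
-- Force the nominated index
SELECT * FROM myTable WITH (INDEX(idx_name_dob)) WHERE name = 'Dave'

-- Index 0 forces a table scan; index 1 (if a clustered index exists)
-- forces a clustered index scan
SELECT * FROM myTable WITH (INDEX(0)) WHERE name = 'Dave'
```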
Objective: |
C12L2 |
Diagnose and resolve locking problems |
xx
Objective: |
C14L3 |
Identify SQL Server events and performance problems by using SQL Server Profiler |
SQL Server Performance Monitor
SQL Profiler
Transact-SQL supports the entry level of ANSI SQL-92 (plus extensions)
Division of labour:
MSSQLServer | SQL Server Agent | MS Distributed Transaction Coordinator |
|
|
Distributed Transaction Management |
Command Prompt tools:
osql | Executes commands via ODBC (for example, from a batch file). Replaces isql which executes commands via DB-Library. |
bcp | Import & export functions |
Net Libraries:
Not all Net Libraries are equal. For example, the Multiprotocol Net-Library supports data encryption but not server name enumeration.
APIs:
SQL Server supports two APIs natively: OLE DB (COM based) & ODBC (Call Level Interface based)
ADO is the wrapper for OLE DB, RDO is the wrapper for ODBC. As far as MS is concerned ADO is "in", RDO is "old hat".
SQL-DMO (Distributed Management Objects) is a COM wrapper for administrative Transact-SQL statements. SQL's graphical admin tools use SQL-DMO internally.
ODBC Connections
ODBC connections with the following options set:
* Equivalent to setting on: ANSI_NULLS, ANSI_NULL_DFLT_ON, ANSI_PADDING, ANSI_WARNINGS, CURSOR_CLOSE_ON_COMMIT, IMPLICIT_TRANSACTIONS, QUOTED_IDENTIFIER. Yes, there is OFF overlap.
Security:
Two layers:
Roles:
Query Analyser:
Master Databases
Statement Types:
By default, members of the sysadmin, dbcreator, and db_owner roles can issue DDL and DML statements. Additionally, members of the db_securityadmin role can also issue DCL statements, members of the db_ddladmin role can also issue DDL statements and members of the db_datareader and / or db_datawriter roles can issue DML statements.
Miscellaneous Useful Stored Procedures
sp_dbcmptlevel database [level] | Database compatibility level. First parameter is a database name. If no 2nd parameter, returns the DB version. With a 2nd parameter specified, sets the DB version. Different DBs can have different compatibility levels. The level parameter does not contain a decimal point (e.g. for SQL 7 compatibility it would be 70 not 7.0, for SQL2000 80 not 8.0 etc.) |
sp_who [spid] | Returns information about current server activity. If spid is specified only details for that process id is returned |
Miscellaneous Useful Settings
NOCOUNT ON | OFF | While on, prevents the "n rows affected" message being returned after each statement. @@ROWCOUNT is still updated. |
<---- Move as some of this is in the exam
Capacities
Maximum data file size | 32 terabytes |
Maximum log file size | 4 terabytes |
Maximum number of tables per database | 2 billion |
Maximum number of columns per table | 1024 |
Maximum bytes per row | 8060 |
Maximum number of tables in a query | 256 |
Maximum size of a stored procedure | 128K |
Stored procedure recursion level | 32 |
Maximum number of parameters in a stored procedure | 1024 |
Maximum size (in bytes)
of the fields used in an order by clause |
8060 |
Maximum length (in characters) of a column alias | 128 |
---->
Presentation Layer / Services formats and presents data to users and provides an interface for data entry. For example, as a layer between a data server and an internet browser client.
Business Layer / Services implements business rules such as data validation and calculations.
Data Layer / Services implements data storage, retrieval, security and integrity.
OLTP On-Line Transaction Processing. High change, low query systems.
DSS Decision Support Services. Low change, high query systems.
Positioned Update Jargon for when a change to a row in a cursor is reflected in the base table(s)
Atomicity Units of code that are indivisible. For example, a transaction that debits an amount from one bank account and credits the same amount to another.
Niladic functions Functions that take no arguments (e.g. CURRENT_TIMESTAMP, USER). Used, for example, in CREATE TABLE's DEFAULT clause to supply values when none is given on insert
Heterogeneous Query A query that accesses data from more than one server or data source (e.g. via linked servers)
DCL Data Control Language
Batch A set of SQL commands issues and executed as a single group. Batches are separated by the GO command.
Script A collection of batches.