Thursday, December 17, 2009

What's New in SQL Server 2008 XML


Microsoft introduced XML-related capabilities in Microsoft SQL Server 2000 with the FOR XML and OPENXML Transact-SQL keywords, which enabled developers to write Transact-SQL code to retrieve a query result as a stream of XML, and to shred an XML document into a rowset. These XML capabilities were extended significantly in SQL Server 2005 with the introduction of a native xml data type that supports XSD schema validation, XQuery-based operations, and XML indexing. SQL Server 2008 builds on the XML capabilities of previous releases and provides enhancements to meet the challenges that customers have faced when storing and manipulating XML data in the database.
The Evolution of SQL Server XML Capabilities
The XML features of SQL Server have evolved with each version of SQL Server since SQL Server 2000. Before we examine the enhancements in SQL Server 2008, it might be useful to chart the evolution of XML functionality through the previous versions.
XML Functionality in SQL Server 2000
In SQL Server 2000, Microsoft introduced the FOR XML and OPENXML Transact-SQL keywords. FOR XML is an extension to the SELECT statement that returns the query results as a stream of XML as shown in the following example.
SELECT ProductID, ProductName
FROM Products Product
FOR XML AUTO
This query returns an XML fragment like the following example.
<Product ProductID="1" ProductName="Widget"/>
<Product ProductID="2" ProductName="Sprocket"/>
The OPENXML function performs the opposite function to the FOR XML clause by creating a rowset from an XML document, as shown in the following example.
DECLARE @doc nvarchar(1000)
SET @doc = '<Order OrderID="1011">
<Item ProductID="1" Quantity="2"/>
<Item ProductID="2" Quantity="1"/>
</Order>'
DECLARE @xmlDoc integer
EXEC sp_xml_preparedocument @xmlDoc OUTPUT, @doc
SELECT * FROM
OPENXML (@xmlDoc, 'Order/Item', 1)
WITH
(OrderID integer '../@OrderID',
ProductID integer,
Quantity integer)
EXEC sp_xml_removedocument @xmlDoc

Note the use of the sp_xml_preparedocument and sp_xml_removedocument stored procedures to create an in-memory representation of the node tree for the XML document. This Transact-SQL code returns the following rowset.

OrderID ProductID Quantity
1011 1 2
1011 2 1
XML Functionality in SQL Server 2005
In SQL Server 2005, the FOR XML feature was enhanced with new options for root elements and element names, the ability to nest FOR XML calls so you can build complex hierarchies, and a new PATH mode that enables you to define the structure of the XML to be retrieved by using XPath syntax, as shown in the following example.
SELECT ProductID AS '@ProductID',
ProductName AS 'ProductName'
FROM Products
FOR XML PATH ('Product'), ROOT ('Products')

This query returns the following XML.
<Products>
<Product ProductID="1">
<ProductName>Widget</ProductName>
</Product>
<Product ProductID="2">
<ProductName>Sprocket</ProductName>
</Product>
</Products>

In addition to enhancing the existing XML features that had been introduced in SQL Server 2000, SQL Server 2005 added a new, native xml data type that enables you to create variables and columns for XML data, as shown in the following example.
CREATE TABLE SalesOrders
(OrderID integer PRIMARY KEY,
OrderDate datetime,
CustomerID integer,
OrderNotes xml)

You can use the xml data type to store markup documents or semi-structured data in the database. Columns and variables can be used for untyped XML or typed XML, the latter of which is validated against an XML Schema Definition (XSD) schema. To define the schemas for data validation, developers can use the CREATE XML SCHEMA COLLECTION statement, as shown in the following example.
CREATE XML SCHEMA COLLECTION ProductSchema AS
'<?xml version="1.0" encoding="UTF-16"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<!-- schema declarations go here -->
</xs:schema>'

After creating a schema collection, you can associate an xml variable or column with the schema declarations it contains by referencing the schema collection as shown in the following example.
CREATE TABLE SalesOrders
(OrderID integer PRIMARY KEY,
OrderDate datetime,
CustomerID integer,
OrderNotes xml(ProductSchema))

Typed XML is validated against the declarations in the associated schema collection when values are inserted or updated, which makes it possible to enforce business rules about the structure of XML data for compliance or compatibility reasons.
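For example (a hypothetical sketch, since the body of ProductSchema is elided above): if the schema collection declared a required Product element, an INSERT of conforming XML would succeed while non-conforming XML would fail with a validation error.
-- Hypothetical: assumes ProductSchema declares a <Product> element
INSERT INTO SalesOrders (OrderID, OrderDate, CustomerID, OrderNotes)
VALUES (1, GETDATE(), 101, '<Product ProductID="1"/>')  -- accepted

INSERT INTO SalesOrders (OrderID, OrderDate, CustomerID, OrderNotes)
VALUES (2, GETDATE(), 101, '<Widget/>')  -- rejected: fails XSD validation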
The xml data type also provides a number of methods, which you can use to query and manipulate the XML data in an instance. For example, you can use the query method to query the XML in an instance of the xml data type, as shown in the following example.
declare @x xml
set @x=
'<Invoices>
<Invoice>
<Customer>Kim Abercrombie</Customer>
<Items>
<Item ProductID="2" Price="1.99" Quantity="1" />
<Item ProductID="3" Price="2.99" Quantity="2" />
<Item ProductID="5" Price="1.99" Quantity="1" />
</Items>
</Invoice>
<Invoice>
<Customer>Margaret Smith</Customer>
<Items>
<Item ProductID="2" Price="1.99" Quantity="1"/>
</Items>
</Invoice>
</Invoices>'
SELECT @x.query(
'<CustomerList>
{
for $invoice in /Invoices/Invoice
return $invoice/Customer
}
</CustomerList>')

The query in this example uses an XQuery expression that finds each Invoice element in the document and returns an XML document that contains the Customer element from each Invoice element, as shown in the following example.
<CustomerList>
<Customer>Kim Abercrombie</Customer>
<Customer>Margaret Smith</Customer>
</CustomerList>

Another significant XML-related feature that was introduced in SQL Server 2005 is support for XML indexes. You can create primary and secondary XML indexes for columns of type xml to enhance XML query performance. A primary XML index is a shredded representation of all of the nodes in an XML instance, which the query processor can use to quickly find nodes within an XML value. After you have created a primary XML index, you can create secondary XML indexes to improve the performance of specific types of query. The following example creates a primary XML index, and a secondary XML index of type PATH, which can improve performance of queries that use XPath expressions to identify nodes in an XML instance.
CREATE PRIMARY XML INDEX idx_xml_Notes
ON SalesOrders (OrderNotes)
GO

CREATE XML INDEX idx_xml_Path_Notes
ON SalesOrders (OrderNotes)
USING XML INDEX idx_xml_Notes
FOR PATH
GO
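SQL Server also supports two other secondary XML index types, VALUE (for value-based lookups where the path is not fully specified) and PROPERTY (for queries that retrieve property values per row), which follow the same pattern. For example:
CREATE XML INDEX idx_xml_Value_Notes
ON SalesOrders (OrderNotes)
USING XML INDEX idx_xml_Notes
FOR VALUE
GO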
XML Functionality in SQL Server 2008
The XML functionality that was introduced in SQL Server 2000 and SQL Server 2005 has been enhanced in SQL Server 2008. Key XML-related enhancements in SQL Server 2008 include:
• Improved schema validation capabilities
• Enhancements to XQuery support
• Enhanced functionality for performing XML data manipulation language (DML) insertions
The rest of this post examines these enhancements and demonstrates how you can use them to implement better XML solutions in SQL Server 2008.
XML Schema Validation Enhancements
You can validate XML data by enforcing compliance with one or several XSD schemas. A schema defines the permissible XML elements and attributes for a particular XML data structure, and is often used to ensure that XML documents contain all of the required data elements in the correct structure.
SQL Server 2005 introduced validation of XML data through the use of XML schema collections. The general approach is to create a schema collection that contains the schema rules for your XML data by using the CREATE XML SCHEMA COLLECTION statement, and then to reference the schema collection name when you define an xml column or variable that must conform to the schema rules in the schema collection. SQL Server then validates any data that is inserted or updated in the column or variable against the schema declarations in the schema collection.
XML Schema support in SQL Server 2005 implemented a broad subset of the full XML Schema specification, and covered the most common XML validation scenarios. SQL Server 2008 extends that support to include the following additional schema validation requirements that have been identified by customers:
• Support for lax validation
• Full support for dateTime, time and date validation, including preservation of time zone information
• Improved support for union and list types
Lax Validation Support
XML Schemas support wildcard sections in XML documents through the any, anyAttribute, and anyType declarations. For example, consider the following XML schema declaration.
(xs:complexType name="Order" mixed="true")
(xs:sequence)
(xs:element name="CustomerName"/)
(xs:element name="OrderTotal"/)
(xs:any namespace="##other" processContents="skip"
minOccurs="0" maxOccurs="unbounded"/)
(/xs:sequence)
(/xs:complexType)

This schema declaration defines an XML element named Order, which must contain sub-elements named CustomerName and OrderTotal. Additionally, the element can contain an unlimited number of other elements that belong to a different namespace than the one to which the Order type belongs. The following XML shows an XML document that contains an instance of an Order element as defined by this schema declaration. Note that the order also contains a shp:Delivery element, which is not explicitly defined in the schema.
<Invoice xmlns="http://adventure-works.com/order"
xmlns:shp="http://adventure-works.com/shipping">
<Order>
<CustomerName>Graeme Malcolm</CustomerName>
<OrderTotal>299.99</OrderTotal>
<shp:Delivery>Express</shp:Delivery>
</Order>
</Invoice>

Validation for wildcard sections depends on the processContents attribute for the wildcard section in the schema definition. In SQL Server 2005, schemas can use processContents values of skip and strict for any and anyAttribute declarations. In the previous example, the processContents attribute for the wildcard element has been set to skip, so no attempt to validate the contents of that element is made. Even if the schema collection includes a declaration for the shp:Delivery element (for example, defining a list of valid delivery methods), the element is not validated unless the declaration for the wildcard in the Order element has its processContents attribute set to strict.
SQL Server 2008 adds support for a third validation option. By setting the processContents attribute for a wildcard section to lax, you can enforce validation for any elements that have schema declarations associated with them, but ignore any elements that are not defined in the schema. To continue the previous example, if you set the processContents attribute for the wildcard element declaration in the schema to lax and add a declaration for the shp:Delivery element, the shp:Delivery element in the XML document is validated. However, if instead of the shp:Delivery element, the document includes an element that is not defined in the schema, the element is ignored.
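For example, the wildcard declaration from the earlier schema would change as follows.
<xs:any namespace="##other" processContents="lax"
minOccurs="0" maxOccurs="unbounded"/>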
In addition, the XML Schema specification defines that the anyType declaration has lax processing of its content model. SQL Server 2005 does not support lax processing, so the content is validated strictly instead. SQL Server 2008 does support lax processing of the anyType contents, and so the content is validated correctly.
Full xs:dateTime Support
You can use the dateTime data type in an XML schema to define date and time data. Date and time data is expressed in the format 2007-08-01T09:30:00.000Z, which represents the 1st of August 2007 at 9:30 in the morning in Coordinated Universal Time (UTC), indicated by the Z. Other time zones are represented by the time difference from UTC, so for example you can represent 6:00 in the morning on December 25th 2007 in Pacific Standard Time (which is 8 hours behind UTC) with the value 2007-12-25T06:00:00.000-08:00.
The XML Schema specification defines the time zone component of the dateTime, date and time data types as optional. However, in SQL Server 2005 you must provide a time zone for dateTime, time and date data. Additionally, SQL Server 2005 does not preserve the time zone information for your dateTime or time data, but normalizes it to UTC (so, for example, if your XML contains the value 2007-12-25T06:00:00.000-08:00, SQL Server 2005 normalizes this to 2007-12-25T14:00:00.000Z). In SQL Server 2008, these limitations have been removed, so you can omit the time zone information when you store dateTime, date or time data, and any time zone information that you do provide is preserved.
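A minimal sketch of the difference (DateTimeSchema is a hypothetical schema collection assumed to declare an element of type xs:dateTime):
DECLARE @t xml(DateTimeSchema)  -- hypothetical typed xml variable
SET @t = '<OrderDate>2007-12-25T06:00:00.000-08:00</OrderDate>'
SELECT @t
-- SQL Server 2005: value is normalized to 2007-12-25T14:00:00.000Z
-- SQL Server 2008: the -08:00 offset is preserved as entered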
Union and List Types
You can use XML schemas to define data types for your XML data that allow a limited set of values to be assigned to multi-value elements and attributes. For example, you might define a sizeListType type that restricts the list of possible values that can be assigned to an AvailableSizes element in a product definition to S, M, and L. SQL Server 2005 supports XML schemas that contain these simple type definitions and restrictions. For example, you can use a list type to define the valid sizes for a product as shown in the following example.
(xs:simpleType name="sizeListType")
(xs:list)
(xs:simpleType)
(xs:restriction base="xs:string")
(xs:enumeration value="S"/)
(xs:enumeration value="M"/)
(xs:enumeration value="L"/)
(/xs:restriction)
(/xs:simpleType)
(/xs:list)
(/xs:simpleType)

This schema declaration enables you to create an element that lists all of the sizes in which a product can be purchased as a list of values separated by white space, as shown in the following example:
<AvailableSizes>S M L</AvailableSizes>

However, what if you want to support two different ways to express the size of a product? For example, suppose a cycling equipment retailer sells cycling clothes in small, medium, and large sizes, but also sells bicycles in numerical sizes relating to the frame size (such as 18, 20, 22, and 24). To enable you to accomplish this, SQL Server 2008 adds support for union types that contain list types, which you can use to merge multiple list type definitions and restrictions into a single type. For example, the following Transact-SQL code creates an XML schema collection that defines a productSizeType type in which valid values include a list of numeric sizes (18, 20, 22, and 24) and a list of named sizes (S, M, and L).
CREATE XML SCHEMA COLLECTION CatalogSizeSchema AS
N'<?xml version="1.0" encoding="UTF-16"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:simpleType name="productSizeType">
<xs:union>
<xs:simpleType>
<xs:list>
<xs:simpleType>
<xs:restriction base="xs:integer">
<xs:enumeration value="18"/>
<xs:enumeration value="20"/>
<xs:enumeration value="22"/>
<xs:enumeration value="24"/>
</xs:restriction>
</xs:simpleType>
</xs:list>
</xs:simpleType>
<xs:simpleType>
<xs:list>
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="S"/>
<xs:enumeration value="M"/>
<xs:enumeration value="L"/>
</xs:restriction>
</xs:simpleType>
</xs:list>
</xs:simpleType>
</xs:union>
</xs:simpleType>
</xs:schema>'

With this declaration in the schema, any elements based on the productSizeType can contain either kind of list; so both of the product elements in the following example would be valid instances of the productSizeType data type.
<Catalog>
<Product>
<ProductName>Road Bike</ProductName>
<AvailableSizes>22 24</AvailableSizes>
</Product>
<Product>
<ProductName>Cycling Jersey</ProductName>
<AvailableSizes>S M L</AvailableSizes>
</Product>
</Catalog>

Similarly, SQL Server 2008 supports schema declarations for list types that contain union types.
XQuery Enhancements
SQL Server 2005 introduced the xml data type, which provides a number of methods that you can use to perform operations on the XML data stored in a column or variable. Most of the operations you can perform use XQuery syntax to navigate and manipulate the XML data. The XQuery syntax supported by SQL Server 2005 includes the for, where, order by, and return clauses of the so-called FLWOR expression, which you can use to iterate over the nodes in an XML document and return values.
SQL Server 2008 adds support for the let clause, which is used to assign values to variables in an XQuery expression such as the following example:
declare @x xml
set @x=
'<Invoices>
<Invoice>
<Customer>Kim Abercrombie</Customer>
<Items>
<Item ProductID="2" Price="1.99" Quantity="1" />
<Item ProductID="3" Price="2.99" Quantity="2" />
<Item ProductID="5" Price="1.99" Quantity="1" />
</Items>
</Invoice>
<Invoice>
<Customer>Margaret Smith</Customer>
<Items>
<Item ProductID="2" Price="1.99" Quantity="1"/>
</Items>
</Invoice>
</Invoices>'
SELECT @x.query(
'<Orders>
{
for $invoice in /Invoices/Invoice
let $count := count($invoice/Items/Item)
order by $count
return
<Order>
{$invoice/Customer}
<ItemCount>{$count}</ItemCount>
</Order>
}
</Orders>')

This example returns the following XML.
<Orders>
<Order>
<Customer>Margaret Smith</Customer>
<ItemCount>1</ItemCount>
</Order>
<Order>
<Customer>Kim Abercrombie</Customer>
<ItemCount>3</ItemCount>
</Order>
</Orders>

Note that SQL Server 2008 does not allow the assignment of constructed elements.
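For example, an expression like the following (a hypothetical fragment) would be rejected, because it assigns a constructed element to the variable:
let $e := <ItemCount>3</ItemCount>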
XML DML Enhancements
As well as being able to use XQuery expressions to perform operations on XML data, the xml data type supports the XML DML expressions insert, replace value of, and delete through its modify method. You can use these XML DML expressions to manipulate the XML data in an xml column or variable.
SQL Server 2008 adds support for using an xml variable in an insert expression to insert XML data into an existing XML structure. For example, suppose an xml variable named @productList contains the following XML:
<Products>
<Bike>Mountain Bike</Bike>
<Bike>Road Bike</Bike>
</Products>

You could use the following code to insert a new bike into the product list:
DECLARE @newBike xml
SET @newBike = '<Bike>Racing Bike</Bike>'
SET @productList.modify
('insert sql:variable("@newBike") as last into (/Products)[1]')

After running this code, the @productList variable would contain the following XML.
<Products>
<Bike>Mountain Bike</Bike>
<Bike>Road Bike</Bike>
<Bike>Racing Bike</Bike>
</Products>
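The other XML DML keywords work through the same modify method. A quick sketch against the same @productList variable:
-- Change the text of the first Bike element
SET @productList.modify
('replace value of (/Products/Bike[1]/text())[1] with "Touring Bike"')

-- Remove the last Bike element
SET @productList.modify
('delete (/Products/Bike)[last()]')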

Tuesday, September 22, 2009

An A-Z Index of the SQL Server 2005 database

Aggregate - CREATE AGGREGATE
- DROP AGGREGATE
Application Role - CREATE APPLICATION ROLE
- ALTER APPLICATION ROLE
- DROP APPLICATION ROLE
Assembly - CREATE ASSEMBLY
- ALTER ASSEMBLY
- DROP ASSEMBLY
ALTER AUTHORIZATION

BACKUP
BACKUP CERTIFICATE
BEGIN [DIALOG [CONVERSATION]]

Certificate - ALTER CERTIFICATE
- CREATE CERTIFICATE
- DROP CERTIFICATE
CHECKPOINT
COMMIT
Contract - CREATE CONTRACT
- DROP CONTRACT
Credential - CREATE CREDENTIAL
- ALTER CREDENTIAL
- DROP CREDENTIAL

Database - CREATE DATABASE
- ALTER DATABASE
- DROP DATABASE
DBCC CHECKALLOC - Check consistency of disk allocation.
DBCC CHECKCATALOG - Check catalog consistency
DBCC CHECKCONSTRAINTS - Check integrity of table constraints.
DBCC CHECKDB - Check allocation, and integrity of all objects.
DBCC CHECKFILEGROUP - Check all tables and indexed views in a filegroup.
DBCC CHECKIDENT - Check identity value for a table.
DBCC CHECKTABLE - Check integrity of a table or indexed view.
DBCC CLEANTABLE - Reclaim space from dropped variable-length columns.
DBCC dllname - Unload a DLL from memory.
DBCC DROPCLEANBUFFERS - Remove all clean buffers from the buffer pool.
DBCC FREE... CACHE - Remove items from cache.
DBCC HELP - Help for DBCC commands.
DBCC INPUTBUFFER - Display last statement sent from a client to a database instance.
DBCC OPENTRAN - Display information about recent transactions.
DBCC OUTPUTBUFFER - Display last output returned to a client.
DBCC PROCCACHE - Display information about the procedure cache
DBCC SHOW_STATISTICS - Display the current distribution statistics
DBCC SHRINKDATABASE - Shrink the size of the database data and log files.
DBCC SHRINKFILE - Shrink or empty a database data or log file.
DBCC SQLPERF - Display transaction-log space statistics. Reset wait and latch statistics.
DBCC TRACE... - Enable or Disable trace flags
DBCC UPDATEUSAGE - Report and correct page and row count inaccuracies in catalog views
DBCC USEROPTIONS - Return the SET options currently active
DBCC deprecated commands
DECLARE
Default - CREATE DEFAULT
- DROP DEFAULT
DELETE
DENY - DENY Object permissions
- DENY User/Role permissions
Endpoint - CREATE ENDPOINT
- ALTER ENDPOINT
- DROP ENDPOINT
Event - CREATE EVENT NOTIFICATION
- DROP EVENT NOTIFICATION
EXECUTE
EXECUTE AS

Fulltext Catalog - CREATE FULLTEXT CATALOG
- ALTER FULLTEXT CATALOG
- DROP FULLTEXT CATALOG
Fulltext Index - CREATE FULLTEXT INDEX
- ALTER FULLTEXT INDEX
- DROP FULLTEXT INDEX
Function - CREATE FUNCTION
- ALTER FUNCTION
- DROP FUNCTION

GO
GRANT - GRANT Object permissions
- GRANT User/Role permissions

Index - CREATE INDEX
- ALTER INDEX
- DROP INDEX
INSERT
iSQL -U user -P password -i script.sql -o logfile.log

Key - CREATE ASYMMETRIC KEY
- ALTER ASYMMETRIC KEY
- DROP ASYMMETRIC KEY
- CREATE SYMMETRIC KEY
- OPEN SYMMETRIC KEY
- CLOSE SYMMETRIC KEY
- ALTER SYMMETRIC KEY
- DROP SYMMETRIC KEY
KILL
KILL QUERY NOTIFICATION
KILL STATS JOB

Login - CREATE LOGIN
- ALTER LOGIN
- DROP LOGIN

Master Key - CREATE MASTER KEY
- ALTER MASTER KEY
- BACKUP MASTER KEY
- DROP MASTER KEY
- RESTORE MASTER KEY
- ALTER SERVICE MASTER KEY
- BACKUP SERVICE MASTER KEY
- RESTORE SERVICE MASTER KEY
Message Type - CREATE MESSAGE TYPE
- ALTER MESSAGE TYPE
- DROP MESSAGE TYPE

Partition Function - CREATE PARTITION FUNCTION
- ALTER PARTITION FUNCTION
- DROP PARTITION FUNCTION
Partition Scheme - CREATE PARTITION SCHEME
- ALTER PARTITION SCHEME
- DROP PARTITION SCHEME
Procedure - CREATE PROCEDURE
- ALTER PROCEDURE
- DROP PROCEDURE

Queue - CREATE QUEUE
- ALTER QUEUE
- DROP QUEUE

Remote Service Binding - CREATE REMOTE SERVICE BINDING
- ALTER REMOTE SERVICE BINDING
- DROP REMOTE SERVICE BINDING

RESTORE - RESTORE DATABASE Complete
RESTORE DATABASE Partial
RESTORE DATABASE Files
RESTORE LOG
RESTORE DATABASE_SNAPSHOT
RESTORE FILELISTONLY - List database and log files
RESTORE HEADERONLY - List backup header info
RESTORE LABELONLY - Media info
RESTORE REWINDONLY - Rewind and close tape device
RESTORE VERIFYONLY
REVERT
REVOKE - REVOKE Object permissions
- REVOKE User/Role permissions
Role - CREATE ROLE
- ALTER ROLE
- DROP ROLE
ROLLBACK
Route - CREATE ROUTE
- ALTER ROUTE
- DROP ROUTE

Schema - CREATE SCHEMA
- ALTER SCHEMA
- DROP SCHEMA
SELECT
SEND
SERVERPROPERTY
Service - CREATE SERVICE
- ALTER SERVICE
- DROP SERVICE
SESSION_USER
SESSIONPROPERTY
SET @local_variable
SET
SHUTDOWN
Signature - ADD SIGNATURE
- DROP SIGNATURE
Statistics - CREATE STATISTICS
- UPDATE STATISTICS
- DROP STATISTICS

Synonym - CREATE SYNONYM
- DROP SYNONYM

Table - CREATE TABLE
- ALTER TABLE
- DROP TABLE
- TRUNCATE TABLE
Transaction - BEGIN DISTRIBUTED TRANSACTION
- BEGIN TRANSACTION
- COMMIT TRANSACTION
Trigger - CREATE TRIGGER
- ALTER TRIGGER
- ENABLE TRIGGER
- DISABLE TRIGGER
- DROP TRIGGER
Type - CREATE TYPE
- DROP TYPE

UNION
UPDATE
User - CREATE USER
- ALTER USER
- DROP USER
USE

View - CREATE VIEW
- ALTER VIEW
- DROP VIEW

XML Schema Collection - CREATE XML SCHEMA COLLECTION
- ALTER XML SCHEMA COLLECTION
- DROP XML SCHEMA COLLECTION

Friday, September 11, 2009

Query to get all the field names of a table - SQL

select A.name from sys.columns A join sys.tables B on A.object_id = B.object_id and B.name = 'tablename'
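An equivalent query using the standard INFORMATION_SCHEMA views:
select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'tablename'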

To find duplicates in table data - SQL

Here is the code to find out whether there are any duplicates in the table data.

WITH T1 AS
(
Select szname, ROW_NUMBER()
OVER (PARTITION BY szname Order By szname) AS NUMBER From tblUsers
)
select * from T1 where Number>1

Here, NUMBER is the number of duplications of a single record.
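The same CTE can be extended to delete the duplicates while keeping one copy of each record (a sketch against the same hypothetical tblUsers table; deleting through the CTE removes the rows from the underlying table):

WITH T1 AS
(
Select szname, ROW_NUMBER()
OVER (PARTITION BY szname Order By szname) AS NUMBER From tblUsers
)
delete from T1 where NUMBER > 1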

Ways to split a string on a delimiter - SQL

Here is the code to split a string on a specific delimiter in SQL.

DECLARE @data NVARCHAR(MAX),
@delimiter NVARCHAR(5)
SELECT @data = 'duplicate@Createrrrm@dddfdfddfdf',
@delimiter = '@'
DECLARE @textXML XML;
SELECT @textXML = CAST('<d>' + REPLACE(@data, @delimiter, '</d><d>') + '</d>' AS XML);
--SELECT @textXML
SELECT T.split.value('.', 'nvarchar(max)') AS data
FROM @textXML.nodes('/d') T (split)
/* doing it through a function */
select * from [dbo].fn_ParseDelimitedStrings ('duplicate@Createrrrm@dddfdfddfdf','@')

Function Code:

Create Function [dbo].[fn_ParseDelimitedStrings]
(@String nvarchar(3500), @Delimiter char(1))
Returns @Values Table
(
    RowId int Not Null Identity(1,1) Primary Key,
    Value nvarchar(255) Not Null
)
As
Begin
    Declare @startPos smallint, @endPos smallint

    If (Right(@String, 1) != @Delimiter)
        Set @String = @String + @Delimiter

    Set @startPos = 1
    Set @endPos = CharIndex(@Delimiter, @String)

    While @endPos > 0
    Begin
        Insert @Values(Value)
        Select LTrim(RTrim(SubString(@String, @startPos, @endPos - @startPos)))

        -- remove the delimiter just used
        Set @String = Stuff(@String, @endPos, 1, '')

        -- move string pointer to next delimiter
        Set @startPos = @endPos
        Set @endPos = CharIndex(@Delimiter, @String)
    End

    Return
End

Thursday, September 10, 2009

Why do parseInt("08") & parseInt("09") return the value 0?

That's because "08" and "09" are invalid numbers, in octal.

The parseInt() function actually allows two arguments, the string to
parse and a radix, which is optional. This radix value allows you to
convert a binary (base 2), hexadecimal (base 16) or other base string to
a decimal integer. For example

parseInt("FF", 16);

returns 255. This is very useful for parsing things like HTML color values.

Most people aren't aware of the optional radix argument. The problem is
that if you leave it off, the function doesn't necessarily assume
you want a decimal (base 10) conversion. Instead it checks the input
string (the first argument) and if it starts with "0x" it assumes it's a
hexadecimal value. If it starts with "0" - not followed by an "x" - it
takes it as an octal value. This follows the JavaScript convention for
numeric constants. If you code

var x = 0x18;
alert(x);

it will display 24 which is the decimal equivalent of the hex number
"18". Likewise,

var x = 014;
alert(x);

displays 12 which is the decimal value of the octal number "14".

As you should know, hexadecimal uses the digits 0-9 and the letters A-F,
16 in all. Octal is base 8, so only the digits 0-7 are valid. Hence,
"08" and "09" are not valid octal numbers and the function returns zero
just as it would for "xyz" in decimal - it's not a valid number.

To avoid this, always add the second argument, in this case

parseInt("08", 10);

returns 8 (decimal), as desired.

Saturday, August 29, 2009

Pass Array List through Query string .

Last week I faced a challenge in my project: passing a string array through the query string. While googling I found no solution for this, so I thought it would be helpful to post my solution.

In Parent Page:

protected void Page_Load(object sender, EventArgs e)
{
string[] namesArray = {"Welcome", "To" ,"C#" , "World"};
string Params= String.Join(",", ((string[])namesArray.ToArray(typeof(String))));
Response.Redirect("Page2.aspx?items=" + Params);
}

In Child Page

protected void Page_Load(object sender, EventArgs e)
{
string[] Params = Request["items"].Split(',');

foreach (string strNm in Params)
{
Response.Write(strNm);
}
}

Export to Excel with bold headers in C# ASP.NET

public void ExportToExcel(DataTable dt, int[] iColumns, string[] sHeaders, HttpResponse Response)
{
try
{
// iColumns are the ordinals (positions of columns) of the real DataTable.
// sHeaders are the desired header names for iColumns.
// The order of iColumns and sHeaders has to be the same.

// Creating namesArray with the original DataTable headers.
string Se = string.Empty;
foreach (int UniCount in iColumns)
{
Se += dt.Columns[UniCount].ColumnName + ",";
}
string Re = Se.Substring(0, Se.Length - 1);
string[] namesArray = Re.Split(',');


// Creating another table with only the export columns.

DataTable dtExport = dt.DefaultView.ToTable("tempTableName", false, namesArray);

// Assigning the desired names to the export column headers.
for (int Cnt = 0; Cnt < sHeaders.Length; Cnt++)
{
dtExport.Columns[Cnt].ColumnName = sHeaders[Cnt];
}
}

Response.ClearContent();
Response.AddHeader("content-disposition", "attachment; filename=" + "Export.XLS");
Response.ContentType = "application/vnd.ms-excel";
System.IO.StringWriter stringWrite = new System.IO.StringWriter();
System.Web.UI.HtmlTextWriter htmlWrite = new System.Web.UI.HtmlTextWriter(stringWrite);
System.Web.UI.WebControls.DataGrid dg = new System.Web.UI.WebControls.DataGrid();
dg.AutoGenerateColumns = true;
dg.DataSource = dtExport;
dg.DataBind();
dg.HeaderStyle.Font.Bold = true;
dg.RenderControl(htmlWrite);
Response.Write(stringWrite.ToString());
Response.End();
}
catch
{
// rethrow without resetting the stack trace
throw;
}

}

Wednesday, August 12, 2009

How To Handle Large Amounts of Data Quickly and Easily
So the last few weeks have been crazy busy. I have been completely swamped with client work, and while that is a good problem to have, it does take away time to do other stuff (like blogging!).
One of the things I have been working on deals with handling large blobs of data. It's a bit of a tangent since it has absolutely nothing to do with UI, but I figured that since there are a lot of developers reading this site, it might be helpful.
So what do I mean by large amounts of data? Well, for the app I am building we were testing our theoretical limits for performance reasons. Imagine a grid or an Excel spreadsheet that has 700 columns (Excel itself only allows 256) and 30,000 rows. That basically equates to 21 million cells in the grid. Now if you compare that lump of data to what, say, Google throws around, it's pretty insignificant; however, when it comes to keeping it in memory and accessing it quickly it is a bit beefy.
Most of the time you would get around this problem by paging through the data somehow, or having some sort of filter. Unfortunately for us, we didn’t have this luxury. The application needed the whole kit and caboodle accessible to it at all times. With that as the challenge we went to work.
In the beginning…
Before I started testing the theoretical limits of the app, I was using a run-of-the-mill XML Serialized cache file. It was kinda big with the small sample data I was initially using, but I didn’t think anything of it. Once I started dealing with lumpy up there, the XML file grew to 550 Megs of pure, unadulterated crap. I don’t know about you, but I sure don’t want to throw that bad boy into memory (we actually did a few times and watched my dev box cry uncle…kinda funny and sad at the same time).
So the original solution was out the door…time for plan B. Well my partner is a big database guy so we went with his idea and tried to throw el lumpo into a table called “tblCache”. Ever try throwing 1.6 million inserts at Sql Server Express? Even using BulkCopy it still performed like a one-legged man in a butt-kicking competition. The end result of plan B was a database that quintupled in size to over a gig and an app that performed so poorly that it made the Xml file seem speedy…
That is when the crying began…
My friend came up with the idea of “lets try and find an object based database and just put the whole cache object in there!” Fortunately we both realized that that was a horrendous idea about as soon as it came out of his mouth. However, the idea to cache the object itself stuck so my buddy began researching that a bit and came up with a solution.
The Answer
So how do you deal with a large amount of data quickly and easily? Two words: Binary Serialization. It sounds fancy, but what it basically means is that you take your big fancy object and store it in a bunch of 1s and 0s. This took lumpy and turned him into a 100 Meg blob of goodness (a full 5 times smaller than the XML file). When we were talking about our solution to the problem to another guy on the team he suggested looking up in memory compression. After we got that working our data file shrunk down to a trim 1 meg (my buddy owes that guy dinner for that idea by the way).
Couple that with a few database tricks (like doing a few smaller hits rather than one huge one) and we went from loading our data in a crawling 138 seconds down to a svelte 4. Sounds too good to be true? It's not…and what is better is how easy the code is once you get your mind around it.
That’s Nice…Can I Do It?
So by now you are probably thinking…nice story…but can I do the same thing with my app? The answer is resoundingly yes! and what is even better is, you can do it completely free (i.e. no third party tools) using only the standard libraries of .NET. You could spend some money on a 3rd party tool if you need something a bit specialized (i.e. higher compression than the gzip stuff), but you definitely don’t need to.
Let's get started…
First we need some imports (this project is in VB.NET, but could easily be converted to C#).
Imports System.IO
Imports System.IO.Compression
Imports System.Runtime.Serialization
Imports System.Runtime.Serialization.Formatters.Binary
Nothing too strange there, but there are probably a few you haven’t seen before.
The code itself is relatively simple as well.
Dim compressedzipStream As GZipStream = Nothing
Dim ms As MemoryStream = Nothing
Dim b As BinaryFormatter = Nothing
Dim fs As FileStream = Nothing

Try
ms = New MemoryStream
b = New BinaryFormatter
b.Serialize(ms, cache)

Dim buffer(ms.Length - 1) As Byte
ms.Position = 0
ms.Read(buffer, 0, buffer.Length)

fs = New FileStream("C:\FileGoesHere\FileName.whatever", FileMode.Create)
compressedzipStream = New GZipStream(fs, CompressionMode.Compress, True)
compressedzipStream.Write(buffer, 0, buffer.Length)

Finally
compressedzipStream.Close()
fs.Close()
ms.Close()
End Try

Got all that? Let's break it down.
The first few lines are just initializing some variables we are gonna use later. The GZipStream is the compression class of .NET. It is pretty good, but if you need some serious compression you might want to go with a third party tool.
Next you notice a classic Try/Finally. This is simply so we can be sure that no matter what happens our streams will be closed. The first line to really notice is b.Serialize(ms, cache). This is basically where the magic happens, or rather where your object (in our case it is called cache) is changed from what you built to a bunch of gobbledygook that only the computer can read. The good news is, it's pretty efficient when compared to other serialization stuff (i.e. xml). The bad news is that you can't read it like you can Xml. It's not a big deal, but I do find myself missing that little feature.
So what is happening is your object (which can be as simple as a string, or a custom object) is being squooshed down and stored into the memory stream ms.
The compression class needs a byte array to process, so that is what we build next. Buffer is our little array that we create to mimic the size of the memory stream we just created. After it is initialized we read the stream in. Note: Make sure to set the position of the memory stream to 0 before reading, otherwise nothing happens.
Once our buffer is loaded we create a file stream object that points to the place on your hard drive that you want to store your data. Next we create a new GZipStream class and point our new file stream at it. Finally write the buffer into the compressed stream. Voila! Your data is now saved and zipped up in a nice neat bow.
So its on the disk now…the next thing you need to know is how to access it right? No problem.
Dim reader As New StreamReader(cacheFile.FullName)

'this stream has been squooshed so we unsquoosh it here
Dim decomp as New GZipStream(reader.BaseStream, CompressionMode.Decompress, True)

Dim b As New BinaryFormatter
cache = b.Deserialize(decomp)
This looks pretty similar to what we had before. Basically we use a StreamReader to open up the file we saved earlier, then we use the GZipStream to decompress (notice the compression mode). Finally we use our handy dandy BinaryFormatter to deserialize the now unsquooshed data into our object.
Now that you know how to use binary serialization here is the golden rule…
Keep your objects simple
The more complex the object you are trying to serialize is, the larger your file will be and the slower it will be to access. For instance…when I started using this my data file was 17 megs (uncompressed it would have been over 500 Megs!!). The reason for this was because I was storing most of my data in a generic list. Now I love generic lists, and I use them wherever I can, but in this instance, they are absolute memory pigs. The reason for this is when the serializer crunches down objects it creates some overhead. That means for each item in a list you get a little overhead. Serializing a few objects is no big deal, but when you are dealing with hundreds of thousands of little ones, it starts to create a big problem.
In our case we took the same data and changed from a generic list to a comma delimited string and the data file shrunk down to just under 1 meg. From a loading-time perspective I went from 24 seconds to 4, so it is a big difference. When we decompress it I change the string back into a list so my code didn't have to change at all.
Now if you want to go a step further, you can make your app really fly by moving the entire save process to a separate thread. It is out of scope for this article, but it isn't as difficult as some people would lead you to believe (*cough* job security *cough*). If you use threading the whole process will seem instant to your user. Can't beat that!
So there you have it. Dealing with large blobs of data isn’t all that uncommon nowadays. I hope this gives you a different approach to use when speed is of the essence.

SQL FAQs

What is RDBMS?
Relational Data Base Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine the data items from different files, providing powerful tools for data usage.
What is normalization?
Database normalization is a data design and organization process applied to data structures based on rules that help build relational databases. In relational database design, the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are different normalization forms?
1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key
BCNF: Boyce-Codd Normal Form
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constrains on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form
A model free from all modification anomalies.
Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.
What is Stored Procedure?
A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.
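A minimal example of a user stored procedure (hypothetical, reusing the SalesOrders table from the XML post above):
CREATE PROCEDURE usp_GetOrdersByCustomer
@CustomerID int
AS
SELECT OrderID, OrderDate
FROM SalesOrders
WHERE CustomerID = @CustomerID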
What is Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure while triggers are implicitly executed. In addition, triggers can also execute stored procedures.
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
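A minimal sketch of an AFTER INSERT trigger (hypothetical table names):
CREATE TRIGGER trg_SalesOrders_Insert
ON SalesOrders
AFTER INSERT
AS
-- the inserted pseudo-table holds the new rows
INSERT INTO OrderAudit (OrderID, AuditDate)  -- OrderAudit is assumed to exist
SELECT OrderID, GETDATE() FROM inserted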
What is View?
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using standard T-SQL select command and can come from one to many different base tables or even other views.
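For example (a hypothetical view over the SalesOrders table used above):
CREATE VIEW vw_OrderSummary
AS
SELECT OrderID, OrderDate, CustomerID
FROM SalesOrders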
What is Index?
An index is a physical structure containing pointers to the data. Indices are created in an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. The users cannot see the indexes, they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query. In a table scan SQL Server examines every row in the table to satisfy the query results. Table scans are sometimes unavoidable, but on large tables, scans have a terrific impact on performance.
Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.
What is the difference between clustered and a non-clustered index?
A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
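A sketch against a hypothetical Orders table (a heap with no primary key yet):
-- Only one clustered index per table: it orders the data rows themselves
CREATE CLUSTERED INDEX ix_Orders_OrderID ON Orders (OrderID)
-- Nonclustered indexes are separate structures whose leaf rows point at the data
CREATE NONCLUSTERED INDEX ix_Orders_CustomerID ON Orders (CustomerID)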
What are the different index configurations a table can have?
A table can have one of the following index configurations:
No indexes
A clustered index
A clustered index and many nonclustered indexes
A nonclustered index
Many nonclustered indexes
What are cursors?
Cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all the rows in the set at one time.
In order to work with a cursor we need to perform some steps in the following order:
Declare cursor
Open cursor
Fetch row from the cursor
Process fetched row
Close cursor
Deallocate cursor
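Those steps look like this in T-SQL (a sketch against the hypothetical tblUsers table used earlier):
DECLARE @name nvarchar(255)
DECLARE user_cursor CURSOR FOR SELECT szname FROM tblUsers
OPEN user_cursor
FETCH NEXT FROM user_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @name -- process the fetched row
FETCH NEXT FROM user_cursor INTO @name
END
CLOSE user_cursor
DEALLOCATE user_cursor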
What is the use of DBCC commands?
DBCC stands for database consistency checker. We use these commands to check the consistency of the databases, i.e., maintenance, validation task and status checks.
E.g. DBCC CHECKDB - Ensures that tables in the db and the indexes are correctly linked.
DBCC CHECKALLOC - To check that all pages in a db are correctly allocated.
DBCC CHECKFILEGROUP - Checks all tables file group for any damage.
What is a Linked Server?
Linked Servers is a concept in SQL Server by which we can add other SQL Server to a Group and query both the SQL Server dbs using T-SQL Statements. With a linked server, you can create very clean, easy to follow, SQL statements that allow remote data to be retrieved, joined and combined with local data.
The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new Linked Server.
What is Collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.
What are different type of Collation Sensitivity?
Case sensitivity
A and a, B and b, etc.
Accent sensitivity
a and á, o and ó, etc.
Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.
Width sensitivity
When a single-byte character (half-width) and the same character when represented as a double-byte character (full-width) are treated differently then it is width sensitive.
What’s the difference between a primary key and a unique key?
Both primary key and unique enforce uniqueness of the column on which they are defined. But by default primary key creates a clustered index on the column, whereas unique creates a nonclustered index by default. Another major difference is that primary key doesn't allow NULLs, but unique key allows one NULL only.
How to implement one-to-one, one-to-many and many-to-many relationships while designing tables?
One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.
What is a NOLOCK?
Using the NOLOCK query optimiser hint is generally considered good practice in order to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a Dirty Read, which means that another process could be updating the data at the exact time you are reading it. There are no guarantees that your query will retrieve the most recent data. The advantage to performance is that your reading of data will not block updates from taking place, and updates will not block your reading of data. SELECT statements take Shared (Read) locks. This means that multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates will queue until all the reads have completed, and reads requested after the update will wait for the updates to complete. The result to your system is delay (blocking).
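For example (against the hypothetical SalesOrders table used above):
SELECT OrderID, OrderDate
FROM SalesOrders WITH (NOLOCK) -- reads without taking shared locks (dirty read)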
What is difference between DELETE & TRUNCATE commands?
Delete command removes the rows from a table based on the condition that we provide with a WHERE clause. Truncate will actually remove all the rows from a table and there will be no data in the table after we run the truncate command.
TRUNCATE
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE is not logged, it cannot activate a trigger.
TRUNCATE can not be Rolled back using logs.
TRUNCATE is DDL Command.
TRUNCATE Resets identity of the table.
DELETE
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE Can be Rolled back using logs.
DELETE is DML Command.
DELETE does not reset identity of the table.
Difference between Function and Stored Procedure?
UDF can be used in the SQL statements anywhere in the WHERE/HAVING/SELECT section where as Stored procedures cannot be.
UDFs that return tables can be treated as another rowset. This can be used in JOINs with other tables.
Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other rowset operations.
When is the use of UPDATE_STATISTICS command?
This command is basically used when a large amount of data processing has occurred. If a large number of deletions, modifications, or bulk copies into the tables has occurred, the indexes need to be updated to take these changes into account. UPDATE_STATISTICS updates the statistics on the indexes of these tables accordingly.
What types of Joins are possible with Sql Server?
Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table.
Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT OUTER JOINS, RIGHT OUTER JOINS and FULL OUTER JOINS.
What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?
HAVING specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT statement, and is typically used with a GROUP BY clause. When GROUP BY is not used, HAVING behaves like a WHERE clause. The HAVING clause is basically used only with the GROUP BY function in a query, whereas the WHERE clause is applied to each row before it becomes part of the GROUP BY function in a query. HAVING criteria are applied after the grouping of rows has occurred.
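For example (against the hypothetical SalesOrders table used above):
SELECT CustomerID, COUNT(*) AS OrderCount
FROM SalesOrders
WHERE OrderDate >= '20090101' -- filters rows before grouping
GROUP BY CustomerID
HAVING COUNT(*) > 5 -- filters groups after grouping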
What is sub-query? Explain properties of sub-query.
Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of parentheses. Sub-queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword.
A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT statement, if executed independently of the T-SQL statement in which it is nested, will return a result set, meaning a subquery SELECT statement can stand alone and is not dependent on the statement in which it is nested. A subquery SELECT statement can return any number of values, and can be found in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter to a function call. Basically a subquery can be used anywhere an expression can be used.
Properties of Sub-Query
A subquery must be enclosed in parentheses.
A subquery must be placed on the right-hand side of the comparison operator, and
A subquery cannot contain an ORDER BY clause.
A query can contain more than one sub-queries.
What are types of sub-queries?
Single-row subquery, where the subquery returns only one row.
Multiple-row subquery, where the subquery returns multiple rows, and
Multiple column subquery, where the subquery returns multiple columns.
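Examples of the first two (Customers is a hypothetical table):
-- Single-row subquery
SELECT OrderID FROM SalesOrders
WHERE CustomerID = (SELECT CustomerID FROM Customers WHERE Name = 'Kim Abercrombie')
-- Multiple-row subquery with IN
SELECT OrderID FROM SalesOrders
WHERE CustomerID IN (SELECT CustomerID FROM Customers WHERE City = 'Seattle')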
What is SQL Profiler?
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performance by executing too slowly.
Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.
What is User Defined Functions?
User-Defined Functions allow you to define your own T-SQL functions that can accept 0 or more parameters and return a single scalar data value or a table data type.
What kind of User-Defined Functions can be created?
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-Valued and Multi-statement Table-valued.
Scalar User-Defined Function
A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.
Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.
Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view as the function can support multiple T-SQL statements to build the final result where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL select command or a group of them gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the create function command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command unlike the behavior found when using a stored procedure which can also return record sets.
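Two minimal sketches (hypothetical names, reusing the SalesOrders table from above):
-- Scalar UDF
CREATE FUNCTION dbo.fn_AddTax (@amount money) RETURNS money
AS
BEGIN
RETURN @amount * 1.08 -- illustrative tax rate
END
GO
-- Inline table-valued UDF: a parameterized, non-updateable "view"
CREATE FUNCTION dbo.fn_OrdersForCustomer (@CustomerID int) RETURNS TABLE
AS
RETURN (SELECT OrderID, OrderDate FROM SalesOrders WHERE CustomerID = @CustomerID)
GO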
Which TCP/IP port does SQL Server run on? How can it be changed?
SQL Server runs on port 1433 by default. It can be changed from the Network Utility TCP/IP properties –> Port number, both on the client and the server.
What are the authentication modes in SQL Server? How can it be changed?
Windows mode and mixed mode (SQL & Windows).
To change authentication mode in SQL Server click Start, Programs, Microsoft SQL Server and click SQL Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group. Select the server then from the Tools menu select SQL Server Configuration Properties, and choose the Security page.
Where are SQL server users names and passwords are stored in sql server?
They get stored in master db in the sysxlogins table.
Which command using Query Analyzer will give you the version of SQL server and operating system?
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')
What is SQL server agent?
SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.
Can a stored procedure call itself or recursive stored procedure? How many level SP nesting possible?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.
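The following is a minimal sketch of a recursive stored procedure (the procedure name and factorial logic are illustrative only).
CREATE PROCEDURE dbo.usp_Factorial @n integer, @result integer OUTPUT
AS
BEGIN
    IF @n <= 1
        SET @result = 1
    ELSE
    BEGIN
        -- EXEC parameters must be variables, so compute @n - 1 first
        DECLARE @prev integer, @temp integer
        SET @prev = @n - 1
        EXEC dbo.usp_Factorial @prev, @temp OUTPUT
        SET @result = @n * @temp
    END
END
For example:
DECLARE @f integer
EXEC dbo.usp_Factorial 5, @f OUTPUT
SELECT @f  -- returns 120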
What is @@ERROR?
The @@ERROR automatic variable returns the error code of the last Transact-SQL statement executed. If there was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement, it must be saved to a local variable if it is needed for further processing after it has been checked.
What is RAISERROR?
Stored procedures report errors to client applications via the RAISERROR command. RAISERROR doesn’t change the flow of a procedure; it merely displays an error message, sets the @@ERROR automatic variable, and optionally writes the message to the SQL Server error log and the NT application event log.
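As a small sketch (the message text and the @CustomerID variable are illustrative), RAISERROR substitutes arguments into the message and raises it with a severity and state.
DECLARE @CustomerID integer
SET @CustomerID = 42
-- severity 16 indicates a user-correctable error; add WITH LOG to also
-- write the message to the SQL Server error log
RAISERROR ('Customer %d was not found.', 16, 1, @CustomerID)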
What is log shipping?
Log shipping is the process of automating the backup of database and transaction log files on a production SQL Server and then restoring them onto a standby server. Only the Enterprise Edition supports log shipping. In log shipping, the transaction log from one server is automatically applied to the backup database on the other server. If one server fails, the other server has the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it will automatically back up transaction logs throughout the day and automatically restore them on the standby server at a defined interval.
What is the difference between a local and a global temporary table?
A local temporary table (prefixed with #) exists only for the duration of the connection that created it or, if defined inside a compound statement, for the duration of that statement.
A global temporary table (prefixed with ##) is visible to all connections. It is dropped when the connection that created it closes and no other connection is still referencing it.
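For example (the table names are arbitrary):
CREATE TABLE #LocalTemp (ID integer)    -- visible only to this connection
CREATE TABLE ##GlobalTemp (ID integer)  -- visible to all connections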
What command do we use to rename a db?
sp_renamedb 'oldname', 'newname'
If someone is using the database, sp_renamedb will fail. In that case, first bring the database into single-user mode using sp_dboption, rename it with sp_renamedb, and then use sp_dboption again to return it to multi-user mode.
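Assuming a database named Northwind, the sequence might look like this.
EXEC sp_dboption 'Northwind', 'single user', 'true'
EXEC sp_renamedb 'Northwind', 'NorthwindNew'
EXEC sp_dboption 'NorthwindNew', 'single user', 'false'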
What is sp_configure commands and set commands?
Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
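As a sketch of all three levels (Northwind is used here only as an example database):
EXEC sp_configure 'show advanced options', 1  -- server-level setting
RECONFIGURE
ALTER DATABASE Northwind SET READ_ONLY        -- database-level setting
SET NOCOUNT ON                                -- session-level setting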
What are the different types of replication? Explain.
The SQL Server 2000-supported replication types are as follows:
• Transactional
• Snapshot
• Merge
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. Snapshot replication is best used as a method for replicating data that changes infrequently or where the most up-to-date values (low latency) are not a requirement. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.
In transactional replication, an initial snapshot of data is applied at Subscribers, and then, when data modifications are made at the Publisher, the individual transactions are captured and propagated to Subscribers.
Merge replication is the process of distributing data from Publisher to Subscribers, allowing the Publisher and Subscribers to make updates while connected or disconnected, and then merging the updates between sites when they are connected.
What are the OS services that the SQL Server installation adds?
The MSSQLServer service, the SQLServerAgent service, and MS DTC (the Microsoft Distributed Transaction Coordinator).
What are three SQL keywords used to change or set someone’s permissions?
GRANT, DENY, and REVOKE.
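For example, assuming a user named SomeUser and the hypothetical Products table:
GRANT SELECT ON Products TO SomeUser     -- allow reading
DENY DELETE ON Products TO SomeUser      -- explicitly forbid deleting
REVOKE SELECT ON Products FROM SomeUser  -- remove the earlier grant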
What does it mean to have quoted_identifier on? What are the implications of having it off?
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.
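For example:
SET QUOTED_IDENTIFIER ON
SELECT 'a literal' AS "Column Name"  -- double quotes delimit an identifier
SET QUOTED_IDENTIFIER OFF
SELECT "now a string literal"        -- double quotes delimit a literal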

What is the STUFF function and how does it differ from the REPLACE function?
The STUFF function overwrites existing characters. In the syntax STUFF(string_expression, start, length, replacement_characters), string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters that are replaced, and replacement_characters are the new characters inserted into the string.
The REPLACE function replaces every occurrence of existing characters. In the syntax REPLACE(string_expression, search_string, replacement_string), every occurrence of search_string found in string_expression is replaced with replacement_string.
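For example:
SELECT STUFF('abcdef', 2, 3, 'XY')  -- returns aXYef
SELECT REPLACE('abcabc', 'b', 'X')  -- returns aXcaXc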
Using Query Analyzer, name three ways to get an accurate count of the number of records in a table.
SELECT *
FROM table1
SELECT COUNT(*)
FROM table1
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('table1')
AND indid < 2
How to rebuild Master Database?
Shut down Microsoft SQL Server 2000, and then run Rebuildm.exe, which is located in the Program Files\Microsoft SQL Server\80\Tools\Binn directory.
In the Rebuild Master dialog box, click Browse.
In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in the shared network directory from which SQL Server 2000 was installed, and then click OK.
Click Settings. In the Collation Settings dialog box, verify or change settings used for the master database and all other databases.
Initially, the default collation settings are shown, but these may not match the collation selected during setup. You can select the same settings used during setup or select new collation settings. When done, click OK.
In the Rebuild Master dialog box, click Rebuild to start the process.
The Rebuild Master utility reinstalls the master database.
To continue, you may need to stop a server that is running.
Source: http://msdn2.microsoft.com/en-us/library/aa197950(SQL.80).aspx
What are the basic functions of the master, msdb, model, and tempdb databases?
The Master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb database holds temporary objects such as global and local temporary tables and temporary stored procedures.
The model database is essentially a template used in the creation of any new user database in the instance.
What are primary keys and foreign keys?
Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.
What is data integrity? Explain constraints?
Data integrity is an important feature in SQL Server. When used properly, it ensures that data is accurate, correct, and valid. It also acts as a trap for otherwise undetectable bugs within applications.
A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row and only one primary key constraint can be created for each table. The primary key constraints are used to enforce entity integrity.
A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. Unique constraints are used to enforce entity integrity, just as primary key constraints are.
A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no primary keys with that value. The foreign key constraints are used to enforce referential integrity.
A CHECK constraint is used to limit the values that can be placed in a column. The check constraints are used to enforce domain integrity.
A NOT NULL constraint enforces that the column will not accept null values. Not null constraints are used to enforce domain integrity, just as check constraints are.
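The following sketch brings these constraints together in one table definition (the OrderItems table and the referenced Orders table are hypothetical).
CREATE TABLE OrderItems
(OrderItemID integer NOT NULL PRIMARY KEY,        -- entity integrity
 OrderID integer NOT NULL
    REFERENCES Orders(OrderID),                   -- referential integrity
 SerialNo char(10) UNIQUE,                        -- entity integrity
 Quantity integer NOT NULL CHECK (Quantity > 0))  -- domain integrity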
What are the properties of the Relational tables?
Relational tables have six properties:
• Values are atomic.
• Column values are of the same kind.
• Each row is unique.
• The sequence of columns is insignificant.
• The sequence of rows is insignificant.
• Each column must have a unique name.
What is De-normalization?
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.
How to get @@error and @@rowcount at the same time?
If @@ROWCOUNT is checked after the error-checking statement, it will return 0 because it has been reset.
If @@ROWCOUNT is checked before the error-checking statement, @@ERROR will be reset instead. To get @@ERROR and @@ROWCOUNT at the same time, capture both in the same statement and store them in local variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
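Putting this together (the UPDATE statement is just a stand-in for any data modification against the hypothetical Products table):
DECLARE @RC integer, @ER integer
UPDATE Products SET ProductName = ProductName
SELECT @RC = @@ROWCOUNT, @ER = @@ERROR  -- capture both in one statement
IF @ER <> 0
    PRINT 'Error ' + CAST(@ER AS varchar(10))
ELSE
    PRINT CAST(@RC AS varchar(10)) + ' row(s) affected'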
What is Identity?
Identity (or AutoNumber) is a column that automatically generates numeric values. A start (seed) and an increment value can be set, but most DBAs leave these at 1. A GUID column also generates unique values, but these cannot be controlled in the same way. Identity/GUID columns do not need to be indexed.
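For example (the Customers table here is hypothetical):
CREATE TABLE Customers
(CustomerID integer IDENTITY(1,1) PRIMARY KEY,  -- seed 1, increment 1
 RowGuid uniqueidentifier DEFAULT NEWID(),      -- system-generated GUID
 CustomerName nvarchar(40))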
What is a Scheduled Jobs or What is a Scheduled Tasks?
Scheduled tasks let users automate processes that run on regular or predictable cycles. Users can schedule administrative tasks, such as cube processing, to run during times of slow business activity. Users can also determine the order in which tasks run by creating job steps within a SQL Server Agent job, e.g. back up the database, then update the statistics of the tables. Job steps give users control over the flow of execution: if one step fails, SQL Server Agent can be configured to continue with the remaining tasks or to stop execution.
What is a table called if it has neither a clustered nor a non-clustered index? What is it used for?
An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storing of data. It is often better to drop all indexes from a table, do the bulk inserts, and then restore the indexes afterwards.
What is BCP? When is it used?
BCP (bulk copy program) is a tool used to copy large amounts of data into or out of tables and views. BCP copies data only; it does not copy the structures from the source to the destination.
How do you load large amounts of data into a SQL Server database?
BCP can be used to bulk-copy large amounts of data into tables, and the BULK INSERT command imports a data file into a database table or view in a user-specified format.
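As a sketch, assuming the Northwind sample database and a trusted connection, BCP is run from the command prompt and BULK INSERT from Transact-SQL.
bcp Northwind.dbo.Products out C:\products.dat -c -T
BULK INSERT Northwind.dbo.Products
FROM 'C:\products.dat'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')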
Can subqueries be rewritten as simple SELECT statements with joins?
Subqueries can often be rewritten to use a standard outer join, which can result in faster performance. An outer join returns all non-matching rows with NULL values, so combining the outer join with a NULL test in the WHERE clause reproduces the result set without using a subquery.
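For example, assuming a hypothetical OrderItems table, a NOT IN subquery can be rewritten as a LEFT OUTER JOIN with a NULL test.
-- subquery form: products that have never been ordered
SELECT ProductID FROM Products
WHERE ProductID NOT IN (SELECT ProductID FROM OrderItems)
-- equivalent outer-join form
SELECT p.ProductID
FROM Products p
LEFT OUTER JOIN OrderItems o ON p.ProductID = o.ProductID
WHERE o.ProductID IS NULL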
Can SQL Server be linked to other servers such as Oracle?
SQL Server can be linked to any server for which an OLE DB provider is available. For example, Microsoft provides an OLE DB provider for Oracle, which allows an Oracle server to be added as a linked server to a SQL Server group.
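A minimal sketch using sp_addlinkedserver (the linked server name and the Oracle net service name are placeholders):
EXEC sp_addlinkedserver
    @server = 'ORACLE_LINK',
    @srvproduct = 'Oracle',
    @provider = 'MSDAORA',          -- Microsoft OLE DB Provider for Oracle
    @datasrc = 'OracleServiceName'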
How do you know which indexes a table has?
EXEC sp_helpindex 'table_name'
This lists the indexes defined on the table; the sysindexes system table can also be queried for the same information.
How to copy the tables, schema and views from one SQL server to another?
Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and programmable objects that lets users extract, transform, and consolidate data from disparate sources into single or multiple destinations.
What is Self Join?
This is a particular case in which a table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. The common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another.
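For example, assuming a hypothetical Staff table with StaffID, StaffName, and ManagerID columns:
SELECT e.StaffName AS Employee, m.StaffName AS Manager
FROM Staff e
LEFT JOIN Staff m ON e.ManagerID = m.StaffID  -- LEFT keeps staff with no manager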
What is Cross Join?
A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.
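For example, assuming a hypothetical PriceLevels table alongside the Products table:
SELECT p.ProductName, pl.PriceLevel
FROM Products p CROSS JOIN PriceLevels pl  -- every product at every price level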
Which virtual tables does a trigger use?
Inserted and Deleted.
List few advantages of Stored Procedure.
• Stored procedures can reduce network traffic and latency, boosting application performance.
• Stored procedure execution plans can be reused, staying cached in SQL Server’s memory, reducing server overhead.
• Stored procedures help promote code reuse.
• Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
• Stored procedures provide better security to your data.
What is Data Warehousing?
A data warehouse is a collection of data with the following characteristics:
• Subject-oriented, meaning that the data in the database is organized so that all the data elements relating to the same real-world event or object are linked together;
• Time-variant, meaning that the changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time;
• Non-volatile, meaning that data in the database is never over-written or deleted; once committed, the data is static, read-only, and retained for future reporting;
• Integrated, meaning that the database contains data from most or all of an organization’s operational applications, and that this data is made consistent.
What is OLTP (Online Transaction Processing)?
OLTP (online transaction processing) systems use relational database designs that follow the discipline of data modeling, and generally the Codd rules of data normalization, in order to ensure absolute data integrity. Using these rules, complex information is broken down into its simplest structures (tables), where all of the individual atomic-level elements relate to each other and satisfy the normalization rules.
How are SQL Server 2000 and XML linked? Can XML be used to access data?
FOR XML (RAW, AUTO, EXPLICIT)
You can execute SQL queries against existing relational databases to return results as XML rather than standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve XML results, use the FOR XML clause of the SELECT statement and specify an XML mode of RAW, AUTO, or EXPLICIT.
OPENXML
OPENXML is a Transact-SQL keyword that provides a relational/rowset view over an in-memory XML document. OPENXML is a rowset provider similar to a table or a view. OPENXML provides a way to access XML data within the Transact-SQL context by transferring data from an XML document into the relational tables. Thus, OPENXML allows you to manage an XML document and its interaction with the relational environment.
What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad hoc query. It is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, because the plan is what SQL Server places in its cache and uses to execute the stored procedure or query. Within Query Analyzer there is an option called “Show Execution Plan” (located on the Query drop-down menu). If this option is turned on, the query execution plan is displayed in a separate window when the query is run.