Sunday, July 27, 2008

SQL Server Interview Questions

SQL Interview Questions with Answers

What is an RDBMS.
Relational Database Management Systems (RDBMS) are database management systems that maintain
data records and indices in tables. Relationships may be created and maintained across and among the
data and tables. In a relational database, relationships between data items are expressed by means of
tables. Interdependencies among these tables are expressed by data values rather than by pointers.
This allows a high degree of data independence. An RDBMS has the capability to recombine the data
items from different files, providing powerful tools for data usage.

What is normalization.
Database normalization is a data design and organization process applied to data structures based on
rules that help build relational databases. In relational database design, it is the process of organizing
data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and
defining relationships between the tables. The objective is to isolate data so that additions, deletions,
and modifications of a field can be made in just one table and then propagated through the rest of the
database via the defined relationships.

What are the different normalization forms.

1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field
contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All
attributes must be directly dependent on the primary key
BCNF: Boyce-Codd Normal Form
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form
A model free from all modification anomalies.

Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first
fulfill all the criteria of a 2NF and 1NF database.

What is a Stored Procedure.
A stored procedure is a named group of SQL statements that have been previously created and stored
in the server database. Stored procedures accept input parameters so that a single procedure can be
used over the network by several clients using different input data. And when the procedure is
modified, all clients automatically get the new version. Stored procedures reduce network traffic and
improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.
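As a minimal sketch of creating and calling a user-defined stored procedure (the table, column and procedure names are illustrative):

```sql
-- Create a stored procedure that accepts one input parameter
CREATE PROCEDURE usp_GetEmployeesByDept
    @DeptId INT
AS
BEGIN
    SELECT EmployeeId, EmployeeName
    FROM Employees
    WHERE DeptId = @DeptId
END
GO

-- Clients execute it by name, passing their own input data
EXEC usp_GetEmployeesByDept @DeptId = 10
```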

What is a Trigger.
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE)
occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential
integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed;
the DBMS automatically fires the trigger as a result of a data modification to the associated table.
Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is
stored at the database level. Stored procedures, however, are not event-driven and are not attached to a
specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the
procedure while triggers are implicitly executed. In addition, triggers can also execute stored procedures.

Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the
trigger is fired because of data modification it can also cause another data modification, thereby firing
another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
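A sketch of an AFTER INSERT trigger that reads the virtual inserted table (the Orders and OrdersAudit tables are hypothetical):

```sql
CREATE TRIGGER trg_Orders_Insert
ON Orders
AFTER INSERT
AS
BEGIN
    -- The virtual "inserted" table holds the new rows
    INSERT INTO OrdersAudit (OrderId, AuditDate)
    SELECT OrderId, GETDATE()
    FROM inserted
END
```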

What is a View.
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as
updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the
view was created with. It should also be noted that as data in the original table changes, so does data
in the view, as views are the way to look at part of the original table. The results of using a view are
not permanently stored in the database. The data accessed through a view is actually constructed using
a standard T-SQL SELECT command and can come from one or many different base tables or even other views.
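A simple sketch (the Customers table and its columns are illustrative):

```sql
-- A view is a stored SELECT; the data is constructed at query time
CREATE VIEW vw_ActiveCustomers
AS
SELECT CustomerId, CustomerName
FROM Customers
WHERE IsActive = 1
GO

-- Query the view like a table
SELECT * FROM vw_ActiveCustomers
```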

What is an Index.
An index is a physical structure containing pointers to the data. Indices are created in an existing table
to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of
a table, and each index is given a name. The users cannot see the indexes; they are just used to speed
up queries. Effective indexes are one of the best ways to improve performance in a database
application. A table scan happens when there is no index available to help a query. In a table scan SQL
Server examines every row in the table to satisfy the query results. Table scans are sometimes
unavoidable, but on large tables, scans have a terrific impact on performance.

Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this
reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references
to the table itself.

What is the difference between clustered and a non-clustered index.
A clustered index is a special type of index that reorders the way records in the table are physically
stored. Therefore table can have only one clustered index. The leaf nodes of a clustered index contain
the data pages.

A nonclustered index is a special type of index in which the logical order of the index does not match
the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of
the data pages. Instead, the leaf nodes contain index rows.
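A sketch of creating both kinds of index (the table and column names are illustrative):

```sql
-- Only one clustered index per table: it defines the physical row order,
-- and its leaf nodes are the data pages themselves
CREATE CLUSTERED INDEX IX_Orders_OrderId
ON Orders (OrderId)

-- A nonclustered index's leaf nodes are index rows pointing back to the data
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON Orders (CustomerId)
```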

What are the different index configurations a table can have.
A table can have one of the following index configurations:

No indexes
A clustered index
A clustered index and many nonclustered indexes
A nonclustered index
Many nonclustered indexes

What is a cursor.
A cursor is a database object used by applications to manipulate data in a set on a row-by-row basis,
instead of the typical SQL commands that operate on all the rows in the set at one time.

In order to work with a cursor we need to perform some steps in the following order:
Declare cursor
Open cursor
Fetch row from the cursor
Process fetched row
Close cursor
Deallocate cursor
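The steps above can be sketched as follows (the Employees table is illustrative):

```sql
DECLARE @Name VARCHAR(100)

-- 1. Declare the cursor over a result set
DECLARE cur_Employees CURSOR FOR
    SELECT EmployeeName FROM Employees

-- 2. Open it
OPEN cur_Employees

-- 3/4. Fetch and process one row at a time
FETCH NEXT FROM cur_Employees INTO @Name
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name
    FETCH NEXT FROM cur_Employees INTO @Name
END

-- 5/6. Close and deallocate
CLOSE cur_Employees
DEALLOCATE cur_Employees
```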

What is the use of DBCC commands.
DBCC stands for database consistency checker. We use these commands to check the consistency of
the databases, i.e., for maintenance, validation tasks and status checks.
E.g. DBCC CHECKDB - Ensures that tables in the db and the indexes are correctly linked.
DBCC CHECKALLOC - Checks that all pages in a db are correctly allocated.
DBCC CHECKFILEGROUP - Checks all tables in a filegroup for any damage.

What is a Linked Server.
Linked Servers is a concept in SQL Server by which we can add other SQL Servers to a group and query
both SQL Server dbs using T-SQL statements. With a linked server, you can create very clean, easy
to follow SQL statements that allow remote data to be retrieved, joined and combined with local data.
The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new Linked Server.

What is Collation.
Collation refers to a set of rules that determine how data is sorted and compared. Character data is
sorted using rules that define the correct character sequence, with options for specifying case-
sensitivity, accent marks, kana character types and character width.

What are the different types of Collation Sensitivity.
Case sensitivity
A and a, B and b, etc.

Accent sensitivity
a and á, o and ó, etc.

Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.

Width sensitivity
When a single-byte character (half-width) and the same character when represented as a double-byte
character (full-width) are treated differently then it is width sensitive.

What's the difference between a primary key and a unique key.
Both primary key and unique key enforce uniqueness of the column on which they are defined. But by
default the primary key creates a clustered index on the column, whereas the unique key creates a
nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but
a unique key allows one NULL only.
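Both constraints can be sketched in one table definition (the table and columns are illustrative):

```sql
CREATE TABLE Employees
(
    EmployeeId INT NOT NULL PRIMARY KEY,   -- clustered index by default, NULLs not allowed
    Email      VARCHAR(100) UNIQUE         -- nonclustered index by default, one NULL allowed
)
```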

How to implement one-to-one, one-to-many and many-to-many relationships while
designing tables.
One-to-One relationship can be implemented as a single table and rarely as two tables with primary
and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and
foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables
forming the composite primary key of the junction table.
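A junction-table sketch for the many-to-many case (the Students and Courses tables are hypothetical):

```sql
-- Many-to-many: Students and Courses linked through a junction table
CREATE TABLE StudentCourses
(
    StudentId INT NOT NULL REFERENCES Students (StudentId),
    CourseId  INT NOT NULL REFERENCES Courses (CourseId),
    PRIMARY KEY (StudentId, CourseId)      -- composite primary key
)
```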

What is the use of the NOLOCK query optimiser hint.
The NOLOCK hint is sometimes used to improve concurrency on a busy system, although it sacrifices
read consistency. When the NOLOCK hint is included in a SELECT statement, no locks are
taken when data is read. The result is a Dirty Read, which means that another process could be
updating the data at the exact time you are reading it. There are no guarantees that your query will
retrieve the most recent data. The advantage to performance is that your reading of data will not block
updates from taking place, and updates will not block your reading of data. SELECT statements take
Shared (Read) locks. This means that multiple SELECT statements are allowed simultaneous access, but
other processes are blocked from modifying the data. The updates will queue until all the reads have
completed, and reads requested after the update will wait for the updates to complete. The result to
your system is delay(blocking).
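A minimal sketch of the hint in use (the table and filter are illustrative):

```sql
-- Dirty read: no shared locks are taken, so writers are not blocked,
-- but uncommitted data may be returned
SELECT OrderId, OrderTotal
FROM Orders WITH (NOLOCK)
WHERE CustomerId = 42
```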

What is difference between DELETE & TRUNCATE commands.
Delete command removes the rows from a table based on the condition that we provide with a WHERE
clause. Truncate will actually remove all the rows from a table and there will be no data in the table
after we run the truncate command.

TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the
page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes
and so on remain. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a DELETE trigger.
TRUNCATE can be rolled back only when it is issued inside an explicit transaction.
TRUNCATE is DDL Command.
TRUNCATE Resets identity of the table.

DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition
and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE Can be Rolled back.
DELETE is DML Command.
DELETE does not reset identity of the table.
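The contrast can be sketched as follows (the Orders table is illustrative):

```sql
-- Row-by-row, fully logged, fires DELETE triggers, keeps the identity counter
DELETE FROM Orders WHERE OrderDate < '2007-01-01'

-- Deallocates data pages, minimally logged, resets the identity counter
TRUNCATE TABLE Orders
```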

Difference between Function and Stored Procedure.
A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas
stored procedures cannot be.
UDFs that return tables can be treated as another rowset. This can be used in JOINs with other tables.
Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other
rowset operations.

When is the UPDATE STATISTICS command used.
This command is basically used after a large amount of data processing has occurred. If a large number
of deletions, modifications or bulk copies into the tables has occurred, the statistics have to be updated
to take these changes into account. UPDATE STATISTICS refreshes the query optimization statistics on these tables.

What types of Joins are possible with Sql Server.
Joins are used in queries to explain how different tables are related. Joins also let you select data from
a table depending upon data from another table.
Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT
OUTER JOINs, RIGHT OUTER JOINs and FULL OUTER JOINs.

What is the difference between a HAVING CLAUSE and a WHERE CLAUSE.
The HAVING clause specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT
statement. HAVING is typically used in a GROUP BY clause. When GROUP BY is not used, HAVING
behaves like a WHERE clause. Having Clause is basically used only with the GROUP BY function in a
query. WHERE Clause is applied to each row before they are part of the GROUP BY function in a query.

What is a sub-query. Explain the properties of a sub-query.
Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed
arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of
parentheses. Sub-queries are generally used to return a single row as an atomic value, though they
may be used to compare values against multiple rows with the IN keyword.

A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT
statement, if executed independently of the T-SQL statement in which it is nested, will return a result
set, meaning a subquery SELECT statement can stand alone and is not dependent on the statement in
which it is nested. A subquery SELECT statement can return any number of values, and can be found in
the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a
T-SQL statement. A subquery can also be used as a parameter to a function call. Basically a subquery
can be used anywhere an expression can be used.

Properties of Sub-Query
A subquery must be enclosed in parentheses.
A subquery must be placed on the right-hand side of the comparison operator.
A subquery cannot contain an ORDER BY clause.
A query can contain more than one sub-query.
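A minimal sketch of a single-row subquery feeding an outer comparison (the Orders table is illustrative):

```sql
-- The subquery runs independently and returns one atomic value
SELECT OrderId, OrderTotal
FROM Orders
WHERE OrderTotal > (SELECT AVG(OrderTotal) FROM Orders)
```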

What are types of sub-queries.
Single-row subquery, where the subquery returns only one row.
Multiple-row subquery, where the subquery returns multiple rows, and
Multiple column subquery, where the subquery returns multiple columns.

What is SQL Profiler.
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of
Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to
analyze later. For example, you can monitor a production environment to see which stored procedures
are hampering performance by executing too slowly.

Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too
large, you can filter them based on the information you want, so that only a subset of the event data is
collected. Monitoring too many events adds overhead to the server and the monitoring process and can
cause the trace file or trace table to grow very large, especially when the monitoring process takes
place over a long period of time.

What are User Defined Functions.
User-Defined Functions allow users to define their own T-SQL functions that can accept 0 or more
parameters and return a single scalar data value or a table data type.

What kind of User-Defined Functions can be created.
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-
Valued and Multi-statement Table-valued.

Scalar User-Defined Function
A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp
data types are not supported. These are the type of user-defined functions that most developers are
used to in other programming languages. You pass in 0 to many parameters and you get back a return value.
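A scalar UDF sketch (the function and column names are illustrative):

```sql
CREATE FUNCTION dbo.fn_FullName (@First VARCHAR(50), @Last VARCHAR(50))
RETURNS VARCHAR(101)
AS
BEGIN
    RETURN @First + ' ' + @Last
END
GO

-- Unlike a stored procedure, a UDF can be used inside a SELECT list
SELECT dbo.fn_FullName(FirstName, LastName) FROM Employees
```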

Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative
to a view as the user-defined function can pass parameters into a T-SQL select command and in
essence provide us with a parameterized, non-updateable view of the underlying tables.

Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional
alternative to a view as the function can support multiple T-SQL statements to build the final result
where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-
SQL select command or a group of them gives us the capability to in essence create a parameterized,
non-updateable view of the data in the underlying tables. Within the create function command you
must define the table structure that is being returned. After creating this type of user-defined function,
it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored
procedure, which can also return record sets.

Which TCP/IP port does SQL Server run on. How can it be changed.
SQL Server runs on port 1433 by default. It can be changed from the Network Utility TCP/IP properties –>
Port number, both on the client and the server.

What are the authentication modes in SQL Server. How can it be changed.
Windows mode and mixed mode (SQL & Windows).

To change authentication mode in SQL Server click Start, Programs, Microsoft SQL Server and click SQL
Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group.
Select the server then from the Tools menu select SQL Server Configuration Properties, and choose the
Security page.

Where are SQL Server user names and passwords stored.
They are stored in the master db, in the sysxlogins table.

Which command using Query Analyzer will give you the version of SQL Server and operating system.
SELECT @@VERSION returns the SQL Server version, build and operating system information in a single string.

What is SQL Server Agent.
SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It
is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the
implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to
schedule your own jobs and scripts.

Can a stored procedure call itself (a recursive stored procedure). How many levels of SP nesting are possible.
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves.
Recursion can be defined as a method of problem solving wherein the solution is arrived at by
repetitively applying it to subsets of the problem. A common application of recursive logic is to perform
numeric computations that lend themselves to repetitive evaluation by the same processing steps.
Stored procedures are nested when one stored procedure calls another or executes managed code by
referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code
references up to 32 levels.

What is @@ERROR.
The @@ERROR automatic variable returns the error code of the last Transact-SQL statement. If there
was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement,
it must be saved to a variable if it needs to be processed further after checking it.

What is RAISERROR.
Stored procedures report errors to client applications via the RAISERROR command. RAISERROR
doesn't change the flow of a procedure; it merely displays an error message, sets the @@ERROR
automatic variable, and optionally writes the message to the SQL Server error log and the NT
application event log.
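A minimal sketch (the message text is illustrative; 16 is a common user-error severity):

```sql
-- message text, severity, state; WITH LOG also writes the message
-- to the SQL Server error log and the NT application event log
RAISERROR ('The nightly import failed.', 16, 1) WITH LOG
```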

What is log shipping.
Log shipping is the process of automating the backup of database and transaction log files on a
production SQL server, and then restoring them onto a standby server. Only the Enterprise Edition
supports log shipping. In log shipping the transaction log file from one server is automatically applied
to the backup database on the other server. If one server fails, the other server will have the same db
and can be used as the Disaster Recovery plan. The key feature of log shipping is that it will
automatically back up transaction logs throughout the day and automatically restore them on the
standby server at a defined interval.

What is the difference between a local and a global temporary table.
A local temporary table (prefixed with #) exists only for the duration of a connection or, if defined
inside a compound statement, for the duration of the compound statement.

A global temporary table (prefixed with ##) is visible to all connections. It is dropped when the session
that created it ends and all other sessions have stopped referencing it.

What command do we use to rename a db.
sp_renamedb ‘oldname’ , ‘newname’
If someone is using the db it will not accept sp_renamedb. In that case first bring the db to single-user
mode using sp_dboption, use sp_renamedb to rename the database, and then use sp_dboption again to
bring the database back to multi-user mode.

What are the sp_configure and SET commands.
Use sp_configure to display or change server-level settings. To change database-level settings, use
ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.

What are the different types of replication. Explain.
The SQL Server 2000-supported replication types are as follows:

• Transactional
• Snapshot
• Merge

Snapshot replication distributes data exactly as it appears at a specific moment in time and does not
monitor for updates to the data. Snapshot replication is best used as a method for replicating data that
changes infrequently or where the most up-to-date values (low latency) are not a requirement. When
synchronization occurs, the entire snapshot is generated and sent to Subscribers.

In transactional replication, an initial snapshot of data is applied at Subscribers, and then when data
modifications are made at the Publisher, the individual transactions are captured and propagated to Subscribers.

Merge replication is the process of distributing data from Publisher to Subscribers, allowing the
Publisher and Subscribers to make updates while connected or disconnected, and then merging the
updates between sites when they are connected.

What are the OS services that the SQL Server installation adds.
MS SQL Server Service, SQL Server Agent Service, and the Distributed Transaction Coordinator (MS DTC).

What are three SQL keywords used to change or set someone's permissions.
GRANT, DENY and REVOKE.

What does it mean to have quoted_identifier on. What are the implications of having it off.
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and
literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers
cannot be quoted and must follow all Transact-SQL rules for identifiers.

What is the STUFF function and how does it differ from the REPLACE function.
The STUFF function is used to overwrite existing characters. Using this syntax, STUFF(string_expression,
start, length, replacement_characters), string_expression is the string that will have characters
substituted, start is the starting position, length is the number of characters in the string that are
substituted, and replacement_characters are the new characters interjected into the string.
The REPLACE function is used to replace all occurrences of existing characters. Using this syntax,
REPLACE(string_expression, search_string, replacement_string), every incidence of search_string found
in the string_expression will be replaced with replacement_string.
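A quick sketch of the contrast:

```sql
-- Remove 3 characters starting at position 2, then insert 'xyz'
SELECT STUFF('abcdef', 2, 3, 'xyz')     -- returns 'axyzef'

-- Replace every occurrence of the search string
SELECT REPLACE('abcdefabc', 'abc', 'x') -- returns 'xdefx'
```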

Using Query Analyzer, name 3 ways to get an accurate count of the number of records in a table.
SELECT * FROM table1
SELECT COUNT(*) FROM table1
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2

How to rebuild Master Database.
Shutdown Microsoft SQL Server 2000, and then run Rebuildm.exe. This is located in the Program
Files\Microsoft SQL Server\80\Tools\Binn directory.
In the Rebuild Master dialog box, click Browse.
In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in
the shared network directory from which SQL Server 2000 was installed, and then click OK.
Click Settings. In the Collation Settings dialog box, verify or change settings used for the master
database and all other databases.
Initially, the default collation settings are shown, but these may not match the collation selected during
setup. You can select the same settings used during setup or select new collation settings. When done,
click OK.
In the Rebuild Master dialog box, click Rebuild to start the process.
The Rebuild Master utility reinstalls the master database.
To continue, you may need to stop a server that is running.

What are the basic functions of the master, msdb, model and tempdb databases.
The Master database holds information for all databases located on the SQL Server instance and is the
glue that holds the engine together. Because SQL Server cannot start without a functioning master
database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS
packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in
the instance.

What are primary keys and foreign keys.
Primary keys are the unique identifiers for each row. They must contain unique values and cannot be
null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys
and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship
between tables.

What is data integrity. Explain constraints.
Data integrity is an important feature in SQL Server. When used properly, it ensures that data is
accurate, correct, and valid. It also acts as a trap for otherwise undetectable bugs within applications.

A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should
have a primary key constraint to uniquely identify each row and only one primary key constraint can be
created for each table. The primary key constraints are used to enforce entity integrity.

A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values
are entered. The unique key constraints are used to enforce entity integrity, as the primary key constraints do.

A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the
corresponding data values. A foreign key in one table points to a primary key in another table. Foreign
keys prevent actions that would leave rows with foreign key values when there are no primary keys
with that value. The foreign key constraints are used to enforce referential integrity.

A CHECK constraint is used to limit the values that can be placed in a column. The check constraints
are used to enforce domain integrity.

A NOT NULL constraint enforces that the column will not accept null values. The not null constraints
are used to enforce domain integrity, as the check constraints.

What are the properties of the Relational tables.
Relational tables have six properties:

• Values are atomic.
• Column values are of the same kind.
• Each row is unique.
• The sequence of columns is insignificant.
• The sequence of rows is insignificant.
• Each column must have a unique name.

What is De-normalization.
De-normalization is the process of attempting to optimize the performance of a database by adding
redundant data. It is sometimes necessary because current DBMSs implement the relational model
poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while
providing physical storage of data that is tuned for high performance. De-normalization is a technique
to move from higher to lower normal forms of database modeling in order to speed up database access.

How to get @@error and @@rowcount at the same time.
If @@ROWCOUNT is checked after the error-checking statement then it will have 0 as its value, as it
would have been reset.
And if @@ROWCOUNT is checked before the error-checking statement then @@ERROR would get reset.
To get @@ERROR and @@ROWCOUNT at the same time, read both in the same statement and store
them in local variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
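A short sketch of the pattern (the UPDATE statement is illustrative):

```sql
DECLARE @ER INT, @RC INT

UPDATE Orders SET OrderTotal = OrderTotal * 1.1 WHERE CustomerId = 42

-- Capture both in a single statement before either is reset
SELECT @RC = @@ROWCOUNT, @ER = @@ERROR

PRINT 'Rows affected: ' + CAST(@RC AS VARCHAR(10))
```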

What is Identity.
Identity (or AutoNumber) is a column that automatically generates numeric values. A start and
increment value can be set, but most DBAs leave these at 1. A GUID column also generates unique
values; these cannot be controlled. Identity/GUID columns do not need to be indexed.

What is a Scheduled Jobs or What is a Scheduled Tasks.

Scheduled tasks let users automate processes that run on regular or predictable cycles. Users can
schedule administrative tasks, such as cube processing, to run during times of slow business activity.
Users can also determine the order in which tasks run by creating job steps within a SQL Server Agent
job. E.g. back up a database, update statistics on tables. Job steps give users control over the flow of execution.

If one job fails, the user can configure SQL Server Agent to continue to run the remaining tasks or to
stop execution.
What is a table called if it has neither a clustered nor a nonclustered index. What is it
used for.
Unindexed table or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by
pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storing of data. Many times it is better to drop all indexes from the
table, then do the bulk inserts, and restore those indexes after that.

What is BCP. When is it used.
BulkCopy (BCP) is a tool used to copy large amounts of data to and from tables and views. BCP does
not copy table structures from source to destination; it copies only the data.

How do you load large data to the SQL server database.
BulkCopy is a tool used to copy large amounts of data from tables. The BULK INSERT command
imports a data file into a database table or view in a user-specified format.

Can we rewrite subqueries into simple select statements or with joins.
Subqueries can often be re-written to use a standard outer join, resulting in faster performance. An
outer join returns all rows from the outer table, with NULL values for the columns of non-matching
rows. Hence we combine the outer join with an IS NULL test in the WHERE clause to reproduce the
result set without using a sub-query.
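A sketch of the rewrite (the Customers and Orders tables are illustrative):

```sql
-- Sub-query version: customers with no orders
SELECT c.CustomerId
FROM Customers c
WHERE c.CustomerId NOT IN (SELECT o.CustomerId FROM Orders o)

-- Outer-join rewrite: non-matching rows come back with NULLs
SELECT c.CustomerId
FROM Customers c
LEFT OUTER JOIN Orders o ON o.CustomerId = c.CustomerId
WHERE o.CustomerId IS NULL
```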

Can SQL Server be linked to other servers like Oracle.
SQL Server can be linked to any server provided it has an OLE DB provider from Microsoft to allow a
link. E.g. Oracle has an OLE DB provider for Oracle that Microsoft provides to add it as a linked server
to the SQL Server group.

How to know which index a table is using.
EXEC sp_helpindex 'table_name'

How to copy the tables, schema and views from one SQL server to another.
Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and
programmable objects that lets users extract, transform, and consolidate data from disparate sources
into single or multiple destinations.

What is Self Join.
This is the particular case in which a table is joined to itself, using one or two aliases to avoid
confusion. A self join can be of any type, as long as the joined tables are the same. A self join is
rather unique in that it involves a relationship with only one table. The common example is a company
with a hierarchical reporting structure, where one member of staff reports to another.
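A sketch of that reporting-structure example (the Employee table and its columns are hypothetical):

```sql
-- Each employee row points at its manager's row in the same table
SELECT e.Name AS Employee, m.Name AS Manager
FROM Employee e
LEFT OUTER JOIN Employee m ON e.ManagerID = m.EmployeeID
```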

What is Cross Join.
A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved
in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied
by the number of rows in the second table. The common example is when a company wants to combine
each product with a pricing table to analyze each product at each price.
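That product-times-price example might look like this (both table names are hypothetical):

```sql
-- Every product paired with every price level
SELECT p.ProductName, pr.PriceLevel
FROM Product p
CROSS JOIN PriceList pr
```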

Which virtual tables does a trigger use.
The Inserted and Deleted tables.
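A minimal sketch of a trigger reading the Inserted table (the order and audit tables here are hypothetical):

```sql
-- Copy newly inserted order IDs into an audit table
CREATE TRIGGER trg_OrderAudit ON dbo.Orders
FOR INSERT
AS
INSERT INTO dbo.OrderAudit (OrderID, AuditDate)
SELECT OrderID, GETDATE()
FROM Inserted
```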

List few advantages of Stored Procedure.

• Stored procedures can reduce network traffic and latency, boosting application performance.
• Stored procedure execution plans can be reused, staying cached in SQL Server's memory,
reducing server overhead.
• Stored procedures help promote code reuse.
• Stored procedures can encapsulate logic. You can change stored procedure code without
affecting clients.
• Stored procedures provide better security to your data.

What is Data Warehousing.

A data warehouse is commonly described as a database that is:
• Subject-oriented, meaning that the data in the database is organized so that all the data
elements relating to the same real-world event or object are linked together;
• Time-variant, meaning that the changes to the data in the database are tracked and recorded
so that reports can be produced showing changes over time;
• Non-volatile, meaning that data in the database is never over-written or deleted, once
committed, the data is static, read-only, but retained for future reporting;
• Integrated, meaning that the database contains data from most or all of an organization's
operational applications, and that this data is made consistent.

What is OLTP (OnLine Transaction Processing).
OLTP (online transaction processing) systems use relational database design with the discipline of data
modeling and generally follow the Codd rules of data normalization in order to ensure absolute data
integrity. Using these rules, complex information is broken down into its simplest structures (tables)
where all of the individual atomic-level elements relate to each other and satisfy the normalization
rules.

How are SQL Server 2000 and XML linked. Can XML be used to access data.
You can execute SQL queries against existing relational databases to return results as XML rather than
standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve
XML results, use the FOR XML clause of the SELECT statement and specify an XML mode of RAW, AUTO,
or EXPLICIT.

OPENXML is a Transact-SQL keyword that provides a relational/rowset view over an in-memory XML
document. OPENXML is a rowset provider similar to a table or a view. OPENXML provides a way to
access XML data within the Transact-SQL context by transferring data from an XML document into the
relational tables. Thus, OPENXML allows you to manage an XML document and its interaction with the
relational environment.
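A brief sketch of both directions (the Customers table in the first query is hypothetical):

```sql
-- FOR XML: return relational rows as XML
SELECT CustomerID, CompanyName
FROM Customers
FOR XML AUTO

-- OPENXML: expose an in-memory XML document as a rowset
DECLARE @doc varchar(1000), @idoc int
SET @doc = '<root><customer id="1" name="Acme"/></root>'
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
SELECT *
FROM OPENXML(@idoc, '/root/customer', 1)
     WITH (id int, name varchar(50))
EXEC sp_xml_removedocument @idoc
```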

What is an execution plan. When would you use it. How would you view the execution plan.
An execution plan is basically a road map that graphically or textually shows the data retrieval methods
chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query. It is a very useful
tool for a developer to understand the performance characteristics of a query or stored procedure,
since the plan is what SQL Server places in its cache and uses to execute the stored procedure or
query. Within Query Analyzer there is an option called "Show Execution Plan" (located on the Query
drop-down menu). If this option is turned on, it will display the query execution plan in a separate
window when the query is run.
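You can also request a textual plan from Query Analyzer or osql, for example:

```sql
-- Show the estimated plan without executing the query
SET SHOWPLAN_TEXT ON
GO
SELECT c.CustomerID
FROM Sales.Customer c
WHERE c.CustomerID = 1
GO
SET SHOWPLAN_TEXT OFF
GO
```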

Joins In SQL

Inner Join Review:
The most commonly used join is an INNER JOIN. This type of join combines rows from two tables only when they match on the joining condition. Usually the primary key from one table matches a foreign key on another table, but join conditions can be more complex than that.

INNER JOIN will retrieve a results row only where there is a perfect match between the two tables in the join condition. You will also often see one row from one of the tables matching multiple rows in the other table. For example, one customer can have many orders. One order can have many order details. The data on the one side will be repeated for each row on the many side. The following query is an example showing how the information from the Sales.SalesOrderHeader is repeated on each matching row:
SELECT s.SalesOrderID, OrderDate,ProductID
FROM Sales.SalesOrderHeader AS s
INNER JOIN Sales.SalesOrderDetail AS d ON s.SalesOrderID = d.SalesOrderID
ORDER BY s.SalesOrderID, ProductID
Outer Join Introduction:
OUTER JOIN is used to join two tables even if there is not a match. An OUTER JOIN can be used to return a list of all the customers and the orders even if no orders have been placed for some of the customers. A keyword, RIGHT or LEFT, is used to specify which side of the join returns all possible rows. I like using LEFT because it makes sense to me to list the most important table first. Except for one example demonstrating RIGHT OUTER JOIN, this article will use left joins. Just a note: the keywords INNER and OUTER are optional.

The next example returns a list of all the customers and the SalesOrderID for the orders that have been placed, if any.
SELECT c.CustomerID, s.SalesOrderID
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
It uses the LEFT keyword because the Sales.Customer table is located on the left side and we want all rows returned from that table even if there is no match in the Sales.SalesOrderHeader table. This is an important point. Notice also that the CustomerID column is the primary key of the Sales.Customer table and a foreign key in the Sales.SalesOrderHeader table. This means that there must be a valid customer for every order placed. Writing a query that returns all orders and the customers if they match doesn’t make sense. The LEFT table should always be the primary key table when performing a LEFT OUTER JOIN.

If the location of the tables in the query are switched, the RIGHT keyword is used and the same results are returned:
SELECT c.CustomerID, s.SalesOrderID
FROM Sales.SalesOrderHeader s
RIGHT OUTER JOIN Sales.Customer c ON c.CustomerID = s.CustomerID

If I have a LEFT OUTER JOIN, what is returned from the table on the right side of the join where there is not a match? Each column from the right side will return a NULL. Try this query which lists the non-matching rows first:

SELECT c.CustomerID, s.SalesOrderID
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
ORDER BY s.SalesOrderID
By adding a WHERE clause to check for a NULL SalesOrderID, you can find all the customers who have not placed an order. My copy of AdventureWorks returns 66 customers with no orders:
SELECT c.CustomerID, s.SalesOrderID
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID

WHERE s.SalesOrderID IS NULL

Occasionally, you will need to be more specific. How can you find all the customers who have not placed an order in 2002? There are several ways to solve this problem. You could create a view of all the orders placed in 2002 and join the view on the Sales.Customer table. Another option is to create a CTE, or Common Table Expression, of the orders placed in 2002. This example shows how to use a CTE to get the required results:

WITH s AS
( SELECT SalesOrderID, CustomerID
FROM Sales.SalesOrderHeader
WHERE OrderDate between '1/1/2002' and '12/31/2002' )
SELECT c.CustomerID, s.SalesOrderID
FROM Sales.Customer c
LEFT OUTER JOIN s ON c.CustomerID = s.CustomerID
WHERE s.SalesOrderID IS NULL
My favorite technique to solve this problem is much simpler. Additional criteria, in this case filtering on the OrderDate, can be added to the join condition. The query joins all customers to the orders placed in 2002. Then the results are restricted to those where there is no match. This query will return exactly the same results as the previous, more complicated query:
SELECT c.CustomerID, s.SalesOrderID
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
and s.OrderDate between '1/1/2002' and '12/31/2002'
WHERE s.SalesOrderID IS NULL
Using Aggregates with Outer Joins:
Aggregate queries introduce another pitfall to watch out for. The following example attempts to list all the customers and the count of the orders each has placed. Can you spot the problem?
SELECT c.CustomerID, count(*) OrderCount
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
GROUP BY c.CustomerID
ORDER BY OrderCount
Now the customers with no orders look like they have placed one order. That is because this query is counting the rows returned. To solve this problem, count the SalesOrderID column. NULL values are eliminated from the count.
SELECT c.CustomerID, count(SalesOrderID) OrderCount
FROM Sales.Customer c LEFT OUTER JOIN Sales.SalesOrderHeader s
ON c.CustomerID = s.CustomerID
GROUP BY c.CustomerID
ORDER BY OrderCount
Multiple Joins:
Once more than two tables are involved in the query, things get a bit more complicated. When a table is joined to the RIGHT table, a LEFT OUTER JOIN must be used. That is because the NULL rows from the RIGHT table will not match any rows on the new table. An INNER JOIN causes the non-matching rows to be eliminated from the results. If the Sales.SalesOrderDetail table is joined to the Sales.SalesOrderHeader table and an INNER JOIN is used, none of the customers without orders will show up. NULL cannot be joined to any value, not even NULL.

To illustrate this point, when I add the Sales.SalesOrderDetail table to one of the previous queries that checked for customers without orders, I get back no rows at all.
SELECT c.CustomerID, s.SalesOrderID, d.SalesOrderDetailID
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
INNER JOIN Sales.SalesOrderDetail d ON s.SalesOrderID = d.SalesOrderID
To get correct results, change the INNER JOIN to a LEFT JOIN.

SELECT c.CustomerID, s.SalesOrderID, d.SalesOrderDetailID
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
LEFT OUTER JOIN Sales.SalesOrderDetail d ON s.SalesOrderID = d.SalesOrderID
What about additional tables joined to Sales.Customer, the table on the left? Must outer joins be used? If it is possible that there are some rows without matches, it must be an outer join to guarantee that no results are lost. The Sales.Customer table has a foreign key pointing to the Sales.SalesTerritory table. Every customer’s territory ID must match a valid value in Sales.SalesTerritory. This query returns 66 rows as expected because it is impossible to eliminate any customers by joining to Sales.SalesTerritory:

SELECT c.CustomerID, s.SalesOrderID, t.Name
FROM Sales.Customer c
LEFT OUTER JOIN Sales.SalesOrderHeader s ON c.CustomerID = s.CustomerID
INNER JOIN Sales.SalesTerritory t ON c.TerritoryID = t.TerritoryID
WHERE s.SalesOrderID IS NULL
Sales.SalesTerritory is the primary key table; every customer must match a valid territory. If you wanted to write a query that listed all territories, even those that had no customers, an outer join must be used. This time, Sales.Customer is on the right side of the join.
SELECT t.Name, CustomerID
FROM Sales.SalesTerritory t
LEFT OUTER JOIN Sales.Customer c ON t.TerritoryID = c.TerritoryID

Queries with outer joins can be tricky to write. Extra time and care must be spent making sure the results are correct. Think about the relationship between the tables and make sure that the outer join is continued down the path. This article covered almost everything you need to know about outer joins.

Tuesday, July 22, 2008

Password encryption and decryption in ColdFusion:-

You might have already noticed that even database servers like Microsoft's SQL Server 2000 have no method of hiding password fields from prying eyes. Instead, passwords are stored as plain text. Not good. Even Microsoft Access provides a way to mask fields you would prefer not to be easily read. Not so SQL Server - and probably quite a few other database servers suffer the same issue.
This is easily rectified, however, using two handy functions built in to ColdFusion from at least version 5.0 and above (I think they were in 4.x also). I've developed the following code using CFMX Updater 3.
The Problem
Many database servers, including MS SQL Server 2000, will store even fields you designate as a "password" as freely readable text. MS Access allows you to mask a field with characters like your typical HTML form password field. But how secure is it?
You might have access to Secure Sockets Layer (SSL) to encrypt user logins between the client and server - but what happens if other people have access to your database and can read passwords because they are not encrypted or masked in any way? Who do you trust? Using SSL and this method together is an ideal solution. If you cannot use SSL you should at least implement this solution to protect your passwords.
The Solution
The solution is simple. The same technique can be used across multiple sites/applications. We are going to use an application specific "key" to use for encrypting and decryption our passwords. Without the key it is difficult - if not impossible - for the data to be read. The advantage of this solution is that you might use the same password across multiple sites. But with a unique "key" the same password in a database will be different to the same password in another database.
I don't suggest you use this method for "super user" or administration accounts - utilise your OS security for that wherever you can.
The Code

Starting with our "application.cfm" file we are going to define the encryption key.
Add the following lines to the application.cfm file of the application you wish to protect:
<cfif not isdefined("Request.PasswordKey")>
<cfparam name="Request.PasswordKey" default="L2OIhfkjsyIJHK23jhfkuIYU">
</cfif>

We are using the REQUEST scope because it is available to ALL areas of the application and does not require locking (as in application/session variables). Because the application.cfm template is executed before every other template we test if it exists first (CFIF NOT ISDEFINED) - if it doesn't use CFPARAM to set the default. Future iterations of application.cfm will ignore the code in future (unless the server is restarted). The REQUEST scope is ideal for values that rarely change like DataSource Names, Administrator email addresses, Copyright messages, etc.
Important: make the default key value as random as you can.
Using the ColdFusion ENCRYPT function
Encrypt uses a symmetric key-based algorithm in which the same key is used to encrypt and decrypt a string. The security of the encrypted string depends on maintaining the secrecy of the key. Encrypt uses an XOR-based algorithm that uses a pseudo-random 32-bit key based on a seed passed by the user as a parameter to the function. The resultant data is UUencoded and may be as much as three times the original size (keep this in mind when setting the storage limit of your password field in your database).
Below is an example of a password (stored in MS SQL Server) AFTER it has been encrypted using a custom key (not the one above mind you):
In order to utilise encryption we need to let the user "register" so that we can encrypt their password. Your user will complete a HTML form and submit the form to our action page. Though not necessary it would be better to use SSL here.
User completes standard Register Now form and submits to our form action page:
<cfset Encrypted = Encrypt(Form.UserPassword, Request.PasswordKey)>
<cfset Form.UserPassword = Encrypted>
<cfinsert datasource="#Request.DataSourceName#" tablename="Users">

Assuming no errors have occurred, your user's account will now contain the password they submitted - but in encrypted form, according to the key we defined in application.cfm.
User now needs to login to our application
In order for our user to gain access to our application we now need to "decrypt" their stored password. Easy with the login form action page like below:
<cfset Encrypted = encrypt(Form.UserPassword, Request.PasswordKey)>
<cfquery name="MailingListUpdate" datasource="#Request.DataSourceName#">
SELECT *
FROM Users
WHERE EmailAddress = <cfqueryparam cfsqltype="cf_sql_varchar" value="#Form.UserEmailAddress#">
AND UserPassword = <cfqueryparam cfsqltype="cf_sql_varchar" value="#Encrypted#">
</cfquery>

User wants to "update" their details
To allow a user to do things like change their password we need to un-encrypt, or decrypt, their password and populate the form password field:
<cfoutput query="UserUpdate">
<cfform name="UserUpdate" action="index.cfm">
<table width="100%" border="0">
<td align="left" valign="top">
<td align="left" valign="top">
<cfinput type="text" name="password"
maxlength="16" size="20"
message="Please enter in a password"
value="#Decrypt(Password,Request.PasswordKey)#"> <sup>*</sup>
<input name="ID" type="hidden" value="#ID#">
<input name="fuseaction" type="hidden" value="SaveChanges">
<td> </td>
<td valign="top" align="center"><input type="Submit" value=" Save Changes " style="cursor:hand"></td>

When re-submitted to the database we re-use the encrypt code just as we did when the user registered to encrypt any changes.

Saturday, July 19, 2008

Automatically Sending Email from Sql Table Through Windows Services

Here is a Windows service code sample for automatically sending emails from your SQL table.

The example uses a Timer to fire the event automatically. The mail is sent when the timer elapses.

public System.Timers.Timer timer1;

protected override void OnStart(string[] args)
{
    timer1 = new System.Timers.Timer(60000);   // fire every 60 seconds
    timer1.Elapsed += new System.Timers.ElapsedEventHandler(timer1_Elapsed);
    timer1.Enabled = true;
}

private void timer1_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    SendEmail();
}

public void SendEmail()
{
    // Server and database names left blank as in the original; fill in your own
    SqlConnection conn = new SqlConnection(@"Server=;UID=sa;pwd=sa;Database=");
    SqlDataAdapter da = new SqlDataAdapter(
        "select top 1 * from tbl_mail where status=0 order by MailIn_DT", conn);
    DataSet ds = new DataSet();
    da.Fill(ds);

    if (ds.Tables[0].Rows.Count > 0)
    {
        DataRow dr = ds.Tables[0].Rows[0];
        string MailTo = dr["To_Mail"].ToString();
        string MailSubject = dr["Subject"].ToString();
        string MailMessage = dr["Message"].ToString();

        System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient("");
        client.UseDefaultCredentials = false;
        client.Credentials = new System.Net.NetworkCredential("Userid", "Password");
        client.EnableSsl = true;
        client.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network;
        System.Net.Mail.MailMessage Mail =
            new System.Net.Mail.MailMessage("Userid", MailTo, MailSubject, MailMessage);
        Mail.IsBodyHtml = true;
        client.Send(Mail);   // send the mail; mark the row's status afterward so it is not resent
    }
}


Unable to open D (Drive): by double clicking

In some situations, especially when an anti-virus program has cleaned, healed, disinfected or removed a worm, trojan horse or virus from the computer, an error may occur whenever users try to open or access a drive by double-clicking on the disk drive icon in Explorer or the My Computer window to enter the drive's folder. The problem occurs on hard disk drives, portable hard disk drives and USB flash drives, and Windows will prompt a dialog box with the following message:

Windows Script Host

Can not find script file autorun.vbs.

Sometimes you will be asked to debug the VBScript with error code 800A041F - Unexpected 'Next'.


Choose the program you want to use to open this file:

In this case, the "Always use the selected program to open this kind of file" option is grayed out.

The symptom occurs because autorun.vbs was created by a trojan horse or virus. The virus normally loads an autorun.inf file into the root folder of every hard drive or USB drive, and then executes an autorun.bat file which contains a script to apply and merge autorun.reg into the registry, possibly changing the following registry key to ensure that the virus is loaded when the system starts:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]

Finally, autorun.bat will call wscript.exe to run autorun.vbs.

When antivirus or security software detects the autorun.vbs file as infected, the file is deleted, removed or quarantined. However, the other files (autorun.*) and the registry value still refer to autorun.vbs, which no longer exists, hence the error when users double-click to open a drive folder.

To correct and solve this error, follow these steps:

Run Task Manager (Ctrl-Alt-Del or right click on Taskbar)
Stop wscript.exe process if available by highlighting the process name and clicking End Process.
Then terminate explorer.exe process.
In Task Manager, click on File -> New Task (Run...).
Type "cmd" (without quotes) into the Open text box and click OK.
Type the following command one by one followed by hitting Enter key:
del c:\autorun.* /f /s /q /a
del d:\autorun.* /f /s /q /a
del e:\autorun.* /f /s /q /a

c, d, e each represent drive letters on the Windows system. If there are more drives or partitions available, continue the command, altering it to the other drive letters. Note that you must also clean the autorun files from USB flash drives or portable hard disks, as the external drive may also be infected.

In Task Manager, click on File -> New Task (Run...).
Type "regedit" (without quotes) into the Open text box and click OK.
Navigate to the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon

Check that the value name and value data for the key are correct (the value data of Userinit includes the path to userinit.exe, which may be on a drive other than C, which is also valid; note also the trailing comma, which is needed):

If the value is incorrect, modify it to the valid value data.

Wednesday, July 16, 2008

pagination in coldfusion

Previous / Next n Records
This tutorial will demonstrate how to incorporate a navigation system that displays the next X records per page. Not sure what I mean? Take a look at the sample below:
 Previous 5 records
Next 5 records 

This example will demonstrate how to incorporate it into your query results. The first thing we must do is define two variables:
The first variable we're defining is "start", with a default value of 1. The variable "start" will be used to let the ColdFusion server know which record to begin displaying on the current page. The next variable is "disp"; this variable lets the ColdFusion server know how many records to display per page. You can make this value any number you want.
<!--- Start displaying with record 1 if not specified via url --->
<CFPARAM name="start" default="1">
<!--- Number of records to display on a page --->
<CFPARAM name="disp" default="5">

The variables displayed above will make the magic happen; next let's query the database for all records:
<!--- Fetch records --->
<CFQUERY name="data" datasource="MyDB">
SELECT MyField FROM MyTable <!--- replace with your own table and columns --->
</CFQUERY>

Ok, now that you have your database queried and you've defined where to start and how many rows to display, we're ready to begin displaying the data on the page:
<CFSET end=start + disp>
<CFIF start + disp GREATER THAN data.RecordCount>
<CFSET end=999>
<CFELSE>
<CFSET end=disp>
</CFIF>

<CFOUTPUT query="data" startrow="#start#" maxrows="#end#">
#CurrentRow#. #MyField#<br>
</CFOUTPUT>
<table border="0" cellpadding="10">
<!--- Display prev link --->
<CFIF start NOT EQUAL 1>
<CFIF start GTE disp>
<CFSET prev=disp>
<CFSET prevrec=start - disp>
<CFELSE>
<CFSET prev=start - 1>
<CFSET prevrec=1>
</CFIF>
<td><font face="wingdings">ç</font> <a
href="NextN.cfm?start=#prevrec#">Previous #prev#
records</a></td>
</CFIF>

<!--- Display next link --->
<CFIF end LT data.RecordCount>
<CFIF start + disp * 2 GTE data.RecordCount>
<CFSET next=data.RecordCount - start - disp + 1>
<CFELSE>
<CFSET next=disp>
</CFIF>
<td><a href="NextN.cfm?start=#Evaluate("start + disp")#">Next
#next# records</a> <font face="wingdings">è</font></td>
</CFIF>

That's it! That's all there is to adding "Previous/Next 'X'" navigation to your website.

CaSe SensitiVe password logins! in COLDFUSION

This tutorial will show you how to check your users that are logging in to your application for Case SenSitIVe passwords.
The first thing you need is the form that asks your users to log in, we'll call this page "login.cfm":
<form action="login_process.cfm" method="post">
Username: <input type="text" name="user_name" value=""><BR>
Password: <input type="password" name="pass_word" value=""><BR>
<input type="submit" name="login_user" value="Log In"><BR>

Now create the page that will log the user in; we'll call this page "login_process.cfm".
<cfquery name="qVerify" datasource="YourDSN">
SELECT ID, username, password
FROM Members
WHERE password = '#pass_word#'
AND username = '#user_name#'
</cfquery>

<!--- ok now check for the case sensitive password --->
<cfset comparison = Compare(FORM.pass_word, qVerify.password)>

<!--- see what to do --->
<cfif comparison eq 0>
<!--- User is good, log in and redirect --->
<cfelse>
<!--- user did not supply a valid CAsE sensitive password, alert and ask to login again! --->
</cfif>
Ok, the line that actually performs the case-sensitive check is:
<cfset comparison = Compare(FORM.pass_word, qVerify.password)>
That's it, you can now implement another layer of security by ensuring that your users are using their information correctly..

Creating a user authentication (Login) area in ColdFusion

Creating a user authentication (Login) area.

A lot of users ask me how they can create an area of their web site that only registered users are allowed to use. This is called "User Authentication (Members Only)" and can be created easily with ColdFusion.
The first thing that you must do is to create a table in your database called "tblAdmins".
Create the following fields in the database table:
Field Name Type
user_id AutoNumber
user_name text
user_pass text
This is where you will have your users' login data by default; this gives you a "location" to verify against.
You will need to create an ODBC connection for this database. To create an ODBC data source, simply open up your ColdFusion administrator (or contact your ISP). Call the ODBC data source "userLogin".
This tutorial will require 4 pages to be created.
- Application.cfm
- login.cfm
- login_process.cfm
- members_only.cfm
The first page that will need to be created is titled "Application.cfm". One thing you must keep in mind with this file is that it will always be executed before any ColdFusion (.cfm) file. Think of this as the page where you can define things and/or check for things. This will always run before any page, so this is a great place to check whether a user is logged in. So naturally, this is the place we're putting the following code on :)
Within this page you will create the following code:
<!--- Create the application --->
<cfapplication name="MyApp" clientmanagement="Yes"
sessionmanagement="Yes"
sessiontimeout="#CreateTimeSpan(0,0,15,0)#">
<!--- Now define that this user is logged out by default --->
<CFPARAM NAME="session.allowin" DEFAULT="false">
<!--- Now define this user id to zero by default, this will be used later on to access specific information about this user. --->
<CFPARAM NAME="session.user_id" DEFAULT="0">
<!--- Now if the variable "session.allowin" does not equal true, send user to the login page --->
The other thing you must check is whether the page calling this Application.cfm is the "login.cfm" page
or the "login_process.cfm" page. Since Application.cfm is always called, if this is not checked
the application will simply loop over and over. To check that, you do the following:
<cfif session.allowin neq "true">
<cfif ListLast(CGI.SCRIPT_NAME, "/") EQ "login.cfm">
<!--- allow the login page itself to run --->
<cfelseif ListLast(CGI.SCRIPT_NAME, "/") EQ "login_process.cfm">
<!--- allow the login action page to run --->
<cfelse>
<!--- this user is not logged in, alert user and redirect to the login.cfm page --->
<script language="JavaScript">
alert("You must login to access this area!");
self.location="login.cfm";
</script>
</cfif>
</cfif>

[NOTE: I updated the code above, because a lot of people were having problems implementing it because they did not understand how CGI.SCRIPT_NAME works. This will resolve those issues and should work AS IS in all cases - Pablo]
That is all you need in the Application.cfm page. I'll explain the items as best as possible:
The first thing you created was the "cfapplication". This command creates the ability to track users, create session variables and much more. This is needed to keep order within the application. One crucial section of this tag is the value you specify for "sessiontimeout". This specifies how long the user will stay logged in before having to log in again. This time is only counted if the user does nothing. If your pages are small and do not require large amounts of reading, then 15 minutes should be enough time. However, if your pages contain a lot of text and require lots of reading, then you might have to increase the time specified. The CreateTimeSpan arguments are as follows:

CreateTimeSpan(days, hours, minutes, seconds)
The next thing you created was a <cfparam>. What the <cfparam> does is to define a value for a variable if (and only if) that variable doesn't already exist. If that variable does exist, it simply does nothing.

The last thing you created was a way to check on all pages to make sure that the user is correctly logged in. It simply checks for a session variable called "session.allowin". If this variable has a value of "TRUE" then that user is logged in; if it has a value other than "TRUE" (i.e. "FALSE") then this user is not logged in, so send them to the login page.
The next step in this tutorial is the "login.cfm" page.
This page is simply HTML, it doesn't really require any Coldfusion code, here is what it must have:
<form action="login_process.cfm" method="post">
Username: <input type="text" name="user_name" value=""><BR>
Password: <input type="password" name="user_pass" value=""><BR>
<input type="submit" name="login_user" value="Log In"><BR>
The login page, will submit the form to the "login_process.cfm" page. That page however, is where the magic takes place. I'll create the entire page, and then come back and explain it.

<!--- Get all records from the database that match this users credentials --->
<cfquery name="qVerify" datasource="userLogin">
SELECT user_id, user_name, user_pass
FROM tblAdmins
WHERE user_name = '#user_name#'
AND user_pass = '#user_pass#'
</cfquery>
<cfif qVerify.RecordCount>
<!--- This user has logged in correctly, change the value of the session.allowin value --->
<cfset session.allowin = "True">
<cfset session.user_id = qVerify.user_id>
<!--- Now welcome user and redirect to "members_only.cfm" --->
<script language="JavaScript">
alert("Welcome user, you have been successfully logged in!");
self.location="members_only.cfm";
</script>
<cfelse>
<!--- this user did not log in correctly, alert and redirect to the login page --->
<script language="JavaScript">
alert("Your credentials could not be verified, please try again!!!");
self.location="login.cfm";
</script>
</cfif>
That is all that I needed on this page. What you're basically doing is as follows:
First you are making a connection to the database with the username/password the user typed in on the "login.cfm" page. You are making a call to the database to look in the "tblAdmins" table for a user with this combination of username/password. if a match is found, then you have a record, if no matches are found, then there are no records.

The next step is to do a <cfif> to see if any records were found. If a record was found, then this user is good, go ahead and log them in. if there are no matches, then this user is no good, keep him out of the members only section.
If the user was good, then you overwrite the existing value of the "session.allowin" variable with "TRUE". Remember that the "Application.cfm" checks for this value to be anything other than true to make the user log in. Since this now has a value of "TRUE", the user is logged in and therefore does not need to log in once again.
The last page you must create is the "members_only.cfm". This can be anything you want, this is the content that the user is logging in for, so make it good :)

Monday, July 7, 2008

ColdFusion Security Checklist

Validate Input Parameters
All input parameters (url, form, cookie, cgi) are controlled by outside sources and should not be trusted. Always be sure to validate this data on the server side before using it. Don't forget that hidden form fields are not hidden! Do not rely on JavaScript to validate variables. Look into isValid() for an easy way to validate data.

Along with validating data types, the htmlEditFormat() function can be used to help prevent cross-site scripting attacks. In general the htmlEditFormat() function should be used on all input parameters.
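As a sketch of both ideas together, you might validate the type first and escape the raw value before echoing it back (the url.id parameter is illustrative):

```cfm
<!--- Check the type before using the value --->
<cfif isValid("integer", url.id)>
    <cfoutput>Viewing record #url.id#</cfoutput>
<cfelse>
    <!--- Escape the raw input before echoing it back, so injected markup is neutralized --->
    <cfoutput>Invalid id: #htmlEditFormat(url.id)#</cfoutput>
</cfif>
```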

Use cfqueryparam in Dynamic Queries
Any query that makes use of dynamic data should employ cfqueryparam. This tag not only helps validate the data and prevent SQL injection attacks, it also results in a faster query in most database systems.
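A minimal example of a parameterized query (the datasource, table, and column names are illustrative):

```cfm
<cfquery name="qProduct" datasource="myDSN">
    SELECT product_id, product_name
    FROM tblProducts
    <!--- cfqueryparam binds the value and enforces the declared SQL type --->
    WHERE product_id = <cfqueryparam value="#url.id#" cfsqltype="cf_sql_integer">
</cfquery>
```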

Turn Off Robust Exception Information
The ColdFusion Administrator has an option to show a great deal of information when errors occur. While this is handy on a development machine, it shows too much information on a production machine. Turn this off.

Use Error Handling
ColdFusion allows for easy error handling using the onError method of Application.cfc, the <cferror> tag, or the global error handler defined in the Administrator. At best, you should log errors and email reports to the administrator. At the least, you should ensure errors do not get presented to the user.
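A sketch of an onError handler in Application.cfc (the log file name and email addresses are placeholders, and cfmail assumes a mail server is configured in the Administrator):

```cfm
<cffunction name="onError" returntype="void" output="true">
    <cfargument name="exception" required="true">
    <cfargument name="eventName" type="string" required="true">
    <!--- Log the error and notify the administrator --->
    <cflog file="siteErrors" type="error" text="#arguments.exception.message#">
    <cfmail to="admin@example.com" from="site@example.com" subject="Site Error">
        #arguments.exception.message#
    </cfmail>
    <!--- Show the user a generic message, never the raw error --->
    <cfoutput><p>Sorry, something went wrong. The administrator has been notified.</p></cfoutput>
</cffunction>
```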

Use username/password attributes of <cfquery>, do not store in DSN
When creating a DSN, you have the option of setting the username and password. You should instead store the username and password in the code itself. This prevents your DSN from being useable across a shared server. Note that your ISP can (and should) use sandbox security, which would make this tip irrelevant. The flip side to this is that if someone gains access to your code, they will have access to the username and password. If working on a shared server, you must ensure that the ISP has protected your files and folders. Again - use sandbox security. Do not use the sa or root level username and password for connecting to a DSN.

Remove permissions from DSNs
ColdFusion lets you restrict what types of operations can be done via a DSN. Remove any unnecessary permission.

Use Encryption
ColdFusion comes with built-in encryption tools. There is no reason not to encrypt sensitive information like credit card numbers and passwords. See encrypt() and encryptBinary() for more information.
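A minimal sketch using AES (in practice the key would be generated once and stored securely, not created inline like this):

```cfm
<cfset secretKey = generateSecretKey("AES")>
<!--- Encrypt before storing; decrypt only when the value is actually needed --->
<cfset encryptedCard = encrypt(form.card_number, secretKey, "AES", "Base64")>
<cfset plainCard = decrypt(encryptedCard, secretKey, "AES", "Base64")>
```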

Keep files out of web root
Any file that does not need to be in the web root (like an include, custom tag, etc.) should be moved. The only files that should live under the web root are files that you intend to serve directly in the browser.

Run ColdFusion as a User
By default ColdFusion will run as a system user. You should create a user with the bare minimum rights and have ColdFusion run as that user.

Scopes in a CFC

Variables Scope:-
The variables scope is available to the entire CFC. A value set in the variables scope in one method, or in the constructor, will be available to any other method, or the constructor, in the CFC. I typically use variables in much the same way I use Application variables in a site.

This Scope:-
The This scope acts like the Variables scope in that it is "global" to the CFC. However, it is also accessible outside the CFC. So if foo is an instance of the CFC, and you do <cfset = "Rabid"> outside of the CFC, then you have just created a key called "name" in the This scope with the value of "Rabid." You can also cfoutput the value of #. Because of this accessibility, many folks recommend against using the This scope, and instead suggest relying on methods to set data (or get data) inside the CFC. These methods then write to the Variables scope. To repeat: <cfset = "Rabid"> (where foo is an instance of the CFC) is the same as <cfset = "Rabid"> inside the CFC.

Var Scope:-
Var-scoped variables exist only for the duration of the method. You MUST use the var scope for any variable that should exist only inside the method, like query names, loop iterators, etc. Sorry to "shout," but the lack of var scoping is one of the trickiest things to debug when things go wrong. Unlike the other scopes, you do not prefix the variable with the scope name.
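A sketch of proper var scoping inside a CFC method (the method and variable names are illustrative):

```cfm
<cffunction name="sumToTen" returntype="numeric" output="false">
    <!--- var-scoped: these exist only for this call and cannot leak between requests --->
    <cfset var total = 0>
    <cfset var i = 0>
    <cfloop from="1" to="10" index="i">
        <cfset total = total + i>
    </cfloop>
    <cfreturn total>
</cffunction>
```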

Arguments Scope:-
The arguments scope consists of every argument passed to the method. So if the calling code did foo(name="King Camden"), then the foo method will have a variable called arguments.name.
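For instance, the value passed by the caller is available inside the method through the arguments scope (the method and argument names are illustrative):

```cfm
<cffunction name="foo" returntype="string" output="false">
    <cfargument name="name" type="string" required="true">
    <!--- The passed value is available as arguments.name --->
    <cfreturn "Hello, #arguments.name#">
</cffunction>
```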
Form, URL, Application, Session, Server, CGI, Client, Request, Cookie:-

These scopes act exactly as they do anywhere else. In general, you should not use these scopes inside a CFC. When you do, you are making your CFCs less portable between applications.

Caller, Attributes:-

Caller and Attributes do not exist inside a CFC. They should only be used inside custom tags.


Super:-
While not a variable scope, the super scope is a pointer to the methods in the CFC that acts as the parent of the current CFC. So if a CFC inherits a CFC with similar method names, the child CFC can refer to the parent's method by using super. Example: <cfset result = super.init()>.
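A small sketch of super in action, assuming a hypothetical child.cfc that extends parent.cfc and overrides its init() method:

```cfm
<!--- child.cfc: <cfcomponent extends="parent"> --->
<cffunction name="init" returntype="any" output="false">
    <!--- Run the parent's init() first, then do child-specific setup --->
    <cfset super.init()>
    <cfreturn this>
</cffunction>
```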

Tuesday, July 1, 2008

How Web 3.0 Will Work

Web 3.0 Basics

Internet experts think Web 3.0 is going to be like having a personal assistant who knows practically everything about you and can access all the information on the Internet to answer any question. Many compare Web 3.0 to a giant database. While Web 2.0 uses the Internet to make connections between people, Web 3.0 will use the Internet to make connections with information. Some experts see Web 3.0 replacing the current Web while others believe it will exist as a separate network.

Planning a tropical getaway? Web 3.0 might help simplify your travel plans.

It's easier to get the concept with an example. Let's say that you're thinking about going on a vacation. You want to go someplace warm and tropical. You have set aside a budget of $3,000 for your trip. You want a nice place to stay, but you don't want it to take up too much of your budget. You also want a good deal on a flight.

With the Web technology currently available to you, you'd have to do a lot of research to find the best vacation options. You'd need to research potential destinations and decide which one is right for you. You might visit two or three discount travel sites and compare rates for flights and hotel rooms. You'd spend a lot of your time looking through results on various search engine results pages. The entire process could take several hours.

Your Life on the Web
If your Web 3.0 browser retrieves information for you based on your likes and dislikes, could other people learn things about you that you'd rather keep private by looking at your results? What if someone performs an Internet search on you? Will your activities on the Internet become public knowledge? Some people worry that by the time we have answers to these questions, it'll be too late to do anything about it.

According to some Internet experts, with Web 3.0 you'll be able to sit back and let the Internet do all the work for you. You could use a search service and narrow the parameters of your search. The browser program then gathers, analyzes and presents the data to you in a way that makes comparison a snap. It can do this because Web 3.0 will be able to understand information on the Web.

Right now, when you use a Web search engine, the engine isn't able to really understand your search. It looks for Web pages that contain the keywords found in your search terms. The search engine can't tell if the Web page is actually relevant for your search. It can only tell that the keyword appears on the Web page. For example, if you searched for the term "Saturn," you'd end up with results for Web pages about the planet and others about the car manufacturer.

A Web 3.0 search engine could find not only the keywords in your search, but also interpret the context of your request. It would return relevant results and suggest other content related to your search terms. In our vacation example, if you typed "tropical vacation destinations under $3,000" as a search request, the Web 3.0 browser might include a list of fun activities or great restaurants related to the search results. It would treat the entire Internet as a massive database of information available for any query.

Web 3.0 Approaches

You never know how future technology will eventually turn out. In the case of Web 3.0, most Internet experts agree about its general traits. They believe that Web 3.0 will provide users with richer and more relevant experiences. Many also believe that with Web 3.0, every user will have a unique Internet profile based on that user's browsing history. Web 3.0 will use this profile to tailor the browsing experience to each individual. That means that if two different people each performed an Internet search with the same keywords using the same service, they'd receive different results determined by their individual profiles.

Web 3.0 will likely plug into your individual tastes and browsing habits.

The technologies and software required for this kind of application aren't yet mature. Services like TiVO and Pandora provide individualized content based on user input, but they both rely on a trial-and-error approach that isn't as efficient as what the experts say Web 3.0 will be. More importantly, both TiVO and Pandora have a limited scope -- television shows and music, respectively -- whereas Web 3.0 will involve all the information on the Internet.

Some experts believe that the foundation for Web 3.0 will be application programming interfaces (APIs). An API is an interface designed to allow developers to create applications that take advantage of a certain set of resources. Many Web 2.0 sites include APIs that give programmers access to the sites' unique data and capabilities. For example, Facebook's API allows developers to create programs that use Facebook as a staging ground for games, quizzes, product reviews and more.

One Web 2.0 trend that could help the development of Web 3.0 is the mashup. A mashup is the combination of two or more applications into a single application. For example, a developer might combine a program that lets users review restaurants with Google Maps. The new mashup application could show not only restaurant reviews, but also map them out so that the user could see the restaurants' locations. Some Internet experts believe that creating mashups will be so easy in Web 3.0 that anyone will be able to do it.

Widgets are small applications that people can insert into Web pages by copying and embedding lines of code into a Web page's code. They can be games, news feeds, video players or just about anything else. Some Internet prognosticators believe that Web 3.0 will let users combine widgets together to make mashups by just clicking and dragging a couple of icons into a box on a Web page. Want an application that shows you where news stories are happening? Combine a news feed icon with a Google Earth icon and Web 3.0 does the rest. How? Well, no one has quite figured that part out yet.

Other experts think that Web 3.0 will start fresh. Instead of using HTML as the basic coding language, it will rely on some new -- and unnamed -- language. These experts suggest it might be easier to start from scratch rather than try to change the current Web. However, this version of Web 3.0 is so theoretical that it's practically impossible to say how it will work.

The man responsible for the World Wide Web has his own theory of what the future of the Web will be. He calls it the Semantic Web, and many Internet experts borrow heavily from his work when talking about Web 3.0.