Thursday, December 20, 2007

Speed Optimization in ASP.NET 2.0 Web Applications

Use HTML controls whenever possible

HTML controls are lighter than server controls, especially when server controls are used with their default properties. Server controls are generally easier to use than HTML controls, but they are also slower. So it is recommended to use HTML controls whenever possible and to avoid unnecessary server controls.

Avoid round trips to server whenever possible

Using server controls extensively increases round trips to the server through their postback events, which wastes a lot of time. Avoid these unnecessary round trips and postback events whenever possible. For example, validating user input can always (or at least in most cases) take place on the client side; there is no need to send the input to the server to check its validity. In general, avoid code that causes a round trip to the server.

The Page.IsPostBack Property

The Page.IsPostBack Boolean property indicates whether the page is being loaded in response to a round trip to the server or for the first time. It helps you write code that is needed only the first time the page is loaded, and avoid re-running that same code on every postback. You can use this property efficiently in the Page_Load event, which executes each time a page is loaded, to conditionally skip work that does not need to be repeated.
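As a minimal sketch, the Page_Load handler below binds data only on the first request (the grid control and the GetProducts helper are illustrative names, not from this article):

```vbnet
' Runs on every request; the IsPostBack check skips work on postbacks.
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    If Not Page.IsPostBack Then
        ' First request only: bind the (hypothetical) grid to its data source.
        ProductsGrid.DataSource = GetProducts()
        ProductsGrid.DataBind()
    End If
End Sub
```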

Server Control's AutoPostBack Property

Always set this property to false except when you really need to turn it on. When enabled, it automatically posts back to the server after some action takes place, depending on the type of the control. For example, on a TextBox control this property automatically triggers a postback whenever the text changes.

Leave Buffering on

It is important to leave page buffering turned on to improve your page speed, unless you have a serious reason to turn it off.

Server Controls View State

A server control by default saves all the values of its properties between round trips, which increases both page size and processing time, an undesirable behavior. Disable a server control's view state whenever possible. For example, if you bind data to a server control each time the page is posted back, it is useful to disable the control's view state; this reduces page size and processing time.

Methods for redirection

There are many ways to redirect a user from the current page to another page in the same application; the most efficient methods are the Server.Transfer method and cross-page posting.
Web Applications

The following topics give you some tips about how to make an efficient web application:

Precompilation

When a page of an already deployed ASP.NET web application is requested for the first time, the server must compile that page before the user gets a response. The compiled page or code is then cached, so it need not be compiled again for subsequent requests. Clearly, the first user gets a slower response than the following users. This scenario is repeated for each web page and code file within your web site.

With precompilation, all of the web application's pages and code files are compiled ahead of time. So when a user requests a page from this web application, he will get it in a reasonable response time whether or not he is the first user.

Precompiling the entire web application before making it available to users provides faster response times. This is very useful on frequently updated large web applications.

Encoding

By default, ASP.NET applications use UTF-8 encoding. If your application uses only ASCII characters, it is preferable to set your encoding to ASCII to improve your application's performance.

Authentication

It is recommended to turn authentication off when you do not need it. The default authentication mode for ASP.NET applications is Windows mode. In many cases it is preferable to turn off authentication in the 'machine.config' file located on your server and to enable it only for the applications that really need it.

Debug Mode

Before deploying your web application you have to disable debug mode. This makes your deployed application faster. You can disable or enable debug mode from within your application's 'web.config' file, under the 'system.web' section, as an attribute of the 'compilation' element. You can set it to 'true' or 'false'.
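For example, the relevant fragment of 'web.config' looks like this for a release deployment:

```xml
<configuration>
  <system.web>
    <!-- Set debug="false" before deploying to production. -->
    <compilation debug="false" />
  </system.web>
</configuration>
```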
Coding Practices

The following topics give you guidelines to write efficient code:

Page Size

A large web page consumes more bandwidth during its transfer over the network. Page size is affected by the number and types of controls the page contains, and by the number of images and the amount of data used to render it. The rule is simple: the larger the page, the slower the response. Try to make your web pages as small and light as possible; this will improve response time.

Exception Handling

In terms of performance, it is better to detect conditions in your code that may cause exceptions instead of relying on catching exceptions and handling them. You should avoid common exceptions like null references and division by zero by checking for them explicitly in your code.

The following code gives you two examples: the first uses exception handling and the second tests for a condition. Both examples produce the same result, but the performance of the first suffers significantly.




' This is not recommended.
Try
    Output = 100 / number
Catch ex As Exception
    Output = 0
End Try

' This is preferred.
If number <> 0 Then
    Output = 100 / number
Else
    Output = 0
End If






Garbage Collector

ASP.NET provides automatic garbage collection and memory management. The garbage collector's main task is to allocate and release memory for your application. Here are some tips to keep in mind when writing your application's code so that the garbage collector works to your benefit:

Avoid objects that implement a Finalize method where possible, and avoid freeing resources in Finalize methods.
Avoid allocating too much memory per web page, because the garbage collector will have to do more work for each request, which increases CPU utilization (not to mention that a large web application can run out of memory).
Avoid holding unnecessary references to objects, because a referenced object stays alive until you release the reference yourself in your code rather than being collected automatically.

Use Try / Finally

If you use exceptions anyway, then always use a Try/Finally block to handle them. The Finally section runs whether or not an exception occurred, so you can use it to clean up and release your resources in either case.
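A minimal sketch (the connection string and the ExecuteQuery helper are illustrative, not from this article):

```vbnet
' Requires Imports System.Data.SqlClient.
' The Finally block runs whether or not ExecuteQuery throws.
Dim connection As New SqlConnection(connectionString)
Try
    connection.Open()
    ExecuteQuery(connection)   ' hypothetical helper
Finally
    connection.Close()         ' always releases the connection
End Try
```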

String Concatenation

Repeated string concatenation is a time-consuming operation. So, if you want to concatenate many strings, for example to dynamically build HTML or XML, use the System.Text.StringBuilder class instead of the System.String data type. The Append method of the StringBuilder class is more efficient than concatenation.
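For instance, a sketch of building an HTML fragment this way (the items collection is an illustrative name):

```vbnet
' Build a fragment of HTML with StringBuilder instead of repeated & concatenation.
Dim builder As New System.Text.StringBuilder()
For Each item As String In items   ' items is a hypothetical collection
    builder.Append("<li>")
    builder.Append(item)
    builder.Append("</li>")
Next
Dim html As String = builder.ToString()
```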

Google Checkout

Google Checkout enables us to process customer orders in two ways:



Option 1: Integrate only your shopping cart (easier: requires only a working knowledge of HTML)

* Send shopping carts to Google using HTML code you add to your site.
* Process orders through the Google Checkout Merchant Center.
* Integrate using the HTML API Developer's Guide


Option 2: Integrate your cart AND order processing system (more complex: requires API programming experience)

* Send shopping carts to Google via an XML-based API.
* Process orders through your existing order processing system.
* Integrate using the XML API Developer's Guide



In Option 1, we build a single HTML form with hidden fields containing the information needed for the Google Checkout process. Orders are then processed through the Google Checkout merchant account.
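A sketch of such a form (the merchant ID placeholder and the hidden-field names follow the Checkout HTML API conventions but are shown here only as an illustration; consult the HTML API Developer's Guide for the exact names):

```html
<!-- Illustrative only: posts a one-item cart to Google Checkout. -->
<form method="post"
      action="https://checkout.google.com/api/checkout/v2/checkoutForm/Merchant/YOUR_MERCHANT_ID">
  <input type="hidden" name="item_name_1" value="Widget" />
  <input type="hidden" name="item_description_1" value="A sample widget" />
  <input type="hidden" name="item_quantity_1" value="1" />
  <input type="hidden" name="item_price_1" value="9.99" />
  <input type="hidden" name="item_currency_1" value="USD" />
  <input type="submit" value="Checkout" />
</form>
```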



In Option 2, we can integrate our existing order processing system with Google Checkout via the XML API.





In Option 1, the customer logs in to his Google Checkout account, where he can view the purchase information our site sent via the hidden fields in the HTML form.

Similarly, we (the site administrator) can review the order information and perform operations such as charging the customer for the order.



The Getting Started documentation is linked from the Google Checkout site.

SQL Interview questions

What is RDBMS?
Relational Database Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine data items from different files, providing powerful tools for data usage.

What is normalization?
Database normalization is a data design and organization process applied to data structures, based on rules that help build relational databases. In relational database design, it is the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are different normalization forms?

1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
BCNF: Boyce-Codd Normal Form
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form
A model free from all modification anomalies.

Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.
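As a small illustration of the early steps (all table and column names are invented for this sketch), an unnormalized orders table that repeats customer data on every row can be split so each fact lives in one place:

```sql
-- Unnormalized: customer name and phone repeated on every order row.
-- Orders(OrderID, CustomerName, CustomerPhone, ProductID, Qty)

-- After normalization: customer facts live in one table,
-- and orders reference them by key.
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    CustomerName VARCHAR(100),
    CustomerPhone VARCHAR(20)
);

CREATE TABLE Orders (
    OrderID INT PRIMARY KEY,
    CustomerID INT REFERENCES Customers(CustomerID),
    ProductID INT,
    Qty INT
);
```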

What is Stored Procedure?
A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.
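A minimal T-SQL sketch of a user-created procedure (the names are illustrative):

```sql
-- A parameterized stored procedure usable by many clients
-- with different input data.
CREATE PROCEDURE GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM Orders
    WHERE CustomerID = @CustomerID;
END
GO

-- Execute it with a particular input:
EXEC GetOrdersByCustomer @CustomerID = 42;
```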

What is Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed directly; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a call to the procedure, while triggers are implicitly executed. In addition, triggers can also execute stored procedures.
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
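A sketch of an AFTER UPDATE trigger (table and column names are illustrative):

```sql
-- Fires automatically after any UPDATE on Orders;
-- it cannot be called directly.
CREATE TRIGGER trg_Orders_Audit
ON Orders
AFTER UPDATE
AS
BEGIN
    INSERT INTO OrdersAudit (OrderID, ChangedAt)
    SELECT OrderID, GETDATE()
    FROM inserted;   -- pseudo-table holding the new row values
END
```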

What is View?
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using standard T-SQL select command and can come from one to many different base tables or even other views.
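A minimal sketch (the table and column names are illustrative):

```sql
-- A view exposing a subset of the Employees table.
CREATE VIEW vw_ActiveEmployees
AS
SELECT EmployeeID, FirstName, LastName
FROM Employees
WHERE IsActive = 1;
GO

-- Queried like a table; the rows are constructed at query time,
-- not stored permanently.
SELECT * FROM vw_ActiveEmployees;
```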

What is Index?
An index is a physical structure containing pointers to the data. Indices are created on an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. Users cannot see the indexes; they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query; in a table scan, SQL Server examines every row in the table to satisfy the query. Table scans are sometimes unavoidable, but on large tables they have a severe impact on performance.

Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.
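In T-SQL, both kinds are created with CREATE INDEX (names are illustrative):

```sql
-- One clustered index per table: it defines the physical row order.
CREATE CLUSTERED INDEX IX_Orders_OrderID
    ON Orders (OrderID);

-- Optionally, several nonclustered indexes referencing the table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON Orders (CustomerID);
```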

What are the difference between clustered and a non-clustered index?
A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.

A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.

What are the different index configurations a table can have?
A table can have one of the following index configurations:

No indexes
A clustered index
A clustered index and many nonclustered indexes
A nonclustered index
Many nonclustered indexes

What is cursor?
A cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of using the typical SQL commands that operate on all the rows in the set at one time.

In order to work with a cursor we need to perform some steps in the following order:

Declare cursor
Open cursor
Fetch row from the cursor
Process fetched row
Close cursor
Deallocate cursor
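The steps above can be sketched in T-SQL (table and column names are illustrative):

```sql
DECLARE @Name VARCHAR(100);

DECLARE customer_cursor CURSOR FOR            -- 1. declare
    SELECT CustomerName FROM Customers;

OPEN customer_cursor;                         -- 2. open
FETCH NEXT FROM customer_cursor INTO @Name;   -- 3. fetch a row

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name;                              -- 4. process the fetched row
    FETCH NEXT FROM customer_cursor INTO @Name;
END

CLOSE customer_cursor;                        -- 5. close
DEALLOCATE customer_cursor;                   -- 6. deallocate
```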

What is the use of DBCC commands?
DBCC stands for Database Consistency Checker. We use these commands to check the consistency of databases, i.e., for maintenance, validation tasks, and status checks.
E.g. DBCC CHECKDB - Ensures that tables in the db and the indexes are correctly linked.
DBCC CHECKALLOC - To check that all pages in a db are correctly allocated.
DBCC CHECKFILEGROUP - Checks all tables in a filegroup for damage.

What is a Linked Server?
Linked servers are a SQL Server feature by which we can register another SQL Server instance and query both servers' databases using T-SQL statements. With a linked server, you can create clean, easy-to-follow SQL statements that retrieve remote data and join and combine it with local data.
The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.

What is Collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.

What are different types of Collation Sensitivity?
Case sensitivity
A and a, B and b, etc.

Accent sensitivity
a and á, o and ó, etc.

Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.

Width sensitivity
When a single-byte (half-width) character and the same character represented as a double-byte (full-width) character are treated differently, it is width sensitive.

What’s the difference between a primary key and a unique key?
Both a primary key and a unique key enforce uniqueness of the columns on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index. Another major difference is that a primary key does not allow NULLs, while a unique key allows a single NULL.

How to implement one-to-one, one-to-many and many-to-many relationships while designing tables?
One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.
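A sketch of the many-to-many case (table names are illustrative):

```sql
-- Many-to-many between Students and Courses via a junction table.
-- The composite primary key combines the keys of both sides.
CREATE TABLE StudentCourses (
    StudentID INT REFERENCES Students(StudentID),
    CourseID  INT REFERENCES Courses(CourseID),
    PRIMARY KEY (StudentID, CourseID)
);
```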

What is a NOLOCK?
The NOLOCK query hint is sometimes used to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a dirty read, which means that another process could be updating the data at the exact time you are reading it; there is no guarantee that your query will retrieve the most recent data. The advantage for performance is that your reads will not block updates, and updates will not block your reads. Without the hint, SELECT statements take shared (read) locks: multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates queue until all the reads have completed, and reads requested after the update wait for the update to complete. The result for your system is delay (blocking).
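For example (the table name is illustrative):

```sql
-- Reads Orders without taking shared locks; may return
-- uncommitted (dirty) data.
SELECT OrderID, Status
FROM Orders WITH (NOLOCK);
```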

What is difference between DELETE & TRUNCATE commands?
The DELETE command removes rows from a table based on the condition that we provide in a WHERE clause. TRUNCATE removes all the rows from a table, and there will be no data in the table after we run the command.

TRUNCATE
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger.
TRUNCATE cannot be rolled back after the transaction commits (within an explicit transaction it can still be rolled back).
TRUNCATE is DDL Command.
TRUNCATE Resets identity of the table.

DELETE
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE can be rolled back.
DELETE is DML Command.
DELETE does not reset the identity of the table.
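Side by side (table names are illustrative):

```sql
-- Remove only matching rows, one at a time, fully logged:
DELETE FROM Orders WHERE Status = 'Cancelled';

-- Remove every row quickly, deallocating pages and
-- resetting the identity seed:
TRUNCATE TABLE OrdersStaging;
```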

Difference between Function and Stored Procedure?
A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas a stored procedure cannot.
UDFs that return tables can be treated as another rowset and used in JOINs with other tables.
Inline UDFs can be thought of as views that take parameters, and can be used in JOINs and other rowset operations.

What is the use of the UPDATE STATISTICS command?
This command is used after a large amount of data processing has occurred. If a large number of deletions, modifications, or bulk copies into the tables has taken place, the statistics for the indexes must be brought up to date to take these changes into account. UPDATE STATISTICS refreshes the statistics for the indexes on these tables accordingly.

What types of Joins are possible with Sql Server?
Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table.
Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT OUTER JOINS, RIGHT OUTER JOINS and FULL OUTER JOINS.

What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?
HAVING specifies a search condition for a group or an aggregate. It can be used only with the SELECT statement and is typically used with a GROUP BY clause. The WHERE clause is applied to each row before the rows become part of the GROUP BY operation, while the HAVING criteria are applied after the grouping of rows has occurred. When GROUP BY is not used, HAVING behaves like a WHERE clause.

What is sub-query? Explain properties of sub-query.
Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of parentheses. Sub-queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword.

A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT statement if executed independently of the T-SQL statement, in which it is nested, will return a result set. Meaning a subquery SELECT statement can standalone and is not depended on the statement in which it is nested. A subquery SELECT statement can return any number of values, and can be found in, the column list of a SELECT statement, a FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A Subquery can also be used as a parameter to a function call. Basically a subquery can be used anywhere an expression can be used.

Properties of Sub-Query
A subquery must be enclosed in parentheses.
A subquery is placed on the right-hand side of the comparison operator.
A subquery cannot contain an ORDER BY clause.
A query can contain more than one subquery.
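Two short sketches (table and column names are illustrative):

```sql
-- Single-row subquery used as an atomic value:
SELECT ProductName
FROM Products
WHERE Price = (SELECT MAX(Price) FROM Products);

-- Multiple-row subquery compared with IN:
SELECT CustomerName
FROM Customers
WHERE CustomerID IN (SELECT CustomerID FROM Orders);
```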

What are types of sub-queries?
Single-row subquery, where the subquery returns only one row;
Multiple-row subquery, where the subquery returns multiple rows; and
Multiple-column subquery, where the subquery returns multiple columns.

What is SQL Profiler?
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performance by executing too slowly.

Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.

What is User Defined Functions?
User-defined functions allow you to define your own T-SQL functions that accept zero or more parameters and return either a single scalar data value or a table data type.

What kind of User-Defined Functions can be created?
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-Valued and Multi-statement Table-valued.

Scalar User-Defined Function
A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.

Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.

Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view, as the function can use multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. The ability to pass parameters into a T-SQL select command, or a group of them, in essence gives us a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike a stored procedure, which can also return record sets but cannot be used that way.

Which TCP/IP port does SQL Server run on? How can it be changed?
By default, SQL Server listens on TCP port 1433. It can be changed from the Network Utility, TCP/IP properties -> Port number, on both the client and the server.

What are the authentication modes in SQL Server? How can it be changed?
Windows mode and mixed mode (SQL & Windows).

To change authentication mode in SQL Server click Start, Programs, and Microsoft SQL Server and click SQL Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group. Select the server then from the Tools menu select SQL Server Configuration Properties, and choose the Security page.

Where are SQL Server users' names and passwords stored?
They are stored in the sysxlogins table of the master database.

Which command using Query Analyzer will give you the version of SQL server and operating system?

SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')

What is SQL server agent?
SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.

Can a stored procedure call itself or recursive stored procedure? How many levels SP nesting possible?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.

What is @@ERROR?
The @@ERROR automatic variable returns the error code of the last Transact-SQL statement. If there was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement, save it to a local variable if you need to process it further.
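For example (the table name is illustrative):

```sql
DECLARE @err INT;

UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 42;
SET @err = @@ERROR;   -- capture immediately; the next statement resets it

IF @err <> 0
    PRINT 'Update failed with error ' + CAST(@err AS VARCHAR(10));
```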

What is Raise error?
Stored procedures report errors to client applications via the RAISERROR command. RAISERROR doesn’t change the flow of a procedure; it merely displays an error message, sets the @@ERROR automatic variable, and optionally writes the message to the SQL Server error log and the NT application event log.

What is log shipping?
Log shipping is the process of automating the backup of the database and transaction log files on a production SQL Server and then restoring them onto a standby server. Only the Enterprise Edition supports log shipping. In log shipping, the transaction log from one server is automatically applied to the backup database on the other server. If one server fails, the other server holds the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it automatically backs up the transaction logs throughout the day and automatically restores them on the standby server at a defined interval.

What is the difference between a local and a global temporary table?
A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement.

A global temporary table is visible to all connections. It is dropped when the connection that created it closes and all other tasks have stopped referencing it; neither its definition nor its data remains in the database permanently.

What are the different types of replication? Explain.
The SQL Server 2000-supported replication types are as follows:

* Transactional
* Snapshot
* Merge

Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. Snapshot replication is best used as a method for replicating data that changes infrequently or where the most up-to-date values (low latency) are not a requirement. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.

In transactional replication, an initial snapshot of data is applied at Subscribers; then, when data modifications are made at the Publisher, the individual transactions are captured and propagated to Subscribers.

Merge replication is the process of distributing data from Publisher to Subscribers, allowing the Publisher and Subscribers to make updates while connected or disconnected, and then merging the updates between sites when they are connected.

What are the OS services that the SQL Server installation adds?
The SQL Server service, the SQL Server Agent service, and the MSDTC (Distributed Transaction Coordinator) service.

What are three SQL keywords used to change or set someone’s permissions?
GRANT, DENY, and REVOKE.

What does it mean to have quoted_identifier on? What are the implications of having it off?
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.

What is the STUFF function and how does it differ from the REPLACE function?
The STUFF function overwrites a fixed span of existing characters. Using the syntax STUFF(string_expression, start, length, replacement_characters): string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters that are replaced, and replacement_characters are the new characters inserted into the string.
The REPLACE function substitutes every occurrence of a search string. Using the syntax REPLACE(string_expression, search_string, replacement_string), every occurrence of search_string found in string_expression is replaced with replacement_string.
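A minimal sketch of the difference, using literal values so it can be run directly:

```sql
-- STUFF replaces a fixed span: start at position 2, remove 3 characters, insert 'XYZ'
SELECT STUFF('abcdef', 2, 3, 'XYZ');   -- returns 'aXYZef'

-- REPLACE substitutes every occurrence of the search string
SELECT REPLACE('abcabc', 'b', 'Z');    -- returns 'aZcaZc'
```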

Using Query Analyzer, name three ways to get a count of the number of records in a table.
SELECT * FROM table1 (the row count is reported with the result set)
SELECT COUNT(*) FROM table1
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2 (note that the sysindexes value may be slightly out of date)

How to rebuild the Master database?
1. Shut down Microsoft SQL Server 2000, and then run Rebuildm.exe, located in the Program Files\Microsoft SQL Server\80\Tools\Binn directory.
2. In the Rebuild Master dialog box, click Browse.
3. In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in the shared network directory from which SQL Server 2000 was installed, and then click OK.
4. Click Settings. In the Collation Settings dialog box, verify or change the settings used for the master database and all other databases. Initially the default collation settings are shown, but these may not match the collation selected during setup; you can keep the settings used during setup or select new ones. When done, click OK.
5. In the Rebuild Master dialog box, click Rebuild to start the process.
The Rebuild Master utility reinstalls the master database. To continue, you may need to stop a server that is running.
Source: http://msdn2.microsoft.com/en-us/library/aa197950(SQL.80).aspx

What are the basic functions of the master, msdb, model, and tempdb databases?
The Master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in the instance.

What are primary keys and foreign keys?
Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.

What is data integrity? Explain constraints?
Data integrity is an important feature in SQL Server. When used properly, it ensures that data is accurate, correct, and valid. It also acts as a trap for otherwise undetectable bugs within applications.
A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row and only one primary key constraint can be created for each table. The primary key constraints are used to enforce entity integrity.

A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. The unique key constraints are used to enforce entity integrity as the primary key constraints.

A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no primary keys with that value. The foreign key constraints are used to enforce referential integrity.

A CHECK constraint is used to limit the values that can be placed in a column. The check constraints are used to enforce domain integrity.

A NOT NULL constraint enforces that the column will not accept null values. The not null constraints are used to enforce domain integrity, as the check constraints.

What are the properties of the Relational tables?
Relational tables have six properties:

* Values are atomic.
* Column values are of the same kind.
* Each row is unique.
* The sequence of columns is insignificant.
* The sequence of rows is insignificant.
* Each column must have a unique name.

What is De-normalization?
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.

How to get @@ERROR and @@ROWCOUNT at the same time?
If @@ROWCOUNT is checked after an error-checking statement, it will return 0, because the checking statement itself has reset it. Likewise, if @@ROWCOUNT is checked first, @@ERROR gets reset. To get both at the same time, read them in the same statement and store them in local variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
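A runnable sketch of the pattern (the table and column names are only illustrative):

```sql
DECLARE @ER int, @RC int;

UPDATE Orders SET Freight = Freight * 1.1 WHERE OrderID = 10248;

-- Both functions are captured in one statement, before anything resets them.
SELECT @ER = @@ERROR, @RC = @@ROWCOUNT;
```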

What is Identity?
Identity (or AutoNumber) is a column that automatically generates numeric values. A seed and increment can be set, but most DBAs leave them at 1. A GUID column also generates unique values, but these cannot be controlled. Identity/GUID columns do not need to be indexed.

What is a Scheduled Jobs or What is a Scheduled Tasks?
Scheduled tasks let user automate processes that run on regular or predictable cycles. User can schedule administrative tasks, such as cube processing, to run during times of slow business activity. User can also determine the order in which tasks run by creating job steps within a SQL Server Agent job. E.g. Back up database, Update Stats of Tables. Job steps give user control over flow of execution. If one job fails, user can configure SQL Server Agent to continue to run the remaining tasks or to stop execution.

What is a table called if it has neither a clustered nor a non-clustered index? What is it used for?
An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storage of data. It is often better to drop all indexes from a table, do the bulk inserts, and then restore the indexes afterwards.

What is BCP? When is it used?
BCP (Bulk Copy Program) is a tool used to copy huge amounts of data from tables and views. It copies data only; it does not copy the structure from source to destination.

How do you load large data into the SQL Server database?
BCP can bulk copy large amounts of data into tables. The BULK INSERT command imports a data file into a database table or view in a user-specified format.
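A BULK INSERT sketch (the table name and file path are hypothetical):

```sql
-- Load a comma-separated file into an existing table.
BULK INSERT dbo.Orders
FROM 'C:\data\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```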

Can we rewrite subqueries into simple select statements or with joins?
Subqueries can often be rewritten to use a standard outer join, resulting in faster performance. An outer join returns all non-matching rows with NULL values, so combining the outer join with an IS NULL test in the WHERE clause reproduces the result set without using a subquery.
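For example, a NOT IN subquery rewritten as an outer join (Northwind-style table names assumed):

```sql
-- Subquery form: customers with no orders.
SELECT c.CustomerID
FROM Customers c
WHERE c.CustomerID NOT IN (SELECT o.CustomerID FROM Orders o);

-- Equivalent outer-join form: the IS NULL test keeps only non-matching rows.
SELECT c.CustomerID
FROM Customers c
LEFT JOIN Orders o ON o.CustomerID = c.CustomerID
WHERE o.CustomerID IS NULL;
```

Note that the two forms differ if the subquery column can contain NULLs; the join form is usually what is intended.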

Can SQL Server be linked to other servers like Oracle?
SQL Server can be linked to any server for which an OLE DB provider is available. For example, Microsoft supplies an OLE DB provider for Oracle that lets you add an Oracle server as a linked server to the SQL Server group.

How to know which indexes a table has?
EXEC sp_helpindex 'table1' lists the indexes defined on a table; you can also query the sysindexes system table.

How to copy the tables, schema and views from one SQL server to another?
Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and programmable objects that lets user extract, transform, and consolidate data from disparate sources into single or multiple destinations.

What is Self Join?
This is the particular case in which a table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. It is unique in that it involves a relationship with only one table. The common example is a company with a hierarchical reporting structure, in which one member of staff reports to another.
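A sketch of the reporting-structure example (a Northwind-style Employees table is assumed, where ReportsTo references EmployeeID):

```sql
-- Alias the same table twice: e = employee, m = manager.
SELECT e.LastName AS Employee, m.LastName AS Manager
FROM Employees e
LEFT JOIN Employees m ON e.ReportsTo = m.EmployeeID;
```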

What is Cross Join?
A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.
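A sketch of the product/pricing example (both table names are hypothetical):

```sql
-- Every product paired with every price level:
-- result rows = (rows in Products) * (rows in PriceLevels).
SELECT p.ProductName, pl.PriceLevel
FROM Products p
CROSS JOIN PriceLevels pl;
```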

Which virtual table does a trigger use?
Inserted and Deleted.

List few advantages of Stored Procedure.

* Stored procedures can reduce network traffic and latency, boosting application performance.
* Stored procedure execution plans can be reused, staying cached in SQL Server’s memory, reducing server overhead.
* Stored procedures help promote code reuse.
* Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
* Stored procedures provide better security to your data.

What is Data Warehousing?
A data warehouse is a database whose data is:

* Subject-oriented, meaning that the data in the database is organized so that all the data elements relating to the same real-world event or object are linked together;
* Time-variant, meaning that the changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time;
* Non-volatile, meaning that data in the database is never over-written or deleted, once committed, the data is static, read-only, but retained for future reporting;
* Integrated, meaning that the database contains data from most or all of an organization’s operational applications, and that this data is made consistent.

What is OLTP (OnLine Transaction Processing)?
OLTP (online transaction processing) systems use relational database designs that follow the discipline of data modeling and, generally, the Codd rules of data normalization in order to ensure absolute data integrity. Using these rules, complex information is broken down into its simplest structures (tables) in which all of the individual atomic-level elements relate to each other and satisfy the normalization rules.

How are SQL Server 2000 and XML linked? Can XML be used to access data?
FOR XML (RAW, AUTO, EXPLICIT)
You can execute SQL queries against existing relational databases to return results as XML rather than standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve XML results, use the FOR XML clause of the SELECT statement and specify an XML mode of RAW, AUTO, or EXPLICIT.
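A minimal FOR XML sketch (a Northwind-style Customers table is assumed):

```sql
-- AUTO mode returns each row as an XML element named after the table.
SELECT CustomerID, CompanyName
FROM Customers
FOR XML AUTO;
```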

OPENXML
OPENXML is a Transact-SQL keyword that provides a relational/rowset view over an in-memory XML document. OPENXML is a rowset provider similar to a table or a view. OPENXML provides a way to access XML data within the Transact-SQL context by transferring data from an XML document into the relational tables. Thus, OPENXML allows you to manage an XML document and its interaction with the relational environment.
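A small OPENXML sketch showing the prepare/select/remove cycle (the document contents are illustrative):

```sql
DECLARE @hDoc int;
EXEC sp_xml_preparedocument @hDoc OUTPUT,
     N'<Orders><Order OrderID="1" CustomerID="ALFKI"/></Orders>';

-- Rowset view over the in-memory XML document (flag 1 = attribute-centric mapping).
SELECT *
FROM OPENXML(@hDoc, '/Orders/Order', 1)
     WITH (OrderID int, CustomerID nvarchar(5));

EXEC sp_xml_removedocument @hDoc;
```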

What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query. It is a very useful tool for understanding the performance characteristics of a query or stored procedure, since this plan is what SQL Server places in its cache and uses to execute the query. From within Query Analyzer there is an option called “Show Execution Plan” (located on the Query drop-down menu); when it is turned on, the execution plan is displayed in a separate pane when the query is run.

Ways to improve SQL performance

1. Normalizing:

Normalize your schema so that each table holds a minimal set of columns and relates to the other tables through key references.

This improves performance when fetching data and reduces the retrieval of redundant data.



2. Define Primary and Foreign keys:

Define relationships between your tables so that any combination of data can be accessed from the various tables simply by referencing the keys.



3. Choose the most appropriate data types.

This matters most once the actual data is stored on disk: an inappropriately wide data type consumes more space than is actually needed, which degrades disk I/O performance.



4. Make Index:

Create indexes on the columns that are frequently used in search operations; this improves query performance.

Do not create too many indexes on one table: each update or insert must also update every index, which degrades performance, so keep the number of indexes on a table small.
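A sketch of the idea (the table and column names are hypothetical):

```sql
-- Index the column used in frequent lookups...
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- ...so searches like this can seek on the index instead of scanning the table.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 'ALFKI';
```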



5. Return Values:

Return only the columns and rows that are actually needed. Fetching rows and columns that are not needed slows the query down and drives up I/O.



6. Avoid Expensive operations:

Avoid expensive operations such as LIKE with a leading wildcard. A pattern such as LIKE '%abc%' cannot use an index, requires a table scan, and makes the query respond very slowly.



7. Avoid Explicit or Implicit functions in WHERE Clause:

The optimizer cannot always select an index by using columns in a WHERE clause that are inside functions. Columns in a WHERE clause are seen as an expression rather than a column. Therefore, the columns are not used in the execution plan optimization.



For example, do not write the WHERE clause like this:

SELECT OrderID FROM NorthWind.dbo.Orders WHERE DATEADD(day, 15, OrderDate) = '07/23/1996'



Instead, rewrite it so the column stands alone on one side of the comparison:

SELECT OrderID FROM NorthWind.dbo.Orders WHERE OrderDate = DATEADD(day, -15, '07/23/1996')


8. Use stored Procedures and Parameterized Queries:
Advantages of using stored procedures:
Logical separation of business logic: we can reduce the amount of code written in the application by moving some logic into stored procedures. A change then needs to be made in only one place, the stored procedure, and it is reflected everywhere the procedure is called.
Reduced deployment time: with SQL commands embedded in application code, any change to the business logic means redeploying the whole application; with stored procedures, only the procedure has to change.
Reduced network bandwidth: sending a whole SQL command to the server takes more bandwidth than sending just a stored procedure name and its parameters.
SQL injection: parameterized stored procedures protect against SQL injection caused by direct user input being concatenated into SQL commands.
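A parameterized-procedure sketch (the procedure, table, and column names are hypothetical):

```sql
-- User input arrives only as a typed parameter, never concatenated into the SQL text.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID nchar(5)
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
```

The application then calls EXEC dbo.GetOrdersByCustomer @CustomerID = 'ALFKI' instead of building the SELECT string itself.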

9. Minimize cursor use:
Cursors force the database engine to repeatedly fetch rows, negotiate blocking, manage locks, and transmit results, and they require more locks. Use forward-only, read-only cursors unless you actually need to update the tables; they are the fastest and least resource-intensive way to get data from the server.
10. Use Temporary Tables and Table Variables Appropriately

If your application frequently creates temporary tables, consider using the table variable or a permanent table. You can use the table data type to store a row set in memory. Table variables are cleaned up automatically at the end of the function, stored procedure, or batch that they are defined in. Many requests to create temporary tables may cause contention in both the tempdb database and in the system tables. Very large temporary tables are also problematic. If you find that you are creating many large temporary tables, you may want to consider a permanent table that can be truncated between uses.

Table variables use the tempdb database in a manner similar to temporary tables, so avoid large table variables. Also, table variables are not considered by the optimizer when it generates execution plans and parallel queries, so table variables may cause decreased performance. Finally, table variables cannot be indexed as flexibly as temporary tables.

You have to test temporary table and table variable usage for performance. Test with many users for scalability to determine the approach that is best for each situation. Also, be aware that there may be concurrency issues when there are many temporary tables and variables that are requesting resources in the tempdb database.
11. Avoid LEFT JOINs and NULLs:
LEFT JOIN can be used to retrieve all of the rows from a first table and all matching rows from a second table, plus all rows from the second table that do not match the first one. For example, if you wanted to return every Customer and their orders, a LEFT JOIN would show the Customers who did and did not have orders.
LEFT JOINs are costly since they involve matching data against NULL (nonexistent) data. In some cases this is unavoidable, but the cost can be high. A LEFT JOIN is more costly than an INNER JOIN, so if you could rewrite a query so it doesn't use a LEFT JOIN, it could pay huge dividends.
One technique to speed up a query that uses a LEFT JOIN involves creating a TABLE datatype and inserting all of the rows from the first table (the one on the left-hand side of the LEFT JOIN), then updating the TABLE datatype with the values from the second table. This technique is a two-step process, but could save a lot of time compared to a standard LEFT JOIN. A good rule is to try out different techniques and time each of them until you get the best performing query for your application.
12. Rewriting query with OR conditions as a UNION

You can speed up a query such as

SELECT * FROM table WHERE (Field1 = 'Value1') OR (Field2 = 'Value2')


By creating indexes on each field in the above conditions and by using a UNION operator instead
of using OR:



SELECT ... WHERE Field1 = 'Value1'

UNION

SELECT ... WHERE Field2 = 'Value2'


13. SP Performance Improvement without changing T-SQL



There are two ways, which can be used to improve the performance of Stored Procedure (SP) without making T-SQL changes in SP.

1. Do not prefix your stored procedure names with sp_.
In SQL Server, all system SPs are prefixed with sp_. When an SP whose name begins with sp_ is called, SQL Server looks for it in the master database first, before the database it is called in.
2. Call your stored procedure by its fully qualified name, prefixed with dbo.
Calling the SP as dbo.SPName or database.dbo.SPName prevents SQL Server from placing a COMPILE lock on the procedure while it determines whether all objects referenced in the code have the same owners as the objects in the current cached procedure plan.





Some more small but very important tips:



1. Know your data and business application well.
Familiarize yourself with these sources; you must be aware of the data volume and distribution in your database.

2. Test your queries with realistic data.
A SQL statement tested with unrealistic data may behave differently when used in production. To ensure rigorous testing, the data distribution in the test environment must also closely resemble that in the production environment.

3. Write identical SQL statements in your applications.
Take full advantage of stored procedures, and functions wherever possible. The benefits are performance gain as they are precompiled.

4. Use indexes on the tables carefully.
Be sure to create all the necessary indexes on the tables. However, too many of them can degrade performance.

5. Make an indexed path available.
To take advantage of indexes, write your SQL in such a manner that an indexed path is available to it. Using SQL hints is one of the ways to ensure the index is used.

6. Understand the Optimizer.
Understand how the optimizer uses indexes and handles the WHERE, ORDER BY, and HAVING clauses.

7. Think globally when acting locally.
Any changes you make in the database to tune one SQL statement may affect the performance of other statements used by applications and users.

8. The WHERE clause is crucial.
The following kinds of WHERE clauses will not use an index access path even if an index is available:
comparing two columns (e.g. Table1.Col1 > Table1.Col2), Table1.Col1 IS (NOT) NULL, Table1.Col1 NOT IN (value1, value2), Table1.Col1 != expression, Table1.Col1 LIKE '%pattern%', and NOT EXISTS subqueries.

9. Use WHERE instead of HAVING for record filtering.
Avoid using the HAVING clause along with GROUP BY on an indexed column.

10. Specify the leading index columns in WHERE clauses.
For a composite index, the query would use the index as long as the leading column of the index is specified in the WHERE clause.

11. Evaluate index scan vs. full table scan. (Index Only Searches Vs Large Table Scan, Minimize Table Passes)
If more than about 15 percent of the rows of a table are being selected, a full table scan is usually faster than an index access path, because index access incurs multiple logical reads per row accessed, whereas a full table scan can read all the rows in a page with a single logical read. When 15 percent or less of the rows are accessed, an index scan works better. An index is also not used if SQL Server has to perform implicit data conversion.

12. Use ORDER BY for index scan.
The SQL Server optimizer will use an index scan if the ORDER BY clause is on an indexed column.

13. Minimize table passes.
Usually, reducing the number of table passes in a SQL query results in better performance. Queries with fewer table passes mean faster queries.

14. Join tables in the proper order.
Always perform the most restrictive search first to filter out the maximum number of rows in the early phases of a multiple table join. This way, the optimizer will have to work with fewer rows in the subsequent phases of join, improving performance.

15. Redundancy is good in where condition.
Provide as much information as possible in the WHERE clause. It will help optimizer to clearly infer conditions.

16. Keep it simple, stupid.
Very complex SQL statements can overwhelm the optimizer; sometimes writing multiple, simpler SQL will yield better performance than a single complex SQL statement.

17. You can reach the same destination in different ways.
Each SQL may use a different access path and may perform differently.

18. Reduce network traffic and increase throughput.
Batching multiple SQL statements into a single T-SQL block achieves better performance and reduces network traffic. Stored procedures are better still, as they are stored in SQL Server and precompiled.

19. Better Hardware.
Better hardware always helps performance. SCSI drives, a RAID 10 array, multiprocessor CPUs, and a 64-bit operating system improve performance by a great amount.

20. Avoid Cursors.
Using SQL Server cursors can result in some performance degradation in comparison with SELECT statements. Try to use a correlated subquery or derived tables if you need to perform row-by-row operations.

Date format validation

var dtCh = "/"
var minYear = 1900
var maxYear = 2100

// true if every character of s is a digit
function isInteger(s){
for (var i = 0; i < s.length; i++){
if (s.charAt(i) < "0" || s.charAt(i) > "9") return false
}
return true
}

// returns s with every character that appears in bag removed
function stripCharsInBag(s, bag){
var returnString = ""
for (var i = 0; i < s.length; i++){
if (bag.indexOf(s.charAt(i)) == -1) returnString += s.charAt(i)
}
return returnString
}

function daysInFebruary(year){
return (((year % 4 == 0) && ((year % 100 != 0) || (year % 400 == 0))) ? 29 : 28)
}

function isDate(dtStr){
// days per month, 1-indexed; February is validated separately above
var daysInMonth = [0,31,29,31,30,31,30,31,31,30,31,30,31]
var pos1=dtStr.indexOf(dtCh)
var pos2=dtStr.indexOf(dtCh,pos1+1)
var strMonth=dtStr.substring(0,pos1)
var strDay=dtStr.substring(pos1+1,pos2)
var strYear=dtStr.substring(pos2+1)
var strYr=strYear
if (strDay.charAt(0)=="0" && strDay.length>1) strDay=strDay.substring(1)
if (strMonth.charAt(0)=="0" && strMonth.length>1) strMonth=strMonth.substring(1)
for (var i = 1; i <= 3; i++) {
if (strYr.charAt(0)=="0" && strYr.length>1) strYr=strYr.substring(1)
}
var month=parseInt(strMonth)
var day=parseInt(strDay)
var year=parseInt(strYr)
if (pos1==-1 || pos2==-1){
alert("The date format should be : mm/dd/yyyy")
return false
}
if (strMonth.length<1 || month<1 || month>12){
alert("Please enter a valid month")
return false
}
if (strDay.length<1 || day<1 || day>31 || (month==2 && day>daysInFebruary(year)) || day > daysInMonth[month]){
alert("Please enter a valid day")
return false
}
if (strYear.length != 4 || year==0 || year < minYear || year > maxYear){
alert("Please enter a valid 4 digit year between "+minYear+" and "+maxYear)
return false
}
if (dtStr.indexOf(dtCh,pos2+1)!=-1 || isInteger(stripCharsInBag(dtStr, dtCh))==false){
alert("Please enter a valid date")
return false
}
return true
}

Email validation Script

function validateEmail(str) {

var at="@"
var dot="."
var lat=str.indexOf(at)
var lstr=str.length

// exactly one "@", and not as the first or last character
if (lat==-1 || lat==0 || lat==lstr-1 || str.indexOf(at,lat+1)!=-1){
alert("Invalid E-mail ID")
return false
}

// at least one ".", and not as the first or last character
if (str.indexOf(dot)==-1 || str.indexOf(dot)==0 || str.lastIndexOf(dot)==lstr-1){
alert("Invalid E-mail ID")
return false
}

// no "." immediately before or after the "@"
if (str.charAt(lat-1)==dot || str.charAt(lat+1)==dot){
alert("Invalid E-mail ID")
return false
}

// the domain part must contain a "."
if (str.indexOf(dot,lat+2)==-1){
alert("Invalid E-mail ID")
return false
}

// no spaces anywhere
if (str.indexOf(" ")!=-1){
alert("Invalid E-mail ID")
return false
}

return true
}

IP address to long and Long value to IP address string

The framework's long-based members for this are deprecated as of .NET Framework 2.0, but we still need a way to do the conversion, so here it is (requires using System.Net):

public static uint IPAddressToLong(string IPAddr)
{
IPAddress oIP = IPAddress.Parse(IPAddr);
byte[] byteIP = oIP.GetAddressBytes();


uint ip = (uint)byteIP[0] << 24;
ip += (uint)byteIP[1] << 16;
ip += (uint)byteIP[2] << 8;
ip += (uint)byteIP[3];

return ip;
}

public static string LongToIPAddress(uint ipLong)
{
//string ipAddress = string.Empty;
byte[] addByte = new byte[4];

addByte[0] = (byte)((ipLong >> 24) & 0xFF);
addByte[1] = (byte)((ipLong >> 16) & 0xFF);
addByte[2] = (byte)((ipLong >> 8) & 0xFF);
addByte[3] = (byte)((ipLong >> 0) & 0xFF);

return addByte[0].ToString() + "." + addByte[1].ToString() + "." + addByte[2].ToString() + "." + addByte[3].ToString();
}

Random string and Random numbers of variable length

//method for random string generation...

public static String RandomStringOfLength(int strLength)
{
StringBuilder builder = new StringBuilder();
Random random = new Random();
char ch;
for (int i = 0; i < strLength; i++)
{
// pick an uppercase letter: 'A' (65) through 'Z' (90)
ch = Convert.ToChar(Convert.ToInt32(Math.Floor(26 * random.NextDouble() + 65)));
builder.Append(ch);
}
return builder.ToString();
}

//method for random number generation...

public static String RandomNumberOfLength(int numLength)
{
StringBuilder builder = new StringBuilder();
Random random = new Random();
for (int i = 0; i < numLength; i++)
{
// Next(0, 10): the upper bound is exclusive, so this yields digits 0-9
builder.Append(random.Next(0, 10).ToString());
}
return builder.ToString();
}

Working with UTF8 characters...

Sometimes we have to parse a string that mixes raw high-byte characters and UTF-8 characters in a single string; the standard Encoding functions in .NET do not handle that mix directly, so here are two useful functions for those situations:

public static string GetUTF8StringFrombytes(byte[] byteVal)
{
byte[] btOne = new byte[1];
StringBuilder sb = new StringBuilder("");
char uniChar;
for (int i = 0; i < byteVal.Length; i++)
{
btOne[0] = byteVal[i];
if (btOne[0] > 127)
{
uniChar = Convert.ToChar(btOne[0]);
sb.Append(uniChar);
}
else
sb.Append(Encoding.UTF8.GetString(btOne));
}
return sb.ToString();
}

public static byte[] GetBytesFromUTF8Chars(string strVal)
{
if (strVal != null && strVal != string.Empty)
{
byte btChar;
byte[] btArr = new byte[strVal.Length * 2];
byte[] tempArr;
int arrIndex = 0;
for (int i = 0; i < strVal.Length; i++)
{
btChar = (byte)strVal[i];
if (btChar > 127 && btChar < 256)
{
btArr[arrIndex] = btChar;
arrIndex++;
}
else
{
tempArr = Encoding.UTF8.GetBytes(strVal[i].ToString());
Array.Copy(tempArr, 0, btArr, arrIndex, tempArr.Length);
arrIndex += tempArr.Length;
tempArr = null;
}
}
byte[] retVal = new byte[arrIndex];
Array.Copy(btArr, 0, retVal, 0, arrIndex);
return retVal;
}
else
return new byte[0];
}

Credit card validation

Introduction

Credit card validation using Regular Expressions.
Background

You must have some knowledge of regular expressions, because they are the core part of this functionality.

//
// First, client-side validation using JavaScript.
// You need two controls: a drop-down list (for the different credit
// card names) and a textbox (for the card number).
// Just pass two parameters to this function: the id of the
// dropdownlist and the id of the textbox.
function ValidateCC(CCType, CCNum)
{
var cctype= document.getElementById(CCType);
var ccnum= document.getElementById(CCNum);
var validCCNum=false;
var validCC=false;
if(ccnum.value == "")
{
return false;
}
validCC = isValidCreditCard(cctype.options[cctype.selectedIndex].value, ccnum.value);
if( validCC)
{
return true;
}
return false;
}
// this function is calling another function isValidCreditCard
//for number validation and it is here...
function isValidCreditCard(type, ccnum)
{
if (type == "Visa")
var re = /^[4]([0-9]{15}$|[0-9]{12}$)/;
else if (type == "MasterCard")
var re = /^[5][1-5][0-9]{14}$/;
else if (type == "Discover")
var re = /^6011-?[0-9]{4}-?[0-9]{4}-?[0-9]{4}$/;
else if (type == "Diners Club")
var re = /(^30[0-5][0-9]{11}$)|(^(36|38)[0-9]{12}$)/;
else if (type == "American Express")
var re = /^3[47][0-9]{13}$/;
else if (type == "enRoute")
var re = /^(2014|2149)[0-9]{11}$/;
else if (type == "JCB")
var re = /(^3[0-9]{15}$)|(^(2131|1800)[0-9]{11}$)/;
if (!re.test(ccnum))
return false;
ccnum = ccnum.split("-").join("");
var checksum = 0;
for (var i=(2-(ccnum.length % 2)); i<=ccnum.length; i+=2)
{
checksum += parseInt(ccnum.charAt(i-1));
}
for (var i = (ccnum.length % 2) + 1; i < ccnum.length; i += 2)
{
var digit = parseInt(ccnum.charAt(i-1)) * 2;
if (digit < 10)
{ checksum += digit; }
else
{ checksum += (digit-9);
}
}
if ((checksum % 10) == 0)
{
return true;
}
else
return false;
}
//now at the server side in asp.net with c#...
private bool checkCCValidation()
{
bool validCC = false;
if(txtCCNumber.Text == "")
{
return false;
}
validCC= isValidCreditCard(selectCCType.Value,txtCCNumber.Text.Trim());
if( validCC)
{
return true;
}
return false;
}
//this method is also calling another method and it is here..
private bool isValidCreditCard(string type, string ccnum)
{
string regExp = "";
if (type == "Visa")
regExp = "^[4]([0-9]{15}$|[0-9]{12}$)";
else if (type == "MasterCard")
regExp = "^[5][1-5][0-9]{14}$";
else if (type == "Discover")
regExp = "^6011-?[0-9]{4}-?[0-9]{4}-?[0-9]{4}$";
else if (type == "Diners Club")
regExp = "(^30[0-5][0-9]{11}$)|(^(36|38)[0-9]{12}$)";
else if (type == "American Express")
regExp = "^3[47][0-9]{13}$";
else if (type == "enRoute")
regExp = "^(2014|2149)[0-9]{11}$";
else if (type == "JCB")
regExp = "(^3[0-9]{15}$)|(^(2131|1800)[0-9]{11}$)";
if (!Regex.IsMatch(ccnum,regExp))
return false;
string[] tempNo = ccnum.Split('-');
ccnum = String.Join("", tempNo);
int checksum = 0;
for (int i = (2-(ccnum.Length % 2)); i <= ccnum.Length; i += 2)
{
checksum += Convert.ToInt32(ccnum[i-1].ToString());
}
int digit = 0;
for (int i = (ccnum.Length % 2) + 1; i < ccnum.Length; i += 2)
{
digit = 0;
digit = Convert.ToInt32(ccnum[i-1].ToString()) * 2;
if (digit < 10)
{ checksum += digit; }
else
{ checksum += (digit - 9); }
}
if ((checksum % 10) == 0)
return true;
else
return false;
}
//That is the end of a simple way to validate credit cards, without using
//any third-party controls. So enjoy!
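Both the JavaScript and the C# versions above implement the Luhn (mod 10) checksum. Here is a compact, self-contained sketch of the same algorithm in JavaScript, walking the digits right to left instead of using two separate loops:

```javascript
// Luhn (mod 10) check: taking digits right-to-left, double every second
// digit; if a doubled value reaches 10, subtract 9; the total must be
// divisible by 10.
function luhnValid(ccnum) {
    ccnum = ccnum.split("-").join(""); // strip dashes, as the article does
    var checksum = 0;
    for (var i = ccnum.length - 1, pos = 0; i >= 0; i--, pos++) {
        var digit = parseInt(ccnum.charAt(i), 10);
        if (pos % 2 === 1) {          // every second digit from the right
            digit *= 2;
            if (digit >= 10) digit -= 9;
        }
        checksum += digit;
    }
    return checksum % 10 === 0;
}

console.log(luhnValid("4111-1111-1111-1111")); // true
console.log(luhnValid("4111111111111112"));    // false
```

Note that the Luhn check only catches typos; it says nothing about whether the account actually exists.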

"Using" keyword in Data Access Base classes..

Introduction

This example demonstrates the "using" keyword, which disposes of all resources acquired inside its block. With it, our Data Access base classes no longer have to worry about disposing the connection and the adapter; they are disposed automatically, immediately after use.

internal DataTable GetNetworkSettingsByNetworkID(int networkID)
{
try
{
using (Connection = new MySqlConnection(connectionString))
{
using (MySqlCommand command = new MySqlCommand())
{
command.CommandText = SQL_GET_NETWORKSETTINGS_BY_NETWORKID;
command.CommandType = CommandType.StoredProcedure;
command.Connection = Connection;
command.Parameters.Add(new MySqlParameter("?_NetworkID", MySqlDbType.Int32));
command.Parameters[0].Value = networkID;
using (MySqlDataAdapter dataAdapter = new MySqlDataAdapter(command))
{
using (DataTable dtNetworkSettings = new DataTable())
{
dataAdapter.Fill(dtNetworkSettings);
return dtNetworkSettings;
}
}
}
}
}
catch (Exception ex)
{
    PacketUtils.WriteLogError(ex, "NetworkDA::GetNetworkSettingsByNetworkID");
    throw; // rethrow without resetting the stack trace
}
// No finally block is needed: the using statements above close and
// dispose the connection automatically, even if an exception is thrown.
}
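Under the hood, C#'s using statement is just shorthand for a try/finally that calls Dispose. As a language-neutral sketch of that guarantee (written in JavaScript so it can run anywhere, with a hypothetical dispose method standing in for IDisposable.Dispose):

```javascript
// Mimic what "using (resource) { body }" expands to: run the body and
// dispose the resource in a finally block, even if the body throws.
function useResource(resource, body) {
    try {
        return body(resource);
    } finally {
        if (resource && typeof resource.dispose === "function") {
            resource.dispose();
        }
    }
}

// Disposal happens on the normal path...
var disposed = false;
var conn = { dispose: function () { disposed = true; } };
useResource(conn, function () { /* work with conn */ });
console.log(disposed); // true

// ...and also when the body throws.
var disposedOnError = false;
try {
    useResource({ dispose: function () { disposedOnError = true; } },
                function () { throw new Error("boom"); });
} catch (e) { /* exception still propagates */ }
console.log(disposedOnError); // true
```

This is exactly why the finally block in the example above is redundant: the expansion already contains one.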

URL rewriting in a few simple steps...

Introduction

Sometimes we want our web URLs to be free of "?", "&" and other hard-to-read symbols. The question is how to remove them from the browser's address bar when that information is essential to us, because those are our query parameters. In effect we hide the query string from the user behind a nice, readable URL, while on the server side we get our query parameters back. This is called "URL rewriting". Don't worry, it is not a difficult task.
Background

Before that, we need to know a little about how requests are processed by the ASP.NET server. Remember that every incoming request passes through the Application_BeginRequest() method in the Global.asax file.

This is the entry point for each and every request coming into our application, so cheers: that is exactly where we do the work for URL rewriting.

void Application_BeginRequest(object sender, EventArgs e)
{
    // First, get the url of the request.
    string absoluteUrl = Request.Url.AbsolutePath.ToString();
    // Then look for the specific page in that url (you can rewrite
    // the url for some specific pages only).
    if (absoluteUrl.Contains("/test/default.aspx"))
    {
        // Suppose the readable link embedded in the page is
        //   http://test.com/test/default.aspx/MyPage33
        // instead of the real url with its ugly query string:
        //   http://test.com/test/default.aspx?mypageid=33
        // To convert it back, find the last "/" character, because
        // that is where "MyPage33" starts.
        string pageid = absoluteUrl.Substring(absoluteUrl.LastIndexOf('/') + 1).Trim();
        // Skip the first 6 characters ("MyPage"); this leaves "33".
        pageid = pageid.Substring(6);
        // Rebuild the original url that our application expects.
        string path = "~/test/default.aspx?mypageid=" + pageid;
        // Rewrite the request to the target page. RewritePath keeps the
        // readable url in the browser's address bar, whereas a
        // Response.Redirect(path) would send the real url back to the client.
        Context.RewritePath(path);
        // Bingo! We are done with URL rewriting.
    }
}
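The string handling inside Application_BeginRequest can be sanity-checked on its own. Here is the same last-slash/substring logic as a small JavaScript sketch (the path is just an example value; "MyPage" is the 6-character prefix the article strips):

```javascript
// Pull the page id back out of a rewritten url such as
// /test/default.aspx/MyPage33 -> "33".
function extractPageId(absoluteUrl) {
    // Take everything after the last "/" -> "MyPage33".
    var tail = absoluteUrl.substring(absoluteUrl.lastIndexOf("/") + 1).trim();
    // Skip the 6-character "MyPage" prefix -> "33".
    return tail.substring(6);
}

console.log(extractPageId("/test/default.aspx/MyPage33")); // "33"
```

Note the sketch (like the original) assumes the url really ends with "MyPage" plus an id; a production rewriter would validate that before substringing.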

FTP Operation in just 4-5 lines...

Introduction

This article will enable you to write an FTP operation in just 5 minutes...

We know that FTP has the same directory structure that we have on our own system; the difference is that the FTP directory is located somewhere else. So when you upload a file you are transferring it from one place to another, just as you would copy a file from one place to another in Unix programming.

//
// Just rename the file name and FTP location that you want to upload or download.
//
// Build the full name of the file you want to upload to the FTP server.
string ftpFilename = ftpServer + FileName;
// Create an object of class WebClient.
WebClient client = new WebClient();
// Give the credentials that let you log in to the FTP server.
client.Credentials = new NetworkCredential(loginName, ftpPwd);
// Now use the UploadFile method of the WebClient object to upload the file;
// it returns the FTP server's response, which you can use as the result
// of the FTP operation.
byte[] responseArray = client.UploadFile(ftpFilename, filetobeUploaded);
// Here you pass the full path of the FTP directory, with the file name
// included, as ftpFilename. For example, to upload a file to
// ftp://simpleftp.com use this kind of structure:
//   ftpFilename = "ftp://simpleftp.com/test.txt";
// and in 'filetobeUploaded' give the local path of the file to upload.
// In the same way you can download a file by using the DownloadFile
// method of the WebClient class.
// So enjoy FTP operations in just 5 minutes.

Regular Expressions

We all know that regular expressions are very useful for checking any kind of pattern, so we need to know how to write regular expressions for our patterns. Don't worry: here are a few short but very useful regular expressions for everyday coding.
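As a quick sanity check, the classic pattern for matching an HTML element's opening tag, body, and closing tag can be exercised like this (a sketch only: a back-reference regex cannot handle nested tags of the same name):

```javascript
// <([A-Z][A-Z0-9]*)\b[^>]*>(.*?)<\/\1> captures the tag name in group 1
// and the tag body in group 2; the back-reference \1 forces the closing
// tag to match the opening one. The "i" flag makes it case-insensitive.
var tagRe = /<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)<\/\1>/i;

var m = "<div class='x'>hello</div>".match(tagRe);
console.log(m[1]); // "div"
console.log(m[2]); // "hello"
```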


//
// These are regular expressions for common purposes.
// For checking the opening and closing tags of one specific HTML tag
// (replace TAG with the actual tag name):
<TAG\b[^>]*>(.*?)</TAG>
// For any HTML tag's opening and closing (the back-reference \1 makes
// the closing tag match the opening one):
<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)</\1>

Accessing controls inside naming containers

Controls placed inside a naming container, such as a FormView's ItemTemplate, get a hierarchical name on the server and a mangled id on the client. Consider this markup:

<form id="form1" runat="server">
<div>
<asp:FormView ID="formVw" runat="server">
<ItemTemplate>
<asp:TextBox ID="txtName" runat="server" />
</ItemTemplate>
</asp:FormView>
</div>
</form>

Here the TextBox's server-side UniqueID is "form1$formVw$txtName", while the id rendered to the browser is "form1_formVw_txtName".


This little trick can also be used on the server-side when calling FindControl():

TextBox tb = this.FindControl("form1$formVw$txtName") as TextBox;
if (tb != null)
{
    // Access the TextBox control
}

Setting focus during a validation error

The validation controls (starting with ASP.NET 2.0) allow you to easily set focus on a control in error using the SetFocusOnError property:

<asp:RequiredFieldValidator
SetFocusOnError="true"
ErrorMessage="Enter something" ControlToValidate="TextBox1"
runat="server" />

How to maintain the position of the scrollbar on postbacks

In ASP.NET 1.1 it was a pain to maintain the position of the scrollbar when doing a postback operation. This was especially true when you had a grid on the page and went to edit a specific row. Instead of staying on the desired row, the page would reload and you'd be placed back at the top and have to scroll down. In ASP.NET 2.0 you can simply add the MaintainScrollPositionOnPostback attribute to the Page directive:

<%@ Page Language="C#" ... MaintainScrollPositionOnPostback="true" %>


If we want this behavior across the entire site, modify the <pages> section of web.config instead:

<pages maintainScrollPositionOnPostBack="true" />