Nashville #581

It is a very cold and rainy day for the first SQL Saturday of 2017. Nashville is one of the events I attend every year, as many members of the #SQLFamily also attend. I am getting to do two sessions this year. First up at 8:30 am is my presentation on “The PROCESS of Queries”. This is the one I have been asked to present most often. It is a very beginner-level look at how queries are optimized and how this affects batches, transactions, and errors. The second session will be at 1:30 pm and is a brand new presentation for a SQL Saturday, “The Basics of Data Manipulation”. It comes from one of my favorite chapters of the T-SQL courses that I teach, and I am really excited to put it into a one-hour session format.

The PROCESS of Queries

If you can’t make it to a SQL Saturday to see my presentation on The PROCESS of Queries, that is OK, because I made it into a video. Just as a reminder, I will be delivering this presentation at the SQL Saturday in Columbus, Ohio on July 16th and then again in Louisville, KY on August 6th. Or you can just take my 20461 SQL Query Writing class at New Horizons and see the three-hour version of this presentation.

Working with Batches


In this demonstration we are going to look at working with batches. To begin our example we will need to run both the CREATE SCHEMA and CREATE TABLE statements to hold sample data. Then we will use two INSERT statements to add data. Notice we do not have a GO statement to divide the CREATE statements from the INSERT statements into separate batches, which will produce errors. To continue reading click here.
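For reference, here is a hedged sketch of what the corrected script might look like with the GO separators added; the Demo schema, BatchTest table, and sample values are placeholders, not the original code.

```sql
-- Hypothetical reconstruction; schema, table, and values are assumptions
CREATE SCHEMA Demo;
GO  -- CREATE SCHEMA must be the first statement in a batch, so end the batch here

CREATE TABLE Demo.BatchTest
(
    ID   INT         NOT NULL,
    Name VARCHAR(50) NOT NULL
);
GO  -- in the demo no GO separates the CREATE statements from the INSERTs, so the whole
    -- script is submitted as a single batch and SQL Server raises errors

INSERT INTO Demo.BatchTest (ID, Name) VALUES (1, 'First');
INSERT INTO Demo.BatchTest (ID, Name) VALUES (2, 'Second');
GO
```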

Working with Schemas

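The screenshot with the example did not carry over, so here is a minimal sketch of what the next paragraph describes, assuming Bjorn and Jane already exist as users in the database.

```sql
-- Create the Sales schema and make Bjorn its owner
CREATE SCHEMA Sales AUTHORIZATION Bjorn;
GO

-- Give Jane SELECT permission on every object in the Sales schema
GRANT SELECT ON SCHEMA::Sales TO Jane;
```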

In a previous post we discussed the four-part name of a SQL Server object, which in SQL Server 2005 and later includes the SCHEMA as part of the name.

There are many advantages to using schemas, but the main ones are better and more flexible organization of objects and an additional security boundary. In the example above we are creating a new schema named Sales and allowing the Bjorn identity to be the owner of the Sales schema. Next we GRANT the SELECT permission to Jane. To continue reading click here.

Delete vs Truncate vs Drop


One of the questions I get frequently is what the difference is between using DELETE versus TRUNCATE versus DROP when working with data in tables. It basically comes down to what you want to remove and what you want to keep.

First of all, the DELETE statement is used to remove individual records from a table while still keeping the underlying data pages and the table structure, which can be reused to insert additional data.

TRUNCATE TABLE, on the other hand, does not remove individual rows; it simply deallocates the underlying data pages that are being used by the table. However, the table structure will still exist in this scenario as well.

DROP TABLE will remove the entire table, structure and all, from the database.
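As a quick side-by-side, hedged illustration (dbo.Orders is just a placeholder table name):

```sql
-- Removes only the rows that match the WHERE clause; pages and table structure remain
DELETE FROM dbo.Orders
WHERE OrderID = 42;

-- Deallocates all of the table's data pages; the empty table structure remains
TRUNCATE TABLE dbo.Orders;

-- Removes the table itself, structure and all, from the database
DROP TABLE dbo.Orders;
```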

IDENTITY Columns

In this demonstration we will be showing how to use an IDENTITY field in a table. First of all, an IDENTITY field is very similar to an auto-number field, where the number will automatically be added as each record is inserted. By default, the value will start at 1 and then count by 1. This can be changed by setting the SEED and INCREMENT values when you create the IDENTITY field. To finish reading this post click here.
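The example screenshot is not reproduced here, so this is only a small hedged sketch of the idea; the table and column names are my own.

```sql
-- IDENTITY(seed, increment): start at 1 and count by 1, which are the defaults
CREATE TABLE dbo.IdentityDemo
(
    RowID INT IDENTITY(1, 1) PRIMARY KEY,
    Label VARCHAR(20)        NOT NULL
);

-- RowID is not listed in the INSERT; SQL Server generates 1, 2, ... automatically
INSERT INTO dbo.IdentityDemo (Label) VALUES ('First'), ('Second');

-- Changing the arguments, for example IDENTITY(100, 10), changes the seed and increment
```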

Using the INSERT statement

For this demonstration we will use the HR.Employees table that we created in the Table Constraints post.

We will begin with just a simple INSERT statement that will add one row to the table. First you may notice that we are not including a value for the EmpID field. This is because it is an IDENTITY column, which will automatically supply a value for that field.

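The screenshot of the statement did not come through, so here is a hedged reconstruction of roughly what it looks like; the sample values are placeholders, and the last two columns discussed in the full post are left out.

```sql
-- EmpID is omitted from the column list because the IDENTITY property supplies it automatically
INSERT INTO HR.Employees (FirstName, LastName, JobTitle, HireDate, BirthDate)
VALUES ('John', 'Doe', 'Developer', DEFAULT, NULL);  -- DEFAULT fills in the current date for HireDate
```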

We then supplied character data for the FirstName, LastName, and JobTitle fields. Since the HireDate field was defined with a DEFAULT constraint, we just need to use the DEFAULT keyword, which will supply the current date for that field. The BirthDate field allows a NULL value, so for now we will use the NULL keyword for that field. Finally, we add the final two fields. To continue reading click here.

Data Manipulation Language

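The code screenshot for this demo is not shown here, so the script below is only a hedged sketch; the four columns of dbo.TestTable are my own guesses.

```sql
CREATE TABLE dbo.TestTable
(
    RowID     INT IDENTITY(1, 1) PRIMARY KEY,
    FirstName VARCHAR(50)    NOT NULL,
    LastName  VARCHAR(50)    NOT NULL,
    Amount    DECIMAL(10, 2) NULL
);

-- INSERT adds new records
INSERT INTO dbo.TestTable (FirstName, LastName, Amount)
VALUES ('Ada', 'Lovelace', 100.00);

-- Writing the WHERE clause as a SELECT first confirms which rows will be affected
SELECT * FROM dbo.TestTable WHERE LastName = 'Lovelace';

-- UPDATE modifies existing records and DELETE removes them; both keep the WHERE clause
UPDATE dbo.TestTable SET Amount = 150.00 WHERE LastName = 'Lovelace';
DELETE FROM dbo.TestTable WHERE LastName = 'Lovelace';
```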

For this demonstration we will create a table named dbo.TestTable with four columns. Then we will use the INSERT statement to add records to the table, the UPDATE statement to modify existing records, and finally the DELETE statement to remove records from the table.

While it is not required, it is a good idea to always use a WHERE clause with DELETE and UPDATE statements. That way ALL the records in your table are not accidentally deleted or updated. Many developers like to write them as a SELECT statement first to make sure they are working with the correct data. To continue reading more click here.

SQL Table Constraints

When creating tables you will want to use constraints to enforce data integrity. Here is a list of the constraints and why they are used in your tables.

  • PRIMARY KEY – Ensures that a column has a value and that the value is unique, which helps to search and sort your records.
  • NOT NULL – Enforces that every record has a value for this column.
  • DEFAULT – Specifies a value for the column when a value is not included with the INSERT statement.
  • UNIQUE – Enforces that every value in the column is distinct.
  • CHECK – Enforces specific rules that a column's values must follow.
  • FOREIGN KEY – Enforces the child relationship with a parent table to ensure referential integrity.
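The original code screenshot is not included here, so the definition below is only a hedged sketch of an HR.Employees table that exercises each of these constraints; the exact columns and data types are assumptions.

```sql
CREATE SCHEMA HR;
GO

CREATE TABLE HR.Departments
(
    DeptID   INT IDENTITY(1, 1) PRIMARY KEY,
    DeptName VARCHAR(50) NOT NULL UNIQUE
);

CREATE TABLE HR.Employees
(
    EmpID     INT IDENTITY(1, 1) PRIMARY KEY,        -- PRIMARY KEY: unique, non-null row identifier
    FirstName VARCHAR(50)  NOT NULL,                  -- NOT NULL: a value is required
    LastName  VARCHAR(50)  NOT NULL,
    JobTitle  VARCHAR(50)  NOT NULL,
    Email     VARCHAR(100) NULL UNIQUE,               -- UNIQUE: no two employees share an address
    HireDate  DATE NOT NULL DEFAULT (GETDATE()),      -- DEFAULT: used when the INSERT omits a value
    BirthDate DATE NULL
        CHECK (BirthDate >= '1900-01-01'),            -- CHECK: a rule the value must follow
    DeptID    INT NULL
        REFERENCES HR.Departments (DeptID)            -- FOREIGN KEY: must match an existing DeptID
);
```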

In the code example above we see a demonstration of using a PRIMARY KEY constraint for the EmpID field. To continue reading click here.