I will be in Atlanta this weekend to talk about SQL Server certifications. However, I know many of you will not be able to attend, so I have made a short video to explain almost everything you need to know about getting certified. I hope it is helpful in clearing up any confusion about the direction you may want to take to further your goals.
It is a very cold and rainy day for the first SQL Saturday of 2017. Nashville is one of the events I attend every year, as many members of the #SQLFamily also attend. I am getting to do two sessions this year. First up at 8:30 am is my presentation on “The PROCESS of Queries”. This is the one I have been asked to present most often. It is a very beginner-level look at how queries are optimized and how this affects batches, transactions, and errors. The second session, at 1:30 pm, is a brand new presentation for a SQL Saturday, “The Basics of Data Manipulation”. It comes from one of my favorite chapters of the T-SQL courses I teach, and I am really excited to put it into a one-hour session format.
If you can’t make it to a SQL Saturday to see my presentation on The PROCESS of Queries, that is OK, because I made it into a video. Just as a reminder, I will be delivering this presentation at the SQL Saturday in Columbus, Ohio on July 16th and then again in Louisville, KY on August 6th. Or you can take my 20461 SQL Query Writing class at New Horizons and see the three-hour version of this presentation.
Here is a short 13-minute tutorial I put together on how to use XML within SQL Server. While I made this for the Born to Learn Study Groups, I am sure it will be helpful for others who might be studying for the 70-461 exam.
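As a small taste of the topic, here is a sketch of one common technique, returning a result set as XML with FOR XML PATH. The dbo.Products table and its columns are hypothetical stand-ins, not taken from the video:

```sql
-- Return a result set as XML using FOR XML PATH.
-- dbo.Products and its columns are hypothetical stand-ins.
SELECT ProductID, ProductName
FROM dbo.Products
FOR XML PATH('Product'), ROOT('Products');
-- Produces: <Products><Product><ProductID>1</ProductID>...</Product></Products>
```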
When transactions are processed they must pass the ACID test. The isolation part of the process is handled by locks, which ensure that transactions do not interfere with each other. To read more click here.
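A small sketch of that isolation in action; dbo.Accounts is a hypothetical table, not one from the post:

```sql
-- Session 1 takes an exclusive lock on the updated row and holds it
-- until the transaction ends.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 50.00 WHERE AccountID = 1;

-- A second session reading the same row under the default READ COMMITTED
-- isolation level would be blocked here until this transaction finishes:
-- SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;

COMMIT TRANSACTION;
```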
In a previous post we discussed how to work with batches, where we created a table named Accounting.BankAccounts and inserted two records. When the two records were inserted we received the message shown below. To continue reading click here.
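The message in question is presumably the standard rows-affected output, one line per INSERT. A sketch, with column names that are my assumptions:

```sql
-- Column names here are assumptions; the table is the one from that post.
INSERT INTO Accounting.BankAccounts (AccountHolder, Balance)
VALUES ('Alice', 100.00);

INSERT INTO Accounting.BankAccounts (AccountHolder, Balance)
VALUES ('Bob', 250.00);

-- Messages tab:
-- (1 row affected)
-- (1 row affected)
```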
Transactions are used to ensure that a series of statements written to modify data pass the ACID test, which enforces the integrity, accuracy, concurrency, and recoverability of the data in the database. The ACID properties of a transaction are covered in the full post; to read more click here.
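As a sketch of the atomicity side of ACID, here is the classic transfer pattern; dbo.Accounts is a hypothetical table:

```sql
-- Either both updates commit together or neither does.
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 50.00 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 50.00 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- an error in either UPDATE undoes both
END CATCH;
```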
In this demonstration we are going to look at working with batches. To begin our example we will need to run both the CREATE SCHEMA and CREATE TABLE statements to hold sample data. Then we will use two INSERT statements to add data. Notice we do not have a GO statement on line 7 to divide the CREATE statements from the INSERT statements into separate batches, which will produce errors. To continue reading click here.
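A hedged reconstruction of that script with the batch separators in place; the column definitions are my assumptions, and the exact error messages are in the full post:

```sql
CREATE SCHEMA Accounting;
-- CREATE SCHEMA must be the only statement in its batch, so GO follows it.
GO

CREATE TABLE Accounting.BankAccounts
(
    AccountID     INT IDENTITY(1,1) PRIMARY KEY,
    AccountHolder VARCHAR(50) NOT NULL,
    Balance       MONEY NOT NULL
);
-- This GO is the separator the demo leaves out; without it the CREATE
-- statements and the INSERT statements are submitted as a single batch.
GO

INSERT INTO Accounting.BankAccounts (AccountHolder, Balance) VALUES ('Alice', 100.00);
INSERT INTO Accounting.BankAccounts (AccountHolder, Balance) VALUES ('Bob', 250.00);
```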
In a previous post we discussed the four-part name of a SQL Server object, which in SQL Server 2005 and later includes the SCHEMA as part of the name.
There are many advantages to using schemas, mainly better and more flexible organization of objects as well as an additional security boundary. In the example below we create a new schema named Sales and make the Bjorn identity the owner of the Sales schema. Next we GRANT the SELECT permission to Jane. To continue reading click here.
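A minimal sketch of what that example likely looked like, based on the description:

```sql
-- Bjorn owns the new Sales schema; Jane can SELECT from objects in it.
CREATE SCHEMA Sales AUTHORIZATION Bjorn;
GO
GRANT SELECT ON SCHEMA::Sales TO Jane;
```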
As part of the T-SQL naming convention in SQL Server, every object (Table, View, Stored Procedure, etc…) has a four-part name used to reference that object. For SQL Server versions prior to 2005 the four-part name used the Server.Database.Owner.Object format. For SQL Server 2005 and later the naming convention changed to Server.Database.Schema.Object. To continue reading click here.
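For illustration, the same hypothetical table referenced with each form; the server, database, and table names here are made up:

```sql
SELECT * FROM MyServer.SalesDB.Sales.Orders;  -- Server.Database.Schema.Object
SELECT * FROM SalesDB.Sales.Orders;           -- Database.Schema.Object
SELECT * FROM Sales.Orders;                   -- Schema.Object (most common)
SELECT * FROM Orders;                         -- Object only; resolves against your default schema
```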
- The first step is to Parse the statement into keywords, expressions, and operators. This is where the syntax of your query statement is checked for accuracy. To continue reading click here.
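For example, a statement with a missing table name never gets past the parse step; this is a hypothetical snippet, not one from the post:

```sql
-- The missing table name after FROM is a syntax error, so the statement
-- is rejected during parsing before anything else happens:
SELECT FirstName FROM WHERE LastName = 'Doe';
-- Msg 156: Incorrect syntax near the keyword 'WHERE'.
```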
So I was teaching SQL functions today and, when I came to the section on the COALESCE function, decided to share my example as today's blog post. The first section of code creates a table with five columns to hold sample data. Next I use an INSERT statement to add 4 rows of data to my demo table. To continue reading click here.
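Here is a sketch of that demo. The column names are my assumptions, but the shape matches the post: a five-column table, four rows, and COALESCE returning the first non-NULL value per row:

```sql
CREATE TABLE dbo.CoalesceDemo
(
    PersonName VARCHAR(50),
    CellPhone  VARCHAR(20),
    HomePhone  VARCHAR(20),
    WorkPhone  VARCHAR(20),
    OtherPhone VARCHAR(20)
);

INSERT INTO dbo.CoalesceDemo VALUES
    ('Alice', NULL,       '555-0101', NULL,       NULL),
    ('Bob',   '555-0102', NULL,       NULL,       NULL),
    ('Carol', NULL,       NULL,       '555-0103', NULL),
    ('Dave',  NULL,       NULL,       NULL,       NULL);

-- COALESCE returns the first non-NULL argument, left to right.
SELECT PersonName,
       COALESCE(CellPhone, HomePhone, WorkPhone, OtherPhone, 'none') AS BestPhone
FROM dbo.CoalesceDemo;
```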
When you run DML statements such as INSERT, UPDATE, or DELETE, in most cases you will receive a message similar to the one below. Click here to read the rest of the post.
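That message is presumably the standard rows-affected count from the Messages tab, for example: `(1 row affected)`.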
One of the questions I get frequently is: what is the difference between using DELETE versus TRUNCATE versus DROP when working with data in tables? It basically comes down to what you want to remove and what you want to keep.
First of all, the DELETE statement is used to remove individual records from a table while still keeping the underlying data pages and the table structure, which can be re-used to insert additional data.
TRUNCATE TABLE, on the other hand, does not remove individual rows; it removes the underlying data pages that are being used by the table. However, the table structure will still exist in this scenario as well.
DROP TABLE will completely remove the entire table structure from existence within the database.
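Side by side, using a hypothetical dbo.Orders table:

```sql
DELETE FROM dbo.Orders WHERE OrderID = 42;  -- removes matching rows; pages and structure remain
TRUNCATE TABLE dbo.Orders;                  -- deallocates the data pages; empty structure remains
DROP TABLE dbo.Orders;                      -- removes the table structure entirely
```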
In this demonstration we will be showing how to use an IDENTITY field in a table. First of all, an IDENTITY field is very similar to an auto-number field, where the number is automatically added as each record is inserted. By default, the value will start at 1 and then count by 1. This can be changed by setting the SEED and INCREMENT values when you create the IDENTITY field. To finish reading this post click here.
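A quick sketch showing a non-default SEED and INCREMENT; the table and column names here are made up:

```sql
CREATE TABLE dbo.IdentityDemo
(
    RowID INT IDENTITY(100, 10),  -- SEED = 100, INCREMENT = 10
    Label VARCHAR(20) NOT NULL
);

INSERT INTO dbo.IdentityDemo (Label) VALUES ('first'), ('second');

SELECT * FROM dbo.IdentityDemo;
-- RowID comes back as 100 and 110; no value is supplied for it in the INSERT.
```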
For this demonstration we will use the HR.Employees table that we created in the Table Constraints post.
We will begin with just a simple INSERT statement that will add one row into the table. First you may notice that we are not including a value for the EmpID field. This is because it is an IDENTITY column, which will automatically supply a value for that field.
We then supplied character data into the FirstName, LastName, and JobTitle fields. Since the HireDate field has a DEFAULT constraint, we just use the DEFAULT keyword, which will supply the current date for that field. The BirthDate field allows a NULL value, and for now we will use the NULL keyword for that field. Finally, we add the final two fields. To continue reading click here.
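A sketch of that INSERT as described so far; the final two fields are not named in this excerpt, so they are left out, and the sample values are mine:

```sql
INSERT INTO HR.Employees (FirstName, LastName, JobTitle, HireDate, BirthDate)
VALUES ('Jane', 'Doe', 'Developer', DEFAULT, NULL);
-- EmpID is omitted because the IDENTITY property supplies it;
-- DEFAULT pulls in the current date from the HireDate constraint;
-- NULL is allowed for the BirthDate field.
```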
For this demonstration we will create a table named dbo.TestTable with four columns. Then we will use the INSERT statement to add records to the table, the UPDATE statement to modify existing records, and finally the DELETE statement to remove records from the table.
While it is not required, it is a good idea to always use a WHERE clause with DELETE and UPDATE statements. That way ALL the records in your table are not accidentally deleted or updated. Many developers like to write them as a SELECT statement first to make sure they are working with the correct data. To continue reading click here.
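A sketch of that workflow; beyond the four-column shape the post describes, the column names in dbo.TestTable are my assumptions:

```sql
CREATE TABLE dbo.TestTable
(
    RowID     INT IDENTITY(1,1),
    FirstName VARCHAR(50),
    LastName  VARCHAR(50),
    City      VARCHAR(50)
);

INSERT INTO dbo.TestTable (FirstName, LastName, City)
VALUES ('Jane', 'Doe', 'Nashville');

-- Prove the WHERE clause targets the right row first...
SELECT * FROM dbo.TestTable WHERE RowID = 1;

-- ...then reuse the same WHERE clause for the modification statements.
UPDATE dbo.TestTable SET City = 'Atlanta' WHERE RowID = 1;
DELETE FROM dbo.TestTable WHERE RowID = 1;
```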
When pulling data from more than one table you will need to JOIN the tables together. The type of JOIN determines which records will be returned in the result set. See full post here.
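For instance, with hypothetical Sales.Customers and Sales.Orders tables:

```sql
-- INNER JOIN: only customers that have at least one matching order.
SELECT c.CustomerName, o.OrderID
FROM Sales.Customers AS c
INNER JOIN Sales.Orders AS o
    ON o.CustomerID = c.CustomerID;

-- LEFT OUTER JOIN: every customer; OrderID is NULL where no match exists.
SELECT c.CustomerName, o.OrderID
FROM Sales.Customers AS c
LEFT OUTER JOIN Sales.Orders AS o
    ON o.CustomerID = c.CustomerID;
```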
There are two types of DML triggers that can be written for INSERT, UPDATE, and DELETE statements: AFTER triggers and INSTEAD OF triggers. To continue reading click here.
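A minimal sketch of each flavor, using DELETE as the example on a hypothetical Sales.Orders table:

```sql
-- AFTER trigger: fires once the DELETE has taken effect.
CREATE TRIGGER Sales.trg_Orders_AfterDelete
ON Sales.Orders
AFTER DELETE
AS
BEGIN
    PRINT 'Rows were deleted from Sales.Orders.';
END;
GO

-- INSTEAD OF trigger: fires in place of the DELETE, which never runs.
CREATE TRIGGER Sales.trg_Orders_InsteadOfDelete
ON Sales.Orders
INSTEAD OF DELETE
AS
BEGIN
    PRINT 'DELETE intercepted; no rows were removed.';
END;
```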
When creating tables you will want to use constraints to enforce data integrity. Here is a list of the constraints and why they are used in your tables.
- PRIMARY KEY – Ensures that a column has a value and that it is a unique value, which helps with searching and sorting your records.
- NOT NULL – Enforces that every record has a value for this column.
- DEFAULT – Specifies a value for the column when a value is not included with the INSERT statement.
- UNIQUE – Enforces that every value in the column is distinct.
- CHECK – Enforces specific rules that a column must follow.
- FOREIGN KEY – Enforces a child table's relationship with a parent table to ensure referential integrity.
In the code example below we see on line 2 a demonstration of using a PRIMARY KEY constraint for the EmpID field. To continue reading click here.
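The original code block isn't included in this excerpt, so here is a reconstruction consistent with the HR.Employees columns used in the IDENTITY and INSERT posts above; the data types are my assumptions:

```sql
CREATE TABLE HR.Employees (
    EmpID     INT IDENTITY(1,1) PRIMARY KEY,    -- line 2: the PRIMARY KEY constraint
    FirstName VARCHAR(50) NOT NULL,
    LastName  VARCHAR(50) NOT NULL,
    JobTitle  VARCHAR(50) NOT NULL,
    HireDate  DATE NOT NULL DEFAULT (GETDATE()),
    BirthDate DATE NULL
);
```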