
CHAPTER 20

Data Access with ADO.NET

The .NET platform defines several namespaces that allow you to interact with relational database systems. Collectively speaking, these namespaces are known as ADO.NET. In this chapter, you will learn about the overall role of ADO.NET and the core types and namespaces, and then you will move on to the topic of ADO.NET data providers. The .NET platform supports numerous data providers (both provided as part of the .NET Framework and available from third-party sources), each of which is optimized to communicate with a specific database management system (e.g., Microsoft SQL Server, Oracle, and MySQL).
After you understand the common functionality provided by various data providers, you will then look at the data provider factory pattern. As you will see, using types within the System.Data namespaces (including System.Data.Common and database provider-specific namespaces like Microsoft.Data.SqlClient, System.Data.Odbc, and the Windows-only database provider namespace System.Data.OleDb), you can build a single code base that can dynamically pick and choose the underlying data provider without the need to recompile or redeploy the application’s code base.
Next, you will learn how to work directly with the SQL Server database provider, creating and opening connections to retrieve data, and then move on to inserting, updating, and deleting data, followed by examining the topic of database transactions. Finally, you will execute SQL Server’s bulk copy feature using ADO.NET to load a list of records into the database.

■ Note This chapter focuses on the raw ADO.NET. Starting with Chapter 21, I cover Entity Framework (EF) Core, Microsoft’s object-relational mapping (ORM) framework. Since EF Core uses ADO.NET for data access under the covers, a solid understanding of how ADO.NET works is vital when troubleshooting data access. There are also scenarios that are not solved by EF Core (such as executing a SQL bulk copy), and you will need to know ADO.NET to solve those issues.

ADO.NET vs. ADO
If you have a background in Microsoft’s previous COM-based data access model (Active Data Objects [ADO]) and are just starting to work with the .NET platform, you need to understand that ADO.NET has little to do with ADO beyond the letters A, D, and O. While it is true that there is some relationship between the two systems (e.g., each has the concept of connection and command objects), some familiar ADO types (e.g., the Recordset) no longer exist. Furthermore, you can find many new types that have no direct equivalent under classic ADO (e.g., the data adapter).

© Andrew Troelsen, Phil Japikse 2022
A. Troelsen and P. Japikse, Pro C# 10 with .NET 6, https://doi.org/10.1007/978-1-4842-7869-7_20


Understanding ADO.NET Data Providers
ADO.NET does not provide a single set of objects that communicate with multiple database management systems (DBMSs). Rather, ADO.NET supports multiple data providers, each of which is optimized to interact with a specific DBMS. The first benefit of this approach is that you can program a specific data provider to access any unique features of a particular DBMS. The second benefit is that a specific data provider can connect directly to the underlying engine of the DBMS in question without an intermediate mapping layer standing between the tiers.
Simply put, a data provider is a set of types defined in a given namespace that understand how to communicate with a specific type of data source. Regardless of which data provider you use, each defines a set of class types that provide core functionality. Table 20-1 documents some of the core base classes and the key interfaces they implement.

Table 20-1. The Core Objects of an ADO.NET Data Provider

Base Class Relevant Interfaces Meaning in Life
DbConnection IDbConnection Provides the ability to connect to and disconnect from the data store. Connection objects also provide access to a related transaction object.
DbCommand IDbCommand Represents a SQL query or a stored procedure. Command objects also provide access to the provider’s data reader object.
DbDataReader IDataReader, IDataRecord Provides forward-only, read-only access to data using a server-side cursor.
DbDataAdapter IDataAdapter, IDbDataAdapter Transfers DataSets between the caller and the data store. Data adapters contain a connection and a set of four internal command objects used to select, insert, update, and delete information from the data store.
DbParameter IDataParameter, IDbDataParameter Represents a named parameter within a parameterized query.
DbTransaction IDbTransaction Encapsulates a database transaction.

Although the specific names of these core classes will differ among data providers (e.g., SqlConnection versus OdbcConnection), each class derives from the same base class (DbConnection, in the case of connection objects) that implements identical interfaces (e.g., IDbConnection). Given this, you would be correct to assume that after you learn how to work with one data provider, the remaining providers prove quite straightforward.

■ Note When you refer to a connection object under ADO.NET, you’re actually referring to a specific DbConnection-derived type; there is no class literally named Connection. The same idea holds true for a command object, data adapter object, and so forth. As a naming convention, the objects in a specific data provider are prefixed with the name of the related DBMS (e.g., SqlConnection, SqlCommand, and SqlDataReader).

Figure 20-1 shows the big picture behind ADO.NET data providers. The client assembly can be any type of .NET application: console program, Windows Forms, Windows Presentation Foundation, ASP.NET Core, .NET code library, and so on.

Figure 20-1. ADO.NET data providers provide access to a given DBMS

A data provider will supply you with other types beyond the objects shown in Figure 20-1; however, these core objects define a common baseline across all data providers.

ADO.NET Data Providers
As with all of .NET, data providers ship as NuGet packages. There are several supported by Microsoft as well as a multitude of third-party providers available. Table 20-2 documents some of the data providers supported by Microsoft.

Table 20-2. Some of the Microsoft-Supported Data Providers

Data Provider Namespace/NuGet Package Name
Microsoft SQL Server Microsoft.Data.SqlClient
ODBC System.Data.Odbc
OLE DB (Windows only) System.Data.OleDb

The Microsoft SQL Server data provider offers direct access to Microsoft SQL Server data stores—and only SQL Server data stores (including SQL Azure). The Microsoft.Data.SqlClient namespace contains the types used by the SQL Server provider.

■ Note While System.Data.SqlClient is still supported, all development effort for interaction with SQL Server (and SQL Azure) is focused on the new Microsoft.Data.SqlClient provider library.

The ODBC provider (System.Data.Odbc) provides access to ODBC connections. The ODBC types defined within the System.Data.Odbc namespace are typically useful only if you need to communicate with a given DBMS for which there is no custom .NET data provider. This is true because ODBC is a widespread model that provides access to several data stores.
The OLE DB data provider, which is composed of the types defined in the System.Data.OleDb namespace, allows you to access data located in any data store that supports the classic COM-based OLE DB protocol. Due to the dependence on COM, this provider will work only on the Windows operating system and should be considered deprecated in the cross-platform world of .NET.

The Types of the System.Data Namespace
Of all the ADO.NET namespaces, System.Data is the lowest common denominator. This namespace contains types that are shared among all ADO.NET data providers, regardless of the underlying
data store. In addition to a number of database-centric exceptions (e.g., NoNullAllowedException, RowNotInTableException, and MissingPrimaryKeyException), System.Data contains types that represent various database primitives (e.g., tables, rows, columns, and constraints), as well as the common interfaces implemented by data provider objects. Table 20-3 lists some of the core types you should be aware of.

Table 20-3. Core Members of the System.Data Namespace

Type Meaning in Life
Constraint Represents a constraint for a given DataColumn object
DataColumn Represents a single column within a DataTable object
DataRelation Represents a parent-child relationship between two DataTable objects
DataRow Represents a single row within a DataTable object
DataSet Represents an in-memory cache of data consisting of any number of interrelated DataTable objects
DataTable Represents a tabular block of in-memory data
DataTableReader Allows you to treat a DataTable as a firehose cursor (forward-only, read-only data access)
DataView Represents a customized view of a DataTable for sorting, filtering, searching, editing, and navigation
IDataAdapter Defines the core behavior of a data adapter object
IDataParameter Defines the core behavior of a parameter object
IDataReader Defines the core behavior of a data reader object
IDbCommand Defines the core behavior of a command object
IDbDataAdapter Extends IDataAdapter to provide additional functionality of a data adapter object
IDbTransaction Defines the core behavior of a transaction object
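
Several of these types can be exercised entirely in memory, with no database connection at all. As a quick illustration (the table and column names here are invented for the example, not part of the AutoLot database built later), the following sketch builds a DataTable, adds it to a DataSet, and filters it through a DataView:

```csharp
using System;
using System.Data;

// Build an in-memory table of inventory records.
var inventory = new DataTable("Inventory");
inventory.Columns.Add("Id", typeof(int));
inventory.Columns.Add("PetName", typeof(string));
inventory.Rows.Add(1, "Zippy");
inventory.Rows.Add(2, "Rusty");

// A DataSet is simply a named cache of one or more DataTables.
var dataSet = new DataSet("AutoLot");
dataSet.Tables.Add(inventory);

// A DataView offers a sorted/filtered window over the table.
var view = new DataView(inventory) { RowFilter = "Id = 2" };
Console.WriteLine(view[0]["PetName"]); // prints Rusty
```

The same DataSet/DataTable objects are what a data adapter fills from a live database, which is why they appear throughout the provider interfaces examined next.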

Your next task is to examine the core interfaces of System.Data at a high level; this can help you understand the common functionality offered by any data provider. You will also learn specific details throughout this chapter; however, for now it is best to focus on the overall behavior of each interface type.

The Role of the IDbConnection Interface
The IDbConnection type is implemented by a data provider’s connection object. This interface defines a set of members used to configure a connection to a specific data store. It also allows you to obtain the data provider’s transaction object. Here is the formal definition of IDbConnection:

public interface IDbConnection : IDisposable
{
  string ConnectionString { get; set; }
  int ConnectionTimeout { get; }
  string Database { get; }
  ConnectionState State { get; }

  IDbTransaction BeginTransaction();
  IDbTransaction BeginTransaction(IsolationLevel il);
  void ChangeDatabase(string databaseName);
  void Close();
  IDbCommand CreateCommand();
  void Open();
  void Dispose();
}

The Role of the IDbTransaction Interface
The overloaded BeginTransaction() method defined by IDbConnection provides access to the provider’s transaction object. You can use the members defined by IDbTransaction to interact programmatically with a transactional session and the underlying data store.

public interface IDbTransaction : IDisposable
{
  IDbConnection Connection { get; }
  IsolationLevel IsolationLevel { get; }

  void Commit();
  void Rollback();
  void Dispose();
}

The Role of the IDbCommand Interface
Next up is the IDbCommand interface, which will be implemented by a data provider’s command object. Like other data access object models, command objects allow programmatic manipulation of SQL statements, stored procedures, and parameterized queries. Command objects also provide access to the data provider’s data reader type through the overloaded ExecuteReader() method.

public interface IDbCommand : IDisposable
{
  string CommandText { get; set; }
  int CommandTimeout { get; set; }
  CommandType CommandType { get; set; }
  IDbConnection Connection { get; set; }
  IDbTransaction Transaction { get; set; }
  IDataParameterCollection Parameters { get; }
  UpdateRowSource UpdatedRowSource { get; set; }

  void Prepare();
  void Cancel();
  IDbDataParameter CreateParameter();
  int ExecuteNonQuery();
  IDataReader ExecuteReader();
  IDataReader ExecuteReader(CommandBehavior behavior);
  object ExecuteScalar();
  void Dispose();
}

The Role of the IDbDataParameter and IDataParameter Interfaces
Notice that the Parameters property of IDbCommand returns a strongly typed collection that implements IDataParameterCollection. This interface provides access to a set of IDbDataParameter-compliant class types (e.g., parameter objects).

public interface IDbDataParameter : IDataParameter
{
  // Plus members in the IDataParameter interface.
  byte Precision { get; set; }
  byte Scale { get; set; }
  int Size { get; set; }
}

IDbDataParameter extends the IDataParameter interface, which defines the following base behaviors:

public interface IDataParameter
{
  DbType DbType { get; set; }
  ParameterDirection Direction { get; set; }
  bool IsNullable { get; }
  string ParameterName { get; set; }
  string SourceColumn { get; set; }
  DataRowVersion SourceVersion { get; set; }
  object Value { get; set; }
}

As you will see, the functionality of the IDbDataParameter and IDataParameter interfaces allows you to represent parameters within a SQL command (including stored procedures) through specific ADO.NET parameter objects, rather than through hard-coded string literals.

The Role of the IDbDataAdapter and IDataAdapter Interfaces
You use data adapters to push and pull DataSets to and from a given data store. The IDbDataAdapter interface defines the following set of properties that you can use to maintain the SQL statements for the related select, insert, update, and delete operations:

public interface IDbDataAdapter : IDataAdapter
{
  // Plus members of IDataAdapter.
  IDbCommand SelectCommand { get; set; }
  IDbCommand InsertCommand { get; set; }
  IDbCommand UpdateCommand { get; set; }
  IDbCommand DeleteCommand { get; set; }
}

In addition to these four properties, an ADO.NET data adapter picks up the behavior defined in the base interface, IDataAdapter. This interface defines the key function of a data adapter type: the ability to
transfer DataSets between the caller and underlying data store using the Fill() and Update() methods. The IDataAdapter interface also allows you to map database column names to more user-friendly display names with the TableMappings property.

public interface IDataAdapter
{
  MissingMappingAction MissingMappingAction { get; set; }
  MissingSchemaAction MissingSchemaAction { get; set; }
  ITableMappingCollection TableMappings { get; }

  DataTable[] FillSchema(DataSet dataSet, SchemaType schemaType);
  int Fill(DataSet dataSet);
  IDataParameter[] GetFillParameters();
  int Update(DataSet dataSet);
}
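
A full Fill() demonstration requires a live provider, but the related pattern of streaming a reader into an in-memory table can be sketched with framework types alone (the sample data here is invented for illustration):

```csharp
using System;
using System.Data;

// An in-memory "source" standing in for a query result.
var source = new DataTable("Source");
source.Columns.Add("Id", typeof(int));
source.Rows.Add(42);

// DataTable.Load() consumes any IDataReader, much as a data
// adapter's Fill() populates a DataSet from a command's reader.
var copy = new DataTable("Copy");
copy.Load(source.CreateDataReader());
Console.WriteLine(copy.Rows.Count); // prints 1
```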

The Role of the IDataReader and IDataRecord Interfaces
The next key interface to be aware of is IDataReader, which represents the common behaviors supported by a given data reader object. When you obtain an IDataReader-compatible type from an ADO.NET data provider, you can iterate over the result set in a forward-only, read-only manner.

public interface IDataReader : IDisposable, IDataRecord
{
  // Plus members from IDataRecord.
  int Depth { get; }
  bool IsClosed { get; }
  int RecordsAffected { get; }

  void Close();
  DataTable GetSchemaTable();
  bool NextResult();
  bool Read();
  void Dispose();
}
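
Even without a live database you can watch these members at work: DataTableReader (from Table 20-3) derives from DbDataReader, which implements IDataReader. The following sketch, using invented sample data, walks a reader in the canonical forward-only loop:

```csharp
using System;
using System.Data;

var cars = new DataTable("Inventory");
cars.Columns.Add("Id", typeof(int));
cars.Columns.Add("PetName", typeof(string));
cars.Rows.Add(1, "Zippy");
cars.Rows.Add(2, "Rusty");

// DataTableReader implements IDataReader, so this same loop
// works against any provider's data reader.
IDataReader reader = cars.CreateDataReader();
while (reader.Read())
{
  Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
}
reader.Close();
```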

Finally, IDataReader extends IDataRecord, which defines many members that allow you to extract a strongly typed value from the stream, rather than casting the generic System.Object retrieved from the data reader’s overloaded indexer method. Here is the IDataRecord interface definition:

public interface IDataRecord
{
  int FieldCount { get; }
  object this[int i] { get; }
  object this[string name] { get; }

  bool GetBoolean(int i);
  byte GetByte(int i);
  long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length);
  char GetChar(int i);
  long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length);
  IDataReader GetData(int i);
  string GetDataTypeName(int i);
  DateTime GetDateTime(int i);
  decimal GetDecimal(int i);
  double GetDouble(int i);
  Type GetFieldType(int i);
  float GetFloat(int i);
  Guid GetGuid(int i);
  short GetInt16(int i);
  int GetInt32(int i);
  long GetInt64(int i);
  string GetName(int i);
  int GetOrdinal(string name);
  string GetString(int i);
  object GetValue(int i);
  int GetValues(object[] values);
  bool IsDBNull(int i);
}

■ Note You can use the IDataReader.IsDBNull() method to discover programmatically whether a specified field is set to null before attempting to obtain a value from the data reader (to avoid triggering a runtime exception). Also recall that C# supports nullable data types (see Chapter 4), which are ideal for interacting with data columns that could be null in the database table.
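
The pairing of IsDBNull() with a nullable type can be seen in practice without a database. The following sketch (again with invented data) stores a DBNull in a DataTable column and reads it back through IDataReader:

```csharp
using System;
using System.Data;

var orders = new DataTable("Orders");
orders.Columns.Add("Id", typeof(int));
orders.Columns.Add("CustomerId", typeof(int)); // may hold DBNull
orders.Rows.Add(1, DBNull.Value);

IDataReader reader = orders.CreateDataReader();
reader.Read();

// Check for DBNull before calling the strongly typed getter,
// and surface the result as a C# nullable int.
int? customerId = reader.IsDBNull(1) ? null : reader.GetInt32(1);
Console.WriteLine(customerId.HasValue ? customerId.Value.ToString() : "(null)");
```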

Abstracting Data Providers Using Interfaces
At this point, you should have a better idea of the common functionality found among all .NET data providers. Recall that even though the exact names of the implementing types will differ among data providers, you can program against these types in a similar manner—that is the beauty of interface-based polymorphism. For example, if you define a method that takes an IDbConnection parameter, you can pass in any ADO.NET connection object, like so:

public static void OpenConnection(IDbConnection connection)
{
  // Open the incoming connection for the caller.
  connection.Open();
}

■ Note Interfaces are not strictly required; you can achieve the same level of abstraction using abstract base classes (such as DbConnection) as parameters or return values. However, using interfaces instead of base classes is the generally accepted best practice.

The same holds true for member return values. Create a new .NET Console application named MyConnectionFactory. Add the following NuGet packages to the project (note that the OleDb package is valid only on Windows):

Microsoft.Data.SqlClient
System.Data.Common
System.Data.Odbc
System.Data.OleDb
Next, add a new file named DataProviderEnum.cs and update the code to the following:

namespace MyConnectionFactory;

// OleDb is Windows only and is not supported in .NET Core/.NET 5+.
enum DataProviderEnum
{
  SqlServer,
#if PC
  OleDb,
#endif
  Odbc,
  None
}

If you are using a Windows OS on your development machine, update the project file to define the conditional compiler symbol PC.


<PropertyGroup>
  <DefineConstants>PC</DefineConstants>
</PropertyGroup>

If you are using Visual Studio, right-click the project, select Properties, and then go to the Build tab to enter the “Conditional compiler symbols” values.
The following code example allows you to obtain a specific connection object based on the value of a custom enumeration. For diagnostic purposes, you simply print the underlying connection object using reflection services.

using System.Data;
using System.Data.Odbc;
#if PC
using System.Data.OleDb;
#endif
using Microsoft.Data.SqlClient;
using MyConnectionFactory;

Console.WriteLine("*** Very Simple Connection Factory ***\n");
Setup(DataProviderEnum.SqlServer);
#if PC
Setup(DataProviderEnum.OleDb); // Not supported on macOS
#endif
Setup(DataProviderEnum.Odbc);
Setup(DataProviderEnum.None);
Console.ReadLine();

void Setup(DataProviderEnum provider)
{
  // Get a specific connection.
  IDbConnection myConnection = GetConnection(provider);
  Console.WriteLine($"Your connection is a {myConnection?.GetType().Name ?? "unrecognized type"}");
  // Open, use, and close connection…
}

// This method returns a specific connection object
// based on the value of a DataProviderEnum value.
IDbConnection GetConnection(DataProviderEnum dataProvider)
  => dataProvider switch
  {
    DataProviderEnum.SqlServer => new SqlConnection(),
#if PC
    // Not supported on macOS.
    DataProviderEnum.OleDb => new OleDbConnection(),
#endif
    DataProviderEnum.Odbc => new OdbcConnection(),
    _ => null,
  };

The benefit of working with the general interfaces of System.Data (or, for that matter, the abstract base classes of System.Data.Common) is that you have a much better chance of building a flexible code base that can evolve over time. For example, today you might be building an application that targets Microsoft SQL Server; however, it is possible your company could switch to a different database. If you build a solution that hard-codes the Microsoft SQL Server–specific types of Microsoft.Data.SqlClient, you will need to edit, recompile, and redeploy the code for the new database provider.
At this point, you have authored some (quite simple) ADO.NET code that allows you to create different types of provider-specific connection objects. However, obtaining a connection object is only one aspect of working with ADO.NET. To make a worthwhile data provider factory library, you would also have to account for command objects, data readers, transaction objects, and other data-centric types. Building such a code library would not necessarily be difficult, but it would require a considerable amount of code and time.

Since the release of .NET 2.0, the kind folks in Redmond have built this exact functionality directly into the .NET base class libraries. This functionality has been significantly updated for .NET Core and .NET 5+.
You will examine this formal API in just a moment; however, first you need to create a custom database to use throughout this chapter (and for many chapters to come).

Setting Up SQL Server and Azure Data Studio
As you work through this chapter, you will execute queries against a simple SQL Server test database named AutoLot. In keeping with the automotive theme used throughout this book, this database will contain five interrelated tables (Inventory, Makes, Orders, Customers, and CreditRisks) that contain various bits of data representing information for a fictional automobile sales company. Before getting into the database details, you must set up SQL Server and a SQL Server IDE.

■ Note If you are using a Windows-based development machine and have installed Visual Studio 2022, you also have a special instance of SQL Server Express (called LocalDb) installed, which can be used for all the examples in this book. If you are content to use that version, please skip to the section “Installing a SQL Server IDE.”

Installing SQL Server
For this chapter and many of the remaining chapters in this book, you will need to have access to an instance of SQL Server. If you are using a non-Windows-based development machine and do not have an external instance of SQL Server available, or choose not to use an external SQL Server instance, you can run SQL Server inside a Docker container on your Mac- or Linux-based workstation. Docker also works on Windows machines, so you are welcome to run the examples in this book using Docker regardless of your operating system of choice.

■ Note Containerization is a large topic, and there just isn’t space in this book to get into the deep details of containers or Docker. This book will cover just enough so you can work through the examples.

Installing SQL Server in a Docker Container
Docker Desktop can be downloaded from www.docker.com/get-started. Download and install the appropriate version (Windows, Mac, Linux) for your workstation (you will need a free DockerHub user account). Make sure you select Linux containers when prompted.

■ Note The container choice (Windows or Linux) is the operating system running inside the container, not the operating system of your workstation.

Pulling the Image and Running SQL Server 2019
Containers are based on images, and each image is a layered set that builds up the final product. To get the image needed to run SQL Server 2019 in a container, open a command window and enter the following command:

docker pull mcr.microsoft.com/mssql/server:2019-latest

Once you have the image loaded onto your machine, you need to start SQL Server. To do that, enter the following command (all on one line):

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P@ssw0rd" -p 5433:1433 --name AutoLot -h AutoLotHost -d mcr.microsoft.com/mssql/server:2019-latest

The previous command accepts the end user license agreement, sets the password (in real life, you need to use a strong password), sets the port mapping (port 5433 on your machine maps to the default port for SQL Server, which is 1433) in the container, names the container (AutoLot), names the host (AutoLotHost), and finally informs Docker to use the previously downloaded image.

■ Note These are not settings you want to use for real development. For information on changing the SA password and to see a tutorial, go to https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-cmd.

To confirm that it is running, enter the command docker ps -a in your command prompt. You will see output like the following (some columns omitted for brevity):

C:\Users\japik>docker ps -a
CONTAINER ID IMAGE PORTS NAMES
347475cfb823 mcr.microsoft.com/mssql/server:2019-latest 0.0.0.0:5433->1433/tcp AutoLot
To stop the container, enter docker stop 34747, where the numbers 34747 are the first five characters of the container ID. To restart the container, enter docker start 34747, again updating the command with the beginning of your container’s ID.

■ Note You can also use the container’s name (AutoLot in this example) with the Docker CLI commands, for example, docker start AutoLot. Be aware that the Docker commands, regardless of operating system, are case sensitive.

If you want to use the Docker Dashboard, right-click the Docker ship (in your system tray) and select Dashboard, and you should see the image running on port 5433. Hover over the image name with your mouse, and you will see the commands to stop, start, and delete (among others), as shown in Figure 20-2.

Figure 20-2. Docker Dashboard

■ Note To connect to SQL Server with an encrypted connection, there must be a certificate installed on the host. Follow the instructions in the docs to install a certificate in your Docker container and enable secure connections: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-security?view=sql-server-ver15. For this book, we will use an unencrypted connection to SQL Server, which should not be used in real development.

Installing SQL Server 2019
A special instance of SQL Server named LocalDb is installed with Visual Studio 2022. If you choose not to use SQL Server Express LocalDB or Docker and you are using a Windows machine, you need to install SQL Server 2019 Developer Edition. SQL Server 2019 Developer Edition is free and can be downloaded from here:

https://www.microsoft.com/en-us/sql-server/sql-server-downloads

If you have another edition, you can use that instance with this book as well; you will just need to change your connection string to match your installed instance.

Installing a SQL Server IDE
Azure Data Studio is a new IDE for use with SQL Server. It is free and cross-platform, so it will work on Windows, Mac, or Linux. It can be downloaded from here:

https://docs.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio

■ Note If you are using a Windows machine and prefer to use SQL Server Management Studio (SSMS), you can download the latest copy from here: https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms.

Connecting to SQL Server
Once you have Azure Data Studio or SSMS installed, it is time to connect to your database instance. The following sections cover connecting to SQL Server in either a Docker container or using a LocalDb instance. If you are using another instance of SQL Server, please update the connection string used in the following sections accordingly.

Connecting to SQL Server in a Docker Container
To connect to your SQL Server instance running in a Docker container, first make sure it is up and running. Next, click “Create a connection” in Azure Data Studio, as shown in Figure 20-3.

Figure 20-3. Creating a connection in Azure Data Studio

In the Connection Details dialog, enter .,5433 for the Server value. The dot indicates the current host, and 5433 is the port that you indicated when creating the SQL Server instance in the Docker container. The host and the port must be separated by a comma. Enter sa for the username; the password is the same one that you entered when creating the SQL Server instance. The name is optional but allows you to quickly select this connection in subsequent Azure Data Studio sessions. Figure 20-4 shows these connection options.

Figure 20-4. Setting the connection options for Docker SQL Server
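
Later in the chapter these same details will reappear in C# as an ADO.NET connection string. As a hypothetical sketch, the values mirror the docker run command above (Encrypt=False is for local experimentation only; adjust server, port, and credentials to match your container):

```csharp
using System;

// Hypothetical connection string for the Docker-hosted instance.
string connectionString =
  "Data Source=.,5433;User ID=sa;Password=P@ssw0rd;" +
  "Initial Catalog=AutoLot;Encrypt=False";
Console.WriteLine(connectionString);
```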

Connecting to SQL Server LocalDb
To connect to the Visual Studio–installed version of SQL Server Express LocalDb, update the connection information to match what is shown in Figure 20-5.

Figure 20-5. Setting the connection options for SQL Server LocalDb

When connecting to LocalDb, you can use Windows Authentication, since the instance is running on the same machine as Azure Data Studio and the same security context as the currently logged in user.

Connecting to Any Other SQL Server Instance
If you are connecting to any other SQL Server instance, update the connection properties accordingly.

Restoring the AutoLot Database Backup
Instead of building the database from scratch, you can use either SSMS or Azure Data Studio to restore one of the supplied backups contained in the chapter’s files in the repository. There are two backups supplied: the one named AutoLotWindows.ba_ is designed for use on a Windows machine (LocalDb, Windows Server, etc.), and the one named AutoLotDocker.ba_ is designed for use in a Docker container.

■ Note Git by default ignores files with a bak extension. You will need to rename the extension from ba_ to bak before restoring the database.

Copying the Backup File to Your Container
If you are using SQL Server in a Docker container, you first must copy the backup file to the container. Fortunately, the Docker CLI provides a mechanism for working with a container’s file system. First, create a new directory for the backup using the following command in a command window on your host machine:

docker exec -it AutoLot mkdir var/opt/mssql/backup

The path structure must match the container’s operating system (in this case Ubuntu), even if your host machine is Windows based. Next, copy the backup to your new directory using the following command (updating the location of AutoLotDocker.bak to your local machine’s relative or absolute path):

[Windows]
docker cp .\AutoLotDocker.bak AutoLot:var/opt/mssql/backup

[Non-Windows]
docker cp ./AutoLotDocker.bak AutoLot:var/opt/mssql/backup

Note that the source directory structure matches the host machine (in my example, Windows), while the target is the container name and then the directory path (in the target OS format).

Restoring the Database with SSMS
To restore the database using SSMS, right-click the Databases node in Object Explorer. Select Restore Database. Select Device and click the ellipses. This will open the Select Backup Device dialog.

Restoring the Database to SQL Server (Docker)
Keep “Backup media type” set to File, and then click Add, navigate to the AutoLotDocker.bak file in the container, and click OK. When you are back on the main restore screen, click OK, as shown in Figure 20-6.

Figure 20-6. Restoring the database with SSMS

Restoring the Database to SQL Server (Windows)
Keep “Backup media type” set to File, and then click Add, navigate to AutoLotWindows.bak, and click OK. When you are back on the main restore screen, click OK, as shown in Figure 20-7.

Figure 20-7. Restoring the database with SSMS

Restoring the Database with Azure Data Studio
To restore the database using Azure Data Studio, click View, select the Command Palette (or press Ctrl+Shift+P), and select Restore. Select “Backup file” as the “Restore from” option and then the file you just copied. The target database and related fields will be filled in for you, as shown in Figure 20-8.

Figure 20-8. Restoring the database to Docker using Azure Data Studio

■ Note The process is the same to restore the Windows version of the backup using Azure Data Studio. Simply adjust the filename and paths.

Creating the AutoLot Database
This entire section is devoted to creating the AutoLot database using Azure Data Studio (or SQL Server Management Studio). If you restored the backup, you can skip ahead to the section “The ADO.NET Data Provider Factory Model.”

■ Note All of the script files are located in a folder named Scripts along with this chapter’s code in the Git repository.

Creating the Database
To create the AutoLot database, connect to your database server using Azure Data Studio. Open a new query by selecting File ➤ New Query (or by pressing Ctrl+N) and entering the following command text:

USE [master]
GO
CREATE DATABASE [AutoLot]
GO
ALTER DATABASE [AutoLot] SET RECOVERY SIMPLE
GO

Besides changing the recovery mode to simple, this creates the AutoLot database using the SQL Server defaults. Click Run (or press F5) to create the database.

Creating the Tables
The AutoLot database contains five tables: Inventory, Makes, Customers, Orders, and CreditRisks.

Creating the Inventory Table
With the database created, it is time to create the tables. First up is the Inventory table. Open a new query and enter the following SQL:

USE [AutoLot]
GO
CREATE TABLE [dbo].[Inventory](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [MakeId] [int] NOT NULL,
    [Color] [nvarchar](50) NOT NULL,
    [PetName] [nvarchar](50) NOT NULL,
    [TimeStamp] [timestamp] NULL,
    CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Click Run (or press F5) to create the table.

■ Note If you are not familiar with the SQL Server TimeStamp data type (that maps to a byte[] in C#), don’t worry about it at this time. Just know that it is used for row-level concurrency checking and will be covered with Entity Framework Core.

Creating the Makes Table
The Inventory table stores a foreign key to the (not yet created) Makes table. Create a new query and enter the following SQL to create the Makes table:

USE [AutoLot]
GO
CREATE TABLE [dbo].[Makes](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](50) NOT NULL,
    [TimeStamp] [timestamp] NULL,
    CONSTRAINT [PK_Makes] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Click Run (or press F5) to create the table.

Creating the Customers Table
The Customers table (as the name suggests) will contain a list of customers. Create a new query and enter the following SQL commands:

USE [AutoLot]
GO
CREATE TABLE [dbo].[Customers](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](50) NOT NULL,
    [LastName] [nvarchar](50) NOT NULL,
    [TimeStamp] [timestamp] NULL,
    CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Click Run (or press F5) to create the Customers table.

Creating the Orders Table
You will use the next table, Orders, to represent the automobile a given customer has ordered. Create a new query, enter the following code, and click Run (or press F5):

USE [AutoLot]
GO
CREATE TABLE [dbo].[Orders](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [CustomerId] [int] NOT NULL,
    [CarId] [int] NOT NULL,
    [TimeStamp] [timestamp] NULL,
    CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Creating the CreditRisks Table
You will use your final table, CreditRisks, to represent the customers who are considered a credit risk. Create a new query, enter the following code, and click Run (or press F5):

USE [AutoLot]
GO
CREATE TABLE [dbo].[CreditRisks](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](50) NOT NULL,
    [LastName] [nvarchar](50) NOT NULL,
    [CustomerId] [int] NOT NULL,
    [TimeStamp] [timestamp] NULL,
    CONSTRAINT [PK_CreditRisks] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Creating the Table Relationships
This section will add the foreign key relationships between the interrelated tables.

Creating the Inventory to Makes Relationship
Open a new query, enter the following SQL, and click Run (or press F5):

USE [AutoLot]
GO
CREATE NONCLUSTERED INDEX [IX_Inventory_MakeId] ON [dbo].[Inventory]
(
    [MakeId] ASC
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Inventory] WITH CHECK ADD CONSTRAINT [FK_Make_Inventory] FOREIGN KEY([MakeId])
REFERENCES [dbo].[Makes] ([Id])
GO
ALTER TABLE [dbo].[Inventory] CHECK CONSTRAINT [FK_Make_Inventory]
GO

Creating the Inventory to Orders Relationship
Open a new query, enter the following SQL, and click Run (or press F5):

USE [AutoLot]
GO
CREATE NONCLUSTERED INDEX [IX_Orders_CarId] ON [dbo].[Orders]
(
    [CarId] ASC
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Orders] WITH CHECK ADD CONSTRAINT [FK_Orders_Inventory] FOREIGN KEY([CarId])
REFERENCES [dbo].[Inventory] ([Id])
GO
ALTER TABLE [dbo].[Orders] CHECK CONSTRAINT [FK_Orders_Inventory]
GO

Creating the Orders to Customers Relationship
Open a new query, enter the following SQL, and click Run (or press F5):

USE [AutoLot]
GO
CREATE UNIQUE NONCLUSTERED INDEX [IX_Orders_CustomerId_CarId] ON [dbo].[Orders]
(
    [CustomerId] ASC,
    [CarId] ASC
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Orders] WITH CHECK ADD CONSTRAINT [FK_Orders_Customers] FOREIGN KEY([CustomerId])
REFERENCES [dbo].[Customers] ([Id]) ON DELETE CASCADE
GO
ALTER TABLE [dbo].[Orders] CHECK CONSTRAINT [FK_Orders_Customers]
GO

Creating the Customers to CreditRisks Relationship
Open a new query, enter the following SQL, and click Run (or press F5):

USE [AutoLot]
GO
CREATE NONCLUSTERED INDEX [IX_CreditRisks_CustomerId] ON [dbo].[CreditRisks]
(
    [CustomerId] ASC
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CreditRisks] WITH CHECK ADD CONSTRAINT [FK_CreditRisks_Customers] FOREIGN KEY([CustomerId])
REFERENCES [dbo].[Customers] ([Id]) ON DELETE CASCADE
GO
ALTER TABLE [dbo].[CreditRisks] CHECK CONSTRAINT [FK_CreditRisks_Customers]
GO

■ Note If you are wondering why there are columns for FirstName and LastName and a relationship to the customer table, it’s merely for demo purposes. I could think up a creative reason for it, but at the end of the day, they are used in later examples in the book.

Creating the GetPetName() Stored Procedure
Later in this chapter, you will learn how to use ADO.NET to invoke stored procedures. As you might already know, stored procedures are code routines stored within a database that do something. Like C# methods, stored procedures can return data or just operate on data without returning anything. You will add a single stored procedure that will return an automobile’s pet name, based on the supplied carId. To do so, create a new query window and enter the following SQL command:

USE [AutoLot]
GO
CREATE PROCEDURE [dbo].[GetPetName]
@carID int,
@petName nvarchar(50) output
AS
SELECT @petName = PetName from dbo.Inventory where Id = @carID
GO

Click Run (or press F5) to create the stored procedure.

Adding Test Records
Databases are rather boring without data, and it is a good idea to have scripts that can quickly load test records into the database.

Makes Records
Create a new query and execute the following SQL statements to add records into the Makes table:

USE [AutoLot]
GO
SET IDENTITY_INSERT [dbo].[Makes] ON
INSERT INTO [dbo].[Makes] ([Id], [Name]) VALUES (1, N'VW')
INSERT INTO [dbo].[Makes] ([Id], [Name]) VALUES (2, N'Ford')
INSERT INTO [dbo].[Makes] ([Id], [Name]) VALUES (3, N'Saab')
INSERT INTO [dbo].[Makes] ([Id], [Name]) VALUES (4, N'Yugo')
INSERT INTO [dbo].[Makes] ([Id], [Name]) VALUES (5, N'BMW')
INSERT INTO [dbo].[Makes] ([Id], [Name]) VALUES (6, N'Pinto')
SET IDENTITY_INSERT [dbo].[Makes] OFF

Inventory Table Records
To add records to your first table, create a new query and execute the following SQL statements to add records into the Inventory table:

USE [AutoLot]
GO
SET IDENTITY_INSERT [dbo].[Inventory] ON
GO
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (1, 1, N'Black', N'Zippy')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (2, 2, N'Rust', N'Rusty')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (3, 3, N'Black', N'Mel')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (4, 4, N'Yellow', N'Clunker')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (5, 5, N'Black', N'Bimmer')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (6, 5, N'Green', N'Hank')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (7, 5, N'Pink', N'Pinky')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (8, 6, N'Black', N'Pete')
INSERT INTO [dbo].[Inventory] ([Id], [MakeId], [Color], [PetName]) VALUES (9, 4, N'Brown', N'Brownie')
SET IDENTITY_INSERT [dbo].[Inventory] OFF
GO

Customer Records
To add records to the Customers table, create a new query and execute the following SQL statements:

USE [AutoLot]
GO
SET IDENTITY_INSERT [dbo].[Customers] ON
INSERT INTO [dbo].[Customers] ([Id], [FirstName], [LastName]) VALUES (1, N'Dave', N'Brenner')
INSERT INTO [dbo].[Customers] ([Id], [FirstName], [LastName]) VALUES (2, N'Matt', N'Walton')
INSERT INTO [dbo].[Customers] ([Id], [FirstName], [LastName]) VALUES (3, N'Steve', N'Hagen')
INSERT INTO [dbo].[Customers] ([Id], [FirstName], [LastName]) VALUES (4, N'Pat', N'Walton')
INSERT INTO [dbo].[Customers] ([Id], [FirstName], [LastName]) VALUES (5, N'Bad', N'Customer')
SET IDENTITY_INSERT [dbo].[Customers] OFF

Order Records
Now add data to your Orders table. Create a new query, enter the following SQL, and click Run (or press F5):

USE [AutoLot]
GO
SET IDENTITY_INSERT [dbo].[Orders] ON
INSERT INTO [dbo].[Orders] ([Id], [CustomerId], [CarId]) VALUES (1, 1, 5)
INSERT INTO [dbo].[Orders] ([Id], [CustomerId], [CarId]) VALUES (2, 2, 1)
INSERT INTO [dbo].[Orders] ([Id], [CustomerId], [CarId]) VALUES (3, 3, 4)
INSERT INTO [dbo].[Orders] ([Id], [CustomerId], [CarId]) VALUES (4, 4, 7)
SET IDENTITY_INSERT [dbo].[Orders] OFF

CreditRisk Records
The final step is to add data to the CreditRisks table. Create a new query, enter the following SQL, and click Run (or press F5):

USE [AutoLot]
GO
SET IDENTITY_INSERT [dbo].[CreditRisks] ON
INSERT INTO [dbo].[CreditRisks] ([Id], [FirstName], [LastName], [CustomerId]) VALUES (1, N'Bad', N'Customer', 5)
SET IDENTITY_INSERT [dbo].[CreditRisks] OFF

With this, the AutoLot database is complete! Of course, this is a far cry from a real-world application database, but it will serve your needs for this chapter and will be added to in the Entity Framework Core chapters. Now that you have a database to test with, you can dive into the details of the ADO.NET data provider factory model.

The ADO.NET Data Provider Factory Model
The .NET data provider factory pattern allows you to build a single code base using generalized data access types. To understand the data provider factory implementation, recall from Table 20-1 that the classes within a data provider all derive from the same base classes defined within the System.Data.Common namespace.
• DbCommand: The abstract base class for all command classes
• DbConnection: The abstract base class for all connection classes
• DbDataAdapter: The abstract base class for all data adapter classes
• DbDataReader: The abstract base class for all data reader classes
• DbParameter: The abstract base class for all parameter classes
• DbTransaction: The abstract base class for all transaction classes

Each of the .NET-compliant data providers contains a class type that derives from System.Data.Common.DbProviderFactory. This base class defines several methods that retrieve provider-specific data objects. Here are the members of DbProviderFactory:

public abstract class DbProviderFactory
{
  public virtual bool CanCreateDataAdapter { get; }
  public virtual bool CanCreateCommandBuilder { get; }
  public virtual DbCommand CreateCommand();
  public virtual DbCommandBuilder CreateCommandBuilder();
  public virtual DbConnection CreateConnection();
  public virtual DbConnectionStringBuilder CreateConnectionStringBuilder();
  public virtual DbDataAdapter CreateDataAdapter();
  public virtual DbParameter CreateParameter();
  public virtual DbDataSourceEnumerator CreateDataSourceEnumerator();
}

To obtain the DbProviderFactory-derived type for your data provider, each provider exposes a static Instance property that returns the correct type. To return the SQL Server version of the DbProviderFactory, use the following code:

// Get the factory for the SQL data provider.
DbProviderFactory sqlFactory = Microsoft.Data.SqlClient.SqlClientFactory.Instance;
To make the program more versatile, you can create a DbProviderFactory factory that returns a specific flavor of a DbProviderFactory based on a setting in the appsettings.json file for the application. You will learn how to do this shortly; for the moment, you can obtain the associated provider-specific data objects (e.g., connections, commands, and data readers) once you have obtained the factory for your data provider.

A Complete Data Provider Factory Example
For a complete example, create a new C# Console Application project (named DataProviderFactory) that prints out the automobile inventory of the AutoLot database. For this initial example, you will hard-code the data access logic directly within the console application (to keep things simple). As you progress through this chapter, you will see better ways to do this.
Add the Microsoft.Extensions.Configuration.Json, System.Data.Common, System.Data.Odbc, System.Data.OleDb, and Microsoft.Data.SqlClient packages to the project. Next, define the PC compiler constant (if you are using a Windows OS).


<PropertyGroup>
  <DefineConstants>PC</DefineConstants>
</PropertyGroup>

Next, add a new file named DataProviderEnum.cs and update the code to the following:
namespace DataProviderFactory;

//OleDb is Windows only
enum DataProviderEnum
{
  SqlServer,
#if PC
  OleDb,
#endif
  Odbc
}

Add a new JSON file named appsettings.json to the project and update its contents to the following (update the connection strings based on your specific environment):

{
  "ProviderName": "SqlServer",
  //"ProviderName": "OleDb",
  //"ProviderName": "Odbc",
  "SqlServer": {
    // for localdb use @"Data Source=(localdb)\mssqllocaldb;Integrated Security=true;Initial Catalog=AutoLot"
    "ConnectionString": "Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;"
  },
  "Odbc": {
    // for localdb use @"Driver={ODBC Driver 17 for SQL Server};Server=(localdb)\mssqllocaldb;Database=AutoLot;Trusted_Connection=Yes"
    "ConnectionString": "Driver={ODBC Driver 17 for SQL Server};Server=localhost,5433;Database=AutoLot;UId=sa;Pwd=P@ssw0rd;Encrypt=False;"
  },
  "OleDb": {
    // for localdb use @"Provider=SQLNCLI11;Data Source=(localdb)\mssqllocaldb;Initial Catalog=AutoLot;Integrated Security=SSPI"
    "ConnectionString": "Provider=SQLNCLI11;Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;"
  }
}

■ Note When using SQL Server in a Docker container that does not have a certificate installed, the connection string must be unencrypted, which is why we have the Encrypt=False; setting in the Docker connection strings. For real-world applications, do not use this setting; instead, make sure the container (or your SQL Server instance) has a certificate, and use Encrypt=True; instead.

Configure MSBuild to copy the JSON settings file to the output directory on every build. Update the project file by adding the following:



<ItemGroup>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>

■ Note The CopyToOutputDirectory is whitespace sensitive. Make sure it is all on one line without any spaces around the word Always.

Now that you have a proper appsettings.json, you can read in the provider and connectionString values using .NET configuration. Start by clearing out all the code in the file and adding the following using statements at the top of the Program.cs file:

using System.Data.Common;
using System.Data.Odbc;
#if PC
using System.Data.OleDb;
#endif
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;
using DataProviderFactory;

Add the following code to the Program.cs file:

Console.WriteLine(" Fun with Data Provider Factories \n");
var (provider, connectionString) = GetProviderFromConfiguration();
DbProviderFactory factory = GetDbProviderFactory(provider);

// Now get the connection object.
using (DbConnection connection = factory.CreateConnection())
{
  Console.WriteLine($"Your connection object is a: {connection.GetType().Name}");
  connection.ConnectionString = connectionString;
  connection.Open();

  // Make command object.
  DbCommand command = factory.CreateCommand();
  Console.WriteLine($"Your command object is a: {command.GetType().Name}");
  command.Connection = connection;
  command.CommandText =
    "Select i.Id, m.Name From Inventory i inner join Makes m on m.Id = i.MakeId";

  // Print out data with data reader.
  using (DbDataReader dataReader = command.ExecuteReader())
  {
    Console.WriteLine($"Your data reader object is a: {dataReader.GetType().Name}");
    Console.WriteLine("\n Current Inventory ");
    while (dataReader.Read())
    {
      Console.WriteLine($"-> Car #{dataReader["Id"]} is a {dataReader["Name"]}.");
    }
  }
}
Console.ReadLine();

Next, add the following code to the end of the Program.cs file. These methods read the configuration, set the DataProviderEnum to the correct value, get the connection string, and return an instance of the DbProviderFactory:

static DbProviderFactory GetDbProviderFactory(DataProviderEnum provider)
  => provider switch
  {
    DataProviderEnum.SqlServer => SqlClientFactory.Instance,
    DataProviderEnum.Odbc => OdbcFactory.Instance,
#if PC
    DataProviderEnum.OleDb => OleDbFactory.Instance,
#endif
    _ => null
  };

static (DataProviderEnum Provider, string ConnectionString) GetProviderFromConfiguration()
{
  IConfiguration config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", true, true)
    .Build();
  var providerName = config["ProviderName"];
  if (Enum.TryParse<DataProviderEnum>(providerName, out DataProviderEnum provider))
  {
    return (provider, config[$"{providerName}:ConnectionString"]);
  }
  throw new Exception("Invalid data provider value supplied.");
}

Notice that, for diagnostic purposes, you use reflection services to print the name of the underlying connection, command, and data reader. If you run this application, you will find the following current data in the Inventory table of the AutoLot database printed to the console:

Fun with Data Provider Factories
Your connection object is a: SqlConnection
Your command object is a: SqlCommand
Your data reader object is a: SqlDataReader

Current Inventory
-> Car #1 is a VW.
-> Car #2 is a Ford.
-> Car #3 is a Saab.
-> Car #4 is a Yugo.
-> Car #9 is a Yugo.
-> Car #5 is a BMW.
-> Car #6 is a BMW.
-> Car #7 is a BMW.
-> Car #8 is a Pinto.

Now change the settings file to specify a different provider. The code will pick up the related connection string and produce the same output as before, except for the type-specific information.
Of course, based on your experience with ADO.NET, you might be a bit unsure exactly what the connection, command, and data reader objects actually do. Do not sweat the details for the time being (quite a few pages remain in this chapter, after all!). At this point, it is enough to know that you can use the ADO.NET data provider factory model to build a single code base that can consume various data providers in a declarative manner.

A Potential Drawback with the Data Provider Factory Model
Although this is a powerful model, you must make sure that the code base uses only types and methods common to all providers through the members of the abstract base classes. Therefore, when authoring your code base, you are limited to the members exposed by DbConnection, DbCommand, and the other types of the System.Data.Common namespace.
Given this, you might find that this generalized approach prevents you from directly accessing some of the bells and whistles of a particular DBMS. If you must be able to invoke specific members of the underlying provider (e.g., SqlConnection), you can do so using an explicit cast, as in this example:

if (connection is SqlConnection sqlConnection)
{
  // Print out which version of SQL Server is used.
  Console.WriteLine(sqlConnection.ServerVersion);
}

When doing this, however, your code base becomes a bit harder to maintain (and less flexible) because you must add a number of runtime checks. Nevertheless, if you need to build ADO.NET data access libraries in the most flexible way possible, the data provider factory model provides a great mechanism for doing so.

■ Note Entity Framework Core and its support for dependency injection greatly simplifies building data access libraries that need to access disparate data sources.

With this first example behind you, you can now dive into the details of working with ADO.NET.

Diving Deeper into Connections, Commands, and DataReaders
As shown in the previous example, ADO.NET allows you to interact with a database using the connection, command, and data reader objects of your data provider. Now you will create an expanded example to get a deeper understanding of these objects in ADO.NET.
As the previous example demonstrated, you need to perform the following steps when you want to connect to a database and read the records using a data reader object:
1. Allocate, configure, and open your connection object.
2. Allocate and configure a command object, specifying the connection object as a constructor argument or with the Connection property.
3. Call ExecuteReader() on the configured command class.
4. Process each record using the Read() method of the data reader.

To get the ball rolling, create a new Console Application project named AutoLot.DataReader and add the Microsoft.Data.SqlClient package. Here is the complete code within the Program.cs file (analysis will follow):

using Microsoft.Data.SqlClient;

Console.WriteLine(" Fun with Data Readers \n");

// Create and open a connection.
using (SqlConnection connection = new SqlConnection())
{
  connection.ConnectionString =
    @"Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;";
  connection.Open();

  // Create a SQL command object.
  string sql =
    @"Select i.id, m.Name as Make, i.Color, i.Petname
      FROM Inventory i
      INNER JOIN Makes m on m.Id = i.MakeId";
  SqlCommand myCommand = new SqlCommand(sql, connection);

  // Obtain a data reader a la ExecuteReader().
  using (SqlDataReader myDataReader = myCommand.ExecuteReader())
  {
    // Loop over the results.
    while (myDataReader.Read())
    {
      Console.WriteLine($"-> Make: {myDataReader["Make"]}, PetName: {myDataReader["PetName"]}, Color: {myDataReader["Color"]}.");
    }
  }
}
Console.ReadLine();

Working with Connection Objects
The first step to take when working with a data provider is to establish a session with the data source using the connection object (which, as you recall, derives from DbConnection). .NET connection objects are provided with a formatted connection string; this string contains a number of name-value pairs, separated by semicolons. You use this information to identify the name of the machine you want to connect to, the required security settings, the name of the database on that machine, and other data provider–specific information.
As you can infer from the preceding code, the Initial Catalog name refers to the database you want to establish a session with. The Data Source name identifies the name of the machine that maintains the database. I am using .,5433, which refers to the host machine (the period is the same as using "localhost"), and port 5433, which is the port the Docker container has mapped to the SQL Server port. If you were using a different instance, you would define the property as machinename,port\instance. For example, MYSERVER\SQLSERVER2019 means MYSERVER is the name of the server SQL Server is running on, the default port is being used, and SQLSERVER2019 is the name of the instance. If the machine is local to the development, you can use a period (.) or the token localhost for the server name. If the SQL Server instance is the default instance, the instance name is left off. For example, if you created AutoLot on a Microsoft SQL Server installation set up as the default instance on your local computer, you would use "Data Source=localhost".
Beyond this, you can supply any number of tokens that represent security credentials. If Integrated Security is set to true, current Windows account credentials are used for authentication and authorization.
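As a small sketch of this option, a connection string using integrated security against a local default instance might look like the following (Windows only; the server and database names are illustrative):

```csharp
// Windows authentication: no User Id/Password pair is needed,
// since the current Windows account credentials are used.
string integratedConnString =
  @"Data Source=localhost;Initial Catalog=AutoLot;Integrated Security=true;";
```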
After you establish your connection string, you can use a call to Open() to establish a connection with the DBMS. In addition to the ConnectionString, Open(), and Close() members, a connection object provides a number of members that let you configure additional settings regarding your connection, such as timeout settings and transactional information. Table 20-4 lists some (but not all) members of the DbConnection base class.

Table 20-4. Members of the DbConnection Type

Member Meaning in Life

BeginTransaction() You use this method to begin a database transaction.
ChangeDatabase() You use this method to change the database on an open connection.
ConnectionTimeout This read-only property returns the amount of time to wait while establishing a connection before terminating and generating an error (the default value is provider dependent). If you would like to change the default, specify a Connect Timeout segment in the connection string (e.g., Connect Timeout=30).
Database This read-only property gets the name of the database maintained by the connection object.
DataSource This read-only property gets the location of the database maintained by the connection object.
GetSchema() This method returns a DataTable object that contains schema information from the data source.
State This read-only property gets the current state of the connection, which is represented by the ConnectionState enumeration.

The properties of the DbConnection type are typically read-only in nature and are useful only when you want to obtain the characteristics of a connection at runtime. When you need to override default settings, you must alter the connection string itself. For example, the following connection string sets the connection timeout setting from the default (15 seconds for SQL Server) to 30 seconds:

using (SqlConnection connection = new SqlConnection())
{
  connection.ConnectionString =
    @"Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;Connect Timeout=30";
  connection.Open();
}

The following code outputs details about the SqlConnection that is passed into it:

static void ShowConnectionStatus(SqlConnection connection)
{
  // Show various stats about current connection object.
  Console.WriteLine(" Info about your connection ");
  Console.WriteLine($"Database location: {connection.DataSource}");
  Console.WriteLine($"Database name: {connection.Database}");
  Console.WriteLine($"Timeout: {connection.ConnectionTimeout}");
  Console.WriteLine($"Connection state: {connection.State}");
}

While most of these properties are self-explanatory, the State property is worth a special mention. This read-only property reports a value of the ConnectionState enumeration, shown here:

public enum ConnectionState
{
  Broken,
  Closed,
  Connecting,
  Executing,
  Fetching,
  Open
}

However, the only valid ConnectionState values are ConnectionState.Open, ConnectionState.Connecting, and ConnectionState.Closed (the remaining members of this enum are reserved for future use). Also, it is always safe to close a connection, even if the connection state is currently ConnectionState.Closed.
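As a small illustration of that last point, the following guard is legal but unnecessary, since Close() is safe to call even on an already-closed connection:

```csharp
// Checking State before closing is optional; calling Close() on a
// connection whose State is already ConnectionState.Closed is harmless.
if (connection.State == ConnectionState.Open)
{
  connection.Close();
}
```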

Working with ConnectionStringBuilder Objects
Working with connection strings programmatically can be cumbersome because they are often represented as string literals, which are difficult to maintain and error-prone at best. The .NET-compliant data providers support connection string builder objects, which allow you to establish the name-value pairs using strongly typed properties. Consider the following update to the current code:

var connectionStringBuilder = new SqlConnectionStringBuilder
{
  InitialCatalog = "AutoLot",
  DataSource = ".,5433",
  UserID = "sa",
  Password = "P@ssw0rd",
  ConnectTimeout = 30,
  Encrypt = false
};
connection.ConnectionString = connectionStringBuilder.ConnectionString;

In this iteration, you create an instance of SqlConnectionStringBuilder, set the properties accordingly, and obtain the internal string using the ConnectionString property. Also note that you use the default constructor of the type. If you so choose, you can also create an instance of your data provider’s connection string builder object by passing in an existing connection string as a starting point (this can be helpful when you read these values dynamically from an external source). Once you have hydrated the object with the initial string data, you can change specific name-value pairs using the related properties.
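As a sketch of that second approach, here is a SqlConnectionStringBuilder hydrated from an existing string (perhaps read from a configuration file) with a single value then adjusted; the connection string shown is illustrative:

```csharp
// Start from an existing connection string, then tweak one setting
// via the strongly typed property.
var builder = new SqlConnectionStringBuilder(
  "Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;")
{
  ConnectTimeout = 30
};
Console.WriteLine(builder.ConnectionString);
```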

Working with Command Objects
Now that you understand better the role of the connection object, the next order of business is to check out how to submit SQL queries to the database in question. The SqlCommand type (which derives from DbCommand) is an OO representation of a SQL query, table name, or stored procedure. You specify the type of command using the CommandType property, which can take any value from the CommandType enum, as shown here:

public enum CommandType
{
  StoredProcedure,
  TableDirect,
  Text // Default value.
}

When you create a command object, you can establish the SQL query as a constructor parameter or directly by using the CommandText property. Also, when you create a command object, you need to specify the connection you want to use. Again, you can do so as a constructor parameter or by using the Connection property. Consider this code snippet:

// Create command object via ctor args.
string sql =
  @"Select i.id, m.Name as Make, i.Color, i.Petname
    FROM Inventory i
    INNER JOIN Makes m on m.Id = i.MakeId";
SqlCommand myCommand = new SqlCommand(sql, connection);

// Create another command object via properties.
SqlCommand testCommand = new SqlCommand();
testCommand.Connection = connection;
testCommand.CommandText = sql;

Realize that, at this point, you have not actually submitted the SQL query to the AutoLot database but instead prepared the state of the command object for future use. Table 20-5 highlights some additional members of the DbCommand type.

Table 20-5. Members of the DbCommand Type

Member Meaning in Life
CommandTimeout Gets or sets the time to wait while executing the command before terminating the attempt and generating an error. The default is 30 seconds.
Connection Gets or sets the DbConnection used by this instance of the DbCommand.
Parameters Gets the collection of DbParameter objects used for a parameterized query.
Cancel() Cancels the execution of a command.
ExecuteReader() Executes a SQL query and returns the data provider’s DbDataReader object, which provides forward-only, read-only access for the result of the query.
ExecuteNonQuery() Executes a SQL nonquery (e.g., an insert, update, delete, or create table).
ExecuteScalar() A lightweight version of the ExecuteReader() method that was designed specifically for singleton queries (e.g., obtaining a record count).
Prepare() Creates a prepared (or compiled) version of the command on the data source. As you might know, a prepared query executes slightly faster and is useful when you need to execute the same query multiple times (typically with different parameters each time).
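To give one of these members some context, here is a sketch of ExecuteScalar() used for a singleton query; it assumes an open SqlConnection named connection to the AutoLot database:

```csharp
// ExecuteScalar() returns the first column of the first row of the
// result set as an object, which is then cast to the expected type.
SqlCommand countCommand =
  new SqlCommand("Select Count(*) From Inventory", connection);
int carCount = (int)countCommand.ExecuteScalar();
Console.WriteLine($"Inventory currently holds {carCount} car(s).");
```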

Working with Data Readers
After you establish the active connection and SQL command, the next step is to submit the query to the data source. As you might guess, you have a number of ways to do this. The DbDataReader type (which implements IDataReader) is the simplest and fastest way to obtain information from a data store. Recall that data readers represent a read-only, forward-only stream of data returned one record at a time. Given this, data readers are useful only when submitting SQL selection statements to the underlying data store.
Data readers are useful when you need to iterate over large amounts of data quickly and you do not need to maintain an in-memory representation. For example, if you request 20,000 records from a table to store in a text file, it would be rather memory intensive to hold this information in a DataSet (because a DataSet holds the entire result of the query in memory at the same time).
A better approach is to create a data reader that spins over each record as rapidly as possible. Be aware, however, that data reader objects (unlike data adapter objects, which you will examine later) maintain an open connection to their data source until you explicitly close the connection.
You obtain data reader objects from the command object using a call to ExecuteReader(). The data reader represents the current record it has read from the database. The data reader has an indexer (i.e., the [] syntax in C#) that allows you to access a column in the current record. You can access the column either by name or by zero-based ordinal position.
The following use of the data reader leverages the Read() method to determine when you have reached the end of your records (using a false return value). For each incoming record that you read from the database, you use the type indexer to print out the make, pet name, and color of each automobile. Also note that you call Close() as soon as you finish processing the records, which frees up the connection object.


// Obtain a data reader via ExecuteReader().
using(SqlDataReader myDataReader = myCommand.ExecuteReader())
{
// Loop over the results.
while (myDataReader.Read())
{
WriteLine($"-> Make: {myDataReader["Make"]}, PetName: {myDataReader["PetName"]}, Color: {myDataReader["Color"]}.");
}
}
ReadLine();

In the preceding snippet, the indexer of the data reader object is overloaded to take either a string (representing the name of the column) or an int (representing the column's ordinal position). Thus, you can clean up the current reader logic (and avoid hard-coded string names) with the following update (note the use of the FieldCount property):

while (myDataReader.Read())
{
for (int i = 0; i < myDataReader.FieldCount; i++)
{
Console.Write(i != myDataReader.FieldCount - 1
? $"{myDataReader.GetName(i)} = {myDataReader.GetValue(i)}, "
: $"{myDataReader.GetName(i)} = {myDataReader.GetValue(i)} ");
}
Console.WriteLine();
}

If you compile and run your project at this point, you should see a list of all automobiles in the
Inventory table of the AutoLot database.

Fun with Data Readers

Info about your connection
Database location: .,5433
Database name: AutoLot
Timeout: 30
Connection state: Open

id = 1, Make = VW, Color = Black, Petname = Zippy
id = 2, Make = Ford, Color = Rust, Petname = Rusty
id = 3, Make = Saab, Color = Black, Petname = Mel
id = 4, Make = Yugo, Color = Yellow, Petname = Clunker
id = 5, Make = BMW, Color = Black, Petname = Bimmer
id = 6, Make = BMW, Color = Green, Petname = Hank
id = 7, Make = BMW, Color = Pink, Petname = Pinky
id = 8, Make = Pinto, Color = Black, Petname = Pete
id = 9, Make = Yugo, Color = Brown, Petname = Brownie

Obtaining Multiple Result Sets Using a Data Reader
Data reader objects can obtain multiple result sets using a single command object. For example, if you want to obtain all rows from the Inventory table, as well as all rows from the Customers table, you can specify both SQL SELECT statements using a semicolon delimiter, like so:

sql += ";Select * from Customers;";

■ Note The semicolon at the beginning of the second statement is not a typo. When a batch contains multiple statements, they must be separated by semicolons, and since the initial statement did not end with one, it is added here at the start of the second statement.

After you obtain the data reader, you can iterate over each result set using the NextResult() method. Note that you are always returned the first result set automatically. Thus, if you want to read over the rows of each table, you can build the following iteration construct:

do
{
while (myDataReader.Read())
{
for (int i = 0; i < myDataReader.FieldCount; i++)
{
Console.Write(i != myDataReader.FieldCount - 1
? $"{myDataReader.GetName(i)} = {myDataReader.GetValue(i)}, "
: $"{myDataReader.GetName(i)} = {myDataReader.GetValue(i)} ");
}
Console.WriteLine();
}
Console.WriteLine();
} while (myDataReader.NextResult());

At this point, you should be more aware of the functionality data reader objects bring to the table. Always remember that data readers can process only SQL Select statements; you cannot use them to modify an existing database table using Insert, Update, or Delete requests. Modifying an existing database requires further investigation of command objects.

Working with Create, Update, and Delete Queries
The ExecuteReader() method extracts a data reader object that allows you to examine the results of a SQL Select statement using a forward-only, read-only flow of information. However, when you want to submit SQL statements that result in the modification of a given table (or any other nonquery SQL statement, such as creating tables or granting permissions), you call the ExecuteNonQuery() method of your command object. This single method performs inserts, updates, and deletes based on the format of your command text.

■ Note Technically speaking, a nonquery is a SQL statement that does not return a result set. Thus, Select statements are queries, while Insert, Update, and Delete statements are not. Given this, ExecuteNonQuery() returns an int that represents the number of rows affected, not a new set of records.
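
Because ExecuteNonQuery() returns the affected row count, calling code can use that value as a quick sanity check. A minimal sketch follows (assuming an open SqlConnection named _sqlConnection, as used throughout this chapter; the SQL statement is illustrative):

```csharp
// Update is a nonquery; ExecuteNonQuery() reports how many rows it touched.
string sql = "Update Inventory Set Color = 'Red' Where Color = 'Rust'";
using (SqlCommand command = new SqlCommand(sql, _sqlConnection))
{
    int rowsAffected = command.ExecuteNonQuery();
    Console.WriteLine($"{rowsAffected} row(s) updated.");
}
```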

All the database interaction examples in this chapter so far have only opened connections and used them to retrieve data. This is just one part of working with a database; a data access framework would not be of much use unless it also fully supported Create, Read, Update, and Delete (CRUD) functionality. Next, you will learn how to do this using calls to ExecuteNonQuery().
Begin by creating a new C# Class Library project named AutoLot.Dal (short for AutoLot data access layer), delete the default class file, and add the Microsoft.Data.SqlClient package to the project. Add a new class file named GlobalUsings.cs to the project, and update the file to the following global using statements:

global using System.Data;
global using System.Reflection;
global using Microsoft.Data.SqlClient;

Before building the class that will conduct the data operations, we will first create a C# class that represents a record from the Inventory table with its related Make information.

Create the Car and CarViewModel Classes
Modern data access libraries use classes (commonly called models or entities) to represent and transport the data from the database. Additionally, classes can be used to represent a view into the data that combines two or more tables to make the data more meaningful. Entity classes are used to work with the database directly (for update statements), and view model classes are used for displaying the data in a meaningful way. You will see in the next chapter that these concepts are a foundation of object-relational mappers (ORMs) like Entity Framework Core, but for now, you are just going to create one model (for a raw inventory row) and one view model (combining an inventory row with the related data in the Makes table). Add a new folder to your project named Models, and add two new files, named Car.cs and CarViewModel.cs. Update the code to the following:

//Car.cs
namespace AutoLot.Dal.Models;
public class Car
{
    public int Id { get; set; }
    public int MakeId { get; set; }
    public string Color { get; set; }
    public string PetName { get; set; }
    public byte[] TimeStamp { get; set; }
}

//CarViewModel.cs
namespace AutoLot.Dal.Models;
public class CarViewModel : Car
{
public string Make { get; set; }
}

These classes will be used shortly, but first, add the AutoLot.Dal.Models namespace into the
GlobalUsings.cs file:

global using System.Data;
global using System.Reflection;
global using Microsoft.Data.SqlClient;
global using AutoLot.Dal.Models;

Adding the InventoryDal Class
Next, add a new folder named DataOperations. In this new folder, add a new class named InventoryDal.cs and change the class to public. This class will define various members to interact with the Inventory table of the AutoLot database.

Adding Constructors
Create a constructor that takes a string parameter (connectionString) and assigns the value to a class-level variable. Next, create a parameterless constructor that passes a default connection string to the other constructor. This lets the calling code either accept the default connection string or supply its own. The relevant code is as follows:

namespace AutoLot.Dal.DataOperations;
public class InventoryDal
{
    private readonly string _connectionString;
    public InventoryDal() : this(
        @"Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;")
{
}
public InventoryDal(string connectionString)
=> _connectionString = connectionString;
}

Opening and Closing the Connection
Next, add a class-level variable to hold a connection that will be used by the data access code. Also, add two methods, one to open the connection (OpenConnection()) and the other to close the connection (CloseConnection()). In the CloseConnection() method, check the state of the connection, and if it is not closed, then call Close() on the connection. The code listing follows:

private SqlConnection _sqlConnection = null;
private void OpenConnection()
{
_sqlConnection = new SqlConnection
{
ConnectionString = _connectionString
};
_sqlConnection.Open();
}
private void CloseConnection()
{
if (_sqlConnection?.State != ConnectionState.Closed)
{
_sqlConnection?.Close();
}
}

For the sake of brevity, most of the methods in the InventoryDal class will not use try/catch blocks to handle possible exceptions, nor will they throw custom exceptions to report various issues with the execution (e.g., a malformed connection string). If you were to build an industrial-strength data access library, you would absolutely want to use structured exception handling techniques (as covered in Chapter 7) to account for any runtime anomalies.
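
If you did add such handling, a minimal sketch might look like the following (the wrapper method name and error message are illustrative, not part of the chapter's InventoryDal code):

```csharp
public List<CarViewModel> GetAllInventorySafe()
{
    try
    {
        OpenConnection();
        // ... execute the command and build the list, as in GetAllInventory() ...
        return new List<CarViewModel>();
    }
    catch (SqlException ex)
    {
        // Surface a friendlier error while preserving the provider details.
        throw new InvalidOperationException(
            "Unable to read the Inventory table.", ex);
    }
    finally
    {
        // Runs whether or not an exception was thrown.
        CloseConnection();
    }
}
```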

Adding IDisposable
Add the IDisposable interface to the class definition, like this:

public class InventoryDal : IDisposable
{

}

Next, implement the disposable pattern, calling Dispose on the SqlConnection object.

bool _disposed = false;
protected virtual void Dispose(bool disposing)
{
if (_disposed)
{
return;
}
if (disposing)
{
_sqlConnection.Dispose();
}
_disposed = true;
}
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

Adding the Selection Methods
You start by combining what you already know about command objects, data readers, and generic collections to get the records from the Inventory table. As you saw earlier in this chapter, a data provider's data reader object allows for a selection of records using a read-only, forward-only mechanism via the Read() method. In this example, the CommandBehavior.CloseConnection value is passed to ExecuteReader() so that the connection is automatically closed when the reader is closed. The GetAllInventory() method returns a List<CarViewModel> to represent all the data in the Inventory table.

public List<CarViewModel> GetAllInventory()
{
OpenConnection();
// This will hold the records.
    List<CarViewModel> inventory = new List<CarViewModel>();

    // Prep command object.
    string sql =
        @"SELECT i.Id, i.Color, i.PetName, m.Name as Make
          FROM Inventory i
          INNER JOIN Makes m on m.Id = i.MakeId";
    using SqlCommand command =
new SqlCommand(sql, _sqlConnection)
{
CommandType = CommandType.Text
};
    SqlDataReader dataReader =
        command.ExecuteReader(CommandBehavior.CloseConnection);
    while (dataReader.Read())

{
inventory.Add(new CarViewModel
{
Id = (int)dataReader["Id"],
            Color = (string)dataReader["Color"],
            Make = (string)dataReader["Make"],
            PetName = (string)dataReader["PetName"]
});
}
    dataReader.Close();
    return inventory;
}

The next selection method gets a single CarViewModel based on the car's Id.

public CarViewModel GetCar(int id)
{
    OpenConnection();
    CarViewModel car = null;
    // This should use parameters for security reasons.
    string sql =
        $@"SELECT i.Id, i.Color, i.PetName, m.Name as Make
           FROM Inventory i
           INNER JOIN Makes m on m.Id = i.MakeId
           WHERE i.Id = {id}";
using SqlCommand command =
new SqlCommand(sql, _sqlConnection)
{
CommandType = CommandType.Text
};
SqlDataReader dataReader = command.ExecuteReader(CommandBehavior.CloseConnection);
while (dataReader.Read())
{
car = new CarViewModel
{
Id = (int) dataReader["Id"],
            Color = (string)dataReader["Color"],
            Make = (string)dataReader["Make"],
            PetName = (string)dataReader["PetName"]
};
}
    dataReader.Close();
    return car;
}

■ Note It’s generally a bad practice to accept user input into raw SQL statements as is done here. Later in this chapter, this code will be updated to use parameters.

Inserting a New Car
Inserting a new record into the Inventory table is as simple as formatting the SQL Insert statement (based on user input), opening the connection, calling ExecuteNonQuery() using your command object, and closing the connection. You can see this in action by adding a public method to your InventoryDal type named InsertAuto() that takes three parameters that map to the nonidentity columns of the Inventory table (MakeId, Color, and PetName). You use these arguments to format a string type to insert the new record. Finally, use your SqlConnection object to execute the SQL statement.

public void InsertAuto(string color, int makeId, string petName)
{
OpenConnection();
// Format and execute SQL statement.
    string sql = $"Insert Into Inventory (MakeId, Color, PetName) Values ('{makeId}', '{color}', '{petName}')";
// Execute using our connection.
using (SqlCommand command = new SqlCommand(sql, _sqlConnection))
{
        command.CommandType = CommandType.Text;
        command.ExecuteNonQuery();
}
CloseConnection();
}

The previous method takes the three Car values as discrete parameters and works only as long as the calling code passes them in the correct order. A better approach uses the Car class itself, creating a strongly typed method that ensures all the properties are passed into the method correctly.

Create the Strongly Typed InsertAuto() Method
Add another InsertAuto() method that takes Car as a parameter to your InventoryDal class, as shown here:

public void InsertAuto(Car car)
{
OpenConnection();
// Format and execute SQL statement.
    string sql = "Insert Into Inventory (MakeId, Color, PetName) Values " +
        $"('{car.MakeId}', '{car.Color}', '{car.PetName}')";

// Execute using our connection.
using (SqlCommand command = new SqlCommand(sql, _sqlConnection))
{
        command.CommandType = CommandType.Text;
        command.ExecuteNonQuery();
}
CloseConnection();
}

Adding the Deletion Logic
Deleting an existing record is as simple as inserting a new record. Unlike when you created the code for InsertAuto(), this time you will use a try/catch scope to handle the possibility of attempting to delete a car that is currently on order for an individual in the Customers table. By default, foreign key constraints prevent the deletion of records that are referenced by rows in related tables. When this happens, a SqlException is thrown. A real program would handle that error intelligently; however, in this sample, you are just throwing a new exception. Add the following method to the InventoryDal class:

public void DeleteCar(int id)
{
OpenConnection();
// Get ID of car to delete, then do so.
    string sql = $"Delete from Inventory where Id = '{id}'";
    using (SqlCommand command = new SqlCommand(sql, _sqlConnection))
    {
        try
        {
            command.CommandType = CommandType.Text;
            command.ExecuteNonQuery();
        }
        catch (SqlException ex)
        {
            Exception error = new Exception("Sorry! That car is on order!", ex);
            throw error;
}
}
CloseConnection();
}

Adding the Update Logic
When it comes to the act of updating an existing record in the Inventory table, the first thing you must decide is what you want to allow the caller to change, whether it is the car’s color, the pet name, the make, or all of these. One way to give the caller complete flexibility is to define a method that takes a string type to represent any sort of SQL statement, but that is risky at best.
Ideally, you want to have a set of methods that allow the caller to update a record in a variety of ways. However, for this simple data access library, you will define a single method that allows the caller to update the pet name of a given automobile, like so:

public void UpdateCarPetName(int id, string newPetName)
{
OpenConnection();
// Get ID of car to modify the pet name.
    string sql = $"Update Inventory Set PetName = '{newPetName}' Where Id = '{id}'";
    using (SqlCommand command = new SqlCommand(sql, _sqlConnection))
{
command.ExecuteNonQuery();
}
CloseConnection();
}

Working with Parameterized Command Objects
Currently, the insert, update, and delete logic for the InventoryDal type uses hard-coded string literals for each SQL query. With parameterized queries, SQL parameters are objects, rather than simple blobs of text. Treating SQL queries in a more object-oriented manner helps reduce the number of typos (given strongly typed properties); plus, parameterized queries typically execute much faster than a literal SQL string because they are parsed exactly once (rather than each time the SQL string is assigned to the CommandText property).
Parameterized queries also help protect against SQL injection attacks (a well-known data access security issue).
To support parameterized queries, ADO.NET command objects maintain a collection of individual parameter objects. By default, this collection is empty, but you can insert any number of parameter objects that map to a placeholder parameter in the SQL query. When you want to associate a parameter within a SQL query to a member in the command object’s parameters collection, you can prefix the SQL text parameter with the @ symbol (at least when using Microsoft SQL Server; not all DBMSs support this notation).

Specifying Parameters Using the DbParameter Type
Before you build a parameterized query, you need to familiarize yourself with the DbParameter type (which is the base class to a provider’s specific parameter object). This class maintains a number of properties that allow you to configure the name, size, and data type of the parameter, as well as other characteristics, including the parameter’s direction of travel. Table 20-6 describes some key properties of the DbParameter type.

Table 20-6. Key Members of the DbParameter Type

Property Meaning in Life
DbType Gets or sets the native data type of the parameter, represented as a CLR data type
Direction Gets or sets whether the parameter is input-only, output-only, bidirectional, or a return value parameter
IsNullable Gets or sets whether the parameter accepts null values
ParameterName Gets or sets the name of the DbParameter
Size Gets or sets the maximum parameter size of the data in bytes; this is useful only for textual data
Value Gets or sets the value of the parameter

Now let's look at how to populate a command object's collection of DbParameter-compatible objects by reworking the InventoryDal methods to use parameters.

Update the GetCar Method
The original implementation of the GetCar() method used C# string interpolation when building the SQL string to retrieve the car data. To update this method, create an instance of SqlParameter with the appropriate values, as follows:

SqlParameter param = new SqlParameter
{
    ParameterName = "@carId",
    Value = id,
    SqlDbType = SqlDbType.Int,
    Direction = ParameterDirection.Input
};

The ParameterName value must match the name used in the SQL query (you will update that next), the type must match the database column type, and the direction is dependent on whether the parameter is used to send data into the query (ParameterDirection.Input) or if it is meant to return data from the query (ParameterDirection.Output). Parameters can also be defined as input/output or as return values (e.g., from a stored procedure).
Next, update the SQL string to use the parameter name ("@carId") instead of the C# string interpolation construct ("{id}").

string sql =
    @"SELECT i.Id, i.Color, i.PetName, m.Name as Make
      FROM Inventory i
      INNER JOIN Makes m on m.Id = i.MakeId
      WHERE i.Id = @carId";

The final update is to add the new parameter to the Parameters collection of the command object.

command.Parameters.Add(param);

Update the DeleteCar Method
Likewise, the original implementation of the DeleteCar() method used C# string interpolation. To update this method, create an instance of SqlParameter with the appropriate values, as follows:

SqlParameter param = new SqlParameter
{
    ParameterName = "@carId",
    Value = id,
    SqlDbType = SqlDbType.Int,
    Direction = ParameterDirection.Input
};

Next, update the SQL string to use the parameter name ("@carId").

string sql = "Delete from Inventory where Id = @carId";

The final update is to add the new parameter to the Parameters collection of the command object.

command.Parameters.Add(param);

Update the UpdateCarPetName Method
This method requires two parameters, one for the car Id and the other for the new PetName. The first parameter is created just like the two previous examples (with the exception of a different variable name), and the second creates a parameter that maps to the database NVarChar type (the PetName field type in the Inventory table). Notice that a Size value is set. It is important that this size matches your database field size to avoid problems (such as truncation) when executing the command.

SqlParameter paramId = new SqlParameter
{
    ParameterName = "@carId",
    Value = id,
    SqlDbType = SqlDbType.Int,
    Direction = ParameterDirection.Input
};
SqlParameter paramName = new SqlParameter
{
    ParameterName = "@petName",
    Value = newPetName,
    SqlDbType = SqlDbType.NVarChar,
    Size = 50,
    Direction = ParameterDirection.Input
};

Next, update the SQL string to use the parameters.

string sql = "Update Inventory Set PetName = @petName Where Id = @carId";

The final update is to add the new parameters to the Parameters collection of the command object.

command.Parameters.Add(paramId);
command.Parameters.Add(paramName);

Update the InsertAuto Method
Add the following version of the InsertAuto() method to leverage parameter objects:

public void InsertAuto(Car car)
{
OpenConnection();
    // Note the "placeholders" in the SQL query.
    string sql = "Insert Into Inventory (MakeId, Color, PetName) Values " +
        "(@MakeId, @Color, @PetName)";

// This command will have internal parameters.
using (SqlCommand command = new SqlCommand(sql, _sqlConnection))
{
// Fill params collection.
        SqlParameter parameter = new SqlParameter
        {
            ParameterName = "@MakeId",
            Value = car.MakeId,
            SqlDbType = SqlDbType.Int,
            Direction = ParameterDirection.Input
        };
        command.Parameters.Add(parameter);

        parameter = new SqlParameter
        {
            ParameterName = "@Color",
            Value = car.Color,
            SqlDbType = SqlDbType.NVarChar,
            Size = 50,
            Direction = ParameterDirection.Input
        };
command.Parameters.Add(parameter);

        parameter = new SqlParameter
        {
            ParameterName = "@PetName",
            Value = car.PetName,
            SqlDbType = SqlDbType.NVarChar,
            Size = 50,
            Direction = ParameterDirection.Input
        };
command.Parameters.Add(parameter);

        command.ExecuteNonQuery();
        CloseConnection();
}
}

While building a parameterized query often requires more code, the end result is a more convenient way to tweak SQL statements programmatically, as well as to achieve better overall performance. They also are extremely helpful when you want to trigger a stored procedure.

Executing a Stored Procedure
Recall that a stored procedure is a named block of SQL code stored in the database. You can construct stored procedures so they return a set of rows or scalar data types or do anything else that makes sense (e.g., insert, update, or delete records); you can also have them take any number of optional parameters. The end result is a unit of work that behaves like a typical method, except that it is located on a data store rather than a binary business object. Currently, your AutoLot database contains a single stored procedure named GetPetName.
Now consider the following final method (for now) of the InventoryDal type, which invokes your stored
procedure:

public string LookUpPetName(int carId)
{
    OpenConnection();
    string carPetName;

// Establish name of stored proc.
using (SqlCommand command = new SqlCommand("GetPetName", _sqlConnection))
{
command.CommandType = CommandType.StoredProcedure;

// Input param.
        SqlParameter param = new SqlParameter
        {
            ParameterName = "@carId",
            SqlDbType = SqlDbType.Int,
            Value = carId,
            Direction = ParameterDirection.Input
        };
command.Parameters.Add(param);

// Output param.
        param = new SqlParameter
        {
            ParameterName = "@petName",
            SqlDbType = SqlDbType.NVarChar,
            Size = 50,
            Direction = ParameterDirection.Output
        };
command.Parameters.Add(param);

        // Execute the stored proc.
        command.ExecuteNonQuery();

// Return output param.
        carPetName = (string)command.Parameters["@petName"].Value;
        CloseConnection();
}
return carPetName;
}

One important aspect of invoking a stored procedure is to keep in mind that a command object can represent a SQL statement (the default) or the name of a stored procedure. When you want to inform a command object that it will be invoking a stored procedure, you pass in the name of the procedure (as a constructor argument or by using the CommandText property) and must set the CommandType property to the value CommandType.StoredProcedure. (If you fail to do this, you will receive a runtime exception because the command object is expecting a SQL statement by default.)
Next, notice that the Direction property of the @petName parameter is set to ParameterDirection.Output. As before, you add each parameter object to the command object's parameters collection.
After the stored procedure completes with a call to ExecuteNonQuery(), you can obtain the value of the output parameter by investigating the command object’s parameter collection and casting accordingly.

// Return output param.
carPetName = (string)command.Parameters["@petName"].Value;

At this point, you have an extremely simple data access library that you can use to build a client to display and edit your data. You have not yet examined how to build graphical user interfaces, so next you will test your data library from a new console application.

Creating a Console-Based Client Application
Add a new console application (named AutoLot.Client) to the AutoLot.Dal solution and add a reference to the AutoLot.Dal project. Clear out the generated code in the Program.cs file and add the following using statements to the top of the file:

using AutoLot.Dal.Models;
using AutoLot.Dal.DataOperations;
using AutoLot.Dal.BulkImport;

Next, add the following top-level statements to exercise the AutoLot.Dal code:

InventoryDal dal = new InventoryDal();
List<CarViewModel> list = dal.GetAllInventory();
Console.WriteLine(" ** All Cars ** ");
Console.WriteLine("Id\tMake\tColor\tPet Name");
foreach (var itm in list)
{
Console.WriteLine($"{itm.Id}\t{itm.Make}\t{itm.Color}\t{itm.PetName}");
}
Console.WriteLine();
CarViewModel car = dal.GetCar(list.OrderBy(x => x.Color).Select(x => x.Id).First());
Console.WriteLine(" ** First Car By Color ** ");
Console.WriteLine("CarId\tMake\tColor\tPet Name");
Console.WriteLine($"{car.Id}\t{car.Make}\t{car.Color}\t{car.PetName}");

try
{
//This will fail because of related data in the Orders table dal.DeleteCar(5);
Console.WriteLine("Car deleted.");
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
}
dal.InsertAuto(new Car { Color = "Blue", MakeId = 5, PetName = "TowMonster" });
list = dal.GetAllInventory();
var newCar = list.First(x => x.PetName == "TowMonster");
Console.WriteLine(" ** New Car ** ");
Console.WriteLine("CarId\tMake\tColor\tPet Name");
Console.WriteLine($"{newCar.Id}\t{newCar.Make}\t{newCar.Color}\t{newCar.PetName}");
dal.DeleteCar(newCar.Id);
var petName = dal.LookUpPetName(car.Id);
Console.WriteLine(" ** New Car ** ");
Console.WriteLine($"Car pet name: {petName}");
Console.Write("Press enter to continue...");
Console.ReadLine();

Understanding Database Transactions
The next tool that we will examine is the use of database transactions. Simply put, a transaction is a set of database operations that succeed or fail as a collective unit. If one of the operations fails, all other operations are rolled back, as if nothing ever happened. As you might imagine, transactions are quite important to ensure that table data is safe, valid, and consistent.

Transactions are important when a database operation involves interacting with multiple tables or multiple stored procedures (or a combination of database atoms). The classic transaction example involves the process of transferring monetary funds between two bank accounts. For example, if you were to transfer $500 from your savings account into your checking account, the following steps should occur in a transactional manner:
1.The bank should remove $500 from your savings account.
2.The bank should add $500 to your checking account.
It would be an extremely bad thing if the money were removed from the savings account but not transferred to the checking account (because of some error on the bank’s part) because then you would be out $500! However, if these steps are wrapped up into a database transaction, the DBMS ensures that all related steps occur as a single unit. If any part of the transaction fails, the entire operation is rolled back to the original state. On the other hand, if all steps succeed, the transaction is committed.

■ Note You might be familiar with the acronym ACID from looking at transactional literature. This represents the four key properties of a prim-and-proper transaction: atomic (all or nothing), consistent (data remains stable throughout the transaction), isolated (transactions do not interfere with other operations), and durable (transactions are saved and logged).

It turns out that the .NET platform supports transactions in a variety of ways. This chapter will look at the transaction object of your ADO.NET data provider (SqlTransaction, in the case of Microsoft.Data.SqlClient).
In addition to the baked-in transactional support within the .NET base class libraries, it is possible to use the SQL language of your database management system. For example, you could author a stored procedure that uses the BEGIN TRANSACTION, ROLLBACK, and COMMIT statements.

Key Members of an ADO.NET Transaction Object
All the transactions we will use implement the IDbTransaction interface. Recall from the beginning of this chapter that IDbTransaction defines a handful of members as follows:

public interface IDbTransaction : IDisposable
{
    IDbConnection Connection { get; }
    IsolationLevel IsolationLevel { get; }

    void Commit();
    void Rollback();
}

Notice the Connection property, which returns a reference to the connection object that initiated the current transaction (as you will see, you obtain a transaction object from a given connection object). You call the Commit() method when each of your database operations has succeeded. Doing this causes each of the pending changes to be persisted in the data store. Conversely, you can call the Rollback() method in the event of a runtime exception, which informs the DBMS to disregard any pending changes, leaving the original data intact.
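
Before diving into the full example, the basic Commit()/Rollback() pattern can be sketched as follows (assuming an open SqlConnection named _sqlConnection, as used throughout this chapter; the SQL statement is illustrative):

```csharp
SqlTransaction tx = _sqlConnection.BeginTransaction();
try
{
    // Enlist a command by passing the transaction to the constructor.
    using SqlCommand command = new SqlCommand(
        "Update Inventory Set Color = 'Red' Where Id = @carId",
        _sqlConnection, tx);
    command.Parameters.AddWithValue("@carId", 1);
    command.ExecuteNonQuery();

    // Every operation succeeded; persist the pending changes.
    tx.Commit();
}
catch (Exception)
{
    // Something failed; discard all pending changes.
    tx.Rollback();
    throw;
}
```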

■ Note The IsolationLevel property of a transaction object allows you to specify how aggressively a transaction should be guarded against the activities of other parallel transactions. By default, transactions are isolated completely until committed.

Beyond the members defined by the IDbTransaction interface, the SqlTransaction type defines an additional member named Save(), which allows you to define save points. This concept allows you to roll back a failed transaction up until a named point, rather than rolling back the entire transaction. Essentially, when you call Save() using a SqlTransaction object, you can specify a friendly string moniker. When you call Rollback(), you can specify this same moniker as an argument to perform an effective partial rollback. Calling Rollback() with no arguments causes all the pending changes to be rolled back.
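
A short sketch of a save point in action follows (assuming an open SqlConnection named _sqlConnection; the save point name and SQL statements are illustrative):

```csharp
SqlTransaction tx = _sqlConnection.BeginTransaction();
using SqlCommand command = _sqlConnection.CreateCommand();
command.Transaction = tx;

command.CommandText = "Update Inventory Set Color = 'Red' Where Id = 1";
command.ExecuteNonQuery();

// Mark a named point that a later rollback can return to.
tx.Save("AfterFirstUpdate");

command.CommandText = "Update Inventory Set Color = 'Blue' Where Id = 2";
command.ExecuteNonQuery();

// Undo only the second update; the first change is still pending.
tx.Rollback("AfterFirstUpdate");

// Persist what remains (just the first update).
tx.Commit();
```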

Adding a Transaction Method to InventoryDal
Now let's look at how you work with ADO.NET transactions programmatically. Begin by opening the AutoLot.Dal code library project you created earlier and add a new public method named ProcessCreditRisk() to the InventoryDal class to deal with perceived credit risks. The method will look up a customer, add them to the CreditRisks table, and then update their last name by adding "(Credit Risk)" to the end.

public void ProcessCreditRisk(bool throwEx, int customerId)
{
    OpenConnection();
    // First, look up current name based on customer ID.
    string fName;
    string lName;
    var cmdSelect = new SqlCommand(
        "Select * from Customers where Id = @customerId",
        _sqlConnection);
    SqlParameter paramId = new SqlParameter
    {
        ParameterName = "@customerId",
        SqlDbType = SqlDbType.Int,
        Value = customerId,
        Direction = ParameterDirection.Input
    };
    cmdSelect.Parameters.Add(paramId);
    using (var dataReader = cmdSelect.ExecuteReader())
    {
        if (dataReader.HasRows)
        {
            dataReader.Read();
            fName = (string) dataReader["FirstName"];
            lName = (string) dataReader["LastName"];
        }
        else
        {
            CloseConnection();
            return;
        }
    }
    cmdSelect.Parameters.Clear();

    // Create command objects that represent each step of the operation.
    var cmdUpdate = new SqlCommand(
        "Update Customers set LastName = LastName + ' (CreditRisk) ' where Id = @customerId",
        _sqlConnection);
    cmdUpdate.Parameters.Add(paramId);
    var cmdInsert = new SqlCommand(
        "Insert Into CreditRisks (CustomerId, FirstName, LastName) Values (@CustomerId, @FirstName, @LastName)",
        _sqlConnection);
    SqlParameter parameterId2 = new SqlParameter
    {
        ParameterName = "@CustomerId",
        SqlDbType = SqlDbType.Int,
        Value = customerId,
        Direction = ParameterDirection.Input
    };
    SqlParameter parameterFirstName = new SqlParameter
    {
        ParameterName = "@FirstName",
        Value = fName,
        SqlDbType = SqlDbType.NVarChar,
        Size = 50,
        Direction = ParameterDirection.Input
    };
    SqlParameter parameterLastName = new SqlParameter
    {
        ParameterName = "@LastName",
        Value = lName,
        SqlDbType = SqlDbType.NVarChar,
        Size = 50,
        Direction = ParameterDirection.Input
    };

    cmdInsert.Parameters.Add(parameterId2);
    cmdInsert.Parameters.Add(parameterFirstName);
    cmdInsert.Parameters.Add(parameterLastName);

    // We will get this from the connection object.
    SqlTransaction tx = null;
    try
    {
        tx = _sqlConnection.BeginTransaction();
        // Enlist the commands into this transaction.
        cmdInsert.Transaction = tx;
        cmdUpdate.Transaction = tx;
        // Execute the commands.
        cmdInsert.ExecuteNonQuery();
        cmdUpdate.ExecuteNonQuery();
        // Simulate error.
        if (throwEx)
        {
            throw new Exception("Sorry! Database error! Tx failed...");
        }
        // Commit it!
        tx.Commit();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        // Any error will roll back the transaction. Use the
        // conditional access operator to check for null.
        tx?.Rollback();
    }
    finally
    {
        CloseConnection();
    }
}

Here, you use an incoming bool parameter to indicate whether to throw an arbitrary exception when you attempt to process the offending customer. This allows you to simulate an unforeseen circumstance that will cause the database transaction to fail. Obviously, you do this here only for illustrative purposes; a true database transaction method would not allow the caller to force the logic to fail on a whim!
Note that you use two SqlCommand objects to represent each step in the transaction you will kick off. After you obtain the customer's first and last names based on the incoming customerId parameter, you can obtain a valid SqlTransaction object from the connection object using BeginTransaction(). Next, and most importantly, you must enlist each command object by assigning its Transaction property to the transaction object you have just obtained. If you fail to do so, the Insert/Update logic will not run under a transactional context.
After you call ExecuteNonQuery() on each command, you throw an exception if (and only if) the value of the bool parameter is true. In this case, all pending database operations are rolled back. If you do not throw an exception, both steps will be committed to the database tables once you call Commit().

Testing Your Database Transaction
Select one of the customers you added to the Customers table (e.g., Dave Benner, Id = 1). Next, add a new method to the Program.cs file in the AutoLot.Client project named FlagCustomer().

void FlagCustomer()
{
    Console.WriteLine(" Simple Transaction Example \n");

    // A simple way to allow the tx to succeed or not.
    bool throwEx = true;
    Console.Write("Do you want to throw an exception (Y or N): ");
    var userAnswer = Console.ReadLine();
    if (string.IsNullOrEmpty(userAnswer) ||
        userAnswer.Equals("N", StringComparison.OrdinalIgnoreCase))
    {
        throwEx = false;
    }
    var dal = new InventoryDal();

    // Process customer 1 - enter the id for the customer to move.
    dal.ProcessCreditRisk(throwEx, 1);
    Console.WriteLine("Check CreditRisk table for results");
    Console.ReadLine();
}

If you run the program and elect to throw the exception, you will find that the customer's last name is not changed in the Customers table and no record is added to the CreditRisks table, because the entire transaction has been rolled back. If you do not throw the exception, the customer's last name is updated in the Customers table and the customer is added to the CreditRisks table.

Executing Bulk Copies with ADO.NET
In cases where you need to load lots of records into the database, the methods shown so far would be rather inefficient. SQL Server has a feature called bulk copy that is designed specifically for this scenario, and it is wrapped up in ADO.NET with the SqlBulkCopy class. This section of the chapter shows how to do this with ADO.NET.

Exploring the SqlBulkCopy Class
The SqlBulkCopy class has one method, WriteToServer() (and the async version WriteToServerAsync()), that processes a list of records and writes the data to the database more efficiently than writing a series of insert statements and running them with a Command object. The WriteToServer overloads take a DataTable, a DataReader, or an array of DataRows. To keep with the theme of this chapter, you are going to use the DataReader version. For that, you need to create a custom data reader.
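For comparison, the DataTable overload is the simplest to call. The following sketch shows its shape before you build the data-reader-based version; the connection variable, table, and column names here are illustrative assumptions, not part of the AutoLot samples:

```csharp
// Sketch: bulk copy from an in-memory DataTable. Assumes an open
// SqlConnection and a destination table with matching column names.
var table = new DataTable();
table.Columns.Add("Color", typeof(string));
table.Columns.Add("PetName", typeof(string));
table.Rows.Add("Blue", "MyCar1");
table.Rows.Add("Red", "MyCar2");

using var bc = new SqlBulkCopy(connection)
{
    DestinationTableName = "Inventory"
};
bc.WriteToServer(table);
```

This works well for tabular data you already have in a DataTable; the custom data reader built next avoids materializing one from your model classes.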

Creating a Custom Data Reader
You want your custom data reader to be generic and to hold a list of the models that you want to import. Begin by creating a new folder in the AutoLot.Dal project named BulkImport; in that folder, create a new file named IMyDataReader.cs containing a generic interface that implements IDataReader, and update the code to the following:

namespace AutoLot.Dal.BulkImport;
public interface IMyDataReader<T> : IDataReader
{
    List<T> Records { get; set; }
}

Next comes the task of implementing the custom data reader. As you have already seen, data readers have lots of moving parts. The good news is that, for SqlBulkCopy, you must implement only a handful of them. Create a new class file named MyDataReader.cs, make the class public and sealed, and implement IMyDataReader<T>. Add a constructor that takes in the records and sets the property.

public sealed class MyDataReader<T> : IMyDataReader<T>
{
    public List<T> Records { get; set; }

    public MyDataReader(List<T> records)
    {
        Records = records;
    }
}

Have Visual Studio or Visual Studio Code implement all the interface members for you (or copy them from the code listings that follow Table 20-7), and you will have your starting point for the custom data reader. Table 20-7 details the only members that need real implementations for this scenario.

Table 20-7. Key Members of IDataReader for SqlBulkCopy

Member Meaning in Life
Read() Advances to the next record; returns true if there is another record or false if at the end of the list
FieldCount Gets the total number of fields in the data source
GetValue() Gets the value of a field based on its ordinal position
GetSchemaTable() Gets the schema information for the target table

Starting with the Read() method, return false if the reader is at the end of the list, and return true (and increment a class-level counter) if the reader is not at the end of the list. Add a class-level variable to hold the current index of the List and update the Read() method like this:

public sealed class MyDataReader<T> : IMyDataReader<T>
{
    private int _currentIndex = -1;

    public bool Read()
    {
        if (_currentIndex + 1 >= Records.Count)
        {
            return false;
        }
        _currentIndex++;
        return true;
    }
}

Each of the get methods and the FieldCount property requires intimate knowledge of the specific model being loaded. An example of the GetValue() method (using the Car class) is as follows:

public object GetValue(int i)
{
    Car currentRecord = Records[_currentIndex] as Car;
    return i switch
    {
        0 => currentRecord.Id,
        1 => currentRecord.MakeId,
        2 => currentRecord.Color,
        3 => currentRecord.PetName,
        4 => currentRecord.TimeStamp,
        _ => string.Empty,
    };
}

The database has only four tables, but that means you still have four variations of the data reader. Imagine if you had a real production database with many more tables! You can do better than this using reflection (covered in Chapter 17) and LINQ to Objects (covered in Chapter 13).
Add readonly variables to hold the PropertyInfo values for the model as well as a dictionary that will be used to hold the field position and name for the table in SQL Server. Update the constructor to get the properties of the generic type and initialize the Dictionary. The added code is as follows:

private readonly PropertyInfo[] _propertyInfos;
private readonly Dictionary<int, string> _nameDictionary;

public MyDataReader(List<T> records)
{
    Records = records;
    _propertyInfos = typeof(T).GetProperties();
    _nameDictionary = new Dictionary<int, string>();
}

Next, update the constructor to take a SqlConnection as well as strings for the schema and name of the table the records will be inserted into, and add class-level variables to hold these values.

private readonly SqlConnection _connection;
private readonly string _schema;
private readonly string _tableName;

public MyDataReader(List<T> records, SqlConnection connection, string schema, string tableName)
{
    Records = records;
    _propertyInfos = typeof(T).GetProperties();
    _nameDictionary = new Dictionary<int, string>();

    _connection = connection;
    _schema = schema;
    _tableName = tableName;
}

Implement the GetSchemaTable() method next. This retrieves the SQL Server information regarding the target table.

public DataTable GetSchemaTable()
{
    using var schemaCommand = new SqlCommand(
        $"SELECT * FROM {_schema}.{_tableName}",
        _connection);
    using var reader = schemaCommand.ExecuteReader(CommandBehavior.SchemaOnly);
    return reader.GetSchemaTable();
}

Update the constructor to use the SchemaTable to construct the dictionary that contains the fields of the target table in database order.

public MyDataReader(List<T> records, SqlConnection connection, string schema, string tableName)
{
    Records = records;
    _propertyInfos = typeof(T).GetProperties();
    _nameDictionary = new Dictionary<int, string>();
    _connection = connection;
    _schema = schema;
    _tableName = tableName;

    DataTable schemaTable = GetSchemaTable();
    for (int x = 0; x < schemaTable?.Rows.Count; x++)
    {
        DataRow col = schemaTable.Rows[x];
        var columnName = col.Field<string>("ColumnName");
        _nameDictionary.Add(x, columnName);
    }
}

Now, the following methods can be implemented generically, using the reflected information:

public int FieldCount => _propertyInfos.Length;

public object GetValue(int i)
    => _propertyInfos
        .First(x => x.Name.Equals(_nameDictionary[i], StringComparison.OrdinalIgnoreCase))
        .GetValue(Records[_currentIndex]);

The remainder of the members that must be present (but are not implemented) are listed here for reference:

public string GetName(int i) => throw new NotImplementedException();
public int GetOrdinal(string name) => throw new NotImplementedException();
public string GetDataTypeName(int i) => throw new NotImplementedException();
public Type GetFieldType(int i) => throw new NotImplementedException();
public int GetValues(object[] values) => throw new NotImplementedException();
public bool GetBoolean(int i) => throw new NotImplementedException();
public byte GetByte(int i) => throw new NotImplementedException();
public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length)
    => throw new NotImplementedException();
public char GetChar(int i) => throw new NotImplementedException();
public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length)
    => throw new NotImplementedException();
public Guid GetGuid(int i) => throw new NotImplementedException();
public short GetInt16(int i) => throw new NotImplementedException();
public int GetInt32(int i) => throw new NotImplementedException();
public long GetInt64(int i) => throw new NotImplementedException();
public float GetFloat(int i) => throw new NotImplementedException();
public double GetDouble(int i) => throw new NotImplementedException();
public string GetString(int i) => throw new NotImplementedException();
public decimal GetDecimal(int i) => throw new NotImplementedException();
public DateTime GetDateTime(int i) => throw new NotImplementedException();
public IDataReader GetData(int i) => throw new NotImplementedException();
public bool IsDBNull(int i) => throw new NotImplementedException();
object IDataRecord.this[int i] => throw new NotImplementedException();
object IDataRecord.this[string name] => throw new NotImplementedException();

public void Close() => throw new NotImplementedException();
public bool NextResult() => throw new NotImplementedException();
public int Depth { get; }
public bool IsClosed { get; }
public int RecordsAffected { get; }
// Required because IDataReader derives from IDisposable.
public void Dispose() { }

Executing the Bulk Copy
Add a new public static class named ProcessBulkImport (in a new file, ProcessBulkImport.cs) to the BulkImport folder. Add the code to handle opening and closing connections (like the code in the InventoryDal class), as follows:

private const string ConnectionString =
    @"Data Source=.,5433;User Id=sa;Password=P@ssw0rd;Initial Catalog=AutoLot;Encrypt=False;";
private static SqlConnection _sqlConnection = null;

private static void OpenConnection()
{
_sqlConnection = new SqlConnection
{
ConnectionString = ConnectionString
};
_sqlConnection.Open();
}

private static void CloseConnection()
{
if (_sqlConnection?.State != ConnectionState.Closed)
{
_sqlConnection?.Close();
}
}

The SqlBulkCopy class requires the destination table name (and schema, if different from dbo) to process the records. After creating a new SqlBulkCopy instance (passing in the connection object), set the DestinationTableName property. Then, create a new instance of the custom data reader holding the list to be bulk copied, and call WriteToServer(). The ExecuteBulkImport() method is shown here:

public static void ExecuteBulkImport<T>(IEnumerable<T> records, string tableName)
{
    OpenConnection();
    using SqlConnection conn = _sqlConnection;
    SqlBulkCopy bc = new SqlBulkCopy(conn)
    {
        DestinationTableName = tableName
    };
    var dataReader = new MyDataReader<T>(records.ToList(), _sqlConnection, "dbo", tableName);
    try
    {
        bc.WriteToServer(dataReader);
    }
    catch (Exception ex)
    {
        // Should do something here (log the exception, rethrow, etc.).
    }
    finally
    {
        CloseConnection();
    }
}

Testing the Bulk Copy
Back in the AutoLot.Client project, add a new method to the Program.cs file named DoBulkCopy(). Create a list of Car objects and pass that (and the name of the table) into the ExecuteBulkImport() method. The rest of the code displays the results of the bulk copy.

void DoBulkCopy()
{
    Console.WriteLine(" ** Do Bulk Copy ** ");
    var cars = new List<Car>
    {
        new Car() {Color = "Blue", MakeId = 1, PetName = "MyCar1"},
        new Car() {Color = "Red", MakeId = 2, PetName = "MyCar2"},
        new Car() {Color = "White", MakeId = 3, PetName = "MyCar3"},
        new Car() {Color = "Yellow", MakeId = 4, PetName = "MyCar4"}
    };
    ProcessBulkImport.ExecuteBulkImport(cars, "Inventory");
    InventoryDal dal = new InventoryDal();
    var list = dal.GetAllInventory();
    Console.WriteLine(" ** All Cars ** ");
    Console.WriteLine("CarId\tMake\tColor\tPet Name");
    foreach (var itm in list)
    {
        Console.WriteLine(
            $"{itm.Id}\t{itm.Make}\t{itm.Color}\t{itm.PetName}");
    }
    Console.WriteLine();
}

While adding four new cars does not show the merits of the work involved in using the SqlBulkCopy class, imagine trying to load thousands of records. I have done this with customers, and the load time has been mere seconds, where looping through each record took hours! As with everything in .NET, this is just another tool to keep in your toolbox to use when it makes the most sense.
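One final note: the custom data reader above relies on matching field ordinals to the target table's column order. If your source and destination do not line up, SqlBulkCopy also supports explicit column mappings. A brief sketch (the connection field and table/column names are illustrative):

```csharp
// Sketch: map source fields to destination columns by name,
// so ordinal position no longer has to match.
using var bc = new SqlBulkCopy(_sqlConnection)
{
    DestinationTableName = "Inventory"
};
bc.ColumnMappings.Add("Color", "Color");
bc.ColumnMappings.Add("PetName", "PetName");
```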

Summary
ADO.NET is the native data access technology of the .NET platform. In this chapter, you began by learning the role of data providers, which are essentially concrete implementations of several abstract base classes (in the System.Data.Common namespace) and interface types (in the System.Data namespace). You also saw that it is possible to build a provider-neutral code base using the ADO.NET data provider factory model.
You also learned that you use connection objects, transaction objects, command objects, and data reader objects to select, update, insert, and delete records. Also, recall that command objects support an internal parameter collection, which you can use to add some type safety to your SQL queries; these also prove quite helpful when triggering stored procedures.
Next, you learned how to safeguard your data manipulation code with transactions and wrapped up the chapter with a look at using the SqlBulkCopy class to load large amounts of data into SQL Server using ADO.NET.
