Start by installing the EntityFrameworkCore.Jet package.
dotnet add package EntityFrameworkCore.Jet
This is an EF Core provider built on top of the Jet/ACE database engine, which enables connecting to Access and Excel files.
There are two different drivers for connecting to the engine: OleDB and ODBC. For this example, we’ll use OleDB. To do that, we also need to install the System.Data.OleDb package.
dotnet add package System.Data.OleDb
Now, let’s assume we have an Excel file named Signups.xlsx containing a single sheet with the following data.
Name | Phone number | Party size |
---|---|---|
Brice | 555-5551 | 4 |
Ryan | 555-5552 | 1 |
David | 555-5553 | 2 |
The first thing we need to do is tell EF how to connect to the file. Inside your DbContext.OnConfiguring (or similar), call UseJet with an appropriate connection string.
using Microsoft.EntityFrameworkCore;

class SignupContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseJet(
            """
            Provider = Microsoft.ACE.OLEDB.12.0;
            Data Source = Signups.xlsx;
            Extended Properties = 'Excel 12.0 Xml';
            """);
}
This is an OleDB connection string that says to use the ACE provider. It contains the path to our Signups.xlsx file. And finally, it includes additional properties that tell the provider we’re connecting to a modern Excel file.
Normally, we’d ask EF to reverse engineer a model and scaffold the appropriate classes, but the EF Core Jet provider was created primarily with Access databases in mind, so it currently isn’t able to reverse engineer Excel files.
Instead, we’ll use the following code to see what tables and columns are available.
using System.Data;
using Microsoft.EntityFrameworkCore;

using var db = new SignupContext();
db.Database.OpenConnection();
var connection = db.Database.GetDbConnection();

using var tables = connection.GetSchema("Tables");
foreach (DataRow table in tables.Rows)
{
    var tableName = (string)table["TABLE_NAME"];
    Console.WriteLine(tableName);

    var command = connection.CreateCommand();
    command.CommandType = CommandType.TableDirect;
    command.CommandText = tableName;
    using var reader = command.ExecuteReader(CommandBehavior.SchemaOnly);
    using var columns = reader.GetSchemaTable();
    foreach (DataRow column in columns.Rows)
    {
        Console.WriteLine($"  {column["DataType"]} {column["ColumnName"]}");
    }
}
When we run it against our Signups.xlsx file, we get the following output.
Sheet1$
  System.String Name
  System.String Phone number
  System.Double Party size
With this information, we’re able to create a class that maps to our spreadsheet.
[Keyless, Table("Sheet1$")]
class SignupEntry
{
public string Name { get; set; }
[Column("Phone number")]
public string PhoneNumber { get; set; }
[Column("Party size")]
public double PartySize { get; set; }
}
I’m using a keyless entity type since I’m only going to be reading the data. If your data has a column (or columns) that can serve as the primary key, you probably want to specify them in your model.
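For example (purely hypothetical here, since nothing in our sheet is guaranteed to be unique), mapping Name as the key would look something like this:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Table("Sheet1$")]
class SignupEntry
{
    // Hypothetical: assumes every Name in the sheet is unique
    [Key]
    public string Name { get; set; }

    // ...remaining columns as before
}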
Don’t forget to add a DbSet property to your DbContext.
public DbSet<SignupEntry> Signups { get; set; }
And that’s it! You should be able to query your Excel file just like any other data source.
var partyCount = db.Signups.Count();
Console.WriteLine($"Parties: {partyCount}");
var averagePartySize = db.Signups.Average(s => s.PartySize);
Console.WriteLine($"Average size: {averagePartySize}");
var largestParty = db.Signups.OrderByDescending(s => s.PartySize).First();
Console.WriteLine($"Largest: {largestParty.Name}, party of {largestParty.PartySize}");
Parties: 3
Average size: 2.3333333333333335
Largest: Brice, party of 4
I wasn’t looking for a job, but God works in mysterious ways. He led me to an opportunity that warranted my consideration. No, it doesn’t pay more, and it certainly isn’t more prestigious. Ultimately, I felt I was needed there for a season, so I accepted. They were generous enough to let me stay through the .NET 8 release.
The good news is they use Entity Framework. I strive to be a valuable member of the .NET open source community, and I will continue to do so. I’m excited to be working alongside you in the trenches using the product that I’ve poured my blood, sweat, and tears into. I anticipate experiencing a lot of customer empathy in the near future, and seriously questioning some of my past design decisions.
My new employer is a small, non-tech company. I’ll be working in the IT department on a team of about seven (plus or minus two). Like all good SQLite developers, we’ll live according to the Rule of St. Benedict.
Until we meet again, goodbye.
SQLite3 Multiple Ciphers is an extension to SQLite for reading and writing encrypted databases. It supports five different encryption schemes including the ones for System.Data.SQLite, SQLCipher, and wxSQLite3. It’s also cross-platform which means it can be used with Linux, macOS, Windows, Android, and iOS.
The new SQLitePCLRaw.bundle_e_sqlite3mc package makes it super easy to use with various .NET libraries.
Start using it with Microsoft.Data.Sqlite and Dapper by installing the right packages. Be sure to use the package ending in .Core and not the main Microsoft.Data.Sqlite one. This avoids installing two conflicting bundles into your project.
dotnet add package Microsoft.Data.Sqlite.Core
dotnet add package SQLitePCLRaw.bundle_e_sqlite3mc
dotnet add package Dapper
After that, you can simply use the Password keyword in your connection string to create and open an encrypted database. This uses the default encryption scheme.
using Dapper;
using Microsoft.Data.Sqlite;
using var connection = new SqliteConnection("Data Source=example.db;Password=Password12!");
var version = connection.ExecuteScalar<string>("select sqlite3mc_version()");
Console.WriteLine(version);
For SQLite-net, be sure to use the sqlite-net-base package instead of the main one to avoid conflicting bundles.
dotnet add package sqlite-net-base
dotnet add package SQLitePCLRaw.bundle_e_sqlite3mc
SQLite-net has a convenient key parameter you can pass when creating the connection string.
using SQLite;
SQLitePCL.Batteries_V2.Init();
var connection = new SQLiteConnection(new("example.db", storeDateTimeAsTicks: true, key: "Password12!"));
var version = connection.ExecuteScalar<string>("select sqlite3mc_version()");
Console.WriteLine(version);
For EF Core, again, use the package ending in .Core to avoid conflicting bundles.
dotnet add package Microsoft.EntityFrameworkCore.Sqlite.Core
dotnet add package SQLitePCLRaw.bundle_e_sqlite3mc
Under the covers, EF Core uses Microsoft.Data.Sqlite, so again, you can just specify the Password keyword in the connection string.
options.UseSqlite("Data Source=example.db;Password=Password12!");
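In context, that call goes inside your context’s OnConfiguring (a minimal sketch; the context name is a placeholder):

using Microsoft.EntityFrameworkCore;

class ExampleContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=example.db;Password=Password12!");
}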
At the beginning, I mentioned SQLite3 Multiple Ciphers supports multiple encryption schemes. Let’s look at how to configure the scheme.
If you have an existing database encrypted by System.Data.SQLite, you can open it by using a URI filename in your connection string and specifying the rc4 cipher.
Data Source=file:example.db?cipher=rc4;Password=Password12!
It looks a little funny, but it should work. See the docs for additional options that can be specified in the URI.
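Here’s what that looks like end to end with Microsoft.Data.Sqlite (a sketch; the same URI pattern applies to the SQLCipher example below):

using Microsoft.Data.Sqlite;

// The cipher is selected via URI filename parameters; the password stays a keyword
using var connection = new SqliteConnection(
    "Data Source=file:example.db?cipher=rc4;Password=Password12!");
connection.Open();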
I think being able to read encrypted databases created by System.Data.SQLite will unblock several projects that have been wanting to move to Microsoft.Data.Sqlite or EF Core and go cross-platform.
If you have a database created by SQLCipher (including SQLitePCLRaw.bundle_e_sqlcipher), you can open it by specifying sqlcipher in the connection string.
Data Source=file:example.db?cipher=sqlcipher&legacy=4;Password=Password12!
SQLite3 Multiple Ciphers is not intended to be a drop-in replacement for SQLCipher, but it’s a great tool for working with existing SQLCipher databases.
The legacy option is just to avoid breaks in the future when SQLCipher changes their defaults. Again, be sure to check the docs for additional options that can be specified.
I’m really excited to have a new encryption option for SQLite in .NET. Please give it a try and let us know if you run into any issues.
The first time I remember thinking, “Wow, this terminal is beautiful!” was twenty years ago when I first installed Gentoo Linux. It was a high-resolution (well, for the time), framebuffered terminal, and something about the blue and green hues immediately sparked joy.
Years later, I researched it, and it turned out that there wasn’t anything special about Gentoo’s color scheme. It’s just the default VGA text mode palette. But, it was so much more beautiful than the vintage Windows palette.
[Side-by-side swatches: the vintage Windows palette vs. the VGA palette]
Those bright colors just look so much better to me!
Another thing I love about the VGA palette is the orange it uses instead of that ugly olive color. None of the preset color schemes in Windows Terminal have duplicated this improvement. All the dark yellow hues are still kinda ugly. What a shame!
[Swatches of the Campbell color scheme]
There is one thing, however, that the Windows Terminal schemes do better. Some of ’em use a lovely shade of purple for their dark magenta hue. So, in order to create the perfect color scheme, I mathematically reverse engineered the formula the VGA palette uses to shift dark yellow to orange, and I used it to shift dark magenta to purple. Behold!
[Swatches of Brice’s Color Scheme]
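If you’re curious about the arithmetic (my reading of the palette values, not an official formula): VGA turns dark yellow #AAAA00 into orange #AA5500 by halving the green channel, so applying the same halving to dark magenta #AA00AA’s red channel gives the purple #5500AA.

// Dark yellow #AAAA00 -> orange #AA5500: the 0xAA green channel is halved to 0x55.
// Apply the same shift to dark magenta #AA00AA by halving its red channel.
var (r, g, b) = (0xAA, 0x00, 0xAA);
Console.WriteLine($"#{r / 2:X2}{g:X2}{b:X2}"); // #5500AA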
Oh, I also created this PowerShell script that you can use to install it.
$settingsPath = "$env:LOCALAPPDATA\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState\settings.json"
$settings = ConvertFrom-Json (Get-Content -Raw $settingsPath)
$settings.schemes += [PSCustomObject]@{
    name = "Brice's Color Scheme"
    black = '#000000'
    blue = '#0000AA'
    green = '#00AA00'
    cyan = '#00AAAA'
    red = '#AA0000'
    purple = '#5500AA'
    yellow = '#AA5500'
    white = '#AAAAAA'
    brightBlack = '#555555'
    brightBlue = '#5555FF'
    brightGreen = '#55FF55'
    brightCyan = '#55FFFF'
    brightRed = '#FF5555'
    brightPurple = '#FF55FF'
    brightYellow = '#FFFF55'
    brightWhite = '#FFFFFF'
    foreground = '#AAAAAA'
    background = '#000000'
    cursorColor = '#FFFFFF'
    selectionBackground = '#FFFFFF'
}
Set-Content $settingsPath (ConvertTo-Json $settings -Depth 100)
Before you can access the WinRT namespaces, you need to reference them. Do this by updating your project’s target framework moniker (TFM) to be Windows-specific. Inside your project file, find the TargetFramework element and add -windows10.0.17763 to it.
<TargetFramework>net6.0-windows10.0.17763</TargetFramework>
In the same way that using the -android or -ios suffixes lets you access those platforms’ native APIs, this lets you access the Windows ones.
The first question we want to answer using the WinRT APIs is: When was a photo taken?
There are lots of ways to access this information, but in my opinion, the best is also one of the simplest.
var file = await StorageFile.GetFileFromPathAsync(path);
var properties = await file.GetBasicPropertiesAsync();
var dateTaken = properties.ItemDate;
Console.WriteLine($"Date Taken: {dateTaken}");
The BasicProperties.ItemDate API will give you “the most relevant date for the item”. For photos, that means the date it was taken. I prefer this API because you don’t have to worry about how the metadata is stored. It will look everywhere it can to find the information.
The next question we want to answer is: Where was the photo taken?
For this, we’ll use GeotagHelper. Like ItemDate above, this will look everywhere it can for the location.
var geotag = await GeotagHelper.GetGeotagAsync(file);
var latitude = geotag?.Position.Latitude;
var longitude = geotag?.Position.Longitude;
Console.WriteLine($"Location: {latitude},{longitude}");
The previous APIs were nice and simple. They’re also general-purpose: they work just as well on video files. But to answer the question “Who is this a photo of?” we’ll need to dive deeper into the metadata APIs.
Photo apps like Picasa, Windows Live Photo Gallery (may they both rest in peace), and digiKam let you tag people in your photos the same way you would when posting on social media. This information gets embedded into the image’s metadata.
If all you need are the names, you can use ImageProperties.PeopleNames.
var imageProperties = await file.Properties.GetImagePropertiesAsync();
var peopleNames = imageProperties.PeopleNames;
Unfortunately, this doesn’t tell you which name belongs to which face. To get the corresponding rectangle on the image, we need to do some intense querying of the metadata.
There are two main ways this information is stored. The first metadata standard for it was Microsoft Photo (you guessed it, made popular by our beloved Windows Live Photo Gallery). Later, the big tech companies came together as the Metadata Working Group to create another standard that “fixed” all their complaints about the first one. So, now we always have two places to look instead of one.
Here’s a method to query the metadata using BitmapDecoder.BitmapProperties. Fun fact: this API is backed by the Windows Imaging Component (or WIC) so every image format imaginable is supported. Well, so long as you have a codec installed for it anyway.
static async Task<IReadOnlyList<(string Name, Rect Area)>> GetPeopleAsync(IStorageFile file)
{
    var people = new List<(string Name, Rect Area)>();

    using var stream = await file.OpenReadAsync();

    BitmapDecoder decoder;
    try
    {
        decoder = await BitmapDecoder.CreateAsync(stream);
    }
    catch
    {
        return Array.Empty<(string, Rect)>();
    }

    // Microsoft Photo
    var regionList = (BitmapPropertiesView?)(await decoder.BitmapProperties.GetPropertiesAsync(new[] { "/xmp/MP:RegionInfo/MPRI:Regions" })).Values.SingleOrDefault()?.Value;
    if (regionList is not null)
    {
        foreach (var region in (await regionList.GetPropertiesAsync(Enumerable.Empty<string>())).Values.Select(p => (BitmapPropertiesView)p.Value))
        {
            var name = (string)(await region.GetPropertiesAsync(new[] { "/MPReg:PersonDisplayName" })).Values.Single().Value;

            var rectangle = (string?)(await region.GetPropertiesAsync(new[] { "/MPReg:Rectangle" })).Values.SingleOrDefault()?.Value;
            if (rectangle is null)
                continue;

            var rectangleParts = rectangle.Split(',', StringSplitOptions.TrimEntries);
            var x = double.Parse(rectangleParts[0]);
            var y = double.Parse(rectangleParts[1]);
            var w = double.Parse(rectangleParts[2]);
            var h = double.Parse(rectangleParts[3]);

            people.Add((name, new Rect(x, y, w, h)));
        }
    }

    // Metadata Working Group
    const string mwgRs = @"http\:\/\/www.metadataworkinggroup.com\/schemas\/regions\/";
    regionList = (BitmapPropertiesView?)(await decoder.BitmapProperties.GetPropertiesAsync(new[] { $"/xmp/{mwgRs}:Regions/{mwgRs}:RegionList" })).Values.SingleOrDefault()?.Value;
    if (regionList is not null)
    {
        foreach (var region in (await regionList.GetPropertiesAsync(Enumerable.Empty<string>())).Values.Select(p => (BitmapPropertiesView)p.Value))
        {
            var name = (string?)(await region.GetPropertiesAsync(new[] { $"/{mwgRs}:Name" })).Values.SingleOrDefault()?.Value;
            if (name is null)
                continue;

            const string stArea = @"http\:\/\/ns.adobe.com\/xmp\/sType\/Area";
            var cx = double.Parse((string)(await region.GetPropertiesAsync(new[] { $"/{mwgRs}:Area/{stArea}#:x" })).Values.Single().Value);
            var cy = double.Parse((string)(await region.GetPropertiesAsync(new[] { $"/{mwgRs}:Area/{stArea}#:y" })).Values.Single().Value);
            var w = double.Parse((string)(await region.GetPropertiesAsync(new[] { $"/{mwgRs}:Area/{stArea}#:w" })).Values.Single().Value);
            var h = double.Parse((string)(await region.GetPropertiesAsync(new[] { $"/{mwgRs}:Area/{stArea}#:h" })).Values.Single().Value);

            // Note, x and y represent the center of the rectangle in this format.
            // We normalize it to left and top instead so it matches Rect.
            people.Add((name, new Rect(cx - (w / 2.0), cy - (h / 2.0), w, h)));
        }
    }

    return people;
}
As you can see, these APIs aren’t the friendliest to work with, but with a lot of casting and superfluous constructs, we’re able to get the information we need.
Now we can use this method to get the people in our photo.
var people = await GetPeopleAsync(file);

Console.WriteLine("People:");
foreach (var person in people)
    Console.WriteLine($" {person.Name} [{person.Area}]");
One interesting thing to note is that the x, y, width, and height values are always between 0 and 1. That’s because they’re percentages of the photo’s actual width and height. This allows the metadata values to remain the same even if the image is resized. If you want to draw the rectangles, multiply the values by the image’s width and height.
var rectToDraw = new Rect(
    faceArea.X * image.Width,
    faceArea.Y * image.Height,
    faceArea.Width * image.Width,
    faceArea.Height * image.Height);
Hopefully this has given you a taste of all the untapped power inside Windows just waiting to be used by your .NET apps. Let me know if there are other areas you’d like to see covered, and let me know about all the cool APIs that I should be using in my apps. Happy coding!
This technology is very old, and I suspect that parts of it even existed before .NET. Seeing an ADO.NET provider from DDEX’s perspective gave me a lot of insight into the design of ADO.NET. For example, the GetSchema method and its collections were always strange to me, and frankly, seemed kinda useless. But now, I see they exist primarily to support the DDEX provider. My new opinion is that GetSchema is actually just the result of bad architectural layering. 😉
I’ve been steadily making progress on this provider in my spare time, and I’ve found that having a read-only view of SQLite databases inside of Visual Studio’s Server Explorer can be pretty handy when debugging. It’ll never be able to compete with more robust tools like SQLite Toolbox, DB Browser for SQLite, or DataGrip, but coupled with the fact that it’s also a DDEX provider for Microsoft.Data.Sqlite that other Visual Studio extensions could use, I decided to release a preview.
You can download the preview from Visual Studio Marketplace or from the Manage Extensions dialog inside Visual Studio 2022. I’m eager to see if you think it’s useful.
The year was 2011. We were just about to release version 4.3, which was the first to include Code First Migrations. We were thinking about how you might want to extend the Migrations SQL generation. We were also constantly thinking about how much DBAs hated us because of the convoluted SQL our queries would sometimes generate. (Don’t worry, we’ve since improved that.) We decided that instead of handing the DBAs a SQL script generated by Migrations, it might be better if you could just send a database creation request that described the application’s requirements. That way, they could write the SQL exactly the way they want it and couldn’t complain about how we generated ours.
Well, last night, I decided to have some fun and threw together a prototype. I present to you Dear DBA! Yes, the source code is even available on GitHub. It’s a Migrations SQL generator that, instead of generating SQL, generates a friendly message you can send to your DBA instead. Here’s an example of what it generates for the classic Blogs and Posts model.
Dear DBA,
We lowly developers would once again petition you for a new database.
We’ll need a Blogs table with the following columns. A required Id column to store unique INTEGER values. We’ll use the Id value as the primary way of identifying rows in the table. A required Url column to store TEXT values.
We’ll also need a Posts table with the following columns. A required Id column to store unique INTEGER values. We’ll use the Id value as the primary way of identifying rows in the table. A required Title column to store TEXT values. A required Content column to store TEXT values. A required BlogId column to store INTEGER values that reference the Blogs table.
We don’t really know what an index is, but in the past, you’ve been able to use them to improve performance and compensate for some of our more incompetent queries. We will, of course, defer to your far superior expertise on this matter. Nevertheless, we will suggest a few that we’re reasonably certain about.
We think the BlogId column on the Posts table should be indexed.
Sincerely,
The Developers
Hopefully my geeky, self-deprecating sense of humor can bring you a bit of cheer today. Happy coding!
The underlying, native SQLite connections are now pooled by default. This greatly improves the performance of opening and closing connections. This is especially noticeable in scenarios where opening the underlying connection is expensive as is the case when using encryption or in scenarios where there are lots of short-lived connections to the database. The Orchard Core benchmarks went from 5.5K to 14.5K requests per second with the latest version of Microsoft.Data.Sqlite.
Beware, however, that the database file may still be locked after you close a connection. If this becomes a problem, you can manually clear the pool to release the lock:
SqliteConnection.ClearPool(connection);
// or
SqliteConnection.ClearAllPools();
If you run into any issues, you can turn off connection pooling by specifying Pooling=False in your connection string. Please be sure to file an issue too!
This release implements the ADO.NET Savepoints API. Savepoints enable nested transactions. For more information and a sample, see the new Savepoints section in the docs.
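Here’s a minimal sketch of the pattern (the savepoint name is made up):

using Microsoft.Data.Sqlite;

using var connection = new SqliteConnection("Data Source=example.db");
connection.Open();

using var transaction = connection.BeginTransaction();

// Mark a point we can roll back to without abandoning the outer transaction
transaction.Save("beforeRiskyWork");
try
{
    // ...do some work that might fail...
    transaction.Release("beforeRiskyWork");
}
catch
{
    transaction.Rollback("beforeRiskyWork");
}

transaction.Commit();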
.NET 6 added two new types for working with date and time values: DateOnly and TimeOnly. These types work just as you’d expect in parameters, data readers, and user-defined functions.
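For instance (a sketch; the query and date are arbitrary):

using Microsoft.Data.Sqlite;

using var connection = new SqliteConnection("Data Source=:memory:");
connection.Open();

var command = connection.CreateCommand();
command.CommandText = "select $date";
command.Parameters.AddWithValue("$date", new DateOnly(2021, 11, 8));

// Read the value back as a DateOnly
using var reader = command.ExecuteReader();
reader.Read();
var date = reader.GetFieldValue<DateOnly>(0);
Console.WriteLine(date);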
Here are a few more minor changes also worth mentioning:
- The Command Timeout connection string keyword, as part of an effort to standardize it across providers
- Span overloads of SqliteBlob to avoid allocations

Happy coding! Don’t forget to vote on the issues you’d like to see implemented in a future release.
First off, if you don’t know what T4 is, I highly recommend skimming through the docs. In a nutshell, it’s a templating engine that lets you use C# to generate text, and it’s very simple (literally, the language only has like three concepts). It’s similar to using a string builder, but instead of focusing on the C# and managing the builder yourself, T4 lets you focus on the output, sprinkling in a bit of C# as needed.
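For a taste, here’s a made-up template (not from the docs) showing all three of those concepts (a directive, a control block, and an expression block):

<#@ output extension=".txt" #>
<# for (var i = 1; i <= 3; i++) { #>
Hello, person number <#= i #>!
<# } #>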
I’ve found a lot of great uses for T4 over the years. Here are some examples that I’m particularly proud of:
Ok, maybe I’m not particularly proud of that last one since the templates are a bit sloppy and they deal way too much with whitespace. But T4 was a pretty big part of Entity Framework in the past. We used it to generate code from an .edmx file, and eventually (as linked above) to generate code from a model reverse engineered from a database.
The T4 ecosystem has been pretty good over the years, but it has also seen a lot of turnover. The following table shows some of the incredible extensions we’ve seen. The dots show when they were first released, and which subsequent versions of Visual Studio they supported.
 | 2008 | 2010 | 2012 | 2013 | 2015 | 2017 | 2019 |
---|---|---|---|---|---|---|---|
Clarius | ● | | | | | | |
Tangible | ● | ● | ● | ● | ● | ● | ● |
Devart | | ● | ● | ● | ● | ● | ● |
Oleg Sych | | ● | ● | ● | ● | | |
Tim Maes | | | | | | | ● |
Recent years have even seen a bit of a revival in the ecosystem. Mikayla Hutchinson has put a ton of work into adding T4 support to Mono and .NET Core, and JetBrains even added T4 support to Rider. Even we, the EF team, hope to keep the momentum going by adding support for T4 templates to scaffolding (again) in the next release.
Why release yet another T4 editor for Visual Studio? The table seems to indicate that Tangible and Devart have a pretty good track record, and Tim Maes only recently entered the mix, so he’ll probably release a version for VS 2022 too, right?
Well, it’s complicated. I certainly hope they all release updated versions for VS 2022! They have way more features than I’ll ever be able to add to my extension by myself. But there are three main reasons why I also wanted to enter the mix.
First, I’ve been using Visual Studio 2022 exclusively for a few months now, and I desperately miss having syntax highlighting for my templates. It’s very easy to go cross-eyed and lose your place without it. I’ve obsessively searched for T4 in the extension manager over the past few months, but the list was always empty. As soon as Visual Studio 2022 RC released and the list was still empty, I decided it was time to take action.
Second, this wasn’t a new idea for me. Way back in 2015 when we were still developing .NET Core, I started thinking about how we could bring T4 to .NET Core. I remember sitting down with Taylor Mullen to learn more about how the ASP.NET Razor editor worked, and how that architecture might be replicated inside a T4 editor. Ever since, I’ve kept tabs on the Razor editor’s development, and watched as they moved to using TextMate grammars and LSP. This has enabled them to work across additional editors like VS Code. All the other T4 extensions I’ve seen for Visual Studio still use the traditional language extensibility APIs. I wanted to build something modern and reusable.
Finally, it was Microsoft’s annual Hackathon. I had three whole distraction-free days to bring my idea to life. I knew I could never compete with the functionality of existing editors, but I could at least provide myself and others with syntax highlighting in the meantime, using modern and reusable technologies.
So where does it go from here? Well, I don’t know. I may be just another dot on the table above. Or maybe this release can be the catalyst for rich, up-to-date tooling built into Visual Studio. Who knows; only time will tell…