Kenny Kerr is a Software Engineer at Microsoft where he works on C++ and Rust tools and libraries for the Windows operating system. He is the creator of C++/WinRT and Rust for Windows. Kenny also wrote recurring columns and feature articles for C/C++ Users Journal, Visual Studio Magazine, and MSDN Magazine (originally called Microsoft Systems Journal).

Getting Started with Rust

Kenny's courses on Pluralsight

Kenny on GitHub

Kenny on YouTube

Kenny on LinkedIn

Kenny's old blog on WordPress

Kenny's old blog on asp.net

Getting Started with Rust

The windows-rs project has been available for some time and while I still have a great deal of work left to do, I thought I should start spending some time writing about Rust for Windows and not simply building Rust for Windows. 😊 As I did for C++/WinRT, I thought I would start writing a few short "how to" or "how it works" articles to help developers understand some of the fundamentals of the windows-rs project.

Some of these topics will be obvious for Rust developers but perhaps not the Windows developer new to Rust. Other topics might be obvious to Windows developers but less so to the seasoned Rust developer new to Windows. Either way, I hope you find it useful. Feel free to open an issue on the repo if you have any questions.

Choosing between the windows and windows-sys crates

The windows crate provides bindings for the Windows API, including C-style APIs like CreateThreadpool as well as COM and WinRT APIs like DirectX. This crate provides the most comprehensive API coverage for the Windows operating system. Where possible, the windows crate also attempts to provide a more idiomatic and safe programming model for Rust developers.

The windows-sys crate provides raw bindings for the C-style Windows APIs. It lacks support for COM and WinRT APIs. The windows-sys crate was born out of the realization that the most expensive aspect of the windows crate, in terms of build time, is the cost of compiling function bodies, so a version of the crate that only includes declarations is both much smaller and much faster to compile. The trouble is that COM-style virtual function calls require extra code gen in Rust (unlike C++), and this in turn leads to slower compile times. Enter the windows-sys crate.

Of course, we continue to work hard at improving performance, both in the underlying Rust compiler toolchain and in the efficiency of the code generated for these crates. We are thus confident that compile times will continue to improve.

What do you need?                                          windows   windows-sys
Fast compile times are one of your top concerns                      ✅
You need no_std support                                              ✅
You need COM or WinRT support                              ✅
You would prefer to use APIs that feel idiomatic to Rust   ✅
Minimum supported Rust version                             1.56      1.56

How are these crates built?

The windows and windows-sys crates are generated from metadata describing the Windows API. Originally only WinRT APIs included metadata, but metadata is now provided for older C and COM APIs as well. The win32metadata project provides the tools to produce the metadata and the windows-metadata and windows-bindgen crates are used to read the metadata and generate the windows and windows-sys crates. The bindings are generated differently based on the differing goals of the respective crates. You can find the exact metadata files used to generate a particular version of the windows and windows-sys crates here.

How do I find a particular API?

First pick the crate you would like to use and then search the documentation for the chosen crate.

Note that the docs include a note indicating which features to enable in order to access a particular API.

What APIs are included?

All Windows APIs provided by the Windows SDK are included, with a few exceptions. The definitions of these APIs are collected from metadata and transformed into Rust bindings. The process of generating the Rust bindings purposefully omits a few APIs: APIs are only excluded if they are (1) unsuitable for Rust developers and (2) impose a large hit on the overall size of the windows and windows-sys crates.

The Xaml API is excluded because it is all but unusable without direct language support that only the Xaml team can provide. Xaml is also tailored for C# app development, so this API isn't applicable to Rust developers. The MsHtml API is also excluded because it is only intended for Microsoft's older scripting languages like JScript and VBScript. It is also by far the single largest module as measured in lines of code. Beyond that, a few deprecated and unusable APIs are excluded. You can see exactly what the windows crate excludes and what the windows-sys crate excludes.

In addition, the windows-sys crate currently excludes all COM and WinRT APIs. The windows-sys crate only includes declarations, and COM and WinRT calls are far too cumbersome without the abstractions provided by the windows crate. Here are some tips for choosing between the windows and windows-sys crates.

Where's my favorite macro from the Windows SDK?

The windows and windows-sys crates are generated from metadata. This metadata only includes type definitions and function signatures, not macros, header-only functions, or function bodies. You may find some equivalents of common C/C++ helper macros and functions in the windows crate, but in general the macros don't have direct equivalents in the windows or windows-sys crates.
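When a macro is just a thin wrapper over some bit manipulation, it's usually trivial to reproduce in plain Rust. As a sketch, here are hand-written equivalents of the Windows SDK's LOWORD, HIWORD, and MAKELONG macros; writing these yourself, and these particular names, are assumptions for illustration rather than anything provided by the crates:

```rust
// Hand-written stand-ins for the LOWORD, HIWORD, and MAKELONG macros
// from the Windows SDK headers, expressed as plain Rust functions.

fn loword(value: u32) -> u16 {
    // Low-order 16 bits.
    (value & 0xffff) as u16
}

fn hiword(value: u32) -> u16 {
    // High-order 16 bits.
    ((value >> 16) & 0xffff) as u16
}

fn makelong(low: u16, high: u16) -> u32 {
    // Combine two 16-bit values into a 32-bit value.
    (low as u32) | ((high as u32) << 16)
}

fn main() {
    let value = makelong(0x5678, 0x1234);
    assert_eq!(value, 0x12345678);
    assert_eq!(loword(value), 0x5678);
    assert_eq!(hiword(value), 0x1234);
}
```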

Calling your first API with the windows crate

So you want to get a feel for calling a simple Windows API. Where to start? Let's look at a relatively simple API for submitting callbacks to the thread pool. You can read more about this API here.

The first step is to add a dependency on the windows crate and indicate which features you'd like to access:

[dependencies.windows]
version = "0.52"
features = [
    "Win32_Foundation",
    "Win32_System_Threading",
]

Why these two features? Well, the thread pool API is defined in the Win32::System::Threading module and we'll also use a handful of definitions from the Win32::Foundation module. If you're unsure, the docs for any given API provide a helpful comment indicating which features are required. For example, here are the docs for WaitForThreadpoolWorkCallbacks where you can see it depends on both of these features since it is defined in the Win32::System::Threading module and depends on BOOL which is defined in the Win32::Foundation module.

Cargo will now handle the heavy lifting, tracking down the dependencies and making sure the import libs are present, so that we can simply call these APIs in Rust without any further configuration. We can employ a use declaration to make these APIs a little more accessible:

use windows::{core::Result, Win32::System::Threading::*};

In order to "prove" that the code works, and yet keep things simple, let's just use the thread pool to increment a counter some number of times. Here we can use a reader-writer lock for safe multi-threaded access to the counter variable:

static COUNTER: std::sync::RwLock<i32> = std::sync::RwLock::new(0);

For this example, I'll just use a simple main function with a big unsafe block since virtually everything here is going to be unsafe. Why is that? Well, the windows crate lets you call foreign functions, and these are generally assumed to be unsafe.

fn main() -> Result<()> {
    unsafe {
        
    }

    Ok(())
}

The thread pool API is modeled as a set of "objects" exposed via a traditional C-style API. The first thing we need to do is create a work object:

let work = CreateThreadpoolWork(Some(callback), None, None)?;

The first parameter is a pointer to a callback function. The remaining parameters are optional and you can read more about them in my thread pool series on MSDN.

The callback itself must be a valid C-style callback according to the signature expected by the thread pool API. Here's a simple callback that will increment the count:

extern "system" fn callback(_: PTP_CALLBACK_INSTANCE, _: *mut std::ffi::c_void, _: PTP_WORK) {
    let mut counter = COUNTER.write().unwrap();
    *counter += 1;
}

The parameters can safely be ignored but do come in handy from time to time. At this point, we have a valid work object but nothing is happening yet. In order to kick off some "work", we need to submit the work object to the thread pool. You can do so as many times as you'd like, so let's go ahead and do it ten times:

for _ in 0..10 {
    SubmitThreadpoolWork(work);
}

You can now expect the callbacks to run concurrently, hence the RwLock above. Of course, with all of that concurrency we need some way to tell when the work is done. That's the job of the WaitForThreadpoolWorkCallbacks function:

WaitForThreadpoolWorkCallbacks(work, false);

The second parameter indicates whether we would like to cancel any pending callbacks that have not started to execute. Passing false here thus indicates that we would like the wait function to block until all of the submitted work has completed. At that point, we can safely close the work object to free its memory:

CloseThreadpoolWork(work);

And just to prove that it works reliably, we can print out the counter's value:

let counter = COUNTER.read().unwrap();
println!("counter: {}", *counter);

Running the sample should print something like this:

counter: 10

Here's the full sample for reference.

Calling your first API with the windows-sys crate

So you want to get a feel for calling a simple Windows API. Where to start? Let's look at a relatively simple API for submitting callbacks to the thread pool. You can read more about this API here.

The first step is to add a dependency on the windows-sys crate and indicate which features you'd like to access:

[dependencies.windows-sys]
version = "0.52"
features = [
    "Win32_Foundation",
    "Win32_System_Threading",
]

Why these two features? Well, the thread pool API is defined in the Win32::System::Threading module and we'll also use a handful of definitions from the Win32::Foundation module. If you're unsure, the docs for any given API provide a helpful comment indicating which features are required. For example, here are the docs for WaitForThreadpoolWorkCallbacks where you can see it depends on both of these features since it is defined in the Win32::System::Threading module and depends on BOOL which is defined in the Win32::Foundation module.

Cargo will now handle the heavy lifting, tracking down the dependencies and making sure the import libs are present, so that we can simply call these APIs in Rust without any further configuration. We can employ a use declaration to make these APIs a little more accessible:

use windows_sys::{Win32::Foundation::*, Win32::System::Threading::*};

In order to "prove" that the code works, and yet keep things simple, let's just use the thread pool to increment a counter some number of times. Here we can use a reader-writer lock for safe multi-threaded access to the counter variable:

static COUNTER: std::sync::RwLock<i32> = std::sync::RwLock::new(0);

For this example, I'll just use a simple main function with a big unsafe block since virtually everything here is going to be unsafe. Why is that? Well, the windows-sys crate lets you call foreign functions, and these are generally assumed to be unsafe.

fn main() {
    unsafe {
        
    }
}

The thread pool API is modeled as a set of "objects" exposed via a traditional C-style API. The first thing we need to do is create a work object:

let work = CreateThreadpoolWork(Some(callback), std::ptr::null_mut(), std::ptr::null());

The first parameter is a pointer to a callback function. The remaining parameters are optional and you can read more about them in my thread pool series on MSDN.

Since this function allocates memory, it might fail, and this is indicated by returning a null value rather than a valid work object handle. We'll check for this condition and call the GetLastError function to display any relevant error code:

if work == 0 {
    println!("{:?}", GetLastError());
    return;
}

The callback itself must be a valid C-style callback according to the signature expected by the thread pool API. Here's a simple callback that will increment the count:

extern "system" fn callback(_: PTP_CALLBACK_INSTANCE, _: *mut std::ffi::c_void, _: PTP_WORK) {
    let mut counter = COUNTER.write().unwrap();
    *counter += 1;
}

The parameters can safely be ignored but do come in handy from time to time. At this point, we have a valid work object but nothing is happening yet. In order to kick off some "work", we need to submit the work object to the thread pool. You can do so as many times as you'd like, so let's go ahead and do it ten times:

for _ in 0..10 {
    SubmitThreadpoolWork(work);
}

You can now expect the callbacks to run concurrently, hence the RwLock above. Of course, with all of that concurrency we need some way to tell when the work is done. That's the job of the WaitForThreadpoolWorkCallbacks function:

WaitForThreadpoolWorkCallbacks(work, 0);

The second parameter indicates whether we would like to cancel any pending callbacks that have not started to execute. Passing 0 (false) thus indicates that we would like the wait function to block until all of the submitted work has completed. At that point, we can safely close the work object to free its memory:

CloseThreadpoolWork(work);

And just to prove that it works reliably, we can print out the counter's value:

let counter = COUNTER.read().unwrap();
println!("counter: {}", *counter);

Running the sample should print something like this:

counter: 10

Here's the full sample for reference.

Calling your first COM API

COM APIs are unique in that they expose functionality through interfaces. An interface is just a collection of virtual function pointers grouped together in what is known as a vtable, or virtual function table. This is not something that Rust supports directly, like C++ does, but the windows crate provides the necessary code gen to make it possible and seamless. A COM API will still typically start life through a traditional C-style function call in order to get your hands on a COM interface. From there you might call other methods via the interface.
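To make the vtable idea concrete, here is a minimal, platform-neutral sketch of the layout involved. The windows crate generates all of this for you, and the IExample interface here is purely hypothetical; real COM vtables also begin with the three IUnknown slots, which are omitted for brevity:

```rust
use std::ffi::c_void;

// A vtable is just a #[repr(C)] struct of function pointers.
#[repr(C)]
struct IExampleVtbl {
    get_value: unsafe extern "system" fn(this: *mut c_void) -> i32,
}

// An interface pointer points at an object whose first field is a
// pointer to the vtable; implementation state follows.
#[repr(C)]
struct ExampleObject {
    vtable: *const IExampleVtbl,
    value: i32,
}

unsafe extern "system" fn get_value_impl(this: *mut c_void) -> i32 {
    (*(this as *const ExampleObject)).value
}

static VTABLE: IExampleVtbl = IExampleVtbl {
    get_value: get_value_impl,
};

fn main() {
    let mut object = ExampleObject { vtable: &VTABLE, value: 123 };
    let interface = &mut object as *mut ExampleObject as *mut c_void;

    // A "virtual function call": read the vtable pointer, then invoke a slot.
    let result = unsafe {
        let object_ptr = interface as *const ExampleObject;
        ((*(*object_ptr).vtable).get_value)(interface)
    };

    assert_eq!(result, 123);
}
```

This is the extra code gen described above: every COM method call in Rust goes through an indirection like this, which the windows crate hides behind ordinary method syntax.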

Some COM-based APIs can get really complicated, so let's start with a very simple example. The CreateUri function is officially documented on MSDN as returning the IUri interface representing the results of parsing the given URI. The Rust docs for the windows crate indicate that it resides in the Win32::System::Com module, so we can configure our windows crate dependency accordingly:

[dependencies.windows]
version = "0.52"
features = [
    "Win32_System_Com",
]

And we can employ a use declaration to make this API a little more accessible. The windows crate's core module also provides a few helpers to make it easier to work with COM interfaces, so we'll include that as well:

use windows::{core::*, Win32::System::Com::*};

For this example, I'll just use a simple main function with a big unsafe block since virtually everything here is going to be unsafe. Why is that? Well, the windows crate lets you call foreign functions, and these are generally assumed to be unsafe.

fn main() -> Result<()> {
    unsafe {
        
        Ok(())
    }
}

The only "interesting" point here is the use of the Result type from the windows::core module, which provides Windows error handling to simplify the following API calls. And with that, we can call the CreateUri function as follows:

let uri = CreateUri(w!("http://kennykerr.ca"), Uri_CREATE_CANONICALIZE, 0)?;

There's quite a lot going on here. The first parameter is actually a PCWSTR, representing a null-terminated wide string used by many Windows APIs. The windows crate provides the handy w! macro for creating a valid null-terminated wide string as a compile-time constant. The second parameter is just the default flag specified by the official documentation. The third parameter is reserved and should thus be zero.

The resulting IUri object has various methods that we can now use to inspect the URI. The official documentation describes the various interface methods and the Rust docs give you a quick glimpse at their various signatures so that you can quickly figure out how to call them in Rust. For this example, let's just call two of them to print out the URI's domain and the HTTP port number:

let domain = uri.GetDomain()?;
let port = uri.GetPort()?;

println!("{domain} ({port})");

Under the hood, those methods will invoke the virtual functions through the COM interface and into the implementation provided by the API. They also provide a bunch of error and signature transformation to make it very natural to use from Rust. And that's it, running the sample should print something like this:

kennykerr.ca (80)

Here's the full sample for reference.

Calling your first WinRT API

Windows 8 introduced the Windows Runtime, which, at its heart, is just COM with a few more conventions thrown in to make language bindings appear more seamless. The windows crate already makes calling COM APIs far more seamless than it is for C++ developers, but WinRT goes further by providing first-class support for modeling things like constructors, events, and class hierarchies. In calling your first COM API, we saw that you still had to bootstrap the API with a C-style DLL export before calling COM interface methods. WinRT works the same way but abstracts this away in a generalized manner.

Let's use a simple example to illustrate. The XmlDocument "class" models an XML document that can be loaded from various sources. The Rust docs for the windows crate indicate that this type resides in the Data::Xml::Dom module so we can configure our windows crate dependency as follows:

[dependencies.windows]
version = "0.52" 
features = [
    "Data_Xml_Dom",
]

And we can employ a use declaration to make this API a little more accessible. The windows crate's core module just provides a few helpers to make it easier to work with Windows APIs, so we'll include that as well:

use windows::{core::*, Data::Xml::Dom::XmlDocument};

For this example, I'll just use a simple main function with a Result type from the windows::core module to provide automatic error propagation and simplify the subsequent API calls:

fn main() -> Result<()> {

    Ok(())
}

Unlike the previous Win32 and COM examples, you'll notice that this main function does not need an unsafe block since WinRT calls are assumed to be safe thanks to WinRT's more constrained type system.

To begin, we can simply call the new method to create a new XmlDocument object:

let doc = XmlDocument::new()?;

This looks a lot more like an idiomatic Rust type than your typical COM API, but under the hood a similar mechanism is used to instantiate the XmlDocument implementation via a DLL export. We can then call the LoadXml method to test it out. There are various other options for loading XML from different sources, which you can read about in the official documentation or from the Rust docs for the XmlDocument API. The windows crate also provides the handy h! macro for creating an HSTRING, the string type used by WinRT APIs:

doc.LoadXml(h!("<html>hello world</html>"))?;

And just like that, we have a fully-formed XML document that we can inspect. For this example, let's just grab the document element and then do some basic queries as follows:

let root = doc.DocumentElement()?;
assert!(root.NodeName()? == "html");
println!("{}", root.InnerText()?);

First we assert that the element's name is in fact "html" and then print out the element's inner text. As with the previous COM example, those methods all invoke virtual functions through COM interfaces, but the windows crate makes it very simple to make such calls directly from Rust. And that's it. Running the sample should print something like this:

hello world

Here's the full sample for reference.

How do I query for a specific COM interface?

COM and WinRT interfaces in the windows crate implement the ComInterface trait. This trait provides the cast method that will use QueryInterface under the hood to cast the current interface to another interface supported by the object. The cast method returns a Result<T> so that failure can be handled in a natural way in Rust.

For example, it is often necessary to get the IDXGIDevice interface for a given Direct3D device in order to interop with other rendering APIs. This is how you might create a swap chain for drawing and presenting to a Direct3D device. Let's imagine a simple function that accepts a Direct3D device and returns the underlying DXGI factory:

fn get_dxgi_factory(device: &ID3D11Device) -> Result<IDXGIFactory2> {
}

The first thing you need to do is query or cast the Direct3D device for its DXGI interface as follows:

let device = device.cast::<IDXGIDevice>()?;

If it's more convenient, you can also make use of type inference as follows:

let device: IDXGIDevice = device.cast()?;

With the COM interface in hand, we need an unsafe block to call its methods:

unsafe {
}

Within the unsafe block, we can retrieve the device's physical adapter:

let adapter = device.GetAdapter()?;

And just for fun (or debugging), we might print out the adapter's name:

if cfg!(debug_assertions) {
    let mut desc = Default::default();
    adapter.GetDesc(&mut desc)?;
    println!("{}", String::from_utf16_lossy(&desc.Description));
}

Finally, we can return the adapter's parent, which is the DXGI factory for the device:

adapter.GetParent()

Running the sample I get the following impressive results:

AMD FirePro W4100

Here's a more comprehensive DirectX example.

The cast method works equally well for WinRT classes and interfaces. It is particularly useful for interop with WinRT APIs.

How do I implement an existing COM interface?

In some cases, you may need to implement an existing COM interface rather than simply calling an existing implementation provided by the operating system. This is where the implement feature and macro come in handy. The windows crate provides optional implementation support hidden behind the implement feature. Once enabled, the implement macro may be used to implement any number of COM interfaces. The macro takes care of implementing IUnknown itself.

Let's implement a simple interface defined by Windows to illustrate. The IPersist interface is defined in the Win32::System::Com module, so we'll start by adding a dependency on the windows crate and include the Win32_System_Com feature:

[dependencies.windows]
version = "0.52"
features = [
    "implement",
    "Win32_System_Com",
]

The implement feature unlocks the implementation support.

The implement macro is included by the windows::core module so we'll keep things simple by including it all as follows:

use windows::{core::*, Win32::System::Com::*};

Now it's time for the implementation:

#[implement(IPersist)]
struct Persist(GUID);

The implement macro will provide the necessary implementation for the IUnknown interface's lifetime management and interface discovery for whatever interfaces are included in the attribute. In this case, only IPersist is to be implemented.

The implementation itself is defined by a trait that follows the <interface name>_Impl naming pattern, and it's up to us to implement it for our type as follows:

impl IPersist_Impl for Persist {
    fn GetClassID(&self) -> Result<GUID> {
        Ok(self.0)
    }
}

The IPersist interface, originally documented here, has a single method that returns a GUID, so we'll just implement it by returning the value contained within our implementation. The windows crate and implement macro will take care of the rest by providing the actual COM virtual function call and virtual function table layout needed to turn this into a heap-allocated and reference-counted COM object.

All that remains is to move, or box, the implementation into the COM implementation provided by the implement macro through the Into trait:

let guid = GUID::new()?;
let persist: IPersist = Persist(guid).into();

At this point, we can simply treat persist as the COM object that it is:

let guid2 = unsafe { persist.GetClassID()? };
assert_eq!(guid, guid2);
println!("{:?}", guid);

Here's a complete example.

How do I create stock collections for WinRT collection interfaces?

Beyond implementing COM interfaces yourself, the windows crate provides stock collection implementations for common WinRT collection interfaces. Implementing WinRT collection interfaces can be quite challenging, so this should save you a lot of effort in many cases. The implement feature is required to make use of these stock implementations.

Let's consider a few examples. The WinRT collection interfaces are all defined in the Foundation::Collections module, so we'll start by adding a dependency on the windows crate and include the Foundation_Collections feature:

[dependencies.windows]
version = "0.52"
features = [
    "implement",
    "Foundation_Collections",
]

Creating a collection is as simple as using the TryFrom trait on an existing Vec or BTreeMap, depending on the kind of collection:

WinRT interface   From
IIterable<T>      Vec<T::Default>
IVectorView<T>    Vec<T::Default>
IMapView<K, V>    BTreeMap<K::Default, V::Default>

So if you need an IIterable implementation of i32 values, you can create it as follows:

use windows::{core::*, Foundation::Collections::*};

fn main() -> Result<()> {
    let collection = IIterable::<i32>::try_from(vec![1, 2, 3])?;

    for n in collection {
        println!("{n}");
    }

    Ok(())
}

The resulting collection will implement all of the specialized IIterable<i32> methods.

Did you notice the T::Default in the table above? The challenge is that when a WinRT collection contains nullable types, unlike i32, the collection must necessarily have a backing implementation that supports expressing this. The Default associated type just replaces T with Option<T> for such nullable, or reference, types.

Let's consider a slightly more contrived example. Here we'll create an IMapView with strings for keys and interfaces for values. WinRT strings are not nullable but interfaces are. WinRT strings are represented by HSTRING in the windows crate and for the interface we'll just use an IStringable implementation:

use windows::Foundation::*;

#[implement(IStringable)]
struct Value(&'static str);

impl IStringable_Impl for Value {
    fn ToString(&self) -> Result<HSTRING> {
        Ok(self.0.into())
    }
}

We can now create a std collection as follows:

use std::collections::*;

let map = BTreeMap::from([
    ("hello".into(), Some(Value("HELLO").into())),
    ("world".into(), Some(Value("WORLD").into())),
]);

The Rust compiler naturally infers the exact type: BTreeMap<HSTRING, Option<IStringable>>.

Finally, we can wrap that BTreeMap inside a WinRT collection with the TryInto trait as follows:

let map: IMapView<HSTRING, IStringable> = map.try_into()?;

for pair in map {
    println!("{} - {}", pair.Key()?, pair.Value()?.ToString()?);
}

Understanding the windows-targets crate

The windows and windows-sys crates depend on the windows-targets crate for linker support. The windows-targets crate includes import libs, supports semantic versioning, and offers optional support for raw-dylib. It provides explicit import libraries for the following targets:

  • i686_msvc
  • x86_64_msvc
  • aarch64_msvc
  • i686_gnu
  • x86_64_gnu
  • x86_64_gnullvm
  • aarch64_gnullvm

An import lib contains information the linker uses to resolve external references to functions exported by DLLs. This allows the operating system to identify a specific DLL and function export at load time. Import libs are both toolchain- and architecture-specific. In other words, different lib files are required depending on whether you're compiling with the MSVC or GNU toolchains and whether you're compiling for the x86 or ARM64 architectures. Note that, unlike static libraries, import libraries don't contain any code.

While the GNU and MSVC toolchains often provide some import libs to support C++ development, those lib files are often incomplete, missing, or just plain wrong. This can lead to linker errors that are very difficult to diagnose. The windows-targets crate ensures that all functions defined by the windows and windows-sys crates can be linked without relying on implicit lib files distributed by the toolchain. This ensures that dependencies can be managed with Cargo and streamlines cross-compilation. The windows-targets crate also contains version-specific lib file names ensuring semver compatibility. Without this capability, the linker will simply pick the first matching lib file name and fail to resolve any missing or mismatched imports.

Note: Ordinarily, you don't need to think about the windows-targets crate at all. The windows and windows-sys crates depend on the windows-targets crate automatically. Only in rare cases will you need to use it directly.

Start by adding the following to your Cargo.toml file:

[dependencies.windows-targets]
version = "0.52"

Use the link macro to define the external functions you wish to call:

windows_targets::link!("kernel32.dll" "system" fn SetLastError(code: u32));
windows_targets::link!("kernel32.dll" "system" fn GetLastError() -> u32);

Make use of any Windows APIs as needed:

fn main() {
    unsafe {
        SetLastError(1234);
        assert_eq!(GetLastError(), 1234);
    }
}

By default the link macro will cause the linker to use the bundled import libs. Compiling with the windows_raw_dylib Rust build flag will cause Cargo to skip downloading the import libs altogether and instead use raw-dylib to resolve imports automatically. The Rust compiler will then create the import entries directly. This works without having to change any of your code. Without the windows-targets crate, switching between linker and raw-dylib imports requires very intricate code changes. As of this writing, the raw-dylib feature is not yet stable.
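Assuming a Cargo-based project, one way to set that cfg for every build is through a .cargo/config.toml file; the file location and the decision to apply it globally are illustrative assumptions:

```toml
# .cargo/config.toml
# Opt in to raw-dylib imports; the bundled import libs are then
# skipped entirely (unstable at the time of writing).
[build]
rustflags = ["--cfg", "windows_raw_dylib"]
```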

Standalone code generation

Even with a choice between the windows and windows-sys crates, some developers may prefer to use completely standalone bindings. The windows-bindgen crate lets you generate entirely standalone bindings for Windows APIs with a single function call that you can run from a test to automate the generation of bindings. This can help to reduce your dependencies while continuing to provide a sustainable path forward for any future API requirements you might have, or just to refresh your bindings from time to time to pick up any bug fixes automatically from Microsoft.

Warning: Standalone code generation should only be used as a last resort for the most demanding scenarios. It is much simpler to use the windows-sys crate and let Cargo manage this dependency. The windows-sys crate provides raw bindings, is heavily tested and widely used, and should not meaningfully impact your build time.

Start by adding the following to your Cargo.toml file:

[dependencies.windows-targets]
version = "0.52"

[dev-dependencies.windows-bindgen]
version = "0.52"

The windows-bindgen crate is only needed for generating bindings and is thus a dev dependency only. The windows-targets crate is a dependency shared by the windows and windows-sys crates and only contains import libs for supported targets. This will ensure that you can link against any Windows API functions you may need.

Write a test to generate bindings as follows:

#[test]
fn bindgen() {
    let args = [
        "--out",
        "src/bindings.rs",
        "--config",
        "flatten",
        "--filter",
        "Windows.Win32.System.SystemInformation.GetTickCount",
    ];

    windows_bindgen::bindgen(args).unwrap();
}

Make use of any Windows APIs as needed:

mod bindings;

fn main() {
    unsafe {
        println!("{}", bindings::GetTickCount());
    }
}

Creating your first DLL in Rust

Rust is a systems programming language with linkage support similar to that of C and C++, so building a DLL in Rust is quite straightforward. Rust does, however, have its own notion of libraries that is quite different from that of C and C++, so it's just a matter of finding the right configuration to produce the desired output.

As with most Rust projects, you could use Cargo to generate a basic template, but this project is so simple that we'll build it by hand to see what's involved. Let's create a directory structure as follows:

> hello_world
  Cargo.toml
  > src
    lib.rs

Just two directories and two files. The hello_world directory contains the project as a whole. In that directory, the Cargo.toml file contains the package metadata, the information Cargo needs to compile the package:

[package]
name = "hello_world"
edition = "2021"

[lib]
crate-type = ["cdylib"]

At a minimum, the [package] section includes the name and Rust edition your package is compiled with.

Rust-only libraries don't generally include a [lib] section. It becomes necessary when you need to control specifically how the project is built and linked. In this case, we're using 'cdylib', which represents a dynamic system library and maps to a DLL on Windows.

The src sub directory contains the lib.rs Rust source file where we can add any functions that we'd like to export from the DLL. Here's a simple example:

#[no_mangle]
extern "system" fn HelloWorld() -> i32 {
    123
}

The #[no_mangle] attribute tells the compiler to disable name mangling and use the function name verbatim as the exported identifier. The extern "system" qualifier indicates the ABI, or calling convention, expected for the function. The "system" string represents the system-specific calling convention, which maps to "stdcall" on 32-bit Windows.

And that's it! You can now build the package and it will produce a DLL:

> cargo build -p hello_world

Cargo will drop the resulting binaries in the target directory where you can then use them from any other programming language:

> dir /b target\debug\hello_world.*
hello_world.d
hello_world.dll
hello_world.dll.exp
hello_world.dll.lib
hello_world.pdb

Here's a simple example in C++:

#include <stdint.h>
#include <stdio.h>

extern "C" {
    int32_t __stdcall HelloWorld();
}

int main() {
    printf("%d\n", HelloWorld());
}

You can build it with MSVC as follows:

cl hello_world.cpp hello_world.dll.lib

The dumpbin tool can be used to further inspect imports and exports.

> dumpbin /nologo /exports hello_world.dll

Dump of file hello_world.dll

File Type: DLL

  Section contains the following exports for hello_world.dll

    00000000 characteristics
    FFFFFFFF time date stamp
        0.00 version
           1 ordinal base
           1 number of functions
           1 number of names

    ordinal hint RVA      name

          1    0 00001000 HelloWorld = HelloWorld
> dumpbin /nologo /imports hello_world.exe

Dump of file hello_world.exe

File Type: EXECUTABLE IMAGE

  Section contains the following imports:

    hello_world.dll
             140017258 Import Address Table
             140021200 Import Name Table
                     0 time date stamp
                     0 Index of first forwarder reference

                           0 HelloWorld

Implement a traditional Win32-style API

Now that we know how to create a DLL in Rust, let's consider what it takes to implement a simple Win32-style API. While WinRT is generally a better choice for new operating system APIs, Win32-style APIs continue to be important. You might need to re-implement an existing API in Rust or just need finer control of the type system or activation model for one reason or another.

To keep things simple but realistic, let's implement a JSON validator API. The idea is to provide a way to efficiently validate a given JSON string against a known schema. Efficiency requires that the schema is pre-compiled, so we can produce a logical JSON validator object that may be created and freed separately from the process of validating the JSON string. You can imagine a hypothetical Win32-style API looking like this:

HRESULT __stdcall CreateJsonValidator(char const* schema, size_t schema_len, uintptr_t* handle);

HRESULT __stdcall ValidateJson(uintptr_t handle, char const* value, size_t value_len, char** sanitized_value, size_t* sanitized_value_len);

void __stdcall CloseJsonValidator(uintptr_t handle);

The CreateJsonValidator function should compile the schema and make it available through the returned handle.

The handle can then be passed to the ValidateJson function to perform the validation. The function can optionally return a sanitized version of the JSON value.

The JSON validator handle can later be freed using the CloseJsonValidator function, causing any memory occupied by the validator "object" to be freed.

Both creation and validation can fail, so those functions return an HRESULT, with rich error information being available via the GetErrorInfo function.

Let's use the windows crate for basic Windows error handling and type support. The popular serde_json crate will be used for parsing JSON strings. Unfortunately, it doesn't provide schema validation. A quick online search reveals the jsonschema crate seems to be the main or only game in town. It will do for this example. The focus here is not really on the particular implementation as much as the process of building such an API in Rust generally.

Given these dependencies and what we learned about creating a DLL in Rust, here's what the project's Cargo.toml file should look like:

[package]
name = "json_validator"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
jsonschema = "0.17"
serde_json = "1.0"

[dependencies.windows]
version = "0.52"
features = [
    "Win32_Foundation",
    "Win32_System_Com",
]

We can employ a use declaration to make things a little easier for ourselves:

use jsonschema::JSONSchema;
use windows::{core::*, Win32::Foundation::*, Win32::System::Com::*};

And let's begin with the CreateJsonValidator API function. Here's how the C++ declaration might look in Rust:

#[no_mangle]
unsafe extern "system" fn CreateJsonValidator(
    schema: *const u8,
    schema_len: usize,
    handle: *mut usize,
) -> HRESULT {
    create_validator(schema, schema_len, handle).into()
}

Nothing too exciting here. We're just using the definition of HRESULT from the windows crate. The exported function forwards to a separate create_validator function for its implementation. We do this so that we can use the syntactic convenience of the standard Result type for error propagation. The specialization of Result provided by the windows crate further supports turning a Result into an HRESULT while discharging its rich error information to the caller; that's what the trailing into() does.
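The mechanics of that boundary can be sketched without the windows crate. In this illustration, HResult and the error codes are simple stand-ins for the real types provided by the windows crate; only the shape of the pattern matters:

```rust
// Stand-ins for the windows crate's HRESULT and error codes, purely for
// illustration of the Result-to-HRESULT pattern.
#[derive(Debug, PartialEq)]
struct HResult(i32);

const S_OK: HResult = HResult(0);
const E_POINTER: HResult = HResult(0x8000_4003u32 as i32);

// The inner function uses Result so the implementation can rely on
// ? - based error propagation...
fn create(ptr: *const u8) -> Result<(), HResult> {
    if ptr.is_null() {
        return Err(E_POINTER);
    }
    Ok(())
}

// ...and the exported wrapper flattens the Result into an HRESULT at the
// ABI boundary, the role played by the trailing into() above.
fn create_wrapper(ptr: *const u8) -> HResult {
    match create(ptr) {
        Ok(()) => S_OK,
        Err(hr) => hr,
    }
}

fn main() {
    assert_eq!(create_wrapper(std::ptr::null()), E_POINTER);
    assert_eq!(create_wrapper(b"x".as_ptr()), S_OK);
}
```

The windows crate performs this flattening for you via its From implementations, additionally stashing the rich error information where GetErrorInfo can find it.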

The create_validator function looks as follows:

unsafe fn create_validator(schema: *const u8, schema_len: usize, handle: *mut usize) -> Result<()> {
    // ...

    Ok(())
}

As you can see, it takes exactly the same parameters and simply swaps the HRESULT for a Result of the unit type: nothing other than success or error information.

First up, we need to parse the provided schema using serde_json. Since we need to parse JSON in a couple spots, we'll just drop this in a reusable helper function:

unsafe fn json_from_raw_parts(value: *const u8, value_len: usize) -> Result<serde_json::Value> {
    if value.is_null() {
        return Err(E_POINTER.into());
    }

    let value = std::slice::from_raw_parts(value, value_len);

    let value =
        std::str::from_utf8(value).map_err(|_| Error::from(ERROR_NO_UNICODE_TRANSLATION))?;

    serde_json::from_str(value).map_err(|error| Error::new(E_INVALIDARG, format!("{error}").into()))
}

The json_from_raw_parts function starts by checking that the pointer to the UTF-8 string is not null, returning E_POINTER in that case. We can then turn the pointer and length into a Rust slice, and from there into a string slice, ensuring that it is in fact valid UTF-8. Finally, we call out to serde_json to turn the string into a JSON value for further processing.
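The pointer-and-length handling can be exercised on its own. This small sketch uses a local byte string in place of a caller-supplied pointer, just to show the raw parts turning back into a &str:

```rust
// Sketch of the raw-parts handling in json_from_raw_parts, with a local
// byte string standing in for a pointer passed across the API boundary.
fn main() {
    let json = br#"{"ok":true}"#;
    let (ptr, len) = (json.as_ptr(), json.len());

    // Reconstitute a slice from the raw parts, then validate it as UTF-8.
    let slice = unsafe { std::slice::from_raw_parts(ptr, len) };
    let text = std::str::from_utf8(slice).expect("valid UTF-8");

    assert_eq!(text, r#"{"ok":true}"#);
}
```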

Now that we can parse JSON, completing the create_validator function is relatively straightforward:

unsafe fn create_validator(schema: *const u8, schema_len: usize, handle: *mut usize) -> Result<()> {
    let schema = json_from_raw_parts(schema, schema_len)?;

    let compiled = JSONSchema::compile(&schema)
        .map_err(|error| Error::new(E_INVALIDARG, error.to_string().into()))?;

    if handle.is_null() {
        return Err(E_POINTER.into());
    }

    *handle = Box::into_raw(Box::new(compiled)) as usize;

    Ok(())
}

The JSON value, in this case the JSON schema, is passed to JSONSchema::compile to produce the compiled representation. While the value is known to be JSON at this point, it may not in fact be a valid JSON schema. In such cases, we'll return E_INVALIDARG and include the error message from the JSON schema compiler to aid in debugging. Finally, provided the handle pointer is not null, we can go ahead and box the compiled representation and return it as the "handle".

Now let's move on to the CloseJsonValidator function since it's closely related to the boxing code above. Boxing just means to move the value on to the heap. The CloseJsonValidator function therefore needs to "drop" the object and free that heap allocation:

#[no_mangle]
unsafe extern "system" fn CloseJsonValidator(handle: usize) {
    if handle != 0 {
        _ = Box::from_raw(handle as *mut JSONSchema);
    }
}

We can add a little safeguard in case a zero handle is provided. This is a pretty standard convenience that simplifies generic code for callers, though a caller can avoid the cost of the call entirely if it knows the handle is zero.
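The whole handle round trip can be sketched in isolation. Here a plain String stands in for the compiled JSONSchema, just to show Box::into_raw handing out an opaque handle and Box::from_raw later reclaiming ownership:

```rust
// Sketch of the opaque-handle pattern, with a String standing in for the
// compiled JSONSchema.
fn create() -> usize {
    // Box the value onto the heap and leak it as an integer-sized handle.
    Box::into_raw(Box::new(String::from("compiled schema"))) as usize
}

unsafe fn close(handle: usize) {
    if handle != 0 {
        // Rebuilding the Box takes ownership; dropping it frees the allocation.
        drop(Box::from_raw(handle as *mut String));
    }
}

fn main() {
    let handle = create();

    // Borrow through the handle without taking ownership, as ValidateJson does.
    let value = unsafe { &*(handle as *const String) };
    assert_eq!(value, "compiled schema");

    unsafe { close(handle) };

    // The zero-handle safeguard makes this a no-op.
    unsafe { close(0) };
}
```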

Finally, let's consider the ValidateJson function's implementation:

#[no_mangle]
unsafe extern "system" fn ValidateJson(
    handle: usize,
    value: *const u8,
    value_len: usize,
    sanitized_value: *mut *mut u8,
    sanitized_value_len: *mut usize,
) -> HRESULT {
    validate(
        handle,
        value,
        value_len,
        sanitized_value,
        sanitized_value_len,
    )
    .into()
}

Here again the implementation forwards to a Result-returning function for convenience:

unsafe fn validate(
    handle: usize,
    value: *const u8,
    value_len: usize,
    sanitized_value: *mut *mut u8,
    sanitized_value_len: *mut usize,
) -> Result<()> {
    // ...
}

First up, we need to ensure that we even have a valid handle, before transforming it into a JSONSchema object reference:

if handle == 0 {
    return Err(E_HANDLE.into());
}

let schema = &*(handle as *const JSONSchema);

This looks a bit tricky, but we're just turning the opaque handle into a JSONSchema pointer and then borrowing a reference to avoid taking ownership of it.

Next, we need to parse the provided JSON value:

let value = json_from_raw_parts(value, value_len)?;

Here again we use the handy json_from_raw_parts helper function and allow error propagation to be handled automatically via the ? operator.

At this point we can perform schema validation, optionally returning a sanitized copy of the JSON value:

if schema.is_valid(&value) {
    if !sanitized_value.is_null() && !sanitized_value_len.is_null() {
        let value = value.to_string();

        *sanitized_value = CoTaskMemAlloc(value.len()) as _;

        if (*sanitized_value).is_null() {
            return Err(E_OUTOFMEMORY.into());
        }

        (*sanitized_value).copy_from(value.as_ptr(), value.len());
        *sanitized_value_len = value.len();
    }

    Ok(())
} else {
    // ...
}

Assuming the JSON value checks out against the compiled schema, we check whether the caller provided pointers to receive a sanitized copy of the JSON value. If so, we call to_string to get a string representation straight from the JSON parser, use CoTaskMemAlloc to allocate a buffer to return to the caller, and copy the resulting UTF-8 string into that buffer.
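The allocate-and-copy step can be sketched with std::alloc standing in for CoTaskMemAlloc. This is for illustration only; the real API must use CoTaskMemAlloc so that callers can free the buffer with CoTaskMemFree:

```rust
// Sketch of the copy-out step, with std::alloc standing in for
// CoTaskMemAlloc/CoTaskMemFree.
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    let value = String::from(r#"{"ok":true}"#);
    let layout = Layout::array::<u8>(value.len()).unwrap();

    unsafe {
        // Allocate a buffer the size of the UTF-8 string.
        let buffer = alloc(layout);
        assert!(!buffer.is_null());

        // Copy the string's bytes into the newly allocated buffer.
        buffer.copy_from(value.as_ptr(), value.len());

        let copied = std::slice::from_raw_parts(buffer, value.len());
        assert_eq!(copied, value.as_bytes());

        dealloc(buffer, layout);
    }
}
```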

If things don't go well, we can get the compiled schema to produce a handy error message before returning E_INVALIDARG to the caller:

let mut message = String::new();

if let Some(error) = schema.validate(&value).unwrap_err().next() {
    message = error.to_string();
}

Err(Error::new(E_INVALIDARG, message.into()))

The validate method returns a collection of errors. We'll just return the first for simplicity.

And that's it! Your first Win32-style API in Rust. You can find the complete example here.