It's a good thing I'm not feverishly attached to any of the code I write, as I've probably scrapped entire sub-frameworks three times over in the process of building, then learning, then rebuilding my graphics and content pipeline with the SharpDX libraries. I can definitely say XNA makes things neat and tidy for development, and I think I'll stick with it if there's anything in the near future I just want to hash together to prototype something. At any rate, the good news is I've gotten back to where I was with XNA, only now it's built on a much cleaner framework with a newer underlying graphics API, and a snazzy deferred-rendering pipeline with a handful of associated flexible utility classes for multiple-render-target functionality.
The main intent of this post is to help out those trying to move from XNA to a lower-level yet still managed graphics library. What follows after the jump are some of the things I had to replace and rebuild once I pulled the "using Microsoft.Xna" statements from my code. It's mostly a high-level look; I'll go into more depth (and more code) on specific issues down the line.
The GraphicsInterface

Primarily, I had to create a new graphics interface for the game. I had learned from the initial build that I wanted a more flexible interface in case I ever needed to change the internal workings of the graphics object, as my XNA codebase had calls into the heart of the XNA GraphicsDevice all over the place (forget "loose coupling", this was a spiderweb of couplings). My new goal was to be able to hand anything that needed graphics information a reference to the new GraphicsInterface object, and define some useful properties that could be modified on the backend if necessary (and it's already been necessary!) while still giving access to the tools required to render objects and build buffers.
There are many tutorials on initializing a Direct3D 11 device, swap chain, and collection of depth/stencil/back buffers; RasterTek has a great tutorial series, and the SharpDX forums have folks who've replicated them explicitly in C#/SharpDX. All of the initialization is wrapped up in the GraphicsInterface's private methods, and I left access to the Device, the ImmediateContext (really just to save typing, as you can get that context from the device anyway), and some methods to turn depth-buffering on and off (handy for rendering 2D quads to the screen).
The great thing about this class is that once you've successfully initialized Direct3D, you can pretty much forget about it. Just make sure it's initialized at the start of the program and properly disposed of at the end, and you're good to go.
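To give a feel for the shape of the wrapper, here's a rough sketch of such a GraphicsInterface class. All names and the constructor signature are illustrative (not the actual class), and the device/swap-chain initialization is elided since the tutorials linked above cover it in full:

```csharp
// Sketch of a GraphicsInterface wrapper (names are illustrative, not the real class).
// Device and swap-chain initialization details are elided.
using SharpDX.Direct3D11;

public class GraphicsInterface : System.IDisposable
{
    public Device Device { get; private set; }
    public DeviceContext ImmediateContext { get; private set; }

    private DepthStencilState depthEnabled;
    private DepthStencilState depthDisabled;

    public GraphicsInterface(System.IntPtr windowHandle, int width, int height)
    {
        InitializeDevice(windowHandle, width, height); // device, swap chain, buffers
        InitializeDepthStates();                       // two cached DepthStencilStates
    }

    // Handy for 2D quads: flip between the two pre-built depth states.
    public void SetDepthBufferEnabled(bool enabled)
    {
        ImmediateContext.OutputMerger.SetDepthStencilState(
            enabled ? depthEnabled : depthDisabled);
    }

    private void InitializeDevice(System.IntPtr hwnd, int w, int h) { /* ... */ }
    private void InitializeDepthStates() { /* ... */ }

    public void Dispose()
    {
        depthEnabled?.Dispose();
        depthDisabled?.Dispose();
        ImmediateContext?.Dispose();
        Device?.Dispose();
    }
}
```

Everything downstream takes a GraphicsInterface reference, so the internals can change without touching the callers.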
The ContentManager

This is what I miss most from XNA; they built a handy content pipeline. I rebuilt it in spirit more than with complete accuracy, as mine is really just an instance object with methods for loading all manner of objects from filenames into memory. SharpDX/DirectX has FromFile() methods for most of its resources, so I wrapped those up in some cleaner calls, and gave the ContentManager object a "root directory" field so I didn't have to include the full relative pathname when loading files. My OCD tendencies agree that content.LoadTexture2D("dude.jpg") is much cleaner than content.LoadTexture2D("..\\..\\Textures\\dude.jpg").
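The root-directory trick is just a path join before the load call. A minimal sketch (the member names here are illustrative, and the actual texture-loading call is elided since it depends on which SharpDX helper you use):

```csharp
// Sketch of the ContentManager's root-directory handling (illustrative names).
using System.IO;

public class ContentManager
{
    public string RootDirectory { get; set; } = "Content";

    // Resolve "dude.jpg" against the root so callers never
    // have to spell out "..\\..\\Textures\\dude.jpg".
    public string ResolvePath(string assetName) =>
        Path.Combine(RootDirectory, assetName);

    // The LoadTexture2D / LoadMesh / etc. methods wrap the SharpDX
    // FromFile-style loaders around ResolvePath(assetName) (elided here).
}
```
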
The trickiest hurdle was importing models: not only did I no longer have the content pipeline from XNA, I didn't have a Mesh class at all! The first part of the problem I fixed by adding AssImpNET to the toolbox, so I could load a whole range of filetypes. It returns a Scene object, which I then pick apart for the information I want and load into my own Mesh object: a wrapper around a few buffers (vertex, index, and instance) plus storage for an array of bone matrices and mesh properties/effects.
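Picking apart the Scene looks roughly like the sketch below. The AssImpNET API names here are from memory and have shifted between versions, so treat this as a guide rather than exact code; the buffer creation at the end is elided:

```csharp
// Sketch of pulling vertex/index data out of an AssImpNET Scene.
// API names may differ between AssImpNET versions; treat as a guide.
using Assimp;

public static class ModelLoader
{
    public static void Load(string filename)
    {
        var importer = new AssimpContext();
        Scene scene = importer.ImportFile(filename, PostProcessSteps.Triangulate);

        foreach (Mesh mesh in scene.Meshes)
        {
            // Positions and normals feed the vertex buffer...
            Vector3D[] positions = mesh.Vertices.ToArray();
            Vector3D[] normals   = mesh.Normals.ToArray();

            // ...and the flattened triangle list becomes the index buffer.
            int[] indices = mesh.GetIndices();

            // From here: copy into your own vertex struct and create the
            // Direct3D 11 Buffers (vertex, index, instance) for the Mesh wrapper.
        }
    }
}
```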
Input Handling

Input was rebuilt with the RawInput library, an event-based system that receives WM_ messages from any devices you register. My old input-handling class used the KeyboardState and MouseState objects from XNA and updated two saved states every frame whether anything was happening or not. I haven't profiled the changes, but I'm sure I'm saving some CPU overhead by only updating states when key or mouse events actually occur. The external interface is almost identical (calls like IsMouseButtonHeld(Buttons.Right) and GetMouseMovement()), as I just check updated lists of pressed vs. released keys and buttons. The input handler does clear out its "just released" and "just pressed" lists every frame to keep those properties as immediate as possible, but the "held" list persists until one of its members comes in as a "released" message. This was honestly one of the easier conversions to make.
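The pressed/released/held bookkeeping described above can be sketched in a few lines (the class and method names here are illustrative, and the RawInput message plumbing that would call OnKeyDown/OnKeyUp is omitted):

```csharp
// Minimal sketch of event-driven key tracking (illustrative names;
// the RawInput WM_ message handler would call OnKeyDown/OnKeyUp).
using System.Collections.Generic;

public class InputTracker
{
    private readonly HashSet<int> held = new HashSet<int>();
    private readonly HashSet<int> justPressed = new HashSet<int>();
    private readonly HashSet<int> justReleased = new HashSet<int>();

    // Called only when a key-down message arrives, not every frame.
    public void OnKeyDown(int key)
    {
        if (held.Add(key))        // "just pressed" only on the first down message
            justPressed.Add(key);
    }

    public void OnKeyUp(int key)
    {
        held.Remove(key);
        justReleased.Add(key);
    }

    public bool IsKeyHeld(int key) => held.Contains(key);
    public bool WasKeyJustPressed(int key) => justPressed.Contains(key);
    public bool WasKeyJustReleased(int key) => justReleased.Contains(key);

    // Called once per frame: the "just" lists stay immediate,
    // while "held" persists until a release message arrives.
    public void EndFrame()
    {
        justPressed.Clear();
        justReleased.Clear();
    }
}
```
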
The Effect Framework (aka render-pipeline prep)
The biggest hurdle I had to overcome was learning how the DX11 pipeline actually handles vertex shaders, pixel shaders, geometry/hull/compute shaders (which I'm skipping for now to keep the game playable on older hardware), buffers, and shader constants. My current implementation wraps an Effect11 object, but something very similar could be wrapped around a PixelShader and VertexShader pair. The reason I went with the Effect was two-fold: I was already used to writing .fx files (which contain a vertex and pixel shader in one file, along with pass descriptions), and the Effect11 class comes with a group of really useful variable-fetching and constant-setting methods. The DirectX API methods for setting shader constants directly rely solely on register index (so you'd better know that the diffuse texture is in slot 0 while the normal map is in slot 1), AND you need to specify which shader stage the constant belongs to. It's not prohibitively hard to handle on your own, but using the Effect11 interface makes the wrapper methods on my RenderEffect class much simpler to work with. As it stands I have about 8 overloads of RenderEffect.SetParameter() that just take a variable name (string) and the data I want to set (floats, vectors, matrices, ShaderResourceViews, etc.).
Then I created a RenderEffect.Apply() method that sets a few things for the pipeline (like the InputLayout, render targets, and blend states) and makes a call to the internal effect's Pass.Apply() method to activate all the shader stages appropriately.
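Sketched against the SharpDX Effect API, the wrapper looks roughly like this. The RenderEffect class, its fields, and the specific overloads shown are illustrative (not the actual implementation), but the GetVariableByName/AsMatrix-style calls are the Effect11 methods being described:

```csharp
// Sketch of the RenderEffect wrapper over SharpDX's Effect class
// (wrapper names and fields are illustrative).
using SharpDX;
using SharpDX.Direct3D11;

public class RenderEffect
{
    private Effect effect;          // compiled from an .fx file
    private InputLayout inputLayout;
    private BlendState blendState;

    // A few of the ~8 SetParameter overloads: name-based lookup instead of
    // memorized register slots and per-stage binding.
    public void SetParameter(string name, float value) =>
        effect.GetVariableByName(name).AsScalar().Set(value);

    public void SetParameter(string name, Matrix value) =>
        effect.GetVariableByName(name).AsMatrix().SetMatrix(value);

    public void SetParameter(string name, ShaderResourceView value) =>
        effect.GetVariableByName(name).AsShaderResource().SetResource(value);

    // Fix up pipeline state, then let the effect pass bind its shader stages.
    public void Apply(DeviceContext context)
    {
        context.InputAssembler.InputLayout = inputLayout;
        context.OutputMerger.SetBlendState(blendState);
        // render targets would be set here as well

        effect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply(context);
    }
}
```

The payoff is at the call site: SetParameter("DiffuseMap", texture) instead of remembering which register slot and which shader stage the texture lives in.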
The Game Class

Since I no longer had XNA's Game class, I wrote an extendable Game class that follows a similar pattern (it has the same Initialize, LoadContent, Update, Draw, and UnloadContent methods). In my engine testing I used the base class and fit all the work into it, but as I move toward a framework-vs-game division I'll likely extend the base class for the actual game to keep things cleaner. I'm using a wrapped Stopwatch class (called Clock) to produce the timing values XNA gives you through the GameTime interface, but I've kept it incredibly low-key for now: the main program just grabs the "milliseconds since last checked" value from my Clock class and passes that to game.Update every frame.
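The Clock is little more than a Stopwatch with a remembered tick count. A minimal sketch of that "milliseconds since last checked" idea (method name is illustrative):

```csharp
// Minimal sketch of a Stopwatch-backed Clock (the real class is similarly small).
using System.Diagnostics;

public class Clock
{
    private readonly Stopwatch stopwatch = Stopwatch.StartNew();
    private long lastTicks;

    // "Milliseconds since last checked" -- the one value game.Update needs.
    public double GetElapsedMilliseconds()
    {
        long now = stopwatch.ElapsedTicks;
        double elapsed = (now - lastTicks) * 1000.0 / Stopwatch.Frequency;
        lastTicks = now;
        return elapsed;
    }
}
```
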
I haven't rebuilt font or sprite rendering, though several key pieces of the framework are already there (like a RenderPlane class for an easily-drawn 2D billboard quad). With the current development on SharpDX, there's a toolkit on its way that handles both sprites and fonts, so we'll see whether I build that for myself first, or continued development makes my life that much easier :)
That's about it in a nutshell. I'm currently patching up the holes I left in my level editor by pulling out the XNA references, and then it'll be full steam ahead!