Wednesday, March 26, 2014

Cert-ifiably IISane

Time for an annual web site certificate renewal. No problem, we've done this dozens of times before. Only one small difference this year - it's IIS7 on a Windows Server 2008 machine instead of IIS6 on Windows Server 2003. That shouldn't matter, right?

Sadly, yes, it does.

I opened the IIS manager, navigated to the root node for the machine, and selected "Server Certificates." There, I right-clicked and selected "Renew..." No special options to choose from, so how complicated could it be? Well, it turns out that there is a difference. When I opened the request file it was quite a bit larger than I was used to seeing. Not being able to read hex, I decided that was probably just due to it being a 64-bit machine instead of our previous 32-bit OSes. I uploaded the request, logged on to the certificate authority, and approved my own request. That's just how we roll around here.

Then, back on the server, I downloaded the new certificate and completed the request. I selected the new certificate for our web application's HTTPS binding and immediately started getting some interesting event log messages:
Log Name:      System
Source:        Schannel
Date:          3/25/2014 2:03:14 PM
Event ID:      36874
Task Category: None
Level:         Error
Keywords:      
User:          SYSTEM
Computer:      [elided]
Description:
An TLS 1.0 connection request was received from a remote client application, but none of the cipher suites supported by the client application are supported by the server. The SSL connection request has failed.
And:
Log Name:      System
Source:        Schannel
Date:          3/25/2014 2:03:14 PM
Event ID:      36888
Task Category: None
Level:         Error
Keywords:      
User:          SYSTEM
Computer:      [elided]
Description:
The following fatal alert was generated: 40. The internal error state is 1205.
Whoa - what's going on? I just renewed the certificate as-is, with no options and no way to change anything, and it modified what the certificate can do? Wonderful. After just a bit of searching I found Robert Lucero's post on Certificate Renewals in IIS 7. Basically, don't renew your certificates through IIS. Either create entirely new requests or use the certificates MMC snap-in.

The only difference we could find when inspecting the certificates was that the new one had only a 1024-bit key, compared to the 2048-bit key we'd had previously. There must have been some other flag under the covers we couldn't see that limited its permitted usage.
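
If you ever want to double-check a renewed certificate's key size yourself without squinting at the MMC, a few lines of C# will do it. This is just a sketch - the file path is a placeholder:
using System;
using System.Security.Cryptography.X509Certificates;

class CertCheck
{
    static void Main()
    {
        // Load the renewed certificate from disk (the path here is made up).
        X509Certificate2 cert = new X509Certificate2(@"C:\temp\renewed.cer");

        // Print the public key size in bits - e.g. 1024 vs. 2048.
        Console.WriteLine(cert.PublicKey.Key.KeySize);
    }
}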

Your mileage may vary - test out the process on a different system before you jump in with both feet. At least it was easy to fix.

Monday, October 29, 2012

Please properly dispose old knowledge

Greetings!  I have a confession to make.  When I learn something, I tend to expect that knowledge to remain inviolate and don't continually check to be sure that it hasn't changed.  This is a Very Bad Thing®©™ in the programming world.  Coding conventions and guidance change all the time.  Still, as humans, we are loath to replace old, time-worn techniques with new ones.

Disposing done wrong

For example, I learned from the .NET Framework 2.0's documentation that the IDisposable pattern consists of a "public void Dispose()" method, a "protected virtual void Dispose(bool disposing)" method, and a finalizer.  The first method calls the second with true and then removes the object from the finalization queue via GC.SuppressFinalize.  The finalizer calls the second with false.  All actual disposing happens in the protected method.
using System;

public class ResourceUser : IDisposable
{
    bool m_disposed;

    // Finalizer - runs only if Dispose was never called.
    ~ResourceUser()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // remove this object from the finalization queue
    }

    protected virtual void Dispose(bool disposing)
    {
        if (m_disposed) return;
        if (disposing)
        {
            // clean up managed resources
        }
        // clean up unmanaged resources
        m_disposed = true;
    }
}
Pretty simple, right?  Except that it's totally and utterly wrong and evil.

Will o'the Disp[ose]

Well, at least the finalizer part is.  If you read the latest Microsoft guidance you'll see more complete information on the finalizer and its use - in particular, it's now much clearer than before that the finalizer hook is necessary only to clean up unmanaged resources.  If your class includes only managed resources that must be disposed, you should not have a finalizer at all.  Rather, just create the two Dispose method overloads to push through the dispose call and clean up on command.

Finally - let's get it right

The reason for this is rather simple - adding a finalizer significantly increases the load on the garbage collector.  Creating an instance of an object with a finalizer adds it to the global finalization queue, so its memory will not be collected until the finalizer has executed.  Since the timing of finalization is non-deterministic, your theoretically disposable objects remain alive in memory longer than they have to whenever Dispose is not properly called.  Overall, this greatly increases memory management overhead.

So - the right way to go about it is not to create a finalizer, like so:
using System;

public class ResourceUser : IDisposable
{
    bool m_disposed;

    public void Dispose()
    {
        Dispose(true);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (m_disposed) return;
        if (disposing)
        {
            // clean up managed resources
        }
        m_disposed = true;
    }
}
Note that I also removed the comment "clean up unmanaged resources" after the "if (disposing)" code block.  This is because you should not take this shortcut if your class directly includes unmanaged resources.  Of course, most classes shouldn't include unmanaged resources at all - you should wrap each one in a very thin, dedicated subclass of SafeHandle and then use an instance of that in your class.  The SafeHandle classes deal with the unmanaged cleanup, and the majority of your classes never need to see the broad side of a finalizer again.
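
To make that concrete, here's a minimal sketch of the SafeHandle approach.  The MyNativeHandle class and the handle it wraps are hypothetical stand-ins, though CloseHandle is the real Win32 call:
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Hypothetical wrapper around a native OS handle released by Win32's CloseHandle.
sealed class MyNativeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public MyNativeHandle() : base(true) { }

    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool CloseHandle(IntPtr handle);

    // The runtime guarantees this runs (during finalization if need be),
    // so the class that owns the handle never needs its own finalizer.
    protected override bool ReleaseHandle()
    {
        return CloseHandle(handle);
    }
}

public class SafeResourceUser : IDisposable
{
    readonly MyNativeHandle m_handle = new MyNativeHandle();
    bool m_disposed;

    public void Dispose()
    {
        Dispose(true);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (m_disposed) return;
        if (disposing)
        {
            // The SafeHandle is itself a managed, disposable object.
            m_handle.Dispose();
        }
        m_disposed = true;
    }
}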

If you're working in a multithreaded environment, you may want to add some extra locking around the inner workings of your protected Dispose method, of course, but I wanted to leave something as an exercise for the reader.

Happy coding!

Friday, December 9, 2011

Happy Holidays!

So - you want a happy holidays message for everyone you know but don't have the time to personalize it for each recipient?  Try this newfangled song writer class!  It's C# but should be easy to port to Java, C++, and pretty much any other object-oriented language.  Enjoy, and have a wonderful holiday season!

Holiday Song Writer

Sunday, July 24, 2011

Please COM down

As part of our VB to C# conversion, we created an assembly compiled against the .NET 4.0 Framework.  One method in the converted class had a signature like:

void MethodName(ref object p1, ref object p2, ref object p3, ref object p4)

The first two parameters are input parameters and the latter two are outputs.  p1 and p3 are really arrays of strings, and p2 and p4 are two-dimensional arrays of strings, 16- and 32-bit ints, date/times, and potentially null values.  The ASP code worked fine on top of this.

Then, we realized we wanted to use this component from an ASP.NET site currently compiled against the .NET 3.5 Framework.  We weren't using any 4.0-specific code, so we simply changed the target framework, deployed, and regasm'ed the new assembly.  Strangely, when the ASP code tries to call the method above, it occasionally chokes, claiming "Variable uses an Automation type not supported in VBScript" followed by the method name.  What's odd is that it doesn't always blow up - only sometimes.  Also, the method signature and code didn't change.  Why would the .NET 4.0 framework and regasm work fine while the .NET 3.5 framework with the 2.0 regasm does not?

Well, it's still a mystery, but the one clue is this - the second input to this function is an output param of a previous function.  The nulls that it contains aren't really "null" but DBNull.Value.  Perhaps the 2.0 regasm is somehow typing these differently than the 4.0 one does...
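
To illustrate what one of those output arrays looks like by the time it comes back around as an input - the values here are made up, but the DBNull distinction is the real one:
using System;

class DBNullDemo
{
    static void Main()
    {
        // A two-dimensional object array like p2/p4, mixing the value kinds above.
        object[,] data = new object[2, 2];
        data[0, 0] = "a string";
        data[0, 1] = (short)16;       // a 16-bit int
        data[1, 0] = DateTime.Now;    // a date/time
        data[1, 1] = DBNull.Value;    // looks empty, but is not a true null

        // DBNull.Value is a distinct type, and we suspect it marshals over COM
        // differently between the two regasm versions.
        Console.WriteLine(data[1, 1] == null);   // False
        Console.WriteLine(data[1, 1] is DBNull); // True
    }
}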

We're open to suggestions but at this point will compile the assembly as 4.0 and have the ASP.NET 3.5 site access it via COM, unfortunately.

Tuesday, May 3, 2011

Conversion excursion

My team's applications rely on a large array of assemblies, some C++, some VB6, and some C#. The front end is largely a web site that is a mix of ASP and ASP.NET. We also have some Windows services - mostly C++ with a few newer ones in C#. A few of the C# assemblies are COM-enabled to be usable from the C++/VB code, ASP pages, and services, while others are used only by the ASP.NET pages.

We're trying to migrate away from the VB6 code so I helped compile a list of which of the assemblies are in use by which other components. Our first pass will be to perform a direct port with COM wrappers so existing code can simply point at the new object with no API or behavior change. As we do so, we'll make a note of places where we think there are existing bugs or room for performance improvements.

Have you ever performed a massive migration or upgrade? How did your team go about it? What pitfalls did you encounter as you went? With the one assembly we've ported so far, we found that trying to fix bugs at the same time as we migrated to the new code caused many more issues than it solved.

Friday, September 17, 2010

On-time, thoroughly tested, and on budget: pick two

I'm sure many of you in the corporate world know the age-old conundrum of the post's subject. It's difficult to deliver all the right fixes on the original schedule and budget for pretty much any project. No matter how much we expect the unexpected, there will always be even more unexpected issues to derail the schedule.

So how do you decide when to ship? We are performing a major rewrite of a component for our upcoming release and discovered an existing issue that would be very costly to fix. Now, we have three options: (1) fix it poorly, likely breaking something else in the process, (2) fix it correctly and delay the release, or (3) don't fix it in this release but slate it for future work.

My manager and I opted for number three after careful consideration. The driving factors were the value of the other improvements to the component even without this fix and the time constraints on other work included in the release. As soon as slipping the timeline became a non-starter, we were left with one very dangerous choice and one safer one. Since we're working with patient data and using the information to drive patient care, safety almost always wins.

How do your teams decide when to hack around, when to slip, and when to delay?

Tuesday, August 3, 2010

Flow of control

I'm sure most experienced programmers already know that using exceptions for flow of control in a program is highly inefficient. The time it takes to unwind the stack alone is significant, and compilers don't tend to optimize exception paths. The question, then, is what to replace them with. Do you set a status and return an empty value or a null? Do you always return a "result" object that includes both the result state and any return values?

My team is working on updating our shared components' architecture and is considering this question as we go. We've decided to move toward returning a result object and having "out" parameters for most of the new methods we're writing. While not ideal, this does mean we always use the same type of return value. We could also create subclasses of that return type with more properties instead of using out parameters.
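
As a minimal sketch of the shape we're converging on - the OperationResult and RecordStore names are hypothetical, not our actual API:
using System;

// Status travels in the result object; the actual value comes out the out param.
public class OperationResult
{
    public bool Succeeded { get; set; }
    public string ErrorMessage { get; set; }
}

public class RecordStore
{
    public OperationResult TryGetRecord(int id, out string record)
    {
        record = null;
        if (id <= 0)
        {
            return new OperationResult { Succeeded = false, ErrorMessage = "Invalid id" };
        }
        record = "record-" + id;
        return new OperationResult { Succeeded = true };
    }
}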

At the same time, I'm working on creating some unit tests for our new code, building both the "happy path" tests and the tests that supply null, empty, and otherwise invalid values. I do like the idea of not throwing exceptions unless something truly exceptional has occurred, but we can swing too far in that direction, getting ourselves into unknown states or overly complicating the program logic to the point where it constantly checks system state instead of simply executing. Trade-offs all around, I suppose.
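
For what it's worth, here's a sketch of how those two kinds of tests look against the hypothetical RecordStore above, assuming NUnit-style attributes:
using NUnit.Framework;

[TestFixture]
public class RecordStoreTests
{
    [Test]
    public void GetRecord_ValidId_Succeeds()
    {
        // Happy path: a valid id returns success and a record.
        string record;
        OperationResult result = new RecordStore().TryGetRecord(42, out record);
        Assert.IsTrue(result.Succeeded);
        Assert.IsNotNull(record);
    }

    [Test]
    public void GetRecord_InvalidId_FailsWithoutThrowing()
    {
        // Invalid input: a failure result, not an exception.
        string record;
        OperationResult result = new RecordStore().TryGetRecord(-1, out record);
        Assert.IsFalse(result.Succeeded);
        Assert.IsNull(record);
    }
}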