C++ #import and x64 builds

I already wrote earlier on 32/64-bit issues with Visual Studio. The problems are not frequent, but when they happen they are pretty confusing. Here is another one from today.

C++ code is simple:

    #import "libid:59941706-0000-1111-2222-7EE5C88402D2" raw_interfaces_only no_namespace

    CComPtr<IObject> pObject;
    BYTE* pnData; // points at a caller-side buffer (initialization omitted for brevity)
    ATLENSURE_SUCCEEDED(pObject->Method((ULONG_PTR) (BYTE*) pnData));

A COM method returns a pointer to data – pretty straightforward, what could have gone wrong?

The COM server vendor designed the library for easy .NET integration and defined the pointer argument as an integer value, expecting the values to be used further with the System.Runtime.InteropServices.Marshal class.

32-bit builds worked well, while 64-bit builds experienced memory access violations. An attempt to consume the COM server from a C# project showed the same problem: an unexpected exception in the call.

The problem is that when the compiler imports a COM type library by LIBID, it picks up the 32-bit type library even when building 64-bit code. This affects both the C++ #import "libid:..." directive and a .NET COM reference added by the identifier.

The type library imports as the following IDL in 32-bit builds:

                [in] unsigned long bufPtr);

64-bit builds are supposed to get the following import:

                [in] uint64 bufPtr);

Effectively though, 64-bit builds get the 32-bit import, and the argument which is supposed to carry the cast pointer value is truncated to 32 bits, the ULONG type. The cast to ULONG_PTR in 64-bit C++ code is, of course, not helpful, since the value is trimmed anyway to fit the IDL argument type.

The same happens with a C# build.

It was the developer's choice to publish an integer-typed argument; they wanted this to be "better" and ended up in a bitness mess. If the argument had remained a pointer type in the IDL, then even the incorrect bitness would not necessarily result in value truncation.

Altogether, it is unsafe to import [an untrusted] type library using LIBID when it comes to 64-bit builds: the 32-bit library is the one to be taken, and the import can come out incorrect. Instead, such a build should explicitly point to the 64-bit type library, for example:

    #if defined(_WIN64)
        #import "Win64\ThirdParty.DLL" raw_interfaces_only no_namespace
    #else
        //#import "libid:59941706-0000-1111-2222-7EE5C88402D2" raw_interfaces_only no_namespace
        #import "Win32\ThirdParty.DLL" raw_interfaces_only no_namespace
    #endif

Too bad! libid looked so nice and promising.

Crashes in Visual C++ 2012 vs. Visual C++ 2010

Great news for those suffering from Visual Studio 2010 IDE crashes that lose recent source code changes. Visual Studio 2012 is way more stable (even with the Visual Studio 2010 Platform Toolset!) and its crashes do not lose editor changes.

The worst thing you seem to be getting is an access violation or stack overflow issue in a child process, so that the crash does not kill the IDE itself. The problems seem to be all around cpfe.dll, and they are much less annoying.

If you happen to experience anything of the kind, be sure to vote for the bug report on MS Connect: Memory access violation and unhandled exception in cpfe.dll while editing C++ source code with Visual Studio 2012. Microsoft is still accepting feedback on this release and will hopefully resolve the problem in future VS 2012 updates.


A colleague pointed out that the code snippet in the previous post misuses ATL's ATLENSURE_SUCCEEDED macro, making it [possibly] evaluate its argument twice in case of failure, that is, when the argument evaluates to a failure HRESULT code.


It does things in a straightforward way. For a code line such as

    ATLENSURE_SUCCEEDED(pFilterGraph.CoCreateInstance(CLSID_FilterGraph));

it is doing "let's CoCreateInstance the thing and, if it fails, let's CoCreateInstance it again to find out the error code". Disassembly of such a call shows this clearly.

This is exactly another spin of the story that previously happened with the HRESULT_FROM_WIN32 macro, and possibly a number of others. With it being originally a macro, the SDK offered an option to override the definition by pre-defining INLINE_HRESULT_FROM_WIN32. This way a user might explicitly request a safer definition while still leaving legacy code to live with the macro. See a more detailed story on this in Matthew's blog.

A tricky thing is that with successful execution the problem does not come up. In case of failure, it depends on the functions called: some will just repeat the error code, some will return a different code on the second run, and some might create less desired and less expected consequences. So you can find yourself having written quite some code before you even suspect a problem.

Having identified the issue, there are a few solutions.

1. First of all, the original ATLENSURE_SUCCEEDED macro can still be used, provided that you don't pass expressions as arguments.

This is going to do just fine:

    const HRESULT nCoCreateInstanceResult = pFilterGraph.CoCreateInstance(CLSID_FilterGraph);
    ATLENSURE_SUCCEEDED(nCoCreateInstanceResult);

2. The second straightforward way is to replace the original definition right in the ATL code (boo, clumsy).

3. As the ATL code checks whether the macro is already defined, and skips its own definition in that case, it is possible to inject a safer private definition before including ATL headers (which would typically mean putting the define into stdafx.h):

#define ATLENSURE_SUCCEEDED(x) do { const HRESULT nResult = (x); ATLENSURE_THROW(SUCCEEDED(nResult), nResult); } while(0)

#include <atlbase.h>
#include <atlstr.h>

Pre-evaluating the argument into a local variable resolves the original multi-evaluation problem.

4. A new inline function might be defined on top of the original macro, to be used instead, which is free from the problem:


Either way, the corrected code compiles into a single argument evaluation and throws an exception with the failure code immediately.

Also, vote for the suggestion on Microsoft Connect. The issue is marked as fixed in a future version of Visual Studio.

Your ATL service C++ project might need some extra care after upgrade to Visual Studio 2010

If you dare to convert your C++ ATL service project created with an earlier version of Visual Studio to version 2010, as I recently did, you might find yourself surprised at why the hell the bloody thing does not work anymore as a regular executable.

After passing compiler/linker and SDK update issues, which you possibly might have too, the started executable will stumble on an ATL error/exception with CO_E_NOTINITIALIZED (0x800401F0, "CoInitialize has not been called."). Luckily, the error code is good enough for quickly locating the problem: the one you trusted, that is ATL, introduced a small improvement which is good for running as a service, but it no longer initializes COM if you run your .EXE in application mode.

The bug has been on MS Connect since August 3, 2010, closed as if it is going to be fixed some time in the future, when the fix is propagated to end developers. If you are in a rush and would like to write some code before that event, here are the details.

Previously, COM was initialized right in the module constructor, in CAtlExeModuleT; the CAtlServiceModuleT class just inherited it from there. Later on, someone smart decided that it was not so cool and moved the initialization to a later point, into CAtlExeModuleT::WinMain. Well, this makes sense, as you might (a) end up not needing COM at all, or (b) want to do some important things before even initializing COM.

Unfortunately, the fact that CAtlServiceModuleT inherits from and relies on the base class was not paid much attention. CAtlServiceModuleT no longer gets COM initialization from the constructor, and since CAtlServiceModuleT::WinMain is overridden in full, it does not receive initialization from the new location either. So it does not receive it at all, unless run as a service, in which case the code execution branch still looks healthy here and exhibits another issue soon after.

To resolve the problem, the fragment in CAtlServiceModuleT::Start needs the correction shown below (within the #pragma region):

        if (m_bService)
        {
            // ...
            if (::StartServiceCtrlDispatcher(st) == 0)
                m_status.dwWin32ExitCode = GetLastError();
            return m_status.dwWin32ExitCode;
        }

        // local server - call Run() directly, rather than
        // from ServiceMain()
        #pragma region Run wrapped by InitializeCom/UninitializeCom
        // FIX: See http://connect.microsoft.com/VisualStudio/feedback/details/582774/catlservicemodulet-winmain-coinitialize-not-called-800401f0
        HRESULT hr = T::InitializeCom();
        if (FAILED(hr))
        {
            // Ignore RPC_E_CHANGED_MODE if CLR is loaded. Error is due to CLR initializing
            // COM and InitializeCOM trying to initialize COM with different flags.
            if (hr != RPC_E_CHANGED_MODE || GetModuleHandle(_T("Mscoree.dll")) == NULL)
                return hr;
        }
        else
            m_bComInitialized = true;
        m_status.dwWin32ExitCode = pT->Run(nShowCmd);
        if (m_bComInitialized)
            T::UninitializeCom();
        #pragma endregion

        return m_status.dwWin32ExitCode;

Going further from there, the introduced optimization also removed COM initialization from the main process thread's module Run function. Previously provided through the module constructor as well, it is no longer there. So if you are doing something in the application's Run when the application is set to run as a service but is executed as an application (where you might want to start the application as a sort of helper, or otherwise in a specific mode), you need COM initialization there too.


Build Incrementer Add-In for Visual Studio: Latest Visual Studio Versions

If you share the concept (as I do) that every build should have a unique file version stamp in it, for a simple purpose – at least to distinguish between different versions of the same binary – then a helpful tool that automatically increments the fourth number of FILEVERSION's file version is something you cannot live without. After going through several fixes and updates, it is finally available for download here.

The last issue in particular was that projects residing in a solution folder were not found by the add-in with Visual Studio 2008. Why? The OnBuildProjConfigBegin event provides you with a unique project name string in its Project argument, but it appears that it is only good enough as a quick lookup argument with Visual Studio 2010.

    // EnvDTE::_dispBuildEvents
    STDMETHOD(OnBuildBegin)(_EnvDTE_vsBuildScope Scope, _EnvDTE_vsBuildAction Action) throw();
    STDMETHOD(OnBuildDone)(_EnvDTE_vsBuildScope Scope, _EnvDTE_vsBuildAction Action) throw();
    STDMETHOD(OnBuildProjConfigBegin)(BSTR Project, BSTR ProjectConfig, BSTR Platform, BSTR SolutionConfig) throw()
    {
        _Z4(atlTraceCOM, 4, _T("Project \"%s\", ProjectConfig \"%s\", Platform \"%s\", SolutionConfig \"%s\"\n"), CString(Project), CString(ProjectConfig), CString(Platform), CString(SolutionConfig));
        // NOTE: const CString& cast forces compiler to process statement as variable definition rather than function forward declaration
        CProjectConfiguration ProjectConfiguration((const CString&) CString(Project), CString(ProjectConfig), CString(Platform), CString(SolutionConfig));
        CRoCriticalSectionLock DataLock(m_DataCriticalSection);
        // NOTE: Check the project on the first run only (to skip multiple increments in batch build mode)
        if(!Project || m_VersionMap.Lookup(ProjectConfiguration))
            return S_FALSE;
        _Z3(atlTraceGeneral, 3, _T("Checking project \"%s\"...\n"), CString(Project));
        // Checking the project is of C++ kind
        CComPtr<EnvDTE::Project> pProject = GetProject(CComVariant(Project));
        // ...
    }

When the project is in a folder, Projects::Item can just fail when you look up the element interface, in which case you have to walk the collection yourself, additionally taking SubProjects into account. Visual Studio 2010 is one step smarter and gives you the thing right from the start.

Eventually, the add-in is here. Its job is to go to the .RC file and increment the file version each time you build the binary. It reports the action to the build output window.

To install the add-in:

Or, alternatively, use the installation file VisualStudioBuildIncrementerAddInSetup.msi (version 1.0.4, 379K) to have it done for you in a user-friendly way. Partial source code, a Visual Studio 2010 project, is also there in the repository.

Attributed ATL: Accessing BLOB with ISequentialStream

Before attributed ATL was deprecated, it was a convenient way to access databases using attributed classes on top of OLE DB Consumer Templates. Doesn't it look nice?

    [
        db_command("SELECT ServerData FROM Server WHERE Server = ?")
    ]
    class CGetServerData
    {
    public:
        [ db_param(1) ] LONG m_nServer;
        [ db_column(1, length = "m_nDataLength") ] ISequentialStream* m_pDataStream;
        DBLENGTH m_nDataLength;
    };

It worked great with Visual Studio .NET 2003, but it failed to work with later releases. There are questions on the internet about the problem, but there are few answers, if any. As I recently had to convert a project from the 2003 version of the compiler to Visual Studio 2008, the problem finally had to be resolved.