
Using PowerShell with Clustered MSMQ

October 25, 2011

My good friend Lars Wilhelmsen asked me a couple of days ago if I knew of a way to automate creating Queues on a clustered instance of MSMQ. Normally, a cluster of machines using Microsoft Cluster Service (MSCS) with MSMQ in Windows Server 2008 and up will have several MSMQ instances installed. For example, a 2-node cluster will typically have 3 MSMQ instances: 2 local ones (one for each node) and the clustered instance, which will be configured/running on a single node at a time.

So what usually happens is that if you try using System.Messaging from PowerShell on an MSMQ cluster, you’ll end up creating the queues in the local instance, not the clustered instance. And MessageQueue.Create() doesn’t allow you to specify a machine name, either.

The trick to getting this to work lies in KB198893 (Effects of checking “Use Network Name for Computer Name” in MSCS). Basically all you have to do is set the _CLUSTER_NETWORK_NAME_ environment variable to the name of the Virtual Server that hosts the clustered MSMQ resource:

$env:_CLUSTER_NETWORK_NAME_ = 'myclusterMSMQ'
[System.Messaging.MessageQueue]::Create('.\Private$\MyQueue')

One thing to keep in mind, though: You have to set this environment variable before you attempt to use any System.Messaging feature; otherwise it will simply be ignored because the MSMQ runtime will already be bound to the local MSMQ instance.
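
Putting the two pieces together, here’s a minimal sketch of what the whole script might look like; the Exists() guard is my own addition, and the cluster network name and queue path are the same placeholders used above:

# Set the variable FIRST, before System.Messaging is loaded or used, so the
# MSMQ runtime binds to the clustered instance rather than the local one.
$env:_CLUSTER_NETWORK_NAME_ = 'myclusterMSMQ'

# Only now load System.Messaging and touch MSMQ.
Add-Type -AssemblyName System.Messaging

$path = '.\Private$\MyQueue'
if (-not [System.Messaging.MessageQueue]::Exists($path)) {
    [System.Messaging.MessageQueue]::Create($path) | Out-Null
}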

Categories MSMQ

MSMQ and External Certificates

September 30, 2011

I’ve spent some time lately playing around with MSMQ, Authentication and the use of External Certificates. This is an interesting scenario, but one that I found to be documented in relatively unclear terms, with somewhat conflicting information all over the place. Plus, the whole process isn’t very transparent at first.

Normally, when you’re using MSMQ Authentication, you’ll have MSMQ configured with Active Directory Integration. With this setup, MSMQ will normally create an internal, self-signed certificate for each user/machine pair and register it in Active Directory. The sender then requests that the message be sent authenticated, which causes the local MSMQ runtime to sign the message body plus a set of properties using this certificate, and to include the certificate’s public key and the signature (encrypted with the certificate’s private key) alongside the message.

The MSMQ service where the authenticated queue is defined would then verify the signature using the certificate public key to ensure the message wasn’t tampered with. If that check passed, it could then look up a matching certificate in AD, allowing it to identify the user sending the message for authentication (there are a few more checks in there, but this is the basic process).

This basic setup runs fairly well and is easy to test. Normally all you have to do here is:

  • Mark the target queue as authenticated (the Authenticated option in the queue’s properties).
  • Have the sender request authentication. For System.Messaging, this means setting the Message.UseAuthentication property, while in the C/C++ API it would mean setting the PROPID_M_AUTH_LEVEL property to an appropriate value (normally MQMSG_AUTH_LEVEL_ALWAYS). A minimal sketch of this baseline setup follows the list.
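
For reference, here’s a small System.Messaging sketch of that baseline setup using the internal certificate; the queue path and message body are hypothetical, and with AD integration MSMQ picks up the internal certificate automatically:

using System.Messaging;

// Create (or open) a queue that only accepts authenticated messages,
// then send a message signed with the internal MSMQ certificate.
const string path = @".\Private$\AuthTest";   // hypothetical queue
MessageQueue queue = MessageQueue.Exists(path)
    ? new MessageQueue(path)
    : MessageQueue.Create(path);
queue.Authenticate = true;          // queue rejects unauthenticated messages

Message msg = new Message("hello");
msg.UseAuthentication = true;       // sign the message on send
queue.Send(msg);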

External Certificates

So what if you didn’t want to use the MSMQ internal certificate, for some reason? Well, MSMQ also allows you to use your own certificates, which could, for example, be generated by a Certificate Server CA installation trusted by your domain, or by some other well-known Certification Authority. To do this you need to:

  1. Register your certificate + private key on the sender machine for the user that is going to be sending the messages. The docs all talk about registering the certificate in the “Microsoft Internet Explorer Certificate Store”, which confused me at first, but it turns out this just means the certificate normally goes in the CurrentUser\MY store (see the lookup sketch right after this list).
  2. Register the certificate (public key only) on the corresponding AD object so that the receiving MSMQ server can find it. You can do this using the MSMQ admin console from the MSMQ server properties window, or using the MQRegisterCertificate() API.
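
As a side note, here’s a hedged sketch of how the sender can later pull that certificate back out of the CurrentUser\MY store from .NET; the subject name is a hypothetical placeholder:

using System.Security.Cryptography.X509Certificates;

// Look up the external certificate in the current user's Personal ("MY") store.
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
try
{
    X509Certificate2Collection found = store.Certificates.Find(
        X509FindType.FindBySubjectName, "MsmqTestCert", false); // hypothetical subject
    X509Certificate2 cert = found.Count > 0 ? found[0] : null;
    // cert.GetRawCertData() is what gets handed to MSMQ later on.
}
finally
{
    store.Close();
}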

I decided to try this using a simple self-signed certificate generated with MakeCert (a tool I hate having to use because I can never remember the right arguments!). I created my certificate in CurrentUser\MY, then registered the public part of the cert using the MSMQ management console.

The next part was changing my sender application code to use the external certificate, which is done by setting MessageQueue.SenderCertificate (or the PROPID_M_SENDER_CERT property). It wasn’t immediately obvious in what format the certificate needed to be provided, but it turns out it’s just the DER-encoded representation of the X.509 certificate. In .NET, this means you can just use X509Certificate.GetRawCertData()/X509Certificate2.RawData, or the equivalent X509Certificate.Export() with the X509ContentType.Cert value. Here’s the basic code:

// Assumes: using System.Messaging; and
//          using System.Security.Cryptography.X509Certificates;
X509Certificate cert = ...;                       // the external certificate

Message msg = new Message();
msg.Body = data;
msg.UseDeadLetterQueue = true;                    // undeliverable messages go to the DLQ
msg.UseAuthentication = true;                     // request an authenticated (signed) send
msg.SenderCertificate = cert.GetRawCertData();    // DER-encoded X.509 certificate
queue.Send(msg, MessageQueueTransactionType.Single);

I ran this on my Windows 7 machine and all I got was this error:

System.Messaging.MessageQueueException (0x80004005): Cryptographic function has failed.
   at System.Messaging.MessageQueue.SendInternal(Object obj, MessageQueueTransaction internalTransaction, MessageQueueTransactionType transactionType)
   at System.Messaging.MessageQueue.Send(Object obj, MessageQueueTransactionType transactionType)
   at MsmqTest1.Program.Main(String[] args) in C:\tomasr\tests\MSMQ\MsmqTest1\Program.cs:line 73

What follows is a nice chase down the rabbit hole in order to figure it out.

The Problem

After bashing my head against this error for a few hours, I have to admit I was stumped and had no clue what was going on. My first impression was that I had been specifying the certificate incorrectly (false) or that I was deploying my certificate to the wrong stores. That wasn’t it, either. Then I thought perhaps there was something in my self-signed certificate that was incorrect, so I started comparing its properties with those of the original MSMQ internal certificate used in the first test. As far as I could see, the only meaningful difference was that the MSMQ one had 2048-bit keys while mine had 1024-bit ones, but that was hardly relevant at this point.

After some spelunking and lots of searching I ran into the MQTrace script [1]. With that in hand, I enabled MSMQ tracing and noticed a 0x80090008 error code, which means NTE_BAD_ALGID ("Invalid algorithm specified"). So obviously MSMQ was somehow using a "wrong" algorithm, but which one, and why?

In the middle of all this I decided to try something: I switched my sender application to delivering the message over HTTP instead of the native MSMQ TCP-based protocol; all it required was changing the Format Name used to reference the queue. This failed with the same error, but the MSMQ log gave me another bit of information: The NTE_BAD_ALGID error was being returned by a CryptCreateHash() function call!
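
For context, switching transports really is just a matter of the format name used to open the queue. A quick sketch (the server and queue names are hypothetical placeholders):

using System.Messaging;

// Native MSMQ transport, addressing the queue by machine name:
MessageQueue nativeQueue = new MessageQueue(
    @"FormatName:DIRECT=OS:myserver\private$\MyQueue");

// The same queue over the HTTP transport (requires MSMQ HTTP support on the server):
MessageQueue httpQueue = new MessageQueue(
    "FormatName:DIRECT=HTTP://myserver/msmq/private$/MyQueue");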

Armed with this dangerous knowledge, I whipped out trusty WinDBG, set up a breakpoint in CryptCreateHash() and ran into this:

003ee174 50b6453f 005e14d0 0000800e 00000000 CRYPTSP!CryptCreateHash

0x800e is CALG_SHA512. So MSMQ was trying to use the SHA-512 algorithm for the hash; good information! Logically, the next thing I tried was to force MSMQ to use the more common SHA-1 algorithm instead by setting the appropriate property in the message (PROPID_M_HASH_ALG):

msg.HashAlgorithm = HashAlgorithm.Sha;

And it worked! Well, for HTTP anyhow, as it broke again as soon as I switched the sender app back to the native transport. It turns out that the PROPID_M_HASH_ALG property is ignored when using the native transport.

Changes in MSMQ 4.0 and 5.0

Around this time I ran into a couple of interesting documents:

  • What’s new in Message Queuing 5.0
  • MSMQ 5.0 – Changes introduced with Windows 7 and Windows Server 2008 R2

The key part that came up from both of these is that MSMQ 4.0 and 5.0 introduced security changes around the hash algorithm used for signing messages. In MSMQ 4.0, support for SHA-2 was added and MAC/MD2/MD4/MD5 were disabled by default, while SHA-1 remained the default. In MSMQ 5.0, SHA-1 was disabled by default as well and SHA-2 (specifically SHA-512) became the default [2]. This readily explains why my test case was using SHA-512 as the hashing algorithm.

At this point I went back to using internal MSMQ certificates and checked what Hash algorithm was being used in that case and, sure enough, it was using SHA-512 as well. Obviously then something in the certificate I was providing was triggering the problem, but what?

And then it hit me that the problem had nothing to do with the certificate itself, but with the private/public key pair associated with it. When you create/request a certificate, one aspect is which Cryptographic (CAPI) provider is used to generate (and store) the key pair. I had been using the default, the "Microsoft Strong Cryptographic Provider", which according to this page apparently isn’t quite strong enough to support SHA-2 [3].
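
A quick way to check which CAPI provider actually holds a certificate’s key pair is to ask .NET for it; a small sketch, assuming an RSA key stored in a CAPI provider:

using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

X509Certificate2 cert = ...; // the external certificate, loaded with its private key

// Print the CSP holding the private key; for SHA-2 signing you want something
// like "Microsoft Enhanced RSA and AES Cryptographic Provider".
var rsa = cert.PrivateKey as RSACryptoServiceProvider;
if (rsa != null)
{
    Console.WriteLine(rsa.CspKeyContainerInfo.ProviderName);
    Console.WriteLine("Provider type: " + rsa.CspKeyContainerInfo.ProviderType);
}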

So I created a new certificate, this time explicitly using a cryptographic provider that supports SHA-2, by providing the following arguments to my MakeCert.exe call:

-sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 12

And sure enough, sending authenticated messages over the native protocol succeeded now. Sending over HTTP also worked with the updated certificate without having to set the HashAlgorithm property.

Exporting and Using Certificates

Something to watch out for that might not be entirely obvious: If you export a certificate + private key from a system store into a PFX (PKCS#12) file, your original choice of cryptographic service provider is stored alongside the key in the file.

Once you import the PFX file into a second machine, it will use the same crypto provider as the original one. This might be important if you’re generating certificates on one machine for use on another. The lesson so far seems to be:

When using external certificates for use with MSMQ 5.0, ensure you choose a cryptographic provider that supports SHA-2 when generating the public/private key pair of the certificate, like the "Microsoft Enhanced RSA and AES Cryptographic Provider".
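
If you move the certificate around programmatically rather than through the export wizard, the same caveat applies. A hedged sketch (file path and password are placeholders):

using System.IO;
using System.Security.Cryptography.X509Certificates;

// On the source machine: export certificate + private key to a PFX file.
X509Certificate2 cert = ...;   // certificate with an exportable private key
byte[] pfx = cert.Export(X509ContentType.Pfx, "p@ssw0rd");
File.WriteAllBytes(@"C:\temp\msmqcert.pfx", pfx);

// On the target machine: import it; the key lands in the same CSP that was
// recorded in the PFX when the key pair was generated.
var imported = new X509Certificate2(@"C:\temp\msmqcert.pfx", "p@ssw0rd",
    X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.UserKeySet);
var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(imported);
store.Close();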

At this point I’m not entirely sure how much control you have over this part if you’re generating the certificates on, say, Unix machines, where this concept doesn’t apply (or how exactly it might work with external certification authorities).

External Certificates in Workgroup Mode

Another aspect I was interested in was the use of External Certificates when MSMQ is in Workgroup mode. In this mode, authenticated queues aren’t all that useful (at least as far as I understand things right now), because without AD the receiving MSMQ server has no way to match the certificate used to sign the message with a Windows identity to use for the access check on the queue. In this scenario, messages appear to be rejected with a "The signature is invalid" error when they reach the queue.

However, if the queue does not have the Authenticated option checked, then messages signed with external certificates will reach the queue successfully. The receiving application can then check that the message was indeed signed, because the Message.DigitalSignature (PROPID_M_SIGNATURE) property will contain the hash encrypted with the certificate’s private key, as expected. The application could then simply retrieve the public key and check it against known certificates in whatever application-specific store it keeps.

My understanding here is that even though it cannot look up the certificate in the directory service, the receiving MSMQ server will still verify that the signature is valid according to the certificate attached to the message. That’s only half the work, though, which is why the application should then verify that the certificate is known and trusted.
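
To illustrate that last point, here’s a hedged sketch of what the receiving application might do; the known-thumbprint list is application-specific and purely hypothetical:

using System;
using System.Messaging;
using System.Security.Cryptography.X509Certificates;

MessageQueue queue = new MessageQueue(@".\Private$\MyQueue");
// Ask MSMQ to hand us the sender certificate and signature properties.
queue.MessageReadPropertyFilter.SenderCertificate = true;
queue.MessageReadPropertyFilter.DigitalSignature = true;

Message msg = queue.Receive(TimeSpan.FromSeconds(30));

// MSMQ has already checked the signature against the attached certificate;
// it's up to the application to decide whether it trusts that certificate.
var senderCert = new X509Certificate2(msg.SenderCertificate);
string[] knownThumbprints = { "..." };   // hypothetical application-specific list
bool trusted = Array.Exists(knownThumbprints,
    t => string.Equals(t, senderCert.Thumbprint, StringComparison.OrdinalIgnoreCase));
if (!trusted)
{
    // reject the message / move it aside
}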

[1] John Breakwell’s MSMQ blog on MSDN is a godsend when troubleshooting and understanding MSMQ. Glad he continued posting about this stuff on his alternate blog after he left MS.

[2] The System.Messaging stuff does not provide a way to specify SHA-2 algorithms in the message properties in .NET 4.0. No idea if this will be improved in the future.

[3] I always thought the CAPI providers seemed to be named by someone strongly bent on confusing his/her enemies…

Categories MSMQ

Having Fun with WinDBG

June 5, 2011

I’ve been spending lots of quality time with WinDBG and the rest of the Windows Debugging Tools, and ran into something I thought was fun to do.

For the sake of keeping it simple, let’s say I have a sample console application that looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.CompilerServices;

class Program {
  static void Main(string[] args) {
    Program p = new Program();
    for ( int i = 0; i < 10; i++ ) {
      p.RunTest("Test Run No. " + i, i);
    }
  }
  [MethodImpl(MethodImplOptions.NoInlining)]
  public void RunTest(String msg, int executionNumber) {
    Console.WriteLine("Executing test");
  }
}

Now, imagine I’m debugging such an application and I’d like to figure out what is passed as parameters to the RunTest() method, seeing as how the application doesn’t actually print those values directly. This seems contrived, but a classic case just like this one is a method that throws an ArgumentException because of a bad parameter input but the exception message doesn’t specify what the parameter value itself was.

For the purposes of this post, I’ll be compiling using release x86 as the target and running on 32-bit Windows. Now, let’s start a debug session on this sample application. Right after running it in the debugger, it will break right at the unmanaged entry point:

Microsoft (R) Windows Debugger Version 6.11.0001.404 X86
Copyright (c) Microsoft Corporation. All rights reserved.

CommandLine: .\DbgTest.exe
Symbol search path is: *** Invalid ***
****************************************************************************
* Symbol loading may be unreliable without a symbol search path.           *
* Use .symfix to have the debugger choose a symbol path.                   *
* After setting your symbol path, use .reload to refresh symbol locations. *
****************************************************************************
Executable search path is:
ModLoad: 012f0000 012f8000   DbgTest.exe
ModLoad: 777f0000 77917000   ntdll.dll
ModLoad: 73cf0000 73d3a000   C:\Windows\system32\mscoree.dll
ModLoad: 77970000 77a4c000   C:\Windows\system32\KERNEL32.dll
(f9c.d94): Break instruction exception - code 80000003 (first chance)
eax=00000000 ebx=00000000 ecx=001cf478 edx=77855e74 esi=fffffffe edi=7783c19e
eip=77838b2e esp=001cf490 ebp=001cf4c0 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
*** ERROR: Symbol file could not be found.  Defaulted to export symbols for ntdll.dll -
ntdll!DbgBreakPoint:
77838b2e cc              int     3

Now let’s fix the symbol path and also make sure SOS is loaded at the right time:

0:000> .sympath srv*C:\symbols*msdl.microsoft.com/download/symbols
Symbol search path is: srv*C:\symbols*msdl.microsoft.com/download/symbols
Expanded Symbol search path is: srv*c:\symbols*msdl.microsoft.com/download/symbols
0:000> .reload
Reloading current modules
......
0:000> sxe -c ".loadby sos clr" ld:mscorlib
0:000> g
(1020.12c8): Unknown exception - code 04242420 (first chance)
ModLoad: 70b80000 71943000   C:\Windows\assembly\NativeImages_v4.0.30319_32\mscorlib\246f1a5abb686b9dcdf22d3505b08cea\mscorlib.ni.dll
eax=00000001 ebx=00000000 ecx=0014e601 edx=00000000 esi=7ffdf000 edi=20000000
eip=77855e74 esp=0014e5dc ebp=0014e630 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
ntdll!KiFastSystemCallRet:
77855e74 c3              ret

At this point, managed code is not executing yet, but we’ve got SOS loaded. Now, what I’d like to do is set an initial breakpoint in the RunTest() method. Because it’s a managed method, we’d need to wait until it is jitted to be able to grab the generated code entry point. Instead of doing all that work, I’ll just use the !BPMD command included in SOS to set a pending breakpoint [1] on it, then resume execution:

0:000> !BPMD DbgTest.exe Program.RunTest
Adding pending breakpoints...
0:000> g
(110c.121c): CLR notification exception - code e0444143 (first chance)
(110c.121c): CLR notification exception - code e0444143 (first chance)
(110c.121c): CLR notification exception - code e0444143 (first chance)
JITTED DbgTest!Program.RunTest(System.String, Int32)
Setting breakpoint: bp 001600D0 [Program.RunTest(System.String, Int32)]
Breakpoint 0 hit
eax=000d37fc ebx=0216b180 ecx=0216b180 edx=0216b814 esi=0216b18c edi=00000000
eip=001600d0 esp=002fece0 ebp=002fecf4 iopl=0         nv up ei pl nz na po nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000202
001600d0 55              push    ebp

Now the debugger has stopped execution on the first call to RunTest, so we can actually examine the values of the method arguments:

0:000> !CLRStack -p
OS Thread Id: 0x121c (0)
Child SP IP       Call Site
002fece0 001600d0 Program.RunTest(System.String, Int32)
    PARAMETERS:
        this () = 0x0216b180
        msg () = 0x0216b814
        executionNumber (0x002fece4) = 0x00000000

So the first parameter is the this pointer, as this is a method call. The msg parameter is a string, so let’s examine that as well:

0:000> !dumpobj -nofields 0x0216b814
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 0

Now let’s look at this at a slightly lower level:

0:000> kbn3
 # ChildEBP RetAddr  Args to Child
WARNING: Frame IP not in any known module. Following frames may be wrong.
00 002fecdc 001600ab 00000000 00000000 004aa100 0x1600d0
01 002fecf4 727221db 002fed14 7272e021 002fed80 0x1600ab
02 002fed04 72744a2a 002fedd0 00000000 002feda0 clr!CallDescrWorker+0x33
0:000> !IP2MD 0x1600d0
MethodDesc:   000d37fc
Method Name:  Program.RunTest(System.String, Int32)
Class:        000d1410
MethodTable:  000d3810
mdToken:      06000002
Module:       000d2e9c
IsJitted:     yes
CodeAddr:     001600d0
Transparency: Critical
0:000> !IP2MD 0x1600ab
MethodDesc:   000d37f0
Method Name:  Program.Main(System.String[])
Class:        000d1410
MethodTable:  000d3810
mdToken:      06000001
Module:       000d2e9c
IsJitted:     yes
CodeAddr:     00160070
Transparency: Critical

Here we see the top 3 stack frames including the first 3 parameters to the call, and from the !IP2MD calls you can see the first 2 are the calls to RunTest() and Main(), just as we would expect.

The parameters displayed by the kb command, however, seem a bit weird for the RunTest call: 00000000 00000000 004aa100. These are, literally, the values on the stack:

0:000> dd esp L8
002fece0  001600ab 00000000 00000000 004aa100
002fecf0  002fed20 002fed04 727221db 002fed14

Notice that at the top of the stack we have the return address to the place in Main() where the method call happened, followed by the “3 parameters” displayed by kb. However, this isn’t actually correct.

The CLR uses a calling convention that resembles the FASTCALL convention a bit. That means that in this case, the left-most parameter would be passed in the ECX register, the next one in EDX and the rest on the stack. In our case, this means that the value of the this pointer will go in ECX:

0:000> r ecx
ecx=0216b180
0:000> !dumpobj ecx
Name:        Program
MethodTable: 000d3810
EEClass:     000d1410
Size:        12(0xc) bytes
File:        C:\temp\DbgTest\bin\release\DbgTest.exe
Fields:
None

It also means that the msg argument will go in EDX:

0:000> r edx
edx=0216b814
0:000> !dumpobj -nofields edx
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 0

So the executionNumber argument goes on the stack, and we’ll find it at [esp+4]:

0:000> dd [esp+4] L1
002fece4  00000000

We could even disassemble the small piece of code in Main that calls RunTest by backing up a bit before the current return address; you’ll see how the value of i is pushed onto the stack from the edi register and how ecx and edx are likewise prepared for the call:

0:000> u 001600ab-12 L8
0016009b e87076b870      call    mscorlib_ni+0x2b7710 (70e37710)
001600a0 57              push    edi
001600a1 8bd0            mov     edx,eax
001600a3 8bcb            mov     ecx,ebx
001600a5 ff1504380d00    call    dword ptr ds:[0D3804h]
001600ab 47              inc     edi
001600ac 83ff0a          cmp     edi,0Ah

Knowing all this, if we wanted to print out the values of the msg and executionNumber parameters on all remaining calls to RunTest, we could replace the breakpoint setup by the !BPMD command with a regular breakpoint that executes a command and then continues execution. This would look something like this:

0:000> * remove existing breakpoint
0:000> bc 0
0:000> * check start address of RunTest
0:000> !name2ee DbgTest.exe Program.RunTest
Module:      000d2e9c
Assembly:    DbgTest.exe
Token:       06000002
MethodDesc:  000d37fc
Name:        Program.RunTest(System.String, Int32)
JITTED Code Address: 001600d0
0:000> * set breakpoint
0:000> bp 001600d0 "!dumpobj -nofields edx; dd [esp+4] L1; g"
0:000> g
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 1
002fece4  00000001
Executing test
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 2
002fece4  00000002
Executing test
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 3
002fece4  00000003
...

As you can see, we’re indeed getting the values of both our arguments without problems in the debugger log (which we could easily write to a file using the .logopen command). This is a simple scenario, but it can still prove useful sometimes. Of course, you could argue that going through all these contortions is over the top, given that the !ClrStack -p command can give you parameters to each function in the call stack. The answer is that !ClrStack doesn’t make it easy to dump just the first frame, nor does it combine with other commands, so you can’t easily use !DumpObj on the parameter values.

[1] If !BPMD doesn’t seem to work, it’s likely because the CLR debugger notifications are disabled. See this post on how to fix it (for .NET 4.0, just remember to replace mscorwks with clr).

Tags windbg
Categories Development

Github Repo for Molokai

February 13, 2011

After a couple of requests, I’ve created a separate git repository on github for my Molokai color scheme for Vim. Enjoy!

Categories Vim

BizTalk Send Handlers/SSO Bug?

December 17, 2010

My good friend Carlos ran into a situation while working with one of his clients that triggers what looks like a bug in BizTalk Server 2010. The specific case came up when changing the Windows User Group associated with a BizTalk Host when that host is already associated as a Send Handler for an adapter.

Apparently, when doing the change, BizTalk will correctly update the information stored in the Enterprise Single Sign-On (ENTSSO) database (SSODB) for Receive Handlers, but not for Send Handlers, which leaves the system generating errors when the host later tries to read the stored adapter settings from the SSODB.

Here are the repro instructions:

  1. Create a new BizTalk Host for the test. Let’s name it BTTestHost. Assign to it the default BizTalk Application Users group, and then create a new Host instance.
  2. Add the BTTestHost as a send handler for an adapter; we’ll use the FILE adapter in this example.
  3. At this point, if you check the ssodb.dbo.SSOX_ApplicationInfo table, you should see an entry for FILE_TL_BTTestHost; that’s our newly created send handler. You’ll notice that the ai_admin_group_name column has the value “BizTalk Application Users”, as expected.
  4. Now create a send port associated to the new Send Handler and test to verify everything is working correctly.
  5. Stop and delete the existing host instance.
  6. Now let’s create a new Windows user group; let’s name it “BizTalk Test Group”. We’ll also create a new Windows user that we’ll use to create a new host instance in a bit, and make that user a member of BizTalk Test Group. Make sure the user is not a member of “BizTalk Application Users”.
  7. Open the properties for BTTestHost and change the associated Windows User Group to the new “BizTalk Test Group”.
  8. Go ahead and create a new Host Instance, add a new Send Port associated with our Send Handler, and run a test. You’ll get an error saying something like “Access denied. The client user must be a member of one of the following accounts to perform this function.” (followed by a list of Windows user groups).
  9. At this point, if you go back and check the SSOX_ApplicationInfo table again, you’ll notice that the FILE_TL_BTTestHost row still has the original group in the ai_admin_group_name column, not the new group we assigned to the host. This is what causes the access denied error for the host instance: the host user is not considered to have access to the SSODB to read the adapter settings.

The ugly part is that working around this problem requires you to delete the original Send Handler and recreate it, which in turn means moving all send ports already associated with it one by one.

Categories BizTalk

BizTalk 2010 Config Tool Hanging

November 8, 2010

Last week I was installing BizTalk Server 2010 on my development Virtual Machine, which previously had 2009 installed. Installation went fine, but when I started the BizTalk Configuration tool, it started hanging for minutes at a time.

Strangely enough, the tool let me configure Enterprise Single Sign-On (ENTSSO) without any problems, but would hang every time I tried to configure a new BizTalk Group. After some tests, it became obvious that it was hanging while trying to connect to SQL Server to check whether the group databases could be created, and it stayed stuck until the connection timeout expired.

It would indeed eventually respond again, but trying to do anything would only cause it to hang once more. It was really weird that it would hang here because of the database connection, when the SQL Server used was local and ENTSSO had been configured straight away without any problems.

Fortunately, I managed to figure it out: The problem was that the SQL Server instance had TCP/IP connectivity disabled (only shared memory was enabled). Enabling TCP/IP and restarting the SQL Server service fixed the problem.

Categories BizTalk

WCF Messages Not Getting Closed

June 30, 2010

I spent some time yesterday evening tracking down an issue with a custom WCF transport channel. This particular channel would support the IInputChannel channel shape, but with a twist: it would return a custom implementation of System.ServiceModel.Channels.Message instead of one of the built-in WCF implementations.

Ther
