SOLVED: 485 Ambiguous when routing calls to a UCMA application via the PSTN

I ran into an issue today where I was trying to send a call from Skype for Business (which I’ll continue to refer to as Lync here because it’s shorter) to an application via the PSTN.  The telephony setup is a little strange in my lab because there are several different environments with their own distinct servers working from the same gateway, so I’m trying to call from Lync A, out through an AudioCodes Mediant 1000, then back in through the same gateway into Lync B on a configured line URI.  I actually had this working from an outside line, but if I tried calling through the gateway, I’d get a 485 back on the mediation server with an error saying “Multiple users associated with the source phone number”.  Off to Google it, and I found a blog post from three years ago describing the same issue.  That I wrote…  I knew this problem sounded familiar.

This time, however, I managed to figure out a solution.  I think the issue goes something like this:  The users on my domain had line URIs that looked like tel:+19058825000;ext=2154 (main autoattendant number plus extension).  The caller ID coming from the gateway was <sip:19058825000;phone-context=PstnGateway_192.168.213.21@rnd.computer-talk.com;user=phone>;epid=0146001F41;tag=79915bb9.  The front end saw this and tried to match to an account, which it appears to do without any regard for the extension parameter (since, in the Microsoft world, apparently everyone has a DID).  It matched multiple users, hence the ambiguous error. 

The solution, as it turns out, is to just mangle the caller ID on the gateway so that it doesn’t match any part of any user’s line URI:

image

This changes the From URI on a call to:

FROM: <sip:28819058825000;phone-context=PstnGateway_192.168.213.21@rnd.computer-talk.com;user=phone>;epid=0146001F41;tag=79915bb9

Which routes just fine.  I’d still probably class this as a bug, but at least now I know there’s a workaround for it.  So, in three more years when I have this problem again, now I’ll find the answer when I search for the error.


Revisiting RemoteFX USB redirection for Lync development on a local HyperV system

A while back (2 years ago now), I wrote a follow up post about building a standalone Lync system on a laptop using HyperV, and at the end had everything working except for RemoteFX USB redirection.  Time passed, and I didn’t give it much thought until I got a new set of VMs that I wanted to experiment with, and came across the old problem of trying to test AV calls without an audio device.  I was, however, able to get audio redirection working, which helped, but that still meant that all remote connections shared the default device.  I thought there was a better solution, and as it turns out, there is, but you need to do some work to enable it…

There’s a good overview on what RemoteFX USB redirection does and how to configure it here on MSDN, although I did have to make some changes to the steps there to do what I needed to get it to work.

Here’s the VM environment I started out with:

  • Host: Windows 8.1 with HyperV role, and 2 virtual switches, one sharing the ethernet adapter, and one private
  • DC: A Windows Server 2012 R2 DC running ADDS, CA, DNS, and DHCP (private network)
  • Lync 2013 SE: Running Server 2012 R2 (private network)
  • Terminal Server: Running Server 2012 R2, Remote Desktop Services, and Office 2013 (connected to both the private and external networks).

At this point, I could RDP into the terminal server, run Lync in two sessions, and IM between them.  To enable USB redirection though, you need to change some things on both the domain policy (or the policy of the TS I suppose) AND on your local PC, plus you’ll need to connect to the remote session in a very specific way:

On the Domain policy (I changed this under “Default Domain Policy\Computer Configuration\Policies\Administrative Templates\Windows Components\Remote Desktop Services\”, so I’m assuming that as the starting point for any paths):

  • Change “Remote Desktop Connection Client\RemoteFX USB Device Redirection\Allow RDP redirection of other supported RemoteFX USB devices” to enabled:

image

  • You may also want to change “Allow audio and video playback redirection” and “Allow audio recording redirection” under “Remote Desktop Session Host\Device and Resource Redirection\”.  This is what allows you to share the default devices on an RDP session (sometimes useful).

Once those changes are made, update the policy (gpupdate /force) on the terminal server.  Before you remote in though, you’ll need to change a policy setting on your local machine.  Go to “Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Connection Client\RemoteFX USB Device Redirection”, and edit “Allow RDP redirection of other supported RemoteFX USB devices from this computer”:
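The policy refresh itself is just a couple of commands from an elevated prompt on the terminal server (gpresult is optional, but handy to confirm the setting actually applied):

```shell
gpupdate /force
gpresult /r /scope computer
```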

image

Once that’s done, you’ll need to reboot, make sure the VMs are started, and then RDP into them.  In the RDP connection dialog, you’ll need to change a couple of things on the “Local Resources” tab:

image

First, click the “more” button and select the devices you want to redirect (in this case, a headset and camera).

image

Next, click the remote audio settings and disable remote audio playback and recording.  If you leave these enabled, they override USB redirection, and “remote audio” will be the only device that appears in the VM.

image
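These local resource choices also get persisted if you save the connection; in a .rdp file they show up as settings like the following (the usbdevicestoredirect value is normally generated by the dialog, so treat it as illustrative):

```text
audiomode:i:2
audiocapturemode:i:0
usbdevicestoredirect:s:*
```

Here audiomode:i:2 means “do not play remote audio” and audiocapturemode:i:0 disables remote recording, leaving the redirected USB devices as the only AV devices in the session.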

Then connect to the session, and you should see a driver installation dialog that says something like “installing Plantronics Savi 7xx-M (redirected)” that should disappear fairly quickly.  After that, you should have your AV device appearing natively in the remote desktop session.

image

Of course, this works for multiple sessions and devices as well, so if you connect two headsets or speakerphones and two cameras, and log into the TS via RDP with two different users, you can place a video call between them and have each user get their own audio and video device.  The number of unique AV devices is really only limited by the number of USB ports you have available.

Note that you can also do this through the enhanced session mode connection if you’re running Gen2 VMs (which is just running a remote desktop session anyway).  This is useful because you don’t even need the external network adapter enabled:

image

Now, as for why this is useful: it’s great if you want to validate AV conferencing scenarios, or, say, UCMA applications that use custom audio routes, and want to do all of it in an isolated VM environment where it’s not really practical to attach multiple physical devices to the virtual switch.  That’s where I use this kind of setup.  However you use it though, this is kind of a neat setup to show people, and it helped me understand how some new virtualization and remote desktop features work.


Tools, runtimes, and versions-what works, and what doesn’t

I already wrote about UCMA 3 applications and Lync 2013, and working on that post got me thinking about what configurations (supported or not) will actually work.  As a disclaimer for all of this, these results are based on some reasonably quick testing of a UCMA app.  I haven’t tested the entire runtime, so there may be problems that I don’t know about (although if you find cases that don’t work, drop me a line and let me know what you find).  That being said, the app I tested with set up an AudioVideoCall, created conferences, set custom audio routes, added and removed participants from a conference through dial out, used TTS, and processed Instant Messaging calls, so I’m reasonably confident that most of these cases are going to work out just fine.  I also don’t have any references to UC workflow in my projects (and neither should you, because it’s gone in UCMA 4), so there’s not that complication to worry about.

Most of this started with a reasonably simple question-how can you build a UCMA 3 application and a UCMA 4 application on the same machine?  As I’m sure others have found, attempting to install the UCMA 4 SDK on a machine that has UCMA 3 installed will force you to uninstall the old version before proceeding, but really, what is this error saying?  UCMA is really just a series of assembly references and a runtime, right?  And I can have multiple .net versions installed on the same box, so why not UCMA?  As it turns out, it’s possible, but you have to selectively uninstall things, and be willing to work with manual provisioning in development (which I’d recommend doing anyway).

There are three pieces to the UCMA SDK that you’ll need to consider: the core components (OCSCore.msi), the SDK itself, and the Microsoft.speech SDK/voices.  First, the core components, which you’ll see in add/remove programs like this:

image

This is the stuff that installs in C:\Program Files\Microsoft Lync Server 2010\Deployment, and contains the files you need to bootstrap the machine, and install the local config store.  These are installed on every machine in the topology, and will have to get uninstalled if you want to dual build/target.  Note that if you’ve auto provisioned any applications on your server, they won’t work after this, so switch those over to manual provisioning (set –RequiresReplication to $false on your pool) on the 2010 server. 

The SDK is a bunch of files that installs under C:\Program Files\Microsoft UCMA 3.0\SDK, and really, the files you care about are in C:\Program Files\Microsoft UCMA 3.0\SDK\Core\Bin.  There is no reason that these can’t coexist, but I’ll detail an optional step in a minute to keep things even cleaner for you.

Finally, UCMA 3 installs the Microsoft speech platform SDK version 10.2, and one of the voices.  You may have installed more voices, in which case your add/remove programs will look something like this (yes, I installed all the voices…  you never know when you might want to try Japanese TTS):

image

Now, the UCMA 4 SDK installs new versions of each of these components.  The SDK files are actually the nicest, since they install to C:\Program Files\Microsoft UCMA 4.0\SDK, and can exist quite nicely beside the UCMA 3 versions.  The core components can’t exist concurrently though, and that’s what causes the setup error on the SDK install.  Version 11 of Microsoft.speech also can’t coexist with version 10.2, but in this case, the UCMA 4 install won’t fail if you have it on there already, so while you’ll get the new SDK files, your speech runtime will remain at 10.2.  Interestingly enough, this doesn’t seem to matter. 

The other thing to mention before getting to the results is .net.  UCMA 3 was built against .net 3.5, and the supported scenarios all have UCMA apps targeting this version of the framework.  UCMA 4 on the other hand, targets .net 4.5.  Of course, it’s well documented that an uplevel .net application can use an assembly from an earlier framework, so is there any reason that your UCMA 3 app can’t use .net 4.5 too?  As it turns out, no, there probably isn’t. 

So with all that in mind, I tested just about every combination of the following:

  • Lync Server Version: 2010 and 2013 (both with the latest updates)
  • UCMA 3 and 4 assemblies
  • OCSCore 3 installed, OCSCore 4 installed, and no OCSCore installed
  • .net 3.5 and 4.5
  • Microsoft.speech 10 and 11

And doing so gave me some really interesting results:

  • Referencing the UCMA 3 libraries in a .net 4.5 project worked, but you need to make sure to set <startup useLegacyV2RuntimeActivationPolicy="true"> in your app.config file (standard practice for loading an old assembly in new .net).  Note that I didn’t make any code changes in the app, but it did use the newer framework.
  • Building a UCMA 4 application using the old (10.2) speech runtime worked just fine.
  • Running a UCMA 3 application using the new (11.0) speech runtime worked just fine too, and actually seemed to have better TTS than the old version.
  • If you’re manually provisioning, it doesn’t seem to matter whether OCSCore is installed or not.
  • UCMA 4 apps run against Lync 2010.  I didn’t expect that to work, but it does imply that there are little to no changes in the SIP layer.  This is one case I’m definitely going to investigate further though, since the one thing I did not try is running this in a pure 2010 environment (not a hybrid), mostly because all of my dev lync environments are hybrids now.  Interesting prospects though.
  • I didn’t bother testing UCMA 4 with a lower level .net, but I don’t think it’d work.
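The app.config tweak from the first bullet looks like this in full (the supportedRuntime element pins the process to the .net 4 CLR, while the legacy activation policy lets it load the older UCMA assemblies):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- Run on the .net 4 CLR, but allow pre-4.0 (UCMA 3) assemblies to load -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
</configuration>
```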

Now, knowing all of that, the question is: what makes the most sense to release?  My final configuration was a bit of a compromise: I created a build that generated two sets of binaries, one using UCMA 3 and one using UCMA 4, but both using .net 4.5 and speech 11.  This means that clients will still be able to auto provision in production.  As for how I set this up:

  1. Uninstall the 2010 core components
  2. Uninstall the 10.2 speech SDK and voices
  3. Install the UCMA 4 SDK
  4. Verify that the v11 speech SDK is installed, and install any additional voices you want.
  5. In your project, reference the appropriate version of Microsoft.RTC.Collaboration etc (3 or 4).  NOTE-it’s important to remember to reference the specific version of the assembly, and not just the latest.  I did this by explicitly setting the HintPath in the project file to the path of the DLLs rather than using the GAC copy (although you could also use the SpecificVersion flag in the reference)
  6. Update your project to .net 4.5 if you want (although you could also split out by .net version).
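As a sketch of step 5, the pinned UCMA 3 reference in the .csproj ends up looking something like this (the path assumes the default SDK install location):

```xml
<Reference Include="Microsoft.Rtc.Collaboration">
  <!-- Pin to this exact assembly rather than whatever is newest in the GAC -->
  <SpecificVersion>True</SpecificVersion>
  <HintPath>C:\Program Files\Microsoft UCMA 3.0\SDK\Core\Bin\Microsoft.Rtc.Collaboration.dll</HintPath>
</Reference>
```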

So far, this seems to have worked out well, and going into the next full QA round for our products we’ll be testing these configurations more thoroughly.  I’m pretty impressed that these cases all worked out, but it would have been nice if it’d been easier to set this up.  I can understand why two versions of OCSCore can’t exist on the same machine, but why not the speech runtime and voices? 

In any case, this is what I managed to figure out through testing these cases out myself, and the end results are that this all looks really promising.  Have any of you tried these configurations yourself, and if so, what does and doesn’t work?  Any other things to watch out for?


How to provision a UCMA application against Lync-Automatic vs Manual

No matter what type of UCMA application you want to write, one of the first things you’ll need to do is set up your environment to work with the Lync server.  Whichever method you choose, this involves configuring a machine certificate, installing the prerequisites, and provisioning your application using the Lync Management shell.  The official documentation on provisioning has come a long way for UCMA 4, so I’ll just point you to here (General Provisioning steps) for the basic steps.  This post is just going to cover the changes for each method, and why you’d choose one over the other.

Automatic Provisioning (documented here) is the recommended activation procedure from Microsoft, and in theory it makes a lot of sense.  Your application server has a replica of the Central Management store, and creating your collaboration platform is greatly simplified:

//Create the platform settings
ProvisionedApplicationPlatformSettings platformSettings =
    new ProvisionedApplicationPlatformSettings(applicationUserAgent, _customProperties.ApplicationURN);

//Create the CollaborationPlatform
_collaborationPlatform = new CollaborationPlatform(platformSettings);
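From there, the platform discovers your endpoints from the replica rather than you configuring them.  A rough sketch (error handling omitted, and OnEndpointSettingsDiscovered is a hypothetical handler name; the synchronous End(Begin(…)) pattern is just for brevity):

```csharp
// Auto provisioning: endpoint settings are pushed down from the local CMS replica
_collaborationPlatform.RegisterForApplicationEndpointSettings(OnEndpointSettingsDiscovered);
_collaborationPlatform.EndStartup(_collaborationPlatform.BeginStartup(null, null));

private void OnEndpointSettingsDiscovered(object sender,
    ApplicationEndpointSettingsDiscoveredEventArgs e)
{
    // Fires once for each trusted application endpoint provisioned for this app
    ApplicationEndpoint endpoint =
        new ApplicationEndpoint(_collaborationPlatform, e.ApplicationEndpointSettings);
    endpoint.EndEstablish(endpoint.BeginEstablish(null, null));
}
```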

Manual provisioning (documented here) on the other hand, is similar to what we had in UCMA 2, and has the app server semi-absent from the Lync topology.  It’s still a trusted application server, but it has no local replica.  As such, we have to specify a few more settings when we create the platform:

ServerPlatformSettings platformSettings = new ServerPlatformSettings(applicationUserAgent,
    _customProperties.ApplicationServerFQDN,
    (int)_customProperties.ApplicationServerPort,
    _customProperties.ApplicationGRUU,
    certificate);

//Create the CollaborationPlatform
_collaborationPlatform = new CollaborationPlatform(platformSettings);

This constructor now has to include the user agent, the app server FQDN and port, the GRUU, and the certificate.  Most of these are just outputs from the provisioning cmdlets that you ran to set up the application server, and the certificate is easily obtained by getting a reference to the local machine store like this:

X509Store store = new X509Store(StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection certificates = store.Certificates;

And then finding the appropriate cert based on the FQDN.  It’ll also mean storing a lot more application configuration than you would in the auto provisioned case, but we’ll see in a minute why that might not be a bad trade off.
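That lookup is a single Find call; a sketch, assuming the certificate’s subject name matches the app server FQDN:

```csharp
// Match on subject name; the final false also returns certs the store can't fully validate
X509Certificate2Collection matches = certificates.Find(
    X509FindType.FindBySubjectName, _customProperties.ApplicationServerFQDN, false);
store.Close();

if (matches.Count == 0)
    throw new InvalidOperationException("No certificate found for the app server FQDN");

X509Certificate2 certificate = matches[0];
```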

Now, as for reasons you’d want to use either of these methods, consider these advantages for manual provisioning:

  • The application server does not need to be a member of the same domain as the Lync server, or even a member of ANY domain.  Even a workgroup computer will work, which I’ve used when building reusable VMs containing UCMA apps.  This can be really handy for cases where you’re deploying a managed application in an environment where someone wants you to support your app, but doesn’t want you having full domain access. 
  • Setup of the local replica can take a while: Usually, this goes pretty smoothly, but there’s more stuff to install, more reboots of the server, and more cmdlets to run to get replication working correctly.  In general, it seems like more trouble than it’s worth just to avoid having to copy a gruu around.
  • You’re not tightly tied into one version of UCMA the way a local replica ties you to its core components (more on this in another post…)
  • You can easily run applications against multiple Lync domains: Switching between a dev, test, and production Lync server is just a matter of changing a setting in your app.  No re-provisioning required. 

On the other hand, auto provisioning means:

  • less chance for errors when copying a gruu around
  • certificates are assigned through powershell, rather than in the application config

So, when to use each?  Ideally, support both in your app-it’s not difficult to add a flag, and you’ll need to store configuration information anyway, so give your users the option.  When you’re in development, you’re definitely going to want to use manual provisioning.  I know that I switch between Lync environments all the time, and my main Lync lab domain is separate from the domain my dev machines sit on.  It’s much more flexible, and other than the initial setup of copying the settings from Lync to the app server, management has never been a problem.  When deploying in production though, I’ll often flip things over to automatic provisioning if possible, just to keep the application servers in the topology, and allow Lync admins to manage app server certificates through the management shell.  As always though, having the option gives you flexibility in whether you’re on the domain or not. 


UCMA 3.0 applications and Lync 2013 RTM

Now that Lync 2013 is RTM, UCMA developers are left with a difficult choice.  Do you upgrade your existing apps to UCMA 4.0, or stick with 3.0?  The simplest answer to this question is-do you plan on supporting Lync 2010 or not?  If so, then sticking with 3.0 seems to be the safest bet, since your code will work in both places.  If you absolutely have to have async (given that .net 4.5 support is pretty much the only new feature in UCMA 4), then by all means upgrade.  It’s a painless process, since existing code should just work, but be forewarned that you’ll only be able to run against Lync 2013 pools.  This means either keeping two build configurations and sets of binaries, or not targeting anyone who hasn’t upgraded yet. 

If you do decide to stick with 3.0 for now, the question becomes what has changed from a provisioning standpoint.  Michael Greenlee already went through this exercise for the preview release, and I’m glad to say that things have gotten a little better since then.  I’ll go through a few cases here.

Pure Lync 2013 environment, Auto provisioning

First, consider a case of a pure Lync 2013 environment (i.e. not an upgrade from 2010). 

  • Install the prerequisite components on your server.  This includes the UCMA 3.0 runtime, OCSCore.msi, and the TTS/ASR languages you’ll need.  The important thing here is to make sure you use the 2010 versions of these prerequisites (from the Lync 2010 iso)
  • On the application server, open the Lync management shell and run New-CsTrustedApplicationPool -Identity <ApplicationPoolFQDN> -Registrar <Lync 2013 FQDN> -Site <Lync 2013 site ID>.  This is the same command you’d run for 2010. 
  • Run New-CsTrustedApplication and New-CsTrustedApplicationEndpoint, just as you would with 2010.
  • Don’t forget to run enable-csTopology to publish your changes.  Otherwise, they’ll get wiped out the next time someone else makes a change.
  • Install the rest of the prerequisites on the app server by running C:\Program Files\Microsoft Lync Server 2010\Deployment\Bootstrapper.exe /BootstrapLocalMgmt /MinCache .  This installs the local SQL express instance for auto provisioning.
  • Run Enable-CSReplica from the Lync powershell window
  • Reboot the server
  • After reboot, run Invoke-CSManagementStoreReplication
  • Verify that the replication status is $True
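Pulled together, the shell portion of the steps above looks something like this (the pool, registrar, application, and SIP address names are made up for the example):

```powershell
# Create the trusted application pool against the 2013 registrar
New-CsTrustedApplicationPool -Identity apps.contoso.com -Registrar lync2013.contoso.com -Site 1

# Add the application and an endpoint for it, then publish
New-CsTrustedApplication -ApplicationId myucmaapp -TrustedApplicationPoolFqdn apps.contoso.com -Port 10600
New-CsTrustedApplicationEndpoint -ApplicationId myucmaapp -TrustedApplicationPoolFqdn apps.contoso.com -SipAddress sip:myucmaapp@contoso.com
Enable-CsTopology
```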

What’s interesting here is that if you look on the Lync server, you’ll actually see the application server under the 2010 branch in the topology builder, and not the 2013 one. 

    image

At this point, you’re good to get your certificates configured, and add more apps and endpoints to the server if you want.  I’ve found that anything you do using the 2010 PS on the app server works just fine, and keeps things in the 2010 hub.

Pure Lync 2013 environment, Manual provisioning

This case is actually quite similar to the one above, except that instead of creating an entire replica database, you’ll just provide some extra information to the UCMA platform startup.  This has the disadvantage of having to copy over a GRUU, but also means that you don’t have to join the app server to the Lync domain, and don’t have to hope that the replica DB comes up the first time you try.  The steps:

  • Install the prerequisite components on your server.  This includes the UCMA 3.0 runtime, OCSCore.msi, and the TTS/ASR languages you’ll need.  The important thing here is to make sure you use the 2010 versions of these prerequisites (from the Lync 2010 iso)
  • On the application server, open the Lync management shell and run New-CsTrustedApplicationPool -Identity <ApplicationPoolFQDN> -Registrar <Lync 2013 FQDN> -Site <Lync 2013 site ID> -RequiresReplication $False.  Note that you’re disabling replication here.
  • Run New-CsTrustedApplication and New-CsTrustedApplicationEndpoint, just as you would with 2010.
  • Don’t forget to run enable-csTopology to publish your changes.  Otherwise, they’ll get wiped out the next time someone else makes a change.
  • Grab the GRUU (and other provisioning parameters) that you need for your app
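The values you’ll need for ServerPlatformSettings can be read back from the shell afterwards; for example (the application identity here is made up):

```powershell
# The GRUU and port for the trusted application
Get-CsTrustedApplication -Identity apps.contoso.com/myucmaapp | Select-Object ServiceGruu, Port

# The endpoint SIP addresses to establish from the app
Get-CsTrustedApplicationEndpoint | Select-Object SipAddress, OwnerUrn
```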

Now, just to try it out, I ran these cmdlets from the 2013 powershell window instead of the application server, which meant that the app server appeared in the list of Lync 2013 application servers.  When I tried my apps though, everything seemed to work, so it appears that the issues that others were having in the beta about not being able to manage legacy objects from the 2013 PS have been resolved.  I don’t know if there are any long term implications for having the server appear under the 2013 branch though, so if anyone knows of any, please let me know. 

Upgrades

Probably the most common case, here is where you’ll have a 2010 environment, deploy 2013 alongside it, and want to move an application from one server to another.  Unfortunately, this is easier said than done-there is no way to easily move an app from one registrar to another.  Set-CSTrustedApplicationPool fails when you try to change the registrar, even after warning you that applications and endpoints will be orphaned.  The only way to upgrade is to manually re-provision your application…or to write a script to do it for you.  Such a script might look something like this:

function CTTMove-CSTrustedApplicationServer
{
param(
[Parameter(Mandatory = $true, HelpMessage="The current pool FQDN")]
[ValidatePattern("^(.+)$")]
$currentPool=(Get-CsService -Registrar)[0].PoolFqdn,
[Parameter(Mandatory = $true, HelpMessage="The new pool FQDN")]
[ValidatePattern("^(.+)$")]
$newPool=(Get-CsService -Registrar)[0].PoolFqdn,
[Parameter(Mandatory = $true, HelpMessage="The application server FQDN")]
[ValidatePattern("^(.+)$")]
$appServerFQDN=$null,
[Parameter(HelpMessage="Set this to true if moving a 2010 pool-2013 can't auto delete")]
[bool]$isLegacy=$False
)

#get the currently provisioned stuff for the app server
$site=Get-CSSite
$pool=Get-CsTrustedApplicationPool |where{$_.PoolFqdn -eq $appServerFQDN}
$cpus=Get-CsTrustedApplicationComputer |where{$_.Pool -eq $appServerFQDN}
$apps=foreach($a in $pool.Applications) {Get-CsTrustedApplication | where {$_.ApplicationID -eq $a}}
$eps=foreach($a in $pool.Applications){ Get-CsTrustedApplicationEndpoint |where {$_.OwnerUrn -eq $a} }

#remove the application server
if($isLegacy -eq $false)
{
    Remove-CsTrustedApplicationPool $appServerFQDN -Force
    Enable-CsTopology
}
else
{
    Write-Host "Please remove the pool from the Lync 2013 topology builder and publish the topology before proceeding" 
    Read-Host "Hit enter to continue:"
}

#re-add the pool

if($cpus.Count -eq 1)
{
    New-CsTrustedApplicationPool -Identity $pool.PoolFqdn -Registrar $newPool -Site $site.SiteId -RequiresReplication $pool.RequiresReplication -Force
}
else 
{ 
    New-CsTrustedApplicationPool -Identity $pool.PoolFqdn -Registrar $newPool -Site $site.SiteId -RequiresReplication $pool.RequiresReplication -ComputerFqdn $cpus[0].Fqdn -Force
    for($i = 1; $i -lt $cpus.Count; $i++)
    {
        New-CsTrustedApplicationComputer -Pool $pool.PoolFqdn -Identity $cpus[$i].Fqdn
    }
}
foreach($a in $apps) {New-CsTrustedApplication -ApplicationId $a.ApplicationId -TrustedApplicationPoolFqdn $appServerFQDN -Port $a.Port}
foreach($e in $eps) {New-CsTrustedApplicationEndpoint -SipAddress $e.SipAddress -DisplayName $e.DisplayName -LineURI $e.LineUri -TrustedApplicationPoolFqdn $appServerFQDN -ApplicationId $e.OwnerUrn}
Enable-CsTopology
}
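Usage is a single call from the Lync 2013 management shell; for a legacy (2010) pool, that might look like this (the FQDNs are examples):

```powershell
CTTMove-CSTrustedApplicationServer -currentPool lync2010.contoso.com `
    -newPool lync2013.contoso.com `
    -appServerFQDN apps.contoso.com `
    -isLegacy $true
```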


This script grabs the information about the application pool, computers, applications, and endpoints, and recreates them all using the new registrar.  This assumes that you want to target all applications on a particular server to a new registrar, which will likely be the case if you’re in the process of decommissioning Lync 2010.  Of course, this can’t be completely straightforward…  If you try to run the script as-is, you’ll probably get an error like this:

image

Which is saying that you can’t delete a legacy pool from the 2013 tools.  The problem is, you also can’t delete it from the 2010 powershell.  You can, however, delete it from the 2013 topology builder, but if you do this you lose the provisioning information that this script tries so hard to save.  What you need to do is run the script with -isLegacy $true.  This will gather all the info, pause the script, and then tell you to delete the existing pool.  When it pauses, open the topology builder, locate your pool, delete it, and then publish.  Then, let the script continue and everything should get recreated.  I tried having the script detect this case, but that big red failure of Remove-CSTrustedApplicationPool didn’t actually throw an exception, so I wasn’t able to catch it.  Feel free to modify and improve on this script if you’d like-I’m a .net developer, not a powershell ninja, and this is actually the first reusable script I’ve come up with.  Although, seeing how useful it can be, it probably won’t be the last. 

So, to summarize, UCMA 3.0 applications appear to “just work” against Lync 2013 as promised, which is great to see, and other than the upgrade path being a little more cumbersome than it needs to be, things will continue to work as they have in the past.  Has anyone out there found something about a UCMA 3 app that won’t work with Lync 2013?  If so, shoot me an email or leave a comment-I haven’t seen anything myself, but I’d love to know if there’s something else to watch out for. 


Lync 2013 RTM and Diagnostics-the disappearing OCSLogger

The RTM version of Lync 2013 has been available for a while now, but one of the things that you might notice when you come to it from 2010 is that the logging tool (OCSLogger.exe) is not installed on the system.  I know it was there in beta, so I didn’t think they’d axed it, and luckily I did manage to find this download link for the tracing tools.  Not only does this contain OCSLogger.exe, but also Snooper from the resource kit. 

As for what to expect, the tools work pretty much the same as before, with snooper actually performing much faster than it did in the past.  This tool is absolutely essential for troubleshooting Lync issues, since there’s often more information in a trace than in the UCMA exceptions.  This is also true for app servers-you can run OCSLogger on a UCMA server and get traces to that machine from SIPStack and S4 (which actually stands for Simple Scalable SIP Stack) to troubleshoot there as well.

Anyway, just a quick post in case anyone else runs into the same issue.  No, OCSLogger hasn’t disappeared yet-it’s just better hidden.  Something else to add to the TODO list when installing a new Lync server or app server I suppose.


Building a standalone Lync Server part 2-Windows 8, HyperV, and a domain joined laptop

Earlier this year, I wrote a post about creating a HyperV laptop to run an isolated Lync instance on, using Windows Server 2008 R2.  Now that Windows 8 is out though, I wanted to do the same thing for my main development machine, since we finally have a full-on HyperV instance on the desktop OS.  This turned out to present some different challenges than the server based laptop, not so much with the drivers, but with networking. 

In Hyper-V, you now create virtual switches instead of just network connections.  Because my laptop was domain joined at the office, I didn’t want to put a new DC on that network, so I set up an internal network for the DC, Lync, Exchange, and terminal servers.  Then I wanted at least one server (the terminal server image that had Visual Studio and the UCMA 4 SDK preview) to be accessible via remote desktop for audio remoting, and also to be able to access the internet.  This seemed like a simple enough task: right-click the Hyper-V node, select the Virtual Switch Manager, and create an External switch bound to my physical NIC (note: make sure you choose the correct adapter, wired or wireless).  I checked the box to allow the management OS to share the adapter, but after enabling the new switch, I’d lost all external network connectivity on the wired adapter.  My Virtual Switch Manager looked like this:

[image: Virtual Switch Manager showing the internal and external switches]
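For reference, the same switch setup can also be scripted with the Hyper-V PowerShell module that ships with Windows 8.  This is just a sketch of what I did through the GUI; the switch names, the VM name, and the adapter name “Ethernet” are placeholders for whatever Get-NetAdapter and Get-VM report on your machine:

```shell
# Internal switch for the isolated lab network (DC, Lync, Exchange, TS)
New-VMSwitch -Name "LabInternal" -SwitchType Internal

# External switch bound to the physical NIC. -AllowManagementOS $true is
# the scripted equivalent of the "allow management operating system to
# share this network adapter" checkbox in the Virtual Switch Manager.
New-VMSwitch -Name "LabExternal" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Give the dev/terminal server VM a second NIC on the external network
Add-VMNetworkAdapter -VMName "TS01" -SwitchName "LabExternal"
```

Scripting it doesn’t avoid the connectivity problem described below, but it makes the setup repeatable if you ever need to rebuild the lab.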

There will also be a whole lot of things listed in Network Connections (search for “View network connections” to find it-this screen likes to hide):

[image: Network Connections showing the vEthernet adapters and bridge]

Each virtual switch is accompanied by a vEthernet adapter, and possibly a bridge (in the case of my wireless adapter).  Now, in order to get everything working correctly after the initial setup (and occasionally after a reboot), I needed to do the following:

  1. Disable the vEthernet connection (public), the Ethernet connection, and the bridge (if one was created)
  2. Enable the Ethernet connection, and wait until you have network access
  3. Enable the bridge
  4. Enable the vEthernet connection
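Since this dance sometimes has to be repeated after a reboot, it can be scripted with the NetAdapter cmdlets.  Again, a sketch under assumptions: the adapter names here are examples, so check Get-NetAdapter for the actual names on your machine, and the bridge may not exist at all:

```shell
# Take everything down first: the virtual adapter, the physical NIC,
# and the bridge (which may not exist, hence SilentlyContinue).
Disable-NetAdapter -Name "vEthernet (LabExternal)" -Confirm:$false
Disable-NetAdapter -Name "Ethernet" -Confirm:$false
Disable-NetAdapter -Name "Network Bridge" -Confirm:$false -ErrorAction SilentlyContinue

# Bring the physical NIC up and wait for it before touching anything else
Enable-NetAdapter -Name "Ethernet" -Confirm:$false
while ((Get-NetAdapter -Name "Ethernet").Status -ne "Up") { Start-Sleep -Seconds 2 }

# Now the bridge, then the virtual adapter
Enable-NetAdapter -Name "Network Bridge" -Confirm:$false -ErrorAction SilentlyContinue
Enable-NetAdapter -Name "vEthernet (LabExternal)" -Confirm:$false
```

The while loop is the scripted version of step 2’s “wait until you have access”; polling the adapter status keeps the virtual pieces from coming up before the physical NIC is ready.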

Once this is done, you’ll see something like this in the Network and Sharing Center:

[image: Network and Sharing Center showing both networks connected]

Now, as near as I can figure, there was a race condition here: the virtual adapter came up first and tried to get a DHCP address from the DC on the virtual network.  Whatever the reason, the good news is that now the host and guest both have network access.  This also means that your guest is going to get an IP from your main network’s DHCP server, and have all the access it would have had you plugged a cable into a physical machine, so be careful what you put on there. 

Now at this point, there’s one other thing you’ll have to do if you want to remote desktop into the guest:

On the host OS, open your network connections and open the vEthernet adapter that’s on the PRIVATE network.  Change the IPv4 properties to give the adapter a static IP on the same subnet as your VMs:

[image: host IPv4 properties with a static IP on the private subnet]

Now in the guest, change the IP settings of the private adapter to list the host’s static IP as the default gateway:

[image: guest IPv4 properties with the host’s static IP as default gateway]
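The two sides of that private-network plumbing can also be done from PowerShell with the NetTCPIP cmdlets.  The addresses and interface aliases below are examples only; use whatever subnet your VMs are actually on:

```shell
# On the host: static IP on the internal vEthernet adapter,
# same subnet as the VMs on the private network
New-NetIPAddress -InterfaceAlias "vEthernet (LabInternal)" `
    -IPAddress 192.168.50.1 -PrefixLength 24

# In the guest: static IP on the private adapter, with the host's
# static IP listed as the default gateway
New-NetIPAddress -InterfaceAlias "Ethernet" `
    -IPAddress 192.168.50.10 -PrefixLength 24 -DefaultGateway 192.168.50.1
```

With the gateway pointed at the host, traffic from the guest to the host’s address works, which is all remote desktop needs.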

Now, you’ll be able to remote desktop into the guest from the host, which means you can enable remote audio like this:

[image: Remote Desktop Connection local resources with remote audio enabled]

And, you can even open up multiple remote desktop sessions, and connect audio to each of them, which means that two Lync clients could call each other without ever leaving your laptop.  Of course, they’ll both be using the same audio device, but there’s not much we can do about that…or can we?

Actually, no, we can’t, or at least I haven’t managed to figure that part out yet.  Using RemoteFX, you should technically be able to redirect a USB device through a remote desktop session like this:

[image: RemoteFX USB device redirection option in the remote desktop client]

In theory, this means that two USB speakerphones could be redirected to two different remote desktop sessions, and I could place calls between them, which would make a great lab setup.  Unfortunately, this doesn’t appear to work in Windows 8.  RemoteFX appears to depend on Remote Desktop Services, which isn’t present on Windows 8.  So very close…  Of course, I also wasn’t able to get RemoteFX USB redirection working on my Server 2012 instance, so maybe there’s something else going on?  There’s a pretty comprehensive guide out there if you want to give it a shot, and I’d like to hear from someone who has this working.  I use a couple of physical IP phones at the office for most of my Lync testing (a couple of Polycom CX700s and a Snom 370), but having something I could use at home with a couple of Jabra Speak phones would be useful too. 

In any case, there’s now a simple way to set up a development machine that’s joined to a corporate domain and can run Lync without being on a VPN.  You’re probably not going to run a huge amount of volume through it, but placing a couple of calls or setting up a conference is certainly reasonable.  Has anyone else tried this and come up with a better setup? 

Posted in Uncategorized | Leave a comment