Thursday, December 24, 2015

Playing with Windows NVGRE Gateway and System Center VMM

Sample NV Gateway Config:
Management Network - VLAN 101 (native VLAN)
Back-End Network - VLAN 102
Front-End Network - VLAN 814

Issue 1:
After NV Gateway provisioning, an attempt to configure front-end/back-end network connectivity produced the following error:

Error (21426)
Execution of Microsoft.SystemCenter.NetworkService::InstallDeviceConnection on the configuration provider 4ee559f1-f479-480c-9458-d14b8b1c1779 failed. Detailed exception: Microsoft.VirtualManager.Utils.CarmineException: Unable to set up Remote Access server to support multi-tenancy mode. (VMM cannot complete the host operation on the hyper-vhost1.dev.contoso.com server because of the error: The operation failed. Failed while applying switch port settings 'Ethernet Switch Port Isolation Settings' on switch '': One or more arguments are invalid (0x80070057). Resolve the host issue and then try the operation again.). Fix the issue in Remote Access server and retry the operation.

Recommended Action
Check the documentation for the configuration provider or contact the publisher support.


Solution:
You need to uncheck "Enable VLAN" (102) for the Back-End Network in the NVGW VM properties before configuring the NV Gateway connectivity. The wizard will set it up for you.
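
If you prefer to check or clear that setting from PowerShell instead of the VM properties dialog, a minimal sketch could look like this (the gateway VM name 'NVGW-VM1' and the adapter index are assumptions for illustration; verify both against your environment):

$gatewayVM = Get-SCVirtualMachine -Name 'NVGW-VM1'
# List the adapters and their current VLAN settings to find the back-end NIC
Get-SCVirtualNetworkAdapter -VM $gatewayVM | Format-Table Name, VLanEnabled, VLanID
# Clear the VLAN setting on the back-end adapter (the index is environment-specific)
$backEndNic = (Get-SCVirtualNetworkAdapter -VM $gatewayVM)[1]
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $backEndNic -VLanEnabled $false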



Issue 2:
After a manual attempt to attach a Tenant network to "Tenant-1-VM1", the following error was detected:

Error (15020)
The virtual network adapter Tenant-1-VM1 [MAC: XXXXXXXXXXXX] doesn't have a CA (customer address) assigned from the VMSubnet related IP Pool. 

Recommended Action 
Please assign a CA address from the VMSubnet Address Pool to the virtual network adapter and try again.


Solution:
Make sure you are using the "Dynamic IP" VM setting (this problem can also occur if you set the IP address to "Static IP" inside the VM template).


To allocate a static IP address from the pool manually, run the following PowerShell commands:

$VM = Get-SCVirtualMachine -Name 'Tenant-1-VM1'
$staticIPPool = Get-SCStaticIPAddressPool -Name '192.168.1.0'
$IP = '192.168.1.10'
Grant-SCIPAddress -GrantToObjectType 'VirtualNetworkAdapter' -GrantToObjectID $VM.VirtualNetworkAdapters[0].ID -StaticIPAddressPool $staticIPPool -IPAddress $IP

Once this IP has been granted from the pool, it can be assigned to the corresponding vNIC using the following command:

Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $VM.VirtualNetworkAdapters[0] -IPv4AddressType static -IPv4Addresses $IP

Issue 3:
After adding a Hyper-V cluster (e.g. one created earlier in a DMZ zone) to SCVMM, the following error and warning were detected:

Error (25122)
The specified address ((AllocatedIPAddressData#2d583) { id = 11c8b775-dff5-4e16-a57b-6d5d411e14ac, LastUpdatedTimestamp = 2/27/2016 10:56:57 PM }) is already allocated by the pool (<IPPoolname>). This address should be assigned to only a single entity. 

Recommended Action
Resolve to which entity this address is allocated.

Warning (13926) 
Host cluster <FQDN of Cluster> was not fully refreshed because not all of the nodes could be contacted. Highly available storage and virtual switch information reported for this cluster might be inaccurate.

Recommended Action
Ensure that all the nodes are online and do not have Not Responding status in Virtual Machine Manager. Then refresh the host cluster again.

Solution:
Revoke the conflicting IP address and grant it back to the host cluster using the following PowerShell script:
$ID = "11c8b775-dff5-4e16-a57b-6d5d411e14ac"
$IPPoolName = "IPPoolName"
If (-not (Get-Module virtualmachinemanager)) {
Import-Module virtualmachinemanager }
$IP = Get-SCIPAddress | Where-Object {$_.ID -eq $ID}
$IPPool = Get-SCStaticIPAddressPool -Name $IPPoolName
# Looking up DNS Name based on IP Address
$VMHostClusterName = [System.Net.Dns]::GetHostbyAddress($IP.Name)
$VMHostCluster = Get-SCVMHostCluster -Name $VMHostClusterName.HostName
# Giving the IP Address back to the IP Pool
Get-SCIPAddress -IPAddress $IP | Revoke-SCIPAddress
# Allocating the IP Address to the VM Host Cluster
Grant-SCIPAddress -GrantToObjectType HostCluster -GrantToObjectID $VMHostCluster.ID -IPAddress $IP.Name -StaticIPAddressPool $IPPool -Description $VMHostCluster.Name

Monday, December 21, 2015

Throttling Office 365 PST Upload traffic

Microsoft Office 365 offers an automated PST upload service called the "Office 365 Import Service". The feature entered "Public Preview" in August 2015 and is free while it remains in the preview stage. It looks like it will become a paid feature at general release, priced on the amount of data (per GB) imported into the service. Microsoft anticipates making the service available for purchase sometime in the first quarter of 2016.

It is a very nice feature for migrating small customers with their non-Exchange mail mailboxes, and medium/large customers with PST archives exported from archiving solutions, to Exchange Online (or you can ship SATA II/III hard drives to Microsoft as well).

The focus of this post won't be the "Office 365 Import Service" configuration itself; that process is very straightforward and described on TechNet. Instead, I will focus on throttling network traffic during the PST upload.

The Microsoft Azure AzCopy Tool and Azure Storage Explorer are the tools used to upload and manage PSTs in Office 365.


We have found that during a PST upload it is easy to saturate the customer's available bandwidth and put unnecessary load on the customer network (for example, during business hours).

Starting with Windows Vista/Windows Server 2008 we have a nice feature called Policy-based QoS. The feature still works on the latest Windows OS versions.

If you like UI-based process:

1. Start mmc
2. Add Group Policy Object Editor for Local Computer (assuming you run AzCopy on local server)


3. Go through the Policy-based QoS Wizard and configure a throttling limit (e.g. 20 Mb/s) that suits your particular environment, applied to the AzCopy.exe executable


4. Once the Policy-based QoS is configured, you can start using the Microsoft Azure AzCopy Tool for the upload process.

For Windows 8/Windows Server 2012 and later, QoS policies can be controlled with the PowerShell New-NetQosPolicy cmdlet (run as Administrator):

New-NetQosPolicy -Name "AzCopy" -AppPathNameMatchCondition AzCopy.exe -ThrottleRateActionBitsPerSecond 20MB

You can change the throttle rate on the fly during the upload process, and it's really cool!

Set-NetQosPolicy -Name "AzCopy" -AppPathNameMatchCondition AzCopy.exe -ThrottleRateActionBitsPerSecond 100MB
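
When the upload is finished, the policy can be removed just as easily (not shown in the original steps, but a standard NetQos cmdlet):

Remove-NetQosPolicy -Name "AzCopy" -Confirm:$false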

By default AzCopy tries to use multiple concurrent operations for the PST upload, which may lead to the following errors on slow link connections:

[2015/12/21 14:18:46][ERROR] archive1.pst: Unable to connect to the remote server
[2015/12/21 14:18:48][ERROR] archive2.pst: Unable to connect to the remote server
[2015/12/21 14:19:07][ERROR] archive3.pst: Unable to connect to the remote server
[2015/12/21 14:28:17][ERROR] archive4.pst: The client could not finish the operation within specified timeout.
The client could not finish the operation within specified timeout.
[2015/12/21 14:28:34][ERROR] archive5.pst: The client could not finish the operation within specified timeout.
The client could not finish the operation within specified timeout.
[2015/12/21 14:28:34][ERROR] archive6.pst: The client could not finish the operation within specified timeout.
The client could not finish the operation within specified timeout.

To resolve this behavior, consider running AzCopy with the /NC:1 or /NC:2 switch:

.\AzCopy.exe /Source:\\FileServer\Data /Dest:https://3bc222bbb65a457777c0444.blob.core.windows.net/ingestiondata/FileServer/Data/ /Destkey:/ui2Jiu8uihiuUI6HdokDPJsdflEU1FPnvOp63HkqwY1IP19dTPSAM0y26FLoaNpjZzCtu+DtJA1pUl6EQg== /S /NC:1 /V:E:\PSTUpload\Uploadlog.log  

where:
  • /NC - the number of concurrent operations;
  • /Source - Source fileshare directory. Required. Where the PST data is currently located;
  • /Dest - Destination virtual directory. Required. Location where PST files are staged in Azure;
  • /Destkey - Storage Key. Required. The key for the Azure storage account;
  • /S - Specifies recursive mode for copy operations. In recursive mode, AzCopy will copy all blobs or files that match the specified file pattern, including those in subfolders;
  • /V - Outputs verbose status messages into a log file (local path E:\PSTUpload\Uploadlog.log)

Some other network and capacity related notes:
What is the size of the largest PST that can be imported?
"In our testing we have found that if there are more than 1 million items in a folder in a PST, the import fails. Because it is hard to determine number of items, an approximate assumption is PST files that are of size 10 GB. If you have files that are larger than 10 GB, please break them up into smaller files. We are actively working on supporting larger per folder limits."
Do I need a specific network connection (such as ExpressRoute) to use network uploads?
"No. Any Internet-facing connection is supported."


Sunday, December 20, 2015

Using Task Scheduler to Purge Microsoft Exchange and IIS Logs

Pingback from Shane Jackson blog

Microsoft Exchange creates a lot of logs. Unless these are managed, they can quickly fill up your disk space. The following describes one way to use PowerShell and Task Scheduler to automatically purge the Exchange 2013+ and IIS logs. The scheduled task is especially useful for Hybrid deployments where the Exchange server is used just for managing mailbox properties (for hybrid identities) and you are not really interested in the deeper diagnostic logs.

Scheduled Task Summary

Task Name: Purge Exchange logs older than 7 days
Function: Deletes all Exchange logs older than 7 days from 'c:\program files\microsoft\exchange server\V15\Logging'
Schedule: Daily at 1am
Program Called: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Parameters: -NonInteractive -WindowStyle Hidden -command ". 'c:\program files\microsoft\exchange server\V15\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto; gci 'c:\program files\microsoft\exchange server\V15\Logging' -Directory | gci -Include '*.log','*.blg' -Recurse | ? LastWriteTime -lt (Get-Date).AddDays(-7) | Remove-Item"
Runs As: SYSTEM

Task Name: Purge IIS logs older than 14 days
Function: Deletes all IIS logs older than 14 days from 'c:\inetpub\logs'
Schedule: Daily at 1am
Program Called: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Parameters: -NonInteractive -WindowStyle Hidden -command ". 'c:\program files\microsoft\exchange server\V15\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto; gci 'C:\inetpub\logs' -Directory | gci -Include '*.log','*.blg' -Recurse | ? LastWriteTime -lt (Get-Date).AddDays(-14) | Remove-Item"
Runs As: SYSTEM
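
If you'd rather create the scheduled task from PowerShell than through the Task Scheduler UI, a minimal sketch for the Exchange task could look like this (the argument string mirrors the table above; adjust paths and retention to taste):

$exchangeArgs = '-NonInteractive -WindowStyle Hidden -command ". ''c:\program files\microsoft\exchange server\V15\bin\RemoteExchange.ps1''; Connect-ExchangeServer -auto; gci ''c:\program files\microsoft\exchange server\V15\Logging'' -Directory | gci -Include ''*.log'',''*.blg'' -Recurse | ? LastWriteTime -lt (Get-Date).AddDays(-7) | Remove-Item"'
$action  = New-ScheduledTaskAction -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' -Argument $exchangeArgs
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName 'Purge Exchange logs older than 7 days' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest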

Saturday, December 19, 2015

Why should you have at least one Exchange Server on-premises with AAD Connect/or AAD Sync/or DirSync

I've tried multiple times to explain to my sales and engineering teams that for small organizations it sometimes makes no sense to deploy AAD Connect/AAD Sync/DirSync for "password sync" or federated SSO, because these organizations would then have to keep an Exchange Server 2010+ on-premises in addition to the synchronization tool. It is easier for such customers to keep and maintain identities separately.

Example customer scenarios:

  • "We want to make our legacy Exchange server decommission after Office 365 migration but want users and passwords synchronized from our Active Directory"

  • "We are going to be migrated from non-Exchange Server (Google Apps, Lotus Notes, GroupWise, Zimbra, etc.) to Office 365 and want to have users and passwords synchronized from our Active Directory"

  • "We used Cloud only identities for some time and now we want to have password sync or SSO using our Active Directory on prem credentials."

All of the scenarios above require an Exchange Server 2010+ deployed on premises for Exchange Online mailbox management.

I believe everyone who has ever worked with Hybrid has seen a similar error when trying to change a mailbox property such as "Hide from address lists" or "Add new SMTP address" via the Exchange Online admin center instead of the on-premises Exchange Server ECP:
error
The operation on mailbox "User1" failed because it's out of the current user's write scope. The action 'Set-Mailbox', 'HiddenFromAddressListsEnabled', can't be performed on the object 'User1' because the object is being synchronized from your on-premises organization. This action should be performed on the object in your on-premises organization.

error
The operation on mailbox "User1" failed because it's out of the current user's write scope. The action 'Set-Mailbox', 'EmailAddresses', can't be performed on the object 'User1' because the object is being synchronized from your on-premises organization. This action should be performed on the object in your on-premises organization.
Here are collected references to official Microsoft sources:

http://blogs.msdn.com/b/vilath/archive/2015/05/26/office-365-and-dirsync-why-should-you-have-at-least-one-exchange-server-on-premises.aspx
https://technet.microsoft.com/en-us/library/dn931280(v=exchg.150).aspx

You can obtain a free Exchange Server product key if you still want this scenario.
It can be a single VM and it will contain no mailbox data (mailbox content is Online). It doesn't have to be a full Exchange Server Hybrid setup. Exchange 2013+ has the same UI as Exchange Online for recipient management.

So the only confusing thing here is that you use an on-premises server to manage cloud mailboxes. You can use the UI or the Shell (EMS) to manage them.
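
For example, the two property changes that trigger the errors above can be made from the on-premises EMS against the synchronized (remote) mailbox object; a quick sketch (user name and address are placeholders):

Set-RemoteMailbox -Identity User1 -HiddenFromAddressListsEnabled $true
Set-RemoteMailbox -Identity User1 -EmailAddresses @{add='smtp:user1.alias@contoso.com'}

The change is written to on-premises Active Directory and flows to Exchange Online with the next directory synchronization cycle.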

Screenshot from Exchange Management Console Exchange Server 2010 On-Premises:


Screenshot from Exchange admin center Exchange Server 2013 On-Premises:


Screenshot from the Exchange admin center of Exchange Online (Wave 15), where on-premises identities with remote Office 365 mailbox properties are synchronized to Office 365.

It's your own choice to use the NON-SUPPORTED ADSIEdit way or PowerShell scripts like this instead of the Exchange Server EMC/ECP or EMS.

Call Quality Dashboard for Skype for Business Online (Preview)

Call Quality Dashboard (CQD), previously a free Microsoft add-on for Skype for Business Server, is now available in Preview for Skype for Business Online Office 365 tenants.

More details on how to set it up and what it contains are on the Office 365 support page: Turning on and using Call Quality Dashboard in Skype for Business Online. At the moment it’s a single page preview edition.


Before you can start using the CQD, you'll need to activate it for your Office 365 organization:

1. Sign in to your Office 365 organization using your Global admin account, and then select the Admin tile to open the Admin center.
2. In the left pane, under Admin, select Skype for Business to open the Skype for Business admin center.
3. In the Skype for Business admin center, select Tools in the left pane and then select Skype for Business Online Call Quality Dashboard.


4. Then I was asked to sign in again to activate it.


5. It may take a couple of hours to process enough data to display meaningful results in the app. In my case it took about 10 minutes before the charts started getting populated.


6. Showing reports


7. The CQD Preview edition dashboard includes a Tenant Data Upload page, accessed by selecting the Tenant Data Upload link in the top right corner. Admins use this page to upload their own building information, such as the mapping of IP addresses to geographical information. In the "Building" TSV file there must currently be 14 columns in each row, each column must have the correct data type, and the columns must be in the order defined in the CQD documentation.


Friday, December 18, 2015

Obtain free Exchange Hybrid Edition Product key for Exchange 2010/2013/2016

You can request a free Hybrid Edition product key (KB2939261) if all of the following conditions apply to you:
  • You have an existing non-trial Office 365 Enterprise subscription;
  • You have a non-licensed Exchange 2010 SP3, 2013, or 2016 server in your on-premises organization;
  • You will not host any on-premises mailboxes on the Exchange 2010 SP3, 2013, or 2016 server to which you apply the Hybrid Edition product key.
To obtain a Hybrid Edition product key for your Exchange 2010 SP3, 2013, or 2016 server, go to the Exchange hybrid product key distribution wizard.


Apply key to your Hybrid Exchange Servers:

Set-ExchangeServer -Identity HybServer01 -ProductKey XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
Get-Service -Name MSExchangeIS -ComputerName HybServer01 | Restart-Service
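
To double-check that the key was accepted, you could query the server afterwards (a quick verification step, not part of the original KB):

Get-ExchangeServer -Identity HybServer01 | Format-List Name, Edition, IsExchangeTrialEdition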

Sunday, December 13, 2015

How to determine if SNMP network device is CERTIFIED for SCOM extended monitoring before discovery

Microsoft System Center Operations Manager 2012+ provides the ability to discover and monitor network routers and switches, including the network interfaces and ports on those devices and the virtual LAN (VLAN) that they participate in. Operations Manager can tell you whether network devices are online or offline, and can monitor the ports and interfaces for those devices. Operations Manager 2012 can monitor network devices that support SNMP, and can provide port monitoring for devices that implement interface MIB (RFC 2863) and MIB-II (RFC 1213) standards. Operations Manager may provide more detailed processor or memory monitoring for some network devices.

The only problem is that out of the box SCOM 2012+ can provide such monitoring only for CERTIFIED network devices.

Microsoft has published lists of the network devices that have extended monitoring capability for SCOM 2012 and SCOM 2012 R2/2016. Those lists contain about 800 network devices, are outdated, and may require an update with each System Center release, Service Pack, or Update Rollup. The true list contains 2,500+ network devices. So how do you get this list?

If your network device is not CERTIFIED in SCOM, it will get the GENERIC status, which only allows SCOM to tell you whether the network device is online or offline, plus some basic interface monitoring.

Dell Force10 S4810 - network device with CERTIFIED status:


Juniper EX4550-48T - network device with GENERIC status:

Problem: 

So what is the actual list of network devices with extended monitoring capabilities for your current SCOM 2012+ environment? And how can you determine whether an SNMP network device is CERTIFIED for extended monitoring before the actual SNMP discovery (e.g. when you plan to purchase a new network device)?

Solution: 

The proposed solution is a PowerShell script which gathers the required information from your SCOM environment.

Step 1. Go to any SCOM 2012+ Management server in your environment (in my particular case it is SCOM 2012R2)
Step 2. Open default installation folder:
"C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\NetworkMonitoring\conf\discovery"
You will find a set of vendor-specific files with the prefix "oid2type" and the file extension ".conf". Those files contain the actual information about SNMP network device certification status and SCOM extended monitoring capabilities for a particular vendor's network devices.
Step 3. Open your device vendor's oid2type*.conf file with Notepad and search it using the SNMP System OID value (if you already know it) or the model name (be creative, it's just a simple keyword search).


In my earlier example with the Dell Force10 S4810, I have extended monitoring for CPU and RAM in addition to the extended port/interface monitoring. If you haven't found your device in the vendor's oid2type*.conf file, your network device will have GENERIC status in SCOM and no extended monitoring.

Step 4. Get the full list of CERTIFIED network devices with extended monitoring capability for SCOM. Download and run the Get-SCOMCertifiedNetworkDevices.ps1 PowerShell script on any SCOM 2012+ Management server. It will generate a CSV list (my example contains about 2700 devices) on your desktop with all CERTIFIED network devices for your current SCOM 2012+ deployment.
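
For reference, a minimal sketch of how such a list could be scraped from the oid2type*.conf files (this is not the actual Get-SCOMCertifiedNetworkDevices.ps1 script; it assumes each device entry is a brace-delimited block containing VENDOR, MODEL and CERTIFICATION fields, so verify the pattern against your own files):

$confPath = 'C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\NetworkMonitoring\conf\discovery'
$devices = foreach ($file in Get-ChildItem -Path $confPath -Filter 'oid2type*.conf') {
    # Each device entry is assumed to be a brace-delimited block with VENDOR/MODEL/CERTIFICATION fields
    foreach ($block in ((Get-Content -Path $file.FullName -Raw) -split '\}')) {
        $vendor = [regex]::Match($block, 'VENDOR\s*=\s*"?([^"\r\n]+)').Groups[1].Value.Trim()
        $model  = [regex]::Match($block, 'MODEL\s*=\s*"?([^"\r\n]+)').Groups[1].Value.Trim()
        $cert   = [regex]::Match($block, 'CERTIFICATION\s*=\s*"?(\w+)').Groups[1].Value.Trim()
        if ($cert) {
            [pscustomobject]@{ File = $file.Name; Vendor = $vendor; Model = $model; Certification = $cert }
        }
    }
}
$devices | Where-Object { $_.Certification -eq 'CERTIFIED' } |
    Export-Csv -Path "$env:USERPROFILE\Desktop\SCOMCertifiedNetworkDevices.csv" -NoTypeInformation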


In the next article I'll show you how to "convert" the GENERIC network device Juniper EX4550 to CERTIFIED using just Notepad and no XML MP development effort.


Saturday, December 12, 2015

Enable Disk Counter in Task Manager on Windows Server 2012/2012 R2

By default, Disk metrics in Task Manager are not monitored on Windows Server 2012/R2 due to performance implications. Enabling monitoring of disk metrics is useful when troubleshooting issues.

To enable Task Manager’s Disk Counter run diskperf -y:



To disable Task Manager’s Disk Counter run diskperf -n:



Thursday, December 10, 2015

Quick Office 365 Demo Environment

If you have a valid MPN account and want to quickly create an Office 365 E3 demo environment for a customer, go to:

https://demos.microsoft.com/

Note: All Office 365 Demo tenants and accounts will expire 90 days after their creation. To create a new demo tenant, please visit the "Get Demos" page of https://demos.microsoft.com/.

Tuesday, December 8, 2015

Recreate Remote App Collection - Export-Import published programs

The following PowerShell commands helped me when I had to recreate an RDS Session Collection for certain reasons and preserve the list of published programs.

Export Remote Apps to CSV file command:

Get-RDRemoteApp | Export-CSV myapps.csv

Import Remote Apps from CSV to RDS Session Collection command:

Import-CSV .\myapps.csv | foreach {New-RDRemoteApp -CollectionName "New Collection Name" -Alias $_.Alias -FolderName $_.FolderName -CommandLineSetting $_.CommandLineSetting -RequiredCommandLine $_.RequiredCommandLine -DisplayName $_.DisplayName -FilePath $_.FilePath -ConnectionBroker activebroker.com}

Monday, December 7, 2015

Publish Remote Desktop Session (Full Desktop) in a Remote App Session Collection

Remote Desktop Services 2012+ allows you to publish two types of Session Collections on a Remote Desktop Session Host: either a Remote Desktop Session Collection (Full Desktop experience) or a RemoteApp Session Collection. By default you cannot have both types on the same Session Host, and this is frustrating.
A RemoteApp Session Collection looks like this:

From the image above you can see that only RemoteApps are shown and there is no "Full Desktop" RDP icon.
In fact, the moment you publish your first RemoteApp, the desktop connection disappears and the checkbox "Show the session collection in RD Web Access" is grayed out.


You may find some technical blogs which provide guidance on how to bypass this limitation (for Session Collections only). Per those blogs, you may consider changing the following registry value on your RDCB hosts:

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\<collection>\RemoteDesktops\<collection>


To show the Desktop Session icon, change the DWORD value (ShowInPortal) from 0 to 1.
You can also rename the Desktop Session icon to something more to your liking. To do this, change the String value (Name).
There are, however, issues after such modifications.

Issue #1
After some testing you may find issues with User Profile Disks (UPD) and multiple RDS Hosts. When a user logs in to a RemoteApp, it will map the user to the user profile disk (UPD) as expected.
If the user starts a new connection (Full Desktop) while having a RemoteApp open, the user will get a temporary profile and a blank screen as the UPD cannot load. This means that when you launch a RemoteApp, the UPD is locked to that session meaning you cannot load a full blown desktop at the same time.

Issue #2
The "Full Desktop" also stops working, and its registry settings are cleared, after any minor change to the already created RemoteApp Session Collection. So you need to update the registry key after every change to the Session Collection.

Solution

Here is a more elegant and reliable solution that works without any issues (single Active Directory domain for RDSH and RDS users).
  • Publish "Remote Desktop Connection" (RDP client mstsc.exe) program
  • Go to the published program (mstsc.exe) settings
  • Type the RemoteApp program name, ex. "Full Desktop"

  • Go to the "Parameters" tab of "mstsc.exe" published program. Here is where the magic begins

  • Update "Command-line Parameters" with parameter "/V:%computername%.%UserDnsDomain%"
  • Save properties by clicking "OK" and check result

This modification makes the RDP session run on the same Session Host where mstsc.exe was initially launched (the provided command-line parameter resolves to the FQDN of the current RDS host). No registry modification is needed. No issues at all.
The solution is really interesting for hosters and MSPs who provide Microsoft RDS DaaS (Desktop as a Service) to their customers: you can combine both the "Full Desktop" experience and the RemoteApp Session Collection experience with no need to create separate RDS farms.
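
For those who prefer PowerShell over the Server Manager UI, the same publishing step could be scripted roughly like this (the collection name and broker FQDN are placeholders for your environment):

New-RDRemoteApp -CollectionName "RemoteApp Collection" -DisplayName "Full Desktop" -Alias "FullDesktop" -FilePath "C:\Windows\System32\mstsc.exe" -CommandLineSetting Require -RequiredCommandLine "/V:%computername%.%UserDnsDomain%" -ConnectionBroker "broker.contoso.com"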

Sunday, December 6, 2015

How to Turn a Polycom VVX/SoundPoint IP phone into Common Area Phone

The Common Area Phone is a really cool feature introduced in Microsoft Lync Server 2010. What is meant by a common area phone? In the context of this blog, a common area phone is a Polycom phone device located in an area such as a cafeteria, hotel/office lobby, meeting room, or even at a security entrance. It is a phone device located in an area where multiple people, whether authorized users or not, have access to it, and the phone is not dedicated to a specific user.
Note: Cloud PBX with PSTN Calling doesn't support Common Area Phone

A quick search suggests you have quite a few options to purchase for this specific purpose: Polycom's CX500, CX600, CX3000 and CX5000, Aastra's 6721ip and 6725ip, and HP's 4110 IP Phone. Let's demystify this.

My target goal is to provision a Common Area Phone for a remote branch office without purchasing additional phone devices, reusing the existing ones. The phones located in the remote branch are Polycom SoundPoint IP 450 and Polycom VVX models.

Prerequisites for Common Area Phone


  • Check if Skype for Business Server (Lync) is enabled for PIN authentication:
Get-CsWebServiceConfiguration | fl Identity, UsePinAuth, UseCertificateAuth
  • The DHCP 043/120 options, which provide the ability to support PIN authentication (a strict requirement for Common Area Phone accounts before they can authenticate via TLS-DSK);
  • The DHCP 004/042 options (Time Server). Although the time server location provides the accurate time required to perform the authentication and registration processes, the phone will display the time in GMT by default. To show the correct local time on the phone's display, the standard time offset DHCP option 002 (Time Offset, optional) can be used.
    Check the presence of these DHCP options on your DHCP server:
Get-DhcpServerv4OptionValue -ComputerName yourdhcpserver.com -ScopeId 172.20.0.0 -All | ft OptionID,Name,Value,VendorClass
  • You can use NTP SRV DNS record as an alternative to Time Server DHCP options
    _ntp._udp.<SIP domain> pointed to NTP server;
  • PIN authentication is only supported on internal networks that can reach the internal web services on a Lync Front End server;
  • PIN Authentication doesn't work via Edge Server;
  • Phone ability to sign in via PIN authentication.
It is possible to internally provision a Common Area Phone and then take the phone off-site, but if the user signs out or the client certificate expires (or is revoked by the server) then the device will not be able to connect again without bringing it back inside the network.
As you probably noticed, everything revolves around phone and network support for PIN authentication.
I assume other SfB-certified non-Polycom devices may also behave like Common Area Phones (just check whether they support PIN authentication).

Recommended settings for Polycom/Soundpoint IP models acting as Common Area Phones


If we are deploying a phone in a common area, we will most likely want to disable some of the default features, functions, and physical ports on the VVX/SoundPoint IP (this is not a strict prerequisite but a recommendation).
  1. Disable physical ports on the phone such as the USB and PC ports
  2. Disable the speakerphone hard key and speakerphone functionality
  3. Disable the Home hard key to limit access to menus such as the Settings menu
  4. Remove and/or limit soft key functions (New Call, Sign Out, etc.)
  5. Disable additional features.
  6. Force phone device to use PIN authentication
These parameters would be put into your XML configuration file that will be uploaded to the phone via a provisioning server or via the WebUI of the phone. All commands are case sensitive.

  1. Disable physical ports on the phone such as the USB and PC ports:
device.set = "1"
device.net.etherModePC = Disabled
device.auxPort.enable.set = "1"
device.auxPort.enable = "0"
feature.usb.power.enabled = "0"
  2. Disable the speakerphone hard key and speakerphone functionality:
up.handsfreeMode = "0"
  3. Disable the Home hard key to limit access to menus such as the Settings menu:
key.26.function.prim = null
  4. Remove and/or limit soft key functions (New Call, Sign Out, etc.):
feature.enhancedFeatureKeys.enabled = "1"
softkey.feature.basicCallmanagement.redundant = "0"
softkey.feature.forward = "0"
softkey.feature.simplifiedSignIn = "0"
softkey.feature.mystatus = "0"
softkey.feature.buddies = "0"
softkey.feature.newcall = "0"
softkey.feature.doNotDisturb = "0"
  5. Disable additional features:
video.enable = "0"
diags.pcap.enabled = "0"
feature.callRecording.enabled = "0"
feature.pictureFrame.enabled = "0"
dir.local.readonly = "1"
  6. Force phone device to use PIN authentication
reg.1.auth.usePinCredentials = "1"

If your particular Polycom phone model doesn't support the mentioned XML settings, the phone will ignore them.

Further Common Area Phone provisioning for Skype for Business Server (Lync) is a standard, PowerShell-based process fully described on TechNet. Alternatively, you can use a UI-based tool called "Lync Common Area Phone Management"; this tool was successfully tested with Skype for Business Server 2015.
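
As a quick illustration of that standard process, creating the contact object and assigning a sign-in PIN from the Skype for Business/Lync Management Shell looks roughly like this (the line URI, pool, OU, display name and PIN below are placeholders):

New-CsCommonAreaPhone -LineUri "tel:+15551230100" -RegistrarPool "sfbpool01.contoso.com" -OU "OU=Common Area Phones,DC=contoso,DC=com" -DisplayName "Lobby Phone"
Get-CsCommonAreaPhone -Filter {DisplayName -eq "Lobby Phone"} | Set-CsClientPin -Pin 24681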

There is another cool feature related to Common Area Phones called Hot-Desking. You can set up Common Area Phones as hot-desk phones. With hot-desk phones, users can log on to their own user account and, after they are logged on, use Skype for Business Server features and their own user profile settings. But that is another topic for discussion.


Friday, December 4, 2015

Azure AD Connect Password Sync fails for multiple forests

Observed issue: AAD Connect does not synchronize passwords when it is configured for multiple source AD forests.
Fix: Change the ‘Configure Directory Partitions’ credential setting from ‘Use default forest credentials’ to ‘Alternate credentials for this directory partition’.
No service restart or reboot required. The way to test it is to reset a password and then monitor the Application event log on the Azure AD Connect server. Within 2 to 3 minutes you should see an event log entry that the password has been successfully set.
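
A quick way to watch for those entries (assuming the password sync events 656/657 are logged under the "Directory Synchronization" source, as they are in current AAD Connect builds):

Get-EventLog -LogName Application -Source "Directory Synchronization" -Newest 20 |
    Where-Object { $_.EventID -in 656, 657 } |
    Format-Table TimeGenerated, EventID, Message -AutoSize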

Office 365 ODT Configuration XML Editor

Notepad is either your tool of choice or a last resort for editing XML files, but without the red squiggly lines we have come to love in Office. If you have ever accidentally typed </congifuration>, then the web editor for the Office ProPlus Click-to-Run Configuration.xml file is for you. This web page provides a graphical method to generate and edit the Office Click-to-Run Configuration.xml file.


The Click-to-Run for Office 365 Configuration.xml file is used to specify Click-to-Run installation and update options. The Office Deployment Tool includes a sample Configuration.xml file that can be downloaded. Administrators can modify the Configuration.xml file to configure installation options for Click-to-Run for Office 365 products.
The Click-to-Run Configuration.xml file is a necessary component of the Office Deployment Tool. Click-to-Run customizations are performed primarily by starting the Office Deployment Tool and providing a custom Configuration.xml file. The Office Deployment Tool performs the tasks that are specified by using the optional properties in the configuration file.
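
For context, a typical Configuration.xml that such an editor generates looks something like this (the source path is a placeholder; adjust products, languages, and options to your deployment):

<Configuration>
  <Add SourcePath="\\Server\Share\Office" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Updates Enabled="TRUE" />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>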
Related: Product IDs that are supported by the Office Deployment Tool for Click-to-Run

Saturday, November 28, 2015

SCOM 2012 Network Monitoring: Rename network adapter "PORT-xxx"

Scenario:
Everyone who has implemented SCOM 2012 Network Monitoring will be satisfied with the improvements over SCOM 2007: more and more devices can be deeply monitored with SCOM 2012. One of the downsides at this moment is the naming of the network adapters. The name inside SCOM does not always reflect the name on the switch. The alerts sent for these network adapters will also contain the DisplayName, so the subject of an alert with the naming above will be something like 'Interface PORT-0.15 is flapping'. This alert is not very useful. Sounds familiar?


Solution:
The following XML Management Pack from Arjan Vroege will change the DisplayName of your network adapters in SCOM. The standard names of the network adapters cannot be changed through the standard configuration settings; with this Management Pack the names are changed by using SNMP variables.
After downloading and importing the MP, your existing network adapters will be renamed using the following naming convention:

$Displayname = $NodeSysName + " -- " + $Description + " -- " + $InterfaceAlias


This information is collected by SCOM through the Network Discovery and consists of SNMP variables. If you want to change them, you have to change the discovery in the Management Pack. You can find and change the discovery in the Authoring section of SCOM.

<?xml version="1.0" encoding="utf-8"?>
<ManagementPack SchemaVersion="2.0" ContentReadable="true" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Manifest>
    <Identity>
      <ID>Network.Monitoring.ChangeNetworkAdapter</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>Network.Monitoring.ChangeNetworkAdapter</Name>
    <References>
      <Reference Alias="Windows">
        <ID>Microsoft.Windows.Library</ID>
        <Version>7.5.8501.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="System">
        <ID>System.Library</ID>
        <Version>7.5.8501.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="SNL">
        <ID>System.NetworkManagement.Library</ID>
        <Version>7.1.10226.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
    </References>
  </Manifest>
  <Monitoring>
    <Discoveries>
      <Discovery ID="Network.Monitoring.ChangeNetworkAdapter.Network.Monitoring.ChangeNetworkAdapter.Discovery" Target="SNL!System.NetworkManagement.NetworkAdapter" Enabled="true" ConfirmDelivery="false" Remotable="true" Priority="Normal">
        <Category>Discovery</Category>
        <DiscoveryTypes>
          <DiscoveryClass TypeID="SNL!System.NetworkManagement.NetworkAdapter" />
        </DiscoveryTypes>
        <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.TimedPowerShell.DiscoveryProvider">
          <IntervalSeconds>86400</IntervalSeconds>
          <SyncTime>04:00</SyncTime>
          <ScriptName>NetworkAdapterChange.ps1</ScriptName>
          <ScriptBody>

    # NetworkAdapterChange.ps1
    # Written by Arjan Vroege. All rights reserved.

    param($SourceID, $ManagedEntityID, $Key, $Description, $InterfaceAlias, $DeviceKey, $NodeSysName, $CurDisplayName)
    $Displayname = $NodeSysName + " -- " + $Description + " -- " + $InterfaceAlias

    if ($CurDisplayName -eq $DisplayName) { continue }
    
    $scomapi = new-object -comObject "MOM.ScriptAPI"
    $DiscData = $scomapi.CreateDiscoveryData(0, $SourceID, $ManagedEntityID)

    #fill out the key properties
    $NetworkAdapter = $DiscData.CreateClassInstance("$MPElement[Name='SNL!System.NetworkManagement.NetworkAdapter']$")
    $NetworkAdapter.AddProperty("$MPElement[Name='SNL!System.NetworkManagement.NetworkAdapter']/Key$", $Key)
    $NetworkAdapter.AddProperty("$MPElement[Name='SNL!System.NetworkManagement.Node']/DeviceKey$", $DeviceKey)
    $NetworkAdapter.AddProperty("$MPElement[Name='System!System.Entity']/DisplayName$", $Displayname)
    $scomapi.LogScriptEvent("NetworkAdapterChange.ps1",101,2, "Discovery was executed for $Devicekey and Current: $curDisplayName --&gt; new: $Displayname")

    # add the WebSite to the DiscoveryData object
    $discData.AddInstance($NetworkAdapter)
    $discData
  </ScriptBody>
          <Parameters>
            <Parameter>
              <Name>SourceID</Name>
              <Value>$MPElement$</Value>
            </Parameter>
            <Parameter>
              <Name>ManagedEntityID</Name>
              <Value>$Target/Id$</Value>
            </Parameter>
            <Parameter>
              <Name>Key</Name>
              <Value>$Target/Property[Type="SNL!System.NetworkManagement.NetworkAdapter"]/Key$</Value>
            </Parameter>
            <Parameter>
              <Name>Description</Name>
              <Value>$Target/Property[Type="SNL!System.NetworkManagement.NetworkAdapter"]/Description$</Value>
            </Parameter>
            <Parameter>
              <Name>InterfaceAlias</Name>
              <Value>$Target/Property[Type="SNL!System.NetworkManagement.NetworkAdapter"]/InterfaceAlias$</Value>
            </Parameter>
            <Parameter>
              <Name>DeviceKey</Name>
              <Value>$Target/Host/Property[Type="SNL!System.NetworkManagement.Node"]/DeviceKey$</Value>
            </Parameter>
            <Parameter>
              <Name>NodeSysName</Name>
              <Value>$Target/Host/Property[Type="SNL!System.NetworkManagement.Node"]/sysName$</Value>
            </Parameter>
            <Parameter>
              <Name>CurDisplayName</Name>
              <Value>$Target/Property[Type="System!System.Entity"]/DisplayName$</Value>
            </Parameter>
          </Parameters>
          <TimeoutSeconds>60</TimeoutSeconds>
          <StrictErrorHandling>true</StrictErrorHandling>
        </DataSource>
      </Discovery>
    </Discoveries>
  </Monitoring>
  <LanguagePacks>
    <LanguagePack ID="ENU" IsDefault="true">
      <DisplayStrings>
        <DisplayString ElementID="Network.Monitoring.ChangeNetworkAdapter.Network.Monitoring.ChangeNetworkAdapter.Discovery">
          <Name>Network.Monitoring.ChangeNetworkAdapter.Discovery</Name>
          <Description>Description for the new discovery.</Description>
        </DisplayString>
      </DisplayStrings>
      <KnowledgeArticles></KnowledgeArticles>
    </LanguagePack>
  </LanguagePacks>
</ManagementPack>

In the next article I'll show you how to "convert" GENERIC network devices (Juniper EX4550 and Dell Force10 MXL 10/40GbE) to CERTIFIED using just Notepad and no XML MP development effort.
