Sunday, March 19, 2017

Reset Skype for Business Statistics Manager (StatsMan) history


You have a brand new or existing deployment of Skype for Business Statistics Manager. Your graph shows some strange old statistics (from a few days or weeks ago) alongside the recent statistics (by default, the last 24 hours) in a single chart.

The root cause is not known yet. It could be related to the open-source Redis in-memory caching system that StatsMan uses to store its data.


Reset the statistics completely (--age:0) or trim them to the last 24 hours (--age:24).
1. On the server where the SfB Statistics Manager Listener is installed, run "cmd" as Administrator.
2. Run either:

>"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:cleanup --age:0


>"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:cleanup --age:24

Using age: 1.00:00:00
FillNameCacheAsync of 6,889 counters took 180ms
GetLastWriteTimesAsync of 6,889 counters took 56ms
Scanned through 7,389 redis keys in 79ms and found 36 that should be deleted.
36 counters, 0 hosts, 36 storage keys, haven't been written to in 6.00:00:00
Do you want to remove the above hosts,36 counters and 36 storage keys? Y|[N]:y
Removed: 36 keys in 29ms
BulkDereferenceCounters: Attempting to dereference 36 storage keys
BulkDereferenceCounters: Deleted 36 last write time entries in 10ms
BulkDereferenceCounters: Removed 36 storage key entries from set in 1ms

Deleted 0 server infos.

After clearing the history, I got a normal graph.

More information: the full help output of PerfAgentStorageManager.exe /? is below.


    updateServerInfo : Need --redis : Must provide --file or (--hostname and one or more --population options), --file should be in the format of getServerInfo output
                     : optional --noPrompt (will automatically do it without asking)
                     : optional --disableUpdate (will do adds only)
                     : optional --debug will show extra output
                     : The following shard scheme docs are only for multi-azure storage account setups:
                     : optional --shardScheme=none (to disable updating shard assignment for new servers)
                     : optional --shardScheme=<scheme>
                     :          <scheme> is one or more population matches separated by comma.
                     :          The default is Pool_,Site_ and it is recommended to not change this.
                     :          This will look at each new server's populations and try to match (prefix match) a
                     :          scheme component, then assign the server to that population's shard if it exists.
                     :          So for the default scheme it will look for the server's Pool shard; if that doesn't exist
                     :          then it will look for the server's Site shard; if that doesn't exist it will just stay as
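As an illustration, a hedged sketch of registering a new server (the hostname and population names below are placeholders, not taken from any real deployment), using the same --option:value syntax as the cleanup commands above:

```shell
REM Hypothetical example: myFE01.contoso.com, Pool_myPool and Site_mySite
REM are placeholder names for illustration only.
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:updateServerInfo --hostname:myFE01.contoso.com --population:Pool_myPool --population:Site_mySite
```

Without --noPrompt the tool asks for confirmation before applying the change.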

    getServerInfo : Need --redis : optional --uri for the location to upload to, Default is redishash:///_ServerInfo
                  : optional --file will output the xml for edit
                  : optional --mode=command will output the information with the command line options for PerfAgentStorageManager
                  : optional one or more --population will get only the servers in one of those populations

    getRedirectCandidates : Need --redis, default is to find populations with a single server where the name of the server is the last part of the population
                  : optional --mode=<current>, current will retrieve the current list of configured redirected populations
                  : optional --file=filename.csv will output a file that can be used with --action=updateRedirectCandidates
                  : example: Pool_mySEserverFE01 server=mySEServerFE01

    updateRedirectCandidates : Need --redis --file is a csv with Population,Server
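A hedged sketch of the round trip (the file path and server names are hypothetical): export the candidates to a CSV, review the Population,Server pairs, then feed the file back:

```shell
REM Hypothetical file path; the CSV format is Population,Server per line,
REM e.g. Pool_mySEserverFE01,mySEServerFE01
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:getRedirectCandidates --file:C:\temp\redirects.csv
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:updateRedirectCandidates --file:C:\temp\redirects.csv
```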

    getShardInfo : Need --redis.
                : Optional: One or more modes
                : --mode=verbose to output every assignment
                : --mode=password to see the configured password (careful!)
                : --mode=history to see the history of all shard assignments (assumes --mode=verbose)

    updateShardAssignments : Need --redis and one or more --population or a single --hostname and --mode=specific or --mode=population and --value
                : --value=# the number of the shard to assign them to
                : mode=specific means assign that population or host to the shard
                : mode=population means assign the population(s) to the shard and all of the servers in all populations given
                : mode=correction (be careful with this) means update any shard assignment that matches --value=shardid and --match=tableID (see -a=getshardinfo -verbose)
                : this is used when a mistake in timing was made and the shard assignment hasn't been used yet; this will update to the next
                : if you get this wrong and use it on something that is in use, the data will be missing and not easily recovered
                : The assignment won't switch until the next 12 hour period. Be careful that you assign far enough from daily 12 hour boundaries
                : Optional: if in --mode=population you can also add a --match=<population match> to also add any population to the shard for servers
                : in the given population. For example: --mode=population --population=Site_Blah --match=Pool_ will add any Pool that has servers in that
                : site. It will ask you for each one unless you pass --noPrompt
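For instance, a hedged sketch (the site name and shard number are invented) of assigning a population and its servers to a shard:

```shell
REM Hypothetical example for a multi-azure-storage-account setup;
REM Site_Contoso and shard 1 are placeholders.
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:updateShardAssignments --mode:population --population:Site_Contoso --value:1
```

Per the warning above, the assignment only switches at the next 12-hour boundary.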

    updateAzureShard : Need --redis and --value=<shard ID> --azure=<accountName> --azurePassword=<azure key> --thumbprint=<password decryption certificate thumbprint>
                 : Optional --noPrompt to not prompt; you can add --new if you expect it to be a new account (it will fail if the account or shard ID already exists)

    validateSharding : Need --redis and --azure=<accountName> --azurePassword=<azure key> (for the default account)
                 : Optional --noPrompt will not prompt for keys to check but you'll need to provide --value=<hostname or population name or all>

    getRedirectedCounters : Need --redis

    deleteServerInfo : Need --redis and --file or one or more --hostname entries : File contains names of servers (one per line). optional --uri for the key to upload to, Default is redishash:///_ServerInfo (only redishash uri supported)

    checkServerInfo : Need --redis and --file or one or more --hostname entries : File contains names of servers (one per line).

    cleanup : Need --redis, this will cleanup all hosts, populations and counters that haven't been written to in the given age time
            : Optional --age=<hours> (default 24)
            : Optional --match=<regex> to only match certain hosts/populations
            : Optional --notmatch=<regex> to exclude certain hosts/populations
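Beyond the --age:0 and --age:24 runs shown earlier, --match and --notmatch allow a more targeted cleanup. A hedged sketch (the OLDFE host-name pattern is invented):

```shell
REM Hypothetical: only purge data for decommissioned hosts whose names
REM start with OLDFE, keeping anything written in the last 48 hours.
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:cleanup --age:48 --match:^OLDFE
```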

    listKeys : Need --redis optional --file and --match and --key (key will make this a hash key list) --includeValues will show the value of the hash keys as well

    deleteKeys : Need --redis and --match  optional --notMatch, optional: --count=<batchSize> default=1000

    listHostIds : Need --redis and --match  optional --notMatch

    backupIdentifiers : Need --redis and --file, optional: --mode=overwrite

    subscribe : Need --redis and --counter

    listCounterStorageNames : Need --redis optional --file and --match and --notMatch

    listBuckets : Need --redis optional --match and --notMatch, optional --mode=<counters>, counters mode will try to distill down to the list of perf counters polled

    readListValues : Need --redis and --key optional --count

    deleteHashKeys : Need --redis and --key optional --match and --count (this will decide batch size), you will be prompted

    lookupStorageName: Need --redis and one or more --counter and/or --bucket this will lookup the counter or bucket storage names, optional: --mode=redirected

    findBadServers: Need --redis and optional --bucket (it will by default use \\*\Memory\% Memory Free_Maximum\@ bucket)

    getCounterValues : Need --redis and --counter : Optional --file=<csv output file path>
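To inspect one counter's raw values, e.g. when a chart still looks odd after a cleanup, something like the following should work (the counter path and output file are placeholders; counter names follow the \\host\object\counter pattern used elsewhere in this help):

```shell
REM Hypothetical counter path and output file; the quotes keep the
REM space in the counter name intact.
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:getCounterValues "--counter:\\myFE01\Memory\Available MBytes" --file:C:\temp\counter.csv
```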

    getCounterValuesRange : Need --redis and --azure. Optionally need --counter (default is memory max) --startTime and --endTime (both in local time), defaults 24 hours ago and now

    counterLastWriteTimes: Need --redis optional --age=<seconds> will only print counters older than this, optional --count will limit the output, optional --mode=host will output the hosts and their youngest value age

    getBucketValues : Need --redis and optional --bucket (default will be \\*\Memory\% Memory Free_Maximum\@), optional --count will limit the output

    getBucketDueTimes: Need --redis optional --key default is _BucketDueTime

    getStats: Shows info about number of counters/buckets/etc. Need --redis

    listAggLocks: Need --redis. Shows which server owns which agg lock for buckets.

    setting: Need --redis and --setting=SettingName and --value=[true|false|int32#]. Updates/gets.

    all: retrieves all of the current values
    aggregation: optional --value=[true|false] (if not given it gets the current value)
    redirectedAggregations: optional --value=[true|false] (if not given it gets the current value)
    persistentStore: optional --value=[true|false] (if not given it gets the current value)
    rediscounterStore: optional --value=[true|false] (if not given it gets the current value)
    bucketManagerMaxConcurrency: optional --value=int32# (if not given it gets the current value)
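So, as a hedged sketch, querying and then changing one of the settings listed above might look like this (run without --value to read the current value):

```shell
REM Read the current aggregation setting, then turn it off.
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:setting --setting:aggregation
"C:\Program Files\Skype for Business Server StatsMan Listener\PerfAgentStorageManager.exe" --redis:localhost --action:setting --setting:aggregation --value:false
```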

  --value<:value>  Some actions require a 'value' parameter.
  --redis<:redis>  Set the redis server configuration
  --redisDatabase<:redisDatabase>  Set the redis database number. Default: 0
  --redisPassword<:redisPassword>  Optional redis password. Otherwise it will be looked up from the config.
  --azure<:azure>  Set the azure storage account name
  --azurePassword<:azurePassword>  Optional azure storage password. Otherwise it will be looked up from the config.
  --startTime<:startTime>  Some actions take a startTime arg.
  --endTime<:endTime>  Some actions take a endTime arg.
  --file<:file>  Some actions take a file arg.
  --hostname<:hostname>  Some actions take a hostname arg.
  --population<:population>  Some actions take a population arg.
  --match<:match>  Some actions use a pattern or match.
  --notMatch<:notMatch>  Some actions use a pattern or match to exclude.
  --new  Some actions are being tested with a 'new' mode.
  --disableUpdate  Some actions need a way to disable updates (and do adds only).
  --counter<:counter>  Some actions take a counter arg.
  --bucket<:bucket>  Some actions take a bucket arg.
  --shardScheme<:shardScheme>  Some actions take a shard scheme.
  --age<:age>  Some actions take an age arg. In seconds.
  --noPrompt  Some actions prompt to continue, this will answer affirmative to any prompt.
  --count<:count>  Some actions can take a count
  --includeValues  Some actions have an option to show the values.
  --key<:key>  Some actions need a key arg.
  --mode<:mode>  Some actions take a mode arg (often optional).
  --thumbprint<:thumbprint>  Some actions need a certificate thumbprint arg.
  --debug[:debug]  This will enable debugging output.
  --uri<:uri>  Some actions need a uri arg.
  --help  Print this help and exit.

