Adam edited this page Jan 11, 2021 · 42 revisions

Is this script/runbook supported by Microsoft, Cireson, and/or other 3rd parties?

No (certainly not directly). This script is composed of the Microsoft Active Directory module, the Exchange Web Services API, the Cireson Web API, core PowerShell cmdlets, MimeKit, and SMLets. The SMLets module and the MimeKit assembly are open-source, community-driven projects with no official support from anyone other than the community itself.

I'm testing out creating new work items from emails. When work items get created, they don't have attachments despite the email having them. And/or the email itself gets attached but not its attachments.

By default, the connector will not add attachments that are less than or equal to 25 KB in size. This threshold can be changed near the top of the configuration.
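If you need the connector to pick up smaller (or reject larger) attachments, the size thresholds live near the top of the script's configuration section. A minimal sketch of what to look for - the exact variable names may differ between connector versions, so treat these as illustrative rather than authoritative:

```powershell
# Attachment size thresholds near the top of the connector script.
# Variable names here are illustrative; check your version of the
# script for the exact identifiers before changing them.
$minFileSizeInKB = "25"    # attachments at or below this size are skipped
$maxFileSizeInKB = "2048"  # attachments above this size are skipped
```

Lowering the minimum to "0" would bring across even the smallest attachments, such as embedded signature images, so weigh that against the noise it adds to Work Items.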

I've turned on suggesting request offerings via the Cireson Portal to our end users and the connector seems to take longer to execute/process. Is there any way to speed this up?

Yes. As of this writing and Cireson Portal 8.x, the Stored Procedure in the ServiceManagement database that returns Request Offerings does so with the base64 image data included. This adds weight to the call, so it's recommended to modify the stored procedure as described here - https://community.cireson.com/discussion/3468/light-weight-version-of-api-getservicecatalog#latest. This modification is supported by Cireson, has to be reapplied after every portal upgrade, and does nothing more than stop the base64 image data from being returned through the API call.

Where could I find out some more information about this thing?

How often are updates published?

About every 2-3 months. Any core feature bugs identified/raised are addressed as soon as possible.

What's an easy way to upgrade OR see what's changed between my connector and the one I just downloaded?

Releases of the connector since v1.7 have a link to the Pull Request so you can see the line-by-line difference between versions. Alternatively, you can check out the SMLets Exchange Connector blog post here on the topic to perform this yourself in VS Code.

This Azure Cognitive Services functionality sounds fancy. How can I get an idea of how it would have performed?

We can do you one better. Not only is there a way to see what ACS would have suggested for Request Offerings or Knowledge Articles, you can also test how it would have worked while it isn't turned on, giving you the ability to compare the results between the two. Check it out here.

What's the difference between Azure Cognitive Services (ACS) and Azure Machine Learning (AML) features?

  • ACS (minutes to deploy, but no way to improve quality)
    • Predict IR/SR based on Sentiment (happy/angry) from the Message
    • Identify keywords to make Suggested KA/RO more relevant
    • Predict Priority/Urgency/Impact based on Sentiment from message
  • AML (hour or so to deploy, ongoing maintenance time to retrain/evaluate data, and is only as good as your own processes)
    • Predict IR/SR based on how the message was phrased based on historical data (SCSM DW)
    • Predict Classification based on how the message was phrased based on historical data (SCSM DW)
    • Predict Support Group based on how the message was phrased based on historical data (SCSM DW)
  • AML + ACS
    • AML predicts its 3 data points and can leverage ACS Sentiment to update Priority/Urgency/Impact as well as improve RO/KA Suggestions

What's great about ACS is that you don't need to understand anything about how the underlying technology works - plug your key in and go. This, however, introduces a potential issue: you have no means whatsoever to improve its scores and predictions. So if a wrong prediction is made, you have no way to correct it for next time.

What's great about AML is that it's completely tailored around you and your data. But this comes at a cost as well: it is only as good as your data. If you feed it bad data, it will make equally bad predictions. So if Work Items are misclassified, or are Incidents when they should be Service Requests, you'll have poor prediction accuracy. As such, this requires an upfront investment of time beyond just the initial deployment.

  • Not only do you need to ensure your own internal processes are being followed (Work Items being classified, routed, and chosen correctly), but you'll also need to keep retraining AML to keep your scores high and improving as your internal processes change
  • You'll need to keep evaluating where to set the confidence scores to ensure you're actually generating time savings

When it comes to AML, there is another school of thought that forgoes some of the initial upfront time investment of training and picking the right confidence percentage for your predictions: set the scores very low so a value is always written, then use this as an internal training opportunity to improve data quality. Going this route means you won't see savings for as long as you conduct this exercise, but it does mean you'll end up with higher quality data - which results in higher quality predictions in the long run.
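To make that trade-off concrete, the two schools of thought amount to different values for the connector's minimum-confidence settings. The variable names below are hypothetical, purely to illustrate the idea - check your version of the script for the actual identifiers:

```powershell
# Hypothetical AML confidence thresholds (illustrative names only).
# A prediction is written to the Work Item only when its confidence
# score meets or exceeds the threshold.

# Conservative approach: only act on near-certain predictions,
# so time savings start immediately but many items get no value.
$amlWorkItemTypeMinConfidence   = "95"
$amlClassificationMinConfidence = "95"

# "Training opportunity" approach: set the bar very low so a value
# is almost always written, then review and correct the results to
# improve the underlying data quality over time.
$amlWorkItemTypeMinConfidence   = "10"
$amlClassificationMinConfidence = "10"
```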

I have an idea and/or code I'd like to contribute. Can I?

Absolutely. You can raise an Issue on the GitHub repo, which will be reviewed (bug, feature request, etc.), or you can Fork the repository into your own account. Then you can edit the connector as you choose, followed by submitting a Pull Request back to the repo here. This lets the community get involved with development. If you need further instruction, check out the SMLets Exchange Connector blog.