<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Lucas Alexander - Alexander Development]]></title><description><![CDATA[Lucas Alexander - Alexander Development]]></description><link>https://alexanderdevelopment.net/</link><image><url>https://alexanderdevelopment.net/favicon.png</url><title>Lucas Alexander - Alexander Development</title><link>https://alexanderdevelopment.net/</link></image><generator>Ghost 1.20</generator><lastBuildDate>Wed, 28 Apr 2021 08:58:41 GMT</lastBuildDate><atom:link href="https://alexanderdevelopment.net/author/lucas/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A motion-activated spider for Halloween with an Arduino and a Raspberry Pi]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Every year my family hangs a large decorative spider over our doorway for Halloween. 
This year I decided to make it flash red LED eyes and play a random assortment of spooky sounds (including, but not limited to, bats, chains and the Vincent Price laugh from &quot;Thriller&quot;) when</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/10/26/a-motion-activated-spider-for-halloween-with-an-arduino-and-a-raspberry-pi/</link><guid isPermaLink="false">5bd2522e14b5e0000112ec57</guid><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Arduino]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 26 Oct 2018 14:11:53 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/10/candy-corn.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/10/candy-corn.jpg" alt="A motion-activated spider for Halloween with an Arduino and a Raspberry Pi"><p>Every year my family hangs a large decorative spider over our doorway for Halloween. This year I decided to make it flash red LED eyes and play a random assortment of spooky sounds (including, but not limited to, bats, chains and the Vincent Price laugh from &quot;Thriller&quot;) when trick-or-treaters come to the door. In today's blog post, I'll show how I did it.</p>
<h4 id="theapproach">The approach</h4>
<p>A few years ago, I wrote a post about setting up a <a href="https://alexanderdevelopment.net/post/2016/01/18/dynamics-crm-and-the-internet-of-things-part-5/">motion-activated webcam to trigger license plate recognition with a Raspberry Pi</a>, so I already had a Raspberry Pi and an HC-SR04 ultrasonic rangefinder I could use to detect motion. I initially thought I would only need the Pi for this project, but as I started evaluating the physical constraints of my porch, I realized it would be difficult to mount the Pi in a secure location where it could handle motion detection and also flash the LED eyes. I decided a better approach would be to use an Arduino to detect motion and flash the LEDs, while the Pi would communicate with the Arduino over a serial connection and play Halloween sound effect MP3s.</p>
<h4 id="settingupthearduino">Setting up the Arduino</h4>
<p>I have two red LEDs taped over the spider's eyes that are connected in series to pin 12 on an Arduino UNO. The ultrasonic rangefinder is taped to the side of my porch so that trick-or-treaters have to pass by it before they can knock on my door, and it is connected to pin 9 for the trigger and pin 10 for the echo. The Arduino listens for serial commands from the Pi to measure distance or to flash the LEDs when the Pi determines from the reported distance that someone has approached.</p>
<p>Here's my sketch:</p>
<pre><code>//set the pin numbers
const int triggerPin = 9;
const int echoPin = 10;
const int ledPin = 12;

void setup() 
{
  Serial.begin(9600); // Starts the serial communication
  pinMode(triggerPin, OUTPUT); // Sets the triggerPin as an Output
  pinMode(echoPin, INPUT); // Sets the echoPin as an Input
  pinMode(ledPin, OUTPUT);
}

void loop() 
{
  if (Serial.available() &gt; 0) { 
    int controlcode = Serial.parseInt();
    if(controlcode==1)
    {
      //clear the trigger pin
      digitalWrite(triggerPin, LOW);
      delayMicroseconds(2);

      //send a 10 microsecond pulse
      digitalWrite(triggerPin, HIGH);
      delayMicroseconds(10);
      digitalWrite(triggerPin, LOW);

      //read the echo pin to get the duration in microseconds
      long duration = pulseIn(echoPin, HIGH);
      
      //calculate the distance in centimeters
      int distance = duration*0.034/2;
      
      //return the distance to the serial monitor
      Serial.println(distance);
    }
    if(controlcode==2)
    {
      flashEyes();
    }
  }
}

void flashEyes()
{
  //flash 50 times
  for(int i=0;i&lt;50;i++)
  {
    //turn the eyes on
    digitalWrite(ledPin, HIGH);
    
    //wait for 150ms
    delay(150);
    
    //turn the eyes off
    digitalWrite(ledPin, LOW);
    
    //wait for 100ms
    delay(100);
  }
}
</code></pre>
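<p>As a sanity check on the conversion in the sketch, here's the same math in a few lines of Python (illustration only; the conversion itself runs on the Arduino): sound travels at roughly 0.034 cm per microsecond, and the measured echo duration covers the round trip, hence the division by 2.</p>

```python
def duration_to_cm(duration_us):
    # speed of sound is ~0.034 cm/us; the echo covers the distance
    # twice (out and back), and the int cast truncates like the sketch
    return int(duration_us * 0.034 / 2)

print(duration_to_cm(2941))  # prints 49, i.e. roughly 50 cm
```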
<h4 id="settinguptheraspberrypi">Setting up the Raspberry Pi</h4>
<p>My Raspberry Pi 2 Model B is connected to the Arduino over a standard USB A/B cable, and it's also connected to a Bluetooth speaker mounted behind the spider. To run things, I have a Python script that sends a serial command to the Arduino to measure the distance every 50 milliseconds. If the Arduino returns a value between 20 and 120 centimeters, the Python script sends a serial command to the Arduino to flash the LEDs, and uses <a href="https://linux.die.net/man/1/mpg123">mpg123</a> to play a random MP3 file.</p>
<p>Here's the script:</p>
<pre><code>import serial
import time
import subprocess
import random
port = &quot;/dev/ttyACM0&quot;

def playsound():
    i = 0
    screamnumber = str(random.randint(1,5))
    screamfile = &quot;/home/pi/halloween/X.mp3&quot;.replace(&quot;X&quot;,screamnumber) 
    subprocess.Popen([&quot;mpg123&quot;, screamfile])
 
if __name__ == '__main__':
    s1 = serial.Serial(
      port=port,
      baudrate=9600)
    s1.flushInput()
    
    #wait 5 seconds before telling the arduino to look for motion
    time.sleep(5)
    try:
        while True:
          if s1.in_waiting==0:
            #print('sent request')
            s1.write(b'1\n')
            time.sleep(.05)
          if s1.in_waiting&gt;0:
            #print('received response')
            inputValue = s1.readline()
            #print('Distance: ' + inputValue.decode())
            if int(inputValue) &gt; 20 and int(inputValue) &lt; 120:
              print('Distance: ' + inputValue.decode())
              s1.write(b'2\n')
              playsound()
              time.sleep(15)
            else:
              time.sleep(.05)
 
    #quit with ctrl + c
    except KeyboardInterrupt:
        print(&quot;script stopped by user&quot;)
</code></pre>
<p>To enable serial communication, I am using the <a href="https://github.com/pyserial/pyserial">pySerial</a> module. I should also note that the logic for playing a random MP3 assumes there are five files in /home/pi/halloween named 1.mp3, 2.mp3, etc. Using a different number of files would require changing the upper bound passed to random.randint in the playsound function.</p>
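<p>If you keep a different number of sound files around, a small variation (just a sketch, not the script I actually ran) can build the filename the same way without a hardcoded count:</p>

```python
import random

# /home/pi/halloween matches the path hardcoded in the script above
SOUND_DIR = "/home/pi/halloween"

def pick_sound(count):
    # choose one of 1.mp3 .. count.mp3 at random
    n = random.randint(1, count)
    return "{}/{}.mp3".format(SOUND_DIR, n)

print(pick_sound(5))
```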
<p>Happy Halloween!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a custom Dynamics 365 data interface with OpenFaaS]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Over the past several months, I've been doing a lot of work with <a href="https://github.com/openfaas/faas">OpenFaaS</a> in my spare time, and in today's post I will show how you can use it to easily build and deploy a custom web service interface for data in a Dynamics 365 Customer Engagement online tenant.</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/07/05/building-a-custom-dynamics-365-data-interface-with-openfaas/</link><guid isPermaLink="false">5b3a415c97f5e30001931b7f</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[OpenFaaS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 05 Jul 2018 17:28:47 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/07/openfaas-d365-header.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/07/openfaas-d365-header.png" alt="Building a custom Dynamics 365 data interface with OpenFaaS"><p>Over the past several months, I've been doing a lot of work with <a href="https://github.com/openfaas/faas">OpenFaaS</a> in my spare time, and in today's post I will show how you can use it to easily build and deploy a custom web service interface for data in a Dynamics 365 Customer Engagement online tenant.</p>
<h4 id="openfaas">OpenFaaS</h4>
<p>If you're not familiar with OpenFaaS, it's basically a serverless functions platform like <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> or <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>, but you run it on Kubernetes or Docker Swarm on your own servers or in the cloud. What I particularly like about OpenFaaS compared to the various commercial serverless platforms is that in addition to offering more control over how/where it's deployed, OpenFaaS supports a wider variety of languages for writing serverless functions.</p>
<blockquote>
<p>OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.</p>
</blockquote>
<p>To follow along with the samples in this post, you'll need access to a cluster with OpenFaaS deployed, so if you don't already have one, now would be an excellent time to look at the OpenFaaS <a href="http://docs.openfaas.com/deployment/">deployment docs</a> or maybe even work through the <a href="https://github.com/openfaas/workshop">hands-on workshop</a>. I've also previously written about how to securely deploy OpenFaaS on a free <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">Google Cloud VM with Docker Swarm</a> or on an <a href="https://alexanderdevelopment.net/post/2018/05/31/installing-and-securing-openfaas-on-an-aks/">Azure Kubernetes Service cluster</a>.</p>
<h4 id="preparingtobuildtheinterfacefunction">Preparing to build the interface function</h4>
<p>As soon as you have OpenFaaS running, it's time to look at the actual custom interface function.</p>
<p>My demo C# function does the following:</p>
<ol>
<li>Parse a JSON object sent in the client request for an access key and optional query filter</li>
<li>Validate the client-supplied access key to authorize or reject the request</li>
<li>Retrieve a Dynamics 365 OAuth access token using my <a href="https://alexanderdevelopment.net/post/2018/05/19/an-azure-ad-oauth2-helper-microservice/">Azure AD OAuth 2 helper microservice</a></li>
<li>Execute a query for contacts against the Dynamics 365 Web API</li>
<li>Return the Web API query results to the client in an array as part of a JSON object</li>
</ol>
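<p>For reference, a client request body for the function looks like this (the key and filter values here are placeholders; the property names match the QueryRequest class shown in the full listing):</p>

```json
{
  "AccessKey": "MYACCESSKEY",
  "Filter": "startswith(fullname,'y')"
}
```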
<p>Because the OpenFaaS function uses my OAuth helper microservice instead of requesting an OAuth access token directly from Azure Active Directory, you need to deploy that microservice to your cluster before moving forward.</p>
<p>If you're using Kubernetes, you can create the deployment and corresponding service using the following YAML. You'll need to set the RESOURCE environment variable to the FQDN for your Dynamics 365 CE organization, but you can leave the CLIENTID and TOKEN_ENDPOINT values alone. <em>(While I used to think you needed to register a separate client application for every Dynamics 365 org to use OAuth authentication, I recently learned via a Twitter conversation that there is a <a href="https://twitter.com/bguidinger/status/1001796185798119424">&quot;universal&quot; CRM client id</a> you can use instead.)</em></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azuread-oauth2-helper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azuread-oauth2-helper
    spec:
      containers:
      - name: azuread-oauth2-helper
        image: lucasalexander/azuread-oauth2-helper
        ports:
        - containerPort: 5000
        env:
        - name: RESOURCE
          value: &quot;https://XXXXXXXX.crm.dynamics.com&quot;
        - name: CLIENTID
          value: &quot;2ad88395-b77d-4561-9441-d0e40824f9bc&quot;
        - name: TOKEN_ENDPOINT
          value: &quot;https://login.microsoftonline.com/common/oauth2/token&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: azuread-oauth2-helper
spec:
  ports:
  - port: 5000
  selector:
    app: azuread-oauth2-helper
</code></pre>
<p>Once you've deployed the microservice, here's the definition for a Kubernetes ingress. In this case my microservice is accessible on the same host as OpenFaaS (akskube.alexanderdevelopment.net), and it is secured with the same Let's Encrypt certificate. You'll want to update your configuration with the appropriate values for your specific situation.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: azuread-oauth2-helper-ingress
  annotations:
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - akskube.alexanderdevelopment.net
    secretName: faas-letsencrypt-production
  rules:
  - host: akskube.alexanderdevelopment.net
    http:
      paths:
      - path: /oauthhelper
        backend:
          serviceName: azuread-oauth2-helper
          servicePort: 5000
</code></pre>
<p>After the OAuth helper microservice is deployed, you should validate that you can get a token returned for a valid username/password combination. Here's what that looks like in Postman.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/microservice-validation-1.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
<h4 id="buildingtheinterfacefunction">Building the interface function</h4>
<p>If you've made it to this point, building and deploying the function is easy!</p>
<p>First, the function gets its configuration data from environment variables that are set when the function is deployed. If you were actually using this function in production, it would be better to store sensitive values like the access key and the Dynamics 365 password as <a href="https://github.com/openfaas/faas/blob/master/guide/secure_secret_management.md">secrets</a>, but I've used environment variables here to keep this overview as simple as possible.</p>
<pre><code>//get configuration from env variables        
var username = Environment.GetEnvironmentVariable(&quot;USERNAME&quot;);
var userpassword = Environment.GetEnvironmentVariable(&quot;USERPASS&quot;);
var tokenendpoint = Environment.GetEnvironmentVariable(&quot;TOKENENDPOINT&quot;);
var accesskey = Environment.GetEnvironmentVariable(&quot;ACCESSKEY&quot;);
var crmwebapi = Environment.GetEnvironmentVariable(&quot;CRMAPI&quot;);
</code></pre>
<p>After the function gets its configuration data, it deserializes the client request using Json.Net to extract a client-supplied access key and an optional query filter. The client-supplied key is validated against the stored key value, and if they don't match, an error response is returned.</p>
<pre><code>var queryrequest = JsonConvert.DeserializeObject&lt;QueryRequest&gt;(input);

if(accesskey!=queryrequest.AccessKey)
{
    JObject outputobject = new JObject();
    outputobject.Add(&quot;error&quot;, &quot;Invalid access key&quot;);
    Console.WriteLine(outputobject.ToString());
    return;
}
</code></pre>
<p>After the access key is validated, the function then makes a request to the authentication helper microservice to get an access token.</p>
<pre><code>var token = GetToken(username, userpassword, tokenendpoint);

...
...
...

string GetToken(string username, string userpassword, string tokenendpoint){
    try
    {
        JObject tokencredentials = new JObject();
        tokencredentials.Add(&quot;username&quot;, username);
        tokencredentials.Add(&quot;password&quot;,userpassword);
        var reqcontent = new StringContent(tokencredentials.ToString(), Encoding.UTF8, &quot;application/json&quot;);
        var result = _client.PostAsync(tokenendpoint, reqcontent).Result;
        var tokenobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(
            result.Content.ReadAsStringAsync().Result);
        var token = tokenobj[&quot;accesstoken&quot;];
        return token.ToString();
    }
    catch(Exception ex)
    {
        return string.Format(&quot;Error: {0}&quot;, ex.Message);
    }
}
</code></pre>
<p>Once the token is returned from the microservice, the function executes the Web API query. The query is just a hardcoded OData query in the form of <code>/contacts?$select=fullname,contactid</code> plus any filter supplied by the client. The function expects that the filter will also be provided in supported Dynamics 365 OData format like <code>startswith(fullname,'y')</code>.</p>
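<p>The assembly is a straight concatenation, sketched here in Python purely for illustration (the C# equivalent appears in the full listing further down):</p>

```python
# base OData query plus an optional client-supplied filter, appended verbatim
def build_query(crmwebapi, filter_expr=None):
    query = "/contacts?$select=fullname,contactid"
    if filter_expr:
        query += "&$filter=" + filter_expr
    return crmwebapi + query

# the hostname below is a placeholder, not a real org
print(build_query("https://example.api.crm.dynamics.com/api/data/v9.0",
                  "startswith(fullname,'y')"))
```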
<pre><code>var crmreq = new HttpRequestMessage(HttpMethod.Get, crmwebapi + crmwebapiquery);
crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
crmreq.Content = new StringContent(string.Empty, Encoding.UTF8, &quot;application/json&quot;);
var crmres = _client.SendAsync(crmreq).Result;

var crmresponse = crmres.Content.ReadAsStringAsync().Result;

var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponse);
</code></pre>
<p>Finally, the results are returned to the client in an array as part of a JSON object.</p>
<pre><code>JArray outputarray = new JArray();
foreach(var row in crmresponseobj[&quot;value&quot;].Children())
{
    JObject record = new JObject();
    record.Add(&quot;id&quot;, row[&quot;contactid&quot;]);
    record.Add(&quot;fullname&quot;, row[&quot;fullname&quot;]);
    outputarray.Add(record);
}
JObject outputobject = new JObject();
outputobject.Add(&quot;contacts&quot;, outputarray);
Console.WriteLine(outputobject.ToString());
</code></pre>
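<p>The payload delivered to the client then has this shape (values are illustrative only):</p>

```json
{
  "contacts": [
    {
      "id": "00000000-0000-0000-0000-000000000000",
      "fullname": "Jane Doe"
    }
  ]
}
```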
<p>Here's the complete function.</p>
<pre><code>using System;
using System.Text;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System.Collections.Generic;

namespace Function
{
    public class FunctionHandler
    {
        private static HttpClient _client = new HttpClient();

        public void Handle(string input) {
            //get configuration from env variables        
            var username = Environment.GetEnvironmentVariable(&quot;USERNAME&quot;);
            var userpassword = Environment.GetEnvironmentVariable(&quot;USERPASS&quot;);
            var tokenendpoint = Environment.GetEnvironmentVariable(&quot;TOKENENDPOINT&quot;);
            var accesskey = Environment.GetEnvironmentVariable(&quot;ACCESSKEY&quot;);
            var crmwebapi = Environment.GetEnvironmentVariable(&quot;CRMAPI&quot;);
            
            //deserialize the client request
            var queryrequest = JsonConvert.DeserializeObject&lt;QueryRequest&gt;(input);
            
            //validate the client access key
            if(accesskey!=queryrequest.AccessKey)
            {
                JObject outputobject = new JObject();
                outputobject.Add(&quot;error&quot;, &quot;Invalid access key&quot;);
                Console.WriteLine(outputobject.ToString());
                return;
            }

            //get the oauth token
            var token = GetToken(username, userpassword, tokenendpoint);
            
            if(!token.ToUpper().StartsWith(&quot;ERROR:&quot;))
            {
                //set the base odata query
                var crmwebapiquery = &quot;/contacts?$select=fullname,contactid&quot;;
                
                //add a filter if the client included one in the request
                if(!string.IsNullOrEmpty(queryrequest.Filter))
                    crmwebapiquery+=&quot;&amp;$filter=&quot;+queryrequest.Filter;
                try
                {
                    //make the request to d365
                    var crmreq = new HttpRequestMessage(HttpMethod.Get, crmwebapi + crmwebapiquery);
                    crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
                    crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
                    crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
                    crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
                    crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
                    crmreq.Content = new StringContent(string.Empty, Encoding.UTF8, &quot;application/json&quot;);
                    var crmres = _client.SendAsync(crmreq).Result;
                    
                    //handle the d365 response
                    var crmresponse = crmres.Content.ReadAsStringAsync().Result;

                    var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponse);
                    
                    try
                    {
                        //build the function response
                        JArray outputarray = new JArray();
                        foreach(var row in crmresponseobj[&quot;value&quot;].Children())
                        {
                            JObject record = new JObject();
                            record.Add(&quot;id&quot;, row[&quot;contactid&quot;]);
                            record.Add(&quot;fullname&quot;, row[&quot;fullname&quot;]);
                            outputarray.Add(record);
                        }
                        JObject outputobject = new JObject();
                        outputobject.Add(&quot;contacts&quot;, outputarray);
                        
                        //return the response to the client
                        Console.WriteLine(outputobject.ToString());
                    }
                    catch(Exception ex)
                    {
                        JObject outputobject = new JObject();
                        outputobject.Add(&quot;error&quot;, string.Format(&quot;Could not parse query response: {0}&quot;, ex.Message));
                        Console.WriteLine(outputobject.ToString());
                    }
                }
                catch(Exception ex)
                {
                    JObject outputobject = new JObject();
                    outputobject.Add(&quot;error&quot;, string.Format(&quot;Could not query data: {0}&quot;, ex.Message));
                    Console.WriteLine(outputobject.ToString());
                }
            }
            else
            {
                JObject outputobject = new JObject();
                outputobject.Add(&quot;error&quot;, &quot;Could not get token&quot;);
                Console.WriteLine(outputobject.ToString());
            }
        }

        string GetToken(string username, string userpassword, string tokenendpoint){
            try
            {
                JObject tokencredentials = new JObject();
                tokencredentials.Add(&quot;username&quot;, username);
                tokencredentials.Add(&quot;password&quot;,userpassword);
                var reqcontent = new StringContent(tokencredentials.ToString(), Encoding.UTF8, &quot;application/json&quot;);
                var result = _client.PostAsync(tokenendpoint, reqcontent).Result;
                var tokenobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(
                    result.Content.ReadAsStringAsync().Result);
                var token = tokenobj[&quot;accesstoken&quot;];
                return token.ToString();
            }
            catch(Exception ex)
            {
                return string.Format(&quot;Error: {0}&quot;, ex.Message);
            }
        }
    }

    public class QueryRequest
    {
        public string AccessKey {get;set;}
        public string Filter{get;set;}
    }
}
</code></pre>
<p>Because the function relies on Json.Net, you need to add a reference to it in your .csproj file before you build the function.</p>
<pre><code>&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;
  &lt;PropertyGroup&gt;
    &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
  &lt;/PropertyGroup&gt;
  &lt;PropertyGroup&gt;
    &lt;GenerateAssemblyInfo&gt;false&lt;/GenerateAssemblyInfo&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;PackageReference Include=&quot;newtonsoft.json&quot; Version=&quot;11.0.2&quot; /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
</code></pre>
<p>Here is my function definition YAML file with environment variables included. You will need to update them with your appropriate values, and you will also need to change the image name if you're building your own function instead of just deploying mine from Docker Hub.</p>
<pre><code>provider:
  name: faas
  gateway: http://localhost:8080

functions:
  demo-crm-function:
    lang: csharp
    handler: ./demo-crm-function
    image: lucasalexander/faas-demo-crm-function
    environment:
      USERNAME: XXXXXX@XXXXXX.onmicrosoft.com
      USERPASS: XXXXXX
      TOKENENDPOINT: https://akskube.alexanderdevelopment.net/oauthhelper/requesttoken
      CRMAPI: https://lucastest20.api.crm.dynamics.com/api/data/v9.0
      ACCESSKEY: MYACCESSKEY
</code></pre>
<p>Once the function is deployed, you can execute it either through the OpenFaaS admin UI or with another tool that makes HTTP requests, such as curl or Postman. Here's what an unfiltered query in Postman looks like for a Dynamics 365 org with sample data installed.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/unfiltered-query.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
<p>And here's a query with a filter included.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/filtered-query.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
<h4 id="wrappingup">Wrapping up</h4>
<p>Once I got OpenFaaS running, writing and deploying the actual function only took about an hour. Obviously writing a more complex data interface to support real-world requirements would take longer, but using a serverless functions platform like OpenFaaS is definitely a significant accelerator for custom Dynamics 365 integration development.</p>
<p>What do you think about this approach? Are you using serverless functions with your Dynamics 365 projects? What do you think about OpenFaaS vs Azure Functions or AWS Lambda? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Installing and securing OpenFaaS on an AKS cluster]]></title><description><![CDATA[<div class="kg-card-markdown"><p>A few months back, I wrote a <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">guide</a> for installing and locking down <a href="https://www.openfaas.com/">OpenFaaS</a> in a Docker Swarm running on <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machines. Today I want to share a step-by-step guide that shows how to install OpenFaaS on a new <a href="https://azure.microsoft.com/en-us/services/container-service/kubernetes/">Azure Kubernetes Service</a> (AKS) cluster using an <a href="https://github.com/kubernetes/ingress-nginx">Nginx</a></p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/31/installing-and-securing-openfaas-on-an-aks/</link><guid isPermaLink="false">5b0e9d8797f5e30001931b70</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 31 May 2018 14:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-30_16-29-47.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-30_16-29-47.png" alt="Installing and securing OpenFaaS on an AKS cluster"><p>A few months back, I wrote a <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">guide</a> for installing and locking down <a href="https://www.openfaas.com/">OpenFaaS</a> in a Docker Swarm running on <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual 
machines. Today I want to share a step-by-step guide that shows how to install OpenFaaS on a new <a href="https://azure.microsoft.com/en-us/services/container-service/kubernetes/">Azure Kubernetes Service</a> (AKS) cluster using an <a href="https://github.com/kubernetes/ingress-nginx">Nginx</a> ingress controller to lock it down with basic authentication and free <a href="https://letsencrypt.org/">Let's Encrypt</a> TLS certificates.</p>
<h4 id="beforewebegin">Before we begin</h4>
<p>If you just want to do a quick deployment of OpenFaaS on AKS, there's a guide in the official AKS documentation <a href="https://docs.microsoft.com/en-us/azure/aks/openfaas">here</a>; however, it does not show how to implement TLS encryption or authentication.</p>
<p>All the Azure configuration I'll show today is done via the command line, so if you don't already have the Azure CLI installed on your local system, install it from <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">here</a>. You can do it through the Azure portal, but it's much faster to do with the CLI.</p>
<p>You will also need Git command-line tools installed so you can pull down the latest version of OpenFaaS from its repository.</p>
<p>Finally, in order to secure your OpenFaaS installation with TLS, you will need a domain and access to your DNS provider so you can point a hostname to your cluster's public IP address.</p>
<p>Ready? Let's get started.</p>
<h4 id="basicazureconfiguration">Basic Azure configuration</h4>
<ol>
<li>From the command line, log in to Azure using <code>az login</code>. Follow the prompts to complete your authentication.</li>
<li>If you don't have an existing resource group you want to use for your AKS cluster, create a new one with <code>az group create -l REGIONNAME -n RESOURCEGROUP</code>. Replace REGIONNAME and RESOURCEGROUP with appropriate values, but make sure you use a region where AKS is <a href="https://docs.microsoft.com/en-us/azure/aks/container-service-quotas">currently available</a>.</li>
<li>Create a new AKS cluster with <code>az aks create -g RESOURCEGROUP -n CLUSTERNAME --generate-ssh-keys</code>. The RESOURCEGROUP value is the same as before, and CLUSTERNAME is whatever you want it to be called. Note that the default virtual machine size for your cluster is Standard_DS1_v2. You can change this by setting the <code>--node-vm-size</code> parameter, and I am personally using burstable Standard_B2s VMs for my AKS cluster.</li>
<li>Once the AKS cluster creation completes, use this command to get the credentials you need to manage the cluster with the Kubernetes CLI: <code>az aks get-credentials --resource-group RESOURCEGROUP --name CLUSTERNAME</code>.</li>
<li>Install the Kubernetes CLI (kubectl) with <code>az aks install-cli</code>.</li>
<li>Get the name of the node resource group that was created for your AKS cluster with this command: <code>az resource show --resource-group RESOURCEGROUP --name CLUSTERNAME --resource-type Microsoft.ContainerService/managedClusters --query properties.nodeResourceGroup -o tsv</code>. The output should look like MC_resourcegroup_clustername_regionname. You will use this value in the next step.</li>
<li>Create a public IP address in the node resource group with this command: <code>az network public-ip create --resource-group MC_RESOURCEGROUP --name IPADDRESSNAME --allocation-method static</code>. You will get a JSON response that contains a &quot;publicIp&quot; object. Copy its &quot;ipAddress&quot; value and save it for later.<br>
<em>Note: you might be tempted to create a DNS name label for this IP address so you can avoid using a custom domain name, but *.cloudapp.azure.com host names don't work with Let's Encrypt.</em></li>
<li>Go to your DNS provider and register a new A record for a hostname that points to the static IP address you reserved in the previous step (mine is akskube.alexanderdevelopment.net). This will be the hostname you use to access OpenFaaS. You may also need to create a new CAA record to explicitly allow Let's Encrypt to issue certificates for your domain.</li>
</ol>
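<p>DNS changes can take a while to propagate. Before moving on, you can confirm that your new hostname already resolves to the reserved IP with a small Python sketch like this one (the hostname and address you pass in are your own values; the <code>resolve</code> parameter just makes the check easy to test offline):</p>

```python
import socket

def dns_points_to(hostname, expected_ip, resolve=socket.gethostbyname):
    """Return True when the hostname's A record resolves to the reserved static IP."""
    try:
        return resolve(hostname) == expected_ip
    except OSError:
        # the record hasn't propagated yet (or the lookup failed entirely)
        return False
```

<p>Calling, for example, <code>dns_points_to("akskube.example.com", "40.112.0.1")</code> returns True only once the A record is live, so you can poll it before starting the cluster configuration.</p>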
<h4 id="basicclusterconfiguration">Basic cluster configuration</h4>
<p>Once the basic Azure configuration work is done, it's time to configure the AKS cluster.</p>
<ol>
<li>If you don't already have the Helm client installed on your local system, install it by following the directions <a href="https://github.com/kubernetes/helm/blob/master/docs/install.md">here</a>. I am using a Windows dev workstation, so I installed <a href="https://chocolatey.org/">Chocolatey</a> and then installed the Helm client with <code>choco install kubernetes-helm</code>.</li>
<li>Install Helm components on your AKS cluster with <code>helm init --upgrade --service-account default</code>.</li>
<li>Install the Nginx ingress controller on your AKS cluster with <code>helm install stable/nginx-ingress --namespace kube-system --set rbac.create=false --set rbac.createRole=false --set rbac.createClusterRole=false --set controller.service.loadBalancerIP=STATICIPADDRESS</code>. Replace STATICIPADDRESS with the public IP address you created previously.</li>
<li>Install <a href="https://github.com/jetstack/cert-manager">cert-manager</a> to request and manage your TLS certificates: <code>helm install --name cert-manager --namespace kube-system stable/cert-manager --set rbac.create=false</code>.</li>
</ol>
<h4 id="installingopenfaas">Installing OpenFaaS</h4>
<p>It's relatively easy to install OpenFaaS on your AKS cluster using Helm, and a detailed readme is available <a href="https://github.com/openfaas/faas-netes/blob/master/chart/openfaas/README.md">here</a>. Basically you need to download the <a href="https://github.com/openfaas/faas-netes">faas-netes</a> Git repository to your local system, create a couple of namespaces on the AKS cluster and use the Helm chart in the repo you downloaded. Here's how I set it up on my AKS cluster.</p>
<pre><code>kubectl create ns openfaas
kubectl create ns openfaas-fn

git clone https://github.com/openfaas/faas-netes
cd faas-netes

helm install --namespace openfaas -n openfaas --set functionNamespace=openfaas-fn --set ingress.enabled=true --set rbac=false chart/openfaas/
</code></pre>
<p>Once OpenFaaS is installed, you need to create ingress resources to make it available externally.</p>
<h4 id="creatingtheingressresources">Creating the ingress resources</h4>
<p>Before creating your ingress resources, you need to create certificate issuer resources to get TLS certificates. Here's the YAML for a Let's Encrypt staging issuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAILADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Copy it, replace EMAILADDRESS with your email address and save it as faas-staging-issuer.yml. Then run <code>kubectl apply -f faas-staging-issuer.yml -n openfaas</code>.</p>
<p>Here's a corresponding production issuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAILADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-production
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Copy it, replace EMAILADDRESS with your email address and save it as faas-production-issuer.yml. Then run <code>kubectl apply -f faas-production-issuer.yml -n openfaas</code>.</p>
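<p>The staging and production issuers differ only in their metadata name and ACME server URL. If you end up generating several of them, a small templating helper along these lines can keep the two manifests consistent (this is just an illustrative sketch, not part of cert-manager):</p>

```python
# minimal template matching the issuer manifests shown above
ISSUER_TEMPLATE = """\
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: {name}
spec:
  acme:
    server: {server}
    email: {email}
    privateKeySecretRef:
      name: {name}
    http01: {{}}
"""

# the two Let's Encrypt ACME endpoints used in this post
ACME_SERVERS = {
    "letsencrypt-staging": "https://acme-staging-v02.api.letsencrypt.org/directory",
    "letsencrypt-production": "https://acme-v02.api.letsencrypt.org/directory",
}

def render_issuer(name, email):
    """Render the issuer manifest for either Let's Encrypt environment."""
    return ISSUER_TEMPLATE.format(name=name, server=ACME_SERVERS[name], email=email)
```

<p>Write the rendered output to a file and apply it with the same <code>kubectl apply</code> commands shown above.</p>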
<p>Next you need to create a password file to implement basic authentication. If you are working on a system with apache2-utils installed, you can just use the <code>htpasswd</code> command. Otherwise, you can use a tool like <a href="http://aspirine.org/htpasswd_en.html">http://aspirine.org/htpasswd_en.html</a> to generate your htpasswd content. Once you have your htpasswd content generated, save it in a file named &quot;auth&quot; and run the following command: <code>kubectl create secret generic basic-auth --from-file=auth -n openfaas</code>.</p>
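<p>If you have Python handy, you can also generate the file yourself. This sketch uses the <code>{SHA}</code> htpasswd scheme, which Nginx's basic-auth module accepts; the username and password shown are placeholders for your own values:</p>

```python
import base64
import hashlib

def htpasswd_sha_line(username, password):
    """Build one htpasswd entry using the {SHA} scheme understood by Nginx."""
    digest = base64.b64encode(hashlib.sha1(password.encode("utf-8")).digest()).decode("ascii")
    return f"{username}:{{SHA}}{digest}"

# write the "auth" file expected by the kubectl create secret command
with open("auth", "w") as f:
    f.write(htpasswd_sha_line("admin", "replace-this-password") + "\n")
```

<p>Note that <code>{SHA}</code> entries are unsalted, so pick a strong password, or stick with the <code>htpasswd</code> tool if you want bcrypt/MD5-crypt hashes.</p>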
<p>Now you can use the following YAML to create an ingress resource that exposes your OpenFaaS instance:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-realm: &quot;Authentication Required&quot;
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - HOSTNAME
    secretName: faas-letsencrypt-staging
  rules:
  - host: HOSTNAME
    http:
      paths:
      - path: /faas-admin
        backend:
          serviceName: gateway
          servicePort: 8080
</code></pre>
<p>Copy it, replace both instances of HOSTNAME with the hostname you created earlier and save it as faas-ingress.yml. Deploy it to your cluster with this command: <code>kubectl apply -f faas-ingress.yml -n openfaas</code>.</p>
<p>As the ingress starts up, it will request a staging certificate from Let's Encrypt, and then it will start listening for requests. It may take a few minutes, so now might be a good time to take a short break. Once everything is complete, you will be able to access your OpenFaaS UI from <a href="https://HOSTNAME/faas-admin/ui/">https://HOSTNAME/faas-admin/ui/</a>, and once you deploy functions, they will be available at <a href="https://HOSTNAME/faas-admin/functions/FUNCTIONNAME">https://HOSTNAME/faas-admin/functions/FUNCTIONNAME</a>. You should get a browser warning about the certificate because it's using a Let's Encrypt staging certificate, but that's OK for now. You should also be prompted for basic authentication credentials, which will be the username and password you created earlier.</p>
<p>If everything looks good, you can switch over to a production TLS certificate. Take the faas-ingress YAML and replace &quot;letsencrypt-staging&quot; in the secretName and certmanager.k8s.io/issuer values with &quot;letsencrypt-production&quot;. Save it and deploy the update with <code>kubectl apply -f faas-ingress.yml -n openfaas</code>. Like before, the ingress will take a few minutes to restart and request a production TLS certificate from Let's Encrypt. Once that's done, you can access your OpenFaaS UI via the same URL, but now you should not get a warning about an invalid certificate.</p>
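<p>If you'd rather not edit the file by hand, the staging-to-production substitution is easy to script. This sketch assumes the manifest is saved as faas-ingress.yml in the current directory, as above:</p>

```python
from pathlib import Path

def promote_issuer(path="faas-ingress.yml"):
    """Swap every staging issuer reference in the ingress manifest for the production one."""
    manifest = Path(path)
    manifest.write_text(manifest.read_text().replace("letsencrypt-staging", "letsencrypt-production"))
```

<p>After running it, redeploy with the same <code>kubectl apply</code> command as before.</p>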
<p>At this point you have a locked-down OpenFaaS installation, but you might not want to use basic authentication to restrict access to your OpenFaaS functions. If that's the case you can create another ingress resource that exposes them outside the &quot;/faas-admin&quot; path. Here's the YAML for that resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-function-ingress
  annotations:
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/rewrite-target: /functions
spec:
  tls:
  - hosts:
    - HOSTNAME
    secretName: faas-letsencrypt-production
  rules:
  - host: HOSTNAME
    http:
      paths:
      - path: /functions
        backend:
          serviceName: gateway
          servicePort: 8080
</code></pre>
<p>Copy it, replace both instances of HOSTNAME with your DNS hostname from earlier and save it as faas-function-ingress.yml. Deploy it to your cluster with this command: <code>kubectl apply -f faas-function-ingress.yml -n openfaas</code>.</p>
<p>Once the ingress starts up and applies the TLS certificate, you will be able to access your functions at <a href="https://HOSTNAME/functions/FUNCTIONNAME">https://HOSTNAME/functions/FUNCTIONNAME</a> without authenticating.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>A few closing thoughts:</p>
<ol>
<li>I am still extremely new to AKS and Kubernetes, and I've tried to simplify this guide as much as possible for other newbies. In figuring out how to set this up, I relied heavily on the official <a href="https://docs.microsoft.com/en-us/azure/aks/">AKS docs</a>, and I encourage you to take a look at them if you want to dig in deeper.</li>
<li>This configuration does not expose the OpenFaaS Prometheus monitoring service. If you want to set that up, you will need to create a different DNS entry (either an A record or CNAME record) and create another ingress resource in the openfaas namespace for that hostname that points to the &quot;prometheus&quot; service on service port 9090.</li>
<li>The Nginx ingress controller configuration I showed here is extremely simple. If you want a more sophisticated configuration to enable advanced features such as rate limiting, take a look at <a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration">https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration</a> and <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md</a>.</li>
</ol>
</div>]]></content:encoded></item><item><title><![CDATA[Using Dynamics 365 virtual entities to show data from an external organization]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I was recently asked to be a guest on the third-anniversary episode of the <a href="https://crm.audio/">CRM Audio podcast</a>. While I was there George Doubinski challenged me to create a plugin in one Dynamics 365 organization to retrieve records from another Dynamics 365 organization so they could be displayed as virtual entities.</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/28/using-dynamics-365-virtual-entities-to-show-data-from-an-external-organization/</link><guid isPermaLink="false">5b05bc3c97f5e30001931b67</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 28 May 2018 12:55:09 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/PluginRegistration_2018-05-23_13-48-15.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/PluginRegistration_2018-05-23_13-48-15.png" alt="Using Dynamics 365 virtual entities to show data from an external organization"><p>I was recently asked to be a guest on the third-anniversary episode of the <a href="https://crm.audio/">CRM Audio podcast</a>. While I was there George Doubinski challenged me to create a plugin in one Dynamics 365 organization to retrieve records from another Dynamics 365 organization so they could be displayed as virtual entities. I was promised adulation on <a href="https://crmtipoftheday.com/">Dynamics CRM Tip of the Day</a> and fame beyond my wildest dreams, so naturally I accepted.</p>
<p><img src="https://alexanderdevelopment.net/content/images/2018/05/tumblr_inline_n4m5yj9nMP1qa7k0a.gif" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>To address the challenge, I wrote a simple Dynamics 365 plugin that calls the Web API in a different Dynamics 365 organization to retrieve records and return them to a virtual entity data provider. From there, configuration of the Dynamics 365 virtual entity is simple. Let's take a look at how I did it.</p>
<h4 id="theplugin">The plugin</h4>
<p>First you need to create a plugin to retrieve the data from the &quot;external&quot; Dynamics 365 org. Because this code connects directly to the Web API, you'll need to get an access token from Azure AD before you can make the request to Dynamics 365. Just like I showed in my <a href="https://alexanderdevelopment.net/post/2016/11/29/scheduling-dynamics-365-workflows-with-azure-functions-and-csharp/">&quot;Scheduling Dynamics 365 workflows with Azure Functions and C#&quot;</a> post back in 2016, my sample code does not use <a href="https://github.com/AzureAD/azure-activedirectory-library-for-nodejs">ADAL</a> to get the access token, but rather it issues a request directly to the Azure AD OAuth 2 token endpoint.</p>
<p>Here's the code for the plugin. There are some configuration values you'll need to set for your Dynamics 365 organization and whatever query you want to run. It's not a best practice to have any of this actually hardcoded in your plugin, but I've done it this way so it's easier to see how things work.</p>
<pre><code>using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Data.Exceptions;
using Microsoft.Xrm.Sdk.Extensions;
using Microsoft.Xrm.Sdk.Query;
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace VirtualEntityProvider
{
    public class RetrieveOtherOrgData : IPlugin
    {
        //set these values for your D365 instance, user credentials and Azure AD clientid/token endpoint
        string crmorg = &quot;https://XXXXX.crm.dynamics.com&quot;;
        string clientid = &quot;XXXXXXXXX&quot;;
        string username = &quot;lucasalexander@XXXXXX.onmicrosoft.com&quot;;
        string userpassword = &quot;XXXXXXXXXXXX&quot;;
        string tokenendpoint = &quot;https://login.microsoftonline.com/XXXXXXXXXXX/oauth2/token&quot;;

        //relative path to web api endpoint
        string crmwebapi = &quot;/api/data/v8.2&quot;;

        //web api query to execute - in this case all accounts that start with &quot;F&quot;
        string crmwebapipath = &quot;/accounts?$select=name,accountid&amp;$filter=startswith(name,'F')&quot;;

        public void Execute(IServiceProvider serviceProvider)
        {
            //basic plugin set-up stuff
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory servicefactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = servicefactory.CreateOrganizationService(context.UserId);
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            try
            {
                //instantiate a new entity collection to hold the records we'll return later
                EntityCollection results = new EntityCollection();

                //build the authorization request for Azure AD
                var reqstring = &quot;client_id=&quot; + clientid;
                reqstring += &quot;&amp;resource=&quot; + Uri.EscapeUriString(crmorg);
                reqstring += &quot;&amp;username=&quot; + Uri.EscapeUriString(username);
                reqstring += &quot;&amp;password=&quot; + Uri.EscapeUriString(userpassword);
                reqstring += &quot;&amp;grant_type=password&quot;;

                //make the Azure AD authentication request
                WebRequest req = WebRequest.Create(tokenendpoint);
                req.ContentType = &quot;application/x-www-form-urlencoded&quot;;
                req.Method = &quot;POST&quot;;
                byte[] bytes = System.Text.Encoding.ASCII.GetBytes(reqstring);
                req.ContentLength = bytes.Length;
                System.IO.Stream os = req.GetRequestStream();
                os.Write(bytes, 0, bytes.Length);
                os.Close();

                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                StreamReader tokenreader = new StreamReader(resp.GetResponseStream());
                string responseBody = tokenreader.ReadToEnd();
                tokenreader.Close();

                //deserialize the Azure AD token response and get the access token to supply with the web api query
                var tokenresponse = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(responseBody);
                var token = tokenresponse[&quot;access_token&quot;];

                //make the web api query
                WebRequest crmreq = WebRequest.Create(crmorg+crmwebapi+crmwebapipath);
                crmreq.Headers = new WebHeaderCollection();

                //use the access token from earlier as the authorization header bearer value
                crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
                crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
                crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
                crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
                crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
                crmreq.ContentType = &quot;application/json; charset=utf-8&quot;;
                crmreq.Method = &quot;GET&quot;;

                HttpWebResponse crmresp = (HttpWebResponse)crmreq.GetResponse();
                StreamReader crmreader = new StreamReader(crmresp.GetResponseStream());
                string crmresponseBody = crmreader.ReadToEnd();
                crmreader.Close();

                //deserialize the response
                var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponseBody);

                //loop through the response values
                foreach (var row in crmresponseobj[&quot;value&quot;].Children())
                {
                    //create a new virtual entity of type lpa_otheraccount
                    Entity verow = new Entity(&quot;lpa_otheraccount&quot;);
                    //verow[&quot;lpa_otheraccountid&quot;] = Guid.NewGuid();
                    //verow[&quot;lpa_name&quot;] = ((Newtonsoft.Json.Linq.JValue)row[&quot;name&quot;]).Value.ToString();
                    verow[&quot;lpa_otheraccountid&quot;] = (Guid)row[&quot;accountid&quot;];
                    verow[&quot;lpa_name&quot;] = (string)row[&quot;name&quot;];

                    //add it to the collection
                    results.Entities.Add(verow);
                }

                //return the results
                context.OutputParameters[&quot;BusinessEntityCollection&quot;] = results;
            }
            catch (Exception e)
            {
                tracingService.Trace($&quot;{e.Message} {e.StackTrace}&quot;);
                if (e.InnerException != null)
                    tracingService.Trace($&quot;{e.InnerException.Message} {e.InnerException.StackTrace}&quot;);

                throw new InvalidPluginExecutionException(e.Message);
            }
        }
    }
}
</code></pre>
<p>Because the plugin uses JSON.Net, you'll need to use ILMerge to bundle the Newtonsoft.Json.dll assembly with your compiled plugin before you deploy it to Dynamics 365.</p>
<h4 id="settingupthevirtualentity">Setting up the virtual entity</h4>
<p>After you've deployed the plugin using the plugin registration tool, register a new data provider. When the data provider registration window opens, first create a new data source entity.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/2018-05-23_14-28-33.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Complete the details for the data source and save it.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/2018-05-23_13-53-40.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Complete the rest of the details for the data provider and save it. <img src="https://alexanderdevelopment.net/content/images/2018/05/2018-05-23_13-53-24.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>You should now see a new data provider and data source. <img src="https://alexanderdevelopment.net/content/images/2018/05/PluginRegistration_2018-05-23_13-53-57.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Open the Dynamics 365 web UI, and go to settings-&gt;administration-&gt;virtual entity data sources. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-54-52.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Click the &quot;new&quot; button to create a new virtual entity data source. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-55-21.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>In the window that pops up, select the data provider you created earlier. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-55-34.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Give your new virtual entity data source a name and save it. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-55-50.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Open your solution and create a new entity. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-56-22.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Configure your entity as a virtual entity that uses the virtual entity data source you created previously. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-58-31.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Once you save and publish the virtual entity, you can open an advanced find view that will retrieve data from your other Dynamics 365 organization and display it. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_14-07-38.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>If you export this data to Excel and unhide the id column, you will see that the GUIDs match the records in the external system.</p>
<p>And that's all there is to it. Happy entity virtualizing!</p>
</div>]]></content:encoded></item><item><title><![CDATA[An Azure AD OAuth 2 helper microservice]]></title><description><![CDATA[<div class="kg-card-markdown"><p>One of the biggest trends in systems architecture these days is the use of &quot;serverless&quot; functions like Azure Functions, Amazon Lambda and OpenFaas. Because these functions are stateless, if you want to use a purely serverless approach to work with resources secured using Azure Active Directory like Dynamics</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/19/an-azure-ad-oauth2-helper-microservice/</link><guid isPermaLink="false">5aff468b97f5e30001931b5d</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[Python]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sat, 19 May 2018 16:45:38 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-18_22-58-04-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-18_22-58-04-1.png" alt="An Azure AD OAuth 2 helper microservice"><p>One of the biggest trends in systems architecture these days is the use of &quot;serverless&quot; functions like Azure Functions, Amazon Lambda and OpenFaas. Because these functions are stateless, if you want to use a purely serverless approach to work with resources secured using Azure Active Directory like Dynamics 365 online, a new token will have to be requested every time a function executes. This is inefficient, and it requires the function to fully understand OAuth 2 authentication, which could be handled better elsewhere.</p>
<p>To address this problem, I've written a microservice in Python that can be used to request OAuth 2 tokens from Azure Active Directory, and it also handles refreshing them as needed. I've containerized it as a Docker image so you can easily run it without needing to build anything.</p>
<h4 id="howitworks">How it works</h4>
<p>When a request containing a username and password arrives for the first time, the microservice retrieves an OAuth2 access token from Azure AD and returns it to the requester. The microservice also caches an object that contains the access token, refresh token, username, password and expiration time.</p>
<p>When subsequent requests arrive, the microservice checks its cache for an existing token that matches the username and password. If it finds one, it checks if the token has expired or needs to be refreshed.</p>
<p>If the existing token has expired, a new one is requested. If the existing token has not expired, but it will expire within a specified period of time (10 minutes is the default value), the microservice will execute a refresh request to Azure AD, cache the updated token and return it to the requester. If there's an unexpired existing token that doesn't need to be refreshed, the cached access token will be returned to the requester.</p>
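<p>The cache lookup described above boils down to a three-way decision for each incoming request. Here's a simplified Python sketch of that decision; the function and field names are illustrative, not the microservice's actual internals:</p>

```python
import time

REFRESH_THRESHOLD = 600  # seconds before expiry at which a refresh is triggered (the default)

def resolve_token(cache, key, now=None):
    """Decide whether to re-request, refresh or reuse the cached token for a username/password key.

    Returns one of "request", "refresh" or "reuse".
    """
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry is None or now >= entry["expires_at"]:
        return "request"   # no cached token, or it already expired: get a new one
    if entry["expires_at"] - now <= REFRESH_THRESHOLD:
        return "refresh"   # still valid, but close enough to expiry to refresh proactively
    return "reuse"         # valid and not near expiry: serve straight from the cache
```

<p>In the &quot;refresh&quot; case the microservice sends the cached refresh token to Azure AD, stores the updated token object, and returns the new access token to the requester.</p>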
<p>Here's what a raw token request to and response from the microservice looks like in Postman:<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-18_22-58-04.png#img-thumbnail" alt="An Azure AD OAuth 2 helper microservice"></p>
<p>Back in 2016 I shared <a href="https://alexanderdevelopment.net/post/2016/11/27/dynamics-365-and-python-integration-using-the-web-api/">some sample Python code</a> that showed how to authenticate to Azure AD and query the Dynamics 365 (then called Dynamics CRM) Web API. Here is an updated version of that sample code that uses this new microservice to acquire access tokens:</p>
<pre><code>import requests
import json

#set these values to retrieve the oauth token
username = 'lucasalexander@xxxxxx.onmicrosoft.com'
userpassword = 'xxxxxx'
tokenendpoint = 'http://localhost:5000/requesttoken'

#set these values to query your crm data
crmwebapi = 'https://xxxxxx.api.crm.dynamics.com/api/data/v8.2'
crmwebapiquery = '/contacts?$select=fullname,contactid'

#build the authorization request
tokenpost = {
    'username':username,
    'password':userpassword
}

#make the token request
print('requesting token . . .')
tokenres = requests.post(tokenendpoint, json=tokenpost)
print('token response received. . .')

accesstoken = ''

#extract the access token
try:
    print('parsing token response . . .')
    print(tokenres)
    accesstoken = tokenres.json()['accesstoken']

except(KeyError):
    print('Could not get access token')

if(accesstoken!=''):
    #note: a python dict can't hold two 'Prefer' keys - the second would
    #silently overwrite the first - so both preferences go in one header value
    crmrequestheaders = {
        'Authorization': 'Bearer ' + accesstoken,
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json; charset=utf-8',
        'Prefer': 'odata.maxpagesize=500, odata.include-annotations=OData.Community.Display.V1.FormattedValue'
    }

    print('making crm request . . .')
    crmres = requests.get(crmwebapi+crmwebapiquery, headers=crmrequestheaders)
    print('crm response received . . .')
    try:
        print('parsing crm response . . .')
        crmresults = crmres.json()
        for x in crmresults['value']:
            print (x['fullname'] + ' - ' + x['contactid'])
    except KeyError:
        print('Could not parse CRM results')
</code></pre>
<p>Here's the output when I run the sample against my demo environment:<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-19_11-34-16.png" alt="An Azure AD OAuth 2 helper microservice"></p>
<h4 id="runningthemicroservice">Running the microservice</h4>
<p>Pull the image from <a href="https://hub.docker.com">Docker Hub</a>: <code>docker pull lucasalexander/azuread-oauth2-helper:latest</code></p>
<p><em>Required environment variables</em></p>
<ol>
<li>RESOURCE - The URL of the service that is going to be accessed</li>
<li>CLIENTID - The Azure AD application client ID</li>
<li>TOKEN_ENDPOINT - The OAuth2 token endpoint from the Azure AD application</li>
</ol>
<p>Run the image with the following command (replacing the environment variables with your own).</p>
<p><code>docker run -d -p 5000:5000 -e RESOURCE=https://XXXXXX.crm.dynamics.com -e CLIENTID=XXXXXX -e TOKEN_ENDPOINT=https://login.microsoftonline.com/XXXXXX/oauth2/token --name oauthhelper lucasalexander/azuread-oauth2-helper:latest</code></p>
<p>You can also optionally supply an additional &quot;REFRESH_THRESHOLD&quot; environment variable that sets how many seconds before a token's expiration it will be refreshed. The default value is 600 seconds (10 minutes).</p>
<h4 id="anoteonsecurity">A note on security</h4>
<p>Because the microservice caches usernames, passwords and access tokens in memory, this approach is vulnerable to heap inspection attacks, so you'll want to make sure your environment is appropriately locked down. You'll also want to encrypt any communication between the microservice and the code that requests tokens from it.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>Although I wrote this with Dynamics 365 in mind, it should work for any resource that is secured by Azure AD. If you'd like to take a closer look at the code, it's available on GitHub <a href="https://github.com/lucasalexander/azuread-oauth2-helper">here</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Using ML.NET in an OpenFaaS function]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Last week at its annual Build conference, Microsoft announced <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet">ML.NET</a>, an &quot;open source and cross-platform machine learning framework&quot; that runs in .NET Core. I took a look at the <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet/get-started/windows">getting started</a> samples and realized ML.NET would be a great tool to use in OpenFaas functions.</p>
<p>I</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/17/using-ml-net-in-an-openfaas-function/</link><guid isPermaLink="false">5afe3ef397f5e30001931b56</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[C#]]></category><category><![CDATA[machine learning]]></category><category><![CDATA[text analysis]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 18 May 2018 03:20:22 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-17_22-16-59.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-17_22-16-59.png" alt="Using ML.NET in an OpenFaaS function"><p>Last week at its annual Build conference, Microsoft announced <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet">ML.NET</a>, an &quot;open source and cross-platform machine learning framework&quot; that runs in .NET Core. I took a look at the <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet/get-started/windows">getting started</a> samples and realized ML.NET would be a great tool to use in OpenFaas functions.</p>
<p>I decided to write a proof-of-concept function based on the ML.NET sentiment <a href="https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/sentiment-analysis">analysis sample</a>. Because the function needs a trained model before it can run, you actually need to use a separate application to generate the model and save it as a file. Then you can include the model as part of your function deployment.</p>
<p>Here's a screenshot of my function in action. <img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-17_21-42-58.png#img-thumbnail" alt="Using ML.NET in an OpenFaaS function"></p>
<p>You can get the code for my OpenFaaS sentiment analysis function <a href="https://github.com/lucasalexander/faas-functions/tree/master/get_sentiment_mlnet">here</a>, and the code for the application that generates the model is available <a href="https://github.com/lucasalexander/mlnet-samples/tree/master/sentiment-analysis">here</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Setting values in a Dynamics 365 CE quick create form from the main form]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Earlier this week I was asked to populate a field in a Dynamics 365 Customer Engagement quick create form with a value from a field on the main form. Unfortunately, the main form would not be saved at the time the quick create form was opened, so the value couldn't</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/03/17/setting-values-in-a-dynamics-365-ce-quick-create-form-from-the-main-form/</link><guid isPermaLink="false">5aad852d44999a000186ddb9</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sat, 17 Mar 2018 21:39:19 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Earlier this week I was asked to populate a field in a Dynamics 365 Customer Engagement quick create form with a value from a field on the main form. Unfortunately, the main form would not be saved at the time the quick create form was opened, so the value couldn't be read from the database.</p>
<p>While there is no good way to access the opening form from the quick create form using the Xrm.Page object model, there is a way to pass the value from the main form using the regular JavaScript browser object model. Because the quick create form and the main form are both children of the same topmost browser window, the main form can create a property in the top window that the quick create form can access when it loads.</p>
<p>Here's sample code that runs in the main form to set the topmost window's property value:</p>
<pre><code>var setValsForQuickCreate = function(){
  //store the value on the topmost window so the quick create form can read it later
  window.top.attributename = Xrm.Page.getAttribute(&quot;new_attributename&quot;).getValue();
}
</code></pre>
<p>And here's the corresponding sample code to run when the quick create form loads:</p>
<pre><code>var setValFromMainForm = function(){
  //read the value the main form placed on the topmost window
  Xrm.Page.getAttribute(&quot;new_attributename&quot;).setValue(window.top.attributename);
}
</code></pre>
<p>This is a relatively simple example that assumes the value to set in the quick create form is the exact same value from the opening form, but there's no reason you can't do transformations if necessary. Additionally, there's no error checking here, so you'll probably want to at least add null checking/handling in the quick create form's script.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Updated solution for scheduling recurring Dynamics 365 workflows]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I've released an updated version of my recurring workflow scheduler for Dynamics 365 Customer Engagement. This solution targets Dynamics 365 version 9, so it should work in all current Dynamics 365 online organizations. You can download version 1.3 of my solution from here: <a href="https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner/releases/tag/v1.3">https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner/</a></p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/03/12/updated-solution-for-scheduling-recurring-dynamics-crm-workflows-2/</link><guid isPermaLink="false">5aa6908f44999a000186ddb1</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[utilities]]></category><category><![CDATA[FetchXML]]></category><category><![CDATA[process automation]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 12 Mar 2018 15:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/03/chrome_2018-03-12_09-40-49.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/03/chrome_2018-03-12_09-40-49.png" alt="Updated solution for scheduling recurring Dynamics 365 workflows"><p>I've released an updated version of my recurring workflow scheduler for Dynamics 365 Customer Engagement. This solution targets Dynamics 365 version 9, so it should work in all current Dynamics 365 online organizations. You can download version 1.3 of my solution from here: <a href="https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner/releases/tag/v1.3">https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner/releases/tag/v1.3</a>.</p>
<p>For more information on the use of this tool, take a look at the original blog posts:</p>
<ul>
<li><a href="https://alexanderdevelopment.net/post/2016/09/19/updated-solution-for-scheduling-recurring-dynamics-crm-workflows/">https://alexanderdevelopment.net/post/2016/09/19/updated-solution-for-scheduling-recurring-dynamics-crm-workflows/</a></li>
<li><a href="https://alexanderdevelopment.net/post/2013/05/18/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/">https://alexanderdevelopment.net/post/2013/05/18/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/</a></li>
</ul>
</div>]]></content:encoded></item><item><title><![CDATA[Installing and securing OpenFaaS on a Google Cloud virtual machine]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Here is a step-by-step guide that shows how to install <a href="https://www.openfaas.com/">OpenFaaS</a> on a new <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machine instance running Ubuntu Linux and how to secure it with <a href="https://www.nginx.com/">Nginx</a> as a reverse proxy using basic authentication and free SSL/TLS certificates from <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>As you look at this</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/</link><guid isPermaLink="false">5a9169a525028e00011718db</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sun, 25 Feb 2018 13:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/000-post-image.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/000-post-image.png" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"><p>Here is a step-by-step guide that shows how to install <a href="https://www.openfaas.com/">OpenFaaS</a> on a new <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machine instance running Ubuntu Linux and how to secure it with <a href="https://www.nginx.com/">Nginx</a> as a reverse proxy using basic authentication and free SSL/TLS certificates from <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>As you look at this guide, here are a few things to keep in mind:</p>
<ol>
<li>With the exception of a few steps at the beginning that are specific to using Google Cloud, this guide will work for (probably) any cloud hosting provider.</li>
<li>In order to secure your OpenFaaS installation with SSL/TLS, you will need a domain and access to your DNS provider so you can point your domain to your virtual machine instance's IP address.</li>
<li>Although OpenFaaS runs on Docker, this guide shows how to install Nginx as a service directly on the virtual machine instance instead of in a container. There's no reason you couldn't use a containerized Nginx proxy if you want.</li>
<li>If you're comfortable with Kubernetes (I am not yet), you might want to look at running OpenFaaS on Google Kubernetes Engine instead of setting things up the way I do here.</li>
<li>Finally, if you just want to get started playing around with OpenFaaS locally, there's no need to set up a reverse proxy. Instead you can just install OpenFaaS in your local environment and access it directly.</li>
</ol>
<h4 id="provisioningthevirtualmachineinstance">Provisioning the virtual machine instance</h4>
<p>Although my day job keeps me focused on the Microsoft/Azure stack, and I've recently started using Digital Ocean as my personal blog host, I decided to use Google Cloud as my OpenFaaS host because Google was offering $300 in trial credits. Once you have a Google Cloud Platform account, setting up a virtual machine instance is easy.</p>
<ol>
<li>From your project dashboard, go to Compute Engine-&gt;Images.</li>
<li>Select the Ubuntu 17.10 image and click &quot;create instance.&quot; <img src="https://alexanderdevelopment.net/content/images/2018/02/005-selectimage.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>On the create instance screen, fill out the necessary details. I am using a &quot;small&quot; instance for this demo. <img src="https://alexanderdevelopment.net/content/images/2018/02/010-createinstance.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <br>Make sure you open HTTP and HTTPS connections to the instance. <img src="https://alexanderdevelopment.net/content/images/2018/02/020-createinstance.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Once the instance is created, follow the steps here to reserve a static external IP address: <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static</a></li>
<li>Then follow these steps to assign the static external IP to your new instance: <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign</a></li>
<li>Finally you need to go to your DNS provider and register a new A record for your domain that points to the external IP you reserved in the previous step. (I am using faas.alexanderdevelopment.net.) While you're at it, you may want to create a new <a href="https://letsencrypt.org/docs/caa/">CAA record</a> to explicitly allow Let's Encrypt to issue certificates for your domain.</li>
</ol>
<h4 id="installingdockerandinitializingyourswarm">Installing Docker and initializing your swarm</h4>
<p>At this point you should have a virtual machine instance with a static IP address and ports 80 and 443 open, and your domain should also be pointing to the static IP address.</p>
<p>Now it's time to install Docker. Out of the box, OpenFaaS runs on top of either <a href="https://docs.docker.com/engine/swarm/">Docker Swarm</a> or <a href="https://kubernetes.io/">Kubernetes</a>. To keep things simple, I am using Docker Swarm.</p>
<ol>
<li>SSH to your instance. You can do this directly through the VM instance details page. <img src="https://alexanderdevelopment.net/content/images/2018/02/030-ssh.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Run the following command to update your repository package lists: <code>sudo apt-get update</code>.</li>
<li>Install Docker by following the steps here: <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/">https://docs.docker.com/install/linux/docker-ce/ubuntu/</a>.</li>
<li>To initialize your Docker swarm, first run <code>ifconfig</code> to see what network interfaces you have available to use for the advertise-addr parameter when you initialize the swarm. This address is what other nodes in the swarm will use to connect if you add more. In my case (and probably in yours, too, if you are using Google Cloud), &quot;ens4&quot; is the correct interface.</li>
<li>Finally, initialize the swarm with this command: <code>sudo docker swarm init --advertise-addr ens4</code>. If necessary replace the &quot;ens4&quot; with whatever value is correct for you. Here's what this looks like on my instance: <img src="https://alexanderdevelopment.net/content/images/2018/02/050-swarminit.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="installingopenfaas">Installing OpenFaaS</h4>
<p>After your swarm is initialized, installing OpenFaaS is easy. Just run these commands to get the latest copy of OpenFaaS from GitHub:</p>
<pre><code>git clone https://github.com/openfaas/faas
cd faas
git checkout -
sudo ./deploy_stack.sh
</code></pre>
<p>At this point OpenFaaS is running and listening on port 8080, but you can't connect to it remotely because the default firewall rules only allow traffic on ports 80 and 443. Now you need to install a reverse proxy to route requests from the public internet to OpenFaaS.</p>
<h4 id="basicnginxconfiguration">Basic Nginx configuration</h4>
<p>Although I've read about using a number of different reverse proxies with OpenFaaS, such as <a href="https://getkong.org/docs/">Kong</a>, <a href="https://traefik.io/">Traefik</a> and <a href="https://caddyserver.com/">Caddy</a>, I decided to use Nginx for this guide because I've used it previously with other projects, and it's relatively easy to configure it to handle basic authentication, HTTPS and rate limiting.</p>
<ol>
<li>Install Nginx by running this command: <code>sudo apt-get install nginx</code>. Assuming all the steps to this point were successful, you should now be able to navigate to your domain in your browser and see the default &quot;Welcome to nginx!&quot; page. <img src="https://alexanderdevelopment.net/content/images/2018/02/070-nginx-welcome.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Now you need to update the Nginx configuration to use a non-default site and directory for hosting pages. Although we're going to mainly use Nginx as a reverse proxy for connections to OpenFaaS, it will need to be able to serve pages to respond to validation challenges from Let's Encrypt so that you can get your certificates.</li>
<li>Create a new directory for your domain. Mine is /var/www/faas.alexanderdevelopment.net.</li>
<li>Create a basic hello world index.html page in this directory.</li>
<li>Update the content of your Nginx config file (/etc/nginx/sites-available/default) with the following, replacing the &quot;faas.alexanderdevelopment.net&quot; entries with the correct values for your domain and directory:</li>
</ol>
<pre><code>server {
        listen 80;

        server_name faas.alexanderdevelopment.net;

        root /var/www/faas.alexanderdevelopment.net;
        index index.html;

        location / {
                try_files $uri $uri/ =404;
        }
}
</code></pre>
<ol start="6">
<li>Reload your Nginx configuration with this command <code>sudo service nginx reload</code>.</li>
<li>Refresh the browser window you opened earlier to verify the new test page is loaded.</li>
</ol>
<h4 id="basicauthentication">Basic authentication</h4>
<p>OpenFaaS has no built-in authentication mechanism, but you can use basic authentication in Nginx to restrict the admin areas of OpenFaaS to authenticated users. The next two steps show how to create an .htpasswd file that Nginx can use to authenticate and authorize users.</p>
<ol>
<li>Install apache2-utils <code>sudo apt-get install apache2-utils</code>.</li>
<li>Create the .htpasswd file and add a user named &quot;adminuser&quot; with this command <code>sudo htpasswd -c /etc/nginx/.htpasswd adminuser</code>. You can run the htpasswd command again if you want to create additional users.</li>
</ol>
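<p>If you later want a script or other client to authenticate through this setup, the header Nginx's basic auth checks is just a base64-encoded &quot;username:password&quot; pair. Here's a small Python sketch that builds it (the credentials below are placeholders, not values from this guide):</p>

```python
import base64

# Sketch only: build the Authorization header that Nginx's auth_basic validates.
# "adminuser"/"s3cret" are placeholder credentials.
def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("adminuser", "s3cret"))
```

<p>Most HTTP clients can do this for you (curl's <code>-u</code> flag, for example), but it's useful to see what is actually sent over the wire.</p>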
<h4 id="securingyourendpointwithletsencrypt">Securing your endpoint with Let's Encrypt</h4>
<p>Now it's time to get a certificate from Let's Encrypt. We'll be using the <a href="https://certbot.eff.org/">Certbot</a> tool to automatically obtain a certificate and update the Nginx configuration to use HTTPS.</p>
<ol>
<li>Add the Certbot repository to your instance  <code>sudo add-apt-repository ppa:certbot/certbot</code>.</li>
<li>Update your repository package lists <code>sudo apt update</code>.</li>
<li>Get the Certbot tool <code>sudo apt-get install python-certbot-nginx</code>.</li>
</ol>
<p>Let's Encrypt <a href="https://letsencrypt.org/docs/rate-limits/">limits the number of requests</a> you can make against its production environment, so it's best to verify your configuration against the Let's Encrypt staging environment first. The staging environment will generate a certificate, but you'll get a certificate warning when you try to access your site, so you'll want to update your configuration to use a production certificate after you're sure everything works.</p>
<ol>
<li>Run this command to request a test certificate <code>sudo certbot --authenticator webroot --installer nginx --test-cert</code>. The first time you run the tool, you will be asked for your email and if you agree to the terms and conditions. After that, follow the prompts, and make sure you select the option to redirect all traffic to HTTPS in the final step. Here's what it looks like for me: <img src="https://alexanderdevelopment.net/content/images/2018/02/100-certbot-test.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Now you should be able to navigate to your domain's test index page using HTTPS to verify everything worked.</li>
<li>If you reopen your Nginx configuration file, you'll see that Certbot has added some sections as indicated by the &quot;managed by Certbot&quot; comments. <img src="https://alexanderdevelopment.net/content/images/2018/02/102-nginx-config-post-certbot.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="updatingthenginxconfigurationtoworkwithopenfaas">Updating the Nginx configuration to work with OpenFaaS</h4>
<p>Now that your Nginx server is able to handle HTTPS traffic, you need to update your Nginx configuration to set up the reverse proxy to OpenFaaS. At this point you'll also want to enable the basic authentication for the admin areas of the OpenFaaS user interface, and you'll need to make a small change to the HTTP-&gt;HTTPS redirect that Certbot set up previously so that you can request a production certificate from Let's Encrypt later.</p>
<p>Replace the contents of your Nginx configuration file with the following, again replacing the &quot;faas.alexanderdevelopment.net&quot; entries with the correct values for your domain and root directory:</p>
<pre><code>server {
        server_name faas.alexanderdevelopment.net;

        root /var/www/faas.alexanderdevelopment.net;
        index index.html;

        #serve acme challenge files from actual directory
        location /.well-known {
                try_files $uri $uri/ =404;
        }

        #reverse proxy all &quot;function&quot; requests to openfaas and require no authentication
        location /function {
                proxy_pass      http://127.0.0.1:8080/function;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    Host      $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        #reverse proxy everything else to openfaas and require basic authentication
        location / {
                proxy_pass      http://127.0.0.1:8080;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    Host      $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                auth_basic &quot;Restricted&quot;;
                auth_basic_user_file /etc/nginx/.htpasswd;
        }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/faas.alexanderdevelopment.net-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/faas.alexanderdevelopment.net-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

#this block redirects all non-ssl traffic to the ssl version
server {
        listen 80;

        server_name faas.alexanderdevelopment.net;
        root /var/www/faas.alexanderdevelopment.net;

        #serve acme challenge files from actual directory
        location /.well-known {
                try_files $uri $uri/ =404;
        }

        #redirect anything other than challenges to https
        location / {
                return 301 https://$host$request_uri;
        }
}
</code></pre>
<p>A few notes on this configuration:</p>
<ol>
<li>The HTTP server section (starting on line 40) has been updated to not redirect requests to the &quot;/.well-known&quot; directory to HTTPS. With the 301 redirect in place for all requests, Certbot would return validation errors when I attempted to change over from my test certificate to a production certificate.</li>
<li>Basic authentication is enabled for all requests to the site except for the &quot;/.well-known&quot; directory and the &quot;/function&quot; directory (see lines 28 and 29). The &quot;/.well-known&quot; directory needs to be accessible without authentication to handle Let's Encrypt validation requests, and the &quot;/function&quot; directory has been left open with the assumption that you'll use some sort of <a href="https://github.com/openfaas/faas/tree/master/sample-functions/ApiKeyProtected">API key</a> mechanism for authenticating to your functions. If your function clients support passing basic auth credentials, you can secure the &quot;/function&quot; directory with basic auth, too.</li>
<li>This configuration does not expose the Prometheus monitoring UI on port 9090 that is installed with OpenFaaS.</li>
</ol>
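<p>The access rules above boil down to a simple prefix check. This Python sketch (a deliberate simplification that ignores Nginx's full location-matching semantics) summarizes which request paths bypass basic authentication under this configuration:</p>

```python
# Simplified model of the access rules in the Nginx configuration above.
# Real Nginx location matching is more involved; this only checks path prefixes.
OPEN_PREFIXES = ("/.well-known", "/function")

def requires_basic_auth(path):
    """True when a request path falls under the auth_basic-protected location."""
    return not path.startswith(OPEN_PREFIXES)

print(requires_basic_auth("/function/echo"))  # function calls stay open
print(requires_basic_auth("/ui/"))            # the admin UI requires credentials
```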
<p>Once you update your configuration and reload Nginx, you should be able to test one of the default included functions with curl. <img src="https://alexanderdevelopment.net/content/images/2018/02/110-curl-validation.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <em>You'll note that I have passed the &quot;-k&quot; flag to curl to disable certificate validation.</em></p>
<p>You should also be able to navigate to the OpenFaaS admin user interface by going to https://YOUR_DOMAIN_HERE/ui/. You will be prompted for credentials and presented with a warning about the certificate. If you use the &quot;adminuser&quot; credentials you created earlier and acknowledge the warning/continue, you will see the OpenFaaS main user interface screen.</p>
<h4 id="gettingaproductioncertificate">Getting a production certificate</h4>
<p>If you've gotten to this point and everything works, you're ready to switch over from using a test SSL/TLS certificate to using a production certificate.</p>
<ol>
<li>Run Certbot without the --test-cert flag <code>sudo certbot --authenticator webroot --installer nginx</code>.</li>
<li>Select the option to renew and replace the existing certificate. <img src="https://alexanderdevelopment.net/content/images/2018/02/120-certbot-prod.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Follow the rest of the prompts like when you requested the test certificate, except when you get to the final step, instead of selecting the option to redirect all traffic, select option &quot;1: No redirect.&quot;</li>
<li>Close all your open browser windows and verify you can browse to the OpenFaaS UI by going to https://YOUR_DOMAIN_HERE/ui/. You should be prompted for credentials again, but this time you should not see any certificate warnings. <img src="https://alexanderdevelopment.net/content/images/2018/02/130-browser-verification.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <br><br>You can also try running one of the default functions through <a href="https://www.getpostman.com/">Postman</a> to validate you receive no certificate errors. <img src="https://alexanderdevelopment.net/content/images/2018/02/140-postman-verfication.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="wrappingup">Wrapping up</h4>
<p>At this point you have a secure OpenFaaS server, but there are still a few things you should do.</p>
<ol>
<li>Back up your Nginx configuration, htpasswd file and certificates.</li>
<li>Remove the default functions because they are not protected by an API key, and they are runnable by anyone who can access your &quot;/function&quot; directory, which, if you use my configuration, is actually anyone.</li>
<li>Set up <a href="https://lincolnloop.com/blog/rate-limiting-nginx/">rate limiting</a> for the Nginx server.</li>
<li>Schedule automated <a href="https://certbot.eff.org/docs/using.html#renewal">certificate renewals</a> using cron.</li>
<li>Get started writing functions and have fun!</li>
</ol>
</div>]]></content:encoded></item><item><title><![CDATA[Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Recently I was asked to set up a process to automatically disable or re-enable Dynamics 365 Customer Engagement users depending on some external data. This ended up being ridiculously easy to do with SSIS and KingswaySoft's <a href="http://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365">Dynamics 365 Integration Toolkit</a>. Let me show you how it works.</p>
<p>In Dynamics 365</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/08/disable-enable-dynamics-365-ce-users-with-ssis-kingswaysoft/</link><guid isPermaLink="false">5a7c568fc86c8900016cf372</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[KingswaySoft]]></category><category><![CDATA[SSIS]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 08 Feb 2018 19:01:01 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/02_query_users-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/02_query_users-1.png" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"><p>Recently I was asked to set up a process to automatically disable or re-enable Dynamics 365 Customer Engagement users depending on some external data. This ended up being ridiculously easy to do with SSIS and KingswaySoft's <a href="http://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365">Dynamics 365 Integration Toolkit</a>. Let me show you how it works.</p>
<p>In Dynamics 365 CE, you can disable or enable a user record just by setting the value of its &quot;isdisabled&quot; attribute to true or false, so both my disable user data flow and re-enable user data flow do roughly the same thing.</p>
<ol>
<li>Get a list of Dynamics 365 user records to update.</li>
<li>Add a derived column to hold the value to use for updating isdisabled on the user records.</li>
<li>Update the user records.</li>
</ol>
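<p>Before looking at the SSIS screenshots, the core transformation is easy to reason about. Here's an illustrative Python sketch of what the three steps do to each user row (the real implementation uses the KingswaySoft derived column and update components, not Python, and the rows below are made up):</p>

```python
# Illustrative only: mirrors step 2 of the data flow, which adds a derived
# "isdisabled" column to each user row. 1 disables a user; 0 re-enables it.
def set_disabled_state(users, disable=True):
    value = 1 if disable else 0
    return [dict(user, isdisabled=value) for user in users]

users = [{"systemuserid": "guid-1"}, {"systemuserid": "guid-2"}]  # made-up rows
print(set_disabled_state(users))
```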
<h4 id="thedisableuserspackage">The disable users package</h4>
<p>Here's a screenshot of a sample disable users data flow.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/01_disable_user_flow.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></p>
<p>Let's take a closer look at each step.</p>
<ol>
<li>Query users using FetchXML. <img src="https://alexanderdevelopment.net/content/images/2018/02/02_query_users.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></li>
<li>Add a derived column named &quot;isdisabled&quot; and set its value to 1. <img src="https://alexanderdevelopment.net/content/images/2018/02/03_isdisabled_column.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></li>
<li>Update the users. <img src="https://alexanderdevelopment.net/content/images/2018/02/04_update_users.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></li>
</ol>
<p>Enabling the users works exactly the same way, except the value of the &quot;isdisabled&quot; column should be 0 instead of 1, so I won't show the screenshots for that package.</p>
<h4 id="disableuserdemo">Disable user demo</h4>
<p>In my Dynamics 365 online instance, I have an active user named &quot;Angus Alexander&quot; who I want to disable. <img src="https://alexanderdevelopment.net/content/images/2018/02/05_enabled_users.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></p>
<p>When I run the disable users package with the query from above (<code>&lt;condition attribute=&quot;firstname&quot; operator=&quot;eq&quot; value=&quot;angus&quot; /&gt;</code>) in Visual Studio, I see success on every step. <img src="https://alexanderdevelopment.net/content/images/2018/02/06_package_run.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></p>
<p>I check back in Dynamics 365 to see Angus Alexander is no longer an enabled user. <img src="https://alexanderdevelopment.net/content/images/2018/02/07_enabled_users.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></p>
<p>Instead Angus shows up as a disabled user. <img src="https://alexanderdevelopment.net/content/images/2018/02/08_disabled_users.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></p>
<p>Now when Angus tries to access Dynamics 365, he sees that his account has been disabled. <img src="https://alexanderdevelopment.net/content/images/2018/02/09_disabled_message.png#img-thumbnail" alt="Disable and enable Dynamics 365 CE users with SSIS & KingswaySoft"></p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the final post in my series about building a service relay for Dynamics 365 CE with RabbitMQ and Python. In my previous <a href="https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/">post</a> in this series, I showed the Python code to make the service relay work. In today's post, I will show how you can use <a href="https://azure.microsoft.com/en-us/services/functions/">Azure</a></p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/07/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-4/</link><guid isPermaLink="false">5a788a53c86c8900016cf367</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[Python]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[Azure]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 08 Feb 2018 04:00:42 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay-2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay-2.png" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4"><p>This is the final post in my series about building a service relay for Dynamics 365 CE with RabbitMQ and Python. 
In my previous <a href="https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/">post</a> in this series, I showed the Python code to make the service relay work. In today's post, I will show how you can use <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> to make a consumer service proxy using C# so client applications don't have to access your RabbitMQ broker directly, and I will also share some general thoughts on security and scalability for this service relay architecture.</p>
<p>Although this simple service relay allows external consumers to get data from Dynamics 365 CE without needing to connect directly, the examples I've shown so far require that they can connect to a RabbitMQ broker. This may be problematic for a variety of reasons, so you would probably want external consumers to connect to a web service proxy that would write requests to and read responses from the RabbitMQ broker.</p>
<h4 id="buildingaserviceproxyfunction">Building a service proxy function</h4>
<p>You can build an Azure Functions service proxy with Python, but I don't recommend it for three reasons:</p>
<ol>
<li>Azure Functions Python support is still considered experimental.</li>
<li>Python scripts that use external libraries can run <a href="https://github.com/Azure/azure-functions-host/issues/1626">exceedingly slowly</a>.</li>
<li>Getting the environment set up is a bit of a hassle.</li>
</ol>
<p>On the other hand, building a service proxy function with C# was so much easier, and it performed much better than a comparable Python function (~.5 seconds for C# compared to 5+ seconds for Python).</p>
<p>Here are the steps I took to build my C# service proxy function:</p>
<ol>
<li>Create a C# HTTP trigger function.</li>
<li>Create and upload a project.json file with a dependency on the RabbitMQ client (see below).</li>
<li>Take the &quot;RpcClient&quot; class from the <a href="https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html">RabbitMQ .Net RPC tutorial</a> and call it from within my function.</li>
</ol>
<p>Here's my project.json file:</p>
<pre><code>{
  &quot;frameworks&quot;: {
    &quot;net46&quot;:{
      &quot;dependencies&quot;: {
        &quot;RabbitMQ.Client&quot;: &quot;5.0.1&quot;
      }
    }
   }
}
</code></pre>
<p>And here's my run.csx file:</p>
<pre><code>using System.Net;
using System;
using System.Collections.Concurrent;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static async Task&lt;HttpResponseMessage&gt; Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info(&quot;Processing request&quot;);

    // parse query parameter
    string query = req.GetQueryNameValuePairs()
        .FirstOrDefault(q =&gt; string.Compare(q.Key, &quot;query&quot;, true) == 0)
        .Value;

    // Get request body
    dynamic data = await req.Content.ReadAsAsync&lt;object&gt;();

    // Set query from the query string or the request body
    query = query ?? data?.query;

    var rpcClient = new RpcClient();
    
    log.Info(string.Format(&quot; [.] query start time {0}&quot;, DateTime.Now.ToString(&quot;MM/dd/yyyy hh:mm:ss.fff tt&quot;)));
    var response = rpcClient.Call(query);

    log.Info(string.Format(&quot; [.] query end time {0}&quot;, DateTime.Now.ToString(&quot;MM/dd/yyyy hh:mm:ss.fff tt&quot;)));
    rpcClient.Close();

    return req.CreateResponse(HttpStatusCode.OK, response);
}

public class RpcClient
{
    private readonly IConnection connection;
    private readonly IModel channel;
    private readonly string replyQueueName;
    private readonly EventingBasicConsumer consumer;
    private readonly BlockingCollection&lt;string&gt; respQueue = new BlockingCollection&lt;string&gt;();
    private readonly IBasicProperties props;

    public RpcClient()
    {
        var factory = new ConnectionFactory() { HostName = &quot;RABBITHOST&quot;, UserName=&quot;RABBITUSER&quot;, Password=&quot;RABBITUSERPASS&quot;  };

        connection = factory.CreateConnection();
        channel = connection.CreateModel();
        replyQueueName = channel.QueueDeclare().QueueName;
        consumer = new EventingBasicConsumer(channel);

        props = channel.CreateBasicProperties();
        var correlationId = Guid.NewGuid().ToString();
        props.CorrelationId = correlationId;
        props.ReplyTo = replyQueueName;

        consumer.Received += (model, ea) =&gt;
        {
            var body = ea.Body;
            var response = Encoding.UTF8.GetString(body);
            if (ea.BasicProperties.CorrelationId == correlationId)
            {
                respQueue.Add(response);
            }
        };
    }

    public string Call(string message)
    {
        var messageBytes = Encoding.UTF8.GetBytes(message);
        channel.BasicPublish(
            exchange: &quot;&quot;,
            routingKey: &quot;rpc_queue&quot;,
            basicProperties: props,
            body: messageBytes);

        channel.BasicConsume(
            consumer: consumer,
            queue: replyQueueName,
            autoAck: true);

        return respQueue.Take();
    }

    public void Close()
    {
        connection.Close();
    }
}
</code></pre>
<p>Here's a screenshot showing me calling the C# function with Postman.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/Postman_2018-02-05_22-02-52.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4"></p>
<p>Because I did actually build a Python function, I'll go ahead and share how I did it in case you're interested. Here are the steps I took:</p>
<ol>
<li>Create a Python HTTP trigger function.</li>
<li>Install Python 3.6 via site extensions (see steps 2.1-2.4 <a href="https://stackoverflow.com/a/47213859">here</a>).</li>
<li>Install the necessary libraries using pip via <a href="https://david-obrien.net/2016/07/azure-functions-kudu/">KUDU</a>.</li>
</ol>
<p>Here's the Python function code:</p>
<pre><code>import os
import sys
import json
import pika
import uuid
import datetime

class CrmRpcClient(object):
    def __init__(self):
        #RabbitMQ connection details
        self.rabbituser = 'RABBITUSERNAME'
        self.rabbitpass = 'RABBITUSERPASS'
        self.rabbithost = 'RABBITHOST' 
        self.rabbitport = 5672
        self.rabbitqueue = 'rpc_queue'
        rabbitcredentials = pika.PlainCredentials(self.rabbituser, self.rabbitpass)
        rabbitparameters = pika.ConnectionParameters(host=self.rabbithost,
                                    port=self.rabbitport,
                                    virtual_host='/',
                                    credentials=rabbitcredentials)

        self.rabbitconn = pika.BlockingConnection(rabbitparameters)

        self.channel = self.rabbitconn.channel()

        #create an anonymous exclusive callback queue
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue

        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    #callback method for when a response is received - note the check for correlation id
    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    #method to make the initial request
    def call(self, n):
        self.response = None
        #generate a new correlation id
        self.corr_id = str(uuid.uuid4())

        #publish the message to the rpc_queue - note the reply_to property is set to the callback queue from above
        self.channel.basic_publish(exchange='',
                                   routing_key=self.rabbitqueue,
                                   properties=pika.BasicProperties(
                                         reply_to = self.callback_queue,
                                         correlation_id = self.corr_id,
                                         ),
                                   body=n)
        while self.response is None:
            self.rabbitconn.process_data_events()
        return self.response

#instantiate an rpc client
crm_rpc = CrmRpcClient()

#read the request query from the function input
postreqdata = json.loads(open(os.environ['req']).read())
query = postreqdata['query']

print(&quot; [.] query start time %r&quot; % str(datetime.datetime.now()))
queryresponse = crm_rpc.call(query)
print(&quot; [.] query end time %r&quot; % str(datetime.datetime.now()))
response = open(os.environ['res'], 'w')
response.write(queryresponse.decode())
response.close()
</code></pre>
<p>Here's a screenshot showing me calling the Python function with Postman.<img src="https://alexanderdevelopment.net/content/images/2018/02/Postman_2018-02-05_22-10-20.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4"></p>
<p>Note the difference in time between the two functions - 5.62 seconds for Python and .46 seconds for C#!</p>
<h4 id="securityandscalability">Security and scalability</h4>
<p>If you decide to use this approach in production, I'd suggest you carefully consider both security and scalability. Obviously the overall solution will only be as secure as your RabbitMQ broker and communications between the broker and its clients, so you'll want to look at best practices for access control and securing the communications with TLS. Here are some links for further reading on those subjects:</p>
<ul>
<li>TLS - <a href="https://www.rabbitmq.com/ssl.html">https://www.rabbitmq.com/ssl.html</a></li>
<li>Access control - <a href="https://www.rabbitmq.com/access-control.html">https://www.rabbitmq.com/access-control.html</a></li>
</ul>
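<p>As a starting point on the TLS side, what you ultimately hand the client library is a standard verifying <code>ssl</code> context; with pika 1.x (a newer API than the 0.x style used in these samples) you would wrap it in <code>pika.SSLOptions</code> and pass it through <code>ConnectionParameters</code>, with the broker listening on the AMQPS port 5671. A minimal sketch of just the context setup:</p>

```python
import ssl

# A verifying TLS context for AMQPS (default port 5671).
# create_default_context loads the system CA store and enables
# both certificate verification and hostname checking.
context = ssl.create_default_context()

# If your broker uses a private CA, load it explicitly instead:
# context.load_verify_locations(cafile="my-ca.pem")

print(context.verify_mode == ssl.CERT_REQUIRED)
print(context.check_hostname)
```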
<p>As for scalability, the approach I've shown creates a separate response queue for each consumer, but it can have problems scaling, especially if you are using a RabbitMQ cluster. You may want to look at the <a href="https://www.rabbitmq.com/direct-reply-to.html">&quot;direct reply-to&quot;</a> approach instead. For an interesting real-world overview of using direct reply-to, take a look at this <a href="https://facundoolano.wordpress.com/2016/06/26/real-world-rpc-with-rabbitmq-and-node-js/">blog post</a>.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>I hope you've enjoyed this series and that it has given you some ideas about how to implement service relays in your Dynamics 365 CE projects. As I worked through the examples, I certainly learned a few new things, especially when I created my Python service proxy in Azure Functions.</p>
<p>Here are links to all the previous posts in this series.</p>
<ol>
<li><a href="https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/">Part 1</a> - Series introduction</li>
<li><a href="https://alexanderdevelopment.net/post/2018/02/01/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-2/">Part 2</a> - Solution prerequisites</li>
<li><a href="https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/">Part 3</a> - Python code for the consumer and listener processes</li>
</ol>
<p>What do you think about this approach? Is it something you think you'd use in production? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In my last <a href="https://alexanderdevelopment.net/post/2018/02/01/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-2/">post</a> in this series, I walked through the prerequisites for building a simple service relay for Dynamics 365 CE with RabbitMQ and Python. In today's post I will show the Python code to make the service relay work.</p>
<p>As I described in the <a href="https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/">first post</a> in this</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/</link><guid isPermaLink="false">5a6cab4cc86c8900016cf352</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[Python]]></category><category><![CDATA[RabbitMQ]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 05 Feb 2018 17:57:29 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay-1.png" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3"><p>In my last <a href="https://alexanderdevelopment.net/post/2018/02/01/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-2/">post</a> in this series, I walked through the prerequisites for building a simple service relay for Dynamics 365 CE with RabbitMQ and Python. In today's post I will show the Python code to make the service relay work.</p>
<p>As I described in the <a href="https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/">first post</a> in this series, this approach relies on a consumer process and a queue listener process that can both access a RabbitMQ message broker.</p>
<blockquote>
<p>A consumer writes a request to a cloud-hosted RabbitMQ request queue (either directly or through a proxy service) and starts waiting for a response. On the other end, a Python script monitors the request queue for inbound requests. When it sees a new one, it executes the appropriate request through the Dynamics 365 Web API and writes the response back to a client-specific RabbitMQ response queue. The consumer then picks up the response from the queue.</p>
</blockquote>
<p>This solution is based on the remote procedure call (RPC) approach shown <a href="https://www.rabbitmq.com/tutorials/tutorial-six-python.html">here</a>. The main difference is that I have added logic to the queue monitoring script to query the Dynamics 365 Web API based on the inbound request from the consumer.</p>
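<p>The essence of that RPC pattern — tag each request with a correlation id and a reply-to queue, then match responses on the correlation id — can be illustrated in-process with plain Python queues standing in for the broker (no RabbitMQ required here; the canned responses are obviously fake):</p>

```python
import queue
import uuid

#stand-ins for the RabbitMQ request queue and a per-consumer callback queue
rpc_queue = queue.Queue()
callback_queue = queue.Queue()

def listener_step():
    #what the listener does per message: read a request, run the "query",
    #publish the response to the reply_to queue with the same correlation id
    props, body = rpc_queue.get()
    canned = {'getcontacts': '[contact json]', 'getaccounts': '[account json]'}
    response = canned.get(body, 'Operation not supported')
    props['reply_to'].put((props['correlation_id'], response))

#consumer side: publish a request tagged with a fresh correlation id
corr_id = str(uuid.uuid4())
rpc_queue.put(({'correlation_id': corr_id, 'reply_to': callback_queue}, 'getcontacts'))

listener_step()

#consumer side: take the response and check the correlation id matches
reply_corr_id, response = callback_queue.get()
assert reply_corr_id == corr_id
print(response)  # [contact json]
```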
<h4 id="consumersample">Consumer sample</h4>
<p>The consumer does the following:</p>
<ol>
<li>Read the text of the request to write to the queue from a command-line argument.</li>
<li>Establish a connection to the RabbitMQ broker.</li>
<li>Create a new anonymous, exclusive callback queue.</li>
<li>Write a request message to a queue called &quot;rpc_queue.&quot; This message will include the callback queue as its &quot;reply_to&quot; property.</li>
<li>Monitor the callback queue for a response.</li>
</ol>
<p>There's no validation in this sample, so if you run it without a command-line argument, it will just throw an error and exit.</p>
<pre><code>import sys
import pika
import uuid
import datetime

class CrmRpcClient(object):
    def __init__(self):
        #RabbitMQ connection details
        self.rabbituser = 'crmuser'
        self.rabbitpass = 'crmpass'
        self.rabbithost = '127.0.0.1' 
        self.rabbitport = 5672
        self.rabbitqueue = 'rpc_queue'
        rabbitcredentials = pika.PlainCredentials(self.rabbituser, self.rabbitpass)
        rabbitparameters = pika.ConnectionParameters(host=self.rabbithost,
                                    port=self.rabbitport,
                                    virtual_host='/',
                                    credentials=rabbitcredentials)

        self.rabbitconn = pika.BlockingConnection(rabbitparameters)

        self.channel = self.rabbitconn.channel()

        #create an anonymous exclusive callback queue
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue

        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    #callback method for when a response is received - note the check for correlation id
    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    #method to make the initial request
    def call(self, n):
        self.response = None
        #generate a new correlation id
        self.corr_id = str(uuid.uuid4())

        #publish the message to the rpc_queue - note the reply_to property is set to the callback queue from above
        self.channel.basic_publish(exchange='',
                                   routing_key=self.rabbitqueue,
                                   properties=pika.BasicProperties(
                                         reply_to = self.callback_queue,
                                         correlation_id = self.corr_id,
                                         ),
                                   body=n)
        while self.response is None:
            self.rabbitconn.process_data_events()
        return self.response

#instantiate an rpc client
crm_rpc = CrmRpcClient()

#read the request from the command line
request = sys.argv[1]

#make the request and get the response
print(&quot; [x] Requesting crm data(&quot;+request+&quot;)&quot;)
print(&quot; [.] Start time %s&quot; % str(datetime.datetime.now()))
response = crm_rpc.call(request)

#convert the response message body from the queue to a string 
decoderesponse = response.decode()

#print the output
print(&quot; [.] Received response: %s&quot; % decoderesponse)
print(&quot; [.] End time %s&quot; % str(datetime.datetime.now()))
</code></pre>
<h4 id="queuelistenersample">Queue listener sample</h4>
<p>The queue listener does the following:</p>
<ol>
<li>Establish a connection to the RabbitMQ broker</li>
<li>Monitor &quot;rpc_queue&quot; queue.</li>
<li>When a new message from the &quot;rpc_queue&quot; queue is delivered, decode the message body as a string, and determine what Web API query to execute. Note: This sample can return a list of contacts or accounts from Dynamics 365 CE based on the request the consumer sends (&quot;getcontacts&quot; or &quot;getaccounts&quot;). If any other request is received, the listener will return an error message to the consumer callback queue.</li>
<li>Execute the appropriate query against the Dynamics 365 Web API and write the response to the callback queue the client established originally.</li>
</ol>
<pre><code>import pika
import requests
from requests_ntlm import HttpNtlmAuth
import json

#NTLM credentials to access on-prem Dynamics 365 Web API
username = 'DOMAIN\\USERNAME'
userpassword = 'PASSWORD'

#full path to Web API
crmwebapi = 'http://33.0.0.16/lucastest02/api/data/v8.1'

#RabbitMQ connection details
rabbituser = 'crmuser'
rabbitpass = 'crmpass'
rabbithost = '127.0.0.1' 
rabbitport = 5672

#method to execute a Web API query based on the client request
def processquery(query):
    #set the Web API request headers
    crmrequestheaders = {
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json; charset=utf-8',
        #duplicate dict keys overwrite each other, so both Prefer values go in one header
        'Prefer': 'odata.maxpagesize=500, odata.include-annotations=OData.Community.Display.V1.FormattedValue'
    }

    #determine which Web API query to execute
    if query == 'getcontacts':
        crmwebapiquery = '/contacts?$select=fullname,contactid'
    elif query == 'getaccounts':
        crmwebapiquery = '/accounts?$select=name,accountid'
    else:
        #only handle 'getcontacts' or 'getaccounts' requests
        return 'Operation not supported'

    #execute the query
    crmres = requests.get(crmwebapi+crmwebapiquery, headers=crmrequestheaders,auth=HttpNtlmAuth(username,userpassword))
    
    #get the results json
    crmjson = crmres.json()

    #return the json
    return crmjson

#method to handle new inbound requests
def on_request(ch, method, props, body):
    #convert the message body from the queue to a string
    decodebody = body.decode('utf-8')

    #print the request
    print(&quot; [.] Received request: '%s'&quot; % decodebody)

    #process the request query
    response = processquery(decodebody)

    #publish the response back to 'reply-to' queue from the request message and set the correlation id
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id = \
                                                         props.correlation_id),
                     body=str(response).encode(encoding=&quot;utf-8&quot;, errors=&quot;strict&quot;))
    ch.basic_ack(delivery_tag = method.delivery_tag)

print(&quot; [x] Awaiting RPC requests&quot;)

#connect to RabbitMQ broker
rabbitcredentials = pika.PlainCredentials(rabbituser, rabbitpass)
rabbitparameters = pika.ConnectionParameters(host=rabbithost,
                               port=rabbitport,
                               virtual_host='/',
                               credentials=rabbitcredentials)
rabbitconn = pika.BlockingConnection(rabbitparameters)
channel = rabbitconn.channel()

#declare the 'rpc_queue' queue
channel.queue_declare(queue='rpc_queue')

#set qos settings for the channel
channel.basic_qos(prefetch_count=1)

#assign the 'on_request' method as a callback for when new messages delivered from the 'rpc_queue' queue
channel.basic_consume(on_request, queue='rpc_queue')

#start listening for requests
channel.start_consuming()
</code></pre>
<h4 id="tryingitout">Trying it out</h4>
<p>As I mentioned in my last post, I initially wrote my code to use a RabbitMQ broker running on my local PC, so that's why the connections in the samples show 127.0.0.1 as the host. For a demo, I've spun up a copy of RabbitMQ in a Docker container in the cloud and updated my connection parameters accordingly, but I am still running my queue listener and consumer processes on my local PC.</p>
<p>When the listener first starts, it displays a simple status message.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/1_start_listener.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3"></p>
<p>Then I execute a &quot;getcontacts&quot; request from the consumer in a separate window.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/2_get_contacts.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3"></p>
<p>From the timestamps before and after the request, you can see the round-trip time is less than .2 seconds, which includes two round trips between my local PC and the cloud-based RabbitMQ broker <em>plus</em> the actual query processing time in my local Dynamics 365 CE org.</p>
<p>Then I execute a &quot;getaccounts&quot; request.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/4_get_accounts.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3"></p>
<p>This request was also fulfilled in less than .2 seconds.</p>
<p>Finally I execute an invalid request to show what the error response looks like.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/6_get_leads.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3"></p>
<p>You'll note the total time from request to response is only about .05 seconds less than the total time for the valid queries. That indicates most of the time used in these samples is being spent on the round trips between my local PC and the RabbitMQ broker, which is not surprising.</p>
<p>Meanwhile, the queue listener wrote a simple status update for every request it received. If I were using this in production, I would use more sophisticated logging.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/7_listener_output.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 3"></p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That's it for now. In my next (and final) post in this series, I will show how you can use <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> to make a consumer service proxy so consuming applications don't have to access your RabbitMQ broker directly, and I will also share some general thoughts on security and scalability for the service relay.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In my <a href="https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/">last post</a> in this series, I outlined an approach for building a simple service relay with <a href="https://www.rabbitmq.com/">RabbitMQ</a> and <a href="https://www.python.org/">Python</a> to easily expose an on-premises Dynamics 365 Customer Engagement organization to external consumers. In this post I will walk through the prerequisites for building this out. I'm assuming you</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/01/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-2/</link><guid isPermaLink="false">5a6ca8fec86c8900016cf351</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[Python]]></category><category><![CDATA[RabbitMQ]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 02 Feb 2018 03:24:51 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay.png" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 2"><p>In my <a href="https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/">last post</a> in this series, I outlined an approach for building a simple service relay with <a href="https://www.rabbitmq.com/">RabbitMQ</a> and <a 
href="https://www.python.org/">Python</a> to easily expose an on-premises Dynamics 365 Customer Engagement organization to external consumers. In this post I will walk through the prerequisites for building this out. I'm assuming you have access to a Dynamics 365 CE organization, so I'm going to skip the setup for that and focus on just RabbitMQ and Python today.</p>
<h4 id="settinguprabbitmq">Setting up RabbitMQ</h4>
<p>Back in 2015, when I first blogged about RabbitMQ and Dynamics 365, I wrote a <a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2/">detailed post</a> showing how to install and configure RabbitMQ. Since then I have discovered the joys of <a href="https://www.docker.com/">Docker</a>, which makes the process significantly easier. If you have access to Docker, I highly recommend using it. Once you have Docker running, you can use one of the <a href="https://hub.docker.com/_/rabbitmq/">official RabbitMQ images</a>. For this project, I initially used the rabbitmq:3-management image in <a href="https://www.docker.com/docker-windows">Docker for Windows</a> running on my local PC. After I got the basic functionality working, I then moved to an instance of Docker running in the cloud on a <a href="https://www.digitalocean.com" target="_blank">Digital Ocean</a> VPS.</p>
<p>If you don't want to use Docker, you can use a full RabbitMQ install like I showed <a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2/">previously</a>. The main thing to remember is that no matter how you set up your RabbitMQ server, if it is not accessible from the public internet, you will not be able to use it as a service relay between an on-premises Dynamics 365 org and external consumers.</p>
<h4 id="settinguppython">Setting up Python</h4>
<p>I'm assuming if you've gotten this far, you have a functional Python development environment (if not, give <a href="https://code.visualstudio.com/docs/languages/python">Visual Studio Code</a> a try), and the code I have written works in Python versions 2.7 or 3.x. In order to connect to both RabbitMQ and Dynamics 365, you will need a few additional packages. To connect to RabbitMQ, <a href="https://pika.readthedocs.io/en/0.11.2/">Pika</a> is the RabbitMQ team's recommended Python client, and you can get it using <a href="https://pypi.python.org/pypi/pip">pip</a>.</p>
<p>To communicate with Dynamics 365, you'll need to use the Web API, but authentication will be handled differently depending on whether you connect to an on-premises org or an online / IFD org. For online or IFD orgs, you can either use <a href="https://jlattimer.blogspot.com/2015/11/crm-web-api-using-python.html">ADAL</a> or this <a href="http://alexanderdevelopment.net/post/2016/11/27/dynamics-365-and-python-integration-using-the-web-api/">alternate approach</a> I described back in 2016. If you have an on-premises org, you can authenticate using the requests_ntlm package like I showed <a href="https://alexanderdevelopment.net/post/2018/01/15/connecting-to-an-on-premise-dynamics-365-org-from-python/">here</a>. As with the Pika client, all the packages you need to connect to Dynamics 365 are also available via pip.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That's it for today. In my next post in this series I will show the Python code you need to make this service relay work.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 1]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Integrating with external systems is a common requirement in Dynamics 365 Customer Engagement projects, but when the project involves an on-premises instance of Dynamics 365, routing requests from external systems through your firewall can present an additional challenge. Over the course of the next few posts, I will show you</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/</link><guid isPermaLink="false">5a636975e2df920001a88f8e</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[Python]]></category><category><![CDATA[RabbitMQ]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Wed, 31 Jan 2018 01:01:10 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/01/simple-service-relay-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/01/simple-service-relay-1.png" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 1"><p>Integrating with external systems is a common requirement in Dynamics 365 Customer Engagement projects, but when the project involves an on-premises instance of Dynamics 365, routing requests from external systems through your firewall can present an additional challenge. Over the course of the next few posts, I will show how you can easily build a simple service relay with <a href="https://www.rabbitmq.com/">RabbitMQ</a> and <a href="https://www.python.org/">Python</a> to handle inbound requests from external data interface consumers.</p>
<p>Here's how my approach works. A consumer writes a request to a cloud-hosted RabbitMQ request queue (either directly or through a proxy service) and starts waiting for a response. On the other end, a Python script monitors the request queue for inbound requests. When it sees a new one, it executes the appropriate request through the Dynamics 365 Web API and writes the response back to a client-specific RabbitMQ response queue. The consumer then picks up the response from the queue. This way the consumer doesn't need to know anything other than how to write the initial request, and no extra inbound firewall ports need to be opened.</p>
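<p>The request and response flow above can be sketched as a pair of message envelopes tied together by a correlation ID. This is only an illustration of the pattern, not the final relay code: the field names and the <code>execute</code> callable are placeholders I invented, and in the working relay the envelopes would be consumed from and published to RabbitMQ queues with Pika rather than passed around as function arguments.</p>

```python
import json
import uuid

def make_request(method, path, body=None, reply_to="response-queue-client1"):
    """Build the JSON envelope a consumer writes to the request queue."""
    return json.dumps({
        "correlation_id": str(uuid.uuid4()),  # lets the consumer match replies
        "reply_to": reply_to,                 # client-specific response queue
        "method": method,                     # e.g. GET, POST, PATCH
        "path": path,                         # Web API path, e.g. "accounts?$select=name"
        "body": body,
    })

def handle_request(message, execute):
    """Process one queued request and build the response envelope.

    `execute` is a callable (method, path, body) -> (status, data); in the
    real relay it would call the Dynamics 365 Web API.
    """
    req = json.loads(message)
    status, data = execute(req["method"], req["path"], req["body"])
    response = {
        "correlation_id": req["correlation_id"],
        "status": status,
        "data": data,
    }
    # In the real queue monitor this would be published to req["reply_to"].
    return req["reply_to"], json.dumps(response)
```

<p>Because the consumer only ever sees its own response queue and the correlation ID it generated, it needs no knowledge of the Dynamics 365 org behind the relay.</p>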
<p>This diagram shows an overview of the process. <img src="https://alexanderdevelopment.net/content/images/2018/01/simple-service-relay.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 1"></p>
<p>Although my original goal was to accelerate the deployment of data interfaces for on-premises Dynamics 365 CE instances, a simple service relay like this could also be useful for IFD or Dynamics 365 online deployments if you don't want to allow direct access to your organization. Because the queue monitoring process is single-threaded, it's an easy way to throttle requests, but you can run multiple instances of the queue monitor script if you want to increase the number of concurrent requests the relay can process.</p>
<h4 id="whyusethisapproach">Why use this approach?</h4>
<p>There are lots of message brokers and service bus offerings (Azure Service Bus, IBM MQ, Amazon SQS, etc.) you could use to build a service relay. In fact there's even an Azure offering called <a href="https://docs.microsoft.com/en-us/azure/service-bus-relay/relay-what-is-it">Azure Relay</a> that aims to solve exactly the same problem my approach does, and not just for Dynamics 365, so &quot;why use this?&quot; is a great question.</p>
<p>First, I think RabbitMQ is just a great tool, and I previously wrote a <a href="https://alexanderdevelopment.net/post/2015/01/27/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-5/">five-part series</a> about using RabbitMQ with Dynamics 365 (back when it was still called Dynamics CRM). Second, using RabbitMQ instead of a cloud-specific service bus offering gives you maximum flexibility in where you host your request and response queues and how you choose to scale. For example, my RabbitMQ broker runs in a <a href="https://www.docker.com">Docker</a> container on a <a href="https://www.digitalocean.com/" target="_blank">Digital Ocean</a> VPS. If I ever decide to move off of Digital Ocean, I can easily switch to any IaaS or VPS provider. I can also configure a RabbitMQ cluster to achieve significantly faster throughput.</p>
<p>As for why I'm using Python instead of C#, which is probably more familiar to most Dynamics 365 developers, Python also makes this approach more flexible. Using Python means I'm not tied to the Dynamics 365 SDK client libraries or a Windows host for running my queue monitoring process, and I can easily package my monitoring process in a Docker image. <em>(Although I highly recommend Python, there are RabbitMQ clients for <a href="https://www.nuget.org/packages/RabbitMQ.Client">.Net</a>, and you can also find RabbitMQ tutorials for other languages including Java, Ruby and JavaScript <a href="https://www.rabbitmq.com/getstarted.html">here</a>.)</em></p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That's it for now. In my next post in this series I will walk through the prerequisites for building the simple service relay.</p>
<p>How have you handled inbound data interfaces for on-premises Dynamics 365 CE organizations? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics 365 Configuration Data Mover v2.4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I've released an <a href="https://github.com/lucasalexander/AlexanderDevelopment.ConfigDataMover/releases/tag/v2.4.6587.18905">updated version</a> of my popular Dynamics 365 Configuration Data Mover utility that was built with .Net 4.7 to address the new requirement to use TLS 1.2 (or better) for connections to Dynamics 365 online instances as described in this entry on the Microsoft Dynamics 365</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/01/16/dynamics-365-configuration-data-mover-v2-4/</link><guid isPermaLink="false">5a5e85dae2df920001a88f85</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[Configuration Data Mover]]></category><category><![CDATA[integration]]></category><category><![CDATA[utilities]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Tue, 16 Jan 2018 23:12:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I've released an <a href="https://github.com/lucasalexander/AlexanderDevelopment.ConfigDataMover/releases/tag/v2.4.6587.18905">updated version</a> of my popular Dynamics 365 Configuration Data Mover utility that was built with .Net 4.7 to address the new requirement to use TLS 1.2 (or better) for connections to Dynamics 365 online instances as described in this entry on the Microsoft Dynamics 365 team blog: <a href="https://blogs.msdn.microsoft.com/crm/2017/09/28/updates-coming-to-dynamics-365-customer-engagement-connection-security">https://blogs.msdn.microsoft.com/crm/2017/09/28/updates-coming-to-dynamics-365-customer-engagement-connection-security</a>.</p>
<p>This upgrade is fully compatible with existing job files.</p>
<h4 id="gettingthedynamics365configurationdatamover">Getting the Dynamics 365 Configuration Data Mover</h4>
<p>The source code is available in my GitHub repository <a href="https://github.com/lucasalexander/AlexanderDevelopment.ConfigDataMover">here</a>.</p>
<p>A compiled version can be downloaded <a href="https://github.com/lucasalexander/AlexanderDevelopment.ConfigDataMover/releases/tag/v2.4.6587.18905">here</a>.</p>
</div>]]></content:encoded></item></channel></rss>