Microsoft Dataverse for Teams and Azure API Management

I think that Microsoft Dataverse for Teams (formerly known as Project Oakdale) is the most important Power Platform announcement of the year, especially if you are coming from a canvas apps background like me. Teams is becoming more and more a platform for the business rather than just a replacement for Skype. Now, with Microsoft Dataverse, users have a real data capability and a route to future upgrades by moving their application on top of Common Data Service if needed. Teams and Microsoft Dataverse for Teams apps offer the simplicity needed to build those small, or even large, everyday business apps that truly matter to users.

There are multiple posts and how-to guides for learning Microsoft Dataverse for Teams and canvas apps development, so there is no need to go deeper into that here. One thing that got my attention during Ignite was the announcement that you can use Azure API Management with a Dataverse for Teams solution through the existing Teams licensing!

This means that your professional developers can create API services to process data and connect to almost any enterprise service. The citizen or IT pro developers can then leverage those capabilities in their applications. Technically, these functions are published as custom connectors to the Power Platform environment related to Dataverse for Teams.

Earlier, this meant that you needed an extra license because a custom connector is a premium-level connector, but with Dataverse for Teams environments that is no longer needed. Let us see how to use this in action. Again, we can use something easy even for non-developers and create an Azure Function with PnP PowerShell (I wrote about this earlier).

Create Azure Function

Let us keep things simple and create an application that asks a user for some data and then creates a new News page in SharePoint. During the creation process, we will also fetch some additional information from an “enterprise” service with a REST call. The idea is to ask the user for a title and a body, then fetch some text from the Bacon Ipsum service and add it to the news page.

The source code of the function can be found in my GitHub repository: PowerShellCore / CreateBaconPage. It is a lot easier to read the code there, but I will cover the most important parts here.

When I am creating PowerShell scripts, I have a habit of using the following type of structure, which helps with reading and maintaining the code (a skeleton sketch follows the list).

  1. Start by reading the Azure Function request parameters if developing a function.
  2. Set the main internal parameters, like connection related, used in the function.
  3. Load the possible modules if needed.
  4. Then inside the first try-catch, open the necessary connection, for example, to SharePoint.
  5. Then check that all mandatory parameters are available. I have a parameter called $haveMainParameters that I update while checking the other parameters.
  6. If we have everything available and connections are open, we can start to run the main section of the program.
  7. In the last section, I am closing all the connections.
  8. If developing a function, I am pushing the return details so that the callers can continue their process.
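
To make the structure concrete, here is a minimal, hypothetical skeleton of an HTTP-triggered PowerShell function laid out in those eight steps. The setting names and module path are illustrative, not the actual source:

using namespace System.Net

param($Request, $TriggerMetadata)

# 1. Read the Azure Function request parameters
$newsTitle = $Request.Query.newsTitle

# 2. Set the main internal parameters, like connection details
$siteURL = $env:SP_SITE_URL   # hypothetical application setting

# 3. Load the possible modules if needed
# Import-Module "$PSScriptRoot\CustomModules\SomeModule.psm1" -Force

# 4. Open the necessary connections inside the first try-catch
$spConn = $null
try {
    $spConn = Connect-PnPOnline -Url $siteURL -ReturnConnection # add -Credentials as needed
}
catch {
    Write-Error $_.Exception.Message
}

# 5. Check that all mandatory parameters are available
$haveMainParameters = [bool]($spConn -and $newsTitle)

# 6. Run the main section when everything is available
if ($haveMainParameters) {
    # ...main logic goes here...
}

# 7. Close the connections
if ($spConn) { Disconnect-PnPOnline -Connection $spConn }

# 8. Push the return details back to the caller
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Done"
})
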
  1. Create a new Azure Function with Visual Studio Code.
  2. We need to fetch three parameters from the request.
    • News title
    • Body of the news
    • Paragraph amount (int value) to be used in our service call to Bacon Ipsum
$newsTitle = $Request.Query.newsTitle
$newsBody = $Request.Query.newsBody
$meatParas = $Request.Query.meatParas
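
By the way, the default HTTP trigger template also checks the request body, so if you want the function to accept POSTed values as well, you can add an optional fallback like this (a sketch, not part of the original function):

#Fall back to the request body if the values were not passed in the query string
if (-not $newsTitle) { $newsTitle = $Request.Body.newsTitle }
if (-not $newsBody)  { $newsBody  = $Request.Body.newsBody }
if (-not $meatParas) { $meatParas = $Request.Body.meatParas }
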
  3. Next, add the necessary parameters used to connect to SharePoint.
    • This time we will need to authenticate against SharePoint with a user’s credentials because you cannot create pages with an app-only connection.
    • Make sure to store the credentials securely. I used application settings in this example, but Azure Key Vault is a better option (see the sketch below).
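
As a sketch, assuming the username and password are stored in application settings named SP_USER and SP_PASSWORD (hypothetical names; with Key Vault you would reference the secret instead), the credential object can be built like this:

#Build the credential object from application settings
$securePassword = ConvertTo-SecureString $env:SP_PASSWORD -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($env:SP_USER, $securePassword)
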
  4. Now you can make a connection to SharePoint.
    • As a best practice, it is recommended to store the connection in a parameter and use it in every PnP function call.
    • This helps you avoid mixing connection contexts, which can happen when functions execute in parallel. When that mixing happens, you will see the error message ‘The object is used in the context different from the one associated with the object.’
$spConn = Connect-PnPOnline -Url $siteURL -Credentials $credential -ReturnConnection
  5. Now it is time to check that we have all the necessary parameters and connections available.
#Check the parameters necessary for the application
Write-Host " "
Write-Host "*Check the parameters necessary for the application"

If($spConn -and $newsTitle -and $newsBody -and $meatParas){
    #Parameters are available
    $haveMainParameters = $true
}
else {
    #Missing some parameters
    $haveMainParameters = $false
}
  6. If the parameters are OK, we can continue building the logic.
  7. First, let us create a basic page with a title and content.
    • As you can see, I like to write a lot of messages to the PowerShell host console. This helps with debugging and testing.
Write-Host " "
Write-Host "*Create the new page"

#Create basic section
Write-Host "..create basic section"

$newsPage = Add-PnPClientSidePage -Name $newsTitle -PromoteAs NewsArticle -Connection $spConn
Add-PnPClientSideText -Page $newsPage -Text $newsBody -Connection $spConn

  8. At this point, remember to save the function and test the logic by hitting F5.
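
With the local Functions host running, you can also trigger the function from another PowerShell window. The port below is the local default, and the function name matches my example, so adjust both as needed:

Invoke-RestMethod -Method Get -Uri "http://localhost:7071/api/CreateBaconPage?newsTitle=Test%20News&newsBody=Hello%20from%20a%20local%20test&meatParas=2"
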

Connecting to Enterprise Service

Now, let’s look at an example where we fetch some data from an enterprise service outside the Office 365 scope. As a test, let’s get some random text from the Bacon Ipsum service. We will do this by adding a custom module with the necessary logic to our Azure Function.

  1. I like to add a separate folder for my custom modules.
    • Add a CustomModules folder and an EnterpiseAPI.psm1 file inside it.
  2. Inside the module, we only need one function, which is then exported for the function logic to use.
function GiveMeBacon{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true, HelpMessage="Amount of meat")]
        [string] $meatParas
    )

    #**Give me bacon from - baconipsum.com
    Write-Host "#*#Give me bacon from - baconipsum.com"

    $response = ""

    try {
        $queryURL = ("https://baconipsum.com/api/?type=all-meat&paras={0}&start-with-lorem=1&format=text" -f $meatParas)
        
        $response = Invoke-RestMethod -Uri $queryURL -ContentType "application/json; charset=utf-8" -Method Post -UseBasicParsing
    }
    catch {
        $ErrorMessage = $_.Exception.Message
        Write-Host "**ERROR: #*#Give me bacon"
        Write-Error $ErrorMessage
    }

    return $response
}

Export-ModuleMember -Function GiveMeBacon
  3. Here is a quick overview of the function:
    • First, read the parameter for the paragraph amount.
    • We construct the URL that we are calling and make a REST call against it.
    • As a response, we get a set of random text that we then return to the caller.
  4. Of course, in real life, connecting to an enterprise service is most likely a lot more complex, but this gives you an idea of how to build one.
  5. Next, let’s use this new logic in our original function. The first thing to do is to add a reference to the module.
    • Add the following lines at the beginning of your Azure Function, somewhere after the input binding section.
#Get custom modules
$SP_ModulePath = $PSScriptRoot + "\CustomModules"
Import-Module "$SP_ModulePath\EnterpiseAPI.psm1" -Force
  6. Now we can extend the page creation by calling the enterprise function and adding the returned text into a separate section on the page. Add the following logic after the initial page creation section.
#Connect to the enterprise service
$baconText = GiveMeBacon -meatParas $meatParas #Save for later -meatType $meatType

#Add related section
Write-Host "..add related section"

Add-PnPClientSidePageSection -Page $newsPage -SectionTemplate OneColumn -ZoneEmphasis 2 -Connection $spConn
Add-PnPClientSideText -Page $newsPage -Column 1 -Section 2 -Text "<h3>Related Info</h3>" -Connection $spConn

Add-PnPClientSideText -Page $newsPage -Column 1 -Section 2 -Text $baconText -Connection $spConn

#Publish the page
Write-Host "..publish the page"
Set-PnPClientSidePage -Identity $newsPage -Publish -Connection $spConn

Again, you can test the function to make sure the correct type of page gets created in SharePoint. When everything works correctly, you can publish the function to Azure.
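
If you want a quick smoke test against the published function before putting API Management in front of it, the same kind of call works against the live endpoint; the app name and function key below are placeholders:

Invoke-RestMethod -Method Get -Uri "https://YOUR-FUNCTION-APP.azurewebsites.net/api/CreateBaconPage?code=YOUR_FUNCTION_KEY&newsTitle=Test&newsBody=Hello&meatParas=2"
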

Microsoft Dataverse for Teams Application

Now let us go to Teams and open the Power Apps application so that we can create a Dataverse for Teams application. The application is simple, with some data fields and a button. You can see the structure in the image below.

  1. I added a variable called EnableSendBtn to the OnStart setting of the app. We will use this variable to enable and disable the Send button so that the form details can be sent only when all the necessary details are given.
  2. I added the following elements to the screen.
    • A one-line text box for the title of the news page.
    • A multiline text box for the body.
    • A number field used to give the number of paragraphs fetched from our enterprise service.
  3. Here are a few important things to notice from my example:
    • Remember to give a unique name for each element on the screens. This will help you to build and maintain the logic.
    • First, we set EnableSendBtn to false to disable the button for the duration of the API call.
    • The last two sections enable the button again after the call and reset the form controls.
    • The DisplayMode setting of the Send button has the following logic:
If(
    EnableSendBtn And Not(IsBlank(txtNewsTitle.Value)) And Not(IsBlank(txtNewsBody.Value)) And Not(IsBlank(txtDetailsParagraph.Value)),
    DisplayMode.Edit,
    DisplayMode.Disabled
)

But how to call the Azure Function we made earlier?

Configuring and Using Azure API Management

If you have not used or created API Management before, you can start exploring the service with this simple documentation: Quickstart – Create an Azure API Management instance | Microsoft Docs. I will cover the Project Oakdale related basic settings in the next steps. I assume that you already have the Azure Function you want to publish and an Azure API Management instance created.

  1. The documentation link above also has details on how to add your first API to the management instance.
  2. In this example, you need to add an Azure Function.
    • You will see a form that you can use to find the necessary API details.
    • Click Browse from the form.
  3. Next, click “Function App.”
    • You will see a list of available Azure Function Applications.
    • Select the one that holds the function you want to publish.
  4. A list of available functions is shown, and you can select those you want to publish.
    • Select the correct one and click Select.

  5. You will see the details of the function in a form.
    • I recommend giving meaningful values for the details because they will help you find and use the API in Project Oakdale.
    • These settings can also be updated later.
    • Finally, click Create.
  6. An important thing to notice here is that every Azure API Management API is protected with a subscription key by default. This key needs to be added to the API call; otherwise, the user will get an access denied error.
    • It is possible to turn the key usage off, but then the whole API would be public, and we don’t want that.
    • You can find the keys in the Azure API Management portal under Subscriptions.
    • Copy the necessary key, like the built-in primary key, because we will need it on the Power Apps side (a test sketch follows below).
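
You can verify that the key works by calling the API directly before wiring it into Power Apps. This is a sketch; the instance name and API path are placeholders for whatever you named yours:

#The key goes in the Ocp-Apim-Subscription-Key header (the APIM default);
#it can also be passed as a subscription-key query parameter
Invoke-RestMethod -Method Post -Headers @{ "Ocp-Apim-Subscription-Key" = "YOUR_SUBSCRIPTION_KEY" } -Uri "https://YOUR-APIM-INSTANCE.azure-api.net/YOUR-API-PATH/createbaconpage?newsTitle=Test&newsBody=Hello&meatParas=2"
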
  7. In our function, there are three attributes that we need from the users. Those will not be asked for automatically unless we update the OpenAPI schema of the function and specify what we need.

     

  8. Select the POST call of our API and then click the edit link in the Frontend section.
    • We want to add new query parameters to the function.
    • You could write the JSON settings manually, but using the editor makes your life a lot easier.
    • Create parameters for all the ones used in your function. In my case, I only needed string and integer type attributes.
    • Also, create one extra parameter for the subscription key called ‘subscription-key,’ type ‘String.’
    • Remember to save the changes.
  9. Now we can export the function. In the export options, there is an option for Power Apps and Power Automate.
    • In the export form, you will see a dropdown box for selecting the Power Platform environment where the connector will be published.
    • Ensure that you have at least one Power App created in Teams; otherwise, you will not see the environment in the list. Also, make sure to select the correct environment (been there, done that).
    • Give the connector a meaningful name and click Export.
  10. It will take a few minutes for the API to be fully published, but at this point, you can go to Teams and open the Project Oakdale application.
    • If the app is already open, I recommend refreshing the browser.
  11. In the app, you need to add a new data connection.
    • Select Data from the left menu and click Add data.
    • Find the API with the name you gave during publishing and click it in the list.
    • A panel will open on the right, and you can click the Connect button on the form.
    • At least in the current preview version of Project Oakdale, you will see a warning about a premium connection. Based on Microsoft documentation, there is no need to take any extra action because of it.
  12. Now we are ready to use the API. Go to the OnSelect setting of the Send button in our app and add the call (it comes after the CALL AZURE API MANAGEMENT comment in my example).
    • Remember to select the POST call of your function.
    • Then you need to give the custom parameters and associate the values with the form elements.
    • The final parameter, called ‘subscription-key,’ is the subscription key copied earlier. Without this key in the query, your API call will not be processed.
  13. The final OnSelect logic of the Send button looks like this:
Set(
    EnableSendBtn,
    false
);
//CALL AZURE API MANAGEMENT
GiveMeBaconAPI.postcreatebaconpage(
    {
        newsTitle: txtNewsTitle.Value,
        newsBody: txtNewsBody.Value,
        meatParas: Value(txtDetailsParagraph.Value),
        'subscription-key': "YOUR_KEY_GOES_HERE"
    }
);
Set(
    EnableSendBtn,
    true
);
Reset(txtNewsTitle);
Reset(txtNewsBody);
Reset(txtDetailsParagraph);

PS. While writing this post, I have seen a couple of different setting options for the subscription key during the past week. This might be because Project Oakdale is still in preview. Here I used one of the currently working methods, but I will keep watching the progress and update my post if necessary.

  14. Now we can save the application and test it in the preview window.

When everything goes as planned, a new page is created in SharePoint with some enterprise service data. You can now continue developing the Project Oakdale app and publish it to the users. In case something goes wrong, you can check the possible errors on the Power Apps side after closing the preview window. You can also debug the Azure Function by opening its monitoring view and making a test call from the Power App. At this point, you will thank yourself for writing enough messages to the host console inside your code.


Creating a Feedback Form Part 2 – Connecting the Flow

In my last post, I started to create a Feedback form using the new ability to customize SharePoint Online list forms with PowerApps. The first part shows how to create separate forms for the View, Edit, and New actions. You should check that out first because we will extend that functionality in this post.

PowerApp Custom Forms and Flow – Creating a Feedback Form Part 1

In the old days, you probably created a workflow that sent an email when something was added to the list. This type of action is completely valid, and you can still use traditional workflows even in Office 365. But the modern, better way to do similar things is to use Flow. For our Feedback form, I wanted to turn the given feedback into tasks for the internal team that is building our intranet.

Let’s add a Flow to our custom form and create a new to-do item in Planner, which comes as a default service with each new Office 365 Group.

  1. From the Feedback list, click PowerApps -> Customize forms in the list’s action ribbon.
    1. This will open the custom form application we built in Part 1 of this series.
  2. From the ribbon, select Flows.
    1. This will open the Associated Flows panel.
  3. Select Create a new Flow.
  4. The Flow application will open in a new tab, and a new flow is created automatically.
    1. You can see that the flow is associated with PowerApps.
  5. What we want to do is add a new task to Planner, so let’s add an action for that.
    1. Click New step -> Add an action.
    2. In the opened form, search for all Planner related actions by writing Planner in the search box.
    3. Select the first action, Planner – Create a task.
    4. This will add a new step to your flow. This step creates a new task in Planner.
  6. Now you need to connect to the correct plan and bucket where you want to add the tasks.
    1. For Plan Id, open the drop-down menu by clicking the down arrow in the field.
    2. This will open a list showing all available Planner plans.
    3. Select the one you want to use.
    4. Do the same for Bucket Id. This time, you will see all the buckets found in the plan you selected.
  7. For the task Title, let’s get the value from the user through PowerApps.
    1. Select the Title field and select dynamic content -> Ask in PowerApps.
    2. This will create a new PowerApps parameter that we need to populate in our custom form. We will come back to this later.
  8. You could give values for the other fields as well if you want.
    1. One option is to add the current date and time to the Start Date-Time field.
    2. Select the Start Date field and select dynamic content.
    3. Open the Expression tab and scroll until you see utcNow().
    4. Click utcNow and click OK.
  9. Now we have a flow that creates a new task in Planner and uses the title given by the user as the task title.

But we are also asking the user for some more information, so let’s add the value of the Description field to the task as well.

  1. Add a new step after the first action above and add a new action.
  2. In the action search form, search for Planner again, but this time select Update task details.
  3. To update the task we just created, select the Task Id field and Add dynamic content.
    1. From the list, select Id.
    2. The create action returns the details of the created task, and we can now use that info to find the task and update its details.
  4. Next, select the Description field.
    1. From the dynamic content menu, select Ask in PowerApps to get the value from our custom form.
  5. Finally, click the default name next to Flow name at the top of the form.
    1. Give the flow the name you want to use.
    2. Click Create flow.
  6. Now you can click Done, and our flow is complete.

At this point, we have a custom form and a flow that handles the task creation. But we still need to combine the two so that right after a new feedback item is created, a new task is created automatically. Of course, we could attach the flow to the new item added event on the list, but for this example, we will add the flow straight to our custom form.

  1. Go back to the list and open the custom form PowerApp.
  2. Open the NewFormScreen we created in Part 1.
  3. On the screen, select CustomNewForm.
    1. Expand the Title and Description card details. You will need the names of the fields on the cards later on.
  4. While the form is selected, choose the OnSuccess event from the attribute drop-down.
    1. This event runs every time a new item is added successfully to the list.
    2. By default, it includes two actions: one for clearing the form and another one for closing the panel.
    3. Copy the current OnSuccess value and save it for later use.
  5. From the ribbon, select Action -> Flows.
    1. You should see the flow you created earlier.
    2. Select the flow, and it will be added as a call to the OnSuccess event.
  6. Now we need to provide the two parameters, Createatask_Title and Updatetaskdetails_Description, that we decided to ask from the app.
    1. We will connect the form fields to the Flow call to pass in the text the user has given.
  7. The field values can be referenced by the field names on the cards.
    1. The names depend on your environment, but in this example, they are DataCardValue2_1 for Description and DataCardValue1_1 for Title.
    2. With the name, you can refer to the Text value and use that in the Flow call.
  8. Finally, add the default actions back to the OnSuccess event so that the form is reset and closed when everything is done.
    1. Here’s the whole value used in the example: NewIntranetFeedback.Run(DataCardValue1_1.Text, DataCardValue2_1.Text); ResetForm(CustomNewForm); RequestHide()
  9. Now save and publish the app to SharePoint.

Navigate back to your list and add a new item. After saving, check in Planner (https://tasks.office.com/) that a new task has been added for future steps.

 

Reading Word File Content from Office 365

I had a case where I needed to read document content from Office 365 through the REST API and use it in my project. More specifically, I wanted to use the content in my Word add-in. Back then, I struggled to read the content in the correct way and did not find a good solution for handling the data I was getting back from the REST API.

Use it in an Application

Finally, I did solve the issue, and I will show you how. This is just a quick example of the functionality without a complete application. I am giving a session on the upcoming Saturday, 10/28/2017, at SharePoint Saturday New England at 9:00 am.

My session title is “Tools for Information Worker – Introduction to Office Add-ins Development.” In the session, I will demonstrate a complete example of how to use these techniques, and I will also share the source code of the application after the session.

http://spsnewengland.org/agenda/

Stay tuned and follow me on Twitter @mikkokoskinen to know when the application is available.

Reading in Node.js Application

But back to the solution. You can use the REST API call getfilebyserverrelativeurl with the $value attribute to get a document with its content.

executor.executeAsync({
  url: "<app web url>/_api/SP.AppContextSite(@target)/web
    /getfilebyserverrelativeurl('/Shared Documents/filename.docx')/$value
    ?@target='<host web url>'",
  method: "GET",
  binaryStringResponseBody: true,
  success: successHandler,
  error: errorHandler
});

More info:  https://msdn.microsoft.com/en-us/library/office/dn450841.aspx

If you replace the placeholders in the above REST call and navigate to it in your browser, the document will be downloaded automatically. If you have ever used Microsoft Graph to get documents from a document library, you may have seen a parameter called @microsoft.graph.downloadUrl. For example, this call will list the documents in the library with a given id: https://graph.microsoft.com/v1.0/drives/{library-id}/root/children.


In case you didn’t know, @microsoft.graph.downloadUrl gives you short-term access to the file without the need to send authentication inside a call header, etc. The URL contains a temporary authentication token that is valid only for a couple of minutes. And the same thing applies to this URL: if you navigate to it, the document will be downloaded.
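
As a quick illustration outside the add-in code, this is roughly how you could pick up that URL; a hedged PowerShell sketch where the drive id and access token are placeholders you must supply:

#List the library files and read the short-lived download URL of the first one
$items = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/drives/$libraryId/root/children" -Headers @{ Authorization = "Bearer $accessToken" }
$downloadUrl = $items.value[0].'@microsoft.graph.downloadUrl'
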

So how do you get the content into a variable and use it? Here is a short explanation of how you can use the call’s return value in an application built with Node.js and TypeScript. Maybe your application is a Word add-in, and you want to read a document in Office 365 as a starting point for your own document.

In the example, we assume you already have the @microsoft.graph.downloadUrl of the file and want to download its content into a variable.

  1. For an easy URL-based call, we will use a module called node-fetch.
    1. It’s a light-weight module that brings window.fetch to Node.js.
    2. More information here: https://www.npmjs.com/package/node-fetch
  2. Run npm install --save node-fetch in the terminal window to install the module for the project.
  3. Open the TypeScript file where you want to add a function for the call.
  4. Add a new reference for node-fetch: import fetch = require('node-fetch');
  5. Then add a function that uses the @microsoft.graph.downloadUrl to get the content of the document.
      1. The URL is sent as a parameter in the function call.
      2. The function returns a promise so that we can use await functionality when calling it.
    static getTemplateDocument(templateURL: string) {
            return new Promise<Uint8Array>((resolve, reject) => {
                fetch(templateURL, { body: 'buffer' }).then(res => {
                    // Read the response body as a binary buffer
                    res.buffer().then(data => {
                        resolve(data);
                    });
                }).catch(err => reject(err));
            });
        }
    
  6. The important part is to set the body setting of the fetch call to 'buffer'. The default value for the body is empty, but we specifically want to get the content of the document.
    1. With this setting in place, we can use the buffer() function of the result we get back from the fetch to read the data.

And the Data is?

We are almost there. The question is: what does the getTemplateDocument call actually send back to us?

The answer is that we get back a Uint8Array that holds the content of the template Word document. We can now use this array in whatever way our application needs. In Office.js, there is a function called insertFileFromBase64. With this function, we can add the content of a docx file into the current document, as long as the file is base64 encoded. And because we already have the file in Uint8Array format, it is easy to make the transformation and insert the file.

Here is a short example of that, assuming we have the file back in the result attribute from the function call above.

// The file content comes back as a Uint8Array
    var u8 = result.data;
    // Encode the binary content as a base64 string
    var b64encoded = btoa(String.fromCharCode.apply(null, u8));

    Word.run(function (context) {

        // Create a proxy object for the document.
        var thisDocument = context.document;

        // Queue a command to get the current body.
        // Create a proxy range object for the selection.
        var body = context.document.body;

        // Queue a command to replace the body.
        body.insertFileFromBase64(b64encoded, Word.InsertLocation.replace);

        // Synchronize the document state by executing the queued commands,
        // and return a promise to indicate task completion.
        return context.sync().then(function () {
            console.log('Added the content of the file.');
        });
    })
    .catch(function (error) {
        console.log('Error: ' + JSON.stringify(error));
        if (error instanceof OfficeExtension.Error) {
            console.log('Debug info: ' + JSON.stringify(error.debugInfo));
        }
    });

Use case: List Posts – Using Widget Wrangler and AngularJS in the web part development

In this post, I will show you how to create a widget, or web part, that shows a list of posts from a SharePoint blog site. The element is built purely with client-side code. One thing that is true with SharePoint is that you can do the same things in multiple ways. The same goes for this example, but my aim here is to demonstrate how you can create plugins, web parts, apps, or whatever you want to call them, easily and purely through client-side development. I think the word widget is the most descriptive, and that is what we will use here.

This example is based on a real-life solution that I made in my last project on top of SharePoint 2013 on-premises. The widget is also tested to work in SharePoint Online. Technically, you could use it on any other web platform, and that is the key reason why I want to present a framework called Widget Wrangler.

What is Widget Wrangler?

Why try to summarize something when it is done perfectly in the actual source? “The Widget Wrangler is a lightweight framework for managing the loading of javascript “widgets” on a web page.” With the framework, you can create isolated widgets and control the loading of each file and dependency you need for the element.

With Widget Wrangler (later ww), you can encapsulate the functionality of a widget so that different elements, or even multiple instances of the same kind, will not interfere with the hosting page or other widgets. This way, the isolation and creation of truly separated UI and functionality are easier to achieve. The framework also manages efficient loading when multiple web parts on a page use the same JavaScript libraries or CSS files.

<div class="latestPostWP">
 <div posts-element></div>

 <script type="text/javascript" src="/ourfirm/offices/SiteAssets/js/pnp-ww.min.js" 
 ww-appname="LatestPostWPApp" 
 ww-apptype="Angular"
 ww-appcss='[{"src": "/ourfirm/offices/SiteAssets/webparts/LatestPost/latestPost.css", "priority":0}]'
 ww-appScripts='[{"src": "/ourfirm/offices/SiteAssets/js/angular.min.js", "priority":0}
 ]'>
 </script>
</div> 

You can download Widget Wrangler and find more information from here:
https://github.com/Widget-Wrangler/ww

There is also a more in-depth demonstration available in a Channel 9 PnP Web Cast:
PnP Web Cast – Introducing Widget Wrangler for SharePoint development

Post-Listing Widget

To demonstrate the use of ww, let’s create a simple widget that lists blog posts from a SharePoint blog site. I have actually built this type of function multiple times before, so this is a real-world example. I will use my favorite framework, AngularJS, and the SharePoint REST API to implement the actual functionality. This way, we can create a solid client-side solution and have a flexible separation of functionality and presentation. And that should be the starting point for every customization, in my opinion.

Requirements

  • Get blog posts from a SharePoint blog site.
  • Show the three latest posts and the rest through pagination.
  • For each post, show the title (as a link to the post) and the first 365 characters of the blog post.
  • Show a link for ordering an email alert for new posts.
  • Show a link to the RSS feed.

The whole solution can be found in my GitHub repository: https://github.com/MikkoKoskinen/WW-Demo-ListPost

From there you can find four files:

  • WWPostsWPHTML.html – File containing the ww implementation that is saved in the script web part.
  • listPost.js – The angular application that implements the widget functionality.
  • listPost.html – The presentation layer of the widget.
  • listPost.css – Styling of the widget.

I’m also using the following frameworks and extensions (as loaded in the ww-appScripts list below):

  • AngularJS
  • Angular Truncate (truncate.js)
  • dirPagination (dirPagination.js), the angularUtils pagination directive

Widget Wrangler Section

<div class="latestPostWP">
 <div posts-element blogSiteURL='' listTitle='Posts' listID='74DF3BE3-5536-45B6-B171-B97C1BCD61D1'></div>

 <script type="text/javascript" src="/ourfirm/offices/SiteAssets/js/pnp-ww.min.js" 
 ww-appname="LatestPostWPApp" 
 ww-apptype="Angular"
 ww-appcss='[{"src": "/ourfirm/offices/SiteAssets/webparts/LatestPost/latestPost.css", "priority":0}]'
 ww-appScripts='[{"src": "/ourfirm/offices/SiteAssets/js/angular.min.js", "priority":0},
 {"src": "/ourfirm/offices/SiteAssets/js/truncate.js", "priority":1},
 {"src": "/ourfirm/offices/SiteAssets/js/dirPagination.js", "priority":1},
 {"src": "/ourfirm/offices/SiteAssets/webparts/LatestPost/latestPost.js", "priority":2}
 ]'>
 </script>
</div> 

Above you can see the ww section of the solution that has to be added to the page. This code handles the loading of the widget. I am a fan of the script web part, so I have placed the code there in the snippet section. You could, of course, use other methods of adding the code. As you can see, you can control very well what you want to load and in which order. The key things in the code are:

  1. Everything has to be wrapped inside a div; in our case, a div with class ‘latestPostWP’.
    1. This is an important thing to remember because ww won’t work without it.
  2. The ww implementation is done inside the script tag that calls the library in the source attribute. After that, you can give the necessary settings.
  3. ww-appname = the name of your widget application. In an Angular implementation, this has to match the module name.
  4. ww-apptype = the type of framework used in the widget. At the time of writing, only Angular is supported.
  5. ww-appcss = a list of CSS style sheets you want loaded for the widget. You can give multiple files and control the loading order with the priority parameter.
  6. ww-appScripts = a list of script files you want loaded for the widget. You can give multiple files and control the loading order with the priority parameter.

And that’s it. It’s just that simple.

Now, if you look at the browser console after page load, you can see that ww has initialized a widget on the page. As you can see, our widget has index number 0. If there were multiple of these elements on the page, each of them would have its own number. This is the power of ww and its encapsulation of widgets.

Presentation

<div class="resultItem announcementItem" dir-paginate="post in posts | itemsPerPage: 2" pagination-id="posts"> 
 <div class="resultTitle"> 
 <a class="" href="{{viewItemURL}}{{post.ID}}">{{post.Title}}</a> 
 </div> 
 <div class="resultContent"> 
 <span class="resultDate" ng-bind="post.PublishedDate | date:'LLLL dd, yyyy'"></span> 
 <br>
 <div class="resultDetail">{{ post.Body | htmlToPlaintext | characters:350 :true}}</div> 
 <div class="resultMore"><a href="{{viewItemURL}}{{post.ID}}">More≫</a></div> 
 </div> 
</div>
<div class="blogActions">
 <div class="action">
 <a class="ms-calloutLink" target="_blank" href="{{listRSSURL}}">
 <span style="height:16px;width:16px;position:relative;display:inline-block;overflow:hidden;" class="s4-clust ms-blog-linkCommandImage">
 <img src="/ourfirm/offices/_catalogs/theme/Themed/4D8112E7/spcommon-B35BB0A9.themedpng?ctag=3" style="position:absolute;left:-236px !important;top:-66px !important;border-width:0px;"></span>&nbsp;<span class="ms-splinkbutton-text">RSS Feed</span></a>
 </div>
 <div class="action">
 <a class="ms-calloutLink" href="{{postAlertURL}}">
 <span style="height:16px;width:16px;position:relative;display:inline-block;overflow:hidden;" class="s4-clust ms-blog-linkCommandImage"><img src="/ourfirm/offices/_catalogs/theme/Themed/4D8112E7/spcommon-B35BB0A9.themedpng?ctag=3" style="position:absolute;left:-236px !important;top:-30px !important;border-width:0px;"></span>&nbsp;<span class="ms-splinkbutton-text">Alert Me</span>
 </a>
 </div>
 <div class="ms-clear"></div>
</div>
<dir-pagination-controls pagination-id="posts"></dir-pagination-controls>

Here you can see the presentation layer. Technically, it is just plain HTML with some Angular code. But I would like to highlight a few relevant things.

I am not using the default repeater to show the list of fetched posts. The code uses a pagination extension for that. This provides an easy way to divide the results into different sections. Pagination is done in the first div element: dir-paginate="post in posts | itemsPerPage: 2" pagination-id="posts". The first setting is the same as in the default repeater, telling it to loop through all the posts in the posts variable. Next, we give the number of visible items per page. Finally, we give a unique id for the pagination element. This way, we can connect the repeater to the pagination action element added as the last element of the widget (the dir-pagination-controls div). The extension also handles all the necessary functions, like showing the page count and the back and forward links.

The next thing to notice here is that we modify the date format for better readability with the ng-bind element.

 ng-bind="post.PublishedDate | date:'LLLL dd, yyyy'"
{{ post.Body | htmlToPlaintext | characters:350 :true}}

Lastly, we show a short teaser from the post body with the element above. We can truncate the text with the Angular Truncate filter and give the number of characters shown to the user. The true setting tells the filter that complete words can also be cut. htmlToPlaintext is our custom filter that strips any HTML elements from the body text. More about this soon.

Functionality

(function() {
  angular
    .module('LatestPostWPApp', ['truncate', 'angularUtils.directives.dirPagination'])
    .filter('htmlToPlaintext', function () {
        return function(text) {
            return text ? String(text).replace(/<[^>]+>/gm, '') : '';
        };
    })
    .directive('postsElement', function() {
        return {
            restrict : 'EA',
            transclude : false,
            templateUrl: '/ourfirm/offices/SiteAssets/webparts/LatestPost/latestPost.html',
            controller: function ($scope, $log, $q, $http, $attrs) {

                $scope.getEvents = function getEvents() {
                    return $http({
                        method : "GET",
                        url: _spPageContextInfo.webAbsoluteUrl + "/" + $attrs.blogsiteurl + "/_api/web/lists/GetByTitle('" + $attrs.listtitle + "')/items?$orderby=PublishedDate desc",
                        headers: { "Accept": "application/json;odata=verbose" }
                    })
                    .then(function sendResponseData(response) {
                        // Success
                        return {
                            Items: response.data.d
                        };
                    }).catch(function(response) {
                        $log.error('HTTP request error: ' + response.status);
                        return $q.reject('Error: ' + response.status);
                    });
                };

                $scope.getEvents()
                .then(function(data) {
                    //Get list items
                    $scope.posts = data.Items.results;

                    if ($scope.posts.length > 0) {
                        $scope.viewItemURL = _spPageContextInfo.webAbsoluteUrl + "/" + $attrs.blogsiteurl + "/Lists/" + $attrs.listtitle + "/Post.aspx?ID=";
                        $scope.listRSSURL = _spPageContextInfo.webAbsoluteUrl + "/" + $attrs.blogsiteurl + "/_layouts/15/listfeed.aspx?List={" + $attrs.listid + "}";
                        $scope.postAlertURL = _spPageContextInfo.webAbsoluteUrl + "/" + $attrs.blogsiteurl + "/_layouts/15/SubNew.aspx?List={" + $attrs.listid + "}&Source=" + _spPageContextInfo.serverRequestPath;
                    }
                    else {
                        $scope.noItemsFound = true;
                    }
                });
            }
        };
    }); // End directive()
}()); // End IIFE

The functionality of the widget is implemented as an Angular application. In the first rows of the code, we give the application a name and load the necessary extensions. The first thing in the code is a custom filter called htmlToPlaintext. You can call this filter with a parameter holding some text content. The filter strips out all HTML elements with a regular expression and returns pure text content. This is used in the presentation layer.

 .directive('postsElement', function() {
 return {
 restrict : 'EA',
 transclude : false,
 templateUrl: '/ourfirm/offices/SiteAssets/webparts/LatestPost/latestPost.html',
 controller: function ($scope, $log, $q, $http, $attrs) {

After this, we have created a directive element named ‘postsElement’ that will be called from the script web part.

The important things for the directive are to give the right path to the template file and to pass through the variables with the $attrs parameter. As we can see from the directive element, we are passing a couple of parameters to be used by the functionality.

  • blogSiteURL = If the widget is placed on another web than the blog site, you can give the URL with this parameter. The URL is used in the REST call to get the posts and should be given relative to the site collection root.
  • listTitle = The title of the list the posts are read from. The title is used in the REST call to get the posts.
  • listID = The id of the list where the posts are saved. This is used in the email alert and RSS functions.

In the first section of the code, we create a function called ‘getEvents’ that makes a REST API call against SharePoint and gets all the posts from the given list. The call uses the parameters mentioned above. If the call is successful, the found data items are returned.

Next, the code calls the function above and catches the returned promise, saving the found data items to the ‘posts’ variable. Also, if items were found, we construct a few parameters for the view item, RSS, and alert links. These parameters are used in the template during the construction of the widget.

<div class="resultMore"><a href="{{viewItemURL}}{{post.ID}}">More≫</a></div>

And that is it. Here you have a relatively simple POC of a client-side widget that reads information from SharePoint and uses the Widget Wrangler framework for better maintainability.

Your Day with Microsoft NextGen Portals

Microsoft Ignite 2015 last month was huge. Basically everything from the Microsoft stack was presented there. For me, all things related to modern workplaces, meaning Office 365 and SharePoint, were under the microscope. But I have to say that the information flow was overwhelming, and it was almost impossible to digest all the new things and news that were presented.

We heard about Groups, Delve, Infopedia, SharePoint 2016, Yammer, etc. Microsoft is building its cloud and portal solutions based on the following strategy: cloud first, mobile first. You definitely saw it at Ignite.

After the conference, many may wonder how this is all tied together and what tools they should use. I do not have a clear answer to that. There is always the one and only “it depends” factor. But now, after a while of reading and thinking through all the news, I decided to give it a try. From one point of view, at least.

I made a presentation of one full day of a typical information worker, showing how these new or existing tools can be used and how they may help users during the day. At the same time, you can see the main NextGen portal tools published at Ignite.

One cool new tool, available both for public Office 365 and for enterprise Office 365, is Sway. With Sway, you can quickly create nice-looking, mobile-ready presentations. That is why I used it here as well. Remember to add Sway to your toolbox.

>> Your Day with Microsoft NextGen Portals