Sitecore Experience Edge API Architecture and a Postman Collection

Sitecore Experience Edge is the new delivery platform for Sitecore Content Hub and Sitecore XM. As it happens, the documentation is not always clear, and I ran into some issues while setting up Experience Edge. In this blog post, I discuss those issues along with some concepts that I think will help others understand the Experience Edge architecture.

I have created a Postman API collection for the Experience Edge APIs and shared it in this post. Click the button below and fork my publicly shared collection to use the APIs in your own Postman. If I modify the collection in the future, you will be able to sync the changes into your fork.

Run in Postman

Experience Edge APIs

Understanding the Experience Edge APIs can be confusing at first. Let’s talk about the purposes of the different domains first.

Content Hub Sandbox Domain

This is the domain name for your Content Hub sandbox. The URL looks like <sandbox name>.stylelabsdemo.com. This domain is also used by the GraphQL Preview API to return GraphQL results for unpublished data. The URL for the GraphQL Playground (IDE) is https://<sandbox name>.stylelabsdemo.com/api/graphql/preview/ide/ and the GraphQL API endpoint is https://<sandbox name>.stylelabsdemo.com/api/graphql/preview/v1.

Auth0 or Authentication Server Domain

This service is responsible for securing Experience Edge administration. We retrieve the JWT authentication token from this domain to use the Admin APIs securely. The URL looks like https://one-sc-beta.eu.auth0.com.

Audience Domain

The audience URL contains the Tenant Id of your Experience Edge system and identifies the tenant for which the authentication token will be issued. The token can be used only for administering the tenant mentioned in the URL. The URL looks like https://delivery.sitecore-beta.cloud/tenant_id. The Tenant Id is usually the name of your Content Hub sandbox; you can also find it in the Content Hub license using the Content Hub API route /api/status/license.
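Putting the Auth0 domain and the audience together, the token request might look like the sketch below. This is a minimal sketch assuming the standard OAuth2 client_credentials grant against the Auth0 token endpoint; the angle-bracket values are placeholders, and the client_id and client_secret come from the Content Hub ‘delivery’ OAuth client described later in this post.

// A minimal sketch of requesting the JWT for the Admin and Token APIs,
// assuming the standard Auth0 client_credentials grant.
async function getAdminToken(): Promise<string> {
  const response = await fetch("https://one-sc-beta.eu.auth0.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      grant_type: "client_credentials",
      client_id: "<client_id>",         // from the Content Hub 'delivery' OAuth client
      client_secret: "<client_secret>", // from the same OAuth client
      audience: "https://delivery.sitecore-beta.cloud/<tenant_id>",
    }),
  });
  const { access_token } = await response.json();
  return access_token; // use as a Bearer token for the Admin and Token APIs
}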

Experience Edge Domain

Experience Edge is the delivery service that runs on Cloudflare. When we publish content from Content Hub, that content gets pushed to Experience Edge and cached. The Experience Edge domain is used to manage the delivery service, manage delivery API Keys, and query data using GraphQL queries. The Admin API endpoint base URL is https://edge-beta.sitecorecloud.io/api/admin/v1,
the API Key endpoint base URL is https://edge-beta.sitecorecloud.io/api/apikey/v1,
GraphQL Playground URL is https://edge-beta.sitecorecloud.io/api/graphql/IDE,
and GraphQL API endpoint URL is https://edge-beta.sitecorecloud.io/api/graphql/v1.

The above-mentioned domain names are what Sitecore currently provides for the sandbox; they may change in the future, and the production server domain names will be different. Whatever the domain names are, these are the four domains and corresponding URLs we should be aware of.

There are four types of APIs.

Admin API

Admin APIs are used to administer the Experience Edge delivery system. For example, if you want to change the Content Cache Time To Live (contentCacheTtl) setting, you will use the Admin API. The base URL for this is https://edge-beta.sitecorecloud.io/api/admin/v1.
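As a hedged sketch of what such a call might look like: the /settings route and the payload shape below are assumptions for illustration only, so confirm the actual contract against the Admin API documentation.

// A minimal sketch, assuming a hypothetical /settings route on the Admin API.
async function setContentCacheTtl(adminToken: string): Promise<void> {
  await fetch("https://edge-beta.sitecorecloud.io/api/admin/v1/settings", {
    method: "PUT", // assumed verb; confirm against the Admin API docs
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${adminToken}`, // JWT from the Auth0 domain
    },
    body: JSON.stringify({ contentCacheTtl: "01:00:00" }), // assumed payload shape
  });
}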

Token API

Token APIs are used to manage the API Keys generated for accessing the Experience Edge delivery system. For example, if you want to see which API Keys are currently available in the system, you use the Token API. The base URL for this is https://edge-beta.sitecorecloud.io/api/apikey/v1.
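A hedged sketch of listing the keys, assuming a plain GET on the base URL (the exact route may differ; check the Token API documentation):

// A minimal sketch of listing delivery API Keys with the Token API,
// authenticated with the same JWT used for the Admin API.
async function listApiKeys(adminToken: string): Promise<unknown> {
  const response = await fetch("https://edge-beta.sitecorecloud.io/api/apikey/v1", {
    headers: { Authorization: `Bearer ${adminToken}` },
  });
  return response.json();
}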

Delivery API

Delivery API is the GraphQL endpoint to query content from the Experience Edge delivery system. You can use the Delivery API to access only the published data from the Content Hub. The URL for GraphQL Playground is https://edge-beta.sitecorecloud.io/api/graphql/IDE, and the URL for GraphQL endpoint is https://edge-beta.sitecorecloud.io/api/graphql/v1.
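For illustration, a query against the Delivery API might look like the sketch below. The X-GQL-Token header and the allM_Content query shape are assumptions based on common Content Hub GraphQL usage; adjust to your own content types.

// A minimal sketch of querying published content over GraphQL.
async function queryPublishedContent(deliveryApiKey: string): Promise<unknown> {
  const response = await fetch("https://edge-beta.sitecorecloud.io/api/graphql/v1", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-GQL-Token": deliveryApiKey, // assumed header name
    },
    body: JSON.stringify({
      query: `{ allM_Content { results { id } } }`, // illustrative query
    }),
  });
  const { data } = await response.json();
  return data;
}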

Preview API

Preview API is used to query unpublished data from Content Hub. The API Key and the URL for the Preview API are different from those of the Delivery API. The URL for the GraphQL Playground is https://<sandbox name>.stylelabsdemo.com/api/graphql/preview/ide/ and the URL for the GraphQL endpoint is https://<sandbox name>.stylelabsdemo.com/api/graphql/preview/v1.

The following diagram shows different APIs and their URLs.

Location of client_id and client_secret for JWT

To get the JWT token used to authenticate the Admin API, you need a client_id and client_secret. The documentation about where to find this information is not very clear: it is available in a Content Hub OAuth Client named ‘delivery’. Open that client and use its client_id and client_secret in the Admin Token API.

Summary

In this blog, I discussed how the APIs for Sitecore Experience Edge for Content Hub have been structured. I hope this helps you understand the architecture quickly and get on with your work. I have also provided a Postman collection to help you quickly start using the APIs in your own Experience Edge system.


Build a Static Website with Sitecore Experience Edge for Content Hub and Next.js

I am back with another installment of building a static website with a Content Management Service and Next.js. This time I am adding Sitecore Experience Edge for Content Hub to the Frontend First Architecture. Sitecore Experience Edge is a Content Delivery Service on top of Content Hub or Sitecore XM. Experience Edge delivers content through GraphQL endpoints, which I have used to generate the static web pages for my photo blog website. For my purpose, it is just another source of my website content. But Experience Edge is more than just a content source; I have highlighted some important features at the end of this article.

Deployment Architecture

Based on my Frontend First Architecture, I needed to add Experience Edge to the deployment architecture. When I switch my content service to use Experience Edge, it uses the GraphQL delivery APIs to get content from the Experience Edge CDN and pushes my static pages to the Vercel Edge. The following diagram shows the deployment architecture with Experience Edge included as one of the content delivery platforms.

Deployment Architecture including Experience Edge

Application Architecture

Our application architecture has not changed, except that we have a new API Helper for working with Experience Edge. There is no need to discuss the application architecture again; you can find that in my previous blog article. Below is the application architecture diagram with the new XEdgeApiHelper highlighted.

Application Architecture including Experience Edge Api Helper

It took me very little time to implement the new API Helper code. In ContentFulApiHelper I used GraphQL queries to populate data. I used the same approach for XEdgeApiHelper, except that the GraphQL schemas are different, so the queries are different and handling the data returned by the queries was a little different. I have shared the code on GitHub so that you can take a look.

Experience Edge Features

I am not going to discuss all Experience Edge features; you can learn about Experience Edge from the documentation. I am going to talk about how to approach working with Experience Edge. There is no free tier for Experience Edge or Content Hub. If you are an MVP, you can send an email to mvp-program@sitecore.net to get a $50 credit. If the company you work for already has a sandbox, that is a better option for learning. You may read this document to understand how to set up the Content Hub sandbox, including Experience Edge.

Currently, there is no UI for interacting with Experience Edge; you have to manage it using APIs. I am hoping Sitecore will come up with a CLI at some point. There are two types of APIs: Management APIs (the Token and Admin APIs) and Content APIs (the Delivery and Preview APIs). The Management APIs are REST APIs and the Content APIs are GraphQL APIs. The following diagram shows how the APIs are used.

Source: Sitecore

You will need a client_id and a client_secret from Sitecore to generate the token for authenticating Admin and Token APIs. For using Delivery and Preview APIs, you have to create API Keys in Content Hub.

The way Sitecore is positioning Experience Edge as a Content Delivery platform, I think Experience Edge will always be needed for a Content Hub implementation, although the license for Experience Edge is separate from the Content Hub license. You may use just Content Hub for your implementation, but it will not perform as well as it would with Experience Edge. In my first post in this series, I discussed how we can implement with just Content Hub using the JavaScript Client SDK. Then the question arises: what is the use of the Content Hub APIs? I think the Content Hub APIs will be used mainly for automated content creation and management. But when it comes to rendering content, the advantage of using Experience Edge cannot be ignored. The biggest advantage of Experience Edge is omnichannel content delivery, which can be based on devices, location, and many other segments. Rendering content via GraphQL gives content consumers the ability to decide what content they want. Along with GraphQL, content caching and delivery from the closest location provide optimum performance.

That’s all for this article. Please check out the code on GitHub. See you in the next article.


Frontend First Architecture for Decoupled Headless CMS Integration

In my last two blog posts, I discussed creating a static website using Next.js and Sitecore Content Hub as the Headless CMS. I focused on creating an application architecture that could be integrated with Sitecore Content Hub only. A Headless CMS like Content Hub is like any other service in a Composable DXP, as opposed to a traditional CMS. An application built on a traditional CMS has the CMS at its core, with other composable services added around it; there is no separation of the web application from the CMS. Such is not the case for an application built with a Headless CMS. We have a choice of which CMS we want to use, so the application can be architected with the CMS decoupled from it. This gives us the flexibility to use any Headless CMS with minimal change to the application. In this blog, I will discuss how I changed the architecture of my application from the previous posts to adopt a decoupled architecture. I used Sitecore Content Hub and ContentFul as Headless CMSes. We will establish that with this architecture we can switch between Headless CMSes with very little change to the application.

Final Deployment Architecture

Below is the diagram that depicts the final state of our web application deployment architecture. Our application has the ability to switch between Headless CMSes; a quick code change and deployment will switch the CMS.

Application Architecture

I built the architecture from the frontend. If we build the architecture from the backend, i.e. a Headless CMS, it becomes an application for that CMS. When we approach the architecture from the frontend, we know what we need: all the React components, and data (props) to fill those components with information. The data will be provided by the Headless CMS APIs. But for building the schemas for the data, we don’t need a CMS immediately. We can build the schema and create fake APIs using something like json-server. I created a JSON file with the data I needed and used json-server as if I were calling CMS APIs. This allowed me to build the full frontend without worrying about any CMS, so I could focus on the architecture.
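For illustration, a minimal fake API with json-server might look like this; the db.json shape below is my own example, not the one from the project.

// db.json (hypothetical shape):
//   { "blogs": [{ "id": 1, "title": "My first photo blog" }] }
//
// Serve it as a REST API on http://localhost:4000:
//   npx json-server --watch db.json --port 4000
//
// The frontend then fetches props exactly as it would from a real CMS API:
async function getBlogProps(): Promise<{ id: number; title: string }[]> {
  const response = await fetch("http://localhost:4000/blogs");
  return response.json();
}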

To build an architecture decoupled from the Headless CMS, I needed to use the Inversion of Control (IoC) design principle. There are many Dependency Injection (DI) frameworks out there for implementing IoC; one of the popular ones is TSyringe, a lightweight DI framework for TypeScript/JavaScript. When it comes to using design patterns, it is easier to work with TypeScript because it is a typed language, so I converted my earlier JavaScript-based Next.js application to TypeScript. Next, I created an interface called IApiHelper with the methods I needed to create props for my React components. My component services use the CMS-specific implementation of this interface to get data for the props. The DI container injects the desired API helper into the service class to get the data from json-server or a Headless CMS. The diagram below shows the architecture.

I have shared the code on GitHub. I have three implementations of IApiHelper: for Content Hub I created ContentHubApiHelper, for ContentFul I created ContentFulApiHelper, and for json-server I created JasonServerApiHelper. To use a CMS, I import the API Helper implementation of that CMS in GetStaticPropHelper.ts, as shown below in the highlighted code.

import "reflect-metadata";
import HomeProps from "../models/HomeProps";
import BlogListProps from "../models/BlogListPorps";
import BlogProps from "../models/BlogProps";
import PageComponentService from "../services/PageComponentService";
import PageLayoutService from "../services/PageLayoutService";
import { container } from "tsyringe";
import { GetStaticPropsContext } from "next";
import { ParsedUrlQuery } from "querystring";
import AboutPorps from "../models/AboutProps";
import ApiHelper from "./ContentFulApiHelper";

container.register("IApiHelper", {
	useClass: ApiHelper,
});
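For context, here is a hedged sketch of how the pieces fit together: the interface, an injectable implementation, and a service resolving it from the container. The method and class names are illustrative, not the exact code from the repository.

import "reflect-metadata";
import { container, inject, injectable } from "tsyringe";

// Illustrative interface; the real IApiHelper has methods per page/component.
interface IApiHelper {
  getBlogList(): Promise<{ title: string }[]>;
}

@injectable()
class JsonServerApiHelper implements IApiHelper {
  async getBlogList() {
    const response = await fetch("http://localhost:4000/blogs");
    return response.json();
  }
}

// The service depends only on the interface token, not on a concrete CMS.
@injectable()
class BlogComponentService {
  constructor(@inject("IApiHelper") private apiHelper: IApiHelper) {}
  getProps() {
    return this.apiHelper.getBlogList();
  }
}

container.register("IApiHelper", { useClass: JsonServerApiHelper });
const service = container.resolve(BlogComponentService);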

Final Words

The purpose of this blog was to discuss how to build an architecture from the frontend so that we can decouple the web application from the Headless CMS. I had to learn a whole new CMS called ContentFul, which I enjoyed. ContentFul has fantastic documentation and tutorials, and it is free for developers. I highly recommend exploring ContentFul. I have deployed this web application to Vercel; this link https://photoblog-nextjs-ts.vercel.app/ will take you to the website.


Two ways to publish content on-demand from Sitecore Content Hub to a static website

In the last post, I discussed how I created a static website using Next.js and Sitecore Content Hub as the content repository. Web pages on a static website are, as the name says, static. That means when content changes in the content repository, the change will not be reflected on the website unless we deploy code. In the solution, I used Next.js Incremental Static Regeneration (ISR) so that static pages are revalidated after the assigned time and regenerated. For example, the code below shows that the Blog List page will be regenerated every hour.

  static async getStaticProps() {
      const client=await Helper.getContentHubClient();
      if(client) {
        const mainMenuItems = await Helper.getMainMenuItems(client);
        const footer = await Helper.getFooter(client);
        const intro = await Helper.getPageIntro(client, introName);
        const blogList =  await Helper.getBlogsFromCollection(client, contentCollection, route);
        return {
          props: {
            mainMenuItems: mainMenuItems,
            footer: footer,
            message: intro,
            blogList: blogList
          },
          revalidate: 3600
        }
      }      
  }

The above approach is okay, but it doesn’t give any control to the content editor. Once the content editor changes content, she has to wait until someone accesses the page to trigger the regeneration. The approach I discuss here enables the content editor to publish as and when needed.

Publish content to static website using Sitecore Content Hub Action

Vercel lets us trigger a build using Deploy Hooks. A Deploy Hook is an API endpoint with a unique id that is associated with a deployment configuration. When someone calls this endpoint, Vercel starts the deployment for the associated configuration. In the build settings in Vercel you will find Deploy Hooks in the Git section, where you can add them.
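Triggering a deployment is a single POST to the hook URL; here is a minimal sketch, where the hook id is a placeholder for the one Vercel generates for you.

// A minimal sketch of calling a Vercel Deploy Hook.
await fetch("https://api.vercel.com/v1/integrations/deploy/<unique-hook-id>", {
  method: "POST", // Deploy Hooks are triggered with a plain POST
});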

Deploy Hooks

We need to take this API endpoint and create an action of type API Call in Sitecore Content Hub. The screenshot below shows that action. The action has only one purpose: to call the Vercel Deploy Hook.

Content Hub Action

The usual way to call an Action in Content Hub is to use a Trigger. A Trigger is associated with events in Content Hub; when such events happen, the Trigger calls the associated Action. Examples of events are the add/modify/delete of an entity in Content Hub. Below is a screenshot of such a trigger.

Trigger
Trigger Action

Although this approach works, there is a problem with it. The trigger runs every time content gets modified, and the action gets called; the action calls the Deploy Hook to run a build in Vercel. If the content editor changes a lot of content, this approach will deploy the site too many times, which is not an efficient way to regenerate static pages. Ideally, I would like the content editor to push a button to call the action and publish the changed content when she is ready.

There is no straightforward way to call an Action manually in Sitecore Content Hub. Other than using a Trigger, I can use the Command API to call the External Action command as shown below, but that would be calling from outside of Content Hub; I want to call the same API from the Content Hub admin site.

Request URL: https://my-ch-sandbox.stylelabs.io/api/commands/external.action/external.action
Request Method: POST

Request Body:
{
    "entity_id": XXXXX,
    "action_id": XXXXX,
    "properties": [],
    "relations": [],
    "action_execution_source": "ExternalAction",
    "extra_data": {
        "culture": "en-US"
    }
}

There are two ways I am aware of to call the external.action command from within Content Hub. I discuss both methods below.

External Page Component

External Page Component allows us to create a component using external JavaScript libraries. I used jQuery to create a button on the Content Collection page and explain below how it works. Here is the button (External Page Component) on the Content Collection page.

External Page Component

To add an External Page Component to a page, you have to go to Manage -> Pages and select the page where you want to place the component. In my case, I opened the Content Collection page and added the External Component in the right header column.

Add External Page Component

To configure the component I clicked on three dots (…) and selected Edit. Here in the Configuration section I entered the control name.

External Component Configuration

In the Template section I entered the HTML code below to create the button UI. I used CSS classes available in Content Hub to create a consistent UI. Notice that the id of the button is ‘target’; this id is used in the jQuery code to call the action.

<a
		id="target"
		href="#"
		class="btn btn-primary"
		title="Generate static pages"
		aria-label="Generate static pages"
	>
		<i class="m-icon m-icon-lightning-bolt"></i>
		<span class="d-none d-sm-inline-block"
			>Generate static pages</span
		>
</a>

In the Code section, I used the jQuery code below to make an ajax call to the action on the button click event.

$( "#target" ).click(function() {
  var req = '{"entity_id":30487,"action_id":30872,"properties":[],"relations":[],"action_execution_source":"ExternalAction","extra_data":{"culture":"en-US"}}'
  $.ajax({
    url: `${options.api.commands.href}`.replace('{folder}', 'external.action').replace('{command}', 'external.action'),
    contentType: 'application/json',
     headers: {
        'x-auth-token' : '<my api key for auth>'
   },
    type: 'post',
    data : req,
    success: function(data, status, jqXHR)
    {
        console.log('success')
    },
    error: function (jqXHR, status, error)
    {
      console.log('error')
    }
  });
});

This accomplishes the task. You may notice that I had to use an entity (entity_id) to make the API call because entity_id is a required parameter, even though my action doesn’t need to be associated with an entity. This approach works on any page, but it is a little complex. Also, using an External Page Component is a little risky: if Content Hub changes anything related to external libraries, templates, or the API in the future, the component might break.

Custom External Action Entity Operation

The second approach to calling the action manually doesn’t involve any coding. You can add an External Action Entity Operation on any Detail Page. A Detail Page is always associated with an entity, and the operation uses that entity to call the action. If you follow the animation below, you will see how I added the External Action Operation on the Content Collection Details page.

Add External Action Entity Operation

This approach is simple. The only issue is that the Entity Operation button shows on all Content Collection Details pages. If we can live with that, this is the approach we should take.

Conclusion

In this article, I discussed how we can give content editors control over publishing content from Sitecore Content Hub to a static website. I used Vercel for deploying and hosting my static site, but the same approach works for other static site hosting services like Netlify, Surge, and others.


How I solved the issues I encountered to build a static website using Next.js and Sitecore Content Hub

I built a static website using Next.js and Sitecore Content Hub. The architecture is simple: Sitecore Content Hub is used as a headless content and digital asset repository, the Content Hub JavaScript Client SDK is used to access data from Content Hub, and the Next.js JavaScript framework is used to implement the website. I don’t want to talk about how I built the website; you can understand that by looking at the code, which I shared on GitHub. What is more interesting to discuss is what problems I faced and how I solved them.

The website is a simple blog website with multiple sections that you can visit using the main navigation or some internal links on the site. My objective was to build a website that is well structured and responsive. Here is the site in desktop and mobile view; if you want to visit the site, click on this link. You may not see some images if they were not already cached in the Vercel CDN and my instance of Content Hub is not running, since the images are hosted in Content Hub.

My Photo Blog

UI Markups

I started by designing the site using plain HTML and Sass. I am not much of a UI developer, but these days I feel I cannot completely stay away from UI development if I want to architect web solutions. You can find the UI design markups in this GitHub repository: My Photo Blog Markups. I made sure the design is responsive and follows best practices for Core Web Vitals and accessibility. Two VS Code extensions that helped with the markup creation, and that I highly recommend, are Live Sass Compiler and Live Server. I used ngrok to create a public URL to my localhost so that I could connect from my phone to test the markups.

Next.js App from UI Markups

The next step was to convert these design markups into a Next.js application with hardcoded data. I separated code into different pages and components. For styling, I used module (component) level Sass; Next.js has built-in support for Sass. You can find more information in the Next.js documentation under Built-in CSS Support. I shared this step of the application in the GitHub repository https://github.com/himadric/photoblog-nextjs. This helped me understand how the Next.js application would work when integrated with Sitecore Content Hub.

Next.js App integration with Sitecore Content Hub

Before I started converting the Next.js application to access data from Sitecore Content Hub, I had to think about how to organize content in Content Hub. I had a good idea of how I wanted to approach it, but it was an iterative process. I separated content into a number of content collections, as shown in the screenshots below. I had to create new content types and add additional properties to the built-in Blog type. The images are saved as Assets. I didn’t use blog images as attachments in the blog content, to avoid drilling through relations to find images; instead, I created public links for the images and saved the links in a custom property.

Content Structure

Content Collections

Contents in content collection

Content distribution

Issues I encountered

JavaScript Package Registration Issue

I used the Content Hub JavaScript SDK to access data from Content Hub. Before I could install the JavaScript SDK npm packages, I needed to add the npm package feed https://slpartners.myget.org/F/m-public/npm/ to the npm package registry on my machine. After adding the package feed to the registry, I tried to install the JavaScript SDK package @sitecore/sc-contenthub-webclient-sdk and got an error that the package couldn’t be found.

The feed registration command

npm config set @sitecore:registry https://slpartners.myget.org/F/m-public/npm/

adds the feed URL to the global .npmrc file. When I looked at the content (you can use the command ‘npm config get’ or open the .npmrc from your user profile folder in a text editor), I found it looked like below:

@sitecore:registry = "https://sitecore.myget.org/F/sc-npm-packages/npm/"
https://slpartners.myget.org/F/m-public/npm/ = ""

My global .npmrc already had a registry entry for Sitecore JSS, and ‘npm config set’ doesn’t allow more than one value for the same registry key. There are two solutions to this.

1) Open the .npmrc file in a text editor (you can use the command ‘npm config edit’) and update the @sitecore registry with the Content Hub JavaScript SDK feed.

2) Create a .npmrc file for the project and add the registry entry in that file.

The second solution is what I needed: it fixed the registry issue, and I needed the .npmrc file to be part of the project anyway so that the Vercel deployment registers the package feed.
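For reference, the project-level .npmrc only needs the one feed line; a minimal sketch:

@sitecore:registry=https://slpartners.myget.org/F/m-public/npm/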

Error: Cannot find module ‘form-data’ Issue

After installing the JavaScript SDK npm package, I wrote code to authenticate to Content Hub. When I tried to run the application I got the below compilation error.

error - ./node_modules/@sitecore/sc-contenthub-webclient-sdk/dist/clients/upload-client.js:17:0
Module not found: Can't resolve 'form-data'
null
Error: Cannot find module 'form-data'

I resolved this issue by installing ‘form-data’ package using the below command.

npm install --save form-data

I posted the issue on Sitecore StackExchange.

Module not found: Can’t resolve ‘fs’ Issue

Once I started refactoring code, Next.js assumed that refactored code depending on packages that in turn depend on the ‘fs’ module would run in the browser, even though the code was written to run on the server side. I resolved this issue by adding the below to next.config.js.

webpack: (config, { isServer }) => {
  // Fixes npm packages that depend on `fs` module
  if (!isServer) {
      config.resolve.fallback.fs = false
    }
  return config
},

Challenges building Header and Footer content from Content Hub

I built the Header and Footer of the website from content in Content Hub. To statically generate the Header and Footer for every page, I had to get this content from Content Hub using the getStaticProps method. My website’s Header and Footer are part of the Layout component, which I added in the _app.js file because I didn’t want to wrap the code with the Layout component on every page. This works fine as long as I don’t need to load Header and Footer content from an external source. For loading data from an external source, I have to use getStaticProps, but this method is not allowed in _app.js. This is a known issue; if you want to learn more, look at this discussion: getStaticProps on _app.

I had two choices: 1) use getInitialProps in _app.js to get data from Content Hub, or 2) use the Layout component on every page and use getStaticProps to get the data from Content Hub. Option 1, although it seems a good choice, is actually not, because getInitialProps has been deprecated. I went for option 2. Although option 2 seems inefficient, it turned out to be not too bad after I refactored code into helper methods and implemented caching.

429 too many requests Issue

Sitecore Content Hub allows only 15 API calls per second unless the APIs are called from the Portal with a web browser. You can find more about this in the Throttling section of the Content Hub documentation. When I tried to create a production build, I started getting 429 errors. I had already implemented server-side memory caching to reduce the number of API calls. To avoid the issue completely, I added a delay after every API call. This was a simple solution, and it is not a huge problem because it only adds time to the build; the build in the Vercel deployment took one and a half minutes.
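A minimal sketch of that delay, assuming a helper wrapping each Content Hub call; the 100 ms figure is my own choice to stay under the 15 calls/second ceiling, not the value from the project.

// Wrap each Content Hub call and pause briefly afterwards.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function throttled<T>(call: () => Promise<T>): Promise<T> {
  const result = await call();
  await sleep(100); // ~10 calls/second, safely under the 15/second limit
  return result;
}

// Usage: const footer = await throttled(() => Helper.getFooter(client));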

Sitecore Content Hub Free Modeling License

I had to create some custom content types in Content Hub for the Footer, Banner, and other components. This works, but since these types are created from M.Content, the API response always returns all fields included in M.Content. I thought creating a ‘Custom Entity Definition’ would be a more appropriate way to model this content, but that requires the Free Modeling license. You can find more about Custom Entity Definitions in the Content Hub documentation.

Final Words

I like the way this website implementation worked out. I think this architecture works well if the website doesn’t require a lot of dynamic data access. Even if content on the website changes frequently, the architecture works because Next.js supports Incremental Static Regeneration (ISR). Here I used Sitecore Content Hub as a headless content repository; content can be easily managed by business users in Content Hub. But Content Hub is not an agile Content Management System like Sitecore XM or XP: in this architecture, a business user will not be able to create a brand new page. If the requirement is that business users need the ability to create and modify pages, adding Sitecore XM/XP and using Sitecore JSS would be the desirable architecture. As far as DXP features such as personalization and experimentation go, a Composable DXP can be used; in the case of Sitecore, this can be achieved by integrating with Boxever.


Exploration of Four51 OrderCloud, its architecture, and Headstart setup

This is an exciting time to be in software development. Things we had been hearing about (Micro Service Based Architecture, Cloud Native Applications, API First Headless Architecture) are finally shaping up nicely. Sitecore’s recent acquisition of the cloud-first commerce platform Four51 and the Customer Data Platform (CDP) Boxever confirms that the trend is integrating with specialty platforms rather than one company building everything. Sitecore’s core platform is a Content Management System that enables us to create content and deliver it to the target audience. But just delivering content without intelligence is not good enough; that’s why Sitecore has become a Digital Experience Platform. With the acquisition of Four51 and Boxever, Sitecore is bringing the full Digital Experience to Online Commerce. In this article, I will discuss the Four51 OrderCloud platform and show you how to set up the OrderCloud Headstart application, which is an application like Sitecore’s Habitat demo. Let’s dive in.

tl;dr: If you are interested in only Headstart Setup go to the Setting Up Headstart section.

OrderCloud

OrderCloud is an API First, Headless, truly Cloud Native B2B eCommerce platform designed by Four51. The OrderCloud architecture is MACH (Microservices, API-First, Cloud & Headless) certified. The MACH certification tells a lot about a platform: a commerce platform that follows the MACH architecture is modular and truly open for integration with other systems via microservices and APIs. A typical MACH-certified commerce platform architecture looks like the below, and this looks very close to what we envision the OrderCloud architecture will be when it is fully integrated with Sitecore.

Source: machalliance.org

Functional Architecture

Speaking of architecture, let’s talk about the Functional Architecture of OrderCloud. It will help us understand, as a product, what OrderCloud offers and what kind of B2B commerce solutions we can create from it. The following important entities exist at the core of OrderCloud.

Seller: The Seller is the orchestrator of the business and defines how business will be done. If the Seller is a manufacturer, they might sell products only to Buyers; but if the Seller is a distributor, in addition to selling products to Buyers, they might connect Suppliers with Buyers. Seller users are the admin users with the highest privileges to the OrderCloud APIs.

Buyer: A Buyer is a Customer or an organization with an account with the Seller so that they can purchase products. A buyer has one or more users. A user authenticates to the Storefront and orders products for the buyer. Buyer users can be put into groups for managing access levels and personalizing the buying experience.

Supplier: A Supplier is an organization in OrderCloud that fulfills orders placed by buyers of its products. The Supplier is an optional construct in OrderCloud; a seller can be the only supplier in the system. Supplier users have restricted access to the Seller admin site, where they can manage their products, orders, supplier information, and users. The Supplier can assign different roles to its users: some can be responsible for managing products, some for managing orders, etc.

User: A User is someone who authenticates to the Storefront (Buyer User) or the Seller Admin Site (Seller User) to use the site. Users can be assigned to User Groups to manage their access to the system as well as provide a personalized shopping experience to buyers.

User Group: User Groups are roles when it comes to administration of the application, but they also contain information that enables personalized shopping from the buyer’s perspective. For example, information like Catalogs and Locations is assigned to User Groups; this drives the configuration of customer-specific products in specific buyer locations. Roles in a User Group determine what users belonging to that group can manage as far as administering the application goes. For example, if a Buyer User Group has the AddressAdmin role, a user in that group can add/modify/delete buyer addresses.

Address: A buyer or a supplier can have multiple addresses. From the buyer’s perspective, the address is where ordered items will be shipped; orders in OrderCloud can be shipped to multiple places because addresses can be attached at the line level. From the supplier’s perspective, addresses are locations, and these can be the addresses of warehouses. The Seller’s addresses are locations for the seller; when the seller is the sole supplier, these addresses can be warehouse addresses for fulfillment.

Catalog: A Catalog defines a set of products and drives what products a buyer can or cannot see in the storefront. A product can be assigned to more than one catalog, and products are organized in different categories. A product in a Seller organization can be added by the seller or by the supplier selling the product. Although Suppliers can add products to the system, they cannot assign products to catalogs or organize them in categories; only sellers can do that.

Order: An Order in OrderCloud represents both the cart and the order submitted to the system. An order goes through different states: if the status is ‘Unsubmitted’, it is a cart; if the status is ‘Open’, it has been submitted to the system as an order. An order also has a direction in OrderCloud. An order from the buyer’s perspective is an outgoing order, but from the seller or supplier perspective it is an incoming order. The supplier doesn’t see the order until it is submitted, but the seller can see the order (cart) as an incoming order before submission. After submission, from the seller’s perspective the order is both incoming (from the buyer) and outgoing (to the supplier), while the same order from the supplier’s perspective is an incoming order. This concept applies when accessing orders from the system using the OrderCloud API: you need to use the proper direction based on the API credentials. If you are using the buyer’s credentials, you use the ‘outgoing’ direction, but if you are using the supplier’s credentials, you use the ‘incoming’ direction. The image below from the OrderCloud website describes this relationship.

Order Directions, Source: OrderCloud.io
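As a hedged illustration, the direction appears directly in the route when listing orders; the sketch below assumes the public OrderCloud API base URL and omits token acquisition.

// A minimal sketch of listing orders by direction in the OrderCloud API.
async function listOrders(token: string, direction: "incoming" | "outgoing") {
  // Buyer credentials -> "outgoing"; seller/supplier credentials -> "incoming".
  const response = await fetch(`https://api.ordercloud.io/v1/orders/${direction}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.json();
}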

I haven’t seen any mention of invoices in OrderCloud. In many eCommerce platforms, orders represent invoices after the order is completely fulfilled; in others, invoices are maintained separately. Invoices enable the buyer to make payments after the invoices are generated by the seller. Often an eCommerce platform integrates with a third-party invoice payment system like BillTrust, which makes it easy for buyers to manage the invoices from different purchases in one place.

Storefront: A seller may host more than one eCommerce website to sell products. A very common scenario is separating a retail (B2C) website from a B2B website; some sellers do this because the buyer’s experience on a B2C site is quite different from a B2B site. This is not the only reason to host more than one website: a seller may be in the business of more than one brand, and that may require a separate website too. A Storefront in OrderCloud represents a website. You can have one organization in OrderCloud with multiple storefronts, and each storefront can have multiple suppliers and buyers associated with it.

In addition to the above, the usual eCommerce entities exist in the system (Promotion, Price, Shipping, etc.) to drive eCommerce functions, but the entities above provide the functional structure of OrderCloud. You can use these constructs to define a business. Different business models described using these constructs can be found in the Commerce Strategy section of the OrderCloud documentation.

In the diagram below I tried to capture the high-level functional architecture of OrderCloud. I may not be 100% correct about the Storefront part because there is not much documentation about Storefronts in OrderCloud, but it is reasonable to think that Storefronts relate to Catalogs the way Buyers do.


OrderCloud High Level Functional Architecture

Technical Architecture

OrderCloud is a Cloud Native, API First, Headless eCommerce platform constructed on a microservice-based architecture. Below I describe the different parts of the OrderCloud platform architecture.

OrderCloud Portal: The OrderCloud Portal is where you define your Seller Organization. Once you sign up and define your Seller Organization, that becomes your eCommerce system, which you will use to provide service to your buyers and, optionally, your suppliers. Everything is done using RESTful APIs. The OrderCloud Portal provides a nice API Console that you can use to query your eCommerce data and modify it if needed. Users have to authenticate to use the APIs; OrderCloud uses OpenID Connect and OAuth2 for securing them, and every client that connects to OrderCloud for API access has to have a ClientId and Client Secret. Below is a screenshot of the API Console.


OrderCloud API Console

Middleware: In the OrderCloud architecture, middleware is where integration with third-party services and cloud services happens. For example, if you want to integrate OrderCloud with an ERP, you implement that in the Middleware. Likewise, if you want to integrate cloud services like App Configuration, Blob Storage, etc., you implement that here. The Middleware can also receive OrderCloud Webhooks via its API endpoints. All services in the Middleware should be exposed as APIs in a headless manner. OrderCloud has provided a starter middleware project, Catalyst, on GitHub.

Buyer UI: The Buyer UI is the eCommerce Storefront that end users use to browse and purchase products. Storefront functionality is implemented by integrating with the Middleware and the OrderCloud APIs. Since OrderCloud is headless, the Buyer UI can be implemented in any language and platform, with either client-side or server-side technology. OrderCloud provides both a .NET SDK and a JavaScript SDK for this purpose.

Seller / Supplier Admin: The Seller / Supplier Admin is the admin portal for managing the eCommerce backend. The Seller Admin portal provides restricted access to Supplier users so that they can manage the products they sell, the orders placed with them, warehouse inventory, and their users, whereas Seller Admin users have full access so that they can define and manage the business. The Seller Admin connects to the eCommerce backend via the OrderCloud APIs. It can have its own middleware if it needs to connect to any system other than OrderCloud; for example, the seller may want to manage payments from the Seller Admin, in which case the Seller Admin has to be integrated with the payment system and that will require middleware. Unlike the Buyer UI, I imagine Seller Admin functionality will not change much from client to client, and the Seller Admin is a bit more tightly coupled with the OrderCloud architecture. For this reason, Sitecore providing a fully functional but extensible Seller Admin would make sense; it would reduce implementation cost and enable partners to extend the Seller Admin.

Webhook: A Webhook is the way OrderCloud lets an integrated system know that some event occurred in OrderCloud. For example, if you want to send an email notification when an order status changes, you can create a Webhook on OrderCloud’s order API, and it will call the send-order-status API associated with the Webhook. There are pre-hooks and post-hooks; sending email on an order status change is a post-hook because the webhook gets called after the order status changes. You can add the Webhook configuration (type of Webhook, Payload URL, OrderCloud API endpoints and API methods, etc.) in the OrderCloud API Console. Typically you will host the Payload API in the Middleware. This document describes how to create a Webhook on the OrderCloud platform.
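For illustration, a post-hook receiver hosted in the Middleware might look like the sketch below. Express is my own choice here, and the payload fields shown are assumptions for illustration, not the exact OrderCloud webhook contract.

import express from "express";

const app = express();
app.use(express.json());

// A hypothetical post-hook endpoint called by OrderCloud after an order changes.
app.post("/webhooks/order-status-changed", (req, res) => {
  const { Route, Verb } = req.body; // illustrative payload fields
  console.log(`Webhook received for ${Verb} ${Route}`);
  // e.g. hand off to an email service here
  res.sendStatus(200);
});

app.listen(3000);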

Below is the high-level architecture of the Headstart application. It is important to note that Headstart is a sample application; in an actual implementation, the architecture may change quite a bit.


High Level Architecture of Headstart

Extending OrderCloud

Whatever eCommerce platform you choose to implement an eCommerce site for your client, there will always be requirements that force you to customize the platform. The OrderCloud platform architecture supports the Open-Closed design principle: the platform is open for extension but closed for modification. You cannot modify the core platform, and that makes the platform easier to upgrade; since it is open for extension, you can easily add custom features on top of the core platform.

There are generally three ways to extend the OrderCloud platform.

  • For outside integration, Middleware Services should be used.
  • For injecting your operational code into OrderCloud operation you should use Webhook. You can implement your Webhook APIs in the Middleware or a separate service. We discussed Middleware and Webhook in the previous section.
  • If you want to extend the OrderCloud schema to store additional data, you need to use Extended Properties (XP). OrderCloud stores XP as JSON, and you can have an elaborately constructed JSON (see the sketch after this list). There are two things to remember: 1) the entire XP object cannot be more than 8000 bytes, and 2) XP should be consistent within an object type (if you create XP for Order, the structure of that XP should always be the same). Data included in XP can be searched, filtered, and sorted. Typically, XP should be used when a small amount of additional data needs to be added; if the need is to add a new object to the implementation, Middleware is the way to go. This article has nicely described how XP works in OrderCloud.
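A hypothetical sketch of XP on an order follows; the xp shape is illustrative, since OrderCloud only requires that it stays consistent per object type and under 8000 bytes.

// Extended Properties ride along on the standard object as a JSON "xp" field.
interface OrderXp {
  poNumber: string;
  deliveryInstructions: string;
}

const xp: OrderXp = {
  poNumber: "PO-4711",
  deliveryInstructions: "Leave at loading dock B",
};

const order = {
  ID: "order-123", // illustrative order fields
  Subtotal: 150.0,
  xp, // searchable, filterable, sortable custom data
};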

Setting up Headstart

You will find Headstart on GitHub. The ReadMe instructions are quite good, but I did face some issues. Watch the video below to see how I set up Headstart on my local machine and then used Azure DevOps to deploy the applications. To set up Headstart you need an Azure account; you can create a free one here. The issues, and the solutions for them, are described after the video.

Setup Issues

Registering with Third Party Services: My goal was to set up Headstart, not to make it fully functional. For this reason, I haven’t configured all the third-party services. This caused some issues, especially with Avalara; it can take a long time to configure Avalara to return proper tax, so I faked the Avalara calls in the Middleware by changing the code in the AvalaraCommand class. To see the difference, visit my forked repository on GitHub. I configured SmartyStreets, which is required for address validation and easy to configure. I also configured SendGrid for sending emails using the templates provided in the Headstart solution.

Seller UI Build Issue: I had a problem building the Seller App. The issue was finding Python version 2 on my machine. I resolved that by installing Windows Build Tools using npm (npm install --global windows-build-tools). Before installing, I removed the node modules from the Seller App.

Issue with OrderCloud CMS API: After building the Seller App, I was getting a 400 Bad Request error from ordercloud-cms-test.azurewebsites.net. This issue has been fixed in the Headstart repository; I resolved it by merging the original repository into my forked repository. This Stack Overflow thread helped me understand how to sync a forked repository on GitHub.

Azure DevOps Deployment Issue: The ReadMe in the Headstart repository does not fully explain how the Azure DevOps deployment works. I resolved the issues below.

  • The azure-pipelines.yml was not generating a zipped artifact for the Middleware project. In the Middleware publish task, I had to set zipAfterPublish: true.
  • For Build Once, Deploy Many, you need to add “node inject-css defaultbuyer-test && node inject-appconfig defaultbuyer-test” in the Buyer release pipeline and “node inject-appconfig defaultadmin-test” in the Seller release pipeline. For this, you need to add ‘bash’ task in your release pipelines for Buyer and Seller. I showed this in the video.
  • The idea of Build Once, Deploy Many applies when you have multiple environments to deploy your application to. This requires you to create Slots in Azure App Service and configure the release pipelines against those Slots; each Slot in an App Service is used for one environment. I created Test Slots for the Middleware, Buyer, and Seller Apps and deployed code there. If I need to deploy code to UAT, I have to create Slots for that in each App Service and create a release pipeline to deploy code to the UAT Slots.

Conclusion

I enjoyed setting up Headstart. It helped me understand the OrderCloud architecture. I like that OrderCloud is a highly extensible, Cloud Native platform with a microservice-based architecture, and that it does not limit me to any particular technology for using the platform. I hope my exploration of the platform and this article help others onboard to the OrderCloud platform quickly.

Acknowledgment

I would like to thank my colleague Daniel Govier for helping me with the Azure DevOps configuration. Without his help, it wouldn’t have been possible for me to configure the Azure DevOps deployment.

I would also like to thank Crhistian Ramirez for patiently answering all my questions in OrderCloud Community Slack.


Address and Email domain validation in Sitecore

This is going to be a quick blog article. The only reason to write it is to share some code we developed for address and email domain name validation using the address recommendation service Loqate. For many of Nish Tech’s clients (especially eCommerce implementations), we have implemented address and email validation to reduce the number of mistakes in account creation. We thought this could be helpful for others in the Sitecore community looking for a similar solution. Thanks to my colleague Santhosh (Twitter: @Santhosh4184), who did most of the coding.

I have shared only the Feature projects that contain the code for address and email validation in this GitHub repo. You still need to add the projects to your Helix-based Sitecore solution to make it work. We developed this using SXA, but the same concept can be used for Sitecore Forms.

Here is an animation that shows how the module works.

Address and Email validation form

The way address validation through the Loqate service works is: you create an account in Loqate and set it up for your solution. After this, you are provided with an API Key and base JavaScript code that you need to add to your pages. For code examples, look at ValidateAddress.cshtml and ValidateEmail.cshtml.

We hope this helps.

Links

– Code in Github: https://github.com/himadric/AddressValidation
– Loqate: https://www.loqate.com/
– Follow Santhosh: https://twitter.com/Santhosh4184


Serilog Appender for Sitecore Logging

In my last blog post I discussed the concepts of Structured Logging and the benefits it can bring to Sitecore troubleshooting. It is clear that we cannot include Structured Logging in Sitecore from the ground up unless we rebuild Sitecore Diagnostics on a Structured Logging framework, which is a huge undertaking for Sitecore. Honestly, things like logging, although very important, don’t get enough priority when a new release is planned. We thought Sitecore would move to a SaaS model and rearchitect the product in .NET Core, but the recent release of Sitecore 10 suggests that they are sticking with the current platform architecture. So it looks like we will be staying with Log4net logging for some time. Besides that, many clients will stay on older Sitecore versions, and logging for them will not change.

So, what are the options? Do we have to wait for Sitecore to rearchitect the product? Is there an option to use Structured Logging without Sitecore moving to it? It seems we can use Structured Logging without waiting for Sitecore. The result is not as good as including Structured Logging in the core product, but it is a significant improvement over Log4net text-based logging.

SerilogAppender

To convert Sitecore logging to Structured Logging, I will use a Log4net Appender that I created to forward Sitecore log events to Serilog. It is not difficult to create a Log4net Appender: all we need to do is derive a class from BufferingAppenderSkeleton and override the method SendBuffer. Below is the code. SendBuffer gets the array of events; it initializes Serilog with enrichers and loops through the events, logging each one to the Serilog sink Seq.

        protected override void SendBuffer(LoggingEvent[] events)
        {
            using (var log = new LoggerConfiguration()
                .MinimumLevel.ControlledBy(new LoggingLevelSwitch(GetLogEventLevel()))
                .Enrich.FromLogContext()
                .Enrich.WithMachineName()
                .Enrich.WithEnvironmentUserName()
                .Enrich.WithProcessId()
                .Enrich.WithProcessName()
                .Enrich.WithProperty("ThreadId", SystemInfo.CurrentThreadId)
                .Enrich.WithMemoryUsage()
                .WriteTo.Seq(_seqHost, apiKey: _apiKey)
                .CreateLogger())
            {
                foreach (var thisEvent in events)
                {
                    LogEvent(log, thisEvent);
                }
            }

        }
        private void LogEvent(Logger log, LoggingEvent loggingEvent)
        {
            try
            {
                if (loggingEvent.Level == Level.DEBUG)
                {
                    log.Debug(loggingEvent.RenderedMessage);
                }
                if (loggingEvent.Level == Level.INFO)
                {
                    log.Information(loggingEvent.RenderedMessage);
                }
                if (loggingEvent.Level == Level.WARN)
                {
                    log.Warning(loggingEvent.RenderedMessage);
                }
                if (loggingEvent.Level == Level.ERROR)
                {
                    log.Error(loggingEvent.RenderedMessage);
                }
                if (loggingEvent.Level == Level.FATAL)
                {
                    log.Fatal(loggingEvent.RenderedMessage);
                }
            }
            catch (Exception ex)
            {
                this.ErrorHandler.Error("Error occurred while logging the event.", ex);
            }
        }

For the full source code, visit the GitHub repo https://github.com/himadric/structured-logging-for-sitecore

To include the SerilogAppender in Sitecore logging, we need a patch config to add the appender to the configuration. Below is the config patch (also available in the GitHub repo).

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:security="http://www.sitecore.net/xmlconfig/security/">
    <sitecore role:require="Standalone or ContentDelivery or ContentManagement">
        <log4net>
            <appender name="SerilogAppender" type="log4net.Appender.SerilogAppender, Foundation.SerilogAppender" patch:after = "appender[@name='LogFileAppender']">
                <minimumlevel value="DEBUG" />
                <apikey value="fz0IdNDO6IfPCY9ct9o5" />
                <seqhost value="http://localhost:5341" />
                <layout type="log4net.Layout.PatternLayout" />
                <encoding value="utf-8" />
            </appender>
            <root>
                <appender-ref ref="SerilogAppender" patch:instead = "*[@ref='LogFileAppender']"/>
            </root>
        </log4net>
  </sitecore>
</configuration>

Once the SerilogAppender is deployed along with the config patch file, open Seq in the browser and you will see the log messages showing up.

If you would like to see a demo of this SerilogAppender, you may watch my session on Structured Logging at the Cincinnati Sitecore User Group.


Structured Logging in Sitecore

At Sitecore Symposium 2019, Sitecore announced the company’s plan to move the Sitecore Platform to a SaaS-based model. If you want to know more about it, you can read this FAQ. As Sitecore moves to SaaS, which will require completely revamping the architecture, they will be building on ASP.NET Core. Among the many things that will change in this transition, Sitecore will certainly revisit the logging strategy in the platform. In all likelihood they will move to Structured Logging, because ASP.NET Core itself adopted Structured Logging as its logging strategy. In fact, Sitecore has already introduced Structured Logging using Serilog in its newest modules, like xConnect and Sitecore Host.

In this article, we will discuss Structured Logging to understand what to expect in the future SaaS version of Sitecore. We will look at Sitecore’s current text-based logging strategy built on Log4net and discuss an approach for converting the logging of existing Sitecore applications to Structured Logging.

What is Structured Logging

The idea of logging information in a computer program has existed from the very beginning; every computer language has some form of printf statement. At first, printing statements to the console was the way to debug applications. Later, debugging issues in complex applications became difficult; especially elusive issues in production environments required us to persist logging statements somewhere, in files, databases, etc. This pushed the industry to come up with logging frameworks that let us treat logging as an Aspect Oriented Programming (AOP) concern and capture information in the storage of our choice. Log4j/Log4net is a good example and has wide acceptance in the industry. Log4net is an interface-based framework for implementing AOP-style logging in applications. Although it comes with several implementations of Appenders (the Log4net term for the different ways to persist information), it doesn’t restrict anyone from implementing new Appenders or changing existing ones. Sitecore went one step further: they took the Log4net source code and implemented the entire logging stack in the Sitecore.Logging module. This gives them better control over Log4net versioning and implementation, and it helps separate implementation-specific logging from Sitecore’s internal logging. If you look at a Sitecore implementation’s bin folder, you will not find any log4net.dll, because everything related to logging is in Sitecore.Logging.dll.

Is Log4net good enough for what we need? Let's look at how log4net is described on the Apache log4net About page.
The Apache log4net library is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent Apache log4j™ framework to the Microsoft® .NET runtime.
Notice that log4net says nothing about what goes into an output log statement, and that creates a huge problem when consuming logs. In other words, there is no structure, no rule, for what we send to the log output. Log4net doesn't stop us from creating structure in log statements; it's just that the framework was built around Text Logging. In Text Logging there is no separation between variables and values: everything is folded into the message. A Text Logging entry looks like the lines below (taken from a real Sitecore log). When we log this way, we lose information, because we cannot, for example, easily find all values of 'interval'.

35364 12:21:02 INFO  Starting periodic task "ExpiredMessagesCleanup" with interval 00:01:00
35364 12:21:02 INFO  Starting periodic task "CleanupTrackedErrors" with interval 00:01:00

The same entries in Structured Logging look like below.

thread=35364, time=12:21:02, level=INFO, task="ExpiredMessagesCleanup", interval=00:01:00
thread=35364, time=12:21:02, level=INFO, task="CleanupTrackedErrors", interval=00:01:00

What Structured Logging provides is key/value pairs, and that makes parsing logs much easier. But what matters most is the programming mindset, because if we concatenate everything into a single message string, the result is nothing but Text Logging again. When adopting Structured Logging, we need to think about what to capture so that the resulting logs are easy to navigate when troubleshooting difficult application issues. To help with this, Structured Logging standardizes the process, which we will discuss next. Logging frameworks such as Serilog are built on these Structured Logging concepts and provide great tooling and programming extensions for implementing Structured Logging in applications. The short sketch below contrasts the two mindsets.
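A minimal sketch of the mindset difference, assuming Serilog with the console sink (the task name and interval are made-up values):

using System;
using Serilog;

public static class MindsetSketch
{
    public static void Main()
    {
        using var log = new LoggerConfiguration().WriteTo.Console().CreateLogger();

        var taskName = "ExpiredMessagesCleanup";
        var interval = TimeSpan.FromMinutes(1);

        // Text Logging mindset: the values are baked into one opaque string.
        log.Information("Starting periodic task " + taskName + " with interval " + interval);

        // Structured Logging mindset: the same sentence, but TaskName and
        // Interval survive as queryable properties.
        log.Information("Starting periodic task {TaskName} with interval {Interval}", taskName, interval);
    }
}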

Message Template

One thing should be mentioned: Structured Logging is much harder for humans to read than Text Logging, because habitually it is easier for us to read sentences than key/value pairs. This is a problem that needs addressing, because most of the time when troubleshooting issues, after querying or navigating through the logs, we end up reading the log entries themselves. Message Templates address exactly this. For example, to capture the above log entries using a Message Template, we log like below.

log.Information("Starting periodic task {taskname} with interval {interval}", taskname, interval);

The above will produce structured log entries like below.

{
    "thread":"35364",
    "time":"12:21:02",
    "level":"INFO",
    "template":"Starting periodic task {taskname} with interval {interval}",
    "properties":{
        "taskname":"ExpiredMessagesCleanup",
        "interval":"00:01:00"
    }
}
{
    "thread":"35364",
    "time":"12:21:02",
    "level":"INFO",
    "template":"Starting periodic task {taskname} with interval {interval}",
    "properties":{
        "taskname":"CleanupTrackedErrors",
        "interval":"00:01:00"
    }
}

Since log entries are captured in Message Template format, it is possible to render them in a human-readable form by substituting the properties into the template. At the same time, since the properties are captured separately, the entries remain machine readable. Another benefit of using templates is that we can generate a unique hash from each Message Template and use it to group messages. To learn more about Message Templates, visit https://messagetemplates.org/

Events and Event Types

Every entry captured in the log is due to an event that occurred; the word event means nothing special in Structured Logging. Like Text Logging, Structured Logging has event levels such as debug, information, and warning.
In logging, the level is used to reduce the number of events captured, both to minimize the impact on performance and to leave fewer events to parse while troubleshooting. Despite this ability, troubleshooting by reading log entries is overwhelming most of the time: even with the minimum level set to info, there are so many unrelated entries that finding the ones related to the issue becomes very difficult. If events could be categorized into different event types, we could exclude the types we are not interested in and narrow the search, something like facet filtering in search. In Text Logging this is done with an Event Type, but it requires significant effort because we need to plan the Event Types ahead of time and then follow through on that plan while building the application. In Structured Logging we can use the Message Template itself as the Event Type: if we are not interested in a certain kind of message, we can exclude those messages based on their Message Template. It is even easier when the Message Template is converted into a hash, because each distinct template produces a unique hash. For example, to exclude all messages of the kind below, we can exclude every message created from the Message Template included in the entry. A sketch of an enricher that computes such a hash follows the example.

{
    "thread":"35364",
    "time":"12:21:02",
    "level":"INFO",
    "template":"Starting periodic task {taskname} with interval {interval}",
    "properties":{
        "taskname":"ExpiredMessagesCleanup",
        "interval":"00:01:00"
    }
}
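To make this concrete, here is a minimal sketch of such an enricher for Serilog (a hypothetical EventTypeEnricher, not something Serilog ships with), which hashes the raw Message Template so that every event produced from the same template shares one EventType property:

using System;
using System.Security.Cryptography;
using System.Text;
using Serilog.Core;
using Serilog.Events;

public class EventTypeEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        // Hash the template text, not the rendered message, so property
        // values do not change the event type.
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(logEvent.MessageTemplate.Text));
            string eventType = BitConverter.ToString(hash, 0, 4).Replace("-", string.Empty);
            logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("EventType", eventType));
        }
    }
}

Registering it is a single call when configuring the logger: .Enrich.With(new EventTypeEnricher()).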

Enrichment

Enrichment in Structured Logging is decorating log entries with information that builds a context, so that entries can be correlated. For example, in a Sitecore application someone may report that a component on a page sometimes shows an error for some users connected to a specific CD server. Diagnosing this kind of problem requires context in the log to narrow down the search. If we add information like the Sitecore ItemId, UserId, Rendering ItemId, DataSource Id, and Machine Name (which CD), we may find that a personalization rule for a group of users is failing because certain items were not published properly on the problematic CD server. Enrichment allows us to add information like Machine Name and ThreadId, and even custom properties, to build that context. Correlating log entries is especially difficult in asynchronous programming, where Time or ThreadId cannot easily be used to correlate messages; in that case events can be wrapped with a MessageId to identify them as belonging to one group. We will discuss Enrichment with more examples in the second part of this blog about Serilog. A small configuration sketch follows.
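A minimal sketch of that configuration, assuming the Serilog, Serilog.Enrichers.Environment, Serilog.Enrichers.Thread, and Serilog.Sinks.Console packages (the MessageId property and the logged values are hypothetical):

using System;
using Serilog;
using Serilog.Context;

public static class EnrichmentSketch
{
    public static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .Enrich.WithMachineName()   // which server (e.g. which CD) wrote the entry
            .Enrich.WithThreadId()
            .Enrich.FromLogContext()    // enables the ambient properties pushed below
            .WriteTo.Console()
            .CreateLogger();

        // Wrap all entries of one logical operation in a MessageId so they can
        // be correlated even across async boundaries.
        using (LogContext.PushProperty("MessageId", Guid.NewGuid()))
        {
            Log.Information("Personalization failed for rendering {RenderingId} and user {UserId}",
                "hypothetical-rendering-id", "sitecore\\bob");
        }

        Log.CloseAndFlush();
    }
}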

Log Parsing Tools

Tools do not really have anything to do with Structured Logging concepts, but Structured Logging makes it much easier to build tools for slicing and dicing logs. In Text Logging, because of the lack of structure, we fall back on generic tools like LogParser or grep to parse the captured logs. With logs captured using Structured Logging concepts, it is easier to build tools that parse logs, create reports and charts, and send alerts. Seq is one such tool for logs captured with Serilog. We will look into Seq in the second part of this blog series; a minimal wiring sketch follows.
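As a teaser for part two, wiring Serilog to Seq is nearly a one-liner, assuming the Serilog and Serilog.Sinks.Seq packages and a local Seq instance on its default port:

using Serilog;

public static class SeqSketch
{
    public static void Main()
    {
        // Events arrive in Seq fully structured, queryable by template,
        // level, or any enriched property.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Seq("http://localhost:5341")
            .CreateLogger();

        Log.Information("Starting periodic task {TaskName} with interval {Interval}",
            "ExpiredMessagesCleanup", "00:01:00");

        Log.CloseAndFlush();
    }
}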

Final Words

In this blog article I discussed the concepts of Structured Logging and tried to associate them with Sitecore logging. Understanding these concepts will help us understand logging in future SaaS versions of Sitecore. My goal is also to see whether I can convert the logging in current versions of Sitecore to Structured Logging, at least to some extent, so that I can take advantage of the fantastic tooling and extensions available in Serilog. That's coming in future blog posts. Stay tuned.



Sitecore Identity Part 3: Connecting to External Identity Provider

Introduction

The Sitecore Identity Provider is implemented on top of the IdentityServer4 framework. IdentityServer4 doesn't dictate how authentication is done or which applications can use the identity provider; that is up to the implementer. In the previous blog article, we discussed how a third-party application can authenticate using the Sitecore Identity Provider. In this blog we will look at the other side of Sitecore Identity. We know that Sitecore Identity authenticates users against the membership provider, but Sitecore Identity can also delegate authentication to another identity provider. In fact, Sitecore Identity comes with an inbuilt AzureAd subprovider; if you enable it, you can authenticate against Azure Active Directory. I added the following updated diagram to show how subproviders fit into the architecture.

Sitecore Identity with Subproviders

Configuring Azure Ad Subprovider

Sitecore provides some documentation about how to configure the out-of-the-box Azure AD subprovider. It is not very detailed, and it takes some effort to get it configured. I was going to write about it, but I had a problem setting it up end to end. So I posted this question on Stack Exchange, and through that I found this excellent blog post which explains everything in detail, so I am skipping that part. I captured the following animation to show how authentication happens once you set up the Azure AD subprovider.

If you set the loginPage attribute of the shell site in Sitecore.Owin.Authentication.IdentityServer.config to $(loginPath)shell/SitecoreIdentityServer/IdS4-AzureAd, Sitecore will skip the Sitecore Identity Provider login page and authenticate directly against the Azure AD provider.
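For reference, a sketch of what that patch could look like, assuming the standard shell site definition (verify the element path against your own Sitecore.Owin.Authentication.IdentityServer.config):

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <sites>
      <site name="shell">
        <!-- Send shell logins straight to the Azure AD subprovider. -->
        <patch:attribute name="loginPage">$(loginPath)shell/SitecoreIdentityServer/IdS4-AzureAd</patch:attribute>
      </site>
    </sites>
  </sitecore>
</configuration>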

Custom External Subprovider

Let's write some code to implement a custom subprovider. I will be utilizing the IdentityServer demo site https://demo.identityserver.io/ to create our subprovider. The reason to use the IdentityServer demo is that it is simple, and no configuration is needed on the IdentityServer side. If you go to that site, you will find several grant types. The one we will use is the Implicit grant type, and the ClientId for it is 'implicit'. Our subprovider will use this ClientId to get access to the IdentityServer identity provider.

Sitecore Identity is a Sitecore Host application. Sitecore Host is a framework introduced in Sitecore 9.1 that you can use to create Sitecore services. The benefit of creating services with Sitecore Host is that you get all of its common features, like logging and Dependency Injection, right out of the box. The subprovider we will be creating is a Sitecore Host plugin, and since Sitecore Host supports dynamic plugin loading, our plugin will be loaded as soon as we drop it into the 'sitecoreruntime' folder under the Sitecore Identity root folder.

You can find the plugin project in the GitHub repo; its name is Sitecore.Plugin.IdentityProvider.Ids4Demo. Here are the important code snippets.


The Sitecore.Plugin.IdentityProvider.Ids4Demo.xml file contains the configuration of the subprovider. The <Enabled> property must be true to make the subprovider available to Sitecore Identity. <AuthenticationScheme> identifies the subprovider and is used, along with the identity provider name configured in Sitecore, to select which subprovider a site uses. <ClientId> contains the Id used to connect to the IdentityServer demo provider. <DisplayName> is the button caption shown on the Sitecore Identity login page for this subprovider. <ClaimsTransformations> translate claims from the subprovider into Sitecore Identity claims.

<?xml version="1.0" encoding="utf-8"?>
<Settings>
  <Sitecore>
    <ExternalIdentityProviders>
      <IdentityProviders>
        <Ids4Demo type="Sitecore.Plugin.IdentityProviders.IdentityProvider, Sitecore.Plugin.IdentityProviders">
          <AuthenticationScheme>IdS4-Ids4Demo</AuthenticationScheme>
          <DisplayName>IdentityServer Demo Identity Provider</DisplayName>
          <Enabled>true</Enabled>
          <ClientId>implicit</ClientId>
          <MetadataAddress></MetadataAddress>
          <ClaimsTransformations>
            <!-- Place transformation rules here. -->
            <ClaimsTransformation1 type="Sitecore.Plugin.IdentityProviders.DefaultClaimsTransformation, Sitecore.Plugin.IdentityProviders">
              <SourceClaims>
                <Claim1 type="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" />
              </SourceClaims>
              <NewClaims>
                <Claim1 type="email" />
              </NewClaims>
            </ClaimsTransformation1>
            <ClaimsTransformation2 type="Sitecore.Plugin.IdentityProviders.DefaultClaimsTransformation, Sitecore.Plugin.IdentityProviders">
              <SourceClaims>
                <Claim1 type="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" />
              </SourceClaims>
              <NewClaims>
                <Claim1 type="email" />
              </NewClaims>
            </ClaimsTransformation2>
          </ClaimsTransformations>
        </Ids4Demo>
      </IdentityProviders>
    </ExternalIdentityProviders>
  </Sitecore>
</Settings>

The ConfigureSitecore class adds the subprovider to the services collection with the appropriate options. In our case the code sets the Authority to the IdentityServer demo provider and sets the sign-in scheme to 'idsrv.external', the intermediate scheme IdentityServer4 uses for external providers.

using System;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Sitecore.Framework.Runtime.Configuration;
using Sitecore.Plugin.IdentityProvider.Ids4Demo.Configuration;

namespace Sitecore.Plugin.IdentityProvider.Ids4Demo
{
    public class ConfigureSitecore
    {
        private readonly ILogger<ConfigureSitecore> _logger;
        private readonly AppSettings _appSettings;

        public ConfigureSitecore(ISitecoreConfiguration scConfig, ILogger<ConfigureSitecore> logger)
        {
            _logger = logger;
            _appSettings = new AppSettings();
            // Bind the plugin's XML configuration section to the strongly typed settings.
            scConfig.GetSection(AppSettings.SectionName).Bind(_appSettings.Ids4DemoIdentityProvider);
        }

        public void ConfigureServices(IServiceCollection services)
        {
            Ids4DemoIdentityProvider ids4DemoProvider = _appSettings.Ids4DemoIdentityProvider;
            if (!ids4DemoProvider.Enabled)
                return;

            _logger.LogDebug("Configure {DisplayName}. AuthenticationScheme = {AuthenticationScheme}, ClientId = {ClientId}",
                ids4DemoProvider.DisplayName, ids4DemoProvider.AuthenticationScheme, ids4DemoProvider.ClientId);

            new AuthenticationBuilder(services).AddOpenIdConnect(ids4DemoProvider.AuthenticationScheme, ids4DemoProvider.DisplayName, options =>
            {
                // Sign in against the intermediate cookie scheme IdentityServer4 uses for external providers.
                options.SignInScheme = "idsrv.external";
                options.ClientId = ids4DemoProvider.ClientId;
                options.Authority = "https://demo.identityserver.io/";
                options.MetadataAddress = ids4DemoProvider.MetadataAddress;
                // Without an explicit CallbackPath the state payload cannot be decrypted (see the Update below).
                options.CallbackPath = "/signin-idsrv";
                options.Events.OnRedirectToIdentityProvider += context =>
                {
                    // If the user already signed in through this provider, force the account picker.
                    Claim idpClaim = context.HttpContext.User.FindFirst("idp");
                    if (string.Equals(idpClaim?.Value, ids4DemoProvider.AuthenticationScheme, StringComparison.Ordinal))
                        context.ProtocolMessage.Prompt = "select_account";
                    return Task.CompletedTask;
                };
            });
        }
    }
}
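The code above binds its settings from the XML into small POCO classes. For completeness, here is a hypothetical sketch of what they could look like (the section path is an assumption based on the XML structure; the real definitions are in the GitHub repo's plugin project):

namespace Sitecore.Plugin.IdentityProvider.Ids4Demo.Configuration
{
    public class AppSettings
    {
        // Assumed section path mirroring the plugin XML shown earlier.
        public const string SectionName = "Sitecore:ExternalIdentityProviders:IdentityProviders:Ids4Demo";

        public Ids4DemoIdentityProvider Ids4DemoIdentityProvider { get; set; } = new Ids4DemoIdentityProvider();
    }

    public class Ids4DemoIdentityProvider
    {
        public string AuthenticationScheme { get; set; }
        public string DisplayName { get; set; }
        public bool Enabled { get; set; }
        public string ClientId { get; set; }
        public string MetadataAddress { get; set; }
    }
}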

Copy the build output into the following directory structure under the Sitecore Identity root folder.

sitecoreruntime
│   license.xml
│
└───production
    │   Sitecore.Plugin.IdentityProvider.Ids4Demo.dll
    │   Sitecore.Plugin.IdentityProvider.Ids4Demo.xml
    │
    └───sitecore
        └───Sitecore.Plugin.IdentityProvider.Ids4Demo
            │   Sitecore.Plugin.manifest
            │
            └───Config
                    Sitecore.Plugin.IdentityProvider.Ids4Demo.xml

Launch the Sitecore shell site. As you are redirected to the Sitecore Identity site, you should see a new login button for the new subprovider, as shown in the picture below.

Click the new subprovider button; it should take you to the IdentityServer demo provider for authentication. Enter username bob and password bob, and you will be signed in and redirected back to the Sitecore Identity provider. It works like the animation below.


Unfortunately, at this point, when the website is redirected back to Sitecore Identity, it throws a server error. The log shows the following error: 'The payload was invalid'.

System.Security.Cryptography.CryptographicException: The payload was invalid.
   at Microsoft.AspNetCore.DataProtection.Cng.CbcAuthenticatedEncryptor.DecryptImpl(Byte* pbCiphertext, UInt32 cbCiphertext, Byte* pbAdditionalAuthenticatedData, UInt32 cbAdditionalAuthenticatedData)
   at Microsoft.AspNetCore.DataProtection.Cng.Internal.CngAuthenticatedEncryptorBase.Decrypt(ArraySegment`1 ciphertext, ArraySegment`1 additionalAuthenticatedData)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus& status)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean& requiresMigration, Boolean& wasRevoked)
   at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData)
   at Microsoft.AspNetCore.DataProtection.DataProtectionCommonExtensions.Unprotect(IDataProtector protector, String protectedData)
   at IdentityServer4.Infrastructure.DistributedCacheStateDataFormatter.Unprotect(String protectedText, String purpose)
   at Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectHandler.ReadPropertiesAndClearState(OpenIdConnectMessage message)
   at Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectHandler.HandleRemoteAuthenticateAsync()

I couldn't get to the bottom of this problem. I checked the id_token and the payload looks valid, so I opened a support ticket with Sitecore. I have to wait until Sitecore provides an explanation of why the error happens for other identity providers but not for Azure AD. The same problem happens with Okta too.

I decided to publish this blog before the issue is solved because I don't see any problem with the approach. As soon as I get a resolution, I will update the blog. Fingers crossed.


Update

Sitecore answered my question about the issue mentioned above, and that resolved it: the CallbackPath was missing in the ConfigureServices method of the ConfigureSitecore class. After I added options.CallbackPath = "/signin-idsrv"; my custom identity provider started working without any problem. I have updated the code in GitHub as well (the listing above already includes the fix).
