Executing DTM at the top of the page

If you have been developing websites for a while, you will know that one of the typical recommendations is to execute as much JavaScript as possible at the bottom of the page. This is nothing new: Yahoo recommended it back in 2007. The reason is very simple: JavaScript code tends to add a delay, both when loading the JS file and when executing it; so, by moving it towards the bottom, you make sure the HTML is loaded and the page is rendered before any JavaScript starts to execute. The user believes the page has loaded a bit sooner than when it is actually fully loaded. DTM knows this very well, and this is why you have to add two pieces of code: one at the top and one at the bottom of the HTML.

This approach works very well in most cases. DTM first loads the code that needs to be at the top of the page, but then allows you to defer the rest of the loading and execution to the bottom of the page: Adobe Analytics, Adobe Audience Manager and 3rd party tags. This is the typical recommendation. Its risk is that some page views might be lost if the user moves away too fast.

However, there is one particular case where this recommendation very often fails. I was working with a well-known British newspaper, helping them with the migration from another Web analytics tool to Adobe Analytics. They wanted to run both tools side by side for a short period of time, to make sure the numbers did not change too much. To our dismay, Adobe Analytics was showing a much lower number of page views than the other Web analytics tool. We realised that the problem was that Adobe Analytics was executed at the bottom of the page, as per the typical recommendation. The homepage of a newspaper tends to be massive, taking many seconds, even minutes, to fully load. This means that the code at the bottom runs the risk of never being executed if the user clicks on a link or closes the browser tab after quickly reading the headlines.

The only solution in this case is to reorganise the code in a slightly different way:

  • The DTM header code needs to be moved to the bottom of the <head> section, ignoring the recommendation in DTM of pushing it to the top.
  • The DTM footer code should still be at the bottom of the <body> section.
  • Add the data layer before the DTM header code.
  • Configure the Adobe Analytics tool to be executed at the top of the page.
    [Image: analytis-top]
  • Set all Page Load Rules that will be setting analytics variables to be executed at the top of the page.
    [Image: plr-top]
  • The Data Elements used for Adobe Analytics cannot use CSS selectors.

This solution guarantees that the analytics code is executed most of the time, at the expense of delaying the page load by a few hundred milliseconds.

Lifetime value of a visitor

A few years ago, one of my customers showed me a tip that I found very interesting: tracking the lifetime value of a customer. The mobile SDKs offer a function to track the visitor's lifetime value, but the traditional JavaScript implementation does not have anything similar. So, we will have to create it.

Before getting into the details, it must be noted that this metric is not 100% precise. Visitors deleting their cookies or using different browsers will, as a consequence, have fragmented data. However, I believe it still has some value, as it will provide additional information about your visitors. In fact, the visitor retention reports have the exact same limitation. In other words, you should apply the same considerations to this new metric as to the visitor retention reports.

The first thing we need to do is devote an eVar, configuring it as a counter eVar with no expiration.

[Image: lifetimevalueevar]

The next step is to add this piece of code in the doPlugins section:
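The original snippet is not reproduced here, but the idea can be sketched as follows. This is only a sketch: eVar10 as the counter eVar is an assumption (use whichever eVar you configured above), and it assumes the order revenue is available in the standard s.products string on the confirmation page.

```javascript
// Sketch: on the order confirmation hit, increment the lifetime value
// counter eVar by the order revenue taken from s.products.
function s_doPlugins(s) {
  if (s.events && s.events.indexOf("purchase") > -1) {
    var total = 0;
    var products = (s.products || "").split(",");
    for (var i = 0; i < products.length; i++) {
      // s.products entry format: category;product;quantity;price[;events[;eVars]]
      var price = parseFloat(products[i].split(";")[3]);
      if (!isNaN(price)) total += price;
    }
    if (total > 0) {
      // A counter eVar interprets "+N" as "add N to the accumulated value"
      s.eVar10 = "+" + total.toFixed(2);
    }
  }
}
// In the real s_code: s.doPlugins = s_doPlugins;
```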

Once you have this code live for a few days, you should see something like:

[Image: lifetimevaluereport]

Probably, the data you will get is too granular to be useful, so my suggestion is to create a classification of the values in ranges. For example:

  • 0 – 50: Very low value
  • 51 – 100: Low value
  • 101 – 200: Medium value
  • 201 – 500: High value
  • 501+: Very high value

Of course, the thresholds will be different for each business. Also, remember that the classification file needs to have all the values; you cannot specify ranges. If you are a regular expression ninja, you might want to try using the classification rule builder to achieve the same results.

VISTA rules

If you have been involved in an Adobe Analytics implementation, it is highly probable that, at one point or another, you have heard the expression “VISTA rules”. However, many of you might still wonder what those little beasts are. First of all, let’s start with the name. Unless you dig in Google or in the help section, you would never have guessed that VISTA stands for “Visitor Identification, Segmentation & Transformation Architecture”. Do not get too impressed: it was just an imaginative way of coming up with a fancy name.


AAM, surveys and look alike modelling

As all digital marketers know, surveys provide invaluable information about visitors. They allow you to learn various types of information from them: their opinion of the website itself, likelihood of buying, preferred products… The outcome of these surveys can be used to modify certain aspects of the experience or to target the visitors with specific messages. All marketers would like every single customer to complete a survey, so they could use that information to create a perfect experience for each visitor, but the reality is far from this ideal. Only very few visitors end up accepting the invitation, and this usually happens when there is a potential reward.

Enter Adobe Audience Manager. One of its capabilities is look-alike modelling. Basically, this feature compares a base population with the rest of the population, finding similarities. You can think of it as an algorithm that gets all the traits from the base population, removes the base trait, and checks the rest of the population for visitors exhibiting the new list of traits. The main goal of this feature is to uncover hidden population segments, and this is exactly what we need it for.

Going back to the survey, many on-line survey tools have the capability of processing the answers and providing a score or a classification. With this information, once the user finishes the survey (or after selecting a particular answer to a question), we can fire a tracking pixel, with a key/value pair that is different for each score or classification. Creating a set of traits from this tracking pixel is trivial.
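As an illustration (the domain and signal name below are placeholders; in Audience Manager, plain key/value signals sent to the data collection servers become available with a c_ prefix in the trait builder):

```
https://yourcompany.demdex.net/event?surveyscore=interested
```

A trait could then be created with a rule such as c_surveyscore == "interested".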

The next step is to create a model using one of these traits. I am not going to talk today about how to use this functionality; that is material for another post. The algorithm will extract a subset of the population that looks very similar to the people who have conducted the survey, even if they have not conducted it themselves. In fact, the algorithm generates a trait that can be used in a segment.

[Image: ModelReachAccuracy]

After this explanation, let’s try to illustrate it with an example.

  • Bank website
  • 20,000,000 registered users
  • A survey is created to analyse investment interest
  • 5,000 people conduct the survey
  • 1,000 people are classified as “interested in investing”
  • A model is created to find similar people to those “interested in investing”
  • The algorithm, with an accuracy of 0.6, finds 1,000,000 visitors potentially “interested in investing”
  • These 1,000,000 visitors are then targeted with a campaign to show the investing products the bank has

In other words, from just a population of 1,000 visitors that we know for sure are interested in investing, we have uncovered a population 1,000 times bigger of potential investors, just by looking at similar traits.

The W3C data layer – part II

Now, looking into the standard, we will go through the different sections that make up the recommended data layer. Let’s review each of them in the following posts.

Root: digitalData

The JavaScript object should always be called digitalData.

Page Identifier Object

Although I personally do not find it very useful for Web analytics, this identifier should be completely unique. In particular:

This value SHOULD distinguish among environments, such as whether this page is in
development, staging, or production.

Page Object

This is where you store all the information about the page. It is very well suited for the page name, section, subsection… In particular, s.pageName and s.channel are usually taken from this object. For example:
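For illustration, with made-up values (the mapping itself follows the standard's page.pageInfo and page.category nodes):

```javascript
// Data layer as populated by the CMS (example values)
var digitalData = {
  page: {
    pageInfo: { pageName: "home" },
    category: { primaryCategory: "homepage" }
  }
};

// In the analytics code, s being the AppMeasurement object
var s = (typeof s !== "undefined") ? s : {};
s.pageName = digitalData.page.pageInfo.pageName;
s.channel = digitalData.page.category.primaryCategory;
```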

If you want to track additional information from the page, just add more props to the analytics object s.

Product Object

This is the start of a set of objects that can be used in various ways. In particular, digitalData.product[n] is an array of product objects. You should use this object for products that are shown on the page, irrespective of whether they have already been added to the basket. In a PLP (Product Listing Page), the contents of the array are straightforward.

However, in a PDP (Product Description/Details Page), it is not as obvious. Initially, you might think of only including one element in the array, the main product, but it might also be useful to include the other products shown: similar products, recommended products, people who bought this product also bought these others… In the latter case, you may set digitalData.product[0] as the main product and digitalData.product[n] for n>0 for the other products. This is useful to set the prodView event only for the main product.

Regarding the data that you can set, most of the elements are self-explanatory and most of them are optional. Some comments on the sub-objects and nodes of this object:

  • productInfo.productID: it does not have to be the SKU, especially if you have a unique productID for each product while the same SKU is used for different colours, sizes… in which case, the productID is what you would use in the s.products variable
  • productInfo.productName: I would not suggest that you use it as the product ID in the s.products variable
  • category.primaryCategory: in version 15 of SiteCatalyst/Adobe Analytics, the category slot in the s.products variable was fixed, although I have never seen an implementation that uses it consistently; in general, I suggest creating a merchandising eVar for the category
  • attributes: in case you want to know what kind of secondary product this is (similar products, recommended products, people who bought this product also bought these others…), you can set an attribute for it
  • linkedProduct: in the case of secondary products that are related to the main one, you could link the secondary product to the main one using this property

With all the previous comments in mind, you could use the following code to create the s.products variable:
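As a sketch of that code (eVar25 as the merchandising eVar for the category is a made-up slot; adapt it to your own SDR):

```javascript
// Build the s.products string from the data layer; digitalData.product[0]
// is the main product, the rest are secondary products.
function buildProducts(digitalData) {
  var entries = [];
  for (var i = 0; i < digitalData.product.length; i++) {
    var p = digitalData.product[i];
    // Syntax: category;product;quantity;price;events;merchandising eVars
    entries.push(";" + p.productInfo.productID +
                 ";;;;eVar25=" + p.category.primaryCategory);
  }
  return entries.join(",");
}

// Usage sketch on a product page:
// s.products = buildProducts(digitalData);
```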

Cart Object

Although the Cart Object might look similar to the Product Object, they in fact serve different purposes. As its name implies, all products that are already in the cart should be added to this object. As a consequence, it is entirely possible to have both the Product and the Cart Objects on the same page, with different contents: the user has already added some products to the basket and is still browsing in order to add new products to it. It is up to the development team to decide whether it makes sense to include this object on all pages or only on those pages where it makes sense to have it; for example, you might want to remove it in the help section of the website.

Some comments on the sub-objects and nodes of this object:

  • cartID: a unique ID of the cart, usually created when the cart is opened
  • price: all details about the price of the contents of the cart; however, the values might not be 100% accurate, as you only know some of them as you progress through the checkout process; the voucher and shipping details should only contain cart-wide information
  • item[n].productInfo: this is exactly the same as digitalData.product[n].productInfo
  • item[n].quantity is the total number of units for this particular item; however, remember that Adobe Analytics does not track units in the cart
  • item[n].price is where you would keep product-specific vouchers

Since you can have both Product and Cart objects, it is up to the implementation to decide which one to use on each page. For example, in a PLP, the Product Object will generally be used, but in a cart page, the Cart Object is the one to be used.

Hybrid apps, visitor stitching and Visitor ID Service

The classical problem of how to make sure that, in hybrid apps, the journey is not broken when transitioning from the native app to the embedded browser is well known, and it was solved a long time ago. My colleague Carl Sandquist wrote a great post on the official Adobe blog some time ago about how to stitch visitors in hybrid apps. Two years later, I still reference it to my customers. I recommend that, before you proceed with the rest of this post, you read it.

However, the aforementioned blog post does not cover the Visitor ID Service. It is still rarely used in mobile apps, but I am sure this will change as Web analysts start using features like Customer Attributes.

Mobile SDKs currently support the Visitor ID Service. It is very easy to enable: just add your Adobe Marketing Cloud Org ID to the ADBMobileConfig.json configuration file:
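A sketch of the relevant fragment of ADBMobileConfig.json (YOUR-MCORG-ID is a placeholder; the analytics values are made up and shown only for context):

```json
{
  "marketingCloud": {
    "org": "YOUR-MCORG-ID@AdobeOrg"
  },
  "analytics": {
    "rsids": "yourreportsuite",
    "server": "yourcompany.sc.omtrdc.net"
  }
}
```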

Now, the problem we face in hybrid apps is how to send the unique Visitor ID that has been assigned to the app to the mobile web. I have not found any documentation about it anywhere, so I will attempt to explain it.

The first step is to retrieve all the possible IDs that may have been used. This is important because, during app upgrades, you do not want to create a new visitor and inflate your statistics. So, if an old value is still available, we should continue using it.

The next step is to create a URL with all the previous parameters; these values need to be passed to the web in query string parameters. For example:
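For instance, assuming hypothetical parameter names mid (Marketing Cloud ID) and aid (Analytics visitor ID), the URL opened in the embedded browser could look like:

```
https://m.example.com/page?mid=01234567890123456789012345678901&aid=2B8147DA850785C4-6000010E2006DC28
```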

Moving to the web, the code becomes more complex, as there are various variables that might need to be set. In the s_code, outside of the doPlugins section, you need to add the following code:
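The snippet itself is not reproduced here, so the following is only a sketch of the idea. The parameter names (mid, aid) and the Visitor API setter names are assumptions; verify them against your Visitor API version before using anything like this:

```javascript
// Sketch: hand the IDs received from the app over to the Visitor ID Service.
// YOUR-MCORG-ID must be replaced with your Adobe Marketing Cloud Org ID.
function stitchHybridVisitor(s, Visitor) {
  var visitor = Visitor.getInstance("YOUR-MCORG-ID@AdobeOrg");
  var mid = s.Util.getQueryParam("mid"); // Marketing Cloud ID from the app
  var aid = s.Util.getQueryParam("aid"); // Analytics visitor ID from the app
  // These setter names are an assumption; check your Visitor API version
  if (mid) { visitor.setMarketingCloudVisitorID(mid); }
  if (aid) { visitor.setAnalyticsVisitorID(aid); }
  s.visitor = visitor;
}
// In the s_code, outside of doPlugins: stitchHybridVisitor(s, Visitor);
```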

A few notes about the previous piece of code:

  • As with the SDK configuration, you need to replace YOUR-MCORG-ID with your Adobe Marketing Cloud Org ID
  • I am assuming you are using the AppMeasurement library; if you are still in the old H s_code, remember to replace the s.Util functions with the equivalents in H s_code.
  • The Visitor API version needs to be 1.3 or above.

If you are using DTM, you should copy this code to the “Customize Page Code” section of the Adobe Analytics tool.

[Image: visitoridhybrid]

Note that you must select “Before UI settings”.

Now, you should have a single journey in hybrid apps and be able to use all the capabilities of the Marketing Cloud.

The W3C data layer – part I

This is the first post of a series in which I am going to describe the W3C data layer. A few months ago, I explained why it is a good idea to have a data layer. In this series, I am going to dive into the details of one particular data layer implementation: the W3C standard. For those of you who do not know what the W3C is, it is the international body that creates the standards that we use every day on the web: HTML, CSS, Ajax… Although there are other options for data layers, like JSON-LD, I personally prefer the W3C standard; after all, this body has created some of the most important standards on the Internet.

The first thing I suggest is that you download the W3C data layer standard: http://www.w3.org/2013/12/ceddl-201312.pdf. It is completely free. Have a look at it. You will notice the number of well-known companies that contributed to this standard, including Adobe, my employer. In total, 56+ organisations and 102 individuals collaborated in its creation. So, if you choose to follow this document, you can be confident that you are not on your own.

You might have also noticed the recency of this document: it is less than two years old (at the time of writing). This is probably why many Web analysts have never heard of the concept of a data layer. That being said, the word is spreading quickly and it is starting to become the norm, rather than the exception. In fact, a few of my customers that are undergoing a major redevelopment of their websites are including a data layer, which they did not have before.

I hope that, by now, you are fully convinced of the need for a data layer and the benefits of going with the W3C standard. You should also start spreading the word within your organisation. I have found that this step can be important, as any new addition to the website will face some resistance. It must also be remembered that this data layer is not exclusive to Web analytics; other Web marketing tools, like Web optimisers and DMPs, will greatly benefit from a data layer.

Probably, the development team is going to be the most difficult to convince. They might have a different approach in mind or worry about the effort it will take, but my experience shows that, once they understand it, they will support this concept.

Start defining your data layer

Once you have everybody aligned, you should create a document with the contents of your particular implementation of the data layer. Remember to include all on-line marketing teams in the documenting process: Web analytics, optimisers, advertisers… I was recently involved in the creation of a data layer for a customer and it took 5 weeks to finish. This is probably an edge case, but you should be aware that this stage might take longer than initially expected.

In a future post I will explain what the content of the data layer should be. For now, I suggest you review section 6 of the W3C data layer document, to see what you can expect to include in the data layer. There are also a couple of examples in section 7.

Location of the data layer

Before starting the development, the location of the data layer must be agreed with all parties involved. Ideally, it should be at the beginning of the <head> section of the HTML document. The reason is that it can then be used by any other JavaScript code. If this top-most location cannot be achieved, it should be placed before any tool that will read the data layer is loaded. For example, if you are using a tag manager or a DMP like Adobe Audience Manager, the data layer should be placed above all of these tools.

There is finally one additional technical problem with placing the data layer at the top. Page-level information is usually retrieved from the CMS and can easily be cached and set in the HTML. However, depending on the CMS, there is some information, like user-level information, which is not available on page load and requires an AJAX call. As a consequence, it is possible that the code that needs this data executes before the data is available. For example, the Web analytics code might be capturing the log-in status and will need the user-level information when executing. This problem needs to be solved on a case-by-case basis.
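One possible workaround, sketched with assumed names (the standard defines digitalData.user as an array of user objects), is to delay the dependent code until the AJAX call has populated the data layer:

```javascript
// Sketch: poll until the user section of the data layer is populated,
// then run the code that depends on it; give up after a few retries.
function whenUserDataReady(digitalData, callback, retriesLeft) {
  if (digitalData.user && digitalData.user.length > 0) {
    callback();
  } else if (retriesLeft > 0) {
    setTimeout(function () {
      whenUserDataReady(digitalData, callback, retriesLeft - 1);
    }, 100);
  }
  // If retries are exhausted, the caller tracks without user-level data
}
```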


In future posts I will describe in greater detail other aspects of the W3C data layer:

  • Each of the JavaScript objects
  • Integration with DTM

One or multiple report suites in Adobe Analytics

Back in the old days, before SiteCatalyst 15 was released, the limitation in segmentation meant that, usually, you needed multiple report suites. You would usually have a combination of JavaScript and VISTA rules to do that segmentation (in case you are wondering, the S in VISTA stands for Segmentation), sending the data to different report suites. After that, you would also need a rollup to try to get an overall picture.

With the introduction of SiteCatalyst 15, segmentation became much more powerful. You could have one single report suite and use segments in SiteCatalyst to analyse the data. The segmentation interface received a massive improvement with the May 2014 release of Adobe Analytics.

However, there are still many valid reasons why you would want separate report suites. My friend Jan Exner gave his point of view some time ago: one or two report suites. I would go one step further and talk about more than just two and other reasons why you would want many.

  • Mobile apps. You will probably want to put all mobile apps in a separate report suite, with the Mobile UI enabled. Having a single report suite will probably create some headaches, as some features are mobile app specific and others, web specific. If you need totals, you can use Report Builder and create the totals in Microsoft Excel.
  • Multiple currencies. If you are selling in different currencies and you need accurate reporting for each currency, then it might be better to have one report suite per country or region. However, you can also stick to one single report suite: track the revenue both in the standard location of the s.products string and in a numeric event, and copy the currency code to an eVar. With this approach, you can report on the report suite default currency and on the local currency in which the transaction occurred.
  • Multiple time zones. As with currencies, if you sell in very different time zones and need accurate intra-day reporting, you might have to create multiple report suites, depending on the time zones. However, generally speaking, reports tend to span more than just one day, so the differences in time zones are less noticeable.
  • Different teams. Some large organisations prefer to have the data separated in different report suites, so that it is possible to give permissions to access the data in a more granular way. It is then possible to give the least amount of privileges to the web analysts, so that they only have access to the data they need. For example, I was working with a customer that had completely different teams analysing Android and iOS data and these teams did not even talk to each other.
  • Legal requirements. This might not be very common, but if certain information should only be accessed by a limited number of people for legal reasons, then you need to have multiple report suites and grant access to them depending on the needs, just like in the previous case. As an example, I was working with a supermarket that was selling its own brand alongside other brands; the analysts of their own brand, for obvious reasons, were not allowed to see the information of the other brands; this solution required a VISTA rule.
  • Multi-suite tagging. If your budget allows for it, the best solution is to go for the best of both worlds: one global report suite and multiple local report suites. For lack of a better word, I use “local” here without a geographical meaning.
  • Different SDRs. Well, this is a sin you should avoid at all costs, but if you have inherited implementations that use different SDRs, then you need different report suites unless you are willing to redesign all Adobe Analytics implementations.
  • IP address segmentation. If you need to segment by IP address, with a granularity finer than what the geolocation reports can provide, then you need a VISTA rule and multiple report suites. For example, if you have a call centre that actually uses the website, you do not want to “pollute” the main report suite with call centre data; instead, you want the call centre to be reported in a specific report suite.
  • Human vs non-human interactions. In a previous job, we had a Web services API that offered very similar information to the website. In fact, the information from the API was presented on third party websites, but we were not allowed to add any tagging to these websites. The solution was to track server-side the API usage, obviously, using a separate report suite.
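The multiple-currencies approach above can be sketched like this (event10 and eVar20 are hypothetical slots; the numeric event keeps the local amount while the standard revenue is converted to the report suite currency):

```javascript
var s = (typeof s !== "undefined") ? s : {};
// Local currency of this transaction; Adobe converts the s.products
// revenue to the report suite default currency
s.currencyCode = "EUR";
// Revenue in the standard price slot and duplicated in a numeric event
s.products = ";SKU1;1;49.99;event10=49.99";
s.events = "purchase,event10";
// Currency code in an eVar, to break down the numeric event
s.eVar20 = "EUR";
```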

I would like to hear your ideas on this topic or situations that you have found, which have led you to one or multiple report suites.

Out of stock – Advanced reports

In my last post, I described a simple solution to track out-of-stock products using Adobe Analytics. As its name implies, this is a rather simple approach: you just get a count of the number of times an out-of-stock product is shown. For many, that might be enough, but there are too many different requirements for a one-size-fits-all solution.

Another of my customers wanted a more detailed view of the stock level for all products, not just the fact that a product is out of stock. For this solution, we are going to need three events:

  • event1: stock level
  • event2: stock check
  • event3: out of stock

The implementation, in theory, should be very simple. For example, let’s consider a page with three products:

  • SKU1: more than 10 products in stock
  • SKU2: 7 products in stock
  • SKU3: out of stock

The code would look like:

s.products = ";SKU1;;;event1=10|event2=1,;SKU2;;;event1=7|event2=1,;SKU3;;;event3=1";
s.events = "event1,event2,event3";

In this example, any number above 10 products in stock is not relevant.

Now, when it comes to reporting, you need to create a calculated metric: event1/event2. This calculated metric will show the average of items in stock for each product. Using event3 in the reports, you will get the number of times each product was shown and it was out of stock.

Out of stock – basic reports

The wealthiest man in Spain (my home country) is the owner of Zara. There are Zara shops everywhere in the world. Just as an example, I was in Bangkok two months ago and I found a Zara store in one of the most popular shopping centres. The success of this company has been widely studied. One of the key success factors of this company is stock management. If you are interested in a detailed explanation, here you have a video that I found very interesting:

In real stores, the only way to determine if a product is popular or not is by the number of units sold. I am not saying that this is not useful, but the mathematical models used could benefit from additional metrics. In the online world, we can go one step further and include other metrics in the algorithm, like product views, add to carts and number of times it is out of stock.

With Adobe Analytics, product views, add to cart, remove from cart and orders are standard metrics that will be included in any typical retail implementation. On the other hand, there is no standard out-of-stock report. I am sure different people will have slightly different views on what “out of stock” is. For me, it is the number of times per visit a product has been shown to a visitor and it was out of stock.

Let me summarise why I chose this way of measuring. While a product is in stock, you can measure the popularity of a product using metrics like add to basket or units sold. However, the moment it is out of stock, you do not have any way to measure how popular it is: you just know it cannot be sold. It could well be that the product is not popular any more and you can just remove it from the inventory. Or, it might be the most popular product, with thousands of page views and frustrated visitors that cannot purchase it. With my solution, you can tell how popular an out-of-stock product is.

After this long introduction, let’s go to the implementation with Adobe Analytics. This is probably the simplest part of it. My suggestion is to use a cookie and a list prop:

  • In the list prop, you set a delimited list of product IDs that are shown and are out of stock. You need a list prop as it is possible that one page contains many out-of-stock products.
  • In the cookie, you should store the list of product IDs that have already been reported during that visit.

I would like to show you some code, but since it entirely depends on each implementation, I will just show you the results. Surprisingly, the best example is a bra web page, as it has many different sizes:

[Image: out-of-stock-1]

In this example, there are four sizes out of stock, so the list prop will get four values (I used the pipe as the separator):

[Image: out-of-stock-2]

Since we do not want to keep reporting these values throughout the session, I keep the list in a cookie. Before setting the prop, a piece of JavaScript checks whether the value has already been reported.
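The exact code depends on each implementation, but the check can be sketched roughly like this (the function, prop and cookie names are hypothetical; the pipe is used as the separator):

```javascript
// Sketch: keep only the out-of-stock product IDs not yet reported
// during this visit; "cookieValue" is the current session cookie value.
function filterUnreported(outOfStockIds, cookieValue) {
  var reported = cookieValue ? cookieValue.split("|") : [];
  var toReport = [];
  for (var i = 0; i < outOfStockIds.length; i++) {
    if (reported.indexOf(outOfStockIds[i]) === -1) {
      toReport.push(outOfStockIds[i]);
      reported.push(outOfStockIds[i]);
    }
  }
  // prop: value for the list prop; cookie: updated cookie value
  return { prop: toReport.join("|"), cookie: reported.join("|") };
}
// Usage sketch: var r = filterUnreported(["75B", "80C"], readCookie("oos"));
// s.prop5 = r.prop; writeCookie("oos", r.cookie); // hypothetical helpers
```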

[Image: out-of-stock-3]

Finally, the report looks like:

[Image: out-of-stock-4]

In this case, I am only interested in instances, but visits and visitors are other valid metrics that can be useful. An alternative would be to remove the cookie and always report the products. In the end, it will depend on how you want to use those values.