Author Archive: Geekeando en Analytics

First steps with the tagging solution Signal (formerly BrightTag)

There are three basic concepts that you need to understand to start working with Signal: data elements, inputs and outputs.

  • The Data Elements are the variables we want to collect.
    The full set of Data Elements forms the Data Dictionary.
  • The Inputs are what we want to know about the user and their visit in order to answer our business questions (in plain English).
    They can be based on page loads or user interactions, and they can apply to a web browser or to an app.
  • The Outputs are the tags we use to send data to digital analytics tools, like Google Analytics or Adobe Analytics.

Let’s see how they work 🙂

1) Data Elements

The data elements are the variables: the data we want to capture from our website or app and send to our analytics tool. They are simply what other tools call “variables”.

Defining them and creating the tagging map is what Signal calls Data Binding.
That’s how we tell Signal: for this input, you have to collect these data elements (variables).

For example, for every page load (that is an input) we want to collect (among other things):
– The page name.
– The primary page category.
– The page subcategory.

So, in the Data Dictionary, we have to create a Data Element for each of these variables.
[Screenshot 1]

2) Inputs

The inputs are what we want to know about the navigation and behaviour of our users: the pages being loaded or the key interactions during the visit.

For example:
– I want to know when product pages are loaded
> I create an input “product pages”

– I want to collect the product name, category and subcategory as those pages load
> I create a Data Element for each (see the previous example) and include them within the input “product pages”

We should create an input for each type of page load or user interaction.
That is, we need one input for product pages, another for the home page, another for the checkout page and so on. The reason is that some data elements are common to every page type (e.g. the page name), while others apply only to some page types but not to all of them (e.g. the product name is something we want to collect on product, category and checkout pages, but not on the homepage…)
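To make the binding idea concrete, here is a minimal sketch in plain Python (purely illustrative: Signal’s real data binding is configured through its UI, not in code, and the element names are hypothetical):

```python
# Illustrative sketch only: Signal's data binding is defined in its UI, not in code.
# The point is simply that each input (a page type or an interaction) is bound to
# its own set of data elements, some shared by every page type and some specific.

common_data_elements = ["page_name"]  # collected on every page load

inputs = {
    "home page":     common_data_elements,
    "product pages": common_data_elements + ["product_name",
                                             "product_category",
                                             "product_subcategory"],
    "checkout page": common_data_elements + ["product_name"],
}

for input_name, data_elements in inputs.items():
    print(f"Input '{input_name}' collects: {', '.join(data_elements)}")
```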

To track user interactions, we follow the same process as for page loads. For example, if we want to know whether users sign in or sign up (and both have two steps: the first click and the actual success), we would create an input for each of those interactions.

[Screenshot 2]

3) Outputs

With what has been set up so far, Signal already knows which actions we want to collect (product page loads, sign-ups etc.) and what we want to know about them (the product name, the category etc.)

The third step is actually sending the data to the analytics tool we use. To do so, we need to create an output for each input, specifying the vendor (in this example, Adobe Analytics).

[Screenshot 3]

 

Last idea

There are two steps:
– Signal getting data from the browser or the app (data elements & inputs)
– Signal sending the data to the analytics tool (outputs)

Either step can be the reason for missing data in the analytics tool, so we need to keep this in mind in case we detect data collection issues during the QA process.

Also, remember that each data element needs its corresponding inputs and outputs. I will explain these three concepts and other Signal functionalities in more detail in another post.

Any idea? Any comment? Any complaint? Leave your comment and I will get back to you. You can also contact me via email at geekeandoenanalytics@gmail.com or through my LinkedIn and Twitter profiles.

 

Data QA with Charles Debugging Proxy (basic level)

Digital Analytics is intended to transform data into actionable insights. Sure, you have heard it a million times. However, if the data collection has not been audited beforehand (so we can validate it), we may be making decisions based on data that is simply wrong.

This first and necessary step is forgotten in many cases, and it is one of my geeky obsessions and priorities in every analytics project. It’s something we should use to make sure we properly collect the data we want. But it can also be used to check what other companies track, e.g. the side filters a user may apply when looking for some content.

1 – Tag Assistant: quickly and easily check whether there’s something wrong with your Google Analytics implementation

You just need to install the Google Tag Assistant plugin for Chrome (developed by Google itself) and click on the blue label that appears in the corner of the browser. This label will be red if any error is detected, green if everything is OK and blue if there are things that can be improved.

Let’s see an example:

[Screenshot: Google Tag Assistant result]

Oh, it seems that the GA code has been placed outside the <head> tag. This means that the “tracking beacon” might not send data before the user leaves the page, with an obvious negative impact on the quality of the data we are collecting. To fix it, you just need to click on ‘more info’ and follow the recommendations from Google Analytics Support. In this case, the solution is simply to paste the GA code inside the <head> tag.

Needless to say, the UA code shown by Tag Assistant and the one in your GA property should be the same.

This little check is a basic -but important- step towards understanding what’s wrong with our data collection and getting a first idea of what we need to do to fix it.

2 – Debugging tools to check what we are actually sending to Google Analytics

My favorite tools for performing a basic analysis are Fiddler and Charles; I will use Charles in this example. You just need to download it (Fiddler is free and Charles has a free trial), open the browser and visit the website you want to audit.

Then, in Charles, you need to look for “google-analytics.com” in the panel on the left (Structure or Sequence view) and click on “Request” on the right.

Let’s see a basic example with a downloads marketplace called Fileplaza.

[Screenshots: Charles showing the Google Analytics pageview request]

What can we see in the screenshots above?

  • What is being collected?
    A pageview
  • What page is being collected?
    http://www.fileplaza.com (I have just visited the homepage)
    The URI (URL without domain) being collected in Google Analytics will be: /
  • What’s the title of the page we are collecting?
    Free Software Download and Tech News – File Plaza
  • What’s the referrer that is sending the visit to fileplaza.com?
    google.co.uk (I am in the UK and searched for “Fileplaza.com” on Google)
  • What’s the Google Analytics UA code of the property in which FilePlaza is collecting its data?
    Actually, there are two: UA-48223160-1 & UA-23547102-20 (a different UA code in each screenshot)
  • Why are there two screenshots showing the same data but with different UA codes?
    Because they are sending traffic to two GA properties.
  • Are they using Google Tag Manager?
    No, otherwise we would see a request with a GTM-XXXXXX container ID

Now we know what is being sent. But is it correct?

This is quite simple to answer. What we see should be the same as what we want to collect with GA.
Otherwise, there’s something wrong. For example, if I perform a specific interaction that should be tracked with an event, but Charles doesn’t show that event, it means that the interaction is not firing the event. And that’s something we should fix.
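As a side note, the requests Charles shows are just HTTP hits to google-analytics.com/collect, so you can decode their query parameters yourself. Below is a small sketch using standard Measurement Protocol parameter names (t, tid, dl, dt, dr, ec, ea, el); the example URLs are reconstructed from the screenshots and are only illustrative:

```python
from urllib.parse import urlparse, parse_qs

def decode_ga_hit(hit_url: str) -> dict:
    """Flatten the query string of a Google Analytics hit into a simple dict."""
    params = parse_qs(urlparse(hit_url).query)
    return {key: values[0] for key, values in params.items()}

# A pageview hit similar to the one in the screenshots above.
pageview = decode_ga_hit(
    "https://www.google-analytics.com/collect?v=1&t=pageview"
    "&tid=UA-48223160-1"
    "&dl=http%3A%2F%2Fwww.fileplaza.com%2F"
    "&dt=Free%20Software%20Download%20and%20Tech%20News%20-%20File%20Plaza"
    "&dr=https%3A%2F%2Fwww.google.co.uk%2F"
)
print(pageview["t"], pageview["dl"], pageview["tid"])  # hit type, page URL, property

# An event hit like the one fired later by the download button.
event = decode_ga_hit(
    "https://www.google-analytics.com/collect?v=1&t=event"
    "&tid=UA-48223160-1&ec=descarga&ea=final&el=Kid%20Pix%20Deluxe"
)
print(event["ec"], event["ea"], event["el"])  # event category, action, label
```

If an interaction we expect to track doesn’t produce a hit like these in Charles, that’s exactly the kind of gap this QA step is meant to catch.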

Let’s navigate a bit more …

1) I visit a product page: /home_education/teaching_tools/kid_pix_deluxe/

[Screenshots: Charles showing the product page pageview request]

In the screenshots above, I see:

  • A page view is being collected
  • The page being collected has changed
    It is now http://www.fileplaza.com/home_education/teaching_tools/kid_pix_deluxe/
    So the URI in Google Analytics will be /home_education/teaching_tools/kid_pix_deluxe/
  • The data is being sent again to the two properties.

2) I click on the ‘download’ button, which leads me to a download page:  windows/home___education/teaching_tools/download/kid_pix_deluxe/

3) I click to download a piece of software:

[Screenshots: Charles showing the download event request]

Oh! Now there’s something new… It looks like my click to download has triggered an event (first red arrow) that is sending the data below to the two GA properties.

1- Event category (ec): descarga
2- Event action (ea): final - I assume this is the button type, since it’s the ‘final’ button to download
3- Event label (el): Kid Pix Deluxe - the name of the software I have just downloaded

As Fileplaza is a downloads marketplace, the download buttons are key. What Charles shows is that they use events to measure the success of this specific user interaction.

Let’s summarise what Charles is showing:

  1. Fileplaza measures the success of its downloads using events.
    Do they do it correctly? Yes
  2. The events contain relevant data about the interaction: the action performed by the user (download), the button clicked and the name of the software.
    Do they do it correctly? Yes
  3. For some reason, FilePlaza wants to collect the data in two different GA properties.
    Do they do it correctly? Yes

I like Charles, but which tool you use is the least important thing here. I am quite tool-agnostic and there are others that are also very good, like WASP (recommended for inspecting a data layer), Google Analytics Debugger or DataSlayer (recommended for ecommerce). Normally I work from the browser console, and you can install plugins for most tools, like Google Analytics or Adobe Analytics.

Any idea? Any comment? Any complaint? 🙂 Leave your comment and I will get back to you. You can also contact me via email at geekeandoenanalytics@gmail.com or through my LinkedIn and Twitter profiles.

 

Cohort analysis with Adobe Analytics (Spanish version)

Cohort analysis brings a long-term perspective to data analysis, rather than just sticking to the short term with no strategic vision.

In the post I explain why this approach is important and how to carry it out with Adobe Analytics. The original, in English, is published on this blog. For the Spanish version, Carlos Lebron has once again kindly lent me his blog www.AnalísisWeb.es

You can read it by clicking on the link Análisis de Cohortes con Adobe Analytics

Adobe Analytics – Cohort Analysis

I heard about using Cohort Analysis in Adobe Analytics for the first time during the talk “The Chef’s Table” by Ben Gaines at Adobe Summit EMEA in London last May (2016). Ben explained that Cohort Analysis was one of the cool things coming with Workspace.

I immediately thought that it’s an “ingredient” that should be present on any analyst’s table where meaningful insights are being prepared.

What is Cohort Analysis?

Wikipedia says that “Cohort analysis is a subset of Behavioral Analytics that takes the data from a given dataset”.

To put it clearly and adapt the definition to this context, I will say that a cohort is a group of users who performed a specific action during the same period of time.

For example: users who came to our site from a PPC campaign and created an account during the first week of May. That’s the cohort, and the cohort analysis will let us dig into, for instance, the number of purchase orders generated by that cohort during the next ten weeks.

It will enable us to segment a bit further very easily, and learn some characteristics about the users who actually purchased (what do they have in common?) and those who didn’t (again, what do they have in common?).

Why is it important?

By looking at conversions and user behaviour over time, cohort analysis helps us understand the long-term relationship with our users or customers more easily.

We can use Cohort Analysis for business questions like:
– How effective was a PPC campaign aimed at new accounts, in terms of orders over time?
– What is the purchasing loyalty like for a specific product category?
– Which segments (cohorts) generate more revenue over time?
– How much engagement, in terms of returning visits, does a specific action generate?

We can now easily evaluate the impact and effectiveness of specific actions (or campaigns etc.) on user engagement, conversion, content retention etc.

And last but not least, we can apply segments, so we can focus only on a section, referrer, device etc. that is key for us.

How can Cohort Analysis be done in Workspace?

Just go to Workspace and select a project. Then select “Visualizations” and drag “Cohort Table” into a “Freeform Table”.

A table will appear containing three elements:

  • Granularity

Day, Week, Month etc.

  • Inclusion Metric

The metric that places a user in a cohort.
For example, if I choose Create Account, only users who have created an account during the time range of the cohort analysis will be included in the initial cohorts.

  • Return Metric

The metric that indicates the user has been retained.
For example, if I choose Orders, only users who performed an order after the period in which they were added to a cohort will be represented as retained.

The inclusion metric and the return metric can be the same, for example Orders, to watch purchasing loyalty.
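Workspace builds the cohort table for you, but a toy sketch can make the role of the two metrics clearer. The following plain Python (with made-up data; this is not how Adobe Analytics computes it internally) groups users into cohorts by the week of their inclusion event and counts how many of them trigger the return metric in the following weeks:

```python
# Toy illustration of inclusion vs. return metrics, with made-up data.
from collections import defaultdict

# (user, week, metric) events
events = [
    ("alice", 1, "create_account"), ("alice", 2, "order"), ("alice", 4, "order"),
    ("bob",   1, "create_account"), ("bob",   3, "order"),
    ("carol", 2, "create_account"), ("carol", 3, "order"), ("carol", 4, "order"),
]

INCLUSION, RETURN = "create_account", "order"  # both could be "order" for purchasing loyalty

# 1) Build the cohorts: users grouped by the week of their inclusion event.
cohorts = defaultdict(set)
for user, week, metric in events:
    if metric == INCLUSION:
        cohorts[week].add(user)

# 2) For each cohort, count how many of its users show the return metric N weeks later.
for cohort_week in sorted(cohorts):
    members = cohorts[cohort_week]
    row = []
    for offset in (1, 2, 3):
        returned = {user for user, week, metric in events
                    if metric == RETURN and week == cohort_week + offset and user in members}
        row.append(len(returned))
    print(f"Cohort week {cohort_week} ({len(members)} users): "
          f"+1w={row[0]}, +2w={row[1]}, +3w={row[2]}")
```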


Cohort Analysis in action

I am changing the example to simple visits. Among the users who visited us on a specific day, how many of them returned during the following days? (Remember that this question could equally be about orders or any other metric.)

For the question above, I have the table below

[Screenshot: the resulting cohort table]

Each cell, depending on whether its number is bigger or smaller, shows a brighter or softer green colour. And we can be “curious” about both groups: what seems to be working well (to dig into the “why” and the “who”) and, likewise, what seems not to be working as expected.

To find out what the users behind a specific cell have in common, just right-click on that cell and the segment manager tool will open, containing the cohort.

Save the segment and dig deeper, looking at the entry page, referrer, devices etc. of this specific cohort (which, in this example, we are failing to engage) to learn more about that specific type of user.

Conclusion

Cohort Analysis can help us analyse the long run. It’s very common to analyse how many sales the different campaigns have generated and “that’s all”. But Cohort Analysis can help us understand what happened over time with the customers who bought for the first time from campaign A versus campaign B, or who purchased product type A versus product type B.

And that’s an improvement 🙂

And you? How do you watch your key segments over time?

Any idea? Any comment? Any complaint? 🙂 Leave your comment and I will get back to you. You can also contact me via email at geekeandoenanalytics@gmail.com or through my LinkedIn and Twitter profiles.

 

What to do when Google Analytics “no longer shows any data”

The problem: Google Analytics stopped collecting data ‘today’.

Usually, we are told ‘we haven’t changed anything, absolutely nothing!’ and/or ‘we can see that the tagging works’.

In the image below, we can see that Google Analytics shows no data after 8:00 am (on the day I took the screenshot).

[Screenshot 1]

The solution: find out what is going on and how to fix the problem.

The first thing to do is to identify which kind of problem we have:

  • A) Google Analytics is no longer collecting data.
    or
  • B) Google Analytics is no longer showing data.

Obviously, this is not common, but it has to be said that it is possible that Google Analytics simply isn’t showing the data. It has happened to me several times: Google Analytics may keep collecting data as usual but no longer display the most recent data (today’s).

What can we do to find out which case we are in, problem A or B?

We need to apply a segment.

In other words, we force Google Analytics “to think”. If Google Analytics has collected the data correctly, it will show it to us. In the image we can see that, once the segment is applied, today’s data ‘suddenly’ appears, both for all traffic and for the segment we have just applied.

[Screenshot 2]

If our website is an e-commerce site, we can also click there.

[Screenshot 3]

As we can see in the image above, the e-commerce data appears as well, again for ‘today’ (the day I took the screenshot) and for the previous days. So, when we force Google Analytics to think, everything falls back into place.

We can also look at the ‘real time’ reports and check whether there is data there or not. It is advisable to check again the day after the issue, or a few days later. When this problem happened to me, this little trick was enough to sort it out.

However, if we apply the segment and the data never comes back, then we have a different and much more serious kind of problem. We will need to look closely at the tagging and the data collection. I will talk about this topic soon.

To finish, I should say that I am a francophile but not a francophone. Since I left Paris I have kept reading Le Monde Diplomatique and, in general, doing other things in French 🙂 but not necessarily writing. I hope there aren’t too many big grammar or vocabulary mistakes. Excuse my French, please 🙂

Any idea? Any comment? Any complaint? 🙂 Leave your comment, or contact me via email at geekeandoenanalytics@gmail.com or through my LinkedIn and Twitter profiles.

That’s why I like juggling, and why you, as a Digital Analyst, should too

Even if it may sound a bit weird, analytics and circus have more in common than you would expect. At least in terms of mindset and skills to develop, I really see some interesting similarities. The same probably happens with music. Unfortunately I cannot play any instrument, but apparently I have the looks 🙂

I like juggling and analytics, and everything around them as well.

When I juggle, or when I track something complex for the first time, I can do things that at some point in the past I thought I would never be able to do. It’s mainly about practice and eagerness to learn.

When I juggle (or do analytics) I can always -and it really is always- learn something new and face a new challenge: new problems or situations I have not faced before. It nurtures -among other things- the wish to exceed oneself, curiosity and creativity in one’s approach. At some point, dealing with new things becomes part of everyday life.

There isn’t just one right way to juggle in terms of posture, the way you open your hands etc. Same thing in Analytics: there are different alternatives and approaches depending on what you are doing at a specific moment. I have learnt to be flexible depending on the context, with a focus on solving the problem/trick.

Juggling is a social thing. Most jugglers are pretty easygoing and sociable. Juggling lets you meet and interact with like-minded people, with whom you normally share more things.
Same thing in Analytics, assuming you regularly attend conferences or community events.

The balls fall down on me all the time. That happens because I am constantly pushing myself, trying new stuff outside my comfort zone. But in the end I manage to learn and do the trick correctly, and each time it takes less time. The willingness to progress and to do more complex things well is what defines your progress. And the same happens in Analytics.

Attending a juggling convention is fun. Attending MeasureBowling, MeasureCamp or the ‘celebration’ at Adobe Summit is fun as well. Same thing with some meetups. I have always had a good time and also had the chance to meet new people, some of whom later became friends.

Any idea? Any comment? Any complaint? Leave your comment and I will get back to you. You can also contact me via email at geekeandoenanalytics@gmail.com or through my LinkedIn and Twitter profiles.

Anomaly Detection – Why have my conversions gone up or down? (Spanish version)

Anomaly Detection (not to be confused with simple spikes or dips) is part of the new functionality offered by Adobe Analytics and provides a statistical method for assessing how a metric has changed over a period of time.

The Spanish version of the post has been published by Carlos Lebron on his blog Analísis Web, and you can read it by clicking on the link

¿Por qué han subido o bajado mis conversiones?

Adobe Analytics – Anomaly Detection

What’s Anomaly Detection?

Anomaly Detection is part of the new and cool stuff from Adobe Analytics and “provides a statistical method to determine how a given metric has changed in relation to previous data”.

Is an anomaly the same thing as a spike or a dip?

Not exactly 100%

A spike or a dip is what happens when a metric dramatically increases or decreases during a specific period of time, and it might be “created” or “expected”. For example, if we run an extra £10,000 PPC campaign, it’s normal to see an increase in traffic (due to that campaign). Thus, if we have 20% more traffic and 17% more conversions, that’s not an anomaly, just a spike.

An anomaly is more about the way that metric has changed, and it has a statistical basis.

For example, if one day 23% of the orders come from a specific campaign that represents just 3% of the traffic, that’s an anomaly, and it may or may not also be a spike.

It’s worth digging in when the results are statistically significant (it’s highly recommended to tick the box “Show Only Statistically Significant Items”).

In the graph below we can see that there is an anomaly on the 29th of June, but it’s not really a spike.
[Screenshot 1]

How can we get started with Anomaly Detection?

1- Anomaly Detection can be found within “Reports”, and then Site Metrics

[Screenshot 2]

2- Select the metric/s & the period

Just click on “Edit Metrics” and then choose a “Training Period”

  • Metrics

You can select one or more metrics (so you can see the relationship between, for example, two metrics).
You can select any Success Event, and also the standard eCommerce events (cart additions, views, removals, orders etc.)

  • Period

[Screenshot 3]

The three training periods available are 30, 60 and 90 days. Note that a bigger training period may reduce the size of an anomaly.
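To build intuition about what the training period does, here is a deliberately naive sketch (made-up numbers and a simple two-standard-deviation band; Adobe’s actual algorithm is more sophisticated than this):

```python
# Naive illustration only, not Adobe's algorithm: a day is flagged as an anomaly
# when it falls outside the expected band learned from the training period.
from statistics import mean, stdev

# Hypothetical daily orders for a 30-day training period, then a few new days.
training = [102, 98, 110, 95, 105, 99, 101, 97, 108, 103,
            100, 96, 104, 99, 102, 107, 98, 101, 95, 106,
            100, 103, 97, 109, 102, 98, 105, 101, 99, 104]
new_days = {"2016-06-27": 101, "2016-06-28": 97, "2016-06-29": 138}

mu, sigma = mean(training), stdev(training)
lower, upper = mu - 2 * sigma, mu + 2 * sigma  # the "expected" band

for day, value in new_days.items():
    status = "anomaly" if not (lower <= value <= upper) else "ok"
    print(f"{day}: {value} orders (expected about {mu:.0f}) -> {status}")
```

A longer training period simply gives the expected band more history to learn from.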

3- Dig into a specific anomaly

Once you select the metric/s and the period, you will see a graph showing the evolution and pointing out the anomalies.
[Screenshot 4]
As soon as we click on an anomaly, we see, below the graph, the actual values and what would be expected for that metric during that period of time. Additionally, we also see its impact as a percentage (in green if it’s positive and in red if it’s negative).

Then we should click on “Analyze” (above the graph) to see the “contribution analysis”.

4- Check the possible reasons

[Screenshot 5]
Adobe Analytics suggests a range of “items” (which can be products, campaigns etc.) in which an anomaly has been spotted.

Each possibility has a contribution score that takes values from 1 to -1:
1: complete association with a spike, or complete inverse association with a dip
0: no association or contribution
-1: complete association with a dip, or complete inverse association with a spike
In the image we can see, in the second row, that 1% of x has generated 23 of y…

5- Create a segment and inspect it

[Screenshot 6]

Just click on one of the items (rows) and a button to create a segment containing that item (product, campaign, referrer etc.) will appear.

Next steps? Save the segment and break it down by referrer, device etc. in order to dig deeper and find out what’s going on.

As you can see, it’s very fast to identify what’s “unusual” and the segments we need for our analysis, and it will save us loads of time.

Any idea? Any comment? Any complaint? Leave your comment and I will get back to you. You can also contact me via email at geekeandoenanalytics@gmail.com or through my LinkedIn and Twitter profiles.