Adobe Analytics – Cohort Analysis

I heard for the first time about using Cohort Analysis in Adobe Analytics during the talk “The Chef’s Table” from Ben Gaines at Adobe Summit EMEA in London last May (2016). Ben explained that Cohort Analysis was one of the cool things coming with Workspace.

I immediately thought that it’s an “ingredient” that should be present at any analyst’s table where meaningful insights are prepared.

What is Cohort Analysis?

Wikipedia says that “cohort analysis is a subset of behavioral analytics that takes the data from a given dataset”.

To put it clearly and adapt the definition to this context: a cohort is a group of users who performed a specific action during the same time period.

For example: users who came to our site from a PPC campaign and created an account during the first week of May. That’s the cohort, and cohort analysis will let us dig into, for example, the number of purchase orders generated by that cohort during the next ten weeks.

It will enable us to segment a bit further very easily, and learn some characteristics about the users who actually purchased (what do they have in common?) and those who didn’t (again, what do they have in common?).

Why is it important?

By looking at conversions and user behaviour over time, cohort analysis helps us understand the long-term relationship with our users or customers more easily.

We can use Cohort Analysis for business questions like:
– How effective was a PPC campaign aimed at new accounts, in terms of orders over time?
– What is the purchasing loyalty for a specific product category?
– Which segments (cohorts) generate more revenue over time?
– How much engagement, in terms of returning visits, does a specific action generate?

We can now easily evaluate the impact and effectiveness of specific actions (or campaigns etc.) on user engagement, conversion, content retention etc.

And last but not least, we can apply segments, so we can focus only on a section, referrer, device etc. that is key for us.

How can Cohort Analysis be done in Workspace?

Just go to Workspace and select a project. Then select “Visualizations” and drag “Cohort Table” into a “Freeform Table”.







A table will appear containing three elements:

  • Granularity

Day, Week, Month etc.

  • Inclusion Metric

The metric that places a user in a cohort.
For example, if I choose Create Account, only users who have created an account during the time range of the cohort analysis will be included in the initial cohorts.

  • Return Metric

The metric that indicates the user has been retained.
For example, if I choose Orders, only users who performed an order after the period in which they were added to a cohort will be represented as retained.

The inclusion and return metric can be the same, for example Orders, to watch purchasing loyalty.
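To make the three elements concrete, here is a minimal sketch (in Python, with made-up event data, not an Adobe API) of how a cohort table with weekly granularity, an inclusion metric and a return metric could be computed by hand:

```python
from collections import defaultdict

# Hypothetical raw events: (user_id, week_number, event_name)
events = [
    ("u1", 0, "create_account"), ("u1", 1, "order"), ("u1", 2, "order"),
    ("u2", 0, "create_account"), ("u2", 3, "order"),
    ("u3", 1, "create_account"), ("u3", 2, "order"),
]

INCLUSION, RETURN = "create_account", "order"  # inclusion / return metrics

# 1) Place each user in a cohort: the week of their first inclusion event
cohort = {}
for user, week, name in events:
    if name == INCLUSION:
        cohort[user] = min(week, cohort.get(user, week))

# 2) Count retained users per (cohort week, weeks since inclusion);
#    only events after the inclusion period count as "retained"
table = defaultdict(set)
for user, week, name in events:
    if name == RETURN and user in cohort and week > cohort[user]:
        table[(cohort[user], week - cohort[user])].add(user)

retained = {k: len(v) for k, v in table.items()}
print(retained)  # {(0, 1): 1, (0, 2): 1, (0, 3): 1, (1, 1): 1}
```

Each key is a cell in the cohort table: the first number is the cohort’s week, the second is how many weeks later the users returned.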

Cohort Analysis in place

I am changing the example to simple visits. Among the users who visited us on a specific day, how many of them returned during the following consecutive days? (Remember that this question could be about orders or any other metric/s.)

For the question above, I get the table below:








Each cell shows a brighter or softer green colour depending on whether its number is bigger or smaller. And we can be “curious” about both groups: what seems to be working well (to dig into the “why” and the “who”) and, equally, what seems not to be working as expected.

To know what the users behind a specific cell we want to analyse further have in common, just right-click on that cell and it will open the segment builder containing the cohort.









Save the segment and dig in, looking at the entry page, referrer, devices etc. of this specific cohort (in the example, one we are not engaging) to learn more about that specific type of user.


Cohort Analysis can help us analyse the long run. It’s very common to analyse how many sales the different campaigns have generated, and “that’s all”. But Cohort Analysis can help us learn what happened over time with the customers who bought for the first time from campaign A and the ones who did from campaign B, or who purchased product type A or B.

And that’s an improvement 🙂

And you? How do you watch your key segments over time?

Any idea? Any comment? Any complaint? 🙂 Leave your comment and I will get back to you. You can also contact me via email or through my Linkedin and Twitter profiles.


What to do when Google Analytics “no longer shows any data”

The problem: Google Analytics stopped collecting data ‘today’.

Usually, we are told ‘we haven’t changed anything, absolutely nothing!’ and/or ‘we can see that the tagging is working’.

In the image below we can see that Google Analytics shows no data after 8:00 am (of the day I took the screenshot).


The solution: find out what is going on, and how do we fix the problem?

The first thing to do is identify the type of problem:

  • A) Google Analytics is no longer collecting data.
  • B) Google Analytics is no longer showing data.

Obviously this is not common, but it has to be said that it is possible for Google Analytics to simply not show data. It has happened to me several times, because Google Analytics may keep collecting data (as usual) but stop showing the most recent data (today’s).

What can we do to find out which case we are in, problem A or B?

We need to apply a filter.

That is, we force Google Analytics “to think”. If Google Analytics has collected the data correctly, it will show it to us. In the image we can see that once the filter is applied, today’s data ‘suddenly’ appears, both for all traffic and for the segment we have just applied.


If our website is an e-commerce site, we can also click over there.

As we can see in the image above, the e-commerce data appears as well, again for ‘today’ (the day I took the screenshot) and the previous days. So, when we force Google Analytics to think, everything goes back to normal.

We can also look at the ‘real time’ data and check whether there is any data there or not. It is advisable to check again the day after the issue, or a few days later. When this problem happened to me, this little trick was enough to fix it.

However, if we apply the filter and the data never comes back, then we have a different and much more serious type of problem. We will need to look closely at the tagging and the data collection. I will write about that topic soon.

Finally, I will say that I am a francophile but not a francophone. Since I left Paris I still read Le Monde Diplomatique and, in general, do other things in French 🙂 But not necessarily writing. I hope there are no big grammar or vocabulary mistakes. Excuse my French, stp 🙂

So, any ideas? Any comments? Any complaints? 🙂 Leave your comment, or contact me via email or through my Linkedin and Twitter profiles.

That’s why I like juggling and you, as a Digital Analyst, should too

Even if it may sound a bit weird, analytics and circus have more in common than you would expect. At least in terms of mindset and the skills to develop, I really see some interesting similarities. The same thing probably happens with music. Unfortunately, I cannot play any instrument, but apparently I have the looks 🙂

I like juggling and analytics, and everything around them as well.

When I juggle, or when I track something complex for the first time, I can do things that at one point I thought I would never be able to do. It’s mainly about practice and eagerness to learn.

When I juggle (or do analytics) I can always, and it really is always, learn something new and face a new challenge: new problems, or situations I have not faced before. And it nurtures, among other things, the wish to exceed oneself, curiosity, and creativity in one’s approach. At some point, dealing with new things becomes part of everyday life.

There isn’t just one right way to juggle in terms of posture, the way you open your hands etc. Same thing in Analytics. There are different alternatives and approaches depending on what you are doing at a specific moment. I have learnt to be flexible depending on the context, with a focus on solving the problem/trick.

Juggling is a social thing. Most jugglers are pretty easy-going and sociable. Juggling lets you meet and interact with like-minded people, with whom you normally share more things.
Same thing in Analytics, assuming you regularly attend conferences or events for our community.

It happens to me all the time that the balls fall down. And it happens because I am constantly pushing myself, trying new stuff outside my comfort zone. But in the end I manage to learn the trick and do it correctly, each time more quickly. The willingness to progress and to do more complex things well is what defines your progress. And the same happens in Analytics.

Attending a juggling convention is fun. Attending Measure Bowling, Measure Camp or the ‘celebration’ at Adobe Summit is fun as well. Same thing with some MeetUps. I have always had a good time and also had the chance to meet new people, some of whom later became friends.

Any idea? Any comment? Any complaint? Leave your comment and I will get back to you. You can also contact me via email or through my Linkedin and Twitter profiles.

Anomaly Detection – Why did my conversions go up or down?

Anomaly Detection (not to be confused with spikes or dips) is part of the new features offered by Adobe Analytics and provides a statistical method for understanding how a metric has changed over a period of time.

The Spanish version of this post was published by Carlos Lebron on his blog Análisis Web, and you can read it by clicking on the link:

¿Por qué han subido o bajado mis conversiones?

Adobe Analytics – Anomaly Detection

What is Anomaly Detection?

Anomaly Detection is part of the new & cool stuff in Adobe Analytics and provides a statistical method to determine how a given metric has changed in relation to previous data.

Is an anomaly the same thing as a spike or a dip?

Not exactly 100%

A spike or a dip is what happens when a metric dramatically increases or decreases for a specific period of time. And it might be “created” or “expected”. For example, if we run an extra £10,000 PPC campaign, then it’s normal that we will see an increase in traffic (due to that campaign). Thus, if we get 20% more traffic and 17% more conversions, that’s not an anomaly, just a spike.

An anomaly is more about the way that metric has changed, and it has a statistical basis.

For example, if one day 23% of the orders come from a specific campaign that represents just 3% of the traffic, that’s an anomaly, which may or may not also be a spike.

It’s worth digging in, and the results are statistically significant (it’s highly recommended to tick the box “Show Only Statistically Significant Items”).

As we can see in the graph below, there is an anomaly on the 29th of June, but it’s not really a spike.
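The underlying idea is statistical: compare a value with what the training period would lead us to expect. A minimal sketch of that logic, using a plain z-score on made-up daily counts (not Adobe’s actual algorithm), could look like this:

```python
import statistics

# Hypothetical daily orders over a training period, then today's value
training = [100, 104, 98, 101, 99, 103, 97, 102, 100, 96]
today = 130

mean = statistics.mean(training)      # expected value: 100.0
stdev = statistics.stdev(training)    # typical day-to-day variation
z = (today - mean) / stdev            # standard deviations away from expected

# Flag an anomaly when the value is far outside the expected range
is_anomaly = abs(z) > 3
print(round(z, 1), is_anomaly)  # → 11.6 True
```

Today’s 130 orders are a spike in absolute terms, but what makes them an anomaly is how far outside the historical variation they fall.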

How can we get started with Anomaly Detection?

1- Anomaly Detection can be found within “Reports”, and then Site Metrics


2- Select the metric/s & the period

Just click on “Edit Metrics” and then choose a “Training Period”

  • Metrics

You can select one or more metrics (so you can see the relationship between, for example, two metrics).
You can select every Success Event, and also the Standard Events related to eCommerce (cart additions, views, removals, orders etc.)

  • Period





The three training periods available are 30, 60 and 90 days. Note that a bigger training period may reduce the size of an anomaly.

3- Dig into a specific anomaly

Once you select the metric and time frame, you will see a graph showing the evolution and pointing out the anomalies.
As soon as we click on an anomaly, we see below the graph the actual values and what would be reasonable for that metric during that period of time. Additionally, we also see its impact as a percentage (in green if it’s positive and in red if it’s negative).

Then we should click on “Analyze” (above the graph) to see the “contribution analysis”.

4- Check the possible reasons

Adobe Analytics suggests a range of “items” (which can be products, campaigns etc.) in which an anomaly has been spotted.

Each possibility has a contribution score that takes values from 1 to -1:
1: complete association with a spike, or complete inverse association with a dip
0: no association or contribution
-1: complete association with a dip, or complete inverse association with a spike.
In the image we can see, in the second row, that 1% of x has generated 23 of y…

5- Create a segment and inspect it


Just click on one of the items (rows) and a button will appear to create a segment containing that item (product, campaign, referrer etc.).

Next steps? Save the segment and apply it by referrer, device etc. in order to dig in and find out what’s going on.

As you can see, it’s very fast to identify what’s “unusual” and the segments we need for our analysis, and it will save us loads of time.

Any idea? Any comment? Any complaint? Leave your comment and I will get back to you. You can also contact me via email or through my Linkedin and Twitter profiles.

Why you need to attend Measure Bowling or how to organize your own

Measure Bowling will be back on the 2nd of June. For the 7th time, Digital Analysts from about 25 European cities will go bowling together.

Next Measure Bowling












For those who don’t know Measure Bowling or don’t work in Analytics, it might seem like a weird concept. I have seen a few poker faces on friends, or even my boss, when I have tried to explain it or convince them to attend. But once they’ve experienced it, they agree that it’s pretty cool, and ask me why it doesn’t happen more often.

Why should you attend Measure Bowling?

Well, let’s admit that our favourite part of a conference is at the end, when you have the chance to talk with others in a relaxed environment, ask questions of someone in particular, and of course socialize.

Measure Bowling goes straight to that part, since there are no talks. We play one or two bowling games, taking part in a local competition, but also in an international one between the different cities. In most cases, it also includes dinner (which is highly appreciated by the Spaniards) and some beers.

There is also a t-shirt competition, normally themed around nerd t-shirts. Unfortunately, I don’t have the pic, but the best one I have ever seen said “NO, I will not fix your computer”. A few of us also wear nerd socks in the evening (yes, there is a pic for this). No white socks please, though! There are also prizes (books, toques, Amazon vouchers etc.).

nerd socks






If you are reading this, I will tell you that you should definitely include Measure Bowling in your agenda, and attend the one happening in your city. Or…you could organize your own!

To keep reading, please click on the link below to the original post.

How to translate a tagging plan into Signal

This is the second post in the series about the Signal tag manager, formerly known as BrightTag.

In any analytics implementation, we start by defining the variables that will be fired on every page load and on every interaction. And in Enhanced Ecommerce too, if we are using Google Analytics.

The first thing is to define the variables and the possible values they can take.
For example, let’s say that on every page load we want to collect:
– Page name
– Primary category
– Sub category



Then we need to translate the variables from the Excel sheet we have built into the tools. If we use Adobe Analytics, we will have to create those same variables as eVars or Props, or both, depending on our needs.

And in Signal, what we have to do is:
● Create a Data Element for each of the defined variables
● Make sure that each Data Element has the same number of inputs as outputs
● Make sure that, within each Data Element, each input matches its output

1) Create a Data Element for each of the defined variables.

Just go to the Data Dictionary, click “Add Data Element”, enter the name and the description, and save by clicking “Create”.


Tip: I recommend that names follow a coherent and consistent naming convention, and that you include a description, both for ourselves later on and for other people who may use Signal in the future (whether they are analysts or not).

2) Make sure that each Data Element has the same number of inputs as outputs





Otherwise there is a problem, which can be of two types:

More inputs than outputs > We are duplicating data
We have at least 2 inputs pointing to the same output.

-Problem: every time those inputs fire, we are sending data to Adobe Analytics (or to the tool in question). That is, in Adobe Analytics we will not be able to tell which input fired and caused that data to be collected.

-Consequence: if we don’t know what happened for that data to be collected, then that data in Adobe Analytics is of very low quality, or simply useless.

More outputs than inputs > We are failing to collect data (that we believe we are collecting)
We have at least one input that doesn’t point to any output.

-Problem: every time we fire inputs without outputs, those inputs are not sent anywhere, since Signal is not being told to send them.

-Consequence: data we are supposedly collecting will never reach Adobe Analytics, and we will only realize it when we want to analyze it (by which point it will be too late).

3) Make sure that, within each Data Element, each input matches its output

It is not enough to see that each Data Element has the same number of inputs as outputs. We could have inputs without outputs, or outputs without inputs, with both (inputs & outputs) still adding up to the same number. So we have to check that every input & output pair matches.

For the Data Element Page Subcategory we have the input Home:







This corresponds to the output Mobile App: Page Load: Home Page:








That is, when the home page loads, Signal knows that it has to:

● Collect that page load
Because we have created a “home” input
● Collect the “Page Sub Category” Data Element
Because it is included in the home input (first screenshot)
● Send that data to Adobe Analytics
Because we have created a tag with that vendor assigned to the home input (which we have named “mobile app page load home page”) (second screenshot)
And so on with the rest of the input & output pairs.
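The three checks above can also be automated. Here is a minimal sketch (in Python, with hypothetical input and output names, not Signal’s actual export format or API) that flags inputs with no output and outputs with no input:

```python
# Hypothetical export of a Data Element's bindings: (input, output)
bindings = [
    ("home", "Mobile App: Page Load: Home Page"),
    ("product pages", "Mobile App: Page Load: Product Page"),
    ("basket", None),  # input with no output: data is never sent anywhere
]

# Hypothetical list of all outputs defined in the container
outputs_defined = {
    "Mobile App: Page Load: Home Page",
    "Mobile App: Page Load: Product Page",
    "Mobile App: Page Load: Basket",  # output with no input: never fires
}

inputs_without_output = [i for i, o in bindings if o is None]
used_outputs = {o for _, o in bindings if o is not None}
outputs_without_input = sorted(outputs_defined - used_outputs)

print(inputs_without_output)   # ['basket']
print(outputs_without_input)   # ['Mobile App: Page Load: Basket']
```

The first list is the “we are failing to collect data we think we are collecting” case; the second is a tag that will never send anything because nothing triggers it.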

You can read the introductory post on Signal by clicking here.

Any idea? Any comment? Any complaint? 😉 Leave your comment and I will get back to you. If something is not clear, or you have a question and think I can help, you can also send me an email or contact me through Linkedin or Twitter.


Introduction to the Signal Tag Manager (BrightTag)

To start understanding how the Signal tag manager (formerly known as BrightTag) works, we have to start with three basic concepts: data elements, inputs and outputs.

What are they?

  • Data Elements are the variables we want to collect
    The collection of Data Elements is called the Data Dictionary
  • Inputs are the things that happen that we want to know about, and therefore collect
    Whether page loads or interactions performed by the user.
    Whether in a web browser or in an app.
  • Outputs are the tags with which we send data to other tools
    Such as Google or Adobe Analytics, for example

How do they work?

1) Data Elements

Data elements are the pieces of data we want to capture from our website or app and send to a digital analytics tool. They are what we would call “variables” in other tools.

The way to define them and create a tagging map (tagging plan) is through what Signal calls Data Binding, which is the way of telling Signal: on this input, you have to collect these data elements (variables).

For example, on every page load (which is an input), we may want to collect (among other things):
– The page name
– The primary category
– The sub category.

Then, within the Data Dictionary, we would create a Data Element for each of these variables.


2) Inputs

Inputs are what we want to know / collect about users’ navigation and behaviour: the pages that are loaded and the relevant interactions performed during the visit.

A practical explanation with an example:
– I want to know that the product pages are visited
> I create a “product pages” input
– I want the product name, category and subcategory to be collected when the product pages load
> I create a Data Element for each of these variables (previous example) and include them in the product pages input

The same goes for interactions: if we want to know whether users create accounts and/or log in (and that involves two steps), we would create an input for each of these actions.









3) Outputs

With what we have explained so far, Signal already knows which actions we want to collect (product pages loading, a user creating an account etc.) and what we want to know about those actions (the product price, name, category etc.).

The third step is to send that data to whichever Digital Analytics tool we want, such as Adobe Analytics or Google Analytics (in the example, the vendor in question is Adobe). To do this, we have to create an output for each input.
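Putting the three concepts together, the data binding can be thought of as a simple mapping from inputs to data elements and outputs. Here is an illustrative sketch (hypothetical names and structure, not Signal’s actual export format):

```python
# Inputs, each bound to the Data Elements it should collect
# and the outputs (tags) that send the data to a vendor.
tagging_plan = {
    "product pages": {
        "data_elements": ["page name", "primary category", "sub category"],
        "outputs": ["Adobe Analytics: Page Load: Product Page"],
    },
    "create account": {
        "data_elements": ["page name"],
        "outputs": ["Adobe Analytics: Action: Create Account"],
    },
}

def fire(input_name, values):
    """Simulate an input firing: collect its data elements and 'send' them."""
    binding = tagging_plan[input_name]
    payload = {k: values.get(k) for k in binding["data_elements"]}
    return [(output, payload) for output in binding["outputs"]]

# A product page loads: its data elements are collected and routed
# to the output bound to the "product pages" input.
sent = fire("product pages", {
    "page name": "red shoes",
    "primary category": "shoes",
    "sub category": "trainers",
})
print(sent)
```

The input decides when something is collected, the data elements decide what is collected, and the output decides where it is sent.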


You can read the next post, about how to translate a tagging plan into Signal, by clicking here.

Any idea? Any comment? Any complaint? 😉 Leave your comment and I will get back to you. If something is not clear, or you have a question and think I can help, you can also send me an email or contact me through Linkedin or Twitter.