QQL Background
Written by Lukas Nordbeck
Updated over 4 years ago

History

Back in the day, we only had pre-made metrics. One of the most-used metrics at the time was the Interaction Rate for Facebook, which is basically the sum of all engagement numbers divided by the fan count. The idea was very simple: by normalizing with the fan count, even small and large profiles can be compared in terms of engagement. While the default definition of our Interaction Rate is still useful, clients were asking if there was a way to change the default formula to better match their own needs. Let’s take a look at the default definition we offered back then:
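Roughly speaking (this is a simplified sketch, not the exact formula we used):

Interaction Rate = (sum of all interactions, e.g. likes + comments + shares) / fans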

Now, depending on your company’s social media strategy, you might be more interested in comments than in likes, or likes might not matter much at all for your definition of success. Requests of this kind came in more and more often over time, and this is where we decided to completely rethink how we offer data to our clients. It became clear to us that we needed to build flexibility into the system rather than adding hundreds of pre-made metrics to match every client’s needs.

This is where QQL was born. The idea was fairly simple: instead of delivering pre-made metrics with no flexibility, why not offer our data sets in the form of SQL tables and let our clients write their own SQL queries on top? The combination of pre-made SQL tables plus a custom query within our universe is what we called QQL, the quintly Query Language.
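To make the idea concrete, here is a hedged sketch of such a custom query. The data source name facebookFanpageMetrics and its columns (likes, comments, shares, fans) are hypothetical and only illustrate how a client could weight comments more heavily than likes:

-- Hypothetical data source and columns, purely for illustration
SELECT time,
       (1.0 * likes + 4.0 * comments + 2.0 * shares) / fans AS customInteractionRate
FROM facebookFanpageMetrics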

How does it work?

If you look at the whole QQL stack, it consists of three layers: your metrics and their queries at the top, the data sources in the middle, and the raw-level data sets at the bottom.

Let’s go bottom up. The raw-level data sets live within our actual databases. We use different database solutions to match the specifics of each data set best. This layer is invisible to you as a user. The next layer, named data sources, is the one you use to fetch data for your metrics. It is a simplified view of our actual databases, organized as a simple set of SQL tables. Here it’s important to note: you never have access to our actual databases holding all the raw-level data; instead you access the data source layer, which can be seen as an ad-hoc database created just for your specific query, interpreting your context.

Technical hint: The ad-hoc database is built using SQLite 3 (https://www.sqlite.org/). Thus, all features of SQLite 3 can be used when writing queries.
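For example, since the queries run on SQLite, its built-in functions can be used directly inside a metric query. A minimal sketch, assuming a data source with profileId, time and impressions columns (like the facebookInsights data source used below) and dates stored as ISO 8601 strings, which aggregates daily impressions into monthly sums:

SELECT profileId AS dim1,
       strftime('%Y-%m', time) AS dim2,  -- SQLite date function: one bucket per month
       SUM(impressions) AS dim3          -- SQLite aggregate: monthly sum of daily impressions
FROM facebookInsights
GROUP BY profileId, strftime('%Y-%m', time)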

We’ll now look at a concrete example to give you further insight into how it works. Our goal is a line chart metric showing the Page Impressions of a Facebook Page over time. Noteworthy here is that the Impressions of a Facebook Page are a private-level metric, so we need to take them from the Facebook Insights. We assume there is a data source called facebookInsights.
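As a rough sketch, reduced to the columns used in the query below (the column types are assumptions, and the real data source may expose additional columns), its schema could look like this:

CREATE TABLE facebookInsights (
  profileId   INTEGER,  -- internal ID of the selected Facebook Page
  time        TEXT,     -- one row per day, e.g. '2016-01-01'
  impressions INTEGER   -- Page Impressions for that profile and day
);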

Our query describing this metric is:

SELECT profileId AS dim1, time AS dim2, impressions AS dim3 FROM facebookInsights

You might wonder what the renaming to dim1, dim2 and dim3 does (dim stands for Dimension). We’ll go into much more detail on how to map data onto chart types at a later stage; for now you just need to know that this is how you tell the system to use the profiles as the series of the line chart (Dimension 1) and to put the time onto the x-axis (Dimension 2).

Now we’ll describe step by step what the QQL stack does when calling this metric. For this, let’s assume we have selected a group including two Facebook Pages for the time period 2016/01/01 to 2016/01/01 at a daily interval.

1. Extracting contextual information: First, our system checks your preferences and what you have currently selected in the profile and date selector. This includes the set of profiles you are looking at, the time range you have selected, the interval (e.g. daily, weekly, monthly), your time zone setting, and much more. It’s basically all of your preferences relevant for presenting the data, plus information about the current request, mainly described by your profile and date selection.

2. Query analyzing: Now that we know the full context, QQL looks at your metric’s query. It might feel strange to check the query before making the data sources available, but this way our system knows exactly what data you are interested in, so it does not even need to load data you won’t access. By checking the table names in your query, it knows which data sources you need.

3. Building data sources on-the-fly: This is where the magic happens. After step 2, the system ramps up the data sources and fills them with data, using the contextual information extracted in step 1. The main parameters here are the set of profiles and the time range selected via the profile and date selector. At the end of this step there is a fully available ad-hoc database holding tables for all the data sources used within your metric’s query, filled with the data matching your contextual information. In our example this means a facebookInsights table; let’s assume the profiles within the selected group have the IDs 1 and 2 within our system. The table then holds the number of impressions for every single day, for every single profile selected.
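As a hedged illustration of what the ad-hoc table holds in this example (the impression figures are placeholder values), it could be populated roughly like this:

-- Illustrative rows of the ad-hoc facebookInsights table:
-- one row per selected profile and day within the chosen time range.
INSERT INTO facebookInsights (profileId, time, impressions) VALUES
  (1, '2016-01-01', 1200),  -- placeholder impressions for profile 1
  (2, '2016-01-01', 540);   -- placeholder impressions for profile 2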

4. Mapping data onto the chart: All the data now resides within the SQLite facebookInsights table. Next, our system loads it by executing your metric’s query, which was:

SELECT profileId AS dim1, time AS dim2, impressions AS dim3 FROM facebookInsights

Next to the columns of the facebookInsights data source we would like to show, the query also carries information about how the data should be mapped onto the line chart. We tell it by renaming: “profileId AS dim1” and “time AS dim2” (dim stands for Dimension). For a line chart, Dimension 1 holds the series, which we want to be filled with the profiles. Dimension 2 is the x-axis, and Dimension 3 is the data to be shown on the y-axis. Technically, “impressions AS dim3” can also be shortened to just “impressions”, as our system automatically maps the last selected column onto the last dimension.
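Put differently, the following shortened query should map the data in exactly the same way, with impressions implicitly landing on Dimension 3:

SELECT profileId AS dim1, time AS dim2, impressions FROM facebookInsights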

Hopefully this article gave you a better sense of how QQL works, and especially of how the different layers - metric, data source and raw-level data sets - work together.
