Heap Analytics

Overview
Heap Analytics is a web analytics tool that allows you to capture and measure every user action on your website or mobile app, including clicks, taps, swipes, page views, form submissions, and more.
Setup
Black Crow will send unique information about the user and their score as a User Property into your heap.io platform on your behalf. To run an analysis on your users, please see Heap's documentation here.
Please notify your Account Manager, who can help get this enabled for you.
How we pass scores through Heap
Inside Heap, the user property for Black Crow is a series of key-value pairs. The key is constructed out of three parts, separated by a pipe (|):
- Vendor name: A hard-coded static string, BlackCrow, that identifies us as the vendor
- Comparison group: A dynamic text field that indicates the comparison group used in generating the score, as described below
- Bucket count: A dynamic integer that indicates the number of buckets used in generating the score, as described below
The value in the key-value pair is simply the score, an integer that can be as low as 1 and as high as the bucket count.
As an example, let’s say the comparison group is “site” (the prediction was compared to all other predictions on the site), and the bucket count is 10 (the prediction was bucketed into 10 roughly equal score groups). If the score for the pageview was a 5, you would expect to see the following returned through Heap on the user's profile: BlackCrow|site|10: 5
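For illustration, the sketch below shows how a key-value pair like this could be assembled and attached to a user's profile with Heap's addUserProperties call. The blackCrowKey helper and the hard-coded values are hypothetical; Black Crow sends this property on your behalf, so no implementation is required on your side.

```typescript
// Hypothetical sketch: how a Black Crow score could be attached as a Heap user property.
// Black Crow sends this on your behalf; you do not need to write this code yourself.

type ComparisonGroup = "site" | "cart"; // example groups; the actual groups may differ

// Build the pipe-delimited key: vendor | comparison group | bucket count
function blackCrowKey(group: ComparisonGroup, bucketCount: number): string {
  return `BlackCrow|${group}|${bucketCount}`;
}

// Example from above: a score of 5 out of 10 buckets, compared against the whole site
const key = blackCrowKey("site", 10); // "BlackCrow|site|10"
const score = 5;                      // an integer between 1 and the bucket count

// Heap's JavaScript API exposes addUserProperties for attaching user properties
declare const heap: { addUserProperties: (props: Record<string, string | number>) => void };
heap.addUserProperties({ [key]: score }); // appears as BlackCrow|site|10: 5 on the user's profile
```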
Helpful information for your Analyst
Introduction
Black Crow generates real-time predictions of how likely a user is to take a specific action, typically a purchase, in the future. We make (and update) this prediction on every pageview a user logs. To make these predictions more consumable, we simplify them and translate them into a “score”. The higher the score for a pageview, the more likely the user is to make a purchase, or take whatever other action the model is trained on.
About scores
A score is generated by comparing a prediction to other predictions in a defined group (called the comparison group). We aim to place roughly the same number of predictions in each score bucket, so these scores can be thought of as deciles, quartiles, or n-tiles generally, depending on how many buckets we divide predictions into. We call this number the bucket count.
Illustrating the bucket count
As an example, let’s say the comparison group is every pageview on your site in a given day, and the bucket count is 10 (deciles). If the prediction for a pageview was very high, let’s say in the 95th percentile, it would get a score of 10. If the prediction for the pageview was in the 43rd percentile, it would get a score of 5. If instead the bucket count was 3, then the prediction in the 95th percentile would be scored a 3, and the prediction in the 43rd percentile scored a 2.
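For analysts who prefer code to prose, here is a minimal sketch of that bucketing arithmetic, assuming a percentile between 0 and 100. The scoreFromPercentile name is ours, purely for illustration, and is not Black Crow's actual implementation.

```typescript
// Minimal sketch: map a prediction's percentile (0-100) within its comparison group
// to a score between 1 and bucketCount. Illustrative only.
function scoreFromPercentile(percentile: number, bucketCount: number): number {
  const bucket = Math.floor((percentile / 100) * bucketCount) + 1;
  return Math.min(bucket, bucketCount); // cap so the 100th percentile stays in the top bucket
}

scoreFromPercentile(95, 10); // 10  (95th percentile, deciles)
scoreFromPercentile(43, 10); // 5   (43rd percentile, deciles)
scoreFromPercentile(95, 3);  // 3   (95th percentile, terciles)
scoreFromPercentile(43, 3);  // 2   (43rd percentile, terciles)
```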
Illustrating the comparison group
The comparison group is the other component of a score that we vary. You might want a relatively even distribution of predictions that were made only on your cart page. To support this, we create a “cart” page score, which is generated by comparing a prediction only to other predictions made on the cart page (versus the example above, where predictions were compared to all other predictions made on the site). Continuing the example, the prediction that was in the 43rd percentile when compared to all predictions on the site might only be in the 16th percentile when compared to predictions on the cart page, and would therefore be scored a 2 with a bucket count of 10, or a 1 with a bucket count of 3.
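Putting the two ideas together, the sketch below scores the same pageview against both comparison groups and shows the resulting key-value pairs. The percentiles and the “cart” group name are taken from the example above and are illustrative only.

```typescript
// Sketch: the same pageview scored against two comparison groups (numbers from the example above).
const sitePercentile = 43; // vs. all predictions made on the site
const cartPercentile = 16; // vs. predictions made on the cart page only

// Same illustrative bucketing arithmetic as the previous sketch
const toScore = (pct: number, buckets: number) =>
  Math.min(Math.floor((pct / 100) * buckets) + 1, buckets);

// The user properties that would result, keyed as vendor | comparison group | bucket count
const properties: Record<string, number> = {
  "BlackCrow|site|10": toScore(sitePercentile, 10), // 5
  "BlackCrow|cart|10": toScore(cartPercentile, 10), // 2
  "BlackCrow|site|3": toScore(sitePercentile, 3),   // 2
  "BlackCrow|cart|3": toScore(cartPercentile, 3),   // 1
};
```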