Welcome to the AI Focus white paper!
We are AI Focus, an artificial intelligence startup in Silicon Beach using machine learning to explore and develop new marketing technologies for our clients. This white paper presents a statistical analysis of the results of our Gbizbot, an online AI tool that drives customers to engage with brick-and-mortar businesses. You were invited to download this presentation because of your interest and expertise in statistics, big data, and new business technologies. We welcome your thoughts, questions, and critiques regarding our results and methodology. If you are interested in analyzing our raw data for an academic article or PhD project, please contact us at aifocus; we are happy to share our data.
The primary goal of a business is to increase revenue, and revenue is an increasing function of the number of customers a business serves. AI Focus leverages artificial intelligence and big data to drive customers toward our clients. Using machine learning, we determine how much value our clients are getting from their current digital marketing efforts and how their resources can produce better results. The most basic way clients measure the impact of digital marketing is through revenue. To increase revenue, we must influence the variables with the highest impact on it: for instance, increasing the number of people who discover a client's website, increasing phone calls, or increasing the frequency of branded searches. While industry-standard marketing methods exist for influencing revenue, we combined our decades of marketing expertise with new technologies to develop tools and methodologies capable of moving these variables far beyond industry expectations.
To protect confidentiality, the data presented here has been anonymized, and codes are used to represent the various clients. The project presented herein is a comparison of 10 different businesses before and after the intervention of our Gbizbot, providing two sets of data for comparison. All of our client data is provided by Google via the Google My Business page and by Caller Insights for our clients' tracked phone numbers. The following variables are tracked on a daily basis.
Variables provided by Google for each client
We have found that the following six variables have the most significant effect on revenue, so our analysis focuses on them. From past analysis, we know the other seven variables Google tracks are functions of these six key variables, so we do not need to analyze all thirteen.
As Google provides data for these variables via their Google My Business page, for the purpose of this project we have sufficient, accurate, consistent data from a credible and universal source. As we have access to Google data recorded before our clients initiated our services, we can test the impact of our activities on our clients’ trading position at different points in time.
The table below shows the means, standard deviations, and 95% confidence intervals for each variable 100 days before and 100 days after our interventions. For each client there are six variables, with the last part of each variable name being the code assigned to that client for anonymity. The 95% confidence intervals of the means show that, for every variable, the bounds before our intervention are lower than the bounds of the same variable after our intervention. In other words, the confidence intervals do not intersect, so the mean of each variable was significantly lower before our intervention than in the 100 days after.
|Variable (100 days before and after)|Period|95% CI Lower Bound|95% CI Upper Bound|
|---|---|---|---|
|GMB call actions|Before|15.066|18.974|
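The non-overlapping-intervals check described above can be sketched as follows. This is a minimal illustration using synthetic daily counts (the paper's raw data is not reproduced here); the variable names and distributions are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

def ci95(sample):
    """Return the 95% t-based confidence interval for the mean of a sample."""
    sample = np.asarray(sample, dtype=float)
    mean = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean
    return stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

# Hypothetical daily GMB call-action counts (synthetic, not the paper's data).
rng = np.random.default_rng(0)
before = rng.poisson(17, size=100)   # 100 days before intervention
after = rng.poisson(28, size=100)    # 100 days after intervention

lo_b, hi_b = ci95(before)
lo_a, hi_a = ci95(after)
# If hi_b < lo_a, the intervals do not intersect, suggesting a significant
# difference between the before and after means.
print(f"before: ({lo_b:.2f}, {hi_b:.2f})  after: ({lo_a:.2f}, {hi_a:.2f})")
```

With well-separated means and 100 observations per group, the two intervals are narrow and disjoint, which is the pattern the table above reports for every variable.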
Analysis of Variance
The table below shows the analysis-of-variance output for the six variables. In every case the p-value is well below 0.05; in fact, we can test our hypothesis at the 99% confidence level. In all cases, the null hypothesis that the means of these variables are equal before and after our intervention is rejected with overwhelming evidence.
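A before/after ANOVA of the kind reported above can be sketched as follows. The data here is synthetic (the group means and standard deviations are illustrative assumptions, not the paper's figures); with exactly two groups, one-way ANOVA is equivalent to a two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic daily values for one variable (illustrative, not the paper's data).
before = rng.normal(17.0, 5.0, size=100)   # 100 days before intervention
after = rng.normal(28.5, 6.0, size=100)    # 100 days after intervention

# One-way ANOVA across the two groups (F = t^2 for the two-group case).
f_stat, p_value = stats.f_oneway(before, after)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")

# Reject H0 (equal means) at the 99% confidence level when p < 0.01.
reject_at_99 = p_value < 0.01
```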
|Final Cluster Centers (variables used to generate the cluster centers)|
Graph 1: Three-Means Cluster Analysis
From Graph 1 above we can clearly see that the web-activity data falls into two clusters. The first cluster contains the web-activity data for the period before the client had signed up for our marketing services. The second cluster contains the data from after the impact of our interventions had taken full effect.
The cluster-means table (Table 2) below shows the distribution of the means of the six variables across the two clusters. The means of the two clusters differ substantially for every variable, implying that the data changed over time as the impact of our activity took effect.
|Table 2: Cluster Means for the 6 Main Variables|
|Variable|Cluster 1|
|---|---|
|GMB Call Actions|17.07|
|Percentage Change From Cluster 1 to Cluster 2|
|Direct Search|Website Action|Direction Action|Phone Call Actions|Owner's Photo Views|Get Directions|
The clustering vector below shows the cluster assignment of every data point without missing values for the 100 days before and after our intervention. From the tables above, we know that cluster 2 represents the days when web activity was high, while cluster 1 represents the days when it was low. Although the data splits evenly into 100 days before and 100 days after our interventions, there are more 1's than 2's overall; this is explained by the lag of 20 ± 5 days before the impact of our intervention is felt. The 1's dominate the first half of the vector, which shows that most of the 100 days before our intervention had low web activity, while the 2's dominate the second half, which shows that most of the high-activity days fall in the 100 days after our interventions.
1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 2 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2 2 1 2 2 2 2 2 1 2 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1 1 1 2 2 2 1 2 1 1 2 2 1 2 2 2 1 2 2 2 1 2 1 2 1 2 1 2 1 2 2 2 2
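The two-cluster assignment described above can be sketched with k-means. The data below is synthetic (the activity levels are illustrative assumptions, not the paper's measurements), but it reproduces the qualitative picture: post-intervention days land overwhelmingly in the high-activity cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_days, n_vars = 100, 6

# Synthetic stand-in for the six daily GMB variables (not the paper's data):
# lower activity before the intervention, higher activity after.
before = rng.normal(15.0, 4.0, size=(n_days, n_vars))
after = rng.normal(30.0, 5.0, size=(n_days, n_vars))
X = np.vstack([before, after])  # rows 0-99 = before, rows 100-199 = after

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Identify the high-activity cluster by its larger mean center, then measure
# what fraction of post-intervention days were assigned to it.
high_cluster = km.cluster_centers_.mean(axis=1).argmax()
frac_after_high = (labels[n_days:] == high_cluster).mean()
print(f"share of post-intervention days in the high-activity cluster: {frac_after_high:.2f}")
```

Mapping the cluster labels back onto the day index, in order, yields a clustering vector analogous to the one shown above.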
Thank you for reviewing our data sets. We hope you found the presentation informative and interesting. The Gbizbot, our artificial intelligence tool, is constantly learning and being upgraded, and we will update this white paper when we have 50 client data sets.
We also plan to release future white papers with statistical analysis of Gbizbot results over a 12-month time frame, along with an analysis of what happens to a client's metrics when the Gbizbot is turned off.
We welcome your thoughts on the results presented. If you are interested in the raw data, we are happy to share the anonymized data for your own analysis.