In May, OMD EMEA partnered with Google to test initial concepts in a fast-paced environment, assessing their possibilities and pitfalls for further development with feedback from a consumer focus group.
Four teams participated in the two-day event, gaining knowledge around implementing Google Assistant solutions for brands. Google provided teams with guidance around their current capabilities and conversational AI design. Each team explored a unique use case which they brought to life in a prototype, taking advantage of additional integration opportunities and planning for common failure states.
Taking a people-first approach, we conducted a consumer focus group to test the teams’ ideas with six participants selected on a number of factors, including age, family and technology adoption. The consumer workshop focused on unearthing opinions on the formation and potential of the ideas and innovations, rather than their finished functionality. The users were surprised and excited by the variety of use cases and the new experiences they offered.
Brief Overview of the Concepts Explored
Meet Sasha, your own personal hair stylist assistant. No more bad hair days – get customised hairstyle recommendations and step by step guidance, so you can be your BEST you!
- Provides personalised suggestions based on uploaded photos
- Enjoyed the pause after each instruction, waiting to be told to go to the next step
- The time estimates and items needed are a unique and welcome addition
Meet Indi, a trendsetter and expert who will help you find your own self-expression through clothing customisation. Get access to limited-edition items, discounts and exclusive experiences. She also gathers ideas and experiences from across social platforms to help you form the best ones.
- Fun, strong personality
- The customisation aspect is unique and stands out
- Great way to feel closer to the brand and augment in-store visits
Meet the Configurator, your personalised car configuration assistant making the process of finding the perfect vehicle easier. Get the most relevant information and advice to customise your perfect vehicle.
- Different personality options for different types of consumers
- Adds an extra way to get information
- A configuration summary to enhance the dealership experience
Meet Dino Adventures, the learning and development Action for kids that makes reading more fun and exciting.
- Built around busy moments that could become additional family time
- Alternative to an iPad, enjoyed that it isn’t screen based
- Open to being recorded to add increased interactivity with the experience
Key Consumer Themes
Consumers Are Open to AI Interactions
All of the consumers in the focus group had used a smart speaker, but not all of them owned one. They felt comfortable with smart speakers recording interactions as long as it was clear when and how the interaction added value or functionality in return. They also wanted controls to change features and turn settings on and off.
Set Expectations Early
It is important to set expectations early, as consumers are very excited about these new interactions and, as a result, find many current activations underwhelming because they can’t do as much as expected. Consider starting with a narrow focus and building out from there. Be very specific about what an experience can and can’t do.
The group also talked about how assistants are becoming very realistic. One consumer referred to a recent experience stating:
“I was dealing with a bot recently and it was scarily real. For a second, I forgot I was talking to a bot. It was weird. It was like he was almost interacting with me – the way I asked questions and stuff. I felt comfortable, but it was just that moment where I remembered I was talking to a robot.”
Sharing Data has Become the Norm
Consumers are more accepting of sharing their data in order to sign up for a product or service. However, one consumer described the situation as a catch-22, saying:
“I just have come to accept it. I think there is lots changing with GDPR now and who is holding your data. There is a trust element, you would like to think they are not sharing your data.”
Child-Friendly AI
The consumers didn’t like the idea of having a smart speaker in a child’s room, partly because it is an expensive device to keep there. However, they enjoyed how experiences could evolve with the child.
Parental controls were an important feature in the discussion: the group felt the devices gave a child an immense amount of freedom on their own, and they wanted to be able to limit the amount of time a child could spend with experiences.
Want to know more about OMD Hackathons? Email [email protected] or [email protected]
Following OMD’s first entry into I-COM’s annual Hackathon in Porto in 2017, Paul Cuckoo, Chris Morris, Giuseppe Angele and I (Aaron Brace) flew to San Sebastián, on Spain’s northern coast, to compete in this year’s event. The Hackathon itself is a gruelling 24 hours answering the two components of a data and marketing challenge. This year’s sponsor, Intel, set the challenge of predicting January 2018’s digital engagement with artificial intelligence, using only limited historical data: indexed Google Trends, Kantar’s reporting of media spend, Twitter comments and web activity over the period. The second, qualitative component was developing the right marketing strategies to be seen as a leader in AI, and the best tactics to engage customers and audiences. Competing with us was a blend of media agencies, specialist data-science consultancies, major European academic institutions and commercial/financial companies – needless to say, the competition was fierce.
The enormity of the situation left me awash like the unexpectedly stormy Spanish weather ambushing teams en route, as we were allocated our room – a 30-ft-ceilinged, windowless music rehearsal space. With acoustics designed for instrumental practice, all echo and reverberation were stripped from our voices; our now alien timbre made the room feel low on oxygen. This is where it would be won and lost – our War Room until we handed the file over to the competition jury.
Our ambition was to develop not only an accurate predictive model, but to also develop an application or tool that allows a user to simulate predictions and visualise core components of the data used. We began isolating our KPI – converting 13 million tweets in the AI space, and a similar amount of web traffic hits. Once cleaned, aggregated, and processed, these would become our two key variables to predict for the month of January 2018, forming the first component of the challenge. We hypothesised however that the Twitter data may also provide another function: if we could determine the frequency of specific keywords in social data (a list determined from the mining of keywords from web-traffic URLs and search terms), this might allow us to determine specific brand and keyword engagement, and the extent to which this might be associated with brand perceptions, or predicted by brand specific media spend.
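That keyword-frequency hypothesis can be sketched in a few lines. This is a minimal, hypothetical illustration – the keyword list, tokenisation and sample tweets are invented for the sketch, not the team’s actual pipeline:

```python
from collections import Counter

# Hypothetical keyword list; in practice this would be mined from
# web-traffic URLs and search terms, as described above.
KEYWORDS = {"ai", "machine", "learning", "intel", "neural"}

def keyword_frequencies(tweets):
    """Count how often each tracked keyword appears across a batch of tweets."""
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        counts.update(t for t in tokens if t in KEYWORDS)
    return counts

tweets = [
    "Intel pushes new AI chips",
    "machine learning is eating the world",
    "AI and machine learning at Intel",
]
freqs = keyword_frequencies(tweets)
```

At scale (13 million tweets), the same per-keyword counts, aggregated by month and brand, become candidate features to correlate with brand perceptions or media spend.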
Despite our best preparations with the small amount of sample data provided ahead of time, the period from 9am until after lunch was primarily data wrangling: getting the data into the format required for the modelling and prediction work to begin. Building the models long into the small hours, Paul and Giuseppe knew they had to produce predictions with sufficient time for me to feed the data into our tool, and for Chris to construct the story of the presentation and the ‘strategic implications for leadership’ section of the task. By 5am the models were complete – Paul and Giuseppe had used Gradient Boosting and ARIMA time-series algorithms to make their predictions – and the stage was set for me to frantically pipe the data into our application and finalise the functionality, and for Chris to craft the media tale: to make some sense of the last 20 hours of chaos. With the 9.30am deadline rapidly looming and having not slept, there was still sufficient time for me to suffer some (not metaphorical) last-minute data frustrations before Paul ran to submit our USB drive at 9.28am – a cool two minutes to spare, never in doubt.
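The team’s actual models used Gradient Boosting and ARIMA; as a rough, hedged illustration of the time-series idea only, here is a pure-Python fit of the simplest autoregressive relative, an AR(1), rolled forward one step on toy monthly data:

```python
def fit_ar1(series):
    """Ordinary least squares fit of y_t = a + b * y_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def forecast(series, a, b, steps):
    """Roll the fitted model forward to produce future predictions."""
    preds, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        preds.append(last)
    return preds

# Toy monthly engagement index with steady 10% growth (illustrative only)
history = [100.0, 110.0, 121.0, 133.1, 146.41]
a, b = fit_ar1(history)
jan_pred = forecast(history, a, b, 1)[0]
```

A real ARIMA model adds differencing and moving-average terms on top of this autoregressive core, and Gradient Boosting would instead learn the mapping from lagged features non-linearly.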
We then waited for the presentations of findings, which were split into two groups: Tier 2 (designed for those with less experience, or university entrants) and, intimidatingly, our tier, Tier 1 (for experts and more experienced data scientists). The submissions varied widely in the qualitative element: the University of Kiev presented a solution built on using conferences to drive social engagement in the AI space, while Ekimetrics proposed a strategy built closely around traditional print media. Other teams constructed their future strategies around tweet sentiment, using positive sentiment as a measure of engagement with a brand – though in my opinion, expecting sentiment to show enough variation to predict engagement in tech contexts might have been optimistic. Our solution was primarily built around the idea of AI strategy piggybacking off ‘cognitive consonance’ – aligning the targeting for the product with the product itself: if you want to influence through technology, use technology. With only six minutes to showcase 24 hours’ work, Chris’ presentation had to be more than concise – and then it was over. Whilst we narrowly missed out on being chosen as a top-two finalist, our position did leave us the best-performing media agency in Tier 1.
One key learning from competing across this 24-hour period was that you need everything to go your way early on: from choice of models, to efficient code for merging and manipulating your data, down to the technology at hand. In hindsight, 24 hours is a very short period in which to play from behind – which we did at times, despite being satisfied with our performance.
We learned a huge amount: understanding other teams’ approaches to data challenges, and seeing industry engagement with the proposed strategies first hand. We gained key experience of how data science is viewed by tech leaders like Intel, and how even pioneering brands in the tech space see the value of data science in marketing for developing business solutions as AI becomes more relevant across the industry.
We’d love to chat to anyone interested in our experiences, or anyone who feels there is something they could take from our learnings, from a data-science, analytics, tool-building or data-storytelling perspective. Please contact us at [email protected]

The I-COM Data Science Hackathon is a 36-hour marathon, where competing teams develop algorithms using data science analytics to solve predictive modelling challenges on marketers’ datasets.

This year, it was hosted in the beautiful Cruise Terminal in Porto. Unilever and Intel provided the challenges for teams. The attending teams were a mix of academic data scientists (universities) and analytics and marketing specialists (agencies).
The team from OMD EMEA took on the Intel challenge –
Business Challenge: What is the impact of discussions in social media and brand health indicators on advertising effectiveness for high consideration purchases such as consumer PC sales in the US?
Prediction Challenge: Predict the sales revenue by CPU brand/device brand combination by month for Jan and Feb 2017.
A sample of the data was provided by the sponsors in advance of the Hackathon for teams to interrogate. The shared data included social (Twitter volume), Millward Brown brand-health survey data, search, ad spend and sales data.
To cover all aspects of the challenge, OMD EMEA sent a team with a blended skill set. Our team included Paul Cuckoo (Global Channel Planning Manager), Harry Daniels (Analyst), Cate McVeigh (Head of Marketing Sciences, Intel team) and Adam Abu-Nab (Social Intelligence Exec Director).
The Hackathon –
The OMD EMEA team created a predictive sales estimator, combining marketing mix modelling (MMM) with a data output showing how varying ad spend can affect revenue.
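At its core, MMM is regression: revenue is modelled as a baseline plus contributions from media spend (and, in practice, seasonality, price, distribution and more). As a hedged sketch of that spend-to-revenue mechanic on invented numbers – not the competition model – a one-variable version looks like this:

```python
def ols(x, y):
    """Simple one-variable OLS fit: returns (intercept, slope) of y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy monthly figures (illustrative only; real MMMs add seasonality
# dummies, e.g. for the December/Christmas trend, and other drivers)
spend   = [10.0, 12.0, 11.0, 15.0, 20.0, 14.0]
revenue = [80.0, 86.0, 83.0, 95.0, 110.0, 92.0]

base, per_unit = ols(spend, revenue)

# Simulate: estimated revenue if each month's spend rose by 10%
simulated = [base + per_unit * s * 1.10 for s in spend]
```

The “estimator” part is the last line: once the coefficients are fitted, you can vary spend and read off predicted revenue, which is exactly what a spend-simulation tool exposes to the user.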
What worked: Consideration was the most effective predictor of sales. The consideration data from Millward Brown allowed us to predict revenue, showed consideration to be a strong driver of it, and let us isolate a strong December/Christmas trend.
What didn’t work: Twitter data effects. We weren’t able to truly isolate the effects of the Twitter data on media effectiveness.

Finalist teams from Ebiquity and Analytic Partners also presented MMM solutions, but used a nested approach instead: modelling the relationships between Twitter/brand health and spend first, before nesting the result in a final revenue model.
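A minimal sketch of that two-stage idea, with invented numbers (not the finalists’ actual models): stage one fits brand health as a function of spend, and stage two regresses revenue on the fitted brand-health series rather than on spend directly.

```python
def ols(x, y):
    """One-variable OLS fit: returns (intercept, slope) of y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy monthly data (illustrative only)
spend   = [10.0, 12.0, 11.0, 15.0, 20.0, 14.0]
health  = [7.0, 8.0, 7.5, 9.5, 12.0, 9.0]          # brand-health index
revenue = [86.0, 94.0, 90.0, 106.0, 126.0, 102.0]

# Stage 1: carve out the spend -> brand-health relationship
a1, b1 = ols(spend, health)
fitted_health = [a1 + b1 * s for s in spend]

# Stage 2: nest the fitted brand-health series in the final revenue model
a2, b2 = ols(fitted_health, revenue)
```

The appeal of nesting is interpretability: spend’s effect on revenue is decomposed through an intermediate brand-health channel, rather than estimated as one opaque coefficient.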
Interestingly, dashboard solutions were also presented as outputs. These dashboards could forecast spend required to meet revenue targets based on brand health/twitter indicators.
Our key takeaways –
Do your prep: Sample data prep is the secret ingredient to Hackathon success. Teams that pre-formatted the sample data and did as much prep work as possible in advance freed up valuable time. This meant more time to spend on modelling their solutions, as well as conceptualising and visualising the story they wanted to tell.
A fuller data eco-system is needed for business application: A challenge for all teams was the limitations of the data sources provided. For there to be actual business applications, a fuller eco-system of data sources and metrics would be needed.
For example, a common problem teams faced was that the search and social data provided (tweets) was solely volume over time, with a mix of owned (brand-driven) and earned (user) mentions within that. This limited data caused predictable peaks around owned campaign activity and seasonal campaign trends (Black Friday, Christmas, Apple launches). The volume also counted a mention rather than the reach of a mention, which could prove a stronger correlate of ad effectiveness, intent and sales.
The semantics are as important as the numbers: For social to be used as an indicator of purchase, you need to be able to isolate where the real user discussion is happening and extract the richer semantics from it. For example, are these mentions positive, negative, intent-based or consideration-based? Can they be correlated and validated with intent/consideration survey data from Millward Brown? For search, what is the context in which people are searching for your brand, not just the volume?
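As a toy illustration of that semantic cut – the keyword rules below are hypothetical; real work would use trained classifiers validated against survey data – a rule-based tagger can bucket mentions into those categories:

```python
# Hypothetical cue words per semantic bucket (illustrative only)
RULES = {
    "intent":        ("buy", "order", "purchase"),
    "consideration": ("thinking", "compare", "should i"),
    "positive":      ("love", "great", "amazing"),
    "negative":      ("hate", "broken", "awful"),
}

def tag_mention(text):
    """Return the sorted list of semantic tags triggered by a mention."""
    lowered = text.lower()
    return sorted(tag for tag, cues in RULES.items()
                  if any(cue in lowered for cue in cues))

tags = tag_mention("Thinking I should buy the new Intel laptop, it looks great")
```

Once mentions carry tags like these, each bucket’s volume over time becomes a separate series that can be tested against intent/consideration survey data, rather than one undifferentiated mention count.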
At OMD EMEA, we have tools that can cut social and search data in these more meaningful ways. There are also differing audience discussion environments to consider: a parallel test we ran using our social tools found that more Intel sales/intent discussions and social video views were happening on wider social platforms. For example, YouTube and Twitch resonated with gamers, while forums were preferred by B2B tech-heads in particular.
Given the nature of a hackathon, it’s understandable that the amount of data provided to teams is limited so that a solution can be turned around in 24 hours. What the format has allowed teams to do is test interesting ideas and models, then take these and plug them into the broader data sets they work with day to day.
MMM still the most useful for marketers
Data science is an exciting field, with new techniques that promise accurate predictions from minimal data. However, when it comes to properly answering business questions, regression modelling in the form of MMM remains hard to beat. Feeding the model with all the correct data sources is key to its accuracy.