Extrapolating on the human experience: Recommender engines



Our understanding of the natural world, its forces and elements, rests on a foundation of mathematical patterns. I often find that the human experience can be understood in similar terms: every point in our lives is measurable by its breadth and depth. We can ascribe breadth to the expansive range of our experiences and depth to their quality.

Society-wide innovations often spur the growth of either dimension, changing the way we live and experience. In the pre-internet era, both dimensions faced constraints: choice scarcity and time scarcity. The breadth of our experiences was defined by the choices within our physical reach, and the depth of our experiences was limited by the time available to truly explore and optimise them. The internet broke down the physical barriers, replacing the choice scarcity problem with an overabundance of choice. But in doing so, it only exacerbated the time scarcity issue. The day was still 24 hours long, the sea of choices surrounding any one decision was overwhelming, and people often reverted to the comfort of familiar choices to cope.

So while the internet widened the parameters along every vector, most of us remained within the borders of our own comfort. Recommender engines are the systems that allow individuals to seamlessly interact beyond these self-imposed borders, reaching into the expansive world of choice from a comfortable perch. As we explore the true depth and breadth of choice introduced by the internet, we don’t just optimise our existing experiences but discover ones well beyond the borders of what we may have otherwise allowed ourselves.

For simplicity’s sake, let’s distil this series of experiences down to the individual’s consumer life; we’ll return to the bigger picture later on. Recommender engines ease points of decision friction for consumers through two modes. Some expedite the sales funnel, showing us only complementary consumer decisions and filtering out the detestable process of hard selling. Others discover on our behalf, tackling the ‘what to watch next’ type of decisions while also exposing us to our own latent desires and needs. In this kind of recommender-oriented economy, businesses don’t have to construct paths toward customers only to find they are completely mismatched. Rather, they can identify the customers en route and simply sand down any bumps where necessary. As we increasingly erode points of friction, we can imagine recommender engines expanding beyond simple consumer tools to become central to a seamless human experience.

Given that the end goal of these systems is to streamline decisions and experiences, optimising for a minimal error rate is imperative. Some recommender engines rely on manual sorting, tagging and matching while others are completely automated. Regardless of the method, they all work with two commonly known components: product fit and preference data. 

Product fit: this refers to the mechanism of making recommendations based on consumer compatibility. Mixing recommendations with loosely masked hard sells is a strategy that favours neither the business nor the consumer.

Preference data: this refers to the need for an abundance of data on both sides of the exchange – data nuanced enough to see past surface-level red herrings about what a customer wants or what a product’s true offering is.
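To make the interplay concrete, here is a minimal sketch in Python with invented numbers: preference vectors stand in for nuanced consumer data, and a cosine similarity score stands in for product fit, filtering out the mismatches that would otherwise read as hard sells.

    import numpy as np

    # Hypothetical profiles: each dimension is one preference signal
    # (e.g. genre affinity, price sensitivity); all names and weights
    # are illustrative, not drawn from any real system.
    user_profile = np.array([0.9, 0.1, 0.6])          # preference data
    products = {
        "product_a": np.array([0.8, 0.2, 0.5]),
        "product_b": np.array([0.1, 0.9, 0.3]),       # a poor fit: effectively a hard sell
    }

    def fit_score(user: np.ndarray, product: np.ndarray) -> float:
        """Cosine similarity as a crude proxy for product fit."""
        return float(user @ product / (np.linalg.norm(user) * np.linalg.norm(product)))

    # Recommend only products above a fit threshold, filtering out mismatches
    scores = {name: fit_score(user_profile, vec) for name, vec in products.items()}
    recommendations = [n for n, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0.7]
    print(recommendations)  # ['product_a']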

The common hurdle: In a world of perfect recommender engines, we would have a massive aggregate database of information on both the product and consumer sides – a marketplace that could fluidly reach into either pool and make inferences more insightful than the consumers themselves. Every individual would be effectively matched with goods, services and experiences that enhanced their existing preferences or expanded their preferential bounds. But the competitive, incentive-driven nature of our markets would never allow a free-for-all exchange of information like this. The closest we have are the internal databases of giants like Amazon and Alibaba, which operate across multiple verticals and serve large populations – a robustness not accessible to many other recommender engines. Without access to this kind of information across the board, highly personalised recommender engines appear distant and unreachable.

An alternative future: But if we imagine the final form of such a personalised engine, we may find that the route to it looks somewhat different to an aggregate database. The major barrier we face today lies in the need for a mutual point of data exchange, a platform or marketplace that can act as the locus of information. But what if we ported the locus of data from an aggregate platform to the individuals themselves? Imagine an integrated ecosystem made up of black box nodes that are able to communicate with each other, constantly pairing individual preferences with relevant experiences. Your particular black box would contain all your nuanced desires and inclinations, from eating habits to career goals – or even something as specific as odours you like or dislike. If you had a major life event approaching, a wedding or a relocation, it could recommend the most suitable organisational tool – a tool you may never have thought to use or even heard of. It could go a step further to track how effectively you use the tools it recommends, and offer a substitute if it notices a task now takes you two hours when it used to take one. The personalisation engine would be a seamlessly interactive experience – human intuition and algorithmic efficiency co-existing.
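As a thought experiment only – every name and number here is hypothetical – the skeleton of such a black box node might look like the sketch below: a store of preferences that pairs itself with relevant offers, logs how you actually use what it recommends, and flags when a substitute is warranted.

    from dataclasses import dataclass, field

    # Purely speculative sketch of the 'black box node' idea above;
    # PreferenceNode and ToolOffer are invented names, not a real system.
    @dataclass
    class ToolOffer:
        name: str
        tags: set[str]

    @dataclass
    class PreferenceNode:
        preferences: set[str]                    # nuanced desires and inclinations
        task_minutes: dict[str, list[float]] = field(default_factory=dict)

        def recommend(self, offers: list[ToolOffer]) -> ToolOffer:
            # Pair individual preferences with the most relevant offer
            return max(offers, key=lambda o: len(o.tags & self.preferences))

        def log_task(self, tool: str, minutes: float) -> None:
            self.task_minutes.setdefault(tool, []).append(minutes)

        def needs_substitute(self, tool: str) -> bool:
            # Flag a substitute if a task now takes ~2x as long as it first did
            times = self.task_minutes.get(tool, [])
            return len(times) >= 2 and times[-1] >= 2 * times[0]

    node = PreferenceNode(preferences={"wedding", "planning", "budgeting"})
    offers = [ToolOffer("SeatChart", {"seating"}), ToolOffer("PlanIt", {"wedding", "planning"})]
    tool = node.recommend(offers)            # -> PlanIt
    node.log_task(tool.name, 60)
    node.log_task(tool.name, 120)
    print(node.needs_substitute(tool.name))  # True: time doubled, offer an alternative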

At the moment, we have a less evolved version of this. Platforms like YouTube and Twitter will actively ask for your input: if you don’t like seeing something, simply indicate so and the algorithm will filter it out. Users also do this intuitively on other platforms, intentionally searching for music genres or specific topics in order to bring the recommender engine’s attention to those elements. These signals feed what is known as ‘collaborative’ filtering – inferring a user’s tastes from the accumulated preferences of many similar users – a mechanism that can evolve to be more intuitive and seamless, even if not to the extent of the vision above.
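A minimal sketch of the mechanism itself, using an invented ratings matrix: each user’s unseen items are scored by the similarity-weighted ratings of other users, which is how a platform can surface something you never explicitly searched for.

    import numpy as np

    # Toy user-item ratings (rows = users, columns = items, 0 = not yet seen);
    # the numbers are invented purely for illustration.
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
    ])

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    target = 0                                   # recommend for user 0
    sims = np.array([cosine(ratings[target], ratings[u]) for u in range(len(ratings))])
    sims[target] = 0                             # ignore self-similarity

    # Predict unseen items as a similarity-weighted average of other users' ratings
    for item in np.where(ratings[target] == 0)[0]:
        predicted = sims @ ratings[:, item] / sims.sum()
        print(f"item {item}: predicted rating {predicted:.2f}")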

Amid privacy and security concerns, recommender systems can often be the subject of conversations around corruption, bias or exploitation – but their objective value add is not diminished. In their ideal state, the streamlining enabled by robust recommender engines would be impossible to replicate by any individual for whom choice and time scarcity remained an issue. Acknowledging this, it’s easy to see that there is an enormous amount of value yet to be captured as recommender systems evolve into personalised engines. We can take any vertical within our lives – food, lifestyle, knowledge etc. – and understand its potential to create value as a reflection of how efficient its recommender engines are. Think about the platforms, tools and products you use, how seamlessly they integrate with your life right now, and how that might change in the face of an optimised recommender engine.

It’s not surprising that some of the biggest platforms and services in the world with the most loyal customer bases also have some of the best recommender engines – there’s a reason we return to platforms like YouTube and Twitter on a daily basis. You can trust that YouTube will show you content you love, or that Twitter will present you with an addictive mix of both comforting and enraging tweets. At the base level, recommender engines create customer satisfaction, a sure path to becoming an all-pervasive technology.

Down the Rabbit Hole

1. Amazon Personalize: democratising access to recommender engines

As part of its AWS stack, Amazon provides a recommendation engine tool that belongs to a broader set of tools democratising access to AI/ML functionality.

“The most well-known and successful ML use cases have been retail websites, music streaming apps, and social media platforms. For years, they’ve been embedding ML technologies into the heart of their user experience. They commonly provide each user with an individual personalized recommendation, based on both historic data points and real-time activity (such as click data).”

The nature of siloed ecosystems and competition doesn’t incentivise shared datasets, so a tool like Amazon Personalize is the next best thing. It helps businesses optimise their recommendation systems within the scope of their existing customer data (however limited); in the absence of such aids we would see significantly less personalisation across the board.

Source: Creating a recommendation engine using Amazon Personalize – Amazon 
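For a sense of the developer-facing side, below is a sketch of querying an already-trained Personalize campaign with boto3; the region, campaign ARN and user ID are placeholders, and a real deployment would first ingest interaction data and train a solution.

    import boto3

    # Ask a deployed Amazon Personalize campaign for user-specific recommendations
    runtime = boto3.client("personalize-runtime", region_name="us-east-1")

    response = runtime.get_recommendations(
        campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/example-campaign",
        userId="user-42",
        numResults=5,
    )

    # Each entry pairs an item with the model's relevance score
    for item in response["itemList"]:
        print(item["itemId"], item.get("score"))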

2. The paradox of choice – why more is less

Barry Schwartz acknowledges the ‘explosion of choice’ in modern life and posits that more choice does not necessarily equate to greater satisfaction or happiness. The logic follows that in the face of an insurmountable wall of choices, we are likely to experience choice paralysis or dwell on the perceived opportunity costs of our decisions. 

“One effect, paradoxically, is that it produces paralysis rather than liberation. With so many options to choose from, people find it very difficult to choose at all.”

“Opportunity costs subtract from the satisfaction that we get out of what we choose, even when what we choose is terrific. And the more options there are to consider, the more attractive features of these options are going to be reflected by us as opportunity costs.” 

Source: TedTalk: The Paradox of Choice – Barry Schwartz

3. Siloed ecosystems: Incentives against interoperable data sets

As ideal as integrated, interoperable data sets are for the greater good of recommender engine technology, there are greater economic incentives working against them. If we zoom out, the trade-offs made in order to retain certain benefits become clear.

“By creating an ecosystem that minimises external dependencies to nil, or close to nil, the premiums typically accrued through the value chain are traded for internal economies of scale otherwise difficult to emulate. The ability to pass on these cost efficiencies to the customer pricing model is a big part of how these competitors initially attract and later hoard customers. We can imagine that if the pricing and flexibility attracts the customer, then the completely integrated vertical solution keeps them bound through familiarity, convenience and most importantly customer loyalty.”

The likes of Amazon and Google, whose massive data sets help define the core function of their integrated vertical solutions, would utilise recommender engines as a mode of hyper-charging familiarity, convenience and customer loyalty to build value. This naturally incentivises against interoperable data sets, as a means of keeping competitors from diluting the value proposition of these siloed ecosystems.

Source: Empire building by democratising access to tech – 4th Quadrant

4. Exposure: recommender engines

When thinking about exposure to recommender engines, there are three general points of innovation that will contribute to their effective implementation:

  1. algorithm and big data manipulation technologies
  2. data set aggregator/marketplace technologies that will inform the algorithms
  3. secure data technology that enables seamless use of data for recommender engines

While technology is one way to categorise exposure, another is to look at the various models plugging into recommender engine technology:

Tier 1: Tool/Infrastructure providers 

Companies like Amazon, who have played the data aggregator role thus far, have also embarked on providing recommender engines through ML-as-a-service.

Tier 2: Businesses deploying recommender engines 

Stand-alone platforms use the recommendation engine to enhance their core value proposition, capturing the upside available in this technology. Their competitive edge relies on their ability to outperform competitors in the quality of their recommendations.

For example in entertainment/streaming (Netflix, Disney+, Prime Video), music (Spotify, Apple, Google), e-commerce (Shopify, Amazon) and navigation (Citymapper, Moovit).

Tier 3: Bleeding edge of data mining & exchange

In our current landscape of siloed ecosystems and competitive hoarding of users and their data, there is no natural incentive to interoperate data or create a marketplace for its exchange. An alternative route to efficiency could be to shift the locus of data from aggregators to individuals, i.e. from “data is owned by aggregators” to “data is owned by individuals”. There would then be a potential incentive to create a central location/marketplace to exchange data or create data sets – the ideal state of data utilisation for recommender engines.

While this seems far off, there are some technologies already moving us towards achieving individual data ownership and the associated data privacy and security. 

Blockchain/DLT for Digital Identity Management – one of the first endeavours in moving the locus of data ownership to the individual centres on “identity”. Identity acts as a tether to which all other characteristics and attributes can be assigned. Control over our identities in the physical world is often taken for granted, but in the digital landscape it is not as straightforward. Blockchain/DLT technology presents a way to redistribute control over digital identities back to individuals, with enterprises such as IBM (IBM Verify Credentials) and DLT companies like r3 (r3 Corda enterprise DLT) working to build on top of open standards to create decentralised identity solutions.
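At its core, this shift rests on a simple cryptographic primitive: the individual holds a private key and signs claims about themselves, and anyone holding the public key can verify those claims without a central identity provider. A minimal sketch using the Python cryptography library – the claim payload is invented, and real decentralised-identity stacks layer DIDs, credential schemas and revocation on top of this primitive.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The private key never leaves the individual's device; the public key
    # can be shared or anchored (e.g. on a ledger) for others to verify against.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    claim = b'{"name": "alice", "over_18": true}'   # an invented identity claim
    signature = private_key.sign(claim)

    try:
        public_key.verify(signature, claim)         # raises if claim or signature was tampered with
        print("claim verified against the individual's public key")
    except InvalidSignature:
        print("claim rejected")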
