Predictive AI – how it got an actress un-hired based on her social media life

It’s hard to be neutral about predictive AI when it is applied to humans. It can be harsh.

Early this year, a film studio and its casting director reversed their decision to hire an actress for a role after a predictive AI review of her social media accounts identified red flags and brand risks associated with the hire.

Predictive AI will be the basis for everything

Predictive AI is becoming increasingly important in a number of areas: in law enforcement to predict future criminality, in lending to predict loan repayment, and among employers and insurers to predict the health, stability and absenteeism risks associated with job candidates or insurance applicants.

A growing number of companies use AI to analyze information about prospective employees on social media – including the pictures they post on Instagram, the words and hashtags they use and their social media connections – to assess their personality and make predictions about their future conduct.

Machines have several advantages over humans: they can collect a large volume of data and analyze it with algorithms designed to look for evidence of acceptable or unacceptable social behavior, and for behavior that carries a reputational risk for brands – an issue that is especially important in the film industry, for governments, and for corporations and NGOs with valuable brands.

Predictive AI will be the basis for everything – we will use it to gauge public sentiment and predict election results before voting so that politicians can change course; to bet on the outcomes of litigation or of proposed laws; to predict patient outcomes in health care; to assess what consumers want so that companies and governments alike can market to those wants; to bet on horse races; and to gauge changing traffic patterns to manage transportation logistics. Every industry will be fundamentally shaped by predictive AI.

In the employment context, predictive AI has a more basic application – in essence, it answers this simple question for an employer (or a film studio in this case): “If we associate with you, will it harm us or help us?” Typically, humans make that assessment for prospective engagements, and rightly or wrongly, they judge based on appearance, apparel, social media, level of education and work experience.

Now machines are making that assessment in a fraction of the time, at a fraction of the cost and with more accuracy. But there are two important differences when a machine, rather than a human, predicts the risk you may pose: the analysis is deeper, and it drags in both present and past social media content.

You vs normal in the target industry

The type of predictive AI used in the employment context varies, but the social media-based tools generally work by evaluating a person’s social media presence against a baseline of what is normal. They scan for the same things a human would, only faster, with more precision and with the ability to make connections a human cannot. Among other things (this is not an exhaustive list), such a tool evaluates the following – a sketch of how these signals might be extracted follows the list:

  • Followers, including their number, their quality, whether they are real, and whether any pose a reputational risk.
  • Engagements such as likes, views, comments and the quality of those engagements.
  • Content in the description of posts including hashtags and repetition of hashtags.
  • Overly sexualized content in photos and hashtags, or sexualized content that is out of proportion to the rest.
  • Use of social media as an open diary.
  • Alcohol in the photos, descriptions and in hashtags.
  • Large mood swings and personality changes from one post to another.
  • Evidence of criminal conduct or of inappropriate content that does not respect the rights of others.
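
To make that concrete, here is a minimal sketch in Python of how a screening tool might reduce a scraped profile to those signals. The Post structure, the flag names and the field names are all hypothetical – they illustrate the shape of the analysis, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical post record -- fields are illustrative, not a real vendor schema.
@dataclass
class Post:
    hashtags: list[str]
    likes: int
    comments: int
    flags: set[str] = field(default_factory=set)  # e.g. {"alcohol", "sexualized", "anger"}

def extract_signals(posts: list[Post], followers: int) -> dict[str, float]:
    """Summarize a profile into the kinds of signals listed above."""
    total = max(len(posts), 1)
    tag_counts: dict[str, int] = {}
    for p in posts:
        for t in p.hashtags:
            tag_counts[t] = tag_counts.get(t, 0) + 1
    return {
        "followers": float(followers),
        "avg_engagement": sum(p.likes + p.comments for p in posts) / total,
        # Proportion of posts carrying each kind of flagged content.
        "alcohol_ratio": sum("alcohol" in p.flags for p in posts) / total,
        "sexualized_ratio": sum("sexualized" in p.flags for p in posts) / total,
        "anger_ratio": sum("anger" in p.flags for p in posts) / total,
        # Repetition of the same hashtag across posts (open-diary / fixation signal).
        "max_tag_repetition": max(tag_counts.values(), default=0) / total,
    }
```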

It’s probably no surprise that two of the most important reputational factors for an employer or film studio reviewing social media pictures are alcohol and overly sexualized content. A propensity to post pictures that are disproportionately sexually suggestive is weighed negatively in most employment contexts. But this is not the case for well-known, prominent people, so it depends upon who is posting the content. The same is true for hashtags that reference excessive alcohol, illegal drugs or criminality, because none of those are the norm.

The key is proportionality and repetition. The program does not give negative weight to a picture of a person wearing a bikini on the beach, or to the occasional night out at the bar; what it factors in is how often such pictures appear relative to all the other pictures the person makes public, and how that ratio compares to the baseline for the industry running the predictive AI.
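
Continuing from the extractor sketched above, that proportionality logic might look like the following – the baseline numbers are invented purely for illustration:

```python
# Illustrative industry baselines: the expected ratio of flagged posts.
# These numbers are invented for the example, not drawn from any real tool.
BASELINES = {
    "film":    {"alcohol": 0.10, "sexualized": 0.15},
    "banking": {"alcohol": 0.05, "sexualized": 0.02},
}

def proportionality_score(signals: dict[str, float], industry: str) -> float:
    """Score only the excess over the industry norm; occasional posts cost nothing."""
    score = 0.0
    for category, norm in BASELINES[industry].items():
        ratio = signals.get(f"{category}_ratio", 0.0)
        # Only the amount *above* the norm is penalized -- one beach photo
        # among a hundred posts never crosses the threshold.
        score += max(0.0, ratio - norm)
    return score

# Usage (hypothetical numbers):
# risk = proportionality_score(extract_signals(posts, followers=1200), "film")
```

The design point is that the score is driven only by the excess over the industry norm, which is why a single bikini photo or the occasional bar night contributes nothing.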

Predictive AI is judging you based on what you show the world

While it may appear that the technology is passing judgement on the person, it really is not.

It analyzes a person purely on their own data, meaning on the pictures they willingly choose to publish and on the words they use to describe themselves and their activities. Another way of putting it is that the technology looks at, and scores you on, only what you have decided is important in life: your lifestyle, how you portray yourself and how you want to be judged by the world.

For example, let’s look at alcohol, an interesting one for profiling. If a person publishes pictures that are disproportionately alcohol-related or hashtagged for alcohol, that person is informing the world of what they value; in their case, the consumption of alcohol is one of those values.

That is in contrast to a person signaling that they value sports, with pictures of skiing, or family, with pictures of time with children or of family dinners at home with parents.

But there is more at play with alcohol that predictive AI tunes into: if the person publishes pictures of themselves consuming alcohol while on the job (as a taxi driver, a server, a pilot), the results change, because consuming alcohol on the job puts third parties at risk, including the employer and its customers.
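
That contextual shift could be modeled as a weighting rule rather than a flat count. The sketch below, reusing the hypothetical Post type from earlier, is a guess at the shape of such logic; the multipliers and the on_the_job flag are assumptions for illustration.

```python
def alcohol_risk(posts: list[Post], safety_critical_job: bool = False) -> float:
    """Weight alcohol posts by context: on-the-job consumption dominates."""
    total = max(len(posts), 1)
    off_duty = sum("alcohol" in p.flags and "on_the_job" not in p.flags for p in posts)
    on_duty = sum("alcohol" in p.flags and "on_the_job" in p.flags for p in posts)
    # Off-duty drinking is a proportionality question; on-duty drinking is a
    # categorical red flag, weighted more heavily for pilots, drivers, servers.
    multiplier = 10.0 if safety_critical_job else 5.0
    return (off_duty + multiplier * on_duty) / total
```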

This is all very tricky, because with this type of predictive AI the goal is to take our human values and program them into a machine that can accurately predict our conduct and assess the risk that future conduct poses to an enterprise or government.

What you post is who you are

If 50% of a person’s pictures hashtag alcohol, it’s indicative. If 50% of a person’s pictures contain some element of sexualized content, it is also indicative. On the other hand, if 50% of a person’s pictures hashtag their new car or new shoes, it’s also indicative of what they value. You get the idea. An employer would typically rather hire a person who posts many pictures of shoes or a car than one who posts many pictures of themselves wasted. But again, it depends upon the industry.

This type of analysis is not new; what is new is the ability to scrape people’s pictures and data from social media in minutes and run them through algorithms that spit out results that accurately predict behavior and assess risk.

The technology has real value in a number of reputation-sensitive applications, such as a brand tied to the image of one person: a politician, or a bank or technology CEO. The technology is also important for insurance companies and lenders.

The technology can also save organizations millions of dollars by avoiding a risky hire. A person who scores high as a result of alcohol-related pictures or hashtags, and who values alcohol consumption, statistically speaking presents a greater risk for everyone, including insurance companies, lenders and employers. According to the Centers for Disease Control, having three or more glasses of wine a few times a week is associated with a greater risk of earlier death, absenteeism at work, higher health care costs, car crashes and, not surprisingly, committing crimes.

Predictive AI did its job

Back to the actress – she scored off the charts on a number of potential risks to the film studio, including a high risk of: harming the brand; using alcohol on set or showing up on set hung over; breaching the terms of her contract; and taking illegal drugs.

Scanning a twelve-month period in 2016, the predictive AI found multiple references (hashtags and descriptions) to, and pictures of, alcohol consumption on the job and many more off the job, a disproportionate number of highly sexualized pictures, and numerous statements expressing anger at various unnamed individuals. In short, the predictive AI’s verdict was a “no” to the hire because of the risks associated with her predicted future conduct.

This type of predictive AI can be harsh because it looks to past conduct evidenced on social media and leaves no room for a person to improve. But I suppose that is the point of using predictive AI – to take the humanity out of it.