I.

History of Change

More Intelligent AI

Faster Cloud Computing

More Data

More Connectivity

Customers now expect more from their cars.

A personalised journey

Safety and (semi-) autonomous driving

Advanced Driver Assistance Systems (ADAS)

Smartphone integration: the journey starts before entering the car

In-cabin personalisation (infotainment)

Features

VIMA’s groundbreaking technology opens up unprecedented opportunities in behavioural intelligence for connected cars:

  • Understanding of emotional reactions, combined with personality traits.
  • Full-body movement analysis instead of facial expressions only.
  • Truly multimodal analysis (voice tonality, eye gaze, facial micro-expressions) as well as biophysical signals such as heart rate.
  • Continuous monitoring and continuous learning.
II.

Use Cases

Introducing VIMA’s digital chauffeur, an AI system that can accurately and reliably understand a driver’s emotional or physical state in order to:

  • monitor the driver’s condition to make sure they are alert and focused on the task of driving,
  • trigger other types of action, such as suggesting a rest stop or playing a particular artist or genre that matches the person’s mood and personality,
  • adapt the actions of Advanced Driver-Assistance Systems (ADAS) by combining behavioural data with CAN bus data and the dash cam feed.

The more the digital chauffeur accompanies drivers, the more intelligent the agent becomes. This leads to higher accuracy when making decisions, especially in complex driving situations.

With cognitive vehicles, possibilities are endless.

 

A.

Safety: monitor driver behaviour to prevent accidents

Dangerous driver states and behaviours such as fatigue, drunkenness and distraction are leading causes of road accidents.

The digital chauffeur tracks eye gaze, facial expressions and body movement to evaluate the driver’s condition.

If it detects that the driver is not paying attention to the road, the car starts an escalating series of warnings to regain their attention.

The car’s central computer then takes appropriate measures (e.g. alarm, vocal message, visual signal) to enhance the safety of the driver and passengers by telling car occupants in real time how to return to safe driving conditions.
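As a purely illustrative sketch of such an escalation policy (the attention score, thresholds and warning names below are assumptions, not VIMA’s actual logic), the mapping from driver condition to warning could look like this:

    # Illustrative escalation logic; thresholds and warning names are assumptions.
    WARNING_LEVELS = [
        (0.6, "visual_signal"),  # mild inattention: dashboard icon
        (0.4, "vocal_message"),  # sustained inattention: spoken prompt
        (0.2, "alarm"),          # severe inattention: audible alarm
    ]

    def choose_warning(attention_score: float) -> str | None:
        """Map an attention score in [0, 1] to the strongest applicable warning."""
        chosen = None
        for threshold, warning in WARNING_LEVELS:
            if attention_score < threshold:
                chosen = warning  # lower scores fall through to stronger warnings
        return chosen

    print(choose_warning(0.55))  # "visual_signal"
    print(choose_warning(0.15))  # "alarm"

In practice the thresholds would be tuned per driver and combined with how long the inattention has lasted.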

 

B.

Infotainment: in-cabin personalisation

The digital chauffeur will augment your ride experience.

Lowering the music volume when children are falling asleep in the back, adjusting the heating according to the behaviour of the passengers, or adapting the driving style to whoever is in the car: these simple acts of care come naturally to a human driver.

VIMA gives the car the capacity to act in the same natural, human way.

In the background, thousands of features from the video, audio and sensor signals are processed within VIMA’s proprietary deep learning framework.

The data is provided to the car manufacturer or OEM to develop next-generation interfaces.

 

C.

Next-generation Advanced Driver Assistance Systems (ADAS)

Just like the eyes, ears and brain of a human chauffeur, the digital chauffeur uses camera-based machine vision systems, radar-based detection units and sensor-fusion engine control units (ECUs) to adapt how the car is driven.

This depends not only on the external environment (e.g. traffic, weather) and the state of the car (e.g. tyre pressure, gear) but also on the behaviour of the driver and passengers.

The digital chauffeur therefore continuously monitors both the car and the occupants’ behaviour in order to take action in real time.

The digital chauffeur can, for example (see the sketch after this list):

  • adapt the following distance to the vehicle in front, while ADAS is engaged, according to the driver’s preference,
  • adjust the suspension, acceleration or braking dynamics,
  • monitor the attention level required for Level 3 and Level 4 autonomy.
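A minimal sketch of the first of these adaptations, combining the inferred driver state with CAN bus data to set a target following gap (the signal names, scaling factors and data structures below are illustrative assumptions, not a real ADAS interface):

    # Illustrative only: hypothetical signals and scaling, not a real ADAS interface.
    from dataclasses import dataclass

    @dataclass
    class DriverState:          # inferred by the behavioural model
        attention: float        # 0 = distracted, 1 = fully attentive
        stress: float           # 0 = calm, 1 = highly stressed
        preferred_gap_s: float  # learned preferred time gap to the vehicle in front

    @dataclass
    class CanFrame:             # simplified view of relevant CAN bus values
        speed_mps: float
        lead_gap_s: float       # current time gap reported by the radar unit

    def target_gap(driver: DriverState, can: CanFrame) -> float:
        """Blend the driver's preference with a margin based on their current state."""
        gap = driver.preferred_gap_s
        gap += (1.0 - driver.attention) * 1.0  # widen the gap when distracted
        gap += driver.stress * 0.5             # and when stressed
        return max(gap, 1.5)                   # never below a conservative minimum

    state = DriverState(attention=0.7, stress=0.4, preferred_gap_s=1.8)
    frame = CanFrame(speed_mps=27.0, lead_gap_s=1.6)
    print(f"target time gap: {target_gap(state, frame):.1f} s")  # 2.3 s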
III.

VIMA as your trusted partner

With connected cars, more and more data is being generated by cars and their occupants.

Powered by VIMA, more and more real-time information about drivers and passengers can be inferred.

This gives car manufacturers the opportunity to understand customers’ desires, see how they react to products and services, and adapt to their needs and behaviours.

This will unlock unprecedented opportunities for car brands to showcase their unique features while increasing customer satisfaction and road safety and growing market share.

Recently, our research was extensively referenced in an article published in Harvard Business Review:

Leveraging over ten years of research in Switzerland, VIMA’s advanced behavioural intelligence technology allows an immediate and automatic appraisal of people’s traits and skills from a video feed. It gives access to an in-depth understanding of complex behaviours by evaluating 27 interpersonal skills and personality traits.

The time has come to treat every customer in a personalised way based on his or her personality. Every individual is different, and so is the personality profile that VIMA provides after analysing a video feed.

By embedding VIMA’s technology into fleets of connected cars, car manufacturers can understand their customers better, not only at the individual level but also with respect to a norm group, allowing further segmentation depending on personality profiles.

VIMA’s technology is a proprietary combination of speech processing, computer vision and AI that allows for an emotional calibration based on personality, as opposed to “simple” inference based on superficial signs such as facial expressions alone. For example, an enthusiastic person tends to smile more often than an introverted person, which VIMA’s proprietary technology takes into account.
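The smile example can be illustrated with a toy calibration step (the baseline values and formula below are made-up assumptions, not VIMA’s model): rather than reading a raw smile rate directly as joy, the rate is compared with the baseline expected for that personality profile.

    # Toy illustration of personality-based calibration; numbers are made up.
    # Hypothetical baseline smile frequency per profile (smiles per minute).
    BASELINE_SMILE_RATE = {"introverted": 1.0, "average": 2.0, "enthusiastic": 4.0}

    def calibrated_joy(smile_rate: float, profile: str) -> float:
        """Score joy relative to what is typical for this personality profile."""
        return smile_rate / BASELINE_SMILE_RATE[profile]

    # The same observed smile rate means more for an introvert than for an enthusiast.
    print(calibrated_joy(3.0, "introverted"))   # 3.0  -> clearly elevated
    print(calibrated_joy(3.0, "enthusiastic"))  # 0.75 -> below this person's baseline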

Understanding of traits and behaviours

Advanced emotion detection and calibration based on personality

Multimodality: facial micro-expressions, voice tonality, eye gaze, body movement, sensors

Automatic, real-time and scalable solution

Highest scientific validation available

IV.

Technological and Psychological Background

 

Personality traits and emotional states are manifested through verbal and nonverbal behaviours. VIMA measures them objectively using machine detection, combining a live video feed, a microphone and other sensors where available (pressure sensors in the steering wheel, the brake pedal, etc.). This ensures the best possible accuracy and keeps bias to a minimum.
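As a rough sketch of this idea of combining modalities, with extra sensors used only when present (the feature names and dimensions here are illustrative assumptions):

    # Illustrative late-fusion sketch; feature names and dimensions are assumptions.
    import numpy as np

    def fuse_modalities(video_feat: np.ndarray,
                        audio_feat: np.ndarray,
                        sensor_feat: np.ndarray | None = None) -> np.ndarray:
        """Concatenate per-window features; optional sensors are zero-filled if absent."""
        if sensor_feat is None:
            sensor_feat = np.zeros(8)  # placeholder when no steering-wheel/pedal sensors
        return np.concatenate([video_feat, audio_feat, sensor_feat])

    window = fuse_modalities(np.random.rand(2000),  # e.g. encoded face appearance
                             np.random.rand(128),   # e.g. compressed acoustic features
                             None)                  # no extra sensors available
    print(window.shape)                             # (2136,)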

What can be detected?

VIMA’s technology not only captures emotions and traits but also analyses and correlates them, offering a level of accuracy never reached before.

Emotions and traits should be distinguished as follows:

  • an emotion is short but intense and triggered by a specific event. VIMA can detect emotions such as joy, anger, sadness, fear, disgust, surprise, contempt, pride or relief, and can also evaluate more complex states such as stress, arousal and pleasantness.
  • a trait is a more stable internal characteristic that determines an individual’s behaviour across a range of situations. The “Big Five” personality dimensions are openness to experience, conscientiousness, extraversion, agreeableness and neuroticism (OCEAN).

Emotions occur at different levels, such as physiological (e.g. heart rate) and motor-expressive (e.g. a smile). VIMA can quantify emotional responses by measuring these different types of changes.

Personality traits can be inferred from an individual’s pattern of behaviours, attitudes, feelings and habits. Thanks to its unique system, VIMA can determine the traits of drivers…

How can we measure emotions and personalities?

 

A.

Behavioural psychology

Research has shown that people behave in ways that express their emotions and personality traits. It has also shown that people are quick and accurate at inferring what others are thinking and feeling merely by observing them.

These studies converge on the consistent finding that verbal and nonverbal behaviours act as cues to a person’s emotions, motivations and personality traits. These cues can be picked up by others and used to form impressions, even in very brief interactions and with little or no verbal content.

 

B.

Computational technologies and machine learning

Acoustic and visual information has been shown to correlate with the personality traits, skills and behaviours that VIMA analyses. It is therefore at the centre of what we extract and analyse.

For acoustic features, we focus on the following subgroups:

  • acoustic (numeric) such as intonation, intensity, perturbation, etc., and
  • linguistic (symbolic) such as disfluencies, laughter, sighs, etc.

More than 6,300 features are extracted from the raw data samples and compressed into a feature vector per window of analysis.
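A minimal sketch of per-window acoustic feature extraction (the window length, hop and the handful of descriptors below are assumptions; the real pipeline extracts thousands of features per window):

    # Illustrative per-window acoustic descriptors; not VIMA's actual feature set.
    import numpy as np

    def acoustic_features(signal: np.ndarray, sr: int = 16000,
                          win_s: float = 1.0, hop_s: float = 0.5) -> np.ndarray:
        """Return a small feature vector (intensity, variability, zero-crossing rate)
        for each analysis window of the audio signal."""
        win, hop = int(win_s * sr), int(hop_s * sr)
        feats = []
        for start in range(0, len(signal) - win + 1, hop):
            frame = signal[start:start + win]
            rms = np.sqrt(np.mean(frame ** 2))                  # overall intensity
            std = np.std(frame)                                 # amplitude variability
            zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # crude noisiness/pitch cue
            feats.append([rms, std, zcr])
        return np.array(feats)

    audio = np.random.randn(16000 * 5)       # 5 seconds of placeholder audio
    print(acoustic_features(audio).shape)    # (9, 3): 9 windows x 3 descriptors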

For visual features, we have developed a proprietary system:

  1. First, a so-called “encoder” transforms each individual face image (at over 30 frames per second for a video feed) into low-dimensional features. In our neural network (see figure below), this corresponds to the first layers: their role is to encode the appearance of the face in roughly 2,000 features.
  2. Second, a so-called “regressor” predicts skills from these features. This is the output layer of the neural network: it predicts the skills given the encoded appearance (a minimal sketch follows below).
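As a minimal sketch of this encoder/regressor split in PyTorch (the layer sizes are assumptions; the 2,000-dimensional code and the 27 predicted skills come from the description above, while the actual architecture is proprietary):

    # Minimal sketch of the encoder/regressor idea; not the actual VIMA model.
    import torch
    import torch.nn as nn

    class FaceEncoder(nn.Module):
        """First layers: encode a face image into ~2,000 appearance features."""
        def __init__(self, code_dim: int = 2000):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.project = nn.Linear(64 * 4 * 4, code_dim)

        def forward(self, face: torch.Tensor) -> torch.Tensor:
            return self.project(self.features(face).flatten(1))

    class SkillRegressor(nn.Module):
        """Output layer: predict skill/trait scores from the encoded appearance."""
        def __init__(self, code_dim: int = 2000, n_skills: int = 27):
            super().__init__()
            self.head = nn.Linear(code_dim, n_skills)

        def forward(self, code: torch.Tensor) -> torch.Tensor:
            return self.head(code)

    encoder, regressor = FaceEncoder(), SkillRegressor()
    frame = torch.randn(1, 3, 128, 128)      # one face crop from the video feed
    print(regressor(encoder(frame)).shape)   # torch.Size([1, 27])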

 

C.

Behavioural intelligence

By carefully combining state-of-the-art sensing methods (extraction), a computational framework (processing and analysis) and behavioural psychology (which provides the “ground truth”), VIMA takes advantage of people’s verbal and non-verbal behaviours to accurately predict not only their current emotions but their personality traits as well.

In short, for the first time using AI, VIMA captures the behaviour while understanding it.

D.

Cross-cultural modelling

Cross-cultural phenomena are among the most challenging issues in behavioural modelling. To ensure a reliable assessment across different cultures, VIMA’s technology uses a variety of techniques to compensate for cross-cultural differences. Transfer learning is used to adapt predictive models to new cultures even with a limited number of training instances.
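A minimal sketch of this transfer-learning step, reusing the hypothetical encoder/regressor modules from the sketch above: the pretrained encoder is frozen and only the prediction head is fine-tuned on a small set of examples from the new culture (the loop, loss and hyperparameters are assumptions):

    # Illustrative transfer-learning loop; hyperparameters and data are placeholders.
    import torch
    import torch.nn as nn

    def adapt_to_new_culture(encoder: nn.Module, regressor: nn.Module,
                             faces: torch.Tensor, targets: torch.Tensor,
                             epochs: int = 20, lr: float = 1e-3) -> nn.Module:
        """Freeze the pretrained encoder; fine-tune only the regressor head."""
        for p in encoder.parameters():
            p.requires_grad = False
        encoder.eval()
        optimiser = torch.optim.Adam(regressor.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            with torch.no_grad():
                codes = encoder(faces)  # reuse culture-independent appearance features
            loss = loss_fn(regressor(codes), targets)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
        return regressor

    # Hypothetical small dataset from a new culture: 50 face crops, 27 target scores each.
    faces, targets = torch.randn(50, 3, 128, 128), torch.rand(50, 27)
    # regressor = adapt_to_new_culture(FaceEncoder(), SkillRegressor(), faces, targets)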

In parallel, VIMA is working relentlessly to enhance its next-generation multi-cultural engine by developing early and late fusion techniques for acoustic and video cues, culture-independent video and acoustic modelling techniques, etc.

V.

About VIMA

VIMA is a spin-off of the Idiap Research Institute, a world-class AI centre affiliated with the Swiss Federal Institute of Technology EPFL. The company leverages over ten years of research in social computing, behavioural psychology, computer vision and speech processing.

The research has been partially funded by the Swiss innovation agency Innosuisse and has been scientifically validated in high-impact journals for years. Moreover, it has attracted worldwide attention by being referenced in Harvard Business Review.

VIMA has been selected by the IMD Business School for its outstanding technology.

The company is ideally positioned to become the future leader in behavioural intelligence tools, which will be key to unlocking unheard-of opportunities in next-generation human-machine interaction.


Thank you.

Contact

Philippe Labouchère
Business Development Manager
philippe@vima.swiss
Direct +41 27 720 55 22

Vima Link SA
Idiap Research Institute
Rue Marconi 19
1920 Martigny
Switzerland
