SAS digs deeper into the core tenets of responsible AI
Thu, 8th Apr 2021

What is responsible artificial intelligence (AI)? There is a simple message at the heart of the discussion: those who develop an AI model must be accountable for its development.

The field that became artificial intelligence has roots in the 1940s and gained its name in 1956. For years it was associated with the image of a human-like robot trying to make sense of people in order to help them.

AI has always been much more than this stereotype, though. Its capabilities have grown as new ideas, innovations, and technologies evolved.

SAS is a technology company that has leveraged AI as part of its business for decades.

In 2019, SAS announced a $1 billion investment in AI over a three-year period, building on a broad scope of capabilities ranging from advanced analytics to natural language processing, forecasting, computer vision, model governance, and many areas in between.

It would be easy to jump down rabbit holes exploring what's possible, and that's part of what innovation is about. But SAS strongly believes in the responsible development and implementation of AI: a process that takes into consideration the world's policymakers and regulators and, of course, its customers.

But what exactly is 'the responsible development and implementation of AI'?

According to the Australian Government, there are eight fundamental principles associated with AI design, development, integration, and use.

These principles include accountability, contestability, fairness, human-centred values, human, social and environmental wellbeing, privacy protection and security, reliability and safety, and transparency and explainability.

The principles, whilst voluntary, are designed to complement AI regulations and AI development by helping to achieve better outcomes, encourage ethical decisions, and reduce the risk of negative impacts.

For SAS, responsible AI means developing and using AI through accountability, careful management, fairness, and transparency.

For example, SAS' software features may vary depending on customer needs, but there are some key pillars to AI that remain relevant for every AI build, which the company explains in its How to Take AI Projects from Start to Win ebook.

Common features for the responsible deployment and management of AI and analytics include:

1. Data quality that promotes accountability, fairness, and transparency

SAS data management software enables organisations to:

  • Procure data from numerous potentially disparate data sources
  • Address potential privacy and bias issues in the data
  • Analyse data in real time and on an ongoing basis
  • Assess and improve data quality and completeness
  • Maintain audit records
  • Raise alerts in response to any data quality degradation (a simple sketch of such a check follows this list)
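
As a rough illustration of the last two capabilities, the sketch below (hypothetical Python, not SAS data management software) profiles the completeness of each column in a batch of data and raises an alert when any column degrades below an agreed threshold:

    # Hypothetical illustration, not SAS software: profile per-column completeness
    # and alert when data quality degrades below an agreed threshold.
    import pandas as pd

    QUALITY_THRESHOLD = 0.95  # assumed minimum share of non-missing values

    def completeness_report(df: pd.DataFrame) -> pd.Series:
        # Share of non-missing values per column.
        return df.notna().mean()

    def degraded_columns(df: pd.DataFrame, threshold: float = QUALITY_THRESHOLD) -> list:
        # Columns whose completeness has fallen below the threshold.
        report = completeness_report(df)
        return report[report < threshold].index.tolist()

    batch = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "age": [34, None, 29, None],          # only 50% complete, should trigger an alert
        "country": ["NZ", "AU", "NZ", "AU"],
    })

    alerts = degraded_columns(batch)
    if alerts:
        print(f"ALERT: data quality degradation in columns: {alerts}")

In practice, a check of this kind runs continuously against each incoming batch, with the results written to an audit record.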

2. Model quality and management

SAS Model Manager offers model tracking, validation, auditing, and retraining features. These features help customers manage their AI models in a responsible way.

A centralised model repository offers model lifecycle templates and version control capabilities, providing visibility into an organisation's analytical processes, ensuring complete traceability, and enabling model governance.
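
To make the idea concrete, a bare-bones model repository with versioning and an audit trail could be sketched as follows. This is a hypothetical illustration of the concept in Python, not the SAS Model Manager API:

    # Hypothetical sketch of a centralised model repository with version control
    # and an audit trail; not SAS Model Manager, only an illustration of the idea.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ModelVersion:
        name: str
        version: int
        metrics: dict
        registered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class ModelRegistry:
        def __init__(self):
            self._versions = {}   # model name -> list of ModelVersion
            self.audit_log = []   # human-readable trail of registry events

        def register(self, name, metrics):
            versions = self._versions.setdefault(name, [])
            entry = ModelVersion(name, version=len(versions) + 1, metrics=metrics)
            versions.append(entry)
            self.audit_log.append(
                f"{entry.registered_at.isoformat()} registered {name} v{entry.version} {metrics}"
            )
            return entry

        def latest(self, name):
            return self._versions[name][-1]

    registry = ModelRegistry()
    registry.register("churn_model", {"auc": 0.83})
    registry.register("churn_model", {"auc": 0.86})   # retrained and re-validated version
    print(registry.latest("churn_model").version)      # -> 2
    print(registry.audit_log)

Real model governance adds validation gates, approvals, and deployment status to each version, but the core idea is the same: every model change is versioned and leaves a trace.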

3. Model interpretability

Whether organisations are looking for interpretability on an enterprise or industrial scale, SAS software features explainability technologies such as:

  • LIME – Local Interpretable Model-Agnostic Explanations
  • Shapley values (SAS enhanced)
  • ICE – Individual Conditional Expectation (SAS enhanced)
  • PD – Partial Dependence (SAS enhanced)
  • Explainable surrogate models (illustrated in the sketch below)
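
As a generic illustration of the last technique in that list, the sketch below uses open-source scikit-learn (not SAS's enhanced implementations) to fit an explainable surrogate: a shallow decision tree trained to mimic a more complex model's predictions, yielding human-readable rules:

    # Generic illustration with open-source scikit-learn, not SAS's implementation:
    # a shallow decision tree acts as an explainable surrogate for a "black box" model.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

    # The complex model whose behaviour we want to explain.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # The surrogate is trained on the black box's predictions, not on the true labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how closely the surrogate reproduces the black box's decisions.
    fidelity = surrogate.score(X, black_box.predict(X))
    print(f"Surrogate fidelity: {fidelity:.2f}")
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))

The printed tree gives a simple, auditable approximation of how the complex model behaves, which is the role surrogate models play in explainability.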

With data quality, model management, and interpretability covered, you might be wondering how these technical features improve human understanding of AI.

4. Human-centricity

SAS understands that responsible AI is also about helping customers follow the same good practice. That's why the company offers comprehensive training and certifications on the responsible use of AI across different applications.

SAS Domain Lead for Advanced Analytics, Ray Greenwood, points to a study by SAS, McKinsey, and Intel that found the biggest barrier to AI adoption within organisations is trust. Frontline staff are expected to act on AI output, but if they don't trust it, AI becomes an unused expense that delivers no benefit.

"SAS as a technology provider is investing in the delivery of all of the techniques that can help bring transparency to AI. SAS helps to make it apparent both to the developers - and the ultimate consumers of AI - why a decision was reached, how that decision was reached, and what influenced it. All of that can help with explainability," he says.

Furthermore, the company educates customers and prospects about the technology that underpins AI, but much of the conversation is about AI literacy. Data scientists can also leverage certifications and training.

"The idea is to get data scientists and non-data scientists on the same page in terms of what AI realistically can or can't achieve within the circumstances that AI is going to be put into production. When you do that, you get far more profitable outcomes and you get far more beneficial use of AI because the expectations are aligned with the likely outcome."

Human-centric AI is built into SAS Visual Data Mining and Machine Learning (VDMML), which provides model interpretation reports in plain language, while SAS Visual Investigator captures governance, audit, and compliance details for the humans who triage and manage cases. Interfaces are also customisable, so each SAS customer can keep people in the loop for human oversight and intervention.

No AI solution would be complete without recognising diversity and accessibility. SAS believes that diverse teams are more likely to create solutions that anticipate unfair bias and take steps to avoid or mitigate it. This is why SAS encourages diversity within its own company and also invests in the development of STEM talent.

Software should also be accessible to people with disabilities. SAS has a dedicated accessibility team that trains its R&D staff to incorporate accessibility needs into product development. The SAS Disability Support Center is central to this effort, providing information about the accessibility features of SAS products and training for users with disabilities.

Bringing the core components of responsible AI into any AI project

Data quality, model quality and management, model interpretability, and human-centricity form the baseline for any kind of AI development and deployment.

SAS recommends that organisations start small, then build a repeatable, scalable, and trustworthy approach that will win buy-in for the next AI project.

Read about the four pillars of starting a successful AI program and see examples of how other organisations have taken their AI projects from start to finish - download the How to Take AI Projects from Start to Win ebook now.