Have AI, got Trust?

Pamela Gupta
Jun 3, 2021


I initially published this on my LinkedIn account…

One important differentiator between AI programs and conventional programs is impact. AI capabilities are becoming more powerful (think exponential, not linear) and more pervasive, so we have to be very cognizant of, and careful about, how these systems are built and used.

• We cannot realize the full potential of AI without building Trust in AI.

• We cannot achieve the intended outcomes without building Trust in AI.

AI has a high potential for pitfalls, primarily because it is taking on a bigger decision-making role in more industries.

With minimal to no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice, without having to answer for how they ensure those programs are not built, consciously or unconsciously, with structural biases.

A recent hearing on Section 230 was an attempt to hold the CEOs of Facebook, Google, and Twitter accountable. At best it was an attempt. What we need are well-informed, concisely laid-out guidelines and expectations. What is Section 230? It is a provision of the 1996 Communications Decency Act that spells out who is legally responsible for content on the internet. It applies to the large social media platforms, but also to any online service, regardless of size. These companies aren't liable for harmful content a user posts on their sites, and the law also gives them the power to remove content they deem objectionable, again without liability, as long as they act in good faith.

We need clearly defined, holistic guidelines and expectations even more critically for AI systems, because these are complex systems and, unlike conventional systems, we cannot simply go back and add components or tweak them to ensure the right outcomes.

AI platforms are large and complex, and they can get out of hand and become impossible to fix. Case in point:

Joaquin Candela heads Facebook’s Responsible AI team. He created an AI-based platform for managing content to support the business objectives of adding viewers and pushing content to them based on their likes and dislikes. One would expect this to be aligned with the business model, so not an issue.

But it evolved into a system that fueled polarization, lies, and hate, and it became impossible to fix.

At Facebook, teams build on an internal machine-learning platform called FBLearner Flow. It allows engineers with little AI experience to train and deploy machine-learning models within days. They use these models to decide whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and is not allowed on the platform). If a model reduces engagement too much, it is discarded; otherwise, it is deployed and continually monitored. Engineers get notifications every few days when metrics such as likes or comments are down, and they then work out what caused the problem and whether any models need retraining. Again, one would expect this to be aligned with the business model, so not an issue.
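
To make the problem concrete, here is a minimal sketch of what an engagement-gated model lifecycle looks like. Everything in it (the Post fields, the engagement threshold, the gate function) is an illustrative assumption of mine, not Facebook’s actual FBLearner Flow API; the point it shows is that the only test a candidate model must pass is “does it keep engagement up?”

```python
"""Minimal sketch of an engagement-gated model lifecycle.

All names here (Post, ENGAGEMENT_FLOOR, gate) are illustrative assumptions,
not Facebook's actual API. Note that nothing in this loop asks whether the
content the model favors is harmful -- only whether engagement holds up.
"""

from dataclasses import dataclass
from statistics import mean

ENGAGEMENT_FLOOR = 0.98  # candidate must retain >= 98% of baseline engagement


@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int


def engagement(model, posts):
    """Average predicted engagement per post under a given ranking model."""
    return mean(model(p) for p in posts)


def gate(candidate, baseline, sample):
    """Deploy the candidate model only if it does not hurt engagement."""
    if engagement(candidate, sample) < ENGAGEMENT_FLOOR * engagement(baseline, sample):
        return "discarded"   # drops engagement too much: thrown away
    return "deployed"        # otherwise shipped and monitored for metric dips


# Toy usage: two scoring functions standing in for trained models.
baseline_model = lambda p: p.likes + p.comments
candidate_model = lambda p: p.likes + 2 * p.comments + 3 * p.shares

sample = [Post("cat photo", 10, 2, 1), Post("outrage bait", 8, 30, 12)]
print(gate(candidate_model, baseline_model, sample))  # -> "deployed"
```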

But this approach did cause issues. Models that maximize engagement also favor controversy, misinformation, and extremism. Why? Because people engage more with extreme content.
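
A tiny, made-up illustration of that dynamic: if divisive posts historically earn higher engagement rates, a ranker that sorts purely by predicted engagement will put them at the top of the feed. The post titles and numbers below are invented for the example; no malice is required, just the objective function.

```python
# Hypothetical feed of (post, predicted engagement rate) pairs -- numbers are made up.
feed = [
    ("local charity drive",       0.011),
    ("balanced policy explainer", 0.014),
    ("outrage-bait conspiracy",   0.052),
]

# Rank purely by predicted engagement, highest first.
ranked = sorted(feed, key=lambda item: item[1], reverse=True)
for post, score in ranked:
    print(f"{score:.3f}  {post}")
# The conspiracy post tops the feed simply because it is what people click,
# share, and argue about the most.
```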

This, in turn, can inflame existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide.

Did Facebook do enough to mitigate the issue? Could it have instituted checks and balances to avoid such outcomes? Yes, and the same is true of other companies.

What we need, at a systemic level, is a shared understanding of and consensus on building proactively with principles that span multiple disciplines, including Security, Privacy, Ethics, Governance, Audit, Transparency, and Empathy.

At Advancing Trust in AI, we are a group of leaders and practitioners from various industries hosting events to highlight areas of concern and risk around AI.

Our mantra: from dialog to action. We want to ensure that, in addition to highlighting issues, we identify existing solutions or help build new ones for Trusted AI.

Join us as an active member, or simply receive information about our call-to-action events. You can email us at advancingtrustinai@gmail.com.

Subscribe to our YouTube channel and see our latest event at https://youtu.be/2jMbvq25NUc

Thank you! I am Pamela Gupta, founder of Advancing Trust in AI. I came to the US with a degree in psychology to pursue a master’s in AI and computer science. I created an AI-based product that was sold to Westinghouse. Subsequently, I went into cybersecurity, where I have created risk-based strategic security initiatives for over 25 years.

A few years ago I realized that while we were talking about the value of AI systems for accomplishing tasks conventional systems couldn’t, there was little to no talk about building these systems securely, with transparency or accountability.

To address this gap, I created a framework that takes a holistic approach to building AI systems with Trust, called AI SPIT: Security, Privacy, Integrity, and Transparency. I am also a co-chair at a NIST-based public-private partnership advising on Security & Privacy for Smart Cities.
