Key Takeaways from Uptycs Cybersecurity Standup: Bias in AI

Blog Author
Laura Kenner

Artificial intelligence (AI) is changing the world we live in, but like any tool, its impact depends on how we wield it. This episode of Cybersecurity Standup, presented by Uptycs, was dedicated to unmasking a critical aspect of AI that often goes overlooked—bias.


Special guest George Kamide is a multi-disciplinary thinker with a penchant for challenging established norms and tackling big issues. George's diverse experience ranges from orchestrating go-to-market product strategies to creating thought leadership in dynamic tech environments. As a host of the award-winning podcast "Bare Knuckles and Brass Tacks," George continues to share his insights about complex topics.


The host for this event was Bronwen Hudson, Social Media Manager at Uptycs, who is known for her insightful contributions to the conversation surrounding inclusivity, diversity, and technology. Her role enables her to generate awareness and foster discussions about ethics within the cybersecurity industry. Her dedication to understanding and addressing bias in AI was evident throughout this event.


As the two converse about what bias is and how it intersects with artificial intelligence systems, they build toward an important question: What do we do?


Catch the replay here:



Watch the video for the full experience. It’s a conversation that will challenge your understanding of how popular AI tools, such as ChatGPT and Bard, both influence us and can be influenced by us. It’ll also inspire you to think critically about complex ethical issues such as bias in AI, in technology, and in ourselves.

Here are some key takeaways from this enlightening event that explore the reality of bias in AI, the implications for society, and potential paths forward.


AI bias and its importance

AI bias is a critical issue that deserves our attention.

“When we're talking about machine learning, deep learning, large language models, we really want to make the distinction: Is the bias mechanistic, or is it a question of morality?”
– George Kamide

Bias can occur when machine learning algorithms make decisions or predictions that unjustly favor one group over another. This can perpetuate existing societal biases and injustices, potentially leading to discrimination in crucial areas such as hiring, lending, and law enforcement. George and Bronwen’s conversation sheds light on the significance of addressing bias to create fair and equitable AI systems.

The two point out that there’s nothing particularly new about bias in AI-driven systems. But as generative technologies enter mainstream society, as well as cybersecurity tool and product development, systemic bias needs to be addressed.

Researchers at the National Institute of Standards and Technology (NIST) recommend “widening the scope of where we look for the source of these biases—beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed.”


The introduction of bias into AI systems

An intriguing discussion topic was the introduction of bias into AI systems such as GPT-3. It turns out bias can infiltrate AI systems through the data used to train them: if training data contains biases, the AI system can learn and replicate them. For instance, if an AI model is trained on text sourced from the internet, it could inadvertently acquire the gender stereotypes prevalent in that text. Understanding how bias enters AI systems is essential for developing strategies to mitigate it effectively.
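The mechanism is easy to see in miniature. As a toy illustration (not from the event—the data and function names here are hypothetical), a model that simply memorizes which pronoun co-occurs with which occupation will faithfully reproduce whatever imbalance its training sample contains:

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training sample of (occupation, pronoun) pairs.
training_pairs = (
    [("engineer", "he")] * 9 + [("engineer", "she")] * 1
    + [("nurse", "she")] * 9 + [("nurse", "he")] * 1
)

def fit_counts(pairs):
    """'Train' by counting pronoun co-occurrences per occupation."""
    counts = defaultdict(Counter)
    for occupation, pronoun in pairs:
        counts[occupation][pronoun] += 1
    return counts

def predict(counts, occupation):
    """Predict the pronoun most frequently seen with this occupation."""
    return counts[occupation].most_common(1)[0][0]

model = fit_counts(training_pairs)
print(predict(model, "engineer"))  # prints "he" — the skew in the data becomes the model's "belief"
print(predict(model, "nurse"))     # prints "she"
```

Real language models are vastly more complex, but the principle is the same: nothing in the learning step distinguishes a genuine regularity from a sampling artifact or a societal bias baked into the corpus.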


OpenAI's approach to addressing bias in AI

OpenAI, a leading research organization, has committed to reducing both glaring and subtle biases in how its AI systems, including the widely recognized GPT-3 model, respond to different inputs. It is actively working on improving the default behavior of its AI, permitting users to customize that behavior, and seeking public input on appropriate defaults and hard boundaries. OpenAI's dedication to this task showcases its mission to create AI systems that are fair, responsive, and accountable.


The meaning of 'AI literacy'

AI literacy emerged as a key concept for combating bias. It refers to understanding how AI works, its implications, and how to interact responsibly with this technology. As AI systems become increasingly prevalent, AI literacy becomes increasingly important. It empowers you to question AI system outputs, advocate for fair and ethical practices, and contribute actively to the ongoing conversation about AI ethics and bias. For cybersecurity and information security professionals, George emphasizes that both the design of and familiarity with AI systems will be critical in the coming years.


Individual efforts to combat bias in AI

We all have the power to make a difference in combating AI bias. By increasing our understanding of AI and its implications, we become more adept at challenging system outputs. During the Cybersecurity Standup live event, participants were encouraged to share their experiences and perspectives; they actively engaged in the conversation to foster a more inclusive and unbiased AI landscape.


Corporate contribution to minimizing AI bias

Corporations play a significant role in minimizing AI bias. By investing in training and education, they can enhance AI literacy among all staff to encourage a culture of awareness and responsibility. Implementing policies that promote fair and ethical AI practices is important. Ensuring diversity within all teams brings a variety of perspectives to the design, development, and implementation of AI systems, thus minimizing the potential for biased outcomes. Collectively, these efforts can contribute to the creation of fair and unbiased AI systems within corporate environments.


The concept of 'thought partner' in AI context

AI as a thought partner was another concept explored during the event: using AI as a tool to aid problem solving, brainstorming, and decision making. Rather than replacing human intellect, AI acts as a facilitator, offering new perspectives and helping people get started on complex tasks. Embracing AI as a thought partner encourages a collaborative approach, leveraging the strengths of both humans and machines to arrive at innovative solutions and navigate the complexities of the modern world.


Moving forward in the fight against AI bias

Let's continue to pay attention to bias in AI, raising awareness and advocating for fairer, more ethical systems. If you’re new to the topic, you can head to George Kamide’s human-vetted resource guide.


Connect with Uptycs on LinkedIn to stay informed about future events, webinars, and discussions for the cybersecurity community. Together, let's strive for fair, ethical, and unbiased technologies.

You can also connect with George Kamide on LinkedIn and explore his thought-provoking podcast, "Bare Knuckles and Brass Tacks," for more insights into the world of tech and cybersecurity.